Sample records for element computational aspects

  1. On the computational aspects of comminution in discrete element method

    NASA Astrophysics Data System (ADS)

    Chaudry, Mohsin Ali; Wriggers, Peter

    2018-04-01

    In this paper, computational aspects of the crushing/comminution of granular materials are addressed. For crushing, a maximum-tensile-stress-based criterion is used. A crushing model in the discrete element method (DEM) is prone to problems of mass conservation and reduction of the critical time step. The first problem is addressed by using an iterative scheme which, depending on the geometric voids, recovers the mass of a particle. In addition, a global-local framework for the DEM problem is proposed which tends to alleviate the locally unstable motion of particles and increases the computational efficiency.
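
    A minimal sketch of the mass-recovery idea described above (the uniform rescaling rule, the void-size cap, and all names are illustrative assumptions, not the authors' algorithm): fragment radii are rescaled iteratively until the total fragment mass matches the parent mass, subject to a geometric constraint.

        import math

        def recover_mass(parent_radius, fragment_radii, r_void, density=1.0,
                         tol=1e-9, max_iter=50):
            """Rescale fragment radii until total fragment mass matches the
            parent mass; each radius is capped at a hypothetical void size."""
            volume = lambda r: 4.0 / 3.0 * math.pi * r ** 3
            target = density * volume(parent_radius)
            radii = list(fragment_radii)
            for _ in range(max_iter):
                mass = density * sum(volume(r) for r in radii)
                if abs(mass - target) <= tol * target:
                    break
                scale = (target / mass) ** (1.0 / 3.0)  # exact when no radius is capped
                radii = [min(r * scale, r_void) for r in radii]
            return radii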

  2. 3-D modeling of ductile tearing using finite elements: Computational aspects and techniques

    NASA Astrophysics Data System (ADS)

    Gullerud, Arne Stewart

    This research focuses on the development and application of computational tools to perform large-scale, 3-D modeling of ductile tearing in engineering components under quasi-static to mild loading rates. Two standard models for ductile tearing---the computational cell methodology and crack growth controlled by the crack tip opening angle (CTOA)---are described and their 3-D implementations are explored. For the computational cell methodology, quantification of the effects of several numerical issues---computational load step size, procedures for force release after cell deletion, and the porosity for cell deletion---enables construction of computational algorithms to remove the dependence of predicted crack growth on these issues. This work also describes two extensions of the CTOA approach into 3-D: a general 3-D method and a constant front technique. Analyses compare the characteristics of the extensions, and a validation study explores the ability of the constant front extension to predict crack growth in thin aluminum test specimens over a range of specimen geometries, absolute sizes, and levels of out-of-plane constraint. To provide a computational framework suitable for the solution of these problems, this work also describes the parallel implementation of a nonlinear, implicit finite element code. The implementation employs an explicit message-passing approach using the MPI standard to maintain portability, a domain decomposition of element data to provide parallel execution, and a master-worker organization of the computational processes to enhance future extensibility. A linear preconditioned conjugate gradient (LPCG) solver serves as the core of the solution process. The parallel LPCG solver utilizes an element-by-element (EBE) structure of the computations to permit a dual-level decomposition of the element data: domain decomposition of the mesh provides efficient coarse-grain parallel execution, while decomposition of the domains into blocks of similar
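
    The element-by-element structure mentioned above can be sketched as follows (schematic, with assumed names; not the code described in the thesis): the matrix-vector product needed by each conjugate-gradient iteration is formed by looping over element stiffness matrices and gather/scatter index maps, so no global matrix is ever assembled and blocks of elements can be processed in parallel.

        import numpy as np

        def ebe_matvec(element_matrices, element_dofs, x):
            """y = K x computed element by element: gather, multiply, scatter-add.
            element_matrices: list of (ndof_e, ndof_e) arrays
            element_dofs:     list of global DOF index arrays, one per element"""
            y = np.zeros_like(x)
            for ke, dofs in zip(element_matrices, element_dofs):
                y[dofs] += ke @ x[dofs]  # independent per element, hence parallelizable
            return y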

  3. Terminological aspects of data elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strehlow, R.A.; Kenworthey, W.H. Jr.; Schuldt, R.E.

    1991-01-01

    The creation and display of data comprise a process that involves a sequence of steps requiring both semantic and systems analysis. An essential early step in this process is the choice, definition, and naming of data element concepts; it is followed by the specification of other needed data element concept attributes. These attributes and their values remain associated with a data element concept from its birth as a concept to the generic data element that serves as a template for final application. Terminology is therefore centrally important to the entire data creation process. Smooth mapping from natural language to a database is a critical aspect of database design, and consequently it requires terminology standardization from the outset of database work. In this paper the semantic aspects of data elements are analyzed and discussed. Seven kinds of data element concept information are considered, and those that require terminological development and standardization are identified. The four terminological components of a data element are the hierarchical type of a concept, functional dependencies, schemata showing conceptual structures, and definition statements. These constitute the conventional role of terminology in database design. 12 refs., 8 figs., 1 tab.

  4. Element-topology-independent preconditioners for parallel finite element computations

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alexander, Scott

    1992-01-01

    A family of preconditioners for the solution of finite element equations is presented; these preconditioners are element-topology independent and thus applicable to element-order-free parallel computations. A key feature of the present preconditioners is the repeated use of element connectivity matrices and their left and right inverses. The properties and performance of the present preconditioners are demonstrated via beam and two-dimensional finite element matrices for implicit time integration computations.
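
    In the usual notation (my notation, not necessarily the authors'), the element connectivity matrix L_e is the Boolean gather operator relating global and element degrees of freedom, and its transpose acts as a one-sided inverse:

        u_e = L_e u, \qquad K = \sum_e L_e^{T} K_e L_e, \qquad L_e L_e^{T} = I,

    so L_e^T is a right inverse of L_e. Preconditioners built from repeated applications of these operators depend only on the connectivity, not on element topology or ordering.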

  5. On the effects of grid ill-conditioning in three dimensional finite element vector potential magnetostatic field computations

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1990-01-01

    The effects of finite element grid geometries and associated ill-conditioning were studied in single-medium and multi-media (air-iron) three dimensional magnetostatic field computation problems. The sensitivities of these 3D field computations to finite element grid geometries were investigated. It was found that in single-medium applications the unconstrained magnetic vector potential curl-curl formulation in conjunction with first order finite elements produces global results which are almost totally insensitive to grid geometries. However, it was found that in multi-media (air-iron) applications first order finite element results are sensitive to grid geometries and consequent elemental shape ill-conditioning. These sensitivities were almost totally eliminated by using second order finite elements in the field computation algorithms. Practical examples are given to demonstrate the aspects mentioned above.

  6. Update to Computational Aspects of Nitrogen-Rich HEDMs

    DTIC Science & Technology

    2016-04-01

    ARL-TR-7656, April 2016, US Army Research Laboratory. Update to "Computational Aspects of Nitrogen-Rich HEDMs" by Betsy M Rice, Edward FC Byrd, and William D Mattson, Weapons and Materials Research Directorate.

  7. Computation of Asteroid Proper Elements: Recent Advances

    NASA Astrophysics Data System (ADS)

    Knežević, Z.

    2017-12-01

    The recent advances in computation of asteroid proper elements are briefly reviewed. Although not representing real breakthroughs in the computation and stability assessment of proper elements, these advances can still be considered important improvements offering solutions to some practical problems encountered in the past. The problem of unrealistic values of the perihelion frequency for very-low-eccentricity orbits is solved by computing frequencies with the frequency-modified Fourier transform. The synthetic resonant proper elements adjusted to a given secular resonance helped to prove the existence of the Astraea asteroid family. The preliminary assessment of the stability with time of proper elements computed by means of the analytical theory provides a good indication of their poorer performance with respect to their synthetic counterparts, and argues in favor of ceasing their regular maintenance; the final decision should, however, be taken on the basis of a more comprehensive and reliable direct estimate of their individual and sample-average deviations from constancy.

  8. Synchrotron Imaging Computations on the Grid without the Computing Element

    NASA Astrophysics Data System (ADS)

    Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.

    2011-12-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a concept neighbouring that of traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. The instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the latencies that often occurred during the job submission and queuing phases. Moreover the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Although certain core Grid components were avoided, the Grid Security infrastructure was utilised in the final solution.

  9. Nonlinear Finite Element Analysis of Shells with Large Aspect Ratio

    NASA Technical Reports Server (NTRS)

    Chang, T. Y.; Sawamiphakdi, K.

    1984-01-01

    A higher order degenerated shell element with nine nodes was selected for large deformation and post-buckling analysis of thick or thin shells. Elastic-plastic material properties are also included. The post-buckling analysis algorithm is given. Using a square plate, it was demonstrated that the nine-node element does not exhibit shear locking even when its aspect ratio is increased to the order of 10 to the 8th power. Two sample problems are given to illustrate the analysis capability of the shell element.

  10. Parallel computation using boundary elements in solid mechanics

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1990-01-01

    The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming the linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain the analytical results for influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, the parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for a demonstration problem solved on the Sequent Symmetry S81 parallel computing system.
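
    The first of the three parallel components, concurrent formation of the coefficient matrix, can be sketched generically (the row-wise decomposition and the placeholder kernel are my assumptions; a real BEM code integrates kernel functions over each line element):

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def influence_row(i, nodes):
            """Placeholder influence coefficients of all boundary nodes on node i."""
            xi = nodes[i]
            return np.array([1.0 / (1.0 + np.linalg.norm(xi - xj)) for xj in nodes])

        def assemble_parallel(nodes):
            """Each row of the global coefficient matrix is independent, so rows
            are computed concurrently (use a __main__ guard when run as a script)."""
            with ProcessPoolExecutor() as pool:
                rows = pool.map(influence_row, range(len(nodes)), [nodes] * len(nodes))
            return np.vstack(list(rows))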

  11. Mathematical aspects of finite element methods for incompressible viscous flows

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D.

    1986-01-01

    Mathematical aspects of finite element methods are surveyed for incompressible viscous flows, concentrating on the steady primitive-variable formulation. The discretization of a weak formulation of the Navier-Stokes equations is addressed; then the stability condition is considered, the satisfaction of which ensures the stability of the approximation. Specific choices of finite element spaces for the velocity and pressure are then discussed. Finally, the connection between different weak formulations and a variety of boundary conditions is explored.
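
    The stability condition referred to here is the inf-sup (Ladyzhenskaya-Babuska-Brezzi) compatibility condition between the discrete velocity space V_h and pressure space Q_h:

        \inf_{0 \neq q_h \in Q_h} \; \sup_{0 \neq v_h \in V_h}
        \frac{\int_\Omega q_h \, \nabla \cdot v_h \, dx}{\|v_h\|_1 \, \|q_h\|_0}
        \;\geq\; \beta > 0,

    with beta independent of the mesh size h; velocity-pressure pairs violating it produce spurious pressure modes.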

  12. Computation of Asteroid Proper Elements on the Grid

    NASA Astrophysics Data System (ADS)

    Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.

    2009-12-01

    A procedure of gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time consuming computations and make them more efficient is justified by the large increase of observational data expected from the next generation all sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids are derived since the beginning of use of the Grid infrastructure for the purpose. The average time for the catalogs update is significantly shortened with respect to the time needed with stand-alone workstations. We also present basics of the Grid computing, the concepts of Grid middleware and its Workload management system. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for the future work.

  13. Finite element dynamic analysis on CDC STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lambiotte, J. J., Jr.

    1978-01-01

    Computational algorithms are presented for the finite element dynamic analysis of structures on the CDC STAR-100 computer. The spatial behavior is described using higher-order finite elements. The temporal behavior is approximated by using either the central difference explicit scheme or Newmark's implicit scheme. In each case the analysis is broken up into a number of basic macro-operations. Discussion is focused on the organization of the computation and the mode of storage of different arrays to take advantage of the STAR pipeline capability. The potential of the proposed algorithms is discussed and CPU times are given for performing the different macro-operations for a shell modeled by higher order composite shallow shell elements having 80 degrees of freedom.
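
    The explicit branch is the standard central difference update; a minimal sketch assuming a lumped (diagonal) mass matrix, with names of my choosing:

        import numpy as np

        def central_difference(m_diag, K, f, u0, v0, dt, nsteps):
            """Explicit central-difference integration of M u'' + K u = f.
            m_diag: diagonal lumped-mass entries; stability requires a small dt."""
            a0 = (f - K @ u0) / m_diag
            u_prev = u0 - dt * v0 + 0.5 * dt ** 2 * a0   # fictitious step u_{-1}
            u = u0.copy()
            for _ in range(nsteps):
                accel = (f - K @ u) / m_diag
                u_prev, u = u, 2.0 * u - u_prev + dt ** 2 * accel
            return u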

  14. Programmable computing with a single magnetoresistive element

    NASA Astrophysics Data System (ADS)

    Ney, A.; Pampuch, C.; Koch, R.; Ploog, K. H.

    2003-10-01

    The development of transistor-based integrated circuits for modern computing is a story of great success. However, the proved concept for enhancing computational power by continuous miniaturization is approaching its fundamental limits. Alternative approaches consider logic elements that are reconfigurable at run-time to overcome the rigid architecture of the present hardware systems. Implementation of parallel algorithms on such 'chameleon' processors has the potential to yield a dramatic increase of computational speed, competitive with that of supercomputers. Owing to their functional flexibility, 'chameleon' processors can be readily optimized with respect to any computer application. In conventional microprocessors, information must be transferred to a memory to prevent it from getting lost, because electrically processed information is volatile. Therefore the computational performance can be improved if the logic gate is additionally capable of storing the output. Here we describe a simple hardware concept for a programmable logic element that is based on a single magnetic random access memory (MRAM) cell. It combines the inherent advantage of a non-volatile output with flexible functionality which can be selected at run-time to operate as an AND, OR, NAND or NOR gate.
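
    The run-time selectable gate can be mimicked behaviourally in a few lines (an illustrative sketch only; the paper realizes this with a single MRAM cell whose bias and stored state set the function):

        def programmable_gate(a, b, mode):
            """One element acting as AND, OR, NAND or NOR, selected at run time.
            The sum-plus-threshold form loosely mirrors a re-biasable threshold
            element; the mapping is illustrative, not the MRAM physics."""
            threshold = {"AND": 2, "OR": 1, "NAND": 2, "NOR": 1}[mode]
            out = (a + b) >= threshold
            if mode in ("NAND", "NOR"):
                out = not out
            return int(out)

        assert [programmable_gate(1, 1, m)
                for m in ("AND", "OR", "NAND", "NOR")] == [1, 1, 0, 0]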

  15. Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox

    NASA Astrophysics Data System (ADS)

    Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas

    In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is on the performance evaluation of several aspects, with particular emphasis on parallel efficiency. The performance evaluation is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, stressing the particular interest in clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, to the polynomial degree, and to communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speedup on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.
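
    Time-prediction models of this kind typically take a form like the following (a generic sketch in my notation; the paper's actual parameterization may differ):

        T(p) \;\approx\; \frac{c_f \, W(E, N)}{p} \;+\; c_m \, C(p),

    where W(E, N) is the floating-point work as a function of the number of elements E and the polynomial degree N, C(p) is the communication volume on p cores, and c_f, c_m are machine parameters calibrated on the target cluster.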

  16. Optically intraconnected computer employing dynamically reconfigurable holographic optical element

    NASA Technical Reports Server (NTRS)

    Bergman, Larry A. (Inventor)

    1992-01-01

    An optically intraconnected computer and a reconfigurable holographic optical element employed therein. The basic computer comprises a memory for holding a sequence of instructions to be executed; logic for accessing the instructions in sequence; logic for determining, for each instruction, the function to be performed and the effective address thereof; a plurality of individual elements on a common support substrate optimized to perform certain logical sequences employed in executing the instructions; and element selection logic, connected to the function-determining logic, for determining the class of each function and for causing the instruction to be executed by those elements which perform the associated logical sequences in an optimum manner. In the optically intraconnected version, the element selection logic is adapted for transmitting and switching signals to the elements optically.

  17. Computational Modeling for the Flow Over a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Liu, Feng-Jun

    1999-01-01

    The flow over a multi-element airfoil is computed using two two-equation turbulence models. The computations are performed using the INS2D Navier-Stokes code for two angles of attack. Overset grids are used for the three-element airfoil. The computed results are compared with experimental data for the surface pressure, skin friction coefficient, and velocity magnitude. The computed surface quantities generally agree well with the measurements. The computed results reveal the possible existence of a mixing-layer-like region of flow next to the suction surface of the slat for both angles of attack.

  18. Development of non-linear finite element computer code

    NASA Technical Reports Server (NTRS)

    Becker, E. B.; Miller, T.

    1985-01-01

    Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.
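
    "Separable symmetric functions of the principal stretches" refers to a strain-energy density of the Valanis-Landel type (my labeling of the form implied above):

        W(\lambda_1, \lambda_2, \lambda_3) = w(\lambda_1) + w(\lambda_2) + w(\lambda_3),

    where the lambda_i are the principal stretches and w is a single scalar function; the data reduction scheme mentioned above extracts w from experimental data.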

  19. Books and monographs on finite element technology

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1985-01-01

    The present paper provides a listing of all of the English books and some of the foreign books on finite element technology, together with a list of the conference proceedings devoted solely to finite elements. The references are divided into categories. Attention is given to fundamentals, mathematical foundations, structural and solid mechanics applications, fluid mechanics applications, other applied science and engineering applications, computer implementation and software systems, computational and modeling aspects, special topics, boundary element methods, proceedings of symposia and conferences on finite element technology, bibliographies, handbooks, and historical accounts.

  20. Business aspects of cardiovascular computed tomography: tackling the challenges.

    PubMed

    Bateman, Timothy M

    2008-01-01

    The purpose of this article is to provide a comprehensive understanding of the business issues surrounding provision of dedicated cardiovascular computed tomographic imaging. Some of the challenges include high up-front costs, current low utilization relative to scanner capability, and inadequate payments. Cardiovascular computed tomographic imaging is a valuable clinical modality that should be offered by cardiovascular centers-of-excellence. With careful consideration of the business aspects, moderate-to-large size cardiology programs should be able to implement an economically viable cardiovascular computed tomographic service.

  1. A computer program for calculating aerodynamic characteristics of low aspect-ratio wings with partial leading-edge separation

    NASA Technical Reports Server (NTRS)

    Mehrotra, S. C.; Lan, C. E.

    1978-01-01

    The necessary information for using a computer program to predict distributed and total aerodynamic characteristics for low aspect ratio wings with partial leading-edge separation is presented. The flow is assumed to be steady and inviscid. The wing boundary condition is formulated by the Quasi-Vortex-Lattice method. The leading-edge separated vortices are represented by discrete free vortex elements which are aligned with the local velocity vector at their midpoints to satisfy the force-free condition. The wake behind the trailing edge is also force-free. The flow tangency boundary condition is satisfied on the wing, including the leading and trailing edges. The program is restricted to delta wings with zero thickness and no camber. It is written in the FORTRAN language and runs on the CDC 6600 computer.

  2. Computation of scattering matrix elements of large and complex shaped absorbing particles with multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang

    2015-05-01

    Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Due to the absorbing effect, the light scattering properties of particles with absorption differ from those without. Simple-shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large complex-shaped particles has been reported. In this paper, the Surface Integral Equation (SIE) method with the Multilevel Fast Multipole Algorithm (MLFMA) is applied to study the scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangle patches to model the whole surface of the particle; hence computational resource needs increase much more slowly with the particle size parameter than for volume-discretized methods. To further improve its capability, the MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed-memory computer platform. Without loss of generality, we choose the computation of scattering matrix elements of absorbing dust particles as an example. The comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation (DDA) for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex-shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i, and different orientations is studied.

  3. An emulator for minimizing computer resources for finite element analysis

    NASA Technical Reports Server (NTRS)

    Melosh, R.; Utku, S.; Islam, M.; Salama, M.

    1984-01-01

    A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
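
    A generic sketch of the kind of calibrated cost model such an emulator evaluates (the operation categories and coefficients below are hypothetical placeholders, not SCOPE's internals):

        def predict_cost(counts, unit_costs):
            """Predicted resource use = sum of operation counts times unit costs
            calibrated once per machine/analysis-code pair."""
            return sum(counts[k] * unit_costs[k] for k in counts)

        cpu_seconds = predict_cost(
            {"assembly_ops": 5.0e6, "solver_ops": 2.0e8, "io_words": 1.2e6},
            {"assembly_ops": 3e-7, "solver_ops": 8e-8, "io_words": 5e-7})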

  4. Physical aspects of computing the flow of a viscous fluid

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.

    1984-01-01

    One of the main themes in fluid dynamics at present and in the future is going to be computational fluid dynamics with the primary focus on the determination of drag, flow separation, vortex flows, and unsteady flows. A computation of the flow of a viscous fluid requires an understanding and consideration of the physical aspects of the flow. This is done by identifying the flow regimes and the scales of fluid motion, and the sources of vorticity. Discussions of flow regimes deal with conditions of incompressibility, transitional and turbulent flows, Navier-Stokes and non-Navier-Stokes regimes, shock waves, and strain fields. Discussions of the scales of fluid motion consider transitional and turbulent flows, thin- and slender-shear layers, triple- and four-deck regions, viscous-inviscid interactions, shock waves, strain rates, and temporal scales. In addition, the significance and generation of vorticity are discussed. These physical aspects mainly guide computations of the flow of a viscous fluid.

  5. Computational Aspects of Data Assimilation and the ESMF

    NASA Technical Reports Server (NTRS)

    daSilva, A.

    2003-01-01

    Developing advanced data assimilation applications is a daunting scientific challenge. Independently developed components may have incompatible interfaces or may be written in different computer languages. The high-performance computer (HPC) platforms required by numerically intensive Earth system applications are complex, varied, rapidly evolving and multi-part systems themselves. Since the market for high-end platforms is relatively small, there is little robust middleware available to buffer the modeler from the difficulties of HPC programming. To complicate matters further, the collaborations required to develop large Earth system applications often span initiatives, institutions and agencies, involve geoscience, software engineering, and computer science communities, and cross national borders. The Earth System Modeling Framework (ESMF) project is a concerted response to these challenges. Its goal is to increase software reuse, interoperability, ease of use and performance in Earth system models through the use of a common software framework, developed in an open manner by leaders in the modeling community. The ESMF addresses the technical, and to some extent the cultural, aspects of Earth system modeling, laying the groundwork for addressing the more difficult scientific aspects, such as the physical compatibility of components, in the future. In this talk we will discuss the general philosophy and architecture of the ESMF, focusing on those capabilities useful for developing advanced data assimilation applications.

  6. Identity as an Element of Human and Language Universes: Axiological Aspect

    ERIC Educational Resources Information Center

    Zheltukhina, Marina R.; Vikulova, Larisa G.; Serebrennikova, Evgenia F.; Gerasimova, Svetlana A.; Borbotko, Liudmila A.

    2016-01-01

    Interest to axiosphere as the sphere of values and its correlation with the ever-progressive noosphere as sphere of knowledge of a person is due to comprehension of the modern period in the evolution of society. The aim of the article is to describe an axiological aspect of the research of identity as an element of human and language universes.…

  7. Finite element concepts in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    Finite element theory was employed to establish an implicit numerical solution algorithm for the time-averaged unsteady Navier-Stokes equations. Both the multidimensional and a time-split form of the algorithm were considered, the latter of particular interest for problem specification on a regular mesh. A Newton matrix iteration procedure is outlined for solving the resultant nonlinear algebraic equation systems. Multidimensional discretization procedures are discussed with emphasis on automated generation of specific nonuniform solution grids and the treatment of curved surfaces. The time-split algorithm was evaluated with regard to accuracy and convergence properties for hyperbolic equations on rectangular coordinates. An overall assessment of the viability of the finite element concept for computational aerodynamics is made.
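
    The Newton matrix iteration mentioned here is the standard one for a nonlinear algebraic system R(u) = 0 arising from the discretization; a compact sketch with names of my choosing:

        import numpy as np

        def newton(residual, jacobian, u, tol=1e-10, max_iter=25):
            """Solve R(u) = 0 by Newton's method: J(u) du = -R(u), u <- u + du."""
            for _ in range(max_iter):
                r = residual(u)
                if np.linalg.norm(r) < tol:
                    break
                du = np.linalg.solve(jacobian(u), -r)
                u = u + du
            return u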

  8. Computational aspects in mechanical modeling of the articular cartilage tissue.

    PubMed

    Mohammadi, Hadi; Mequanint, Kibret; Herzog, Walter

    2013-04-01

    This review focuses on the modeling of articular cartilage (at the tissue level), chondrocyte mechanobiology (at the cell level) and a combination of both in a multiscale computation scheme. The primary objective is to evaluate the advantages and disadvantages of conventional models implemented to study the mechanics of the articular cartilage tissue and chondrocytes. From monophasic material models as the simplest form to more complicated multiscale theories, these approaches have been frequently used to model articular cartilage and have contributed significantly to modeling joint mechanics, addressing and resolving numerous issues regarding cartilage mechanics and function. It should be noted that attentiveness is important when using different modeling approaches, as the choice of the model limits the applications available. In this review, we discuss the conventional models applicable to some of the mechanical aspects of articular cartilage such as lubrication, swelling pressure and chondrocyte mechanics and address some of the issues associated with the current modeling approaches. We then suggest future pathways for a more realistic modeling strategy as applied for the simulation of the mechanics of the cartilage tissue using multiscale and parallelized finite element method.

  9. A computer graphics program for general finite element analyses

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Sawyer, L. M.

    1978-01-01

    Documentation for a computer graphics program for displays from general finite element analyses is presented. A general description of display options and detailed user instructions are given. Several plots made in structural, thermal and fluid finite element analyses are included to illustrate program options. Sample data files are given to illustrate use of the program.

  10. Some Aspects of Parallel Implementation of the Finite Element Method on Message Passing Architectures

    DTIC Science & Technology

    1988-05-01

    Institute for Advanced Computer Studies and Department of Computer Science, University of Maryland, College Park, MD 20742. We discuss some aspects of the parallel implementation of the finite element method on message passing architectures, and study the performance of the CG and PCG solvers on two model problems.

  11. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using a solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of the adaptive finite element method, validating the application of the new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.
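
    The analytical results used for comparison are classical stress-intensity solutions; for example, for a center crack of half-length a in an effectively infinite plate under remote tension sigma,

        K_I = \sigma \sqrt{\pi a},

    with finite-geometry cases multiplying this by a tabulated correction factor f(a/W).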

  12. The bacteriorhodopsin model membrane system as a prototype molecular computing element.

    PubMed

    Hong, F T

    1986-01-01

    The quest for more sophisticated integrated circuits to overcome the limitation of currently available silicon integrated circuits has led to the proposal of using biological molecules as computational elements by computer scientists and engineers. While the theoretical aspect of this possibility has been pursued by computer scientists, the research and development of experimental prototypes have not been pursued with an equal intensity. In this survey, we make an attempt to examine model membrane systems that incorporate the protein pigment bacteriorhodopsin which is found in Halobacterium halobium. This system was chosen for several reasons. The pigment/membrane system is sufficiently simple and stable for rigorous quantitative study, yet at the same time sufficiently complex in molecular structure to permit alteration of this structure in an attempt to manipulate the photosignal. Several methods of forming the pigment/membrane assembly are described and the potential application to biochip design is discussed. Experimental data using these membranes and measured by a tunable voltage clamp method are presented along with a theoretical analysis based on the Gouy-Chapman diffuse double layer theory to illustrate the usefulness of this approach. It is shown that detailed layouts of the pigment/membrane assembly as well as external loading conditions can modify the time course of the photosignal in a predictable manner. Some problems that may arise in the actual implementation and manufacturing, as well as the use of existing technology in protein chemistry, immunology, and recombinant DNA technology are discussed.

  13. Dental application of novel finite element analysis software for three-dimensional finite element modeling of a dentulous mandible from its computed tomography images.

    PubMed

    Nakamura, Keiko; Tajima, Kiyoshi; Chen, Ker-Kong; Nagamatsu, Yuki; Kakigawa, Hiroshi; Masumi, Shin-ich

    2013-12-01

    This study focused on the application of novel finite-element analysis software for constructing a finite-element model from the computed tomography data of a human dentulous mandible. The finite-element model is necessary for evaluating the mechanical response of the alveolar part of the mandible resulting from occlusal force applied to the teeth during biting. Commercially available, patient-specific, computed tomography-based finite-element analysis software was applied throughout, from the extraction of the computed tomography data to the analysis itself. The mandibular bone with teeth was extracted from the original images. Both the enamel and the dentin were extracted after image processing, and the periodontal ligament was created from the segmented dentin. The constructed finite-element model was reasonably accurate, using a total of 234,644 nodes and 1,268,784 tetrahedral and 40,665 shell elements. The elastic moduli of the heterogeneous mandibular bone were determined from the bone density data of the computed tomography images. The results suggested that the software applied in this study is both useful and powerful for creating a more accurate three-dimensional finite-element model of a dentulous mandible from computed tomography data without the need for any other software.
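
    Mapping CT bone density to heterogeneous elastic moduli is commonly done with an empirical power law; a sketch of that step (the calibration coefficients shown are placeholders, not the values used in this study):

        def hu_to_modulus(hu, rho_slope=0.0008, rho_offset=0.1, a=0.004, b=2.0):
            """CT Hounsfield units -> apparent density (g/cm^3) -> Young's modulus
            (GPa) via a linear HU-density calibration and a power law E = a*rho**b.
            Coefficients are scanner- and study-specific assumptions."""
            rho = max(rho_slope * hu + rho_offset, 0.0)
            return a * rho ** b

        element_moduli = [hu_to_modulus(hu) for hu in (200, 800, 1400)]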

  14. Rad-hard computer elements for space applications

    NASA Technical Reports Server (NTRS)

    Krishnan, G. S.; Longerot, Carl D.; Treece, R. Keith

    1993-01-01

    Space Hardened CMOS computer elements emulating a commercial microcontroller and microprocessor family have been designed, fabricated, qualified, and delivered for a variety of space programs including NASA's multiple launch International Solar-Terrestrial Physics (ISTP) program, Mars Observer, and government and commercial communication satellites. Design techniques and radiation performance of the 1.25 micron feature size products are described.

  15. A locally refined rectangular grid finite element method - Application to computational fluid dynamics and computational physics

    NASA Technical Reports Server (NTRS)

    Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

    1991-01-01

    The present FEM technique addresses both linear and nonlinear boundary value problems encountered in computational physics by handling general three-dimensional regions, boundary conditions, and material properties. The box finite elements used are defined by a Cartesian grid independent of the boundary definition, and local refinements proceed by dividing a given box element into eight subelements. Discretization employs trilinear approximations on the box elements; special element stiffness matrices are included for boxes cut by any boundary surface. Illustrative results are presented for representative aerodynamics problems involving up to 400,000 elements.
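
    The local refinement rule, dividing a box into eight subelements, is a plain octree split; a minimal sketch (data layout assumed):

        def split_box(lo, hi):
            """Split an axis-aligned 3-D box [lo, hi] into its 8 octant subelements."""
            mid = tuple((l + h) / 2.0 for l, h in zip(lo, hi))
            children = []
            for ix in (0, 1):
                for iy in (0, 1):
                    for iz in (0, 1):
                        octant = (ix, iy, iz)
                        clo = tuple(lo[d] if octant[d] == 0 else mid[d] for d in range(3))
                        chi = tuple(mid[d] if octant[d] == 0 else hi[d] for d in range(3))
                        children.append((clo, chi))
            return children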

  16. Adaptation of a program for nonlinear finite element analysis to the CDC STAR 100 computer

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Ogilvie, P. L.

    1978-01-01

    The conversion of a nonlinear finite element program to the CDC STAR 100 pipeline computer is discussed. The program, called DYCAST, was developed for the crash simulation of structures. Initial results with the STAR 100 computer indicated that significant gains in computation time are possible for operations on global arrays. However, for element-level computations that do not lend themselves easily to long vector processing, the STAR 100 was slower than comparable scalar computers. On this basis it is concluded that in order for pipeline computers to impact the economic feasibility of large nonlinear analyses it is absolutely essential that algorithms be devised to improve the efficiency of element-level computations.

  17. Finite element computation on nearest neighbor connected machines

    NASA Technical Reports Server (NTRS)

    Mcaulay, A. D.

    1984-01-01

    Research aimed at faster, more cost-effective parallel machines and algorithms for improving designer productivity with finite element computations is discussed. A set of 8 boards, containing 4 nearest-neighbor-connected arrays of commercially available floating point chips and substantial memory, is inserted into a commercially available machine. One-tenth Mflop (64-bit operation) processors provide an 89% efficiency when solving the equations arising in a finite element problem for a single-variable regular grid of size 40 by 40 by 40. This is approximately 15 to 20 times faster than a much more expensive machine such as a VAX 11/780 used in double precision. The efficiency falls off as faster or more processors are envisaged because communication times become dominant. A novel successive overrelaxation algorithm which uses cyclic reduction in order to permit data transfer and computation to overlap in time is proposed.
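
    The base iteration being overlapped is ordinary successive overrelaxation; a serial reference sweep for a 2-D Poisson problem (the cyclic-reduction reordering that lets transfers overlap computation is not shown):

        import numpy as np

        def sor_sweep(u, f, h, omega=1.8):
            """One SOR sweep of the 5-point stencil for the Poisson equation
            laplacian(u) = f on an interior grid with spacing h."""
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                                 - h * h * f[i, j])
                    u[i, j] += omega * (gs - u[i, j])
            return u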

  18. A computational model of the cognitive impact of decorative elements on the perception of suspense

    NASA Astrophysics Data System (ADS)

    Delatorre, Pablo; León, Carlos; Gervás, Pablo; Palomo-Duarte, Manuel

    2017-10-01

    Suspense is a key narrative issue in terms of emotional gratification, influencing the way in which the audience experiences a story. Virtually all narrative media use suspense as a strategy for reader engagement, regardless of the technology or genre. Being such an important narrative component, computational creativity has tackled suspense in a number of automatic storytelling systems. These systems are mainly based on narrative theories and in general lack a cognitive approach involving the study of empathy or of the emotional impact of the environment. With this idea in mind, this paper reports on a computational model of the influence of decorative elements on suspense. It has been developed as part of a more general proposal for plot generation based on cognitive aspects. In order to test and parameterise the model, an evaluation based on textual stories and an evaluation based on a 3D virtual environment were run. In both cases, results suggest a direct influence of the emotional perception of decorative objects on the suspense of a scene.

  19. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort was to begin the study of the application of hp-version finite elements to the numerical solution of optimal control problems. Under NAG-939, the hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element. One possible drawback is the increased computational effort within each element required in implementing hp-version finite elements. We are trying to determine whether this computational effort is sufficiently offset by the reduction in the number of time elements used and improved Newton-Raphson convergence so as to be useful in solving optimal control problems in real time. Because certain of the element interior unknowns can be eliminated at the element level by solving a small set of nonlinear algebraic equations in which the nodal values are taken as given, the scheme may turn out to be especially powerful in a parallel computing environment. A different processor could be assigned to each element. The number of processors, strictly speaking, is not required to be any larger than the number of sub-regions which are free of discontinuities of any kind.

  20. Learning by statistical cooperation of self-interested neuron-like computing elements.

    PubMed

    Barto, A G

    1985-01-01

    Since the usual approaches to cooperative computation in networks of neuron-like computing elements do not assume that network components have any "preferences", they do not make substantive contact with game theoretic concepts, despite their use of some of the same terminology. In the approach presented here, however, each network component, or adaptive element, is a self-interested agent that prefers some inputs over others and "works" toward obtaining the most highly preferred inputs. Here we describe an adaptive element that is robust enough to learn to cooperate with other elements like itself in order to further its self-interests. It is argued that some of the longstanding problems concerning adaptation and learning by networks might be solvable by this form of cooperativity, and computer simulation experiments are described that show how networks of self-interested components that are sufficiently robust can solve rather difficult learning problems. We then place the approach in its proper historical and theoretical perspective through comparison with a number of related algorithms. A secondary aim of this article is to suggest that beyond what is explicitly illustrated here, there is a wealth of ideas from game theory and allied disciplines such as mathematical economics that can be of use in thinking about cooperative computation in both nervous systems and man-made systems.
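
    A toy version of such a self-interested element (a generic reward-driven stochastic unit in the spirit of the article, not Barto's exact associative reward-penalty rule):

        import random

        class SelfInterestedElement:
            """Binary stochastic unit that shifts its firing probability toward
            actions that brought reward -- a caricature of 'self-interest'."""
            def __init__(self, lr=0.1):
                self.p = 0.5          # probability of emitting action 1
                self.lr = lr

            def act(self):
                self.last = 1 if random.random() < self.p else 0
                return self.last

            def learn(self, reward):
                target = self.last if reward > 0 else 1 - self.last
                self.p += self.lr * (target - self.p)
                self.p = min(max(self.p, 0.01), 0.99)   # keep some exploration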

  1. Computer simulation of functioning of elements of security systems

    NASA Astrophysics Data System (ADS)

    Godovykh, A. V.; Stepanov, B. P.; Sheveleva, A. A.

    2017-01-01

    The article is devoted to the development of an information complex for simulating the functioning of security system elements. The complex is described in terms of its main objectives, design concept, and the interrelation of its main elements. The proposed computer simulation concept makes it possible to simulate security system operation for training security staff under normal and emergency conditions.

  2. A computer program for anisotropic shallow-shell finite elements using symbolic integration

    NASA Technical Reports Server (NTRS)

    Andersen, C. M.; Bowen, J. T.

    1976-01-01

    A FORTRAN computer program for anisotropic shallow-shell finite elements with variable curvature is described. A listing of the program is presented together with printed output for a sample case. Computation times and central memory requirements are given for several different elements. The program is based on a stiffness (displacement) finite-element model in which the fundamental unknowns consist of both the displacement and the rotation components of the reference surface of the shell. Two triangular and four quadrilateral elements are implemented in the program. The triangular elements have 6 or 10 nodes, and the quadrilateral elements have 4 or 8 nodes. Two of the quadrilateral elements have internal degrees of freedom associated with displacement modes which vanish along the edges of the elements (bubble modes). The triangular elements and the remaining two quadrilateral elements do not have bubble modes. The output from the program consists of arrays corresponding to the stiffness, the geometric stiffness, the consistent mass, and the consistent load matrices for individual elements. The integrals required for the generation of these arrays are evaluated by using symbolic (or analytic) integration in conjunction with certain group-theoretic techniques. The analytic expressions for the integrals are exact and were developed using the symbolic and algebraic manipulation language.

  3. Modeling of rolling element bearing mechanics. Computer program user's manual

    NASA Technical Reports Server (NTRS)

    Greenhill, Lyn M.; Merchant, David H.

    1994-01-01

    This report provides the user's manual for the Rolling Element Bearing Analysis System (REBANS) analysis code, which determines the quasistatic response to external loads or displacement of three types of high-speed rolling element bearings: angular contact ball bearings, duplex angular contact ball bearings, and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It comprises two main programs: the Preprocessor for Bearing Analysis (PREBAN), which creates the input files for the main analysis program, and the Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. This report addresses input instructions for and features of the computer codes. A companion report addresses the theoretical basis for the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.

  4. Computational Modeling For The Transitional Flow Over A Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Liu, Feng-Jun; Rumsey, Chris L. (Technical Monitor)

    2000-01-01

    The transitional flow over a multi-element airfoil in a landing configuration is computed using a two-equation transition model. The transition model is predictive in the sense that the transition onset is a result of the calculation and no prior knowledge of the transition location is required. The computations were performed using the INS2D Navier-Stokes code. Overset grids are used for the three-element airfoil. The airfoil operating conditions are varied over a range of angles of attack and for two different Reynolds numbers of 5 million and 9 million. The computed results are compared with experimental data for the surface pressure, skin friction, transition onset location, and velocity magnitude. In general, the comparison shows good agreement with the experimental data.

  5. Applications of Parallel Computation in Micro-Mechanics and Finite Element Method

    NASA Technical Reports Server (NTRS)

    Tan, Hui-Qian

    1996-01-01

    This project discusses the application of parallel computation to material analyses. Briefly speaking, we analyze a material by element computations. We call an element a cell here. A cell is divided into a number of subelements called subcells, and all subcells in a cell have an identical structure. The detailed structure will be given later in this paper. It is obvious that the problem is "well-structured", so a SIMD machine is a natural choice. In this paper we look into the potential of SIMD machines for finite element computation by developing appropriate algorithms on the MasPar, a SIMD parallel machine. In section 2, the architecture of the MasPar is discussed, and a brief review of the parallel programming language MPL is also given. In section 3, some general parallel algorithms which might be useful to the project are proposed, and, in combination with the algorithms, some features of MPL are discussed in more detail. In section 4, the computational structure of the cell/subcell model is given and the idea of designing the parallel algorithm for the model is demonstrated. Finally, in section 5, a summary is given.

  6. Computer Security Systems Enable Access.

    ERIC Educational Resources Information Center

    Riggen, Gary

    1989-01-01

    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  7. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  8. Computed tomography-based finite element analysis to assess fracture risk and osteoporosis treatment

    PubMed Central

    Imai, Kazuhiro

    2015-01-01

    Finite element analysis (FEA) is a computer technique for structural stress analysis developed in engineering mechanics. Over the past 40 years, FEA has been developed to investigate the structural behavior of human bones. As faster computers were acquired, better FEA using 3-dimensional computed tomography (CT) was developed. This CT-based finite element analysis (CT/FEA) has provided clinicians with useful data. In this review, the mechanism of CT/FEA, validation studies of CT/FEA to evaluate accuracy and reliability in human bones, and clinical application studies to assess fracture risk and effects of osteoporosis medication are overviewed. PMID:26309819

  9. A new hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is based on application of transform techniques in conjunction with classical Galerkin schemes and is a hybrid approach. The purpose of this paper is to provide a viable hybrid computational methodology for applicability to general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology successfully provides a viable computational approach and numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.

  10. On Undecidability Aspects of Resilient Computations and Implications to Exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S

    2014-01-01

    Future Exascale computing systems with a large number of processors, memory elements and interconnection links are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in the presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that the membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.

  11. Human-Computer Interaction: A Review of the Research on Its Affective and Social Aspects.

    ERIC Educational Resources Information Center

    Deaudelin, Colette; Dussault, Marc; Brodeur, Monique

    2003-01-01

    Discusses a review of 34 qualitative and non-qualitative studies related to affective and social aspects of student-computer interactions. Highlights include the nature of the human-computer interaction (HCI); the interface, comparing graphic and text types; and the relation between variables linked to HCI, mainly trust, locus of control,…

  12. Incorporating Knowledge of Legal and Ethical Aspects into Computing Curricula of South African Universities

    ERIC Educational Resources Information Center

    Wayman, Ian; Kyobe, Michael

    2012-01-01

    As students in computing disciplines are introduced to modern information technologies, numerous unethical practices also escalate. With the increase in stringent legislation on the use of IT, users of technology could easily be held liable for violations of this legislation. There is, however, a lack of understanding of the social aspects of computing, and…

  13. Parallel fast multipole boundary element method applied to computational homogenization

    NASA Astrophysics Data System (ADS)

    Ptaszny, Jacek

    2018-01-01

    In the present work, a fast multipole boundary element method (FMBEM) and a parallel computer code for a 3D elasticity problem are developed and applied to the computational homogenization of a solid containing spherical voids. The system of equations is solved by using the GMRES iterative solver. The boundary of the body is discretized by using quadrilateral serendipity elements with an adaptive numerical integration. Operations related to a single GMRES iteration, performed by traversing the corresponding tree structure upwards and downwards, are parallelized by using the OpenMP standard. The assignment of tasks to threads is based on the assumption that the tree nodes at which the moment transformations are initialized can be partitioned into disjoint sets of equal or approximately equal size and assigned to the threads. The achieved speedup as a function of the number of threads is examined.
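
    The node-to-thread assignment described above can be sketched as a greedy balanced partition (a minimal illustration under assumed per-node work estimates; the names and cost model are not from the paper):

        import heapq

        def partition_nodes(node_costs, n_threads):
            """Partition tree nodes into disjoint, approximately equal sets.

            node_costs: iterable of (node_id, estimated_work) pairs.
            Returns one list of node ids per thread (longest-processing-time
            greedy heuristic)."""
            heap = [(0.0, t) for t in range(n_threads)]  # (total work, thread)
            heapq.heapify(heap)
            buckets = [[] for _ in range(n_threads)]
            for node, cost in sorted(node_costs, key=lambda nc: -nc[1]):
                work, t = heapq.heappop(heap)            # least-loaded thread
                buckets[t].append(node)
                heapq.heappush(heap, (work + cost, t))
            return buckets

    Each thread can then initialize the moment transformations for its own set of nodes and traverse the tree without write conflicts.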

  14. Finite elements: Theory and application

    NASA Technical Reports Server (NTRS)

    Dwoyer, D. L. (Editor); Hussaini, M. Y. (Editor); Voigt, R. G. (Editor)

    1988-01-01

    Recent advances in FEM techniques and applications are discussed in reviews and reports presented at the ICASE/LaRC workshop held in Hampton, VA in July 1986. Topics addressed include FEM approaches for partial differential equations, mixed FEMs, singular FEMs, FEMs for hyperbolic systems, iterative methods for elliptic finite-element equations on general meshes, mathematical aspects of FEMs for incompressible viscous flows, and gradient weighted moving finite elements in two dimensions. Consideration is given to adaptive flux-corrected FEM transport techniques for CFD, mixed and singular finite elements and the field BEM, p and h-p versions of the FEM, transient analysis methods in computational dynamics, and FEMs for integrated flow/thermal/structural analysis.

  15. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods for two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issues of adaptive finite element methods, and the new methodology is validated by computing demonstration problems and comparing the resulting stress intensity factors to analytical results.

  16. Cause and Cure - Deterioration in Accuracy of CFD Simulations With Use of High-Aspect-Ratio Triangular/Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar

    2017-01-01

    Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions and also to cause numerical instability. Although for certain complex geometries the use of high-aspect-ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where triangular/tetrahedral elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in the use of such high-aspect-ratio triangular/tetrahedral elements is presented here. As will be shown, it turns out that the degree of accuracy deterioration of a gradient computation involving a triangular element hinges on the value of its shape factor Gamma = sin^2(alpha_1) + sin^2(alpha_2) + sin^2(alpha_3), where alpha_1, alpha_2 and alpha_3 are the internal angles of the element. In fact, it is shown that the degree of accuracy deterioration increases monotonically as the value of Gamma decreases monotonically from its maximal value 9/4 (attained by an equilateral triangle only) to a value much less than 1 (associated with a highly obtuse triangle). By taking advantage of the fact that a high-aspect-ratio triangle is not necessarily highly obtuse, and in fact can have a shape factor whose value is close to the maximal value 9/4, a potential solution to avoid accuracy deterioration of gradient computations associated with a high-aspect-ratio triangular grid is given. Also a brief discussion on the extension of the current mathematical framework to the
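
    The shape factor is easy to evaluate; a small sketch (triangle given by its side lengths, angles recovered from the law of cosines) reproduces the values quoted above:

        import math

        def shape_factor(a, b, c):
            """Gamma = sin^2(alpha_1) + sin^2(alpha_2) + sin^2(alpha_3)."""
            alpha_1 = math.acos((b*b + c*c - a*a) / (2*b*c))
            alpha_2 = math.acos((a*a + c*c - b*b) / (2*a*c))
            alpha_3 = math.pi - alpha_1 - alpha_2
            return sum(math.sin(t)**2 for t in (alpha_1, alpha_2, alpha_3))

        print(shape_factor(1.0, 1.0, 1.0))       # equilateral: 9/4 = 2.25
        # A high-aspect-ratio right triangle is not highly obtuse and
        # keeps Gamma close to 2:
        print(shape_factor(1.0, 0.01, math.hypot(1.0, 0.01)))
        # A highly obtuse sliver drives Gamma far below 1:
        print(shape_factor(1.0, 1.0, 1.99))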

  17. A Computational Approach for Automated Posturing of a Human Finite Element Model

    DTIC Science & Technology

    2016-07-01

    July 2016 Memorandum Report by Justin McKee and Adam Sokolow. Only fragments of the abstract survive in the source scan: posture affects protection by influencing the path along which loading is transferred into the body and is a major source of variability, motivating a computational approach for automated posturing of a human finite element model. Keywords: posture, human body, finite element, leg, spine.

  18. Stabilization and discontinuity-capturing parameters for space-time flow computations with finite element and isogeometric discretizations

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Otoguro, Yuto

    2018-04-01

    Stabilized methods, which have been very common in flow computations for many years, typically involve stabilization parameters, and discontinuity-capturing (DC) parameters if the method is supplemented with a DC term. Various well-performing stabilization and DC parameters have been introduced for stabilized space-time (ST) computational methods in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible and compressible flows. These parameters were all originally intended for finite element discretization but are quite often also used for isogeometric discretization. The stabilization and DC parameters we present here for ST computations are in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible flows, target isogeometric discretization, and are also applicable to finite element discretization. The parameters are based on a direction-dependent element length expression. The expression is the outcome of an easy-to-understand derivation. The key components of the derivation are mapping the direction vector from the physical ST element to the parent ST element, accounting for the discretization spacing along each of the parametric coordinates, and mapping what we have in the parent element back to the physical element. The test computations we present for pure-advection cases show that the proposed parameters result in good solution profiles.
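
    The three steps of the derivation can be sketched for a single element (a schematic reading, not the authors' exact expression; the factor 2 assumes a parent element spanning [-1, 1] in each parametric coordinate):

        import numpy as np

        def directional_element_length(J, r):
            """Direction-dependent element length at one evaluation point.

            J: Jacobian of the parent-to-physical mapping.
            r: direction vector in physical space."""
            r = np.asarray(r, dtype=float)
            r_parent = np.linalg.solve(J, r)       # map direction to parent element
            r_parent /= np.linalg.norm(r_parent)   # account for parametric spacing
            return 2.0 * np.linalg.norm(J @ r_parent)  # map back to physical element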

  19. An investigation of the effect of aspect and compression ratios on sediment dispersion using discrete element modelling

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tan, Danielle S.

    2017-12-01

    We use two-dimensional discrete element modelling to simulate a system of sand being released underwater, similar to the process of releasing sediment tailings back into the sea in nodule harvesting. The force model includes concentration-dependent drag, buoyancy, `added mass' and Stokeslet disturbance. For a fixed number of uniform-sized particles, we vary the aspect ratio and the compression ratio of the rectangular mass of granular media pre-release. We observed that the spreading increases nonlinearly with aspect ratio. On the other hand, when the compression ratio is increased, the total spreading increases; however, the spread of the bulk of the sand decreases at small aspect ratios and increases at large aspect ratios. We propose a simple theoretical model for the horizontal spreading which depends on both the aspect and compression ratios.
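
    A per-particle force sketch conveys the flavour of such a model (the hindrance exponent and the omission of the added-mass and Stokeslet-disturbance terms are illustrative assumptions, not the paper's model):

        import numpy as np

        def particle_force(v, u_fluid, phi, d,
                           rho_p=2650.0, rho_f=1025.0, mu=1.07e-3):
            """Submerged weight plus concentration-dependent Stokes drag.

            v: particle velocity, u_fluid: local fluid velocity,
            phi: local solids concentration, d: particle diameter."""
            g = np.array([0.0, -9.81])
            vol = np.pi * d**3 / 6.0
            f_weight = (rho_p - rho_f) * vol * g      # gravity plus buoyancy
            hindrance = (1.0 - phi) ** -4.65          # Richardson-Zaki-type factor
            f_drag = 3.0 * np.pi * mu * d * (u_fluid - v) * hindrance
            return f_weight + f_drag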

  20. Patient-specific finite element modeling of bones.

    PubMed

    Poelert, Sander; Valstar, Edward; Weinans, Harrie; Zadpoor, Amir A

    2013-04-01

    Finite element modeling is an engineering tool for structural analysis that has been used for many years to assess the relationship between load transfer and bone morphology and to optimize the design and fixation of orthopedic implants. Due to recent developments in finite element model generation, for example, improved computed tomography imaging quality, improved segmentation algorithms, and faster computers, the accuracy of finite element modeling has increased vastly and finite element models simulating the anatomy and properties of an individual patient can be constructed. Such so-called patient-specific finite element models are potentially valuable tools for orthopedic surgeons in fracture risk assessment or pre- and intraoperative planning of implant placement. The aim of this article is to provide a critical overview of current themes in patient-specific finite element modeling of bones. In addition, the state-of-the-art in patient-specific modeling of bones is compared with the requirements for a clinically applicable patient-specific finite element method, and judgment is passed on the feasibility of application of patient-specific finite element modeling as a part of clinical orthopedic routine. It is concluded that further developments in certain aspects of patient-specific finite element modeling are needed before finite element modeling can be used as a routine clinical tool.

  1. Aging of the midface bony elements: a three-dimensional computed tomographic study.

    PubMed

    Shaw, Robert B; Kahn, David M

    2007-02-01

    The face loses volume as the soft-tissue structures age. In this study, the authors demonstrate how specific bony aspects of the face change with age in both men and women and what impact this may have on the techniques used in facial cosmetic surgery. Facial bone computed tomographic scans were obtained from 60 Caucasian patients (30 women and 30 men). The authors' study population consisted of 10 male and 10 female subjects in each of three age categories. Each computed tomographic scan underwent three-dimensional reconstruction with volume rendering, and the following measurements were obtained: glabellar angle (maximal prominence of glabella to nasofrontal suture), pyriform angle (nasal bone to lateral inferior pyriform aperture), and maxillary angle (superior to inferior maxilla at the articulation of the inferior maxillary wing and alveolar arch). The pyriform aperture area was also obtained. The t test was used to identify any trends between age groups. The glabellar and maxillary angle in both the male and female subjects showed a significant decrease with increasing age. The pyriform angle did not show a significant change between age groups for either sex. There was a significant increase in pyriform aperture area from the young to the middle age group for both sexes. These results suggest that the bony elements of the midface change dramatically with age and, coupled with soft-tissue changes, lead to the appearance of the aged face.

  2. Experimental and Computational Investigation of Lift-Enhancing Tabs on a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1996-01-01

    An experimental and computational investigation of the effect of lift-enhancing tabs on a two-element airfoil has been conducted. The objective of the study was to develop an understanding of the flow physics associated with lift-enhancing tabs on a multi-element airfoil. An NACA 63(2)-215 ModB airfoil with a 30% chord Fowler flap was tested in the NASA Ames 7- by 10-Foot Wind Tunnel. Lift-enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. A combination of tabs located at the main element and flap trailing edges increased the airfoil lift coefficient by 11% relative to the highest lift coefficient achieved by any baseline configuration at an angle of attack of 0 deg, and C(sub l,max) was increased by 3%. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predicted all of the trends observed in the experimental data quite well. In addition, a simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift-enhancing tabs work. The tabs were modeled by a point vortex at the airfoil or flap trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift-enhancing tabs on a multi-element airfoil. Results of the modeling effort indicate that the dominant effects of the tabs on the pressure distribution of each element of the airfoil can be captured with a potential flow model for cases with no flow separation.
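
    The essence of the analytic model can be sketched in a few lines (vortex strength and placement are free parameters here, not values from the paper): the tab is represented by a point vortex at the trailing edge whose induced velocity perturbs the circulation of each element.

        import numpy as np

        def point_vortex_velocity(gamma, x_vortex, x):
            """2D velocity induced at point x by a point vortex of strength gamma."""
            dx = np.asarray(x, float) - np.asarray(x_vortex, float)
            r2 = dx @ dx
            # tangential speed gamma/(2*pi*r), written in Cartesian components
            return (gamma / (2.0 * np.pi * r2)) * np.array([-dx[1], dx[0]])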

  3. Computational mechanics - Advances and trends; Proceedings of the Session - Future directions of Computational Mechanics of the ASME Winter Annual Meeting, Anaheim, CA, Dec. 7-12, 1986

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Editor)

    1986-01-01

    The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.

  4. A stochastic method for computing hadronic matrix elements

    DOE PAGES

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...

    2014-01-24

    In this study, we present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method and offers more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.

  5. Computational Aspects of the h, p and h-p Versions of the Finite Element Method.

    DTIC Science & Technology

    1987-03-01

    The abstract in the scanned source is garbled; recoverable fragments indicate a study of how the discretization error depends on computational aspects of the h, p and h-p versions of the finite element method, and cite Szabó, B.A., PROBE: Theoretical Manual, NOETIC Tech., as well as a paper presented at the First World Congress on Computational Mechanics.

  6. Effects of Scan Resolutions and Element Sizes on Bovine Vertebral Mechanical Parameters from Quantitative Computed Tomography-Based Finite Element Analysis

    PubMed Central

    Zhang, Meng; Gao, Jiazi; Huang, Xu; Zhang, Min; Liu, Bei

    2017-01-01

    Quantitative computed tomography-based finite element analysis (QCT/FEA) has been developed to predict vertebral strength. However, QCT/FEA models may differ with scan resolutions and element sizes. The aim of this study was to explore the effects of scan resolutions and element sizes on QCT/FEA outcomes. Nine bovine vertebral bodies were scanned using a clinical CT scanner and reconstructed from datasets at two slice thicknesses, namely 0.6 mm (PA resolution) and 1 mm (PB resolution). There were significantly linear correlations between the predicted and measured principal strains (R2 > 0.7, P < 0.0001), and the predicted vertebral strength and stiffness were modestly correlated with the experimental values (R2 > 0.6, P < 0.05). The two resolutions and six element sizes were combined in pairs, and finite element (FE) models of bovine vertebral cancellous bones were obtained for the 12 cases. The mechanical parameters of FE models with the PB resolution were similar to those with the PA resolution. The computational accuracy of FE models with element sizes of 0.41 × 0.41 × 0.6 mm3 and 0.41 × 0.41 × 1 mm3 was higher, as judged by comparison of the apparent elastic modulus and yield strength. Therefore, scan resolution and element size should be chosen optimally to improve the accuracy of QCT/FEA. PMID:29065624

  7. On finite element implementation and computational techniques for constitutive modeling of high temperature composites

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.

    1989-01-01

    The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.
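
    Of the integration schemes mentioned, the implicit one can be sketched for a scalar rate equation (a hypothetical uniaxial Norton-creep law is used purely for illustration; it is not the constitutive model of the report):

        def backward_euler_stress_update(sig_n, deps, dt,
                                         E=200.0e3, A=1.0e-12, n=5,
                                         tol=1.0e-10, max_iter=50):
            """Backward-Euler update for d(sig)/dt = E*(d(eps)/dt - A*sig**n).

            sig_n: stress at the start of the step, deps: total strain
            increment, dt: time step. Assumes sig stays positive."""
            sig = sig_n + E * deps                   # elastic predictor
            for _ in range(max_iter):
                r = sig - sig_n - E * deps + E * A * sig**n * dt   # residual
                dr = 1.0 + E * A * n * sig**(n - 1) * dt           # Jacobian
                d_sig = -r / dr                      # Newton correction
                sig += d_sig
                if abs(d_sig) < tol * max(1.0, abs(sig)):
                    return sig
            raise RuntimeError("constitutive update did not converge")

    An explicit or automatically subincrementing variant replaces the Newton loop with forward-Euler substeps whose size is chosen from an error estimate.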

  8. Computational characterization of chromatin domain boundary-associated genomic elements

    PubMed Central

    Hong, Seungpyo

    2017-01-01

    Topologically associated domains (TADs) are 3D genomic structures with high internal interactions that play important roles in genome compaction and gene regulation. Their genomic locations and their association with CCCTC-binding factor (CTCF)-binding sites and transcription start sites (TSSs) were recently reported. However, the relationship between TADs and other genomic elements has not been systematically evaluated. This was addressed in the present study, with a focus on the enrichment of these genomic elements and their ability to predict the TAD boundary region. We found that consensus CTCF-binding sites were strongly associated with TAD boundaries, as were the transcription factors (TFs) Zinc finger protein 143 (ZNF143) and Yin Yang 1 (YY1). TAD boundary-associated genomic elements include DNase I-hypersensitive sites, H3K36 trimethylation, TSSs, RNA polymerase II, and TFs such as Specificity protein 1, ZNF274 and SIX homeobox 5. Computational modeling with these genomic elements suggests that they have distinct roles in TAD boundary formation. We propose a structural model of TAD boundaries based on these findings that provides a basis for studying the mechanism of chromatin structure formation and gene regulation. PMID:28977568

  9. On a 3-D singularity element for computation of combined mode stress intensities

    NASA Technical Reports Server (NTRS)

    Atluri, S. N.; Kathiresan, K.

    1976-01-01

    A special three-dimensional singularity element is developed for the computation of combined mode 1, 2, and 3 stress intensity factors, which vary along an arbitrarily curved crack front in three-dimensional linear elastic fracture problems. The method employs a displacement-hybrid finite element model, derived from a modified variational principle of potential energy, with arbitrary element interior displacements, interelement boundary displacements, and element boundary tractions as variables. The special crack-front element used in this analysis contains the square-root singularity in strains and stresses, and the stress intensity factors K(1), K(2), and K(3) are quadratically variable along the crack front and are solved for directly along with the unknown nodal displacements.

  10. Four Studies on Aspects of Assessing Computational Performance. Technical Report No. 297.

    ERIC Educational Resources Information Center

    Romberg, Thomas A., Ed.

    The four studies reported in this document deal with aspects of assessing students' performance on computational skills. The first study grew out of a need for an instrument to measure students' speed at recalling addition facts. This had seemed to be a very easy task, but it proved to be much more difficult than anticipated. The second study grew…

  11. Molecular computational elements encode large populations of small objects

    NASA Astrophysics Data System (ADS)

    Prasanna de Silva, A.; James, Mark R.; McKinney, Bernadine O. F.; Pears, David A.; Weir, Sheenagh M.

    2006-10-01

    Since the introduction of molecular computation, experimental molecular computational elements have grown to encompass small-scale integration, arithmetic and games, among others. However, the need for a practical application has been pressing. Here we present molecular computational identification (MCID), a demonstration that molecular logic and computation can be applied to a widely relevant issue. Examples of populations that need encoding in the microscopic world are cells in diagnostics or beads in combinatorial chemistry (tags). Taking advantage of the small size (about 1 nm) and large `on/off' output ratios of molecular logic gates and using the great variety of logic types, input chemical combinations, switching thresholds and even gate arrays in addition to colours, we produce unique identifiers for members of populations of small polymer beads (about 100 μm) used for synthesis of combinatorial libraries. Many millions of distinguishable tags become available. This method should be extensible to far smaller objects, with the only requirement being a `wash and watch' protocol. Our focus on converting molecular science into technology, so far concerned with analog sensors, turns to digital logic devices in the present work.

  12. Molecular computational elements encode large populations of small objects.

    PubMed

    de Silva, A Prasanna; James, Mark R; McKinney, Bernadine O F; Pears, David A; Weir, Sheenagh M

    2006-10-01

    Since the introduction of molecular computation, experimental molecular computational elements have grown to encompass small-scale integration, arithmetic and games, among others. However, the need for a practical application has been pressing. Here we present molecular computational identification (MCID), a demonstration that molecular logic and computation can be applied to a widely relevant issue. Examples of populations that need encoding in the microscopic world are cells in diagnostics or beads in combinatorial chemistry (tags). Taking advantage of the small size (about 1 nm) and large 'on/off' output ratios of molecular logic gates and using the great variety of logic types, input chemical combinations, switching thresholds and even gate arrays in addition to colours, we produce unique identifiers for members of populations of small polymer beads (about 100 μm) used for synthesis of combinatorial libraries. Many millions of distinguishable tags become available. This method should be extensible to far smaller objects, with the only requirement being a 'wash and watch' protocol. Our focus on converting molecular science into technology, so far concerned with analog sensors, turns to digital logic devices in the present work.

  13. Technical aspects of positron emission tomography/computed tomography in radiotherapy treatment planning.

    PubMed

    Scripes, Paola G; Yaparpalvi, Ravindra

    2012-09-01

    The use of functional data in the radiation therapy (RT) treatment planning (RTP) process is currently the focus of significant technical, scientific, and clinical development. Positron emission tomography (PET) using (18)F-fluorodeoxyglucose has been increasingly used in RT planning in recent years. Fluorodeoxyglucose is the most commonly used radiotracer for diagnosis, staging, recurrent disease detection, and monitoring of tumor response to therapy (Lung Cancer 2012;76:344-349; Lung Cancer 2009;64:301-307; J Nucl Med 2008;49:532-540; J Nucl Med 2007;48:58S-67S). All the efforts to improve both PET and computed tomography (CT) image quality and, consequently, lesion detectability share a common objective: to increase the accuracy of functional imaging and thus of coregistration into RT planning systems. In radiotherapy, improvement in target localization permits reduction of tumor margins, consequently reducing the volume of normal tissue irradiated. Furthermore, smaller treated target volumes create the possibility of dose escalation, leading to increased chances of tumor cure and control. This article focuses on the technical aspects of PET/CT image acquisition, fusion, usage, and impact on the physics of RTP. The authors review the basic elements of RTP, modern radiation delivery, and the technical parameters of coregistration of PET/CT into RT computerized planning systems.

  14. Plane stress analysis of wood members using isoparametric finite elements, a computer program

    Treesearch

    Gary D. Gerhardt

    1983-01-01

    A finite element program is presented which computes displacements, strains, and stresses in wood members of arbitrary shape which are subjected to plane strain/stress loading conditions. This report extends a program developed by R. L. Taylor in 1977, by adding both the cubic isoparametric finite element and the capability to analyze nonisotropic materials. The...

  15. Transient Finite Element Computations on a Variable Transputer System

    NASA Technical Reports Server (NTRS)

    Smolinski, Patrick J.; Lapczyk, Ireneusz

    1993-01-01

    A parallel program to analyze transient finite element problems was written and implemented on a system of transputer processors. The program uses the explicit time integration algorithm which eliminates the need for equation solving, making it more suitable for parallel computations. An interprocessor communication scheme was developed for arbitrary two dimensional grid processor configurations. Several 3-D problems were analyzed on a system with a small number of processors.
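
    The explicit update that makes the method solver-free can be sketched per time step (a schematic of central-difference time integration with a lumped mass matrix, not the transputer code itself):

        import numpy as np

        def central_difference_step(u, u_prev, m_lumped, f_ext, f_int, dt):
            """Advance nodal displacements one explicit step.

            Because the lumped mass matrix is diagonal, the accelerations
            follow from an entry-wise division and no equations are solved."""
            a = (f_ext - f_int) / m_lumped            # nodal accelerations
            u_next = 2.0 * u - u_prev + dt * dt * a   # central difference in time
            return u_next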

  16. Computer Program for Steady Transonic Flow over Thin Airfoils by Finite Elements

    DTIC Science & Technology

    1975-10-01

    The report text in the scanned source is garbled; the recoverable front matter indicates that the report was prepared by personnel in the Computational Mechanics Section of Lockheed Missiles & Space Company, Inc., Huntsville Research & Engineering Center, and documents a computer program for steady transonic flow over thin airfoils by finite elements.

  17. Architectural Aspects of Grid Computing and its Global Prospects for E-Science Community

    NASA Astrophysics Data System (ADS)

    Ahmad, Mushtaq

    2008-05-01

    The paper reviews the imminent architectural aspects of Grid Computing for the e-Science community, for scientific research and business/commercial collaboration beyond physical boundaries. Grid Computing provides all the needed facilities: hardware, software, communication interfaces, high-speed internet, safe authentication and a secure environment for collaboration on research projects around the globe. It provides a very fast compute engine for those scientific and engineering research projects and business/commercial applications which are heavily compute intensive and/or require huge amounts of data. It also makes possible the use of very advanced methodologies, simulation models, expert systems and treasures of knowledge available around the globe under the umbrella of knowledge sharing. Thus it helps realize the dream of a global village for the benefit of the e-Science community across the globe.

  18. Hypermatrix scheme for finite element systems on CDC STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Voigt, S. J.

    1975-01-01

    A study is made of the adaptation of the hypermatrix (block matrix) scheme for solving large systems of finite element equations to the CDC STAR-100 computer. Discussion is focused on the organization of the hypermatrix computation using Cholesky decomposition and on the mode of storage of the different submatrices to take advantage of the STAR pipeline (streaming) capability. Consideration is also given to the associated data handling problems and the means of balancing the I/O and CPU times in the solution process. Numerical examples are presented showing the anticipated gain in CPU speed over the CDC 6600 to be obtained by using the proposed algorithms on the STAR computer.
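
    The hypermatrix organization can be sketched as a blocked (right-looking) Cholesky factorization in which every operation touches whole submatrices, the property exploited for pipelined arithmetic and out-of-core storage (block size and names are illustrative):

        import numpy as np

        def block_cholesky(A, nb):
            """In-place blocked Cholesky of a symmetric positive definite A.

            nb: submatrix (hyper-block) size."""
            n = A.shape[0]
            for k in range(0, n, nb):
                ke = min(k + nb, n)
                A[k:ke, k:ke] = np.linalg.cholesky(A[k:ke, k:ke])  # diagonal block
                L_kk = A[k:ke, k:ke]
                if ke < n:
                    # panel solve: L_ik = A_ik * L_kk^{-T}
                    A[ke:, k:ke] = np.linalg.solve(L_kk, A[ke:, k:ke].T).T
                    # trailing update: A_ij -= L_ik * L_jk^T
                    A[ke:, ke:] -= A[ke:, k:ke] @ A[ke:, k:ke].T
            return np.tril(A)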

  19. SYMBMAT: Symbolic computation of quantum transition matrix elements

    NASA Astrophysics Data System (ADS)

    Ciappina, M. F.; Kirchner, T.

    2012-08-01

    We have developed a set of Mathematica notebooks to compute symbolically quantum transition matrices relevant for atomic ionization processes. The utilization of a symbolic language allows us to obtain analytical expressions for the transition matrix elements required in charged-particle and laser induced ionization of atoms. Additionally, by using a few simple commands, it is possible to export these symbolic expressions to standard programming languages, such as Fortran or C, for the subsequent computation of differential cross sections or other observables. One of the main drawbacks in the calculation of transition matrices is the tedious algebraic work required when initial states other than the simple hydrogenic 1s state need to be considered. Using these notebooks the work is dramatically reduced and it is possible to generate exact expressions for a large set of bound states. We present explicit examples of atomic collisions (in First Born Approximation and Distorted Wave Theory) and laser-matter interactions (within the Dipole and Strong Field Approximations and different gauges) using both hydrogenic wavefunctions and Slater-Type Orbitals with arbitrary nlm quantum numbers as initial states. Catalogue identifier: AEMI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 71 628 No. of bytes in distributed program, including test data, etc.: 444 195 Distribution format: tar.gz Programming language: Mathematica Computer: Single machines using Linux or Windows (with cores with any clock speed, cache memory and bits in a word) Operating system: Any OS that supports Mathematica. The notebooks have been tested under Windows and Linux and with versions 6.x, 7.x and 8.x Classification: 2.6 Nature of problem

  20. Finite-element modelling of multilayer X-ray optics.

    PubMed

    Cheng, Xianchao; Zhang, Lin

    2017-05-01

    Multilayer optical elements for hard X-rays are an attractive alternative to crystals whenever high photon flux and moderate energy resolution are required. Prediction of the temperature, strain and stress distribution in the multilayer optics is essential in designing the cooling scheme and optimizing geometrical parameters for multilayer optics. The finite-element analysis (FEA) model of the multilayer optics is a well established tool for doing so. Multilayers used in X-ray optics typically consist of hundreds of periods of two types of materials. The thickness of one period is a few nanometers. Most multilayers are coated on silicon substrates of typical size 60 mm × 60 mm × 100-300 mm. The high aspect ratio between the size of the optics and the thickness of the multilayer (10^7) can lead to a huge number of elements for the finite-element model. For instance, meshing by the size of the layers will require more than 10^16 elements, which is an impossible task for present-day computers. Conversely, meshing by the size of the substrate will produce a too high element shape ratio (element geometry width/height > 10^6), which causes low solution accuracy; and the number of elements is still very large (10^6). In this work, by use of ANSYS layer-functioned elements, a thermal-structural FEA model has been implemented for multilayer X-ray optics. The possible number of layers that can be computed by presently available computers is increased considerably.

  1. Finite-element modelling of multilayer X-ray optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Xianchao; Zhang, Lin

    Multilayer optical elements for hard X-rays are an attractive alternative to crystals whenever high photon flux and moderate energy resolution are required. Prediction of the temperature, strain and stress distribution in the multilayer optics is essential in designing the cooling scheme and optimizing geometrical parameters for multilayer optics. The finite-element analysis (FEA) model of the multilayer optics is a well established tool for doing so. Multilayers used in X-ray optics typically consist of hundreds of periods of two types of materials. The thickness of one period is a few nanometers. Most multilayers are coated on silicon substrates of typical size 60 mm × 60 mm × 100–300 mm. The high aspect ratio between the size of the optics and the thickness of the multilayer (10^7) can lead to a huge number of elements for the finite-element model. For instance, meshing by the size of the layers will require more than 10^16 elements, which is an impossible task for present-day computers. Conversely, meshing by the size of the substrate will produce a too high element shape ratio (element geometry width/height > 10^6), which causes low solution accuracy; and the number of elements is still very large (10^6). In this work, by use of ANSYS layer-functioned elements, a thermal-structural FEA model has been implemented for multilayer X-ray optics. The possible number of layers that can be computed by presently available computers is increased considerably.

  2. Compute Element and Interface Box for the Hazard Detection System

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Khanoyan, Garen; Stern, Ryan A.; Some, Raphael R.; Bailey, Erik S.; Carson, John M.; Vaughan, Geoffrey M.; Werner, Robert A.; Salomon, Phil M.; Martin, Keith E.; et al.

    2013-01-01

    The Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is building a sensor that enables a spacecraft to evaluate autonomously a potential landing area to generate a list of hazardous and safe landing sites. It will also provide navigation inputs relative to those safe sites. The Hazard Detection System Compute Element (HDS-CE) box combines a field-programmable gate array (FPGA) board for sensor integration and timing, with a multicore computer board for processing. The FPGA does system-level timing and data aggregation, and acts as a go-between, removing the real-time requirements from the processor and labeling events with a high resolution time. The processor manages the behavior of the system, controls the instruments connected to the HDS-CE, and services the "heavy lifting" computational requirements for analyzing the potential landing spots.

  3. Optimal mapping of irregular finite element domains to parallel processors

    NASA Technical Reports Server (NTRS)

    Flower, J.; Otto, S.; Salama, M.

    1987-01-01

    The mapping of a solution domain of n finite elements into N subdomains that may be processed in parallel by N processors is optimal if the subdomain decomposition results in a well-balanced workload distribution among the processors. The problem is discussed in the context of irregular finite element domains as an important aspect of the efficient utilization of the capabilities of emerging multiprocessor computers. Finding the optimal mapping is an intractable combinatorial optimization problem, for which a satisfactory approximate solution is obtained here by analogy to a method used in statistical mechanics for simulating the annealing process in solids. The simulated annealing analogy and algorithm are described, and numerical results are given for mapping an irregular two-dimensional finite element domain containing a singularity onto the Hypercube computer.
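
    A minimal sketch of the annealing loop (the move set, cooling schedule and cost function are assumptions; the cost would combine load imbalance with interprocessor communication):

        import math, random

        def anneal_mapping(n_elems, n_procs, cost,
                           T0=1.0, cooling=0.995, steps=20000):
            """Search for a low-cost element-to-processor mapping."""
            mapping = [random.randrange(n_procs) for _ in range(n_elems)]
            energy, T = cost(mapping), T0
            for _ in range(steps):
                e = random.randrange(n_elems)        # move one element
                old = mapping[e]
                mapping[e] = random.randrange(n_procs)
                new_energy = cost(mapping)
                # Metropolis rule: accept improvements always, and uphill
                # moves with a probability that shrinks as T decreases
                if new_energy <= energy or \
                        random.random() < math.exp((energy - new_energy) / T):
                    energy = new_energy
                else:
                    mapping[e] = old                 # undo rejected move
                T *= cooling
            return mapping, energy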

  4. Accuracy of Three Dimensional Solid Finite Elements

    NASA Technical Reports Server (NTRS)

    Case, W. R.; Vandegrift, R. E.

    1984-01-01

    The results of a study to determine the accuracy of the three-dimensional solid elements available in NASTRAN for predicting displacements are presented. Of particular interest in the study is determining how to effectively use solid elements in analyzing thick optical mirrors, as might exist in a large telescope. Surface deformations due to thermal and gravity loading can be significant contributors to the determination of the overall optical quality of a telescope. The study investigates most of the solid elements currently available in either COSMIC or MSC NASTRAN. Error bounds as a function of mesh refinement and element aspect ratios are addressed. It is shown that the MSC solid elements are, in general, more accurate than their COSMIC NASTRAN counterparts due to the specialized numerical integration used. In addition, the MSC elements appear to be more economical to use on the DEC VAX 11/780 computer.

  5. Magnetic Skyrmion as a Nonlinear Resistive Element: A Potential Building Block for Reservoir Computing

    NASA Astrophysics Data System (ADS)

    Prychynenko, Diana; Sitte, Matthias; Litzius, Kai; Krüger, Benjamin; Bourianoff, George; Kläui, Mathias; Sinova, Jairo; Everschor-Sitte, Karin

    2018-01-01

    Inspired by the human brain, there is a strong effort to find alternative models of information processing capable of imitating the high energy efficiency of neuromorphic information processing. One possible realization of cognitive computing involves reservoir computing networks. These networks are built out of nonlinear resistive elements which are recursively connected. We propose that a Skyrmion network embedded in magnetic films may provide a suitable physical implementation for reservoir computing applications. The key ingredient of such a network is a two-terminal device with nonlinear voltage characteristics originating from magnetoresistive effects, such as the anisotropic magnetoresistance or the recently discovered noncollinear magnetoresistance. The most basic element for a reservoir computing network built from "Skyrmion fabrics" is a single Skyrmion embedded in a ferromagnetic ribbon. In order to pave the way towards reservoir computing systems based on Skyrmion fabrics, we simulate and analyze (i) the current flow through a single magnetic Skyrmion due to the anisotropic magnetoresistive effect and (ii) the combined physics of local pinning and the anisotropic magnetoresistive effect.

  6. A coarse-grid-projection acceleration method for finite-element incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Kashefi, Ali; Staples, Anne; FiN Lab Team

    2015-11-01

    Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing part of the computation on a coarsened grid. We apply CGP to pressure projection methods for finite element-based incompressible flow simulations. In this approach, the predicted velocity field data are restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged to the preset fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process. Second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by a Galerkin spectral element method using piecewise linear basis functions. A restriction operator is designed so that fine data are directly injected into the coarse grid. The Laplacian and divergence matrices are derived by taking inner products of coarse grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for CGP-based versus non-CGP computations is presented.
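
    The CGP pressure step itself is only a few lines (a schematic with illustrative operator names; shown here with 1D injection and linear interpolation for nested grids):

        import numpy as np

        def cgp_pressure_step(div_u_star, coarse_poisson_solve, restrict, prolong):
            """One coarse-grid-projection pressure solve.

            div_u_star: divergence of the predicted velocity on the fine grid."""
            rhs_coarse = restrict(div_u_star)            # inject fine data to coarse grid
            p_coarse = coarse_poisson_solve(rhs_coarse)  # cheap coarse-grid solve
            return prolong(p_coarse)                     # interpolate back to fine grid

        # Example transfer operators on nested 1D grids:
        restrict = lambda f: f[::2]
        prolong = lambda p: np.interp(np.arange(2 * len(p) - 1) / 2.0,
                                      np.arange(len(p)), p)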

  7. Control aspects of quantum computing using pure and mixed states.

    PubMed

    Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J

    2012-10-13

    Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. Beyond that, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underlie the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: this is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems.

  8. Control aspects of quantum computing using pure and mixed states

    PubMed Central

    Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J.

    2012-01-01

    Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. Beyond that, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underlie the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: this is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems. PMID:22946034

  9. Contours identification of elements in a cone beam computed tomography for investigating maxillary cysts

    NASA Astrophysics Data System (ADS)

    Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia

    2013-10-01

    Digital processing of two-dimensional cone beam computed tomography slices starts with the identification of the contours of the elements within. This paper presents the collective work of specialists in medicine and in applied mathematics and computer science on the elaboration and implementation of algorithms for dental 2D imagery.

  10. Partitioning strategy for efficient nonlinear finite element dynamic analysis on multiprocessor computers

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1989-01-01

    A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure, which result in a high degree of concurrency throughout the solution process, are: (1) a mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computer.

  11. ElemeNT: a computational tool for detecting core promoter elements.

    PubMed

    Sloutskin, Anna; Danino, Yehuda M; Orenstein, Yaron; Zehavi, Yonathan; Doniger, Tirza; Shamir, Ron; Juven-Gershon, Tamar

    2015-01-01

    Core promoter elements play a pivotal role in the transcriptional output, yet they are often detected manually within sequences of interest. Here, we present 2 contributions to the detection and curation of core promoter elements within given sequences. First, the Elements Navigation Tool (ElemeNT) is a user-friendly web-based, interactive tool for prediction and display of putative core promoter elements and their biologically-relevant combinations. Second, the CORE database summarizes ElemeNT-predicted core promoter elements near CAGE and RNA-seq-defined Drosophila melanogaster transcription start sites (TSSs). ElemeNT's predictions are based on biologically-functional core promoter elements, and can be used to infer core promoter compositions. ElemeNT does not assume prior knowledge of the actual TSS position, and can therefore assist in annotation of any given sequence. These resources, freely accessible at http://lifefaculty.biu.ac.il/gershon-tamar/index.php/resources, facilitate the identification of core promoter elements as active contributors to gene expression.

  12. Experimental and computational investigation of lift-enhancing tabs on a multi-element airfoil

    NASA Technical Reports Server (NTRS)

    Ashby, Dale

    1996-01-01

    An experimental and computational investigation of the effect of lift-enhancing tabs on a two-element airfoil was conducted. The objective of the study was to develop an understanding of the flow physics associated with lift-enhancing tabs on a multi-element airfoil. A NACA 63(2)-215 ModB airfoil with a 30 percent chord Fowler flap was tested in the NASA Ames 7 by 10 foot wind tunnel. Lift-enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predict all of the trends in the experimental data quite well. When the flow over the flap upper surface is attached, tabs mounted at the main element trailing edge (cove tabs) produce very little change in lift. At high flap deflections, however, the flow over the flap is separated and cove tabs produce large increases in lift and corresponding reductions in drag by eliminating the separated flow. Cove tabs permit high flap deflection angles to be achieved and reduce the sensitivity of the airfoil lift to the size of the flap gap. Tabs attached to the flap trailing edge (flap tabs) are effective at increasing lift without significantly increasing drag. A combination of a cove tab and a flap tab increased the airfoil lift coefficient by 11 percent relative to the highest lift coefficient achieved by any baseline configuration at an angle of attack of 0 degrees, and the maximum lift coefficient was increased by more than 3 percent. A simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift-enhancing tabs work. The tabs were modeled by a point vortex at the trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift-enhancing tabs on a multi-element airfoil. Results of the modeling effort indicate that the dominant effects of the tabs on the pressure distribution of each element of the airfoil can be captured with a potential flow model for cases with no flow separation.

  13. Cause and Cure - Deterioration in Accuracy of CFD Simulations with Use of High-Aspect-Ratio Triangular/Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar

    2017-01-01

    Traditionally high-aspect ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as it is known to reduce the accuracy of gradient computations in those regions. Although for certain complex geometries, the use of high-aspect ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE), where simplex elements are the mandatory building blocks. With the requirement of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in use of such high-aspect ratio simplex elements is formulated using two different approaches and presented here. Drawing insights from the analysis, a potential solution to avoid that pitfall is also provided as part of this work. Furthermore, through the use of numerical simulations of practical viscous problems involving high-Reynolds number flows, how the gradient evaluation procedures of the CESE framework can be effectively used to produce accurate and stable results on such high-aspect ratio simplex meshes is also showcased.

  14. STARS: An integrated general-purpose finite element structural, aeroelastic, and aeroservoelastic analysis computer program

    NASA Technical Reports Server (NTRS)

    Gupta, Kajal K.

    1991-01-01

    The details of an integrated general-purpose finite element structural analysis computer program which is also capable of solving complex multidisciplinary problems is presented. Thus, the SOLIDS module of the program possesses an extensive finite element library suitable for modeling most practical problems and is capable of solving statics, vibration, buckling, and dynamic response problems of complex structures, including spinning ones. The aerodynamic module, AERO, enables computation of unsteady aerodynamic forces for both subsonic and supersonic flow for subsequent flutter and divergence analysis of the structure. The associated aeroservoelastic analysis module, ASE, effects aero-structural-control stability analysis yielding frequency responses as well as damping characteristics of the structure. The program is written in standard FORTRAN to run on a wide variety of computers. Extensive graphics, preprocessing, and postprocessing routines are also available pertaining to a number of terminals.

  15. Computations of Disturbance Amplification Behind Isolated Roughness Elements and Comparison with Measurements

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Li, Fei; Bynum, Michael; Kegerise, Michael; King, Rudolph

    2015-01-01

    Computations are performed to study laminar-turbulent transition due to isolated roughness elements in boundary layers at Mach 3.5 and 5.95, with an emphasis on flow configurations for which experimental measurements from low disturbance wind tunnels are available. The Mach 3.5 case corresponds to a roughness element with right-triangle planform with hypotenuse that is inclined at 45 degrees with respect to the oncoming stream, presenting an obstacle with spanwise asymmetry. The Mach 5.95 case corresponds to a circular roughness element along the nozzle wall of the Purdue BAMQT wind tunnel facility. In both cases, the mean flow distortion due to the roughness element is characterized by long-lived streamwise streaks in the roughness wake, which can support instability modes that did not exist in the absence of the roughness element. The linear amplification characteristics of the wake flow are examined towards the eventual goal of developing linear growth correlations for the onset of transition.

  16. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics

    NASA Technical Reports Server (NTRS)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  17. Generic element processor (application to nonlinear analysis)

    NASA Technical Reports Server (NTRS)

    Stanley, Gary

    1989-01-01

    The focus here is on one aspect of the Computational Structural Mechanics (CSM) Testbed: finite element technology. The approach involves a Generic Element Processor: a command-driven, database-oriented software shell that facilitates introduction of new elements into the testbed. This shell features an element-independent corotational capability that upgrades linear elements to geometrically nonlinear analysis, and corrects the rigid-body errors that plague many contemporary plate and shell elements. Specific elements that have been implemented in the Testbed via this mechanism include the Assumed Natural-Coordinate Strain (ANS) shell elements, developed with Professor K. C. Park (University of Colorado, Boulder), a new class of curved hybrid shell elements, developed by Dr. David Kang of LPARL (formerly a student of Professor T. Pian), other shell and solid hybrid elements developed by NASA personnel, and recently a repackaged version of the workhorse shell element used in the traditional STAGS nonlinear shell analysis code. The presentation covers: (1) user and developer interfaces to the generic element processor, (2) an explanation of the built-in corotational option, (3) a description of some of the shell-elements currently implemented, and (4) application to sample nonlinear shell postbuckling problems.

  18. Fiber pushout test: A three-dimensional finite element computational simulation

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Chamis, Christos C.

    1990-01-01

    A fiber pushthrough process was computationally simulated using the three-dimensional finite element method. The interface material is replaced by an anisotropic material with greatly reduced shear modulus in order to simulate the fiber pushthrough process using a linear analysis. Such a procedure is easily implemented and is computationally very effective. It can be used to predict the fiber pushthrough load for a composite system at any temperature. The average interface shear strength obtained from the pushthrough load can easily be separated into its two components: one that comes from frictional stresses and the other that comes from chemical adhesion between the fiber and the matrix, together with the mechanical interlocking that develops due to shrinkage of the composite because of the phase change during processing. Step-by-step procedures are described to perform the computational simulation, to establish bounds on the interfacial bond strength, and to interpret interfacial bond quality.

  19. 2nd International Symposium on Fundamental Aspects of Rare-earth Elements Mining and Separation and Modern Materials Engineering (REES-2015)

    NASA Astrophysics Data System (ADS)

    Tavadyan, Levon, Prof; Sachkov, Viktor, Prof; Godymchuk, Anna, Dr.; Bogdan, Anna

    2016-01-01

    The 2nd International Symposium «Fundamental Aspects of Rare-earth Elements Mining and Separation and Modern Materials Engineering» (REES2015) was jointly organized by Tomsk State University (Russia), National Academy of Science (Armenia), Shenyang Polytechnic University (China), Moscow Institute of Physics and Engineering (Russia), Siberian Physical-technical Institute (Russia), and Tomsk Polytechnic University (Russia) on September 7-15, 2015, in Belokuriha, Russia. The Symposium featured high-quality presentations and gathered engineers, scientists, academicians, and young researchers working in the field of rare and rare earth elements mining, modification, separation, elaboration and application, in order to facilitate the aggregation and sharing of interests and results for better collaboration and activity visibility. The goal of REES2015 was to bring researchers and practitioners together to share the latest knowledge on rare and rare earth elements technologies. The Symposium was aimed at presenting new trends in rare and rare earth elements mining, research and separation and recent achievements in advanced materials elaboration and development for different purposes, as well as strengthening the already existing contacts between manufacturers, highly qualified specialists and young scientists. The topics of REES2015 were: (1) Problems of extraction and separation of rare and rare earth elements; (2) Methods and approaches to the separation and isolation of rare and rare earth elements with ultra-high purity; (3) Industrial technologies of production and separation of rare and rare earth elements; (4) Economic aspects in technology of rare and rare earth elements; and (5) Rare and rare earth based materials (application in metallurgy, catalysis, medicine, optoelectronics, etc.). We want to thank the Organizing Committee, the Universities and Sponsors supporting the Symposium, and everyone who contributed to the organization of the event and to

  20. Computing element evolution towards Exascale and its impact on legacy simulation codes

    NASA Astrophysics Data System (ADS)

    Colin de Verdière, Guillaume J. L.

    2015-12-01

    In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of next generations of supercomputers. The market analysis, underlying this work, shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes or programming methods. The problems of dissipated power and memory access are discussed and will lead to a vision of what should be an exascale system. To survive, programming languages had to respond to the hardware evolutions either by evolving or with the creation of new ones. From the previous elements, we elaborate why vectorization, multithreading, data locality awareness and hybrid programming will be the key to reach the exascale, implying that it is time to start rewriting codes.

  1. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters, such as winding flux linkages and voltages; average, cogging, and ripple torques; stator core flux densities; core losses; efficiencies; and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow
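
    For readers unfamiliar with the optimizer named here, a minimal DE/rand/1/bin sketch in Python follows; it is a generic single-objective illustration, not the dissertation's multi-objective implementation, and `objective` and `bounds` are hypothetical placeholders:

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=40, F=0.7, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimizer (generic sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)   # random initial population
    cost = np.array([objective(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct donors, none equal to the target vector i
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True              # at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = objective(trial)
            if f_trial <= cost[i]:                       # greedy selection
                pop[i], cost[i] = trial, f_trial
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```

    A multi-objective variant of the kind described would replace the greedy scalar comparison with Pareto-dominance selection; the expensive call is `objective`, which in the dissertation's setting wraps the finite element solver.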

  2. Topology optimization aided structural design: Interpretation, computational aspects and 3D printing.

    PubMed

    Kazakis, Georgios; Kanellopoulos, Ioannis; Sotiropoulos, Stefanos; Lagaros, Nikos D

    2017-10-01

    The construction industry has a major impact on the environment in which we spend most of our lives. Therefore, it is important that the outcome of architectural intuition performs well and complies with the design requirements. Architects usually describe as "optimal design" their choice among a rather limited set of design alternatives, dictated by their experience and intuition. However, modern design of structures requires accounting for a great number of criteria derived from multiple disciplines, often of conflicting nature. Such criteria derive from structural engineering, eco-design, bioclimatic and acoustic performance. The resulting vast number of alternatives enhances the need for computer-aided architecture in order to increase the possibility of arriving at a more preferable solution. Therefore, the incorporation of smart, automatic tools in the design process, able to further guide the designer's intuition, becomes even more indispensable. The principal aim of this study is to present possibilities for integrating automatic computational techniques related to topology optimization into the intuition phase of civil structure design, as part of computer-aided architectural design. In this direction, different aspects of a new computer-aided architectural era, related to the interpretation of the optimized designs, the difficulties resulting from the increased computational effort, and 3D printing capabilities, are covered herein.
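
    As background for the interpretation issues mentioned, density-based topology optimization is commonly posed as compliance minimization under a volume constraint; a standard SIMP statement follows (one common formulation, not necessarily the exact one used by the authors):

    $$ \min_{\mathbf{x}}\; c(\mathbf{x}) = \mathbf{F}^{T}\mathbf{U}(\mathbf{x}) \quad \text{s.t.}\quad \mathbf{K}(\mathbf{x})\,\mathbf{U}=\mathbf{F},\;\; \frac{V(\mathbf{x})}{V_{0}} \le f,\;\; 0 < x_{\min} \le x_{e} \le 1, $$

    with the element stiffness interpolated as $E_e(x_e)=E_{\min}+x_e^{p}\,(E_0-E_{\min})$, typically with penalization $p\approx 3$ so that intermediate densities are driven toward 0/1 and the result can be interpreted (and 3D printed) as a material layout.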

  3. Flutter: A finite element program for aerodynamic instability analysis of general shells of revolution with thermal prestress

    NASA Technical Reports Server (NTRS)

    Fallon, D. J.; Thornton, E. A.

    1983-01-01

    Documentation for the computer program FLUTTER is presented. The theory of aerodynamic instability with thermal prestress is discussed. Theoretical aspects of the finite element matrices required in the aerodynamic instability analysis are also discussed. General organization of the computer program is explained, and instructions are then presented for the execution of the program.

  4. Computational aspects in high intensity ultrasonic surgery planning.

    PubMed

    Pulkkinen, A; Hynynen, K

    2010-01-01

    Therapeutic ultrasound treatment planning is discussed and computational aspects regarding it are reviewed. Nonlinear ultrasound simulations were solved with a combined frequency-domain Rayleigh and KZK model. Ultrasonic simulations were combined with thermal simulations and were used to compute heating of muscle tissue in vivo for four different focused ultrasound transducers. The simulations were compared with measurements and good agreement was found for large F-number transducers. However, at F# 1.9 the simulated rate of temperature rise was approximately a factor of 2 higher than the measured one. The power levels used with the F# 1 transducer were too low to show any nonlinearity. The simulations were used to investigate the importance of nonlinearities generated in the coupling water, and also the importance of including skin in the simulations. Ignoring either of these in the model would lead to larger errors. Most notably, the nonlinearities generated in the water can enhance the focal temperature by more than 100%. The simulations also demonstrated that pulsed high-power sonications may provide an opportunity to significantly (up to a factor of 3) reduce the treatment time. In conclusion, nonlinear propagation can play an important role in shaping the energy distribution during a focused ultrasound treatment and should not be ignored in planning. However, the current simulation methods are accurate only with relatively large F-numbers, and better models need to be developed for sharply focused transducers.

  5. Probabilistic Finite Elements (PFEM): structural dynamics and fracture mechanics

    NASA Technical Reports Server (NTRS)

    Liu, Wing-Kam; Belytschko, Ted; Mani, A.; Besterfield, G.

    1989-01-01

    The purpose of this work is to develop computationally efficient methodologies for assessing the effects of randomness in loads, material properties, and other aspects of a problem by a finite element analysis. The resulting group of methods is called probabilistic finite elements (PFEM). The overall objective of this work is to develop methodologies whereby the lifetime of a component can be predicted, accounting for the variability in the material and geometry of the component, the loads, and other aspects of the environment; and the range of response expected in a particular scenario can be presented to the analyst in addition to the response itself. Emphasis has been placed on methods which are not statistical in character; that is, they do not involve Monte Carlo simulations. The reason for this choice of direction is that Monte Carlo simulations of complex nonlinear response require a tremendous amount of computation. The focus of efforts so far has been on nonlinear structural dynamics. However, in the continuation of this project, emphasis will be shifted to probabilistic fracture mechanics so that the effect of randomness in crack geometry and material properties can be studied interactively with the effect of random load and environment.

  6. A boundary element alternating method for two-dimensional mixed-mode fracture problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Krishnamurthy, T.

    1992-01-01

    A boundary element alternating method, denoted herein as BEAM, is presented for two-dimensional fracture problems. This is an iterative method which alternates between two solutions. An analytical solution for arbitrary polynomial normal and tangential pressure distributions applied to the crack faces of an embedded crack in an infinite plate is used as the fundamental solution in the alternating method. A boundary element method for an uncracked finite plate is the second solution. For problems of edge cracks, a technique of utilizing finite elements with BEAM is presented to overcome the inherent singularity in boundary element stress calculation near the boundaries. Several computational aspects that make the algorithm efficient are presented. Finally, the BEAM is applied to a variety of two-dimensional crack problems with different configurations and loadings to assess the validity of the method. The method gives accurate stress intensity factors with minimal computing effort.
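
    The alternating structure described above reduces to a simple superposition loop; the skeleton below is a hedged Python sketch in which `bem_solve`, `crack_line_traction`, `infinite_plate_crack`, and `superpose` are hypothetical callables standing in for the two solutions and their bookkeeping:

```python
def beam_alternating(bem_solve, crack_line_traction, infinite_plate_crack,
                     superpose, external_loads, tol=1e-6, max_iter=50):
    """Skeleton of the alternating iteration (all callables hypothetical).

    bem_solve(loads)          -> BEM solution of the *uncracked* finite plate
    crack_line_traction(sol)  -> tractions that solution leaves on the crack line
    infinite_plate_crack(t)   -> analytic solution with crack faces loaded by -t
    superpose(a, b)           -> running superposition of partial solutions
    """
    total = bem_solve(external_loads)
    residual = crack_line_traction(total)
    for _ in range(max_iter):
        if max(abs(r) for r in residual) < tol:
            break                              # crack faces are now traction-free
        analytic = infinite_plate_crack(residual)
        total = superpose(total, analytic)
        # the analytic solution perturbs the finite outer boundary;
        # cancel that perturbation with another uncracked-plate BEM solve
        correction = bem_solve(analytic.boundary_residual)
        total = superpose(total, correction)
        residual = crack_line_traction(correction)
    return total
```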

  7. Design of microstrip components by computer

    NASA Technical Reports Server (NTRS)

    Cisco, T. C.

    1972-01-01

    A number of computer programs are presented for use in the synthesis of microwave components in microstrip geometries. The programs compute the electrical and dimensional parameters required to synthesize couplers, filters, circulators, transformers, power splitters, diode switches, multipliers, diode attenuators and phase shifters. Additional programs are included to analyze and optimize cascaded transmission lines and lumped element networks, to analyze and synthesize Chebyshev and Butterworth filter prototypes, and to compute mixer intermodulation products. The programs are written in FORTRAN and the emphasis of the study is placed on the use of these programs and not on the theoretical aspects of the structures.

  8. Mobile Genetic Elements: In Silico, In Vitro, In Vivo

    PubMed Central

    Arkhipova, Irina R.; Rice, Phoebe A.

    2016-01-01

    Mobile genetic elements (MGEs), also called transposable elements (TEs), represent universal components of most genomes and are intimately involved in nearly all aspects of genome organization, function, and evolution. However, there is currently a gap between fast-paced TE discovery in silico, stimulated by exponential growth of comparative genomic studies, and a limited number of experimental models amenable to more traditional in vitro and in vivo studies of structural, mechanistic, and regulatory properties of diverse MGEs. Experimental and computational scientists came together to bridge this gap at a recent conference, “Mobile Genetic Elements: in silico, in vitro, in vivo,” held at the Marine Biological Laboratory (MBL) in Woods Hole, MA, USA. PMID:26822117

  9. Computational Toxicology as Implemented by the US EPA ...

    EPA Pesticide Factsheets

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the T

  10. Connectivity Measures in EEG Microstructural Sleep Elements.

    PubMed

    Sakellariou, Dimitris; Koupparis, Andreas M; Kokkinos, Vasileios; Koutroumanidis, Michalis; Kostopoulos, George K

    2016-01-01

    During Non-Rapid Eye Movement sleep (NREM) the brain is relatively disconnected from the environment, while connectedness between brain areas is also decreased. Evidence indicates that these dynamic connectivity changes are delivered by microstructural elements of sleep: short periods of environmental stimuli evaluation followed by sleep-promoting procedures. The connectivity patterns of the latter, among other aspects of sleep microstructure, are still to be fully elucidated. We suggest here a methodology for the assessment and investigation of the connectivity patterns of EEG microstructural elements, such as sleep spindles. The methodology combines techniques at the preprocessing, estimation, error-assessment and visualization levels in order to allow detailed examination of the connectivity aspects (levels and directionality of information flow) over frequency and time with notable resolution, while dealing with volume conduction and EEG reference assessment. The high temporal and frequency resolution of the methodology will allow the association between the microelements and the dynamically forming networks that characterize them, and consequently possibly reveal aspects of the EEG microstructure. The proposed methodology is initially tested on artificially generated signals for proof of concept and subsequently applied to real EEG recordings via a custom-built MATLAB-based tool developed for such studies. Preliminary results from 843 fast sleep spindles recorded in whole-night sleep of 5 healthy volunteers indicate a prevailing pattern of interactions between centroparietal and frontal regions. We hereby demonstrate what is, to our knowledge, a first attempt to estimate the scalp EEG connectivity that characterizes fast sleep spindles via an "EEG-element connectivity" methodology we propose. Its application, via the computational tool we developed, suggests that it is able to investigate the connectivity patterns related to the occurrence

  11. Effectiveness of Multimedia Elements in Computer Supported Instruction: Analysis of Personalization Effects, Students' Performances and Costs

    ERIC Educational Resources Information Center

    Zaidel, Mark; Luo, XiaoHui

    2010-01-01

    This study investigates the efficiency of multimedia instruction at the college level by comparing the effectiveness of multimedia elements used in the computer supported learning with the cost of their preparation. Among the various technologies that advance learning, instructors and students generally identify interactive multimedia elements as…

  12. A Vectorial Model to Compute Terrain Parameters, Local and Remote Sheltering, Scattering and Albedo using TIN Domains for Hydrologic Modeling.

    NASA Astrophysics Data System (ADS)

    Moreno, H. A.; Ogden, F. L.; Steinke, R. C.; Alvarez, L. V.

    2015-12-01

    Triangulated Irregular Networks (TINs) are increasingly popular for terrain representation in high-performance surface and hydrologic modeling because of their ability to capture significant changes in surface form, such as topographic summits, slope breaks, ridges, valley floors, pits and cols. This work presents a methodology for estimating slope, aspect and the components of the incoming solar radiation using a vectorial approach within a topocentric coordinate system, establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing a unit vector defining the position of the sun at each hour and day of year (DOY). Thus, a dot product determines the radiation flux at each TIN element. Remote shading is computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector. Sky-view fractions are computed by a simplified scanning algorithm in prescribed directions and are useful for determining diffuse radiation. Finally, remote radiation scattering is computed from the sky-view factor complementary functions for prescribed albedo values of the surrounding terrain, only for significant angles above the horizon. This methodology improves on current algorithms for computing terrain and radiation parameters on TINs in an efficient manner. All terrain features (e.g. slope, aspect, sky-view factors and remote sheltering) can be pre-computed and stored for easy access in a subsequent ground-surface or hydrologic simulation.
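
    The per-facet slope/aspect/flux computation described is a few lines of vector algebra; a minimal Python/NumPy sketch under the abstract's topocentric convention (x east, y north, z up; variable names are ours):

```python
import numpy as np

def facet_insolation(v0, v1, v2, sun_zenith_deg, sun_azimuth_deg, s0=1361.0):
    """Slope, aspect, and direct-beam flux on one TIN facet (sketch)."""
    n = np.cross(v1 - v0, v2 - v0)           # facet normal from two edge vectors
    n = n / np.linalg.norm(n)
    if n[2] < 0.0:                           # orient the normal upward
        n = -n
    slope = np.degrees(np.arccos(n[2]))      # angle from the horizontal
    aspect = np.degrees(np.arctan2(n[0], n[1])) % 360.0   # clockwise from north
    z = np.radians(sun_zenith_deg)
    az = np.radians(sun_azimuth_deg)
    s = np.array([np.sin(z) * np.sin(az),    # unit vector toward the sun
                  np.sin(z) * np.cos(az),
                  np.cos(z)])
    flux = s0 * max(0.0, float(n @ s))       # self-shaded facets receive zero
    return slope, aspect, flux
```

    Remote shading and sky-view scanning would then zero or attenuate `flux` per facet; only the dot-product core is shown here.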

  13. STARS: A general-purpose finite element computer program for analysis of engineering structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1984-01-01

    STARS (Structural Analysis Routines) is primarily an interactive, graphics-oriented, finite-element computer program for analyzing the static, stability, free vibration, and dynamic responses of damped and undamped structures, including rotating systems. The element library consists of one-dimensional (1-D) line elements, two-dimensional (2-D) triangular and quadrilateral shell elements, and three-dimensional (3-D) tetrahedral and hexahedral solid elements. These elements enable the solution of structural problems that include truss, beam, space frame, plane, plate, shell, and solid structures, or any combination thereof. Zero, finite, and interdependent deflection boundary conditions can be implemented by the program. The associated dynamic response analysis capability provides for initial deformation and velocity inputs, whereas the transient excitation may be either forces or accelerations. An effective in-core or out-of-core solution strategy is automatically employed by the program, depending on the size of the problem. Data input may be at random within a data set, and the program offers certain automatic data-generation features. Input data are formatted as an optimal combination of free and fixed formats. Interactive graphics capabilities enable convenient display of nodal deformations, mode shapes, and element stresses.

  14. Computational design of low aspect ratio wing-winglets for transonic wind-tunnel testing

    NASA Technical Reports Server (NTRS)

    Kuhlman, John M.; Brown, Christopher K.

    1989-01-01

    A computational design has been performed for three different low aspect ratio wing planforms fitted with nonplanar winglets; one of the three planforms has been selected to be constructed as a wind tunnel model for testing in the NASA LaRC 7 x 10 High Speed Wind Tunnel. A design point of M = 0.8, CL approx = 0.3 was selected, for wings of aspect ratio equal to 2.2, and leading edge sweep angles of 45 and 50 deg. Winglet length is 15 percent of the wing semispan, with a cant angle of 15 deg, and a leading edge sweep of 50 deg. Winglet total area equals 2.25 percent of the wing reference area. This report summarizes the design process and the predicted transonic performance for each configuration.

  15. Probabilistic finite elements for fatigue and fracture analysis

    NASA Astrophysics Data System (ADS)

    Belytschko, Ted; Liu, Wing Kam

    1993-04-01

    An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on stochastic boundary element (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that initial defect is a critical parameter.

  16. Probabilistic finite elements for fatigue and fracture analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Liu, Wing Kam

    1993-01-01

    An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on stochastic boundary element (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that initial defect is a critical parameter.

  17. A comparison of turbulence models in computing multi-element airfoil flows

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Menter, Florian; Durbin, Paul A.; Mansour, Nagi N.

    1994-01-01

    Four different turbulence models are used to compute the flow over a three-element airfoil configuration. These models are the one-equation Baldwin-Barth model, the one-equation Spalart-Allmaras model, a two-equation k-omega model, and a new one-equation Durbin-Mansour model. The flow is computed using the INS2D two-dimensional incompressible Navier-Stokes solver. An overset Chimera grid approach is utilized. Grid resolution tests are presented, and manual solution-adaptation of the grid was performed. The performance of each of the models is evaluated for test cases involving different angles-of-attack, Reynolds numbers, and flap riggings. The resulting surface pressure coefficients, skin friction, velocity profiles, and lift, drag, and moment coefficients are compared with experimental data. The models produce very similar results in most cases. Excellent agreement between computational and experimental surface pressures was observed, but only moderately good agreement was seen in the velocity profile data. In general, the difference between the predictions of the different models was less than the difference between the computational and experimental data.

  18. Development of computer-aided design system of elastic sensitive elements of automatic metering devices

    NASA Astrophysics Data System (ADS)

    Kalinkina, M. E.; Kozlov, A. S.; Labkovskaia, R. I.; Pirozhnikova, O. I.; Tkalich, V. L.; Shmakov, N. A.

    2018-05-01

    The object of this research is the element base of control and automation system devices, including annular elastic sensitive elements, methods for their modeling, calculation algorithms, and software packages for automating their design. The article is devoted to the development of a computer-aided design system for the elastic sensitive elements used in weight- and force-measuring automation devices. Based on mathematical modeling of deformation processes in a solid, as well as on the results of static and dynamic analysis, the calculation of the elastic elements is carried out using the capabilities of modern numerical-simulation software systems. In the course of the simulation, the model was divided into a hexahedral grid of finite elements with a maximum size not exceeding 2.5 mm. The results of the modal and dynamic analyses are presented in this article.

  19. Efficient Computation of Info-Gap Robustness for Finite Element Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

    2012-07-05

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology, are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
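
    In standard info-gap notation, the robustness function whose repeated evaluation drives the cost is (generic definition; the report's own notation may differ):

    $$ \hat{\alpha}(q, r_{c}) \;=\; \max\Bigl\{\, \alpha \ge 0 \;:\; \max_{u \,\in\, \mathcal{U}(\alpha,\tilde{u})} R(q, u) \le r_{c} \Bigr\} $$

    i.e., the largest horizon of uncertainty $\alpha$ around the nominal model $\tilde{u}$ for which the performance $R$ of design $q$ still meets the critical requirement $r_c$; each evaluation of the inner maximum is one of the optimization problems that the adjoint method approximates without repeated sampling.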

  20. Three-Dimensional Effects in Multi-Element High Lift Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Watson, Ralph D.

    2003-01-01

    In an effort to discover the causes for disagreement between previous two-dimensional (2-D) computations and nominally 2-D experiment for flow over the three-element McDonnell Douglas 30P-30N airfoil configuration at high lift, a combined experimental/CFD investigation is described. The experiment explores several different side-wall boundary layer control venting patterns, documents venting mass flow rates, and looks at corner surface flow patterns. The experimental angle of attack at maximum lift is found to be sensitive to the side-wall venting pattern: a particular pattern increases the angle of attack at maximum lift by at least 2 deg. A significant amount of spanwise pressure variation is present at angles of attack near maximum lift. A CFD study using three-dimensional (3-D) structured-grid computations, which includes the modeling of side-wall venting, is employed to investigate 3-D effects on the flow. Side-wall suction strength is found to affect the angle at which maximum lift is predicted. Maximum lift in the CFD is shown to be limited by the growth of an off-body corner flow vortex and consequent increase in spanwise pressure variation and decrease in circulation. The 3-D computations with and without wall venting predict similar trends to experiment at low angles of attack, but either stall too early or else overpredict lift levels near maximum lift by as much as 5%. Unstructured-grid computations demonstrate that mounting brackets lower the lift levels near maximum lift conditions.

  1. Accuracy of Gradient Reconstruction on Grids with High Aspect Ratio

    NASA Technical Reports Server (NTRS)

    Thomas, James

    2008-01-01

    Gradient approximation methods commonly used in unstructured-grid finite-volume schemes intended for solutions of high Reynolds number flow equations are studied comprehensively. The accuracy of gradients within cells and within faces is evaluated systematically for both node-centered and cell-centered formulations. Computational and analytical evaluations are made on a series of high-aspect-ratio grids with different primal elements, including quadrilateral, triangular, and mixed element grids, with and without random perturbations to the mesh. Both rectangular and cylindrical geometries are considered; the latter serves to study the effects of geometric curvature. The study shows that the accuracy of gradient reconstruction on high-aspect-ratio grids is determined by a combination of the grid and the solution. The contributors to the error are identified and approaches to reduce errors are given, including the addition of higher-order terms in the direction of larger mesh spacing. A parameter GAMMA characterizing accuracy on curved high-aspect-ratio grids is discussed and an approximate-mapped-least-square method using a commonly-available distance function is presented; the method provides accurate gradient reconstruction on general grids. The study is intended to be a reference guide accompanying the construction of accurate and efficient methods for high Reynolds number applications.
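
    As an illustration of the least-squares reconstruction family studied, a minimal inverse-distance-weighted sketch in Python/NumPy (a generic unstructured-grid version, not the paper's mapped variant):

```python
import numpy as np

def ls_gradient(xc, uc, xn, un, p=1.0):
    """Weighted least-squares gradient at a cell or node (sketch).

    xc : (d,) center coordinates, uc : solution value there
    xn : (m, d) neighbor coordinates, un : (m,) neighbor values
    p  : inverse-distance weighting exponent (p=0 gives unweighted LSQ)
    """
    dx = xn - xc                                # displacements to neighbors
    du = un - uc
    w = 1.0 / np.linalg.norm(dx, axis=1) ** p   # down-weight distant neighbors
    A = dx * w[:, None]
    b = du * w
    grad, *_ = np.linalg.lstsq(A, b, rcond=None)
    return grad
```

    On a high-aspect-ratio mesh the matrix `A` becomes ill-conditioned in the thin direction, which is precisely the regime the paper's approximate-mapped-least-squares approach is designed to handle.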

  2. Finite element computation of compressible flows with the SUPG formulation

    NASA Technical Reports Server (NTRS)

    Le Beau, G. J.; Tezduyar, T. E.

    1991-01-01

    Finite element computation of compressible Euler equations is presented in the context of the streamline-upwind/Petrov-Galerkin (SUPG) formulation. The SUPG formulation, which is based on adding stabilizing terms to the Galerkin formulation, is further supplemented with a shock capturing operator which addresses the difficulty in maintaining a satisfactory solution near discontinuities in the solution field. The shock capturing operator, which has been derived from work done in entropy variables for a similar operator, is shown to lead to an appropriate level of additional stabilization near shocks, without resulting in excessive numerical diffusion. An implicit treatment of the impermeable wall boundary condition is also presented. This treatment of the no-penetration condition offers increased stability for large Courant numbers, and accelerated convergence of the computations for both implicit and explicit applications. Several examples are presented to demonstrate the ability of this method to solve the equations governing compressible fluid flow.
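
    Schematically, for a quasi-linear system $\partial_t\mathbf{U} + \mathbf{A}_i\,\partial_{x_i}\mathbf{U} = 0$, the stabilized formulation adds element-interior residual terms to the Galerkin statement (a simplified sketch; boundary terms and the entropy-variable construction of the shock-capturing coefficient are omitted):

    $$ \int_{\Omega} \mathbf{W}\cdot\mathbf{R}(\mathbf{U})\,d\Omega \;+\; \sum_{e}\int_{\Omega^{e}} \tau\,\bigl(\mathbf{A}_k^{T}\,\partial_{x_k}\mathbf{W}\bigr)\cdot\mathbf{R}(\mathbf{U})\,d\Omega \;+\; \sum_{e}\int_{\Omega^{e}} \nu_{\mathrm{shc}}\,\partial_{x_i}\mathbf{W}\cdot\partial_{x_i}\mathbf{U}\,d\Omega \;=\; 0 \quad \forall\,\mathbf{W}, $$

    where $\mathbf{R}(\mathbf{U}) = \partial_t\mathbf{U} + \mathbf{A}_i\,\partial_{x_i}\mathbf{U}$ is the residual, the $\tau$ term supplies the streamline-upwind stabilization, and the $\nu_{\mathrm{shc}}$ term is the shock-capturing operator discussed in the abstract.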

  3. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.

  4. A boundary integral method for numerical computation of radar cross section of 3D targets using hybrid BEM/FEM with edge elements

    NASA Astrophysics Data System (ADS)

    Dodig, H.

    2017-11-01

    This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge-element BEM/FEM to compute near-field edge element coefficients that are associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for a near-to-far-field transformation (NTFFT), which is a common step in RCS computations. It is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones and pyramids. The method demonstrated accuracy even in the case of a dielectrically coated PEC sphere at an interior resonance frequency, which is a common problem for computational electromagnetics codes.
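
    The quantity being computed from the edge-element coefficients is the standard far-field ratio (textbook definition):

    $$ \sigma(\theta,\varphi) \;=\; \lim_{r\to\infty} 4\pi r^{2}\,\frac{\lvert \mathbf{E}^{s}(r,\theta,\varphi)\rvert^{2}}{\lvert \mathbf{E}^{i}\rvert^{2}}, $$

    with $\mathbf{E}^i$ the incident and $\mathbf{E}^s$ the scattered electric field; the paper's contribution is evaluating this limit through a boundary integral over the near-field coefficients rather than through an explicit NTFFT step.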

  5. OPTOELECTRONICS, FIBER OPTICS, AND OTHER ASPECTS OF QUANTUM ELECTRONICS: Nonlinear optical devices: basic elements of a future optical digital computer?

    NASA Astrophysics Data System (ADS)

    Fischer, R.; Müller, R.

    1989-08-01

    It is shown that nonlinear optical devices are the most promising elements for an optical digital supercomputer. The basic characteristics of various developed nonlinear elements are presented, including bistable Fabry-Perot etalons, interference filters, self-electrooptic effect devices, quantum-well devices utilizing transitions between the lowest electron states in the conduction band of GaAs, etc.

  6. Computational Analysis of Enhanced Magnetic Bioseparation in Microfluidic Systems with Flow-Invasive Magnetic Elements

    PubMed Central

    Khashan, S. A.; Alazzam, A.; Furlani, E. P.

    2014-01-01

    A microfluidic design is proposed for realizing greatly enhanced separation of magnetically-labeled bioparticles using integrated soft-magnetic elements. The elements are fixed and intersect the carrier fluid (flow-invasive) with their length transverse to the flow. They are magnetized using a bias field to produce a particle capture force. Multiple stair-step elements are used to provide efficient capture throughout the entire flow channel. This is in contrast to conventional systems wherein the elements are integrated into the walls of the channel, which restricts efficient capture to limited regions of the channel due to the short range nature of the magnetic force. This severely limits the channel size and hence throughput. Flow-invasive elements overcome this limitation and enable microfluidic bioseparation systems with superior scalability. This enhanced functionality is quantified for the first time using a computational model that accounts for the dominant mechanisms of particle transport including fully-coupled particle-fluid momentum transfer. PMID:24931437
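
    The particle capture force referred to is usually modeled in the effective-dipole approximation (a standard form for a small, linearly magnetizable bead, stated here for orientation; not quoted from the paper):

    $$ \mathbf{F}_{m} \;=\; \mu_{0}\,V_{p}\,\Delta\chi\,(\mathbf{H}\cdot\nabla)\,\mathbf{H}, $$

    where $V_p$ is the particle volume and $\Delta\chi$ the susceptibility contrast between the labeled particle and the carrier fluid. Because $\mathbf{H}$ decays rapidly with distance from a magnetized element, wall-embedded elements capture only nearby particles; this short range is the scalability limit that the flow-invasive layout removes.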

  7. Finite element simulation of the mechanical impact of computer work on the carpal tunnel syndrome.

    PubMed

    Mouzakis, Dionysios E; Rachiotis, George; Zaoutsos, Stefanos; Eleftheriou, Andreas; Malizos, Konstantinos N

    2014-09-22

    Carpal tunnel syndrome (CTS) is a clinical disorder resulting from compression of the median nerve. The available evidence regarding the association between computer use and CTS is controversial. There is some evidence that computer mouse or keyboard work, or both, are associated with the development of CTS. Despite the availability of pressure measurements in the carpal tunnel during computer work (exposure to keyboard or mouse), there are no available data to support a direct effect of the increased intracarpal canal pressure on the median nerve. This study presents an attempt to simulate the direct effects of computer work on the whole carpal area section using finite element analysis. A finite element mesh was produced from computerized tomography scans of the carpal area, involving all tissues present in the carpal tunnel. Two loading scenarios were applied to these models based on biomechanical data measured during computer work. It was found that mouse work can produce large deformation fields in the median nerve region. Also, the high stressing effect of the carpal ligament was verified. Keyboard work produced considerable and heterogeneous elongations along the longitudinal axis of the median nerve. Our study provides evidence that increased intracarpal canal pressures caused by awkward wrist postures imposed during computer work are associated directly with deformation of the median nerve. Despite the limitations of the present study, the findings can be considered a contribution to the understanding of the development of CTS due to exposure to computer work.

  8. Sociocultural Aspects of Computers in Education.

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    The data reported in this paper give depth to the picture of computers in society, in work, and in schools. Prices have dropped, but computer corporations sell to schools, as they do to any other customer, to increase profits for themselves. Computerizing is a vehicle for social stratification. Computers are not easy to use and are hard to…

  9. Improved lattice computation of proton decay matrix elements

    NASA Astrophysics Data System (ADS)

    Aoki, Yasumichi; Izubuchi, Taku; Shintani, Eigo; Soni, Amarjit

    2017-07-01

    We present an improved result for the lattice computation of the proton decay matrix elements in Nf = 2+1 QCD. In this study, by adopting the error reduction technique of all-mode-averaging, a significant improvement of the statistical accuracy is achieved for the relevant form factor of proton (and also neutron) decay on the gauge ensemble of Nf = 2+1 domain-wall fermions with mπ = 0.34-0.69 GeV on a (2.7 fm)³ lattice, as used in our previous work [1]. We improve the total accuracy of matrix elements to 10-15% from 30-40% for p → π e⁺ or from 20-40% for p → K ν̄. The accuracy of the low-energy constants α and β in the leading-order baryon chiral perturbation theory (BChPT) of proton decay is also improved. The relevant form factors of p → π estimated through the "direct" lattice calculation from the three-point function appear to be 1.4 times smaller than those from the "indirect" method using BChPT with α and β. It turns out that the utilization of our result will provide a factor 2-3 larger proton partial lifetime than that obtained using BChPT. We also discuss the use of these parameters in a dark matter model.

  10. A finite element method to compute three-dimensional equilibrium configurations of fluid membranes: Optimal parameterization, variational formulation and applications

    NASA Astrophysics Data System (ADS)

    Rangarajan, Ramsharan; Gao, Huajian

    2015-09-01

    We introduce a finite element method to compute equilibrium configurations of fluid membranes, identified as stationary points of a curvature-dependent bending energy functional under certain geometric constraints. The reparameterization symmetries in the problem pose a challenge in designing parametric finite element methods, and existing methods commonly resort to Lagrange multipliers or penalty parameters. In contrast, we exploit these symmetries by representing solution surfaces as normal offsets of given reference surfaces and entirely bypass the need for artificial constraints. We then resort to a Galerkin finite element method to compute discrete C1 approximations of the normal offset coordinate. The variational framework presented is suitable for computing deformations of three-dimensional membranes subject to a broad range of external interactions. We provide a systematic algorithm for computing large deformations, wherein solutions at subsequent load steps are identified as perturbations of previously computed ones. We discuss the numerical implementation of the method in detail and demonstrate its optimal convergence properties using examples. We discuss applications of the method to studying adhesive interactions of fluid membranes with rigid substrates and to investigate the influence of membrane tension in tether formation.
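
    The curvature-dependent bending energy in such membrane models is typically of Canham-Helfrich type (a common choice stated here for orientation; the paper's functional and constraints may differ in detail):

    $$ E[S] \;=\; \int_{S} \frac{\kappa_{b}}{2}\,\bigl(2H - c_{0}\bigr)^{2}\,dA, $$

    minimized subject to constraints such as fixed surface area or enclosed volume, where $H$ is the mean curvature, $c_0$ the spontaneous curvature, and $\kappa_b$ the bending modulus. The reparameterization invariance of $E$ is exactly what the normal-offset representation exploits to avoid Lagrange multipliers or penalty parameters.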

  11. Advances and trends in the development of computational models for tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Tanner, J. A.

    1985-01-01

    Status and some recent developments of computational models for tires are summarized. Discussion focuses on a number of aspects of tire modeling and analysis including: tire materials and their characterization; evolution of tire models; characteristics of effective finite element models for analyzing tires; analysis needs for tires; and impact of the advances made in finite element technology, computational algorithms, and new computing systems on tire modeling and analysis. An initial set of benchmark problems has been proposed in concert with the U.S. tire industry. Extensive sets of experimental data will be collected for these problems and used for evaluating and validating different tire models. Also, the new Aircraft Landing Dynamics Facility (ALDF) at NASA Langley Research Center is described.

  12. Computational design of low aspect ratio wing-winglet configurations for transonic wind-tunnel tests

    NASA Technical Reports Server (NTRS)

    Kuhlman, John M.; Brown, Christopher K.

    1988-01-01

    A computational design has been performed for three different low aspect ratio wing planforms fitted with nonplanar winglets; one of the three planforms has been selected to be constructed as a wind tunnel model for testing in the NASA LaRC 7 x 10 High Speed Wind Tunnel. A design point of M = 0.8, CL approx = 0.3 was selected, for wings of aspect ratio equal to 2.2, and leading edge sweep angles of 45 and 50 deg. Winglet length is 15 percent of the wing semispan, with a cant angle of 15 deg, and a leading edge sweep of 50 deg. Winglet total area equals 2.25 percent of the wing reference area. This report summarizes the design process and the predicted transonic performance for each configuration.

  13. Combining elements of information fusion and knowledge-based systems to support situation analysis

    NASA Astrophysics Data System (ADS)

    Roy, Jean

    2006-04-01

    Situation awareness has emerged as an important concept in military and public security environments. Situation analysis is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of situation awareness for the decision maker(s). It is well established that information fusion, defined as the process of utilizing one or more information sources over time to assemble a representation of aspects of interest in an environment, is a key enabler to meeting the demanding requirements of situation analysis. However, although information fusion is important, developing and adopting a knowledge-centric view of situation analysis should provide a more holistic perspective of this process. This is based on the notion that awareness ultimately has to do with having knowledge of something. Moreover, not all of the situation elements and relationships of interest are directly observable. Those aspects of interest that cannot be observed must be inferred, i.e., derived as a conclusion from facts or premises, or by reasoning from evidence. This paper discusses aspects of knowledge, and how it can be acquired from experts, formally represented and stored in knowledge bases to be exploited by computer programs, and validated. Knowledge engineering is reviewed, with emphasis given to cognitive and ontological engineering. Facets of reasoning are discussed, along with inferencing methods that can be used in computer applications. Finally, combining elements of information fusion and knowledge-based systems, an overall approach and framework for the building of situation analysis support systems is presented.

  14. Orbital and maxillofacial computer aided surgery: patient-specific finite element models to predict surgical outcomes.

    PubMed

    Luboz, Vincent; Chabanas, Matthieu; Swider, Pascal; Payan, Yohan

    2005-08-01

    This paper addresses an important issue for the clinical relevance of Computer-Assisted Surgical applications, namely the methodology used to automatically build patient-specific finite element (FE) models of anatomical structures. From this perspective, a method is proposed, based on a technique called the mesh-matching method, followed by a process that corrects mesh irregularities. The mesh-matching algorithm generates patient-specific volume meshes from an existing generic model. The mesh regularization process is based on the Jacobian matrix transform relating the FE reference element and the current element. This method for generating patient-specific FE models is first applied to computer-assisted maxillofacial surgery, and more precisely, to the FE elastic modelling of patient facial soft tissues. For each patient, the planned bone osteotomies (mandible, maxilla, chin) are used as boundary conditions to deform the FE face model, in order to predict the aesthetic outcome of the surgery. Seven FE patient-specific models were successfully generated by our method. For one patient, the prediction of the FE model is qualitatively compared with the patient's post-operative appearance, measured from a computer tomography scan. Then, our methodology is applied to computer-assisted orbital surgery. It is, therefore, evaluated for the generation of 11 patient-specific FE poroelastic models of the orbital soft tissues. These models are used to predict the consequences of the surgical decompression of the orbit. More precisely, an average law is extrapolated from the simulations carried out for each patient model. This law links the size of the osteotomy (i.e. the surgical gesture) and the backward displacement of the eyeball (the consequence of the surgical gesture).
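
    A Jacobian-based regularity check of the kind underlying the mesh-correction step can be sketched in a few lines; the Python/NumPy fragment below (our illustration, not the authors' code) evaluates det J for a trilinear hexahedron at its Gauss points, where a non-positive determinant flags an inverted or badly distorted element that a regularization pass would need to repair:

```python
import numpy as np

# reference coordinates of the 8 corner nodes of a trilinear hexahedron
CORNERS = np.array([[-1, -1, -1], [ 1, -1, -1], [ 1,  1, -1], [-1,  1, -1],
                    [-1, -1,  1], [ 1, -1,  1], [ 1,  1,  1], [-1,  1,  1]], float)

def hex_jacobians(x, pts=None):
    """det(J) of a trilinear hex at sample points (mesh-quality sketch).

    x : (8, 3) nodal coordinates; pts : (n, 3) reference-space sample points.
    """
    if pts is None:
        g = 1.0 / np.sqrt(3.0)                # 2x2x2 Gauss points
        pts = CORNERS * g
    dets = []
    for xi, eta, zeta in pts:
        # derivatives of N_i = (1/8)(1 + xi_i*xi)(1 + eta_i*eta)(1 + zeta_i*zeta)
        dN = 0.125 * np.column_stack([
            CORNERS[:, 0] * (1 + CORNERS[:, 1] * eta) * (1 + CORNERS[:, 2] * zeta),
            CORNERS[:, 1] * (1 + CORNERS[:, 0] * xi) * (1 + CORNERS[:, 2] * zeta),
            CORNERS[:, 2] * (1 + CORNERS[:, 0] * xi) * (1 + CORNERS[:, 1] * eta),
        ])
        dets.append(np.linalg.det(dN.T @ x))  # J_ab = sum_i dN_i/dxi_a * x_ib
    return np.array(dets)
```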

  15. Computational aspects of helicopter trim analysis and damping levels from Floquet theory

    NASA Technical Reports Server (NTRS)

    Gaonkar, Gopal H.; Achar, N. S.

    1992-01-01

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
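
    The damped Newton update common to all three trim methods is simple to state; a minimal Python sketch with a fixed damping parameter follows (the paper selects the parameter optimally, which is not reproduced here; `residual` and `jacobian` are hypothetical callables returning the periodicity/trim residual and its Jacobian):

```python
import numpy as np

def damped_newton(residual, jacobian, x0, lam=1.0, tol=1e-10, max_iter=100):
    """Newton iteration with a fixed damping parameter lam in (0, 1] (sketch).

    lam = 1 recovers the ordinary Newton step; smaller values trade
    convergence speed for robustness on nearly singular Jacobians.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - lam * np.linalg.solve(jacobian(x), r)   # damped Newton step
    return x
```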

  16. Connectivity Measures in EEG Microstructural Sleep Elements

    PubMed Central

    Sakellariou, Dimitris; Koupparis, Andreas M.; Kokkinos, Vasileios; Koutroumanidis, Michalis; Kostopoulos, George K.

    2016-01-01

    During Non-Rapid Eye Movement sleep (NREM) the brain is relatively disconnected from the environment, while connectedness between brain areas is also decreased. Evidence indicates that these dynamic connectivity changes are delivered by microstructural elements of sleep: short periods of environmental stimuli evaluation followed by sleep-promoting procedures. The connectivity patterns of the latter, among other aspects of sleep microstructure, are still to be fully elucidated. We suggest here a methodology for the assessment and investigation of the connectivity patterns of EEG microstructural elements, such as sleep spindles. The methodology combines techniques at the preprocessing, estimation, error-assessment and visualization levels in order to allow detailed examination of the connectivity aspects (levels and directionality of information flow) over frequency and time with notable resolution, while dealing with volume conduction and EEG reference assessment. The high temporal and frequency resolution of the methodology will allow the association between the microelements and the dynamically forming networks that characterize them, and consequently possibly reveal aspects of the EEG microstructure. The proposed methodology is initially tested on artificially generated signals for proof of concept and subsequently applied to real EEG recordings via a custom-built MATLAB-based tool developed for such studies. Preliminary results from 843 fast sleep spindles recorded in whole-night sleep of 5 healthy volunteers indicate a prevailing pattern of interactions between centroparietal and frontal regions. We hereby demonstrate what is, to our knowledge, a first attempt to estimate the scalp EEG connectivity that characterizes fast sleep spindles via an “EEG-element connectivity” methodology we propose. Its application, via the computational tool we developed, suggests that it is able to investigate the connectivity patterns related to the

  17. Finite element analysis and computer graphics visualization of flow around pitching and plunging airfoils

    NASA Technical Reports Server (NTRS)

    Bratanow, T.; Ecer, A.

    1973-01-01

    A general computational method for analyzing unsteady flow around pitching and plunging airfoils was developed. The finite element method was applied in developing an efficient numerical procedure for the solution of equations describing the flow around airfoils. The numerical results were employed in conjunction with computer graphics techniques to produce visualization of the flow. The investigation involved mathematical model studies of flow in two phases: (1) analysis of a potential flow formulation and (2) analysis of an incompressible, unsteady, viscous flow from Navier-Stokes equations.

  18. Nuclear-relaxed elastic and piezoelectric constants of materials: Computational aspects of two quantum-mechanical approaches.

    PubMed

    Erba, Alessandro; Caglioti, Dominique; Zicovich-Wilson, Claudio Marcelo; Dovesi, Roberto

    2017-02-15

    Two alternative approaches for the quantum-mechanical calculation of the nuclear-relaxation term of the elastic and piezoelectric tensors of crystalline materials are illustrated and their computational aspects discussed: (i) a numerical approach based on the geometry optimization of atomic positions at strained lattice configurations and (ii) a quasi-analytical approach based on the evaluation of the force- and displacement-response internal-strain tensors combined with the interatomic force-constant matrix. The two schemes are compared as regards both computational accuracy and performance. The latter approach, not being affected by the many numerical parameters and procedures of a typical quasi-Newton geometry optimizer, constitutes a more reliable and robust means of evaluating such properties, at a reduced computational cost for most crystalline systems.
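
    In compact Voigt-style notation, the quasi-analytical route assembles the nuclear-relaxation correction from the two ingredients named above (standard result, stated up to sign and normalization conventions):

    $$ C_{ij} \;=\; C_{ij}^{\,\mathrm{clamped}} \;-\; \frac{1}{V}\,\Lambda_{ai}\,\bigl(\mathbf{H}^{-1}\bigr)_{ab}\,\Lambda_{bj}, $$

    where $C^{\mathrm{clamped}}$ is the frozen-nuclei elastic tensor, $\Lambda$ the force-response internal-strain tensor (forces on nuclei per unit strain), $\mathbf{H}$ the interatomic force-constant matrix restricted to the non-translational subspace, and $V$ the cell volume; an analogous correction applies to the piezoelectric tensor.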

  19. Repetitive element signature-based visualization, distance computation, and classification of 1766 microbial genomes.

    PubMed

    Lee, Kang-Hoon; Shin, Kyung-Seop; Lim, Debora; Kim, Woo-Chan; Chung, Byung Chang; Han, Gyu-Bum; Roh, Jeongkyu; Cho, Dong-Ho; Cho, Kiho

    2015-07-01

    The genomes of living organisms are populated with pleomorphic repetitive elements (REs) of varying densities. Our hypothesis that genomic RE landscapes are species/strain/individual-specific was implemented into the Genome Signature Imaging system to visualize and compute the RE-based signatures of any genome. Following the occurrence profiling of 5-nucleotide REs/words, the information from top-50 frequency words was transformed into a genome-specific signature and visualized as Genome Signature Images (GSIs), using a CMYK scheme. An algorithm for computing distances among GSIs was formulated using the GSIs' variables (word identity, frequency, and frequency order). The utility of the GSI-distance computation system was demonstrated with control genomes. GSI-based computation of genome-relatedness among 1766 microbes (117 archaea and 1649 bacteria) identified their clustering patterns; although the majority paralleled the established classification, some did not. The Genome Signature Imaging system, with its visualization and distance computation functions, enables genome-scale evolutionary studies involving numerous genomes with varying sizes.
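
    The word-frequency signature step can be prototyped compactly; the sketch below counts 5-nucleotide words, keeps the top 50, and compares two signatures with a plain L1 distance (the distance is our simplification for illustration; the paper's metric also weights word identity and frequency order):

```python
from collections import Counter

def word_signature(seq, k=5, top=50):
    """Top-`top` k-mer words of a genome with their relative frequencies."""
    seq = seq.upper()
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    for w in list(counts):
        if set(w) - set("ACGT"):          # drop words containing ambiguous bases
            del counts[w]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.most_common(top)}

def signature_distance(sig_a, sig_b):
    """L1 distance over the union of the two top-word sets (an assumption)."""
    words = set(sig_a) | set(sig_b)
    return sum(abs(sig_a.get(w, 0.0) - sig_b.get(w, 0.0)) for w in words)
```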

  20. Gastrointestinal stromal tumors: retrospective analysis of the computer-tomographic aspects.

    PubMed

    Lupescu, Ioana G; Grasu, Mugur; Boros, Mirela; Gheorghe, Cristian; Ionescu, Mihnea; Popescu, Irinel; Herlea, Vlad; Georgescu, Serban A

    2007-06-01

    To describe the computer-tomographic (CT) aspects of gastrointestinal stromal tumors (GISTs) in correlation with their histology. The medical records of all patients at our hospital with a histologic diagnosis of GIST between January 2002 and June 2006, investigated before surgery by CT, were reviewed. Two radiologists with knowledge of the diagnosis reviewed the CT findings. Among the 15 cases of GISTs, 9 involved the stomach and 4 the small intestine. The location of the primary tumor could not be determined for 2 of the 15 tumors because of the presence of extensive peritoneal metastases. Most primary tumors were predominantly extraluminal (13 cases), while two were clearly endoluminal. The mean diameter of the primary tumors was 8 cm. The tumor margin was well defined in 12 patients and irregular in 3 cases. Central fluid attenuation was present in 11 tumors, while central gas was seen in two cases. Metastases were seen in 2 cases at presentation and in another 2 patients during follow-up. Spread was exclusively to the liver or peritoneum. Visceral obstruction was absent even in extensive peritoneal metastatic disease. Ascites was an unusual finding. CT plays an important role not only in the detection and localization but also in the evaluation of the extension and follow-up of these tumors. Using CT aspects alone, the diagnosis of GIST can only be suspected, since other soft-tissue tumors with gastrointestinal involvement can often mimic GISTs. In all cases histological diagnosis is essential.

  1. Finite element analysis of transonic flows in cascades: Importance of computational grids in improving accuracy and convergence

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Akay, H. U.

    1981-01-01

    The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.

  2. Improved lattice computation of proton decay matrix elements

    DOE PAGES

    Aoki, Yasumichi; Izubuchi, Taku; Shintani, Eigo; ...

    2017-07-14

    In this paper, we present an improved result for the lattice computation of the proton decay matrix elements in Nf = 2 + 1 QCD. In this study, by adopting the error reduction technique of all-mode-averaging, a significant improvement of the statistical accuracy is achieved for the relevant form factor of proton (and also neutron) decay on the gauge ensemble of Nf = 2 + 1 domain-wall fermions with mπ = 0.34–0.69 GeV on a 2.7 fm³ lattice, as used in our previous work. We improve the total accuracy of matrix elements to 10–15% from 30–40% for p → πe+ or from 20–40% for p → Kν̄. The accuracy of the low-energy constants α and β in the leading-order baryon chiral perturbation theory (BChPT) of proton decay is also improved. The relevant form factors of p → π estimated through the "direct" lattice calculation from the three-point function appear to be 1.4 times smaller than those from the "indirect" method using BChPT with α and β. It turns out that the utilization of our result will provide a factor 2–3 larger proton partial lifetime than that obtained using BChPT. Lastly, we also discuss the use of these parameters in a dark matter model.

  3. Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements

    PubMed Central

    Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.

    2016-01-01

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037

  4. Finite element simulation of articular contact mechanics with quadratic tetrahedral elements.

    PubMed

    Maas, Steve A; Ellis, Benjamin J; Rawlins, David S; Weiss, Jeffrey A

    2016-03-21

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. High performance computation of radiative transfer equation using the finite element method

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Favennec, Y.

    2018-05-01

    This article deals with an efficient strategy for numerically simulating radiative transfer phenomena using distributed computing. The finite element method, alongside the discrete ordinate method, is used for the spatio-angular discretization of the monochromatic steady-state radiative transfer equation in an anisotropically scattering medium. Two very different methods of parallelization, angular and spatial decomposition, are presented. To do so, the finite element method is used in a vectorial way. A detailed comparison of scalability, performance, and efficiency on thousands of processors is established for two- and three-dimensional heterogeneous test cases. Timings show that both algorithms scale well when using proper preconditioners. It is also observed that our angular decomposition scheme outperforms our domain decomposition method. Overall, we perform numerical simulations at scales that were previously unattainable by standard radiative transfer equation solvers.

  6. The Development and Evaluation of a Computer-Simulated Science Inquiry Environment Using Gamified Elements

    ERIC Educational Resources Information Center

    Tsai, Fu-Hsing

    2018-01-01

    This study developed a computer-simulated science inquiry environment, called the Science Detective Squad, to engage students in investigating an electricity problem that may happen in daily life. The environment combined the simulation of scientific instruments and a virtual environment, including gamified elements, such as points and a story for…

  7. Computer Security: The Human Element.

    ERIC Educational Resources Information Center

    Guynes, Carl S.; Vanacek, Michael T.

    1981-01-01

    The security and effectiveness of a computer system are dependent on the personnel involved. Improved personnel and organizational procedures can significantly reduce the potential for computer fraud. (Author/MLF)

  8. A method for determining spiral-bevel gear tooth geometry for finite element analysis

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.; Litvin, Faydor L.

    1991-01-01

    An analytical method was developed to determine gear tooth surface coordinates of face-milled spiral bevel gears. The method uses the basic gear design parameters in conjunction with the kinematical aspects of spiral bevel gear manufacturing machinery. A computer program, SURFACE, was developed. The computer program calculates the surface coordinates and outputs 3-D model data that can be used for finite element analysis. Development of the modeling method and an example case are presented. This analysis method could also find application for gear inspection and near-net-shape gear forging die design.

  9. Practical and quality-control aspects of multi-element analysis with quadrupole ICP-MS with special attention to urine and whole blood.

    PubMed

    De Boer, Jan L M; Ritsema, Rob; Piso, Sjoerd; Van Staden, Hans; Van Den Beld, Wilbert

    2004-07-01

    Two screening methods were developed for rapid analysis of a great number of urine and blood samples within the framework of an exposure check of the population after a firework explosion. A total of 56 elements, including major elements, was measured. Sample preparation consisted of simple dilution. Extensive quality controls were applied, including element addition and the use of certified reference materials. Relevant results at levels similar to those found in the literature were obtained for Co, Ni, Cu, Zn, Sr, Cd, Sn, Sb, Ba, Tl, and Pb in urine and for the same elements except Ni, Sn, Sb, and Ba in blood. However, quadrupole ICP-MS has limitations for the analysis of urine and blood, mainly related to spectral interferences, and these cause higher detection limits. The general aspects discussed in the paper give it wider applicability than just the analysis of blood and urine; it can, for example, be used in environmental analysis.

  10. Program design by a multidisciplinary team. [for structural finite element analysis on STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Voigt, S.

    1975-01-01

    The use of software engineering aids in the design of a structural finite-element analysis computer program for the STAR-100 computer is described. Nested functional diagrams to aid in communication among design team members were used, and a standardized specification format to describe modules designed by various members was adopted. This is a report of current work in which use of the functional diagrams provided continuity and helped resolve some of the problems arising in this long-running part-time project.

  11. Liver CT image processing: a short introduction of the technical elements.

    PubMed

    Masutani, Y; Uozumi, K; Akahane, Masaaki; Ohtomo, Kuni

    2006-05-01

    In this paper, we describe the technical aspects of image analysis for liver diagnosis and treatment, including the state-of-the-art of liver image analysis and its applications. After discussion on modalities for liver image analysis, various technical elements for liver image analysis such as registration, segmentation, modeling, and computer-assisted detection are covered with examples performed with clinical data sets. Perspective in the imaging technologies is also reviewed and discussed.

  12. How to determine spiral bevel gear tooth geometry for finite element analysis

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.; Litvin, Faydor L.

    1991-01-01

    An analytical method was developed to determine gear tooth surface coordinates of face milled spiral bevel gears. The method combines the basic gear design parameters with the kinematical aspects for spiral bevel gear manufacturing. A computer program was developed to calculate the surface coordinates. From this data a 3-D model for finite element analysis can be determined. Development of the modeling method and an example case are presented.

  13. Human-computer interaction: psychological aspects of the human use of computing.

    PubMed

    Olson, Gary M; Olson, Judith S

    2003-01-01

    Human-computer interaction (HCI) is a multidisciplinary field in which psychology and other social sciences unite with computer science and related technical fields with the goal of making computing systems that are both useful and usable. It is a blend of applied and basic research, both drawing from psychological research and contributing new ideas to it. New technologies continuously challenge HCI researchers with new options, as do the demands of new audiences and uses. A variety of usability methods have been developed that draw upon psychological principles. HCI research has expanded beyond its roots in the cognitive processes of individual users to include social and organizational processes involved in computer usage in real environments as well as the use of computers in collaboration. HCI researchers need to be mindful of the longer-term changes brought about by the use of computing in a variety of venues.

  14. The Elastic Behaviour of Sintered Metallic Fibre Networks: A Finite Element Study by Beam Theory

    PubMed Central

    Bosbach, Wolfram A.

    2015-01-01

    Background The finite element method has complemented research in the field of network mechanics in the past years in numerous studies about various materials. Numerical predictions and the planning efficiency of experimental procedures are two of the motivational aspects for these numerical studies. The widespread availability of high-performance computing facilities has been the enabler for the simulation of sufficiently large systems. Objectives and Motivation In the present study, finite element models were built for sintered, metallic fibre networks and validated by previously published experimental stiffness measurements. The validated models were the basis for predictions about so far unknown properties. Materials and Methods The finite element models were built by transferring previously published skeletons of fibre networks into finite element models. Beam theory was applied as a simplification method. Results and Conclusions The obtained material stiffness is not a constant but rather a function of variables such as sample size and boundary conditions. Beam theory offers an efficient finite element method for the simulated fibre networks. The experimental results can be approximated by the simulated systems. Two worthwhile aspects for future work will be the influence of size and shape and the mechanical interaction with matrix materials. PMID:26569603

  15. Rectenna session: Micro aspects. [energy conversion

    NASA Technical Reports Server (NTRS)

    Gutmann, R. J.

    1980-01-01

    Two micro aspects of the rectenna design are addressed: evaluation of the degradation in net rectenna RF to DC conversion efficiency due to power density variations across the rectenna (power combining analysis) and design of Yagi-Uda receiving elements to reduce rectenna cost by decreasing the number of conversion circuits (directional receiving elements). The first of these micro aspects involves resolving a fundamental question of efficiency potential with a rectenna, while the second involves a design modification with a large potential cost saving.

  16. Computational Aspects of Heat Transfer in Structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M. (Compiler)

    1982-01-01

    Techniques for the computation of heat transfer and associated phenomena in complex structures are examined with an emphasis on reentry flight vehicle structures. Analysis methods, computer programs, thermal analysis of large space structures and high speed vehicles, and the impact of computer systems are addressed.

  17. Signal and noise extraction from analog memory elements for neuromorphic computing.

    PubMed

    Gong, N; Idé, T; Kim, S; Boybat, I; Sebastian, A; Narayanan, V; Ando, T

    2018-05-29

    Dense crossbar arrays of non-volatile memory (NVM) can potentially enable massively parallel and highly energy-efficient neuromorphic computing systems. The key requirements for the NVM elements are continuous (analog-like) conductance tuning capability and switching symmetry with acceptable noise levels. However, most NVM devices show non-linear and asymmetric switching behaviors. Such non-linear behaviors render separation of signal and noise extremely difficult with conventional characterization techniques. In this study, we establish a practical methodology based on Gaussian process regression to address this issue. The methodology is agnostic to switching mechanisms and applicable to various NVM devices. We show the tradeoff between switching symmetry and signal-to-noise ratio for HfO2-based resistive random access memory. Then, we characterize 1000 phase-change memory devices based on Ge2Sb2Te5 and separate the total variability into device-to-device variability and inherent randomness from individual devices. These results highlight the usefulness of our methodology to realize ideal NVM devices for neuromorphic computing.
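    A minimal sketch of this kind of Gaussian-process separation of a smooth conductance-update trend from readout noise, using scikit-learn; the synthetic data, kernel choice, and noise level are assumptions for illustration, not the paper's setup:

    ```python
    # Separate a smooth "signal" trend from device noise with GP regression.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    pulse = np.arange(200.0).reshape(-1, 1)             # programming-pulse index
    true_g = 1.0 - np.exp(-pulse.ravel() / 60.0)        # non-linear update trend
    g_meas = true_g + 0.05 * rng.standard_normal(200)   # noisy conductance readout

    # RBF captures the smooth trend; WhiteKernel absorbs the update noise.
    kernel = RBF(length_scale=20.0) + WhiteKernel(noise_level=0.05 ** 2)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(pulse, g_meas)

    signal = gpr.predict(pulse)      # estimated signal component
    noise = g_meas - signal          # residual treated as device noise
    print("estimated noise std: %.3f" % noise.std())
    ```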

  18. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
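    For the semianalytical technique, a hedged sketch of the differentiated equations of motion (notation assumed, not quoted from the thesis): differentiating M ü + C u̇ + K u = F with respect to a design variable x gives

    \[
    M\,\frac{\partial \ddot{u}}{\partial x} + C\,\frac{\partial \dot{u}}{\partial x} + K\,\frac{\partial u}{\partial x}
    = \frac{\partial F}{\partial x} - \frac{\partial M}{\partial x}\,\ddot{u} - \frac{\partial C}{\partial x}\,\dot{u} - \frac{\partial K}{\partial x}\,u,
    \]

    where the coefficient-matrix derivatives are approximated by finite differences, e.g. ∂K/∂x ≈ [K(x+Δx) − K(x)]/Δx, and the response sensitivities are then solved for in the reduced basis of approximation vectors.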

  19. Computational Toxicology at the US EPA

    EPA Pesticide Factsheets

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, EPA is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce and the contaminant mixtures found in America's air, water, and hazardous-waste sites. The ORD Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the EPA Science to Achieve Results (STAR) program. Key intramural projects of the CTRP include digitizing legacy toxicity testing information into the toxicity reference database (ToxRefDB), predicting toxicity (ToxCast™) and exposure (ExpoCast™), and creating virtual liver (v-Liver™) and virtual embryo (v-Embryo™) systems models. The models and underlying data are being made publicly available.

  20. Numerical Aspects of Eigenvalue and Eigenfunction Computations for Chaotic Quantum Systems

    NASA Astrophysics Data System (ADS)

    Bäcker, A.

    Summary: We give an introduction to some of the numerical aspects in quantum chaos. The classical dynamics of two-dimensional area-preserving maps on the torus is illustrated using the standard map and a perturbed cat map. The quantization of area-preserving maps given by their generating function is discussed and for the computation of the eigenvalues a computer program in Python is presented. We illustrate the eigenvalue distribution for two types of perturbed cat maps, one leading to COE and the other to CUE statistics. For the eigenfunctions of quantum maps we study the distribution of the eigenvectors and compare them with the corresponding random matrix distributions. The Husimi representation allows for a direct comparison of the localization of the eigenstates in phase space with the corresponding classical structures. Examples for a perturbed cat map and the standard map with different parameters are shown. Billiard systems and the corresponding quantum billiards are another important class of systems (which are also relevant to applications, for example in mesoscopic physics). We provide a detailed exposition of the boundary integral method, which is one important method to determine the eigenvalues and eigenfunctions of the Helmholtz equation. We discuss several methods to determine the eigenvalues from the Fredholm equation and illustrate them for the stadium billiard. The occurrence of spurious solutions is discussed in detail and illustrated for the circular billiard, the stadium billiard, and the annular sector billiard. We emphasize the role of the normal derivative function to compute the normalization of eigenfunctions, momentum representations or autocorrelation functions in a very efficient and direct way. Some examples for these quantities are given and discussed.
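    As a small illustration of the classical side of this material, here is a sketch of iterating the (Chirikov) standard map on the unit torus; the parameter values are arbitrary choices for the example, not taken from the text:

    ```python
    # Iterate the Chirikov standard map, an area-preserving map on the torus;
    # K is the kick strength that controls the transition to chaos.
    import math

    def standard_map(q, p, K, steps):
        """Iterate (q, p) on the unit torus and return the orbit."""
        orbit = [(q, p)]
        for _ in range(steps):
            p = (p + K / (2.0 * math.pi) * math.sin(2.0 * math.pi * q)) % 1.0
            q = (q + p) % 1.0
            orbit.append((q, p))
        return orbit

    for q0 in (0.1, 0.3, 0.5):            # a few initial conditions
        print(standard_map(q0, 0.2, K=2.5, steps=1000)[-1])
    ```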

  1. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

    This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the size of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error in the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied. They are, respectively, the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows, and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which are used to validate the present method.

  2. The h-p Version of the Finite Element Method with Quasiuniform Meshes.

    DTIC Science & Technology

    1986-05-01

    (The method has been implemented, e.g., in the program PROBE of Noetic Technologies, St. Louis.) The theoretical aspects have been studied only recently; the first theoretical paper appeared in 1981 (see [6]). Through a mapping approach, the results are also valid for curvilinear elements. The applications reported were performed with the computer program PROBE [20], [22] developed by Noetic Technologies Corporation, St. Louis.

  3. Large-scale computation of incompressible viscous flow by least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale, three-dimensional, steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive-definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Newton's method with simple substitution is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi-preconditioned conjugate gradient method, which avoids the formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven cavity problem at Re = 10,000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. The Taylor-Goertler-like vortices are observed for Re = 1,000.
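    A hedged sketch of the matrix-free, Jacobi-preconditioned conjugate gradient strategy described above; apply_A stands in for an element-by-element operator product, and the toy system is an illustrative assumption:

    ```python
    # Matrix-free Jacobi-preconditioned CG: only y -> A y and diag(A) needed.
    import numpy as np

    def jacobi_pcg(apply_A, diag_A, b, tol=1e-8, max_iter=500):
        """Solve A x = b given an operator apply_A and the diagonal of A."""
        x = np.zeros_like(b)
        r = b - apply_A(x)
        z = r / diag_A                  # Jacobi preconditioning step
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = apply_A(p)
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = r / diag_A
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Toy symmetric positive-definite system to exercise the solver
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    x = jacobi_pcg(lambda v: A @ v, np.diag(A), np.array([1.0, 2.0]))
    print(x)
    ```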

  4. MPSalsa Version 1.5: A Finite Element Computer Program for Reacting Flow Problems: Part 1 - Theoretical Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devine, K.D.; Hennigan, G.L.; Hutchinson, S.A.

    1999-01-01

    The theoretical background for the finite element computer program MPSalsa Version 1.5 is presented in detail. MPSalsa is designed to solve laminar or turbulent low Mach number, two- or three-dimensional incompressible and variable-density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow (with auxiliary turbulence equations), heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method, and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.

  5. Influence of Finite Element Software on Energy Release Rates Computed Using the Virtual Crack Closure Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)

    2006-01-01

    Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS® and ANSYS® and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible-mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS®/Standard using the VCCT for ABAQUS® add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS® finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS® add-on.
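    For orientation, a hedged sketch of the VCCT relation underlying these computations, in commonly used notation (assumed here, not quoted from the paper, and up to sign convention): with ΔA the crack surface area virtually closed at a front node, Z the nodal force at the front, and Δw the relative opening displacement of the corresponding node pair behind the front,

    \[
    G_{I} \;=\; \frac{1}{2\,\Delta A}\, Z\,\Delta w,
    \]

    with G_II and G_III obtained analogously from the shear force components and the corresponding sliding displacements.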

  6. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E [Ridgefield, CT]; Coteus, Paul W [Yorktown Heights, NY]; Crumley, Paul G [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Gooding, Thomas M [Rochester, MN]; Haring, Rudolf A [Cortlandt Manor, NY]; Megerian, Mark G [Rochester, MN]; Ohmacht, Martin [Yorktown Heights, NY]; Reed, Don D [Mantorville, MN]; Swetz, Richard A [Mahopac, NY]; Takken, Todd [Brewster, NY]

    2011-08-16

    An apparatus and method for controlling power usage in computers includes a plurality of computers communicating with a local control device and a power source supplying power to the local control device and the computers. A plurality of sensors communicate with each computer for ascertaining its power usage, and a system control device communicates with the computers for controlling their power usage.

  7. STARS: An Integrated, Multidisciplinary, Finite-Element, Structural, Fluids, Aeroelastic, and Aeroservoelastic Analysis Computer Program

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1997-01-01

    A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.

  8. Optimum element density studies for finite-element thermal analysis of hypersonic aircraft structures

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Olona, Timothy; Muramoto, Kyle M.

    1990-01-01

    Different finite element models previously set up for thermal analysis of the space shuttle orbiter structure are discussed and their shortcomings identified. Element density criteria are established for the finite element thermal modeling of space shuttle orbiter-type large, hypersonic aircraft structures. These criteria are based on rigorous studies of solution accuracy using different finite element models, with different element densities, set up for one cell of the orbiter wing. Also, a method for optimizing the computer central processing unit (CPU) time of the transient thermal analysis is discussed. Based on the newly established element density criteria, the orbiter wing midspan segment was modeled to examine thermal analysis solution accuracy and the extent of the computation CPU time required. The results showed that the distributions of the structural temperatures and the thermal stresses obtained from this wing segment model were satisfactory and the computation CPU time was at an acceptable level. The studies offered the hope that modeling large, hypersonic aircraft structures using high-density elements for transient thermal analysis is possible if a CPU optimization technique is used.

  9. A New Material Mapping Procedure for Quantitative Computed Tomography-Based, Continuum Finite Element Analyses of the Vertebra

    PubMed Central

    Unnikrishnan, Ginu U.; Morgan, Elise F.

    2011-01-01

    Inaccuracies in the estimation of material properties and errors in the assignment of these properties into finite element models limit the reliability, accuracy, and precision of quantitative computed tomography (QCT)-based finite element analyses of the vertebra. In this work, a new mesh-independent, material mapping procedure was developed to improve the quality of predictions of vertebral mechanical behavior from QCT-based finite element models. In this procedure, an intermediate step, called the material block model, was introduced to determine the distribution of material properties based on bone mineral density, and these properties were then mapped onto the finite element mesh. A sensitivity study was first conducted on a calibration phantom to understand the influence of the size of the material blocks on the computed bone mineral density. It was observed that varying the material block size produced only marginal changes in the predictions of mineral density. Finite element (FE) analyses were then conducted on a square column-shaped region of the vertebra and also on the entire vertebra in order to study the effect of material block size on the FE-derived outcomes. The predicted values of stiffness for the column and the vertebra decreased with decreasing block size. When these results were compared to those of a mesh convergence analysis, it was found that the influence of element size on vertebral stiffness was less than that of the material block size. This mapping procedure allows the material properties in a finite element study to be determined based on the block size required for an accurate representation of the material field, while the size of the finite elements can be selected independently and based on the required numerical accuracy of the finite element solution. The mesh-independent, material mapping procedure developed in this study could be particularly helpful in improving the accuracy of finite element analyses of the vertebra.

  10. Finite element analysis of TAVI: Impact of native aortic root computational modeling strategies on simulation outcomes.

    PubMed

    Finotello, Alice; Morganti, Simone; Auricchio, Ferdinando

    2017-09-01

    In the last few years, several studies, each with a different aim and modeling detail, have been proposed to investigate transcatheter aortic valve implantation (TAVI) with finite elements. The present work focuses on the patient-specific finite element modeling of the aortic valve complex. In particular, we aim at investigating how different modeling strategies in terms of material models/properties and discretization procedures can impact analysis results. Four different choices both for the mesh size (from ∼20k elements to ∼200k elements) and for the material model (from rigid to hyperelastic anisotropic) are considered. Different approaches for modeling calcifications are also taken into account. Post-operative CT data of the real implant are used as the reference solution with the aim of outlining a trade-off between computational model complexity and reliability of the results. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  11. [Computer eyeglasses--aspects of a confusing topic].

    PubMed

    Huber-Spitzy, V; Janeba, E

    1997-01-01

    With the coming into force of the new Austrian Employee Protection Act the issue of the so called "computer glasses" will also gain added importance in our country. Such glasses have been defined as vision aids to be exclusively used for the work on computer monitors and include single-vision glasses solely intended for reading computer screen, glasses with bifocal lenses for reading computer screen and hard-copy documents as well as those with varifocal lenses featuring a thickened central section. There is still a considerable controversy among those concerned as to who will bear the costs for such glasses--most likely it will be the employer. Prescription of such vision aids will be exclusively restricted to ophthalmologists, based on a thorough ophthalmological examination under adequate consideration of the specific working environment and the workplace requirements of the individual employee concerned.

  12. Performance of an anisotropic Allman/DKT 3-node thin triangular flat shell element

    NASA Astrophysics Data System (ADS)

    Ertas, A.; Krafcik, J. T.; Ekwaro-Osire, S.

    1992-05-01

    A simple, explicit formulation of the stiffness matrix for an anisotropic, 3-node, thin triangular flat shell element in global coordinates is presented. An Allman triangle (AT) is used for membrane stiffness. The membrane stiffness matrix is explicitly derived by applying an Allman transformation to a Felippa 6-node linear strain triangle (LST). Bending stiffness is incorporated by the use of a discrete Kirchhoff triangle (DKT) bending element. Stiffness terms resulting from anisotropic membrane-bending coupling are included by integrating, in area coordinates, the membrane and bending strain-displacement matrices. Using the aforementioned approach, the objective of this study is to develop and test the performance of a practical 3-node flat shell element that could be used in plate problems with unsymmetrically stacked composite laminates. The performance of the latter element is tested on plates of varying aspect ratios. The developed 3-node shell element should simplify the programming task and have the potential of reducing the computational time.

  13. [Computers in biomedical research: I. Analysis of bioelectrical signals].

    PubMed

    Vivaldi, E A; Maldonado, P

    2001-08-01

    A personal computer equipped with an analog-to-digital conversion card is able to input, store and display signals of biomedical interest. These signals can additionally be submitted to ad-hoc software for analysis and diagnosis. Data acquisition is based on the sampling of a signal at a given rate and amplitude resolution. The automation of signal processing involves syntactic aspects (data transduction, conditioning and reduction) and semantic aspects (feature extraction to describe and characterize the signal, and diagnostic classification). The analytical approach that is at the basis of computer programming allows for the successful resolution of apparently complex tasks. Two basic principles involved are the definition of simple fundamental functions that are then iterated and the modular subdivision of tasks. These two principles are illustrated, respectively, by presenting the algorithm that detects relevant elements for the analysis of a polysomnogram, and the task flow in systems that automate electrocardiographic reports.
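    As a minimal illustration of the acquisition step described (sampling at a given rate and amplitude resolution), here is a sketch assuming an 8-bit converter sampling at 200 Hz and a toy waveform; all settings are illustrative:

    ```python
    # Sample an "analog" signal at a fixed rate and quantize to n bits.
    import math

    def sample_and_quantize(signal, duration_s, rate_hz, n_bits, v_range):
        """Sample signal(t) at rate_hz; quantize to n_bits over +/- v_range."""
        levels = 2 ** n_bits
        step = 2.0 * v_range / levels        # amplitude resolution (1 LSB)
        samples = []
        for k in range(int(duration_s * rate_hz)):
            v = signal(k / rate_hz)          # sampling instant k / rate
            code = max(0, min(levels - 1, int((v + v_range) / step)))
            samples.append(code)
        return samples

    toy_wave = lambda t: 0.8 * math.sin(2 * math.pi * 1.2 * t)  # 1.2 Hz tone
    print(sample_and_quantize(toy_wave, 1.0, 200, 8, 1.0)[:10])
    ```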

  14. Cost Considerations in Nonlinear Finite-Element Computing

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.

    1985-01-01

    This conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. The paper evaluates the computational efficiency of different computer architectures in terms of relative cost and computing time.

  15. Educational aspects of molecular simulation

    NASA Astrophysics Data System (ADS)

    Allen, Michael P.

    This article addresses some aspects of teaching simulation methods to undergraduates and graduate students. Simulation is increasingly a cross-disciplinary activity, which means that the students who need to learn about simulation methods may have widely differing backgrounds. Also, they may have a wide range of views on what constitutes an interesting application of simulation methods. Almost always, a successful simulation course includes an element of practical, hands-on activity: a balance always needs to be struck between treating the simulation software as a 'black box', and becoming bogged down in programming issues. With notebook computers becoming widely available, students often wish to take away the programs to run themselves, and access to raw computer power is not the limiting factor that it once was; on the other hand, the software should be portable and, if possible, free. Examples will be drawn from the author's experience in three different contexts. (1) An annual simulation summer school for graduate students, run by the UK CCP5 organization, in which practical sessions are combined with an intensive programme of lectures describing the methodology. (2) A molecular modelling module, given as part of a doctoral training centre in the Life Sciences at Warwick, for students who might not have a first degree in the physical sciences. (3) An undergraduate module in Physics at Warwick, also taken by students from other disciplines, teaching high performance computing, visualization, and scripting in the context of a physical application such as Monte Carlo simulation.

  16. SAGUARO: a finite-element computer program for partially saturated porous flow problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eaton, R.R.; Gartling, D.K.; Larson, D.E.

    1983-06-01

    SAGUARO is a finite element computer program designed to calculate two-dimensional flow of mass and energy through porous media. The media may be saturated or partially saturated. SAGUARO solves the parabolic time-dependent mass transport equation which accounts for the presence of partially saturated zones through the use of highly non-linear material characteristic curves. The energy equation accounts for the possibility of partially saturated regions by adjusting the thermal capacitances and thermal conductivities according to the volume fraction of water present in the local pores. Program capabilities, user instructions and a sample problem are presented in this manual.

  17. Assignment Of Finite Elements To Parallel Processors

    NASA Technical Reports Server (NTRS)

    Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.

    1990-01-01

    Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.
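    A hedged sketch of a simulated-annealing element-to-processor mapping in the spirit of the algorithm described; the cost model (load balance plus cut edges) and the cooling schedule are illustrative assumptions, not the published method:

    ```python
    # Simulated-annealing assignment of elements to processors; the cost
    # penalizes the most-loaded processor and cross-processor element edges.
    import math, random

    def anneal_mapping(n_elems, n_procs, neighbors, steps=20000, t0=1.0):
        assign = [random.randrange(n_procs) for _ in range(n_elems)]

        def cost(a):
            loads = [a.count(p) for p in range(n_procs)]        # compute balance
            cut = sum(1 for i, j in neighbors if a[i] != a[j])  # communication
            return max(loads) + 0.5 * cut

        c = cost(assign)
        for step in range(steps):
            t = t0 * (1.0 - step / steps) + 1e-6                # linear cooling
            i = random.randrange(n_elems)
            old = assign[i]
            assign[i] = random.randrange(n_procs)
            c_new = cost(assign)
            if c_new > c and random.random() > math.exp((c - c_new) / t):
                assign[i] = old                                 # reject worse move
            else:
                c = c_new                                       # accept move
        return assign

    # Four elements in a chain, mapped onto two processors
    print(anneal_mapping(4, 2, [(0, 1), (1, 2), (2, 3)]))
    ```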

  18. Exercises in molecular computing.

    PubMed

    Stojanovic, Milan N; Stefanovic, Darko; Rudchenko, Sergei

    2014-06-17

    CONSPECTUS: The successes of electronic digital logic have transformed every aspect of human life over the last half-century. The word "computer" now signifies a ubiquitous electronic device, rather than a human occupation. Yet evidently humans, large assemblies of molecules, can compute, and it has been a thrilling challenge to develop smaller, simpler, synthetic assemblies of molecules that can do useful computation. When we say that molecules compute, what we usually mean is that such molecules respond to certain inputs, for example, the presence or absence of other molecules, in a precisely defined but potentially complex fashion. The simplest way for a chemist to think about computing molecules is as sensors that can integrate the presence or absence of multiple analytes into a change in a single reporting property. Here we review several forms of molecular computing developed in our laboratories. When we began our work, combinatorial approaches to using DNA for computing were used to search for solutions to constraint satisfaction problems. We chose to work instead on logic circuits, building bottom-up from units based on catalytic nucleic acids, focusing on DNA secondary structures in the design of individual circuit elements, and reserving the combinatorial opportunities of DNA for the representation of multiple signals propagating in a large circuit. Such circuit design directly corresponds to the intuition about sensors transforming the detection of analytes into reporting properties. While this approach was unusual at the time, it has been adopted since by other groups working on biomolecular computing with different nucleic acid chemistries. We created logic gates by modularly combining deoxyribozymes (DNA-based enzymes cleaving or combining other oligonucleotides), in the role of reporting elements, with stem-loops as input detection elements. For instance, a deoxyribozyme that normally exhibits an oligonucleotide substrate recognition region is

  19. Micromagnetics on high-performance workstation and mobile computational platforms

    NASA Astrophysics Data System (ADS)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  20. Finite element analysis of high aspect ratio wind tunnel wing model: A parametric study

    NASA Astrophysics Data System (ADS)

    Rosly, N. A.; Harmin, M. Y.

    2017-12-01

    Procedure for designing the wind tunnel model of a high aspect ratio (HAR) wing containing geometric nonlinearities is described in this paper. The design process begins with identification of basic features of the HAR wing as well as its design constraints. This enables the design space to be narrowed down and consequently, brings ease of convergence towards the design solution. Parametric studies in terms of the spar thickness, the span length and the store diameter are performed using finite element analysis for both undeformed and deformed cases, which respectively demonstrate the linear and nonlinear conditions. Two main criteria are accounted for in the selection of the wing design: the static deflections due to gravitational loading should be within the allowable margin of the size of the wind tunnel test section and the flutter speed of the wing should be much below the maximum speed of the wind tunnel. The findings show that the wing experiences a stiffness hardening effect under the nonlinear static solution and the presence of the store enables significant reduction in linear flutter speed.

  1. Advances in Integrated Computational Materials Engineering "ICME"

    NASA Astrophysics Data System (ADS)

    Hirsch, Jürgen

    The methods of Integrated Computational Materials Engineering that were developed and successfully applied for Aluminium have been constantly improved. The main aspects and recent advances of integrated material and process modeling are simulations of material properties, like strength and forming properties, and of the specific microstructure evolution during processing (rolling, extrusion, annealing) under the influence of material constitution and process variations, through the production process down to the final application. Examples are discussed for the through-process simulation of microstructures and related properties of Aluminium sheet, including DC ingot casting, pre-heating and homogenization, hot and cold rolling, and final annealing. New results are included for the simulation of solution annealing and age hardening of 6xxx alloys for automotive applications. Physically based quantitative descriptions and computer-assisted evaluation methods are new ICME means of integrating simulation tools, also for customer applications like heat-affected zones in the welding of age-hardening alloys. Aspects of estimating the effect of specific elements due to growing recycling volumes, required also for high-end Aluminium products, are discussed as well, these being of special interest to the Aluminium-producing industries.

  2. Aspect-Oriented Subprogram Synthesizes UML Sequence Diagrams

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2006-01-01

    The Rational Sequence computer program described elsewhere includes a subprogram that utilizes the capability for aspect-oriented programming when that capability is present. This subprogram is denoted the Rational Sequence (AspectJ) component because it uses AspectJ, which is an extension of the Java programming language that introduces aspect-oriented programming techniques into the language.

  3. COMPUTING MEDIUM USING THRESHOLD ELEMENTS WITH COMBINED FUNCTIONS,

    DTIC Science & Technology

    ...by grouping a number of elements to perform the desired functions. In the pulsed threshold system, a refractory period exists during which the elements do not ... disjunction, storage, and interaction. The level threshold elements, on the other hand, do not have a refractory property, but they are nevertheless able to perform signal branching, NOR operation, negation, and storage. (Author)

  4. Affective Aspects of Perceived Loss of Control and Potential Implications for Brain-Computer Interfaces.

    PubMed

    Grissmann, Sebastian; Zander, Thorsten O; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter

    2017-01-01

    Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted mental processes can interfere with signals used for classification and thereby introduce changes in the signal properties which could potentially impede BCI classification performance. To improve BCI performance, we propose deploying an approach that potentially allows to describe different mental states that could influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics with source located outside but affecting the signal of interest originating from the primary motor areas, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates which were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios.

  5. Affective Aspects of Perceived Loss of Control and Potential Implications for Brain-Computer Interfaces

    PubMed Central

    Grissmann, Sebastian; Zander, Thorsten O.; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter

    2017-01-01

    Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted mental processes can interfere with signals used for classification and thereby introduce changes in the signal properties which could potentially impede BCI classification performance. To improve BCI performance, we propose deploying an approach that potentially allows to describe different mental states that could influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics with source located outside but affecting the signal of interest originating from the primary motor areas, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates which were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios. PMID:28769776

  6. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    NASA Astrophysics Data System (ADS)

    Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.

    2017-12-01

    In this article, the problem of supporting scientific projects throughout their lifecycle in a computer center is considered in each aspect of support. The Configuration Management system plays a connecting role in the processes related to the provision and support of computer-center services. In view of the strong integration of IT infrastructure components through the use of virtualization, control of the infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For every aspect of research-project support, the influence of the Configuration Management system is reviewed, and the development of the corresponding elements of the system is described in the present paper.

  7. Cortical Neural Computation by Discrete Results Hypothesis

    PubMed Central

    Castejon, Carlos; Nuñez, Angel

    2016-01-01

    One of the most challenging problems we face in neuroscience is to understand how the cortex performs computations. There is increasing evidence that the power of the cortical processing is produced by populations of neurons forming dynamic neuronal ensembles. Theoretical proposals and multineuronal experimental studies have revealed that ensembles of neurons can form emergent functional units. However, how these ensembles are implicated in cortical computations is still a mystery. Although cell ensembles have been associated with brain rhythms, the functional interaction remains largely unclear. It is still unknown how spatially distributed neuronal activity can be temporally integrated to contribute to cortical computations. A theoretical explanation integrating spatial and temporal aspects of cortical processing is still lacking. In this Hypothesis and Theory article, we propose a new functional theoretical framework to explain the computational roles of these ensembles in cortical processing. We suggest that complex neural computations underlying cortical processing could be temporally discrete and that sensory information would need to be quantized to be computed by the cerebral cortex. Accordingly, we propose that cortical processing is produced by the computation of discrete spatio-temporal functional units that we have called “Discrete Results” (Discrete Results Hypothesis). This hypothesis represents a novel functional mechanism by which information processing is computed in the cortex. Furthermore, we propose that precise dynamic sequences of “Discrete Results” is the mechanism used by the cortex to extract, code, memorize and transmit neural information. The novel “Discrete Results” concept has the ability to match the spatial and temporal aspects of cortical processing. We discuss the possible neural underpinnings of these functional computational units and describe the empirical evidence supporting our hypothesis. We propose that fast

  8. Energy Finite Element Analysis for Computing the High Frequency Vibration of the Aluminum Testbed Cylinder and Correlating the Results to Test Data

    NASA Technical Reports Server (NTRS)

    Vlahopoulos, Nickolas

    2005-01-01

    The Energy Finite Element Analysis (EFEA) is a finite element based computational method for high frequency vibration and acoustic analysis. The EFEA uses finite elements to solve the governing differential equations for energy variables. These equations are developed from wave equations. Recently, an EFEA method for computing the high frequency vibration of structures either in vacuum or in contact with a dense fluid has been presented. The presence of fluid loading has been considered through added mass and radiation damping. The EFEA developments were validated by comparing EFEA results to solutions obtained by very dense conventional finite element models and to solutions from classical techniques such as statistical energy analysis (SEA) and the modal decomposition method for bodies of revolution. EFEA results have also compared favorably with test data for the vibration and the radiated noise generated by a large scale submersible vehicle. The primary variable in EFEA is the energy density, time-averaged over a period and space-averaged over a wavelength. A joint matrix computed from the power transmission coefficients is utilized for coupling the energy density variables across any discontinuities, such as changes of plate thickness, plate/stiffener junctions, etc. When considering the high frequency vibration of a periodically stiffened plate or cylinder, the flexural wavelength is smaller than the interval length between two periodic stiffeners; therefore, the stiffener stiffness cannot be smeared by computing an equivalent rigidity for the plate or cylinder. The periodic stiffeners must be regarded as coupling components between periodic units. In this paper, Periodic Structure (PS) theory is utilized for computing the coupling joint matrix and for accounting for the periodicity characteristics.

  9. Rectenna session: Micro aspects

    NASA Technical Reports Server (NTRS)

    Gutmann, R. J.

    1980-01-01

    Two micro aspects of rectenna design are discussed: evaluation of the degradation in net rectenna RF-to-DC conversion efficiency due to power density variations across the rectenna (power combining analysis), and design of Yagi-Uda receiving elements to reduce rectenna cost by decreasing the number of conversion circuits (directional receiving elements). The first of these involves resolving a fundamental question about the efficiency potential of a rectenna, while the second involves a design modification with a large potential cost saving.

  10. Automatic finite element generators

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1984-01-01

    The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.
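
    The workflow described here, symbolic derivation of element matrices followed by automatic FORTRAN generation, can be sketched with present-day tools. The following minimal example uses Python's sympy as an illustrative stand-in for the system described in the record, deriving the stiffness matrix of a 1-D linear bar element and emitting Fortran for one entry:

    ```python
    # Symbolically derive the strain-displacement matrix B and the element
    # stiffness matrix K for a 1-D linear bar element, then generate Fortran
    # automatically. Illustrative analogue only, not the paper's system.
    import sympy as sp
    from sympy.utilities.codegen import codegen

    x, L, E, A = sp.symbols('x L E A', positive=True)

    N = sp.Matrix([[1 - x / L, x / L]])   # linear shape functions on [0, L]
    B = N.diff(x)                         # strain-displacement matrix

    # K = integral over the element of B^T * (E*A) * B
    K = sp.integrate(E * A * B.T * B, (x, 0, L))
    print(K)                              # Matrix([[A*E/L, -A*E/L], [-A*E/L, A*E/L]])

    # Automatic Fortran code generation for a representative entry
    files = codegen(('k11', K[0, 0]), language='F95')
    print(files[0][1])                    # the generated .f90 source
    ```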

  11. Three-dimensional computation of laser cavity eigenmodes by the use of finite element analysis (FEA)

    NASA Astrophysics Data System (ADS)

    Altmann, Konrad; Pflaum, Christoph; Seider, David

    2004-06-01

    A new method for computing eigenmodes of a laser resonator by the use of finite element analysis (FEA) is presented. For this purpose, the scalar wave equation [Δ + k²]E(x,y,z) = 0 is transformed into a solvable 3D eigenvalue problem by separating out the propagation factor exp(-ikz) from the phasor amplitude E(x,y,z) of the time-harmonic electrical field. For standing wave resonators, the beam inside the cavity is represented by a two-wave ansatz. For cavities with parabolic optical elements the new approach has successfully been verified by the use of the Gaussian mode algorithm. For a DPSSL with a thermally lensing crystal inside the cavity the expected deviation between Gaussian approximation and numerical solution could be demonstrated clearly.
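
    The separation step can be reconstructed as follows; this is a sketch under the stated scalar approximation (the two-wave ansatz for standing-wave resonators adds a counter-propagating term):

    ```latex
    % Substituting E(x,y,z) = u(x,y,z) e^{-ikz} into the scalar wave equation:
    \[
      (\Delta + k^2)\bigl(u\,e^{-ikz}\bigr)
        = e^{-ikz}\Bigl(\Delta u - 2ik\,\tfrac{\partial u}{\partial z}
                        - k^2 u + k^2 u\Bigr) = 0
      \;\Longrightarrow\;
      \Delta u - 2ik\,\frac{\partial u}{\partial z} = 0,
    \]
    % which, together with the resonator boundary conditions, becomes a
    % solvable 3-D eigenvalue problem for the slowly varying amplitude u.
    ```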

  12. Self-Consistent Large-Scale Magnetosphere-Ionosphere Coupling: Computational Aspects and Experiments

    NASA Technical Reports Server (NTRS)

    Newman, Timothy S.

    2003-01-01

    Both external and internal phenomena impact the terrestrial magnetosphere. For example, solar wind and particle precipitation affect the distribution of hot plasma in the magnetosphere. Numerous models exist to describe different aspects of magnetosphere characteristics. For example, Tsyganenko has developed a series of models (e.g., [TSYG89]) that describe the magnetic field, and Stern [STER75] and Volland [VOLL73] have developed an analytical model that describes the convection electric field. Over the past several years, NASA colleague Khazanov, working with Fok and others, has developed a large-scale coupled model that tracks particle flow to determine hot ion and electron phase space densities in the magnetosphere. This model utilizes external data such as solar wind densities and velocities and geomagnetic indices (e.g., Kp) to drive computational processes that evaluate magnetic field, electric field, and plasma sheet models at any time point. These models are coupled such that energetic ion and electron fluxes are produced, with those fluxes capable of interacting with the electric field model. A diagrammatic representation of the coupled model is shown.

  13. Exercises in Molecular Computing

    PubMed Central

    2014-01-01

    Conspectus The successes of electronic digital logic have transformed every aspect of human life over the last half-century. The word “computer” now signifies a ubiquitous electronic device, rather than a human occupation. Yet evidently humans, large assemblies of molecules, can compute, and it has been a thrilling challenge to develop smaller, simpler, synthetic assemblies of molecules that can do useful computation. When we say that molecules compute, what we usually mean is that such molecules respond to certain inputs, for example, the presence or absence of other molecules, in a precisely defined but potentially complex fashion. The simplest way for a chemist to think about computing molecules is as sensors that can integrate the presence or absence of multiple analytes into a change in a single reporting property. Here we review several forms of molecular computing developed in our laboratories. When we began our work, combinatorial approaches to using DNA for computing were used to search for solutions to constraint satisfaction problems. We chose to work instead on logic circuits, building bottom-up from units based on catalytic nucleic acids, focusing on DNA secondary structures in the design of individual circuit elements, and reserving the combinatorial opportunities of DNA for the representation of multiple signals propagating in a large circuit. Such circuit design directly corresponds to the intuition about sensors transforming the detection of analytes into reporting properties. While this approach was unusual at the time, it has been adopted since by other groups working on biomolecular computing with different nucleic acid chemistries. We created logic gates by modularly combining deoxyribozymes (DNA-based enzymes cleaving or combining other oligonucleotides), in the role of reporting elements, with stem–loops as input detection elements. For instance, a deoxyribozyme that normally exhibits an oligonucleotide substrate recognition region is

  14. On numerically accurate finite element solutions in the fully plastic range

    NASA Technical Reports Server (NTRS)

    Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.

    1974-01-01

    A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double-edge-cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are also discussed.

  15. Adaptive implicit-explicit and parallel element-by-element iteration schemes

    NASA Technical Reports Server (NTRS)

    Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.

    1989-01-01

    Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation clearer, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests demonstrate the savings in CPU time and memory.
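
    The static GEBE arrangement can be illustrated with a greedy grouping in which no two elements of a group share a node, so all elements of one group can be processed in parallel. A minimal sketch in Python, with a hypothetical four-element mesh:

    ```python
    # Greedy GEBE-style grouping: place each element in the first group whose
    # elements share none of its nodes; within a group there is no
    # inter-element coupling, so the group can be processed in parallel.
    def group_elements(elements):
        """elements: list of node-index tuples, e.g. [(0, 1), (1, 2), ...]"""
        groups = []                              # (nodes_used, element_ids)
        for eid, nodes in enumerate(elements):
            nodes = set(nodes)
            for used, members in groups:
                if used.isdisjoint(nodes):       # no shared node: same group OK
                    used |= nodes
                    members.append(eid)
                    break
            else:
                groups.append((nodes, [eid]))
        return [members for _, members in groups]

    # Four bar elements in a chain: neighbours share a node and alternate groups.
    print(group_elements([(0, 1), (1, 2), (2, 3), (3, 4)]))   # [[0, 2], [1, 3]]
    ```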

  16. Element distinctness revisited

    NASA Astrophysics Data System (ADS)

    Portugal, Renato

    2018-07-01

    The element distinctness problem is the problem of determining whether the elements of a list are distinct; that is, if x = (x_1, ..., x_N) is a list with N elements, we ask whether the elements of x are distinct or not. The solution on a classical computer requires N queries because it uses sorting to check whether there are equal elements. In the quantum case, it is possible to solve the problem in O(N^{2/3}) queries. There is an extension which asks whether there are k colliding elements, known as the element k-distinctness problem. This work obtains optimal values of two critical parameters of Ambainis' seminal quantum algorithm (SIAM J Comput 37(1):210-239, 2007). The first critical parameter is the number of repetitions of the algorithm's main block, which inverts the phase of the marked elements and calls a subroutine. The second parameter is the number of quantum walk steps interlaced by oracle queries. We show that, when the optimal values of the parameters are used, the algorithm's success probability is 1 - O(N^{-1/(k+1)}), quickly approaching 1. The specification of the exact running time and success probability is important in practical applications of this algorithm.
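
    For contrast with the quantum query counts quoted above, the classical sorting-based check is only a few lines (a minimal sketch):

    ```python
    # Classical element distinctness: sort, then scan adjacent pairs. All N
    # elements must be queried and sorting costs O(N log N) comparisons; the
    # quantum walk algorithm needs only O(N^{2/3}) queries.
    from collections import Counter

    def distinct(x):
        xs = sorted(x)
        return all(a != b for a, b in zip(xs, xs[1:]))

    def k_colliding(x, k):
        # element k-distinctness: do at least k entries share one value?
        return any(c >= k for c in Counter(x).values())

    print(distinct([3, 1, 4, 1, 5]))         # False: the value 1 collides
    print(k_colliding([3, 1, 4, 1, 5], 2))   # True
    ```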

  17. Computer assisted generation of the matrix elements between contracted wavefunctions in a Complete Active Space scheme

    NASA Astrophysics Data System (ADS)

    Angeli, C.; Cimiraglia, R.

    2005-02-01

    Starting from a CAS-SCF calculation, a sequence of contracted functions can be generated by applying strings of spin-traced replacement operators to the CAS-SCF solution. The laborious task of producing the Hamiltonian matrix elements between such functions can be substantially reduced by making use of a computer algebra system. An implementation employing the MuPAD system is presented and illustrated.

  18. Non-conforming finite-element formulation for cardiac electrophysiology: an effective approach to reduce the computation time of heart simulations without compromising accuracy

    NASA Astrophysics Data System (ADS)

    Hurtado, Daniel E.; Rojas, Guillermo

    2018-04-01

    Computer simulations constitute a powerful tool for studying the electrical activity of the human heart, but computational effort remains prohibitively high. In order to recover accurate conduction velocities and wavefront shapes, the mesh size in linear element (Q1) formulations cannot exceed 0.1 mm. Here we propose a novel non-conforming finite-element formulation for the non-linear cardiac electrophysiology problem that results in accurate wavefront shapes and lower mesh dependence of the conduction velocity, while retaining the same number of global degrees of freedom as Q1 formulations. As a result, coarser discretizations of cardiac domains can be employed in simulations without significant loss of accuracy, thus reducing the overall computational effort. We demonstrate the applicability of our formulation in biventricular simulations using a coarse mesh size of ~1 mm, and show that the activation wave pattern closely follows that obtained in fine-mesh simulations at a fraction of the computation time, thus improving the accuracy-efficiency trade-off of cardiac simulations.

  19. Software Aspects of IEEE Floating-Point Computations for Numerical Applications in High Energy Physics

    ScienceCinema

    Arnold, Jeffrey

    2018-05-14

    Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
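
    One of the summation-accuracy techniques alluded to in such talks is compensated (Kahan) summation, which carries the rounding error of each addition forward in a correction term. A minimal sketch:

    ```python
    # Kahan (compensated) summation: c captures the low-order bits lost when
    # a small term is added to a large partial sum.
    def kahan_sum(values):
        s, c = 0.0, 0.0
        for v in values:
            y = v - c          # apply the correction from the previous step
            t = s + y          # big + small: low-order bits of y are lost...
            c = (t - s) - y    # ...and recovered here
            s = t
        return s

    vals = [1.0] + [1e-16] * 10**6   # exact sum: 1.0000000001
    print(sum(vals))                 # 1.0 -- every tiny term is absorbed
    print(kahan_sum(vals))           # ~1.0000000001
    ```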

  1. Computational design of low aspect ratio wing-winglet configurations for transonic wind-tunnel tests

    NASA Technical Reports Server (NTRS)

    Kuhlman, John M.; Brown, Christopher K.

    1989-01-01

    Computational designs were performed for three different low aspect ratio wing planforms fitted with nonplanar winglets; one of the three configurations was selected to be constructed as a wind tunnel model for testing in the NASA LaRC 8-foot transonic pressure tunnel. A design point of M = 0.8, C_L ≈ 0.3 was selected, for wings of aspect ratio equal to 2.2 and leading edge sweep angles of 45 deg and 50 deg. Winglet length is 15 percent of the wing semispan, with a cant angle of 15 deg and a leading edge sweep of 50 deg. Winglet total area equals 2.25 percent of the wing reference area. The design process and the predicted transonic performance are summarized for each configuration. In addition, a companion low-speed design study was conducted, using one of the transonic design wing-winglet planforms but with different camber and thickness distributions. A low-speed wind tunnel model was constructed to match this low-speed design geometry, and force coefficient data were obtained for the model at speeds of 100 to 150 ft/sec. Measured drag coefficient reductions were of the same order of magnitude as those predicted by numerical subsonic performance predictions.

  2. Modeling Submarine Lava Flow with ASPECT

    NASA Astrophysics Data System (ADS)

    Storvick, E. R.; Lu, H.; Choi, E.

    2017-12-01

    Submarine lava flow is not easily observed and experimented on due to limited accessibility and the challenges posed by the fast solidification of lava and the associated drastic changes in rheology. However, recent advances in numerical modeling techniques might address some of these challenges and provide unprecedented insight into the mechanics of submarine lava flow and the conditions determining its wide-ranging morphologies. In this study, we explore the applicability of ASPECT, the Advanced Solver for Problems in Earth's ConvecTion, to submarine lava flow. ASPECT is a parallel finite element code that solves problems of thermal convection in the Earth's mantle. We will assess ASPECT's capability to model submarine lava flow by comparing models of lava flow morphology simulated with GALE, a long-term tectonics finite element analysis code, against models created using comparable settings and parameters in ASPECT. From these comparisons we will contrast the differing models in order to identify the benefits of each code. While doing so, we anticipate we will learn about the conditions required for end-members of lava flow morphology, for example, pillows and sheet flows. With ASPECT specifically we focus on 1) whether the lava rheology can be implemented; 2) how effective the adaptive mesh refinement (AMR) is in resolving morphologies of the solidified crust; and 3) whether and under what conditions the end-members of the lava flow morphologies, pillows and sheets, can be reproduced.

  3. Computer program for determination of concentrations of trace elements in components of water systems by nondestructive activation analysis (in German)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slavic, I.; Draskovic, R.; Tasovac, T.

    1973-03-01

    A computer program for the determination of trace elements in components of water systems (bed material, suspended material, dissolved substances, plankton, algae) by nondestructive activation analysis was developed. Results of the determination of Cr, Sb, Sc, Fe, Co, Na, and La concentrations in suspended materials from the Danube river, obtained by interpretation of data with a CDC-3600 computer (64 k words), are presented. (auth)

  4. Finite element computation of a viscous compressible free shear flow governed by the time dependent Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.; Blanchard, D. K.

    1975-01-01

    A finite element algorithm for the solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing time transition from step to step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.

  5. Inelastic strain analogy for piecewise linear computation of creep residues in built-up structures

    NASA Technical Reports Server (NTRS)

    Jenkins, Jerald M.

    1987-01-01

    An analogy between inelastic strains caused by temperature and those caused by creep is presented in terms of isotropic elasticity. It is shown how the theoretical aspects can be blended with existing finite-element computer programs to exact a piecewise linear solution. The creep effect is determined by using the thermal stress computational approach, if appropriate alterations are made to the thermal expansion of the individual elements. The overall transient solution is achieved by consecutive piecewise linear iterations. The total residue caused by creep is obtained by accumulating creep residues for each iteration and then resubmitting the total residues for each element as an equivalent input. A typical creep law is tested for incremental time convergence. The results indicate that the approach is practical, with a valid indication of the extent of creep after approximately 20 hr of incremental time. The general analogy between body forces and inelastic strain gradients is discussed with respect to how an inelastic problem can be worked as an elastic problem.
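
    The iteration described above can be summarized schematically in Python. Here solve_thermal_stress and creep_law are hypothetical placeholders for the existing finite-element program and the chosen creep law, so this is a sketch of the procedure, not an implementation of it:

    ```python
    # Piecewise-linear creep iteration: at each increment, solve an elastic
    # thermal-stress problem with the accumulated creep strain of each element
    # applied as an equivalent thermal expansion, then grow the residues.
    def creep_analysis(elements, dt, n_steps, solve_thermal_stress, creep_law):
        total_residue = {e: 0.0 for e in elements}   # accumulated creep strain
        history = []
        for step in range(n_steps):
            # elastic solve, creep residues entering as equivalent expansions
            stress = solve_thermal_stress(equivalent_expansion=total_residue)
            for e in elements:                       # piecewise-linear update
                total_residue[e] += creep_law(stress[e]) * dt
            history.append(dict(stress))
        return history
    ```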

  6. Computational fluid dynamics analysis of SSME phase 2 and phase 2+ preburner injector element hydrogen flow paths

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph H.

    1992-01-01

    Phase 2+ Space Shuttle Main Engine powerheads E0209 and E0215 degraded their main combustion chamber (MCC) liners at a faster rate than is normal for phase 2 powerheads. One possible cause of the accelerated degradation was a reduction of coolant flow through the MCC. Hardware changes were made to the preburner fuel leg which may have reduced its resistance and, therefore, pulled some of the hydrogen from the MCC coolant leg. A computational fluid dynamics (CFD) analysis was performed to determine the hydrogen flow path resistances of the phase 2+ fuel preburner injector elements relative to the phase 2 element. FDNS was implemented on axisymmetric grids with the hydrogen assumed to be incompressible. The analysis was performed in two steps: the first isolated the effect of the different inlet areas and the second modeled the entire injector element hydrogen flow path.

  7. A Computational and Experimental Study of Nonlinear Aspects of Induced Drag

    NASA Technical Reports Server (NTRS)

    Smith, Stephen C.

    1996-01-01

    Despite the 80-year history of classical wing theory, considerable research has recently been directed toward planform and wake effects on induced drag. Nonlinear interactions between the trailing wake and the wing offer the possibility of reducing drag. The nonlinear effect of compressibility on induced drag characteristics may also influence wing design. This thesis deals with the prediction of these nonlinear aspects of induced drag and ways to exploit them. A potential benefit of only a few percent of the drag represents a large fuel savings for the world's commercial transport fleet. Computational methods must be applied carefully to obtain accurate induced drag predictions. Trefftz-plane drag integration is far more reliable than surface pressure integration, but is very sensitive to the accuracy of the force-free wake model. The practical use of Trefftz plane drag integration was extended to transonic flow with the Tranair full-potential code. The induced drag characteristics of a typical transport wing were studied with Tranair, a full-potential method, and A502, a high-order linear panel method to investigate changes in lift distribution and span efficiency due to compressibility. Modeling the force-free wake is a nonlinear problem, even when the flow governing equation is linear. A novel method was developed for computing the force-free wake shape. This hybrid wake-relaxation scheme couples the well-behaved nature of the discrete vortex wake with viscous-core modeling and the high-accuracy velocity prediction of the high-order panel method. The hybrid scheme produced converged wake shapes that allowed accurate Trefftz-plane integration. An unusual split-tip wing concept was studied for exploiting nonlinear wake interaction to reduced induced drag. This design exhibits significant nonlinear interactions between the wing and wake that produced a 12% reduction in induced drag compared to an equivalent elliptical wing at a lift coefficient of 0.7. The

  8. Precollege Computer Literacy: A Personal Computing Approach. Second Edition.

    ERIC Educational Resources Information Center

    Moursund, David

    Intended for elementary and secondary teachers and curriculum specialists, this booklet discusses and defines computer literacy as a functional knowledge of computers and their effects on students and the rest of society. It analyzes personal computing and the aspects of computers that have direct impact on students. Outlining computer-assisted…

  9. Methods and computer executable instructions for rapidly calculating simulated particle transport through geometrically modeled treatment volumes having uniform volume elements for use in radiotherapy

    DOEpatents

    Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.

    2001-01-16

    Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time", which especially enhances clinical use for in vivo applications. The real-time is achieved because of the novel geometric model constructed for the planned treatment volume which, in turn, allows for rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of neutrons emanating from a neutron source during BNCT. In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume of the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element of the uniform volume elements. The particle movement along the particle track is effectuated in integer-based increments along the primary direction of movement until a position of intersection occurs that represents a condition where the anatomical material of the next element is substantially different from the anatomical material of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered or exited from the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy. The foregoing represents an advance in computational times by multiple factors of
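
    The integer-based marching through uniform volume elements can be illustrated with a small sketch; this is a generic analogue of the idea, not the patented method, and all names and parameters are invented:

    ```python
    # March a particle through a uniform voxel grid in integer sub-voxel
    # increments; stop at the first voxel whose material id differs from the
    # starting voxel's (illustrative analogue only).
    import numpy as np

    def find_material_boundary(grid, start, direction, n_sub=8):
        """grid: 3-D array of material ids; start: voxel indices; direction:
        unit vector. Positions live on an integer lattice with n_sub steps
        per voxel edge, so every increment is an exact integer."""
        pos = np.asarray(start) * n_sub + n_sub // 2      # voxel centre
        step = np.rint(np.asarray(direction, float) * n_sub).astype(int)
        material = grid[tuple(start)]
        while True:
            pos = pos + step
            idx = tuple(int(i) for i in pos // n_sub)
            if any(i < 0 or i >= n for i, n in zip(idx, grid.shape)):
                return None                               # particle exited grid
            if grid[idx] != material:
                return idx                                # first boundary voxel

    tissue = np.zeros((8, 8, 8), dtype=int)
    tissue[4:, :, :] = 1                                  # two materials
    print(find_material_boundary(tissue, (1, 3, 3), (1.0, 0.0, 0.0)))   # (4, 3, 3)
    ```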

  10. An a-posteriori finite element error estimator for adaptive grid computation of viscous incompressible flows

    NASA Astrophysics Data System (ADS)

    Wu, Heng

    2000-10-01

    In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator e_θ, which is based on the spatial derivative of the velocity direction field, is designed and constructed. The a-posteriori error estimator corresponds to the antisymmetric part of the deformation-rate tensor, and it is sensitive to the second derivative of the velocity angle field. Rationality discussions reveal that the velocity angle error estimator is a curvature error estimator, and its value reflects the accuracy of streamline curves. It is also found that the velocity angle error estimator contains the nonlinear convective term of the Navier-Stokes equations, and it identifies and computes the direction difference when the convective acceleration direction and the flow velocity direction have a disparity. Through benchmarking computed variables with the analytic solution of Kovasznay flow or the finest grid of cavity flow, it is demonstrated that the velocity angle error estimator has a better performance than the strain error estimator. The benchmarking work also shows that the computed profile obtained by using e_θ can achieve the best matching outcome with the true θ field, and that it is asymptotic to the true θ variation field, with a promise of fewer unknowns. Unstructured grids are adapted by employing local cell division as well as unrefinement of transition cells. Using element class and node class can efficiently construct a hierarchical data structure which provides cell and node inter-reference at each adaptive level. Employing element pointers and node pointers can dynamically maintain the connection of adjacent elements and adjacent nodes, and thus avoids time-consuming search processes. The adaptive scheme is applied to viscous incompressible flow at different
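
    The quantity the estimator is built on, the spatial derivative of the velocity direction field θ = atan2(v, u), can be sketched numerically. The identity ∇θ = (u∇v - v∇u)/(u² + v²) avoids the atan2 branch cut; this is illustrative only, since the estimator described above also involves second derivatives of θ:

    ```python
    # Gradient magnitude of the velocity direction field as a refinement
    # indicator: large near vortices and separation, where streamline
    # curvature is high. Hypothetical discrete fields; illustration only.
    import numpy as np

    def angle_indicator(u, v, dx, dy):
        uy, ux = np.gradient(u, dy, dx)    # d/dy along axis 0, d/dx along axis 1
        vy, vx = np.gradient(v, dy, dx)
        q = u * u + v * v + 1e-30          # guard against stagnation points
        return np.hypot((u * vx - v * ux) / q, (u * vy - v * uy) / q)

    # Rigid vortex u = -y, v = x: the indicator equals 1/r, peaking at the
    # core, which an adaptive scheme would flag for refinement.
    x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
    e = angle_indicator(-y, x, 2 / 63, 2 / 63)
    print(np.unravel_index(e.argmax(), e.shape))   # grid point nearest the core
    ```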

  11. State-Transition Structures in Physics and in Computation

    NASA Astrophysics Data System (ADS)

    Petri, C. A.

    1982-12-01

    In order to establish close connections between physical and computational processes, it is assumed that the concepts of “state” and of “transition” are acceptable both to physicists and to computer scientists, at least in an informal way. The aim of this paper is to propose formal definitions of state and transition elements on the basis of very low level physical concepts in such a way that (1) all physically possible computations can be described as embedded in physical processes; (2) the computational aspects of physical processes can be described on a well-defined level of abstraction; (3) the gulf between the continuous models of physics and the discrete models of computer science can be bridged by simple mathematical constructs which may be given a physical interpretation; (4) a combinatorial, nonstatistical definition of “information” can be given on low levels of abstraction which may serve as a basis to derive higher-level concepts of information, e.g., by a statistical or probabilistic approach. Conceivable practical consequences are discussed.
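
    The state/transition vocabulary can be made concrete with a minimal net-like sketch: places carry tokens (local state elements), and a transition fires when every one of its input places is marked. This toy is only an illustration, not the paper's formal construction:

    ```python
    # Minimal state-transition net: a transition (inputs, outputs) is enabled
    # when all input places hold a token; firing consumes and produces tokens.
    def fire(marking, transition):
        inputs, outputs = transition
        if all(marking.get(p, 0) > 0 for p in inputs):
            new = dict(marking)
            for p in inputs:
                new[p] -= 1
            for p in outputs:
                new[p] = new.get(p, 0) + 1
            return new
        return None                        # transition not enabled

    # A two-place toggle: t1 moves the token from 'off' to 'on', t2 back.
    t1, t2 = (('off',), ('on',)), (('on',), ('off',))
    m = fire({'off': 1, 'on': 0}, t1)
    print(m)                               # {'off': 0, 'on': 1}
    print(fire(m, t1))                     # None: 'off' is empty, t1 disabled
    ```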

  12. Computational structures technology and UVA Center for CST

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1992-01-01

    Rapid advances in computer hardware have had a profound effect on various engineering and mechanics disciplines, including the materials, structures, and dynamics disciplines. A new technology, computational structures technology (CST), has recently emerged as an insightful blend between material modeling, structural and dynamic analysis and synthesis on the one hand, and other disciplines such as computer science, numerical analysis, and approximation theory, on the other hand. CST is an outgrowth of finite element methods developed over the last three decades. The focus of this presentation is on some aspects of CST which can impact future airframes and propulsion systems, as well as on the newly established University of Virginia (UVA) Center for CST. The background and goals for CST are described along with the motivations for developing CST, and a brief discussion is made on computational material modeling. We look at the future in terms of technical needs, computing environment, and research directions. The newly established UVA Center for CST is described. One of the research projects of the Center is described, and a brief summary of the presentation is given.

  13. Therapeutic, Molecular and Computational Aspects of Novel Monoamine Oxidase (MAO) Inhibitors.

    PubMed

    Ramesh, Muthusamy; Dokurugu, Yussif M; Thompson, Michael D; Soliman, Mahmoud E

    2017-01-01

    Background: Due to the limited number of MAO inhibitors in the clinic, several research efforts are aimed at the discovery of novel MAO inhibitors. At present, high specificity and a reversible mode of inhibition of MAO-A/B are cited as desirable traits in the drug discovery process. This will help to reduce the probability of causing target disruption and may increase the duration of action of the drug. Most of the existing MAO inhibitors lead to side effects due to a lack of affinity and selectivity. Therefore, there is an urgent need to design novel, potent, reversible and selective inhibitors for MAO-A/B. Selective inhibition of MAO-A results in elevated levels of serotonin and noradrenaline; hence, MAO-A inhibitors can be used for improving the symptoms of depression. The selective MAO-B inhibitors are used with L-DOPA and/or dopamine agonists in the symptomatic treatment of Parkinson's disease. The present study describes recently developed hits of MAO inhibitors. At present, CADD techniques are gaining attention in the rational drug discovery of MAO inhibitors, and several research groups have employed CADD approaches on various chemical scaffolds to identify novel MAO inhibitors. These computational techniques have assisted in the development of lead molecules with improved pharmacodynamic/pharmacokinetic properties toward MAOs. Further, CADD techniques have provided a better understanding of the structural aspects of molecular targets and lead molecules. The present review describes the importance of structural features of potential chemical scaffolds as well as the role of computational approaches like ligand docking, molecular dynamics, QSAR and pharmacophore modeling in the development of novel MAO inhibitors.

  14. 49 CFR 236.526 - Roadway element not functioning properly.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element has...

  15. 49 CFR 236.526 - Roadway element not functioning properly.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element has...

  16. 49 CFR 236.526 - Roadway element not functioning properly.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element has...

  17. Computational structural mechanics: Engine structures computational simulator

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1989-01-01

    The Computational Structural Mechanics (CSM) program at Lewis encompasses: (1) fundamental aspects for formulating and solving structural mechanics problems, and (2) development of integrated software systems to computationally simulate the performance/durability/life of engine structures.

  18. TORO II: A finite element computer program for nonlinear quasi-static problems in electromagnetics: Part 1, Theoretical background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gartling, D.K.

    The theoretical and numerical background for the finite element computer program, TORO II, is presented in detail. TORO II is designed for the multi-dimensional analysis of nonlinear, electromagnetic field problems described by the quasi-static form of Maxwell's equations. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in TORO II are also outlined. Instructions for the use of the code are documented in SAND96-0903; examples of problems analyzed with the code are also provided in the user's manual. 24 refs., 8 figs., 1 tab.

  19. An Efficient Finite Element Framework to Assess Flexibility Performances of SMA Self-Expandable Carotid Artery Stents

    PubMed Central

    Ferraro, Mauro; Auricchio, Ferdinando; Boatti, Elisa; Scalet, Giulia; Conti, Michele; Morganti, Simone; Reali, Alessandro

    2015-01-01

    Computer-based simulations are nowadays widely exploited for the prediction of the mechanical behavior of different biomedical devices. In this context, structural finite element analyses (FEA) are currently the preferred computational tool to evaluate the stent response under bending. This work aims at developing a computational framework based on linear and higher order FEA to evaluate the flexibility of self-expandable carotid artery stents. In particular, numerical simulations involving large deformations and inelastic shape memory alloy constitutive modeling are performed, and the results suggest that the employment of higher order FEA allows the computational domain to be represented accurately and a better approximation of the solution to be obtained with a greatly reduced number of degrees of freedom with respect to linear FEA. Moreover, when buckling occurs, higher order FEA shows a superior capability of reproducing the associated nonlinear local effects. PMID:26184329

  20. Sustaining Open Source Communities through Hackathons - An Example from the ASPECT Community

    NASA Astrophysics Data System (ADS)

    Heister, T.; Hwang, L.; Bangerth, W.; Kellogg, L. H.

    2016-12-01

    The ecosystem surrounding a successful scientific open source software package combines both social and technical aspects. Much thought has been given to the technology side of writing sustainable software for large infrastructure projects and software libraries, but less to building the human capacity to perpetuate scientific software used in computational modeling. One effective format for building capacity is regular multi-day hackathons. Scientific hackathons bring together a group of science domain users and scientific software contributors to make progress on a specific software package. Innovation comes through the chance to work with established and new collaborations. Especially in domain sciences with small communities, hackathons give geographically distributed scientists an opportunity to connect face-to-face. They foster lively discussions amongst scientists with different expertise, promote new collaborations, and increase transparency in both the technical and scientific aspects of code development. ASPECT is an open source, parallel, extensible finite element code to simulate thermal convection that began development in 2011 under the Computational Infrastructure for Geodynamics. ASPECT hackathons for the past 3 years have grown the number of authors to >50, training new code maintainers in the process. Hackathons begin with leaders establishing project-specific conventions for development, demonstrating the workflow for code contributions, and reviewing relevant technical skills. Each hackathon expands the developer community. Over 20 scientists add >6,000 lines of code during the >1 week event. Participants grow comfortable contributing to the repository and over half continue to contribute afterwards. A high return rate of participants ensures continuity and stability of the group as well as mentoring for novice members. We hope to build other software communities on this model, but anticipate each will bring its own unique challenges.

  1. Detection of Chorus Elements and other Wave Signatures Using Geometric Computational Techniques in the Van Allen radiation belts

    NASA Astrophysics Data System (ADS)

    Sengupta, A.; Kletzing, C.; Howk, R.; Kurth, W. S.

    2017-12-01

    An important goal of the Van Allen Probes mission is to understand wave-particle interactions that can energize relativistic electrons in the Earth's Van Allen radiation belts. The EMFISIS instrumentation suite provides measurements of the wave electric and magnetic fields of wave features, such as chorus, that participate in these interactions. Geometric signal processing discovers structural relationships, e.g. connectivity across ridge-like features in chorus elements, to reveal properties such as the dominant angle of an element (frequency sweep rate) and the integrated power along a given chorus element. These techniques disambiguate such wave features against background hiss-like chorus. This enables autonomous discovery of chorus elements across the large volumes of EMFISIS data. At the scale of individual or overlapping chorus elements, topological pattern recognition techniques enable interpretation of chorus microstructure by discovering connectivity and other geometric features within the wave signature of a single chorus element or between overlapping chorus elements. Thus chorus wave features can be quantified and studied at multiple scales of spectral geometry using geometric signal processing techniques. We present recently developed computational techniques that exploit the spectral geometry of chorus elements and whistlers to enable large-scale automated discovery, detection and statistical analysis of these events over EMFISIS data. Specifically, we present case studies across a diverse portfolio of chorus elements and discuss the performance of our algorithms regarding precision of detection as well as interpretation of chorus microstructure. We also provide large-scale statistical analysis of the distribution of dominant sweep rates and other properties of the detected chorus elements.
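
    The dominant-angle (frequency sweep rate) extraction can be illustrated with a structure-tensor sketch on a spectrogram patch; this is a generic image-processing analogue, not the authors' algorithm:

    ```python
    # Estimate the dominant ridge orientation of a spectrogram patch from its
    # intensity structure tensor; scaled by the time and frequency bin sizes,
    # the ridge angle maps to a frequency sweep rate.
    import numpy as np

    def ridge_angle(patch):
        gy, gx = np.gradient(patch)
        jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
        # orientation of the dominant gradient, rotated 90 deg onto the ridge
        return 0.5 * np.arctan2(2 * jxy, jxx - jyy) + np.pi / 2

    rows, cols = np.mgrid[0:64, 0:64]
    ridge = np.exp(-0.5 * ((rows - cols) / 2.0) ** 2)   # synthetic 45-deg ridge
    print(np.degrees(ridge_angle(ridge)))               # ~45.0
    ```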

  2. Micromagnetic computer simulations of spin waves in nanometre-scale patterned magnetic elements

    NASA Astrophysics Data System (ADS)

    Kim, Sang-Koog

    2010-07-01

    Current needs for further advances in the nanotechnologies of information-storage and -processing devices have attracted a great deal of interest in spin (magnetization) dynamics in nanometre-scale patterned magnetic elements. For instance, the unique dynamic characteristics of non-uniform magnetic microstructures such as various types of domain walls, magnetic vortices and antivortices, as well as spin wave dynamics in laterally restricted thin-film geometries, have been at the centre of extensive and intensive research. Understanding the fundamentals of their unique spin structure as well as their robust and novel dynamic properties allows us to implement new functionalities into existing or future devices. Although experimental tools and theoretical approaches are effective means of understanding the fundamentals of spin dynamics and of gaining new insights into them, the limitations of those same tools and approaches have left gaps of unresolved questions in the pertinent physics. As an alternative, however, micromagnetic modelling and numerical simulation has recently emerged as a powerful tool for the study of a variety of phenomena related to the spin dynamics of nanometre-scale magnetic elements. In this review paper, I summarize the recent results of simulations of the excitation and propagation and other novel wave characteristics of spin waves, highlighting how the micromagnetic computer simulation approach contributes to an understanding of spin dynamics of nanomagnetism and considering some of the merits of numerical simulation studies. Many examples of micromagnetic modelling for numerical calculations, employing various dimensions and shapes of patterned magnetic elements, are given. The current limitations of continuum micromagnetic modelling and of simulations based on the Landau-Lifshitz-Gilbert equation of motion of magnetization are also discussed, along with further research directions for spin-wave studies.
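
    The Landau-Lifshitz-Gilbert (LLG) equation mentioned above can be sketched for a single macrospin in dimensionless form; real micromagnetic codes discretize space and use better integrators, so this explicit-Euler toy is illustrative only:

    ```python
    # Macrospin LLG dynamics: damped precession of the unit magnetization m
    # about an effective field h, with Gilbert damping alpha.
    import numpy as np

    def llg_step(m, h, alpha, dt):
        mxh = np.cross(m, h)
        dmdt = -(mxh + alpha * np.cross(m, mxh)) / (1 + alpha * alpha)
        m = m + dt * dmdt
        return m / np.linalg.norm(m)   # |m| is conserved by LLG; re-enforce it

    m = np.array([1.0, 0.0, 0.01]); m /= np.linalg.norm(m)
    h = np.array([0.0, 0.0, 1.0])      # effective field along z
    for _ in range(20000):
        m = llg_step(m, h, alpha=0.1, dt=0.01)
    print(m)                           # m has relaxed toward z: ~[0, 0, 1]
    ```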

  3. SCEAPI: A unified Restful Web API for High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need a high-quality programming interface for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. We then present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with the HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer and job management for creating, submitting and monitoring jobs, and show how to use SCEAPI in an easy way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution for extending opportunistic HPC resources.
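
    The RESTful pattern the abstract describes (authenticate, transfer files, create and monitor jobs) looks roughly as follows. Every endpoint path, field name and credential below is hypothetical, invented for illustration; consult the SCEAPI documentation for the real interface:

    ```python
    # Hypothetical sketch of a RESTful HPC workflow: authenticate, upload an
    # input file, submit a job, poll until it finishes.
    import time
    import requests

    BASE = "https://sceapi.example.org/api/v1"   # hypothetical base URL

    token = requests.post(f"{BASE}/auth",
                          json={"user": "alice", "key": "..."}).json()["token"]
    hdrs = {"Authorization": f"Bearer {token}"}

    with open("input.dat", "rb") as f:           # file transfer
        requests.post(f"{BASE}/files/input.dat", headers=hdrs, data=f)

    job = requests.post(f"{BASE}/jobs", headers=hdrs, json={
        "app": "atlas-sim", "cores": 64, "inputs": ["input.dat"],
    }).json()

    while True:                                  # job monitoring
        state = requests.get(f"{BASE}/jobs/{job['id']}",
                             headers=hdrs).json()["state"]
        if state in ("FINISHED", "FAILED"):
            break
        time.sleep(30)
    print(state)
    ```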

  4. Towards aspect-oriented functional–structural plant modelling.

    PubMed

    Cieslak, Mikolaj; Seleznyova, Alla N; Prusinkiewicz, Przemyslaw; Hanan, Jim

    2011-10-01

    Functional-structural plant models (FSPMs) are used to integrate knowledge and test hypotheses of plant behaviour, and to aid in the development of decision support systems. A significant amount of effort is being put into providing a sound methodology for building them. Standard techniques, such as procedural or object-oriented programming, are not suited for clearly separating aspects of plant function that criss-cross between different components of plant structure, which makes it difficult to reuse and share their implementations. The aim of this paper is to present an aspect-oriented programming approach that helps to overcome this difficulty. The L-system-based plant modelling language L+C was used to develop an aspect-oriented approach to plant modelling based on multi-modules. Each element of the plant structure was represented by a sequence of L-system modules (rather than a single module), with each module representing an aspect of the element's function. Separate sets of productions were used for modelling each aspect, with context-sensitive rules facilitated by local lists of modules to consider/ignore. Aspect weaving or communication between aspects was made possible through the use of pseudo-L-systems, where the strict-predecessor of a production rule was specified as a multi-module. The new approach was used to integrate previously modelled aspects of carbon dynamics, apical dominance and biomechanics with a model of a developing kiwifruit shoot. These aspects were specified independently and their implementation was based on source code provided by the original authors without major changes. This new aspect-oriented approach to plant modelling is well suited for studying complex phenomena in plant science, because it can be used to integrate separate models of individual aspects of plant development and function, both previously constructed and new, into clearly organized, comprehensive FSPMs. In a future work, this approach could be further
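
    The multi-module idea can be caricatured in a few lines of Python: each plant element is a list of aspect modules, each aspect updates only its own module, and "weaving" happens where one aspect reads another's module. The rules below are invented for illustration; the paper's models are written in the L-system language L+C, not Python:

    ```python
    # Toy multi-module plant: one element = [Structure, Carbon]; each aspect
    # has its own production rules operating on its module type.
    from dataclasses import dataclass

    @dataclass
    class Structure: age: int
    @dataclass
    class Carbon:    reserve: float

    def grow(element):                   # structural aspect: ageing + branching
        element[0].age += 1
        if element[0].age == 3:          # rule: old enough -> produce a child
            return [element, [Structure(0), Carbon(0.5)]]
        return [element]

    def assimilate(element):             # carbon aspect, reading Structure (weaving)
        element[1].reserve += 0.3 - 0.05 * element[0].age

    plant = [[Structure(0), Carbon(1.0)]]
    for step in range(4):                # apply each aspect's productions in turn
        plant = [child for e in plant for child in grow(e)]
        for e in plant:
            assimilate(e)
    print(len(plant), [round(e[1].reserve, 2) for e in plant])   # 2 [1.7, 1.05]
    ```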

  5. Positive and Negative Aspects of the IWB and Tablet Computers in the First Grade of Primary School: A Multiple-Perspective Approach

    ERIC Educational Resources Information Center

    Fekonja-Peklaj, Urška; Marjanovic-Umek, Ljubica

    2015-01-01

    The aim of this qualitative study was to evaluate the positive and negative aspects of the interactive whiteboard (IWB) and tablet computers use in the first grade of primary school from the perspectives of three groups of evaluators, namely the teachers, the pupils and an independent observer. The sample included three first grade classes with…

  6. Some foundational aspects of quantum computers and quantum robots.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benioff, P.; Physics

    1998-01-01

    This paper addresses foundational issues related to quantum computing. The need for a universally valid theory such as quantum mechanics to describe to some extent its own validation is noted. This includes quantum mechanical descriptions of systems that do theoretical calculations (i.e. quantum computers) and systems that perform experiments. Quantum robots interacting with an environment are a small first step in this direction. Quantum robots are described here as mobile quantum systems with on-board quantum computers that interact with environments. Included are discussions on the carrying out of tasks and the division of tasks into computation and action phases. Specific models based on quantum Turing machines are described. Differences and similarities between quantum robots plus environments and quantum computers are discussed.

  7. Computer-aided design and computer science technology

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.; Voigt, S. J.

    1976-01-01

    A description is presented of computer-aided design requirements and the resulting computer science advances needed to support aerospace design. The aerospace design environment is examined, taking into account problems of data handling and aspects of computer hardware and software. The interactive terminal is normally the primary interface between the computer system and the engineering designer. Attention is given to user aids, interactive design, interactive computations, the characteristics of design information, data management requirements, hardware advancements, and computer science developments.

  8. SMPBS: Web server for computing biomolecular electrostatics using finite element solvers of size modified Poisson-Boltzmann equation.

    PubMed

    Xie, Yang; Ying, Jinyong; Xie, Dexuan

    2017-03-30

    SMPBS (Size Modified Poisson-Boltzmann Solvers) is a web server for computing biomolecular electrostatics using finite element solvers of the size modified Poisson-Boltzmann equation (SMPBE). SMPBE not only reflects ionic size effects but also includes the classic Poisson-Boltzmann equation (PBE) as a special case. Thus, its web server is expected to have a broader range of applications than a PBE web server. SMPBS is designed with a dynamic, mobile-friendly user interface, and features easily accessible help text, asynchronous data submission, and an interactive, hardware-accelerated molecular visualization viewer based on the 3Dmol.js library. In particular, the viewer allows computed electrostatics to be directly mapped onto an irregular triangular mesh of a molecular surface. Due to this functionality and the fast SMPBE finite element solvers, the web server is very efficient in the calculation and visualization of electrostatics. In addition, SMPBE is reconstructed using a new objective electrostatic free energy, clearly showing that the electrostatics and ionic concentrations predicted by SMPBE are optimal in the sense of minimizing the objective electrostatic free energy. SMPBS is available at the URL: smpbs.math.uwm.edu

  9. Analysis of Uncertainty and Variability in Finite Element Computational Models for Biomedical Engineering: Characterization and Propagation

    PubMed Central

    Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González

    2016-01-01

    Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering. PMID:27872840
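
    A minimal non-intrusive (Monte Carlo) propagation, one of the method families reviewed, can be sketched on a toy finite element model with an uncertain Young's modulus standing in for a biomedical parameter; all numbers are illustrative:

    ```python
    # Non-intrusive uncertainty propagation: sample the uncertain input, run
    # a deterministic toy FE model per sample, read statistics off the output.
    import numpy as np

    def bar_tip_displacement(E, n_el=10, L=1.0, A=1e-4, P=1000.0):
        """Tip displacement of an axially loaded bar assembled from n_el
        linear elements (a stand-in for a real biomedical FE model)."""
        k = E * A * n_el / L                       # element stiffness
        K = np.zeros((n_el + 1, n_el + 1))
        for e in range(n_el):
            K[e:e + 2, e:e + 2] += k * np.array([[1, -1], [-1, 1]])
        f = np.zeros(n_el + 1)
        f[-1] = P
        u = np.linalg.solve(K[1:, 1:], f[1:])      # node 0 clamped
        return u[-1]

    rng = np.random.default_rng(0)
    E_samples = rng.lognormal(mean=np.log(2e9), sigma=0.1, size=2000)
    tips = np.array([bar_tip_displacement(E) for E in E_samples])
    print(tips.mean(), tips.std())                 # propagated mean and spread
    ```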

  12. Analysis of Vertebral Bone Strength, Fracture Pattern, and Fracture Location: A Validation Study Using a Computed Tomography-Based Nonlinear Finite Element Analysis

    PubMed Central

    Imai, Kazuhiro

    2015-01-01

    Finite element analysis (FEA) is an advanced computer technique of structural stress analysis developed in engineering mechanics. Because vertebral bone exhibits nonlinear compressive behavior, a nonlinear FEA should be utilized to analyze clinical vertebral fractures. In this article, a computed tomography-based nonlinear FEA (CT/FEA) for analyzing vertebral bone strength, fracture pattern, and fracture location is introduced. The accuracy of the CT/FEA was validated by performing experimental mechanical testing with human cadaveric specimens. Vertebral bone strength and the minimum principal strain at the vertebral surface were accurately analyzed using the CT/FEA. The experimental fracture pattern and fracture location were also accurately simulated. The element size was optimized by assessing the accuracy of the CT/FEA, and the optimum element size was found to be 2 mm. It is expected that the CT/FEA will be valuable in analyzing vertebral fracture risk and assessing therapeutic effects on osteoporosis. PMID:26029476

  13. Differences between postmortem computed tomography and conventional autopsy in a stabbing murder case

    PubMed Central

    Zerbini, Talita; da Silva, Luiz Fernando Ferraz; Ferro, Antonio Carlos Gonçalves; Kay, Fernando Uliana; Junior, Edson Amaro; Pasqualucci, Carlos Augusto Gonçalves; do Nascimento Saldiva, Paulo Hilario

    2014-01-01

    OBJECTIVE: The aim of the present work is to analyze the differences and similarities between the elements of a conventional autopsy and images obtained from postmortem computed tomography in a case of a homicide stab wound. METHOD: Comparison between the findings of different methods: autopsy and postmortem computed tomography. RESULTS: In some aspects, autopsy is still superior to imaging, especially in relation to external examination and the description of lesion vitality. However, the findings of gas embolism, pneumothorax and pulmonary emphysema and the relationship between the internal path of the instrument of aggression and the entry wound are better demonstrated by postmortem computed tomography. CONCLUSIONS: Although multislice computed tomography has greater accuracy than autopsy, we believe that the conventional autopsy method is fundamental for providing evidence in criminal investigations. PMID:25518020

  14. Cellular computational platform and neurally inspired elements thereof

    DOEpatents

    Okandan, Murat

    2016-11-22

    A cellular computational platform is disclosed that includes a multiplicity of functionally identical, repeating computational hardware units that are interconnected electrically and optically. Each computational hardware unit includes a reprogrammable local memory and has interconnections to other such units that have reconfigurable weights. Each computational hardware unit is configured to transmit signals into the network for broadcast in a protocol-less manner to other such units in the network, and to respond to protocol-less broadcast messages that it receives from the network. Each computational hardware unit is further configured to reprogram the local memory in response to incoming electrical and/or optical signals.

  15. Multiple single-element transducer photoacoustic computed tomography system

    NASA Astrophysics Data System (ADS)

    Kalva, Sandeep Kumar; Hui, Zhe Zhi; Pramanik, Manojit

    2018-02-01

    Light absorption by the chromophores (hemoglobin, melanin, water, etc.) present in any biological tissue results in a local temperature rise. This rise in temperature generates pressure waves through the thermoelastic expansion of the tissue. In a circular-scanning photoacoustic computed tomography (PACT) system, these pressure waves can be detected using a single-element ultrasound transducer (SUST) rotated through a full 360° around the sample, or using a circular array transducer. A SUST takes several minutes to acquire the PA data around the sample, whereas a circular array transducer takes only a fraction of a second. Hence, circular array transducers are preferred for real-time imaging. However, circular array transducers are custom made and expensive, whereas SUSTs are cheap and readily available, so SUST-based PACT systems remain cost-effective. To reduce the scanning time to a few seconds, multiple SUSTs can be used simultaneously to acquire the PA data instead of a single SUST rotating through 360°. This reduces the scanning time two-fold with two SUSTs (each rotating 180°), four-fold with four SUSTs (rotating 90°), and eight-fold with eight SUSTs (rotating 45°). Here we show that multiple SUSTs yield PA images (of numerical and experimental phantom data) similar to those obtained using a single SUST.
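
    The scan-time arithmetic described above is simple enough to sketch. The following snippet (with an assumed angular step size and per-position dwell time, purely illustrative values) shows how the acquisition time falls as the 360° rotation is shared among N single-element transducers:

```python
# Hypothetical illustration of the scan-time arithmetic described above: N
# single-element transducers, each rotated over 360/N degrees, cut acquisition
# time N-fold. The step size and per-position dwell time are assumed values.

def pact_scan_time(n_transducers, step_deg=0.5, dwell_s=0.1):
    """Return total scan time (s) and the arc each transducer sweeps (deg)."""
    arc = 360.0 / n_transducers        # arc swept by each transducer
    positions = arc / step_deg         # angular stops per transducer
    return positions * dwell_s, arc

for n in (1, 2, 4, 8):
    t, arc = pact_scan_time(n)
    print(f"{n} transducer(s): {arc:5.1f} deg each, scan time {t:6.1f} s")
```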

  16. The effectiveness of element downsizing on a three-dimensional finite element model of bone trabeculae in implant biomechanics.

    PubMed

    Sato, Y; Wadamoto, M; Tsuga, K; Teixeira, E R

    1999-04-01

    Improving the validity of finite element analysis in implant biomechanics requires element downsizing, but excessive downsizing demands more computer memory and calculation time. To investigate the effectiveness of element downsizing in the construction of a three-dimensional finite element model of bone trabeculae, models with different element sizes (600, 300, 150 and 75 microm) were constructed and the stress induced by a vertical 10 N load was analysed. The difference in von Mises stress values between the models with 600 and 300 microm element sizes was larger than that between 300 and 150 microm. On the other hand, no clear difference in stress values was detected among the models with 300, 150 and 75 microm element sizes. Downsizing elements from 600 to 300 microm is therefore suggested to be effective in the construction of a three-dimensional finite element bone trabeculae model, with possible savings of computer memory and calculation time in the laboratory.
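
    The downsizing study follows a standard mesh-convergence pattern; a minimal sketch of such a check (element sizes match the study, but the stress values are hypothetical placeholders, not the paper's results) might look like this:

```python
# A minimal sketch of the downsizing logic described above, framed as a
# standard mesh-convergence check with illustrative von Mises stress values.

sizes_um = [600, 300, 150, 75]
stress_mpa = [12.0, 9.0, 8.7, 8.6]     # hypothetical stress at a fixed site

tol = 0.05                             # accept once relative change < 5%
pairs = zip(zip(sizes_um, stress_mpa), zip(sizes_um[1:], stress_mpa[1:]))
for (h1, s1), (h2, s2) in pairs:
    change = abs(s2 - s1) / abs(s1)
    verdict = "converged" if change < tol else "keep refining"
    print(f"{h1} -> {h2} um: relative change {change:.1%} ({verdict})")
```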

  17. A multiscale method for modeling high-aspect-ratio micro/nano flows

    NASA Astrophysics Data System (ADS)

    Lockerby, Duncan; Borg, Matthew; Reese, Jason

    2012-11-01

    In this paper we present a new multiscale scheme for simulating micro/nano flows of high aspect ratio in the flow direction, e.g., within long ducts, tubes, or channels of varying cross-section. The scheme consists of applying a simple hydrodynamic description over the entire domain, and allocating micro sub-domains in very small "slices" of the channel. Every micro element is a molecular dynamics simulation (or other appropriate model, e.g., a direct simulation Monte Carlo method for micro-channel gas flows) over the local height of the channel/tube. The number of micro elements, as well as their streamwise positions, is chosen to resolve the geometrical features of the macro channel. While there is no direct communication between individual micro elements, coupling occurs via an iterative imposition of mass- and momentum-flux conservation on the macro scale. The greater the streamwise scale of the geometry, the more significant the computational speed-up compared to a full MD simulation. We test the new multiscale method on the case of a converging/diverging nanochannel conveying a simple Lennard-Jones liquid, and validate the results by comparing them to a full MD simulation of the same test case. Supported by EPSRC Programme Grant EP/I011927/1.
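
    As a rough illustration of the macro-micro coupling idea (the mass-conservation part only), the toy sketch below iterates a single mass flux through stand-in "micro" slice models until the imposed pressure drop is matched; the conductance law is an invented placeholder, not a molecular dynamics result:

```python
# Toy macro-micro coupling sketch: micro "slices" at discrete streamwise
# stations return a local flux/pressure-gradient relation (a stand-in
# function here, not an MD simulation), and the macro iteration enforces a
# single mass flux Q together with the imposed end-to-end pressure drop.

heights = [2.0, 1.5, 1.0, 1.5, 2.0]       # local channel heights (converging/diverging)
dp_total = 1.0                            # imposed end-to-end pressure drop
dx = 1.0                                  # streamwise extent of each slice

def micro_conductance(h, p_local):
    """Stand-in for a micro-element result: flux per unit pressure gradient."""
    return h**3 * (1.0 + 0.05 * p_local)  # weakly pressure-dependent, hypothetical

Q = 1.0                                   # initial guess for the mass flux
for it in range(50):
    p, drops = dp_total, []
    for h in heights:                     # distribute pressure for current Q
        dp = Q * dx / micro_conductance(h, p)
        drops.append(dp)
        p -= dp
    Q_new = Q * dp_total / sum(drops)     # rescale Q to match the imposed drop
    if abs(Q_new - Q) < 1e-10:
        break
    Q = Q_new

print(f"converged mass flux Q = {Q:.6f} after {it + 1} iterations")
```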

  18. Finite-element computer program for axisymmetric loading situations where components may have a relative interference fit

    NASA Technical Reports Server (NTRS)

    Taylor, C. M.

    1977-01-01

    A finite element computer program which enables the analysis of distortions and stresses occurring in components having a relative interference fit is presented. The program is limited to situations in which the loading is axisymmetric. Loads arising from the interference fit(s) and from external, inertial, and thermal loadings are accommodated. The components may comprise several different homogeneous isotropic materials whose properties may be a function of temperature. An example illustrating the data input and program output is given.

  19. Discrete element analysis is a valid method for computing joint contact stress in the hip before and after acetabular fracture.

    PubMed

    Townsend, Kevin C; Thomas-Aitken, Holly D; Rudert, M James; Kern, Andrew M; Willey, Michael C; Anderson, Donald D; Goetz, Jessica E

    2018-01-23

    Evaluation of abnormalities in joint contact stress that develop after inaccurate reduction of an acetabular fracture may provide a potential means for predicting the risk of developing post-traumatic osteoarthritis. Discrete element analysis (DEA) is a computational technique for calculating intra-articular contact stress distributions in a fraction of the time required to obtain the same information using the more commonly employed finite element analysis technique. The goal of this work was to validate the accuracy of DEA-computed contact stress against physical measurements of contact stress made in cadaveric hips using Tekscan sensors. Four static loading tests in a variety of poses from heel-strike to toe-off were performed in two different cadaveric hip specimens with the acetabulum intact and again with an intentionally malreduced posterior wall acetabular fracture. DEA-computed contact stress was compared on a point-by-point basis to stress measured from the physical experiments. There was good agreement between computed and measured contact stress over the entire contact area (correlation coefficients ranged from 0.88 to 0.99). DEA-computed peak contact stress was within an average of 0.5 MPa (range 0.2-0.8 MPa) of the Tekscan peak stress for intact hips, and within an average of 0.6 MPa (range 0-1.6 MPa) for fractured cases. DEA-computed contact areas were within an average of 33% of the Tekscan-measured areas (range: 1.4-60%). These results indicate that DEA is a valid method for accurately estimating contact stress in both intact and fractured hips. Copyright © 2017 Elsevier Ltd. All rights reserved.
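
    The point-by-point validation described above amounts to comparing two stress maps; a minimal sketch with synthetic stand-in data (not the study's measurements) could be:

```python
import numpy as np

# A sketch of the point-by-point comparison described above. The "measured"
# and "computed" stress maps are synthetic stand-ins, not the study's data.

rng = np.random.default_rng(0)
measured = rng.uniform(0.0, 8.0, size=500)             # Tekscan-like values, MPa
computed = measured + rng.normal(0.0, 0.4, size=500)   # DEA-like values with noise

r = np.corrcoef(computed, measured)[0, 1]              # point-by-point correlation
peak_diff = abs(computed.max() - measured.max())       # peak contact stress offset
print(f"correlation r = {r:.2f}, peak-stress difference = {peak_diff:.2f} MPa")
```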

  20. High precision computing with charge domain devices and a pseudo-spectral method therefor

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob (Inventor); Toomarian, Nikzad (Inventor); Fijany, Amir (Inventor); Zak, Michail (Inventor)

    1997-01-01

    The present invention enhances the bit resolution of a CCD/CID MVM processor by storing each bit of each matrix element as a separate CCD charge packet. The bits of each input vector are separately multiplied by each bit of each matrix element in massive parallelism, and the resulting products are combined appropriately to synthesize the correct product. In another aspect of the invention, such arrays are employed in a pseudo-spectral method in which partial differential equations are solved by expressing each derivative analytically as a matrix, and the state function is updated at each computation cycle by multiplying it by the matrices. The matrices are treated as synaptic arrays of a neural network and the state-function vector elements are treated as neurons. In a further aspect of the invention, moving-target detection is performed by driving the soliton equation with a vector of detector outputs. The neural architecture consists of two synaptic arrays corresponding to the two differential terms of the soliton equation, and an adder connected to the output thereof and to the output of the detector array to drive the soliton equation.
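
    The bit-decomposition trick at the heart of this patent is easy to verify numerically. The sketch below (plain NumPy standing in for the analog charge-domain hardware) splits a 4-bit matrix and vector into bit planes, forms the binary partial products, and recombines them with the appropriate powers of two:

```python
import numpy as np

# Numerical sketch of the bit-decomposition idea: each bit of the matrix and
# of the input vector is handled as a separate binary plane (the analog of
# one charge packet per bit), and the binary partial products are recombined.

rng = np.random.default_rng(1)
BITS = 4
A = rng.integers(0, 2**BITS, size=(4, 4))    # 4-bit unsigned matrix elements
x = rng.integers(0, 2**BITS, size=4)         # 4-bit unsigned input vector

y = np.zeros(4, dtype=np.int64)
for i in range(BITS):                        # bit plane of the matrix
    A_i = (A >> i) & 1
    for j in range(BITS):                    # bit plane of the vector
        x_j = (x >> j) & 1
        y += (1 << (i + j)) * (A_i @ x_j)    # scaled binary partial product

assert np.array_equal(y, A @ x)              # recombination is exact
print(y)
```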

  1. Implementation of a flexible and scalable particle-in-cell method for massively parallel computations in the mantle convection code ASPECT

    NASA Astrophysics Data System (ADS)

    Gassmöller, Rene; Bangerth, Wolfgang

    2016-04-01

    Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility of the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle-transfer between parallel subdomains utilizing existing communication patterns from the finite element mesh, and the use of established parallel output algorithms like the HDF5 library. Finally, we show some relevant particle application cases, compare our implementation to a
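
    For orientation, the two core particle-in-cell operations the abstract refers to, advecting property-carrying particles and projecting their properties back to the mesh, can be sketched in a few lines. This 1-D toy (with an invented velocity field) omits everything that makes the ASPECT implementation hard: adaptivity, the plugin structure, and parallel particle transfer:

```python
import numpy as np

# 1-D toy of the two core particle-in-cell operations: advect particles that
# carry a material property, then project the property onto a fixed mesh by
# per-cell averaging. The velocity field is purely illustrative.

n_cells, n_particles = 8, 200
rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, n_particles)      # particle positions in [0, 1)
prop = np.where(x < 0.5, 1.0, 0.0)          # carried property, e.g. composition

def velocity(x):
    return 0.1 * np.sin(np.pi * x)          # prescribed flow field, hypothetical

dt = 0.05
for _ in range(20):                         # forward-Euler advection
    x = np.clip(x + dt * velocity(x), 0.0, 1.0 - 1e-12)

cells = (x * n_cells).astype(int)           # reassign particles to cells
counts = np.maximum(np.bincount(cells, minlength=n_cells), 1)
cell_avg = np.bincount(cells, weights=prop, minlength=n_cells) / counts
print(cell_avg)                             # property projected onto the mesh
```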

  2. Phenomenological aspects of the cognitive rumination construct.

    PubMed

    Meyer, Leonardo Fernandez; Taborda, José Geraldo Vernet; da Costa, Fábio Antônio; Soares, Ana Luiza Alfaya Galego; Mecler, Kátia; Valença, Alexandre Martins

    2015-01-01

    To evaluate the importance of phenomenological aspects of the cognitive rumination (CR) construct in current empirical psychiatric research. We searched SciELO, Scopus, ScienceDirect, MEDLINE, OneFile (GALE), SpringerLink, Cambridge Journals and Web of Science between February and March of 2014 for studies whose title and topic included the following keywords: cognitive rumination; rumination response scale; and self-reflection. The inclusion criteria were: empirical clinical study; CR as the main object of investigation; and study that included a conceptual definition of CR. The studies selected were published in English in biomedical journals in the last 10 years. Our phenomenological analysis was based on Karl Jaspers' General Psychopathology. Most current empirical studies adopt phenomenological cognitive elements in conceptual definitions. However, these elements do not seem to be carefully examined and are indistinctly understood as objective empirical factors that may be measured, which may contribute to misunderstandings about CR, erroneous interpretations of results and problematic theoretical models. Empirical studies fail when evaluating phenomenological aspects of the cognitive elements of the CR construct. Psychopathology and phenomenology may help define the characteristics of CR elements and may contribute to their understanding and hierarchical organization as a construct. A review of the psychopathology principles established by Jaspers may clarify some of these issues.

  3. Charles Darwin and Evolution: Illustrating Human Aspects of Science

    ERIC Educational Resources Information Center

    Kampourakis, Kostas; McComas, William F.

    2010-01-01

    Recently, the nature of science (NOS) has become recognized as an important element within the K-12 science curriculum. Despite differences in the ultimate lists of recommended aspects, a consensus is emerging on what specific NOS elements should be the focus of science instruction and inform textbook writers and curriculum developers. In this…

  4. Cloud-free resolution element statistics program

    NASA Technical Reports Server (NTRS)

    Liley, B.; Martin, C. D.

    1971-01-01

    A computer program computes the number of cloud-free elements in the field-of-view and the percentage of the total field-of-view occupied by clouds, eliminating the human error of visual estimation when computing cloud statistics from aerial photographs.

  5. Chemistry of superheavy elements.

    PubMed

    Schädel, Matthias

    2006-01-09

    The number of chemical elements has increased considerably in the last few decades. Most excitingly, these heaviest, man-made elements at the far-end of the Periodic Table are located in the area of the long-awaited superheavy elements. While physical techniques currently play a leading role in these discoveries, the chemistry of superheavy elements is now beginning to be developed. Advanced and very sensitive techniques allow the chemical properties of these elusive elements to be probed. Often, less than ten short-lived atoms, chemically separated one-atom-at-a-time, provide crucial information on basic chemical properties. These results place the architecture of the far-end of the Periodic Table on the test bench and probe the increasingly strong relativistic effects that influence the chemical properties there. This review is focused mainly on the experimental work on superheavy element chemistry. It contains a short contribution on relativistic theory, and some important historical and nuclear aspects.

  6. Improving finite element results in modeling heart valve mechanics.

    PubMed

    Earl, Emily; Mohammadi, Hadi

    2018-06-01

    Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. One way to address this issue is to implement computationally expensive finite element models characterized by precise constitutive models and high-order, high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse-mesh finite element models to provide accuracy comparable to that of fine-mesh models while maintaining a relatively low computational cost. The method reduces the computational expense required to solve the linear and nonlinear constitutive models commonly used in heart valve mechanics simulations while continuing to account for both large and infinitesimal deformations. This continuum model is developed based on a least-squares algorithm coupled with the finite difference method, under the assumption that the components of the strain tensor are available at all nodes of the finite element mesh. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.

  7. Psychosomatic Aspects of Cancer: An Overview.

    ERIC Educational Resources Information Center

    Murray, John B.

    1980-01-01

    It is suggested in this literature review on the psychosomatic aspects of cancer that psychoanalytic interpretations which focused on intrapsychic elements have given way to considerations of rehabilitation and assistance with the complex emotional reactions of patients and their families to terminal illness and death. (Author/DB)

  8. Preprocessor and postprocessor computer programs for a radial-flow finite-element model

    USGS Publications Warehouse

    Pucci, A.A.; Pope, D.A.

    1987-01-01

    Preprocessing and postprocessing computer programs that enhance the utility of the U.S. Geological Survey radial-flow model have been developed. The preprocessor program: (1) generates a triangular finite element mesh from minimal data input, (2) produces graphical displays and tabulations of data for the mesh, and (3) prepares an input data file for use with the radial-flow model. The postprocessor program is a version of the radial-flow model, which was modified to (1) produce graphical output for simulation and field results, (2) generate a statistic for comparing the simulation results with observed data, and (3) allow hydrologic properties to vary in the simulated region. Examples of the use of the processor programs for a hypothetical aquifer test are presented. Instructions for the data files, format instructions, and a listing of the preprocessor and postprocessor source codes are given in the appendixes. (Author's abstract)

  9. A large-scale computer facility for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F., Jr.

    1985-01-01

    As a result of advances related to the combination of computer system technology and numerical modeling, computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. NASA has, therefore, initiated the Numerical Aerodynamic Simulation (NAS) Program with the objective to provide a basis for further advances in the modeling of aerodynamic flowfields. The Program is concerned with the development of a leading-edge, large-scale computer facility. This facility is to be made available to Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. Attention is given to the requirements for computational aerodynamics, the principal specific goals of the NAS Program, the high-speed processor subsystem, the workstation subsystem, the support processing subsystem, the graphics subsystem, the mass storage subsystem, the long-haul communication subsystem, the high-speed data-network subsystem, and software.

  10. GPGPU-based explicit finite element computations for applications in biomechanics: the performance of material models, element technologies, and hardware generations.

    PubMed

    Strbac, V; Pierce, D M; Vander Sloten, J; Famaey, N

    2017-12-01

    Finite element (FE) simulations are increasingly valuable in assessing and improving the performance of biomedical devices and procedures. Due to high computational demands such simulations may become difficult or even infeasible, especially when considering nearly incompressible and anisotropic material models prevalent in analyses of soft tissues. Implementations of GPGPU-based explicit FEs predominantly cover isotropic materials, e.g. the neo-Hookean model. To elucidate the computational expense of anisotropic materials, we implement the Gasser-Ogden-Holzapfel dispersed, fiber-reinforced model and compare solution times against the neo-Hookean model. Implementations of GPGPU-based explicit FEs conventionally rely on single-point (under) integration. To elucidate the expense of full and selective-reduced integration (which are more reliable), we implement both and compare the corresponding solution times against those generated using underintegration. To better understand the advancement of hardware, we compare results generated using representative Nvidia GPGPUs from three recent generations: Fermi (C2075), Kepler (K20c), and Maxwell (GTX980). We explore scaling by solving the same boundary value problem (an extension-inflation test on a segment of human aorta) with progressively larger FE meshes. Our results demonstrate substantial improvements in simulation speeds relative to two benchmark FE codes (up to 300-fold while maintaining accuracy), and thus open many avenues to novel applications in biomechanics and medicine.
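
    As a point of reference for the material models being compared, a plain CPU sketch of a compressible neo-Hookean stress evaluation is shown below; the Lamé-type constants are illustrative, and the paper's GPU kernels and the Gasser-Ogden-Holzapfel fiber model involve considerably more work:

```python
import numpy as np

# Reference (CPU) sketch of a per-element material kernel: Cauchy stress for
# a compressible neo-Hookean solid from the deformation gradient F.
# Material constants are assumed values, not the paper's.

mu, lam = 1.0, 10.0                      # Lame-type parameters (illustrative)

def neo_hookean_cauchy(F):
    J = np.linalg.det(F)                 # volume change
    B = F @ F.T                          # left Cauchy-Green tensor
    I = np.eye(3)
    return (mu / J) * (B - I) + (lam * np.log(J) / J) * I

F = np.eye(3) + 0.1 * np.diag([1.0, -0.3, -0.3])   # uniaxial-like stretch
print(neo_hookean_cauchy(F))
```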

  11. Laser fabrication of diffractive optical elements based on detour-phase computer-generated holograms for two-dimensional Airy beams.

    PubMed

    Călin, Bogdan-Ştefăniţă; Preda, Liliana; Jipa, Florin; Zamfirescu, Marian

    2018-02-20

    We have designed, fabricated, and tested an amplitude diffractive optical element for the generation of two-dimensional (2D) Airy beams. The design is based on a detour-phase computer-generated hologram. Using laser ablation of metallic films, we obtained a 2 mm × 2 mm diffractive optical element with a 5 μm × 5 μm pixel and demonstrated a fast, cheap, and reliable fabrication process. This device can modulate 2D Airy beams, or it can be used as a UV lithography mask to fabricate a series of phase holograms for higher energy efficiency. Tests confirming the design premise and an analysis of the transverse beam profile and propagation are presented.

  12. Nonlinear dynamics of laser systems with elements of a chaos: Advanced computational code

    NASA Astrophysics Data System (ADS)

    Buyadzhi, V. V.; Glushkov, A. V.; Khetselius, O. Yu; Kuznetsova, A. A.; Buyadzhi, A. A.; Prepelitsa, G. P.; Ternovsky, V. B.

    2017-10-01

    A general, uniform chaos-geometric computational approach to the analysis, modelling and prediction of the non-linear dynamics of quantum and laser systems (laser and quantum generator systems, etc.) with elements of deterministic chaos is briefly presented. The approach is based on advanced generalized techniques such as wavelet analysis, multi-fractal formalism, the mutual information approach, correlation integral analysis, the false nearest neighbour algorithm, Lyapunov exponent analysis, the surrogate data method, and prediction models. We first present numerical data on the topological and dynamical invariants (in particular, the correlation, embedding and Kaplan-Yorke dimensions, the Lyapunov exponents, the Kolmogorov entropy and other parameters) for the dynamics of a laser system (a semiconductor GaAs/GaAlAs laser with retarded feedback) in chaotic and hyperchaotic regimes.

  13. Higher-order adaptive finite-element methods for Kohn–Sham density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motamarri, P.; Nowak, M.R.; Leiter, K.

    2013-11-15

    We present an efficient computational approach to perform real-space electronic structure calculations using an adaptive higher-order finite-element discretization of Kohn–Sham density-functional theory (DFT). To this end, we develop an a priori mesh-adaption technique to construct a close to optimal finite-element discretization of the problem. We further propose an efficient solution strategy for solving the discrete eigenvalue problem by using spectral finite-elements in conjunction with Gauss–Lobatto quadrature, and a Chebyshev acceleration technique for computing the occupied eigenspace. The proposed approach has been observed to provide a staggering 100–200-fold computational advantage over the solution of a generalized eigenvalue problem. Using the proposed solution procedure, we investigate the computational efficiency afforded by higher-order finite-element discretizations of the Kohn–Sham DFT problem. Our studies suggest that staggering computational savings, of the order of 1000-fold relative to linear finite-elements, can be realized for both all-electron and local pseudopotential calculations by using higher-order finite-element discretizations. On all the benchmark systems studied, we observe diminishing returns in computational savings beyond the sixth order for accuracies commensurate with chemical accuracy, suggesting that the hexic spectral-element may be an optimal choice for the finite-element discretization of the Kohn–Sham DFT problem. A comparative study of the computational efficiency of the proposed higher-order finite-element discretizations suggests that the performance of the finite-element basis is competitive with the plane-wave discretization for non-periodic local pseudopotential calculations, and compares to the Gaussian basis for all-electron calculations to within an order of magnitude. Further, we demonstrate the capability of the proposed approach to compute the electronic structure of a metallic system
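
    The Chebyshev acceleration named above can be sketched compactly: a Chebyshev polynomial of the shifted and scaled operator damps components in the unwanted spectral interval while amplifying the occupied end. The toy below uses a random symmetric matrix and takes the filter bounds from the exact spectrum purely for convenience, where a real code would estimate them:

```python
import numpy as np

# Toy Chebyshev-filtered subspace iteration. T_k of the shifted/scaled
# operator damps eigencomponents inside [a, b] and amplifies those below a.
# Random symmetric stand-in matrix; bounds taken from the exact spectrum
# only to keep the demonstration short.

def chebyshev_filter(H, X, degree, a, b):
    """Apply T_degree((H - c I) / e) to the block of vectors X."""
    e, c = (b - a) / 2.0, (b + a) / 2.0
    Hs = lambda V: (H @ V - c * V) / e          # shifted and scaled operator
    T_prev, T_curr = X, Hs(X)
    for _ in range(2, degree + 1):              # three-term recurrence
        T_prev, T_curr = T_curr, 2.0 * Hs(T_curr) - T_prev
    return T_curr

rng = np.random.default_rng(3)
H = rng.standard_normal((50, 50))
H = (H + H.T) / 2.0
evals = np.linalg.eigvalsh(H)

X = rng.standard_normal((50, 4))                # random trial subspace
for _ in range(5):                              # filter/orthonormalize passes
    X = chebyshev_filter(H, X, degree=10, a=evals[4], b=evals[-1])
    X, _ = np.linalg.qr(X)

ritz = np.linalg.eigvalsh(X.T @ H @ X)
print(np.round(ritz, 4))                        # approaches the 4 lowest eigenvalues
print(np.round(evals[:4], 4))
```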

  14. On computational methods for crashworthiness

    NASA Technical Reports Server (NTRS)

    Belytschko, T.

    1992-01-01

    The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.
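
    Subcycling, mentioned above, advances stiff parts of a mesh with a smaller time step than the rest. A two-degree-of-freedom toy (unit masses, made-up stiffnesses, coupling force frozen over each large step) conveys the idea:

```python
import numpy as np

# Toy multi-time-step (subcycling) sketch: unit-mass node 1 sits on a stiff
# spring and takes m substeps for every single step of the softly supported
# node 2; the coupling force is frozen over each large step. All parameters
# are made up for illustration.

k_stiff, k_soft, k_c = 1000.0, 1.0, 0.5   # stiffnesses (k_c couples the nodes)
x = np.array([0.1, 0.0])                  # initial displacements
v = np.zeros(2)                           # initial velocities
dt, m = 0.01, 16                          # large step, substeps for node 1

for _ in range(1000):
    f_c = k_c * (x[1] - x[0])             # coupling force, frozen this step
    for _ in range(m):                    # subcycle the stiff node
        v[0] += (dt / m) * (-k_stiff * x[0] + f_c)
        x[0] += (dt / m) * v[0]
    v[1] += dt * (-k_soft * x[1] - f_c)   # one large step for the soft node
    x[1] += dt * v[1]

print(x, v)                               # bounded oscillation, no blow-up
```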

  15. A combined finite element-boundary element formulation for solution of axially symmetric bodies

    NASA Technical Reports Server (NTRS)

    Collins, Jeffrey D.; Volakis, John L.

    1991-01-01

    A new method is presented for the computation of electromagnetic scattering from axially symmetric bodies. To allow the simulation of inhomogeneous cross sections, the method combines the finite element and boundary element techniques. Interior to a fictitious surface enclosing the scattering body, the finite element method is used, which results in a sparse submatrix, whereas along the enclosure the Stratton-Chu integral equation is enforced. By choosing the fictitious enclosure to be a right circular cylinder, most of the resulting boundary integrals are convolutional and may therefore be evaluated via the FFT, with which the system is iteratively solved. In view of the sparse matrix associated with the interior fields, this reduces the storage requirement of the entire system to O(N), making the method attractive for large-scale computations. The details of the corresponding formulation and its numerical implementation are described.
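
    The computational payoff of the convolutional structure is the classic FFT speedup: a circulant (discrete-convolution) matrix-vector product costs O(N log N) and needs only the first column of the matrix, never the full array. A quick numerical check:

```python
import numpy as np

# Why convolutional boundary integrals pay off: a circulant matrix-vector
# product (a circular convolution) is computed in O(N log N) via the FFT
# and never requires storing the full N x N matrix.

rng = np.random.default_rng(4)
N = 256
c = rng.standard_normal(N)          # first column of a circulant matrix
x = rng.standard_normal(N)

y_fft = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))   # O(N log N)

C = np.array([np.roll(c, k) for k in range(N)]).T             # dense check, O(N^2)
print(np.allclose(y_fft, C @ x))    # True
```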

  16. Computers and the landscape

    Treesearch

    Gary H. Elsner

    1979-01-01

    Computers can analyze and help to plan the visual aspects of large wildland landscapes. This paper categorizes and explains current computer methods available. It also contains a futuristic dialogue between a landscape architect and a computer.

  17. The Student-Teacher-Computer Team: Focus on the Computer.

    ERIC Educational Resources Information Center

    Ontario Inst. for Studies in Education, Toronto.

    Descriptions of essential computer elements, logic and programing techniques, and computer applications are provided in an introductory handbook for use by educators and students. Following a brief historical perspective, the organization of a computer system is schematically illustrated, functions of components are explained in non-technical…

  18. Finite element implementation of state variable-based viscoplasticity models

    NASA Technical Reports Server (NTRS)

    Iskovitz, I.; Chang, T. Y. P.; Saleeb, A. F.

    1991-01-01

    The implementation of state variable-based viscoplasticity models is made in a general-purpose finite element code for structural applications of metals deformed at elevated temperatures. Two constitutive models, Walker's and Robinson's, are studied in conjunction with two implicit integration methods: the trapezoidal rule with Newton-Raphson iterations and an asymptotic integration algorithm. A comparison is made between the two integration methods, and the latter appears to be computationally more appealing in terms of numerical accuracy and CPU time. However, in order to make the asymptotic algorithm robust, it is necessary to include a self-adaptive scheme with subincremental step control and error checking of the Jacobian matrix at the integration points. Three examples are given to illustrate the numerical aspects of the integration methods tested.

  19. Sustainability of transport structures - some aspects of the nonlinear reliability assessment

    NASA Astrophysics Data System (ADS)

    Pukl, Radomír; Sajdlová, Tereza; Strauss, Alfred; Lehký, David; Novák, Drahomír

    2017-09-01

    Efficient techniques for the nonlinear numerical analysis of concrete structures and advanced stochastic simulation methods have been combined to offer an advanced tool for the assessment of realistic behaviour, failure and safety of transport structures. The approach is based on randomization of the nonlinear finite element analysis of the structural models. Degradation aspects such as carbonation of concrete can be accounted for in order to predict the durability of the investigated structure and its sustainability. Results can serve as a rational basis for performance and sustainability assessment based on advanced nonlinear computer analysis of transport-infrastructure structures such as bridges or tunnels. In the stochastic simulation, the input material parameters obtained from material tests, including their randomness and uncertainty, are represented as random variables or fields. Appropriate identification of material parameters is crucial for the virtual failure modelling of structures and structural elements. An inverse analysis using artificial neural networks and virtual stochastic simulation is applied to determine the fracture-mechanical parameters of the structural material and its numerical model. Structural response, reliability and sustainability have been investigated on different types of transport structures made from various materials using the above-mentioned methodology and tools.

  20. Cognitive-ergonomics and instructional aspects of e-learning courses.

    PubMed

    Rodrigues, Martha; Castello Branco, Iana; Shimioshi, José; Rodrigues, Evaldo; Monteiro, Simone; Quirino, Marcelo

    2012-01-01

    This paper presents an analysis of cognitive-ergonomic aspects of e-learning courses offered by a body of the Brazilian Public Administration. Cognitive ergonomics studies the behavioural and cognitive aspects of the relation between humans and the physical and social elements of the workspace. On that basis, usability was evaluated on the following points: i) visualization; ii) text comprehension and reading; iii) memory; iv) interface; v) instructional design; and vi) attention and learning. The survey applied the following techniques: (1) bibliographic survey, (2) field survey and (3) document analysis. A semi-structured questionnaire was chosen as the main method of data collection. Regarding interaction with artifacts, the interface of the courses is classified as direct engagement, because it gives users the feeling of acting directly on the objects. Although the courses are well structured, they have flaws that are discussed in the paper. Even with these problems, the courses have a good degree of usability.

  1. Computational analysis of particle reinforced viscoelastic polymer nanocomposites - statistical study of representative volume element

    NASA Astrophysics Data System (ADS)

    Hu, Anqi; Li, Xiaolin; Ajdari, Amin; Jiang, Bing; Burkhart, Craig; Chen, Wei; Brinson, L. Catherine

    2018-05-01

    The concept of representative volume element (RVE) is widely used to determine the effective material properties of random heterogeneous materials. In the present work, the RVE is investigated for the viscoelastic response of particle-reinforced polymer nanocomposites in the frequency domain. The smallest RVE size and the minimum number of realizations at a given volume size for both structural and mechanical properties are determined for a given precision using the concept of margin of error. It is concluded that using the mean of many realizations of a small RVE instead of a single large RVE can retain the desired precision of a result with much lower computational cost (up to three orders of magnitude reduced computation time) for the property of interest. Both the smallest RVE size and the minimum number of realizations for a microstructure with higher volume fraction (VF) are larger compared to those of one with lower VF at the same desired precision. Similarly, a clustered structure is shown to require a larger minimum RVE size as well as a larger number of realizations at a given volume size compared to the well-dispersed microstructures.
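
    The margin-of-error criterion used to fix the minimum number of realizations can be sketched directly; with a 95% confidence level and illustrative numbers (not the paper's), the minimum n follows from requiring z·s/√n to stay below the target margin:

```python
import math

# Margin-of-error sketch: choose the number of RVE realizations n so that
# the 95% confidence half-width on the mean property stays below a target.
# All numbers are illustrative, not the paper's.

z = 1.96        # 95% confidence level
s = 0.08        # sample standard deviation across realizations
target = 0.02   # acceptable margin of error, same units as the property

# margin of error E = z * s / sqrt(n) <= target  =>  n >= (z * s / target)^2
n = math.ceil((z * s / target) ** 2)
print(f"minimum number of realizations: n = {n}")   # 62 for these values
```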

  2. Biomolecular electrostatics and solvation: a computational perspective

    PubMed Central

    Ren, Pengyu; Chun, Jaehun; Thomas, Dennis G.; Schnieders, Michael J.; Marucho, Marcelo; Zhang, Jiajing; Baker, Nathan A.

    2012-01-01

    An understanding of molecular interactions is essential for insight into biological systems at the molecular scale. Among the various components of molecular interactions, electrostatics are of special importance because of their long-range nature and their influence on polar or charged molecules, including water, aqueous ions, proteins, nucleic acids, carbohydrates, and membrane lipids. In particular, robust models of electrostatic interactions are essential for understanding the solvation properties of biomolecules and the effects of solvation upon biomolecular folding, binding, enzyme catalysis, and dynamics. Electrostatics, therefore, are of central importance to understanding biomolecular structure and modeling interactions within and among biological molecules. This review discusses the solvation of biomolecules with a computational biophysics view towards describing the phenomenon. While our main focus lies on the computational aspect of the models, we provide an overview of the basic elements of biomolecular solvation (e.g., solvent structure, polarization, ion binding, and nonpolar behavior) in order to provide a background to understand the different types of solvation models. PMID:23217364

  4. Fluid-structure finite-element vibrational analysis

    NASA Technical Reports Server (NTRS)

    Feng, G. C.; Kiefling, L.

    1974-01-01

    A fluid finite element has been developed for a quasi-compressible fluid. Both kinetic and potential energy are expressed as functions of nodal displacements. Thus, the formulation is similar to that used for structural elements, with the only differences being that the fluid can possess gravitational potential, and the constitutive equations for fluid contain no shear coefficients. Using this approach, structural and fluid elements can be used interchangeably in existing efficient sparse-matrix structural computer programs such as SPAR. The theoretical development of the element formulations and the relationships of the local and global coordinates are shown. Solutions of fluid slosh, liquid compressibility, and coupled fluid-shell oscillation problems which were completed using a temporary digital computer program are shown. The frequency correlation of the solutions with classical theory is excellent.

  5. [Numerical finite element modeling of custom car seat using computer aided design].

    PubMed

    Huang, Xuqi; Singare, Sekou

    2014-02-01

    A good cushion not only provides the sitter with a high level of comfort but also controls the distribution of hip pressure to reduce the incidence of disease. The purpose of this study is to introduce a computer-aided design (CAD) modeling method for the buttocks-cushion system using numerical finite element (FE) simulation to predict the pressure distribution on the buttocks-cushion interface. The buttock and cushion geometries were acquired with a laser scanner, and CAD software was used to create the solid model. The FE model of a real seated individual was developed using ANSYS software (ANSYS Inc, Canonsburg, PA). The model is divided into two parts, i.e. the cushion model, made of foam, and the buttock model, represented by the pelvis covered with a soft-tissue layer. Loading simulations consisted of imposing a vertical force of 520 N on the pelvis, corresponding to the weight of the user's upper body, and then solving the system iteratively.

  6. The computational structural mechanics testbed generic structural-element processor manual

    NASA Technical Reports Server (NTRS)

    Stanley, Gary M.; Nour-Omid, Shahram

    1990-01-01

    The usage and development of structural finite element processors based on the CSM Testbed's Generic Element Processor (GEP) template are documented. By convention, such processors have names of the form ESi, where i is an integer. This manual is therefore intended both for Testbed users who wish to invoke ES processors during the course of a structural analysis, and for Testbed developers who wish to construct new element processors (or modify existing ones).

  7. The effect of in situ/in vitro three-dimensional quantitative computed tomography image voxel size on the finite element model of human vertebral cancellous bone.

    PubMed

    Lu, Yongtao; Engelke, Klaus; Glueer, Claus-C; Morlock, Michael M; Huber, Gerd

    2014-11-01

    Quantitative computed tomography-based finite element modeling is a promising clinical tool for the prediction of bone strength. However, quantitative computed tomography-based finite element models have been created from image datasets with different image voxel sizes. The aim of this study was to investigate whether image voxel size influences the finite element models. In all, 12 thoracolumbar vertebrae were scanned prior to autopsy (in situ) using two different quantitative computed tomography scan protocols, which resulted in image datasets with two different voxel sizes (0.29 × 0.29 × 1.3 mm³ vs 0.18 × 0.18 × 0.6 mm³). Eight of them were scanned after autopsy (in vitro) and the datasets were reconstructed with two voxel sizes (0.32 × 0.32 × 0.6 mm³ vs 0.18 × 0.18 × 0.3 mm³). Finite element models with a cuboid volume of interest extracted from the vertebral cancellous bone were created, and inhomogeneous bilinear bone properties were defined. Axial compression was simulated. No effect of voxel size was detected on the apparent bone mineral density for either the in situ or in vitro cases. However, the apparent modulus and yield strength showed significant differences between the two voxel-size groups (in situ and in vitro). In conclusion, image voxel size may have to be considered when the finite element voxel modeling technique is used in clinical applications. © IMechE 2014.

  8. Numerical investigation of multi-element airfoils

    NASA Technical Reports Server (NTRS)

    Cummings, Russell M.

    1993-01-01

    The flow over multi-element airfoils with flat-plate lift-enhancing tabs was numerically investigated. Tabs ranging in height from 0.25 percent to 1.25 percent of the reference airfoil chord were studied near the trailing edge of the main-element. This two-dimensional numerical simulation employed an incompressible Navier-Stokes solver on a structured, embedded grid topology. New grid refinements were used to improve the accuracy of the solution near the overlapping grid boundaries. The effects of various tabs were studied at a constant Reynolds number on a two-element airfoil with a slotted flap. Both computed and measured results indicated that a tab in the main-element cove improved the maximum lift and lift-to-drag ratio relative to the baseline airfoil without a tab. Computed streamlines revealed that the additional turning caused by the tab may reduce the amount of separated flow on the flap. A three-element airfoil was also studied over a range of Reynolds numbers. For the optimized flap rigging, the computed and measured Reynolds number effects were similar. When the flap was moved from the optimum position, numerical results indicated that a tab may help to reoptimize the airfoil to within 1 percent of the optimum flap case.

  9. Automatic partitioning of unstructured meshes for the parallel solution of problems in computational mechanics

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Lesoinne, Michel

    1993-01-01

    Most of the recently proposed computational methods for solving partial differential equations on multiprocessor architectures stem from the 'divide and conquer' paradigm and involve some form of domain decomposition. For those methods which also require grids of points or patches of elements, it is often necessary to explicitly partition the underlying mesh, especially when working with local memory parallel processors. In this paper, a family of cost-effective algorithms for the automatic partitioning of arbitrary two- and three-dimensional finite element and finite difference meshes is presented and discussed in view of a domain decomposed solution procedure and parallel processing. The influence of the algorithmic aspects of a solution method (implicit/explicit computations), and the architectural specifics of a multiprocessor (SIMD/MIMD, startup/transmission time), on the design of a mesh partitioning algorithm are discussed. The impact of the partitioning strategy on load balancing, operation count, operator conditioning, rate of convergence and processor mapping is also addressed. Finally, the proposed mesh decomposition algorithms are demonstrated with realistic examples of finite element, finite volume, and finite difference meshes associated with the parallel solution of solid and fluid mechanics problems on the iPSC/2 and iPSC/860 multiprocessors.
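
    As a concrete example of the kind of cost-effective partitioning heuristic surveyed here, recursive coordinate bisection splits the node set at the median along its longest coordinate extent; the sketch below illustrates the idea only and is not the paper's specific algorithms:

```python
import numpy as np

# Recursive coordinate bisection: split the node set at the median along the
# longest coordinate extent, recursively, yielding balanced partitions.

def recursive_bisection(coords, ids, n_parts):
    if n_parts == 1:
        return [ids]
    axis = int(np.argmax(np.ptp(coords, axis=0)))   # longest extent
    order = np.argsort(coords[:, axis])
    half = len(ids) // 2
    lo, hi = order[:half], order[half:]
    return (recursive_bisection(coords[lo], ids[lo], n_parts // 2)
            + recursive_bisection(coords[hi], ids[hi], n_parts - n_parts // 2))

rng = np.random.default_rng(6)
pts = rng.uniform(size=(1000, 2))                   # 2-D node coordinates
parts = recursive_bisection(pts, np.arange(1000), 4)
print([len(p) for p in parts])                      # [250, 250, 250, 250]
```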

  10. NASTRAN data generation of helicopter fuselages using interactive graphics. [preprocessor system for finite element analysis using IBM computer

    NASA Technical Reports Server (NTRS)

    Sainsbury-Carter, J. B.; Conaway, J. H.

    1973-01-01

    The development and implementation of a preprocessor system for the finite element analysis of helicopter fuselages is described. The system utilizes interactive graphics for the generation, display, and editing of NASTRAN data for fuselage models. It is operated from an IBM 2250 cathode ray tube (CRT) console driven by an IBM 370/145 computer. Real-time interaction plus automatic data generation reduces the nominal 6- to 10-week time for manual generation and checking of data to a few days. The interactive graphics system consists of a series of satellite programs operated from a central NASTRAN Systems Monitor. Fuselage structural models including the outer shell and internal structure may be rapidly generated. All numbering systems are automatically assigned. Hard-copy plots of the model labeled with GRID or element IDs are also available. General-purpose programs for displaying and editing NASTRAN data are included in the system. Utilization of the NASTRAN interactive graphics system has made possible the multiple finite element analysis of complex helicopter fuselage structures within design schedules.

  11. Finite Element Analysis in Concurrent Processing: Computational Issues

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
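
    The displacement-formulation idea, minimizing the total potential energy, is equivalent to solving Ku = f and can be carried out with conjugate gradients using only matrix-vector products, which is exactly what avoids factorization fill-in. A small dense sketch with a stand-in SPD matrix:

```python
import numpy as np

# Minimizing the total potential energy 0.5 * u'Ku - f'u by conjugate
# gradients: only matrix-vector products are needed, so no factorization
# and no fill-in. K is a small SPD stand-in for an assembled stiffness matrix.

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 30))
K = A @ A.T + 30 * np.eye(30)        # symmetric positive definite
f = rng.standard_normal(30)

u = np.zeros(30)
r = f - K @ u                        # residual = negative energy gradient
p = r.copy()
for _ in range(200):
    Kp = K @ p
    alpha = (r @ r) / (p @ Kp)       # exact line search along p
    u += alpha * p
    r_new = r - alpha * Kp
    if np.linalg.norm(r_new) < 1e-10:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new

print(np.allclose(K @ u, f))         # the energy minimizer solves K u = f
```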

  12. A parallel finite element procedure for contact-impact problems using edge-based smooth triangular element and GPU

    NASA Astrophysics Data System (ADS)

    Cai, Yong; Cui, Xiangyang; Li, Guangyao; Liu, Wenyang

    2018-04-01

    The edge-based smoothed finite element method (ES-FEM) can improve the computational accuracy of triangular shell elements and the mesh-partition efficiency of complex models. In this paper, an approach is developed to perform explicit finite element simulations of contact-impact problems on a graphics processing unit (GPU) using a special edge-smoothed triangular shell element based on ES-FEM. Of critical importance for this problem is achieving finer-grained parallelism to enable efficient data loading and to minimize communication between the device and host. Four parallel strategies are then developed to efficiently solve the ES-FEM-based shell element formulas, and various optimization methods are adopted to ensure aligned memory access. Special focus is dedicated to developing an approach for the parallel construction of edge systems. A parallel hierarchy-territory contact-searching algorithm (HITA) and a parallel penalty-function calculation method are embedded in this parallel explicit algorithm. Finally, the program flow is well designed, and a GPU-based simulation system is developed using Nvidia's CUDA. Several numerical examples are presented to illustrate the high quality of the results obtained with the proposed methods. In addition, the GPU-based parallel computation is shown to significantly reduce the computing time.

  13. Efficient simulation of incompressible viscous flow over multi-element airfoils

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Wiltberger, N. Lyn; Kwak, Dochan

    1992-01-01

    The incompressible, viscous, turbulent flow over single and multi-element airfoils is numerically simulated in an efficient manner by solving the incompressible Navier-Stokes equations. The computer code uses the method of pseudo-compressibility with an upwind-differencing scheme for the convective fluxes and an implicit line-relaxation solution algorithm. The motivation for this work includes interest in studying the high-lift take-off and landing configurations of various aircraft. In particular, accurate computation of lift and drag at various angles of attack, up to stall, is desired. Two different turbulence models are tested in computing the flow over an NACA 4412 airfoil; an accurate prediction of stall is obtained. The approach used for multi-element airfoils involves the use of multiple zones of structured grids fitted to each element. Two different approaches are compared: a patched system of grids, and an overlaid Chimera system of grids. Computational results are presented for two-element, three-element, and four-element airfoil configurations. Excellent agreement with experimental surface pressure coefficients is seen. The code converges in less than 200 iterations, requiring on the order of one minute of CPU time (on a CRAY YMP) per element in the airfoil configuration.
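
    For reference, the pseudo-compressibility method referred to above, in its standard Chorin-type form (a sketch, not necessarily this code's exact formulation), augments the continuity equation with a pseudo-time pressure derivative,

```latex
\frac{\partial p}{\partial \tau} + \beta \, \nabla \cdot \mathbf{u} = 0
```

    where β is the artificial compressibility parameter and τ the pseudo-time; driving the pseudo-time solution to a steady state recovers the incompressibility constraint ∇·u = 0 while keeping the system hyperbolic and amenable to upwind schemes.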

  14. Modelling of anisotropic growth in biological tissues. A new approach and computational aspects.

    PubMed

    Menzel, A

    2005-03-01

    In this contribution, we develop a theoretical and computational framework for anisotropic growth phenomena. As a key idea of the proposed phenomenological approach, a fibre or rather structural tensor is introduced, which allows the description of transversely isotropic material behaviour. Based on this additional argument, anisotropic growth is modelled via appropriate evolution equations for the fibre while volumetric remodelling is realised by an evolution of the referential density. Both the strength of the fibre as well as the density follow Wolff-type laws. We however elaborate on two different approaches for the evolution of the fibre direction, namely an alignment with respect to strain or with respect to stress. One of the main benefits of the developed framework is therefore the opportunity to address the evolutions of the fibre strength and the fibre direction separately. It is then straightforward to set up appropriate integration algorithms such that the developed framework fits nicely into common, finite element schemes. Finally, several numerical examples underline the applicability of the proposed formulation.

  15. Nutritional Aspects of Essential Trace Elements in Oral Health and Disease: An Extensive Review

    PubMed Central

    Hussain, Mohsina

    2016-01-01

    Human body requires certain essential elements in small quantities and their absence or excess may result in severe malfunctioning of the body and even death in extreme cases because these essential trace elements directly influence the metabolic and physiologic processes of the organism. Rapid urbanization and economic development have resulted in drastic changes in diets with developing preference towards refined diet and nutritionally deprived junk food. Poor nutrition can lead to reduced immunity, augmented vulnerability to various oral and systemic diseases, impaired physical and mental growth, and reduced efficiency. Diet and nutrition affect oral health in a variety of ways with influence on craniofacial development and growth and maintenance of dental and oral soft tissues. Oral potentially malignant disorders (OPMD) are treated with antioxidants containing essential trace elements like selenium but even increased dietary intake of trace elements like copper could lead to oral submucous fibrosis. The deficiency or excess of other trace elements like iodine, iron, zinc, and so forth has a profound effect on the body and such conditions are often diagnosed through their early oral manifestations. This review appraises the biological functions of significant trace elements and their role in preservation of oral health and progression of various oral diseases. PMID:27433374

  16. Plane Smoothers for Multiblock Grids: Computational Aspects

    NASA Technical Reports Server (NTRS)

    Llorente, Ignacio M.; Diskin, Boris; Melson, N. Duane

    1999-01-01

    Standard multigrid methods are not well suited for problems with anisotropic discrete operators, which can occur, for example, on grids that are stretched in order to resolve a boundary layer. One of the most efficient approaches to yield robust methods is the combination of standard coarsening with alternating-direction plane relaxation in three dimensions. However, this approach may be difficult to implement in codes with multiblock structured grids because there may be no natural definition of global lines or planes. This inherent obstacle limits the range of an implicit smoother to only the portion of the computational domain in the current block. This report studies in detail, both numerically and analytically, the behavior of blockwise plane smoothers in order to provide guidance to engineers who use block-structured grids. The results obtained so far show alternating-direction plane smoothers to be very robust, even on multiblock grids. In common computational fluid dynamics multiblock simulations, where the number of subdomains crossed by the line of a strong anisotropy is low (up to four), textbook multigrid convergence rates can be obtained with a small overlap of cells between neighboring blocks.
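
    The essence of a line (in 3D, plane) smoother is that the implicit solve is performed along the direction of strong coupling. The 2D sketch below applies a y-implicit line smoother, using the Thomas algorithm for the tridiagonal systems, to the anisotropic model problem −u_xx − ε u_yy = f; the grid size, ε, and the Dirichlet setup are illustrative assumptions.

```python
import numpy as np

# y-implicit line smoother for -u_xx - eps*u_yy = f on the unit square,
# homogeneous Dirichlet boundaries (sizes and eps are illustrative).
def line_smooth(u, f, h, eps):
    nx, ny = u.shape
    m = ny - 2                                    # interior unknowns per line
    for i in range(1, nx - 1):                    # sweep over y-lines
        sub = np.full(m, -eps); dia = np.full(m, 2.0 + 2.0 * eps); sup = np.full(m, -eps)
        rhs = h * h * f[i, 1:-1] + u[i - 1, 1:-1] + u[i + 1, 1:-1]
        for k in range(1, m):                     # Thomas algorithm: forward sweep
            w = sub[k] / dia[k - 1]
            dia[k] -= w * sup[k - 1]; rhs[k] -= w * rhs[k - 1]
        sol = np.empty(m); sol[-1] = rhs[-1] / dia[-1]
        for k in range(m - 2, -1, -1):            # back substitution
            sol[k] = (rhs[k] - sup[k] * sol[k + 1]) / dia[k]
        u[i, 1:-1] = sol                          # whole line updated at once
    return u

n, eps = 33, 100.0                                # strong coupling in y
h = 1.0 / (n - 1)
u = np.random.default_rng(0).random((n, n))
u[0] = u[-1] = 0.0; u[:, 0] = u[:, -1] = 0.0
f = np.ones((n, n))
for _ in range(20):
    u = line_smooth(u, f, h, eps)
print("smoothed field, max |u|:", np.abs(u).max())
```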

  17. Development of an object-oriented finite element program: application to metal-forming and impact simulations

    NASA Astrophysics Data System (ADS)

    Pantale, O.; Caperaa, S.; Rakotomalala, R.

    2004-07-01

    During the last 50 years, the development of better numerical methods and more powerful computers has been a major enterprise for the scientific community. At the same time, the finite element method has become a widely used tool for researchers and engineers. Recent advances in computational software have made it possible to solve more physical and complex problems such as coupled problems, nonlinearities, and high strain and high-strain rate problems. In this field, an accurate analysis of large deformation inelastic problems occurring in metal-forming or impact simulations is extremely important as a consequence of the large amount of plastic flow. In this presentation, the object-oriented implementation, using the C++ language, of an explicit finite element code called DynELA is described. Object-oriented programming (OOP) leads to better-structured codes for the finite element method and facilitates the development, maintainability and expandability of such codes. The most significant advantage of OOP is in the modeling of complex physical systems such as deformation processing, where the overall complex problem is partitioned into individual sub-problems based on physical, mathematical or geometric reasoning. We first focus on the advantages of OOP for the development of scientific programs. Specific aspects of OOP, such as the inheritance mechanism, the operator overloading procedure and the use of template classes, are detailed. Then we present the approach used for the development of our finite element code through the presentation of the kinematics, conservative and constitutive laws and their respective implementation in C++. Finally, the efficiency and accuracy of our finite element program are investigated using a number of benchmark tests relative to metal forming and impact simulations.

  18. Newly synthesized dihydroquinazoline derivative from the aspect of combined spectroscopic and computational study

    NASA Astrophysics Data System (ADS)

    El-Azab, Adel S.; Mary, Y. Sheena; Mary, Y. Shyma; Panicker, C. Yohannan; Abdel-Aziz, Alaa A.-M.; El-Sherbeny, Magda A.; Armaković, Stevan; Armaković, Sanja J.; Van Alsenoy, Christian

    2017-04-01

    In this work, the spectroscopic characterization of 2-(2-(4-oxo-3-phenethyl-3,4-dihydroquinazolin-2-ylthio)ethyl)isoindoline-1,3-dione has been carried out both experimentally and theoretically. Complete assignments of the fundamental vibrations were performed on the basis of the potential energy distribution of the vibrational modes, and good agreement between the experimental and scaled wavenumbers has been achieved. Frontier molecular orbitals have been used as indicators of stability and reactivity. Intramolecular interactions have been investigated by NBO analysis. The dipole moment, linear polarizability and first and second order hyperpolarizability values were also computed. In order to determine the molecular sites prone to electrophilic attack, DFT calculations of the average local ionization energy (ALIE) and Fukui functions have been performed as well. Intra-molecular non-covalent interactions have been determined and analyzed by the analysis of charge density. The stability of the title molecule has also been investigated with respect to autoxidation, by calculation of bond dissociation energies (BDE), and to hydrolysis, by calculation of radial distribution functions after molecular dynamics (MD) simulations. In order to assess the biological potential of the title compound, a molecular docking study towards the breast cancer type 2 complex has been performed.

  19. Failure detection in high-performance clusters and computers using chaotic map computations

    DOEpatents

    Rao, Nageswara S.

    2015-09-01

    A programmable medium includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable medium includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
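
    The detection principle is that identical deterministic chaotic-map computations, replicated on different computing elements, must agree bit for bit; chaos amplifies even a one-bit arithmetic error into a macroscopic divergence within a few dozen iterations. The sketch below illustrates this with a logistic map and an injected transient error; the specific map, threshold and fault model are our own assumptions, not the patent's.

```python
import numpy as np

# Replicated chaotic-map computation as a failure detector: healthy nodes
# produce bit-identical trajectories, so any divergence flags a fault.
# (Logistic map, threshold and fault model are illustrative assumptions.)
def chaotic_trajectory(x0, steps, faulty=False):
    x, traj = x0, []
    for k in range(steps):
        x = 3.99 * x * (1.0 - x)        # logistic map in its chaotic regime
        if faulty and k == 50:
            x += 1e-15                  # model a single transient bit error
        traj.append(x)
    return np.array(traj)

reference = chaotic_trajectory(0.123456789, 200)             # healthy node
suspect = chaotic_trajectory(0.123456789, 200, faulty=True)  # node under test
mismatch = np.abs(reference - suspect) > 1e-6
print("fault detected at step:", int(np.argmax(mismatch)))   # tiny error, fast divergence
```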

  20. Effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of space shuttle orbiter

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Olona, Timothy

    1987-01-01

    The effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of space shuttle orbiter was investigated. Several structural performance and resizing (SPAR) thermal models and NASA structural analysis (NASTRAN) structural models were set up for the orbiter wing midspan bay 3. The thermal model was found to be the one that determines the limit of finite-element fineness because of the limitation of computational core space required for the radiation view factor calculations. The thermal stresses were found to be extremely sensitive to a slight variation of structural temperature distributions. The minimum degree of element fineness required for the thermal model to yield reasonably accurate solutions was established. The radiation view factor computation time was found to be insignificant compared with the total computer time required for the SPAR transient heat transfer analysis.

  1. Numerical studies of the reversed-field pinch at high aspect ratio

    NASA Astrophysics Data System (ADS)

    Sätherblom, H.-E.; Drake, J. R.

    1998-10-01

    The reversed field pinch (RFP) configuration at an aspect ratio of 8.8 is studied numerically by means of the three-dimensional magnetohydrodynamic code DEBS [D. D. Schnack et al., J. Comput. Phys. 70, 330 (1987)]. This aspect ratio is equal to that of the Extrap T1 experiment [S. Mazur et al., Nucl. Fusion 34, 427 (1994)]. A numerical study of an RFP at this aspect ratio requires extensive computational resources and had hitherto not been performed. The results are compared with previous studies [Y. L. Ho et al., Phys. Plasmas 2, 3407 (1995)] of lower aspect ratio RFP configurations. In particular, an evaluation of the extrapolation to the aspect ratio of 8.8 made in that previous study shows that the extrapolation of the spectral spread, as well as most of the other findings, is confirmed. An important exception, however, is the magnetic diffusion coefficient, which is found to decrease with aspect ratio. Furthermore, an aspect ratio dependence of the magnetic energy and of the helicity of the RFP is found.

  2. Program Helps Generate Boundary-Element Mathematical Models

    NASA Technical Reports Server (NTRS)

    Goldberg, R. K.

    1995-01-01

    Composite Model Generation-Boundary Element Method (COM-GEN-BEM) computer program significantly reduces time and effort needed to construct boundary-element mathematical models of continuous-fiber composite materials at micro-mechanical (constituent) scale. Generates boundary-element models compatible with BEST-CMS boundary-element code for analysis of the micromechanics of composite materials. Written in PATRAN Command Language (PCL).

  3. Methodical and technological aspects of creation of interactive computer learning systems

    NASA Astrophysics Data System (ADS)

    Vishtak, N. M.; Frolov, D. A.

    2017-01-01

    The article presents a methodology for the development of an interactive computer training system for power plant personnel. The methods used in the work are a generalization of the content of scientific and methodological sources on the use of computer-based training systems in vocational education, methods of system analysis, and methods of structural and object-oriented modeling of information systems. The relevance of interactive computer training systems for the preparation of personnel in educational and training centres is substantiated. The development stages of such systems are identified, and the factors governing their efficient use are analysed. An algorithm of the work to be performed at each development stage is offered, which makes it possible to optimize the time, financial and labour expenditure involved in creating an interactive computer training system.

  4. Atomic adsorption on pristine graphene along the Periodic Table of Elements - From PBE to non-local functionals

    NASA Astrophysics Data System (ADS)

    Pašti, Igor A.; Jovanović, Aleksandar; Dobrota, Ana S.; Mentus, Slavko V.; Johansson, Börje; Skorodumova, Natalia V.

    2018-04-01

    The understanding of atomic adsorption on graphene is of high importance for many advanced technologies. Here we present a complete database of the atomic adsorption energies for the elements of the Periodic Table up to the atomic number 86 (excluding lanthanides) on pristine graphene. The energies have been calculated using the projector augmented wave (PAW) method with PBE, long-range dispersion interaction corrected PBE (PBE+D2, PBE+D3) as well as the non-local vdW-DF2 approach. The inclusion of dispersion interactions leads to an exothermic adsorption for all the investigated elements. Dispersion interactions are found to be of particular importance for the adsorption of low atomic weight alkaline earth metals, coinage and s-metals (11th and 12th groups), high atomic weight p-elements and noble gases. We discuss the observed adsorption trends along the groups and rows of the Periodic Table as well as some computational aspects of modelling atomic adsorption on graphene.

  5. Boundary element analysis of post-tensioned slabs

    NASA Astrophysics Data System (ADS)

    Rashed, Youssef F.

    2015-06-01

    In this paper, the boundary element method is applied to carry out the structural analysis of post-tensioned flat slabs. The shear-deformable plate-bending model is employed. The effect of the pre-stressing cables is taken into account via the equivalent load method. The formulation is automated using a computer program, which uses quadratic boundary elements. Verification samples are presented, and finally a practical application is analyzed where results are compared against those obtained from the finite element method. The proposed method is efficient in terms of computer storage and processing time as well as the ease in data input and modifications.
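
    The equivalent load method mentioned above replaces the tendon by the transverse forces it exerts on the concrete; for the common case of a parabolic tendon of drape e and force P over a span L, this is a uniformly distributed upward load w = 8Pe/L². A minimal worked example, with illustrative numbers:

```python
# Equivalent load method for a parabolic tendon (illustrative numbers).
P = 1200e3   # prestressing force [N]
e = 0.10     # tendon drape (sag) [m]
L = 8.0      # span [m]

w_up = 8.0 * P * e / L**2       # equivalent uniform upward load [N/m]
M_mid = w_up * L**2 / 8.0       # midspan moment balanced by the tendon [N*m]
print(f"upward load: {w_up / 1e3:.1f} kN/m, balanced moment: {M_mid / 1e3:.0f} kN*m")
# the balanced moment equals P*e, as expected from statics
```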

  6. Diffractive Optical Elements for Spectral Imaging

    NASA Technical Reports Server (NTRS)

    Wilson, D.; Maker, P.; Muller, R.; Mouroulis, P.; Descour, M.; Volin, C.; Dereniak, E.

    2000-01-01

    Diffractive optical elements fabricated on flat and non-flat substrates frequently act as dispersive elements in imaging spectrometers. We describe the design and electron-beam fabrication of blazed and computer-generated-hologram gratings for slit and tomographic imaging spectrometers.

  8. What Aspects of Personal Care Are Most Important to Patients Undergoing Radiation Therapy for Prostate Cancer?

    PubMed

    Foley, Kimberley A; Feldman-Stewart, Deb; Groome, Patti A; Brundage, Michael D; McArdle, Siobhan; Wallace, David; Peng, Yingwei; Mackillop, William J

    2016-02-01

    The overall quality of patient care is a function of the quality of both its technical and its nontechnical components. The purpose of this study was to identify the elements of nontechnical (personal) care that are most important to patients undergoing radiation therapy for prostate cancer. We reviewed the literature and interviewed patients and health professionals to identify elements of personal care pertinent to patients undergoing radiation therapy for prostate cancer. We identified 143 individual elements relating to 10 aspects of personal care. Patients undergoing radical radiation therapy for prostate cancer completed a self-administered questionnaire in which they rated the importance of each element. The overall importance of each element was measured by the percentage of respondents who rated it as "very important." The importance of each aspect of personal care was measured by the mean importance of its elements. One hundred eight patients completed the questionnaire. The percentage of patients who rated each element "very important" ranged from 7% to 95% (mean 61%). The mean importance rating of the elements of each aspect of care varied significantly: "perceived competence of caregivers," 80%; "empathy and respectfulness of caregivers," 67%; "adequacy of information sharing," 67%; "patient centeredness," 59%; "accessibility of caregivers," 57%; "continuity of care," 51%; "privacy," 51%; "convenience," 45%; "comprehensiveness of services," 44%; and "treatment environment," 30% (P<.0001). Neither age nor education was associated with importance ratings, but the patient's health status was associated with the rating of some elements of care. Many different elements of personal care are important to patients undergoing radiation therapy for prostate cancer, but the 3 aspects of care that most patients believe are most important are these: the perceived competence of their caregivers, the empathy and respectfulness of their caregivers, and the adequacy of

  9. Structural aspects of denitrifying enzymes.

    PubMed

    Moura, I; Moura, J J

    2001-04-01

    The reduction of nitrate to nitrogen gas via nitrite, nitric oxide and nitrous oxide is the metabolic pathway usually known as denitrification, a key step in the nitrogen cycle. As observed for other elemental cycles, a battery of enzymes are utilized, namely the reductases for nitrate, nitrite, nitric oxide and nitrous oxide, as well as multiple electron donors that interact with these enzymes, in order to carry out the stepwise reactions that involve key intermediates. Because of the importance of this pathway (of parallel importance to the nitrogen-fixation pathway), efforts are underway to understand the structures of the participating enzymes and to uncover mechanistic aspects. Three-dimensional structures have been solved for the majority of these enzymes in the past few years, revealing the architecture of the active metal sites as well as global structural aspects, and possible mechanistic aspects. In addition, the recognition of specific electron-transfer partners raises important questions regarding specific electron-transfer pathways, partner recognition and control of metabolism.

  10. Mathematical Aspects of Finite Element Methods for Incompressible Viscous Flows.

    DTIC Science & Technology

    1986-09-01

    respectively. Here h is a parameter which is usually related to the size of the grid associated with the finite element partitioning of Ω. Then one... grid and of not at least performing serious mesh refinement studies. It also points out the usefulness of rigorous results concerning the stability... overconstrained the approximate velocity field. However, by employing different grids for the pressure and velocity fields, the linear-constant

  11. Prediction and phylogenetic analysis of mammalian short interspersed elements (SINEs).

    PubMed

    Rogozin, I B; Mayorov, V I; Lavrentieva, M V; Milanesi, L; Adkison, L R

    2000-09-01

    The presence of repetitive elements can create serious problems for sequence analysis, especially in the case of homology searches in nucleotide sequence databases. Repetitive elements should be treated carefully by using special programs and databases. In this paper, various aspects of SINE (short interspersed repetitive element) identification, analysis and evolution are discussed.

  12. VIEWIT: computation of seen areas, slope, and aspect for land-use planning

    Treesearch

    Michael R. Travis; Gary H. Elsner; Wayne D. Iverson; Christine G. Johnson

    1975-01-01

    This user's guide provides instructions for using VIEWIT--a computerized technique for delineating the terrain visible from a single point or from multiple observer points, and for doing slope and aspect analyses. Results are in tabular or in overlay map form. VIEWIT can do individual view-area, slope, or aspect analyses or combined analyses, and can produce...

  13. Transferring elements of a density matrix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allahverdyan, Armen E.; Hovhannisyan, Karen V.

    2010-01-15

    We study restrictions imposed by quantum mechanics on the process of matrix-element transfer. This problem is at the core of quantum measurements and state transfer. Given two systems A and B with initial density matrices λ and r, respectively, we consider interactions that lead to transferring certain matrix elements of unknown λ into those of the final state r̃ of B. We find that this process eliminates the memory on the transferred (or certain other) matrix elements from the final state of A. If one diagonal matrix element is transferred, r̃_aa = λ_aa, the memory on each nondiagonal element λ_ab (a ≠ b) is completely eliminated from the final density operator of A. Consider the following three quantities: Re λ_ab, Im λ_ab, and λ_aa − λ_bb (the real and imaginary parts of a nondiagonal element and the corresponding difference between diagonal elements). Transferring one of them, e.g., Re r̃_ab = Re λ_ab, erases the memory on the two others from the final state of A. Generalization of these setups to a finite-accuracy transfer brings in a trade-off between the accuracy and the amount of preserved memory. This trade-off is expressed via system-independent uncertainty relations that account for local aspects of the accuracy-disturbance trade-off in quantum measurements. Thus, the general aspect of state disturbance in quantum measurements is the elimination of memory on nondiagonal elements, rather than diagonalization.

  14. Addition of higher order plate and shell elements into NASTRAN computer program

    NASA Technical Reports Server (NTRS)

    Narayanaswami, R.; Goglia, G. L.

    1976-01-01

    Two higher order plate elements, the linear strain triangular membrane element and the quintic bending element, along with a shallow shell element, suitable for inclusion into the NASTRAN (NASA Structural Analysis) program are described. Additions to the NASTRAN Theoretical Manual, Users' Manual, Programmers' Manual and the NASTRAN Demonstration Problem Manual, for inclusion of these elements into the NASTRAN program are also presented.

  15. Finite element dynamic analysis of soft tissues using state-space model.

    PubMed

    Iorga, Lucian N; Shan, Baoxiang; Pelegri, Assimina A

    2009-04-01

    A finite element (FE) model is employed to investigate the dynamic response of soft tissues under external excitations, particularly corresponding to the case of harmonic motion imaging. A solid 3D mixed 'u-p' element S8P0 is implemented to capture the near-incompressibility inherent in soft tissues. Two important aspects in the structural modelling of these tissues are studied: the influence of viscous damping on the dynamic response and, following FE modelling, a state-space formulation that evaluates the efficiency of several order reduction methods. It is illustrated that the order of the mathematical model can be significantly reduced while preserving the accuracy of the observed system dynamics. Thus, the reduced-order state-space representation of soft tissues for general dynamic analysis significantly reduces the computational cost and provides a unified framework for the 'forward' simulation and 'inverse' estimation of soft tissues. Moreover, the results suggest that damping in soft tissue is significant, effectively cancelling the contribution of all but the first few vibration modes.
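
    The order-reduction step can be sketched with a modal truncation of a generic structural state-space model x' = Ax + Bu, y = Cx: only the slowest eigenmodes are retained. The toy damped mass-spring chain below stands in for the soft-tissue FE model, and the retained order and all parameter values are illustrative assumptions (conjugate eigenpairs could be recombined into a real-valued reduced system).

```python
import numpy as np

# Modal truncation of a structural state-space model x' = Ax + Bu, y = Cx.
# Toy spring chain of unit masses, grounded at both ends (illustrative stand-in).
n = 50
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness matrix
D = 0.8 * K                                              # stiffness-proportional damping
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K, -D]])
B = np.zeros((2 * n, 1)); B[-1, 0] = 1.0                 # force on the last mass
C = np.zeros((1, 2 * n)); C[0, n - 1] = 1.0              # observe last displacement

w, V = np.linalg.eig(A)
keep = np.argsort(np.abs(w))[:10]                        # retain the 10 slowest modes
Vr = V[:, keep]
Vl = np.linalg.pinv(Vr)                                  # projector onto the kept modes
Ar, Br, Cr = Vl @ A @ Vr, Vl @ B, C @ Vr                 # reduced (complex-valued) system
print("order reduced from", A.shape[0], "to", Ar.shape[0])
```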

  16. Element fracture technique for hypervelocity impact simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-tian; Li, Xiao-gang; Liu, Tao; Jia, Guang-hui

    2015-05-01

    Hypervelocity impact dynamics is the theoretical support of spacecraft shielding against space debris. Numerical simulation has become an important approach for obtaining the ballistic limits of spacecraft shields. Currently, the most widely used algorithm for hypervelocity impact is smoothed particle hydrodynamics (SPH). Although the finite element method (FEM) is widely used in fracture mechanics and low-velocity impacts, the standard FEM can hardly simulate the debris cloud generated by hypervelocity impact. This paper presents a successful application of the node-separation technique for hypervelocity impact debris cloud simulation. The node-separation technique assigns individual/coincident nodes to the adjacent elements, and it applies constraints to the coincident node sets in the modeling step. In the explicit iteration, cracks are generated by releasing the constrained node sets that meet the fracture criterion. Additionally, distorted elements are identified from two aspects - self-piercing and phase change - and are deleted so that the constitutive computation can continue. FEM with the node-separation technique is used for thin-wall hypervelocity impact simulations. The internal structures of the debris cloud in the simulation output are compared with those in the test X-ray images under different material fracture criteria. This comparison shows that the pressure criterion is more appropriate for hypervelocity impact. The internal structures of the debris cloud are also simulated and compared under different thickness-to-diameter ratios (t/D). The simulation outputs show the same spall pattern as the tests. Finally, the triple-plate impact case is simulated with node-separation FEM.

  17. Experimental study of low aspect ratio compressor blading

    NASA Technical Reports Server (NTRS)

    Reid, L.; Moore, R. D.

    1979-01-01

    The effects of low aspect ratio blading on aerodynamic performance were examined. Four individual transonic compressor stages, representative of the inlet stage of an advanced high pressure ratio core compressor, are discussed. The flow phenomena for the four stages are investigated. Comparisons of blade element parameters are presented for the two different aspect ratio configurations. Blade loading levels are compared for the near stall conditions and comparisons are made of loss and diffusion factors over the operating range of incidence angles.

  18. Mechanical testing and finite element analysis of orthodontic teardrop loop.

    PubMed

    Coimbra, Maria Elisa Rodrigues; Penedo, Norman Duque; de Gouvêa, Jayme Pereira; Elias, Carlos Nelson; de Souza Araújo, Mônica Tirre; Coelho, Paulo Guilherme

    2008-02-01

    Understanding how teeth move in response to mechanical loads is an important aspect of orthodontic treatment. Treatment planning should include consideration of the appliances that will meet the desired loading of the teeth to result in optimized treatment outcomes. The purpose of this study was to evaluate the use of computer simulation to predict the force and the torsion obtained after the activation of teardrop loops of 3 heights. Seventy-five retraction loops were divided into 3 groups according to height (6, 7, and 8 mm). The loops were subjected to tensile load through displacements of 0.5, 1.0, 1.5, and 2.0 mm, and the resulting forces and torques were recorded. The loops were designed in AutoCAD software (2005; Autodesk Systems, Alpharetta, GA), and finite element analysis was performed with Ansys software (version 7.0; Swanson Analysis System, Canonsburg, PA). Statistical analysis of the mechanical experiment results was obtained by ANOVA and the Tukey post-hoc test (P < .01). The correlation test and the paired t test (P < .05) were used to compare the computer simulation with the mechanical experiment. The computer simulation accurately predicted the experimentally determined mechanical behavior of teardrop loops of different heights and should be considered an alternative for designing orthodontic appliances before treatment.

  19. Computational modeling of chemo-electro-mechanical coupling: A novel implicit monolithic finite element approach

    PubMed Central

    Wong, J.; Göktepe, S.; Kuhl, E.

    2014-01-01

    Computational modeling of the human heart allows us to predict how chemical, electrical, and mechanical fields interact throughout a cardiac cycle. Pharmacological treatment of cardiac disease has advanced significantly over the past decades, yet it remains unclear how the local biochemistry of an individual heart cell translates into global cardiac function. Here we propose a novel, unified strategy to simulate excitable biological systems across three biological scales. To discretize the governing chemical, electrical, and mechanical equations in space, we propose a monolithic finite element scheme. We apply a highly efficient and inherently modular global-local split, in which the deformation and the transmembrane potential are introduced globally as nodal degrees of freedom, while the chemical state variables are treated locally as internal variables. To ensure unconditional algorithmic stability, we apply an implicit backward Euler finite difference scheme to discretize the resulting system in time. To increase algorithmic robustness and guarantee optimal quadratic convergence, we suggest an incremental iterative Newton-Raphson scheme. The proposed algorithm allows us to simulate the interaction of chemical, electrical, and mechanical fields during a representative cardiac cycle on a patient-specific geometry, robust and stable, with calculation times on the order of four days on a standard desktop computer. PMID:23798328
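
    The local (internal-variable) update in such a scheme amounts to a backward Euler step solved with Newton-Raphson iterations at each point. The sketch below applies exactly that time discretization to the FitzHugh-Nagumo equations as a stand-in for the paper's cardiac cell chemistry; the model choice and parameter values are our own illustrative assumptions.

```python
import numpy as np

# Backward Euler + Newton-Raphson for a local excitable system
# (FitzHugh-Nagumo as a stand-in; textbook parameters, not the paper's).
a, b, tau, I = 0.7, 0.8, 12.5, 0.5

def rate(z):
    v, r = z
    return np.array([v - v**3 / 3.0 - r + I,   # membrane potential
                     (v + a - b * r) / tau])   # recovery variable

def jac(z):                                    # Jacobian of the rate function
    v, r = z
    return np.array([[1.0 - v**2, -1.0],
                     [1.0 / tau, -b / tau]])

z, dt = np.array([-1.2, -0.6]), 0.1
for step in range(400):                        # implicit time stepping
    zn = z.copy()
    for newton in range(20):                   # quadratically convergent iteration
        res = z - zn - dt * rate(z)
        if np.linalg.norm(res) < 1e-12:
            break
        z = z - np.linalg.solve(np.eye(2) - dt * jac(z), res)
print("state after", 400 * dt, "time units:", np.round(z, 4))
```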

  20. A multidimensional finite element method for CFD

    NASA Technical Reports Server (NTRS)

    Pepper, Darrell W.; Humphrey, Joseph W.

    1991-01-01

    A finite element method is used to solve the equations of motion for 2- and 3-D fluid flow. The time-dependent equations are solved explicitly using quadrilateral (2-D) and hexahedral (3-D) elements, mass lumping, and reduced integration. A Petrov-Galerkin technique is applied to the advection terms. The method requires a minimum of computational storage, executes quickly, and is scalable for execution on computer systems ranging from PCs to supercomputers.

  1. On modelling three-dimensional piezoelectric smart structures with boundary spectral element method

    NASA Astrophysics Data System (ADS)

    Zou, Fangxin; Aliabadi, M. H.

    2017-05-01

    The computational efficiency of the boundary element method in elastodynamic analysis can be significantly improved by employing high-order spectral elements for boundary discretisation. In this work, for the first time, the so-called boundary spectral element method is utilised to model the piezoelectric smart structures that are widely used in structural health monitoring (SHM) applications. The resultant boundary spectral element formulation has been validated by the finite element method (FEM) and physical experiments. The new formulation has demonstrated a lower demand on computational resources and a higher numerical stability than commercial FEM packages. Compared with the conventional boundary element formulation, a significant reduction in computational expenses has been achieved. In summary, the boundary spectral element formulation presented in this paper provides a highly efficient and stable mathematical tool for the development of SHM applications.

  2. Subversion: The Neglected Aspect of Computer Security.

    DTIC Science & Technology

    1980-06-01

    fundamentally flawed. Recall from mathematics that it is sufficient to disprove a proposition (e.g., that a system is secure) by showing only one example where...made. This lack of protection is one of the fundamental reasons why the subversion of computer systems can be so effective. Later chapters will amplify...an area of code that will not be liable to revision. Operating system software, as pointed out earlier, is often riddled with design errors or subject

  3. Finite element Compton tomography

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Amouzou, Pauline; Menon, Naresh; Gertsenshteyn, Michael

    2007-09-01

    In this paper a new approach to 3D Compton imaging is presented, based on a kind of finite element (FE) analysis. A window for X-ray incoherent scattering (or Compton scattering) attenuation coefficients is identified for breast cancer diagnosis, for hard X-ray photon energy of 100-300 keV. The point-by-point power/energy budget is computed, based on a 2D array of X-ray pencil beams, scanned vertically. The acceptable medical doses are also computed. The proposed finite element tomography (FET) can be an alternative to X-ray mammography, tomography, and tomosynthesis. In experiments, 100 keV (on average) X-ray photons are applied, and a new type of pencil beam collimation, based on a Lobster-Eye Lens (LEL), is proposed.
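
    The point-by-point power budget referred to above follows from Beer-Lambert attenuation of each pencil beam, with the incoherent (Compton) part of the attenuation coefficient determining how much power is scattered toward the detectors. A minimal sketch, with rough order-of-magnitude coefficients for soft tissue (assumptions, not the paper's data):

```python
import numpy as np

# Beer-Lambert power budget for one scanned pencil beam (coefficients are
# rough soft-tissue assumptions near 150 keV, not the paper's data).
mu_total = 0.17     # total linear attenuation coefficient [1/cm]
mu_compton = 0.15   # incoherent (Compton) scattering part [1/cm]

z = np.linspace(0.0, 6.0, 61)            # depth along a 6 cm beam path [cm]
P = np.exp(-mu_total * z)                # surviving primary power (normalised)
dP_scatter = mu_compton * P              # power scattered out per cm at each depth

dz = z[1] - z[0]
scattered = 0.5 * (dP_scatter[:-1] + dP_scatter[1:]).sum() * dz  # trapezoid rule
print(f"transmitted: {P[-1]:.3f}, Compton-scattered: {scattered:.3f}")
```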

  4. Computer aided stress analysis of long bones utilizing computer tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marom, S.A.

    1986-01-01

    A computer aided analysis method, utilizing computed tomography (CT), has been developed which, together with a finite element program, determines the stress-displacement pattern in a long bone section. The CT data file provides the geometry, the density and the material properties for the generated finite element model. A three-dimensional finite element model of a tibial shaft is automatically generated from the CT file by a pre-processing procedure for a finite element program. The developed pre-processor includes an edge detection algorithm which determines the boundaries of the reconstructed cross-sectional images of the scanned bone. A mesh generation procedure then automatically generates a three-dimensional mesh of a user-selected refinement. The elastic properties needed for the stress analysis are individually determined for each model element using the radiographic density (CT number) of each pixel within the element borders. The elastic modulus is determined from the CT radiographic density by using an empirical relationship from the literature. The generated finite element model, together with applied loads determined from existing gait analysis and initial displacements, comprises a formatted input for the SAP IV finite element program. The output of this program, stresses and displacements at the model elements and nodes, is sorted and displayed by a developed post-processor to provide maximum and minimum values at selected locations in the model.
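
    The material-assignment step, mapping each element's CT number to an elastic modulus through an empirical density-modulus relationship, can be sketched in a few lines; the calibration constants below are literature-style values for bone and are assumptions, not those used in the cited work:

```python
import numpy as np

# CT number -> apparent density -> elastic modulus, per element
# (calibration constants are literature-style assumptions for bone).
def ct_to_modulus(hu):
    rho = 1.0 + hu / 1000.0        # apparent density [g/cm^3], linear calibration
    return 6.85 * rho**1.49        # elastic modulus [GPa], empirical power law

element_ct = np.array([300.0, 800.0, 1200.0])    # mean CT number per element
print("per-element moduli [GPa]:", np.round(ct_to_modulus(element_ct), 2))
```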

  5. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  6. Methods for improving simulations of biological systems: systemic computation and fractal proteins

    PubMed Central

    Bentley, Peter J.

    2009-01-01

    Modelling and simulation are becoming essential for new fields such as synthetic biology. Perhaps the most important aspect of modelling is to follow a clear design methodology that will help to highlight unwanted deficiencies. The use of tools designed to aid the modelling process can be of benefit in many situations. In this paper, the modelling approach called systemic computation (SC) is introduced. SC is an interaction-based language, which enables individual-based expression and modelling of biological systems, and the interactions between them. SC permits a precise description of a hypothetical mechanism to be written using an intuitive graph-based or a calculus-based notation. The same description can then be directly run as a simulation, merging the hypothetical mechanism and the simulation into the same entity. However, even when using well-designed modelling tools to produce good models, the best model is not always the most accurate one. Frequently, computational constraints or lack of data make it infeasible to model an aspect of biology. Simplification may provide one way forward, but with inevitable consequences of decreased accuracy. Instead of attempting to replace an element with a simpler approximation, it is sometimes possible to substitute the element with a different but functionally similar component. In the second part of this paper, this modelling approach is described and its advantages are summarized using an exemplar: the fractal protein model. Finally, the paper ends with a discussion of good biological modelling practice by presenting lessons learned from the use of SC and the fractal protein model. PMID:19324681

  7. Report of a Workshop on the Pedagogical Aspects of Computational Thinking

    ERIC Educational Resources Information Center

    National Academies Press, 2011

    2011-01-01

    In 2008, the Computer and Information Science and Engineering Directorate of the National Science Foundation asked the National Research Council (NRC) to conduct two workshops to explore the nature of computational thinking and its cognitive and educational implications. The first workshop focused on the scope and nature of computational thinking…

  8. Computational optical palpation: a finite-element approach to micro-scale tactile imaging using a compliant sensor

    PubMed Central

    Sampson, David D.; Kennedy, Brendan F.

    2017-01-01

    High-resolution tactile imaging, superior to the sense of touch, has potential for future biomedical applications such as robotic surgery. In this paper, we propose a tactile imaging method, termed computational optical palpation, based on measuring the change in thickness of a thin, compliant layer with optical coherence tomography and calculating tactile stress using finite-element analysis. We demonstrate our method on test targets and on freshly excised human breast fibroadenoma, demonstrating a resolution of up to 15–25 µm and a field of view of up to 7 mm. Our method is open source and readily adaptable to other imaging modalities, such as ultrasonography and confocal microscopy. PMID:28250098
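
    To first order, the principle can be sketched without the finite-element step: the OCT-measured thickness change of the compliant layer gives a strain map, and an assumed stress-strain law for the layer converts it to tactile stress. The uniaxial linear-elastic sketch below, with invented layer properties and a synthetic indentation profile, shows this zeroth-order version of what the paper computes more accurately with finite-element analysis:

```python
import numpy as np

# Zeroth-order optical palpation: layer thickness change -> strain -> stress
# (layer modulus, thickness and indentation profile are invented).
E_layer = 20e3                     # layer Young's modulus [Pa]
t0 = 0.5e-3                        # undeformed layer thickness [m]

xs = np.linspace(-1.0, 1.0, 101)                   # lateral position [mm]
t_map = t0 * (1.0 - 0.1 * np.exp(-xs**2 / 0.05))   # OCT-measured thickness [m]
strain = (t0 - t_map) / t0                         # compressive strain
stress = E_layer * strain                          # uniaxial tactile stress [Pa]
print(f"peak tactile stress: {stress.max():.0f} Pa")
```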

  9. Instructional Aspects of Intelligent Tutoring Systems.

    ERIC Educational Resources Information Center

    Pieters, Jules M., Ed.

    This collection contains three papers addressing the instructional aspects of intelligent tutoring systems (ITS): (1) "Some Experiences with Two Intelligent Tutoring Systems for Teaching Computer Programming: Proust and the LISP-Tutor" (van den Berg, Merrienboer, and Maaswinkel); (2) "Some Issues on the Construction of Cooperative…

  10. A comparison between families obtained from different proper elements

    NASA Technical Reports Server (NTRS)

    Zappala, Vincenzo; Cellino, Alberto; Farinella, Paolo

    1992-01-01

    Using the hierarchical method of family identification developed by Zappala et al., the results coming from the data set of proper elements computed by Williams (about 2100 numbered + about 1200 PLS 2 asteroids) and by Milani and Knezevic (version 5.7, about 4200 asteroids) are compared. Apart from some expected discrepancies due to the different data sets and/or the low accuracy of proper elements computed in peculiar dynamical zones, a good agreement was found in several cases. It follows that these high-reliability families represent a sample which can be considered independent of the methods used for the computation of their proper elements. Therefore, they should be considered as the best candidates for detailed physical studies.
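
    Hierarchical family identification of this kind can be sketched as single-linkage clustering in proper element space (a', e', sin i') with a Zappala-type velocity metric and a distance cutoff. The synthetic asteroid sample, the orbital-speed scale, and the 150 m/s cutoff below are illustrative assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Single-linkage family identification in proper element space with a
# Zappala-type velocity metric (synthetic sample, illustrative cutoff).
rng = np.random.default_rng(0)
family = rng.normal([2.75, 0.15, 0.08], [0.01, 0.005, 0.003], (50, 3))
background = rng.uniform([2.1, 0.0, 0.0], [3.3, 0.3, 0.3], (200, 3))
elems = np.vstack([family, background])       # columns: a' [AU], e', sin i'

def d_metric(p, q, v_orb=1.8e4):              # orbital speed scale n*a [m/s]
    a_mean = 0.5 * (p[0] + q[0])
    return v_orb * np.sqrt(1.25 * ((p[0] - q[0]) / a_mean)**2
                           + 2.0 * (p[1] - q[1])**2
                           + 2.0 * (p[2] - q[2])**2)

m = len(elems)                                # condensed pairwise distance vector
dists = np.array([d_metric(elems[i], elems[j])
                  for i in range(m) for j in range(i + 1, m)])
tree = linkage(dists, method="single")        # the hierarchical "stalactite"
labels = fcluster(tree, t=150.0, criterion="distance")    # 150 m/s cutoff
print("largest family found:", np.bincount(labels).max()) # recovers the injected family
```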

  11. Injector element characterization methodology

    NASA Technical Reports Server (NTRS)

    Cox, George B., Jr.

    1988-01-01

    Characterization of liquid rocket engine injector elements is an important part of the development process for rocket engine combustion devices. Modern nonintrusive instrumentation for flow velocity and spray droplet size measurement, and automated, computer-controlled test facilities allow rapid, low-cost evaluation of injector element performance and behavior. Application of these methods in rocket engine development, paralleling their use in gas turbine engine development, will reduce rocket engine development cost and risk. The Alternate Turbopump (ATP) Hot Gas Systems (HGS) preburner injector elements were characterized using such methods, and the methodology and some of the results obtained will be shown.

  12. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-01-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
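
    The second-moment machinery can be illustrated on a toy problem: for a linear system K(b)u = f with a random parameter b, a first-order perturbation gives the response sensitivity du/db = −K⁻¹(∂K/∂b)u, from which the mean and variance of the response follow. The two-spring example below, including the Monte Carlo check of the kind the text compares against, is our own illustration:

```python
import numpy as np

# First-order second-moment propagation for K(b) u = f with random b
# (two-spring toy problem; spring 2 stiffness b is the random input).
f = np.array([0.0, 1.0])
mean_b, std_b = 1.0, 0.2

def K(b):                                     # springs k1 = 1 (grounded) and k2 = b
    return np.array([[1.0 + b, -b], [-b, b]])

dK_db = np.array([[1.0, -1.0], [-1.0, 1.0]])

u0 = np.linalg.solve(K(mean_b), f)                   # mean response
du_db = -np.linalg.solve(K(mean_b), dK_db @ u0)      # first-order sensitivity
std_u = np.abs(du_db) * std_b                        # second-moment estimate
print("FOSM mean:", u0, "std:", std_u)

# Monte Carlo reference for comparison
bs = np.random.default_rng(1).normal(mean_b, std_b, 20000)
samples = np.array([np.linalg.solve(K(b), f) for b in bs])
print("MC   mean:", samples.mean(0), "std:", samples.std(0))
```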

  13. Variational approach to probabilistic finite elements

    NASA Astrophysics Data System (ADS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-08-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  14. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1987-01-01

    Probabilistic finite element method (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties, and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  15. Examining the Minimal Required Elements of a Computer-Tailored Intervention Aimed at Dietary Fat Reduction: Results of a Randomized Controlled Dismantling Study

    ERIC Educational Resources Information Center

    Kroeze, Willemieke; Oenema, Anke; Dagnelie, Pieter C.; Brug, Johannes

    2008-01-01

    This study investigated the minimally required feedback elements of a computer-tailored dietary fat reduction intervention to be effective in improving fat intake. In all, 588 healthy Dutch adults were randomly allocated to one of four conditions in a randomized controlled trial: (i) feedback on dietary fat intake [personal feedback (P feedback)],…

  16. Cognitive neuroscience in forensic science: understanding and utilizing the human element

    PubMed Central

    Dror, Itiel E.

    2015-01-01

    The human element plays a critical role in forensic science. It is not limited only to issues relating to forensic decision-making, such as bias, but also relates to most aspects of forensic work (some of which even take place before a crime is ever committed or long after the verification of the forensic conclusion). In this paper, I explicate many aspects of forensic work that involve the human element and therefore show the relevance (and potential contribution) of cognitive neuroscience to forensic science. The 10 aspects covered in this paper are proactive forensic science, selection during recruitment, training, crime scene investigation, forensic decision-making, verification and conflict resolution, reporting, the role of the forensic examiner, presentation in court and judicial decisions. As the forensic community is taking on the challenges introduced by the realization that the human element is critical for forensic work, new opportunities emerge that allow for considerable improvement and enhancement of the forensic science endeavour. PMID:26101281

  17. Addressing social aspects associated with wastewater treatment facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padilla-Rivera, Alejandro; Morgan-Sagastume, Juan Manuel; Noyola, Adalberto

    In wastewater treatment facilities (WWTF), technical and financial aspects have been considered a priority, while other issues, such as social aspects, have not been evaluated seriously and there is no accepted methodology for assessing them. In this work, a methodology focused on social concerns related to WWTF is presented. The methodology proposes the use of 25 indicators as a framework for measuring social performance to evaluate the progress in moving towards sustainability. The methodology was applied to test its applicability and effectiveness in two WWTF in Mexico (urban and rural). This evaluation helped define the key elements, stakeholders and barriers in the facilities. In this context, the urban facility showed a better overall performance, a result that may be explained mainly by the better socioeconomic context of the urban municipality. Finally, the evaluation of social aspects using the semi-qualitative approach proposed in this work allows for a comparison between different facilities and for the identification of strengths and weaknesses, and it provides an alternative tool for achieving and improving wastewater management. - Highlights: • The methodology proposes 25 indicators as a framework for measuring social performance in wastewater treatment facilities. • The evaluation helped to define the key elements, stakeholders and barriers in the wastewater treatment facilities. • The evaluation of social aspects allows the identification of strengths and weaknesses for improving wastewater management. • It provides a social profile of the facility that highlights the best and worst performances.

  18. Methodologies and systems for heterogeneous concurrent computing

    NASA Technical Reports Server (NTRS)

    Sunderam, V. S.

    1994-01-01

    Heterogeneous concurrent computing is gaining increasing acceptance as an alternative or complementary paradigm to multiprocessor-based parallel processing as well as to conventional supercomputing. While algorithmic and programming aspects of heterogeneous concurrent computing are similar to their parallel processing counterparts, system issues, partitioning and scheduling, and performance aspects are significantly different. In this paper, we discuss critical design and implementation issues in heterogeneous concurrent computing, and describe techniques for enhancing its effectiveness. In particular, we highlight the system level infrastructures that are required, aspects of parallel algorithm development that most affect performance, system capabilities and limitations, and tools and methodologies for effective computing in heterogeneous networked environments. We also present recent developments and experiences in the context of the PVM system and comment on ongoing and future work.

  19. Probabilistic finite elements for fracture mechanics

    NASA Technical Reports Server (NTRS)

    Besterfield, Glen

    1988-01-01

    The probabilistic finite element method (PFEM) is developed for probabilistic fracture mechanics (PFM). A finite element which has the near crack-tip singular strain embedded in the element is used. Probabilistic distributions, such as the expectation, covariance and correlation of stress intensity factors, are calculated for random load, random material and random crack length. The method is computationally quite efficient and can be expected to determine the probability of fracture or reliability.

  20. RELATIONSHIP BETWEEN RIGIDITY OF EXTERNAL FIXATOR AND NUMBER OF PINS: COMPUTER ANALYSIS USING FINITE ELEMENTS

    PubMed Central

    Sternick, Marcelo Back; Dallacosta, Darlan; Bento, Daniela Águida; do Reis, Marcelo Lemos

    2015-01-01

    Objective: To analyze the rigidity of a platform-type external fixator assembly, according to different numbers of pins on each clamp. Methods: Computer simulation on a large-sized Cromus dynamic external fixator (Baumer SA) was performed using a finite element method, in accordance with the standard ASTM F1541. The models were generated with approximately 450,000 quadratic tetrahedral elements. Assemblies with two, three and four Schanz pins of 5.5 mm in diameter in each clamp were compared. Every model was subjected to a maximum force of 200 N, divided into 10 sub-steps. For the components, the behavior of the material was assumed to be linear, elastic, isotropic and homogeneous. For each model, the rigidity of the assembly and the Von Mises stress distribution were evaluated. Results: The rigidity of the system was 307.6 N/mm for two pins, 369.0 N/mm for three and 437.9 N/mm for four. Conclusion: The results showed that four Schanz pins in each clamp promoted rigidity that was 19% greater than in the configuration with three pins and 42% greater than with two pins. Higher tension occurred in configurations with fewer pins. In the models analyzed, the maximum tension occurred on the surface of the pin, close to the fixation area. PMID:27047879

  1. Cochlear pharmacokinetics with local inner ear drug delivery using a three-dimensional finite-element computer model.

    PubMed

    Plontke, Stefan K; Siedow, Norbert; Wegener, Raimund; Zenner, Hans-Peter; Salt, Alec N

    2007-01-01

    Cochlear fluid pharmacokinetics can be better represented by three-dimensional (3D) finite-element simulations of drug dispersal. Local drug deliveries to the round window membrane are increasingly being used to treat inner ear disorders. Crucial to the development of safe therapies is knowledge of drug distribution in the inner ear with different delivery methods. Computer simulations allow application protocols and drug delivery systems to be evaluated, and may permit animal studies to be extrapolated to the larger cochlea of the human. A finite-element 3D model of the cochlea was constructed based on geometric dimensions of the guinea pig cochlea. Drug propagation along and between compartments was described by passive diffusion. To demonstrate the potential value of the model, methylprednisolone distribution in the cochlea was calculated for two clinically relevant application protocols using pharmacokinetic parameters derived from a prior one-dimensional (1D) model. In addition, a simplified geometry was used to compare results from 3D with 1D simulations. For the simplified geometry, calculated concentration profiles with distance were in excellent agreement between the 1D and the 3D models. Different drug delivery strategies produce very different concentration time courses, peak concentrations and basal-apical concentration gradients of drug. In addition, 3D computations demonstrate the existence of substantial gradients across the scalae in the basal turn. The 3D model clearly shows the presence of drug gradients across the basal scalae of guinea pigs, demonstrating the necessity of a 3D approach to predict drug movements across and between scalae with larger cross-sectional areas, such as the human, with accuracy. This is the first model to incorporate the volume of the spiral ligament and to calculate diffusion through this structure. Further development of the 3D model will have to incorporate a more accurate geometry of the entire inner ear and
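
    The 1D baseline that the 3D model generalizes can be sketched as passive diffusion with clearance along the unrolled scala, c_t = D c_xx − k c, after a round-window application at x = 0. The finite-difference sketch below uses loose order-of-magnitude parameters, not the paper's fitted pharmacokinetic values:

```python
import numpy as np

# 1D diffusion-clearance model of drug spread along the unrolled scala
# (order-of-magnitude parameters, not the paper's fitted values).
L, n = 0.02, 200            # scala length [m], grid points
D = 1e-9                    # effective diffusivity [m^2/s]
k_clear = 5e-5              # first-order clearance to blood [1/s]
dt = 0.5                    # time step [s]
dx = L / (n - 1)

c = np.zeros(n)
for step in range(int(4 * 3600 / dt)):        # four hours of application
    lap = np.zeros(n)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    c = c + dt * (D * lap - k_clear * c)
    c[0] = 1.0                                # round-window niche at unit concentration
    c[-1] = c[-2]                             # no-flux apical end
print("base-to-apex concentration ratio:", c[0] / max(c[-1], 1e-30))
```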

  2. Cochlear Pharmacokinetics with Local Inner Ear Drug Delivery Using a Three-Dimensional Finite-Element Computer Model

    PubMed Central

    Plontke, Stefan K.; Siedow, Norbert; Wegener, Raimund; Zenner, Hans-Peter; Salt, Alec N.

    2006-01-01

    Hypothesis: Cochlear fluid pharmacokinetics can be better represented by three-dimensional (3D) finite-element simulations of drug dispersal. Background: Local drug deliveries to the round window membrane are increasingly being used to treat inner ear disorders. Crucial to the development of safe therapies is knowledge of drug distribution in the inner ear with different delivery methods. Computer simulations allow application protocols and drug delivery systems to be evaluated, and may permit animal studies to be extrapolated to the larger cochlea of the human. Methods: A finite-element 3D model of the cochlea was constructed based on geometric dimensions of the guinea pig cochlea. Drug propagation along and between compartments was described by passive diffusion. To demonstrate the potential value of the model, methylprednisolone distribution in the cochlea was calculated for two clinically relevant application protocols using pharmacokinetic parameters derived from a prior one-dimensional (1D) model. In addition, a simplified geometry was used to compare results from 3D with 1D simulations. Results: For the simplified geometry, calculated concentration profiles with distance were in excellent agreement between the 1D and the 3D models. Different drug delivery strategies produce very different concentration time courses, peak concentrations and basal-apical concentration gradients of drug. In addition, 3D computations demonstrate the existence of substantial gradients across the scalae in the basal turn. Conclusion: The 3D model clearly shows the presence of drug gradients across the basal scalae of guinea pigs, demonstrating the necessity of a 3D approach to predict drug movements across and between scalae with larger cross-sectional areas, such as the human, with accuracy. This is the first model to incorporate the volume of the spiral ligament and to calculate diffusion through this structure. Further development of the 3D model will have to incorporate a more

  3. Effect of boundary conditions on the numerical solutions of representative volume element problems for random heterogeneous composite microstructures

    NASA Astrophysics Data System (ADS)

    Cho, Yi Je; Lee, Wook Jin; Park, Yong Ho

    2014-11-01

    Aspects of numerical results from computational experiments on representative volume element (RVE) problems using finite element analyses are discussed. Two different boundary conditions (BCs) are examined and compared numerically for volume elements with different sizes, where tests have been performed on the uniaxial tensile deformation of random particle reinforced composites. Structural heterogeneities near model boundaries such as the free-edges of particle/matrix interfaces significantly influenced the overall numerical solutions, producing force and displacement fluctuations along the boundaries. Interestingly, this effect was shown to be limited to surface regions within a certain distance of the boundaries, while the interior of the model showed almost identical strain fields regardless of the applied BCs. Also, the thickness of the BC-affected regions remained constant with varying volume element sizes in the models. When the volume element size was large enough compared to the thickness of the BC-affected regions, the structural response of most of the model was found to be almost independent of the applied BC such that the apparent properties converged to the effective properties. Finally, the mechanism that leads a RVE model for random heterogeneous materials to be representative is discussed in terms of the size of the volume element and the thickness of the BC-affected region.

  4. What Aspects of Personal Care Are Most Important to Patients Undergoing Radiation Therapy for Prostate Cancer?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, Kimberley A.; Department of Public Health Sciences, Queen's University, Kingston, Ontario; Feldman-Stewart, Deb

    Purpose/Objective: The overall quality of patient care is a function of the quality of both its technical and its nontechnical components. The purpose of this study was to identify the elements of nontechnical (personal) care that are most important to patients undergoing radiation therapy for prostate cancer. Methods and Materials: We reviewed the literature and interviewed patients and health professionals to identify elements of personal care pertinent to patients undergoing radiation therapy for prostate cancer. We identified 143 individual elements relating to 10 aspects of personal care. Patients undergoing radical radiation therapy for prostate cancer completed a self-administered questionnaire in which they rated the importance of each element. The overall importance of each element was measured by the percentage of respondents who rated it as “very important.” The importance of each aspect of personal care was measured by the mean importance of its elements. Results: One hundred eight patients completed the questionnaire. The percentage of patients who rated each element “very important” ranged from 7% to 95% (mean 61%). The mean importance rating of the elements of each aspect of care varied significantly: “perceived competence of caregivers,” 80%; “empathy and respectfulness of caregivers,” 67%; “adequacy of information sharing,” 67%; “patient centeredness,” 59%; “accessibility of caregivers,” 57%; “continuity of care,” 51%; “privacy,” 51%; “convenience,” 45%; “comprehensiveness of services,” 44%; and “treatment environment,” 30% (P<.0001). Neither age nor education was associated with importance ratings, but the patient's health status was associated with the rating of some elements of care. Conclusions: Many different elements of personal care are important to patients undergoing radiation therapy for prostate cancer, but the 3 aspects of care that most believe are most important are these: the

  5. Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.

  6. Myocardial contrast echocardiography in mice: technical and physiological aspects.

    PubMed

    Verkaik, Melissa; van Poelgeest, Erik M; Kwekkeboom, Rick F J; Ter Wee, Piet M; van den Brom, Charissa E; Vervloet, Marc G; Eringa, Etto C

    2018-03-01

    Myocardial contrast echocardiography (MCE) offers the opportunity to study myocardial perfusion defects in mice in detail. The value of MCE compared with single-photon emission computed tomography, positron emission tomography, and computed tomography consists of high spatial resolution, the possibility of quantification of blood volume, and relatively low costs. Nevertheless, a number of technical and physiological aspects should be considered to ensure reproducibility among research groups. The aim of this overview is to describe technical aspects of MCE and the physiological parameters that influence myocardial perfusion data obtained with this technique. First, technical aspects of MCE discussed in this technical review are logarithmic compression of ultrasound data by ultrasound systems, saturation of the contrast signal, and acquisition of images during different phases of the cardiac cycle. Second, physiological aspects of myocardial perfusion that are affected by the experimental design are discussed, including the anesthesia regimen, systemic cardiovascular effects of vasoactive agents used, and fluctuations in body temperature that alter myocardial perfusion. When these technical and physiological aspects of MCE are taken into account and adequately standardized, MCE is an easily accessible technique for mice that can be used to study the control of myocardial perfusion by a wide range of factors.
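
    One of the technical points above, the logarithmic compression applied by ultrasound systems, has a simple numerical consequence: averaging log-compressed (dB) pixel values biases intensity estimates, so contrast data should be linearized before averaging. A sketch with synthetic speckle intensities (an assumed exponential distribution, not scanner data):

```python
# Why log-compressed ultrasound data should be linearized before averaging:
# the mean of dB values is not the dB of the mean intensity.
import numpy as np

rng = np.random.default_rng(1)
intensity = rng.exponential(scale=1.0, size=100_000)  # toy speckle intensities

db = 10.0 * np.log10(intensity)                 # log-compressed display values
mean_of_db = db.mean()                          # biased: averages decibels
db_of_mean = 10.0 * np.log10(intensity.mean())  # correct: average, then log

print(f"mean of dB values: {mean_of_db:+.2f} dB")
print(f"dB of mean value : {db_of_mean:+.2f} dB  (the gap is the bias)")
```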

  7. Analytic and Computational Perspectives of Multi-Scale Theory for Homogeneous, Laminated Composite, and Sandwich Beams and Plates

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Gherlone, Marco; Versino, Daniele; DiSciuva, Marco

    2012-01-01

    This paper reviews the theoretical foundation and computational mechanics aspects of the recently developed shear-deformation theory, called the Refined Zigzag Theory (RZT). The theory is based on a multi-scale formalism in which an equivalent single-layer plate theory is refined with a robust set of zigzag local layer displacements that are free of the usual deficiencies found in common plate theories with zigzag kinematics. In the RZT, first-order shear-deformation plate theory is used as the equivalent single-layer plate theory, which represents the overall response characteristics. Local piecewise-linear zigzag displacements are used to provide corrections to these overall response characteristics that are associated with the plate heterogeneity and the relative stiffnesses of the layers. The theory does not rely on shear correction factors and is equally accurate for homogeneous, laminated composite, and sandwich beams and plates. Regardless of the number of material layers, the theory maintains only seven kinematic unknowns that describe the membrane, bending, and transverse shear plate-deformation modes. Derived from the virtual work principle, RZT is well-suited for developing computationally efficient, C(sup 0)-continuous finite elements; formulations of several RZT-based elements are highlighted. The theory and its finite element approximations thus provide a unified and reliable computational platform for the analysis and design of high-performance load-bearing aerospace structures.

  8. Computing Mass Properties From AutoCAD

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1990-01-01

    Mass properties of structures computed from data in drawings. AutoCAD to Mass Properties (ACTOMP) computer program developed to facilitate quick calculations of mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Mathematically modeled in AutoCAD or compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft Quick-Basic (Version 2.0).
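
    The bookkeeping such a program performs is straightforward to sketch: sum the element masses and form the mass-weighted centroid. The element list and data layout below are assumptions for illustration, not ACTOMP's input format.

```python
# Composite mass properties from a list of simple elements:
# total mass and center of mass (mass-weighted centroid).
import numpy as np

# (mass in kg, centroid xyz in m) per element -- assumed example data
elements = [
    (2.0, (0.0, 0.0, 0.0)),
    (1.5, (1.0, 0.0, 0.5)),
    (0.5, (0.5, 2.0, 0.0)),
]

masses = np.array([m for m, _ in elements])
centroids = np.array([c for _, c in elements])

total_mass = masses.sum()
center_of_mass = (masses[:, None] * centroids).sum(axis=0) / total_mass
print(f"total mass = {total_mass} kg, center of mass = {center_of_mass} m")
```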

  9. Cohesive Elements for Shells

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.; Turon, Albert

    2007-01-01

    A cohesive element for shell analysis is presented. The element can be used to simulate the initiation and growth of delaminations between stacked, non-coincident layers of shell elements. The procedure to construct the element accounts for the thickness offset by applying the kinematic relations of shell deformation to transform the stiffness and internal force of a zero-thickness cohesive element such that interfacial continuity between the layers is enforced. The procedure is demonstrated by simulating the response and failure of the Mixed Mode Bending test and a skin-stiffener debond specimen. In addition, it is shown that stacks of shell elements can be used to create effective models to predict the inplane and delamination failure modes of thick components. The results indicate that simple shell models can retain many of the necessary predictive attributes of much more complex 3D models while providing the computational efficiency that is necessary for design.
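
    The constitutive core of such an element is a traction-separation law. Below is a minimal sketch of a standard bilinear (linear-softening) law; the strength, stiffness and toughness values are assumed, and the shell-specific thickness-offset transformation described in the abstract is omitted.

```python
# Bilinear cohesive traction-separation law: linear elastic up to the
# onset separation, then linear softening to full decohesion.
import numpy as np

K = 1.0e6        # initial penalty stiffness, assumed
sigma_c = 30.0   # interface strength, assumed
G_c = 0.5        # fracture toughness (area under the curve), assumed

d0 = sigma_c / K           # separation at damage onset
df = 2.0 * G_c / sigma_c   # separation at complete failure

def traction(delta):
    if delta <= d0:
        return K * delta                                  # elastic branch
    if delta >= df:
        return 0.0                                        # fully debonded
    damage = df * (delta - d0) / (delta * (df - d0))      # damage variable
    return (1.0 - damage) * K * delta                     # softening branch

for d in np.linspace(0.0, df, 6):
    print(f"separation {d:.3e} -> traction {traction(d):8.3f}")
```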

  10. Comparative Evaluation of a Four-Implant-Supported Polyetherketoneketone Framework Prosthesis: A Three-Dimensional Finite Element Analysis Based on Cone Beam Computed Tomography and Computer-Aided Design.

    PubMed

    Lee, Ki-Sun; Shin, Sang-Wan; Lee, Sang-Pyo; Kim, Jong-Eun; Kim, Jee-Hwan; Lee, Jeong-Yol

    The purpose of this pilot study was to evaluate and compare polyetherketoneketone (PEKK) with different framework materials for implant-supported prostheses by means of a three-dimensional finite element analysis (3D-FEA) based on cone beam computed tomography (CBCT) and computer-aided design (CAD) data. A geometric model that consisted of four maxillary implants supporting a prosthesis framework was constructed from CBCT and CAD data of a treated patient. Three different materials (zirconia, titanium, and PEKK) were selected, and their material properties were simulated using FEA software in the generated geometric model. In the PEKK framework (ie, low elastic modulus) group, the stress transferred to the implant and simulated adjacent tissue was reduced when compressive stress was dominant, but increased when tensile stress was dominant. This study suggests that the shock-absorbing effects of a resilient implant-supported framework are limited in some areas and that rigid framework material shows a favorable stress distribution and safety of overall components of the prosthesis.

  11. Efficient simulation of incompressible viscous flow over multi-element airfoils

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Wiltberger, N. Lyn; Kwak, Dochan

    1993-01-01

    The incompressible, viscous, turbulent flow over single and multi-element airfoils is numerically simulated in an efficient manner by solving the incompressible Navier-Stokes equations. The solution algorithm employs the method of pseudo-compressibility and utilizes an upwind differencing scheme for the convective fluxes and an implicit line-relaxation scheme. The motivation for this work includes interest in studying high-lift take-off and landing configurations of various aircraft. In particular, accurate computation of lift and drag at various angles of attack up to stall is desired. Two different turbulence models are tested in computing the flow over an NACA 4412 airfoil; an accurate prediction of stall is obtained. The approach used for multi-element airfoils involves the use of multiple zones of structured grids fitted to each element. Two different approaches are compared: a patched system of grids and an overlaid Chimera system of grids. Computational results are presented for two-element, three-element, and four-element airfoil configurations. Excellent agreement with experimental surface pressure coefficients is seen. The code converges in less than 200 iterations, requiring on the order of one minute of CPU time on a CRAY YMP per element in the airfoil configuration.
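
    For reference, the pseudo-compressibility idea replaces the divergence-free constraint with a pseudo-time pressure equation so the coupled system can be marched to a steady state. A generic statement of the method, with pseudo-time tau and compressibility parameter beta, not a transcription of this solver's exact formulation:

```latex
% Generic artificial-compressibility (pseudo-compressibility) system:
% \tau is pseudo-time and \beta the artificial compressibility parameter.
\begin{aligned}
\frac{\partial p}{\partial \tau} + \beta\,\nabla\cdot\mathbf{u} &= 0,\\
\frac{\partial \mathbf{u}}{\partial \tau}
  + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  &= -\nabla p + \nu\,\nabla^{2}\mathbf{u}.
\end{aligned}
```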

  12. Development of an orthotropic hole element

    NASA Technical Reports Server (NTRS)

    Smith, C. V.; Markham, J. W.; Kelley, J. W.; Kathiresan, K.

    1981-01-01

    A finite element was developed which adequately represents the state of stress in the region around a circular hole in orthotropic material experiencing reasonably general loading. This was achieved with a complementary virtual work formulation of the stiffness and stress matrices for a square element with center circular hole. The assumed stress state provides zero shearing stress on the hole boundary, so the element is suitable for problems involving load transfer without friction. The element has been implemented in the NASTRAN computer program, and sample problem results are presented.

  13. Fluid Flow Investigations within a 37 Element CANDU Fuel Bundle Supported by Magnetic Resonance Velocimetry and Computational Fluid Dynamics

    DOE PAGES

    Piro, M.H.A; Wassermann, F.; Grundmann, S.; ...

    2017-05-23

    The current work presents experimental and computational investigations of fluid flow through a 37 element CANDU nuclear fuel bundle. Experiments based on Magnetic Resonance Velocimetry (MRV) permit three-dimensional, three-component fluid velocity measurements to be made within the bundle with sub-millimeter resolution; these measurements are non-intrusive and require neither tracer particles nor optical access to the flow field. Computational fluid dynamic (CFD) simulations of the foregoing experiments were performed with the Hydra-TH code using implicit large eddy simulation and were in good agreement with experimental measurements of the fluid velocity. Greater understanding has been gained of the evolution of geometry-induced inter-subchannel mixing, the local effects of obstructed debris on the flow field, and various turbulent effects, such as recirculation, swirl and separation. These capabilities are not available with conventional experimental techniques or thermal-hydraulic codes. Finally, the overall goal of this work is to continue developing experimental and computational capabilities for further investigations that reliably support nuclear reactor performance and safety.

  14. The GPRIME approach to finite element modeling

    NASA Technical Reports Server (NTRS)

    Wallace, D. R.; Mckee, J. H.; Hurwitz, M. M.

    1983-01-01

    GPRIME, an interactive modeling system, runs on the CDC 6000 computers and the DEC VAX 11/780 minicomputer. This system includes three components: (1) GPRIME, a user-friendly geometric language and a processor to translate that language into geometric entities; (2) GGEN, an interactive data generator for 2-D models; and (3) SOLIDGEN, a 3-D solid modeling program. Each component has a user interface with an extensive command set. All of these programs make use of a comprehensive B-spline mathematics subroutine library, which can be used for a wide variety of interpolation problems and other geometric calculations. Many other user aids, such as automatic saving of the geometric and finite element data bases and hidden line removal, are available. This interactive finite element modeling capability can produce a complete finite element model and writes an output file of grid and element data.
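
    The B-spline machinery at the heart of such a system is easy to exercise. The sketch below evaluates a clamped cubic B-spline with SciPy rather than GPRIME's own subroutine library; the knots and control values are assumed examples.

```python
# Evaluating a clamped cubic B-spline curve, the kind of geometric
# primitive an interactive modeler builds its entities from.
import numpy as np
from scipy.interpolate import BSpline

degree = 3
knots = np.array([0, 0, 0, 0, 1, 2, 3, 3, 3, 3], dtype=float)  # clamped
coeffs = np.array([0.0, 1.0, 3.0, 2.0, 4.0, 3.5])              # assumed

# BSpline requires len(coeffs) == len(knots) - degree - 1
curve = BSpline(knots, coeffs, degree)
print(curve(np.linspace(0.0, 3.0, 7)))     # sample the spline
```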

  15. Toward automatic finite element analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Perucchio, Renato; Voelcker, Herbert

    1987-01-01

    Two problems must be solved if the finite element method is to become a reliable and affordable blackbox engineering tool. Finite element meshes must be generated automatically from computer aided design databases and mesh analysis must be made self-adaptive. The experimental system described solves both problems in 2-D through spatial and analytical substructuring techniques that are now being extended into 3-D.

  16. Computer-Generated Feedback on Student Writing

    ERIC Educational Resources Information Center

    Ware, Paige

    2011-01-01

    A distinction must be made between "computer-generated scoring" and "computer-generated feedback". Computer-generated scoring refers to the provision of automated scores derived from mathematical models built on organizational, syntactic, and mechanical aspects of writing. In contrast, computer-generated feedback, the focus of this article, refers…

  17. Seeing the forest for the trees: Networked workstations as a parallel processing computer

    NASA Technical Reports Server (NTRS)

    Breen, J. O.; Meleedy, D. M.

    1992-01-01

    Unlike traditional 'serial' processing computers, in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms to select nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.
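
    The pattern described, farming independent work units out to otherwise idle processors and gathering the results, survives essentially unchanged in modern APIs. A single-machine sketch using a process pool (the task and chunking are assumed stand-ins, not the paper's node-selection algorithms):

```python
# Scatter independent work units to a pool of workers, gather the results.
from multiprocessing import Pool

def work_unit(chunk):
    # stand-in for a compute-heavy task that one node would run
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    chunks = [range(i * 100_000, (i + 1) * 100_000) for i in range(16)]
    with Pool() as pool:
        partial_sums = pool.map(work_unit, chunks)   # scatter, then gather
    print(sum(partial_sums))
```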

  18. Parallel processing in finite element structural analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1987-01-01

    A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).

  1. Transuranic Computational Chemistry.

    PubMed

    Kaltsoyannis, Nikolas

    2018-02-26

    Recent developments in the chemistry of the transuranic elements are surveyed, with particular emphasis on computational contributions. Examples are drawn from molecular coordination and organometallic chemistry, and from the study of extended solid systems. The role of the metal valence orbitals in covalent bonding is a particular focus, especially the consequences of the stabilization of the 5f orbitals as the actinide series is traversed. The fledgling chemistry of transuranic elements in the +II oxidation state is highlighted. Throughout, the symbiotic interplay of experimental and computational studies is emphasized; the extraordinary challenges of experimental transuranic chemistry afford computational chemistry a particularly valuable role at the frontier of the periodic table. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Aspects of Plant Intelligence

    PubMed Central

    TREWAVAS, ANTHONY

    2003-01-01

    Intelligence is not a term commonly used when plants are discussed. However, I believe that this is an omission based not on a true assessment of the ability of plants to compute complex aspects of their environment, but solely on a reflection of a sessile lifestyle. This article, which is admittedly controversial, attempts to raise many issues that surround this area. To commence use of the term intelligence with regard to plant behaviour will lead to a better understanding of the complexity of plant signal transduction and the discrimination and sensitivity with which plants construct images of their environment, and raises critical questions concerning how plants compute responses at the whole-plant level. Approaches to investigating learning and memory in plants will also be considered. PMID:12740212

  3. 01010000 01001100 01000001 01011001: Play Elements in Computer Programming

    ERIC Educational Resources Information Center

    Breslin, Samantha

    2013-01-01

    This article explores the role of play in human interaction with computers in the context of computer programming. The author considers many facets of programming including the literary practice of coding, the abstract design of programs, and more mundane activities such as testing, debugging, and hacking. She discusses how these incorporate the…

  4. Ceramic matrix composite behavior -- Computational simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamis, C.C.; Murthy, P.L.N.; Mital, S.K.

    Development of analytical modeling and computational capabilities for the prediction of high temperature ceramic matrix composite behavior has been an ongoing research activity at NASA-Lewis Research Center. These research activities have resulted in the development of micromechanics-based methodologies to evaluate different aspects of ceramic matrix composite behavior. The basis of the approach is micromechanics together with a unique fiber substructuring concept. In this new concept the conventional unit cell (the smallest representative volume element of the composite) of the micromechanics approach has been modified by substructuring the unit cell into several slices and developing the micromechanics-based equations at the slice level. The main advantage of this technique is that it can provide much greater detail in the response of composite behavior than a conventional micromechanics-based analysis while still maintaining very high computational efficiency. This methodology has recently been extended to model plain-weave ceramic composites. The objective of the present paper is to describe the important features of the modeling and simulation and to illustrate them with select examples of laminated as well as woven composites.
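
    The slice-substructuring idea can be caricatured in a few lines: subdivide the unit cell into slices, apply a rule-of-mixtures estimate within each slice, and combine the slices. The circular-fiber geometry, phase moduli and mixing rules below are assumptions for illustration; the cited methodology is far more detailed.

```python
# Slice-level micromechanics sketch: local fiber fraction per slice of a
# unit cell with a centered circular fiber, Reuss mixing within a slice,
# Voigt combination across slices (transverse loading).
import numpy as np

E_f, E_m, vf_cell = 400.0, 100.0, 0.4   # assumed moduli (GPa) and fraction
n_slices = 50

r = np.sqrt(vf_cell / np.pi)                        # fiber radius, unit cell
y = (np.arange(n_slices) + 0.5) / n_slices - 0.5    # slice mid-heights
vf_slice = 2.0 * np.sqrt(np.maximum(r**2 - y**2, 0.0))  # fiber width per slice

E_slice = 1.0 / (vf_slice / E_f + (1.0 - vf_slice) / E_m)  # Reuss per slice
print(f"slice-based transverse modulus: {E_slice.mean():.1f} GPa")
```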

  5. Computing Fiber/Matrix Interfacial Effects In SiC/RBSN

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Hopkins, Dale A.

    1996-01-01

    Computational study conducted to demonstrate use of boundary-element method in analyzing effects of fiber/matrix interface on elastic and thermal behaviors of representative laminated composite materials. In study, boundary-element method implemented by Boundary Element Solution Technology - Composite Modeling System (BEST-CMS) computer program.

  6. Space-Time Conservation Element and Solution Element Method Being Developed

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Himansu, Ananda; Jorgenson, Philip C. E.; Loh, Ching-Yuen; Wang, Xiao-Yen; Yu, Sheng-Tao

    1999-01-01

    The engineering research and design requirements of today pose great computer-simulation challenges to engineers and scientists who are called on to analyze phenomena in continuum mechanics. The future will bring even more daunting challenges, when increasingly complex phenomena must be analyzed with increased accuracy. Traditionally used numerical simulation methods have evolved to their present state by repeated incremental extensions to broaden their scope. They are reaching the limits of their applicability and will need to be radically revised, at the very least, to meet future simulation challenges. At the NASA Lewis Research Center, researchers have been developing a new numerical framework for solving conservation laws in continuum mechanics, namely, the Space-Time Conservation Element and Solution Element Method, or the CE/SE method. This method has been built from fundamentals and is not a modification of any previously existing method. It has been designed with generality, simplicity, robustness, and accuracy as cornerstones. The CE/SE method has thus far been applied in the fields of computational fluid dynamics, computational aeroacoustics, and computational electromagnetics. Computer programs based on the CE/SE method have been developed for calculating flows in one, two, and three spatial dimensions. Results have been obtained for numerous problems and phenomena, including various shock-tube problems, ZND detonation waves, an implosion and explosion problem, shocks over a forward-facing step, a blast wave discharging from a nozzle, various acoustic waves, and shock/acoustic-wave interactions. The method can clearly resolve shock/acoustic-wave interactions, wherein the magnitudes of the acoustic wave and the shock can differ by up to six orders. In two-dimensional flows, the reflected shock is as crisp as the leading shock. CE/SE schemes are currently being used for advanced applications to jet and fan noise prediction and to chemically
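
    The departure point of the method is worth stating: rather than discretizing a differential equation, CE/SE enforces flux conservation over regions of space-time. In generic textbook form for a one-dimensional conservation law (not the scheme's discrete construction):

```latex
% Space-time integral conservation underlying CE/SE for u_t + f(u)_x = 0:
% the current h = (f(u), u) has zero net flux through the boundary S(V)
% of every space-time region V.
\oint_{S(V)} \mathbf{h}\cdot d\mathbf{s} = 0,
\qquad \mathbf{h} = \bigl(f(u),\, u\bigr).
```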

  7. Games, Gaming, and Gamification: Some Aspects of Motivation

    ERIC Educational Resources Information Center

    Hanson-Smith, Elizabeth

    2016-01-01

    Unsupported claims have been made for the use of games in education and the gamification (game-like aspects, such as scores and point goals) of various learning elements. This brief article examines what may be the motivational basis of gaming and how it can affect students' behavior and ultimate success.

  8. An atomic finite element model for biodegradable polymers. Part 1. Formulation of the finite elements.

    PubMed

    Gleadall, Andrew; Pan, Jingzhe; Ding, Lifeng; Kruft, Marc-Anton; Curcó, David

    2015-11-01

    Molecular dynamics (MD) simulations are widely used to analyse materials at the atomic scale. However, MD has high computational demands, which may inhibit its use for simulations of structures involving large numbers of atoms such as amorphous polymer structures. An atomic-scale finite element method (AFEM) is presented in this study with significantly lower computational demands than MD. Due to the reduced computational demands, AFEM is suitable for the analysis of Young's modulus of amorphous polymer structures. This is of particular interest when studying the degradation of bioresorbable polymers, which is the topic of an accompanying paper. AFEM is derived from the inter-atomic potential energy functions of an MD force field. The nonlinear MD functions were adapted to enable static linear analysis. Finite element formulations were derived to represent interatomic potential energy functions between two, three and four atoms. Validation of the AFEM was conducted through its application to atomic structures for crystalline and amorphous poly(lactide). Copyright © 2015 Elsevier Ltd. All rights reserved.
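
    The central step, turning an interatomic potential into a linear finite element, amounts to evaluating the potential's curvature at equilibrium and using it as a spring stiffness. A two-atom sketch with assumed Lennard-Jones parameters (the cited AFEM also derives three- and four-atom elements):

```python
# From a pair potential to a linear "atomic finite element": the bond
# stiffness is d2V/dr2 at the equilibrium spacing, giving a 2-node spring.
import numpy as np

eps, sigma = 0.3, 3.5    # Lennard-Jones parameters, assumed units

def lj(r):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

r0 = 2.0 ** (1.0 / 6.0) * sigma       # equilibrium separation of the pair
h = 1e-5
k = (lj(r0 + h) - 2.0 * lj(r0) + lj(r0 - h)) / h**2   # numerical d2V/dr2

K_elem = k * np.array([[1.0, -1.0],    # axial 2-node element stiffness
                       [-1.0, 1.0]])
print(f"bond stiffness k = {k:.4f}")
print(K_elem)
```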

  9. Finite element modeling of borehole heat exchanger systems. Part 1. Fundamentals

    NASA Astrophysics Data System (ADS)

    Diersch, H.-J. G.; Bauer, D.; Heidemann, W.; Rühaak, W.; Schätzl, P.

    2011-08-01

    Single borehole heat exchanger (BHE) and arrays of BHE are modeled by using the finite element method. The first part of the paper derives the fundamental equations for BHE systems and their finite element representations, where the thermal exchange between the borehole components is modeled via thermal transfer relations. For this purpose improved relationships for thermal resistances and capacities of BHE are introduced. Pipe-to-grout thermal transfer possesses multiple grout points for double U-shape and single U-shape BHE to attain a more accurate modeling. The numerical solution of the final 3D problems is performed via a widely non-sequential (essentially non-iterative) coupling strategy for the BHE and porous medium discretization. Four types of vertical BHE are supported: double U-shape (2U) pipe, single U-shape (1U) pipe, coaxial pipe with annular (CXA) and centred (CXC) inlet. Two computational strategies are used: (1) The analytical BHE method based on Eskilson and Claesson's (1988) solution, (2) numerical BHE method based on Al-Khoury et al.'s (2005) solution. The second part of the paper focusses on BHE meshing aspects, the validation of BHE solutions and practical applications for borehole thermal energy store systems.

  10. Computing with Chaos

    NASA Astrophysics Data System (ADS)

    Murali, K.; Sinah, Sudeshna; Ditto, William

    2004-03-01

    Recently there has been a new theoretical direction in harnessing the richness of spatially extended chaotic systems, namely the exploitation of coupled chaotic elements to do flexible computations [1]. The aim of this presentation is to demonstrate the use of a single chaotic element to emulate different logic gates and perform different arithmetic tasks. Additionally, we demonstrate that the elements can be controlled to switch easily between the different operational roles. Such a computing unit may then allow a more dynamic computer architecture and serve as an ingredient of a general-purpose device more flexible than statically wired hardware. The theoretical scheme for flexible implementation of all these fundamental logical operations utilizing low-dimensional chaos [1] will be reviewed, along with a specific realization of the theory in a chaotic circuit [2]. Results will also be presented from experiments done on leech neurons. [1] Sinha, S., Munakata, T. and Ditto, W.L., Phys. Rev. E 65, 036216. [2] "Experimental realization of the fundamental NOR gate using a chaotic circuit," K. Murali, Sudeshna Sinha and William L. Ditto, Phys. Rev. E 68, 016205 (2003).
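
    The flavor of threshold-based chaotic logic is easy to reproduce: gate inputs perturb the state of a chaotic map, the map is iterated, and the output bit is read by thresholding. The sketch below searches for an operating point of the fully chaotic logistic map that realizes NOR; the encoding and the searched constants are an illustration of the idea, not the published circuit's parameters.

```python
# Threshold-based "chaotic logic" sketch: find (x*, delta) such that one
# iterate of the logistic map, with inputs encoded as increments of size
# delta, acts as a NOR gate under thresholding at x*.
import numpy as np

f = lambda x: 4.0 * x * (1.0 - x)       # fully chaotic logistic map

def gate_output(x_star, delta, a, b):
    x0 = x_star + (a + b) * delta       # encode the two input bits
    return 1 if f(x0) > x_star else 0   # threshold the iterated state

target = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 0}   # NOR truth table

def find_operating_point():
    for x_star in np.linspace(0.05, 0.95, 181):
        for delta in np.linspace(0.01, 0.30, 59):
            if x_star + 2.0 * delta > 1.0:
                continue                 # keep the state inside [0, 1]
            if all(gate_output(x_star, delta, a, b) == out
                   for (a, b), out in target.items()):
                return x_star, delta
    return None

print("NOR operating point (x*, delta):", find_operating_point())
```

    Re-running the search with a different truth table shows the same element can be re-purposed as another gate simply by moving the operating point, which is the switching flexibility the abstract emphasizes.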

  11. Four-terminal circuit element with photonic core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sampayan, Stephen

    A four-terminal circuit element is described that includes a photonic core inside of the circuit element that uses a wide bandgap semiconductor material that exhibits photoconductivity and allows current flow through the material in response to the light that is incident on the wide bandgap material. The four-terminal circuit element can be configured based on various hardware structures using a single piece or multiple pieces or layers of a wide bandgap semiconductor material to achieve various designed electrical properties such as high switching voltages by using the photoconductive feature beyond the breakdown voltages of semiconductor devices or circuits operated based on electrical bias or control designs. The photonic core aspect of the four-terminal circuit element provides unique features that enable versatile circuit applications to either replace the semiconductor transistor-based circuit elements or semiconductor diode-based circuit elements.

  12. Quality assessment and control of finite element solutions

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Babuska, Ivo

    1987-01-01

    Status and some recent developments in the techniques for assessing the reliability of finite element solutions are summarized. Discussion focuses on a number of aspects including: the major types of errors in the finite element solutions; techniques used for a posteriori error estimation and the reliability of these estimators; the feedback and adaptive strategies for improving the finite element solutions; and postprocessing approaches used for improving the accuracy of stresses and other important engineering data. Also, future directions for research needed to make error estimation and adaptive refinement practical are identified.

  13. Sensitivity analysis of bridge health index to element failure and element conditions.

    DOT National Transportation Integrated Search

    2009-11-01

    Bridge Health Index (BHI) is a bridge performance measure based on the condition of the bridge elements. It : is computed as the ratio of remaining value of the bridge structure to the initial value of the structure. Since it : is expressed as a perc...
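
    The index itself is a simple weighted ratio, so a sketch is short; the element values and condition indices below are assumed inspection data, and the actual BHI computation has additional conventions.

```python
# Health-index sketch: current condition-weighted element value over the
# total (initial) element value, expressed as a percentage.
elements = [
    # (total element value, condition index in [0, 1]) -- assumed data
    (100_000.0, 0.9),
    (250_000.0, 0.7),
    (50_000.0, 0.4),
]

current_value = sum(value * condition for value, condition in elements)
initial_value = sum(value for value, _ in elements)
print(f"health index = {100.0 * current_value / initial_value:.1f}%")
```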

  14. Efficient Computation Of Behavior Of Aircraft Tires

    NASA Technical Reports Server (NTRS)

    Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.

    1989-01-01

    NASA technical paper discusses challenging application of computational structural mechanics to numerical simulation of responses of aircraft tires during taxiing, takeoff, and landing. Presents details of three main elements of computational strategy: use of special three-field, mixed-finite-element models; use of operator splitting; and application of technique reducing substantially number of degrees of freedom. Proposed computational strategy applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry and combinations exhibited by response of tire identified.

  15. Cognitive neuroscience in forensic science: understanding and utilizing the human element.

    PubMed

    Dror, Itiel E

    2015-08-05

    The human element plays a critical role in forensic science. It is not limited only to issues relating to forensic decision-making, such as bias, but also relates to most aspects of forensic work (some of which even take place before a crime is ever committed or long after the verification of the forensic conclusion). In this paper, I explicate many aspects of forensic work that involve the human element and therefore show the relevance (and potential contribution) of cognitive neuroscience to forensic science. The 10 aspects covered in this paper are proactive forensic science, selection during recruitment, training, crime scene investigation, forensic decision-making, verification and conflict resolution, reporting, the role of the forensic examiner, presentation in court and judicial decisions. As the forensic community is taking on the challenges introduced by the realization that the human element is critical for forensic work, new opportunities emerge that allow for considerable improvement and enhancement of the forensic science endeavour. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  16. Numerical computation of transonic flows by finite-element and finite-difference methods

    NASA Technical Reports Server (NTRS)

    Hafez, M. M.; Wellford, L. C.; Merkle, C. L.; Murman, E. M.

    1978-01-01

    Studies on applications of the finite element approach to transonic flow calculations are reported. Different discretization techniques of the differential equations and boundary conditions are compared. Finite element analogs of Murman's mixed type finite difference operators for small disturbance formulations were constructed and the time dependent approach (using finite differences in time and finite elements in space) was examined.

  17. Modeling hazardous mass flows Geoflows09: Mathematical and computational aspects of modeling hazardous geophysical mass flows; Seattle, Washington, 9–11 March 2009

    USGS Publications Warehouse

    Iverson, Richard M.; LeVeque, Randall J.

    2009-01-01

    A recent workshop at the University of Washington focused on mathematical and computational aspects of modeling the dynamics of dense, gravity-driven mass movements such as rock avalanches and debris flows. About 30 participants came from seven countries and brought diverse backgrounds in geophysics; geology; physics; applied and computational mathematics; and civil, mechanical, and geotechnical engineering. The workshop was cosponsored by the U.S. Geological Survey Volcano Hazards Program, by the U.S. National Science Foundation through a Vertical Integration of Research and Education (VIGRE) in the Mathematical Sciences grant to the University of Washington, and by the Pacific Institute for the Mathematical Sciences. It began with a day of lectures open to the academic community at large and concluded with 2 days of focused discussions and collaborative work among the participants.

  18. Stabilized Finite Elements in FUN3D

    NASA Technical Reports Server (NTRS)

    Anderson, W. Kyle; Newman, James C.; Karman, Steve L.

    2017-01-01

    A Streamline Upwind Petrov-Galerkin (SUPG) stabilized finite-element discretization has been implemented as a library in the FUN3D unstructured-grid flow solver. Motivation for the selection of this methodology is given, details of the implementation are provided, and the discretization for the interior scheme is verified for linear and quadratic elements by using the method of manufactured solutions. A methodology is also described for capturing shocks, and simulation results are compared to the finite-volume formulation that is currently the primary method employed for routine engineering applications. The finite-element methodology is demonstrated to be more accurate than the finite-volume technology, particularly on tetrahedral meshes, where solutions obtained using the finite-volume scheme can suffer from adverse effects caused by bias in the grid. Although no effort has been made to date to optimize computational efficiency, the finite-element scheme is competitive with the finite-volume scheme in terms of computer time to reach convergence.
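
    For orientation, the SUPG discretization augments the Galerkin statement with an element-wise streamline term weighted by the residual. A textbook statement for a scalar advection problem with velocity a and stabilization parameter tau, not FUN3D's exact discretization:

```latex
% Generic SUPG weak form for scalar advection with residual R(u):
% Galerkin term plus element-interior streamline stabilization.
\int_{\Omega} w\, R(u)\, d\Omega
  \;+\; \sum_{e}\int_{\Omega_e} \tau\,(\mathbf{a}\cdot\nabla w)\, R(u)\, d\Omega
  \;=\; 0 .
```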

  19. Application of shell elements in simulation of cans ironing

    NASA Astrophysics Data System (ADS)

    Andrianov, A. V.; Erisov, Y. A.; Aryshensky, E. V.; Aryshensky, V. Y.

    2017-01-01

    In the present study, special shell finite elements are used to simulate drawing with a high ironing ratio for aluminum beverage cans. These elements are implemented in the commercial software complex PAM-STAMP 2G by means of the T.T.S. normal stress option, which is used in ironing simulations to describe the normal stress well. By comparison of simulation and experimental data, it is shown that shell elements with the T.T.S. option are capable of providing accurate simulation of deep drawing and ironing. The error of can thickness and height computation agrees with the engineering computation accuracy.

  20. Key Aspects of a Computerized Statistics Course.

    ERIC Educational Resources Information Center

    Wells, Karin L.; Marsh, Lawrence C.

    1997-01-01

    Looks at ways in which computer-assisted instruction transforms three traditional aspects of college teaching: lectures are replaced with multimedia presentations; homework becomes electronic, with instant grading and detailed explanations; and traditional office hours are replaced with electronic mail, list-serv, and live screen interaction…

  1. Three dimensional magnetic fields in extra high speed modified Lundell alternators computed by a combined vector-scalar magnetic potential finite element method

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A.; Wang, R.; Secunde, R.

    1992-01-01

    A 3D finite element (FE) approach was developed and implemented for computation of global magnetic fields in a 14.3 kVA modified Lundell alternator. The essence of the new method is the combined use of magnetic vector and scalar potential formulations in 3D FEs. This approach makes it practical, using state of the art supercomputer resources, to globally analyze magnetic fields and operating performances of rotating machines which have truly 3D magnetic flux patterns. The 3D FE-computed fields and machine inductances as well as various machine performance simulations of the 14.3 kVA machine are presented in this paper and its two companion papers.

  2. Exponential convergence through linear finite element discretization of stratified subdomains

    NASA Astrophysics Data System (ADS)

    Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali

    2016-10-01

    Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.

  3. Learning the Lexical Aspects of a Second Language at Different Proficiencies: A Neural Computational Study

    ERIC Educational Resources Information Center

    Cuppini, Cristiano; Magosso, Elisa; Ursino, Mauro

    2013-01-01

    We present an original model designed to study how a second language (L2) is acquired in bilinguals at different proficiencies starting from an existing L1. The model assumes that the conceptual and lexical aspects of languages are stored separately: conceptual aspects in distinct topologically organized Feature Areas, and lexical aspects in a…

  4. Total-reflection X-ray fluorescence studies of trace elements in biomedical samples

    NASA Astrophysics Data System (ADS)

    Kubala-Kukuś, A.; Braziewicz, J.; Pajek, M.

    2004-08-01

    Application of the total-reflection X-ray fluorescence (TXRF) analysis in the studies of trace element contents in biomedical samples is discussed in the following aspects: (i) the nature of trace element concentration distributions, (ii) a censoring approach to the detection limits, and (iii) a comparison of two sets of censored data. The paper summarizes recent results on these topics: in particular, the lognormal, or more generally log-stable, nature of the concentration distributions of trace elements; random left-censoring and the Kaplan-Meier approach accounting for detection limits; and, finally, the application of the logrank test to compare the censored concentrations measured for two groups. These new aspects, which are of importance for applications of TXRF in different fields, are discussed here in the context of TXRF studies of trace elements in various samples of medical interest.
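
    The censoring machinery mentioned above can be sketched directly: observed concentrations contribute the lognormal density to the likelihood, while below-detection-limit samples contribute the distribution function at their limit. The synthetic data and detection limits below are assumptions for illustration.

```python
# Maximum-likelihood fit of a lognormal to left-censored concentration
# data: observed values use the log-density, censored values the log-CDF
# at their sample-specific detection limit.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
true_mu, true_sigma = 1.0, 0.6
conc = rng.lognormal(true_mu, true_sigma, size=300)   # synthetic data
dl = rng.uniform(1.0, 3.0, size=300)                  # detection limits
observed = conc >= dl                                 # below-limit -> censored

def neg_loglik(params):
    mu, sigma = params
    if sigma <= 0.0:
        return np.inf
    x = conc[observed]
    ll_obs = stats.norm.logpdf(np.log(x), mu, sigma) - np.log(x)
    ll_cens = stats.norm.logcdf(np.log(dl[~observed]), mu, sigma)
    return -(ll_obs.sum() + ll_cens.sum())

fit = optimize.minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
print(f"estimated mu, sigma = {fit.x}  (true: {true_mu}, {true_sigma})")
```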

  5. Coping with Computing Success.

    ERIC Educational Resources Information Center

    Breslin, Richard D.

    Elements of computing success of Iona College, the challenges it currently faces, and the strategies conceived to cope with future computing needs are discussed. The college has mandated computer literacy for students and offers nine degrees in the computerized information system/management information system areas. Since planning is needed in…

  6. On finite element methods for the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Aziz, A. K.; Werschulz, A. G.

    1979-01-01

    The numerical solution of the Helmholtz equation is considered via finite element methods. A two-stage method which gives the same accuracy in the computed gradient as in the computed solution is discussed. Error estimates for the method using a newly developed proof are given, and the computational considerations which show this method to be computationally superior to previous methods are presented.

  7. An emulator for minimizing finite element analysis implementation resources

    NASA Technical Reports Server (NTRS)

    Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.

    1982-01-01

    A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines computer resources required as a function of the structural model, structural load-deflection equation characteristics, the storage allocation plan, and computer hardware capabilities. Thereby, it provides data for trading analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation computer counts of each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.

  8. Cause and Cure-Deterioration in Accuracy of CFD Simulations with Use of High-Aspect-Ratio Triangular/Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji

    2017-01-01

    In the multi-dimensional space-time conservation element and solution element (CESE) method, triangular and tetrahedral mesh elements turn out to be the most natural building blocks for 2D and 3D spatial grids, respectively. As such, the CESE method is naturally compatible with the simplest 2D and 3D unstructured grids and thus can be easily applied to solve problems with complex geometries. However, (a) accurate solution of a high-Reynolds-number flow field near a solid wall requires that the grid intervals along the direction normal to the wall be much finer than those in a direction parallel to the wall, so the use of grid cells with extremely high aspect ratio (10^3 to 10^6) may become mandatory, and (b) unlike for quadrilateral/hexahedral grids, it is well known that the accuracy of gradient computations involving triangular/tetrahedral grids tends to deteriorate rapidly as the cell aspect ratio increases. As a result, the use of triangular/tetrahedral grid cells near a solid wall has long been deemed impractical by CFD researchers. In view of (a) the critical role played by triangular/tetrahedral grids in the CESE development, and (b) the importance of accurately resolving the high-Reynolds-number flow field near a solid wall, a comprehensive and rigorous mathematical framework that clearly identifies the reasons behind the accuracy deterioration described above has been developed for the 2D case involving triangular cells, as will be presented in the main paper. By avoiding the pitfalls identified by the 2D framework, and its 3D extension, it has been shown numerically.

  9. Two-dimensional radiant energy array computers and computing devices

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.; Strong, J. P., III (Inventor)

    1976-01-01

    Two dimensional digital computers and computer devices operate in parallel on rectangular arrays of digital radiant energy optical signal elements which are arranged in ordered rows and columns. Logic gate devices receive two input arrays and provide an output array having digital states dependent only on the digital states of the signal elements of the two input arrays at corresponding row and column positions. The logic devices include an array of photoconductors responsive to at least one of the input arrays for either selectively accelerating electrons to a phosphor output surface, applying potentials to an electroluminescent output layer, exciting an array of discrete radiant energy sources, or exciting a liquid crystal to influence crystal transparency or reflectivity.

  10. A novel approach in formulation of special transition elements: Mesh interface elements

    NASA Technical Reports Server (NTRS)

    Sarigul, Nesrin

    1991-01-01

    The objective of this research program is the development of more accurate and efficient methods for the solution of singular problems encountered in various branches of mechanics. The research program can be categorized under three levels. The first two levels involve the formulation of a new class of elements called 'mesh interface elements' (MIE) to connect meshes of traditional elements either in three dimensions or in three and two dimensions. The finite element formulations are based on Boolean sum and blending operators. MIE are being formulated and tested in this research to account for the steep gradients encountered in aircraft and space structure applications. At present, the heat transfer and structural analysis problems are being formulated from an uncoupled-theory point of view. The status report: (1) summarizes the formulation for heat transfer and structural analysis; (2) explains the formulation of MIE; (3) examines computational efficiency; and (4) shows verification examples.

  11. Advantages and disadvantages of computer imaging in cosmetic surgery.

    PubMed

    Koch, R J; Chavez, A; Dagum, P; Newman, J P

    1998-02-01

    Despite the growing popularity of computer imaging systems, it is not clear whether the medical and legal advantages of using such a system outweigh the disadvantages. The purpose of this report is to evaluate these aspects and provide some protective guidelines for the use of computer imaging in cosmetic surgery. The positive and negative aspects of computer imaging from a medical and legal perspective are reviewed, and specific issues are examined by a legal panel. The greatest advantages are the potential to exclude problem patients and enhanced physician-patient communication. Disadvantages include cost, the user learning curve, and potential liability. Careful use of computer imaging should actually reduce one's liability when all aspects are considered. Recommendations for such use and specific legal issues are discussed.

  12. Mathematical and computational aspects of nonuniform frictional slip modeling

    NASA Astrophysics Data System (ADS)

    Gorbatikh, Larissa

    2004-07-01

    A mechanics-based model of non-uniform frictional sliding is studied from the mathematical/computational analysis point of view. This problem is of key importance for a number of applications (particularly geomechanical ones), where material interfaces undergo partial frictional sliding under compression and shear. We show that the problem reduces to Dirichlet's problem for monotonic loading and to Riemann's problem for cyclic loading. The problem may look like a traditional crack interaction problem; however, it is complicated by the fact that the locations of the n sliding intervals are not known. They are to be determined from the condition on the stress intensity factors: K_II = 0 at the ends of the sliding zones. Computationally, the problem reduces to solving a system of 2n coupled nonlinear algebraic equations involving singular integrals with unknown limits of integration.

  13. Computational Modeling of Morphogenesis Regulated by Mechanical Feedback

    PubMed Central

    Ramasubramanian, Ashok; Taber, Larry A.

    2008-01-01

    Mechanical forces cause changes in form during embryogenesis and likely play a role in regulating these changes. This paper explores the idea that changes in homeostatic tissue stress (target stress), possibly modulated by genes, drive some morphogenetic processes. Computational models are presented to illustrate how regional variations in target stress can cause a range of complex behaviors involving the bending of epithelia. These models include growth and cytoskeletal contraction regulated by stress-based mechanical feedback. All simulations were carried out using the commercial finite element code ABAQUS, with growth and contraction included by modifying the zero-stress state in the material constitutive relations. Results presented for bending of bilayered beams and invagination of cylindrical and spherical shells provide insight into some of the mechanical aspects that must be considered in studying morphogenetic mechanisms. PMID:17318485
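
    The feedback loop described, growth driven by the difference between current and target stress, can be reduced to a one-dimensional caricature. The constants and the simple constitutive law below are assumptions; the paper implements such laws within ABAQUS in full 3D.

```python
# Stress-regulated growth sketch: the growth stretch g of a 1D element
# evolves until the stress returns to its homeostatic target value.
E = 1.0               # elastic modulus, assumed
sigma_target = 0.1    # homeostatic (target) stress, assumed
k_growth = 0.5        # growth rate constant, assumed
stretch_total = 1.3   # externally imposed total stretch, held fixed

g = 1.0               # growth stretch (evolving zero-stress state)
dt = 0.05
for _ in range(200):
    elastic = stretch_total / g              # elastic part of the stretch
    sigma = E * (elastic - 1.0)              # simple 1D stress measure
    g += dt * k_growth * (sigma - sigma_target) * g   # growth law

print(f"final stress {sigma:.4f} vs target {sigma_target}")
```

    The fixed point of this law is the state where the stress equals its target, which is the homeostatic behavior the models in the paper are built around.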

  14. Finite element analysis of helicopter structures

    NASA Technical Reports Server (NTRS)

    Rich, M. J.

    1978-01-01

    Application of the finite element analysis is now being expanded to three dimensional analysis of mechanical components. Examples are presented for airframe, mechanical components, and composite structure calculations. Data are detailed on the increase of model size, computer usage, and the effect on reducing stress analysis costs. Future applications for use of finite element analysis for helicopter structures are projected.

  15. Charles Darwin and Evolution: Illustrating Human Aspects of Science

    NASA Astrophysics Data System (ADS)

    Kampourakis, Kostas; McComas, William F.

    2010-06-01

    Recently, the nature of science (NOS) has become recognized as an important element within the K-12 science curriculum. Despite differences in the ultimate lists of recommended aspects, a consensus is emerging on what specific NOS elements should be the focus of science instruction and inform textbook writers and curriculum developers. In this article, we suggest a contextualized, explicit approach addressing one core NOS aspect: the human aspects of science, which include the domains of creativity, social influences and subjectivity. To illustrate these ideas, we have focused on Charles Darwin, a scientist whose life, work and thought processes were particularly well recorded at the time and analyzed by scholars in the succeeding years. Historical facts are discussed and linked to core NOS ideas. Creativity is illustrated through the analogies between the struggle for existence in human societies and in nature, between artificial and natural selection, and between the division of labor in human societies and in nature. Social influences are represented by Darwin's aversion to criticism of various kinds and by his response to the methodological requirements of the science of that time. Finally, subjectivity is discussed through Darwin's development of a unique but incorrect account of the origin of variations within species.

  16. Paraxial diffractive elements for space-variant linear transforms

    NASA Astrophysics Data System (ADS)

    Teiwes, Stephan; Schwarzer, Heiko; Gu, Ben-Yuan

    1998-06-01

    Optical linear transform architectures hold good potential for future developments of very powerful hybrid vision systems and neural network classifiers. The optical modules of such systems could be used as pre-processors that perform complex linear operations at very high speed in order to simplify the electronic data post-processing. However, the applicability of linear optical architectures is strongly tied to the fundamental question of how to implement a specific linear transform by optical means and within physical limitations. The large majority of publications on this topic focuses on the optical implementation of space-invariant transforms by the well-known 4f-setup. Only a few papers deal with approaches to implement selected space-variant transforms. In this paper, we propose a simple algebraic method to design diffractive elements for an optical architecture that realizes arbitrary space-variant transforms. The design procedure is based on a digital model of scalar, paraxial wave theory and leads to element transmission functions that are optimal within the model. Its computational and physical limitations are discussed in terms of complexity measures. Finally, the design procedure is demonstrated by some examples: first, diffractive elements for the realization of different rotation operations are computed, and second, a Hough transform element is presented. The correct optical functions of the elements are verified in computer simulation experiments.

  17. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten

    2008-09-01

    Inverse simulations of musculoskeletal models compute internal forces, such as muscle and joint reaction forces, which are hard to measure directly, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, such simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. With the three-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which would imply that muscles push rather than pull; this is unrealistic, but the error may be small enough to accept for specific applications. These results are a first step toward ensuring, from a numerical point of view, better results from inverse musculoskeletal simulations.
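    The rank observation above can be reproduced in miniature: in the quadratic (Gauss-Newton) approximation, a redundant muscle-force problem has a singular Hessian until an activation-norm penalty is added. The sketch below is a toy with assumed dimensions, not the authors' model.

    ```python
    import numpy as np

    # Toy muscle-redundancy least squares: 4 joint-moment equations must
    # be satisfied by 8 muscle activations, so A'A is rank deficient.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 8))     # moment-arm-like matrix (assumed)
    b = rng.standard_normal(4)          # required joint moments (assumed)

    lam = 1e-2                          # regularization weight
    H_plain = A.T @ A                   # rank 4 at most -> singular
    H_reg = A.T @ A + lam * np.eye(8)   # activation-norm term -> full rank

    print("rank without / with regularization:",
          np.linalg.matrix_rank(H_plain), np.linalg.matrix_rank(H_reg))
    a = np.linalg.solve(H_reg, A.T @ b) # small-norm activation solution
    print("moment residual:", np.linalg.norm(A @ a - b))
    ```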

  18. Acceleration of low order finite element computation with GPUs (Invited)

    NASA Astrophysics Data System (ADS)

    Knepley, M. G.

    2010-12-01

    Considerable effort has been focused on GPU acceleration of high-order spectral element methods and discontinuous Galerkin finite element methods. However, these methods are not universally applicable, and much of the existing FEM software base employs low-order methods. In this talk, we present a formulation of FEM, using the PETSc framework from ANL, which is amenable to GPU acceleration even at very low order. In addition, using the FEniCS system for FEM, we show that the relevant kernels can be automatically generated and optimized using a symbolic manipulation system.

  19. Influence of altitude and aspect on daily variations in factors of forest fire danger

    Treesearch

    G. Lloyd. Hayes

    1941-01-01

    Altitude, in broad subdivisions, exerts recognized and well-understood effects on climate. Aspect further modifies the altitudinal influence. Many publications have dealt with the interrelations of these geographic factors with climate and life zones or have discussed variations of individual weather elements as influenced by local altitude and aspect differences and...

  20. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort is to develop a means to use, and to ultimately implement, hp-version finite elements in the numerical solution of optimal control problems. The hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element.

  1. Human-computer interface for the study of information fusion concepts in situation analysis and command decision support systems

    NASA Astrophysics Data System (ADS)

    Roy, Jean; Breton, Richard; Paradis, Stephane

    2001-08-01

    Situation Awareness (SAW) is essential for commanders to conduct decision-making (DM) activities. Situation Analysis (SA) is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of SAW for the decision maker. Operational trends in warfare put the situation analysis process under pressure. This emphasizes the need for a real-time computer-based Situation Analysis Support System (SASS) to aid commanders in achieving the appropriate situation awareness, thereby supporting their response to actual or anticipated threats. Data fusion is clearly a key enabler for SA and a SASS. Since data fusion is used for SA in support of dynamic human decision-making, the exploration of SA concepts and the design of data fusion techniques must take human factors into account in order to ensure a cognitive fit of the fusion system with the decision maker. Indeed, tight integration of the human element with the SA technology is essential. Regarding these issues, this paper provides a description of CODSI (Command Decision Support Interface), an operational-like human-machine interface prototype for investigations in computer-based SA and command decision support. With CODSI, one objective was to apply recent developments in SA theory and information display technology to the problem of enhancing SAW quality. It thus provides a capability to adequately convey tactical information to command decision makers. It also supports the study of human-computer interactions for SA, and methodologies for SAW measurement.

  2. Steps in creating a methodology for interpreting a geodiversity element -integrating a geodiversity element in the popular knowledge

    NASA Astrophysics Data System (ADS)

    Toma, Cristina; Andrasanu, Alexandru

    2017-04-01

    Conserving geodiversity, and especially geological heritage, is not as well integrated into general knowledge as biodiversity is, for example. Keeping that in mind, we are trying, through this research, to find a better way of conveying a geological process to the general public. The means to integrate a geodiversity element into popular knowledge is interpretation. Interpretation "translates" the scientific information into a common language, using facts well known to the general public. The purpose of this paper is to create a framework for a methodology for interpreting a geodiversity element - salt - in Buzau Land Geopark. We will approach the salt subject through a scheme, in order to have a general view of the process and to better understand and explain it to the general public. We will look into the subject from three scientific points of view: GEODIVERSITY, ANTHROPOLOGY, and the SOCIO-ECONOMIC aspect. Each of these points of view, or domains, will be divided into themes. GEODIVERSITY will have the following themes: Formation, Accumulation, Diapirism process, Chemical formula, Landscape (here we also include the specific biodiversity, with the halophile plants), Landforms, and Hazard. ANTHROPOLOGY will contain themes of tangible and intangible heritage: Salt symbolism, Stories and ritual usage, Recipes, and How the knowledge is transmitted. The SOCIO-ECONOMIC aspect will be reflected through themes such as Extractive methods, Usage, Interdictions, Taxes, and Commercial exchanges. Each theme will have a set of keywords that will be described, and each one will be at the base of the elements that together will form the interpretation of the geodiversity element - the salt. The next step will be to clearly set the scope of the interpretation, i.e., the field of expertise the interpretation process addresses: Education (undergraduate or post-graduate students), Science, Geotourism, or Entrepreneurship. After putting together the

  3. Computational Aspects of N-Mixture Models

    PubMed Central

    Dennis, Emily B; Morgan, Byron JT; Ridout, Martin S

    2015-01-01

    The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105–115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. PMID:25314629
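    A compact way to see the role of the bound K is to code the truncated likelihood directly. The sketch below (toy counts and parameters; scipy-based, not the authors' package) evaluates the single-site log-likelihood for several values of K:

    ```python
    import numpy as np
    from scipy.stats import poisson, binom

    # Truncated N-mixture log-likelihood for one site (Royle 2004):
    # log L = log sum_{N=max(y)}^{K} Pois(N; lam) * prod_t Bin(y_t; N, p)
    def site_log_lik(counts, lam, p, K):
        n_vals = np.arange(max(counts), K + 1)
        terms = poisson.pmf(n_vals, lam)
        for y in counts:
            terms = terms * binom.pmf(y, n_vals, p)
        return np.log(terms.sum())

    counts = [3, 5, 2]            # repeated counts at one site (toy data)
    for K in (10, 50, 500):       # too small a K truncates the sum and
        print(K, site_log_lik(counts, lam=8.0, p=0.4, K=K))  # biases fits
    ```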

  4. Peridynamic Multiscale Finite Element Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Timothy; Bond, Stephen D.; Littlewood, David John

    The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems comprising communication across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between (a) quantum electronic structure calculations and molecular dynamics and between (b) molecular dynamics and local partial differential equation models at the design scale. The second step, (b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales, limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method, which solves the peridynamic model at multiple scales to include microscale information at the coarse scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method. Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics, there is a strong desire to couple local and nonlocal models to leverage the speed and

  5. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first is an overall finite difference method, in which the analysis is repeated for perturbed designs. The second is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed-mode approach resulted in very poor approximations of the stress sensitivities: almost all of the original modes were required for accurate sensitivities, and with small numbers of modes the accuracy was extremely poor. To overcome this, two semi-analytical techniques were developed. The first accounts for the change in eigenvectors through approximate eigenvector derivatives. The second applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and at much lower computational cost than recalculating the vibration modes for use in an overall finite difference method.

  6. A Hybrid FPGA/Tilera Compute Element for Autonomous Hazard Detection and Navigation

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Werner, Robert A.; Carson, John M., III; Khanoyan, Garen; Stern, Ryan A.; Trawny, Nikolas

    2013-01-01

    To increase safety for future missions landing on other planetary or lunar bodies, the Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is developing an integrated sensor for autonomous surface analysis and hazard determination. The ALHAT Hazard Detection System (HDS) consists of a Flash LIDAR for measuring the topography of the landing site, a gimbal to scan across the terrain, and an Inertial Measurement Unit (IMU), along with terrain analysis algorithms to identify the landing site and the local hazards. An FPGA and Manycore processor system was developed to interface all the devices in the HDS, to provide high-resolution timing to accurately measure system state, and to run the surface analysis algorithms quickly and efficiently. In this paper, we will describe how we integrated COTS components such as an FPGA evaluation board, a TILExpress64, and multi-threaded/multi-core aware software to build the HDS Compute Element (HDSCE). The ALHAT program is also working with the NASA Morpheus Project and has integrated the HDS as a sensor on the Morpheus Lander. This paper will also describe how the HDS is integrated with the Morpheus lander and the results of the initial test flights with the HDS installed. We will also describe future improvements to the HDSCE.

  7. A hybrid FPGA/Tilera compute element for autonomous hazard detection and navigation

    NASA Astrophysics Data System (ADS)

    Villalpando, C. Y.; Werner, R. A.; Carson, J. M.; Khanoyan, G.; Stern, R. A.; Trawny, N.

    To increase safety for future missions landing on other planetary or lunar bodies, the Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is developing an integrated sensor for autonomous surface analysis and hazard determination. The ALHAT Hazard Detection System (HDS) consists of a Flash LIDAR for measuring the topography of the landing site, a gimbal to scan across the terrain, and an Inertial Measurement Unit (IMU), along with terrain analysis algorithms to identify the landing site and the local hazards. An FPGA and Manycore processor system was developed to interface all the devices in the HDS, to provide high-resolution timing to accurately measure system state, and to run the surface analysis algorithms quickly and efficiently. In this paper, we will describe how we integrated COTS components such as an FPGA evaluation board, a TILExpress64, and multi-threaded/multi-core aware software to build the HDS Compute Element (HDSCE). The ALHAT program is also working with the NASA Morpheus Project and has integrated the HDS as a sensor on the Morpheus Lander. This paper will also describe how the HDS is integrated with the Morpheus lander and the results of the initial test flights with the HDS installed. We will also describe future improvements to the HDSCE.

  8. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall parallel execution time T_par of the application is dominated by these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
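    The bound implied by that last sentence is the classic Amdahl's law (a standard result, stated here for completeness rather than taken from the report):

    ```latex
    % Amdahl's law: with a fraction f of the serial runtime parallelizable
    % over N processors, the speedup is capped by the sequential remainder.
    \[
      S(N) = \frac{1}{(1-f) + f/N},
      \qquad
      \lim_{N \to \infty} S(N) = \frac{1}{1-f}.
    \]
    ```

    For example, even with f = 0.8 of the runtime parallelized, the speedup can never exceed 5, however many hypercube nodes are used.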

  9. Lumped element filters for electronic warfare systems

    NASA Astrophysics Data System (ADS)

    Morgan, D.; Ragland, R.

    1986-02-01

    Among the increasing demands that future generations of electronic warfare (EW) systems must satisfy is a reduction in the size of the equipment. The present paper is concerned with lumped element filters, which can make a significant contribution to the downsizing of advanced EW systems. Lumped element filter design makes it possible to obtain very small package sizes by utilizing classical low-frequency inductive and capacitive components that are small compared to a wavelength. Cost-effective, temperature-stable devices can be obtained on the basis of new design techniques. Attention is given to aspects of design flexibility, an interdigital filter equivalent circuit diagram, conditions under which the use of lumped element filters can be recommended, construction techniques, a design example, and questions regarding the application of lumped element filters to EW processing systems.
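    To make the lumped-element idea concrete, the sketch below computes element values for a textbook Butterworth low-pass LC ladder (standard synthesis formulas, not the paper's design example). At a 1 GHz cut-off, the resulting nanohenry/picofarad values show why such components stay far smaller than a wavelength.

    ```python
    import numpy as np

    # Element values for an n-th order Butterworth low-pass LC ladder with
    # source/load impedance R and cut-off frequency fc (classic low-pass
    # prototype g-values, denormalized to R and fc).
    def butterworth_ladder(n, R=50.0, fc=1e9):
        wc = 2 * np.pi * fc
        g = 2 * np.sin((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
        elements = []
        for k, gk in enumerate(g, start=1):
            if k % 2:   # odd positions: series inductors
                elements.append((f"L{k}", gk * R / wc))
            else:       # even positions: shunt capacitors
                elements.append((f"C{k}", gk / (R * wc)))
        return elements

    for name, val in butterworth_ladder(5):
        unit = "H" if name.startswith("L") else "F"
        print(f"{name} = {val:.3e} {unit}")
    ```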

  10. Automatically extracting the significant aspects evaluated in game reviews

    NASA Astrophysics Data System (ADS)

    Fong, Chiok Hoong; Ng, Yen Kaow

    2017-04-01

    Understanding the criteria (or "aspects") that reviewers use to evaluate games is important to game developers and publishers, since it gives them valuable input on how to improve their products. Techniques for the extraction of such aspects have been studied by others, albeit not specifically for the gaming industry. In this paper we demonstrate an aspect extraction and analysis system specific to computer games. The system collects game review texts from a list of known websites and automatically extracts candidate aspects from the review text using techniques from natural language processing and sentiment analysis. It then ranks the candidate aspects using the HITS algorithm. To evaluate the correctness of the extracted aspects, we used the system to calculate an overall score for each game by aggregating its highly-rated aspects, weighted by the importance of the respective aspects. The aggregated scores resulted in a ranking of games, which we compared to a known ranking from a popular website; the rankings showed overall consistency, which suggests that the system has extracted valuable aspects from the reviews. Using the extracted aspects, our system also facilitates the analysis of a game, by evaluating how review articles have rated its performance in these extracted aspects.
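    For reference, the ranking step can be sketched in a few lines. The power-iteration HITS below is the standard algorithm; the small adjacency matrix standing in for links between aspect candidates is an assumption for illustration.

    ```python
    import numpy as np

    # Standard HITS (hubs and authorities) by power iteration.
    def hits(adj, iters=50):
        """adj[i, j] = 1 if node i links to node j."""
        n = adj.shape[0]
        hubs = np.ones(n)
        auths = np.ones(n)
        for _ in range(iters):
            auths = adj.T @ hubs            # authority: pointed to by hubs
            auths /= np.linalg.norm(auths)
            hubs = adj @ auths              # hub: points to authorities
            hubs /= np.linalg.norm(hubs)
        return hubs, auths

    # Toy link structure among four aspect candidates (assumed).
    adj = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [1, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    hubs, auths = hits(adj)
    print("authority scores:", np.round(auths, 3))
    ```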

  11. A class of hybrid finite element methods for electromagnetics: A review

    NASA Technical Reports Server (NTRS)

    Volakis, J. L.; Chatterjee, A.; Gong, J.

    1993-01-01

    Integral equation methods have generally been the workhorse for antenna and scattering computations. In the case of antennas, they continue to be the prominent computational approach, but for scattering applications the requirement for large-scale computations has turned researchers' attention to near neighbor methods such as the finite element method, which has low O(N) storage requirements and is readily adaptable in modeling complex geometrical features and material inhomogeneities. In this paper, we review three hybrid finite element methods for simulating composite scatterers, conformal microstrip antennas, and finite periodic arrays. Specifically, we discuss the finite element method and its application to electromagnetic problems when combined with the boundary integral, absorbing boundary conditions, and artificial absorbers for terminating the mesh. Particular attention is given to large-scale simulations, methods, and solvers for achieving low memory requirements and code performance on parallel computing architectures.

  12. Computational model of collagen turnover in carotid arteries during hypertension.

    PubMed

    Sáez, P; Peña, E; Tarbell, J M; Martínez, M A

    2015-02-01

    It is well known that biological tissues adapt their properties in response to different mechanical and chemical stimuli. The goal of this work is to study collagen turnover in the arterial tissue of hypertensive patients through a coupled computational mechano-chemical model. Although collagen turnover has been widely studied experimentally, computational models taking a mechano-chemical approach are scarce. The present approach can easily be extended to study other aspects of bone remodeling or collagen degradation in heart disease. The model can be divided into three stages. First, we study the smooth muscle cell synthesis of different biological substances due to over-stretching during hypertension. Next, we study the mass transport of these substances along the arterial wall. The last step is to compute the turnover of collagen based on the amounts of these substances in the arterial wall, which interact with each other to modify the turnover rate of collagen. We simulate this process in a finite element model of a real human carotid artery. The final results show the well-known stiffening of the arterial wall due to the increase in collagen content. Copyright © 2015 John Wiley & Sons, Ltd.
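    A lumped toy version of this three-stage loop (over-stretch drives substance synthesis, the substance is transported/cleared, and collagen content rises toward a new, stiffer equilibrium) can be written as two coupled rate equations; all rates below are illustrative assumptions, not the paper's parameters.

    ```python
    # Toy mechano-chemical turnover loop: stretch -> substance -> collagen.
    k_syn, k_decay, k_col, k_deg, dt = 1.0, 0.5, 0.2, 0.1, 0.01
    stretch_excess = 0.15     # over-stretch from hypertension (assumed)
    s, c = 0.0, 1.0           # signalling substance, normalized collagen
    for _ in range(10000):    # 100 time units of explicit Euler
        s += dt * (k_syn * stretch_excess - k_decay * s)
        c += dt * (k_col * s - k_deg * (c - 1.0))
    print(f"substance {s:.3f}, collagen {c:.3f}")  # -> stiffer wall
    ```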

  13. Finite-Element Methods for Real-Time Simulation of Surgery

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay

    2003-01-01

    Two finite-element methods have been developed for mathematical modeling of the time-dependent behaviors of deformable objects and, more specifically, the mechanical responses of soft tissues and organs in contact with surgical tools. These methods may afford the computational efficiency needed to satisfy the requirement to obtain computational results in real time for simulating surgical procedures as described in Simulation System for Training in Laparoscopic Surgery (NPO-21192) on page 31 in this issue of NASA Tech Briefs. Simulation of the behavior of soft tissue in real time is a challenging problem because of the complexity of soft-tissue mechanics. The responses of soft tissues are characterized by nonlinearities and by spatial inhomogeneities and rate and time dependences of material properties. Finite-element methods seem promising for integrating these characteristics of tissues into computational models of organs, but they demand much central-processing-unit (CPU) time and memory, and the demand increases with the number of nodes and degrees of freedom in a given finite-element model. Hence, as finite-element models become more realistic, it becomes more difficult to compute solutions in real time. In both of the present methods, one uses approximate mathematical models trading some accuracy for computational efficiency and thereby increasing the feasibility of attaining real-time update rates. The first of these methods is based on modal analysis. In this method, one reduces the number of differential equations by selecting only the most significant vibration modes of an object (typically, a suitable number of the lowest-frequency modes) for computing deformations of the object in response to applied forces.
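    The modal-reduction idea can be sketched on a toy mass-spring chain: keep only the lowest-frequency modes and integrate the small decoupled system instead of the full one (a generic illustration under assumed parameters, not the authors' surgical models).

    ```python
    import numpy as np
    from scipy.linalg import eigh

    # Full system: a 200-DOF chain; reduced system: its 6 lowest modes.
    n, n_modes = 200, 6
    K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness
    M = np.eye(n)                                        # lumped mass
    w2, Phi = eigh(K, M, subset_by_index=[0, n_modes-1]) # lowest modes

    f = np.zeros(n); f[-1] = 1.0          # force applied at the last node
    q = np.zeros(n_modes); qd = np.zeros(n_modes)
    dt = 0.01
    for _ in range(1000):                 # semi-implicit Euler, modal space
        qdd = Phi.T @ f - w2 * q          # decoupled modal equations
        qd += dt * qdd
        q += dt * qd
    u = Phi @ q                           # map back to nodal displacements
    print("tip displacement:", u[-1])
    ```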

  14. Development library of finite elements for computer-aided design system of reed sensors

    NASA Astrophysics Data System (ADS)

    Kozlov, A. S.; Shmakov, N. A.; Tkalich, V. L.; Labkovskaia, R. I.; Kalinkina, M. E.; Pirozhnikova, O. I.

    2018-05-01

    The article is devoted to the development of a modern, highly reliable element base for security and fire alarm system devices, in particular to improving the quality of the contact cores (reed and membrane) of reed sensors. The modeling of elastic sensitive elements uses quadrangular plate and shell elements, considered in a system of curvilinear orthogonal coordinates. The developed mathematical models and the assembled finite element library are intended for computer-aided design systems for reed switch detectors, to enable the creation of competitive alarm devices. The finite element library is used in the automated production of reed switch detectors, both in series production and in the fulfillment of individual orders.

  15. Visualizing stressful aspects of repetitive motion tasks and opportunities for ergonomic improvements using computer vision.

    PubMed

    Greene, Runyu L; Azari, David P; Hu, Yu Hen; Radwin, Robert G

    2017-11-01

    Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and the techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used for evaluating repetitive motion tasks for hand activity level (HAL) utilizing conventional 2D videos. The approach was made practical by relaxing the need for high precision and by adopting a semi-automatic approach for measuring the spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors using this computer vision approach is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track, and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed, and duty cycle components of tasks that are part of the threshold limit value for hand activity, for the purpose of identifying patterns of exposure associated with specific job factors, as well as for suggesting task improvements. The localized variables are plotted as a heat map superimposed over the video and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors contribute most to HAL and readily identify those work elements in the task that contribute most to increased injury risk. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements. Copyright © 2017. Published by Elsevier Ltd.
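    The heat-map construction can be sketched generically (illustrative only, not the authors' tool): given tracked hand positions per frame, accumulate mean hand speed on a coarse spatial grid, which can then be rendered over the video frame. Synthetic random-walk positions stand in for real tracking output.

    ```python
    import numpy as np

    # Synthetic per-frame hand positions (stand-in for tracker output).
    rng = np.random.default_rng(1)
    positions = np.cumsum(rng.normal(0, 2, size=(300, 2)), axis=0) + 120
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # px/frame

    h, w, cell = 240, 320, 16                # frame size and grid cell (px)
    heat = np.zeros((h // cell, w // cell))
    count = np.zeros_like(heat)
    for (x, y), s in zip(positions[1:], speed):
        i = int(y) // cell % heat.shape[0]   # wrap to stay in the toy frame
        j = int(x) // cell % heat.shape[1]
        heat[i, j] += s
        count[i, j] += 1
    heat = np.divide(heat, count, out=np.zeros_like(heat), where=count > 0)
    print("hottest cell mean speed (px/frame):", heat.max().round(2))
    ```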

  16. Finite element computation of multi-physical micropolar transport phenomena from an inclined moving plate in porous media

    NASA Astrophysics Data System (ADS)

    Shamshuddin, MD.; Anwar Bég, O.; Sunder Ram, M.; Kadir, A.

    2018-02-01

    Non-Newtonian flows arise in numerous industrial transport processes, including materials fabrication systems. Micropolar theory offers an excellent mechanism for exploring the fluid dynamics of new non-Newtonian materials that possess internal microstructure. Magnetic fields may also be used for controlling electrically-conducting polymeric flows. To explore the numerical simulation of transport in rheological materials processing, in the current paper a finite element computational solution is presented for magnetohydrodynamic, incompressible, dissipative, radiative, and chemically-reacting micropolar fluid flow, heat, and mass transfer adjacent to an inclined porous plate embedded in a saturated homogenous porous medium. Heat generation/absorption effects are included. Rosseland's diffusion approximation is used to describe the radiative heat flux in the energy equation. A Darcy model is employed to simulate drag effects in the porous medium. The governing transport equations are rendered into non-dimensional form under the assumption of low Reynolds number and also low magnetic Reynolds number. Using a Galerkin formulation with a weighted residual scheme, finite element solutions are presented for the boundary value problem. The influence of plate inclination, Eringen coupling number, radiation-conduction number, heat absorption/generation parameter, chemical reaction parameter, plate moving velocity parameter, magnetic parameter, thermal Grashof number, species (solutal) Grashof number, permeability parameter, and Eckert number on linear velocity, micro-rotation, temperature, and concentration profiles is examined. Furthermore, the influence of selected thermo-physical parameters on the friction factor, surface heat transfer, and mass transfer rate is also tabulated. The finite element solutions are verified against solutions from several limiting cases in the literature. Interesting features of the flow are identified and interpreted.
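    The Galerkin weighted-residual machinery itself is easiest to see on a one-dimensional model problem; the sketch below (a toy diffusion equation with linear elements, not the paper's coupled micropolar system) carries out the same assemble-and-solve steps.

    ```python
    import numpy as np

    # Galerkin FEM for -u'' = 1 on (0, 1), u(0) = u(1) = 0;
    # exact solution u = x(1 - x)/2.
    n_el = 16
    nodes = np.linspace(0.0, 1.0, n_el + 1)
    K = np.zeros((n_el + 1, n_el + 1))
    F = np.zeros(n_el + 1)
    for e in range(n_el):
        h = nodes[e+1] - nodes[e]
        ke = np.array([[1, -1], [-1, 1]]) / h   # element stiffness
        fe = np.array([0.5, 0.5]) * h           # element load (source = 1)
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += ke
        F[idx] += fe
    # Dirichlet conditions u(0) = u(1) = 0: solve on interior nodes only.
    u = np.zeros(n_el + 1)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
    print("max nodal error:", np.abs(u - nodes*(1 - nodes)/2).max())
    ```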

  17. Digitizing for Computer-Aided Finite Element Model Generation.

    DTIC Science & Technology

    1979-10-10

    this approach is a collection of programs developed over the last eight years at the University of Arizona, called the GIFTS system. This paper briefly describes the latest version of the system, GIFTS-5, and demonstrates its suitability in a design environment by simple examples. The programs constituting the GIFTS system were used as a tool for research in many areas, including mesh generation, finite element database design, and interactive

  18. Computer Assisted Language Learning. Routledge Studies in Computer Assisted Language Learning

    ERIC Educational Resources Information Center

    Pennington, Martha

    2011-01-01

    Computer-assisted language learning (CALL) is an approach to language teaching and learning in which computer technology is used as an aid to the presentation, reinforcement, and assessment of material to be learned, usually including a substantial interactive element. This book provides an up-to-date and comprehensive overview of…

  19. A three-dimensional finite element model of human atrial anatomy: New methods for cubic Hermite meshes with extraordinary vertices

    PubMed Central

    Gonzales, Matthew J.; Sturgeon, Gregory; Krishnamurthy, Adarsh; Hake, Johan; Jonas, René; Stark, Paul; Rappel, Wouter-Jan; Narayan, Sanjiv M.; Zhang, Yongjie; Segars, W. Paul; McCulloch, Andrew D.

    2013-01-01

    High-order cubic Hermite finite elements have been valuable in modeling cardiac geometry, fiber orientations, biomechanics, and electrophysiology, but their use in solving three-dimensional problems has been limited to ventricular models with simple topologies. Here, we utilized a subdivision surface scheme and derived a generalization of the “local-to-global” derivative mapping scheme of cubic Hermite finite elements to construct bicubic and tricubic Hermite models of the human atria with extraordinary vertices from computed tomography images of a patient with atrial fibrillation. To an accuracy of 0.6 millimeters, we were able to capture the left atrial geometry with only 142 bicubic Hermite finite elements, and the right atrial geometry with only 90. The left and right atrial bicubic Hermite meshes were G1 continuous everywhere except in the one-neighborhood of extraordinary vertices, where the mean dot products of normals at adjacent elements were 0.928 and 0.925. We also constructed two biatrial tricubic Hermite models and defined fiber orientation fields in agreement with diagrammatic data from the literature using only 42 angle parameters. The meshes all have good quality metrics, uniform element sizes, and elements with aspect ratios near unity, and are shared with the public. These new methods will allow for more compact and efficient patient-specific models of human atrial and whole heart physiology. PMID:23602918
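    For orientation, the one-dimensional cubic Hermite basis underlying these bicubic/tricubic elements is short enough to write out. The sketch below (standard basis functions, with toy nodal data) shows how values and derivatives are interpolated jointly, which is what the C1/G1 continuity discussion above rests on.

    ```python
    import numpy as np

    # Standard 1-D cubic Hermite basis: two functions interpolate nodal
    # values and two interpolate nodal derivatives.
    def hermite_basis(t):
        return np.array([
            2*t**3 - 3*t**2 + 1,   # value at node 0
            t**3 - 2*t**2 + t,     # derivative at node 0
            -2*t**3 + 3*t**2,      # value at node 1
            t**3 - t**2,           # derivative at node 1
        ])

    # Interpolate u(t) on one element from assumed nodal data:
    u0, du0, u1, du1 = 1.0, 0.0, 2.0, -1.0
    t = 0.5
    u = hermite_basis(t) @ np.array([u0, du0, u1, du1])
    print("u(0.5) =", u)   # values and slopes blended into a C1 curve
    ```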

  20. Mechanical Aspects of Interfaces and Surfaces in Ceramic Containing Systems.

    DTIC Science & Technology

    1984-12-14

    of a computer model to simulate the crack damage. The model is based on the fracture mechanics of cracks engulfed by the short stress pulse generated by drop impact. Inertial effects of the crack faces are a particularly important aspect of the model. The computer scheme thereby allows the stress ... W. R. Beaumont, "On the Toughness of Particulate Filled Polymers." Water Drop Impact. X. E. D. Case and A. G. Evans, "A Computer-Generated Simulation

  1. Theoretical and numerical difficulties in 3-D vector potential methods in finite element magnetostatic computations

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A.; Wang, R.

    1990-01-01

    This paper describes the results of applying three well-known 3D magnetic vector potential (MVP) based finite element formulations to the computation of magnetostatic fields in electrical devices. The three methods were identically applied to three practical examples, the first of which contains only one medium (free space), while the second and third contained a mix of free space and iron. The first of these methods is based on the unconstrained curl-curl of the MVP, while the second and third methods are predicated upon constraining the divergence of the MVP to zero (the Coulomb gauge). It was found that the latter two methods cease to give useful and meaningful results when the global solution region contains a mix of media of high and low permeabilities. Furthermore, it was found that their results do not achieve the intended zero constraint on the divergence of the MVP.

  2. Near-field radiative heat transfer in scanning thermal microscopy computed with the boundary element method

    NASA Astrophysics Data System (ADS)

    Nguyen, K. L.; Merchiers, O.; Chapuis, P.-O.

    2017-11-01

    We compute the near-field radiative heat transfer between a hot AFM tip and a cold substrate. This contribution to the tip-sample heat transfer in Scanning Thermal Microscopy is often overlooked, despite its leading role when the tip is out of contact. For dielectrics, we provide power levels exchanged as a function of the tip-sample distance in vacuum and spatial maps of the heat flux deposited into the sample which indicate the near-contact spatial resolution. The results are compared to analytical expressions of the Proximity Flux Approximation. The numerical results are obtained by means of the Boundary Element Method (BEM) implemented in the SCUFF-EM software, and require first a thorough convergence analysis of the progressive implementation of this method to the thermal emission by a sphere, the radiative transfer between two spheres, and the radiative exchange between a sphere and a finite substrate.

  3. A modular finite-element model (MODFE) for areal and axisymmetric ground-water-flow problems, Part 3: Design philosophy and programming details

    USGS Publications Warehouse

    Torak, L.J.

    1993-01-01

    A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desk-top personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into a complete coefficient just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. Computational aspects of

  4. Secondary flow in turbulent ducts with increasing aspect ratio

    NASA Astrophysics Data System (ADS)

    Vinuesa, R.; Schlatter, P.; Nagib, H. M.

    2018-05-01

    Direct numerical simulations of turbulent duct flows with aspect ratios 1, 3, 5, 7, 10, and 14.4 at a center-plane friction Reynolds number Reτ,c≃180, and aspect ratios 1 and 3 at Reτ,c≃360, were carried out with the spectral-element code nek5000. The aim of these simulations is to gain insight into the kinematics and dynamics of Prandtl's secondary flow of the second kind and its impact on the flow physics of wall-bounded turbulence. The secondary flow is characterized in terms of the cross-plane component of the mean kinetic energy and its variation in the spanwise direction of the flow. Our results show that averaging times of around 3000 convective time units (based on the duct half-height h) are required to reach a converged state of the secondary flow, which extends up to a spanwise distance of around ≃5h measured from the side walls. We also show that if the duct is not wide enough to accommodate the whole extent of the secondary flow, then its structure is modified, as reflected in a different spanwise distribution of energy. Another confirmation of the extent of the secondary flow is the decay of the kinetic energy of any remnant secondary motions for zc/h > 5 (where zc is the spanwise distance from the corner) in aspect ratios 7, 10, and 14.4, which exhibits a decreasing level of energy with increasing averaging time ta, with a rapid rate of decay given by approximately ta^(-1). This is the same rate of decay observed in a spanwise-periodic channel simulation, which suggests that at the core the kinetic energy of the secondary flow integrated over the cross-sectional area behaves as a random variable with zero mean, with a rate of decay consistent with the central limit theorem. Long-time averages of statistics in a region of rectangular ducts extending about the width of a well-designed channel simulation (i.e., extending about ≃3h on each side of the center plane) indicate that ducts or experimental facilities with aspect ratios larger than 10

  5. Adiabatic Quantum Computation: Coherent Control Back Action.

    PubMed

    Goswami, Debabrata

    2006-11-22

    Though attractive from a scalability standpoint, optical approaches to quantum computing are highly prone to decoherence and rapid population loss due to nonradiative processes such as vibrational redistribution. We show that such effects can be reduced by adiabatic coherent control, in which quantum interference between multiple excitation pathways is used to cancel coupling to the unwanted, non-radiative channels. We focus on experimentally demonstrated adiabatic controlled population transfer, for which the coherence aspects have yet to be explored theoretically but are important for quantum computation. Such quantum computing schemes also form a back-action connection to coherent control developments.

  6. Development and application of computer assisted optimal method for treatment of femoral neck fracture.

    PubMed

    Wang, Monan; Zhang, Kai; Yang, Ning

    2018-04-09

    To help doctors choose a treatment on the basis of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture, oriented toward clinical application. The system encompassed three parts: a preprocessing module, a finite element mechanical analysis module, and a postprocessing module. The preprocessing module included parametric modeling of the bone, the fracture face, and the fixation screws and their positions, as well as input and transmission of the model parameters. The finite element mechanical analysis module included grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing operation. The postprocessing module included extraction and display of batch processing results, image generation for batch processing, optimization program operation, and display of the optimization results. The system implemented the complete workflow from input of a specific patient's real fracture parameters to output of the optimal fixation plan according to the optimization rules, demonstrating the effectiveness of the system. Meanwhile, the system has a friendly interface and simple operation, and its functionality can be extended quickly by modifying individual modules.

  7. The finite element method in low speed aerodynamics

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Manhardt, P. D.

    1975-01-01

    The finite element procedure is shown to be of significant impact in design of the 'computational wind tunnel' for low speed aerodynamics. The uniformity of the mathematical differential equation description, for viscous and/or inviscid, multi-dimensional subsonic flows about practical aerodynamic system configurations, is utilized to establish the general form of the finite element algorithm. Numerical results for inviscid flow analysis, as well as viscous boundary layer, parabolic, and full Navier Stokes flow descriptions verify the capabilities and overall versatility of the fundamental algorithm for aerodynamics. The proven mathematical basis, coupled with the distinct user-orientation features of the computer program embodiment, indicate near-term evolution of a highly useful analytical design tool to support computational configuration studies in low speed aerodynamics.

  8. Parallel, adaptive finite element methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
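    As a degenerate but runnable illustration of this discretization family: with piecewise-constant (degree-0) Legendre polynomials and an upwind numerical flux, the DG scheme for linear advection collapses to the classic first-order upwind update sketched below (forward-Euler stepping stands in for the Runge-Kutta method; all parameters are toy values).

    ```python
    import numpy as np

    # u_t + a u_x = 0 on a periodic domain, p = 0 DG == upwind finite volume.
    a, n, cfl = 1.0, 100, 0.8
    dx = 1.0 / n
    dt = cfl * dx / a
    x = (np.arange(n) + 0.5) * dx
    u = np.exp(-100 * (x - 0.3)**2)          # initial Gaussian pulse
    for _ in range(int(0.4 / dt)):           # advect over distance 0.4
        flux_left = a * np.roll(u, 1)        # upwind flux at left faces
        u = u - dt / dx * (a * u - flux_left)
    print("peak after advection:", u.max(), "near x =", x[u.argmax()])
    ```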

  9. Stroke patients’ utilisation of extrinsic feedback from computer-based technology in the home: a multiple case study realistic evaluation

    PubMed Central

    2014-01-01

    Background: Evidence indicates that post-stroke rehabilitation improves function, independence and quality of life. A key aspect of rehabilitation is the provision of appropriate information and feedback to the learner. Advances in information and communications technology (ICT) have allowed for the development of various systems to complement stroke rehabilitation that could be used in the home setting. These systems may increase the provision of rehabilitation a stroke survivor receives and carries out, as well as providing a learning platform that facilitates long-term self-managed rehabilitation and behaviour change. This paper describes the application of an innovative evaluative methodology to explore the utilisation of feedback for post-stroke upper-limb rehabilitation in the home. Methods: Using the principles of realistic evaluation, this study aimed to test and refine intervention theories by exploring the complex interactions of contexts, mechanisms and outcomes that arise from technology deployment in the home. Methods included focus groups followed by multi-method case studies (n = 5) before, during and after the use of computer-based equipment. Data were analysed in relation to the context-mechanism-outcome hypotheses case by case. This was followed by a synthesis of the findings to answer the question, 'what works for whom and in what circumstances and respects?' Results: Data analysis reveals that to achieve desired outcomes through the use of ICT, key elements of computer feedback, such as accuracy, measurability, rewarding feedback, adaptability, and knowledge of results feedback, are required to trigger the theory-driven mechanisms underpinning the intervention. In addition, the pre-existing context and the personal and environmental contexts, such as previous experience of service delivery, personal goals, trust in the technology, and social circumstances, may also enable or constrain the underpinning theory-driven mechanisms.

  10. Stroke patients' utilisation of extrinsic feedback from computer-based technology in the home: a multiple case study realistic evaluation.

    PubMed

    Parker, Jack; Mawson, Susan; Mountain, Gail; Nasr, Nasrin; Zheng, Huiru

    2014-06-05

    Evidence indicates that post-stroke rehabilitation improves function, independence and quality of life. A key aspect of rehabilitation is the provision of appropriate information and feedback to the learner. Advances in information and communications technology (ICT) have allowed for the development of various systems to complement stroke rehabilitation that could be used in the home setting. These systems may increase the provision of rehabilitation a stroke survivor receives and carries out, as well as providing a learning platform that facilitates long-term self-managed rehabilitation and behaviour change. This paper describes the application of an innovative evaluative methodology to explore the utilisation of feedback for post-stroke upper-limb rehabilitation in the home. Using the principles of realistic evaluation, this study aimed to test and refine intervention theories by exploring the complex interactions of contexts, mechanisms and outcomes that arise from technology deployment in the home. Methods included focus groups followed by multi-method case studies (n = 5) before, during and after the use of computer-based equipment. Data were analysed in relation to the context-mechanism-outcome hypotheses case by case. This was followed by a synthesis of the findings to answer the question, 'what works for whom and in what circumstances and respects?' Data analysis reveals that to achieve desired outcomes through the use of ICT, key elements of computer feedback, such as accuracy, measurability, rewarding feedback, adaptability, and knowledge of results feedback, are required to trigger the theory-driven mechanisms underpinning the intervention. In addition, the pre-existing context and the personal and environmental contexts, such as previous experience of service delivery, personal goals, trust in the technology, and social circumstances may also enable or constrain the underpinning theory-driven mechanisms. Findings suggest that the theory-driven mechanisms

  11. Standardized Computer-based Organized Reporting of EEG: SCORE

    PubMed Central

    Beniczky, Sándor; Aurlien, Harald; Brøgger, Jan C; Fuglsang-Frederiksen, Anders; Martins-da-Silva, António; Trinka, Eugen; Visser, Gerhard; Rubboli, Guido; Hjalgrim, Helle; Stefan, Hermann; Rosén, Ingmar; Zarubova, Jana; Dobesberger, Judith; Alving, Jørgen; Andersen, Kjeld V; Fabricius, Martin; Atkins, Mary D; Neufeld, Miri; Plouin, Perrine; Marusic, Petr; Pressler, Ronit; Mameniskiene, Ruta; Hopfengärtner, Rüdiger; Emde Boas, Walter; Wolf, Peter

    2013-01-01

    The electroencephalography (EEG) signal has a high complexity, and the process of extracting clinically relevant features is achieved by visual analysis of the recordings. The interobserver agreement in EEG interpretation is only moderate. This is partly due to the method of reporting the findings in free-text format. The purpose of our endeavor was to create a computer-based system for EEG assessment and reporting, in which physicians construct the reports by choosing from predefined elements for each relevant EEG feature, as well as the clinical phenomena (for video-EEG recordings). A working group of EEG experts took part in consensus workshops in Dianalund, Denmark, in 2010 and 2011. The faculty was approved by the Commission on European Affairs of the International League Against Epilepsy (ILAE). The working group produced a consensus proposal that went through a pan-European review process, organized by the European Chapter of the International Federation of Clinical Neurophysiology. The Standardised Computer-based Organised Reporting of EEG (SCORE) software was constructed based on the terms and features of the consensus statement, and it was tested in clinical practice. The main elements of SCORE are the following: personal data of the patient, referral data, recording conditions, modulators, background activity, drowsiness and sleep, interictal findings, "episodes" (clinical or subclinical events), physiologic patterns, patterns of uncertain significance, artifacts, polygraphic channels, and diagnostic significance. The following specific aspects of neonatal EEGs are scored: alertness, temporal organization, and spatial organization. For each EEG finding, relevant features are scored using predefined terms. Definitions are provided for all EEG terms and features. SCORE can potentially improve the quality of EEG assessment and reporting; it will help incorporate the results of computer-assisted analysis into the report, it will make
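    A hypothetical sketch (the field names below are invented for illustration, not the actual SCORE schema) of how such a choose-from-predefined-terms report replaces free text as a structured, comparable record:

    ```python
    # Each finding is selected from predefined terms rather than written
    # as free text, so reports become machine-readable and comparable.
    report = {
        "recording_conditions": {"state": "awake", "montage": "10-20"},
        "background_activity": {"posterior_rhythm_hz": 9.5,
                                "symmetry": "symmetric"},
        "interictal_findings": [
            {"pattern": "sharp_wave",
             "location": "left temporal",
             "prevalence": "occasional"},
        ],
        "diagnostic_significance": "epileptiform_abnormality",
    }
    for section, content in report.items():
        print(f"{section}: {content}")
    ```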

  12. Legal aspects of satellite teleconferencing

    NASA Technical Reports Server (NTRS)

    Smith, D. D.

    1971-01-01

    The application of satellite communications for teleconferencing purposes is discussed. The legal framework within which such a system or series of systems could be developed is considered. The analysis is based on: (1) satellite teleconferencing regulation, (2) the options available for such a system, (3) regulatory alternatives, and (4) ownership and management aspects. The system is designed to provide a capability for professional education, remote medical diagnosis, business conferences, and computer techniques.

  13. In vitro element release and biological aspects of base–metal alloys for metal-ceramic applications

    PubMed Central

    Holm, Charlotta; Morisbak, Else; Kalfoss, Torill; Dahl, Jon E.

    2015-01-01

    Objective: The aims of this study were to investigate the release of elements from, and the biological response in vitro to, cobalt–chromium alloys and other base–metal alloys used for the fabrication of metal-ceramic restorations. Material and methods: Eighteen different alloys were investigated: nine cobalt–chromium alloys, three nickel–chromium alloys, two cobalt–chromium–iron alloys, one palladium–silver alloy, one high-noble gold alloy, titanium grade II, and one type III copper–aluminium alloy. Pure copper served as a positive control. The specimens were prepared according to the ISO standards for biological and corrosion testing. Passive leaching of elements was measured using Inductively Coupled Plasma – Mass Spectrometry (ICP-MS) after incubation in cell culture medium, MEM, for 3 days. Corrosion testing was carried out in 0.9% sodium chloride (NaCl) and 1% lactic acid for 7 days, and the element release was measured by Inductively Coupled Plasma – Optical Emission Spectroscopy (ICP-OES). The biological response to the extract solutions was measured through MTT cytotoxicity testing and the hen's egg test-chorio-allantoic membrane (HET-CAM) technique for irritation. Results: The corrosion test showed element release from base-metal alloys similar to that from noble alloys such as gold. Apart from the high-copper alloy, all alloys exhibited low element release in the immersion test, showed no cytotoxic effect in the MTT test, and were rated non-irritant in the HET-CAM test. Conclusions: Minimal biological response was observed for all the alloys tested, with the exception of the high-copper alloy. PMID:28642904

  14. The Automated Logistics Element Planning System (ALEPS)

    NASA Technical Reports Server (NTRS)

    Schwaab, Douglas G.

    1991-01-01

    The design and functions of ALEPS (Automated Logistics Element Planning System) are described. ALEPS is a computer system that will automate planning and decision support for Space Station Freedom Logistical Elements (LEs) resupply and return operations. ALEPS provides data management, planning, analysis, monitoring, interfacing, and flight certification in support of LE flight load planning activities. The prototype ALEPS algorithm development is described.

  15. Optical computing.

    NASA Technical Reports Server (NTRS)

    Stroke, G. W.

    1972-01-01

    Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.

  16. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    NASA Astrophysics Data System (ADS)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
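
    The core mechanism behind a super-element is condensation of a subdomain onto its interface. The Python sketch below shows plain static condensation via the Schur complement on a random symmetric positive-definite stand-in for a subdomain stiffness matrix; it illustrates the parent/child split in spirit only and is not the authors' dual-partition formulation.

      import numpy as np

      # Partition K into interior (i) and interface (b) blocks:
      #   [K_ii K_ib][u_i]   [f_i]
      #   [K_bi K_bb][u_b] = [f_b]
      rng = np.random.default_rng(0)
      A = rng.standard_normal((6, 6))
      K = A @ A.T + 6 * np.eye(6)        # SPD stand-in for a subdomain stiffness
      f = rng.standard_normal(6)
      ii, bb = slice(0, 4), slice(4, 6)  # 4 interior DOFs, 2 interface DOFs

      K_ii, K_ib = K[ii, ii], K[ii, bb]
      K_bi, K_bb = K[bb, ii], K[bb, bb]
      # Schur complement: effective interface stiffness of the "super-element"
      S = K_bb - K_bi @ np.linalg.solve(K_ii, K_ib)
      g = f[bb] - K_bi @ np.linalg.solve(K_ii, f[ii])
      u_b = np.linalg.solve(S, g)                      # parent-level solve
      u_i = np.linalg.solve(K_ii, f[ii] - K_ib @ u_b)  # child-partition recovery
      assert np.allclose(K @ np.concatenate([u_i, u_b]), f)

    The interface solve plays the role of the parent system; recovering the interior unknowns is the work that can be farmed out to child partitions in parallel.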

  17. Computational Analysis of Behavior.

    PubMed

    Egnor, S E Roian; Branson, Kristin

    2016-07-08

    In this review, we discuss the emerging field of computational behavioral analysis: the use of modern methods from computer science and engineering to quantitatively measure animal behavior. We discuss aspects of experiment design important both to obtaining biologically relevant behavioral data and to enabling the use of machine vision and learning techniques for automation. These two goals are often in conflict. Restraining or restricting the environment of the animal can simplify automatic behavior quantification, but it can also degrade the quality or alter important aspects of behavior. To enable biologists to design experiments to obtain better behavioral measurements, and computer scientists to pinpoint fruitful directions for algorithm improvement, we review known effects of artificial manipulation of the animal on behavior. We also review machine vision and learning techniques for tracking, feature extraction, automated behavior classification, and automated behavior discovery, the assumptions they make, and the types of data they work best with.

  18. A CW Gunn Diode Switching Element.

    ERIC Educational Resources Information Center

    Hurtado, Marco; Rosenbaum, Fred J.

    As part of a study of the application of communication satellites to educational development, certain technical aspects of such a system were examined. A current-controlled bistable switching element using a CW Gunn diode is reported on here. With modest circuits, switching rates on the order of 10 MHz have been obtained. Switching is initiated by…

  19. Collider Aspects of Flavour Physics at High Q

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    del Aguila, F.; Aguilar-Saavedra, J.A.; Allanach, B.C.

    2008-03-07

    This chapter of the report of the 'Flavour in the era of LHC' workshop discusses flavor-related issues in the production and decays of heavy states at the LHC, both from the experimental side and from the theoretical side. We review top quark physics and discuss flavor aspects of several extensions of the Standard Model, such as supersymmetry, little Higgs models, or models with extra dimensions. This includes discovery aspects as well as measurement of several properties of these heavy states. We also present publicly available computational tools related to this topic.

  20. Three-Dimensional Effects on Multi-Element High Lift Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Watson, Ralph D.

    2002-01-01

    In an effort to discover the causes for disagreement between previous 2-D computations and nominally 2-D experiment for flow over the 3-element McDonnell Douglas 30P-30N airfoil configuration at high lift, a combined experimental/CFD investigation is described. The experiment explores several different side-wall boundary layer control venting patterns, documents venting mass flow rates, and looks at corner surface flow patterns. The experimental angle of attack at maximum lift is found to be sensitive to the side-wall venting pattern: a particular pattern increases the angle of attack at maximum lift by at least 2 deg. A significant amount of spanwise pressure variation is present at angles of attack near maximum lift. A CFD study using 3-D structured-grid computations, which includes the modeling of side-wall venting, is employed to investigate 3-D effects of the flow. Side-wall suction strength is found to affect the angle at which maximum lift is predicted. Maximum lift in the CFD is shown to be limited by the growth of an off-body corner-flow vortex and a consequent increase in spanwise pressure variation and decrease in circulation. The 3-D computations with and without wall venting predict trends similar to experiment at low angles of attack, but either stall too early or else overpredict lift levels near maximum lift by as much as 5%. Unstructured-grid computations demonstrate that mounting brackets lower the lift levels near maximum-lift conditions.

  1. Design of a massively parallel computer using bit serial processing elements

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing

    1995-01-01

    A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.
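
    To make the bit-serial idea concrete, the following Python sketch emulates an 8-bit addition performed one bit per clock cycle through a single full adder, which is essentially what a 1-bit serial processing element does (illustrative only; the actual processor design is not reproduced here).

      # One full adder processes one bit per cycle, least significant bit first.
      def bit_serial_add(a: int, b: int, width: int = 8) -> int:
          carry, result = 0, 0
          for i in range(width):                    # one clock cycle per bit
              abit = (a >> i) & 1
              bbit = (b >> i) & 1
              s = abit ^ bbit ^ carry               # sum output of the full adder
              carry = (abit & bbit) | (carry & (abit ^ bbit))
              result |= s << i
          return result

      assert bit_serial_add(100, 57) == 157

    The trade-off is the classic SIMD one: each element is tiny and slow (width clock cycles per addition), but thousands of them can operate in lockstep on different data.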

  2. Computer-aided design and experimental investigation of a hydrodynamic device: the microwire electrode

    PubMed

    Fulian; Gooch; Fisher; Stevens; Compton

    2000-08-01

    The development and application of a new electrochemical device using a computer-aided design strategy is reported. This novel design is based on the flow of electrolyte solution past a microwire electrode situated centrally within a large duct. In the design stage, finite element simulations were employed to evaluate feasible working geometries and mass transport rates. The computer-optimized designs were then exploited to construct experimental devices. Steady-state voltammetric measurements were performed for a reversible one-electron-transfer reaction to establish the experimental relationship between electrolysis current and solution velocity. The experimental results are compared to those predicted numerically, and good agreement is found. The numerical studies are also used to establish an empirical relationship between the mass transport limited current and the volume flow rate, providing a simple and quantitative alternative for workers who would prefer to exploit this device without the need to develop the numerical aspects.

  3. Automatic computation of transfer functions

    DOEpatents

    Atcitty, Stanley; Watson, Luke Dale

    2015-04-14

    Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
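
    As a hypothetical illustration of what "computing a transfer function from element values and structure" can look like (the patent's actual netlist-matrix algorithm is not detailed in this record), symbolic nodal analysis of a one-node RC low-pass filter can be done in Python with sympy:

      import sympy as sp

      # Series resistor R from V_in to the output node, shunt capacitor C to
      # ground; Kirchhoff's current law at the single unknown node yields H(s).
      s, R, C, V_in, V_out = sp.symbols('s R C V_in V_out')
      kcl = sp.Eq((V_out - V_in) / R + s * C * V_out, 0)
      H = sp.simplify(sp.solve(kcl, V_out)[0] / V_in)
      print(H)   # 1/(C*R*s + 1)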

  4. 47 CFR 69.119 - Basic service element expedited approval process.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Basic service element expedited approval... CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges § 69.119 Basic service element... approval of new basic service elements are those indicated in § 1.45 of the rules, except as specified...

  5. 47 CFR 69.119 - Basic service element expedited approval process.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Basic service element expedited approval... CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges § 69.119 Basic service element... approval of new basic service elements are those indicated in § 1.45 of the rules, except as specified...

  6. 47 CFR 69.119 - Basic service element expedited approval process.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Basic service element expedited approval... CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges § 69.119 Basic service element... approval of new basic service elements are those indicated in § 1.45 of the rules, except as specified...

  7. 47 CFR 69.119 - Basic service element expedited approval process.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Basic service element expedited approval... CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges § 69.119 Basic service element... approval of new basic service elements are those indicated in § 1.45 of the rules, except as specified...

  8. Computers in Schools: White Boys Only?

    ERIC Educational Resources Information Center

    Hammett, Roberta F.

    1997-01-01

    Discusses the role of computers in today's world and the construction of computer use attitudes, such as gender gaps. Suggests how schools might close the gaps. Includes a brief explanation about how facility with computers is important for women in their efforts to gain equitable treatment in all aspects of their lives. (PA)

  9. Using OSG Computing Resources with (iLC)Dirac

    NASA Astrophysics Data System (ADS)

    Sailer, A.; Petric, M.; CLICdp Collaboration

    2017-10-01

    CPU cycles for small experiments and projects can be scarce, so making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which submit directly to the local batch system. This in turn requires additional dedicated effort for small experiments at the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC, the required wrapper classes were developed. Not only is the usage of these types of computing elements now completely transparent for all DIRAC instances, which makes DIRAC a flexible solution for OSG-based virtual organisations, but it also allows LCG grid sites to move to the HTCondor-CE software without shutting DIRAC-based VOs out of their site. In these proceedings we detail how we interfaced the DIRAC system to the HTCondor-CE and Globus computing elements, explain the encountered obstacles and the solutions developed, and describe how the linear collider community uses resources in the OSG.

  10. Framework to model neutral particle flux in convex high aspect ratio structures using one-dimensional radiosity

    NASA Astrophysics Data System (ADS)

    Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried

    2017-02-01

    We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort, required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution, conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures as well as of the particle sticking probability on the neutral particle flux.
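
    A minimal one-dimensional radiosity sketch in Python conveys the structure of such a framework; the source term, exchange factors, and sticking probability below are simplified assumptions for illustration, not the paper's calibrated model.

      import numpy as np

      n = 100                  # wall segments from trench opening to bottom
      AR = 10.0                # assumed aspect ratio (depth/width)
      z = (np.arange(n) + 0.5) / n * AR   # segment depth in units of width
      sticking = 0.1           # assumed sticking probability

      # Direct flux arriving from the opening, decaying with depth
      # (a simple geometric shadowing model).
      direct = 1.0 / (1.0 + z**2)

      # Segment-to-segment exchange factors, decaying with separation and
      # row-normalized so that re-emitted flux is conserved.
      dz = np.abs(z[:, None] - z[None, :])
      F = 0.5 / (1.0 + dz**2)**1.5
      F /= F.sum(axis=1, keepdims=True)

      # Radiosity balance: B = direct + (1 - sticking) * F @ B
      B = np.linalg.solve(np.eye(n) - (1.0 - sticking) * F, direct)
      flux_absorbed = sticking * B   # flux actually deposited on the wall

    Solving one small linear system per profile is what replaces the full three-dimensional ray-tracing step.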

  11. Finite element meshing of ANSYS (trademark) solid models

    NASA Technical Reports Server (NTRS)

    Kelley, F. S.

    1987-01-01

    A large-scale, general-purpose finite element computer program, ANSYS, developed and marketed by Swanson Analysis Systems, Inc., is discussed. ANSYS was perhaps the first commercially available program to offer truly interactive finite element model generation. The program's solid modeling capability is briefly discussed and illustrated.

  12. The Influence of Reconstruction Kernel on Bone Mineral and Strength Estimates Using Quantitative Computed Tomography and Finite Element Analysis.

    PubMed

    Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K

    2017-10-17

    Quantitative computed tomography has been posed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways fall on the hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%, p < 0.001) when compared with images reconstructed using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%, p < 0.001, and 18.2%, p < 0.001, respectively) when compared with the image reconstructed by the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  13. A comparison between different finite elements for elastic and aero-elastic analyses.

    PubMed

    Mahran, Mohamed; ELsabbagh, Adel; Negm, Hani

    2017-11-01

    In the present paper, a comparison between five different shell finite elements, including the Linear Triangular Element, Linear Quadrilateral Element, Linear Quadrilateral Element based on deformation modes, 8-node Quadrilateral Element, and 9-Node Quadrilateral Element, is presented. The shape functions and the element equations related to each element were presented through a detailed mathematical formulation. Additionally, the Jacobian matrix for the second-order derivatives was simplified and used to derive each element's strain-displacement matrix in bending. The elements were compared using carefully selected elastic and aero-elastic benchmark problems, regarding the number of elements needed to reach convergence, the resulting accuracy, and the needed computation time. The most suitable element for elastic free vibration analysis was found to be the Linear Quadrilateral Element with deformation-based shape functions, whereas the most suitable element for stress analysis was the 8-Node Quadrilateral Element, and the most suitable element for aero-elastic analysis was the 9-Node Quadrilateral Element. Although the linear triangular element was the last choice for modal and stress analyses, it gave more accurate results in aero-elastic analyses, though with much longer computation time. Additionally, the nine-node quadrilateral element was found to be the best choice for the analysis of laminated composite plates.
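
    As a small illustration of the building blocks being compared, the following Python snippet evaluates the bilinear (4-node) quadrilateral's shape functions and Jacobian at an integration point (toy geometry; not the authors' code).

      import numpy as np

      def shape_linear_quad(xi, eta):
          """Shape functions N and derivatives dN/d(xi,eta) of the 4-node quad."""
          N = 0.25 * np.array([(1-xi)*(1-eta), (1+xi)*(1-eta),
                               (1+xi)*(1+eta), (1-xi)*(1+eta)])
          dN = 0.25 * np.array([[-(1-eta), -(1-xi)],
                                [ (1-eta), -(1+xi)],
                                [ (1+eta),  (1+xi)],
                                [-(1+eta),  (1-xi)]])
          return N, dN

      nodes = np.array([[0, 0], [2, 0], [2, 1], [0, 1]], float)  # element corners
      N, dN = shape_linear_quad(0.0, 0.0)   # evaluate at the element center
      J = dN.T @ nodes                      # 2x2 Jacobian of the isoparametric map
      detJ = np.linalg.det(J)               # equals element area / 4 for this quad

    The higher-order members of the comparison (the 8- and 9-node quadrilaterals) differ only in having more, and quadratic, shape functions evaluated the same way.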

  14. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    NASA Technical Reports Server (NTRS)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The technical challenges, engineering solutions, and results of the NOCC computer-human interface design are presented. The use-centered design process was as follows: determine the design criteria for user concerns; assess the impact of design decisions on the users; and determine the technical aspects of the implementation (tools, platforms, etc.). The NOCC hardware architecture is illustrated. A graphical model of the DSN that represented the hierarchical structure of the data was constructed. The DSN spacecraft summary display is shown. Navigation from top to bottom is accomplished by clicking the appropriate button for the element about which the user desires more detail. The telemetry summary display and the antenna color decision table are also shown.

  15. Solution algorithms for nonlinear transient heat conduction analysis employing element-by-element iterative strategies

    NASA Technical Reports Server (NTRS)

    Winget, J. M.; Hughes, T. J. R.

    1985-01-01

    The particular problems investigated in the present study arise from nonlinear transient heat conduction. One of two types of nonlinearities considered is related to a material temperature dependence which is frequently needed to accurately model behavior over the range of temperature of engineering interest. The second nonlinearity is introduced by radiation boundary conditions. The finite element equations arising from the solution of nonlinear transient heat conduction problems are formulated. The finite element matrix equations are temporally discretized, and a nonlinear iterative solution algorithm is proposed. Algorithms for solving the linear problem are discussed, taking into account the form of the matrix equations, Gaussian elimination, cost, and iterative techniques. Attention is also given to approximate factorization, implementational aspects, and numerical results.
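
    A minimal sketch of this solution structure, assuming backward Euler in time and a temperature-dependent conductivity k(T) = k0(1 + beta*T) on a 1D rod (all values illustrative), with a finite-difference Jacobian standing in for the assembled tangent matrix:

      import numpy as np

      n, dx, dt = 20, 0.05, 0.01
      k0, beta, rho_c = 1.0, 0.5, 1.0
      T_old = np.zeros(n); T_old[0] = 1.0      # hot Dirichlet end at node 0

      def residual(T_new, T_old):
          # Face conductivities depend on temperature -> nonlinear residual.
          k = k0 * (1.0 + beta * 0.5 * (T_new[1:] + T_new[:-1]))
          flux = k * (T_new[1:] - T_new[:-1]) / dx
          r = np.zeros(n)
          r[1:-1] = rho_c * (T_new[1:-1] - T_old[1:-1]) / dt \
                    - (flux[1:] - flux[:-1]) / dx
          r[0] = T_new[0] - 1.0                # Dirichlet BCs as equations
          r[-1] = T_new[-1]
          return r

      T_new = T_old.copy()
      for it in range(20):                     # Newton-Raphson iteration
          r = residual(T_new, T_old)
          if np.linalg.norm(r) < 1e-10:
              break
          J = np.zeros((n, n))
          for j in range(n):                   # numerical Jacobian (demo only)
              Tp = T_new.copy(); Tp[j] += 1e-7
              J[:, j] = (residual(Tp, T_old) - r) / 1e-7
          T_new -= np.linalg.solve(J, r)

    The element-by-element strategies discussed in the paper replace the dense linear solve inside the Newton loop with iterations that never assemble a global matrix.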

  16. Using a software-defined computer in teaching the basics of computer architecture and operation

    NASA Astrophysics Data System (ADS)

    Kosowska, Julia; Mazur, Grzegorz

    2017-08-01

    The paper describes the concept and implementation of the SDC_One software-defined computer, designed for experimental and didactic purposes. Equipped with extensive hardware monitoring mechanisms, the device enables students to monitor the computer's operation on a bus-transfer-cycle or instruction-cycle basis, providing a practical illustration of basic aspects of a computer's operation. In the paper, we describe the hardware monitoring capabilities of SDC_One and some scenarios for using it in teaching the basics of computer architecture and microprocessor operation.

  17. Animation of finite element models and results

    NASA Technical Reports Server (NTRS)

    Lipman, Robert R.

    1992-01-01

    This is not intended as a complete review of computer hardware and software that can be used for animation of finite element models and results, but is instead a demonstration of the benefits of visualization using selected hardware and software. The role of raw computational power, graphics speed, and the use of videotape are discussed.

  18. Prediction of overall and blade-element performance for axial-flow pump configurations

    NASA Technical Reports Server (NTRS)

    Serovy, G. K.; Kavanagh, P.; Okiishi, T. H.; Miller, M. J.

    1973-01-01

    A method and a digital computer program for prediction of the distributions of fluid velocity and properties in axial-flow pump configurations are described and evaluated. The method uses the blade-element flow model and an iterative numerical solution of the radial equilibrium and continuity conditions. Correlated experimental results are used to generate alternative methods for estimating blade-element turning and loss characteristics. Detailed descriptions of the computer program are included, with example input and typical computed results.

  19. Integrated computer-aided design using minicomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.

    1980-01-01

    Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), a highly interactive software system, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum transfer rate of 4800 bits/sec to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting and finite element analysis capability, CAD/CAM provides options to produce an automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.

  20. Geology of photo linear elements, Great Divide Basin, Wyoming

    NASA Technical Reports Server (NTRS)

    Blackstone, D. L., Jr.

    1973-01-01

    The author has identified the following significant results. Ground examination of photo linear elements in the Great Divide Basin, Wyoming indicates little if any tectonic control. Aeolian aspects are more widespread and pervasive than previously considered.

  1. People and computers--some recent highlights.

    PubMed

    Shackel, B

    2000-12-01

    This paper aims to review selectively a fair proportion of the literature on human-computer interaction (HCI) over the three years since Shackel (J. Am. Soc. Inform. Sci. 48 (11) (1997) 970-986). After a brief note of history I discuss traditional input, output and workplace aspects, the web and 'E-topics', web-related aspects, virtual reality, safety-critical systems, and the need to move from HCI to human-system integration (HSI). Finally I suggest, and consider briefly, some future possibilities and issues including web consequences, embedded ubiquitous computing, and 'back to systems ergonomics?'.

  2. Human Aspects of High Tech in Special Libraries.

    ERIC Educational Resources Information Center

    Bichteler, Julie

    1986-01-01

    This investigation of library employees who spend significant portion of time in online computer interaction provides information on intellectual, psychological, social, and physical aspects of their work. Long- and short-term effects of special libraries are identified and solutions to "technostress" problems are suggested. (16…

  3. Computer Processing of Esperanto Text.

    ERIC Educational Resources Information Center

    Sherwood, Bruce

    1981-01-01

    Basic aspects of computer processing of Esperanto are considered in relation to orthography and computer representation, phonetics, morphology, one-syllable and multisyllable words, lexicon, semantics, and syntax. There are 28 phonemes in Esperanto, each represented in orthography by a single letter. The PLATO system handles diacritics by using a…

  4. Research and the Personal Computer.

    ERIC Educational Resources Information Center

    Blackburn, D. A.

    1989-01-01

    Discussed is the history and elements of the personal computer. Its uses as a laboratory assistant and generic toolkit for mathematical analysis and modeling are included. The future of the personal computer in research is addressed. (KR)

  5. Computer memory: the LLL experience. [Octopus computer network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.

    1976-02-01

    Those aspects of Octopus computer network design are reviewed that relate to memory and storage. Emphasis is placed on the difficulties and problems that arise because of the limitations of present storage devices, and indications are made of the directions in which technological advance could be of most value. (auth)

  6. Australian Educational Computing: Journal of the Australian Council for Computers in Education. Volume 8, Special Conference Edition.

    ERIC Educational Resources Information Center

    Nanlohy, Phil, Ed.

    1993-01-01

    The 43 papers in this collection were presented at the Australian Council for Computers in Education 1993 annual conference. The papers focus on research and scholarship in the use of computers at the elementary, secondary, and higher education levels. The papers address the following aspects of the use of computers in education: (1) theoretical…

  7. Campus Computing, 1998. The Ninth National Survey of Desktop Computing and Information Technology in American Higher Education.

    ERIC Educational Resources Information Center

    Green, Kenneth C.

    This report presents findings of a June 1998 survey of computing officials at 1,623 two- and four-year U.S. colleges and universities concerning the use of computer technology. The survey found that computing and information technology (IT) are now core components of the campus environment and classroom experience. However, key aspects of IT…

  8. Probing coherence aspects of adiabatic quantum computation and control.

    PubMed

    Goswami, Debabrata

    2007-09-28

    Quantum interference between multiple excitation pathways can be used to cancel the couplings to the unwanted, nonradiative channels resulting in robustly controlling decoherence through adiabatic coherent control approaches. We propose a useful quantification of the two-level character in a multilevel system by considering the evolution of the coherent character in the quantum system as represented by the off-diagonal density matrix elements, which switches from real to imaginary as the excitation process changes from being resonant to completely adiabatic. Such counterintuitive results can be explained in terms of continuous population exchange in comparison to no population exchange under the adiabatic condition.
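
    A toy two-level evolution shows the bookkeeping involved: propagate the density matrix and inspect the real and imaginary parts of the off-diagonal element rho_01 (parameters, basis, and phase conventions are assumed here, so this sketches the computation rather than reproducing the authors' result).

      import numpy as np

      omega = 1.0                                   # assumed Rabi frequency
      H = 0.5 * omega * np.array([[0, 1], [1, 0]], complex)   # resonant coupling
      rho = np.array([[1, 0], [0, 0]], complex)     # start in the ground state
      dt, steps = 0.001, 4000
      for _ in range(steps):                        # von Neumann: drho/dt = -i[H, rho]
          rho = rho + dt * (-1j) * (H @ rho - rho @ H)
      c = rho[0, 1]
      print(c.real, c.imag)   # track how the coherence character evolves

    Repeating such a propagation with a slowly swept detuning, as in an adiabatic passage, is the kind of comparison the quantification above is meant to capture.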

  9. Object-oriented design and implementation of CFDLab: a computer-assisted learning tool for fluid dynamics using dual reciprocity boundary element methodology

    NASA Astrophysics Data System (ADS)

    Friedrich, J.

    1999-08-01

    As lecturers, our main concern and goal is to develop more attractive and efficient ways of communicating up-to-date scientific knowledge to our students and to facilitate an in-depth understanding of physical phenomena. Computer-based instruction is very promising in helping both teachers and learners with this difficult task, which involves complex cognitive psychological processes. This complexity is reflected in high demands on the design and implementation methods used to create computer-assisted learning (CAL) programs. Due to their concepts, flexibility, maintainability, and extended library resources, object-oriented modeling techniques are very suitable for producing this type of pedagogical tool. Computational fluid dynamics (CFD) not only enjoys growing importance in today's research, but is also very powerful for teaching and learning fluid dynamics. For this purpose, an educational PC program for the university level, 'CFDLab 1.1' for Windows™, was developed with an interactive graphical user interface (GUI) for multitasking and point-and-click operations. It uses the dual reciprocity boundary element method as a versatile numerical scheme; thanks to its simple pre- and postprocessing, this allows a variety of relevant governing equations in two dimensions (Laplace, Poisson, diffusion, and transient convection-diffusion) to be handled on personal computers.

  10. Adaptive Crack Modeling with Interface Solid Elements for Plain and Fiber Reinforced Concrete Structures.

    PubMed

    Zhan, Yijian; Meschke, Günther

    2017-07-08

    The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense.

  11. Adaptive Crack Modeling with Interface Solid Elements for Plain and Fiber Reinforced Concrete Structures

    PubMed Central

    Zhan, Yijian

    2017-01-01

    The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense. PMID:28773130

  12. A locally conservative non-negative finite element formulation for anisotropic advective-diffusive-reactive systems

    NASA Astrophysics Data System (ADS)

    Mudunuru, M. K.; Shabouei, M.; Nakshatrala, K.

    2015-12-01

    Advection-diffusion-reaction (ADR) equations appear in various areas of life sciences, hydrogeological systems, and contaminant transport. Obtaining stable and accurate numerical solutions can be challenging as the underlying equations are coupled, nonlinear, and non-self-adjoint. Currently, there is neither a robust computational framework available nor a reliable commercial package known that can handle various complex situations. The objective of this poster presentation is to present a novel locally conservative non-negative finite element formulation that preserves the underlying physical and mathematical properties of a general linear transient anisotropic ADR equation. In the continuous setting, governing equations for ADR systems possess various important properties; in general, these properties are not all inherited under finite difference, finite volume, and finite element discretizations. The contribution is twofold. First, we analyze whether the existing numerical formulations (such as SUPG and GLS) and commercial packages provide physically meaningful values for the concentration of the chemical species for various realistic benchmark problems. We also quantify the errors incurred in satisfying the local and global species balance for two popular chemical kinetics schemes: CDIMA (chlorine dioxide-iodine-malonic acid) and BZ (Belousov-Zhabotinsky). Based on these numerical simulations, we show that SUPG and GLS produce unphysical values for the concentration of chemical species due to the violation of the non-negative constraint, contain spurious node-to-node oscillations, and have large errors in local and global species balance. Second, we propose a novel finite element formulation to overcome these difficulties. The proposed locally conservative non-negative computational framework, based on low-order least-squares finite elements, is able to preserve these underlying physical and mathematical properties.
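
    The non-negativity requirement can be sketched in a least-squares setting: solve min ||Ac - b|| subject to c >= 0, so that the recovered "concentrations" c cannot go negative. The matrices below are toy stand-ins, not the authors' finite element operators.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(1)
      A = rng.random((8, 4))                       # toy discrete operator
      c_true = np.array([0.3, 0.0, 1.2, 0.7])      # non-negative ground truth
      b = A @ c_true + 0.01 * rng.standard_normal(8)
      c, resid = nnls(A, b)                        # non-negative least squares
      assert (c >= 0).all()                        # constraint holds by design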

  13. Business aspects and sustainability for healthgrids - an expert survey.

    PubMed

    Scholz, Stefan; Semler, Sebastian C; Breitner, Michael H

    2009-01-01

    Grid computing initiatives in medicine and life sciences are under pressure to prove their sustainability. While some first business model frameworks have been outlined, few practical experiences have been considered. This gap has been narrowed by an international survey of 33 grid computing experts with biomedical and non-biomedical backgrounds on business aspects. The experts surveyed were cautiously optimistic about a sustainable implementation of grid computing within a mid-term timeline. They identified marketable application areas, stated the underlying value proposition, outlined trends, and specified critical success factors. Taken together, their answers provide a stable basis for a road map of sustainable grid computing solutions for medicine and life sciences.

  14. Aorta modeling with the element-based zero-stress state and isogeometric discretization

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Sasaki, Takafumi

    2017-02-01

    Patient-specific arterial fluid-structure interaction computations, including aorta computations, require an estimation of the zero-stress state (ZSS), because the image-based arterial geometries do not come from a ZSS. We have earlier introduced a method for estimation of the element-based ZSS (EBZSS) in the context of finite element discretization of the arterial wall. The method has three main components. 1. An iterative method, which starts with a calculated initial guess, is used for computing the EBZSS such that when a given pressure load is applied, the image-based target shape is matched. 2. A method for straight-tube segments is used for computing the EBZSS so that we match the given diameter and longitudinal stretch in the target configuration and the "opening angle." 3. An element-based mapping between the artery and straight-tube is extracted from the mapping between the artery and straight-tube segments. This provides the mapping from the arterial configuration to the straight-tube configuration, and from the estimated EBZSS of the straight-tube configuration back to the arterial configuration, to be used as the initial guess for the iterative method that matches the image-based target shape. Here we present the version of the EBZSS estimation method with isogeometric wall discretization. With isogeometric discretization, we can obtain the element-based mapping directly, instead of extracting it from the mapping between the artery and straight-tube segments. That is because all we need for the element-based mapping, including the curvatures, can be obtained within an element. With NURBS basis functions, we may be able to achieve a similar level of accuracy as with the linear basis functions, but using larger-size and much fewer elements. Higher-order NURBS basis functions allow representation of more complex shapes within an element. To show how the new EBZSS estimation method performs, we first present 2D test computations with straight

  15. The Elements of Effective Board Governance

    ERIC Educational Resources Information Center

    Doyle, Jim

    2009-01-01

    The purpose of this book is to act as a guide to good governance by exploring its various aspects and the key elements of success. It is intended to be used by anybody who is a member of a board, particularly in the nonprofit sector. This book is intended to help board members maximize their effectiveness both individually and collectively.…

  16. Spectral/hp element methods: Recent developments, applications, and perspectives

    NASA Astrophysics Data System (ADS)

    Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.

    2018-02-01

    The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.
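
    The exponential (spectral) error decay under p-refinement can be sketched with least-squares Legendre fits to a smooth function, a stand-in for a single-element spectral approximation:

      import numpy as np
      from numpy.polynomial import legendre

      x = np.linspace(-1, 1, 400)
      f = np.exp(np.sin(np.pi * x))          # smooth (analytic) target
      for p in (2, 4, 8, 16):
          coef = legendre.legfit(x, f, p)    # degree-p Legendre approximation
          err = np.max(np.abs(legendre.legval(x, coef) - f))
          print(p, err)                      # error falls by orders of magnitude

    For non-smooth solutions this advantage disappears, which is why h (mesh) and p (order) refinement are combined.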

  17. Advance finite element modeling of rotor blade aeroelasticity

    NASA Technical Reports Server (NTRS)

    Straub, F. K.; Sangha, K. B.; Panda, B.

    1994-01-01

    An advanced beam finite element has been developed for modeling rotor blade dynamics and aeroelasticity. This element is part of the Element Library of the Second Generation Comprehensive Helicopter Analysis System (2GCHAS). The element allows modeling of arbitrary rotor systems, including bearingless rotors. It accounts for moderately large elastic deflections, anisotropic properties, large frame motion for maneuver simulation, and allows for variable order shape functions. The effects of gravity, mechanically applied and aerodynamic loads are included. All kinematic quantities required to compute airloads are provided. In this paper, the fundamental assumptions and derivation of the element matrices are presented. Numerical results are shown to verify the formulation and illustrate several features of the element.

  18. Flow Applications of the Least Squares Finite Element Method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1998-01-01

    The main thrust of the effort has been towards the development, analysis and implementation of the least-squares finite element method (LSFEM) for fluid dynamics and electromagnetics applications. In the past year, there were four major accomplishments: 1) special treatments in computational fluid dynamics and computational electromagnetics, such as upwinding, numerical dissipation, staggered grid, non-equal order elements, operator splitting and preconditioning, edge elements, and vector potential are unnecessary; 2) the analysis of the LSFEM for most partial differential equations can be based on the bounded inverse theorem; 3) the finite difference and finite volume algorithms solve only two Maxwell equations and ignore the divergence equations; and 4) the first numerical simulation of three-dimensional Marangoni-Benard convection was performed using the LSFEM.

  19. Diffractive micro-optical element with nonpoint response

    NASA Astrophysics Data System (ADS)

    Soifer, Victor A.; Golub, Michael A.

    1993-01-01

    Common-use diffractive lenses have microrelief zones in the form of simple rings that provide only optical power and do not contain any image information. They have a point-image response under point-source illumination. A more complicated non-point response is needed to focus a light beam into different light marks or letter-type images, as well as for optical pattern recognition. The current presentation describes computer generation of diffractive micro-optical elements with complicated curvilinear zones of a regular piecewise-smooth structure and grey-level or staircase phase microrelief. The manufacture of non-point-response elements uses the steps of phase-transfer calculation and orthogonal-scan mask generation or lithographic glass etching. The ray-tracing method is shown to be applicable to this task. Several working samples of focusing optical elements generated by computer and photolithography are presented. Using the experimental results, we discuss such applications as laser branding.

  20. Rolling-Element Fatigue Testing and Data Analysis - A Tutorial

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Zaretsky, Erwin V.

    2011-01-01

    In order to rank bearing materials, lubricants, and other design variables using rolling-element bench-type fatigue testing of bearing components and full-scale rolling-element bearing tests, the investigator needs to be cognizant of the variables that affect rolling-element fatigue life and be able to maintain and control them within an acceptable experimental tolerance. Once these variables are controlled, the number of tests and the test conditions must be specified to assure reasonable statistical certainty of the final results. There is a reasonable correlation between the results from elemental test rigs and those obtained with full-scale bearings. Using the statistical methods of W. Weibull and L. Johnson, the minimum number of tests required can be determined. This paper brings together and discusses the technical aspects of rolling-element fatigue testing and data analysis, as well as making recommendations to assure quality and reliable testing of rolling-element specimens and full-scale rolling-element bearings.
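
    A brief sketch of the Weibull analysis referred to above, using median-rank regression (Benard's approximation) on made-up fatigue lives to estimate the Weibull slope and the L10 life:

      import numpy as np

      lives = np.sort(np.array([18., 25., 32., 41., 47., 60., 75., 95.]))  # Mrev
      n = len(lives)
      F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)    # Benard median ranks
      x = np.log(lives)
      y = np.log(-np.log(1.0 - F))                   # Weibull linearization
      beta, intercept = np.polyfit(x, y, 1)          # slope = Weibull modulus
      eta = np.exp(-intercept / beta)                # characteristic life
      L10 = eta * (-np.log(0.9)) ** (1.0 / beta)     # life at 90% reliability
      print(f"beta={beta:.2f}, eta={eta:.1f}, L10={L10:.1f}")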

  1. GAP Noise Computation By The CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Chang, Sin-Chung; Wang, Xiao Y.; Jorgenson, Philip C. E.

    2001-01-01

    A typical gap noise problem is considered in this paper using the new space-time conservation element and solution element (CE/SE) method. Implementation of the computation is straightforward. No turbulence model, LES (large eddy simulation) or a preset boundary layer profile is used, yet the computed frequency agrees well with the experimental one.

  2. [Physiological effects of rare earth elements and their application in traditional Chinese medicine].

    PubMed

    Zhou, Jie; Guo, Lanping; Xiao, Wenjuan; Geng, Yanling; Wang, Xiao; Shi, Xin'gang; Dan, Staerk

    2012-08-01

    The process in the studies on physiological effects of rare earth elements in plants and their action mechanisms were summarized in the aspects of seed germination, photosynthesis, mineral metabolism and stress resistance. And the applications of rare earth elements in traditional Chinese medicine (TCM) in recent years were also overviewed, which will provide reference for further development and application of rare earth elements in TCM.

  3. Multi-scale damage modelling in a ceramic matrix composite using a finite-element microstructure meshfree methodology

    PubMed Central

    2016-01-01

    The problem of multi-scale modelling of damage development in a SiC ceramic fibre-reinforced SiC matrix ceramic composite tube is addressed, with the objective of demonstrating the ability of the finite-element microstructure meshfree (FEMME) model to introduce important aspects of the microstructure into a larger scale model of the component. These are particularly the location, orientation and geometry of significant porosity and the load-carrying capability and quasi-brittle failure behaviour of the fibre tows. The FEMME model uses finite-element and cellular automata layers, connected by a meshfree layer, to efficiently couple the damage in the microstructure with the strain field at the component level. Comparison is made with experimental observations of damage development in an axially loaded composite tube, studied by X-ray computed tomography and digital volume correlation. Recommendations are made for further development of the model to achieve greater fidelity to the microstructure. This article is part of the themed issue ‘Multiscale modelling of the structural integrity of composite materials’. PMID:27242308

  4. Implicit Space-Time Conservation Element and Solution Element Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Himansu, Ananda; Wang, Xiao-Yen

    1999-01-01

    Artificial numerical dissipation is an important issue in large Reynolds number computations. In such computations, the artificial dissipation inherent in traditional numerical schemes can overwhelm the physical dissipation and yield inaccurate results on meshes of practical size. In the present work, the space-time conservation element and solution element method is used to construct new and accurate implicit numerical schemes such that artificial numerical dissipation will not overwhelm physical dissipation. Specifically, these schemes have the property that numerical dissipation vanishes when the physical viscosity goes to zero. These new schemes therefore accurately model the physical dissipation even when it is extremely small. The new schemes presented are two highly accurate implicit solvers for a convection-diffusion equation. The two schemes become identical in the pure convection case, and in the pure diffusion case. The implicit schemes are applicable over the whole Reynolds number range, from purely diffusive equations to convection-dominated equations with very small viscosity. The stability and consistency of the schemes are analysed, and some numerical results are presented. It is shown that, in the inviscid case, the new schemes become explicit and their amplification factors are identical to those of the Leapfrog scheme. On the other hand, in the pure diffusion case, their principal amplification factor becomes the amplification factor of the Crank-Nicolson scheme.
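
    For reference, the pure-diffusion behavior cited above is the standard von Neumann result: applying the Crank-Nicolson scheme to $u_t = \nu u_{xx}$ gives the amplification factor

      G(\theta) = \frac{1 - 2r\sin^2(\theta/2)}{1 + 2r\sin^2(\theta/2)},
      \qquad r = \frac{\nu\,\Delta t}{\Delta x^2},

    so that $|G(\theta)| \le 1$ for every $r > 0$, consistent with unconditional stability in the pure diffusion case.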

  5. Hyperswitch Network For Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Chow, Edward; Madan, Herbert; Peterson, John

    1989-01-01

    Data-driven dynamic switching enables high speed data transfer. Proposed hyperswitch network based on mixed static and dynamic topologies. Routing header modified in response to congestion or faults encountered as path established. Static topology meets requirement if nodes have switching elements that perform necessary routing header revisions dynamically. Hypercube topology now being implemented with switching element in each computer node aimed at designing very-richly-interconnected multicomputer system. Interconnection network connects great number of small computer nodes, using fixed hypercube topology, characterized by point-to-point links between nodes.

  6. Teaching and Learning about Solid Waste: Aspects of Content Knowledge

    ERIC Educational Resources Information Center

    Cinquetti, Heloisa Chalmers Sisla; de Carvalho, Luiz Marcelo

    2007-01-01

    This paper investigates aspects of content knowledge related to teaching and learning about solid waste, focusing on the processes of learning and teaching by Elementary School teachers in Brazil, in two modalities of continuing education: courses and school-based meetings. We analyse elements of teachers' reflections whilst referring to three…

  7. Iterative methods for elliptic finite element equations on general meshes

    NASA Technical Reports Server (NTRS)

    Nicolaides, R. A.; Choudhury, Shenaz

    1986-01-01

    Iterative methods for arbitrary mesh discretizations of elliptic partial differential equations are surveyed. The methods discussed are preconditioned conjugate gradients, algebraic multigrid, deflated conjugate gradients, element-by-element techniques, and domain decomposition. Computational results are included.
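
    A minimal Jacobi-preconditioned conjugate gradient solver of the kind surveyed, written in Python on a dense toy system (production codes apply the operator matrix-free, e.g. element-by-element):

      import numpy as np

      def pcg(A, b, tol=1e-10, maxit=500):
          Minv = 1.0 / np.diag(A)          # Jacobi (diagonal) preconditioner
          x = np.zeros_like(b)
          r = b - A @ x
          z = Minv * r
          p = z.copy()
          rz = r @ z
          for _ in range(maxit):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = Minv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      n = 50                                # SPD system from a 1D Laplacian
      A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      x = pcg(A, np.ones(n))
      assert np.allclose(A @ x, np.ones(n), atol=1e-8)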

  8. A finite element head and neck model as a supportive tool for deformable image registration.

    PubMed

    Kim, Jihun; Saitou, Kazuhiro; Matuszak, Martha M; Balter, James M

    2016-07-01

    A finite element (FE) head and neck model was developed as a tool to aid investigations and development of deformable image registration and patient modeling in radiation oncology. Useful aspects of a FE model for these purposes include the ability to produce realistic deformations (similar to those seen in patients over the course of treatment) and a rational means of generating new configurations, e.g., via the application of force and/or displacement boundary conditions. The model was constructed based on a cone-beam computed tomography image of a head and neck cancer patient. The three-node triangular surface meshes created for the bony elements (skull, mandible, and cervical spine) and joint elements were integrated into a skeletal system and combined with the exterior surface. Nodes were additionally created inside the surface structures composed of the three-node triangular surface meshes, so that four-node tetrahedral FE elements were created over the whole region of the model. The bony elements were modeled as a homogeneous linear elastic material connected by intervertebral disks. The surrounding tissues were modeled as a homogeneous linear elastic material. Under force or displacement boundary conditions, FE analysis on the model calculates approximate solutions of the displacement vector field. A FE head and neck model was thus constructed in which the skull, mandible, and cervical vertebrae are mechanically connected by disks. The developed FE model is capable of generating realistic deformations that are strain-free for the bony elements and of creating new configurations of the skeletal system with the surrounding tissues reasonably deformed. The FE model can generate realistic deformations for skeletal elements. In addition, the model provides a way of evaluating the accuracy of image alignment methods by producing a ground-truth deformation and correspondingly simulated images. The ability to combine force and displacement conditions provides

  9. Workshop on the Integration of Finite Element Modeling with Geometric Modeling

    NASA Technical Reports Server (NTRS)

    Wozny, Michael J.

    1987-01-01

    The workshop on the Integration of Finite Element Modeling with Geometric Modeling was held on 12 May 1987. It was held to discuss the geometric modeling requirements of the finite element modeling process and to better understand the technical aspects of the integration of these two areas. The 11 papers are presented except for one for which only the abstract is given.

  10. Computational Fluid Dynamics at ICMA (Institute for Computational Mathematics and Applications)

    DTIC Science & Technology

    1988-10-18

    Charles A. Hall and Thomas A. Porsching. This report summarizes the work of ten ICMA (Institute for Computational Mathematics and Applications) personnel relating to the general area of computational fluid mechanics; previous work in this area concentrated on a study of the differential geometric aspects of the problem.

  11. Recursive computer architecture for VLSI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Treleaven, P.C.; Hopkins, R.P.

    1982-01-01

    A general-purpose computer architecture based on the concept of recursion and suitable for VLSI computer systems built from replicated (lego-like) computing elements is presented. The recursive computer architecture is defined by presenting a program organisation, a machine organisation, and an experimental machine implementation oriented to VLSI. The experimental implementation is being restricted to simple, identical microcomputers, each containing a memory, a processor, and a communications capability. This future generation of lego-like computer systems is termed fifth-generation computers by the Japanese. 30 references.

  12. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction in the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel

  13. Inversion of potential field data using the finite element method on parallel computers

    NASA Astrophysics Data System (ADS)

    Gross, L.; Altinay, C.; Shaw, S.

    2015-11-01

    In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We show that each iterative step requires the solution of several PDEs, namely for the potential fields, for the adjoint defects, and for the application of the preconditioner. In extension to the traditional discrete formulation, the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by the weighting of the regularization and cross-gradient terms but is independent of the resolution of the PDE discretization, and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
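
    A PDE-free analogue of the optimization layer can be sketched in a few lines: recover a model m from noisy linear data d = Gm + noise by minimizing a Tikhonov-regularized misfit with a BFGS-type method (here scipy's L-BFGS-B variant; the paper's PDE constraints and cross-gradient coupling are omitted).

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      G = rng.standard_normal((40, 20))            # toy forward operator
      m_true = np.sin(np.linspace(0, np.pi, 20))
      d = G @ m_true + 0.01 * rng.standard_normal(40)
      mu = 1e-2                                    # regularization weight

      def cost_and_grad(m):
          r = G @ m - d
          J = 0.5 * r @ r + 0.5 * mu * m @ m       # misfit + Tikhonov term
          g = G.T @ r + mu * m                     # analytic gradient
          return J, g

      res = minimize(cost_and_grad, np.zeros(20), jac=True, method='L-BFGS-B')
      m_est = res.x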

  14. Computer-aided engineering system for design of sequence arrays and lithographic masks

    DOEpatents

    Hubbell, Earl A.; Lipshutz, Robert J.; Morris, Macdonald S.; Winkler, James L.

    1997-01-01

    An improved set of computer tools for forming arrays. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks.

  15. Computing and Office Automation: Changing Variables.

    ERIC Educational Resources Information Center

    Staman, E. Michael

    1981-01-01

    Trends in computing and office automation and their applications, including planning, institutional research, and general administrative support in higher education, are discussed. Changing aspects of information processing and an increasingly larger user community are considered. The computing literacy cycle may involve programming, analysis, use…

  16. TAP 2: A finite element program for thermal analysis of convectively cooled structures

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.

    1980-01-01

    A finite element computer program (TAP 2) for steady-state and transient thermal analyses of convectively cooled structures is presented. The program has a finite element library of six elements: two conduction/convection elements to model heat transfer in a solid, two convection elements to model heat transfer in a fluid, and two integrated conduction/convection elements to represent combined heat transfer in tubular and plate/fin fluid passages. Nonlinear thermal analysis due to temperature-dependent thermal parameters is performed using the Newton-Raphson iteration method. Transient analyses are performed using an implicit Crank-Nicolson time integration scheme with consistent or lumped capacitance matrices as an option. Program output includes nodal temperatures and element heat fluxes. Pressure drops in fluid passages may be computed as an option. User instructions and sample problems are presented in appendixes.
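
    As a minimal sketch of the transient scheme described above, one Crank-Nicolson step for the semi-discrete system C dT/dt + K T = Q is shown below, with constant matrices and load over the step; TAP 2's Newton-Raphson treatment of temperature-dependent parameters is omitted. Row-sum lumping is assumed for the lumped option, which is one common choice rather than necessarily TAP 2's.

        import numpy as np

        def crank_nicolson_step(C, K, Q, T_n, dt, lumped=False):
            """Advance C dT/dt + K T = Q by one Crank-Nicolson step."""
            if lumped:
                C = np.diag(C.sum(axis=1))      # row-sum lumped capacitance
            A = C / dt + 0.5 * K                # left-hand operator
            b = (C / dt - 0.5 * K) @ T_n + Q    # right-hand side
            return np.linalg.solve(A, b)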

  17. Nonlinear Large Deflection Theory with Modified Aeroelastic Lifting Line Aerodynamics for a High Aspect Ratio Flexible Wing

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan; Ting, Eric; Chaparro, Daniel

    2017-01-01

    This paper investigates the effect of nonlinear large-deflection bending on the aerodynamic performance of a high aspect ratio flexible wing. A set of nonlinear static aeroelastic equations is derived for the large bending deflection of a high aspect ratio wing structure. An analysis is conducted to compare the nonlinear bending theory with the linear bending theory. The results show that the nonlinear bending theory is length-preserving, whereas the linear bending theory causes a non-physical lengthening of the wing structure under the no-axial-load condition. A modified lifting line theory is developed to compute the lift and drag coefficients of a wing structure undergoing a large bending deflection. The lift and drag coefficients are more accurately estimated by the nonlinear bending theory due to its length-preserving property. The nonlinear bending theory yields lower lift and span efficiency than the linear bending theory. A coupled aerodynamic-nonlinear finite element model is developed to implement the nonlinear bending theory for a Common Research Model (CRM) flexible wing wind tunnel model to be tested in the University of Washington Aeronautical Laboratory (UWAL). The structural stiffness of the model is designed to give about 10% wing tip deflection, which is large enough for the nonlinear deflection effect to become significant. The computational results show that the nonlinear bending theory yields slightly less lift than the linear bending theory for this wind tunnel model. As a result, the linear bending theory is deemed adequate for the CRM wind tunnel model.
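
    The length-preservation claim is easy to check numerically. The sketch below (illustrative only, not the paper's model) prescribes a bending rotation along the elastic axis and compares arc lengths: rotation-based (nonlinear) kinematics reproduce the original length exactly, while linear small-deflection kinematics lengthen the member.

        import numpy as np

        L, n = 1.0, 2001
        s = np.linspace(0.0, L, n)
        ds = np.diff(s)
        theta = 0.3 * (s / L) ** 2      # prescribed section rotation (rad), large deflection

        # Nonlinear, length-preserving kinematics: x' = cos(theta), w' = sin(theta).
        x_nl = np.concatenate(([0.0], np.cumsum(np.cos(theta[1:]) * ds)))
        w_nl = np.concatenate(([0.0], np.cumsum(np.sin(theta[1:]) * ds)))

        # Linear kinematics: axial coordinate unchanged, w' = theta.
        w_lin = np.concatenate(([0.0], np.cumsum(theta[1:] * ds)))

        def arc_length(x, w):
            return np.sum(np.hypot(np.diff(x), np.diff(w)))

        print("nonlinear arc length:", arc_length(x_nl, w_nl))  # equals L to rounding
        print("linear arc length:   ", arc_length(s, w_lin))    # exceeds L, non-physical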

  18. Design of a high-speed digital processing element for parallel simulation

    NASA Technical Reports Server (NTRS)

    Milner, E. J.; Cwynar, D. S.

    1983-01-01

    A prototype of a custom designed computer to be used as a processing element in a multiprocessor based jet engine simulator is described. The purpose of the custom design was to give the computer the speed and versatility required to simulate a jet engine in real time. Real time simulations are needed for closed loop testing of digital electronic engine controls. The prototype computer has a microcycle time of 133 nanoseconds. This speed was achieved by prefetching the next instruction while the current one is executing, transporting data over high-speed data busses, and using state-of-the-art components such as a very large scale integration (VLSI) multiplier. Included are discussions of processing element requirements, design philosophy, the architecture of the custom designed processing element, the comprehensive instruction set, the diagnostic support software, and the development status of the custom design.

  19. Visual ergonomics and computer work--is it all about computer glasses?

    PubMed

    Jonsson, Christina

    2012-01-01

    The Swedish Provisions on Work with Display Screen Equipment and the EU Directive on the minimum safety and health requirements for work with display screen equipment cover several important visual ergonomics aspects. But a review of cases and questions put to the Swedish Work Environment Authority clearly shows that most attention is given to the demands for eyesight tests and special computer glasses. Other important visual ergonomics factors are at risk of being neglected. Today computers are used everywhere, both at work and at home. Computers can be laptops, PDAs, tablet computers, smart phones, etc. The demands on eyesight tests and computer glasses still apply, but the visual demands and the visual ergonomics conditions are quite different compared to the use of a stationary computer. Based on this review, we raise the question of whether the demand on the employer to provide employees with computer glasses is outdated.

  20. Combined AIE/EBE/GMRES approach to incompressible flows. [Adaptive Implicit-Explicit/Grouped Element-by-Element/Generalized Minimum Residuals

    NASA Technical Reports Server (NTRS)

    Liou, J.; Tezduyar, T. E.

    1990-01-01

    Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residual (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.
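
    The element-by-element idea pairs naturally with a Krylov solver: the global operator is only ever applied, never assembled, so GMRES needs nothing beyond a gather-multiply-scatter loop. The sketch below is illustrative (SciPy's GMRES standing in for the paper's solver, with a toy pair of 1-D bar elements); the actual GEBE scheme additionally groups elements so that products within a group can proceed in parallel.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def make_ebe_operator(ke_list, conn, ndof):
            """Global matrix-vector product accumulated element by element."""
            def matvec(x):
                y = np.zeros(ndof)
                for ke, dofs in zip(ke_list, conn):
                    y[dofs] += ke @ x[dofs]     # gather, local product, scatter
                return y
            return LinearOperator((ndof, ndof), matvec=matvec, dtype=float)

        # Toy data: two 1-D bar elements sharing node 1.
        ke = np.array([[1.0, -1.0], [-1.0, 1.0]])
        A = make_ebe_operator([ke, ke], [np.array([0, 1]), np.array([1, 2])], ndof=3)
        print(A.dot(np.array([0.0, 0.5, 1.0])))  # operator action without assembly
        # In a solve, A is passed straight to the Krylov method: x, info = gmres(A, b)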

  1. Calculating three loop ladder and V-topologies for massive operator matrix elements by computer algebra

    NASA Astrophysics Data System (ADS)

    Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.

    2016-05-01

    Three loop ladder and V-topology diagrams contributing to the massive operator matrix element AQg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis, also performing the expansion in the dimensional parameter, whenever it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
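
    For a flavor of the objects involved: nested harmonic sums satisfy simple recursions in the Mellin variable N, and it is precisely such recursions that underlie their analytic continuation. A minimal, illustrative sketch in exact rational arithmetic:

        from fractions import Fraction

        def S21(N):
            """Nested harmonic sum S_{2,1}(N) = sum_{i<=N} S_1(i) / i**2,
            accumulated via the recursion S_{2,1}(N) = S_{2,1}(N-1) + S_1(N) / N**2."""
            total, s1 = Fraction(0), Fraction(0)
            for i in range(1, N + 1):
                s1 += Fraction(1, i)        # S_1(i) = S_1(i-1) + 1/i
                total += s1 / i**2
            return total

        print(S21(4))   # exact rational value at N = 4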

  2. Self-Aware Computing

    DTIC Science & Technology

    2009-06-01

    to floating point, to multi-level logic. Self-aware computation can be distinguished from existing computational models which are... systems have advanced to the point that the time is ripe to realize such a system. To illustrate, let us examine each of the key aspects of self-... servers for each service, there are no single points of failure in the system. If an OS or user core has a failure, one of several introspection cores...

  3. Using a multifrontal sparse solver in a high performance, finite element code

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Lucas, Robert; Raefsky, Arthur

    1990-01-01

    We consider the performance of the finite element method on a vector supercomputer. The computationally intensive parts of the finite element method are typically the individual element forms and the solution of the global stiffness matrix, both of which are vectorized in high performance codes. To further increase throughput, new algorithms are needed. We compare a multifrontal sparse solver to a traditional skyline solver in a finite element code on a vector supercomputer. The multifrontal solver uses the Multiple-Minimum-Degree reordering heuristic to reduce the number of operations required to factor a sparse matrix, and full-matrix computational kernels (e.g., BLAS3) to enhance vector performance. The net result is an order-of-magnitude reduction in run time for a finite element application on one processor of a Cray X-MP.
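
    SciPy's SuperLU interface is supernodal rather than multifrontal, but the hedged sketch below exercises the same two ingredients named above: a Multiple-Minimum-Degree fill-reducing ordering and dense BLAS-3 kernels inside a sparse factorization. The model matrix is a 2-D Laplacian standing in for a global stiffness matrix.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import splu

        n = 50
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
        A = sp.kronsum(T, T).tocsc()            # 2-D Laplacian, n*n unknowns
        b = np.ones(A.shape[0])

        # Minimum-degree ordering limits fill-in; the dense frontal/supernodal
        # kernels then run at BLAS-3 speed during factorization.
        lu = splu(A, permc_spec="MMD_AT_PLUS_A")
        x = lu.solve(b)
        print(np.linalg.norm(A @ x - b))        # residual check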

  4. A comparative evaluation of mandibular finite element models with different lengths and elements for implant biomechanics.

    PubMed

    Teixeira, E R; Sato, Y; Akagawa, Y; Shindoi, N

    1998-04-01

    Further validity of finite element analysis (FEA) in implant biomechanics requires an increase of the modelled range and mesh refinement, and a consequent increase in element number and calculation time. To develop a new method that allows a decrease of the modelled range and element number (along with less calculation time and less computer memory), 10 FEA models of the mandible with different mesio-distal lengths and elements were constructed based on three-dimensional graphic data of the bone structure around an osseointegrated implant. Analysis of the stress distribution under a 100 N load, with the most external planes of the models fixed, indicated that a minimal bone length of 4.2 mm on the mesial and distal sides was acceptable for FEA representation. Moreover, unification of elements located far away from the implant surface did not affect the stress distribution. These results suggest that it may be possible to develop a replica FEA implant model of the mandible with a smaller range and fewer elements without altering the stress distribution.

  5. 47 CFR 69.119 - Basic service element expedited approval process.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    47 Telecommunication, Vol. 3 (2010-10-01): FEDERAL COMMUNICATIONS COMMISSION (CONTINUED), COMMON CARRIER SERVICES (CONTINUED), ACCESS CHARGES, Computation of Charges: § 69.119 Basic service element expedited approval process.

  6. Models of optical quantum computing

    NASA Astrophysics Data System (ADS)

    Krovi, Hari

    2017-03-01

    I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.
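
    One concrete anchor for the complexity-theoretic aspects mentioned above: boson sampling output amplitudes are permanents of submatrices of the interferometer unitary, and computing the permanent is #P-hard. Ryser's formula, sketched below, is the classic exponential-time classical algorithm.

        from itertools import combinations

        def permanent(A):
            """Matrix permanent via Ryser's formula, O(2**n * n**2)."""
            n = len(A)
            total = 0
            for k in range(1, n + 1):
                for cols in combinations(range(n), k):
                    prod = 1
                    for row in A:                       # product of column sums
                        prod *= sum(row[j] for j in cols)
                    total += (-1) ** k * prod
            return (-1) ** n * total

        print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10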

  7. Computational mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raboin, P J

    1998-01-01

    The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable in driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) on ''Springback Predictability'' and with the Federal Aviation Administration (FAA) on the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

  8. A finite element solver for 3-D compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Reddy, K. C.; Reddy, J. N.; Nayani, S.

    1990-01-01

    Computation of the flow field inside a space shuttle main engine (SSME) requires the application of state-of-the-art computational fluid dynamics (CFD) technology. Several computer codes are under development to solve 3-D flow through the hot gas manifold. Some algorithms were designed to solve the unsteady compressible Navier-Stokes equations, either by implicit or explicit factorization methods, using several hundred or thousands of time steps to reach a steady state solution. A new iterative algorithm is being developed for the solution of the implicit finite element equations without assembling global matrices. It is an efficient iteration scheme based on a modified nonlinear Gauss-Seidel iteration with symmetric sweeps. The algorithm is analyzed for a model equation and is shown to be unconditionally stable. Results from a series of test problems are presented. The finite element code was tested for Couette flow, which is flow under a pressure gradient between two parallel plates in relative motion. Another problem solved is viscous laminar flow over a flat plate. The general 3-D finite element code was used to compute the flow in an axisymmetric turnaround duct at low Mach numbers.
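
    A dense, linearized sketch of Gauss-Seidel with symmetric sweeps follows; it is illustrative only, since the scheme above is nonlinear, works element by element, and assembles no global matrix.

        import numpy as np

        def sgs_solve(A, b, iters=100):
            """Gauss-Seidel iteration with symmetric (forward + backward) sweeps."""
            n = len(b)
            x = np.zeros(n)
            for _ in range(iters):
                for i in range(n):                   # forward sweep
                    x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
                for i in range(n - 1, -1, -1):       # backward sweep
                    x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            return x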

  9. Advanced flight computers for planetary exploration

    NASA Technical Reports Server (NTRS)

    Stephenson, R. Rhoads

    1988-01-01

    Research concerning flight computers for use on interplanetary probes is reviewed. The history of these computers from the Viking mission to the present is outlined. The differences between ground commercial computers and computers for planetary exploration are listed. The development of a computer for the Mariner Mark II comet rendezvous asteroid flyby mission is described. Various aspects of recently developed computer systems are examined, including the Max real-time embedded computer, a hypercube distributed supercomputer, a SAR data processor, a processor for the High Resolution IR Imaging Spectrometer, and a robotic vision multiresolution pyramid machine for processing images obtained by a Mars Rover.

  10. On a categorial aspect of knowledge representation

    NASA Astrophysics Data System (ADS)

    Tataj, Emanuel; Mulawka, Jan; Nieznański, Edward

    Adequate representation is crucial for modeling any type of data. To faithfully present and describe the relevant part of the world, it is necessary to select a method that can easily be implemented on a computer system and that supports further description and reasoning. The main objective of this contribution is to present methods of knowledge representation using a categorial approach, and then to identify their main advantages for computer implementation. The categorial aspect of knowledge representation is considered in the realisation of semantic networks; such a method borrows known metaphysical properties for the data modeling process. Potential topics for further development of categorial semantic network implementations are also outlined.
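
    A toy realisation of a categorial semantic network, with reasoning as the transitive closure of its "is-a" links (all names below are illustrative, not from the paper):

        # Categorial "is-a" hierarchy; reasoning = transitive closure.
        is_a = {
            "Socrates": ["human"],
            "human": ["animal", "rational being"],
            "animal": ["substance"],
        }

        def categories_of(node, graph):
            """All categories reachable from a node through is-a links."""
            seen, stack = set(), [node]
            while stack:
                for parent in graph.get(stack.pop(), []):
                    if parent not in seen:
                        seen.add(parent)
                        stack.append(parent)
            return seen

        print(categories_of("Socrates", is_a))
        # {'human', 'animal', 'rational being', 'substance'}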

  11. Rubbery computing

    NASA Astrophysics Data System (ADS)

    Wilson, Katherine E.; Henke, E.-F. Markus; Slipher, Geoffrey A.; Anderson, Iain A.

    2017-04-01

    Electromechanically coupled dielectric elastomer actuators (DEAs) and dielectric elastomer switches (DESs) may form digital logic circuitry made entirely of soft and flexible materials. The expansion in planar area of a DEA exerts force across a DES, which is a soft electrode with strain-dependent resistivity. When compressed, the DES drops steeply in resistance and changes state from non-conducting to conducting. Logic operators may be achieved with different arrangements of interacting DE actuators and switches. We demonstrate combinatorial logic elements, including the fundamental Boolean logic gates, as well as sequential logic elements, including latches and flip-flops. With both data storage and signal processing abilities, the necessary calculating components of a soft computer are available. A noteworthy advantage of a soft computer with mechanosensitive DESs is the potential for responding to environmental strains while locally processing information and generating a reaction, like a muscle reflex.
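
    A behavioral toy model of such circuitry is sketched below; thresholds and strains are illustrative, not measured values. A DES conducts once the neighbouring DEA's expansion compresses it past its switching strain, and two DESs in series between supply and output form a NAND, the universal gate from which the latches and flip-flops mentioned above can be built.

        def des_conducts(dea_actuated, switching_strain=0.1, strain_when_on=0.2):
            """A DES switches to conducting when compressed past its switching strain."""
            strain = strain_when_on if dea_actuated else 0.0
            return strain > switching_strain

        def nand(a, b):
            # Two DESs in series pull the output low only when both conduct.
            return not (des_conducts(a) and des_conducts(b))

        print([nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
        # [True, True, True, False]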

  12. Prediction of damage formation in hip arthroplasties by finite element analysis using computed tomography images.

    PubMed

    Abdullah, Abdul Halim; Todo, Mitsugu; Nakashima, Yasuharu

    2017-06-01

    Femoral bone fracture is one of the main causes of failure of hip arthroplasties (HA). Abrupt, high-impact forces in daily activities may lead to complex loading configurations such as bending and sideway falls. The objective of this study is to predict the risk of femoral bone fractures in total hip arthroplasty (THA) and resurfacing hip arthroplasty (RHA). A computed tomography (CT)-based finite element analysis was conducted to demonstrate damage formation in three-dimensional models of HAs. The inhomogeneous model of the femoral bone was constructed from a 79-year-old female patient with hip osteoarthritis. Two different femoral components, modeled with titanium alloy and cobalt chromium, were inserted into the femoral bones to represent the THA and RHA models, respectively. The analysis included six configurations with various loading and boundary conditions: axial compression, torsion, lateral bending, stance, and two types of falling configurations. The applied hip loadings were normalized to body weight (BW) and increased from 1 BW to 3 BW. Predicted damage formation in the femoral models is discussed in terms of tensile failure as well as compressive yielding and failure elements. The results indicate that the loading direction governs the pattern and location of fracture at varying magnitudes of loading. The lateral bending configuration produced the highest damage formation in both the THA and RHA models. The femoral neck and trochanteric regions were the common fracture locations in the RHA model in most configurations, while the predicted fracture locations in THA differed as per the Vancouver classification.
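
    CT-based finite element models of this kind typically obtain their inhomogeneous material properties by mapping Hounsfield units to apparent density and density to Young's modulus through an empirical power law. The sketch below shows the general recipe with illustrative coefficients only; it does not reproduce the paper's calibration.

        import numpy as np

        def hu_to_youngs_modulus(hu, a=0.0, b=0.0008, c=6.95, d=1.49):
            """Map CT Hounsfield units to an element-wise Young's modulus.

            rho = a + b * HU   (apparent density, g/cm^3)
            E   = c * rho**d   (GPa)
            Coefficients are illustrative; published calibrations vary by
            bone site and scanner."""
            rho = np.clip(a + b * np.asarray(hu), 1e-6, None)   # keep density positive
            return c * rho ** d

        element_hu = np.array([250.0, 800.0, 1400.0])   # hypothetical mean HU per element
        print(hu_to_youngs_modulus(element_hu))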

  13. "I Am Very Good at Computers": Young Children's Computer Use and Their Computer Self-Esteem

    ERIC Educational Resources Information Center

    Hatzigianni, Maria; Margetts, Kay

    2012-01-01

    Children frequently encounter computers in many aspects of daily life. It is important to consider the consequences not only on children's cognitive development but on their emotional and self-development. This paper reports on research undertaken in Australia with 52 children aged between 44 and 79 months to explore the existence or not of a…

  14. What is Aspect-Oriented Programming, Revisited

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    For the Advanced Separation of Concerns workshop at OOPSLA 2000 in Minneapolis, Dan Friedman and I wrote a paper that argued that the distinguishing characteristic of Aspect-Oriented Programming systems (qua programming systems) is that they provide quantification and obliviousness. In this paper, I expand on the themes of our Minneapolis workshop paper, respond to some of the comments we've received on that paper, and provide a computational formalization of the notion of quantification.
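
    Both properties are easy to make concrete. In the hedged sketch below (mine, not the paper's), one weave call quantifies, applying the advice across every function matching a pointcut predicate, while the base functions remain oblivious, written with no reference to the aspect.

        import functools
        import types

        def logging_advice(fn):
            """Advice: wraps any join point it is woven into."""
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                print(f"entering {fn.__name__}")
                result = fn(*args, **kwargs)
                print(f"leaving {fn.__name__}")
                return result
            return wrapper

        def weave(namespace, pointcut):
            """Quantification: advise every function whose name matches the pointcut."""
            for name, obj in list(namespace.items()):
                if isinstance(obj, types.FunctionType) and pointcut(name):
                    namespace[name] = logging_advice(obj)

        # Base code, oblivious to the aspect:
        def deposit(balance, amount): return balance + amount
        def withdraw(balance, amount): return balance - amount

        weave(globals(), pointcut=lambda name: name in ("deposit", "withdraw"))
        print(withdraw(deposit(100, 50), 30))   # advice fires around both calls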

  15. Nitsche Extended Finite Element Methods for Earthquake Simulation

    NASA Astrophysics Data System (ADS)

    Coon, Ethan T.

    Modeling earthquakes and geologically short-time-scale events on fault networks is a difficult problem with important implications for human safety and design. These problems demonstrate a rich physical behavior, in which distributed loading localizes both spatially and temporally into earthquakes on fault systems. This localization is governed by two aspects: friction and fault geometry. Computationally, these problems provide a stern challenge for modelers --- static and dynamic equations must be solved on domains with discontinuities on complex fault systems, and frictional boundary conditions must be applied on these discontinuities. The most difficult aspect of modeling physics on complicated domains is the mesh. Most numerical methods involve meshing the geometry; nodes are placed on the discontinuities, and edges are chosen to coincide with faults. The resulting mesh is highly unstructured, making the derivation of finite difference discretizations difficult. Therefore, most models use the finite element method. Standard finite element methods place requirements on the mesh for the sake of stability, accuracy, and efficiency. The formation of a mesh which both conforms to fault geometry and satisfies these requirements is an open problem, especially for three-dimensional, physically realistic fault geometries. In addition, if the fault system evolves over the course of a dynamic simulation (i.e. in the case of growing cracks or breaking new faults), the geometry must be re-meshed at each time step. This can be expensive computationally. The fault-conforming approach is undesirable when complicated meshes are required, and impossible to implement when the geometry is evolving. Therefore, meshless and hybrid finite element methods that handle discontinuities without placing them on element boundaries are a desirable and natural way to discretize these problems. Several such methods are being actively developed for use in engineering mechanics involving crack
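
    For readers unfamiliar with the Nitsche ingredient of the title: it imposes boundary or interface conditions weakly, through consistency, symmetry, and penalty terms, rather than by constraining mesh nodes, which is what frees the discretization from conforming to the fault. A minimal 1-D sketch of the boundary version (illustrative; not the dissertation's formulation):

        import numpy as np

        # 1-D Poisson: -u'' = f on (0, 1); u(0) = g imposed weakly by Nitsche,
        # u(1) = 0 imposed strongly. Linear elements; gamma is the penalty.
        n, g, gamma = 40, 1.0, 10.0
        h = 1.0 / n
        f = lambda x: np.pi**2 * np.sin(np.pi * x)

        A = np.zeros((n + 1, n + 1))
        b = np.zeros(n + 1)
        for e in range(n):                          # standard stiffness and load
            A[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
            b[e:e+2] += f((e + 0.5) * h) * h / 2    # midpoint-rule load

        # Nitsche terms at x = 0 (outward normal -1, so dn_u = -(u1 - u0)/h):
        A[0, 0] += -2.0 / h + gamma / h             # consistency + symmetry + penalty
        A[0, 1] += 1.0 / h
        A[1, 0] += 1.0 / h
        b[0] += (gamma - 1.0) * g / h
        b[1] += g / h

        A[n, :] = 0.0; A[n, n] = 1.0; b[n] = 0.0    # strong BC at x = 1
        u = np.linalg.solve(A, b)
        print("u(0) =", u[0], "(weakly pinned to g =", g, ")")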

  16. Computer-aided engineering system for design of sequence arrays and lithographic masks

    DOEpatents

    Hubbell, Earl A.; Morris, MacDonald S.; Winkler, James L.

    1999-01-05

    An improved set of computer tools for forming arrays. According to one aspect of the invention, a computer system (100) is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files (104) to design and/or generate lithographic masks (110).

  17. Computer-aided engineering system for design of sequence arrays and lithographic masks

    DOEpatents

    Hubbell, Earl A.; Morris, MacDonald S.; Winkler, James L.

    1996-01-01

    An improved set of computer tools for forming arrays. According to one aspect of the invention, a computer system (100) is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files (104) to design and/or generate lithographic masks (110).

  18. Computer-aided engineering system for design of sequence arrays and lithographic masks

    DOEpatents

    Hubbell, E.A.; Morris, M.S.; Winkler, J.L.

    1999-01-05

    An improved set of computer tools for forming arrays is disclosed. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks. 14 figs.

  19. Computer-aided engineering system for design of sequence arrays and lithographic masks

    DOEpatents

    Hubbell, E.A.; Lipshutz, R.J.; Morris, M.S.; Winkler, J.L.

    1997-01-14

    An improved set of computer tools for forming arrays is disclosed. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks. 14 figs.

  20. Computer-aided engineering system for design of sequence arrays and lithographic masks

    DOEpatents

    Hubbell, E.A.; Morris, M.S.; Winkler, J.L.

    1996-11-05

    An improved set of computer tools for forming arrays is disclosed. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks. 14 figs.