On current aspects of finite element computational fluid mechanics for turbulent flows
NASA Technical Reports Server (NTRS)
Baker, A. J.
1982-01-01
A set of nonlinear partial differential equations suitable for the description of a class of turbulent three-dimensional flow fields in select geometries is identified. On the basis of the concept of enforcing a penalty constraint to ensure accurate accounting of ordering effects, a finite element numerical solution algorithm is established for the equation set and the theoretical aspects of accuracy, convergence and stability are identified and quantified. Hypermatrix constructions are used to formulate the reduction of the computational aspects of the theory to practice. The robustness of the algorithm, and the computer program embodiment, have been verified for pertinent flow configurations.
Computational aspects of heat transfer in structures via transfinite element formulations
NASA Technical Reports Server (NTRS)
Tamma, K. K.; Railkar, S.
1986-01-01
The paper presents a generalized Transform Method based Finite Element methodology for thermal analysis with emphasis on the computational aspects of heat transfer in structures. The purpose of this paper is to present an alternate methodology for thermal analysis of structures and therein outline the advantages of the approach in comparison with conventional finite element schemes and existing practices. The overall goals of the research, however, are aimed first toward enhanced thermal formulations and therein to provide avenues for subsequent interdisciplinary thermal/structural analysis via a common numerical methodology. Basic concepts of the approach for thermal analysis are described with emphasis on a Laplace Transform based finite element methodology. Highlights and characteristic features of the approach are described via generalized formulations and applications to several problems. Results obtained demonstrate excellent agreement in comparison with analytic and/or conventional finite element solutions, with savings in computational times and model sizes. The potential of the approach for interdisciplinary thermal/structural problems is also identified.
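The transform-method idea described above can be summarized for a semi-discretized thermal system. The following is a hedged sketch in standard matrix notation, not the paper's exact formulation: with capacitance matrix C, conductance matrix K, and load vector Q(t), the semi-discrete heat conduction equations are transformed in time and solved algebraically in the transform domain.

```latex
% Semi-discrete heat conduction: C \dot{T} + K T = Q(t)
% Laplace transform in time (s = transform variable):
\left( s\,\mathbf{C} + \mathbf{K} \right) \hat{\mathbf{T}}(s)
  = \hat{\mathbf{Q}}(s) + \mathbf{C}\,\mathbf{T}(0),
\qquad
\mathbf{T}(t) = \mathcal{L}^{-1}\!\left[ \hat{\mathbf{T}}(s) \right](t)
```

The time dimension is thus removed from the discrete system, which is one way such methods can reduce computational times relative to step-by-step time integration.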
Computational Aspects of the h, p and h-p Versions of the Finite Element Method.
1987-03-01
Then we study the dependence of the error ‖e‖ on the computational work for the h, p, and h-p versions of the finite element method. ... paper presented at the First World Congress on Computational ... Szabó, B.A.: PROBE: Theoretical Manual, NOETIC Tech. ... government agencies such as the National Bureau of Standards. ... To be an international center of study and research for foreign students in numerical
Terminological aspects of data elements
Strehlow, R.A.; Kenworthey, W.H. Jr.; Schuldt, R.E.
1991-01-01
The creation and display of data comprise a process that involves a sequence of steps requiring both semantic and systems analysis. An essential early step in this process is the choice, definition, and naming of data element concepts, followed by the specification of other needed data element concept attributes. The attributes and values of a data element concept remain associated with it from its birth as a concept to the generic data element that serves as a template for final application. Terminology is, therefore, centrally important to the entire data creation process. Smooth mapping from natural language to a database is a critical aspect of database design, and consequently it requires terminology standardization from the outset of database work. In this paper the semantic aspects of data elements are analyzed and discussed. Seven kinds of data element concept information are considered, and those that require terminological development and standardization are identified. The four terminological components of a data element are the hierarchical type of a concept, functional dependencies, schemata showing conceptual structures, and definition statements. These constitute the conventional role of terminology in database design. 12 refs., 8 figs., 1 tab.
Computational Aspects of Equilibria
NASA Astrophysics Data System (ADS)
Yannakakis, Mihalis
Equilibria play a central role in game theory and economics. They characterize the possible outcomes in the interaction of rational, optimizing agents: In a game between rational players that want to optimize their payoffs, the only solutions in which no player has any incentive to switch his strategy are the Nash equilibria. Price equilibria in markets give the prices that allow the market to clear (demand matches supply) while the traders optimize their preferences (utilities). Fundamental theorems of Nash [34] and Arrow-Debreu [2] established the existence of the respective equilibria (under suitable conditions in the market case). The proofs in both cases use a fixed point theorem (relying ultimately on a compactness argument), and are non-constructive, i.e., do not yield an algorithm for constructing an equilibrium. We would clearly like to compute these predicted outcomes. This has led to extensive research since the 1960s in the game theory and mathematical economics literature, with the development of several methods for computation of equilibria, and more generally fixed points. More recently, equilibrium problems have been studied intensively in the computer science community, from the point of view of modern computation theory. While we still do not know definitively whether equilibria can be computed efficiently in general, these investigations have led to a better understanding of the computational complexity of equilibria, the various issues involved, and the relationship with other open problems in computation. In this talk we will discuss some of these aspects and our current understanding of the relevant problems. We outline below the main points and explain some of the related issues.
ERIC Educational Resources Information Center
Edwards, Judith B.; And Others
This textbook is intended to provide students with an awareness of the possible alternatives in the computer field and with the background information necessary for them to evaluate those alternatives intelligently. Problem solving and simulated work experiences are emphasized as students are familiarized with the functions and limitations of…
Finite element computational fluid mechanics
NASA Technical Reports Server (NTRS)
Baker, A. J.
1983-01-01
Finite element analysis as applied to the broad spectrum of computational fluid mechanics is analyzed. The finite element solution methodology is derived, developed, and applied directly to the differential equation systems governing classes of problems in fluid mechanics. The heat conduction equation is used to reveal the essence and elegance of finite element theory, including higher order accuracy and convergence. The algorithm is extended to the pervasive nonlinearity of the Navier-Stokes equations. A specific fluid mechanics problem class is analyzed with an even mix of theory and applications, including turbulence closure and the solution of turbulent flows.
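The heat conduction problem mentioned above is the classic vehicle for introducing finite element theory. The following is a minimal sketch, not the book's code: a 1D Poisson problem -u'' = 1 on (0,1) with homogeneous Dirichlet conditions, assembled from linear two-node elements. A well-known property of this model problem is that linear elements with exactly integrated loads reproduce the exact solution at the nodes.

```python
import numpy as np

def fem_poisson_1d(n):
    """Solve -u'' = 1 on (0,1), u(0) = u(1) = 0, with n linear elements."""
    h = 1.0 / n
    K = np.zeros((n + 1, n + 1))
    f = np.zeros(n + 1)
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # element stiffness matrix
    fe = np.array([0.5, 0.5]) * h                   # element load vector (f = 1)
    for e in range(n):                              # assembly loop over elements
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += ke
        f[idx] += fe
    # apply homogeneous Dirichlet BCs by restricting to interior nodes
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], f[1:-1])
    return u

u = fem_poisson_1d(8)
x = np.linspace(0.0, 1.0, 9)
exact = 0.5 * x * (1.0 - x)   # analytic solution u(x) = x(1-x)/2
```

For this problem the computed nodal values match the analytic solution to machine precision, which illustrates the "essence and elegance" the abstract refers to before nonlinearity enters.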
Element-topology-independent preconditioners for parallel finite element computations
NASA Technical Reports Server (NTRS)
Park, K. C.; Alexander, Scott
1992-01-01
A family of preconditioners for the solution of finite element equations are presented, which are element-topology independent and thus can be applicable to element order-free parallel computations. A key feature of the present preconditioners is the repeated use of element connectivity matrices and their left and right inverses. The properties and performance of the present preconditioners are demonstrated via beam and two-dimensional finite element matrices for implicit time integration computations.
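The key idea of working from element connectivity rather than an assembled global matrix can be illustrated with a much simpler device than the Park-Alexander construction: a Jacobi (diagonal) preconditioner accumulated element-by-element, in any element order, without ever forming the global matrix. The sketch below uses a hypothetical 1D chain of two-node elements purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_elems, n_nodes = 10, 11
# random symmetric element matrices for a 1D chain of 2-node elements
elem_mats = []
for e in range(n_elems):
    a = rng.random() + 1.0
    elem_mats.append(a * np.array([[1.0, -1.0], [-1.0, 1.0]]) + 0.1 * np.eye(2))

# element-by-element accumulation of the Jacobi (diagonal) preconditioner;
# the loop order over elements is irrelevant, so it parallelizes freely
diag = np.zeros(n_nodes)
for e, ke in enumerate(elem_mats):
    for a_loc, a_glob in enumerate((e, e + 1)):   # local-to-global node map
        diag[a_glob] += ke[a_loc, a_loc]

# cross-check against the diagonal of the explicitly assembled global matrix
K = np.zeros((n_nodes, n_nodes))
for e, ke in enumerate(elem_mats):
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
```

The accumulated `diag` equals `np.diag(K)` exactly; the preconditioners of the abstract are more sophisticated, but share this element-order-free character.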
Conceptual aspects of geometric quantum computation
NASA Astrophysics Data System (ADS)
Sjöqvist, Erik; Azimi Mousolou, Vahid; Canali, Carlo M.
2016-10-01
Geometric quantum computation is the idea that geometric phases can be used to implement quantum gates, i.e., the basic elements of the Boolean network that forms a quantum computer. Although originally thought to be limited to adiabatic evolution, controlled by slowly changing parameters, this form of quantum computation can as well be realized at high speed by using nonadiabatic schemes. Recent advances in quantum gate technology have allowed for experimental demonstrations of different types of geometric gates in adiabatic and nonadiabatic evolution. Here, we address some conceptual issues that arise in the realizations of geometric gates. We examine the appearance of dynamical phases in quantum evolution and point out that not all dynamical phases need to be compensated for in geometric quantum computation. We delineate the relation between Abelian and non-Abelian geometric gates and find an explicit physical example where the two types of gates coincide. We identify differences and similarities between adiabatic and nonadiabatic realizations of quantum computation based on non-Abelian geometric phases.
Algebraic aspects of the computably enumerable degrees.
Slaman, T A; Soare, R I
1995-01-01
A set A of nonnegative integers is computably enumerable (c.e.), also called recursively enumerable (r.e.), if there is a computable method to list its elements. The class of sets B which contain the same information as A under Turing computability forms the Turing degree of A, and the c.e. degrees ordered by relative computability form the structure R. A central question asks, for finite partial orders P contained in Q, whether every embedding of P into R can be extended to an embedding of Q into R. Many of the most significant theorems giving an algebraic insight into R have asserted either extension or nonextension of embeddings. We extend and unify these results and their proofs to produce complete and complementary criteria and techniques to analyze instances of extension and nonextension. We conclude that the full extension of embedding problem is decidable. PMID:11607508
Computer Security: The Human Element.
ERIC Educational Resources Information Center
Guynes, Carl S.; Vanacek, Michael T.
1981-01-01
The security and effectiveness of a computer system are dependent on the personnel involved. Improved personnel and organizational procedures can significantly reduce the potential for computer fraud. (Author/MLF)
Mathematical aspects of finite element methods for incompressible viscous flows
NASA Technical Reports Server (NTRS)
Gunzburger, M. D.
1986-01-01
Mathematical aspects of finite element methods are surveyed for incompressible viscous flows, concentrating on the steady primitive variable formulation. The discretization of a weak formulation of the Navier-Stokes equations are addressed, then the stability condition is considered, the satisfaction of which insures the stability of the approximation. Specific choices of finite element spaces for the velocity and pressure are then discussed. Finally, the connection between different weak formulations and a variety of boundary conditions is explored.
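The stability condition referred to above is commonly written as the inf-sup (Ladyzhenskaya-Babuška-Brezzi) condition. A standard statement, for velocity space V and pressure space Q, is:

```latex
\exists\, \beta > 0 :\qquad
\inf_{0 \neq q \in Q}\;
\sup_{0 \neq \mathbf{v} \in V}
\frac{\int_{\Omega} q \, \nabla\!\cdot\mathbf{v}\; d\Omega}
     {\|\mathbf{v}\|_{1}\;\|q\|_{0}}
\;\geq\; \beta
```

Velocity/pressure element pairs that satisfy this condition yield stable approximations; pairs that violate it (e.g. equal-order interpolation without stabilization) can produce spurious pressure modes.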
Nonlinear Finite Element Analysis of Shells with Large Aspect Ratio
NASA Technical Reports Server (NTRS)
Chang, T. Y.; Sawamiphakdi, K.
1984-01-01
A higher order degenerated shell element with nine nodes was selected for large deformation and post-buckling analysis of thick or thin shells. Elastic-plastic material properties are also included. The post-buckling analysis algorithm is given. Using a square plate, it was demonstrated that the nine-node element does not have a shear locking effect even if its aspect ratio is increased to the order of 10 to the 8th power. Two sample problems are given to illustrate the analysis capability of the shell element.
Impact of new computing systems on finite element computations
NASA Technical Reports Server (NTRS)
Noor, A. K.; Storaasli, O. O.; Fulton, R. E.
1983-01-01
Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (mini and microcomputers) are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario is presented for future hardware/software environment and finite element systems. A number of research areas which have high potential for improving the effectiveness of finite element analysis in the new environment are identified.
Computation of Asteroid Proper Elements: Recent Advances
NASA Astrophysics Data System (ADS)
Knežević, Z.
2017-06-01
The recent advances in computation of asteroid proper elements are briefly reviewed. Although not representing real breakthroughs in computation and stability assessment of proper elements, these advances can still be considered as important improvements offering solutions to some practical problems encountered in the past. The problem of getting unrealistic values of perihelion frequency for very low eccentricity orbits is solved by computing frequencies using the frequency-modified Fourier transform. The synthetic resonant proper elements adjusted to a given secular resonance helped to prove the existence of Astraea asteroid family. The preliminary assessment of stability with time of proper elements computed by means of the analytical theory provides a good indication of their poorer performance with respect to their synthetic counterparts, and advocates in favor of ceasing their regular maintenance; the final decision should, however, be taken on the basis of more comprehensive and reliable direct estimate of their individual and sample average deviations from constancy.
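The frequency-modified Fourier transform mentioned above refines the fundamental frequencies extracted from orbital time series. A minimal numpy sketch of the underlying idea, plain FFT peak-picking, is shown below; this is not the FMFT itself, whose refinement step is precisely what overcomes the coarse bin-width accuracy limit illustrated here.

```python
import numpy as np

n = 1024
f_true = 0.1234                       # frequency in cycles per sample
t = np.arange(n)
signal = np.sin(2.0 * np.pi * f_true * t)

spec = np.abs(np.fft.rfft(signal))    # magnitude spectrum
k = int(np.argmax(spec[1:])) + 1      # peak bin, skipping the DC bin
f_est = k / n                         # raw estimate; accuracy limited to ~1/(2n)
```

The raw estimate lands on the nearest FFT bin, so its error is bounded by half the bin width 1/n; frequency-refinement methods such as the FMFT interpolate around the peak to obtain far better accuracy, which matters for the slow perihelion frequencies of low-eccentricity orbits.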
Dedicated breast computed tomography: Basic aspects
Sarno, Antonio; Mettivier, Giovanni; Russo, Paolo
2015-06-15
X-ray mammography of the compressed breast is well recognized as the “gold standard” for early detection of breast cancer, but its performance is not ideal. One limitation of screening mammography is tissue superposition, particularly for dense breasts. Since 2001, several research groups in the USA and in the European Union have developed computed tomography (CT) systems with digital detector technology dedicated to x-ray imaging of the uncompressed breast (breast CT or BCT) for breast cancer screening and diagnosis. This CT technology—tracing back to initial studies in the 1970s—allows some of the limitations of mammography to be overcome, keeping the levels of radiation dose to the radiosensitive breast glandular tissue similar to that of two-view mammography for the same breast size and composition. This paper presents an evaluation of the research efforts carried out in the invention, development, and improvement of BCT with dedicated scanners with state-of-the-art technology, including initial steps toward commercialization, after more than a decade of R and D in the laboratory and/or in the clinic. The intended focus here is on the technological/engineering aspects of BCT and on outlining advantages and limitations as reported in the related literature. Prospects for future research in this field are discussed.
Computational and Practical Aspects of Drug Repositioning
Oprea, Tudor I.
2015-01-01
The concept of the hypothesis-driven or observational-based expansion of the therapeutic application of drugs is very seductive. This is due to a number of factors, such as lower cost of development, higher probability of success, near-term clinical potential, patient and societal benefit, and also the ability to apply the approach to rare, orphan, and underresearched diseases. Another highly attractive aspect is that the “barrier to entry” is low, at least in comparison to a full drug discovery operation. The availability of high-performance computing, and databases of various forms have also enhanced the ability to pose reasonable and testable hypotheses for drug repurposing, rescue, and repositioning. In this article we discuss several factors that are currently underdeveloped, or could benefit from clearer definition in articles presenting such work. We propose a classification scheme—drug repositioning evidence level (DREL)—for all drug repositioning projects, according to the level of scientific evidence. DREL ranges from zero, which refers to predictions that lack any experimental support, to four, which refers to drugs approved for the new indication. We also present a set of simple concepts that can allow rapid and effective filtering of hypotheses, leading to a focus on those that are most likely to lead to practical safe applications of an existing drug. Some promising repurposing leads for malaria (DREL-1) and amoebic dysentery (DREL-2) are discussed. PMID:26241209
Computational Aspects of Heat Transfer in Structures
NASA Technical Reports Server (NTRS)
Adelman, H. M. (Compiler)
1982-01-01
Techniques for the computation of heat transfer and associated phenomena in complex structures are examined with an emphasis on reentry flight vehicle structures. Analysis methods, computer programs, thermal analysis of large space structures and high speed vehicles, and the impact of computer systems are addressed.
Sociocultural Aspects of Computers in Education.
ERIC Educational Resources Information Center
Yeaman, Andrew R. J.
The data reported in this paper give depth to the picture of computers in society, in work, and in schools. Prices have dropped, but computer corporations sell to schools, as they do to any other customer, to increase their profits. Computerizing is a vehicle for social stratification. Computers are not easy to use and are hard to…
Parallel computation with the spectral element method
Ma, Hong
1995-12-01
Spectral element models for the shallow water equations and the Navier-Stokes equations have been successfully implemented on a data parallel supercomputer, the Connection Machine model CM-5. The nonstaggered grid formulations for both models are described, which are shown to be especially efficient in a data parallel computing environment.
Central control element expands computer capability
NASA Technical Reports Server (NTRS)
Easton, R. A.
1975-01-01
Redundant processing and multiprocessing modes can be obtained from one computer by using logic configuration. Configuration serves as central control element which can automatically alternate between high-capacity multiprocessing mode and high-reliability redundant mode using dynamic mode switching in real time.
Mathematical Aspects of Quantum Computing 2007
NASA Astrophysics Data System (ADS)
Nakahara, Mikio; Rahimi, Robabeh; SaiToh, Akira
2008-04-01
Quantum computing: an overview / M. Nakahara -- Braid group and topological quantum computing / T. Ootsuka, K. Sakuma -- An introduction to entanglement theory / D. J. H. Markham -- Holonomic quantum computing and its optimization / S. Tanimura -- Playing games in quantum mechanical settings: features of quantum games / S. K. Özdemir, J. Shimamura, N. Imoto -- Quantum error-correcting codes / M. Hagiwara -- Poster summaries. Controled teleportation of an arbitrary unknown two-qubit entangled state / V. Ebrahimi, R. Rahimi, M. Nakahara. Notes on the Dür-Cirac classification / Y. Ota, M. Yoshida, I. Ohba. Bang-bang control of entanglement in Spin-Bus-Boson model / R. Rahimi, A. SaiToh, M. Nakahara. Numerical computation of time-dependent multipartite nonclassical correlation / A. SaiToh ... [et al.]. On classical no-cloning theorem under Liouville dynamics and distances / T. Yamano, O. Iguchi.
Computing aspects of power for multiple regression.
Dunlap, William P; Xin, Xue; Myers, Leann
2004-11-01
Rules of thumb for power in multiple regression research abound. Most such rules dictate the necessary sample size, but they are based only upon the number of predictor variables, usually ignoring other critical factors necessary to compute power accurately. Other guides to power in multiple regression typically use approximate rather than precise equations for the underlying distribution; entail complex preparatory computations; require interpolation with tabular presentation formats; run only under software such as Mathematica or SAS that may not be immediately available to the user; or are sold to the user as parts of power computation packages. In contrast, the program we offer herein is immediately downloadable at no charge, runs under Windows, is interactive, self-explanatory, flexible to fit the user's own regression problems, and is as accurate as single precision computation ordinarily permits.
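The power computation the abstract describes rests on the noncentral F distribution of the overall regression test. The sketch below is not the authors' Windows program; it is a hedged Monte Carlo illustration using Cohen's noncentrality convention (lambda = f²(u + v + 1)), built from chi-square draws so it needs only numpy.

```python
import numpy as np

def regression_power_mc(u, n_obs, f2, alpha=0.05, n_sim=200_000, seed=1):
    """Monte Carlo power of the overall F test in multiple regression.

    u: number of predictors; n_obs: sample size; f2: Cohen's effect size.
    """
    v = n_obs - u - 1                        # denominator degrees of freedom
    lam = f2 * (u + v + 1)                   # noncentrality (Cohen's convention)
    rng = np.random.default_rng(seed)
    # critical value: empirical (1 - alpha) quantile of the central F(u, v)
    f_central = (rng.chisquare(u, n_sim) / u) / (rng.chisquare(v, n_sim) / v)
    f_crit = np.quantile(f_central, 1.0 - alpha)
    # distribution of the test statistic under the alternative: noncentral F
    f_nc = (rng.noncentral_chisquare(u, lam, n_sim) / u) \
           / (rng.chisquare(v, n_sim) / v)
    return float(np.mean(f_nc > f_crit))

# e.g. 3 predictors, N = 100, medium effect size f2 = 0.15
power = regression_power_mc(u=3, n_obs=100, f2=0.15)
```

Exact programs like the one described replace both Monte Carlo steps with precise central and noncentral F quantile computations, but the structure of the calculation is the same.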
Security Aspects of Computer Supported Collaborative Work
1993-09-01
its enabling software. CSCW has been described by some as computer-based tools which can be used to facilitate the exchange and sharing of...information by work groups. Others have described it as a computer-based shared environment that supports two or more users. [Bock92] CSCW is a rapidly...Groupware applications according to the type of work they are designed to accomplish. Based on this first criterion, they recognize four general classes
Aspects of computer vision in surgical endoscopy
NASA Astrophysics Data System (ADS)
Rodin, Vincent; Ayache, Alain; Berreni, N.
1993-09-01
This work is related to a project of medical robotics applied to surgical endoscopy, led in collaboration with Doctor Berreni from the Saint Roch nursing-home in Perpignan, France. Following Doctor Berreni's advice, two aspects of endoscopic color image processing have been brought out: (1) aiding diagnosis through automatic detection of diseased areas after a learning phase; (2) 3D reconstruction of the analyzed cavity by using a zoom.
Analytical and Computational Aspects of Collaborative Optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Bilevel problem formulations have received considerable attention as an approach to multidisciplinary optimization in engineering. We examine the analytical and computational properties of one such approach, collaborative optimization. The resulting system-level optimization problems suffer from inherent computational difficulties due to the bilevel nature of the method. Most notably, it is impossible to characterize and hence identify solutions of the system-level problems because the standard first-order conditions for solutions of constrained optimization problems do not hold. The analytical features of the system-level problem make it difficult to apply conventional nonlinear programming algorithms. Simple examples illustrate the analysis and the algorithmic consequences for optimization methods. We conclude with additional observations on the practical implications of the analytical and computational properties of collaborative optimization.
Subversion: The Neglected Aspect of Computer Security.
1980-06-01
is an accidental or unintentional opening that permits unauthorized control of the system or unauthorized access to information. It can occur in...program (which has access to the sensitive data) is sending a binary 1 if the service opens the given file for reading. This is because he would be... Key words: subversion, protection, trap doors, Trojan horses, penetration, computer security, access
Programmable computing with a single magnetoresistive element
NASA Astrophysics Data System (ADS)
Ney, A.; Pampuch, C.; Koch, R.; Ploog, K. H.
2003-10-01
The development of transistor-based integrated circuits for modern computing is a story of great success. However, the proved concept for enhancing computational power by continuous miniaturization is approaching its fundamental limits. Alternative approaches consider logic elements that are reconfigurable at run-time to overcome the rigid architecture of the present hardware systems. Implementation of parallel algorithms on such `chameleon' processors has the potential to yield a dramatic increase of computational speed, competitive with that of supercomputers. Owing to their functional flexibility, `chameleon' processors can be readily optimized with respect to any computer application. In conventional microprocessors, information must be transferred to a memory to prevent it from getting lost, because electrically processed information is volatile. Therefore the computational performance can be improved if the logic gate is additionally capable of storing the output. Here we describe a simple hardware concept for a programmable logic element that is based on a single magnetic random access memory (MRAM) cell. It combines the inherent advantage of a non-volatile output with flexible functionality which can be selected at run-time to operate as an AND, OR, NAND or NOR gate.
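The run-time reconfigurability described above can be mimicked in plain software with a truth-table lookup. The sketch below is a behavioral analogue only, a hypothetical gate class that says nothing about the MRAM physics: the selected function plays the role of the programmed cell state, and the stored output stands in for the non-volatile result.

```python
class ProgrammableGate:
    """Logic element whose function is selected at run-time,
    mimicking the reconfigurable AND/OR/NAND/NOR behaviour."""

    TABLES = {
        "AND": {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1},
        "OR":  {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1},
    }
    # the inverted gates are the complements of the first two
    TABLES["NAND"] = {k: 1 - v for k, v in TABLES["AND"].items()}
    TABLES["NOR"] = {k: 1 - v for k, v in TABLES["OR"].items()}

    def __init__(self, function="AND"):
        self.program(function)

    def program(self, function):
        """Reconfigure the gate at run-time (AND, OR, NAND or NOR)."""
        self.function = function
        self.state = None            # stands in for the non-volatile output

    def evaluate(self, a, b):
        self.state = self.TABLES[self.function][(a, b)]
        return self.state

g = ProgrammableGate("NAND")
out = g.evaluate(1, 1)   # NAND(1, 1) = 0
```

In the hardware concept, a single MRAM cell provides all four functions; here four truth tables do, which captures the logical behaviour but none of the physical advantages.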
Synchrotron Imaging Computations on the Grid without the Computing Element
NASA Astrophysics Data System (ADS)
Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.
2011-12-01
Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is that of the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of the traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. The instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occurred during the job submission and queuing phases. Moreover, the required deployment of a set of TANGO devices could not have been done on a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.
Plane Smoothers for Multiblock Grids: Computational Aspects
NASA Technical Reports Server (NTRS)
Llorente, Ignacio M.; Diskin, Boris; Melson, N. Duane
1999-01-01
Standard multigrid methods are not well suited for problems with anisotropic discrete operators, which can occur, for example, on grids that are stretched in order to resolve a boundary layer. One of the most efficient approaches to yield robust methods is the combination of standard coarsening with alternating-direction plane relaxation in three dimensions. However, this approach may be difficult to implement in codes with multiblock structured grids because there may be no natural definition of global lines or planes. This inherent obstacle limits the range of an implicit smoother to only the portion of the computational domain in the current block. This report studies in detail, both numerically and analytically, the behavior of blockwise plane smoothers in order to provide guidance to engineers who use block-structured grids. The results obtained so far show alternating-direction plane smoothers to be very robust, even on multiblock grids. In common computational fluid dynamics multiblock simulations, where the number of subdomains crossed by the line of a strong anisotropy is low (up to four), textbook multigrid convergence rates can be obtained with a small overlap of cells between neighboring blocks.
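The mechanism behind plane (and line) relaxation for anisotropic operators can be shown in a lower-dimensional analogue of the smoothers studied above. The sketch below, an assumption-laden illustration rather than the report's method, applies y-line Gauss-Seidel to the 2D anisotropic operator -eps·u_xx - u_yy: each line along the strongly coupled y-direction is solved implicitly (a tridiagonal system), with the weak eps-coupling to neighboring lines moved to the right-hand side.

```python
import numpy as np

def line_relax_sweep(u, f, eps):
    """One y-line Gauss-Seidel sweep for -eps*u_xx - u_yy = f
    (unit grid spacing; zero Dirichlet values live in the array padding)."""
    n = u.shape[0] - 2                       # interior points per direction
    # tridiagonal operator along the strongly coupled y-direction
    A = (2.0 * eps + 2.0) * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    for i in range(1, n + 1):                # sweep line by line in x
        rhs = f[i, 1:-1] + eps * (u[i - 1, 1:-1] + u[i + 1, 1:-1])
        u[i, 1:-1] = np.linalg.solve(A, rhs)
    return u

def residual(u, f, eps):
    r = f[1:-1, 1:-1] - ((2.0 * eps + 2.0) * u[1:-1, 1:-1]
                         - eps * (u[:-2, 1:-1] + u[2:, 1:-1])
                         - u[1:-1, :-2] - u[1:-1, 2:])
    return np.linalg.norm(r)

rng = np.random.default_rng(0)
n, eps = 32, 0.01                            # strong anisotropy in y
u = np.zeros((n + 2, n + 2))
f = np.zeros((n + 2, n + 2))
f[1:-1, 1:-1] = rng.standard_normal((n, n))

r0 = residual(u, f, eps)
for _ in range(3):
    u = line_relax_sweep(u, f, eps)
r3 = residual(u, f, eps)
```

Because each implicit solve follows the strong coupling, a few sweeps reduce the residual dramatically; point relaxation on the same problem would stall. The blockwise issue studied in the report arises when such a line (or plane in 3D) is cut by a block boundary and the implicit solve cannot span the whole domain.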
NASA Astrophysics Data System (ADS)
Sadique, Jasim; Yang, Xiang I. A.; Meneveau, Charles; Mittal, Rajat
2017-05-01
We examine the effect of varying roughness-element aspect ratio on the mean velocity distributions of turbulent flow over arrays of rectangular-prism-shaped elements. Large-eddy simulations (LES) in conjunction with a sharp-interface immersed boundary method are used to simulate spatially-growing turbulent boundary layers over these rough surfaces. Arrays of aligned and staggered rectangular roughness elements with aspect ratio >1 are considered. First the temporally- and spatially-averaged velocity profiles are used to illustrate the aspect-ratio effects. For aligned prisms, the roughness length (z_o) and the friction velocity (u_*) increase initially with an increase in the roughness-element aspect ratio, until the values reach a plateau at a particular aspect ratio. The exact value of this aspect ratio depends on the coverage density. Further increase in the aspect ratio changes neither z_o, u_* nor the bulk flow above the roughness elements. For the staggered cases, z_o and u_* continue to increase for the surface coverage density and the aspect ratios investigated. To model the flow response to variations in roughness aspect ratio, we turn to a previously developed phenomenological volumetric sheltering model (Yang et al., in J Fluid Mech 789:127-165, 2016), which was intended for low to moderate aspect-ratio roughness elements. Here, we extend this model to account for high aspect-ratio roughness elements. We find that for aligned cases, the model predicts strong mutual sheltering among the roughness elements, while the effect is much weaker for staggered cases. The model-predicted z_o and u_* agree well with the LES results. Results show that the model, which takes explicit account of the mutual sheltering effects, provides a rapid and reliable prediction method of roughness effects in turbulent boundary-layer flows over arrays of rectangular-prism roughness elements.
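Independently of the sheltering model itself, the surface parameters quoted above are tied to the logarithmic mean-velocity profile u(z) = (u_*/kappa) ln((z - d)/z_o). As a minimal sketch (the function name, the fixed kappa = 0.4, and the zero displacement height are assumptions for illustration, not the paper's fitting procedure), z_o and u_* can be recovered from velocity samples in the log region by linear regression:

```python
import numpy as np

KAPPA = 0.4  # von Karman constant (a conventional value, assumed here)

def fit_log_law(z, u, d=0.0):
    """Fit friction velocity u_* and roughness length z_o from samples of
    the logarithmic mean-velocity profile u(z) = (u_*/kappa)*ln((z - d)/z_o).
    Regressing u against ln(z - d) gives slope = u_*/kappa and
    intercept = -(u_*/kappa)*ln(z_o)."""
    x = np.log(z - d)
    slope, intercept = np.polyfit(x, u, 1)
    u_star = KAPPA * slope
    z0 = np.exp(-intercept / slope)
    return u_star, z0
```

In practice the regression would be restricted to heights well above the roughness elements, where the log law actually holds.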
Benchmarking: More Aspects of High Performance Computing
Ravindrudu, Rahul
2004-01-01
pattern for the left-looking factorization. The right-looking algorithm performs better for in-core data, but the left-looking algorithm performs better for out-of-core data due to the reduced I/O operations; hence the conclusion that algorithms perform better when designed as out-of-core from the start. The out-of-core and thread-based computations do not interact in this case, since I/O is not done by the threads. The performance of the thread-based computation does not depend on I/O, as its kernels are BLAS routines, which assume all data to be in memory. This is why the out-of-core results and the OpenMP thread results were presented separately and no attempt was made to combine them. In general, the modified HPL performs better with larger block sizes, due to less I/O for the out-of-core part and better cache utilization for the thread-based computation.
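The left-looking access pattern discussed above can be sketched for a blocked Cholesky factorization: each block column is first updated by all block columns to its left and only then factored, so a column is read and written in a single pass over the matrix, which is what reduces I/O in the out-of-core setting. This is a generic illustration under stated assumptions, not the modified HPL code; the function name and block size are hypothetical.

```python
import numpy as np

def left_looking_cholesky(A, nb):
    """Blocked left-looking Cholesky, A = L L^T with L lower triangular.
    For each block column j: apply the trailing updates from all block
    columns k < j, factor the diagonal block, then do one triangular
    solve for the sub-diagonal block."""
    n = A.shape[0]
    L = np.tril(A.copy())
    for j in range(0, n, nb):
        je = min(j + nb, n)
        # accumulate updates from every block column to the left
        for k in range(0, j, nb):
            ke = min(k + nb, n)
            L[j:, j:je] -= L[j:, k:ke] @ L[j:je, k:ke].T
        # factor the diagonal block (NumPy's cholesky reads the lower triangle)
        L[j:je, j:je] = np.linalg.cholesky(L[j:je, j:je])
        # triangular solve: L21 = A21 * L11^{-T}
        if je < n:
            L[je:, j:je] = np.linalg.solve(L[j:je, j:je], L[je:, j:je].T).T
    return np.tril(L)
```

A right-looking variant would instead scatter each factored column's update across the entire trailing submatrix immediately, touching many out-of-core blocks per step.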
Computational Aspects of N-Mixture Models
Dennis, Emily B; Morgan, Byron JT; Ridout, Martin S
2015-01-01
The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105–115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. PMID:25314629
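The role of the truncation bound K can be seen directly in the single-site N-mixture likelihood, which marginalizes the latent abundance N ~ Poisson(lambda) over a finite sum from max(counts) to K. The sketch below is illustrative only (the function name and parameter values are assumptions, not from the paper): a too-small K cuts off probability mass, while the sum stabilizes once K is large enough.

```python
import math

def site_likelihood(counts, lam, p, K):
    """Single-site N-mixture likelihood (Royle 2004 form): counts are
    replicate observations, each Binomial(N, p), with N ~ Poisson(lam)
    marginalized by a finite sum truncated at the bound K."""
    total = 0.0
    for N in range(max(counts), K + 1):
        # Poisson pmf evaluated in log space to avoid overflow for large N
        pois = math.exp(N * math.log(lam) - lam - math.lgamma(N + 1))
        binom = 1.0
        for n in counts:
            binom *= math.comb(N, n) * p**n * (1.0 - p)**(N - n)
        total += pois * binom
    return total
```

Maximizing this over (lam, p) with K fixed too low is exactly the failure mode the abstract warns about: the truncated likelihood shifts the abundance estimate downward.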
Some Aspects of Mathematics and Computer Science in Japan,
Japan. In fact, he learned about a rather wide variety of research in various aspects of applied mathematics and computer science. [...] Those interested in computer science and applications software will be most interested in the work at Fujitsu Limited and the work at the
The case for biological quantum computer elements
NASA Astrophysics Data System (ADS)
Baer, Wolfgang; Pizzi, Rita
2009-05-01
An extension to von Neumann's analysis of quantum theory suggests self-measurement is a fundamental process of Nature. By mapping the quantum computer to the brain architecture we will argue that the cognitive experience results from a measurement of a quantum memory maintained by biological entities. The insight provided by this mapping suggests quantum effects are not restricted to small atomic and nuclear phenomena but are an integral part of our own cognitive experience, and further that the architecture of a quantum computer system parallels that of a conscious brain. We will then review the suggestions for biological quantum elements in basic neural structures and address the decoherence objection by arguing for a self-measurement event model of Nature. We will argue that to a first-order approximation the universe is composed of isolated self-measurement events, which guarantees coherence. Controlled decoherence is treated as the input/output interactions between quantum elements of a quantum computer and the quantum memory maintained by biological entities cognizant of the quantum calculation results. Lastly we will present stem-cell-based neuron experiments conducted by one of us with the aim of demonstrating the occurrence of quantum effects in living neural networks, and discuss future research projects intended to reach this objective.
Finite element concepts in computational aerodynamics
NASA Technical Reports Server (NTRS)
Baker, A. J.
1978-01-01
Finite element theory was employed to establish an implicit numerical solution algorithm for the time-averaged unsteady Navier-Stokes equations. Both the multidimensional and a time-split form of the algorithm were considered, the latter of particular interest for problem specification on a regular mesh. A Newton matrix iteration procedure is outlined for solving the resultant nonlinear algebraic equation systems. Multidimensional discretization procedures are discussed with emphasis on automated generation of specific nonuniform solution grids and accounting for curved surfaces. The time-split algorithm was evaluated with regard to accuracy and convergence properties for hyperbolic equations on rectangular coordinates. An overall assessment of the viability of the finite element concept for computational aerodynamics is made.
Computational aspects in mechanical modeling of the articular cartilage tissue.
Mohammadi, Hadi; Mequanint, Kibret; Herzog, Walter
2013-04-01
This review focuses on the modeling of articular cartilage (at the tissue level), chondrocyte mechanobiology (at the cell level) and a combination of both in a multiscale computation scheme. The primary objective is to evaluate the advantages and disadvantages of conventional models implemented to study the mechanics of the articular cartilage tissue and chondrocytes. From monophasic material models as the simplest form to more complicated multiscale theories, these approaches have been frequently used to model articular cartilage and have contributed significantly to modeling joint mechanics, addressing and resolving numerous issues regarding cartilage mechanics and function. It should be noted that attentiveness is important when using different modeling approaches, as the choice of the model limits the applications available. In this review, we discuss the conventional models applicable to some of the mechanical aspects of articular cartilage such as lubrication, swelling pressure and chondrocyte mechanics and address some of the issues associated with the current modeling approaches. We then suggest future pathways for a more realistic modeling strategy as applied for the simulation of the mechanics of the cartilage tissue using multiscale and parallelized finite element method.
HYDRA, A finite element computational fluid dynamics code: User manual
Christon, M.A.
1995-06-01
HYDRA is a finite element code which has been developed specifically to attack the class of transient, incompressible, viscous, computational fluid dynamics problems which are predominant in the world which surrounds us. The goal for HYDRA has been to achieve high performance across a spectrum of supercomputer architectures without sacrificing any of the aspects of the finite element method which make it so flexible and permit application to a broad class of problems. As supercomputer algorithms evolve, the continuing development of HYDRA will strive to achieve optimal mappings of the most advanced flow solution algorithms onto supercomputer architectures. HYDRA has drawn upon the many years of finite element expertise constituted by DYNA3D and NIKE3D. Certain key architectural ideas from both DYNA3D and NIKE3D have been adopted and further improved to fit the advanced dynamic memory management and data structures implemented in HYDRA. The philosophy for HYDRA is to focus on mapping flow algorithms to computer architectures to achieve a high level of performance, rather than just performing a port.
Business aspects of cardiovascular computed tomography: tackling the challenges.
Bateman, Timothy M
2008-01-01
The purpose of this article is to provide a comprehensive understanding of the business issues surrounding provision of dedicated cardiovascular computed tomographic imaging. Some of the challenges include high up-front costs, current low utilization relative to scanner capability, and inadequate payments. Cardiovascular computed tomographic imaging is a valuable clinical modality that should be offered by cardiovascular centers-of-excellence. With careful consideration of the business aspects, moderate-to-large size cardiology programs should be able to implement an economically viable cardiovascular computed tomographic service.
Physical aspects of computing the flow of a viscous fluid
NASA Technical Reports Server (NTRS)
Mehta, U. B.
1984-01-01
One of the main themes in fluid dynamics at present and in the future is going to be computational fluid dynamics with the primary focus on the determination of drag, flow separation, vortex flows, and unsteady flows. A computation of the flow of a viscous fluid requires an understanding and consideration of the physical aspects of the flow. This is done by identifying the flow regimes and the scales of fluid motion, and the sources of vorticity. Discussions of flow regimes deal with conditions of incompressibility, transitional and turbulent flows, Navier-Stokes and non-Navier-Stokes regimes, shock waves, and strain fields. Discussions of the scales of fluid motion consider transitional and turbulent flows, thin- and slender-shear layers, triple- and four-deck regions, viscous-inviscid interactions, shock waves, strain rates, and temporal scales. In addition, the significance and generation of vorticity are discussed. These physical aspects mainly guide computations of the flow of a viscous fluid.
Power throttling of collections of computing elements
Bellofatto, Ralph E.; Coteus, Paul W.; Crumley, Paul G.; Gara, Alan G.; Giampapa, Mark E.; Gooding, Thomas M.; Haring, Rudolf A.; Megerian, Mark G.; Ohmacht, Martin; Reed, Don D.; Swetz, Richard A.; Takken, Todd
2011-08-16
An apparatus and method for controlling power usage in a computer include a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
Computational aspects of growth-induced instabilities through eigenvalue analysis
NASA Astrophysics Data System (ADS)
Javili, A.; Dortdivanlioglu, B.; Kuhl, E.; Linder, C.
2015-09-01
The objective of this contribution is to establish a computational framework to study growth-induced instabilities. The common approach towards growth-induced instabilities is to decompose the deformation multiplicatively into its growth and elastic part. Recently, this concept has been employed in computations of growing continua and has proven to be extremely useful to better understand the material behavior under growth. While finite element simulations seem to be capable of predicting the behavior of growing continua, they often cannot naturally capture the instabilities caused by growth. The accepted strategy to provoke growth-induced instabilities is therefore to perturb the solution of the problem, which indeed results in geometric instabilities in the form of wrinkles and folds. However, this strategy is intrinsically subjective as the user is prescribing the perturbations and the simulations are often highly perturbation-dependent. We propose a different strategy that is inherently suitable for this problem, namely eigenvalue analysis. The main advantages of eigenvalue analysis are that first, no arbitrary, artificial perturbations are needed and second, it is, in general, independent of the time step size. Therefore, the solution obtained by this methodology is not subjective and thus, is generic and reproducible. Equipped with eigenvalue analysis, we are able to compute precisely the critical growth to initiate instabilities. Furthermore, this strategy allows us to compare different finite elements for this family of problems. Our results demonstrate that linear elements perform strikingly poorly, as compared to quadratic elements.
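In the simplest setting, the eigenvalue strategy amounts to locating the growth level at which the tangent stiffness loses positive definiteness, with the associated eigenvector giving the instability mode and no artificial perturbation required. The toy sketch below is an assumption-laden illustration, not the paper's finite element formulation: the matrices K0 and Kg and the linear dependence on the growth parameter g are invented for the example. It finds the smallest positive g with det(K0 - g*Kg) = 0 via a symmetric generalized eigenproblem.

```python
import numpy as np

def critical_growth(K0, Kg):
    """Smallest positive g* with det(K0 - g*Kg) = 0, i.e. the generalized
    eigenproblem K0 v = g Kg v, reduced to a standard symmetric problem
    with the Cholesky factor of Kg (assumed symmetric positive definite)."""
    C = np.linalg.cholesky(Kg)
    Ci = np.linalg.inv(C)
    M = Ci @ K0 @ Ci.T          # symmetric; same eigenvalues g
    g = np.linalg.eigvalsh(M)
    g = g[g > 0]
    return g.min()
```

In an actual growth simulation K0 and Kg would come from the linearized residual at the current state, and the analysis would be repeated as growth proceeds until the critical value is bracketed.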
On Undecidability Aspects of Resilient Computations and Implications to Exascale
Rao, Nageswara S
2014-01-01
Future Exascale computing systems with a large number of processors, memory elements and interconnection links, are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in the presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that the membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.
Computational Aspects of Data Assimilation and the ESMF
NASA Technical Reports Server (NTRS)
daSilva, A.
2003-01-01
The scientific challenge of developing advanced data assimilation applications is a daunting task. Independently developed components may have incompatible interfaces or may be written in different computer languages. The high-performance computer (HPC) platforms required by numerically intensive Earth system applications are complex, varied, rapidly evolving and multi-part systems themselves. Since the market for high-end platforms is relatively small, there is little robust middleware available to buffer the modeler from the difficulties of HPC programming. To complicate matters further, the collaborations required to develop large Earth system applications often span initiatives, institutions and agencies, involve geoscience, software engineering, and computer science communities, and cross national borders. The Earth System Modeling Framework (ESMF) project is a concerted response to these challenges. Its goal is to increase software reuse, interoperability, ease of use and performance in Earth system models through the use of a common software framework, developed in an open manner by leaders in the modeling community. The ESMF addresses the technical and, to some extent, the cultural aspects of Earth system modeling, laying the groundwork for addressing the more difficult scientific aspects, such as the physical compatibility of components, in the future. In this talk we will discuss the general philosophy and architecture of the ESMF, focusing on those capabilities useful for developing advanced data assimilation applications.
Mathematical Aspects of Finite Element Methods for Incompressible Viscous Flows.
1986-09-01
[Abstract garbled in scanning. Recoverable fragments: the domain Q is subdivided into rectangular prisms; for the pressure space, each rectangular prism is subdivided into 24 tetrahedra; certain boundary conditions have no physical meaning, so the choice (4.5.1), or equivalently (4.10.1), can only be used in conjunction with others.]
Algebraic and Computational Aspects of Network Reliability and Problems.
1986-07-15
[Abstract not recoverable; the scanned report documentation page identifies: D. Shier, Clemson University, Clemson, SC 29634-1907; 15 Jul 86; report AFOSR-TR-86-2115; monitoring organization AFOSR/NM, Bldg 410, Bolling AFB, DC 20332-6448.]
Computers in the Library: The Human Element.
ERIC Educational Resources Information Center
Magrath, Lynn L.
1982-01-01
Discusses library staff and public reaction to the computerization of library operations at the Pikes Peak Library District in Colorado Springs. An outline of computer applications implemented since the inception of the program in 1975 is included. (EJS)
Concurrent multiresolution finite element: formulation and algorithmic aspects
NASA Astrophysics Data System (ADS)
Tang, Shan; Kopacz, Adrian M.; Chan O'Keeffe, Stephanie; Olson, Gregory B.; Liu, Wing Kam
2013-12-01
A multiresolution concurrent theory for heterogeneous materials is proposed with novel macro scale and micro scale constitutive laws that include the plastic yield function at different length scales. In contrast to conventional plasticity, the plastic flow at the micro zone depends on the plastic strain gradient. The consistency condition at the macro and micro zones can result in a set of algebraic equations. Using appropriate boundary conditions, the finite element discretization was derived from a variational principle with extra degrees of freedom for the micro zones. In collaboration with LSTC Inc., the degrees of freedom at the micro zone and their related history variables have been augmented in LS-DYNA. The 3D multiresolution theory has been implemented. Shear band propagation and the large-scale simulation of a shear-driven ductile fracture process were carried out. Our results show that the proposed multiresolution theory in combination with the parallel implementation into LS-DYNA can capture the effects of the microstructure on shear band propagation and allows for realistic modeling of the ductile fracture process.
Control aspects of quantum computing using pure and mixed states.
Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J
2012-10-13
Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. And beyond, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underlie the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: it is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems.
Higher-Order Finite Elements for Computing Thermal Radiation
NASA Technical Reports Server (NTRS)
Gould, Dana C.
2004-01-01
Two variants of the finite-element method have been developed for use in computational simulations of radiative transfers of heat among diffuse gray surfaces. Both variants involve the use of higher-order finite elements, across which temperatures and radiative quantities are assumed to vary according to certain approximations. In this and other applications, higher-order finite elements are used to increase (relative to classical finite elements, which are assumed to be isothermal) the accuracies of final numerical results without having to refine computational meshes excessively and thereby incur excessive computation times. One of the variants is termed the radiation sub-element (RSE) method, which, itself, is subject to a number of variations. This is the simplest and most straightforward approach to representation of spatially variable surface radiation. Any computer code that, heretofore, could model surface-to-surface radiation can incorporate the RSE method without major modifications. In the basic form of the RSE method, each finite element selected for use in computing radiative heat transfer is considered to be a parent element and is divided into sub-elements for the purpose of solving the surface-to-surface radiation-exchange problem. The sub-elements are then treated as classical finite elements; that is, they are assumed to be isothermal, and their view factors and absorbed heat fluxes are calculated accordingly. The heat fluxes absorbed by the sub-elements are then transferred back to the parent element to obtain a radiative heat flux that varies spatially across the parent element. Variants of the RSE method involve the use of polynomials to interpolate and/or extrapolate to approximate spatial variations of physical quantities. The other variant of the finite-element method is termed the integration method (IM). Unlike in the RSE methods, the parent finite elements are not subdivided into smaller elements, and neither isothermality nor other
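The aggregation step of the basic RSE method can be sketched in one dimension: interpolate the parent element's nodal temperatures to sub-element centroids, treat each sub-element as isothermal when evaluating the T^4 emission, and average the result back to the parent. The function name and the 1D linear parent element are illustrative assumptions, not the code described above. Because T^4 is convex, the sub-element average exceeds the value obtained by treating the whole parent as isothermal at its mean temperature.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_flux_rse(T_nodes, n_sub, emissivity=1.0):
    """Parent-averaged emitted radiative flux of a 1D linear element,
    computed by splitting it into n_sub isothermal sub-elements, each at
    the temperature interpolated at its centroid."""
    T1, T2 = T_nodes
    centroids = (np.arange(n_sub) + 0.5) / n_sub
    T_sub = T1 + (T2 - T1) * centroids       # linear interpolation
    q_sub = emissivity * SIGMA * T_sub**4    # isothermal sub-elements
    return q_sub.mean()                      # aggregate back to the parent
```

With n_sub = 1 this degenerates to the classical isothermal element; increasing n_sub converges (here, by the midpoint rule) to the exact integral of sigma*T(x)^4 over the element without refining the parent mesh.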
Adaptive Finite-Element Computation In Fracture Mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1995-01-01
Report discusses recent progress in use of solution-adaptive finite-element computational methods to solve two-dimensional problems in linear elastic fracture mechanics. Method also shown extensible to three-dimensional problems.
NASA Astrophysics Data System (ADS)
Toro, S.; Sánchez, P. J.; Podestá, J. M.; Blanco, P. J.; Huespe, A. E.; Feijóo, R. A.
2016-10-01
The paper describes the computational aspects and numerical implementation of a two-scale cohesive surface methodology developed for analyzing fracture in heterogeneous materials with complex micro-structures. This approach can be categorized as a semi-concurrent model using the representative volume element concept. A variational multi-scale formulation of the methodology has been previously presented by the authors. Subsequently, the formulation has been generalized and improved in two aspects: (i) cohesive surfaces have been introduced at both scales of analysis, they are modeled with a strong discontinuity kinematics (new equations describing the insertion of the macro-scale strains, into the micro-scale and the posterior homogenization procedure have been considered); (ii) the computational procedure and numerical implementation have been adapted for this formulation. The first point has been presented elsewhere, and it is summarized here. Instead, the main objective of this paper is to address a rather detailed presentation of the second point. Finite element techniques for modeling cohesive surfaces at both scales of analysis (FE^2 approach) are described: (i) finite elements with embedded strong discontinuities are used for the macro-scale simulation, and (ii) continuum-type finite elements with high aspect ratios, mimicking cohesive surfaces, are adopted for simulating the failure mechanisms at the micro-scale. The methodology is validated through numerical simulation of a quasi-brittle concrete fracture problem. The proposed multi-scale model is capable of unveiling the mechanisms that lead from the material degradation phenomenon at the meso-structural level to the activation and propagation of cohesive surfaces at the structural scale.
1988-05-01
[Abstract garbled in scanning. Recoverable fragments: Institute for Advanced Computer Studies and Department of Computer Science, University of Maryland, College Park, MD 20742; Department of the Navy; the report discusses some aspects of the performance of CG and PCG, examined for u in (0, 1) when solving the two model problems to a given accuracy.]
Optically intraconnected computer employing dynamically reconfigurable holographic optical element
NASA Technical Reports Server (NTRS)
Bergman, Larry A. (Inventor)
1992-01-01
An optically intraconnected computer and a reconfigurable holographic optical element employed therein. The basic computer comprises a memory for holding a sequence of instructions to be executed; logic for accessing the instructions in sequence; logic for determining, for each instruction, the function to be performed and its effective address; a plurality of individual elements on a common support substrate optimized to perform certain logical sequences employed in executing the instructions; and element selection logic, connected to the logic determining the function to be performed for each instruction, for determining the class of each function and for causing the instruction to be executed by those elements whose associated logical sequences effect the instruction execution in an optimum manner. In the optically intraconnected version, the element selection logic is adapted for transmitting and switching signals to the elements optically.
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using a solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on validating the application of the adaptive finite element methodology to fracture mechanics problems by computing demonstration problems and comparing the resulting stress intensity factors with analytical results.
A computer graphics program for general finite element analyses
NASA Technical Reports Server (NTRS)
Thornton, E. A.; Sawyer, L. M.
1978-01-01
Documentation for a computer graphics program for displays from general finite element analyses is presented. A general description of display options and detailed user instructions are given. Several plots made in structural, thermal and fluid finite element analyses are included to illustrate program options. Sample data files are given to illustrate use of the program.
The Impact of Instructional Elements in Computer-Based Instruction
ERIC Educational Resources Information Center
Martin, Florence; Klein, James D.; Sullivan, Howard
2007-01-01
This study investigated the effects of several elements of instruction (objectives, information, practice, examples and review) when they were combined in a systematic manner. College students enrolled in a computer literacy course used one of six different versions of a computer-based lesson delivered on the web to learn about input, processing,…
Computational aspects of steel fracturing pertinent to naval requirements.
Matic, Peter; Geltmacher, Andrew; Rath, Bhakta
2015-03-28
Modern high strength and ductile steels are a key element of US Navy ship structural technology. The development of these alloys spurred the development of modern structural integrity analysis methods over the past 70 years. Strength and ductility provided the designers and builders of navy surface ships and submarines with the opportunity to reduce ship structural weight, increase hull stiffness, increase damage resistance, improve construction practices and reduce maintenance costs. This paper reviews how analytical and computational tools, driving simulation methods and experimental techniques, were developed to provide ongoing insights into the material, damage and fracture characteristics of these alloys. The need to understand alloy fracture mechanics provided unique motivations to measure and model performance from structural to microstructural scales. This was done while accounting for the highly nonlinear behaviours of both materials and underlying fracture processes. Theoretical methods, data acquisition strategies, computational simulation and scientific imaging were applied to increasingly smaller scales and complex materials phenomena under deformation. Knowledge gained about fracture resistance was used to meet minimum fracture initiation, crack growth and crack arrest characteristics as part of overall structural integrity considerations.
NASA Technical Reports Server (NTRS)
Atluri, S. N.
1986-01-01
Computational finite-element and boundary-element methods are reviewed, and their application to the mechanics of solids is discussed. Stability conditions for general FEMs are considered in addition to the use of least-order, stable, invariant, or hybrid/mixed isoparametric elements as alternatives to the displacement-based isoparametric elements. The use of symbolic manipulation, adaptive mesh refinement, transient dynamic response, and boundary-element methods for linear elasticity and finite-strain problems of inelastic materials are also discussed.
Acceleration of matrix element computations for precision measurements
Brandt, Oleg; Gutierrez, Gaston; Wang, M. H.L.S.; Ye, Zhenyu
2014-11-25
The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross-section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the example of the top quark mass, we present two approaches to reduce the computation time of the technique by a factor of 90. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. We then utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing environment.
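The low-discrepancy (quasi-Monte Carlo) idea the authors exploit can be sketched in a few lines. The 2-D Halton sequence and toy integrand below are illustrative stand-ins, not the sequences or matrix elements used in the paper.

```python
import random

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def integrand(x, y):
    return x * x + y * y       # exact integral over [0,1]^2 is 2/3

N = 2048
# Quasi-Monte Carlo: a 2-D Halton sequence (bases 2 and 3) fills the
# unit square far more evenly than pseudorandom points.
qmc = sum(integrand(halton(i, 2), halton(i, 3)) for i in range(1, N + 1)) / N

# Plain Monte Carlo with pseudorandom points, for comparison.
random.seed(0)
mc = sum(integrand(random.random(), random.random()) for _ in range(N)) / N

err_qmc = abs(qmc - 2.0 / 3.0)
err_mc = abs(mc - 2.0 / 3.0)
```

For smooth integrands the quasi-Monte Carlo error decays roughly like (log N)^d / N rather than the 1/sqrt(N) of plain Monte Carlo, which is the source of the speedups reported in the abstract.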
An emulator for minimizing computer resources for finite element analysis
NASA Technical Reports Server (NTRS)
Melosh, R.; Utku, S.; Islam, M.; Salama, M.
1984-01-01
A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
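The calibration idea behind a predictor like SCOPE can be sketched as fitting a cost model to a few timed calibration runs and then extrapolating to a new problem size. The linear model and the numbers below are hypothetical, not SCOPE's actual model or data.

```python
def fit_line(xs, ys):
    """Least-squares fit of ys ~ a + b * xs; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical calibration runs: (degrees of freedom, measured CPU seconds).
dofs = [100, 200, 400, 800]
cpu = [0.6, 1.1, 2.1, 4.1]      # consistent with cost = 0.1 + 0.005 * dof

a, b = fit_line(dofs, cpu)
predicted = a + b * 1600         # predict a run twice as large as the biggest
```

The calibration data are gathered once per (hardware, analysis code) pair, after which predictions cost almost nothing, which is the economy the abstract quantifies at about 3 percent of the analysis cost.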
NASA Astrophysics Data System (ADS)
Takinoue, Masahiro; Kiga, Daisuke; Shohda, Koh-Ichiroh; Suyama, Akira
2008-10-01
Autonomous DNA computers have been attracting much attention because of their ability to integrate into living cells. Autonomous DNA computers can process information through DNA molecules and their molecular reactions. We have already proposed an idea of an autonomous molecular computer with high computational ability, which is now named Reverse-transcription-and-TRanscription-based Autonomous Computing System (RTRACS). In this study, we first report an experimental demonstration of a basic computation element of RTRACS and a mathematical modeling method for RTRACS. We focus on an AND gate, which produces an output RNA molecule only when two input RNA molecules exist, because it is one of the most basic computation elements in RTRACS. Experimental results demonstrated that the basic computation element worked as designed. In addition, its behaviors were analyzed using a mathematical model describing the molecular reactions of the RTRACS computation elements. A comparison between experiments and simulations confirmed the validity of the mathematical modeling method. This study will accelerate construction of various kinds of computation elements and computational circuits of RTRACS, and thus advance the research on autonomous DNA computers.
Enhanced pre-computed finite element models for surgical simulation.
Zhong, Hualiang; Wachowiak, Mark P; Peters, Terry M
2005-01-01
Soft tissue modeling is an important component in effective surgical simulation systems. A pre-computed finite element method based on elastic models is well suited to modeling soft tissue deformation. This paper addresses two principal issues: the flexibility of the pre-computed FE method and the approximation approach to non-linear elastic models. We describe a dynamic mechanism of the reconfiguration of the contacted nodes and the fixed boundary, without re-computing the inverse of the global stiffness matrix. The flexibility of the pre-computed models is described for both linear and non-linear elastic models.
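The payoff of pre-computation can be sketched as follows: invert (or factor) the global stiffness matrix K once offline, then answer each new load case with a cheap matrix-vector product. The 3x3 system below is a toy stand-in for an assembled FE stiffness matrix, not the paper's soft-tissue model.

```python
def invert(A):
    """Gauss-Jordan inverse of a small dense matrix (list of lists)."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [v / d for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]

K = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
Kinv = invert(K)                 # expensive step, done once offline

def displacements(load):
    """Cheap online step: u = K^{-1} f, one matrix-vector product."""
    return [sum(a * f for a, f in zip(row, load)) for row in Kinv]

load = [1.0, 0.0, 0.0]
u = displacements(load)
residual = max(abs(sum(kij * uj for kij, uj in zip(row, u)) - fi)
               for row, fi in zip(K, load))
```

The reconfiguration mechanism described in the abstract goes further, updating the response to changed boundary conditions without redoing the offline inversion; this sketch shows only the baseline precompute-once, apply-many pattern.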
Finite element dynamic analysis on CDC STAR-100 computer
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lambiotte, J. J., Jr.
1978-01-01
Computational algorithms are presented for the finite element dynamic analysis of structures on the CDC STAR-100 computer. The spatial behavior is described using higher-order finite elements. The temporal behavior is approximated by using either the central difference explicit scheme or Newmark's implicit scheme. In each case the analysis is broken up into a number of basic macro-operations. Discussion is focused on the organization of the computation and the mode of storage of different arrays to take advantage of the STAR pipeline capability. The potential of the proposed algorithms is discussed and CPU times are given for performing the different macro-operations for a shell modeled by higher order composite shallow shell elements having 80 degrees of freedom.
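A minimal sketch of the central difference explicit scheme mentioned above, applied to a single-degree-of-freedom oscillator m u'' + k u = 0; the STAR-100 vectorization and shell-element details are, of course, not reflected here.

```python
import math

# Central difference update: u_{n+1} = 2 u_n - u_{n-1} + dt^2 * a_n,
# with acceleration a_n = -(k/m) u_n.  No equation solving is needed,
# which is what makes the explicit scheme attractive on vector hardware.
m, k = 1.0, 1.0
n_steps = 628
T = 2.0 * math.pi                # one period of the exact solution cos(t)
dt = T / n_steps

u_prev = 1.0                                     # u at t = 0, with v(0) = 0
u = u_prev - 0.5 * dt * dt * (k / m) * u_prev    # Taylor start for u at t = dt
for _ in range(n_steps - 1):
    u_next = 2.0 * u - u_prev - dt * dt * (k / m) * u
    u_prev, u = u, u_next
# u now approximates u(T); the exact value is cos(2*pi) = 1
```

An implicit scheme such as Newmark's would instead require solving a linear system each step, trading that cost for unconditional stability.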
Computer simulation of functioning of elements of security systems
NASA Astrophysics Data System (ADS)
Godovykh, A. V.; Stepanov, B. P.; Sheveleva, A. A.
2017-01-01
The article is devoted to the development of an information complex for simulating the functioning of security system elements. The complex is described in terms of its main objectives, its design concept, and the interrelation of its main elements. The proposed computer simulation concept provides an opportunity to simulate the operation of the security system for training security staff under both normal and emergency conditions.
Introducing the Practical Aspects of Computational Chemistry to Undergraduate Chemistry Students
ERIC Educational Resources Information Center
Pearson, Jason K.
2007-01-01
Various efforts are being made to introduce the different physical aspects and uses of computational chemistry to undergraduate chemistry students. A new laboratory approach that demonstrates all such aspects via experiments has been devised for this purpose.
Development of non-linear finite element computer code
NASA Technical Reports Server (NTRS)
Becker, E. B.; Miller, T.
1985-01-01
Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.
Computer programs for the Boltzmann collision matrix elements
NASA Astrophysics Data System (ADS)
Das, P.
1989-09-01
When the distribution function in the kinetic theory of gases is expanded in a basis of orthogonal functions, the Boltzmann collision operators can be evaluated in terms of appropriate matrix elements. These matrix elements are usually given in terms of highly complex algebraic expressions. When Burnett functions, which consist of Sonine polynomials and spherical harmonics, are used as the basis, the irreducible tensor formalism provides expressions for the matrix elements that are algebraically simple, possess high symmetry, and are computationally more economical than in any other basis. The package reported here consists of routines to compute such matrix elements in a Burnett function basis for a mixture of hard sphere gases, as well as the loss integral of a Burnett mode and the Burnett functions themselves. The matrix elements involve the Clebsch-Gordan and Brody-Moshinsky coefficients, both of which are used here for unusually high values of their arguments. For the purpose of validation, both coefficients are computed using two different methods. Though written for hard sphere molecules, the package can, with only slight modification, be adapted to more general molecular models as well.
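As an illustration of the Clebsch-Gordan machinery such a package relies on, here is a hedged sketch of Racah's closed-form formula restricted to integer angular momenta, validated, in the spirit of the package's dual-method checks, against a known value and an orthonormality sum. It is not the package's own routine.

```python
from math import factorial, sqrt

def cg(j1, m1, j2, m2, J, M):
    """Clebsch-Gordan coefficient <j1 m1 j2 m2 | J M> via Racah's formula.
    Integer angular momenta with |m| <= j are assumed, for simplicity."""
    if M != m1 + m2 or not (abs(j1 - j2) <= J <= j1 + j2):
        return 0.0
    pref = sqrt((2 * J + 1)
                * factorial(J + j1 - j2) * factorial(J - j1 + j2)
                * factorial(j1 + j2 - J) / factorial(j1 + j2 + J + 1))
    pref *= sqrt(factorial(J + M) * factorial(J - M)
                 * factorial(j1 - m1) * factorial(j1 + m1)
                 * factorial(j2 - m2) * factorial(j2 + m2))
    s = 0.0
    for k in range(0, j1 + j2 + 1):
        denoms = [k, j1 + j2 - J - k, j1 - m1 - k,
                  j2 + m2 - k, J - j2 + m1 + k, J - j1 - m2 + k]
        if min(denoms) < 0:      # factorial of a negative integer: skip term
            continue
        term = 1.0
        for d in denoms:
            term /= factorial(d)
        s += (-1) ** k * term
    return pref * s

# Known value: <1 0 1 0 | 2 0> = sqrt(2/3).
value = cg(1, 0, 1, 0, 2, 0)
# Orthonormality check: sum over J of |<1 0 1 0 | J 0>|^2 = 1.
norm = sum(cg(1, 0, 1, 0, J, 0) ** 2 for J in range(3))
```

Cross-checking one method against an independent identity, as done here with the orthonormality sum, mirrors the validation strategy the abstract describes for high argument values.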
Rad-hard computer elements for space applications
NASA Technical Reports Server (NTRS)
Krishnan, G. S.; Longerot, Carl D.; Treece, R. Keith
1993-01-01
Space Hardened CMOS computer elements emulating a commercial microcontroller and microprocessor family have been designed, fabricated, qualified, and delivered for a variety of space programs including NASA's multiple launch International Solar-Terrestrial Physics (ISTP) program, Mars Observer, and government and commercial communication satellites. Design techniques and radiation performance of the 1.25 micron feature size products are described.
Some aspects of the computer simulation of conduction heat transfer and phase change processes
Solomon, A. D.
1982-04-01
Various aspects of phase-change processes in materials are discussed, including computer modeling, validation of results, and sensitivity. In addition, the possible incorporation of cognitive activities into computational heat transfer is examined.
NASA Technical Reports Server (NTRS)
Wang, R.; Demerdash, N. A.
1990-01-01
The effects of finite element grid geometries and associated ill-conditioning were studied in single-medium and multi-media (air-iron) three-dimensional magnetostatic field computation problems. The sensitivities of these 3D field computations to finite element grid geometries were investigated. It was found that in single-medium applications the unconstrained magnetic vector potential curl-curl formulation, in conjunction with first-order finite elements, produces global results that are almost totally insensitive to grid geometries. However, in multi-media (air-iron) applications, first-order finite element results are sensitive to grid geometries and the consequent elemental shape ill-conditioning. These sensitivities were almost totally eliminated by using second-order finite elements in the field computation algorithms. Practical examples are given in this paper to demonstrate these aspects.
Modeling of rolling element bearing mechanics. Computer program user's manual
NASA Technical Reports Server (NTRS)
Greenhill, Lyn M.; Merchant, David H.
1994-01-01
This report provides the user's manual for the Rolling Element Bearing Analysis System (REBANS) analysis code, which determines the quasistatic response to external loads or displacements of three types of high-speed rolling element bearings: angular contact ball bearings, duplex angular contact ball bearings, and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It comprises two main programs: the Preprocessor for Bearing Analysis (PREBAN), which creates the input files for the main analysis program, and Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. This report addresses input instructions for and features of the computer codes. A companion report addresses the theoretical basis for the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.
Carpenter, D.C.
1998-01-01
This bibliography provides a list of references on finite element and related methods analysis in reactor physics computations. These references have been published in scientific journals, conference proceedings, technical reports, theses/dissertations, and as chapters in reference books from 1971 to the present. Both English and non-English references are included. All references contained in the bibliography are sorted alphabetically by the first author's name, with a secondary sort by date of publication. The majority of the references relate to reactor physics analysis using the finite element method. Related topics include the boundary element method, the boundary integral method, and the global element method. All aspects of reactor physics computations relating to these methods are included: diffusion theory, deterministic radiation and neutron transport theory, kinetics, fusion research, particle tracking in finite element grids, and applications. For user convenience, many of the listed references have been categorized. The list of references is not all-inclusive. In general, nodal methods were purposely excluded, although a few references do demonstrate characteristics of finite element methodology using nodal methods (usually as a non-conforming element basis). This area could be expanded. The author is aware of several other references (conferences, theses/dissertations, etc.) that could not be independently tracked using available resources and thus were not included in this listing.
A computational study of nodal-based tetrahedral element behavior.
Gullerud, Arne S.
2010-09-01
This report explores the behavior of nodal-based tetrahedral elements on six sample problems, and compares their solution to that of a corresponding hexahedral mesh. The problems demonstrate that while certain aspects of the solution field for the nodal-based tetrahedrons provide good quality results, the pressure field tends to be of poor quality. Results appear to be strongly affected by the connectivity of the tetrahedral elements. Simulations that rely on the pressure field, such as those which use material models that are dependent on the pressure (e.g. equation-of-state models), can generate erroneous results. Remeshing can also be strongly affected by these issues. The nodal-based test elements as they currently stand need to be used with caution to ensure that their numerical deficiencies do not adversely affect critical values of interest.
Massively parallel finite element computation of three dimensional flow problems
NASA Astrophysics Data System (ADS)
Tezduyar, T.; Aliabadi, S.; Behr, M.; Johnson, A.; Mittal, S.
1992-12-01
The parallel finite element computation of three-dimensional compressible and incompressible flows, with emphasis on the space-time formulations, mesh moving schemes, and implementations on the Connection Machine systems CM-200 and CM-5, is presented. For computation of unsteady compressible and incompressible flows involving moving boundaries and interfaces, the previously developed Deformable-Spatial-Domain/Stabilized-Space-Time (DSD/SST) formulation is employed. In this approach, the stabilized finite element formulations of the governing equations are written over the space-time domain of the problem; therefore, the deformation of the spatial domain with respect to time is taken into account automatically. This approach gives the capability to solve a large class of problems involving free surfaces, moving interfaces, and fluid-structure and fluid-particle interactions. By using special mesh moving schemes, the frequency of remeshing is minimized to reduce the projection errors involved in remeshing and to increase the ease of parallelizing the computations. The implicit equation systems arising from the finite element discretizations are solved iteratively using the GMRES update technique with diagonal and nodal-block-diagonal preconditioners. These formulations have all been implemented on the CM-200 and CM-5, and have been applied to several large-scale problems. The three-dimensional problems in this report were all computed on the CM-200 and CM-5.
The spectral-element method, Beowulf computing, and global seismology.
Komatitsch, Dimitri; Ritsema, Jeroen; Tromp, Jeroen
2002-11-29
The propagation of seismic waves through Earth can now be modeled accurately with the recently developed spectral-element method. This method takes into account heterogeneity in Earth models, such as three-dimensional variations of seismic wave velocity, density, and crustal thickness. The method is implemented on relatively inexpensive clusters of personal computers, so-called Beowulf machines. This combination of hardware and software enables us to simulate broadband seismograms without intrinsic restrictions on the level of heterogeneity or the frequency content.
Photodeposited diffractive optical elements of computer generated masks
NASA Astrophysics Data System (ADS)
Mirchin, N.; Peled, A.; Baal-Zedaka, I.; Margolin, R.; Zagon, M.; Lapsker, I.; Verdyan, A.; Azoulay, J.
2005-07-01
Diffractive optical elements (DOE) were synthesized on plastic substrates using the photodeposition (PD) technique by depositing amorphous selenium (a-Se) films with argon lasers and UV light. The thin films were deposited typically onto polymethylmethacrylate (PMMA) substrates at room temperature. Scanned beam and contact mask modes were employed using computer-designed DOE lenses. Optical and electron micrographs characterize the surface details. The films were typically 200 nm thick.
A stochastic method for computing hadronic matrix elements
Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...
2014-01-24
In this study, we present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal to noise ratio suggesting that the stochastic method can be extended to large volumes providing an efficient approach to compute hadronic matrix elements and form factors.
Single Photon Holographic Qudit Elements for Linear Optical Quantum Computing
2011-05-01
...in optical volume holography, and designed and simulated practical single-photon, single-optical elements for qudit MUB-state quantum information... Independent of the representation we use, the MUB states will ordinarily be modulated in both amplitude and phase. Recently a practical method has been... quantum computing with qudits (d ≥ 3) has been an efficient and practical quantum state sorter for photons whose complex fields are modulated in both
Transient Finite Element Computations on a Variable Transputer System
NASA Technical Reports Server (NTRS)
Smolinski, Patrick J.; Lapczyk, Ireneusz
1993-01-01
A parallel program to analyze transient finite element problems was written and implemented on a system of transputer processors. The program uses the explicit time integration algorithm which eliminates the need for equation solving, making it more suitable for parallel computations. An interprocessor communication scheme was developed for arbitrary two dimensional grid processor configurations. Several 3-D problems were analyzed on a system with a small number of processors.
Implicit extrapolation methods for multilevel finite element computations
Jung, M.; Ruede, U.
1994-12-31
The finite element package FEMGP has been developed to solve elliptic and parabolic problems arising in the computation of magnetic and thermomechanical fields. FEMGP implements various methods for the construction of hierarchical finite element meshes, a variety of efficient multilevel solvers, including multigrid and preconditioned conjugate gradient iterations, as well as pre- and post-processing software. Within FEMGP, multigrid τ-extrapolation can be employed to improve the finite element solution iteratively to higher order. This algorithm is based on an implicit extrapolation, so that it differs from a regular multigrid algorithm only by a slightly modified computation of the residuals on the finest mesh. Another advantage of this technique is that, in contrast to explicit extrapolation methods, it does not rely on the existence of global error expansions, and therefore requires neither uniform meshes nor global regularity assumptions. In the paper the authors analyse the τ-extrapolation algorithm and present experimental results in the context of the FEMGP package. Furthermore, the τ-extrapolation results are compared to higher-order finite element solutions.
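For contrast with FEMGP's implicit approach, the classical explicit Richardson extrapolation it improves upon can be sketched in a few lines: combining two order-p approximations computed at step sizes h and h/2 as (2^p A(h/2) - A(h)) / (2^p - 1) cancels the leading error term. The derivative example below is purely illustrative and has nothing to do with FEMGP itself.

```python
import math

def d_central(f, x, h):
    """Second-order (p = 2) central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

f, x, h = math.sin, 1.0, 0.1
a_h = d_central(f, x, h)
a_h2 = d_central(f, x, h / 2.0)
a_extrap = (4.0 * a_h2 - a_h) / 3.0   # p = 2, so 2**p = 4

err_h = abs(a_h - math.cos(x))        # error of the raw approximation
err_extrap = abs(a_extrap - math.cos(x))   # roughly fourth-order error
```

Explicit extrapolation of this kind presumes a global error expansion in powers of h, which is exactly the assumption the implicit τ-extrapolation in the abstract avoids.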
Compute Element and Interface Box for the Hazard Detection System
NASA Technical Reports Server (NTRS)
Villalpando, Carlos Y.; Khanoyan, Garen; Stern, Ryan A.; Some, Raphael R.; Bailey, Erik S.; Carson, John M.; Vaughan, Geoffrey M.; Werner, Robert A.; Salomon, Phil M.; Martin, Keith E.; Spaulding, Matthew D.; Luna, Michael E.; Motaghedi, Shui H.; Trawny, Nikolas; Johnson, Andrew E.; Ivanov, Tonislav I.; Huertas, Andres; Whitaker, William D.; Goldberg, Steven B.
2013-01-01
The Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is building a sensor that enables a spacecraft to evaluate autonomously a potential landing area to generate a list of hazardous and safe landing sites. It will also provide navigation inputs relative to those safe sites. The Hazard Detection System Compute Element (HDS-CE) box combines a field-programmable gate array (FPGA) board for sensor integration and timing, with a multicore computer board for processing. The FPGA does system-level timing and data aggregation, and acts as a go-between, removing the real-time requirements from the processor and labeling events with a high resolution time. The processor manages the behavior of the system, controls the instruments connected to the HDS-CE, and services the "heavy lifting" computational requirements for analyzing the potential landing spots.
Boundary element analysis on vector and parallel computers
NASA Technical Reports Server (NTRS)
Kane, J. H.
1994-01-01
Boundary element analysis (BEA) can be characterized as a numerical technique that generally shifts the computational burden in the analysis toward numerical integration and the solution of nonsymmetric and either dense or blocked sparse systems of algebraic equations. Researchers have explored the concept that the fundamental characteristics of BEA can be exploited to generate effective implementations on vector and parallel computers. In this paper, the results of some of these investigations are discussed. The performance of overall algorithms for BEA on vector supercomputers, massively data parallel single instruction multiple data (SIMD), and relatively fine grained distributed memory multiple instruction multiple data (MIMD) computer systems is described. Some general trends and conclusions are discussed, along with indications of future developments that may prove fruitful in this regard.
Computational design aspects of a NASP nozzle/afterbody experiment
NASA Technical Reports Server (NTRS)
Ruffin, Stephen M.; Venkatapathy, Ethiraj; Keener, Earl R.; Nagaraj, N.
1989-01-01
This paper highlights the influence of computational methods on design of a wind tunnel experiment which generically models the nozzle/afterbody flow field of the proposed National Aerospace Plane. The rectangular slot nozzle plume flow field is computed using a three-dimensional, upwind, implicit Navier-Stokes solver. Freestream Mach numbers of 5.3, 7.3, and 10 are investigated. Two-dimensional parametric studies of various Mach numbers, pressure ratios, and ramp angles are used to help determine model loads and afterbody ramp angle and length. It was found that the center of pressure on the ramp occurs at nearly the same location for all ramp angles and test conditions computed. Also, to prevent air liquefaction, it is suggested that a helium-air mixture be used as the jet gas for the highest Mach number test case.
Theoretical aspects of light-element alloys under extremely high pressure
NASA Astrophysics Data System (ADS)
Feng, Ji
In this Dissertation, we present theoretical studies on the geometric and electronic structure of light-element alloys under high pressure. The first three Chapters are concerned with specific compounds, namely, SiH4, CaLi2 and BexLi1-x, and associated structural and electronic phenomena arising in our computational studies. In the fourth Chapter, we attempt to develop a unified view of the relationship between the electronic and geometric structure of light-element alloys under pressure, by focusing on the states near the Fermi level in these metals.
Technical Aspects of Computer-Assisted Instruction in Chinese.
ERIC Educational Resources Information Center
Cheng, Chin-Chaun; Sherwood, Bruce
1981-01-01
Computer assisted instruction in Chinese is considered in relation to the design and recognition of Chinese characters, speech synthesis of the standard Chinese language, and the identification of Chinese tone. The PLATO work has shifted its orientation from provision of supplementary courseware to implementation of independent lessons and…
Computational Aspects of Realization & Design Algorithms in Linear Systems Theory.
NASA Astrophysics Data System (ADS)
Tsui, Chia-Chi
Realization and design problems are two major problems in linear time-invariant systems control theory and have been solved theoretically. However, little is understood about their numerical properties. Because of the large scale of these problems and the finite precision of computer arithmetic, it is important to investigate the computational reliability and efficiency of the algorithms for them, and that is the purpose of this study. In this dissertation, a reliable algorithm for canonical-form realization via the Hankel matrix is developed. A comparative study of three general realization algorithms, covering both numerical reliability and efficiency, shows that the proposed algorithm (via the Hankel matrix) is the preferable one among the three. Design problems such as state feedback design for pole placement, state observer design, and low-order single- and multi-functional observer design have previously been solved using canonical-form system matrices. In this dissertation, a set of algorithms for solving these three design problems is developed and analysed. These algorithms are based on Hessenberg-form system matrices, which are numerically more reliable to compute than canonical-form system matrices.
Computational aspects of sensitivity calculations in linear transient structural analysis
NASA Technical Reports Server (NTRS)
Greene, W. H.; Haftka, R. T.
1991-01-01
The calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, and transient response problems is studied. Several existing sensitivity calculation methods and two new methods are compared for three example problems. Approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. This was found to result in poor convergence of stress sensitivities in several cases. Two semianalytical techniques are developed to overcome this poor convergence. Both new methods result in very good convergence of the stress sensitivities; the computational cost is much less than would result if the vibration modes were recalculated and then used in an overall finite difference method.
Some Aspects of the Symbolic Manipulation of Computer Descriptions
1974-07-01
Given a desired machine described in terms of some specification language, and given a space of machines defined by a class of Register Transfer... ISP, design it in terms of Register Transfer level modules. Formally they may seem identical, but the design spaces look quite different. 6) ... spacing is needed by the design automation system to produce a wiring list. Hence there is information contained in the computer description that is
A Finite Element Method for Computation of Structural Intensity by the Normal Mode Approach
NASA Astrophysics Data System (ADS)
Gavrić, L.; Pavić, G.
1993-06-01
A method for numerical computation of structural intensity in thin-walled structures is presented. The method is based on structural finite elements (beam, plate and shell type) enabling computation of real eigenvalues and eigenvectors of the undamped structure which then serve in evaluation of complex response. The distributed structural damping is taken into account by using the modal damping concept, while any localized damping is treated as an external loading, determined by use of impedance matching conditions and eigenproperties of the structure. Emphasis is given to aspects of accuracy of the results and efficiency of the numerical procedures used. High requirements on accuracy of the structural response (displacements and stresses) needed in intensity applications are satisfied by employing the "swept static solution", which effectively takes into account the influence of higher modes otherwise inaccessible to numerical computation. A comparison is made between the results obtained by using analytical methods and the proposed numerical procedure to demonstrate the validity of the method presented.
Computation of molecular electrostatics with boundary element methods.
Liang, J; Subramaniam, S
1997-01-01
In continuum approaches to molecular electrostatics, the boundary element method (BEM) can provide accurate solutions to the Poisson-Boltzmann equation. However, the numerical aspects of this method pose significant problems. We describe our approach, applying an alpha-shape-based method to generate a high-quality mesh, which represents the shape and topology of the molecule precisely. We also describe an analytical method for mapping points from the planar mesh to their exact locations on the surface of the molecule. We demonstrate that the derivative boundary integral formulation has numerical advantages over the nonderivative formulation: a well-conditioned influence matrix can be maintained without deterioration of the condition number as the number of mesh elements scales up. Singular integrand kernels are characteristic of the BEM, and their accurate integration is an important issue. We describe variable transformations that allow accurate numerical integration; such transformations are the only plausible integral evaluation method when using curve-shaped boundary elements. PMID:9336178
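The variable-transformation idea for singular integrands can be illustrated on a one-dimensional toy problem. The sketch below is an illustrative stand-in (a simple square-root substitution, not the paper's actual transformations for curved boundary elements); it shows how a change of variables removes a weak 1/sqrt(x) singularity so that standard Gauss-Legendre quadrature converges:

```python
import numpy as np

def gauss01(n):
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (x + 1.0), 0.5 * w

def naive(f, n):
    # direct quadrature of f(x)/sqrt(x): the singularity at x = 0 ruins accuracy
    t, w = gauss01(n)
    return np.sum(w * f(t) / np.sqrt(t))

def transformed(f, n):
    # substitute x = t**2, dx = 2*t*dt, so f(x)/sqrt(x) dx -> 2*f(t**2) dt:
    # the transformed integrand is smooth and quadrature converges fast
    t, w = gauss01(n)
    return np.sum(w * 2.0 * f(t**2))

f = lambda x: 1.0 + x            # exact value of the integral is 8/3
err_naive = abs(naive(f, 8) - 8.0 / 3.0)
err_trans = abs(transformed(f, 8) - 8.0 / 3.0)
```

With only 8 quadrature points, the transformed version is exact to rounding while the naive one carries an O(0.1) error, which is the practical motivation for such substitutions in BEM codes.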
Computational characterization of chromatin domain boundary-associated genomic elements.
Hong, Seungpyo; Kim, Dongsup
2017-08-23
Topologically associated domains (TADs) are 3D genomic structures with high internal interactions that play important roles in genome compaction and gene regulation. Their genomic locations and their association with CCCTC-binding factor (CTCF)-binding sites and transcription start sites (TSSs) were recently reported. However, the relationship between TADs and other genomic elements has not been systematically evaluated. This was addressed in the present study, with a focus on the enrichment of these genomic elements and their ability to predict the TAD boundary region. We found that consensus CTCF-binding sites were strongly associated with TAD boundaries as well as with the transcription factors (TFs) Zinc finger protein (ZNF)143 and Yin Yang (YY)1. TAD boundary-associated genomic elements include DNase I-hypersensitive sites, H3K36 trimethylation, TSSs, RNA polymerase II, and TFs such as Specificity protein 1, ZNF274 and SIX homeobox 5. Computational modeling with these genomic elements suggests that they have distinct roles in TAD boundary formation. We propose a structural model of TAD boundaries based on these findings that provides a basis for studying the mechanism of chromatin structure formation and gene regulation. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Able to deploy within one hour of notification, EPA's Airborne Spectral Photometric Environmental Collection Technology (ASPECT) is the nation’s only airborne real-time chemical and radiological detection, infrared and photographic imagery platform.
Computational aspects of sensitivity calculations in transient structural analysis
NASA Technical Reports Server (NTRS)
Greene, William H.; Haftka, Raphael T.
1988-01-01
A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.
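The first two sensitivity techniques above are the standard forward and central difference operators. As a toy illustration (a single spring's static response u = F/k, not the paper's five-span beam), the O(h) versus O(h^2) truncation-error contrast that drives the comparison can be sketched as:

```python
import numpy as np

def response(k, load=1.0):
    # static displacement of a single spring of stiffness k (toy response quantity)
    return load / k

def forward_diff(f, x, h):
    # forward difference: truncation error O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # central difference: truncation error O(h**2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

k0, h = 2.0, 1e-3
exact = -1.0 / k0**2   # analytic sensitivity d(load/k)/dk = -load/k**2
err_forward = abs(forward_diff(response, k0, h) - exact)
err_central = abs(central_diff(response, k0, h) - exact)
```

In practice the step size h also interacts with condition error (round-off), which is exactly the trade-off the paper studies for the critical point constraints.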
Massively parallel computation of RCS with finite elements
NASA Technical Reports Server (NTRS)
Parker, Jay
1993-01-01
One of the promising combinations of finite element approaches for scattering problems uses Whitney edge elements, spherical vector wave-absorbing boundary conditions, and bi-conjugate gradient solution for the frequency-domain near field. Each of these approaches may be criticized. Low-order elements require high mesh density, but also result in fast, reliable iterative convergence. Spherical wave-absorbing boundary conditions require additional space to be meshed beyond the most minimal near-space region, but result in fully sparse, symmetric matrices which keep storage and solution times low. Iterative solution is somewhat unpredictable and unfriendly to multiple right-hand sides, yet we find it to be uniformly fast on large problems to date, given the other two approaches. Implementation of these approaches on a distributed memory, message passing machine yields huge dividends, as full scalability to the largest machines appears assured and iterative solution times are well-behaved for large problems. We present times and solutions for computed RCS for a conducting cube and composite permeability/conducting sphere on the Intel iPSC/860 with up to 16 processors solving over 200,000 unknowns. We estimate problems of approximately 10 million unknowns, encompassing 1000 cubic wavelengths, may be attempted on a currently available 512 processor machine, but would be exceedingly tedious to prepare. The most severe bottlenecks are due to the slow rate of mesh generation on non-parallel machines and the large transfer time from such a machine to the parallel processor. One solution, in progress, is to create and then distribute a coarse mesh among the processors, followed by systematic refinement within each processor. Elimination of redundant node definitions at the mesh-partition surfaces, snap-to-surface post processing of the resulting mesh for good modelling of curved surfaces, and load-balancing redistribution of new elements after the refinement are auxiliary
Behavioral and computational aspects of language and its acquisition
NASA Astrophysics Data System (ADS)
Edelman, Shimon; Waterfall, Heidi
2007-12-01
One of the greatest challenges facing the cognitive sciences is to explain what it means to know a language, and how the knowledge of language is acquired. The dominant approach to this challenge within linguistics has been to seek an efficient characterization of the wealth of documented structural properties of language in terms of a compact generative grammar-ideally, the minimal necessary set of innate, universal, exception-less, highly abstract rules that jointly generate all and only the observed phenomena and are common to all human languages. We review developmental, behavioral, and computational evidence that seems to favor an alternative view of language, according to which linguistic structures are generated by a large, open set of constructions of varying degrees of abstraction and complexity, which embody both form and meaning and are acquired through socially situated experience in a given language community, by probabilistic learning algorithms that resemble those at work in other cognitive modalities.
Computational aspects of speed-dependent Voigt profiles
NASA Astrophysics Data System (ADS)
Schreier, Franz
2017-01-01
The increasing quality of atmospheric spectroscopy observations has indicated the limitations of the Voigt profile routinely used for line-by-line modeling, and physical processes beyond pressure and Doppler broadening have to be considered. The speed-dependent Voigt (SDV) profile can be readily computed as the difference of the real parts of two complex error functions (i.e. Voigt functions). Using a highly accurate code as a reference, various implementations of the SDV function based on Humlíček's rational approximations are examined for typical speed dependences of pressure broadening and the range of wavenumber distances and Lorentz to Doppler width ratios encountered in infrared applications. None of these implementations appears to be optimal, and a new algorithm based on a combination of the Humlíček (1982) and Weideman (1994) rational approximations is suggested.
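Since the SDV profile reduces to a difference of the real parts of two complex error functions, its basic building block can be sketched with SciPy's Faddeeva function `wofz`. This is a generic Voigt evaluation for illustration only, not the Humlíček/Weideman hybrid the paper proposes:

```python
import numpy as np
from scipy.special import wofz  # Faddeeva (complex error) function w(z)

def voigt(x, sigma, gamma):
    # Voigt profile = Re[w(z)] / (sigma * sqrt(2*pi)),
    # with z = (x + i*gamma) / (sigma * sqrt(2));
    # sigma: Gaussian (Doppler) width, gamma: Lorentz (pressure) width
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

# sanity check: for vanishing Lorentz width the Voigt profile is a Gaussian
x, sigma = 0.5, 1.0
gaussian = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
err = abs(voigt(x, sigma, 1e-10) - gaussian)
```

An SDV evaluation along the lines of the abstract would call such a routine twice with speed-shifted arguments and take the difference; the accuracy trade-offs the paper examines arise from replacing `wofz` with cheaper rational approximations.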
Computational and theoretical aspects of biomolecular structure and dynamics
Garcia, A.E.; Berendzen, J.; Catasti, P., Chen, X.
1996-09-01
This is the final report for a project that sought to evaluate and develop theoretical, and computational bases for designing, performing, and analyzing experimental studies in structural biology. Simulations of large biomolecular systems in solution, hydrophobic interactions, and quantum chemical calculations for large systems have been performed. We have developed a code that implements the Fast Multipole Algorithm (FMA) that scales linearly in the number of particles simulated in a large system. New methods have been developed for the analysis of multidimensional NMR data in order to obtain high resolution atomic structures. These methods have been applied to the study of DNA sequences in the human centromere, sequences linked to genetic diseases, and the dynamics and structure of myoglobin.
Theoretical and computational aspects of the self-induced motion of three-dimensional vortex sheets
NASA Astrophysics Data System (ADS)
Pozrikidis, C.
2000-12-01
Theoretical and computational aspects of the self-induced motion of closed and periodic three-dimensional vortex sheets situated at the interfaces between two inviscid fluids with generally different densities in the presence of surface tension are considered. In the mathematical formulation, the vortex sheet is described by a continuous distribution of marker points that move with the velocity of the fluid normal to the vortex sheet while executing an arbitrary tangential motion. Evolution equations for the vectorial jump in the velocity across the vortex sheet, the vectorial strength of the vortex sheet, and the scalar circulation field or strength of the effective dipole field following the marker points are derived. The computation of the self-induced motion of the vortex sheet requires the accurate evaluation of the strongly singular Biot-Savart integral whose existence requires that the normal vector varies in a continuous fashion over the vortex sheet. Two methods of computing the principal value of the Biot-Savart integral are implemented. The first method involves computing the vector potential and the principal value of the harmonic potential over the vortex sheet, and then differentiating them in tangential directions to produce the normal or tangential component of the velocity, in the spirit of generalized vortex methods developed by Baker (1983). The second method involves subtracting off the dominant singularity of the Biot-Savart kernel and then accounting for its contribution by use of vector identities. Evaluating the strongly singular Biot-Savart integral is thus reduced to computing a weakly singular integral involving the mean curvature of the vortex sheet, and this allows the routine discretization of the vortex sheet into curved elements whose normal vector is not necessarily continuous across the edges, and the computation of the self-induced velocity without kernel desingularization. Numerical simulations of the motion of a closed or periodic
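The singularity-subtraction strategy described above (subtract the dominant singular part, then add its integral back in closed form) can be sketched in one dimension. The kernel and closed-form term below are illustrative assumptions standing in for the Biot-Savart kernel and the curvature integral of the actual method:

```python
import numpy as np

def singular_integral_subtracted(f, x0, n=4000):
    # I = int_0^1 f(x)/sqrt(|x - x0|) dx, computed by singularity subtraction:
    #   int (f(x) - f(x0))/sqrt(|x - x0|) dx   (bounded integrand, midpoint rule)
    # + f(x0) * int dx/sqrt(|x - x0|)          (known in closed form)
    x = (np.arange(n) + 0.5) / n           # midpoints, none of which hits x0 = 0.5
    h = 1.0 / n
    regular = np.sum((f(x) - f(x0)) / np.sqrt(np.abs(x - x0))) * h
    analytic = f(x0) * 2.0 * (np.sqrt(x0) + np.sqrt(1.0 - x0))
    return regular + analytic

f = lambda x: x**2
approx = singular_integral_subtracted(f, 0.5)
exact = 3.0 * np.sqrt(2.0) / 5.0           # closed form for this f and x0
```

The regularized integrand is bounded, so an ordinary quadrature rule suffices; in the paper's setting this is what permits curved elements with discontinuous normals and no kernel desingularization.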
Human-Computer Interaction: A Review of the Research on Its Affective and Social Aspects.
ERIC Educational Resources Information Center
Deaudelin, Colette; Dussault, Marc; Brodeur, Monique
2003-01-01
Discusses a review of 34 qualitative and non-qualitative studies related to affective and social aspects of student-computer interactions. Highlights include the nature of the human-computer interaction (HCI); the interface, comparing graphic and text types; and the relation between variables linked to HCI, mainly trust, locus of control,…
ERIC Educational Resources Information Center
Wayman, Ian; Kyobe, Michael
2012-01-01
As students in computing disciplines are introduced to modern information technologies, numerous unethical practices also escalate. With the increase in stringent legislations on use of IT, users of technology could easily be held liable for violation of this legislation. There is however lack of understanding of social aspects of computing, and…
Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox
NASA Astrophysics Data System (ADS)
Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas
In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is on the performance evaluation of several aspects, with particular emphasis on parallel efficiency. The performance evaluation is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, stressing the particular interest of clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, to the polynomial degree, and to communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speed-up on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.
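A time prediction model of the kind mentioned can be sketched under a simple assumption: perfectly parallel compute plus a communication term that does not shrink with core count. All parameter values below are hypothetical, not the paper's measured parameterization:

```python
def predicted_time(p, t_compute=100.0, t_comm=0.1):
    # toy model: compute work divides across p cores,
    # communication/serial overhead t_comm does not
    return t_compute / p + t_comm

def speedup(p, **kw):
    # speed-up relative to a single core under the same model
    return predicted_time(1, **kw) / predicted_time(p, **kw)

for p in (1, 64, 1024, 8192):
    print(p, speedup(p))
```

Even this crude model reproduces the qualitative behavior a benchmark review looks for: near-linear speed-up at moderate core counts and saturation at a ceiling of (t_compute + t_comm) / t_comm as communication dominates.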
FLASH: A finite element computer code for variably saturated flow
Baca, R.G.; Magnuson, S.O.
1992-05-01
A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: a description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A.
SYMBMAT: Symbolic computation of quantum transition matrix elements
NASA Astrophysics Data System (ADS)
Ciappina, M. F.; Kirchner, T.
2012-08-01
We have developed a set of Mathematica notebooks to compute symbolically quantum transition matrices relevant for atomic ionization processes. The utilization of a symbolic language allows us to obtain analytical expressions for the transition matrix elements required in charged-particle and laser induced ionization of atoms. Additionally, by using a few simple commands, it is possible to export these symbolic expressions to standard programming languages, such as Fortran or C, for the subsequent computation of differential cross sections or other observables. One of the main drawbacks in the calculation of transition matrices is the tedious algebraic work required when initial states other than the simple hydrogenic 1s state need to be considered. Using these notebooks the work is dramatically reduced and it is possible to generate exact expressions for a large set of bound states. We present explicit examples of atomic collisions (in First Born Approximation and Distorted Wave Theory) and laser-matter interactions (within the Dipole and Strong Field Approximations and different gauges) using both hydrogenic wavefunctions and Slater-Type Orbitals with arbitrary nlm quantum numbers as initial states.
Catalogue identifier: AEMI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 71 628
No. of bytes in distributed program, including test data, etc.: 444 195
Distribution format: tar.gz
Programming language: Mathematica
Computer: Single machines using Linux or Windows (with cores of any clock speed, cache memory and word size)
Operating system: Any OS that supports Mathematica. The notebooks have been tested under Windows and Linux with versions 6.x, 7.x and 8.x
Classification: 2.6
Nature of problem
Spectral element simulations of unsteady flow over a 3D, low aspect-ratio semi-circular wing
NASA Astrophysics Data System (ADS)
Kandala, Sriharsha; Rempfer, Dietmar
2011-11-01
Numerical simulations of unsteady 3D flow over a low-aspect-ratio semi-circular wing are performed using a spectral element method. Specsolve, a parallel spectral element solver currently under development at IIT, is used for the simulation. The solution is represented locally as a tensor product of Legendre polynomials and C0-continuity is enforced between adjacent domains. A BDF/EXT scheme is used for temporal integration. The fractional step method is used for computing velocity and pressure. The code incorporates a FDM (fast diagonalization method) based overlapping Schwarz preconditioner for the consistent Poisson operator and an algebraic multigrid based coarse grid solver (P.F. Fischer et al, J. Phys.: Conf. Ser.(125) 012076, 2008) for pressure. The simulation replicates the conditions of the active flow control experiment (D. Williams et al, AIAA paper, 2010-4969). The Reynolds number based on chord length and free-stream velocity is about 68000. Different angles of attack, encompassing both pre-stall and post-stall regimes, are considered. These results are compared with data from the experiment and numerical simulations based on Lattice Boltzmann method (G. Brès et al, AIAA paper, 2010-4713).
Computational study of protein secondary structure elements: Ramachandran plots revisited.
Carrascoza, Francisco; Zaric, Snezana; Silaghi-Dumitrescu, Radu
2014-05-01
Potential energy surfaces (PES) were built for nineteen amino acids using density functional theory (PW91 and M062X/6-311**). Examining the energy as a function of the φ/ψ dihedral angles in the allowed regions of the Ramachandran plot, amino acid groups that share common patterns in their PES plots and global minima were identified. These patterns show partial correlation with their structural and pharmacophoric features. Differences between these computational results and the experimentally noted permitted conformations of each amino acid are rationalized on the basis of attractive intra- and inter-molecular non-covalent interactions. The present data focus on the intrinsic properties of an amino acid - an element which, to our knowledge, is typically ignored, as larger models are always used for the sake of similarity to real biological polypeptides.
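Computing the φ/ψ dihedral angles that parameterize a Ramachandran plot reduces to a standard four-point dihedral calculation. A minimal sketch (with made-up coordinates rather than real backbone atoms) might look like:

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    # signed dihedral angle (degrees) defined by four points, e.g. the
    # backbone atoms defining phi (C-N-CA-C) or psi (N-CA-C-N)
    b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # projections of b0 and b2 onto the plane perpendicular to b1
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

# four points forming a quarter twist about the z-axis
p = [np.array(a, float) for a in
     [(1, 0, -1), (0, 0, 0), (0, 0, 1), (0, 1, 2)]]
angle = dihedral(*p)   # -90.0 with this sign convention
```

Scanning such angles over a grid while relaxing the remaining degrees of freedom is what produces the PES slices the abstract describes.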
NASA Technical Reports Server (NTRS)
Fulton, Robert E.
1985-01-01
Research performed over the past 10 years in engineering data base management and parallel computing is discussed, and certain opportunities for research toward the next generation of structural analysis capability are proposed. Particular attention is given to data base management associated with the IPAD project and parallel processing associated with the Finite Element Machine project, both sponsored by NASA, and a near term strategy for a distributed structural analysis capability based on relational data base management software and parallel computers for a future structural analysis system.
Improved lattice computation of proton decay matrix elements
NASA Astrophysics Data System (ADS)
Aoki, Yasumichi; Izubuchi, Taku; Shintani, Eigo; Soni, Amarjit
2017-07-01
We present an improved result for the lattice computation of the proton decay matrix elements in Nf = 2+1 QCD. In this study, by adopting the error reduction technique of all-mode-averaging, a significant improvement of the statistical accuracy is achieved for the relevant form factor of proton (and also neutron) decay on the gauge ensemble of Nf = 2+1 domain-wall fermions with mπ = 0.34-0.69 GeV on a 2.7 fm³ lattice, as used in our previous work [1]. We improve the total accuracy of the matrix elements to 10-15% from 30-40% for p → πe+ or from 20-40% for p → Kν̄. The accuracy of the low-energy constants α and β in the leading-order baryon chiral perturbation theory (BChPT) of proton decay is also improved. The relevant form factors of p → π estimated through the "direct" lattice calculation from the three-point function appear to be 1.4 times smaller than those from the "indirect" method using BChPT with α and β. It turns out that the utilization of our result will provide a factor 2-3 larger proton partial lifetime than that obtained using BChPT. We also discuss the use of these parameters in a dark matter model.
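The all-mode-averaging idea (average a cheap approximate observable over many samples, and correct its bias with a few exact solves) can be sketched on synthetic data. The numbers below are illustrative, not lattice measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

def ama_estimate(exact, approx, n_exact):
    # AMA-style estimator: cheap approximation averaged over ALL samples,
    # plus a bias correction <exact - approx> measured on a small subset.
    # Unbiased as long as the subset is chosen independently of the values.
    correction = np.mean(exact[:n_exact] - approx[:n_exact])
    return np.mean(approx) + correction

n, truth = 10_000, 1.0
noise = rng.normal(0.0, 1.0, n)
exact = truth + noise                        # expensive, unbiased measurement
approx = exact + rng.normal(0.05, 0.02, n)   # cheap, correlated, slightly biased

est = ama_estimate(exact, approx, n_exact=200)
plain = np.mean(exact[:200])                 # same exact-solve budget, no AMA
```

Because the cheap observable tracks the exact one closely, the correction term has tiny variance, so the AMA estimate inherits the small statistical error of the large sample at the cost of only 200 exact solves.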
Matrix element method for high performance computing platforms
NASA Astrophysics Data System (ADS)
Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.
2015-12-01
A lot of effort has been devoted by the ATLAS and CMS teams to improving the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have tried to face up to the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the whole CMS datasets at a moderate cost. In this article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that, with the help of 6 high-end accelerators, we should be able to reprocess the whole LHC Run 1 within 10 days, and that we have a satisfying metric for the upcoming Run 2. Future work will consist of finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.
Cost Considerations in Nonlinear Finite-Element Computing
NASA Technical Reports Server (NTRS)
Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.
1985-01-01
Conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. The paper evaluates the computational efficiency of different computer architecture types in terms of relative cost and computing time.
Adaptation of a program for nonlinear finite element analysis to the CDC STAR 100 computer
NASA Technical Reports Server (NTRS)
Pifko, A. B.; Ogilvie, P. L.
1978-01-01
The conversion of a nonlinear finite element program to the CDC STAR 100 pipeline computer is discussed. The program, called DYCAST, was developed for the crash simulation of structures. Initial results with the STAR 100 computer indicated that significant gains in computation time are possible for operations on global arrays. However, for element-level computations that do not lend themselves easily to long vector processing, the STAR 100 was slower than comparable scalar computers. On this basis it is concluded that in order for pipeline computers to impact the economic feasibility of large nonlinear analyses, it is absolutely essential that algorithms be devised to improve the efficiency of element-level computations.
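The contrast drawn above between long-vector operations on global arrays and element-level scalar work can be mimicked on a modern machine with NumPy, where a vectorized expression plays the role of the pipeline's long vectors and an explicit Python loop plays the role of element-level computation (the timings are illustrative, not a model of the STAR 100):

```python
import time
import numpy as np

n = 200_000
a = np.arange(n, dtype=float)
b = np.ones(n)

# "global array" operation: one long-vector sweep over the whole array
t0 = time.perf_counter()
c_vec = a * b + 2.0
t_vec = time.perf_counter() - t0

# "element-level" computation: short scalar operations, one element at a time
t0 = time.perf_counter()
c_loop = np.empty(n)
for i in range(n):
    c_loop[i] = a[i] * b[i] + 2.0
t_loop = time.perf_counter() - t0
```

Both produce identical results, but the per-element version pays fixed overhead on every tiny operation, which is the same economics that made element-level work the bottleneck on pipeline machines.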
NASA Astrophysics Data System (ADS)
Hazer, D.; Schmidt, E.; Unterhinninghofen, R.; Richter, G. M.; Dillmann, R.
2009-08-01
Abnormal hemodynamics and biomechanics of blood flow and vessel wall conditions in the arteries may result in severe cardiovascular diseases, which arise from complex flow patterns and fatigue of the vessel wall and are a prevalent cause of high mortality each year. Computational Fluid Dynamics (CFD), Computational Structural Mechanics (CSM) and Fluid-Structure Interaction (FSI) have become efficient tools for modeling individual hemodynamics and biomechanics, as well as their interaction, in the human arteries. The computations allow non-invasive simulation of the patient-specific physical parameters of the blood flow and the vessel wall needed for an efficient minimally invasive treatment. The numerical simulations are based on the Finite Element Method (FEM) and require exact and individual mesh models to be provided. In the present study, we developed a numerical tool to automatically generate complex patient-specific Finite Element (FE) mesh models from image-based geometries of healthy and diseased vessels. The mesh generation is optimized based on the integration of mesh control functions for curvature, boundary layers and mesh distribution inside the computational domain. The needed mesh parameters are acquired from a computational grid analysis which ensures mesh-independent and stable simulations. Further, the generated models include appropriate FE sets necessary for the definition of individual boundary conditions, required to solve the system of nonlinear partial differential equations governed by the fluid and solid domains. Based on the results, we have performed computational blood flow and vessel wall simulations in patient-specific aortic models, providing physical insight into the pathological vessel parameters. Automatic mesh generation with individual awareness in terms of geometry and conditions is a prerequisite for performing fast, accurate and realistic FEM-based computations of hemodynamics and biomechanics in the
A finite element method for the computation of transonic flow past airfoils
NASA Technical Reports Server (NTRS)
Eberle, A.
1980-01-01
A finite element method for the computation of the transonic flow with shocks past airfoils is presented, using the artificial viscosity concept for the local supersonic regime. Generally, the classic element types do not meet the accuracy requirements of advanced numerical aerodynamics, so special attention must be paid to the choice of an appropriate element. A series of computed pressure distributions exhibits the usefulness of the method.
NASA Astrophysics Data System (ADS)
Goh, Wei Pin; Ghadiri, Mojtaba; Muller, Frans; Sinha, Kushal; Nere, Nandkishor; Ho, Raimundo; Bordawekar, Shailendra; Sheikh, Ahmad
2017-06-01
The size distribution, shape and aspect ratio of particles are the common factors that affect their packing in a particle bed. Agitated powder beds are commonly used in the process industry for various applications. The stresses arising as a result of shearing the bed could result in undesirable particle breakage with adverse impact on manufacturability. We report on our work on analysing the stress distribution within an agitated particle bed with several particle aspect ratios by the Discrete Element Method (DEM). Rounded cylinders with different aspect ratios are generated and incorporated into the DEM simulation. The void fraction of the packing of the static and agitated beds with different particle aspect ratios is analysed. Principal and deviatoric stresses are quantified in the regions of interest along the agitating impeller blade for different cases of particle aspect ratios. The relationship between the particle aspect ratio and the stress distribution of the bed over the regions of interest is then established and will be presented.
Human-computer interaction: psychological aspects of the human use of computing.
Olson, Gary M; Olson, Judith S
2003-01-01
Human-computer interaction (HCI) is a multidisciplinary field in which psychology and other social sciences unite with computer science and related technical fields with the goal of making computing systems that are both useful and usable. It is a blend of applied and basic research, both drawing from psychological research and contributing new ideas to it. New technologies continuously challenge HCI researchers with new options, as do the demands of new audiences and uses. A variety of usability methods have been developed that draw upon psychological principles. HCI research has expanded beyond its roots in the cognitive processes of individual users to include social and organizational processes involved in computer usage in real environments as well as the use of computers in collaboration. HCI researchers need to be mindful of the longer-term changes brought about by the use of computing in a variety of venues.
NASA Technical Reports Server (NTRS)
Mehrotra, S. C.; Lan, C. E.
1978-01-01
The necessary information for using a computer program to predict distributed and total aerodynamic characteristics for low aspect ratio wings with partial leading-edge separation is presented. The flow is assumed to be steady and inviscid. The wing boundary condition is formulated by the Quasi-Vortex-Lattice method. The leading-edge separated vortices are represented by discrete free vortex elements which are aligned with the local velocity vector at midpoints to satisfy the force-free condition. The wake behind the trailing edge is also force-free. The flow tangency boundary condition is satisfied on the wing, including the leading and trailing edges. The program is restricted to delta wings with zero thickness and no camber. It is written in the FORTRAN language and runs on a CDC 6600 computer.
Computation of Sound Propagation by Boundary Element Method
NASA Technical Reports Server (NTRS)
Guo, Yueping
2005-01-01
This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the size of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error in the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied. They are respectively the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which
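As a minimal illustration of the collocation idea described above — applied here to a generic one-dimensional Fredholm equation of the second kind, not the convective wave equation itself — the integral operator can be sampled at quadrature nodes and solved as a matrix equation:

```python
import numpy as np

def solve_collocation(kernel, f, n=200, a=0.0, b=1.0):
    """Solve u(x) - \int_a^b K(x,y) u(y) dy = f(x) by midpoint collocation."""
    h = (b - a) / n
    x = a + h * (np.arange(n) + 0.5)      # collocation points = quadrature nodes
    K = kernel(x[:, None], x[None, :])    # kernel matrix K(x_i, y_j)
    A = np.eye(n) - h * K                 # discretized integral operator
    return x, np.linalg.solve(A, f(x))

# Separable kernel K(x,y) = x*y with f(x) = x has the exact solution u(x) = 1.5*x
x, u = solve_collocation(lambda x, y: x * y, lambda x: x)
print(np.allclose(u, 1.5 * x, atol=1e-3))  # → True
```

The separable test kernel admits a closed-form solution, so the discretization error (second order in the midpoint spacing) can be checked directly.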
Computer simulation of diffractive optical element (DOE) performance
NASA Astrophysics Data System (ADS)
Delacour, Jacques F.; Venturino, Jean-Claude; Gouedard, Yannick
2004-02-01
Diffractive optical elements (DOE), also known as computer generated holograms (CGH), can transform an illuminating laser beam into a specified intensity distribution by diffraction rather than refraction or reflection. These are widely used in coherent light systems for beam shaping purposes, as an alignment tool or as a structured light generator. The diffractive surface is split into an array of sub-wavelength depth cells. Each of these locally transforms the beam by phase adaptation. Based on the work of the LSP lab of the University of Strasbourg, France, we have developed a unique industry-oriented tool. It allows the user first to optimize a DOE using the Gerchberg-Saxton algorithm. This part can manage sources ranging from a simple plane wave to high-order Gaussian modes or beams defined by complex maps, and objective patterns based on BMP images. A simulation part then permits testing the performance of the DOE with regard to system parameters, dealing with the beam, the DOE itself and the system organization. This will meet the needs of people concerned with tolerancing issues. Focusing on the industrial problem of beam shaping, we will present the whole DOE design sequence, starting from the generation of a DOE up to the study of the sensitivity of its performance according to the variation of several parameters of the system. For example, we will show the influence of the position of the beam on diffraction efficiency. This unique feature, formerly neglected in the industrial design process, will lead the way to production quality improvement.
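A minimal sketch of the Gerchberg-Saxton iteration in a scalar far-field (Fourier) model — an illustrative toy, not the commercial tool described above — alternates between the DOE plane and the target plane, imposing the known amplitude in each and retaining only the phase:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=50):
    """Design a phase-only DOE mapping source_amp to target_amp in the far field."""
    rng = np.random.default_rng(0)
    phase = rng.uniform(0.0, 2.0 * np.pi, source_amp.shape)  # random start
    for _ in range(n_iter):
        far = np.fft.fft2(source_amp * np.exp(1j * phase))   # propagate to far field
        far = target_amp * np.exp(1j * np.angle(far))        # impose target amplitude
        phase = np.angle(np.fft.ifft2(far))                  # back-propagate, keep phase
    return phase

# Toy example: shape a uniform 32x32 beam into a bright central square
source = np.ones((32, 32))
target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0
target *= np.sqrt(source.sum() / (target ** 2).sum())        # match total energy
mask = gerchberg_saxton(source, target)
```

After the iterations, most of the diffracted energy lands in the target region, which is the usual convergence behaviour of this error-reduction scheme.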
Four Studies on Aspects of Assessing Computational Performance. Technical Report No. 297.
ERIC Educational Resources Information Center
Romberg, Thomas A., Ed.
The four studies reported in this document deal with aspects of assessing students' performance on computational skills. The first study grew out of a need for an instrument to measure students' speed at recalling addition facts. This had seemed to be a very easy task, but it proved to be much more difficult than anticipated. The second study grew…
REM (relative element magnitude): program explanation and computer program listing
VanTrump, George; Alminas, Henry V.
1978-01-01
The REM (relative element magnitude) program is designed as an aid in the characterization of geochemical anomalies. The program ranks the magnitudes of anomalies of individual elements within a multielement geochemical anomaly.
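The abstract does not reproduce the program's formulas; as an illustrative sketch only, a relative element magnitude can be taken as the ratio of a measured concentration to a regional background median, with elements ranked by that ratio (the element names and values below are hypothetical):

```python
def relative_element_magnitude(sample, background_median):
    """Rank the elements of one multielement anomaly by relative magnitude,
    here assumed to be the ratio of measured concentration to background median."""
    rem = {el: sample[el] / background_median[el] for el in sample}
    return sorted(rem.items(), key=lambda kv: kv[1], reverse=True)

ranked = relative_element_magnitude(
    {"Cu": 120.0, "Zn": 95.0, "Pb": 400.0},   # measured concentrations (ppm)
    {"Cu": 30.0, "Zn": 80.0, "Pb": 20.0},     # regional background medians (ppm)
)
print(ranked[0][0])  # → Pb  (largest relative anomaly: 400/20 = 20x background)
```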
A shell element for computing 3D eddy currents -- Applications to transformers
Guerin, C.; Tanneau, G.; Meunier, G.; Labie, P.; Ngnegueu, T.; Sacotte, M.
1995-05-01
A skin depth-independent shell element to model thin conducting sheets is described in a finite element context. This element takes into account the field variation through depth due to skin effect. The finite element formulation is first described; then boundary conditions at the edge of conducting shells and the possibility of describing non-conducting line gaps and holes are discussed. Finally, a computation of an earthing transformer model with an aluminium shield modelled with shell elements is presented.
Grouped element-by-element iteration schemes for incompressible flow computations
NASA Astrophysics Data System (ADS)
Tezduyar, T. E.; Liou, J.
1989-05-01
Grouped element-by-element (GEBE) iteration schemes for incompressible flows are presented in the context of the vorticity-stream function formulation. The GEBE procedure is a variation of the EBE procedure and is based on the arrangement of the elements into groups with no inter-element coupling within each group. With the GEBE approach, vectorization and parallel implementation of the EBE method become clearer. The savings in storage and CPU time are demonstrated with two unsteady flow problems.
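The key step of GEBE is partitioning the elements so that no two elements in a group share a node; a greedy sketch, with element connectivity represented as tuples of node ids (an assumed minimal representation, not the paper's data structure):

```python
def group_elements(elements):
    """Greedily assign elements (tuples of node ids) to groups such that
    no two elements in the same group share a node (no inter-element coupling)."""
    groups, group_nodes = [], []
    for elem in elements:
        for g, nodes in enumerate(group_nodes):
            if not nodes & set(elem):           # no shared node with this group
                groups.append(g)
                nodes.update(elem)
                break
        else:                                   # every group conflicts: open a new one
            groups.append(len(group_nodes))
            group_nodes.append(set(elem))
    return groups

# Four 1D two-node elements in a chain; neighbours share a node, so two groups suffice
print(group_elements([(0, 1), (1, 2), (2, 3), (3, 4)]))  # → [0, 1, 0, 1]
```

Within each group the element-level updates are independent, which is what makes the scheme vectorize and parallelize naturally.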
Cellular computational platform and neurally inspired elements thereof
Okandan, Murat
2016-11-22
A cellular computational platform is disclosed that includes a multiplicity of functionally identical, repeating computational hardware units that are interconnected electrically and optically. Each computational hardware unit includes a reprogrammable local memory and has interconnections to other such units that have reconfigurable weights. Each computational hardware unit is configured to transmit signals into the network for broadcast in a protocol-less manner to other such units in the network, and to respond to protocol-less broadcast messages that it receives from the network. Each computational hardware unit is further configured to reprogram the local memory in response to incoming electrical and/or optical signals.
Some Computational Aspects of the Brain Computer Interfaces Based on Inner Music
Klonowski, Wlodzimierz; Duch, Wlodzisław; Perovic, Aleksandar; Jovanovic, Aleksandar
2009-01-01
We discuss the BCI based on inner tones and inner music. We had some success in the detection of inner tones, the imagined tones which are not sung aloud. Rather easily imagined and controlled, they offer a set of states usable for BCI, with high information capacity and high transfer rates. Imagination of sounds or musical tunes could provide a multicommand language for BCI, as if using natural language. Moreover, this approach could be used to test musical abilities. Such a BCI could be superior when there is a need for a broader command language. Some computational estimates and unresolved difficulties are presented. PMID:19503802
Floridi, Chiara; Radaelli, Alessandro; Abi-Jaoudeh, Nadine; Grass, Micheal; Lin, Ming De; Chiaradia, Melanie; Geschwind, Jean-Francois; Kobeiter, Hishman; Squillaci, Ettore; Maleux, Geert; Giovagnoni, Andrea; Brunese, Luca; Wood, Bradford; Carrafiello, Gianpaolo; Rotondo, Antonio
2014-01-01
C-arm cone-beam computed tomography (CBCT) is a new imaging technology integrated in modern angiographic systems. Due to its ability to obtain cross-sectional imaging and the possibility to use dedicated planning and navigation software, it provides an informed platform for interventional oncology procedures. In this paper, we highlight the technical aspects and clinical applications of CBCT imaging and navigation in the most common loco-regional oncological treatments. PMID:25012472
ElemeNT: a computational tool for detecting core promoter elements.
Sloutskin, Anna; Danino, Yehuda M; Orenstein, Yaron; Zehavi, Yonathan; Doniger, Tirza; Shamir, Ron; Juven-Gershon, Tamar
2015-01-01
Core promoter elements play a pivotal role in the transcriptional output, yet they are often detected manually within sequences of interest. Here, we present 2 contributions to the detection and curation of core promoter elements within given sequences. First, the Elements Navigation Tool (ElemeNT) is a user-friendly web-based, interactive tool for prediction and display of putative core promoter elements and their biologically-relevant combinations. Second, the CORE database summarizes ElemeNT-predicted core promoter elements near CAGE and RNA-seq-defined Drosophila melanogaster transcription start sites (TSSs). ElemeNT's predictions are based on biologically-functional core promoter elements, and can be used to infer core promoter compositions. ElemeNT does not assume prior knowledge of the actual TSS position, and can therefore assist in annotation of any given sequence. These resources, freely accessible at http://lifefaculty.biu.ac.il/gershon-tamar/index.php/resources, facilitate the identification of core promoter elements as active contributors to gene expression.
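ElemeNT's own predictions are based on biologically functional element models; as a simplified sketch of consensus-based element detection, an IUPAC degenerate consensus can be scanned with a regular expression (the sequence and consensus strings below are illustrative, not ElemeNT's actual models):

```python
import re

# IUPAC nucleotide codes needed for the consensus strings below (illustrative subset)
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "W": "[AT]", "R": "[AG]", "Y": "[CT]", "N": "[ACGT]"}

def scan(seq, consensus):
    """Return 0-based start positions where the IUPAC consensus matches seq,
    including overlapping matches (via a zero-width lookahead)."""
    pattern = "".join(IUPAC[c] for c in consensus)
    return [m.start() for m in re.finditer(f"(?={pattern})", seq.upper())]

# TATA-box consensus is commonly written TATAWAAR
print(scan("GGGTATATAAGCC", "TATAWAAR"))  # → [3]
```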
A Limited Survey of General Purpose Finite Element Computer Programs
NASA Technical Reports Server (NTRS)
Glaser, J. C.
1972-01-01
Ten representative programs are compared. A listing of additional programs encountered during the course of this effort is also included. Tables are presented to show the structural analysis, material, load, and modeling element capability for the ten selected programs.
01010000 01001100 01000001 01011001: Play Elements in Computer Programming
ERIC Educational Resources Information Center
Breslin, Samantha
2013-01-01
This article explores the role of play in human interaction with computers in the context of computer programming. The author considers many facets of programming including the literary practice of coding, the abstract design of programs, and more mundane activities such as testing, debugging, and hacking. She discusses how these incorporate the…
Nagel, Simon; Sinha, Devesh; Day, Diana; Reith, Wolfgang; Chapot, René; Papanagiotou, Panagiotis; Warburton, Elizabeth A; Guyler, Paul; Tysoe, Sharon; Fassbender, Klaus; Walter, Silke; Essig, Marco; Heidenrich, Jens; Konstas, Angelos A; Harrison, Michael; Papadakis, Michalis; Greveson, Eric; Joly, Olivier; Gerry, Stephen; Maguire, Holly; Roffe, Christine; Hampton-Till, James; Buchan, Alastair M; Grunwald, Iris Q
2017-08-01
Background The Alberta Stroke Program Early Computed Tomography Score (ASPECTS) is an established 10-point quantitative topographic computed tomography scan score to assess early ischemic changes. We performed a non-inferiority trial between the e-ASPECTS software and neuroradiologists in scoring ASPECTS on non-contrast enhanced computed tomography images of acute ischemic stroke patients. Methods In this multicenter study, e-ASPECTS and three independent neuroradiologists retrospectively and blindly assessed baseline non-contrast enhanced computed tomography images of 132 patients with acute anterior circulation ischemic stroke. Follow-up scans served as ground truth to determine the definite area of infarction. Sensitivity, specificity, and accuracy for region- and score-based analysis, receiver-operating characteristic curves, Bland-Altman plots and Matthews correlation coefficients relative to the ground truth were calculated and comparisons were made between neuroradiologists and different pre-specified e-ASPECTS operating points. The non-inferiority margin was set to 10% for both sensitivity and specificity on region-based analysis. Results In total 2640 (132 patients × 20 regions per patient) ASPECTS regions were scored. Mean time from onset to baseline computed tomography was 146 ± 124 min and median NIH Stroke Scale (NIHSS) was 11 (6-17, interquartile range). Median ASPECTS for ground truth on follow-up imaging was 8 (6.5-9, interquartile range). In the region-based analysis, two e-ASPECTS operating points (sensitivity, specificity, and accuracy of 44%, 93%, 87% and 44%, 91%, 85%) were statistically non-inferior to all three neuroradiologists (all p-values <0.003). Both Matthews correlation coefficients for e-ASPECTS were higher (0.36 and 0.34) than those of all neuroradiologists (0.32, 0.31, and 0.3). Conclusions e-ASPECTS was non-inferior to three neuroradiologists in scoring ASPECTS on non-contrast enhanced computed tomography images of
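The Matthews correlation coefficient reported above is computed from the 2x2 confusion matrix; a small sketch (the counts below are illustrative, chosen to match the reported 44% sensitivity and 93% specificity, not the study's raw data):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from a 2x2 confusion matrix."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# 44/(44+56) = 44% sensitivity, 93/(93+7) = 93% specificity (illustrative counts)
print(round(mcc(tp=44, tn=93, fp=7, fn=56), 2))  # → 0.42
```

Unlike accuracy, the MCC balances all four cells of the confusion matrix, which is why it is informative on region-based data where non-infarcted regions dominate.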
2006-02-01
V. Prabhakar and J. N. Reddy, "Orthogonality of Modal Bases," International Journal of Computational Methods for Fluids, in review. V. Prabhakar and J. N. Reddy, "Least-Squares Finite Element Model for Incompressible Navier-Stokes Equations," International Journal of Computational Methods for Fluids, in review.
Acceleration of low order finite element computation with GPUs (Invited)
NASA Astrophysics Data System (ADS)
Knepley, M. G.
2010-12-01
Considerable effort has been focused on GPU acceleration of high-order spectral element methods and discontinuous Galerkin finite element methods. However, these methods are not universally applicable, and much of the existing FEM software base employs low-order methods. In this talk, we present a formulation of FEM, using the PETSc framework from ANL, which is amenable to GPU acceleration even at very low order. In addition, using the FEniCS system for FEM, we show that the relevant kernels can be automatically generated and optimized using a symbolic manipulation system.
Finite Element Method for Thermal Analysis. [with computer program
NASA Technical Reports Server (NTRS)
Heuser, J.
1973-01-01
A two- and three-dimensional, finite-element thermal-analysis program which handles conduction with internal heat generation, convection, radiation, specified flux, and specified temperature boundary conditions is presented. Elements used in the program are the triangle and tetrahedron for two- and three-dimensional analysis, respectively. The theory used in the program is developed, and several sample problems demonstrating the capability and reliability of the program are presented. A guide to using the program, a description of the input cards, and a program listing are included.
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall parallel execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
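The dependence of speedup on the size of the sequential segments is Amdahl's law; a one-line sketch:

```python
def amdahl_speedup(serial_fraction, processors):
    """Upper bound on parallel speedup when serial_fraction of the execution
    time cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# A 20% sequential segment caps speedup far below the processor count
print(round(amdahl_speedup(0.2, 32), 2))  # → 4.44
```

Even with unlimited processors the speedup here can never exceed 1/0.2 = 5, which is why large sequential code segments dominated the measured performance.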
Finite Element Analysis in Concurrent Processing: Computational Issues
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett
2004-01-01
The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill-suited for concurrent processing because the null entries get filled in, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of direct energy minimization the displacement formulation experienced convergence and accuracy difficulties, while the force formulation showed promising potential.
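Minimizing the strain energy 0.5*u'Ku - f'u in the displacement formulation is equivalent to solving Ku = f for symmetric positive definite K, which the conjugate gradient method does using only matrix-vector products, avoiding the factorization fill-in noted above; a sketch (the 1D bar stiffness matrix is an assumed toy example, not one of the report's test problems):

```python
import numpy as np

def cg_minimize(K, f, tol=1e-10, max_iter=1000):
    """Minimize the strain-energy functional 0.5*u'Ku - f'u for SPD K,
    i.e. solve Ku = f without factorization (mat-vec products only)."""
    u = np.zeros_like(f)
    r = f - K @ u                 # residual = negative energy gradient
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Kp = K @ p
        alpha = rs / (p @ Kp)     # exact line search along p
        u += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new K-conjugate search direction
        rs = rs_new
    return u

# Tridiagonal SPD stiffness matrix of a 1D bar as a small test case
K = 2 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
f = np.ones(5)
u = cg_minimize(K, f)
print(np.allclose(K @ u, f))  # → True
```

Because each iteration needs only K times a vector, the element contributions can be applied concurrently without ever assembling or factoring a global matrix.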
Computational study of a conical wing having unit aspect ratio at supersonic speeds
NASA Technical Reports Server (NTRS)
Mcgrath, Brian E.
1993-01-01
A study was conducted to identify and assess a computational method as a preliminary analysis and design tool for advanced military aircraft designs. The method of choice for this study was the Euler Marching Technique for Accurate Computation (EMTAC). Computational and experimental results were compared for a thick unit aspect ratio delta wing at Mach 2.8 and 4.0. This geometry along with the associated flow physics is representative of advanced aircraft designs. The comparisons of the lift and drag coefficients show that the computations agree with experimentally obtained data at Mach 2.8 and 4.0. Further, comparison between EMTAC and experiment shows that the computations accurately predict the overall shape and levels of the surface pressure distributions at Mach 2.8 and 4.0. Qualitative assessment of the computed flow-field properties shows that EMTAC captures the basic flow-field characteristics representative of advanced aircraft designs. The study further suggests that EMTAC can be successfully used in the preliminary analysis and design of advanced military aircraft.
Some aspects of statistical distribution of trace element concentrations in biomedical samples
NASA Astrophysics Data System (ADS)
Majewska, U.; Braziewicz, J.; Banaś, D.; Kubala-Kukuś, A.; Góźdź, S.; Pajek, M.; Zadrożna, M.; Jaskóła, M.; Czyżewski, T.
1999-04-01
Concentrations of trace elements in biomedical samples were studied using the X-ray fluorescence (XRF), total reflection X-ray fluorescence (TRXRF) and particle-induced X-ray emission (PIXE) methods. The analytical methods used were compared in terms of their detection limits and applicability for studying trace elements in large populations of biomedical samples. As a result, the XRF and TRXRF methods were selected for the trace element concentration measurements in urine and full-term human placenta samples. The measured trace element concentration distributions were found to be strongly asymmetric and described by the logarithmic-normal distribution. Such a distribution is expected for a random sequential process, which realistically models the level of trace elements in the studied biomedical samples. The importance and consequences of this finding are discussed, especially in the context of comparing concentration measurements in different populations of biomedical samples.
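For log-normally distributed concentrations, the natural summary statistics are the geometric mean and geometric standard deviation, obtained from the mean and standard deviation of the log-transformed data; a sketch on synthetic data (the distribution parameters are illustrative, not the paper's measurements):

```python
import numpy as np

def lognormal_summary(concentrations):
    """Geometric mean and geometric standard deviation, the appropriate
    summary statistics for log-normally distributed concentrations."""
    logs = np.log(concentrations)
    return np.exp(logs.mean()), np.exp(logs.std(ddof=1))

rng = np.random.default_rng(1)
c = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)   # synthetic trace-element data
gm, gsd = lognormal_summary(c)
print(gm, gsd)  # gm near exp(0) = 1, gsd near exp(0.5) ≈ 1.65
```

The arithmetic mean of such data overstates the typical level because of the long right tail, which is why the geometric statistics are preferred.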
Computational design of low aspect ratio wing-winglets for transonic wind-tunnel testing
NASA Technical Reports Server (NTRS)
Kuhlman, John M.; Brown, Christopher K.
1989-01-01
A computational design has been performed for three different low aspect ratio wing planforms fitted with nonplanar winglets; one of the three planforms has been selected to be constructed as a wind tunnel model for testing in the NASA LaRC 7 x 10 High Speed Wind Tunnel. A design point of M = 0.8, CL approx = 0.3 was selected, for wings of aspect ratio equal to 2.2, and leading edge sweep angles of 45 and 50 deg. Winglet length is 15 percent of the wing semispan, with a cant angle of 15 deg, and a leading edge sweep of 50 deg. Winglet total area equals 2.25 percent of the wing reference area. This report summarizes the design process and the predicted transonic performance for each configuration.
Computational design of low aspect ratio wing-winglet configurations for transonic wind-tunnel tests
NASA Technical Reports Server (NTRS)
Kuhlman, John M.; Brown, Christopher K.
1988-01-01
A computational design has been performed for three different low aspect ratio wing planforms fitted with nonplanar winglets; one of the three planforms has been selected to be constructed as a wind tunnel model for testing in the NASA LaRC 7 x 10 High Speed Wind Tunnel. A design point of M = 0.8, CL approx = 0.3 was selected, for wings of aspect ratio equal to 2.2, and leading edge sweep angles of 45 and 50 deg. Winglet length is 15 percent of the wing semispan, with a cant angle of 15 deg, and a leading edge sweep of 50 deg. Winglet total area equals 2.25 percent of the wing reference area. This report summarizes the design process and the predicted transonic performance for each configuration.
Nutritional Aspects of Essential Trace Elements in Oral Health and Disease: An Extensive Review
Hussain, Mohsina
2016-01-01
The human body requires certain essential elements in small quantities, and their absence or excess may result in severe malfunctioning of the body and even death in extreme cases, because these essential trace elements directly influence the metabolic and physiologic processes of the organism. Rapid urbanization and economic development have resulted in drastic changes in diets, with a developing preference for refined diets and nutritionally deprived junk food. Poor nutrition can lead to reduced immunity, augmented vulnerability to various oral and systemic diseases, impaired physical and mental growth, and reduced efficiency. Diet and nutrition affect oral health in a variety of ways, influencing craniofacial development and the growth and maintenance of dental and oral soft tissues. Oral potentially malignant disorders (OPMD) are treated with antioxidants containing essential trace elements like selenium, but even increased dietary intake of trace elements like copper could lead to oral submucous fibrosis. The deficiency or excess of other trace elements like iodine, iron, zinc, and so forth has a profound effect on the body, and such conditions are often diagnosed through their early oral manifestations. This review appraises the biological functions of significant trace elements and their role in the preservation of oral health and the progression of various oral diseases. PMID:27433374
Nutritional Aspects of Essential Trace Elements in Oral Health and Disease: An Extensive Review.
Bhattacharya, Preeti Tomar; Misra, Satya Ranjan; Hussain, Mohsina
2016-01-01
The human body requires certain essential elements in small quantities, and their absence or excess may result in severe malfunctioning of the body and even death in extreme cases, because these essential trace elements directly influence the metabolic and physiologic processes of the organism. Rapid urbanization and economic development have resulted in drastic changes in diets, with a developing preference for refined diets and nutritionally deprived junk food. Poor nutrition can lead to reduced immunity, augmented vulnerability to various oral and systemic diseases, impaired physical and mental growth, and reduced efficiency. Diet and nutrition affect oral health in a variety of ways, influencing craniofacial development and the growth and maintenance of dental and oral soft tissues. Oral potentially malignant disorders (OPMD) are treated with antioxidants containing essential trace elements like selenium, but even increased dietary intake of trace elements like copper could lead to oral submucous fibrosis. The deficiency or excess of other trace elements like iodine, iron, zinc, and so forth has a profound effect on the body, and such conditions are often diagnosed through their early oral manifestations. This review appraises the biological functions of significant trace elements and their role in the preservation of oral health and the progression of various oral diseases.
Numerical algorithms for finite element computations on arrays of microprocessors
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1981-01-01
The development of a multicolored successive over-relaxation (SOR) program for the finite element machine is discussed. The multicolored SOR method uses a generalization of the classical Red/Black grid point ordering for the SOR method. These multicolored orderings have the advantage of allowing the SOR method to be implemented as a Jacobi method, which is ideal for arrays of processors, while still enjoying the greater rate of convergence of the SOR method. The program solves a general second-order self-adjoint elliptic problem on a square region with Dirichlet boundary conditions, discretized by quadratic elements on triangular regions. For this general problem and discretization, six colors are necessary for the multicolored method to operate efficiently. The specific problem that was solved using the six-color program was Poisson's equation; for Poisson's equation, three colors are necessary but six may be used. In general, the number of colors needed is a function of the differential equation, the region and boundary conditions, and the particular finite element used for the discretization.
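A minimal sketch of the Red/Black idea for the 2D Poisson equation on a uniform grid (two colors suffice for the 5-point finite-difference stencil; the multicolor case for quadratic triangular elements is analogous). All points of one color can be updated simultaneously, Jacobi-style, while keeping SOR's convergence rate:

```python
import numpy as np

def redblack_sor(f, h, omega=1.8, sweeps=500):
    """Red/Black SOR for -laplacian(u) = f on a uniform grid with u = 0 on the
    boundary; all points of one color update independently of each other."""
    u = np.zeros_like(f)
    n, m = f.shape
    for _ in range(sweeps):
        for color in (0, 1):                       # red sweep, then black sweep
            for i in range(1, n - 1):
                for j in range(1, m - 1):
                    if (i + j) % 2 == color:
                        gs = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                     u[i, j - 1] + u[i, j + 1] + h * h * f[i, j])
                        u[i, j] += omega * (gs - u[i, j])   # over-relaxed update
    return u

# Unit square, f = 1, 17x17 grid (h = 1/16)
f = np.ones((17, 17))
u = redblack_sor(f, h=1.0 / 16)
```

Within a color sweep no updated value is read by another point of the same color, so the inner loops could be distributed across a processor array without synchronization inside the sweep.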
Validation of the NESSUS probabilistic finite element analysis computer program
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Burnside, O. H.
1988-01-01
A computer program, NESSUS, is being developed as part of a NASA-sponsored project to develop probabilistic structural analysis methods for propulsion system components. This paper describes the process of validating the NESSUS code, as it has been developed to date, and presents numerical results comparing NESSUS and exact solutions for a set of selected problems.
Technical and clinical aspects of spectrometric analysis of trace elements in clinical samples.
Chan, S; Gerson, B; Reitz, R E; Sadjadi, S A
1998-12-01
The capabilities of ICP-MS far exceed the slow, single-element analysis of GFAAS for determination of multiple trace elements. Additionally, its sensitivity is superior to that of DCP, ICP, and FAAS. The analytic procedure for ICP-MS is relatively straightforward and bypasses the need for digestion in many cases. It enables the physician to identify the target trace element(s) in intoxication cases, nutritional deficiency, or disease, thus eliminating the treatment delays experienced with sequential testing methods. This technology has its limitations as well. The ICP-MS cannot be used in the positive ion mode to analyze with sufficient sensitivity highly electronegative elements such as fluorine, because F+ is unstable and forms only by very high ionization energy. The ICP mass spectrometers used in most commercial laboratories utilize the quadrupole mass selector, which is limited by low resolution and, thus, by the various interferences previously discussed. For example, when an argon plasma is used, selenium (m/e 80) and chromium (m/e 52) in serum, plasma, and blood specimens are subject to polyatomic and molecular ion interferences. Low-resolution ICP mass spectrometers can therefore be used to analyze many trace elements, but they are not universal analyzers. High-resolution ICP-MS can resolve these interferences, but with greater expense. With the advent of more research and development of new techniques, some of these difficulties may be overcome, making this technique even more versatile. Contamination during sample collection and analysis causes falsely elevated results. Attention and care must be given to avoid contamination. Proper collection devices containing negligible amounts of trace elements should be used. Labware, preferably plastic and not glass, must be decontaminated prior to use by acid-washing and rinsed with de-ionized water. A complete description of sample collection and contamination has been written by Aitio and
Cells on biomaterials--some aspects of elemental analysis by means of electron probes.
Tylko, G
2016-02-01
Electron probe X-ray microanalysis enables concomitant observation of specimens and analysis of their elemental composition. The method is attractive for engineers developing tissue-compatible biomaterials. Changes in the elemental composition of either cells or the biomaterial can be determined according to well-established preparation and quantification procedures. However, qualitative and quantitative elemental analysis becomes more complicated when cells or thin tissue sections are deposited on biomaterials. X-ray spectra generated at the cell/tissue-biomaterial interface are modelled using a Monte Carlo simulation of a cell deposited on borosilicate glass. Enhanced electron backscattering from the borosilicate glass was noted until the thickness of the biological layer deposited on the substrate reached 1.25 μm. This resulted in a significant increase in X-ray intensities for the elements present in the cellular part; in this case, the mean atomic number of the biomaterial determines the strength of the effect. When elements are present in the cells only, a positive linear relationship appears between X-ray intensities and cell thickness. Then, the spatial dimensions of X-ray emission for the particular elements lie exclusively within the biological part and the X-ray intensities become constant. When elements are present in both the cell and the biomaterial, X-ray intensities are registered for the biological part and the substrate simultaneously, leading to a negative linear relationship of X-ray intensity as a function of cell thickness. In the case of the analysis of an element typical of the biomaterial, a strong decrease in X-ray emission is observed as a function of cell thickness, as an effect of X-ray absorption and of the excitation range being limited to the biological part rather than the substrate. Correction procedures for calculations of element concentrations in thin films and coatings deposited on substrates are well established in
NASA Technical Reports Server (NTRS)
Holst, T. L.; Thomas, S. D.; Kaynak, U.; Gundy, K. L.; Flores, J.; Chaderjian, N. M.
1985-01-01
Transonic flow fields about wing geometries are computed using an Euler/Navier-Stokes approach in which the flow field is divided into several zones. The flow field immediately adjacent to the wing surface is resolved with fine grid zones and solved using a Navier-Stokes algorithm. Flow field regions removed from the wing are resolved with less finely clustered grid zones and are solved with an Euler algorithm. Computational issues associated with this zonal approach, including database management aspects, are discussed. Solutions are obtained that are in good agreement with experiment, including cases with significant wind tunnel wall effects. Cases with significant shock-induced separation on the upper wing surface are also presented.
XU, J.; COSTANTINO, C.; HOFMAYER, C.
2006-06-26
The paper discusses computations of seismically induced soil pressures using finite element models for deeply embedded and/or buried stiff structures, such as those appearing in the conceptual designs of structures for advanced reactors.
Element-sensitive computed tomography with fast neutrons.
Overley, J C
1983-02-01
Neutrons and X-rays are mathematically equivalent as probes in computed tomography. However, structure in the energy dependence of neutron total cross sections and the feasibility of using time-of-flight techniques for energy sensitivity in neutron detection suggest that spatial distributions of specific substances can be determined from neutron transmission data. We demonstrate that this is possible by tomographically reconstructing from such data a phantom containing several different structural materials.
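The energy-dependence idea in the abstract can be sketched numerically: if total cross sections differ between energies, each ray's log-transmission measured at as many energies as there are substances gives a linear system for the areal densities. A minimal two-energy, two-material sketch; all cross sections and densities below are illustrative, not data from the paper.

```python
# sigma[energy][material]: hypothetical neutron total cross sections that
# differ between the two energies, which is what makes the system invertible
sigma = [[2.0, 0.5],
         [0.8, 1.5]]
true_density = [1.0, 2.0]           # areal densities along one ray (illustrative)

# forward model: ln T(E) = -sum_i sigma[E][i] * density[i]
logT = [-(sigma[e][0] * true_density[0] + sigma[e][1] * true_density[1])
        for e in range(2)]

# invert the 2x2 system  sigma . density = -ln T  by Cramer's rule
det = sigma[0][0] * sigma[1][1] - sigma[0][1] * sigma[1][0]
d0 = ((-logT[0]) * sigma[1][1] - sigma[0][1] * (-logT[1])) / det
d1 = (sigma[0][0] * (-logT[1]) - (-logT[0]) * sigma[1][0]) / det
# d0, d1 recover the areal densities of the two substances along the ray
```

Repeating this solve for every ray, then reconstructing each material's density map separately, is the element-sensitive tomography the abstract describes.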
Rezende-Teixeira, Paula; Siviero, Fábio; Andrade, Alexandre; Santelli, Roberto Vicente; Machado-Santelli, Gláucia M
2008-06-01
Two mariner-like elements, Ramar1 and Ramar2, are described in the genome of Rhynchosciara americana; their nucleotide consensus sequences were derived from multiple defective copies containing deletions, frame shifts and stop codons. Ramar1 contains several conserved amino acid blocks, including a specific D,D(34)D signature motif. Ramar2 is a defective mariner-like element containing a deletion that spans most of the internal region of the transposase ORF while its extremities remain intact. Phylogenetic analysis of the predicted transposase sequences demonstrated that Ramar1 and Ramar2 have high identity to mariner-like elements of the mauritiana subfamily. Southern blot analysis indicated that Ramar1 is widely represented in the genome of Rhynchosciara americana. In situ hybridizations showed Ramar1 localized in several chromosome regions, mainly in pericentromeric heterochromatin and its boundaries, while Ramar2 appeared as a single band in chromosome A.
Computational solution of acoustic radiation problems by Kussmaul's boundary element method
NASA Astrophysics Data System (ADS)
Kirkup, S. M.; Henwood, D. J.
1992-10-01
The problem of computing the properties of the acoustic field exterior to a vibrating surface for the complete wavenumber range by the boundary element method is considered. A particular computational method based on the Kussmaul formulation is described. The method is derived through approximating the surface by a set of planar triangles and approximating the surface functions by a constant on each element. The method is successfully applied to test problems and to the Ricardo crankcase simulation rig.
Experience with automatic, dynamic load balancing and adaptive finite element computation
Wheat, S.R.; Devine, K.D.; Maccabe, A.B.
1993-10-01
Distributed memory, Massively Parallel (MP), MIMD technology has enabled the development of applications requiring computational resources previously unobtainable. Structural mechanics and fluid dynamics applications, for example, are often solved by finite element methods (FEMs) requiring millions of degrees of freedom to accurately simulate physical phenomena. Adaptive methods, which automatically refine or coarsen meshes and vary the order of accuracy of the numerical solution, offer greater robustness and computational efficiency than traditional FEMs by reducing the amount of computation required away from physical structures such as shock waves and boundary layers. On MP computers, FEMs frequently result in distributed processor load imbalances. To overcome load imbalance, many MP FEMs use static load balancing as a preprocessor to the finite element calculation. Adaptive methods complicate the load imbalance problem since the work per element is not uniform across the solution domain and changes as the computation proceeds. Therefore, dynamic load balancing is required to maintain global load balance. We describe a dynamic, fine-grained, element-based data migration system that maintains global load balance and is effective in the presence of changing work loads. Global load balance is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method utilizes an automatic element management system library to which a programmer integrates the application's computational description. The library's flexibility supports a large class of finite element and finite difference based applications.
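The overlapping-neighborhood idea can be sketched as diffusion-style balancing: each processor exchanges a fraction of its load difference with its neighbors, and because neighborhoods overlap, repeated local rounds drive the whole machine toward global balance. A minimal sketch on a hypothetical four-processor ring (the scheme below illustrates the principle, not the paper's actual migration library):

```python
def balance_step(loads, neighbors, alpha=0.25):
    """One local round: each processor moves a fraction alpha of each
    load difference across its neighbor links (diffusion balancing).
    Transfers are pairwise symmetric, so total load is conserved."""
    new = loads[:]
    for p, nbrs in neighbors.items():
        for q in nbrs:
            new[p] += alpha * (loads[q] - loads[p])
    return new

# hypothetical four-processor ring; neighborhoods overlap through shared links
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
loads = [100.0, 0.0, 0.0, 0.0]      # one processor starts with all the work
for _ in range(50):
    loads = balance_step(loads, neighbors)
# per-processor loads converge to the mean (25.0) with no global coordination
```

Purely local exchanges reaching a global balance is the key property the abstract exploits: no processor ever needs a global view of the machine's load.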
ERIC Educational Resources Information Center
Zaidel, Mark; Luo, XiaoHui
2010-01-01
This study investigates the efficiency of multimedia instruction at the college level by comparing the effectiveness of multimedia elements used in the computer supported learning with the cost of their preparation. Among the various technologies that advance learning, instructors and students generally identify interactive multimedia elements as…
Plane stress analysis of wood members using isoparametric finite elements, a computer program
Gary D. Gerhardt
1983-01-01
A finite element program is presented which computes displacements, strains, and stresses in wood members of arbitrary shape which are subjected to plane strain/stress loading conditions. This report extends a program developed by R. L. Taylor in 1977 by adding both the cubic isoparametric finite element and the capability to analyze nonisotropic materials. The...
Adaptive finite element methods for two-dimensional problems in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1994-01-01
Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.
A computer program for anisotropic shallow-shell finite elements using symbolic integration
NASA Technical Reports Server (NTRS)
Andersen, C. M.; Bowen, J. T.
1976-01-01
A FORTRAN computer program for anisotropic shallow-shell finite elements with variable curvature is described. A listing of the program is presented together with printed output for a sample case. Computation times and central memory requirements are given for several different elements. The program is based on a stiffness (displacement) finite-element model in which the fundamental unknowns consist of both the displacement and the rotation components of the reference surface of the shell. Two triangular and four quadrilateral elements are implemented in the program. The triangular elements have 6 or 10 nodes, and the quadrilateral elements have 4 or 8 nodes. Two of the quadrilateral elements have internal degrees of freedom associated with displacement modes which vanish along the edges of the elements (bubble modes). The triangular elements and the remaining two quadrilateral elements do not have bubble modes. The output from the program consists of arrays corresponding to the stiffness, the geometric stiffness, the consistent mass, and the consistent load matrices for individual elements. The integrals required for the generation of these arrays are evaluated by using symbolic (or analytic) integration in conjunction with certain group-theoretic techniques. The analytic expressions for the integrals are exact and were developed using the symbolic and algebraic manipulation language.
Chen, Z.; Schreyer, H.L.
1995-09-01
The response of underground structures and transportation facilities under various external loadings and environments is critical for human safety as well as environmental protection. Since quasi-brittle materials such as concrete and rock are commonly used for underground construction, the constitutive modeling of these engineering materials, including post-limit behaviors, is one of the most important aspects in safety assessment. From experimental, theoretical, and computational points of view, this report considers the constitutive modeling of quasi-brittle materials in general and concentrates on concrete in particular. Based on the internal variable theory of thermodynamics, the general formulations of plasticity and damage models are given to simulate two distinct modes of microstructural changes, inelastic flow and degradation of material strength and stiffness, that identify the phenomenological nonlinear behaviors of quasi-brittle materials. The computational aspects of plasticity and damage models are explored with respect to their effects on structural analyses. Specific constitutive models are then developed in a systematic manner according to the degree of completeness. A comprehensive literature survey is made to provide the up-to-date information on prediction of structural failures, which can serve as a reference for future research.
Computer modeling of batteries from non-linear circuit elements
NASA Technical Reports Server (NTRS)
Waaben, S.; Federico, J.; Moskowitz, I.
1983-01-01
A simple non-linear circuit model for battery behavior is given. It is based on time-dependent features of the well-known PIN charge storage diode, whose behavior is described by equations similar to those associated with electrochemical cells. The circuit simulation computer program ADVICE was used to predict non-linear response from a topological description of the battery analog built from ADVICE components. By a reasonable choice of one set of parameters, the circuit accurately simulates a wide spectrum of measured non-linear battery responses to within a few millivolts.
Computation of Schenberg response function by using finite element modelling
NASA Astrophysics Data System (ADS)
Frajuca, C.; Bortoli, F. S.; Magalhaes, N. S.
2016-05-01
Schenberg is a resonant-mass gravitational wave detector with a central operating frequency of 3200 Hz. Transducers located on the surface of the resonating sphere, according to a half-dodecahedron distribution, are used to monitor the strain amplitude. The development of mechanical impedance matchers that increase the coupling of the transducers to the sphere is a major challenge because of the high frequency and their small size. The objective of this work is to study the Schenberg response function obtained by finite element modeling (FEM). Finally, the result is compared with that of the simplified mass-spring model to verify whether the latter is suitable for determining the detector sensitivity; as a conclusion, both models give the same results.
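For a single mode, the simplified mass-spring model against which the FEM result is checked reduces to the textbook resonance formula f = (1/2π)·√(k/m). A minimal sketch with illustrative parameters (the mass and stiffness below are hypothetical, chosen only so the mode lands at Schenberg's 3200 Hz band, not the detector's actual values):

```python
import math

def resonance_hz(k, m):
    """Resonance frequency of a mass-spring oscillator: f = sqrt(k/m)/(2*pi)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

m = 1150.0                                # effective modal mass in kg (illustrative)
k = m * (2.0 * math.pi * 3200.0) ** 2     # stiffness chosen to place the mode at 3200 Hz
f = resonance_hz(k, m)                    # recovers 3200 Hz by construction
```

The FEM study in the paper effectively asks whether the full resonating-sphere model reproduces the frequencies and response this lumped formula predicts.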
Analytical Aspects of EPMA for Trace Element Analysis in Complex Accessory Minerals
NASA Astrophysics Data System (ADS)
Jercinovic, M. J.; Williams, M. L.; Lane, E.
2007-12-01
High-resolution microanalysis of complex REE-bearing accessory phases is becoming increasingly necessary for insight into the chronology of phase growth and tectonic histories, and for understanding the mechanisms and manifestations of growth and dissolution reactions. The in-situ analysis of very small grains, inclusions, and sub-domains is revolutionizing our understanding of the evolution of complexly deformed, multiply metamorphosed rocks. Great progress has been made in refining analytical protocols, and improvements in instrumentation have yielded unprecedented analytical precision and spatial resolution. As signal/noise improves, complexity is revealed, illustrating the level of care that must go into obtaining meaningful results and into adopting an appropriate approach to minimize error. Background measurement is most critical for low-concentration elements. Errors on net intensity values resulting from improper background measurement alone can exceed 50% relative. Regression and modeling of the background spectrum is essential and must be carried out independently for each spectrometer, regardless of instrument. In complex materials such as REE-bearing phosphates, high concentrations of REEs and actinides create difficult analytical challenges, as numerous emission lines and absorption edges cause great spectral complexity. In addition, trace concentrations of "unexpected" emission lines, such as those from sulfur or those fluoresced from nearby phases (Ti, K), cause interferences on both measured peaks and background regions which can result in very large errors on target elements (U, Pb, etc.), on the order of 10s to 100s of ppm. Characteristic X-ray emission involving electron transitions from the valence shell is subject to measurable peak shifts, in some cases significantly affecting the accuracy of results if not accounted for. Geochronology by EPMA involves careful measurement of all constituent elements, with the calculated date dependent on the
NASA Astrophysics Data System (ADS)
Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia
2013-10-01
Digital processing of two-dimensional cone beam computed tomography slices starts with identification of the contours of the elements within. This paper deals with the collective work of specialists in medicine and in applied mathematics and computer science on elaborating and implementing algorithms for dental 2D imagery.
JCMmode: an adaptive finite element solver for the computation of leaky modes
NASA Astrophysics Data System (ADS)
Zschiedrich, Lin W.; Burger, Sven; Klose, Roland; Schaedle, Achim; Schmidt, Frank
2005-03-01
We present our simulation tool JCMmode for calculating propagating modes of an optical waveguide. As ansatz functions we use higher-order vectorial elements (Nedelec elements, edge elements). Further, we construct transparent boundary conditions to deal with leaky modes even for problems with inhomogeneous exterior domains, as for integrated hollow-core ARROW waveguides. We have implemented an error estimator which steers the adaptive mesh refinement. This allows the precise computation of singularities near the metal corners of a plasmon-polariton waveguide, even for irregularly shaped metal films, on a standard personal computer.
NASA Technical Reports Server (NTRS)
Levy, R.
1991-01-01
Post-processing algorithms are given to compute the vibratory elastic-rigid coupling matrices and the modal contributions to the rigid-body mass matrices and to the effective modal inertias and masses. Recomputation of the elastic-rigid coupling matrices for a change in origin is also described. A computational example is included. The algorithms can all be executed by using standard finite-element program eigenvalue analysis output with no changes to existing code or source programs.
Aspects of the history of 66095 based on trace elements in clasts and whole rock
NASA Astrophysics Data System (ADS)
Jovanovic, S.; Reed, G. W., Jr.
Halogens, P, U and Na are reported in anorthositic and basaltic clasts and matrix from rusty rock 66095. Large fractions of Cl and Br associated with the separated phases from 66095 are soluble in H2O. Up to two orders of magnitude variation in concentrations of these elements in the breccia components and varying H2O-soluble Cl/Br ratios indicate different sources of volatiles. An approximately constant ratio of the H2O- to 0.1 M HNO3-soluble Br in the various components suggests no appreciable alteration in the original distributions of this element in the breccia forming processes. Up to 50% or more of the phosphorus and of the non-H2O-soluble Cl was dissolved from most of the breccia components by 0.1 M HNO3. Clast and matrix residues from the leaching steps contain, in most cases, the Cl/P2O5 ratio found in 66095 whole rock and in a number of other Apollo 16 samples. Evidence that phosphates are the major P-phases in the breccia is based on the 0.1 M acid solubility of Cl and P in the matrix sample and on elemental concentrations which are consistent with those of KREEP.
Aspects of the history of 66095 based on trace elements in clasts and whole rock
Jovanovic, S.; Reed, G.W. Jr.
1981-01-01
Halogens, P, U and Na are reported in anorthositic and basaltic clasts and matrix from rusty rock 66095. Large fractions of Cl and Br associated with the separated phases from 66095 are soluble in H2O. Up to two orders of magnitude variation in concentrations of these elements in the breccia components and varying H2O-soluble Cl/Br ratios indicate different sources of volatiles. An approximately constant ratio of the H2O- to 0.1 M HNO3-soluble Br in the various components suggests no appreciable alteration in the original distributions of this element in the breccia forming processes. Up to 50% or more of the phosphorus and of the non-H2O-soluble Cl was dissolved from most of the breccia components by 0.1 M HNO3. Clast and matrix residues from the leaching steps contain, in most cases, the Cl/P2O5 ratio found in 66095 whole rock and in a number of other Apollo 16 samples. Evidence that phosphates are the major P-phases in the breccia is based on the 0.1 M acid solubility of Cl and P in the matrix sample and on elemental concentrations which are consistent with those of KREEP.
Aspects of the history of 66095 based on trace elements in clasts and whole rock
Jovanovic, S.; Reed, G.W. Jr.
1981-01-01
Large fractions of Cl and Br associated with separated anorthositic and basaltic clasts and matrix from rusty rock 66095 are soluble in H2O. Up to two orders of magnitude variation in concentrations of these elements in the breccia components and varying H2O-soluble Cl/Br ratios indicate different sources of volatiles. An approximately constant ratio of the H2O- to acid-soluble Br, i.e. surface deposits vs. possibly phosphate-related Br, suggests no appreciable alteration in the original distributions of this element. Weak acid leaching dissolved approx. 50% or more of the phosphorus and of the remaining Cl from most of the breccia components. Clast and matrix residues from the leaching steps contain, in most cases, the Cl/P2O5 ratio found in 66095 whole rock and in a number of other Apollo 16 samples. No dependence on degree of brecciation is indicated. The clasts are typical of Apollo 16 rocks. Matrix leaching results and element concentrations suggest that apatite-whitlockite is a component of KREEP.
Aspects of the history of 66095 based on trace elements in clasts and whole rock
NASA Technical Reports Server (NTRS)
Jovanovic, S.; Reed, G. W., Jr.
1982-01-01
Halogens, P, U and Na are reported in anorthositic and basaltic clasts and matrix from rusty rock 66095. Large fractions of Cl and Br associated with the separated phases from 66095 are soluble in H2O. Up to two orders of magnitude variation in concentrations of these elements in the breccia components and varying H2O-soluble Cl/Br ratios indicate different sources of volatiles. An approximately constant ratio of the H2O- to 0.1 M HNO3-soluble Br in the various components suggests no appreciable alteration in the original distributions of this element in the breccia forming processes. Up to 50% or more of the phosphorus and of the non-H2O-soluble Cl was dissolved from most of the breccia components by 0.1 M HNO3. Clast and matrix residues from the leaching steps contain, in most cases, the Cl/P2O5 ratio found in 66095 whole rock and in a number of other Apollo 16 samples. Evidence that phosphates are the major P-phases in the breccia is based on the 0.1 M acid solubility of Cl and P in the matrix sample and on elemental concentrations which are consistent with those of KREEP.
Finite element solution techniques for large-scale problems in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Liou, J.; Tezduyar, T. E.
1987-01-01
Element-by-element approximate factorization, implicit-explicit and adaptive implicit-explicit approximation procedures are presented for the finite-element formulations of large-scale fluid dynamics problems. The element-by-element approximation scheme totally eliminates the need for formation, storage and inversion of large global matrices. Implicit-explicit schemes, which are approximations to implicit schemes, substantially reduce the computational burden associated with large global matrices. In the adaptive implicit-explicit scheme, the implicit elements are selected dynamically based on element level stability and accuracy considerations. This scheme provides implicit refinement where it is needed. The methods are applied to various problems governed by the convection-diffusion and incompressible Navier-Stokes equations. In all cases studied, the results obtained are indistinguishable from those obtained by the implicit formulations.
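The adaptive implicit-explicit selection can be sketched as a per-element stability check: an element whose local explicit time-step limit (from convection and diffusion) is smaller than the chosen global step is flagged for implicit treatment. The criterion and values below are an illustrative sketch, not the paper's actual selection rule:

```python
def partition_elements(dt, dx, velocity, diffusivity):
    """Flag each 1D element implicit when an explicit update at global step dt
    would exceed its local stability limit (CFL-like criterion, illustrative)."""
    flags = []
    for h, u, nu in zip(dx, velocity, diffusivity):
        dt_conv = h / abs(u) if u else float("inf")        # convection limit
        dt_diff = h * h / (2.0 * nu) if nu else float("inf")  # diffusion limit
        flags.append("implicit" if dt > min(dt_conv, dt_diff) else "explicit")
    return flags

flags = partition_elements(
    dt=0.01,
    dx=[0.1, 0.001, 0.1],          # middle element is finely clustered
    velocity=[1.0, 1.0, 1.0],
    diffusivity=[0.01, 0.01, 0.01],
)
# only the fine element is flagged implicit; the coarse ones stay explicit
```

This is the sense in which the scheme "provides implicit refinement where it is needed": the expensive implicit treatment follows the mesh clustering rather than being applied globally.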
Modeling of Rolling Element Bearing Mechanics: Computer Program Updates
NASA Technical Reports Server (NTRS)
Ryan, S. G.
1997-01-01
The Rolling Element Bearing Analysis System (REBANS) extends the capability available with traditional quasi-static bearing analysis programs by including the effects of bearing race and support flexibility. This tool was developed under contract for NASA-MSFC. The initial version delivered at the close of the contract contained several errors and exhibited numerous convergence difficulties. The program has been modified in-house at MSFC to correct the errors and greatly improve the convergence. The modifications consist of significant changes in the problem formulation and nonlinear convergence procedures. The original approach utilized sequential convergence for nested loops to achieve final convergence. This approach proved to be seriously deficient in robustness. Convergence was more the exception than the rule. The approach was changed to iterate all variables simultaneously. This approach has the advantage of using knowledge of the effect of each variable on each other variable (via the system Jacobian) when determining the incremental changes. This method has proved to be quite robust in its convergence. This technical memorandum documents the changes required for the original Theoretical Manual and User's Manual due to the new approach.
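The change from nested sequential convergence to iterating all variables simultaneously can be illustrated with a full-Jacobian Newton step on a toy coupled system. The system and solver below are a hypothetical sketch, not REBANS code:

```python
def newton_coupled(f, jac, x, tol=1e-12, max_iter=50):
    """Newton iteration on a 2-variable system: every step uses the full
    Jacobian, so each variable's effect on the others steers the update."""
    for _ in range(max_iter):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            break
        j = jac(x)
        det = j[0][0] * j[1][1] - j[0][1] * j[1][0]
        # solve J * dx = f(x) for the 2x2 case by Cramer's rule
        dx0 = (j[1][1] * fx[0] - j[0][1] * fx[1]) / det
        dx1 = (j[0][0] * fx[1] - j[1][0] * fx[0]) / det
        x = [x[0] - dx0, x[1] - dx1]
    return x

# toy coupled system: x^2 + y^2 = 2 and x - y = 0, with solution x = y = 1
f = lambda v: [v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]]
jac = lambda v: [[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]]
x = newton_coupled(f, jac, [2.0, 0.5])
```

Because each increment accounts for the cross-coupling through the Jacobian, convergence is far more robust than alternately converging one variable inside a loop over the other, which is the deficiency the memo describes.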
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1988-01-01
This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is based on application of transform techniques in conjunction with classical Galerkin schemes and is a hybrid approach. The purpose of this paper is to provide a viable hybrid computational methodology for applicability to general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology successfully provides a viable computational approach and numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.
Computed tomography-based finite element analysis to assess fracture risk and osteoporosis treatment
Imai, Kazuhiro
2015-01-01
Finite element analysis (FEA) is a computer technique for structural stress analysis developed in engineering mechanics. Over the past 40 years, FEA has been developed to investigate the structural behavior of human bones. As faster computers became available, better FEA using 3-dimensional computed tomography (CT) was developed. This CT-based finite element analysis (CT/FEA) has provided clinicians with useful data. In this review, the mechanism of CT/FEA, validation studies of CT/FEA evaluating its accuracy and reliability in human bones, and clinical application studies assessing fracture risk and the effects of osteoporosis medication are overviewed. PMID:26309819
NASA Astrophysics Data System (ADS)
Sugiyama, Atsushi; Masuda, Nobuyuki; Oikawa, Minoru; Okada, Naohisa; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2014-11-01
We have implemented a computer-generated hologram (CGH) calculation on Greatly Reduced Array of Processor Element with Data Reduction (GRAPE-DR) processors. The cost of CGH calculation is enormous, but CGH calculation is well suited to parallel computation. The GRAPE-DR is a multicore processor that has 512 processor elements. The GRAPE-DR supports a double-precision floating-point operation and can perform CGH calculation with high accuracy. The calculation speed of the GRAPE-DR system is seven times faster than that of a personal computer with an Intel Core i7-950 processor.
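In its simplest point-source form, the CGH calculation parallelized here is a large independent sum per hologram pixel, which is why it maps so well onto many-core hardware such as the GRAPE-DR. A minimal sketch of that per-pixel sum (geometry, units, and the cosine kernel are the standard textbook form, given here as an illustration rather than the paper's exact formulation):

```python
import math

def cgh_pixel(x, y, points, wavelength):
    """Accumulate the interference contribution of every object point at one
    hologram pixel: an O(pixels * points) workload, trivially parallel."""
    total = 0.0
    for (px, py, pz, amp) in points:
        r = math.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)  # pixel-to-point distance
        total += amp * math.cos(2.0 * math.pi * r / wavelength)
    return total

# one object point one wavelength-unit away contributes cos(2*pi) = 1
value = cgh_pixel(0.0, 0.0, [(0.0, 0.0, 1.0, 1.0)], 1.0)
```

Since every pixel's sum is independent, the work distributes across the 512 processor elements with no communication beyond broadcasting the object points.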
2010-05-11
Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
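One well-known technique of the summation-accuracy kind the talk covers is Kahan compensated summation, which carries the rounding error of each addition in a correction term instead of discarding it. This particular example is ours, not taken from the talk:

```python
def kahan_sum(values):
    """Compensated summation: c tracks the low-order bits lost to rounding
    at each step and feeds them back into the next addition."""
    total = 0.0
    c = 0.0
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y   # the part of y that the addition rounded away
        total = t
    return total

# naive left-to-right summation loses every 1e-16 term against the 1.0
values = [1.0, 1e-16, 1e-16, 1e-16, 1e-16]
naive = sum(values)           # stays exactly 1.0 in double precision
accurate = kahan_sum(values)  # recovers the small contributions
```

The same hazard appears whenever many small terms (e.g. per-event energies) accumulate against a large running total, which is exactly the HEP setting the talk addresses.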
2016-07-12
Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
NASA Astrophysics Data System (ADS)
Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang
2015-05-01
Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Because of the absorbing effect, the light scattering properties of particles with absorption differ from those without absorption. Simply shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large, complex-shaped particles has been reported. In this paper, the surface integral equation (SIE) method with the multilevel fast multipole algorithm (MLFMA) is applied to study the scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangular patches to model the whole surface of the particle, so computational resource needs grow much more slowly with the particle size parameter than in volume-discretized methods. To further improve its capability, the MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed-memory computer platform. Without loss of generality, we choose the computation of the scattering matrix elements of absorbing dust particles as an example. A comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation method (DDA) for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex-shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i, and different orientations is studied.
Recurrent networks with recursive processing elements: paradigm for dynamical computing
NASA Astrophysics Data System (ADS)
Farhat, Nabil H.; del Moral Hernandez, Emilio
1996-11-01
It was shown earlier that models of cortical neurons can, under certain conditions of coherence in their input, behave as recursive processing elements (PEs) that are characterized by an iterative map on the phase interval and by bifurcation diagrams that demonstrate the complex encoding cortical neurons might be able to perform on their input. Here we present results of numerical experiments carried out on a recurrent network of such recursive PEs modeled by the logistic map. Network behavior is studied under a novel scheme for generating complex spatio-temporal input patterns that can range from coherent to partially coherent to completely incoherent. A nontraditional nonlinear coupling scheme between neurons is employed to incorporate recent findings in brain science, namely that neurons use more than one kind of neurotransmitter in their chemical signaling. It is shown that such networks have the capacity to 'self-anneal' or collapse into period-m attractors that are uniquely related to the stimulus pattern, following a transient 'chaotic' period during which the network searches its state-space for the associated dynamic attractor. The network naturally accepts both dynamical and stationary input patterns. Moreover, we find that the use of quantized coupling strengths, introduced to reflect recent molecular biology and neurophysiological reports on synapse dynamics, endows the network with clustering ability wherein, depending on the stimulus pattern, PEs in the network divide into phase-locked groups with the PEs in each group being synchronized in period-m orbits. The value of m is found to be the same for all clusters, and the number of clusters gives the dimension of the periodic attractor. The implications of these findings for higher-level processing such as feature-binding and for the development of novel learning algorithms are briefly discussed.
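The recursive PE referred to above is modeled by the logistic map x_{n+1} = r x_n (1 - x_n), whose orbits settle into the period-m attractors the abstract describes. A minimal sketch (parameter values are standard logistic-map facts, not taken from the paper) that iterates past the transient and detects the attractor period:

```python
def logistic_period(r, x0=0.3, transient=5000, max_period=64, tol=1e-8):
    """Iterate the logistic map past its transient, then return the
    smallest m such that the settled orbit repeats with period m."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    # record a window of the settled orbit
    orbit = []
    for _ in range(4 * max_period):
        orbit.append(x)
        x = r * x * (1.0 - x)
    for m in range(1, max_period + 1):
        if all(abs(orbit[i + m] - orbit[i]) < tol
               for i in range(len(orbit) - m)):
            return m
    return None  # chaotic, or period longer than max_period

print(logistic_period(2.8))  # 1  (fixed point)
print(logistic_period(3.2))  # 2  (period-2 orbit)
print(logistic_period(3.5))  # 4  (period-4 orbit)
```

In the paper's network setting, the analogous measurement is made per PE after the 'self-annealing' transient, and the detected m is shared across clusters.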
Neutron-stimulated emission computed tomography of a multi-element phantom.
Floyd, Carey E; Kapadia, Anuj J; Bender, Janelle E; Sharma, Amy C; Xia, Jessie Q; Harrawood, Brian P; Tourassi, Georgia D; Lo, Joseph Y; Crowell, Alexander S; Kiser, Mathew R; Howell, Calvin R
2008-05-07
This paper describes the implementation of neutron-stimulated emission computed tomography (NSECT) for non-invasive imaging and reconstruction of a multi-element phantom. The experimental apparatus and process for acquisition of multi-spectral projection data are described along with the reconstruction algorithm and images of the two elements in the phantom. Independent tomographic reconstruction of each element of the multi-element phantom was performed successfully. This reconstruction result is the first of its kind and provides encouraging proof of concept for proposed subsequent spectroscopic tomography of biological samples using NSECT.
Mixing characteristics of injector elements in liquid rocket engines - A computational study
NASA Astrophysics Data System (ADS)
Lohr, Jonathan C.; Trinh, Huu P.
1992-07-01
A computational study has been performed to better understand the mixing characteristics of liquid rocket injector elements. Variations in injector geometry as well as differences in injector element inlet flow conditions are among the areas examined in the study. Most results involve the nonreactive mixing of gaseous fuel with gaseous oxidizer but preliminary results are included that involve the spray combustion of oxidizer droplets. The purpose of the study is to numerically predict flowfield behavior in individual injector elements to a high degree of accuracy and in doing so to determine how various injector element properties affect the flow.
Broadband aspects of a triple-patch antenna as an array element
NASA Astrophysics Data System (ADS)
Revankar, U. K.; Kumar, A.
The design of radiating elements having wider bandwidths is an area of major interest in printed antenna technology. This paper describes a novel circular microstrip antenna adopting a three-layer stacked structure presenting a wider bandwidth as high as 20 percent with a low cross-polarization level and a high directive gain. Detailed experimental investigations are carried out on the effects of interlayer spacings and the thickness of the parasitic layers on the impedance bandwidth, 3-dB beamwidth and pattern shape.
Computational aspects in modelling the interaction of low-energy X-rays with liquid scintillators.
Grau Carles, A; Grau Malonda, A
2006-01-01
The commercial liquid scintillators available nowadays are mostly complex cocktails that frequently include non-negligible amounts of elements heavier than the commonly expected carbon or hydrogen. In May 1993, nine laboratories agreed to participate, in the frame of the EUROMET project, in a comparison of the activity concentration measurement of 55Fe. One particular aspect of the results that stood out was a small systematic difference between the activity concentrations obtained with Ultima Gold and Insta Gel. The detection of the radiation emitted by EC nuclides involves, in addition to the atomic rearrangement generated by the capture of the electron by the nucleus, a frequently ignored secondary atomic rearrangement process due to photoionization. Such a process can be neglected for scintillators that only contain hydrogen and carbon, e.g., toluene, but must be taken into account when the EC nuclide solution is incorporated into cocktails with heavier elements, e.g., Ultima Gold. During the past year, an improved version of the program EMI has been developed. This code adds the photoionization reduced-energy correction to the previous versions, and successfully explains the systematic difference between the measured activity concentrations of 55Fe in Ultima Gold and Insta Gel.
Navier-Stokes computations of vortical flows over low aspect ratio wings
NASA Technical Reports Server (NTRS)
Thomas, J. L.; Taylor, S. L.; Anderson, W. K.
1987-01-01
An upwind-biased finite-volume algorithm is applied to the low-speed flow over a low aspect ratio delta wing from zero to forty degrees angle of attack. The differencing is second-order accurate spatially, and a multigrid algorithm is used to promote convergence to the steady state. The results compare well with the detailed experiments of Hummel (1983) and others for a Reynolds number Re(L) of 0.95 x 10(exp 6). The predicted maximum lift coefficient of 1.10 at thirty-five degrees angle of attack agrees closely with the measured maximum lift of 1.06 at thirty-three degrees. At forty degrees angle of attack, a bubble type of vortex breakdown is evident in the computations, extending from 0.6 of the root chord to just downstream of the trailing edge.
Boyer, Frédéric; Porez, Mathieu
2015-03-26
This article presents a set of generic tools for multibody system dynamics devoted to the study of bio-inspired locomotion in robotics. First, archetypal examples from the field of bio-inspired robot locomotion are presented to prepare the ground for further discussion. The general problem of locomotion is then stated. In considering this problem, we progressively draw a unified geometric picture of locomotion dynamics. For that purpose, we start from the model of discrete mobile multibody systems (MMSs) that we progressively extend to the case of continuous and finally soft systems. Beyond these theoretical aspects, we address the practical problem of the efficient computation of these models by proposing a Newton-Euler-based approach to efficient locomotion dynamics with a few illustrations of creeping, swimming, and flying.
Computational design of low aspect ratio wing-winglet configurations for transonic wind-tunnel tests
NASA Technical Reports Server (NTRS)
Kuhlman, John M.; Brown, Christopher K.
1989-01-01
Computational designs were performed for three different low aspect ratio wing planforms fitted with nonplanar winglets; one of the three configurations was selected to be constructed as a wind tunnel model for testing in the NASA LaRC 8-foot transonic pressure tunnel. A design point of M = 0.8 and C(sub L) of approximately 0.3 was selected, for wings of aspect ratio equal to 2.2, and leading edge sweep angles of 45 deg and 50 deg. Winglet length is 15 percent of the wing semispan, with a cant angle of 15 deg, and a leading edge sweep of 50 deg. Winglet total area equals 2.25 percent of the wing reference area. The design process and the predicted transonic performance are summarized for each configuration. In addition, a companion low-speed design study was conducted, using one of the transonic design wing-winglet planforms but with different camber and thickness distributions. A low-speed wind tunnel model was constructed to match this low-speed design geometry, and force coefficient data were obtained for the model at speeds of 100 to 150 ft/sec. Measured drag coefficient reductions were of the same order of magnitude as those predicted by numerical subsonic performance predictions.
Grissmann, Sebastian; Zander, Thorsten O; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter
2017-01-01
Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted mental processes can interfere with signals used for classification and thereby introduce changes in the signal properties which could potentially impede BCI classification performance. To improve BCI performance, we propose an approach that can potentially describe different mental states that influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics with sources located outside the primary motor areas but affecting the signal of interest originating there, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates which were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios.
Development of an hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1993-01-01
The purpose of this research effort was to begin the study of the application of hp-version finite elements to the numerical solution of optimal control problems. Under NAG-939, the hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element. One possible drawback is the increased computational effort within each element required in implementing hp-version finite elements. We are trying to determine whether this computational effort is sufficiently offset by the reduction in the number of time elements used and improved Newton-Raphson convergence so as to be useful in solving optimal control problems in real time. Because certain of the element interior unknowns can be eliminated at the element level by solving a small set of nonlinear algebraic equations in which the nodal values are taken as given, the scheme may turn out to be especially powerful in a parallel computing environment. A different processor could be assigned to each element. The number of processors, strictly speaking, is not required to be any larger than the number of sub-regions which are free of discontinuities of any kind.
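The element-level elimination of interior unknowns described above is, in the linear case, static condensation via a Schur complement. A minimal linear sketch (the actual scheme condenses nonlinear algebraic equations; the 4x4 system here is purely illustrative):

```python
import numpy as np

# Partition a small SPD element system K u = f into boundary (B)
# and interior (I) degrees of freedom, then eliminate the interior
# unknowns "at the element level".
K = np.array([[ 4.0, -1.0, -1.0,  0.0],
              [-1.0,  4.0,  0.0, -1.0],
              [-1.0,  0.0,  4.0, -1.0],
              [ 0.0, -1.0, -1.0,  4.0]])
f = np.array([1.0, 2.0, 3.0, 4.0])
B, I = [0, 1], [2, 3]  # boundary (nodal) and interior index sets

KBB, KBI = K[np.ix_(B, B)], K[np.ix_(B, I)]
KIB, KII = K[np.ix_(I, B)], K[np.ix_(I, I)]

# Condensed (Schur complement) system in the boundary unknowns only
S = KBB - KBI @ np.linalg.solve(KII, KIB)
g = f[B] - KBI @ np.linalg.solve(KII, f[I])
uB = np.linalg.solve(S, g)

# Interior unknowns recovered afterwards, element by element
uI = np.linalg.solve(KII, f[I] - KIB @ uB)

# Agrees with solving the full system directly
u_full = np.linalg.solve(K, f)
print(np.allclose(np.concatenate([uB, uI]), u_full))  # True
```

Because each interior solve touches only one element's unknowns, the recovery step is what makes the scheme attractive for one-processor-per-element parallelism, as the abstract notes.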
Permeability computation on a REV with an immersed finite element method
Laure, P.; Puaux, G.; Silva, L.; Vincent, M.
2011-05-04
An efficient method to compute the permeability of fibrous media is presented. An immersed domain approach is used to represent the porous material at its microscopic scale, and the flow motion is computed with a stabilized mixed finite element method. The Stokes equations are thus solved on the whole domain (including the solid part) using a penalty method. The accuracy is controlled by refining the mesh around the solid-fluid interface, which is defined by a level set function. Using homogenisation techniques, the permeability of a representative elementary volume (REV) is computed. The computed permeabilities of regular fibre packings are compared to classical analytical relations found in the literature.
Pathophysiological aspects of cystocele with a 3D finite elements model.
Lamblin, Géry; Mayeur, Olivier; Giraudet, Géraldine; Jean Dit Gautier, Estelle; Chene, Gautier; Brieu, Mathias; Rubod, Chrystèle; Cosson, Michel
2016-11-01
The objective of this study is to design a 3D biomechanical model of the female pelvic system to assess pelvic organ suspension theories and understand cystocele mechanisms. A finite elements (FE) model was constructed to calculate the impact of suspension structure geometry on cystocele. The sample was a geometric model of a control patient's pelvic organs. The method used geometric reconstruction, implemented with the biomechanical properties of each anatomic structure. Various geometric configurations were simulated with the FE method to analyse the role of each structure and compare the two main anatomic theories. The main outcome measure was a 3D biomechanical model of the female pelvic system. The various configurations of bladder displacement simulated the mechanisms underlying medial, lateral and apical cystocele. FE simulation revealed that the pubocervical fascia is the most influential structure in the onset of medial cystocele (essentially after 40 % impairment). Lateral cystocele showed a stronger influence of the arcus tendineus fasciae pelvis (ATFP) on vaginal wall displacement under short ATFP lengthening. In apical cystocele, the uterosacral ligament showed greater influence than the cardinal ligament. Suspension system elongation increased displacement by 25 % in each type of cystocele. A 3D digital model enabled simulations of the anatomic structures underlying cystocele to better understand cystocele pathophysiology. The model could be used to predict cystocele surgery results and to personalise the technique by preoperative simulation.
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)
2006-01-01
Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS and ANSYS and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS/Standard using the VCCT for ABAQUS add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS add-on.
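The VCCT post-processing step evaluates mixed-mode strain energy release rates from the forces at the crack-tip node pair and the relative displacements of the node pair one element behind the tip. A schematic 2D sketch of the standard formulas (the input numbers are placeholder values, not data from this study):

```python
def vcct_2d(Fx, Fy, du, dv, da, b):
    """2D Virtual Crack Closure Technique, one crack-front node pair.

    Fx, Fy : shear / opening nodal forces at the crack tip
    du, dv : sliding / opening relative displacements one element
             behind the tip
    da     : length of the element at the crack front
    b      : width (out-of-plane thickness) of the 2D model
    Returns (G_I, G_II, G_total)."""
    GI = Fy * dv / (2.0 * da * b)    # mode I (opening)
    GII = Fx * du / (2.0 * da * b)   # mode II (sliding)
    return GI, GII, GI + GII

# Placeholder numbers purely to exercise the formulas (SI units)
GI, GII, GT = vcct_2d(Fx=10.0, Fy=50.0, du=1e-4, dv=5e-4,
                      da=0.5e-3, b=25e-3)
mode_mix = GII / GT  # mixed-mode ratio G_II / G_T
```

In 3D models like those in the study, the same formula is applied column by column across the specimen width, which is exactly where the element-type differences discussed above show up.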
NASA Technical Reports Server (NTRS)
Bey, K. S.; Thornton, E. A.; Dechaumphai, P.; Ramakrishnan, R.
1985-01-01
Recent progress in the development of finite element methodology for the prediction of aerothermal loads is described. Two dimensional, inviscid computations are presented, but emphasis is placed on development of an approach extendable to three dimensional viscous flows. Research progress is described for: (1) utilization of a commercially available program to construct flow solution domains and display computational results, (2) development of an explicit Taylor-Galerkin solution algorithm, (3) closed form evaluation of finite element matrices, (4) vector computer programming strategies, and (5) validation of solutions. Two test problems of interest to NASA Langley aerothermal research are studied. Comparisons of finite element solutions for Mach 6 flow with other solution methods and experimental data validate fundamental capabilities of the approach for analyzing high speed inviscid compressible flows.
A new parallel-vector finite element analysis software on distributed-memory computers
NASA Technical Reports Server (NTRS)
Qin, Jiangning; Nguyen, Duc T.
1993-01-01
A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.
1989-01-01
The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.
Determination of an Initial Mesh Density for Finite Element Computations via Data Mining
Kanapady, R; Bathina, S K; Tamma, K K; Kamath, C; Kumar, V
2001-07-23
Numerical analysis software packages which employ a coarse or otherwise inadequate initial mesh must undergo cumbersome and time-consuming mesh refinement studies to obtain solutions with acceptable accuracy. Hence, it is critical for numerical methods such as finite element analysis to be able to determine a good initial mesh density for the subsequent finite element computations, or as an input to a subsequent adaptive mesh generator. This paper explores the use of data mining techniques for obtaining an initial approximate finite element mesh density that avoids significant trial and error at the start of finite element computations. As an illustration of proof of concept, a square plate which is simply supported at its edges and subjected to a concentrated load is employed as the test case. Although simplistic, the present study provides insight into addressing the above considerations.
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
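Damped Newton iteration with a well-chosen damping parameter is the workhorse of the trim analysis above. A minimal scalar sketch of the mechanics (the trim analysis applies this to large nonlinear algebraic systems with an optimally selected damping parameter; here the damping is simply chosen by backtracking on the residual, which is one common strategy, not necessarily the paper's):

```python
import math

def damped_newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton iteration with a backtracking damping parameter:
    take the largest lam in {1, 1/2, 1/4, ...} that reduces |f|."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = fx / fprime(x)
        lam = 1.0
        while lam > 1e-10 and abs(f(x - lam * step)) >= abs(fx):
            lam *= 0.5  # damp until the residual actually decreases
        x -= lam * step
    return x

# arctan(x) = 0: undamped Newton diverges from x0 = 3, since the
# full step overshoots; the damped iteration converges to x = 0.
root = damped_newton(math.atan, lambda x: 1.0 / (1.0 + x * x), x0=3.0)
print(abs(root) < 1e-10)  # True
```

This illustrates the divergence issue the abstract mentions: the undamped step is fine near the solution but can blow up from a poor starting guess, and damping virtually eliminates that failure mode.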
NASA Technical Reports Server (NTRS)
Greene, William H.
1989-01-01
A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
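The overall finite difference approach above simply re-runs the analysis for perturbed designs and differences the responses. A single-DOF analogue makes the idea concrete (the study treats large-order FE models; here the natural frequency omega = sqrt(k/m) of a spring-mass oscillator stands in for a response quantity, so the sensitivity can be checked against the analytic derivative):

```python
import math

def omega(k, m):
    """Natural frequency of an undamped spring-mass oscillator."""
    return math.sqrt(k / m)

def fd_sensitivity(response, x, h):
    """Overall (central) finite difference sensitivity: re-run the
    analysis at the perturbed designs x+h and x-h and difference."""
    return (response(x + h) - response(x - h)) / (2.0 * h)

k, m = 2.0e4, 5.0
num = fd_sensitivity(lambda kk: omega(kk, m), k, h=1e-2 * k)
ana = 1.0 / (2.0 * math.sqrt(k * m))  # analytic d(omega)/dk
print(abs(num - ana) / ana)  # small relative error, O((h/k)^2)
```

In the large-order setting the same pattern applies with the reduced (approximation-vector) model as `response`, which is why the abstract stresses reusing the original design's vectors for the perturbed analyses.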
TermehYousefi, Amin; Bagheri, Samira; Shahnazar, Sheida; Rahman, Md Habibur; Kadri, Nahrizul Adib
2016-02-01
Carbon nanotubes (CNTs) are potentially ideal tips for atomic force microscopy (AFM) due to the robust mechanical properties, nanoscale diameter and also their ability to be functionalized by chemical and biological components at the tip ends. This contribution develops the idea of using CNTs as an AFM tip in computational analysis of the biological cells. The proposed software was ABAQUS 6.13 CAE/CEL provided by Dassault Systems, which is a powerful finite element (FE) tool to perform the numerical analysis and visualize the interactions between proposed tip and membrane of the cell. Finite element analysis employed for each section and displacement of the nodes located in the contact area was monitored by using an output database (ODB). Mooney-Rivlin hyperelastic model of the cell allows the simulation to obtain a new method for estimating the stiffness and spring constant of the cell. Stress and strain curve indicates the yield stress point which defines as a vertical stress and plan stress. Spring constant of the cell and the local stiffness was measured as well as the applied force of CNT-AFM tip on the contact area of the cell. This reliable integration of CNT-AFM tip process provides a new class of high performance nanoprobes for single biological cell analysis.
NASA Astrophysics Data System (ADS)
Fuchs, A.; Androsov, A.; Harig, S.; Hiller, W.; Rakowsky, N.
2012-04-01
Given the danger of devastating tsunamis and the unpredictability of such events, tsunami modelling as part of warning systems remains a topical subject. The tsunami group of the Alfred Wegener Institute developed the simulation tool TsunAWI as a contribution to the Early Warning System in Indonesia. Although the precomputed scenarios for this purpose yield satisfactory results, the study of further improvements continues. While TsunAWI is governed by the shallow water equations, an extension of the model is based on a nonhydrostatic approach. At the arrival of a tsunami wave in coastal regions with rough bathymetry, the term containing the nonhydrostatic part of the pressure, which is neglected in the original hydrostatic model, gains in importance. By taking this term into account, a better approximation of the wave is expected. Differences between hydrostatic and nonhydrostatic model results are contrasted in the standard benchmark problem of a solitary wave runup on a plane beach. The observation data provided by Titov and Synolakis (1995) serves as reference. The nonhydrostatic approach implies a set of equations that are similar to the shallow water equations, so the variation of the code can be implemented on top. However, these additional routines introduce a number of difficulties. So far the computations of the model were purely explicit; in the nonhydrostatic version, the determination of an additional unknown and the solution of a large sparse system of linear equations are necessary. The latter constitutes the lion's share of computing time and memory requirements. Since the corresponding matrix is only symmetric in structure and not in values, an iterative Krylov subspace method is used, in particular the restarted Generalized Minimal Residual algorithm GMRES(m). With regard to optimization, we present a comparison of several combinations of sequential and parallel preconditioning techniques with respect to number of iterations and setup
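GMRES(m) restarts the Arnoldi process every m iterations to bound memory, and a preconditioner is what keeps the iteration count tolerable. A self-contained NumPy sketch of the restarted algorithm with simple Jacobi (diagonal) preconditioning (TsunAWI's actual solver and preconditioners are library implementations; this shows only the bare scheme on a toy nonsymmetric matrix):

```python
import numpy as np

def gmres_m(A, b, m=20, tol=1e-10, max_restarts=50, M_inv=None):
    """Restarted GMRES(m) with optional left preconditioning.
    M_inv: callable applying an approximate inverse of A."""
    if M_inv is None:
        M_inv = lambda v: v
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_restarts):
        r = M_inv(b - A @ x)
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        V = np.zeros((n, m + 1))   # orthonormal Krylov basis
        H = np.zeros((m + 1, m))   # upper Hessenberg matrix
        V[:, 0] = r / beta
        k = m
        for j in range(m):         # Arnoldi, modified Gram-Schmidt
            w = M_inv(A @ V[:, j])
            for i in range(j + 1):
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:  # happy breakdown
                k = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        # small least-squares problem: min || beta*e1 - H y ||
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)[0]
        x = x + V[:, :k] @ y
    return x

# Nonsymmetric, diagonally dominant test matrix
n = 80
A = 4.0 * np.eye(n) + np.diag(-1.5 * np.ones(n - 1), 1) \
                    + np.diag(-0.5 * np.ones(n - 1), -1)
b = np.ones(n)
jacobi = lambda v: v / np.diag(A)  # Jacobi preconditioner
x = gmres_m(A, b, m=20, M_inv=jacobi)
print(np.linalg.norm(A @ x - b))  # small residual
```

The trade-off the abstract measures is visible here in miniature: a stronger preconditioner lowers the iteration count but raises the setup cost, and the restart length m caps the memory for V and H.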
NASA Astrophysics Data System (ADS)
Gartling, D. K.; Hickox, C. E.
1982-10-01
The theoretical background for the finite element computer program MARIAH is presented. The MARIAH code is designed for the analysis of incompressible fluid flow and heat transfer in saturated porous media. A description of the fluid/thermal boundary value problem treated by the program is presented and the finite element method and associated numerical methods used in MARIAH are discussed. Instructions for use of the program are documented in the Sandia National Laboratories report, SAND79-1623.
Development of a computationally efficient full human body finite element model.
Schwartz, Doron; Guleyupoglu, Berkan; Koya, Bharath; Stitzel, Joel D; Gayzik, F Scott
2015-01-01
A simplified and computationally efficient human body finite element model is presented. The model complements the Global Human Body Models Consortium (GHBMC) detailed 50th percentile occupant (M50-O) by providing kinematic and kinetic data with a significantly reduced run time using the same body habitus. The simplified occupant model (M50-OS) was developed using the same source geometry as the M50-O. Though some meshed components were preserved, the total element count was reduced by remeshing, homogenizing, or in some cases omitting structures that are explicitly contained in the M50-O. Bones are included as rigid bodies, with the exception of the ribs, which are deformable but were remeshed to a coarser element density than the M50-O. Material models for all deformable components were drawn from the biomechanics literature. Kinematic joints were implemented at major articulations (shoulder, elbow, wrist, hip, knee, and ankle) with moment vs. angle relationships from the literature included for the knee and ankle. The brain of the detailed model was inserted within the skull of the simplified model, and kinematics and strain patterns are compared. The M50-OS model has 11 contacts and 354,000 elements; in contrast, the M50-O model has 447 contacts and 2.2 million elements. The model can be repositioned without requiring simulation. Thirteen validation and robustness simulations were completed. This included denuded rib compression at 7 discrete sites, 5 rigid body impacts, and one sled simulation. Denuded tests showed a good match to the experimental data of force vs. deflection slopes. The frontal rigid chest impact simulation produced a peak force and deflection within the corridor of 4.63 kN and 31.2%, respectively. Similar results vs. experimental data (peak forces of 5.19 and 8.71 kN) were found for an abdominal bar impact and lateral sled test, respectively. A lateral plate impact at 12 m/s exhibited a peak of roughly 20 kN (due to stiff foam used around
THERM3D -- A boundary element computer program for transient heat conduction problems
Ingber, M.S.
1994-02-01
The computer code THERM3D implements the direct boundary element method (BEM) to solve transient heat conduction problems in arbitrary three-dimensional domains. This particular implementation of the BEM avoids performing time-consuming domain integrations by approximating a "generalized forcing function" in the interior of the domain with the use of radial basis functions. An approximate particular solution is then constructed, and the original problem is transformed into a sequence of Laplace problems. The code is capable of handling a large variety of boundary conditions including isothermal, specified flux, convection, radiation, and combined convection and radiation conditions. The computer code is benchmarked by comparisons with analytic and finite element results.
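The abstract's key device, avoiding domain integrals by interpolating the interior forcing function with radial basis functions, reduces to a small linear solve for the interpolation weights. The abstract does not name the specific basis; the sketch below assumes f(r) = 1 + r, a common choice in the dual-reciprocity BEM literature, and is an illustration of the interpolation step only, not THERM3D's code.

```python
import numpy as np

def rbf_fit(points, values, rbf=lambda r: 1.0 + r):
    """Fit weights w_j so that sum_j w_j * rbf(|x_i - x_j|) reproduces the
    forcing-function values at every interior collocation point x_i."""
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.linalg.solve(rbf(r), values)

def rbf_eval(points, weights, x, rbf=lambda r: 1.0 + r):
    """Evaluate the radial-basis approximation at a point x."""
    return rbf(np.linalg.norm(x - points, axis=-1)) @ weights
```

With the forcing function approximated this way, each basis function admits a closed-form particular solution, which is what lets the remaining problem be handled as a sequence of Laplace problems on the boundary alone.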
NASA Astrophysics Data System (ADS)
Wang, Dongdong; Li, Xiwei; Pan, Feixu
2017-01-01
A simple and unified finite element formulation is presented for superconvergent eigenvalue computation of wave equations ranging from 1D to 3D. In this framework, a general method based upon the so called α mass matrix formulation is first proposed to effectively construct 1D higher order mass matrices for arbitrary order elements. The finite elements discussed herein refer to the Lagrangian type of Lobatto elements that take the Lobatto points as nodes. Subsequently a set of quadrature rules that exactly integrate the 1D higher order mass matrices are rationally derived, which are termed as the superconvergent quadrature rules. More importantly, in 2D and 3D cases, it is found that the employment of these quadrature rules via tensor product simultaneously for the mass and stiffness matrix integrations of Lobatto elements produces a unified superconvergent formulation for the eigenvalue or frequency computation without wave propagation direction dependence, which usually is a critical issue for the multidimensional higher order mass matrix formulation. Consequently the proposed approach is capable of computing arbitrary frequencies in a superconvergent fashion. Meanwhile, numerical implementation of the proposed method for multidimensional problems is trivial. The effectiveness of the proposed methodology is systematically demonstrated by a series of numerical examples. Numerical results revealed that a superconvergence with 2(p+1)th order of frequency accuracy is achieved by the present unified formulation for the pth order Lobatto element.
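For the lowest-order case (p = 1), the higher-order mass matrix reduces to the familiar average of the consistent and row-sum-lumped matrices, and the claimed 2(p+1) = 4th-order frequency accuracy can be checked numerically. A minimal 1D sketch with standard linear elements (not the authors' code, and not the Lobatto elements of the general case):

```python
import numpy as np

def fundamental_frequency(n, mass="blended"):
    """First eigenfrequency of u'' + w^2 u = 0 on (0,1), u(0)=u(1)=0,
    discretized with n linear elements; the exact value is pi."""
    h = 1.0 / n
    K = np.zeros((n + 1, n + 1))
    M = np.zeros((n + 1, n + 1))
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    mc = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])   # consistent
    ml = h / 2.0 * np.eye(2)                            # row-sum lumped
    me = {"consistent": mc, "lumped": ml, "blended": 0.5 * (mc + ml)}[mass]
    for e in range(n):
        K[e:e + 2, e:e + 2] += ke
        M[e:e + 2, e:e + 2] += me
    # generalized eigenproblem on the interior (Dirichlet) nodes
    w2 = np.linalg.eigvals(np.linalg.solve(M[1:-1, 1:-1], K[1:-1, 1:-1]))
    return float(np.sqrt(w2.real.min()))
```

Doubling n from 8 to 16 should shrink the first-frequency error by roughly a factor of 16 for the blended matrix versus roughly 4 for the consistent one, matching the 2(p+1)-order claim for p = 1.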
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1987-01-01
The present paper describes the development of a new hybrid computational approach applicable to nonlinear/linear thermal-structural analysis. The proposed transfinite element approach is a hybrid scheme in that it combines the modeling versatility of contemporary finite elements with transform methods and classical Bubnov-Galerkin schemes. The applicability of the proposed formulations to nonlinear analysis is also developed. Several test cases are presented, including nonlinear/linear unified thermal-stress and thermal-stress wave propagation problems. Comparative results validate the fundamental capabilities of the proposed hybrid transfinite element methodology.
The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations
NASA Technical Reports Server (NTRS)
Marcus, Martin H.; Broduer, Steve (Technical Monitor)
2001-01-01
With the advent of computers with many processors, it becomes unclear how best to exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many depends on the computer architecture. Applying one processor to each matrix is feasible given enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account: a computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is found to come from several small workstations networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.
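The "one processor per matrix" strategy described above can be sketched with Python's multiprocessing module: each worker receives a whole matrix and inverts it independently, so no interprocessor communication is needed during the solve. This is an illustration of the scheduling idea, not the paper's software; the matrix sizes and worker count are arbitrary.

```python
import numpy as np
from multiprocessing import Pool

def invert_residual(seed, n=100):
    """Build one well-conditioned matrix and invert it within a single
    process; return the max deviation of A @ inv(A) from the identity."""
    rng = np.random.default_rng(seed)
    a = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant, invertible
    return float(np.max(np.abs(a @ np.linalg.inv(a) - np.eye(n))))

if __name__ == "__main__":
    # one whole matrix per worker; two workers, matching a two-CPU node
    with Pool(processes=2) as pool:
        residuals = pool.map(invert_residual, range(8))
    print(max(residuals))
```

Because each task is embarrassingly parallel, throughput scales with the number of nodes even over slow networks, which is the economic argument the abstract makes for Beowulf-style clusters.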
Khashan, S A; Alazzam, A; Furlani, E P
2014-06-16
A microfluidic design is proposed for realizing greatly enhanced separation of magnetically-labeled bioparticles using integrated soft-magnetic elements. The elements are fixed and intersect the carrier fluid (flow-invasive) with their length transverse to the flow. They are magnetized using a bias field to produce a particle capture force. Multiple stair-step elements are used to provide efficient capture throughout the entire flow channel. This is in contrast to conventional systems wherein the elements are integrated into the walls of the channel, which restricts efficient capture to limited regions of the channel due to the short range nature of the magnetic force. This severely limits the channel size and hence throughput. Flow-invasive elements overcome this limitation and enable microfluidic bioseparation systems with superior scalability. This enhanced functionality is quantified for the first time using a computational model that accounts for the dominant mechanisms of particle transport including fully-coupled particle-fluid momentum transfer.
1983-10-01
…of each run for each GPP for a 4-hour turnaround, while Table 3 provides costs for delayed processing. Figure 12 shows the effect of changing the… Key words: computer program; ANSYS; finite element analysis; SAP. …the Corps, to enable them to make an intelligent selection of a GPP. The GPPs studied were SAP, E SAP, GTSTRUDL, MCAUTO STRUDL, ANSYS, and SUPERB.
A different aspect to use of some soft computing methods for landslide susceptibility mapping
NASA Astrophysics Data System (ADS)
Akgün, Aykut
2014-05-01
In the landslide literature, several applications of soft computing methods such as artificial neural networks (ANN), fuzzy inference systems, and decision trees for landslide susceptibility mapping can be found. In many of these studies, the effectiveness and validation of the models used are also discussed. To carry out the analyses, more than one software package, for example a statistical package together with a geographical information system (GIS), is generally used. In this study, four different soft computing techniques were applied to obtain landslide susceptibility maps using a single GIS package alone. For this purpose, Multi Layer Perceptron (MLP) back propagation neural network, Fuzzy Adaptive Resonance Theory (ARTMAP) neural network, Self-organizing Map (SOM) and Classification Tree Analysis (CTA) approaches were applied to the study area. The study area was selected from a part of Trabzon (North Turkey) city, which is one of the most landslide-prone areas in Turkey. Initially, five landslide conditioning parameters, namely lithology, slope gradient, slope aspect, stream power index (SPI), and topographical wetness index (TWI), were produced for the study area in GIS. Then, these parameters were analysed with the MLP, Fuzzy ARTMAP, SOM and CTA soft computing classifiers of the IDRISI Taiga GIS and remote sensing software. To accomplish the analyses, two main input groups are needed: the conditioning parameters and the training areas. For the training areas, the landslide inventory map, obtained from both field studies and topographical analyses, was first compared with the lithological unit classes. With the help of these comparisons, frequency ratio (FR) values of landslide occurrence in the study area were determined. Using the FR values, five landslide susceptibility classes were differentiated from the lowest to the highest FR values. After this differentiation, the training areas representing the landslide susceptibility classes were determined by using FR
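The frequency ratio computation described above has a simple form: the share of landslide cells falling in a class divided by that class's share of the study area, with FR > 1 marking classes where landslides are over-represented. A small sketch with hypothetical raster inputs (not the study's data):

```python
import numpy as np

def frequency_ratio(class_map, landslide_mask):
    """FR per class: (landslide cells in class / all landslide cells)
    divided by (cells in class / all cells)."""
    fr = {}
    total_cells = class_map.size
    total_slides = landslide_mask.sum()
    for c in np.unique(class_map):
        in_class = class_map == c
        pct_slide = landslide_mask[in_class].sum() / total_slides
        pct_area = in_class.sum() / total_cells
        fr[int(c)] = float(pct_slide / pct_area)
    return fr
```

Binning the resulting FR values from lowest to highest then yields the five susceptibility classes used to pick training areas.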
Hypermatrix scheme for finite element systems on CDC STAR-100 computer
NASA Technical Reports Server (NTRS)
Noor, A. K.; Voigt, S. J.
1975-01-01
A study is made of the adaptation of the hypermatrix (block matrix) scheme for solving large systems of finite element equations to the CDC STAR-100 computer. Discussion is focused on the organization of the hypermatrix computation using Cholesky decomposition and on the mode of storage of the different submatrices that takes advantage of the STAR pipeline (streaming) capability. Consideration is also given to the associated data handling problems and to means of balancing the I/O and CPU times in the solution process. Numerical examples are presented showing the anticipated gain in CPU speed over the CDC 6600 to be obtained by using the proposed algorithms on the STAR computer.
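The hypermatrix idea, performing the Cholesky decomposition block by block so that each submatrix can be streamed through the pipeline or staged to secondary storage as a unit, can be sketched in a few lines. This is a serial illustration of the block algorithm, not the STAR-100 code:

```python
import numpy as np

def block_cholesky(A, b):
    """Lower-triangular Cholesky factor of an SPD matrix A, computed with
    b-by-b submatrices as the unit of work (hypermatrix scheme), so each
    block can be fetched, updated, and stored independently."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(0, n, b):
        J = slice(j, j + b)
        # diagonal block: subtract contributions of previously factored blocks
        S = A[J, J] - L[J, :j] @ L[J, :j].T
        L[J, J] = np.linalg.cholesky(S)
        for i in range(j + b, n, b):
            I = slice(i, i + b)
            S = A[I, J] - L[I, :j] @ L[J, :j].T
            # off-diagonal block solves L[I,J] @ L[J,J]^T = S
            L[I, J] = np.linalg.solve(L[J, J], S.T).T
    return L
```

Every inner update is a dense block-matrix product, which is exactly the long-vector operation a pipelined machine like the STAR-100 streams efficiently.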
NASA Technical Reports Server (NTRS)
Perucchio, R.; Ingraffea, A. R.
1984-01-01
The establishment of the boundary element method (BEM) as a valid tool for solving problems in structural mechanics and in other fields of applied physics is discussed. The development of an integrated interactive computer graphic system for the application of the BEM to three dimensional problems in elastostatics is described. The integration of interactive computer graphic techniques and the BEM takes place at the preprocessing and postprocessing stages of the analysis process, when, respectively, the data base is generated and the results are interpreted. The interactive computer graphic modeling techniques used for generating and discretizing the boundary surfaces of a solid domain are outlined.
Kozień, Marek S; Lorkowski, Jacek; Szczurek, Sławomir; Hładki, Waldemar; Trybus, Marek
2008-01-01
The aim of this study was to construct a computer simulation of an isolated lesion of the tibiofibular syndesmosis over a typical clinical range of values. The analysis was made using the finite element method with a simplified plane model of the bone, assuming the material of the bone and ankle joint to be isotropic and homogeneous. The distraction processes were modelled by external generalized forces. The computer program ANSYS was used. The result was a computed image of the changes in anatomy in relation to the applied forces.
NASA Technical Reports Server (NTRS)
Perucchio, R.; Ingraffea, A. R.
1984-01-01
The establishment of the boundary element method (BEM) as a valid tool for solving problems in structural mechanics and in other fields of applied physics is discussed. The development of an integrated interactive computer graphic system for the application of the BEM to three dimensional problems in elastostatics is described. The integration of interactive computer graphic techniques and the BEM takes place at the preprocessing and postprocessing stages of the analysis process, when, respectively, the data base is generated and the results are interpreted. The interactive computer graphic modeling techniques used for generating and discretizing the boundary surfaces of a solid domain are outlined.
Automatic data generation scheme for finite-element method /FEDGE/ - Computer program
NASA Technical Reports Server (NTRS)
Akyuz, F.
1970-01-01
Algorithm provides for automatic input data preparation for the analysis of continuous domains in the fields of structural analysis, heat transfer, and fluid mechanics. The computer program utilizes the natural coordinate systems concept and the finite element method for data generation.
COYOTE: a finite-element computer program for nonlinear heat-conduction problems
Gartling, D.K.
1982-10-01
COYOTE is a finite element computer program designed for the solution of two-dimensional, nonlinear heat conduction problems. The theoretical and mathematical basis used to develop the code is described. Program capabilities and complete user instructions are presented. Several example problems are described in detail to demonstrate the use of the program.
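The basic structure of a nonlinear conduction solve, reassembling the conductivity matrix around the current temperature iterate until it stops changing, can be illustrated with a 1D successive-substitution sketch. This is a generic illustration of the solution structure, not COYOTE's 2D algorithm:

```python
import numpy as np

def solve_conduction(n, k, t_left, t_right, tol=1e-10, max_iter=200):
    """Steady 1D conduction d/dx(k(T) dT/dx) = 0 on (0,1) with n linear
    elements; the stiffness matrix is rebuilt each pass with k evaluated
    at the element-mean temperature (Picard / successive substitution)."""
    x = np.linspace(0.0, 1.0, n + 1)
    t = np.linspace(t_left, t_right, n + 1)          # initial guess
    for _ in range(max_iter):
        K = np.zeros((n + 1, n + 1))
        for e in range(n):
            ke = k(0.5 * (t[e] + t[e + 1])) / (x[e + 1] - x[e])
            K[e:e + 2, e:e + 2] += ke * np.array([[1.0, -1.0], [-1.0, 1.0]])
        # Dirichlet ends; solve for interior nodal temperatures
        rhs = -K[1:-1, 0] * t_left - K[1:-1, -1] * t_right
        t_new = t.copy()
        t_new[1:-1] = np.linalg.solve(K[1:-1, 1:-1], rhs)
        if np.max(np.abs(t_new - t)) < tol:
            break
        t = t_new
    return x, t_new
```

For k(T) = 1 + T with T(0) = 0 and T(1) = 1, the exact solution is T(x) = sqrt(1 + 3x) - 1, which the iteration reproduces closely on a modest mesh.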
NASA Astrophysics Data System (ADS)
El-Azab, Adel S.; Mary, Y. Sheena; Mary, Y. Shyma; Panicker, C. Yohannan; Abdel-Aziz, Alaa A.-M.; El-Sherbeny, Magda A.; Armaković, Stevan; Armaković, Sanja J.; Van Alsenoy, Christian
2017-04-01
In this work, the spectroscopic characterization of 2-(2-(4-oxo-3-phenethyl-3,4-dihydroquinazolin-2-ylthio)ethyl)isoindoline-1,3-dione has been obtained both experimentally and theoretically. Complete assignments of the fundamental vibrations were performed on the basis of the potential energy distribution of the vibrational modes, and good agreement between the experimental and scaled wavenumbers has been achieved. Frontier molecular orbitals have been used as indicators of stability and reactivity. Intramolecular interactions have been investigated by NBO analysis. The dipole moment, linear polarizability, and first- and second-order hyperpolarizability values were also computed. In order to determine the molecular sites prone to electrophilic attack, DFT calculations of the average local ionization energy (ALIE) and Fukui functions have been performed as well. Intra-molecular non-covalent interactions have been determined and analyzed through the analysis of charge density. The stability of the title molecule has also been investigated from the aspect of autoxidation, by calculations of bond dissociation energies (BDE), and of hydrolysis, by calculations of radial distribution functions after molecular dynamics (MD) simulations. In order to assess the biological potential of the title compound, a molecular docking study towards the breast cancer type 2 complex has been performed.
CAVASS: a computer assisted visualization and analysis software system - visualization aspects
NASA Astrophysics Data System (ADS)
Grevera, George; Udupa, Jayaram; Odhner, Dewey; Zhuge, Ying; Souza, Andre; Iwanaga, Tad; Mishra, Shipra
2007-03-01
The Medical Image Processing Group (MIPG) at the University of Pennsylvania has been developing and distributing medical image analysis and visualization software systems for a long period of time. Our most recent system, 3DVIEWNIX, was first released in 1993. Since that time, a number of significant advancements have taken place with regard to computer platforms and operating systems, networking capability, the rise of parallel processing standards, and the development of open source toolkits. The development of CAVASS by our group is the next generation of 3DVIEWNIX. CAVASS will be freely available, open source, and is integrated with toolkits such as ITK and VTK. CAVASS runs on Windows, Unix, and Linux but shares a single code base. Rather than requiring expensive multiprocessor systems, it seamlessly provides for parallel processing via inexpensive COWs (Clusters of Workstations) for the more time-consuming algorithms. Most importantly, CAVASS is directed at the visualization, processing, and analysis of medical imagery, so support for 3D and higher-dimensional medical image data and the efficient implementation of algorithms are given paramount importance. This paper focuses on aspects of visualization. All of the most popular modes of visualization, including various 2D slice modes, reslicing, MIP, surface rendering, volume rendering, and animation, are incorporated into CAVASS.
ERIC Educational Resources Information Center
Cuppini, Cristiano; Magosso, Elisa; Ursino, Mauro
2013-01-01
We present an original model designed to study how a second language (L2) is acquired in bilinguals at different proficiencies starting from an existing L1. The model assumes that the conceptual and lexical aspects of languages are stored separately: conceptual aspects in distinct topologically organized Feature Areas, and lexical aspects in a…
Computational fluid flow in two dimensions using simple T4/C3 element
NASA Astrophysics Data System (ADS)
Jan, Y. J.; Huang, S. J.; Lee, T. Y.
2000-10-01
The application of the four-nodes-for-velocity, three-nodes-for-pressure (T4/C3) element discretization technique for simulating two-dimensional steady and transitional flows is presented. The newly developed code has been validated by application to three benchmark test cases: driven cavity flow, flow over a backward-facing step, and confined surface rib flow. In addition, a transitional flow with vortex shedding has been studied. The numerical results have shown excellent agreement with experimental results, as well as with those of other simulations. It should be pointed out that the advantages of the T4/C3 finite element over other higher-order elements lie in its computational simplicity, efficiency, and lower computer memory requirements.
A Computational and Experimental Study of Nonlinear Aspects of Induced Drag
NASA Technical Reports Server (NTRS)
Smith, Stephen C.
1996-01-01
Despite the 80-year history of classical wing theory, considerable research has recently been directed toward planform and wake effects on induced drag. Nonlinear interactions between the trailing wake and the wing offer the possibility of reducing drag. The nonlinear effect of compressibility on induced drag characteristics may also influence wing design. This thesis deals with the prediction of these nonlinear aspects of induced drag and ways to exploit them. A potential benefit of only a few percent of the drag represents a large fuel savings for the world's commercial transport fleet. Computational methods must be applied carefully to obtain accurate induced drag predictions. Trefftz-plane drag integration is far more reliable than surface pressure integration, but is very sensitive to the accuracy of the force-free wake model. The practical use of Trefftz-plane drag integration was extended to transonic flow with the Tranair full-potential code. The induced drag characteristics of a typical transport wing were studied with Tranair, a full-potential method, and A502, a high-order linear panel method, to investigate changes in lift distribution and span efficiency due to compressibility. Modeling the force-free wake is a nonlinear problem, even when the flow governing equation is linear. A novel method was developed for computing the force-free wake shape. This hybrid wake-relaxation scheme couples the well-behaved nature of the discrete vortex wake with viscous-core modeling and the high-accuracy velocity prediction of the high-order panel method. The hybrid scheme produced converged wake shapes that allowed accurate Trefftz-plane integration. An unusual split-tip wing concept was studied for exploiting nonlinear wake interaction to reduce induced drag. This design exhibits significant nonlinear interactions between the wing and wake that produced a 12% reduction in induced drag compared to an equivalent elliptical wing at a lift coefficient of 0.7. The
Finite element simulation of the mechanical impact of computer work on the carpal tunnel syndrome.
Mouzakis, Dionysios E; Rachiotis, George; Zaoutsos, Stefanos; Eleftheriou, Andreas; Malizos, Konstantinos N
2014-09-22
Carpal tunnel syndrome (CTS) is a clinical disorder resulting from the compression of the median nerve. The available evidence regarding the association between computer use and CTS is controversial. There is some evidence that computer mouse or keyboard work, or both are associated with the development of CTS. Despite the availability of pressure measurements in the carpal tunnel during computer work (exposure to keyboard or mouse) there are no available data to support a direct effect of the increased intracarpal canal pressure on the median nerve. This study presents an attempt to simulate the direct effects of computer work on the whole carpal area section using finite element analysis. A finite element mesh was produced from computerized tomography scans of the carpal area, involving all tissues present in the carpal tunnel. Two loading scenarios were applied on these models based on biomechanical data measured during computer work. It was found that mouse work can produce large deformation fields on the median nerve region. Also, the high stressing effect of the carpal ligament was verified. Keyboard work produced considerable and heterogeneous elongations along the longitudinal axis of the median nerve. Our study provides evidence that increased intracarpal canal pressures caused by awkward wrist postures imposed during computer work were associated directly with deformation of the median nerve. Despite the limitations of the present study the findings could be considered as a contribution to the understanding of the development of CTS due to exposure to computer work.
Greene, Runyu L; Azari, David P; Hu, Yu Hen; Radwin, Robert G
2017-03-09
Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and techniques for identifying them is underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used for evaluating repetitive motion tasks for hand activity level (HAL) utilizing conventional 2D videos. The approach was made practical by relaxing the need for high precision, and by adopting a semi-automatic approach for measuring spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors, using this computer vision approach, is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed and duty cycle components of tasks that are part of the threshold limit value for hand activity for the purpose of identifying patterns of exposure associated with the specific job factors, as well as for suggesting task improvements. The localized variables are plotted as a heat map superimposed over the video, and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors most contribute to HAL, and readily identify those work elements in the task that contribute more to increased risk for an injury. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements.
NASA Astrophysics Data System (ADS)
Aristovich, K. Y.; Khan, S. H.
2010-07-01
Realistic computer modelling of biological objects requires building very accurate and realistic computer models based on geometric and material data, and on the type and accuracy of the numerical analyses. This paper presents some of the automatic tools and algorithms that were used to build an accurate and realistic 3D finite element (FE) model of the whole brain. These models were used to solve the forward problem in magnetic field tomography (MFT) based on magnetoencephalography (MEG). The forward problem involves modelling and computation of the magnetic fields produced by the human brain during cognitive processing. The geometric parameters of the model were obtained from accurate Magnetic Resonance Imaging (MRI) data and the material properties from Diffusion Tensor MRI (DTMRI). The 3D FE models of the brain built using this approach have been shown to be very accurate in terms of both geometric and material properties. The model is stored on the computer in Computer-Aided Parametrical Design (CAD) format. This allows the model to be used with a wide range of methods of analysis, such as the finite element method (FEM), the boundary element method (BEM), Monte Carlo simulations, etc. The generic model-building approach presented here could be used for accurate and realistic modelling of the human brain and many other biological objects.
Computational Modeling For The Transitional Flow Over A Multi-Element Airfoil
NASA Technical Reports Server (NTRS)
Liou, William W.; Liu, Feng-Jun; Rumsey, Chris L. (Technical Monitor)
2000-01-01
The transitional flow over a multi-element airfoil in a landing configuration is computed using a two-equation transition model. The transition model is predictive in the sense that the transition onset is a result of the calculation, and no prior knowledge of the transition location is required. The computations were performed using the INS2D Navier-Stokes code. Overset grids are used for the three-element airfoil. The airfoil operating conditions are varied over a range of angles of attack and for two different Reynolds numbers of 5 million and 9 million. The computed results are compared with experimental data for the surface pressure, skin friction, transition onset location, and velocity magnitude. In general, the comparison shows good agreement with the experimental data.
NASA Technical Reports Server (NTRS)
Gupta, Kajal K.
1991-01-01
The details of an integrated general-purpose finite element structural analysis computer program, which is also capable of solving complex multidisciplinary problems, are presented. Thus, the SOLIDS module of the program possesses an extensive finite element library suitable for modeling most practical problems and is capable of solving statics, vibration, buckling, and dynamic response problems of complex structures, including spinning ones. The aerodynamic module, AERO, enables computation of unsteady aerodynamic forces for both subsonic and supersonic flow for subsequent flutter and divergence analysis of the structure. The associated aeroservoelastic analysis module, ASE, effects aero-structural-control stability analysis yielding frequency responses as well as damping characteristics of the structure. The program is written in standard FORTRAN to run on a wide variety of computers. Extensive graphics, preprocessing, and postprocessing routines are also available pertaining to a number of terminals.
Experimental and Computational Investigation of Lift-Enhancing Tabs on a Multi-Element Airfoil
NASA Technical Reports Server (NTRS)
Ashby, Dale L.
1996-01-01
An experimental and computational investigation of the effect of lift-enhancing tabs on a two-element airfoil has been conducted. The objective of the study was to develop an understanding of the flow physics associated with lift-enhancing tabs on a multi-element airfoil. An NACA 63(2)-215 ModB airfoil with a 30% chord fowler flap was tested in the NASA Ames 7- by 10-Foot Wind Tunnel. Lift-enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. A combination of tabs located at the main element and flap trailing edges increased the airfoil lift coefficient by 11% relative to the highest lift coefficient achieved by any baseline configuration at an angle of attack of 0 deg, and C(sub l,max) was increased by 3%. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predicted all of the trends observed in the experimental data quite well. In addition, a simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift-enhancing tabs work. The tabs were modeled by a point vortex at the airfoil or flap trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift-enhancing tabs on a multi-element airfoil. Results of the modeling effort indicate that the dominant effects of the tabs on the pressure distribution of each element of the airfoil can be captured with a potential flow model for cases with no flow separation.
Nakamura, Keiko; Tajima, Kiyoshi; Chen, Ker-Kong; Nagamatsu, Yuki; Kakigawa, Hiroshi; Masumi, Shin-ich
2013-12-01
This study focused on the application of novel finite-element analysis software for constructing a finite-element model from the computed tomography data of a human dentulous mandible. The finite-element model is necessary for evaluating the mechanical response of the alveolar part of the mandible, resulting from occlusal force applied to the teeth during biting. Commercially available patient-specific general computed tomography-based finite-element analysis software was solely applied to the finite-element analysis for the extraction of computed tomography data. The mandibular bone with teeth was extracted from the original images. Both the enamel and the dentin were extracted after image processing, and the periodontal ligament was created from the segmented dentin. The constructed finite-element model was reasonably accurate using a total of 234,644 nodes and 1,268,784 tetrahedral and 40,665 shell elements. The elastic moduli of the heterogeneous mandibular bone were determined from the bone density data of the computed tomography images. The results suggested that the software applied in this study is both useful and powerful for creating a more accurate three-dimensional finite-element model of a dentulous mandible from the computed tomography data without the need for any other software.
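The density-to-modulus step described above is commonly implemented as a per-element lookup: Hounsfield units are mapped to an apparent density, and the density to an elastic modulus through a power law. The calibration constants in the sketch below are illustrative placeholders, not the values used in this study; a real model calibrates the density map against phantom data and selects a published density-modulus relation for the bone site.

```python
import numpy as np

def hu_to_modulus(hu):
    """Per-element material mapping: Hounsfield units -> apparent density
    (g/cm^3) -> elastic modulus (MPa). Both the linear density calibration
    and the power-law constants are assumed, illustrative values."""
    rho = np.clip(0.0008 * np.asarray(hu, dtype=float) + 1.0, 0.05, None)
    return 6850.0 * rho ** 1.49
```

Applying such a function element by element is what produces the heterogeneous stiffness field of the mandible model from the CT image data.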
STARS: A general-purpose finite element computer program for analysis of engineering structures
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1984-01-01
STARS (Structural Analysis Routines) is primarily an interactive, graphics-oriented, finite-element computer program for analyzing the static, stability, free vibration, and dynamic responses of damped and undamped structures, including rotating systems. The element library consists of one-dimensional (1-D) line elements, two-dimensional (2-D) triangular and quadrilateral shell elements, and three-dimensional (3-D) tetrahedral and hexahedral solid elements. These elements enable the solution of structural problems that include truss, beam, space frame, plane, plate, shell, and solid structures, or any combination thereof. Zero, finite, and interdependent deflection boundary conditions can be implemented by the program. The associated dynamic response analysis capability provides for initial deformation and velocity inputs, whereas the transient excitation may be either forces or accelerations. An effective in-core or out-of-core solution strategy is automatically employed by the program, depending on the size of the problem. Data input may be at random within a data set, and the program offers certain automatic data-generation features. Input data are formatted as an optimal combination of free and fixed formats. Interactive graphics capabilities enable convenient display of nodal deformations, mode shapes, and element stresses.
Applications of Parallel Computation in Micro-Mechanics and Finite Element Method
NASA Technical Reports Server (NTRS)
Tan, Hui-Qian
1996-01-01
This project discusses the application of parallel computation to material analyses. Briefly, a material is analyzed by element-wise computation. We call an element a cell here. A cell is divided into a number of subelements called subcells, and all subcells in a cell have an identical structure; the detailed structure is given later in this paper. Because the problem is "well-structured", a SIMD machine is a natural choice. In this paper we look into the potential of SIMD machines for finite element computation by developing appropriate algorithms on MasPar, a SIMD parallel machine. In section 2, the architecture of MasPar is discussed, along with a brief review of the parallel programming language MPL. In section 3, some general parallel algorithms that may be useful to the project are proposed, and, in connection with these algorithms, some features of MPL are discussed in more detail. In section 4, the computational structure of the cell/subcell model is given and the idea behind the design of the parallel algorithm for the model is demonstrated. Finally, section 5 gives a summary.
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1988-01-01
This paper represents an attempt to apply extensions of a hybrid transfinite element computational approach to accurately predicting thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating is demonstrated for the well-known Danilovskaya problems. A unique feature of the proposed formulations for the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special-purpose transfinite elements in conjunction with classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and the superior capability to capture the thermal stress waves induced by boundary heating.
Gartling, D.K.; Hogan, R.E.
1994-10-01
The theoretical and numerical background for the finite element computer program, COYOTE II, is presented in detail. COYOTE II is designed for the multi-dimensional analysis of nonlinear heat conduction problems and other types of diffusion problems. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in COYOTE II are also outlined. Instructions for use of the code are documented in SAND94-1179; examples of problems analyzed with the code are provided in SAND94-1180.
MAPVAR - A Computer Program to Transfer Solution Data Between Finite Element Meshes
Wellman, G.W.
1999-03-01
MAPVAR, as was the case with its precursor programs, MERLIN and MERLIN II, is designed to transfer solution results from one finite element mesh to another. MAPVAR draws heavily from the structure and coding of MERLIN II, but it employs a new finite element data base, EXODUS II, and offers enhanced speed and new capabilities not available in MERLIN II. In keeping with the MERLIN II documentation, the computational algorithms used in MAPVAR are described. User instructions are presented. Example problems are included to demonstrate the operation of the code and the effects of various input options.
Level set discrete element method for three-dimensional computations with triaxial case study
NASA Astrophysics Data System (ADS)
Kawamoto, Reid; Andò, Edward; Viggiani, Gioacchino; Andrade, José E.
2016-06-01
In this paper, we outline the level set discrete element method (LS-DEM) which is a discrete element method variant able to simulate systems of particles with arbitrary shape using level set functions as a geometric basis. This unique formulation allows seamless interfacing with level set-based characterization methods as well as computational ease in contact calculations. We then apply LS-DEM to simulate two virtual triaxial specimens generated from XRCT images of experiments and demonstrate LS-DEM's ability to quantitatively capture and predict stress-strain and volume-strain behavior observed in the experiments.
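The contact idea behind LS-DEM can be sketched in a few lines: one particle is sampled by boundary nodes, the other is represented by a signed-distance (level set) function, and a node is in contact wherever the level set evaluates negative. The following is an illustrative toy version with a circular particle and hypothetical values, not the authors' implementation:

```python
# Minimal sketch of the LS-DEM contact idea (illustrative, not the authors'
# implementation): particle A is represented by boundary nodes, particle B by
# a signed-distance (level set) function phi_B; a node of A penetrates B
# wherever phi_B evaluates negative, with penetration depth -phi_B.

import math

def phi_circle(x, y, cx, cy, r):
    """Signed distance to a circular particle: negative inside."""
    return math.hypot(x - cx, y - cy) - r

def contact_checks(nodes, phi):
    """Return (node, penetration depth) for nodes inside the level set."""
    hits = []
    for (x, y) in nodes:
        d = phi(x, y)
        if d < 0.0:
            hits.append(((x, y), -d))  # penetration depth = -phi
    return hits

# Nodes of particle A tested against particle B centered at the origin, r = 1:
nodes_A = [(0.5, 0.0), (2.0, 0.0)]
hits = contact_checks(nodes_A, lambda x, y: phi_circle(x, y, 0.0, 0.0, 1.0))
```

For arbitrary particle shapes from XRCT images, the analytic `phi_circle` would be replaced by an interpolated level set grid; the contact test itself is unchanged, which is the "computational ease" the abstract refers to.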
VIEWIT: computation of seen areas, slope, and aspect for land-use planning
Michael R. Travis; Gary H. Elsner; Wayne D. Iverson; Christine G. Johnson
1975-01-01
This user's guide provides instructions for using VIEWIT--a computerized technique for delineating the terrain visible from a single point or from multiple observer points, and for doing slope and aspect analyses. Results are in tabular or in overlay map form. VIEWIT can do individual view-area, slope, or aspect analyses or combined analyses, and can produce...
NASA Technical Reports Server (NTRS)
Voigt, S.
1975-01-01
The use of software engineering aids in the design of a structural finite-element analysis computer program for the STAR-100 computer is described. Nested functional diagrams to aid in communication among design team members were used, and a standardized specification format to describe modules designed by various members was adopted. This is a report of current work in which use of the functional diagrams provided continuity and helped resolve some of the problems arising in this long-running part-time project.
NASA Technical Reports Server (NTRS)
Walston, W. H., Jr.
1986-01-01
The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.
Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods
NASA Astrophysics Data System (ADS)
Bause, M.; Knabner, P.
2004-06-01
We present adaptive mixed hybrid finite element discretizations of the Richards equation, a nonlinear parabolic partial differential equation modeling the flow of water into a variably saturated porous medium. The approach simultaneously constructs approximations of the flux and the pressure head in Raviart-Thomas spaces. The resulting nonlinear systems of equations are solved by a Newton method. For the linear problems of the Newton iteration a multigrid algorithm is used. We consider two different kinds of error indicators for space adaptive grid refinement: superconvergence and residual based indicators. They can be calculated easily by means of the available finite element approximations. This seems attractive for computations since no additional (sub-)problems have to be solved. Computational experiments conducted for realistic water table recharge problems illustrate the effectiveness and robustness of the approach.
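The nonlinear-solver structure described above, a Newton outer iteration around a linear inner solve, can be sketched generically. Here the multigrid inner solver is replaced by a tiny dense Gaussian elimination and the Richards residual by a toy stand-in, so this illustrates the iteration only, not the discretization:

```python
# Hedged sketch of a Newton iteration of the kind used for the nonlinear
# systems arising from a Richards-type discretization. The residual F is a
# toy stand-in and the linear solve is dense (the paper uses multigrid).

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def newton(F, x, tol=1e-10, max_iter=50, h=1e-7):
    """Solve F(x) = 0 via Newton's method with a finite-difference Jacobian."""
    n = len(x)
    for _ in range(max_iter):
        r = F(x)
        if max(abs(v) for v in r) < tol:
            break
        J = [[0.0] * n for _ in range(n)]       # J[i][j] = dF_i/dx_j
        for j in range(n):
            xp = list(x)
            xp[j] += h
            rp = F(xp)
            for i in range(n):
                J[i][j] = (rp[i] - r[i]) / h
        dx = solve(J, [-v for v in r])          # Newton update J dx = -F
        x = [xi + di for xi, di in zip(x, dx)]
    return x

# Toy nonlinear residual: x0^2 - 4 = 0, x1 - x0 = 0
root = newton(lambda x: [x[0] ** 2 - 4.0, x[1] - x[0]], [3.0, 0.0])
```

In the actual method the Jacobian comes from the mixed hybrid discretization rather than finite differences, and each linear step is handled by multigrid.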
Computing element evolution towards Exascale and its impact on legacy simulation codes
NASA Astrophysics Data System (ADS)
Colin de Verdière, Guillaume J. L.
2015-12-01
In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of the next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impact of these evolutions on legacy codes and programming methods. The problems of dissipated power and memory access are discussed, leading to a vision of what an exascale system should be. To survive, programming languages have had to respond to the hardware evolutions, either by evolving or through the creation of new languages. From these elements, we explain why vectorization, multithreading, data locality awareness and hybrid programming will be the keys to reaching the exascale, implying that it is time to start rewriting codes.
Debusschere, Nic; Segers, Patrick; Dubruel, Peter; Verhegghe, Benedict; De Beule, Matthieu
2016-02-01
Bioresorbable stents represent an emerging technological development within the field of cardiovascular angioplasty. Their temporary presence avoids long-term side effects of non-degradable stents such as in-stent restenosis, late stent thrombosis and fatigue-induced strut fracture. Several numerical modelling strategies have been proposed to evaluate the transitional mechanical characteristics of biodegradable stents using a continuum damage framework. However, these methods rely on an explicit finite-element integration scheme which, in combination with the quasi-static nature of many simulations involving stents and the small element size needed to model corrosion mechanisms, results in a high computational cost. To reduce simulation times and to expand the general applicability of these degradation models, this paper investigates an implicit finite-element solution method for modelling the degradation of biodegradable stents.
Bender, Janelle E; Kapadia, Anuj J; Sharma, Amy C; Tourassi, Georgia D; Harrawood, Brian P; Floyd, Carey E
2007-10-01
Neutron stimulated emission computed tomography (NSECT) is being developed to noninvasively determine concentrations of trace elements in biological tissue. Studies have shown prominent differences in the trace element concentration of normal and malignant breast tissue. NSECT has the potential to detect these differences and diagnose malignancy with high accuracy with dose comparable to that of a single mammogram. In this study, NSECT imaging was simulated for normal and malignant human breast tissue samples to determine the significance of individual elements in determining malignancy. The normal and malignant models were designed with different elemental compositions, and each was scanned spectroscopically using a simulated 2.5 MeV neutron beam. The number of incident neutrons was varied from 0.5 million to 10 million neutrons. The resulting gamma spectra were evaluated through receiver operating characteristic (ROC) analysis to determine which trace elements were prominent enough to be considered markers for breast cancer detection. Four elemental isotopes (133Cs, 81Br, 79Br, and 87Rb) at five energy levels were shown to be promising features for breast cancer detection with an area under the ROC curve (A(Z)) above 0.85. One of these elements--87Rb at 1338 keV--achieved perfect classification at 10 million incident neutrons and could be detected with as low as 3 million incident neutrons. Patient dose was calculated for each gamma spectrum obtained and was found to range from between 0.05 and 0.112 mSv depending on the number of neutrons. This simulation demonstrates that NSECT has the potential to noninvasively detect breast cancer through five prominent trace element energy levels, at dose levels comparable to other breast cancer screening techniques.
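The ROC analysis used above to rank trace-element features reduces to computing the area under the ROC curve, A(Z), for each candidate feature. A minimal sketch using the rank-based (Mann-Whitney) estimator, with toy intensities standing in for the simulated gamma spectra:

```python
# Illustrative computation of the area under the ROC curve (A_Z) used to
# rank trace-element features; this is the standard rank-based
# (Mann-Whitney) estimator, not the authors' analysis code.

def auc(scores_neg, scores_pos):
    """A_Z = P(score_pos > score_neg), counting ties as 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy gamma-line intensities for normal vs. malignant spectra:
a_z = auc([1.0, 2.0, 3.0], [2.5, 3.5, 4.0])
```

An A(Z) of 1.0 corresponds to the "perfect classification" reported for 87Rb at 1338 keV: every malignant score exceeds every normal score.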
NASA Astrophysics Data System (ADS)
Shlimak, I.; Safarov, V. I.; Vagner, I. D.
2001-07-01
The idea of quantum computation is the most promising recent development in the high-tech domain, while the experimental realization of a quantum computer poses a formidable challenge. Among the proposed models, especially attractive are semiconductor-based nuclear spin quantum computers (S-NSQCs), where nuclear spins are used as quantum bistable elements, `qubits', coupled to the electron spin and orbital dynamics. We propose here a scheme for implementing basic elements of S-NSQCs that is realizable with modern nanotechnology. These elements are expected to be based on a nuclear-spin-controlled, isotopically engineered Si/SiGe heterojunction, because in these semiconductors one can vary the abundance of nuclear spins by engineering the isotopic composition. A specific device is suggested that allows one to model the processes of recording, reading and information transfer on a quantum level using the technique of electrical detection of the magnetic state of nuclear spins. Improvement of this technique for a semiconductor system with a relatively small number of nuclei might be applied to the manipulation of nuclear spin `qubits' in future S-NSQCs.
Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 1
NASA Technical Reports Server (NTRS)
Taylor, Lawrence W., Jr. (Compiler)
1989-01-01
Control/Structures Integration program software needs, computer aided control engineering for flexible spacecraft, computer aided design, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software for flexible structures and robots are among the topics discussed.
Report of a Workshop on the Pedagogical Aspects of Computational Thinking
ERIC Educational Resources Information Center
National Academies Press, 2011
2011-01-01
In 2008, the Computer and Information Science and Engineering Directorate of the National Science Foundation asked the National Research Council (NRC) to conduct two workshops to explore the nature of computational thinking and its cognitive and educational implications. The first workshop focused on the scope and nature of computational thinking…
Chabanas, Matthieu; Luboz, Vincent; Payan, Yohan
2003-06-01
This paper addresses the prediction of face soft tissue deformations resulting from bone repositioning in maxillofacial surgery. A generic 3D Finite Element model of the face soft tissues was developed. Face muscles are defined in the mesh as embedded structures, with different mechanical properties (transverse isotropy, stiffness depending on muscle contraction). Simulations of face deformations under muscle actions can thus be performed. In the context of maxillofacial surgery, this generic soft-tissue model is automatically conformed to the patient morphology by elastic registration, using skin and skull surfaces segmented from a CT scan. Some elements of the patient mesh can be geometrically distorted during the registration, which precludes Finite Element analysis. Irregular elements are thus detected and automatically regularized. This semi-automatic patient model generation is robust, fast and easy to use, and therefore seems compatible with clinical use. Six patient models were successfully built, and simulations of soft tissue deformations resulting from bone displacements were performed on two patient models. Both the adequacy of the models to the patient morphologies and the simulations of post-operative aspects were qualitatively validated by five surgeons. Their conclusions are that the models fit the morphologies of the patients, and that the predicted soft tissue modifications are coherent with what they would expect.
Methodical and technological aspects of creation of interactive computer learning systems
NASA Astrophysics Data System (ADS)
Vishtak, N. M.; Frolov, D. A.
2017-01-01
The article presents a methodology for the development of an interactive computer training system for power plant personnel. The methods used in the work are a generalization of the content of scientific and methodological sources on the use of computer-based training systems in vocational education, methods of system analysis, and methods of structural and object-oriented modeling of information systems. The relevance of interactive computer training systems for preparing personnel in educational and training centers is demonstrated. The development stages of such systems are identified, and the factors governing their efficient use are analysed. An algorithm of the work performed at each development stage is offered that enables one to optimize the time, financial and labor expenditure required to create an interactive computer training system.
Joldes, Grand Roman; Wittek, Adam; Miller, Karol
2008-01-01
Real-time computation of soft tissue deformation is important for the use of augmented reality devices and for providing haptic feedback during operation or surgeon training. This requires algorithms that are fast and accurate and that can handle material nonlinearities and large deformations. A set of such algorithms is presented in this paper, starting with the finite element formulation and the integration scheme used, and addressing common problems such as hourglass control and locking. The computation examples presented prove that, by using these algorithms, real-time computation becomes possible without sacrificing the accuracy of the results. For a brain model having more than 7,000 degrees of freedom, we computed the reaction forces due to indentation at a frequency of around 1000 Hz using a standard dual-core PC. Similarly, we conducted a simulation of brain shift using a model with more than 50,000 degrees of freedom in less than a minute. The speed benefits of our models result from combining the Total Lagrangian formulation with explicit time integration and low-order finite elements. PMID:19152791
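The explicit time integration credited above for the speed of the Total Lagrangian algorithms is, in its simplest form, the central-difference update. A one-degree-of-freedom sketch (illustrative, not the authors' code; in the real algorithm u and the forces are nodal vectors and M is a lumped mass matrix):

```python
# Sketch of the explicit central-difference time stepping that underlies such
# real-time explicit solvers. Single scalar degree of freedom for clarity;
# in a finite element code u, f_ext, f_int are nodal vectors and m is the
# lumped (diagonal) mass matrix, so each step is a cheap vector update.

def central_difference_step(u_n, u_prev, f_ext, f_int, m, dt):
    """u_{n+1} = 2 u_n - u_{n-1} + dt^2 * (f_ext - f_int) / m"""
    return 2.0 * u_n - u_prev + dt * dt * (f_ext - f_int) / m

# Free vibration of a unit mass on a unit spring (f_int = k*u), released
# from rest at u = 1 (placeholder values):
dt, m, k = 0.01, 1.0, 1.0
u_prev, u_n = 1.0, 1.0
for _ in range(10):
    u_prev, u_n = u_n, central_difference_step(u_n, u_prev, 0.0, k * u_n, m, dt)
```

No system of equations is solved per step, which is why such schemes reach kilohertz update rates; the price is a conditional stability limit on dt.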
Gravenkamp, Hauke; Birk, Carolin; Song, Chongmin
2014-07-01
This paper addresses the computation of dispersion curves and mode shapes of elastic guided waves in axisymmetric waveguides. The approach is based on a Scaled Boundary Finite Element formulation that has previously been presented for plate structures and general three-dimensional waveguides with complex cross-sections. The formulation leads to a Hamiltonian eigenvalue problem for the computation of wavenumbers and displacement amplitudes that can be solved very efficiently. In the axisymmetric representation, only the radial direction in a cylindrical coordinate system has to be discretized, while the circumferential direction as well as the direction of propagation are described analytically. It is demonstrated how the computational costs can be drastically reduced by employing spectral elements of extremely high order. Additionally, an alternative formulation is presented that leads to real coefficient matrices, and it is discussed how these two approaches affect the computational efficiency, depending on the elasticity matrix. In the case of solid cylinders, the singularity of the governing equations that occurs in the center of the cross-section is avoided by changing the quadrature scheme. Numerical examples show the applicability of the approach to homogeneous as well as layered structures with isotropic or anisotropic material behavior.
NASA Astrophysics Data System (ADS)
Ozer, Hasan; Ghauch, Ziad G.; Dhasmana, Heena; Al-Qadi, Imad L.
2016-08-01
Micromechanical computational modeling is used in this study to determine the smallest domain, or Representative Volume Element (RVE), that can be used to characterize the effective properties of composite materials such as Asphalt Concrete (AC). Computational Finite Element (FE) micromechanical modeling was coupled with digital image analysis of surface scans of AC specimens. Three mixtures with varying Nominal Maximum Aggregate Size (NMAS) of 4.75 mm, 12.5 mm, and 25 mm, were prepared for digital image analysis and computational micromechanical modeling. The effects of window size and phase modulus mismatch on the apparent viscoelastic response of the composite were numerically examined. A good agreement was observed in the RVE size predictions based on micromechanical computational modeling and image analysis. Micromechanical results indicated that a degradation in the matrix stiffness increases the corresponding RVE size. Statistical homogeneity was observed for window sizes equal to two to three times the NMAS. A model was presented for relating the degree of statistical homogeneity associated with each window size for materials with varying inclusion dimensions.
Experimental and computational investigation of lift-enhancing tabs on a multi-element airfoil
NASA Technical Reports Server (NTRS)
Ashby, Dale
1996-01-01
An experimental and computational investigation of the effect of lift-enhancing tabs on a two-element airfoil was conducted. The objective of the study was to develop an understanding of the flow physics associated with lift-enhancing tabs on a multi-element airfoil. A NACA 63(sub 2)-215 ModB airfoil with a 30 percent chord Fowler flap was tested in the NASA Ames 7- by 10-Foot Wind Tunnel. Lift-enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predict all of the trends in the experimental data quite well. When the flow over the flap upper surface is attached, tabs mounted at the main element trailing edge (cove tabs) produce very little change in lift. At high flap deflections, however, the flow over the flap is separated and cove tabs produce large increases in lift and corresponding reductions in drag by eliminating the separated flow. Cove tabs permit high flap deflection angles to be achieved and reduce the sensitivity of the airfoil lift to the size of the flap gap. Tabs attached to the flap trailing edge (flap tabs) are effective at increasing lift without significantly increasing drag. A combination of a cove tab and a flap tab increased the airfoil lift coefficient by 11 percent relative to the highest lift coefficient achieved by any baseline configuration at an angle of attack of zero degrees, and the maximum lift coefficient was increased by more than 3 percent. A simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift-enhancing tabs work. The tabs were modeled by a point vortex at the trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift-enhancing tabs on a multi-element airfoil. Results of the modeling effort indicate that the dominant effects of the tabs on the pressure distribution of each element of the airfoil can be captured with a potential flow model for cases with no flow separation.
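The point-vortex tab model described above rests on standard 2-D vortex kinematics: a vortex of circulation Γ induces a purely tangential velocity of magnitude Γ/(2πr) at distance r. A minimal sketch (counterclockwise-positive circulation, illustrative values; the actual sensitivity relationships are derived in the paper):

```python
# Hedged sketch of the potential-flow ingredient of the tab model: the tab
# is represented by a point vortex of circulation gamma at the trailing
# edge, inducing a tangential velocity of magnitude gamma / (2*pi*r).
# Counterclockwise-positive convention; values below are illustrative.

import math

def vortex_velocity(gamma, x, y, xv, yv):
    """Velocity (u, v) induced at (x, y) by a point vortex at (xv, yv)."""
    dx, dy = x - xv, y - yv
    r2 = dx * dx + dy * dy
    u = -gamma / (2.0 * math.pi) * (dy / r2)
    v = gamma / (2.0 * math.pi) * (dx / r2)
    return u, v

# Induced speed decays like 1/r with distance from the trailing edge:
u1, v1 = vortex_velocity(1.0, 1.0, 0.0, 0.0, 0.0)  # one unit downstream
u2, v2 = vortex_velocity(1.0, 2.0, 0.0, 0.0, 0.0)  # two units downstream
```

Superposing this induced field on the flow about each element changes the effective camber and circulation, which is how the model explains the lift increments measured on the cove and flap tabs.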
Elements of computational fluid dynamics on block structured grids using implicit solvers
NASA Astrophysics Data System (ADS)
Badcock, K. J.; Richards, B. E.; Woodgate, M. A.
2000-08-01
This paper reviews computational fluid dynamics (CFD) for aerodynamic applications. The key elements of a rigorous CFD analysis are discussed. Modelling issues are summarised and the state of modern discretisation schemes considered. Implicit solution schemes are discussed in some detail, as is multiblock grid generation. The cost and availability of computing power is described in the context of cluster computing and its importance for CFD. Several complex applications are then considered in light of these simulation components. Verification and validation is presented for each application and the important flow mechanisms are shown through the use of the simulation results. The applications considered are: cavity flow, spiked body supersonic flow, underexpanded jet shock wave hysteresis, slender body aerodynamics and wing flutter. As a whole the paper aims to show the current strengths and limitations of CFD and the conclusions suggest a way of enhancing the usefulness of flow simulation for industrial class problems.
Fiber pushout test - A three-dimensional finite element computational simulation
NASA Technical Reports Server (NTRS)
Mital, Subodh K.; Chamis, Christos C.
1991-01-01
A fiber pushthrough process was computationally simulated using three-dimensional finite element method. The interface material is replaced by an anisotropic material with greatly reduced shear modulus in order to simulate the fiber pushthrough process using a linear analysis. Such a procedure is easily implemented and is computationally very effective. It can be used to predict fiber pushthrough load for a composite system at any temperature. The average interface shear strength obtained from pushthrough load can easily be separated into its two components: one that comes from frictional stresses and the other that comes from chemical adhesion between fiber and the matrix and mechanical interlocking that develops due to shrinkage of the composite because of phase change during the processing. Step-by-step procedures are described to perform the computational simulation, to establish bounds on interfacial bond strength and to interpret interfacial bond quality.
Fiber pushout test: A three-dimensional finite element computational simulation
NASA Technical Reports Server (NTRS)
Mital, Subodh K.; Chamis, Christos C.
1990-01-01
A fiber pushthrough process was computationally simulated using three-dimensional finite element method. The interface material is replaced by an anisotropic material with greatly reduced shear modulus in order to simulate the fiber pushthrough process using a linear analysis. Such a procedure is easily implemented and is computationally very effective. It can be used to predict fiber pushthrough load for a composite system at any temperature. The average interface shear strength obtained from pushthrough load can easily be separated into its two components: one that comes from frictional stresses and the other that comes from chemical adhesion between fiber and the matrix and mechanical interlocking that develops due to shrinkage of the composite because of phase change during the processing. Step-by-step procedures are described to perform the computational simulation, to establish bounds on interfacial bond strength and to interpret interfacial bond quality.
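Both fiber pushout abstracts above refer to the average interface shear strength obtained from the pushthrough load. A back-of-envelope sketch of that quantity is the load divided by the cylindrical fiber-matrix interface area; the numbers below are placeholders, not data from the report:

```python
# Illustrative back-of-envelope relation (not from the report): the average
# interface shear strength implied by a pushout load P on a fiber of
# diameter d embedded over length L is tau = P / (pi * d * L).

import math

def average_interface_shear_strength(load_N, diameter_m, embedded_length_m):
    """Pushout load divided by the cylindrical interface area."""
    return load_N / (math.pi * diameter_m * embedded_length_m)

# 5 N pushout load, 140-micron fiber, 3-mm-thick slice (placeholder values):
tau = average_interface_shear_strength(5.0, 140e-6, 3e-3)  # in Pa
```

Separating this average value into its frictional and adhesive components, as the abstracts describe, requires the finite element simulation itself; the formula only recovers the combined average.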
Khashan, S. A.; Alazzam, A.; Furlani, E. P.
2014-01-01
A microfluidic design is proposed for realizing greatly enhanced separation of magnetically-labeled bioparticles using integrated soft-magnetic elements. The elements are fixed and intersect the carrier fluid (flow-invasive) with their length transverse to the flow. They are magnetized using a bias field to produce a particle capture force. Multiple stair-step elements are used to provide efficient capture throughout the entire flow channel. This is in contrast to conventional systems wherein the elements are integrated into the walls of the channel, which restricts efficient capture to limited regions of the channel due to the short range nature of the magnetic force. This severely limits the channel size and hence throughput. Flow-invasive elements overcome this limitation and enable microfluidic bioseparation systems with superior scalability. This enhanced functionality is quantified for the first time using a computational model that accounts for the dominant mechanisms of particle transport including fully-coupled particle-fluid momentum transfer. PMID:24931437
NASA Technical Reports Server (NTRS)
Choudhari, Meelan; Li, Fei; Bynum, Michael; Kegerise, Michael; King, Rudolph
2015-01-01
Computations are performed to study laminar-turbulent transition due to isolated roughness elements in boundary layers at Mach 3.5 and 5.95, with an emphasis on flow configurations for which experimental measurements from low disturbance wind tunnels are available. The Mach 3.5 case corresponds to a roughness element with right-triangle planform with hypotenuse that is inclined at 45 degrees with respect to the oncoming stream, presenting an obstacle with spanwise asymmetry. The Mach 5.95 case corresponds to a circular roughness element along the nozzle wall of the Purdue BAMQT wind tunnel facility. In both cases, the mean flow distortion due to the roughness element is characterized by long-lived streamwise streaks in the roughness wake, which can support instability modes that did not exist in the absence of the roughness element. The linear amplification characteristics of the wake flow are examined towards the eventual goal of developing linear growth correlations for the onset of transition.
Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.
2015-01-01
Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so do SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and to develop near-term solutions for advancing ToF-PET. Presented in this work is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3×3×20 mm3 LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162±1 ps FWHM, approaching the analytically calculated lower bound within 6.5%. PMID:26083559
Chernenkov, Iu V; Gumeniuk, O I
2009-01-01
The paper presents the results of studying the impact of cellular phone and personal computer use on the health status of 277 Saratov schoolchildren (mean age 13.2 +/- 2.3 years). About 80% of the adolescents were found to use cellular phones and computers mainly for games. The active users of cellular phones and computers show high aggressiveness, anxiety, hostility, and social stress, low stress resistance, and susceptibility to arterial hypotension. The negative influence of cellular phones and computers on the schoolchildren's health increases with the duration and frequency of their use.
Shadid, J.N.; Moffat, H.K.; Hutchinson, S.A.; Hennigan, G.L.; Devine, K.D.; Salinger, A.G.
1996-05-01
The theoretical background for the finite element computer program, MPSalsa, is presented in detail. MPSalsa is designed to solve laminar, low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow, heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve multiple coupled Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.
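The solver strategy described here — fully implicit integration with an inexact Newton method and matrix-free Krylov iterations — can be sketched in miniature. The following is a hypothetical illustration (not MPSalsa code) on a 1D nonlinear reaction-diffusion problem, -u'' + u^3 = f, where the Jacobian-vector product is applied without ever assembling the Jacobian:

```python
import math

def residual(u, f, h):
    # F(u)_i = (2u_i - u_{i-1} - u_{i+1})/h^2 + u_i^3 - f_i, with u = 0 at both ends
    n = len(u)
    F = []
    for i in range(n):
        left = u[i-1] if i > 0 else 0.0
        right = u[i+1] if i < n-1 else 0.0
        F.append((2*u[i] - left - right)/h**2 + u[i]**3 - f[i])
    return F

def jacvec(u, v, h):
    # exact matrix-free Jacobian-vector product J v; J is never formed
    n = len(u)
    out = []
    for i in range(n):
        left = v[i-1] if i > 0 else 0.0
        right = v[i+1] if i < n-1 else 0.0
        out.append((2*v[i] - left - right)/h**2 + 3*u[i]**2*v[i])
    return out

def cg(avp, b, tol=1e-10, maxit=200):
    # conjugate gradients for an SPD operator given only as avp(v)
    x = [0.0]*len(b)
    r = b[:]
    p = r[:]
    rs = sum(ri*ri for ri in r)
    for _ in range(maxit):
        Ap = avp(p)
        denom = sum(pi*ai for pi, ai in zip(p, Ap))
        if denom <= 0.0:
            break
        alpha = rs/denom
        x = [xi + alpha*pi for xi, pi in zip(x, p)]
        r = [ri - alpha*ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri*ri for ri in r)
        if rs_new < tol*tol:
            break
        p = [ri + (rs_new/rs)*pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def newton_krylov(f, n, iters=12):
    # inexact Newton: each outer step solves J s = -F with the inner Krylov solver
    h = 1.0/(n + 1)
    u = [0.0]*n
    for _ in range(iters):
        F = residual(u, f, h)
        if max(abs(t) for t in F) < 1e-8:
            break
        s = cg(lambda v: jacvec(u, v, h), [-t for t in F])
        u = [ui + si for ui, si in zip(u, s)]
    return u

# manufactured solution u(x) = sin(pi x), so f = pi^2 sin(pi x) + sin^3(pi x)
n = 31
xs = [(i + 1)/(n + 1) for i in range(n)]
f = [math.pi**2*math.sin(math.pi*x) + math.sin(math.pi*x)**3 for x in xs]
u = newton_krylov(f, n)
```

The Jacobian is symmetric positive definite for this model problem, so plain CG suffices; a general reacting-flow Jacobian is nonsymmetric and would require GMRES-type Krylov methods as in Aztec.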
Lee, Kang-Hoon; Shin, Kyung-Seop; Lim, Debora; Kim, Woo-Chan; Chung, Byung Chang; Han, Gyu-Bum; Roh, Jeongkyu; Cho, Dong-Ho; Cho, Kiho
2015-07-01
The genomes of living organisms are populated with pleomorphic repetitive elements (REs) of varying densities. Our hypothesis that genomic RE landscapes are species/strain/individual-specific was implemented into the Genome Signature Imaging system to visualize and compute the RE-based signatures of any genome. Following the occurrence profiling of 5-nucleotide REs/words, the information from top-50 frequency words was transformed into a genome-specific signature and visualized as Genome Signature Images (GSIs), using a CMYK scheme. An algorithm for computing distances among GSIs was formulated using the GSIs' variables (word identity, frequency, and frequency order). The utility of the GSI-distance computation system was demonstrated with control genomes. GSI-based computation of genome-relatedness among 1766 microbes (117 archaea and 1649 bacteria) identified their clustering patterns; although the majority paralleled the established classification, some did not. The Genome Signature Imaging system, with its visualization and distance computation functions, enables genome-scale evolutionary studies involving numerous genomes with varying sizes.
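The word-frequency profiling step can be illustrated with a toy sketch. The k-mer counting and the rank-based distance below are hypothetical simplifications of the GSI variables named in the abstract (word identity, frequency, frequency order); the CMYK visualization is omitted:

```python
from collections import Counter

def word_profile(seq, k=5, top=50):
    # sliding-window k-mer counts; keep the `top` most frequent words,
    # in frequency order (dict preserves insertion order in Python 3.7+)
    counts = Counter(seq[i:i+k] for i in range(len(seq) - k + 1))
    return dict(counts.most_common(top))

def signature_distance(p1, p2):
    # toy distance: rank displacement for shared words, fixed penalty otherwise
    r1 = {w: i for i, w in enumerate(p1)}
    r2 = {w: i for i, w in enumerate(p2)}
    d = 0
    for w in set(r1) | set(r2):
        if w in r1 and w in r2:
            d += abs(r1[w] - r2[w])
        else:
            d += max(len(r1), len(r2))  # missing-word penalty
    return d

profile_a = word_profile("ACGTACGT"*20)
profile_b = word_profile("AAAACAAAAC"*20)
d_ab = signature_distance(profile_a, profile_b)
```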
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1997-01-01
A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.
NASA Astrophysics Data System (ADS)
Chang, Chau-Lyan; Venkatachari, Balaji
2016-11-01
Flow physics near the viscous wall is intrinsically anisotropic in nature, namely, the gradient along the wall normal direction is much larger than that along the other two orthogonal directions parallel to the surface. Accordingly, high aspect ratio meshes are employed near the viscous wall to capture the physics and maintain low grid count. While such arrangement works fine for structured-grid based methods with dimensional splitting that handles derivatives in each direction separately, similar treatments often lead to numerical instability for unstructured-mesh based methods when triangular or tetrahedral meshes are used. The non-splitting treatment of near-wall gradients for high-aspect ratio triangular or tetrahedral elements results in an ill-conditioned linear system of equations that is closely related to the numerical instability. Altering the side lengths of the near wall tetrahedrons in the gradient calculations would make the system less unstable but more dissipative. This research presents recent progress in applying numerical dissipation control in the space-time conservation element solution element (CESE) method to reduce or alleviate the above-mentioned instability while maintaining reasonable solution accuracy.
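The link between mesh aspect ratio and ill-conditioning can be demonstrated with a minimal sketch: the 2×2 system that reconstructs a gradient from solution differences along two edges of a triangle degrades as the triangle is flattened. This is an illustrative model of the phenomenon, not the CESE formulation itself:

```python
import math

def grad_system_cond(aspect):
    # triangle with base 1 and height 1/aspect; gradient reconstruction solves
    # [e1; e2] g = [du1; du2], where e1 and e2 are edge vectors from one vertex
    e1 = (1.0, 0.0)
    e2 = (0.5, 1.0/aspect)
    a, b, c, d = e1[0], e1[1], e2[0], e2[1]
    # 2-norm condition number from the singular values of [[a, b], [c, d]]
    t = a*a + b*b + c*c + d*d
    det = a*d - b*c
    disc = math.sqrt(max(t*t - 4*det*det, 0.0))
    smax = math.sqrt((t + disc)/2)
    smin = math.sqrt((t - disc)/2)
    return smax/smin
```

The condition number grows roughly in proportion to the aspect ratio, which is one way to see why unsplit gradient treatments on stretched simplices become fragile.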
Computational analysis of auxin responsive elements in the Arabidopsis thaliana L. genome
2014-01-01
Auxin responsive elements (AuxRE) were found in upstream regions of target genes for ARFs (Auxin response factors). While ChIP-seq data for most ARFs are still unavailable, prediction of potential AuxREs is restricted by consensus models that detect too many false positive sites. Using sequence analysis of experimentally proven AuxREs, we revealed both an extended nucleotide context pattern for the AuxRE itself and three distinct types of its coupling motifs (Y-patch, AuxRE-like, and ABRE-like), which together with the AuxRE may form composite elements. Computational analysis of the genome-wide distribution of the predicted AuxREs and their impact on auxin responsive gene expression allowed us to conclude that: (1) AuxREs are enriched around the transcription start site, with the maximum density in the 5'UTR; (2) AuxREs mediate auxin responsive up-regulation, not down-regulation; (3) directly oriented single AuxREs and reverse multiple AuxREs are mostly associated with auxin responsiveness. In the composite AuxRE elements associated with auxin response, the ABRE-like and Y-patch motifs are 5'-flanking or overlapping the AuxRE, whereas the AuxRE-like motif is 3'-flanking. The specificity in location and orientation of the coupling elements suggests that they are potential binding sites for ARF partners. PMID:25563792
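A minimal sketch of the motif-scanning step: the code below searches both strands of a sequence for the canonical TGTCTC AuxRE core. It is an illustration only — the paper's extended context pattern and coupling-motif analysis go well beyond a simple consensus scan:

```python
def revcomp(seq):
    # reverse complement of a DNA string
    comp = {"A": "T", "C": "G", "G": "C", "T": "A"}
    return "".join(comp[b] for b in reversed(seq))

def find_auxre(seq, core="TGTCTC"):
    # report (position, strand) for every occurrence of the AuxRE core motif;
    # a '-' strand hit means the reverse complement of the core appears here
    hits = []
    rc = revcomp(core)
    for i in range(len(seq) - len(core) + 1):
        window = seq[i:i+len(core)]
        if window == core:
            hits.append((i, "+"))
        if window == rc:
            hits.append((i, "-"))
    return hits
```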
Development of an adaptive hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1994-01-01
In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.
Large-scale computation of incompressible viscous flow by least-squares finite element method
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.
1993-01-01
The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale, three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive-definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Simple substitution or Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi-preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven cavity problem at Re = 10,000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. The Taylor-Goertler-like vortices are observed for Re = 1,000.
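The matrix-free, Jacobi-preconditioned conjugate gradient idea can be sketched on a 1D model problem: the operator is applied by summing element stiffness contributions on the fly, so neither element nor global matrices are ever stored. A hypothetical miniature (linear elements for -u'' = 1 with u(0) = u(1) = 0), not the LSFEM code itself:

```python
def element_matvec(u, n_el):
    # y = A u assembled element by element from the 1D linear-element stiffness
    # matrix k_e = (1/h) [[1, -1], [-1, 1]]; no global matrix is ever formed
    h = 1.0/n_el
    y = [0.0]*(n_el + 1)
    for e in range(n_el):
        a, b = u[e], u[e+1]
        y[e] += (a - b)/h
        y[e+1] += (b - a)/h
    y[0], y[n_el] = u[0], u[n_el]       # identity rows enforce Dirichlet ends
    return y

def jacobi_pcg(matvec, b, diag, tol=1e-12, maxit=500):
    # Jacobi (diagonal) preconditioned conjugate gradients, matrix-free
    x = [0.0]*len(b)
    r = b[:]
    z = [ri/di for ri, di in zip(r, diag)]
    p = z[:]
    rz = sum(ri*zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = matvec(p)
        denom = sum(pi*ai for pi, ai in zip(p, Ap))
        if denom <= 0.0:
            break
        alpha = rz/denom
        x = [xi + alpha*pi for xi, pi in zip(x, p)]
        r = [ri - alpha*ai for ri, ai in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        z = [ri/di for ri, di in zip(r, diag)]
        rz_new = sum(ri*zi for ri, zi in zip(r, z))
        p = [zi + (rz_new/rz)*pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

n_el = 16
h = 1.0/n_el
b = [0.0] + [h]*(n_el - 1) + [0.0]           # consistent load for f = 1
diag = [1.0] + [2.0/h]*(n_el - 1) + [1.0]    # Jacobi preconditioner entries
u = jacobi_pcg(lambda v: element_matvec(v, n_el), b, diag)
```

For this 1D Poisson problem, linear finite elements are nodally exact, so the computed nodal values match u(x) = x(1 - x)/2 to solver tolerance.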
Computational Analysis of Some Aspects of a Synthetic Route to Ammonium Dinitramide
1993-12-27
Computational Analysis of Some Aspects of a Synthetic Route to Ammonium Dinitramide, by Tore Brinck and Peter Politzer, Department of Chemistry. [Scanned DTIC report form; recoverable fields: contract N00014-91-J-4057; R&T Code 4131D02; Dr. Richard S. Miller.]
Student and Staff Perceptions of Key Aspects of Computer Science Engineering Capstone Projects
ERIC Educational Resources Information Center
Olarte, Juan José; Dominguez, César; Jaime, Arturo; Garcia-Izquierdo, Francisco José
2016-01-01
In carrying out their capstone projects, students use knowledge and skills acquired throughout their degree program to create a product or provide a technical service. An assigned advisor guides the students and supervises the work, and a committee assesses the projects. This study compares student and staff perceptions of key aspects of…
Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González
2016-01-01
Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.
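A non-intrusive propagation method treats the simulation as a black box and simply re-runs it on sampled inputs. Below is a minimal Monte Carlo sketch; the beam-deflection formula and all parameter values are hypothetical stand-ins for a finite element model with uncertain inputs:

```python
import random
import statistics

def propagate(model, sampler, n=5000, seed=1):
    # non-intrusive Monte Carlo: only repeated model evaluations are needed
    rng = random.Random(seed)
    outs = [model(*sampler(rng)) for _ in range(n)]
    return statistics.mean(outs), statistics.stdev(outs)

def deflection(P, E):
    # toy surrogate: cantilever tip deflection d = P L^3 / (3 E I)
    L, I = 0.1, 1e-8        # fixed geometry (hypothetical values)
    return P*L**3/(3*E*I)

def sampler(rng):
    # uncertain load P ~ N(100, 10) and modulus E ~ N(1e10, 1e9)
    return rng.gauss(100.0, 10.0), rng.gauss(1e10, 1e9)

m, s = propagate(deflection, sampler)
```

Intrusive alternatives (e.g., stochastic Galerkin) would instead modify the model equations themselves; the trade-off is re-run cost versus implementation effort.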
Algorithmic computation of knot polynomials of secondary structure elements of proteins.
Emmert-Streib, Frank
2006-10-01
The classification of protein structures is an important and still outstanding problem. The purpose of this paper is threefold. First, we utilize a relation between the Tutte and HOMFLY polynomials to show that the Alexander-Conway polynomial can be algorithmically computed for a given planar graph. Second, as special cases of planar graphs, we use polymer graphs of protein structures. More precisely, we use three building blocks of the three-dimensional protein structure--alpha-helix, antiparallel beta-sheet, and parallel beta-sheet--and calculate, for their corresponding polymer graphs, the Tutte polynomials analytically by providing recurrence equations for all three secondary structure elements. Third, we present numerical results comparing the results from our analytical calculations with the numerical results of our algorithm, not only to test consistency, but also to demonstrate that all assigned polynomials are unique labels of the secondary structure elements. This paves the way for an automatic classification of protein structures.
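The deletion-contraction recurrence underlying such calculations can be sketched directly. The code below computes the Tutte polynomial of a small multigraph (a loop contributes a factor of y, a bridge a factor of x); it is a generic illustration, not the paper's analytical recurrences for secondary-structure polymer graphs:

```python
from collections import Counter

def poly_add(p, q):
    # add two polynomials stored as {(i, j): coeff} meaning coeff * x^i * y^j
    r = Counter(p)
    r.update(q)
    return dict(r)

def poly_shift(p, dx=0, dy=0):
    # multiply a polynomial by x^dx * y^dy
    return {(i + dx, j + dy): c for (i, j), c in p.items()}

def is_bridge(edges, e):
    # e is a bridge iff its endpoints are disconnected in G - e
    u, v = e
    rest = edges[:]
    rest.remove(e)
    seen = {u}
    stack = [u]
    while stack:
        a = stack.pop()
        for (x, y) in rest:
            for b in ((y,) if x == a else (x,) if y == a else ()):
                if b not in seen:
                    seen.add(b)
                    stack.append(b)
    return v not in seen

def tutte(edges):
    # deletion-contraction: T(G) = T(G-e) + T(G/e) for ordinary edges,
    # T(G) = x T(G/e) for a bridge, T(G) = y T(G-e) for a loop
    if not edges:
        return {(0, 0): 1}
    e = edges[0]
    u, v = e
    rest = edges[1:]
    if u == v:                                   # loop
        return poly_shift(tutte(rest), dy=1)
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if is_bridge(edges, e):                      # bridge
        return poly_shift(tutte(contracted), dx=1)
    return poly_add(tutte(rest), tutte(contracted))
```

For the triangle this yields T = x^2 + x + y, the expected result; exponential worst-case cost is why the paper derives closed-form recurrences for its specific graph families.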
Finite element analysis of the hip and spine based on quantitative computed tomography.
Carpenter, R Dana
2013-06-01
Quantitative computed tomography (QCT) provides three-dimensional information about bone geometry and the spatial distribution of bone mineral. Images obtained with QCT can be used to create finite element models, which offer the ability to analyze bone strength and the distribution of mechanical stress and physical deformation. This approach can be used to investigate different mechanical loading scenarios (stance and fall configurations at the hip, for example) and to estimate whole bone strength and the relative mechanical contributions of the cortical and trabecular bone compartments. Finite element analyses based on QCT images of the hip and spine have been used to provide important insights into the biomechanical effects of factors such as age, sex, bone loss, pharmaceuticals, and mechanical loading at sites of high clinical importance. Thus, this analysis approach has become an important tool in the study of the etiology and treatment of osteoporosis at the hip and spine.
Computer modeling of single-cell and multicell thermionic fuel elements
Dickinson, J.W.; Klein, A.C.
1996-05-01
Modeling efforts are undertaken to perform coupled thermal-hydraulic and thermionic analysis for both single-cell and multicell thermionic fuel elements (TFE). The analysis--and the resulting MCTFE computer code (multicell thermionic fuel element)--is a steady-state finite volume model specifically designed to analyze cylindrical TFEs. It employs an iterative successive overrelaxation solution technique to solve for the temperatures throughout the TFE and a coupled thermionic routine to determine the total TFE performance. The calculated results include temperature distributions in all regions of the TFE, axial interelectrode voltages and current densities, and total TFE electrical output parameters including power, current, and voltage. MCTFE-generated results are compared with experimental data from the single-cell Topaz-II-type TFE and with multicell data from the General Atomics 3H5 TFE to benchmark the accuracy of the code methods.
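The successive-overrelaxation kernel at the heart of such thermal solvers can be sketched for Laplace's equation on a unit square — an illustrative analogue, not the MCTFE code, which couples the relaxation sweep to thermionic emission routines:

```python
def sor_laplace(n, omega=1.8, tol=1e-10, maxit=20000):
    # SOR for Laplace's equation on an (n+2) x (n+2) grid over the unit square;
    # T = 1 on the top edge, T = 0 on the other three edges
    T = [[0.0]*(n + 2) for _ in range(n + 2)]
    for j in range(n + 2):
        T[n + 1][j] = 1.0                        # hot top boundary row
    for it in range(maxit):
        diff = 0.0
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                new = (1 - omega)*T[i][j] + omega*0.25*(
                    T[i+1][j] + T[i-1][j] + T[i][j+1] + T[i][j-1])
                diff = max(diff, abs(new - T[i][j]))
                T[i][j] = new                    # in-place update (Gauss-Seidel sweep)
        if diff < tol:
            break
    return T

T = sor_laplace(31)
```

By symmetry the exact center value of this boundary-value problem is 1/4 (the four rotations of the problem sum to a solution that is identically 1), which gives a convenient check on the converged field.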
Erba, Alessandro; Caglioti, Dominique; Zicovich-Wilson, Claudio Marcelo; Dovesi, Roberto
2017-02-15
Two alternative approaches for the quantum-mechanical calculation of the nuclear-relaxation term of elastic and piezoelectric tensors of crystalline materials are illustrated and their computational aspects discussed: (i) a numerical approach based on the geometry optimization of atomic positions at strained lattice configurations and (ii) a quasi-analytical approach based on the evaluation of the force- and displacement-response internal-strain tensors as combined with the interatomic force-constant matrix. The two schemes are compared both as regards their computational accuracy and performance. The latter approach, not being affected by the many numerical parameters and procedures of a typical quasi-Newton geometry optimizer, constitutes a more reliable and robust means of evaluating such properties, at a reduced computational cost for most crystalline systems. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Mamaev, K.; Obkhodsky, A.; Popov, A.
2016-01-01
The computational model, technique, and basic principles of operation of a program complex for quantum-chemical calculations of the physico-chemical parameters of materials with rare earth elements are discussed. The calculating system is scalable and includes CPU and GPU computational resources. Globus Toolkit 5 software controls and operates the computational jobs and makes it possible to join computer users into a unified peer-to-peer data processing system. CUDA software is used to integrate graphic processors into the calculation system.
Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 2
NASA Technical Reports Server (NTRS)
Taylor, Lawrence W., Jr. (Compiler)
1989-01-01
Topics discussed include the Control/Structures Integration Program, a survey of available software for control of flexible structures, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software.
Fuchs, Lynn S; Fuchs, Douglas; Hamlett, Carol L; Lambert, Warren; Stuebing, Karla; Fletcher, Jack M
2008-02-01
The purpose of this study was to explore patterns of difficulty in 2 domains of mathematical cognition: computation and problem solving. Third graders (n = 924; 47.3% male) were representatively sampled from 89 classrooms; assessed on computation and problem solving; classified as having difficulty with computation, problem solving, both domains, or neither domain; and measured on 9 cognitive dimensions. Difficulty occurred across domains with the same prevalence as difficulty with a single domain; specific difficulty was distributed similarly across domains. Multivariate profile analysis on cognitive dimensions and chi-square tests on demographics showed that specific computational difficulty was associated with strength in language and weaknesses in attentive behavior and processing speed; problem-solving difficulty was associated with deficient language as well as race and poverty. Implications for understanding mathematics competence and for the identification and treatment of mathematics difficulties are discussed.
Ahmed, B.; Ahmad, J.; Guy, G.
1994-09-01
A finite element method coupled with the Preisach model of hysteresis is used to compute the ferrite losses in medium-power transformers (10--60 kVA) working at relatively high frequencies (20--60 kHz) and with an excitation level of about 0.3 Tesla. The dynamic evolution of the permeability is taken into account. Single and double cubic spline functions are used to account for temperature effects on the electric and magnetic parameters of the ferrite cores, respectively. The results are compared with test data obtained with 3C8 and B50 ferrites at different frequencies.
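The Preisach model represents hysteresis as a superposition of elementary relay operators distributed over the triangular (alpha, beta) half-plane with beta <= alpha. A minimal scalar sketch follows — a coarse uniform discretization with equal weights, purely illustrative, not the identified distribution of a real ferrite:

```python
def make_preisach(n=40, hmax=1.0):
    # discrete Preisach plane: one relay hysteron per cell with beta <= alpha
    relays = []
    step = 2*hmax/n
    for i in range(n):
        for j in range(i + 1):
            alpha = -hmax + (i + 1)*step    # up-switching threshold
            beta = -hmax + j*step           # down-switching threshold
            relays.append([alpha, beta, -1])  # start in the 'down' state
    return relays

def apply_field(relays, h):
    # each relay switches up above alpha, down below beta, else keeps its state;
    # the output is the (equal-weight) average of all relay states
    for r in relays:
        if h >= r[0]:
            r[2] = 1
        elif h <= r[1]:
            r[2] = -1
    return sum(r[2] for r in relays)/len(relays)

relays = make_preisach()
up = [apply_field(relays, 0.2*k) for k in range(-5, 6)]     # sweep -1 -> +1
down = [apply_field(relays, 0.2*k) for k in range(5, -6, -1)]  # sweep +1 -> -1
```

Because relays remember their last switching, the ascending and descending branches disagree at the same field value — the hysteresis loop — which is what the spline-corrected Preisach model supplies to the loss computation.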
Freels, J.D.; Baker, A.J.; Ianelli, G.S.
1991-01-01
A weak statement forms the theoretical basis for identifying the range of choices/decisions for constructing approximate solutions to the compressible Navier-Stokes equations. The Galerkin form is intrinsically non-dissipative, and a Taylor series analysis identifies the extension needed for shock capturing. Thereafter, the approximation trial space is constructed with compact support using a spatial domain semi-discretization into finite elements. An implicit temporal algorithm produces the terminal algebraic form, which is iteratively solved using a tensor product factorization quasi-Newton procedure. Computational results verify algorithm performance for a range of aerodynamics specifications. 6 refs., 3 figs.
NASA Technical Reports Server (NTRS)
Byun, Chansup; Guruswamy, Guru P.; Kutler, Paul (Technical Monitor)
1994-01-01
In recent years significant advances have been made for parallel computers in both hardware and software. Now parallel computers have become viable tools in computational mechanics. Many application codes developed on conventional computers have been modified to benefit from parallel computers. Significant speedups in some areas have been achieved by parallel computations. For single-discipline use of both fluid dynamics and structural dynamics, computations have been made on wing-body configurations using parallel computers. However, only a limited amount of work has been completed in combining these two disciplines for multidisciplinary applications. The prime reason is the increased level of complication associated with a multidisciplinary approach. In this work, procedures to compute aeroelasticity on parallel computers using direct coupling of fluid and structural equations will be investigated for wing-body configurations. The parallel computer selected for computations is an Intel iPSC/860 computer which is a distributed-memory, multiple-instruction, multiple data (MIMD) computer with 128 processors. In this study, the computational efficiency issues of parallel integration of both fluid and structural equations will be investigated in detail. The fluid and structural domains will be modeled using finite-difference and finite-element approaches, respectively. Results from the parallel computer will be compared with those from the conventional computers using a single processor. This study will provide an efficient computational tool for the aeroelastic analysis of wing-body structures on MIMD type parallel computers.
Symbolic algorithms for the computation of Moshinsky brackets and nuclear matrix elements
NASA Astrophysics Data System (ADS)
Ursescu, D.; Tomaselli, M.; Kuehl, T.; Fritzsche, S.
2005-12-01
To facilitate the use of the extended nuclear shell model (NSM), a FERMI module for calculating some of its basic quantities in the framework of MAPLE is provided. The Moshinsky brackets, the matrix elements for several central and non-central interactions between nuclear two-particle states, as well as their expansion in terms of Talmi integrals, are easily given within a symbolic formulation. All of these quantities are available for interactive work. Program summary. Title of program: Fermi. Catalogue identifier: ADVO. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVO. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computers for which the program is designed and others on which it has been tested: all computers with a licence for the computer algebra package MAPLE [Maple is a registered trademark of Waterloo Maple Inc., produced by the MapleSoft division of Waterloo Maple Inc.]. Installations: GSI-Darmstadt; University of Kassel (Germany). Operating systems under which the program has been tested: Windows XP, Linux 2.4. Programming language used: MAPLE 8 and 9.5 from the MapleSoft division of Waterloo Maple Inc. Memory required to execute with typical data: 30 MB. No. of lines in distributed program, including test data, etc.: 5742. No. of bytes in distributed program, including test data, etc.: 288 939. Distribution format: tar.gz. Nature of the physical problem: in order to perform calculations within the nuclear shell model (NSM), quick and reliable access to the nuclear matrix elements is required. These matrix elements, which arise from various types of forces among the nucleons, can be calculated using Moshinsky's transformation brackets between relative and center-of-mass coordinates [T.A. Brody, M. Moshinsky, Tables of Transformation Brackets, Monografias del Instituto de Fisica, Universidad Nacional Autonoma de Mexico, 1960] and by the proper use of the nuclear states in different coupling notations
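The Talmi integrals mentioned above can also be evaluated numerically. Assuming the standard oscillator-unit definition I_p = [2/Gamma(p + 3/2)] * integral from 0 to infinity of r^(2p+2) e^(-r^2) V(r) dr, a simple trapezoid-rule sketch (numeric, in contrast to the module's symbolic approach) is:

```python
import math

def talmi_integral(V, p, rmax=10.0, n=20000):
    # I_p = 2/Gamma(p + 3/2) * int_0^inf r^(2p+2) exp(-r^2) V(r) dr,
    # approximated by the composite trapezoid rule on [0, rmax]
    h = rmax/n
    total = 0.0
    for i in range(n + 1):
        r = i*h
        w = 0.5 if i in (0, n) else 1.0
        total += w * r**(2*p + 2) * math.exp(-r*r) * V(r)
    return 2.0/math.gamma(p + 1.5) * total * h
```

With V(r) = 1 the normalization gives I_p = 1 exactly for every p, and a Gaussian potential V(r) = e^(-r^2) gives I_0 = 2^(-3/2), both of which serve as checks on the quadrature.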
Computing interaural differences through finite element modeling of idealized human heads.
Cai, Tingli; Rakerd, Brad; Hartmann, William M
2015-09-01
Acoustical interaural differences were computed for a succession of idealized shapes approximating the human head-related anatomy: sphere, ellipsoid, and ellipsoid with neck and torso. Calculations were done as a function of frequency (100-2500 Hz) and for source azimuths from 10 to 90 degrees using finite element models. The computations were compared to free-field measurements made with a manikin. Compared to a spherical head, the ellipsoid produced greater large-scale variation with frequency in both interaural time differences and interaural level differences, resulting in better agreement with the measurements. Adding a torso, represented either as a large plate or as a rectangular box below the neck, further improved the agreement by adding smaller-scale frequency variation. The comparisons permitted conjectures about the relationship between details of interaural differences and gross features of the human anatomy, such as the height of the head, and length of the neck.
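For the spherical-head case, a classical closed-form estimate of the interaural time difference is Woodworth's ray-tracing formula, ITD = (a/c)(theta + sin theta). A one-function sketch, with head radius and sound speed set to assumed typical values rather than anything fitted to the paper's manikin:

```python
import math

def itd_sphere(azimuth_deg, radius=0.0875, c=343.0):
    # Woodworth's high-frequency ray-tracing ITD for a rigid sphere:
    # ITD = (a/c) * (theta + sin(theta)), theta = source azimuth in radians
    th = math.radians(azimuth_deg)
    return radius/c * (th + math.sin(th))
```

This frequency-independent estimate is exactly the kind of baseline the finite element computations refine: the ellipsoid and torso geometries add the frequency-dependent structure that a sphere cannot produce.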
NASA Astrophysics Data System (ADS)
Rosemeier, Jolanta Iwona
1992-09-01
With the need to develop computers far faster than conventional digital-chip-based systems, the future is very bright for optical signal processing. Attention has turned to a different application of optics utilizing mathematical operations, in which case the operations are numerical, sometimes discrete, and often algebraic in nature. Interest has been so vigorous that many view it as a small revolution in optics, whereby optical signal processing is beginning to encompass what many frequently describe as optical computing. The term is fully intended to imply close comparison with the operations performed by scientific digital computers. Most present computer-intensive problem-solving processors rely on a common set of linear equations found in numerical matrix algebra. Recently, considerable research has focused on the use of systolic arrays, which can operate at high speeds with great efficiency. This approach addresses acousto-optic digital and analog arrays utilizing three-dimensional optical interconnect technology. In part I of this dissertation the first single-element two-dimensional (2-D) acousto-optic deflector was designed, fabricated, and incorporated into an optical 3 x 3 vector-vector or matrix-matrix multiplier system. This single-element deflector is used as an outer-product device. The input vectors are addressed by electronic means and the outer-product matrix is displayed as a 2-D array of optical (laser) pixels. In part II of this work a multichannel single-element 2-D deflector was designed, fabricated, and implemented into a Programmable Logic Array (PLA) optical computing system. This system can be used for: word equality detection, free-space optical interconnections, and half-adder optical system implementation. The PLA system described in this dissertation has the capability of word equality detection. The 2-D multichannel deflector that was designed and fabricated is capable of comparing 16 x 16 words every 316 nanoseconds. Each word is 8
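The outer-product scheme such a deflector implements can be expressed algebraically: a matrix product C = AB accumulated as a sum of rank-1 outer products a_k b_k^T, one per clock cycle, with the optics performing each rank-1 update in parallel. A plain software sketch of that dataflow:

```python
def outer_product_matmul(A, B):
    # C = A @ B accumulated as a sum of rank-1 outer products, the dataflow
    # an optical outer-product engine realizes one 'cycle' at a time
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0]*p for _ in range(n)]
    for k in range(m):                      # one outer product per cycle
        col = [A[i][k] for i in range(n)]   # k-th column of A
        row = B[k]                          # k-th row of B
        for i in range(n):
            for j in range(p):
                C[i][j] += col[i]*row[j]
    return C
```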
ERIC Educational Resources Information Center
Stroup, Walter M.; Hills, Thomas; Carmona, Guadalupe
2011-01-01
This paper summarizes an approach to helping future educators to engage with key issues related to the application of measurement-related statistics to learning and teaching, especially in the contexts of science, mathematics, technology and engineering (STEM) education. The approach we outline has two major elements. First, students are asked to…
A comparative computational analysis of nonautonomous Helitron elements between maize and rice
Sweredoski, Michael; DeRose-Wilson, Leah; Gaut, Brandon S
2008-01-01
Background Helitrons are DNA transposable elements that are proposed to replicate via a rolling circle mechanism. Non-autonomous helitron elements have captured gene fragments from many genes in maize (Zea mays ssp. mays) but only a handful of genes in Arabidopsis (Arabidopsis thaliana). This observation suggests very different histories for helitrons in these two species, but it is unclear which species contains helitrons that are more typical of plants. Results We performed computational searches to identify helitrons in maize and rice genomic sequence data. Using 12 previously identified helitrons as a seed set, we identified 23 helitrons in maize, five of which were polymorphic among a sample of inbred lines. Our total sample of maize helitrons contained fragments of 44 captured genes. Twenty-one of 35 of these helitrons did not cluster with other elements into closely related groups, suggesting substantial diversity in the maize element complement. We identified over 552 helitrons in the japonica rice genome. More than 70% of these were found in a collinear location in the indica rice genome, and 508 clustered as a single large subfamily. The japonica rice elements contained fragments of only 11 genes, a number similar to that in Arabidopsis. Given differences in gene capture between maize and rice, we examined sequence properties that could contribute to differences in capture rates, focusing on 3' palindromes that are hypothesized to play a role in transposition termination. The free energies of folding for maize helitrons were significantly lower than those for rice, but the direction of the difference differed from our prediction. Conclusion Maize helitrons are clearly unique relative to those of rice and Arabidopsis in the prevalence of gene capture, but the reasons for this difference remain elusive. Maize helitrons do not seem to be more polymorphic among individuals than those of Arabidopsis; they do not appear to be substantially older or younger than
A comparison of turbulence models in computing multi-element airfoil flows
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Menter, Florian; Durbin, Paul A.; Mansour, Nagi N.
1994-01-01
Four different turbulence models are used to compute the flow over a three-element airfoil configuration. These models are the one-equation Baldwin-Barth model, the one-equation Spalart-Allmaras model, a two-equation k-omega model, and a new one-equation Durbin-Mansour model. The flow is computed using the INS2D two-dimensional incompressible Navier-Stokes solver. An overset Chimera grid approach is utilized. Grid resolution tests are presented, and manual solution-adaptation of the grid was performed. The performance of each of the models is evaluated for test cases involving different angles-of-attack, Reynolds numbers, and flap riggings. The resulting surface pressure coefficients, skin friction, velocity profiles, and lift, drag, and moment coefficients are compared with experimental data. The models produce very similar results in most cases. Excellent agreement between computational and experimental surface pressures was observed, but only moderately good agreement was seen in the velocity profile data. In general, the difference between the predictions of the different models was less than the difference between the computational and experimental data.
Visual ergonomic aspects of glare on computer displays: glossy screens and angular dependence
NASA Astrophysics Data System (ADS)
Brunnström, Kjell; Andrén, Börje; Konstantinides, Zacharias; Nordström, Lukas
2007-02-01
Recently, flat-panel computer displays and notebook computers designed with a so-called glare panel, i.e. a highly glossy screen, have emerged on the market. The shiny look of the display appeals to customers, and there are arguments that contrast, colour saturation, etc. improve with a glare panel. LCD displays often suffer from angular-dependent picture quality. This has become even more pronounced with the introduction of prism light-guide plates into displays for notebook computers. The TCO label is the leading labelling system for computer displays. Currently about 50% of all computer displays on the market are certified according to the TCO requirements. The requirements are periodically updated to keep up with technical development and the latest research in, e.g., visual ergonomics. The gloss level of the screen and its angular dependence have recently been investigated in user studies. A study of the effect of highly glossy screens compared to matt screens has been performed. The results show a slight advantage for the glossy screen when no disturbing reflections are present; however, the difference was not statistically significant. When disturbing reflections are present, the advantage turns into a larger disadvantage, and this difference is statistically significant. Another study, of angular dependence, has also been performed. The results indicate a linear relationship between picture quality and the centre luminance of the screen.
NASA Astrophysics Data System (ADS)
Kiss, F.
1991-10-01
Materials such as rubber and rocket propellants, as well as materials that flow, such as fluids or plastic solids, are often modeled as incompressible. For the analysis of incompressible problems, a series of element formulations and solution procedures were recently adopted and tested in the finite element system ASKA. Some of the experiences gained during the implementation of a hierarchical family of mixed Herrmann finite elements are considered.
NASA Astrophysics Data System (ADS)
Tavadyan, Levon, Prof; Sachkov, Viktor, Prof; Godymchuk, Anna, Dr.; Bogdan, Anna
2016-01-01
The 2nd International Symposium «Fundamental Aspects of Rare-earth Elements Mining and Separation and Modern Materials Engineering» (REES2015) was jointly organized by Tomsk State University (Russia), the National Academy of Science (Armenia), Shenyang Polytechnic University (China), Moscow Institute of Physics and Engineering (Russia), Siberian Physical-Technical Institute (Russia), and Tomsk Polytechnic University (Russia) on September 7-15, 2015, in Belokuriha, Russia. The Symposium provided high-quality presentations and gathered engineers, scientists, academicians, and young researchers working in the field of rare and rare-earth elements mining, modification, separation, elaboration and application, in order to facilitate the aggregation and sharing of interests and results for better collaboration and activity visibility. The goal of REES2015 was to bring researchers and practitioners together to share the latest knowledge on rare and rare-earth element technologies. The Symposium was aimed at presenting new trends in rare and rare-earth elements mining, research and separation and recent achievements in advanced materials elaboration and development for different purposes, as well as strengthening the already existing contacts between manufacturers, highly qualified specialists and young scientists. The topics of REES2015 were: (1) Problems of extraction and separation of rare and rare-earth elements; (2) Methods and approaches to the separation and isolation of rare and rare-earth elements with ultra-high purity; (3) Industrial technologies of production and separation of rare and rare-earth elements; (4) Economic aspects in the technology of rare and rare-earth elements; and (5) Rare and rare-earth based materials (application in metallurgy, catalysis, medicine, optoelectronics, etc.). We want to thank the Organizing Committee, the Universities and Sponsors supporting the Symposium, and everyone who contributed to the organization of the event and to
Illán, Ignacio Alvarez; Górriz, Juan Manuel; Ramírez, Javier; Lang, Elmar W; Salas-Gonzalez, Diego; Puntonet, Carlos G
2012-11-01
This paper explores the importance of the latent symmetry of the brain in computer-aided systems for diagnosing Alzheimer's disease (AD). Symmetry and asymmetry are studied from two points of view: (i) the development of an effective classifier within the scope of machine learning techniques, and (ii) the assessment of its relevance to the AD diagnosis in the early stages of the disease. The proposed methodology is based on eigenimage decomposition of single-photon emission-computed tomography images, using an eigenspace extension to accommodate odd and even eigenvectors separately. This feature extraction technique allows for support-vector-machine classification and image analysis. Identification of AD patterns is improved when the latent symmetry of the brain is considered, with an estimated 92.78% accuracy (92.86% sensitivity, 92.68% specificity) using a linear kernel and a leave-one-out cross validation strategy. Also, asymmetries may be used to define a test for AD that is very specific (90.24% specificity) but not especially sensitive. Two main conclusions are derived from the analysis of the eigenimage spectrum. Firstly, the recognition of AD patterns is improved when considering only the symmetric part of the spectrum. Secondly, asymmetries in the hypo-metabolic patterns, when present, are more pronounced in subjects with AD. Copyright © 2012 Elsevier B.V. All rights reserved.
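The even/odd eigenvector separation above rests on a basic fact: any image decomposes uniquely into a part invariant under mirror reflection and a part that changes sign under it. A minimal sketch of that decomposition about a left-right axis (illustrative only; the paper applies the idea to SPECT eigenimages, not raw pixels):

```python
import numpy as np

def symmetry_split(img, axis=1):
    """Split an image into its symmetric (even) and antisymmetric (odd)
    parts with respect to reflection about the given axis."""
    mirrored = np.flip(img, axis=axis)
    even = 0.5 * (img + mirrored)   # invariant under reflection
    odd = 0.5 * (img - mirrored)    # changes sign under reflection
    return even, odd
```

The two parts sum back to the original, so nothing is lost; a classifier can then be trained on the symmetric component alone, or asymmetry magnitude can be used as a separate marker.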
Some aspects of optimal human-computer symbiosis in multisensor geospatial data fusion
NASA Astrophysics Data System (ADS)
Levin, E.; Sergeyev, A.
Nowadays, the vast amount of available geospatial data provides additional opportunities for increasing targeting accuracy, owing to the possibility of geospatial data fusion. One of the most obvious operations is the determination of targets' 3D shapes and geospatial positions based on overlapped 2D imagery and sensor modeling. 3D models allow for the extraction of information about targets that cannot be measured directly from single, non-fused imagery. This paper describes an ongoing research effort at Michigan Tech attempting to combine the advantages of human analysts and automated computer processing into an efficient human-computer symbiosis for geospatial data fusion. Specifically, the capabilities provided by integrating novel human-computer interaction methods, such as eye tracking and EEG, into geospatial targeting interfaces were explored. The paper describes the research performed and its results in more detail.
NASA Astrophysics Data System (ADS)
Azzimonti, D. F.; Willot, F.; Jeulin, D.
2013-04-01
A 3D model of microstructure containing spherical and rhombi-shaped inclusions 'falling' along a deposit direction is used to simulate the distribution of nanoscale color pigments in paints. The microstructure's anisotropy and length scales, characterized by its covariance functions and representative volume element, follow those of transversely isotropic or orthotropic media. Full-field computations by means of the fast Fourier method are undertaken to compute the local and effective permittivity function of the mixture as a function of the wavelength in the visible spectrum. Transverse isotropy is numerically recovered for the effective permittivity of the deposit model of spheres. Furthermore, in the complex plane, the transverse and parallel components of the effective permittivity tensor are very close to the frontiers of the Hashin-Shtrikman domain at all frequencies (colors) of the incident wave. The representative volume element for the optical properties of paint deposit models is studied. At fixed accuracy, it is much larger for the imaginary part of the permittivity than for the real part, an effect of the strong variations of the electric displacement field, which exhibits hot spots, a feature previously described in the context of conductivity.
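The Hashin-Shtrikman frontier referred to above has a closed form for a two-phase isotropic 3-D mixture; with complex permittivities it traces an arc in the complex plane. A hedged sketch of the estimate with phase 1 playing the matrix role (swap the arguments to obtain the other bound; the function name is illustrative):

```python
def hashin_shtrikman(eps_matrix, eps_inclusion, f_inclusion):
    """3-D Hashin-Shtrikman estimate of the effective permittivity of a
    two-phase mixture, with `eps_matrix` as the matrix phase.
    Accepts real or complex permittivities."""
    f_m = 1.0 - f_inclusion
    return eps_matrix + f_inclusion / (
        1.0 / (eps_inclusion - eps_matrix) + f_m / (3.0 * eps_matrix)
    )
```

Evaluating both orderings at the same phase fractions brackets the effective permittivity, which is the "domain" the computed tensor components are compared against.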
A linear-scaling spectral-element method for computing electrostatic potentials.
Watson, Mark A; Hirao, Kimihiko
2008-11-14
A new linear-scaling method is presented for the fast numerical evaluation of the electronic Coulomb potential. Our approach uses a simple real-space partitioning of the system into cubic cells and a spectral-element representation of the density in a tensorial basis of high-order Chebyshev polynomials. Electrostatic interactions between non-neighboring cells are described using the fast multipole method. The remaining near-field interactions are computed in the tensorial basis as a sum of differential contributions by exploiting the numerical low-rank separability of the Coulomb operator. The method is applicable to arbitrary charge densities, avoids the Poisson equation, and does not involve the solution of any systems of linear equations. Above all, an adaptive resolution of the Chebyshev basis in each cell facilitates the accurate and efficient treatment of molecular systems. We demonstrate the performance of our implementation for quantum chemistry with benchmark calculations on the noble gas atoms, long-chain alkanes, and diamond fragments. We conclude that the spectral-element method can be a competitive tool for the accurate computation of electrostatic potentials in large-scale molecular systems.
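The tensorial Chebyshev representation on a cell has a simple 1-D analogue: sampling a smooth density at Chebyshev nodes and fitting a Chebyshev series yields coefficients that decay geometrically, so a modest polynomial degree captures the density to near machine precision. A sketch with NumPy's Chebyshev routines (a toy Gaussian density, not the paper's method):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 24                                          # polynomial degree per cell
x = np.cos(np.pi * np.arange(n + 1) / n)        # Chebyshev-Lobatto nodes
rho = np.exp(-4.0 * x**2)                       # toy charge density on [-1, 1]

coef = C.chebfit(x, rho, deg=n)                 # spectral coefficients

# Spectral accuracy: evaluation off the nodes still matches the density
xs = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(C.chebval(xs, coef) - np.exp(-4.0 * xs**2)))
```

In three dimensions the analogous representation is a tensor product of such bases in each coordinate, which is what makes the near-field Coulomb contractions separable.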
Patt, B.E.; Iwanczyk, J.S.; Szczebiot, R.; Maculewicz, G.; Wang, M.; Wang, Y.J.; Hedman, B.; Hodgson, K.O.; Cox, A.D.
1995-08-01
Construction of a 100-element HgI₂ detector array, with miniaturized electronics and software developed for synchrotron applications in the 5 keV to 35 keV region, has been completed. Recently, extended x-ray absorption fine structure (EXAFS) data on dilute (~1 mM) metallo-protein samples were obtained with up to seventy-five elements of the system installed. The data quality obtained is excellent and shows that the detector is quite competitive with commercially available systems. The system represents the largest detector array ever developed for high-resolution, high-count-rate x-ray synchrotron applications. It also represents the first development and demonstration of high-density miniaturized spectroscopy electronics with this level of performance. Lastly, the integration of the whole system into an automated computer-controlled environment represents a major advancement in the user interface for XAS measurements. These experiments clearly demonstrate that the HgI₂ system, with the miniaturized electronics and associated computer control, functions well. In addition, they show that the new system provides superior ease of use and functionality, and that data quality is as good as or better than with state-of-the-art cryogenically cooled Ge systems.
Computer-aided manufacturing for freeform optical elements by ultraprecision micromilling
NASA Astrophysics Data System (ADS)
Stoebenau, Sebastian; Kleindienst, Roman; Hofmann, Meike; Sinzinger, Stefan
2011-09-01
The successful fabrication of several freeform optical elements by ultraprecision micromilling is presented in this article. We discuss in detail the generation of the tool paths using different variations of a computer-aided manufacturing (CAM) process. Following a classical CAM approach, a reflective beam shaper was fabricated. The approach is based on a solid model calculated by optical design software. As no analytical description of the surface is needed, this procedure is the most general solution for the programming of the tool paths. A second approach is based on the same design data. But instead of a solid model, a higher order polynomial was fitted to the data using computational methods. Taking advantage of the direct programming capabilities of state-of-the-art computerized numerical control units, the mathematics to calculate the polynomial based tool paths on-the-fly during the machining process are implemented in a highly flexible CNC code. As another example for this programming method, the fabrication of a biconic lens from a closed analytical description directly derived from the optical design is shown. We provide details about the different programming methods and the fabrication processes as well as the results of characterizations concerning surface quality and shape accuracy of the freeform optical elements.
Efficient Computation of Info-Gap Robustness for Finite Element Models
Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.
2012-07-05
A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
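The robustness function described above can be approximated by the brute-force sampling the report compares against: sweep the uncertainty horizon alpha upward and keep the largest value for which every sampled perturbation of A still meets the performance requirement. A hedged sketch (the elementwise uncertainty model and all names are illustrative; the report's adjoint method exists precisely to avoid this sampling cost):

```python
import numpy as np

rng = np.random.default_rng(0)

def robustness(A0, b, perf, tol, alphas, n_samples=200):
    """Sampling estimate of info-gap robustness: the largest horizon
    alpha in `alphas` (ascending) for which every sampled perturbation
    A = A0 + alpha*Delta, with |Delta_ij| <= |A0_ij|, keeps the
    performance measure perf(x) of the solution x within tol."""
    best = 0.0
    for alpha in alphas:
        ok = True
        for _ in range(n_samples):
            Delta = rng.uniform(-1.0, 1.0, A0.shape) * np.abs(A0)
            x = np.linalg.solve(A0 + alpha * Delta, b)
            if perf(x) > tol:
                ok = False
                break
        if ok:
            best = alpha
        else:
            break
    return best
```

Each horizon requires many linear solves, which is the computational burden the adjoint formulation removes for Ax = b-like models.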
Computational aspects of the prediction of multidimensional transonic flows in turbomachinery
NASA Technical Reports Server (NTRS)
Oliver, D. A.; Sparis, P.
1975-01-01
The analytical prediction and description of transonic flow in turbomachinery is complicated by three fundamental effects: (1) the fluid equations describing the transonic regime are inherently nonlinear, (2) shock waves may be present in the flow, and (3) turbomachine blading is geometrically complex, possessing large amounts of curvature, stagger, and twist. A three-dimensional computation procedure for the study of transonic turbomachine fluid mechanics is described. The fluid differential equations and corresponding difference operators are presented, the boundary conditions for complex blade shapes are described, and the computational implementation and mapping procedures are developed. Illustrative results of a typical unthrottled transonic rotor are also presented.
Aspects on the Development of the New Computer Technology in Indonesia as a Developing Country.
1984-09-01
management jobs include creative or unstructured effort. Klahr and Leavitt, as cited by Myers [Ref. 2], suggest that executive jobs that involve processing, assessing, and acting upon information will be creatively extended and augmented by the computer. The increased use of computers, both large and small... [Figure residue: adopter-category percentages (2.5%, 13.5%, 34%, 34%) on the innovativeness dimension, as measured by the time at which an individual adopts an innovation.]
SoftLab: A Soft-Computing Software for Experimental Research with Commercialization Aspects
NASA Technical Reports Server (NTRS)
Akbarzadeh-T, M.-R.; Shaikh, T. S.; Ren, J.; Hubbell, Rob; Kumbla, K. K.; Jamshidi, M
1998-01-01
SoftLab is a software environment for research and development in intelligent modeling/control using soft-computing paradigms such as fuzzy logic, neural networks, genetic algorithms, and genetic programs. SoftLab addresses the inadequacies of the existing soft-computing software by supporting comprehensive multidisciplinary functionalities from management tools to engineering systems. Furthermore, the built-in features help the user process/analyze information more efficiently by a friendly yet powerful interface, and will allow the user to specify user-specific processing modules, hence adding to the standard configuration of the software environment.
NASA Technical Reports Server (NTRS)
Ecer, A.; Akay, H. U.
1981-01-01
The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.
Adaptive finite element simulation of flow and transport applications on parallel computers
NASA Astrophysics Data System (ADS)
Kirk, Benjamin Shelton
The subject of this work is the adaptive finite element simulation of problems arising in flow and transport applications on parallel computers. Of particular interest are new contributions to adaptive mesh refinement (AMR) in this parallel high-performance context, including novel work on data structures, treatment of constraints in a parallel setting, generality and extensibility via object-oriented programming, and the design/implementation of a flexible software framework. This technology and software capability then enables more robust, reliable treatment of multiscale--multiphysics problems and specific studies of fine-scale interaction such as those in biological chemotaxis (Chapter 4) and high-speed shock physics for compressible flows (Chapter 5). The work begins by presenting an overview of key concepts and data structures employed in AMR simulations. Of particular interest is how these concepts are applied in the physics-independent software framework which is developed here and is the basis for all the numerical simulations performed in this work. This open-source software framework has been adopted by a number of researchers in the U.S. and abroad for use in a wide range of applications. The dynamic nature of adaptive simulations poses particular issues for efficient implementation on distributed-memory parallel architectures. Communication cost, computational load balance, and memory requirements must all be considered when developing adaptive software for this class of machines. Specific extensions to the adaptive data structures to enable implementation on parallel computers are therefore considered in detail. The libMesh framework for performing adaptive finite element simulations on parallel computers is developed to provide a concrete implementation of the above ideas. This physics-independent framework is applied to two distinct flow and transport application classes in the subsequent application studies to illustrate the flexibility of the
NASA Astrophysics Data System (ADS)
Javili, A.; Saeb, S.; Steinmann, P.
2017-01-01
In the past decades computational homogenization has proven to be a powerful strategy to compute the overall response of continua. Central to computational homogenization is the Hill-Mandel condition. The Hill-Mandel condition is fulfilled via imposing displacement boundary conditions (DBC), periodic boundary conditions (PBC) or traction boundary conditions (TBC) collectively referred to as canonical boundary conditions. While DBC and PBC are widely implemented, TBC remains poorly understood, with a few exceptions. The main issue with TBC is the singularity of the stiffness matrix due to rigid body motions. The objective of this manuscript is to propose a generic strategy to implement TBC in the context of computational homogenization at finite strains. To eliminate rigid body motions, we introduce the concept of semi-Dirichlet boundary conditions. Semi-Dirichlet boundary conditions are non-homogeneous Dirichlet-type constraints that simultaneously satisfy the Neumann-type conditions. A key feature of the proposed methodology is its applicability for both strain-driven as well as stress-driven homogenization. The performance of the proposed scheme is demonstrated via a series of numerical examples.
Tying Theory To Practice: Cognitive Aspects of Computer Interaction in the Design Process.
ERIC Educational Resources Information Center
Mikovec, Amy E.; Dake, Dennis M.
The new medium of computer-aided design requires changes to the creative problem-solving methodologies typically employed in the development of new visual designs. Most theoretical models of creative problem-solving suggest a linear progression from preparation and incubation to some type of evaluative study of the "inspiration." These…
Delta: An object-oriented finite element code architecture for massively parallel computers
Weatherby, J.R.; Schutt, J.A.; Peery, J.S.; Hogan, R.E.
1996-02-01
Delta is an object-oriented code architecture based on the finite element method which enables simulation of a wide range of engineering mechanics problems in a parallel processing environment. Written in C++, Delta is a natural framework for algorithm development and for research involving coupling of mechanics from different Engineering Science disciplines. To enhance flexibility and encourage code reuse, the architecture provides a clean separation of the major aspects of finite element programming. Spatial discretization, temporal discretization, and the solution of linear and nonlinear systems of equations are each implemented separately, independent from the governing field equations. Other attractive features of the Delta architecture include support for constitutive models with internal variables, reusable "matrix-free" equation solvers, and support for region-to-region variations in the governing equations and the active degrees of freedom. A demonstration code built from the Delta architecture has been used in two-dimensional and three-dimensional simulations involving dynamic and quasi-static solid mechanics, transient and steady heat transport, and flow in porous media.
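A "matrix-free" solver of the kind mentioned above needs only the action of the operator on a vector, never an assembled matrix. A minimal sketch with a 1-D Laplacian stencil and conjugate gradients (illustrative Python, not Delta's C++ implementation; all names are ours):

```python
import numpy as np

def apply_laplacian(x):
    """Matrix-free action of the 1-D Dirichlet Laplacian: the
    (-1, 2, -1) stencil is applied directly, so the tridiagonal
    matrix is never assembled."""
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

def cg(apply_A, b, tol=1e-10, maxiter=500):
    """Conjugate gradients for SPD systems, requiring only the
    operator's action apply_A(v), not its entries."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

The separation of the solver from the operator is exactly the kind of decoupling between equation solution and spatial discretization that an architecture like Delta formalizes.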
Aragon, Sergio; Hahn, David K.
2006-01-01
A precise boundary element method for the computation of hydrodynamic properties has been applied to the study of a large suite of 41 soluble proteins ranging from 6.5 to 377 kDa in molecular mass. A hydrodynamic model consisting of a rigid protein excluded volume, obtained from crystallographic coordinates, surrounded by a uniform hydration thickness has been found to yield properties in excellent agreement with experiment. The hydration thickness was determined to be δ = 1.1 ± 0.1 Å. Using this value, standard deviations from experimental measurements are: 2% for the specific volume; 2% for the translational diffusion coefficient, and 6% for the rotational diffusion coefficient. These deviations are comparable to experimental errors in these properties. The precision of the boundary element method allows the unified description of all of these properties with a single hydration parameter, thus far not achieved with other methods. An approximate method for computing transport properties with a statistical precision of 1% or better (compared to 0.1–0.2% for the full computation) is also presented. We have also estimated the total amount of hydration water with a typical −9% deviation from experiment in the case of monomeric proteins. Both the water of hydration and the more precise translational diffusion data hint that some multimeric proteins may not have the same solution structure as that in the crystal because the deviations are systematic and larger than in the monomeric case. On the other hand, the data for monomeric proteins conclusively show that there is no difference in the protein structure going from the crystal into solution. PMID:16714342
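The link between hydrodynamic size and translational diffusion is easiest to see in the limiting case of a sphere, via the Stokes-Einstein relation: a uniform hydration shell simply enlarges the effective radius and slows diffusion. A back-of-envelope sketch (the paper's boundary element method handles the true nonspherical protein shape; the radii and water viscosity here are illustrative):

```python
import math

def stokes_einstein_D(radius_nm, T=293.15, eta=1.002e-3):
    """Translational diffusion coefficient (m^2/s) of a sphere of the
    given hydrodynamic radius, via D = kT / (6 pi eta R).
    Default viscosity is that of water near 20 C."""
    kB = 1.380649e-23          # Boltzmann constant, J/K
    R = radius_nm * 1e-9       # nm -> m
    return kB * T / (6.0 * math.pi * eta * R)

# A 2 nm protein with a ~0.11 nm hydration shell diffuses slightly
# more slowly than the bare excluded volume would suggest
D_bare = stokes_einstein_D(2.0)
D_hydrated = stokes_einstein_D(2.11)
```

The few-percent shift between the two values is the scale of effect that makes a precise method and a single well-determined hydration thickness necessary.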
Fast Computation of Global Sensitivity Kernel Database Based on Spectral-Element Simulations
NASA Astrophysics Data System (ADS)
Sales de Andrade, Elliott; Liu, Qinya
2017-07-01
Finite-frequency sensitivity kernels, a theoretical improvement from simple infinitely thin ray paths, have been used extensively in recent global and regional tomographic inversions. These sensitivity kernels provide more consistent and accurate interpretation of a growing number of broadband measurements, and are critical in mapping 3D heterogeneous structures of the mantle. Based on Born approximation, the calculation of sensitivity kernels requires the interaction of the forward wavefield and an adjoint wavefield generated by placing adjoint sources at stations. Both fields can be obtained accurately through numerical simulations of seismic wave propagation, particularly important for kernels of phases that cannot be sufficiently described by ray theory (such as core-diffracted waves). However, the total number of forward and adjoint numerical simulations required to build kernels for individual source-receiver pairs and to form the design matrix for classical tomography is computationally unaffordable. In this paper, we take advantage of the symmetry of 1D reference models, perform moment tensor forward and point force adjoint spectral-element simulations, and save six-component strain fields only on the equatorial plane based on the open-source spectral-element simulation package, SPECFEM3D_GLOBE. Sensitivity kernels for seismic phases at any epicentral distance can be efficiently computed by combining forward and adjoint strain wavefields from the saved strain field database, which significantly reduces both the number of simulations and the amount of storage required for global tomographic problems. Based on this technique, we compute traveltime, amplitude and/or boundary kernels of isotropic and radially anisotropic elastic parameters for various (P, S, P_{diff}, S_{diff}, depth, surface-reflected, surface wave, S 660 S boundary, etc.) phases for 1D ak135 model, in preparation for future global tomographic inversions.
FLAME: A finite element computer code for contaminant transport in variably saturated media
Baca, R.G.; Magnuson, S.O.
1992-06-01
A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in porous media with discrete fractures. This report presents the following: a description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A.
MPSalsa: a finite element computer program for reacting flow problems. Part 2 - user's guide
Salinger, A.; Devine, K.; Hennigan, G.; Moffat, H.
1996-09-01
This manual describes the use of MPSalsa, an unstructured finite element (FE) code for solving chemically reacting flow problems on massively parallel computers. MPSalsa has been written to enable the rigorous modeling of the complex geometry and physics found in engineering systems that exhibit coupled fluid flow, heat transfer, mass transfer, and detailed reactions. In addition, considerable effort has been made to ensure that the code makes efficient use of the computational resources of massively parallel (MP), distributed memory architectures in a way that is nearly transparent to the user. The result is the ability to simultaneously model both three-dimensional geometries and flow as well as detailed reaction chemistry in a timely manner on MP computers, an ability we believe to be unique. MPSalsa has been designed to allow the experienced researcher considerable flexibility in modeling a system. Any combination of the momentum equations, energy balance, and an arbitrary number of species mass balances can be solved. The physical and transport properties can be specified as constants, as functions, or taken from the Chemkin library and associated database. Any of the standard set of boundary conditions and source terms can be adapted by writing user functions, for which templates and examples exist.
Computational aspects of crack growth in sandwich plates from reinforced concrete and foam
NASA Astrophysics Data System (ADS)
Papakaliatakis, G.; Panoskaltsis, V. P.; Liontas, A.
2012-12-01
In this work we study the initiation and propagation of cracks in sandwich plates made from reinforced concrete in the boundaries and from a foam polymeric material in the core. A nonlinear finite element approach is followed. Concrete is modeled as an elastoplastic material with its tensile behavior and damage taken into account. Foam is modeled as a crushable, isotropic compressible material. We analyze slabs with a pre-existing macro crack at the position of the maximum bending moment and we study the macrocrack propagation, as well as the condition under which we have crack arrest.
Carl Aberg, Kristoffer; Doell, Kimberly C.; Schwartz, Sophie
2016-01-01
Learning how to gain rewards (approach learning) and avoid punishments (avoidance learning) is fundamental for everyday life. While individual differences in approach and avoidance learning styles have been related to genetics and aging, the contribution of personality factors, such as traits, remains undetermined. Moreover, little is known about the computational mechanisms mediating differences in learning styles. Here, we used a probabilistic selection task with positive and negative feedbacks, in combination with computational modelling, to show that individuals displaying better approach (vs. avoidance) learning scored higher on measures of approach (vs. avoidance) trait motivation, but, paradoxically, also displayed reduced learning speed following positive (vs. negative) outcomes. These data suggest that learning different types of information depend on associated reward values and internal motivational drives, possibly determined by personality traits. PMID:27851807
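The computational modelling referred to above is typically a reinforcement learner with separate learning rates for positive and negative prediction errors. A hypothetical minimal version follows; the task settings, epsilon-greedy choice rule, and parameter values here are illustrative and are not taken from the study itself.

```python
import random

def run_learner(alpha_pos, alpha_neg, trials=2000, p_reward=0.8, seed=0):
    """Two-option probabilistic selection task learned by Q-learning with
    asymmetric learning rates: alpha_pos applies after positive prediction
    errors, alpha_neg after negative ones. Returns the fraction of choices
    of the higher-reward-probability option ('correct' choices)."""
    rng = random.Random(seed)
    q = [0.0, 0.0]                       # action values
    correct = 0
    for _ in range(trials):
        # epsilon-greedy choice so both options keep being sampled
        if rng.random() < 0.1:
            choice = rng.randrange(2)
        else:
            choice = 0 if q[0] >= q[1] else 1
        p = p_reward if choice == 0 else 1.0 - p_reward
        reward = 1.0 if rng.random() < p else 0.0
        delta = reward - q[choice]       # prediction error
        q[choice] += (alpha_pos if delta > 0 else alpha_neg) * delta
        correct += (choice == 0)
    return correct / trials
```

Comparing `run_learner(0.3, 0.1)` with `run_learner(0.1, 0.3)` is the kind of contrast such models use to separate approach-driven from avoidance-driven learning styles.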
NASA Astrophysics Data System (ADS)
Vala, Jiří; Jarošová, Petra
2016-07-01
Development of advanced materials resistant to high temperature, needed namely for the design of heat storage for low-energy and passive buildings, requires simple, inexpensive and reliable methods of identification of their temperature-sensitive thermal conductivity and diffusivity, covering both a well-designed experimental setup and the implementation of robust and effective computational algorithms. Special geometrical configurations offer a possibility of quasi-analytical evaluation of temperature development for direct problems, whereas inverse problems of simultaneous evaluation of thermal conductivity and diffusivity must be handled carefully, using least-squares (minimum variance) arguments. This paper demonstrates a proper mathematical and computational approach to such a model problem, exploiting the radial symmetry of hot-wire measurements, and includes its numerical implementation.
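The least-squares step is simple in the standard long-time approximation of the hot-wire method, where the temperature rise grows with ln t and the slope fixes the conductivity. A sketch under that approximation (the synthetic data and symbols are illustrative; the paper's full inverse problem also recovers diffusivity):

```python
import math

def estimate_conductivity(times, temps, q):
    """Least-squares estimate of thermal conductivity from hot-wire data.

    Uses the long-time asymptote dT ~ (q / (4*pi*lam)) * ln(t) + c, so the
    slope of dT versus ln(t) gives lam = q / (4*pi*slope). q is the line
    power per unit length of the wire."""
    x = [math.log(t) for t in times]
    n = len(x)
    mx = sum(x) / n
    my = sum(temps) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, temps)) / \
            sum((xi - mx) ** 2 for xi in x)
    return q / (4 * math.pi * slope)

# synthetic data with lam = 0.5 W/(m K) and line power q = 1.0 W/m
lam_true, q = 0.5, 1.0
times = [1, 2, 5, 10, 20, 50, 100]
temps = [q / (4 * math.pi * lam_true) * math.log(t) + 3.0 for t in times]
lam_est = estimate_conductivity(times, temps, q)
```

On noise-free synthetic data the fit recovers lam_true exactly; with measured data the same slope fit is the minimum-variance estimate the abstract alludes to.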
A computational approach to the dynamic aspects of primitive auditory scene analysis.
Kashino, Makio; Adachi, Eisuke; Hirose, Haruto
2013-01-01
Recent psychophysical and physiological studies demonstrated that auditory scene analysis (ASA) is inherently a dynamic process, suggesting that the system conducting ASA constantly changes itself, incorporating the dynamics of sound sources in the acoustic scene, to realize efficient and robust information processing. Here, we propose computational models of ASA based on two computational principles of ASA, namely, separation in a feature space and temporal regularity. We explicitly introduced learning processes, so that the system could autonomously develop its selectivity to features or bases for analyses according to the observed acoustic data. Simulation results demonstrated that the models were able to predict some essential features of behavioral properties of ASA, such as the buildup of streaming, multistable perception, and the segregation of repeated patterns embedded in distracting sounds.
Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie
2016-01-01
Learning how to gain rewards (approach learning) and avoid punishments (avoidance learning) is fundamental for everyday life. While individual differences in approach and avoidance learning styles have been related to genetics and aging, the contribution of personality factors, such as traits, remains undetermined. Moreover, little is known about the computational mechanisms mediating differences in learning styles. Here, we used a probabilistic selection task with positive and negative feedbacks, in combination with computational modelling, to show that individuals displaying better approach (vs. avoidance) learning scored higher on measures of approach (vs. avoidance) trait motivation, but, paradoxically, also displayed reduced learning speed following positive (vs. negative) outcomes. These data suggest that learning different types of information depend on associated reward values and internal motivational drives, possibly determined by personality traits.
CAVASS: a computer-assisted visualization and analysis software system - image processing aspects
NASA Astrophysics Data System (ADS)
Udupa, Jayaram K.; Grevera, George J.; Odhner, Dewey; Zhuge, Ying; Souza, Andre; Mishra, Shipra; Iwanaga, Tad
2007-03-01
The development of the concepts within 3DVIEWNIX, and of the software system 3DVIEWNIX itself, dates back to the 1970s. Since then, a series of software packages for Computer Assisted Visualization and Analysis (CAVA) of images has come out of our group, the most recent being 3DVIEWNIX, released in 1993, and all were distributed with source code. CAVASS, an open source system, is the latest in this series, and represents the next major incarnation of 3DVIEWNIX. It incorporates four groups of operations: IMAGE PROCESSING (including ROI, interpolation, filtering, segmentation, registration, morphological, and algebraic operations), VISUALIZATION (including slice display, reslicing, MIP, surface rendering, and volume rendering), MANIPULATION (for modifying structures and surgery simulation), and ANALYSIS (various ways of extracting quantitative information). CAVASS is designed to work on all platforms. Its key features are: (1) most major CAVA operations incorporated; (2) very efficient algorithms and highly efficient implementations; (3) parallelized algorithms for computationally intensive operations; (4) parallel implementation via distributed computing on a cluster of PCs; (5) interfaces to other systems such as CAD/CAM software, ITK, and statistical packages; (6) an easy-to-use GUI. In this paper, we focus on the image processing operations and compare the performance of CAVASS with that of ITK. Our conclusions, based on assessing performance with a regular (6 MB), a large (241 MB), and a super (873 MB) 3D image data set, are as follows: CAVASS is considerably more efficient than ITK, especially in those operations which are computationally intensive. It can handle considerably larger data sets than ITK. It is ready to use in applications since it provides an easy-to-use GUI. Users can easily build a cluster from ordinary inexpensive PCs and reap the full power of CAVASS inexpensively compared to expensive multiprocessing systems which are less
Eirín-López, José M
2013-01-01
The study of chromatin constitutes one of the most active research fields in life sciences, being subject to constant revisions that continuously redefine the state of the art in its knowledge. As every other rapidly changing field, chromatin biology requires clear and straightforward educational strategies able to efficiently translate such a vast body of knowledge to the classroom. With this aim, the present work describes a multidisciplinary computer lab designed to introduce undergraduate students to the dynamic nature of chromatin, within the context of the one semester course "Chromatin: Structure, Function and Evolution." This exercise is organized in three parts including (a) molecular evolutionary biology of histone families (using the H1 family as example), (b) histone structure and variation across different animal groups, and (c) effect of histone diversity on nucleosome structure and chromatin dynamics. By using freely available bioinformatic tools that can be run on common computers, the concept of chromatin dynamics is interactively illustrated from a comparative/evolutionary perspective. At the end of this computer lab, students are able to translate the bioinformatic information into a biochemical context in which the relevance of histone primary structure on chromatin dynamics is exposed. During the last 8 years this exercise has proven to be a powerful approach for teaching chromatin structure and dynamics, allowing students a higher degree of independence during the processes of learning and self-assessment.
NASA Astrophysics Data System (ADS)
Gianti, Eleonora
Computer-Aided Drug Design (CADD) has deservedly gained increasing popularity in modern drug discovery (Schneider, G.; Fechner, U. 2005), whether applied to academic basic research or the pharmaceutical industry pipeline. In this work, after reviewing theoretical advancements in CADD, we integrated novel and state-of-the-art methods to assist in the design of small-molecule inhibitors of current cancer drug targets, specifically: Androgen Receptor (AR), a nuclear hormone receptor required for carcinogenesis of Prostate Cancer (PCa); Signal Transducer and Activator of Transcription 5 (STAT5), implicated in PCa progression; and Epstein-Barr Nuclear Antigen-1 (EBNA1), essential to the Epstein-Barr Virus (EBV) during latent infections. Androgen Receptor. With the aim of generating binding mode hypotheses for a class (Handratta, V.D. et al. 2005) of dual AR/CYP17 inhibitors (CYP17 is a key enzyme for androgen biosynthesis and therefore implicated in PCa development), we successfully implemented a receptor-based computational strategy based on flexible receptor docking (Gianti, E.; Zauhar, R.J. 2012). Then, with the ultimate goal of identifying novel AR binders, we performed Virtual Screening (VS) by Fragment-Based Shape Signatures, an improved version of the original method developed in our Laboratory (Zauhar, R.J. et al. 2003), and we used the results to fully assess the high-level performance of this innovative tool in computational chemistry. STAT5. The SRC Homology 2 (SH2) domain of STAT5 is responsible for phospho-peptide recognition and activation. As a keystone of Structure-Based Drug Design (SBDD), we characterized key residues responsible for binding. We also generated a model of the STAT5 receptor bound to a phospho-peptide ligand, which was validated by docking publicly known STAT5 inhibitors. Then, we performed Shape Signatures- and docking-based VS of the ZINC database (zinc.docking.org), followed by Molecular Mechanics Generalized Born Surface Area (MMGBSA
Some computational aspects of the hals (harmonic analysis of x-ray line shape) method
Moshkina, T.I.; Nakhmanson, M.S.
1986-02-01
This paper discusses the problem of distinguishing the analytical line from the background and of approximating the background component. One of the constituent parts of the program package in the procedural-mathematical software for x-ray investigations of polycrystalline substances, in application to the DRON-3, DRON-2 and ADP-1 diffractometers, is the SSF system of programs, which is designed for determining the parameters of the substructure of materials. The SSF system is tailored not only to Unified Series (ES) computers, but also to the M-6000 and SM-1 minicomputers.
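The separation step can be illustrated with the simplest useful background model: a straight line fitted to the peak-free edges of the scan window and subtracted from the profile. This is a sketch only; the actual HALS procedure for approximating the background component is more elaborate.

```python
def subtract_linear_background(two_theta, intensity, n_edge=3):
    """Estimate a linear background from the first and last n_edge points of
    a peak window (assumed peak-free) and subtract it from the profile.
    Returns the background-corrected intensities."""
    xs = list(two_theta[:n_edge]) + list(two_theta[-n_edge:])
    ys = list(intensity[:n_edge]) + list(intensity[-n_edge:])
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    # subtract the fitted line b(x) = my + slope * (x - mx) at every point
    return [y - (my + slope * (x - mx)) for x, y in zip(two_theta, intensity)]
```

With the background removed, the residual profile is what line-shape harmonic analysis then decomposes to extract substructure parameters.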
Chen, Xiaodong; Ren, Liqiang; Zheng, Bin; Liu, Hong
2013-01-01
The conventional optical microscopes have been used widely in scientific research and in clinical practice. The modern digital microscopic devices combine the power of optical imaging and computerized analysis, archiving and communication techniques. It has a great potential in pathological examinations for improving the efficiency and accuracy of clinical diagnosis. This chapter reviews the basic optical principles of conventional microscopes, fluorescence microscopes and electron microscopes. The recent developments and future clinical applications of advanced digital microscopic imaging methods and computer assisted diagnosis schemes are also discussed.
Computational aspects of real-time simulation of rotary-wing aircraft. M.S. Thesis
NASA Technical Reports Server (NTRS)
Houck, J. A.
1976-01-01
A study was conducted to determine the effects of degrading a rotating blade element rotor mathematical model suitable for real-time simulation of rotorcraft. Three methods of degradation were studied, reduction of number of blades, reduction of number of blade segments, and increasing the integration interval, which has the corresponding effect of increasing blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations are made concerning model degradation which should serve as a guide for future users of this mathematical model, and in general, they are in order of minimum impact on model validity: (1) reduction of number of blade segments; (2) reduction of number of blades; and (3) increase of integration interval and azimuthal advance angle. Extreme limits are specified beyond which a different rotor mathematical model should be used.
NASA Astrophysics Data System (ADS)
Salamon, Joe
In this dissertation, I will discuss and explore the various theoretical pillars required to investigate the world of discretized gauge theories in a purely classical setting, with the long-term aim of achieving a fully-fledged discretization of General Relativity (GR). I will start with a brief review of differential forms, then present some results on the geometric framework of finite element exterior calculus (FEEC); in particular, I will elaborate on integrating metric structures within the framework and categorize the dual spaces of the various spaces of polynomial differential forms P_rΛ^k(R^n). After a brief pedagogical detour on Noether's two theorems, I will apply all of the above to discretizations of electromagnetism and linearized GR. I will conclude with an excursion into the geodesic finite element method (GFEM) as a way to generalize some of the above notions to curved manifolds.
Cone-Beam Computed Tomography and Radiographs in Dentistry: Aspects Related to Radiation Dose
Lorenzoni, Diego Coelho; Bolognese, Ana Maria; Garib, Daniela Gamba; Guedes, Fabio Ribeiro; Sant'Anna, Eduardo Franzotti
2012-01-01
Introduction. The aim of this study was to discuss the radiation doses associated with plain radiographs, cone-beam computed tomography (CBCT), and conventional computed tomography (CT) in dentistry, with a special focus on orthodontics. Methods. A systematic search for articles was carried out in MEDLINE from 1997 to March 2011. Results. Twenty-seven articles met the established criteria. The data from these papers were grouped in a table and discussed. Conclusions. Increases in kV, mA, exposure time, and field of view (FOV) increase the radiation dose. The dose for CT is greater than for the other modalities. When the full-mouth series (FMX) is performed with round collimation, the orthodontic radiographs deliver a higher dose than most large-FOV CBCT scans, but the dose can be reduced if rectangular collimation is used, yielding a lower effective dose than large-FOV CBCT. Despite its image quality, CBCT does not replace the FMX. In addition to the radiation dose, image quality and diagnostic needs should be strongly taken into account. PMID:22548064
Preprocessor and postprocessor computer programs for a radial-flow finite-element model
Pucci, A.A.; Pope, D.A.
1987-01-01
Preprocessing and postprocessing computer programs that enhance the utility of the U.S. Geological Survey radial-flow model have been developed. The preprocessor program: (1) generates a triangular finite element mesh from minimal data input, (2) produces graphical displays and tabulations of data for the mesh, and (3) prepares an input data file to use with the radial-flow model. The postprocessor program is a version of the radial-flow model, which was modified to (1) produce graphical output for simulation and field results, (2) generate a statistic for comparing the simulation results with observed data, and (3) allow hydrologic properties to vary in the simulated region. Examples of the use of the processor programs for a hypothetical aquifer test are presented. Instructions for the data files, format instructions, and a listing of the preprocessor and postprocessor source codes are given in the appendixes. (Author's abstract)
NASA Astrophysics Data System (ADS)
Komninos, Yannis; Mercouris, Theodoros; Nicolaides, Cleanthes A.
2017-01-01
The present study examines the mathematical properties of the free-free (f-f) matrix elements of the full electric field operator, O_E(κ, r), of the multipolar Hamiltonian, where κ is the photon wavenumber. Special methods are developed and applied for their computation, for the general case where the scattering wavefunctions are calculated numerically in the potential of the term-dependent (N-1)-electron core, and are energy-normalized. It is found that, on the energy axis, the f-f matrix elements of O_E(κ, r) have singularities of first order, i.e., as ε' → ε, they behave as (ε - ε')^(-1). The numerical applications are for f-f transitions in hydrogen and neon, obeying electric dipole and quadrupole selection rules. In the limit κ = 0, O_E(κ, r) reduces to the length form of the electric dipole approximation (EDA). It is found that the results for the EDA agree with those of O_E(κ, r), with the exception of a wave-number region k' = k ± κ about the point k' = k.
A finite element framework for computation of protein normal modes and mechanical response.
Bathe, Mark
2008-03-01
A computational framework based on the Finite Element Method is presented to calculate the normal modes and mechanical response of proteins and their supramolecular assemblies. Motivated by elastic network models, proteins are treated as continuum elastic solids with molecular volume defined by their solvent-excluded surface. The discretized Finite Element representation is obtained using a surface simplification algorithm that facilitates the generation of models of arbitrary prescribed spatial resolution. The procedure is applied to a mutant of T4 phage lysozyme, G-actin, syntenin, cytochrome-c', beta-tubulin, and the supramolecular assembly filamentous actin (F-actin). Equilibrium thermal fluctuations of alpha-carbon atoms and their inter-residue correlations compare favorably with all-atom-based results, the Rotational-Translational Block procedure, and experiment. Additionally, the free vibration and compressive buckling responses of F-actin are in quantitative agreement with experiment. The proposed methodology is applicable to any protein or protein assembly and facilitates the incorporation of specific atomic-level interactions, including aqueous-electrolyte-mediated electrostatic effects and solvent damping. The procedure is equally applicable to proteins with known atomic coordinates as it is to electron density maps of proteins, protein complexes, and supramolecular assemblies of unknown atomic structure.
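Whatever the discretization, the normal-mode step ends in the generalized eigenproblem K·u = ω²·M·u for stiffness matrix K and mass matrix M. A small self-contained sketch, with a toy spring-mass chain standing in for the assembled finite element matrices of a protein model:

```python
import numpy as np

def normal_modes(K, M):
    """Normal modes from the generalized eigenproblem K u = w^2 M u.
    M is assumed diagonal (e.g., lumped masses), so the problem reduces to a
    symmetric standard eigenproblem via the transform M^(-1/2) K M^(-1/2)."""
    m_inv_half = np.diag(1.0 / np.sqrt(np.diag(M)))
    A = m_inv_half @ K @ m_inv_half        # symmetric reduced matrix
    w2, v = np.linalg.eigh(A)              # ascending eigenvalues w^2
    modes = m_inv_half @ v                 # back-transform eigenvectors
    return np.sqrt(np.clip(w2, 0.0, None)), modes

# toy example: 3-mass chain with unit springs and fixed ends
K = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
M = np.eye(3)
freqs, modes = normal_modes(K, M)          # angular frequencies, mode shapes
```

For a protein-scale mesh the matrices are sparse and only the lowest modes are wanted, so a sparse solver would replace `eigh`, but the eigenproblem itself is the same.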
Yuan, Dajing; Bakker, Eric
2017-08-01
Finite difference analysis of ion-selective membranes is a valuable tool for understanding a range of time dependent phenomena such as response times, long and medium term potential drifts, determination of selectivity, and (re)conditioning kinetics. It is here shown that an established approach based on the diffusion layer model applied to an ion-exchange membrane fails to use mass transport to account for concentration changes at the membrane side of the phase boundary. Instead, such concentrations are imposed by the ion-exchange equilibrium condition, without taking into account the source of these ions. The limitation is illustrated with a super-Nernstian potential jump, where a membrane initially void of analyte ion is exposed to incremental concentrations of analyte in the sample. To overcome this limitation, the two boundary elements, one at either side of the sample-membrane interface, are treated here as a combined entity and its total concentration change is dictated by diffusional fluxes into and out of the interface. For each time step, the concentration distribution between the two boundary elements is then computed by ion-exchange theory. The resulting finite difference simulation is much more robust than the earlier model and gives a good correlation to experiments.
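The corrected interface treatment described above, where the two boundary elements are updated as one combined entity from the diffusional fluxes and then re-partitioned, can be sketched in a single explicit time step. This is a simplified illustration: a constant partition ratio K stands in for the full ion-exchange equilibrium theory of the paper.

```python
def diffuse_step(aq, mem, D_aq, D_mem, dx, dt, K):
    """One explicit finite-difference step for an aqueous phase `aq` and a
    membrane phase `mem` meeting at an interface. The last aqueous cell and
    first membrane cell are treated as one combined entity: their total
    concentration is updated from the fluxes into the interface, then split
    by the equilibrium partition ratio K = c_mem / c_aq."""
    r_aq = D_aq * dt / dx ** 2
    r_mem = D_mem * dt / dx ** 2
    new_aq, new_mem = aq[:], mem[:]
    # interior cells: standard explicit (FTCS) update
    for i in range(1, len(aq) - 1):
        new_aq[i] = aq[i] + r_aq * (aq[i - 1] - 2 * aq[i] + aq[i + 1])
    for i in range(1, len(mem) - 1):
        new_mem[i] = mem[i] + r_mem * (mem[i - 1] - 2 * mem[i] + mem[i + 1])
    # outer ends: no-flux boundaries
    new_aq[0] = aq[0] + r_aq * (aq[1] - aq[0])
    new_mem[-1] = mem[-1] + r_mem * (mem[-2] - mem[-1])
    # combined boundary entity: flux balance first, equilibrium split second
    total = aq[-1] + mem[0]
    total += r_aq * (aq[-2] - aq[-1]) - r_mem * (mem[0] - mem[1])
    new_aq[-1] = total / (1.0 + K)
    new_mem[0] = K * total / (1.0 + K)
    return new_aq, new_mem
```

Because the interface update is written as a flux balance on the combined pair, total mass is conserved exactly, which is the robustness property the revised scheme gains over imposing the equilibrium concentrations directly.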
Sternick, Marcelo Back; Dallacosta, Darlan; Bento, Daniela Águida; do Reis, Marcelo Lemos
2015-01-01
Objective: To analyze the rigidity of a platform-type external fixator assembly, according to different numbers of pins on each clamp. Methods: Computer simulation on a large-sized Cromus dynamic external fixator (Baumer SA) was performed using a finite element method, in accordance with the standard ASTM F1541. The models were generated with approximately 450,000 quadratic tetrahedral elements. Assemblies with two, three and four Schanz pins of 5.5 mm in diameter in each clamp were compared. Every model was subjected to a maximum force of 200 N, divided into 10 sub-steps. For the components, the behavior of the material was assumed to be linear, elastic, isotropic and homogeneous. For each model, the rigidity of the assembly and the Von Mises stress distribution were evaluated. Results: The rigidity of the system was 307.6 N/mm for two pins, 369.0 N/mm for three and 437.9 N/mm for four. Conclusion: The results showed that four Schanz pins in each clamp promoted rigidity that was 19% greater than in the configuration with three pins and 42% greater than with two pins. Higher tension occurred in configurations with fewer pins. In the models analyzed, the maximum tension occurred on the surface of the pin, close to the fixation area. PMID:27047879
Sternick, Marcelo Back; Dallacosta, Darlan; Bento, Daniela Águida; do Reis, Marcelo Lemos
2012-01-01
To analyze the rigidity of a platform-type external fixator assembly, according to different numbers of pins on each clamp. Computer simulation on a large-sized Cromus dynamic external fixator (Baumer SA) was performed using a finite element method, in accordance with the standard ASTM F1541. The models were generated with approximately 450,000 quadratic tetrahedral elements. Assemblies with two, three and four Schanz pins of 5.5 mm in diameter in each clamp were compared. Every model was subjected to a maximum force of 200 N, divided into 10 sub-steps. For the components, the behavior of the material was assumed to be linear, elastic, isotropic and homogeneous. For each model, the rigidity of the assembly and the Von Mises stress distribution were evaluated. The rigidity of the system was 307.6 N/mm for two pins, 369.0 N/mm for three and 437.9 N/mm for four. The results showed that four Schanz pins in each clamp promoted rigidity that was 19% greater than in the configuration with three pins and 42% greater than with two pins. Higher tension occurred in configurations with fewer pins. In the models analyzed, the maximum tension occurred on the surface of the pin, close to the fixation area.
Predicting mouse vertebra strength with micro-computed tomography-derived finite element analysis.
Nyman, Jeffry S; Uppuganti, Sasidhar; Makowski, Alexander J; Rowland, Barbara J; Merkel, Alyssa R; Sterling, Julie A; Bredbenner, Todd L; Perrien, Daniel S
2015-01-01
As in clinical studies, finite element analyses (FEA) developed from computed tomography (CT) images of bones are useful in pre-clinical rodent studies assessing treatment effects on vertebral body (VB) strength. Since strength predictions from microCT-derived FEAs (μFEA) have not been validated against experimental measurements of mouse VB strength, a parametric analysis exploring material and failure definitions was performed to determine whether elastic μFEAs with linear failure criteria could reasonably assess VB strength in two studies, treatment and genetic, with differences in bone volume fraction between the control and the experimental groups. VBs were scanned with a 12-μm voxel size, and voxels were directly converted to 8-node, hexahedral elements. The coefficient of determination (R²) between predicted VB strength and experimental VB strength, as determined from compression tests, was 62.3% for the treatment study and 85.3% for the genetic study when using a homogeneous tissue modulus (E_t) of 18 GPa for all elements, a failure volume of 2%, and an equivalent failure strain of 0.007. The difference between prediction and measurement (that is, error) increased when lowering the failure volume to 0.1% or increasing it to 4%. Using inhomogeneous, tissue density-specific moduli improved the R² between predicted and experimental strength when compared with uniform E_t = 18 GPa. Also, the optimum failure volume is higher for the inhomogeneous than for the homogeneous material definition. Regardless of model assumptions, μFEA can assess differences in murine VB strength between experimental groups when the expected difference in strength is at least 20%.
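The linear failure criterion is simple enough to state in a few lines: in an elastic model, element strains scale linearly with the applied load, so the predicted strength is the load at which the chosen fraction of element volume (the failure volume) exceeds the equivalent failure strain. A sketch, assuming equal-sized voxel elements so that element count stands in for volume fraction:

```python
def predicted_strength(unit_strains, eps_fail=0.007, fail_volume=0.02):
    """Strength prediction from a linear-elastic voxel model.

    unit_strains: equivalent strain of each element under a unit applied
    load. Strains scale linearly with load, so the load at which the
    n-th largest unit strain reaches eps_fail is the load at which a
    fraction fail_volume of the elements has failed."""
    n_fail = max(1, int(round(fail_volume * len(unit_strains))))
    ranked = sorted(unit_strains, reverse=True)
    # the n_fail-th largest unit-load strain hits eps_fail at this load
    return eps_fail / ranked[n_fail - 1]

# 100 equal voxels with unit-load strains from 1e-5 to 1e-3
strains = [i * 1e-5 for i in range(1, 101)]
strength = predicted_strength(strains)     # load units of the unit load
```

Lowering `fail_volume` makes the prediction hinge on the single most strained voxel (noisy), while raising it delays predicted failure, matching the abstract's observation that both 0.1% and 4% increased the error relative to 2%.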
Development of a numerical computer code and circuit element models for simulation of firing systems
Carpenter, K.H. (Dept. of Electrical and Computer Engineering)
1990-07-02
Numerical simulation of firing systems requires both the appropriate circuit analysis framework and the special element models required by the application. We have modified the SPICE circuit analysis code (version 2G.6), developed originally at the Electronics Research Laboratory of the University of California, Berkeley, to allow it to be used on MSDOS-based personal computers and to give it two additional circuit elements needed by firing systems--fuses and saturating inductances. An interactive editor and a batch driver have been written to ease the use of the SPICE program by system designers, and the interactive graphical post processor, NUTMEG, supplied by U.C. Berkeley with SPICE version 3B1, has been interfaced to the output from the modified SPICE. Documentation and installation aids have been provided to make the total software system accessible to PC users. Sample problems show that the resulting code is in agreement with the FIRESET code on which the fuse model was based (with some modifications to the dynamics of scaling fuse parameters). In order to allow for more complex simulations of firing systems, studies have been made of additional special circuit elements--switches and ferrite-cored inductances. A simple switch model has been investigated which promises to give at least a first approximation to the physical effects of a non-ideal switch, and which can be added to existing SPICE circuits without changing the SPICE code itself. The effect of fast rise time pulses on ferrites has been studied experimentally in order to provide a base for future modeling and for incorporation of the dynamic effects of changes in core magnetization into the SPICE code. This report contains detailed accounts of the work on these topics performed during the period it covers, and has appendices listing all source code written and documentation produced.
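The essential behavior a fuse element has to capture, conducting until its action integral ∫i²dt reaches a burst value and then switching to a high resistance, can be sketched outside SPICE with a simple explicit time-stepper. All parameter names and values here are illustrative and are not those of the FIRESET model or the modified SPICE code.

```python
def simulate_fuse(v0, c, r_fuse, r_open, action_limit, dt=1e-6, t_end=5e-3):
    """Capacitor of capacitance c discharging from v0 through a fuse.
    The fuse conducts at resistance r_fuse until its action integral
    (sum of i^2 * dt) reaches action_limit, then switches to r_open.
    Returns the final capacitor voltage and whether the fuse opened."""
    v, action, opened, t = v0, 0.0, False, 0.0
    while t < t_end:
        r = r_open if opened else r_fuse
        i = v / r                       # current through the fuse
        action += i * i * dt            # accumulate the action integral
        if not opened and action >= action_limit:
            opened = True               # fuse bursts; switch resistance
        v -= (i / c) * dt               # explicit capacitor voltage update
        t += dt
    return v, opened
```

When the fuse opens early in the discharge, the capacitor retains most of its charge; with a large enough `action_limit` the fuse never opens and the capacitor discharges fully through `r_fuse`.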
Kar, Rajiv K; Bhunia, Anirban
2015-11-01
Antifreeze proteins (AFPs) are the key biomolecules that protect species from extreme climatic conditions. Studies of AFPs, which are based on recognition of the ice plane and structural motifs, have provided vital information that points towards the mechanism responsible for executing antifreeze activity. Importantly, the use of experimental techniques has revealed key information for AFPs, but the exact microscopic details are still not well understood, which limits the application and design of novel antifreeze agents. The present review focuses on the importance of computational tools for investigating (i) molecular properties, (ii) structure-function relationships, and (iii) AFP-ice interactions at atomistic levels. In this context, important details pertaining to the methodological approaches used in molecular dynamics studies of AFPs are also discussed. It is hoped that the information presented herein is helpful for enriching our knowledge of antifreeze properties, which can potentially pave the way for the successful design of novel antifreeze biomolecular agents.
Computer Modelling of Functional Aspects of Noise in Endogenously Oscillating Neurons
NASA Astrophysics Data System (ADS)
Huber, M. T.; Dewald, M.; Voigt, K.; Braun, H. A.; Moss, F.
1998-03-01
Membrane potential oscillations are a widespread feature of neuronal activity. When such oscillations operate close to the spike-triggering threshold, noise can become an essential property of spike generation. Accordingly, we developed a minimal Hodgkin-Huxley-type computer model which includes a noise term. This model accounts for experimental data from quite different cells, ranging from mammalian cortical neurons to fish electroreceptors. With slight modifications of the parameters, the model's behavior can be tuned to bursting activity, which additionally allows it to mimic temperature encoding in peripheral cold receptors, including transitions to apparently chaotic dynamics as indicated by methods for the detection of unstable periodic orbits. Under all conditions, cooperative effects between noise and nonlinear dynamics can be shown which, beyond stochastic resonance, might be of functional significance for stimulus encoding and neuromodulation.
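The role of the noise term near threshold can be illustrated with a far simpler caricature than the Hodgkin-Huxley-type model itself: a subthreshold membrane oscillation with additive noise integrated by the Euler-Maruyama scheme. All parameters are illustrative; the point is only that spikes occur exclusively when noise is present.

```python
import math
import random

def noisy_oscillator(noise_sd, steps=20000, dt=0.05, seed=1):
    """Leaky membrane driven by a subthreshold sinusoidal current plus
    Gaussian white noise, integrated with Euler-Maruyama. Counts threshold
    crossings (spikes). The drive peaks below threshold, so with
    noise_sd = 0 the deterministic model never spikes."""
    rng = random.Random(seed)
    v, spikes = 0.0, 0
    tau, v_th = 1.0, 1.0
    for n in range(steps):
        t = n * dt
        drive = 0.9 * math.sin(2 * math.pi * 0.1 * t)  # subthreshold drive
        v += (-v + drive) / tau * dt \
             + noise_sd * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if v >= v_th:
            spikes += 1
            v = 0.0                    # reset after a spike
    return spikes
```

Comparing `noisy_oscillator(0.0)` with `noisy_oscillator(0.3)` reproduces the qualitative point of the abstract: near threshold, the noise term is not a nuisance but the very mechanism of spike generation.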
Use of SNP-arrays for ChIP assays: computational aspects.
Muro, Enrique M; McCann, Jennifer A; Rudnicki, Michael A; Andrade-Navarro, Miguel A
2009-01-01
The simultaneous genotyping of thousands of single nucleotide polymorphisms (SNPs) in a genome using SNP-Arrays is a very important tool that is revolutionizing genetics and molecular biology. We expanded the utility of this technique by using it following chromatin immunoprecipitation (ChIP) to assess the multiple genomic locations protected by a protein complex recognized by an antibody. The power of this technique is illustrated through an analysis of the changes in histone H4 acetylation, a marker of open chromatin and transcriptionally active genomic regions, which occur during differentiation of human myoblasts into myotubes. The findings have been validated by the observation of a significant correlation between the detected histone modifications and the expression of the nearby genes, as measured by DNA expression microarrays. This chapter focuses on the computational analysis of the data.
[The use of learning computers in the general educational system: hygienic aspects].
Teksheva, L M; El'ksnina, E V; Perminov, M A
2007-01-01
The use of learning computers (LCs) in the present-day academic process poses new problems for hygienists and physiologists: to evaluate the influence of LCs on schoolchildren's health and to substantiate and develop ways of preparing and presenting materials with regard to readability and the regulation of learning regimens. Analysis of the currently available LCs has identified the factors contributing to the accelerated development and accumulation of visual and overall fatigue: the brightness characteristics of electronic pages, including both violations of the allowable brightness levels and irregular intensity distribution; significantly inadequate type sizes; and an excessive variety of lettering and coloring. The recent LCs for general education thus present a visually aggressive medium for schoolchildren, which certainly requires hygienic evaluation on the basis of specially developed hygienic requirements for LCs.
Addition of higher order plate and shell elements into NASTRAN computer program
NASA Technical Reports Server (NTRS)
Narayanaswami, R.; Goglia, G. L.
1976-01-01
Two higher order plate elements, the linear strain triangular membrane element and the quintic bending element, along with a shallow shell element, suitable for inclusion into the NASTRAN (NASA Structural Analysis) program are described. Additions to the NASTRAN Theoretical Manual, Users' Manual, Programmers' Manual and the NASTRAN Demonstration Problem Manual, for inclusion of these elements into the NASTRAN program are also presented.
Trampert, Patrick; Vogelgesang, Jonas; Schorr, Christian; Maisl, Michael; Bogachev, Sviatoslav; Marniok, Nico; Louis, Alfred; Dahmen, Tim; Slusallek, Philipp
2017-03-21
Laminography is a tomographic technique that allows three-dimensional imaging of flat and elongated objects that stretch beyond the extent of the reconstruction volume. Laminography images can be reconstructed using iterative algorithms based on the Kaczmarz method. This study aims to develop and demonstrate a new reconstruction algorithm that may provide superior image quality for this challenging imaging application. The images are initially represented using coefficients over basis functions, which are typically piecewise constant functions (voxels). By replacing voxels with spherically symmetric volume elements (blobs) based on the generalized Kaiser-Bessel window functions, the images are reconstructed using an adapted version of the algebraic reconstruction technique. The band-limiting properties of blob functions are particularly beneficial in the case of noisy projections and when only a limited number of projections is available. The study showed that using blob basis functions improved full-width-at-half-maximum resolution from 10.2±1.0 to 9.9±0.9 (p < 0.001). The signal-to-noise ratio also improved from 16.1 to 31.0. The increased computational demand per iteration was offset by a faster convergence rate, such that the overall performance is approximately identical for blobs and voxels. Despite the higher complexity, tomographic reconstruction from computed laminography data should be implemented using blob basis functions, especially if noisy data are expected.
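The Kaczmarz update at the heart of such algebraic reconstruction techniques is compact enough to sketch. The version below uses a dense matrix and voxel (piecewise-constant) basis functions for clarity; switching to blob basis functions would change only how the system matrix A is assembled, not the iteration itself. The tiny example system is hypothetical.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=200, relax=1.0):
    """Kaczmarz / ART: cyclically project the current estimate onto the
    hyperplane a_i . x = b_i defined by each measurement row."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0.0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# toy consistent system standing in for "rays through a 2-pixel image"
A = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
x_true = np.array([0.7, 0.3])
b = A @ x_true
```

For a consistent system the iteration converges to the solution; for noisy, inconsistent data the relaxation parameter and stopping sweep act as regularization.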
High numerical aperture diffractive optical elements for neutral atom quantum computing
NASA Astrophysics Data System (ADS)
Young, A. L.; Kemme, S. A.; Wendt, J. R.; Carter, T. R.; Samora, S.
2013-03-01
The viability of neutral-atom-based quantum computers is dependent upon scalability to large numbers of qubits. Diffractive optical elements (DOEs) offer the possibility to scale up to many-qubit systems by enabling the manipulation of light to collect signal or deliver a tailored spatial trapping pattern. DOEs have an advantage over refractive micro-optics since they do not have measurable surface sag, making significantly larger numerical apertures (NA) accessible with a smaller optical component. The smaller physical size of a DOE allows the micro-lenses to be placed in vacuum with the atoms, reducing aberration effects that would otherwise be introduced by the cell walls of the vacuum chamber. The larger collection angle accessible with DOEs enables faster quantum computation. We have designed a set of DOEs for collecting the 852 nm fluorescence from the D2 transition in trapped cesium atoms, and compare these DOEs to several commercially available refractive micro-lenses. The largest DOE is able to collect over 20% of the atom's radiating sphere, whereas the refractive micro-optic is able to collect just 8%.
NASA Astrophysics Data System (ADS)
Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.
2016-05-01
Three-loop ladder and V-topology diagrams contributing to the massive operator matrix element AQg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed directly for general coefficient matrices in any basis, with the expansion in the dimensional parameter performed as well, provided the result is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterated sum and integral solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, namely the infinite generalized harmonic and cyclotomic sums arising in the case of V-topologies.
Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Ko, K.; /SLAC
2009-06-19
Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.
Wong, J.; Göktepe, S.; Kuhl, E.
2014-01-01
Computational modeling of the human heart allows us to predict how chemical, electrical, and mechanical fields interact throughout a cardiac cycle. Pharmacological treatment of cardiac disease has advanced significantly over the past decades, yet it remains unclear how the local biochemistry of an individual heart cell translates into global cardiac function. Here we propose a novel, unified strategy to simulate excitable biological systems across three biological scales. To discretize the governing chemical, electrical, and mechanical equations in space, we propose a monolithic finite element scheme. We apply a highly efficient and inherently modular global-local split, in which the deformation and the transmembrane potential are introduced globally as nodal degrees of freedom, while the chemical state variables are treated locally as internal variables. To ensure unconditional algorithmic stability, we apply an implicit backward Euler finite difference scheme to discretize the resulting system in time. To increase algorithmic robustness and guarantee optimal quadratic convergence, we suggest an incremental iterative Newton-Raphson scheme. The proposed algorithm allows us to simulate the interaction of chemical, electrical, and mechanical fields during a representative cardiac cycle on a patient-specific geometry, robustly and stably, with calculation times on the order of four days on a standard desktop computer. PMID:23798328
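The time-stepping pattern described above (implicit backward Euler for unconditional stability, Newton-Raphson for quadratic convergence) can be illustrated on a scalar nonlinear ODE; this sketch shows only the numerical pattern, not the coupled chemo-electro-mechanical system of the paper.

```python
def backward_euler_newton(f, dfdv, v0, dt, n_steps, tol=1e-12, max_iter=50):
    """Implicit backward Euler for dv/dt = f(v): at each step solve
    g(v) = v - v_old - dt*f(v) = 0 with Newton-Raphson iterations."""
    v = v0
    history = [v]
    for _ in range(n_steps):
        v_old = v
        v_new = v_old                       # initial Newton guess
        for _ in range(max_iter):
            g = v_new - v_old - dt * f(v_new)
            dg = 1.0 - dt * dfdv(v_new)     # Jacobian of the residual
            step = g / dg
            v_new -= step
            if abs(step) < tol:             # quadratic convergence in practice
                break
        v = v_new
        history.append(v)
    return history

# cubic decay dv/dt = -v^3, exact solution v(t) = v0 / sqrt(1 + 2 v0^2 t)
vs = backward_euler_newton(lambda v: -v ** 3, lambda v: -3.0 * v ** 2,
                           v0=1.0, dt=0.01, n_steps=100)
```

For this test problem the exact solution gives a direct accuracy check, and the implicit scheme remains stable for time steps far beyond the explicit stability limit.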
Glass, Micheal W.; Hogan, Roy E., Jr.; Gartling, David K.
2010-03-01
The need for the engineering analysis of systems in which thermal energy is transported primarily by conduction is a common situation. For all but the simplest geometries and boundary conditions, analytic solutions to heat conduction problems are unavailable, thus forcing the analyst to call upon some type of approximate numerical procedure. A wide variety of numerical packages currently exist for such applications, ranging in sophistication from large, general-purpose commercial codes, such as COMSOL, COSMOSWorks, ABAQUS and TSS, to codes written by individuals for specific problem applications. The original purpose in developing the finite element code described here, COYOTE, was to bridge the gap between the complex commercial codes and the simpler individual application programs. COYOTE was designed to treat most of the standard conduction problems of interest with a user-oriented input structure and format that was easily learned and remembered. Because of its architecture, the code has also proved useful for research in numerical algorithms and development of thermal analysis capabilities. This general philosophy has been retained in the current version of the program, COYOTE, Version 5.0, though the capabilities of the code have been significantly expanded. A major change in the code is its availability on parallel computer architectures and the increase in problem complexity and size that this implies. The present document describes the theoretical and numerical background for the COYOTE program. This volume is intended as a background document for the user's manual. Potential users of COYOTE are encouraged to become familiar with the present report and the simple example analyses reported in before using the program. The theoretical and numerical background for the finite element computer program COYOTE is presented in detail; COYOTE is designed for the multi-dimensional analysis of nonlinear heat conduction problems.
Zhang, Meng; Gao, Jiazi; Huang, Xu; Zhang, Min; Liu, Bei
2017-01-01
Quantitative computed tomography-based finite element analysis (QCT/FEA) has been developed to predict vertebral strength. However, QCT/FEA models may differ with scan resolution and element size. The aim of this study was to explore the effects of scan resolutions and element sizes on QCT/FEA outcomes. Nine bovine vertebral bodies were scanned using a clinical CT scanner and reconstructed from datasets at two slice thicknesses: 0.6 mm (PA resolution) and 1 mm (PB resolution). There were significant linear correlations between the predicted and measured principal strains (R2 > 0.7, P < 0.0001), and the predicted vertebral strength and stiffness were modestly correlated with the experimental values (R2 > 0.6, P < 0.05). The two resolutions and six element sizes were combined in pairs, yielding finite element (FE) models of bovine vertebral cancellous bone for 12 cases. The mechanical parameters of FE models with the PB resolution were similar to those with the PA resolution. The computational accuracy of the FE models with element sizes of 0.41 × 0.41 × 0.6 mm3 and 0.41 × 0.41 × 1 mm3 was higher, as judged by comparison of the apparent elastic modulus and yield strength. Therefore, scan resolution and element size should be chosen optimally to improve the accuracy of QCT/FEA.
A new algorithm for computing primitive elements in GF(q²)
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.; Miller, R. L.
1978-01-01
A new method is developed to find primitive elements in the Galois field of q² elements, GF(q²), where q is a Mersenne prime. Such primitive elements are needed to implement transforms over GF(q²).
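A brute-force version of this search is easy to sketch (this is not the authors' algorithm, which exploits the structure of Mersenne primes for efficiency): represent GF(q²) as a + bi with i² = -1, which is a valid field extension because every Mersenne prime satisfies q ≡ 3 (mod 4), and test candidates against the prime factors of the group order q² - 1.

```python
def gf2_mul(x, y, q):
    """(a + b*i)(c + d*i) in GF(q^2) with i^2 = -1."""
    a, b = x
    c, d = y
    return ((a * c - b * d) % q, (a * d + b * c) % q)

def gf2_pow(x, n, q):
    """Square-and-multiply exponentiation in GF(q^2); (1, 0) is the identity."""
    result, base = (1, 0), x
    while n:
        if n & 1:
            result = gf2_mul(result, base, q)
        base = gf2_mul(base, base, q)
        n >>= 1
    return result

def prime_factors(n):
    """Distinct prime factors of n by trial division."""
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p)
            n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs

def find_primitive(q):
    """First a + b*i generating the multiplicative group of GF(q^2):
    g is primitive iff g^((q^2-1)/p) != 1 for every prime p | q^2 - 1."""
    order = q * q - 1
    checks = [order // p for p in prime_factors(order)]
    for a in range(q):
        for b in range(1, q):   # b = 0 lies in GF(q), whose order is < q^2 - 1
            g = (a, b)
            if all(gf2_pow(g, n, q) != (1, 0) for n in checks):
                return g
```

For the Mersenne prime q = 7 this returns a generator of the 48-element multiplicative group of GF(49).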
Discovering local patterns of co - evolution: computational aspects and biological examples
2010-01-01
Background: Co-evolution is the process in which two (or more) sets of orthologs exhibit a similar or correlated pattern of evolution. Co-evolution is a powerful way to learn about the functional interdependencies between sets of genes and cellular functions and to predict physical interactions. More generally, it can be used to answer fundamental questions about the evolution of biological systems. Orthologs that exhibit a strong signal of co-evolution in a certain part of the evolutionary tree may show only a mild signal of co-evolution in other branches of the tree. The major reasons for this phenomenon are noise in the biological input, genes that gain or lose functions, and the fact that some measures of co-evolution relate to rare events such as positive selection. Previous publications in the field dealt with the problem of finding sets of genes that co-evolved along an entire underlying phylogenetic tree, without considering the fact that co-evolution is often local. Results: In this work, we describe a new set of biological problems related to finding patterns of local co-evolution. We discuss their computational complexity and design algorithms for solving them. These algorithms outperform other bi-clustering methods, as they are designed specifically for solving this set of problems. We use our approach to trace the co-evolution of fungal, eukaryotic, and mammalian genes at high resolution across the different parts of the corresponding phylogenetic trees. Specifically, we discover regions in the fungal tree that are enriched for positive selection. We show that metabolic genes exhibit a remarkable level of co-evolution and different patterns of co-evolution in various biological datasets. In addition, we find that protein complexes that are related to gene expression exhibit non-homogeneous levels of co-evolution across different parts of the fungal evolutionary line. In the case of mammalian evolution, signaling pathways that are
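As an illustration of the "local" notion used above, a minimal co-evolution score can be computed as the correlation of two orthologs' per-branch evolutionary rates restricted to a chosen part of the tree. This toy scoring function and its data are invented for illustration and are far simpler than the algorithms in the paper.

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def local_coevolution(rates_a, rates_b, branch_ids):
    """Co-evolution signal of two orthologs over a subset of tree branches:
    correlation of their evolutionary rates on just those branches."""
    return pearson([rates_a[b] for b in branch_ids],
                   [rates_b[b] for b in branch_ids])
```

Two genes may score near +1 on one clade's branches and near -1 elsewhere, which is exactly the situation that defeats whole-tree methods.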
Computational aspects of the nonlinear normal mode initialization of the GLAS 4th order GCM
NASA Technical Reports Server (NTRS)
Navon, I. M.; Bloom, S. C.; Takacs, L.
1984-01-01
Using the normal modes of the GLAS 4th Order Model, a Machenhauer nonlinear normal mode initialization (NLNMI) was carried out for the external vertical mode using the GLAS 4th Order shallow water equations model with the equivalent depth associated with the external vertical mode. A simple procedure was devised to identify computational modes by following the rate of increase of BAL sub M, the partial sum (with respect to the zonal wavenumber m) of squares of the time change of the normal mode coefficients (for fixed vertical mode index), varying over the latitude index L of symmetric or antisymmetric gravity waves. A working algorithm is presented which speeds up the convergence of the iterative Machenhauer NLNMI. A 24 h integration using the NLNMI state was carried out using both Matsuno and leap-frog time-integration schemes; these runs were then compared to a 24 h integration starting from a non-initialized state. The maximal impact of the nonlinear normal mode initialization was found to occur 6-10 hours after the initial time.
Coolant passage heat transfer with rotation. A progress report on the computational aspects
NASA Technical Reports Server (NTRS)
Aceto, L. D.; Sturgess, G. J.
1983-01-01
Turbine airfoils are subjected to increasingly higher heat loads which escalate the cooling requirements in order to satisfy life goals for the component materials. If turbine efficiency is to be maintained, however, cooling requirements should be as low as possible. To keep the quantity of cooling air bounded, a more efficient internal cooling scheme must be developed. One approach is to employ airfoils with multipass cooling passages that contain devices to augment internal heat transfer while limiting pressure drop. Design experience with multipass cooling passage airfoils has shown that a surplus of cooling air must be provided as a margin of safety. This increased cooling air leads to a performance penalty. Reliable methods for predicting the internal thermal and aerodynamic performance of multipass cooling passage airfoils would reduce or eliminate the need for the safety margin of surplus cooling air. The objective of the program is to develop and verify improved analytical methods that will form the basis for design technology which will result in efficient turbine components with improved durability without sacrificing performance. The objective will be met by: (1) establishing a comprehensive experimental data base that can form the basis of an empirical design system; (2) developing computational fluid dynamic techniques; and (3) analyzing the information in the data base with both phenomenological modeling and mathematical modeling to derive a suitable design and analysis procedure.
Kim, Jang Sik; Lee, Sangwook; Lee, Kwang Woo; Kim, Jun Mo; Kim, Young Ho; Kim, Min Eui
2014-07-01
Computed tomography (CT) has become popular in the diagnosis of acute pyelonephritis (APN) and its related complications in adults. The aim of this study was to investigate the relationship between uncommon CT findings and clinical and laboratory data in patients with APN. From July 2009 to July 2012, CT findings and clinical data were collected from 125 female patients with APN. The six uncommon CT findings (excluding a wedge-shaped area of hypoperfusion in the renal parenchyma) studied were perirenal fat infiltration, ureteral wall edema, renal abscess formation, pelvic ascites, periportal edema, and renal scarring. The clinical parameters analyzed were the age and body mass index of the patients as well as the degree and duration of fever. Laboratory parameters related to inflammation and infection included white blood cell count, C-reactive protein (CRP) level, erythrocyte sedimentation rate, pyuria, and bacteriuria. The most common CT finding was perirenal fat infiltration (69 cases, 55%). A longer duration of fever, a higher CRP level, and a higher grade of pyuria were associated with perirenal fat infiltration (p=0.010, p=0.003, and p=0.049, respectively). The CRP level was significantly higher in patients with renal abscess and ureteral wall edema (p=0.005 and p=0.015, respectively). The uncommon CT findings associated with aggravated clinical and laboratory parameters in APN patients were perirenal fat infiltration, ureteral wall edema, and renal abscess formation. The inflammatory reaction and tissue destruction may be more aggressive in patients with these CT findings.
Coolant passage heat transfer with rotation. A progress report on the computational aspects
NASA Astrophysics Data System (ADS)
Aceto, L. D.; Sturgess, G. J.
1983-10-01
Turbine airfoils are subjected to increasingly higher heat loads which escalate the cooling requirements in order to satisfy life goals for the component materials. If turbine efficiency is to be maintained, however, cooling requirements should be as low as possible. To keep the quantity of cooling air bounded, a more efficient internal cooling scheme must be developed. One approach is to employ airfoils with multipass cooling passages that contain devices to augment internal heat transfer while limiting pressure drop. Design experience with multipass cooling passage airfoils has shown that a surplus of cooling air must be provided as a margin of safety. This increased cooling air leads to a performance penalty. Reliable methods for predicting the internal thermal and aerodynamic performance of multipass cooling passage airfoils would reduce or eliminate the need for the safety margin of surplus cooling air. The objective of the program is to develop and verify improved analytical methods that will form the basis for design technology which will result in efficient turbine components with improved durability without sacrificing performance. The objective will be met by: (1) establishing a comprehensive experimental data base that can form the basis of an empirical design system; (2) developing computational fluid dynamic techniques; and (3) analyzing the information in the data base with both phenomenological modeling and mathematical modeling to derive a suitable design and analysis procedure.
Cuoco, Valentina; Colletti, Chiara; Anastasia, Annalisa; Weisz, Filippo; Bersani, Giuseppe
2015-01-01
Shared psychotic disorder (folie à deux) is a rare condition characterized by the transmission of delusional aspects from a patient (the "dominant partner") to another (the "submissive partner") linked to the first by a close relationship. We report the case of two Moroccan sisters who have experienced a combined delusional episode diagnosed as shared psychotic disorder. In these circumstances, assessment of symptoms from a cross-cultural perspective is a key factor for proper diagnostic evaluation.
Wahle, Chris W; Ross, David S; Thurston, George M
2012-07-21
We provide a mathematical and computational analysis of light scattering measurement of mixing free energies of quaternary isotropic liquids. In previous work, we analyzed mathematical and experimental design considerations for the ternary mixture case [D. Ross, G. Thurston, and C. Lutzer, J. Chem. Phys. 129, 064106 (2008); C. Wahle, D. Ross, and G. Thurston, J. Chem. Phys. 137, 034201 (2012)]. Here, we review and introduce dimension-free general formulations of the fully nonlinear partial differential equation (PDE) and its linearization, a basis for applying the method to composition spaces of any dimension, in principle. With numerical analysis of the PDE as applied to the light scattering implied by a test free energy and dielectric gradient combination, we show that values of the Rayleigh ratio within the quaternary composition tetrahedron can be used to correctly reconstruct the composition dependence of the free energy. We then extend the analysis to the case of a finite number of data points, measured with noise. In this context the linearized PDE describes the relevant diffusion of information from light scattering noise to the free energy. The fully nonlinear PDE creates a special set of curves in the composition tetrahedron, collections of which form characteristics of the nonlinear and linear PDEs, and we show that the information diffusion has a time-like direction along the positive normals to these curves. With use of Monte Carlo simulations of light scattering experiments, we find that for a modest laboratory light scattering setup, about 100-200 samples and 100 s of measurement time are enough to be able to measure the mixing free energy over the entire quaternary composition tetrahedron, to within an L2 error norm of 10^-3. The present method can help quantify thermodynamics of quaternary isotropic liquid mixtures.
Wahle, Chris W.; Ross, David S.; Thurston, George M.
2012-01-01
We provide a mathematical and computational analysis of light scattering measurement of mixing free energies of quaternary isotropic liquids. In previous work, we analyzed mathematical and experimental design considerations for the ternary mixture case [D. Ross, G. Thurston, and C. Lutzer, J. Chem. Phys. 129, 064106 (2008), 10.1063/1.2937902; C. Wahle, D. Ross, and G. Thurston, J. Chem. Phys. 137, 034201 (2012), 10.1063/1.4731694]. Here, we review and introduce dimension-free general formulations of the fully nonlinear partial differential equation (PDE) and its linearization, a basis for applying the method to composition spaces of any dimension, in principle. With numerical analysis of the PDE as applied to the light scattering implied by a test free energy and dielectric gradient combination, we show that values of the Rayleigh ratio within the quaternary composition tetrahedron can be used to correctly reconstruct the composition dependence of the free energy. We then extend the analysis to the case of a finite number of data points, measured with noise. In this context the linearized PDE describes the relevant diffusion of information from light scattering noise to the free energy. The fully nonlinear PDE creates a special set of curves in the composition tetrahedron, collections of which form characteristics of the nonlinear and linear PDEs, and we show that the information diffusion has a time-like direction along the positive normals to these curves. With use of Monte Carlo simulations of light scattering experiments, we find that for a modest laboratory light scattering setup, about 100–200 samples and 100 s of measurement time are enough to be able to measure the mixing free energy over the entire quaternary composition tetrahedron, to within an L2 error norm of 10−3. The present method can help quantify thermodynamics of quaternary isotropic liquid mixtures. PMID:22830694
Chartrand-Lefebvre, Carl; Cadrin-Chênevert, Alexandre; Bordeleau, Edith; Ugolini, Patricia; Ouellet, Robert; Sablayrolles, Jean-Louis; Prenovault, Julie
2007-04-01
Multidetector-row electrocardiogram (ECG)-gated cardiac computed tomography (CT) will probably be a major noninvasive imaging option in the near future. Recent developments indicate that this new technology is improving rapidly. This article presents an overview of the current concepts, perspectives, and technical capabilities in coronary CT angiography (CTA). We have reviewed the recent literature on the different applications of this technology; of particular note are the many studies that have demonstrated the high negative predictive value (NPV) of coronary CTA, when performed under optimal conditions, for significant stenoses in native coronary arteries. This new technology's level of performance allows it to be used to evaluate the presence of calcified plaques, coronary bypass graft patency, and the origin and course of congenital coronary anomalies. Despite a high NPV, the robustness of the technology is limited by arrhythmias, the requirement of low heart rates, and calcium-related artifacts. Some improvements are needed in the imaging of coronary stents, especially the smaller stents, and in the detection and characterization of noncalcified plaques. Further studies are needed to more precisely determine the role of CTA in various symptomatic and asymptomatic patient groups. Clinical testing of 64-slice scanners has recently begun. As the technology improves, so does the spatial and temporal resolution. To date, this is being achieved through the development of systems with an increased number of detectors and shorter gantry rotation time, as well as the development of systems equipped with 2 X-ray tubes and the eventual development of flat-panel technology. Thus further improvement of image quality is expected.
Ainsbury, Elizabeth A; Barquinero, J Francesc
2009-01-01
Consideration of statistical methodology is essential for the application of cytogenetic and other biodosimetry techniques to triage for mass casualty situations. This is because the requirement for speed and accuracy in biodosimetric triage necessarily introduces greater uncertainties than would be acceptable in day-to-day biodosimetry. Additionally, in a large scale accident type situation, it is expected that a large number of laboratories from around the world will assist and it is likely that each laboratory will use one or more different dosimetry techniques. Thus issues arise regarding combination of results and the associated errors. In this article we discuss the statistical and computational aspects of radiation biodosimetry for triage in a large scale accident-type situation. The current status of statistical analysis techniques is reviewed and suggestions are made for improvements to these methods which will allow first responders to estimate doses quickly and reliably for suspected exposed persons.
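One standard computational step in cytogenetic triage dose estimation is inverting a linear-quadratic dose-response curve fitted to dicentric yields. The sketch below shows only that inversion, with made-up calibration coefficients, and omits the uncertainty propagation that the article argues is essential in a mass casualty setting.

```python
import math

def dose_from_yield(y, c, alpha, beta):
    """Invert the linear-quadratic dicentric dose response
    y = c + alpha*D + beta*D^2 for the dose D (positive root)."""
    if beta == 0.0:
        return (y - c) / alpha
    disc = alpha ** 2 + 4.0 * beta * (y - c)
    return (-alpha + math.sqrt(disc)) / (2.0 * beta)

# hypothetical calibration: background c, per-Gy alpha, per-Gy^2 beta
c, alpha, beta = 0.001, 0.03, 0.06
```

A measured yield of dicentrics per cell then maps back to an absorbed dose; real triage work would also attach confidence limits from the Poisson statistics of the counts and the calibration-curve uncertainty.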
Buzuk, G N; Lovkova, M Ia; Sokolova, S M; Tiutekin, Iu V
2003-01-01
Interrelations between the total content of isoquinoline alkaloids, concentrations of quaternary protoberberines and benzophenanthridines, and the amount of K, Cu, Co, Al, Ba, and Zn in aerial parts of individual celandine plants were revealed, within a single cenopopulation, using correlation analysis and regression analysis. Mathematical models describing the regulation of isoquinoline metabolism by some of the mineral elements were obtained in the analytical form. The results suggest that this process is genetically determined.
NASA Astrophysics Data System (ADS)
Choi, S.-J.; Giraldo, F. X.; Kim, J.; Shin, S.
2014-06-01
The non-hydrostatic (NH) compressible Euler equations of dry atmosphere are solved in a simplified two dimensional (2-D) slice framework employing a spectral element method (SEM) for the horizontal discretization and a finite difference method (FDM) for the vertical discretization. The SEM uses high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. The FDM employs a third-order upwind biased scheme for the vertical flux terms and a centered finite difference scheme for the vertical derivative terms and quadrature. The Euler equations used here are in a flux form based on the hydrostatic pressure vertical coordinate, which are the same as those used in the Weather Research and Forecasting (WRF) model, but a hybrid sigma-pressure vertical coordinate is implemented in this model. We verified the model by conducting widely used standard benchmark tests: the inertia-gravity wave, rising thermal bubble, density current wave, and linear hydrostatic mountain wave. The results from those tests demonstrate that the horizontally spectral element vertically finite difference model is accurate and robust. By using the 2-D slice model, we effectively show that the combined spatial discretization method of the spectral element and finite difference method in the horizontal and vertical directions, respectively, offers a viable method for the development of a NH dynamical core.
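The GLL machinery mentioned above is compact: the nodes are the endpoints ±1 together with the roots of P_N'(x), and the quadrature weights are w_i = 2 / (N (N+1) P_N(x_i)^2). A small sketch using NumPy's Legendre utilities (not the model's actual code):

```python
import numpy as np
from numpy.polynomial import legendre as L

def gll_points_weights(n):
    """Gauss-Lobatto-Legendre nodes and weights on [-1, 1] for degree n
    (n + 1 nodes): the endpoints plus the roots of P_n'(x), with
    weights w_i = 2 / (n (n+1) P_n(x_i)^2)."""
    Pn = L.Legendre.basis(n)
    interior = Pn.deriv().roots()
    x = np.concatenate(([-1.0], np.sort(interior.real), [1.0]))
    w = 2.0 / (n * (n + 1) * Pn(x) ** 2)
    return x, w

x, w = gll_points_weights(4)
```

With n + 1 such nodes the rule integrates polynomials up to degree 2n - 1 exactly, which is what makes the nodal SEM mass and stiffness integrals cheap while keeping the endpoints available for element coupling.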
NASA Astrophysics Data System (ADS)
Bakker, Mark; Kuhlman, Kristopher L.
2011-09-01
Two new approaches are presented for the accurate computation of the potential due to line elements that satisfy the modified Helmholtz equation with complex parameters. The first approach is based on fundamental solutions in elliptical coordinates and results in products of Mathieu functions. The second approach is based on the integration of modified Bessel functions. Both approaches allow evaluation of the potential at any distance from the element. The computational approaches are applied to model transient flow with the Laplace transform analytic element method. The Laplace domain solution is computed using a combination of point elements and the presented line elements. The time domain solution is obtained through a numerical inversion. Two applications are presented to transient flow fields, which could not be modeled with the Laplace transform analytic element method prior to this work. The first application concerns transient single-aquifer flow to wells near impermeable walls modeled with line-doublets. The second application concerns transient two-aquifer flow to a well near a stream modeled with line-sinks.
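The abstract does not name the numerical inversion scheme; the Gaver-Stehfest algorithm, a common choice alongside the Laplace transform analytic element method in groundwater applications, serves here as an illustration. A minimal stdlib-Python sketch (not the authors' implementation):

```python
import math

def stehfest_coefficients(n):
    """Gaver-Stehfest weights V_k for an even number of terms n."""
    half = n // 2
    v = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        v.append((-1) ** (half + k) * s)
    return v

def invert_laplace(f_bar, t, n=12):
    """Approximate f(t) from its Laplace transform f_bar(s) by Stehfest's method."""
    ln2_t = math.log(2.0) / t
    v = stehfest_coefficients(n)
    return ln2_t * sum(vk * f_bar((k + 1) * ln2_t) for k, vk in enumerate(v))
```

Applied to the transform F(s) = 1/(s + 1), the inversion recovers exp(-t) to several digits for smooth, non-oscillatory responses, which is the regime typical of transient groundwater heads.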
NASA Astrophysics Data System (ADS)
Alakus, Bayram
Mathematical modeling involving porous heterogeneous media is important in a number of composite manufacturing processes, such as resin transfer molding (RTM), injection molding, and the like. Of interest here are process modeling issues related to composites manufacturing by RTM, because of the ability of the method to manufacture consolidated net shapes of complex geometric parts. In this research, we propose a mathematical model by utilizing the local volume averaging technique to establish the governing equations, and provide finite element computational developments to predict the flow behavior of a viscous and viscoelastic fluid through a porous fiber network. The developments predict the velocity, pressure, and polymeric stress by modeling the conservation laws (mass and momentum) of the flow field coupled with constitutive equations for the polymeric stress field. The governing equations of the flow are averaged for the fluid phase. Furthermore, the simulations target a variety of constitutive models (the Newtonian, Upper-Convected Maxwell, Oldroyd-B, and Giesekus models) to provide a fundamental understanding of elastic effects on the flow field. To solve the complex coupled nonlinear equations of the mathematical model described above, a combination of Newton linearization with the Galerkin and Streamline-Upwind/Petrov-Galerkin (SUPG) finite element procedures is employed to accurately capture the representative physics. The formulations are first validated against available test cases of viscoelastic flows without porous media. Simulations of viscoelastic flow through porous media are then described, and the comparative results of the different constitutive models are presented and discussed at length.
Towards Computing Full 3D Seismic Sensitivity: The Axisymmetric Spectral Element Method
NASA Astrophysics Data System (ADS)
Nissen-Meyer, T.; Fournier, A.; Dahlen, F. A.
2004-12-01
Finite-frequency tomography has recently provided detailed images of the Earth's deep interior. However, the Fréchet sensitivity kernels used in these inversions are calculated using ray theory and therefore cannot account for D''-diffracted phases or any caustics in the wavefield, such as those occurring in phases used to map boundary-layer topography. Our objective is to compile an extensive set of full sensitivity kernels based on seismic forward modeling to allow for inversion of any seismic phase. The sensitivity of the wavefield to a scatterer off the theoretical ray path is generally determined by the convolution of the source-to-scatterer response with, using reciprocity, the receiver-to-scatterer response. Thus, exact kernels require knowledge of the Green's function for the full moment tensor (i.e., source) and body forces (i.e., receiver components) throughout the model space and time. We develop an axisymmetric spectral element method for elastodynamics to serve this purpose. The axisymmetric approach takes advantage of the fact that kernels are computed upon a spherically symmetric Earth model. In this reduced-dimension formulation, all moment tensor elements and single forces can be included, and they collectively unfold into six different 2D problems to be solved separately. The efficient simulations on a 2D mesh then allow for currently unattainable high resolution at low hardware requirements. The displacement field u for the 3D sphere can be expressed as u(x, t) = u(x_{φ=0}, t) f(φ), where φ = 0 represents the 2D computational domain and the f(φ) are trigonometric functions. Here, we describe the variational formalism for the full multipole source system and validate its implementation against normal-mode solutions for the solid sphere. The global mesh includes several conforming coarsening levels to minimize grid spacing variations. In an effort of algorithmic optimization, the discretization is acquired on the basis of matrix
NASA Astrophysics Data System (ADS)
Biotteau, E.; Gravouil, A.; Lubrecht, A. A.; Combescure, A.
2012-01-01
In this paper, the refinement strategy based on the "Non-Linear Localized Full MultiGrid" solver originally published in Int. J. Numer. Meth. Engng 84(8):947-971 (2010) for 2-D structural problems is extended to 3-D simulations. In this context, some extra information concerning the refinement strategy and the behavior of the error indicators are given. The adaptive strategy is dedicated to the accurate modeling of elastoplastic materials with isotropic hardening in transient dynamics. A multigrid solver with local mesh refinement is used to reduce the amount of computational work needed to achieve an accurate calculation at each time step. The locally refined grids are automatically constructed, depending on the user prescribed accuracy. The discretization error is estimated by a dedicated error indicator within the multigrid method. In contrast to other adaptive procedures, where grids are erased when new ones are generated, the previous solutions are used recursively to reduce the computing time on the new mesh. Moreover, the adaptive strategy needs no costly coarsening method as the mesh is reassessed at each time step. The multigrid strategy improves the convergence rate of the non-linear solver while ensuring the information transfer between the different meshes. It accounts for the influence of localized non-linearities on the whole structure. All the steps needed to achieve the adaptive strategy are automatically performed within the solver such that the calculation does not depend on user experience. This paper presents three-dimensional results using the adaptive multigrid strategy on elastoplastic structures in transient dynamics and in a linear geometrical framework. Isoparametric cubic elements with energy and plastic work error indicators are used during the calculation.
NASA Astrophysics Data System (ADS)
Wijesinghe, Philip; Sampson, David D.; Kennedy, Brendan F.
2016-03-01
Accurate quantification of forces applied to, or generated by, tissue is key to understanding many biomechanical processes, fabricating engineered tissues, and diagnosing diseases. Many techniques have been employed to measure forces; in particular, tactile imaging, developed to spatially map palpation-mimicking forces, has shown potential in improving the diagnosis of cancer on the macro-scale. However, tactile imaging often involves the use of discrete force sensors, such as capacitive or piezoelectric sensors, whose spatial resolution is often limited to 1-2 mm. Our group has previously presented a type of tactile imaging, termed optical palpation, in which the change in thickness of a compliant layer in contact with tissue is measured using optical coherence tomography, and surface forces are extracted, with micro-scale spatial resolution, using a one-dimensional spring model. We have also recently combined optical palpation with compression optical coherence elastography (OCE) to quantify stiffness. A main limitation of this work, however, is that a one-dimensional spring model is insufficient to describe the deformation of mechanically heterogeneous tissue with uneven boundaries, generating significant inaccuracies in measured forces. Here, we present a computational finite-element method, which we term computational optical palpation. In this technique, knowing the non-linear mechanical properties of the layer, and from only the axial component of displacement measured by phase-sensitive OCE, we can estimate not only the axial forces but also the three-dimensional traction forces at the layer-tissue interface. We use a non-linear, three-dimensional model of deformation, which greatly increases the ability to accurately measure force and stiffness in complex tissues.
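The one-dimensional spring model that this work improves upon can be stated compactly: each lateral position is treated as an independent column of layer material, so the local stress follows from the layer's stress-strain curve at the measured compressive strain. A toy stdlib-Python sketch (the linear 20 kPa modulus and the 2 x 3 thickness map are illustrative assumptions, not values from the paper):

```python
def one_d_spring_stresses(initial_thickness_mm, thickness_map_mm, stress_strain):
    """Map a compliant-layer thickness map to local surface stress (1-D spring model).

    Each pixel is treated as an independent spring: strain is the relative
    thickness change, and stress follows the layer's stress-strain curve.
    """
    stresses = []
    for row in thickness_map_mm:
        stresses.append([stress_strain((initial_thickness_mm - t) / initial_thickness_mm)
                         for t in row])
    return stresses

# Illustrative linear layer: stress (kPa) = E * strain, with an assumed E = 20 kPa.
linear_layer = lambda strain: 20.0 * strain

# 2 x 3 compressed-thickness map (mm, from an initial 1.0 mm layer): a stiff
# feature in the tissue compresses the layer more at the center pixels.
thickness = [[1.0, 0.8, 1.0],
             [1.0, 0.7, 1.0]]
stress_map = one_d_spring_stresses(1.0, thickness, linear_layer)
```

The paper's point is precisely that this per-pixel model ignores lateral coupling and uneven boundaries, which is what the three-dimensional finite-element treatment corrects.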
Using Finite Volume Element Definitions to Compute the Gravitation of Irregular Small Bodies
NASA Astrophysics Data System (ADS)
Zhao, Y. H.; Hu, S. C.; Wang, S.; Ji, J. H.
2015-03-01
In the orbit design procedure of small-body exploration missions, it is important to take the gravitation of the small bodies into account. However, a majority of the small bodies in the solar system are irregularly shaped, with non-uniform density distributions, which makes it difficult to precisely calculate their gravitation. This paper proposes a method to model the gravitational field of an irregularly shaped small body and to calculate the corresponding spherical harmonic coefficients. The method starts from the shape of the small body derived from light curve observations and uses finite volume elements to approximate the body shape. The spherical harmonic parameters are derived numerically by computing the integrals in their definitions. A comparison with the polyhedral method is also presented. We take the asteroid (433) Eros as an example: spherical harmonic coefficients obtained with this method are compared with the results derived from the tracking data of the NEAR (Near-Earth Asteroid Rendezvous) spacecraft, and the comparison shows that the error of C_{20} is less than 2%. The spherical harmonic coefficients of 1996 FG3, a selected target of our future exploration mission, are computed. Taking (4179) Toutatis, the flyby target of the Chang'e 2 mission, as another example, the gravitational field is calculated in combination with the shape model from radar data, which provides a theoretical basis for analyzing the soil distribution and flow from the optical images obtained in the mission. The method applies to objects with non-uniform density distributions and could provide reliable gravity field data of small bodies for orbit design and landing in future exploration missions.
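The definition-based integration described above can be checked on a body whose harmonics are known analytically. For a homogeneous triaxial ellipsoid with semi-axes a, b, c and reference radius a, the unnormalized C_{20} is (2c^2 - a^2 - b^2)/(10 a^2). The stdlib-Python sketch below (an illustration, not the authors' code) recovers this value by summing cubic volume elements:

```python
def c20_by_voxels(a, b, c, n=60):
    """Unnormalized C20 of a homogeneous ellipsoid by summing cubic volume elements.

    C20 = (1 / (M a^2)) * integral of rho * (2 z^2 - x^2 - y^2) / 2 dV,
    evaluated with cell-centered voxels on the bounding box (uniform density,
    so rho and the voxel volume cancel in the ratio).
    """
    dx, dy, dz = 2 * a / n, 2 * b / n, 2 * c / n
    mass = 0.0
    integral = 0.0
    for i in range(n):
        x = -a + (i + 0.5) * dx
        for j in range(n):
            y = -b + (j + 0.5) * dy
            for k in range(n):
                z = -c + (k + 0.5) * dz
                if (x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2 <= 1.0:
                    mass += 1.0
                    integral += (2 * z * z - x * x - y * y) / 2.0
    return integral / (mass * a * a)
```

For a sphere (a = b = c) the result is zero, as it must be; for a flattened body C_{20} is negative, consistent with the oblateness term J2 = -C_{20}.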
Chen, Xiaowei Sylvia; Brown, Chris M
2012-10-01
Messenger ribonucleic acids (RNAs) contain a large number of cis-regulatory RNA elements that function in many types of post-transcriptional regulation. These cis-regulatory elements are often characterized by conserved structures and/or sequences. Although some classes are well known, given the wide range of RNA-interacting proteins in eukaryotes, it is likely that many new classes of cis-regulatory elements are yet to be discovered. One approach to this is to use computational methods, which have the advantage of analysing genomic data, particularly comparative data, on a large scale. In this study, a set of structural discovery algorithms was applied, followed by support vector machine (SVM) classification. We trained a new classification model (CisRNA-SVM) on a set of known structured cis-regulatory elements from 3'-untranslated regions (UTRs) and successfully distinguished these, and groups of cis-regulatory elements it had not been trained on, from control genomic and shuffled sequences. The new method outperformed previous methods in the classification of cis-regulatory RNA elements. This model was then used to predict new elements from cross-species conserved regions of human 3'-UTRs. Clustering of these elements identified new classes of potential cis-regulatory elements. The model, training and testing sets, and novel human predictions are available at: http://mRNA.otago.ac.nz/CisRNA-SVM.
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen; Himansu, Ananda; Chang, Sin-Chung; Jorgenson, Philip C. E.
2000-01-01
The internal propagation, fan noise, and turbomachinery noise benchmark problems are solved using the space-time conservation element and solution element (CE/SE) method. The internal propagation problems address the propagation of sound waves through a nozzle; both the nonlinear and linear quasi-1D Euler equations are solved, and numerical solutions are presented and compared with the analytical solution. The fan noise problem concerns the effect of the sweep angle on the acoustic field generated by the interaction of a convected gust with a cascade of 3D flat plates. A parallel version of the 3D CE/SE Euler solver is developed and employed to obtain numerical solutions for a family of swept flat plates; solutions for sweep angles of 0, 5, 10, and 15 deg are presented. The turbomachinery noise problems describe the interaction of a 2D vortical gust with a cascade of flat-plate airfoils, with and without a downstream moving grid. The 2D nonlinear Euler equations are solved, and the converged numerical solutions are presented and compared with the corresponding analytical solution. All the comparisons demonstrate that the CE/SE method is capable of solving aeroacoustic problems with or without shock waves in a simple and efficient manner. Furthermore, the simple non-reflecting boundary condition used in the CE/SE method, which is not based on characteristic theory, works very well in 1D, 2D, and 3D problems.
NASA Astrophysics Data System (ADS)
Liu, X.
2013-12-01
In many natural and human-impacted rivers, the porous sediment beds are either fully or partially covered by large roughness elements, such as gravels and boulders. These large roughness elements, which are in direct contact with the turbulent river flow, change the dynamics of mass and momentum transfer across the river bed. They also impact the overall hydraulics in the river channel and, over time, indirectly influence the geomorphological evolution of the system. Ideally, one would resolve each of these large roughness elements in a computational fluid model, but this approach is not feasible due to the prohibitive computational cost. In a typical armored river bed, the distribution of sediment sizes usually shows significant vertical variation, and it poses a great computational challenge to resolve all the size scales. A similar multiscale problem exists in the much broader field of porous media flow. To cope with this, we propose a hybrid computational approach in which the large surface roughness elements are resolved using an immersed boundary method and the (usually finer) sediment layers below are modeled by adding extra drag terms to the momentum equations. Large roughness elements are digitized using a 3D laser scanner. They are placed into the computational domain using collision detection and rigid body dynamics algorithms, which guarantee a realistic and physically correct spatial arrangement of the surface elements. Simulation examples have shown the effectiveness of the hybrid approach, which captures the effect of the surface roughness on the turbulent flow as well as the hyporheic flow pattern in and out of the bed.
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
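The first technique above, the overall finite difference method, is easy to state on a static analogue: re-solve the model at perturbed designs and difference the responses. A stdlib-Python sketch (the two-spring chain is a stand-in for illustration, not the paper's finite element model):

```python
def solve_two_spring(k1, k2, force):
    """Static displacements of a two-spring chain: ground -k1- u1 -k2- u2 <- force."""
    u1 = force / k1
    u2 = force / k1 + force / k2
    return u1, u2

def overall_fd_sensitivity(k1, k2, force, h=1e-6):
    """Central-difference sensitivity of the tip displacement u2 w.r.t. stiffness k1.

    The analysis is simply repeated at the perturbed designs k1 +/- h, which is
    the 'overall finite difference' idea described in the abstract.
    """
    _, u2_plus = solve_two_spring(k1 + h, k2, force)
    _, u2_minus = solve_two_spring(k1 - h, k2, force)
    return (u2_plus - u2_minus) / (2.0 * h)
```

Because u2 = F/k1 + F/k2 here, the exact sensitivity is -F/k1^2, so the central difference can be checked directly; in a reduced-basis transient analysis, the analogous check against a semi-analytical derivative is what exposed the fixed-mode inaccuracy discussed above.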
Cheng, J Y; Chahine, G L
2001-12-01
Slender body theory, lifting surface theories, and more recently panel methods and Navier-Stokes solvers have been used to study the hydrodynamics of fish swimming. This paper presents progress on swimming hydrodynamics using a boundary integral equation method (or boundary element method, BEM) based on a potential flow model. The unsteady three-dimensional BEM code 3DynaFS that we developed and used is able to model realistic body geometries, arbitrary movements, and the resulting wake evolution. The pressure distribution over the body surface, the vorticity in the wake, and the velocity field around the body can be computed. The structure and dynamic behavior of the vortex wakes generated by the swimming body underlie the fluid dynamic mechanisms that produce high-efficiency propulsion and high-agility maneuvering. Three-dimensional vortex wake structures are not well known, although two-dimensional structures termed 'reverse Karman vortex streets' have been observed and studied. In this paper, simulations of a swimming saithe (Pollachius virens) using our BEM code demonstrate that undulatory swimming reduces three-dimensional effects through a substantially weakened tail tip vortex, resulting in a reverse Karman vortex street as the major flow pattern in the three-dimensional wake of an undulating swimming fish.
Inversion of potential field data using the finite element method on parallel computers
NASA Astrophysics Data System (ADS)
Gross, L.; Altinay, C.; Shaw, S.
2015-11-01
In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We show that each iterative step requires the solution of several PDEs: for the potential fields, for the adjoint defects, and for the application of the preconditioner. As an extension of the traditional discrete formulation, the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs), with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by the weighting of the regularization and cross-gradient terms but is independent of the resolution of the PDE discretization, and that, as a consequence, the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
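The BFGS update at the heart of that iteration can be shown on a toy two-variable quadratic. In the stdlib-Python sketch below the PDE-constrained machinery of the paper is replaced by an explicit objective and gradient; only the update formula H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T and an Armijo backtracking line search are illustrated.

```python
def bfgs_minimize(f, grad, x0, iterations=25):
    """BFGS with Armijo backtracking for a 2-variable problem (toy sketch)."""
    def mv(m, v):  # 2x2 matrix-vector product
        return [m[0][0]*v[0] + m[0][1]*v[1], m[1][0]*v[0] + m[1][1]*v[1]]
    def dot(a, b):
        return a[0]*b[0] + a[1]*b[1]

    x, g = list(x0), grad(x0)
    h = [[1.0, 0.0], [0.0, 1.0]]            # initial inverse-Hessian approximation
    for _ in range(iterations):
        d = [-v for v in mv(h, g)]          # quasi-Newton search direction
        t = 1.0
        while f([x[0] + t*d[0], x[1] + t*d[1]]) > f(x) + 1e-4 * t * dot(g, d):
            t *= 0.5                        # Armijo backtracking
        x_new = [x[0] + t*d[0], x[1] + t*d[1]]
        g_new = grad(x_new)
        s = [x_new[0] - x[0], x_new[1] - x[1]]
        y = [g_new[0] - g[0], g_new[1] - g[1]]
        sy = dot(s, y)
        if abs(sy) < 1e-14:                 # converged (or curvature breakdown)
            break
        rho = 1.0 / sy
        # H <- A H A^T + rho s s^T, where A = I - rho s y^T
        a = [[1 - rho*s[0]*y[0], -rho*s[0]*y[1]],
             [-rho*s[1]*y[0], 1 - rho*s[1]*y[1]]]
        ha = [[sum(a[i][k]*h[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
        h = [[sum(ha[i][k]*a[j][k] for k in range(2)) + rho*s[i]*s[j] for j in range(2)]
             for i in range(2)]
        x, g = x_new, g_new
    return x

# Toy quadratic: f = 0.5 x^T A x - b^T x with A = [[3,1],[1,2]], b = [1,1];
# the minimizer is x* = (0.2, 0.4).
f = lambda x: 0.5 * (3*x[0]**2 + 2*x[1]**2 + 2*x[0]*x[1]) - x[0] - x[1]
grad = lambda x: [3*x[0] + x[1] - 1.0, x[0] + 2*x[1] - 1.0]
x_min = bfgs_minimize(f, grad, [0.0, 0.0])
```

In the paper's continuous formulation the lists above become fields, the dot products become the integral inner product, and applying H amounts to further PDE solves, but the update algebra is the same.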
Zampolli, Mario; Nijhof, Marten J J; de Jong, Christ A F; Ainslie, Michael A; Jansen, Erwin H W; Quesson, Benoit A J
2013-01-01
The acoustic radiation from a pile being driven into the sediment by a sequence of hammer strikes is studied with a linear, axisymmetric, structural acoustic frequency domain finite element model. Each hammer strike results in an impulsive sound that is emitted from the pile and then propagated in the shallow-water waveguide. Measurements from accelerometers mounted on the head of a test pile and from hydrophones deployed in the water are used to validate the model results. Transfer functions between the force input at the top of the anvil and field quantities, such as acceleration components in the structure or pressure in the fluid, are computed with the model. These transfer functions are validated using accelerometer or hydrophone measurements to infer the structural forcing. A modeled hammer forcing pulse is then used in a subsequent step to produce quantitative predictions of sound exposure at the hydrophones. The comparison between the model and the measurements shows that, although several simplifying assumptions were made, useful predictions of noise levels based on linear structural acoustic models are possible. In the final part of the paper, the model is used to characterize the pile as an acoustic radiator by analyzing the flow of acoustic energy.
CAST2D: A finite element computer code for casting process modeling
Shapiro, A.B.; Hallquist, J.O.
1991-10-01
CAST2D is a coupled thermal-stress finite element computer code for casting process modeling. This code can be used to predict the final shape and stress state of cast parts. CAST2D couples the heat transfer code TOPAZ2D and solid mechanics code NIKE2D. CAST2D has the following features in addition to all the features contained in the TOPAZ2D and NIKE2D codes: (1) a general purpose thermal-mechanical interface algorithm (i.e., slide line) that calculates the thermal contact resistance across the part-mold interface as a function of interface pressure and gap opening; (2) a new phase change algorithm, the delta function method, that is a robust method for materials undergoing isothermal phase change; (3) a constitutive model that transitions between fluid behavior and solid behavior, and accounts for material volume change on phase change; and (4) a modified plot file data base that allows plotting of thermal variables (e.g., temperature, heat flux) on the deformed geometry. Although the code is specialized for casting modeling, it can be used for other thermal stress problems (e.g., metal forming).
A Hybrid FPGA/Tilera Compute Element for Autonomous Hazard Detection and Navigation
NASA Technical Reports Server (NTRS)
Villalpando, Carlos Y.; Werner, Robert A.; Carson, John M., III; Khanoyan, Garen; Stern, Ryan A.; Trawny, Nikolas
2013-01-01
To increase safety for future missions landing on other planetary or lunar bodies, the Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is developing an integrated sensor for autonomous surface analysis and hazard determination. The ALHAT Hazard Detection System (HDS) consists of a Flash LIDAR for measuring the topography of the landing site, a gimbal to scan across the terrain, and an Inertial Measurement Unit (IMU), along with terrain analysis algorithms to identify the landing site and the local hazards. An FPGA and Manycore processor system was developed to interface all the devices in the HDS, to provide high-resolution timing to accurately measure system state, and to run the surface analysis algorithms quickly and efficiently. In this paper, we will describe how we integrated COTS components such as an FPGA evaluation board, a TILExpress64, and multi-threaded/multi-core aware software to build the HDS Compute Element (HDSCE). The ALHAT program is also working with the NASA Morpheus Project and has integrated the HDS as a sensor on the Morpheus Lander. This paper will also describe how the HDS is integrated with the Morpheus lander and the results of the initial test flights with the HDS installed. We will also describe future improvements to the HDSCE.
[Numerical finite element modeling of custom car seat using computer aided design].
Huang, Xuqi; Singare, Sekou
2014-02-01
A good cushion can not only provide the sitter with high comfort, but also control the distribution of hip pressure to reduce the incidence of disease. The purpose of this study is to introduce a computer-aided design (CAD) modeling method for the buttocks-cushion system using numerical finite element (FE) simulation to predict the pressure distribution on the buttocks-cushion interface. The buttock and cushion model geometries were acquired from a laser scanner, and the CAD software was used to create the solid model. The FE model of a seated individual was developed using ANSYS software (ANSYS Inc., Canonsburg, PA). The model is divided into two parts: the cushion model, made of foam, and the buttock model, represented by the pelvis covered with a soft tissue layer. Loading simulations consisted of imposing a vertical force of 520 N on the pelvis, corresponding to the weight of the user's upper body, and then iteratively solving the system.
NASA Astrophysics Data System (ADS)
Derakhshani, S. M.; Schott, D. L.; Lodewijks, G.
2013-06-01
Dust emissions can have significant effects on human health, the environment, and industrial equipment. Understanding the dust generation process helps in selecting a suitable dust prevention approach and is also useful for evaluating the environmental impact of dust emission. To describe these processes, numerical methods such as Computational Fluid Dynamics (CFD) are widely used; nowadays, however, particle-based methods like the Discrete Element Method (DEM) allow researchers to model the interaction between particles and the fluid flow. In this study, air flow over a stockpile, dust emission, erosion, and surface deformation of granular material in the form of a stockpile are studied by using DEM and CFD as a coupled method. Two- and three-dimensional simulations are developed for the CFD and DEM methods, respectively, to minimize CPU time. The standard κ-ɛ turbulence model is used for a fully developed turbulent flow. The continuous gas phase and the discrete particle phase are linked to each other through gas-particle void fractions and momentum transfer. In addition to stockpile deformation, dust dispersion is studied, and finally the accuracy of the stockpile deformation results obtained by CFD-DEM modelling is validated by agreement with existing experimental data.
A hybrid FPGA/Tilera compute element for autonomous hazard detection and navigation
NASA Astrophysics Data System (ADS)
Villalpando, C. Y.; Werner, R. A.; Carson, J. M.; Khanoyan, G.; Stern, R. A.; Trawny, N.
To increase safety for future missions landing on other planetary or lunar bodies, the Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is developing an integrated sensor for autonomous surface analysis and hazard determination. The ALHAT Hazard Detection System (HDS) consists of a Flash LIDAR for measuring the topography of the landing site, a gimbal to scan across the terrain, and an Inertial Measurement Unit (IMU), along with terrain analysis algorithms to identify the landing site and the local hazards. An FPGA and Manycore processor system was developed to interface all the devices in the HDS, to provide high-resolution timing to accurately measure system state, and to run the surface analysis algorithms quickly and efficiently. In this paper, we will describe how we integrated COTS components such as an FPGA evaluation board, a TILExpress64, and multi-threaded/multi-core aware software to build the HDS Compute Element (HDSCE). The ALHAT program is also working with the NASA Morpheus Project and has integrated the HDS as a sensor on the Morpheus Lander. This paper will also describe how the HDS is integrated with the Morpheus lander and the results of the initial test flights with the HDS installed. We will also describe future improvements to the HDSCE.
NASA Astrophysics Data System (ADS)
Morita, K.-I.; Ishiguro, M.
1980-03-01
The array performance in several successive configurations was examined for the 5-element supersynthesis telescope of 10-m diameter antennas. The number of (u, v) samples was used as a criterion of optimum (u, v) coverage. The optimum solution for a given declination was obtained by a random trial method, and the performance was evaluated by computer simulation using model brightness distributions.
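The criterion used, the number of distinct (u, v) samples accumulated by earth rotation, can be sketched for an east-west array: each baseline of length L traces the ellipse (u, v) = (L cos h, L sin h sin δ) over hour angle h, and samples are binned on a grid and counted. A stdlib-Python sketch (the grid cell size, observing window, and example configurations are illustrative assumptions, not the paper's values):

```python
import math

def uv_sample_count(positions, declination_deg, cell=1.0, hours=12.0, steps=144):
    """Count distinct (u, v) grid cells filled by an east-west array via earth rotation.

    positions: antenna locations along the east-west line, in wavelengths.
    Each baseline L traces (u, v) = (L cos h, L sin h sin(dec)); cells of size
    `cell` are counted once, and the conjugate point (-u, -v) is included.
    """
    sin_dec = math.sin(math.radians(declination_deg))
    cells = set()
    baselines = [positions[j] - positions[i]
                 for i in range(len(positions)) for j in range(i + 1, len(positions))]
    for step in range(steps + 1):
        h = math.radians(15.0 * (-hours / 2 + hours * step / steps))  # hour angle
        for L in baselines:
            u, v = L * math.cos(h), L * math.sin(h) * sin_dec
            cells.add((round(u / cell), round(v / cell)))
            cells.add((round(-u / cell), round(-v / cell)))
    return len(cells)

# A uniformly spaced 5-element layout repeats baseline lengths, while a spread
# layout with ten distinct baselines fills many more (u, v) cells.
uniform = uv_sample_count([0.0, 10.0, 20.0, 30.0, 40.0], 45.0)
spread = uv_sample_count([0.0, 10.0, 40.0, 90.0, 110.0], 45.0)
```

Random trials over candidate layouts, scored by such a count per declination, capture the flavor of the optimization described above.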
USDA-ARS's Scientific Manuscript database
Computational methods offer great hope but limited accuracy in the prediction of functional cis-regulatory elements; improvements are needed to enable synthetic promoter design. We applied an ensemble strategy for de novo soybean cyst nematode (SCN)-inducible motif discovery among promoters of 18 co...
Numerical computation of transonic flows by finite-element and finite-difference methods
NASA Technical Reports Server (NTRS)
Hafez, M. M.; Wellford, L. C.; Merkle, C. L.; Murman, E. M.
1978-01-01
Studies on applications of the finite element approach to transonic flow calculations are reported. Different discretization techniques of the differential equations and boundary conditions are compared. Finite element analogs of Murman's mixed type finite difference operators for small disturbance formulations were constructed and the time dependent approach (using finite differences in time and finite elements in space) was examined.
Jursic, B.S.
1996-12-31
Up to four ionization potentials of elements from the second row of the periodic table were computed using ab initio (HF, MP2, MP3, MP4, QCISD, G1, G2, and G2MP2) and DFT (B3LYP, B3P86, B3PW91, XALPHA, HFS, HFB, BLYP, BP86, BPW91, BVWN, XALYP, XAP86, XAPW91, XAVWN, SLYP, SP86, SPW91, and SVWN) methods. In all of the calculations, the large 6-311++G(3df,3pd) Gaussian-type basis set was used. The computed values were compared with the experimental results, and the suitability of the ab initio and DFT methods for reproducing the experimental data was discussed. From the computed ionization potentials of the second-row elements, it can be concluded that the HF ab initio computation is not capable of reproducing the experimental results; the computed ionization potentials are too low. However, with ab initio methods that include electron correlation, the computed IPs become much closer to the experimental values. In all cases, with the exception of the first ionization potential of oxygen, the G2 computation produces ionization potentials that are indistinguishable from the experimental results.
NASA Technical Reports Server (NTRS)
Byun, Chansup; Guruswamy, Guru P.
1993-01-01
This paper presents a procedure for computing the aeroelasticity of wing-body configurations on multiple-instruction, multiple-data (MIMD) parallel computers. In this procedure, fluids are modeled using Euler equations discretized by a finite difference method, and structures are modeled using finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. A parallel integration scheme is used to compute aeroelastic responses by solving the coupled fluid and structural equations concurrently while keeping modularity of each discipline. The present procedure is validated by computing the aeroelastic response of a wing and comparing with experiment. Aeroelastic computations are illustrated for a High Speed Civil Transport type wing-body configuration.
Computer Security Systems Enable Access.
ERIC Educational Resources Information Center
Riggen, Gary
1989-01-01
A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)
NASA Technical Reports Server (NTRS)
Cooke, C. H.; Blanchard, D. K.
1975-01-01
A finite element algorithm for solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing time transition from step-to-step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.
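The leapfrog marching described above can be sketched on a model problem; this is a generic illustration of the three-level, time-centred scheme, not the paper's Navier-Stokes solver.

```python
# Leapfrog (three-level, explicit) time marching on a model oscillator
# u'' = -omega**2 * u.  A generic sketch of the scheme the abstract
# describes, not the paper's Navier-Stokes solver.

def leapfrog(u0, v0, omega, dt, nsteps):
    """March u'' = -omega^2 u with the central (leapfrog) update."""
    u_prev = u0
    # Second-order accurate starting step from the initial data:
    u_curr = u0 + dt * v0 - 0.5 * (omega * dt) ** 2 * u0
    history = [u_prev, u_curr]
    for _ in range(nsteps - 1):
        u_next = 2.0 * u_curr - u_prev - (omega * dt) ** 2 * u_curr
        u_prev, u_curr = u_curr, u_next
        history.append(u_curr)
    return history

us = leapfrog(u0=1.0, v0=0.0, omega=1.0, dt=0.01, nsteps=1000)
print(f"u(10) ≈ {us[-1]:.3f}")  # exact solution: cos(10) ≈ -0.839
```

Only two time levels need to be stored at any step, which matches the economical step-to-step transition the abstract describes.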
The computational structural mechanics testbed generic structural-element processor manual
NASA Technical Reports Server (NTRS)
Stanley, Gary M.; Nour-Omid, Shahram
1990-01-01
The usage and development of structural finite element processors based on the CSM Testbed's Generic Element Processor (GEP) template are documented. By convention, such processors have names of the form ESi, where i is an integer. This manual is therefore intended both for Testbed users who wish to invoke ES processors during the course of a structural analysis and for Testbed developers who wish to construct new element processors (or modify existing ones).
NASA Astrophysics Data System (ADS)
Stenvall, A.; Tarhasaari, T.
2010-07-01
Due to the rapid development of personal computers from the beginning of the 1990s, it has become a reality to simulate current penetration, and thus hysteresis losses, in superconductors with other than very simple one-dimensional (1D) Bean model computations or Norris formulae. Even though these older approaches are still usable, they do not consider, for example, multifilamentary conductors, local critical current dependency on magnetic field, or varying n-values. Currently, many numerical methods employing different formulations are available. The problem of hysteresis losses can be scrutinized via an eddy current formulation of the classical theory of electromagnetism. The difficulty of the problem lies in the non-linear resistivity of the superconducting region. The steep transition between the superconducting and the normal states often causes convergence problems for the most common finite element method based programs. The integration methods suffer from full system matrices and thus restrict the number of elements to a few thousand at most. The so-called T-φ formulation and the use of edge elements, or more precisely Whitney 1-forms, within the finite element method have proved to be a very suitable method for hysteresis loss simulations of different geometries. In this paper we consider building such finite element method software from the first steps, employing differential geometry and forms.
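A minimal sketch of the non-linear resistivity at the heart of the convergence difficulty, assuming the standard E-J power law E = E_c (J/J_c)^n (the paper's formulation is more general; the parameter values here are illustrative).

```python
# The non-linear resistivity behind the convergence difficulty: the
# common E-J power law E = E_c * (J/Jc)**n gives an effective
# resistivity rho(J) = E/J that rises steeply near J = Jc.
# Jc and n below are illustrative values, not from the paper.

E_C = 1e-4  # V/m, the standard electric-field criterion

def resistivity(j, jc, n):
    """Effective resistivity rho(J) = (E_c/Jc) * (J/Jc)**(n-1)."""
    return (E_C / jc) * (j / jc) ** (n - 1)

jc = 1e8  # A/m^2, assumed critical current density
for n in (5, 25, 100):  # higher n -> steeper transition
    ratio = resistivity(0.9 * jc, jc, n) / resistivity(jc, jc, n)
    print(f"n = {n:3d}: rho(0.9*Jc)/rho(Jc) = {ratio:.2e}")
```

With n of order 20-40, as for practical conductors, the resistivity changes by an order of magnitude over a roughly ten percent change in J, which is what strains Newton iterations in standard FE programs.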
Topics in Computer Literacy as Elements of Two Introductory College Mathematics Courses.
ERIC Educational Resources Information Center
Spresser, Diane M.
1986-01-01
Explains the integrated approach implemented by James Madison University, Virginia, in enhancing computer literacy. Reviews the changes in the mathematics courses and provides topical listings and outlines of the courses that emphasize computer applications. (ML)
NASA Astrophysics Data System (ADS)
Aristovich, K. Y.; Khan, S. H.
2010-07-01
Complex multi-scale finite element (FE) analyses always involve a high number of elements and therefore require very long computation times. This is because effects considered on smaller scales have a strong influence on the whole model and on larger scales, so the mesh density must be as high as the smallest scale factor requires. A new submodelling routine has been developed to decrease the computation time substantially without loss of accuracy in the whole solution. The presented approach allows manipulation of different mesh sizes on different scales, and therefore total optimization of mesh density on each scale, and transfers results automatically between the meshes corresponding to the respective scales of the whole model. Unlike the classical submodelling routine, the new technique transfers not only boundary conditions but also volume results and forces (current density load in the case of electromagnetism), which allows the solution of the full Maxwell's equations in FE space. The approach was successfully implemented for the electromagnetic solution of the forward problem of Magnetic Field Tomography (MFT) based on Magnetoencephalography (MEG), where the scale of one neuron was the smallest considered and the whole-brain model the largest. The computation time was reduced about 100-fold, compared with the initial requirement of 10 million elements for direct computation without the submodelling routine.
Design of a massively parallel computer using bit serial processing elements
NASA Technical Reports Server (NTRS)
Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing
1995-01-01
A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.
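The bit-serial arithmetic such a processing element performs can be sketched as follows; this is an illustrative software model of LSB-first serial addition, not the described hardware design.

```python
# Sketch of bit-serial addition: two operands are streamed LSB-first,
# one bit per "cycle", through a full adder with a single carry
# flip-flop -- the kind of datapath a 1-bit serial PE provides.

def serial_add(a_bits, b_bits):
    """Add two equal-length LSB-first bit streams, one bit per cycle."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                     # full-adder sum bit
        carry = (a & b) | (carry & (a ^ b))   # carry flip-flop update
        out.append(s)
    out.append(carry)                         # final carry-out bit
    return out

def to_bits(x, width):
    return [(x >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

print(from_bits(serial_add(to_bits(13, 8), to_bits(29, 8))))  # 42
```

An n-bit add costs n cycles per PE, but thousands of PEs operate in lockstep under one instruction stream, which is the SIMD trade-off the abstract refers to.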
Floyd, M.A.
1980-03-01
A computer controlled, scanning monochromator system specifically designed for the rapid, sequential determination of the elements is described. The monochromator is combined with an inductively coupled plasma excitation source so that elements at major, minor, trace, and ultratrace levels may be determined, in sequence, without changing experimental parameters other than the spectral line observed. A number of distinctive features not found in previously described versions are incorporated into the system here described. Performance characteristics of the entire system and several analytical applications are discussed.
Gartling, D.K.
1996-05-01
The theoretical and numerical background for the finite element computer program, TORO II, is presented in detail. TORO II is designed for the multi-dimensional analysis of nonlinear, electromagnetic field problems described by the quasi-static form of Maxwell's equations. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in TORO II are also outlined. Instructions for the use of the code are documented in SAND96-0903; examples of problems analyzed with the code are also provided in the user's manual. 24 refs., 8 figs.
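The quasi-static form of Maxwell's equations referred to above neglects the displacement current; a standard statement (not quoted from the TORO II documentation) is:

```latex
\nabla \times \mathbf{H} = \mathbf{J}, \qquad
\nabla \times \mathbf{E} = -\,\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \cdot \mathbf{B} = 0,
```

together with the constitutive relations \(\mathbf{B} = \mu \mathbf{H}\) and \(\mathbf{J} = \sigma \mathbf{E}\); dropping \(\partial \mathbf{D}/\partial t\) is what makes the problem diffusive rather than wave-like.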
NASA Astrophysics Data System (ADS)
Sizov, Gennadi Y.
In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as winding flux linkages and voltages; average, cogging, and ripple torques; stator core flux densities; core losses; efficiencies; and saturated machine winding inductances are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of a design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable when compared to current and prevalent state-of-the-art methods. These computational savings allow
ERIC Educational Resources Information Center
Leong-Hong, Belkis; Marron, Beatrice
A Data Element Dictionary/Directory (DED/D) is a software tool that is used to control and manage data elements in a uniform manner. It can serve data base administrators, systems analysts, software designers, and programmers by providing a central repository for information about data resources across organization and application lines. This…
NASA Astrophysics Data System (ADS)
Dong, Yumin; Xiao, Shufen; Ma, Hongyang; Chen, Libo
2016-12-01
Cloud computing and big data have become the driving engine of current information technology (IT). However, security protection has become increasingly important for cloud computing and big data, and is a problem that must be solved for cloud computing to develop further. The theft of identity authentication information remains a serious threat to the security of cloud computing: attackers intrude into cloud computing services through identity authentication information, thereby threatening the security of data from multiple perspectives. Therefore, this study proposes a model for cloud computing protection and management based on quantum authentication, introduces the principle of quantum authentication, and derives the quantum authentication process. In theory, quantum authentication technology can be applied in cloud computing for security protection. Quantum states cannot be cloned; thus, the method is more secure and reliable than classical ones.
Shwartz, Assaf; Cheval, Helene; Simon, Laurent; Julliard, Romain
2013-08-01
Urban ecology is emerging as an integrative science that explores the interactions of people and biodiversity in cities. Interdisciplinary research requires the creation of new tools that allow the investigation of relations between people and biodiversity. It has been established that access to green spaces or nature benefits city dwellers, but the role of species diversity in providing psychological benefits remains poorly studied. We developed a user-friendly 3-dimensional computer program (Virtual Garden [www.tinyurl.com/3DVirtualGarden]) that allows people to design their own public or private green spaces with 95 biotic and abiotic features. Virtual Garden allows researchers to explore what elements of biodiversity people would like to have in their nearby green spaces while accounting for other functions that people value in urban green spaces. In 2011, 732 participants used our Virtual Garden program to design their ideal small public garden. On average gardens contained 5 different animals, 8 flowers, and 5 woody plant species. Although the mathematical distribution of flower and woody plant richness (i.e., number of species per garden) appeared to be similar to what would be expected by random selection of features, 30% of participants did not place any animal species in their gardens. Among those who placed animals in their gardens, 94% selected colorful species (e.g., ladybug [Coccinella septempunctata], Great Tit [Parus major], and goldfish), 53% selected herptiles or large mammals, and 67% selected non-native species. Older participants with a higher level of education and participants with a greater concern for nature designed gardens with relatively higher species richness and more native species. If cities are to be planned for the mutual benefit of people and biodiversity and to provide people meaningful experiences with urban nature, it is important to investigate people's relations with biodiversity further. Virtual Garden offers a standardized
NASA Technical Reports Server (NTRS)
Lombard, C. K.; Lombard, M. P.; Menees, G. P.; Yang, J. Y.
1980-01-01
Several aspects connected with the notion of computation with flow oriented mesh systems are presented. Simple, effective approaches to the ideas discussed are demonstrated in current applications to blown forebody shock layer flow and full bluff body shock layer flow including the massively separated wake region.
ERIC Educational Resources Information Center
Fekonja-Peklaj, Urška; Marjanovic-Umek, Ljubica
2015-01-01
The aim of this qualitative study was to evaluate the positive and negative aspects of the interactive whiteboard (IWB) and tablet computers use in the first grade of primary school from the perspectives of three groups of evaluators, namely the teachers, the pupils and an independent observer. The sample included three first grade classes with…
Unnikrishnan, Ginu U.; Morgan, Elise F.
2011-01-01
Inaccuracies in the estimation of material properties and errors in the assignment of these properties into finite element models limit the reliability, accuracy, and precision of quantitative computed tomography (QCT)-based finite element analyses of the vertebra. In this work, a new mesh-independent, material mapping procedure was developed to improve the quality of predictions of vertebral mechanical behavior from QCT-based finite element models. In this procedure, an intermediate step, called the material block model, was introduced to determine the distribution of material properties based on bone mineral density, and these properties were then mapped onto the finite element mesh. A sensitivity study was first conducted on a calibration phantom to understand the influence of the size of the material blocks on the computed bone mineral density. It was observed that varying the material block size produced only marginal changes in the predictions of mineral density. Finite element (FE) analyses were then conducted on a square column-shaped region of the vertebra and also on the entire vertebra in order to study the effect of material block size on the FE-derived outcomes. The predicted values of stiffness for the column and the vertebra decreased with decreasing block size. When these results were compared to those of a mesh convergence analysis, it was found that the influence of element size on vertebral stiffness was less than that of the material block size. This mapping procedure allows the material properties in a finite element study to be determined based on the block size required for an accurate representation of the material field, while the size of the finite elements can be selected independently and based on the required numerical accuracy of the finite element solution. The mesh-independent, material mapping procedure developed in this study could be particularly helpful in improving the accuracy of finite element analyses of
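The material-block step can be sketched as follows, assuming a generic density-to-modulus power law E = a·ρ^b; the coefficients and the helper name `block_modulus` are illustrative, not taken from the paper.

```python
# Sketch of the material-block idea: average the CT-derived density
# over a block, convert it to an elastic modulus with a density-modulus
# power law, then hand that modulus to every finite element inside the
# block.  The power-law coefficients below are illustrative only.

def block_modulus(densities, a=4730.0, b=1.56):
    """Mean block density (g/cm^3) -> modulus E = a * rho**b (MPa)."""
    rho = sum(densities) / len(densities)
    return a * rho ** b

# Voxel densities falling inside one material block:
voxels = [0.21, 0.25, 0.19, 0.23]
E = block_modulus(voxels)
print(f"E ≈ {E:.0f} MPa")
```

Because the block size controls the material field and the element size controls the numerical accuracy, the two resolutions can be chosen independently, which is the point of the procedure.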
Gartling, D.K.; Hogan, R.E.
1994-10-01
User instructions are given for the finite element computer program, COYOTE II. COYOTE II is designed for the multi-dimensional analysis of nonlinear heat conduction problems including the effects of enclosure radiation and chemical reaction. The theoretical background and numerical methods used in the program are documented in SAND94-1173. Examples of the use of the code are presented in SAND94-1180.
NASA Technical Reports Server (NTRS)
Taylor, C. M.
1977-01-01
A finite element computer program which enables the analysis of distortions and stresses occurring in components having a relative interference is presented. The program is limited to situations in which the loading is axisymmetric. Loads arising from the interference fit(s) and external, inertial, and thermal loadings are accommodated. The components comprise several different homogeneous isotropic materials whose properties may be a function of temperature. An example illustrating the data input and program output is given.
NASA Astrophysics Data System (ADS)
Yavuz, Fuat
2003-12-01
Micas are significant ferromagnesian minerals in felsic to mafic igneous, metamorphic, and hydrothermal rocks. Because of their considerable potential to reveal the physicochemical conditions of magmas in terms of petrologic and metallogenic aspects, mica chemistry is used extensively in the earth sciences. For example, the composition of phlogopite and biotite can be used to evaluate the intensive thermodynamic parameters of temperature (T, °C), oxygen fugacity (fO2), and water fugacity (fH2O) of magmatic rocks. The halogen contents of micas permit the estimation of the fluorine and chlorine fugacities that may be used in understanding the metal transportation and deposition processes in hydrothermal ore deposits. The Mica+ computer program has been written to edit and store electron-microprobe or wet-chemical mica analyses. The software calculates structural formulae and distributes the calculated anions into the I, M, T, and A sites. Mica+ classifies micas in terms of composition and octahedral site-occupancy. It also calculates the intensive parameters fO2, T, and fH2O from the composition of biotite in equilibrium with K-feldspar and magnetite. Using the calculated F-OH and Cl-OH exchange systematics and various log ratios (fH2O/fHF, fH2O/fHCl, fHCl/fHF, XCl/XOH, XF/XOH, XF/XCl) of mica analyses, Mica+ provides valuable information about the characteristics of hydrothermal fluids associated with alteration and mineralization processes. The program output is generally in the form of screen outputs; however, the "Grf" files that come with the program can be visualized in the Grapher software as both binary and ternary diagrams. Mica analyses submitted to the Mica+ program are calculated on the basis of 22 + z positive charges, following the 1998 procedure of the Commission on New Minerals and Mineral Names Mica Subcommittee.
NASA Technical Reports Server (NTRS)
Ruf, Joseph H.
1992-01-01
Phase 2+ Space Shuttle Main Engine powerheads E0209 and E0215 degraded their main combustion chamber (MCC) liners at a faster rate than is normal for Phase 2 powerheads. One possible cause of the accelerated degradation was a reduction of coolant flow through the MCC. Hardware changes were made to the preburner fuel leg which may have reduced its resistance and, therefore, pulled some of the hydrogen from the MCC coolant leg. A computational fluid dynamics (CFD) analysis was performed to determine the hydrogen flow path resistances of the Phase 2+ fuel preburner injector elements relative to the Phase 2 element. FDNS was implemented on axisymmetric grids with the hydrogen assumed to be incompressible. The analysis was performed in two steps: the first isolated the effect of the different inlet areas, and the second modeled the entire injector element hydrogen flow path.
NASA Technical Reports Server (NTRS)
Melis, Matthew E.
1990-01-01
COMGEN (Composite Model Generator) is an interactive FORTRAN program which can be used to create a wide variety of finite element models of continuous fiber composite materials at the micro level. It quickly generates batch or session files to be submitted to the finite element pre- and postprocessor PATRAN based on a few simple user inputs such as fiber diameter and percent fiber volume fraction of the composite to be analyzed. In addition, various mesh densities, boundary conditions, and loads can be assigned easily to the models within COMGEN. PATRAN uses a session file to generate finite element models and their associated loads which can then be translated to virtually any finite element analysis code such as NASTRAN or MARC.
Development of an hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1993-01-01
The purpose of this research effort is to develop a means to use, and to ultimately implement, hp-version finite elements in the numerical solution of optimal control problems. The hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element.
On Computing the Pressure by the p Version of the Finite Element Method for Stokes Problem
1990-02-15
CUERVO: A finite element computer program for nonlinear scalar transport problems
Sirman, M.B.; Gartling, D.K.
1995-11-01
CUERVO is a finite element code that is designed for the solution of multi-dimensional field problems described by a general nonlinear, advection-diffusion equation. The code is also applicable to field problems described by diffusion, Poisson or Laplace equations. The finite element formulation and the associated numerical methods used in CUERVO are outlined here; detailed instructions for use of the code are also presented. Example problems are provided to illustrate the use of the code.
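The general equation class described above can be written, for a scalar field \(\phi\) (a generic statement, not quoted from the CUERVO documentation):

```latex
\frac{\partial \phi}{\partial t} + \mathbf{u} \cdot \nabla \phi
  \;=\; \nabla \cdot \left( k \, \nabla \phi \right) + s
```

Setting \(\mathbf{u} = 0\) gives a pure diffusion problem; dropping the time derivative as well gives a Poisson problem (\(s \neq 0\)) or a Laplace problem (\(s = 0\)), which are the special cases the abstract lists.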
Calculations and Canonical Elements: Part I.
ERIC Educational Resources Information Center
Stewart, Ian; Tall, David
1979-01-01
The authors argue that the idea of canonical elements provides a coherent relationship between equivalence relations, the basis of modern approaches to many mathematical topics, and the traditional aspect of computation. Examples of equivalence relations, canonical elements, and their calculation are given. (MK)
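A concrete instance of the idea (not drawn from the article itself): fractions a/b under the relation (a, b) ~ (c, d) iff ad = bc, with the reduced fraction as the canonical element, so that computation can be carried out on representatives.

```python
# Canonical elements for the equivalence classes of fractions a/b,
# where (a, b) ~ (c, d) iff a*d == b*c.  The canonical representative
# is the reduced fraction with positive denominator.

from math import gcd

def canonical(a, b):
    """Canonical representative of the equivalence class of a/b (b != 0)."""
    if b < 0:
        a, b = -a, -b
    g = gcd(abs(a), b)
    return (a // g, b // g)

# Two equivalent fractions map to the same canonical element:
print(canonical(4, 6), canonical(-10, -15))  # both (2, 3)
```

Testing whether two fractions are equivalent then reduces to comparing canonical forms, which is exactly the computational role the article assigns to canonical elements.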
NASA Technical Reports Server (NTRS)
Moxon, Bruce C.; Green, John A.
1990-01-01
A high-performance platform for the development of real-time helicopter flight simulations is described, combining a parallel simulation development and analysis environment with a scalable multiprocessor computer system. Simulation functional decomposition is covered, including the sequencing and data dependency of simulation modules and the mapping of simulation functions to multiple processors. The multiprocessor-based implementation of a blade-element simulation of the UH-60 helicopter is presented, and a prototype developed for a TC2000 computer is generalized in order to arrive at a portable multiprocessor software architecture. It is pointed out that the proposed approach, coupled with a pilot's station, creates a setting in which simulation engineers, computer scientists, and pilots can work together in the design and evaluation of advanced real-time helicopter simulations.
Parallel Object-Oriented Computation Applied to a Finite Element Problem
NASA Technical Reports Server (NTRS)
Weissman, Jon B.; Grimshaw, Andrew S.; Ferraro, Robert
1993-01-01
The conventional wisdom in the scientific computing community is that the best way to solve large-scale numerically intensive scientific problems on today's parallel MIMD computers is to use Fortran or C programmed in a data-parallel style using low-level message-passing primitives. This approach inevitably leads to nonportable codes, extensive development time, and restricts parallel programming to the domain of the expert programmer. We believe that these problems are not inherent to parallel computing but are the result of the tools used. We will show that comparable performance can be achieved with little effort if better tools that present higher level abstractions are used.
Isoparametric 3-D Finite Element Mesh Generation Using Interactive Computer Graphics
NASA Technical Reports Server (NTRS)
Kayrak, C.; Ozsoy, T.
1985-01-01
An isoparametric 3-D finite element mesh generator was developed with a direct interface to an interactive geometric modeler program called POLYGON. POLYGON defines the model geometry in terms of boundaries and mesh regions for the mesh generator. The mesh generator controls the mesh flow through the 2-dimensional spans of regions by using the topological data and defines the connectivity between regions. The program is menu driven, and the user has control of element density and biasing through the spans and can also apply boundary conditions and loads interactively.
Spanne, P.; Rivers, M.L.
1988-01-01
The initial development shows that CMT using synchrotron x-rays can be developed to μm spatial resolution and perhaps even better. This creates a new microscopy technique which is of special interest in morphological studies of tissues, since no chemical preparation or slicing of the sample is necessary. The combination of CMT with spatial resolution in the μm range and elemental mapping with sensitivity in the ppm range results in a new tool for elemental mapping at the cellular level. 7 refs., 1 fig.
Analytical model and finite element computation of braking torque in electromagnetic retarder
NASA Astrophysics Data System (ADS)
Ye, Lezhi; Yang, Guangzhao; Li, Desheng
2014-12-01
An analytical model has been developed for analyzing the braking torque in electromagnetic retarder by flux tube and armature reaction method. The magnetic field distribution in air gap, the eddy current induced in the rotor and the braking torque are calculated by the developed model. Two-dimensional and three-dimensional finite element models for retarder have also been developed. Results from the analytical model are compared with those from finite element models. The validity of these three models is checked by the comparison of the theoretical predictions and the measurements from an experimental prototype. The influencing factors of braking torque have been studied.
2015-01-01
NASA Astrophysics Data System (ADS)
Uhlmann, Gunther
2008-07-01
This volume represents the proceedings of the fourth Applied Inverse Problems (AIP) international conference and the first congress of the Inverse Problems International Association (IPIA), which was held in Vancouver, Canada, June 25-29, 2007. The organizing committee was formed by Uri Ascher, University of British Columbia; Richard Froese, University of British Columbia; Gary Margrave, University of Calgary; and Gunther Uhlmann, University of Washington, chair. The conference was part of the activities of the Pacific Institute of Mathematical Sciences (PIMS) Collaborative Research Group on inverse problems (http://www.pims.math.ca/scientific/collaborative-research-groups/past-crgs). This event was also supported by grants from NSF and MITACS. Inverse Problems (IP) are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. The enormous increase in computing power and the development of powerful algorithms have made it possible to apply the techniques of IP to real-world problems of growing complexity. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes and, more recently, modelling in the life sciences. The series of Applied Inverse Problems (AIP) Conferences aims to provide a primary international forum for academic and industrial researchers working on all aspects of inverse problems, such as mathematical modelling, functional analytic methods, computational approaches, numerical algorithms etc. The steering committee of the AIP conferences consists of Heinz Engl (Johannes Kepler Universität, Austria), Joyce McLaughlin (RPI, USA), William Rundell (Texas A&M, USA), Erkki Somersalo (Helsinki University of Technology
Several advances in the analytic element method have been made to enhance its performance and facilitate three-dimensional ground-water flow modeling in a regional aquifer setting. First, a new public domain modular code (ModAEM) has been developed for modeling ground-water flow ...
2015-05-01
unstable seven element linear array of shear coaxial injectors. The first approach is a reduced model where the driving injectors are replaced with...tests > $400 million for propellants alone (2010 prices) Irreparable damage can occur in less than 1 second. Damaged engine injector faceplate
USDA-ARS?s Scientific Manuscript database
Transcription initiation, essential to gene expression regulation, involves recruitment of basal transcription factors to the core promoter elements (CPEs). The distribution of currently known CPEs across plant genomes is largely unknown. This is the first large scale genome-wide report on the compu...
TEnest 2.0: Computational annotation and visualization of nested transposable elements
USDA-ARS?s Scientific Manuscript database
Grass genomes are highly repetitive; for example, Oryza sativa (rice) contains 35% repeat sequences, Zea mays (maize) comprises 75%, and Triticum aestivum (wheat) includes approximately 80%. Most of these repeats occur as abundant transposable elements (TE), which present unique challenges to sequen...
ERIC Educational Resources Information Center
Exner, Robert; And Others
The sixteen chapters of this book provide the core material for the Elements of Mathematics Program, a secondary sequence developed for highly motivated students with strong verbal abilities. The sequence is based on a functional-relational approach to mathematics teaching, and emphasizes teaching by analysis of real-life situations. This text is…
Devine, K.D.; Hennigan, G.L.; Hutchinson, S.A.; Moffat, H.K.; Salinger, A.G.; Schmidt, R.C.; Shadid, J.N.; Smith, T.M.
1999-01-01
The theoretical background for the finite element computer program, MPSalsa Version 1.5, is presented in detail. MPSalsa is designed to solve laminar or turbulent low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow (with auxiliary turbulence equations), heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method, and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.
Liu, Haofei; Sun, Wei
2016-01-01
In this study, we evaluated computational efficiency of finite element (FE) simulations when a numerical approximation method was used to obtain the tangent moduli. A fiber-reinforced hyperelastic material model for nearly incompressible soft tissues was implemented for 3D solid elements using both the approximation method and the closed-form analytical method, and validated by comparing the components of the tangent modulus tensor (also referred to as the material Jacobian) between the two methods. The computational efficiency of the approximation method was evaluated with different perturbation parameters and approximation schemes, and quantified by the number of iteration steps and CPU time required to complete these simulations. From the simulation results, it can be seen that the overall accuracy of the approximation method is improved by adopting the central difference approximation scheme compared to the forward Euler approximation scheme. For small-scale simulations with about 10,000 DOFs, the approximation schemes could reduce the CPU time substantially compared to the closed-form solution, due to the fact that fewer calculation steps are needed at each integration point. However, for a large-scale simulation with about 300,000 DOFs, the advantages of the approximation schemes diminish because the factorization of the stiffness matrix will dominate the solution time. Overall, as it is material model independent, the approximation method simplifies the FE implementation of a complex constitutive model with comparable accuracy and computational efficiency to the closed-form solution, which makes it attractive in FE simulations with complex material models.
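The difference between the two approximation schemes can be sketched with a scalar stand-in for the tensor-valued material Jacobian (the constitutive law below is illustrative, not the paper's fiber-reinforced hyperelastic model):

```python
# Hypothetical 1D "constitutive law": stress as a nonlinear function of strain.
# In a real FE code the material Jacobian is a fourth-order tensor assembled
# per integration point; the finite-difference idea is the same.
def stress(strain):
    return 2.0 * strain + 0.5 * strain ** 3

def exact_tangent(strain):
    return 2.0 + 1.5 * strain ** 2

def forward_tangent(strain, h=1e-4):
    # Forward Euler (one-sided) approximation: O(h) truncation error
    return (stress(strain + h) - stress(strain)) / h

def central_tangent(strain, h=1e-4):
    # Central difference approximation: O(h^2) truncation error
    return (stress(strain + h) - stress(strain - h)) / (2.0 * h)

eps = 0.3
err_fwd = abs(forward_tangent(eps) - exact_tangent(eps))
err_ctr = abs(central_tangent(eps) - exact_tangent(eps))
print(err_fwd, err_ctr)  # the central difference is markedly more accurate
```

The central scheme costs one extra stress evaluation per perturbation but improves the Jacobian accuracy by an order in the perturbation size, consistent with the accuracy gain reported above.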
Dynamic Load Balancing for Finite Element Calculations on Parallel Computers. Chapter 1
NASA Technical Reports Server (NTRS)
Pramono, Eddy; Simon, Horst D.; Sohn, Andrew; Lasinski, T. A. (Technical Monitor)
1994-01-01
Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance of processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a frame work is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as a decision maker Jove while others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove in a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while other processors continue working with the current data and load distribution. Jove goes through several steps to decide if the new data should be taken, including preliminary evaluate, partition, processor reassignment, cost evaluation, and decision. Jove running on a single SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine SP2.
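The decision step can be sketched with a simple cost model (the threshold and the saving estimate below are illustrative assumptions, not Jove's actual criteria):

```python
# Hypothetical sketch of a "Jove"-style decision: the designated decision-maker
# process receives per-processor workloads and decides whether taking the new
# partition is worth its one-off migration cost.
def jove_decide(loads, repartition_cost, steps_remaining, tol=0.10):
    avg = sum(loads) / len(loads)
    imbalance = max(loads) / avg - 1.0      # relative overload of the slowest rank
    if imbalance <= tol:
        return False                        # balanced enough: keep current data
    # Time saved per step if perfectly rebalanced, versus the migration cost
    saving = (max(loads) - avg) * steps_remaining
    return saving > repartition_cost

print(jove_decide([10, 10, 10, 14], repartition_cost=5.0, steps_remaining=100))
print(jove_decide([10, 10, 10, 10.5], repartition_cost=5.0, steps_remaining=100))
```

The key property mirrored here is that the evaluation is cheap and asynchronous: the CFD ranks keep computing on the current distribution while the decision is made.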
Computation of Dancoff Factors for Fuel Elements Incorporating Randomly Packed TRISO Particles
J. L. Kloosterman; Abderrafi M. Ougouag
2005-01-01
A new method for estimating the Dancoff factors in pebble beds has been developed and implemented within two computer codes. The first of these codes, INTRAPEB, is used to compute Dancoff factors for individual pebbles taking into account the random packing of TRISO particles within the fuel zone of the pebble and explicitly accounting for the finite geometry of the fuel kernels. The second code, PEBDAN, is used to compute the pebble-to-pebble contribution to the overall Dancoff factor. The latter code also accounts for the finite size of the reactor vessel and for the proximity of reflectors, as well as for fluctuations in the pebble packing density that naturally arise in pebble beds.
Li, Mao; Wittek, Adam; Miller, Karol
2014-01-01
Biomechanical modeling methods can be used to predict deformations for medical image registration and particularly, they are very effective for whole-body computed tomography (CT) image registration because differences between the source and target images caused by complex articulated motions and soft tissues deformations are very large. The biomechanics-based image registration method needs to deform the source images using the deformation field predicted by finite element models (FEMs). In practice, the global and local coordinate systems are used in finite element analysis. This involves the transformation of coordinates from the global coordinate system to the local coordinate system when calculating the global coordinates of image voxels for warping images. In this paper, we present an efficient numerical inverse isoparametric mapping algorithm to calculate the local coordinates of arbitrary points within the eight-noded hexahedral finite element. Verification of the algorithm for a nonparallelepiped hexahedral element confirms its accuracy, fast convergence, and efficiency. The algorithm's application in warping of the whole-body CT using the deformation field predicted by means of a biomechanical FEM confirms its reliability in the context of whole-body CT registration. PMID:24828796
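A minimal sketch of such a Newton-based inverse isoparametric mapping for the trilinear eight-noded hexahedron (standard shape functions; the paper's exact algorithmic details may differ):

```python
import numpy as np

# Reference-cube corner signs for the eight-noded (trilinear) hexahedron
SIGNS = np.array([[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1],
                  [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]], float)

def shape(xi):
    # N_i(xi) = (1/8)(1 + s_i1*xi1)(1 + s_i2*xi2)(1 + s_i3*xi3), shape (8,)
    return 0.125 * np.prod(1.0 + SIGNS * xi, axis=1)

def dshape(xi):
    # dN_i/dxi_j, shape (8, 3)
    g = 1.0 + SIGNS * xi
    d = np.empty((8, 3))
    d[:, 0] = 0.125 * SIGNS[:, 0] * g[:, 1] * g[:, 2]
    d[:, 1] = 0.125 * SIGNS[:, 1] * g[:, 0] * g[:, 2]
    d[:, 2] = 0.125 * SIGNS[:, 2] * g[:, 0] * g[:, 1]
    return d

def inverse_map(nodes, p, tol=1e-10, maxit=30):
    """Newton iteration for the local coordinates of global point p
    inside the hexahedral element with nodal coordinates `nodes` (8x3)."""
    xi = np.zeros(3)                      # start at the element centre
    for _ in range(maxit):
        r = shape(xi) @ nodes - p         # residual x(xi) - p
        if np.linalg.norm(r) < tol:
            break
        J = nodes.T @ dshape(xi)          # 3x3 Jacobian, J[j,k] = dx_j/dxi_k
        xi = xi + np.linalg.solve(J, -r)
    return xi
```

For a parallelepiped element the map is affine and Newton converges in one step; for a general (non-parallelepiped) hexahedron a few iterations suffice, which is what makes the approach practical when warping every voxel of a whole-body CT.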
Dossou, Kokou (E-mail: Kokou.Dossou@uts.edu.au); Byrne, Michael A.; Botten, Lindsay C.
2006-11-20
We consider the calculation of the band structure and Bloch mode basis of two-dimensional photonic crystals, modelled as stacks of one-dimensional diffraction gratings. The scattering properties of each grating are calculated using an efficient finite element method (FEM) and allow the complete mode structure to be derived from a transfer matrix method. A range of numerical examples showing the accuracy, flexibility and utility of the method is presented.
Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements
NASA Astrophysics Data System (ADS)
Casu, P.; Pisu, C.
2013-02-01
This work proposes the application of the latest methods of photo-modeling to the study of Gothic architecture in Sardinia. The aim is to assess the versatility and ease of use of such documentation tools for studying architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose of obtaining an accurate 3D model of some Gothic portals. We combined the contact survey and the photographic survey oriented to photo-modelling. The software used is 123D Catch by Autodesk, an Image Based Modelling (IBM) system available free of charge. It is a web-based application that requires a few simple steps to produce a mesh from a set of unoriented photos. We tested the application on four portals, working at different scales of detail: first the whole portal and then the different architectural elements that compose it. We were able to model all the elements and to quickly extrapolate simple sections, in order to make a comparison between the moldings, highlighting similarities and differences. Working on different sites at different scales of detail allowed us to test the procedure under different conditions of exposure, sunlight, accessibility, surface degradation and material type, and with different equipment and operators, showing whether the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or larger or smaller elements.
Design and Construction of Detector and Data Acquisition Elements for Proton Computed Tomography
Fermi Research Alliance; Northern Illinois University
2015-07-15
Proton computed tomography (pCT) offers an alternative to x-ray imaging with potential for three-dimensional imaging, reduced radiation exposure, and in-situ imaging. Northern Illinois University (NIU) is developing a second-generation proton computed tomography system with a goal of demonstrating the feasibility of three-dimensional imaging within clinically realistic imaging times. The second-generation pCT system comprises a tracking system, a calorimeter, data acquisition, a computing farm, and software algorithms. The proton beam encounters the upstream tracking detectors, the patient or phantom, the downstream tracking detectors, and a calorimeter. The schematic layout of the pCT system is shown. The data acquisition sends the proton scattering information to an offline computing farm. Major innovations of the second-generation pCT project involve an increased data acquisition rate (MHz range) and development of three-dimensional imaging algorithms. The Fermilab Particle Physics Division and the Northern Illinois Center for Accelerator and Detector Development at Northern Illinois University worked together to design and construct the tracking detectors, calorimeter, readout electronics and detector mounting system.
Three-Dimensional Effects on Multi-Element High Lift Computations
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Watson, Ralph D.
2002-01-01
In an effort to discover the causes for disagreement between previous 2-D computations and nominally 2-D experiment for flow over the 3-element McDonnell Douglas 30P-30N airfoil configuration at high lift, a combined experimental/CFD investigation is described. The experiment explores several different side-wall boundary layer control venting patterns, documents venting mass flow rates, and looks at corner surface flow patterns. The experimental angle of attack at maximum lift is found to be sensitive to the side wall venting pattern: a particular pattern increases the angle of attack at maximum lift by at least 2 deg. A significant amount of spanwise pressure variation is present at angles of attack near maximum lift. A CFD study using 3-D structured-grid computations, which includes the modeling of side-wall venting, is employed to investigate 3-D effects of the flow. Side-wall suction strength is found to affect the angle at which maximum lift is predicted. Maximum lift in the CFD is shown to be limited by the growth of an off-body corner flow vortex and a consequent increase in spanwise pressure variation and decrease in circulation. The 3-D computations with and without wall venting predict similar trends to experiment at low angles of attack, but either stall too early or else overpredict lift levels near maximum lift by as much as 5%. Unstructured-grid computations demonstrate that mounting brackets lower the lift levels near maximum lift conditions.
NASA Astrophysics Data System (ADS)
Corsini, A.; Rispoli, F.; Santoriello, A.; Tezduyar, T. E.
2006-09-01
Recent advances in turbulence modeling brought more and more sophisticated turbulence closures (e.g. k-ε, k-ε-v²-f, Second Moment Closures), where the governing equations for the model parameters involve advection, diffusion and reaction terms. Numerical instabilities can be generated by the dominant advection or reaction terms. Classical stabilized formulations such as the Streamline-Upwind/Petrov-Galerkin (SUPG) formulation (Brooks and Hughes, Comput. Methods Appl. Mech. Eng. 32:199-255, 1982; Hughes and Tezduyar, Comput. Methods Appl. Mech. Eng. 45:217-284, 1984) are very well suited for preventing the numerical instabilities generated by the dominant advection terms. A different stabilization, however, is needed for instabilities due to the dominant reaction terms. An additional stabilization term, called the diffusion for reaction-dominated (DRD) term, was introduced by Tezduyar and Park (Comput. Methods Appl. Mech. Eng. 59:307-325, 1986) for that purpose and improves the SUPG performance. In recent years a new class of variational multi-scale (VMS) stabilizations (Hughes, Comput. Methods Appl. Mech. Eng. 127:387-401, 1995) has been introduced, and this approach, in principle, can deal with advection-diffusion-reaction equations. However, it was pointed out in Hauke (Comput. Methods Appl. Mech. Eng. 191:2925-2947) that this class of methods also needs some improvement in the presence of high reaction rates. In this work we show the benefits of using the DRD operator to enhance the core stabilization techniques such as the SUPG and VMS formulations. We also propose a new operator called the DRDJ (DRD with the local variation jump) term, targeting the reduction of numerical oscillations in the presence of both high reaction rates and sharp solution gradients. The methods are evaluated in the context of two stabilized methods: the classical SUPG formulation and a recently-developed VMS formulation called the V-SGS (Corsini et al., Comput. Methods Appl. Mech. Eng. 194:4797-4823, 2005
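Schematically, for a scalar advection-diffusion-reaction equation the structure of such stabilized formulations can be written as follows (a sketch of the general form, not the paper's exact operators or stabilization parameters):

```latex
% Model problem:
\mathbf{a}\cdot\nabla\phi \;-\; \nabla\cdot(\nu\,\nabla\phi) \;+\; s\,\phi \;=\; f .

% Galerkin terms + SUPG term + a DRD-type added diffusion:
\int_{\Omega}\!\Big( w\,(\mathbf{a}\cdot\nabla\phi)
      + \nu\,\nabla w\cdot\nabla\phi + w\,s\,\phi - w\,f \Big)\,d\Omega
\;+\; \sum_{e}\int_{\Omega^{e}} \tau_{\mathrm{SUPG}}\,
      (\mathbf{a}\cdot\nabla w)\,R(\phi)\,d\Omega
\;+\; \sum_{e}\int_{\Omega^{e}} \nu_{\mathrm{DRD}}\,
      \nabla w\cdot\nabla\phi\,d\Omega \;=\; 0 ,

% with the element residual
R(\phi) \;=\; \mathbf{a}\cdot\nabla\phi - \nabla\cdot(\nu\,\nabla\phi) + s\,\phi - f .
```

Here the SUPG term controls advection-dominated oscillations, while the DRD-type term supplies an additional diffusivity $\nu_{\mathrm{DRD}}$ that activates as the reaction rate $s$ becomes dominant, which is the regime the DRD and DRDJ operators target.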
Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P
Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Ng, C.; Schussman, G.; Ko, K.; Syratchev, I. (CERN)
2009-06-19
In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.
ERIC Educational Resources Information Center
Threlfall, John; Pool, Peter; Homer, Matthew; Swinnerton, Bronwen
2007-01-01
This article explores the effect on assessment of "translating" paper and pencil test items into their computer equivalents. Computer versions of a set of mathematics questions derived from the paper-based end of key stage 2 and 3 assessments in England were administered to age appropriate pupil samples, and the outcomes compared.…
NASA Astrophysics Data System (ADS)
López Ortega, A.; Scovazzi, G.
2011-07-01
This article describes a conservative synchronized remap algorithm applicable to arbitrary Lagrangian-Eulerian computations with nodal finite elements. In the proposed approach, ideas derived from flux-corrected transport (FCT) methods are extended to conservative remap. Unique to the proposed method is the direct incorporation of the geometric conservation law (GCL) in the resulting numerical scheme. It is shown here that the geometric conservation law allows the method to inherit the positivity preserving and local extrema diminishing (LED) properties typical of FCT schemes. The proposed framework is extended to the systems of equations that typically arise in meteorological and compressible flow computations. The proposed algorithm remaps the vector fields associated with these problems by means of a synchronized strategy. The present paper also complements and extends the work of the second author on nodal-based methods for shock hydrodynamics, delivering a fully integrated suite of Lagrangian/remap algorithms for computations of compressible materials under extreme load conditions. Extensive testing in one, two, and three dimensions shows that the method is robust and accurate under typical computational scenarios.
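In generic flux-corrected transport fashion, the remapped nodal value blends a monotone low-order update with limited antidiffusive fluxes (a schematic of the standard FCT blending step, not the paper's precise GCL-compatible synchronized scheme):

```latex
\phi_i^{\mathrm{new}} \;=\; \phi_i^{L}
  \;+\; \frac{1}{m_i}\sum_{j\in\mathcal{N}(i)}
        \alpha_{ij}\,\big(F_{ij}^{H}-F_{ij}^{L}\big),
  \qquad 0 \le \alpha_{ij} \le 1 ,
```

where $\phi_i^{L}$ is the monotone low-order remap, $F_{ij}^{H}$ and $F_{ij}^{L}$ are the high- and low-order internodal fluxes, $m_i$ a nodal mass, and the limiter coefficients $\alpha_{ij}$ are chosen (Zalesak-style) so that no new local extrema are created: $\alpha_{ij}=0$ recovers the monotone scheme and $\alpha_{ij}=1$ the high-order one. The LED property cited above is exactly this constraint on the limited update.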
Books and monographs on finite element technology
NASA Technical Reports Server (NTRS)
Noor, A. K.
1985-01-01
The present paper provides a listing of all of the English books and some of the foreign books on finite element technology, taking into account also a list of the conference proceedings devoted solely to finite elements. The references are divided into categories. Attention is given to fundamentals, mathematical foundations, structural and solid mechanics applications, fluid mechanics applications, other applied science and engineering applications, computer implementation and software systems, computational and modeling aspects, special topics, boundary element methods, proceedings of symposia and conferences on finite element technology, bibliographies, handbooks, and historical accounts.
Zhou, P.; Gilmore, J.; Badics, Z.; Cendes, Z.J.
1998-09-01
A method for accurately predicting the steady-state performance of squirrel cage induction motors is presented. The approach is based on the use of complex two-dimensional finite element solutions to deduce per-phase equivalent circuit parameters for any operating condition. Core saturation and skin effect are directly considered in the field calculation. Corrections can be introduced to include three-dimensional effects such as end-winding and rotor skew. An application example is provided to demonstrate the effectiveness of the proposed approach.
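Once the per-phase parameters have been deduced from the field solutions, steady-state performance follows from the standard induction-motor equivalent circuit. A sketch with illustrative parameter values (the numbers are assumptions, not values from the paper):

```python
import math

def performance(V, f, poles, s, R1, X1, R2, X2, Xm):
    """Steady-state stator current and air-gap torque from the standard
    per-phase equivalent circuit (all rotor quantities referred to stator)."""
    w_sync = 2.0 * math.pi * f / (poles / 2)        # synchronous speed, rad/s
    Z2 = complex(R2 / s, X2)                        # rotor branch at slip s
    Zm = complex(0.0, Xm)                           # magnetizing branch
    Z = complex(R1, X1) + (Zm * Z2) / (Zm + Z2)     # total per-phase impedance
    I1 = V / Z                                      # stator phase current
    I2 = I1 * Zm / (Zm + Z2)                        # rotor current (current divider)
    torque = 3.0 * abs(I2) ** 2 * (R2 / s) / w_sync # air-gap torque, N*m
    return abs(I1), torque

# Illustrative 4-pole machine at 3% slip
I1, T = performance(V=230.0, f=60.0, poles=4, s=0.03,
                    R1=0.5, X1=1.0, R2=0.4, X2=1.0, Xm=30.0)
print(I1, T)
```

The contribution described above lies upstream of this calculation: the FE solutions supply R2, X2 and Xm for any operating condition, with saturation and skin effect already embedded, instead of relying on design formulas or locked-rotor/no-load tests.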
Trial-by-trial motor adaptation: a window into elemental neural computation.
Thoroughman, Kurt A; Fine, Michael S; Taylor, Jordan A
2007-01-01
How does the brain compute? To address this question, mathematical modelers, neurophysiologists, and psychophysicists have sought behaviors that provide evidence of specific neural computations. Human motor behavior consists of several such computations [Shadmehr, R., Wise, S.P. (2005). MIT Press: Cambridge, MA], such as the transformation of a sensory input to a motor output. The motor system is also capable of learning new transformations to produce novel outputs; humans have the remarkable ability to alter their motor output to adapt to changes in their own bodies and the environment [Wolpert, D.M., Ghahramani, Z. (2000). Nat. Neurosci., 3: 1212-1217]. These changes can be long term, through growth and changing body proportions, or short term, through changes in the external environment. Here we focus on trial-by-trial adaptation, the transformation of individually sensed movements into incremental updates of adaptive control. These investigations have the promise of revealing important basic principles of motor control and ultimately guiding a new understanding of the neuronal correlates of motor behaviors.
Hiller, Michael; Agarwal, Saatvik; Notwell, James H.; Parikh, Ravi; Guturu, Harendra; Wenger, Aaron M.; Bejerano, Gill
2013-01-01
Many important model organisms for biomedical and evolutionary research have sequenced genomes, but occupy a phylogenetically isolated position, evolutionarily distant from other sequenced genomes. This phylogenetic isolation is exemplified for zebrafish, a vertebrate model for cis-regulation, development and human disease, whose evolutionary distance to all other currently sequenced fish exceeds the distance between human and chicken. Such large distances make it difficult to align genomes and use them for comparative analysis beyond gene-focused questions. In particular, detecting conserved non-genic elements (CNEs) as promising cis-regulatory elements with biological importance is challenging. Here, we develop a general comparative genomics framework to align isolated genomes and to comprehensively detect CNEs. Our approach integrates highly sensitive and quality-controlled local alignments and uses alignment transitivity and ancestral reconstruction to bridge large evolutionary distances. We apply our framework to zebrafish and demonstrate substantially improved CNE detection and quality compared with previous sets. Our zebrafish CNE set comprises 54 533 CNEs, of which 11 792 (22%) are conserved to human or mouse. Our zebrafish CNEs (http://zebrafish.stanford.edu) are highly enriched in known enhancers and extend existing experimental (ChIP-Seq) sets. The same framework can now be applied to the isolated genomes of frog, amphioxus, Caenorhabditis elegans and many others. PMID:23814184
ERIC Educational Resources Information Center
Klecka, Joseph A.
This report describes various aspects of lesson production and use of the PLATO system at Chanute Air Force Base. The first chapter considers four major factors influencing lesson production: (1) implementation of the "lean approach," (2) the Instructional Systems Development (ISD) role in lesson production, (3) the transfer of…
Xie, Yang; Ying, Jinyong; Xie, Dexuan
2017-03-30
SMPBS (Size Modified Poisson-Boltzmann Solvers) is a web server for computing biomolecular electrostatics using finite element solvers of the size modified Poisson-Boltzmann equation (SMPBE). SMPBE not only reflects ionic size effects but also includes the classic Poisson-Boltzmann equation (PBE) as a special case. Thus, its web server is expected to have a broader range of applications than a PBE web server. SMPBS is designed with a dynamic, mobile-friendly user interface, and features easily accessible help text, asynchronous data submission, and an interactive, hardware-accelerated molecular visualization viewer based on the 3Dmol.js library. In particular, the viewer allows computed electrostatics to be directly mapped onto an irregular triangular mesh of a molecular surface. Due to this functionality and the fast SMPBE finite element solvers, the web server is very efficient in the calculation and visualization of electrostatics. In addition, SMPBE is reconstructed using a new objective electrostatic free energy, clearly showing that the electrostatics and ionic concentrations predicted by SMPBE are optimal in the sense of minimizing the objective electrostatic free energy. SMPBS is available at the URL: smpbs.math.uwm.edu © 2017 Wiley Periodicals, Inc.
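For orientation, the classic PBE that SMPBE generalizes can be written as follows (a standard dimensionless statement, not SMPBS's exact formulation):

```latex
-\nabla\cdot\big(\epsilon(\mathbf{r})\,\nabla\phi(\mathbf{r})\big)
  \;+\; \bar{\kappa}^{2}(\mathbf{r})\,\sinh\phi(\mathbf{r})
  \;=\; \rho^{f}(\mathbf{r}) ,
```

where $\epsilon$ is the piecewise dielectric coefficient (solute vs. solvent), $\bar{\kappa}$ the modified Debye-Hückel screening parameter (zero inside the solute region), and $\rho^{f}$ the fixed solute charge distribution. The size-modified variant replaces the $\sinh$ ionic term with concentrations that saturate at a finite, ion-size-dependent packing limit, which is why SMPBE reduces to the PBE when the size parameter vanishes.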
NASA Technical Reports Server (NTRS)
Sainsbury-Carter, J. B.; Conaway, J. H.
1973-01-01
The development and implementation of a preprocessor system for the finite element analysis of helicopter fuselages is described. The system utilizes interactive graphics for the generation, display, and editing of NASTRAN data for fuselage models. It is operated from an IBM 2250 cathode ray tube (CRT) console driven by an IBM 370/145 computer. Real-time interaction plus automatic data generation reduces the nominal 6 to 10 week time for manual generation and checking of data to a few days. The interactive graphics system consists of a series of satellite programs operated from a central NASTRAN Systems Monitor. Fuselage structural models including the outer shell and internal structure may be rapidly generated. All numbering systems are automatically assigned. Hard copy plots of the model labeled with GRID or element IDs are also available. General purpose programs for displaying and editing NASTRAN data are included in the system. Utilization of the NASTRAN interactive graphics system has made possible the multiple finite element analysis of complex helicopter fuselage structures within design schedules.
Computation of the steady viscous flow over a tri-element 'augmentor wing' airfoil
NASA Technical Reports Server (NTRS)
Lasinski, T. A.; Andrews, A. E.; Sorenson, R. L.; Chaussee, D. S.; Pulliam, T. H.; Kutler, P.
1982-01-01
The augmentor wing consists of a main airfoil with a slotted trailing edge for blowing, and two smaller aft airfoils which shroud the jet. This configuration has been modeled for numerical simulation by a novel discretization procedure which generates four separate grids: three surface-oriented airfoil grids and one outer free-stream grid. Grid lines and slopes are continuous across boundaries, so grid overlap at common boundaries provides boundary information without interpolation. A two-dimensional unsteady thin-layer Navier-Stokes code is used to calculate the flow for the no-blowing case at freestream Mach number = 0.7, Re = 12,600,000, and angle of incidence = 1.05 deg. Qualitative agreement with experimental data indicates the utility of this procedure in the analysis of multi-element configurations.
NASA Astrophysics Data System (ADS)
Whiteley, J. P.
2017-06-01
Large, incompressible elastic deformations are governed by a system of nonlinear partial differential equations. The finite element discretisation of these partial differential equations yields a system of nonlinear algebraic equations that are usually solved using Newton's method. On each iteration of Newton's method, a linear system must be solved. We exploit the structure of the Jacobian matrix to propose a preconditioner, comprising two steps. The first step is the solution of a relatively small, symmetric, positive definite linear system using the preconditioned conjugate gradient method. This is followed by a small number of multigrid V-cycles for a larger linear system. Through the use of exemplar elastic deformations, the preconditioner is demonstrated to facilitate the iterative solution of the linear systems arising. The number of GMRES iterations required has only a very weak dependence on the number of degrees of freedom of the linear systems.
NASA Astrophysics Data System (ADS)
Xie, Dexuan; Jiang, Yi
2016-10-01
The nonlocal dielectric approach has been studied for more than forty years but was limited to water solvent until the recent work of Xie et al. (2013) [20]. Building on that work, in this paper a nonlocal modified Poisson-Boltzmann equation (NMPBE) is proposed to incorporate nonlocal dielectric effects into the classic Poisson-Boltzmann equation (PBE) for protein in ionic solvent. The focus of this paper is to present an efficient finite element algorithm and a related software package for solving NMPBE. Numerical results are reported to validate this new software package and demonstrate its high performance for protein molecules. They also show the potential of NMPBE as a better predictor of electrostatic solvation and binding free energies than PBE.
The impact of computers on biostratigraphy: A key element in sequence stratigraphic interpretations
Becker, R.C.; Goodman, D.K.; Couvering, J.V.
1993-02-01
Advances in personal computers have provided the power to perform complex analytical procedures in a cost effective manner. Three newly developed PC-based MS DOS-compatible software systems provide biostratigraphers with the computing flexibility to meet the ever-changing needs of today's business environment. PALEX is a paleoecologic expert system that uses rules to produce both raw and summarized paleoecologic interpretations based on a species code. These codes contain information such as fossil type, geologic age, geographic area, paleoecologic rule, and confidence level. Annotated modifications to the PALEX interpretation are allowed, providing an audit trail for both geoscientists and other biostratigraphers. PALEX, designed to bring consistency to paleoecologic interpretation, can also be used to interpret palynofacies. RAGWARE is a complete data capture and composite log plotting system. Data are captured using a species template on a digitizing pad and entered directly into the RAGWARE system. Plots are produced integrating abundance and diversity curves with paleoecology, electric logs, synthetic seismograms, dipmeter data, geochemistry data, well bore tests, petrographic analysis, or other data. The results can be converted from depth to two-way time and plotted to overlay a seismic section.
Davis, Matthew L; Scott Gayzik, F
2016-10-01
Biofidelity response corridors developed from post-mortem human subjects are commonly used in the design and validation of anthropomorphic test devices and computational human body models (HBMs). Typically, corridors are derived from a diverse pool of biomechanical data and later normalized to a target body habitus. The objective of this study was to use morphed computational HBMs to compare the ability of various scaling techniques to scale response data from a reference to a target anthropometry. HBMs are ideally suited for this type of study since they uphold the assumptions of equal density and modulus that are implicit in scaling method development. In total, six scaling procedures were evaluated, four from the literature (equal-stress equal-velocity, ESEV, and three variations of impulse momentum) and two which are introduced in the paper (ESEV using a ratio of effective masses, ESEV-EffMass, and a kinetic energy approach). In total, 24 simulations were performed, representing both pendulum and full body impacts for three representative HBMs. These simulations were quantitatively compared using the International Organization for Standardization (ISO) ISO-TS18571 standard. Based on these results, ESEV-EffMass achieved the highest overall similarity score (indicating that it is most proficient at scaling a reference response to a target). Additionally, ESEV was found to perform poorly for two degree-of-freedom (DOF) systems. However, the results also indicated that no single technique was clearly the most appropriate for all scenarios.
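The equal-stress equal-velocity idea can be made concrete with a short sketch. Under the equal-density, equal-modulus assumptions the study leans on, a single geometric factor derived from the mass ratio drives all ESEV scale factors. The function below is an illustrative reconstruction, not the authors' code:

```python
def esev_factors(m_ref, m_target):
    """Equal-stress equal-velocity (ESEV) scale factors.

    Assumes equal density and modulus between subjects, so one
    geometric scale lam = (m_target / m_ref)**(1/3) governs all
    mechanical quantities (illustrative sketch)."""
    lam = (m_target / m_ref) ** (1.0 / 3.0)
    return {
        "length": lam,          # displacements scale with size
        "time": lam,            # durations scale with size
        "velocity": 1.0,        # velocities unscaled (the "EV" part)
        "acceleration": 1.0 / lam,
        "force": lam ** 2,      # equal stress over an area ~ lam**2
        "mass": lam ** 3,
    }

# Scale a 76 kg reference response to a 50 kg target anthropometry
f = esev_factors(76.0, 50.0)
```

The ESEV-EffMass variant introduced in the paper replaces the whole-body masses here with effective masses extracted from the impact response.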
NASA Astrophysics Data System (ADS)
Vasquez, David Alan
Can the educational effectiveness of physics instruction software for middle schoolers be improved by employing "game elements" commonly found in recreational computer games? This study utilized a selected set of game elements to contextualize and embellish physics word problems with the aim of making such problems more engaging. Game elements used included: (1) a fantasy-story context with developed characters; and (2) high-end graphics and visual effects. The primary purpose of the study was to find out if the added production cost of using such game elements was justified by proportionate gains in physics learning. The theoretical framework for the study was a modified version of Lepper and Malone's "intrinsically-motivating game elements" model. A key design issue in this model is the concept of "endogeneity", or the degree to which the game elements used in educational software are integrated with its learning content. Two competing courseware treatments were custom-designed and produced for the study; both dealt with Newton's first law. The first treatment (T1) was a 45-minute interactive tutorial that featured cartoon characters, color animations, hypertext, audio narration, and realistic motion simulations using the Interactive Physics™ software. The second treatment (T2) was similar to the first except for the addition of approximately three minutes of cinema-like sequences where characters, game objectives, and a science-fiction story premise were described and portrayed with high-end graphics and visual effects. The sample of 47 middle school students was evenly divided between eighth and ninth graders and between boys and girls. Using a pretest/posttest experimental design, the independent variables for the study were: (1) two levels of treatment; (2) gender; and (3) two schools. The dependent variables were scores on a written posttest for both: (1) physics learning, and (2) attitude toward physics learning. Findings indicated that, although
Piro, M.H.A; Wassermann, F.; Grundmann, S.; ...
2017-05-23
The current work presents experimental and computational investigations of fluid flow through a 37-element CANDU nuclear fuel bundle. Experiments based on Magnetic Resonance Velocimetry (MRV) permit three-dimensional, three-component fluid velocity measurements to be made within the bundle with sub-millimeter resolution; the measurements are non-intrusive and require neither tracer particles nor optical access to the flow field. Computational fluid dynamic (CFD) simulations of the foregoing experiments were performed with the Hydra-TH code using implicit large eddy simulation, and were in good agreement with experimental measurements of the fluid velocity. Greater understanding has been gained of the evolution of geometry-induced inter-subchannel mixing, the local effects of obstructed debris on the local flow field, and various turbulent effects, such as recirculation, swirl and separation. These capabilities are not available with conventional experimental techniques or thermal-hydraulic codes. Finally, the overall goal of this work is to continue developing experimental and computational capabilities for further investigations that reliably support nuclear reactor performance and safety.
NASA Astrophysics Data System (ADS)
Barnett, Michael P.; Decker, Thomas; Krandick, Werner
2001-06-01
We use computer algebra to expand the Pekeris secular determinant for two-electron atoms symbolically, to produce an explicit polynomial in the energy parameter ɛ, with coefficients that are polynomials in the nuclear charge Z. Repeated differentiation of the polynomial, followed by a simple transformation, gives a series for ɛ in decreasing powers of Z. The leading term is linear, consistent with well-known behavior that corresponds to the approximate quadratic dependence of ionization potential on atomic number (Moseley's law). Evaluating the 12-term series for individual Z gives the roots to a precision of 10 or more digits for Z⩾2. This suggests the use of similar tactics to construct formulas for roots vs atomic, molecular, and variational parameters in other eigenvalue problems, in accordance with the general objectives of gradient theory. Matrix elements can be represented by symbols in the secular determinants, enabling the use of analytical expressions for the molecular integrals in the differentiation of the explicit polynomials. The mathematical and computational techniques include modular arithmetic to handle matrix and polynomial operations, and unrestricted precision arithmetic to overcome severe digital erosion. These are likely to find many further applications in computational chemistry.
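The symbolic workflow — expand a secular determinant into an explicit polynomial in the energy parameter, then study its roots as functions of Z — can be miniaturized with a computer algebra system. The 2x2 matrix below is a hypothetical toy, not the Pekeris matrix:

```python
import sympy as sp

eps, Z = sp.symbols('epsilon Z')

# Toy 2x2 "secular matrix" with entries polynomial in Z (hypothetical
# values; the Pekeris determinants are far larger)
M = sp.Matrix([[-Z**2 / 2 - eps, Z / 8],
               [Z / 8, -Z**2 / 8 - eps]])

# Explicit polynomial in eps with coefficients polynomial in Z
p = sp.expand(M.det())

# For large Z each root behaves like c*Z**2, the quadratic dependence
# on nuclear charge that underlies Moseley's law in the full problem
roots = sp.solve(sp.Eq(p, 0), eps)
```

For the real problem, repeated differentiation of the explicit polynomial replaces the closed-form `solve`, and unrestricted-precision arithmetic guards against the digital erosion the authors mention.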
NASA Astrophysics Data System (ADS)
Della Morte, Michele; Giusti, Leonardo
2011-05-01
We make use of the global symmetries of the Yang-Mills theory on the lattice to design a new computational strategy for extracting glueball masses and matrix elements which achieves an exponential reduction of the statistical error with respect to standard techniques. By generalizing our previous work on the parity symmetry, the partition function of the theory is decomposed into a sum of path integrals each giving the contribution from multiplets of states with fixed quantum numbers associated to parity, charge conjugation, translations, rotations and central conjugations Z_N^3. Ratios of path integrals and correlation functions can then be computed with a multi-level Monte Carlo integration scheme whose numerical cost, at a fixed statistical precision and at asymptotically large times, increases power-like with the time extent of the lattice. The strategy is implemented for the SU(3) Yang-Mills theory, and a full-fledged computation of the mass and multiplicity of the lightest glueball with vacuum quantum numbers is carried out at a lattice spacing of 0.17 fm.
Bookout, G.; Sinacori, J.
1993-01-01
The objective of this paper is to advance hypotheses about texture as a visual cueing medium in simulation and to provide guidelines for data base modelers in the use of computer image generator resources to provide effective visual cues for simulation purposes. The emphasis is on texture decoration of the earth's surface data base in order to support low-level flight, i.e., flight at elevations above the surface of 500 feet or less. The appearance of the surface of the sea is the focus of this paper. The physics of the sea's appearance are discussed and guidelines are given for its representation for sea states from 0 (calm) to 5 (fresh breeze of 17-21 knots and six-foot waves, peak-to-trough). The viewpoints considered vary from 500 feet above the mean sea surface to an altitude just above the wave crests. 7 refs.
Parallel computation in a three-dimensional elastic-plastic finite-element analysis
NASA Technical Reports Server (NTRS)
Shivakumar, K. N.; Bigelow, C. A.; Newman, J. C., Jr.
1992-01-01
A CRAY parallel processing technique called autotasking was implemented in a three-dimensional elasto-plastic finite-element code. The technique was evaluated on two CRAY supercomputers, a CRAY 2 and a CRAY Y-MP. Autotasking was implemented in all major portions of the code, except the matrix equations solver. Compiler directives alone were not able to properly multitask the code; user-inserted directives were required to achieve better performance. It was noted that the connect time, rather than wall-clock time, was more appropriate to determine speedup in multiuser environments. For a typical example problem, a speedup of 2.1 (1.8 when the solution time was included) was achieved in a dedicated environment and 1.7 (1.6 with solution time) in a multiuser environment on a four-processor CRAY 2 supercomputer. The speedup on a three-processor CRAY Y-MP was about 2.4 (2.0 with solution time) in a multiuser environment.
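The observed plateau in speedup is what Amdahl's law predicts when one phase (here the matrix equation solver) stays serial. A back-of-envelope inversion of the law, not a computation from the report:

```python
def amdahl_serial_fraction(speedup, procs):
    """Serial fraction implied by Amdahl's law for an observed speedup
    on `procs` processors: S = 1 / (f + (1 - f) / p), solved for f.
    A rough consistency check, not data from the report."""
    return (procs / speedup - 1.0) / (procs - 1.0)

# The reported dedicated-environment speedup of 2.1 on a four-processor
# CRAY 2 implies roughly 30% serial work, consistent with the matrix
# equation solver being left unparallelized
f = amdahl_serial_fraction(2.1, 4)
```

The same formula shows why adding processors beyond four would yield diminishing returns until the solver itself is multitasked.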
Towards an Entropy Stable Spectral Element Framework for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.
2016-01-01
Entropy stable (SS) discontinuous spectral collocation formulations of any order are developed for the compressible Navier-Stokes equations on hexahedral elements. Recent progress on two complementary efforts is presented. The first effort generalizes previous SS spectral collocation work to extend the applicable set of points from tensor-product Legendre-Gauss-Lobatto (LGL) to tensor-product Legendre-Gauss (LG) points. The LG and LGL point formulations are compared on a series of test problems. Although more costly to implement, the LG operators are shown to be significantly more accurate on comparable grids. Both the LGL and LG operators are of comparable efficiency and robustness, as is demonstrated using test problems for which conventional FEM techniques suffer instability. The second effort generalizes previous SS work to include the possibility of p-refinement at non-conforming interfaces. A generalization of existing entropy stability machinery is developed to accommodate the nuances of fully multi-dimensional summation-by-parts (SBP) operators. The entropy stability of the compressible Euler equations on non-conforming interfaces is demonstrated using the newly developed LG operators and multi-dimensional interface interpolation operators.
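The summation-by-parts property that underlies these entropy-stable schemes is easy to verify for the smallest LGL element. For the 3-point LGL rule on [-1, 1], the quadrature weights (mass matrix P) and the differentiation matrix D of the quadratic Lagrange basis satisfy P D + (P D)^T = B = diag(-1, 0, 1), the discrete analogue of integration by parts:

```python
# 3-point Legendre-Gauss-Lobatto element on [-1, 1]: quadrature
# weights (diagonal mass matrix P) and differentiation matrix D of
# the quadratic Lagrange basis at the nodes -1, 0, 1
w = [1.0 / 3.0, 4.0 / 3.0, 1.0 / 3.0]
D = [[-1.5,  2.0, -0.5],
     [-0.5,  0.0,  0.5],
     [ 0.5, -2.0,  1.5]]

# Summation-by-parts check: Q = P D + (P D)^T should equal the
# boundary matrix B = diag(-1, 0, 1) exactly
PD = [[w[i] * D[i][j] for j in range(3)] for i in range(3)]
Q = [[PD[i][j] + PD[j][i] for j in range(3)] for i in range(3)]
```

LG points lack boundary nodes, which is precisely why the paper needs generalized (multi-dimensional) SBP machinery and interface interpolation operators there.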
Determination of Rolling-Element Fatigue Life From Computer Generated Bearing Tests
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Two types of rolling-element bearings, representing radially loaded and thrust-loaded bearings, were used for this study. Three hundred forty (340) virtual bearing sets totaling 31400 bearings were randomly assembled and tested by Monte Carlo (random) number generation. The Monte Carlo results were compared with endurance data from 51 bearing sets comprising 5321 bearings. A simple algebraic relation was established for the upper and lower L(sub 10) life limits as a function of the number of bearings failed for any bearing geometry. There is a fifty percent (50 percent) probability that the resultant bearing life will be less than that calculated. The maximum and minimum variation between the bearing resultant life and the calculated life correlate with the 90-percent confidence limits for a Weibull slope of 1.5. The calculated lives for bearings using a load-life exponent p of 4 for ball bearings and 5 for roller bearings correlated with the Monte Carlo generated bearing lives and the bearing data. STLE life factors for bearing steel and processing provide a reasonable accounting for differences between bearing life data and calculated life. Variations in Weibull slope from the Monte Carlo testing and the bearing data correlated. There was excellent agreement between the percent of individual components failed from the Monte Carlo simulation and that predicted.
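The virtual-testing idea can be sketched in a few lines: sample bearing lives from a two-parameter Weibull distribution with a known L10 and slope, assemble endurance sets, and look at the scatter of the resulting L10 estimates. This is an illustrative reconstruction (a crude order-statistic estimator and arbitrary set size), not the NASA procedure:

```python
import math
import random

def weibull_life(l10, slope, u):
    """Life at failure quantile u for a two-parameter Weibull
    distribution specified by its L10 life and Weibull slope."""
    eta = l10 / (-math.log(0.9)) ** (1.0 / slope)  # characteristic life
    return eta * (-math.log(1.0 - u)) ** (1.0 / slope)

def virtual_l10_estimates(l10_true, slope, bearings_per_set, n_sets, rng):
    """Scatter of L10 estimates from randomly assembled virtual
    endurance sets, using a simple order-statistic estimator."""
    estimates = []
    for _ in range(n_sets):
        lives = sorted(weibull_life(l10_true, slope, rng.random())
                       for _ in range(bearings_per_set))
        idx = max(0, int(0.10 * bearings_per_set) - 1)
        estimates.append(lives[idx])
    return estimates

rng = random.Random(42)
est = virtual_l10_estimates(100.0, 1.5, 30, 340, rng)
```

The wide spread of the estimates around the true L10 is exactly the point of the study: it quantifies how far a measured life can fall from the calculated one for small test sets.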
NASA Technical Reports Server (NTRS)
Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan; Streett, Craig L.; Carpenter, Mark H.
2011-01-01
A combination of parabolized stability equations and secondary instability theory has been applied to a low-speed swept airfoil model with a chord Reynolds number of 7.15 million, with the goals of (i) evaluating this methodology in the context of transition prediction for a known configuration for which roughness based crossflow transition control has been demonstrated under flight conditions and (ii) of analyzing the mechanism of transition delay via the introduction of discrete roughness elements (DRE). Roughness based transition control involves controlled seeding of suitable, subdominant crossflow modes, so as to weaken the growth of naturally occurring, linearly more unstable crossflow modes. Therefore, a synthesis of receptivity, linear and nonlinear growth of stationary crossflow disturbances, and the ensuing development of high frequency secondary instabilities is desirable to understand the experimentally observed transition behavior. With further validation, such higher fidelity prediction methodology could be utilized to assess the potential for crossflow transition control at even higher Reynolds numbers, where experimental data is currently unavailable.
Features generated for computational splice-site prediction correspond to functional elements
Dogan, Rezarta Islamaj; Getoor, Lise; Wilbur, W John; Mount, Stephen M
2007-01-01
Background: Accurate selection of splice sites during the splicing of precursors to messenger RNA requires both relatively well-characterized signals at the splice sites and auxiliary signals in the adjacent exons and introns. We previously described a feature generation algorithm (FGA) that is capable of achieving high classification accuracy on human 3' splice sites. In this paper, we extend the splice-site prediction to 5' splice sites and explore the generated features for biologically meaningful splicing signals.

Results: We present examples from the observed features that correspond to known signals, both core signals (including the branch site and pyrimidine tract) and auxiliary signals (including GGG triplets and exon splicing enhancers). We present evidence that features identified by FGA include splicing signals not found by other methods.

Conclusion: Our generated features capture known biological signals in the expected sequence interval flanking splice sites. The method can be easily applied to other species and to similar classification problems, such as tissue-specific regulatory elements, polyadenylation sites, promoters, etc. PMID:17958908
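One of the auxiliary-signal feature families mentioned, GGG triplets downstream of 5' splice sites, reduces to simple overlapping-substring counting; the 50-nt window below is an arbitrary illustrative choice, not a parameter from the paper:

```python
def ggg_triplet_count(seq, window=50):
    """Count overlapping GGG triplets in the first `window` bases of
    the intron downstream of a 5' splice site (window is arbitrary)."""
    s = seq[:window].upper()
    return sum(1 for i in range(len(s) - 2) if s[i:i + 3] == "GGG")

# A short made-up intronic fragment with two GGG triplets
n = ggg_triplet_count("GTAAGGGTGGGATC")
```

Features like this become one column in the classifier's input matrix, alongside the core splice-site signal scores.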
3D Finite Element Models of Shoulder Muscles for Computing Lines of Actions and Moment Arms
Webb, Joshua D.; Blemker, Silvia S.; Delp, Scott L.
2014-01-01
Accurate representation of musculoskeletal geometry is needed to characterize the function of shoulder muscles. Previous models of shoulder muscles have represented muscle geometry as a collection of line segments, making it difficult to account the large attachment areas, muscle-muscle interactions, and complex muscle fiber trajectories typical of shoulder muscles. To better represent shoulder muscle geometry we developed three-dimensional finite element models of the deltoid and rotator cuff muscles and used the models to examine muscle function. Muscle fiber paths within the muscles were approximated, and moment arms were calculated for two motions: thoracohumeral abduction and internal/external rotation. We found that muscle fiber moment arms varied substantially across each muscle. For example, supraspinatus is considered a weak external rotator, but the three-dimensional model of supraspinatus showed that the anterior fibers provide substantial internal rotation while the posterior fibers act as external rotators. Including the effects of large attachment regions and three-dimensional mechanical interactions of muscle fibers constrains muscle motion, generates more realistic muscle paths, and allows deeper analysis of shoulder muscle function. PMID:22994141
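The moment arms reported here follow the tendon-excursion definition, r = -dL/dθ, which the finite element models evaluate fibre by fibre. A numeric sketch (the 2 cm wrapping cylinder is a made-up example, not shoulder data):

```python
def moment_arm(length_fn, theta, h=1e-5):
    """Tendon-excursion moment arm r = -dL/dtheta, estimated with a
    central difference (theta in radians, lengths in metres)."""
    return -(length_fn(theta + h) - length_fn(theta - h)) / (2.0 * h)

# A fibre wrapping a 2 cm cylinder shortens linearly with joint angle,
# L(theta) = L0 - R*theta, so its moment arm is exactly R = 0.02 m
r = moment_arm(lambda th: 0.25 - 0.02 * th, 0.5)
```

Applying this per fibre is what reveals the sign change across supraspinatus: anterior fibres shorten with internal rotation (positive internal-rotation moment arm) while posterior fibres lengthen.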
TEMP: a computational method for analyzing transposable element polymorphism in populations
Zhuang, Jiali; Wang, Jie; Theurkauf, William; Weng, Zhiping
2014-01-01
Insertions and excisions of transposable elements (TEs) affect both the stability and variability of the genome. Studying the dynamics of transposition at the population level can provide crucial insights into the processes and mechanisms of genome evolution. Pooling genomic materials from multiple individuals followed by high-throughput sequencing is an efficient way of characterizing genomic polymorphisms in a population. Here we describe a novel method named TEMP, specifically designed to detect TE movements present with a wide range of frequencies in a population. By combining the information provided by pair-end reads and split reads, TEMP is able to identify both the presence and absence of TE insertions in genomic DNA sequences derived from heterogeneous samples; accurately estimate the frequencies of transposition events in the population and pinpoint junctions of high frequency transposition events at nucleotide resolution. Simulation data indicate that TEMP outperforms other algorithms such as PoPoolationTE, RetroSeq, VariationHunter and GASVPro. TEMP also performs well on whole-genome human data derived from the 1000 Genomes Project. We applied TEMP to characterize the TE frequencies in a wild Drosophila melanogaster population and study the inheritance patterns of TEs during hybrid dysgenesis. We also identified sequence signatures of TE insertion and possible molecular effects of TE movements, such as altered gene expression and piRNA production. TEMP is freely available at github: https://github.com/JialiUMassWengLab/TEMP.git. PMID:24753423
Development of a Computationally Efficient, High Fidelity, Finite Element Based Hall Thruster Model
NASA Technical Reports Server (NTRS)
Jacobson, David (Technical Monitor); Roy, Subrata
2004-01-01
This report documents the development of a two-dimensional finite element based numerical model for efficient characterization of the Hall thruster plasma dynamics in the framework of a multi-fluid model. The effects of ionization and recombination have been included in the present model. Based on experimental data, a third-order polynomial in electron temperature is used to calculate the ionization rate. The neutral dynamics is included only through the neutral continuity equation in the presence of a uniform neutral flow. The electrons are modeled as magnetized and hot, whereas ions are assumed magnetized and cold. The dynamics of the Hall thruster is also investigated in the presence of plasma-wall interaction. The plasma-wall interaction is a function of wall potential, which in turn is determined by the secondary electron emission and sputtering yield. The effects of secondary electron emission and sputter yield have been considered simultaneously. Simulation results are interpreted in the light of experimental observations and available numerical solutions in the literature.
NASA Technical Reports Server (NTRS)
Britcher, C. P.
1982-01-01
The development of a powerful method of magnetic roll torque generation is essential before construction of a large magnetic suspension and balance system (LMSBS) can be undertaken. Some preliminary computed data concerning a relatively new dc scheme, referred to as the spanwise iron magnet scheme are presented. Computations made using the finite element computer program 'GFUN' indicate that adequate torque is available for at least a first generation LMSBS. Torque capability appears limited principally by current electromagnet technology.
Zhou, Tuan-feng; Zhang, Xiang-hao; Wang, Xin-zhi
2015-02-18
To analyze the biomechanical behavior of a one-piece computer-aided design and computer-aided manufacturing (CAD/CAM) zirconia post and core by three-dimensional finite element analysis, models of three upper central incisors were built by a geometric method: restored with a one-piece CAD/CAM zirconia post and core (group 1), a prefabricated zirconia post with hot-pressed porcelain core (group 2), and a cast gold alloy post and core (group 3). A 100 N load directed vertically along the long axis of the incisor models and a 100 N load directed at a 45° angle to the long axis were used to simulate the stress states of the central incisor during biting and physiological protraction of the mandible. Under vertical loading, for restored teeth without a dentin ferrule, the maximum von Mises stress in the tooth root was lowest in group 1 (11.02 N) and highest in group 2 (13.17 N), with stress decreasing from the upper part of the root downward. The maximum von Mises stresses in the tooth root, post and core all decreased when the restored teeth had a 2.0 mm high dentin ferrule. Under the 45° loading, without a dentin ferrule, the maximum von Mises stress in the post and core was greatest in group 1 (20.45 N) and smallest in group 3 (13.61 N). With a 2.0 mm high dentin ferrule, the tooth root stress decreased; the maximum von Mises stress in the root was greatest in group 3 (14.10 N) and lowest in group 1 (13.38 N). The three-dimensional finite element analysis suggests that the one-piece zirconia post-and-core restoration disperses bite force better than the prefabricated zirconia post or the cast gold alloy post and core, and that the one-piece zirconia post and core helps protect the tooth and keep the restoration intact.
NASA Technical Reports Server (NTRS)
Roske-Hofstrand, Renate J.
1990-01-01
The man-machine interface and its influence on the characteristics of computer displays in automated air traffic is discussed. The graphical presentation of spatial relationships and the problems it poses for air traffic control, and the solution of such problems are addressed. Psychological factors involved in the man-machine interface are stressed.
ERIC Educational Resources Information Center
Wild, Mary
2011-01-01
This study considers in what ways sustained shared thinking between young children aged 5-6 years can be facilitated by working in dyads on a computer-based literacy task. The study considers 107 observational records of 44 children from 6 different schools, in Oxfordshire in the UK, collected over the course of a school year. The study raises…
Mattheos, N; Nattestad, A; Schittek, M; Attström, R
2002-02-01
A questionnaire survey was carried out to investigate the competence and attitude of dental students towards computers. The current study presents the findings deriving from 590 questionnaires collected from 16 European dental schools from 9 countries between October 1998 and October 1999. The results suggest that 60% of students use computers for their education, while 72% have access to the Internet. The overall figures, however, disguise major differences between the various universities. Students in Northern and Western Europe seem to rely mostly on university facilities to access the Internet. The same, however, is not true for students in Greece and Spain, who appear to depend on home computers. Less than half the students have been exposed to some form of computer literacy education in their universities, with the great majority acquiring their competence in other ways. The Information and Communication Technology (ICT) skills of the average dental student, within this limited sample of dental schools, do not facilitate full use of new media available. In addition, if the observed regional differences are valid, there may be an educational and political problem that could intensify inequalities among professionals in the future. To minimize this potential problem, closer cooperation between academic institutions, with sharing of resources and expertise, is recommended.
NASA Astrophysics Data System (ADS)
Kantardžić, I.; Vasiljević, D.; Blažić, L.; Puškar, T.; Tasić, M.
2012-05-01
Mechanical properties of restorative material have an effect on stress distribution in the tooth structure and the restorative material during mastication. The aim of this study was to investigate the influence of restorative materials with different moduli of elasticity on stress distribution in a three-dimensional (3D) solid tooth model. Computed tomography scan data of human maxillary second premolars were used for 3D solid model generation. Four composite resins with moduli of elasticity of 6700, 9500, 14 100 and 21 000 MPa were considered to simulate four different clinical direct restoration types. Each model was subjected to a resultant force of 200 N applied to the occlusal surface, and stress distribution and maximal von Mises stresses were calculated using finite-element analysis. We found that the von Mises stress values and stress distribution in tooth structures did not vary considerably with changes in the modulus of elasticity of the restorative material.
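The von Mises stresses reported by FEA post-processors are computed pointwise from the components of the stress tensor; a minimal version of that standard formula:

```python
import math

def von_mises(s):
    """Von Mises equivalent stress from a symmetric 3x3 Cauchy stress
    tensor [[sxx, sxy, sxz], [sxy, syy, syz], [sxz, syz, szz]] (MPa)."""
    sxx, syy, szz = s[0][0], s[1][1], s[2][2]
    sxy, syz, sxz = s[0][1], s[1][2], s[0][2]
    return math.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 +
                            (szz - sxx) ** 2) +
                     3.0 * (sxy ** 2 + syz ** 2 + sxz ** 2))

# Uniaxial tension of 200 MPa reduces to a von Mises stress of 200 MPa
vm = von_mises([[200.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
```

Because the formula depends only on the local stress state, the finding that stresses barely change with restorative modulus reflects the load path through the tooth, not the post-processing.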
NASA Astrophysics Data System (ADS)
Agarwal, B. N. P.; Srivastava, Shalivahan
2010-07-01
In view of several publications on the application of the Finite Element Method (FEM) to compute the regional gravity anomaly using only 8 nodes on the periphery of a rectangular map, we present an interactive FORTRAN program, FEAODD.FOR, for wider applicability of the technique. A brief description of the theory of FEM is presented for the sake of completeness. The efficacy of the program is demonstrated by analyzing the gravity anomaly over the South Houston salt dome, USA, using two differently oriented rectangular blocks, and over chromite deposits at Camaguey, Cuba. The analyses of the two data sets reveal that the outline of the ore body/structure matches well with the maxima of the residuals. Further, the analyses of the South Houston data reveal that though the broad regional trend remains the same for both blocks, the magnitudes of the residual anomalies differ by approximately 25% of the magnitude obtained in previous studies.
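The 8-peripheral-node construction corresponds to the standard serendipity shape functions of an 8-node quadrilateral: the regional field at any interior point is interpolated from the eight boundary nodal gravity values. A sketch of that interpolation (not FEAODD.FOR itself):

```python
def serendipity8(xi, eta):
    """Shape functions of the 8-node serendipity quadrilateral on
    [-1, 1]^2, ordered corners (-1,-1),(1,-1),(1,1),(-1,1) then
    midsides (0,-1),(1,0),(0,1),(-1,0)."""
    corners = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
    mids = [(0, -1), (1, 0), (0, 1), (-1, 0)]
    n = []
    for xi_i, eta_i in corners:
        n.append(0.25 * (1 + xi * xi_i) * (1 + eta * eta_i)
                 * (xi * xi_i + eta * eta_i - 1))
    for xi_i, eta_i in mids:
        if xi_i == 0:
            n.append(0.5 * (1 - xi ** 2) * (1 + eta * eta_i))
        else:
            n.append(0.5 * (1 + xi * xi_i) * (1 - eta ** 2))
    return n

def regional_anomaly(xi, eta, g_nodes):
    """Regional gravity at (xi, eta) interpolated from the 8 peripheral
    nodal values g_nodes; residual = observed - regional."""
    return sum(ni * gi for ni, gi in zip(serendipity8(xi, eta), g_nodes))
```

Because the element spans the whole map, the interpolant is smooth by construction, which is what makes the eight boundary nodes act as a low-order regional trend.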
NASA Astrophysics Data System (ADS)
Than, Vinh-Du; Tang, Anh-Minh; Roux, Jean-Noël; Pereira, Jean-Michel; Aimedieu, Patrick; Bornert, Michel
2017-06-01
We present an investigation into macroscopic and microscopic behaviors of wet granular soils using the discrete element method (DEM) and the X-ray Computed Tomography (XRCT) observations. The specimens are first prepared in very loose states, with frictional spherical grains in the presence of a small amount of an interstitial liquid. Experimental oedometric tests are carried out with small glass beads, while DEM simulations implement a model of spherical grains joined by menisci. Both in experiments and in simulations, loose configurations with solid fraction as low as 0.30 are prepared under low stress, and undergo a gradual collapse in compression, until the solid fraction of cohesionless bead packs (0.58 to 0.6) is obtained. In the XRCT tests, four 3D tomography images corresponding to different typical stages of the compression curve are used to characterize the microstructure.
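The cohesion that stabilizes such loose packings comes from the menisci: the maximum capillary bridge force between equal spheres in contact is commonly approximated as F = 2πγR cos θ. The numbers below (0.1 mm glass beads, water surface tension) are illustrative, and the paper's DEM force law may differ in detail:

```python
import math

def capillary_force_contact(radius, gamma=0.073, theta=0.0):
    """Maximum capillary attraction (N) between two equal spheres in
    contact, F = 2*pi*gamma*R*cos(theta): a common meniscus
    approximation, not necessarily the paper's exact DEM law."""
    return 2.0 * math.pi * gamma * radius * math.cos(theta)

# For 0.1 mm glass beads in water the bridge force dwarfs the grain
# weight, which is what supports solid fractions as low as 0.30
radius = 1e-4                                   # m
f_cap = capillary_force_contact(radius)
weight = (4.0 / 3.0) * math.pi * radius ** 3 * 2500.0 * 9.81  # N
ratio = f_cap / weight   # a granular Bond-number-like ratio >> 1
```

As the confining stress grows past this bridge force scale, the menisci can no longer prop the structure open and the packing collapses toward the cohesionless solid fraction of 0.58 to 0.6.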
Vogel, Edgar H; Díaz, Claudia A; Ramírez, Jorge A; Jarur, Mary C; Pérez-Acosta, Andrés M; Wagner, Allan R
2007-08-01
Despite the apparent simplicity of Pavlovian conditioning, research on its mechanisms has generated considerable debate, such as the dispute over whether the associated stimuli are coded in an "elementistic" (a compound stimulus is equivalent to the sum of its components) or a "configural" (a compound stimulus is a unique exemplar) fashion. This controversy is evident in the abundant research on the contrasting predictions of elementistic and configural models. Recently, some mixed solutions have been proposed which, although they have the advantages of both approaches, are difficult to evaluate due to their complexity. This paper presents a computer program for conducting simulations of a mixed model (the replaced elements model, REM). Instructions and examples are provided for using the simulator for research and educational purposes.
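The elementistic baseline that mixed models extend is the Rescorla-Wagner rule, in which each element carries its own associative strength and a compound predicts their sum. A minimal sketch of that baseline (not the REM simulator itself, which adds compound-dependent element replacement):

```python
def rescorla_wagner(trials, alpha=0.3, lam_reinforced=1.0):
    """Elementistic (Rescorla-Wagner) learning: on each trial the
    compound's prediction is the sum of its elements' strengths, and
    every present element is updated by the shared prediction error.
    trials: list of (set_of_elements, reinforced_bool)."""
    v = {}
    for elements, reinforced in trials:
        lam = lam_reinforced if reinforced else 0.0
        prediction = sum(v.get(e, 0.0) for e in elements)
        delta = alpha * (lam - prediction)
        for e in elements:
            v[e] = v.get(e, 0.0) + delta
    return v

# A+ then AB- training, a design used to contrast the two coding views
trials = [({"A"}, True)] * 50 + [({"A", "B"}, False)] * 50
v = rescorla_wagner(trials)
```

The elementistic prediction here, that B ends up with a negative (inhibitory) strength, is one of the diagnostics that configural and REM-style models treat differently.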
Jose, Jithin; Willemink, Rene G H; Resink, Steffen; Piras, Daniele; van Hespen, J C G; Slump, Cornelis H; Steenbergen, Wiendelt; van Leeuwen, Ton G; Manohar, Srirang
2011-01-31
We present a 'hybrid' imaging approach which can image both light absorption properties and acoustic transmission properties of an object in a two-dimensional slice using a computed tomography (CT) photoacoustic imager. The ultrasound transmission measurement method uses a strong optical absorber of small cross-section placed in the path of the light illuminating the sample. This absorber, which we call a passive element acts as a source of ultrasound. The interaction of ultrasound with the sample can be measured in transmission, using the same ultrasound detector used for photoacoustics. Such measurements are made at various angles around the sample in a CT approach. Images of the ultrasound propagation parameters, attenuation and speed of sound, can be reconstructed by inversion of a measurement model. We validate the method on specially designed phantoms and biological specimens. The obtained images are quantitative in terms of the shape, size, location, and acoustic properties of the examined heterogeneities.
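The transmission measurement rests on time-of-flight: the passive element's pulse arrives earlier or later depending on the sound speed along the path. A single-ray sketch with hypothetical geometry (the paper instead reconstructs full 2D maps by tomographic inversion of many such projections):

```python
def sound_speed_in_sample(tof_with_sample, tof_reference,
                          c_water=1482.0, thickness=0.01):
    """Speed of sound (m/s) inside a flat sample of known thickness,
    from the time-of-flight change against a water-only reference
    path. Hypothetical single-ray geometry; illustrative only."""
    # Water covers the same distance except across the sample thickness
    t_sample = tof_with_sample - tof_reference + thickness / c_water
    return thickness / t_sample

# Hypothetical 10 cm path: 1 cm tissue-like insert (c = 1540 m/s)
tof_ref = 0.10 / 1482.0
tof = 0.09 / 1482.0 + 0.01 / 1540.0
c_est = sound_speed_in_sample(tof, tof_ref)
```

Attenuation is recovered analogously from amplitude ratios rather than arrival times, using the same passive-element source and detector.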
Finite element techniques in computational time series analysis of turbulent flows
NASA Astrophysics Data System (ADS)
Horenko, I.
2009-04-01
In recent years there has been a considerable increase of interest in the mathematical modeling and analysis of complex systems that undergo transitions between several phases or regimes. Such systems can be found, e.g., in weather forecasting (transitions between weather conditions), climate research (ice ages and warm ages), computational drug design (conformational transitions) and in econometrics (e.g., transitions between different phases of the market). In all cases, the accumulation of sufficiently detailed time series has led to the formation of huge databases, containing enormous but still undiscovered treasures of information. However, the extraction of the essential dynamics and the identification of the phases is usually hindered by the multidimensional nature of the signal, i.e., the information is "hidden" in the time series. Standard filtering approaches (e.g., wavelet-based spectral methods) have in general unfeasible numerical complexity in high dimensions, while other standard methods (e.g., Kalman filters, MVAR, ARCH/GARCH) impose strong assumptions about the type of the underlying dynamics. An approach based on optimization of a specially constructed regularized functional (describing the quality of the data description in terms of a certain number of specified models) will be introduced. Based on this approach, several new adaptive mathematical methods for simultaneous EOF/SSA-like data-based dimension reduction and identification of hidden phases in high-dimensional time series will be presented. The methods exploit the topological structure of the analysed data and do not impose severe assumptions on the underlying dynamics. Special emphasis will be placed on the mathematical assumptions and the numerical cost of the constructed methods. The application of the presented methods will first be demonstrated on a toy example, and the results will be compared with those obtained by standard approaches. The importance of accounting for the mathematical
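As a heavily simplified stand-in for the regime-identification idea above, the sketch below clusters samples of a synthetic one-dimensional signal into K hidden "phases" with plain k-means. The abstract's method adds a regularized functional and EOF/SSA-like dimension reduction; this sketch only illustrates the phase-assignment step, and all data and parameters are made up.

```python
import random

# Toy phase identification: k-means on a 1D time series.
def kmeans_1d(xs, k=2, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(xs, k)              # pick k samples as initial centers
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:                          # assign each sample to nearest center
            j = min(range(k), key=lambda c: (x - centers[c]) ** 2)
            groups[j].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    labels = [min(range(k), key=lambda c: (x - centers[c]) ** 2) for x in xs]
    return centers, labels

# Synthetic signal switching between two regimes (means ~0 and ~5).
series = [0.1, -0.2, 0.05, 5.1, 4.9, 5.2, 0.0, -0.1, 5.0, 4.8]
centers, labels = kmeans_1d(series, k=2)
# the two recovered centers approximate the two regime means
```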
Mitchell, Nathan M; Cutting, Court B; King, Timothy W; Oliker, Aaron; Sifakis, Eftychios D
2016-02-01
This article presents a real-time surgical simulator for teaching three-dimensional local flap concepts. Mass-spring-based simulators are interactive, but they compromise accuracy and realism. Accurate finite element approaches have traditionally been too slow to permit development of a real-time simulator. A new computational formulation of the finite element method has been applied to a simulated surgical environment. The surgical operators of retraction, incision, excision, and suturing are provided for three-dimensional operation on skin sheets and scalp flaps. A history mechanism records a user's surgical sequence. Numerical simulation was accomplished by a single small-form-factor computer attached to eight inexpensive Web-based terminals at a total cost of $2100. A local flaps workshop was held for the plastic surgery residents at the University of Wisconsin hospitals. Various flap designs of Z-plasty, rotation, rhomboid flaps, S-plasty, and related techniques were demonstrated in three dimensions. Angle and incision segment length alteration advantages were demonstrated (e.g., opening the angle of a Z-plasty in a three-dimensional web contracture). These principles were then combined in a scalp flap model demonstrating rotation flaps, dual S-plasty, and the Dufourmentel Mouly quad rhomboid flap procedure to demonstrate optimal distribution of secondary defect closure stresses. A preliminary skin flap simulator has been demonstrated to be an effective teaching platform for the real-time elucidation of local flap principles. Future work will involve adaptation of the system to facial flaps, breast surgery, cleft lip, and other problems in plastic surgery as well as surgery in general.
NASA Astrophysics Data System (ADS)
Shamshuddin, MD.; Anwar Bég, O.; Sunder Ram, M.; Kadir, A.
2017-08-01
Non-Newtonian flows arise in numerous industrial transport processes including materials fabrication systems. Micropolar theory offers an excellent mechanism for exploring the fluid dynamics of new non-Newtonian materials which possess internal microstructure. Magnetic fields may also be used for controlling electrically-conducting polymeric flows. To explore numerical simulation of transport in rheological materials processing, in the current paper, a finite element computational solution is presented for magnetohydrodynamic, incompressible, dissipative, radiative and chemically-reacting micropolar fluid flow, heat and mass transfer adjacent to an inclined porous plate embedded in a saturated homogeneous porous medium. Heat generation/absorption effects are included. Rosseland's diffusion approximation is used to describe the radiative heat flux in the energy equation. A Darcy model is employed to simulate drag effects in the porous medium. The governing transport equations are rendered into non-dimensional form under the assumption of low Reynolds number and also low magnetic Reynolds number. Using a Galerkin formulation with a weighted residual scheme, finite element solutions are presented to the boundary value problem. The influence of plate inclination, Eringen coupling number, radiation-conduction number, heat absorption/generation parameter, chemical reaction parameter, plate moving velocity parameter, magnetic parameter, thermal Grashof number, species (solutal) Grashof number, permeability parameter, and Eckert number on linear velocity, micro-rotation, temperature and concentration profiles is examined. Furthermore, the influence of selected thermo-physical parameters on friction factor, surface heat transfer and mass transfer rate is also tabulated. The finite element solutions are verified with solutions from several limiting cases in the literature. Interesting features in the flow are identified and interpreted.
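The Galerkin weighted-residual step mentioned above can be illustrated on a much simpler boundary value problem. The sketch below assembles linear finite elements for -u'' = f on (0,1) with u(0) = u(1) = 0 and solves the resulting tridiagonal system with the Thomas algorithm; it is a minimal stand-in, not the paper's coupled MHD micropolar system.

```python
# Minimal 1D Galerkin FEM: linear elements for -u'' = f, u(0)=u(1)=0.
# Element assembly yields the tridiagonal stiffness (1/h)*[-1, 2, -1].
def fem_1d_poisson(n, f=lambda x: 1.0):
    h = 1.0 / n
    a = [-1.0 / h] * (n - 1)                      # sub-diagonal
    b = [2.0 / h] * (n - 1)                       # diagonal
    c = [-1.0 / h] * (n - 1)                      # super-diagonal
    d = [h * f((i + 1) * h) for i in range(n - 1)]  # load: f weighted by hat functions
    # Thomas algorithm: forward elimination then back substitution.
    for i in range(1, n - 1):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u                                      # interior nodal values

u = fem_1d_poisson(4)
# For f = 1 the exact solution is u(x) = x(1-x)/2, and linear FEM is
# nodally exact here: u(0.5) = 0.125, u(0.25) = u(0.75) = 0.09375.
```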
A higher-order finite element method for computing the radar cross section of bodies of revolution
NASA Astrophysics Data System (ADS)
Branch, Eric Douglas
2001-12-01
The finite element method (FEM) is used to compute the radar cross section (RCS) of bodies of revolution (BORs). The FEM described here uses scalar basis functions for the φ component of the field and vector basis functions for the transverse component of the field. Higher-order basis functions are used to improve the performance of the FEM code. The mesh is truncated using two methods. The first method is the perfectly matched layer (PML). This method has a number of parameters that must be optimized to obtain good results. Furthermore, the PML must be kept a reasonable distance away from the scatterer, which causes the number of unknowns to be relatively high. To decrease the number of unknowns the iterative absorbing boundary condition (IABC) is proposed. In this method an absorbing boundary condition (ABC) is used as the starting point for the mesh truncation, and then the fields at the mesh truncation are updated by propagating the fields from another surface in the computational domain to the mesh truncation boundary. The IABC allows the mesh truncation to be moved much closer to the scatterer without corrupting the final results. A comparison is given between the results of the PML and the IABC, and it is determined that using higher-order basis functions with the IABC is more efficient in terms of the number of unknowns and the CPU time than the PML.
Plontke, Stefan K.; Siedow, Norbert; Wegener, Raimund; Zenner, Hans-Peter; Salt, Alec N.
2006-01-01
Hypothesis: Cochlear fluid pharmacokinetics can be better represented by three-dimensional (3D) finite-element simulations of drug dispersal. Background: Local drug deliveries to the round window membrane are increasingly being used to treat inner ear disorders. Crucial to the development of safe therapies is knowledge of drug distribution in the inner ear with different delivery methods. Computer simulations allow application protocols and drug delivery systems to be evaluated, and may permit animal studies to be extrapolated to the larger cochlea of the human. Methods: A finite-element 3D model of the cochlea was constructed based on geometric dimensions of the guinea pig cochlea. Drug propagation along and between compartments was described by passive diffusion. To demonstrate the potential value of the model, methylprednisolone distribution in the cochlea was calculated for two clinically relevant application protocols using pharmacokinetic parameters derived from a prior one-dimensional (1D) model. In addition, a simplified geometry was used to compare results from 3D with 1D simulations. Results: For the simplified geometry, calculated concentration profiles with distance were in excellent agreement between the 1D and the 3D models. Different drug delivery strategies produce very different concentration time courses, peak concentrations and basal-apical concentration gradients of drug. In addition, 3D computations demonstrate the existence of substantial gradients across the scalae in the basal turn. Conclusion: The 3D model clearly shows the presence of drug gradients across the basal scalae of guinea pigs, demonstrating that a 3D approach is necessary to predict drug movements accurately across and between scalae with larger cross-sectional areas, such as those of the human. This is the first model to incorporate the volume of the spiral ligament and to calculate diffusion through this structure. Further development of the 3D model will have to incorporate a more
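The 1D passive-diffusion picture that the 3D model is compared against can be sketched with explicit finite differences: dC/dt = D d²C/dx² along the cochlear length, with the drug concentration held fixed at the round-window (basal) end. The diffusivity, grid, and times below are illustrative, not the paper's fitted pharmacokinetic parameters.

```python
# Explicit finite-difference sketch of 1D passive drug diffusion.
def diffuse_1d(n=50, steps=5000, D=1.0, dx=1.0, dt=0.2, c_source=1.0):
    r = D * dt / dx ** 2
    assert r <= 0.5                      # explicit-scheme stability limit
    c = [0.0] * n
    c[0] = c_source                      # round-window end held at source conc.
    for _ in range(steps):
        new = c[:]
        for i in range(1, n - 1):
            new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
        new[0] = c_source                # Dirichlet boundary at the base
        new[-1] = new[-2]                # zero-flux boundary at the apex
        c = new
    return c

profile = diffuse_1d()
# concentration decays monotonically from base toward apex, producing
# the basal-apical gradient the abstract discusses
```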
O'Rourke, Dermot; Martelli, Saulo; Bottema, Murk; Taylor, Mark
2016-12-01
Assessing the sensitivity of a finite-element (FE) model to uncertainties in geometric parameters and material properties is a fundamental step in understanding the reliability of model predictions. However, the computational cost of individual simulations and the large number of required models limits comprehensive quantification of model sensitivity. To quickly assess the sensitivity of an FE model, we built linear and Kriging surrogate models of an FE model of the intact hemipelvis. The percentage of the total sum of squares (%TSS) was used to determine the most influential input parameters and their possible interactions on the median, 95th percentile and maximum equivalent strains. We assessed the surrogate models by comparing their predictions to those of a full factorial design of FE simulations. The Kriging surrogate model accurately predicted all output metrics based on a training set of 30 analyses (R2 = 0.99). There was good agreement between the Kriging surrogate model and the full factorial design in determining the most influential input parameters and interactions. For the median, 95th percentile and maximum equivalent strain, the bone geometry (60%, 52%, and 76%, respectively) was the most influential input parameter. The interactions between bone geometry and cancellous bone modulus (13%) and bone geometry and cortical bone thickness (7%) were also influential terms on the output metrics. This study demonstrates a method with a low time and computational cost to quantify the sensitivity of an FE model. It can be applied to FE models in computational orthopaedic biomechanics in order to understand the reliability of predictions.
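The percentage-of-total-sum-of-squares (%TSS) screening described above can be illustrated on a synthetic two-level full factorial with three inputs. The response function below is made up for illustration; the study used FE-computed strains and also fitted Kriging surrogates, which this sketch does not attempt.

```python
from itertools import product

# ANOVA-style main-effect screening on a full factorial design:
# each input's share of the total sum of squares (%TSS).
def pct_tss(levels, response):
    runs = list(product(*levels))
    y = [response(*r) for r in runs]
    mean = sum(y) / len(y)
    tss = sum((v - mean) ** 2 for v in y)
    shares = []
    for j in range(len(levels)):
        # main-effect sum of squares: spread of the per-level means
        ss = 0.0
        for lv in levels[j]:
            group = [v for r, v in zip(runs, y) if r[j] == lv]
            ss += len(group) * (sum(group) / len(group) - mean) ** 2
        shares.append(100.0 * ss / tss)
    return shares

levels = [(-1, 1)] * 3
# toy response: input 0 dominates, input 1 is weak, input 2 is inert
shares = pct_tss(levels, lambda a, b, c: 3 * a + b)
# shares come out 90%, 10%, 0% (variance splits as 3^2 : 1^2 : 0)
```

Interaction terms (like the geometry-modulus interaction reported above) would be screened the same way, using group means over pairs of levels.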
NASA Astrophysics Data System (ADS)
Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos
2014-09-01
One of the research activities in support of the commercial radioisotope production program is safety research on target FPM (Fission Product Molybdenum) irradiation. FPM targets form a tube made of stainless steel which contains nuclear-grade high-enrichment uranium. The FPM irradiation tube is intended to obtain fission products. Fission products such as Mo-99 are widely used in the form of kits in the medical world. The neutronics problem is solved using first-order perturbation theory derived from the four-group diffusion equation. Mo isotopes have relatively long half-lives, about 3 days (66 hours), so the delivery of radioisotopes to consumer centers and storage is possible, though still limited. The production of this isotope potentially gives significant economic value. The criticality and flux in a multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation, with a large and sparse matrix system. Several parallel algorithms have been developed for the solution of large, sparse matrix systems. In this paper, a successive over-relaxation (SOR) algorithm was implemented for the calculation of reactivity coefficients, which can be done in parallel. Previous works performed reactivity calculations serially with Gauss-Seidel iterations. The parallel method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficients. In this research a computer code was developed to exploit parallel processing to perform the reactivity calculations used in safety analysis. Parallel processing in a multicore computer system allows the calculations to be performed more quickly. This code was applied to the safety-limit calculation of irradiated FPM targets containing highly enriched uranium. The results of the neutronic calculations show that for uranium contents of 1.7676 g and 6.1866 g (× 10^6 cm^-1) in a tube, their delta reactivities are still
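A minimal sketch of the SOR iteration named above, on a small diagonally dominant system rather than the actual multigroup diffusion matrices: with relaxation factor omega = 1 the update reduces to Gauss-Seidel, the method the earlier serial work used. The matrix and right-hand side are illustrative.

```python
# Successive over-relaxation (SOR) for A x = b.
# omega = 1 is Gauss-Seidel; 1 < omega < 2 over-relaxes to accelerate
# convergence for symmetric positive definite systems.
def sor(A, b, omega=1.25, iters=100, x0=None):
    n = len(b)
    x = list(x0) if x0 else [0.0] * n
    for _ in range(iters):
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i][i]
    return x

# Small tridiagonal test system (the same sparsity pattern that
# discretized diffusion operators produce).
A = [[ 4.0, -1.0,  0.0],
     [-1.0,  4.0, -1.0],
     [ 0.0, -1.0,  4.0]]
b = [2.0, 4.0, 10.0]
x = sor(A, b)
# converges to the exact solution x = [1, 2, 3]
```

In a parallel setting the sweep is typically reorganized (e.g., red-black ordering) so that independent unknowns can be updated concurrently; the plain in-order sweep above is the serial form.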
NASA Astrophysics Data System (ADS)
Gassmöller, Rene; Bangerth, Wolfgang
2016-04-01
Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility of the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle-transfer between parallel subdomains utilizing existing communication patterns from the finite element mesh, and the use of established parallel output algorithms like the HDF5 library. Finally, we show some relevant particle application cases, compare our implementation to a
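The bookkeeping at the heart of a particle-in-cell method can be shown in a toy form: particles carry a property, are advected through a 1D periodic mesh, are reassigned to cells after each step, and a cell-averaged projection puts the property back on the mesh. The real implementation described above handles 3D adaptive meshes and parallel particle transfer between subdomains; everything below is an illustrative simplification.

```python
# Toy particle-in-cell step: advect, rebin into cells, project to mesh.
class Particle:
    def __init__(self, x, prop):
        self.x, self.prop = x, prop

def advect(particles, velocity, dt, n_cells, width=1.0):
    cells = [[] for _ in range(n_cells)]
    for p in particles:
        p.x = (p.x + velocity(p.x) * dt) % width        # periodic domain
        # reassign each particle to the cell containing its new position
        cells[min(int(p.x / width * n_cells), n_cells - 1)].append(p)
    # projection: cell average of the carried property (None if empty)
    return [sum(q.prop for q in c) / len(c) if c else None for c in cells]

# Ten particles, one per cell, carrying their index as a tracer property.
parts = [Particle(0.05 + 0.1 * i, prop=float(i)) for i in range(10)]
field = advect(parts, velocity=lambda x: 0.1, dt=1.0, n_cells=10)
# each particle moved one cell to the right (wrapping periodically)
```

The plugin structure described in the abstract would correspond to swapping out the `velocity`, the projection rule, or the particle generation independently, which this monolithic sketch does not attempt.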