On the computational aspects of comminution in discrete element method
NASA Astrophysics Data System (ADS)
Chaudry, Mohsin Ali; Wriggers, Peter
2018-04-01
In this paper, computational aspects of the crushing/comminution of granular materials are addressed. For crushing, a criterion based on the maximum tensile stress is used. Crushing models in the discrete element method (DEM) are prone to problems of mass conservation and reduction of the critical time step. The first problem is addressed by an iterative scheme which, depending on the geometric voids, recovers the mass of a particle. In addition, a global-local framework for the DEM problem is proposed, which tends to alleviate locally unstable particle motion and increases computational efficiency.
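A minimal sketch of what such a crushing check and mass bookkeeping might look like; the stress estimate sigma ~ F_max/d^2, the fragment count, and the fill factor are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def max_tensile_stress(contact_forces, diameter):
    """Characteristic induced tensile stress of a particle under its
    contact loads; sigma ~ F_max / d^2 is a common DEM estimate (assumed
    here, not necessarily the criterion used in the paper)."""
    f_max = max(np.linalg.norm(f) for f in contact_forces)
    return f_max / diameter**2

def should_crush(contact_forces, diameter, tensile_strength):
    return max_tensile_stress(contact_forces, diameter) >= tensile_strength

def fragment_radii(parent_radius, n_fragments=3, fill_factor=0.7):
    """Crude mass bookkeeping: fragments are sized so their total volume
    is a chosen fraction of the parent volume; an iterative growth step
    (not shown) would then recover the remaining mass by expanding the
    fragments into neighbouring geometric voids."""
    v_parent = 4.0 / 3.0 * np.pi * parent_radius**3
    v_each = fill_factor * v_parent / n_fragments
    return [(3.0 * v_each / (4.0 * np.pi)) ** (1.0 / 3.0)] * n_fragments
```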
NASA Astrophysics Data System (ADS)
Moreno, H. A.; Ogden, F. L.; Steinke, R. C.; Alvarez, L. V.
2015-12-01
Triangulated Irregular Networks (TINs) are increasingly popular for terrain representation in high-performance surface and hydrologic modeling because of their ability to capture significant changes in surface form such as topographic summits, slope breaks, ridges, valley floors, pits, and cols. This work presents a methodology for estimating slope, aspect, and the components of the incoming solar radiation using a vectorial approach within a topocentric coordinate system, establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing a unit vector defining the position of the sun at each hour and day of year (DOY). Thus, a dot product determines the radiation flux at each TIN element. Remote shading is computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector. Sky view fractions are computed by a simplified scanning algorithm in prescribed directions and are useful for determining diffuse radiation. Finally, remote radiation scattering is computed from the sky view factor complementary functions for prescribed albedo values of the surrounding terrain, only for significant angles above the horizon. This methodology improves on current algorithms for computing terrain and radiation parameters on TINs in an efficient manner. All terrain features (e.g., slope, aspect, sky view factors, and remote sheltering) can be pre-computed and stored for easy access by a subsequent ground surface or hydrologic simulation.
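The slope/aspect/dot-product step described above is compact enough to sketch; this assumes a local east-north-up topocentric frame and a given unit sun vector, and ignores the shading and diffuse terms:

```python
import numpy as np

def slope_aspect_flux(v0, v1, v2, sun_dir, direct_irradiance):
    """Slope, aspect and direct flux for one TIN facet.
    v0, v1, v2: vertex coordinates in a local topocentric frame
    (x = east, y = north, z = up); sun_dir: unit vector to the sun."""
    v0, v1, v2 = map(np.asarray, (v0, v1, v2))
    n = np.cross(v1 - v0, v2 - v0)
    n /= np.linalg.norm(n)
    if n[2] < 0:                       # enforce an upward-facing normal
        n = -n
    slope = np.arccos(n[2])            # angle from the horizontal plane
    aspect = np.arctan2(n[0], n[1]) % (2 * np.pi)  # clockwise from north
    flux = direct_irradiance * max(0.0, n @ sun_dir)  # 0 if facing away
    return slope, aspect, flux
```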
Books and monographs on finite element technology
NASA Technical Reports Server (NTRS)
Noor, A. K.
1985-01-01
The present paper provides a listing of all of the English books and some of the foreign books on finite element technology, taking into account also a list of the conference proceedings devoted solely to finite elements. The references are divided into categories. Attention is given to fundamentals, mathematical foundations, structural and solid mechanics applications, fluid mechanics applications, other applied science and engineering applications, computer implementation and software systems, computational and modeling aspects, special topics, boundary element methods, proceedings of symposia and conferences on finite element technology, bibliographies, handbooks, and historical accounts.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar
2017-01-01
Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions and also cause numerical instability. Although for certain complex geometries the use of high-aspect-ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where triangular/tetrahedral elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in using such high-aspect-ratio triangular/tetrahedral elements is presented here. As will be shown, the degree of accuracy deterioration of gradient computation involving a triangular element hinges on the value of its shape factor Γ ≡ sin²α₁ + sin²α₂ + sin²α₃, where α₁, α₂ and α₃ are the internal angles of the element. In fact, it is shown that the degree of accuracy deterioration increases monotonically as the value of Γ decreases monotonically from its maximal value 9/4 (attained by an equilateral triangle only) to a value much less than 1 (associated with a highly obtuse triangle). By taking advantage of the fact that a high-aspect-ratio triangle is not necessarily highly obtuse, and in fact can have a shape factor whose value is close to the maximal value 9/4, a potential solution to avoid accuracy deterioration of gradient computation associated with a high-aspect-ratio triangular grid is given. A brief discussion of the extension of the current mathematical framework to the tetrahedral-grid case, along with some practical results of this extension, is also provided. Furthermore, through numerical simulations of practical viscous problems involving high-Reynolds-number flows, the effectiveness of the gradient evaluation procedures within the CESE framework (based on the analysis presented here) in producing accurate and stable results on such high-aspect-ratio meshes is also showcased.
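The shape factor is easy to evaluate directly; a small sketch verifying the 9/4 bound and the distinction between a merely thin triangle and a highly obtuse one (the point coordinates below are illustrative choices):

```python
import numpy as np

def shape_factor(p1, p2, p3):
    """Gamma = sin^2(a1) + sin^2(a2) + sin^2(a3) for a triangle; equals
    9/4 for an equilateral triangle and tends to 0 as it degenerates."""
    pts = [np.asarray(p, float) for p in (p1, p2, p3)]
    gamma = 0.0
    for i in range(3):
        u = pts[(i + 1) % 3] - pts[i]
        v = pts[(i + 2) % 3] - pts[i]
        cross = u[0] * v[1] - u[1] * v[0]          # 2-D cross product
        sin_a = abs(cross) / (np.linalg.norm(u) * np.linalg.norm(v))
        gamma += sin_a**2
    return gamma

# equilateral: 2.25; a thin *right* triangle keeps Gamma near 2 despite
# its high aspect ratio, while an obtuse sliver drives Gamma toward 0
print(shape_factor((0, 0), (1, 0), (0.5, np.sqrt(3) / 2)))  # ~2.25
print(shape_factor((0, 0), (1, 0), (0.0, 0.01)))            # high AR, ~2.0
print(shape_factor((0, 0), (1, 0), (0.5, 0.001)))           # sliver, ~0
```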
1988-05-01
Institute for Advanced Computer Studies and Department of Computer Science, University of Maryland, College Park, MD 20742. Abstract: We discuss some aspects of... We study the performance of CG and PCG by examining their performance for u ∈ (0, 1), for solving the two model problems with an accuracy...
NASA Technical Reports Server (NTRS)
Wang, R.; Demerdash, N. A.
1990-01-01
The effects of finite element grid geometries and associated ill-conditioning were studied in single-medium and multi-media (air-iron) three-dimensional magnetostatic field computation problems. The sensitivities of these 3D field computations to finite element grid geometries were investigated. It was found that in single-medium applications the unconstrained magnetic vector potential curl-curl formulation, in conjunction with first-order finite elements, produces global results that are almost totally insensitive to grid geometries. However, in multi-media (air-iron) applications, first-order finite element results are sensitive to grid geometries and the consequent elemental shape ill-conditioning. These sensitivities were almost totally eliminated by the use of second-order finite elements in the field computation algorithms. Practical examples are given in this paper to demonstrate the aspects mentioned above.
NASA Technical Reports Server (NTRS)
Fallon, D. J.; Thornton, E. A.
1983-01-01
Documentation for the computer program FLUTTER is presented. The theory of aerodynamic instability with thermal prestress is discussed. Theoretical aspects of the finite element matrices required in the aerodynamic instability analysis are also discussed. General organization of the computer program is explained, and instructions are then presented for the execution of the program.
Computer Security Systems Enable Access.
ERIC Educational Resources Information Center
Riggen, Gary
1989-01-01
A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Editor)
1986-01-01
The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.
NASA Astrophysics Data System (ADS)
Moreno, H. A.; Ogden, F. L.; Alvarez, L. V.
2016-12-01
This research work presents a methodology for estimating terrain slope, aspect (slope orientation), and total incoming solar radiation from Triangular Irregular Network (TIN) terrain models. The algorithm accounts for self-shading and cast shadows, sky view fractions for diffuse radiation, remote albedo, and atmospheric backscattering, using a vectorial approach within a topocentric coordinate system and establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing a unit vector defining the position of the sun at each hour and day of the year. Thus, a dot product determines the radiation flux at each TIN element. Cast shadows are computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector, only in the visible horizon range. Sky view fractions are computed by a simplified scanning algorithm from the highest to the lowest triangles along prescribed directions and visible distances, and are useful for determining diffuse radiation. Finally, remote albedo is computed from the sky view fraction complementary functions for prescribed albedo values of the surrounding terrain, only for significant angles above the horizon. The sensitivity of the different radiative components to seasonal changes in weather and surrounding albedo (snow) is tested in a mountainous watershed in Wyoming. This methodology improves on current algorithms for computing terrain and radiation values on triangular-based models in an accurate and efficient manner. All terrain-related features (e.g., slope, aspect, sky view fraction) can be pre-computed and stored for easy access by a subsequent, progressive-in-time numerical simulation.
The Overshoot Phenomenon in Geodynamics Codes
NASA Astrophysics Data System (ADS)
Kommu, R. K.; Heien, E. M.; Kellogg, L. H.; Bangerth, W.; Heister, T.; Studley, E. H.
2013-12-01
The overshoot phenomenon is a common occurrence in numerical software when a continuous function on a finite-dimensional discretized space is used to approximate a discontinuous jump, in temperature or material concentration, for example. The resulting solution overshoots, and undershoots, the discontinuous jump. Numerical simulations play an extremely important role in mantle convection research, both because of the strong temperature and stress dependence of viscosity and because of the inaccessibility of the deep Earth. Under these circumstances, it is essential that mantle convection simulations be extremely accurate and reliable. CitcomS and ASPECT are two finite-element-based mantle convection codes developed and maintained by the Computational Infrastructure for Geodynamics. CitcomS is designed to run on multiple high-performance computing platforms; ASPECT is an adaptive mesh refinement (AMR) code built on the deal.II library that also scales well on various HPC platforms. Both CitcomS and ASPECT exhibit the overshoot phenomenon. One attempt at controlling the overshoot uses the Entropy Viscosity method, which introduces an artificial diffusion term in the energy equation of mantle convection. This artificial diffusion term is small where the temperature field is smooth. We present results from CitcomS and ASPECT that quantify the effect of the Entropy Viscosity method in reducing the overshoot phenomenon. In the discontinuous Galerkin (DG) finite element method, the test functions are continuous within each element but discontinuous across inter-element boundaries, so the solution space in the DG method is discontinuous. FEniCS is a collection of free software tools that automate the solution of differential equations using finite element methods. In this work we also present results from a finite-element mantle convection simulation implemented in FEniCS that investigates the effect of using DG elements in reducing the overshoot problem.
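For context, one common form of the entropy-viscosity diffusion coefficient (after Guermond, Pasquetti and Popov) is sketched below; the constants c_max and c_E and the exact residual definition vary between implementations, so this is a hedged generic form rather than the precise formula used in either code:

```latex
\nu_h\big|_K = \min\!\Big( c_{\max}\, h_K\, \|\mathbf{u}\|_{\infty,K},\;
  c_E\, h_K^2\, \frac{\|r_E(T)\|_{\infty,K}}{\|E(T)-\bar{E}\|_{\infty,\Omega}} \Big),
\qquad
r_E(T) = \partial_t E + \mathbf{u}\cdot\nabla E - \kappa\,\Delta E,
\quad E(T) = \tfrac{1}{2}\,(T-\bar{T})^2 .
```

The diffusion is proportional to the entropy residual r_E, which is large only near under-resolved fronts, so smooth regions of the temperature field are left essentially untouched.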
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar
2017-01-01
Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions. Although for certain complex geometries the use of high-aspect-ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where simplex elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in using such high-aspect-ratio simplex elements is formulated using two different approaches and presented here. Drawing insights from the analysis, a potential solution to avoid that pitfall is also provided as part of this work. Furthermore, through numerical simulations of practical viscous problems involving high-Reynolds-number flows, it is showcased how the gradient evaluation procedures of the CESE framework can be effectively used to produce accurate and stable results on such high-aspect-ratio simplex meshes.
ERIC Educational Resources Information Center
Williams, Fred D.
An adventure game is a role-playing game that usually, but not always, has some fantasy aspect. The role-playing aspect is the key element because players become personally involved when they assume a role, and defeat becomes personal and less acceptable than in other types of games. Computer-based role-playing games are extremely popular because…
MARC calculations for the second WIPP structural benchmark problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, H.S.
1981-05-01
This report describes calculations made with the MARC structural finite element code for the second WIPP structural benchmark problem. Specific aspects of problem implementation such as element choice, slip line modeling, creep law implementation, and thermal-mechanical coupling are discussed in detail. Also included are the computational results specified in the benchmark problem formulation.
NASA Astrophysics Data System (ADS)
Fischer, R.; Müller, R.
1989-08-01
It is shown that nonlinear optical devices are the most promising elements for an optical digital supercomputer. The basic characteristics of various developed nonlinear elements are presented, including bistable Fabry-Perot etalons, interference filters, self-electrooptic effect devices, quantum-well devices utilizing transitions between the lowest electron states in the conduction band of GaAs, etc.
Advances and trends in the development of computational models for tires
NASA Technical Reports Server (NTRS)
Noor, A. K.; Tanner, J. A.
1985-01-01
Status and some recent developments of computational models for tires are summarized. Discussion focuses on a number of aspects of tire modeling and analysis including: tire materials and their characterization; evolution of tire models; characteristics of effective finite element models for analyzing tires; analysis needs for tires; and impact of the advances made in finite element technology, computational algorithms, and new computing systems on tire modeling and analysis. An initial set of benchmark problems has been proposed in concert with the U.S. tire industry. Extensive sets of experimental data will be collected for these problems and used for evaluating and validating different tire models. Also, the new Aircraft Landing Dynamics Facility (ALDF) at NASA Langley Research Center is described.
Composite mechanics for engine structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1987-01-01
Recent research activities and accomplishments at Lewis Research Center on composite mechanics for engine structures are summarized. The activities focused mainly on developing procedures for the computational simulation of composite intrinsic and structural behavior. The computational simulation encompasses all aspects of composite mechanics, advanced three-dimensional finite-element methods, damage tolerance, composite structural and dynamic response, and structural tailoring and optimization.
Composite mechanics for engine structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1989-01-01
Recent research activities and accomplishments at Lewis Research Center on composite mechanics for engine structures are summarized. The activities focused mainly on developing procedures for the computational simulation of composite intrinsic and structural behavior. The computational simulation encompasses all aspects of composite mechanics, advanced three-dimensional finite-element methods, damage tolerance, composite structural and dynamic response, and structural tailoring and optimization.
An Anisotropic A posteriori Error Estimator for CFD
NASA Astrophysics Data System (ADS)
Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando
In this article, a robust anisotropic adaptive algorithm is presented for solving the compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The association includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several available choices (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Buenos Aires, Argentina), giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case shocks, using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high-aspect-ratio elements (stretching). To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly adapted meshes in few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed number of elements.
Optimal mapping of irregular finite element domains to parallel processors
NASA Technical Reports Server (NTRS)
Flower, J.; Otto, S.; Salama, M.
1987-01-01
Mapping the solution domain of n finite elements onto N subdomains that may be processed in parallel by N processors is optimal if the subdomain decomposition results in a well-balanced workload distribution among the processors. The problem is discussed in the context of irregular finite element domains as an important aspect of the efficient utilization of the capabilities of emerging multiprocessor computers. Finding the optimal mapping is an intractable combinatorial optimization problem, for which a satisfactory approximate solution is obtained here by analogy to a method used in statistical mechanics for simulating the annealing process in solids. The simulated annealing analogy and algorithm are described, and numerical results are given for mapping an irregular two-dimensional finite element domain containing a singularity onto the Hypercube computer.
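A minimal simulated-annealing sketch of this mapping idea; the cost weights, move set, and cooling schedule are illustrative assumptions, not the paper's tuned algorithm:

```python
import math
import random

def sa_partition(elements, adjacency, n_parts, n_steps=20000, t0=1.0, cooling=0.9995):
    """Map finite elements onto processors by annealing a cost that
    penalizes cut edges (communication) and load imbalance.
    elements: list of element ids; adjacency: list of (a, b) pairs of
    elements that share a boundary."""
    part = {e: random.randrange(n_parts) for e in elements}

    def cost(p):
        cut = sum(1 for a, b in adjacency if p[a] != p[b])
        loads = [0] * n_parts
        for e in elements:
            loads[p[e]] += 1
        return cut + 0.5 * (max(loads) - min(loads))  # 0.5: assumed weight

    c, t = cost(part), t0
    for _ in range(n_steps):
        e = random.choice(elements)
        old = part[e]
        part[e] = random.randrange(n_parts)      # propose a random move
        c_new = cost(part)
        if c_new <= c or random.random() < math.exp((c - c_new) / t):
            c = c_new                            # accept (possibly uphill)
        else:
            part[e] = old                        # reject, restore
        t *= cooling                             # cool the temperature
    return part
```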
A boundary element alternating method for two-dimensional mixed-mode fracture problems
NASA Technical Reports Server (NTRS)
Raju, I. S.; Krishnamurthy, T.
1992-01-01
A boundary element alternating method, denoted herein as BEAM, is presented for two dimensional fracture problems. This is an iterative method which alternates between two solutions. An analytical solution for arbitrary polynomial normal and tangential pressure distributions applied to the crack faces of an embedded crack in an infinite plate is used as the fundamental solution in the alternating method. A boundary element method for an uncracked finite plate is the second solution. For problems of edge cracks a technique of utilizing finite elements with BEAM is presented to overcome the inherent singularity in boundary element stress calculation near the boundaries. Several computational aspects that make the algorithm efficient are presented. Finally, the BEAM is applied to a variety of two dimensional crack problems with different configurations and loadings to assess the validity of the method. The method gives accurate stress intensity factors with minimal computing effort.
Analytical modeling of helicopter static and dynamic induced velocity in GRASP
NASA Technical Reports Server (NTRS)
Kunz, Donald L.; Hodges, Dewey H.
1987-01-01
The methodology used by the General Rotorcraft Aeromechanical Stability Program (GRASP) to model the characteristics of the flow through a helicopter rotor in hovering or axial flight is described. Since the induced flow plays a significant role in determining the aeroelastic properties of rotorcraft, the computation of the induced flow is an important aspect of the program. Because of the combined finite-element/multibody methodology used as the basis for GRASP, the implementation of induced velocity calculations presented an unusual challenge to the developers. To preserve the modelling flexibility and generality of the code, it was necessary to depart from the traditional methods of computing the induced velocity. This is accomplished by calculating the actuator disc contributions to the rotor loads in a separate element called the air mass element, and then performing the calculations of the aerodynamic forces on individual blade elements within the aeroelastic beam element.
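As background for the actuator-disc role of the air mass element, the classical momentum-theory estimate of hover induced velocity is the textbook starting point; this is context only, not GRASP's actual implementation:

```python
import math

def hover_induced_velocity(thrust, rho, rotor_radius):
    """Classical actuator-disc (momentum theory) estimate of the uniform
    induced velocity in hover: v_i = sqrt(T / (2 * rho * A)). GRASP's air
    mass element couples this kind of actuator-disc physics to the
    finite-element/multibody equations instead of using it directly."""
    area = math.pi * rotor_radius**2
    return math.sqrt(thrust / (2.0 * rho * area))

# e.g. a 30 kN rotorcraft with an 8 m rotor at sea level
print(hover_induced_velocity(30e3, 1.225, 8.0))  # ~7.8 m/s
```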
A method for determining spiral-bevel gear tooth geometry for finite element analysis
NASA Technical Reports Server (NTRS)
Handschuh, Robert F.; Litvin, Faydor L.
1991-01-01
An analytical method was developed to determine gear tooth surface coordinates of face-milled spiral bevel gears. The method uses the basic gear design parameters in conjunction with the kinematical aspects of spiral bevel gear manufacturing machinery. A computer program, SURFACE, was developed. The computer program calculates the surface coordinates and outputs 3-D model data that can be used for finite element analysis. Development of the modeling method and an example case are presented. This analysis method could also find application for gear inspection and near-net-shape gear forging die design.
NASA Astrophysics Data System (ADS)
Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.
2017-12-01
In this article, the problem of supporting scientific projects throughout their lifecycle in a computer center is considered in all aspects of support. The Configuration Management system plays a connecting role in processes related to the provision and support of computer center services. In view of the strong integration of IT infrastructure components through virtualization, control of the infrastructure becomes even more critical to the support of research projects, which imposes higher requirements on the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is reviewed and the development of the corresponding elements of the system is described in the present paper.
Combining elements of information fusion and knowledge-based systems to support situation analysis
NASA Astrophysics Data System (ADS)
Roy, Jean
2006-04-01
Situation awareness has emerged as an important concept in military and public security environments. Situation analysis is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of situation awareness for the decision maker(s). It is well established that information fusion, defined as the process of utilizing one or more information sources over time to assemble a representation of aspects of interest in an environment, is a key enabler to meeting the demanding requirements of situation analysis. However, although information fusion is important, developing and adopting a knowledge-centric view of situation analysis should provide a more holistic perspective of this process. This is based on the notion that awareness ultimately has to do with having knowledge of something. Moreover, not all of the situation elements and relationships of interest are directly observable. Those aspects of interest that cannot be observed must be inferred, i.e., derived as a conclusion from facts or premises, or by reasoning from evidence. This paper discusses aspects of knowledge, and how it can be acquired from experts, formally represented and stored in knowledge bases to be exploited by computer programs, and validated. Knowledge engineering is reviewed, with emphasis given to cognitive and ontological engineering. Facets of reasoning are discussed, along with inferencing methods that can be used in computer applications. Finally, combining elements of information fusion and knowledge-based systems, an overall approach and framework for the building of situation analysis support systems is presented.
Neurobiomimetic constructs for intelligent unmanned systems and robotics
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Shah, Danelle C.; DeAngelus, Marianne A.
2014-06-01
This paper discusses a paradigm we refer to as neurobiomimetic, which involves emulations of brain neuroanatomy and neurobiology aspects and processes. Neurobiomimetic constructs include rudimentary and down-scaled computational representations of brain regions, sub-regions, and synaptic connectivity. Many different instances of neurobiomimetic constructs are possible, depending on various aspects such as the initial conditions of synaptic connectivity, number of neuron elements in regions, connectivity specifics, and more, and we refer to these instances as `animats'. While downscaled for computational feasibility, the animats are very large constructs; the animats implemented in this work contain over 47,000 neuron elements and over 720,000 synaptic connections. The paper outlines aspects of the animats implemented, spatial memory and learning cognitive task, the virtual-reality environment constructed to study the animat performing that task, and discussion of results. In a broad sense, we argue that the neurobiomimetic paradigm pursued in this work constitutes a particularly promising path to artificial cognition and intelligent unmanned systems. Biological brains readily cope with challenges of real-life tasks that consistently prove beyond even the most sophisticated algorithmic approaches known. At the cross-over point of neuroscience, cognitive science and computer science, paradigms such as the one pursued in this work aim to mimic the mechanisms of biological brains and as such, we argue, may lead to machines with abilities closer to those of biological species.
Computational Aspects of the h, p and h-p Versions of the Finite Element Method.
1987-03-01
We study the dependence of the accuracy, measured by the error ||e||, on the computational effort for the h, p, and h-p versions of the finite element method. The work references a paper presented at the First World Congress on Computational Mechanics (1987) and Szabó, B.A.: PROBE: Theoretical Manual, Noetic Technologies.
Computational Toxicology as Implemented by the US EPA ...
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce and to contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the Toxicity of Chemicals.
Probabilistic Finite Elements (PFEM) structural dynamics and fracture mechanics
NASA Technical Reports Server (NTRS)
Liu, Wing-Kam; Belytschko, Ted; Mani, A.; Besterfield, G.
1989-01-01
The purpose of this work is to develop computationally efficient methodologies for assessing the effects of randomness in loads, material properties, and other aspects of a problem by a finite element analysis. The resulting group of methods is called probabilistic finite elements (PFEM). The overall objective of this work is to develop methodologies whereby the lifetime of a component can be predicted, accounting for the variability in the material and geometry of the component, the loads, and other aspects of the environment; and the range of response expected in a particular scenario can be presented to the analyst in addition to the response itself. Emphasis has been placed on methods which are not statistical in character; that is, they do not involve Monte Carlo simulations. The reason for this choice of direction is that Monte Carlo simulations of complex nonlinear response require a tremendous amount of computation. The focus of efforts so far has been on nonlinear structural dynamics. However, in the continuation of this project, emphasis will be shifted to probabilistic fracture mechanics so that the effect of randomness in crack geometry and material properties can be studied interactively with the effect of random load and environment.
High precision computing with charge domain devices and a pseudo-spectral method therefor
NASA Technical Reports Server (NTRS)
Barhen, Jacob (Inventor); Toomarian, Nikzad (Inventor); Fijany, Amir (Inventor); Zak, Michail (Inventor)
1997-01-01
The present invention enhances the bit resolution of a CCD/CID MVM processor by storing each bit of each matrix element as a separate CCD charge packet. The bits of each input vector are separately multiplied by each bit of each matrix element in massive parallelism and the resulting products are combined appropriately to synthesize the correct product. In another aspect of the invention, such arrays are employed in a pseudo-spectral method of the invention, in which partial differential equations are solved by expressing each derivative analytically as matrices, and the state function is updated at each computation cycle by multiplying it by the matrices. The matrices are treated as synaptic arrays of a neural network and the state function vector elements are treated as neurons. In a further aspect of the invention, moving target detection is performed by driving the soliton equation with a vector of detector outputs. The neural architecture consists of two synaptic arrays corresponding to the two differential terms of the soliton-equation and an adder connected to the output thereof and to the output of the detector array to drive the soliton equation.
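The bit-level recombination described in the first aspect can be illustrated in a few lines; a toy integer version of the same arithmetic (the actual device does this with charge packets in massive parallelism):

```python
def bitplane_product(a, b, n_bits=8):
    """Synthesize a*b from single-bit partial products, mirroring how the
    patent stores each matrix-element bit as a separate charge packet and
    recombines the bitwise products with the right powers of two."""
    a_bits = [(a >> i) & 1 for i in range(n_bits)]
    b_bits = [(b >> j) & 1 for j in range(n_bits)]
    total = 0
    for i, ai in enumerate(a_bits):
        for j, bj in enumerate(b_bits):
            total += (ai & bj) << (i + j)   # weight each 1-bit product
    return total

assert bitplane_product(37, 113) == 37 * 113
```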
How to determine spiral bevel gear tooth geometry for finite element analysis
NASA Technical Reports Server (NTRS)
Handschuh, Robert F.; Litvin, Faydor L.
1991-01-01
An analytical method was developed to determine gear tooth surface coordinates of face milled spiral bevel gears. The method combines the basic gear design parameters with the kinematical aspects for spiral bevel gear manufacturing. A computer program was developed to calculate the surface coordinates. From this data a 3-D model for finite element analysis can be determined. Development of the modeling method and an example case are presented.
Liver CT image processing: a short introduction of the technical elements.
Masutani, Y; Uozumi, K; Akahane, Masaaki; Ohtomo, Kuni
2006-05-01
In this paper, we describe the technical aspects of image analysis for liver diagnosis and treatment, including the state-of-the-art of liver image analysis and its applications. After discussion on modalities for liver image analysis, various technical elements for liver image analysis such as registration, segmentation, modeling, and computer-assisted detection are covered with examples performed with clinical data sets. Perspective in the imaging technologies is also reviewed and discussed.
3-D modeling of ductile tearing using finite elements: Computational aspects and techniques
NASA Astrophysics Data System (ADS)
Gullerud, Arne Stewart
This research focuses on the development and application of computational tools to perform large-scale, 3-D modeling of ductile tearing in engineering components under quasi-static to mild loading rates. Two standard models for ductile tearing---the computational cell methodology and crack growth controlled by the crack tip opening angle (CTOA)---are described and their 3-D implementations are explored. For the computational cell methodology, quantification of the effects of several numerical issues---computational load step size, procedures for force release after cell deletion, and the porosity for cell deletion---enables construction of computational algorithms to remove the dependence of predicted crack growth on these issues. This work also describes two extensions of the CTOA approach into 3-D: a general 3-D method and a constant front technique. Analyses compare the characteristics of the extensions, and a validation study explores the ability of the constant front extension to predict crack growth in thin aluminum test specimens over a range of specimen geometries, absolute sizes, and levels of out-of-plane constraint. To provide a computational framework suitable for the solution of these problems, this work also describes the parallel implementation of a nonlinear, implicit finite element code. The implementation employs an explicit message-passing approach using the MPI standard to maintain portability, a domain decomposition of element data to provide parallel execution, and a master-worker organization of the computational processes to enhance future extensibility. A linear preconditioned conjugate gradient (LPCG) solver serves as the core of the solution process. The parallel LPCG solver utilizes an element-by-element (EBE) structure of the computations to permit a dual-level decomposition of the element data: domain decomposition of the mesh provides efficient coarse-grain parallel execution, while decomposition of the domains into blocks of similar elements (same type, constitutive model, etc.) provides fine-grain parallel computation on each processor. A major focus of the LPCG solver is a new implementation of the Hughes-Winget element-by-element (HW) preconditioner. The implementation employs a weighted dependency graph combined with a new coloring algorithm to provide load-balanced scheduling for the preconditioner and overlapped communication/computation. This approach enables efficient parallel application of the HW preconditioner for arbitrary unstructured meshes.
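A plain greedy colouring over the element-sharing graph conveys the scheduling idea; the weighting and load balancing of the dissertation's actual algorithm are omitted, so treat this as a simplified sketch:

```python
from collections import defaultdict

def greedy_color(n_elements, shared_node_pairs):
    """Colour an element dependency graph so that elements sharing a node
    get different colours; all elements of one colour can then apply the
    EBE preconditioner concurrently without write conflicts.
    shared_node_pairs: iterable of (a, b) element ids that share a node."""
    adj = defaultdict(set)
    for a, b in shared_node_pairs:
        adj[a].add(b)
        adj[b].add(a)
    color = {}
    for e in range(n_elements):            # fixed order; a weighted,
        used = {color[n] for n in adj[e] if n in color}  # degree-ordered
        c = 0                              # pass would balance better
        while c in used:
            c += 1
        color[e] = c                       # smallest colour not in use
    return color
```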
Finite-element modelling of multilayer X-ray optics.
Cheng, Xianchao; Zhang, Lin
2017-05-01
Multilayer optical elements for hard X-rays are an attractive alternative to crystals whenever high photon flux and moderate energy resolution are required. Prediction of the temperature, strain and stress distribution in the multilayer optics is essential in designing the cooling scheme and optimizing geometrical parameters for multilayer optics. The finite-element analysis (FEA) model of the multilayer optics is a well established tool for doing so. Multilayers used in X-ray optics typically consist of hundreds of periods of two types of materials. The thickness of one period is a few nanometers. Most multilayers are coated on silicon substrates of typical size 60 mm × 60 mm × 100-300 mm. The high aspect ratio between the size of the optics and the thickness of the multilayer (10^7) can lead to a huge number of elements for the finite-element model. For instance, meshing by the size of the layers will require more than 10^16 elements, which is an impossible task for present-day computers. Conversely, meshing by the size of the substrate will produce a too high element shape ratio (element geometry width/height > 10^6), which causes low solution accuracy; and the number of elements is still very large (10^6). In this work, by use of ANSYS layer-functioned elements, a thermal-structural FEA model has been implemented for multilayer X-ray optics. The possible number of layers that can be computed by presently available computers is increased considerably.
Finite-element modelling of multilayer X-ray optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Xianchao; Zhang, Lin
Multilayer optical elements for hard X-rays are an attractive alternative to crystals whenever high photon flux and moderate energy resolution are required. Prediction of the temperature, strain and stress distribution in the multilayer optics is essential in designing the cooling scheme and optimizing geometrical parameters for multilayer optics. The finite-element analysis (FEA) model of the multilayer optics is a well established tool for doing so. Multilayers used in X-ray optics typically consist of hundreds of periods of two types of materials. The thickness of one period is a few nanometers. Most multilayers are coated on silicon substrates of typical size 60 mm × 60 mm × 100-300 mm. The high aspect ratio between the size of the optics and the thickness of the multilayer (10^7) can lead to a huge number of elements for the finite-element model. For instance, meshing by the size of the layers will require more than 10^16 elements, which is an impossible task for present-day computers. Conversely, meshing by the size of the substrate will produce a too high element shape ratio (element geometry width/height > 10^6), which causes low solution accuracy; and the number of elements is still very large (10^6). In this work, by use of ANSYS layer-functioned elements, a thermal-structural FEA model has been implemented for multilayer X-ray optics. The possible number of layers that can be computed by presently available computers is increased considerably.
Time-related patient data retrieval for the case studies from the pharmacogenomics research network
Zhu, Qian; Tao, Cui; Ding, Ying; Chute, Christopher G.
2012-01-01
There are many question-based data elements from the pharmacogenomics research network (PGRN) studies, and many of them contain temporal information. Semantically representing these elements so that they are machine-processable is challenging for the following reasons: (1) the designers of these studies usually do not have knowledge of computer modeling and query languages, so the original data elements are usually represented in spreadsheets in human languages; and (2) the time aspects in these data elements can be too complex to be represented faithfully in a machine-understandable way. In this paper, we introduce our efforts on representing these data elements using semantic web technologies. We have developed an ontology, CNTRO, for representing clinical events and their temporal relations in the Web Ontology Language (OWL). Here we use CNTRO to represent the time aspects in the data elements. We have evaluated 720 time-related data elements from PGRN studies. We adapted and extended the knowledge representation requirements of EliXR-TIME to categorize our data elements. A CNTRO-based SPARQL query builder has been developed to let users customize their own SPARQL queries for each knowledge representation requirement. The SPARQL query builder has been evaluated with a simulated EHR triple store to ensure its functionality. PMID:23076712
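For a flavor of the kind of query such a builder could emit, a hedged Python/rdflib sketch; the cntro: property names and the data file are hypothetical placeholders, not the real ontology's IRIs:

```python
from rdflib import Graph

# Hypothetical CNTRO-style terms (cntro:after, cntro:hasTime, cntro:label)
# used for illustration only; the actual ontology's terms may differ.
g = Graph()
g.parse("pgrn_events.ttl", format="turtle")   # assumed local RDF data file

# "events that occur after a given anchor event, with their timestamps"
q = """
PREFIX cntro: <http://example.org/cntro#>
SELECT ?event ?time WHERE {
    ?event cntro:after ?anchor ;
           cntro:hasTime ?time .
    ?anchor cntro:label "baseline visit" .
}
"""
for row in g.query(q):
    print(row.event, row.time)
```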
Time-related patient data retrieval for the case studies from the pharmacogenomics research network.
Zhu, Qian; Tao, Cui; Ding, Ying; Chute, Christopher G
2012-11-01
There are many question-based data elements from the pharmacogenomics research network (PGRN) studies, and many of them contain temporal information. Semantically representing these elements so that they are machine-processable is challenging for the following reasons: (1) the designers of these studies usually do not have knowledge of computer modeling and query languages, so the original data elements are usually represented in spreadsheets in human languages; and (2) the time aspects in these data elements can be too complex to be represented faithfully in a machine-understandable way. In this paper, we introduce our efforts on representing these data elements using semantic web technologies. We have developed an ontology, CNTRO, for representing clinical events and their temporal relations in the Web Ontology Language (OWL). Here we use CNTRO to represent the time aspects in the data elements. We have evaluated 720 time-related data elements from PGRN studies. We adapted and extended the knowledge representation requirements of EliXR-TIME to categorize our data elements. A CNTRO-based SPARQL query builder has been developed to let users customize their own SPARQL queries for each knowledge representation requirement. The SPARQL query builder has been evaluated with a simulated EHR triple store to ensure its functionality.
Ferraro, Mauro; Auricchio, Ferdinando; Boatti, Elisa; Scalet, Giulia; Conti, Michele; Morganti, Simone; Reali, Alessandro
2015-01-01
Computer-based simulations are nowadays widely exploited for the prediction of the mechanical behavior of different biomedical devices. In this context, structural finite element analyses (FEA) are currently the preferred computational tool to evaluate the stent response under bending. This work aims at developing a computational framework based on linear and higher-order FEA to evaluate the flexibility of self-expandable carotid artery stents. In particular, numerical simulations involving large deformations and inelastic shape memory alloy constitutive modeling are performed. The results suggest that higher-order FEA allows accurately representing the computational domain and obtaining a better approximation of the solution with a greatly reduced number of degrees of freedom with respect to linear FEA. Moreover, when buckling phenomena occur, higher-order FEA shows a superior capability of reproducing the nonlinear local effects related to buckling. PMID:26184329
Finite elements: Theory and application
NASA Technical Reports Server (NTRS)
Dwoyer, D. L. (Editor); Hussaini, M. Y. (Editor); Voigt, R. G. (Editor)
1988-01-01
Recent advances in FEM techniques and applications are discussed in reviews and reports presented at the ICASE/LaRC workshop held in Hampton, VA in July 1986. Topics addressed include FEM approaches for partial differential equations, mixed FEMs, singular FEMs, FEMs for hyperbolic systems, iterative methods for elliptic finite-element equations on general meshes, mathematical aspects of FEMs for incompressible viscous flows, and gradient weighted moving finite elements in two dimensions. Consideration is given to adaptive flux-corrected FEM transport techniques for CFD, mixed and singular finite elements and the field BEM, p and h-p versions of the FEM, transient analysis methods in computational dynamics, and FEMs for integrated flow/thermal/structural analysis.
The h-p Version of the Finite Element Method with Quasiuniform Meshes.
1986-05-01
(Noetic Technologies, St. Louis). The theoretical aspects have been studied only recently; the first theoretical paper appeared in 1981 (see [6]...). By the mapping approach, the results are also valid for curvilinear elements. In the applications section we study computations performed with a computer program called PROBE [20], [22], developed by Noetic Technologies Corporation, St. Louis.
Design of microstrip components by computer
NASA Technical Reports Server (NTRS)
Cisco, T. C.
1972-01-01
A number of computer programs are presented for use in the synthesis of microwave components in microstrip geometries. The programs compute the electrical and dimensional parameters required to synthesize couplers, filters, circulators, transformers, power splitters, diode switches, multipliers, diode attenuators and phase shifters. Additional programs are included to analyze and optimize cascaded transmission lines and lumped element networks, to analyze and synthesize Chebyshev and Butterworth filter prototypes, and to compute mixer intermodulation products. The programs are written in FORTRAN and the emphasis of the study is placed on the use of these programs and not on the theoretical aspects of the structures.
Inelastic strain analogy for piecewise linear computation of creep residues in built-up structures
NASA Technical Reports Server (NTRS)
Jenkins, Jerald M.
1987-01-01
An analogy between inelastic strains caused by temperature and those caused by creep is presented in terms of isotropic elasticity. It is shown how the theoretical aspects can be blended with existing finite-element computer programs to extract a piecewise linear solution. The creep effect is determined by using the thermal stress computational approach, with appropriate alterations made to the thermal expansion of the individual elements. The overall transient solution is achieved by consecutive piecewise linear iterations. The total residue caused by creep is obtained by accumulating creep residues for each iteration and then resubmitting the total residues for each element as an equivalent input. A typical creep law is tested for incremental time convergence. The results indicate that the approach is practical, with a valid indication of the extent of creep after approximately 20 hr of incremental time. The general analogy between body forces and inelastic strain gradients is discussed with respect to how an inelastic problem can be worked as an elastic problem.
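A scalar, one-element sketch of the piecewise-linear bookkeeping described above; the power-law creep rate and its constants are hypothetical stand-ins for "a typical creep law", and the elastic relaxation is reduced to a single modulus:

```python
def creep_residues(stress0, modulus, n_steps, dt, a=1e-20, n=4.0):
    """Accumulate creep residues over piecewise-linear time increments:
    each step converts the creep strain increment (assumed power law,
    eps_dot = a * sigma**n) into an equivalent 'thermal expansion' input
    for the next elastic solve, and the residues are summed."""
    stress = stress0
    total_residue = 0.0
    for _ in range(n_steps):
        eps_creep = a * stress**n * dt   # creep increment for this step
        total_residue += eps_creep       # accumulate residue, resubmit
        stress -= modulus * eps_creep    # elastic response relaxes stress
    return stress, total_residue

# e.g. 20 hr in 1 hr increments at 200 MPa with E = 200 GPa (illustrative)
print(creep_residues(200e6, 200e9, 20, 3600.0))
```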
NASA Technical Reports Server (NTRS)
Mehrotra, S. C.; Lan, C. E.
1978-01-01
The necessary information for using a computer program to predict distributed and total aerodynamic characteristics for low-aspect-ratio wings with partial leading-edge separation is presented. The flow is assumed to be steady and inviscid. The wing boundary condition is formulated by the Quasi-Vortex-Lattice method. The leading-edge separated vortices are represented by discrete free vortex elements which are aligned with the local velocity vector at their midpoints to satisfy the force-free condition. The wake behind the trailing edge is also force-free. The flow tangency boundary condition is satisfied on the wing, including the leading and trailing edges. The program is restricted to delta wings with zero thickness and no camber. It is written in FORTRAN and runs on a CDC 6600 computer.
Patient-specific finite element modeling of bones.
Poelert, Sander; Valstar, Edward; Weinans, Harrie; Zadpoor, Amir A
2013-04-01
Finite element modeling is an engineering tool for structural analysis that has been used for many years to assess the relationship between load transfer and bone morphology and to optimize the design and fixation of orthopedic implants. Due to recent developments in finite element model generation, for example, improved computed tomography imaging quality, improved segmentation algorithms, and faster computers, the accuracy of finite element modeling has increased vastly and finite element models simulating the anatomy and properties of an individual patient can be constructed. Such so-called patient-specific finite element models are potentially valuable tools for orthopedic surgeons in fracture risk assessment or pre- and intraoperative planning of implant placement. The aim of this article is to provide a critical overview of current themes in patient-specific finite element modeling of bones. In addition, the state-of-the-art in patient-specific modeling of bones is compared with the requirements for a clinically applicable patient-specific finite element method, and judgment is passed on the feasibility of application of patient-specific finite element modeling as a part of clinical orthopedic routine. It is concluded that further development in certain aspects of patient-specific finite element modeling are needed before finite element modeling can be used as a routine clinical tool.
Teacher Recruitment (Part 1 of a series). Spotlight: Updating Our Agendas.
ERIC Educational Resources Information Center
Smith, Cheryl
2002-01-01
Describes the teacher shortage and details characteristics of the current generation of potential teachers for private schools, including their work-to-live perspective, independence, and reliance on computers and communication technologies. Asserts that community outreach should be an essential element of recruitment efforts. Outlines aspects of…
Multiphysics Thrust Chamber Modeling for Nuclear Thermal Propulsion
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Cheng, Gary; Chen, Yen-Sen
2006-01-01
The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation. A two-pronged approach is employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of heat transfer on thrust performance. Preliminary results on both aspects are presented.
The Elastic Behaviour of Sintered Metallic Fibre Networks: A Finite Element Study by Beam Theory
Bosbach, Wolfram A.
2015-01-01
Background: The finite element method has complemented research in the field of network mechanics in the past years in numerous studies of various materials. Numerical predictions and the planning efficiency of experimental procedures are two of the motivations for these numerical studies. The widespread availability of high-performance computing facilities has enabled the simulation of sufficiently large systems. Objectives and Motivation: In the present study, finite element models were built for sintered, metallic fibre networks and validated by previously published experimental stiffness measurements. The validated models were the basis for predictions about so far unknown properties. Materials and Methods: The finite element models were built by transferring previously published skeletons of fibre networks into finite element models. Beam theory was applied as a simplification method. Results and Conclusions: The obtained material stiffness is not a constant but rather a function of variables such as sample size and boundary conditions. Beam theory offers an efficient finite element method for the simulated fibre networks. The experimental results can be approximated by the simulated systems. Two worthwhile aspects for future work will be the influence of size and shape and the mechanical interaction with matrix materials. PMID:26569603
Mobile Genetic Elements: In Silico, In Vitro, In Vivo
Arkhipova, Irina R.; Rice, Phoebe A.
2016-01-01
Mobile genetic elements (MGEs), also called transposable elements (TEs), represent universal components of most genomes and are intimately involved in nearly all aspects of genome organization, function, and evolution. However, there is currently a gap between fast-paced TE discovery in silico, stimulated by exponential growth of comparative genomic studies, and a limited number of experimental models amenable to more traditional in vitro and in vivo studies of structural, mechanistic, and regulatory properties of diverse MGEs. Experimental and computational scientists came together to bridge this gap at a recent conference, “Mobile Genetic Elements: in silico, in vitro, in vivo,” held at the Marine Biological Laboratory (MBL) in Woods Hole, MA, USA. PMID:26822117
ERIC Educational Resources Information Center
Durrett, John; Trezona, Judi
1982-01-01
Discusses physiological and psychological aspects of color. Includes guidelines for using color effectively, especially in the development of computer programs. Indicates that if applied with its limitations and requirements in mind, color can be a powerful manipulator of attention, memory, and understanding. (Author/JN)
Simulation analysis of air flow and turbulence statistics in a rib grit roughened duct.
Vogiatzis, I I; Denizopoulou, A C; Ntinas, G K; Fragos, V P
2014-01-01
The implementation of variable artificial roughness patterns on a surface is an effective technique to enhance the rate of heat transfer to fluid flow in the ducts of solar air heaters. The different geometries of roughness elements investigated have demonstrated the pivotal role that vortices and the associated turbulence play in the heat transfer characteristics of solar air heater ducts by increasing the convective heat transfer coefficient. In this paper we investigate the two-dimensional, turbulent, unsteady flow around rectangular ribs of variable aspect ratios by directly solving the transient Navier-Stokes and continuity equations using the finite element method. Flow characteristics and several aspects of turbulent flow are presented and discussed, including velocity components and turbulence statistics. The results reveal the impact that different rib lengths have on the computed mean quantities and turbulence statistics of the flow. The computed turbulence parameters show a clear tendency to diminish downstream with increasing rib length. Furthermore, the applied numerical method is capable of capturing small-scale flow structures resulting from the direct solution of the Navier-Stokes and continuity equations.
[Computers in biomedical research: I. Analysis of bioelectrical signals].
Vivaldi, E A; Maldonado, P
2001-08-01
A personal computer equipped with an analog-to-digital conversion card is able to input, store and display signals of biomedical interest. These signals can additionally be submitted to ad-hoc software for analysis and diagnosis. Data acquisition is based on the sampling of a signal at a given rate and amplitude resolution. The automation of signal processing involves syntactic aspects (data transduction, conditioning and reduction) and semantic aspects (feature extraction to describe and characterize the signal, and diagnostic classification). The analytical approach that is at the basis of computer programming allows for the successful resolution of apparently complex tasks. Two basic principles involved are the definition of simple fundamental functions that are then iterated, and the modular subdivision of tasks. These two principles are illustrated, respectively, by presenting the algorithm that detects relevant elements for the analysis of a polysomnogram, and the task flow in systems that automate electrocardiographic reports.
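The sampling-rate and amplitude-resolution step is easy to make concrete; a toy front end standing in for an A/D conversion card (the rates and bit depths below are arbitrary example values):

```python
import numpy as np

def acquire(signal, t_end, fs, n_bits, v_range):
    """Sample an analog signal (a Python function of time, in volts) at
    rate fs and quantize to n_bits over +/- v_range, as an
    analog-to-digital conversion card would."""
    t = np.arange(0.0, t_end, 1.0 / fs)    # sampling instants
    x = signal(t)
    levels = 2 ** n_bits
    step = 2.0 * v_range / levels          # quantization step size
    codes = np.clip(np.round(x / step), -levels // 2, levels // 2 - 1)
    return t, codes * step                 # quantized sample values

# a 2 Hz sine sampled for 1 s at 100 Hz with 12-bit resolution
t, x = acquire(lambda t: np.sin(2 * np.pi * 2 * t), 1.0, 100.0, 12, 1.0)
```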
Probabilistic finite elements for fatigue and fracture analysis
NASA Astrophysics Data System (ADS)
Belytschko, Ted; Liu, Wing Kam
1993-04-01
An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using the Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on the stochastic boundary element method (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that the initial defect is a critical parameter.
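For readers unfamiliar with reliability indices, a minimal first-order (mean-value) sketch is given below. It is not the authors' PFEM implementation, and the limit-state numbers are invented for illustration; in PFEM the gradient would come from perturbation-based finite element solves:

import numpy as np
from scipy.stats import norm

def first_order_reliability(g_mean, g_grad, x_cov):
    """First-order (mean-value) reliability estimate for a limit state g(X).

    g_mean : g evaluated at the mean of the random variables
    g_grad : gradient of g at the mean (e.g. from perturbed FE solves)
    x_cov  : covariance matrix of the random variables
    """
    g_std = np.sqrt(g_grad @ x_cov @ g_grad)  # first-order variance propagation
    beta = g_mean / g_std                     # reliability index
    return beta, norm.cdf(-beta)              # corresponding failure probability

# Toy numbers: margin between critical and current crack size.
beta, pf = first_order_reliability(2.5, np.array([0.8, -1.2]),
                                   np.diag([0.25, 0.09]))
print(beta, pf)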
Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox
NASA Astrophysics Data System (ADS)
Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas
In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is put on the performance evaluation of several aspects, with a particular emphasis on parallel efficiency. The performance evaluation is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, stressing the particular interest for clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, to the polynomial degree, and to communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speed-up on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.
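A time-prediction model of the kind used in such reviews can be illustrated with a toy three-term cost: a serial part, a perfectly parallel part, and a communication term that grows with core count. The coefficients below are invented, not the paper's calibrated parameters:

import math

def predicted_time(p, t_serial, t_parallel, t_comm):
    """Toy execution-time model on p cores: serial part, perfectly
    parallel part, and a logarithmically growing communication term."""
    return t_serial + t_parallel / p + t_comm * math.log2(p)

def parallel_efficiency(p, **model):
    """Speed-up divided by core count, from the same toy model."""
    return predicted_time(1, **model) / (p * predicted_time(p, **model))

for p in (1, 64, 1024, 8192):
    print(p, round(parallel_efficiency(p, t_serial=1.0,
                                       t_parallel=5000.0, t_comm=0.05), 3))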
Three-dimensional structural analysis using interactive graphics
NASA Technical Reports Server (NTRS)
Biffle, J.; Sumlin, H. A.
1975-01-01
The application of computer interactive graphics to three-dimensional structural analysis was described, with emphasis on the following aspects: (1) structural analysis, and (2) generation and checking of input data and examination of the large volume of output data (stresses, displacements, velocities, accelerations). Handling of three-dimensional input processing with a special MESH3D computer program was explained. Similarly, a special code, PLTZ, may be used to perform all the needed tasks for output processing from a finite element code. Illustrative examples were presented.
NASA Astrophysics Data System (ADS)
Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang
2015-05-01
Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Due to the absorbing effect, the light scattering properties of particles with absorption differ from those without absorption. Simply shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large, complex-shaped particles has been reported. In this paper, the Surface Integral Equation (SIE) method with the Multilevel Fast Multipole Algorithm (MLFMA) is applied to study the scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangular patches to model the whole surface of the particle, hence computational resource needs increase much more slowly with the particle size parameter than in volume-discretized methods. To further improve its capability, the MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed-memory computer platform. Without loss of generality, we choose the computation of the scattering matrix elements of absorbing dust particles as an example. A comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation method (DDA) for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex-shaped particles, scattering by an asymmetric Chebyshev particle with a size parameter larger than 600, a complex refractive index m = 1.555 + 0.004i, and different orientations is studied.
The TeachScheme! Project: Computing and Programming for Every Student
ERIC Educational Resources Information Center
Felleisen, Matthias; Findler, Robert Bruce; Flatt, Matthew; Krishnamurthi, Shriram
2004-01-01
The TeachScheme! Project aims to reform three aspects of introductory programming courses in secondary schools. First, we use a design method that asks students to develop programs in a stepwise fashion such that each step produces a well-specified intermediate product. Second, we use an entire series of sublanguages, not just one. Each element of…
ERIC Educational Resources Information Center
Stroup, Walter M.; Hills, Thomas; Carmona, Guadalupe
2011-01-01
This paper summarizes an approach to helping future educators to engage with key issues related to the application of measurement-related statistics to learning and teaching, especially in the contexts of science, mathematics, technology and engineering (STEM) education. The approach we outline has two major elements. First, students are asked to…
Accuracy of Three Dimensional Solid Finite Elements
NASA Technical Reports Server (NTRS)
Case, W. R.; Vandegrift, R. E.
1984-01-01
The results of a study to determine the accuracy of the three-dimensional solid elements available in NASTRAN for predicting displacements are presented. Of particular interest in the study is determining how to effectively use solid elements in analyzing thick optical mirrors, as might exist in a large telescope. Surface deformations due to thermal and gravity loading can be significant contributors to the determination of the overall optical quality of a telescope. The study investigates most of the solid elements currently available in either COSMIC or MSC NASTRAN. Error bounds as a function of mesh refinement and element aspect ratio are addressed. It is shown that the MSC solid elements are, in general, more accurate than their COSMIC NASTRAN counterparts due to the specialized numerical integration used. In addition, the MSC elements appear to be more economical to use on the DEC VAX 11/780 computer.
Accuracy of Gradient Reconstruction on Grids with High Aspect Ratio
NASA Technical Reports Server (NTRS)
Thomas, James
2008-01-01
Gradient approximation methods commonly used in unstructured-grid finite-volume schemes intended for solutions of high Reynolds number flow equations are studied comprehensively. The accuracy of gradients within cells and within faces is evaluated systematically for both node-centered and cell-centered formulations. Computational and analytical evaluations are made on a series of high-aspect-ratio grids with different primal elements, including quadrilateral, triangular, and mixed element grids, with and without random perturbations to the mesh. Both rectangular and cylindrical geometries are considered; the latter serves to study the effects of geometric curvature. The study shows that the accuracy of gradient reconstruction on high-aspect-ratio grids is determined by a combination of the grid and the solution. The contributors to the error are identified and approaches to reduce errors are given, including the addition of higher-order terms in the direction of larger mesh spacing. A parameter GAMMA characterizing accuracy on curved high-aspect-ratio grids is discussed and an approximate-mapped-least-square method using a commonly-available distance function is presented; the method provides accurate gradient reconstruction on general grids. The study is intended to be a reference guide accompanying the construction of accurate and efficient methods for high Reynolds number applications.
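The following sketch shows a plain unweighted least-squares gradient reconstruction on a high-aspect-ratio stencil, the baseline against which mapped or weighted variants such as the paper's approximate-mapped-least-squares method can be understood. The stencil geometry and the linear test field are illustrative assumptions:

import numpy as np

def ls_gradient(xc, uc, xn, un):
    """Unweighted least-squares gradient of u at cell centre xc from
    neighbour centres xn (k x 2) and values un (k,). Mapped/weighted
    variants modify the rows of A before the solve."""
    A = xn - xc                      # displacement vectors to neighbours
    b = un - uc                      # value differences
    grad, *_ = np.linalg.lstsq(A, b, rcond=None)
    return grad

# High-aspect-ratio stencil: dx = 1, dy = 1e-3, linear field u = 2x + 3y.
xc, uc = np.array([0.0, 0.0]), 0.0
xn = np.array([[1, 0], [-1, 0], [0, 1e-3], [0, -1e-3]], dtype=float)
un = 2 * xn[:, 0] + 3 * xn[:, 1]
print(ls_gradient(xc, uc, xn, un))   # recovers [2, 3] for a linear field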
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Owen, Jeffrey E.
1988-01-01
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code, since the need for a high-level-language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.
Micromagnetics on high-performance workstation and mobile computational platforms
NASA Astrophysics Data System (ADS)
Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.
2015-05-01
The feasibility of using high-performance desktop and embedded mobile computational platforms for micromagnetic simulations is presented, including a multi-core Intel central processing unit, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be combined into low-power computing clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dag, Serkan; Yildirim, Bora; Sabuncuoglu, Baris
The objective of this study is to develop crack growth analysis methods for functionally graded materials (FGMs) subjected to mode I cyclic loading. The study presents finite-element-based computational procedures for both two- and three-dimensional problems to examine fatigue crack growth in functionally graded materials. The developed methods allow the computation of crack length and the generation of the crack front profile for a graded medium subjected to fluctuating stresses. The results, presented for an elliptical crack embedded in a functionally graded medium, illustrate the competing effects of ellipse aspect ratio and material property gradation on fatigue crack growth behavior.
How Gamers Manage Aggression: Situating Skills in Collaborative Computer Games
ERIC Educational Resources Information Center
Bennerstedt, Ulrika; Ivarsson, Jonas; Linderoth, Jonas
2012-01-01
In the discussion on what players learn from digital games, there are two major camps in clear opposition to each other. As one side picks up on negative elements found in games the other side focuses on positive aspects. While the agendas differ, the basic arguments still depart from a shared logic: that engagement in game-related activities…
Conversational high resolution mass spectrographic data reduction
NASA Technical Reports Server (NTRS)
Romiez, M. P.
1973-01-01
A FORTRAN 4 program is described which reduces the data obtained from a high resolution mass spectrograph. The program (1) calculates an accurate mass for each line on the photoplate, and (2) assigns elemental compositions to each accurate mass. The program is intended for use in a time-shared computing environment and makes use of the conversational aspects of time-sharing operating systems.
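The composition-assignment step the abstract mentions can be sketched as a bounded brute-force search over element counts against monoisotopic masses. The Python below (the original was FORTRAN 4) uses an illustrative CHNO table, count bounds, and tolerance, not the program's actual rules:

import itertools

# Monoisotopic masses (u) for a small element set; a real program would
# use a fuller table plus chemical plausibility rules.
MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def compositions(target_mass, tol=0.003, max_counts=(30, 60, 5, 10)):
    """Brute-force CHNO compositions whose mass lies within tol of target."""
    hits = []
    ranges = [range(n + 1) for n in max_counts]
    for c, h, n, o in itertools.product(*ranges):
        m = c * MASS["C"] + h * MASS["H"] + n * MASS["N"] + o * MASS["O"]
        if abs(m - target_mass) <= tol:
            hits.append((c, h, n, o, m))
    return hits

print(compositions(122.036779))  # C7H6O2 (benzoic acid) is among the hits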
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Lesoinne, Michel
1993-01-01
Most of the recently proposed computational methods for solving partial differential equations on multiprocessor architectures stem from the 'divide and conquer' paradigm and involve some form of domain decomposition. For those methods which also require grids of points or patches of elements, it is often necessary to explicitly partition the underlying mesh, especially when working with local memory parallel processors. In this paper, a family of cost-effective algorithms for the automatic partitioning of arbitrary two- and three-dimensional finite element and finite difference meshes is presented and discussed in view of a domain decomposed solution procedure and parallel processing. The influence of the algorithmic aspects of a solution method (implicit/explicit computations), and the architectural specifics of a multiprocessor (SIMD/MIMD, startup/transmission time), on the design of a mesh partitioning algorithm are discussed. The impact of the partitioning strategy on load balancing, operation count, operator conditioning, rate of convergence and processor mapping is also addressed. Finally, the proposed mesh decomposition algorithms are demonstrated with realistic examples of finite element, finite volume, and finite difference meshes associated with the parallel solution of solid and fluid mechanics problems on the iPSC/2 and iPSC/860 multiprocessors.
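One inexpensive geometric partitioner in the spirit of the family discussed is recursive coordinate bisection; the sketch below is illustrative only, not one of the paper's specific algorithms. It splits along the longest bounding-box axis until the requested number of balanced subdomains remains:

import numpy as np

def rcb(points, ids, depth):
    """Recursive coordinate bisection: split along the longest axis of
    the bounding box until 2**depth subdomains remain."""
    if depth == 0:
        return [ids]
    axis = np.argmax(np.ptp(points, axis=0))      # longest bounding-box axis
    order = np.argsort(points[:, axis])
    half = len(ids) // 2
    lo, hi = order[:half], order[half:]
    return (rcb(points[lo], ids[lo], depth - 1) +
            rcb(points[hi], ids[hi], depth - 1))

pts = np.random.default_rng(1).random((1000, 2))
parts = rcb(pts, np.arange(len(pts)), depth=3)    # 8 balanced subdomains
print([len(p) for p in parts])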
A physics-motivated Centroidal Voronoi Particle domain decomposition method
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-04-01
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
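The Lloyd iteration at the core of the CVT step can be sketched compactly. The version below operates on a point sample of the domain and is illustrative only; the paper couples it with Voronoi Particle dynamics and a tailored equation of state, which are not reproduced here:

import numpy as np

def lloyd(points, n_parts, iters=50, seed=0):
    """Lloyd iteration toward a Centroidal Voronoi Tessellation: assign
    points to the nearest generator, then move each generator to the
    centroid of its cell. Each sweep does not increase the CVT energy."""
    rng = np.random.default_rng(seed)
    gens = points[rng.choice(len(points), n_parts, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - gens[None, :, :], axis=2)
        label = d.argmin(axis=1)                  # nearest-generator assignment
        for k in range(n_parts):
            cell = points[label == k]
            if len(cell):
                gens[k] = cell.mean(axis=0)       # move to cell centroid
    return gens, label

pts = np.random.default_rng(2).random((2000, 2))
gens, label = lloyd(pts, n_parts=8)
print(np.bincount(label))                          # sizes of the 8 subdomains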
Computational Toxicology at the US EPA
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, EPA is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in America’s air, water, and hazardous-waste sites. The ORD Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the EPA Science to Achieve Results (STAR) program. Key intramural projects of the CTRP include digitizing legacy toxicity testing information into the toxicity reference database (ToxRefDB), predicting toxicity (ToxCast™) and exposure (ExpoCast™), and creating virtual liver (v-Liver™) and virtual embryo (v-Embryo™) systems models. The models and underlying data are being made publicly available.
Interfacing HTCondor-CE with OpenStack
NASA Astrophysics Data System (ADS)
Bockelman, B.; Caballero Bejar, J.; Hover, J.
2017-10-01
Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface the external Grid users with local batch systems. These new Compute Elements allow for better handling of jobs requirements and a more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world is lacking diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join the technologies, allowing users to interact with computing sites through one of the standard Computing Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system will re-route, in a transparent way, end user jobs into dynamically-launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site. This will allow cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.
NASA Technical Reports Server (NTRS)
Kempel, Leo C.
1994-01-01
The Finite Element-Boundary Integral (FE-BI) technique was used to analyze the scattering and radiation properties of cavity-backed patch antennas recessed in a metallic groundplane. A program, CAVITY3D, was written and found to yield accurate results for large arrays without the usual high memory and computational demand associated with competing formulations. Recently, the FE-BI approach was extended to cavity-backed antennas recessed in an infinite, metallic circular cylinder. EXCALIBUR is a computer program written in the Radiation Laboratory of the University of Michigan which implements this formulation. This user manual gives a brief introduction to EXCALIBUR and some hints as to its proper use. As with all computational electromagnetics programs (especially finite element programs), skilled use and best performance are only obtained through experience. However, several important aspects of the program such as portability, geometry generation, interpretation of results, and custom modification are addressed.
A computational model of the cognitive impact of decorative elements on the perception of suspense
NASA Astrophysics Data System (ADS)
Delatorre, Pablo; León, Carlos; Gervás, Pablo; Palomo-Duarte, Manuel
2017-10-01
Suspense is a key narrative issue in terms of emotional gratification, influencing the way in which the audience experiences a story. Virtually all narrative media use suspense as a strategy for reader engagement, regardless of technology or genre. Being such an important narrative component, computational creativity has tackled suspense in a number of automatic storytelling systems. These systems are mainly based on narrative theories and generally lack a cognitive approach involving the study of empathy or the emotional impact of the environment. With this idea in mind, this paper reports on a computational model of the influence of decorative elements on suspense. It has been developed as part of a more general proposal for plot generation based on cognitive aspects. In order to test and parameterise the model, an evaluation based on textual stories and an evaluation based on a 3D virtual environment were run. In both cases, results suggest a direct influence of the emotional perception of decorative objects on the suspense of a scene.
Multiphysics Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen
2006-01-01
The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a hypothetical solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics methodology. Formulations for heat transfer in solids and porous media were implemented and anchored. A two-pronged approach was employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of hydrogen dissociation and recombination on heat transfer and thrust performance. The formulations and preliminary results on both aspects are presented.
Generic element processor (application to nonlinear analysis)
NASA Technical Reports Server (NTRS)
Stanley, Gary
1989-01-01
The focus here is on one aspect of the Computational Structural Mechanics (CSM) Testbed: finite element technology. The approach involves a Generic Element Processor: a command-driven, database-oriented software shell that facilitates introduction of new elements into the testbed. This shell features an element-independent corotational capability that upgrades linear elements to geometrically nonlinear analysis, and corrects the rigid-body errors that plague many contemporary plate and shell elements. Specific elements that have been implemented in the Testbed via this mechanism include the Assumed Natural-Coordinate Strain (ANS) shell elements, developed with Professor K. C. Park (University of Colorado, Boulder), a new class of curved hybrid shell elements, developed by Dr. David Kang of LPARL (formerly a student of Professor T. Pian), other shell and solid hybrid elements developed by NASA personnel, and recently a repackaged version of the workhorse shell element used in the traditional STAGS nonlinear shell analysis code. The presentation covers: (1) user and developer interfaces to the generic element processor, (2) an explanation of the built-in corotational option, (3) a description of some of the shell-elements currently implemented, and (4) application to sample nonlinear shell postbuckling problems.
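The built-in corotational option rests on separating rigid-body rotation from deformation. A hedged sketch of that separation via polar decomposition of a deformation gradient is shown below; it illustrates the idea only and is not the Testbed's element-independent procedure:

import numpy as np
from scipy.linalg import polar

def corotational_frame(F):
    """Extract the rotation R from a deformation gradient F via the polar
    decomposition F = R U. Filtering R out of the motion leaves the small
    deformational part U, the essence of a corotational upgrade of a
    linear element."""
    R, U = polar(F)
    return R, U

F = np.array([[0.80, -0.61],
              [0.59,  0.82]])        # large rotation, small strain
R, U = corotational_frame(F)
print(np.round(U, 3))                # near-identity stretch tensor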
NASA Technical Reports Server (NTRS)
Tessler, Alexander; Gherlone, Marco; Versino, Daniele; Di Sciuva, Marco
2012-01-01
This paper reviews the theoretical foundation and computational mechanics aspects of the recently developed shear-deformation theory, called the Refined Zigzag Theory (RZT). The theory is based on a multi-scale formalism in which an equivalent single-layer plate theory is refined with a robust set of zigzag local layer displacements that are free of the usual deficiencies found in common plate theories with zigzag kinematics. In the RZT, first-order shear-deformation plate theory is used as the equivalent single-layer plate theory, which represents the overall response characteristics. Local piecewise-linear zigzag displacements are used to provide corrections to these overall response characteristics that are associated with the plate heterogeneity and the relative stiffnesses of the layers. The theory does not rely on shear correction factors and is equally accurate for homogeneous, laminated composite, and sandwich beams and plates. Regardless of the number of material layers, the theory maintains only seven kinematic unknowns that describe the membrane, bending, and transverse shear plate-deformation modes. Derived from the virtual work principle, RZT is well-suited for developing computationally efficient, C0-continuous finite elements; formulations of several RZT-based elements are highlighted. The theory and its finite elements provide a unified and reliable computational platform for the analysis and design of high-performance load-bearing aerospace structures.
Zerbini, Talita; da Silva, Luiz Fernando Ferraz; Ferro, Antonio Carlos Gonçalves; Kay, Fernando Uliana; Junior, Edson Amaro; Pasqualucci, Carlos Augusto Gonçalves; do Nascimento Saldiva, Paulo Hilario
2014-01-01
OBJECTIVE: The aim of the present work is to analyze the differences and similarities between the elements of a conventional autopsy and images obtained from postmortem computed tomography in a case of a homicide stab wound. METHOD: Comparison between the findings of different methods: autopsy and postmortem computed tomography. RESULTS: In some aspects, autopsy is still superior to imaging, especially in relation to external examination and the description of lesion vitality. However, the findings of gas embolism, pneumothorax and pulmonary emphysema and the relationship between the internal path of the instrument of aggression and the entry wound are better demonstrated by postmortem computed tomography. CONCLUSIONS: Although multislice computed tomography has greater accuracy than autopsy, we believe that the conventional autopsy method is fundamental for providing evidence in criminal investigations. PMID:25518020
The Montage architecture for grid-enabled science processing of large, distributed datasets
NASA Technical Reports Server (NTRS)
Jacob, Joseph C.; Katz, Daniel S .; Prince, Thomas; Berriman, Bruce G.; Good, John C.; Laity, Anastasia C.; Deelman, Ewa; Singh, Gurmeet; Su, Mei-Hui
2004-01-01
Montage is an Earth Science Technology Office (ESTO) Computational Technologies (CT) Round III Grand Challenge investigation to deploy a portable, compute-intensive, custom astronomical image mosaicking service for the National Virtual Observatory (NVO). Although Montage is developing a compute- and data-intensive service for the astronomy community, we are also helping to address a problem that spans both Earth and Space science, namely how to efficiently access and process multi-terabyte, distributed datasets. In both communities, the datasets are massive, and are stored in distributed archives that are, in most cases, remote from the available computational resources. Therefore, state-of-the-art computational grid technologies are a key element of the Montage portal architecture. This paper describes the aspects of the Montage design that are applicable to both the Earth and Space science communities.
Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.
Performance of an anisotropic Allman/DKT 3-node thin triangular flat shell element
NASA Astrophysics Data System (ADS)
Ertas, A.; Krafcik, J. T.; Ekwaro-Osire, S.
1992-05-01
A simple, explicit formulation of the stiffness matrix for an anisotropic, 3-node, thin triangular flat shell element in global coordinates is presented. An Allman triangle (AT) is used for membrane stiffness. The membrane stiffness matrix is explicitly derived by applying an Allman transformation to a Felippa 6-node linear strain triangle (LST). Bending stiffness is incorporated by the use of a discrete Kirchhoff triangle (DKT) bending element. Stiffness terms resulting from anisotropic membrane-bending coupling are included by integrating, in area coordinates, the membrane and bending strain-displacement matrices. Using the aforementioned approach, the objective of this study is to develop and test the performance of a practical 3-node flat shell element that could be used in plate problems with unsymmetrically stacked composite laminates. The performance of the latter element is tested on plates of varying aspect ratios. The developed 3-node shell element should simplify the programming task and have the potential of reducing the computational time.
NASA Astrophysics Data System (ADS)
Roy, Jean; Breton, Richard; Paradis, Stephane
2001-08-01
Situation Awareness (SAW) is essential for commanders to conduct decision-making (DM) activities. Situation Analysis (SA) is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of SAW for the decision maker. Operational trends in warfare put the situation analysis process under pressure. This emphasizes the need for a real-time computer-based Situation analysis Support System (SASS) to aid commanders in achieving the appropriate situation awareness, thereby supporting their response to actual or anticipated threats. Data fusion is clearly a key enabler for SA and a SASS. Since data fusion is used for SA in support of dynamic human decision-making, the exploration of SA concepts and the design of data fusion techniques must take into account human factors in order to ensure a cognitive fit of the fusion system with the decision maker. Indeed, the tight integration of the human element with the SA technology is essential. Regarding these issues, this paper provides a description of CODSI (Command Decision Support Interface), an operational-like human-machine interface prototype for investigations in computer-based SA and command decision support. With CODSI, one objective was to apply recent developments in SA theory and information display technology to the problem of enhancing SAW quality. It thus provides a capability to adequately convey tactical information to command decision makers. It also supports the study of human-computer interactions for SA, and methodologies for SAW measurement.
Advances in Integrated Computational Materials Engineering "ICME"
NASA Astrophysics Data System (ADS)
Hirsch, Jürgen
The methods of Integrated Computational Materials Engineering that were developed and successfully applied for Aluminium have been constantly improved. The main aspects and recent advances of integrated material and process modeling are simulations of material properties, such as strength and forming properties, and of the specific microstructure evolution during processing (rolling, extrusion, annealing) under the influence of material constitution and process variations, through the production process down to the final application. Examples are discussed for the through-process simulation of microstructures and related properties of Aluminium sheet, including DC ingot casting, pre-heating and homogenization, hot and cold rolling, and final annealing. New results are included on the simulation of solution annealing and age hardening of 6xxx alloys for automotive applications. Physically based quantitative descriptions and computer-assisted evaluation methods are new ICME approaches for integrating simulation tools into customer applications as well, such as heat-affected zones in the welding of age-hardening alloys. Approaches for estimating the effect of specific elements, increasingly relevant as recycling volumes grow even for high-end Aluminium products, are also discussed; these are of special interest to the Aluminium-producing industries.
Wang, Monan; Zhang, Kai; Yang, Ning
2018-04-09
To help doctors decide on treatment from the standpoint of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture oriented to clinical application. The whole system encompasses three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module includes parametric modeling of the bone, parametric modeling of the fracture face, parametric modeling of the fixation screws and their positions, and the input and transmission of model parameters. The finite element mechanical analysis module includes grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing operation. The post-processing module includes extraction and display of batch processing results, image generation for batch runs, operation of the optimization program, and display of the optimization results. The system implements the complete workflow from input of fracture parameters to output of the optimal fixation plan according to a specific patient's real fracture parameters and the optimization rules, which demonstrates the effectiveness of the system. The system also has a friendly interface and simple operation, and its functions can be extended quickly by modifying individual modules.
Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably, but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
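The iterative-perturbation idea described above can be sketched in a few lines: factor the unperturbed stiffness once, then iterate on the perturbed right-hand side. The matrices below are toy values, not the turbine-blade model:

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def perturbation_solve(K0, dK, f, tol=1e-10, max_iter=100):
    """Solve (K0 + dK) u = f by stationary iteration preconditioned with
    the factorized unperturbed stiffness K0:
        u_{k+1} = K0^{-1} (f - dK u_k).
    Converges when K0^{-1} dK is a contraction."""
    factor = cho_factor(K0)                 # factorize once, reuse every sweep
    u = cho_solve(factor, f)                # unperturbed solution as the start
    for _ in range(max_iter):
        u_new = cho_solve(factor, f - dK @ u)
        if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_new):
            return u_new
        u = u_new
    return u

K0 = np.array([[4.0, 1.0], [1.0, 3.0]])       # SPD unperturbed stiffness
dK = 0.1 * np.array([[1.0, 0.0], [0.0, -1.0]])
f = np.array([1.0, 2.0])
print(perturbation_solve(K0, dK, f))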
Local field potentials and border ownership: A conjecture about computation in visual cortex.
Zucker, Steven W
2012-01-01
Border ownership is an intermediate-level visual task: it must integrate (upward-flowing) image information about edges with (downward-flowing) shape information. This pairs the familiar local-to-global aspect of border formation (the linking of edge elements to form contours) with the much less studied global-to-local aspect (determining which edge elements form part of the same shape). To address this task we show how to incorporate certain high-level notions of distance and geometric arrangement into a form that can influence image-based edge information. The center of the argument is a reaction-diffusion equation that reveals how (global) aspects of the distance map (that is, shape) can be "read out" locally, suggesting a solution to the border ownership problem. Since the reaction-diffusion equation defines a field, a possible information processing role for the local field potential can be defined. We argue that such fields also underlie the Gestalt notion of closure, especially when it is refined using modern experimental techniques. An important implication of this theoretical argument is that, if true, then network modeling must be extended to include the substrate surrounding spiking neurons, including glia.
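A one-dimensional toy of the proposed read-out, under the assumption of a screened-diffusion (steady reaction-diffusion) field with a unit source on the border: the field decays exponentially with distance from the border, so distance is locally recoverable from its logarithm. All parameters are illustrative, not the paper's model:

import numpy as np

# Steady state of D u'' - a u = 0 on [0, 1], u(0) = 1 (source on the shape
# border), u'(1) = 0. The solution decays like exp(-sqrt(a/D) x), so the
# distance-to-border can be "read out" locally as -sqrt(D/a) * log u.
D, a, n = 1.0, 400.0, 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0], b[0] = 1.0, 1.0                           # Dirichlet source at x = 0
for i in range(1, n - 1):                          # interior: D u'' - a u = 0
    A[i, i - 1], A[i, i], A[i, i + 1] = D / h**2, -2 * D / h**2 - a, D / h**2
A[-1, -2], A[-1, -1] = -1.0, 1.0                   # no-flux at x = 1
u = np.linalg.solve(A, b)
dist = -np.sqrt(D / a) * np.log(np.maximum(u, 1e-300))
print(np.round(dist[[10, 50, 100]], 3))            # ~ [0.05, 0.25, 0.5] = x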
Connectivity Measures in EEG Microstructural Sleep Elements.
Sakellariou, Dimitris; Koupparis, Andreas M; Kokkinos, Vasileios; Koutroumanidis, Michalis; Kostopoulos, George K
2016-01-01
During Non-Rapid Eye Movement (NREM) sleep the brain is relatively disconnected from the environment, while connectedness between brain areas is also decreased. Evidence indicates that these dynamic connectivity changes are delivered by microstructural elements of sleep: short periods of environmental stimuli evaluation followed by sleep-promoting procedures. The connectivity patterns of the latter, among other aspects of sleep microstructure, are still to be fully elucidated. We suggest here a methodology for the assessment and investigation of the connectivity patterns of EEG microstructural elements, such as sleep spindles. The methodology combines techniques at the preprocessing, estimation, error-assessment, and visualization levels in order to allow detailed examination of connectivity (levels and directionality of information flow) over frequency and time with notable resolution, while dealing with volume conduction and EEG reference assessment. The high temporal and frequency resolution of the methodology allows the association between the microelements and the dynamically forming networks that characterize them, and consequently may reveal aspects of the EEG microstructure. The proposed methodology is initially tested on artificially generated signals for proof of concept and subsequently applied to real EEG recordings via a custom-built MATLAB-based tool developed for such studies. Preliminary results from 843 fast sleep spindles recorded in whole-night sleep of 5 healthy volunteers indicate a prevailing pattern of interactions between centroparietal and frontal regions. We hereby demonstrate what is, to our knowledge, a first attempt to estimate the scalp EEG connectivity that characterizes fast sleep spindles via the "EEG-element connectivity" methodology we propose. Its application, via a computational tool we developed, suggests it is able to investigate the connectivity patterns related to the occurrence of EEG microstructural elements. Network characterization of specified physiological or pathological EEG microstructural elements can potentially be of great importance in the understanding, identification, and prediction of health and disease. PMID:26924980
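One ingredient of such connectivity estimates, band-limited coupling strength between two channels, can be illustrated with magnitude-squared coherence on synthetic spindle-band signals. This hedged sketch omits the directionality and error-assessment layers the methodology includes, and all signal parameters are invented:

import numpy as np
from scipy.signal import coherence

# Two synthetic "EEG channels" sharing a 14 Hz spindle-band rhythm.
fs = 256.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(3)
spindle = np.sin(2 * np.pi * 14 * t)                    # shared 14 Hz rhythm
cz = spindle + 0.8 * rng.standard_normal(t.size)        # "centroparietal"
fz = 0.7 * spindle + 0.8 * rng.standard_normal(t.size)  # "frontal"
f, Cxy = coherence(cz, fz, fs=fs, nperseg=512)
peak = np.argmin(np.abs(f - 14.0))
print(round(float(Cxy[peak]), 2))   # high coherence at the shared rhythm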
Control Aspects of Highly Constrained Guidance Techniques
1978-02-01
Gintautas, Vadas; Ham, Michael I.; Kunsberg, Benjamin; Barr, Shawn; Brumby, Steven P.; Rasmussen, Craig; George, John S.; Nemenman, Ilya; Bettencourt, Luís M. A.; Kenyon, Garret T.
2011-01-01
Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans were measured using a two-alternative forced-choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation-selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas. PMID:21998562
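Fitting a time constant to a psychometric curve of this form can be sketched as follows; the accuracy values and presentation times are hypothetical stand-ins, not the study's data:

import numpy as np
from scipy.optimize import curve_fit

def psychometric(t_ms, tau, floor=0.5, ceil=1.0):
    """Saturating-exponential accuracy curve: chance level at t = 0,
    rising toward ceiling with time constant tau (ms)."""
    return floor + (ceil - floor) * (1.0 - np.exp(-t_ms / tau))

# Hypothetical accuracies at the presentation times used in such studies.
t = np.array([20, 40, 60, 100, 150, 200], dtype=float)
acc = np.array([0.62, 0.73, 0.82, 0.91, 0.95, 0.97])
(tau,), _ = curve_fit(lambda t, tau: psychometric(t, tau), t, acc, p0=[50.0])
print(round(tau, 1))   # fitted time constant in ms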
NASA Astrophysics Data System (ADS)
Blakely, Christopher D.
This dissertation has three main goals: (1) to explore the anatomy of the meshless collocation approximation methods that have recently gained attention in the numerical analysis community; (2) to demonstrate numerically why meshless collocation should become an attractive alternative to standard finite-element methods, owing to the simplicity of its implementation and its high-order convergence properties; and (3) to propose a meshless collocation method for large-scale computational geophysical fluid dynamics models. We provide numerical verification and validation of the meshless collocation scheme applied to the rotational shallow-water equations on the sphere and demonstrate computationally that the proposed model can compete with existing high-performance methods for approximating the shallow-water equations, such as the SEAM (spectral-element atmospheric model) developed at NCAR. A detailed analysis of the parallel implementation of the model is given, along with the introduction of parallel algorithmic routines for high-performance simulation of the model. We analyze the programming and computational aspects of the model using Fortran 90 and the Message Passing Interface (MPI) library, along with software and hardware specifications and performance tests. Details on many aspects of the implementation with regard to performance, optimization, and stabilization are given. In order to verify the mathematical correctness of the algorithms presented and to validate the performance of the meshless collocation shallow-water model, the thesis concludes with numerical experiments on some standardized test cases for the shallow-water equations on the sphere using the proposed method.
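The simplicity-of-implementation claim for meshless collocation is easy to appreciate in one dimension: a Kansa-style radial-basis-function collocation of a Poisson problem needs only scattered nodes and one dense solve. The node count and multiquadric shape parameter below are ad hoc choices, not the dissertation's settings:

import numpy as np

# Collocate u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0;
# the exact solution is u = sin(pi x). No mesh, just nodes and one solve.
n = 25
x = np.linspace(0.0, 1.0, n)
c = 0.2
phi = lambda r: np.sqrt(r**2 + c**2)                 # multiquadric RBF
d2phi = lambda r: c**2 / (r**2 + c**2) ** 1.5        # its second derivative
R = np.abs(x[:, None] - x[None, :])
A = d2phi(R)                                          # collocate the PDE...
A[0, :], A[-1, :] = phi(R[0, :]), phi(R[-1, :])       # ...and the two BCs
b = -np.pi**2 * np.sin(np.pi * x)
b[0] = b[-1] = 0.0
coef = np.linalg.solve(A, b)
u = phi(R) @ coef
print(round(float(np.abs(u - np.sin(np.pi * x)).max()), 5))  # small max error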
Methods for improving simulations of biological systems: systemic computation and fractal proteins
Bentley, Peter J.
2009-01-01
Modelling and simulation are becoming essential for new fields such as synthetic biology. Perhaps the most important aspect of modelling is to follow a clear design methodology that will help to highlight unwanted deficiencies. The use of tools designed to aid the modelling process can be of benefit in many situations. In this paper, the modelling approach called systemic computation (SC) is introduced. SC is an interaction-based language, which enables individual-based expression and modelling of biological systems, and the interactions between them. SC permits a precise description of a hypothetical mechanism to be written using an intuitive graph-based or a calculus-based notation. The same description can then be directly run as a simulation, merging the hypothetical mechanism and the simulation into the same entity. However, even when using well-designed modelling tools to produce good models, the best model is not always the most accurate one. Frequently, computational constraints or lack of data make it infeasible to model an aspect of biology. Simplification may provide one way forward, but with inevitable consequences of decreased accuracy. Instead of attempting to replace an element with a simpler approximation, it is sometimes possible to substitute the element with a different but functionally similar component. In the second part of this paper, this modelling approach is described and its advantages are summarized using an exemplar: the fractal protein model. Finally, the paper ends with a discussion of good biological modelling practice by presenting lessons learned from the use of SC and the fractal protein model. PMID:19324681
Cortical Neural Computation by Discrete Results Hypothesis
Castejon, Carlos; Nuñez, Angel
2016-01-01
One of the most challenging problems we face in neuroscience is to understand how the cortex performs computations. There is increasing evidence that the power of cortical processing is produced by populations of neurons forming dynamic neuronal ensembles. Theoretical proposals and multineuronal experimental studies have revealed that ensembles of neurons can form emergent functional units. However, how these ensembles are implicated in cortical computations is still a mystery. Although cell ensembles have been associated with brain rhythms, the functional interaction remains largely unclear. It is still unknown how spatially distributed neuronal activity can be temporally integrated to contribute to cortical computations. A theoretical explanation integrating spatial and temporal aspects of cortical processing is still lacking. In this Hypothesis and Theory article, we propose a new functional theoretical framework to explain the computational roles of these ensembles in cortical processing. We suggest that complex neural computations underlying cortical processing could be temporally discrete and that sensory information would need to be quantized to be computed by the cerebral cortex. Accordingly, we propose that cortical processing is produced by the computation of discrete spatio-temporal functional units that we have called “Discrete Results” (Discrete Results Hypothesis). This hypothesis represents a novel functional mechanism by which information processing is computed in the cortex. Furthermore, we propose that precise dynamic sequences of “Discrete Results” are the mechanism used by the cortex to extract, code, memorize and transmit neural information. The novel “Discrete Results” concept has the ability to match the spatial and temporal aspects of cortical processing. We discuss the possible neural underpinnings of these functional computational units and describe the empirical evidence supporting our hypothesis. We propose that fast-spiking (FS) interneurons may be a key element in our hypothesis, providing the basis for this computation. PMID:27807408
Parker, Jack; Mawson, Susan; Mountain, Gail; Nasr, Nasrin; Zheng, Huiru
2014-06-05
Evidence indicates that post-stroke rehabilitation improves function, independence and quality of life. A key aspect of rehabilitation is the provision of appropriate information and feedback to the learner. Advances in information and communications technology (ICT) have allowed for the development of various systems to complement stroke rehabilitation that could be used in the home setting. These systems may increase the provision of rehabilitation a stroke survivor receives and carries out, as well as providing a learning platform that facilitates long-term self-managed rehabilitation and behaviour change. This paper describes the application of an innovative evaluative methodology to explore the utilisation of feedback for post-stroke upper-limb rehabilitation in the home. Using the principles of realistic evaluation, this study aimed to test and refine intervention theories by exploring the complex interactions of contexts, mechanisms and outcomes that arise from technology deployment in the home. Methods included focus groups followed by multi-method case studies (n = 5) before, during and after the use of computer-based equipment. Data were analysed in relation to the context-mechanism-outcome hypotheses case by case. This was followed by a synthesis of the findings to answer the question, 'what works for whom and in what circumstances and respects?' Data analysis reveals that to achieve desired outcomes through the use of ICT, key elements of computer feedback, such as accuracy, measurability, rewarding feedback, adaptability, and knowledge of results feedback, are required to trigger the theory-driven mechanisms underpinning the intervention. In addition, the pre-existing context and the personal and environmental contexts, such as previous experience of service delivery, personal goals, trust in the technology, and social circumstances may also enable or constrain the underpinning theory-driven mechanisms. Findings suggest that the theory-driven mechanisms underpinning the utilisation of feedback from computer-based technology for home-based upper-limb post-stroke rehabilitation are dependent on key elements of computer feedback and the personal and environmental context. The identification of these elements may therefore inform the development of technology; therapy education and the subsequent adoption of technology and a self-management paradigm; long-term self-managed rehabilitation; and importantly, improvements in the physical and psychosocial aspects of recovery.
A Distributed Data Base System Concept for Defense Test and Evaluation.
1983-03-01
…measure adequately all variables that affect the outcome of the test. The second aspect of variable control that should be considered is the amount of free-play permitted by a particular simulation. In a computer simulation, free-play, if any, is limited to those elements designed into it by the programmers. In operational testing of prototypes, however, a great deal of free-play can be introduced by allowing players to react to situations…
Torak, L.J.
1993-01-01
A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady- or unsteady-state, two-dimensional or axisymmetric ground-water flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desk-top personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into complete coefficients just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross-reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. Computational aspects of changing stresses and boundary conditions with time and of mass-balance and error terms are given for each hydrologic feature. Program variables are listed and defined according to their occurrence in the main programs and in subroutines. Listings of the main programs and subroutines are given.
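The storage scheme described above lends itself to a compact illustration. The following minimal Python sketch (not MODFE's Fortran; the two-triangle mesh and element matrix are invented placeholders) shows how the lower triangle of a symmetric finite-element coefficient matrix can be held in a single-subscripted array and expanded to a complete matrix only just prior to solution:

    import numpy as np

    n_nodes = 4
    elements = [(0, 1, 2), (1, 3, 2)]            # hypothetical two-triangle mesh
    ke = np.array([[ 2.0, -1.0, -1.0],
                   [-1.0,  2.0, -1.0],
                   [-1.0, -1.0,  2.0]])          # placeholder element matrix

    idx = lambda i, j: i * (i + 1) // 2 + j      # 1-D position of entry (i, j), j <= i
    a = np.zeros(n_nodes * (n_nodes + 1) // 2)   # single-subscripted lower triangle

    for conn in elements:
        for r, gi in enumerate(conn):
            for c, gj in enumerate(conn):
                if gj <= gi:                     # symmetry: store lower triangle only
                    a[idx(gi, gj)] += ke[r, c]

    # Assemble the complete coefficient matrix just prior to solution.
    A = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes):
        for j in range(i + 1):
            A[i, j] = A[j, i] = a[idx(i, j)]

    b = np.ones(n_nodes)
    x = np.zeros(n_nodes)                        # node 0 held at zero (Dirichlet)
    x[1:] = np.linalg.solve(A[1:, 1:], b[1:])
    print(x)

Storing n(n+1)/2 entries instead of n^2 is the kind of saving in computer storage that the report exploits.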
State-Transition Structures in Physics and in Computation
NASA Astrophysics Data System (ADS)
Petri, C. A.
1982-12-01
In order to establish close connections between physical and computational processes, it is assumed that the concepts of “state” and of “transition” are acceptable both to physicists and to computer scientists, at least in an informal way. The aim of this paper is to propose formal definitions of state and transition elements on the basis of very low level physical concepts in such a way that (1) all physically possible computations can be described as embedded in physical processes; (2) the computational aspects of physical processes can be described on a well-defined level of abstraction; (3) the gulf between the continuous models of physics and the discrete models of computer science can be bridged by simple mathematical constructs which may be given a physical interpretation; (4) a combinatorial, nonstatistical definition of “information” can be given on low levels of abstraction which may serve as a basis to derive higher-level concepts of information, e.g., by a statistical or probabilistic approach. Conceivable practical consequences are discussed.
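A toy illustration may make the flavor of such state/transition elements concrete. The sketch below is my own minimal construction, not Petri's formalism: a transition is an element that fires when all of its input conditions hold, consuming them and establishing its output conditions.

    def fire(marking, transition):
        ins, outs = transition
        if all(marking[s] for s in ins):        # transition enabled?
            marking = dict(marking)
            for s in ins:
                marking[s] = False              # consume input conditions
            for s in outs:
                marking[s] = True               # establish output conditions
        return marking

    m = {"ready": True, "signal": True, "done": False}
    t = (("ready", "signal"), ("done",))
    print(fire(m, t))   # {'ready': False, 'signal': False, 'done': True}

A computation then appears as a succession of such local, discrete transitions, which is the sense in which it can be described as embedded in a physical process.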
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
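A minimal Python sketch of damped Newton iteration of the kind discussed, with an invented scalar residual standing in for the periodicity conditions (the paper selects the damping parameter optimally; a fixed value is used here purely for illustration):

    import numpy as np

    def damped_newton(residual, jacobian, x0, lam=0.5, tol=1e-10, max_iter=100):
        # x_{k+1} = x_k - lam * J(x_k)^{-1} r(x_k), with damping 0 < lam <= 1
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r = residual(x)
            if np.linalg.norm(r) < tol:
                break
            x = x - lam * np.linalg.solve(jacobian(x), r)
        return x

    r = lambda x: np.array([np.cos(x[0]) - x[0]])        # toy residual
    J = lambda x: np.array([[-np.sin(x[0]) - 1.0]])
    print(damped_newton(r, J, [1.0]))                    # converges near 0.7391

Damping trades per-iteration progress for a larger basin of convergence, which is the remedy the paper reports for the divergence problems observed with full Newton steps.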
NASA Technical Reports Server (NTRS)
Greene, William H.
1989-01-01
A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
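As a rough illustration of the first technique, the sketch below (a static stand-in for the transient analyses studied, with an invented spring-chain model) reduces the problem with approximation vectors computed from the original design, re-analyzes a perturbed design in that same basis, and takes an overall finite difference:

    import numpy as np

    def modes(K, n):
        # vibration modes of the nominal design (mass matrix = identity here)
        _, V = np.linalg.eigh(K)              # eigh returns ascending eigenvalues
        return V[:, :n]

    def reduced_response(K, f, Phi):
        q = np.linalg.solve(Phi.T @ K @ Phi, Phi.T @ f)   # reduced-order solve
        return Phi @ q

    n, h = 6, 1e-6
    K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # spring chain
    f = np.ones(n)

    Phi = modes(K, 3)                     # basis vectors from the ORIGINAL design
    u0 = reduced_response(K, f, Phi)
    Kp = K.copy(); Kp[0, 0] += h          # perturb one design-dependent stiffness
    up = reduced_response(Kp, f, Phi)     # same basis, as the report recommends
    print((up - u0) / h)                  # overall-finite-difference sensitivity

Reusing the original design's vectors for the perturbed analysis is what keeps the method practical for large-order models, at the cost of some accuracy as the perturbation grows.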
NASA Technical Reports Server (NTRS)
Baker, A. J.
1982-01-01
An order-of-magnitude analysis of the subsonic three dimensional steady time averaged Navier-Stokes equations, for semibounded aerodynamic juncture geometries, yields the parabolic Navier-Stokes simplification. The numerical solution of the resultant pressure Poisson equation is cast into complementary and particular parts, yielding an iterative interaction algorithm with an exterior three dimensional potential flow solution. A parabolic transverse momentum equation set is constructed, wherein robust enforcement of first order continuity effects is accomplished using a penalty differential constraint concept within a finite element solution algorithm. A Reynolds stress constitutive equation, with low turbulence Reynolds number wall functions, is employed for closure, using parabolic forms of the two-equation turbulent kinetic energy-dissipation equation system. Numerical results document accuracy, convergence, and utility of the developed finite element algorithm, and the CMC:3DPNS computer code applied to an idealized wing-body juncture region. Additional results document accuracy aspects of the algorithm turbulence closure model.
SCEAPI: A unified Restful Web API for High-Performance Computing
NASA Astrophysics Data System (ADS)
Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi
2017-10-01
The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need a high-quality programming interface for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer and job management for creating, submitting and monitoring jobs, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution to extend opportunistic HPC resources.
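To give a feel for this style of interface, here is a short Python sketch of a RESTful job-submission flow over HTTPS. The base URL, endpoint paths and JSON fields are invented for illustration; they are not the documented SCEAPI interface:

    import requests

    BASE = "https://hpc-api.example.cn/v1"        # hypothetical base URL

    # Authenticate and obtain a bearer token (hypothetical endpoint).
    tok = requests.post(f"{BASE}/tokens",
                        json={"user": "alice", "password": "secret"}).json()["token"]
    hdr = {"Authorization": f"Bearer {tok}"}

    # Upload an input file, create a job, then poll its state.
    with open("input.dat", "rb") as fh:
        requests.put(f"{BASE}/files/input.dat", data=fh, headers=hdr)

    job = requests.post(f"{BASE}/jobs", headers=hdr,
                        json={"app": "demo-app", "args": ["input.dat"]}).json()
    state = requests.get(f"{BASE}/jobs/{job['id']}", headers=hdr).json()["state"]
    print(state)

The appeal of such an API is that the same few HTTP verbs cover authentication, file transfer and job management across otherwise heterogeneous back-end resources.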
Design of hat-stiffened composite panels loaded in axial compression
NASA Astrophysics Data System (ADS)
Paul, T. K.; Sinha, P. K.
An integrated step-by-step analysis procedure for the design of axially compressed stiffened composite panels is outlined. The analysis makes use of the effective width concept. A computer code, BUSTCOP, is developed incorporating various aspects of buckling such as skin buckling, stiffener crippling and column buckling. Other salient features of the computer code include capabilities for generation of data based on micromechanics theories and hygrothermal analysis, and for prediction of strength failure. Parametric studies carried out on a hat-stiffened structural element indicate that, for all practical purposes, composite panels exhibit higher structural efficiency. Some hybrid laminates with outer layers made of aluminum alloy also show great promise for flight vehicle structural applications.
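One small ingredient of such a design procedure can be sketched in a few lines. The Python fragment below uses invented geometry and material values and is nothing like BUSTCOP's full effective-width, crippling and hygrothermal treatment; it simply checks an Euler column-buckling margin for a panel element:

    import math

    def euler_buckling_load(E, I, L, c=1.0):
        # P_cr = c * pi^2 * E * I / L^2, with c an end-fixity coefficient
        return c * math.pi ** 2 * E * I / L ** 2

    E = 70e9           # Pa, aluminium-like placeholder modulus
    I = 1.2e-8         # m^4, placeholder section moment of inertia
    L = 0.5            # m, column length
    P_applied = 3.0e3  # N, placeholder applied load

    P_cr = euler_buckling_load(E, I, L)
    print(f"P_cr = {P_cr / 1e3:.1f} kN, margin = {P_cr / P_applied - 1:.2f}")

A full sizing loop of the kind outlined above would repeat such checks for skin buckling, stiffener crippling and column buckling across candidate laminates.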
Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces
NASA Technical Reports Server (NTRS)
Ellman, Alvin; Carlton, Magdi
1993-01-01
The technical challenges, engineering solutions, and results of the NOCC computer-human interface design are presented. The user-centered design process was as follows: determine the design criteria for user concerns; assess the impact of design decisions on the users; and determine the technical aspects of the implementation (tools, platforms, etc.). The NOCC hardware architecture is illustrated. A graphical model of the DSN that represented the hierarchical structure of the data was constructed. The DSN spacecraft summary display is shown. Navigation from top to bottom is accomplished by clicking the appropriate button for the element about which the user desires more detail. The telemetry summary display and the antenna color decision table are also shown.
An analysis of general chain systems
NASA Technical Reports Server (NTRS)
Passerello, C. E.; Huston, R. L.
1972-01-01
A general analysis of dynamic systems consisting of connected rigid bodies is presented. The number of bodies and their manner of connection is arbitrary so long as no closed loops are formed. The analysis represents a dynamic finite element method, which is computer-oriented and designed so that nonworking, internal constraint forces are automatically eliminated. The method is based upon Lagrange's form of d'Alembert's principle. Shifter matrix transformations are used with the geometrical aspects of the analysis. The method is illustrated with a space manipulator.
Neurodynamic system theory: scope and limits.
Erdi, P
1993-06-01
This paper proposes that neurodynamic system theory may be used to connect structural and functional aspects of neural organization. The paper claims that generalized causal dynamic models are proper tools for describing the self-organizing mechanism of the nervous system. In particular, it is pointed out that ontogeny, development, normal performance, learning, and plasticity, can be treated by coherent concepts and formalism. Taking into account the self-referential character of the brain, autopoiesis, endophysics and hermeneutics are offered as elements of a poststructuralist brain (-mind-computer) theory.
1977-09-01
…Division, Barry Wright Corporation, Watertown, MA: DESIGN OF ELASTOMERIC COMPONENTS BY USING THE FINITE ELEMENT TECHNIQUE, R.H. Finney and B.P. Gupta… Alabama in Huntsville, Huntsville, AL. PAPERS APPEARING IN PART 2 (Vibration Analysis): SOME ASPECTS OF VIBRATION CONTROL SUPPORT DESIGN, P. Bezler and J.R.… at the Air Force Flight Dynamics Laboratory (AFFDL). The laser force measuring mounting brackets were designed and…
Protection - Principles and practice.
NASA Technical Reports Server (NTRS)
Graham, G. S.; Denning, P. J.
1972-01-01
The protection mechanisms of computer systems control the access to objects, especially information objects. The principles of protection system design are formalized as a model (theory) of protection. Each process has a unique identification number which is attached by the system to each access attempted by the process. Details of system implementation are discussed, taking into account the storing of the access matrix, aspects of efficiency, and the selection of subjects and objects. Two systems which have protection features incorporating all the elements of the model are described.
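A minimal Python sketch of the access-matrix idea described above, with invented subject identification numbers, object names and rights:

    # Rows are subjects (each with a unique identification number), columns
    # are objects, and entries list the permitted access rights.
    access = {
        (1001, "payroll.db"): {"read"},
        (1001, "report.txt"): {"read", "write"},
        (1002, "report.txt"): {"read"},
    }

    def check_access(subject_id, obj, right):
        # The system attaches subject_id to each attempted access and
        # consults the matrix before allowing it to proceed.
        return right in access.get((subject_id, obj), set())

    print(check_access(1001, "payroll.db", "write"))   # False -> access denied
    print(check_access(1001, "report.txt", "write"))   # True  -> access granted

Real systems differ chiefly in how this matrix is stored and searched efficiently, which is the implementation question the paper takes up.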
Towards an integral computer environment supporting system operations analysis and conceptual design
NASA Technical Reports Server (NTRS)
Barro, E.; Delbufalo, A.; Rossi, F.
1994-01-01
VITROCISET has developed in-house a prototype tool named System Dynamic Analysis Environment (SDAE) to support system engineering activities in the initial definition phase of a complex space system. The SDAE goal is to provide powerful means for the definition, analysis, and trade-off of operations and design concepts for the space and ground elements involved in a mission. For this purpose SDAE implements a dedicated modeling methodology based on the integration of different modern (static and dynamic) analysis and simulation techniques. The resulting 'system model' is capable of representing all the operational, functional, and behavioral aspects of the system elements which are part of a mission. The execution of customized model simulations enables: the validation of selected concepts with respect to mission requirements; the in-depth investigation of mission-specific operational and/or architectural aspects; and the early assessment of performances required by the system elements to cope with mission constraints and objectives. Due to its characteristics, SDAE is particularly tailored for nonconventional or highly complex systems, which require a great analysis effort in their early definition stages. SDAE runs under PC-Windows and is currently used by VITROCISET's system engineering group. This paper describes the SDAE main features, showing some tool output examples.
A multiscale method for modeling high-aspect-ratio micro/nano flows
NASA Astrophysics Data System (ADS)
Lockerby, Duncan; Borg, Matthew; Reese, Jason
2012-11-01
In this paper we present a new multiscale scheme for simulating micro/nano flows of high aspect ratio in the flow direction, e.g. within long ducts, tubes, or channels of varying section. The scheme consists of applying a simple hydrodynamic description over the entire domain, and allocating micro sub-domains in very small "slices" of the channel. Every micro element is a molecular dynamics simulation (or other appropriate model, e.g., a direct simulation Monte Carlo method for micro-channel gas flows) over the local height of the channel/tube. The number of micro elements as well as their streamwise position is chosen to resolve the geometrical features of the macro channel. While there is no direct communication between individual micro elements, coupling occurs via an iterative imposition of mass and momentum-flux conservation on the macro scale. The greater the streamwise scale of the geometry, the more significant is the computational speed-up when compared to a full MD simulation. We test our new multiscale method on the case of a converging/diverging nanochannel conveying a simple Lennard-Jones liquid. We validate the results from our simulations by comparing them to a full MD simulation of the same test case. Supported by EPSRC Programme Grant EP/I011927/1.
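The iterative coupling can be caricatured in a few lines. In the Python sketch below (my own toy construction: a Poiseuille-like closure stands in for the molecular-dynamics micro elements, and the channel profile is invented), the macro iteration rescales each slice's pressure gradient until every slice carries the same flux:

    import numpy as np

    h = np.array([1.0, 0.8, 0.6, 0.8, 1.0])   # local channel heights at the slices
    dpdx = -np.ones_like(h)                    # trial pressure gradient per slice

    def micro_flux(height, grad):
        # stand-in for an MD sub-domain: per-width Poiseuille volume flux
        return -height ** 3 * grad / 12.0

    for _ in range(50):                        # macro iteration
        q = micro_flux(h, dpdx)
        q_target = q.mean()                    # mass conservation along the duct
        dpdx *= q_target / q                   # steeper gradient where flux is low
    print(q, dpdx)

In the real scheme the flux response of each slice comes from a molecular simulation rather than a formula, which is why the macro-scale iteration, rather than direct communication between micro elements, carries the coupling.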
NASA Astrophysics Data System (ADS)
Pašti, Igor A.; Jovanović, Aleksandar; Dobrota, Ana S.; Mentus, Slavko V.; Johansson, Börje; Skorodumova, Natalia V.
2018-04-01
The understanding of atomic adsorption on graphene is of high importance for many advanced technologies. Here we present a complete database of the atomic adsorption energies for the elements of the Periodic Table up to the atomic number 86 (excluding lanthanides) on pristine graphene. The energies have been calculated using the projector augmented wave (PAW) method with PBE, long-range dispersion interaction corrected PBE (PBE+D2, PBE+D3) as well as non-local vdW-DF2 approach. The inclusion of dispersion interactions leads to an exothermic adsorption for all the investigated elements. Dispersion interactions are found to be of particular importance for the adsorption of low atomic weight earth alkaline metals, coinage and s-metals (11th and 12th groups), high atomic weight p-elements and noble gases. We discuss the observed adsorption trends along the groups and rows of the Periodic Table as well some computational aspects of modelling atomic adsorption on graphene.
Simulations of acoustic waves in channels and phonation in glottal ducts
NASA Astrophysics Data System (ADS)
Yang, Jubiao; Krane, Michael; Zhang, Lucy
2014-11-01
Numerical simulations of acoustic wave propagation were performed by solving compressible Navier-Stokes equations using finite element method. To avoid numerical contamination of acoustic field induced by non-physical reflections at computational boundaries, a Perfectly Matched Layer (PML) scheme was implemented to attenuate the acoustic waves and their reflections near these boundaries. The acoustic simulation was further combined with the simulation of interaction of vocal fold vibration and glottal flow, using our fully-coupled Immersed Finite Element Method (IFEM) approach, to study phonation in the glottal channel. In order to decouple the aeroelastic and aeroacoustic aspects of phonation, the airway duct used has a uniform cross section with PML properly applied. The dynamics of phonation were then studied by computing the terms of the equations of motion for a control volume comprised of the fluid in the vicinity of the vocal folds. It is shown that the principal dynamics is comprised of the near cancellation of the pressure force driving the flow through the glottis, and the aerodynamic drag on the vocal folds. Aeroacoustic source strengths are also presented, estimated from integral quantities computed in the source region, as well as from the radiated acoustic field.
Di Paola, Vieri; Marijuán, Pedro C; Lahoz-Beltra, Rafael
2004-01-01
Adaptive behavior in unicellular organisms (i.e., bacteria) depends on highly organized networks of proteins governing purposefully the myriad of molecular processes occurring within the cellular system. For instance, bacteria are able to explore the environment within which they develop by utilizing the motility of their flagellar system as well as a sophisticated biochemical navigation system that samples the environmental conditions surrounding the cell, searching for nutrients or moving away from toxic substances or dangerous physical conditions. In this paper we discuss how proteins of the intervening signal transduction network could be modeled as artificial neurons, simulating the dynamical aspects of the bacterial taxis. The model is based on the assumption that, in some important aspects, proteins can be considered as processing elements or McCulloch-Pitts artificial neurons that transfer and process information from the bacterium's membrane surface to the flagellar motor. This simulation of bacterial taxis has been carried out on a hardware realization of a McCulloch-Pitts artificial neuron using an operational amplifier. Based on the behavior of the operational amplifier we produce a model of the interaction between CheY and FliM, elements of the prokaryotic two component system controlling chemotaxis, as well as a simulation of learning and evolution processes in bacterial taxis. On the one side, our simulation results indicate that, computationally, these protein 'switches' are similar to McCulloch-Pitts artificial neurons, suggesting a bridge between evolution and learning in dynamical systems at cellular and molecular levels and the evolutive hardware approach. On the other side, important protein 'tactilizing' properties are not tapped by the model, and this suggests further complexity steps to explore in the approach to biological molecular computing.
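A McCulloch-Pitts unit of the kind invoked above takes only a few lines of code. In this Python sketch the weights and threshold are arbitrary illustrations, not fitted parameters of the CheY-FliM interaction:

    def mcculloch_pitts(inputs, weights, threshold):
        # binary output: fire iff the weighted sum reaches the threshold
        s = sum(w * x for w, x in zip(weights, inputs))
        return 1 if s >= threshold else 0

    # A protein "switch" active only when an excitatory signal is present
    # and an inhibitory signal is absent.
    for excite in (0, 1):
        for inhibit in (0, 1):
            out = mcculloch_pitts((excite, inhibit), (1.0, -1.0), 1.0)
            print(excite, inhibit, "->", out)

The point of the analogy drawn in the paper is that a phosphorylation-controlled protein interaction behaves, computationally, like such a thresholded switch.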
Scattering Properties of Needle-Like and plate-like Ice Spheroids with Moderate Size Parameters
NASA Technical Reports Server (NTRS)
Zakharova, Nadia T.; Mishchenko, Michael I.; Hansen, James E. (Technical Monitor)
2000-01-01
We use the current advanced version of the T-matrix method to compute the optical cross sections, the asymmetry parameter of the phase function, and the scattering matrix elements of ice spheroids with aspect ratios up to 20 and surface-equivalent-sphere size parameters up to 12. We demonstrate that plate-like and needle-like particles with moderate size parameters possess unique scattering properties: their asymmetry parameters and phase functions are similar to those of surface-equivalent spheres, whereas all other elements of the scattering matrix are typical of particles much smaller than the wavelength (Rayleigh scatterers). This result may have important implications for optical particle sizing and remote sensing of the terrestrial and planetary atmospheres.
Returns on Investment in California County Departments of Public Health.
Brown, Timothy T
2016-08-01
To estimate the average return on investment for the overall activities of county departments of public health in California. I gathered the elements necessary to estimate the average return on investment for county departments of public health in California during the period 2001 to 2008-2009. These came from peer-reviewed journal articles published as part of a larger project to develop a method for determining return on investment for public health by using a health economics framework. I combined these elements by using the standard formula for computing return on investment, and performed a sensitivity analysis. Then I compared the return on investment for county departments of public health with the returns on investment generated for various aspects of medical care. The estimated return on investment from $1 invested in county departments of public health in California ranges from $67.07 to $88.21. The very large estimated return on investment for California county departments of public health relative to the return on investment for selected aspects of medical care suggests that public health is a wise investment.
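The standard return-on-investment formula referred to above is simple enough to state in code. A minimal Python sketch with invented dollar figures (not the study's data; note that published figures may quote net or gross returns depending on convention, and this sketch uses net):

    def return_on_investment(benefit, cost):
        # net benefit per dollar invested
        return (benefit - cost) / cost

    print(return_on_investment(benefit=500_000.0, cost=100_000.0))   # 4.0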
Control aspects of quantum computing using pure and mixed states.
Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J
2012-10-13
Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. And beyond, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underlie the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: it is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems.
Mineral density volume gradients in normal and diseased human tissues.
Djomehri, Sabra I; Candell, Susan; Case, Thomas; Browning, Alyssa; Marshall, Grayson W; Yun, Wenbing; Lau, S H; Webb, Samuel; Ho, Sunita P
2015-01-01
Clinical computed tomography provides a single mineral density (MD) value for heterogeneous calcified tissues containing early and late stage pathologic formations. The novel aspect of this study is that it extends current quantitative methods of mapping mineral density gradients to three dimensions, discretizes early and late mineralized stages, identifies elemental distribution in discretized volumes, and correlates measured MD with respective calcium (Ca) to phosphorus (P) and Ca to zinc (Zn) elemental ratios. To accomplish this, MD variations identified using polychromatic radiation from a high resolution micro-computed tomography (micro-CT) benchtop unit were correlated with elemental mapping obtained from a microprobe X-ray fluorescence (XRF) using synchrotron monochromatic radiation. Digital segmentation of tomograms from normal and diseased tissues (N=5 per group; 40-60 year old males) contained significant mineral density variations (enamel: 2820-3095 mg/cc, bone: 570-1415 mg/cc, cementum: 1240-1340 mg/cc, dentin: 1480-1590 mg/cc, cementum affected by periodontitis: 1100-1220 mg/cc, hypomineralized carious dentin: 345-1450 mg/cc, hypermineralized carious dentin: 1815-2740 mg/cc, and dental calculus: 1290-1770 mg/cc). A plausible linear correlation between segmented MD volumes and elemental ratios within these volumes was established, and Ca/P ratios for dentin (1.49), hypomineralized dentin (0.32-0.46), cementum (1.51), and bone (1.68) were observed. Furthermore, varying Ca/Zn ratios were distinguished in adapted compared to normal tissues, such as in bone (855-2765) and in cementum (595-990), highlighting Zn as an influential element in prompting observed adaptive properties. Hence, results provide insights on mineral density gradients with elemental concentrations and elemental footprints that in turn could aid in elucidating mechanistic processes for pathologic formations.
Gültekin, Osman; Sommer, Gerhard; Holzapfel, Gerhard A
2016-11-01
This study deals with the viscoelastic constitutive modeling and the respective computational analysis of the human passive myocardium. We start by recapitulating the locally orthotropic inner structure of the human myocardial tissue and model the mechanical response through invariants and structure tensors associated with three orthonormal basis vectors. In accordance with recent experimental findings the ventricular myocardial tissue is assumed to be incompressible, thick-walled, orthotropic and viscoelastic. In particular, one spring element coupled with Maxwell elements in parallel endows the model with viscoelastic features such that four dashpots describe the viscous response due to matrix, fiber, sheet and fiber-sheet fragments. In order to alleviate the numerical obstacles, the strictly incompressible model is altered by decomposing the free-energy function into volumetric-isochoric elastic and isochoric-viscoelastic parts along with the multiplicative split of the deformation gradient which enables the three-field mixed finite element method. The crucial aspect of the viscoelastic formulation is linked to the rate equations of the viscous overstresses resulting from a 3-D analogy of a generalized 1-D Maxwell model. We provide algorithmic updates for second Piola-Kirchhoff stress and elasticity tensors. In the sequel, we address some numerical aspects of the constitutive model by applying it to elastic, cyclic and relaxation test data obtained from biaxial extension and triaxial shear tests whereby we assess the fitting capacity of the model. With the tissue parameters identified, we conduct (elastic and viscoelastic) finite element simulations for an ellipsoidal geometry retrieved from a human specimen.
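The rate equations for the viscous overstresses have a familiar 1-D analogue. The Python sketch below (one equilibrium spring plus a single Maxwell branch, with invented moduli and relaxation time rather than fitted myocardium parameters) uses the standard exponential midpoint recurrence for the overstress update:

    import math

    E_inf, E1, tau = 10.0, 5.0, 0.5   # equilibrium modulus, branch modulus, relaxation time
    dt, rate = 0.01, 1.0              # time step and applied strain rate

    eps, q = 0.0, 0.0                 # strain and viscous overstress
    for step in range(300):
        d_eps = rate * dt if step < 100 else 0.0      # ramp to eps = 1, then hold
        eps += d_eps
        # exponential (midpoint) update of the Maxwell-branch overstress
        q = math.exp(-dt / tau) * q + E1 * math.exp(-dt / (2.0 * tau)) * d_eps
        sigma = E_inf * eps + q
    print(f"stress after hold: {sigma:.3f} (equilibrium value {E_inf * eps:.3f})")

During the hold the overstress relaxes away and the total stress decays toward the equilibrium spring's contribution, which is the qualitative behaviour the relaxation tests above probe.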
NASA Technical Reports Server (NTRS)
Koppenhoefer, Kyle C.; Gullerud, Arne S.; Ruggieri, Claudio; Dodds, Robert H., Jr.; Healy, Brian E.
1998-01-01
This report describes the theoretical background material and commands necessary to use the WARP3D finite element code. WARP3D is under continuing development as a research code for the solution of very large-scale, 3-D solid models subjected to static and dynamic loads. Specific features in the code oriented toward the investigation of ductile fracture in metals include a robust finite strain formulation, a general J-integral computation facility (with inertia, face loading), an element extinction facility to model crack growth, nonlinear material models including viscoplastic effects, and the Gurson-Tvergaard dilatant plasticity model for void growth. The nonlinear, dynamic equilibrium equations are solved using an incremental-iterative, implicit formulation with full Newton iterations to eliminate residual nodal forces. The history integration of the nonlinear equations of motion is accomplished with Newmark's Beta method. A central feature of WARP3D involves the use of a linear-preconditioned conjugate gradient (LPCG) solver implemented in an element-by-element format to replace a conventional direct linear equation solver. This software architecture dramatically reduces both the memory requirements and CPU time for very large, nonlinear solid models, since formation of the assembled (dynamic) stiffness matrix is avoided. Analyses thus exhibit the numerical stability for large time (load) steps provided by the implicit formulation, coupled with the low memory requirements characteristic of an explicit code. In addition to the much lower memory requirements of the LPCG solver, the CPU time required for solution of the linear equations during each Newton iteration is generally one-half or less of the CPU time required for a traditional direct solver. All other computational aspects of the code (element stiffnesses, element strains, stress updating, element internal forces) are implemented in the element-by-element, blocked architecture. This greatly improves vectorization of the code on uni-processor hardware and enables straightforward parallel-vector processing of element blocks on multi-processor hardware.
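A preconditioned conjugate gradient loop of the general kind described is compact enough to sketch. The Python version below uses a simple Jacobi (diagonal) preconditioner and a small dense placeholder matrix; WARP3D's element-by-element formulation instead accumulates the matrix-vector products from element blocks without ever assembling the matrix:

    import numpy as np

    def pcg(A, b, tol=1e-10, max_iter=200):
        Minv = 1.0 / np.diag(A)          # Jacobi preconditioner
        x = np.zeros_like(b)
        r = b - A @ x
        z = Minv * r
        p = z.copy()
        for _ in range(max_iter):
            Ap = A @ p                   # element-by-element codes sum this per block
            alpha = (r @ z) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol:
                break
            z_new = Minv * r_new
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
        return x

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])      # symmetric positive-definite placeholder
    b = np.array([1.0, 2.0, 3.0])
    print(pcg(A, b), np.linalg.solve(A, b))   # the two should agree

Because only matrix-vector products are needed, the assembled stiffness matrix never has to exist, which is the source of the memory savings described above.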
DiBianca, F A; Gupta, V; Zeman, H D
2000-08-01
A computed tomography imaging technique called variable resolution x-ray (VRX) detection provides detector resolution ranging from that of clinical body scanning to that of microscopy (1 cy/mm to 100 cy/mm). The VRX detection technique is based on a new principle denoted as "projective compression" that allows the detector resolution element to scale proportionally to the image field size. Two classes of VRX detector geometry are considered. Theoretical aspects related to x-ray physics and data sampling are presented. Measured resolution parameters (line-spread function and modulation-transfer function) are presented and discussed. A VRX image that resolves a pair of 50 micron tungsten hairs spaced 30 microns apart is shown.
NASA Astrophysics Data System (ADS)
Suzuki, Yoshi-ichi; Seideman, Tamar; Stener, Mauro
2004-01-01
Time-resolved photoelectron differential cross sections are computed within a quantum dynamical theory that combines a formally exact solution of the nuclear dynamics with density functional theory (DFT)-based approximations of the electronic dynamics. Various observables of time-resolved photoelectron imaging techniques are computed at the Kohn-Sham and at the time-dependent DFT levels. Comparison of the results serves to assess the reliability of the former method and hence its usefulness as an economic approach for time-domain photoelectron cross section calculations, that is applicable to complex polyatomic systems. Analysis of the matrix elements that contain the electronic dynamics provides insight into a previously unexplored aspect of femtosecond-resolved photoelectron imaging.
Computational structures technology and UVA Center for CST
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1992-01-01
Rapid advances in computer hardware have had a profound effect on various engineering and mechanics disciplines, including the materials, structures, and dynamics disciplines. A new technology, computational structures technology (CST), has recently emerged as an insightful blend between material modeling, structural and dynamic analysis and synthesis on the one hand, and other disciplines such as computer science, numerical analysis, and approximation theory, on the other hand. CST is an outgrowth of finite element methods developed over the last three decades. The focus of this presentation is on some aspects of CST which can impact future airframes and propulsion systems, as well as on the newly established University of Virginia (UVA) Center for CST. The background and goals for CST are described along with the motivations for developing CST, and a brief discussion is made on computational material modeling. We look at the future in terms of technical needs, computing environment, and research directions. The newly established UVA Center for CST is described. One of the research projects of the Center is described, and a brief summary of the presentation is given.
Parametric study of guided waves dispersion curves for composite plates
NASA Astrophysics Data System (ADS)
Predoi, Mihai Valentin; Petre, Cristian Cǎtǎlin; Kettani, Mounsif Ech Cherif El; Leduc, Damien
2018-02-01
Nondestructive testing of composite panels benefits from the relatively long-range propagation of guided waves in sandwich structures. The guided waves are sensitive to delamination, air-bubble inclusions and cracks, and can thus bring information about hidden defects in the composite panel. The preliminary data for all such inspections are the dispersion curves, which give the dependency of the phase/group velocity on the frequency for the propagating modes. In fact, all modes are more or less attenuated, so it is even more important to compute dispersion curves that also provide the modal attenuation as a function of frequency. Another important aspect is the sensitivity of the dispersion curves to each of the elastic constants of the composite, which is orthotropic in most cases. All these aspects are investigated in the present work, based on our specially developed finite element numerical model implemented in Comsol, which has several advantages over existing methods. The dispersion curves and modal displacements are computed for an example of a composite plate. Comparison with literature data validates the accuracy of our results.
Computational aspects in mechanical modeling of the articular cartilage tissue.
Mohammadi, Hadi; Mequanint, Kibret; Herzog, Walter
2013-04-01
This review focuses on the modeling of articular cartilage (at the tissue level), chondrocyte mechanobiology (at the cell level) and a combination of both in a multiscale computation scheme. The primary objective is to evaluate the advantages and disadvantages of conventional models implemented to study the mechanics of the articular cartilage tissue and chondrocytes. From monophasic material models as the simplest form to more complicated multiscale theories, these approaches have been frequently used to model articular cartilage and have contributed significantly to modeling joint mechanics, addressing and resolving numerous issues regarding cartilage mechanics and function. It should be noted that attentiveness is important when using different modeling approaches, as the choice of the model limits the applications available. In this review, we discuss the conventional models applicable to some of the mechanical aspects of articular cartilage such as lubrication, swelling pressure and chondrocyte mechanics and address some of the issues associated with the current modeling approaches. We then suggest future pathways for a more realistic modeling strategy as applied for the simulation of the mechanics of the cartilage tissue using multiscale and parallelized finite element method.
Using computer graphics to design Space Station Freedom viewing
NASA Technical Reports Server (NTRS)
Goldsberry, B. S.; Lippert, B. O.; Mckee, S. D.; Lewis, J. L., Jr.; Mount, F. E.
1989-01-01
An important aspect of planning for Space Station Freedom at the United States National Aeronautics and Space Administration (NASA) is the placement of the viewing windows and cameras for optimum crewmember use. Researchers and analysts are evaluating the placement options using a three-dimensional graphics program called PLAID. This program, developed at the NASA Johnson Space Center (JSC), is being used to determine the extent to which the viewing requirements for assembly and operations are being met. A variety of window placement options in specific modules are assessed for accessibility. In addition, window and camera placements are analyzed to insure that viewing areas are not obstructed by the truss assemblies, externally-mounted payloads, or any other station element. Other factors being examined include anthropometric design considerations, workstation interfaces, structural issues, and mechanical elements.
NASA Technical Reports Server (NTRS)
Coeckelenbergh, Y.; Macelroy, R. D.; Rein, R.
1978-01-01
The investigation of specific interactions among biological molecules must take into consideration the stereochemistry of the structures. Thus, models of the molecules are essential for describing the spatial organization of potentially interacting groups, and estimations of conformation are required for a description of spatial organization. Both the function of visualizing molecules, and that of estimating conformation through calculations of energy, are part of the molecular modeling system described in the present paper. The potential uses of the system in investigating some aspects of the origin of life rest on the assumption that translation of conformation from genetic elements to catalytic elements would have been required for the development of the first replicating systems subject to the process of biological evolution.
Finite element implementation of state variable-based viscoplasticity models
NASA Technical Reports Server (NTRS)
Iskovitz, I.; Chang, T. Y. P.; Saleeb, A. F.
1991-01-01
State variable-based viscoplasticity models are implemented in a general-purpose finite element code for structural applications of metals deformed at elevated temperatures. Two constitutive models, Walker's and Robinson's, are studied in conjunction with two implicit integration methods: the trapezoidal rule with Newton-Raphson iterations and an asymptotic integration algorithm. A comparison is made between the two integration methods, and the latter appears to be computationally more appealing in terms of numerical accuracy and CPU time. However, in order to make the asymptotic algorithm robust, it is necessary to include a self-adaptive scheme with subincremental step control and error checking of the Jacobian matrix at the integration points. Three examples are given to illustrate the numerical aspects of the integration methods tested.
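The first of the two integration methods can be illustrated on a toy scalar model. The sketch below uses a made-up viscoplastic relation, sigma_dot = E * (eps_dot - (sigma/D)^n), with invented constants, far simpler than the Walker or Robinson models; it applies the trapezoidal rule and solves each step's nonlinear residual with Newton-Raphson:

    def step(sig0, eps_dot, dt, E=1000.0, D=100.0, n=3.0, tol=1e-12, max_iter=30):
        g = lambda s: E * (eps_dot - (s / D) ** n)     # stress-rate function
        sig = sig0                                     # initial Newton guess
        for _ in range(max_iter):
            r = sig - sig0 - 0.5 * dt * (g(sig0) + g(sig))   # trapezoidal residual
            dr = 1.0 + 0.5 * dt * E * n * (sig / D) ** (n - 1.0) / D
            sig -= r / dr                              # Newton-Raphson update
            if abs(r) < tol:
                break
        return sig

    sig, t, dt = 0.0, 0.0, 0.01
    while t < 1.0:                                     # constant strain-rate loading
        sig = step(sig, eps_dot=0.002, dt=dt)
        t += dt
    print(f"stress at t = 1: {sig:.4f}")

An asymptotic integrator instead advances the state with explicit asymptotic updates over subincrements; per the report, step-size control and Jacobian error checks are what make that approach robust.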
Fulian; Gooch; Fisher; Stevens; Compton
2000-08-01
The development and application of a new electrochemical device using a computer-aided design strategy is reported. This novel design is based on the flow of electrolyte solution past a microwire electrode situated centrally within a large duct. In the design stage, finite element simulations were employed to evaluate feasible working geometries and mass transport rates. The computer-optimized designs were then exploited to construct experimental devices. Steady-state voltammetric measurements were performed for a reversible one-electron-transfer reaction to establish the experimental relationship between electrolysis current and solution velocity. The experimental results are compared to those predicted numerically, and good agreement is found. The numerical studies are also used to establish an empirical relationship between the mass transport limited current and the volume flow rate, providing a simple and quantitative alternative for workers who would prefer to exploit this device without the need to develop the numerical aspects.
NASA Astrophysics Data System (ADS)
Cho, Yi Je; Lee, Wook Jin; Park, Yong Ho
2014-11-01
Aspects of numerical results from computational experiments on representative volume element (RVE) problems using finite element analyses are discussed. Two different boundary conditions (BCs) are examined and compared numerically for volume elements with different sizes, where tests have been performed on the uniaxial tensile deformation of random particle reinforced composites. Structural heterogeneities near model boundaries such as the free-edges of particle/matrix interfaces significantly influenced the overall numerical solutions, producing force and displacement fluctuations along the boundaries. Interestingly, this effect was shown to be limited to surface regions within a certain distance of the boundaries, while the interior of the model showed almost identical strain fields regardless of the applied BCs. Also, the thickness of the BC-affected regions remained constant with varying volume element sizes in the models. When the volume element size was large enough compared to the thickness of the BC-affected regions, the structural response of most of the model was found to be almost independent of the applied BC such that the apparent properties converged to the effective properties. Finally, the mechanism that leads a RVE model for random heterogeneous materials to be representative is discussed in terms of the size of the volume element and the thickness of the BC-affected region.
NASA Astrophysics Data System (ADS)
Pantale, O.; Caperaa, S.; Rakotomalala, R.
2004-07-01
During the last 50 years, the development of better numerical methods and more powerful computers has been a major enterprise for the scientific community. In the same time, the finite element method has become a widely used tool for researchers and engineers. Recent advances in computational software have made possible to solve more physical and complex problems such as coupled problems, nonlinearities, high strain and high-strain rate problems. In this field, an accurate analysis of large deformation inelastic problems occurring in metal-forming or impact simulations is extremely important as a consequence of high amount of plastic flow. In this presentation, the object-oriented implementation, using the C++ language, of an explicit finite element code called DynELA is presented. The object-oriented programming (OOP) leads to better-structured codes for the finite element method and facilitates the development, the maintainability and the expandability of such codes. The most significant advantage of OOP is in the modeling of complex physical systems such as deformation processing where the overall complex problem is partitioned in individual sub-problems based on physical, mathematical or geometric reasoning. We first focus on the advantages of OOP for the development of scientific programs. Specific aspects of OOP, such as the inheritance mechanism, the operators overload procedure or the use of template classes are detailed. Then we present the approach used for the development of our finite element code through the presentation of the kinematics, conservative and constitutive laws and their respective implementation in C++. Finally, the efficiency and accuracy of our finite element program are investigated using a number of benchmark tests relative to metal forming and impact simulations.
Approximate Micromechanics Treatise of Composite Impact
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Handler, Louis M.
2005-01-01
A formalism is described for the micromechanics of composite impact. The formalism consists of numerous equations which describe all aspects of impact, from impactor and composite conditions to impact contact, damage progression, and penetration or containment. The formalism is based on a through-the-thickness displacement-increment simulation, which makes it convenient to track local damage in terms of microfailure modes and their respective characteristics. A flow chart is provided for casting the formalism (numerous equations) into a computer code for embedding in composite mechanics codes and/or finite element composite structural analysis.
A Strategy to Safely Live and Work in the Space Radiation Environment
NASA Technical Reports Server (NTRS)
Corbin, Barbara J.; Sulzman, Frank M.; Krenek, Sam
2006-01-01
The goal of the National Aeronautics and Space Administration and the Space Radiation Project is to ensure that astronauts can safely live and work in the space radiation environment. The space radiation environment poses both acute and chronic risks to crew health and safety, but unlike some other aspects of space travel, space radiation exposure has clinically relevant implications for the lifetime of the crew. The term "safely" means that risks are sufficiently understood such that acceptable limits on mission, post-mission, and multi-mission consequences (for example, excess lifetime fatal cancer risk) can be defined. The Space Radiation Project strategy has several elements. The first element is to use a peer-reviewed research program to increase our mechanistic knowledge and genetic capabilities to develop tools for individual risk projection, thereby reducing our dependency on epidemiological data and population-based risk assessment. The second element is to use the NASA Space Radiation Laboratory as a ground-based facility for studying the health effects and mechanisms of damage from space radiation exposure and for developing and validating biological models of risk, as well as methods for extrapolation to human risk. The third element is a risk modeling effort that integrates the results from research efforts into models of human risk to reduce uncertainties in predicting the risk of carcinogenesis, central nervous system damage, degenerative tissue disease, and acute radiation effects. To understand the biological basis for risk, we must also understand the physical aspects of the crew environment. Thus the fourth element develops computer codes to predict radiation transport properties, evaluate integrated shielding technologies, and provide design optimization recommendations for the design of human space systems. Understanding the risks and determining methods to mitigate them are keys to a successful radiation protection strategy.
Seeing the forest for the trees: Networked workstations as a parallel processing computer
NASA Technical Reports Server (NTRS)
Breen, J. O.; Meleedy, D. M.
1992-01-01
Unlike traditional 'serial' processing computers, in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms for selecting nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.
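The scatter/gather pattern underlying this approach can be sketched as follows. The actual system distributes work over networked workstations; this minimal sketch uses local worker processes as stand-ins for network nodes, and the task itself is an arbitrary placeholder.

```python
from multiprocessing import Pool
import math

def process_chunk(chunk):
    # Stand-in for a compute-heavy task running on one "node".
    return sum(math.sqrt(x) for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]   # one chunk per node
    with Pool(processes=8) as pool:
        partials = pool.map(process_chunk, chunks)  # scatter, then gather
    print(sum(partials))
```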
Noise Radiation From a Leading-Edge Slat
NASA Technical Reports Server (NTRS)
Lockard, David P.; Choudhari, Meelan M.
2009-01-01
This paper extends our previous computations of unsteady flow within the slat cove region of a multi-element high-lift airfoil configuration, which showed that both statistical and structural aspects of the experimentally observed unsteady flow behavior can be captured via 3D simulations over a computational domain of narrow spanwise extent. Although such a narrow-domain simulation can account for the spanwise decorrelation of the slat cove fluctuations, the resulting database cannot be applied toward acoustic predictions of the slat without invoking additional approximations to synthesize the fluctuation field over the rest of the span. This deficiency is partially alleviated in the present work by increasing the spanwise extent of the computational domain from 37.3% of the slat chord to nearly 226% (i.e., 15% of the model span). The simulation database is used to verify consistency with previous computational results and, then, to develop predictions of the far-field noise radiation in conjunction with a frequency-domain Ffowcs Williams-Hawkings solver.
Enhancing Decision-Making in STSE Education by Inducing Reflection and Self-Regulated Learning
NASA Astrophysics Data System (ADS)
Gresch, Helge; Hasselhorn, Marcus; Bögeholz, Susanne
2017-02-01
Thoughtful decision-making to resolve socioscientific issues is central to science, technology, society, and environment (STSE) education. One approach for attaining this goal involves fostering students' decision-making processes. Thus, the present study explores whether the application of decision-making strategies, combined with reflections on the decision-making processes of others, enhances decision-making competence. In addition, this study examines whether this process is supported by elements of self-regulated learning, i.e., self-reflection regarding one's own performance and the setting of goals for subsequent tasks. A computer-based training program involving the resolution of socioscientific issues related to sustainable development was developed in two versions: with and without elements of self-regulated learning. Its effects on decision-making competence were analyzed using a pretest-posttest follow-up control-group design (N = 242 high school students). Decision-making competence was assessed using an open-ended questionnaire that focused on three facets: consideration of advantages and disadvantages, metadecision aspects, and reflection on the decision-making processes of others. The findings suggest that students in both training groups incorporated aspects of metadecision into their statements more often than students in the control group. Furthermore, both training groups were more successful in reflecting on the decision-making processes of others. The students who received additional training in self-regulated learning showed greater benefits in terms of metadecision aspects and reflection, and these effects remained significant two months later. Overall, our findings demonstrate that the application of decision-making strategies, combined with reflections on the decision-making process and elements of self-regulated learning, is a fruitful approach in STSE education.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji
2017-01-01
In the multi-dimensional space-time conservation element and solution element (CESE) method, triangular and tetrahedral mesh elements turn out to be the most natural building blocks for 2D and 3D spatial grids, respectively. As such, the CESE method is naturally compatible with the simplest 2D and 3D unstructured grids and thus can be easily applied to solve problems with complex geometries. However, (a) accurate solution of a high-Reynolds number flow field near a solid wall requires that the grid intervals along the direction normal to the wall be much finer than those parallel to the wall, such that the use of grid cells with extremely high aspect ratio (10^3 to 10^6) may become mandatory, and (b) unlike with quadrilateral/hexahedral grids, it is well known that the accuracy of gradient computations on triangular/tetrahedral grids tends to deteriorate rapidly as the cell aspect ratio increases. As a result, the use of triangular/tetrahedral grid cells near a solid wall has long been deemed impractical by CFD researchers. In view of (a) the critical role played by triangular/tetrahedral grids in the CESE development and (b) the importance of accurately resolving high-Reynolds number flow fields near a solid wall, a comprehensive and rigorous mathematical framework that clearly identifies the reasons behind the accuracy deterioration described above has been developed for the 2D case involving triangular cells. By avoiding the pitfalls identified by this 2D framework and its 3D extension, it is shown numerically that accurate and stable results can be obtained on such high-aspect ratio meshes.
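For concreteness, the sketch below evaluates one common triangle aspect-ratio measure (the longest edge divided by the altitude perpendicular to it; definitions vary in the literature) and shows how a thin boundary-layer-style cell reaches the aspect ratios quoted above. The measure and the example cell are illustrative choices, not the paper's definitions.

```python
import numpy as np

def tri_aspect_ratio(p1, p2, p3):
    # Longest edge over the corresponding altitude (one of several
    # aspect-ratio conventions in common use).
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    edges = [np.hypot(x2 - x1, y2 - y1),
             np.hypot(x3 - x2, y3 - y2),
             np.hypot(x1 - x3, y1 - y3)]
    area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    longest = max(edges)
    return longest / (2.0 * area / longest)

# A thin wall-normal extent relative to the wall-parallel extent
# gives an aspect ratio of order 10^3.
print(tri_aspect_ratio((0.0, 0.0), (1.0, 0.0), (0.5, 1e-3)))
```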
Proto-experiences and subjective experiences: classical and quantum concepts.
Vimal, Ram Lakhan Pandey
2008-03-01
Deterministic reductive monism and non-reductive substance dualism are two opposing views of consciousness, and both have serious problems; an alternative view is needed. For this, we hypothesize that strings or elementary particles (fermions and bosons) have two aspects: (i) elemental proto-experiences (PEs) as their phenomenal aspect, and (ii) mass, charge, and spin as their material aspect. Elemental PEs are hypothesized to be properties of elementary particles and their interactions, which are composed of irreducible fundamental subjective experiences (SEs)/PEs that exist in superimposed form in elementary particles and their interactions. Since SEs/PEs are superimposed, elementary particles are not specific to any SE/PE; they (and all inert matter) are carriers of SEs/PEs and hence appear as non-experiential material entities. Furthermore, our hypothesis is that matter and associated elemental PEs co-evolved and co-developed into neural nets and associated neural-net PEs (neural Darwinism), respectively. The signals related to neural PEs interact in a neural net, and neural-net PEs emerge from a random process of self-organization. The neural-net PEs are a set of SEs embedded in the neural net by a non-computational or non-algorithmic process. The non-specificity of elementary particles is transformed into the specificity of neural nets by neural Darwinism. The specificity of SEs emerges when feedforward and feedback signals interact in the neuropil and is dependent on wakefulness (i.e., activation), attention, re-entry between neural populations, working memory, stimuli above threshold, and neural-net PE signals. This PE-SE framework integrates reductive and non-reductive views, complements existing models, bridges the explanatory gaps, and minimizes the problem of causation.
Mechanical testing and finite element analysis of orthodontic teardrop loop.
Coimbra, Maria Elisa Rodrigues; Penedo, Norman Duque; de Gouvêa, Jayme Pereira; Elias, Carlos Nelson; de Souza Araújo, Mônica Tirre; Coelho, Paulo Guilherme
2008-02-01
Understanding how teeth move in response to mechanical loads is an important aspect of orthodontic treatment. Treatment planning should include consideration of the appliances that will meet the desired loading of the teeth to produce optimized treatment outcomes. The purpose of this study was to evaluate the use of computer simulation to predict the force and the torsion obtained after the activation of teardrop loops of 3 heights. Seventy-five retraction loops were divided into 3 groups according to height (6, 7, and 8 mm). The loops were subjected to tensile load through displacements of 0.5, 1.0, 1.5, and 2.0 mm, and the resulting forces and torques were recorded. The loops were designed in AutoCAD software (2005; Autodesk Systems, Alpharetta, GA), and finite element analysis was performed with Ansys software (version 7.0; Swanson Analysis System, Canonsburg, PA). Statistical analysis of the mechanical experiment results was performed with ANOVA and the Tukey post-hoc test (P < .01). The correlation test and the paired t test (P < .05) were used to compare the computer simulation with the mechanical experiment. The computer simulation accurately predicted the experimentally determined mechanical behavior of teardrop loops of different heights and should be considered an alternative for designing orthodontic appliances before treatment.
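The statistical comparison described above can be sketched briefly. The paired force readings below are invented for illustration and do not come from the study; only the test pattern (Pearson correlation plus paired t test) follows the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical paired readings for one loop height: forces (N) from
# the mechanical experiment and the finite element simulation at the
# same four activation displacements.
experiment = np.array([1.8, 3.5, 5.1, 6.6])
simulation = np.array([1.9, 3.4, 5.0, 6.8])

r, _ = stats.pearsonr(experiment, simulation)    # correlation test
t, p = stats.ttest_rel(experiment, simulation)   # paired t test
print(f"r = {r:.3f}, paired t p-value = {p:.3f}")
# A high r together with p > .05 would support agreement between methods.
```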
Advanced piloted aircraft flight control system design methodology. Volume 1: Knowledge base
NASA Technical Reports Server (NTRS)
Mcruer, Duane T.; Myers, Thomas T.
1988-01-01
The development of a comprehensive and eclectic methodology for the conceptual and preliminary design of flight control systems is presented and illustrated. The methodology is focused on the design stages starting with the layout of system requirements and ending when some viable competing system architectures (feedback control structures) are defined. The approach is centered on the human pilot and the aircraft as both the sources of, and the keys to the solution of, many flight control problems. The methodology relies heavily on computational procedures which are highly interactive with the design engineer. To maximize effectiveness, these techniques, as selected and modified to be used together in the methodology, form a cadre of computational tools specifically tailored for integrated flight control system preliminary design purposes. While theory and associated computational means are an important aspect of the design methodology, the lore, knowledge, and experience elements which guide and govern applications are critical features. This material is presented as summary tables, outlines, recipes, empirical data, lists, etc., which encapsulate a great deal of expert knowledge. Much of this is presented in topical knowledge summaries which are attached as Supplements. The composite of the supplements and the main body elements constitutes a first cut at a Mark 1 Knowledge Base for manned-aircraft flight control.
Sustaining Open Source Communities through Hackathons - An Example from the ASPECT Community
NASA Astrophysics Data System (ADS)
Heister, T.; Hwang, L.; Bangerth, W.; Kellogg, L. H.
2016-12-01
The ecosystem surrounding a successful scientific open source software package combines both social and technical aspects. Much thought has been given to the technology side of writing sustainable software for large infrastructure projects and software libraries, but less to building the human capacity to perpetuate scientific software used in computational modeling. One effective format for building capacity is regular multi-day hackathons. Scientific hackathons bring together a group of science domain users and scientific software contributors to make progress on a specific software package. Innovation comes through the chance to work with established and new collaborators. Especially in domain sciences with small communities, hackathons give geographically distributed scientists an opportunity to connect face-to-face. They foster lively discussions amongst scientists with different expertise, promote new collaborations, and increase transparency in both the technical and scientific aspects of code development. ASPECT is an open source, parallel, extensible finite element code for simulating thermal convection, which began development in 2011 under the Computational Infrastructure for Geodynamics. ASPECT hackathons over the past 3 years have grown the number of authors to >50, training new code maintainers in the process. Hackathons begin with leaders establishing project-specific conventions for development, demonstrating the workflow for code contributions, and reviewing relevant technical skills. Each hackathon expands the developer community: over 20 scientists add >6,000 lines of code during the >1 week event. Participants grow comfortable contributing to the repository, and over half continue to contribute afterwards. A high return rate of participants ensures continuity and stability of the group as well as mentoring for novice members. We hope to build other software communities on this model, but anticipate each will bring its own unique challenges.
TORO II simulations of induction heating in ferromagnetic materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adkins, D.R.; Gartling, D.K.; Kelley, J.B.
TORO II is a finite element computer program that is used in the simulation of electric and magnetic fields. This code, which was developed at Sandia National Laboratories, has been coupled with a finite element thermal code, COYOTE II, to predict temperature profiles in inductively heated parts. The development of an effective technique to account for the nonlinear behavior of the magnetic permeability in ferromagnetic parts is one of the more difficult aspects of solving induction heating problems. In the TORO II code, nonlinear, spatially varying magnetic permeability is approximated by an effective permeability on an element-by-element basis that provides the same energy deposition that is produced when the true permeability is used. This approximation has been found to give an accurate estimate of the volumetric heating distribution in the part, and predicted temperature distributions have been experimentally verified using a medium carbon steel and a 10 kW industrial induction heating unit. Work on the model was funded through a Cooperative Research and Development Agreement (CRADA) between the Department of Energy and General Motors' Delphi Saginaw Steering Systems.
Sustainability of transport structures - some aspects of the nonlinear reliability assessment
NASA Astrophysics Data System (ADS)
Pukl, Radomír; Sajdlová, Tereza; Strauss, Alfred; Lehký, David; Novák, Drahomír
2017-09-01
Efficient techniques for the nonlinear numerical analysis of concrete structures and advanced stochastic simulation methods have been combined to offer an advanced tool for assessing the realistic behaviour, failure, and safety of transport structures. The approach is based on randomization of the nonlinear finite element analysis of the structural models. Degradation aspects such as carbonation of concrete can be accounted for in order to predict the durability of the investigated structure and its sustainability. The results can serve as a rational basis for performance and sustainability assessment, based on advanced nonlinear computer analysis, of transport infrastructure such as bridges or tunnels. In the stochastic simulation, the input material parameters obtained from material tests, including their randomness and uncertainty, are represented as random variables or fields. Appropriate identification of material parameters is crucial for the virtual failure modelling of structures and structural elements. An inverse analysis using artificial neural networks and virtual stochastic simulations is applied to determine the fracture-mechanical parameters of the structural material and its numerical model. Structural response, reliability, and sustainability have been investigated on different types of transport structures made from various materials using the above-mentioned methodology and tools.
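A minimal sketch of the randomization step follows, using Latin hypercube sampling from SciPy to generate inputs for repeated nonlinear FE runs. The two parameters, their distributions, and the numeric values are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import qmc, norm

# Draw material-parameter samples for 30 stochastic FE analyses.
sampler = qmc.LatinHypercube(d=2, seed=42)
u = sampler.random(n=30)                 # 30 points in [0, 1)^2

f_c = norm(38.0, 3.0).ppf(u[:, 0])       # concrete strength (MPa), assumed
G_f = norm(150.0, 20.0).ppf(u[:, 1])     # fracture energy (N/m), assumed

for fc_i, gf_i in zip(f_c, G_f):
    # Each pair would parameterize one nonlinear finite element run;
    # the resulting response sample feeds the reliability assessment.
    pass
```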
Biological ageing and clinical consequences of modern technology.
Kyriazis, Marios
2017-08-01
The pace of technology is steadily increasing, and this has a widespread effect on all areas of health and society. When we interact with this technological environment, we are exposed to a wide variety of new stimuli and challenges, which may modulate the stress response and thus change the way we respond and adapt. In this Opinion paper I examine certain aspects of human-computer interaction with regard to health and ageing. There are practical, everyday effects, which also include social and cultural elements. I discuss how human evolution may be affected by this new environmental change (the hormetic immersion in a virtual/technological environment). Finally, I explore certain biological aspects which have direct relevance to the ageing human. By embracing new technologies and engaging with a techno-social ecosystem (which is no longer formed by several interacting species, but by just two main elements: humans and machines), we may be subjected to beneficial hormetic effects, which upregulate the stress response and modulate adaptation. This is likely to improve overall health as we age and, as I speculate here, may also result in the reduction of age-related dysfunction.
Finite element dynamic analysis of soft tissues using state-space model.
Iorga, Lucian N; Shan, Baoxiang; Pelegri, Assimina A
2009-04-01
A finite element (FE) model is employed to investigate the dynamic response of soft tissues under external excitations, particularly corresponding to the case of harmonic motion imaging. A solid 3D mixed 'u-p' element, S8P0, is implemented to capture the near-incompressibility inherent in soft tissues. Two important aspects of the structural modelling of these tissues are studied: the influence of viscous damping on the dynamic response and, following FE modelling, a state-space formulation developed to evaluate the efficiency of several order-reduction methods. It is illustrated that the order of the mathematical model can be significantly reduced while preserving the accuracy of the observed system dynamics. Thus, the reduced-order state-space representation of soft tissues for general dynamic analysis significantly reduces the computational cost and provides a unitary framework for the 'forward' simulation and 'inverse' estimation of soft tissues. Moreover, the results suggest that damping in soft tissue is significant, effectively cancelling the contribution of all but the first few vibration modes.
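One of the simpler order-reduction ideas in this family, modal truncation, can be sketched on a toy stiffness chain. The system, its size, and the retained mode count are illustrative; this is not the paper's tissue model or its specific reduction methods.

```python
import numpy as np

# Toy FE-like system: a chain of 50 unit masses with tridiagonal
# stiffness, standing in for an assembled (M, K) pair.
n = 50
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)

# Generalized eigenproblem K v = w^2 M v; with M = I the ordinary
# symmetric eigenproblem suffices.
w2, V = np.linalg.eigh(K)

r = 5                      # keep only the first r vibration modes
Vr = V[:, :r]
K_r = Vr.T @ K @ Vr        # r x r reduced stiffness
M_r = Vr.T @ M @ Vr        # r x r reduced mass
print(K_r.shape)           # (5, 5): 50 DOF reduced to 5
```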
Loke, Desmond; Skelton, Jonathan M; Chong, Tow-Chong; Elliott, Stephen R
2016-12-21
One of the requirements for achieving faster CMOS electronics is to mitigate the unacceptably large chip areas required to steer heat away from or, more recently, toward the critical nodes of state-of-the-art devices. Thermal-guiding (TG) structures can efficiently direct heat by "meta-materials" engineering; however, some key aspects of the behavior of these systems are not fully understood. Here, we demonstrate control of the thermal-diffusion properties of TG structures using nanometer-scale, CMOS-integrable, graphene-on-silica stacked materials through finite-element-method simulations. We show that it is possible to implement novel, controllable, thermally based Boolean-logic and spike-timing-dependent plasticity operations for advanced (neuromorphic) computing applications using such thermal-guide architectures.
Research on rolling element bearing fault diagnosis based on genetic algorithm matching pursuit
NASA Astrophysics Data System (ADS)
Rong, R. W.; Ming, T. F.
2017-12-01
To address the slow computation speed of matching pursuit, the algorithm is applied to rolling bearing fault diagnosis with improvements in two aspects: the construction of the dictionary and the search for atoms. Specifically, the Gabor function, which reflects time-frequency localization characteristics well, is used to construct the dictionary, and a genetic algorithm is used to accelerate the search. A time-frequency analysis method based on a genetic algorithm matching pursuit (GAMP) algorithm is proposed, and the setting of property parameters for improving the decomposition results is studied. Simulation and experimental results illustrate that the proposed method can effectively extract weak fault features of rolling bearings while noticeably increasing computation speed.
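A minimal matching-pursuit sketch with a small Gabor dictionary follows. The exhaustive inner search shown here is precisely the step the paper replaces with a genetic algorithm; the signal, dictionary grid, and all parameters are invented for illustration.

```python
import numpy as np

fs, N = 1000, 512
t = np.arange(N) / fs
# Synthetic "fault" transient: a damped burst of a 120 Hz tone.
signal = np.exp(-((t - 0.25) / 0.01)**2) * np.sin(2 * np.pi * 120 * t)

def gabor(u, s, f):
    # Unit-norm Gabor atom: Gaussian envelope at time u, width s,
    # modulated at frequency f.
    g = np.exp(-((t - u) / s)**2) * np.cos(2 * np.pi * f * t)
    return g / np.linalg.norm(g)

dictionary = [gabor(u, s, f)
              for u in np.linspace(0.05, 0.45, 9)
              for s in (0.005, 0.01, 0.02)
              for f in (80, 100, 120, 140)]

residual, atoms = signal.copy(), []
for _ in range(3):  # extract three atoms greedily
    corr = [abs(np.dot(residual, g)) for g in dictionary]
    best = int(np.argmax(corr))           # exhaustive search step
    coef = np.dot(residual, dictionary[best])
    residual = residual - coef * dictionary[best]
    atoms.append((best, coef))
print(atoms)
```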
Computational Modeling of Morphogenesis Regulated by Mechanical Feedback
Ramasubramanian, Ashok; Taber, Larry A.
2008-01-01
Mechanical forces cause changes in form during embryogenesis and likely play a role in regulating these changes. This paper explores the idea that changes in homeostatic tissue stress (target stress), possibly modulated by genes, drive some morphogenetic processes. Computational models are presented to illustrate how regional variations in target stress can cause a range of complex behaviors involving the bending of epithelia. These models include growth and cytoskeletal contraction regulated by stress-based mechanical feedback. All simulations were carried out using the commercial finite element code ABAQUS, with growth and contraction included by modifying the zero-stress state in the material constitutive relations. Results presented for bending of bilayered beams and invagination of cylindrical and spherical shells provide insight into some of the mechanical aspects that must be considered in studying morphogenetic mechanisms. PMID:17318485
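The target-stress feedback idea can be sketched with a one-dimensional toy: growth evolves the zero-stress state until the stress returns to its homeostatic target. The linear feedback law and all constants below are illustrative assumptions, not the paper's constitutive model.

```python
# Toy sketch of stress-based growth feedback in 1D.
k_g = 0.1          # growth-rate gain (assumed)
sigma_star = 1.0   # target (homeostatic) stress
E = 10.0           # elastic modulus of the layer
stretch_applied = 1.2

g = 1.0            # growth stretch: modifies the zero-stress state
for step in range(200):
    elastic_stretch = stretch_applied / g
    sigma = E * (elastic_stretch - 1.0)     # simple 1D stress law
    g += k_g * (sigma - sigma_star) * 0.01  # explicit Euler update

print(sigma, g)  # sigma relaxes toward the target stress sigma_star
```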
Understanding Slat Noise Sources
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.
2003-01-01
Model-scale aeroacoustic tests of large civil transports point to the leading-edge slat as a dominant high-lift noise source in the low- to mid-frequencies during aircraft approach and landing. Using generic multi-element high-lift models, complementary experimental and numerical tests were carefully planned and executed at NASA in order to isolate slat noise sources and the underlying noise generation mechanisms. In this paper, a brief overview of the supporting computational effort undertaken at NASA Langley Research Center is provided. Both tonal and broadband aspects of slat noise are discussed. Recent gains in predicting a slat's far-field acoustic noise, current shortcomings of numerical simulations, and other remaining open issues are presented. Finally, an example of the ever-expanding role of computational simulations in noise reduction studies is also given.
NASA Astrophysics Data System (ADS)
Jin, L.; Zoback, M. D.
2017-10-01
We formulate the problem of fully coupled transient fluid flow and quasi-static poroelasticity in arbitrarily fractured, deformable porous media saturated with a single-phase compressible fluid. The fractures we consider are hydraulically highly conductive, allowing discontinuous fluid flux across them; mechanically, they act as finite-thickness shear deformation zones prior to failure (i.e., nonslipping and nonpropagating), leading to "apparent discontinuity" in strain and stress across them. Local nonlinearity arising from pressure-dependent permeability of fractures is also included. Taking advantage of typically high aspect ratio of a fracture, we do not resolve transversal variations and instead assume uniform flow velocity and simple shear strain within each fracture, rendering the coupled problem numerically more tractable. Fractures are discretized as lower dimensional zero-thickness elements tangentially conforming to unstructured matrix elements. A hybrid-dimensional, equal-low-order, two-field mixed finite element method is developed, which is free from stability issues for a drained coupled system. The fully implicit backward Euler scheme is employed for advancing the fully coupled solution in time, and the Newton-Raphson scheme is implemented for linearization. We show that the fully discretized system retains a canonical form of a fracture-free poromechanical problem; the effect of fractures is translated to the modification of some existing terms as well as the addition of several terms to the capacity, conductivity, and stiffness matrices therefore allowing the development of independent subroutines for treating fractures within a standard computational framework. Our computational model provides more realistic inputs for some fracture-dominated poromechanical problems like fluid-induced seismicity.
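The time-integration pattern named above, fully implicit backward Euler with Newton-Raphson linearization, is sketched below on a small stand-in nonlinear system. The paper's coupled poromechanical residual and consistent tangent are problem-specific; here an arbitrary two-variable ODE plays their role.

```python
import numpy as np

def f(y):
    # Stand-in nonlinear right-hand side dy/dt = f(y).
    return np.array([-y[0] * y[1], y[0] - y[1]**2])

def jac(y):
    # Analytical Jacobian of f, needed for the Newton linearization.
    return np.array([[-y[1], -y[0]],
                     [1.0, -2.0 * y[1]]])

y, dt = np.array([1.0, 0.5]), 0.1
for step in range(10):
    y_new = y.copy()
    for it in range(20):                   # Newton-Raphson iterations
        R = y_new - y - dt * f(y_new)      # backward Euler residual
        J = np.eye(2) - dt * jac(y_new)    # consistent tangent
        dy = np.linalg.solve(J, -R)
        y_new += dy
        if np.linalg.norm(dy) < 1e-12:
            break
    y = y_new
print(y)
```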
On Undecidability Aspects of Resilient Computations and Implications to Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S
2014-01-01
Future Exascale computing systems, with a large number of processors, memory elements, and interconnection links, are expected to experience multiple, complex faults which affect both applications and operating-runtime systems. A variety of algorithms, frameworks, and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in the presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.
Exercises in molecular computing.
Stojanovic, Milan N; Stefanovic, Darko; Rudchenko, Sergei
2014-06-17
CONSPECTUS: The successes of electronic digital logic have transformed every aspect of human life over the last half-century. The word "computer" now signifies a ubiquitous electronic device, rather than a human occupation. Yet evidently humans, large assemblies of molecules, can compute, and it has been a thrilling challenge to develop smaller, simpler, synthetic assemblies of molecules that can do useful computation. When we say that molecules compute, what we usually mean is that such molecules respond to certain inputs, for example, the presence or absence of other molecules, in a precisely defined but potentially complex fashion. The simplest way for a chemist to think about computing molecules is as sensors that can integrate the presence or absence of multiple analytes into a change in a single reporting property. Here we review several forms of molecular computing developed in our laboratories. When we began our work, combinatorial approaches to using DNA for computing were used to search for solutions to constraint satisfaction problems. We chose to work instead on logic circuits, building bottom-up from units based on catalytic nucleic acids, focusing on DNA secondary structures in the design of individual circuit elements, and reserving the combinatorial opportunities of DNA for the representation of multiple signals propagating in a large circuit. Such circuit design directly corresponds to the intuition about sensors transforming the detection of analytes into reporting properties. While this approach was unusual at the time, it has been adopted since by other groups working on biomolecular computing with different nucleic acid chemistries. We created logic gates by modularly combining deoxyribozymes (DNA-based enzymes cleaving or combining other oligonucleotides), in the role of reporting elements, with stem-loops as input detection elements. For instance, a deoxyribozyme that normally exhibits an oligonucleotide substrate recognition region is modified such that a stem-loop closes onto the substrate recognition region, making it unavailable for the substrate and thus rendering the deoxyribozyme inactive. But a conformational change can then be induced by an input oligonucleotide, complementary to the loop, to open the stem, allow the substrate to bind, and allow its cleavage to proceed, which is eventually reported via fluorescence. In this Account, several designs of this form are reviewed, along with their application in the construction of large circuits that exhibited complex logical and temporal relationships between the inputs and the outputs. Intelligent (in the sense of being capable of nontrivial information processing) theranostic (therapy + diagnostic) applications have always been the ultimate motivation for developing computing (i.e., decision-making) circuits, and we review our experiments with logic-gate elements bound to cell surfaces that evaluate the proximal presence of multiple markers on lymphocytes.
NASA Technical Reports Server (NTRS)
Witkop, D. L.; Dale, B. J.; Gellin, S.
1991-01-01
The programming aspects of SFENES are described in the User's Manual. The information presented is provided for the installation programmer and is sufficient to fully describe the general program logic and required peripheral storage. All element-generated data are stored externally to reduce the required memory allocation. A separate section is devoted to the description of these files, thereby permitting the optimization of input/output (I/O) time through efficient buffer descriptions. Individual subroutine descriptions are presented along with the complete Fortran source listings. A short description of the major control, computation, and I/O phases is included to aid in obtaining an overall familiarity with the program's components. Finally, a discussion of the suggested overlay structure, which allows the program to execute with a reasonable amount of memory allocation, is presented.
The bacteriorhodopsin model membrane system as a prototype molecular computing element.
Hong, F T
1986-01-01
The quest for more sophisticated integrated circuits to overcome the limitation of currently available silicon integrated circuits has led to the proposal of using biological molecules as computational elements by computer scientists and engineers. While the theoretical aspect of this possibility has been pursued by computer scientists, the research and development of experimental prototypes have not been pursued with an equal intensity. In this survey, we make an attempt to examine model membrane systems that incorporate the protein pigment bacteriorhodopsin which is found in Halobacterium halobium. This system was chosen for several reasons. The pigment/membrane system is sufficiently simple and stable for rigorous quantitative study, yet at the same time sufficiently complex in molecular structure to permit alteration of this structure in an attempt to manipulate the photosignal. Several methods of forming the pigment/membrane assembly are described and the potential application to biochip design is discussed. Experimental data using these membranes and measured by a tunable voltage clamp method are presented along with a theoretical analysis based on the Gouy-Chapman diffuse double layer theory to illustrate the usefulness of this approach. It is shown that detailed layouts of the pigment/membrane assembly as well as external loading conditions can modify the time course of the photosignal in a predictable manner. Some problems that may arise in the actual implementation and manufacturing, as well as the use of existing technology in protein chemistry, immunology, and recombinant DNA technology are discussed.
Uncertainty propagation of p-boxes using sparse polynomial chaos expansions
NASA Astrophysics Data System (ADS)
Schöbi, Roland; Sudret, Bruno
2017-06-01
In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.
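As a rough sketch of what p-box propagation means, the double-loop Monte Carlo below bounds an output quantile when an input mean is only known to lie in an interval. The paper's contribution is to replace the expensive inner-loop model evaluations with sparse polynomial chaos surrogates, which this toy deliberately omits; the model, interval, and distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Cheap stand-in for an expensive finite element computation.
    return x**2 + 1.0

q95 = []
# Outer (epistemic) loop: the input mean is only known to lie in an
# interval [4, 6], which is what makes the input a probability-box.
for mu in np.linspace(4.0, 6.0, 21):
    y = model(rng.normal(mu, 0.5, 2000))  # inner (aleatory) sampling
    q95.append(np.quantile(y, 0.95))

# The envelope over the epistemic loop bounds the output 95% quantile.
print(min(q95), max(q95))
```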
Gonzales, Matthew J.; Sturgeon, Gregory; Krishnamurthy, Adarsh; Hake, Johan; Jonas, René; Stark, Paul; Rappel, Wouter-Jan; Narayan, Sanjiv M.; Zhang, Yongjie; Segars, W. Paul; McCulloch, Andrew D.
2013-01-01
High-order cubic Hermite finite elements have been valuable in modeling cardiac geometry, fiber orientations, biomechanics, and electrophysiology, but their use in solving three-dimensional problems has been limited to ventricular models with simple topologies. Here, we utilized a subdivision surface scheme and derived a generalization of the “local-to-global” derivative mapping scheme of cubic Hermite finite elements to construct bicubic and tricubic Hermite models of the human atria with extraordinary vertices from computed tomography images of a patient with atrial fibrillation. To an accuracy of 0.6 millimeters, we were able to capture the left atrial geometry with only 142 bicubic Hermite finite elements, and the right atrial geometry with only 90. The left and right atrial bicubic Hermite meshes were G1 continuous everywhere except in the one-neighborhood of extraordinary vertices, where the mean dot products of normals at adjacent elements were 0.928 and 0.925. We also constructed two biatrial tricubic Hermite models and defined fiber orientation fields in agreement with diagrammatic data from the literature using only 42 angle parameters. The meshes all have good quality metrics, uniform element sizes, and elements with aspect ratios near unity, and are shared with the public. These new methods will allow for more compact and efficient patient-specific models of human atrial and whole heart physiology. PMID:23602918
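For readers unfamiliar with Hermite elements, the sketch below evaluates the four 1D cubic Hermite basis functions whose tensor products underlie the bicubic and tricubic elements described above. The nodal values and derivatives used in the interpolation are arbitrary illustration, not data from the atrial models.

```python
import numpy as np

def hermite_basis(xi):
    # The standard 1D cubic Hermite basis on [0, 1]: two functions
    # interpolate nodal values, two interpolate nodal derivatives.
    h00 = 2 * xi**3 - 3 * xi**2 + 1    # value at node 0
    h10 = xi**3 - 2 * xi**2 + xi       # derivative at node 0
    h01 = -2 * xi**3 + 3 * xi**2       # value at node 1
    h11 = xi**3 - xi**2                # derivative at node 1
    return h00, h10, h01, h11

# Interpolate a scalar field from nodal values and derivatives.
xi = np.linspace(0.0, 1.0, 5)
h00, h10, h01, h11 = hermite_basis(xi)
f = h00 * 1.0 + h10 * 0.5 + h01 * 2.0 + h11 * (-0.3)
print(f)
```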
Deformation and breakup of liquid-liquid threads, jets, and drops
NASA Astrophysics Data System (ADS)
Doshi, Pankaj
The formation and breakup of two-fluid jets and drops find application in various industrially important processes such as microencapsulation, inkjet printing, dispersion and emulsion formation, and microfluidics. Two important aspects of these problems are studied in this thesis. The first regards the dynamics of a two-fluid jet issuing from a concentric nozzle and breaking into multiple liquid drops. The second concerns the dynamics of liquid-liquid interface rupture. Highly robust and accurate numerical algorithms based on the Galerkin finite element method (G/FEM) and an elliptic mesh generation technique are developed. The most important results of this research are the prediction of compound drop formation and of the volume partitioning between primary and satellite drops, which are of critical importance for microencapsulation technology. Another equally important result is the computational and experimental demonstration of self-similar behavior in the rupture of a liquid-liquid interface. The final focus is the study of the pinch-off dynamics of generalized-Newtonian fluids with deformation-rate-dependent rheology using asymptotic analysis and numerical computation. A significant result is the first-ever prediction of self-similar pinch-off of liquid threads of generalized-Newtonian fluids.
Probing coherence aspects of adiabatic quantum computation and control.
Goswami, Debabrata
2007-09-28
Quantum interference between multiple excitation pathways can be used to cancel the couplings to the unwanted, nonradiative channels resulting in robustly controlling decoherence through adiabatic coherent control approaches. We propose a useful quantification of the two-level character in a multilevel system by considering the evolution of the coherent character in the quantum system as represented by the off-diagonal density matrix elements, which switches from real to imaginary as the excitation process changes from being resonant to completely adiabatic. Such counterintuitive results can be explained in terms of continuous population exchange in comparison to no population exchange under the adiabatic condition.
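A toy two-level computation in this spirit is sketched below: a state is evolved under a rotating-frame Rabi-type Hamiltonian, and the real and imaginary parts of the off-diagonal density matrix element are inspected for resonant versus strongly detuned driving. The Hamiltonian form, phase conventions, and parameters are assumptions for illustration only, not the paper's model.

```python
import numpy as np
from scipy.linalg import expm

def evolve(delta, omega=1.0, t_end=2.0):
    # Rotating-frame two-level Hamiltonian: Rabi coupling omega,
    # detuning delta (hbar = 1 throughout).
    H = np.array([[0.0, omega / 2], [omega / 2, delta]], dtype=complex)
    U = expm(-1j * H * t_end)                    # unitary propagator
    c = U @ np.array([1.0, 0.0], dtype=complex)  # start in ground state
    return c[0] * np.conj(c[1])                  # off-diagonal rho_01

for delta in (0.0, 20.0):  # resonant vs strongly detuned drive
    rho01 = evolve(delta)
    print(f"delta={delta:>5}: Re={rho01.real:+.4f}, Im={rho01.imag:+.4f}")
```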
Li, Dongsong; Liu, Jianguo; Li, Shuqiang; Fan, Honghui; Guan, Jikui
2008-02-01
In the present study, a three-dimensional finite element model of the human pelvis was reconstructed, and then, under different acetabular component positions (abduction angles ranging from 30 degrees to 70 degrees and anteversion angles from 5 degrees to 30 degrees), the load distribution around the acetabulum was evaluated with a computer biomechanical analysis program (SolidWorks). From the obtained load distributions, the most even and reasonable range was selected; the safe range for acetabular component implantation could thereby be validated from a biomechanical standpoint.
Spatial-Operator Algebra For Flexible-Link Manipulators
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Rodriguez, Guillermo
1994-01-01
A method of computing the dynamics of multiple-flexible-link robotic manipulators is based on the spatial-operator algebra originally applied to rigid-link manipulators. Aspects of the spatial-operator-algebra approach were described in several previous articles in NASA Tech Briefs, most recently "Robot Control Based on Spatial-Operator Algebra" (NPO-17918). In the extension of the spatial-operator algebra to manipulators with flexible links, each link is represented by a finite-element model: the mass of a flexible link is apportioned among smaller, lumped-mass rigid bodies, and the coupling of motions is expressed in terms of vibrational modes. This leads to an operator expression for the modal-mass matrix of each link.
Attenuating mass concrete effects in drilled shafts.
DOT National Transportation Integrated Search
2009-09-01
Drilled shafts are large-diameter, cast-in-place concrete foundation elements that until recently were not viewed with the same scrutiny as other massive concrete elements when considering mass concrete aspects. This study addressed three aspects of t...
Update to Computational Aspects of Nitrogen-Rich HEDMs
2016-04-01
ARL-TR-7656 ● APR 2016 ● US Army Research Laboratory. Update to "Computational Aspects of Nitrogen-Rich HEDMs" by Betsy M Rice, Edward FC Byrd, and William D Mattson, Weapons and Materials Research Directorate.
Towards an orientation-distribution-based multi-scale approach for remodelling biological tissues.
Menzel, A; Harrysson, M; Ristinmaa, M
2008-10-01
The mechanical behaviour of soft biological tissues is governed by phenomena occurring on different scales of observation. From the computational modelling point of view, a vital aspect is the appropriate incorporation of micromechanical effects into macroscopic constitutive equations. In this work, particular emphasis is placed on the simulation of soft fibrous tissues, with the orientation of the underlying fibres determined by distribution functions. A straightforward but convenient Taylor-type homogenisation approach links the micro- or rather meso-level of the fibres to the overall macro-level and makes it possible to reflect macroscopically orthotropic response. As a key aspect of this work, evolution equations for the fibre orientations are accounted for, so that physiological effects like turnover or rather remodelling are captured. Concerning numerical applications, the derived set of equations can be embedded into a nonlinear finite element context, so that first elementary simulations are finally addressed.
Measurements of noise produced by flow past lifting surfaces
NASA Technical Reports Server (NTRS)
Kendall, J. M.
1978-01-01
Wind tunnel studies have been conducted to determine the specific locations of aerodynamic noise production within the flow field about various lifting-surface configurations. The models tested included low aspect ratio shapes intended to represent aircraft flaps, a finite aspect ratio NACA 0012 wing, and a multi-element wing section consisting of a main section, a leading edge flap, and dual trailing edge flaps. Turbulence was induced on the models by surface roughness. Lift and drag were measured for the flap models. Hot-wire anemometry was used for study of the flap-model vortex roll-up. Apparent noise source distributions were measured by use of a directional microphone system, located outside the tunnel, which was scanned about the flow region to be analyzed under computer control. These distributions exhibited a diversity of pattern, suggesting that several flow processes are important to lifting-surface noise production. Speculation concerning these processes is offered.
COPING WITH CONTAMINATED SEDIMENTS AND SOILS IN THE URBAN ENVIRONMENT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
JONES,K.W.; VAN DER LELIE,D.; MCGUIGAN,M.
2004-05-25
Soils and sediments contaminated with toxic organic and inorganic compounds harmful to the environment and to human health are common in the urban environment. We report here on aspects of a program being carried out in the New York/New Jersey Port region to develop methods for processing dredged material from the Port to make products that are safe for introduction to commercial markets. We discuss some of the results of the program in Computational Environmental Science, Laboratory Environmental Science, and Applied Environmental Science and indicate some possible directions for future work. Overall, the program elements integrate the scientific and engineering aspects with regulatory, commercial, urban planning, local government, and community group interests. Well-developed connections between these components are critical to the ultimate success of efforts to cope with the problems caused by contaminated urban soils and sediments.
14 CFR 1214.801 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...
Aging of the midface bony elements: a three-dimensional computed tomographic study.
Shaw, Robert B; Kahn, David M
2007-02-01
The face loses volume as the soft-tissue structures age. In this study, the authors demonstrate how specific bony aspects of the face change with age in both men and women and what impact this may have on the techniques used in facial cosmetic surgery. Facial bone computed tomographic scans were obtained from 60 Caucasian patients (30 women and 30 men). The study population consisted of 10 male and 10 female subjects in each of three age categories. Each computed tomographic scan underwent three-dimensional reconstruction with volume rendering, and the following measurements were obtained: glabellar angle (maximal prominence of the glabella to the nasofrontal suture), pyriform angle (nasal bone to the lateral inferior pyriform aperture), and maxillary angle (superior to inferior maxilla at the articulation of the inferior maxillary wing and the alveolar arch). The pyriform aperture area was also obtained. The t test was used to identify any trends between age groups. The glabellar and maxillary angles in both male and female subjects showed a significant decrease with increasing age. The pyriform angle did not show a significant change between age groups for either sex. There was a significant increase in pyriform aperture area from the young to the middle age group for both sexes. These results suggest that the bony elements of the midface change dramatically with age and, coupled with soft-tissue changes, lead to the appearance of the aged face.
Mass-corrections for the conservative coupling of flow and transport on collocated meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waluga, Christian, E-mail: waluga@ma.tum.de; Wohlmuth, Barbara; Rüde, Ulrich
2016-01-15
Buoyancy-driven flow models demand a careful treatment of the mass-balance equation to avoid spurious source and sink terms in the non-linear coupling between flow and transport. In the context of finite elements, it is therefore commonly proposed to employ sufficiently rich pressure spaces, containing piecewise constant shape functions, to obtain local or even strong mass-conservation. In three-dimensional computations, this usually requires nonconforming approaches, special meshes, or higher order velocities, which make these schemes prohibitively expensive for some applications and complicate the implementation into legacy code. In this paper, we therefore propose a lean and conservatively coupled scheme based on standard stabilized linear equal-order finite elements for the Stokes part and vertex-centered finite volumes for the energy equation. We show that in a weak mass-balance it is possible to recover exact conservation properties by a local flux-correction which can be computed efficiently on the control volume boundaries of the transport mesh. We discuss implementation aspects and demonstrate the effectiveness of the flux-correction by different two- and three-dimensional examples which are motivated by geophysical applications.
Large-N kinetic theory for highly occupied systems
NASA Astrophysics Data System (ADS)
Walz, R.; Boguslavski, K.; Berges, J.
2018-06-01
We consider an effective kinetic description for quantum many-body systems, which is not based on a weak-coupling or diluteness expansion. Instead, it employs an expansion in the number of field components N of the underlying scalar quantum field theory. Extending previous studies, we demonstrate that the large-N kinetic theory at next-to-leading order is able to describe important aspects of highly occupied systems, which are beyond standard perturbative kinetic approaches. We analyze the underlying quasiparticle dynamics by computing the effective scattering matrix elements analytically and solve numerically the large-N kinetic equation for a highly occupied system far from equilibrium. This allows us to compute the universal scaling form of the distribution function at an infrared nonthermal fixed point within a kinetic description, and we compare to existing lattice field theory simulation results.
Foley, Kimberley A; Feldman-Stewart, Deb; Groome, Patti A; Brundage, Michael D; McArdle, Siobhan; Wallace, David; Peng, Yingwei; Mackillop, William J
2016-02-01
The overall quality of patient care is a function of the quality of both its technical and its nontechnical components. The purpose of this study was to identify the elements of nontechnical (personal) care that are most important to patients undergoing radiation therapy for prostate cancer. We reviewed the literature and interviewed patients and health professionals to identify elements of personal care pertinent to patients undergoing radiation therapy for prostate cancer. We identified 143 individual elements relating to 10 aspects of personal care. Patients undergoing radical radiation therapy for prostate cancer completed a self-administered questionnaire in which they rated the importance of each element. The overall importance of each element was measured by the percentage of respondents who rated it as "very important." The importance of each aspect of personal care was measured by the mean importance of its elements. One hundred eight patients completed the questionnaire. The percentage of patients who rated each element "very important" ranged from 7% to 95% (mean 61%). The mean importance rating of the elements of each aspect of care varied significantly: "perceived competence of caregivers," 80%; "empathy and respectfulness of caregivers," 67%; "adequacy of information sharing," 67%; "patient centeredness," 59%; "accessibility of caregivers," 57%; "continuity of care," 51%; "privacy," 51%; "convenience," 45%; "comprehensiveness of services," 44%; and "treatment environment," 30% (P<.0001). Neither age nor education was associated with importance ratings, but the patient's health status was associated with the rating of some elements of care. Many different elements of personal care are important to patients undergoing radiation therapy for prostate cancer, but the 3 aspects of care that most patients believe are most important are these: the perceived competence of their caregivers, the empathy and respectfulness of their caregivers, and the adequacy of information sharing. Copyright © 2016 Elsevier Inc. All rights reserved.
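As a hedged illustration of the two measures defined above (element importance as the share of "very important" ratings; aspect importance as the mean over its elements), here is a minimal sketch with hypothetical ratings, not the study's data:

```python
# Hypothetical responses: element -> list of 1 ("very important") / 0 ratings.
ratings = {
    "explains side effects": [1, 1, 0, 1],
    "answers questions fully": [1, 1, 1, 1],
}
# Hypothetical grouping of elements into an aspect of personal care.
aspects = {"adequacy of information sharing":
           ["explains side effects", "answers questions fully"]}

# Element importance: percentage of respondents rating it "very important".
element_importance = {e: 100.0 * sum(r) / len(r) for e, r in ratings.items()}
# Aspect importance: mean importance of the aspect's elements.
aspect_importance = {a: sum(element_importance[e] for e in els) / len(els)
                     for a, els in aspects.items()}
print(element_importance, aspect_importance)
```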
SURPHEX (tm): New dry photopolymers for replication of surface relief diffractive optics
NASA Technical Reports Server (NTRS)
Shvartsman, Felix P.
1993-01-01
High efficiency, deep-groove, surface relief Diffractive Optical Elements (DOEs) with various optical functions can be recorded in a photoresist using conventional interferometric holographic and computer-generated photolithographic recording techniques. While photoresist recording media are satisfactory for recording individual surface relief DOEs, a reliable and precise method is needed to replicate these diffractive microstructures so as to maintain the high aspect ratio in each replicated DOE. The term 'high aspect ratio' means that the depth of a groove is substantially greater, i.e. 2, 3, or more times greater, than the width of the groove. A new family of dry photopolymers, SURPHEX, was developed recently at Du Pont to replicate such highly efficient, deep-groove DOEs. SURPHEX photopolymers are being utilized in Du Pont's proprietary Dry Photopolymer Embossing (DPE) technology to replicate with a very high degree of precision almost any type of surface relief DOE. Surface relief microstructures with a width/depth aspect ratio of 1:20 (0.1 micron/2.0 micron) were faithfully replicated by DPE technology. Several types of plastic and glass/quartz optical substrates can be used for economical replication of DOEs.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1982-01-01
The computational methods used to predict and optimize the thermal structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1983-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
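To give a feel for the kind of critical-time-step estimate mentioned above, the sketch below uses the common textbook bound for explicit heat conduction, dt <= l^2 / (2*alpha) with thermal diffusivity alpha = k/(rho*c) and characteristic element length l. This generic bound and the material values are assumptions for illustration, not necessarily the quadrilateral-element formula derived in the paper:

```python
# Hedged sketch: generic element-level critical time step for explicit
# transient heat conduction, dt_crit ~ l^2 / (2 * alpha), alpha = k / (rho * c).

def critical_time_step(lengths, k, rho, c):
    """Return the governing (smallest) critical time step over all elements."""
    alpha = k / (rho * c)                       # thermal diffusivity, m^2/s
    return min(l ** 2 / (2.0 * alpha) for l in lengths)

# Example: steel-like properties, element sizes between 1 mm and 5 mm.
print(critical_time_step([1e-3, 2e-3, 5e-3], k=45.0, rho=7850.0, c=470.0))
```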
Biomolecular electrostatics and solvation: a computational perspective
Ren, Pengyu; Chun, Jaehun; Thomas, Dennis G.; Schnieders, Michael J.; Marucho, Marcelo; Zhang, Jiajing; Baker, Nathan A.
2012-01-01
An understanding of molecular interactions is essential for insight into biological systems at the molecular scale. Among the various components of molecular interactions, electrostatics are of special importance because of their long-range nature and their influence on polar or charged molecules, including water, aqueous ions, proteins, nucleic acids, carbohydrates, and membrane lipids. In particular, robust models of electrostatic interactions are essential for understanding the solvation properties of biomolecules and the effects of solvation upon biomolecular folding, binding, enzyme catalysis, and dynamics. Electrostatics, therefore, are of central importance to understanding biomolecular structure and modeling interactions within and among biological molecules. This review discusses the solvation of biomolecules with a computational biophysics view towards describing the phenomenon. While our main focus lies on the computational aspect of the models, we provide an overview of the basic elements of biomolecular solvation (e.g., solvent structure, polarization, ion binding, and nonpolar behavior) in order to provide a background to understand the different types of solvation models. PMID:23217364
Biomolecular electrostatics and solvation: a computational perspective.
Ren, Pengyu; Chun, Jaehun; Thomas, Dennis G; Schnieders, Michael J; Marucho, Marcelo; Zhang, Jiajing; Baker, Nathan A
2012-11-01
An understanding of molecular interactions is essential for insight into biological systems at the molecular scale. Among the various components of molecular interactions, electrostatics are of special importance because of their long-range nature and their influence on polar or charged molecules, including water, aqueous ions, proteins, nucleic acids, carbohydrates, and membrane lipids. In particular, robust models of electrostatic interactions are essential for understanding the solvation properties of biomolecules and the effects of solvation upon biomolecular folding, binding, enzyme catalysis, and dynamics. Electrostatics, therefore, are of central importance to understanding biomolecular structure and modeling interactions within and among biological molecules. This review discusses the solvation of biomolecules with a computational biophysics view toward describing the phenomenon. While our main focus lies on the computational aspect of the models, we provide an overview of the basic elements of biomolecular solvation (e.g. solvent structure, polarization, ion binding, and non-polar behavior) in order to provide a background to understand the different types of solvation models.
Image Processing Occupancy Sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Image Processing Occupancy Sensor, or IPOS, is a novel sensor technology developed at the National Renewable Energy Laboratory (NREL). The sensor is based on low-cost embedded microprocessors widely used by the smartphone industry and leverages mature open-source computer vision software libraries. Compared to traditional passive infrared and ultrasonic motion sensors currently used for occupancy detection, IPOS has shown the potential for improved accuracy and a richer set of feedback signals for occupant-optimized lighting, daylighting, temperature setback, ventilation control, and other occupancy- and location-based uses. Unlike traditional passive infrared (PIR) or ultrasonic occupancy sensors, which infer occupancy based only on motion, IPOS uses digital image-based analysis to detect and classify various aspects of occupancy, including the presence of occupants regardless of motion, their number, location, and activity levels, as well as the illuminance properties of the monitored space. The IPOS software leverages the recent availability of low-cost embedded computing platforms, computer vision software libraries, and camera elements.
Nonlinear Finite Element Analysis of Shells with Large Aspect Ratio
NASA Technical Reports Server (NTRS)
Chang, T. Y.; Sawamiphakdi, K.
1984-01-01
A higher order degenerated shell element with nine nodes was selected for large deformation and post-buckling analysis of thick or thin shells. Elastic-plastic material properties are also included. The post-buckling analysis algorithm is given. Using a square plate, it was demonstrated that the nine-node element does not exhibit the shear locking effect even if its aspect ratio is increased to the order of 10^8. Two sample problems are given to illustrate the analysis capability of the shell element.
NASA Astrophysics Data System (ADS)
He, Y.; Puckett, E. G.; Billen, M. I.; Kellogg, L. H.
2016-12-01
For a convection-dominated system, like convection in the Earth's mantle, accurate modeling of the temperature field in terms of the interaction between convective and diffusive processes is one of the most common numerical challenges. In the geodynamics community, using the Finite Element Method (FEM) with artificial entropy viscosity is a popular approach to resolve this difficulty, but it introduces numerical diffusion. The extra artificial viscosity added into the temperature system will not only oversmooth the temperature field where the convective process dominates, but also change the physical properties by increasing the local material conductivity, which will eventually change the local conservation of energy. Accurate modeling of temperature is especially important in the mantle, where material properties are strongly dependent on temperature. In subduction zones, for example, the rheology of the cold sinking slab depends nonlinearly on the temperature, and physical processes such as slab detachment, rollback, and melting are all sensitively dependent on temperature and rheology. Therefore methods that overly smooth the temperature may inaccurately represent the physical processes governing subduction, lithospheric instabilities, plume generation and other aspects of mantle convection. Here we present a method for modeling the temperature field in mantle dynamics simulations using a new solver implemented in the ASPECT software. The new solver for the temperature equation uses a Discontinuous Galerkin (DG) approach, which combines features of both finite element and finite volume methods, and is particularly suitable for problems satisfying a conservation law and whose solution varies strongly in places. Furthermore, we have applied a post-processing technique to ensure that the solution satisfies a local discrete maximum principle, in order to eliminate local overshoots and undershoots in the temperature. To demonstrate the capabilities of this new method we present benchmark results (e.g., the falling sphere) and a simple subduction model with a kinematic surface boundary condition. To evaluate the trade-offs in computational speed and solution accuracy we present results for the same benchmarks using the Finite Element entropy viscosity method available in ASPECT.
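The paper's bound-preserving post-processing is specific to the DG discretization in ASPECT; the toy 1-D sketch below only conveys the general idea of clamping new cell values to the local bounds of the previous solution. A production limiter would also preserve cell averages, which this sketch deliberately omits:

```python
import numpy as np

def limit_to_local_bounds(u_new, u_old):
    """Clamp each new cell value to the min/max of the old solution in a
    3-cell neighbourhood, removing overshoots and undershoots (toy version)."""
    v = u_new.copy()
    n = len(u_new)
    for i in range(n):
        window = u_old[max(i - 1, 0):min(i + 2, n)]   # local bounds
        v[i] = min(max(v[i], window.min()), window.max())
    return v

u_old = np.array([0.0, 0.0, 1.0, 1.0])
u_new = np.array([0.0, -0.1, 1.2, 1.0])       # a transport step created wiggles
print(limit_to_local_bounds(u_new, u_old))    # -> [0. 0. 1. 1.]
```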
Element-topology-independent preconditioners for parallel finite element computations
NASA Technical Reports Server (NTRS)
Park, K. C.; Alexander, Scott
1992-01-01
A family of preconditioners for the solution of finite element equations are presented, which are element-topology independent and thus can be applicable to element order-free parallel computations. A key feature of the present preconditioners is the repeated use of element connectivity matrices and their left and right inverses. The properties and performance of the present preconditioners are demonstrated via beam and two-dimensional finite element matrices for implicit time integration computations.
NASA Technical Reports Server (NTRS)
Loftin, Karin C.; Ly, Bebe; Webster, Laurie; Verlander, James; Taylor, Gerald R.; Riley, Gary; Culbert, Chris
1992-01-01
One of NASA's goals for long duration space flight is to maintain acceptable levels of crew health, safety, and performance. One way of meeting this goal is through BRAIN, an integrated network of both human and computer elements. BRAIN will function as an advisor to mission managers by assessing the risk of inflight biomedical problems and recommending appropriate countermeasures. Described here is a joint effort among various NASA elements to develop BRAIN and the Infectious Disease Risk Assessment (IDRA) prototype. The implementation of this effort addresses the technological aspects of knowledge acquisition, integration of IDRA components, the use of expert systems to automate the biomedical prediction process, development of a user-friendly interface, and integration of the IDRA and ExerCISys systems. Because the C language, CLIPS, and the X-Window System are portable and easily integrated, they were chosen as the tools for the initial IDRA prototype.
NASA Technical Reports Server (NTRS)
Kaufman, A.; Laflen, J. H.; Lindholm, U. S.
1985-01-01
Unified constitutive material models were developed for structural analyses of aircraft gas turbine engine components with particular application to isotropic materials used for high-pressure stage turbine blades and vanes. Forms or combinations of models independently proposed by Bodner and Walker were considered. These theories combine time-dependent and time-independent aspects of inelasticity into a continuous spectrum of behavior. This is in sharp contrast to previous classical approaches that partition inelastic strain into uncoupled plastic and creep components. Predicted stress-strain responses from these models were evaluated against monotonic and cyclic test results for uniaxial specimens of two cast nickel-base alloys, B1900+Hf and Rene' 80. Previously obtained tension-torsion test results for Hastelloy X alloy were used to evaluate multiaxial stress-strain cycle predictions. The unified models, as well as appropriate algorithms for integrating the constitutive equations, were implemented in finite-element computer codes.
Mital, A
1999-01-01
Manual handling of materials continues to be a hazardous activity, leading to a very significant number of severe overexertion injuries. Designing jobs that are within the physical capabilities of workers is one approach ergonomists have adopted to redress this problem. As a result, several job design procedures have been developed over the years. However, these procedures are limited to designing or evaluating only pure lifting jobs or only the lifting aspect of a materials handling job. This paper describes a general procedure that may be used to design or analyse materials handling jobs that involve several different kinds of activities (e.g., lifting, lowering, carrying, and pushing). The job design/analysis procedure utilizes an elemental approach (breaking the job into elements) and relies on databases provided in A Guide to Manual Materials Handling to compute associated risk factors. The use of the procedure is demonstrated with the help of two case studies.
NASA Astrophysics Data System (ADS)
Gassmöller, Rene; Bangerth, Wolfgang
2016-04-01
Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility of the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle-transfer between parallel subdomains utilizing existing communication patterns from the finite element mesh, and the use of established parallel output algorithms like the HDF5 library. Finally, we show some relevant particle application cases, compare our implementation to a modern advection-field approach, and demonstrate under which conditions which method is more efficient. We implemented the presented methods in ASPECT (aspect.dealii.org), a freely available open-source community code for geodynamic simulations. The structure of the particle code is highly modular, and segregated from the PDE solver, and can thus be easily transferred to other programs, or adapted for various application cases.
Kavlock, Robert; Dix, David
2010-02-01
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the Toxicity of Chemicals (U.S. EPA, 2009a). Key intramural projects of the CTRP include digitizing legacy toxicity testing information in the toxicity reference database (ToxRefDB), predicting toxicity (ToxCast) and exposure (ExpoCast), and creating virtual liver (v-Liver) and virtual embryo (v-Embryo) systems models. U.S. EPA-funded STAR centers are also providing bioinformatics, computational toxicology data and models, and developmental toxicity data and models. The models and underlying data are being made publicly available through the Aggregated Computational Toxicology Resource (ACToR), the Distributed Structure-Searchable Toxicity (DSSTox) Database Network, and other U.S. EPA websites. While initially focused on improving the hazard identification process, the CTRP is placing increasing emphasis on using high-throughput bioactivity profiling data in systems modeling to support quantitative risk assessments, and on developing complementary higher-throughput exposure models. This integrated approach will enable analysis of life-stage susceptibility, and understanding of the exposures, pathways, and key events by which chemicals exert their toxicity in developing systems (e.g., endocrine-related pathways). The CTRP will be a critical component in next-generation risk assessments utilizing quantitative high-throughput data and providing a much higher capacity for assessing chemical toxicity than is currently available.
Contact problem for a composite material with nacre inspired microstructure
NASA Astrophysics Data System (ADS)
Berinskii, Igor; Ryvkin, Michael; Aboudi, Jacob
2017-12-01
Bi-material composites with nacre-inspired brick-and-mortar microstructures, characterized by stiff elements of one phase with high aspect ratio separated by thin layers of the second phase, are considered. Such a microstructure has been shown to provide an efficient solution to the problem of crack arrest. However, contrary to the case of a homogeneous material, an external pressure applied to a part of the composite boundary can cause significant tensile stresses, which increase the danger of crack nucleation. Investigation of the influence of microstructure parameters on the magnitude of tensile stresses is performed by means of the classical Flamant-like problem of an orthotropic half-plane subjected to a normal external distributed loading. Adequate analysis of this problem represents a serious computational task due to the geometry of the considered layout and the high contrast between the composite constituents. This difficulty is presently circumvented by deriving a micro-to-macro analysis, in the framework of which an analytical solution of the auxiliary elasticity problem, followed by the discrete Fourier transform and the higher-order theory, is employed. As a result, full-scale continuum modeling of both composite constituents without employing any simplifying assumptions is presented. In the framework of the proposed modeling, the influence of the stiff elements' aspect ratio on the overall stress distribution is demonstrated.
NASA Strategy to Safely Live and Work in the Space Radiation Environment
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Wu, Honglu; Corbin, Barbara J.; Sulzman, Frank M.; Krenek, Sam
2007-01-01
In space, astronauts are constantly bombarded with energetic particles. The goal of the National Aeronautics and Space Administration and the NASA Space Radiation Project is to ensure that astronauts can safely live and work in the space radiation environment. The space radiation environment poses both acute and chronic risks to crew health and safety, but unlike some other aspects of space travel, space radiation exposure has clinically relevant implications for the lifetime of the crew. Among the identified radiation risks are cancer, acute and late CNS damage, chronic and degenerative tissue disease, and acute radiation syndrome. The term "safely" means that risks are sufficiently understood such that acceptable limits on mission, post-mission and multi-mission consequences can be defined. The NASA Space Radiation Project strategy has several elements. The first element is to use a peer-reviewed research program to increase our mechanistic knowledge and genetic capabilities to develop tools for individual risk projection, thereby reducing our dependency on epidemiological data and population-based risk assessment. The second element is to use the NASA Space Radiation Laboratory to provide a ground-based facility to study the health effects/mechanisms of damage from space radiation exposure and the development and validation of biological models of risk, as well as methods for extrapolation to human risk. The third element is a risk modeling effort that integrates the results from research efforts into models of human risk to reduce uncertainties in predicting the identified radiation risks. To understand the biological basis for risk, we must also understand the physical aspects of the crew environment. Thus, the fourth element develops computer algorithms to predict radiation transport properties, evaluate integrated shielding technologies and provide design optimization recommendations for the design of human space systems. Understanding the risks and determining methods to mitigate the risks are keys to a successful radiation protection strategy.
NASA Astrophysics Data System (ADS)
Du, Jinsong; Chen, Chao; Lesur, Vincent; Lane, Richard; Wang, Huilin
2015-06-01
We examined the mathematical and computational aspects of the magnetic potential, vector and gradient tensor fields of a tesseroid in a geocentric spherical coordinate system (SCS). This work is relevant for 3-D modelling that is performed with lithospheric vertical scales and global, continent or large regional horizontal scales. The curvature of the Earth is significant at these scales and hence, a SCS is more appropriate than the usual Cartesian coordinate system (CCS). The 3-D arrays of spherical prisms (SP; 'tesseroids') can be used to model the response of volumes with variable magnetic properties. Analytical solutions do not exist for these model elements and numerical or mixed numerical and analytical solutions must be employed. We compared various methods for calculating the response in terms of accuracy and computational efficiency. The methods were (1) the spherical coordinate magnetic dipole method (MD), (2) variants of the 3-D Gauss-Legendre quadrature integration method (3-D GLQI) with (i) different numbers of nodes in each of the three directions, and (ii) models where we subdivided each SP into a number of smaller tesseroid volume elements, (3) a procedure that we term revised Gauss-Legendre quadrature integration (3-D RGLQI), where the magnetization direction, which is constant in a SCS, is assumed to be constant in a CCS and equal to the direction at the geometric centre of each tesseroid, (4) the Taylor series expansion method (TSE) and (5) the rectangular prism method (RP). In any realistic application, both the accuracy and the computational efficiency factors must be considered to determine the optimum approach to employ. In all instances, accuracy improves with increasing distance from the source. It is higher in percentage terms for the potential than for the vector or tensor response. The tensor errors are the largest, but they decrease more quickly with distance from the source. In our comparisons of relative computational efficiency, we found that the magnetic potential takes less time to compute than the vector response, which in turn takes less time to compute than the tensor gradient response. The MD method takes less time to compute than either the TSE or RP methods. The efficiency of the GLQI and RGLQI methods depends on the polynomial order, but the response typically takes longer to compute than it does for the other methods. The optimum method is a complex function of the desired accuracy, the size of the volume elements, the element latitude and the distance between the source and the observation. For a model of global extent with typical model element size (e.g. 1 degree horizontally and 10 km radially) and observations at altitudes of 10s to 100s of km, a mixture of methods based on the horizontal separation of the source and the observation would be the optimum approach. To demonstrate the RGLQI method described within this paper, we applied it to the computation of the response for a global magnetization model for observations at 300 and 30 km altitude.
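The 3-D GLQI idea can be sketched compactly: map Gauss-Legendre nodes to each of the radial, colatitude and longitude intervals of the tesseroid and weight with the spherical volume element r^2 sin(theta). The magnetic kernels of the paper are considerably more involved; the placeholder integrand `f` below is an assumption for illustration only:

```python
import numpy as np

def tesseroid_integral(f, r_lim, th_lim, ph_lim, n=3):
    """Integrate f(r, theta, phi) over a tesseroid with n-point GL quadrature
    per direction (theta is colatitude in radians)."""
    x, w = np.polynomial.legendre.leggauss(n)        # nodes/weights on [-1, 1]
    def map_to(a, b):                                # affine map to [a, b]
        return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w
    r, wr = map_to(*r_lim)
    th, wt = map_to(*th_lim)
    ph, wp = map_to(*ph_lim)
    total = 0.0
    for ri, wri in zip(r, wr):
        for ti, wti in zip(th, wt):
            for pi, wpi in zip(ph, wp):
                # spherical volume element: r^2 sin(theta) dr dtheta dphi
                total += wri * wti * wpi * f(ri, ti, pi) * ri**2 * np.sin(ti)
    return total

# Sanity check: f = 1 recovers the tesseroid volume.
vol = tesseroid_integral(lambda r, t, p: 1.0,
                         (6361e3, 6371e3), (0.50, 0.52), (0.10, 0.12), n=4)
print(vol)
```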
NASA Astrophysics Data System (ADS)
Figiel, Łukasz; Dunne, Fionn P. E.; Buckley, C. Paul
2010-01-01
Layered-silicate nanoparticles offer a cost-effective reinforcement for thermoplastics. Computational modelling has been employed to study large deformations in layered-silicate/poly(ethylene terephthalate) (PET) nanocomposites near the glass transition, as would be experienced during industrial forming processes such as thermoforming or injection stretch blow moulding. Non-linear numerical modelling was applied, to predict the macroscopic large deformation behaviour, with morphology evolution and deformation occurring at the microscopic level, using the representative volume element (RVE) approach. A physically based elasto-viscoplastic constitutive model, describing the behaviour of the PET matrix within the RVE, was numerically implemented into a finite element solver (ABAQUS) using an UMAT subroutine. The implementation was designed to be robust, for accommodating large rotations and stretches of the matrix local to, and between, the nanoparticles. The nanocomposite morphology was reconstructed at the RVE level using a Monte-Carlo-based algorithm that placed straight, high-aspect ratio particles according to the specified orientation and volume fraction, with the assumption of periodicity. Computational experiments using this methodology enabled prediction of the strain-stiffening behaviour of the nanocomposite, observed experimentally, as functions of strain, strain rate, temperature and particle volume fraction. These results revealed the probable origins of the enhanced strain stiffening observed: (a) evolution of the morphology (through particle re-orientation) and (b) early onset of stress-induced pre-crystallization (and hence lock-up of viscous flow), triggered by the presence of particles. The computational model enabled prediction of the effects of process parameters (strain rate, temperature) on evolution of the morphology, and hence on the end-use properties.
NASA Astrophysics Data System (ADS)
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2016-12-01
There is a need for hydrological land surface schemes that can link to atmospheric models, provide hydrological prediction at multiple scales and guide the development of multi-objective water prediction systems. Distributed raster-based models suffer from an overrepresentation of topography, leading to wasted computational effort that increases uncertainty due to greater numbers of parameters and initial conditions. The Canadian Hydrological Model (CHM) is a modular, multiphysics, spatially distributed modelling framework designed for representing hydrological processes, including those that operate in cold regions. Unstructured meshes permit variable spatial resolution, allowing coarse resolutions where spatial variability is low and fine resolutions where required. Model uncertainty is reduced by reducing the number of computational elements required relative to high-resolution rasters. CHM uses a novel multi-objective approach for unstructured triangular mesh generation that fulfills hydrologically important constraints (e.g., basin boundaries, water bodies, soil classification, land cover, elevation, and slope/aspect). This provides an efficient spatial representation of parameters and initial conditions, as well as well-formed and well-graded triangles that are suitable for numerical discretization. CHM uses high-quality open-source libraries and high-performance computing paradigms to provide a framework that allows for integrating current state-of-the-art process algorithms. The impact of changes to model structure, including individual algorithms, parameters, initial conditions, driving meteorology, and spatial/temporal discretization, can be easily tested. Initial testing of CHM compared spatial scales and model complexity for a spring melt period at a sub-arctic mountain basin. The meshing algorithm reduced the total number of computational elements and preserved the spatial heterogeneity of predictions.
NASA Astrophysics Data System (ADS)
Harteveld, Casper
Designing a game with a serious purpose involves considering the worlds of Reality and Meaning, yet it is undeniably impossible to create a game without a third world, one that is specifically concerned with what makes a game a game: the play elements. This third world, the world of people such as designers and artists, and of disciplines such as computer science and game design, I call the world of Play, and this level is devoted to it. The level starts off with some of the misperceptions people have of play. Unlike some may think, we play all the time, even when we grow old—this was also very noticeable in designing the game Levee Patroller, as the team exhibited very playful behavior on many occasions. From there, I go into the aspects that characterize this world. The first concerns the goal of the game. This relates to the objectives people have to achieve within the game. This is constituted by the second aspect: the gameplay. Taking actions and facing challenges is subsequently constituted by a gameworld, which concerns the third aspect. And all of it is not possible without the fourth and final aspect, the type of technology that creates and facilitates the game. The four aspects together make up a “game concept” and from this world such a concept can be judged on the basis of three closely interrelated criteria: engagement, immersion, and fun.
2016-01-01
The problem of multi-scale modelling of damage development in a SiC ceramic fibre-reinforced SiC matrix ceramic composite tube is addressed, with the objective of demonstrating the ability of the finite-element microstructure meshfree (FEMME) model to introduce important aspects of the microstructure into a larger scale model of the component. These are particularly the location, orientation and geometry of significant porosity and the load-carrying capability and quasi-brittle failure behaviour of the fibre tows. The FEMME model uses finite-element and cellular automata layers, connected by a meshfree layer, to efficiently couple the damage in the microstructure with the strain field at the component level. Comparison is made with experimental observations of damage development in an axially loaded composite tube, studied by X-ray computed tomography and digital volume correlation. Recommendations are made for further development of the model to achieve greater fidelity to the microstructure. This article is part of the themed issue ‘Multiscale modelling of the structural integrity of composite materials’. PMID:27242308
Terminological aspects of data elements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strehlow, R.A.; Kenworthey, W.H. Jr.; Schuldt, R.E.
1991-01-01
The creation and display of data comprise a process that involves a sequence of steps requiring both semantic and systems analysis. An essential early step in this process is the choice, definition, and naming of data element concepts, followed by the specification of other needed data element concept attributes. The attributes and values of a data element concept remain associated with it from its birth as a concept to a generic data element that serves as a template for final application. Terminology is, therefore, centrally important to the entire data creation process. Smooth mapping from natural language to a database is a critical aspect of database design, and consequently it requires terminology standardization from the outset of database work. In this paper the semantic aspects of data elements are analyzed and discussed. Seven kinds of data element concept information are considered, and those that require terminological development and standardization are identified. The four terminological components of a data element are the hierarchical type of a concept, functional dependencies, schemata showing conceptual structures, and definition statements. These constitute the conventional role of terminology in database design.
Adaptation of a program for nonlinear finite element analysis to the CDC STAR 100 computer
NASA Technical Reports Server (NTRS)
Pifko, A. B.; Ogilvie, P. L.
1978-01-01
The conversion of a nonlinear finite element program to the CDC STAR 100 pipeline computer is discussed. The program, called DYCAST, was developed for the crash simulation of structures. Initial results with the STAR 100 computer indicated that significant gains in computation time are possible for operations on global arrays. However, for element-level computations that do not lend themselves easily to long vector processing, the STAR 100 was slower than comparable scalar computers. On this basis it is concluded that in order for pipeline computers to impact the economic feasibility of large nonlinear analyses, it is absolutely essential that algorithms be devised to improve the efficiency of element-level computations.
A study of the rheology and micro-structure of dumbbells in shear geometries
NASA Astrophysics Data System (ADS)
Mandal, Sandip; Khakhar, D. V.
2018-01-01
We study the flow of frictional, inelastic dumbbells made of two fused spheres of different aspect ratios down a rough inclined plane and in a simple shear cell, using discrete element simulations. At a fixed inclination angle, the mean velocity decreases, and the volume fraction increases significantly with increasing aspect ratio in the chute flow. At a fixed solid fraction, the shear stress and pressure decrease significantly with increasing aspect ratio in the shear cell flow. The micro-structure of the flow is characterized. The translational diffusion coefficient in the direction normal to the flow is found to scale as D_yy = b γ̇ d², independent of aspect ratio, where b is a constant, γ̇ is the shear rate, and d is the diameter of the constituent spheres of the dumbbells. The effective friction coefficient (μ, the ratio of shear stress to pressure) increases by 30%-35% on increasing the aspect ratio λ from 1.0 to 1.7, for a fixed inertial number I. The volume fraction (ϕ) also increases significantly with increasing aspect ratio, especially at high inertial numbers. The effective friction coefficient and volume fraction are found to follow simple scalings of the form μ = μ(I, λ) and ϕ = ϕ(I, λ) for all the data from both systems, and the results are in reasonable agreement with kinetic theory predictions at low I. The computational results are in reasonable agreement with the experimental data for flow in a rotating cylinder.
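A hedged sketch of how such a diffusion coefficient is typically extracted from simulation output: fit the ensemble-averaged mean-square displacement, MSD_y(t) = 2 D_yy t, and compare with the scaling b γ̇ d². The synthetic trajectories and the value of b below are illustrative placeholders, not the paper's data:

```python
import numpy as np

def diffusion_coefficient(y, dt):
    """y: (steps, N) array of normal-direction coordinates for N particles.
    Returns D from a linear fit of MSD_y(t) = 2 * D * t."""
    disp = y - y[0]                       # displacement from initial positions
    msd = (disp ** 2).mean(axis=1)        # ensemble-averaged MSD per step
    t = dt * np.arange(len(msd))
    slope = np.polyfit(t[1:], msd[1:], 1)[0]
    return slope / 2.0

gamma_dot, d, b = 1.0, 1e-3, 0.05         # shear rate, sphere diameter, fitted b
rng = np.random.default_rng(1)
# Synthetic random-walk trajectories standing in for DEM output.
y = np.cumsum(rng.normal(0.0, 1e-5, (500, 200)), axis=0)
print("measured D_yy:     ", diffusion_coefficient(y, dt=1e-3))
print("scaling prediction:", b * gamma_dot * d ** 2)
```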
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Chang, Chau-Lyan; Yen, Joseph C.
2013-01-01
In the multidimensional CESE development, triangles and tetrahedra turn out to be the most natural building blocks for 2D and 3D spatial meshes. As such the CESE method is compatible with the simplest unstructured meshes and thus can be easily applied to solve problems with complex geometries. However, because the method uses space-time staggered stencils, solution decoupling may become a real nuisance in applications involving unstructured meshes. In this paper we will describe a simple and general remedy which, according to numerical experiments, has removed any possibility of solution decoupling. Moreover, in a real-world viscous flow simulation near a solid wall, one often encounters a case where a boundary with high curvature or a sharp corner is surrounded by triangular/tetrahedral meshes of extremely high aspect ratio (up to 10^6). For such an extreme case, the spatial projection of a space-time compounded conservation element constructed using the original CESE design may become highly concave and thus its centroid (referred to as a spatial solution point) may lie far outside of the spatial projection. It could even be embedded beyond a solid wall boundary and cause serious numerical difficulties. In this paper we will also present a new procedure for constructing conservation elements and solution elements which effectively overcomes the difficulties associated with the original design. Another difficulty, addressed more recently, is the well-known fact that the accuracy of gradient computations involving triangular/tetrahedral grids deteriorates rapidly as the aspect ratio of grid cells increases. The root cause of this difficulty was clearly identified and several remedies to overcome it were found through a rigorous mathematical analysis. However, because of the length of the current paper and the complexity of the mathematics involved, this new work will be presented in another paper.
High-speed on-chip windowed centroiding using photodiode-based CMOS imager
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)
2003-01-01
A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least x and y centroids. The plurality of computation elements has only passive elements to provide inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.
High-speed on-chip windowed centroiding using photodiode-based CMOS imager
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)
2004-01-01
A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least x and y centroids. The plurality of computation elements has only passive elements to provide inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.
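The computation the patented circuit performs in analog hardware is a windowed intensity-weighted centroid: inner products of pixel values with their column and row indices, followed by a divide. A minimal digital sketch (window bounds and test image are illustrative):

```python
import numpy as np

def windowed_centroid(img, x0, y0, x1, y1):
    """Intensity-weighted centroid of img over the window [x0:x1) x [y0:y1)."""
    w = img[y0:y1, x0:x1].astype(float)
    ys, xs = np.mgrid[y0:y1, x0:x1]          # row/column index grids
    s = w.sum()                              # total intensity (denominator)
    return (xs * w).sum() / s, (ys * w).sum() / s   # (x, y) centroid

img = np.zeros((8, 8))
img[2:5, 3:6] = 1.0                          # a bright 3x3 spot
print(windowed_centroid(img, 0, 0, 8, 8))    # -> (4.0, 3.0)
```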
Selected Aspects of Cryogenic Tank Fatigue Calculations for Offshore Application
NASA Astrophysics Data System (ADS)
Skrzypacz, J.; Jaszak, P.
2018-02-01
The paper presents a method for the fatigue life calculation of a cryogenic tank intended for carrier ship applications. An independent tank of type C was taken into consideration. The calculation took into account a vast range of the load spectrum resulting from the ship accelerations. The stress at the most critical point of the tank was determined by means of the finite element method. The computation methods and codes used in the design of the LNG tank are presented. The number of fatigue cycles was determined by means of an S-N curve. The cumulative linear damage theory was used to determine the life factor.
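The cumulative linear damage idea (the Palmgren-Miner rule) can be sketched briefly: with an S-N curve N(S) = C / S^m, each load block contributes damage n_i / N(S_i), and failure is predicted when the sum reaches 1. The constants C and m and the load spectrum below are illustrative assumptions, not the values used for the LNG tank:

```python
# Hedged sketch of a Miner-rule fatigue damage sum over a load spectrum.

def miner_damage(blocks, C=1e12, m=3.0):
    """blocks: list of (stress_range_MPa, n_cycles); returns total damage."""
    return sum(n / (C / s ** m) for s, n in blocks)

# Illustrative stress-range blocks derived from ship accelerations.
spectrum = [(80.0, 2e5), (50.0, 1e6), (30.0, 5e6)]
D = miner_damage(spectrum)
print("damage:", D, "-> life factor:", 1.0 / D)
```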
NASA Technical Reports Server (NTRS)
Stuart, Jeffrey; Howell, Kathleen; Wilson, Roby
2013-01-01
The Sun-Jupiter Trojan asteroids are celestial bodies of great scientific interest as well as potential resources offering water and other mineral resources for longterm human exploration of the solar system. Previous investigations under this project have addressed the automated design of tours within the asteroid swarm. This investigation expands the current automation scheme by incorporating options for a complete trajectory design approach to the Trojan asteroids. Computational aspects of the design procedure are automated such that end-to-end trajectories are generated with a minimum of human interaction after key elements and constraints associated with a proposed mission concept are specified.
Environmental aspects of the transuranics: a selected, annotated bibliography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, F. M.; Sanders, C. T.; Talmage, S. S.
This fourth published bibliography of 528 references is from the computer information file built to provide support to the Nevada Applied Ecology Group (NAEG) of the AEC Nevada Operations Office. The general scope is environmental aspects of uranium and the transuranic elements, with a preponderance of material on plutonium. In addition, there are supporting materials involving basic ecology or general reviews on other nuclides that are entered at the request of the NAEG. References provide findings-oriented abstracts. Numerical data are referred to in the comment field. Indexes are given for author, subject category, keywords, geographic location, permuted title, taxons, and publication description.
Fiber optic temperature sensor gives rise to thermal analysis in complex product design
NASA Astrophysics Data System (ADS)
Cheng, Andrew Y. S.; Pau, Michael C. Y.
1996-09-01
A computer-adapted fiber-optic temperature sensing system has been developed which aims to study both the theoretical and the experimental aspects of fiber temperature sensing. The system consists of a laser source, a fiber sensing element, an electronic fringe-counting device, and an on-line personal computer. The temperature measurement is achieved by the conventional double-beam fringe-counting method, with optical path length changes in the sensing beam arising from the fiber expansion. The system can automatically measure the temperature changes in a sensing fiber arm, which provides an insight into the heat generation and dissipation of the measured system. Unlike conventional measuring devices such as thermocouples or solid-state temperature sensors, the fiber sensor can easily be wrapped and shaped to fit the surface of the measured object, or even placed inside molded plastic parts such as a computer case, which gives much more flexibility and applicability to the analysis of heat generation and dissipation in the operation of these machine parts. The reference beam is set up on a temperature-controlled optical bench to facilitate high sensitivity and high temperature resolution. The measuring beam has a motorized beam selection device for multiple-fiber measurement. The system has been demonstrated in the laboratory, and its temperature resolution is found to be as high as 0.01 degrees Celsius. It is expected that the system will find application in many design studies which require thermal budgeting.
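The fringe-counting relation behind such a sensor can be sketched under standard assumptions: each fringe corresponds to one wavelength of optical-path change, and the optical path n·L of a fiber of length L changes with temperature through thermal expansion (alpha) and the thermo-optic effect (dn/dT). The constants below are typical fused-silica values, not the authors' calibration:

```python
# Hedged sketch: temperature change per counted fringe for a fiber sensing arm.

def fringes_to_delta_T(dN, L, wavelength=633e-9, n=1.456,
                       alpha=0.55e-6, dn_dT=1.0e-5):
    """dN fringes, fiber length L [m]; returns temperature change [K]."""
    sensitivity = L * (n * alpha + dn_dT)   # optical-path change per kelvin
    return dN * wavelength / sensitivity

print(fringes_to_delta_T(dN=1, L=1.0))      # ~0.06 K per fringe for a 1 m fiber
```

With a few metres of fiber and sub-fringe interpolation, resolutions of the order of 0.01 degrees Celsius, as reported above, become plausible.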
Educational aspects of molecular simulation
NASA Astrophysics Data System (ADS)
Allen, Michael P.
This article addresses some aspects of teaching simulation methods to undergraduates and graduate students. Simulation is increasingly a cross-disciplinary activity, which means that the students who need to learn about simulation methods may have widely differing backgrounds. Also, they may have a wide range of views on what constitutes an interesting application of simulation methods. Almost always, a successful simulation course includes an element of practical, hands-on activity: a balance always needs to be struck between treating the simulation software as a 'black box', and becoming bogged down in programming issues. With notebook computers becoming widely available, students often wish to take away the programs to run themselves, and access to raw computer power is not the limiting factor that it once was; on the other hand, the software should be portable and, if possible, free. Examples will be drawn from the author's experience in three different contexts. (1) An annual simulation summer school for graduate students, run by the UK CCP5 organization, in which practical sessions are combined with an intensive programme of lectures describing the methodology. (2) A molecular modelling module, given as part of a doctoral training centre in the Life Sciences at Warwick, for students who might not have a first degree in the physical sciences. (3) An undergraduate module in Physics at Warwick, also taken by students from other disciplines, teaching high performance computing, visualization, and scripting in the context of a physical application such as Monte Carlo simulation.
Assignment Of Finite Elements To Parallel Processors
NASA Technical Reports Server (NTRS)
Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.
1990-01-01
Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.
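A hedged sketch of the simulated-annealing mapping idea described above: elements are moved between processors, moves that lower an approximate cost (load imbalance plus communication across cut edges) are kept, and uphill moves are accepted with probability exp(-delta/T) under a cooling schedule. The cost model and parameters are illustrative stand-ins for the paper's timing model:

```python
import math
import random

def anneal(n_elem, n_proc, edges, T=1.0, cooling=0.999, steps=20000):
    """edges: list of (i, j) element adjacencies; returns (assignment, cost)."""
    assign = [random.randrange(n_proc) for _ in range(n_elem)]

    def cost(a):
        loads = [a.count(p) for p in range(n_proc)]       # elements per processor
        cut = sum(1 for i, j in edges if a[i] != a[j])    # cross-processor edges
        return max(loads) + cut                           # compute + comms proxy

    c = cost(assign)
    for _ in range(steps):
        i = random.randrange(n_elem)
        old = assign[i]
        assign[i] = random.randrange(n_proc)              # propose a move
        c_new = cost(assign)
        if c_new <= c or random.random() < math.exp((c - c_new) / T):
            c = c_new                                     # accept (maybe uphill)
        else:
            assign[i] = old                               # reject and restore
        T *= cooling
    return assign, c

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]                  # a tiny element graph
print(anneal(n_elem=4, n_proc=2, edges=edges))
```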
NASA Astrophysics Data System (ADS)
He, Y.; Billen, M. I.; Puckett, E. G.
2015-12-01
Flow in the Earth's mantle is driven by thermo-chemical convection in which the properties and geochemical signatures of rocks vary depending on their origin and composition. For example, tectonic plates are composed of compositionally distinct layers of crust, residual lithosphere and fertile mantle, while in the lower-most mantle there are large compositionally distinct "piles" with thinner lenses of different material. Therefore, tracking of active or passive fields with distinct compositional, geochemical or rheologic properties is important for incorporating physical realism into mantle convection simulations, and for investigating the long-term mixing properties of the mantle. The difficulty in numerically advecting fields arises because they are non-diffusive and have sharp boundaries, and therefore require different methods than those usually used for temperature. Previous methods for tracking fields include the marker-chain, tracer particle, and field-correction (e.g., the Lenardic Filter) methods: each of these has different advantages or disadvantages, trading off computational speed with accuracy in tracking feature boundaries. Here we present a method for modeling active fields in mantle dynamics simulations using a new solver implemented in the deal.II package that underlies the ASPECT software. The new solver for the advection-diffusion equation uses a Local Discontinuous Galerkin (LDG) algorithm, which combines features of both finite element and finite volume methods, and is particularly suitable for problems with a dominant first-order term and discontinuities. Furthermore, we have applied a post-processing technique to ensure that the solution satisfies a global maximum/minimum principle. One potential drawback of the LDG method is that the total number of degrees of freedom is larger than for the continuous finite element method. To demonstrate the capabilities of this new method we present results for two benchmarks used previously: a falling cube with distinct buoyancy and viscosity, and a Rayleigh-Taylor instability of a compositionally buoyant layer. To evaluate the trade-offs in computational speed and solution accuracy we present results for these same benchmarks using the two field-tracking methods available in ASPECT: active tracer particles and the entropy viscosity method.
An Integrated Data-Driven Strategy for Safe-by-Design Nanoparticles: The FP7 MODERN Project.
Brehm, Martin; Kafka, Alexander; Bamler, Markus; Kühne, Ralph; Schüürmann, Gerrit; Sikk, Lauri; Burk, Jaanus; Burk, Peeter; Tamm, Tarmo; Tämm, Kaido; Pokhrel, Suman; Mädler, Lutz; Kahru, Anne; Aruoja, Villem; Sihtmäe, Mariliis; Scott-Fordsmand, Janeck; Sorensen, Peter B; Escorihuela, Laura; Roca, Carlos P; Fernández, Alberto; Giralt, Francesc; Rallo, Robert
2017-01-01
The development and implementation of safe-by-design strategies is key for the safe development of future generations of nanotechnology-enabled products. The safety testing of the huge variety of nanomaterials that can be synthesized is unfeasible due to time and cost constraints. Computational modeling facilitates the implementation of alternative testing strategies in a time- and cost-effective way. The development of predictive nanotoxicology models requires the use of high-quality experimental data on the structure, physicochemical properties and bioactivity of nanomaterials. The FP7 Project MODERN has developed and evaluated the main components of a computational framework for the evaluation of the environmental and health impacts of nanoparticles. This chapter describes each of the elements of the framework, including aspects related to data generation, management and integration; development of nanodescriptors; establishment of nanostructure-activity relationships; identification of nanoparticle categories; hazard ranking and risk assessment.
NASA Technical Reports Server (NTRS)
1983-01-01
Experimental work in support of stress studies in high speed silicon sheet growth has been emphasized in this quarter. Creep experiments utilizing four-point bending have been made in the temperature range from 1000 C to 1360 C in CZ silicon as well as on EFG ribbon. A method to measure residual stress over large areas using laser interferometry to map strain distributions under load is under development. A fiber optics sensor to measure ribbon temperature profiles has been constructed and is being tested in a ribbon growth furnace environment. Stress and temperature field modeling work has been directed toward improving various aspects of the finite element computing schemes. Difficulties in computing stress distributions with a very high creep intensity and with non-zero interface stress have been encountered and additional development of the numerical schemes to cope with these problems is required. Temperature field modeling has been extended to include the study of heat transfer effects in the die and meniscus regions.
Lattice Boltzmann simulation of antiplane shear loading of a stationary crack
NASA Astrophysics Data System (ADS)
Schlüter, Alexander; Kuhn, Charlotte; Müller, Ralf
2018-01-01
In this work, the lattice Boltzmann method is applied to study the dynamic behaviour of linear elastic solids under antiplane shear deformation. In this case, the governing set of partial differential equations reduces to a scalar wave equation for the out-of-plane displacement in a two-dimensional domain. The lattice Boltzmann approach developed by Guangwu (J Comput Phys 161(1):61-69, 2000) is used to solve the problem numerically. Some aspects of the scheme are highlighted, including the treatment of the boundary conditions. Subsequently, the performance of the lattice Boltzmann scheme is tested for a stationary crack problem for which an analytic solution exists. The treatment of cracks is new compared to the examples that are discussed in Guangwu's work. Furthermore, the lattice Boltzmann simulations are compared to finite element computations. Finally, the influence of the lattice Boltzmann relaxation parameter on the stability of the scheme is illustrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foley, Kimberley A.; Department of Public Health Sciences, Queen's University, Kingston, Ontario; Feldman-Stewart, Deb
Purpose/Objective: The overall quality of patient care is a function of the quality of both its technical and its nontechnical components. The purpose of this study was to identify the elements of nontechnical (personal) care that are most important to patients undergoing radiation therapy for prostate cancer. Methods and Materials: We reviewed the literature and interviewed patients and health professionals to identify elements of personal care pertinent to patients undergoing radiation therapy for prostate cancer. We identified 143 individual elements relating to 10 aspects of personal care. Patients undergoing radical radiation therapy for prostate cancer completed a self-administered questionnaire in which they rated the importance of each element. The overall importance of each element was measured by the percentage of respondents who rated it as “very important.” The importance of each aspect of personal care was measured by the mean importance of its elements. Results: One hundred eight patients completed the questionnaire. The percentage of patients who rated each element “very important” ranged from 7% to 95% (mean 61%). The mean importance rating of the elements of each aspect of care varied significantly: “perceived competence of caregivers,” 80%; “empathy and respectfulness of caregivers,” 67%; “adequacy of information sharing,” 67%; “patient centeredness,” 59%; “accessibility of caregivers,” 57%; “continuity of care,” 51%; “privacy,” 51%; “convenience,” 45%; “comprehensiveness of services,” 44%; and “treatment environment,” 30% (P<.0001). Neither age nor education was associated with importance ratings, but the patient's health status was associated with the rating of some elements of care. Conclusions: Many different elements of personal care are important to patients undergoing radiation therapy for prostate cancer, but the 3 aspects of care that most patients believe are most important are these: the perceived competence of their caregivers, the empathy and respectfulness of their caregivers, and the adequacy of information sharing.
Nakamura, Keiko; Tajima, Kiyoshi; Chen, Ker-Kong; Nagamatsu, Yuki; Kakigawa, Hiroshi; Masumi, Shin-ich
2013-12-01
This study focused on the application of novel finite-element analysis software for constructing a finite-element model from the computed tomography data of a human dentulous mandible. The finite-element model is necessary for evaluating the mechanical response of the alveolar part of the mandible resulting from occlusal force applied to the teeth during biting. Commercially available, patient-specific, computed tomography-based finite-element analysis software was used throughout, from the extraction of the computed tomography data to the finite-element analysis. The mandibular bone with teeth was extracted from the original images. Both the enamel and the dentin were extracted after image processing, and the periodontal ligament was created from the segmented dentin. The constructed finite-element model was reasonably accurate, using a total of 234,644 nodes and 1,268,784 tetrahedral and 40,665 shell elements. The elastic moduli of the heterogeneous mandibular bone were determined from the bone density data of the computed tomography images. The results suggested that the software applied in this study is both useful and powerful for creating a more accurate three-dimensional finite-element model of a dentulous mandible from computed tomography data without the need for any other software.
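A minimal sketch of the density-to-modulus step described above: per-element CT attenuation is mapped to an apparent density by a linear calibration and then to an elastic modulus by a power law. All coefficients, and the relations themselves, are illustrative placeholders rather than values from this study.

```python
import numpy as np

# Hypothetical Hounsfield-unit values sampled at element centroids.
hu = np.array([250.0, 600.0, 1100.0, 1500.0])

# Linear HU -> apparent density calibration (slope/offset are placeholders;
# real values come from a scanner phantom calibration).
rho = 0.0012 * hu + 0.17           # g/cm^3

# Power-law density-to-modulus relation E = a * rho**b. The coefficients
# below are illustrative; published relations differ by bone type and site.
a, b = 8.0e3, 2.0                  # E in MPa when rho is in g/cm^3
E = a * rho**b

for h, r, e in zip(hu, rho, E):
    print(f"HU={h:6.0f}  rho={r:5.2f} g/cm^3  E={e:8.1f} MPa")
```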
Yerganian, Simon Scott
2001-07-17
A piezoelectric motor having a stator in which piezoelectric elements are contained in slots formed in the stator transverse to the desired wave motion. When an electric field is imposed on the elements, deformation of the elements imposes a force perpendicular to the sides of the slot, deforming the stator. Appropriate frequency and phase shifting of the electric field will produce a wave in the stator and motion in a rotor. In a preferred aspect, the piezoelectric elements are configured so that deformation of the elements in the direction of an imposed electric field, generally referred to as the d.sub.33 direction, is utilized to produce wave motion in the stator. In a further aspect, the elements are compressed into the slots so as to minimize tensile stresses on the elements in use.
Yerganian, Simon Scott
2003-02-11
A piezoelectric motor having a stator in which piezoelectric elements are contained in slots formed in the stator transverse to the desired wave motion. When an electric field is imposed on the elements, deformation of the elements imposes a force perpendicular to the sides of the slot, deforming the stator. Appropriate frequency and phase-shifting of the electric field will produce a wave in the stator and motion in a rotor. In a preferred aspect, the piezoelectric elements are configured so that deformation of the elements in the direction of an imposed electric field, generally referred to as the d.sub.33 direction, is utilized to produce wave motion in the stator. In a further aspect, the elements are compressed into the slots so as to minimize tensile stresses on the elements in use.
Climate, weather, space weather: model development in an operational context
NASA Astrophysics Data System (ADS)
Folini, Doris
2018-05-01
Aspects of operational modeling for climate, weather, and space weather forecasts are contrasted, with a particular focus on the somewhat conflicting demands of "operational stability" versus "dynamic development" of the involved models. Some common key elements are identified, indicating potential for fruitful exchange across communities. Operational model development is compelling, driven by factors that broadly fall into four categories: model skill, basic physics, advances in computer architecture, and new aspects to be covered, from customer needs through physics to observational data. Evaluation of model skill as part of the operational chain goes beyond an automated skill score. Permanent interaction between "pure research" and "operational forecast" people is beneficial to both sides. This includes joint model development projects, although ultimate responsibility for the operational code remains with the forecast provider. The pace of model development reflects operational lead times. The points are illustrated with selected examples, many of which reflect the author's background and personal contacts, notably with the Swiss Weather Service and the Max Planck Institute for Meteorology, Hamburg, Germany. In view of current and future challenges, large collaborations covering a range of expertise are a must - within and across climate, weather, and space weather. To profit from and cope with the rapid progress of computer architectures, supercomputing centers must form part of the team.
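For concreteness, the snippet below computes one standard automated skill score of the kind referred to above: a mean-squared-error skill score of a forecast against a reference such as climatology (1 is perfect, 0 matches the reference, negative is worse). This is a generic textbook measure, not a metric attributed to any particular forecast centre.

```python
import numpy as np

def mse_skill_score(forecast, reference, observed):
    """Skill = 1 - MSE(forecast) / MSE(reference)."""
    mse_f = np.mean((forecast - observed) ** 2)
    mse_r = np.mean((reference - observed) ** 2)
    return 1.0 - mse_f / mse_r

# Synthetic example: climatology (the observed mean) as the reference.
obs = np.array([2.1, 3.4, 1.8, 4.0, 2.9])
clim = np.full_like(obs, obs.mean())                 # trivial reference
fcst = obs + np.array([0.2, -0.1, 0.3, -0.2, 0.1])   # a decent forecast

print(f"skill = {mse_skill_score(fcst, clim, obs):.3f}")
```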
Scripes, Paola G; Yaparpalvi, Ravindra
2012-09-01
The use of functional data in the radiation therapy (RT) treatment planning (RTP) process is currently the focus of significant technical, scientific, and clinical development. Positron emission tomography (PET) using ((18)F) fluorodeoxyglucose has been increasingly used in RT planning in recent years. Fluorodeoxyglucose is the most commonly used radiotracer for diagnosis, staging, recurrent disease detection, and monitoring of tumor response to therapy (Lung Cancer 2012;76:344-349; Lung Cancer 2009;64:301-307; J Nucl Med 2008;49:532-540; J Nucl Med 2007;48:58S-67S). All the efforts to improve both PET and computed tomography (CT) image quality and, consequently, lesion detectability share a common objective: to increase the accuracy of functional imaging and thus of its coregistration into RT planning systems. In radiotherapy, improvement in target localization permits reduction of tumor margins, consequently reducing the volume of normal tissue irradiated. Furthermore, smaller treated target volumes create the possibility of dose escalation, leading to increased chances of tumor cure and control. This article focuses on the technical aspects of PET/CT image acquisition, fusion, usage, and impact on the physics of RTP. The authors review the basic elements of RTP, modern radiation delivery, and the technical parameters of coregistration of PET/CT into RT computerized planning systems. Copyright © 2012 Elsevier Inc. All rights reserved.
Tsouknidas, Alexander; Maropoulos, Stergios; Savvakis, Savvas; Michailidis, Nikolaos
2011-01-01
Recent advances in Computer Aided Design and Manufacturing techniques (CAD/CAM) have facilitated the rapid and precise construction of customized implants used for craniofacial reconstruction. Data on the patients' trauma, acquired through Computed Tomography (CT), provide sufficient information with regard to the defect contour profile, thus allowing a thorough preoperative evaluation whilst ensuring excellent implant precision. During the selection, however, of a suitable implant material for the specific trauma, the mechanical aspects of the implant have to be considered. This investigation aims to assess the mechanical strength, the shock resistance and the critical deflection of cranial implants manufactured with two commonly used materials, Polymethylmethacrylate (PMMA) and Ti6Al4V. Even though the strength properties of Ti-alloys are far superior to those of PMMA, there are several aspects that may act in favor of PMMA; e.g., it is known that discontinuities in the elastic modulus of adjoined parts (bone-implant) lead to bone resorption, thus loosening the fixation of the implant over time. The implant design and fixation were the same in both cases, allowing a direct comparison of the implant behavior for various loads. Finite Element Method (FEM) assisted procedures were employed, providing a valuable insight into the neurocranial protection granted by these implants.
Mathematical aspects of finite element methods for incompressible viscous flows
NASA Technical Reports Server (NTRS)
Gunzburger, M. D.
1986-01-01
Mathematical aspects of finite element methods are surveyed for incompressible viscous flows, concentrating on the steady primitive variable formulation. The discretization of a weak formulation of the Navier-Stokes equations is addressed, then the stability condition is considered, the satisfaction of which ensures the stability of the approximation. Specific choices of finite element spaces for the velocity and pressure are then discussed. Finally, the connection between different weak formulations and a variety of boundary conditions is explored.
Higher-order adaptive finite-element methods for Kohn–Sham density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motamarri, P.; Nowak, M.R.; Leiter, K.
2013-11-15
We present an efficient computational approach to perform real-space electronic structure calculations using an adaptive higher-order finite-element discretization of Kohn–Sham density-functional theory (DFT). To this end, we develop an a priori mesh-adaption technique to construct a close to optimal finite-element discretization of the problem. We further propose an efficient solution strategy for solving the discrete eigenvalue problem by using spectral finite-elements in conjunction with Gauss–Lobatto quadrature, and a Chebyshev acceleration technique for computing the occupied eigenspace. The proposed approach has been observed to provide a staggering 100–200-fold computational advantage over the solution of a generalized eigenvalue problem. Using the proposed solution procedure, we investigate the computational efficiency afforded by higher-order finite-element discretizations of the Kohn–Sham DFT problem. Our studies suggest that staggering computational savings—of the order of 1000-fold—relative to linear finite-elements can be realized, for both all-electron and local pseudopotential calculations, by using higher-order finite-element discretizations. On all the benchmark systems studied, we observe diminishing returns in computational savings beyond the sixth-order for accuracies commensurate with chemical accuracy, suggesting that the hexic spectral-element may be an optimal choice for the finite-element discretization of the Kohn–Sham DFT problem. A comparative study of the computational efficiency of the proposed higher-order finite-element discretizations suggests that the performance of the finite-element basis is competitive with the plane-wave discretization for non-periodic local pseudopotential calculations, and compares to the Gaussian basis for all-electron calculations to within an order of magnitude. Further, we demonstrate the capability of the proposed approach to compute the electronic structure of a metallic system containing 1688 atoms using modest computational resources, and good scalability of the present implementation up to 192 processors.
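The Chebyshev acceleration idea admits a compact illustration: repeatedly apply a low-degree Chebyshev polynomial of the operator to a trial block so that eigencomponents below the filter cutoff (the occupied space) are amplified relative to the rest, re-orthonormalizing between applications. The dense-matrix toy below, assuming only numpy, stands in for (and is much simpler than) the paper's adaptive finite-element setting; for convenience it takes the filter bounds from exact eigenvalues, whereas a real implementation estimates them, e.g., with a few Lanczos steps.

```python
import numpy as np

def chebyshev_filter(H, X, m, lam_cut, lam_max):
    """Apply the degree-m Chebyshev polynomial T_m((H - c)/e) to the block X.
    The interval (lam_cut, lam_max] is mapped into [-1, 1] and damped, so
    eigencomponents below lam_cut (the occupied space) are amplified."""
    e = (lam_max - lam_cut) / 2.0       # half-width of the damped interval
    c = (lam_max + lam_cut) / 2.0       # its centre
    Y = (H @ X - c * X) / e             # T_1
    X_prev = X
    for _ in range(2, m + 1):           # three-term Chebyshev recurrence
        Y_new = 2.0 * (H @ Y - c * Y) / e - X_prev
        X_prev, Y = Y, Y_new
    return Y

rng = np.random.default_rng(0)
n, n_occ = 200, 10
A = rng.standard_normal((n, n)); H = (A + A.T) / 2.0   # toy symmetric operator
evals = np.linalg.eigvalsh(H)

X = rng.standard_normal((n, n_occ))
for _ in range(30):                     # filtered subspace iteration
    X = chebyshev_filter(H, X, m=10, lam_cut=evals[n_occ], lam_max=evals[-1])
    X, _ = np.linalg.qr(X)              # re-orthonormalize the block

ritz = np.sort(np.linalg.eigvalsh(X.T @ H @ X))
print("max error in occupied eigenvalues:", np.abs(ritz - evals[:n_occ]).max())
```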
Charles Darwin and Evolution: Illustrating Human Aspects of Science
ERIC Educational Resources Information Center
Kampourakis, Kostas; McComas, William F.
2010-01-01
Recently, the nature of science (NOS) has become recognized as an important element within the K-12 science curriculum. Despite differences in the ultimate lists of recommended aspects, a consensus is emerging on what specific NOS elements should be the focus of science instruction and inform textbook writers and curriculum developers. In this…
Rectenna session: Micro aspects. [energy conversion
NASA Technical Reports Server (NTRS)
Gutmann, R. J.
1980-01-01
Two micro aspects of the rectenna design are addressed: evaluation of the degradation in net rectenna RF to DC conversion efficiency due to power density variations across the rectenna (power combining analysis) and design of Yagi-Uda receiving elements to reduce rectenna cost by decreasing the number of conversion circuits (directional receiving elements). The first of these micro aspects involves resolving a fundamental question of efficiency potential with a rectenna, while the second involves a design modification with a large potential cost saving.
49 CFR 236.526 - Roadway element not functioning properly.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 4 2012-10-01 2012-10-01 false Roadway element not functioning properly. 236.526... element not functioning properly. When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element has...
49 CFR 236.526 - Roadway element not functioning properly.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 4 2013-10-01 2013-10-01 false Roadway element not functioning properly. 236.526... element not functioning properly. When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element has...
49 CFR 236.526 - Roadway element not functioning properly.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 4 2014-10-01 2014-10-01 false Roadway element not functioning properly. 236.526... element not functioning properly. When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element has...
Corner Wrinkling at a Square Membrane Due to Symmetric Mechanical Loads
NASA Technical Reports Server (NTRS)
Blandino, Joseph R.; Johnston, John D.; Dharamsi, Urmil K.; Brodeur, Stephen J. (Technical Monitor)
2001-01-01
Thin-film membrane structures are under consideration for use in many future gossamer spacecraft systems. Examples include sunshields for large aperture telescopes, solar sails, and membrane optics. The development of capabilities for testing and analyzing pre-tensioned, thin-film membrane structures is an important and challenging aspect of gossamer spacecraft technology development. This paper presents results from experimental and computational studies performed to characterize the wrinkling behavior of thin-film membranes under mechanical loading. The test article is a 500 mm square membrane subjected to symmetric corner loads. Data are presented for loads ranging from 0.49 N to 4.91 N. The experimental results show that as the load increases the number of wrinkles increases, while the wrinkle amplitude decreases. The computational model uses a finite element implementation of Stein-Hedgepeth membrane wrinkling theory to predict the behavior of the membrane. Comparisons were made with experimental results for the wrinkle angle and wrinkled region. There was reasonably good agreement between the measured wrinkle angle and the predicted directions of the major principal stresses. The shape of the wrinkle region predicted by the finite element model matches that observed in the experiments; however, the size of the predicted region is smaller than that determined in the experiments.
NASA Technical Reports Server (NTRS)
Nguyen, Nhan; Ting, Eric; Chaparro, Daniel
2017-01-01
This paper investigates the effect of nonlinear large deflection bending on the aerodynamic performance of a high aspect ratio flexible wing. A set of nonlinear static aeroelastic equations are derived for the large bending deflection of a high aspect ratio wing structure. An analysis is conducted to compare the nonlinear bending theory with the linear bending theory. The results show that the nonlinear bending theory is length-preserving, whereas the linear bending theory causes a non-physical lengthening of the wing structure under the no-axial-load condition. A modified lifting line theory is developed to compute the lift and drag coefficients of a wing structure undergoing a large bending deflection. The lift and drag coefficients are more accurately estimated by the nonlinear bending theory due to its length-preserving property. The nonlinear bending theory yields lower lift and span efficiency than the linear bending theory. A coupled aerodynamic-nonlinear finite element model is developed to implement the nonlinear bending theory for a Common Research Model (CRM) flexible wing wind tunnel model to be tested in the University of Washington Aeronautical Laboratory (UWAL). The structural stiffness of the model is designed to give about 10% wing tip deflection, which is large enough for the nonlinear deflection effect to become significant. The computational results show that the nonlinear bending theory yields slightly less lift than the linear bending theory for this wind tunnel model. As a result, the linear bending theory is deemed adequate for the CRM wind tunnel model.
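The length-preservation point can be checked numerically: linear theory superposes a vertical deflection w(x) on an unchanged planform coordinate x, so the deformed arc length ∫√(1+w'²)dx exceeds the span, while a rotation-based (nonlinear) kinematic description marches cos/sin of the section rotation along the arc length and keeps it fixed. The sketch below uses a generic tip-loaded cantilever deflection shape scaled to a 10% tip deflection; it illustrates the kinematic argument only and is not the paper's aeroelastic model.

```python
import numpy as np

L, n = 1.0, 2000                       # normalized span and grid size
x = np.linspace(0.0, L, n)

# Linear theory: superpose a cantilever-like deflection w(x), scaled to a
# 10% tip deflection, on the undeformed planform coordinate x.
w = 0.10 * L * (x**2 * (3*L - x)) / (2*L**3)
arc_linear = np.sum(np.hypot(np.diff(x), np.diff(w)))   # deformed arc length

# Nonlinear (length-preserving) kinematics: march the section rotation
# theta along the arc length, so every segment keeps its length L/(n-1).
theta = np.gradient(w, x)              # use the slope as the rotation angle
ds = L / (n - 1)
x_nl = np.cumsum(np.cos(theta)) * ds
z_nl = np.cumsum(np.sin(theta)) * ds
arc_nonlinear = np.sum(np.hypot(np.diff(x_nl), np.diff(z_nl)))

print(f"linear theory arc length:    {arc_linear:.5f} (> 1: spurious stretching)")
print(f"nonlinear theory arc length: {arc_nonlinear:.5f} (length preserved)")
```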
Computational model of collagen turnover in carotid arteries during hypertension.
Sáez, P; Peña, E; Tarbell, J M; Martínez, M A
2015-02-01
It is well known that biological tissues adapt their properties in response to different mechanical and chemical stimuli. The goal of this work is to study the collagen turnover in the arterial tissue of hypertensive patients through a coupled computational mechano-chemical model. Although collagen turnover has been widely studied experimentally, computational models adopting the mechano-chemical approach are scarce. The present approach can be extended easily to study other aspects of bone remodeling or collagen degradation in heart diseases. The model can be divided into three different stages. First, we study the smooth muscle cell synthesis of different biological substances due to over-stretching during hypertension. Next, we study the mass transport of these substances along the arterial wall. The last step is to compute the turnover of collagen based on the amount of these substances in the arterial wall, which interact with each other to modify the turnover rate of collagen. We simulate this process in a finite element model of a real human carotid artery. The final results show the well-known stiffening of the arterial wall due to the increase in the collagen content. Copyright © 2015 John Wiley & Sons, Ltd.
ModeLang: a new approach for experts-friendly viral infections modeling.
Wasik, Szymon; Prejzendanc, Tomasz; Blazewicz, Jacek
2013-01-01
Computational modeling is an important element of systems biology. One of its important applications is modeling complex, dynamical, and biological systems, including viral infections. This type of modeling usually requires close cooperation between biologists and mathematicians. However, such cooperation often faces communication problems because biologists do not have sufficient knowledge to understand mathematical description of the models, and mathematicians do not have sufficient knowledge to define and verify these models. In many areas of systems biology, this problem has already been solved; however, in some of these areas there are still certain problematic aspects. The goal of the presented research was to facilitate this cooperation by designing seminatural formal language for describing viral infection models that will be easy to understand for biologists and easy to use by mathematicians and computer scientists. The ModeLang language was designed in cooperation with biologists and its computer implementation was prepared. Tests proved that it can be successfully used to describe commonly used viral infection models and then to simulate and verify them. As a result, it can make cooperation between biologists and mathematicians modeling viral infections much easier, speeding up computational verification of formulated hypotheses.
NASA Astrophysics Data System (ADS)
Mirkia, Hasti; Sangari, Arash; Nelson, Mark; Assadi, Amir H.
2013-03-01
Architecture brings together diverse elements to enhance the observer's measure of esthetics and the convenience of functionality. Architects often conceptualize a synthesis of design elements to invoke the observer's sense of harmony and positive affect. How does an observer's brain respond to harmony of design in interior spaces? One implicit consideration by architects is the role of guided visual attention by observers while navigating indoors. Prior visual experience of natural scenes provides the perceptual basis for the Gestalt of design elements. In contrast, the Gestalt of organization in design varies according to the architect's decision. We outline a quantitative theory to measure the success in utilizing the observer's psychological factors to achieve the desired positive affect, together with a unified framework for perception of geometry and motion in interior spaces, which integrates affective and cognitive aspects of human vision in the context of anthropocentric interior design. The affective criteria are derived from contemporary theories of interior design. Our contribution is to demonstrate that the neural computations underlying an observer's eye movements could be used to elucidate harmony in the perception of form, space and motion, and thus a measure of goodness of interior design. Through mathematical modeling, we argue the plausibility of the relevant hypotheses.
NASA Astrophysics Data System (ADS)
Mudunuru, M. K.; Shabouei, M.; Nakshatrala, K.
2015-12-01
Advection-diffusion-reaction (ADR) equations appear in various areas of life sciences, hydrogeological systems, and contaminant transport. Obtaining stable and accurate numerical solutions can be challenging as the underlying equations are coupled, nonlinear, and non-self-adjoint. Currently, there is neither a robust computational framework available nor a reliable commercial package known that can handle various complex situations. Herein, the objective of this poster presentation is to present a novel locally conservative non-negative finite element formulation that preserves the underlying physical and mathematical properties of a general linear transient anisotropic ADR equation. In the continuous setting, the governing equations for ADR systems possess various important properties; in general, not all of these properties are inherited during finite difference, finite volume, and finite element discretizations. The contribution is twofold. First, we analyze whether the existing numerical formulations (such as SUPG and GLS) and commercial packages provide physically meaningful values for the concentration of the chemical species for various realistic benchmark problems. Furthermore, we also quantify the errors incurred in satisfying the local and global species balance for two popular chemical kinetics schemes: CDIMA (chlorine dioxide-iodine-malonic acid) and BZ (Belousov-Zhabotinsky). Based on these numerical simulations, we show that SUPG and GLS produce unphysical values for the concentration of chemical species due to the violation of the non-negative constraint, contain spurious node-to-node oscillations, and have large errors in the local and global species balance. Second, we propose a novel finite element formulation to overcome the above difficulties. The proposed locally conservative non-negative computational framework, based on low-order least-squares finite elements, is able to preserve these underlying physical and mathematical properties. Several representative numerical examples are discussed to illustrate the ability of the proposed numerical formulations to accurately describe various aspects of the mixing process in chaotic flows and to simulate transport in highly heterogeneous anisotropic media.
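The non-negativity failure mode is easy to reproduce in miniature: for a 1D steady advection-diffusion problem, the standard Galerkin (central) discretization produces oscillatory, negative concentrations once the cell Peclet number exceeds 2. The sketch below, assuming only numpy, is a generic textbook demonstration rather than one of the poster's CDIMA/BZ benchmarks.

```python
import numpy as np

# 1D steady advection-diffusion u c' = D c'' on (0,1), c(0)=0, c(1)=1,
# discretized with the standard Galerkin/central scheme. For cell Peclet
# number Pe = u*h/D > 2 the discrete solution oscillates and goes negative,
# the kind of violation of the non-negative constraint described above.
n, u, D = 20, 1.0, 0.005
h = 1.0 / n
Pe = u * h / D
A = np.zeros((n - 1, n - 1)); b = np.zeros(n - 1)
for i in range(n - 1):
    A[i, i] = 2 * D / h
    if i > 0:
        A[i, i - 1] = -D / h - u / 2
    if i < n - 2:
        A[i, i + 1] = -D / h + u / 2
b[-1] = (D / h - u / 2) * 1.0    # Dirichlet value c(1)=1 moved to the RHS
c = np.linalg.solve(A, b)
print(f"cell Peclet = {Pe:.1f}, min(c) = {c.min():.4f}  (negative!)")
```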
Challenges in reducing the computational time of QSTS simulations for distribution system analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.
The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
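The "time dependence between time steps" challenge is what defeats naive parallelization: device state must be carried from one power flow to the next. The toy loop below, with an invented deadband-and-delay tap regulator and a one-line stand-in for the power flow solve, sketches that sequential dependence; none of it reflects a specific QSTS tool.

```python
import numpy as np

# Toy QSTS loop: a voltage regulator with a deadband and a delay timer.
# Controller state (tap position, timer) carries across time steps, which
# is what makes the year-long 1-second sequence inherently sequential.
rng = np.random.default_rng(1)
n_steps = 86_400                       # one day at 1-second resolution
load = 1.0 + 0.1*np.sin(np.linspace(0, 2*np.pi, n_steps)) \
           + 0.02*rng.standard_normal(n_steps)

tap, timer, taps = 0, 0, np.empty(n_steps, dtype=int)
for t in range(n_steps):
    v = 1.05 - 0.08*load[t] + 0.01*tap   # stand-in for a power flow solve
    if abs(v - 1.0) > 0.01:              # outside the regulator deadband
        timer += 1
        if timer >= 30:                  # 30 s delay before acting
            tap += 1 if v < 1.0 else -1
            timer = 0
    else:
        timer = 0
    taps[t] = tap

print("tap changes over the day:", np.count_nonzero(np.diff(taps)))
```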
Numerical evaluation of a single ellipsoid motion in Newtonian and power-law fluids
NASA Astrophysics Data System (ADS)
Férec, Julien; Ausias, Gilles; Natale, Giovanniantonio
2018-05-01
A computational model is developed for simulating the motion of a single ellipsoid suspended in a Newtonian and a power-law fluid, respectively. Based on a finite element method (FEM), the approach consists in seeking solutions for the linear and angular particle velocities using a minimization algorithm, such that the net hydrodynamic force and torque acting on the ellipsoid are zero. For a Newtonian fluid subjected to a simple shear flow, Jeffery's predictions are recovered at any aspect ratio. The motion of a single ellipsoidal fiber is found to be slightly disturbed by the shear-thinning character of the suspending fluid, when compared with Jeffery's solutions. Surprisingly, the perturbation can be completely neglected for a particle with a large aspect ratio. Furthermore, the particle centroid is also found to translate with the same linear velocity as the undisturbed simple shear flow evaluated at the particle centroid. This is confirmed by recent works based on experimental investigations and modeling approaches (1-2).
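For reference, the Newtonian baseline that the paper recovers can be reproduced directly: Jeffery's in-plane orbit equation for a spheroid of aspect ratio r in simple shear, with tumbling period T = 2π(r + 1/r)/G. The short integration below, assuming numpy and scipy, checks that the orbit angle advances by exactly 2π over one period.

```python
import numpy as np
from scipy.integrate import solve_ivp

G, r = 1.0, 10.0                        # shear rate and particle aspect ratio

def jeffery(t, phi):
    # In-plane orbit equation for a spheroid in simple shear (Jeffery 1922):
    # dphi/dt = G/(r^2 + 1) * (r^2 cos^2 phi + sin^2 phi)
    return G/(r**2 + 1) * (r**2*np.cos(phi)**2 + np.sin(phi)**2)

T = 2*np.pi*(r + 1/r)/G                 # theoretical tumbling period
sol = solve_ivp(jeffery, (0.0, T), [0.0], rtol=1e-10, atol=1e-12)

print(f"phi(T)/2pi = {sol.y[0, -1]/(2*np.pi):.6f}  (expected 1.0)")
```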
Computing Fiber/Matrix Interfacial Effects In SiC/RBSN
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Hopkins, Dale A.
1996-01-01
Computational study conducted to demonstrate use of boundary-element method in analyzing effects of fiber/matrix interface on elastic and thermal behaviors of representative laminated composite materials. In study, boundary-element method implemented by Boundary Element Solution Technology - Composite Modeling System (BEST-CMS) computer program.
New Parallel Algorithms for Landscape Evolution Model
NASA Astrophysics Data System (ADS)
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEMs with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute the drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which partitions the nodes in catchments between processes first, and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that they are both adequate for handling large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
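The communication bottleneck comes from the drainage-area recurrence itself: each cell's area is its own plus that of all upslope cells, which is inherently a reduction along flow paths. The serial reference computation below (a generic D8 steepest-descent accumulation over a synthetic surface, not the authors' code) shows the data dependence that the parallel algorithms must reproduce.

```python
import numpy as np

rng = np.random.default_rng(2)
ny, nx = 50, 60
z = np.add.outer(np.linspace(5, 0, ny), np.linspace(3, 0, nx)) \
    + 0.1*rng.standard_normal((ny, nx))          # tilted noisy surface

area = np.ones((ny, nx))                         # each cell contributes itself
order = np.argsort(z, axis=None)[::-1]           # process highest to lowest
for idx in order:
    i, j = divmod(idx, nx)
    # steepest-descent (D8) receiver among the 8 neighbours
    best, bi, bj = 0.0, -1, -1
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ni, nj = i + di, j + dj
            if (di or dj) and 0 <= ni < ny and 0 <= nj < nx:
                drop = (z[i, j] - z[ni, nj]) / np.hypot(di, dj)
                if drop > best:
                    best, bi, bj = drop, ni, nj
    if bi >= 0:
        area[bi, bj] += area[i, j]               # pass accumulated area down

print("max drainage area (cells):", int(area.max()))
```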
Ceramic matrix composite behavior -- Computational simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamis, C.C.; Murthy, P.L.N.; Mital, S.K.
Development of analytical modeling and computational capabilities for the prediction of high temperature ceramic matrix composite behavior has been an ongoing research activity at NASA-Lewis Research Center. These research activities have resulted in the development of micromechanics based methodologies to evaluate different aspects of ceramic matrix composite behavior. The basis of the approach is micromechanics together with a unique fiber substructuring concept. In this new concept the conventional unit cell (the smallest representative volume element of the composite) of the micromechanics approach has been modified by substructuring the unit cell into several slices and developing the micromechanics based equations at the slice level. The main advantage of this technique is that it can provide much greater detail in the response of composite behavior as compared to a conventional micromechanics based analysis while still maintaining a very high computational efficiency. This methodology has recently been extended to model plain weave ceramic composites. The objective of the present paper is to describe the important features of the modeling and simulation and to illustrate them with select examples of laminated as well as woven composites.
Aspects semiotiques de trois manuels scolaires (Semiotic Aspects of Three School Textbooks).
ERIC Educational Resources Information Center
Calame, Claude
1980-01-01
A structural analysis according to narrative rules and common content elements was made of stories on an identical theme in three different foreign language texts. The purpose of the analysis was to highlight some of the elements by which an educational institution influences its students through the world view it espouses. The three texts chosen…
NASA Technical Reports Server (NTRS)
Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.
1991-01-01
The present FEM technique addresses both linear and nonlinear boundary value problems encountered in computational physics by handling general three-dimensional regions, boundary conditions, and material properties. The box finite elements used are defined by a Cartesian grid independent of the boundary definition, and local refinements proceed by dividing a given box element into eight subelements. Discretization employs trilinear approximations on the box elements; special element stiffness matrices are included for boxes cut by any boundary surface. Illustrative results are presented for representative aerodynamics problems involving up to 400,000 elements.
Top-down predictions in the cognitive brain
Kveraga, Kestutis; Ghuman, Avniel S.; Bar, Moshe
2007-01-01
The human brain is not a passive organ simply waiting to be activated by external stimuli. Instead, it is proposed that the brain continuously employs memory of past experiences to interpret sensory information and predict the immediately relevant future. This review concentrates on visual recognition as the model system for developing and testing ideas about the role and mechanisms of top-down predictions in the brain. We cover relevant behavioral, computational and neural aspects. These ideas are then extended to other domains. The basic elements of this proposal include analogical mapping, associative representations and the generation of predictions. Connections to a host of cognitive processes will be made and implications for several mental disorders will be proposed. PMID:17923222
Mathematical and Numerical Aspects of the Adaptive Fast Multipole Poisson-Boltzmann Solver
Zhang, Bo; Lu, Benzhuo; Cheng, Xiaolin; ...
2013-01-01
This paper summarizes the mathematical and numerical theories and computational elements of the adaptive fast multipole Poisson-Boltzmann (AFMPB) solver. We introduce and discuss the following components in order: the Poisson-Boltzmann model, boundary integral equation reformulation, surface mesh generation, the node-patch discretization approach, Krylov iterative methods, the new version of fast multipole methods (FMMs), and a dynamic prioritization technique for scheduling parallel operations. For each component, we also remark on feasible approaches for further improvements in efficiency, accuracy and applicability of the AFMPB solver to large-scale long-time molecular dynamics simulations. Lastly, the potential of the solver is demonstrated with preliminary numerical results.
Space Transportation System/Spacelab accommodations
NASA Technical Reports Server (NTRS)
De Sanctis, C. E.
1978-01-01
A description is provided of the capabilities offered by the Spacelab design for doing research in space. The Spacelab flight vehicle consists of two basic elements including the habitable pressurized compartments and the unpressurized equipment mounting platforms. Spacelab services to payloads are considered, taking into account payload mass, electrical power and energy, heat rejection for Spacelab and payload, aspects of Spacelab data handling, and the extended flight capability. Attention is also given to the Spacelab structure, crew station and habitability, the electrical power distribution subsystem, the command and data management subsystem, the experiment computer operating system, the environmental control subsystem, the experiment vent assembly, the common payload support equipment, the instrument pointing subsystem, and details concerning the utilization of Spacelab.
LATDYN - PROGRAM FOR SIMULATION OF LARGE ANGLE TRANSIENT DYNAMICS OF FLEXIBLE AND RIGID STRUCTURES
NASA Technical Reports Server (NTRS)
Housner, J. M.
1994-01-01
LATDYN is a computer code for modeling the Large Angle Transient DYNamics of flexible articulating structures and mechanisms involving joints about which members rotate through large angles. LATDYN extends and brings together some of the aspects of Finite Element Structural Analysis, Multi-Body Dynamics, and Control System Analysis; three disciplines that have been historically separate. It combines significant portions of their distinct capabilities into one single analysis tool. The finite element formulation for flexible bodies in LATDYN extends the conventional finite element formulation by using a convected coordinate system for constructing the equation of motion. LATDYN's formulation allows for large displacements and rotations of finite elements subject to the restriction that deformations within each are small. Also, the finite element approach implemented in LATDYN provides a convergent path for checking solutions simply by increasing mesh density. For rigid bodies and joints LATDYN borrows extensively from methodology used in multi-body dynamics where rigid bodies may be defined and connected together through joints (hinges, ball, universal, sliders, etc.). Joints may be modeled either by constraints or by adding joint degrees of freedom. To eliminate error brought about by the separation of structural analysis and control analysis, LATDYN provides symbolic capabilities for modeling control systems which are integrated with the structural dynamic analysis itself. Its command language contains syntactical structures which perform symbolic operations which are also interfaced directly with the finite element structural model, bypassing the modal approximation. Thus, when the dynamic equations representing the structural model are integrated, the equations representing the control system are integrated along with them as a coupled system. This procedure also has the side benefit of enabling a dramatic simplification of the user interface for modeling control systems. Three FORTRAN computer programs, the LATDYN Program, the Preprocessor, and the Postprocessor, make up the collective LATDYN System. The Preprocessor translates user commands into a form which can be used while the LATDYN program provides the computational core. The Postprocessor allows the user to interactively plot and manage a database of LATDYN transient analysis results. It also includes special facilities for modeling control systems and for programming changes to the model which take place during analysis sequence. The documentation includes a Demonstration Problem Manual for the evaluation and verification of results and a Postprocessor guide. Because the program should be viewed as a byproduct of research on technology development, LATDYN's scope is limited. It does not have a wide library of finite elements, and 3-D Graphics are not available. Nevertheless, it does have a measure of "user friendliness". The LATDYN program was developed over a period of several years and was implemented on a CDC NOS/VE & Convex Unix computer. It is written in FORTRAN 77 and has a virtual memory requirement of 1.46 MB. The program was validated on a DEC MICROVAX operating under VMS 5.2.
A physically based catchment partitioning method for hydrological analysis
NASA Astrophysics Data System (ADS)
Menduni, Giovanni; Riboni, Vittoria
2000-07-01
We propose a partitioning method for the topographic surface, which is particularly suitable for hydrological distributed modelling and shallow-landslide distributed modelling. The model provides variable mesh size and appears to be a natural evolution of contour-based digital terrain models. The proposed method allows the drainage network to be derived from the contour lines. The single channels are calculated via a search for the steepest downslope lines. Then, for each network node, the contributing area is determined by means of a search for both steepest upslope and downslope lines. This leads to the basin being partitioned into physically based finite elements delimited by irregular polygons. In particular, the distributed computation of local geomorphological parameters (i.e. aspect, average slope and elevation, main stream length, concentration time, etc.) can be performed easily for each single element. The contributing area system, together with the information on the distribution of geomorphological parameters provide a useful tool for distributed hydrological modelling and simulation of environmental processes such as erosion, sediment transport and shallow landslides.
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1985-01-01
The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to the reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general purpose software system is emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis procedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction techniques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling give economical yet accurate analyses of complex structural systems.
Manual control models of industrial management
NASA Technical Reports Server (NTRS)
Crossman, E. R. F. W.
1972-01-01
The industrial engineer is often required to design and implement control systems and organization for manufacturing and service facilities, to optimize quality, delivery, and yield, and minimize cost. Despite progress in computer science, most such systems still employ human operators and managers as real-time control elements. Manual control theory should therefore be applicable to at least some aspects of industrial system design and operations. Formulation of adequate model structures is an essential prerequisite to progress in this area, since real-world production systems invariably include multilevel and multiloop control, and are implemented by timeshared human effort. A modular structure incorporating certain new types of functional element has been developed. This forms the basis for analysis of an industrial process operation. In this case it appears that managerial controllers operate in a discrete predictive mode based on fast-time modelling, with sampling interval related to plant dynamics. Successive aggregation causes reduced response bandwidth and hence increased sampling interval as a function of level.
Cancer biomarkers: the role of structured data reporting.
Simpson, Ross W; Berman, Michael A; Foulis, Philip R; Divaris, Dimitrios X G; Birdsong, George G; Mirza, Jaleh; Moldwin, Richard; Spencer, Samantha; Srigley, John R; Fitzgibbons, Patrick L
2015-05-01
The College of American Pathologists has been producing cancer protocols since 1986 to aid pathologists in the diagnosis and reporting of cancer cases. Many pathologists use the included cancer case summaries as templates for dictation/data entry into the final pathology report. These summaries are now available in a computer-readable format with structured data elements for interoperability, packaged as "electronic cancer checklists." Most major vendors of anatomic pathology reporting software support this model. Objective: To outline the development and advantages of structured electronic cancer reporting using the electronic cancer checklist model, and to describe its extension to cancer biomarkers and other aspects of cancer reporting. Data Sources: Peer-reviewed literature and internal records of the College of American Pathologists. Conclusions: Accurate and usable cancer biomarker data reporting will increasingly depend on initial capture of this information as structured data. This process will support the standardization of data elements and biomarker terminology, enabling the meaningful use of these datasets by pathologists, clinicians, tumor registries, and patients.
NASA Technical Reports Server (NTRS)
Smith, C. W.; Bhateley, I. C.
1976-01-01
Two techniques for extending the range of applicability of the basic vortex-lattice method are discussed. The first improves the computation of aerodynamic forces on thin, low-aspect-ratio wings of arbitrary planforms at subsonic Mach numbers by including the effects of leading-edge and tip vortex separation, characteristic of this type wing, through use of the well-known suction-analogy method of E. C. Polhamus. Comparisons with experimental data for a variety of planforms are presented. The second consists of the use of the vortex-lattice method to predict pressure distributions over thick multi-element wings (wings with leading- and trailing-edge devices). A method of laying out the lattice is described which gives accurate pressures on the top and part of the bottom surface of the wing. Limited comparisons between the result predicted by this method, the conventional lattice arrangement method, experimental data, and 2-D potential flow analysis techniques are presented.
The Boundary Integral Equation Method for Porous Media Flow
NASA Astrophysics Data System (ADS)
Anderson, Mary P.
Just as groundwater hydrologists are breathing sighs of relief after the exertions of learning the finite element method, a new technique has reared its nodes—the boundary integral equation method (BIEM) or the boundary equation method (BEM), as it is sometimes called. As Liggett and Liu put it in the preface to The Boundary Integral Equation Method for Porous Media Flow, “Lately, the Boundary Integral Equation Method (BIEM) has emerged as a contender in the computation Derby.” In fact, in July 1984, the 6th International Conference on Boundary Element Methods in Engineering will be held aboard the Queen Elizabeth II, en route from Southampton to New York. These conferences are sponsored by the Department of Civil Engineering at Southampton College (UK), whose members are proponents of BIEM. The conferences have featured papers on applications of BIEM to all aspects of engineering, including flow through porous media. Published proceedings are available, as are textbooks on application of BIEM to engineering problems. There is even a 10-minute film on the subject.
NASA Technical Reports Server (NTRS)
Walston, W. H., Jr.
1986-01-01
The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.
NASA Astrophysics Data System (ADS)
Bercea, Gheorghe-Teodor; McRae, Andrew T. T.; Ham, David A.; Mitchell, Lawrence; Rathgeber, Florian; Nardi, Luigi; Luporini, Fabio; Kelly, Paul H. J.
2016-10-01
We present a generic algorithm for numbering and then efficiently iterating over the data values attached to an extruded mesh. An extruded mesh is formed by replicating an existing mesh, assumed to be unstructured, to form layers of prismatic cells. Applications of extruded meshes include, but are not limited to, the representation of three-dimensional high aspect ratio domains employed by geophysical finite element simulations. These meshes are structured in the extruded direction. The algorithm presented here exploits this structure to avoid the performance penalty traditionally associated with unstructured meshes. We evaluate the implementation of this algorithm in the Firedrake finite element system on a range of low compute intensity operations which constitute worst cases for data layout performance exploration. The experiments show that having structure along the extruded direction enables the cost of the indirect data accesses to be amortized after 10-20 layers as long as the underlying mesh is well ordered. We characterize the resulting spatial and temporal reuse in a representative set of both continuous-Galerkin and discontinuous-Galerkin discretizations. On meshes with realistic numbers of layers the performance achieved is between 70 and 90 % of a theoretical hardware-specific limit.
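The payoff of the extruded numbering can be sketched in a few lines: one indirect lookup fetches a base-mesh cell's degree-of-freedom offsets, after which the whole column of layers is visited by direct, strided arithmetic, amortizing the indirection cost across the layers. The layout below (a flat array with a fixed per-layer stride and random base offsets) is an invented illustration of the principle, not Firedrake's actual data structure.

```python
import numpy as np

n_base_cells, n_layers, ndof_per_cell = 1000, 20, 6
rng = np.random.default_rng(3)

# Hypothetical indirection: unstructured base-mesh cell -> dof offsets.
cell_to_base_dofs = rng.integers(0, 5000, size=(n_base_cells, ndof_per_cell))

layer_stride = 5000                    # dofs added per extruded layer
data = np.zeros(5000 + n_layers * layer_stride)

for c in range(n_base_cells):          # one indirect lookup per column
    base = cell_to_base_dofs[c]
    for k in range(n_layers):          # direct, strided access up the column
        dofs = base + k * layer_stride
        data[dofs] += 1.0              # stand-in for a per-cell kernel

print("updated dof entries:", int(data.sum()))
```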
Finite element modeling of borehole heat exchanger systems. Part 1. Fundamentals
NASA Astrophysics Data System (ADS)
Diersch, H.-J. G.; Bauer, D.; Heidemann, W.; Rühaak, W.; Schätzl, P.
2011-08-01
Single borehole heat exchangers (BHE) and arrays of BHE are modeled by using the finite element method. The first part of the paper derives the fundamental equations for BHE systems and their finite element representations, where the thermal exchange between the borehole components is modeled via thermal transfer relations. For this purpose improved relationships for thermal resistances and capacities of BHE are introduced. Pipe-to-grout thermal transfer possesses multiple grout points for double U-shape and single U-shape BHE to attain a more accurate modeling. The numerical solution of the final 3D problems is performed via a widely non-sequential (essentially non-iterative) coupling strategy for the BHE and porous medium discretization. Four types of vertical BHE are supported: double U-shape (2U) pipe, single U-shape (1U) pipe, coaxial pipe with annular (CXA) and centred (CXC) inlet. Two computational strategies are used: (1) the analytical BHE method based on Eskilson and Claesson's (1988) solution, and (2) the numerical BHE method based on Al-Khoury et al.'s (2005) solution. The second part of the paper focuses on BHE meshing aspects, the validation of BHE solutions and practical applications for borehole thermal energy store systems.
NASA Astrophysics Data System (ADS)
Reinoso, J.; Paggi, M.; Linder, C.
2017-06-01
Fracture of technological thin-walled components can notably limit the performance of their corresponding engineering systems. With the aim of achieving reliable fracture predictions of thin structures, this work presents a new phase field model of brittle fracture for large deformation analysis of shells relying on a mixed enhanced assumed strain (EAS) formulation. The kinematic description of the shell body is constructed according to the solid shell concept. This enables the use of fully three-dimensional constitutive models for the material. The proposed phase field formulation integrates the use of the (EAS) method to alleviate locking pathologies, especially Poisson thickness and volumetric locking. This technique is further combined with the assumed natural strain method to efficiently derive a locking-free solid shell element. On the computational side, a fully coupled monolithic framework is consistently formulated. Specific details regarding the corresponding finite element formulation and the main aspects associated with its implementation in the general purpose packages FEAP and ABAQUS are addressed. Finally, the applicability of the current strategy is demonstrated through several numerical examples involving different loading conditions, and including linear and nonlinear hyperelastic constitutive models.
NASA Astrophysics Data System (ADS)
Giannopoulos, Georgios I.; Kontoni, Denise-Penelope N.; Georgantzinos, Stylianos K.
2016-08-01
This paper describes the static and free vibration behavior of single-walled boron nitride nanotubes using a structural mechanics based finite element method. First, depending on the type of nanotube under investigation, its three-dimensional nanostructure is developed according to the well-known positions of the boron and nitrogen atoms as well as the boron-nitrogen bonds. Then, appropriate point masses are assigned to the atomic positions of the developed space frame. Next, these point masses are suitably interconnected with two-noded, linear, spring-like finite elements. In order to simulate effectively the interactions observed between boron and nitrogen atoms within the nanotube, appropriate potential energy functions are introduced for these finite elements. In this manner, various atomistic models for both armchair and zigzag nanotubes with different aspect ratios are numerically analyzed and their effective elastic moduli as well as their natural frequencies and corresponding mode shapes are obtained. Regarding the free vibration analysis, the computed results reveal bending, breathing and axial modes of vibration depending on the nanotube size and chirality as well as the applied boundary support conditions. The longitudinal stiffness of the boron nitride nanotubes is also found to be sensitive to their geometric characteristics.
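The generic recipe is easy to demonstrate on a toy problem: assemble a stiffness matrix from two-noded spring elements, lump point masses at the atomic sites, and solve the generalized eigenproblem K x = w^2 M x for the natural frequencies. The 1D atom chain below, assuming numpy and scipy, illustrates only the procedure; the stiffness, mass and geometry values are placeholders, not boron nitride force-field parameters.

```python
import numpy as np
from scipy.linalg import eigh

n, k, m = 50, 10.0, 1.0                 # atoms, bond stiffness, atomic mass
K = np.zeros((n, n))
for e in range(n - 1):                  # two-noded spring elements
    ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K[e:e+2, e:e+2] += ke
M = m * np.eye(n)                       # lumped point masses at atom sites

# Fix one end (cantilever-like support) and solve K x = w^2 M x.
Kc, Mc = K[1:, 1:], M[1:, 1:]
w2 = eigh(Kc, Mc, eigvals_only=True)
freqs = np.sqrt(w2) / (2*np.pi)
print("lowest three natural frequencies:", freqs[:3])
```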
NASA Technical Reports Server (NTRS)
Jenkins, Jerald M.
1987-01-01
Temperature, thermal stresses, and residual creep stresses were studied by comparing laboratory values measured on a built-up titanium structure with values calculated from finite-element models. Several such models were used to examine the relationship between computational thermal stresses and thermal stresses measured on a built-up structure. Element suitability, element density, and computational temperature discrepancies were studied to determine their impact on measured and calculated thermal stress. The optimum number of elements is established from a balance between element density and suitable safety margins, such that the answer is acceptably safe yet is economical from a computational viewpoint. It is noted that situations exist where relatively small excursions of calculated temperatures from measured values result in far more than proportional increases in thermal stress values. Measured residual stresses due to creep significantly exceeded the values computed by the piecewise linear elastic strain analogy approach. The most important element in the computation is the correct definition of the creep law. Computational methodology advances in predicting residual stresses due to creep require significantly more viscoelastic material characterization.
NASA Astrophysics Data System (ADS)
Wang, Dong; Tan, Danielle S.
2017-12-01
We use discrete element modelling to simulate, in 2D, a system of sand released underwater, similar to the process of releasing sediment tailings back into the sea in nodule harvesting. The force model includes concentration-dependent drag, buoyancy, 'added mass' and a Stokeslet disturbance. For a fixed number of uniform-sized particles, we vary the aspect ratio and the compression ratio of the rectangular mass of granular media pre-release. We observed that the spreading increases nonlinearly with aspect ratio. On the other hand, when the compression ratio is increased, the total spreading increases; however, the spread of the bulk of the sand decreases at small aspect ratios and increases at large aspect ratios. We proposed a simple theoretical model for the horizontal spreading which depends on both the aspect and compression ratios.
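A minimal sketch of the kind of per-particle force balance described, reduced to a single settling particle: buoyant weight plus a Stokes-type drag with a simple concentration correction, divided by particle mass plus added mass. The drag-correction exponent and all material constants are illustrative assumptions, and the Stokeslet disturbance term is omitted.

```python
import numpy as np

rho_p, rho_f, d = 2650.0, 1000.0, 2e-4   # particle/fluid density, diameter (SI)
g = np.array([0.0, -9.81])
mu, Cam = 1e-3, 0.5                      # fluid viscosity, added-mass coeff.
m = rho_p * np.pi/6 * d**3
m_added = Cam * rho_f * np.pi/6 * d**3   # added ("virtual") mass

def accel(v, u_f, phi):
    """Acceleration of one particle: buoyant weight plus Stokes-type drag
    with a simple concentration correction (1-phi)**-beta. The coefficients
    are illustrative, not the paper's calibrated force model."""
    beta = 3.0
    drag = 3*np.pi*mu*d * (u_f - v) * (1.0 - phi)**(-beta)
    weight = (rho_p - rho_f) * np.pi/6 * d**3 * g
    return (weight + drag) / (m + m_added)

# Explicit time stepping of a single settling particle in still fluid.
v, dt = np.zeros(2), 1e-4
for _ in range(5000):
    v += accel(v, np.zeros(2), phi=0.1) * dt
print("approx. hindered settling velocity (m/s):", v)
```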
Computational Modeling for the Flow Over a Multi-Element Airfoil
NASA Technical Reports Server (NTRS)
Liou, William W.; Liu, Feng-Jun
1999-01-01
The flow over a multi-element airfoil is computed using two two-equation turbulence models. The computations are performed using the INS2D Navier-Stokes code for two angles of attack. Overset grids are used for the three-element airfoil. The computed results are compared with experimental data for the surface pressure, skin friction coefficient, and velocity magnitude. The computed surface quantities generally agree well with the measurements. The computed results reveal the possible existence of a mixing-layer-like region of flow next to the suction surface of the slat for both angles of attack.
NASA Astrophysics Data System (ADS)
Ardalan, A.; Safari, A.; Grafarend, E.
2003-04-01
An operational algorithm for computing the ellipsoidal terrain correction, based on application of the closed-form solution of the Newton integral in terms of Cartesian coordinates in the cylindrical equal-area map projection of a reference ellipsoid, has been developed. As a first step, the mapping of points on the surface of a reference ellipsoid onto the cylindrical equal-area projection of a cylinder tangent to a point on that surface is studied closely and the map projection formulas are derived. Ellipsoidal mass elements of various sizes on the surface of the reference ellipsoid are considered, and the gravitational potential and the vector of gravitational intensity of these mass elements are computed via the solution of the Newton integral in terms of ellipsoidal coordinates. The geographical cross-section areas of the selected ellipsoidal mass elements are transferred into the cylindrical equal-area map projection and, based on the transformed area elements, Cartesian mass elements with the same height as that of the ellipsoidal mass elements are constructed. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the potential of the Cartesian mass elements is computed and compared with the corresponding results based on the application of the ellipsoidal Newton integral over the ellipsoidal mass elements. The numerical computations show that the difference between the computed gravitational potential of the ellipsoidal mass elements and that of the Cartesian mass elements in the cylindrical equal-area map projection is of the order of 1.6 × 10⁻⁸ m²/s² for a mass element with a cross-section size of 10 km × 10 km and a height of 1000 m. For a 1 km × 1 km mass element with the same height, this difference is less than 1.5 × 10⁻⁴ m²/s². These results indicate that a new method for computing the terrain correction, based on the closed-form solution of the Newton integral in terms of Cartesian coordinates yet with the accuracy of the ellipsoidal terrain correction, has been achieved. In this way one can enjoy the simplicity of the solution of the Newton integral in terms of Cartesian coordinates and at the same time the accuracy of the ellipsoidal terrain correction, which is needed for the modern theory of geoid computations.
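For orientation, the spherical simplification of a cylindrical equal-area projection (the paper derives the ellipsoidal counterpart; R, φ, λ are standard notation for radius, latitude and longitude) maps a point as

```latex
x = R\,\lambda, \qquad y = R\,\sin\varphi ,
```

so that the area element $dA = R^{2}\cos\varphi\,d\lambda\,d\varphi$ maps onto $dx\,dy$ exactly; this area preservation is the property that lets Cartesian mass elements stand in for ellipsoidal ones.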
Rule-based programming paradigm: a formal basis for biological, chemical and physical computation.
Krishnamurthy, V; Krishnamurthy, E V
1999-03-01
A rule-based programming paradigm is described as a formal basis for biological, chemical and physical computations. In this paradigm, computations are interpreted as the outcome of interactions among elements in an object space. The interactions can create new elements (or the same elements with modified attributes) or annihilate old elements according to specific rules. Since the interaction rules are inherently parallel, any number of actions can be performed cooperatively or competitively among subsets of elements, so that the elements evolve toward an equilibrium, unstable or chaotic state. Such an evolution may retain certain invariant properties of the attributes of the elements. The object space resembles a Gibbsian ensemble, which corresponds to a distribution of points in the space of positions and momenta (called phase space). It permits the introduction of probabilities in rule applications. As each element of the ensemble changes over time, its phase point is carried into a new phase point. The evolution of this probability cloud in phase space corresponds to a distributed probabilistic computation. Thus, this paradigm can handle deterministic exact computation when the initial conditions are exactly specified and the trajectory of evolution is deterministic. It can also handle a probabilistic mode of computation when we want to derive macroscopic or bulk properties of matter. We also explain how to support this rule-based paradigm using relational-database-like query processing and transactions.
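A toy sketch of the flavor of such rule-based computation (our illustration, not the authors' formalism): a single pairwise rule is applied nondeterministically to a multiset until equilibrium, and the outcome is independent of the interaction order.

```python
import random

def react(space):
    """Apply the rule 'two elements react and are replaced by their sum'
    until the object space reaches equilibrium (one element left)."""
    space = list(space)
    while len(space) > 1:
        a, b = random.sample(space, 2)   # nondeterministic rule application
        space.remove(a)                  # annihilate the two reactants...
        space.remove(b)
        space.append(a + b)              # ...and create their product
    return space[0]

print(react([1, 2, 3, 4, 5]))  # always 15, regardless of interaction order
```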
NASA Astrophysics Data System (ADS)
Dodig, H.
2017-11-01
This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge-element BEM/FEM to compute near-field edge element coefficients that are associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for a near-to-far-field transformation (NTFFT), which is a common step in RCS computations. It is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones and pyramids. The method remains accurate even in the case of a dielectrically coated PEC sphere at an interior resonance frequency, which is a common problem for computational electromagnetics codes.
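For reference, the quantity being computed is the standard far-field radar cross section; in terms of the scattered and incident electric fields it reads (a textbook definition, not the paper's specific boundary integral formulation):

```latex
\sigma(\theta,\phi) \;=\; \lim_{r\to\infty} 4\pi r^{2}\,
\frac{\lvert \mathbf{E}^{s}(\theta,\phi)\rvert^{2}}{\lvert \mathbf{E}^{i}\rvert^{2}} .
```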
NASA Astrophysics Data System (ADS)
Puckett, Elbridge Gerry; Turcotte, Donald L.; He, Ying; Lokavarapu, Harsha; Robey, Jonathan M.; Kellogg, Louise H.
2018-03-01
Geochemical observations of mantle-derived rocks favor a nearly homogeneous upper mantle, the source of mid-ocean ridge basalts (MORB), and heterogeneous lower mantle regions. Plumes that generate ocean island basalts are thought to sample the lower mantle regions and exhibit more heterogeneity than MORB. These regions have been associated with lower mantle structures known as large low shear velocity provinces (LLSVPs) below Africa and the South Pacific. The isolation of these regions is attributed to compositional differences and density stratification that, consequently, have been the subject of computational and laboratory modeling designed to determine the parameter regime in which layering is stable and to understand how layering evolves. Mathematical models of persistent compositional interfaces in the Earth's mantle may be inherently unstable, at least in some regions of the parameter space relevant to the mantle. Computing approximations to solutions of such problems presents severe challenges, even to state-of-the-art numerical methods. Some numerical algorithms for modeling the interface between distinct compositions smear the interface at the boundary between compositions, such as methods that add numerical diffusion or 'artificial viscosity' in order to stabilize the algorithm. We present two new algorithms for maintaining high-resolution and sharp computational boundaries in computations of these types of problems: a discontinuous Galerkin method with a bound-preserving limiter and a Volume-of-Fluid interface tracking algorithm. We compare these new methods with two approaches widely used for modeling the advection of two distinct thermally driven compositional fields in mantle convection computations: a high-order accurate finite element advection algorithm with entropy viscosity and a particle method that carries a scalar quantity representing the location of each compositional field. All four algorithms are implemented in the open source finite element code ASPECT, which we use to compute the velocity, pressure, and temperature associated with the underlying flow field. We compare the performance of these four algorithms on three problems, including computing an approximation to the solution of an initially compositionally stratified fluid at Ra = 10⁵ with buoyancy numbers B that vary from no stratification at B = 0 to stratified flow at large B.
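For readers outside the field, the two dimensionless groups above are conventionally defined as follows (standard definitions with standard symbols ρ₀, g, α, ΔT, h, κ, η, Δρ_c; the paper's exact nondimensionalization may differ in detail):

```latex
Ra = \frac{\rho_0\, g\, \alpha\, \Delta T\, h^{3}}{\kappa\, \eta},
\qquad
B = \frac{\Delta\rho_{c}}{\rho_0\, \alpha\, \Delta T},
```

where Δρ_c is the intrinsic (compositional) density difference between the two fields; B = 0 thus means the compositions carry no density contrast.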
Cognitive neuroscience in forensic science: understanding and utilizing the human element
Dror, Itiel E.
2015-01-01
The human element plays a critical role in forensic science. It is not limited only to issues relating to forensic decision-making, such as bias, but also relates to most aspects of forensic work (some of which even take place before a crime is ever committed or long after the verification of the forensic conclusion). In this paper, I explicate many aspects of forensic work that involve the human element and therefore show the relevance (and potential contribution) of cognitive neuroscience to forensic science. The 10 aspects covered in this paper are proactive forensic science, selection during recruitment, training, crime scene investigation, forensic decision-making, verification and conflict resolution, reporting, the role of the forensic examiner, presentation in court and judicial decisions. As the forensic community is taking on the challenges introduced by the realization that the human element is critical for forensic work, new opportunities emerge that allow for considerable improvement and enhancement of the forensic science endeavour. PMID:26101281
Computer-aided engineering system for design of sequence arrays and lithographic masks
Hubbell, Earl A.; Lipshutz, Robert J.; Morris, Macdonald S.; Winkler, James L.
1997-01-01
An improved set of computer tools for forming arrays. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks.
Identity as an Element of Human and Language Universes: Axiological Aspect
ERIC Educational Resources Information Center
Zheltukhina, Marina R.; Vikulova, Larisa G.; Serebrennikova, Evgenia F.; Gerasimova, Svetlana A.; Borbotko, Liudmila A.
2016-01-01
Interest in the axiosphere, as the sphere of values, and in its correlation with the ever-progressive noosphere, as the sphere of a person's knowledge, stems from comprehension of the modern period in the evolution of society. The aim of the article is to describe an axiological aspect of the research on identity as an element of the human and language universes.…
Computation of Asteroid Proper Elements: Recent Advances
NASA Astrophysics Data System (ADS)
Knežević, Z.
2017-12-01
The recent advances in the computation of asteroid proper elements are briefly reviewed. Although not representing real breakthroughs in the computation and stability assessment of proper elements, these advances can still be considered important improvements offering solutions to some practical problems encountered in the past. The problem of obtaining unrealistic values of the perihelion frequency for very-low-eccentricity orbits is solved by computing frequencies using the frequency-modified Fourier transform. The synthetic resonant proper elements adjusted to a given secular resonance helped to prove the existence of the Astraea asteroid family. The preliminary assessment of the stability with time of proper elements computed by means of the analytical theory provides a good indication of their poorer performance with respect to their synthetic counterparts, and argues in favor of ceasing their regular maintenance; the final decision should, however, be taken on the basis of a more comprehensive and reliable direct estimate of their individual and sample-average deviations from constancy.
Advantages and disadvantages of computer imaging in cosmetic surgery.
Koch, R J; Chavez, A; Dagum, P; Newman, J P
1998-02-01
Despite the growing popularity of computer imaging systems, it is not clear whether the medical and legal advantages of using such a system outweigh the disadvantages. The purpose of this report is to evaluate these aspects and provide some protective guidelines for the use of computer imaging in cosmetic surgery. The positive and negative aspects of computer imaging from a medical and legal perspective are reviewed, and specific issues are examined by a legal panel. The greatest advantages are the exclusion of potential problem patients and enhanced physician-patient communication. Disadvantages include cost, the user learning curve, and potential liability. Careful use of computer imaging should actually reduce one's liability when all aspects are considered. Recommendations for such use and specific legal issues are discussed.
Methodologies and systems for heterogeneous concurrent computing
NASA Technical Reports Server (NTRS)
Sunderam, V. S.
1994-01-01
Heterogeneous concurrent computing is gaining increasing acceptance as an alternative or complementary paradigm to multiprocessor-based parallel processing as well as to conventional supercomputing. While algorithmic and programming aspects of heterogeneous concurrent computing are similar to their parallel processing counterparts, system issues, partitioning and scheduling, and performance aspects are significantly different. In this paper, we discuss critical design and implementation issues in heterogeneous concurrent computing, and describe techniques for enhancing its effectiveness. In particular, we highlight the system level infrastructures that are required, aspects of parallel algorithm development that most affect performance, system capabilities and limitations, and tools and methodologies for effective computing in heterogeneous networked environments. We also present recent developments and experiences in the context of the PVM system and comment on ongoing and future work.
Herweh, Christian; Ringleb, Peter A; Rauch, Geraldine; Gerry, Steven; Behrens, Lars; Möhlenbruch, Markus; Gottorf, Rebecca; Richter, Daniel; Schieber, Simon; Nagel, Simon
2016-06-01
The Alberta Stroke Program Early CT Score (ASPECTS) is an established 10-point quantitative topographic computed tomography score used to assess early ischemic changes. We compared the performance of the e-ASPECTS software with that of stroke physicians at different professional levels. The baseline computed tomography scans of acute stroke patients, in whom computed tomography and diffusion-weighted imaging scans were obtained less than two hours apart, were retrospectively scored by e-ASPECTS as well as by three stroke experts and three neurology trainees blinded to any clinical information. The ground truth was defined as the ASPECTS on diffusion-weighted imaging, scored on a consensus basis by another two non-blinded independent experts. Sensitivity and specificity in an ASPECTS region-based and an ASPECTS score-based analysis, as well as receiver-operating characteristic curves, Bland-Altman plots with mean score error, and Matthews correlation coefficients were calculated. Comparisons were made between the human scorers and e-ASPECTS, with diffusion-weighted imaging as the ground truth. Two methods for clustered data were used to estimate sensitivity and specificity in the region-based analysis. In total, 34 patients were included and 680 (34 × 20) ASPECTS regions were scored. Mean time from onset to computed tomography was 172 ± 135 min, and the mean time difference between computed tomography and magnetic resonance imaging was 41 ± 31 min. The region-based sensitivity (46.46% [CI: 30.8;62.1]) of e-ASPECTS was better than that of three trainees and one expert (p ≤ 0.01) and not statistically different from that of the other two experts. Specificity (94.15% [CI: 91.7;96.6]) was lower than that of one expert and one trainee (p < 0.01) and not statistically different from that of the other four physicians. e-ASPECTS had the best Matthews correlation coefficient of 0.44 (experts: 0.38 ± 0.08; trainees: 0.19 ± 0.05) and the lowest mean score error of 0.56 (experts: 1.44 ± 1.79; trainees: 1.97 ± 2.12). e-ASPECTS showed a performance similar to that of stroke experts in the assessment of brain computed tomography scans of acute ischemic stroke patients with the Alberta Stroke Program Early CT Score method. © 2016 World Stroke Organization.
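For reference, the Matthews correlation coefficient used above to rank the scorers is the standard binary-classification measure, computed here per ASPECTS region against the diffusion-weighted imaging ground truth:

```latex
\mathrm{MCC} \;=\; \frac{TP\cdot TN - FP\cdot FN}
{\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}} .
```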
Thermal stress prediction in mirror and multilayer coatings.
Cheng, Xianchao; Zhang, Lin; Morawe, Christian; Sanchez Del Rio, Manuel
2015-03-01
Multilayer optics for X-rays typically consist of hundreds of periods of two types of alternating sub-layers coated on a silicon substrate. The thickness of the coating is well below 1 µm (tens or hundreds of nanometers). The high aspect ratio (∼10⁷) between the size of the optics and the thickness of the multilayer can lead to a huge number of elements (∼10¹⁶) in the numerical simulation (by finite-element analysis using the ANSYS code). In this work, a finite-element model for the thermal-structural analysis of multilayer optics has been implemented using the ANSYS layer-functioned elements. The number of meshed elements is considerably reduced, and the number of sub-layers feasible on present computers is increased significantly. Based on this technique, single-layer coated mirrors and multilayer monochromators cooled by water or liquid nitrogen are studied with typical parameters of heat load, cooling and geometry. The effects of cooling-down of the optics and heating by the X-ray beam are described. It is shown that the influence of the coating on temperature and deformation is negligible. However, large stresses are induced in the layers owing to the different thermal expansion coefficients of the layer and substrate materials, which is the critical issue for the survival of the optics. This is particularly true for the liquid-nitrogen cooling condition. The material properties of thin multilayer films are applied in the simulation to predict the layer thermal stresses with greater precision.
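The layer stress mechanism described above can be estimated, to first order, with the standard biaxial thin-film relation (a textbook approximation with subscripts f for film and s for substrate, not the paper's full finite-element result):

```latex
\sigma_f \;\approx\; \frac{E_f}{1-\nu_f}\,(\alpha_s-\alpha_f)\,\Delta T ,
```

which makes clear why liquid-nitrogen cooling, with its large ΔT, is the most demanding condition for layer survival.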
Modelling of anisotropic growth in biological tissues. A new approach and computational aspects.
Menzel, A
2005-03-01
In this contribution, we develop a theoretical and computational framework for anisotropic growth phenomena. As a key idea of the proposed phenomenological approach, a fibre or, rather, structural tensor is introduced, which allows the description of transversely isotropic material behaviour. Based on this additional argument, anisotropic growth is modelled via appropriate evolution equations for the fibre, while volumetric remodelling is realised by an evolution of the referential density. Both the strength of the fibre and the density follow Wolff-type laws. We elaborate, however, on two different approaches for the evolution of the fibre direction, namely an alignment with respect to strain or with respect to stress. One of the main benefits of the developed framework is therefore the opportunity to address the evolution of the fibre strength and of the fibre direction separately. It is then straightforward to set up appropriate integration algorithms such that the developed framework fits nicely into common finite element schemes. Finally, several numerical examples underline the applicability of the proposed formulation.
Predictive modeling of dynamic fracture growth in brittle materials with machine learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, Bryan A.; Rougier, Esteban; O’Malley, Daniel
We use simulation data from a high-fidelity Finite-Discrete Element Model to build an efficient Machine Learning (ML) approach to predict fracture growth and coalescence. Our goal is for the ML approach to be used as an emulator in place of the computationally intensive high-fidelity models in an uncertainty quantification framework where thousands of forward runs are required. The failure of materials with various fracture configurations (size, orientation and the number of initial cracks) is explored and used as data to train our ML model. This novel approach has shown promise in predicting spatial (path to failure) and temporal (time to failure) aspects of brittle material failure. Predictions of where dominant fracture paths formed within a material were ~85% accurate, and the time of material failure deviated from the actual failure time by an average of ~16%. Additionally, the ML model achieves a reduction in computational cost of multiple orders of magnitude.
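A minimal emulator sketch in the spirit of this approach (not the authors' model; the features, the synthetic data and the regressor choice are illustrative assumptions) might look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Feature columns: number of cracks, mean crack length, mean orientation (rad).
X = rng.uniform([2, 0.01, 0.0], [30, 0.2, np.pi], size=(500, 3))
# Synthetic stand-in for the simulator's time-to-failure output.
t_fail = 1.0 / (X[:, 0] * X[:, 1]) + 0.1 * rng.standard_normal(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, t_fail, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out runs:", model.score(X_te, y_te))
```

Once trained, such an emulator answers each forward query in microseconds, which is what makes many-thousand-run uncertainty quantification tractable.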
Design oriented structural analysis
NASA Technical Reports Server (NTRS)
Giles, Gary L.
1994-01-01
Desirable characteristics and benefits of design oriented analysis methods are described and illustrated by presenting a synoptic description of the development and uses of the Equivalent Laminated Plate Solution (ELAPS) computer code. ELAPS is a design oriented structural analysis method which is intended for use in the early design of aircraft wing structures. Model preparation is minimized by using a few large plate segments to model the wing box structure. Computational efficiency is achieved by using a limited number of global displacement functions that encompass all segments over the wing planform. Coupling with other codes is facilitated since the output quantities such as deflections and stresses are calculated as continuous functions over the plate segments. Various aspects of the ELAPS development are discussed including the analytical formulation, verification of results by comparison with finite element analysis results, coupling with other codes, and calculation of sensitivity derivatives. The effectiveness of ELAPS for multidisciplinary design application is illustrated by describing its use in design studies of high speed civil transport wing structures.
NASA Astrophysics Data System (ADS)
Dodani, Sheel C.; Kiss, Gert; Cahn, Jackson K. B.; Su, Ye; Pande, Vijay S.; Arnold, Frances H.
2016-05-01
The dynamic motions of protein structural elements, particularly flexible loops, are intimately linked with diverse aspects of enzyme catalysis. Engineering of these loop regions can alter protein stability, substrate binding and even dramatically impact enzyme function. When these flexible regions are unresolvable structurally, computational reconstruction in combination with large-scale molecular dynamics simulations can be used to guide the engineering strategy. Here we present a collaborative approach that consists of both experiment and computation and led to the discovery of a single mutation in the F/G loop of the nitrating cytochrome P450 TxtE that simultaneously controls loop dynamics and completely shifts the enzyme's regioselectivity from the C4 to the C5 position of L-tryptophan. Furthermore, we find that this loop mutation is naturally present in a subset of homologous nitrating P450s and confirm that these uncharacterized enzymes exclusively produce 5-nitro-L-tryptophan, a previously unknown biosynthetic intermediate.
On the interpretation of kernels - Computer simulation of responses to impulse pairs
NASA Technical Reports Server (NTRS)
Hung, G.; Stark, L.; Eykhoff, P.
1983-01-01
A method is presented for the use of a unit impulse response and responses to impulse pairs of variable separation in the calculation of the second-degree kernels of a quadratic system. A quadratic system may be built from simple linear terms of known dynamics and a multiplier. Computer simulation results on quadratic systems with building elements of various time constants indicate that the term with the larger time constant before multiplication dominates the envelope of the off-diagonal kernel curves as these move perpendicular to and away from the main diagonal. The term with the smaller time constant before multiplication combines with the effect of the time constant after multiplication to dominate the kernel curves in the direction of the second-degree impulse response, i.e., parallel to the main diagonal. Such insights may be helpful in recognizing essential aspects of (second-degree) kernels; they may be used in simplifying the model structure and, perhaps, add to the physical/physiological understanding of the underlying processes.
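The underlying representation is the second-degree Volterra series (standard form; the notation here is assumed, not taken from the paper):

```latex
y(t) = \int_0^{\infty} h_1(\tau)\,u(t-\tau)\,d\tau
     + \int_0^{\infty}\!\!\int_0^{\infty} h_2(\tau_1,\tau_2)\,
       u(t-\tau_1)\,u(t-\tau_2)\,d\tau_1\,d\tau_2 .
```

Probing with an impulse pair $u(t)=\delta(t)+\delta(t-\Delta)$ and subtracting the two single-impulse responses isolates the cross term $2\,h_2(t,\,t-\Delta)$, so varying the separation $\Delta$ traces out the off-diagonal slices of the second-degree kernel.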
Computer-aided engineering system for design of sequence arrays and lithographic masks
Hubbell, Earl A.; Morris, MacDonald S.; Winkler, James L.
1999-01-05
An improved set of computer tools for forming arrays. According to one aspect of the invention, a computer system (100) is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files (104) to design and/or generate lithographic masks (110).
Computer-aided engineering system for design of sequence arrays and lithographic masks
Hubbell, E.A.; Morris, M.S.; Winkler, J.L.
1999-01-05
An improved set of computer tools for forming arrays is disclosed. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks. 14 figs.
Computer-aided engineering system for design of sequence arrays and lithographic masks
Hubbell, E.A.; Lipshutz, R.J.; Morris, M.S.; Winkler, J.L.
1997-01-14
An improved set of computer tools for forming arrays is disclosed. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks. 14 figs.
Cloud-free resolution element statistics program
NASA Technical Reports Server (NTRS)
Liley, B.; Martin, C. D.
1971-01-01
A computer program computes the number of cloud-free elements in the field of view and the percentage of the total field of view occupied by clouds. The human error inherent in visual estimation is eliminated by computing the cloud statistics directly from aerial photographs.
Improving finite element results in modeling heart valve mechanics.
Earl, Emily; Mohammadi, Hadi
2018-06-01
Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. One way of addressing this issue is to implement computationally expensive finite element models characterized by precise constitutive models together with high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse-mesh finite element models to provide accuracy comparable to that of fine-mesh finite element models while maintaining a relatively low computational cost. We introduce a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on a least-squares algorithm coupled with the finite difference method, under the assumption that the components of the strain tensor are available at all nodes of the finite element mesh. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.
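A drastically simplified 1D illustration of the least-squares ingredient of such an enhancement (an assumed setup; the published method operates on strain tensor components over a full finite element mesh and also uses finite differences):

```python
import numpy as np

# Coarse-mesh nodal strains (synthetic, slightly noisy stand-ins).
rng = np.random.default_rng(1)
x_nodes = np.linspace(0.0, 1.0, 6)
eps_nodes = np.sin(np.pi * x_nodes) + 0.02 * rng.standard_normal(x_nodes.size)

# Least-squares cubic fit to the nodal strain values.
A = np.vander(x_nodes, 4)
coef, *_ = np.linalg.lstsq(A, eps_nodes, rcond=None)

# Evaluate the smoothed strain field on a finer grid, mimicking the
# accuracy of a denser mesh at negligible extra cost.
x_fine = np.linspace(0.0, 1.0, 51)
eps_enhanced = np.vander(x_fine, 4) @ coef
```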
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Otoguro, Yuto
2018-04-01
Stabilized methods, which have been very common in flow computations for many years, typically involve stabilization parameters, and discontinuity-capturing (DC) parameters if the method is supplemented with a DC term. Various well-performing stabilization and DC parameters have been introduced for stabilized space-time (ST) computational methods in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible and compressible flows. These parameters were all originally intended for finite element discretization but are quite often used also for isogeometric discretization. The stabilization and DC parameters we present here for ST computations are in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible flows; they target isogeometric discretization and are also applicable to finite element discretization. The parameters are based on a direction-dependent element length expression. The expression is the outcome of an easy-to-understand derivation. The key components of the derivation are mapping the direction vector from the physical ST element to the parent ST element, accounting for the discretization spacing along each of the parametric coordinates, and mapping what we have in the parent element back to the physical element. The test computations we present for pure-advection cases show that the proposed parameters result in good solution profiles.
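One widely used direction-dependent element length of this general type (shown here in its standard finite element form; the paper's isogeometric version is derived via the parent-element mapping described above) is

```latex
h_{\mathbf r} \;=\; 2\left( \sum_{a=1}^{n_{en}} \lvert \mathbf r \cdot \nabla N_a \rvert \right)^{-1},
```

where r is a unit vector along the direction of interest (e.g. the local flow direction) and the N_a are the element shape functions; the stabilization parameter then scales with this length.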
On modelling three-dimensional piezoelectric smart structures with boundary spectral element method
NASA Astrophysics Data System (ADS)
Zou, Fangxin; Aliabadi, M. H.
2017-05-01
The computational efficiency of the boundary element method in elastodynamic analysis can be significantly improved by employing high-order spectral elements for boundary discretisation. In this work, for the first time, the so-called boundary spectral element method is utilised to model the piezoelectric smart structures that are widely used in structural health monitoring (SHM) applications. The resulting boundary spectral element formulation has been validated against the finite element method (FEM) and physical experiments. The new formulation has demonstrated a lower demand on computational resources and a higher numerical stability than commercial FEM packages. Compared to the conventional boundary element formulation, a significant reduction in computational expense has been achieved. In summary, the boundary spectral element formulation presented in this paper provides a highly efficient and stable mathematical tool for the development of SHM applications.
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1988-01-01
This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is based on application of transform techniques in conjunction with classical Galerkin schemes and is a hybrid approach. The purpose of this paper is to provide a viable hybrid computational methodology for applicability to general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology successfully provides a viable computational approach and numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.
Finite element analysis simulations for ultrasonic array NDE inspections
NASA Astrophysics Data System (ADS)
Dobson, Jeff; Tweedie, Andrew; Harvey, Gerald; O'Leary, Richard; Mulholland, Anthony; Tant, Katherine; Gachagan, Anthony
2016-02-01
Advances in manufacturing techniques and materials have led to an increase in the demand for reliable and robust inspection techniques to maintain safety-critical features. The application of modelling methods to develop and evaluate inspections is becoming an essential tool for the NDE community. Current analytical methods are inadequate for the simulation of arbitrary components and heterogeneous materials, such as anisotropic welds or composite structures. Finite element analysis (FEA) software, such as PZFlex, can simulate the inspection of these arrangements, providing the ability to economically prototype and evaluate improved NDE methods. FEA is often seen as computationally expensive for ultrasound problems; however, advances in computing power have made it a more viable tool. This paper aims to illustrate the capability of appropriate FEA to produce accurate simulations of ultrasonic array inspections, minimizing the requirement for expensive test-piece fabrication. Validation is afforded via corroboration of the FE-derived and experimentally generated data sets for a test block comprising 1D and 2D defects. The modelling approach is extended to consider the more troublesome aspects of heterogeneous materials, where defect dimensions can be of the same length scale as the grain structure. The model is used to facilitate the implementation of new ultrasonic array inspection methods for such materials. This is exemplified by considering the simulation of ultrasonic NDE in a weld structure in order to assess new approaches to imaging such structures.
Human-Computer Interaction: A Review of the Research on Its Affective and Social Aspects.
ERIC Educational Resources Information Center
Deaudelin, Colette; Dussault, Marc; Brodeur, Monique
2003-01-01
Discusses a review of 34 qualitative and non-qualitative studies related to affective and social aspects of student-computer interactions. Highlights include the nature of the human-computer interaction (HCI); the interface, comparing graphic and text types; and the relation between variables linked to HCI, mainly trust, locus of control,…
ERIC Educational Resources Information Center
Wayman, Ian; Kyobe, Michael
2012-01-01
As students in computing disciplines are introduced to modern information technologies, numerous unethical practices also escalate. With the increase in stringent legislations on use of IT, users of technology could easily be held liable for violation of this legislation. There is however lack of understanding of social aspects of computing, and…
Zakerian, SA; Subramaniam, ID
2011-01-01
Background: With computers rapidly carving a niche in virtually every nook and crevice of today's fast-paced society, musculoskeletal disorders are becoming more prevalent among computer users, who comprise a wide spectrum of the Malaysian population, including office workers. While the extant literature reflects extensive research on musculoskeletal disorders in general, the five dimensions of psychosocial work factors (job demands, job contentment, job control, computer-related problems and social interaction) attributed to work-related musculoskeletal disorders have been neglected. This study examines these elements in detail, pertaining to their relationship with musculoskeletal disorders, focusing in particular on 120 office workers at Malaysian public sector organizations whose jobs require intensive computer usage. Methods: Research was conducted between March and July 2009 in public service organizations in Malaysia. The study was conducted via a survey utilizing self-completed questionnaires and a diary. The relationship between psychosocial work factors and musculoskeletal discomfort was ascertained through regression analyses, which revealed that some factors were more important than others. Results: The results indicate a significant relationship between psychosocial work factors and musculoskeletal discomfort among computer users. Several of these factors, such as job control, computer-related problems and social interaction, are found to be more important than others in musculoskeletal discomfort. Conclusion: With computer usage on the rise, the prevalence of musculoskeletal discomfort could lead to unnecessary disabilities; hence the vital need for greater attention to this aspect in the workplace, to alleviate, to some extent, potential future problems. PMID:23113058
Total-reflection X-ray fluorescence studies of trace elements in biomedical samples
NASA Astrophysics Data System (ADS)
Kubala-Kukuś, A.; Braziewicz, J.; Pajek, M.
2004-08-01
Application of total-reflection X-ray fluorescence (TXRF) analysis in studies of trace element contents in biomedical samples is discussed with respect to the following aspects: (i) the nature of trace element concentration distributions, (ii) a censoring approach to the detection limits, and (iii) the comparison of two sets of censored data. The paper summarizes recent results on these topics, in particular the lognormal, or more generally logstable, nature of the concentration distribution of trace elements; random left-censoring and the Kaplan-Meier approach accounting for detection limits; and, finally, the application of the logrank test to compare the censored concentrations measured for two groups. These new aspects, which are of importance for applications of TXRF in different fields, are discussed here in the context of TXRF studies of trace elements in various samples of medical interest.
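For reference, the lognormal model invoked above assigns a concentration x the density (standard form)

```latex
f(x) \;=\; \frac{1}{x\,\sigma\sqrt{2\pi}}
\exp\!\left(-\frac{(\ln x-\mu)^{2}}{2\sigma^{2}}\right), \qquad x>0 ,
```

so that ln x is normally distributed; left-censoring then arises because concentrations below the detection limit yield only the bound x < DL rather than a measured value, which is what the Kaplan-Meier machinery accommodates.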
[The occurrence of lead and cadmium in the hip joint in the context of exposure to tobacco smoke].
Bogunia, Mariusz; Brodziak-Dopierała, Barbara; Kwapuliński, Jerzy; Ahnert, Bozena; Kowol, Jolanta; Nogaj, Ewa
2008-01-01
The objective of this study was the quantification of cadmium and lead content in selected elements of the hip joint with respect to tobacco smoking. The material for the research comprised five elements of the hip joint (the articular cartilage, trabecular bone and cortical bone of the femoral head, a fragment of the articular capsule and a fragment of trabecular bone from the intertrochanteric region of the femur), obtained intraoperatively during endoprosthesis replacement surgeries. The samples came from inhabitants of the Upper Silesian Region. Determination of trace element contents was performed by atomic absorption spectrometry (Pye Unicam SP-9) in an acetylene-oxygen flame. Higher contents of lead were observed for smokers, whereas in the case of cadmium the differences between smokers and non-smokers were not statistically significant.
NASA Technical Reports Server (NTRS)
Reardon, John E.; Violett, Duane L., Jr.
1991-01-01
The AFAS Database System was developed to provide the basic structure of a comprehensive database system for the Marshall Space Flight Center (MSFC) Structures and Dynamics Laboratory Aerophysics Division. The system is intended to handle all of the Aerophysics Division Test Facilities as well as data from other sources. The system was written for the DEC VAX family of computers in FORTRAN-77 and utilizes the VMS indexed file system and screen management routines. Various aspects of the system are covered, including a description of the user interface, lists of all code structure elements, descriptions of the file structures, a description of the security system operation, a detailed description of the data retrieval tasks, a description of the session log, and a description of the archival system.
NASA Technical Reports Server (NTRS)
Arenburg, R. T.; Reddy, J. N.
1991-01-01
The micromechanical constitutive theory is used to examine the nonlinear behavior of continuous-fiber-reinforced metal-matrix composite structures. Effective lamina constitutive relations based on the Aboudi micromechanics theory are presented. The inelastic matrix behavior is modeled by the unified viscoplasticity theory of Bodner and Partom. The laminate constitutive relations are incorporated into a first-order shear deformation plate theory. The resulting boundary value problem is solved by utilizing the finite element method. Attention is also given to computational aspects of the numerical solution, including the temporal integration of the inelastic strains and the spatial integration of bending moments. Numerical results for the nonlinear response of metal-matrix composites subjected to extensional and bending loads are presented.
Conformal piezoelectric energy harvesting and storage from motions of the heart, lung, and diaphragm
Dagdeviren, Canan; Yang, Byung Duk; Su, Yewang; Tran, Phat L.; Joe, Pauline; Anderson, Eric; Xia, Jing; Doraiswamy, Vijay; Dehdashti, Behrooz; Feng, Xue; Lu, Bingwei; Poston, Robert; Khalpey, Zain; Ghaffari, Roozbeh; Huang, Yonggang; Slepian, Marvin J.; Rogers, John A.
2014-01-01
Here, we report advanced materials and devices that enable high-efficiency mechanical-to-electrical energy conversion from the natural contractile and relaxation motions of the heart, lung, and diaphragm, demonstrated in several different animal models, each of which has organs with sizes that approach human scales. A cointegrated collection of such energy-harvesting elements with rectifiers and microbatteries provides an entire flexible system, capable of viable integration with the beating heart via medical sutures and operation with efficiencies of ∼2%. Additional experiments, computational models, and results in multilayer configurations capture the key behaviors, illuminate essential design aspects, and offer sufficient power outputs for operation of pacemakers, with or without battery assist. PMID:24449853
NASA Technical Reports Server (NTRS)
Miller, R. H.; Minsky, M. L.; Smith, D. B. S.
1982-01-01
Applications of automation, robotics, and machine intelligence systems (ARAMIS) to space activities and their related ground support functions are studied, so that informed decisions can be made on which aspects of ARAMIS to develop. The specific tasks which will be required by future space project tasks are identified and the relative merits of these options are evaluated. The ARAMIS options defined and researched span the range from fully human to fully machine, including a number of intermediate options (e.g., humans assisted by computers, and various levels of teleoperation). By including this spectrum, the study searches for the optimum mix of humans and machines for space project tasks.
Parallel computation using boundary elements in solid mechanics
NASA Technical Reports Server (NTRS)
Chien, L. S.; Sun, C. T.
1990-01-01
The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming a linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency of the computation. First, the global coefficient matrix is formed concurrently. Then, a parallel Gaussian elimination scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for a demonstration problem solved on a Sequent Symmetry S81 parallel computer.
Prediction and phylogenetic analysis of mammalian short interspersed elements (SINEs).
Rogozin, I B; Mayorov, V I; Lavrentieva, M V; Milanesi, L; Adkison, L R
2000-09-01
The presence of repetitive elements can create serious problems for sequence analysis, especially in the case of homology searches in nucleotide sequence databases. Repetitive elements should be treated carefully by using special programs and databases. In this paper, various aspects of SINE (short interspersed repetitive element) identification, analysis and evolution are discussed.
A hybrid computational model to explore the topological characteristics of epithelial tissues.
González-Valverde, Ismael; García-Aznar, José Manuel
2017-11-01
Epithelial tissues show a particular topology where cells resemble a polygon-like shape, but some biological processes can alter this tissue topology. During cell proliferation, mitotic cell dilation deforms the tissue and modifies the tissue topology. Additionally, cells are reorganized in the epithelial layer and these rearrangements also alter the polygon distribution. We present here a computer-based hybrid framework focused on the simulation of epithelial layer dynamics that combines discrete and continuum numerical models. In this framework, we consider topological and mechanical aspects of the epithelial tissue. Individual cells in the tissue are simulated by an off-lattice agent-based model, which keeps the information of each cell. In addition, we model the cell-cell interaction forces and the cell cycle. Separately, we simulate the passive mechanical behaviour of the cell monolayer using a material that approximates the mechanical properties of the cell. This continuum approach is solved by the finite element method, which uses a dynamic mesh generated by the triangulation of cell polygons. Forces generated by cell-cell interaction in the agent-based model are also applied on the finite element mesh. Cell movement in the agent-based model is driven by the displacements obtained from the deformed finite element mesh of the continuum mechanical approach. We successfully compare the results of our simulations with experiments on the topology of proliferating epithelial tissues in Drosophila. Our framework is able to model the emergent behaviour of the cell monolayer that is due to local cell-cell interactions, which have a direct influence on the dynamics of the epithelial tissue. Copyright © 2017 John Wiley & Sons, Ltd.
Ferreiro, Diego U.; Komives, Elizabeth A.; Wolynes, Peter G.
2014-01-01
Biomolecules are the prime information processing elements of living matter. Most of these inanimate systems are polymers that compute their own structures and dynamics using as input the seemingly random character strings of their sequence, following which they coalesce and perform integrated cellular functions. In large computational systems with finite interaction codes, the appearance of conflicting goals is inevitable. Simple conflicting forces can lead to quite complex structures and behaviors, leading to the concept of frustration in condensed matter. We present here some basic ideas about frustration in biomolecules and how the frustration concept leads to a better appreciation of many aspects of the architecture of biomolecules, and of how biomolecular structure connects to function. These ideas are simultaneously both seductively simple and perilously subtle to grasp completely. The energy landscape theory of protein folding provides a framework for quantifying frustration in large systems and has been implemented at many levels of description. We first review the notion of frustration from the areas of abstract logic and its uses in simple condensed matter systems. We then discuss how the frustration concept applies specifically to heteropolymers, testing folding landscape theory in computer simulations of protein models and in experimentally accessible systems. Studying aspects of frustration averaged over many proteins provides ways to infer energy functions useful for reliable structure prediction. We discuss how frustration affects folding mechanisms. We review how a large part of the biological functions of proteins are related to subtle local physical frustration effects and how frustration influences the appearance of metastable states, the nature of binding processes, catalysis and allosteric transitions. We hope to illustrate how frustration is a fundamental concept in relating function to structural biology. PMID:25225856
Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements
Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.
2016-01-01
Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037
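For concreteness, the TET10 interpolation referenced above uses the standard quadratic shape functions in volume (barycentric) coordinates λ₁,…,λ₄ (a textbook form, independent of the FEBio implementation details):

```latex
N_i = \lambda_i\,(2\lambda_i - 1) \quad \text{(corner nodes)}, \qquad
N_{ij} = 4\,\lambda_i \lambda_j \quad \text{(mid-edge nodes)} ,
```

which is what gives these elements their quadratic geometry and displacement representation at modest cost over TET4.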
Kamp, I; Van Veen, S A T; Vink, P
2015-01-01
The use of mobile devices as an addition to or replacement of desktop computers for traditional office work results in more flexible workplaces. Consequently, transportation time is used for office work, and this calls for comfortable mobile offices. The aim of this review is to provide a framework of the relevant elements for comfortable mobile offices and to define needs for future research. The literature review draws on 68 papers, theses, reviews and critiques. The framework is based on existing literature on traditional office ergonomics and on comfort literature for different transportation modes such as trains, buses, airplanes and cars. The main differences from traditional offices are the type of devices, the dynamic versus static situation, the sole use of mobile devices and therefore the need for good arm support to avoid uncomfortable neck flexion, the limited space, and the presence of strangers, which influences the perception of privacy. Important topics for future research are: the effect on the employee and the environment of the ability and demand to work anywhere, and the requirements for the physical aspects of mobile offices.
Mohammadi, Amrollah; Ahmadian, Alireza; Rabbani, Shahram; Fattahi, Ehsan; Shirani, Shapour
2017-12-01
Finite element models for the estimation of intraoperative brain shift suffer from huge computational cost. In these models, image registration and finite element analysis are two time-consuming processes. The proposed method is an improved version of our previously developed Finite Element Drift (FED) registration algorithm. In this work the registration process is combined with the finite element analysis. In the Combined FED (CFED), the deformation of the whole brain mesh is iteratively calculated by geometrical extension of a local load vector which is computed by FED. While the processing time of the FED-based method, including registration and finite element analysis, was about 70 s, the computation time of the CFED was about 3.2 s. The computational cost of CFED is almost 50% less than that of similar state-of-the-art brain shift estimators based on finite element models. The proposed combination of registration and structural analysis can make the calculation of brain deformation much faster. Copyright © 2016 John Wiley & Sons, Ltd.
On computational methods for crashworthiness
NASA Technical Reports Server (NTRS)
Belytschko, T.
1992-01-01
The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.
NASA Astrophysics Data System (ADS)
Tavadyan, Levon, Prof; Sachkov, Viktor, Prof; Godymchuk, Anna, Dr.; Bogdan, Anna
2016-01-01
The 2nd International Symposium «Fundamental Aspects of Rare-earth Elements Mining and Separation and Modern Materials Engineering» (REES2015) was jointly organized by Tomsk State University (Russia), the National Academy of Sciences (Armenia), Shenyang Polytechnic University (China), the Moscow Institute of Physics and Engineering (Russia), the Siberian Physical-Technical Institute (Russia), and Tomsk Polytechnic University (Russia) on September 7-15, 2015, in Belokurikha, Russia. The Symposium featured high-quality presentations and gathered engineers, scientists, academicians, and young researchers working in the field of rare and rare-earth element mining, modification, separation, elaboration and application, in order to facilitate the sharing of interests and results for better collaboration and visibility of activities. The goal of REES2015 was to bring researchers and practitioners together to share the latest knowledge on rare and rare-earth element technologies. The Symposium was aimed at presenting new trends in rare and rare-earth element mining, research and separation and recent achievements in the elaboration and development of advanced materials for different purposes, as well as strengthening the existing contacts between manufacturers, highly qualified specialists and young scientists. The topics of REES2015 were: (1) problems of extraction and separation of rare and rare-earth elements; (2) methods and approaches to the separation and isolation of rare and rare-earth elements with ultra-high purity; (3) industrial technologies for the production and separation of rare and rare-earth elements; (4) economic aspects in the technology of rare and rare-earth elements; and (5) rare and rare-earth based materials (applications in metallurgy, catalysis, medicine, optoelectronics, etc.). We want to thank the Organizing Committee, the universities and sponsors supporting the Symposium, and everyone who contributed to the organization of the event and to the publication of these proceedings.
Efficient simulation of incompressible viscous flow over multi-element airfoils
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Wiltberger, N. Lyn; Kwak, Dochan
1992-01-01
The incompressible, viscous, turbulent flow over single and multi-element airfoils is numerically simulated in an efficient manner by solving the incompressible Navier-Stokes equations. The computer code uses the method of pseudo-compressibility with an upwind-differencing scheme for the convective fluxes and an implicit line-relaxation solution algorithm. The motivation for this work includes interest in studying the high-lift take-off and landing configurations of various aircraft. In particular, accurate computation of lift and drag at various angles of attack, up to stall, is desired. Two different turbulence models are tested in computing the flow over an NACA 4412 airfoil; an accurate prediction of stall is obtained. The approach used for multi-element airfoils involves the use of multiple zones of structured grids fitted to each element. Two different approaches are compared: a patched system of grids, and an overlaid Chimera system of grids. Computational results are presented for two-element, three-element, and four-element airfoil configurations. Excellent agreement with experimental surface pressure coefficients is seen. The code converges in less than 200 iterations, requiring on the order of one minute of CPU time (on a CRAY YMP) per element in the airfoil configuration.
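For reference, the pseudo-compressibility (artificial compressibility) idea replaces the divergence-free constraint with a pressure equation marched in pseudo-time. A standard statement of the system is given below; this is the generic textbook form with a user-chosen artificial compressibility parameter β and effective viscosity ν + ν_t, not necessarily the exact variant implemented in the code:

```latex
\frac{\partial p}{\partial \tau} + \beta\,\frac{\partial u_j}{\partial x_j} = 0, \qquad
\frac{\partial u_i}{\partial \tau} + \frac{\partial (u_i u_j)}{\partial x_j}
  = -\frac{\partial p}{\partial x_i}
  + \frac{\partial}{\partial x_j}\left[ (\nu + \nu_t)\,\frac{\partial u_i}{\partial x_j} \right].
```

Marching these equations to a pseudo-steady state at each step recovers a divergence-free velocity field, which is what makes the implicit upwind machinery of compressible-flow solvers applicable to the incompressible equations.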
Kojic, M; Milosevic, M; Simic, V; Koay, E J; Kojic, N; Ziemys, A; Ferrari, M
2018-05-21
One of the basic and vital processes in living organisms is mass exchange, which occurs on several levels: it goes from blood vessels to cells and to organelles within cells. On that path, molecules such as oxygen, metabolic products, and drugs traverse different macro- and micro-environments - blood, extracellular/intracellular space, and the interior of organelles - as well as biological barriers such as the walls of blood vessels and the membranes of cells and organelles. Many aspects of this mass transport remain unknown, particularly the biophysical mechanisms governing drug delivery. The main research approach relies on laboratory and clinical investigations. In parallel, considerable efforts have been directed at developing computational tools for additional insight into the intricate process of mass exchange and transport. Along these lines, we have recently formulated a composite smeared finite element (CSFE) which is composed of the smeared continuum pressure and concentration fields of the capillary and lymphatic system, and of these fields within tissue. The element offers an elegant and simple procedure which opens up new lines of inquiry and can be applied to large systems such as models of organs and tumors. Here, we extend this concept to a multiscale scheme which concurrently couples domains that span from large blood vessels, capillaries, and lymph to cell cytosol and further to organelles of nanometer size. These spatial physical domains are coupled by appropriate connectivity elements representing biological barriers. The composite finite element has "degrees of freedom" which include pressures and concentrations of all compartments of the vessel-tissue assemblage. The overall model uses standard, measurable material properties of the continuum biological environments and biological barriers. It can be considered a framework into which various additional effects (such as electrical or biochemical) for transport through membranes or within cells can be incorporated. This concept and the developed FE software within our package PAK offer a computational tool that can be applied to whole-organ systems, while also including specific domains such as tumors. The solved examples demonstrate the accuracy of this model and its applicability to large biological systems. Copyright © 2018. Published by Elsevier Ltd.
Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?
NASA Astrophysics Data System (ADS)
Schild, Jonas; Masuch, Maic
2012-03-01
This paper explores the graphical design and spatial alignment of visual information and graphical elements in stereoscopically filmed content, e.g., captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g., Nvidia 3D Vision) or that have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge their effect and comfort, and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities, and challenges for integrating visual information elements into 3D-TV content. This work should further help to improve current editing systems and identifies a need for future editing systems for 3D-TV, e.g., live editing and real-time alignment of visual information into 3D footage.
Sardhara, Jayesh; Pavaman, Sindgikar; Das, Kuntal; Srivastava, Arun; Mehrotra, Anant; Behari, Sanjay
2016-11-01
Congenital spondylolytic spondylolisthesis of the C2 vertebra resulting from a deficient posterior element of the axis is rarely described in the literature. We describe a unique case of agenesis of the posterior elements of C2 with craniovertebral junction anomalies consisting of osseous, vascular, and soft tissue anomalies. A 26-year-old man presented with symptoms of upper cervical myelopathy of 12 months' duration. A computed tomography scan of the cervical spine, including the craniovertebral junction, revealed spondylolisthesis of C2 over C3, atlantoaxial dislocation, occipitalization of the atlas, hypoplasia of the odontoid, and a cleft posterior C1 arch. Additionally, the axis vertebra was found to be devoid of its posterior elements except for bilaterally rudimentary pedicles. Magnetic resonance imaging revealed tonsillar herniation, suggesting an associated Chiari type I malformation. A CT angiogram of the vertebral arteries displayed persistent bilateral first intersegmental arteries crossing the posterior aspect of the C1/2 facet joint. This patient underwent foramen magnum decompression and C3 laminectomy with occipito-C3/C4 posterior screw-and-rod fusion to maintain cervical alignment and stability. We report this rare constellation of congenital craniovertebral junction anomalies and review the relevant literature. Copyright © 2016 Elsevier Inc. All rights reserved.
Multiphysics Nuclear Thermal Rocket Thrust Chamber Analysis
NASA Technical Reports Server (NTRS)
Wang, Ten-See
2005-01-01
The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for hypothetical thrust chamber design and analysis. The current task scope is to perform multidimensional, multiphysics analysis of thrust performance and heat transfer for a hypothetical solid-core nuclear thermal engine, including the thrust chamber and nozzle. The multiphysics aspects of the model include real fluid dynamics, chemical reactivity, turbulent flow, and conjugate heat transfer. The model will be designed to identify thermal, fluid, and hydrogen environments in all flow paths and materials. This model would then be used to perform non-nuclear reproduction of the flow element failures demonstrated in the Rover/NERVA testing, investigate performance of specific configurations, and assess potential issues and enhancements. A two-pronged approach will be employed in this effort: detailed analysis of a multi-channel flow element, and global modeling of the entire thrust chamber assembly with a porosity modeling technique. It is expected that the detailed analysis of a single flow element would provide detailed fluid, thermal, and hydrogen environments for stress analysis, while the global thrust chamber assembly analysis would promote understanding of the effects of hydrogen dissociation and heat transfer on thrust performance. These modeling activities will be validated as much as possible by testing performed by other related efforts.
Zhan, Yijian; Meschke, Günther
2017-07-08
The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense.
Finite element dynamic analysis on CDC STAR-100 computer
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lambiotte, J. J., Jr.
1978-01-01
Computational algorithms are presented for the finite element dynamic analysis of structures on the CDC STAR-100 computer. The spatial behavior is described using higher-order finite elements. The temporal behavior is approximated by using either the central difference explicit scheme or Newmark's implicit scheme. In each case the analysis is broken up into a number of basic macro-operations. Discussion is focused on the organization of the computation and the mode of storage of different arrays to take advantage of the STAR pipeline capability. The potential of the proposed algorithms is discussed and CPU times are given for performing the different macro-operations for a shell modeled by higher order composite shallow shell elements having 80 degrees of freedom.
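To make the implicit time-integration option concrete, here is a minimal sketch of Newmark's average-acceleration scheme on a toy two-degree-of-freedom system; the matrices and load are illustrative stand-ins, not data from the paper:

```python
import numpy as np

# Toy 2-DOF undamped system: M u'' + K u = f(t)
M = np.diag([1.0, 1.0])
K = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
f = lambda t: np.array([0.0, np.sin(5.0 * t)])

dt, nsteps = 0.01, 1000
beta, gamma = 0.25, 0.5                  # Newmark average-acceleration (implicit)
u, v = np.zeros(2), np.zeros(2)
a = np.linalg.solve(M, f(0.0) - K @ u)   # consistent initial acceleration
Keff = M / (beta * dt**2) + K            # effective stiffness (factorize once)

for n in range(nsteps):
    t = (n + 1) * dt
    rhs = f(t) + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
    u_new = np.linalg.solve(Keff, rhs)
    a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
    v += dt * ((1.0 - gamma) * a + gamma * a_new)
    u, a = u_new, a_new

print("displacement at t = 10:", u)
```

The central difference alternative drops the effective-stiffness solve in favor of an explicit update, which is why a lumped (diagonal) mass matrix makes it so attractive for crashworthiness codes.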
View of MISSE-8 taken during a session of EVA
2011-07-12
ISS028-E-016111 (12 July 2011) --- This close-up image, recorded during a July 12 spacewalk, shows the Materials on International Space Station Experiment - 8 (MISSE-8). The experiment package is a test bed for materials and computing elements attached to the outside of the orbiting complex. These materials and computing elements are being evaluated for the effects of atomic oxygen, ultraviolet, direct sunlight, radiation, and extremes of heat and cold. This experiment allows the development and testing of new materials and computing elements that can better withstand the rigors of space environments. Results will provide a better understanding of the durability of various materials and computing elements when they are exposed to the space environment, with applications in the design of future spacecraft.
An emulator for minimizing computer resources for finite element analysis
NASA Technical Reports Server (NTRS)
Melosh, R.; Utku, S.; Islam, M.; Salama, M.
1984-01-01
A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
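The calibration idea behind such an emulator can be illustrated with a small least-squares fit; the assumed cost form and all numbers below are hypothetical stand-ins, not SCOPE's actual model or calibration data:

```python
import numpy as np

# Hypothetical calibration runs: (elements, degrees of freedom, measured CPU seconds)
runs = np.array([[ 100,   600,  2.1],
                 [ 400,  2400,  9.3],
                 [ 900,  5400, 23.8],
                 [1600,  9600, 44.0]])
n_el, n_dof, cpu = runs.T

# Assumed cost form: CPU ~ c0 + c1*elements + c2*dofs^1.5 (a plausible solve term)
X = np.column_stack([np.ones_like(n_el), n_el, n_dof**1.5])
c, *_ = np.linalg.lstsq(X, cpu, rcond=None)

def predict_cpu(elements, dofs):
    """Predict CPU seconds for a new problem from the fitted coefficients."""
    return c[0] + c[1] * elements + c[2] * dofs**1.5

print(round(predict_cpu(1200, 7200), 1), "s (predicted)")
```

Once such coefficients are calibrated for a given machine and analysis code, predicting the resources for a new problem costs almost nothing, which matches the roughly 3 percent overhead reported for SCOPE.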
Membrane triangles with corner drilling freedoms. III - Implementation and performance evaluation
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.; Alexander, Scott
1992-01-01
This paper completes a three-part series on the formulation of 3-node, 9-dof membrane triangles with corner drilling freedoms based on parametrized variational principles. The first four sections cover element implementation details, including determination of optimal parameters and treatment of distributed loads. Then three elements of this type, labeled ALL, FF and EFF-ANDES, are tested on standard plane stress problems. ALL represents numerically integrated versions of Allman's 1988 triangle; FF is based on the free formulation triangle presented by Bergan and Felippa in 1985; and EFF-ANDES represents two different formulations of the optimal triangle derived in Parts I and II. The numerical studies indicate that the ALL, FF and EFF-ANDES elements are comparable in accuracy for elements of unit aspect ratio. The ALL elements are found to stiffen rapidly in inplane bending for high aspect ratios, whereas the FF and EFF elements maintain accuracy. The EFF and ANDES implementations have a moderate edge in formation speed over the FF.
Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Conyers, Howard J.; Mavris, Dimitri N.
2014-01-01
This paper introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. A doublet lattice approach is taken to compute generalized forces, and a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer, although all parameters can be easily modified if desired. The focus of this paper is on tool presentation, verification, and validation. This process is carried out in stages throughout the paper. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool. Therefore the flutter speed and frequency for a clamped plate are computed using V-g and V-f analysis. The computational results are compared to a previously published computational analysis and wind tunnel results for the same structure. Finally, a case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to V-g and V-f analysis. This also includes the analysis of the model in response to a 1-cos gust.
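The rational function approximation step can be sketched as a linear least-squares fit over sampled reduced frequencies; the scalar data, basis, and lag roots below are illustrative assumptions, not the tool's actual implementation:

```python
import numpy as np

# Synthetic scalar generalized force sampled at reduced frequencies k
ks = np.linspace(0.05, 1.5, 30)
s = 1j * ks
Q = 1.0 + 0.8 * s - 0.3 * s**2 + s / (s + 0.2)   # stand-in data

lags = [0.2, 0.6]                                # prescribed lag roots b_j (assumed)
# Roger-type basis: A0 + A1*s + A2*s^2 + sum_j A_{2+j} * s/(s + b_j)
B = np.column_stack([np.ones_like(s), s, s**2] + [s / (s + b) for b in lags])
# Stack real and imaginary parts to solve for real coefficients
coef, *_ = np.linalg.lstsq(np.vstack([B.real, B.imag]),
                           np.concatenate([Q.real, Q.imag]), rcond=None)
print("fitted RFA coefficients:", np.round(coef, 3))
```

In practice every entry of the generalized force matrix is fitted this way, and the polynomial and lag coefficients map directly onto the stiffness-, damping-, and lag-state blocks of the state space model.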
Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Conyers, Howard J.; Mavris, Dimitri N.
2015-01-01
This paper introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method. Using Roger's approximation, a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer. All parameters can be easily modified if desired. The focus of this paper is on tool presentation, verification, and validation. These processes are carried out in stages throughout the paper. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool, therefore, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model in response to a 1-cos gust.
Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Conyers, Howard Jason; Mavris, Dimitri N.
2015-01-01
This report introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method. Using Roger's approximation, a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer. All parameters can be easily modified if desired. The focus of this report is on tool presentation, verification, and validation. These processes are carried out in stages throughout the report. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool, therefore, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model in response to a 1-cos gust.
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed-mode approach resulted in very poor approximations of the stress sensitivities: almost all of the original modes were required for an accurate sensitivity, and for small numbers of modes the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
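The difference between the two technique types can be shown on a static analogue (the study itself treats the transient case); the parameterized stiffness below is a toy assumption:

```python
import numpy as np

def stiffness(p):
    # Toy 2-DOF stiffness depending on a design parameter p (assumed form)
    return np.array([[p + 1.0, -p],
                     [-p,      p + 2.0]])

f = np.array([1.0, 0.0])
p, dp = 3.0, 1e-6

u = np.linalg.solve(stiffness(p), f)

# Semi-analytical: finite-difference the matrix, differentiate the equations exactly
dK = (stiffness(p + dp) - stiffness(p - dp)) / (2 * dp)
du_semi = np.linalg.solve(stiffness(p), -dK @ u)       # from K du/dp = -(dK/dp) u

# Overall finite difference: repeat the full analysis for perturbed designs
du_fd = (np.linalg.solve(stiffness(p + dp), f)
         - np.linalg.solve(stiffness(p - dp), f)) / (2 * dp)

print(du_semi, du_fd)   # the two estimates agree to truncation/round-off error
```

In the reduced-basis transient setting the same contrast appears, except that the overall finite difference approach must also decide whether to reuse the original modes, which is precisely where the fixed-mode inaccuracy described above arises.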
Optically intraconnected computer employing dynamically reconfigurable holographic optical element
NASA Technical Reports Server (NTRS)
Bergman, Larry A. (Inventor)
1992-01-01
An optically intraconnected computer and a reconfigurable holographic optical element employed therein. The basic computer comprises a memory for holding a sequence of instructions to be executed; logic for accessing the instructions in sequence; logic for determining, for each instruction, the function to be performed and its effective address; a plurality of individual elements on a common support substrate, each optimized to perform certain logical sequences employed in executing the instructions; and element selection logic, connected to the function-determining logic, for determining the class of each function and for causing each instruction to be executed by those elements which perform the associated logical sequences in an optimum manner. In the optically intraconnected version, the element selection logic is adapted for transmitting and switching signals to the elements optically.
Rectenna session: Micro aspects
NASA Technical Reports Server (NTRS)
Gutmann, R. J.
1980-01-01
Two micro aspects of rectenna design are discussed: evaluation of the degradation in net rectenna RF to DC conversion efficiency due to power density variations across the rectenna (power combining analysis) and design of Yagi-Uda receiving elements to reduce rectenna cost by decreasing the number of conversion circuits (directional receiving elements). The first of these involves resolving a fundamental question of efficiency potential with a rectenna, while the second involves a design modification with a large potential cost saving.
Business aspects of cardiovascular computed tomography: tackling the challenges.
Bateman, Timothy M
2008-01-01
The purpose of this article is to provide a comprehensive understanding of the business issues surrounding provision of dedicated cardiovascular computed tomographic imaging. Some of the challenges include high up-front costs, current low utilization relative to scanner capability, and inadequate payments. Cardiovascular computed tomographic imaging is a valuable clinical modality that should be offered by cardiovascular centers-of-excellence. With careful consideration of the business aspects, moderate-to-large size cardiology programs should be able to implement an economically viable cardiovascular computed tomographic service.
Finite element simulation of articular contact mechanics with quadratic tetrahedral elements.
Maas, Steve A; Ellis, Benjamin J; Rawlins, David S; Weiss, Jeffrey A
2016-03-21
Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedral elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior, and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
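For orientation, the TET10 interpolation is quadratic in the barycentric (volume) coordinates; the sketch below evaluates the ten shape functions at an interior point and checks the partition of unity. The node ordering is an assumption for illustration, and FEBio's actual ordering may differ:

```python
import numpy as np

def tet10_shape(l1, l2, l3):
    """TET10 shape functions at barycentric point (l1, l2, l3, l4)."""
    l4 = 1.0 - l1 - l2 - l3
    L = np.array([l1, l2, l3, l4])
    corners = L * (2.0 * L - 1.0)                       # 4 vertex functions
    edges = 4.0 * np.array([L[0]*L[1], L[1]*L[2], L[0]*L[2],
                            L[0]*L[3], L[1]*L[3], L[2]*L[3]])  # 6 mid-edge functions
    return np.concatenate([corners, edges])

N = tet10_shape(0.1, 0.2, 0.3)
print(N.sum())   # partition of unity: sums to 1.0 at any point in the element
```

The TET15 element augments this basis with face and interior bubble functions, which is part of why it tolerates the contact surface integration so well.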
Metal ions in neurology and psychiatry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gabay, S.; Harris, J.; Ho, B.T.
1985-01-01
This book consists of five sections, each containing several papers. The section titles are: CNS Development and Aging, Clinically Related Aspects of Trace Elements, Neurochemical Aspects, Neurotoxicity and Neuropathology, and Methodology and Application.
Oasis: A high-level/high-performance open source Navier-Stokes solver
NASA Astrophysics Data System (ADS)
Mortensen, Mikael; Valen-Sendstad, Kristian
2015-03-01
Oasis is a high-level/high-performance finite element Navier-Stokes solver written from scratch in Python using building blocks from the FEniCS project (fenicsproject.org). The solver is unstructured and targets large-scale applications in complex geometries on massively parallel clusters. Oasis utilizes MPI and interfaces, through FEniCS, to the linear algebra backend PETSc. Oasis advocates a high-level, programmable user interface through the creation of highly flexible Python modules for new problems. Through the high-level Python interface the user is placed in complete control of every aspect of the solver. A version of the solver, that is using piecewise linear elements for both velocity and pressure, is shown to reproduce very well the classical, spectral, turbulent channel simulations of Moser et al. (1999). The computational speed is strongly dominated by the iterative solvers provided by the linear algebra backend, which is arguably the best performance any similar implicit solver using PETSc may hope for. Higher order accuracy is also demonstrated and new solvers may be easily added within the same framework.
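To illustrate the kind of high-level FEniCS interface that Oasis builds on, here is a compact Chorin-style projection sketch in legacy DOLFIN for a lid-driven cavity. This is a generic textbook scheme with arbitrary mesh, spaces, and parameters, not Oasis's actual fractional-step solver, and it assumes legacy FEniCS (dolfin) is installed:

```python
from dolfin import *  # legacy FEniCS/DOLFIN

mesh = UnitSquareMesh(32, 32)
V = VectorFunctionSpace(mesh, "P", 2)
Q = FunctionSpace(mesh, "P", 1)
u, v = TrialFunction(V), TestFunction(V)
p, q = TrialFunction(Q), TestFunction(Q)

k, nu = Constant(0.01), Constant(0.01)
u0, u1, p1 = Function(V), Function(V), Function(Q)

bcu = [DirichletBC(V, Constant((0, 0)), "on_boundary && x[1] < 1 - DOLFIN_EPS"),
       DirichletBC(V, Constant((1, 0)), "x[1] > 1 - DOLFIN_EPS")]   # moving lid
bcp = [DirichletBC(Q, 0, "x[0] < DOLFIN_EPS && x[1] < DOLFIN_EPS", "pointwise")]

# Step 1: tentative velocity (convection treated explicitly)
F1 = (1 / k) * inner(u - u0, v) * dx + inner(grad(u0) * u0, v) * dx \
     + nu * inner(grad(u), grad(v)) * dx
# Step 2: pressure Poisson equation
a2, L2 = inner(grad(p), grad(q)) * dx, -(1 / k) * div(u1) * q * dx
# Step 3: velocity correction
a3, L3 = inner(u, v) * dx, inner(u1, v) * dx - k * inner(grad(p1), v) * dx

for n in range(50):
    solve(lhs(F1) == rhs(F1), u1, bcu)
    solve(a2 == L2, p1, bcp)
    solve(a3 == L3, u1, bcu)
    u0.assign(u1)
```

The entire solver is expressed in a few dozen lines of weak forms, which is the programmability Oasis exposes while delegating the heavy linear algebra to PETSc.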
A grid generation and flow solution method for the Euler equations on unstructured grids
NASA Astrophysics Data System (ADS)
Anderson, W. Kyle
1994-01-01
A grid generation and flow solution algorithm for the Euler equations on unstructured grids is presented. The grid generation scheme utilizes Delaunay triangulation and self-generates the field points for the mesh based on cell aspect ratios, allowing for clustering near solid surfaces. The flow solution method is an implicit algorithm in which the linear system of equations arising at each time step is solved using a Gauss-Seidel procedure which is completely vectorizable. In addition, a study is conducted to examine the number of subiterations required for good convergence of the overall algorithm. Grid generation results are shown in two dimensions for a National Advisory Committee for Aeronautics (NACA) 0012 airfoil as well as a two-element configuration. Flow solution results are shown for two-dimensional flow over the NACA 0012 airfoil and for a two-element configuration in which the solution has been obtained through an adaptation procedure and compared to an exact solution. Preliminary three-dimensional results are also shown in which subsonic flow over a business jet is computed.
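A minimal sketch of Gauss-Seidel relaxation for the linear system arising at each time step; the matrix is a toy diagonally dominant example, and the paper's vectorizable variant organizes the sweeps so that data dependencies do not block vector hardware:

```python
import numpy as np

def gauss_seidel(A, b, x0, sweeps):
    """Forward Gauss-Seidel sweeps on A x = b, starting from x0."""
    x = x0.copy()
    for _ in range(sweeps):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b, np.zeros(3), sweeps=20))  # converges: A is diagonally dominant
```

Because each implicit step only needs an approximate solution, the number of such subiterations per time step is the tuning knob studied in the paper.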
Segregating the core computational faculty of human language from working memory.
Makuuchi, Michiru; Bahlmann, Jörg; Anwander, Alfred; Friederici, Angela D
2009-05-19
In contrast to simple structures in animal vocal behavior, hierarchical structures such as center-embedded sentences manifest the core computational faculty of human language. Previous artificial grammar learning studies found that the left pars opercularis (LPO) subserves the processing of hierarchical structures. However, it is not clear whether this area is activated by the structural complexity per se or by the increased memory load entailed in processing hierarchical structures. To dissociate the effect of structural complexity from the effect of memory cost, we conducted a functional magnetic resonance imaging study of German sentence processing with a 2-way factorial design tapping structural complexity (with/without hierarchical structure, i.e., center-embedding of clauses) and working memory load (long/short distance between syntactically dependent elements; i.e., subject nouns and their respective verbs). Functional imaging data revealed that the processes for structure and memory operate separately but co-operatively in the left inferior frontal gyrus; activities in the LPO increased as a function of structural complexity, whereas activities in the left inferior frontal sulcus (LIFS) were modulated by the distance over which the syntactic information had to be transferred. Diffusion tensor imaging showed that these 2 regions were interconnected through white matter fibers. Moreover, functional coupling between the 2 regions was found to increase during the processing of complex, hierarchically structured sentences. These results suggest a neuroanatomical segregation of syntax-related aspects represented in the LPO from memory-related aspects reflected in the LIFS, which are, however, highly interconnected functionally and anatomically.
Efficient Computation Of Behavior Of Aircraft Tires
NASA Technical Reports Server (NTRS)
Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.
1989-01-01
NASA technical paper discusses a challenging application of computational structural mechanics: numerical simulation of the responses of aircraft tires during taxiing, takeoff, and landing. Presents details of three main elements of the computational strategy: use of special three-field, mixed-finite-element models; use of operator splitting; and application of a technique that substantially reduces the number of degrees of freedom. The proposed computational strategy is applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements, and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry, and their combinations, exhibited by the response of the tire are identified.
Standardized Computer-based Organized Reporting of EEG: SCORE
Beniczky, Sándor; Aurlien, Harald; Brøgger, Jan C; Fuglsang-Frederiksen, Anders; Martins-da-Silva, António; Trinka, Eugen; Visser, Gerhard; Rubboli, Guido; Hjalgrim, Helle; Stefan, Hermann; Rosén, Ingmar; Zarubova, Jana; Dobesberger, Judith; Alving, Jørgen; Andersen, Kjeld V; Fabricius, Martin; Atkins, Mary D; Neufeld, Miri; Plouin, Perrine; Marusic, Petr; Pressler, Ronit; Mameniskiene, Ruta; Hopfengärtner, Rüdiger; Emde Boas, Walter; Wolf, Peter
2013-01-01
The electroencephalography (EEG) signal has a high complexity, and the process of extracting clinically relevant features is achieved by visual analysis of the recordings. The interobserver agreement in EEG interpretation is only moderate. This is partly due to the method of reporting the findings in free-text format. The purpose of our endeavor was to create a computer-based system for EEG assessment and reporting, where the physicians would construct the reports by choosing from predefined elements for each relevant EEG feature, as well as the clinical phenomena (for video-EEG recordings). A working group of EEG experts took part in consensus workshops in Dianalund, Denmark, in 2010 and 2011. The faculty was approved by the Commission on European Affairs of the International League Against Epilepsy (ILAE). The working group produced a consensus proposal that went through a pan-European review process, organized by the European Chapter of the International Federation of Clinical Neurophysiology. The Standardised Computer-based Organised Reporting of EEG (SCORE) software was constructed based on the terms and features of the consensus statement and it was tested in the clinical practice. The main elements of SCORE are the following: personal data of the patient, referral data, recording conditions, modulators, background activity, drowsiness and sleep, interictal findings, “episodes” (clinical or subclinical events), physiologic patterns, patterns of uncertain significance, artifacts, polygraphic channels, and diagnostic significance. The following specific aspects of the neonatal EEGs are scored: alertness, temporal organization, and spatial organization. For each EEG finding, relevant features are scored using predefined terms. Definitions are provided for all EEG terms and features. SCORE can potentially improve the quality of EEG assessment and reporting; it will help incorporate the results of computer-assisted analysis into the report, it will make possible the build-up of a multinational database, and it will help in training young neurophysiologists. PMID:23506075
On finite element methods for the Helmholtz equation
NASA Technical Reports Server (NTRS)
Aziz, A. K.; Werschulz, A. G.
1979-01-01
The numerical solution of the Helmholtz equation is considered via finite element methods. A two-stage method which gives the same accuracy in the computed gradient as in the computed solution is discussed. Error estimates for the method using a newly developed proof are given, and the computational considerations which show this method to be computationally superior to previous methods are presented.
On numerically accurate finite element solutions in the fully plastic range
NASA Technical Reports Server (NTRS)
Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.
1974-01-01
A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This principle allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double-edge-cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are also discussed.
Some system considerations in configuring a digital flight control - navigation system
NASA Technical Reports Server (NTRS)
Boone, J. H.; Flynn, G. R.
1976-01-01
A trade study was conducted with the objective of providing a technical guideline for selection of the most appropriate computer technology for the automatic flight control system of a civil subsonic jet transport. The trade study considers aspects of using either an analog, incremental-type special purpose computer or a general purpose computer to perform critical autopilot computation functions. It also considers aspects of integrating noncritical autopilot and autothrottle modes into the computer performing the critical autoland functions, as compared to federating the noncritical modes into either a separate computer or an R-Nav computer. The study is accomplished by establishing the relative advantages and/or risks associated with each of the computer configurations.
View of MISSE-8 taken during a session of EVA
2011-07-12
ISS028-E-016107 (12 July 2011) --- This medium close-up image, recorded during a July 12 spacewalk, shows the Materials on International Space Station Experiment - 8 (MISSE-8). The experiment package is a test bed for materials and computing elements attached to the outside of the orbiting complex. These materials and computing elements are being evaluated for the effects of atomic oxygen, ultraviolet, direct sunlight, radiation, and extremes of heat and cold. This experiment allows the development and testing of new materials and computing elements that can better withstand the rigors of space environments. Results will provide a better understanding of the durability of various materials and computing elements when they are exposed to the space environment, with applications in the design of future spacecraft.
Asteroid orbital inversion using uniform phase-space sampling
NASA Astrophysics Data System (ADS)
Muinonen, K.; Pentikäinen, H.; Granvik, M.; Oszkiewicz, D.; Virtanen, J.
2014-07-01
We review statistical inverse methods for asteroid orbit computation from a small number of astrometric observations and short observational time intervals. With the help of Markov-chain Monte Carlo (MCMC) methods, we present a novel inverse method that utilizes uniform sampling of the phase space for the orbital elements. The statistical orbital ranging method (Virtanen et al. 2001, Muinonen et al. 2001) was developed to resolve the long-standing challenges in the initial computation of orbits for asteroids. The ranging method starts from the selection of a pair of astrometric observations. Thereafter, the topocentric ranges and angular deviations in R.A. and Decl. are randomly sampled. The two Cartesian positions allow for the computation of orbital elements and, subsequently, the computation of ephemerides for the observation dates. Candidate orbital elements are included in the sample of accepted elements if the χ^2-value between the observed and computed observations is within a pre-defined threshold. The sample orbital elements obtain weights based on a certain debiasing procedure. When the weights are available, the full sample of orbital elements allows probabilistic assessments for, e.g., object classification and ephemeris computation, as well as the computation of collision probabilities. The MCMC ranging method (Oszkiewicz et al. 2009; see also Granvik et al. 2009) replaces the original sampling algorithm described above with a proposal probability density function (p.d.f.), and a chain of sample orbital elements results in the phase space. MCMC ranging is based on a bivariate Gaussian p.d.f. for the topocentric ranges, and allows the sampling to focus on the phase-space domain containing most of the probability mass. In the virtual-observation MCMC method (Muinonen et al. 2012), the proposal p.d.f. for the orbital elements is chosen to mimic the a posteriori p.d.f. for the elements: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, corresponding virtual least-squares orbital elements are derived using the Nelder-Mead downhill simplex method; third, repeating the procedure two times allows for the computation of a difference between two sets of virtual orbital elements; and, fourth, this orbital-element difference constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal p.d.f. In a discrete approximation, the allowed proposals coincide with the differences that are based on a large number of pre-computed sets of virtual least-squares orbital elements. The virtual-observation MCMC method is thus based on the characterization of the relevant volume in the orbital-element phase space. Here we utilize MCMC to map the phase-space domain of acceptable solutions. We can make use of the proposal p.d.f.s from the MCMC ranging and virtual-observation methods. The present phase-space mapping produces, upon convergence, a uniform sampling of the solution space within a pre-defined χ^2-value. The weights of the sampled orbital elements are then computed on the basis of the corresponding χ^2-values. The present method resembles the original ranging method. On one hand, MCMC mapping is insensitive to local extrema in the phase space and efficiently maps the solution space, somewhat contrary to the MCMC methods described above. On the other hand, MCMC mapping can suffer from producing a small number of sample elements with small χ^2-values, in resemblance to the original ranging method. We apply the methods to example near-Earth, main-belt, and transneptunian objects, and highlight the utilization of the methods in the data processing and analysis pipeline of the ESA Gaia space mission.
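The uniform phase-space mapping can be sketched as a random-walk Metropolis chain whose target is the indicator function of the acceptable χ^2 region, so accepted samples are uniform over that region; the two-parameter objective below is a toy stand-in for the observed-minus-computed astrometric misfit over the orbital elements:

```python
import numpy as np

rng = np.random.default_rng(1)

def chi2(theta):
    # Toy misfit; in orbit inversion this is the O-C residual chi^2 of the elements
    return np.sum((theta - np.array([1.0, -0.5]))**2) / 0.1**2

threshold = 11.8                 # e.g. a 3-sigma chi^2 level for two parameters
theta = np.array([1.0, -0.5])    # start inside the acceptable region
samples = []
for _ in range(20000):
    prop = theta + 0.05 * rng.standard_normal(2)   # symmetric random-walk proposal
    if chi2(prop) <= threshold:  # Metropolis accept for an indicator target density
        theta = prop
    samples.append(theta.copy())
samples = np.asarray(samples)
print("spread of mapped region:", samples.std(axis=0))
```

Weighting each accepted element set afterwards by its χ^2-value, as described above, then turns the uniform map into the desired probabilistic assessment.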
ERIC Educational Resources Information Center
Krzywkowski, Valerie I., Ed.
The 15 papers in this collection discuss various aspects of computer use in libraries and several other aspects of library service not directly related to computers. Following an introduction and a list of officers, the papers are: (1) "Criminal Justice and Related Databases" (Kate E. Adams); (2) "Software and Hard Thought:…
Controlling the stoichiometry and doping of semiconductor materials
Albin, David; Burst, James; Metzger, Wyatt; Duenow, Joel; Farrell, Stuart; Colegrove, Eric
2016-08-16
Methods for treating a semiconductor material are provided. According to an aspect of the invention, the method includes annealing the semiconductor material in the presence of a compound that includes a first element and a second element. The first element provides an overpressure to achieve a desired stoichiometry of the semiconductor material, and the second element provides a dopant to the semiconductor material.
A finite element head and neck model as a supportive tool for deformable image registration.
Kim, Jihun; Saitou, Kazuhiro; Matuszak, Martha M; Balter, James M
2016-07-01
A finite element (FE) head and neck model was developed as a tool to aid investigations and development of deformable image registration and patient modeling in radiation oncology. Useful aspects of an FE model for these purposes include the ability to produce realistic deformations (similar to those seen in patients over the course of treatment) and a rational means of generating new configurations, e.g., via the application of force and/or displacement boundary conditions. The model was constructed based on a cone-beam computed tomography image of a head and neck cancer patient. The three-node triangular surface meshes created for the bony elements (skull, mandible, and cervical spine) and joint elements were integrated into a skeletal system and combined with the exterior surface. Nodes were additionally created inside the surface structures, so that four-node tetrahedral FE elements were created over the whole region of the model. The bony elements were modeled as a homogeneous linear elastic material connected by intervertebral disks. The surrounding tissues were modeled as a homogeneous linear elastic material. Under force or displacement boundary conditions, FE analysis on the model calculates approximate solutions of the displacement vector field. An FE head and neck model was thus constructed in which the skull, mandible, and cervical vertebrae are mechanically connected by disks. The developed FE model is capable of generating realistic deformations that are strain-free for the bony elements and of creating new configurations of the skeletal system with the surrounding tissues reasonably deformed. In addition, the model provides a way of evaluating the accuracy of image alignment methods by producing a ground-truth deformation and correspondingly simulated images. The ability to combine force and displacement conditions provides flexibility for simulating realistic anatomic configurations.
Into the Deep: A Writer's Look at Creativity.
ERIC Educational Resources Information Center
Els, Susan McBride
This book records the author's creative processes as she wrote the book, presenting a personal journey where all artists will recognize aspects of themselves. The first (and most extensive) part of the book (entitled "Elements of Creativity") reveals universal elements of creativity that mirror the ancient elements in nature:…
Element fracture technique for hypervelocity impact simulation
NASA Astrophysics Data System (ADS)
Zhang, Xiao-tian; Li, Xiao-gang; Liu, Tao; Jia, Guang-hui
2015-05-01
Hypervelocity impact dynamics is the theoretical support for spacecraft shielding against space debris. Numerical simulation has become an important approach for obtaining the ballistic limits of spacecraft shields. Currently, the most widely used algorithm for hypervelocity impact is smoothed particle hydrodynamics (SPH). Although the finite element method (FEM) is widely used in fracture mechanics and low-velocity impacts, the standard FEM can hardly simulate the debris cloud generated by hypervelocity impact. This paper presents a successful application of the node-separation technique for hypervelocity impact debris cloud simulation. The node-separation technique assigns individual/coincident nodes to adjacent elements and applies constraints to the coincident node sets in the modeling step. In the explicit iteration, cracks are generated by releasing the constrained node sets that meet the fracture criterion. Additionally, distorted elements are identified from two aspects - self-piercing and phase change - and are deleted so that the constitutive computation can continue. FEM with the node-separation technique is used for thin-wall hypervelocity impact simulations. The internal structures of the debris cloud in the simulation output are compared with those in the test X-ray images under different material fracture criteria. The comparison shows that the pressure criterion is more appropriate for hypervelocity impact. The internal structures of the debris cloud are also simulated and compared under different thickness-to-diameter ratios (t/D). The simulation outputs show the same spall pattern as the tests. Finally, the triple-plate impact case is simulated with node-separation FEM.
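A data-structure sketch of the node-separation idea on a one-dimensional chain of two-node elements: each element owns private node copies, coincident copies are tied by constraints, and a tie is released when the fracture criterion is met. The stress values and scalar criterion are placeholders, not the paper's pressure criterion:

```python
import numpy as np

n_elem = 4
conn = [(2 * e, 2 * e + 1) for e in range(n_elem)]        # element -> private nodes
ties = {2 * e + 1: 2 * e + 2 for e in range(n_elem - 1)}  # coincident node pairs

stresses = np.array([0.5, 2.3, 0.8, 1.9])  # hypothetical interface stress measures
sigma_crit = 2.0                           # placeholder fracture criterion

cracks = []
for e in range(n_elem - 1):
    if stresses[e] > sigma_crit:           # criterion met at interface e
        ties.pop(2 * e + 1)                # release constraint: a crack opens
        cracks.append(e)

print("remaining ties:", ties)
print("cracks opened at interfaces:", cracks)
```

In the actual explicit code the released nodes simply stop sharing equations of motion, so fragments fly apart and form the debris cloud without any remeshing.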
PDF modeling of turbulent flows on unstructured grids
NASA Astrophysics Data System (ADS)
Bakosi, Jozsef
In probability density function (PDF) methods of turbulent flows, the joint PDF of several flow variables is computed by numerically integrating a system of stochastic differential equations for Lagrangian particles. Because the technique solves a transport equation for the PDF of the velocity and scalars, a mathematically exact treatment of advection, viscous effects, and arbitrarily complex chemical reactions is possible; these processes are treated without closure assumptions. A set of algorithms is proposed to provide an efficient solution of the PDF transport equation modeling the joint PDF of turbulent velocity, frequency, and concentration of a passive scalar in geometrically complex configurations. An unstructured Eulerian grid is employed to extract Eulerian statistics, to solve for quantities represented at fixed locations of the domain, and to track particles. All three aspects regarding the grid make use of the finite element method. Compared to hybrid methods, the current methodology is stand-alone; therefore it is consistent both numerically and at the level of turbulence closure without the use of consistency conditions. Since both the turbulent velocity and scalar concentration fields are represented in a stochastic way, the method allows for a direct and close interaction between these fields, which is beneficial in computing accurate scalar statistics. Boundary conditions implemented along solid bodies are of the free-slip and no-slip type without the need for ghost elements. Boundary layers at no-slip boundaries are either fully resolved down to the viscous sublayer, explicitly modeling the high anisotropy and inhomogeneity of the low-Reynolds-number wall region without damping or wall-functions, or specified via logarithmic wall-functions. As in moment closures and large eddy simulation, these wall treatments provide the usual trade-off between resolution and computational cost as required by the given application. Particular attention is focused on modeling the dispersion of passive scalars in inhomogeneous turbulent flows. Two different micromixing models that incorporate the effect of small-scale mixing on the transported scalar are investigated: the widely used interaction by exchange with the mean, and interaction by exchange with the conditional mean. An adaptive algorithm to compute the velocity-conditioned scalar mean is proposed that homogenizes the statistical error over the sample space with no assumption on the shape of the underlying velocity PDF. The development also concentrates on a generally applicable micromixing timescale for complex flow domains. Several newly developed algorithms are described in detail that facilitate a stable numerical solution in arbitrarily complex flow geometries, including a stabilized mean-pressure projection scheme, the estimation of conditional and unconditional Eulerian statistics and their derivatives from stochastic particle fields employing finite element shape functions, particle tracking through unstructured grids, an efficient particle redistribution procedure, and techniques related to efficient random number generation. The algorithm is validated and tested by computing three different turbulent flows: fully developed turbulent channel flow, a street canyon (or cavity) flow, and the turbulent wake behind a circular cylinder at a sub-critical Reynolds number. The solver has been parallelized and optimized for shared memory and multi-core architectures using the OpenMP standard. Relevant aspects of performance and parallelism on cache-based shared memory machines are discussed and presented in detail. The methodology shows great promise in the simulation of high-Reynolds-number incompressible inert or reactive turbulent flows in realistic configurations.
NASA Technical Reports Server (NTRS)
Ko, William L.; Olona, Timothy; Muramoto, Kyle M.
1990-01-01
Different finite element models previously set up for thermal analysis of the space shuttle orbiter structure are discussed and their shortcomings identified. Element density criteria are established for finite element thermal modeling of space shuttle orbiter-type large, hypersonic aircraft structures. These criteria are based on rigorous studies of solution accuracy using different finite element models, having different element densities, set up for one cell of the orbiter wing. Also, a method for optimization of the transient thermal analysis computer central processing unit (CPU) time is discussed. Based on the newly established element density criteria, the orbiter wing midspan segment was modeled for the examination of thermal analysis solution accuracy and the extent of computation CPU time requirements. The results showed that the distributions of the structural temperatures and thermal stresses obtained from this wing segment model were satisfactory and the computation CPU time was at an acceptable level. The studies offered the hope that modeling large, hypersonic aircraft structures using high-density elements for transient thermal analysis is feasible if a CPU optimization technique is used.
Computer aided stress analysis of long bones utilizing computer tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marom, S.A.
1986-01-01
A computer aided analysis method utilizing computed tomography (CT) has been developed which, together with a finite element program, determines the stress-displacement pattern in a long bone section. The CT data file provides the geometry, the density, and the material properties for the generated finite element model. A three-dimensional finite element model of a tibial shaft is automatically generated from the CT file by a pre-processing procedure for a finite element program. The developed pre-processor includes an edge detection algorithm which determines the boundaries of the reconstructed cross-sectional images of the scanned bone. A mesh generation procedure then automatically generates a three-dimensional mesh of a user-selected refinement. The elastic properties needed for the stress analysis are individually determined for each model element using the radiographic density (CT number) of each pixel within the element borders. The elastic modulus is determined from the CT radiographic density by using an empirical relationship from the literature. The generated finite element model, together with applied loads determined from existing gait analysis and initial displacements, comprises a formatted input for the SAP IV finite element program. The output of this program, stresses and displacements at the model elements and nodes, is sorted and displayed by a developed post-processor to provide maximum and minimum values at selected locations in the model.
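The density-to-modulus step can be illustrated as follows; both the CT-number-to-density map and the Carter-Hayes-type power law are assumptions for illustration, since the specific empirical relationship used is not quoted in the abstract:

```python
import numpy as np

hu = np.array([300.0, 800.0, 1400.0])   # CT numbers averaged within element borders
rho = 1.0 + hu / 1000.0                 # assumed linear CT-number-to-density map (g/cm^3)
E = 3790.0 * rho**3                     # Carter-Hayes-type density-modulus law (MPa),
                                        # one common literature choice, used here
                                        # only as a placeholder
print("element moduli (MPa):", np.round(E, 1))
```

Each element of the generated mesh then carries its own modulus, so the stiffness distribution of the bone follows the scanned density field directly.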
ERIC Educational Resources Information Center
Zaidel, Mark; Luo, XiaoHui
2010-01-01
This study investigates the efficiency of multimedia instruction at the college level by comparing the effectiveness of multimedia elements used in the computer supported learning with the cost of their preparation. Among the various technologies that advance learning, instructors and students generally identify interactive multimedia elements as…
Adaptive finite element methods for two-dimensional problems in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1994-01-01
Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.
Computer program calculates gamma ray source strengths of materials exposed to neutron fluxes
NASA Technical Reports Server (NTRS)
Heiser, P. C.; Ricks, L. O.
1968-01-01
Computer program contains an input library of nuclear data for 44 elements and their isotopes to determine the induced radioactivity for gamma emitters. Minimum input requires the irradiation history of the element, a four-energy-group neutron flux, specification of an alloy composition by elements, and selection of the output.
Prediction of overall and blade-element performance for axial-flow pump configurations
NASA Technical Reports Server (NTRS)
Serovy, G. K.; Kavanagh, P.; Okiishi, T. H.; Miller, M. J.
1973-01-01
A method and a digital computer program for prediction of the distributions of fluid velocity and properties in axial-flow pump configurations are described and evaluated. The method uses the blade-element flow model and an iterative numerical solution of the radial equilibrium and continuity conditions. Correlated experimental results are used to generate alternative methods for estimating blade-element turning and loss characteristics. Detailed descriptions of the computer program are included, with example input and typical computed results.
NASA Astrophysics Data System (ADS)
Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried
2017-02-01
We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures, as well as of the particle sticking probability, on the neutral particle flux.
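A toy version of a one-dimensional radiosity balance conveys the idea: the flux arriving at each wall segment is the direct (line-of-sight) flux plus flux re-emitted, with probability 1 − s, from the segments it sees. The view-factor matrix and direct-flux profile below are illustrative stand-ins, not the paper's model.

```python
import numpy as np

n = 50                               # wall segments, trench mouth to bottom
s = 0.1                              # sticking probability
depth = np.linspace(0.0, 1.0, n)
gamma_direct = np.exp(-3.0 * depth)  # toy line-of-sight flux, decaying with depth

# Toy view-factor matrix: nearby segments see each other more strongly.
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-np.abs(i - j) / 5.0)
np.fill_diagonal(F, 0.0)
F /= F.sum(axis=1, keepdims=True) * 1.05   # rows sum < 1 (some flux escapes)

# Balance: gamma = gamma_direct + (1 - s) * F @ gamma, solved as a linear system.
gamma = np.linalg.solve(np.eye(n) - (1.0 - s) * F, gamma_direct)
print("flux at bottom / flux at mouth:", gamma[-1] / gamma[0])
```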
NASA Astrophysics Data System (ADS)
Yasutomi, Ayumu
2003-09-01
Previously, I studied [Physica D 82, 180-194 (1995)] the emergence and collapse of money in a computer simulation model. In this paper I revisit the same topic, building a model along the same lines. I discuss this problem from the viewpoint of chaotic itinerancy. Money is the most popular system for evading the difficulty of exchange under division of labor. It emerges autonomously from exchanges among selfish agents which behave as automata, and such emergent money collapses autonomously. I describe money as a structure in economic space, explaining its autonomous emergence and collapse as two phases of the same phenomenon. The key element in this phenomenon is the switch in the meaning of strategies, triggered by the drastic change of environment that follows the emergence of a structure. This dynamics shares some aspects with chaotic itinerancy.
Automatic contact in DYNA3D for vehicle crashworthiness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whirley, R.G.; Engelmann, B.E.
1993-07-15
This paper presents a new formulation for the automatic definition and treatment of mechanical contact in explicit nonlinear finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational cost. This paper discusses in detail a new four-step automatic contact algorithm. Key aspects of the proposed method include automatic identification of adjacent and opposite surfaces in the global search phase, and the use of a smoothly varying surface normal which allows a consistent treatment of shell intersection and corner contact conditions without ad hoc rules. The paper concludes with three examples which illustrate the performance of the newly proposed algorithm in the public DYNA3D code.
Linguistic analysis of project ownership for undergraduate research experiences.
Hanauer, D I; Frederick, J; Fotinakes, B; Strobel, S A
2012-01-01
We used computational linguistic and content analyses to explore the concept of project ownership for undergraduate research. We used linguistic analysis of student interview data to develop a quantitative methodology for assessing project ownership and applied this method to measure degrees of project ownership expressed by students in relation to different types of educational research experiences. The results of the study suggest that the design of a research experience significantly influences the degree of project ownership expressed by students when they describe those experiences. The analysis identified both positive and negative aspects of project ownership and provided a working definition for how a student experiences his or her research opportunity. These elements suggest several features that could be incorporated into an undergraduate research experience to foster a student's sense of project ownership.
Modeling the role of parallel processing in visual search.
Cave, K R; Wolfe, J M
1990-04-01
Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and helps to answer these questions. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.
Kotter, Dale K [Shelley, ID; Rohrbaugh, David T [Idaho Falls, ID
2010-09-07
A frequency selective surface (FSS) and associated methods for modeling, analyzing and designing the FSS are disclosed. The FSS includes a pattern of conductive material formed on a substrate to form an array of resonance elements. At least one aspect of the frequency selective surface is determined by defining a frequency range including multiple frequency values, determining a frequency dependent permittivity across the frequency range for the substrate, determining a frequency dependent conductivity across the frequency range for the conductive material, and analyzing the frequency selective surface using a method of moments analysis at each of the multiple frequency values for an incident electromagnetic energy impinging on the frequency selective surface. The frequency dependent permittivity and the frequency dependent conductivity are included in the method of moments analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esfahani, M. Nasr; Yilmaz, M.; Sonne, M. R.
The trend towards nanomechanical resonator sensors with increasing sensitivity raises the need to address challenges encountered in the modeling of their mechanical behavior. Selecting the best approach to mechanical response modeling amongst the various potential computational solid mechanics methods is subject to controversy. A guideline for the selection of the appropriate approach for a specific set of geometry and mechanical properties is needed. In this study, geometrical limitations in frequency response modeling of flexural nanomechanical resonators are investigated. The deviation of Euler and Timoshenko beam theories from numerical techniques, including finite element modeling and the Surface Cauchy-Born technique, is studied. The results provide a limit beyond which the surface energy contribution dominates the mechanical behavior. Using the Surface Cauchy-Born technique as the reference, a maximum error on the order of 50% is reported for high-aspect ratio resonators.
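As a reference point for the beam-theory side of that comparison, here is a minimal Euler-Bernoulli estimate of the first flexural frequency of a doubly clamped beam; the dimensions and material constants are hypothetical, and the surface-energy effects the paper highlights are deliberately absent.

```python
import numpy as np

def euler_bernoulli_f1(E, rho, L, w, t, beta_l=4.730041):
    """First flexural frequency of a doubly clamped rectangular beam,
    f1 = (beta_l^2 / (2*pi*L^2)) * sqrt(E*I/(rho*A)); surface effects,
    which dominate at small scales per the paper, are neglected."""
    I = w * t**3 / 12.0          # second moment of area about the bending axis
    A = w * t                    # cross-sectional area
    return beta_l**2 / (2.0 * np.pi * L**2) * np.sqrt(E * I / (rho * A))

# Hypothetical silicon nanobeam: 10 um long, 200 nm wide, 100 nm thick.
print(f"f1 ~ {euler_bernoulli_f1(169e9, 2330.0, 10e-6, 200e-9, 100e-9)/1e6:.1f} MHz")
```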
Computing Mass Properties From AutoCAD
NASA Technical Reports Server (NTRS)
Jones, A.
1990-01-01
Mass properties of structures computed from data in drawings. AutoCAD to Mass Properties (ACTOMP) computer program developed to facilitate quick calculations of mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Mathematically modeled in AutoCAD or compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft Quick-Basic (Version 2.0).
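A minimal sketch of the kind of bookkeeping such a program automates, reduced to total mass and center of gravity for a handful of elements; the element data are hypothetical, and the real ACTOMP also handles inertias and reads geometry from AutoCAD entities.

```python
import numpy as np

# Combine the mass and centroid of many simple elements into the
# structure's total mass and center of gravity.
elements = [  # (mass [kg], centroid [m]) -- hypothetical truss members
    (1.2, np.array([0.0, 0.0, 0.5])),
    (0.8, np.array([1.0, 0.0, 0.5])),
    (2.0, np.array([0.5, 0.5, 0.0])),
]
m_tot = sum(m for m, _ in elements)
cg = sum(m * c for m, c in elements) / m_tot
print(f"total mass = {m_tot:.2f} kg, CG = {cg}")
```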
Chen, Xiaodong; Sadineni, Vikram; Maity, Mita; Quan, Yong; Enterline, Matthew; Mantri, Rao V
2015-12-01
Lyophilization is an approach commonly undertaken to formulate drugs that are too unstable to be commercialized as ready-to-use (RTU) solutions. One of the important aspects of commercializing a lyophilized product is to transfer the process parameters that are developed in a lab-scale lyophilizer to commercial scale without a loss in product quality. This is often accomplished by costly engineering runs or through an iterative process at the commercial scale. Here, we highlight a combination of computational and experimental approaches to predict commercial process parameters for the primary drying phase of lyophilization. Heat and mass transfer coefficients are determined experimentally, either by manometric temperature measurement (MTM) or by sublimation tests, and used as inputs for the finite element model (FEM)-based software called PASSAGE, which computes various primary drying parameters such as primary drying time and product temperature. The heat and mass transfer coefficients will vary at different lyophilization scales; hence, we present an approach to using appropriate factors while scaling up from lab scale to commercial scale. As a result, one can predict commercial-scale primary drying time based on these parameters. Additionally, the model-based approach presented in this study provides a process to monitor pharmaceutical product robustness and accidental process deviations during lyophilization to support commercial supply chain continuity. The approach presented here provides a robust lyophilization scale-up strategy; because it is simple and minimalistic, it is also a less capital-intensive path with minimal use of expensive drug substance/active material.
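For orientation, here is a minimal quasi-steady primary-drying rate estimate of the kind such models build on, dm/dt = A_p(P_ice(T) − P_chamber)/R_p; the vapor-pressure correlation and all parameter values are approximate, hypothetical stand-ins, not the paper's PASSAGE inputs.

```python
import numpy as np

def primary_drying_rate(T_product_K, P_chamber_Pa, A_vial_m2, Rp):
    """Quasi-steady sublimation rate dm/dt = A*(P_ice(T) - P_ch)/Rp [kg/s],
    a common 1-D primary-drying relation; Rp is the product resistance
    in Pa*s*m^2/kg and the ice vapor-pressure fit is approximate."""
    P_ice = np.exp(28.9 - 6150.0 / T_product_K)   # approx. vapor pressure of ice [Pa]
    return A_vial_m2 * (P_ice - P_chamber_Pa) / Rp

# Hypothetical numbers: product at -20 C, 10 Pa chamber, 3.5 cm^2 vial area.
rate = primary_drying_rate(253.15, 10.0, 3.5e-4, 1.0e5)
print(f"dm/dt ~ {rate * 3600 * 1000:.2f} g/h")
```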
Specifying and Refining a Measurement Model for a Computer-Based Interactive Assessment
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
2004-01-01
The challenges of modeling students' performance in computer-based interactive assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance. This article describes a Bayesian approach to modeling and estimating cognitive models…
An Undergraduate Electrical Engineering Course on Computer Organization.
ERIC Educational Resources Information Center
Commission on Engineering Education, Washington, DC.
Outlined is an undergraduate electrical engineering course on computer organization designed to meet the need for electrical engineers familiar with digital system design. The program includes both hardware and software aspects of digital systems essential to design function and correlates design and organizational aspects of the subject. The…
Effecting a broadcast with an allreduce operation on a parallel computer
Almasi, Gheorghe; Archer, Charles J.; Ratterman, Joseph D.; Smith, Brian E.
2010-11-02
A parallel computer comprises a plurality of compute nodes organized into at least one operational group for collective parallel operations. Each compute node is assigned a unique rank and is coupled for data communications through a global combining network. One compute node is assigned to be a logical root. A send buffer and a receive buffer are configured. Each element of the logical root's contribution is placed in its send buffer, while one or more zeros corresponding to the size of the element are injected on the other nodes. An allreduce operation with a bitwise OR using the element and the injected zeros is performed, and the result of the allreduce operation is determined and stored in each receive buffer.
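The idea translates directly into an MPI sketch: the root contributes its data, every other rank contributes zeros, and a bitwise-OR allreduce leaves the root's data in every receive buffer. A minimal mpi4py rendering (not the patented implementation) follows; run it under, e.g., `mpirun -n 4 python bcast_allreduce.py`.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
root = 0

send = np.zeros(4, dtype=np.uint32)      # non-root ranks inject zeros
if comm.rank == root:
    send[:] = [0xDEADBEEF, 1, 2, 3]      # the logical root's contribution
recv = np.empty_like(send)

comm.Allreduce(send, recv, op=MPI.BOR)   # bitwise-OR combine over all ranks
print(comm.rank, recv)                   # every rank now holds the root's data
```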
The Impact of Time Delay on the Content of Discussions at a Computer-Mediated Conference
NASA Astrophysics Data System (ADS)
Huntley, Byron C.; Thatcher, Andrew
2008-11-01
This study investigates the relationship between the content of computer-mediated discussions and the time delay between online postings. The study aims to broaden understanding of the dynamics of computer-mediated discussion regarding the time delay and the actual content of computer-mediated discussions (knowledge construction, social aspects, amount of words and number of postings), an area that has barely been researched. The computer-mediated discussions of the CybErg 2005 virtual conference served as the sample for this study. The Interaction Analysis Model [1] was utilized to analyze the level of knowledge construction in the content of the computer-mediated discussions. Correlations were computed for all combinations of the variables. The results demonstrate that knowledge construction, social aspects and the amount of words generated within postings were independent of, and not affected by, the time delay between the postings and the posting from which the reply was formulated. When greater numbers of words were utilized within postings, this was typically associated with a greater level of knowledge construction. Social aspects in the discussion were found to neither advantage nor disadvantage the overall effectiveness of the computer-mediated discussion.
Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheid, Robert E.
1989-01-01
The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.
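The preconditioner described above is the classic Jacobi choice. A minimal NumPy sketch of a conjugate gradient solve for M(q)·qdd = b preconditioned with diag(M) follows; the 6-DOF inertia matrix is a random symmetric positive definite stand-in, not a manipulator model.

```python
import numpy as np

def pcg_forward_dynamics(M, b, tol=1e-10, max_iter=100):
    """Solve M qdd = b for joint accelerations with conjugate gradients,
    preconditioned by the diagonal of the inertia matrix (Jacobi)."""
    d_inv = 1.0 / np.diag(M)            # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - M @ x
    z = d_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Mp = M @ p
        alpha = rz / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        if np.linalg.norm(r) < tol:
            break
        z = d_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Hypothetical 6-DOF SPD "inertia matrix" and torque-derived right-hand side.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = A @ A.T + 6 * np.eye(6)
b = rng.standard_normal(6)
print(np.allclose(M @ pcg_forward_dynamics(M, b), b))
```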
Computed tomography-based finite element analysis to assess fracture risk and osteoporosis treatment
Imai, Kazuhiro
2015-01-01
Finite element analysis (FEA) is a computer technique for structural stress analysis developed in engineering mechanics. Over the past 40 years, FEA has been applied to investigate the structural behavior of human bones. As faster computers became available, improved FEA using 3-dimensional computed tomography (CT) was developed. This CT-based finite element analysis (CT/FEA) has provided clinicians with useful data. In this review, the mechanism of CT/FEA, validation studies of CT/FEA to evaluate its accuracy and reliability in human bones, and clinical application studies to assess fracture risk and the effects of osteoporosis medication are overviewed. PMID:26309819
NASA Technical Reports Server (NTRS)
Ko, William L.; Olona, Timothy
1987-01-01
The effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of space shuttle orbiter was investigated. Several structural performance and resizing (SPAR) thermal models and NASA structural analysis (NASTRAN) structural models were set up for the orbiter wing midspan bay 3. The thermal model was found to be the one that determines the limit of finite-element fineness because of the limitation of computational core space required for the radiation view factor calculations. The thermal stresses were found to be extremely sensitive to a slight variation of structural temperature distributions. The minimum degree of element fineness required for the thermal model to yield reasonably accurate solutions was established. The radiation view factor computation time was found to be insignificant compared with the total computer time required for the SPAR transient heat transfer analysis.
A computer program for anisotropic shallow-shell finite elements using symbolic integration
NASA Technical Reports Server (NTRS)
Andersen, C. M.; Bowen, J. T.
1976-01-01
A FORTRAN computer program for anisotropic shallow-shell finite elements with variable curvature is described. A listing of the program is presented together with printed output for a sample case. Computation times and central memory requirements are given for several different elements. The program is based on a stiffness (displacement) finite-element model in which the fundamental unknowns consist of both the displacement and the rotation components of the reference surface of the shell. Two triangular and four quadrilateral elements are implemented in the program. The triangular elements have 6 or 10 nodes, and the quadrilateral elements have 4 or 8 nodes. Two of the quadrilateral elements have internal degrees of freedom associated with displacement modes which vanish along the edges of the elements (bubble modes). The triangular elements and the remaining two quadrilateral elements do not have bubble modes. The output from the program consists of arrays corresponding to the stiffness, the geometric stiffness, the consistent mass, and the consistent load matrices for individual elements. The integrals required for the generation of these arrays are evaluated by using symbolic (or analytic) integration in conjunction with certain group-theoretic techniques. The analytic expressions for the integrals are exact and were developed using the symbolic and algebraic manipulation language.
Development of an hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1993-01-01
The purpose of this research effort was to begin the study of the application of hp-version finite elements to the numerical solution of optimal control problems. Under NAG-939, the hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element. One possible drawback is the increased computational effort within each element required in implementing hp-version finite elements. We are trying to determine whether this computational effort is sufficiently offset by the reduction in the number of time elements used and improved Newton-Raphson convergence so as to be useful in solving optimal control problems in real time. Because certain of the element interior unknowns can be eliminated at the element level by solving a small set of nonlinear algebraic equations in which the nodal values are taken as given, the scheme may turn out to be especially powerful in a parallel computing environment. A different processor could be assigned to each element. The number of processors, strictly speaking, is not required to be any larger than the number of sub-regions which are free of discontinuities of any kind.
Pike, William A; Riensche, Roderick M; Best, Daniel M; Roberts, Ian E; Whyatt, Marie V; Hart, Michelle L; Carr, Norman J; Thomas, James J
2012-09-18
Systems and computer-implemented processes for storage and management of information artifacts collected by information analysts using a computing device. The processes and systems can capture a sequence of interactive operation elements that are performed by the information analyst, who is collecting an information artifact from at least one of the plurality of software applications. The information artifact can then be stored together with the interactive operation elements as a snippet on a memory device, which is operably connected to the processor. The snippet comprises a view from an analysis application, data contained in the view, and the sequence of interactive operation elements stored as a provenance representation comprising operation element class, timestamp, and data object attributes for each interactive operation element in the sequence.
GAP Noise Computation By The CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Chang, Sin-Chung; Wang, Xiao Y.; Jorgenson, Philip C. E.
2001-01-01
A typical gap noise problem is considered in this paper using the new space-time conservation element and solution element (CE/SE) method. Implementation of the computation is straightforward. No turbulence model, LES (large eddy simulation) or a preset boundary layer profile is used, yet the computed frequency agrees well with the experimental one.
Analysis of rocket engine injection combustion processes
NASA Technical Reports Server (NTRS)
Salmon, J. W.; Saltzman, D. H.
1977-01-01
Mixing methodology improvements for the JANNAF DER and CICM injection/combustion analysis computer programs were accomplished. The ZOM plane prediction model was improved for installation into the new standardized DER computer program. An approach for developing an intra-element mixing model was recommended for gas/liquid coaxial injection elements, for possible future incorporation into the CICM computer program.
NASA Astrophysics Data System (ADS)
Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia
2013-10-01
Digital processing of two-dimensional cone beam computed tomography slices starts with identification of the contours of the elements within them. This paper deals with the collective work of specialists in medicine and in applied mathematics and computer science on the elaboration and implementation of algorithms for dental 2D imagery.
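As a generic illustration of the contour-identification step (the abstract does not specify the authors' algorithms), here is a Sobel gradient-magnitude edge detector applied to a synthetic slice:

```python
import numpy as np
from scipy import ndimage

# Synthetic 128x128 "slice": a bright disc standing in for a tooth, plus noise.
yy, xx = np.mgrid[0:128, 0:128]
slice_ = ((xx - 64)**2 + (yy - 64)**2 < 30**2).astype(float)
slice_ += 0.05 * np.random.default_rng(1).standard_normal(slice_.shape)

gx = ndimage.sobel(slice_, axis=1)   # horizontal intensity gradient
gy = ndimage.sobel(slice_, axis=0)   # vertical intensity gradient
edges = np.hypot(gx, gy) > 1.0       # threshold tuned for the toy image
print("edge pixels found:", int(edges.sum()))
```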
An Implicit Upwind Algorithm for Computing Turbulent Flows on Unstructured Grids
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle; Bonhaus, Daryl L.
1994-01-01
An implicit, Navier-Stokes solution algorithm is presented for the computation of turbulent flow on unstructured grids. The inviscid fluxes are computed using an upwind algorithm and the solution is advanced in time using a backward-Euler time-stepping scheme. At each time step, the linear system of equations is approximately solved with a point-implicit relaxation scheme. This methodology provides a viable and robust algorithm for computing turbulent flows on unstructured meshes. Results are shown for subsonic flow over a NACA 0012 airfoil and for transonic flow over an RAE 2822 airfoil exhibiting a strong upper-surface shock. In addition, results are shown for 3-element and 4-element airfoil configurations. For the calculations, two one-equation turbulence models are utilized. For the NACA 0012 airfoil, a pressure distribution and force data are compared with other computational results as well as with experiment. Comparisons of computed pressure distributions and velocity profiles with experimental data are shown for the RAE airfoil and for the 3-element configuration. For the 4-element case, comparisons of surface pressure distributions with experiment are made. In general, the agreement between the computations and the experiment is good.
NASA Astrophysics Data System (ADS)
Gerngross, M.-D.; Carstensen, J.; Föll, H.; Adelung, R.
2016-01-01
This paper reports on the characterization of the electrochemical growth process of magnetic nanowires in ultra-high-aspect ratio InP membranes via in situ fast Fourier transform impedance spectroscopy in a typical frequency range from 75 Hz to 18.5 kHz. The measured impedance data for Ni, Co, and FeCo can be fitted very well using the same electric equivalent circuit, consisting of a series resistance in serial connection with an RC-element and a Maxwell element. The impedance data clearly indicate the similarities in the growth behavior of Ni, Co and FeCo nanowires in ultra-high aspect ratio InP membranes (the beneficial impact of boric acid on metal deposition in ultra-high aspect ratio membranes and the diffusion limitation of boric acid), as well as differences such as passivation or side reactions.
A computer graphics program for general finite element analyses
NASA Technical Reports Server (NTRS)
Thornton, E. A.; Sawyer, L. M.
1978-01-01
Documentation for a computer graphics program for displays from general finite element analyses is presented. A general description of display options and detailed user instructions are given. Several plots made in structural, thermal and fluid finite element analyses are included to illustrate program options. Sample data files are given to illustrate use of the program.
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using a solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issues of the adaptive finite element method; the application of the new methodology to fracture mechanics problems is validated by computing demonstration problems and comparing the computed stress intensity factors with analytical results.
Plane stress analysis of wood members using isoparametric finite elements, a computer program
Gary D. Gerhardt
1983-01-01
A finite element program is presented which computes displacements, strains, and stresses in wood members of arbitrary shape which are subjected to plane strain/stress loading conditions. This report extends a program developed by R. L. Taylor in 1977 by adding both the cubic isoparametric finite element and the capability to analyze nonisotropic materials. The...
Automatic finite element generators
NASA Technical Reports Server (NTRS)
Wang, P. S.
1984-01-01
The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.
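A tiny analogue of what such generators do, using SymPy: derive the stiffness matrix of a two-node linear bar element by exact symbolic integration and emit one entry as FORTRAN. This is a generic illustration, not the system described in the paper.

```python
import sympy as sp

# Symbolic derivation of the 1-D bar element stiffness: K = int(B^T E A B).
x, L, E, A = sp.symbols("x L E A", positive=True)
N = sp.Matrix([[1 - x / L, x / L]])           # linear shape functions
B = N.diff(x)                                 # strain-displacement row vector
K = sp.integrate(E * A * B.T * B, (x, 0, L))  # exact symbolic integration
sp.pprint(sp.simplify(K))                     # -> (E*A/L) * [[1, -1], [-1, 1]]
print(sp.fcode(sp.simplify(K[0, 0])))         # FORTRAN code for one entry
```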
Influence of altitude and aspect on daily variations in factors of forest fire danger
G. Lloyd. Hayes
1941-01-01
Altitude, in broad subdivisions, exerts recognized and well-understood effects on climate. Aspect further modifies the altitudinal influence. Many publications have dealt with the interrelations of these geographic factors with climate and life zones or have discussed variations of individual weather elements as influenced by local altitude and aspect differences and...
Simulation of Physical Experiments in Immersive Virtual Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Wasfy, Tamer M.
2001-01-01
An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.
Zhou, Xiangmin; Zhang, Nan; Sha, Desong; Shen, Yunhe; Tamma, Kumar K; Sweet, Robert
2009-01-01
The inability to render realistic soft-tissue behavior in real time has remained a barrier to the face and content aspects of validity for many virtual reality surgical training systems. Biophysically based models are suitable not only for training purposes but also for patient-specific clinical applications, physiological modeling and surgical planning. Among the existing approaches for modeling soft tissue for virtual reality surgical simulation, the computer graphics-based approach lacks predictive capability; the mass-spring model (MSM) based approach lacks biophysically realistic soft-tissue dynamic behavior; and the finite element method (FEM) approaches fail to meet the real-time requirement. The present development stems from the first law of thermodynamics: for a space-discrete dynamic system it directly formulates the space-discrete but time-continuous governing equation with an embedded material constitutive relation, resulting in a discrete mechanics framework that possesses a unique balance between computational effort and physically realistic soft-tissue dynamic behavior. We describe the development of the discrete mechanics framework with focused attention towards a virtual laparoscopic nephrectomy application.
Conceptualizing, Designing, and Investigating Locative Media Use in Urban Space
NASA Astrophysics Data System (ADS)
Diamantaki, Katerina; Rizopoulos, Charalampos; Charitos, Dimitris; Kaimakamis, Nikos
This chapter investigates the social implications of locative media (LM) use and attempts to outline a theoretical framework that may support the design and implementation of location-based applications. Furthermore, it stresses the significance of physical space and location awareness as important factors that influence both human-computer interaction and computer-mediated communication. The chapter documents part of the theoretical aspect of the research undertaken as part of LOcation-based Communication Urban NETwork (LOCUNET), a project that aims to investigate the way users interact with one another (human-computer-human interaction aspect) and with the location-based system itself (human-computer interaction aspect). A number of relevant theoretical approaches are discussed in an attempt to provide a holistic theoretical background for LM use. Additionally, the actual implementation of the LOCUNET system is described and some of the findings are discussed.
Producing a College Video: The Sweat (and Success) Is in the Details.
ERIC Educational Resources Information Center
Hays, Tim
1994-01-01
Introduces specifics related to production elements and message elements of college videos. Outlines aspects of lighting, audio, narration, backing music, and performance music. Discusses elements of pace, physical plant, people, and programs with regard to marketing. Suggests the goal is to create a unified vision to attract the target audience.…
De Boer, Jan L M; Ritsema, Rob; Piso, Sjoerd; Van Staden, Hans; Van Den Beld, Wilbert
2004-07-01
Two screening methods were developed for rapid analysis of a great number of urine and blood samples within the framework of an exposure check of the population after a firework explosion. A total of 56 elements was measured, including major elements. Sample preparation consisted of simple dilution. Extensive quality controls were applied, including element addition and the use of certified reference materials. Relevant results at levels similar to those found in the literature were obtained for Co, Ni, Cu, Zn, Sr, Cd, Sn, Sb, Ba, Tl, and Pb in urine and for the same elements except Ni, Sn, Sb, and Ba in blood. However, quadrupole ICP-MS has limitations, mainly related to spectral interferences, for the analysis of urine and blood, and these cause higher detection limits. The general aspects discussed in the paper give it wider applicability than just the analysis of blood and urine; it can, for example, be used in environmental analysis.
Aspect-Oriented Subprogram Synthesizes UML Sequence Diagrams
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2006-01-01
The Rational Sequence computer program described elsewhere includes a subprogram that utilizes the capability for aspect-oriented programming when that capability is present. This subprogram is denoted the Rational Sequence (AspectJ) component because it uses AspectJ, which is an extension of the Java programming language that introduces aspect-oriented programming techniques into the language.
Program structure-based blocking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.
2017-09-26
Embodiments relate to program structure-based blocking. An aspect includes receiving source code corresponding to a computer program by a compiler of a computer system. Another aspect includes determining a prefetching section in the source code by a marking module of the compiler. Yet another aspect includes performing, by a blocking module of the compiler, blocking of instructions located in the prefetching section into instruction blocks, such that the instruction blocks of the prefetching section only contain instructions that are located in the prefetching section.
ICAN/PART: Particulate composite analyzer, user's manual and verification studies
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Murthy, Pappu L. N.; Mital, Subodh K.
1996-01-01
A methodology for predicting the equivalent properties and constituent microstresses for particulate matrix composites, based on the micromechanics approach, is developed. These equations are integrated into a computer code developed to predict the equivalent properties and microstresses of fiber reinforced polymer matrix composites to form a new computer code, ICAN/PART. Details of the flowchart, input and output for ICAN/PART are described, along with examples of the input and output. Only the differences between ICAN/PART and the original ICAN code are described in detail, and the user is assumed to be familiar with the structure and usage of the original ICAN code. Detailed verification studies, utilizing finite element and boundary element analyses, are conducted in order to verify that the micromechanics methodology accurately models the mechanics of particulate matrix composites. The equivalent properties computed by ICAN/PART fall within bounds established by the finite element and boundary element results. Furthermore, constituent microstresses computed by ICAN/PART agree in an average sense with results computed using the finite element method. The verification studies indicate that the micromechanics programmed into ICAN/PART do indeed accurately model the mechanics of particulate matrix composites.
A comparison between families obtained from different proper elements
NASA Technical Reports Server (NTRS)
Zappala, Vincenzo; Cellino, Alberto; Farinella, Paolo
1992-01-01
Using the hierarchical method of family identification developed by Zappala et al., the results coming from the data set of proper elements computed by Williams (about 2100 numbered + about 1200 PLS 2 asteroids) and by Milani and Knezevic (version 5.7, about 4200 asteroids) are compared. Apart from some expected discrepancies due to the different data sets and/or the low accuracy of proper elements computed in peculiar dynamical zones, good agreement was found in several cases. It follows that these high-reliability families represent a sample which can be considered independent of the methods used to compute their proper elements. Therefore, they should be considered the best candidates for detailed physical studies.
A Computational Approach for Automated Posturing of a Human Finite Element Model
2016-07-01
McKee, Justin; Sokolow, Adam. Memorandum report, July 2016. Posture affects protection by influencing the path by which loading is transferred into the body, and it is a major source of variability. Keywords: posture, human body, finite element, leg, spine.
ELECTRONIC ANALOG COMPUTER FOR DETERMINING RADIOACTIVE DISINTEGRATION
Robinson, H.P.
1959-07-14
A computer is presented for determining growth and decay curves for elements in a radioactive disintegration series wherein one unstable element decays to form a second unstable element or isotope, which in turn forms a third element, etc. The growth and decay curves of radioactive elements are simulated by the charge and discharge curves of a resistance-capacitance network. Several such networks having readily adjustable values are connected in series with an amplifier between each successive pair. The time constant of each of the various networks is set proportional to the half-life of a corresponding element in the series represented and the charge and discharge curves of each of the networks simulates the element growth and decay curve.
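The analogy maps decay N' = −λN onto RC discharge V' = −V/(RC), so each stage is tuned by choosing RC = t_half/ln 2. A digital rendering of a two-stage chain, with hypothetical half-lives, shows the behavior the analog networks reproduce:

```python
import numpy as np

# Two-stage chain (parent -> daughter), explicit Euler integration.
t_half = np.array([10.0, 2.0])      # half-lives in hours (hypothetical)
lam = np.log(2.0) / t_half          # decay constants; each equals 1/(R*C)
dt, steps = 0.01, 5000              # 50 hours of simulated time
N = np.array([1.0, 0.0])            # initial parent/daughter amounts
daughter = []
for _ in range(steps):
    dN1 = -lam[0] * N[0]                      # parent decays
    dN2 = lam[0] * N[0] - lam[1] * N[1]       # daughter grows, then decays
    N = N + dt * np.array([dN1, dN2])
    daughter.append(N[1])
print("daughter peaks at t ~ %.1f h" % (dt * int(np.argmax(daughter))))
```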
In-Space Transportation for NASA's Evolvable Mars Campaign
NASA Technical Reports Server (NTRS)
Percy, Thomas K.; McGuire, Melissa; Polsgrove, Tara
2015-01-01
As the nation embarks on a new and bold journey to Mars, significant work is being done to determine what that mission and those architectural elements will look like. The Evolvable Mars Campaign (EMC) is being evaluated as a potential approach to getting humans to Mars. Built on the premise of leveraging current technology investments and maximizing element commonality to reduce cost and development schedule, the EMC transportation architecture is focused on developing the elements required to move crew and equipment to Mars as efficiently and effectively as possible, both from a performance and a programmatic standpoint. Over the last 18 months the team has been evaluating potential options for those transportation elements. One of the key aspects of the EMC is leveraging investments being made today in missions like the Asteroid Redirect Mission (ARM), using derived versions of its Solar Electric Propulsion (SEP) systems and coupling them with other chemical propulsion elements that maximize commonality across the architecture between both transportation and Mars operations elements. This paper outlines the broad trade space being evaluated, including the different technologies being assessed for transportation elements and how those elements are assembled into an architecture. Impacts to potential operational scenarios at Mars are also investigated. Trades are being made on the size and power level of the SEP vehicle for delivering cargo, as well as the size of the chemical propulsion systems and various mission aspects, including in-space assembly and sequencing. Maximizing payload delivery to Mars with the SEP vehicle will better support the operational scenarios at Mars by enabling the delivery of landers and habitation elements that are appropriately sized for the mission. The purpose of this investigation is not to find the solution but rather a suite of solutions with potential application to the challenge of sending cargo and crew to Mars. The goal is that, by building an architecture intelligently with all aspects considered, the sustainable Mars program wisely invests limited resources, enabling a long-term human Mars exploration program.
Development of non-linear finite element computer code
NASA Technical Reports Server (NTRS)
Becker, E. B.; Miller, T.
1985-01-01
Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.
SL12-GADRAS-PD2Ka Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Dean J.
2014-09-09
The GADRAS Development project comprises several elements that are all related to the Detector Response Function (DRF), which is the core of GADRAS. An ongoing activity is implementing continuous improvements in the accuracy and versatility of the DRF. The ability to perform rapid computation of the response of gamma-ray detectors for 3-D descriptions of source objects and their environments is a good example of a recent utilization of this versatility. The 3-D calculations, which execute several orders of magnitude faster than competing techniques, compute the response as an extension of the DRF, so the radiation transport problem is never solved explicitly, thus saving considerable computational time. Maintenance of the Graphic User Interface (GUI) and extension of the GUI to enable construction of the 3-D source models is included in tasking for the GADRAS Development project. Another aspect of this project is application of the isotope identification algorithms for search applications. Specifically, SNL is tasked with development of an isotope-identification based search capability for use with the RSL-developed AVID system, which supports simultaneous operation of numerous radiation search assets. A Publicly Available (PA) GADRAS-DRF application, which eliminates sensitive analysis components, will soon be available so that the DRF can be used by researchers at universities and corporations.
CFD modelling of abdominal aortic aneurysm on hemodynamic loads using a realistic geometry with CT.
Soudah, Eduardo; Ng, E Y K; Loong, T H; Bordone, Maurizio; Pua, Uei; Narayanan, Sriram
2013-01-01
The objective of this study is to find a correlation between the abdominal aortic aneurysm (AAA) geometric parameters, wall shear stress (WSS), abdominal flow patterns, intraluminal thrombus (ILT), and AAA arterial wall rupture using computational fluid dynamics (CFD). Real AAA 3D models were created by three-dimensional (3D) reconstruction of in vivo acquired computed tomography (CT) images from 5 patients. Based on the 3D AAA models, high quality volume meshes were created using an optimal tetrahedral aspect ratio for the whole domain. In order to quantify the WSS and the recirculation inside the AAA, a 3D CFD analysis using finite elements was performed. The CFD computation assumed that the arterial wall is rigid and that the blood is a homogeneous Newtonian fluid with a density of 1050 kg/m³ and a dynamic viscosity of 4 × 10⁻³ Pa·s. Parallelization procedures were used in order to increase the performance of the CFD calculations. A relation between the AAA geometric parameters (asymmetry index β, saccular index γ, deformation diameter ratio χ, and tortuosity index ε) and the hemodynamic loads was observed, and it could be used as a potential predictor of AAA arterial wall rupture and potential ILT formation.
NASA Astrophysics Data System (ADS)
Robinson, Tyler D.; Crisp, David
2018-05-01
Solar and thermal radiation are critical aspects of planetary climate, with gradients in radiative energy fluxes driving heating and cooling. Climate models require that radiative transfer tools be versatile, computationally efficient, and accurate. Here, we describe a technique that uses an accurate full-physics radiative transfer model to generate a set of atmospheric radiative quantities which can be used to linearly adapt radiative flux profiles to changes in the atmospheric and surface state: the Linearized Flux Evolution (LiFE) approach. These radiative quantities describe how each model layer in a plane-parallel atmosphere reflects and transmits light, as well as how the layer generates diffuse radiation by thermal emission and by scattering light from the direct solar beam. By computing derivatives of these layer radiative properties with respect to dynamic elements of the atmospheric state, we can then efficiently adapt the flux profiles computed by the full-physics model to new atmospheric states. We validate the LiFE approach, and then apply it to Mars, Earth, and Venus, demonstrating the information contained in the layer radiative properties and their derivatives, as well as how the LiFE approach can be used to determine the thermal structure of radiative and radiative-convective equilibrium states in one-dimensional atmospheric models.
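Schematically, the linear adaptation step is a first-order Taylor update of the flux profile. In the sketch below, the reference fluxes, Jacobian, and state vector are random stand-ins for the precomputed full-physics outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_levels, n_state = 20, 5
flux_ref = rng.standard_normal(n_levels)       # reference flux profile
J = rng.standard_normal((n_levels, n_state))   # d(flux)/d(state), precomputed offline
state_ref = rng.standard_normal(n_state)

def life_flux(state_new):
    """First-order update: F(x) ~= F(x0) + J @ (x - x0), avoiding a new
    full-physics radiative transfer call for nearby states."""
    return flux_ref + J @ (state_new - state_ref)

print(life_flux(state_ref + 0.01)[:3])         # small perturbation -> small change
```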
Geology of photo linear elements, Great Divide Basin, Wyoming
NASA Technical Reports Server (NTRS)
Blackstone, D. L., Jr.
1973-01-01
The author has identified the following significant results. Ground examination of photo linear elements in the Great Divide Basin, Wyoming indicates little if any tectonic control. Aeolian aspects are more widespread and pervasive than previously considered.
Failure detection in high-performance clusters and computers using chaotic map computations
Rao, Nageswara S.
2015-09-01
A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10.sup.18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
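The detection principle is that identical, healthy hardware iterating the same chaotic map from the same seed must produce bit-identical trajectories, while any fault-induced perturbation shows up on comparison and is then amplified by the map's sensitivity. A toy rendering with the logistic map (standing in for the patent's unspecified maps):

```python
import numpy as np

def trajectory(x0, n=1000, r=3.99):
    """Iterate the logistic map x <- r*x*(1-x) and record the trajectory."""
    x = np.float64(x0)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

ref = trajectory(0.123456789)        # reference node
node = trajectory(0.123456789)       # node under test, same seed
node[700] += 1e-12                   # emulate a tiny fault-induced error
diverged = np.nonzero(ref != node)[0]
print("first divergence at iterate", diverged[0] if diverged.size else None)
```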
An emulator for minimizing finite element analysis implementation resources
NASA Technical Reports Server (NTRS)
Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.
1982-01-01
A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines the computer resources required as a function of the structural model, the structural load-deflection equation characteristics, the storage allocation plan, and the computer hardware capabilities. Thereby, it provides data for trading off analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation computer counts of each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.
Synchrotron Imaging Computations on the Grid without the Computing Element
NASA Astrophysics Data System (ADS)
Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.
2011-12-01
Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is that of the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension, we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. Instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occurred during the job submission and queuing phases. Moreover, the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.
Three-Dimensional Navier-Stokes Calculations Using the Modified Space-Time CESE Method
NASA Technical Reports Server (NTRS)
Chang, Chau-lyan
2007-01-01
The space-time conservation element solution element (CESE) method is modified to address the robustness issues of high-aspect-ratio, viscous, near-wall meshes. In this new approach, the dependent variable gradients are evaluated using element edges and the corresponding neighboring solution elements while keeping the original flux integration procedure intact. As such, the excellent flux conservation property is retained and the new edge-based gradients evaluation significantly improves the robustness for high-aspect ratio meshes frequently encountered in three-dimensional, Navier-Stokes calculations. The order of accuracy of the proposed method is demonstrated for oblique acoustic wave propagation, shock-wave interaction, and hypersonic flows over a blunt body. The confirmed second-order convergence along with the enhanced robustness in handling hypersonic blunt body flow calculations makes the proposed approach a very competitive CFD framework for 3D Navier-Stokes simulations.
NASA Astrophysics Data System (ADS)
Huismann, Immo; Stiller, Jörg; Fröhlich, Jochen
2017-10-01
The paper proposes a novel factorization technique for static condensation of a spectral-element discretization matrix that yields a linear operation count of just 13N multiplications for the residual evaluation, where N is the total number of unknowns. In comparison to previous work it saves a factor larger than 3 and outpaces unfactored variants for all polynomial degrees. Using the new technique as a building block for a preconditioned conjugate gradient method yields linear scaling of the runtime with N which is demonstrated for polynomial degrees from 2 to 32. This makes the spectral-element method cost effective even for low polynomial degrees. Moreover, the dependence of the iterative solution on the element aspect ratio is addressed, showing only a slight increase in the number of iterations for aspect ratios up to 128. Hence, the solver is very robust for practical applications.
NASA Astrophysics Data System (ADS)
Rakshit, Suman; Khare, Swanand R.; Datta, Biswa Nath
2018-07-01
One of the most important yet difficult aspects of the Finite Element Model Updating Problem is to preserve the finite element inherited structures in the updated model. Finite element matrices are in general symmetric, positive definite (or semi-definite) and banded (tridiagonal, diagonal, penta-diagonal, etc.). Though a large number of papers have been published in recent years on various aspects of solutions of this problem, papers dealing with structure preservation are almost nonexistent. A novel optimization-based approach that preserves the symmetric tridiagonal structures of the stiffness and damping matrices is proposed in this paper. An analytical expression for the global minimum solution of the associated optimization problem, along with the results of numerical experiments obtained both from the analytical expressions and from an appropriate numerical optimization algorithm, is presented. The results of the numerical experiments support the validity of the proposed method.
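The paper's constrained optimization has an analytical minimum; as the simplest possible illustration of the structure constraint itself, the nearest symmetric tridiagonal matrix in the Frobenius norm is obtained by symmetrizing and truncating to three bands. This projection is only an analogue of the constraint, not the paper's method.

```python
import numpy as np

def nearest_symmetric_tridiagonal(K):
    """Frobenius-norm projection onto symmetric tridiagonal matrices:
    symmetrize, then zero every entry outside the three central bands."""
    Ks = 0.5 * (K + K.T)
    n = K.shape[0]
    mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= 1
    return np.where(mask, Ks, 0.0)

K_updated = np.random.default_rng(2).standard_normal((5, 5))  # toy updated matrix
print(nearest_symmetric_tridiagonal(K_updated))
```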
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parzen, George
It will be shown that starting from a coordinate system where the 6 phase space coordinates are linearly coupled, one can go to a new coordinate system, where the motion is uncoupled, by means of a linear transformation. The original coupled coordinates and the new uncoupled coordinates are related by a 6 x 6 matrix, R, which will be called the decoupling matrix. It will be shown that of the 36 elements of the 6 x 6 decoupling matrix R, only 12 elements are independent. This may be contrasted with the results for motion in 4-dimensional phase space, where R has 4 independent elements. A set of equations is given from which the 12 elements of R can be computed from the one-period transfer matrix. This set of equations also allows the linear parameters β_i, α_i, i = 1, 3, for the uncoupled coordinates to be computed from the one-period transfer matrix. An alternative procedure for computing the linear parameters β_i, α_i, i = 1, 3, and the 12 independent elements of the decoupling matrix R is also given, which depends on computing the eigenvectors of the one-period transfer matrix. These results can be used in a tracking program, where the one-period transfer matrix can be computed by multiplying the transfer matrices of all the elements in a period, to compute the linear parameters α_i and β_i, i = 1, 3, and the elements of the decoupling matrix R. The procedure presented here for studying coupled motion in 6-dimensional phase space can also be applied to coupled motion in 4-dimensional phase space, where it may be a useful alternative to the procedure presented by Edwards and Teng. In particular, it gives a simpler programming procedure for computing the beta functions and the emittances for coupled motion in 4-dimensional phase space.
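A minimal numerical rendering of the eigenvector route: form the one-period transfer matrix as a product of element matrices, then take its eigen-decomposition. The 6 x 6 element matrices below are random stand-ins, not a real lattice, so the eigenvalues will not lie exactly on the unit circle as they would for a stable machine.

```python
import numpy as np

rng = np.random.default_rng(3)
# Ten toy "element" transfer matrices multiplied over one period.
elements = [np.eye(6) + 0.05 * rng.standard_normal((6, 6)) for _ in range(10)]
M_one_period = np.linalg.multi_dot(elements)

w, V = np.linalg.eig(M_one_period)   # eigenvalues and eigenvectors
# For a stable coupled lattice the eigenvalues come in complex-conjugate
# pairs exp(+/- i*mu_k); the mu_k are the three uncoupled phase advances,
# and the eigenvectors V carry the decoupling information.
print("eigenvalue phases (rad):", np.round(np.angle(w), 3))
```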
Using OSG Computing Resources with (iLC)Dirac
NASA Astrophysics Data System (ADS)
Sailer, A.; Petric, M.; CLICdp Collaboration
2017-10-01
CPU cycles for small experiments and projects can be scarce; thus making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which directly submit to the local batch system. This in turn requires additional dedicated effort for small experiments on the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC, the required wrapper classes were developed. Not only is the usage of these types of computing elements now completely transparent for all DIRAC instances, which makes DIRAC a flexible solution for OSG-based virtual organisations, but it also allows LCG grid sites to move to the HTCondor-CE software without shutting DIRAC-based VOs out of their site. In these proceedings we detail how we interfaced the DIRAC system to the HTCondor-CE and Globus computing elements, explain the encountered obstacles and the solutions developed, and describe how the linear collider community uses resources in the OSG.
Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY
2012-01-10
The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network, thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
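The transpose-based structure of this scheme can be sketched in a few lines of single-process NumPy (a stand-in for the distributed version: the local transpose below plays the role of the all-to-all re-distribution across nodes):

```python
import numpy as np

a = np.random.rand(8, 8)

step1 = np.fft.fft(a, axis=0)       # 1-D FFTs with elements "distributed" in dim 0
redist = step1.T                    # stands in for the "all-to-all" re-distribution
step2 = np.fft.fft(redist, axis=0)  # 1-D FFTs in the second dimension

# Agrees (up to the final transpose) with the library's full 2-D FFT.
assert np.allclose(step2.T, np.fft.fft2(a))
```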
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.
1989-01-01
The research work performed during the past year on finite element implementation and computational techniques pertaining to high-temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures, and expedient numerical integration techniques for constitutive rate equations. On the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking properties and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage prediction and large deformations. On the material level, various methods for integrating nonlinear constitutive rate equations for finite element implementation were studied, including explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of the various methods considered.
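To make the explicit/implicit trade-off concrete, here is a hedged toy comparison (illustrative values and a one-variable power-law creep relaxation law dσ/dt = −EAσⁿ, not the report's composite constitutive model): explicit Euler is cheap per step but only conditionally stable, while backward Euler pays for a Newton iteration per step in exchange for stability at large steps.

```python
import numpy as np

E, A, n = 100.0, 1e-4, 3.0        # illustrative material constants
dt, steps, sigma0 = 0.5, 200, 10.0

def rate(s):
    return -E * A * s**n          # stress relaxation rate

# Explicit Euler: cheap per step, conditionally stable.
s_exp = sigma0
for _ in range(steps):
    s_exp += dt * rate(s_exp)

# Backward Euler: solve s_new - s_old - dt*rate(s_new) = 0 by Newton.
s_imp = sigma0
for _ in range(steps):
    s_new = s_imp
    for _ in range(20):
        f = s_new - s_imp - dt * rate(s_new)
        df = 1.0 + dt * E * A * n * s_new**(n - 1)
        s_new -= f / df
    s_imp = s_new

print(f"explicit: {s_exp:.4f}  implicit: {s_imp:.4f}")
```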
A breakthrough for experiencing and understanding simulated physics
NASA Technical Reports Server (NTRS)
Watson, Val
1988-01-01
The use of computer simulation in physics research is discussed, focusing on improvements to graphic workstations. Simulation capabilities and applications of enhanced visualization tools are outlined. The elements of an ideal computer simulation are presented and the potential for improving various simulation elements is examined. The human-computer interface and simulation models are also considered. Recommendations are made for changes in computer simulation practices and for applications of simulation technology in education.
A class of hybrid finite element methods for electromagnetics: A review
NASA Technical Reports Server (NTRS)
Volakis, J. L.; Chatterjee, A.; Gong, J.
1993-01-01
Integral equation methods have generally been the workhorse for antenna and scattering computations. In the case of antennas, they continue to be the prominent computational approach, but for scattering applications the requirement for large-scale computations has turned researchers' attention to near neighbor methods such as the finite element method, which has low O(N) storage requirements and is readily adaptable in modeling complex geometrical features and material inhomogeneities. In this paper, we review three hybrid finite element methods for simulating composite scatterers, conformal microstrip antennas, and finite periodic arrays. Specifically, we discuss the finite element method and its application to electromagnetic problems when combined with the boundary integral, absorbing boundary conditions, and artificial absorbers for terminating the mesh. Particular attention is given to large-scale simulations, methods, and solvers for achieving low memory requirements and code performance on parallel computing architectures.
Prediction of High-Lift Flows using Turbulent Closure Models
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Gatski, Thomas B.; Ying, Susan X.; Bertelrud, Arild
1997-01-01
The flow over two different multi-element airfoil configurations is computed using linear eddy viscosity turbulence models and a nonlinear explicit algebraic stress model. A subset of recently-measured transition locations using hot film on a McDonnell Douglas configuration is presented, and the effect of transition location on the computed solutions is explored. Deficiencies in wake profile computations are found to be attributable in large part to poor boundary layer prediction on the generating element, and not necessarily inadequate turbulence modeling in the wake. Using measured transition locations for the main element improves the prediction of its boundary layer thickness, skin friction, and wake profile shape. However, using measured transition locations on the slat still yields poor slat wake predictions. The computation of the slat flow field represents a key roadblock to successful predictions of multi-element flows. In general, the nonlinear explicit algebraic stress turbulence model gives very similar results to the linear eddy viscosity models.
Zhou, Jie; Guo, Lanping; Xiao, Wenjuan; Geng, Yanling; Wang, Xiao; Shi, Xin'gang; Dan, Staerk
2012-08-01
The progress of studies on the physiological effects of rare earth elements in plants and their mechanisms of action is summarized with respect to seed germination, photosynthesis, mineral metabolism and stress resistance. The applications of rare earth elements in traditional Chinese medicine (TCM) in recent years are also reviewed, providing a reference for the further development and application of rare earth elements in TCM.
Sato, Y; Wadamoto, M; Tsuga, K; Teixeira, E R
1999-04-01
Improving the validity of finite element analysis in implant biomechanics requires element downsizing. However, excessive downsizing demands more computer memory and calculation time. To investigate the effectiveness of element downsizing in the construction of a three-dimensional finite element bone trabeculae model, models with different element sizes (600, 300, 150 and 75 μm) were constructed and the stress induced by a vertical 10 N load was analysed. The difference in von Mises stress values between the models with 600 and 300 μm element sizes was larger than that between the 300 and 150 μm models. On the other hand, no clear difference in stress values was detected among the models with 300, 150 and 75 μm element sizes. Downsizing of elements from 600 to 300 μm is therefore suggested to be effective in the construction of a three-dimensional finite element bone trabeculae model, with possible savings of computer memory and calculation time in the laboratory.
High Tech: A Place in Our Lives and in Our Schools.
ERIC Educational Resources Information Center
Roach, John V.
1986-01-01
Discusses various aspects of high technology: computers in cars, computer-assisted design and manufacturing, computers in telephones, video recorders, laser technology, home computers, job training, computer education, and the challenge to the technology teacher. (CT)
Gary H. Elsner
1979-01-01
Computers can analyze and help to plan the visual aspects of large wildland landscapes. This paper categorizes and explains current computer methods available. It also contains a futuristic dialogue between a landscape architect and a computer.
Mesh refinement in finite element analysis by minimization of the stiffness matrix trace
NASA Technical Reports Server (NTRS)
Kittur, Madan G.; Huston, Ronald L.
1989-01-01
Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the generated mesh is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value such that the error is below a predetermined tolerance. A-posteriori methods use error indicators, developed using interpolation and approximation theory, for mesh refinement. Others use criteria such as strain energy density variation and stress contours to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a-priori methods available until now use geometric parameters, for example the element aspect ratio, and are therefore not adaptive by nature. Here, an adaptive a-priori method is developed. The criterion is that minimization of the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a-posteriori methods of grid refinement, fewer refinement iterations and fewer degrees of freedom are required for convergence than when it is not used. The resulting mesh is shown to have a uniform distribution of stiffness among the nodes and elements which, as a consequence, leads to a uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.
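A one-dimensional sketch of the criterion (illustrative only, not the paper's implementation): for a 2-node bar element of length L_e, the element stiffness (EA/L_e)·[[1,−1],[−1,1]] contributes 2EA/L_e to the global trace, so minimizing the trace over the interior node positions of a uniform bar recovers the uniform mesh.

```python
import numpy as np
from scipy.optimize import minimize

EA, L, n_elems = 1.0, 1.0, 5

def stiffness_trace(interior):
    nodes = np.concatenate(([0.0], np.sort(interior), [L]))
    Le = np.diff(nodes)
    if np.any(Le <= 0):          # reject nodes pushed outside the bar
        return 1e12
    return np.sum(2.0 * EA / Le) # each bar element adds 2*EA/Le to the trace

x0 = np.sort(np.random.uniform(0.05, 0.95, n_elems - 1))  # random start
res = minimize(stiffness_trace, x0, method="Nelder-Mead")
print(np.round(np.sort(res.x), 4))  # -> ~[0.2, 0.4, 0.6, 0.8]: uniform mesh
```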
Discussion on the Technology and Method of Computer Network Security Management
NASA Astrophysics Data System (ADS)
Zhou, Jianlei
2017-09-01
With the rapid development of information technology, the application of computer network technology has penetrated all aspects of society and has, to a certain extent, changed people's way of life and work, bringing great convenience. But computer network technology is not a panacea: it can promote social development, but it can also cause damage to the community and the country. Due to the openness and ease of sharing of computer networks, among other characteristics, network security is negatively affected; loopholes in the technical aspects in particular can expose network information to damage. On this basis, this paper briefly analyses computer network security management problems and security measures.
Adiabatic Quantum Computation: Coherent Control Back Action.
Goswami, Debabrata
2006-11-22
Though attractive from a scalability standpoint, optical approaches to quantum computing are highly prone to decoherence and rapid population loss due to nonradiative processes such as vibrational redistribution. We show that such effects can be reduced by adiabatic coherent control, in which quantum interference between multiple excitation pathways is used to cancel coupling to the unwanted, nonradiative channels. We focus on experimentally demonstrated adiabatic controlled population transfer experiments wherein the details of the coherence aspects are yet to be explored theoretically but are important for quantum computation. Such quantum computing schemes also form a back-action connection to coherent control developments.
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
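In the spirit of the stiffness evaluation idea (a scalar toy, not E-STEP itself): prescribe displacement samples, evaluate the nonlinear internal force, and identify the coefficients of its polynomial formulation by least squares. The cubic force law and its coefficients below are assumptions for illustration.

```python
import numpy as np

def internal_force(u):            # stand-in "FE" nonlinear internal force
    return 4.0*u + 0.5*u**2 + 2.0*u**3

u = np.linspace(-1.0, 1.0, 21)    # prescribed displacement samples
f = internal_force(u)

basis = np.column_stack([u, u**2, u**3])       # polynomial formulation
coeffs, *_ = np.linalg.lstsq(basis, f, rcond=None)
print(np.round(coeffs, 6))        # -> [4.0, 0.5, 2.0]: identified stiffness model
```

Since each such fit uses only element-level quantities, the element-by-element evaluations are independent and can run in parallel, which is the efficiency argument made above.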
Physical aspects of computing the flow of a viscous fluid
NASA Technical Reports Server (NTRS)
Mehta, U. B.
1984-01-01
One of the main themes in fluid dynamics at present and in the future is going to be computational fluid dynamics with the primary focus on the determination of drag, flow separation, vortex flows, and unsteady flows. A computation of the flow of a viscous fluid requires an understanding and consideration of the physical aspects of the flow. This is done by identifying the flow regimes and the scales of fluid motion, and the sources of vorticity. Discussions of flow regimes deal with conditions of incompressibility, transitional and turbulent flows, Navier-Stokes and non-Navier-Stokes regimes, shock waves, and strain fields. Discussions of the scales of fluid motion consider transitional and turbulent flows, thin- and slender-shear layers, triple- and four-deck regions, viscous-inviscid interactions, shock waves, strain rates, and temporal scales. In addition, the significance and generation of vorticity are discussed. These physical aspects mainly guide computations of the flow of a viscous fluid.
NASA Technical Reports Server (NTRS)
Stonesifer, R. B.; Atluri, S. N.
1982-01-01
The development of valid creep fracture criteria is considered. Two path-independent integral parameters which show some degree of promise are the C* and (ΔT)_c integrals. The mathematical aspects of these parameters are reviewed by deriving generalized vector forms of the parameters using conservation laws which are valid for arbitrary, three-dimensional, cracked bodies with crack surface tractions (or applied displacements), body forces, inertial effects, and large deformations. Two principal conclusions are that (ΔT)_c has an energy rate interpretation whereas C* does not. The development and application of fracture criteria often involves the solution of boundary/initial value problems associated with deformation and stresses. The finite element method is used for this purpose. An efficient, small-displacement, infinitesimal-strain, displacement-based finite element model is specialized to two-dimensional plane stress and plane strain and to power-law creep constitutive relations. A mesh shifting/remeshing procedure is used for simulating crack growth. The model is implemented with the quarter-point node technique and also with specially developed, conforming, crack-tip singularity elements which provide for the r^(−n/(1+n)) strain singularity associated with the HRR crack-tip field. Comparisons are made with a variety of analytical solutions and alternate numerical solutions for a number of problems.
Simulations of DNA stretching by flow field in microchannels with complex geometry.
Huang, Chiou-De; Kang, Dun-Yen; Hsieh, Chih-Chen
2014-01-01
Recently, we have reported the experimental results of DNA stretching by flow field in three microchannels (C. H. Lee and C. C. Hsieh, Biomicrofluidics 7(1), 014109 (2013)) designed specifically for the purpose of preconditioning DNA conformation for easier stretching. The experimental results not only demonstrate the superiority of the new devices but also provide detailed observation of DNA behavior in complex flow fields that was not available before. In this study, we use the Brownian dynamics-finite element method (BD-FEM) to simulate DNA behavior in these microchannels, and compare the results against the experiments. Although the hydrodynamic interaction (HI) between DNA segments and between DNA and the device boundaries was not included in the simulations, the simulation results are in fairly good agreement with the experimental data, both for single molecule behavior and for ensemble-averaged properties. The discrepancy between the simulation and the experimental results can be explained by the neglect of the HI effect in the simulations. Considering the huge savings in computational cost from neglecting HI, we conclude that BD-FEM can be used as an efficient and economic design tool for developing new microfluidic devices for DNA manipulation.
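A minimal Brownian dynamics sketch in the same free-draining spirit (HI neglected; dimensionless, illustrative parameters; not the paper's BD-FEM, which couples such dynamics to a finite-element flow field in the real channel geometry): a Hookean dumbbell connector Q evolving in a planar elongational flow by Euler-Maruyama stepping.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps, Wi = 1e-3, 20000, 0.3             # Wi below the coil-stretch transition
kappa = np.array([[1.0, 0.0], [0.0, -1.0]])  # elongational velocity gradient

Q = np.array([0.1, 0.1])
stretch = []
for _ in range(steps):
    drift = Wi * (kappa @ Q) - 0.5 * Q       # flow stretching vs. entropic spring
    Q = Q + drift * dt + np.sqrt(dt) * rng.standard_normal(2)
    stretch.append(np.linalg.norm(Q))

print(f"mean stretch |Q| ~ {np.mean(stretch):.2f}")
# For Wi > 0.5 the Hookean dumbbell stretches without bound; finitely
# extensible (FENE) springs are needed there, as for real DNA.
```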
Automated calculation of matrix elements and physics motivated observables
NASA Astrophysics Data System (ADS)
Was, Z.
2017-11-01
The central aspect of my personal scientific activity has been calculations useful for the interpretation of high-energy accelerator experimental results, especially in the domain of precision tests of the Standard Model. My activities started in the early 80s, when computer support for algebraic manipulations was in its infancy. But already then it was important for my work. It brought a multitude of benefits, but at the price of some inconvenience for physics intuition. Calculations became more complex, work had to be distributed over teams of researchers and, due to automatization, some aspects of the intermediate results became more difficult to identify. In my talk I will not be very exhaustive; I will present examples from my personal research only: (i) calculations of spin effects for the process e+e− → τ+τ−γ at PETRA/PEP energies, calculations (with the help of the Grace system of the Minami-Tateya group) and phenomenology of spin amplitudes for (ii) e+e− → 4f and (iii) e+e− → ν_e ν̄_e γγ processes, and (iv) phenomenology of CP-sensitive observables for Higgs boson parity in H → τ+τ−, τ± → ν 2(3)π cascade decays.
Code of Federal Regulations, 2011 CFR
2011-01-01
... aspect of individual, team, or organizational performance that is not a critical or non-critical element..., or organizational performance, exclusive of a critical element, that is used in assigning a summary... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERFORMANCE MANAGEMENT...
Processor Would Find Best Paths On Map
NASA Technical Reports Server (NTRS)
Eberhardt, Silvio P.
1990-01-01
Proposed very-large-scale integrated (VLSI) circuit image-data processor finds path of least cost from specified origin to any destination on map. Cost of traversal assigned to each picture element of map. Path of least cost from originating picture element to every other picture element computed as path that preserves as much as possible of signal transmitted by originating picture element. Dedicated microprocessor at each picture element stores cost of traversal and performs its share of computations of paths of least cost. Least-cost-path problem occurs in research, military maneuvers, and in planning routes of vehicles.
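A software analogue of the proposal (the VLSI design performs this in parallel, with a dedicated processor per picture element; this sketch is a sequential Dijkstra scan that assigns every pixel the least accumulated traversal cost from the origin):

```python
import heapq

def least_cost_paths(cost, origin):
    """Least accumulated traversal cost from origin to every pixel."""
    rows, cols = len(cost), len(cost[0])
    best = [[float("inf")] * cols for _ in range(rows)]
    r0, c0 = origin
    best[r0][c0] = cost[r0][c0]
    heap = [(best[r0][c0], origin)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > best[r][c]:
            continue                        # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]       # cost of entering the pixel
                if nd < best[nr][nc]:
                    best[nr][nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return best

terrain = [[1, 1, 9], [9, 1, 9], [9, 1, 1]]
print(least_cost_paths(terrain, (0, 0)))    # least cost to every pixel
```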
Precollege Computer Literacy: A Personal Computing Approach. Second Edition.
ERIC Educational Resources Information Center
Moursund, David
Intended for elementary and secondary teachers and curriculum specialists, this booklet discusses and defines computer literacy as a functional knowledge of computers and their effects on students and the rest of society. It analyzes personal computing and the aspects of computers that have direct impact on students. Outlining computer-assisted…
Mechanical Aspects of Interfaces and Surfaces in Ceramic Containing Systems.
1984-12-14
... of a computer model to simulate the crack damage. The model is based on the fracture mechanics of cracks engulfed by the short stress pulse generated by drop impact. Inertial effects of the crack faces are a particularly important aspect of the model. The computer scheme thereby allows the stress ...
Modules and methods for all photonic computing
Schultz, David R.; Ma, Chao Hung
2001-01-01
A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.
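As a hedged electronic counterpart of one recirculation loop, assuming the iterative algorithm encoded in the second element is a Jacobi relaxation sweep for the 2-D Laplace equation (the patent leaves the algorithm generic):

```python
import numpy as np

u = np.zeros((32, 32))
u[0, :] = 1.0                      # boundary condition encodes the input data

for _ in range(500):               # each pass ~ one optical recirculation
    # Jacobi sweep: every interior point becomes the average of its
    # four neighbours from the previous iterate.
    interior = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    u[1:-1, 1:-1] = interior

print(round(float(u[16, 16]), 4))  # converged interior value: the "output"
```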
NASA Astrophysics Data System (ADS)
Rangarajan, Ramsharan; Gao, Huajian
2015-09-01
We introduce a finite element method to compute equilibrium configurations of fluid membranes, identified as stationary points of a curvature-dependent bending energy functional under certain geometric constraints. The reparameterization symmetries in the problem pose a challenge in designing parametric finite element methods, and existing methods commonly resort to Lagrange multipliers or penalty parameters. In contrast, we exploit these symmetries by representing solution surfaces as normal offsets of given reference surfaces and entirely bypass the need for artificial constraints. We then resort to a Galerkin finite element method to compute discrete C1 approximations of the normal offset coordinate. The variational framework presented is suitable for computing deformations of three-dimensional membranes subject to a broad range of external interactions. We provide a systematic algorithm for computing large deformations, wherein solutions at subsequent load steps are identified as perturbations of previously computed ones. We discuss the numerical implementation of the method in detail and demonstrate its optimal convergence properties using examples. We discuss applications of the method to studying adhesive interactions of fluid membranes with rigid substrates and to investigate the influence of membrane tension in tether formation.
NASA Technical Reports Server (NTRS)
Crouse, J. E.
1974-01-01
A method is presented for designing axial-flow compressor blading from blade elements defined on cones which pass through the blade-edge streamline locations. Each blade-element centerline is composed of two segments which are tangent to each other. The centerline and surfaces of each segment have constant change of angle with path distance. The stacking line for the blade elements can be leaned in both the axial and tangential directions. The output of the computer program gives coordinates for fabrication and properties for aeroelastic analysis for planar blade sections. These coordinates and properties are obtained by interpolation across conical blade elements. The program is structured to be coupled with an aerodynamic design program.
Computational structural mechanics engine structures computational simulator
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1989-01-01
The Computational Structural Mechanics (CSM) program at Lewis encompasses: (1) fundamental aspects for formulating and solving structural mechanics problems, and (2) development of integrated software systems to computationally simulate the performance/durability/life of engine structures.
On a 3-D singularity element for computation of combined mode stress intensities
NASA Technical Reports Server (NTRS)
Atluri, S. N.; Kathiresan, K.
1976-01-01
A special three-dimensional singularity element is developed for the computation of combined modes 1, 2, and 3 stress intensity factors, which vary along an arbitrarily curved crack front in three-dimensional linear elastic fracture problems. The finite element method uses a displacement-hybrid finite element model based on a modified variational principle of potential energy, with arbitrary element interior displacements, interelement boundary displacements, and element boundary tractions as variables. The special crack-front element used in this analysis contains the square-root singularity in strains and stresses, where the stress intensity factors K(1), K(2), and K(3) are quadratically variable along the crack front and are solved directly along with the unknown nodal displacements.
NASA Technical Reports Server (NTRS)
Noor, A. K. (Editor); Hayduk, R. J. (Editor)
1985-01-01
Among the topics discussed are developments in structural engineering hardware and software, computation for fracture mechanics, trends in numerical analysis and parallel algorithms, mechanics of materials, advances in finite element methods, composite materials and structures, determinations of random motion and dynamic response, optimization theory, automotive tire modeling methods and contact problems, the damping and control of aircraft structures, and advanced structural applications. Specific topics covered include structural design expert systems, the evaluation of finite element system architectures, systolic arrays for finite element analyses, nonlinear finite element computations, hierarchical boundary elements, adaptive substructuring techniques in elastoplastic finite element analyses, automatic tracking of crack propagation, a theory of rate-dependent plasticity, the torsional stability of nonlinear eccentric structures, a computation method for fluid-structure interaction, the seismic analysis of three-dimensional soil-structure interaction, a stress analysis for a composite sandwich panel, toughness criterion identification for unidirectional composite laminates, the modeling of submerged cable dynamics, and damping synthesis for flexible spacecraft structures.
NASA Technical Reports Server (NTRS)
Ecer, A.; Akay, H. U.
1981-01-01
The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.
Aorta modeling with the element-based zero-stress state and isogeometric discretization
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Sasaki, Takafumi
2017-02-01
Patient-specific arterial fluid-structure interaction computations, including aorta computations, require an estimation of the zero-stress state (ZSS), because the image-based arterial geometries do not come from a ZSS. We have earlier introduced a method for estimation of the element-based ZSS (EBZSS) in the context of finite element discretization of the arterial wall. The method has three main components. 1. An iterative method, which starts with a calculated initial guess, is used for computing the EBZSS such that when a given pressure load is applied, the image-based target shape is matched. 2. A method for straight-tube segments is used for computing the EBZSS so that we match the given diameter and longitudinal stretch in the target configuration and the "opening angle." 3. An element-based mapping between the artery and straight-tube is extracted from the mapping between the artery and straight-tube segments. This provides the mapping from the arterial configuration to the straight-tube configuration, and from the estimated EBZSS of the straight-tube configuration back to the arterial configuration, to be used as the initial guess for the iterative method that matches the image-based target shape. Here we present the version of the EBZSS estimation method with isogeometric wall discretization. With isogeometric discretization, we can obtain the element-based mapping directly, instead of extracting it from the mapping between the artery and straight-tube segments. That is because all we need for the element-based mapping, including the curvatures, can be obtained within an element. With NURBS basis functions, we may be able to achieve a similar level of accuracy as with the linear basis functions, but using larger-size and much fewer elements. Higher-order NURBS basis functions allow representation of more complex shapes within an element. To show how the new EBZSS estimation method performs, we first present 2D test computations with straight-tube configurations. Then we show how the method can be used in a 3D computation where the target geometry is coming from medical image of a human aorta.
The “Common Solutions” Strategy of the Experiment Support group at CERN for the LHC Experiments
NASA Astrophysics Data System (ADS)
Girone, M.; Andreeva, J.; Barreiro Megino, F. H.; Campana, S.; Cinquilli, M.; Di Girolamo, A.; Dimou, M.; Giordano, D.; Karavakis, E.; Kenyon, M. J.; Kokozkiewicz, L.; Lanciotti, E.; Litmaath, M.; Magini, N.; Negri, G.; Roiser, S.; Saiz, P.; Saiz Santos, M. D.; Schovancova, J.; Sciabà, A.; Spiga, D.; Trentadue, R.; Tuckett, D.; Valassi, A.; Van der Ster, D. C.; Shiers, J. D.
2012-12-01
After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments’ computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN's IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also offer lower long-term maintenance and support costs. The main areas cover Distributed Data Management, Data Analysis, Monitoring and the LCG Persistency Framework. Specific tools have been developed including the HammerCloud framework, automated services for data placement, data cleaning and data integrity (such as the data popularity service for CMS, the common Victor cleaning agent for ATLAS and CMS and tools for catalogue/storage consistency), the Dashboard Monitoring framework (job monitoring, data management monitoring, File Transfer monitoring) and the Site Status Board. This talk focuses primarily on the strategic aspects of providing such common solutions and how this relates to the overall goals of long-term sustainability and the relationship to the various WLCG Technical Evolution Groups. The success of the service components has given us confidence in the process, and has developed the trust of the stakeholders. We are now attempting to expand the development of common solutions into the more critical workflows. The first is a feasibility study of common analysis workflow execution elements between ATLAS and CMS. We look forward to additional common development in the future.
Segregating the core computational faculty of human language from working memory
Makuuchi, Michiru; Bahlmann, Jörg; Anwander, Alfred; Friederici, Angela D.
2009-01-01
In contrast to simple structures in animal vocal behavior, hierarchical structures such as center-embedded sentences manifest the core computational faculty of human language. Previous artificial grammar learning studies found that the left pars opercularis (LPO) subserves the processing of hierarchical structures. However, it is not clear whether this area is activated by the structural complexity per se or by the increased memory load entailed in processing hierarchical structures. To dissociate the effect of structural complexity from the effect of memory cost, we conducted a functional magnetic resonance imaging study of German sentence processing with a 2-way factorial design tapping structural complexity (with/without hierarchical structure, i.e., center-embedding of clauses) and working memory load (long/short distance between syntactically dependent elements; i.e., subject nouns and their respective verbs). Functional imaging data revealed that the processes for structure and memory operate separately but co-operatively in the left inferior frontal gyrus; activities in the LPO increased as a function of structural complexity, whereas activities in the left inferior frontal sulcus (LIFS) were modulated by the distance over which the syntactic information had to be transferred. Diffusion tensor imaging showed that these 2 regions were interconnected through white matter fibers. Moreover, functional coupling between the 2 regions was found to increase during the processing of complex, hierarchically structured sentences. These results suggest a neuroanatomical segregation of syntax-related aspects represented in the LPO from memory-related aspects reflected in the LIFS, which are, however, highly interconnected functionally and anatomically. PMID:19416819
Phenomenological aspects of the cognitive rumination construct.
Meyer, Leonardo Fernandez; Taborda, José Geraldo Vernet; da Costa, Fábio Antônio; Soares, Ana Luiza Alfaya Galego; Mecler, Kátia; Valença, Alexandre Martins
2015-01-01
To evaluate the importance of phenomenological aspects of the cognitive rumination (CR) construct in current empirical psychiatric research. We searched SciELO, Scopus, ScienceDirect, MEDLINE, OneFile (GALE), SpringerLink, Cambridge Journals and Web of Science between February and March of 2014 for studies whose title and topic included the following keywords: cognitive rumination; rumination response scale; and self-reflection. The inclusion criteria were: empirical clinical study; CR as the main object of investigation; and study that included a conceptual definition of CR. The studies selected were published in English in biomedical journals in the last 10 years. Our phenomenological analysis was based on Karl Jaspers' General Psychopathology. Most current empirical studies adopt phenomenological cognitive elements in conceptual definitions. However, these elements do not seem to be carefully examined and are indistinctly understood as objective empirical factors that may be measured, which may contribute to misunderstandings about CR, erroneous interpretations of results and problematic theoretical models. Empirical studies fail when evaluating phenomenological aspects of the cognitive elements of the CR construct. Psychopathology and phenomenology may help define the characteristics of CR elements and may contribute to their understanding and hierarchical organization as a construct. A review of the psychopathology principles established by Jaspers may clarify some of these issues.
Brigham, John C.; Aquino, Wilkins; Aguilo, Miguel A.; Diamessis, Peter J.
2010-01-01
An approach for efficient and accurate finite element analysis of harmonically excited soft solids using high-order spectral finite elements is presented and evaluated. The Helmholtz-type equations used to model such systems suffer from additional numerical error known as pollution when excitation frequency becomes high relative to stiffness (i.e. high wave number), which is the case, for example, for soft tissues subject to ultrasound excitations. The use of high-order polynomial elements allows for a reduction in this pollution error, but requires additional consideration to counteract Runge's phenomenon and/or poor linear system conditioning, which has led to the use of spectral element approaches. This work examines in detail the computational benefits and practical applicability of high-order spectral elements for such problems. The spectral elements examined are tensor product elements (i.e. quad or brick elements) of high-order Lagrangian polynomials with non-uniformly distributed Gauss-Lobatto-Legendre nodal points. A shear plane wave example is presented to show the dependence of the accuracy and computational expense of high-order elements on wave number. Then, a convergence study for a viscoelastic acoustic-structure interaction finite element model of an actual ultrasound driven vibroacoustic experiment is shown. The number of degrees of freedom required for a given accuracy level was found to consistently decrease with increasing element order. However, the computationally optimal element order was found to strongly depend on the wave number. PMID:21461402
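For reference, the non-uniformly distributed Gauss-Lobatto-Legendre nodal points mentioned above are the endpoints ±1 together with the roots of P′_N, the derivative of the degree-N Legendre polynomial; a quick NumPy check:

```python
import numpy as np

def gll_nodes(N):
    """GLL points: +/-1 plus the roots of the derivative of P_N."""
    dP = np.polynomial.legendre.Legendre.basis(N).deriv()
    return np.concatenate(([-1.0], np.sort(dP.roots()), [1.0]))

print(np.round(gll_nodes(4), 6))   # -> [-1, -0.654654, 0, 0.654654, 1]
# The clustering toward the endpoints is what counteracts Runge's
# phenomenon for high-order Lagrangian polynomial elements.
```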
A new parallel-vector finite element analysis software on distributed-memory computers
NASA Technical Reports Server (NTRS)
Qin, Jiangning; Nguyen, Duc T.
1993-01-01
A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.
Exponential convergence through linear finite element discretization of stratified subdomains
NASA Astrophysics Data System (ADS)
Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali
2016-10-01
Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.
A survey of parametrized variational principles and applications to computational mechanics
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.
1993-01-01
This survey paper describes recent developments in the area of parametrized variational principles (PVP's) and selected applications to finite-element computational mechanics. A PVP is a variational principle containing free parameters that have no effect on the Euler-Lagrange equations. The theory of single-field PVP's based on gauge functions (also known as null Lagrangians) is a subset of the inverse problem of variational calculus that has limited value. On the other hand, multifield PVP's are more interesting from theoretical and practical standpoints. Following a tutorial introduction, the paper describes the recent construction of multifield PVP's in several areas of elasticity and electromagnetics. It then discusses three applications to finite-element computational mechanics: the derivation of high-performance finite elements, the development of element-level error indicators, and the constructions of finite element templates. The paper concludes with an overview of open research areas.
System Proposal for Mass Transit Service Quality Control Based on GPS Data
Padrón, Gabino; Cristóbal, Teresa; Alayón, Francisco; Quesada-Arencibia, Alexis; García, Carmelo R.
2017-01-01
Quality is an essential aspect of public transport. In the case of regular public passenger transport by road, punctuality and regularity are criteria used to assess quality of service. Calculating metrics related to these criteria continuously over time and comprehensively across the entire transport network requires the handling of large amounts of data. This article describes a system for continuously and comprehensively monitoring punctuality and regularity. The system uses location data acquired continuously in the vehicles and automatically transferred for analysis. These data are processed intelligently by elements that are commonly used by transport operators: GPS-based tracking system, onboard computer and wireless networks for mobile data communications. The system was tested on a transport company, for which we measured the punctuality of one of the routes that it operates; the results are presented in this article. PMID:28621745
Silk-Its Mysteries, How It Is Made, and How It Is Used.
Ebrahimi, Davoud; Tokareva, Olena; Rim, Nae Gyune; Wong, Joyce Y; Kaplan, David L; Buehler, Markus J
2015-10-12
This article reviews fundamental and applied aspects of silk, one of Nature's most intriguing materials in terms of its strength, toughness, and biological role, in its various forms, from protein molecules to webs and cocoons, in the context of mechanical and biological properties. A central question that will be explored is how the bridging of scales and the emergence of hierarchical structures are critical elements in achieving novel material properties, and how this knowledge can be exploited in the design of synthetic materials. We review how the function of a material system at the macroscale can be derived from the interplay of fundamental molecular building blocks. Moreover, guidelines and approaches to current experimental and computational designs in the field of synthetic silk-like materials are provided to assist the materials science community in engineering customized, fine-tuned biomaterials for biomedical applications.
Five critical elements to ensure the precision medicine.
Chen, Chengshui; He, Mingyan; Zhu, Yichun; Shi, Lin; Wang, Xiangdong
2015-06-01
Precision medicine, as a newly emerging area and therapeutic strategy, has been practiced in individuals, brought unexpected successes, and gained high attention from both professional and social perspectives as a new path to improve the treatment and prognosis of patients. A number of new components will appear or be discovered, among which clinical bioinformatics integrates clinical phenotypes and informatics with bioinformatics, computational science, mathematics, and systems biology. In addition to those tools, precision medicine calls for more accurate and repeatable methodologies for the identification and validation of gene discovery. Precision medicine will bring new therapeutic strategies, drug discovery and development, and gene-oriented treatment. There is an urgent need to identify and validate disease-specific, mechanism-based, or epigenetics-dependent biomarkers to monitor precision medicine, and to develop "precision" regulations to guard its application.
Learning by statistical cooperation of self-interested neuron-like computing elements.
Barto, A G
1985-01-01
Since the usual approaches to cooperative computation in networks of neuron-like computing elements do not assume that network components have any "preferences", they do not make substantive contact with game-theoretic concepts, despite their use of some of the same terminology. In the approach presented here, however, each network component, or adaptive element, is a self-interested agent that prefers some inputs over others and "works" toward obtaining the most highly preferred inputs. Here we describe an adaptive element that is robust enough to learn to cooperate with other elements like itself in order to further its self-interests. It is argued that some of the longstanding problems concerning adaptation and learning by networks might be solvable by this form of cooperativity, and computer simulation experiments are described that show how networks of self-interested components that are sufficiently robust can solve rather difficult learning problems. We then place the approach in its proper historical and theoretical perspective through comparison with a number of related algorithms. A secondary aim of this article is to suggest that, beyond what is explicitly illustrated here, there is a wealth of ideas from game theory and allied disciplines such as mathematical economics that can be of use in thinking about cooperative computation in both nervous systems and man-made systems.
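A toy element in this spirit (a simplified reward-modulated stochastic unit, not Barto's exact associative reward-penalty rule; the reward function and learning rate are assumptions): the unit nudges its weights toward actions that obtained the preferred (rewarding) input and learns a context-dependent policy.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)        # one weight per context input
alpha = 0.1

def reward(x_idx, action):
    """The element 'prefers' reward 1; here the rewarded action flips with context."""
    return 1.0 if action == (x_idx == 0) else 0.0

for _ in range(2000):
    x_idx = rng.integers(2)                 # context presented to the element
    x = np.eye(2)[x_idx]
    p = 1.0 / (1.0 + np.exp(-w @ x))        # probability of taking action 1
    a = rng.random() < p
    r = reward(x_idx, a)
    # Reinforce toward the taken action if rewarded, away from it if not.
    w += alpha * (2*r - 1) * ((1.0 if a else 0.0) - p) * x

print(np.round(w, 2))  # opposite signs: the learned action depends on context
```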
Efficient simulation of incompressible viscous flow over multi-element airfoils
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Wiltberger, N. Lyn; Kwak, Dochan
1993-01-01
The incompressible, viscous, turbulent flow over single and multi-element airfoils is numerically simulated in an efficient manner by solving the incompressible Navier-Stokes equations. The solution algorithm employs the method of pseudo compressibility and utilizes an upwind differencing scheme for the convective fluxes, and an implicit line-relaxation scheme. The motivation for this work includes interest in studying high-lift take-off and landing configurations of various aircraft. In particular, accurate computation of lift and drag at various angles of attack up to stall is desired. Two different turbulence models are tested in computing the flow over an NACA 4412 airfoil; an accurate prediction of stall is obtained. The approach used for multi-element airfoils involves the use of multiple zones of structured grids fitted to each element. Two different approaches are compared; a patched system of grids, and an overlaid Chimera system of grids. Computational results are presented for two-element, three-element, and four-element airfoil configurations. Excellent agreement with experimental surface pressure coefficients is seen. The code converges in less than 200 iterations, requiring on the order of one minute of CPU time on a CRAY YMP per element in the airfoil configuration.
SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX-80
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.; Watson, Brian C.
1992-11-01
The finite element method has proven to be an invaluable tool for the analysis and design of complex, high-performance systems, such as bladed-disk assemblies in aircraft turbofan engines. However, as the problem size increases, the computation time required by conventional computers can be prohibitively high. Parallel processing computers provide the means to overcome these computation time limits. This report summarizes the results of a research activity aimed at providing a finite element capability for analyzing turbomachinery bladed-disk assemblies in a vector/parallel processing environment. A special-purpose code, named with the acronym SAPNEW, has been developed to perform static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements. SAPNEW provides a stand-alone capability for static and eigen analysis on the Alliant FX/80, a parallel processing computer. A preprocessor, named with the acronym NTOS, has been developed to accept NASTRAN input decks and convert them to the SAPNEW format to make SAPNEW more readily usable by researchers at NASA Lewis Research Center.
NASA Astrophysics Data System (ADS)
Cheviakov, Alexei F.
2017-11-01
An efficient systematic procedure is provided for symbolic computation of Lie groups of equivalence transformations and generalized equivalence transformations of systems of differential equations that contain arbitrary elements (arbitrary functions and/or arbitrary constant parameters), using the software package GeM for Maple. Application of equivalence transformations to the reduction of the number of arbitrary elements in a given system of equations is discussed, and several examples are considered. The first computational example of generalized equivalence transformations where the transformation of the dependent variable involves an arbitrary constitutive function is presented. As a detailed physical example, a three-parameter family of nonlinear wave equations describing finite anti-plane shear displacements of an incompressible hyperelastic fiber-reinforced medium is considered. Equivalence transformations are computed and employed to radically simplify the model for an arbitrary fiber direction, invertibly reducing the model to a simple form that corresponds to a special fiber direction, and involves no arbitrary elements. The presented computation algorithm is applicable to wide classes of systems of differential equations containing arbitrary elements.
Computation of Asteroid Proper Elements on the Grid
NASA Astrophysics Data System (ADS)
Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.
2009-12-01
A procedure of gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time consuming computations and make them more efficient is justified by the large increase of observational data expected from the next generation all sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids are derived since the beginning of use of the Grid infrastructure for the purpose. The average time for the catalogs update is significantly shortened with respect to the time needed with stand-alone workstations. We also present basics of the Grid computing, the concepts of Grid middleware and its Workload management system. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for the future work.
Exploring Elements of Fun to Motivate Youth to Do Cognitive Bias Modification.
Boendermaker, Wouter J; Boffo, Marilisa; Wiers, Reinout W
2015-12-01
Heavy drinking among young adults poses severe health risks, including development of later addiction problems. Cognitive retraining of automatic appetitive processes related to alcohol (so-called cognitive bias modification [CBM]) may help to prevent escalation of use. Although effective as a treatment in clinical patients, the use of CBM in youth proves more difficult, as motivation in this group is typically low, and the paradigms used are often viewed as boring and tedious. This article presents two separate studies that focused on three approaches that may enhance user experience and motivation to train: a serious game, a serious game in a social networking context, and a mobile application. In the Game Study, 77 participants performed a regular CBM training, aimed at response matching, a gamified version, or a placebo version of that training. The gamified version was presented as a stand-alone game or in the context of a social network. In the Mobile Study, 64 participants completed a different CBM training, aimed at approach bias, either on a computer or on their mobile device. Although no training effects were found in the Game Study, adding (social) game elements did increase aspects of the user experience and motivation to train. The mobile training appeared to increase motivation to train in terms how often participants trained, but this effect disappeared after controlling for baseline motivation to train. Adding (social) game elements can increase motivation to train, and mobile training did not underperform compared with the regular training in this sample, which warrants more research into motivational elements for CBM training in younger audiences.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Bell, W. C.; Arakere, G.; He, T.; Xie, X.; Cheeseman, B. A.
2010-02-01
A meso-scale ballistic material model for a prototypical plain-woven single-ply flexible armor is developed and implemented in a material user subroutine for the use in commercial explicit finite element programs. The main intent of the model is to attain computational efficiency when calculating the mechanical response of the multi-ply fabric-based flexible-armor material during its impact with various projectiles without significantly sacrificing the key physical aspects of the fabric microstructure, architecture, and behavior. To validate the new model, a comparative finite element method analysis is carried out in which: (a) the plain-woven single-ply fabric is modeled using conventional shell elements and weaving is done in an explicit manner by snaking the yarns through the fabric and (b) the fabric is treated as a planar continuum surface composed of conventional shell elements to which the new meso-scale unit-cell based material model is assigned. The results obtained show that the material model provides a reasonably good description for the fabric deformation and fracture behavior under different combinations of fixed and free boundary conditions. Finally, the model is used in an investigation of the ability of a multi-ply soft-body armor vest to protect the wearer from impact by a 9-mm round nose projectile. The effects of inter-ply friction, projectile/yarn friction, and the far-field boundary conditions are revealed and the results explained using simple wave mechanics principles, high-deformation rate material behavior, and the role of various energy-absorbing mechanisms in the fabric-based armor systems.
The Computer Student Worksheet Based Mathematical Literacy for Statistics
NASA Astrophysics Data System (ADS)
Manoy, J. T.; Indarasati, N. A.
2018-01-01
The student worksheet is a teaching medium that can improve learning activity in the classroom. Mathematical literacy indicators included in a student worksheet help students apply concepts in daily life, and the use of computers makes learning more environmentally friendly. This research followed the Thiagarajan Four-D development design, which has four stages: define, design, develop, and disseminate; the work reported here was completed through the third (develop) stage. A student worksheet meets the quality criteria if it achieves three aspects: validity, practicality, and effectiveness. The subjects were grade-eleven students of the 5th Mathematics and Natural Sciences class at the 1st State Senior High School of Driyorejo, Gresik. The computer student worksheet based on mathematical literacy for statistics achieved good quality on all three aspects: validity with an average of 3.79 (94.72%), practicality with an average of 2.85 (71.43%), and effectiveness with 94.74% of students reaching classical completeness and a 75% positive student response.
Numerical sedimentation particle-size analysis using the Discrete Element Method
NASA Astrophysics Data System (ADS)
Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.
2015-12-01
Sedimentation tests are widely used to determine the particle-size distribution of a granular sample. In this work, the Discrete Element Method interacts with a simulation of the flow through the well-known one-way-coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected onto two dimensions, with diameters ranging from 2.5 × 10^-6 m to 70 × 10^-6 m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement, considering laminar-flow interactions through buoyancy, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests, since they need the final granulometry as initial data; but, as the results show, these simulations can identify the strong and weak points of each method, eventually recommend useful variations, and draw conclusions on their validity, aspects very difficult to achieve in the laboratory.
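The settling dynamics that drive these tests follow from a simple force balance on each grain. Below is a minimal sketch of the one-way-coupling idea, assuming Stokes (laminar) drag and illustrative material constants not taken from the study: the fluid exerts buoyancy and drag on each sphere while the grains leave the flow unchanged.

```python
import numpy as np

rho_p, rho_f = 2650.0, 1000.0   # particle / water density [kg/m^3] (assumed)
mu, g = 1.0e-3, 9.81            # water viscosity [Pa.s], gravity [m/s^2]

def settling_velocity(diameters, v0=0.0, t=0.05):
    """Velocity at time t of spheres released in still water under
    gravity, buoyancy and Stokes drag (exact for linear drag)."""
    d = np.asarray(diameters, dtype=float)
    m = rho_p * np.pi * d**3 / 6.0                     # particle mass
    f_gb = (rho_p - rho_f) * np.pi * d**3 / 6.0 * g    # gravity minus buoyancy
    tau = m / (3.0 * np.pi * mu * d)                   # drag relaxation time
    v_term = f_gb * tau / m                            # terminal velocity
    return v_term + (v0 - v_term) * np.exp(-t / tau)   # exponential approach

# Finer grains settle far more slowly, which is exactly what the
# hydrometer and pipette tests exploit to infer the size distribution.
print(settling_velocity([2.5e-6, 70e-6]))
```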
A centennial tribute to G.K. Gilbert's Hydraulic Mining Débris in the Sierra Nevada
NASA Astrophysics Data System (ADS)
James, L. A.; Phillips, J. D.; Lecce, S. A.
2017-10-01
G.K. Gilbert's (1917) classic monograph, Hydraulic-Mining Débris in the Sierra Nevada, is described and put into the context of modern geomorphic knowledge. The emphasis here is on large-scale applied fluvial geomorphology, but other key elements, e.g., coastal geomorphology, are also briefly covered. A brief synopsis outlines key elements of the monograph, followed by discussions of highly influential aspects including the integrated watershed perspective, the extreme example of anthropogenic sedimentation, the computation of a quantitative, semidistributed sediment budget, and the advent of sediment-wave theory. Although Gilbert did not address concepts of equilibrium and grade in much detail, the rivers of the northwestern Sierra Nevada were highly disrupted and thrown into a condition of nonequilibrium. Therefore, concepts of equilibrium and grade, for which Gilbert's early work is often cited, are discussed. Gilbert's work is put into the context of complex nonlinear dynamics in geomorphic systems, and it is shown how these concepts can be used to interpret the nonequilibrium systems described by Gilbert. Broad, basin-scale studies were common in the period, but few were as quantitative and empirically rigorous or employed such a range of methodologies as PP105. None demonstrated such an extreme case of anthropogeomorphic change.
Studies on Effective Elastic Properties of CNT/Nano-Clay Reinforced Polymer Hybrid Composite
NASA Astrophysics Data System (ADS)
Thakur, Arvind Kumar; Kumar, Puneet; Srinivas, J.
2016-02-01
This paper presents a computational approach to predict the elastic properties of a hybrid nanocomposite material prepared by adding nano-clay platelets to a conventional CNT-reinforced epoxy system. Compared with the polymer alone or single-filler reinforced polymers, adding a second filler to the composite structure yields a drastic improvement in the resultant properties. In this regard, the effective elastic moduli of a hybrid nanocomposite are determined by using a finite element (FE) model with a square representative volume element (RVE). Continuum mechanics based homogenization of the nano-filler reinforced composite is used to evaluate the volumetric average of the stresses and the strains under different periodic boundary conditions. A three-phase Halpin-Tsai approach is selected to obtain the analytical result based on micromechanical modeling. The effect of the volume fractions of CNTs and nano-clay platelets on the mechanical behavior is studied. Two different RVEs of nano-clay platelets were used to investigate the influence of nano-filler geometry on composite properties. The combination of the high aspect ratio of CNTs and the large surface area of clay platelets contributes to the stiffening effect of the hybrid samples. Results of the analysis are validated against the Halpin-Tsai empirical formulae.
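For reference, the Halpin-Tsai estimate named above has a compact closed form, and a three-phase scheme can be built by applying it in two homogenization steps. The moduli, volume fractions and shape factors below are illustrative assumptions, not values from the paper.

```python
def halpin_tsai(Em, Ef, vf, zeta):
    """Halpin-Tsai estimate of a composite modulus.
    Em, Ef: matrix / filler moduli; vf: filler volume fraction;
    zeta: shape factor (e.g. 2*(l/d) for the longitudinal modulus
    of aligned short fibers)."""
    eta = (Ef / Em - 1.0) / (Ef / Em + zeta)
    return Em * (1.0 + zeta * eta * vf) / (1.0 - eta * vf)

# Two-step (three-phase) scheme: first homogenize the CNTs into the
# epoxy, then add the nano-clay platelets to that effective matrix.
E_epoxy, E_cnt, E_clay = 3.0e9, 1.0e12, 170e9            # Pa, assumed
E_step1 = halpin_tsai(E_epoxy, E_cnt, 0.02, zeta=2 * 1000)  # high-AR CNTs
E_hybrid = halpin_tsai(E_step1, E_clay, 0.03, zeta=2 * 100) # platelets
print(E_step1 / 1e9, E_hybrid / 1e9)                      # moduli in GPa
```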
Business Process Aware IS Change Management in SMEs
NASA Astrophysics Data System (ADS)
Makna, Janis
Changes in the business process usually require changes in the computer-supported information system and, vice versa, changes in the information system almost always cause at least some changes in the business process. In many situations it is not even possible to detect which of those changes are causes and which of them are effects. Nevertheless, it is possible to identify a set of changes that usually happen when one of the elements of the set changes its state. These sets of changes may be used as patterns for situation analysis, to anticipate the full range of activities to be performed to get the business process and/or information system back to a stable state after it is lost because of changes in one of the elements. Knowledge about the change pattern gives an opportunity to manage changes of information systems even if business process models and information systems architecture are not neatly documented, as is the case in many SMEs. Using change patterns it is possible to know whether changes in information systems are to be expected and how changes in information system activities, data and users will impact different aspects of the business process supported by the information system.
Dynamic Response of Functionally Graded Carbon Nanotube Reinforced Sandwich Plate
NASA Astrophysics Data System (ADS)
Mehar, Kulmani; Panda, Subrata Kumar
2018-03-01
In this article, the dynamic response of the carbon nanotube-reinforced functionally graded sandwich composite plate has been studied numerically with the help of the finite element method. The face sheets of the sandwich composite plate are made of carbon nanotube-reinforced composite with two different grading patterns, whereas the core phase is taken as an isotropic material. The final properties of the structure are calculated using the rule of mixtures. The geometrical model of the sandwich plate is developed and discretized suitably with the help of an available shell element in the ANSYS library. Subsequently, the corresponding numerical dynamic responses are computed via the batch input technique (parametric design language code) of ANSYS, using Newmark's integration scheme. The stability of the sandwich structural numerical model is established through a proper convergence study. Further, the reliability of the sandwich model is checked by a comparison study between the present results and those available in the references. As a final point, some numerical problems have been solved to examine the effect of different design constraints (carbon nanotube distribution pattern, core to face thickness ratio, volume fraction of the nanotubes, length to thickness ratio, aspect ratio and constraints at the edges) on the time responses of the sandwich plate.
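For context, Newmark's scheme advances the semi-discrete equations of motion M a + C v + K u = f one implicit step at a time. The sketch below is a generic implementation of the classical algorithm (with the unconditionally stable average-acceleration parameters), not code from the study or from ANSYS.

```python
import numpy as np

def newmark_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    """One step of the Newmark scheme for M a + C v + K u = f.
    beta=1/4, gamma=1/2 is the average-acceleration variant,
    unconditionally stable for linear problems."""
    Keff = K + gamma / (beta * dt) * C + 1.0 / (beta * dt**2) * M
    rhs = (f_next
           + M @ (u / (beta * dt**2) + v / (beta * dt)
                  + (1.0 / (2 * beta) - 1.0) * a)
           + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                  + dt * (gamma / (2 * beta) - 1.0) * a))
    u_next = np.linalg.solve(Keff, rhs)          # displacement update
    a_next = ((u_next - u) / (beta * dt**2) - v / (beta * dt)
              - (1.0 / (2 * beta) - 1.0) * a)    # back out acceleration
    v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next
```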
Hall, David R [Provo, UT; Hall, Jr., H. Tracy
2007-07-24
A transmission system in a downhole component comprises a data transmission element in both ends of the downhole component. Each data transmission element houses an electrically conducting coil in a MCEI circular trough. The electrically conducting coil comprises at least two generally fractional loops. In the preferred embodiment, the transmission elements are connected by an electrical conductor. Preferably, the electrical conductor is a coaxial cable. Preferably, the MCEI trough comprises ferrite. In the preferred embodiment, the fractional loops are connected by a connecting cable. In one aspect of the present invention, the connecting cable is a pair of twisted wires. In one embodiment the connecting cable is a shielded pair of twisted wires. In another aspect of the present invention, the connecting cable is a coaxial cable. The connecting cable may be disposed outside of the MCEI circular trough.
Numerical studies of the reversed-field pinch at high aspect ratio
NASA Astrophysics Data System (ADS)
Sätherblom, H.-E.; Drake, J. R.
1998-10-01
The reversed field pinch (RFP) configuration at an aspect ratio of 8.8 is studied numerically by means of the three-dimensional magnetohydrodynamic code DEBS [D. D. Schnack et al., J. Comput. Phys. 70, 330 (1987)]. This aspect ratio is equal to that of the Extrap T1 experiment [S. Mazur et al., Nucl. Fusion 34, 427 (1994)]. A numerical study of an RFP at this aspect ratio requires extensive computational resources and had hitherto not been performed. The results are compared with previous studies [Y. L. Ho et al., Phys. Plasmas 2, 3407 (1995)] of lower aspect ratio RFP configurations. In particular, an evaluation of the extrapolation to the aspect ratio of 8.8 made in that previous study shows that the extrapolation of the spectral spread is confirmed, as are most of the other findings. An important exception, however, is the magnetic diffusion coefficient, which is found to decrease with aspect ratio. Furthermore, an aspect ratio dependence of the magnetic energy and of the helicity of the RFP is found.
Four Studies on Aspects of Assessing Computational Performance. Technical Report No. 297.
ERIC Educational Resources Information Center
Romberg, Thomas A., Ed.
The four studies reported in this document deal with aspects of assessing students' performance on computational skills. The first study grew out of a need for an instrument to measure students' speed at recalling addition facts. This had seemed to be a very easy task, but it proved to be much more difficult than anticipated. The second study grew…
Computer simulation of functioning of elements of security systems
NASA Astrophysics Data System (ADS)
Godovykh, A. V.; Stepanov, B. P.; Sheveleva, A. A.
2017-01-01
The article addresses the development of an informational complex for simulating the functioning of security system elements. The complex is described in terms of its main objectives, design concept, and the interrelation of its main elements. The proposed computer simulation concept makes it possible to simulate security system operation for training security staff under both normal and emergency conditions.
Nonvolatile Ionic Two-Terminal Memory Device
NASA Technical Reports Server (NTRS)
Williams, Roger M.
1990-01-01
Conceptual solid-state memory device is nonvolatile, erasable, and has only two terminals. Proposed device is based on two effects: thermal phase transition and reversible intercalation of ions. Transfer of sodium ions between a source of ions and an electrical switching element increases or decreases the electrical conductance of the element, turning the switch "on" or "off". Intended for use in digital computers and neural-network computers. In neural networks, many small, densely packed switches would function as erasable, nonvolatile synaptic elements.
Vauhkonen, P J; Vauhkonen, M; Kaipio, J P
2000-02-01
In electrical impedance tomography (EIT), an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. The currents spread out in three dimensions and therefore off-plane structures have a significant effect on the reconstructed images. A question arises: how far from the current carrying electrodes should the discretized model of the object be extended? If the model is truncated too near the electrodes, errors are produced in the reconstructed images. On the other hand if the model is extended very far from the electrodes the computational time may become too long in practice. In this paper the model truncation problem is studied with the extended finite element method. Forward solutions obtained using so-called infinite elements, long finite elements and separable long finite elements are compared to the correct solution. The effects of the truncation of the computational domain on the reconstructed images are also discussed and results from the three-dimensional (3D) sensitivity analysis are given. We show that if the finite element method with ordinary elements is used in static 3D EIT, the dimension of the problem can become fairly large if the errors associated with the domain truncation are to be avoided.
TAP 2: A finite element program for thermal analysis of convectively cooled structures
NASA Technical Reports Server (NTRS)
Thornton, E. A.
1980-01-01
A finite element computer program (TAP 2) for steady-state and transient thermal analyses of convectively cooled structures is presented. The program has a finite element library of six elements: two conduction/convection elements to model heat transfer in a solid, two convection elements to model heat transfer in a fluid, and two integrated conduction/convection elements to represent combined heat transfer in tubular and plate/fin fluid passages. Nonlinear thermal analysis due to temperature-dependent thermal parameters is performed using the Newton-Raphson iteration method. Transient analyses are performed using an implicit Crank-Nicolson time integration scheme with consistent or lumped capacitance matrices as an option. Program output includes nodal temperatures and element heat fluxes. Pressure drops in fluid passages may be computed as an option. User instructions and sample problems are presented in appendixes.
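As a concrete illustration of the transient scheme named above, a single Crank-Nicolson step for the semi-discrete heat equation can be written as follows. This is a generic sketch of the algorithm, not TAP 2 source code, and the matrix names are placeholders.

```python
import numpy as np

def crank_nicolson_step(C, K, Q, T, dt):
    """One Crank-Nicolson step for C dT/dt + K T = Q, where C is the
    capacitance matrix (consistent or lumped/diagonal, as the program
    offers), K the conductance matrix, Q the nodal heat-load vector."""
    A = C / dt + 0.5 * K                 # implicit (left-hand) operator
    b = (C / dt - 0.5 * K) @ T + Q       # explicit contribution
    return np.linalg.solve(A, b)         # nodal temperatures at t + dt
```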
Logical Structure and the Composing Process.
ERIC Educational Resources Information Center
Russow, Lilly-Marlene
An important aspect of the composing process is the element of organization--the coherent development of ideas and considerations of relevance. Most investigations of this aspect have focused on prewriting behavior or on "heuristics," "frames," or other approaches that presuppose that organization is something imposed from the…
Psychosomatic Aspects of Cancer: An Overview.
ERIC Educational Resources Information Center
Murray, John B.
1980-01-01
It is suggested in this literature review on the psychosomatic aspects of cancer that psychoanalytic interpretations which focused on intrapsychic elements have given way to considerations of rehabilitation and assistance with the complex emotional reactions of patients and their families to terminal illness and death. (Author/DB)
Cognitive-ergonomics and instructional aspects of e-learning courses.
Rodrigues, Martha; Castello Branco, Iana; Shimioshi, José; Rodrigues, Evaldo; Monteiro, Simone; Quirino, Marcelo
2012-01-01
This paper presents an analysis of cognitive-ergonomic aspects of e-learning courses offered by an agency of the Brazilian Public Administration. Cognitive ergonomics studies the conative and cognitive aspects of the relation between humans and the physical and social elements of the workspace. On that basis, usability aspects were evaluated on the following points: i) visualization; ii) text comprehension/reading; iii) memory; iv) interface; v) instructional design; and vi) attention and learning. The survey applied the following techniques: (1) bibliographic survey, (2) field survey and (3) document analysis. A semi-structured questionnaire was chosen as the main method of data collection. Regarding interaction with artifacts, the interface of the courses is classified as direct engagement, because it allows the user to get the feeling of acting directly on the objects. Although the courses are well structured, they have flaws that are discussed below. Even with these problems, the courses have a good degree of usability.
Computer-Generated Feedback on Student Writing
ERIC Educational Resources Information Center
Ware, Paige
2011-01-01
A distinction must be made between "computer-generated scoring" and "computer-generated feedback". Computer-generated scoring refers to the provision of automated scores derived from mathematical models built on organizational, syntactic, and mechanical aspects of writing. In contrast, computer-generated feedback, the focus of this article, refers…
Wittek, Adam; Joldes, Grand; Couton, Mathieu; Warfield, Simon K; Miller, Karol
2010-12-01
Long computation times of non-linear (i.e. accounting for geometric and material non-linearity) biomechanical models have been regarded as one of the key factors preventing application of such models in predicting organ deformation for image-guided surgery. This contribution presents real-time patient-specific computation of the deformation field within the brain for six cases of brain shift induced by craniotomy (i.e. surgical opening of the skull) using specialised non-linear finite element procedures implemented on a graphics processing unit (GPU). In contrast to commercial finite element codes that rely on an updated Lagrangian formulation and implicit integration in time domain for steady state solutions, our procedures utilise the total Lagrangian formulation with explicit time stepping and dynamic relaxation. We used patient-specific finite element meshes consisting of hexahedral and non-locking tetrahedral elements, together with realistic material properties for the brain tissue and appropriate contact conditions at the boundaries. The loading was defined by prescribing deformations on the brain surface under the craniotomy. Application of the computed deformation fields to register (i.e. align) the preoperative and intraoperative images indicated that the models very accurately predict the intraoperative deformations within the brain. For each case, computing the brain deformation field took less than 4 s using an NVIDIA Tesla C870 GPU, which is two orders of magnitude reduction in computation time in comparison to our previous study in which the brain deformation was predicted using a commercial finite element solver executed on a personal computer. Copyright © 2010 Elsevier Ltd. All rights reserved.
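The explicit time stepping with dynamic relaxation mentioned above has a simple skeleton: damped central-difference iterations driven to a steady state, with no system solve per step. The sketch below is a generic serial illustration of that idea, not the authors' GPU procedures; the internal-force callable and parameter names are assumptions.

```python
import numpy as np

def dynamic_relaxation(internal_force, f_ext, m, dt, c=0.5,
                       tol=1e-8, max_steps=100_000):
    """Drive the damped explicit dynamics to the steady (static) state.
    internal_force(u): nodal internal forces from the FE discretization
    (in the paper, a total Lagrangian formulation); f_ext: external
    loads; m: nodal masses; c: mass-proportional damping coefficient."""
    u = np.zeros_like(f_ext)
    v = np.zeros_like(f_ext)
    for _ in range(max_steps):
        r = f_ext - internal_force(u)   # out-of-balance (residual) force
        if np.linalg.norm(r) < tol:
            break                       # converged to equilibrium
        a = r / m - c * v               # damped acceleration
        v += dt * a
        u += dt * v                     # explicit update, no linear solver
    return u
```

The absence of a global solve is what makes each step cheap and embarrassingly parallel, which is why this class of algorithm maps well onto a GPU.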
Micromechanical Aspects of Hydraulic Fracturing Processes
NASA Astrophysics Data System (ADS)
Galindo-torres, S. A.; Behraftar, S.; Scheuermann, A.; Li, L.; Williams, D.
2014-12-01
A micromechanical model is developed to simulate the hydraulic fracturing process. The model comprises two key components. Firstly, the solid matrix, assumed to be a rock mass with pre-fabricated cracks, is represented by an array of bonded particles simulated by the Discrete Element Method (DEM) [1]. The interaction is ruled by the spheropolyhedra method, which was introduced by the authors previously and has been shown to realistically represent many of the features found in fracturing and comminution processes. The second component is the fluid, which is modelled by the Lattice Boltzmann Method (LBM); it was recently coupled with the spheropolyhedra by the authors and validated. An advantage of this coupled LBM-DEM model is the control of many of the parameters of the fracturing fluid, such as its viscosity and the injection rate. To the best of the authors' knowledge, this is the first application of such a coupled scheme for studying hydraulic fracturing [2]. In this first implementation, results are presented for a two-dimensional situation. Fig. 1 shows one snapshot of the coupled LBM-DEM simulation of hydraulic fracturing, in which the elements with broken bonds can be identified and the fracture geometry quantified. The simulation involves a variation of the underground stress, particularly the difference between the two principal components of the stress tensor, to explore the effect on the fracture path. A second study focuses on the fluid viscosity to examine the effect of the time scales of different injection plans on the fracture geometry. The developed tool and the presented results have important implications for future studies of the hydraulic fracturing process and technology. References: [1] Galindo-Torres, S.A., et al., Breaking processes in three-dimensional bonded granular materials with general shapes. Computer Physics Communications, 2012. 183(2): p. 266-277. [2] Galindo-Torres, S.A., A coupled Discrete Element Lattice Boltzmann Method for the simulation of fluid-solid interaction with particles of general shapes. Computer Methods in Applied Mechanics and Engineering, 2013. 265(0): p. 107-119.
Future of Assurance: Ensuring that a System is Trustworthy
NASA Astrophysics Data System (ADS)
Sadeghi, Ahmad-Reza; Verbauwhede, Ingrid; Vishik, Claire
Significant efforts are put in defining and implementing strong security measures for all components of the computing environment. It is equally important to be able to evaluate the strength and robustness of these measures and establish trust among the components of the computing environment based on parameters and attributes of these elements and best practices associated with their production and deployment. Today the inventory of techniques used for security assurance and to establish trust -- audit, security-conscious development process, cryptographic components, external evaluation -- is somewhat limited. These methods have their indisputable strengths and have contributed significantly to the advancement in the area of security assurance. However, shorter product and technology development cycles and the sheer complexity of modern digital systems and processes have begun to decrease the efficiency of these techniques. Moreover, these approaches and technologies address only some aspects of security assurance and, for the most part, evaluate assurance in a general design rather than an instance of a product. Additionally, various components of the computing environment participating in the same processes enjoy different levels of security assurance, making it difficult to ensure adequate levels of protection end-to-end. Finally, most evaluation methodologies rely on the knowledge and skill of the evaluators, making reliable assessments of trustworthiness of a system even harder to achieve. The paper outlines some issues in security assurance that apply across the board, with a focus on the trustworthiness and authenticity of hardware components, and evaluates current approaches to assurance.
González-Avalos, P; Mürnseer, M; Deeg, J; Bachmann, A; Spatz, J; Dooley, S; Eils, R; Gladilin, E
2017-05-01
The mechanical cell environment is a key regulator of biological processes. In living tissues, cells are embedded into the 3D extracellular matrix and permanently exposed to mechanical forces. Quantification of the cellular strain state in a 3D matrix is therefore the first step towards understanding how physical cues determine single cell and multicellular behaviour. The majority of cell assays are, however, based on 2D cell cultures that lack many essential features of the in vivo cellular environment. Furthermore, nondestructive measurement of substrate and cellular mechanics requires appropriate computational tools for microscopic image analysis and interpretation. Here, we present an experimental and computational framework for generation and quantification of the cellular strain state in 3D cell cultures using a combination of 3D substrate stretcher, multichannel microscopic imaging and computational image analysis. The 3D substrate stretcher enables deformation of living cells embedded in bead-labelled 3D collagen hydrogels. Local substrate and cell deformations are determined by tracking displacement of fluorescent beads with subsequent finite element interpolation of cell strains over a tetrahedral tessellation. In this feasibility study, we debate diverse aspects of deformable 3D culture construction, quantification and evaluation, and present an example of its application for quantitative analysis of a cellular model system based on primary mouse hepatocytes undergoing transforming growth factor (TGF-β)-induced epithelial-to-mesenchymal transition. © 2017 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
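The finite element interpolation step has a compact closed form on a linear tetrahedron: the displacement gradient, and hence the strain, is constant per element and follows directly from the vertex displacements. A minimal sketch (generic linear-tet algebra, not the authors' pipeline):

```python
import numpy as np

def tet_strain(X, u):
    """Constant small-strain tensor of a linear tetrahedron from the
    displacements of its four vertices, as used when interpolating
    strains over a tetrahedral tessellation of tracked bead positions.
    X, u: (4, 3) arrays of reference positions / displacements."""
    D = (X[1:] - X[0]).T            # edge matrix of the reference element
    dU = (u[1:] - u[0]).T           # corresponding displacement differences
    G = dU @ np.linalg.inv(D)       # displacement gradient du/dX
    return 0.5 * (G + G.T)          # symmetric small-strain tensor
```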
Analysis of an algorithm for distributed recognition and accountability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, C.; Frincke, D.A.; Goan, T. Jr.
1993-08-01
Computer and network systems are vulnerable to attacks. Abandoning the existing huge infrastructure of possibly-insecure computer and network systems is impossible, and replacing them by totally secure systems may not be feasible or cost effective. A common element in many attacks is that a single user will often attempt to intrude upon multiple resources throughout a network. Detecting the attack can become significantly easier by compiling and integrating evidence of such intrusion attempts across the network rather than attempting to assess the situation from the vantage point of only a single host. To solve this problem, we suggest an approach for distributed recognition and accountability (DRA), which consists of algorithms that "process," at a central location, distributed and asynchronous "reports" generated by computers (or a subset thereof) throughout the network. Our highest-priority objectives are to observe the ways by which an individual moves around in a network of computers, including changing user names to possibly hide his/her true identity, and to associate all activities of multiple instances of the same individual with the same network-wide user. We present the DRA algorithm and a sketch of its proof under an initial set of simplifying albeit realistic assumptions. Later, we relax these assumptions to accommodate pragmatic aspects such as missing or delayed "reports," clock skew, tampered "reports," etc. We believe that such algorithms will have widespread applications in the future, particularly in intrusion-detection systems.
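The network-wide association step can be pictured with a small union-find sketch: every per-host report that one account spawned a session as another merges the two into a single network-wide identity. The report format and names below are hypothetical illustrations, not the paper's actual data structures.

```python
class NetworkIdentity:
    """Fold asynchronous per-host reports into network-wide users."""

    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        if self.parent[x] != x:
            self.parent[x] = self._find(self.parent[x])  # path compression
        return self.parent[x]

    def report(self, src, dst):
        """A report that account `src` spawned a session as `dst`
        (e.g. remote login to another host, or a local user-name
        change): merge both into the same network-wide user."""
        self.parent[self._find(src)] = self._find(dst)

    def same_user(self, a, b):
        return self._find(a) == self._find(b)

ids = NetworkIdentity()
ids.report(("hostA", "alice"), ("hostB", "bob"))   # alice becomes bob on hostB
ids.report(("hostB", "bob"), ("hostC", "carol"))
print(ids.same_user(("hostA", "alice"), ("hostC", "carol")))  # True
```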
Analysis and synthesis of distributed-lumped-active networks by digital computer
NASA Technical Reports Server (NTRS)
1973-01-01
The use of digital computational techniques in the analysis and synthesis of DLA (distributed lumped active) networks is considered. This class of networks consists of three distinct types of elements, namely, distributed elements (modeled by partial differential equations), lumped elements (modeled by algebraic relations and ordinary differential equations), and active elements (modeled by algebraic relations). Such a characterization is applicable to a broad class of circuits, especially including those usually referred to as linear integrated circuits, since the fabrication techniques for such circuits readily produce elements which may be modeled as distributed, as well as the more conventional lumped and active ones.
Study of propellant dynamics in a shuttle type launch vehicle
NASA Technical Reports Server (NTRS)
Jones, C. E.; Feng, G. C.
1972-01-01
A method and an associated digital computer program for evaluating the vibrational characteristics of large liquid-filled rigid wall tanks of general shape are presented. A solution procedure was developed in which slosh modes and frequencies are computed for systems mathematically modeled as assemblages of liquid finite elements. To retain sparsity in the assembled system mass and stiffness matrices, a compressible liquid element formulation was incorporated in the program. The approach taken in the liquid finite element formulation is compatible with triangular and quadrilateral structural finite elements so that the analysis of liquid motion can be coupled with flexible tank wall motion at some future time. The liquid element repertoire developed during the course of this study consists of a two-dimensional triangular element and a three-dimensional tetrahedral element.
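Once the liquid finite elements are assembled, the modal computation described above reduces to a generalized eigenvalue problem K φ = ω² M φ. A minimal sketch, assuming symmetric assembled matrices (names are placeholders, and production codes would exploit the sparsity mentioned in the abstract):

```python
import numpy as np
from scipy.linalg import eigh

def slosh_modes(K, M, n=5):
    """Lowest n slosh frequencies [rad/s] and mode shapes from the
    assembled stiffness and mass matrices of the liquid model."""
    w2, phi = eigh(K, M)              # generalized eigenproblem, ascending
    w2 = np.clip(w2, 0.0, None)       # guard tiny negative round-off
    return np.sqrt(w2[:n]), phi[:, :n]
```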
NASA Astrophysics Data System (ADS)
Zárate, Francisco; Cornejo, Alejandro; Oñate, Eugenio
2018-07-01
This paper extends to three dimensions (3D) the computational technique developed by the authors in 2D for predicting the onset and evolution of fracture in a finite element mesh in a simple manner, based on combining the finite element method and the discrete element method (DEM) approach (Zárate and Oñate in Comput Part Mech 2(3):301-314, 2015). Once a crack is detected at an element edge, discrete elements are generated at the adjacent element vertexes and a simple DEM mechanism is considered in order to follow the evolution of the crack. The combination of the DEM with simple four-noded linear tetrahedron elements correctly captures the onset of fracture and its evolution, as shown in several 3D examples of application.
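As a rough illustration of an edge-based onset test, the check below flags element edges whose normal tensile traction exceeds a strength threshold, after which discrete elements would be generated at the adjacent vertexes. The data layout, the stress-recovery step and the exact criterion are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def cracked_edges(edges, sigma_at_edge, normals, f_t):
    """Flag edges where the tensile traction normal to the edge exceeds
    the tensile strength f_t.
    edges: list of (v0, v1) vertex pairs; sigma_at_edge: (n, 3, 3)
    smoothed stress tensors at the edges; normals: (n, 3) unit normals."""
    flagged = []
    for e, (s, n) in enumerate(zip(sigma_at_edge, normals)):
        if n @ s @ n > f_t:            # normal traction exceeds strength
            flagged.append(edges[e])   # spawn DEM particles at its vertexes
    return flagged
```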
Functionally conserved cis-regulatory elements of COL18A1 identified through zebrafish transgenesis.
Kague, Erika; Bessling, Seneca L; Lee, Josephine; Hu, Gui; Passos-Bueno, Maria Rita; Fisher, Shannon
2010-01-15
Type XVIII collagen is a component of basement membranes, and expressed prominently in the eye, blood vessels, liver, and the central nervous system. Homozygous mutations in COL18A1 lead to Knobloch Syndrome, characterized by ocular defects and occipital encephalocele. However, relatively little has been described on the role of type XVIII collagen in development, and nothing is known about the regulation of its tissue-specific expression pattern. We have used zebrafish transgenesis to identify and characterize cis-regulatory sequences controlling expression of the human gene. Candidate enhancers were selected from non-coding sequence associated with COL18A1 based on sequence conservation among mammals. Although these displayed no overt conservation with orthologous zebrafish sequences, four regions nonetheless acted as tissue-specific transcriptional enhancers in the zebrafish embryo, and together recapitulated the major aspects of col18a1 expression. Additional post-hoc computational analysis on positive enhancer sequences revealed alignments between mammalian and teleost sequences, which we hypothesize predict the corresponding zebrafish enhancers; for one of these, we demonstrate functional overlap with the orthologous human enhancer sequence. Our results provide important insight into the biological function and regulation of COL18A1, and point to additional sequences that may contribute to complex diseases involving COL18A1. More generally, we show that combining functional data with targeted analyses for phylogenetic conservation can reveal conserved cis-regulatory elements in the large number of cases where computational alignment alone falls short. Copyright 2009 Elsevier Inc. All rights reserved.
Study of the elastic behavior of synthetic lightweight aggregates (SLAs)
NASA Astrophysics Data System (ADS)
Jin, Na
Synthetic lightweight aggregates (SLAs), composed of coal fly ash and recycled plastics, represent a resilient construction material that could be a key aspect of future sustainable development. This research focuses on predicting the elastic modulus of SLA, assumed to be a homogeneous and isotropic composite of particulates of high carbon fly ash (HCFA) in a matrix of plastics (HDPE, LDPE, PS and mixtures of plastics), with emphasis on SLAs made of HCFA and PS. The elastic moduli of SLA with variable fly ash volume fractions are predicted based on finite element analyses (FEA) performed using the computer programs ABAQUS and PLAXIS. The effects of interface friction (roughness) between phases and of other computational parameters, e.g., loading strain, component stiffness, element type and boundary conditions, are included in these analyses. Analytical models and laboratory tests provide a baseline for comparison. Overall, results indicate ABAQUS generates elastic moduli closer to those predicted by well-established analytical models than the moduli predicted by PLAXIS, especially for SLAs with lower fly ash content. In addition, increases in roughness and loading strain indicated an increase in SLA stiffness, especially as fly ash content increases. The elastic moduli obtained from unconfined compression tests were generally lower than those obtained from analytical and ABAQUS 3D predictions. This may be caused by the possible existence of a pre-failure surface in the specimen and by direct interaction between HCFA particles. Recommendations for future work include laboratory measurements of SLA moduli and FEM modeling that considers various sizes and a random distribution of HCFA particles in SLAs.
NASA Astrophysics Data System (ADS)
Huespe, A. E.; Oliver, J.; Mora, D. F.
2013-12-01
A finite element methodology for simulating the failure of high performance fiber reinforced concrete composites (HPFRC), with arbitrarily oriented short fibers, is presented. The composite material model is based on a micromorphic approach. Using the framework provided by this theory, the body configuration space is described through two kinematical descriptors. At the structural level, the displacement field represents the standard kinematical descriptor. Additionally, a morphological kinematical descriptor, the micromorphic field, is introduced. It describes the fiber-matrix relative displacement, or slipping mechanism of the bond, observed at the mesoscale level. In the first part of this paper, we summarize the model formulation of the micromorphic approach presented in a previous work by the authors. In the second part, and as the main contribution of the paper, we address specific issues related to the numerical aspects involved in the computational implementation of the model. The developed numerical procedure is based on a mixed finite element technique. The number of dofs per node changes according to the number of fiber bundles simulated in the composite. A specific solution scheme is therefore proposed to handle the variable number of unknowns in the discrete model. The HPFRC composite model takes into account the important effects produced by concrete fracture. A procedure for simulating quasi-brittle fracture is introduced into the model and described in the paper. The present numerical methodology is assessed by simulating a selected set of experimental tests, which proves its viability and accuracy in capturing a number of mechanical phenomena interacting at the macro- and mesoscale and leading to failure of HPFRC composites.
Ferreiro, Diego U; Komives, Elizabeth A; Wolynes, Peter G
2014-11-01
Biomolecules are the prime information processing elements of living matter. Most of these inanimate systems are polymers that compute their own structures and dynamics using as input seemingly random character strings of their sequence, following which they coalesce and perform integrated cellular functions. In large computational systems with finite interaction-codes, the appearance of conflicting goals is inevitable. Simple conflicting forces can lead to quite complex structures and behaviors, leading to the concept of frustration in condensed matter. We present here some basic ideas about frustration in biomolecules and how the frustration concept leads to a better appreciation of many aspects of the architecture of biomolecules, and especially how biomolecular structure connects to function by means of localized frustration. These ideas are both seductively simple and perilously subtle to grasp completely. The energy landscape theory of protein folding provides a framework for quantifying frustration in large systems and has been implemented at many levels of description. We first review the notion of frustration from the areas of abstract logic and its uses in simple condensed matter systems. We then discuss how the frustration concept applies specifically to heteropolymers, testing folding landscape theory in computer simulations of protein models and in experimentally accessible systems. Studying the aspects of frustration averaged over many proteins provides ways to infer energy functions useful for reliable structure prediction. We discuss how frustration affects folding mechanisms. We review here how the biological functions of proteins are related to subtle local physical frustration effects and how frustration influences the appearance of metastable states, the nature of binding processes, catalysis and allosteric transitions. In this review, we also emphasize that frustration, far from being always a bad thing, is an essential feature of biomolecules that allows dynamics to be harnessed for function. In this way, we hope to illustrate how frustration is a fundamental concept in molecular biology.
Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.
Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan
2018-04-01
The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectually has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from unlabeled data. Our algorithm is degenerated into a special scenario of the previous algorithm when the extended candidate set is reduced into a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we developed a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility. However, our algorithm and other algorithms have similar levels of performance in the remaining aspects.
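The indicator idea can be pictured with a small sketch: unlabeled points with the smallest signed functional margin under the current hyperplane are the most plausible label flips, and the indices of the k highest-possibility points form the extended candidate set. The exact scoring rule in the paper may differ; names and shapes here are illustrative.

```python
import numpy as np

def extended_candidate_set(w, b, X_unlabeled, y_current, k):
    """Return indices of the k unlabeled points whose current labels
    are most likely to be reversed, judged by their signed functional
    margin to the hyperplane w.x + b = 0.
    X_unlabeled: (n, d); y_current: (n,) tentative labels in {-1, +1}."""
    margins = y_current * (X_unlabeled @ w + b)   # signed functional margin
    possibility = -margins                        # small/negative => likely flip
    return np.argsort(possibility)[::-1][:k]      # top-k candidate indices
```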
47 CFR 69.307 - General support facilities.
Code of Federal Regulations, 2014 CFR
2014-10-01
(a) General purpose computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the…
High-Order Numerical Simulations of Wind Turbine Wakes
NASA Astrophysics Data System (ADS)
Kleusberg, E.; Mikkelsen, R. F.; Schlatter, P.; Ivanell, S.; Henningson, D. S.
2017-05-01
Previous attempts to describe the structure of wind turbine wakes and their mutual interaction were mostly limited to large-eddy and Reynolds-averaged Navier-Stokes simulations using finite-volume solvers. We employ the higher-order spectral-element code Nek5000 to study the influence of numerical aspects on the prediction of the wind turbine wake structure and the wake interaction between two turbines. The spectral-element method enables an accurate representation of the vortical structures, with lower numerical dissipation than the more commonly used finite-volume codes. The wind-turbine blades are modeled as body forces using the actuator-line method (ACL) in the incompressible Navier-Stokes equations. Both tower and nacelle are represented with appropriate body forces. An inflow boundary condition is used which emulates homogeneous isotropic turbulence of wind-tunnel flows. We validate the implementation with results from experimental campaigns undertaken at the Norwegian University of Science and Technology (NTNU Blind Tests), investigate parametric influences and compare computational aspects with existing numerical simulations. In general the results show good agreement between the experiments and the numerical simulations, both for a single-turbine setup and for a two-turbine setup where the turbines are offset in the spanwise direction. A shift in the wake center caused by the tower wake is detected, similar to the experiments, and the additional velocity deficit caused by the tower agrees well with the experimental data. The wake is captured well by Nek5000 in comparison with experiments both for the single wind turbine and in the two-turbine setup. The blade loading, however, shows large discrepancies for the high-turbulence, two-turbine case: while the experiments predicted higher thrust for the downstream turbine than for the upstream turbine, the opposite was observed in Nek5000.
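The actuator-line body forces mentioned above are typically regularized onto the flow grid with a Gaussian kernel (the standard Sørensen-Shen form). A minimal sketch of that projection step follows; array shapes and names are illustrative assumptions, not Nek5000 code.

```python
import numpy as np

def acl_body_force(grid_pts, actuator_pts, blade_forces, eps):
    """Smear each blade-segment force onto the flow grid with a 3D
    Gaussian of width eps, giving the body-force field added to the
    incompressible Navier-Stokes equations.
    grid_pts: (n, 3); actuator_pts, blade_forces: (m, 3)."""
    f = np.zeros_like(grid_pts, dtype=float)
    for xa, Fa in zip(actuator_pts, blade_forces):
        r2 = np.sum((grid_pts - xa)**2, axis=1)            # squared distance
        kernel = np.exp(-r2 / eps**2) / (eps**3 * np.pi**1.5)  # normalized
        f += kernel[:, None] * Fa                          # distribute force
    return f
```

The smearing width eps trades off force-field smoothness against how sharply the bound vorticity of the blade is represented, which is one of the parametric influences such studies examine.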
Cost Considerations in Nonlinear Finite-Element Computing
NASA Technical Reports Server (NTRS)
Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.
1985-01-01
Conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. Paper evaluates the computational efficiency of different computer architectural types in terms of relative cost and computing time.
People and computers--some recent highlights.
Shackel, B
2000-12-01
This paper aims to review selectively a fair proportion of the literature on human-computer interaction (HCI) over the three years since Shackel (J. Am. Soc. Inform. Sci. 48 (11) (1997) 970-986). After a brief note of history I discuss traditional input, output and workplace aspects, the web and 'E-topics', web-related aspects, virtual reality, safety-critical systems, and the need to move from HCI to human-system integration (HSI). Finally I suggest, and consider briefly, some future possibilities and issues including web consequences, embedded ubiquitous computing, and 'back to systems ergonomics?'.
Yang, R; Zelyak, O; Fallone, B G; St-Aubin, J
2018-01-30
Angular discretization impacts nearly every aspect of a deterministic solution to the linear Boltzmann transport equation, especially in the presence of magnetic fields, as modeled by a streaming operator in angle. In this work a novel stabilization treatment of the magnetic field term is developed for an angular finite element discretization on the unit sphere, specifically involving piecewise partitioning of path integrals along curved element edges into uninterrupted segments of incoming and outgoing flux, with outgoing components updated iteratively. Correct order-of-accuracy for this angular framework is verified using the method of manufactured solutions for linear, quadratic, and cubic basis functions in angle. Higher order basis functions were found to reduce the error especially in strong magnetic fields and low density media. We combine an angular finite element mesh respecting octant boundaries on the unit sphere to spatial Cartesian voxel elements to guarantee an unambiguous transport sweep ordering in space. Accuracy for a dosimetrically challenging scenario involving bone and air in the presence of a 1.5 T parallel magnetic field is validated against the Monte Carlo package GEANT4. Accuracy and relative computational efficiency were investigated for various angular discretization parameters. 32 angular elements with quadratic basis functions yielded a reasonable compromise, with gamma passing rates of 99.96% (96.22%) for a 2%/2 mm (1%/1 mm) criterion. A rotational transformation of the spatial calculation geometry is performed to orient an arbitrary magnetic field vector to be along the z-axis, a requirement for a constant azimuthal angular sweep ordering. Working on the unit sphere, we apply the same rotational transformation to the angular domain to align its octants with the rotated Cartesian mesh. Simulating an oblique 1.5 T magnetic field against GEANT4 yielded gamma passing rates of 99.42% (95.45%) for a 2%/2 mm (1%/1 mm) criterion.
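The geometry rotation described in the final step is a standard align-vector-with-axis construction. A small generic sketch via Rodrigues' formula (not the authors' code) that builds R with R b̂ = e_z:

```python
import numpy as np

def rotation_to_z(b):
    """Proper rotation matrix R such that R @ (b / |b|) = e_z, used to
    orient an arbitrary magnetic field vector along the z-axis."""
    b = b / np.linalg.norm(b)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(b, z)                    # rotation axis (unnormalized)
    c, s = b @ z, np.linalg.norm(v)       # cos/sin of the rotation angle
    if s < 1e-12:                         # b already (anti-)parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = v / s                             # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])    # skew-symmetric cross-product matrix
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)  # Rodrigues' formula
```

As the abstract notes, the same rotation is applied to the angular domain so that the octants of the unit-sphere mesh stay aligned with the rotated Cartesian mesh.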
Computer Program for Steady Transonic Flow over Thin Airfoils by Finite Elements
1975-10-01
Computer program for steady transonic flow over thin airfoils by finite elements. Prepared by the Computational Mechanics Section of Lockheed Missiles & Space Company, Inc., Huntsville Research & Engineering Center, Huntsville, Alabama.
Computational Modeling For The Transitional Flow Over A Multi-Element Airfoil
NASA Technical Reports Server (NTRS)
Liou, William W.; Liu, Feng-Jun; Rumsey, Chris L. (Technical Monitor)
2000-01-01
The transitional flow over a multi-element airfoil in a landing configuration is computed using a two-equation transition model. The transition model is predictive in the sense that the transition onset is a result of the calculation, and no prior knowledge of the transition location is required. The computations were performed using the INS2D Navier-Stokes code. Overset grids are used for the three-element airfoil. The airfoil operating conditions are varied over a range of angles of attack and for two different Reynolds numbers of 5 million and 9 million. The computed results are compared with experimental data for the surface pressure, skin friction, transition onset location, and velocity magnitude. In general, the comparison shows good agreement with the experimental data.
NASA Technical Reports Server (NTRS)
Southall, J. W.
1979-01-01
The engineering-specified requirements for integrated information processing by means of the Integrated Programs for Aerospace-Vehicle Design (IPAD) system are presented. A data model is described and is based on the design process of a typical aerospace vehicle. General data management requirements are specified for data storage, retrieval, generation, communication, and maintenance. Information management requirements are specified for a two-component data model. In the general portion, data sets are managed as entities, and in the specific portion, data elements and the relationships between elements are managed by the system, allowing user access to individual elements for the purpose of query. Computer program management requirements are specified for support of a computer program library, control of computer programs, and installation of computer programs into IPAD.
Determination of apparent coupling factors for adhesive bonded acrylic plates using SEAL approach
NASA Astrophysics Data System (ADS)
Pankaj, Achuthan. C.; Shivaprasad, M. V.; Murigendrappa, S. M.
2018-04-01
Apparent coupling loss factors (CLF) and velocity responses have been computed for two lap-joined, adhesive-bonded plates using a finite element and experimental statistical energy analysis (SEA)-like approach. A finite element model of the plates has been created using the ANSYS software. The statistical energy parameters have been computed using the velocity responses obtained from a harmonic forced-excitation analysis. Experiments have been carried out for two different cases of adhesive-bonded joints, and the results have been compared with the apparent coupling factors and velocity responses obtained from the finite element analysis. The results obtained from the studies signify the importance of modeling adhesive-bonded joints in the computation of the apparent coupling factors and their further use in computing energies and velocity responses using an SEA-like approach.
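For two coupled subsystems, an SEA-like identification of apparent CLFs can be carried out with the power-injection method: excite each subsystem in turn, measure the subsystem energies, and invert the steady-state power balance. The sketch below is a standard generic identification, assuming the paper's exact post-processing may differ.

```python
import numpy as np

def apparent_clf(E, P, w):
    """Apparent coupling loss factors for a two-subsystem model.
    E[i, j]: time-averaged energy of subsystem i when subsystem j is
    excited with input power P[j] at angular frequency w. The steady
    SEA balance reads w * L @ E = diag(P), with the loss matrix
    L = [[eta1 + eta12, -eta21], [-eta12, eta2 + eta21]]."""
    L = (np.diag(P) @ np.linalg.inv(E)) / w   # identified loss matrix
    eta12, eta21 = -L[1, 0], -L[0, 1]         # apparent coupling loss factors
    return eta12, eta21
```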