Sample records for element computer analyses

  1. A computer graphics program for general finite element analyses

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Sawyer, L. M.

    1978-01-01

    Documentation for a computer graphics program for displays from general finite element analyses is presented. A general description of display options and detailed user instructions are given. Several plots made in structural, thermal and fluid finite element analyses are included to illustrate program options. Sample data files are given to illustrate use of the program.

  2. Computational Aeroelastic Analyses of a Low-Boom Supersonic Configuration

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Sanetrik, Mark D.; Chwalowski, Pawel; Connolly, Joseph

    2015-01-01

    An overview of NASA's Commercial Supersonic Technology (CST) Aeroservoelasticity (ASE) element is provided with a focus on recent computational aeroelastic analyses of a low-boom supersonic configuration developed by Lockheed-Martin and referred to as the N+2 configuration. The overview includes details of the computational models developed to date including a linear finite element model (FEM), linear unsteady aerodynamic models, unstructured CFD grids, and CFD-based aeroelastic analyses. In addition, a summary of the work involving the development of aeroelastic reduced-order models (ROMs) and the development of an aero-propulso-servo-elastic (APSE) model is provided.

  3. Effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of space shuttle orbiter

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Olona, Timothy

    1987-01-01

    The effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of the space shuttle orbiter was investigated. Several structural performance and resizing (SPAR) thermal models and NASA structural analysis (NASTRAN) structural models were set up for the orbiter wing midspan bay 3. The thermal model was found to be the one that determines the limit of finite-element fineness because of the computational core space required for the radiation view factor calculations. The thermal stresses were found to be extremely sensitive to slight variations in the structural temperature distributions. The minimum degree of element fineness required for the thermal model to yield reasonably accurate solutions was established. The radiation view factor computation time was found to be insignificant compared with the total computer time required for the SPAR transient heat transfer analysis.

  4. Adaptation of a program for nonlinear finite element analysis to the CDC STAR 100 computer

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Ogilvie, P. L.

    1978-01-01

    The conversion of a nonlinear finite element program to the CDC STAR 100 pipeline computer is discussed. The program, called DYCAST, was developed for the crash simulation of structures. Initial results with the STAR 100 computer indicated that significant gains in computation time are possible for operations on global arrays. However, for element-level computations that do not lend themselves easily to long-vector processing, the STAR 100 was slower than comparable scalar computers. On this basis it is concluded that, for pipeline computers to improve the economic feasibility of large nonlinear analyses, it is essential that algorithms be devised to improve the efficiency of element-level computations.
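
The trade-off described above, long-vector operations on global arrays versus short element-level loops, can be sketched in modern array terms. The sketch below is illustrative only (NumPy standing in for a vector pipeline; the matrices and sizes are invented, not from DYCAST): the same element energies are computed once in a per-element loop and once as a single batched operation.

```python
import numpy as np

# Element strain energies 0.5 * u_e . (k_e @ u_e), computed two ways.
rng = np.random.default_rng(0)
n_elems, dofs = 1000, 8
k_e = rng.standard_normal((n_elems, dofs, dofs))
k_e = k_e + k_e.transpose(0, 2, 1)           # symmetric element matrices
u_e = rng.standard_normal((n_elems, dofs))   # element displacement vectors

# Scalar-machine style: one short computation per element.
energy_loop = np.empty(n_elems)
for e in range(n_elems):
    energy_loop[e] = 0.5 * u_e[e] @ k_e[e] @ u_e[e]

# "Long vector" style: one batched operation over all elements at once.
energy_batched = 0.5 * np.einsum("ei,eij,ej->e", u_e, k_e, u_e)

agree = bool(np.allclose(energy_loop, energy_batched))
```

On a pipeline (or SIMD) machine the batched form is the one that vectorizes; the abstract's point is that element-level work must be reorganized this way before it pays off.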

  5. A comparison between different finite elements for elastic and aero-elastic analyses.

    PubMed

    Mahran, Mohamed; Elsabbagh, Adel; Negm, Hani

    2017-11-01

    In the present paper, a comparison between five different shell finite elements is presented: the Linear Triangular Element, the Linear Quadrilateral Element, the Linear Quadrilateral Element based on deformation modes, the 8-Node Quadrilateral Element, and the 9-Node Quadrilateral Element. The shape functions and element equations for each element are presented through a detailed mathematical formulation. Additionally, the Jacobian matrix for the second-order derivatives is simplified and used to derive each element's strain-displacement matrix in bending. The elements are compared using carefully selected elastic and aero-elastic benchmark problems with respect to the number of elements needed to reach convergence, the resulting accuracy, and the required computation time. The most suitable element for elastic free-vibration analysis was found to be the Linear Quadrilateral Element with deformation-based shape functions, the most suitable element for stress analysis was the 8-Node Quadrilateral Element, and the most suitable element for aero-elastic analysis was the 9-Node Quadrilateral Element. Although the Linear Triangular Element was the last choice for modal and stress analyses, it produced more accurate results in aero-elastic analyses, albeit with a much longer computation time. Additionally, the 9-Node Quadrilateral Element was found to be the best choice for the analysis of laminated composite plates.

  6. TAP 2: A finite element program for thermal analysis of convectively cooled structures

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.

    1980-01-01

    A finite element computer program (TAP 2) for steady-state and transient thermal analyses of convectively cooled structures is presented. The program has a finite element library of six elements: two conduction/convection elements to model heat transfer in a solid, two convection elements to model heat transfer in a fluid, and two integrated conduction/convection elements to represent combined heat transfer in tubular and plate/fin fluid passages. Nonlinear thermal analysis due to temperature-dependent thermal parameters is performed using the Newton-Raphson iteration method. Transient analyses are performed using an implicit Crank-Nicolson time integration scheme with consistent or lumped capacitance matrices as an option. Program output includes nodal temperatures and element heat fluxes. Pressure drops in fluid passages may be computed as an option. User instructions and sample problems are presented in appendixes.
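
The transient scheme named in the abstract, Crank-Nicolson time integration with a lumped capacitance matrix, can be sketched on a 1D conduction rod. Everything below (geometry, properties, step sizes, boundary values) is invented for illustration and is not taken from TAP 2:

```python
import numpy as np

n = 11                               # nodes along a unit-length rod
k, c, dx, dt = 1.0, 1.0, 0.1, 0.01   # conductivity, capacitance, mesh, step

# Assemble the 1D conduction matrix K from two-node linear elements,
# and a lumped (diagonal) capacitance matrix C.
K = np.zeros((n, n))
for e in range(n - 1):
    K[e:e+2, e:e+2] += (k / dx) * np.array([[1.0, -1.0], [-1.0, 1.0]])
C = np.diag(np.full(n, c * dx))
C[0, 0] = C[-1, -1] = c * dx / 2     # half capacitance at the end nodes

T = np.zeros(n)
T[0] = 100.0                         # hot fixed end; far end insulated

# Crank-Nicolson: (C/dt + K/2) T_new = (C/dt - K/2) T_old
A = C / dt + K / 2
B = C / dt - K / 2
for _ in range(200):
    rhs = B @ T
    A_bc = A.copy()
    A_bc[0, :] = 0.0; A_bc[0, 0] = 1.0; rhs[0] = 100.0   # re-impose T(0)=100
    T = np.linalg.solve(A_bc, rhs)
```

After 200 steps (t = 2 in these units) the rod is close to the uniform steady state of 100.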

  7. Advances and trends in structures and dynamics; Proceedings of the Symposium, Washington, DC, October 22-25, 1984

    NASA Technical Reports Server (NTRS)

    Noor, A. K. (Editor); Hayduk, R. J. (Editor)

    1985-01-01

    Among the topics discussed are developments in structural engineering hardware and software, computation for fracture mechanics, trends in numerical analysis and parallel algorithms, mechanics of materials, advances in finite element methods, composite materials and structures, determinations of random motion and dynamic response, optimization theory, automotive tire modeling methods and contact problems, the damping and control of aircraft structures, and advanced structural applications. Specific topics covered include structural design expert systems, the evaluation of finite element system architectures, systolic arrays for finite element analyses, nonlinear finite element computations, hierarchical boundary elements, adaptive substructuring techniques in elastoplastic finite element analyses, automatic tracking of crack propagation, a theory of rate-dependent plasticity, the torsional stability of nonlinear eccentric structures, a computation method for fluid-structure interaction, the seismic analysis of three-dimensional soil-structure interaction, a stress analysis for a composite sandwich panel, toughness criterion identification for unidirectional composite laminates, the modeling of submerged cable dynamics, and damping synthesis for flexible spacecraft structures.

  8. Integrated Nondestructive Evaluation and Finite Element Analysis Predicts Crack Location and Shape

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Baaklini, George Y.; Trudell, Jeffrey J.

    2002-01-01

    This study describes the finite-element analyses and the NDE modality applied to two flywheel rotors that were spun to burst speed. Computed tomography and dimensional measurements were used to nondestructively evaluate the rotors before and/or after they were spun to the first crack detection. Computed tomography findings of two- and three-dimensional crack formation were used to conduct finite-element analysis (FEA) and fracture mechanics analyses. A procedure to extend these analyses to estimate the life of these components is also outlined. NDE-FEA results for one of the rotors are presented in the figures. The stress results, which represent the radial stresses in the rim, clearly indicate that the maximum stress region lies within the section defined by the computed tomography scan. Furthermore, the NDE data correlate well with the FEA results, and the reported measurements show that the NDE and FEA data are in close agreement.

  9. RNA-Seq Analysis to Measure the Expression of SINE Retroelements.

    PubMed

    Román, Ángel Carlos; Morales-Hernández, Antonio; Fernández-Salguero, Pedro M

    2016-01-01

    The intrinsic features of retroelements, like their repetitive nature and disseminated presence in their host genomes, demand the use of advanced methodologies for their bioinformatic and functional study. The short length of SINE (short interspersed elements) retrotransposons makes such analyses even more complex. Next-generation sequencing (NGS) technologies are currently one of the most widely used tools to characterize the whole repertoire of gene expression in a specific tissue. In this chapter, we will review the molecular and computational methods needed to perform NGS analyses on SINE elements. We will also describe new methods of potential interest for researchers studying repetitive elements. We intend to outline the general ideas behind the computational analyses of NGS data obtained from SINE elements, and to stimulate other scientists to expand our current knowledge on SINE biology using RNA-seq and other NGS tools.

  10. Finite Element Analysis of a NASA National Transonic Facility Wind Tunnel Balance

    NASA Technical Reports Server (NTRS)

    Lindell, Michael C.

    1996-01-01

    This paper presents the results of finite element analyses and correlation studies performed on a NASA National Transonic Facility (NTF) Wind Tunnel balance. In the past NASA has relied primarily on classical hand analyses, coupled with relatively large safety factors, for predicting maximum stresses in wind tunnel balances. Now, with the significant advancements in computer technology and sophistication of general purpose analysis codes, it is more reasonable to pursue finite element analyses of these balances. The correlation studies of the present analyses show very good agreement between the analyses and data measured with strain gages and therefore the studies give higher confidence for using finite element analyses to analyze and optimize balance designs in the future.
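
The correlation study described above amounts to regressing measured values against predictions and checking agreement. A minimal sketch, with invented stress values standing in for the balance's strain-gage data:

```python
import numpy as np

fea  = np.array([112.0, 85.5, 64.2, 40.1, 150.3])   # predicted stress, MPa
gage = np.array([110.2, 88.0, 61.9, 41.5, 147.8])   # "measured" stress, MPa

# Least-squares line of measured vs. predicted, plus R^2 and worst error.
slope, intercept = np.polyfit(fea, gage, 1)
resid = gage - (slope * fea + intercept)
r_squared = 1.0 - resid.var() / gage.var()
max_pct_error = float(np.max(np.abs(fea - gage) / np.abs(gage)) * 100)
```

A slope near 1 with high R² is the kind of "very good agreement" the paper reports between analyses and strain-gage data.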

  11. Finite Element Analysis of a NASA National Transonic Facility Wind Tunnel Balance

    NASA Technical Reports Server (NTRS)

    Lindell, Michael C. (Editor)

    1999-01-01

    This paper presents the results of finite element analyses and correlation studies performed on a NASA National Transonic Facility (NTF) Wind Tunnel balance. In the past NASA has relied primarily on classical hand analyses, coupled with relatively large safety factors, for predicting maximum stresses in wind tunnel balances. Now, with the significant advancements in computer technology and sophistication of general purpose analysis codes, it is more reasonable to pursue finite element analyses of these balances. The correlation studies of the present analyses show very good agreement between the analyses and data measured with strain gages and therefore the studies give higher confidence for using finite element analyses to analyze and optimize balance designs in the future.

  12. Difference-Equation/Flow-Graph Circuit Analysis

    NASA Technical Reports Server (NTRS)

    Mcvey, I. M.

    1988-01-01

    A numerical technique enables rapid, approximate analyses of electronic circuits containing linear and nonlinear elements. It has been practiced in a variety of computer languages on large and small computers; for sufficiently simple circuits, programmable hand calculators have been used. Although some combinations of circuit elements make the numerical solutions diverge, the technique enables quick identification of divergence and correction of circuit models to make the solutions converge.
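
The behavior the brief describes, iterations that diverge for some circuit formulations and converge once the model is corrected, can be sketched on a series source-resistor-diode circuit. All component values and the reformulated update below are invented for illustration, not taken from the brief:

```python
import math

VS, R, IS, VT = 5.0, 1000.0, 1e-12, 0.025   # source, resistor, diode params

def iterate(update, v0, max_iter=200, limit=1e3):
    """Run v_{n+1} = update(v_n); return (converged, diverged, v)."""
    v = v0
    for _ in range(max_iter):
        v_new = update(v)
        if not math.isfinite(v_new) or abs(v_new) > limit:
            return False, True, v_new       # quick identification of divergence
        if abs(v_new - v) < 1e-9:
            return True, False, v_new
        v = v_new
    return False, False, v

# Naive difference equation: v = VS - R*Is*(exp(v/VT) - 1). The stiff
# exponential makes this formulation diverge from almost any guess.
def naive(v):
    return VS - R * IS * (math.exp(min(v / VT, 60.0)) - 1.0)

# Corrected model: solve the diode law for v instead; this update contracts.
def corrected(v):
    return VT * math.log1p((VS - v) / (R * IS))

ok_naive, div_naive, _ = iterate(naive, 0.5)
ok_fixed, div_fixed, vd = iterate(corrected, 0.5)
```

The diode voltage settles near 0.55 V with the corrected formulation, while the naive one trips the divergence check within a few iterations.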

  13. Elastic-plastic finite-element analyses of thermally cycled double-edge wedge specimens

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Hunt, L. E.

    1982-01-01

    Elastic-plastic stress-strain analyses were performed for double-edge wedge specimens subjected to thermal cycling in fluidized beds at 316 and 1088 C. Four cases involving different nickel-base alloys (IN 100, Mar M-200, NASA TAZ-8A, and Rene 80) were analyzed by using the MARC nonlinear, finite element computer program. Elastic solutions from MARC showed good agreement with previously reported solutions obtained by using the NASTRAN and ISO3DQ computer programs. Equivalent total strain ranges at the critical locations calculated by elastic analyses agreed within 3 percent with those calculated from elastic-plastic analyses. The elastic analyses always resulted in compressive mean stresses at the critical locations. However, elastic-plastic analyses showed tensile mean stresses for two of the four alloys and an increase in the compressive mean stress for the highest plastic strain case.

  14. Elastic-plastic finite-element analyses of thermally cycled single-edge wedge specimens

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1982-01-01

    Elastic-plastic stress-strain analyses were performed for single-edge wedge alloys subjected to thermal cycling in fluidized beds. Three cases (NASA TAZ-8A alloy under one cycling condition and 316 stainless steel alloy under two cycling conditions) were analyzed by using the MARC nonlinear, finite-element computer program. Elastic solutions from MARC showed good agreement with previously reported solutions that used the NASTRAN and ISO3DQ computer programs. The NASA TAZ-8A case exhibited no plastic strains, and the elastic and elastic-plastic analyses gave identical results. Elastic-plastic analyses of the 316 stainless steel alloy showed plastic strain reversal with a shift of the mean stresses in the compressive direction. The maximum equivalent total strain ranges for these cases were 13 to 22 percent greater than that calculated from elastic analyses.

  15. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
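
The core STEP idea, fitting a polynomial internal-force model to sampled responses so that numerical integration and assembly can be bypassed, can be sketched on a single degree of freedom. The stiffness coefficients and sampling scheme below are invented:

```python
import numpy as np

# "Full model": a hidden Duffing-type internal force k1*u + k3*u^3.
k1_true, k3_true = 2.0, 0.5
def full_internal_force(u):
    return k1_true * u + k3_true * u**3

# Prescribe displacements and record the forces the full model returns.
u_samples = np.linspace(-1.0, 1.0, 9)
f_samples = full_internal_force(u_samples)

# Least-squares fit of the polynomial model f(u) = k1*u + k3*u^3.
basis = np.column_stack([u_samples, u_samples**3])
coeffs, *_ = np.linalg.lstsq(basis, f_samples, rcond=None)
k1_fit, k3_fit = coeffs

# Equivalent model: internal force without assembly or integration.
def f_equiv(u):
    return k1_fit * u + k3_fit * u**3
```

In E-STEP this fit is carried out element by element, which is what makes the identification parallel and the result easy to parameterize by design variables.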

  16. A New Material Mapping Procedure for Quantitative Computed Tomography-Based, Continuum Finite Element Analyses of the Vertebra

    PubMed Central

    Unnikrishnan, Ginu U.; Morgan, Elise F.

    2011-01-01

    Inaccuracies in the estimation of material properties and errors in the assignment of these properties into finite element models limit the reliability, accuracy, and precision of quantitative computed tomography (QCT)-based finite element analyses of the vertebra. In this work, a new mesh-independent, material mapping procedure was developed to improve the quality of predictions of vertebral mechanical behavior from QCT-based finite element models. In this procedure, an intermediate step, called the material block model, was introduced to determine the distribution of material properties based on bone mineral density, and these properties were then mapped onto the finite element mesh. A sensitivity study was first conducted on a calibration phantom to understand the influence of the size of the material blocks on the computed bone mineral density. It was observed that varying the material block size produced only marginal changes in the predictions of mineral density. Finite element (FE) analyses were then conducted on a square column-shaped region of the vertebra and also on the entire vertebra in order to study the effect of material block size on the FE-derived outcomes. The predicted values of stiffness for the column and the vertebra decreased with decreasing block size. When these results were compared to those of a mesh convergence analysis, it was found that the influence of element size on vertebral stiffness was less than that of the material block size. This mapping procedure allows the material properties in a finite element study to be determined based on the block size required for an accurate representation of the material field, while the size of the finite elements can be selected independently and based on the required numerical accuracy of the finite element solution. 
The mesh-independent, material mapping procedure developed in this study could be particularly helpful in improving the accuracy of finite element analyses of vertebroplasty and spine metastases, as these analyses typically require mesh refinement at the interfaces between distinct materials. Moreover, the mapping procedure is not specific to the vertebra and could thus be applied to many other anatomic sites. PMID:21823740
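
The intermediate material-block step can be sketched as block-averaging a voxel density field before mapping density to modulus. The grid sizes and the power-law constants below are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
voxels = rng.uniform(0.1, 1.2, size=(32, 32, 32))   # density per CT voxel

def block_density(vox, block):
    """Average density over non-overlapping cubic blocks of edge `block`."""
    n = vox.shape[0] // block
    v = vox[:n * block, :n * block, :n * block]
    return v.reshape(n, block, n, block, n, block).mean(axis=(1, 3, 5))

rho_blocks = block_density(voxels, block=8)   # 4 x 4 x 4 material blocks

# Density-to-modulus power law E = a * rho^b (hypothetical constants).
a, b = 8.0, 1.7
E_blocks = a * rho_blocks**b
```

Each finite element would then look up the modulus of the block containing it, so the element size can be chosen independently of `block`, which is the decoupling the paper argues for.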

  17. Thermal finite-element analysis of space shuttle main engine turbine blade

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Tong, Michael T.; Kaufman, Albert

    1987-01-01

    Finite-element, transient heat transfer analyses were performed for the first-stage blades of the space shuttle main engine (SSME) high-pressure fuel turbopump. The analyses were based on test engine data provided by Rocketdyne. Heat transfer coefficients were predicted by performing a boundary-layer analysis at steady-state conditions with the STAN5 boundary-layer code. Two different peak-temperature overshoots were evaluated for the startup transient. Cutoff transient conditions were also analyzed. A reduced gas temperature profile based on actual thermocouple data was also considered. Transient heat transfer analyses were conducted with the MARC finite-element computer code.

  18. The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.

    PubMed

    Ene, Florentina; Delassus, Patrick; Morris, Liam

    2014-08-01

    The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods and these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysms models. Rigid wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for acquiring the fluid flow phenomenon.

  19. The effectiveness of element downsizing on a three-dimensional finite element model of bone trabeculae in implant biomechanics.

    PubMed

    Sato, Y; Wadamoto, M; Tsuga, K; Teixeira, E R

    1999-04-01

    Greater validity of finite element analysis in implant biomechanics requires element downsizing; however, excessive downsizing demands more computer memory and calculation time. To investigate the effectiveness of element downsizing in the construction of a three-dimensional finite element bone trabeculae model, models with different element sizes (600, 300, 150, and 75 µm) were constructed, and the stress induced by a vertical 10 N load was analysed. The difference in von Mises stress values between the models with 600 and 300 µm element sizes was larger than that between 300 and 150 µm. On the other hand, no clear difference in stress values was detected among the models with 300, 150, and 75 µm element sizes. Downsizing of elements from 600 to 300 µm is therefore suggested to be effective in the construction of a three-dimensional finite element bone trabeculae model, with possible savings of computer memory and calculation time in the laboratory.
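
The element-size judgment made in this study can be expressed as a simple convergence rule: accept the coarsest mesh whose result is within a tolerance of the next refinement. The stress values below are invented, chosen only so the rule reproduces the study's 300 µm conclusion:

```python
# Element sizes in micrometres and invented peak von Mises stresses (MPa).
sizes_um = [600, 300, 150, 75]
peak_stress = {600: 14.1, 300: 11.8, 150: 11.5, 75: 11.4}

def coarsest_converged(sizes, stress, tol=0.05):
    """Largest element size whose result is within a relative tolerance
    of the result at the next finer size."""
    for size, finer in zip(sizes, sizes[1:]):
        if abs(stress[size] - stress[finer]) / abs(stress[finer]) <= tol:
            return size
    return sizes[-1]

chosen = coarsest_converged(sizes_um, peak_stress)   # 300 for these numbers
```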

  20. Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine

    NASA Astrophysics Data System (ADS)

    Sharma, Gulshan B.; Robertson, Douglas D.

    2013-07-01

    Shoulder arthroplasty success has been attributed to many factors, including bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design should withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study the stresses and strains produced in implants and bone. However, these static analyses only capture a moment in time, not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three-dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally by simulating bone remodeling using an intact human scapula: the scapular bone material properties were initially reset to be uniform, sequential loading was numerically simulated, and the bone remodeling simulation results were compared to the actual scapula's material properties. A three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint loads and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties were modified. The simulation achieved convergence. At the end of the simulation, the predicted and actual specimen bone apparent densities were plotted and compared. The locations of high and low predicted bone density were comparable to the actual specimen.
    High predicted bone density was greater than in the actual specimen, and low predicted bone density was lower than in the actual specimen. These differences were probably due to the applied muscle and joint reaction loads, the boundary conditions, and the values of the constants used; work is underway to study this. Nonetheless, the results demonstrate the validity and potential of three-dimensional bone remodeling simulation. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses and adaptive bone remodeling simulations to become effective tools for regenerative medicine research.
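
The strain-energy-density remodeling loop described above can be sketched as a per-element density update driven toward a reference stimulus. The stimulus field, reference value, rate constant, and density bounds are all invented; a real run would recompute strain energy by FE analysis at every iteration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_elem = 50
rho = np.full(n_elem, 0.8)          # homogeneous starting density
rho_min, rho_max = 0.05, 1.8        # density bounds (invented)
k_ref, B = 0.004, 50.0              # reference stimulus and rate constant

# Frozen per-element strain energy density; a real simulation would
# recompute this by FE analysis after every density update.
sed = rng.uniform(0.001, 0.008, n_elem)

for _ in range(300):
    stimulus = sed / rho            # strain energy density per unit density
    rho = np.clip(rho + B * (stimulus - k_ref), rho_min, rho_max)
```

At convergence each unclipped element satisfies sed/rho = k_ref, so the initially uniform field differentiates into high- and low-density regions, the qualitative behavior the study reports.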

  1. 2D and 3D Multiscale/Multicomponent Modeling of Impact Response of Heterogeneous Energetic Composites

    DTIC Science & Technology

    2016-06-01

    A new scientific framework and technical capability, the Cohesive Finite Element Method (CFEM), is developed for the computational analyses of the impact response of heterogeneous energetic composites in two and three dimensions.

  2. Higher and lowest order mixed finite element approximation of subsurface flow problems with solutions of low regularity

    NASA Astrophysics Data System (ADS)

    Bause, Markus

    2008-02-01

    In this work we study mixed finite element approximations of Richards' equation for simulating variably saturated subsurface flow and simultaneous reactive solute transport. Whereas higher order schemes have proved their ability to approximate reliably reactive solute transport (cf., e.g. [Bause M, Knabner P. Numerical simulation of contaminant biodegradation by higher order methods and adaptive time stepping. Comput Visual Sci 7;2004:61-78]), the Raviart-Thomas mixed finite element method (RT0) with a first order accurate flux approximation is popular for computing the underlying water flow field (cf. [Bause M, Knabner P. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods. Adv Water Resour 27;2004:565-581, Farthing MW, Kees CE, Miller CT. Mixed finite element methods and higher order temporal approximations for variably saturated groundwater flow. Adv Water Resour 26;2003:373-394, Starke G. Least-squares mixed finite element solution of variably saturated subsurface flow problems. SIAM J Sci Comput 21;2000:1869-1885, Younes A, Mosé R, Ackerer P, Chavent G. A new formulation of the mixed finite element method for solving elliptic and parabolic PDE with triangular elements. J Comp Phys 149;1999:148-167, Woodward CS, Dawson CN. Analysis of expanded mixed finite element methods for a nonlinear parabolic equation modeling flow into variably saturated porous media. SIAM J Numer Anal 37;2000:701-724]). This combination might be non-optimal: higher order techniques could increase the accuracy of the flow field calculation and thereby improve the prediction of the solute transport. Here, we analyse the application of the Brezzi-Douglas-Marini element (BDM1) with a second order accurate flux approximation to elliptic, parabolic and degenerate problems whose solutions lack the regularity that is assumed in optimal order error analyses. For the flow field calculation, a superiority of the BDM1 approach over the RT0 approach is observed, which is, however, less significant for the accompanying solute transport.
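
The first- versus second-order flux accuracy attributed to RT0 and BDM1 is typically verified by estimating the observed convergence order from errors on successively refined meshes. The error values below are invented to illustrate the computation:

```python
import numpy as np

h = np.array([1/8, 1/16, 1/32, 1/64])                    # mesh sizes
err_rt0  = np.array([2.1e-2, 1.05e-2, 5.2e-3, 2.6e-3])   # ~O(h) flux errors
err_bdm1 = np.array([3.0e-3, 7.6e-4, 1.9e-4, 4.8e-5])    # ~O(h^2) flux errors

def observed_order(h, err):
    """Least-squares slope of log(err) against log(h)."""
    return float(np.polyfit(np.log(h), np.log(err), 1)[0])

p_rt0 = observed_order(h, err_rt0)     # close to 1
p_bdm1 = observed_order(h, err_bdm1)   # close to 2
```

For the low-regularity solutions the paper studies, the observed orders would fall below these nominal values, which is exactly what its error analysis quantifies.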

  3. Know Your Enemy: Successful Bioinformatic Approaches to Predict Functional RNA Structures in Viral RNAs.

    PubMed

    Lim, Chun Shen; Brown, Chris M

    2017-01-01

    Structured RNA elements may control virus replication, transcription and translation, and their distinct features are being exploited by novel antiviral strategies. Viral RNA elements continue to be discovered using combinations of experimental and computational analyses. However, the wealth of sequence data, notably from deep viral RNA sequencing, viromes, and metagenomes, necessitates computational approaches being used as an essential discovery tool. In this review, we describe practical approaches being used to discover functional RNA elements in viral genomes. In addition to success stories in new and emerging viruses, these approaches have revealed some surprising new features of well-studied viruses e.g., human immunodeficiency virus, hepatitis C virus, influenza, and dengue viruses. Some notable discoveries were facilitated by new comparative analyses of diverse viral genome alignments. Importantly, comparative approaches for finding RNA elements embedded in coding and non-coding regions differ. With the exponential growth of computer power we have progressed from stem-loop prediction on single sequences to cutting edge 3D prediction, and from command line to user friendly web interfaces. Despite these advances, many powerful, user friendly prediction tools and resources are underutilized by the virology community.

  4. Know Your Enemy: Successful Bioinformatic Approaches to Predict Functional RNA Structures in Viral RNAs

    PubMed Central

    Lim, Chun Shen; Brown, Chris M.

    2018-01-01

    Structured RNA elements may control virus replication, transcription and translation, and their distinct features are being exploited by novel antiviral strategies. Viral RNA elements continue to be discovered using combinations of experimental and computational analyses. However, the wealth of sequence data, notably from deep viral RNA sequencing, viromes, and metagenomes, necessitates computational approaches being used as an essential discovery tool. In this review, we describe practical approaches being used to discover functional RNA elements in viral genomes. In addition to success stories in new and emerging viruses, these approaches have revealed some surprising new features of well-studied viruses e.g., human immunodeficiency virus, hepatitis C virus, influenza, and dengue viruses. Some notable discoveries were facilitated by new comparative analyses of diverse viral genome alignments. Importantly, comparative approaches for finding RNA elements embedded in coding and non-coding regions differ. With the exponential growth of computer power we have progressed from stem-loop prediction on single sequences to cutting edge 3D prediction, and from command line to user friendly web interfaces. Despite these advances, many powerful, user friendly prediction tools and resources are underutilized by the virology community. PMID:29354101

  5. Development and verification of local/global analysis techniques for laminated composites

    NASA Technical Reports Server (NTRS)

    Griffin, O. Hayden, Jr.

    1989-01-01

Analysis and design methods for laminated composite materials have been the subject of considerable research over the past 20 years, and are currently well developed. In performing the detailed three-dimensional analyses which are often required in proximity to discontinuities, however, analysts often encounter difficulties due to large models. Even with the current availability of powerful computers, models which are too large to run, either from a resource or time standpoint, are often required. There are several approaches which can permit such analyses, including substructuring, use of superelements or transition elements, and the global/local approach. This effort is based on the so-called zoom technique for global/local analysis, in which a global analysis is run and its results are applied to a smaller region as boundary conditions, in as many iterations as required to attain an analysis of the desired region. Before beginning the global/local analyses, it was necessary to evaluate the accuracy of the three-dimensional elements currently implemented in the Computational Structural Mechanics (CSM) Testbed. It was also desired to install, using the Experimental Element Capability, a number of displacement formulation elements which have well known behavior when used for analysis of laminated composites.

  6. LaRC design analysis report for National Transonic Facility for 304 stainless steel tunnel shell. Volume 1S: Finite difference analysis of cone/cylinder junction

    NASA Technical Reports Server (NTRS)

    Ramsey, J. W., Jr.; Taylor, J. T.; Wilson, J. F.; Gray, C. E., Jr.; Leatherman, A. D.; Rooker, J. R.; Allred, J. W.

    1976-01-01

The results of extensive computer (finite element, finite difference and numerical integration), thermal, fatigue, and special analyses of critical portions of a large pressurized, cryogenic wind tunnel (National Transonic Facility) are presented. The computer models, loading, and boundary conditions are described. Graphic capability was used to display model geometry, section properties, and stress results. A stress criterion is presented for evaluating the results of the analyses. Thermal analyses were performed for major critical and typical areas. Fatigue analyses of the entire tunnel circuit are presented.

  7. Finite Elements, Design Optimization, and Nondestructive Evaluation: A Review in Magnetics, and Future Directions in GPU-based, Element-by-Element Coupled Optimization and NDE

    DTIC Science & Technology

    2013-07-18

Nationale Supérieure d’Ingénieurs Electriciens de Grenoble (ENSIEG) group led by J.C. Sabonnadiere, J.L. Coulomb and G. Meunier would bring mathematical...1985. [11] J.L. Coulomb, “Analyse tridimensionnelle des champs électriques et magnétiques par la méthode des éléments finis,” Thèse de Doctorat...computations by the virtual work principle [10]. However, Coulomb [11-13] of the ENSIEG group identified a one-step solution for the computation of

  8. Discrete-Roughness-Element-Enhanced Swept-Wing Natural Laminar Flow at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Malik, Mujeeb; Liao, Wei; Li, Fei; Choudhari, Meelan

    2015-01-01

Nonlinear parabolized stability equations and secondary-instability analyses are used to provide a computational assessment of the potential use of discrete-roughness-element technology for extending swept-wing natural laminar flow at chord Reynolds numbers relevant to transport aircraft. Computations performed for the boundary layer on a natural-laminar-flow airfoil with a leading-edge sweep angle of 34.6 deg, freestream Mach number of 0.75, and chord Reynolds numbers of 17 × 10(exp 6), 24 × 10(exp 6), and 30 × 10(exp 6) suggest that discrete roughness elements could delay laminar-turbulent transition by about 20% when transition is caused by stationary crossflow disturbances. Computations show that the introduction of small-wavelength stationary crossflow disturbances (i.e., discrete roughness elements) also suppresses the growth of the most amplified traveling crossflow disturbances.

  9. An Overview of Preliminary Computational and Experimental Results for the Semi-Span Super-Sonic Transport (S4T) Wind-Tunnel Model

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Perry, Boyd, III; Florance, James R.; Sanetrik, Mark D.; Wieseman, Carol D.; Stevens, William L.; Funk, Christie J.; Hur, Jiyoung; Christhilf, David M.; Coulson, David A.

    2011-01-01

    A summary of computational and experimental aeroelastic and aeroservoelastic (ASE) results for the Semi-Span Super-Sonic Transport (S4T) wind-tunnel model is presented. A broad range of analyses and multiple ASE wind-tunnel tests of the S4T have been performed in support of the ASE element in the Supersonics Program, part of NASA's Fundamental Aeronautics Program. The computational results to be presented include linear aeroelastic and ASE analyses, nonlinear aeroelastic analyses using an aeroelastic CFD code, and rapid aeroelastic analyses using CFD-based reduced-order models (ROMs). Experimental results from two closed-loop wind-tunnel tests performed at NASA Langley's Transonic Dynamics Tunnel (TDT) will be presented as well.

  10. Computer program for determining mass properties of a rigid structure

    NASA Technical Reports Server (NTRS)

    Hull, R. A.; Gilbert, J. L.; Klich, P. J.

    1978-01-01

    A computer program was developed for the rapid computation of the mass properties of complex structural systems. The program uses rigid body analyses and permits differences in structural material throughout the total system. It is based on the premise that complex systems can be adequately described by a combination of basic elemental shapes. Simple geometric data describing size and location of each element and the respective material density or weight of each element were the only required input data. From this minimum input, the program yields system weight, center of gravity, moments of inertia and products of inertia with respect to mutually perpendicular axes through the system center of gravity. The program also yields mass properties of the individual shapes relative to component axes.
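The elemental-shape aggregation described above can be sketched as follows, assuming a hypothetical input format of (mass, centroid, centroidal inertias) triples; the real program derives these quantities from geometric data and material densities, and also reports products of inertia, which this sketch omits:

```python
def mass_properties(elements):
    """System mass, center of gravity, and moments of inertia about CG axes.

    elements: list of (mass, (x, y, z), (Ixx, Iyy, Izz)) tuples, with the
    inertias given about each element's own centroidal axes. Each element's
    contribution is shifted to the system CG via the parallel-axis theorem.
    """
    M = sum(m for m, _, _ in elements)
    cg = tuple(sum(m * c[a] for m, c, _ in elements) / M for a in range(3))
    I = [0.0, 0.0, 0.0]
    for m, c, (ixx, iyy, izz) in elements:
        dx, dy, dz = (c[a] - cg[a] for a in range(3))
        I[0] += ixx + m * (dy * dy + dz * dz)  # parallel-axis shift
        I[1] += iyy + m * (dx * dx + dz * dz)
        I[2] += izz + m * (dx * dx + dy * dy)
    return M, cg, tuple(I)
```

For two unit point masses at x = 0 and x = 2 this returns a CG at x = 1 and transverse inertias of 2, as expected.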

  11. Computing Surface Coordinates Of Face-Milled Spiral-Bevel Gear Teeth

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.; Litvin, Faydor L.

    1995-01-01

    Surface coordinates of face-milled spiral-bevel gear teeth computed by method involving numerical solution of governing equations. Needed to generate mathematical models of tooth surfaces for use in finite-element analyses of stresses, strains, and vibrations in meshing spiral-bevel gears.

  12. Computer program analyzes Buckling Of Shells Of Revolution with various wall construction, BOSOR

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Bushnell, D.; Sobel, L. H.

    1968-01-01

    Computer program performs stability analyses for a wide class of shells without unduly restrictive approximations. The program uses numerical integration, finite difference of finite element techniques to solve with reasonable accuracy almost any buckling problem for shells exhibiting orthotropic behavior.

  13. ICAN/PART: Particulate composite analyzer, user's manual and verification studies

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Murthy, Pappu L. N.; Mital, Subodh K.

    1996-01-01

A methodology for predicting the equivalent properties and constituent microstresses of particulate matrix composites, based on the micromechanics approach, is developed. These equations are integrated into a computer code, developed to predict the equivalent properties and microstresses of fiber-reinforced polymer matrix composites, to form a new computer code, ICAN/PART. Details of the flowchart, input, and output for ICAN/PART are described, along with examples of the input and output. Only the differences between ICAN/PART and the original ICAN code are described in detail, and the user is assumed to be familiar with the structure and usage of the original ICAN code. Detailed verification studies, utilizing finite element and boundary element analyses, are conducted in order to verify that the micromechanics methodology accurately models the mechanics of particulate matrix composites. The equivalent properties computed by ICAN/PART fall within bounds established by the finite element and boundary element results. Furthermore, constituent microstresses computed by ICAN/PART agree in an average sense with results computed using the finite element method. The verification studies indicate that the micromechanics programmed into ICAN/PART do indeed accurately model the mechanics of particulate matrix composites.
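The idea that equivalent properties should fall within bounds can be illustrated with the simplest micromechanics estimates, the Voigt (parallel) and Reuss (series) bounds; this is a generic sketch, not the micromechanics equations actually programmed into ICAN/PART:

```python
def voigt_reuss_bounds(Em, Ep, vp):
    """Upper (Voigt) and lower (Reuss) bounds on the equivalent Young's
    modulus of a particulate composite.

    Em: matrix modulus, Ep: particle modulus, vp: particle volume fraction.
    Voigt assumes equal strain (rule of mixtures); Reuss assumes equal
    stress (inverse rule of mixtures).
    """
    upper = (1.0 - vp) * Em + vp * Ep
    lower = 1.0 / ((1.0 - vp) / Em + vp / Ep)
    return lower, upper
```

Any physically admissible equivalent modulus, including a detailed FE or BE estimate, must lie between these two values.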

  14. Nonlinear heat transfer and structural analyses of SSME turbine blades

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, A.; Kaufman, A.

    1987-01-01

Three-dimensional nonlinear finite-element heat transfer and structural analyses were performed for the first stage high-pressure fuel turbopump blade of the space shuttle main engine (SSME). Directionally solidified (DS) MAR-M 246 material properties were considered for the analyses. Analytical conditions were based on a typical test stand engine cycle. Blade temperature and stress-strain histories were calculated using the MARC finite-element computer code. The study was undertaken to assess the structural response of an SSME turbine blade and to gain greater understanding of blade damage mechanisms, convective cooling effects, and thermal-mechanical effects.

  15. Materials constitutive models for nonlinear analysis of thermally cycled structures

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Hunt, L. E.

    1982-01-01

    Effects of inelastic materials models on computed stress-strain solutions for thermally loaded structures were studied by performing nonlinear (elastoplastic creep) and elastic structural analyses on a prismatic, double edge wedge specimen of IN 100 alloy that was subjected to thermal cycling in fluidized beds. Four incremental plasticity creep models (isotropic, kinematic, combined isotropic kinematic, and combined plus transient creep) were exercised for the problem by using the MARC nonlinear, finite element computer program. Maximum total strain ranges computed from the elastic and nonlinear analyses agreed within 5 percent. Mean cyclic stresses, inelastic strain ranges, and inelastic work were significantly affected by the choice of inelastic constitutive model. The computing time per cycle for the nonlinear analyses was more than five times that required for the elastic analysis.

  16. Effect of Shear Deformation and Continuity on Delamination Modelling with Plate Elements

    NASA Technical Reports Server (NTRS)

    Glaessgen, E. H.; Riddell, W. T.; Raju, I. S.

    1998-01-01

The effects of several critical assumptions and parameters on the computation of strain energy release rates for delamination and debond configurations modeled with plate elements have been quantified. The method of calculation is based on the virtual crack closure technique (VCCT) and on models that represent the upper and lower surfaces of the delamination or debond with two-dimensional (2D) plate elements rather than three-dimensional (3D) solid elements. The major advantages of the plate element modeling technique are a smaller model size and simpler geometric modeling. Specific issues that are discussed include: constraint of translational degrees of freedom, rotational degrees of freedom, or both in the neighborhood of the crack tip; element order and assumed shear deformation; and continuity of material properties and section stiffness in the vicinity of the debond front. Where appropriate, the plate element analyses are compared with corresponding two-dimensional plane strain analyses.
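The VCCT calculation underlying this kind of analysis reduces, for the mode I component, to the crack-tip nodal force times the relative opening displacement behind the tip, divided by twice the virtually closed area. A one-line sketch with illustrative argument names:

```python
def vcct_mode_I(F_z, delta_w, da, width):
    """Mode I strain energy release rate via the virtual crack closure
    technique: G_I = F_z * delta_w / (2 * da * width), where F_z is the
    out-of-plane nodal force at the crack tip, delta_w the relative
    opening displacement one element behind the tip, da the element
    length at the front, and width the element width along the front.
    """
    return F_z * delta_w / (2.0 * da * width)
```

Mode II and III components take the same form with the in-plane shear force and sliding displacement pairs.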

  17. Monitoring Collaborative Activities in Computer Supported Collaborative Learning

    ERIC Educational Resources Information Center

    Persico, Donatella; Pozzi, Francesca; Sarti, Luigi

    2010-01-01

    Monitoring the learning process in computer supported collaborative learning (CSCL) environments is a key element for supporting the efficacy of tutor actions. This article proposes an approach for analysing learning processes in a CSCL environment to support tutors in their monitoring tasks. The approach entails tracking the interactions within…

  18. Some Observations on the Current Status of Performing Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Knight, Norman F., Jr; Shivakumar, Kunigal N.

    2015-01-01

Aerospace structures are complex high-performance structures. Advances in reliable and efficient computing and modeling tools are enabling analysts to consider complex configurations, build complex finite element models, and perform analysis rapidly. Many of the early-career engineers of today are very proficient in the usage of modern computers, computing engines, complex software systems, and visualization tools. These young engineers are becoming increasingly efficient in building complex 3D models of complicated aerospace components. However, current trends demonstrate blind acceptance of finite element analysis results. This paper is aimed at raising awareness of this situation. Examples of common encounters are presented. To counter the current trends, some guidelines and suggestions for analysts, senior engineers, and educators are offered.

  19. The effectiveness of a new algorithm on a three-dimensional finite element model construction of bone trabeculae in implant biomechanics.

    PubMed

    Sato, Y; Teixeira, E R; Tsuga, K; Shindoi, N

    1999-08-01

Improving the validity of finite element analysis (FEA) in implant biomechanics requires smaller elements. However, excessive downsizing demands more computer memory and calculation time. To evaluate the effectiveness of a new algorithm established for more valid FEA model construction without downsizing, three-dimensional FEA bone trabeculae models with different element sizes (300, 150 and 75 micron) were constructed. Four algorithms of stepwise (1 to 4 ranks) assignment of Young's modulus according to the bone volume in each cubic element were used, and the stress distribution against vertical loading was then analysed. The model with 300 micron element size and 4 ranks of Young's moduli according to bone volume in each element presented a stress distribution similar to that of the model with the 75 micron element size. These results show that the new algorithm was effective, and use of the 300 micron element for bone trabeculae representation was proposed, without critical changes in stress values and with possible savings in computer memory and calculation time in the laboratory.
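The stepwise-assignment idea can be sketched as below; the linear rank-to-modulus mapping and the cortical-bone modulus value are hypothetical illustrations, not values taken from the paper:

```python
def assign_modulus(bone_volume_fraction, ranks=4, E_bone=13700.0):
    """Map an element's bone volume fraction onto one of `ranks` stepwise
    Young's moduli. The linear scaling and E_bone (a nominal cortical
    modulus in MPa) are illustrative assumptions, not the paper's values.
    """
    if not 0.0 <= bone_volume_fraction <= 1.0:
        raise ValueError("fraction must lie in [0, 1]")
    rank = min(int(bone_volume_fraction * ranks), ranks - 1)  # 0 .. ranks-1
    return (rank + 1) / ranks * E_bone
```

With 4 ranks, a fully solid element gets the full modulus and a 30%-filled element gets half of it under this particular mapping.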

  20. 3-d finite element model development for biomechanics: a software demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollerbach, K.; Hollister, A.M.; Ashby, E.

    1997-03-01

Finite element analysis is becoming an increasingly important part of biomechanics and orthopedic research, as computational resources become more powerful, and data handling algorithms become more sophisticated. Until recently, tools with sufficient power did not exist or were not accessible to adequately model complicated, three-dimensional, nonlinear biomechanical systems. In the past, finite element analyses in biomechanics have often been limited to two-dimensional approaches, linear analyses, or simulations of single tissue types. Today, we have the resources to model fully three-dimensional, nonlinear, multi-tissue, and even multi-joint systems. The authors will present the process of developing these kinds of finite element models, using human hand and knee examples, and will demonstrate their software tools.

  1. Superelement Analysis of Tile-Reinforced Composite Armor

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.

    1998-01-01

    Super-elements can greatly improve the computational efficiency of analyses of tile-reinforced structures such as the hull of the Composite Armored Vehicle. By taking advantage of the periodicity in this type of construction, super-elements can be used to simplify the task of modeling, to virtually eliminate the time required to assemble the stiffness matrices, and to reduce significantly the analysis solution time. Furthermore, super-elements are fully transferable between analyses and analysts, so that they provide a consistent method to share information and reduce duplication. This paper describes a methodology that was developed to model and analyze large upper hull components of the Composite Armored Vehicle. The analyses are based on two types of superelement models. The first type is based on element-layering, which consists of modeling a laminate by using several layers of shell elements constrained together with compatibility equations. Element layering is used to ensure the proper transverse shear deformation in the laminate rubber layer. The second type of model uses three-dimensional elements. Since no graphical pre-processor currently supports super-elements, a special technique based on master-elements was developed. Master-elements are representations of super-elements that are used in conjunction with a custom translator to write the superelement connectivities as input decks for ABAQUS.
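The stiffness-matrix savings that superelements provide come from static condensation of interior degrees of freedom onto the boundary. A minimal sketch for a single interior DOF, assuming a plain list-of-lists matrix format (illustrative only, not the ABAQUS superelement interface used in the paper):

```python
def condense_single(K, interior):
    """Static (Guyan) condensation of one interior DOF out of a symmetric
    stiffness matrix K (list of lists):
        K'_ab = K_ab - K_ai * K_ib / K_ii
    For several interior DOFs, K_ai * K_ii^-1 * K_ib replaces the scalar
    division.
    """
    n = len(K)
    keep = [a for a in range(n) if a != interior]
    kii = K[interior][interior]
    return [[K[a][b] - K[a][interior] * K[interior][b] / kii for b in keep]
            for a in keep]
```

Condensing the middle node out of two springs of stiffness 2 in series recovers the exact series stiffness of 1 between the end nodes, illustrating that condensation is exact for linear static analysis.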

  2. Computational study of Drucker-Prager plasticity of rock using microtomography

    NASA Astrophysics Data System (ADS)

    Liu, J.; Sarout, J.; Zhang, M.; Dautriat, J.; Veveakis, M.; Regenauer-Lieb, K.

    2016-12-01

Understanding the physics of rocks is essential for the mining and petroleum industries. Microtomography provides a new way to quantify the relationship between microstructure and mechanical and transport properties. Transport and elastic properties have been studied widely, while plastic properties are still poorly understood. In this study, we analyse a synthetic sandstone sample for its up-scaled plastic properties from the micro-scale. The computations are based on the representative volume element (RVE). The mechanical RVE was determined by upper- and lower-bound finite element computations of elasticity. By comparing with experimental curves, the parameters of the matrix (solid part), which consists of calcite-cemented quartz grains, were investigated and quite accurate values obtained. The analyses deduced the bulk yield stress, cohesion, and angle of friction of the rock with pores. Computations of a series of models with volume sizes from 240-cube to 400-cube showed nearly overlapping stress-strain curves, suggesting that the mechanical RVE determined by elastic computations is valid for plastic yielding. Furthermore, a series of derivative models were created which have similar structure but different porosity values. The analyses of these models showed that yield stress, cohesion, and the angle of friction decrease linearly with increasing porosity over the range from 8% to 28%. The angle of friction decreases fastest, while cohesion is the most stable with respect to porosity.
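A minimal sketch of the Drucker-Prager yield function named in the title, written in terms of the stress invariants I1 and J2 for a principal stress state; the material parameters alpha and k below are generic placeholders, not the fitted values from the study:

```python
import math

def drucker_prager(stress, alpha, k):
    """Drucker-Prager yield function f = sqrt(J2) + alpha*I1 - k for a
    principal stress triple (tension positive); the material yields when
    f > 0. alpha and k relate to friction angle and cohesion.
    """
    s1, s2, s3 = stress
    I1 = s1 + s2 + s3
    mean = I1 / 3.0
    # Second invariant of the deviatoric stress tensor
    J2 = ((s1 - mean) ** 2 + (s2 - mean) ** 2 + (s3 - mean) ** 2) / 2.0
    return math.sqrt(J2) + alpha * I1 - k
```

Under pure hydrostatic compression the deviatoric term vanishes and the pressure-sensitive alpha*I1 term alone moves the state away from yield.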

  3. Thermal-structural analyses of Space Shuttle Main Engine (SSME) hot section components

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Thompson, Robert L.

    1988-01-01

Three-dimensional nonlinear finite element heat transfer and structural analyses were performed for the first stage high pressure fuel turbopump (HPFTP) blade of the space shuttle main engine (SSME). Directionally solidified (DS) MAR-M 246 and single crystal (SC) PWA-1480 material properties were used for the analyses. Analytical conditions were based on a typical test stand engine cycle. Blade temperature and stress-strain histories were calculated by using the MARC finite element computer code. The structural response of an SSME turbine blade was assessed and a greater understanding of blade damage mechanisms, convective cooling effects, and thermal mechanical effects was gained.

  4. Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Gulshan B., E-mail: gbsharma@ucalgary.ca; University of Pittsburgh, Swanson School of Engineering, Department of Bioengineering, Pittsburgh, Pennsylvania 15213; University of Calgary, Schulich School of Engineering, Department of Mechanical and Manufacturing Engineering, Calgary, Alberta T2N 1N4

Shoulder arthroplasty success has been attributed to many factors including bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design should withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study stresses and strains produced in implants and bone. However, these static analyses only measure a moment in time and not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three-dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally, simulating bone remodeling using an intact human scapula, initially resetting the scapular bone material properties to be uniform, numerically simulating sequential loading, and comparing the bone remodeling simulation results to the actual scapula’s material properties. A three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint load and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element’s remodeling stimulus was compared to its corresponding reference stimulus and its material properties modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent density were plotted and compared. Location of high and low predicted bone density was comparable to the actual specimen.
High predicted bone density was greater than actual specimen. Low predicted bone density was lower than actual specimen. Differences were probably due to applied muscle and joint reaction loads, boundary conditions, and values of constants used. Work is underway to study this. Nonetheless, the results demonstrate three dimensional bone remodeling simulation validity and potential. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses together with adaptive bone remodeling simulations to become effective tools for regenerative medicine research.

  5. Applications of Parallel Computation in Micro-Mechanics and Finite Element Method

    NASA Technical Reports Server (NTRS)

    Tan, Hui-Qian

    1996-01-01

This project discusses the application of parallel computation to material analyses. Briefly speaking, we analyze some kind of material by element computations. We call an element a cell here. A cell is divided into a number of subelements called subcells, and all subcells in a cell have an identical structure. The detailed structure will be given later in this paper. It is obvious that the problem is "well-structured", so a SIMD machine would be a better choice. In this paper we try to look into the potential of SIMD machines in dealing with finite element computation by developing appropriate algorithms on MasPar, a SIMD parallel machine. In section 2, the architecture of MasPar will be discussed. A brief review of the parallel programming language MPL is also given in that section. In section 3, some general parallel algorithms which might be useful to the project will be proposed, and, in combination with the algorithms, some features of MPL will be discussed in more detail. In section 4, the computational structure of the cell/subcell model will be given, and the idea of designing the parallel algorithm for the model will be demonstrated. Finally, in section 5, a summary will be given.

  6. Vibro-Acoustic FE Analyses of the Saab 2000 Aircraft

    NASA Technical Reports Server (NTRS)

    Green, Inge S.

    1992-01-01

    A finite element model of the Saab 2000 fuselage structure and interior cavity has been created in order to compute the noise level in the passenger cabin due to propeller noise. Areas covered in viewgraph format include the following: coupled acoustic/structural noise; data base creation; frequency response analysis; model validation; and planned analyses.

  7. Finite element analyses of a linear-accelerator electron gun

    NASA Astrophysics Data System (ADS)

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-02-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator, electron gun, were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun is operating continuously since commissioning without any thermal induced failures for the BEPCII linear accelerator.

  8. Finite element analyses of a linear-accelerator electron gun.

    PubMed

    Iqbal, M; Wasy, A; Islam, G U; Zhou, Z

    2014-02-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator, electron gun, were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun is operating continuously since commissioning without any thermal induced failures for the BEPCII linear accelerator.

  9. Analysis of Ninety Degree Flexure Tests for Characterization of Composite Transverse Tensile Strength

    NASA Technical Reports Server (NTRS)

O'Brien, T. Kevin; Krueger, Ronald

    2001-01-01

    Finite element (FE) analysis was performed on 3-point and 4-point bending test configurations of ninety degree oriented glass-epoxy and graphite-epoxy composite beams to identify deviations from beam theory predictions. Both linear and geometric non-linear analyses were performed using the ABAQUS finite element code. The 3-point and 4-point bending specimens were first modeled with two-dimensional elements. Three-dimensional finite element models were then performed for selected 4-point bending configurations to study the stress distribution across the width of the specimens and compare the results to the stresses computed from two-dimensional plane strain and plane stress analyses and the stresses from beam theory. Stresses for all configurations were analyzed at load levels corresponding to the measured transverse tensile strength of the material.
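The beam theory predictions that the FE results are checked against reduce, for the standard configurations, to simple closed forms. A sketch of both; the 4-point formula below assumes quarter-span loading, which is an assumption for illustration, not a detail taken from the paper:

```python
def flexure_stress_3pt(P, L, b, h):
    """Beam-theory maximum tensile stress for 3-point bending of a
    rectangular beam: sigma = 3 P L / (2 b h^2), with total load P,
    support span L, width b, thickness h.
    """
    return 3.0 * P * L / (2.0 * b * h ** 2)

def flexure_stress_4pt_quarter(P, L, b, h):
    """4-point bending with loading points at the quarter spans:
    sigma = 3 P L / (4 b h^2); the bending moment is constant (and the
    shear zero) between the inner loading points.
    """
    return 3.0 * P * L / (4.0 * b * h ** 2)
```

For the same total load and span, the quarter-point 4-point configuration develops half the peak stress of the 3-point one, which is one reason the two tests are compared for transverse tensile strength characterization.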

  10. An atomic finite element model for biodegradable polymers. Part 1. Formulation of the finite elements.

    PubMed

    Gleadall, Andrew; Pan, Jingzhe; Ding, Lifeng; Kruft, Marc-Anton; Curcó, David

    2015-11-01

    Molecular dynamics (MD) simulations are widely used to analyse materials at the atomic scale. However, MD has high computational demands, which may inhibit its use for simulations of structures involving large numbers of atoms such as amorphous polymer structures. An atomic-scale finite element method (AFEM) is presented in this study with significantly lower computational demands than MD. Due to the reduced computational demands, AFEM is suitable for the analysis of Young's modulus of amorphous polymer structures. This is of particular interest when studying the degradation of bioresorbable polymers, which is the topic of an accompanying paper. AFEM is derived from the inter-atomic potential energy functions of an MD force field. The nonlinear MD functions were adapted to enable static linear analysis. Finite element formulations were derived to represent interatomic potential energy functions between two, three and four atoms. Validation of the AFEM was conducted through its application to atomic structures for crystalline and amorphous poly(lactide). Copyright © 2015 Elsevier Ltd. All rights reserved.
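The core AFEM step of linearizing a nonlinear interatomic potential into a static element stiffness can be illustrated on a Lennard-Jones pair potential (a generic potential chosen for illustration, not necessarily a term of the force field used in the paper): the linearized bond stiffness is the potential's second derivative at the equilibrium spacing.

```python
def lj_bond_stiffness(epsilon, sigma):
    """Linearized stiffness of a Lennard-Jones pair potential
        E(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6),
    i.e. E''(r0) evaluated at the equilibrium spacing r0 = 2**(1/6)*sigma.
    This mirrors how an AFEM turns a nonlinear MD potential into a linear
    spring (truss) element; analytically the result is 72*eps/r0**2.
    """
    r0 = 2.0 ** (1.0 / 6.0) * sigma
    # E''(r) = 4*eps*(156*sigma**12/r**14 - 42*sigma**6/r**8)
    return 4.0 * epsilon * (156.0 * sigma ** 12 / r0 ** 14
                            - 42.0 * sigma ** 6 / r0 ** 8)
```

Three- and four-atom (angle and torsion) terms linearize analogously into small element stiffness matrices coupling the atoms involved.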

  11. PLANS; a finite element program for nonlinear analysis of structures. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Pifko, A.; Armen, H., Jr.; Levy, A.; Levine, H.

    1977-01-01

    The PLANS system, rather than being one comprehensive computer program, is a collection of finite element programs used for the nonlinear analysis of structures. This collection of programs evolved and is based on the organizational philosophy in which classes of analyses are treated individually based on the physical problem class to be analyzed. Each of the independent finite element computer programs of PLANS, with an associated element library, can be individually loaded and used to solve the problem class of interest. A number of programs have been developed for material nonlinear behavior alone and for combined geometric and material nonlinear behavior. The usage, capabilities, and element libraries of the current programs include: (1) plastic analysis of built-up structures where bending and membrane effects are significant, (2) three dimensional elastic-plastic analysis, (3) plastic analysis of bodies of revolution, and (4) material and geometric nonlinear analysis of built-up structures.

  12. 3-D modeling of ductile tearing using finite elements: Computational aspects and techniques

    NASA Astrophysics Data System (ADS)

    Gullerud, Arne Stewart

This research focuses on the development and application of computational tools to perform large-scale, 3-D modeling of ductile tearing in engineering components under quasi-static to mild loading rates. Two standard models for ductile tearing---the computational cell methodology and crack growth controlled by the crack tip opening angle (CTOA)---are described and their 3-D implementations are explored. For the computational cell methodology, quantification of the effects of several numerical issues---computational load step size, procedures for force release after cell deletion, and the porosity for cell deletion---enables construction of computational algorithms to remove the dependence of predicted crack growth on these issues. This work also describes two extensions of the CTOA approach into 3-D: a general 3-D method and a constant front technique. Analyses compare the characteristics of the extensions, and a validation study explores the ability of the constant front extension to predict crack growth in thin aluminum test specimens over a range of specimen geometries, absolute sizes, and levels of out-of-plane constraint. To provide a computational framework suitable for the solution of these problems, this work also describes the parallel implementation of a nonlinear, implicit finite element code. The implementation employs an explicit message-passing approach using the MPI standard to maintain portability, a domain decomposition of element data to provide parallel execution, and a master-worker organization of the computational processes to enhance future extensibility. A linear preconditioned conjugate gradient (LPCG) solver serves as the core of the solution process.
The parallel LPCG solver utilizes an element-by-element (EBE) structure of the computations to permit a dual-level decomposition of the element data: domain decomposition of the mesh provides efficient coarse-grain parallel execution, while decomposition of the domains into blocks of similar elements (same type, constitutive model, etc.) provides fine-grain parallel computation on each processor. A major focus of the LPCG solver is a new implementation of the Hughes-Winget element-by-element (HW) preconditioner. The implementation employs a weighted dependency graph combined with a new coloring algorithm to provide load-balanced scheduling for the preconditioner and overlapped communication/computation. This approach enables efficient parallel application of the HW preconditioner for arbitrary unstructured meshes.
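    The coloring strategy described above can be illustrated with a minimal greedy graph-coloring sketch (the adjacency data and function are invented for illustration, not taken from the thesis): elements that share a node receive different colors, so each color class can be updated in parallel without write conflicts.

```python
# Greedy coloring of an element-adjacency graph: elements that share a
# node get different colors, so all elements of one color can be
# processed in parallel without write conflicts (a generic sketch).

def greedy_coloring(adjacency):
    """adjacency: dict mapping element id -> set of neighboring element ids."""
    colors = {}
    for elem in sorted(adjacency):          # deterministic visiting order
        used = {colors[n] for n in adjacency[elem] if n in colors}
        color = 0
        while color in used:                # smallest color unused by neighbors
            color += 1
        colors[elem] = color
    return colors

# Four elements in a row, each sharing nodes with its neighbors:
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
coloring = greedy_coloring(adj)
# Adjacent elements always land in different color classes.
assert all(coloring[e] != coloring[n] for e in adj for n in adj[e])
```

    A weighted variant of this idea, as the abstract notes, can also balance the work assigned to each color class across processors.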

  13. Aeroelastic and dynamic finite element analyses of a bladed shrouded disk

    NASA Technical Reports Server (NTRS)

    Smith, G. C. C.; Elchuri, V.

    1980-01-01

    The delivery and demonstration of a computer program for the analysis of aeroelastic and dynamic properties is reported. Approaches to flutter and forced vibration of mistuned discs, and to transient aerothermoelasticity, are described.

  14. An automatic generation of non-uniform mesh for CFD analyses of image-based multiscale human airway models

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2014-11-01

    The authors have developed a method to automatically generate a non-uniform CFD mesh for image-based human airway models. The sizes of the generated tetrahedral elements vary in both the radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for the surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of wall distance, element size on the wall, and element size at the center of the airway lumen. The element sizes on the wall are computed based on local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and a smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rate. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.
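    The wall-distance-based size field described above might be sketched as follows; the linear blending law and all parameter values are assumptions for illustration, not those of the paper.

```python
# Hypothetical radial size field for an airway mesh: fine elements near
# the wall (to resolve the boundary layer) blending linearly to a
# coarser size at the lumen center. The linear law is an assumption.

def element_size(wall_distance, radius, h_wall, h_center):
    """Target edge length at a point 'wall_distance' from the airway wall."""
    t = min(max(wall_distance / radius, 0.0), 1.0)   # normalized distance in [0, 1]
    return h_wall + t * (h_center - h_wall)

# A 1 mm radius branch with 0.05 mm wall elements and 0.3 mm core elements:
sizes = [element_size(d, 1.0, 0.05, 0.3) for d in (0.0, 0.5, 1.0)]
# sizes grows monotonically from 0.05 at the wall toward 0.3 at the center
```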

  15. Finite element analyses of a linear-accelerator electron gun

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iqbal, M., E-mail: muniqbal.chep@pu.edu.pk, E-mail: muniqbal@ihep.ac.cn; Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049; Wasy, A.

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in CATIA (computer-aided three-dimensional interactive application) for finite element analyses through the ANSYS Workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning, without any thermally induced failures, for the BEPCII linear accelerator.

  16. Model-size reduction for the buckling and vibration analyses of anisotropic panels

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Whitworth, S. L.

    1986-01-01

    A computational procedure is presented for reducing the size of the model used in the buckling and vibration analyses of symmetric anisotropic panels to that of the corresponding orthotropic model. The key elements of the procedure are the application of an operator splitting technique through the decomposition of the material stiffness matrix of the panel into the sum of orthotropic and nonorthotropic (anisotropic) parts and the use of a reduction method through successive application of the finite element method and the classical Rayleigh-Ritz technique. The effectiveness of the procedure is demonstrated by numerical examples.
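    The decomposition step can be illustrated with a minimal sketch (the matrix values are invented): a symmetric anisotropic plane-stress stiffness matrix is split into its orthotropic part and an anisotropic remainder holding the shear-extension coupling terms.

```python
import numpy as np

# Split a plane-stress material stiffness matrix into an orthotropic part
# and the anisotropic remainder (the shear-extension coupling terms
# D16, D26). Numerical values are invented for illustration.
D = np.array([[100.0, 30.0,  5.0],
              [ 30.0, 80.0,  3.0],
              [  5.0,  3.0, 25.0]])

D_ortho = D.copy()
D_ortho[0, 2] = D_ortho[2, 0] = 0.0   # zero the coupling terms
D_ortho[1, 2] = D_ortho[2, 1] = 0.0
D_aniso = D - D_ortho                  # remainder holds only the coupling

assert np.allclose(D, D_ortho + D_aniso)   # exact additive split
```

    The procedure in the paper then treats the orthotropic part as the base operator and the anisotropic remainder perturbatively, which is what lets the reduced (orthotropic-sized) model capture the full anisotropic response.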

  17. Computation of leaky guided waves dispersion spectrum using vibroacoustic analyses and the Matrix Pencil Method: a validation study for immersed rectangular waveguides.

    PubMed

    Mazzotti, M; Bartoli, I; Castellazzi, G; Marzani, A

    2014-09-01

    The paper aims at validating a recently proposed Semi Analytical Finite Element (SAFE) formulation coupled with a 2.5D Boundary Element Method (2.5D BEM) for the extraction of dispersion data in immersed waveguides of generic cross-section. To this end, three-dimensional vibroacoustic analyses are carried out on two waveguides of square and rectangular cross-section immersed in water using the commercial Finite Element software Abaqus/Explicit. Real wavenumber and attenuation dispersive data are extracted by means of a modified Matrix Pencil Method. It is demonstrated that the results obtained using the two techniques are in very good agreement. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Deformation in Micro Roll Forming of Bipolar Plate

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Pereira, M.; Rolfe, B.; Daniel, W.; Weiss, M.

    2017-09-01

    Micro roll forming is a new processing technology to produce bipolar plates for Proton Exchange Membrane Fuel Cells (PEMFC) from thin stainless steel foil. To gain a better understanding of the deformation of the material in this process, numerical studies are necessary before experimental implementation. In general, solid elements with several layers through the material thickness are required to analyse material thinning in processes where the deformation mode is bending combined with tension, but this results in high computational costs. This pure solid element approach is especially time-consuming when analysing roll forming processes, which generally involve feeding a long strip through a number of successive roll stands. In an attempt to develop a more efficient modelling approach without sacrificing accuracy, two solutions are numerically analysed with ABAQUS/Explicit in this paper. In the first, a small patch of solid elements over the strip width and in the centre of the “pre-cut” sheet is coupled with shell elements, while in the second approach pure shell elements are used to discretize the full sheet. In the first approach, the shell elements account for the effect of the material being held in the roll stands on material flow, while the solid elements can be applied to analyse material thinning in a small discrete area of the sheet. Experimental micro roll forming trials show that the coupling of solid and shell elements gives acceptable model accuracy, whereas using shell elements alone results in major deviations between numerical and experimental results.

  19. GRANNY, a data bank of chemical analyses of Laramide and younger high-silica rhyolites and granites from Colorado and north-central New Mexico

    USGS Publications Warehouse

    Steigerwald, Celia H.; Mutschler, Felix E.; Ludington, Steve

    1983-01-01

    GRANNY is a data bank containing information on 507 chemically analyzed Laramide or younger high-silica rhyolites and granites from Colorado and north-central New Mexico. The data were compiled from both published and unpublished sources. The data bank is designed to aid in the recognition of igneous rocks with a high exploration potential for the discovery of molybdenum (and other lithophile element) deposits. Information on source reference, geographic location, age, mineralogic and petrologic characteristics, major constituent analyses, and trace element analyses is given for each sample. The data bank is available in two formats: (1) paper or microfiche hardcopy, and (2) fixed-format computer-readable magnetic tape.

  20. Estimation of Sonic Fatigue by Reduced-Order Finite Element Based Analyses

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Przekop, Adam

    2006-01-01

    A computationally efficient, reduced-order method is presented for prediction of sonic fatigue of structures exhibiting geometrically nonlinear response. A procedure to determine the nonlinear modal stiffness using commercial finite element codes allows the coupled nonlinear equations of motion in physical degrees of freedom to be transformed to a smaller coupled system of equations in modal coordinates. The nonlinear modal system is first solved using a computationally light equivalent linearization solution to determine if the structure responds to the applied loading in a nonlinear fashion. If so, a higher fidelity numerical simulation in modal coordinates is undertaken to more accurately determine the nonlinear response. Comparisons of displacement and stress response obtained from the reduced-order analyses are made with results obtained from numerical simulation in physical degrees-of-freedom. Fatigue life predictions from nonlinear modal and physical simulations are made using the rainflow cycle counting method in a linear cumulative damage analysis. Results computed for a simple beam structure under a random acoustic loading demonstrate the effectiveness of the approach and compare favorably with results obtained from the solution in physical degrees-of-freedom.
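    The linear part of the modal transformation underlying this reduced-order approach can be sketched as follows (a generic textbook modal reduction with invented matrices; the paper's nonlinear modal stiffness terms are omitted): physical degrees of freedom x are expressed as x = Φq and the system matrices are projected onto a few mode shapes.

```python
import numpy as np

# Generic modal reduction of M x'' + K x = f: project onto the lowest
# mass-normalized mode shapes, x = Phi q. (The paper adds nonlinear
# modal stiffness terms on top of this linear projection.)
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])

# Symmetric eigenproblem via M^(-1/2) K M^(-1/2); eigh sorts ascending.
Mi = np.diag(1.0 / np.sqrt(np.diag(M)))
w2, Q = np.linalg.eigh(Mi @ K @ Mi)
Phi = Mi @ Q[:, :2]            # keep two lowest modes, mass-normalized

M_r = Phi.T @ M @ Phi          # reduced mass matrix ~ 2x2 identity
K_r = Phi.T @ K @ Phi          # reduced stiffness ~ diag(w2[:2])
```

    The reduced 2×2 system is then integrated in modal coordinates, which is what makes the numerical simulation step computationally affordable.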

  1. The forced vibration of one-dimensional multi-coupled periodic structures: An application to finite element analysis

    NASA Astrophysics Data System (ADS)

    Mead, Denys J.

    2009-01-01

    A general theory for the forced vibration of multi-coupled one-dimensional periodic structures is presented as a sequel to a much earlier general theory for free vibration. Starting from the dynamic stiffness matrix of a single multi-coupled periodic element, it derives matrix equations for the magnitudes of the characteristic free waves excited in the whole structure by prescribed harmonic forces and/or displacements acting at a single periodic junction. The semi-infinite periodic system excited at its end is first analysed to provide the basis for analysing doubly infinite and finite periodic systems. In each case, total responses are found by considering just one periodic element. An already-known method of reducing the size of the computational problem is reexamined, expanded and extended in detail, involving reduction of the dynamic stiffness matrix of the periodic element through a wave-coordinate transformation. Use of the theory is illustrated in a combined periodic structure+finite element analysis of the forced harmonic in-plane motion of a uniform flat plate. Excellent agreement between the computed low-frequency responses and those predicted by simple engineering theories validates the detailed formulations of the paper. The primary purpose of the paper is not towards a specific application but to present a systematic and coherent forced vibration theory, carefully linked with the existing free-wave theory.

  2. BUCLASP 3: A computer program for stresses and buckling of heated composite stiffened panels and other structures, user's manual

    NASA Technical Reports Server (NTRS)

    Tripp, L. L.; Tamekuni, M.; Viswanathan, A. V.

    1973-01-01

    The use of the computer program BUCLASP3 is described. The code is intended for thermal stress and instability analyses of structures such as unidirectionally stiffened panels. Two types of instability analyses can be performed with BUCLASP3: (1) thermal buckling, and (2) buckling due to a specified in-plane biaxial loading. Any structure that has a constant cross section in one direction and that can be idealized as an assemblage of beam elements and laminated flat and curved plate strip-elements can be analyzed. The two parallel ends of the panel must be simply supported, whereas arbitrary elastic boundary conditions may be imposed along one or both external longitudinal sides. Any variation in the temperature rise (from ambient) through the cross section of a panel is considered in the analyses, but the temperature field must be assumed constant in the longitudinal direction. Load distributions for the externally applied in-plane biaxial loads are similar in nature to the permissible temperature field.

  3. Computational design and engineering of polymeric orthodontic aligners.

    PubMed

    Barone, S; Paoli, A; Razionale, A V; Savignano, R

    2016-10-05

    Transparent and removable aligners represent an effective solution to correct various orthodontic malocclusions through minimally invasive procedures. An aligner-based treatment requires patients to sequentially wear dentition-mating shells obtained by thermoforming polymeric disks on reference dental models. An aligner is shaped by introducing a geometrical mismatch with respect to the actual tooth positions to induce a loading system, which moves the target teeth toward the correct positions. The common practice is based on selecting the aligner features (material, thickness, and auxiliary elements) by considering only the clinician's subjective assessments. In this article, a computational design and engineering methodology has been developed to reconstruct anatomical tissues, to model parametric aligner shapes, to simulate orthodontic movements, and to enhance the aligner design. The proposed approach integrates computer-aided technologies, from tomographic imaging to optical scanning, from parametric modeling to finite element analyses, within a 3-dimensional digital framework. The anatomical modeling provides anatomies, including teeth (roots and crowns), jaw bones, and periodontal ligaments, which are the references for the downstream parametric aligner shaping. The biomechanical interactions between anatomical models and aligner geometries are virtually reproduced using finite element analysis software. The methodology allows numerical simulations of patient-specific conditions and the comparative analyses of different aligner configurations. In this article, the digital framework has been used to study the influence of various auxiliary elements on the loading system delivered to a maxillary and a mandibular central incisor during an orthodontic tipping movement. 
Numerical simulations have shown a high dependency of the orthodontic tooth movement on the auxiliary element configuration, which should then be accurately selected to maximize the aligner's effectiveness. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Semi-quantitative spectrographic analysis and rank correlation in geochemistry

    USGS Publications Warehouse

    Flanagan, F.J.

    1957-01-01

    The rank correlation coefficient, rs, which involves less computation than the product-moment correlation coefficient, r, can be used to indicate the degree of relationship between two elements. The method is applicable in situations where the assumptions underlying normal distribution correlation theory may not be satisfied. Semi-quantitative spectrographic analyses which are reported as grouped or partly ranked data can be used to calculate rank correlations between elements. ?? 1957.
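    The rank correlation coefficient referred to is Spearman's rs; without tied ranks it follows from the rank differences d as rs = 1 - 6 Σd² / (n(n² - 1)). A minimal sketch with invented trace-element concentrations:

```python
# Spearman's rank correlation r_s = 1 - 6*sum(d^2) / (n*(n^2 - 1)),
# valid when there are no tied ranks. Concentrations are invented.

def rank(values):
    """1-based ranks, assuming no ties."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

def spearman(x, y):
    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(rank(x), rank(y)))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical ppm values of two trace elements in five samples:
rb = [110, 95, 160, 130, 70]
sr = [300, 340, 180, 220, 390]
r_s = spearman(rb, sr)   # -> -1.0: the rankings are perfectly inverse
```

    For grouped or partly ranked data such as semi-quantitative spectrographic reports, a tie-corrected variant of rs would be used instead.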

  5. Combining Thermal And Structural Analyses

    NASA Technical Reports Server (NTRS)

    Winegar, Steven R.

    1990-01-01

    Computer code makes programs compatible so that stresses and deformations can be calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when there is no node-to-element correlation between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.

  6. Conceptual Design Oriented Wing Structural Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Lau, May Yuen

    1996-01-01

    Airplane optimization has always been the goal of airplane designers. In the conceptual design phase, a designer's goal could be tradeoffs between maximum structural integrity, minimum aerodynamic drag, or maximum stability and control, many times achieved separately. Bringing all of these factors into an iterative preliminary design procedure was time consuming, tedious, and not always accurate. For example, the final weight estimate would often be based upon statistical data from past airplanes. The new design would be classified based on gross characteristics, such as number of engines, wingspan, etc., to see which airplanes of the past most closely resembled the new design. This procedure works well for conventional airplane designs, but not very well for new innovative designs. With the computing power of today, new methods are emerging for the conceptual design phase of airplanes. Using finite element methods, computational fluid dynamics, and other computer techniques, designers can make very accurate disciplinary analyses of an airplane design. These tools are computationally intensive, and when used repeatedly, they consume a great deal of computing time. In order to reduce the time required to analyze a design and still bring together all of the disciplines (such as structures, aerodynamics, and controls) into the analysis, simplified design computer analyses are linked together into one computer program. These design codes are very efficient for conceptual design. The work in this thesis is focused on a finite element based conceptual design oriented structural synthesis capability (CDOSS) tailored to be linked into ACSYNT.

  7. Some methodical peculiarities of analysis of small-mass samples by SRXFA

    NASA Astrophysics Data System (ADS)

    Kudryashova, A. F.; Tarasov, L. S.; Ulyanov, A. A.; Baryshev, V. B.

    1989-10-01

    The stability of the element analysis station on the storage rings VEPP-3 and VEPP-4 at INP (Novosibirsk, USSR) was demonstrated using three sets of rare-element analyses carried out by SRXFA in May 1985, January 1988, and May-June 1988. These data show some systematic deviations in the measured Zr and La contents. SRXFA and INAA data have been compared for the latter element. A false linear correlation on the Rb-Sr plot in one set of analyses has been attributed to an artificial Sr peak overlapping the Rb peak. The authors propose sequences of spectrum registration and computer treatment for samples and standards that result in better final concentration data.

  8. Computational fluid mechanics utilizing the variational principle of modeling damping seals

    NASA Technical Reports Server (NTRS)

    Abernathy, J. M.

    1986-01-01

    A computational fluid dynamics code for application to traditional incompressible flow problems has been developed. The method is actually a slight compressibility approach which takes advantage of the bulk modulus and finite sound speed of all real fluids. The finite element numerical analog uses a dynamic differencing scheme based, in part, on a variational principle for computational fluid dynamics. The code was developed in order to study the feasibility of damping seals for high speed turbomachinery. Preliminary seal analyses have been performed.

  9. Micromechanics based simulation of ductile fracture in structural steels

    NASA Astrophysics Data System (ADS)

    Yellavajjala, Ravi Kiran

    The broader aim of this research is to develop fundamental understanding of the ductile fracture process in structural steels, propose robust computational models to quantify the associated damage, and provide numerical tools to simplify the implementation of these computational models into a general finite element framework. Mechanical testing on different geometries of test specimens made of ASTM A992 steels is conducted to experimentally characterize the ductile fracture at different stress states under monotonic and ultra-low cycle fatigue (ULCF) loading. Scanning electron microscopy studies of the fractured surfaces are conducted to decipher the underlying microscopic damage mechanisms that cause fracture in ASTM A992 steels. Detailed micromechanical analyses for monotonic and cyclic loading are conducted to understand the influence of stress triaxiality and the Lode parameter on the void growth phase of ductile fracture. Based on the monotonic analyses, an uncoupled micromechanical void growth model is proposed to predict ductile fracture. This model is then incorporated into the finite element program as a weakly coupled model to simulate the loss of load carrying capacity in the post microvoid coalescence regime for high triaxialities. Based on the cyclic analyses, an uncoupled micromechanics based cyclic void growth model is developed to predict the ULCF life of ASTM A992 steels subjected to high stress triaxialities. Furthermore, a computational fracture locus for ASTM A992 steels is developed and incorporated into the finite element program as an uncoupled ductile fracture model. This model can be used to predict ductile fracture initiation under monotonic loading in a wide range of triaxiality and Lode parameters. Finally, a coupled microvoid elongation and dilation based continuum damage model is proposed, implemented, calibrated and validated. 
This model is capable of simulating the local softening caused by the various phases of the ductile fracture process under monotonic loading for a wide range of stress states. Novel differentiation procedures based on complex-variable analysis, along with existing finite difference methods and automatic differentiation, are extended using perturbation techniques to evaluate tensor derivatives. These tensor differentiation techniques are then used to automate nonlinear constitutive models in an implicit finite element framework. Finally, the efficiency of these automation procedures is demonstrated using benchmark problems.
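    The complex-variable differentiation mentioned above is presumably the complex-step method, in which f'(x) ≈ Im f(x + ih)/h with no subtractive cancellation, so h can be taken extremely small; a scalar sketch of the idea (the thesis applies it to tensor-valued constitutive functions):

```python
import cmath, math

# Complex-step differentiation: f'(x) ~ Im(f(x + i*h)) / h. Unlike a
# finite difference, there is no subtraction of nearly equal numbers,
# so h can be tiny and the derivative is accurate to machine precision.

def complex_step_derivative(f, x, h=1e-30):
    return f(complex(x, h)).imag / h

f = lambda z: z**3 + cmath.sin(z)          # analytic test function
d = complex_step_derivative(f, 1.0)        # exact value: 3*1**2 + cos(1)
assert abs(d - (3.0 + math.cos(1.0))) < 1e-12
```

    The same trick extends to tensor functions by perturbing one component at a time, which is the automation route the thesis describes.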

  10. Fourier analysis of finite element preconditioned collocation schemes

    NASA Technical Reports Server (NTRS)

    Deville, Michel O.; Mund, Ernest H.

    1990-01-01

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with the use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.

  11. An Efficient Finite Element Framework to Assess Flexibility Performances of SMA Self-Expandable Carotid Artery Stents

    PubMed Central

    Ferraro, Mauro; Auricchio, Ferdinando; Boatti, Elisa; Scalet, Giulia; Conti, Michele; Morganti, Simone; Reali, Alessandro

    2015-01-01

    Computer-based simulations are nowadays widely exploited for the prediction of the mechanical behavior of different biomedical devices. In this context, structural finite element analyses (FEA) are currently the preferred computational tool to evaluate the stent response under bending. This work aims at developing a computational framework based on linear and higher order FEA to evaluate the flexibility of self-expandable carotid artery stents. In particular, numerical simulations involving large deformations and inelastic shape memory alloy constitutive modeling are performed, and the results suggest that higher order FEA allows the computational domain to be represented accurately and a better approximation of the solution to be obtained with a greatly reduced number of degrees of freedom with respect to linear FEA. Moreover, when buckling occurs, higher order FEA shows a superior capability of reproducing the associated nonlinear local effects. PMID:26184329

  12. PCI-based WILDFIRE reconfigurable computing engines

    NASA Astrophysics Data System (ADS)

    Fross, Bradley K.; Donaldson, Robert L.; Palmer, Douglas J.

    1996-10-01

    WILDFORCE is the first PCI-based custom reconfigurable computer that is based on the Splash 2 technology transferred from the National Security Agency and the Institute for Defense Analyses, Supercomputing Research Center (SRC). The WILDFORCE architecture has many of the features of the WILDFIRE computer, such as field- programmable gate array (FPGA) based processing elements, linear array and crossbar interconnection, and high- performance memory and I/O subsystems. New features introduced in the PCI-based WILDFIRE systems include memory/processor options that can be added to any processing element. These options include static and dynamic memory, digital signal processors (DSPs), FPGAs, and microprocessors. In addition to memory/processor options, many different application specific connectors can be used to extend the I/O capabilities of the system, including systolic I/O, camera input and video display output. This paper also discusses how this new PCI-based reconfigurable computing engine is used for rapid-prototyping, real-time video processing and other DSP applications.

  13. A program for mass spectrometer control and data processing analyses in isotope geology; written in BASIC for an 8K Nova 1210 computer

    USGS Publications Warehouse

    Stacey, J.S.; Hope, J.

    1975-01-01

    A system is described which uses a minicomputer to control a surface ionization mass spectrometer in the peak switching mode, with the object of computing isotopic abundance ratios of elements of geologic interest. The program uses the BASIC language and is sufficiently flexible to be used for multiblock analyses of any spectrum containing from two to five peaks. In the case of strontium analyses, ratios are corrected for rubidium content and normalized for mass spectrometer fractionation. Although almost any minicomputer would be suitable, the model used was the Data General Nova 1210 with 8K memory. An assembly language driver program and interface hardware descriptions for the Nova 1210 are included.
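    The strontium fractionation normalization conventionally rescales measured ratios to the reference value (86Sr/88Sr) = 0.1194. A power-law sketch follows; the record does not state which correction law the BASIC program used, and the measured values below are invented:

```python
# Power-law mass-fractionation correction for Sr isotope ratios, using
# the conventional reference value (86Sr/88Sr)_true = 0.1194. This is
# one common correction law, not necessarily the one in the BASIC code.

SR86_88_TRUE = 0.1194

def correct_sr87_86(r87_86_meas, r86_88_meas):
    """Normalize a measured 87Sr/86Sr ratio for instrumental fractionation.

    Power law: R_meas = R_true * (1 + f)**dm, with dm the mass difference,
    so (1 + f) follows from the 86/88 pair (dm = -2) and is then applied
    to 87/86 (dm = +1).
    """
    one_plus_f = (SR86_88_TRUE / r86_88_meas) ** 0.5
    return r87_86_meas / one_plus_f

# Hypothetical run slightly enriched in the light isotope (86/88 > 0.1194):
r = correct_sr87_86(0.70910, 0.11960)
# r comes out slightly above the measured 0.70910, as expected
```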

  14. Application of Dynamic Analysis in Semi-Analytical Finite Element Method.

    PubMed

    Liu, Pengfei; Xing, Qinyan; Wang, Dawei; Oeser, Markus

    2017-08-30

    Analyses of dynamic responses are significantly important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional but requires only a two-dimensional FE discretization, by incorporating Fourier series in the third dimension. In this paper, the algorithm used to apply dynamic analysis in SAFEM is introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and in the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is sufficiently high and that its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out, and the prediction derived from SAFEM is consistent with the measurement. Therefore, SAFEM can reliably predict the dynamic response of asphalt pavement under moving loads, which is beneficial to road administrations in assessing the pavement's state.

  15. Computational compliance criteria in water hammer modelling

    NASA Astrophysics Data System (ADS)

    Urbanowicz, Kamil

    2017-10-01

    Among the many numerical methods (finite difference, finite element, finite volume, etc.) used to solve the system of partial differential equations describing unsteady pipe flow, the method of characteristics (MOC) is the most widely used. With its help, it is possible to examine the effect of the numerical discretisation carried out over the pipe length. It was noticed, based on the tests performed in this study, that convergence of the calculation results occurred on a rectangular grid with each pipe of the analysed system divided into at least 10 elements. It is therefore advisable to introduce computational compliance criteria (CCC) responsible for the optimal discretisation of the examined system. The results of this study, based on the assumption of various values of the Courant-Friedrichs-Lewy (CFL) number, also indicate that the CFL number should be equal to one for optimum computational results. Application of the CCC criterion to in-house and commercial computer programs based on the method of characteristics will guarantee fast simulations and the necessary computational coherence.
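    The CFL = 1 recommendation ties the MOC time step to the reach length through the wave speed, Δt = Δx/a. A sketch of applying both criteria (at least 10 reaches per pipe, unit Courant number) to a single pipe, with invented data:

```python
# Method-of-characteristics grid sizing for a water-hammer model: at
# least 10 reaches per pipe and a Courant number CFL = a*dt/dx of one.
# Pipe length and wave speed below are invented for illustration.

def moc_grid(length, wave_speed, n_reaches=10):
    """Return (dx, dt) giving CFL exactly one on a rectangular MOC grid."""
    dx = length / n_reaches
    dt = dx / wave_speed           # CFL = a*dt/dx = 1
    return dx, dt

dx, dt = moc_grid(length=100.0, wave_speed=1000.0)   # 100 m pipe, a = 1000 m/s
cfl = 1000.0 * dt / dx
# dx = 10.0 m, dt = 0.01 s, cfl ~ 1.0
```

    In a multi-pipe network the shared Δt forces a compromise, which is precisely where a compliance criterion like the CCC proposed above becomes useful.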

  16. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.

  17. Comparison of 2D Finite Element Modeling Assumptions with Results From 3D Analysis for Composite Skin-Stiffener Debonding

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Paris, Isbelle L.; OBrien, T. Kevin; Minguet, Pierre J.

    2004-01-01

    The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane-strain elements as well as three different generalized plane strain type approaches were performed. The computed skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.

  18. Influence of 2D Finite Element Modeling Assumptions on Debonding Prediction for Composite Skin-stiffener Specimens Subjected to Tension and Bending

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane-strain elements as well as three different generalized plane strain type approaches were performed. The computed deflections, skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.

  19. VALIDATION OF ANSYS FINITE ELEMENT ANALYSIS SOFTWARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HAMM, E.R.

    2003-06-27

    This document provides a record of the verification and validation of the ANSYS Version 7.0 software that is installed on selected CH2M HILL computers. The issues addressed include: software verification, installation, validation, configuration management and error reporting. The ANSYS® computer program is a large-scale, multi-purpose finite element program which may be used for solving several classes of engineering analyses. The analysis capabilities of ANSYS Full Mechanical Version 7.0 installed on selected CH2M Hill Hanford Group (CH2M HILL) Intel processor based computers include the ability to solve static and dynamic structural analyses, steady-state and transient heat transfer problems, mode-frequency and buckling eigenvalue problems, static or time-varying magnetic analyses and various types of field and coupled-field applications. The program contains many special features which allow nonlinearities or secondary effects to be included in the solution, such as plasticity, large strain, hyperelasticity, creep, swelling, large deflections, contact, stress stiffening, temperature dependency, material anisotropy, and thermal radiation. The ANSYS program has been in commercial use since 1970, and has been used extensively in the aerospace, automotive, construction, electronic, energy services, manufacturing, nuclear, plastics, oil and steel industries.

  20. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.
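    As a minimal illustration of the solution kernel these algorithms target, the sketch below solves a small symmetric positive-definite system K u = f with the conjugate gradient method in plain Python. This is a textbook routine on a tiny illustrative system, not the FORTRAN shared- or distributed-memory code described in the presentation.

```python
# Hedged sketch: conjugate-gradient solution of a symmetric positive-definite
# system K u = f, the core kernel in the large-scale solvers described above.
# The 3x3 system here is illustrative only.

def conjugate_gradient(K, f, tol=1e-16, max_iter=100):
    n = len(f)
    u = [0.0] * n
    r = f[:]                      # residual r = f - K u (u = 0 initially)
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Kp = [sum(K[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Kp[i] for i in range(n))
        u = [u[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Kp[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:          # converged: squared residual norm is tiny
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return u

K = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
f = [1.0, 2.0, 3.0]
u = conjugate_gradient(K, f)
```

In exact arithmetic CG finishes in at most n iterations, and its matrix-vector products are precisely the operations that vectorize and parallelize well on the machines the record discusses.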

  1. Influence of Finite Element Software on Energy Release Rates Computed Using the Virtual Crack Closure Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)

    2006-01-01

    Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS and ANSYS and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS/Standard using the VCCT for ABAQUS add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS add-on.
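    The VCCT post-processing described above reduces, in its simplest mode-I form, to combining a crack-tip nodal force with the opening displacement of the node pair behind the tip. The sketch below shows that standard formula; the input values are illustrative, not results from the specimens modeled.

```python
# Hedged sketch: the basic mode-I VCCT formula behind the post-processing
# routine described above. Inputs are illustrative nodal quantities, not
# values from the paper.

def vcct_mode_I(F_y, delta_v, delta_a, b):
    """G_I = F_y * delta_v / (2 * delta_a * b)

    F_y     : nodal force at the crack tip, normal to the crack plane
    delta_v : relative opening displacement of the node pair behind the tip
    delta_a : length of the elements at the crack front
    b       : width associated with the crack-front node
    """
    return F_y * delta_v / (2.0 * delta_a * b)

# illustrative numbers only
G_I = vcct_mode_I(F_y=120.0, delta_v=2.0e-4, delta_a=0.5, b=1.0)
```

Because the formula uses only nodal forces and displacements, the same routine can post-process output from either code, which is what makes the cross-software comparison consistent.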

  2. Analysis of helium-ion scattering with a desktop computer

    NASA Astrophysics Data System (ADS)

    Butler, J. W.

    1986-04-01

    This paper describes a program written in an enhanced BASIC language for a desktop computer, for simulating the energy spectra of high-energy helium ions scattered into two concurrent detectors (backward and glancing). The program is designed for 512-channel spectra from samples containing up to 8 elements and 55 user-defined layers. The program is intended to meet the needs of analyses in materials sciences, such as metallurgy, where more than a few elements may be present, where several elements may be near each other in the periodic table, and where relatively deep structure may be important. These conditions preclude the use of completely automatic procedures for obtaining the sample composition directly from the scattered ion spectrum. Therefore, efficient methods are needed for entering and editing large amounts of composition data, with many iterations and with much feedback of information from the computer to the user. The internal video screen is used exclusively for verbal and numeric communications between user and computer. The composition matrix is edited on screen with a two-dimension forms-fill-in text editor and with many automatic procedures, such as doubling the number of layers with appropriate interpolations and extrapolations. The control center of the program is a bank of 10 keys that initiate on-event branching of program flow. The experimental and calculated spectra, including those of individual elements if desired, are displayed on an external color monitor, with an optional inset plot of the depth concentration profiles of the elements in the sample.

  3. Computer Simulation For Design Of TWT's

    NASA Technical Reports Server (NTRS)

    Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard

    1992-01-01

    A three-dimensional finite-element analytical technique facilitates design and fabrication of traveling-wave-tube (TWT) slow-wave structures. Used to perform thermal and mechanical analyses of TWT designed with variety of configurations, geometries, and materials. Using three-dimensional computer analysis, designer able to simulate building and testing of TWT, with consequent substantial saving of time and money. Technique enables detailed look into operation of traveling-wave tubes to help improve performance for future communications systems.

  4. Application of numerical methods to heat transfer and thermal stress analysis of aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Wieting, A. R.

    1979-01-01

    The paper describes a thermal-structural design analysis study of a fuel-injection strut for a hydrogen-cooled scramjet engine for a supersonic transport, utilizing finite-element methodology. Applications of finite-element and finite-difference codes to the thermal-structural design-analysis of space transports and structures are discussed. The interaction between the thermal and structural analyses has led to development of finite-element thermal methodology to improve the integration between these two disciplines. The integrated thermal-structural analysis capability developed within the framework of a computer code is outlined.

  5. Cross-Shear Implementation in Sliding-Distance-Coupled Finite Element Analysis of Wear in Metal-on-Polyethylene Total Joint Arthroplasty: Intervertebral Total Disc Replacement as an Illustrative Application

    PubMed Central

    Goreham-Voss, Curtis M.; Hyde, Philip J.; Hall, Richard M.; Fisher, John; Brown, Thomas D.

    2010-01-01

    Computational simulations of wear of orthopaedic total joint replacement implants have proven to be a valuable complement to laboratory physical simulators for pre-clinical estimation of abrasive/adhesive wear propensity. This class of numerical formulations has primarily involved implementation of the Archard/Lancaster relationship, with local wear computed as the product of (finite element) contact stress, sliding speed, and a bearing-couple-dependent wear factor. The present study introduces an augmentation whereby the influence of interface cross-shearing motion transverse to the prevailing molecular orientation of the polyethylene articular surface is taken into account in assigning the instantaneous local wear factor. The formulation augment is implemented within a widely utilized commercial finite element software environment (ABAQUS). Using a contemporary metal-on-polyethylene total disc replacement (ProDisc-L) as an illustrative implant, physically validated computational results are presented to document the role of cross-shearing effects in alternative laboratory consensus testing protocols. Going forward, this formulation permits systematically accounting for cross-shear effects in parametric computational wear studies of metal-on-polyethylene joint replacements, heretofore a substantial limitation of such analyses. PMID:20399432
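    The Archard/Lancaster update named above, extended by a cross-shear-dependent wear factor, can be sketched as follows. The interpolation law and all numeric inputs are placeholders for illustration; the paper's actual cross-shear dependence and calibration are not reproduced here.

```python
# Hedged sketch of a sliding-distance-coupled Archard/Lancaster update with
# a wear factor that depends on a cross-shear ratio. The functional form of
# k(cross_shear) below is a placeholder, not the paper's calibration.

def wear_increment(contact_stress, sliding_distance, cross_shear_ratio,
                   k_uni=1.0e-7, k_cs=4.0e-7):
    """Local linear wear depth for one load increment.

    depth = k * p * s, where k is interpolated between a unidirectional
    wear factor (k_uni) and a fully cross-shearing one (k_cs)."""
    k = k_uni + (k_cs - k_uni) * cross_shear_ratio   # placeholder interpolation
    return k * contact_stress * sliding_distance

# accumulate wear over a gait-cycle-like sequence of (stress, slide, cross-shear)
depth = 0.0
for p, s, cs in [(5.0e6, 1.0e-3, 0.0), (8.0e6, 2.0e-3, 0.5), (6.0e6, 1.5e-3, 1.0)]:
    depth += wear_increment(p, s, cs)
```

In the finite element setting, this increment is evaluated per contact node per load step, and the accumulated depth drives the surface geometry update.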

  6. Finite element flow analysis; Proceedings of the Fourth International Symposium on Finite Element Methods in Flow Problems, Chuo University, Tokyo, Japan, July 26-29, 1982

    NASA Astrophysics Data System (ADS)

    Kawai, T.

    Among the topics discussed are the application of FEM to nonlinear free surface flow, Navier-Stokes shallow water wave equations, incompressible viscous flows and weather prediction, the mathematical analysis and characteristics of FEM, penalty function FEM, convective, viscous, and high Reynolds number FEM analyses, the solution of time-dependent, three-dimensional and incompressible Navier-Stokes equations, turbulent boundary layer flow, FEM modeling of environmental problems over complex terrain, and FEM's application to thermal convection problems and to the flow of polymeric materials in injection molding processes. Also covered are FEMs for compressible flows, including boundary layer flows and transonic flows, hybrid element approaches for wave hydrodynamic loadings, FEM acoustic field analyses, and FEM treatment of free surface flow, shallow water flow, seepage flow, and sediment transport. Boundary element methods and FEM computational technique topics are also discussed. For individual items see A84-25834 to A84-25896

  7. Annular dilatation and loss of sino-tubular junction in aneurysmatic aorta: implications on leaflet quality at the time of surgery. A finite element study†

    PubMed Central

    Weltert, Luca; de Tullio, Marco D.; Afferrante, Luciano; Salica, Andrea; Scaffa, Raffaele; Maselli, Daniele; Verzicco, Roberto; De Paulis, Ruggero

    2013-01-01

    OBJECTIVES In the belief that stress is the main determinant of leaflet quality deterioration, we sought to evaluate the effect of annular and/or sino-tubular junction dilatation on leaflet stress. A finite element computer-assisted stress analysis was used to model four different anatomic conditions and analyse the consequent stress pattern on the aortic valve. METHODS Theoretical models of four aortic root configurations (normal, with dilated annulus, with loss of sino-tubular junction and with both dilatations simultaneously) were created with a computer-aided design technique. The pattern of stress and strain was then analysed by means of finite element analysis, when a uniform pressure of 100 mmHg was applied to the model. The analysis produced von Mises charts (colour-coded, computational, three-dimensional stress-pattern graphics) and bidimensional plots of compared stress on an arc-linear line, which allowed direct comparison of stress in the four different conditions. RESULTS Stresses both on the free margin and on the ‘belly’ of the leaflet rose from 0.28 MPa (normal conditions) to 0.32 MPa (+14%) in the case of isolated dilatation of the sino-tubular junction, and increased to 0.42 MPa (+67%) in the case of isolated annular dilatation, with no substantial difference whether sino-tubular junction dilatation was present or not. CONCLUSIONS Annular dilatation is the key element determining an increased stress on aortic leaflets, independently of an associated sino-tubular junction dilatation. The presence of annular dilatation associated with root aneurysm greatly decreases the chance of performing a valve-sparing procedure without the need for additional manoeuvres on leaflet tissue. This information may lead to a refinement in the optimal surgical strategy. PMID:23536020

  8. On the stability analysis of hyperelastic boundary value problems using three- and two-field mixed finite element formulations

    NASA Astrophysics Data System (ADS)

    Schröder, Jörg; Viebahn, Nils; Wriggers, Peter; Auricchio, Ferdinando; Steeger, Karl

    2017-09-01

    In this work we investigate different mixed finite element formulations for the detection of critical loads for the possible occurrence of bifurcation and limit points. In detail, three- and two-field formulations for incompressible and quasi-incompressible materials are analyzed. In order to apply various penalty functions for the volume dilatation in displacement/pressure mixed elements we propose a new consistent scheme capturing the nonlinearities of the penalty constraints. It is shown that for all mixed formulations, which can be reduced to a generalized displacement scheme, a straightforward stability analysis is possible. However, problems based on the classical saddle-point structure require a different analysis based on the change of the signature of the underlying matrix system. The basis of these investigations is the work from Auricchio et al. (Comput Methods Appl Mech Eng 194:1075-1092, 2005, Comput Mech 52:1153-1167, 2013).

  9. Petascale supercomputing to accelerate the design of high-temperature alloys

    DOE PAGES

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; ...

    2017-10-25

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.

  10. Petascale supercomputing to accelerate the design of high-temperature alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.

  11. Petascale supercomputing to accelerate the design of high-temperature alloys

    NASA Astrophysics Data System (ADS)

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; Haynes, J. Allen

    2017-12-01

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. The approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
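    Across the records above, the central quantity is a segregation energy: the total-energy difference between a supercell with the solute at an interface site and an otherwise identical supercell with the solute at a bulk site. A minimal bookkeeping sketch, with made-up placeholder energies rather than values from the dataset:

```python
# Hedged sketch of the segregation-energy bookkeeping behind the DFT workflow.
# E_seg < 0 means the solute prefers the interface site. The energies below
# are made-up placeholders, not values from the published dataset.

def segregation_energy(E_supercell_interface, E_supercell_bulk):
    """E_seg = E(solute at interface site) - E(solute at bulk site),
    for supercells that are otherwise identical (same atoms, same cell)."""
    return E_supercell_interface - E_supercell_bulk

# placeholder total energies in eV
E_seg = segregation_energy(E_supercell_interface=-512.34, E_supercell_bulk=-512.10)
```

Each of the "several hundred supercell calculations" mentioned in the abstract supplies one of these total energies; the data-science step then correlates the resulting E_seg values with elemental descriptors.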

  12. The Effect of Scale Dependent Discretization on the Progressive Failure of Composite Materials Using Multiscale Analyses

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.

    2013-01-01

    A multiscale modeling methodology, which incorporates a statistical distribution of fiber strengths into coupled micromechanics/finite element analyses, is applied to unidirectional polymer matrix composites (PMCs) to analyze the effect of mesh discretization both at the micro- and macroscales on the predicted ultimate tensile strength (UTS) and failure behavior. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a PMC tensile specimen that initiates at the repeating unit cell (RUC) level. Three different finite element mesh densities were employed, each coupled with an appropriate RUC. Multiple simulations were performed in order to assess the effect of a statistical distribution of fiber strengths on the bulk composite failure and predicted strength. The coupled effects of both the micro- and macroscale discretizations were found to have a noticeable effect on the predicted UTS and the computational efficiency of the simulations.
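    A statistical distribution of fiber strengths of the kind fed into such multiscale analyses is commonly sampled from a two-parameter Weibull law. The sketch below inverts the Weibull CDF; the scale and modulus values are illustrative assumptions, not the parameters used with FEAMAC.

```python
# Hedged sketch: drawing a statistical distribution of fiber strengths, as
# fed into coupled micromechanics/FE analyses. The two-parameter Weibull
# values (scale, modulus) are illustrative, not those used in the study.

import math
import random

def weibull_strengths(n, scale=4500.0, modulus=10.0, seed=42):
    """Sample n fiber strengths (MPa) by inverting the Weibull CDF:
    sigma = scale * (-ln(1 - U))**(1/modulus), with U uniform on (0, 1)."""
    rng = random.Random(seed)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / modulus)
            for _ in range(n)]

# one strength per fiber in the repeating unit cells
strengths = weibull_strengths(1000)
```

Each simulation in a Monte Carlo set would redraw this list (new seed), so the scatter in predicted UTS across simulations reflects the fiber-strength statistics.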

  13. Comparisons of node-based and element-based approaches of assigning bone material properties onto subject-specific finite element models.

    PubMed

    Chen, G; Wu, F Y; Liu, Z C; Yang, K; Cui, F

    2015-08-01

    Subject-specific finite element (FE) models can be generated from computed tomography (CT) datasets of a bone. A key step is assigning material properties automatically onto finite element models, which remains a great challenge. This paper proposes a node-based assignment approach and also compares it with the element-based approach in the literature. Both approaches were implemented using ABAQUS. The assignment procedure is divided into two steps: generating the data file of the image intensity of a bone in a MATLAB program and reading the data file into ABAQUS via user subroutines. The node-based approach assigns the material properties to each node of the finite element mesh, while the element-based approach assigns the material properties directly to each integration point of an element. Both approaches are independent from the type of elements. A number of FE meshes are tested and both give accurate solutions; comparatively the node-based approach involves less programming effort. The node-based approach is also independent from the type of analyses; it has been tested on the nonlinear analysis of a Sawbone femur. The node-based approach substantially improves the level of automation of the assignment procedure of bone material properties. It is the simplest and most powerful approach that is applicable to many types of analyses and elements. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
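    The node-based idea can be sketched outside ABAQUS as a plain mapping from image intensity at each node to a Young's modulus, which element integration points would later interpolate through shape functions. The density and modulus calibration constants below are hypothetical placeholders, not the paper's MATLAB/ABAQUS pipeline.

```python
# Hedged sketch of node-based material assignment: CT intensity at each node
# -> apparent density -> Young's modulus via a power law. All calibration
# constants are illustrative placeholders, not the paper's values.

def intensity_to_modulus(hu, a=0.0007, b=1.49, rho_slope=0.0008, rho_offset=0.13):
    """Hounsfield units -> apparent density (g/cm^3) -> modulus (GPa).
    rho = rho_slope * HU + rho_offset ; E = a * rho**b  (placeholder law)."""
    rho = max(rho_slope * hu + rho_offset, 1e-6)   # clamp to avoid rho <= 0
    return a * rho ** b

def assign_nodal_moduli(node_intensities):
    """Node-based approach: one modulus per node; integration points later
    interpolate these nodal values through the element shape functions."""
    return {node: intensity_to_modulus(hu) for node, hu in node_intensities.items()}

# illustrative node id -> CT intensity map
moduli = assign_nodal_moduli({1: 200.0, 2: 800.0, 3: 1400.0})
```

Because the mapping lives at the nodes, it is independent of element type, which is the property the paper highlights for the node-based approach.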

  14. Application of Dynamic Analysis in Semi-Analytical Finite Element Method

    PubMed Central

    Oeser, Markus

    2017-01-01

    Analyses of dynamic responses are significantly important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and only requires a two-dimensional FE discretization by incorporating Fourier series in the third dimension. In this paper, the algorithm to apply the dynamic analysis to SAFEM is introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is sufficiently high and its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out and the prediction derived from SAFEM is consistent with the measurement. Therefore, SAFEM can reliably predict the dynamic response of asphalt pavement under moving loads, proving beneficial to road administrations in assessing the pavement's state. PMID:28867813
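    The semi-analytical ingredient, carrying the third dimension by a Fourier series so that only a 2D mesh is discretized, can be sketched by reconstructing the variation along the third direction from per-harmonic amplitudes. The coefficients below are illustrative, not SAFEM output.

```python
# Hedged sketch of the semi-analytical idea: the variation along the third
# direction z in [0, L] is carried by a sine series, so each harmonic needs
# only a 2D solution. The amplitudes below are illustrative, not SAFEM data.

import math

def field_at_z(coeffs, z, L):
    """u(z) = sum_n a_n * sin(n*pi*z/L): reconstruct the third-direction
    variation from the per-harmonic amplitudes a_n (n = 1, 2, ...)."""
    return sum(a * math.sin((n + 1) * math.pi * z / L)
               for n, a in enumerate(coeffs))

coeffs = [1.0, 0.3, 0.1]                   # amplitudes of the first 3 harmonics
u_mid = field_at_z(coeffs, z=2.0, L=4.0)   # response at midspan
```

In the full method each amplitude a_n is itself a 2D finite element solution, and the harmonics decouple for the linear problem, which is where the large speed-up over a full 3D mesh comes from.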

  15. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    NASA Technical Reports Server (NTRS)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
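    The variance-reduction idea can be sketched with a scalar stand-in for an expensive analysis: use the first-order Taylor expansion about the mean design, whose expectation is known, as a control variate built from the cheap sensitivity derivative. Everything below (the response function, the input distribution, the sample size) is an illustrative assumption, not the paper's aircraft-wing example.

```python
# Hedged sketch: sensitivity derivatives as a control variate in Monte Carlo.
# g(x) = f(mu) + f'(mu)*(x - mu) has known mean f(mu) when E[x] = mu, so we
# sample (f - g) and add f(mu) back. f here is a scalar stand-in for an
# expensive structural analysis; all parameters are illustrative.

import math
import random

def estimate(n, seed, use_sensitivity):
    rng = random.Random(seed)
    mu, sigma = 1.0, 0.1
    f = math.exp                   # "expensive" response, chosen for clarity
    fprime_mu = math.exp(mu)       # cheap sensitivity derivative at the mean
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        if use_sensitivity:
            total += f(x) - (f(mu) + fprime_mu * (x - mu))   # control variate
        else:
            total += f(x)
    mean = total / n
    return (mean + f(mu)) if use_sensitivity else mean

plain = estimate(2000, seed=1, use_sensitivity=False)
enhanced = estimate(2000, seed=1, use_sensitivity=True)
```

The sampled quantity f - g fluctuates only at second order in (x - mu), so for the same sample count the sensitivity-enhanced estimate of E[f] is far less noisy, mirroring the order-of-magnitude accuracy gain reported above.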

  16. Modular structural elements in the replication origin region of Tetrahymena rDNA.

    PubMed Central

    Du, C; Sanzgiri, R P; Shaiu, W L; Choi, J K; Hou, Z; Benbow, R M; Dobbs, D L

    1995-01-01

    Computer analyses of the DNA replication origin region in the amplified rRNA genes of Tetrahymena thermophila identified a potential initiation zone in the 5'NTS [Dobbs, Shaiu and Benbow (1994), Nucleic Acids Res. 22, 2479-2489]. This region consists of a putative DNA unwinding element (DUE) aligned with predicted bent DNA segments, nuclear matrix or scaffold associated region (MAR/SAR) consensus sequences, and other common modular sequence elements previously shown to be clustered in eukaryotic chromosomal origin regions. In this study, two mung bean nuclease-hypersensitive sites in supercoiled plasmid DNA were localized within the major DUE-like element predicted by thermodynamic analyses. Three restriction fragments of the 5'NTS region predicted to contain bent DNA segments exhibited anomalous migration characteristic of bent DNA during electrophoresis on polyacrylamide gels. Restriction fragments containing the 5'NTS region bound Tetrahymena nuclear matrices in an in vitro binding assay, consistent with an association of the replication origin region with the nuclear matrix in vivo. The direct demonstration in a protozoan origin region of elements previously identified in Drosophila, chick and mammalian origin regions suggests that clusters of modular structural elements may be a conserved feature of eukaryotic chromosomal origins of replication. PMID:7784181

  17. A Rocket Engine Design Expert System

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth J.

    1989-01-01

    The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state-of-the-art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the H2-O2 coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One-dimensional equilibrium chemistry was used in the energy release analysis of the combustion chamber. A 3-D conduction and/or 1-D advection analysis is used to predict heat transfer and coolant channel wall temperature distributions, in addition to coolant temperature and pressure drop. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.

  18. A rocket engine design expert system

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth J.

    1989-01-01

    The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state-of-the-art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the hydrogen-oxygen coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One-dimensional equilibrium chemistry was employed in the energy release analysis of the combustion chamber and three-dimensional finite-difference analysis of the regenerative cooling channels was used to calculate the pressure drop along the channels and the coolant temperature as it exits the coolant circuit. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.

  19. Conjugate Heat Transfer Analyses on the Manifold for Ramjet Fuel Injectors

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.

    2006-01-01

    Three-dimensional conjugate heat transfer analyses on the manifold located upstream of the ramjet fuel injector are performed using CFdesign, a finite-element computational fluid dynamics (CFD) software package. The flow field of the hot fuel (JP-7) flowing through the manifold is simulated and the wall temperature of the manifold is computed. The three-dimensional numerical results for the fuel temperature are compared with those obtained using a one-dimensional analysis based on empirical equations, and they showed good agreement. The numerical results revealed that it takes around 30 to 40 sec to reach equilibrium, at which point the fuel temperature has dropped about 3 °F from the inlet to the exit of the manifold.
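    The one-dimensional empirical comparison mentioned above amounts, in its simplest form, to a bulk energy balance along the manifold. The sketch below uses placeholder inputs chosen only for illustration, not the study's JP-7 conditions.

```python
# Hedged sketch of the kind of one-dimensional check used for comparison:
# a steady-flow bulk energy balance along the manifold. All inputs are
# illustrative placeholders, not the study's JP-7 conditions.

def bulk_temperature_change(q, m_dot, c_p):
    """delta_T = q / (m_dot * c_p) for steady flow, with heat transfer rate q
    (Btu/s, negative for heat loss), mass flow m_dot (lbm/s), and specific
    heat c_p (Btu/(lbm*F))."""
    return q / (m_dot * c_p)

# placeholder values: a heat loss of 1.5 Btu/s at 1 lbm/s, c_p = 0.5
dT = bulk_temperature_change(q=-1.5, m_dot=1.0, c_p=0.5)   # temperature change, F
```

A CFD wall-temperature field gives the spatial detail; this balance only checks that the inlet-to-exit bulk temperature change is consistent.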

  20. Two-dimensional finite-element analyses of simulated rotor-fragment impacts against rings and beams compared with experiments

    NASA Technical Reports Server (NTRS)

    Stagliano, T. R.; Witmer, E. A.; Rodal, J. J. A.

    1979-01-01

    Finite element modeling alternatives, as well as the utility and limitations of the two-dimensional structural response computer code CIVM-JET 4B for predicting the transient, large-deflection, elastic-plastic structural responses of two-dimensional beam and/or ring structures subjected to rigid-fragment impact, were investigated. The applicability of the CIVM-JET 4B analysis and code for the prediction of steel containment ring response to impact by complex deformable fragments from a trihub burst of a T58 turbine rotor was studied. Dimensional analysis considerations were used in a parametric examination of data from engine rotor burst containment experiments and data from sphere-beam impact experiments. The use of the CIVM-JET 4B computer code for making parametric structural response studies on both fragment-containment structure and fragment-deflector structure was illustrated. Modifications to the analysis/computation procedure were developed to alleviate restrictions.

  1. The modelling of the flow-induced vibrations of periodic flat and axial-symmetric structures with a wave-based method

    NASA Astrophysics Data System (ADS)

    Errico, F.; Ichchou, M.; De Rosa, S.; Bareille, O.; Franco, F.

    2018-06-01

    The stochastic response of periodic flat and axial-symmetric structures, subjected to random and spatially correlated loads, is here analysed through an approach based on the combination of a wave finite element method and a transfer matrix method. Despite its lower computational cost, the present approach retains the accuracy of classical finite element methods. When dealing with homogeneous structures, the accuracy also extends to higher frequencies without increasing the time of calculation. Depending on the complexity of the structure and the frequency range, the computational cost can be reduced by more than two orders of magnitude. The presented methodology is validated for both simple and complex structural shapes, under deterministic and random loads.
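    The transfer matrix ingredient can be sketched as a 2x2 state-vector map: the state (displacement, internal force) at one cell boundary is carried to the next by a cell matrix, so N identical cells cost one repeated matrix product instead of a full mesh. The harmonic spring-mass cell below is an illustrative assumption, not the paper's wave finite element cell.

```python
# Hedged sketch of the transfer-matrix step for a periodic structure. The
# state vector (displacement, internal force) at one cell boundary maps to
# the next through a 2x2 cell matrix; the spring-mass cell is illustrative.

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def periodic_transfer(T_cell, n_cells):
    """Chain n identical cells: T_total = T_cell ** n."""
    T = [[1.0, 0.0], [0.0, 1.0]]            # identity
    for _ in range(n_cells):
        T = matmul2(T_cell, T)
    return T

# harmonic transfer matrix of a spring (k) followed by a mass (m) at
# frequency w; illustrative parameter values
k, m, w = 1.0, 0.1, 1.0
T_cell = [[1.0 - (w * w * m) / k, 1.0 / k],
          [-w * w * m, 1.0]]
T10 = periodic_transfer(T_cell, 10)
```

For a one-dimensional conservative cell the transfer matrix has unit determinant, a property the chained product preserves, which is one reason the periodic formulation stays cheap and stable across many cells.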

  2. Bunburra Rockhole: Exploring the geology of a new differentiated asteroid

    NASA Astrophysics Data System (ADS)

    Benedix, G. K.; Bland, P. A.; Friedrich, J. M.; Mittlefehldt, D. W.; Sanborn, M. E.; Yin, Q.-Z.; Greenwood, R. C.; Franchi, I. A.; Bevan, A. W. R.; Towner, M. C.; Perrotta, G. C.; Mertzman, S. A.

    2017-07-01

    Bunburra Rockhole is the first meteorite recovered by the Desert Fireball Network. We expanded a bulk chemical study of the Bunburra Rockhole meteorite to include major, minor and trace element analyses, as well as oxygen and chromium isotopes, in several different pieces of the meteorite, in order to determine the extent of chemical heterogeneity and constrain the origin of the meteorite. Minor and trace element abundances in all pieces lie exactly on the basaltic eucrite trend. Major element analyses show a slight deviation from basaltic eucrite compositions, but not in any systematic pattern. New oxygen isotope analyses on 23 pieces of Bunburra Rockhole show large variation in both δ17O and δ18O, and both are well outside the HED parent body fractionation line. We present the first Cr isotope results for this rock, which are also distinct from HEDs. Detailed computed tomographic scanning and back-scattered electron mapping do not indicate the presence of any other meteoritic contaminant (contamination is also unlikely based on trace element chemistry). We therefore conclude that Bunburra Rockhole represents a sample of a new differentiated asteroid, one that may have more variable oxygen isotopic compositions than 4 Vesta. The fact that Bunburra Rockhole chemistry falls on the eucrite trend perhaps suggests that multiple objects with basaltic crusts accreted in a similar region of the Solar System.

  3. Numerical Computations of Hypersonic Boundary-Layer over Surface Irregularities

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Choudhari, Meelan M.; Li, Fei

    2010-01-01

    Surface irregularities such as protuberances inside a hypersonic boundary layer may lead to premature transition on the vehicle surface. Early transition in turn causes large localized surface heating that could damage the thermal protection system. Experimental measurements as well as numerical computations aimed at building a knowledge base for transition Reynolds numbers with respect to different protuberance sizes and locations have been actively pursued in recent years. This paper computationally investigates the unsteady wake development behind large isolated cylindrical roughness elements and the scaled wind-tunnel model of the trip used in a recent flight measurement during the reentry of space shuttle Discovery. An unstructured-mesh compressible flow solver based on the space-time conservation element, solution element (CESE) method is used to perform time-accurate Navier-Stokes calculations for the flow past a roughness element under several wind-tunnel conditions. For cylindrical roughness elements with ratios of height to boundary-layer thickness from 0.8 to 2.5, the wake flow is characterized by a mushroom-shaped centerline streak and horseshoe vortices. While the time-accurate solutions converged to a steady state for a ratio of 0.8, strong flow unsteadiness is present for ratios of 1.3 and 2.5. Instability waves marked by distinct disturbance frequencies were found in the latter two cases. Both the centerline streak and the horseshoe vortices become unstable downstream. The oscillatory vortices eventually reach an early breakdown stage for the largest roughness element. Spectral analyses, in conjunction with the computed root-mean-square variations, suggest that the source of the unsteadiness and instability waves in the wake region may be traced back to possible absolute instability in the front-side separation region.

  4. Cyclic structural analyses of anisotropic turbine blades for reusable space propulsion systems. [ssme fuel turbopump

    NASA Technical Reports Server (NTRS)

    Manderscheid, J. M.; Kaufman, A.

    1985-01-01

    Turbine blades for reusable space propulsion systems are subject to severe thermomechanical loading cycles that result in large inelastic strains and very short lives. These components require the use of anisotropic high-temperature alloys to meet the safety and durability requirements of such systems. To assess the effects on blade life of material anisotropy, cyclic structural analyses are being performed for the first stage high-pressure fuel turbopump blade of the space shuttle main engine. The blade alloy is directionally solidified MAR-M 246 alloy. The analyses are based on a typical test stand engine cycle. Stress-strain histories at the airfoil critical location are computed using the MARC nonlinear finite-element computer code. The MARC solutions are compared to cyclic response predictions from a simplified structural analysis procedure developed at the NASA Lewis Research Center.

  5. A Review of Recent Aeroelastic Analysis Methods for Propulsion at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Bakhle, Milind A.; Srivastava, R.; Mehmed, Oral; Stefko, George L.

    1993-01-01

    This report reviews aeroelastic analyses for propulsion components (propfans, compressors and turbines) being developed and used at NASA LeRC. These aeroelastic analyses include both structural and aerodynamic models. The structural models include a typical section, a beam (with and without disk flexibility), and a finite-element blade model (with plate bending elements). The aerodynamic models are based on the solution of equations ranging from the two-dimensional linear potential equation to the three-dimensional Euler equations for multibladed configurations. Typical calculated results are presented for each aeroelastic model. Suggestions for further research are made. Many of the currently available aeroelastic models and analysis methods are being incorporated in a unified computer program, APPLE (Aeroelasticity Program for Propulsion at LEwis).

  6. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that protect an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but it complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS, subjected to thermal and mechanical loads, through deterministic and reliability-based optimization. Optimization of the ITPS structure requires computationally expensive finite element analyses of a 3D ITPS (solid) model. To reduce the computational expense of the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, homogenization was found to be applicable only for panels much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel, so a single unit cell was used in the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This in turn demands computationally expensive finite element analyses, which were replaced by efficient low-fidelity surrogate models. In an optimization process it is important to represent the constraints accurately in order to find the optimum design. Instead of building global surrogate models from a large number of designs, the computational resources were directed toward target regions near the constraint boundaries, using adaptive sampling strategies to represent the constraints accurately. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. 
EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that with adaptive sampling the number of designs required to find the optimum was reduced drastically while accuracy improved. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. The separable Monte Carlo method was employed, which allows separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel, as well as error in the finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was likewise reduced by employing surrogate models. To estimate the error in the probability-of-failure estimate, the bootstrap method was applied. This research thus demonstrates optimization of the ITPS composite panel, with multiple failure modes and a large number of uncertainties, using adaptive sampling techniques.
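
    The bootstrap step can be sketched generically: resample the Monte Carlo failure indicators with replacement and take the spread of the re-estimated failure probabilities as the sampling error. This is an illustrative stand-in, not the dissertation's code:

```python
import random

def bootstrap_pf_std(failures, n_boot=200, seed=0):
    """Bootstrap standard error of a Monte Carlo probability-of-failure
    estimate.  `failures` is a list of 0/1 failure indicators, one per
    Monte Carlo sample."""
    rng = random.Random(seed)
    n = len(failures)
    estimates = []
    for _ in range(n_boot):
        # Resample the indicators with replacement and re-estimate pf.
        resample = [failures[rng.randrange(n)] for _ in range(n)]
        estimates.append(sum(resample) / n)
    mean = sum(estimates) / n_boot
    var = sum((e - mean) ** 2 for e in estimates) / (n_boot - 1)
    return var ** 0.5
```

    For a binomial indicator the result should track the analytic standard error sqrt(p*(1-p)/n), which gives a quick sanity check on the resampling.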

  7. Reanalysis, compatibility and correlation in analysis of modified antenna structures

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1989-01-01

    A simple computational procedure is synthesized to process changes in the microwave-antenna pathlength-error measure when there are changes in the antenna structure model. The procedure employs structural modification reanalysis methods combined with new extensions of correlation analysis to provide the revised rms pathlength error. Mainframe finite-element-method processing of the structure model is required only for the initial unmodified structure, and elementary postprocessor computations develop and deal with the effects of the changes. Several illustrative computational examples are included. The procedure adapts readily to processing spectra of changes for parameter studies or sensitivity analyses.

  8. Static aeroelastic analysis and tailoring of a single-element racing car wing

    NASA Astrophysics Data System (ADS)

    Sadd, Christopher James

    This thesis presents the research from an Engineering Doctorate programme in collaboration with Reynard Motorsport Ltd, a manufacturer of racing cars. Racing car wing design has traditionally treated structures as rigid. However, structures are never perfectly rigid, and the interaction between aerodynamic loading and structural flexibility has a direct impact on aerodynamic performance. This interaction is often referred to as static aeroelasticity, and the focus of this research has been the development of a computational static aeroelastic analysis method to improve the design of a single-element racing car wing. A static aeroelastic analysis method has been developed by coupling a Reynolds-averaged Navier-Stokes CFD analysis method with a finite element structural analysis method using an iterative scheme. Development of this method has included assessment of the CFD and finite element analysis methods and development of data transfer and mesh deflection methods. Experimental testing was also completed to further assess the computational analyses. The computational and experimental results show good correlation, and these studies have also shown that a Navier-Stokes static aeroelastic analysis of an isolated wing can be performed at an acceptable computational cost. The static aeroelastic analysis tool was used to assess methods of tailoring the structural flexibility of the wing to increase its aerodynamic performance. These tailoring methods were then used to produce two final wing designs, intended to increase downforce and reduce drag respectively. At the average operating dynamic pressure of the racing car, the computational analysis predicts a downforce coefficient of C_L = -1.377 for the downforce-increasing wing, compared with C_L = -1.265 for the original wing, and a drag coefficient of C_D = 0.115 for the drag-reducing wing, compared with C_D = 0.143 for the original wing.
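
    The iterative aerodynamic-structural coupling can be illustrated with the classic one-degree-of-freedom torsional wing section (all symbols and numbers below are hypothetical): lift computed on the current shape twists the structure, the twist changes the lift, and the exchange is repeated with under-relaxation until the twist converges. A textbook sketch of the coupling scheme, not the thesis's RANS/FE tool:

```python
def coupled_twist(q, k_theta, a=5.0, e=0.1, S=1.0, alpha0=0.05,
                  relax=0.7, tol=1e-10, max_iter=500):
    """Fixed-point aero/structure coupling for a 1-DOF torsional section:
    lift L = q*S*a*(alpha0 + theta) acts at offset e ahead of the elastic
    axis and twists a spring of stiffness k_theta.  Mimics the iterative
    CFD/FE data exchange, with under-relaxation for stability."""
    theta = 0.0
    for _ in range(max_iter):
        L = q * S * a * (alpha0 + theta)                  # "aerodynamic" solve
        theta_new = L * e / k_theta                       # "structural" solve
        theta = relax * theta_new + (1.0 - relax) * theta # under-relaxation
        if abs(theta_new - theta) < tol:
            break
    return theta
```

    The analytic fixed point is theta = q*S*a*e*alpha0 / (k_theta - q*S*a*e), so the loop can be checked directly; the scheme diverges past the divergence pressure q = k_theta/(S*a*e), just as the physical wing would.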

  9. Crashdynamics with DYNA3D: Capabilities and research directions

    NASA Technical Reports Server (NTRS)

    Whirley, Robert G.; Engelmann, Bruce E.

    1993-01-01

    The application of the explicit nonlinear finite element analysis code DYNA3D to crashworthiness problems is discussed. Emphasized in the first part of this work are the most important capabilities of an explicit code for crashworthiness analyses. The areas with significant research promise for the computational simulation of crash events are then addressed.

  10. SPAR reference manual

    NASA Technical Reports Server (NTRS)

    Whetstone, W. D.

    1976-01-01

    The functions and operating rules of the SPAR system, a group of computer programs used primarily to perform stress, buckling, and vibration analyses of linear finite element systems, are described. The following subject areas are discussed: basic information, structure definition, format system matrix processors, utility programs, static solutions, stresses, sparse matrix eigensolver, dynamic response, graphics, and substructure processors.

  11. Ice Engineering - study of Related Properties of Floating Sea-Ice Sheets and Summary of Elastic and Viscoelastic Analyses

    DTIC Science & Technology

    1977-12-01

    Ice Plate Example. To demonstrate the capability of the viscoelastic finite-element computer code (5), the structural response of an infinite ... sea-ice plate on a fluid foundation is investigated for a simulated aircraft loading condition and, using relaxation functions, is determined

  12. Structural integrity of a confinement vessel for testing nuclear fuels for space propulsion

    NASA Astrophysics Data System (ADS)

    Bergmann, V. L.

    Nuclear propulsion systems for rockets could significantly reduce the travel time to distant destinations in space. However, long before such a concept can become reality, a significant effort must be invested in analysis and ground testing to guide the development of nuclear fuels. Any testing in support of development of nuclear fuels for space propulsion must be safely contained to prevent the release of radioactive materials. This paper describes analyses performed to assess the structural integrity of a test confinement vessel. The confinement structure, a stainless steel pressure vessel with bolted flanges, was designed for operating static pressures in accordance with the ASME Boiler and Pressure Vessel Code. In addition to the static operating pressures, the confinement barrier must withstand static overpressures from off-normal conditions without releasing radioactive material. Results from axisymmetric finite element analyses are used to evaluate the response of the confinement structure under design and accident conditions. For the static design conditions, the stresses computed from the ASME code are compared with the stresses computed by the finite element method.

  13. The application of CAD, CAE & CAM in development of butterfly valve’s disc

    NASA Astrophysics Data System (ADS)

    Asiff Razif Shah Ranjit, Muhammad; Hanie Abdullah, Nazlin

    2017-06-01

    The improved design of a butterfly valve disc is based on the concept of sandwich theory. Butterfly valves are widely used in industries such as oil and gas plants. The primary failure modes for these valves are indentation of the disc, keyway and shaft failure, and cavitation damage. With emphasis on the application of CAD, a new model of the butterfly valve's disc structure was designed, and the structure was analysed using finite element analysis. Butterfly valve performance factors can be obtained by using Computational Fluid Dynamics (CFD) software to simulate the physics of fluid flow in a piping system around a butterfly valve. A comparative finite element analysis was performed to assess the performance of the structure. The second CAE application is computational fluid flow analysis: the upstream and downstream pressures were analysed to calculate the cavitation index and determine the performance at each opening position of the valve. The CAM process used a 3D printer to produce a prototype for structural evaluation; the structure was fabricated at a reduced scale from the model designed initially through the application of CAD. This study thus utilizes CAD, CAE, and CAM to improve the butterfly valve's disc components.
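
    The cavitation-index calculation from the upstream and downstream pressures fits in one line. The definition below is one common ISA-style form (other sign conventions exist in the valve literature), and the pressures used in the check are purely illustrative:

```python
def cavitation_index(p_up, p_down, p_vapor):
    """One common form of the valve cavitation index:
        sigma = (P_upstream - P_vapor) / (P_upstream - P_downstream)
    evaluated at a given opening position.  A larger sigma means a larger
    margin against cavitation; pressures must share one unit (e.g. Pa abs)."""
    return (p_up - p_vapor) / (p_up - p_down)
```

    Evaluating sigma across the valve's opening positions, as the abstract describes, just means calling this with the pressure pair measured or computed at each position.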

  14. System Level RBDO for Military Ground Vehicles using High Performance Computing

    DTIC Science & Technology

    2008-01-01

    platform. Only the analyses that required more than 24 processors were conducted on the Onyx 350 due to the limited number of processors on the ... optimization constraints varied. The queues set the number of processors and the number of finite element code licenses available to the analyses. SGI Onyx 3900: UNIX, 24 MIPS R16000 processors, 4 IR2 graphics pipes, 4 IR3 graphics pipes, 24 GB memory, 36 GB local disk space. SGI Onyx 350: UNIX, 32 MIPS

  15. Semi-analytical discontinuous Galerkin finite element method for the calculation of dispersion properties of guided waves in plates.

    PubMed

    Hebaz, Salah-Eddine; Benmeddour, Farouk; Moulin, Emmanuel; Assaad, Jamal

    2018-01-01

    The development of reliable guided-wave inspection systems depends on accurate knowledge of their dispersive properties. The semi-analytical finite element method has proven very practical for modeling wave propagation in waveguides of arbitrary cross-section. However, for computations on complex geometries to a given accuracy, it still has a major drawback: high resource consumption. Recently, the discontinuous Galerkin finite element method (DG-FEM) has been found advantageous over the standard finite element method when applied in the frequency domain as well. In this work, a high-order method for computing Lamb mode characteristics in plates is proposed. The problem is discretised using a class of DG-FEM, namely the interior penalty family of methods. Analytical validation is performed for the homogeneous isotropic case with traction-free boundary conditions. Afterwards, functionally graded material plates are analysed and a numerical example is presented. The results obtained are in good agreement with those found in the literature.
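
    One flavour of the analytical checks used in such validations: at high frequency-thickness products the fundamental Lamb modes converge to the Rayleigh surface-wave speed, whose characteristic equation can be solved by simple bisection. This is a generic sketch of that limit, not the paper's DG-FEM code:

```python
import math

def rayleigh_speed_ratio(nu, lo=0.5, hi=0.9999, tol=1e-12):
    """Solve the Rayleigh characteristic equation
        (2 - x^2)^2 = 4*sqrt(1 - kappa^2*x^2)*sqrt(1 - x^2)
    for x = c_R/c_T by bisection, where kappa^2 = c_T^2/c_L^2
    = (1 - 2*nu)/(2*(1 - nu)) for Poisson ratio nu.  The fundamental
    Lamb modes S0 and A0 approach this speed at high fd products."""
    kappa2 = (1.0 - 2.0 * nu) / (2.0 * (1.0 - nu))
    def f(x):
        return ((2.0 - x * x) ** 2
                - 4.0 * math.sqrt(1.0 - kappa2 * x * x) * math.sqrt(1.0 - x * x))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

    For nu = 0.25 the known value is c_R/c_T ≈ 0.9194, which also matches the common approximation (0.87 + 1.12*nu)/(1 + nu).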

  16. Building a Data Science capability for USGS water research and communication

    NASA Astrophysics Data System (ADS)

    Appling, A.; Read, E. K.

    2015-12-01

    Interpreting and communicating water issues in an era of exponentially increasing information requires a blend of domain expertise, computational proficiency, and communication skills. The USGS Office of Water Information has established a Data Science team to meet these needs, providing challenging careers for diverse domain scientists and innovators in the fields of information technology and data visualization. Here, we detail the experience of building a Data Science capability as a bridging element between traditional water resources analyses and modern computing tools and data management techniques. This approach includes four major components: 1) building reusable research tools, 2) documenting data-intensive research approaches in peer reviewed journals, 3) communicating complex water resources issues with interactive web visualizations, and 4) offering training programs for our peers in scientific computing. These components collectively improve the efficiency, transparency, and reproducibility of USGS data analyses and scientific workflows.

  17. Development and implementation of a low cost micro computer system for LANDSAT analysis and geographic data base applications

    NASA Technical Reports Server (NTRS)

    Faust, N.; Jordon, L.

    1981-01-01

    After the implementation of the GRID and IMGRID computer programs for multivariate spatial analysis in the early 1970s, geographic data analysis moved from large computers to minicomputers and now to microcomputers, with a radical reduction in the costs associated with planning analyses. Once NIMGRID (new IMGRID), a raster-oriented geographic information system, was implemented on the microcomputer, programs were developed to process LANDSAT data for use as one element in a geographic data base. Programs for training-field selection, supervised and unsupervised classification, and image enhancement were added. Enhancements to the color graphics capabilities of the microsystem allow display of three channels of LANDSAT data in color-infrared format. The basic microcomputer hardware needed to run NIMGRID and most LANDSAT analyses is listed, as well as the software available for LANDSAT processing.
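
    At the heart of the unsupervised classification step is a clustering of pixel band values. A minimal k-means sketch (deliberately far simpler than the actual LANDSAT software, and deterministic for illustration) looks like:

```python
def kmeans(pixels, k, iters=20):
    """Minimal k-means for unsupervised multispectral classification.
    Each pixel is a tuple of band values; returns (labels, centers).
    Deterministic: the first k pixels seed the cluster centers."""
    centers = [tuple(p) for p in pixels[:k]]
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Assign each pixel to the nearest center (squared distance).
        for i, p in enumerate(pixels):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centers[c])))
        # Move each center to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(pixels) if labels[i] == c]
            if members:
                centers[c] = tuple(sum(band) / len(members)
                                   for band in zip(*members))
    return labels, centers
```

    In an unsupervised LANDSAT workflow each "pixel" would carry several spectral band values, and the resulting cluster labels become the thematic classes.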

  18. A comparative study on different methods of automatic mesh generation of human femurs.

    PubMed

    Viceconti, M; Bellingeri, L; Cristofolini, L; Toni, A

    1998-01-01

    The aim of this study was to comparatively evaluate five methods for automated mesh generation (AMG) when used to mesh a human femur. The AMG methods considered were: mapped mesh, which provides hexahedral elements through a direct mapping of the element onto the geometry; tetra mesh, which generates tetrahedral elements from a solid model of the object geometry; voxel mesh, which builds cubic 8-node elements directly from CT images; and hexa mesh, which automatically generates hexahedral elements from a surface definition of the femur geometry. The various methods were tested against two reference models: a simplified geometric model and a proximal femur model. The first model was useful for assessing the inherent accuracy of the meshes created by the AMG methods, since an analytical solution was available for the elastic problem of the simplified geometric model. The femur model was used to test the AMG methods in a more realistic condition. The femoral geometry was derived from a reference model (the "standardized femur"), and the finite element analysis predictions were compared to experimental measurements. All methods were evaluated in terms of the human and computer effort needed to carry out the complete analysis, and in terms of accuracy. The comparison demonstrated that each tested method deserves attention and may be the best for specific situations. The mapped AMG method requires a significant human effort but is very accurate and allows tight control of the mesh structure. The tetra AMG method requires a solid model of the object to be analysed but is widely available and accurate. The hexa AMG method requires a significant computer effort but can also be used on polygonal models and is very accurate. The voxel AMG method requires a huge number of elements to reach an accuracy comparable to that of the other methods, but it does not require any pre-processing of the CT dataset to extract the geometry and in some cases may be the only viable solution.
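
    The voxel method is simple enough to sketch: every filled voxel of a segmented CT volume becomes one 8-node brick, with corner nodes shared between neighbouring voxels. A toy illustration of that idea (not the software evaluated in the study):

```python
def voxel_mesh(mask):
    """Build an 8-node hexahedral mesh from a 3D voxel mask
    (mask[i][j][k] truthy -> voxel present), the way a voxel-based AMG
    method builds elements from segmented CT data.
    Returns (nodes, elements): nodes are integer grid corners,
    elements are lists of 8 node indices."""
    node_id = {}
    nodes, elements = [], []

    def nid(i, j, k):
        # Shared corner nodes are numbered once and reused.
        if (i, j, k) not in node_id:
            node_id[(i, j, k)] = len(nodes)
            nodes.append((i, j, k))
        return node_id[(i, j, k)]

    for i, plane in enumerate(mask):
        for j, row in enumerate(plane):
            for k, filled in enumerate(row):
                if filled:
                    elements.append([nid(i,   j,   k),   nid(i+1, j,   k),
                                     nid(i+1, j+1, k),   nid(i,   j+1, k),
                                     nid(i,   j,   k+1), nid(i+1, j,   k+1),
                                     nid(i+1, j+1, k+1), nid(i,   j+1, k+1)])
    return nodes, elements
```

    Because nodes are shared across voxel faces, two adjacent voxels yield 12 nodes rather than 16, which is also why voxel meshes grow so large: accuracy comes only from refining the uniform grid.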

  19. Trace element content of gossans at four mines in the West Shasta massive sulfide district.

    USGS Publications Warehouse

    Sanzolone, R.F.; Domenico, J.A.

    1985-01-01

    Paired analyses of the spongy whole-rock gossan and its botryoidal crust ("chipped rock rind") show little difference, whereas duplicate samples of each at individual sites show such extreme differences as to preclude the use of the data in areal mapping. Gossans from disseminated sulphides have lower and less variable trace-element contents than gossans from massive sulphides, due in part to dilution by rock silicates. Computer reduction of the data by a regionalizing algorithm enables determination of pattern differences among the four mines. -G.J.N.

  20. Transient Vibration Prediction for Rotors on Ball Bearings Using Load-dependent Non-linear Bearing Stiffness

    NASA Technical Reports Server (NTRS)

    Fleming, David P.; Poplawski, J. V.

    2002-01-01

    Rolling-element bearing forces vary nonlinearly with bearing deflection. An accurate rotordynamic transient analysis therefore requires bearing forces to be determined at each step of the transient solution. Analyses have been carried out to show the effect of accurate transient bearing forces (accounting for nonlinear speed- and load-dependent bearing stiffness) as compared to the conventional use of an average rolling-element bearing stiffness. Bearing forces were calculated by COBRA-AHS (Computer Optimized Ball and Roller Bearing Analysis - Advanced High Speed) and supplied to the rotordynamics code ARDS (Analysis of Rotor Dynamic Systems) for accurate simulation of rotor transient behavior. COBRA-AHS is a fast-running five-degree-of-freedom computer code able to calculate high-speed rolling-element bearing load-displacement data for radial and angular-contact ball bearings and also for cylindrical and tapered roller bearings. Results show that use of nonlinear bearing characteristics is essential for accurate prediction of rotordynamic behavior.
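
    The nonlinearity in question is essentially the Hertzian point-contact law, under which force grows with deflection to the 3/2 power, so the tangent stiffness itself grows with load. A schematic single-contact sketch with a hypothetical contact constant K (nothing like COBRA-AHS's multi-degree-of-freedom model):

```python
def bearing_force(delta, K=1.0e7):
    """Hertzian point-contact load-deflection for a ball contact:
    F = K * delta**1.5 for delta >= 0 (no tension).  K is a hypothetical
    contact constant in consistent units."""
    return K * max(delta, 0.0) ** 1.5

def tangent_stiffness(delta, K=1.0e7):
    """dF/d(delta) = 1.5 * K * sqrt(delta): the stiffness rises with
    deflection, which is why replacing it with a single averaged value
    distorts transient rotor response."""
    return 1.5 * K * max(delta, 0.0) ** 0.5
```

    Quadrupling the deflection doubles the tangent stiffness, so a rotor orbiting through varying load sees a stiffness that changes continuously within each revolution.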

  1. Numerical Simulation Of Cutting Of Gear Teeth

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Huston, Ronald L.; Mavriplis, Dimitrios

    1994-01-01

    Shapes of gear teeth produced by gear cutters of specified shape simulated computationally, according to approach based on principles of differential geometry. Results of computer simulation displayed as computer graphics and/or used in analyses of design, manufacturing, and performance of gears. Applicable to both standard and non-standard gear-tooth forms. Accelerates and facilitates analysis of alternative designs of gears and cutters. Simulation extended to study generation of surfaces other than gears. Applied to cams, bearings, and surfaces of arbitrary rolling elements as well as to gears. Possible to develop analogous procedures for simulating manufacture of skin surfaces like automobile fenders, airfoils, and ship hulls.

  2. Development and assessment of a chemistry-based computer video game as a learning tool

    NASA Astrophysics Data System (ADS)

    Martinez-Hernandez, Kermin Joel

    The chemistry-based computer video game is a multidisciplinary collaboration between chemistry and computer graphics and technology fields developed to explore the use of video games as a possible learning tool. This innovative approach aims to integrate elements of commercial video game and authentic chemistry context environments into a learning experience through gameplay. The project consists of three areas: development, assessment, and implementation. However, the foci of this study were the development and assessment of the computer video game including possible learning outcomes and game design elements. A chemistry-based game using a mixed genre of a single player first-person game embedded with action-adventure and puzzle components was developed to determine if students' level of understanding of chemistry concepts change after gameplay intervention. Three phases have been completed to assess students' understanding of chemistry concepts prior and after gameplay intervention. Two main assessment instruments (pre/post open-ended content survey and individual semi-structured interviews) were used to assess student understanding of concepts. In addition, game design elements were evaluated for future development phases. Preliminary analyses of the interview data suggest that students were able to understand most of the chemistry challenges presented in the game and the game served as a review for previously learned concepts as well as a way to apply such previous knowledge. To guarantee a better understanding of the chemistry concepts, additions such as debriefing and feedback about the content presented in the game seem to be needed. The use of visuals in the game to represent chemical processes, game genre, and game idea appear to be the game design elements that students like the most about the current computer video game.

  3. An efficient dynamic load balancing algorithm

    NASA Astrophysics Data System (ADS)

    Lagaros, Nikos D.

    2014-01-01

    In engineering problems, randomness and uncertainty are inherent. Robust design procedures, formulated in the framework of multi-objective optimization, have been proposed to take these sources of randomness and uncertainty into account. Such design procedures require orders of magnitude more computational effort than conventional analysis or optimum design processes, since a very large number of finite element analyses must be performed. It is therefore imperative to exploit the capabilities of available computing resources to deal with this kind of problem. In particular, parallel computing can be implemented at the level of metaheuristic optimization, by exploiting the inherent parallelism of the nondominated sorting evolution strategies method, as well as at the level of the repeated structural analyses required for assessing the behavioural constraints and calculating the objective functions. In this study, an efficient dynamic load balancing algorithm for optimum exploitation of the available computing resources is proposed and, without loss of generality, applied to computing the desired Pareto front. In such problems, computing the complete Pareto front with feasible designs only constitutes a very challenging task. The proposed algorithm achieves nearly linear speedup, with parallel efficiency approaching 100% relative to the sequential procedure.
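
    The essence of dynamic load balancing for repeated analyses of unequal cost can be sketched with a work-queue: each task goes to whichever worker frees up first, in contrast with a static up-front split. A generic illustration (not the author's algorithm), comparing the two makespans:

```python
import heapq

def dynamic_schedule(task_costs, n_workers):
    """Greedy dynamic load balancing: each task is handed to the worker
    that becomes free first (a work-queue), as one would dispatch the
    repeated finite element analyses.  Returns the makespan."""
    workers = [0.0] * n_workers          # current finish time per worker
    heapq.heapify(workers)
    for cost in task_costs:
        t = heapq.heappop(workers)       # earliest-free worker
        heapq.heappush(workers, t + cost)
    return max(workers)

def static_schedule(task_costs, n_workers):
    """Static round-robin assignment, for comparison."""
    load = [0.0] * n_workers
    for i, cost in enumerate(task_costs):
        load[i % n_workers] += cost
    return max(load)
```

    With one expensive analysis among many cheap ones, the dynamic queue keeps all workers busy while the static split idles some of them, which is exactly the imbalance a nonuniform set of structural analyses produces.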

  4. Simulating Fatigue Crack Growth in Spiral Bevel Pinion

    NASA Technical Reports Server (NTRS)

    Ural, Ani; Wawrzynek, Paul A.; Ingraffea, Anthony R.

    2003-01-01

    This project investigates computational modeling of fatigue crack growth in spiral bevel gears. The current work continues previous efforts to use the Boundary Element Method (BEM) to simulate tooth-bending fatigue failure in spiral bevel gears. This report summarizes new results predicting crack trajectory and fatigue life for a spiral bevel pinion using the Finite Element Method (FEM). Predicting crack trajectories is important in determining the failure mode of a gear: cracks propagating through the rim may result in catastrophic failure, whereas the gear may remain intact if one tooth fails, which may allow early detection of failure. Being able to predict crack trajectories is therefore insightful for the designer. However, predicting the growth of three-dimensional arbitrary cracks is complicated by the difficulty of creating three-dimensional models, the computing power required, and the absence of closed-form solutions of the problem. Another focus of this project was performing three-dimensional contact analysis of a spiral bevel gear set incorporating cracks. These analyses were significant in determining the influence of the change in tooth flexibility due to crack growth on the magnitude and location of the contact loads. This is an important concern, since a change in contact loads might lead to differences in stress intensity factors (SIFs) and therefore alter the crack trajectory. The contact analyses performed in this report showed the expected trend of decreasing tooth loads carried by the cracked tooth with increasing crack length. The decrease in tooth loads leads to differences between SIFs extracted from finite element contact analysis and those from finite element analysis with Hertz contact loads. This effect became more pronounced as the crack grew.
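
    Fatigue-life predictions of this kind commonly integrate the Paris crack-growth law, da/dN = C*(dK)^m, over the crack length. A generic numerical life integration with purely illustrative constants (not the gear analysis itself):

```python
import math

def paris_life(a0, af, C, m, dsigma, Y=1.0, steps=10000):
    """Cycles to grow a crack from length a0 to af under the Paris law
    da/dN = C*(dK)**m, with stress-intensity range
    dK = Y*dsigma*sqrt(pi*a), integrated by the midpoint rule.
    All quantities must be in consistent units."""
    N = 0.0
    da = (af - a0) / steps
    for i in range(steps):
        a = a0 + (i + 0.5) * da           # midpoint of this crack increment
        dK = Y * dsigma * math.sqrt(math.pi * a)
        N += da / (C * dK ** m)           # cycles consumed by this increment
    return N
```

    For m = 4 the integral has the closed form (1/a0 - 1/af)/(C*(Y*dsigma)**4*pi**2), which makes a convenient check on the numerical integration.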

  5. Stress analyses of B-52 pylon hooks

    NASA Technical Reports Server (NTRS)

    Ko, W. L.; Schuster, L. S.

    1985-01-01

    The NASTRAN finite element computer program was used in the two-dimensional stress analysis of four B-52 carrier aircraft pylon hooks: (1) the old rear hook (which failed), (2) the new rear hook (improved geometry), (3) the new DAST rear hook (derated geometry), and (4) the front hook. NASTRAN model meshes were generated with the aid of the PATRAN-G computer program. Brittle limit loads for all four hooks were established. The critical stress level calculated from NASTRAN agrees reasonably well with the values predicted from fracture mechanics for the failed old rear hook.

  6. Equilibrium paths analysis of materials with rheological properties by using the chaos theory

    NASA Astrophysics Data System (ADS)

    Bednarek, Paweł; Rządkowski, Jan

    2018-01-01

    Numerical equilibrium path analysis of a material with random rheological properties using standard procedures and specialist computer programs was not successful. A proper solution for the analysed heuristic model of the material was obtained using elements of chaos theory and neural networks. The paper discusses the mathematical underpinnings of the computer programs used and elaborates the properties of the attractor employed in the analysis. Results of the numerical analyses conducted with these procedures are presented in both numerical and graphical form.

  7. Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers

    NASA Technical Reports Server (NTRS)

    Guru Prasad, K.; Kane, J. H.

    1992-01-01

    The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.
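
    The reuse strategy can be sketched with a Jacobi-preconditioned conjugate gradient solver that warm-starts from the previous solution and keeps the previously built preconditioner. This is a schematic on a symmetric positive definite test system; the boundary element systems in the paper are generally unsymmetric and would use a solver such as GMRES:

```python
import numpy as np

def pcg(A, b, x0, M_inv, tol=1e-10, maxit=500):
    """Jacobi-preconditioned conjugate gradient; returns (x, iterations)."""
    x = x0.copy()
    r = b - A @ x
    z = M_inv * r                  # apply diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        if np.linalg.norm(r) < tol:
            return x, k
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)        # SPD stand-in for a "stiffness" matrix
b = rng.standard_normal(n)
M_inv = 1.0 / np.diag(A)           # preconditioner, built once

x1, it_cold = pcg(A, b, np.zeros(n), M_inv)

# Perturbed shape -> slightly modified system; reuse x1 and M_inv
A2 = A + 0.01 * np.eye(n)
x2, it_warm = pcg(A2, b, x1, M_inv)
```

Because the perturbed system is close to the original, the warm start begins with a much smaller residual and the old preconditioner remains effective.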

  8. Modeling thermoelastic distortion of optics using elastodynamic reciprocity

    NASA Astrophysics Data System (ADS)

    King, Eleanor; Levin, Yuri; Ottaway, David; Veitch, Peter

    2015-07-01

    Thermoelastic distortion resulting from optical absorption by transmissive and reflective optics can cause unacceptable changes in optical systems that employ high-power beams. In advanced-generation laser-interferometric gravitational wave detectors, for example, optical absorption is expected to result in wavefront distortions that would compromise the sensitivity of the detector, thus necessitating the use of adaptive thermal compensation. Unfortunately, these systems have long thermal time constants, and so predictive feed-forward control systems could be required, but the finite-element analysis is computationally expensive. We describe here the use of the Betti-Maxwell elastodynamic reciprocity theorem to calculate the response of linear elastic bodies (optics) to heating that has arbitrary spatial distribution. We demonstrate, using a simple example, that it can yield accurate results in computational times that are significantly less than those required for finite-element analyses.
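
    Schematically, the reciprocity relation being exploited is the classical Betti form relating two load states of the same elastic body (the paper's elastodynamic version adds inertial terms):

```latex
\int_{V} \mathbf{f}^{(1)} \cdot \mathbf{u}^{(2)} \, dV
+ \int_{\partial V} \mathbf{t}^{(1)} \cdot \mathbf{u}^{(2)} \, dS
= \int_{V} \mathbf{f}^{(2)} \cdot \mathbf{u}^{(1)} \, dV
+ \int_{\partial V} \mathbf{t}^{(2)} \cdot \mathbf{u}^{(1)} \, dS
```

    Taking state (1) as the effective load arising from the thermal strain field and state (2) as a precomputed auxiliary probe load, the surface distortion of interest follows from a volume integral over the heating distribution rather than a full finite-element solve per distribution.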

  9. BUCLASP 2: A computer program for instability analysis of biaxially loaded composite stiffened panels and other structures

    NASA Technical Reports Server (NTRS)

    Tripp, L. L.; Tamekuni, M.; Viswanathan, A. V.

    1973-01-01

    The use of the computer program BUCLASP2 is described. The program is intended for linear instability analyses of structures such as unidirectionally stiffened panels. Any structure that has a constant cross section in one direction and that may be idealized as an assemblage of beam elements and laminated flat and curved plate strip elements can be analyzed. The loadings considered are combinations of axial compressive loads and in-plane transverse loads. The two parallel ends of the panel must be simply supported, and arbitrary elastic boundary conditions may be imposed along either or both external longitudinal sides. This manual consists of instructions for use of the program with sample problems, including input and output information. The theoretical basis of BUCLASP2 and correlations of calculated results with known solutions are presented.

  10. Application of Interface Technology in Progressive Failure Analysis of Composite Panels

    NASA Technical Reports Server (NTRS)

    Sleight, D. W.; Lotts, C. G.

    2002-01-01

    A progressive failure analysis capability using interface technology is presented. The capability has been implemented in the COMET-AR finite element analysis code developed at the NASA Langley Research Center and is demonstrated on composite panels. The composite panels are analyzed for damage initiation and propagation from initial loading to final failure using a progressive failure analysis capability that includes both geometric and material nonlinearities. Progressive failure analyses are performed on conventional models and interface technology models of the composite panels. The analytical results and the computational effort of the analyses are compared for the two model types. The results predicted with the interface technology models correlate well with those from the conventional models, while requiring significantly less computational effort.

  11. Status of VICTORIA: NRC peer review and recent code applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bixler, N.E.; Schaperow, J.H.

    1997-12-01

    VICTORIA is a mechanistic computer code designed to analyze fission product behavior within a nuclear reactor coolant system (RCS) during a severe accident. It provides detailed predictions of the release of radioactive and nonradioactive materials from the reactor core and the transport and deposition of these materials within the RCS. A summary of the results and recommendations of an independent peer review of VICTORIA by the US Nuclear Regulatory Commission (NRC) is presented, along with recent applications of the code. The latter include analyses of a temperature-induced steam generator tube rupture sequence and post-test analyses of the Phebus FPT-1 test. The next planned Phebus test, FPT-4, will focus on fission product releases from a rubble bed, especially those of the less-volatile elements, and on the speciation of the released elements. Pretest analyses using VICTORIA to estimate the magnitude and timing of releases are presented. The predicted release of uranium is a matter of particular importance because of concern about filter plugging during the test.

  12. Thermal structure analyses for CSM testbed (COMET)

    NASA Technical Reports Server (NTRS)

    Xue, David Y.; Mei, Chuh

    1994-01-01

    This document is the final report for the project entitled 'Thermal Structure Analyses for CSM Testbed (COMET),' for the period May 16, 1992 - August 15, 1994. The project focused on investigating and developing the finite element analysis capability of the computational structural mechanics (CSM) testbed (COMET) software system in the field of thermal structural responses. The stages of this project consisted of investigating present capabilities, developing new functions, demonstrating analyses, and pursuing research topics. The appendices of this report list the detailed documents of major accomplishments and demonstration runstreams for future reference.

  13. Development and verification of global/local analysis techniques for laminated composites

    NASA Technical Reports Server (NTRS)

    Thompson, Danniella Muheim; Griffin, O. Hayden, Jr.

    1991-01-01

    A two-dimensional to three-dimensional global/local finite element approach was developed, verified, and applied to a laminated composite plate of finite width and length containing a central circular hole. The resulting stress fields for axial compression loads were examined for several symmetric stacking sequences and hole sizes. Verification was based on comparison of the displacements and the stress fields with accepted trends from previous free edge investigations and with a complete three-dimensional finite element solution of the plate. The laminates in the compression study included symmetric cross-ply, angle-ply and quasi-isotropic stacking sequences. The entire plate was selected as the global model and analyzed with two-dimensional finite elements. Displacements along a region identified as the global/local interface were applied in a kinematically consistent fashion to independent three-dimensional local models. Local areas of interest in the plate included a portion of the straight free edge near the hole, and the immediate area around the hole. Interlaminar stress results obtained from the global/local analyses compare well with previously reported trends, and some new conclusions about interlaminar stress fields in plates with different laminate orientations and hole sizes are presented for compressive loading. The effectiveness of the global/local procedure in reducing the computational effort required to solve these problems is clearly demonstrated by comparing the computer time required to formulate and solve the linear, static systems of equations for the global and local analyses with that required for a complete three-dimensional formulation of a cross-ply laminate. Specific processors used during the analyses are described in general terms. The application of this global/local technique is not limited to a particular software system; it was developed and is described in as general a manner as possible.
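
    The displacement-transfer step at the global/local interface can be sketched as interpolation of the global-model solution onto the finer local-model boundary nodes. The node data below are made up for illustration; this is not the specific COMET processor used in the study:

```python
import numpy as np

# Global-model interface: arc-length coordinates and displacements (u, v)
s_global = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
u_global = np.array([[0.00, 0.00], [0.10, 0.02], [0.18, 0.05],
                     [0.24, 0.09], [0.28, 0.14]])

# Local-model boundary nodes fall at finer arc-length positions
s_local = np.linspace(0.0, 2.0, 9)

# Kinematically consistent transfer: interpolate each displacement component
u_local = np.column_stack([np.interp(s_local, s_global, u_global[:, i])
                           for i in range(2)])
```

The interpolated values reproduce the global solution exactly at shared nodes, so the local model is driven by a displacement field consistent with the global analysis.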

  14. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and shown to be very effective approximations of structural response. The exponential fit has been compared to linear, reciprocal, and quadratic fit methods on four selected test problems in structural analysis. Such approximations are attractive in structural optimization because they reduce the number of exact analyses, each of which involves a computationally expensive finite element analysis.
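
    A generic one-point exponential (power-law) approximation can be built from the function value and gradient at a single design point; the exponents and the demo response below are illustrative, not taken from the paper:

```python
import numpy as np

def exp_approx(y0, grad0, x0):
    """One-point exponential approximation y(x) ~= y0 * prod (x_i/x0_i)^p_i,
    with exponents p_i = (dy/dx_i) * x0_i / y0 chosen so the approximation
    matches the value and gradient at the expansion point x0."""
    p = grad0 * x0 / y0
    return lambda x: y0 * np.prod((x / x0) ** p)

# Demo: a stress-like response y = 100 / (A1 * A2) of two member areas
y = lambda x: 100.0 / (x[0] * x[1])
x0 = np.array([2.0, 4.0])
g0 = np.array([-100.0 / (x0[0] ** 2 * x0[1]),   # dy/dA1 at x0
               -100.0 / (x0[0] * x0[1] ** 2)])  # dy/dA2 at x0
yt = exp_approx(y(x0), g0, x0)
```

For this reciprocal-type response the exponents evaluate to -1 and the approximation is exact everywhere, which illustrates why exponential forms can outperform linear fits for stress-like responses.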

  15. The physical vulnerability of elements at risk: a methodology based on fluid and classical mechanics

    NASA Astrophysics Data System (ADS)

    Mazzorana, B.; Fuchs, S.; Levaggi, L.

    2012-04-01

    The impacts of the flood events that occurred in autumn 2011 in the Italian regions of Liguria and Tuscany revived the engagement of public decision makers to enhance flood control and land use planning in synergy. In this context, the design of efficient flood risk mitigation strategies and their subsequent implementation critically relies on a careful vulnerability analysis of both the immobile and mobile elements at risk potentially exposed to flood hazards. Based on notions from fluid and classical mechanics, we developed computation schemes enabling a dynamic vulnerability and risk analysis for a broad typological variety of elements at risk. The methodological skeleton consists of (1) hydrodynamic computation of the time-varying flood intensities, resulting for each element at risk in a succession of loading configurations; (2) modelling the mechanical response of the impacted elements through static, elasto-static and dynamic analyses; (3) characterising the mechanical response through proper structural damage variables; and (4) economic valuation of the expected losses as a function of the quantified damage variables. From a computational perspective, we coupled the description of the hydrodynamic flow behaviour with the induced structural modifications of the exposed elements at risk. Valuation methods suitable to support a correct mapping from the value domains of the physical damage variables to the economic loss values are discussed. In this way we aim to complement, from a methodological perspective, the existing, mainly empirical, vulnerability and risk assessment approaches and to refine the conceptual framework of cost-benefit analysis. Moreover, we aim to support the design of effective flood risk mitigation strategies by diminishing the main criticalities within the systems prone to flood risk.
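
    Step (1) of the skeleton can be illustrated with elementary fluid mechanics: a hydrostatic plus drag loading on the face of an exposed element. The formulas are textbook expressions; the drag coefficient and geometry are assumed values, not from the paper:

```python
RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def flood_loads(h, v, width, cd=1.5):
    """Return (hydrostatic, drag) force on one element face [N].

    h: flow depth [m]; v: flow velocity [m/s]; width: face width [m];
    cd: assumed drag coefficient for a bluff face.
    """
    hydrostatic = 0.5 * RHO * G * h ** 2 * width   # triangular pressure prism
    drag = 0.5 * RHO * cd * (h * width) * v ** 2   # dynamic pressure on wetted area
    return hydrostatic, drag

hs, dr = flood_loads(h=1.2, v=2.5, width=4.0)
```

Evaluating such loads at each time step of the hydrodynamic simulation yields the succession of loading configurations fed into the structural analyses.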

  16. Nemesis I: Parallel Enhancements to ExodusII

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hennigan, Gary L.; John, Matthew S.; Shadid, John N.

    2006-03-28

    NEMESIS I is an enhancement to the EXODUS II finite element database model used to store and retrieve data for unstructured parallel finite element analyses. NEMESIS I adds data structures which facilitate the partitioning of a scalar (standard serial) EXODUS II file onto parallel disk systems found on many parallel computers. Since the NEMESIS I application programming interface (API) appends information to an existing EXODUS II file, existing software that reads EXODUS II files can be used on files which contain NEMESIS I information. The NEMESIS I information is written and read via C or C++ callable functions which comprise the NEMESIS I API.

  17. Multiscale finite element modeling of sheet molding compound (SMC) composite structure based on stochastic mesostructure reconstruction

    DOE PAGES

    Chen, Zhangxing; Huang, Tianyu; Shao, Yimin; ...

    2018-03-15

    Predicting the mechanical behavior of the chopped carbon fiber Sheet Molding Compound (SMC) due to spatial variations in local material properties is critical for the structural performance analysis but is computationally challenging. Such spatial variations are induced by the material flow in the compression molding process. In this work, a new multiscale SMC modeling framework and the associated computational techniques are developed to provide accurate and efficient predictions of SMC mechanical performance. The proposed multiscale modeling framework contains three modules. First, a stochastic algorithm for 3D chip-packing reconstruction is developed to efficiently generate the SMC mesoscale Representative Volume Element (RVE) model for Finite Element Analysis (FEA). A new fiber orientation tensor recovery function is embedded in the reconstruction algorithm to match reconstructions with the target characteristics of fiber orientation distribution. Second, a metamodeling module is established to improve the computational efficiency by creating the surrogates of mesoscale analyses. Third, the macroscale behaviors are predicted by an efficient multiscale model, in which the spatially varying material properties are obtained based on the local fiber orientation tensors. Our approach is further validated through experiments at both meso- and macro-scales, such as tensile tests assisted by Digital Image Correlation (DIC) and mesostructure imaging.

  19. Challenges in Integrating Nondestructive Evaluation and Finite Element Methods for Realistic Structural Analysis

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Baaklini, George Y.; Zagidulin, Dmitri; Rauser, Richard W.

    2000-01-01

    Capabilities and expertise related to the development of links between nondestructive evaluation (NDE) and finite element analysis (FEA) at Glenn Research Center (GRC) are demonstrated. Current tools to analyze data produced by computed tomography (CT) scans are exercised to help assess the damage state in high-temperature structural composite materials. A utility translator was written to convert a Velocity (an image-processing software) STL data file to a suitable CAD-FEA type file. Finite element analyses are carried out with MARC, a commercial nonlinear finite element code, and the analytical results are discussed. Modeling was established by building an MSC/Patran (a pre- and post-processing finite element package) generated model and comparing it to a model generated by Velocity in conjunction with MSC/Patran Graphics. Modeling issues and results are discussed in this paper. The entire process that ties the data extracted via NDE to the finite element modeling and analysis is fully described.

  20. An Overview of the Semi-Span Super-Sonic Transport (S4T) Wind-Tunnel Model Program

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Perry, Boyd, III; Florance, James R.; Sanetrik, Mark D.; Wieseman, Carol D.; Stevens, William L.; Funk, Christie J.; Christhilf, David M.; Coulson, David A.

    2012-01-01

    A summary of computational and experimental aeroelastic (AE) and aeroservoelastic (ASE) results for the Semi-Span Super-Sonic Transport (S4T) wind-tunnel model is presented. A broad range of analyses and multiple AE and ASE wind-tunnel tests of the S4T wind-tunnel model have been performed in support of the ASE element in the Supersonics Program, part of the NASA Fundamental Aeronautics Program. This paper is intended to be an overview of multiple papers that comprise a special S4T technical session. Along those lines, a brief description of the design and hardware of the S4T wind-tunnel model will be presented. Computational results presented include linear and nonlinear aeroelastic analyses, and rapid aeroelastic analyses using CFD-based reduced-order models (ROMs). A brief survey of some of the experimental results from two open-loop and two closed-loop wind-tunnel tests performed at the NASA Langley Transonic Dynamics Tunnel (TDT) will be presented as well.

  1. Prediction of elemental creep. [steady state and cyclic data from regression analysis

    NASA Technical Reports Server (NTRS)

    Davis, J. W.; Rummler, D. R.

    1975-01-01

    Cyclic and steady-state creep tests were performed to provide data used to develop predictive equations. These equations, describing creep as a function of stress, temperature, and time, were developed using a least-squares regression analysis computer program for both the steady-state and cyclic data sets. Comparison of the data from the two types of tests revealed no significant difference between the cyclic and steady-state creep strains for the L-605 sheet under the experimental conditions investigated (for the same total time at load). Attempts to develop a single linear equation describing the combined steady-state and cyclic creep data resulted in standard errors of estimate higher than those obtained for the individual data sets. A proposed approach to predict elemental creep in metals uses the cyclic creep equation and a computer program which applies strain- and time-hardening theories of creep accumulation.
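
    The regression step can be sketched as a least-squares fit of a linearized creep law. The data below are synthetic and Norton-type, for illustration only, not the L-605 measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic creep data: eps = A * sigma^n * t^m * exp(-Q/T)  (illustrative law)
A, n, m, Q = 1e-7, 4.0, 0.4, 5000.0
sigma = rng.uniform(50, 200, 80)   # stress, MPa
t = rng.uniform(1, 1000, 80)       # time at load, hours
T = rng.uniform(900, 1200, 80)     # temperature, K
eps = A * sigma**n * t**m * np.exp(-Q / T)

# Least-squares regression on the linearized model:
#   ln eps = ln A + n ln sigma + m ln t - Q / T
X = np.column_stack([np.ones_like(t), np.log(sigma), np.log(t), 1.0 / T])
coef, *_ = np.linalg.lstsq(X, np.log(eps), rcond=None)
```

On noise-free data the fit recovers the generating constants; with real test data the residual gives the standard error of estimate used to compare candidate equations.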

  2. The Use of Finite Element Analyses to Design and Fabricate Three-Dimensional Scaffolds for Skeletal Tissue Engineering

    PubMed Central

    Hendrikson, Wim. J.; van Blitterswijk, Clemens. A.; Rouwkema, Jeroen; Moroni, Lorenzo

    2017-01-01

    Computational modeling has been increasingly applied to the field of tissue engineering and regenerative medicine. Whereas early computational models were used to better understand the biomechanical requirements of the targeted tissues to be regenerated, more and more models are now formulated to combine such biomechanical requirements with cell fate predictions to aid in the design of functional three-dimensional scaffolds. In this review, we highlight how computational modeling has been used to understand the mechanisms behind tissue formation and how it can be used for more rational and biomimetic scaffold-based tissue regeneration strategies. With a particular focus on musculoskeletal tissues, we discuss recent models attempting to predict cell activity in relation to specific mechanical and physical stimuli that can be applied through porous three-dimensional scaffolds. In doing so, we review the most common scaffold fabrication methods, with a critical view on those technologies that offer better properties to be more easily combined with computational modeling. Finally, we discuss how modeling, and in particular finite element analysis, can be used to optimize the design of scaffolds for skeletal tissue regeneration. PMID:28567371

  3. A three-dimensional finite-element thermal/mechanical analytical technique for high-performance traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard

    1991-01-01

    Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. This paper describes a three-dimensional, finite element computer model and analytical technique used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. The analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during an earlier cold compression, or 'coining', fabrication technique that shows great potential for future TWT development efforts. For this analysis, a three-dimensional finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. The analysis was conducted using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and to geometric nonlinearities presented by the component assembly configuration. The computer model was developed using the high-efficiency, K-band TWT design but is general enough to permit similar analyses on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results with test data, are presented.

  5. MANTLE: A finite element program for the thermal-mechanical analysis of mantle convection. A user's manual with examples

    NASA Technical Reports Server (NTRS)

    Thompson, E.

    1979-01-01

    A finite element computer code for the analysis of mantle convection is described. The coupled equations for creeping viscous flow and heat transfer can be solved for either a transient analysis or steady-state analysis. For transient analyses, either a control volume or a control mass approach can be used. Non-Newtonian fluids with viscosities which have thermal and spatial dependencies can be easily incorporated. All material parameters may be written as function statements by the user or simply specified as constants. A wide range of boundary conditions, both for the thermal analysis and the viscous flow analysis, can be specified. For steady-state analyses, elastic strain rates can be included. Although this manual was specifically written for users interested in mantle convection, the code is equally well suited for analysis in a number of other areas including metal forming, glacial flows, and creep of rock and soil.
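
    The kind of thermally and strain-rate-dependent viscosity such user function statements express can be sketched as an Arrhenius thermal factor times a power-law strain-rate factor. The constants below are illustrative placeholders, not values from the manual:

```python
import math

def viscosity(T, strain_rate, A=1e21, E=3e5, R=8.314, T_ref=1600.0, n=3.0):
    """Illustrative non-Newtonian viscosity [Pa s].

    T: temperature [K]; strain_rate: second invariant of strain rate [1/s];
    A: reference viscosity; E: activation energy [J/mol]; n: power-law index.
    """
    thermal = math.exp((E / R) * (1.0 / T - 1.0 / T_ref))   # Arrhenius factor
    return A * thermal * strain_rate ** ((1.0 - n) / n)     # power-law factor
```

Hotter material flows more easily, so the returned viscosity decreases with temperature, the behavior the code's thermal dependence is meant to capture.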

  6. Sierra/Solid Mechanics 4.48 User's Guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merewether, Mark Thomas; Crane, Nathan K; de Frias, Gabriel Jose

    Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic, and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments, enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.

  7. Stress and Fracture Analyses Under Elastic-plastic and Creep Conditions: Some Basic Developments and Computational Approaches

    NASA Technical Reports Server (NTRS)

    Reed, K. W.; Stonesifer, R. B.; Atluri, S. N.

    1983-01-01

    A new hybrid-stress finite element algorithm, suitable for analyses of large quasi-static deformations of inelastic solids, is presented. The principal variables in the formulation are the nominal stress rate and spin. As such, a consistent reformulation of the constitutive equation is necessary and is discussed. The finite element equations give rise to an initial value problem. Time integration has been accomplished by Euler and Runge-Kutta schemes, and the superior accuracy of the higher-order schemes is noted. In the course of integrating stress in time, it has been demonstrated that classical schemes such as Euler's and Runge-Kutta's may lead to strong frame dependence. As a remedy, modified integration schemes are proposed, and their potential for suppressing frame dependence of the numerically integrated stress is demonstrated. The development of valid creep fracture criteria is also addressed.

  8. Coupled Thermo-Mechanical Analyses of Dynamically Loaded Rubber Cylinders

    NASA Technical Reports Server (NTRS)

    Johnson, Arthur R.; Chen, Tzi-Kang

    2000-01-01

    A procedure that models coupled thermo-mechanical deformations of viscoelastic rubber cylinders by employing the ABAQUS finite element code is described. Computational simulations of hysteretic heating are presented for several tall and short rubber cylinders both with and without a steel disk at their centers. The cylinders are compressed axially and are then cyclically loaded about the compressed state. The non-uniform hysteretic heating of the rubber cylinders containing a steel disk is presented. The analyses performed suggest that the coupling procedure should be considered for further development as a design tool for rubber degradation studies.

  9. Analysis of physical-chemical processes governing SSME internal fluid flows

    NASA Technical Reports Server (NTRS)

    Singhal, A. K.; Owens, S. F.; Mukerjee, T.; Keeton, L. W.; Prakash, C.; Przekwas, A. J.

    1984-01-01

    The efforts to adapt CHAM's computational fluid dynamics code, PHOENICS, to the analysis of flow within the high-pressure fuel turbopump (HPFTP) aft-platform seal cavity of the SSME are summarized. In particular, the special-purpose PHOENICS satellite and ground station specifically formulated for this application are listed and described, and the preliminary results of the first-phase, two-dimensional analyses are presented and discussed. Planned three-dimensional analyses are also briefly outlined. To further understand the mixing and combustion processes in the SSME fuel-side preburners, a single oxygen-hydrogen jet element was investigated.

  10. A Semi-Analytical Method for Determining the Energy Release Rate of Cracks in Adhesively-Bonded Single-Lap Composite Joints

    NASA Technical Reports Server (NTRS)

    Yang, Charles; Sun, Wenjun; Tomblin, John S.; Smeltzer, Stanley S., III

    2007-01-01

    A semi-analytical method for determining the strain energy release rate due to a prescribed interface crack in an adhesively-bonded, single-lap composite joint subjected to axial tension is presented. The field equations in terms of displacements within the joint are formulated by using first-order shear deformable, laminated plate theory together with kinematic relations and force equilibrium conditions. The stress distributions for the adherends and adhesive are determined after the appropriate boundary and loading conditions are applied and the equations for the field displacements are solved. Based on the adhesive stress distributions, the forces at the crack tip are obtained and the strain energy release rate of the crack is determined by using the virtual crack closure technique (VCCT). Additionally, the test specimen geometries from both the ASTM D3165 and D1002 test standards are utilized during the derivation of the field equations in order to correlate analytical models with future test results. The system of second-order differential field equations is solved to provide the adherend and adhesive stress response using the symbolic computation tool Maple 9. Finite element analyses using the J-integral as well as VCCT were performed to verify the developed analytical model. The finite element analyses were conducted using the commercial finite element analysis software ABAQUS. The results determined using the analytical method correlated well with the results from the finite element analyses.
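
    The VCCT step can be sketched with the standard two-dimensional closure formulas; the nodal forces and displacements below are made-up numbers, not results from the paper:

```python
def vcct_energy_release(F_shear, F_peel, du, dw, da, b):
    """Virtual crack closure technique for a 2-D interface crack.

    F_shear, F_peel: crack-tip nodal forces [N]
    du, dw: relative sliding/opening displacements one node behind the tip [m]
    da: crack-tip element length [m]; b: specimen width [m]
    Returns (G_I, G_II, G_total) in J/m^2.
    """
    G_I = F_peel * dw / (2.0 * da * b)     # opening-mode component
    G_II = F_shear * du / (2.0 * da * b)   # sliding-mode component
    return G_I, G_II, G_I + G_II

GI, GII, GT = vcct_energy_release(F_shear=120.0, F_peel=80.0,
                                  du=4e-6, dw=6e-6, da=2.5e-4, b=0.025)
```

The same formulas apply whether the crack-tip forces come from the semi-analytical adhesive stress solution or from a finite element model, which is what enables the cross-check described above.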

  11. An efficient structural finite element for inextensible flexible risers

    NASA Astrophysics Data System (ADS)

    Papathanasiou, T. K.; Markolefas, S.; Khazaeinejad, P.; Bahai, H.

    2017-12-01

    A core part of all numerical models used for flexible riser analysis is the structural component representing the main body of the riser as a slender beam. Loads acting on this structural element are self-weight, buoyant and hydrodynamic forces, internal pressure and others. A structural finite element for an inextensible riser with a point-wise enforcement of the inextensibility constraint is presented. In particular, the inextensibility constraint is applied only at the nodes of the meshed arc length parameter. Among the virtues of the proposed approach are the flexibility in the application of boundary conditions and the easy incorporation of dissipative forces. Several attributes of the proposed finite element scheme are analysed and computation times for the solution of some simplified examples are discussed. Future developments aim at the appropriate implementation of material and geometric parameters for the beam model, i.e. flexural and torsional rigidity.

  12. One-dimensional analysis of filamentary composite beam columns with thin-walled open sections

    NASA Technical Reports Server (NTRS)

    Lo, Patrick K.-L.; Johnson, Eric R.

    1986-01-01

    Vlasov's one-dimensional structural theory for thin-walled open section bars was originally developed and used for metallic elements. The theory was recently extended to laminated bars fabricated from advanced composite materials. The purpose of this research is to provide a study and assessment of the extended theory. The focus is on flexural and torsional-flexural buckling of thin-walled, open section, laminated composite columns. Buckling loads are computed from the theory using a linear bifurcation analysis and a geometrically nonlinear beam column analysis by the finite element method. Results from the analyses are compared to available test data.

  13. High temperature transformations of waste printed circuit boards from computer monitor and CPU: Characterisation of residues and kinetic studies.

    PubMed

    Rajagopal, Raghu Raman; Rajarao, Ravindra; Sahajwalla, Veena

    2016-11-01

    This paper investigates the high temperature transformation, specifically the kinetic behaviour, of waste printed circuit boards (WPCB) derived from computer monitors (single-sided/SSWPCB) and computer processing boards - CPU (multi-layered/MLWPCB) using Thermo-Gravimetric Analyser (TGA) and Vertical Thermo-Gravimetric Analyser (VTGA) techniques under a nitrogen atmosphere. Furthermore, the resulting WPCB residues were characterised using X-ray Fluorescence spectrometry (XRF), a Carbon Analyser, X-ray Photoelectron Spectrometry (XPS) and Scanning Electron Microscopy (SEM). To analyse the material degradation of WPCB, TGA runs from 40°C to 700°C at heating rates of 10°C, 20°C and 30°C per minute and VTGA runs at 700°C, 900°C and 1100°C were performed. The data obtained were analysed on the basis of first-order reaction kinetics. The experiments show a substantial difference between SSWPCB and MLWPCB in decomposition levels, kinetic behaviour and structural properties. The calculated activation energy (E_A) of SSWPCB is found to be lower than that of MLWPCB. Elemental analysis shows that SSWPCB has a high carbon content, in contrast to the ceramic-rich MLWPCB, and these differences in material properties have a significant influence on the kinetics and physicochemical properties. These high temperature transformation studies and associated analytical investigations provide a fundamental understanding of the different WPCB types and their major variations. Copyright © 2015 Elsevier Ltd. All rights reserved.
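
    The first-order kinetic treatment mentioned above amounts to an Arrhenius fit, ln k = ln A - E_A/(R·T), against 1/T. A minimal sketch with synthetic rate constants (not data from the paper):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_fit(T, k):
    """Least-squares fit of ln k = ln A - Ea/(R*T); returns (Ea, A)."""
    x = [1.0 / t for t in T]
    y = [math.log(ki) for ki in k]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -slope * R, math.exp(ybar - slope * xbar)

# Synthetic first-order rate constants generated with Ea = 120 kJ/mol, A = 1e9 1/s
T = [973.0, 1173.0, 1373.0]                        # 700, 900, 1100 degC in kelvin
k = [1e9 * math.exp(-120e3 / (R * t)) for t in T]
Ea, A = arrhenius_fit(T, k)
```

    Because the synthetic data lie exactly on an Arrhenius line, the fit recovers the assumed E_A and A; real TGA-derived rate constants would scatter about the line.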

  14. Application of wall-models to discontinuous Galerkin LES

    NASA Astrophysics Data System (ADS)

    Frère, Ariane; Carton de Wiart, Corentin; Hillewaert, Koen; Chatelain, Philippe; Winckelmans, Grégoire

    2017-08-01

    Wall-resolved Large-Eddy Simulations (LES) are still limited to moderate Reynolds number flows due to the high computational cost required to capture the inner part of the boundary layer. Wall-modeled LES (WMLES) provide more affordable LES by modeling the near-wall layer. Wall function-based WMLES solve LES equations up to the wall, where the coarse mesh resolution essentially renders the calculation under-resolved. This makes the accuracy of WMLES very sensitive to the behavior of the numerical method. Therefore, best practice rules regarding the use and implementation of WMLES cannot be directly transferred from one methodology to another regardless of the type of discretization approach. Whilst numerous studies present guidelines on the use of WMLES, there is a lack of knowledge for discontinuous finite-element-like high-order methods. Incidentally, these methods are increasingly used on account of their high accuracy on unstructured meshes and their strong computational efficiency. The present paper proposes best practice guidelines for the use of WMLES in these methods. The study is based on sensitivity analyses of turbulent channel flow simulations by means of a Discontinuous Galerkin approach. It appears that good results can be obtained without the use of spatial or temporal averaging. The study confirms the importance of the wall function input data location and suggests taking it at the bottom of the second off-wall element. Since these data are available through the ghost element, the suggested method prevents the loss of computational scalability experienced in unstructured WMLES. The study also highlights the influence of the polynomial degree used in the wall-adjacent element. It should preferably be of even degree, as using polynomials of degree two in the first off-wall element provides, surprisingly, better results than using polynomials of degree three.
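
    The wall-function evaluation at the sampled off-wall point can be illustrated with a standard log-law Newton iteration: given the LES velocity u at distance y from the wall, solve u/u_tau = (1/kappa)·ln(y·u_tau/nu) + B for the friction velocity. The constants and numbers below are generic textbook assumptions, not values from the paper:

```python
import math

def friction_velocity(u, y, nu, kappa=0.41, B=5.2, tol=1e-10):
    """Newton iteration on the log law for the friction velocity u_tau."""
    u_tau = max(1e-6, math.sqrt(u * nu / y))  # rough initial guess
    for _ in range(50):
        yplus = y * u_tau / nu
        f = u_tau * (math.log(yplus) / kappa + B) - u
        df = math.log(yplus) / kappa + B + 1.0 / kappa
        step = f / df
        u_tau -= step
        if abs(step) < tol * u_tau:
            break
    return u_tau

# Recover u_tau from a velocity sample manufactured with u_tau = 0.05
u_sample = 0.05 * (math.log(0.01 * 0.05 / 1e-5) / 0.41 + 5.2)
u_tau = friction_velocity(u_sample, y=0.01, nu=1e-5)
```

    In a WMLES the resulting u_tau sets the wall shear stress boundary condition fed back to the outer LES.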

  15. Blasim: A computational tool to assess ice impact damage on engine blades

    NASA Astrophysics Data System (ADS)

    Reddy, E. S.; Abumeri, G. H.; Chamis, C. C.

    1993-04-01

    A portable computer code called BLASIM was developed at NASA LeRC to assess ice impact damage on aircraft engine blades. In addition to ice impact analyses, the code also contains static, dynamic, resonance margin, and supersonic flutter analysis capabilities. Solid, hollow, superhybrid, and composite blades are supported. An optional preprocessor (input generator) was also developed to interactively generate input for BLASIM. The blade geometry can be defined using a series of airfoils at discrete input stations or by a finite element grid. The code employs a coarse, fixed finite element mesh containing triangular plate finite elements to minimize program execution time. The ice piece is modeled as an equivalent spherical object with a high velocity opposite that of the aircraft and parallel to the engine axis. For local impact damage assessment, the impact load is considered as a distributed force acting over a region around the impact point. The average radial strain of the finite elements along the leading edge is used as a measure of the local damage. To estimate damage at the blade root, the impact is treated as an impulse and a combined stress failure criterion is employed. Parametric studies of local and root ice impact damage, and post-impact dynamics are discussed for solid and composite blades.

  16. Implicit Space-Time Conservation Element and Solution Element Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Himansu, Ananda; Wang, Xiao-Yen

    1999-01-01

    Artificial numerical dissipation is an important issue in large Reynolds number computations. In such computations, the artificial dissipation inherent in traditional numerical schemes can overwhelm the physical dissipation and yield inaccurate results on meshes of practical size. In the present work, the space-time conservation element and solution element method is used to construct new and accurate implicit numerical schemes such that artificial numerical dissipation will not overwhelm physical dissipation. Specifically, these schemes have the property that numerical dissipation vanishes when the physical viscosity goes to zero. These new schemes therefore accurately model the physical dissipation even when it is extremely small. The new schemes presented are two highly accurate implicit solvers for a convection-diffusion equation. The two schemes become identical in the pure convection case, and in the pure diffusion case. The implicit schemes are applicable over the whole Reynolds number range, from purely diffusive equations to convection-dominated equations with very small viscosity. The stability and consistency of the schemes are analysed, and some numerical results are presented. It is shown that, in the inviscid case, the new schemes become explicit and their amplification factors are identical to those of the Leapfrog scheme. On the other hand, in the pure diffusion case, their principal amplification factor becomes the amplification factor of the Crank-Nicolson scheme.
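
    The Crank-Nicolson limit cited above can be checked with the textbook von Neumann amplification factor for the pure diffusion equation; this sketch illustrates the comparison target, not the CE/SE scheme itself:

```python
import math

def cn_amplification(theta, mu):
    """Crank-Nicolson amplification factor for u_t = alpha*u_xx.

    theta : phase angle k*dx of the Fourier mode
    mu    : diffusion number alpha*dt/dx**2
    """
    s = mu * (1.0 - math.cos(theta))
    return (1.0 - s) / (1.0 + s)

# Unconditional stability: |g| <= 1 for any positive diffusion number
factors = [cn_amplification(th, 10.0) for th in (0.1, 1.0, 2.0, 3.0)]
```

    Note that |g| ≤ 1 holds for every mu > 0, whereas high-wavenumber modes (theta near pi) are damped only weakly at large mu, which is the familiar Crank-Nicolson behaviour.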

  17. Advanced Small Perturbation Potential Flow Theory for Unsteady Aerodynamic and Aeroelastic Analyses

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    2005-01-01

    An advanced small perturbation (ASP) potential flow theory has been developed to improve upon the classical transonic small perturbation (TSP) theories that have been used in various computer codes. These computer codes are typically used for unsteady aerodynamic and aeroelastic analyses in the nonlinear transonic flight regime. The codes exploit the simplicity of stationary Cartesian meshes with the movement or deformation of the configuration under consideration incorporated into the solution algorithm through a planar surface boundary condition. The new ASP theory was developed methodically by first determining the essential elements required to produce full-potential-like solutions with a small perturbation approach on the requisite Cartesian grid. This level of accuracy required a higher-order streamwise mass flux and a mass conserving surface boundary condition. The ASP theory was further developed by determining the essential elements required to produce results that agreed well with Euler solutions. This level of accuracy required mass conserving entropy and vorticity effects, and second-order terms in the trailing wake boundary condition. Finally, an integral boundary layer procedure, applicable to both attached and shock-induced separated flows, was incorporated for viscous effects. The resulting ASP potential flow theory, including entropy, vorticity, and viscous effects, is shown to be mathematically more appropriate and computationally more accurate than the classical TSP theories. The formulaic details of the ASP theory are described fully and the improvements are demonstrated through careful comparisons with accepted alternative results and experimental data. The new theory has been used as the basis for a new computer code called ASP3D (Advanced Small Perturbation - 3D), which also is briefly described with representative results.

  18. A solid reactor core thermal model for nuclear thermal rockets

    NASA Astrophysics Data System (ADS)

    Rider, William J.; Cappiello, Michael W.; Liles, Dennis R.

    1991-01-01

    A Helium/Hydrogen Cooled Reactor Analysis (HERA) computer code has been developed. HERA has the ability to model arbitrary geometries in three dimensions, which allows the user to easily analyze reactor cores constructed of prismatic graphite elements. The code accounts for heat generation in the fuel, control rods, and other structures; conduction and radiation across gaps; convection to the coolant; and a variety of boundary conditions. The numerical solution scheme has been optimized for vector computers, making long transient analyses economical. Time integration is either explicit or implicit, which allows the use of the model to accurately calculate both short- or long-term transients with an efficient use of computer time. Both the basic spatial and temporal integration schemes have been benchmarked against analytical solutions.

  19. Computation of forces from deformed visco-elastic biological tissues

    NASA Astrophysics Data System (ADS)

    Muñoz, José J.; Amat, David; Conte, Vito

    2018-04-01

    We present a least-squares based inverse analysis of visco-elastic biological tissues. The proposed method computes the set of contractile forces (dipoles) at the cell boundaries that induce the observed and quantified deformations. We show that the computation of these forces requires the regularisation of the problem functional for some load configurations that we study here. The functional measures the error of the dynamic problem being discretised in time with a second-order implicit time-stepping and in space with standard finite elements. We analyse the uniqueness of the inverse problem and estimate the regularisation parameter by means of an L-curve criterion. We apply the methodology to a simple toy problem and to an in vivo set of morphogenetic deformations of the Drosophila embryo.
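
    The regularised least-squares step can be sketched with a plain Tikhonov solve, min ||A f - d||² + lam²||f||², whose L-curve is traced by sweeping lam; the matrix, data, and parameter values below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def tikhonov(A, d, lam):
    """Solve (A^T A + lam^2 I) f = A^T d for the regularised unknowns f."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ d)

# Nearly rank-deficient toy problem: regularisation trades residual for norm
A = np.array([[1.0, 1.0],
              [1.0, 1.001]])
d = np.array([2.0, 2.001])

# Sweep the regularisation parameter, as one would to trace an L-curve
sols = {lam: tikhonov(A, d, lam) for lam in (0.0, 1e-3, 1e-1)}
```

    Plotting the residual norm ||A f - d|| against the solution norm ||f|| over the sweep produces the L-curve, and the corner of the "L" gives the parameter estimate.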

  20. The MHOST finite element program: 3-D inelastic analysis methods for hot section components. Volume 1: Theoretical manual

    NASA Technical Reports Server (NTRS)

    Nakazawa, Shohei

    1991-01-01

    Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for the efficient 3-D computations of turbine engine hot section components. The general framework of variational formulation and solution algorithms are discussed which were derived from the mixed three field Hu-Washizu principle. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. Algorithmic description of the mixed iterative method includes variations for the quasi static, transient dynamic and buckling analyses. The global-local analysis procedure referred to as subelement refinement is developed in the framework of the mixed iterative solution, the details of which are presented. The numerically integrated isoparametric elements implemented in this framework are discussed. Methods to filter certain parts of strain and project the element discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for linear and nonlinear equations included in the MHOST program.

  1. Aging of monolithic zirconia dental prostheses: Protocol for a 5-year prospective clinical study using ex vivo analyses.

    PubMed

    Koenig, Vinciane; Wulfman, Claudine P; Derbanne, Mathieu A; Dupont, Nathalie M; Le Goff, Stéphane O; Tang, Mie-Leng; Seidel, Laurence; Dewael, Thibaut Y; Vanheusden, Alain J; Mainjot, Amélie K

    2016-12-15

    Recent introduction of computer-aided design/computer-aided manufacturing (CAD/CAM) monolithic zirconia dental prostheses raises the issue of material low thermal degradation (LTD), a well-known problem with zirconia hip prostheses. This phenomenon could be accentuated by masticatory mechanical stress. Until now the zirconia LTD process has only been studied in vitro. This work introduces an original protocol to evaluate the LTD process of monolithic zirconia prostheses in the oral environment and to study their general clinical behavior, notably in terms of wear. 101 posterior monolithic zirconia tooth elements (molars and premolars) are included in a 5-year prospective clinical trial. On each element, several areas between 1 and 2 mm² (6 on molars, 4 on premolars) are determined on the restoration surface: areas submitted or non-submitted to mastication mechanical stress, glazed or non-glazed. Before prosthesis placement, ex vivo analyses regarding LTD and wear are performed using Raman spectroscopy, SEM imagery and 3D laser profilometry. After placement, restorations are clinically evaluated following criteria of the World Dental Federation (FDI), complemented by the analysis of fracture clinical risk factors. Two independent examiners perform the evaluations. Clinical evaluation and ex vivo analyses are carried out after 6 months and then each year for up to 5 years. For clinicians and patients, the results of this trial will justify the use of monolithic zirconia restorations in dental practice. For researchers, the originality of a clinical study including ex vivo analyses of material aging will provide important data regarding zirconia properties. Trial registration: ClinicalTrials.gov Identifier: NCT02150226.

  2. An Approximate Dissipation Function for Large Strain Rubber Thermo-Mechanical Analyses

    NASA Technical Reports Server (NTRS)

    Johnson, Arthur R.; Chen, Tzi-Kang

    2003-01-01

    Mechanically induced viscoelastic dissipation is difficult to compute. When the constitutive model is defined by history integrals, the formula for dissipation is a double convolution integral. Since double convolution integrals are difficult to approximate, coupled thermo-mechanical analyses of highly viscous rubber-like materials cannot be made with most commercial finite element software. In this study, we present a method to approximate the dissipation for history integral constitutive models that represent Maxwell-like materials without approximating the double convolution integral. The method requires that the total stress can be separated into elastic and viscous components, and that the relaxation form of the constitutive law is defined with a Prony series. Numerical data are provided to demonstrate the limitations of this approximate method for determining dissipation. Rubber cylinders with embedded steel disks and with an embedded steel ball are dynamically loaded, and the nonuniform heating within the cylinders is computed.
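
    A single Maxwell branch of such a Prony-series model admits the standard exponential stress update, with branch dissipation rate sigma_v²/eta, eta = G·tau. This generic small-strain form is assumed purely for illustration and is not the paper's large-strain formulation:

```python
import math

def maxwell_step(sigma_v, dstrain, G, tau, dt):
    """Exponential update of one Maxwell-branch overstress over a time step,
    assuming the strain varies linearly within the step (standard
    Prony-series recurrence)."""
    a = math.exp(-dt / tau)
    return a * sigma_v + G * (tau / dt) * (1.0 - a) * dstrain

def dissipation_rate(sigma_v, G, tau):
    """Viscous dissipation rate of the branch: sigma_v**2 / eta, eta = G*tau."""
    return sigma_v ** 2 / (G * tau)

# Stress relaxation at fixed strain: the overstress decays as exp(-t/tau)
s = maxwell_step(1.0, 0.0, G=1.0, tau=2.0, dt=2.0)
```

    Summing the branch dissipation rates over a time history gives the heat source for a coupled thermo-mechanical analysis without evaluating any double convolution integral, which is the spirit of the separation described above.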

  3. Estimation of aortic valve leaflets from 3D CT images using local shape dictionaries and linear coding

    NASA Astrophysics Data System (ADS)

    Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James

    2016-03-01

    Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images though many hours of manual annotation, and therefore an automatic valve shape reconstruction method is desired. In this paper, we present a method for estimating the aortic valve shape from 3D cardiac CT images, which is represented by triangle meshes. We propose a pipeline for aortic valve shape estimation which includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using local shape dictionaries. The method is evaluated on real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.

  4. Modal and Impact Dynamics Analysis of an Aluminum Cylinder

    NASA Technical Reports Server (NTRS)

    Lessard, Wendy B.

    2002-01-01

    This paper presents analyses for the modal characteristics and impact response of an all-aluminum cylinder. The analyses were performed in preparation for impact tests of the cylinder at The Impact Dynamics Research Facility (IDRF) at the NASA Langley Research Center. Mode shapes and frequencies were computed using NASTRAN and compared with existing experimental data to assess the overall accuracy of the mass and stiffness of the finite element model. A series of non-linear impact analyses were then performed using MSC Dytran in which the weight distribution on the floor and the impact velocity of the cylinder were varied. The effects of impact velocity and mass on the rebound and gross deformation of the cylinder were studied in this investigation.

  5. Elemental and isotopic imaging to study biogeochemical functioning of intact soil micro-environments

    NASA Astrophysics Data System (ADS)

    Mueller, Carsten W.

    2017-04-01

    The complexity of soils extends from the ecosystem-scale to individual micro-aggregates, where nano-scale interactions between biota, organic matter (OM) and mineral particles are thought to control the long-term fate of soil carbon and nitrogen. It is known that such biogeochemical processes show disproportionally high reaction rates within nano- to micro-meter sized isolated zones ('hot spots') in comparison to surrounding areas. However, the majority of soil research is conducted on large bulk (> 1 g) samples, which are often significantly altered prior to analysis and analysed destructively. Thus it has previously been impossible to study elemental flows (e.g. C and N) between plants, microbes and soil in complex environments at the necessary spatial resolution within an intact soil system. By using nano-scale secondary ion mass spectrometry (NanoSIMS) in concert with other imaging techniques (e.g. scanning electron microscopy (SEM) and micro computed tomography (µCT)), classic analyses (isotopic and elemental analysis) and biochemical methods (e.g. GC-MS) it is possible to obtain a more complete picture of soil processes at the micro-scale. I will present exemplary results on the fate and distribution of organic C and N in complex micro-scale soil structures for a range of intact soil systems. Elemental imaging was used to study initial soil formation as an increase in the structural connectivity of micro-aggregates. Element distribution will be presented as a key to detecting functional spatial patterns and biogeochemical hot spots in macro-aggregate functioning and development. In addition, isotopic imaging will be demonstrated as a key to tracing the fate of plant derived OM in the intact rhizosphere from the root to microbiota and mineral soil particles. In particular, the use of stable isotope enrichment (e.g. 13CO2, 15NH4+) in conjunction with NanoSIMS makes it possible to directly trace the fate of OM or nutrients in soils at the relevant scale (e.g. assimilate C / inorganic N in the rhizosphere). However, elemental mapping especially requires more sophisticated computational approaches to evaluate (and quantify) the spatial heterogeneities of biogeochemical properties in intact soil systems.

  6. GPGPU-based explicit finite element computations for applications in biomechanics: the performance of material models, element technologies, and hardware generations.

    PubMed

    Strbac, V; Pierce, D M; Vander Sloten, J; Famaey, N

    2017-12-01

    Finite element (FE) simulations are increasingly valuable in assessing and improving the performance of biomedical devices and procedures. Due to high computational demands such simulations may become difficult or even infeasible, especially when considering nearly incompressible and anisotropic material models prevalent in analyses of soft tissues. Implementations of GPGPU-based explicit FEs predominantly cover isotropic materials, e.g. the neo-Hookean model. To elucidate the computational expense of anisotropic materials, we implement the Gasser-Ogden-Holzapfel dispersed, fiber-reinforced model and compare solution times against the neo-Hookean model. Implementations of GPGPU-based explicit FEs conventionally rely on single-point (under) integration. To elucidate the expense of full and selective-reduced integration (more reliable) we implement both and compare corresponding solution times against those generated using underintegration. To better understand the advancement of hardware, we compare results generated using representative Nvidia GPGPUs from three recent generations: Fermi (C2075), Kepler (K20c), and Maxwell (GTX980). We explore scaling by solving the same boundary value problem (an extension-inflation test on a segment of human aorta) with progressively larger FE meshes. Our results demonstrate substantial improvements in simulation speeds relative to two benchmark FE codes (up to 300× while maintaining accuracy), and thus open many avenues to novel applications in biomechanics and medicine.
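
    The per-element material evaluation at the heart of such explicit GPGPU kernels can be sketched for the simpler of the two models. This compressible neo-Hookean form, P(F) = mu·(F - F^-T) + lam·ln(J)·F^-T for the first Piola-Kirchhoff stress, is one common textbook variant assumed here for illustration, not the paper's exact formulation:

```python
import math
import numpy as np

def neo_hookean_P(F, mu, lam):
    """First Piola-Kirchhoff stress of a compressible neo-Hookean solid.

    F   : 3x3 deformation gradient
    mu  : shear modulus
    lam : Lame parameter controlling the volumetric response
    """
    J = np.linalg.det(F)
    FinvT = np.linalg.inv(F).T
    return mu * (F - FinvT) + lam * math.log(J) * FinvT

# Sanity check: the undeformed reference configuration is stress free
P0 = neo_hookean_P(np.eye(3), mu=1.0, lam=1.0)
```

    On a GPU the same evaluation is performed in parallel at every quadrature point, which is why the anisotropic fiber-reinforced model, with its extra structure tensors, carries a measurable per-point cost.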

  7. Structural Anomalies Detected in Ceramic Matrix Composites Using Combined Nondestructive Evaluation and Finite Element Analysis (NDE and FEA)

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Baaklini, George Y.; Bhatt, Ramakrishna T.

    2003-01-01

    Most reverse engineering approaches involve imaging or digitizing an object and then creating a computerized reconstruction that can be integrated, in three dimensions, into a particular design environment. The rapid prototyping technique builds high-quality physical prototypes directly from computer-aided design files. This fundamental technique for interpreting and interacting with large data sets is being used here via Velocity2 (an integrated image-processing software, ref. 1) using computed tomography (CT) data to produce a prototype three-dimensional test specimen model for analyses. A study at the NASA Glenn Research Center proposes to use these capabilities to conduct a combined nondestructive evaluation (NDE) and finite element analysis (FEA) to screen pretest and posttest structural anomalies in structural components. A tensile specimen made of silicon nitride (Si3N4) ceramic matrix composite was considered to evaluate structural durability and deformity. Ceramic matrix composites are being sought as candidate materials to replace nickel-base superalloys for turbine engine applications. They have the unique characteristics of being able to withstand higher operating temperatures and harsh combustion environments. In addition, their low densities relative to metals help reduce component mass (ref. 2). Detailed three-dimensional volume rendering of the tensile test specimen was successfully carried out with Velocity2 (ref. 1) using two-dimensional images that were generated via computed tomography. Subsequently, three-dimensional finite element analyses were performed, and the results obtained were compared with those predicted by NDE-based calculations and experimental tests. It was shown that Velocity2 software can be used to render a three-dimensional object from a series of CT scan images with a minimum level of complexity. The analytical results (ref. 3) show that the high-stress regions correlated well with the damage sites identified by the CT scans and the experimental data. Furthermore, modeling of the voids collected via NDE offered an analytical advantage that resulted in more accurate assessments of the material's structural strength. The top figure shows a CT scan image of the specimen test section illustrating various hidden structural entities in the material and an optical image of the test specimen considered in this study. The bottom figure represents the stress response predicted from the finite element analyses (ref. 3) for a selected CT slice, where it clearly illustrates the correspondence of the high stress risers due to voids in the material with those predicted by the NDE. This study is continuing, and efforts are concentrated on improving the modeling capabilities to imitate the structural anomalies as detected.

  8. Coupled 2D-3D finite element method for analysis of a skin panel with a discontinuous stiffener

    NASA Technical Reports Server (NTRS)

    Wang, J. T.; Lotts, C. G.; Davis, D. D., Jr.; Krishnamurthy, T.

    1992-01-01

    This paper describes a computationally efficient analysis method which was used to predict detailed stress states in a typical composite compression panel with a discontinuous hat stiffener. A global-local approach was used. The global model incorporated both 2D shell and 3D brick elements connected by newly developed transition elements. Most of the panel was modeled with 2D elements, while 3D elements were employed to model the stiffener flange and the adjacent skin. Both linear and geometrically nonlinear analyses were performed on the global model. The effect of geometric nonlinearity induced by the eccentric load path due to the discontinuous hat stiffener was significant. The local model used a fine mesh of 3D brick elements to model the region at the end of the stiffener. Boundary conditions of the local 3D model were obtained by spline interpolation of the nodal displacements from the global analysis. Detailed in-plane and through-the-thickness stresses were calculated in the flange-skin interface near the end of the stiffener.

  9. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  10. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  11. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  12. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  13. 14 CFR § 1214.801 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  14. "Group IV Nanomembranes, Nanoribbons, and Quantum Dots: Processing, Characterization, and Novel Devices"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    liu, feng

    This theoretical project has been carried out in close interaction with the experimental project at UW-Madison under the same title led by PI Max Lagally and co-PI Mark Eriksson. Extensive computational studies have been performed to address a broad range of topics from atomic structure, stability, mechanical property, to electronic structure, optoelectronic and transport properties of various nanoarchitectures in the context of Si and other solid nanomembranes. This has been done using a combination of theoretical and computational approaches, ranging from first-principles calculations and molecular dynamics (MD) simulations to finite-element (FE) analyses and continuum modeling.

  15. Physical validation of a patient-specific contact finite element model of the ankle.

    PubMed

    Anderson, Donald D; Goldsworthy, Jane K; Li, Wendy; James Rudert, M; Tochigi, Yuki; Brown, Thomas D

    2007-01-01

A validation study was conducted to determine the extent to which computational ankle contact finite element (FE) results agreed with experimentally measured tibio-talar contact stress. Two cadaver ankles were loaded in separate test sessions, during which ankle contact stresses were measured with a high-resolution (Tekscan) pressure sensor. Corresponding contact FE analyses were subsequently performed for comparison. The agreement was good between FE-computed and experimentally measured mean (3.2% discrepancy for one ankle, 19.3% for the other) and maximum (1.5% and 6.2%) contact stress, as well as for contact area (1.7% and 14.9%). There was also excellent agreement between histograms of fractional areas of cartilage experiencing specific ranges of contact stress. Finally, point-by-point comparisons between the computed and measured contact stress distributions over the articular surface showed substantial agreement, with correlation coefficients of 90% for one ankle and 86% for the other. In the past, general qualitative, but little direct quantitative, agreement has been demonstrated with articular joint contact FE models. The methods used for this validation enable formal comparison of computational and experimental results, and open the way for objective statistical measures of regional correlation between FE-computed contact stress distributions from comparison articular joint surfaces (e.g., those from an intact joint versus those with residual intra-articular fracture incongruity).
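
The summary metrics above (percent discrepancy in mean and maximum stress, plus a point-by-point correlation coefficient) are straightforward to compute once both fields are sampled on a common grid. A minimal sketch, with synthetic arrays standing in for the Tekscan and FE stress fields rather than the study's data:

```python
import numpy as np

# Illustrative only: synthetic stand-ins for measured and FE-computed
# contact stress sampled on a common grid (zeros would mark non-contact).
rng = np.random.default_rng(0)
measured = np.clip(rng.normal(3.0, 1.0, size=(20, 20)), 0.0, None)  # MPa
computed = measured * 1.03 + rng.normal(0.0, 0.1, size=(20, 20))    # FE result

# Percent discrepancy in mean and maximum contact stress.
mean_disc = abs(computed.mean() - measured.mean()) / measured.mean() * 100.0
max_disc = abs(computed.max() - measured.max()) / measured.max() * 100.0

# Point-by-point agreement over the surface: Pearson correlation coefficient.
r = np.corrcoef(measured.ravel(), computed.ravel())[0, 1]
print(f"mean discrepancy {mean_disc:.1f}%, max {max_disc:.1f}%, r = {r:.2f}")
```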

  16. Using computer graphics to enhance astronaut and systems safety

    NASA Technical Reports Server (NTRS)

    Brown, J. W.

    1985-01-01

Computer graphics is being employed at the NASA Johnson Space Center as a tool to perform rapid, efficient and economical analyses for man-machine integration, flight operations development and systems engineering. The Operator Station Design System (OSDS), a computer-based facility featuring a highly flexible and versatile interactive software package, PLAID, is described. This unique evaluation tool, with its expanding data base of Space Shuttle elements, various payloads, experiments, crew equipment and man models, supports a multitude of technical evaluations, including spacecraft and workstation layout, definition of astronaut visual access, flight techniques development, cargo integration and crew training. As OSDS is being applied to the Space Shuttle, Orbiter payloads (including the European Space Agency's Spacelab) and future space vehicles and stations, astronaut and systems safety are being enhanced. Typical OSDS examples are presented. By performing physical and operational evaluations during early conceptual phases, supporting systems verification for flight readiness, and applying its capabilities to real-time mission support, the OSDS provides the wherewithal to satisfy a growing need of the current and future space programs for efficient, economical analyses.

  17. Multi-Element Unstructured Analyses of Complex Valve Systems

    NASA Technical Reports Server (NTRS)

    Sulyma, Peter (Technical Monitor); Ahuja, Vineet; Hosangadi, Ashvin; Shipman, Jeremy

    2004-01-01

The safe and reliable operation of high pressure test stands for rocket engine and component testing places an increased emphasis on the performance of control valves and flow metering devices. In this paper, we will present a series of high fidelity computational analyses of systems ranging from cryogenic control valves and pressure regulator systems to cavitating venturis that are used to support rocket engine and component testing at NASA Stennis Space Center. A generalized multi-element framework with sub-models for grid adaption, grid movement and multi-phase flow dynamics has been used to carry out the simulations. Such a framework provides the flexibility of resolving the structural and functional complexities that are typically associated with valve-based high pressure feed systems and that have been difficult to handle with traditional CFD methods. Our simulations revealed a rich variety of flow phenomena such as secondary flow patterns, hydrodynamic instabilities, fluctuating vapor pockets, etc. In the paper, we will discuss performance losses related to cryogenic control valves, and provide insight into the physics of the dominant multi-phase fluid transport phenomena that are responsible for the choking-like behavior in cryogenic control elements. Additionally, we will provide detailed analyses of the modal instability that is observed in the operation of the dome pressure regulator valve. Such instabilities are usually not localized and manifest themselves as a system-wide phenomenon leading to undesirable chatter at high flow conditions.

  18. Three-Dimensional Analysis of Voids in AM60B Magnesium Tensile Bars Using Computed Tomography Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waters, A M

    2001-05-01

In an effort to increase automobile fuel efficiency as well as decrease the output of harmful greenhouse gases, the automotive industry has recently shown increased interest in cast light metals such as magnesium alloys for their weight savings. Currently several magnesium alloys such as AZ91 and AM60B are being used in structural applications for automobiles. However, these magnesium alloys are not as well characterized as other commonly used structural metals such as aluminum. This dissertation presents a methodology to nondestructively quantify damage accumulation due to void behavior in three dimensions in die-cast magnesium AM60B tensile bars as a function of mechanical load. Computed tomography data was acquired after tensile bars were loaded up to and including failure, and analyzed to characterize void behavior as it relates to damage accumulation. Signal and image processing techniques were used along with a cluster labeling routine to nondestructively quantify damage parameters in three dimensions. Void analyses were performed including void volume distribution characterization, nearest neighbor distance calculations, shape parameters, and volumetric renderings of voids in the alloy. The processed CT data was used to generate input files for use in finite element simulations, both two- and three-dimensional. The void analyses revealed that the overwhelming source of failure in each tensile bar was a ring of porosity within each bar, possibly due to a solidification front inherent to the casting process. The measured damage parameters related to void nucleation, growth, and coalescence were shown to contribute significantly to total damage accumulation. Void volume distributions were characterized using a Weibull function, and the spatial distributions of voids were shown to be clustered. Two-dimensional finite element analyses of the tensile bars were used to fine-tune material damage models and a three-dimensional mesh of an extracted portion of one tensile bar including voids was generated from CT data and used as input to a finite element analysis.
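
Characterizing a void-volume distribution with a Weibull function, as described above, amounts to fitting the two Weibull parameters to the measured volumes. A hedged sketch with synthetic volumes (real ones would come from the CT cluster-labeling step), recovering the parameters from the linearized empirical CDF, ln(-ln(1-F)) = k ln(v) - k ln(lam):

```python
import numpy as np

# Synthetic void volumes drawn from a Weibull distribution with assumed
# (not the dissertation's) shape and scale parameters.
rng = np.random.default_rng(1)
k_true, lam_true = 1.5, 2.0e-3               # shape, scale (mm^3), assumed
volumes = lam_true * rng.weibull(k_true, size=5000)

# Median-rank empirical CDF, then a straight-line fit on the Weibull plot:
# for F(v) = 1 - exp(-(v/lam)^k), ln(-ln(1-F)) is linear in ln(v).
v = np.sort(volumes)
F = (np.arange(1, v.size + 1) - 0.5) / v.size
x = np.log(v)
y = np.log(-np.log(1.0 - F))
k_fit, b = np.polyfit(x, y, 1)               # slope = k, intercept = -k*ln(lam)
lam_fit = np.exp(-b / k_fit)
print(f"fitted shape {k_fit:.2f}, scale {lam_fit:.2e}")
```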

  19. Using computer graphics to design Space Station Freedom viewing

    NASA Technical Reports Server (NTRS)

    Goldsberry, Betty S.; Lippert, Buddy O.; Mckee, Sandra D.; Lewis, James L., Jr.; Mount, Francis E.

    1993-01-01

    Viewing requirements were identified early in the Space Station Freedom program for both direct viewing via windows and indirect viewing via cameras and closed-circuit television (CCTV). These requirements reside in NASA Program Definition and Requirements Document (PDRD), Section 3: Space Station Systems Requirements. Currently, analyses are addressing the feasibility of direct and indirect viewing. The goal of these analyses is to determine the optimum locations for the windows, cameras, and CCTV's in order to meet established requirements, to adequately support space station assembly, and to operate on-board equipment. PLAID, a three-dimensional computer graphics program developed at NASA JSC, was selected for use as the major tool in these analyses. PLAID provides the capability to simulate the assembly of the station as well as to examine operations as the station evolves. This program has been used successfully as a tool to analyze general viewing conditions for many Space Shuttle elements and can be used for virtually all Space Station components. Additionally, PLAID provides the ability to integrate an anthropometric scale-modeled human (representing a crew member) with interior and exterior architecture.

  20. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 2 Core.

    PubMed

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2018-03-09

    Computational models can help researchers to interpret data, understand biological functions, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that different software systems can exchange. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 2 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML, their encoding in XML (the eXtensible Markup Language), validation rules that determine the validity of an SBML document, and examples of models in SBML form. The design of Version 2 differs from Version 1 principally in allowing new MathML constructs, making more child elements optional, and adding identifiers to all SBML elements instead of only selected elements. Other materials and software are available from the SBML project website at http://sbml.org/.
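
The XML encoding described above can be illustrated with a minimal document skeleton. This is a sketch, not a complete valid model: the namespace URI follows SBML's level/version pattern, and the specification itself defines the full set of required attributes and validation rules. The `id` on the model element reflects the Level 3 Version 2 change that allows identifiers on all SBML elements.

```python
import xml.etree.ElementTree as ET

# Build an (empty) SBML Level 3 Version 2 Core document containing one model.
NS = "http://www.sbml.org/sbml/level3/version2/core"
sbml = ET.Element("sbml", {"xmlns": NS, "level": "3", "version": "2"})
model = ET.SubElement(sbml, "model", {"id": "example_model"})
doc = ET.tostring(sbml, encoding="unicode")
print(doc)
```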

  1. Development of an integrated BEM approach for hot fluid structure interaction

    NASA Technical Reports Server (NTRS)

    Dargush, G. F.; Banerjee, P. K.; Shi, Y.

    1991-01-01

    The development of a comprehensive fluid-structure interaction capability within a boundary element computer code is described. This new capability is implemented in a completely general manner, so that quite arbitrary geometry, material properties and boundary conditions may be specified. Thus, a single analysis code can be used to run structures-only problems, fluids-only problems, or the combined fluid-structure problem. In all three cases, steady or transient conditions can be selected, with or without thermal effects. Nonlinear analyses can be solved via direct iteration or by employing a modified Newton-Raphson approach. A number of detailed numerical examples are included at the end of these two sections to validate the formulations and to emphasize both the accuracy and generality of the computer code. A brief review of the recent applicable boundary element literature is included for completeness. The fluid-structure interaction facility is discussed. Once again, several examples are provided to highlight this unique capability. A collection of potential boundary element applications that have been uncovered as a result of work related to the present grant is given. For most of those problems, satisfactory analysis techniques do not currently exist.

  2. Element-topology-independent preconditioners for parallel finite element computations

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alexander, Scott

    1992-01-01

A family of preconditioners for the solution of finite element equations is presented; the preconditioners are element-topology independent and thus applicable to element order-free parallel computations. A key feature of the present preconditioners is the repeated use of element connectivity matrices and their left and right inverses. The properties and performance of the present preconditioners are demonstrated via beam and two-dimensional finite element matrices for implicit time integration computations.

  3. Calculation of skin-stiffener interface stresses in stiffened composite panels

    NASA Technical Reports Server (NTRS)

    Cohen, David; Hyer, Michael W.

    1987-01-01

    A method for computing the skin-stiffener interface stresses in stiffened composite panels is developed. Both geometrically linear and nonlinear analyses are considered. Particular attention is given to the flange termination region where stresses are expected to exhibit unbounded characteristics. The method is based on a finite-element analysis and an elasticity solution. The finite-element analysis is standard, while the elasticity solution is based on an eigenvalue expansion of the stress functions. The eigenvalue expansion is assumed to be valid in the local flange termination region and is coupled with the finite-element analysis using collocation of stresses on the local region boundaries. Accuracy and convergence of the local elasticity solution are assessed using a geometrically linear analysis. Using this analysis procedure, the influence of geometric nonlinearities and stiffener parameters on the skin-stiffener interface stresses is evaluated.

  4. Supercomputers for engineering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goudreau, G.L.; Benson, D.J.; Hallquist, J.O.

    1986-07-01

The Cray-1 and Cray X-MP/48 experience in engineering computations at the Lawrence Livermore National Laboratory is surveyed. The fully vectorized explicit DYNA and implicit NIKE finite element codes are discussed with respect to solid and structural mechanics. The main efficiencies for production analyses are currently obtained by simple CFT compiler exploitation of pipeline architecture for inner do-loop optimization. Current development of outer-loop multitasking is also discussed. Applications emphasis will be on 3D examples spanning earth penetrator loads analysis, target lethality assessment, and crashworthiness. The use of a vectorized large deformation shell element in both DYNA and NIKE has substantially expanded 3D nonlinear capability. 25 refs., 7 figs.

  5. 3-D Analysis of Flanged Joints Through Various Preload Methods Using ANSYS

    NASA Astrophysics Data System (ADS)

    Murugan, Jeyaraj Paul; Kurian, Thomas; Jayaprakash, Janardhan; Sreedharapanickar, Somanath

    2015-10-01

Flanged joints are employed in aerospace solid rocket motor hardware for the integration of various systems or subsystems. Hence, the design of flanged joints is very important in ensuring the integrity of the motor while functioning. As these joints are subjected to higher loads due to internal pressure acting inside the motor chamber, an appropriate preload is required to be applied in this joint before subjecting it to the external load. Preload, also known as clamp load, is applied on the fastener and helps to hold the mating flanges together. Generally, preload is simulated as a thermal load and the exact preload is obtained through a number of iterations. In fact, more iterations are required when considering the material nonlinearity of the bolt. This way of simulation takes more computational time to generate the required preload. Nowadays most commercial software packages use pretension elements for simulating the preload. This element does not require iterations for inducing the preload and can be solved with a single iteration. This approach takes less computational time, and thus one can study the characteristics of the joint easily by varying the preload. When the structure contains a large number of joints with different sizes of fasteners, pretension elements are preferable to the thermal load approach for simulating each size of fastener. This paper covers the details of analyses carried out simulating the preload through various options, viz., preload through thermal load, initial state command and pretension element, using the ANSYS finite element package.
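
Why the thermal-load approach is iterative can be shown on a toy spring model: a bolt cooled by dT tries to shorten, the clamped flanges push back, and with a preload-dependent (nonlinear) bolt stiffness the dT that yields a target clamp load must be found by iteration, whereas a pretension element imposes the load directly. This is an illustrative sketch with assumed numbers, not ANSYS input:

```python
# Bolt/flange spring model with assumed parameters (illustrative only).
ALPHA, L = 1.2e-5, 0.05          # thermal expansion (1/K), grip length (m)
K_FLANGE = 2.0e8                 # flange stiffness (N/m), assumed
TARGET = 2.0e4                   # target preload (N)

def bolt_stiffness(preload):
    # Mild softening with load, standing in for material nonlinearity.
    return 4.0e8 / (1.0 + preload / 1.0e5)

def preload_from_dT(dT):
    # Equilibrium: thermal shortening alpha*dT*L is shared between
    # bolt stretch P/k_b and flange compression P/k_f.
    p = 0.0
    for _ in range(50):                      # inner equilibrium iteration
        k_b = bolt_stiffness(p)
        p = ALPHA * dT * L * k_b * K_FLANGE / (k_b + K_FLANGE)
    return p

# Outer iteration on dT (simple ratio update) to reach the target preload.
dT = 100.0
for _ in range(30):
    dT *= TARGET / preload_from_dT(dT)
achieved = preload_from_dT(dT)
print(f"dT = {dT:.1f} K gives preload {achieved:.0f} N")
```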

  6. Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Jamelot, Erell; Ciarlet, Patrick

    2013-05-01

Studying numerically the steady state of a nuclear core reactor is expensive, in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method, implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous to the discrete point of view, and we give some numerical results in a realistic highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.

  7. A Systematic Literature Mapping of Risk Analysis of Big Data in Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Bee Yusof Ali, Hazirah; Marziana Abdullah, Lili; Kartiwi, Mira; Nordin, Azlin; Salleh, Norsaremah; Sham Awang Abu Bakar, Normi

    2018-05-01

    This paper investigates previous literature that focusses on the three elements: risk assessment, big data and cloud. We use a systematic literature mapping method to search for journals and proceedings. The systematic literature mapping process is utilized to get a properly screened and focused literature. With the help of inclusion and exclusion criteria, the search of literature is further narrowed. Classification helps us in grouping the literature into categories. At the end of the mapping, gaps can be seen. The gap is where our focus should be in analysing risk of big data in cloud computing environment. Thus, a framework of how to assess the risk of security, privacy and trust associated with big data and cloud computing environment is highly needed.

  8. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and kriging models (using a constant underlying global model and a Gaussian correlation function) yield comparable results.
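
The two approximation types can be sketched on a 1-D deterministic "computer analysis" y(x) = sin(x): a least-squares quadratic response surface, and an ordinary-kriging-style interpolator with a constant global model and a Gaussian correlation function. The correlation parameter `theta` is an assumed value, not one from the paper:

```python
import numpy as np

x_s = np.linspace(0.0, np.pi, 8)          # sample sites
y_s = np.sin(x_s)                          # deterministic responses

# (a) Second-order response surface: least-squares quadratic.
coef = np.polyfit(x_s, y_s, 2)

# (b) Kriging-style interpolator: constant mean + Gaussian correlation.
theta = 2.0                                # correlation parameter, assumed
R = np.exp(-theta * (x_s[:, None] - x_s[None, :]) ** 2)
one = np.ones_like(x_s)
beta = one @ np.linalg.solve(R, y_s) / (one @ np.linalg.solve(R, one))
w = np.linalg.solve(R, y_s - beta * one)

def krig(x):
    r = np.exp(-theta * (x - x_s) ** 2)
    return beta + r @ w

x_t = 1.0                                  # test point between samples
rs_err = abs(np.polyval(coef, x_t) - np.sin(x_t))
kr_err = abs(krig(x_t) - np.sin(x_t))
print(f"response-surface error {rs_err:.4f}, kriging error {kr_err:.4f}")
```

Note the characteristic difference: the kriging model interpolates the sample responses exactly, while the polynomial smooths through them.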

  9. WARP3D-Release 10.8: Dynamic Nonlinear Analysis of Solids using a Preconditioned Conjugate Gradient Software Architecture

    NASA Technical Reports Server (NTRS)

    Koppenhoefer, Kyle C.; Gullerud, Arne S.; Ruggieri, Claudio; Dodds, Robert H., Jr.; Healy, Brian E.

    1998-01-01

This report describes theoretical background material and commands necessary to use the WARP3D finite element code. WARP3D is under continuing development as a research code for the solution of very large-scale, 3-D solid models subjected to static and dynamic loads. Specific features in the code oriented toward the investigation of ductile fracture in metals include a robust finite strain formulation, a general J-integral computation facility (with inertia, face loading), an element extinction facility to model crack growth, nonlinear material models including viscoplastic effects, and the Gurson-Tvergaard dilatant plasticity model for void growth. The nonlinear, dynamic equilibrium equations are solved using an incremental-iterative, implicit formulation with full Newton iterations to eliminate residual nodal forces. The history integration of the nonlinear equations of motion is accomplished with Newmark's Beta method. A central feature of WARP3D involves the use of a linear-preconditioned conjugate gradient (LPCG) solver implemented in an element-by-element format to replace a conventional direct linear equation solver. This software architecture dramatically reduces both the memory requirements and CPU time for very large, nonlinear solid models since formation of the assembled (dynamic) stiffness matrix is avoided. Analyses thus exhibit the numerical stability for large time (load) steps provided by the implicit formulation coupled with the low memory requirements characteristic of an explicit code. In addition to the much lower memory requirements of the LPCG solver, the CPU time required for solution of the linear equations during each Newton iteration is generally one-half or less of the CPU time required for a traditional direct solver. All other computational aspects of the code (element stiffnesses, element strains, stress updating, element internal forces) are implemented in the element-by-element, blocked architecture.
This greatly improves vectorization of the code on uni-processor hardware and enables straightforward parallel-vector processing of element blocks on multi-processor hardware.
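
The element-by-element idea behind such a solver can be sketched on a 1-D bar of two-node elements: the global stiffness matrix is never assembled, and matrix-vector products are accumulated element by element inside a preconditioned conjugate-gradient loop. This toy uses a Jacobi preconditioner; WARP3D's actual preconditioner and element library are far richer.

```python
import numpy as np

n_el = 50                                    # elements; nodes 0..n_el, node 0 fixed
k_e = np.array([[1.0, -1.0], [-1.0, 1.0]])  # unit-stiffness bar element

def ebe_matvec(v):
    """K @ v accumulated element-by-element, with the fixed dof eliminated."""
    full = np.concatenate(([0.0], v))        # re-insert the fixed dof
    out = np.zeros_like(full)
    for e in range(n_el):
        out[e:e + 2] += k_e @ full[e:e + 2]
    return out[1:]

f = np.zeros(n_el); f[-1] = 1.0              # unit end load
diag = np.full(n_el, 2.0); diag[-1] = 1.0    # assembled diagonal (Jacobi)

# Jacobi-preconditioned conjugate gradients without an assembled matrix.
u = np.zeros(n_el)
r = f - ebe_matvec(u)
z = r / diag
p = z.copy()
for _ in range(200):
    Ap = ebe_matvec(p)
    alpha = (r @ z) / (p @ Ap)
    u += alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-12:
        break
    z_new = r_new / diag
    p = z_new + ((r_new @ z_new) / (r @ z)) * p
    r, z = r_new, z_new

# Exact tip displacement of a fixed-free chain of unit springs: n_el.
print(f"tip displacement {u[-1]:.6f}")
```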

  10. Factors that Influence the Success of Male and Female Computer Programming Students in College

    NASA Astrophysics Data System (ADS)

    Clinkenbeard, Drew A.

As the demand for a technologically skilled work force grows, experience and skill in computer science have become increasingly valuable for college students. However, the number of students graduating with computer science degrees is not growing in proportion to this need. Traditionally, several groups are underrepresented in this field, notably women and students of color. This study investigated elements of computer science education that influence academic achievement in beginning computer programming courses. The goal of the study was to identify elements that increase success in computer programming courses. A 38-item questionnaire was developed and administered during the Spring 2016 semester at California State University Fullerton (CSUF). CSUF is an urban public university composed of about 40,000 students. Data were collected from three beginning programming classes offered at CSUF. In total 411 questionnaires were collected, resulting in a response rate of 58.63%. Data for the study were grouped into three broad categories of variables. These included academic and background variables; affective variables; and peer, mentor, and role-model variables. A conceptual model was developed to investigate how these variables might predict final course grade. Data were analyzed using statistical techniques such as linear regression, factor analysis, and path analysis. Ultimately this study found that peer interactions, comfort with computers, computer self-efficacy, self-concept, and perception of achievement were the best predictors of final course grade. In addition, the analyses showed that male students exhibited higher levels of computer self-efficacy and self-concept compared to female students, even when they achieved comparable course grades. Implications and explanations of these findings are explored, and potential policy changes are offered.

  11. Panel Stiffener Debonding Analysis using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2008-01-01

A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.

  12. Panel-Stiffener Debonding and Analysis Using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2007-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.
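
The virtual crack closure technique used in these two analyses reduces, for a 2-D crack modeled with 4-node elements of length da, to simple products of crack-tip nodal forces and relative displacements one element behind the tip: G_I = F_y dv / (2 b da) and G_II = F_x du / (2 b da). A sketch with made-up inputs (not values from the papers):

```python
# Illustrative VCCT arithmetic; all numbers are assumed inputs.
da, b = 0.5, 1.0          # element length, width (mm)
F_x, F_y = 12.0, 40.0     # crack-tip nodal forces (N)
du, dv = 0.002, 0.010     # relative sliding / opening displacements (mm)

G_I = F_y * dv / (2.0 * b * da)      # opening (mode I) energy release rate
G_II = F_x * du / (2.0 * b * da)     # sliding (mode II) energy release rate
G_total = G_I + G_II
mode_mix = G_II / G_total            # mixed-mode ratio fed to failure criteria
print(f"G_I = {G_I:.4f}, G_II = {G_II:.4f} N/mm, G_II/G = {mode_mix:.3f}")
```

A failure index then follows by comparing G_total against the material's mixed-mode fracture toughness at this mode mix.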

  13. An atomic finite element model for biodegradable polymers. Part 2. A model for change in Young's modulus due to polymer chain scission.

    PubMed

    Gleadall, Andrew; Pan, Jingzhe; Kruft, Marc-Anton

    2015-11-01

    Atomic simulations were undertaken to analyse the effect of polymer chain scission on amorphous poly(lactide) during degradation. Many experimental studies have analysed mechanical properties degradation but relatively few computation studies have been conducted. Such studies are valuable for supporting the design of bioresorbable medical devices. Hence in this paper, an Effective Cavity Theory for the degradation of Young's modulus was developed. Atomic simulations indicated that a volume of reduced-stiffness polymer may exist around chain scissions. In the Effective Cavity Theory, each chain scission is considered to instantiate an effective cavity. Finite Element Analysis simulations were conducted to model the effect of the cavities on Young's modulus. Since polymer crystallinity affects mechanical properties, the effect of increases in crystallinity during degradation on Young's modulus is also considered. To demonstrate the ability of the Effective Cavity Theory, it was fitted to several sets of experimental data for Young's modulus in the literature. Copyright © 2015 Elsevier Ltd. All rights reserved.
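
The modeling idea, that each chain scission instantiates a compliant cavity so the modulus falls with scission density, can be caricatured with a generic dilute-porosity estimate. This is NOT the paper's Effective Cavity Theory; the modulus, cavity volume, and softening coefficient below are assumed placeholders:

```python
# Generic dilute-porosity caricature of modulus loss from chain scission.
E0 = 3.5e9                  # Young's modulus of glassy polymer (Pa), typical
V_CAV = 1.0e-27             # effective cavity volume per scission (m^3), assumed
C = 2.0                     # dilute softening coefficient, assumed

def youngs_modulus(scissions_per_m3):
    porosity = min(scissions_per_m3 * V_CAV, 0.5)
    return E0 * max(1.0 - C * porosity, 0.0)

E_deg = youngs_modulus(1.0e26)   # effective porosity 0.1 -> 20% stiffness loss
print(f"E/E0 = {E_deg / E0:.2f}")
```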

  14. Elemental abundance analyses with DAO spectrograms. VII - The late normal B stars Pi Ceti, 134 Tauri, 21 Aquilae, and Nu Capricorni and the use of RETICON spectra

    NASA Astrophysics Data System (ADS)

    Adelman, Saul J.

    1991-09-01

    This paper presents elemental abundance analyses of sharp-lined normal late B stars. These stars exhibit mostly near-solar abundances, but each star also shows a few abundances which are a factor of 2 less than solar. The coadded photographic spectrograms are supplemented with Reticon data. A comparison of 261 equivalent widths on 2.4 A/mm spectra of sharp-lined B and A stars shows that the Reticon equivalent widths are about 95 percent of the coadded equivalent mean. The H-gamma profiles of the coadded and Reticon spectra for eight sharp-lined stars show generally good agreement. The generally high quality of the coadded data produced from 10 or more spectrograms is confirmed using the REDUCE graphics-oriented computed reduction code. For five stars, metal lines which fall in the gap between the U and V plates are analyzed using Reticon data.

  15. Elemental abundance analyses with DAO spectrograms. VII - The late normal B stars Pi Ceti, 134 Tauri, 21 Aquilae, and Nu Capricorni and the use of Reticon spectra

    NASA Technical Reports Server (NTRS)

    Adelman, Saul J.

    1991-01-01

    This paper presents elemental abundance analyses of sharp-lined normal late B stars. These stars exhibit mostly near-solar abundances, but each star also shows a few abundances which are a factor of 2 less than solar. The coadded photographic spectrograms are supplemented with Reticon data. A comparison of 261 equivalent widths on 2.4 A/mm spectra of sharp-lined B and A stars shows that the Reticon equivalent widths are about 95 percent of the coadded equivalent mean. The H-gamma profiles of the coadded and Reticon spectra for eight sharp-lined stars show generally good agreement. The generally high quality of the coadded data produced from 10 or more spectrograms is confirmed using the REDUCE graphics-oriented computed reduction code. For five stars, metal lines which fall in the gap between the U and V plates are analyzed using Reticon data.

  16. High-speed on-chip windowed centroiding using photodiode-based CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)

    2003-01-01

A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least x and y centroids. The plurality of computation elements has only passive elements to provide inner products of pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.

  17. High-speed on-chip windowed centroiding using photodiode-based CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)

    2004-01-01

A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least x and y centroids. The plurality of computation elements has only passive elements to provide inner products of pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.
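
The computation these two patents describe amounts to ratios of inner products: the pixel signals are dotted with the column (or row) index vector and with the all-ones vector, and the divider circuit forms the quotient. A minimal numeric sketch on a synthetic window (not circuit-level detail from the patents):

```python
import numpy as np

# Synthetic 8x8 pixel window with a small bright spot.
win = np.zeros((8, 8))
win[2:5, 3:6] = 1.0
win[3, 4] = 4.0                      # peak pixel

cols = np.arange(win.shape[1], dtype=float)
rows = np.arange(win.shape[0], dtype=float)
total = win.sum()                    # inner product with the all-ones vector
x_c = (win.sum(axis=0) @ cols) / total   # column moment / total
y_c = (win.sum(axis=1) @ rows) / total   # row moment / total
print(f"centroid at ({x_c:.3f}, {y_c:.3f})")
```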

  18. Constitutive modeling for isotropic materials (HOST)

    NASA Technical Reports Server (NTRS)

    Lindholm, U. S.; Chan, K. S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.; Cassenti, B. N.

    1985-01-01

    This report presents the results of the second year of work on a problem which is part of the NASA HOST Program. Its goals are: (1) to develop and validate unified constitutive models for isotropic materials, and (2) to demonstrate their usefulness for structural analyses of hot section components of gas turbine engines. The unified models selected for development and evaluation are those of Bodner-Partom and Walker. For model evaluation purposes, a large constitutive data base is generated for a B1900 + Hf alloy by performing uniaxial tensile, creep, cyclic, stress relaxation, and thermomechanical fatigue (TMF) tests as well as biaxial (tension/torsion) tests under proportional and nonproportional loading over a wide range of strain rates and temperatures. Systematic approaches for evaluating material constants from a small subset of the data base are developed. Correlations of the uniaxial and biaxial test data with the theories of Bodner-Partom and Walker are performed to establish the accuracy, range of applicability, and integrability of the models. Both models are implemented in the MARC finite element computer code and used for TMF analyses. Benchmark notch round experiments are conducted and the results compared with finite-element analyses using the MARC code and the Walker model.

  19. Modelling Technology for Building Fire Scene with Virtual Geographic Environment

    NASA Astrophysics Data System (ADS)

    Song, Y.; Zhao, L.; Wei, M.; Zhang, H.; Liu, W.

    2017-09-01

    Building fires are risky events that can lead to disaster and massive destruction. The management and disposal of building fires have always attracted much interest from researchers. An integrated Virtual Geographic Environment (VGE) is a good choice for building fire safety management and emergency decisions, in which a more realistic and richer fire process can be computed and obtained dynamically, and the results of fire simulations and analyses can be much more accurate as well. To model a building fire scene with VGE, the application requirements and modelling objectives of the building fire scene were analysed in this paper. Then, the four core elements of modelling the building fire scene (the building space environment, the fire event, the indoor Fire Extinguishing System (FES), and the indoor crowd) were implemented, and the relationships between the elements were also discussed. Finally, with the theory and framework of VGE, the building fire scene system was designed with the data environment, the model environment, the expression environment, and the collaborative environment. The functions and key techniques in each environment are also analysed, which may provide a reference for further development and other research on VGE.

  20. Damage Tolerance Analysis of a Pressurized Liquid Oxygen Tank

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Harvin, Stephen F.; Gregory, Peyton B.; Mason, Brian H.; Thompson, Joe E.; Hoffman, Eric K.

    2006-01-01

    A damage tolerance assessment was conducted of an 8,000 gallon pressurized Liquid Oxygen (LOX) tank. The LOX tank is constructed of a stainless steel pressure vessel enclosed by a thermal-insulating vacuum jacket. The vessel is pressurized to 2,250 psi with gaseous nitrogen, resulting in both thermal and pressure stresses on the tank wall. Finite element analyses were performed on the tank to characterize the stresses from operation. Engineering material data were obtained from both the construction of the tank and the technical literature. An initial damage state was assumed based on records of a nondestructive inspection performed on the tank. The damage tolerance analyses were conducted using the NASGRO computer code. This paper contains the assumptions and justifications made for the input parameters to the damage tolerance analyses, the results of those analyses, and a discussion of the operational safety of the LOX tank.

  1. Effect of Stitching on Debonding in Composite Structural Elements

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Glaessgen, E. H.

    2001-01-01

    Stitched multiaxial warp knit materials have been suggested as viable alternatives to laminated prepreg materials for large aircraft structures such as wing skins. Analyses have been developed to quantify the effectiveness of stitching for reducing strain energy release rates in skin-stiffener debond, lap joint and sandwich debond configurations. Strain energy release rates were computed using the virtual crack closure technique. In all configurations, the stitches were shown to significantly reduce the strain energy release rate.
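
    The virtual crack closure technique used in these analyses has a standard one-term form; the sketch below shows that generic formula, not the authors' specific stitched-composite implementation:

```python
def vcct_mode_i(F, du, da, b=1.0):
    """One-term virtual crack closure estimate of the mode-I strain energy
    release rate: G_I = F * du / (2 * da * b), where F is the crack-tip
    nodal force, du the crack-face opening displacement behind the tip,
    da the element length at the crack front, and b the width."""
    return F * du / (2.0 * da * b)

# Example: F = 10 N, du = 0.002 mm, da = 0.5 mm, unit width.
print(vcct_mode_i(10.0, 0.002, 0.5))  # 0.02
```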

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Itagaki, Masafumi; Miyoshi, Yoshinori; Hirose, Hideyuki

    A procedure is presented for the determination of geometric buckling for regular polygons. A new computation technique, the multiple reciprocity boundary element method (MRBEM), has been applied to solve the one-group neutron diffusion equation. The main difficulty in applying the ordinary boundary element method (BEM) to neutron diffusion problems has been the need to compute a domain integral, resulting from the fission source. The MRBEM has been developed for transforming this type of domain integral into an equivalent boundary integral. The basic idea of the MRBEM is to apply repeatedly the reciprocity theorem (Green's second formula) using a sequence of higher order fundamental solutions. The MRBEM requires discretization of the boundary only rather than of the domain. This advantage is useful for extensive survey analyses of buckling for complex geometries. The results of survey analyses have indicated that the general form of geometric buckling is B_g^2 = (a_n/R_c)^2, where R_c represents the radius of the circumscribed circle of the regular polygon under consideration. The geometric constant a_n depends on the type of regular polygon and takes the value of π for a square and 2.405 for a circle, an extreme case that has an infinite number of sides. Values of a_n for a triangle, pentagon, hexagon, and octagon have been calculated as 4.190, 2.281, 2.675, and 2.547, respectively.
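
    The reported buckling relation is simple enough to evaluate directly; a small helper using the geometric constants quoted in the abstract:

```python
import math

# Geometric constants a_n quoted in the abstract, indexed by polygon type
# (R_c is the radius of the circumscribed circle).
A_N = {"triangle": 4.190, "square": math.pi, "pentagon": 2.281,
       "hexagon": 2.675, "octagon": 2.547, "circle": 2.405}

def geometric_buckling(shape, r_c):
    """Evaluate B_g^2 = (a_n / R_c)^2 for a regular polygon."""
    return (A_N[shape] / r_c) ** 2

print(geometric_buckling("square", 1.0))  # pi**2, about 9.87
```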

  3. Assignment Of Finite Elements To Parallel Processors

    NASA Technical Reports Server (NTRS)

    Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.

    1990-01-01

    Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.
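
    A minimal sketch of the simulated-annealing idea (random reassignments accepted with a temperature-dependent probability); the cost function here is a hypothetical load-imbalance measure, not the approximate computation time minimized in the actual work:

```python
import math
import random

def anneal_mapping(num_elems, num_procs, cost, steps=5000, t0=1.0):
    """Assign elements to processors by simulated annealing: propose a
    random reassignment, keep it if it does not raise the cost, and
    otherwise keep it with probability exp(-delta/T) under a linear
    cooling schedule."""
    random.seed(0)
    assign = [random.randrange(num_procs) for _ in range(num_elems)]
    cur = cost(assign)
    best, best_cost = assign[:], cur
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9  # cooling temperature
        e = random.randrange(num_elems)
        old = assign[e]
        assign[e] = random.randrange(num_procs)
        new = cost(assign)
        if new <= cur or random.random() < math.exp((cur - new) / t):
            cur = new
            if new < best_cost:
                best, best_cost = assign[:], new
        else:
            assign[e] = old  # reject the move
    return best, best_cost

def imbalance(assign, num_procs=4):
    """Hypothetical cost: spread between most and least loaded processor."""
    loads = [assign.count(p) for p in range(num_procs)]
    return max(loads) - min(loads)

mapping, final_cost = anneal_mapping(20, 4, imbalance)
```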

  4. An Embedded Statistical Method for Coupling Molecular Dynamics and Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Saether, E.; Glaessgen, E.H.; Yamakov, V.

    2008-01-01

    The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
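
    The statistical averaging step at the core of ESCM can be sketched as follows; the spherical averaging volume and its radius are illustrative assumptions, not the paper's actual volume definition:

```python
import numpy as np

def nodal_averages(atom_pos, atom_disp, node_pos, radius):
    """For each interface FE node, average the displacements of the atoms
    inside its local averaging volume (here a sphere of the given radius),
    producing the boundary data exchanged between the MD and FEM systems."""
    out = []
    for node in node_pos:
        mask = np.linalg.norm(atom_pos - node, axis=1) <= radius
        out.append(atom_disp[mask].mean(axis=0))
    return np.array(out)

atoms = np.array([[0.0, 0, 0], [1.0, 0, 0], [5.0, 0, 0]])
disps = np.array([[1.0, 0, 0], [3.0, 0, 0], [100.0, 0, 0]])
nodes = np.array([[0.5, 0, 0]])
print(nodal_averages(atoms, disps, nodes, 1.0))
```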

  5. A New Concurrent Multiscale Methodology for Coupling Molecular Dynamics and Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin; Saether, Erik; Glaessgen, Edward H.

    2008-01-01

    The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.

  6. Transient Seepage for Levee Engineering Analyses

    NASA Astrophysics Data System (ADS)

    Tracy, F. T.

    2017-12-01

    Historically, steady-state seepage analyses have been a key tool for designing levees by practicing engineers. However, with the advances in computer modeling, transient seepage analysis has become a potentially viable tool. A complication is that the levees usually have partially saturated flow, and this is significantly more complicated in transient flow. This poster illustrates four elements of our research in partially saturated flow relating to the use of transient seepage for levee design: (1) a comparison of results from SEEP2D, SEEP/W, and SLIDE for a generic levee cross section common to the southeastern United States; (2) the results of a sensitivity study of varying saturated hydraulic conductivity, the volumetric water content function (as represented by van Genuchten), and volumetric compressibility; (3) a comparison of when soils do and do not exhibit hysteresis, and (4) a description of proper and improper use of transient seepage in levee design. The variables considered for the sensitivity and hysteresis studies are pore pressure beneath the confining layer at the toe, the flow rate through the levee system, and a levee saturation coefficient varying between 0 and 1. Getting results for SEEP2D, SEEP/W, and SLIDE to match proved more difficult than expected. After some effort, the results matched reasonably well. Differences in results were caused by various factors, including bugs, different finite element meshes, different numerical formulations of the system of nonlinear equations to be solved, and differences in convergence criteria. Varying volumetric compressibility affected the above test variables the most. The levee saturation coefficient was most affected by the use of hysteresis. The improper use of pore pressures from a transient finite element seepage solution imported into a slope stability computation was found to be the most grievous mistake in using transient seepage in the design of levees.
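
    The van Genuchten retention function varied in the sensitivity study has a standard closed form; a sketch using the usual m = 1 - 1/n convention, with illustrative parameter values:

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content as a function of pressure head h (negative
    in the unsaturated zone) from the van Genuchten model:
    theta = theta_r + (theta_s - theta_r) * [1 + (alpha*|h|)^n]^(-m)."""
    if h >= 0.0:
        return theta_s  # saturated
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)  # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Wetter (less negative) heads hold more water:
print(van_genuchten_theta(-0.5, 0.10, 0.40, 1.5, 2.0) >
      van_genuchten_theta(-5.0, 0.10, 0.40, 1.5, 2.0))  # True
```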

  7. HydroShare: An online, collaborative environment for the sharing of hydrologic data and models (Invited)

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Arrigo, J.; Hooper, R. P.; Valentine, D. W.; Maidment, D. R.

    2013-12-01

    HydroShare is an online, collaborative system being developed for sharing hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. HydroShare will use the integrated Rule-Oriented Data System (iRODS) to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.

  8. Finite element modelling of crash response of composite aerospace sub-floor structures

    NASA Astrophysics Data System (ADS)

    McCarthy, M. A.; Harte, C. G.; Wiggenraad, J. F. M.; Michielsen, A. L. P. J.; Kohlgrüber, D.; Kamoulakos, A.

    Composite energy-absorbing structures for use in aircraft are being studied within a European Commission research programme (CRASURV - Design for Crash Survivability). One of the aims of the project is to evaluate the current capabilities of crashworthiness simulation codes for composites modelling. This paper focuses on the computational analysis, using explicit finite element analysis, of a number of quasi-static and dynamic tests carried out within the programme. It describes the design of the structures, the analysis techniques used, and the results of the analyses in comparison to the experimental test results. It has been found that current multi-ply shell models are capable of modelling the main energy-absorbing processes at work in such structures. However, some deficiencies exist, particularly in modelling fabric composites. Developments within the finite element code are taking place as a result of this work which will enable better representation of composite fabrics.

  9. Cross-ply laminates with holes in compression - Straight free-edge stresses determined by two- to three-dimensional global/local finite element analysis

    NASA Technical Reports Server (NTRS)

    Thompson, Danniella Muheim; Griffin, O. Hayden, Jr.; Vidussoni, Marco A.

    1990-01-01

    A practical example of applying two- to three-dimensional (2-D to 3-D) global/local finite element analysis to laminated composites is presented. Cross-ply graphite/epoxy laminates of 0.1-in. (0.254-cm) thickness with central circular holes ranging from 1 to 6 in. (2.54 to 15.2 cm) in diameter, subjected to in-plane compression, were analyzed. Guidelines for full three-dimensional finite element analysis and two- to three-dimensional global/local analysis of interlaminar stresses at straight free edges of laminated composites are included. The larger holes were found to substantially reduce the interlaminar stresses at the straight free edge in proximity to the hole. Three-dimensional stress results were obtained for thin laminates that would require prohibitive computer resources for full three-dimensional analyses of comparable accuracy.

  10. Dental application of novel finite element analysis software for three-dimensional finite element modeling of a dentulous mandible from its computed tomography images.

    PubMed

    Nakamura, Keiko; Tajima, Kiyoshi; Chen, Ker-Kong; Nagamatsu, Yuki; Kakigawa, Hiroshi; Masumi, Shin-ich

    2013-12-01

    This study focused on the application of novel finite-element analysis software for constructing a finite-element model from the computed tomography data of a human dentulous mandible. The finite-element model is necessary for evaluating the mechanical response of the alveolar part of the mandible, resulting from occlusal force applied to the teeth during biting. Commercially available patient-specific general computed tomography-based finite-element analysis software was solely applied to the finite-element analysis for the extraction of computed tomography data. The mandibular bone with teeth was extracted from the original images. Both the enamel and the dentin were extracted after image processing, and the periodontal ligament was created from the segmented dentin. The constructed finite-element model was reasonably accurate using a total of 234,644 nodes and 1,268,784 tetrahedral and 40,665 shell elements. The elastic moduli of the heterogeneous mandibular bone were determined from the bone density data of the computed tomography images. The results suggested that the software applied in this study is both useful and powerful for creating a more accurate three-dimensional finite-element model of a dentulous mandible from the computed tomography data without the need for any other software.
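
    Mapping CT-derived bone density to elastic moduli, as described above, is commonly done with an empirical power law; the coefficients below are illustrative placeholders, not the values used by the study's software:

```python
def youngs_modulus_from_density(rho, a=2014.0, b=2.5):
    """Map apparent bone density rho (g/cm^3) to Young's modulus in MPa
    via the generic power-law form E = a * rho**b, often used for
    heterogeneous bone; a and b here are illustrative, not the study's."""
    return a * rho ** b

print(youngs_modulus_from_density(1.0))  # 2014.0
```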

  11. Scan for Motifs: a webserver for the analysis of post-transcriptional regulatory elements in the 3' untranslated regions (3' UTRs) of mRNAs.

    PubMed

    Biswas, Ambarish; Brown, Chris M

    2014-06-08

    Gene expression in vertebrate cells may be controlled post-transcriptionally through regulatory elements in mRNAs. These are usually located in the untranslated regions (UTRs) of mRNA sequences, particularly the 3'UTRs. Scan for Motifs (SFM) simplifies the process of identifying a wide range of regulatory elements on alignments of vertebrate 3'UTRs. SFM includes identification of both RNA Binding Protein (RBP) sites and targets of miRNAs. In addition to searching pre-computed alignments, the tool provides users the flexibility to search their own sequences or alignments. The regulatory elements may be filtered by expected value cutoffs and are cross-referenced back to their respective sources and literature. The output is an interactive graphical representation, highlighting potential regulatory elements and overlaps between them. The output also provides simple statistics and links to related resources for complementary analyses. The overall process is intuitive and fast. As SFM is a free web-application, the user does not need to install any software or databases. Visualisation of the binding sites of different classes of effectors that bind to 3'UTRs will facilitate the study of regulatory elements in 3' UTRs.
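
    A toy version of the kind of scan SFM automates (SFM itself searches curated RBP-site and miRNA-target collections over alignments); the pattern here is the AU-rich element core pentamer written in the DNA alphabet:

```python
import re

def scan_for_motif(utr_seq, motif="ATTTA"):
    """Return the start positions of every (non-overlapping) occurrence
    of a regulatory-element pattern in a 3'UTR sequence."""
    return [m.start() for m in re.finditer(motif, utr_seq.upper())]

print(scan_for_motif("ggcATTTAcgtatttagg"))  # [3, 11]
```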

  12. DRE-Enhanced Swept-Wing Natural Laminar Flow at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Malik, Mujeeb; Liao, Wei; Li, Fei; Choudhari, Meelan

    2013-01-01

    Nonlinear parabolized stability equations and secondary instability analyses are used to provide a computational assessment of the potential use of discrete roughness element (DRE) technology for extending swept-wing natural laminar flow at chord Reynolds numbers relevant to transport aircraft. Computations performed for the boundary layer on a natural laminar flow airfoil with a leading-edge sweep angle of 34.6 deg, a free-stream Mach number of 0.75, and chord Reynolds numbers of 17 x 10^6, 24 x 10^6, and 30 x 10^6 suggest that DRE could delay laminar-turbulent transition by about 20% when transition is caused by stationary crossflow disturbances. Computations show that the introduction of small-wavelength stationary crossflow disturbances (i.e., DRE) also suppresses the growth of the most amplified traveling crossflow disturbances.

  13. An Ancient Transkingdom Horizontal Transfer of Penelope-Like Retroelements from Arthropods to Conifers

    PubMed Central

    Lin, Xuan; Faridi, Nurul; Casola, Claudio

    2016-01-01

    Comparative genomics analyses empowered by the wealth of sequenced genomes have revealed numerous instances of horizontal DNA transfers between distantly related species. In eukaryotes, repetitive DNA sequences known as transposable elements (TEs) are especially prone to move across species boundaries. Such horizontal transposon transfers, or HTTs, are relatively common within major eukaryotic kingdoms, including animals, plants, and fungi, while rarely occurring across these kingdoms. Here, we describe the first case of HTT from animals to plants, involving TEs known as Penelope-like elements, or PLEs, a group of retrotransposons closely related to eukaryotic telomerases. Using a combination of in situ hybridization on chromosomes, polymerase chain reaction experiments, and computational analyses, we show that the predominant PLE lineage, EN(+)PLEs, is highly diversified in loblolly pine and other conifers, but appears to be absent in other gymnosperms. Phylogenetic analyses of both protein and DNA sequences reveal that conifer EN(+)PLEs, or Dryads, form a monophyletic group clustering within a clade of primarily arthropod elements. Additionally, no EN(+)PLEs were detected in 1,928 genome assemblies from 1,029 nonmetazoan and nonconifer genomes from 14 major eukaryotic lineages. These findings indicate that Dryads emerged following an ancient horizontal transfer of EN(+)PLEs from arthropods to a common ancestor of conifers approximately 340 Ma. This represents one of the oldest known interspecific transmissions of TEs, and the most conspicuous case of DNA transfer between animals and plants. PMID:27190138

  14. Elevated temperature crack growth

    NASA Technical Reports Server (NTRS)

    Kim, K. S.; Vanstone, R. H.

    1992-01-01

    The purpose of this program was to extend the work performed in the base program (CR 182247) into the regime of time-dependent crack growth under isothermal and thermal mechanical fatigue (TMF) loading, where creep deformation also influences the crack growth behavior. The investigation was performed in a two-year, six-task, combined experimental and analytical program. The path-independent integrals for application to time-dependent crack growth were critically reviewed. The crack growth was simulated using a finite element method. The path-independent integrals were computed from the results of finite-element analyses. The ability of these integrals to correlate experimental crack growth data were evaluated under various loading and temperature conditions. The results indicate that some of these integrals are viable parameters for crack growth prediction at elevated temperatures.

  15. Nonlinear metamaterials for holography

    PubMed Central

    Almeida, Euclides; Bitton, Ora

    2016-01-01

    A hologram is an optical element storing phase and possibly amplitude information enabling the reconstruction of a three-dimensional image of an object by illumination and scattering of a coherent beam of light; the image is generated at the same wavelength as the input laser beam. In recent years, it was shown that information can be stored in nanometric antennas giving rise to ultrathin components. Here we demonstrate nonlinear multilayer metamaterial holograms. A background-free image is formed at a new frequency—the third harmonic of the illuminating beam. Using e-beam lithography of multilayer plasmonic nanoantennas, we fabricate polarization-sensitive nonlinear elements such as blazed gratings, lenses and other computer-generated holograms. These holograms are analysed and prospects for future device applications are discussed. PMID:27545581

  16. Bearing tester data compilation, analysis, and reporting and bearing math modeling

    NASA Technical Reports Server (NTRS)

    1986-01-01

    A test condition data base was developed for the Bearing and Seal Materials Tester (BSMT) program which permits rapid retrieval of test data for trend analysis and evaluation. A model was developed for the Space Shuttle Main Engine (SSME) Liquid Oxygen (LOX) turbopump shaft/bearing system. The model was used to perform parametric analyses to determine the sensitivity of bearing operating characteristics and temperatures to variations in: axial preload, contact friction, coolant flow and subcooling, heat transfer coefficients, outer race misalignments, and outer race to isolator clearances. The bearing program ADORE (Advanced Dynamics of Rolling Elements) was installed on the UNIVAC 1100/80 computer system and is operational. ADORE is an advanced FORTRAN computer program for the real time simulation of the dynamic performance of rolling bearings. A model of the 57 mm turbine-end bearing is currently being checked out. Analyses were conducted to estimate flow work energy for several flow diverter configurations and coolant flow rates for the LOX BSMT.

  17. Higher-order adaptive finite-element methods for Kohn–Sham density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motamarri, P.; Nowak, M.R.; Leiter, K.

    2013-11-15

    We present an efficient computational approach to perform real-space electronic structure calculations using an adaptive higher-order finite-element discretization of Kohn–Sham density-functional theory (DFT). To this end, we develop an a priori mesh-adaption technique to construct a close to optimal finite-element discretization of the problem. We further propose an efficient solution strategy for solving the discrete eigenvalue problem by using spectral finite-elements in conjunction with Gauss–Lobatto quadrature, and a Chebyshev acceleration technique for computing the occupied eigenspace. The proposed approach has been observed to provide a staggering 100–200-fold computational advantage over the solution of a generalized eigenvalue problem. Using the proposed solution procedure, we investigate the computational efficiency afforded by higher-order finite-element discretizations of the Kohn–Sham DFT problem. Our studies suggest that staggering computational savings—of the order of 1000-fold—relative to linear finite-elements can be realized, for both all-electron and local pseudopotential calculations, by using higher-order finite-element discretizations. On all the benchmark systems studied, we observe diminishing returns in computational savings beyond the sixth-order for accuracies commensurate with chemical accuracy, suggesting that the hexic spectral-element may be an optimal choice for the finite-element discretization of the Kohn–Sham DFT problem. A comparative study of the computational efficiency of the proposed higher-order finite-element discretizations suggests that the performance of the finite-element basis is competitive with the plane-wave discretization for non-periodic local pseudopotential calculations, and compares to the Gaussian basis for all-electron calculations to within an order of magnitude. Further, we demonstrate the capability of the proposed approach to compute the electronic structure of a metallic system containing 1688 atoms using modest computational resources, and good scalability of the present implementation up to 192 processors.
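
    The Chebyshev acceleration mentioned above is, in essence, a polynomial filter built from the three-term Chebyshev recurrence; a minimal dense-matrix sketch (the production code operates on large spectral finite-element discretizations, not small dense arrays):

```python
import numpy as np

def chebyshev_filter(H, X, degree, a, b):
    """Apply the degree-m Chebyshev polynomial of H, mapped so that the
    unwanted spectrum [a, b] is damped into [-1, 1] while eigencomponents
    below a are strongly amplified -- the standard device for computing an
    occupied eigenspace without solving a full eigenproblem."""
    e = (b - a) / 2.0          # half-width of the damped interval
    c = (b + a) / 2.0          # center of the damped interval
    Y = (H @ X - c * X) / e    # T_1 of the mapped operator, applied to X
    X_prev = X
    for _ in range(2, degree + 1):
        Y_next = 2.0 * (H @ Y - c * Y) / e - X_prev
        X_prev, Y = Y, Y_next
    return Y

# Eigenvalue 0 lies below the damped window [1, 11] and is amplified,
# while eigenvalue 10 (inside the window) stays O(1):
H = np.diag([0.0, 10.0])
Y = chebyshev_filter(H, np.array([[1.0], [1.0]]), 8, 1.0, 11.0)
```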

  18. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity, and for small numbers of modes the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
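
    The overall-finite-difference approach evaluated in the thesis amounts to re-running the analysis at perturbed designs; a generic central-difference sketch (the thesis applies this to reduced-basis transient analyses, not to a toy function):

```python
def fd_sensitivity(response, design, i, h=1e-6):
    """Central-difference sensitivity of a response quantity with respect
    to design variable i: the analysis is repeated at design[i] +/- h and
    the two responses are differenced."""
    up = list(design); up[i] += h
    dn = list(design); dn[i] -= h
    return (response(up) - response(dn)) / (2.0 * h)

# For u(d) = d0**2 * d1, du/dd0 at (3, 2) is 2*3*2 = 12.
print(fd_sensitivity(lambda d: d[0] ** 2 * d[1], [3.0, 2.0], 0))
```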

  19. Scoresum - A technique for displaying and evaluating multi-element geochemical information, with examples of its use in regional mineral assessment programs

    USGS Publications Warehouse

    Chaffee, M.A.

    1983-01-01

    A technique called SCORESUM was developed to display a maximum of multi-element geochemical information on a minimum number of maps for mineral assessment purposes. The technique can be done manually for a small analytical data set or with a computer for a large data set. SCORESUM can be used with highly censored data and can also weight samples so as to minimize the chemical differences of diverse lithologies in different parts of a given study area. The full range of reported analyses for each element of interest in a data set is divided into four categories. Anomaly scores - values of 0 (background), 1 (weakly anomalous), 2 (moderately anomalous), and 3 (strongly anomalous) - are substituted for all of the analyses falling into each of the four categories. A group of elements based on known or suspected association in altered or mineralized areas is selected for study, and the anomaly scores for these elements are summed for each sample site and then plotted on a map. Some of the results of geochemical studies conducted for mineral assessments in two areas are briefly described. The first area, the Mokelumne Wilderness and vicinity, is relatively small and geologically simple. The second, the Walker Lake 1° x 2° quadrangle, is a large area that has extremely complex geology and that contains a number of different mineral deposit environments. These two studies provide examples of how the SCORESUM technique has been used (1) to enhance relatively small but anomalous areas and (2) to delineate and rank areas containing geochemical signatures for specific suites of elements related to certain types of alteration or mineralization. © 1983.
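
    The scoring step described can be written down directly; the cutoff values below are illustrative, since in practice the thresholds for each element are chosen from the data set (and samples may be weighted by lithology):

```python
def scoresum(site_values, thresholds):
    """Sum per-element anomaly scores for one sample site. Each analysis
    is binned by three cutoffs into scores 0 (background), 1 (weakly
    anomalous), 2 (moderately anomalous), or 3 (strongly anomalous)."""
    total = 0
    for element, value in site_values.items():
        weak, moderate, strong = thresholds[element]
        if value >= strong:
            total += 3
        elif value >= moderate:
            total += 2
        elif value >= weak:
            total += 1
    return total

site = {"Cu": 120.0, "Pb": 40.0, "Zn": 10.0}   # ppm, illustrative
cuts = {"Cu": (50, 100, 200), "Pb": (30, 60, 120), "Zn": (80, 160, 320)}
print(scoresum(site, cuts))  # Cu scores 2, Pb scores 1, Zn scores 0 -> 3
```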

  20. Statistical Constraints from Siderophile Elements on Earth's Accretion, Differentiation, and Initial Core Stratification

    NASA Astrophysics Data System (ADS)

    O'Rourke, J. G.; Stevenson, D. J.

    2015-12-01

    Abundances of siderophile elements in the primitive mantle constrain the conditions of Earth's core/mantle differentiation. Core growth occurred as Earth accreted from collisions between planetesimals and larger embryos of unknown original provenance, so geochemistry is directly related to the overall dynamics of Solar System formation. Recent studies claim that only certain conditions of equilibration (pressure, temperature, and oxygen fugacity) during core formation can reproduce the available data. Typical analyses, however, only consider the effects of varying a few out of tens of free parameters in continuous core formation models. Here we describe the Markov chain Monte Carlo method, which simultaneously incorporates the large uncertainties on Earth's composition and the parameterizations that describe elemental partitioning between metal and silicate. This Bayesian technique is vastly more computationally efficient than a simple grid search and is well suited to models of planetary accretion that involve a plethora of variables. In contrast to previous work, we find that analyses of siderophile elements alone cannot yield a unique scenario for Earth's accretion. Our models predict a wide range of possible light element contents for the core, encompassing all combinations permitted by seismology and mineral physics. Specifically, we are agnostic between silicon and oxygen as the dominant light element, and the addition of carbon or sulfur is also permissible but not well constrained. Redox conditions may have remained roughly constant during Earth's accretion or relatively oxygen-rich material could have been incorporated before reduced embryos. Pressures and temperatures of equilibration, likewise, may only increase slowly throughout accretion. Therefore, we do not necessarily expect a thick (>500 km), compositionally stratified layer that is stable against convection to develop at the top of the core of Earth (or, by analogy, Venus). 
A thinner stable layer might inhibit the initialization of the dynamo.
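
    The random-walk Metropolis step underlying such Markov chain Monte Carlo analyses can be illustrated generically (the one-parameter model and data below are invented; the actual study samples tens of accretion and partitioning parameters simultaneously):

```python
import math
import random

random.seed(0)

def log_posterior(theta, data, sigma=1.0):
    """Gaussian log-likelihood of the observations around the model prediction
    (here the model trivially predicts the parameter itself; flat prior)."""
    return -sum((d - theta) ** 2 for d in data) / (2.0 * sigma ** 2)

def metropolis(data, n_steps=20000, step=0.5, theta0=0.0):
    """Random-walk Metropolis: propose, then accept with probability min(1, p'/p)."""
    theta, logp = theta0, log_posterior(theta0, data)
    chain = []
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)
        logp_prop = log_posterior(prop, data)
        if math.log(random.random()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        chain.append(theta)
    return chain

data = [2.1, 1.9, 2.0, 2.2, 1.8]
chain = metropolis(data)
mean = sum(chain[5000:]) / len(chain[5000:])   # posterior mean, near the sample mean 2.0
```

    Unlike a grid search, the cost of such sampling grows slowly with the number of free parameters, which is the efficiency advantage claimed above.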

  1. A novel approach to enhance the accuracy of vibration control of Frames

    NASA Astrophysics Data System (ADS)

    Toloue, Iraj; Shahir Liew, Mohd; Harahap, I. S. H.; Lee, H. E.

    2018-03-01

    All structures built within known seismically active regions are typically designed to endure earthquake forces. Despite advances in earthquake-resistant structures, hindsight shows that no structure is entirely immune to earthquake damage. Active vibration control systems, unlike traditional methods that enlarge beams and columns, are highly effective countermeasures for reducing the effects of earthquake loading on a structure. Such control requires fast, near real-time computation of nonlinear structural analysis, which has historically demanded advanced programming hosted on powerful computers. This research aims to develop a new approach for active vibration control of frames that is applicable over both elastic and plastic material behavior. In this study, the Force Analogy Method (FAM), which is based on Hooke's law, is further extended using the Timoshenko element, which accounts for shear deformations, to increase the reliability and accuracy of the controller. The proposed algorithm is applied to a 2D portal frame equipped with a linear actuator designed on the basis of a full-state Linear Quadratic Regulator (LQR). For comparison purposes, the portal frame is analysed using both Euler-Bernoulli and Timoshenko elements. The results clearly demonstrate the superiority of the Timoshenko element over the Euler-Bernoulli element for application in nonlinear analysis.
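
    A minimal sketch of the full-state LQR feedback mentioned above, applied to a toy one-storey shear frame (all model values are illustrative, not the paper's; the discrete Riccati equation is solved here by plain fixed-point iteration rather than a library routine):

```python
import numpy as np

# Discrete-time model of a one-storey shear frame (illustrative values):
# state x = [displacement, velocity], control u = actuator force
dt, m, k, c = 0.01, 1.0, 100.0, 0.5
A = np.array([[1.0, dt],
              [-k / m * dt, 1.0 - c / m * dt]])   # forward-Euler discretization
B = np.array([[0.0], [dt / m]])
Q = np.diag([100.0, 1.0])    # penalize drift more heavily than velocity
R = np.array([[0.01]])       # cheap control effort

def dlqr_gain(A, B, Q, R, n_iter=500):
    """Solve the discrete algebraic Riccati equation by fixed-point iteration."""
    P = Q.copy()
    for _ in range(n_iter):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = dlqr_gain(A, B, Q, R)

# Closed-loop response from an initial displacement: u = -K x at every step
x = np.array([[0.1], [0.0]])
for _ in range(1000):
    x = A @ x + B @ (-K @ x)
print(float(np.linalg.norm(x)))   # state driven toward zero
```

    In the paper's setting, the plant model would come from the FAM/Timoshenko structural analysis rather than this two-state toy.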

  2. Revisiting of Multiscale Static Analysis of Notched Laminates Using the Generalized Method of Cells

    NASA Technical Reports Server (NTRS)

    Naghipour Ghezeljeh, Paria; Arnold, Steven M.; Pineda, Evan J.

    2016-01-01

    Composite material systems generally exhibit a range of behavior on different length scales (from constituent level to macro); therefore, a multiscale framework is beneficial for the design and engineering of these material systems. The complex nature of the composite failure observed during experiments suggests the need for a three-dimensional (3D) multiscale model to attain reliable predictions. However, the size of a multiscale three-dimensional finite element model can become prohibitively large and computationally costly. Two-dimensional (2D) models are preferred for computational efficiency, especially if many different configurations have to be analyzed for an in-depth damage tolerance and durability design study. In this study, various 2D and 3D multiscale analyses are employed to conduct a detailed investigation into the tensile failure of a given multidirectional, notched carbon fiber reinforced polymer laminate. Three-dimensional finite element analysis is typically considered more accurate than a 2D finite element model when compared with experiments. Nevertheless, in the absence of adequate mesh refinement, large differences may be observed between a 2D and a 3D analysis, especially for a shear-dominated layup. This observed difference has not been widely addressed in previous literature and is the main focus of this paper.

  3. Analytical and experimental vibration studies of a 1/8-scale shuttle orbiter

    NASA Technical Reports Server (NTRS)

    Pinson, L. D.

    1975-01-01

    Natural frequencies and mode shapes for four symmetric vibration modes and four antisymmetric modes are compared with predictions based on NASTRAN finite-element analyses. Initial predictions gave poor agreement with test data; an extensive investigation revealed that the major factors influencing agreement were out-of-plane imperfections in fuselage panels and a soft fin-fuselage connection. Computations with a more refined analysis indicated satisfactory frequency predictions for all modes studied, within 11 percent of experimental values.

  4. Three-dimensional finite-element elastic analysis of a thermally cycled single-edge wedge geometry specimen

    NASA Technical Reports Server (NTRS)

    Bizon, P. T.; Hill, R. J.; Guilliams, B. P.; Drake, S. K.; Kladden, J. L.

    1979-01-01

    An elastic stress analysis was performed on a wedge specimen (prismatic bar with single-wedge cross section) subjected to thermal cycles in fluidized beds. Seven different combinations consisting of three alloys (NASA TAZ-8A, 316 stainless steel, and A-286) and four thermal cycling conditions were analyzed. The analyses were performed as a joint effort of two laboratories using different models and computer programs (NASTRAN and ISO3DQ). Stress, strain, and temperature results are presented.

  5. Analyses of Multishaft Rotor-Bearing Response

    NASA Technical Reports Server (NTRS)

    Nelson, H. D.; Meacham, W. L.

    1985-01-01

    Method works for linear and nonlinear systems. Finite-element-based computer program developed to analyze free and forced response of multishaft rotor-bearing systems. Acronym, ARDS, denotes Analysis of Rotor Dynamic Systems. Systems with nonlinear interconnection or support bearings (or both) analyzed by numerically integrating reduced set of coupled system equations. Linear systems analyzed in closed form for steady excitations and treated as equivalent to nonlinear systems for transient excitation. ARDS is FORTRAN program developed on an Amdahl 470 (similar to IBM 370).

  6. G-Anchor: a novel approach for whole-genome comparative mapping utilizing evolutionary conserved DNA sequences.

    PubMed

    Lenis, Vasileios Panagiotis E; Swain, Martin; Larkin, Denis M

    2018-05-01

    Cross-species whole-genome sequence alignment is a critical first step for genome comparative analyses, ranging from the detection of sequence variants to studies of chromosome evolution. Animal genomes are large and complex, and whole-genome alignment is a computationally intense process, requiring expensive high-performance computing systems due to the need to explore extensive local alignments. With hundreds of sequenced animal genomes available from multiple projects, there is an increasing demand for genome comparative analyses. Here, we introduce G-Anchor, a new, fast, and efficient pipeline that uses a strictly limited but highly effective set of local sequence alignments to anchor (or map) an animal genome to another species' reference genome. G-Anchor makes novel use of a databank of highly conserved DNA sequence elements. We demonstrate how these elements may be aligned to a pair of genomes, creating anchors. These anchors enable the rapid mapping of scaffolds from a de novo assembled genome to chromosome assemblies of a reference species. Our results demonstrate that G-Anchor can successfully anchor a vertebrate genome onto a phylogenetically related reference species genome using a desktop or laptop computer within a few hours, with accuracy comparable to that achieved by a highly accurate whole-genome alignment tool such as LASTZ. G-Anchor thus makes whole-genome comparisons accessible to researchers with limited computational resources. G-Anchor is a ready-to-use tool for anchoring a pair of vertebrate genomes. It may be used with large genomes that contain a significant fraction of evolutionarily conserved DNA sequences and that are not highly repetitive, polyploid, or excessively fragmented. G-Anchor is not a substitute for whole-genome alignment software but can be used for fast and accurate initial genome comparisons, and it is freely available.
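
    A toy sketch of the anchoring idea (exact string matching stands in here for the local sequence alignments G-Anchor actually uses; the sequences, element strings, and names below are invented):

```python
def find_anchors(genome, elements):
    """Locate conserved elements (by exact match, for illustration) in a genome
    given as {sequence_name: sequence}; returns {element: [(seq_name, pos), ...]}."""
    hits = {}
    for el in elements:
        for name, seq in genome.items():
            pos = seq.find(el)
            if pos != -1:
                hits.setdefault(el, []).append((name, pos))
    return hits

def map_scaffolds(reference, assembly, elements):
    """Assign each assembly scaffold to the reference chromosome sharing the most anchors."""
    ref_hits = find_anchors(reference, elements)
    assignment = {}
    for scaf, seq in assembly.items():
        votes = {}
        for el in find_anchors({scaf: seq}, elements):
            for chrom, _ in ref_hits.get(el, []):
                votes[chrom] = votes.get(chrom, 0) + 1
        if votes:
            assignment[scaf] = max(votes, key=votes.get)
    return assignment

reference = {"chr1": "AAACGTACGTTTT", "chr2": "GGGTTAACCGGGG"}
assembly = {"scaffold_1": "CCACGTACGTCC", "scaffold_2": "AATTAACCGGAA"}
elements = ["ACGTACGT", "TTAACCGG"]
print(map_scaffolds(reference, assembly, elements))
```

    The real pipeline aligns a curated databank of conserved elements to both genomes and votes over many anchors per scaffold, but the mapping logic follows this shape.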

  7. Investigation of the local stress perturbation in Long Valley, California, by coupling seismic analyses and FEM numerical modeling

    NASA Astrophysics Data System (ADS)

    Lin, G.; Albino, F.; Amelung, F.

    2017-12-01

    Long Valley Caldera in eastern California is well known for producing numerous volcanic eruptions over the past 3 Myr. There has been a stress perturbation in the vicinity of the caldera with respect to the regional stress field. In this study, we combine seismic analyses and finite-element numerical modeling to investigate this local stress anomaly. We first compute focal mechanisms for earthquakes relocated by using a three-dimensional (3-D) seismic velocity model and waveform cross-correlation data. The final 42,000 good-quality focal solutions are dominated by roughly equal proportions of normal and strike-slip faulting, with far fewer reverse mechanisms. These focal mechanisms are then used to invert for the stress field in the study area by applying the SATSI algorithm. The orientations of the inverted minimum horizontal principal stress (ShMIN) agree well with those in previous studies based on analyses of focal mechanisms, borehole breakouts, and fault offsets. The NE-SW oriented ShMIN in the resurgent dome and south moat of the caldera contrasts with the dominant E-W orientation in the western Basin and Range province and Mammoth Mountain. We then investigate which mechanism most likely causes this local stress perturbation by applying 3-D finite element modeling (FEM). Mechanical properties (e.g., density, Poisson's ratio, and Young's modulus) used in the model are derived from the latest 3-D seismic tomography model. Taking into account an initial stress field, we examine stress perturbations resulting from three different sources: (1) pressurization of a magma reservoir, (2) a dyking event, and (3) tectonic faulting; we compute the corresponding stress field orientation for each and compare it with the observations.

  8. Full Field X-Ray Fluorescence Imaging Using Micro Pore Optics for Planetary Surface Exploration

    NASA Technical Reports Server (NTRS)

    Sarrazin, P.; Blake, D. F.; Gailhanou, M.; Walter, P.; Schyns, E.; Marchis, F.; Thompson, K.; Bristow, T.

    2016-01-01

    Many planetary surface processes leave evidence as small features at the sub-millimetre scale. Current planetary X-ray fluorescence spectrometers lack the spatial resolution to analyse such small features, as they only provide global analyses of areas greater than 100 mm(exp 2). A micro-XRF spectrometer will be deployed on the NASA Mars 2020 rover to analyse spots as small as 120 µm. When its line-scanning capacity is combined with perpendicular scanning by the rover arm, elemental maps can be generated. We present a new instrument that provides full-field XRF imaging, alleviating the need for precise positioning and scanning mechanisms. The Mapping X-ray Fluorescence Spectrometer ("Map-X") will allow elemental imaging with approximately 100 µm spatial resolution and simultaneously provide elemental chemistry at the scale at which many relict physical, chemical and biological features can be imaged in ancient rocks. The arm-mounted Map-X instrument is placed directly on the surface of an object and held in a fixed position during measurements. A 25x25 mm(exp 2) surface area is uniformly illuminated with X-rays or alpha-particles and gamma-rays. A novel Micro Pore Optic focusses a fraction of the emitted X-ray fluorescence onto a CCD operated at a few frames per second. On-board processing measures the energy and coordinates of each X-ray photon collected. Large sets of frames are reduced into 2D histograms used to compute higher-level data products such as elemental maps and XRF spectra from selected regions of interest. XRF spectra are processed on the ground to further determine quantitative elemental compositions. The instrument development will be presented with an emphasis on the characterization and modelling of the X-ray focussing Micro Pore Optic. An outlook on possible alternative XRF imaging applications will be discussed.

  9. [Comparison between the Range of Movement Canine Real Cervical Spine and Numerical Simulation - Computer Model Validation].

    PubMed

    Srnec, R; Horák, Z; Sedláček, R; Sedlinská, M; Krbec, M; Nečas, A

    2017-01-01

    PURPOSE OF THE STUDY In developing new, or modifying existing, surgical treatment methods for spine conditions, an integral part of ex vivo experiments is the assessment of the mechanical, kinematic and dynamic properties of the created constructions. The aim of the study is to create an appropriately validated numerical model of the canine cervical spine in order to obtain a tool for basic research to be applied in cervical spine surgeries. For this purpose, the canine is a suitable model due to the occurrence of similar cervical spine conditions in some breeds of dogs and in humans. The obtained model can also be used in research and in clinical veterinary practice. MATERIAL AND METHODS In order to create a 3D spine model, the LightSpeed 16 (GE, Milwaukee, USA) multidetector computed tomography scanner was used to scan the cervical spine of a Doberman Pinscher. The data were transmitted to Mimics 12 software (Materialise HQ, Belgium), in which the individual vertebrae were segmented on CT scans by thresholding. The vertebral geometry was exported to Rhinoceros software (McNeel North America, USA) for modelling, and subsequently the specialised software Abaqus (Dassault Systemes, France) was used to analyse the response of the physiological spine model to external load by the finite element method (FEM). All the FEM-based numerical simulations were treated as nonlinear contact static tasks. In the FEM analyses, angles between individual spinal segments were monitored as a function of ventroflexion/dorsiflexion. The data were validated using latero-lateral radiographs of the cervical spine of large-breed dogs with no evident clinical signs of cervical spine conditions. The radiographs covering the cervical spine range of motion were taken at three different positions: in neutral position, in maximal ventroflexion and in maximal dorsiflexion. On the X-rays, vertebral inclination angles in the monitored spine positions were measured and compared with the results obtained from the FEM analyses of the numerical model. RESULTS It is obvious from the results that the physiological spine model tested by the finite element method shows mechanical behaviour very similar to that of the physiological canine spine. The biggest differences between the resulting values were found in the C6-C7 segment in dorsiflexion (Δφ = 5.95%) and in the C4-C5 segment in ventroflexion (Δφ = -3.09%). CONCLUSIONS The comparison of cervical spine mobility in ventroflexion/dorsiflexion between radiographs of the real spine and the numerical model simulated by the finite element method showed a high degree of conformity, with minimal differences. Therefore, for future experiments the validated numerical model can be used as a tool of basic research, on condition that the results of analyses carried out by the finite element method are affected only by an insignificant error. The computer model, on the other hand, is merely a simplified system and, in comparison with the real situation, cannot fully evaluate the dynamics of the action of forces in time, their variability, or the individual effects of supportive skeletal tissues. It is therefore necessary to exercise restraint in interpreting the obtained results. Key words: cervical spine, kinematics, numerical modelling, finite element method, canine.

  10. Will the digital computer transform classical mathematics?

    PubMed

    Rotman, Brian

    2003-08-15

    Mathematics and machines have influenced each other for millennia. The advent of the digital computer introduced a powerfully new element that promises to transform the relation between them. This paper outlines the thesis that the effect of the digital computer on mathematics, already widespread, is likely to be radical and far-reaching. To articulate this claim, an abstract model of doing mathematics is introduced based on a triad of actors of which one, the 'agent', corresponds to the function performed by the computer. The model is used to frame two sorts of transformation. The first is pragmatic and involves the alterations and progressive colonization of the content and methods of enquiry of various mathematical fields brought about by digital methods. The second is conceptual and concerns a fundamental antagonism between the infinity enshrined in classical mathematics and physics (continuity, real numbers, asymptotic definitions) and the inherently real and material limit of processes associated with digital computation. An example which lies in the intersection of classical mathematics and computer science, the P=NP problem, is analysed in the light of this latter issue.

  11. Shedding light into the function of the earliest vertebrate skeleton

    NASA Astrophysics Data System (ADS)

    Martinez-Perez, Carlos; Purnell, Mark; Rayfield, Emily; Donoghue, Philip

    2016-04-01

    Conodonts are an extinct group of jawless vertebrates, the first in our evolutionary lineage to develop a biomineralized skeleton. As such, the conodont skeleton is of great significance because of the insights it provides concerning the biology and function of the primitive vertebrate skeleton. Conodont function has been debated for a century and a half because of the group's palaeoecological importance in Palaeozoic ecosystems. However, the lack of close extant relatives and the small size of conodont elements (under a millimetre in length) have strongly limited functional analysis, which was traditionally restricted to analogy. More recently, qualitative approaches have been developed, facilitating tests of element function based on occlusal performance and analysis of microwear and microstructure. In this work we extend these approaches using novel quantitative experimental methods, including Synchrotron Radiation X-ray Tomographic Microscopy and Finite Element Analysis, to test hypotheses of conodont function. High-resolution virtual models of conodont elements, together with biomechanical approaches using Finite Element Analysis informed by occlusal and microwear analyses, provide conclusive support for hypotheses of structural adaptation within the crown tissue microstructure, showing close topological co-variation of compressive and tensile stress distributions with crystallite orientation. In addition, our computational analyses strongly support a tooth-like function for many conodont species. Above all, our study establishes an experimental framework in which the functional ecology of conodonts can be read from their rich taxonomy and phylogeny, representing an important step toward understanding the role of this abundant and diverse clade in Phanerozoic marine ecosystems.

  12. Effect of boundary conditions on the numerical solutions of representative volume element problems for random heterogeneous composite microstructures

    NASA Astrophysics Data System (ADS)

    Cho, Yi Je; Lee, Wook Jin; Park, Yong Ho

    2014-11-01

    Aspects of numerical results from computational experiments on representative volume element (RVE) problems using finite element analyses are discussed. Two different boundary conditions (BCs) are examined and compared numerically for volume elements with different sizes, where tests have been performed on the uniaxial tensile deformation of random particle reinforced composites. Structural heterogeneities near model boundaries such as the free-edges of particle/matrix interfaces significantly influenced the overall numerical solutions, producing force and displacement fluctuations along the boundaries. Interestingly, this effect was shown to be limited to surface regions within a certain distance of the boundaries, while the interior of the model showed almost identical strain fields regardless of the applied BCs. Also, the thickness of the BC-affected regions remained constant with varying volume element sizes in the models. When the volume element size was large enough compared to the thickness of the BC-affected regions, the structural response of most of the model was found to be almost independent of the applied BC such that the apparent properties converged to the effective properties. Finally, the mechanism that leads a RVE model for random heterogeneous materials to be representative is discussed in terms of the size of the volume element and the thickness of the BC-affected region.

  13. Computing Fiber/Matrix Interfacial Effects In SiC/RBSN

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Hopkins, Dale A.

    1996-01-01

    Computational study conducted to demonstrate use of boundary-element method in analyzing effects of fiber/matrix interface on elastic and thermal behaviors of representative laminated composite materials. In study, boundary-element method implemented by Boundary Element Solution Technology - Composite Modeling System (BEST-CMS) computer program.

  14. Finite element analysis of structural engineering problems using a viscoplastic model incorporating two back stresses

    NASA Technical Reports Server (NTRS)

    Arya, Vinod K.; Halford, Gary R.

    1993-01-01

    The feasibility of a viscoplastic model incorporating two back stresses and a drag strength is investigated for performing nonlinear finite element analyses of structural engineering problems. To demonstrate suitability for nonlinear structural analyses, the model is implemented into a finite element program and analyses for several uniaxial and multiaxial problems are performed. Good agreement is shown between the results obtained using the finite element implementation and those obtained experimentally. The advantages of using advanced viscoplastic models for performing nonlinear finite element analyses of structural components are indicated.

  15. Integration of experimental and computational methods for identifying geometric, thermal and diffusive properties of biomaterials

    NASA Astrophysics Data System (ADS)

    Weres, Jerzy; Kujawa, Sebastian; Olek, Wiesław; Czajkowski, Łukasz

    2016-04-01

    Knowledge of physical properties of biomaterials is important in understanding and designing agri-food and wood processing industries. In the study presented in this paper computational methods were developed and combined with experiments to enhance identification of agri-food and forest product properties, and to predict heat and water transport in such products. They were based on the finite element model of heat and water transport and supplemented with experimental data. Algorithms were proposed for image processing, geometry meshing, and inverse/direct finite element modelling. The resulting software system was composed of integrated subsystems for 3D geometry data acquisition and mesh generation, for 3D geometry modelling and visualization, and for inverse/direct problem computations for the heat and water transport processes. Auxiliary packages were developed to assess performance, accuracy and unification of data access. The software was validated by identifying selected properties and using the estimated values to predict the examined processes, and then comparing predictions to experimental data. The geometry, thermal conductivity, specific heat, coefficient of water diffusion, equilibrium water content and convective heat and water transfer coefficients in the boundary layer were analysed. The estimated values, used as an input for simulation of the examined processes, enabled reduction in the uncertainty associated with predictions.

  16. Patient-specific stress analyses in the ascending thoracic aorta using a finite-element implementation of the constrained mixture theory.

    PubMed

    Mousavi, S Jamaleddin; Avril, Stéphane

    2017-10-01

    It is now a rather common approach to perform patient-specific stress analyses of arterial walls using finite-element models reconstructed from gated medical images. However, this requires to compute for every Gauss point the deformation gradient between the current configuration and a stress-free reference configuration. It is technically difficult to define such a reference configuration, and there is actually no guarantee that a stress-free configuration is physically attainable due to the presence of internal stresses in unloaded soft tissues. An alternative framework was proposed by Bellini et al. (Ann Biomed Eng 42(3):488-502, 2014). It consists of computing the deformation gradients between the current configuration and a prestressed reference configuration. We present here the first finite-element results based on this concept using the Abaqus software. The reference configuration is set arbitrarily to the in vivo average geometry of the artery, which is obtained from gated medical images and is assumed to be mechanobiologically homeostatic. For every Gauss point, the stress is split additively into the contributions of each individual load-bearing constituent of the tissue, namely elastin, collagen, smooth muscle cells. Each constituent is assigned an independent prestretch in the reference configuration, named the deposition stretch. The outstanding advantage of the present approach is that it simultaneously computes the in situ stresses existing in the reference configuration and predicts the residual stresses that occur after removing the different loadings applied onto the artery (pressure and axial load). As a proof of concept, we applied it on an ideal thick-wall cylinder and showed that the obtained results were consistent with corresponding experimental and analytical results of the well-known literature. 
In addition, we developed a patient-specific model of a human ascending thoracic aneurysmal aorta and demonstrated the utility in predicting the wall stress distribution in vivo under the effects of physiological pressure. Finally, we simulated the whole process preceding traditional in vitro uniaxial tensile testing of arteries, including excision from the body, radial cutting, flattening and subsequent tensile loading, showing how this process may impact the final mechanical properties derived from these in vitro tests.

  17. Finite element analysis of 6 large PMMA skull reconstructions: A multi-criteria evaluation approach

    PubMed Central

    Ridwan-Pramana, Angela; Marcián, Petr; Borák, Libor; Narra, Nathaniel; Forouzanfar, Tymour; Wolff, Jan

    2017-01-01

    In this study 6 pre-operative designs for PMMA-based reconstructions of cranial defects were evaluated for their mechanical robustness using finite element modeling. Clinical experience and engineering principles were employed to create multiple plan options, which were subsequently computationally analyzed for mechanically relevant parameters under 50N loads: stress, strain and deformation in various components of the assembly. The factors assessed were defect size, location and shape. The major variable in the cranioplasty assembly design was the arrangement of the fixation plates. An additional study variable introduced was the location of the 50N load within the implant area. It was found that in smaller defects it was simpler to design a symmetric distribution of plates, and under limited variability in load location it was possible to design an optimal assembly for the expected loads. However, for very large defects with complex shapes, the variability in load locations complicates the intuitive design of the optimal assembly. The study shows that it can be beneficial to incorporate multi-design computational analyses to decide upon the optimal plan for a clinical case. PMID:28609471

  18. Finite element analysis of 6 large PMMA skull reconstructions: A multi-criteria evaluation approach.

    PubMed

    Ridwan-Pramana, Angela; Marcián, Petr; Borák, Libor; Narra, Nathaniel; Forouzanfar, Tymour; Wolff, Jan

    2017-01-01

    In this study 6 pre-operative designs for PMMA-based reconstructions of cranial defects were evaluated for their mechanical robustness using finite element modeling. Clinical experience and engineering principles were employed to create multiple plan options, which were subsequently computationally analyzed for mechanically relevant parameters under 50N loads: stress, strain and deformation in various components of the assembly. The factors assessed were defect size, location and shape. The major variable in the cranioplasty assembly design was the arrangement of the fixation plates. An additional study variable introduced was the location of the 50N load within the implant area. It was found that in smaller defects it was simpler to design a symmetric distribution of plates, and under limited variability in load location it was possible to design an optimal assembly for the expected loads. However, for very large defects with complex shapes, the variability in load locations complicates the intuitive design of the optimal assembly. The study shows that it can be beneficial to incorporate multi-design computational analyses to decide upon the optimal plan for a clinical case.

  19. A living mesoscopic cellular automaton made of skin scales.

    PubMed

    Manukyan, Liana; Montandon, Sophie A; Fofonjka, Anamarija; Smirnov, Stanislav; Milinkovitch, Michel C

    2017-04-12

    In vertebrates, skin colour patterns emerge from nonlinear dynamical microscopic systems of cell interactions. Here we show that in ocellated lizards a quasi-hexagonal lattice of skin scales, rather than individual chromatophore cells, establishes a green and black labyrinthine pattern of skin colour. We analysed time series of lizard scale colour dynamics over four years of their development and demonstrate that this pattern is produced by a cellular automaton (a grid of elements whose states are iterated according to a set of rules based on the states of neighbouring elements) that dynamically computes the colour states of individual mesoscopic skin scales to produce the corresponding macroscopic colour pattern. Using numerical simulations and mathematical derivation, we identify how a discrete von Neumann cellular automaton emerges from a continuous Turing reaction-diffusion system. Skin thickness variation generated by three-dimensional morphogenesis of skin scales causes the underlying reaction-diffusion dynamics to separate into microscopic and mesoscopic spatial scales, the latter generating a cellular automaton. Our study indicates that cellular automata are not merely abstract computational systems, but can directly correspond to processes generated by biological evolution.
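
    The parenthetical definition of a cellular automaton above can be sketched directly. The update rule below is a toy majority rule over the four von Neumann neighbours, not the lizard-skin rule inferred in the paper; the grid and states are invented:

```python
def von_neumann_step(grid, rule):
    """One synchronous update: each cell's next state depends on its own state and
    the states of its four von Neumann neighbours (with toroidal wrapping)."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            ones = (grid[(i - 1) % n][j] + grid[(i + 1) % n][j] +
                    grid[i][(j - 1) % n] + grid[i][(j + 1) % n])
            new[i][j] = rule(grid[i][j], ones)
    return new

# Toy rule: adopt the neighbourhood majority colour, keep own colour on a tie
majority = lambda state, ones: 1 if ones >= 3 else (0 if ones <= 1 else state)

grid = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 1, 1],
        [0, 0, 1, 1]]
print(von_neumann_step(grid, majority) == grid)   # True: every cell sees a 2-2 tie,
                                                  # so this block pattern is a fixed point
```

    In the lizard system, each "cell" is a mesoscopic skin scale and the rule's probabilities emerge from the underlying reaction-diffusion dynamics.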

  20. A living mesoscopic cellular automaton made of skin scales

    NASA Astrophysics Data System (ADS)

    Manukyan, Liana; Montandon, Sophie A.; Fofonjka, Anamarija; Smirnov, Stanislav; Milinkovitch, Michel C.

    2017-04-01

    In vertebrates, skin colour patterns emerge from nonlinear dynamical microscopic systems of cell interactions. Here we show that in ocellated lizards a quasi-hexagonal lattice of skin scales, rather than individual chromatophore cells, establishes a green and black labyrinthine pattern of skin colour. We analysed time series of lizard scale colour dynamics over four years of their development and demonstrate that this pattern is produced by a cellular automaton (a grid of elements whose states are iterated according to a set of rules based on the states of neighbouring elements) that dynamically computes the colour states of individual mesoscopic skin scales to produce the corresponding macroscopic colour pattern. Using numerical simulations and mathematical derivation, we identify how a discrete von Neumann cellular automaton emerges from a continuous Turing reaction-diffusion system. Skin thickness variation generated by three-dimensional morphogenesis of skin scales causes the underlying reaction-diffusion dynamics to separate into microscopic and mesoscopic spatial scales, the latter generating a cellular automaton. Our study indicates that cellular automata are not merely abstract computational systems, but can directly correspond to processes generated by biological evolution.

  1. A locally refined rectangular grid finite element method - Application to computational fluid dynamics and computational physics

    NASA Technical Reports Server (NTRS)

    Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

    1991-01-01

    The present FEM technique addresses both linear and nonlinear boundary value problems encountered in computational physics by handling general three-dimensional regions, boundary conditions, and material properties. The box finite elements used are defined by a Cartesian grid independent of the boundary definition, and local refinements proceed by dividing a given box element into eight subelements. Discretization employs trilinear approximations on the box elements; special element stiffness matrices are included for boxes cut by any boundary surface. Illustrative results are presented for representative aerodynamics problems involving up to 400,000 elements.
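    The local refinement step described above (dividing a given box element into eight subelements) is essentially an octree split. A minimal sketch, assuming boxes are stored as corner pairs:

```python
def subdivide(box):
    # Split an axis-aligned box ((x0, y0, z0), (x1, y1, z1)) into the
    # eight congruent subelements used in the local refinement step.
    (x0, y0, z0), (x1, y1, z1) = box
    xs = (x0, (x0 + x1) / 2, x1)
    ys = (y0, (y0 + y1) / 2, y1)
    zs = (z0, (z0 + z1) / 2, z1)
    return [((xs[i], ys[j], zs[k]), (xs[i + 1], ys[j + 1], zs[k + 1]))
            for i in range(2) for j in range(2) for k in range(2)]

children = subdivide(((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))  # 8 sub-boxes
```

    Because the Cartesian grid is independent of the boundary definition, refinement like this can proceed locally wherever resolution is needed, with cut cells handled by the special stiffness matrices the abstract mentions.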

  2. The NASA High Speed ASE Project: Computational Analyses of a Low-Boom Supersonic Configuration

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; DeLaGarza, Antonio; Zink, Scott; Bounajem, Elias G.; Johnson, Christopher; Buonanno, Michael; Sanetrik, Mark D.; Yoo, Seung Y.; Kopasakis, George; Christhilf, David M.

    2014-01-01

    A summary of NASA's High Speed Aeroservoelasticity (ASE) project is provided with a focus on a low-boom supersonic configuration developed by Lockheed-Martin and referred to as the N+2 configuration. The summary includes details of the computational models developed to date including a linear finite element model (FEM), linear unsteady aerodynamic models, structured and unstructured CFD grids, and discussion of the FEM development including sizing and structural constraints applied to the N+2 configuration. Linear results obtained to date include linear mode shapes and linear flutter boundaries. In addition to the tasks associated with the N+2 configuration, a summary of the work involving the development of AeroPropulsoServoElasticity (APSE) models is also discussed.

  3. A computer-assisted data collection system for use in a multicenter study of American Indians and Alaska Natives: SCAPES.

    PubMed

    Edwards, Roger L; Edwards, Sandra L; Bryner, James; Cunningham, Kelly; Rogers, Amy; Slattery, Martha L

    2008-04-01

    We describe a computer-assisted data collection system developed for a multicenter cohort study of American Indian and Alaska Native people. The study computer-assisted participant evaluation system, or SCAPES, is built around a central database server that controls a small private network with touch-screen workstations. SCAPES encompasses the self-administered questionnaires, the keyboard-based stations for interviewer-administered questionnaires, a system for inputting medical measurements, and administrative tasks such as data export, backup and management. Elements of SCAPES presented here include the hardware/network design, data storage, programming language and software choices, questionnaire programming (including questionnaires administered using audio computer-assisted self-interviewing, or ACASI), and the participant identification and data security system. Unique features of SCAPES are that data are promptly made available to participants in the form of health feedback; data can be quickly summarized for tribes for health monitoring and planning at the community level; and data are available to study investigators for analyses and scientific evaluation.

  4. Comparison of Response Surface and Kriging Models in the Multidisciplinary Design of an Aerospike Nozzle

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.

    1998-01-01

    Response surface models and kriging models are compared for approximating non-random, deterministic computer analyses. After discussing the traditional response surface approach for constructing polynomial approximation models, kriging is presented as an alternative statistics-based approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed, along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both approximation models. While neither approximation technique consistently outperforms the other in this example, kriging models using only a constant for the underlying global model and a Gaussian correlation function perform as well as the second-order polynomial response surface models.
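    A minimal sketch of the kriging variant the abstract singles out, a constant underlying global model with a Gaussian correlation function, on synthetic 1D data (the data, `theta`, and nugget value are illustrative assumptions; in practice `theta` is fitted, e.g. by maximum likelihood):

```python
import numpy as np

def kriging_predict(X, y, x_new, theta=10.0):
    # Ordinary kriging: constant underlying global model plus a Gaussian
    # correlation R(d) = exp(-theta * d**2).  A tiny nugget keeps R invertible.
    d = X[:, None] - X[None, :]
    R = np.exp(-theta * d**2) + 1e-10 * np.eye(len(X))
    ones = np.ones(len(X))
    # Generalized least-squares estimate of the constant global model.
    beta = ones @ np.linalg.solve(R, y) / (ones @ np.linalg.solve(R, ones))
    r = np.exp(-theta * (x_new - X)**2)   # correlation to the new point
    return beta + r @ np.linalg.solve(R, y - beta * ones)

X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.sin(2 * np.pi * X)
pred = kriging_predict(X, y, 0.25)   # kriging interpolates the samples
```

    Unlike a fitted polynomial response surface, the kriging predictor passes through the sample points, which is why it suits deterministic computer experiments.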

  5. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics

    NASA Technical Reports Server (NTRS)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  6. Equivalent orthotropic elastic moduli identification method for laminated electrical steel sheets

    NASA Astrophysics Data System (ADS)

    Saito, Akira; Nishikawa, Yasunari; Yamasaki, Shintaro; Fujita, Kikuo; Kawamoto, Atsushi; Kuroishi, Masakatsu; Nakai, Hideo

    2016-05-01

    In this paper, a combined numerical-experimental methodology for the identification of elastic moduli of orthotropic media is presented. Special attention is given to laminated electrical steel sheets, which are modeled as orthotropic media with nine independent engineering elastic moduli. The elastic moduli are determined specifically for use with finite element vibration analyses. We propose a three-step methodology based on a conventional nonlinear least-squares fit between measured and computed natural frequencies. The methodology consists of: (1) successive augmentation of the objective function by increasing the number of modes, (2) initial condition updates, and (3) appropriate selection of the natural frequencies based on their sensitivities to the elastic moduli. Using the results of numerical experiments, it is shown that the proposed method achieves a more accurate converged solution than a conventional approach. Finally, the proposed method is applied to measured natural frequencies and mode shapes of the laminated electrical steel sheets. It is shown that the method can successfully identify the orthotropic elastic moduli and reproduce the measured natural frequencies and frequency response functions in finite element analyses with reasonable accuracy.
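    The core of the identification, a nonlinear least-squares fit between measured and computed natural frequencies, can be sketched with a one-parameter toy model standing in for the laminated-sheet finite element model (the f ∝ √E relation, the mode coefficients, and the synthetic "measured" data below are assumptions for illustration):

```python
import numpy as np

def identify_modulus(f_measured, coeffs, E0=1.0, iters=50):
    # Toy model: mode frequencies f_i(E) = coeffs_i * sqrt(E).
    # Gauss-Newton iteration on the sum-of-squares frequency residual.
    E = E0
    for _ in range(iters):
        r = f_measured - coeffs * np.sqrt(E)   # frequency residuals
        J = coeffs / (2.0 * np.sqrt(E))        # d f_model / d E
        E += (J @ r) / (J @ J)                 # Gauss-Newton update
    return E

coeffs = np.array([1.0, 2.5, 4.0])      # assumed mode coefficients
E_true = 210e9
f_meas = coeffs * np.sqrt(E_true)       # synthetic "measured" frequencies
E_fit = identify_modulus(f_meas, coeffs, E0=1e9)
```

    The real problem fits nine moduli against FE-computed frequencies, so each residual evaluation is a modal analysis and the Jacobian comes from frequency sensitivities, which is exactly why the paper's mode-selection and initial-update steps matter.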

  7. 3D DEM analyses of the 1963 Vajont rock slide

    NASA Astrophysics Data System (ADS)

    Boon, Chia Weng; Houlsby, Guy; Utili, Stefano

    2013-04-01

    The 1963 Vajont rock slide has been modelled using the distinct element method (DEM). The open-source DEM code, YADE (Kozicki & Donzé, 2008), was used together with the contact detection algorithm proposed by Boon et al. (2012). The critical sliding friction angle at the slide surface was sought using a strength reduction approach. A shear-softening contact model was used to model the shear resistance of the clayey layer at the slide surface. The results suggest that the critical sliding friction angle can be conservative if stability analyses are calculated based on the peak friction angles. The water table was assumed to be horizontal and the pore pressure at the clay layer was assumed to be hydrostatic. The influence of reservoir filling was marginal, increasing the sliding friction angle by only 1.6°. The results of the DEM calculations were found to be sensitive to the orientations of the bedding planes and cross-joints. Finally, the failure mechanism was investigated and arching was found to be present at the bend of the chair-shaped slope. References Boon C.W., Houlsby G.T., Utili S. (2012). A new algorithm for contact detection between convex polygonal and polyhedral particles in the discrete element method. Computers and Geotechnics, vol 44, 73-82, doi.org/10.1016/j.compgeo.2012.03.012. Kozicki, J., & Donzé, F. V. (2008). A new open-source software developed for numerical simulations using discrete modeling methods. Computer Methods in Applied Mechanics and Engineering, 197(49-50), 4429-4443.
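    The strength reduction approach mentioned above can be sketched generically: divide the peak shear strength by a factor and search for the factor at which the slope reaches incipient failure. Here a toy infinite-slope factor of safety stands in for the full DEM run, and the slope and peak friction angles are illustrative assumptions:

```python
import math

def factor_of_safety(phi_deg, slope_deg=30.0):
    # Toy infinite-slope stand-in for a full DEM stability run:
    # FoS = tan(phi) / tan(beta) for a dry, cohesionless slope.
    return math.tan(math.radians(phi_deg)) / math.tan(math.radians(slope_deg))

def critical_friction_angle(phi_peak_deg, slope_deg=30.0, tol=1e-6):
    # Strength reduction: divide tan(phi_peak) by a factor F and bisect
    # on F until the factor of safety reaches 1 (incipient sliding).
    lo, hi = 0.1, 10.0
    while hi - lo > tol:
        F = 0.5 * (lo + hi)
        phi_red = math.degrees(math.atan(math.tan(math.radians(phi_peak_deg)) / F))
        if factor_of_safety(phi_red, slope_deg) > 1.0:
            lo = F   # still stable: keep reducing the strength
        else:
            hi = F
    return math.degrees(math.atan(math.tan(math.radians(phi_peak_deg)) / hi))

# For this toy model the critical angle equals the slope angle.
phi_crit = critical_friction_angle(40.0, slope_deg=30.0)
```

    In the DEM study the "factor of safety evaluation" is a full simulation with reduced contact friction, but the outer search over the reduction factor has the same structure.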

  8. On the effects of grid ill-conditioning in three dimensional finite element vector potential magnetostatic field computations

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1990-01-01

    The effects of finite element grid geometries and associated ill-conditioning were studied in single-medium and multi-media (air-iron) three-dimensional magnetostatic field computation problems. The sensitivities of these 3D field computations to finite element grid geometries were investigated. It was found that in single-medium applications the unconstrained magnetic vector potential curl-curl formulation in conjunction with first-order finite elements produces global results which are almost totally insensitive to grid geometries. However, in multi-media (air-iron) applications, first-order finite element results are sensitive to grid geometries and the consequent elemental shape ill-conditioning. These sensitivities were almost totally eliminated by the use of second-order finite elements in the field computation algorithms. Practical examples are given to demonstrate the aspects mentioned above.

  9. Algorithms and analyses for stochastic optimization for turbofan noise reduction using parallel reduced-order modeling

    NASA Astrophysics Data System (ADS)

    Yang, Huanhuan; Gunzburger, Max

    2017-06-01

    Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.

  10. Global exponential periodicity and stability of discrete-time complex-valued recurrent neural networks with time-delays.

    PubMed

    Hu, Jin; Wang, Jun

    2015-06-01

    In recent years, complex-valued recurrent neural networks have been developed and analysed in depth because of their good modelling performance for applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is often necessary to utilize a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results for several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given.
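    A minimal sketch of iterating a discrete-time complex-valued recurrent network. The update rule, the amplitude-bounded activation, and the small weight scaling (chosen so the map is a contraction and therefore globally exponentially stable) are illustrative assumptions, not the model or conditions of the cited paper:

```python
import numpy as np

def crnn_step(z, W, b, alpha=0.5):
    # z(k+1) = (1 - alpha) z(k) + alpha (W f(z(k)) + b), with the
    # amplitude-bounded activation f(z) = z / (1 + |z|).
    f = z / (1.0 + np.abs(z))
    return (1 - alpha) * z + alpha * (W @ f + b)

rng = np.random.default_rng(1)
n = 4
# Small weights make the update a contraction, so the iterates converge
# to a unique fixed point -- a simple global-stability scenario.
W = 0.05 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z = np.zeros(n, dtype=complex)
for _ in range(200):
    z = crnn_step(z, W, b)
```

    Stability conditions of the kind the paper derives essentially bound the interplay of the weight norms, the activation's Lipschitz constant, and the time delays so that such trajectories converge for all initial states.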

  11. The application of probabilistic fracture analysis to residual life evaluation of embrittled reactor vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, T.L.; Simonen, F.A.

    1992-05-01

    Probabilistic fracture mechanics analysis is a major element of the comprehensive probabilistic methodology on which current NRC regulatory requirements for pressurized water reactor vessel integrity evaluation are based. Computer codes such as OCA-P and VISA-II perform probabilistic fracture analyses to estimate the increase in vessel failure probability that occurs as the vessel material accumulates radiation damage over the operating life of the vessel. The results of such analyses, when compared with limits of acceptable failure probabilities, provide an estimate of the residual life of a vessel. Such codes can be applied to evaluate the potential benefits of plant-specific mitigating actions designed to reduce the probability of failure of a reactor vessel. 10 refs.

  12. The application of probabilistic fracture analysis to residual life evaluation of embrittled reactor vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, T.L.; Simonen, F.A.

    1992-01-01

    Probabilistic fracture mechanics analysis is a major element of the comprehensive probabilistic methodology on which current NRC regulatory requirements for pressurized water reactor vessel integrity evaluation are based. Computer codes such as OCA-P and VISA-II perform probabilistic fracture analyses to estimate the increase in vessel failure probability that occurs as the vessel material accumulates radiation damage over the operating life of the vessel. The results of such analyses, when compared with limits of acceptable failure probabilities, provide an estimate of the residual life of a vessel. Such codes can be applied to evaluate the potential benefits of plant-specific mitigating actions designed to reduce the probability of failure of a reactor vessel. 10 refs.

  13. Analysis of the STS-126 Flow Control Valve Structural-Acoustic Coupling Failure

    NASA Technical Reports Server (NTRS)

    Jones, Trevor M.; Larko, Jeffrey M.; McNelis, Mark E.

    2010-01-01

    During Space Transportation System mission STS-126, one of the main engine's flow control valves incurred an unexpected failure: a section of the valve broke off during liftoff. It is theorized that an acoustic mode of the flowing fuel coupled with a structural mode of the valve, causing a high-cycle fatigue failure. This report documents the analysis efforts conducted to verify this theory. Hand calculations, computational fluid dynamics, and finite element methods are all employed, and analyses are performed using steady-state as well as transient methods. The conclusion of the analyses is that a critical acoustic mode does align with a structural mode of the valve.

  14. Comparison of measured temperatures, thermal stresses and creep residues with predictions on a built-up titanium structure

    NASA Technical Reports Server (NTRS)

    Jenkins, Jerald M.

    1987-01-01

    Temperature, thermal stresses, and residual creep stresses were studied by comparing laboratory values measured on a built-up titanium structure with values calculated from finite-element models. Several such models were used to examine the relationship between computational thermal stresses and thermal stresses measured on a built-up structure. Element suitability, element density, and computational temperature discrepancies were studied to determine their impact on measured and calculated thermal stress. The optimum number of elements is established from a balance between element density and suitable safety margins, such that the answer is acceptably safe yet is economical from a computational viewpoint. It is noted that situations exist where relatively small excursions of calculated temperatures from measured values result in far more than proportional increases in thermal stress values. Measured residual stresses due to creep significantly exceeded the values computed by the piecewise linear elastic strain analogy approach. The most important element in the computation is the correct definition of the creep law. Computational methodology advances in predicting residual stresses due to creep require significantly more viscoelastic material characterization.

  15. A computer code for calculations in the algebraic collective model of the atomic nucleus

    NASA Astrophysics Data System (ADS)

    Welsh, T. A.; Rowe, D. J.

    2016-03-01

    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed; this range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (−2 ≤ M, N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_{LM}. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and by use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.

  16. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A structural synthesis methodology for the minimum-mass design of three-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near-optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concept options, the results could be compared with previously published work.

  17. Large Angle Transient Dynamics (LATDYN) user's manual

    NASA Technical Reports Server (NTRS)

    Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.

    1991-01-01

    A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.

  18. Computational Modeling for the Flow Over a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Liu, Feng-Jun

    1999-01-01

    The flow over a multi-element airfoil is computed using two two-equation turbulence models. The computations are performed using the INS2D Navier-Stokes code for two angles of attack. Overset grids are used for the three-element airfoil. The computed results are compared with experimental data for the surface pressure, skin friction coefficient, and velocity magnitude. The computed surface quantities generally agree well with the measurements. The computed results reveal the possible existence of a mixing-layer-like region of flow next to the suction surface of the slat for both angles of attack.

  19. Instrumentation and Performance Analysis Plans for the HIFiRE Flight 2 Experiment

    NASA Technical Reports Server (NTRS)

    Gruber, Mark; Barhorst, Todd; Jackson, Kevin; Eklund, Dean; Hass, Neal; Storch, Andrea M.; Liu, Jiwen

    2009-01-01

    Supersonic combustion performance of a bi-component gaseous hydrocarbon fuel mixture is one of the primary aspects under investigation in the HIFiRE Flight 2 experiment. In-flight instrumentation and post-test analyses will be two key elements used to determine the combustion performance. Pre-flight computational fluid dynamics (CFD) analyses provide valuable information that can be used to optimize the placement of a constrained set of wall pressure instrumentation in the experiment. The simulations also allow pre-flight assessments of performance sensitivities leading to estimates of overall uncertainty in the determination of combustion efficiency. Based on the pre-flight CFD results, 128 wall pressure sensors have been located throughout the isolator/combustor flowpath to minimize the error in determining the wall pressure force at Mach 8 flight conditions. Also, sensitivity analyses show that mass capture and combustor exit stream thrust are the two primary contributors to uncertainty in combustion efficiency.

  20. Terrain Correction on the moving equal area cylindrical map projection of the surface of a reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A.; Safari, A.; Grafarend, E.

    2003-04-01

    An operational algorithm has been developed for computing the ellipsoidal terrain correction, based on the closed-form solution of the Newton integral in terms of Cartesian coordinates on the cylindrical equal-area map projection of the surface of a reference ellipsoid. As a first step, the mapping of points on the surface of a reference ellipsoid onto the cylindrical equal-area projection of a cylinder tangent to a point on that surface is closely studied and the map projection formulas are derived. Ellipsoidal mass elements of various sizes on the surface of the reference ellipsoid are considered, and the gravitational potential and gravitational intensity vector of these mass elements are computed via the solution of the Newton integral in terms of ellipsoidal coordinates. The geographical cross-section areas of the selected ellipsoidal mass elements are transferred into the cylindrical equal-area map projection, and from the transformed area elements Cartesian mass elements with the same height as the ellipsoidal mass elements are constructed. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the potential of the Cartesian mass elements is computed and compared with the corresponding results from the ellipsoidal Newton integral over the ellipsoidal mass elements. The numerical computations show that the difference between the computed gravitational potentials of the ellipsoidal mass element and the Cartesian mass element in the cylindrical equal-area map projection is of the order of 1.6 × 10^-8 m²/s² for a mass element with a cross-section of 10 km × 10 km and a height of 1000 m. For a 1 km × 1 km mass element of the same height, this difference is less than 1.5 × 10^-4 m²/s². These results indicate that a new method for computing the terrain correction has been achieved, based on the closed-form solution of the Newton integral in terms of Cartesian coordinates yet with the accuracy of the ellipsoidal terrain correction. In this way one can enjoy the simplicity of the solution of the Newton integral in terms of Cartesian coordinates and, at the same time, the accuracy of the ellipsoidal terrain correction needed for the modern theory of geoid computations.
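    For orientation, the Newton integral over a Cartesian (rectangular) mass element can be evaluated numerically in a few lines. This crude midpoint quadrature is only a stand-in for the closed-form Cartesian solution used in the paper; the density, element size, and grid resolution below are assumptions:

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def prism_potential(p, size=(1000.0, 1000.0, 1000.0), rho=2670.0, n=20):
    # V(p) = G * rho * integral of 1/|p - r| over the rectangular element,
    # approximated by midpoint quadrature on an n x n x n grid.
    ax, ay, az = size
    xs = (np.arange(n) + 0.5) * ax / n
    ys = (np.arange(n) + 0.5) * ay / n
    zs = (np.arange(n) + 0.5) * az / n
    Xc, Yc, Zc = np.meshgrid(xs, ys, zs, indexing="ij")
    dV = (ax / n) * (ay / n) * (az / n)
    dist = np.sqrt((p[0] - Xc)**2 + (p[1] - Yc)**2 + (p[2] - Zc)**2)
    return G * rho * float(np.sum(dV / dist))

# Far from the element, V approaches the point-mass potential G*M/d.
V = prism_potential(np.array([50000.0, 500.0, 500.0]))
```

    The closed-form Cartesian solution removes both the quadrature error and the singularity handling needed when the computation point lies near or inside the element, which is what makes it attractive for terrain correction.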

  1. Rule-based programming paradigm: a formal basis for biological, chemical and physical computation.

    PubMed

    Krishnamurthy, V; Krishnamurthy, E V

    1999-03-01

    A rule-based programming paradigm is described as a formal basis for biological, chemical and physical computations. In this paradigm, computations are interpreted as the outcome of interactions among elements in an object space. The interactions can create new elements (or the same elements with modified attributes) or annihilate old elements according to specific rules. Since the interaction rules are inherently parallel, any number of actions can be performed cooperatively or competitively among subsets of elements, so that the elements evolve toward an equilibrium, unstable, or chaotic state. Such an evolution may retain certain invariant properties of the attributes of the elements. The object space resembles a Gibbsian ensemble that corresponds to a distribution of points in the space of positions and momenta (called phase space), and it permits the introduction of probabilities in rule applications. As each element of the ensemble changes over time, its phase point is carried into a new phase point. The evolution of this probability cloud in phase space corresponds to a distributed probabilistic computation. Thus, this paradigm can handle deterministic exact computation when the initial conditions are exactly specified and the trajectory of evolution is deterministic, and it can handle a probabilistic mode of computation when macroscopic or bulk properties of matter are wanted. We also explain how to support this rule-based paradigm using relational-database-like query processing and transactions.
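    A toy version of such an object space can be sketched as multiset rewriting: elements are symbols, and randomly drawn pairs react according to rules that create or annihilate elements. The rule set and pairwise interaction scheme are illustrative simplifications of the paradigm, not its full generality:

```python
import random

def react(space, rules, steps=1000, seed=0):
    # Toy object space: elements are symbols in a multiset.  Each step
    # draws a random pair; if a rule matches the (unordered) pair, the
    # pair is annihilated and the rule's products are created.
    rng = random.Random(seed)
    space = list(space)
    for _ in range(steps):
        if len(space) < 2:
            break
        i, j = rng.sample(range(len(space)), 2)
        pair = frozenset((space[i], space[j]))
        if pair in rules:
            for k in sorted((i, j), reverse=True):
                del space[k]
            space.extend(rules[pair])
    return space

# Two rules: A + B -> C (creation), C + C -> nothing (annihilation).
rules = {frozenset("AB"): ["C"], frozenset("C"): []}
out = react(list("AABB"), rules)
```

    The random pair selection illustrates how probabilities enter rule application; with these two rules the space evolves toward the empty equilibrium state.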

  2. A boundary integral method for numerical computation of radar cross section of 3D targets using hybrid BEM/FEM with edge elements

    NASA Astrophysics Data System (ADS)

    Dodig, H.

    2017-11-01

    This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge-element BEM/FEM to compute near-field edge element coefficients that are associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients; consequently, there is no need for the near-to-far-field transformation (NTFFT) that is a common step in RCS computations. It is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones and pyramids. The method remains accurate even for a dielectrically coated PEC sphere at an interior resonance frequency, a common difficulty for computational electromagnetics codes.

  3. A Statistical Approach for the Concurrent Coupling of Molecular Dynamics and Finite Element Methods

    NASA Technical Reports Server (NTRS)

    Saether, E.; Yamakov, V.; Glaessgen, E.

    2007-01-01

    Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, increasing the size of the MD domain quickly presents intractable computational demands. A robust approach to surmount this computational limitation has been to unite continuum modeling procedures such as the finite element method (FEM) with MD analyses thereby reducing the region of atomic scale refinement. The challenging problem is to seamlessly connect the two inherently different simulation techniques at their interface. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the typical boundary value problem used to define a coupled domain. The method uses statistical averaging of the atomistic MD domain to provide displacement interface boundary conditions to the surrounding continuum FEM region, which, in return, generates interface reaction forces applied as piecewise constant traction boundary conditions to the MD domain. The two systems are computationally disconnected and communicate only through a continuous update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM) as opposed to a direct coupling method where interface atoms and FEM nodes are individually related. The methodology is inherently applicable to three-dimensional domains, avoids discretization of the continuum model down to atomic scales, and permits arbitrary temperatures to be applied.

  4. Simulation of Detecting Damage in Composite Stiffened Panel Using Lamb Waves

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Ross, Richard W.; Huang, Guo L.; Yuan, Fuh G.

    2013-01-01

    Lamb wave damage detection in a composite stiffened panel is simulated by performing explicit transient dynamic finite element analyses and using signal imaging techniques. This virtual test process requires no real structures, actuators/sensors, or laboratory equipment. Quasi-isotropic laminates are used for the stiffened panels. Two types of damage are studied: damage in the skin bay, and a debond between the stiffener flange and the skin. Innovative approaches for identifying the damage location and imaging the damage were developed. The damage location is identified by finding the intersection of the damage locus and the path of the time reversal wave packet re-emitted from the sensor nodes. The damage locus is a circle that envelops the potential damage locations. Its center is at the actuator location, and its radius is computed by multiplying the group velocity by the time of flight to damage. To create a damage image for estimating the size of damage, a group of nodes in the neighborhood of the damage location is identified for applying an image condition. The image condition, computed at a finite element node, is the zero-lag cross-correlation (ZLCC) of the time-reversed incident wave signal and the time reversal wave signal from the sensor nodes. This damage imaging process is computationally efficient since only the ZLCC values of a small number of nodes in the neighborhood of the identified damage location are computed, rather than those of the full model.
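    Two quantities in this procedure translate directly into code: the damage-locus radius (group velocity times time of flight) and the zero-lag cross-correlation image condition. A minimal sketch with illustrative numbers, not values from the paper:

```python
import numpy as np

def damage_locus_radius(group_velocity, time_of_flight):
    """Radius of the circle, centred at the actuator, that envelops
    the potential damage locations."""
    return group_velocity * time_of_flight

def zlcc(sig_a, sig_b):
    """Normalized zero-lag cross-correlation of two equal-length signals."""
    return np.dot(sig_a, sig_b) / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b))

r = damage_locus_radius(5400.0, 40e-6)  # e.g. 5.4 km/s group velocity, 40 us flight
t = np.linspace(0.0, 1.0, 200)
wave = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)  # toy wave packet
c = zlcc(wave, wave)  # identical signals correlate to ~1
```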

  5. Computation of Asteroid Proper Elements: Recent Advances

    NASA Astrophysics Data System (ADS)

    Knežević, Z.

    2017-12-01

    The recent advances in computation of asteroid proper elements are briefly reviewed. Although not representing real breakthroughs in the computation and stability assessment of proper elements, these advances can still be considered important improvements, offering solutions to practical problems encountered in the past. The problem of obtaining unrealistic values of the perihelion frequency for very-low-eccentricity orbits is solved by computing frequencies with the frequency-modified Fourier transform. Synthetic resonant proper elements adjusted to a given secular resonance helped to prove the existence of the Astraea asteroid family. A preliminary assessment of the stability over time of proper elements computed by means of the analytical theory gives a good indication of their poorer performance with respect to their synthetic counterparts, and argues in favor of ceasing their regular maintenance; the final decision should, however, be taken on the basis of a more comprehensive and reliable direct estimate of their individual and sample-average deviations from constancy.

  6. [Analysis of 14 elements for Jinhua bergamot by X-ray fluorescence spectrometry and elemental analyser].

    PubMed

    Wang, Zhi-gang; Yu, Hong-mei

    2012-01-01

    The contents of the elements C, H, O and N in Jinhua bergamot were analysed with a Vario III elemental analyser; the bergamot sample was scanned with a PW2400 wavelength-dispersive X-ray fluorescence spectrometer, and the contents of the elements Mg, Al, P, S, Cl, K, Ca, Mn, Fe and Sr were determined with the IQ+ analytical method. The results are more reliable when the contents of C, H, O and N are treated as a fixed phase, and when, to prevent the sample skin from coming off, the sample is wrapped in Mylar film with the film coefficient adjusted.

  7. A comparison of DXA and CT based methods for estimating the strength of the femoral neck in post-menopausal women

    PubMed Central

    Danielson, Michelle E.; Beck, Thomas J.; Karlamangla, Arun S.; Greendale, Gail A.; Atkinson, Elizabeth J.; Lian, Yinjuan; Khaled, Alia S.; Keaveny, Tony M.; Kopperdahl, David; Ruppert, Kristine; Greenspan, Susan; Vuga, Marike; Cauley, Jane A.

    2013-01-01

    Purpose Simple 2-dimensional (2D) analyses of bone strength can be done with dual energy x-ray absorptiometry (DXA) data and applied to large data sets. We compared 2D analyses to 3-dimensional (3D) finite element analyses (FEA) based on quantitative computed tomography (QCT) data. Methods 213 women participating in the Study of Women’s Health across the Nation (SWAN) received hip DXA and QCT scans. DXA BMD and femoral neck diameter and axis length were used to estimate geometry for composite bending (BSI) and compressive strength (CSI) indices. These and comparable indices computed by Hip Structure Analysis (HSA) on the same DXA data were compared to indices using QCT geometry. Simple 2D engineering simulations of a fall impacting on the greater trochanter were generated using HSA and QCT femoral neck geometry; these estimates were benchmarked to a 3D FEA of fall impact. Results DXA-derived CSI and BSI computed from BMD and by HSA correlated well with each other (R= 0.92 and 0.70) and with QCT-derived indices (R= 0.83–0.85 and 0.65–0.72). The 2D strength estimate using HSA geometry correlated well with that from QCT (R=0.76) and with the 3D FEA estimate (R=0.56). Conclusions Femoral neck geometry computed by HSA from DXA data corresponds well enough to that from QCT for an analysis of load stress in the larger SWAN data set. Geometry derived from BMD data performed nearly as well. Proximal femur breaking strength estimated from 2D DXA data is not as well correlated with that derived by a 3D FEA using QCT data. PMID:22810918

  8. Unification of color postprocessing techniques for 3-dimensional computational mechanics

    NASA Technical Reports Server (NTRS)

    Bailey, Bruce Charles

    1985-01-01

    To facilitate the understanding of complex three-dimensional numerical models, advanced interactive color postprocessing techniques are introduced. These techniques are sufficiently flexible so that postprocessing difficulties arising from model size, geometric complexity, response variation, and analysis type can be adequately overcome. Finite element, finite difference, and boundary element models may be evaluated with the prototype postprocessor. Elements may be removed from parent models to be studied as independent subobjects. Discontinuous responses may be contoured including responses which become singular, and nonlinear color scales may be input by the user for the enhancement of the contouring operation. Hit testing can be performed to extract precise geometric, response, mesh, or material information from the database. In addition, stress intensity factors may be contoured along the crack front of a fracture model. Stepwise analyses can be studied, and the user can recontour responses repeatedly, as if he were paging through the response sets. As a system, these tools allow effective interpretation of complex analysis results.
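    The user-supplied nonlinear color scale described above can be sketched as a piecewise-linear mapping from response levels to colors. The breakpoints and colors below are hypothetical, not taken from the prototype postprocessor:

```python
import numpy as np

def map_colors(values, breaks, colors):
    """Map response values to RGB through a user-defined, possibly nonlinear
    scale: `breaks` are monotone response levels, `colors` the RGB at each."""
    values = np.asarray(values, dtype=float)
    colors = np.asarray(colors, dtype=float)
    return np.stack([np.interp(values, breaks, colors[:, c]) for c in range(3)],
                    axis=-1)

# crowd the breakpoints near 1.0 to concentrate contour resolution there
breaks = [0.0, 0.8, 0.95, 1.0]
colors = [(0, 0, 1), (0, 1, 0), (1, 1, 0), (1, 0, 0)]  # blue -> red
rgb = map_colors([0.0, 0.9, 1.0], breaks, colors)
```

    Clustering breakpoints near a response level of interest is exactly what a nonlinear scale buys the user: more color resolution where the contours matter.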

  9. MSC/NASTRAN Stress Analysis of Complete Models Subjected to Random and Quasi-Static Loads

    NASA Technical Reports Server (NTRS)

    Hampton, Roy W.

    2000-01-01

    Space payloads, such as those which fly on the Space Shuttle in Spacelab, are designed to withstand dynamic loads which consist of combined acoustic random loads and quasi-static acceleration loads. Methods for computing the payload stresses due to these loads are well known and appear in texts and NASA documents, but typically involve approximations such as the Miles' equation, as well as possible adjustments based on "modal participation factors." Alternatively, an existing capability in MSC/NASTRAN may be used to output exact root mean square [rms] stresses due to the random loads for any specified elements in the Finite Element Model. However, it is time consuming to use this methodology to obtain the rms stresses for the complete structural model and then combine them with the quasi-static loading induced stresses. Special processing was developed as described here to perform the stress analysis of all elements in the model using existing MSC/NASTRAN and MSC/PATRAN and UNIX utilities. Fail-safe and buckling analyses applications are also described.
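    The Miles' equation approximation mentioned above is compact enough to state directly; a sketch with illustrative inputs, not from a specific payload analysis:

```python
import math

def miles_grms(fn_hz, q, asd_g2_per_hz):
    """Miles' equation: approximate rms response of a lightly damped
    single-degree-of-freedom system to broadband random base excitation."""
    return math.sqrt(math.pi / 2.0 * fn_hz * q * asd_g2_per_hz)

# e.g. a 100 Hz mode with Q = 10 under a flat 0.04 g^2/Hz input spectrum
grms = miles_grms(100.0, 10.0, 0.04)
print(round(grms, 2))  # 7.93
```

    A 3-sigma value derived from such an rms estimate is what is typically combined with the quasi-static stresses, which is the approximation the exact MSC/NASTRAN rms output avoids.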

  10. Mechanistic design concepts for conventional flexible pavements

    NASA Astrophysics Data System (ADS)

    Elliott, R. P.; Thompson, M. R.

    1985-02-01

    Mechanical design concepts for convetional flexible pavement (asphalt concrete (AC) surface plus granular base/subbase) for highways are proposed and validated. The procedure is based on ILLI-PAVE, a stress dependent finite element computer program, coupled with appropriate transfer functions. Two design criteria are considered: AC flexural fatigue cracking and subgrade rutting. Algorithms were developed relating pavement response parameters (stresses, strains, deflections) to AC thickness, AC moduli, granular layer thickness, and subgrade moduli. Extensive analyses of the AASHO Road Test flexible pavement data are presented supporting the validity of the proposed concepts.

  11. A Conceptual Model for Analysing Collaborative Work and Products in Groupware Systems

    NASA Astrophysics Data System (ADS)

    Duque, Rafael; Bravo, Crescencio; Ortega, Manuel

    Collaborative work using groupware systems is a dynamic process in which many tasks, in different application domains, are carried out. Currently, one of the biggest challenges in the field of CSCW (Computer-Supported Cooperative Work) research is to establish conceptual models which allow for the analysis of collaborative activities and their resulting products. In this article, we propose an ontology that conceptualizes the required elements which enable an analysis to infer a set of analysis indicators, thus evaluating both the individual and group work and the artefacts which are produced.

  12. Micro-fabrication of a novel linear actuator

    NASA Astrophysics Data System (ADS)

    Jiang, Shuidong; Liu, Lei; Hou, Yangqing; Fang, Houfei

    2017-04-01

    The novel linear actuator is researched with light weight, small volume, low power consumption, fast response and relatively large displacement output. It can be used for the net surface control of large deployable mesh antennas, the tension precise adjustment of the controlled cable in the tension and tensile truss structure and many other applications. The structure and the geometry parameters are designed and analysed by finite element method in multi-physics coupling. Meantime, the relationship between input voltage and displacement output is computed, and the strength check is completed according to the stress distribution. Carbon fiber reinforced composite (CFRC), glass fiber reinforced composited (GFRC), and Lead Zirconium Titanate (PZT) materials are used to fabricate the actuator by using laser etching and others MEMS process. The displacement output is measured by the laser displacement sensor device at the input voltage range of DC0-180V. The response time is obtained by oscilloscope at the arbitrarily voltage in the above range. The nominal force output is measured by the PTR-1101 mechanics setup. Finally, the computed and test results are compared and analysed.

  13. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Schöbi, Roland; Sudret, Bruno

    2017-06-01

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.
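    The effect of a p-box on an output quantity can be illustrated with a minimal sketch: bound an output CDF value by sweeping the epistemic parameter over its interval while sampling the aleatory part. The model and numbers are illustrative; in the paper's approach the expensive model would be replaced by the sparse PCE surrogate:

```python
import numpy as np

def output_cdf_bounds(model, mu_lo, mu_hi, sigma, threshold, n=20000, seed=0):
    """Bound P(model(X) <= threshold) when X ~ Normal(mu, sigma) and mu is
    only known to lie in [mu_lo, mu_hi] (a parametric p-box on X)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)  # shared aleatory samples
    probs = [np.mean(model(mu + sigma * z) <= threshold)
             for mu in np.linspace(mu_lo, mu_hi, 21)]
    return min(probs), max(probs)

# toy model y = x^2 with an epistemic interval on the input mean
lo, hi = output_cdf_bounds(lambda v: v**2, -0.5, 0.5, 1.0, threshold=1.0)
```

    The pair (lo, hi) is one point on the lower and upper bounding CDFs of the output p-box; repeating over thresholds traces out both bounds.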

  14. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch

    2017-06-15

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

  15. Cloud-free resolution element statistics program

    NASA Technical Reports Server (NTRS)

    Liley, B.; Martin, C. D.

    1971-01-01

    The computer program computes the number of cloud-free resolution elements in the field-of-view and the percentage of the total field-of-view occupied by clouds. By replacing visual estimation of cloud statistics from aerial photographs with automated computation, human error is eliminated.
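    The core computation is a simple count over a field-of-view grid; a minimal sketch with a hypothetical grid, not the original program's input format:

```python
import numpy as np

def cloud_statistics(field):
    """Count cloud-free resolution elements and percent cloud cover in a
    boolean field-of-view grid (True = cloudy)."""
    field = np.asarray(field, dtype=bool)
    cloud_free = int((~field).sum())
    percent_cloud = 100.0 * field.mean()
    return cloud_free, percent_cloud

fov = np.zeros((10, 10), dtype=bool)
fov[:2, :] = True  # top two rows of resolution elements are cloudy
free, pct = cloud_statistics(fov)
print(free, pct)  # 80 20.0
```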

  16. Improving finite element results in modeling heart valve mechanics.

    PubMed

    Earl, Emily; Mohammadi, Hadi

    2018-06-01

    Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.
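    The flavor of the least-squares enhancement can be shown in one dimension: fit a low-order polynomial to scattered nodal values and evaluate it back at the nodes. This is a toy sketch under stated assumptions; the paper's method additionally couples the least-squares step with finite differences on the strain tensor components:

```python
import numpy as np

def smooth_nodal_field(x, values, degree=2):
    """Least-squares polynomial fit of a 1-D nodal field; returns the
    smoothed values at the same nodes."""
    coeffs = np.polyfit(x, values, degree)
    return np.polyval(coeffs, x)

x = np.linspace(0.0, 1.0, 9)            # coarse-mesh node positions
exact = x**2                             # underlying smooth strain field
scatter = 0.05 * np.array([1, -1, 1, -1, 1, -1, 1, -1, 1])
noisy = exact + scatter                  # coarse-mesh nodal scatter
smoothed = smooth_nodal_field(x, noisy)  # recovered field
```

    The fit filters out the node-to-node oscillation while preserving the smooth trend, which is how a coarse-mesh result can be pushed toward fine-mesh accuracy at low cost.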

  17. Stabilization and discontinuity-capturing parameters for space-time flow computations with finite element and isogeometric discretizations

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Otoguro, Yuto

    2018-04-01

    Stabilized methods, which have been very common in flow computations for many years, typically involve stabilization parameters, and discontinuity-capturing (DC) parameters if the method is supplemented with a DC term. Various well-performing stabilization and DC parameters have been introduced for stabilized space-time (ST) computational methods in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible and compressible flows. These parameters were all originally intended for finite element discretization but are quite often used also for isogeometric discretization. The stabilization and DC parameters we present here for ST computations are in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible flows, target isogeometric discretization, and are also applicable to finite element discretization. The parameters are based on a direction-dependent element length expression. The expression is the outcome of an easy-to-understand derivation. The key components of the derivation are mapping the direction vector from the physical ST element to the parent ST element, accounting for the discretization spacing along each of the parametric coordinates, and mapping what we have in the parent element back to the physical element. The test computations we present for pure-advection cases show that the proposed parameters result in good solution profiles.
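    For context, a widely used direction-dependent element length in stabilized finite element computations is built from the element shape functions N_a and a unit direction vector r (this is the earlier, standard form, not the paper's new parent-element-based expression):

```latex
h_{\mathrm{r}} = 2 \left( \sum_{a=1}^{n_{\mathrm{en}}} \left| \mathbf{r} \cdot \nabla N_a \right| \right)^{-1}
```

    In the advection-dominated limit the corresponding stabilization parameter then scales like tau = h_r / (2 |u|), with r taken along the local flow direction.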

  18. On modelling three-dimensional piezoelectric smart structures with boundary spectral element method

    NASA Astrophysics Data System (ADS)

    Zou, Fangxin; Aliabadi, M. H.

    2017-05-01

    The computational efficiency of the boundary element method in elastodynamic analysis can be significantly improved by employing high-order spectral elements for boundary discretisation. In this work, for the first time, the so-called boundary spectral element method is utilised to model the piezoelectric smart structures that are widely used in structural health monitoring (SHM) applications. The resulting boundary spectral element formulation has been validated against the finite element method (FEM) and physical experiments. The new formulation demonstrates a lower demand on computational resources and higher numerical stability than commercial FEM packages. Compared to the conventional boundary element formulation, a significant reduction in computational expense has been achieved. In summary, the boundary spectral element formulation presented in this paper provides a highly efficient and stable mathematical tool for the development of SHM applications.

  19. A new hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology applicable to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is a hybrid approach based on the application of transform techniques in conjunction with classical Galerkin schemes. The purpose of this paper is to provide a viable hybrid computational methodology for general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology provides an effective computational approach, and the numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.

  20. Dynamic simulation and preliminary finite element analysis of gunshot wounds to the human mandible.

    PubMed

    Tang, Zhen; Tu, Wenbing; Zhang, Gang; Chen, Yubin; Lei, Tao; Tan, Yinghui

    2012-05-01

    Due to the complications arising from gunshot wounds to the maxillofacial region, traditional models of gunshot wounds cannot meet our research needs. In this study, we established a finite element model and conducted preliminary simulation and analysis to determine the injury mechanism and degree of damage for gunshot wounds to the human mandible. Based on a previously developed modelling method that used animal experiments and internal parameters, digital computed tomography data for the human mandible were used to establish a three-dimensional finite element model of the human mandible. The mechanism by which a gunshot injures the mandible was dynamically simulated under different shot conditions. First, the residual velocities of the shootings using different projectiles at varying entry angles and impact velocities were calculated. Second, the energy losses of the projectiles and the rates of energy loss after exiting the mandible were calculated. Finally, the data were compared and analysed. The dynamic processes involved in gunshot wounds to the human mandible were successfully simulated using two projectiles, three impact velocities, and three entry angles. The stress distributions in different parts of mandible after injury were also simulated. Based on the computation and analysis of the modelling data, we found that the injury severity of the mandible and the injury efficiency of the projectiles differ under different injury conditions. The finite element model has many advantages for the analysis of ballistic wounds, and is expected to become an improved model for studying maxillofacial gunshot wounds. Copyright © 2011 Elsevier Ltd. All rights reserved.
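    The energy bookkeeping used to rank injury conditions reduces to kinetic energy differences; a sketch with hypothetical projectile numbers, not values from the study:

```python
def kinetic_energy(mass_kg, v_m_s):
    """Kinetic energy in joules: E = (1/2) m v^2."""
    return 0.5 * mass_kg * v_m_s**2

def energy_loss(mass_kg, v_in, v_out):
    """Energy deposited in the tissue and its fraction of the impact energy."""
    e_in = kinetic_energy(mass_kg, v_in)
    e_out = kinetic_energy(mass_kg, v_out)
    return e_in - e_out, (e_in - e_out) / e_in

# hypothetical 8 g projectile entering at 700 m/s and exiting at 500 m/s
loss_j, frac = energy_loss(0.008, 700.0, 500.0)
print(round(loss_j, 1), round(frac, 3))  # 960.0 0.49
```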

  1. Multiscale Modeling of Damage Processes in fcc Aluminum: From Atoms to Grains

    NASA Technical Reports Server (NTRS)

    Glaessgen, E. H.; Saether, E.; Yamakov, V.

    2008-01-01

    Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, current analyses are limited to small domains, and increasing the size of the MD domain quickly presents intractable computational demands. A preferred approach to surmount this computational limitation has been to combine continuum mechanics-based modeling procedures, such as the finite element method (FEM), with MD analyses, thereby reducing the region of atomic-scale refinement. Such multiscale modeling strategies can be divided into two broad classifications: concurrent multiscale methods that directly incorporate an atomistic domain within a continuum domain, and sequential multiscale methods that extract an averaged response from the atomistic simulation for later use as a constitutive model in a continuum analysis.

  2. Mechanical characterization and structural analysis of recycled fiber-reinforced-polymer resin-transfer-molded beams

    NASA Astrophysics Data System (ADS)

    Tan, Eugene Wie Loon

    1999-09-01

    The present investigation was focussed on the mechanical characterization and structural analysis of resin-transfer-molded beams containing recycled fiber-reinforced polymers. The beams were structurally reinforced with continuous unidirectional glass fibers. The reinforcing filler materials consisted entirely of recycled fiber-reinforced polymer wastes (trim and overspray). The principal resin was a 100-percent dicyclo-pentadiene unsaturated polyester specially formulated with very low viscosity for resin transfer molding. Variations of the resin transfer molding technique were employed to produce specimens for material characterization. The basic materials that constituted the structural beams, continuous-glass-fiber-reinforced, recycled-trim-filled and recycled-overspray-filled unsaturated polyesters, were fully characterized in axial and transverse compression and tension, and inplane and interlaminar shear, to ascertain their strengths, ultimate strains, elastic moduli and Poisson's ratios. Experimentally determined mechanical properties of the recycled-trim-filled and recycled-overspray-filled materials from the present investigation were superior to those of unsaturated polyester polymer concretes and Portland cement concretes. Mechanical testing and finite element analyses of flexure (1 x 1 x 20 in) and beam (2 x 4 x 40 in) specimens were conducted. These structurally-reinforced specimens were tested and analyzed in four-point, third-point flexure to determine their ultimate loads, maximum fiber stresses and mid-span deflections. The experimentally determined load capacities of these specimens were compared to those of equivalent steel-reinforced Portland cement concrete beams computed using reinforced concrete theory. Mechanics of materials beam theory was utilized to predict the ultimate loads and mid-span deflections of the flexure and beam specimens. However, these predictions proved to be severely inadequate. 
Finite element (fracture propagation) analyses of the flexure and beam specimens were also performed. These progressive failure analyses more closely approximated flexural behavior under actual testing conditions by reducing the elastic moduli of elements that were considered to have partially or totally failed. Individual element failures were predicted using the maximum stress, Tsai-Hill and Tsai-Wu failure criteria. Excellent predictions of flexural behavior were attributed to the progressive failure analyses combined with an appropriate failure criterion, and the reliable input material properties that were generated.
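    Of the failure criteria named above, Tsai-Hill for a ply under plane stress is representative and easy to state. The strength values below are hypothetical, not the measured properties from this work:

```python
def tsai_hill(s1, s2, t12, X, Y, S):
    """Tsai-Hill failure index for a ply under plane stress; an index >= 1
    indicates ply failure. X, Y: axial and transverse strengths; S: shear
    strength; s1, s2, t12: ply stresses in material axes."""
    return (s1 / X) ** 2 - (s1 * s2) / X**2 + (s2 / Y) ** 2 + (t12 / S) ** 2

# hypothetical glass/polyester ply strengths and stresses (MPa)
idx = tsai_hill(s1=400.0, s2=10.0, t12=20.0, X=800.0, Y=40.0, S=60.0)
print(round(idx, 3))  # 0.417
```

    In a progressive failure analysis of the kind described, elements whose index reaches 1 have their elastic moduli reduced before the next load step.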

  3. Production of hybrid granitic magma at the advancing front of basaltic underplating: Inferences from the Sesia Magmatic System (south-western Alps, Italy)

    NASA Astrophysics Data System (ADS)

    Sinigoi, Silvano; Quick, James E.; Demarchi, Gabriella; Klötzli, Urs S.

    2016-05-01

    The Permian Sesia Magmatic System of the southwestern Alps displays the plumbing system beneath a Permian caldera, including a deep crustal gabbroic complex, upper crustal granite plutons and a bimodal volcanic field dominated by rhyolitic tuff filling the caldera. Isotopic compositions of the deep crustal gabbro overlap those of coeval andesitic basalts, whereas granites define a distinct, more radiogenic cluster (Sri ≈ 0.708 and 0.710, respectively). AFC computations starting from the best mafic candidate for a starting melt show that Nd and Sr isotopic compositions and trace elements of andesitic basalts may be modeled by reactive bulk assimilation of ≈ 30% of partially depleted crust and ≈ 15%-30% gabbro fractionation. Trace elements of the deep crustal gabbro cumulates require a further ≈ 60% fractionation of the andesitic basalt and loss of ≈ 40% of silica-rich residual melt. The composition of the granite plutons is consistent with a mixture of relatively constant proportions of residual melt delivered from the gabbro and anatectic melt. Chemical and field evidence leads to a conceptual model which links the production of the two granitic components to the evolution of the Mafic Complex. During the growth of the Mafic Complex, progressive incorporation of packages of crustal rocks resulted in a roughly steady state rate of assimilation. Anatectic granite originates in the hot zone of melting crust located above the advancing mafic intrusion. Upward segregation of anatectic melts facilitates the assimilation of the partially depleted restite by stoping. At each cycle of mafic intrusion and incorporation, residual and anatectic melts are produced in roughly constant proportions, because the amount of anatectic melt produced at the roof is a function of volume and latent heat of crystallization of the underplated mafic melt which in turn produces proportional amounts of hybrid gabbro cumulates and residual melt. 
Such a process can explain the restricted range in isotopic compositions of most rhyolitic and granitic rocks of the Permo-Carboniferous province of Europe and elsewhere. Sheet labelled "XRF standard analyses" reports replicate analyses normalized to 100 obtained by XRF on international standards analyzed along with our samples. Sheet labelled "XRF replicate sample analyses" reports replicate XRF analyses on two samples of our data set. ICP-MS analyses from Acme Analytical Laboratories Ltd. are shown for comparison. Sheet labelled "ICP-MS analyses" reports replicate analyses of trace elements on standard SO18, its official value and replicate analyses of two our samples provided by Acme Analytical Laboratories Ltd. Sheet labelled "kinzigite". Major and trace elements of amphibolite-facies paragneiss samples of the Kinzigite Formation from the roof of the Mafic Complex. In bold data by ICP-MS, other data by XRF. For Ba, Rb and Sr XRF data were included in the average estimate to increase the statistics. The last column reports the average data of amphibolite-facies rocks from the Kinzigite Formation from Schnetger (1994). Sheet labelled "PBB paragneiss". Data for granulite-facies paragneiss samples in the septa of the paragneiss bearing belt (PBB). XRF data for Ba and Sr were included in the average estimate to increase the statistics (Rb excluded because close to detection limit for XRF in many samples). The last column reports the average data of granulite-facies rocks from Val Strona (stronalite) from Schnetger (1994). Sheet labelled "PBB charnockite". Data for charnockitic rocks included in paragneiss septa. XRF data for Ba and Sr were included in the average estimate to increase the statistics (Rb excluded because close to detection limit for XRF in many samples). Sheet labelled "computed crustal assimilant". Reports the average compositions of paragneiss in amphibolite and granulite facies from this work and from Schnetger (1994). 
The bulk composition of the septa is computed as 70% paragneiss and 30% charnockite, as roughly estimated in the field. The partially depleted assimilant is computed as a 50/50 mixture of amphibolite- and granulite facies rocks. Sheet labelled "anatectic products" includes leucosomes at the roof of the Mafic Complex, anatectic granites from this work and from the Atesina Volcanic district (Rottura et al., 1998). In bold data by ICP-MS, other data by XRF. Sheet labelled "Valle Mosso granite" reports the whole rock compositions of granitic rocks of the pluton, distinguishing samples from upper and lower granite. XRF data for Ba, Rb and Sr were included in the average estimate to increase the statistics. The last column reports the bulk composition of the pluton, estimated as 70% lower and 30% upper granite. Sheet labelled "Rhyolite" reports whole rock and average compositions of rhyolite. Sheet labelled "UMC gabbro" reports whole rock compositions of gabbros from the upper Mafic Complex. Samples are grouped as pertaining to the "Upper Zone" and "Main Gabbro" according the subdivision of Rivalenti et al. (1975). Gt gabbro = garnet-bearing gabbro. In bold data by ICP-MS, other data by XRF. For Ba and Sr XRF data were included in the average estimate to increase the statistics. Sheet labelled "computed average UMC" reports the whole composition of upper Mafic complex, estimated as 30% Upper Zone and 70% Main Gabbro. Sheet labelled "mafic rocks in middle crust" reports the whole rock compositions from the mafic pod PST262, intruded at the boundary between Ivrea Zone and Serie dei Laghi at 287 ± 5 Ma (Klötzli et al., 2014) and mafic dikes and an enclave intruded in the lower Valle Mosso granite. Sheet labelled "mafic volcanic rocks" reports the whole rock compositions of basaltic andesite and andesite from the Sesia Magmatic System. The average composition is computed excluding altered samples and XRF data for trace elements. 
Sr and Nd isotope data are from this work and previous publications. The sheet labelled "compositions for modelling" reports a summary of the average compositions of the components used for the computations. The sheet labelled "Kd used for AFC and FC modelling" reports the Kd values and percentages of mineral phases used in the AFC and FC computations (from Claeson and Meurer, 2004; Rollinson, 1993; Green et al., 2000; Namur et al., 2011). The sheet labelled "trace elements modelling" reports the results of the AFC, bulk-mixing and FC computations on trace elements. The enclosed figure illustrates the bulk-mixing lines between Campore and average crust or anatectic granite, respectively. The mixing required to obtain the composition of andesitic basalt varies from 33% with average crust to 63% with anatectic granite (see text for consequences). The AFC path from Campore to andesitic basalts overlaps the bulk-mixing lines. The shape of the mixing line between residual and anatectic melt results in the poor sensitivity of Nd to the addition of anatectic melt to the residual melt (εNd remains within the field of mafic rocks up to 80% addition of anatectic melt). The sheet labelled "major elements modelling" reports the results of mass-balance computations on major elements based on bulk mixing and XL-FRAC (Stormer and Nicholls, 1978). The sheet labelled "EC-RAXFC modelling" reports input data and results obtained with the EC-RAXFC code (Bohrson and Spera, 2007) to simulate the energy-constrained AFC from Campore to andesitic basalt. Liquidus temperatures and specific heats of magma and assimilant (tlm, tla, cpm, cpa), as well as heats of crystallization and fusion (hm, ha), were obtained with the Rhyolite-MELTS code (Gualda et al., 2012) at P = 6 kbar (intermediate pressure between the roof and the deepest rocks of the Mafic Complex; Demarchi et al., 1998), assuming QFM + 2 and H2O contents of 0.5 for Campore and 1.0 for the assimilant (intermediate between kinzigite and stronalite from Schnetger, 1994). 
The initial temperature of the assimilant (tlo) was assumed equal to the solidus temperature (ts), which is around 850 °C based on the experimental melting of natural metapelite (Vielzeuf and Holloway, 1988). Non-linear melting functions were chosen within the range of values suggested by Bohrson and Spera (2007). Recharge magma (R) was set to 0 because the homogeneity of the Upper Mafic Complex is best explained if each new mafic pulse is injected at the new neutral-buoyancy level, above a dense and partially depleted restite, and may be treated as a single pulse. X was set to 1, assuming that all anatectic melt enters the mafic magma. Different simulations were run using, alternatively, the bulk partition coefficients of Sr and Nd for the assimilant (Da) reported for "standard" upper crust by Bohrson and Spera (2001; 1.5 and 0.25, respectively), Da estimated from our data set (2.15 and 2.6, respectively), and intermediate values. For the mafic magma, the bulk D values (Dm) of 0.77 for Sr and 0.34 for Nd result from the Kd values and percentages of mineral phases used in the AFC computation. A lat-long grid for the samples is reported in the OS tables.
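The bulk-mixing and AFC computations described in these sheets follow standard petrologic formulations; as a hedged illustration (concentrations and coefficients below are placeholder values, not the paper's data), the two-component mixing relation and the DePaolo-style AFC equation that codes such as EC-RAXFC build on can be sketched as:

```python
# Illustrative sketch of bulk mixing and AFC for a trace element.
# All numerical inputs in the tests are arbitrary placeholders.

def bulk_mix(c_mafic, c_crust, f_crust):
    """Concentration in a mixture with a mass fraction f_crust of crust."""
    return (1.0 - f_crust) * c_mafic + f_crust * c_crust

def afc_concentration(c0, ca, D, r, F):
    """DePaolo (1981) AFC equation for the melt concentration.

    c0: initial melt concentration, ca: assimilant concentration,
    D: bulk partition coefficient, r: ratio of assimilation rate to
    crystallization rate, F: fraction of melt remaining.
    Assumes r != 1 and r + D != 1.
    """
    z = (r + D - 1.0) / (r - 1.0)
    return c0 * F**(-z) + (r / (r - 1.0)) * (ca / z) * (1.0 - F**(-z))
```

At F = 1 (no crystallization yet) the AFC expression collapses back to the initial melt concentration, a useful sanity check on any implementation.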

  4. 75 FR 70623 - Airworthiness Directives; DORNIER LUFTFAHRT GmbH Models Dornier 228-100, Dornier 228-101, Dornier...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-18

... include strain measurements as well as finite element modeling and fatigue analyses to better understand the stress distribution onto the frame ...

  5. Porosity estimates on basaltic basement samples using the neutron absorption cross section (Σ): Implications for fluid flow and alteration of the oceanic crust

    NASA Astrophysics Data System (ADS)

    Reichow, M. K.; Brewer, T. S.; Marvin, L. G.; Lee, S. V.

    2008-12-01

Little information presently exists on the heterogeneity of hydrothermal alteration in the oceanic crust or the variability of the associated thermal, fluid, and chemical fluxes. Formation porosities are important controls on these fluxes, and porosity measurements are routinely collected during wireline logging operations. These estimates of formation porosity are measures of the moderating power of the formation in response to bombardment by neutrons. The neutron absorption macroscopic cross-section (Σ = σρ) is a representation of the ability of the rock to slow down neutrons, and as such can be used to invert for the porosity of a sample. Boron, lithium and other trace elements are important controls on σ-values, and their distribution is influenced by secondary low-temperature alteration processes. Consequently, computed σ-values may be used to discriminate between various basalt types and to identify areas of secondary alteration. Critical in this analysis is the degree of alteration, since elements such as B and Li can dramatically affect the sigma value, leading to erroneous porosity values. We analysed over 150 'pool-samples' for S, Li, Be and B concentrations to estimate their contribution to the measured neutron porosity. These chemical analyses allow the calculation of model sigma values for individual samples. Using a range of variably altered samples recovered during IODP Expeditions 309 and 312, we provide bulk estimates of alteration within the drilled section using the measured neutron porosity. B concentration in Hole 1256D increases with depth, with sharp rises at 959 and 1139 mbsf. Elevated wireline neutron porosities cannot always be directly linked with high B content. However, our preliminary results imply that increased neutron porosity (~15) at depths below 1100 mbsf may reflect hydrothermal alteration rather than formation porosity. 
This interpretation is supported when compared with generally lower computed porosity estimates derived from resistivity measurements for the same intervals.
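The macroscopic cross-section Σ referred to above is additive over the elements present, so a model sigma value can be built up from elemental abundances. The sketch below is a generic illustration of that bookkeeping (Avogadro-number conversion from weight fractions to atom densities), not the authors' calculation; the composition and cross-section values in the test are placeholders.

```python
# Illustrative macroscopic neutron absorption cross-section:
# Sigma = sum_i n_i * sigma_i, with n_i the atom number density of
# element i derived from its weight fraction.

N_A = 6.022e23  # Avogadro's number, atoms/mol

def macroscopic_sigma(density_g_cm3, composition):
    """composition: iterable of (weight_fraction, atomic_mass_g_mol,
    microscopic_sigma_barns) tuples; returns Sigma in 1/cm."""
    total = 0.0
    for w, A, sigma_barns in composition:
        n_i = N_A * density_g_cm3 * w / A   # atoms per cm^3
        total += n_i * sigma_barns * 1e-24  # 1 barn = 1e-24 cm^2
    return total
```

Because Σ is a simple sum, trace absorbers with very large microscopic cross-sections (such as B) can dominate the total even at ppm-level abundances, which is exactly why alteration-related B enrichment biases neutron-porosity logs.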

  6. Behaviour of Masonry Walls under Horizontal Shear in Mining Areas

    NASA Astrophysics Data System (ADS)

    Kadela, Marta; Bartoszek, Marek; Fedorowicz, Jan

    2017-12-01

The paper discusses the behaviour of masonry walls constructed with small-sized elements under the effects of mining activity. It presents some mechanisms of damage occurring in such structures, their forms in real life, and the behaviour of large fragments of masonry walls subjected to specific loads in FEM computational models. It offers a constitutive material model which enables numerical analyses and monitoring of the elastic-plastic performance of the material, with consideration of its degradation. Results from the numerical analyses are discussed for isolated fragments of the wall subjected to horizontal shear, with consideration of degradation, the impact of imposed vertical load, and the effect that weakening the wall by introducing openings has on its performance and deformation.

  7. A Model for Simulating the Response of Aluminum Honeycomb Structure to Transverse Loading

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; Czabaj, Michael W.; Jackson, Wade C.

    2012-01-01

    A 1-dimensional material model was developed for simulating the transverse (thickness-direction) loading and unloading response of aluminum honeycomb structure. The model was implemented as a user-defined material subroutine (UMAT) in the commercial finite element analysis code, ABAQUS(Registered TradeMark)/Standard. The UMAT has been applied to analyses for simulating quasi-static indentation tests on aluminum honeycomb-based sandwich plates. Comparison of analysis results with data from these experiments shows overall good agreement. Specifically, analyses of quasi-static indentation tests yielded accurate global specimen responses. Predicted residual indentation was also in reasonable agreement with measured values. Overall, this simple model does not involve a significant computational burden, which makes it more tractable to simulate other damage mechanisms in the same analysis.
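The essential behavior such a 1-D transverse model must capture is elastic loading to a crush stress, a near-constant crush plateau, and elastic unloading that leaves residual (permanent) indentation. The sketch below is a minimal hypothetical illustration of that response shape, not the actual UMAT; the modulus and plateau stress are invented constants.

```python
# Minimal 1-D crush-response sketch: elastic loading, crush plateau,
# elastic unloading with residual strain. Constants are hypothetical.

E = 1000.0         # transverse elastic modulus (illustrative units)
sigma_crush = 2.0  # crush-plateau stress

def stress(strain, max_strain_seen):
    """Transverse stress for the current strain, given the maximum
    strain reached so far (the loading-history variable)."""
    yield_strain = sigma_crush / E
    if max_strain_seen <= yield_strain:
        return E * strain                      # virgin elastic loading
    if strain >= max_strain_seen:
        return sigma_crush                     # crushing on the plateau
    residual = max_strain_seen - yield_strain  # permanent crush strain
    return max(0.0, E * (strain - residual))   # elastic unloading
```

Unloading from the plateau returns to zero stress at the residual strain, which is how a model of this kind reproduces the residual dent depths compared against experiment in the paper.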

  8. A unique set of micromechanics equations for high temperature metal matrix composites

    NASA Technical Reports Server (NTRS)

    Hopkins, D. A.; Chamis, C. C.

    1985-01-01

A unique set of micromechanics equations is presented for high temperature metal matrix composites. The set includes expressions to predict mechanical properties, thermal properties and constituent microstresses for the unidirectional fiber-reinforced ply. The equations are derived from a mechanics-of-materials formulation assuming a square-array unit cell model of a single fiber, surrounding matrix, and an interphase to account for the chemical reaction which commonly occurs between fiber and matrix. A three-dimensional finite element analysis was used to perform a preliminary validation of the equations. Excellent agreement between properties predicted using the micromechanics equations and properties simulated by the finite element analyses is demonstrated. Implementation of the micromechanics equations as part of an integrated computational capability for nonlinear structural analysis of high temperature multilayered fiber composites is illustrated.
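Mechanics-of-materials micromechanics of this general type reduce, in the simplest two-phase case, to rule-of-mixtures relations for the ply stiffnesses. The sketch below shows only that simplest case (it omits the interphase the paper includes); the moduli in the tests are arbitrary illustrative numbers.

```python
# Simplest rule-of-mixtures estimates for a unidirectional ply
# (fiber volume fraction Vf, fiber modulus Ef, matrix modulus Em).

def longitudinal_modulus(Vf, Ef, Em):
    """E11: Voigt (iso-strain, parallel) rule of mixtures."""
    return Vf * Ef + (1.0 - Vf) * Em

def transverse_modulus(Vf, Ef, Em):
    """E22: Reuss (iso-stress, series) inverse rule of mixtures."""
    return 1.0 / (Vf / Ef + (1.0 - Vf) / Em)
```

The fiber dominates E11 while the matrix dominates E22, which is why transverse and shear properties are the ones most sensitive to the fiber-matrix interphase the paper models explicitly.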

  9. Methyl cation affinities of neutral and anionic maingroup-element hydrides: trends across the periodic table and correlation with proton affinities.

    PubMed

    Mulder, R Joshua; Guerra, Célia Fonseca; Bickelhaupt, F Matthias

    2010-07-22

    We have computed the methyl cation affinities in the gas phase of archetypal anionic and neutral bases across the periodic table using ZORA-relativistic density functional theory (DFT) at BP86/QZ4P//BP86/TZ2P. The main purpose of this work is to provide the methyl cation affinities (and corresponding entropies) at 298 K of all anionic (XH(n-1)(-)) and neutral bases (XH(n)) constituted by maingroup-element hydrides of groups 14-17 and the noble gases (i.e., group 18) along the periods 2-6. The cation affinity of the bases decreases from H(+) to CH(3)(+). To understand this trend, we have carried out quantitative bond energy decomposition analyses (EDA). Quantitative correlations are established between the MCA and PA values.
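The methyl cation affinity itself is a simple reaction-energy difference for base + CH3+ -> adduct. A minimal bookkeeping sketch, with arbitrary illustrative energies rather than the computed BP86 values:

```python
# MCA = -(E_adduct - E_base - E_CH3+): the energy released on binding
# CH3+ to the base (positive for a bound adduct). Units are whatever
# the input energies use; the test values are arbitrary.

def methyl_cation_affinity(e_base, e_ch3_cation, e_adduct):
    return (e_base + e_ch3_cation) - e_adduct
```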

  10. Development of an integrated aeroservoelastic analysis program and correlation with test data

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Brenner, M. J.; Voelker, L. S.

    1991-01-01

    The details and results are presented of the general-purpose finite element STructural Analysis RoutineS (STARS) to perform a complete linear aeroelastic and aeroservoelastic analysis. The earlier version of the STARS computer program enabled effective finite element modeling as well as static, vibration, buckling, and dynamic response of damped and undamped systems, including those with pre-stressed and spinning structures. Additions to the STARS program include aeroelastic modeling for flutter and divergence solutions, and hybrid control system augmentation for aeroservoelastic analysis. Numerical results of the X-29A aircraft pertaining to vibration, flutter-divergence, and open- and closed-loop aeroservoelastic controls analysis are compared to ground vibration, wind-tunnel, and flight-test results. The open- and closed-loop aeroservoelastic control analyses are based on a hybrid formulation representing the interaction of structural, aerodynamic, and flight-control dynamics.

  11. A split finite element algorithm for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1979-01-01

    An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.
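The Galerkin weighted-residual machinery referred to above can be illustrated on a much simpler model problem. The sketch below assembles and solves 1-D linear elements for -u'' = 1 on [0, 1] with u(0) = u(1) = 0; it is a toy demonstration of element assembly and solution, not the paper's compressible Navier-Stokes algorithm.

```python
# 1-D Galerkin FEM for -u'' = f, u(0)=u(1)=0, linear elements.

def solve_poisson_1d(n_elems, f=lambda x: 1.0):
    n = n_elems + 1
    h = 1.0 / n_elems
    # assemble global stiffness matrix and load vector element by element
    K = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for e in range(n_elems):
        K[e][e] += 1.0 / h
        K[e][e + 1] -= 1.0 / h
        K[e + 1][e] -= 1.0 / h
        K[e + 1][e + 1] += 1.0 / h
        xm = (e + 0.5) * h              # midpoint quadrature for the load
        b[e] += f(xm) * h / 2.0
        b[e + 1] += f(xm) * h / 2.0
    # apply Dirichlet BCs by solving only for the interior unknowns
    idx = list(range(1, n - 1))
    A = [[K[i][j] for j in idx] for i in idx]
    rhs = [b[i] for i in idx]
    m = len(idx)
    for col in range(m):                # forward Gaussian elimination
        for row in range(col + 1, m):
            fac = A[row][col] / A[col][col]
            for j in range(col, m):
                A[row][j] -= fac * A[col][j]
            rhs[row] -= fac * rhs[col]
    u_int = [0.0] * m                   # back substitution
    for row in range(m - 1, -1, -1):
        s = rhs[row] - sum(A[row][j] * u_int[j] for j in range(row + 1, m))
        u_int[row] = s / A[row][row]
    return [0.0] + u_int + [0.0]
```

For f = 1 the exact solution is u(x) = x(1 - x)/2, and linear elements reproduce it exactly at the nodes, e.g. u(0.5) = 0.125.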

  12. Parallel computation using boundary elements in solid mechanics

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1990-01-01

The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, a parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for a demonstration problem solved on the Sequent Symmetry S81 parallel computing system.
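The row-by-row independence that makes the influence-matrix assembly "inherently parallel" can be sketched as follows; the kernel here is a stand-in placeholder (a real BEM code integrates Green's functions over elements), and Python threads stand in for the Sequent's processors.

```python
# Concurrent assembly of an influence-coefficient matrix: each row
# depends only on one collocation point, so rows are independent tasks.

from concurrent.futures import ThreadPoolExecutor

def influence_row(i, nodes):
    # Placeholder kernel: 1/(1 + distance) between collocation point i
    # and every node j. NOT a real boundary element influence integral.
    xi = nodes[i]
    return [1.0 / (1.0 + abs(xi - xj)) for xj in nodes]

def assemble_parallel(nodes, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rows = pool.map(lambda i: influence_row(i, nodes),
                        range(len(nodes)))
    return list(rows)
```

Since the tasks share no state, the parallel result is bitwise identical to a serial loop over rows, which is the property the paper exploits for its speedup measurements.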

  13. Quantitative Effects of P Elements on Hybrid Dysgenesis in Drosophila Melanogaster

    PubMed Central

    Rasmusson, K. E.; Simmons, M. J.; Raymond, J. D.; McLarnon, C. F.

    1990-01-01

    Genetic analyses involving chromosomes from seven inbred lines derived from a single M' strain were used to study the quantitative relationships between the incidence and severity of P-M hybrid dysgenesis and the number of genomic P elements. In four separate analyses, the mutability of sn(w), a P element-insertion mutation of the X-linked singed locus, was found to be inversely related to the number of autosomal P elements. Since sn(w) mutability is caused by the action of the P transposase, this finding supports the hypothesis that genomic P elements titrate the transposase present within a cell. Other analyses demonstrated that autosomal transmission ratios were distorted by P element action. In these analyses, the amount of distortion against an autosome increased more or less linearly with the number of P elements carried by the autosome. Additional analyses showed that the magnitude of this distortion was reduced when a second P element-containing autosome was present in the genome. This reduction could adequately be explained by transposase titration; there was no evidence that it was due to repressor molecules binding to P elements and inhibiting their movement. The influence of genomic P elements on the incidence of gonadal dysgenesis was also investigated. Although no simple relationship between the number of P elements and the incidence of the trait could be discerned, it was clear that even a small number of elements could increase the incidence markedly. The failure to find a quantitative relationship between P element number and the incidence of gonadal dysgenesis probably reflects the complex etiology of this trait. PMID:2155853

  14. Bioinformatics and genomic analysis of transposable elements in eukaryotic genomes.

    PubMed

    Janicki, Mateusz; Rooke, Rebecca; Yang, Guojun

    2011-08-01

A major portion of most eukaryotic genomes consists of transposable elements (TEs). During evolution, TEs have introduced profound changes to genome size, structure, and function. As integral parts of genomes, the dynamic presence of TEs will continue to be a major force in reshaping genomes. Early computational analyses of TEs in genome sequences focused on filtering out "junk" sequences to facilitate gene annotation. When the high abundance and diversity of TEs in eukaryotic genomes were recognized, these early efforts transformed into the systematic genome-wide categorization and classification of TEs. The availability of genomic sequence data reversed the classical genetic approaches to discovering new TE families and superfamilies. Curated TE databases and their accurate annotation of genome sequences in turn facilitated studies of TEs on a number of frontiers, including: (1) TE-mediated changes of genome size and structure, (2) the influence of TEs on genome and gene functions, (3) TE regulation by the host, (4) the evolution of TEs and their population dynamics, and (5) genomic-scale studies of TE activity. Bioinformatics and genomic approaches have become an integral part of large-scale studies on TEs, whether to extract information through purely in silico analyses or to assist wet-lab experimental studies. The current revolution in genome sequencing technology facilitates further progress on the existing frontiers of research and the emergence of new initiatives. The rapid generation of large sequence datasets at record low costs on a routine basis is challenging the computing industry on storage capacity and manipulation speed, and the bioinformatics community to improve algorithms and their implementations.
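As one concrete, deliberately toy-sized example of the kind of sequence test TE-annotation pipelines automate, a check for terminal inverted repeats (a hallmark of DNA transposons) might look like the following sketch:

```python
# Toy terminal-inverted-repeat (TIR) check for a candidate element.

def revcomp(seq):
    """Reverse complement of a DNA string (A/C/G/T only)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def has_tir(seq, tir_len):
    """True if the first tir_len bases are the reverse complement
    of the last tir_len bases."""
    return seq[:tir_len] == revcomp(seq[-tir_len:])
```

Real annotators combine many such signals (TIRs, target-site duplications, coding domains, copy number) with library-based similarity searches, but each signal reduces to string operations of this kind.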

  15. Study of the elastic behavior of synthetic lightweight aggregates (SLAs)

    NASA Astrophysics Data System (ADS)

    Jin, Na

Synthetic lightweight aggregates (SLAs), composed of coal fly ash and recycled plastics, represent a resilient construction material that could be a key aspect of future sustainable development. This research focuses on predicting the elastic modulus of SLA, assumed to be a homogeneous and isotropic composite of particulates of high-carbon fly ash (HCFA) in a matrix of plastics (HDPE, LDPE, PS and mixtures of plastics), with emphasis on SLAs made of HCFA and PS. The elastic moduli of SLAs with variable fly ash volume fractions are predicted from finite element analyses (FEA) performed using the computer programs ABAQUS and PLAXIS. The effects of interface friction (roughness) between phases and of other computational parameters, e.g., loading strain, component stiffness, element type and boundary conditions, are included in these analyses. Analytical models and laboratory tests provide a baseline for comparison. Overall, results indicate that ABAQUS generates elastic moduli closer to those predicted by well-established analytical models than PLAXIS does, especially for SLAs with lower fly ash content. In addition, increases in roughness and loading strain indicated increased SLA stiffness, especially as fly ash content increases. The elastic moduli obtained from unconfined compression were generally lower than those obtained from analytical and ABAQUS 3D predictions. This may be caused by the possible existence of a pre-failure surface in the specimen and by direct interaction between HCFA particles. Recommendations for future work include laboratory measurements of SLA moduli and FEM modeling that considers various sizes and random distribution of HCFA particles in SLAs.
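One common analytical baseline for the modulus of a particulate composite is the Halpin-Tsai relation; whether this is among the "well-established analytical models" the study compared against is not stated here, so the sketch below is a generic illustration with hypothetical moduli.

```python
# Halpin-Tsai estimate of a particulate composite modulus.
# Vf: filler volume fraction, Ep: particle modulus, Em: matrix modulus,
# xi: empirical geometry parameter (xi = 2 is a common choice for
# roughly spherical particles). All test values are hypothetical.

def halpin_tsai(Vf, Ep, Em, xi=2.0):
    eta = (Ep / Em - 1.0) / (Ep / Em + xi)
    return Em * (1.0 + xi * eta * Vf) / (1.0 - eta * Vf)
```

The prediction recovers the matrix modulus at Vf = 0 and always lies between the matrix and particle moduli, which makes it a convenient sanity check against FEA results like those from ABAQUS and PLAXIS.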

  16. Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements

    PubMed Central

    Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.

    2016-01-01

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037
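For reference, the ten shape functions of the TET10 element can be written directly in barycentric (volume) coordinates: quadratic functions at the four corners plus products at the six mid-edge nodes. Node-ordering conventions differ between codes, so the mid-edge ordering below is just one common choice, not necessarily FEBio's.

```python
# TET10 shape functions in barycentric coordinates (L1, L2, L3),
# with L4 = 1 - L1 - L2 - L3.

def tet10_shape(L1, L2, L3):
    L4 = 1.0 - L1 - L2 - L3
    corner = [L * (2.0 * L - 1.0) for L in (L1, L2, L3, L4)]
    edges = [4.0 * L1 * L2, 4.0 * L2 * L3, 4.0 * L3 * L1,
             4.0 * L1 * L4, 4.0 * L2 * L4, 4.0 * L3 * L4]
    return corner + edges
```

The ten functions form a partition of unity (they sum to 1 everywhere in the element) and each equals 1 at its own node and 0 at the other nine, the properties any quadrature or contact-integration rule over the element relies on.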

  17. Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses

    NASA Astrophysics Data System (ADS)

    Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.

    2017-12-01

    To explain earthquake generation processes, simulation methods of earthquake cycles have been studied. For such simulations, the combination of the rate- and state-dependent friction law at the fault plane and the boundary integral method based on Green's function in an elastic half space is widely used (e.g. Hori 2009; Barbot et al. 2012). In this approach, stress change around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost associated with obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), which assumes use of supercomputers, to solve the problem in a realistic time. As in the previous approach, we solve the governing equations consisting of the rate- and state-dependent friction law. In solving the equations, we compute stress changes along the fault plane due to crustal deformation using FE simulation, instead of computing them by superimposing slip response function as in the previous approach. In stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results in a normative three-dimensional problem, where a circular-shaped velocity-weakening area is set in a square-shaped fault plane. The results with and without nonlinear viscosity in the asthenosphere will be compared. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake. 
Acknowledgment: The results were obtained using the K computer at the RIKEN (Proposal number hp160221).
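The rate- and state-dependent friction law at the heart of such cycle simulations can be sketched in a few lines; the aging-law form below uses generic laboratory-scale parameters, purely as an illustration, not values from this study.

```python
import math

# Rate-and-state friction with the aging law for the state variable.
# a, b, dc, mu0, v0 are generic illustrative laboratory-scale values.

def friction(v, theta, a=0.010, b=0.015, dc=1e-4, mu0=0.6, v0=1e-6):
    """Friction coefficient at slip rate v and state theta."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def evolve_state(theta, v, dt, dc=1e-4):
    """Aging law d(theta)/dt = 1 - v*theta/dc, forward-Euler step."""
    return theta + dt * (1.0 - v * theta / dc)
```

At steady state theta = dc/v and the friction reduces to mu0 + (a - b) ln(v/v0), so with a - b < 0 (as in the parameters above) the fault is velocity-weakening, the condition imposed inside the circular patch of the benchmark problem.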

  18. Determination of elemental composition of substance lost following wear of all-ceramic materials.

    PubMed

    Dündar, Mine; Artunç, Celal; Toksavul, Suna; Ozmen, Dilek; Turgan, Nevbahar

    2003-01-01

The aim of this study was to test the possible elemental release of four different all-ceramic materials in a wear machine in order to predict their long-term behavior in the oral environment. Four all-ceramic materials with different chemical compositions were selected for wear testing. A total of 20 cylindric samples, five for each ceramic group, were prepared according to the manufacturers' instructions. These were subjected to two-body wear testing in an artificial saliva medium under a covered unit with a computer-operated wear machine. The artificial saliva solutions for each material were analyzed to determine the amounts of sodium, potassium, calcium, magnesium, and lithium released from the glass-ceramic materials. The differences between and within groups were statistically analyzed with a one-way ANOVA, followed by Duncan tests. The statistical analyses revealed no significant differences among Na, K, Ca, or Mg levels (P > .05) released from the leucite-reinforced groups, while there was a significant (P < .05) increase in Li release from the lithium disilicate group. Considerable element release into the artificial saliva medium was demonstrated in short-term wear testing. The lithia-based ceramic was more prone to Li release when compared with other elements and materials.

  19. An Ancient Transkingdom Horizontal Transfer of Penelope-Like Retroelements from Arthropods to Conifers.

    PubMed

    Lin, Xuan; Faridi, Nurul; Casola, Claudio

    2016-05-02

Comparative genomics analyses empowered by the wealth of sequenced genomes have revealed numerous instances of horizontal DNA transfers between distantly related species. In eukaryotes, repetitive DNA sequences known as transposable elements (TEs) are especially prone to move across species boundaries. Such horizontal transposon transfers, or HTTs, are relatively common within major eukaryotic kingdoms, including animals, plants, and fungi, while rarely occurring across these kingdoms. Here, we describe the first case of HTT from animals to plants, involving TEs known as Penelope-like elements, or PLEs, a group of retrotransposons closely related to eukaryotic telomerases. Using a combination of in situ hybridization on chromosomes, polymerase chain reaction experiments, and computational analyses, we show that the predominant PLE lineage, EN(+)PLEs, is highly diversified in loblolly pine and other conifers, but appears to be absent in other gymnosperms. Phylogenetic analyses of both protein and DNA sequences reveal that conifer EN(+)PLEs, or Dryads, form a monophyletic group clustering within a clade of primarily arthropod elements. Additionally, no EN(+)PLEs were detected in 1,928 genome assemblies from 1,029 nonmetazoan and nonconifer genomes from 14 major eukaryotic lineages. These findings indicate that Dryads emerged following an ancient horizontal transfer of EN(+)PLEs from arthropods to a common ancestor of conifers approximately 340 Ma. This represents one of the oldest known interspecific transmissions of TEs, and the most conspicuous case of DNA transfer between animals and plants. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution 2016. This work is written by US Government employees and is in the public domain in the US.

  20. A combined registration and finite element analysis method for fast estimation of intraoperative brain shift; phantom and animal model study.

    PubMed

    Mohammadi, Amrollah; Ahmadian, Alireza; Rabbani, Shahram; Fattahi, Ehsan; Shirani, Shapour

    2017-12-01

Finite element models for estimation of intraoperative brain shift suffer from high computational cost. In these models, image registration and finite element analysis are the two time-consuming processes. The proposed method is an improved version of our previously developed Finite Element Drift (FED) registration algorithm; in this work the registration process is combined with the finite element analysis. In the Combined FED (CFED), the deformation of the whole brain mesh is iteratively calculated by geometrical extension of a local load vector computed by FED. While the processing time of the FED-based method, including registration and finite element analysis, was about 70 s, the computation time of the CFED was about 3.2 s. The computational cost of CFED is almost 50% less than that of comparable state-of-the-art brain shift estimators based on finite element models. The proposed combination of registration and structural analysis can make the calculation of brain deformation much faster. Copyright © 2016 John Wiley & Sons, Ltd.

  1. On computational methods for crashworthiness

    NASA Technical Reports Server (NTRS)

    Belytschko, T.

    1992-01-01

    The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.

  2. Efficient simulation of incompressible viscous flow over multi-element airfoils

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Wiltberger, N. Lyn; Kwak, Dochan

    1992-01-01

    The incompressible, viscous, turbulent flow over single and multi-element airfoils is numerically simulated in an efficient manner by solving the incompressible Navier-Stokes equations. The computer code uses the method of pseudo-compressibility with an upwind-differencing scheme for the convective fluxes and an implicit line-relaxation solution algorithm. The motivation for this work includes interest in studying the high-lift take-off and landing configurations of various aircraft. In particular, accurate computation of lift and drag at various angles of attack, up to stall, is desired. Two different turbulence models are tested in computing the flow over an NACA 4412 airfoil; an accurate prediction of stall is obtained. The approach used for multi-element airfoils involves the use of multiple zones of structured grids fitted to each element. Two different approaches are compared: a patched system of grids, and an overlaid Chimera system of grids. Computational results are presented for two-element, three-element, and four-element airfoil configurations. Excellent agreement with experimental surface pressure coefficients is seen. The code converges in less than 200 iterations, requiring on the order of one minute of CPU time (on a CRAY YMP) per element in the airfoil configuration.

  3. Feasibility study for the implementation of NASTRAN on the ILLIAC 4 parallel processor

    NASA Technical Reports Server (NTRS)

    Field, E. I.

    1975-01-01

The ILLIAC IV, a fourth-generation multiprocessor using parallel processing hardware concepts, is operational at Moffett Field, California. Its capability to excel at matrix manipulation makes the ILLIAC well suited for performing structural analyses using the finite element displacement method. The feasibility of modifying the NASTRAN (NASA structural analysis) computer program to make effective use of the ILLIAC IV was investigated. The characteristics of the ILLIAC and of the ARPANET, a telecommunications network which spans the continent and makes the ILLIAC accessible to nearly all major industrial centers in the United States, are summarized. Two distinct approaches are studied: retaining NASTRAN as it now operates on many of the host computers of the ARPANET to process the input and output while using the ILLIAC only for the major computational tasks, and installing NASTRAN to operate entirely in the ILLIAC environment. Though both alternatives offer similar and significant increases in computational speed over modern third-generation processors, the full installation of NASTRAN on the ILLIAC is recommended. Specifications are presented for performing that task, with manpower estimates and schedules to correspond.

  4. Reliability analysis of laminated CMC components through shell subelement techniques

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Gyekenyesi, John P.

    1992-01-01

    An updated version of the integrated design program Composite Ceramics Analysis and Reliability Evaluation of Structures (C/CARES) was developed for the reliability evaluation of ceramic matrix composites (CMC) laminated shell components. The algorithm is now split into two modules: a finite-element data interface program and a reliability evaluation algorithm. More flexibility is achieved, allowing for easy implementation with various finite-element programs. The interface program creates a neutral data base which is then read by the reliability module. This neutral data base concept allows easy data transfer between different computer systems. The new interface program from the finite-element code Matrix Automated Reduction and Coupling (MARC) also includes the option of using hybrid laminates (a combination of plies of different materials or different layups) and allows for variations in temperature fields throughout the component. In the current version of C/CARES, a subelement technique was implemented, enabling stress gradients within an element to be taken into account. The noninteractive reliability function is now evaluated at each Gaussian integration point instead of using averaging techniques. As a result of the increased number of stress evaluation points, considerable improvements in the accuracy of reliability analyses were realized.
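The shift from element-averaged stresses to Gauss-point evaluation can be illustrated with a small sketch. This is a generic two-parameter Weibull reliability calculation, not C/CARES's actual formulation; the scale parameter, Weibull modulus, and quadrature data below are illustrative assumptions.

```python
import numpy as np

def element_reliability(stresses, weights, det_j, sigma0=300.0, m=10.0):
    """Weibull-type survival probability for one element, integrating the
    risk of rupture over Gauss points instead of using an element-average
    stress. sigma0 (scale, MPa) and m (Weibull modulus) are illustrative."""
    s = np.maximum(np.asarray(stresses, float), 0.0)  # only tension contributes
    risk = np.sum(weights * det_j * (s / sigma0) ** m)
    return np.exp(-risk)

# 2x2 Gauss rule on a unit-Jacobian quad element (weights all 1.0)
w = np.ones(4)
detj = np.ones(4)
uniform = element_reliability([200.0] * 4, w, detj)
graded = element_reliability([120.0, 150.0, 250.0, 280.0], w, detj)
```

Because the risk integrand is a high power of stress, the Gauss-point evaluation penalizes local stress peaks that an element average (200 MPa in both cases above) would hide, which is why the gradient-carrying element comes out less reliable.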

  5. Advanced Software for Analysis of High-Speed Rolling-Element Bearings

    NASA Technical Reports Server (NTRS)

    Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.

    2003-01-01

    COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses: thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include roller-edge stress prediction and influence of shaft and housing distortion on bearing performance.

  6. Design and Implementation of a Parallel Multivariate Ensemble Kalman Filter for the Poseidon Ocean General Circulation Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Koblinsky, Chester (Technical Monitor)

    2001-01-01

    A multivariate ensemble Kalman filter (MvEnKF) implemented on a massively parallel computer architecture has been developed for the Poseidon ocean circulation model and tested with a Pacific Basin model configuration. There are about two million prognostic state-vector variables. Parallelism for the data assimilation step is achieved by regionalization of the background-error covariances that are calculated from the phase-space distribution of the ensemble. Each processing element (PE) collects elements of a matrix measurement functional from nearby PEs. To avoid the introduction of spurious long-range covariances associated with finite ensemble sizes, the background-error covariances are given compact support by means of a Hadamard (element-by-element) product with a three-dimensional canonical correlation function. The methodology and the MvEnKF configuration are discussed. It is shown that the regionalization of the background covariances has a negligible impact on the quality of the analyses. The parallel algorithm is very efficient for large numbers of observations but does not scale well beyond 100 PEs at the current model resolution. On a platform with distributed memory, memory rather than speed is the limiting factor.
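The Hadamard-product localization described above can be sketched in a few lines: a small ensemble produces a sample covariance with spurious long-range entries, and an elementwise product with a compactly supported function removes them. A simple triangular taper stands in here for the paper's canonical correlation function, and the 1-D state and length scales are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 10  # state size, ensemble size

# synthetic truth: background errors correlated only over short distances
x = np.arange(n, dtype=float)
dist = np.abs(x[:, None] - x[None, :])
true_C = np.exp(-dist / 3.0)
ens = np.linalg.cholesky(true_C) @ rng.standard_normal((n, m))

# sample background-error covariance: rank-deficient, noisy at long range
anom = ens - ens.mean(axis=1, keepdims=True)
B = anom @ anom.T / (m - 1)

# compact support via a Hadamard (element-by-element) product; a triangular
# taper with cutoff 10 grid points stands in for the canonical correlation
rho = np.clip(1.0 - dist / 10.0, 0.0, None)
B_loc = rho * B
```

The taper leaves the diagonal (the error variances) untouched while zeroing every covariance beyond the cutoff distance.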

  7. Geology and surface geochemistry of the Roosevelt Springs Known Geothermal Resource Area, Utah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lovell, J.S.; Meyer, W.T.; Atkinson, D.J.

    1980-01-01

    Available data on the Roosevelt area were synthesized to determine the spatial arrangement of the rocks and the patterns of mass and energy flow within them. The resulting model led to a new interpretation of the geothermal system and provided ground truth for evaluating the application of soil geochemistry to exploration for concealed geothermal fields. Preliminary geochemical studies comparing the surface microlayer to conventional soil sampling methods indicated both practical and chemical advantages for the surface microlayer technique, which were particularly evident in the case of As, Sb and Cs. Subsequent multi-element analyses of surface microlayer samples collected over an area of 100 square miles were processed to produce single-element contour maps for 41 chemical parameters. Computer manipulation of the multi-element data using R-mode factor analysis provided the optimum method of interpretation of the surface microlayer data. A trace element association of As, Sb and Cs in the surface microlayer provided the best indication of the leakage of geothermal solutions to the surface, while regional mercury trends may reflect the presence of a mercury vapour anomaly above a concealed heat source.
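R-mode factor analysis groups *elements* (rather than samples) by eigendecomposing their correlation matrix. The principal-factor sketch below uses synthetic soil data in which As, Sb and Cs share one latent "leakage" factor while Hg varies independently; the element names echo the abstract but all values are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# hypothetical multi-element soil data: As, Sb, Cs share a common latent
# factor; Hg varies independently (all values synthetic)
leak = rng.standard_normal(n)
data = np.column_stack([
    leak + 0.3 * rng.standard_normal(n),  # As
    leak + 0.3 * rng.standard_normal(n),  # Sb
    leak + 0.3 * rng.standard_normal(n),  # Cs
    rng.standard_normal(n),               # Hg
])

# R-mode factor analysis (principal-factor variant): eigendecompose the
# inter-element correlation matrix; loadings = eigenvectors * sqrt(eigenvalues)
z = (data - data.mean(0)) / data.std(0)
R = np.corrcoef(z, rowvar=False)
vals, vecs = np.linalg.eigh(R)
order = np.argsort(vals)[::-1]
loadings = vecs[:, order] * np.sqrt(vals[order])
factor1 = loadings[:, 0]
```

The first factor loads heavily on the three associated elements and weakly on Hg, which is how a trace-element association like the As-Sb-Cs signal falls out of the factor solution.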

  8. Computer- and web-based interventions to promote healthy eating among children and adolescents: a systematic review.

    PubMed

    Hamel, Lauren M; Robbins, Lorraine B

    2013-01-01

    To: (1) determine the effect of computer- and web-based interventions on improving eating behavior (e.g. increasing fruit and vegetable consumption; decreasing fat consumption) and/or diet-related physical outcomes (e.g. body mass index) among children and adolescents; and (2) examine which elements enhance success. Children and adolescents are the heaviest they have ever been. Excess weight can carry into adulthood and result in chronic health problems. Because of their capacity to reach large audiences of children and adolescents to promote healthy eating, computer- and web-based interventions hold promise for helping to curb this serious trend. However, evidence to support this approach is lacking. Systematic review using guidelines from the Cochrane Effective Practice and Organisation of Care Group. The following databases were searched for studies from 1998 to 2011: CINAHL; PubMed; Cochrane; PsycINFO; ERIC; and Proquest. Fifteen randomized controlled trials or quasi-experimental studies were analysed in a systematic review. Although a majority of interventions resulted in statistically significant positive changes in eating behavior and/or diet-related physical outcomes, interventions that included post-intervention follow-up, ranging from 3 to 18 months, showed that changes were not maintained. Elements such as conducting the intervention at school or using individually tailored feedback may enhance success. Computer- and web-based interventions can improve eating behavior and diet-related physical outcomes among children and adolescents, particularly when conducted in schools and individually tailored. These interventions can complement and support nursing efforts to give preventive care; however, maintenance efforts are recommended. © 2012 Blackwell Publishing Ltd.

  9. Adaptive Crack Modeling with Interface Solid Elements for Plain and Fiber Reinforced Concrete Structures.

    PubMed

    Zhan, Yijian; Meschke, Günther

    2017-07-08

    The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense.
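The core mesh operation, opening the initial mesh along an interface by duplicating nodes so that a zero-thickness element can be inserted into the gap, can be sketched on a toy 2-D mesh. The paper's adaptive scheme does this progressively and in 3-D with degenerated solid elements; the two-triangle example below is only a minimal illustration.

```python
def insert_interface(nodes, tris, tri_idx, edge):
    """Duplicate the nodes of `edge` for triangle `tri_idx`, opening a
    zero-thickness gap along that edge. Returns the interface element as
    (original_node, duplicate_node) pairs. Toy 2-D version."""
    node_map = {}
    for n in edge:
        dup = len(nodes)
        nodes.append(list(nodes[n]))  # duplicate coordinates in place
        node_map[n] = dup
    tris[tri_idx] = [node_map.get(n, n) for n in tris[tri_idx]]
    return [(n, node_map[n]) for n in edge]

# two triangles sharing edge (1, 2)
nodes = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
tris = [[0, 1, 2], [1, 3, 2]]
interface = insert_interface(nodes, tris, tri_idx=1, edge=(1, 2))
```

After the call the second triangle references the duplicated nodes, and the returned pairs define where the degenerated interface element would be placed; doing this only where cracking is detected is what saves the computational expense mentioned above.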

  10. Adaptive Crack Modeling with Interface Solid Elements for Plain and Fiber Reinforced Concrete Structures

    PubMed Central

    Zhan, Yijian

    2017-01-01

    The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense. PMID:28773130

  11. Finite element dynamic analysis on CDC STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lambiotte, J. J., Jr.

    1978-01-01

    Computational algorithms are presented for the finite element dynamic analysis of structures on the CDC STAR-100 computer. The spatial behavior is described using higher-order finite elements. The temporal behavior is approximated by using either the central difference explicit scheme or Newmark's implicit scheme. In each case the analysis is broken up into a number of basic macro-operations. Discussion is focused on the organization of the computation and the mode of storage of different arrays to take advantage of the STAR pipeline capability. The potential of the proposed algorithms is discussed and CPU times are given for performing the different macro-operations for a shell modeled by higher order composite shallow shell elements having 80 degrees of freedom.
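The central-difference explicit scheme named above can be shown on a single-DOF undamped oscillator (a stand-in, not the STAR-100 shell model); with m = 1 and k = (2*pi)^2 the exact period is 1, so the displacement should return to its initial value after one period.

```python
import numpy as np

def central_difference(m, k, u0, v0, dt, nsteps, f=0.0):
    """Explicit central-difference integration of m*u'' + k*u = f.
    Conditionally stable: requires dt < 2/omega."""
    a0 = (f - k * u0) / m
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0  # fictitious step u_{-1}
    u = u0
    for _ in range(nsteps):
        u_next = 2 * u - u_prev + dt**2 * (f - k * u) / m
        u_prev, u = u, u_next
    return u

k = (2 * np.pi) ** 2  # omega = 2*pi, so the period T = 1
u_T = central_difference(m=1.0, k=k, u0=1.0, v0=0.0, dt=1e-3, nsteps=1000)
```

Each step is a cheap vector update with no factorization, which is why the scheme maps so naturally onto pipelined macro-operations; the price is the conditional stability limit on dt.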

  12. View of MISSE-8 taken during a session of EVA

    NASA Image and Video Library

    2011-07-12

    ISS028-E-016111 (12 July 2011) --- This close-up image, recorded during a July 12 spacewalk, shows the Materials on International Space Station Experiment - 8 (MISSE-8). The experiment package is a test bed for materials and computing elements attached to the outside of the orbiting complex. These materials and computing elements are being evaluated for the effects of atomic oxygen, ultraviolet, direct sunlight, radiation, and extremes of heat and cold. This experiment allows the development and testing of new materials and computing elements that can better withstand the rigors of space environments. Results will provide a better understanding of the durability of various materials and computing elements when they are exposed to the space environment, with applications in the design of future spacecraft.

  13. An emulator for minimizing computer resources for finite element analysis

    NASA Technical Reports Server (NTRS)

    Melosh, R.; Utku, S.; Islam, M.; Salama, M.

    1984-01-01

    A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
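The calibration idea behind such an emulator can be sketched as a least-squares fit of a cost model to a few timed runs. The model form t ~ c0 + c1*n + c2*n*b^2 (n equations, b semi-bandwidth, reflecting banded-solver operation counts) and all numbers below are illustrative assumptions, not SCOPE's actual calibration data.

```python
import numpy as np

# hypothetical calibration runs: n = number of equations, b = semi-bandwidth
n = np.array([1000.0, 2000.0, 4000.0, 8000.0, 16000.0])
b = np.array([20.0, 30.0, 40.0, 50.0, 60.0])

# synthetic "measured" CPU seconds generated from a known cost model
c_true = np.array([0.1, 2e-5, 1e-6])  # overhead, assembly, banded solve
t = c_true @ np.vstack([np.ones_like(n), n, n * b**2])

# calibrate once per machine/code pair: fit t ~ c0 + c1*n + c2*n*b^2
X = np.column_stack([np.ones_like(n), n, n * b**2])
coeffs, *_ = np.linalg.lstsq(X, t, rcond=None)

def predict_cpu(n_new, b_new):
    """Predicted CPU seconds for a new problem size and bandwidth."""
    return float(coeffs @ [1.0, n_new, n_new * b_new**2])
```

Evaluating the fitted polynomial costs essentially nothing compared with running the analysis itself, which is the point of an emulator like SCOPE.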

  14. Microstructural characterization, petrophysics and upscaling - from porous media to fractural media

    NASA Astrophysics Data System (ADS)

    Liu, J.; Liu, K.; Regenauer-Lieb, K.

    2017-12-01

    We present an integrated study for the characterization of complex geometry, fluid transport features and mechanical deformation at micro-scale and the upscaling of properties using microtomographic data: We show how to integrate microstructural characterization by the volume fraction, specific surface area, connectivity (percolation), shape and orientation of microstructures with identification of individual fractures from a 3D fractural network. In a first step we use stochastic analyses of microstructures to determine the geometric RVE (representative volume element) of samples. We proceed by determining the size of a thermodynamic RVE by computing upper/lower bounds of entropy production through Finite Element (FE) analyses on a series of models with increasing sizes. The minimum size for thermodynamic RVE's is identified on the basis of the convergence criteria of the FE simulations. Petrophysical properties (permeability and mechanical parameters, including plastic strength) are then computed numerically if thermodynamic convergence criteria are fulfilled. Upscaling of properties is performed by means of percolation theory. The percolation threshold is detected by using a shrinking/expanding algorithm on static micro-CT images of rocks. Parameters of the scaling laws can be extracted from quantitative analyses and/or numerical simulations on a series of models with similar structures but different porosities close to the percolation threshold. Different rock samples are analyzed. Characterizing parameters of porous/fractural rocks are obtained. Synthetic derivative models of the microstructure are used to estimate the relationships between porosity and mechanical properties. Results obtained from synthetic sandstones show that yield stress, cohesion and the angle of friction are linearly proportional to porosity. 
Our integrated study shows that digital rock technology can provide meaningful parameters for effective upscaling if thermodynamic volume averaging satisfies the convergence criteria. For strongly heterogeneous rocks, however, the thermodynamic convergence criteria may not be met; a continuum approach cannot be justified in this case.
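The basic operation behind detecting a percolation threshold on micro-CT images is a connectivity test: does the pore phase span the sample? A breadth-first search on a toy 2-D binary image is sketched below (a generic illustration, not the authors' shrinking/expanding code).

```python
from collections import deque

def percolates(img):
    """True if the pore phase (1s) connects the top row to the bottom row
    of a 2-D binary image via 4-neighbour connectivity."""
    rows, cols = len(img), len(img[0])
    seen = set((0, c) for c in range(cols) if img[0][c])
    queue = deque(seen)
    while queue:
        r, c = queue.popleft()
        if r == rows - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and img[nr][nc] \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

channel = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]  # a through-going pore
blocked = [[1, 1, 0], [0, 0, 0], [0, 1, 1]]  # pore phase interrupted
```

Repeating this test while morphologically shrinking or expanding the pore phase brackets the porosity at which spanning connectivity first appears, i.e. the percolation threshold.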

  15. Additional and revised thermochemical data and computer code for WATEQ2: a computerized chemical model for trace and major element speciation and mineral equilibria of natural waters

    USGS Publications Warehouse

    Ball, James W.; Nordstrom, D. Kirk; Jenne, Everett A.

    1980-01-01

    A computerized chemical model, WATEQ2, has resulted from extensive additions to and revision of the WATEQ model of Truesdell and Jones (Truesdell, A. H., and Jones, B. F., 1974, WATEQ, a computer program for calculating chemical equilibria of natural waters: J. Res. U.S. Geol. Survey, v. 2, p. 233-274). The model-building effort has necessitated searching the literature and selecting thermochemical data pertinent to the reactions added to the model. This supplementary report makes available the details of the reactions added to the model, together with the selected thermochemical data and their sources. Also listed are details of program operation and a brief description of the output of the model. Appendices contain a glossary of identifiers used in the PL/1 computer code, the complete PL/1 listing, and sample output from three water analyses used as test cases.

  16. Analysis of Tube Hydroforming by means of an Inverse Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Johnson, Kenneth I.; Khaleel, Mohammad A.

    2003-05-01

    This paper presents a computational tool for the analysis of freely hydroformed tubes by means of an inverse approach. The formulation of the inverse method developed by Guo et al. is adopted and extended to tube hydroforming problems in which the initial geometry is a round tube subjected to hydraulic pressure and axial feed at the tube ends (end-feed). A simple criterion based on a forming limit diagram is used to predict the necking regions in the deformed workpiece. Although the developed computational tool is a stand-alone code, it has been linked to the Marc finite element code for meshing and visualization of results. The application of the inverse approach to tube hydroforming is illustrated through the analyses of aluminum alloy AA6061-T4 seamless tubes under free hydroforming conditions. The results obtained are in good agreement with those issued from a direct incremental approach. However, the computational time in the inverse procedure is much less than that in the incremental method.

  17. Thermal Hydraulics Design and Analysis Methodology for a Solid-Core Nuclear Thermal Rocket Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Chen, Yen-Sen; Cheng, Gary; Ito, Yasushi

    2013-01-01

    Nuclear thermal propulsion is a leading candidate for in-space propulsion for human Mars missions. This chapter describes a thermal hydraulics design and analysis methodology developed at the NASA Marshall Space Flight Center in support of the nuclear thermal propulsion development effort. The objective of this campaign is to bridge the design methods of the Rover/NERVA era with a modern computational fluid dynamics and heat transfer methodology, to predict thermal, fluid, and hydrogen environments of a hypothetical solid-core nuclear thermal engine, the Small Engine, designed in the 1960s. The computational methodology is based on an unstructured-grid, pressure-based, all-speeds, chemically reacting, computational fluid dynamics and heat transfer platform, while formulations of flow and heat transfer through porous and solid media were implemented to describe the hydrogen flow channels inside the solid core. Design analyses of a single flow element and of the entire solid-core thrust chamber of the Small Engine were performed, and the results are presented herein.

  18. Optically intraconnected computer employing dynamically reconfigurable holographic optical element

    NASA Technical Reports Server (NTRS)

    Bergman, Larry A. (Inventor)

    1992-01-01

    An optically intraconnected computer and a reconfigurable holographic optical element employed therein. The basic computer comprises a memory for holding a sequence of instructions to be executed; logic for accessing the instructions in sequence; logic for determining, for each instruction, the function to be performed and its effective address; a plurality of individual elements on a common support substrate, each optimized to perform certain logical sequences employed in executing the instructions; and element selection logic, connected to the function-determining logic, for determining the class of each function and for causing each instruction to be executed by those elements whose logical sequences carry out the instruction in an optimum manner. In the optically intraconnected version, the element selection logic is adapted for transmitting and switching signals to the elements optically.

  19. Finite element simulation of articular contact mechanics with quadratic tetrahedral elements.

    PubMed

    Maas, Steve A; Ellis, Benjamin J; Rawlins, David S; Weiss, Jeffrey A

    2016-03-21

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
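The TET10 element discussed above has a standard shape-function set in barycentric (volume) coordinates: L_i(2L_i - 1) at the four corners and 4*L_i*L_j at the six mid-edge nodes. A minimal sketch (generic textbook formulas, not FEBio's implementation) verifies the partition-of-unity and nodal-interpolation properties:

```python
import numpy as np

def tet10_shape(L1, L2, L3, L4):
    """TET10 shape functions in barycentric coordinates (L1+L2+L3+L4 = 1):
    4 corner nodes Li*(2*Li - 1) plus 6 mid-edge nodes 4*Li*Lj."""
    corners = [L * (2 * L - 1) for L in (L1, L2, L3, L4)]
    edges = [4 * a * b for a, b in ((L1, L2), (L2, L3), (L3, L1),
                                    (L1, L4), (L2, L4), (L3, L4))]
    return np.array(corners + edges)

N = tet10_shape(0.1, 0.2, 0.3, 0.4)   # an interior point
Nc = tet10_shape(1.0, 0.0, 0.0, 0.0)  # corner node 1
```

Because the corner functions are quadratic in the L_i, the element reproduces linear strain fields exactly, which is the root of its advantage over TET4 in contact stress prediction.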

  20. Graphics enhanced computer emulation for improved timing-race and fault tolerance control system analysis. [of Centaur liquid-fuel booster

    NASA Technical Reports Server (NTRS)

    Szatkowski, G. P.

    1983-01-01

    A computer simulation system has been developed for the Space Shuttle's advanced Centaur liquid fuel booster rocket, in order to conduct systems safety verification and flight operations training. This simulation utility is designed to analyze functional system behavior by integrating control avionics with mechanical and fluid elements, and is able to emulate any system operation, from simple relay logic to complex VLSI components, with wire-by-wire detail. A novel graphics data entry system offers a pseudo-wire wrap data base that can be easily updated. Visual subsystem operations can be selected and displayed in color on a six-monitor graphics processor. System timing and fault verification analyses are conducted by injecting component fault modes and min/max timing delays, and then observing system operation through a red line monitor.

  1. Evaluation of the Pseudostatic Analyses of Earth Dams Using FE Simulation and Observed Earthquake-Induced Deformations: Case Studies of Upper San Fernando and Kitayama Dams

    PubMed Central

    Akhlaghi, Tohid

    2014-01-01

    Evaluation of the accuracy of the pseudostatic approach is governed by the accuracy with which the simple pseudostatic inertial forces represent the complex dynamic inertial forces that actually exist in an earthquake. In this study, the Upper San Fernando and Kitayama earth dams, which were designed using the pseudostatic approach and damaged during the 1971 San Fernando and 1995 Kobe earthquakes, were investigated and analyzed. The finite element models of the dams were prepared based on the detailed available data and on the results of in situ and laboratory material tests. Dynamic analyses were conducted to simulate the earthquake-induced deformations of the dams using the Plaxis finite element code. The pseudostatic seismic coefficients used in the design and analyses of the dams were then compared with the seismic coefficients obtained from dynamic analyses of the simulated models as well as with other available pseudostatic correlations. Based on the comparisons made, the accuracy and reliability of the pseudostatic seismic coefficients are evaluated and discussed. PMID:24616636

  2. Efficient Computation Of Behavior Of Aircraft Tires

    NASA Technical Reports Server (NTRS)

    Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.

    1989-01-01

    NASA technical paper discusses challenging application of computational structural mechanics to numerical simulation of responses of aircraft tires during taxiing, takeoff, and landing. Presents details of three main elements of computational strategy: use of special three-field mixed-finite-element models; use of operator splitting; and application of technique reducing substantially the number of degrees of freedom. Proposed computational strategy applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry, and combinations thereof, exhibited by response of tire identified.

  3. Systems design and analysis of the microwave radiometer spacecraft

    NASA Technical Reports Server (NTRS)

    Garrett, L. B.

    1981-01-01

    Systems design and analysis data were generated for a microwave radiometer spacecraft concept using the Large Advanced Space Systems (LASS) computer-aided design and analysis program. Parametric analyses were conducted for perturbations off the nominal orbital altitude and antenna reflector size and for control/propulsion system options. Optimized spacecraft mass, structural element design, and on-orbit loading data are presented. Propulsion and rigid-body control system sensitivities to current and advanced technology are established. Spacecraft-induced and environmental effects on antenna performance (surface accuracy, defocus, and boresight offset) are quantified, and structural frequencies and modal shapes are defined.

  4. An axisymmetric PFEM formulation for bottle forming simulation

    NASA Astrophysics Data System (ADS)

    Ryzhakov, Pavel B.

    2017-01-01

    A numerical model for bottle forming simulation is proposed. It is based upon the Particle Finite Element Method (PFEM) and is developed for the simulation of bottles characterized by rotational symmetry. The PFEM strategy is adapted to suit the problem of interest. Axisymmetric version of the formulation is developed and a modified contact algorithm is applied. This results in a method characterized by excellent computational efficiency and volume conservation characteristics. The model is validated. An example modelling the final blow process is solved. Bottle wall thickness is estimated and the mass conservation of the method is analysed.

  5. Preflight transient dynamic analyses of B-52 aircraft carrying Space Shuttle solid rocket booster drop-test vehicle

    NASA Technical Reports Server (NTRS)

    Ko, W. L.; Schuster, L. S.

    1984-01-01

    This paper concerns the transient dynamic analysis of the B-52 aircraft carrying the Space Shuttle solid rocket booster drop test vehicle (SRB/DTV). The NASA structural analysis (NASTRAN) finite element computer program was used in the analysis. The B-52 operating conditions considered for analysis were (1) landing and (2) braking on aborted takeoff runs. The transient loads for the B-52 pylon front and rear hooks were calculated. The results can be used to establish the safe maneuver envelopes for the B-52 carrying the SRB/DTV in landings and brakings.

  6. Thermal analysis of underground power cable system

    NASA Astrophysics Data System (ADS)

    Rerak, Monika; Ocłoń, Paweł

    2017-10-01

    The paper presents the application of Finite Element Method in thermal analysis of underground power cable system. The computations were performed for power cables buried in-line in the ground at a depth of 2 meters. The developed mathematical model allows determining the two-dimensional temperature distribution in the soil, thermal backfill and power cables. The simulations studied the effect of soil and cable backfill thermal conductivity on the maximum temperature of the cable conductor. Also, the effect of cable diameter on the temperature of cable core was studied. Numerical analyses were performed based on a program written in MATLAB.
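A two-dimensional steady conduction field like the one described can be sketched with a finite-difference Jacobi iteration (the paper itself uses FEM in MATLAB): Laplace's equation with a fixed ground-surface temperature and a volumetric source at the cable cell. Grid size, conductivity, source strength, and burial depth below are all illustrative.

```python
import numpy as np

ny, nx = 30, 30
h = 0.1           # grid spacing, m
k = 1.0           # soil thermal conductivity, W/(m*K) (illustrative)
q = 50.0          # cable heat source density, W/m^3 (illustrative)
T_surface = 10.0  # fixed boundary temperature, deg C

T = np.full((ny, nx), T_surface)
src = np.zeros((ny, nx))
src[20, 15] = q   # cable buried at depth 2 m (row 20 at h = 0.1 m)

# Jacobi iteration for  -k * laplacian(T) = src  with Dirichlet boundaries
for _ in range(5000):
    T_new = T.copy()
    T_new[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1] +
                                T[1:-1, 2:] + T[1:-1, :-2] +
                                h**2 * src[1:-1, 1:-1] / k)
    T = T_new
```

The hottest point sits at the cable cell, and raising the backfill conductivity k lowers that maximum, which is the sensitivity the paper's simulations explore.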

  7. Autonomous spectrographic system to analyse the main elements of fireballs and meteors

    NASA Astrophysics Data System (ADS)

    Espartero, Francisco Ángel; Martínez, Germán; Frías, Marta; Montes Moya, Francisco Simón; Castro-Tirado, Alberto Javier

    2018-01-01

    We present a meteor observation system based on imaging CCD cameras, wide-field optics and a diffraction grating. This system is composed of two independent spectrographs with different configurations, which allows us to capture images of fireballs and meteors with several fields of view and sensitivities. The complete set forms a small autonomous observatory, comprised of a sealed box with a sliding roof, weather station and computers for data storing and reduction. Since 2014, several meteors have been studied using this facility, such as the Alcalá la Real fireball recorded on 30 September 2016.

  8. Smeared quasidistributions in perturbation theory

    NASA Astrophysics Data System (ADS)

    Monahan, Christopher

    2018-03-01

    Quasi- and pseudodistributions provide a new approach to determining parton distribution functions from first-principles calculations of QCD. Here, I calculate the flavor-nonsinglet unpolarized quasidistribution at one loop in perturbation theory, using the gradient flow to remove ultraviolet divergences. I demonstrate that, as expected, the gradient flow does not change the infrared structure of the quasidistribution at one loop, and I use the results to match the smeared matrix elements to those in the MS-bar scheme. This matching calculation is required to relate numerical results obtained from nonperturbative lattice QCD computations to light-front parton distribution functions extracted from global analyses of experimental data.

  9. Markup of temporal information in electronic health records.

    PubMed

    Hyun, Sookyung; Bakken, Suzanne; Johnson, Stephen B

    2006-01-01

    Temporal information plays a critical role in the understanding of clinical narrative (i.e., free text). We developed a representation for marking up temporal information in a narrative, consisting of five elements: 1) reference point, 2) direction, 3) number, 4) time unit, and 5) pattern. We identified 254 temporal expressions from 50 discharge summaries and represented them using our scheme. The overall inter-rater reliability among raters applying the representation model was 75 percent agreement. The model can contribute to temporal reasoning in computer systems for decision support, data mining, and process and outcomes analyses by providing structured temporal information.
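The five-element representation can be illustrated with a toy parser for expressions like "3 days before admission". The mini-grammar below is a hypothetical sketch; the paper's actual markup scheme is richer (it also covers recurring patterns, for instance).

```python
import re
from dataclasses import dataclass

@dataclass
class TemporalExpr:
    reference: str  # 1) reference point
    direction: str  # 2) direction
    number: int     # 3) number
    unit: str       # 4) time unit
    pattern: str    # 5) pattern

# hypothetical mini-grammar for "<number> <unit>s <direction> <reference>"
RX = re.compile(r"(?P<number>\d+)\s+(?P<unit>day|week|month|year)s?\s+"
                r"(?P<direction>before|after)\s+(?P<reference>\w+)")

def parse(text):
    """Map a narrative temporal expression onto the five-element scheme;
    returns None when the toy grammar does not match."""
    m = RX.search(text)
    if not m:
        return None
    return TemporalExpr(reference=m["reference"], direction=m["direction"],
                        number=int(m["number"]), unit=m["unit"],
                        pattern="single")
```

Structuring free-text time expressions this way is what makes them usable for downstream temporal reasoning, data mining, and outcomes analyses.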

  10. On finite element methods for the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Aziz, A. K.; Werschulz, A. G.

    1979-01-01

    The numerical solution of the Helmholtz equation is considered via finite element methods. A two-stage method which gives the same accuracy in the computed gradient as in the computed solution is discussed. Error estimates for the method using a newly developed proof are given, and the computational considerations which show this method to be computationally superior to previous methods are presented.
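A minimal 1-D linear-element discretization of a Helmholtz problem shows the ingredients involved; this is a generic sketch with a manufactured solution, not the paper's two-stage method. For -u'' - k^2 u = f on (0, 1) with u(0) = u(1) = 0 and u = sin(pi*x), the source is f = (pi^2 - k^2) sin(pi*x).

```python
import numpy as np

def helmholtz_fem_1d(n_el, k=1.0):
    """Linear FEM for -u'' - k^2 u = f on (0,1), u(0)=u(1)=0, with the
    manufactured solution u = sin(pi x)."""
    n = n_el + 1
    h = 1.0 / n_el
    x = np.linspace(0.0, 1.0, n)
    K = np.zeros((n, n))
    M = np.zeros((n, n))
    kl = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h  # element stiffness
    ml = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # consistent mass
    for e in range(n_el):
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += kl
        M[np.ix_(idx, idx)] += ml
    f = (np.pi**2 - k**2) * np.sin(np.pi * x)
    A = K - k**2 * M
    b = M @ f  # consistent load from nodal source values
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(A[1:-1, 1:-1], b[1:-1])
    return x, u

x, u = helmholtz_fem_1d(64)
err = np.abs(u - np.sin(np.pi * x)).max()
```

Standard methods like this give a less accurate computed gradient than computed solution; the two-stage method of the paper is designed to close that gap.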

  11. Modal Substructuring of Geometrically Nonlinear Finite Element Models with Interface Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuether, Robert J.; Allen, Matthew S.; Hollkamp, Joseph J.

    Substructuring methods have been widely used in structural dynamics to divide large, complicated finite element models into smaller substructures. For linear systems, many methods have been developed to reduce the subcomponents down to a low order set of equations using a special set of component modes, and these are then assembled to approximate the dynamics of a large scale model. In this paper, a substructuring approach is developed for coupling geometrically nonlinear structures, where each subcomponent is drastically reduced to a low order set of nonlinear equations using a truncated set of fixed-interface and characteristic constraint modes. The method used to extract the coefficients of the nonlinear reduced order model (NLROM) is non-intrusive in that it does not require any modification to the commercial FEA code, but computes the NLROM from the results of several nonlinear static analyses. The NLROMs are then assembled to approximate the nonlinear differential equations of the global assembly. The method is demonstrated on the coupling of two geometrically nonlinear plates with simple supports at all edges. The plates are joined at a continuous interface through the rotational degrees-of-freedom (DOF), and the nonlinear normal modes (NNMs) of the assembled equations are computed to validate the models. The proposed substructuring approach reduces a 12,861 DOF nonlinear finite element model down to only 23 DOF, while still accurately reproducing the first three NNMs of the full order model.

  12. Modal Substructuring of Geometrically Nonlinear Finite Element Models with Interface Reduction

    DOE PAGES

    Kuether, Robert J.; Allen, Matthew S.; Hollkamp, Joseph J.

    2017-03-29

    Substructuring methods have been widely used in structural dynamics to divide large, complicated finite element models into smaller substructures. For linear systems, many methods have been developed to reduce the subcomponents down to a low order set of equations using a special set of component modes, and these are then assembled to approximate the dynamics of a large scale model. In this paper, a substructuring approach is developed for coupling geometrically nonlinear structures, where each subcomponent is drastically reduced to a low order set of nonlinear equations using a truncated set of fixed-interface and characteristic constraint modes. The method used to extract the coefficients of the nonlinear reduced order model (NLROM) is non-intrusive in that it does not require any modification to the commercial FEA code, but computes the NLROM from the results of several nonlinear static analyses. The NLROMs are then assembled to approximate the nonlinear differential equations of the global assembly. The method is demonstrated on the coupling of two geometrically nonlinear plates with simple supports at all edges. The plates are joined at a continuous interface through the rotational degrees-of-freedom (DOF), and the nonlinear normal modes (NNMs) of the assembled equations are computed to validate the models. The proposed substructuring approach reduces a 12,861 DOF nonlinear finite element model down to only 23 DOF, while still accurately reproducing the first three NNMs of the full order model.

  13. Microarray Analysis of LTR Retrotransposon Silencing Identifies Hdac1 as a Regulator of Retrotransposon Expression in Mouse Embryonic Stem Cells

    PubMed Central

    Madej, Monika J.; Taggart, Mary; Gautier, Philippe; Garcia-Perez, Jose Luis; Meehan, Richard R.; Adams, Ian R.

    2012-01-01

    Retrotransposons are highly prevalent in mammalian genomes due to their ability to amplify in pluripotent cells or developing germ cells. Host mechanisms that silence retrotransposons in germ cells and pluripotent cells are important for limiting the accumulation of the repetitive elements in the genome during evolution. However, although the silencing of selected individual retrotransposons is relatively well studied, many mammalian retrotransposons are seldom analysed and their silencing in germ cells, pluripotent cells or somatic cells remains poorly understood. Here we show, and experimentally verify, that cryptic repetitive element probes present in Illumina and Affymetrix gene expression microarray platforms can accurately and sensitively monitor repetitive element expression. This computational approach to genome-wide retrotransposon expression has allowed us to identify the histone deacetylase Hdac1 as a component of the retrotransposon silencing machinery in mouse embryonic stem cells, and to determine the retrotransposon targets of Hdac1 in these cells. We also identify retrotransposons that are targets of other retrotransposon silencing mechanisms such as DNA methylation, Eset-mediated histone modification, and Ring1B/Eed-containing polycomb repressive complexes in mouse embryonic stem cells. Furthermore, our computational analysis of retrotransposon silencing suggests that multiple silencing mechanisms are independently targeted to retrotransposons in embryonic stem cells, that different genomic copies of the same retrotransposon can be differentially sensitive to these silencing mechanisms, and helps define retrotransposon sequence elements that are targeted by silencing machineries. Thus, repeat annotation of gene expression microarray data suggests that a complex interplay between silencing mechanisms represses retrotransposon loci in germ cells and embryonic stem cells. PMID:22570599

  14. Neutron-capture element abundances in the planetary nebula NGC 5315 from deep optical and near-infrared spectrophotometry

    NASA Astrophysics Data System (ADS)

    Madonna, S.; García-Rojas, J.; Sterling, N. C.; Delgado-Inglada, G.; Mesa-Delgado, A.; Luridiana, V.; Roederer, I. U.; Mashburn, A. L.

    2017-10-01

    We analyse the chemical composition of the planetary nebula (PN) NGC 5315, through high-resolution (R ˜ 40000) optical spectroscopy with the Ultraviolet-Visual Echelle Spectrograph at the Very Large Telescope, and medium-resolution (R ˜ 4800) near-infrared spectroscopy with the Folded-port InfraRed Echellette at the Magellan Baade Telescope, covering a wide spectral range from 0.31 to 2.50 μm. The main aim of this work is to investigate neutron (n)-capture element abundances to study the operation of the slow n-capture ('s-process') in the asymptotic giant branch (AGB) progenitor of NGC 5315. We detect more than 700 emission lines, including ions of the n-capture elements Se, Kr, Xe and possibly Br. We compute physical conditions from a large number of diagnostic line ratios, and derive ionic abundances for species with available atomic data. The total abundances are computed using recent ionization correction factors (ICFs) or by summing ionic abundances. Total abundances of common elements are in good agreement with previous work on this object. Based on our abundance analysis of NGC 5315, including the lack of s-process enrichment, we speculate that the most probable evolutionary scenario is that the progenitor star is in a binary system, as hinted at by radial velocity studies, and interactions with its companion truncated the AGB before s-process enrichment could occur. However, two other possible evolutionary scenarios cannot be ruled out: (I) the progenitor is a low-mass single star that did not undergo third dredge-up; (II) the progenitor star of NGC 5315 had an initial mass of 3-5 M⊙, and any s-process enhancements were heavily diluted by the massive envelope during the AGB phase.

  15. On numerically accurate finite element solutions in the fully plastic range

    NASA Technical Reports Server (NTRS)

    Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.

    1974-01-01

    A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double edge cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.

  16. Finite Element Analysis of Magnetic Damping Effects on G-Jitter Induced Fluid Flow

    NASA Technical Reports Server (NTRS)

    Pan, Bo; Li, Ben Q.; deGroh, Henry C., III

    1997-01-01

    This paper reports some interim results on numerical modeling and analyses of magnetic damping of g-jitter driven fluid flow in microgravity. A finite element model is developed to represent the fluid flow, thermal and solute transport phenomena in a 2-D cavity under g-jitter conditions with and without an applied magnetic field. The numerical model is checked by comparing with analytical solutions obtained for a simple parallel plate channel flow driven by g-jitter in a transverse magnetic field. The model is then applied to study the effect of g-jitter induced oscillatory flow on solute redistribution in the liquid, which bears direct relevance to the Bridgman-Stockbarger single crystal growth process. A selection of computed results is presented, and the results indicate that an applied magnetic field can effectively damp the velocity caused by g-jitter and help to reduce the time variation of solute redistribution.

  17. S-band omnidirectional antenna for the SERT-C satellite

    NASA Technical Reports Server (NTRS)

    Bassett, H. L.; Cofer, J. W., Jr.; Sheppard, R. R.; Sinclair, M. J.

    1975-01-01

    The program to design an S-band omnidirectional antenna system for the SERT-C spacecraft is discussed. The program involved the tasks of antenna analyses by computer techniques, scale model radiation pattern measurements of a number of antenna systems, full-scale RF measurements, and the recommended design, including detailed drawings. A number of antenna elements were considered: the cavity-backed spiral, quadrifilar helix, and crossed-dipoles were chosen for in-depth studies. The final design consisted of a two-element array of cavity-backed spirals mounted on opposite sides of the spacecraft and fed in-phase through a hybrid junction. This antenna system meets the coverage requirement of having a gain of at least minus 10 dBi over 50 percent of a 4 pi steradian sphere with the solar panels in operation. This coverage level is increased if the ground station has the capability to change polarization.

  18. Static analysis of C-shape SMA middle ear prosthesis

    NASA Astrophysics Data System (ADS)

    Latalski, Jarosław; Rusinek, Rafał

    2017-08-01

    Shape memory alloys are a family of metals with the ability to change specimen shape depending on their temperature. This unique property is useful in many areas of mechanical and biomechanical engineering. A new half-ring middle ear prosthesis design made of a shape memory alloy, that is undergoing initial clinical tests, is investigated in this research paper. The analytical model of the studied structure made of nonlinear constitutive material is solved to identify the temperature-dependent stiffness characteristics of the proposed design on the basis of the Crotti-Engesser theorem. The final integral expression for the element deflection is highly complex, thus the solution has to be computed numerically. The final results show the proposed shape memory C-shape element to behave linearly in the analysed range of loadings and temperatures. This is an important observation that significantly simplifies the analysis of the prototype structure and opens wide perspectives for further possible applications of shape memory alloys.

  19. Unified constitutive material models for nonlinear finite-element structural analysis. [gas turbine engine blades and vanes

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Laflen, J. H.; Lindholm, U. S.

    1985-01-01

    Unified constitutive material models were developed for structural analyses of aircraft gas turbine engine components with particular application to isotropic materials used for high-pressure stage turbine blades and vanes. Forms or combinations of models independently proposed by Bodner and Walker were considered. These theories combine time-dependent and time-independent aspects of inelasticity into a continuous spectrum of behavior. This is in sharp contrast to previous classical approaches that partition inelastic strain into uncoupled plastic and creep components. Predicted stress-strain responses from these models were evaluated against monotonic and cyclic test results for uniaxial specimens of two cast nickel-base alloys, B1900+Hf and Rene' 80. Previously obtained tension-torsion test results for Hastelloy X alloy were used to evaluate multiaxial stress-strain cycle predictions. The unified models, as well as appropriate algorithms for integrating the constitutive equations, were implemented in finite-element computer codes.

  20. First stage identification of syntactic elements in an extra-terrestrial signal

    NASA Astrophysics Data System (ADS)

    Elliott, John

    2011-02-01

    By investigating the generic attributes of a representative set of terrestrial languages at varying levels of abstraction, it is our endeavour to try and isolate elements of the signal universe, which are computationally tractable for its detection and structural decipherment. Ultimately, our aim is to contribute in some way to the understanding of what 'languageness' actually is. This paper describes algorithms and software developed to characterise and detect generic intelligent language-like features in an input signal, using natural language learning techniques: looking for characteristic statistical "language-signatures" in test corpora. As a first step towards such species-independent language-detection, we present a suite of programs to analyse digital representations of a range of data, and use the results to extrapolate whether or not there are language-like structures which distinguish this data from other sources, such as music, images, and white noise.

  1. Analysis of multiple activity manual materials handling tasks using A Guide to Manual Materials Handling.

    PubMed

    Mital, A

    1999-01-01

    Manual handling of materials continues to be a hazardous activity, leading to a very significant number of severe overexertion injuries. Designing jobs that are within the physical capabilities of workers is one approach ergonomists have adopted to redress this problem. As a result, several job design procedures have been developed over the years. However, these procedures are limited to designing or evaluating only pure lifting jobs or only the lifting aspect of a materials handling job. This paper describes a general procedure that may be used to design or analyse materials handling jobs that involve several different kinds of activities (e.g. lifting, lowering, carrying, pushing, etc). The job design/analysis procedure utilizes an elemental approach (breaking the job into elements) and relies on databases provided in A Guide to Manual Materials Handling to compute associated risk factors. The use of the procedure is demonstrated with the help of two case studies.

  2. Concept and analytical basis for revistas - A fast, flexible computer/graphic system for generating periodic satellite coverage patterns

    NASA Technical Reports Server (NTRS)

    King, J. C.

    1976-01-01

    The generation of satellite coverage patterns is facilitated by three basic strategies: use of a simplified physical model, permitting rapid closed-form calculation; separation of earth rotation and nodal precession from initial geometric analyses; and use of symmetries to construct traces of indefinite length by repetitive transposition of basic one-quadrant elements. The complete coverage patterns generated consist of a basic nadir trace plus a number of associated off-nadir traces, one for each sensor swath edge to be delineated. Each trace is generated by transposing one or two of the basic quadrant elements into a circle on a nonrotating earth model sphere, after which the circle is expanded into the actual 'helical' pattern by adding rotational displacements to the longitude coordinates. The procedure adapts to the important periodic coverage cases by direct insertion of the characteristic integers N and R (days and orbital revolutions, respectively, per coverage period).

  3. The computation of induced drag with nonplanar and deformed wakes

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Smith, Stephen

    1991-01-01

    The classical calculation of inviscid drag, based on far field flow properties, is reexamined with particular attention to the nonlinear effects of wake roll-up. Based on a detailed look at nonlinear, inviscid flow theory, it is concluded that many of the classical, linear results are more general than might have been expected. Departures from the linear theory are identified and design implications are discussed. Results include the following: Wake deformation has little effect on the induced drag of a single element wing, but introduces first order corrections to the induced drag of a multi-element lifting system. Far field Trefftz-plane analysis may be used to estimate the induced drag of lifting systems, even when wake roll-up is considered, but numerical difficulties arise. The implications of several other approximations made in lifting line theory are evaluated by comparison with more refined analyses.

  4. Test and Analysis of Foam Impacting a 6x6 Inch RCC Flat Panel

    NASA Technical Reports Server (NTRS)

    Lessard, Wendy B.

    2006-01-01

    This report presents the testing and analyses of a foam projectile impacting onto thirteen 6x6 inch flat panels at a 90-degree incidence angle. The panels tested in this investigation were fabricated of Reinforced-Carbon-Carbon material and were used to aid in the validation of an existing material model, MAT58. The computational analyses were performed using LS-DYNA, which is a physics-based, nonlinear, transient, finite element code used for analyzing material responses subjected to high impact forces and other dynamic conditions. The test results were used to validate LS-DYNA predictions and to determine the threshold of damage generated by the MAT58 cumulative damage material model. The threshold of damage parameter represents any external or internal visible RCC damage detectable by nondestructive evaluation techniques.

  5. View of MISSE-8 taken during a session of EVA

    NASA Image and Video Library

    2011-07-12

    ISS028-E-016107 (12 July 2011) --- This medium close-up image, recorded during a July 12 spacewalk, shows the Materials on International Space Station Experiment - 8 (MISSE-8). The experiment package is a test bed for materials and computing elements attached to the outside of the orbiting complex. These materials and computing elements are being evaluated for the effects of atomic oxygen, ultraviolet, direct sunlight, radiation, and extremes of heat and cold. This experiment allows the development and testing of new materials and computing elements that can better withstand the rigors of space environments. Results will provide a better understanding of the durability of various materials and computing elements when they are exposed to the space environment, with applications in the design of future spacecraft.

  6. Asteroid orbital inversion using uniform phase-space sampling

    NASA Astrophysics Data System (ADS)

    Muinonen, K.; Pentikäinen, H.; Granvik, M.; Oszkiewicz, D.; Virtanen, J.

    2014-07-01

    We review statistical inverse methods for asteroid orbit computation from a small number of astrometric observations and short time intervals of observations. With the help of Markov-chain Monte Carlo methods (MCMC), we present a novel inverse method that utilizes uniform sampling of the phase space for the orbital elements. The statistical orbital ranging method (Virtanen et al. 2001, Muinonen et al. 2001) was developed to resolve the long-standing challenges in the initial computation of orbits for asteroids. The ranging method starts from the selection of a pair of astrometric observations. Thereafter, the topocentric ranges and angular deviations in R.A. and Decl. are randomly sampled. The two Cartesian positions allow for the computation of orbital elements and, subsequently, the computation of ephemerides for the observation dates. Candidate orbital elements are included in the sample of accepted elements if the χ^2-value between the observed and computed observations is within a pre-defined threshold. The sample orbital elements obtain weights based on a certain debiasing procedure. When the weights are available, the full sample of orbital elements allows the probabilistic assessments for, e.g., object classification and ephemeris computation as well as the computation of collision probabilities. The MCMC ranging method (Oszkiewicz et al. 2009; see also Granvik et al. 2009) replaces the original sampling algorithm described above with a proposal probability density function (p.d.f.), and a chain of sample orbital elements results in the phase space. MCMC ranging is based on a bivariate Gaussian p.d.f. for the topocentric ranges, and allows the sampling to focus on the phase-space domain with most of the probability mass. In the virtual-observation MCMC method (Muinonen et al. 2012), the proposal p.d.f. for the orbital elements is chosen to mimic the a posteriori p.d.f. for the elements: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, corresponding virtual least-squares orbital elements are derived using the Nelder-Mead downhill simplex method; third, repeating the procedure two times allows for a computation of a difference for two sets of virtual orbital elements; and, fourth, this orbital-element difference constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal p.d.f. In a discrete approximation, the allowed proposals coincide with the differences that are based on a large number of pre-computed sets of virtual least-squares orbital elements. The virtual-observation MCMC method is thus based on the characterization of the relevant volume in the orbital-element phase space. Here we utilize MCMC to map the phase-space domain of acceptable solutions. We can make use of the proposal p.d.f.s from the MCMC ranging and virtual-observation methods. The present phase-space mapping produces, upon convergence, a uniform sampling of the solution space within a pre-defined χ^2-value. The weights of the sampled orbital elements are then computed on the basis of the corresponding χ^2-values. The present method resembles the original ranging method. On one hand, MCMC mapping is insensitive to local extrema in the phase space and efficiently maps the solution space. This is somewhat contrary to the MCMC methods described above. On the other hand, MCMC mapping can suffer from producing a small number of sample elements with small χ^2-values, in resemblance to the original ranging method. We apply the methods to example near-Earth, main-belt, and transneptunian objects, and highlight the utilization of the methods in the data processing and analysis pipeline of the ESA Gaia space mission.
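The core acceptance rule of the phase-space mapping (a random-walk chain that keeps any proposal whose χ^2-value stays under a pre-defined threshold) can be sketched in one dimension; the toy "orbital element" x and synthetic observations below are purely illustrative, real ranging works in six dimensions with genuine astrometry:

```python
import random

def chi2(x, observations):
    """Toy chi-squared misfit of a scalar model parameter to observations."""
    return sum((obs - x) ** 2 for obs in observations)

def map_phase_space(observations, x0, threshold, n_steps, step=0.1, seed=0):
    """Random-walk mapping: accept any symmetric proposal inside the
    chi2 < threshold region, yielding (upon convergence) a roughly uniform
    sample of the acceptable solution space."""
    rng = random.Random(seed)
    chain, x = [], x0
    for _ in range(n_steps):
        proposal = x + rng.uniform(-step, step)   # symmetric random-walk proposal
        if chi2(proposal, observations) < threshold:
            x = proposal                          # move only within the region
        chain.append(x)
    return chain

chain = map_phase_space([1.0, 1.2, 0.9], x0=1.0, threshold=0.5, n_steps=2000)
```

In the actual method the accepted elements would then be weighted by their χ^2-values; this fragment only shows the uniform-within-threshold sampling idea.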

  7. Inferring transposons activity chronology by TRANScendence - TEs database and de-novo mining tool.

    PubMed

    Startek, Michał Piotr; Nogły, Jakub; Gromadka, Agnieszka; Grzebelus, Dariusz; Gambin, Anna

    2017-10-16

    The constant progress in sequencing technology leads to ever increasing amounts of genomic data. In the light of current evidence, transposable elements (TEs for short) are becoming useful tools for learning about the evolution of the host genome. Therefore software for genome-wide detection and analysis of TEs is of great interest. Here we describe a computational tool for mining, classifying and storing TEs from newly sequenced genomes. This is an online, web-based, user-friendly service, enabling users to upload their own genomic data and perform de-novo searches for TEs. The detected TEs are automatically analyzed, compared to reference databases, annotated, clustered into families, and stored in a TE repository. Also, the genome-wide nesting structure of the found elements is detected and analyzed by a new method for inferring the evolutionary history of TEs. We illustrate the functionality of our tool by performing a full-scale analysis of the TE landscape in the Medicago truncatula genome. TRANScendence is an effective tool for the de-novo annotation and classification of transposable elements in newly-acquired genomes. Its streamlined interface makes it well-suited for evolutionary studies.

  8. Boundary element analysis of corrosion problems for pumps and pipes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyasaka, M.; Amaya, K.; Kishimoto, K.

    1995-12-31

    Three-dimensional (3D) and axi-symmetric boundary element methods (BEM) were developed to quantitatively estimate cathodic protection and macro-cell corrosion. For 3D analysis, a multiple-region method (MRM) was developed in addition to a single-region method (SRM). The validity and usefulness of the BEMs were demonstrated by comparing numerical results with experimental data from galvanic corrosion systems of a cylindrical model and a seawater pipe, and from a cathodic protection system of an actual seawater pump. It was shown that a highly accurate analysis could be performed for fluid machines handling seawater with complex 3D fields (e.g. seawater pump) by taking account of the flow rate and time dependencies of the polarization curve. Compared to the 3D BEM, the axi-symmetric BEM permitted large reductions in numbers of elements and nodes, which greatly simplified analysis of axi-symmetric fields such as pipes. Computational accuracy and CPU time were compared between analyses using two approximation methods for polarization curves: a logarithmic-approximation method and a linear-approximation method.
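The two polarization-curve approximations compared in the abstract can be illustrated as simple fitting forms, a logarithmic (Tafel-type) law and a linear law; the coefficients below are placeholders, not values from the paper:

```python
import math

def tafel(i, a=-0.6, b=0.06):
    """Logarithmic (Tafel-type) approximation: electrode potential (V)
    as a function of current density i (A/m^2). Coefficients are assumed."""
    return a + b * math.log10(i)

def linear(i, e0=-0.6, r=0.05):
    """Linear approximation: potential varies linearly with current
    density about the rest potential e0. Coefficients are assumed."""
    return e0 + r * i
```

In a BEM corrosion analysis either form would supply the nonlinear (or linearized) boundary condition relating potential and current density on the metal surface.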

  9. Optimum element density studies for finite-element thermal analysis of hypersonic aircraft structures

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Olona, Timothy; Muramoto, Kyle M.

    1990-01-01

    Different finite element models previously set up for thermal analysis of the space shuttle orbiter structure are discussed and their shortcomings identified. Element density criteria are established for the finite element thermal modeling of space shuttle orbiter-type large, hypersonic aircraft structures. These criteria are based on rigorous studies of solution accuracies using different finite element models, having different element densities, set up for one cell of the orbiter wing. Also, a method for optimization of the transient thermal analysis computer central processing unit (CPU) time is discussed. Based on the newly established element density criteria, the orbiter wing midspan segment was modeled for the examination of thermal analysis solution accuracies and the extent of computation CPU time requirements. The results showed that the distributions of the structural temperatures and the thermal stresses obtained from this wing segment model were satisfactory and the computation CPU time was at an acceptable level. The studies offered the hope that modeling large, hypersonic aircraft structures using high-density elements for transient thermal analysis is possible if a CPU optimization technique is used.

  10. Computer aided stress analysis of long bones utilizing computer tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marom, S.A.

    1986-01-01

    A computer aided analysis method, utilizing computed tomography (CT), has been developed, which together with a finite element program determines the stress-displacement pattern in a long bone section. The CT data file provides the geometry, the density and the material properties for the generated finite element model. A three-dimensional finite element model of a tibial shaft is automatically generated from the CT file by a pre-processing procedure for a finite element program. The developed pre-processor includes an edge detection algorithm which determines the boundaries of the reconstructed cross-sectional images of the scanned bone. A mesh generation procedure then automatically generates a three-dimensional mesh of a user-selected refinement. The elastic properties needed for the stress analysis are individually determined for each model element using the radiographic density (CT number) of each pixel within the elemental borders. The elastic modulus is determined from the CT radiographic density by using an empirical relationship from the literature. The generated finite element model, together with applied loads determined from existing gait analysis and initial displacements, comprises a formatted input for the SAP IV finite element program. The output of this program, stresses and displacements at the model elements and nodes, are sorted and displayed by a developed post-processor to provide maximum and minimum values at selected locations in the model.
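The per-element property assignment described above (CT number to apparent density to elastic modulus via an empirical relation) might be sketched as follows; the calibration coefficients and the power law are placeholders, not the relationship used in the original work:

```python
def ct_to_density(hu, a=0.001, b=1.0):
    """Linear CT-number (HU) -> apparent density (g/cm^3) calibration.
    Coefficients a, b are assumed, not from the paper."""
    return a * hu + b

def density_to_modulus(rho, c=3790.0, p=3.0):
    """Empirical power law E = c * rho**p (MPa). Coefficients c, p are
    illustrative stand-ins for the literature relationship cited above."""
    return c * rho ** p

def element_modulus(hu):
    """Elastic modulus assigned to a finite element from its mean CT number."""
    return density_to_modulus(ct_to_density(hu))
```

Each element of the generated mesh would receive its own modulus this way, so stiffness varies across the bone cross-section.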

  11. Effectiveness of Multimedia Elements in Computer Supported Instruction: Analysis of Personalization Effects, Students' Performances and Costs

    ERIC Educational Resources Information Center

    Zaidel, Mark; Luo, XiaoHui

    2010-01-01

    This study investigates the efficiency of multimedia instruction at the college level by comparing the effectiveness of multimedia elements used in the computer supported learning with the cost of their preparation. Among the various technologies that advance learning, instructors and students generally identify interactive multimedia elements as…

  12. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  13. Computer program calculates gamma ray source strengths of materials exposed to neutron fluxes

    NASA Technical Reports Server (NTRS)

    Heiser, P. C.; Ricks, L. O.

    1968-01-01

    Computer program contains an input library of nuclear data for 44 elements and their isotopes to determine the induced radioactivity for gamma emitters. Minimum input requires the irradiation history of the element, a four-energy-group neutron flux, specification of an alloy composition by elements, and selection of the output.

  14. Prediction of overall and blade-element performance for axial-flow pump configurations

    NASA Technical Reports Server (NTRS)

    Serovy, G. K.; Kavanagh, P.; Okiishi, T. H.; Miller, M. J.

    1973-01-01

    A method and a digital computer program for prediction of the distributions of fluid velocity and properties in axial flow pump configurations are described and evaluated. The method uses the blade-element flow model and an iterative numerical solution of the radial equilibrium and continuity conditions. Correlated experimental results are used to generate alternative methods for estimating blade-element turning and loss characteristics. Detailed descriptions of the computer program are included, with example input and typical computed results.

  15. Computing Mass Properties From AutoCAD

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1990-01-01

    Mass properties of structures are computed from data in drawings. The AutoCAD to Mass Properties (ACTOMP) computer program was developed to facilitate quick calculation of the mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Structures are modeled mathematically in AutoCAD or a compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft QuickBasic (Version 2.0).
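    The core bookkeeping behind a tool of this kind is combining the mass properties of many simple elements into totals for the composite structure. The sketch below is illustrative only (it is not the ACTOMP program, and the function names are hypothetical): it sums element masses and mass-weighted centroids to obtain the composite mass and center of gravity.

    ```python
    # Hedged sketch: combining mass properties of simple elements.
    # Each element is (mass, (x, y, z) centroid); names are hypothetical.

    def combine_mass_properties(elements):
        """Return total mass and composite center of mass."""
        total_mass = sum(m for m, _ in elements)
        cg = tuple(
            sum(m * c[axis] for m, c in elements) / total_mass
            for axis in range(3)
        )
        return total_mass, cg

    # Example: two equal-mass elements of a simple truss model
    elements = [(2.0, (0.0, 0.0, 0.0)), (2.0, (1.0, 0.0, 0.0))]
    mass, cg = combine_mass_properties(elements)
    # mass = 4.0, cg = (0.5, 0.0, 0.0)
    ```

    Moments and products of inertia combine the same way, with a parallel-axis shift of each element's inertia to the composite center of gravity.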

  16. Life assessment of structural components using inelastic finite element analyses

    NASA Technical Reports Server (NTRS)

    Arya, Vinod K.; Halford, Gary R.

    1993-01-01

    The need for enhanced and improved performance of structural components subject to severe cyclic thermal/mechanical loadings, such as in the aerospace industry, requires development of appropriate solution technologies involving time-dependent inelastic analyses. Such analyses are mandatory to predict local stress-strain response and to assess more accurately the cyclic lifetime of structural components. The NASA Lewis Research Center is cognizant of this need. As a result of concerted efforts at Lewis during the last few years, several such finite element solution technologies (in conjunction with the finite element program MARC) were developed and successfully applied to numerous uniaxial and multiaxial problems. These solution technologies, although developed for use with the MARC program, are general in nature and can easily be extended for adaptation to other finite element programs such as ABAQUS and ANSYS. The descriptions and results obtained from two such inelastic finite element solution technologies are presented. The first employs a classical (non-unified) creep-plasticity model; an application of this technology is presented for a hypersonic inlet cowl-lip problem. The second uses a unified creep-plasticity model put forth by Freed; the structural component for which this solution technology is illustrated is a cylindrical rocket engine thrust chamber. The advantages of employing a viscoplastic model for nonlinear time-dependent structural analyses are demonstrated. The life analyses for the cowl lip and the cylindrical thrust chamber are presented. These analyses are conducted using the stress-strain response of these components obtained from the corresponding finite element analyses.

  17. Reporting of meta-analyses of randomized controlled trials with a focus on drug safety: an empirical assessment.

    PubMed

    Hammad, Tarek A; Neyarapally, George A; Pinheiro, Simone P; Iyasu, Solomon; Rochester, George; Dal Pan, Gerald

    2013-01-01

    Due to the sparse nature of serious drug-related adverse events (AEs), meta-analyses combining data from several randomized controlled trials (RCTs) to evaluate drug safety issues are increasingly being conducted and published, influencing clinical and regulatory decision making. Evaluation of meta-analyses involves the assessment of both the individual constituent trials and the approaches used to combine them. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting framework is designed to enhance the reporting of systematic reviews and meta-analyses. However, PRISMA may not cover all critical elements useful in the evaluation of meta-analyses with a focus on drug safety particularly in the regulatory-public health setting. This work was conducted to (1) evaluate the adherence of a sample of published drug safety-focused meta-analyses to the PRISMA reporting framework, (2) identify gaps in this framework based on key aspects pertinent to drug safety, and (3) stimulate the development and validation of a more comprehensive reporting tool that incorporates elements unique to drug safety evaluation. We selected a sample of meta-analyses of RCTs based on review of abstracts from high-impact journals as well as top medical specialty journals between 2009 and 2011. We developed a preliminary reporting framework based on PRISMA with specific additional reporting elements critical for the evaluation of drug safety meta-analyses of RCTs. The reporting of pertinent elements in each meta-analysis was reviewed independently by two authors; discrepancies in the independent evaluations were resolved through discussions between the two authors. A total of 27 meta-analyses, 12 from highest impact journals, 13 from specialty medical journals, and 2 from Cochrane reviews, were identified and evaluated. The great majority (>85%) of PRISMA elements were addressed in more than half of the meta-analyses reviewed. 
However, the majority of meta-analyses (>60%) did not address most (>80%) of the additional reporting elements critical for the evaluation of drug safety. Some of these elements were not addressed in any of the reviewed meta-analyses. This review included a sample of meta-analyses, with a focus on drug safety, recently published in high-impact journals; therefore, we may have underestimated the extent of the reporting problem across all meta-analyses of drug safety. Furthermore, temporal trends in reporting could not be evaluated in this review because of the short time interval selected. While the majority of PRISMA elements were addressed by most studies reviewed, the majority of studies did not address most of the additional safety-related elements. These findings highlight the need for the development and validation of a drug safety reporting framework and the importance of the current initiative by the Council for International Organizations of Medical Sciences (CIOMS) to create a guidance document for drug safety information synthesis/meta-analysis, which may improve reporting, conduct, and evaluation of meta-analyses of drug safety and inform clinical and regulatory decision making.

  18. Effecting a broadcast with an allreduce operation on a parallel computer

    DOEpatents

    Almasi, Gheorghe; Archer, Charles J.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    A parallel computer comprises a plurality of compute nodes organized into at least one operational group for collective parallel operations. Each compute node is assigned a unique rank and is coupled for data communications through a global combining network. One compute node is assigned to be a logical root. A send buffer and a receive buffer are configured. Each element of a contribution of the logical root in the send buffer is contributed. One or more zeros corresponding to the size of the element are injected. An allreduce operation with a bitwise OR using the element and the injected zeros is performed. The result of the allreduce operation is then determined and stored in each receive buffer.
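    The mechanism described above can be sketched in plain Python rather than on a real parallel machine: every non-root node contributes zeros, so an element-wise bitwise-OR allreduce leaves each node's receive buffer holding exactly the root's send buffer (x | 0 = x). This is a simulation of the idea only, not the patented hardware implementation.

    ```python
    # Hedged sketch: broadcast effected via a bitwise-OR allreduce.

    def allreduce_bitwise_or(contributions):
        """Element-wise bitwise OR across all nodes' contributions."""
        result = contributions[0][:]
        for buf in contributions[1:]:
            result = [a | b for a, b in zip(result, buf)]
        return result

    def broadcast_via_allreduce(root_buffer, num_nodes, root_rank=0):
        contributions = []
        for rank in range(num_nodes):
            if rank == root_rank:
                contributions.append(root_buffer[:])          # root's data
            else:
                contributions.append([0] * len(root_buffer))  # injected zeros
        reduced = allreduce_bitwise_or(contributions)
        # every node stores the same reduced result in its receive buffer
        return [reduced[:] for _ in range(num_nodes)]

    receive_buffers = broadcast_via_allreduce([5, 12, 7], num_nodes=4)
    # each of the 4 receive buffers now holds [5, 12, 7]
    ```

    The design point is that a combining network which already supports allreduce needs no separate broadcast primitive.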

  19. Boundary element analyses for sound transmission loss of panels.

    PubMed

    Zhou, Ran; Crocker, Malcolm J

    2010-02-01

    The sound transmission characteristics of an aluminum panel and two composite sandwich panels were investigated by using two boundary element analyses. The effect of air loading on the structural behavior of the panels is included in one boundary element analysis, by using a light-fluid approximation for the eigenmode series to evaluate the structural response. In the other boundary element analysis, the air loading is treated as an added mass. The effect of the modal energy loss factor on the sound transmission loss of the panels was investigated. Both boundary element analyses were used to study the sound transmission loss of symmetric sandwich panels excited by a random incidence acoustic field. A classical wave impedance analysis was also used to make sound transmission loss predictions for the two foam-filled honeycomb sandwich panels. Comparisons between predictions of sound transmission loss for the two foam-filled honeycomb sandwich panels excited by a random incidence acoustic field obtained from the wave impedance analysis, the two boundary element analyses, and experimental measurements are presented.

  20. Individual-specific multi-scale finite element simulation of cortical bone of human proximal femur

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ascenzi, Maria-Grazia, E-mail: mgascenzi@mednet.ucla.edu; Kawas, Neal P., E-mail: nealkawas@ucla.edu; Lutz, Andre, E-mail: andre.lutz@hotmail.de

    2013-07-01

    We present an innovative method to perform multi-scale finite element analyses of the cortical component of the femur using the individual’s (1) computed tomography scan; and (2) a bone specimen obtained in conjunction with orthopedic surgery. The method enables study of micro-structural characteristics regulating strains and stresses under physiological loading conditions. The analysis of the micro-structural scenarios that cause variation of strain and stress is the first step in understanding the elevated strains and stresses in bone tissue, which are indicative of a higher likelihood of micro-crack formation in bone, implicated in consequent remodeling or macroscopic bone fracture. Evidence that micro-structure varies with clinical history and contributes in significant, but poorly understood, ways to bone function motivates the method’s development, as does the need for software tools to investigate relationships between macroscopic loading and micro-structure. Three applications (varying region of interest, bone mineral density, and orientation of collagen type I) illustrate the method. We show, in a comparison between physiological loading and simple compression of a patient’s femur, that strains computed at the multi-scale model’s micro-level: (i) differ; and (ii) depend on local collagen-apatite orientation and degree of calcification. Our findings confirm the strain concentration role of osteocyte lacunae, important for mechano-transduction. We hypothesize occurrence of micro-crack formation, leading either to remodeling or macroscopic fracture, when the computed strains exceed the elastic range observed in micro-structural testing.

  1. Cyclic Symmetry Finite Element Forced Response Analysis of a Distortion-Tolerant Fan with Boundary Layer Ingestion

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Reddy, T. S. R.; Bakhle, M. A.; Coroneos, R. M.; Stefko, G. L.; Provenza, A. J.; Duffy, K. P.

    2018-01-01

    Accurate prediction of blade vibration stress is required to determine the overall durability of a fan blade design under Boundary Layer Ingestion (BLI) distorted flow environments. The traditional single-blade modeling technique cannot accurately represent the entire rotor blade system subject to the complex dynamic loading and vibration of distorted flow conditions. A particular objective of our work was to develop a high-fidelity full-rotor aeromechanics analysis capability for a system subjected to a distorted inlet flow by applying a cyclic symmetry finite element modeling methodology. This reduction method allows computationally efficient analysis using a small periodic section of the full rotor blade system. Experimental testing in the 8-foot by 6-foot Supersonic Wind Tunnel at NASA Glenn Research Center was also carried out for the system designated as the Boundary Layer Ingesting Inlet/Distortion-Tolerant Fan (BLI2DTF) technology development. The results obtained from the present numerical modeling technique were evaluated against those of the wind tunnel experiment, toward establishing a computationally efficient aeromechanics analysis tool for full rotor blade systems subjected to distorted inlet flow conditions. Fairly good correlations were achieved, demonstrating the computational modeling techniques. The analysis results showed that the safety margin requirement set in the BLI2DTF fan blade design provided a sufficient margin with respect to the operating speed range.

  2. In vivo evaluation of a magnesium-based degradable intramedullary nailing system in a sheep model.

    PubMed

    Rössig, Christina; Angrisani, Nina; Helmecke, Patrick; Besdo, Silke; Seitz, Jan-Marten; Welke, Bastian; Fedchenko, Nickolay; Kock, Heiko; Reifenrath, Janin

    2015-10-01

    The biocompatibility and the degradation behavior of the LAE442 magnesium-based intramedullary interlocked nailing system (IM-NS) was assessed in vivo in a comparative study (stainless austenitic steel 1.4441LA) for the first time. IM-NS was implanted into the right tibia (24-week investigation period; nails/screws diameter: 9 mm/3.5 mm, length: 130 mm/15-40 mm) of 10 adult sheep (LAE442, stainless steel, n=5 each group). Clinical and radiographic examinations, in vivo computed tomography (CT), ex vivo micro-computed tomography (μCT), mechanical and histological examinations and element analyses of alloying elements in inner organs were performed. The mechanical examinations (four-point bending) revealed a significant decrease of LAE442 implant stiffness, force at 0.2% offset yield point and maximum force. Periosteal (new bone formation) and endosteal (bone decline) located bone alterations occurred in both groups (LAE442 alloy more pronounced). Moderate gas formation was observed within the LAE442 alloy group. The CT-measured implant volume decreased slightly (not significant). Histologically a predominantly direct bone-to-implant interface existed within the LAE442 alloy group. Formation of a fibrous tissue capsule around the nail occurred in the steel group. Minor inflammatory infiltration was observed in the LAE442 alloy group. Significantly increased quantities of rare earth elements were detected in the LAE442 alloy group. μCT examination showed the beginning of corrosion in dependence of the surrounding tissue. After 24 weeks the local biocompatibility of LAE442 can be considered as suitable for a degradable implant material. An application oriented interlocked intramedullary nailing system in a comparative study (degradable magnesium-based LAE442 alloy vs. steel alloy) was examined in a sheep model for the first time. 
We focused in particular on the examination of implant degradation by means of (μ-)CT, mechanical properties (four-point bending), clinical compatibility, local bone reactions (X-ray and histology) and possible systemic toxicity (histology and element analyses of inner organs). A significant decrease in magnesium (LAE442 alloy) implant stiffness and maximum force occurred. Moderate but not clinically relevant gas accumulation was determined. A predominantly direct bone-to-implant contact existed within the magnesium (LAE442 alloy) group compared to an indirect contact in the steel group. Rare earth element accumulation could be observed in inner organs, but H&E staining was inconspicuous. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  3. Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheid, Robert E.

    1989-01-01

    The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.
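    The diagonal preconditioning idea described above can be made concrete with a minimal sketch: the preconditioner is simply the diagonal of the matrix, so applying its inverse is an element-wise division. This is generic plain-Python linear algebra on a small symmetric positive-definite system, not the authors' manipulator-dynamics code.

    ```python
    # Hedged sketch: preconditioned conjugate gradient with a diagonal
    # (Jacobi) preconditioner, as in the approach summarized above.

    def matvec(A, x):
        return [sum(a * xi for a, xi in zip(row, x)) for row in A]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def jacobi_pcg(A, b, tol=1e-10, max_iter=100):
        n = len(b)
        x = [0.0] * n
        r = b[:]                                     # residual b - A x, x = 0
        d = [row[i] for i, row in enumerate(A)]      # diagonal preconditioner
        z = [ri / di for ri, di in zip(r, d)]
        p = z[:]
        rz = dot(r, z)
        for _ in range(max_iter):
            Ap = matvec(A, p)
            alpha = rz / dot(p, Ap)
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            r = [ri - alpha * api for ri, api in zip(r, Ap)]
            if dot(r, r) ** 0.5 < tol:
                break
            z = [ri / di for ri, di in zip(r, d)]
            rz_new = dot(r, z)
            beta = rz_new / rz
            rz = rz_new
            p = [zi + beta * pi for zi, pi in zip(z, p)]
        return x

    A = [[4.0, 1.0], [1.0, 3.0]]   # stand-in for an inertia matrix (SPD)
    b = [1.0, 2.0]
    x = jacobi_pcg(A, b)
    # x is approximately [1/11, 7/11]
    ```

    The synergism noted in the abstract is that the diagonal of the inertia matrix can be produced as a by-product of quantities the conjugate gradient iteration already needs, so the preconditioner costs little extra.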

  4. Computed tomography-based finite element analysis to assess fracture risk and osteoporosis treatment

    PubMed Central

    Imai, Kazuhiro

    2015-01-01

    Finite element analysis (FEA) is a computer technique for structural stress analysis developed in engineering mechanics. Over the past 40 years, FEA has been developed to investigate the structural behavior of human bones. As faster computers became available, improved FEA using 3-dimensional computed tomography (CT) was developed. This CT-based finite element analysis (CT/FEA) has provided clinicians with useful data. In this review, the mechanism of CT/FEA, validation studies of CT/FEA evaluating its accuracy and reliability in human bones, and clinical application studies assessing fracture risk and the effects of osteoporosis medication are overviewed. PMID:26309819

  5. A computer program for anisotropic shallow-shell finite elements using symbolic integration

    NASA Technical Reports Server (NTRS)

    Andersen, C. M.; Bowen, J. T.

    1976-01-01

    A FORTRAN computer program for anisotropic shallow-shell finite elements with variable curvature is described. A listing of the program is presented, together with printed output for a sample case. Computation times and central memory requirements are given for several different elements. The program is based on a stiffness (displacement) finite-element model in which the fundamental unknowns consist of both the displacement and the rotation components of the reference surface of the shell. Two triangular and four quadrilateral elements are implemented in the program. The triangular elements have 6 or 10 nodes, and the quadrilateral elements have 4 or 8 nodes. Two of the quadrilateral elements have internal degrees of freedom associated with displacement modes which vanish along the edges of the elements (bubble modes); the triangular elements and the remaining two quadrilateral elements do not have bubble modes. The output from the program consists of arrays corresponding to the stiffness, geometric stiffness, consistent mass, and consistent load matrices for individual elements. The integrals required for the generation of these arrays are evaluated by using symbolic (or analytic) integration in conjunction with certain group-theoretic techniques. The analytic expressions for the integrals are exact and were developed using a symbolic and algebraic manipulation language.

  6. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort was to begin the study of the application of hp-version finite elements to the numerical solution of optimal control problems. Under NAG-939, the hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element. One possible drawback is the increased computational effort within each element required in implementing hp-version finite elements. We are trying to determine whether this computational effort is sufficiently offset by the reduction in the number of time elements used and improved Newton-Raphson convergence so as to be useful in solving optimal control problems in real time. Because certain of the element interior unknowns can be eliminated at the element level by solving a small set of nonlinear algebraic equations in which the nodal values are taken as given, the scheme may turn out to be especially powerful in a parallel computing environment. A different processor could be assigned to each element. The number of processors, strictly speaking, is not required to be any larger than the number of sub-regions which are free of discontinuities of any kind.

  7. Storing and managing information artifacts collected by information analysts using a computing device

    DOEpatents

    Pike, William A; Riensche, Roderick M; Best, Daniel M; Roberts, Ian E; Whyatt, Marie V; Hart, Michelle L; Carr, Norman J; Thomas, James J

    2012-09-18

    Systems and computer-implemented processes for storage and management of information artifacts collected by information analysts using a computing device. The processes and systems can capture a sequence of interactive operation elements that are performed by the information analyst, who is collecting an information artifact from at least one of the plurality of software applications. The information artifact can then be stored together with the interactive operation elements as a snippet on a memory device, which is operably connected to the processor. The snippet comprises a view from an analysis application, data contained in the view, and the sequence of interactive operation elements stored as a provenance representation comprising operation element class, timestamp, and data object attributes for each interactive operation element in the sequence.
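    The "snippet" record described above is essentially an artifact stored with a provenance trail of interactive operation elements, each carrying an operation class, timestamp, and data-object attributes. The sketch below is a hedged illustration of that data structure; the field and method names are hypothetical and are not taken from the patent's implementation.

    ```python
    # Hedged sketch of a snippet with a provenance trail of operation elements.

    from dataclasses import dataclass, field
    from typing import Any

    @dataclass
    class OperationElement:
        element_class: str      # e.g. "select", "copy", "annotate"
        timestamp: float
        data_attributes: dict   # data-object attributes for this operation

    @dataclass
    class Snippet:
        view: str               # the view from the analysis application
        data: Any               # data contained in the view
        provenance: list = field(default_factory=list)

        def record(self, element_class, timestamp, **attrs):
            """Append one interactive operation element to the trail."""
            self.provenance.append(
                OperationElement(element_class, timestamp, attrs))

    snip = Snippet(view="timeline", data={"events": 3})
    snip.record("select", 1.0, source="doc-7")
    snip.record("copy", 2.5, region="rows 1-3")
    # snip.provenance now holds two ordered operation elements
    ```

    Storing the operations alongside the artifact is what lets a later reader reconstruct how the analyst arrived at the collected information.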

  8. GAP Noise Computation By The CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Chang, Sin-Chung; Wang, Xiao Y.; Jorgenson, Philip C. E.

    2001-01-01

    A typical gap noise problem is considered in this paper using the new space-time conservation element and solution element (CE/SE) method. Implementation of the computation is straightforward. No turbulence model, LES (large eddy simulation), or preset boundary layer profile is used, yet the computed frequency agrees well with the experimental one.

  9. Analysis of rocket engine injection combustion processes

    NASA Technical Reports Server (NTRS)

    Salmon, J. W.; Saltzman, D. H.

    1977-01-01

    Mixing methodology improvements for the JANNAF DER and CICM injection/combustion analysis computer programs were accomplished. The ZOM plane prediction model was further developed for installation into the new standardized DER computer program. An intra-element mixing model development approach was recommended for gas/liquid coaxial injection elements for possible future incorporation into the CICM computer program.

  10. Contours identification of elements in a cone beam computed tomography for investigating maxillary cysts

    NASA Astrophysics Data System (ADS)

    Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia

    2013-10-01

    Digital processing of two-dimensional cone beam computed tomography slices starts with identification of the contours of the elements within. This paper presents the collective work of specialists in medicine and in applied mathematics and computer science on the elaboration and implementation of algorithms in dental 2D imagery.

  11. An Implicit Upwind Algorithm for Computing Turbulent Flows on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Anderson, W. Kyle; Bonhaus, Daryl L.

    1994-01-01

    An implicit Navier-Stokes solution algorithm is presented for the computation of turbulent flow on unstructured grids. The inviscid fluxes are computed using an upwind algorithm, and the solution is advanced in time using a backward-Euler time-stepping scheme. At each time step, the linear system of equations is approximately solved with a point-implicit relaxation scheme. This methodology provides a viable and robust algorithm for computing turbulent flows on unstructured meshes. Results are shown for subsonic flow over a NACA 0012 airfoil and for transonic flow over a RAE 2822 airfoil exhibiting a strong upper-surface shock. In addition, results are shown for three-element and four-element airfoil configurations. For the calculations, two one-equation turbulence models are utilized. For the NACA 0012 airfoil, a pressure distribution and force data are compared with other computational results as well as with experiment. Comparisons of computed pressure distributions and velocity profiles with experimental data are shown for the RAE airfoil and for the three-element configuration. For the four-element case, comparisons of surface pressure distributions with experiment are made. In general, the agreement between the computations and the experiment is good.

  12. Prediction and Verification of Ductile Crack Growth from Simulated Defects in Strength Overmatched Butt Welds

    NASA Technical Reports Server (NTRS)

    Nishioka, Owen S.

    1997-01-01

    Defects that develop in welds during the fabrication process are frequently manifested as embedded flaws from lack of fusion or lack of penetration. Fracture analyses of welded structures must be able to assess the effect of such defects on the structural integrity of weldments; however, the transferability of R-curves measured in laboratory specimens to defective structural welds has not been fully examined. In the current study, the fracture behavior of an overmatched butt weld containing a simulated buried lack-of-penetration defect is studied. A specimen designed to simulate pressure vessel butt welds is considered; namely, a center crack panel specimen, of 1.25 inch by 1.25 inch cross section, loaded in tension. The stress-relieved double-V weld has a yield strength 50% higher than that of the plate material and displays upper-shelf fracture behavior at room temperature. Specimens are precracked, loaded monotonically while load-CMOD measurements are made, then stopped and heat tinted to mark the extent of ductile crack growth. These measurements are compared to predictions made using finite element analysis of the specimens with the fracture mechanics code Warp3D, which models void growth using the Gurson-Tvergaard dilatant plasticity formulation within fixed-size computational cells ahead of the crack front. Calibrating data for the finite element analyses, namely cell size and initial material porosities, are obtained by matching computational predictions to experimental results from tests of welded compact tension specimens. The R-curves measured in compact tension specimens are compared to those obtained from multi-specimen weld tests, and conclusions as to the transferability of R-curves are discussed.

  13. Evaluating uncertainty in predicting spatially variable representative elementary scales in fractured aquifers, with application to Turkey Creek Basin, Colorado

    USGS Publications Warehouse

    Wellman, Tristan P.; Poeter, Eileen P.

    2006-01-01

    Computational limitations and sparse field data often mandate use of continuum representation for modeling hydrologic processes in large‐scale fractured aquifers. Selecting appropriate element size is of primary importance because continuum approximation is not valid for all scales. The traditional approach is to select elements by identifying a single representative elementary scale (RES) for the region of interest. Recent advances indicate RES may be spatially variable, prompting unanswered questions regarding the ability of sparse data to spatially resolve continuum equivalents in fractured aquifers. We address this uncertainty of estimating RES using two techniques. In one technique we employ data‐conditioned realizations generated by sequential Gaussian simulation. For the other we develop a new approach using conditioned random walks and nonparametric bootstrapping (CRWN). We evaluate the effectiveness of each method under three fracture densities, three data sets, and two groups of RES analysis parameters. In sum, 18 separate RES analyses are evaluated, which indicate RES magnitudes may be reasonably bounded using uncertainty analysis, even for limited data sets and complex fracture structure. In addition, we conduct a field study to estimate RES magnitudes and resulting uncertainty for Turkey Creek Basin, a crystalline fractured rock aquifer located 30 km southwest of Denver, Colorado. Analyses indicate RES does not correlate to rock type or local relief in several instances but is generally lower within incised creek valleys and higher along mountain fronts. Results of this study suggest that (1) CRWN is an effective and computationally efficient method to estimate uncertainty, (2) RES predictions are well constrained using uncertainty analysis, and (3) for aquifers such as Turkey Creek Basin, spatial variability of RES is significant and complex.
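    The nonparametric bootstrap used in the CRWN approach above can be illustrated with a small, self-contained sketch: resample the observed values with replacement many times and take percentiles of the recomputed statistic to bound its uncertainty. The data and function names here are hypothetical; this is the generic bootstrap idea, not the Turkey Creek analysis itself.

    ```python
    # Hedged sketch: percentile bootstrap interval for the mean of a sparse
    # sample of (hypothetical) RES estimates.

    import random

    def bootstrap_interval(samples, statistic, n_resamples=2000,
                           alpha=0.05, seed=42):
        rng = random.Random(seed)
        stats = []
        for _ in range(n_resamples):
            resample = [rng.choice(samples) for _ in samples]
            stats.append(statistic(resample))
        stats.sort()
        lo = stats[int((alpha / 2) * n_resamples)]
        hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
        return lo, hi

    res_estimates = [12.0, 15.0, 9.0, 14.0, 11.0, 13.0, 10.0, 16.0]
    mean = lambda xs: sum(xs) / len(xs)
    low, high = bootstrap_interval(res_estimates, mean)
    # the interval brackets the observed mean of 12.5 for this sample
    ```

    Because no distributional form is assumed, the method remains usable for the limited data sets and complex fracture structure the study describes.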

  14. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using a solution-adaptive finite element method for two-dimensional linear elastic fracture mechanics problems are presented. The focus is on validating the application of the new adaptive methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  15. Plane stress analysis of wood members using isoparametric finite elements, a computer program

    Treesearch

    Gary D. Gerhardt

    1983-01-01

    A finite element program is presented which computes displacements, strains, and stresses in wood members of arbitrary shape which are subjected to plane strain/stress loading conditions. This report extends a program developed by R. L. Taylor in 1977 by adding both the cubic isoparametric finite element and the capability to analyze nonisotropic materials. The...

  16. Automatic finite element generators

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1984-01-01

    The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.

  17. Books and monographs on finite element technology

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1985-01-01

    The present paper provides a listing of all of the English books and some of the foreign books on finite element technology, as well as a list of the conference proceedings devoted solely to finite elements. The references are divided into categories. Attention is given to fundamentals, mathematical foundations, structural and solid mechanics applications, fluid mechanics applications, other applied science and engineering applications, computer implementation and software systems, computational and modeling aspects, special topics, boundary element methods, proceedings of symposia and conferences on finite element technology, bibliographies, handbooks, and historical accounts.

  18. Spatially explicit spectral analysis of point clouds and geospatial data

    USGS Publications Warehouse

    Buscombe, Daniel D.

    2015-01-01

    The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purposes of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness); however, the horizontal scale (wavelength) and spacing of roughness elements are rarely considered. This is despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, and therefore readily incorporated into, and combined with, other data analysis tools and frameworks, with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing. The analytical and computational structure of the toolbox is described, and its functionality is illustrated with an example of high-resolution bathymetric point cloud data collected with a multibeam echosounder.

  19. A comparison between families obtained from different proper elements

    NASA Technical Reports Server (NTRS)

    Zappala, Vincenzo; Cellino, Alberto; Farinella, Paolo

    1992-01-01

    Using the hierarchical method of family identification developed by Zappala et al., the results coming from the data set of proper elements computed by Williams (about 2100 numbered + about 1200 PLS 2 asteroids) and by Milani and Knezevic (version 5.7, about 4200 asteroids) are compared. Apart from some expected discrepancies due to the different data sets and/or the low accuracy of proper elements computed in peculiar dynamical zones, good agreement was found in several cases. It follows that these high-reliability families represent a sample which can be considered independent of the methods used to compute their proper elements. Therefore, they should be considered the best candidates for detailed physical studies.

  20. A Computational Approach for Automated Posturing of a Human Finite Element Model

    DTIC Science & Technology

    2016-07-01

    Memorandum Report (Std. Z39.18), July 2016: A Computational Approach for Automated Posturing of a Human Finite Element Model, by Justin McKee and Adam Sokolow. Posture affects protection by influencing the path by which loading is transferred into the body and is a major source of variability. Keywords: posture, human body, finite element, leg, spine. Approved for public release.

  1. ELECTRONIC ANALOG COMPUTER FOR DETERMINING RADIOACTIVE DISINTEGRATION

    DOEpatents

    Robinson, H.P.

    1959-07-14

    A computer is presented for determining growth and decay curves for elements in a radioactive disintegration series wherein one unstable element decays to form a second unstable element or isotope, which in turn forms a third element, etc. The growth and decay curves of radioactive elements are simulated by the charge and discharge curves of a resistance-capacitance network. Several such networks having readily adjustable values are connected in series with an amplifier between each successive pair. The time constant of each of the various networks is set proportional to the half-life of a corresponding element in the series represented, and the charge and discharge curves of each network simulate the growth and decay curve of that element.
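The analogy can be sketched numerically: a first-order RC stage obeys the same differential equation as a member of a decay chain, so the daughter's growth-and-decay curve (the Bateman solution) matches what the cascaded networks trace out. The half-lives below are hypothetical:

```python
import math

def bateman_daughter(n1_0, lam1, lam2, t):
    """Daughter abundance in a two-member decay chain (Bateman solution);
    each RC stage's time constant plays the role of 1/lambda."""
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

# Hypothetical chain: parent half-life 8 days, daughter half-life 2 days
lam1, lam2 = math.log(2) / 8.0, math.log(2) / 2.0

# Cross-check the closed form against a forward-Euler integration of
# dN2/dt = lam1*N1 - lam2*N2, the equation the RC network integrates physically.
n1, n2, dt = 1.0, 0.0, 1e-4
for _ in range(int(5.0 / dt)):       # integrate out to t = 5 days
    n1, n2 = n1 - lam1 * n1 * dt, n2 + (lam1 * n1 - lam2 * n2) * dt

print(abs(n2 - bateman_daughter(1.0, lam1, lam2, 5.0)) < 1e-3)  # True
```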

  2. Development of non-linear finite element computer code

    NASA Technical Reports Server (NTRS)

    Becker, E. B.; Miller, T.

    1985-01-01

    Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.

  3. Synthesis, spectral and quantum chemical studies on NO-chelating sulfamonomethoxine-cyclophosph(V)azane and its Er(III) complex.

    PubMed

    Alaghaz, Abdel-Nasser M A; Ammar, Reda A A; Koehler, Gottfried; Wolschann, Karl Peter; El-Gogary, Tarek M

    2014-07-15

    Computational studies have been carried out at the DFT-B3LYP/6-31G(d) level of theory on the structural and spectroscopic properties of novel ethane-1,2-diol-dichlorocyclophosph(V)azane of sulfamonomethoxine (L), and its binuclear Er(III) complex. Different tautomers of the ligand were optimized at the ab initio DFT level. The keto-form structure is about 15.8 kcal/mol more stable than the enol form (taking the ZPE correction into account). Simulated IR frequencies were scaled and compared with those measured experimentally. The TD-DFT method was used to compute the UV-VIS spectra, which show good agreement with the measured electronic spectra. The structures of the novel isolated products are proposed based on elemental analyses, IR, UV-VIS, (1)H NMR, (31)P NMR, SEM, XRD spectra, effective magnetic susceptibility measurements and thermogravimetric analysis (TGA). Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Rough set classification based on quantum logic

    NASA Astrophysics Data System (ADS)

    Hassan, Yasser F.

    2017-11-01

    By combining the advantages of quantum computing and soft computing, the paper shows that rough sets can be used with quantum logic for classification and recognition systems. We suggest a new definition of rough set theory as quantum logic theory. Rough approximations are essential elements in rough set theory; the quantum rough set model for set-valued data directly constructs set approximations based on a kind of quantum similarity relation, which is presented here. Theoretical analyses demonstrate that the new model for quantum rough sets has a new type of decision rule with less redundancy, which can be used to give accurate classification using principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt aiming to define rough sets in a quantum representation rather than in logic or sets. Experiments on data sets have demonstrated that the proposed model is more accurate than traditional rough sets in terms of finding optimal classifications.
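For context, the classical (crisp) rough-set approximations that the paper generalises with a quantum similarity relation can be sketched as follows, using a hypothetical toy decision table:

```python
from collections import defaultdict

def approximations(universe, attr, target):
    """Classical rough-set lower/upper approximations of `target` under
    the indiscernibility relation induced by `attr`."""
    classes = defaultdict(set)
    for x in universe:
        classes[attr[x]].add(x)
    lower, upper = set(), set()
    for block in classes.values():
        if block <= target:
            lower |= block   # every object in the block is certainly in target
        if block & target:
            upper |= block   # the block possibly overlaps the target
    return lower, upper

# Hypothetical table: six objects, one symbolic attribute
attr = {1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'c', 6: 'c'}
lower, upper = approximations(list(attr), attr, {1, 2, 3})
print(sorted(lower), sorted(upper))  # [1, 2] [1, 2, 3, 4]
```

Objects 3 and 4 are indiscernible, so the target {1, 2, 3} cannot be expressed exactly; the gap between the two approximations is the boundary region.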

  5. Automatic measurements and computations for radiochemical analyses

    USGS Publications Warehouse

    Rosholt, J.N.; Dooley, J.R.

    1960-01-01

    In natural radioactive sources the most important radioactive daughter products useful for geochemical studies are protactinium-231, the alpha-emitting thorium isotopes, and the radium isotopes. To resolve the abundances of these thorium and radium isotopes by their characteristic decay and growth patterns, a large number of repeated alpha activity measurements on the two chemically separated elements were made over extended periods of time. Alpha scintillation counting with automatic measurements and sample changing is used to obtain the basic count data. Generation of the required theoretical decay and growth functions, varying with time, and the least squares solution of the overdetermined simultaneous count rate equations are done with a digital computer. Examples of the complex count rate equations which may be solved and results of a natural sample containing four α-emitting isotopes of thorium are illustrated. These methods facilitate the determination of the radioactive sources on the large scale required for many geochemical investigations.
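The core numerical step, generating theoretical decay functions from known half-lives and solving the overdetermined count-rate equations by least squares, can be sketched as follows (the half-lives and amplitudes here are hypothetical, and a two-component fit stands in for the report's four-isotope case):

```python
import math

def fit_two_components(times, rates, lam1, lam2):
    """Least-squares amplitudes A1, A2 in r(t) = A1*exp(-lam1*t) + A2*exp(-lam2*t),
    solved via the 2x2 normal equations of the overdetermined count-rate system."""
    f1 = [math.exp(-lam1 * t) for t in times]
    f2 = [math.exp(-lam2 * t) for t in times]
    s11 = sum(a * a for a in f1)
    s12 = sum(a * b for a, b in zip(f1, f2))
    s22 = sum(b * b for b in f2)
    b1 = sum(a * r for a, r in zip(f1, rates))
    b2 = sum(b * r for b, r in zip(f2, rates))
    det = s11 * s22 - s12 * s12
    return (s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

# Hypothetical pair of isotopes with half-lives 10 and 2 (arbitrary time units)
lam1, lam2 = math.log(2) / 10.0, math.log(2) / 2.0
times = [float(t) for t in range(12)]
# Synthetic noiseless count rates generated from A1 = 100, A2 = 40
rates = [100.0 * math.exp(-lam1 * t) + 40.0 * math.exp(-lam2 * t) for t in times]

a1, a2 = fit_two_components(times, rates, lam1, lam2)
print(round(a1, 6), round(a2, 6))  # recovers 100.0 and 40.0
```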

  6. Computational analysis of human and mouse CREB3L4 Protein

    PubMed Central

    Velpula, Kiran Kumar; Rehman, Azeem Abdul; Chigurupati, Soumya; Sanam, Ramadevi; Inampudi, Krishna Kishore; Akila, Chandra Sekhar

    2012-01-01

    CREB3L4 is a member of the CREB/ATF transcription factor family, characterized by their regulation of gene expression through the cAMP-responsive element. Previous studies identified this protein in mice and humans. Whereas CREB3L4 in mice (referred to as Tisp40) is found in the testes and functions in spermatogenesis, human CREB3L4 is primarily detected in the prostate and has been implicated in cancer. We conducted computational analyses to compare the structural homology between murine Tisp40α and human CREB3L4. Our results reveal that the primary and secondary structures of the two proteins show high similarity. Additionally, the predicted helical transmembrane structures suggest that the proteins likely have similar structure and function. This study offers preliminary findings that support the translation of mouse Tisp40α findings into human models, based on structural homology. PMID:22829733

  7. Failure detection in high-performance clusters and computers using chaotic map computations

    DOEpatents

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
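The detection idea can be sketched with the logistic map: healthy components iterating the same map from the same seed produce bit-identical trajectories, while a tiny state corruption is exponentially amplified by the map's sensitive dependence on initial conditions. The map choice, fault size and threshold below are illustrative assumptions, not the patented implementation:

```python
def logistic_trajectory(x0, n, fault_at=None):
    """Iterate the chaotic logistic map x -> 4x(1-x); optionally inject a
    tiny state error at step `fault_at` (a stand-in for a silent fault
    in one computing component)."""
    x, traj = x0, []
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        if i == fault_at:
            x += 1e-12           # minuscule corruption
        traj.append(x)
    return traj

ref = logistic_trajectory(0.1234, 60)               # healthy reference node
bad = logistic_trajectory(0.1234, 60, fault_at=10)  # node with an injected fault

# Sensitive dependence amplifies the 1e-12 error to order one within a few
# dozen iterations, so comparing trajectories flags the faulty component.
diverged = any(abs(a - b) > 0.1 for a, b in zip(ref, bad))
print(diverged)  # True
```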

  8. An emulator for minimizing finite element analysis implementation resources

    NASA Technical Reports Server (NTRS)

    Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.

    1982-01-01

    A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines computer resources required as a function of the structural model, structural load-deflection equation characteristics, the storage allocation plan, and computer hardware capabilities. Thereby, it provides data for trading off analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation computer counts of each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.

  9. Synchrotron Imaging Computations on the Grid without the Computing Element

    NASA Astrophysics Data System (ADS)

    Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.

    2011-12-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is that of the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. The instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occurred during the job submission and queuing phases. Moreover, the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.

  10. An Information-Based Machine Learning Approach to Elasticity Imaging

    PubMed Central

    Hoerig, Cameron; Ghaboussi, Jamshid; Insana, Michael F.

    2016-01-01

    An information-based technique is described for applications in mechanical-property imaging of soft biological media under quasi-static loads. We adapted the Autoprogressive method that was originally developed for civil engineering applications for this purpose. The Autoprogressive method is a computational technique that combines knowledge of object shape and a sparse distribution of force and displacement measurements with finite-element analyses and artificial neural networks to estimate a complete set of stress and strain vectors. Elasticity imaging parameters are then computed from estimated stresses and strains. We introduce the technique using ultrasonic pulse-echo measurements in simple gelatin imaging phantoms having linear-elastic properties so that conventional finite-element modeling can be used to validate results. The Autoprogressive algorithm does not require any assumptions about the material properties and can, in principle, be used to image media with arbitrary properties. We show that by selecting a few well-chosen force-displacement measurements that are appropriately applied during training and establish convergence, we can estimate all nontrivial stress and strain vectors throughout an object and accurately estimate an elastic modulus at high spatial resolution. This new method of modeling the mechanical properties of tissue-like materials introduces a unique method of solving the inverse problem and is the first technique for imaging stress without assuming the underlying constitutive model. PMID:27858175

  11. The Linear Parameters and the Decoupling Matrix for Linearly Coupled Motion in 6 Dimensional Phase Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parzen, George

    It will be shown that starting from a coordinate system where the 6 phase space coordinates are linearly coupled, one can go to a new coordinate system, where the motion is uncoupled, by means of a linear transformation. The original coupled coordinates and the new uncoupled coordinates are related by a 6 x 6 matrix, R. R will be called the decoupling matrix. It will be shown that of the 36 elements of the 6 x 6 decoupling matrix R, only 12 elements are independent. This may be contrasted with the results for motion in 4-dimensional phase space, where R has 4 independent elements. A set of equations is given from which the 12 elements of R can be computed from the one period transfer matrix. This set of equations also allows the linear parameters, the β_i, α_i, i = 1, 3, for the uncoupled coordinates, to be computed from the one period transfer matrix. An alternative procedure for computing the linear parameters, β_i, α_i, i = 1, 3, and the 12 independent elements of the decoupling matrix R is also given, which depends on computing the eigenvectors of the one period transfer matrix. These results can be used in a tracking program, where the one period transfer matrix can be computed by multiplying the transfer matrices of all the elements in a period, to compute the linear parameters α_i and β_i, i = 1, 3, and the elements of the decoupling matrix R. The procedure presented here for studying coupled motion in 6-dimensional phase space can also be applied to coupled motion in 4-dimensional phase space, where it may be a useful alternative to the procedure presented by Edwards and Teng. In particular, it gives a simpler programming procedure for computing the beta functions and the emittances for coupled motion in 4-dimensional phase space.
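Once the motion has been decoupled, extracting the linear parameters for one plane from a one-period transfer matrix reduces to the familiar 2x2 Courant-Snyder case, sketched below with hypothetical matrix values (the report's full 6x6 procedure additionally computes the 12 independent elements of R):

```python
import math

def twiss_from_one_period(m11, m12, m21, m22):
    """Recover beta and alpha for one plane from a 2x2 one-period transfer
    matrix M = [[cos(mu)+alpha*sin(mu),  beta*sin(mu)],
                [-gamma*sin(mu),         cos(mu)-alpha*sin(mu)]]."""
    cos_mu = 0.5 * (m11 + m22)
    # sign of sin(mu) is fixed by m12 = beta*sin(mu) with beta > 0
    sin_mu = math.copysign(math.sqrt(1.0 - cos_mu ** 2), m12)
    return m12 / sin_mu, (m11 - m22) / (2.0 * sin_mu)

# Build a one-period matrix from assumed beta = 12.0, alpha = -0.8, mu = 1.1 rad
beta0, alpha0, mu = 12.0, -0.8, 1.1
c, s = math.cos(mu), math.sin(mu)
gamma0 = (1.0 + alpha0 ** 2) / beta0
M = (c + alpha0 * s, beta0 * s, -gamma0 * s, c - alpha0 * s)

beta, alpha = twiss_from_one_period(*M)
print(beta, alpha)  # ~ (12.0, -0.8)
```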

  12. The linear parameters and the decoupling matrix for linearly coupled motion in 6 dimensional phase space. Informal report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parzen, G.

    It will be shown that starting from a coordinate system where the 6 phase space coordinates are linearly coupled, one can go to a new coordinate system, where the motion is uncoupled, by means of a linear transformation. The original coupled coordinates and the new uncoupled coordinates are related by a 6 × 6 matrix, R. R will be called the decoupling matrix. It will be shown that of the 36 elements of the 6 × 6 decoupling matrix R, only 12 elements are independent. This may be contrasted with the results for motion in 4-dimensional phase space, where R has 4 independent elements. A set of equations is given from which the 12 elements of R can be computed from the one period transfer matrix. This set of equations also allows the linear parameters, β_i, α_i, i = 1, 3, for the uncoupled coordinates, to be computed from the one period transfer matrix. An alternative procedure for computing the linear parameters, β_i, α_i, i = 1, 3, and the 12 independent elements of the decoupling matrix R is also given, which depends on computing the eigenvectors of the one period transfer matrix. These results can be used in a tracking program, where the one period transfer matrix can be computed by multiplying the transfer matrices of all the elements in a period, to compute the linear parameters α_i and β_i, i = 1, 3, and the elements of the decoupling matrix R. The procedure presented here for studying coupled motion in 6-dimensional phase space can also be applied to coupled motion in 4-dimensional phase space, where it may be a useful alternative to the procedure presented by Edwards and Teng. In particular, it gives a simpler programming procedure for computing the beta functions and the emittances for coupled motion in 4-dimensional phase space.

  13. Trace-element analyses of core samples from the 1967-1988 drillings of Kilauea Iki lava lake, Hawaii

    USGS Publications Warehouse

    Helz, Rosalind Tuthill

    2012-01-01

    This report presents previously unpublished analyses of trace elements in drill core samples from Kilauea Iki lava lake and from the 1959 eruption that fed the lava lake. The two types of data presented were obtained by instrumental neutron-activation analysis (INAA) and energy-dispersive X-ray fluorescence analysis (EDXRF). The analyses were performed in U.S. Geological Survey (USGS) laboratories from 1989 to 1994. This report contains 93 INAA analyses on 84 samples and 68 EDXRF analyses on 68 samples. The purpose of the study was to document trace-element variation during chemical differentiation, especially during the closed-system differentiation of Kilauea Iki lava lake.

  14. Using OSG Computing Resources with (iLC)Dirac

    NASA Astrophysics Data System (ADS)

    Sailer, A.; Petric, M.; CLICdp Collaboration

    2017-10-01

    CPU cycles for small experiments and projects can be scarce, so making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called ‘SiteDirectors’, which directly submit to the local batch system. This in turn requires additional dedicated effort for small experiments on the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC, the required wrapper classes were developed. Not only is the usage of these types of computing elements now completely transparent for all DIRAC instances, which makes DIRAC a flexible solution for OSG-based virtual organisations, but it also allows LCG Grid Sites to move to the HTCondor-CE software without shutting DIRAC-based VOs out of their site. In these proceedings we detail how we interfaced the DIRAC system to the HTCondor-CE and Globus computing elements, explain the obstacles encountered and the solutions developed, and describe how the linear collider community uses resources in the OSG.

  15. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
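A serial sketch of the row-transform / redistribute / row-transform pattern, with a plain transpose standing in for the patent's randomized all-to-all redistribution and a naive DFT standing in for the per-node FFT:

```python
import cmath

def dft(seq):
    """Naive 1-D DFT (a stand-in for the per-node 1-D FFT)."""
    n = len(seq)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * j / n)
                for j, x in enumerate(seq)) for k in range(n)]

def dft2(grid):
    """2-D transform: 1-D transforms along rows, a transpose (the serial
    analogue of the all-to-all redistribution), then 1-D transforms
    along the new rows."""
    rows = [dft(r) for r in grid]
    cols = [list(c) for c in zip(*rows)]   # redistribute: rows become columns
    return [dft(c) for c in cols]

out = dft2([[1, 2], [3, 4]])
print(abs(out[0][0] - 10) < 1e-9)  # True: the DC term is the sum of all elements
```

In the distributed version, each node holds a slab of rows, and the transpose is realized as network messages; randomizing their order spreads traffic evenly across the interconnect.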

  16. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  17. On finite element implementation and computational techniques for constitutive modeling of high temperature composites

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.

    1989-01-01

    The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.

  18. A breakthrough for experiencing and understanding simulated physics

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1988-01-01

    The use of computer simulation in physics research is discussed, focusing on improvements to graphic workstations. Simulation capabilities and applications of enhanced visualization tools are outlined. The elements of an ideal computer simulation are presented and the potential for improving various simulation elements is examined. The interface between the human and the computer and simulation models are considered. Recommendations are made for changes in computer simulation practices and applications of simulation technology in education.

  19. A class of hybrid finite element methods for electromagnetics: A review

    NASA Technical Reports Server (NTRS)

    Volakis, J. L.; Chatterjee, A.; Gong, J.

    1993-01-01

    Integral equation methods have generally been the workhorse for antenna and scattering computations. In the case of antennas, they continue to be the prominent computational approach, but for scattering applications the requirement for large-scale computations has turned researchers' attention to near neighbor methods such as the finite element method, which has low O(N) storage requirements and is readily adaptable in modeling complex geometrical features and material inhomogeneities. In this paper, we review three hybrid finite element methods for simulating composite scatterers, conformal microstrip antennas, and finite periodic arrays. Specifically, we discuss the finite element method and its application to electromagnetic problems when combined with the boundary integral, absorbing boundary conditions, and artificial absorbers for terminating the mesh. Particular attention is given to large-scale simulations, methods, and solvers for achieving low memory requirements and code performance on parallel computing architectures.

  20. Prediction of High-Lift Flows using Turbulent Closure Models

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.; Ying, Susan X.; Bertelrud, Arild

    1997-01-01

    The flow over two different multi-element airfoil configurations is computed using linear eddy viscosity turbulence models and a nonlinear explicit algebraic stress model. A subset of recently-measured transition locations using hot film on a McDonnell Douglas configuration is presented, and the effect of transition location on the computed solutions is explored. Deficiencies in wake profile computations are found to be attributable in large part to poor boundary layer prediction on the generating element, and not necessarily inadequate turbulence modeling in the wake. Using measured transition locations for the main element improves the prediction of its boundary layer thickness, skin friction, and wake profile shape. However, using measured transition locations on the slat still yields poor slat wake predictions. The computation of the slat flow field represents a key roadblock to successful predictions of multi-element flows. In general, the nonlinear explicit algebraic stress turbulence model gives very similar results to the linear eddy viscosity models.

  1. Elucidating the underlying components of food valuation in the human orbitofrontal cortex.

    PubMed

    Suzuki, Shinsuke; Cross, Logan; O'Doherty, John P

    2017-12-01

    The valuation of food is a fundamental component of our decision-making. Yet little is known about how value signals for food and other rewards are constructed by the brain. Using a food-based decision task in human participants, we found that subjective values can be predicted from beliefs about constituent nutritive attributes of food: protein, fat, carbohydrates and vitamin content. Multivariate analyses of functional MRI data demonstrated that, while food value is represented in patterns of neural activity in both medial and lateral parts of the orbitofrontal cortex (OFC), only the lateral OFC represents the elemental nutritive attributes. Effective connectivity analyses further indicate that information about the nutritive attributes represented in the lateral OFC is integrated within the medial OFC to compute an overall value. These findings provide a mechanistic account for the construction of food value from its constituent nutrients.

  2. Recent advances in ChIP-seq analysis: from quality management to whole-genome annotation.

    PubMed

    Nakato, Ryuichiro; Shirahige, Katsuhiko

    2017-03-01

    Chromatin immunoprecipitation followed by sequencing (ChIP-seq) analysis can detect protein/DNA-binding and histone-modification sites across an entire genome. Recent advances in sequencing technologies and analyses enable us to compare hundreds of samples simultaneously; such large-scale analysis has potential to reveal the high-dimensional interrelationship level for regulatory elements and annotate novel functional genomic regions de novo. Because many experimental considerations are relevant to the choice of a method in a ChIP-seq analysis, the overall design and quality management of the experiment are of critical importance. This review offers guiding principles of computation and sample preparation for ChIP-seq analyses, highlighting the validity and limitations of the state-of-the-art procedures at each step. We also discuss the latest challenges of single-cell analysis that will encourage a new era in this field. © The Author 2016. Published by Oxford University Press.

  3. Multiaxial Cyclic Thermoplasticity Analysis with Besseling's Subvolume Method

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.

    1983-01-01

    A modification was formulated to Besseling's Subvolume Method to allow it to use multilinear, temperature-dependent stress-strain curves to perform cyclic thermoplasticity analyses. This method automatically reproduces certain aspects of real material behavior important in the analysis of Aircraft Gas Turbine Engine (AGTE) components, including the Bauschinger effect, cross-hardening, and memory. This constitutive equation was implemented in a finite element computer program called CYANIDE. Subsequently, classical time-dependent plasticity (creep) was added to the program. Since its inception, this program has been assessed against laboratory and component testing and engine experience. The ability of the program to simulate AGTE material response characteristics was verified by this experience, and its utility in providing data for life analyses was demonstrated. In the area of life analysis, the multiaxial thermoplasticity capabilities of the method have proved a match for actual AGTE life experience.

  4. Experimental design of an interlaboratory study for trace metal analysis of liquid fluids. [for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Greenbauer-Seng, L. A.

    1983-01-01

    The accurate determination of trace metals in fuels is an important requirement in much of the research into, and development of, alternative fuels for aerospace applications. Because certain metals have detrimental effects on fuel performance and fuel systems at part-per-million, and in some cases part-per-billion, levels, improved accuracy is required in determining these low-concentration elements. Accurate analyses are also required to ensure interchangeability of analysis results between vendor, researcher, and end user for purposes of quality control. Previous interlaboratory studies have demonstrated the inability of different laboratories to agree on the results of metal analysis, particularly at low concentration levels, even though good precision is typically reported within a laboratory. An interlaboratory study was designed to gain statistical information about the sources of variation in the reported concentrations. Five participant laboratories were used on a fee basis and were not informed of the purpose of the analyses. The effects of laboratory, analytical technique, concentration level, and ashing additive were studied in four fuel types for 20 elements of interest. The prescribed sample preparation schemes (variations of dry ashing) were used by all of the laboratories. The analytical data were statistically evaluated using a computer program for the analysis of variance technique.
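    The analysis-of-variance evaluation described in this record can be sketched as a one-way ANOVA that partitions variation into between-laboratory and within-laboratory components; the laboratory labels and concentration values below are hypothetical, invented purely for illustration.

```python
# One-way ANOVA sketch: partition variance in reported metal
# concentrations (ppm) into between-laboratory and within-laboratory
# components. Laboratory labels and values are hypothetical.
labs = {
    "lab_A": [1.0, 1.2, 1.1],
    "lab_B": [1.5, 1.6, 1.4],
    "lab_C": [0.9, 1.0, 1.1],
}

all_values = [v for values in labs.values() for v in values]
grand_mean = sum(all_values) / len(all_values)

# Between-laboratory sum of squares (k - 1 degrees of freedom).
ss_between = sum(
    len(vals) * (sum(vals) / len(vals) - grand_mean) ** 2
    for vals in labs.values()
)
# Within-laboratory sum of squares (N - k degrees of freedom).
ss_within = sum(
    (v - sum(vals) / len(vals)) ** 2
    for vals in labs.values()
    for v in vals
)

df_between = len(labs) - 1
df_within = len(all_values) - len(labs)
f_statistic = (ss_between / df_between) / (ss_within / df_within)
print(round(f_statistic, 3))  # -> 21.0; a large F suggests labs disagree
```

    A large F statistic relative to the F distribution's critical value indicates that between-laboratory variation dominates within-laboratory precision, which is the pattern the study set out to quantify.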

  5. A geochemical atlas of North Carolina, USA

    USGS Publications Warehouse

    Reid, J.C.

    1993-01-01

    A geochemical atlas of North Carolina, U.S.A., was prepared using National Uranium Resource Evaluation (NURE) stream-sediment data. Before termination of the NURE program, sampling of nearly the entire state (48,666 square miles of land area) was completed and geochemical analyses were obtained. The NURE data are applicable to mineral exploration, agriculture, waste disposal siting issues, health, and environmental studies. Applications in state government include resource surveys to assist mineral exploration by identifying geochemical anomalies and areas of mineralization. Agriculture seeks to identify areas with favorable (or unfavorable) conditions for plant growth, disease, and crop productivity. Trace elements such as cobalt, copper, chromium, iron, manganese, zinc, and molybdenum must be present within narrow ranges in soils for optimum growth and productivity. Trace elements as a contributing factor to disease are of concern to health professionals. Industry can use pH and conductivity data for water samples to site facilities which require specific water quality. The North Carolina NURE database consists of stream-sediment samples, groundwater samples, and stream-water analyses. The statewide database consists of 6,744 stream-sediment sites, 5,778 groundwater sample sites, and 295 stream-water sites. Neutron activation analyses were provided for U, Br, Cl, F, Mn, Na, Al, V, Dy in groundwater and stream water, and for U, Th, Hf, Ce, Fe, Mn, Na, Sc, Ti, V, Al, Dy, Eu, La, Sm, Yb, and Lu in stream sediments. Supplemental analyses by other techniques were reported on U (extractable), Ag, As, Ba, Be, Ca, Co, Cr, Cu, K, Li, Mg, Mo, Nb, Ni, P, Pb, Se, Sn, Sr, W, Y, and Zn for 4,619 stream-sediment samples. A small subset of 334 stream samples was analyzed for gold. The goal of the atlas was to make available the statewide NURE data with minimal interpretation to enable prospective users to modify and manipulate the data for their end use. 
The atlas provides only a very general indication of geochemical distribution patterns and should not be used for site-specific studies. The atlas maps for each element were computer-generated at the state's geographic information system (Center for Geographic Information and Analysis [CGIA]). The Division of Statistics and Information Services provided input files. The maps in the atlas are point maps; each sample is represented by a symbol generally corresponding to a quartile class. Other reports will transmit sample and analytical data for state regions. Data are tentatively planned to be available on disks in spreadsheet format for personal computers. During the second phase of this project, stream-sediment samples are being assigned to state geologic map unit names using a GIS system to determine background and anomaly values. Subsequent publications will make this geochemical data and accompanying interpretations available to a wide spectrum of interdisciplinary users. © 1993.
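    The quartile-class symbols used on the atlas point maps can be assigned with the Python standard library alone; a minimal sketch, assuming hypothetical concentration values (the NURE data themselves are not reproduced here):

```python
import bisect
import statistics

# Assign each sample to a quartile class, as the atlas point maps do.
# Concentrations (ppm) are hypothetical values for illustration.
concentrations = [3.1, 7.4, 2.2, 9.8, 5.0, 4.4, 6.7, 8.1]

# Three cut points dividing the data into four quartile classes.
cuts = statistics.quantiles(concentrations, n=4)

def quartile_class(value, cuts=cuts):
    """Return 1-4: which quartile class a concentration falls in."""
    return bisect.bisect_left(cuts, value) + 1

classes = [quartile_class(c) for c in concentrations]
```

    Each class then maps to a plotting symbol, so a reader can see at a glance whether a site falls in the lowest or highest quarter of the statewide distribution for that element.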

  6. Finite Element Analysis and Test Correlation of a 10-Meter Inflation-Deployed Solar Sail

    NASA Technical Reports Server (NTRS)

    Sleight, David W.; Michii, Yuki; Lichodziejewski, David; Derbes, Billy; Mann, Troy O.; Slade, Kara N.; Wang, John T.

    2005-01-01

    Under the direction of the NASA In-Space Propulsion Technology Office, the team of L Garde, NASA Jet Propulsion Laboratory, Ball Aerospace, and NASA Langley Research Center has been developing a scalable solar sail configuration to address NASA's future space propulsion needs. Prior to a flight experiment of a full-scale solar sail, a comprehensive phased test plan is currently being implemented to advance the technology readiness level of the solar sail design. These tests consist of solar sail component, subsystem, and sub-scale system ground tests that simulate the vacuum and thermal conditions of the space environment. Recently, two solar sail test articles, a 7.4-m beam assembly subsystem test article and a 10-m four-quadrant solar sail system test article, were tested in vacuum conditions with a gravity-offload system to mitigate the effects of gravity. This paper presents the structural analyses simulating the ground tests and the correlation of the analyses with the test results. For programmatic risk reduction, a two-prong analysis approach was undertaken in which two separate teams independently developed computational models of the solar sail test articles using the finite element analysis software packages: NEiNastran and ABAQUS. This paper compares the pre-test and post-test analysis predictions from both software packages with the test data including load-deflection curves from static load tests, and vibration frequencies and mode shapes from vibration tests. The analysis predictions were in reasonable agreement with the test data. Factors that precluded better correlation of the analyses and the tests were uncertainties in the material properties, test conditions, and modeling assumptions used in the analyses.

  7. Composition analysis by scanning femtosecond laser ultraprobing (CASFLU).

    DOEpatents

    Ishikawa, Muriel Y.; Wood, Lowell L.; Campbell, E. Michael; Stuart, Brent C.; Perry, Michael D.

    2002-01-01

    The composition analysis by scanning femtosecond laser ultraprobing (CASFLU) technology scans a focused train of extremely short-duration, very intense laser pulses across a sample. The partially ionized plasma ablated by each pulse is spectrometrically analyzed in real time, determining the ablated material's composition. The steering of the scanned beam is thus computer-directed either to continue ablative material removal at the same site or to successively remove nearby material for the same type of composition analysis. This invention has utility in high-speed chemical-elemental, molecular-fragment, and isotopic analyses of the microstructure composition of complex objects, e.g., the oxygen isotopic compositions of large populations of single osteons in bone.

  8. Formulation of an improved smeared stiffener theory for buckling analysis of grid-stiffened composite panels

    NASA Technical Reports Server (NTRS)

    Jaunky, Navin; Knight, Norman F., Jr.; Ambur, Damodar R.

    1995-01-01

    A smeared stiffener theory for stiffened panels is presented that includes skin-stiffener interaction effects. The neutral surface profile of the skin-stiffener combination is developed analytically using the minimum potential energy principle and statics conditions. The skin-stiffener interaction is accounted for by computing the stiffness due to the stiffener and the skin in the skin-stiffener region about the neutral axis at the stiffener. Buckling load results for axially stiffened, orthogrid, and general grid-stiffened panels are obtained using the smeared stiffness combined with a Rayleigh-Ritz method and are compared with results from detailed finite element analyses.

  9. Suppression of radiation-induced point defects by rhenium and osmium interstitials in tungsten

    PubMed Central

    Suzudo, Tomoaki; Hasegawa, Akira

    2016-01-01

    Modeling the evolution of radiation-induced defects is important for finding radiation-resistant materials, which would be greatly valued in nuclear applications. We apply density functional theory, combined with comprehensive analyses of a massive experimental database, to identify a mechanism that mitigates the effect of radiation on W crystals: adding particular solute elements that change the migration properties of interstitials. The resultant mechanism is applicable to any body-centered-cubic (BCC) metal whose self-interstitial atoms form a stable crowdion, and it is expected to provide a general guideline for the computational design of radiation-resistant alloys in the field of nuclear applications. PMID:27824134

  10. Processor Would Find Best Paths On Map

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.

    1990-01-01

    Proposed very-large-scale integrated (VLSI) circuit image-data processor finds path of least cost from specified origin to any destination on map. Cost of traversal assigned to each picture element of map. Path of least cost from originating picture element to every other picture element computed as path that preserves as much as possible of signal transmitted by originating picture element. Dedicated microprocessor at each picture element stores cost of traversal and performs its share of computations of paths of least cost. Least-cost-path problem occurs in research, military maneuvers, and in planning routes of vehicles.
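    The least-cost-path computation described in this record is, in serial form, Dijkstra's algorithm on a grid of per-pixel traversal costs. The sketch below reproduces only the result of the proposed parallel VLSI scheme, not its signal-propagation mechanism; the cost map is hypothetical.

```python
import heapq

# Serial sketch of the least-cost-path problem: each picture element
# (pixel) has a traversal cost, and we seek the cheapest path from an
# origin pixel to every other pixel. Cost-map values are hypothetical.
cost = [
    [1, 3, 1],
    [1, 5, 1],
    [4, 1, 1],
]
rows, cols = len(cost), len(cost[0])
origin = (0, 0)

# Dijkstra's algorithm: dist[pixel] = cheapest total traversal cost,
# counting the cost of every pixel on the path, origin included.
dist = {origin: cost[0][0]}
heap = [(cost[0][0], origin)]
while heap:
    d, (r, c) = heapq.heappop(heap)
    if d > dist[(r, c)]:
        continue  # stale heap entry
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            nd = d + cost[nr][nc]
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                heapq.heappush(heap, (nd, (nr, nc)))
```

    The VLSI proposal effectively performs all of these relaxations concurrently, one dedicated microprocessor per pixel, instead of popping one pixel at a time from a priority queue.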

  11. Modules and methods for all photonic computing

    DOEpatents

    Schultz, David R.; Ma, Chao Hung

    2001-01-01

    A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.

  12. Development and validation of a subject-specific finite element model of the functional spinal unit to predict vertebral strength.

    PubMed

    Lee, Chu-Hee; Landham, Priyan R; Eastell, Richard; Adams, Michael A; Dolan, Patricia; Yang, Lang

    2017-09-01

    Finite element models of an isolated vertebral body cannot accurately predict compressive strength of the spinal column because, in life, compressive load is variably distributed across the vertebral body and neural arch. The purpose of this study was to develop and validate a patient-specific finite element model of a functional spinal unit, and then use the model to predict vertebral strength from medical images. A total of 16 cadaveric functional spinal units were scanned and then tested mechanically in bending and compression to generate a vertebral wedge fracture. Before testing, an image processing and finite element analysis framework (SpineVox-Pro), developed previously in MATLAB using ANSYS APDL, was used to generate a subject-specific finite element model with eight-node hexahedral elements. Transversely isotropic linear-elastic material properties were assigned to vertebrae, and simple homogeneous linear-elastic properties were assigned to the intervertebral disc. Forward bending loading conditions were applied to simulate manual handling. Results showed that vertebral strengths measured by experiment were positively correlated with strengths predicted by the functional spinal unit finite element model with von Mises or Drucker-Prager failure criteria (R² = 0.80-0.87), with areal bone mineral density measured by dual-energy X-ray absorptiometry (R² = 0.54) and with volumetric bone mineral density from quantitative computed tomography (R² = 0.79). Large-displacement non-linear analyses on all specimens did not improve predictions. We conclude that subject-specific finite element models of a functional spinal unit have the potential to estimate vertebral strength better than bone mineral density alone.
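    The R² values quoted in this record measure how closely predicted strengths track measured ones. A minimal sketch of one common definition, the squared Pearson correlation, using hypothetical strength values rather than the study's data:

```python
# Squared Pearson correlation (R^2) between measured vertebral
# strengths and finite element predictions. Values (kN) are
# hypothetical, for illustration only.
measured = [1.0, 2.0, 3.0]
predicted = [1.1, 1.9, 3.2]

n = len(measured)
mean_m = sum(measured) / n
mean_p = sum(predicted) / n
cov = sum((m - mean_m) * (p - mean_p)
          for m, p in zip(measured, predicted))
var_m = sum((m - mean_m) ** 2 for m in measured)
var_p = sum((p - mean_p) ** 2 for p in predicted)
r_squared = cov ** 2 / (var_m * var_p)
```

    An R² near 1 means the model ranks and scales specimen strengths almost as the experiment does, which is the comparison made between the finite element predictions and the density-based surrogates.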

  13. A finite element method to compute three-dimensional equilibrium configurations of fluid membranes: Optimal parameterization, variational formulation and applications

    NASA Astrophysics Data System (ADS)

    Rangarajan, Ramsharan; Gao, Huajian

    2015-09-01

    We introduce a finite element method to compute equilibrium configurations of fluid membranes, identified as stationary points of a curvature-dependent bending energy functional under certain geometric constraints. The reparameterization symmetries in the problem pose a challenge in designing parametric finite element methods, and existing methods commonly resort to Lagrange multipliers or penalty parameters. In contrast, we exploit these symmetries by representing solution surfaces as normal offsets of given reference surfaces and entirely bypass the need for artificial constraints. We then resort to a Galerkin finite element method to compute discrete C1 approximations of the normal offset coordinate. The variational framework presented is suitable for computing deformations of three-dimensional membranes subject to a broad range of external interactions. We provide a systematic algorithm for computing large deformations, wherein solutions at subsequent load steps are identified as perturbations of previously computed ones. We discuss the numerical implementation of the method in detail and demonstrate its optimal convergence properties using examples. We discuss applications of the method to studying adhesive interactions of fluid membranes with rigid substrates and to investigate the influence of membrane tension in tether formation.

  14. Computer program for definition of transonic axial-flow compressor blade rows. [computer program for fabrication and aeroelastic analysis]

    NASA Technical Reports Server (NTRS)

    Crouse, J. E.

    1974-01-01

    A method is presented for designing axial-flow compressor blading from blade elements defined on cones which pass through the blade-edge streamline locations. Each blade-element centerline is composed of two segments which are tangent to each other. The centerline and surfaces of each segment have constant change of angle with path distance. The stacking line for the blade elements can be leaned in both the axial and tangential directions. The output of the computer program gives coordinates for fabrication and properties for aeroelastic analysis for planar blade sections. These coordinates and properties are obtained by interpolation across conical blade elements. The program is structured to be coupled with an aerodynamic design program.

  15. The EPIRARE proposal of a set of indicators and common data elements for the European platform for rare disease registration.

    PubMed

    Taruscio, Domenica; Mollo, Emanuela; Gainotti, Sabina; Posada de la Paz, Manuel; Bianchi, Fabrizio; Vittozzi, Luciano

    2014-01-01

    The European Union acknowledges the relevance of registries as key instruments for developing rare disease (RD) clinical research and improving patient care and health service (HS) planning, and it funded the EPIRARE project to improve standardization and data comparability among patient registries and to support new registries and data collections. A reference list of patient registry-based indicators has been prepared, building on the work of previous EU projects and on the platform stakeholders' information needs identified in the EPIRARE surveys and consultations. The variables necessary to compute these indicators have been analysed for their scope and use and then organized into data domains. The reference indicators span disease surveillance, socio-economic burden, HS monitoring, research and product development, and policy equity and effectiveness. The variables necessary to compute these reference indicators have been selected and, with the exception of more sophisticated indicators for research and clinical care quality, they can be collected as common data elements (CDEs) applicable to all rare diseases. They have been organized into data domains characterized by their contents and main goal, and a limited set of mandatory data elements has been defined, which allows case notification independently of the physician or the health service. The definition of a set of CDEs for the European platform for RD patient registration is the first step in promoting the use of common tools for the collection of comparable data. The proposed organization of the CDEs contributes to the completeness of case ascertainment, with the possible involvement of patients and patient associations in the registration process.

  16. On a 3-D singularity element for computation of combined mode stress intensities

    NASA Technical Reports Server (NTRS)

    Atluri, S. N.; Kathiresan, K.

    1976-01-01

    A special three-dimensional singularity element is developed for the computation of combined mode 1, 2, and 3 stress intensity factors, which vary along an arbitrarily curved crack front in three-dimensional linear elastic fracture problems. The method employs a displacement-hybrid finite element model, based on a modified variational principle of potential energy, with arbitrary element-interior displacements, interelement boundary displacements, and element boundary tractions as variables. The special crack-front element used in this analysis contains the square-root singularity in strains and stresses, where the stress intensity factors K(1), K(2), and K(3) are quadratically variable along the crack front and are solved directly along with the unknown nodal displacements.

  17. Stroke patients’ utilisation of extrinsic feedback from computer-based technology in the home: a multiple case study realistic evaluation

    PubMed Central

    2014-01-01

    Background Evidence indicates that post-stroke rehabilitation improves function, independence and quality of life. A key aspect of rehabilitation is the provision of appropriate information and feedback to the learner. Advances in information and communications technology (ICT) have allowed for the development of various systems to complement stroke rehabilitation that could be used in the home setting. These systems may increase the provision of rehabilitation a stroke survivor receives and carries out, as well as providing a learning platform that facilitates long-term self-managed rehabilitation and behaviour change. This paper describes the application of an innovative evaluative methodology to explore the utilisation of feedback for post-stroke upper-limb rehabilitation in the home. Methods Using the principles of realistic evaluation, this study aimed to test and refine intervention theories by exploring the complex interactions of contexts, mechanisms and outcomes that arise from technology deployment in the home. Methods included focus groups followed by multi-method case studies (n = 5) before, during and after the use of computer-based equipment. Data were analysed in relation to the context-mechanism-outcome hypotheses case by case. This was followed by a synthesis of the findings to answer the question, ‘what works for whom and in what circumstances and respects?’ Results Data analysis reveals that to achieve desired outcomes through the use of ICT, key elements of computer feedback, such as accuracy, measurability, rewarding feedback, adaptability, and knowledge of results feedback, are required to trigger the theory-driven mechanisms underpinning the intervention. In addition, the pre-existing context and the personal and environmental contexts, such as previous experience of service delivery, personal goals, trust in the technology, and social circumstances may also enable or constrain the underpinning theory-driven mechanisms.
Conclusions Findings suggest that the theory-driven mechanisms underpinning the utilisation of feedback from computer-based technology for home-based upper-limb post-stroke rehabilitation are dependent on key elements of computer feedback and the personal and environmental context. The identification of these elements may therefore inform the development of technology; therapy education and the subsequent adoption of technology and a self-management paradigm; long-term self-managed rehabilitation; and importantly, improvements in the physical and psychosocial aspects of recovery. PMID:24903401

  18. Stroke patients' utilisation of extrinsic feedback from computer-based technology in the home: a multiple case study realistic evaluation.

    PubMed

    Parker, Jack; Mawson, Susan; Mountain, Gail; Nasr, Nasrin; Zheng, Huiru

    2014-06-05

    Evidence indicates that post-stroke rehabilitation improves function, independence and quality of life. A key aspect of rehabilitation is the provision of appropriate information and feedback to the learner. Advances in information and communications technology (ICT) have allowed for the development of various systems to complement stroke rehabilitation that could be used in the home setting. These systems may increase the provision of rehabilitation a stroke survivor receives and carries out, as well as providing a learning platform that facilitates long-term self-managed rehabilitation and behaviour change. This paper describes the application of an innovative evaluative methodology to explore the utilisation of feedback for post-stroke upper-limb rehabilitation in the home. Using the principles of realistic evaluation, this study aimed to test and refine intervention theories by exploring the complex interactions of contexts, mechanisms and outcomes that arise from technology deployment in the home. Methods included focus groups followed by multi-method case studies (n = 5) before, during and after the use of computer-based equipment. Data were analysed in relation to the context-mechanism-outcome hypotheses case by case. This was followed by a synthesis of the findings to answer the question, 'what works for whom and in what circumstances and respects?' Data analysis reveals that to achieve desired outcomes through the use of ICT, key elements of computer feedback, such as accuracy, measurability, rewarding feedback, adaptability, and knowledge of results feedback, are required to trigger the theory-driven mechanisms underpinning the intervention. In addition, the pre-existing context and the personal and environmental contexts, such as previous experience of service delivery, personal goals, trust in the technology, and social circumstances may also enable or constrain the underpinning theory-driven mechanisms.
Findings suggest that the theory-driven mechanisms underpinning the utilisation of feedback from computer-based technology for home-based upper-limb post-stroke rehabilitation are dependent on key elements of computer feedback and the personal and environmental context. The identification of these elements may therefore inform the development of technology; therapy education and the subsequent adoption of technology and a self-management paradigm; long-term self-managed rehabilitation; and importantly, improvements in the physical and psychosocial aspects of recovery.

  19. Finite element analysis of transonic flows in cascades: Importance of computational grids in improving accuracy and convergence

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Akay, H. U.

    1981-01-01

    The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.

  20. Aorta modeling with the element-based zero-stress state and isogeometric discretization

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Sasaki, Takafumi

    2017-02-01

    Patient-specific arterial fluid-structure interaction computations, including aorta computations, require an estimation of the zero-stress state (ZSS), because the image-based arterial geometries do not come from a ZSS. We have earlier introduced a method for estimation of the element-based ZSS (EBZSS) in the context of finite element discretization of the arterial wall. The method has three main components. 1. An iterative method, which starts with a calculated initial guess, is used for computing the EBZSS such that when a given pressure load is applied, the image-based target shape is matched. 2. A method for straight-tube segments is used for computing the EBZSS so that we match the given diameter and longitudinal stretch in the target configuration and the "opening angle." 3. An element-based mapping between the artery and straight-tube is extracted from the mapping between the artery and straight-tube segments. This provides the mapping from the arterial configuration to the straight-tube configuration, and from the estimated EBZSS of the straight-tube configuration back to the arterial configuration, to be used as the initial guess for the iterative method that matches the image-based target shape. Here we present the version of the EBZSS estimation method with isogeometric wall discretization. With isogeometric discretization, we can obtain the element-based mapping directly, instead of extracting it from the mapping between the artery and straight-tube segments. That is because all we need for the element-based mapping, including the curvatures, can be obtained within an element. With NURBS basis functions, we may be able to achieve a similar level of accuracy as with the linear basis functions, but using larger-size and much fewer elements. Higher-order NURBS basis functions allow representation of more complex shapes within an element. 
To show how the new EBZSS estimation method performs, we first present 2D test computations with straight-tube configurations. Then we show how the method can be used in a 3D computation where the target geometry comes from a medical image of a human aorta.

  1. Preventing smoking relapse via Web-based computer-tailored feedback: a randomized controlled trial.

    PubMed

    Elfeddali, Iman; Bolman, Catherine; Candel, Math J J M; Wiers, Reinout W; de Vries, Hein

    2012-08-20

    Web-based computer-tailored approaches have the potential to be successful in supporting smoking cessation. However, the potential effects of such approaches for relapse prevention and the value of incorporating action planning strategies to effectively prevent smoking relapse have not been fully explored. The Stay Quit for You (SQ4U) study compared two Web-based computer-tailored smoking relapse prevention programs with different types of planning strategies versus a control group. To assess the efficacy of two Web-based computer-tailored programs in preventing smoking relapse compared with a control group. The action planning (AP) program provided tailored feedback at baseline and invited respondents to do 6 preparatory and coping planning assignments (the first 3 assignments prior to quit date and the final 3 assignments after quit date). The action planning plus (AP+) program was an extended version of the AP program that also provided tailored feedback at 11 time points after the quit attempt. Respondents in the control group only filled out questionnaires. The study also assessed possible dose-response relationships between abstinence and adherence to the programs. The study was a randomized controlled trial with three conditions: the control group, the AP program, and the AP+ program. Respondents were daily smokers (N = 2031), aged 18 to 65 years, who were motivated and willing to quit smoking within 1 month. The primary outcome was self-reported continued abstinence 12 months after baseline. Logistic regression analyses were conducted using three samples: (1) all respondents as randomly assigned, (2) a modified sample that excluded respondents who did not make a quit attempt in conformance with the program protocol, and (3) a minimum dose sample that also excluded respondents who did not adhere to at least one of the intervention elements. Observed case analyses and conservative analyses were conducted. 
In the observed case analysis of the randomized sample, abstinence rates were 22% (45/202) in the control group versus 33% (63/190) in the AP program and 31% (53/174) in the AP+ program. The AP program (odds ratio 1.95, P = .005) and the AP+ program (odds ratio 1.61, P = .049) were significantly more effective than the control condition. Abstinence rates and effects differed per sample. Finally, the results suggest a dose-response relationship between abstinence and the number of program elements completed by the respondents. Despite the differences in results caused by the variation in our analysis approaches, we can conclude that Web-based computer-tailored programs combined with planning strategy assignments and feedback after the quit attempt can be effective in preventing relapse 12 months after baseline. However, adherence to the intervention seems critical for effectiveness. Finally, our results also suggest that more research is needed to assess the optimum intervention dose. Dutch Trial Register: NTR1892; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=1892 (Archived by WebCite at http://www.webcitation.org/693S6uuPM).
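    The crude odds ratios implied by the quoted abstinence counts can be recovered directly from the two-by-two tables; note that the abstract's values (1.95 and 1.61) come from logistic regression with covariates, so the unadjusted figures computed below differ slightly.

```python
def odds_ratio(events_a, total_a, events_b, total_b):
    """Crude odds ratio of group A relative to group B."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

# Abstinent / analyzed counts from the observed case analysis:
# control 45/202, AP program 63/190, AP+ program 53/174.
or_ap = odds_ratio(63, 190, 45, 202)       # AP program vs control
or_ap_plus = odds_ratio(53, 174, 45, 202)  # AP+ program vs control
```

    The crude values (about 1.73 and 1.53) sit below the adjusted odds ratios reported in the abstract, a reminder that covariate adjustment in the logistic model shifts the estimates.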

  2. A Spectral Finite Element Approach to Modeling Soft Solids Excited with High-Frequency Harmonic Loads

    PubMed Central

    Brigham, John C.; Aquino, Wilkins; Aguilo, Miguel A.; Diamessis, Peter J.

    2010-01-01

    An approach for efficient and accurate finite element analysis of harmonically excited soft solids using high-order spectral finite elements is presented and evaluated. The Helmholtz-type equations used to model such systems suffer from additional numerical error known as pollution when excitation frequency becomes high relative to stiffness (i.e. high wave number), which is the case, for example, for soft tissues subject to ultrasound excitations. The use of high-order polynomial elements allows for a reduction in this pollution error, but requires additional consideration to counteract Runge's phenomenon and/or poor linear system conditioning, which has led to the use of spectral element approaches. This work examines in detail the computational benefits and practical applicability of high-order spectral elements for such problems. The spectral elements examined are tensor product elements (i.e. quad or brick elements) of high-order Lagrangian polynomials with non-uniformly distributed Gauss-Lobatto-Legendre nodal points. A shear plane wave example is presented to show the dependence of the accuracy and computational expense of high-order elements on wave number. Then, a convergence study for a viscoelastic acoustic-structure interaction finite element model of an actual ultrasound driven vibroacoustic experiment is shown. The number of degrees of freedom required for a given accuracy level was found to consistently decrease with increasing element order. However, the computationally optimal element order was found to strongly depend on the wave number. PMID:21461402
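    The non-uniformly distributed Gauss-Lobatto-Legendre (GLL) nodal points mentioned in this record are the endpoints ±1 together with the roots of P'_N, the derivative of the Legendre polynomial. A minimal sketch using bisection on a Legendre recurrence (an illustration, not the authors' implementation):

```python
def legendre_pair(n, x):
    """Return (P_n(x), P_{n-1}(x)) via the three-term recurrence."""
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p, p_prev

def dlegendre(n, x):
    """P'_n(x) for |x| < 1, using (x^2 - 1) P'_n = n (x P_n - P_{n-1})."""
    p_n, p_nm1 = legendre_pair(n, x)
    return n * (x * p_n - p_nm1) / (x * x - 1.0)

def gll_nodes(n, samples=1999):
    """Order-n Gauss-Lobatto-Legendre nodes on [-1, 1]:
    the endpoints plus the n - 1 interior roots of P'_n."""
    eps = 1e-9
    step = 2.0 * (1.0 - eps) / samples
    xs = [-1.0 + eps + i * step for i in range(samples + 1)]
    nodes = [-1.0]
    for a, b in zip(xs, xs[1:]):
        if dlegendre(n, a) * dlegendre(n, b) < 0.0:  # root bracketed
            for _ in range(100):  # plain bisection
                mid = 0.5 * (a + b)
                if dlegendre(n, a) * dlegendre(n, mid) <= 0.0:
                    b = mid
                else:
                    a = mid
            nodes.append(0.5 * (a + b))
    nodes.append(1.0)
    return nodes
```

    For N = 4 this yields nodes at ±1, ±√(3/7) ≈ ±0.6547, and 0, clustered toward the element endpoints; this non-uniform distribution is what counteracts Runge's phenomenon for high-order Lagrangian polynomials.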

  3. [The laboratory of tomorrow. Particular reference to hematology].

    PubMed

    Cazal, P

    1985-01-01

A serious prediction can only be an extrapolation of recent developments. To be exact, the development has to continue in the same direction, which is only a probability. Probable development of hematological technology: Progress in methods. Development of new labelling methods: radio-elements, antibodies. Monoclonal antibodies. Progress in equipment: Cell counters and their adaptation to routine hemograms is a certainty. From analyzers: a promise that will perhaps become reality. Coagulometers: progress still to be made. Hemagglutination detectors and their application to grouping: good achievements, but the market is too limited. Computerization and automation: What form will the computerizing take? What will the computer do? Whom will the computer control? What should the automatic analyzers be? Two current levels. Relationships between the automatic analyzers and the computer: rapidity, fidelity and, above all, reliability. Memory: large capacity and easy access. Disadvantages: conservatism and technical dependency. How can they be avoided? Development of the environment: Laboratory input: outside supplies, electricity, reagents, consumables. Samples and their identification. Output: distribution of results and communication problems. Centralization or decentralization? What will tomorrow's laboratory be? Three hypotheses: optimistic, pessimistic, and balanced.

  4. Fracture mechanics life analytical methods verification testing

    NASA Technical Reports Server (NTRS)

    Favenesi, J. A.; Clemmons, T. G.; Lambert, T. J.

    1994-01-01

    Verification and validation of the basic information capabilities in NASCRAC has been completed. The basic information includes computation of K versus a, J versus a, and crack opening area versus a. These quantities represent building blocks which NASCRAC uses in its other computations such as fatigue crack life and tearing instability. Several methods were used to verify and validate the basic information capabilities. The simple configurations such as the compact tension specimen and a crack in a finite plate were verified and validated versus handbook solutions for simple loads. For general loads using weight functions, offline integration using standard FORTRAN routines was performed. For more complicated configurations such as corner cracks and semielliptical cracks, NASCRAC solutions were verified and validated versus published results and finite element analyses. A few minor problems were identified in the basic information capabilities of the simple configurations. In the more complicated configurations, significant differences between NASCRAC and reference solutions were observed because NASCRAC calculates its solutions as averaged values across the entire crack front whereas the reference solutions were computed for a single point.
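For the compact tension specimen, the handbook solution against which a K-versus-a result would be checked is the standard wide-range expression (as tabulated, e.g., in ASTM E399); a sketch of that building block:

```python
import math

def k_compact_tension(P, B, W, a):
    """Stress intensity factor K for a compact tension specimen.

    P: load, B: thickness, W: width, a: crack length (consistent units;
    N and m give K in Pa*sqrt(m)). Valid roughly for 0.2 <= a/W < 1."""
    alpha = a / W
    f = ((2 + alpha) / (1 - alpha) ** 1.5) * (
        0.886 + 4.64 * alpha - 13.32 * alpha ** 2
        + 14.72 * alpha ** 3 - 5.6 * alpha ** 4)
    return P / (B * math.sqrt(W)) * f

# The geometry factor at a/W = 0.5 is about 9.66
print(round(k_compact_tension(1.0, 1.0, 1.0, 0.5), 2))
```

Note the point made in the abstract: a handbook value like this holds at a single point on the crack front, whereas NASCRAC reports values averaged across the entire front, which explains part of the observed differences.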

  5. Unsteady Aero Computation of a 1 1/2 Stage Large Scale Rotating Turbine

    NASA Technical Reports Server (NTRS)

    To, Wai-Ming

    2012-01-01

This report documents the work performed for the Subsonic Rotary Wing Project under NASA's Fundamental Aeronautics Program. It was funded through Task Number NNC10E420T under GESS-2 Contract NNC06BA07B in the period from 10/1/2010 to 8/31/2011. The objective of the task is to provide support for the development of variable speed power turbine technology through application of computational fluid dynamics analyses. This includes work elements in mesh generation, multistage URANS simulations, and post-processing of the simulation results for comparison with the experimental data. The unsteady CFD calculations were performed with the TURBO code running in multistage single passage (phase lag) mode. Meshes for the blade rows were generated with the NASA-developed TCGRID code. The CFD performance is assessed and improvements are recommended for future research in this area. The United Technologies Research Center's 1 1/2 stage Large Scale Rotating Turbine was selected as the candidate engine configuration for this computational effort because of the completeness and availability of its data.

  6. Computer program for analysis of high speed, single row, angular contact, spherical roller bearing, SASHBEAN. Volume 2: Mathematical formulation and analysis

    NASA Technical Reports Server (NTRS)

    Aggarwal, Arun K.

    1993-01-01

Spherical roller bearings have typically been used in applications with speeds limited to about 5000 rpm and loads limited for operation at less than about 0.25 million DN. However, spherical roller bearings are now being designed for high-load and high-speed applications, including aerospace applications. A computer program, SASHBEAN, was developed to provide an analytical tool to design, analyze, and predict the performance of high speed, single row, angular contact (including zero contact angle), spherical roller bearings. The material presented is the mathematical formulation and analytical methods used to develop the computer program SASHBEAN. For a given set of operating conditions, the program calculates the bearing's ring deflections (axial and radial), roller deflections, contact area stresses, depth and magnitude of maximum shear stresses, axial thrust, rolling element and cage rotational speeds, lubrication parameters, fatigue lives, and rates of heat generation. Centrifugal forces and gyroscopic moments are fully considered. The program is also capable of performing steady-state and time-transient thermal analyses of the bearing system.
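The DN value quoted above is simply the product of bore diameter in millimetres and shaft speed in rpm; a one-line check shows that 0.25 million DN at the traditional 5000 rpm limit corresponds to a 50 mm bore:

```python
def dn_value(bore_mm, speed_rpm):
    """Bearing speed parameter DN: bore diameter (mm) times shaft speed (rpm)."""
    return bore_mm * speed_rpm

print(dn_value(50, 5000))  # 250000, i.e. 0.25 million DN
```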

  7. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array implementations confirm that the presented network requires lower computational resources.
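To make the idea of a cellular automaton-based neuron concrete, here is a deliberately simplified toy (a hypothetical sketch, not the paper's ACAN model): the membrane state is a small integer counter advanced each update by autonomous drift and by input events, with a spike and reset at threshold:

```python
def run_ca_neuron(inputs, threshold=7):
    """Toy discrete-state neuron: integer state with spike-and-reset at threshold."""
    state, spikes = 0, []
    for x in inputs:            # x = 0 or 1 input event per update
        state += 1 + x          # autonomous drift plus input increment
        if state >= threshold:
            spikes.append(1)    # fire and reset
            state = 0
        else:
            spikes.append(0)
    return spikes

# With no input the neuron fires periodically; input events advance the phase.
print(run_ca_neuron([0] * 14))  # spikes at updates 7 and 14
```

Because the state is a finite integer and the update is a simple conditional increment, such a model maps naturally onto a small sequential logic circuit, which is the implementation advantage the abstract emphasizes.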

  8. Numerical model for healthy and injured ankle ligaments.

    PubMed

    Forestiero, Antonella; Carniel, Emanuele Luigi; Fontanella, Chiara Giulia; Natali, Arturo Nicola

    2017-06-01

The aim of this work is to provide a computational tool for the investigation of ankle mechanics under different loading conditions. The attention is focused on the biomechanical role of ankle ligaments, which are fundamental for joint stability. A finite element model of the human foot is developed starting from Computed Tomography and Magnetic Resonance Imaging, paying particular attention to the definition of ankle ligaments. A refined fiber-reinforced visco-hyperelastic constitutive model is assumed to characterize the mechanical response of ligaments. Numerical analyses that interpret the anterior drawer and talar tilt tests reported in the literature are performed. The numerical results are in agreement with the range of values obtained by experimental tests, confirming the accuracy of the procedure adopted. The increase of the ankle range of motion after rupture of some ligaments is also evaluated, demonstrating the capability of the numerical models to represent damage conditions. The developed computational model provides a tool for the investigation of foot and ankle functionality in terms of stress-strain of the tissues and in terms of ankle motion, considering different types of damage to ankle ligaments.

  9. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. A block-skyline storage scheme, along with vector-unrolling techniques, is used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  10. Exponential convergence through linear finite element discretization of stratified subdomains

    NASA Astrophysics Data System (ADS)

    Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali

    2016-10-01

    Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.

  11. A survey of parametrized variational principles and applications to computational mechanics

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1993-01-01

    This survey paper describes recent developments in the area of parametrized variational principles (PVP's) and selected applications to finite-element computational mechanics. A PVP is a variational principle containing free parameters that have no effect on the Euler-Lagrange equations. The theory of single-field PVP's based on gauge functions (also known as null Lagrangians) is a subset of the inverse problem of variational calculus that has limited value. On the other hand, multifield PVP's are more interesting from theoretical and practical standpoints. Following a tutorial introduction, the paper describes the recent construction of multifield PVP's in several areas of elasticity and electromagnetics. It then discusses three applications to finite-element computational mechanics: the derivation of high-performance finite elements, the development of element-level error indicators, and the constructions of finite element templates. The paper concludes with an overview of open research areas.

  12. Integrated analyses in plastics forming

    NASA Astrophysics Data System (ADS)

    Bo, Wang

This thesis describes progress made in the analysis, simulation and testing of plastics forming, which can be applied to injection and compression mould design. Three activities of plastics forming have been investigated, namely filling analysis, cooling analysis and ejection analysis. The filling section of plastics forming has been analysed and calculated using MOLDFLOW and FILLCALC V software. A comparison of high-speed compression moulding and injection moulding has been made. The cooling section of plastics forming has been analysed using MOLDFLOW software and a finite difference computer program. The latter program can be used as a sample program to calculate the feasibility of cooling different materials to required target temperatures under controlled cooling conditions. The application of thermal imaging has also been introduced to determine the actual process temperatures. Thermal imaging can be used as a powerful tool to analyse mould surface temperatures and to verify the mathematical model. A buckling problem in the ejection section has been modelled, calculated with PATRAN/ABAQUS finite element analysis software, and tested. These calculations and analyses are applied to a special case but can be used as an example for general analysis and calculation in the ejection section of plastics forming.
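As a hedged illustration of the kind of estimate such a cooling program produces (a drastically simplified lumped-capacitance stand-in, not the thesis's finite difference code), the time to cool a moulding to its ejection temperature can be marched with explicit Euler and checked against the analytic exponential:

```python
import math

def time_to_eject(T0, T_coolant, T_eject, tau, dt=0.01):
    """Explicit-Euler march of dT/dt = -(T - T_coolant)/tau until T_eject."""
    T, t = T0, 0.0
    while T > T_eject:
        T += dt * (T_coolant - T) / tau   # Newton-cooling step
        t += dt
    return t

# Hypothetical numbers: melt at 230 C, coolant at 40 C, eject at 90 C, tau = 12 s
t_num = time_to_eject(T0=230.0, T_coolant=40.0, T_eject=90.0, tau=12.0)
t_exact = 12.0 * math.log((230.0 - 40.0) / (90.0 - 40.0))   # analytic solution
print(round(t_num, 2), round(t_exact, 2))
```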

  13. Learning by statistical cooperation of self-interested neuron-like computing elements.

    PubMed

    Barto, A G

    1985-01-01

Since the usual approaches to cooperative computation in networks of neuron-like computing elements do not assume that network components have any "preferences", they do not make substantive contact with game theoretic concepts, despite their use of some of the same terminology. In the approach presented here, however, each network component, or adaptive element, is a self-interested agent that prefers some inputs over others and "works" toward obtaining the most highly preferred inputs. Here we describe an adaptive element that is robust enough to learn to cooperate with other elements like itself in order to further its self-interests. It is argued that some of the longstanding problems concerning adaptation and learning by networks might be solvable by this form of cooperativity, and computer simulation experiments are described that show how networks of self-interested components that are sufficiently robust can solve rather difficult learning problems. We then place the approach in its proper historical and theoretical perspective through comparison with a number of related algorithms. A secondary aim of this article is to suggest that beyond what is explicitly illustrated here, there is a wealth of ideas from game theory and allied disciplines such as mathematical economics that can be of use in thinking about cooperative computation in both nervous systems and man-made systems.

  14. Efficient simulation of incompressible viscous flow over multi-element airfoils

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Wiltberger, N. Lyn; Kwak, Dochan

    1993-01-01

    The incompressible, viscous, turbulent flow over single and multi-element airfoils is numerically simulated in an efficient manner by solving the incompressible Navier-Stokes equations. The solution algorithm employs the method of pseudo compressibility and utilizes an upwind differencing scheme for the convective fluxes, and an implicit line-relaxation scheme. The motivation for this work includes interest in studying high-lift take-off and landing configurations of various aircraft. In particular, accurate computation of lift and drag at various angles of attack up to stall is desired. Two different turbulence models are tested in computing the flow over an NACA 4412 airfoil; an accurate prediction of stall is obtained. The approach used for multi-element airfoils involves the use of multiple zones of structured grids fitted to each element. Two different approaches are compared; a patched system of grids, and an overlaid Chimera system of grids. Computational results are presented for two-element, three-element, and four-element airfoil configurations. Excellent agreement with experimental surface pressure coefficients is seen. The code converges in less than 200 iterations, requiring on the order of one minute of CPU time on a CRAY YMP per element in the airfoil configuration.

  15. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX-80

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-11-01

The finite element method has proven to be an invaluable tool for analysis and design of complex, high performance systems, such as bladed-disk assemblies in aircraft turbofan engines. However, as the problem size increases, the computation time required by conventional computers can be prohibitively high. Parallel processing computers provide the means to overcome these computation time limits. This report summarizes the results of a research activity aimed at providing a finite element capability for analyzing turbomachinery bladed-disk assemblies in a vector/parallel processing environment. A special purpose code, named with the acronym SAPNEW, has been developed to perform static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements. SAPNEW provides a stand-alone capability for static and eigen analysis on the Alliant FX/80, a parallel processing computer. A preprocessor, named with the acronym NTOS, has been developed to accept NASTRAN input decks and convert them to the SAPNEW format to make SAPNEW more readily used by researchers at NASA Lewis Research Center.

  16. Symbolic computation of equivalence transformations and parameter reduction for nonlinear physical models

    NASA Astrophysics Data System (ADS)

    Cheviakov, Alexei F.

    2017-11-01

    An efficient systematic procedure is provided for symbolic computation of Lie groups of equivalence transformations and generalized equivalence transformations of systems of differential equations that contain arbitrary elements (arbitrary functions and/or arbitrary constant parameters), using the software package GeM for Maple. Application of equivalence transformations to the reduction of the number of arbitrary elements in a given system of equations is discussed, and several examples are considered. The first computational example of generalized equivalence transformations where the transformation of the dependent variable involves an arbitrary constitutive function is presented. As a detailed physical example, a three-parameter family of nonlinear wave equations describing finite anti-plane shear displacements of an incompressible hyperelastic fiber-reinforced medium is considered. Equivalence transformations are computed and employed to radically simplify the model for an arbitrary fiber direction, invertibly reducing the model to a simple form that corresponds to a special fiber direction, and involves no arbitrary elements. The presented computation algorithm is applicable to wide classes of systems of differential equations containing arbitrary elements.
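A minimal illustration of the idea (a toy example using SymPy, not the GeM package): for the heat equation u_t = k u_xx the constant k is an arbitrary element, and the equivalence transformation t* = k t removes it from the equation entirely:

```python
import sympy as sp

x, t, k, ts = sp.symbols('x t k t_star', positive=True)

# PDE with an arbitrary constant element k:  u_t = k * u_xx.
# A particular solution:
u = sp.exp(-k * t) * sp.sin(x)
assert sp.simplify(sp.diff(u, t) - k * sp.diff(u, x, 2)) == 0

# Equivalence transformation t* = k*t absorbs the arbitrary element:
v = u.subs(t, ts / k)          # v(x, t*) = exp(-t*) * sin(x)
assert sp.simplify(sp.diff(v, ts) - sp.diff(v, x, 2)) == 0   # v_t* = v_xx, k is gone
print(v)
```

The transformations computed in the paper play the same role on a much larger scale, invertibly mapping a three-parameter fiber-reinforced model to a parameter-free canonical form.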

  17. Computation of Asteroid Proper Elements on the Grid

    NASA Astrophysics Data System (ADS)

    Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.

    2009-12-01

A procedure of gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time-consuming computations and make them more efficient is justified by the large increase of observational data expected from the next generation all-sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids have been derived since the Grid infrastructure was first used for this purpose. The average time for the catalog updates is significantly shortened with respect to the time needed with stand-alone workstations. We also present the basics of Grid computing, the concepts of Grid middleware and its Workload management system. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for the future work.

  18. Deep sequencing of cardiac microRNA-mRNA interactomes in clinical and experimental cardiomyopathy

    PubMed Central

    Matkovich, Scot J.; Dorn, Gerald W.

    2018-01-01

MicroRNAs are a family of short (~21 nucleotide) noncoding RNAs that serve key roles in cellular growth and differentiation and the response of the heart to stress stimuli. As the sequence-specific recognition element of RNA-induced silencing complexes (RISCs), microRNAs bind mRNAs and prevent their translation via mechanisms that may include transcript degradation and/or prevention of ribosome binding. Short microRNA sequences and the ability of microRNAs to bind to mRNA sites having only partial/imperfect sequence complementarity complicate purely computational analyses of microRNA-mRNA interactomes. Furthermore, computational microRNA target prediction programs typically ignore biological context, and therefore the principal determinants of microRNA-mRNA binding: the presence and quantity of each. To address these deficiencies we describe an empirical method, developed via studies of stressed and failing hearts, to determine disease-induced changes in microRNAs, mRNAs, and the mRNAs targeted to the RISC, without cross-linking mRNAs to RISC proteins. Deep sequencing methods are used to determine RNA abundances, delivering unbiased, quantitative RNA data limited only by their annotation in the genome of interest. We describe the laboratory bench steps required to perform these experiments, experimental design strategies to achieve an appropriate number of sequencing reads per biological replicate, and computer-based processing tools and procedures to convert large raw sequencing data files into gene expression measures useful for differential expression analyses. PMID:25836573
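The final processing step described above, converting raw sequencing reads into expression measures, can be illustrated with a counts-per-million normalization (a minimal sketch with hypothetical numbers; real pipelines use dedicated tools with additional library-size corrections):

```python
import numpy as np

# Raw read counts (rows = genes, columns = samples); hypothetical numbers.
counts = np.array([[120, 300],
                   [ 80, 150],
                   [ 40,  50]], dtype=float)

# Counts-per-million: scale each sample by its total library size so that
# expression values are comparable across samples of different depth.
cpm = counts / counts.sum(axis=0) * 1e6
print(cpm)   # each column now sums to 1e6
```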

  19. Deep sequencing of cardiac microRNA-mRNA interactomes in clinical and experimental cardiomyopathy.

    PubMed

    Matkovich, Scot J; Dorn, Gerald W

    2015-01-01

    MicroRNAs are a family of short (~21 nucleotide) noncoding RNAs that serve key roles in cellular growth and differentiation and the response of the heart to stress stimuli. As the sequence-specific recognition element of RNA-induced silencing complexes (RISCs), microRNAs bind mRNAs and prevent their translation via mechanisms that may include transcript degradation and/or prevention of ribosome binding. Short microRNA sequences and the ability of microRNAs to bind to mRNA sites having only partial/imperfect sequence complementarity complicate purely computational analyses of microRNA-mRNA interactomes. Furthermore, computational microRNA target prediction programs typically ignore biological context, and therefore the principal determinants of microRNA-mRNA binding: the presence and quantity of each. To address these deficiencies we describe an empirical method, developed via studies of stressed and failing hearts, to determine disease-induced changes in microRNAs, mRNAs, and the mRNAs targeted to the RISC, without cross-linking mRNAs to RISC proteins. Deep sequencing methods are used to determine RNA abundances, delivering unbiased, quantitative RNA data limited only by their annotation in the genome of interest. We describe the laboratory bench steps required to perform these experiments, experimental design strategies to achieve an appropriate number of sequencing reads per biological replicate, and computer-based processing tools and procedures to convert large raw sequencing data files into gene expression measures useful for differential expression analyses.

  20. Exploring the quantum speed limit with computer games

    NASA Astrophysics Data System (ADS)

    Sørensen, Jens Jakob W. H.; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F.

    2016-04-01

    Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. ‘Gamification’—the application of game elements in a non-game context—is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.
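The quantum speed limit referenced above can be made concrete with the Mandelstam-Tamm bound, T = πħ/(2ΔE): the shortest time to drive a state to an orthogonal one. For a resonantly driven qubit the bound is attained exactly (a small numerical check with ħ = 1):

```python
import numpy as np

# Mandelstam-Tamm bound: reaching an orthogonal state takes at least
# T = pi * hbar / (2 * DeltaE).  For a qubit driven by H = (Omega/2) * sigma_x
# (hbar = 1), DeltaE = Omega/2, so the bound T = pi / Omega is attained exactly.
Omega = 2.0
t_qsl = np.pi / Omega
t = np.linspace(0.0, t_qsl, 201)
overlap = np.cos(Omega * t / 2) ** 2     # |<0|psi(t)>|^2 under this drive
print(t_qsl, overlap[-1])                # overlap reaches 0 exactly at t_qsl
```

Quantum Moves players were, in effect, searching for control pulses whose durations approach this kind of limit in a far higher-dimensional landscape.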

  1. Exploring the quantum speed limit with computer games.

    PubMed

    Sørensen, Jens Jakob W H; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F

    2016-04-14

    Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. 'Gamification'--the application of game elements in a non-game context--is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.

  2. Mesh morphing for finite element analysis of implant positioning in cementless total hip replacements.

    PubMed

    Bah, Mamadou T; Nair, Prasanth B; Browne, Martin

    2009-12-01

    Finite element (FE) analysis of the effect of implant positioning on the performance of cementless total hip replacements (THRs) requires the generation of multiple meshes to account for positioning variability. This process can be labour intensive and time consuming as CAD operations are needed each time a specific orientation is to be analysed. In the present work, a mesh morphing technique is developed to automate the model generation process. The volume mesh of a baseline femur with the implant in a nominal position is deformed as the prosthesis location is varied. A virtual deformation field, obtained by solving a linear elasticity problem with appropriate boundary conditions, is applied. The effectiveness of the technique is evaluated using two metrics: the percentages of morphed elements exceeding an aspect ratio of 20 and an angle of 165 degrees between the adjacent edges of each tetrahedron. Results show that for 100 different implant positions, the first and second metrics never exceed 3% and 3.5%, respectively. To further validate the proposed technique, FE contact analyses are conducted using three selected morphed models to predict the strain distribution in the bone and the implant micromotion under joint and muscle loading. The entire bone strain distribution is well captured and both percentages of bone volume with strain exceeding 0.7% and bone average strains are accurately computed. The results generated from the morphed mesh models correlate well with those for models generated from scratch, increasing confidence in the methodology. This morphing technique forms an accurate and efficient basis for FE based implant orientation and stability analysis of cementless hip replacements.
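The two element-quality metrics used above, aspect ratio and maximum angle between adjacent edges, can be sketched for a single tetrahedron; note that aspect-ratio definitions vary, and the longest-to-shortest-edge ratio used here is an assumption rather than the paper's exact metric:

```python
import itertools
import math
import numpy as np

def tet_quality(verts):
    """Edge-ratio aspect and max angle (deg) between adjacent edges of a tet."""
    v = np.asarray(verts, dtype=float)
    edges = list(itertools.combinations(range(4), 2))
    lengths = [np.linalg.norm(v[j] - v[i]) for i, j in edges]
    aspect = max(lengths) / min(lengths)
    max_angle = 0.0
    for p in range(4):                          # angles between edge pairs at each vertex
        others = [q for q in range(4) if q != p]
        for a, b in itertools.combinations(others, 2):
            ea, eb = v[a] - v[p], v[b] - v[p]
            cosang = np.dot(ea, eb) / (np.linalg.norm(ea) * np.linalg.norm(eb))
            max_angle = max(max_angle, math.degrees(math.acos(np.clip(cosang, -1, 1))))
    return aspect, max_angle

# Regular tetrahedron: aspect ratio 1, all adjacent-edge angles 60 degrees
print(tet_quality([(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]))
```

A morphing run would flag an element when the first value exceeds 20 or the second exceeds 165 degrees, the thresholds quoted in the abstract.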

  3. De-MA: a web Database for electron Microprobe Analyses to assist EMP lab manager and users

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.

    2012-12-01

    Lab managers and users of electron microprobe (EMP) facilities require comprehensive, yet flexible documentation structures, as well as an efficient scheduling mechanism. A single on-line database system for managing reservations, and providing information on standards, quantitative and qualitative setups (element mapping, etc.), and X-ray data has been developed for this purpose. This system is particularly useful in multi-user facilities where experience ranges from beginners to the highly experienced. New users and occasional facility users will find these tools extremely useful in developing and maintaining high quality, reproducible, and efficient analyses. This user-friendly database is available through the web, and uses MySQL as a database and PHP/HTML as script language (dynamic website). The database includes several tables for standards information, X-ray lines, X-ray element mapping, PHA, element setups, and agenda. It is configurable for up to five different EMPs in a single lab, each of them having up to five spectrometers and as many diffraction crystals as required. The installation should be done on a web server supporting PHP/MySQL, although installation on a personal computer is possible using third-party freeware to create a local Apache server, and to enable PHP/MySQL. Since it is web-based, any user outside the EMP lab can access this database anytime through any web browser and on any operating system. The access can be secured using a general password protection (e.g. htaccess). The web interface consists of 6 main menus. (1) "Standards" lists standards defined in the database, and displays detailed information on each (e.g. material type, name, reference, comments, and analyses). Images such as EDS spectra or BSE can be associated with a standard. 
(2) "Analyses" lists typical setups to use for quantitative analyses, and allows calculation of a mineral composition based on a mineral formula, or of a mineral formula based on a fixed amount of oxygen or of cations (using an analysis in element or oxide weight-%); the latter includes re-calculation of H2O/CO2 based on stoichiometry, and oxygen correction for F and Cl. Another option offers a list of any available standards and possible peak or background interferences for a series of elements. (3) "X-ray maps" lists the different setups recommended for element mapping using WDS, and a map calculator to facilitate map setups and to estimate the total mapping time. (4) "X-ray data" lists all X-ray lines for a specific element (K, L, M, absorption edges, and satellite peaks) in terms of energy, wavelength and peak position. A check for possible interferences on peak or background is also possible. Theoretical X-ray peak positions for each crystal are calculated based on the 2d spacing of each crystal and the wavelength of each line. (5) "Agenda" displays the reservation dates for each month and for each EMP lab defined. It also offers a reservation request option, the request being sent by email to the EMP manager for approval. (6) Finally, "Admin" is password restricted, and contains all necessary options to manage the database through user-friendly forms. The installation of this database is made easy, and knowledge of HTML, PHP, or MySQL is unnecessary to install, configure, manage, or use it. A working database is accessible at http://cub.geoloweb.ch.
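The theoretical peak-position calculation mentioned for the "X-ray data" menu follows from the Bragg condition. A minimal sketch, assuming the spectrometer position is tabulated via sin(theta), with approximate illustrative values for the line and crystal:

```python
import math

def bragg_position(wavelength_A, two_d_A, order=1):
    """Bragg condition n*lambda = 2d*sin(theta): return sin(theta) and theta
    in degrees. A WDS spectrometer's linear position is proportional to
    sin(theta), so sin(theta) is the natural peak position to tabulate per
    crystal (given its 2d spacing)."""
    s = order * wavelength_A / two_d_A        # sin(theta); 2d is passed directly
    if not 0.0 < s < 1.0:
        raise ValueError("line not diffractable on this crystal/order")
    return s, math.degrees(math.asin(s))

# Illustrative (approximate) values: Fe K-alpha on an LiF(200) crystal, 2d ~ 4.027 A.
s, theta = bragg_position(1.937, 4.027)
print(round(s, 3), round(theta, 2))
```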

  4. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    PubMed

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

    Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MR have become more and more useful. The finite element method (FEM) is a widely used technique for estimating the mechanical behavior of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to developments in both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by some authors. In this study, the geometries (used in modeling) of a normal and a prosthetic foot and ankle were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complexes. Except for the phalanges, whose adjacent surfaces were fused, the metatarsals, cuneiforms, cuboid, navicular, talus and calcaneus bones, soft tissues and prosthesis components were developed independently to form the foot and ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other. 
There was a peak pressure increase at the 4th metatarsal, first metatarsal and talus bones, and a decrease at the intermediate cuneiform and calcaneus bones, in the prosthetic ankle-foot complex compared to the normal one. The predicted plantar pressures and von Mises stress distributions for a normal foot were consistent with other FE models in the literature. The present study aims to open new approaches for the development of ankle prostheses.

  5. Path Toward a Unified Geometry for Radiation Transport

    NASA Technical Reports Server (NTRS)

    Lee, Kerry; Barzilla, Janet; Davis, Andrew; Zachmann

    2014-01-01

    The Direct Accelerated Geometry for Radiation Analysis and Design (DAGRAD) element of the RadWorks Project under Advanced Exploration Systems (AES) within the Space Technology Mission Directorate (STMD) of NASA will enable new designs and concepts of operation for radiation risk assessment, mitigation and protection. This element is designed to produce a solution that will allow NASA to calculate the transport of space radiation through complex computer-aided design (CAD) models using the state-of-the-art analytic and Monte Carlo radiation transport codes. Due to the inherent hazard of astronaut and spacecraft exposure to ionizing radiation in low-Earth orbit (LEO) or in deep space, risk analyses must be performed for all crew vehicles and habitats. Incorporating these analyses into the design process can minimize the mass needed solely for radiation protection. Transport of the radiation fields as they pass through shielding and body materials can be simulated using Monte Carlo techniques or described by the Boltzmann equation, which is obtained by balancing changes in particle fluxes as they traverse a small volume of material with the gains and losses caused by atomic and nuclear collisions. Deterministic codes that solve the Boltzmann transport equation, such as HZETRN [high charge and energy transport code developed by NASA Langley Research Center (LaRC)], are generally computationally faster than Monte Carlo codes such as FLUKA, GEANT4, MCNP(X) or PHITS; however, they are currently limited to transport in one dimension, which poorly represents the secondary light ion and neutron radiation fields. NASA currently uses HZETRN space radiation transport software, both because it is computationally efficient and because proven methods have been developed for using this software to analyze complex geometries. 
Although Monte Carlo codes describe the relevant physics in a fully three-dimensional manner, their computational costs have thus far prevented their widespread use for analysis of complex CAD models, leading to the creation and maintenance of toolkit-specific simplistic geometry models. The work presented here builds on the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit developed for use with the Monte Carlo N-Particle (MCNP) transport code. The workflow for achieving radiation transport on CAD models using MCNP and FLUKA has been demonstrated, and the results of analyses on realistic spacecraft/habitats will be presented. Future work is planned that will further automate this process and enable the use of multiple radiation transport codes on identical geometry models imported from CAD. This effort will enhance the modeling tools used by NASA to accurately evaluate astronaut space radiation risk and determine the protection provided by as-designed exploration mission vehicles and habitats.
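The balance of gains and losses that the Boltzmann description formalises can be illustrated with a toy one-dimensional, straight-ahead marching scheme. The cross-sections, production factor, and single secondary species below are purely illustrative assumptions, not HZETRN's physics:

```python
import math

def one_d_transport(phi0, sigma_p, sigma_s, n_prod, depth, steps=1000):
    """Toy 1-D 'straight-ahead' transport sketch: primaries are removed with
    macroscopic cross-section sigma_p (per cm); each removal produces n_prod
    secondaries, themselves removed with sigma_s. Marching in depth mirrors
    the way deterministic codes sweep the transport equation along one
    dimension."""
    dx = depth / steps
    phi_p, phi_s = phi0, 0.0
    for _ in range(steps):
        removed = sigma_p * phi_p * dx            # primaries lost in this step
        phi_p -= removed
        phi_s += n_prod * removed - sigma_s * phi_s * dx
    return phi_p, phi_s

p, s = one_d_transport(phi0=1.0, sigma_p=0.1, sigma_s=0.05, n_prod=0.5, depth=30.0)
# The primary flux should track the analytic exp(-sigma_p * depth) solution closely.
print(p, math.exp(-0.1 * 30.0))
```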

  6. Comparison of full 3-D, thin-film 3-D, and thin-film plate analyses of a postbuckled embedded delamination

    NASA Technical Reports Server (NTRS)

    Whitcomb, John D.

    1989-01-01

    Strain-energy release rates are often used to predict when delamination growth will occur in laminates under compression. Because of the inherently high computational cost of performing such analyses, less rigorous analyses such as thin-film plate analysis were used. The assumptions imposed by plate theory restrict the analysis to the calculation of total strain energy, G(sub t). The objective is to determine the accuracy of thin-film plate analysis by comparing the distribution of G(sub t) calculated using fully three dimensional (3D), thin-film 3D, and thin-film plate analyses. Thin-film 3D analysis is the same as thin-film plate analysis, except 3D analysis is used to model the sublaminate. The 3D stress analyses were performed using the finite element program NONLIN3D. The plate analysis results were obtained from published data, which used STAGS. Strain-energy release rates were calculated using variations of the virtual crack closure technique. The results demonstrate that thin-film plate analysis can predict the distribution of G(sub t) quite well, at least for the configurations considered. Also, these results verify the accuracy of the strain-energy release rate procedure for plate analysis.
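The virtual crack closure technique mentioned above estimates the energy release rate from the nodal force holding the crack tip closed and the relative displacement just behind it. A minimal single-node sketch with hypothetical nodal data, using the standard 1/(2*da*width) VCCT form:

```python
def vcct_mode_I(f_z, du_z, da, width):
    """One-node virtual crack closure estimate of the mode-I energy release
    rate: G_I = F * du / (2 * da * width), where F is the crack-tip nodal
    force, du the relative opening behind the tip, da the virtual crack
    extension, and width the element width (all hypothetical values here)."""
    return f_z * du_z / (2.0 * da * width)

def vcct_total(components, da, width):
    """Total G sums the modal (I, II, III) contributions computed the same way."""
    return sum(f * du for f, du in components) / (2.0 * da * width)

# Hypothetical data: 12 N nodal force, 0.2 mm opening, 0.5 mm extension, 1 mm width.
g1 = vcct_mode_I(f_z=12.0, du_z=2.0e-4, da=0.5e-3, width=1.0e-3)
print(g1)   # energy release rate in J/m^2 for these units
```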

  7. Computational analyses in cognitive neuroscience: in defense of biological implausibility.

    PubMed

    Dror, I E; Gallogly, D P

    1999-06-01

    Because cognitive neuroscience researchers attempt to understand the human mind by bridging behavior and brain, they expect computational analyses to be biologically plausible. In this paper, biologically implausible computational analyses are shown to have critical and essential roles in the various stages and domains of cognitive neuroscience research. Specifically, biologically implausible computational analyses can contribute to (1) understanding and characterizing the problem that is being studied, (2) examining the availability of information and its representation, and (3) evaluating and understanding the neuronal solution. In the context of the distinct types of contributions made by certain computational analyses, the biological plausibility of those analyses is altogether irrelevant. These biologically implausible models are nevertheless relevant and important for biologically driven research.

  8. Optimisation and evaluation of pre-design models for offshore wind turbines with jacket support structures and their influence on integrated load simulations

    NASA Astrophysics Data System (ADS)

    Schafhirt, S.; Kaufer, D.; Cheng, P. W.

    2014-12-01

    In recent years many advanced load simulation tools, allowing aero-servo-hydro-elastic analysis of an entire offshore wind turbine, have been developed and verified. Nowadays, even an offshore wind turbine with a complex support structure such as a jacket can be analysed. However, the computational effort rises significantly with an increasing level of detail. This is especially true for offshore wind turbines with lattice support structures, since such models naturally have a higher number of nodes and elements than simpler monopile structures. During the design process multiple load simulations are required to obtain an optimal solution. For pre-design tasks it is crucial to apply load simulations that keep simulation quality and computational effort in balance. The paper introduces a reference wind turbine model consisting of the REpower5M wind turbine and a jacket support structure with a high level of detail. In total, twelve variations of this reference model are derived and presented. The main focus is on simplifying the models of the support structure and the foundation. The reference model and the simplified models are simulated with the coupled simulation tool Flex5-Poseidon and analysed with regard to frequencies, fatigue loads, and ultimate loads. A model has been found that achieves an adequate increase in simulation speed while keeping the results within an acceptable range of the reference results.

  9. The comparative hydrodynamics of rapid rotation by predatory appendages.

    PubMed

    McHenry, M J; Anderson, P S L; Van Wassenbergh, S; Matthews, D G; Summers, A P; Patek, S N

    2016-11-01

    Countless aquatic animals rotate appendages through the water, yet fluid forces are typically modeled with translational motion. To elucidate the hydrodynamics of rotation, we analyzed the raptorial appendages of mantis shrimp (Stomatopoda) using a combination of flume experiments, mathematical modeling and phylogenetic comparative analyses. We found that computationally efficient blade-element models offered an accurate first-order approximation of drag, when compared with a more elaborate computational fluid-dynamic model. Taking advantage of this efficiency, we compared the hydrodynamics of the raptorial appendage in different species, including a newly measured spearing species, Coronis scolopendra. The ultrafast appendages of a smasher species (Odontodactylus scyllarus) were an order of magnitude smaller, yet experienced values of drag-induced torque similar to those of a spearing species (Lysiosquillina maculata). The dactyl, a stabbing segment that can be opened at the distal end of the appendage, generated substantial additional drag in the smasher, but not in the spearer, which uses the segment to capture evasive prey. Phylogenetic comparative analyses revealed that larger mantis shrimp species strike more slowly, regardless of whether they smash or spear their prey. In summary, drag was minimally affected by shape, whereas size, speed and dactyl orientation dominated and differentiated the hydrodynamic forces across species and sizes. This study demonstrates the utility of simple mathematical modeling for comparative analyses and illustrates the multi-faceted consequences of drag during the evolutionary diversification of rotating appendages. © 2016. Published by The Company of Biologists Ltd.
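A blade-element drag model of the kind found efficient above integrates strip-wise drag along the rotating appendage. This sketch assumes a constant chord and drag coefficient, which are simplifying assumptions rather than the study's measured values:

```python
def blade_element_torque(omega, length, chord, cd=1.0, rho=1000.0, n=500):
    """Blade-element estimate of drag torque on an appendage rotating at
    omega (rad/s): each strip at radius r moves at omega*r, experiences drag
    dF = 0.5*rho*cd*chord*(omega*r)^2 * dr, and contributes torque r*dF.
    Integration uses the midpoint rule over n strips."""
    dr = length / n
    torque = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        torque += r * 0.5 * rho * cd * chord * (omega * r) ** 2 * dr
    return torque

# Hypothetical appendage: 2 cm long, 4 mm chord, rotating at 100 rad/s in water.
t = blade_element_torque(omega=100.0, length=0.02, chord=0.004)
# Analytic check: integral of 0.5*rho*cd*c*omega^2*r^3 dr = rho*cd*c*omega^2*L^4/8.
print(t, 1000.0 * 1.0 * 0.004 * 100.0**2 * 0.02**4 / 8.0)
```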

  10. Computer simulation of functioning of elements of security systems

    NASA Astrophysics Data System (ADS)

    Godovykh, A. V.; Stepanov, B. P.; Sheveleva, A. A.

    2017-01-01

    The article addresses the development of an information complex for simulating the functioning of security system elements. The complex is described in terms of its main objectives, its design concept, and the interrelation of its main elements. The proposed computer simulation concept provides an opportunity to simulate the operation of a security system for training security staff under normal and emergency conditions.

  11. Nonvolatile Ionic Two-Terminal Memory Device

    NASA Technical Reports Server (NTRS)

    Williams, Roger M.

    1990-01-01

    Conceptual solid-state memory device nonvolatile and erasable and has only two terminals. Proposed device based on two effects: thermal phase transition and reversible intercalation of ions. Transfer of sodium ions between source of ions and electrical switching element increases or decreases electrical conductance of element, turning switch "on" or "off". Used in digital computers and neural-network computers. In neural networks, many small, densely packed switches function as erasable, nonvolatile synaptic elements.

  12. Errors due to the truncation of the computational domain in static three-dimensional electrical impedance tomography.

    PubMed

    Vauhkonen, P J; Vauhkonen, M; Kaipio, J P

    2000-02-01

    In electrical impedance tomography (EIT), an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. The currents spread out in three dimensions and therefore off-plane structures have a significant effect on the reconstructed images. A question arises: how far from the current carrying electrodes should the discretized model of the object be extended? If the model is truncated too near the electrodes, errors are produced in the reconstructed images. On the other hand if the model is extended very far from the electrodes the computational time may become too long in practice. In this paper the model truncation problem is studied with the extended finite element method. Forward solutions obtained using so-called infinite elements, long finite elements and separable long finite elements are compared to the correct solution. The effects of the truncation of the computational domain on the reconstructed images are also discussed and results from the three-dimensional (3D) sensitivity analysis are given. We show that if the finite element method with ordinary elements is used in static 3D EIT, the dimension of the problem can become fairly large if the errors associated with the domain truncation are to be avoided.

  13. Characterization and Analyses of Valves, Feed Lines and Tanks used in Propellant Delivery Systems at NASA SSC

    NASA Technical Reports Server (NTRS)

    Ryan, Harry M.; Coote, David J.; Ahuja, Vineet; Hosangadi, Ashvin

    2006-01-01

    Accurate modeling of liquid rocket engine test processes involves assessing critical fluid mechanic and heat and mass transfer mechanisms within a cryogenic environment, and accurately modeling fluid properties such as vapor pressure and liquid and gas densities as a function of pressure and temperature. The Engineering and Science Directorate at the NASA John C. Stennis Space Center has developed and implemented such analytic models and analysis processes, which have been used over a broad range of thermodynamic systems and have resulted in substantial improvements in rocket propulsion testing services. In this paper, we offer an overview of the analysis techniques used to simulate pressurization and propellant fluid systems associated with the test stands at the NASA John C. Stennis Space Center. More specifically, examples of the global (one-dimensional) performance of a propellant system are provided as predicted using the Rocket Propulsion Test Analysis (RPTA) model. Computational fluid dynamic (CFD) analyses utilizing the multi-element, unstructured, moving-grid capability for complex cryogenic feed ducts, transient valve operation, and pressurization and mixing in propellant tanks are provided as well.

  14. Patient-specific non-linear finite element modelling for predicting soft organ deformation in real-time: application to non-rigid neuroimage registration.

    PubMed

    Wittek, Adam; Joldes, Grand; Couton, Mathieu; Warfield, Simon K; Miller, Karol

    2010-12-01

    Long computation times of non-linear (i.e. accounting for geometric and material non-linearity) biomechanical models have been regarded as one of the key factors preventing application of such models in predicting organ deformation for image-guided surgery. This contribution presents real-time patient-specific computation of the deformation field within the brain for six cases of brain shift induced by craniotomy (i.e. surgical opening of the skull) using specialised non-linear finite element procedures implemented on a graphics processing unit (GPU). In contrast to commercial finite element codes that rely on an updated Lagrangian formulation and implicit integration in time domain for steady state solutions, our procedures utilise the total Lagrangian formulation with explicit time stepping and dynamic relaxation. We used patient-specific finite element meshes consisting of hexahedral and non-locking tetrahedral elements, together with realistic material properties for the brain tissue and appropriate contact conditions at the boundaries. The loading was defined by prescribing deformations on the brain surface under the craniotomy. Application of the computed deformation fields to register (i.e. align) the preoperative and intraoperative images indicated that the models very accurately predict the intraoperative deformations within the brain. For each case, computing the brain deformation field took less than 4 s using an NVIDIA Tesla C870 GPU, which is two orders of magnitude reduction in computation time in comparison to our previous study in which the brain deformation was predicted using a commercial finite element solver executed on a personal computer. Copyright © 2010 Elsevier Ltd. All rights reserved.
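The explicit route to a steady-state solution described above (explicit time stepping plus dynamic relaxation) can be illustrated on a single degree of freedom. The stiffening-spring force law and all parameter values below are hypothetical:

```python
def dynamic_relaxation(f_int, f_ext, mass=1.0, damping=2.0, dt=0.01, steps=20000):
    """Explicit dynamic relaxation sketch: integrate m*a = f_ext - f_int(u) - c*v
    forward in time until the transient damps out, leaving the static solution.
    This mirrors the explicit-time-stepping route to a steady state, reduced to
    one degree of freedom for clarity (semi-implicit Euler update)."""
    u, v = 0.0, 0.0
    for _ in range(steps):
        a = (f_ext - f_int(u) - damping * v) / mass
        v += a * dt
        u += v * dt
    return u

# Hypothetical stiffening spring: f_int = 10*u + 400*u^3; applied load 5.0.
u = dynamic_relaxation(lambda x: 10.0 * x + 400.0 * x**3, 5.0)
print(u, 10.0 * u + 400.0 * u**3)   # second value is the recovered load, ~5.0
```

At convergence the velocity and acceleration vanish, so the internal force balances the applied load, which is exactly the static equilibrium an implicit solver would compute directly.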

  15. Analysis and synthesis of distributed-lumped-active networks by digital computer

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The use of digital computational techniques in the analysis and synthesis of DLA (distributed lumped active) networks is considered. This class of networks consists of three distinct types of elements, namely, distributed elements (modeled by partial differential equations), lumped elements (modeled by algebraic relations and ordinary differential equations), and active elements (modeled by algebraic relations). Such a characterization is applicable to a broad class of circuits, especially including those usually referred to as linear integrated circuits, since the fabrication techniques for such circuits readily produce elements which may be modeled as distributed, as well as the more conventional lumped and active ones.

  16. Study of propellant dynamics in a shuttle type launch vehicle

    NASA Technical Reports Server (NTRS)

    Jones, C. E.; Feng, G. C.

    1972-01-01

    A method and an associated digital computer program for evaluating the vibrational characteristics of large liquid-filled rigid wall tanks of general shape are presented. A solution procedure was developed in which slosh modes and frequencies are computed for systems mathematically modeled as assemblages of liquid finite elements. To retain sparsity in the assembled system mass and stiffness matrices, a compressible liquid element formulation was incorporated in the program. The approach taken in the liquid finite element formulation is compatible with triangular and quadrilateral structural finite elements so that the analysis of liquid motion can be coupled with flexible tank wall motion at some future time. The liquid element repertoire developed during the course of this study consists of a two-dimensional triangular element and a three-dimensional tetrahedral element.
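Once the system mass and stiffness matrices are assembled, slosh modes and frequencies follow from the generalized eigenproblem K x = w^2 M x. Below is a generic modal-analysis sketch with a lumped (diagonal) mass matrix and a hypothetical two-DOF system, not the report's liquid-element formulation:

```python
import numpy as np

def modal_frequencies(K, M_diag):
    """Natural frequencies (rad/s) from the eigenproblem K x = w^2 M x.
    With a lumped (diagonal) mass matrix the problem reduces to a standard
    symmetric eigenproblem on D K D, where D = diag(1/sqrt(M))."""
    d = 1.0 / np.sqrt(M_diag)
    A = (K * d).T * d                 # D K D (K symmetric)
    w2 = np.linalg.eigvalsh(A)        # eigenvalues in ascending order
    return np.sqrt(np.clip(w2, 0.0, None))

# Hypothetical 2-DOF spring chain (stiffness 100 per spring, unit masses).
K = np.array([[200.0, -100.0],
              [-100.0, 100.0]])
freqs = modal_frequencies(K, np.array([1.0, 1.0]))
print(freqs)
```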

  17. A three-dimensional FEM-DEM technique for predicting the evolution of fracture in geomaterials and concrete

    NASA Astrophysics Data System (ADS)

    Zárate, Francisco; Cornejo, Alejandro; Oñate, Eugenio

    2018-07-01

    This paper extends to three dimensions (3D), the computational technique developed by the authors in 2D for predicting the onset and evolution of fracture in a finite element mesh in a simple manner based on combining the finite element method and the discrete element method (DEM) approach (Zárate and Oñate in Comput Part Mech 2(3):301-314, 2015). Once a crack is detected at an element edge, discrete elements are generated at the adjacent element vertexes and a simple DEM mechanism is considered in order to follow the evolution of the crack. The combination of the DEM with simple four-noded linear tetrahedron elements correctly captures the onset of fracture and its evolution, as shown in several 3D examples of application.

  18. The influence of the mechanical behaviour of the middle ear ligaments: a finite element analysis.

    PubMed

    Gentil, F; Parente, M; Martins, P; Garbe, C; Jorge, R N; Ferreira, A; Tavares, João Manuel R S

    2011-01-01

    The interest in computer modelling of biomechanical systems, mainly using the finite element method (FEM), has been increasing, in particular for analysis of the mechanical behaviour of the human ear. In this work, a finite element model of the middle ear was developed to study the dynamic structural response to harmonic vibrations for distinct sound pressure levels applied to the eardrum. The model includes the different ligaments and muscle tendons, with elastic and hyperelastic behaviour for these supportive structures. Additionally, the nonlinear behaviour of the ligaments and muscle tendons was investigated; the connections between the ossicles were modelled using a contact formulation. Harmonic responses of the umbo and stapes footplate displacements, between 100 Hz and 10 kHz, were obtained and compared with previously published work. The stress state of the ligaments (superior, lateral and anterior of the malleus; superior and posterior of the incus) was analysed, with a focus on the balance of the supportive structures of the middle ear, as the ligaments link the ossicular chain to the walls of the tympanic cavity. The results obtained in this work highlight the importance of using hyperelastic models to simulate the mechanical behaviour of the ligaments and tendons.

  19. Compression Strength of Composite Primary Structural Components

    NASA Technical Reports Server (NTRS)

    Johnson, Eric R.

    1998-01-01

    Research conducted under NASA Grant NAG-1-537 focussed on the response and failure of advanced composite material structures for application to aircraft. Both experimental and analytical methods were utilized to study the fundamental mechanics of the response and failure of selected structural components subjected to quasi-static loads. Most of the structural components studied were thin-walled elements subject to compression, such that they exhibited buckling and postbuckling responses prior to catastrophic failure. Consequently, the analyses were geometrically nonlinear. Structural components studied were dropped-ply laminated plates, stiffener crippling, pressure pillowing of orthogonally stiffened cylindrical shells, axisymmetric response of pressure domes, and the static crush of semi-circular frames. Failure of these components motivated analytical studies on an interlaminar stress postprocessor for plate and shell finite element computer codes, and global/local modeling strategies in finite element modeling. These activities are summarized in the following section. References to literature published under the grant are listed on pages 5 to 10 by a letter followed by a number under the categories of journal publications, conference publications, presentations, and reports. These references are indicated in the text by their letter and number as a superscript.

  20. Nonlinear modelling of high-speed catenary based on analytical expressions of cable and truss elements

    NASA Astrophysics Data System (ADS)

    Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing

    2015-10-01

    Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. The calculation procedure for solving the initial equilibrium state is based on the Newton-Raphson iteration method. The deformed configuration of the catenary system as well as the initial length of each wire can be calculated. The accuracy and validity of the computed initial equilibrium state are verified by comparison with the separate model method, the absolute nodal coordinate formulation and other methods in the previous literature. Then, the proposed model is combined with a lumped pantograph model and a dynamic simulation procedure is proposed. The accuracy is guaranteed by multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, the results of finite element method software and a SIEMENS simulation report, respectively. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed using the proposed model.
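The Newton-Raphson solve for an initial equilibrium state can be illustrated on a one-unknown catenary problem: find the horizontal tension that realises a prescribed sag. The load and span values below are hypothetical, and the derivative is taken numerically:

```python
import math

def catenary_sag(H, w, span):
    """Mid-span sag of a cable with weight w per unit length and horizontal
    tension H, hung between level supports a distance 'span' apart (classic
    catenary result)."""
    return (H / w) * (math.cosh(w * span / (2.0 * H)) - 1.0)

def solve_tension(w, span, target_sag, H0=1000.0, tol=1e-8, max_iter=50):
    """Newton-Raphson iteration (central-difference derivative) for the
    horizontal tension that gives a prescribed sag -- the same
    solve-for-the-initial-equilibrium idea, reduced to one unknown."""
    H = H0
    for _ in range(max_iter):
        r = catenary_sag(H, w, span) - target_sag
        if abs(r) < tol:
            return H
        dH = H * 1e-7
        drdH = (catenary_sag(H + dH, w, span) - catenary_sag(H - dH, w, span)) / (2.0 * dH)
        H -= r / drdH
    raise RuntimeError("Newton-Raphson did not converge")

# Hypothetical contact-wire data: w = 10 N/m, 60 m span, 0.5 m reserved sag.
H = solve_tension(w=10.0, span=60.0, target_sag=0.5)
print(H, catenary_sag(H, 10.0, 60.0))
```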

  1. Air slab-correction for Γ-ray attenuation measurements

    NASA Astrophysics Data System (ADS)

    Mann, Kulwinder Singh

    2017-12-01

    Gamma (γ)-ray shielding behaviour (GSB) of a material can be ascertained from its linear attenuation coefficient (μ, cm-1). Narrow-beam transmission geometry is required for μ-measurement. In such measurements, a thin slab of the material must be inserted between the point-isotropic γ-ray source and the detector assembly. Accurate measurement requires that the sample's optical thickness (OT) remain below 0.5 mean free path (mfp). It is sometimes very difficult to produce a thin slab of the sample (absorber); on the other hand, for a thick absorber, i.e. OT > 0.5 mfp, the influence of the air displaced by it cannot be ignored during μ-measurements. Thus, for a thick sample, a correction factor has been suggested that compensates for the air present in the transmission geometry. The correction factor has been named the air slab-correction (ASC). Six samples of low-Z engineering materials (cement-black, clay, red-mud, lime-stone, cement-white and plaster-of-paris) were selected for investigating the effect of ASC on μ-measurements at three γ-ray energies (661.66, 1173.24, 1332.50 keV). The measurements were made using point-isotropic γ-ray sources (Cs-137 and Co-60), a NaI(Tl) detector and a multi-channel analyser coupled to a personal computer. Theoretical values of μ were computed using the GRIC2-toolkit (a standardized computer programme). Elemental compositions of the samples were measured with a Wavelength Dispersive X-ray Fluorescence (WDXRF) analyser. Inter-comparison of measured and computed μ-values suggested that the application of ASC helps in precise μ-measurement for thick samples of low-Z materials. Thus, this hitherto widely ignored ASC factor is recommended for use in similar γ-ray measurements.
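The underlying μ-measurement is Beer-Lambert attenuation. The sketch below also shows an additive air-slab correction; both the additive form and the mu_air value are illustrative assumptions, not the paper's exact ASC formulation:

```python
import math

def mu_measured(I0, I, thickness_cm):
    """Linear attenuation coefficient from a narrow-beam transmission
    measurement: mu = ln(I0/I) / t (Beer-Lambert), in cm^-1."""
    return math.log(I0 / I) / thickness_cm

def mu_air_slab_corrected(I0, I, thickness_cm, mu_air=1.0e-4):
    """Air-slab correction sketch: the sample displaces an equal thickness of
    air that attenuated the 'empty' reference beam I0, so the air's small
    contribution is added back. Additive form and mu_air are assumptions."""
    return mu_measured(I0, I, thickness_cm) + mu_air

# Hypothetical counts: 10000 without the sample, 4000 through a 6 cm slab.
mu = mu_measured(I0=10000.0, I=4000.0, thickness_cm=6.0)
print(mu, mu_air_slab_corrected(10000.0, 4000.0, 6.0))
```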

  2. Memory-Efficient Analysis of Dense Functional Connectomes.

    PubMed

    Loewe, Kristian; Donohue, Sarah E; Schoenfeld, Mircea A; Kruse, Rudolf; Borgelt, Christian

    2016-01-01

    The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. 
The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download.
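The on-demand idea can be sketched as a matrix-like object that stores only the normalised time series and evaluates Pearson correlations when indexed (a Python sketch of the concept; the paper's implementation is Matlab-based):

```python
import numpy as np

class OnDemandCorrMatrix:
    """Dense-connectome-style matrix that keeps only the z-scored time series
    in memory and computes correlation entries on demand, instead of holding
    the full n x n matrix (a sketch of the object-based representation)."""
    def __init__(self, ts):                      # ts: (n_nodes, n_timepoints)
        ts = np.asarray(ts, float)
        m = ts - ts.mean(axis=1, keepdims=True)
        self.z = m / np.linalg.norm(m, axis=1, keepdims=True)
    def __getitem__(self, ij):
        i, j = ij
        return float(self.z[i] @ self.z[j])      # Pearson r, computed when asked
    def row(self, i):
        return self.z @ self.z[i]                # one row at a time keeps memory flat

rng = np.random.default_rng(0)
ts = rng.standard_normal((5, 100))
C = OnDemandCorrMatrix(ts)
print(C[0, 0], C[1, 2])
```

Memory scales with the time-series data (n_nodes x n_timepoints) rather than with the n_nodes x n_nodes matrix, which is the point of the representation.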

  3. Memory-Efficient Analysis of Dense Functional Connectomes

    PubMed Central

    Loewe, Kristian; Donohue, Sarah E.; Schoenfeld, Mircea A.; Kruse, Rudolf; Borgelt, Christian

    2016-01-01

    The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. 
The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download. PMID:27965565
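The on-demand strategy described above can be sketched as a small matrix-like object: only the z-scored time series are kept in memory, and each connectome entry is computed as a dot product when it is indexed. This is an illustrative Python sketch, not code from the released package; the class name and interface are assumptions.

```python
import numpy as np

class OnDemandConnectome:
    """Matrix-like view of a dense functional connectome.

    Only the z-scored time series (V voxels x T time points) are stored,
    so memory is O(V*T) rather than O(V^2); each Pearson correlation is
    computed as a dot product when indexed.
    """

    def __init__(self, ts):
        ts = np.asarray(ts, dtype=float)
        # Center and normalize each voxel's time series once, so that a
        # dot product of two rows equals their Pearson correlation.
        ts = ts - ts.mean(axis=1, keepdims=True)
        self._z = ts / np.linalg.norm(ts, axis=1, keepdims=True)
        self.shape = (ts.shape[0], ts.shape[0])

    def __getitem__(self, idx):
        i, j = idx
        return float(self._z[i] @ self._z[j])
```

Indexing `conn[i, j]` then returns the same value as the corresponding entry of `np.corrcoef(ts)`, without ever materializing the V-by-V matrix.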

  4. Extraction of information from major element chemical analyses of lunar basalts

    NASA Technical Reports Server (NTRS)

    Butler, J. C.

    1985-01-01

    Major element chemical analyses often form the framework within which similarities and differences of analyzed specimens are noted and used to propose or devise models. When percentages are formed, the ratios of pairs of components are preserved, whereas many familiar statistical and geometrical descriptors are likely to exhibit major changes. This ratio-preserving property forms the basis for a proposed framework. An analysis of compositional variability within a data set of 42 major element analyses of lunar reference samples was selected to investigate this proposal.
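The ratio-preserving property is easy to verify: closing a composition to 100% rescales every component of a specimen by the same factor, so ratios of component pairs within a specimen are unchanged. A toy sketch with made-up oxide values (not the lunar reference-sample data):

```python
# Closing a composition to 100% rescales every component by the same
# factor, so ratios of pairs of components are preserved.
raw = {"SiO2": 45.2, "Al2O3": 10.1, "FeO": 18.9, "MgO": 9.4}  # illustrative values

total = sum(raw.values())
pct = {k: 100.0 * v / total for k, v in raw.items()}

# The SiO2/FeO ratio is identical before and after closure.
assert abs(raw["SiO2"] / raw["FeO"] - pct["SiO2"] / pct["FeO"]) < 1e-12
```

Statistics computed across specimens (variances, correlations), by contrast, generally do change under closure, which is why the ratio-based framework is proposed.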

  5. Grid Computing at GSI for ALICE and FAIR - present and future

    NASA Astrophysics Data System (ADS)

    Schwarz, Kilian; Uhlig, Florian; Karabowicz, Radoslaw; Montiel-Gonzalez, Almudena; Zynovyev, Mykhaylo; Preuss, Carsten

    2012-12-01

    The future FAIR experiments CBM and PANDA have computing requirements that fall in a category that cannot currently be satisfied by a single computing centre. A larger, distributed computing infrastructure is needed to cope with the amount of data to be simulated and analysed. Since 2002, GSI has operated a tier2 centre for ALICE@CERN. The central component of the GSI computing facility, and hence the core of the ALICE tier2 centre, is an LSF/SGE batch farm, currently split into three subclusters with a total of 15000 CPU cores shared by the participating experiments, and accessible both locally and soon also completely via Grid. In terms of data storage, a 5.5 PB Lustre file system, directly accessible from all worker nodes, is maintained, as well as a 300 TB xrootd-based Grid storage element. Based on this existing expertise, and utilising ALICE's middleware ‘AliEn’, the Grid infrastructure for PANDA and CBM is being built. Besides a tier0 centre at GSI, the computing Grids of the two FAIR collaborations now encompass more than 17 sites in 11 countries and are constantly expanding. The operation of the distributed FAIR computing infrastructure benefits significantly from the experience gained with the ALICE tier2 centre. A close collaboration between ALICE Offline and FAIR provides mutual advantages. The employment of a common Grid middleware as well as compatible simulation and analysis software frameworks ensure significant synergy effects.

  6. Variable-Complexity Multidisciplinary Optimization on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.

    1998-01-01

    This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques which exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) aircraft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant were: (1) development of parallel multipoint approximation methods for the aerodynamic design of the HSCT; (2) use of parallel multipoint approximation methods for structural optimization of the HSCT; and (3) mathematical and algorithmic development, including support in the integration of parallel computation for items (1) and (2). These tasks have been accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables using hundreds of expensive Euler analyses together with thousands of inexpensive linear theory simulations. We have thereby demonstrated the application of CFD to a large aerodynamic design problem. For predicting structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations have been carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state-of-the-art models of a complex aircraft configuration.
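The response surface methodology mentioned above rests on fitting an inexpensive polynomial surrogate to a modest number of expensive analyses, which can then be evaluated thousands of times during optimization. A minimal sketch of a quadratic response surface in two design variables fitted by least squares; the function name and sample values are illustrative, not HSCT results.

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Fit a full quadratic response surface in two design variables:
    y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x1*x2 + b5*x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Synthetic "expensive analysis" samples (illustrative data only).
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(30, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + 3.0 * X[:, 0] ** 2
coef = fit_quadratic_rs(X, y)
```

Once fitted, evaluating the surrogate costs a handful of multiplications, so the optimizer can afford thousands of evaluations in place of the expensive analyses.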

  7. 47 CFR 69.307 - General support facilities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...

  8. 47 CFR 69.307 - General support facilities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...

  9. 47 CFR 69.307 - General support facilities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...

  10. 47 CFR 69.307 - General support facilities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...

  11. 47 CFR 69.307 - General support facilities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...

  12. ORNL Pre-test Analyses of A Large-scale Experiment in STYLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Paul T; Yin, Shengjun; Klasky, Hilda B

    Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large-scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and life-time management and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and applied to evaluate several experiment designs, taking into account the capabilities of the test facility while satisfying the test objectives. These advanced fracture mechanics models will then be utilized to simulate the crack growth in the large-scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules that are installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to those results generated using the deterministic 3D nonlinear finite-element modeling approach. The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite-element solutions and to also assess the level of confidence that can be placed in the best-estimate finite-element solutions.

  13. Comparisons of Elemental Profiles of the Western Spruce Budworm Reared on Three Host Foliages and Artificial Medium

    Treesearch

    John A. McLean; P. Laks; T.L. Shore

    1983-01-01

    Western spruce budworm were reared on three host foliages and artificial medium. Trace element analyses showed large differences in elemental concentrations between food sources and only minor differences between insect life stages. Discriminant analyses were carried out to test the distinctiveness of adult chemoprints from each rearing regime. Fe, Cu, and Zn were...

  14. Computational mechanics - Advances and trends; Proceedings of the Session - Future directions of Computational Mechanics of the ASME Winter Annual Meeting, Anaheim, CA, Dec. 7-12, 1986

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Editor)

    1986-01-01

    The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.

  15. Cost Considerations in Nonlinear Finite-Element Computing

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.

    1985-01-01

    Conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. The paper evaluates the computational efficiency of different computer architectures in terms of relative cost and computing time.

  16. Fracture behaviors of ceramic tissue scaffolds for load bearing applications

    NASA Astrophysics Data System (ADS)

    Entezari, Ali; Roohani-Esfahani, Seyed-Iman; Zhang, Zhongpu; Zreiqat, Hala; Dunstan, Colin R.; Li, Qing

    2016-07-01

    Healing large bone defects, especially in weight-bearing locations, remains a challenge using available synthetic ceramic scaffolds. Manufactured as a scaffold using 3D printing technology, Sr-HT-Gahnite at high porosity (66%) had demonstrated significantly improved compressive strength (53 ± 9 MPa) and toughness. Nevertheless, the main concern with ceramic scaffolds in general remains their inherent brittleness and low fracture strength in load-bearing applications. Therefore, it is crucial to establish a robust numerical framework for predicting the fracture strengths of such scaffolds. Since crack initiation and propagation play a critical role in the fracture strength of ceramic structures, we employed the extended finite element method (XFEM) to predict the fracture behaviors of Sr-HT-Gahnite scaffolds. The correlation between experimental and numerical results proved the superiority of XFEM over conventional FEM for quantifying the fracture strength of scaffolds. In addition to computer-aided design (CAD) based modeling analyses, XFEM was conducted on micro-computed tomography (μCT) based models of fabricated scaffolds, which took into account the geometric variations induced by the fabrication process. Fracture strengths and crack paths predicted by the μCT-based XFEM analyses correlated well with the relevant experimental results. The study provided an effective means for the prediction of the fracture strength of porous ceramic structures, thereby facilitating the design optimization of scaffolds.

  17. Predicted Aerodynamic Characteristics of a NACA 0015 Airfoil Having a 25% Integral-Type Trailing Edge Flap

    NASA Technical Reports Server (NTRS)

    Hassan, Ahmed

    1999-01-01

    Using the two-dimensional ARC2D Navier-Stokes flow solver, analyses were conducted to predict the sectional aerodynamic characteristics of the flapped NACA-0015 airfoil section. To facilitate the analyses and the generation of the computational grids, the airfoil with the deflected trailing-edge flap was treated as a single-element airfoil with no allowance for a gap between the flap's leading edge and the base of the forward portion of the airfoil. Generation of the O-type computational grids was accomplished using the HYGRID hyperbolic grid generation program. Results were obtained for a wide range of Mach numbers, angles of attack, and flap deflections. The predicted sectional lift, drag, and pitching moment values for the airfoil were then cast in tabular format (C81) to be used in lifting-line helicopter rotor aerodynamic performance calculations. Similar tables were also generated for the flap. Mathematical expressions providing the variation of the sectional lift and pitching moment coefficients for the airfoil and for the flap as a function of flap chord length and flap deflection angle were derived within the context of thin airfoil theory. The airfoil's sectional drag coefficients were derived using the ARC2D drag predictions for equivalent two-dimensional flow conditions.

  18. Categorizing words using 'frequent frames': what cross-linguistic analyses reveal about distributional acquisition strategies.

    PubMed

    Chemla, Emmanuel; Mintz, Toben H; Bernal, Savita; Christophe, Anne

    2009-04-01

    Mintz (2003) described a distributional environment called a frame, defined as the co-occurrence of two context words with one intervening target word. Analyses of English child-directed speech showed that words that fell within any frequently occurring frame consistently belonged to the same grammatical category (e.g. noun, verb, adjective, etc.). In this paper, we first generalize this result to French, a language in which the function word system allows patterns that are potentially detrimental to a frame-based analysis procedure. Second, we show that the discontinuity of the chosen environments (i.e. the fact that target words are framed by the context words) is crucial for the mechanism to be efficient. This property might be relevant for any computational approach to grammatical categorization. Finally, we investigate a recursive application of the procedure and observe that the categorization is paradoxically worse when context elements are categories rather than actual lexical items. Item-specificity is thus also a core computational principle for this type of algorithm. Our analysis, along with results from behavioural studies (Gómez, 2002; Gómez and Maye, 2005; Mintz, 2006), provides strong support for frames as a basis for the acquisition of grammatical categories by infants. Discontinuity and item-specificity appear to be crucial features.
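A frame in Mintz's sense is simply the pair of words surrounding a target; counting frames over a corpus and grouping the targets of the most frequent ones is straightforward. A toy Python sketch of the idea (function name and corpus are illustrative, not the authors' analysis code):

```python
from collections import defaultdict

def frequent_frames(tokens, top_n=2):
    """Group target words by their frame (preceding word, following word)
    and return the target-word sets of the top_n most frequent frames."""
    frames = defaultdict(list)
    # A frame is two context words with exactly one intervening target.
    for a, x, b in zip(tokens, tokens[1:], tokens[2:]):
        frames[(a, b)].append(x)
    ranked = sorted(frames.items(), key=lambda kv: len(kv[1]), reverse=True)
    return {f"{a}_x_{b}": sorted(set(xs)) for (a, b), xs in ranked[:top_n]}
```

On a toy corpus such as `"you eat it you see it you eat them we see them"`, the frame `you_x_it` groups the verbs `eat` and `see` together, illustrating how frequent frames tend to collect words of one grammatical category.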

  19. Theoretical and software considerations for general dynamic analysis using multilevel substructured models

    NASA Technical Reports Server (NTRS)

    Schmidt, R. J.; Dodds, R. H., Jr.

    1985-01-01

    The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to the reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general-purpose software system are emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis procedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction techniques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling give economical yet accurate analyses of complex structural systems.
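The fixed-interface method referred to above is commonly implemented as Craig-Bampton reduction: boundary degrees of freedom are retained, and the interior is represented by static constraint modes plus a truncated set of fixed-interface normal modes. A minimal NumPy sketch of that standard formulation (an assumption about the method's textbook form, not the software system described in the record):

```python
import numpy as np

def craig_bampton(K, M, boundary, n_modes):
    """Fixed-interface (Craig-Bampton) reduction of (K, M).

    boundary: indices of retained boundary DOFs; n_modes: number of
    fixed-interface normal modes kept for the interior.
    Returns the reduced (Kr, Mr) and the transformation T with u = T q.
    """
    n = K.shape[0]
    b = np.asarray(boundary, dtype=int)
    i = np.setdiff1d(np.arange(n), b)
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Mii = M[np.ix_(i, i)]
    # Static constraint modes: interior response to unit boundary motion.
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes: Kii @ phi = w^2 * Mii @ phi, solved
    # via a Cholesky transformation to a standard symmetric eigenproblem.
    L = np.linalg.cholesky(Mii)
    Linv = np.linalg.inv(L)
    _, q = np.linalg.eigh(Linv @ Kii @ Linv.T)
    Phi = Linv.T @ q[:, :n_modes]
    # Assemble T: constraint-mode columns, then normal-mode columns.
    T = np.zeros((n, len(b) + n_modes))
    T[np.ix_(i, np.arange(len(b)))] = Psi
    T[np.ix_(b, np.arange(len(b)))] = np.eye(len(b))
    T[np.ix_(i, len(b) + np.arange(n_modes))] = Phi
    return T.T @ K @ T, T.T @ M @ T, T
```

When all interior modes are kept the reduction is an exact change of basis; truncating `n_modes` is what buys the computational savings at a controlled loss of high-frequency accuracy.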

  20. Delayed Collapse of Wooden Folding Stairs

    NASA Astrophysics Data System (ADS)

    Krentowski, Janusz; Chyzy, Tadeusz

    2017-10-01

    During operation of folding stairs, a fastener joining the ladder hanger with the frame was torn off, and a person using the stairs sustained serious injury. Similar accidents were observed in several dozen other locations. As a result of inspections, some threaded parts of the screws were found in the gaps between the wooden elements of the stairs’ flaps. In this construction, a hatch made of wooden strips is attached to an external frame by means of metal hangers. Laboratory strength tests were conducted on three samples made of wooden elements identical to the ones used in the damaged stairs. Due to the complex load distribution mechanism acting on the base of the structure, a three-dimensional FEM model was created, and original software was used for the calculations. Five computational model variants were considered. As a result of the numerical analyses, it was unquestionably shown that faulty connections were the cause of the destruction of the stairs. The weakest link in the load transmission chain was found to be the screws connecting the hatch board with the hangers.

  1. Domain- and nucleotide-specific Rev response element regulation of feline immunodeficiency virus production

    PubMed Central

    Na, Hong; Huisman, Willem; Ellestad, Kristofor K.; Phillips, Tom R.; Power, Christopher

    2010-01-01

    Computational analysis of feline immunodeficiency virus (FIV) RNA sequences indicated that common FIV strains contain a rev response element (RRE) defined by a long unbranched hairpin with 6 stem-loop sub-domains, termed stem-loop A (SLA). To examine the role of the RNA secondary structure of the RRE, mutational analyses were performed in both an infectious FIV molecular clone and a FIV CAT-RRE reporter system. These studies disclosed that the stems within SLA (SA1, 2, 3, 4, and 5) of the RRE were critical, but SA6 was not essential, for FIV replication and CAT expression. These studies also revealed that the secondary structure, rather than an antisense protein (ASP), mediates virus expression and replication in vitro. In addition, a single synonymous mutation within the FIV-RRE, SA3/45, reduced viral reverse transcriptase activity and p24 expression after transfection, and also showed a marked reduction in viral expression and production following infection. PMID:20570310

  2. Space Station needs, attributes and architectural options. Volume 2, book 2, part 2, Task 2: Information management system

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Missions to be performed, station operations and functions to be carried out, and technologies anticipated during the time frame of the space station were examined in order to determine the scope of the overall information management system for the space station. This system comprises: (1) the data management system which includes onboard computer related hardware and software required to assume and exercise control of all activities performed on the station; (2) the communication system for both internal and external communications; and (3) the ground segment. Techniques used to examine the information system from a functional and performance point of view are described as well as the analyses performed to derive the architecture of both the onboard data management system and the system for internal and external communications. These architectures are then used to generate a conceptual design of the onboard elements in order to determine the physical parameters (size/weight/power) of the hardware and software. The ground segment elements are summarized.

  3. Vibration and flutter characteristics of the SR7L large-scale propfan

    NASA Technical Reports Server (NTRS)

    August, Richard; Kaza, Krishna Rao V.

    1988-01-01

    An investigation of the vibration characteristics and aeroelastic stability of the SR7L Large-Scale Advanced Propfan was performed using a finite element blade model and an improved aeroelasticity code. Analyses were conducted for different blade pitch angles, blade support conditions, numbers of blades, rotational speeds, and freestream Mach numbers. A finite element model of the blade was used to determine the blade's vibration behavior and sensitivity to support stiffness. The calculated frequencies and mode shapes obtained with this model agreed well with the published experimental data. A computer code recently developed at NASA Lewis Research Center and based on three-dimensional, unsteady, lifting surface aerodynamic theory was used for the aeroelastic analysis to examine the blade's stability at a cruise condition of Mach 0.8 at 1700 rpm. The results showed that the blade is stable for that operating point. However, a flutter condition was predicted if the cruise Mach number was increased to 0.9.

  4. Methodologies for optimal resource allocation to the national space program and new space utilizations. Volume 1: Technical description

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The optimal allocation of resources to the national space program over an extended time period requires the solution of a large combinatorial problem in which the program elements are interdependent. The computer model uses an accelerated search technique to solve this problem. The model contains a large number of options selectable by the user to provide flexible input and a broad range of output for use in sensitivity analyses of all entering elements. Examples of these options are budget smoothing under varied appropriation levels, entry of inflation and discount effects, and probabilistic output which provides quantified degrees of certainty that program costs will remain within planned budget. Criteria and related analytic procedures were established for identifying potential new space program directions. Used in combination with the optimal resource allocation model, new space applications can be analyzed in realistic perspective, including the advantage gained from existing space program plant and on-going programs such as the space transportation system.

  5. Verification of the numerical model of insert-type joint of scaffolding in relation to experimental research

    NASA Astrophysics Data System (ADS)

    Pieńko, Michał; Błazik-Borowa, Ewa

    2018-01-01

    This paper presents the problem of comparing the results of computer simulations with the results of laboratory tests. The subject of the study was the insert-type joint of scaffolding loaded with a bending moment. The research was carried out on real elements of the scaffolding. Due to the complexity of the connection, different friction coefficients and depths of wedge insertion were taken into account in the analysis. The aim of conducting the series of analyses was to determine the sensitivity of the model to these characteristics. Since the laboratory tests were carried out on real samples, the surfaces involved in load transfer were not specially prepared. This approach caused many problems with clearly defining the behavior of individual node elements under load. The analysis consists of two stages: the stage in which the connection is formed (the wedge is inserted into the rosette), and the loading stage (the node is loaded by the bending moment).

  6. Space Station needs, attributes and architectural options. Volume 2, book 2, part 2, Task 2: Information management system

    NASA Astrophysics Data System (ADS)

    1983-04-01

    Missions to be performed, station operations and functions to be carried out, and technologies anticipated during the time frame of the space station were examined in order to determine the scope of the overall information management system for the space station. This system comprises: (1) the data management system which includes onboard computer related hardware and software required to assume and exercise control of all activities performed on the station; (2) the communication system for both internal and external communications; and (3) the ground segment. Techniques used to examine the information system from a functional and performance point of view are described as well as the analyses performed to derive the architecture of both the onboard data management system and the system for internal and external communications. These architectures are then used to generate a conceptual design of the onboard elements in order to determine the physical parameters (size/weight/power) of the hardware and software. The ground segment elements are summarized.

  7. Computer Program for Steady Transonic Flow over Thin Airfoils by Finite Elements

    DTIC Science & Technology

    1975-10-01

    Computer Program for Steady Transonic Flow over Thin Airfoils by Finite Elements. The report was prepared by personnel in the Computational Mechanics Section of the Lockheed Missiles & Space Company, Inc., Huntsville Research & Engineering Center, Huntsville, Alabama.

  8. Computational Modeling For The Transitional Flow Over A Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Liu, Feng-Jun; Rumsey, Chris L. (Technical Monitor)

    2000-01-01

    The transitional flow over a multi-element airfoil in a landing configuration is computed using a two-equation transition model. The transition model is predictive in the sense that the transition onset is a result of the calculation and no prior knowledge of the transition location is required. The computations were performed using the INS2D Navier-Stokes code. Overset grids are used for the three-element airfoil. The airfoil operating conditions are varied over a range of angles of attack and for two different Reynolds numbers of 5 million and 9 million. The computed results are compared with experimental data for the surface pressure, skin friction, transition onset location, and velocity magnitude. In general, the comparison shows good agreement with the experimental data.

  9. Development of Integrated Programs for Aerospace-vehicle design (IPAD): Integrated information processing requirements

    NASA Technical Reports Server (NTRS)

    Southall, J. W.

    1979-01-01

    The engineering-specified requirements for integrated information processing by means of the Integrated Programs for Aerospace-Vehicle Design (IPAD) system are presented. A data model is described and is based on the design process of a typical aerospace vehicle. General data management requirements are specified for data storage, retrieval, generation, communication, and maintenance. Information management requirements are specified for a two-component data model. In the general portion, data sets are managed as entities, and in the specific portion, data elements and the relationships between elements are managed by the system, allowing user access to individual elements for the purpose of query. Computer program management requirements are specified for support of a computer program library, control of computer programs, and installation of computer programs into IPAD.

  10. Determination of apparent coupling factors for adhesive bonded acrylic plates using SEAL approach

    NASA Astrophysics Data System (ADS)

    Pankaj, Achuthan. C.; Shivaprasad, M. V.; Murigendrappa, S. M.

    2018-04-01

    Apparent coupling loss factors (CLF) and velocity responses have been computed for two lap-joined adhesive-bonded plates using a finite element and experimental statistical energy analysis (SEA)-like approach. A finite element model of the plates was created using the ANSYS software. The statistical energy parameters were computed using the velocity responses obtained from a harmonic forced-excitation analysis. Experiments were carried out for two different cases of adhesive-bonded joints, and the results were compared with the apparent coupling factors and velocity responses obtained from finite element analysis. The results signify the importance of modeling adhesive-bonded joints in the computation of the apparent coupling factors and their further use in computing energies and velocity responses with the SEA-like approach.

  11. Computational strategies for tire monitoring and analysis

    NASA Technical Reports Server (NTRS)

    Danielson, Kent T.; Noor, Ahmed K.; Green, James S.

    1995-01-01

    Computational strategies are presented for the modeling and analysis of tires in contact with pavement. A procedure is introduced for simple and accurate determination of tire cross-sectional geometric characteristics from a digitally scanned image. Three new strategies for reducing the computational effort in the finite element solution of tire-pavement contact are also presented. These strategies take advantage of the observation that footprint loads do not usually stimulate a significant tire response away from the pavement contact region. The finite element strategies differ in their level of approximation and required amount of computer resources. The effectiveness of the strategies is demonstrated by numerical examples of frictionless and frictional contact of the space shuttle Orbiter nose-gear tire. Both an in-house research code and a commercial finite element code are used in the numerical studies.

  12. A multidimensional finite element method for CFD

    NASA Technical Reports Server (NTRS)

    Pepper, Darrell W.; Humphrey, Joseph W.

    1991-01-01

    A finite element method is used to solve the equations of motion for 2- and 3-D fluid flow. The time-dependent equations are solved explicitly using quadrilateral (2-D) and hexahedral (3-D) elements, mass lumping, and reduced integration. A Petrov-Galerkin technique is applied to the advection terms. The method requires a minimum of computational storage, executes quickly, and is scalable for execution on computer systems ranging from PCs to supercomputers.
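The mass lumping mentioned above is often implemented by the row-sum technique, which replaces the consistent element mass matrix with a diagonal matrix while preserving total element mass. A minimal sketch for a 2-node linear bar element (illustrative textbook matrices, not the paper's code):

```python
import numpy as np

def lump_row_sum(M):
    """Row-sum mass lumping: replace the consistent element mass matrix
    with a diagonal matrix of its row sums, preserving total mass."""
    return np.diag(M.sum(axis=1))

# Consistent mass matrix of a 2-node linear bar element with total mass m.
m = 3.0
Mc = m / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
Ml = lump_row_sum(Mc)  # diag(m/2, m/2)
```

A diagonal mass matrix is what makes the explicit time stepping cheap: each nodal acceleration is obtained by a scalar division rather than a linear solve.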

  13. A COMPARISON OF TRANSIENT INFINITE ELEMENTS AND TRANSIENT KIRCHHOFF INTEGRAL METHODS FOR FAR FIELD ACOUSTIC ANALYSIS

    DOE PAGES

    Walsh, Timothy F.; Jones, Andrea; Bhardwaj, Manoj; ...

    2013-04-01

    Finite element analysis of transient acoustic phenomena on unbounded exterior domains is very common in engineering analysis. In these problems there is a common need to compute the acoustic pressure at points outside of the acoustic mesh, since meshing to points of interest is impractical in many scenarios. In aeroacoustic calculations, for example, the acoustic pressure may be required at tens or hundreds of meters from the structure. In these cases, a method is needed for post-processing the acoustic results to compute the response at far-field points. In this paper, we compare two methods for computing far-field acoustic pressures, one derived directly from the infinite element solution, and the other from the transient version of the Kirchhoff integral. Here, we show that the infinite element approach alleviates the large storage requirements that are typical of Kirchhoff integral and related procedures, and also does not suffer from the loss of accuracy that is an inherent part of computing numerical derivatives in the Kirchhoff integral. In order to further speed up and streamline the process of computing the acoustic response at points outside of the mesh, we also address the nonlinear iterative procedure needed for locating the parametric coordinates of far-field points within the host infinite element, the parallelization of the overall process, linear solver requirements, and system stability considerations.

  14. Path Toward a Unified Geometry for Radiation Transport

    NASA Astrophysics Data System (ADS)

    Lee, Kerry

    The Direct Accelerated Geometry for Radiation Analysis and Design (DAGRAD) element of the RadWorks Project under Advanced Exploration Systems (AES) within the Space Technology Mission Directorate (STMD) of NASA will enable new designs and concepts of operation for radiation risk assessment, mitigation and protection. This element is designed to produce a solution that will allow NASA to calculate the transport of space radiation through complex CAD models using the state-of-the-art analytic and Monte Carlo radiation transport codes. Due to the inherent hazard of astronaut and spacecraft exposure to ionizing radiation in low-Earth orbit (LEO) or in deep space, risk analyses must be performed for all crew vehicles and habitats. Incorporating these analyses into the design process can minimize the mass needed solely for radiation protection. Transport of the radiation fields as they pass through shielding and body materials can be simulated using Monte Carlo techniques or described by the Boltzmann equation, which is obtained by balancing changes in particle fluxes as they traverse a small volume of material with the gains and losses caused by atomic and nuclear collisions. Deterministic codes that solve the Boltzmann transport equation, such as HZETRN (high charge and energy transport code developed by NASA LaRC), are generally computationally faster than Monte Carlo codes such as FLUKA, GEANT4, MCNP(X) or PHITS; however, they are currently limited to transport in one dimension, which poorly represents the secondary light ion and neutron radiation fields. NASA currently uses HZETRN space radiation transport software, both because it is computationally efficient and because proven methods have been developed for using this software to analyze complex geometries. 
Although Monte Carlo codes describe the relevant physics in a fully three-dimensional manner, their computational costs have thus far prevented their widespread use for analysis of complex CAD models, leading to the creation and maintenance of toolkit-specific, simplified geometry models. The work presented here builds on the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit developed for use with the Monte Carlo N-Particle (MCNP) transport code. The work-flow for doing radiation transport on CAD models using MCNP and FLUKA has been demonstrated, and the results of analyses on realistic spacecraft/habitats will be presented. Future work is planned that will further automate this process and enable the use of multiple radiation transport codes on identical geometry models imported from CAD. This effort will enhance the modeling tools used by NASA to accurately evaluate astronaut space radiation risk and accurately determine the protection provided by as-designed exploration mission vehicles and habitats.
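The contrast between stochastic and deterministic transport can be seen in miniature in a purely absorbing 1-D slab, where a Monte Carlo estimate of uncollided transmission converges to the exponential attenuation a deterministic solver computes directly. A toy sketch (not any of the production codes named above; the cross-section and slab depth are arbitrary):

```python
import numpy as np

def mc_transmission(sigma_t, thickness, n=200_000, seed=0):
    """Fraction of particles crossing a purely absorbing slab, estimated by
    sampling exponential free paths (a toy Monte Carlo transport model)."""
    rng = np.random.default_rng(seed)
    paths = rng.exponential(1.0 / sigma_t, size=n)  # mean free path = 1/sigma_t
    return np.mean(paths > thickness)

sigma_t, thickness = 0.5, 3.0          # macroscopic cross-section, slab depth
mc = mc_transmission(sigma_t, thickness)
exact = np.exp(-sigma_t * thickness)   # deterministic (Beer-Lambert) answer
```

The Monte Carlo estimate carries statistical noise that shrinks only as 1/sqrt(n), which is the cost that production codes pay for their geometric and physical generality.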

  15. Prediction of Fracture Behavior in Rock and Rock-like Materials Using Discrete Element Models

    NASA Astrophysics Data System (ADS)

    Katsaga, T.; Young, P.

    2009-05-01

    The study of fracture initiation and propagation in heterogeneous materials such as rock and rock-like materials is of principal interest in the field of rock mechanics and rock engineering, and is crucial for failure prediction and safety measures in civil and mining structures. Our work offers a practical approach to predicting fracture behaviour using discrete element models. In this approach, the microstructures of materials are represented through combinations of clusters of bonded particles with different inter-cluster particle and bond properties, and intra-cluster bond properties. The geometry of clusters is transferred from information available in thin sections, computed tomography (CT) images and other visual presentations of the modelled material, using a customized dialog-based Visual Basic Application built into AutoCAD. Exact microstructures of the tested sample, including fractures, faults, inclusions and void spaces, can be duplicated in the discrete element models. Although the microstructural fabrics of rocks and rock-like structures may differ in scale, fracture formation and propagation through these materials are alike and follow similar mechanics. Synthetic material provides an excellent basis for validating the modelling approach, as fracture behaviour is known along with the composite's well-defined properties. Calibration of the macro-properties of the matrix material and inclusions (aggregates) was followed by calibration of the overall mechanical response, achieved by adjusting the interfacial properties. The discrete element model predicted fracture propagation features and paths similar to those of the real sample material. The paths of the fractures and the matrix-inclusion interaction were compared using computed tomography images. Initiation and fracture formation in the model and the real material were compared using acoustic emission (AE) data.
Analysing the temporal and spatial evolution of AE events collected during sample testing, in relation to the CT images, allows precise reconstruction of the failure sequence. Our proposed modelling approach illustrates realistic fracture formation and growth predictions under different loading conditions.

  16. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters, such as winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable when compared to prevalent state-of-the-art methods.
These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
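The differential evolution algorithm at the core of the synthesis method follows the standard mutate-crossover-select loop; in the dissertation's setting, each objective evaluation would invoke the finite element-based solver. A minimal DE/rand/1/bin sketch with a toy objective standing in for the expensive FE evaluation (population size and the control parameters F, CR are illustrative defaults, not the dissertation's settings):

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin minimizer on a box-bounded design space."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # Mutation: difference of two random members added to a third
            a, b, c = X[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one gene from the mutant
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True
            trial = np.where(cross, mutant, X[i])
            # Greedy selection
            ft = f(trial)
            if ft <= fit[i]:
                X[i], fit[i] = trial, ft
    return X[fit.argmin()], fit.min()

# Toy quadratic objective standing in for an expensive FE machine evaluation:
best_x, best_f = differential_evolution(lambda x: np.sum(x**2),
                                        bounds=[(-5, 5)] * 3)
```

Because each generation's evaluations are independent, the loop over the population parallelizes naturally, which is what makes a modest population practical even when every call to `f` is a full finite element solve.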

  17. DYNALIST II : A Computer Program for Stability and Dynamic Response Analysis of Rail Vehicle Systems : Volume 3. Technical Report Addendum.

    DOT National Transportation Integrated Search

    1976-07-01

    Several new capabilities have been added to the DYNALIST II computer program. These include: (1) a component matrix generator that operates as a 3-D finite element modeling program where elements consist of rigid bodies, flexural bodies, wheelsets, s...

  18. Computational simulation of the creep-rupture process in filamentary composite materials

    NASA Technical Reports Server (NTRS)

    Slattery, Kerry T.; Hackett, Robert M.

    1991-01-01

    A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
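    The time-differencing damage loop described above can be illustrated with a toy equal-load-sharing fiber bundle: random flaw severities, stress-driven damage accumulation each step, and load redistribution as fibers fail, repeated over seeds to build a time-to-failure distribution. A sketch under those simplifying assumptions (all parameter values and the power-law damage rate are illustrative, not the paper's model):

    ```python
    import numpy as np

    def time_to_failure(n_fibers=100, load=50.0, dt=0.01, power=4.0, seed=0):
        """Toy equal-load-sharing fiber bundle advanced by explicit time
        differencing; returns the simulated time at which all fibers fail."""
        rng = np.random.default_rng(seed)
        threshold = rng.uniform(0.5, 1.5, n_fibers)  # random flaw severities
        damage = np.zeros(n_fibers)
        alive = np.ones(n_fibers, dtype=bool)
        t = 0.0
        while alive.any():
            stress = load / alive.sum()              # survivors share the load
            damage[alive] += dt * stress**power / threshold[alive]
            alive &= damage < 1.0                    # fibers fail at damage 1
            t += dt
        return t

    # Replicates with different flaw dispersions give a time-to-failure
    # distribution, mirroring the paper's statistical treatment:
    times = [time_to_failure(seed=s) for s in range(20)]
    ```

    As fibers fail, the shared stress rises and damage accelerates, so the bundle ends in a runaway cascade; the scatter in `times` across seeds is the toy analogue of the statistical distribution of time-to-failure established in the paper.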

  19. Examining the Relationship between Psychosocial Work Factors and Musculoskeletal Discomfort among Computer Users in Malaysia

    PubMed Central

    Zakerian, SA; Subramaniam, ID

    2011-01-01

    Background: With computers rapidly carving a niche in virtually every nook and crevice of today’s fast-paced society, musculoskeletal disorders are becoming more prevalent among computer users, who comprise a wide spectrum of the Malaysian population, including office workers. While the extant literature reflects extensive research on musculoskeletal disorders in general, the five dimensions of psychosocial work factors (job demands, job contentment, job control, computer-related problems and social interaction) attributed to work-related musculoskeletal disorders have been neglected. This study examines these elements in detail, pertaining to their relationship with musculoskeletal disorders, focusing in particular on 120 office workers at Malaysian public sector organizations whose jobs require intensive computer usage. Methods: Research was conducted between March and July 2009 in public service organizations in Malaysia, via a survey utilizing self-completed questionnaires and a diary. The relationship between psychosocial work factors and musculoskeletal discomfort was ascertained through regression analyses, which revealed that some factors were more important than others. Results: The results indicate a significant relationship between psychosocial work factors and musculoskeletal discomfort among computer users. Several of these factors, such as job control, computer-related problems and social interaction, were found to be more important than others in explaining musculoskeletal discomfort. Conclusion: With computer usage on the rise, the prevalence of musculoskeletal discomfort could lead to unnecessary disabilities; hence the vital need for greater attention to this aspect in the workplace, to alleviate potential problems in the future. PMID:23113058

  20. The feasibility of using UML to compare the impact of different brands of computer system on the clinical consultation.

    PubMed

    Kumarapeli, Pushpa; de Lusignan, Simon; Koczan, Phil; Jones, Beryl; Sheeler, Ian

    2007-01-01

    UK general practice is universally computerised, with computers used in the consulting room at the point of care. Practices use a range of different brands of computer system, which have developed organically to meet the needs of general practitioners and health service managers. Unified Modelling Language (UML) is a standard modelling and specification notation widely used in software engineering. This study examined the feasibility of using UML notation to compare the impact of different brands of general practice computer system on the clinical consultation. Multi-channel video recordings of simulated consultation sessions were made on three different clinical computer systems in common use (EMIS, iSOFT Synergy and IPS Vision). User action recorder software logged keyboard and mouse use, and pattern recognition software captured non-verbal communication. The outputs of these were used to create UML class and sequence diagrams for each consultation. We compared 'definition of the presenting problem' and 'prescribing', as these tasks were present in all the consultations analysed. Class diagrams identified the entities involved in the clinical consultation. Sequence diagrams identified common elements of the consultation (such as prescribing) and enabled comparisons between the different brands of computer system. The interaction between clinician and computer system varied greatly between brands. UML sequence diagrams are useful for identifying common tasks in the clinical consultation and for contrasting the impact of different brands of computer system on it. Further research is needed to see if the patterns demonstrated in this pilot study are consistently displayed.
