Sample records for computer based load

  1. Transient Three-Dimensional Side Load Analysis of a Film Cooled Nozzle

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Guidos, Mike

    2008-01-01

    Transient three-dimensional numerical investigations of the side load physics were performed for an engine encompassing a film cooled nozzle extension and a regeneratively cooled thrust chamber. The objectives of this study are to identify the three-dimensional side load physics and to compute the associated aerodynamic side load using an anchored computational methodology. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, and a transient inlet history based on an engine system simulation. Ultimately, the computational results will be provided to the nozzle designers for estimating the effect of the peak side load on the nozzle structure. Computations simulating engine startup at ambient pressures corresponding to sea level and three high altitudes were performed. In addition, computations for both engine startup and shutdown transients were performed for a stub nozzle operating at sea level. For the engine with the full nozzle extension starting up at sea level, the computed results show that the peak side load occurs when the lambda shock steps into the turbine exhaust flow, with the side load caused by the transition from free-shock separation to restricted-shock separation second largest; the side loads decrease rapidly and progressively as the ambient pressure decreases. For the stub nozzle operating at sea level, the computed side loads during both startup and shutdown become very small due to the much reduced flow area.

  2. Transient Three-Dimensional Startup Side Load Analysis of a Regeneratively Cooled Nozzle

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    2008-01-01

    The objective of this effort is to develop a computational methodology to capture the startup side load physics and to anchor the computed aerodynamic side loads with the available data from a regeneratively cooled, high-aspect-ratio nozzle, hot-fired at sea level. The computational methodology is based on an unstructured-grid, pressure-based, reacting flow computational fluid dynamics and heat transfer formulation, a transient 5 s inlet history based on an engine system simulation, and a wall temperature distribution to reflect the effect of regenerative cooling. To understand the effect of regenerative wall cooling, two transient computations were performed using the boundary conditions of adiabatic and cooled walls, respectively. The results show that three types of shock evolution are responsible for side loads: generation of the combustion wave; transitions among free-shock separation, restricted-shock separation, and simultaneous free-shock and restricted-shock separations; and the pulsation of shocks across the lip, although the combustion wave is commonly eliminated with the sparklers during actual tests. The test measured two side load events: a secondary and lower side load, followed by a primary and peak side load. Results from both wall boundary conditions captured the free-shock separation to restricted-shock separation transition, with computed side loads matching the measured secondary side load. For the primary side load, the cooled wall transient produced restricted-shock pulsation across the nozzle lip with peak side load matching that of the test, while the adiabatic wall transient captured shock transitions and free-shock pulsation across the lip with computed peak side load 50% lower than that of the measurement. The computed dominant pulsation frequency of the cooled wall nozzle agrees with that of a separate test, while that of the adiabatic wall nozzle is more than 50% lower than that of the measurement. The computed teepee-like formation and the tangential motion of the shocks during lip pulsation also qualitatively agree with those of test observations. Moreover, a third transient computation was performed with a proportionately shortened 1 s sequence, and lower side loads were obtained with the higher ramp rate.

  3. Development of an efficient procedure for calculating the aerodynamic effects of planform variation

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Geller, E. W.

    1981-01-01

    Numerical procedures to compute gradients in aerodynamic loading due to planform shape changes using panel method codes were studied. Two procedures were investigated: one computed the aerodynamic perturbation directly; the other computed the aerodynamic loading on the perturbed planform and on the base planform and then differenced these values to obtain the perturbation in loading. It is indicated that computing the perturbed values directly cannot be done satisfactorily without proper aerodynamic representation of the pressure singularity at the leading edge of a thin wing. For the alternative procedure, a technique was developed which saves most of the time-consuming computations from a panel method calculation for the base planform. Using this procedure, the perturbed loading can be calculated in about one-tenth of the time required for the base solution.

  4. Transient Three-Dimensional Analysis of Nozzle Side Load in Regeneratively Cooled Engines

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    2005-01-01

    Three-dimensional numerical investigations on the start-up side load physics for a regeneratively cooled, high-aspect-ratio nozzle were performed. The objectives of this study are to identify the three-dimensional side load physics and to compute the associated aerodynamic side load using an anchored computational methodology. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, and a transient inlet condition based on an engine system simulation. Computations were performed for both the adiabatic and cooled walls in order to understand the effect of boundary conditions. Finite-rate chemistry was used throughout the study so that the combustion effect is always included. The results show that three types of shock evolution are responsible for side loads: generation of the combustion wave; transitions among free-shock separation, restricted-shock separation, and simultaneous free-shock and restricted-shock separations; and oscillation of shocks across the lip. Wall boundary conditions drastically affect the computed side load physics: the adiabatic nozzle prefers free-shock separation while the cooled nozzle favors restricted-shock separation, resulting in a higher peak side load for the cooled nozzle than for the adiabatic nozzle. By comparing the computed physics with those of test observations, it is concluded that the cooled wall is the more realistic boundary condition, and the oscillation of the restricted-shock separation flow pattern across the lip, along with its associated tangential shock motion, is the dominant side load physics for a regeneratively cooled, high-aspect-ratio rocket engine.

  5. Monitoring task loading with multivariate EEG measures during complex forms of human-computer interaction

    NASA Technical Reports Server (NTRS)

    Smith, M. E.; Gevins, A.; Brown, H.; Karnik, A.; Du, R.

    2001-01-01

    Electroencephalographic (EEG) recordings were made while 16 participants performed versions of a personal-computer-based flight simulation task of low, moderate, or high difficulty. As task difficulty increased, frontal midline theta EEG activity increased and alpha band activity decreased. A participant-specific function that combined multiple EEG features to create a single load index was derived from a sample of each participant's data and then applied to new test data from that participant. Index values were computed for every 4 s of task data. Across participants, mean task load index values increased systematically with increasing task difficulty and differed significantly between the different task versions. Actual or potential applications of this research include the use of multivariate EEG-based methods to monitor task loading during naturalistic computer-based work.
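    The abstract does not give the study's actual features or weighting, so the sketch below is only illustrative: a simple per-feature regression stands in for the multivariate derivation, and the band-power values are made up. It does show the core idea of combining several EEG features into one participant-specific load index.

```python
def fit_load_index(feature_rows, labels):
    """Fit per-feature weights by regressing each EEG feature against task
    difficulty labels (a stand-in for the study's multivariate derivation)."""
    n = len(feature_rows)
    mean_y = sum(labels) / n
    weights = []
    for j in range(len(feature_rows[0])):
        xs = [row[j] for row in feature_rows]
        mean_x = sum(xs) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, labels))
        var = sum((x - mean_x) ** 2 for x in xs)
        weights.append(cov / var if var else 0.0)
    return weights

def load_index(weights, features):
    """Combine multiple EEG features (e.g. frontal theta up, alpha down)
    into a single scalar load index for one 4 s epoch."""
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical training epochs: [frontal_theta_power, alpha_power].
train = [[1.0, 3.0], [1.4, 2.4], [2.0, 1.8], [2.6, 1.1]]
difficulty = [0, 1, 2, 3]   # low -> high task difficulty
w = fit_load_index(train, difficulty)
print(load_index(w, [2.4, 1.3]))   # index for a new test epoch
```

    Consistent with the reported pattern, the fitted index rises with frontal midline theta and falls with alpha power.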

  6. Transient Three-Dimensional Analysis of Side Load in Liquid Rocket Engine Nozzles

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    2004-01-01

    Three-dimensional numerical investigations on the nozzle start-up side load physics were performed. The objective of this study is to identify the three-dimensional side load physics and to compute the associated aerodynamic side load using an anchored computational methodology. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, and a simulated inlet condition based on a system calculation. Finite-rate chemistry was used throughout the study so that the combustion effect is always included, and the effect of wall cooling on side load physics is studied. The side load physics captured include the afterburning wave, transition from free-shock to restricted-shock separation, and lip Lambda shock oscillation. With the adiabatic nozzle, free-shock separation reappears after the transition from free-shock separation to restricted-shock separation, and the subsequent flow pattern of the simultaneous free-shock and restricted-shock separations creates a very asymmetric Mach disk flow. With the cooled nozzle, the more symmetric restricted-shock separation persisted throughout the start-up transient after the transition, leading to an overall lower side load than that of the adiabatic nozzle. The tepee structures corresponding to the maximum side load were addressed.

  7. An experiment for determining the Euler load by direct computation

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.; Stein, Peter A.

    1986-01-01

    A direct algorithm is presented for computing the Euler load of a column from experimental data. The method is based on exact inextensional theory for imperfect columns, which predicts two distinct deflected shapes at loads near the Euler load. The bending stiffness of the column appears in the expression for the Euler load along with the column length; therefore, the experimental data allow a direct computation of the bending stiffness. Experiments on graphite-epoxy columns of rectangular cross-section are reported in the paper. The bending stiffness of each composite column computed from experiment is compared with predictions from laminated plate theory.
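    For a pinned-pinned column the classical relation is P_E = pi^2 EI / L^2, so a measured Euler load yields the bending stiffness EI directly, as the abstract notes. A minimal sketch with illustrative numbers (not the paper's data):

```python
import math

def euler_load(EI, L):
    """Euler buckling load of a pinned-pinned column: P_E = pi^2 * EI / L^2."""
    return math.pi ** 2 * EI / L ** 2

def bending_stiffness(P_E, L):
    """Invert the same relation: EI = P_E * L^2 / pi^2, so an experimentally
    determined Euler load gives the column's bending stiffness directly."""
    return P_E * L ** 2 / math.pi ** 2

L = 0.5      # column length, m (illustrative)
EI = 120.0   # bending stiffness, N*m^2 (illustrative)
P = euler_load(EI, L)
print(P)                          # buckling load, N
print(bending_stiffness(P, L))    # recovers EI
```

    Other end conditions change only the effective length in the same formula.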

  8. Transient Three-Dimensional Side Load Analysis of Out-of-Round Film Cooled Nozzles

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Lin, Jeff; Ruf, Joe; Guidos, Mike

    2010-01-01

    The objective of this study is to investigate the effect of nozzle out-of-roundness on the transient startup side loads. The out-of-roundness could be the result of asymmetric loads induced by hardware attached to the nozzle, asymmetric internal stresses induced by previous tests, and/or deformation, such as creep, from previous tests. The rocket engine studied encompasses a regeneratively cooled thrust chamber and a film cooled nozzle extension with film coolant distributed from a turbine exhaust manifold. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, and a transient inlet history based on an engine system simulation. Transient startup computations were performed for four degrees of nozzle ovalization: perfectly round, slightly out-of-round, more out-of-round, and significantly out-of-round. The computed side load physics caused by the nozzle out-of-roundness and its effect on nozzle side load are reported and discussed.

  9. Transient Two-Dimensional Analysis of Side Load in Liquid Rocket Engine Nozzles

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    2004-01-01

    Two-dimensional planar and axisymmetric numerical investigations on the nozzle start-up side load physics were performed. The objective of this study is to develop a computational methodology to identify nozzle side load physics using simplified two-dimensional geometries, in order to arrive at a computational strategy to eventually predict the three-dimensional side loads. The computational methodology is based on a multidimensional, finite-volume, viscous, chemically reacting, unstructured-grid, pressure-based computational fluid dynamics formulation, and a transient inlet condition based on engine system modeling. The side load physics captured in the low aspect-ratio, two-dimensional planar nozzle include the Coanda effect, afterburning wave, and the associated lip free-shock oscillation. Results of parametric studies indicate that equivalence ratio, combustion, and ramp rate affect the side load physics. The side load physics inferred in the high aspect-ratio, axisymmetric nozzle study include the afterburning wave; transition from free-shock to restricted-shock separation, reverting back to free-shock separation, and transforming to restricted-shock separation again; and lip restricted-shock oscillation. The Mach disk loci and wall pressure history studies reconfirm that combustion and the associated thermodynamic properties affect the formation and duration of the asymmetric flow.

  10. MIRADS-2 Implementation Manual

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The Marshall Information Retrieval and Display System (MIRADS), a data base management system designed to provide the user with a set of generalized file capabilities, is presented. The system provides a wide variety of ways to process the contents of the data base and includes capabilities to search, sort, compute, update, and display the data. The process of creating, defining, and loading a data base is generally called the loading process. The steps in the loading process, which include (1) structuring, (2) creating, (3) defining, and (4) implementing the data base for use by MIRADS, are defined. The execution of several computer programs is required to successfully complete all steps of the loading process. The MIRADS Library must be established as a cataloged mass storage file as the first step in MIRADS implementation; the procedure for establishing the library is given. The system is currently operational for the UNIVAC 1108 computer system utilizing the Executive Operating System. All procedures relate to the use of MIRADS on the U-1108 computer.

  11. Dynamic load balancing of applications

    DOEpatents

    Wheat, Stephen R.

    1997-01-01

    An application-level method for dynamically maintaining global load balance on a parallel computer, particularly on massively parallel MIMD computers. Global load balancing is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method supports a large class of finite element and finite difference based applications and provides an automatic element management system to which applications are easily integrated.
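    The patent's specific algorithm is not reproduced in the abstract; the generic diffusive sketch below conveys the idea it describes, namely that purely local exchanges within overlapping neighborhoods of processors can converge to a global balance. The ring topology and exchange fraction are assumptions for illustration.

```python
def diffuse(loads, alpha=0.25):
    """One diffusive balancing round: each processor exchanges a fraction
    `alpha` of the load difference with its ring neighbors. Because the
    neighborhoods overlap, repeated local exchanges spread load globally."""
    n = len(loads)
    new = list(loads)
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):
            new[i] += alpha * (loads[j] - loads[i])
    return new

loads = [8.0, 0.0, 0.0, 0.0]   # all work starts on processor 0
for _ in range(60):
    loads = diffuse(loads)
print([round(x, 3) for x in loads])   # each approaches the mean, 2.0
```

    The pairwise exchanges are antisymmetric, so the total load is conserved at every round.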

  12. Transient three-dimensional startup side load analysis of a regeneratively cooled nozzle

    NASA Astrophysics Data System (ADS)

    Wang, Ten-See

    2009-07-01

    The objective of this effort is to develop a computational methodology to capture the side load physics and to anchor the computed aerodynamic side loads with the available data by simulating the startup transient of a regeneratively cooled, high-aspect-ratio nozzle, hot-fired at sea level. The computational methodology is based on an unstructured-grid, pressure-based, reacting flow computational fluid dynamics and heat transfer formulation, and a transient inlet history based on an engine system simulation. Emphases were put on the effects of regenerative cooling on shock formation inside the nozzle, and of ramp rate on side load reduction. The results show that three types of asymmetric shock physics incur strong side loads: the generation of the combustion wave, shock transitions, and shock pulsations across the nozzle lip, although the combustion wave can be avoided with sparklers during hot-firing. Results from both regeneratively cooled and adiabatic wall boundary conditions capture the early shock transitions, with corresponding side loads matching the measured secondary side load. It is theorized that the first transition from free-shock separation to restricted-shock separation is caused by the Coanda effect. Thereafter, the regeneratively cooled wall enhances the Coanda effect such that the supersonic jet stays attached, while the hot adiabatic wall counteracts the Coanda effect, and the supersonic jet becomes detached most of the time. As a result, the computed peak side load and dominant frequency due to shock pulsation across the nozzle lip associated with the regeneratively cooled wall boundary condition match those of the test, while those associated with the adiabatic wall boundary condition are much too low. Moreover, shorter ramp time results show that a higher ramp rate has the potential to reduce nozzle side loads.

  13. Dynamic load balancing of applications

    DOEpatents

    Wheat, S.R.

    1997-05-13

    An application-level method for dynamically maintaining global load balance on a parallel computer, particularly on massively parallel MIMD computers is disclosed. Global load balancing is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method supports a large class of finite element and finite difference based applications and provides an automatic element management system to which applications are easily integrated. 13 figs.

  14. Metacognitive Load--Useful, or Extraneous Concept? Metacognitive and Self-Regulatory Demands in Computer-Based Learning

    ERIC Educational Resources Information Center

    Schwonke, Rolf

    2015-01-01

    Instructional design theories such as the "cognitive load theory" (CLT) or the "cognitive theory of multimedia learning" (CTML) explain learning difficulties in (computer-based) learning usually as a result of design deficiencies that hinder effective schema construction. However, learners often struggle even in well-designed…

  15. Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.

    2000-01-01

    The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many sub-domains called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architecture, operating systems, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
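    One simple way to distribute blocks over processors of unequal speed is a greedy largest-first assignment to whichever processor would finish its work soonest. This is a sketch of that idea with made-up block costs and speeds, not the authors' actual tool:

```python
import heapq

def assign_blocks(block_costs, speeds):
    """Greedy heterogeneous scheduling sketch: give each block (largest
    first) to the processor that would finish its current work soonest,
    where finish time = assigned cost / processor speed."""
    # Min-heap of (finish_time, processor_index, assigned_cost).
    heap = [(0.0, p, 0.0) for p in range(len(speeds))]
    heapq.heapify(heap)
    assignment = {}
    for b, cost in sorted(enumerate(block_costs), key=lambda x: -x[1]):
        t, p, acc = heapq.heappop(heap)
        acc += cost
        assignment[b] = p
        heapq.heappush(heap, (acc / speeds[p], p, acc))
    return assignment

blocks = [4.0, 3.0, 3.0, 2.0, 2.0, 1.0]   # per-block solve cost (illustrative)
speeds = [2.0, 1.0, 1.0]                  # relative CPU speeds (illustrative)
print(assign_blocks(blocks, speeds))
```

    In practice such an assignment would be recomputed periodically as measured block times and machine loads drift.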

  16. Balancing Particle and Mesh Computation in a Particle-In-Cell Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worley, Patrick H; D'Azevedo, Eduardo; Hager, Robert

    2016-01-01

    The XGC1 plasma microturbulence particle-in-cell simulation code has both particle-based and mesh-based computational kernels that dominate performance. Both are subject to load imbalances that can degrade performance and that evolve during a simulation. Each can be addressed adequately on its own, but optimizing for just one can introduce significant load imbalance in the other, degrading overall performance. A technique has been developed, based on Golden Section Search, that minimizes wallclock time given prior wallclock measurements, the current particle distribution, and the mesh cost per cell, and that adapts to evolving load imbalance in both the particle and the mesh work. In problems of interest this doubled the performance of full-system runs on the XK7 at the Oak Ridge Leadership Computing Facility compared to load balancing only one of the kernels.
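    Golden Section Search itself is standard: it narrows a bracket around the minimizer of a unimodal function using one new evaluation per iteration. Below it is applied to a hypothetical two-kernel cost model (the XGC1 cost model is more involved than this):

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Golden Section Search: locate the minimizer of a unimodal function
    on [a, b], reusing one interior evaluation per iteration."""
    inv_phi = (math.sqrt(5) - 1) / 2            # 1/phi ~ 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while d - c > tol:
        if fc < fd:                             # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                                   # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2

# Hypothetical cost model: wallclock = max(particle time, mesh time) as a
# function of the fraction x of resources devoted to particle work.
cost = lambda x: max(2.0 / x, 1.0 / (1.0 - x))
print(golden_section_min(cost, 0.05, 0.95))     # ~2/3 balances the kernels
```

    The minimum sits where the two kernel times cross, i.e. where neither kernel is the lone bottleneck.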

  17. Short-term Power Load Forecasting Based on Balanced KNN

    NASA Astrophysics Data System (ADS)

    Lv, Xianlong; Cheng, Xingong; YanShuang; Tang, Yan-mei

    2018-03-01

    To improve the accuracy of load forecasting, a short-term load forecasting model based on a balanced KNN algorithm is proposed. According to the load characteristics, the historical data of massive power loads are divided into scenes by the K-means algorithm. In view of unbalanced load scenes, the balanced KNN algorithm is proposed to classify the scenes accurately, and a local weighted linear regression algorithm is used to fit and predict the load. Adopting the Apache Hadoop programming framework of cloud computing, the proposed algorithm is parallelized and improved to enhance its ability to deal with massive, high-dimensional data. The analysis of household electricity consumption data for a residential district was done on a 23-node cloud computing cluster, and experimental results show that the load forecasting accuracy and execution time of the proposed model are better than those of the traditional forecasting algorithm.
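    The abstract does not spell out the paper's exact balanced-KNN formulation; one common class-balanced variant compares each class's own k nearest neighbors rather than taking a global vote, so a majority scene cannot swamp a rare one. A sketch with made-up load scenes:

```python
from collections import defaultdict

def balanced_knn(train, query, k=2):
    """Class-balanced k-NN sketch: take the k nearest neighbors *within
    each class* and pick the class with the smallest mean distance to the
    query, instead of voting among the k global neighbors."""
    by_class = defaultdict(list)
    for features, label in train:
        dist = sum((a - b) ** 2 for a, b in zip(features, query)) ** 0.5
        by_class[label].append(dist)
    best_label, best_score = None, float("inf")
    for label, dists in by_class.items():
        score = sum(sorted(dists)[:k]) / min(k, len(dists))
        if score < best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical daily-load scenes: (feature vector, scene label).
scenes = [([1.0, 1.1], "weekday"), ([1.1, 0.9], "weekday"),
          ([0.9, 1.0], "weekday"), ([3.0, 3.2], "holiday")]
print(balanced_knn(scenes, [2.9, 3.1]))   # -> holiday, despite one sample
```

    A plain 3-NN vote on the same data would return "weekday" for that query, which is the imbalance problem the balanced variant addresses.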

  18. Load Forecasting Based Distribution System Network Reconfiguration -- A Distributed Data-Driven Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard

    In this paper, a short-term load forecasting based network reconfiguration approach is proposed in a parallel manner. Specifically, a support vector regression (SVR) based short-term load forecasting approach is designed to provide an accurate load prediction and benefit the network reconfiguration. Because of the nonconvexity of the three-phase balanced optimal power flow, a second-order cone program (SOCP) based approach is used to relax the optimal power flow problem. Then, the alternating direction method of multipliers (ADMM) is used to compute the optimal power flow in a distributed manner. Considering the limited number of switches and the increasing computation capability, the proposed network reconfiguration is solved in a parallel way. The numerical results demonstrate the feasibility and effectiveness of the proposed approach.

  19. Understanding the Effects of Databases as Cognitive Tools in a Problem-Based Multimedia Learning Environment

    ERIC Educational Resources Information Center

    Li, Rui; Liu, Min

    2007-01-01

    The purpose of this study is to examine the potential of using computer databases as cognitive tools to share learners' cognitive load and facilitate learning in a multimedia problem-based learning (PBL) environment designed for sixth graders. Two research questions were: (a) can the computer database tool share sixth-graders' cognitive load? and…

  20. Performance Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis with Different Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel

    Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility test of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today's power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
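    A counter-based scheme can be sketched as a shared counter behind a lock: each worker repeatedly fetches the next case index, so faster workers naturally take more cases. This is a toy stand-in for the paper's schemes, with a trivial placeholder "contingency":

```python
import threading

def run_contingencies(n_cases, n_workers, analyze):
    """Counter-based dynamic scheduling sketch: workers grab the next case
    index from a shared counter until all cases are processed."""
    counter = {"next": 0}
    lock = threading.Lock()
    results = [None] * n_cases

    def worker():
        while True:
            with lock:                      # atomic fetch-and-increment
                case = counter["next"]
                if case >= n_cases:
                    return
                counter["next"] += 1
            results[case] = analyze(case)   # analyze outside the lock

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return results

# Toy "contingency": flag every fifth component outage as a violation.
print(run_contingencies(10, 4, lambda c: c % 5 == 0))
```

    Unlike a static split of cases across workers, this keeps all workers busy even when per-case analysis times vary widely.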

  1. Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen

    2013-01-01

    Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during test. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structure deformation during major side load events, leading to structural damage if strengthening measures are not taken. The modeling picture is incomplete without the capability to address the two-way responses between the structure and fluid. The objective of this study is to develop a coupled aeroelastic modeling capability by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, while the computational structural dynamics component is developed in the framework of modal analysis. Transient aeroelastic nozzle startup analyses of the Block I Space Shuttle Main Engine at sea level were performed. The computed results from the aeroelastic nozzle modeling are presented.

  2. 10 CFR Appendix A to Subpart U of... - Sampling Plan for Enforcement Testing of Electric Motors

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ....010 is based on a 20 percent tolerance in the total power loss at full-load and fixed output power... measured full-load efficiency of unit i. Step 3. Compute the sample standard deviation (S1) of the measured full-load efficiency of the n1 units in the first sample as follows: ER83AD04.006 Step 4. Compute the...

  3. 10 CFR Appendix A to Subpart U of... - Sampling Plan for Enforcement Testing of Electric Motors

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....010 is based on a 20 percent tolerance in the total power loss at full-load and fixed output power... measured full-load efficiency of unit i. Step 3. Compute the sample standard deviation (S1) of the measured full-load efficiency of the n1 units in the first sample as follows: ER83AD04.006 Step 4. Compute the...

  4. 10 CFR Appendix A to Subpart U of... - Sampling Plan for Enforcement Testing of Electric Motors

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ....010 is based on a 20 percent tolerance in the total power loss at full-load and fixed output power... measured full-load efficiency of unit i. Step 3. Compute the sample standard deviation (S1) of the measured full-load efficiency of the n1 units in the first sample as follows: ER83AD04.006 Step 4. Compute the...
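    Step 3 of the sampling plan above computes the sample standard deviation of the first sample's measured full-load efficiencies (the regulation's exact expressions appear in the referenced figures, e.g. ER83AD04.006). The usual n-1 denominator form can be checked with the standard library; the efficiency values below are illustrative, not regulatory data:

```python
import statistics

# Illustrative measured full-load efficiencies (percent) for a first
# sample of n1 = 5 units; the values are made up for demonstration.
efficiencies = [91.0, 91.4, 90.8, 91.2, 91.1]

mean = statistics.fmean(efficiencies)   # sample mean of the n1 units
s1 = statistics.stdev(efficiencies)     # Step 3: sample standard deviation
                                        # (n - 1 denominator)
print(round(mean, 3), round(s1, 4))
```

    `statistics.stdev` uses the n-1 (Bessel-corrected) denominator, matching the sample standard deviation of a small enforcement sample.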

  5. Applying graph partitioning methods in measurement-based dynamic load balancing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatele, Abhinav; Fourestier, Sebastien; Menon, Harshitha

    Load imbalance leads to an increasing waste of resources as an application is scaled to more and more processors. Achieving the best parallel efficiency for a program requires optimal load balancing, which is an NP-hard problem. However, finding near-optimal solutions to this problem for complex computational science and engineering applications is becoming increasingly important. Charm++, a migratable-objects based programming model, provides a measurement-based dynamic load balancing framework. This framework instruments and then migrates over-decomposed objects to balance computational load and communication at runtime. This paper explores the use of graph partitioning algorithms, traditionally used for partitioning physical domains/meshes, for measurement-based dynamic load balancing of parallel applications. In particular, we present repartitioning methods developed in a graph partitioning toolbox called SCOTCH that consider the previous mapping to minimize migration costs. We also discuss a new imbalance reduction algorithm for graphs with irregular load distributions. We compare several load balancing algorithms using microbenchmarks on Intrepid and Ranger and evaluate the effect of communication, number of cores, and number of objects on the benefit achieved from load balancing. New algorithms developed in SCOTCH lead to better performance than the METIS partitioners in several cases, both in terms of application execution time and fewer objects migrated.
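    Neither SCOTCH nor METIS is reproduced here; the toy greedy sketch below only illustrates the migration-minimizing idea the abstract describes, starting from the previous object-to-processor mapping and moving just the lightest objects off overloaded processors. All names and numbers are made up:

```python
def rebalance(mapping, loads, n_procs, tol=1.05):
    """Migration-aware rebalancing sketch: keep objects where they are and
    migrate only the lightest objects off overloaded processors, stopping
    when no destination can absorb a move without itself overloading."""
    target = sum(loads.values()) / n_procs
    proc_load = [0.0] * n_procs
    for obj, p in mapping.items():
        proc_load[p] += loads[obj]
    migrated = 0
    for p in range(n_procs):
        objs = sorted((o for o, q in mapping.items() if q == p),
                      key=lambda o: loads[o])        # lightest first
        while proc_load[p] > tol * target and objs:
            obj = objs.pop(0)
            dest = min(range(n_procs), key=proc_load.__getitem__)
            if proc_load[dest] + loads[obj] > tol * target:
                break                                 # no useful move left
            mapping[obj] = dest
            proc_load[p] -= loads[obj]
            proc_load[dest] += loads[obj]
            migrated += 1
    return mapping, migrated

mapping = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 2}
loads = {"a": 4.0, "b": 1.0, "c": 1.0, "d": 2.0, "e": 2.0}
print(rebalance(mapping, loads, 3))   # only 2 light objects migrate
```

    Real repartitioners also weigh communication edges between objects; this sketch considers only per-object load and migration count.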

  6. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    NASA Astrophysics Data System (ADS)

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

    This paper proposes a novel semi-analytical bearing model addressing flexibility of the bearing outer race structure. It furthermore presents the application of this model in a bearing load condition monitoring approach. The bearing model was developed because current computationally inexpensive bearing models, due to their assumption of rigidity, fail to accurately describe the increasingly common flexible, size- and weight-optimized bearing designs. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier series based compliance approximation. The resulting model is computationally inexpensive and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  7. The building loads analysis system thermodynamics (BLAST) program, Version 2. 0: input booklet. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, E.

    1979-06-01

    The Building Loads Analysis and System Thermodynamics (BLAST) program is a comprehensive set of subprograms for predicting energy consumption in buildings. There are three major subprograms: (1) the space load predicting subprogram, which computes hourly space loads in a building or zone based on user input and hourly weather data; (2) the air distribution system simulation subprogram, which uses the computed space load and user inputs describing the building air-handling system to calculate hot water or steam, chilled water, and electric energy demands; and (3) the central plant simulation program, which simulates boilers, chillers, onsite power generating equipment and solar energy systems and computes monthly and annual fuel and electrical power consumption and plant life cycle cost.

  8. Computation of Nonlinear Hydrodynamic Loads on Floating Wind Turbines Using Fluid-Impulse Theory: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kok Yan Chan, G.; Sclavounos, P. D.; Jonkman, J.

    2015-04-02

    A hydrodynamics computer module was developed for the evaluation of the linear and nonlinear loads on floating wind turbines using a new fluid-impulse formulation for coupling with the FAST program. The recently developed formulation allows the computation of linear and nonlinear loads on floating bodies in the time domain, avoids the computationally intensive evaluation of temporal and nonlinear free-surface problems, and admits efficient methods for its computation. The body's instantaneous wetted surface is approximated by a panel mesh, and the discretization of the free surface is circumvented by using the Green function. The evaluation of the nonlinear loads is based on explicit expressions derived from fluid-impulse theory, which can be computed efficiently. Computations are presented of the linear and nonlinear loads on the MIT/NREL tension-leg platform, and comparisons were carried out with frequency-domain linear and second-order methods. Emphasis was placed on the modeling accuracy of the magnitude of nonlinear low- and high-frequency wave loads in a sea state. Although fluid-impulse theory is applied to floating wind turbines in this paper, the theory is applicable to other offshore platforms as well.

  9. Probabilistic load simulation: Code development status

    NASA Astrophysics Data System (ADS)

    Newell, J. F.; Ho, H.

    1991-05-01

    The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are induced in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system, which also includes probabilistic structural analyses and reliability and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge-based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs, while the load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.

  10. Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen

    2013-01-01

    Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development. Currently there is no fully coupled computational tool to analyze this fluid/structure interaction process. The objective of this study was to develop a fully coupled aeroelastic modeling capability to describe the fluid/structure interaction process during transient nozzle operations. The aeroelastic model comprises three components: a computational fluid dynamics component based on an unstructured-grid, pressure-based formulation; a computational structural dynamics component developed in the framework of modal analysis; and a fluid-structure interface component. The developed aeroelastic model was applied to the transient nozzle startup process of the Space Shuttle Main Engine at sea level. The computed nozzle side loads and axial nozzle wall pressure profiles from the aeroelastic nozzle are compared with published rigid-nozzle results, and the impact of the fluid/structure interaction on nozzle side loads is interrogated and presented.
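    The structural side of such a coupling can be illustrated with the modal equation of motion the abstract alludes to. The sketch below is generic (not the paper's solver); the time stepper, coefficients, and constant forcing are all hypothetical stand-ins for the real generalized aerodynamic force:

    ```python
    def modal_step(q, qdot, force, omega, zeta, dt):
        """Advance one structural mode  q'' + 2*zeta*omega*q' + omega^2*q = force
        by one symplectic-Euler step. In a coupled aeroelastic loop, `force`
        would be the generalized force obtained by projecting CFD wall pressure
        onto the mode shape; the numbers below are illustrative only."""
        qddot = force - 2.0 * zeta * omega * qdot - omega * omega * q
        qdot = qdot + dt * qddot
        q = q + dt * qdot
        return q, qdot

    # A constant load converges to the static modal deflection force/omega^2:
    q = qdot = 0.0
    for _ in range(2000):
        q, qdot = modal_step(q, qdot, force=4.0, omega=2.0, zeta=0.7, dt=0.01)
    assert abs(q - 1.0) < 1e-3
    ```

    In a real fluid-structure interface, the updated modal displacements would deform the CFD mesh before the next flow step.
    
    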

  11. Optimizing Cognitive Load for Learning from Computer-Based Science Simulations

    ERIC Educational Resources Information Center

    Lee, Hyunjeong; Plass, Jan L.; Homer, Bruce D.

    2006-01-01

    How can cognitive load in visual displays of computer simulations be optimized? Middle-school chemistry students (N = 257) learned with a simulation of the ideal gas law. Visual complexity was manipulated by separating the display of the simulations across two screens (low complexity) or presenting all information on one screen (high complexity). The…

  12. Hardware based redundant multi-threading inside a GPU for improved reliability

    DOEpatents

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware-based processor to ensure accuracy of the output.
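    The core duplicate-and-compare idea can be sketched in a few lines. This is a software analogy only (the patent describes a hardware mechanism inside a GPU), and the function names are hypothetical:

    ```python
    def verified(compute, data, copies=2):
        """Redundant-execution sketch: run the same computation on several
        'instances' that all receive the same load (input), then compare the
        outputs; any mismatch signals a transient fault in one instance."""
        outputs = [compute(data) for _ in range(copies)]
        if any(out != outputs[0] for out in outputs[1:]):
            raise RuntimeError("redundant instances disagree")
        return outputs[0]

    assert verified(sum, [1, 2, 3]) == 6
    ```
    
    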

  13. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.

    Cloud computing is a promising technology for managing and improving the utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is the significant server setup cost and activation time. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases of load; that is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and noninstantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows estimation of a number of performance measures.
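    The hysteresis idea can be sketched as a simple threshold controller: a server is added only when the queue exceeds an upper threshold and removed only when it falls below a lower one, so brief fluctuations do not trigger costly switching. This is a minimal illustration, not the paper's queuing model; thresholds and server counts are hypothetical:

    ```python
    class HysteresisScaler:
        """Toggle servers on/off from queue length with hysteresis: activate
        above `up`, deactivate only below the lower threshold `down`, so a
        momentary spike or dip between the thresholds changes nothing."""

        def __init__(self, up=8, down=2, max_servers=4):
            self.up, self.down = up, down
            self.max_servers = max_servers
            self.active = 1  # keep at least one server running

        def observe(self, queue_len):
            if queue_len > self.up and self.active < self.max_servers:
                self.active += 1      # heavy load: switch a server on
            elif queue_len < self.down and self.active > 1:
                self.active -= 1      # light load: switch a server off
            return self.active

    scaler = HysteresisScaler()
    # A queue length between the thresholds leaves the server count alone:
    assert scaler.observe(5) == 1
    # Sustained load above the up-threshold adds servers one at a time:
    assert scaler.observe(9) == 2
    ```
    
    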

  14. Transient Three-Dimensional Side Load Analysis of Out-of-Round Film Cooled Nozzles

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Lin, Jeff; Ruf, Joe; Guidos, Mike

    2010-01-01

    The objective of this study is to investigate the effect of nozzle out-of-roundness on the transient startup side loads at a high altitude, with an anchored computational methodology. The out-of-roundness could be the result of asymmetric loads induced by hardware attached to the nozzle, asymmetric internal stresses induced by previous tests, and deformation, such as creep, from previous tests. The rocket engine studied encompasses a regeneratively cooled thrust chamber and a film cooled nozzle extension with film coolant distributed from a turbine exhaust manifold. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, and a transient inlet history based on an engine system simulation. Transient startup computations were performed with the out-of-roundness achieved by four different degrees of ovalization: one perfectly round, one slightly out-of-round, one more out-of-round, and one significantly out-of-round. The results show that the separation-line jump is the peak side load physics for the round, slightly out-of-round, and more out-of-round cases, and that the peak side load increases as the degree of out-of-roundness increases. For the significantly out-of-round nozzle, however, the peak side load drops to a level comparable to that of the round nozzle, and the separation-line jump is no longer the peak side load physics. This counter-intuitive result is found to be related to a side-force reduction mechanism that splits the effect of the separation-line jump into two parts, not only in the circumferential direction but, most importantly, in time.

  15. A comparison of queueing, cluster and distributed computing systems

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster; cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing System (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF, formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  16. Life Prediction for a CMC Component Using the NASALIFE Computer Code

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John Z.; Murthy, Pappu L. N.; Mital, Subodh K.

    2005-01-01

    The computer code, NASALIFE, was used to provide estimates for life of an SiC/SiC stator vane under varying thermomechanical loading conditions. The primary intention of this effort is to show how the computer code NASALIFE can be used to provide reasonable estimates of life for practical propulsion system components made of advanced ceramic matrix composites (CMC). Simple loading conditions provided readily observable and acceptable life predictions. Varying the loading conditions such that low cycle fatigue and creep were affected independently provided expected trends in the results for life due to varying loads and life due to creep. Analysis was based on idealized empirical data for the 9/99 Melt Infiltrated SiC fiber reinforced SiC.

  17. Software for Building Models of 3D Objects via the Internet

    NASA Technical Reports Server (NTRS)

    Schramer, Tim; Jensen, Jeff

    2003-01-01

    The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.

  18. Population-based learning of load balancing policies for a distributed computer system

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; Wah, Benjamin W.

    1993-01-01

    Effective load-balancing policies use dynamic resource information to schedule tasks in a distributed computer system. We present a novel method for automatically learning such policies. At each site in our system, we use a comparator neural network to predict the relative speedup of an incoming task using only the resource-utilization patterns obtained prior to the task's arrival. Outputs of these comparator networks are broadcast periodically over the distributed system, and the resource schedulers at each site use these values to determine the best site for executing an incoming task. The delays incurred in propagating workload information and tasks from one site to another, as well as the dynamic and unpredictable nature of workloads in multiprogrammed multiprocessors, may cause the workload pattern at the time of execution to differ from patterns prevailing at the times of load-index computation and decision making. Our load-balancing policy accommodates this uncertainty by using certain tunable parameters. We present a population-based machine-learning algorithm that adjusts these parameters in order to achieve high average speedups with respect to local execution. Our results show that our load-balancing policy, when combined with the comparator neural network for workload characterization, is effective in exploiting idle resources in a distributed computer system.
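    The site-selection step described above can be sketched as follows. This is a toy illustration, not the paper's policy: the broadcast scores stand in for the comparator-network outputs, and the local-execution bias is one example of the kind of tunable parameter the abstract mentions (its value here is hypothetical):

    ```python
    def pick_site(broadcast_scores, local_site, bias=0.05):
        """Choose where to send an incoming task from periodically broadcast
        predicted relative speedups (one score per site). A small bias toward
        local execution absorbs the uncertainty of stale workload information:
        only a clearly better remote site wins the task."""
        best = max(broadcast_scores, key=broadcast_scores.get)
        # keep the task local unless a remote site wins by more than `bias`
        if best != local_site and \
           broadcast_scores[best] <= broadcast_scores[local_site] + bias:
            return local_site
        return best

    # A marginal remote advantage is ignored; a clear one is not:
    assert pick_site({"a": 1.00, "b": 1.03}, local_site="a") == "a"
    assert pick_site({"a": 1.00, "b": 1.20}, local_site="a") == "b"
    ```
    
    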

  19. Experimental and Numerical Analyses of Dynamic Deformation and Failure in Marine Structures Subjected to Underwater Impulsive Loads

    DTIC Science & Technology

    2012-08-01

    (Garbled table-of-contents and figure-caption excerpt; recoverable headings: impulse-based loading; computational modeling of the Underwater Shock Loading Simulator (USLS); Figure 4.1, schematic of the USLS, in which a high-velocity projectile hits the flyer plate and creates a stress wave.)

  20. Adaptive Load-Balancing Algorithms Using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In a distributed-computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Dam and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three novel SBN-based load-balancing algorithms, and implement them on an SP2. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that these algorithms are very effective in balancing system load while minimizing processor idle time. They also compare favorably with several other existing load-balancing techniques. Additional experiments performed with real data demonstrate that the SBN approach is effective in adaptive computational science and engineering applications where dynamic load balancing is extremely crucial.

  1. Real-time polarization imaging algorithm for camera-based polarization navigation sensors.

    PubMed

    Lu, Hao; Zhao, Kaichun; You, Zheng; Huang, Kaoli

    2017-04-10

    Biologically inspired polarization navigation is a promising approach due to its autonomous nature, high precision, and robustness. Many researchers have built point-source-based and camera-based polarization navigation prototypes in recent years. Camera-based prototypes benefit from high spatial resolution but incur a heavy computation load, because the pattern recognition step in most polarization imaging algorithms involves several nonlinear calculations. In this paper, the polarization imaging and pattern recognition algorithms are reduced to several linear calculations, without loss of precision, by exploiting the orthogonality of the Stokes parameters together with the features of the solar meridian and the patterns of the polarized skylight. The algorithm comprises a pattern recognition stage based on a Hough transform as well as orientation measurement algorithms. The algorithm was loaded and run on a digital signal processing system to test its computational complexity; the running time decreased from several thousand milliseconds to several tens of milliseconds. Simulations and experiments showed that the algorithm can measure orientation without reducing precision, and it can hence satisfy the practical demands of low computational load and high precision for use in embedded systems.
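    The kind of linearization the abstract refers to can be illustrated with the textbook Stokes-parameter relations for a four-channel polarization camera. This is a standard-formula sketch, not the authors' code, and the intensity values are hypothetical:

    ```python
    import math

    def stokes_aop_dolp(i0, i45, i90, i135):
        """Linear Stokes parameters from four analyzer orientations; only the
        final angle needs one atan2 call, so the per-pixel cost stays low
        compared with iterative nonlinear fitting."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0 - i90                        # 0/90 degree difference
        s2 = i45 - i135                      # 45/135 degree difference
        aop = 0.5 * math.atan2(s2, s1)       # angle of polarization (rad)
        dolp = math.hypot(s1, s2) / s0       # degree of linear polarization
        return aop, dolp

    # Fully linearly polarized light at 0 degrees: i0 bright, i90 dark.
    aop, dolp = stokes_aop_dolp(1.0, 0.5, 0.0, 0.5)
    assert abs(aop) < 1e-9 and abs(dolp - 1.0) < 1e-9
    ```
    
    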

  2. Theoretical, Experimental, and Computational Evaluation of Disk-Loaded Circular Wave Guides

    NASA Technical Reports Server (NTRS)

    Wallett, Thomas M.; Qureshi, A. Haq

    1994-01-01

    A disk-loaded circular wave guide structure and test fixture were fabricated. The dispersion characteristics were found by theoretical analysis, experimental testing, and computer simulation using the codes ARGUS and SOS. Interaction impedances were computed based on the corresponding dispersion characteristics. Finally, an equivalent circuit model for one period of the structure was chosen using equivalent circuit models for cylindrical wave guides of different radii. Optimum values for the discrete capacitors and inductors describing discontinuities between cylindrical wave guides were found using the computer code TOUCHSTONE.

  3. A free-piston Stirling engine/linear alternator controls and load interaction test facility

    NASA Technical Reports Server (NTRS)

    Rauch, Jeffrey S.; Kankam, M. David; Santiago, Walter; Madi, Frank J.

    1992-01-01

    A test facility at LeRC was assembled for evaluating free-piston Stirling engine/linear alternator control options, and interaction with various electrical loads. This facility is based on a 'SPIKE' engine/alternator. The engine/alternator, a multi-purpose load system, a digital computer based load and facility control, and a data acquisition system with both steady-periodic and transient capability are described. Preliminary steady-periodic results are included for several operating modes of a digital AC parasitic load control. Preliminary results on the transient response to switching a resistive AC user load are discussed.

  4. Cloud computing task scheduling strategy based on differential evolution and ant colony optimization

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; Cai, Yu; Fang, Yiqiu

    2018-05-01

    This paper proposes DEACO, a task scheduling strategy based on the combination of Differential Evolution (DE) and Ant Colony Optimization (ACO). To address the limitation of a single optimization objective in cloud computing task scheduling, it jointly considers the shortest task completion time, cost, and load balancing. DEACO uses the solution of the DE to initialize the initial pheromone of the ACO, reducing the time spent accumulating pheromone in the early stage of the ACO, and improves the pheromone updating rule through a load factor. The proposed algorithm is simulated on CloudSim and compared with min-min and ACO. The experimental results show that DEACO is superior in terms of time, cost, and load.
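    Two of the ingredients named above, seeding the pheromone matrix from a pre-computed solution and weighting deposits by a load factor, can be sketched as follows. This is a generic illustration under assumed update rules (evaporation rate, deposit constant, and the exact load-factor form are hypothetical, not taken from the paper):

    ```python
    def seed_pheromone(n_tasks, n_vms, seed_assignment, base=1.0, boost=4.0):
        """Initialize the ACO pheromone matrix from a seed solution (which in
        DEACO would come from the DE pre-search), so promising task->VM edges
        start with extra pheromone and ants converge sooner."""
        tau = [[base] * n_vms for _ in range(n_tasks)]
        for task, vm in enumerate(seed_assignment):
            tau[task][vm] += boost
        return tau

    def load_factor_update(tau, assignment, vm_loads, rho=0.5, q=10.0):
        """Evaporate pheromone, then deposit an amount scaled down on heavily
        loaded VMs, steering later ants toward balanced schedules."""
        max_load = max(vm_loads) or 1.0
        for t in range(len(tau)):
            for v in range(len(tau[t])):
                tau[t][v] *= (1 - rho)                          # evaporation
        for task, vm in enumerate(assignment):
            tau[task][vm] += q * (1 - vm_loads[vm] / max_load)  # balanced deposit
        return tau

    tau = seed_pheromone(3, 2, [0, 1, 0])
    assert tau[0][0] == 5.0 and tau[0][1] == 1.0
    ```
    
    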

  5. Investigation of progressive failure robustness and alternate load paths for damage tolerant structures

    NASA Astrophysics Data System (ADS)

    Marhadi, Kun Saptohartyadi

    Structural optimization for damage tolerance under various unforeseen damage scenarios is computationally challenging because it couples nonlinear progressive failure analysis with sampling-based stochastic analysis of random damage. The goal of this research was to understand the relationship between the alternate load paths available in a structure and its damage tolerance, and to use this information to develop computationally efficient methods for designing damage-tolerant structures. Progressive failure of a redundant truss structure subjected to small random variability was investigated to identify features that correlate with robustness and predictability of the structure's progressive failure. The identified features were used to develop numerical surrogate measures that permit computationally efficient deterministic optimization to achieve robustness and predictability of progressive failure. Analysis of damage tolerance on designs with robust progressive failure indicated that robustness and predictability of progressive failure do not guarantee damage tolerance; damage tolerance requires a structure to redistribute its load to alternate load paths. In order to investigate the load distribution characteristics that lead to damage tolerance, designs with varying degrees of damage tolerance were generated using brute-force stochastic optimization, and a method based on principal component analysis was used to describe the load distributions (alternate load paths) in the structures. Results indicate that a structure that can develop alternate paths is not necessarily damage tolerant: the alternate load paths must have a required minimum load capability. Robustness analysis of damage-tolerant optimum designs indicates that designs are tailored to the specified damage; a design optimized under one damage specification can be sensitive to other damage scenarios not considered. 
The effectiveness of existing load path definitions and characterizations was investigated for continuum structures. A load path definition using a relative compliance change measure (the U* field) was demonstrated to be the most useful measure of load path: it provides quantitative information on load path trajectories and qualitative information on the effectiveness of the load path. The use of the U* description of load paths in optimizing structures for effective load paths was investigated.

  6. Transient Side Load Analysis of Out-of-Round Film-Cooled Nozzle Extensions

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Lin, Jeff; Ruf, Joe; Guidos, Mike

    2012-01-01

    There was interest in understanding the impact of an out-of-round nozzle extension on the nozzle side load during transient startup operations. The out-of-round nozzle extension could be the result of asymmetric internal stresses, deformation induced by previous tests, and asymmetric loads induced by hardware attached to the nozzle. The objective of this study was therefore to computationally investigate the effect of an out-of-round nozzle extension on the nozzle side loads during an engine startup transient. The rocket engine studied encompasses a regeneratively cooled chamber and nozzle, along with a film cooled nozzle extension. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, and transient inlet boundary flow properties derived from an engine system simulation. Six three-dimensional cases were performed with the out-of-roundness achieved by three different degrees of ovalization, elongated on the lateral y and z axes: one slightly out-of-round, one more out-of-round, and one significantly out-of-round. The results show that the separation-line jump was the primary source of the peak side loads. Compared with the peak side load of the perfectly round nozzle, the peak side loads increased for the slightly and more ovalized nozzle extensions, and either increased or decreased for the two significantly ovalized nozzle extensions. A theory is presented to explain these observations, based on the counteraction between the flow-destabilizing effect of the exacerbated flow asymmetry caused by a lower degree of ovalization and the flow-stabilizing effect of the more symmetrical flow also created by ovalization.

  7. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1991-01-01

    Two matched-filter-theory-based schemes are described and illustrated for obtaining maximized and time-correlated gust loads for a nonlinear aircraft. The first scheme is computationally fast because it uses a simple one-dimensional search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multi-dimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.
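    The fast one-dimensional search can be sketched as a scan over a single scalar gain applied to the matched excitation. This is a generic sketch: the demo response function below is hypothetical, standing in for a full nonlinear time-domain load simulation:

    ```python
    def max_load_1d(load_of_gain, gains):
        """One-dimensional search in the spirit of the first scheme: sweep a
        scalar gain on the matched excitation waveform, evaluate the nonlinear
        peak-load response for each, and keep the largest."""
        best_gain, best_load = None, float("-inf")
        for g in gains:
            load = load_of_gain(g)
            if load > best_load:
                best_gain, best_load = g, load
        return best_gain, best_load

    def demo(g):
        # Hypothetical softening response: load grows, then saturates and dips.
        return g * (2.0 - g)

    gains = [0.1 * k for k in range(21)]   # 0.0 .. 2.0
    g_star, peak = max_load_1d(demo, gains)
    assert abs(g_star - 1.0) < 1e-9
    ```

    The slower multi-dimensional scheme would instead search over several waveform parameters at once, which is why it costs more but can find slightly higher loads.
    
    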

  8. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Perry, Boyd, III; Pototzky, Anthony S.

    1991-01-01

    This paper describes and illustrates two matched-filter-theory based schemes for obtaining maximized and time-correlated gust-loads for a nonlinear airplane. The first scheme is computationally fast because it uses a simple one-dimensional search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multidimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.

  9. Distributed energy storage systems on the basis of electric-vehicle fleets

    NASA Astrophysics Data System (ADS)

    Zhuk, A. Z.; Buzoverov, E. A.; Sheindlin, A. E.

    2015-01-01

    Several power technologies aimed at covering nonuniform loads in power systems have been developed at the Joint Institute of High Temperatures, Russian Academy of Sciences (JIHT RAS). One line of investigation is the use of the storage batteries of electric vehicles to compensate for load peaks in the power system (V2G, vehicle-to-grid technology). In this article, the efficiency of energy storage systems based on electric vehicles is compared computationally with that of traditional energy-storage technologies, using the minimum-cost criterion for peak energy supply to the system. The computations show that distributed storage systems based on fleets of electric cars are economically efficient for usage regimes of up to 1 h/day. In contrast to traditional methods, the cost of load regulation in a power system based on V2G technology is independent of the duration of the load compensation period (the duration of the consumption peak).
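    The economic argument can be made concrete with a back-of-the-envelope comparison. All cost figures below are hypothetical placeholders, not the article's data; the point is only the structure of the comparison (per-kWh wear cost versus amortized capital):

    ```python
    def v2g_cost_per_kwh(battery_wear_per_kwh):
        """V2G: the vehicle fleet exists anyway, so the marginal cost of peak
        energy is essentially battery wear per kWh cycled -- independent of
        how long the daily peak lasts."""
        return battery_wear_per_kwh

    def peaker_cost_per_kwh(capital_per_kw_year, fuel_per_kwh, hours_per_day):
        """Dedicated peaking capacity: annual capital is spread over the kWh
        actually produced, so cost per kWh falls as the daily peak lengthens."""
        kwh_per_kw_year = hours_per_day * 365.0
        return capital_per_kw_year / kwh_per_kw_year + fuel_per_kwh

    # With these illustrative numbers, V2G wins for a short (1 h/day) peak
    # but loses for a long (4 h/day) one -- the qualitative trend above:
    assert v2g_cost_per_kwh(0.20) < peaker_cost_per_kwh(100.0, 0.10, 1.0)
    assert v2g_cost_per_kwh(0.20) > peaker_cost_per_kwh(100.0, 0.10, 4.0)
    ```
    
    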

  10. Comparison of Hydrodynamic Load Predictions Between Engineering Models and Computational Fluid Dynamics for the OC4-DeepCwind Semi-Submersible: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benitz, M. A.; Schmidt, D. P.; Lackner, M. A.

    Hydrodynamic loads on the platforms of floating offshore wind turbines are often predicted with computer-aided engineering tools that employ Morison's equation and/or potential-flow theory. This work compares results from one such tool, FAST, NREL's wind turbine computer-aided engineering tool, with those of the computational fluid dynamics package OpenFOAM, for the OC4-DeepCwind semi-submersible analyzed in the International Energy Agency Wind Task 30 project. Load predictions from HydroDyn, the offshore hydrodynamics module of FAST, are compared with high-fidelity results from OpenFOAM. HydroDyn uses a combination of Morison's equation and potential flow to predict the hydrodynamic forces on the structure. The implications of the assumptions in HydroDyn are evaluated based on this code-to-code comparison.
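    The Morison's equation referred to above is the standard inertia-plus-drag form; a minimal sketch (with hypothetical numeric values, unrelated to the OC4 platform):

    ```python
    def morison_force(rho, cm, vol, accel, cd, area, vel):
        """Morison's equation for wave load on a submerged member:
        an inertia term proportional to fluid acceleration plus a drag term
        proportional to velocity times its magnitude. Cm and Cd are empirical
        coefficients -- calibrating them is exactly where CFD comparisons
        like this one help."""
        inertia = rho * cm * vol * accel               # rho*Cm*V*du/dt
        drag = 0.5 * rho * cd * area * vel * abs(vel)  # 0.5*rho*Cd*A*u|u|
        return inertia + drag
    ```
    
    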

  11. Effects of Computer-Based Visual Representation on Mathematics Learning and Cognitive Load

    ERIC Educational Resources Information Center

    Yung, Hsin I.; Paas, Fred

    2015-01-01

    Visual representation has been recognized as a powerful learning tool in many learning domains. Based on the assumption that visual representations can support deeper understanding, we examined the effects of visual representations on learning performance and cognitive load in the domain of mathematics. An experimental condition with visual…

  12. Schedulers with load-store queue awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Tong; Eichenberger, Alexandre E.; Jacob, Arpith C.

    2017-02-07

    In one embodiment, a computer-implemented method includes tracking a size of a load-store queue (LSQ) during compile time of a program. The size of the LSQ is time-varying and indicates how many memory access instructions of the program are on the LSQ. The method further includes scheduling, by a computer processor, a plurality of memory access instructions of the program based on the size of the LSQ.
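    The compile-time idea can be sketched as a toy list scheduler that models LSQ occupancy and defers memory operations once the modeled queue is full. Everything here is illustrative (the capacity, the one-retirement-per-cycle latency model, and the instruction encoding are hypothetical, not from the patent):

    ```python
    def schedule(instructions, lsq_capacity=2, mem_latency=3):
        """Toy compile-time LSQ model: track how many in-flight memory
        accesses occupy the load-store queue; once it is full, issue ready
        ALU work instead of more loads/stores. Instructions are
        ('mem'|'alu', name) pairs; each scheduled mem op is assumed to
        retire `mem_latency` slots after issue."""
        order, inflight = [], []   # inflight holds retire-cycles of mem ops
        mem = [name for kind, name in instructions if kind == "mem"]
        alu = [name for kind, name in instructions if kind == "alu"]
        cycle = 0
        while mem or alu:
            inflight = [t for t in inflight if t > cycle]  # retire finished ops
            if mem and len(inflight) < lsq_capacity:
                order.append(mem.pop(0))                   # LSQ has room
                inflight.append(cycle + mem_latency)
            elif alu:
                order.append(alu.pop(0))   # LSQ full: hide it behind ALU work
            else:
                order.append("stall")      # LSQ full and nothing else ready
            cycle += 1
        return order

    ops = [("mem", "L1"), ("mem", "L2"), ("mem", "L3"),
           ("alu", "A1"), ("alu", "A2")]
    assert schedule(ops) == ["L1", "L2", "A1", "L3", "A2"]
    ```
    
    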

  13. Schedulers with load-store queue awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Tong; Eichenberger, Alexandre E.; Jacob, Arpith C.

    2017-01-24

    In one embodiment, a computer-implemented method includes tracking a size of a load-store queue (LSQ) during compile time of a program. The size of the LSQ is time-varying and indicates how many memory access instructions of the program are on the LSQ. The method further includes scheduling, by a computer processor, a plurality of memory access instructions of the program based on the size of the LSQ.

  14. Intrusive and Non-Intrusive Instruction in Dynamic Skill Training.

    DTIC Science & Technology

    1981-10-01

    ...less sensitive to the processing load imposed by the dynamic task together with instructional feedback processing than were the decision-making and...between computer-based instruction of knowledge systems and computer-based instruction of dynamic skills. There is reason to expect that the findings of research on knowledge systems...

  15. Comparison of three methods of calculating strain in the mouse ulna in exogenous loading studies.

    PubMed

    Norman, Stephanie C; Wagner, David W; Beaupre, Gary S; Castillo, Alesha B

    2015-01-02

    Axial compression of mouse limbs is commonly used to induce bone formation in a controlled, non-invasive manner, and determination of the peak strains caused by loading is central to interpreting results. Load-strain calibration is typically performed using uniaxial strain gauges attached to the diaphyseal, periosteal surface in a small number of sacrificed animals: strain is measured as the limb is loaded over a range of physiological loads known to be anabolic to bone, and the load-strain relationship determined for this subgroup is then extrapolated to a larger group of experimental mice. This method of strain calculation requires the challenging process of strain gauging very small bones, which is subject to variability in the placement of the strain gauge. We previously developed a method to estimate animal-specific periosteal strain during axial ulnar loading using an image-based computational approach that does not require strain gauges. The purpose of this study was to compare the relationship between load-induced bone formation rates and periosteal strain at the ulnar midshaft using three different methods to estimate strain: (A) nominal strain values based solely on load-strain calibration; (B) strains calculated from load-strain calibration but scaled for differences in midshaft cross-sectional geometry among animals; and (C) an alternative image-based computational method for calculating strains based on beam theory and animal-specific bone geometry. Our results show that the alternative method (C) provides comparable correlation between strain and bone formation rates in the mouse ulna relative to the strain-gauge-dependent methods (A and B), while avoiding the need to use strain gauges. Published by Elsevier Ltd.
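    A beam-theory strain estimate of the kind used in method (C) superposes the uniform axial strain and the bending strain at the periosteal surface. This is a generic textbook sketch, not the study's implementation, and all the numbers are illustrative placeholders (the study derives the section properties from animal-specific imaging):

    ```python
    def midshaft_strain(axial_force, area, e_mod, moment, c_dist, i_area):
        """Beam-theory strain at a point a distance c from the neutral axis:
        uniform axial term F/(A*E) plus bending term M*c/(E*I). The bending
        moment arises because the curved ulna loads eccentrically under
        axial compression."""
        axial = axial_force / (area * e_mod)          # uniform compression
        bending = moment * c_dist / (e_mod * i_area)  # linear across section
        return axial + bending

    # Illustrative units only: N, mm^2, MPa, N*mm, mm, mm^4.
    strain = midshaft_strain(3.0, 0.4, 20000.0, 1.5, 0.6, 0.02)
    assert abs(strain - 0.002625) < 1e-9   # ~2625 microstrain
    ```
    
    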

  16. The load shedding advisor: An example of a crisis-response expert system

    NASA Technical Reports Server (NTRS)

    Bollinger, Terry B.; Lightner, Eric; Laverty, John; Ambrose, Edward

    1987-01-01

    A Prolog-based prototype expert system is described that was implemented by the Network Operations Branch of the NASA Goddard Space Flight Center. The purpose of the prototype was to test whether a small, inexpensive computer system could be used to host a load shedding advisor, a system which would monitor major physical environment parameters in a computer facility, then recommend appropriate operator responses whenever a serious condition was detected. The performance of the resulting prototype improved significantly owing to efficiency gains achieved by replacing a purely rule-based design methodology with a hybrid approach that combined procedural, entity-relationship, and rule-based methods.

  17. Dynamic Load Balancing for Adaptive Computations on Distributed-Memory Machines

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Dynamic load balancing is central to adaptive mesh-based computations on large-scale parallel computers. The principal investigator has investigated various issues of the dynamic load balancing problem under NASA JOVE and JAG grants. The major accomplishments of the project are two graph partitioning algorithms and a load balancing framework. The S-HARP dynamic graph partitioner is the fastest dynamic graph partitioner known to date. It can partition a graph of over 100,000 vertices in 0.25 seconds on a 64-processor Cray T3E distributed-memory multiprocessor while maintaining a scalability of over 16-fold speedup. Other known and widely used dynamic graph partitioners take a second or two while giving a low scalability of a few-fold speedup on 64 processors. These results have been published in journals and peer-reviewed flagship conferences.
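
    As an illustration of the partitioning goal (not of the S-HARP algorithm itself), a minimal greedy balance of weighted vertices across processors might look like this; the weights are hypothetical:

```python
import heapq

# Illustrative only: a greedy weighted-vertex balance, far simpler than
# S-HARP, showing the goal of dynamic partitioning -- roughly equal load
# per processor.

def greedy_partition(weights, nparts):
    """Assign each weighted vertex to the currently lightest partition."""
    heap = [(0.0, p) for p in range(nparts)]   # (accumulated weight, part id)
    heapq.heapify(heap)
    assignment = [None] * len(weights)
    # Placing heavy vertices first gives a tighter balance.
    for v in sorted(range(len(weights)), key=lambda i: -weights[i]):
        total, part = heapq.heappop(heap)
        assignment[v] = part
        heapq.heappush(heap, (total + weights[v], part))
    return assignment

parts = greedy_partition([5, 3, 3, 2, 2, 1], nparts=2)
print(parts)
```

    A real dynamic partitioner must also minimize edge cut and data movement between the old and new partitions, which is where the algorithmic difficulty lies.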

  18. Definition and maintenance of a telemetry database dictionary

    NASA Technical Reports Server (NTRS)

    Knopf, William P. (Inventor)

    2007-01-01

    A telemetry dictionary database includes a component for receiving spreadsheet workbooks of telemetry data over a web-based interface from other computer devices. Another component routes the spreadsheet workbooks to a specified directory on the host processing device. A process then checks the received spreadsheet workbooks for errors, and if no errors are detected the spreadsheet workbooks are routed to another directory to await initiation of a remote database loading process. The loading process first converts the spreadsheet workbooks to comma separated value (CSV) files. Next, a network connection with the computer system that hosts the telemetry dictionary database is established and the CSV files are ported to that computer system. This is followed by remote initiation of a database loading program. Upon completion of loading, a flatfile generation program is manually initiated to generate a flatfile to be used in a mission operations environment by the core ground system.
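
    The workbook-to-CSV conversion step can be sketched with Python's standard csv module. The record layout below is purely hypothetical and not the patented tool's actual interface.

```python
import csv
import io

# Minimal sketch of the conversion step described above: rows from a
# "workbook" (here just a list of lists) are written out as CSV text
# before the database load.

def rows_to_csv(rows):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(rows)
    return buf.getvalue()

# Hypothetical telemetry dictionary rows:
workbook = [["mnemonic", "subsystem", "units"],
            ["BATT_V", "EPS", "volts"]]
csv_text = rows_to_csv(workbook)
print(csv_text)
```

    In the described system this text would then be transferred to the database host and bulk-loaded by the remotely initiated loading program.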

  19. Behaviour of Frictional Joints in Steel Arch Yielding Supports

    NASA Astrophysics Data System (ADS)

    Horyl, Petr; Šňupárek, Richard; Maršálek, Pavel

    2014-10-01

    The loading capacity and ability of steel arch supports to accept deformations from the surrounding rock mass is influenced significantly by the function of the connections and in particular, the tightening of the bolts. This contribution deals with computer modelling of the yielding bolt connections for different torques to determine the load-bearing capacity of the connections. Another parameter that affects the loading capacity significantly is the value of the friction coefficient of the contacts between the elements of the joints. The authors investigated both the behaviour and conditions of the individual parts for three values of tightening moment and the relation between the value of screw tightening and load-bearing capacity of the connections for different friction coefficients. ANSYS software and the finite element method were used for the computer modelling. The solution is nonlinear because of the bi-linear material properties of steel and the large deformations. The geometry of the computer model was created from designs of all four parts of the structure. The calculation also defines the weakest part of the joint's structure based on stress analysis. The load was divided into two loading steps: the pre-tensioning of connecting bolts and the deformation loading corresponding to 50-mm slip of one support. The full Newton-Raphson method was chosen for the solution. The calculations were carried out on a computer at the Supercomputing Centre VSB-Technical University of Ostrava.
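
    The full Newton-Raphson iteration mentioned above can be illustrated on a scalar nonlinear equation; the actual analysis solves a large nonlinear finite element system, so this is only a sketch of the iteration scheme.

```python
# The full Newton-Raphson method, shown on a scalar nonlinear equation
# for illustration. In the FE setting, f is the residual vector and df
# the tangent stiffness matrix, rebuilt every iteration ("full" Newton).

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)     # full Newton update: x <- x - f(x)/f'(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Example residual: f(x) = x**3 - 2*x - 5 (a classic Newton test case)
root = newton_raphson(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2.0)
print(round(root, 6))
```

    Quadratic convergence near the solution is why the full method is preferred here despite the cost of re-forming the tangent at every iteration of the contact problem.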

  20. Computational Methods for Frictional Contact With Applications to the Space Shuttle Orbiter Nose-Gear Tire

    NASA Technical Reports Server (NTRS)

    Tanner, John A.

    1996-01-01

    A computational procedure is presented for the solution of frictional contact problems for aircraft tires. A Space Shuttle nose-gear tire is modeled using a two-dimensional laminated anisotropic shell theory which includes the effects of variations in material and geometric parameters, transverse-shear deformation, and geometric nonlinearities. Contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with both contact and friction conditions. The contact-friction algorithm is based on a modified Coulomb friction law. A modified two-field, mixed-variational principle is used to obtain elemental arrays. This modification consists of augmenting the functional of that principle by two terms: the Lagrange multiplier vector associated with normal and tangential node contact-load intensities and a regularization term that is quadratic in the Lagrange multiplier vector. These capabilities and computational features are incorporated into an in-house computer code. Experimental measurements were taken to define the response of the Space Shuttle nose-gear tire to inflation-pressure loads and to inflation-pressure loads combined with normal static loads against a rigid flat plate. These experimental results describe the meridional growth of the tire cross section caused by inflation loading, the static load-deflection characteristics of the tire, the geometry of the tire footprint under static loading conditions, and the normal and tangential load-intensity distributions in the tire footprint for the various static vertical loading conditions. Numerical results were obtained for the Space Shuttle nose-gear tire subjected to inflation pressure loads and combined inflation pressure and contact loads against a rigid flat plate. The experimental measurements and the numerical results are compared.

  1. Computational methods for frictional contact with applications to the Space Shuttle orbiter nose-gear tire: Comparisons of experimental measurements and analytical predictions

    NASA Technical Reports Server (NTRS)

    Tanner, John A.

    1996-01-01

    A computational procedure is presented for the solution of frictional contact problems for aircraft tires. A Space Shuttle nose-gear tire is modeled using a two-dimensional laminated anisotropic shell theory which includes the effects of variations in material and geometric parameters, transverse-shear deformation, and geometric nonlinearities. Contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with both contact and friction conditions. The contact-friction algorithm is based on a modified Coulomb friction law. A modified two-field, mixed-variational principle is used to obtain elemental arrays. This modification consists of augmenting the functional of that principle by two terms: the Lagrange multiplier vector associated with normal and tangential node contact-load intensities and a regularization term that is quadratic in the Lagrange multiplier vector. These capabilities and computational features are incorporated into an in-house computer code. Experimental measurements were taken to define the response of the Space Shuttle nose-gear tire to inflation-pressure loads and to inflation-pressure loads combined with normal static loads against a rigid flat plate. These experimental results describe the meridional growth of the tire cross section caused by inflation loading, the static load-deflection characteristics of the tire, the geometry of the tire footprint under static loading conditions, and the normal and tangential load-intensity distributions in the tire footprint for the various static vertical-loading conditions. Numerical results were obtained for the Space Shuttle nose-gear tire subjected to inflation pressure loads and combined inflation pressure and contact loads against a rigid flat plate. The experimental measurements and the numerical results are compared.

  2. Force Limited Vibration Testing: Computation C2 for Real Load and Probabilistic Source

    NASA Astrophysics Data System (ADS)

    Wijker, J. J.; de Boer, A.; Ellenbroek, M. H. M.

    2014-06-01

    To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force limited random vibration testing (FLVT) in a number of publications, in which the factor C2 is, besides the random vibration specification, the total mass, and the turnover frequency of the load (test item), a very important parameter. A number of computational methods to estimate C2 are described in the literature, i.e., the simple and the complex two-degrees-of-freedom systems (STDFS and CTDFS), respectively. Both the STDFS and the CTDFS describe in a very reduced (simplified) manner the load and the source (the adjacent structure transferring the excitation forces to the test item, e.g., a spacecraft supporting an instrument). The motivation of this work is to establish a method for computing a realistic value of C2 to perform a representative random vibration test based on force limitation, when the description of the adjacent structure (source) is more or less unknown. Marchand formulated a conservative estimation of C2 based on the maximum modal effective mass and damping of the test item (load), when no description of the supporting structure (source) is available [13]. Marchand also discussed the formal description of obtaining C2, using the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source, in combination with the apparent mass and total mass of the load. This method is very convenient for computing the factor C2; however, finite element models are needed to compute the PSD spectra of both the acceleration and the force at the interface between load and source. Stevens presented the coupled systems modal approach (CSMA), where simplified asparagus patch models (parallel-oscillator representations) of load and source are connected, consisting of modal effective masses and the spring stiffnesses associated with the natural frequencies. When the random acceleration vibration specification is given, the CSMA method is suitable for computing the value of the parameter C2. When no mathematical model of the source is available, estimates of the value of C2 can be found in the literature. In this paper a probabilistic mathematical representation of the unknown source is proposed, such that the asparagus patch model of the source can be approximated. The computation of the value of C2 can then be done in conjunction with the CSMA method, knowing the apparent mass of the load and the random acceleration specification at the interface between load and source. Strength and stiffness design rules for spacecraft, instrumentation, units, etc., as mentioned in ECSS standards and handbooks, launch vehicle user's manuals, papers, books, etc., are applied, and a probabilistic description of the design parameters is foreseen. As an example, a simple experiment has been worked out.
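
    The semi-empirical force limit that C2 feeds into can be sketched as follows. This is a hedged illustration of the form S_FF = C2 * M0^2 * S_AA with a roll-off above the load's first resonance (cf. NASA-HDBK-7004); the numerical values are hypothetical.

```python
# Hedged sketch of the semi-empirical force limit used in FLVT:
# S_FF = C2 * M0**2 * S_AA below the load's first resonance f0,
# rolling off above it. All values here are illustrative.

def force_limit_psd(freq_hz, accel_psd_g2hz, c2, total_mass_kg, f0_hz, n=2):
    """Force PSD limit (N^2/Hz) from the interface acceleration PSD."""
    g = 9.80665  # m/s^2 per g, to convert the g^2/Hz acceleration spec
    base = c2 * (total_mass_kg * g) ** 2 * accel_psd_g2hz
    if freq_hz <= f0_hz:
        return base
    return base * (f0_hz / freq_hz) ** n   # roll-off above the resonance

# Hypothetical case: 10 kg load, C2 = 2, 0.04 g^2/Hz spec, f0 = 100 Hz
limit = force_limit_psd(freq_hz=50.0, accel_psd_g2hz=0.04,
                        c2=2.0, total_mass_kg=10.0, f0_hz=100.0)
print(limit)
```

    The entire debate summarized in the abstract is about choosing C2 realistically: too large and the test over-tests the load, too small and it under-tests it.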

  3. Aeroelastic Modeling of a Nozzle Startup Transient

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen

    2014-01-01

    Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during testing. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structural deformation during major side load events, leading to structural damage if strengthening measures are not taken. The modeling picture is incomplete without the capability to address the two-way responses between the structure and the fluid. The objective of this study is to develop a tightly coupled aeroelastic modeling algorithm by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based formulation, while the computational structural dynamics component is developed under the framework of modal analysis. Transient aeroelastic nozzle startup analyses at sea level were performed, and the computed transient nozzle fluid-structure interaction physics are presented.
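
    A modal-analysis structural component of the kind described reduces the structure to decoupled single-degree-of-freedom oscillators, one per retained mode, each driven by the projected aerodynamic force. A minimal sketch with hypothetical modal parameters:

```python
import math

# Minimal sketch of a modal structural dynamics component: each mode is
# a decoupled 1-DOF oscillator. Semi-implicit Euler is used purely for
# brevity; parameters are illustrative, not from the study.

def step_mode(q, qdot, force, omega, zeta, dt):
    """Advance one modal coordinate by one time step."""
    qddot = force - 2.0 * zeta * omega * qdot - omega ** 2 * q
    qdot += qddot * dt
    q += qdot * dt
    return q, qdot

# Free decay of a 10 Hz mode with 2% damping over one second:
omega = 2.0 * math.pi * 10.0
q, qdot = 1.0, 0.0
for _ in range(10000):
    q, qdot = step_mode(q, qdot, force=0.0, omega=omega, zeta=0.02, dt=1e-4)
print(abs(q) < 1.0)  # amplitude decays from the initial displacement
```

    In the tightly coupled scheme, the modal force on the right-hand side would come from integrating the CFD wall pressures against each mode shape at every time step, and the resulting deformation feeds back into the fluid mesh.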

  4. Dynamic virtual machine allocation policy in cloud computing complying with service level agreement using CloudSim

    NASA Astrophysics Data System (ADS)

    Aneri, Parikh; Sumathy, S.

    2017-11-01

    Cloud computing provides services over the internet, delivering application resources and data to users on demand. Cloud computing is based on a consumer-provider model: the provider offers resources that consumers access in order to build applications according to their needs. A cloud data center is a large pool of shared resources for cloud users to access. Virtualization is the heart of the cloud computing model; it provides virtual machines configured per application, and applications are free to choose their own configuration. On one hand there is a huge number of resources; on the other, a huge number of requests must be served effectively. Therefore, the resource allocation and scheduling policies play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy using the Hungarian algorithm, which provides dynamic load balancing together with a monitor component. The monitor component helps to increase cloud resource utilization by observing the state of the Hungarian-algorithm-driven allocation and altering it based on artificial intelligence. CloudSim, used in this proposal, is an extensible toolkit that simulates the cloud computing environment.
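
    The assignment problem that the Hungarian algorithm solves can be illustrated with a tiny brute-force version: map each request to a host so that the total cost is minimum. The Hungarian algorithm reaches the same optimum in O(n^3); the cost matrix below is hypothetical.

```python
from itertools import permutations

# Illustration of the assignment problem underlying Hungarian-algorithm
# load balancing: brute force over permutations, for clarity only.

def optimal_assignment(cost):
    """cost[i][j] = cost of placing request i on host j (square matrix)."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

# Hypothetical request-to-host cost matrix:
cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
assignment, total = optimal_assignment(cost)
print(assignment, total)
```

    A monitor component in the described design would rebuild such a cost matrix from observed VM and host state and re-solve the assignment as load shifts.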

  5. The Role of Visual Noise in Influencing Mental Load and Fatigue in a Steady-State Motion Visual Evoked Potential-Based Brain-Computer Interface.

    PubMed

    Xie, Jun; Xu, Guanghua; Luo, Ailing; Li, Min; Zhang, Sicong; Han, Chengcheng; Yan, Wenqiang

    2017-08-14

    As a spatial selective attention-based brain-computer interface (BCI) paradigm, the steady-state visual evoked potential (SSVEP) BCI has the advantages of a high information transfer rate, high tolerance to artifacts, and robust performance across users. However, its benefits come at the cost of the mental load and fatigue that occur during concentration on the visual stimuli. Noise, as a ubiquitous random perturbation, may be exploited by the human visual system to enhance higher-level brain functions. In this study, a novel steady-state motion visual evoked potential (SSMVEP, i.e., one kind of SSVEP)-based BCI paradigm with spatiotemporal visual noise was used to investigate the influence of noise on the compensation of mental load and fatigue deterioration during prolonged attention tasks. Changes in α, θ, and θ + α powers, the θ/α ratio, and the electroencephalography (EEG) properties of amplitude, signal-to-noise ratio (SNR), and online accuracy were used to evaluate mental load and fatigue. We showed that presenting moderate visual noise to participants could reliably alleviate mental load and fatigue during online operation of a visual BCI that places demands on attentional processes. This demonstrates that noise can offer a superior solution for implementing visual attention-based BCI applications.
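
    The θ/α index used above is a ratio of EEG band powers. A minimal sketch on a synthetic signal, with the conventional band edges (theta 4-8 Hz, alpha 8-13 Hz) assumed and a plain DFT standing in for a real spectral estimator:

```python
import math

# Sketch of a theta/alpha fatigue index: band power from a plain DFT of
# one epoch. The synthetic signal below stands in for real EEG.

def band_power(signal, fs, lo, hi):
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f < hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

fs, n = 128, 256
# Synthetic epoch: strong 6 Hz (theta) plus weak 10 Hz (alpha) component.
sig = [math.sin(2 * math.pi * 6 * t / fs) + 0.3 * math.sin(2 * math.pi * 10 * t / fs)
       for t in range(n)]
theta = band_power(sig, fs, 4.0, 8.0)
alpha = band_power(sig, fs, 8.0, 13.0)
print(theta / alpha > 1.0)  # a rising theta/alpha ratio signals fatigue
```

    In fatigue studies a rising θ/α ratio over the session is the typical marker; the study above tracks how visual noise moderates that trend.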

  6. Cognitive Load Theory vs. Constructivist Approaches: Which Best Leads to Efficient, Deep Learning?

    ERIC Educational Resources Information Center

    Vogel-Walcutt, J. J.; Gebrim, J. B.; Bowers, C.; Carper, T. M.; Nicholson, D.

    2011-01-01

    Computer-assisted learning, in the form of simulation-based training, is heavily focused upon by the military. Because computer-based learning offers highly portable, reusable, and cost-efficient training options, the military has dedicated significant resources to the investigation of instructional strategies that improve learning efficiency…

  7. Development of a personal computer-based secondary task procedure as a surrogate for a driving simulator

    DOT National Transportation Integrated Search

    2007-08-01

    This research was conducted to develop and test a personal computer-based study procedure (PCSP) with secondary task loading for use in human factors laboratory experiments in lieu of a driving simulator to test reading time and understanding of traf...

  8. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing

    PubMed Central

    Hu, Yu-Chen

    2018-01-01

    The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Moreover, for residential customers implementing DR, maintaining a balance between energy consumption cost and users’ comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing, one of the emerging trends in engineering technology, optimizes cloud computing technologies by pushing computing capability to the IoT edge of the Internet; it addresses bandwidth-intensive content and latency-sensitive applications among sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized to automatically determine the physical characteristics of power-intensive home appliances from users’ life patterns. The swarm intelligence method, constrained PSO, is used to minimize the energy consumption cost while considering users’ comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows that the proposed method can re-shape the loads of home appliances in response to DR signals. Moreover, a reduction in peak power consumption of 13.97% is achieved. PMID:29702607

  9. Multiplexer/Demultiplexer Loading Tool (MDMLT)

    NASA Technical Reports Server (NTRS)

    Brewer, Lenox Allen; Hale, Elizabeth; Martella, Robert; Gyorfi, Ryan

    2012-01-01

    The purpose of the MDMLT is to improve the reliability and speed of loading multiplexers/demultiplexers (MDMs) in the Software Development and Integration Laboratory (SDIL) by automating the configuration management (CM) of the loads in the MDMs, automating the loading procedure, and providing the capability to load multiple or all MDMs concurrently. Loading may be accomplished in parallel or for single MDMs (remotely). The MDMLT is a Web-based tool that is capable of loading the entire International Space Station (ISS) MDM configuration in parallel. It is able to load Flight Equivalent Units (FEUs) and enhanced, standard, and prototype MDMs, as well as both EEPROM (Electrically Erasable Programmable Read-Only Memory) and SSMMU (Solid State Mass Memory Unit) mass memory. This software has extensive configuration management to track loading history, and it can load the entire ISS MDM configuration of 49 MDMs in approximately 30 minutes, as opposed to the 36 hours previously required using the flight method of S-band uplink. The laptop version recently added to the MDMLT suite allows remote lab loading, with the CM information entered into a common database when the laptop is reconnected to the network. This allows the program to reconfigure the test rigs quickly between shifts, allowing the lab to support a variety of onboard configurations during a single day, based on upcoming or current missions. The MDMLT Computer Software Configuration Item (CSCI) supports a Web-based command and control interface to the user. An interface to the SDIL File Transfer Protocol (FTP) server is supported to import Integrated Flight Loads (IFLs) and Internal Product Release Notes (IPRNs) into the database. An interface to the Monitor and Control System (MCS) is supported to control the power state and to enable or disable the debug port of the MDMs to be loaded. 
    Two direct interfaces to the MDM are supported: a serial interface (debug port) to receive MDM memory dump data and the calculated checksum, and the Small Computer System Interface (SCSI) to transfer load files to MDMs with hard disks. File transfer from the MDM Loading Tool to EEPROM within the MDM is performed via the MIL-STD-1553 bus, making use of the Real-Time Input/Output Processors (RTIOPs) when using the rig-based MDMLT, and via a bus box when using the laptop MDMLT. The bus box is a cost-effective alternative to PC-1553 cards for the laptop. This system can be modified and adapted to any avionics laboratory for spacecraft computer loading, ship avionics, or aircraft avionics where multiple configurations and strong configuration management of software/firmware loads are required.

  10. Methods for computing water-quality loads at sites in the U.S. Geological Survey National Water Quality Network

    USGS Publications Warehouse

    Lee, Casey J.; Murphy, Jennifer C.; Crawford, Charles G.; Deacon, Jeffrey R.

    2017-10-24

    The U.S. Geological Survey publishes information on concentrations and loads of water-quality constituents at 111 sites across the United States as part of the U.S. Geological Survey National Water Quality Network (NWQN). This report details historical and updated methods for computing water-quality loads at NWQN sites. The primary updates to historical load estimation methods include (1) an adaptation to methods for computing loads to the Gulf of Mexico; (2) the inclusion of loads computed using the Weighted Regressions on Time, Discharge, and Season (WRTDS) method; and (3) the inclusion of loads computed using continuous water-quality data. Loads computed using WRTDS and continuous water-quality data are provided along with those computed using historical methods. Various aspects of method updates are evaluated in this report to help users of water-quality loading data determine which estimation methods best suit their particular application.
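
    The elementary computation common to all of these methods, load = concentration × discharge integrated over time, can be sketched as follows; the constituent values are illustrative.

```python
# Sketch of the basic load computation underlying the methods above:
# instantaneous load = concentration x discharge, integrated over time.
# Units here: concentration in mg/L (= g/m^3), discharge in m^3/s.

def daily_load_kg(conc_mg_per_l, discharge_m3_per_s):
    """Constituent load in kg/day for one day at the given mean values."""
    grams_per_second = conc_mg_per_l * discharge_m3_per_s  # mg/L == g/m^3
    return grams_per_second * 86400.0 / 1000.0             # s/day, g -> kg

# Example: 2 mg/L nitrate at a mean discharge of 50 m^3/s:
print(daily_load_kg(2.0, 50.0))  # -> 8640.0 kg/day
```

    Methods such as WRTDS differ in how they estimate the concentration term on unsampled days, not in this underlying arithmetic.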

  11. Dynamic load balance scheme for the DSMC algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jin; Geng, Xiangren; Jiang, Dingwu

    The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been used over a wide range of rarefied flow problems in the past 40 years. While the DSMC is suitable for parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the number of simulator particles upon it. Since most flows are impulsively started with an initial distribution of particles quite different from the steady state, the number of simulator particles changes dramatically, and a load balance based upon the initial distribution of particles breaks down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and taking the number of simulator particles in each cell as weight information, a repartitioning based upon the principle that each processor handles approximately an equal number of simulator particles has been achieved. The computation pauses several times to update the particle counts in each processor and repartition the whole domain, so the load balance across the processor array holds for the duration of the computation and the parallel efficiency is improved effectively. The benchmark solution of a cylinder submerged in hypersonic flow has been simulated numerically, along with hypersonic flow past a complex wing-body configuration. The results show that, in both cases, the computational time can be reduced by about 50%.
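
    The repartitioning trigger can be sketched as an imbalance test on per-processor particle counts; the tolerance and counts below are illustrative, and the actual rebalance in the paper is done by a METIS weighted repartition rather than this check alone.

```python
# Sketch of a repartitioning trigger: repartition whenever the
# particle-count imbalance across processors exceeds a tolerance.
# Counts and tolerance are illustrative.

def imbalance(particle_counts):
    """Max-to-mean ratio of per-processor particle counts (1.0 = perfect)."""
    mean = sum(particle_counts) / len(particle_counts)
    return max(particle_counts) / mean

def needs_repartition(particle_counts, tol=1.2):
    return imbalance(particle_counts) > tol

counts = [900, 1100, 2500, 700]          # simulator particles per processor
print(needs_repartition(counts))         # True: one processor is overloaded
```

    Pausing to repartition only when the imbalance crosses the tolerance keeps the repartitioning cost small relative to the savings from a balanced particle distribution.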

  12. Scheduling based on a dynamic resource connection

    NASA Astrophysics Data System (ADS)

    Nagiyev, A. E.; Botygin, I. A.; Shersntneva, A. I.; Konyaev, P. A.

    2017-02-01

    The practical use of distributed computing systems is associated with many problems, including the organization of effective interaction between agents located at the nodes of the system, the specific configuration of each node to perform a certain task, the effective distribution of the available information and computational resources of the system, and the control of the multithreading that implements the logic of solving research problems. The article describes a method of computational load balancing in distributed automatic systems, oriented toward multi-agent and multi-threaded data processing. A scheme for controlling the processing of requests from terminal devices, providing effective dynamic scaling of computing power under peak load, is offered. The results of model experiments on the developed load scheduling algorithm are set out. These results show the effectiveness of the algorithm even with a significant expansion in the number of connected nodes and scaling of the distributed computing system architecture.

  13. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  14. A Computer-Based System Integrating Instruction and Information Retrieval: A Description of Some Methodological Considerations.

    ERIC Educational Resources Information Center

    Selig, Judith A.; And Others

    This report, summarizing the activities of the Vision Information Center (VIC) in the field of computer-assisted instruction from December 1966 to August 1967, describes the methodology used to load a large body of information--a programmed text on basic ophthalmology--onto a computer for subsequent information retrieval and computer-assisted…

  15. A handheld computer as part of a portable in vivo knee joint load monitoring system

    PubMed Central

    Szivek, JA; Nandakumar, VS; Geffre, CP; Townsend, CP

    2009-01-01

    In vivo measurement of the loads and pressures acting on articular cartilage in the knee joint during various activities and rehabilitative therapies following focal defect repair will provide a means of designing activities that encourage faster and more complete healing of focal defects. It was the goal of this study to develop a totally portable monitoring system that could be used during various activities and allow continuous monitoring of forces acting on the knee. In order to make the monitoring system portable, a handheld computer with custom software, a USB-powered miniature wireless receiver, and a battery-powered coil were developed to replace the currently used computer, AC-powered benchtop receiver, and power supply. A Dell handheld running the Windows Mobile operating system (OS), programmed using LabVIEW, was used to collect strain measurements. Measurements collected by the handheld-based system connected to the miniature wireless receiver were compared with measurements collected by a hardwired system and a computer-based system during benchtop testing and in vivo testing. The newly developed handheld-based system had a maximum accuracy of 99% when compared to the computer-based system. PMID:19789715

  16. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing.

    PubMed

    Lin, Yu-Hsiu; Hu, Yu-Chen

    2018-04-27

The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in the down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers in modifying their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. For residential customers implementing DR, however, maintaining a balance between energy consumption cost and users' comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can further be featured with edge computing. In contrast with cloud computing, edge computing (a method of optimizing cloud computing technologies by driving computing capabilities to the IoT edge of the Internet, and one of the emerging trends in engineering technology) addresses bandwidth-intensive content and latency-sensitive applications between sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized to automatically determine the physical characteristics of power-intensive home appliances from users' life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users' comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows that the proposed method can re-shape appliance loads in response to DR signals. Moreover, a notable 13.97% reduction in peak power consumption is achieved.
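As a rough illustration of the constrained-PSO scheduling idea (with hypothetical hourly prices, a single 2 kW appliance, and an invented comfort penalty, none of which come from the paper), a minimal scheduler might look like:

```python
import random

random.seed(42)

# Hypothetical real-time hourly prices ($/kWh) with an evening peak.
prices = [0.08] * 7 + [0.12] * 4 + [0.10] * 6 + [0.20] * 4 + [0.09] * 3  # 24 h

POWER_KW, DURATION_H = 2.0, 3          # appliance rating and run time
PREFERRED_START, COMFORT_WEIGHT = 18, 0.05  # invented comfort model

def fitness(start_hour):
    """Energy cost plus a comfort penalty for deviating from the preferred
    start hour (illustrative objective, not the paper's exact formulation)."""
    s = min(max(int(round(start_hour)), 0), 24 - DURATION_H)
    cost = sum(prices[s + h] for h in range(DURATION_H)) * POWER_KW
    return cost + COMFORT_WEIGHT * abs(s - PREFERRED_START)

# Minimal PSO: particle positions are candidate start hours, clamped to [0, 21]
# (clamping is the constraint-handling step).
n_particles, iters = 12, 60
pos = [random.uniform(0, 21) for _ in range(n_particles)]
vel = [0.0] * n_particles
pbest = pos[:]
gbest = min(pos, key=fitness)
for _ in range(iters):
    for i in range(n_particles):
        vel[i] = (0.7 * vel[i]
                  + 1.5 * random.random() * (pbest[i] - pos[i])
                  + 1.5 * random.random() * (gbest - pos[i]))
        pos[i] = min(max(pos[i] + vel[i], 0.0), 21.0)
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i]
        if fitness(pos[i]) < fitness(gbest):
            gbest = pos[i]

best_start = min(max(int(round(gbest)), 0), 21)
```

With these invented prices the swarm steers the start time away from the 17:00-20:00 peak while staying near the preferred evening start.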

17. CASKS (Computer Analysis of Storage casKS): A microcomputer based analysis system for storage cask design review. User's manual to Version 1b (including program reference)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, T.F.; Gerhard, M.A.; Trummer, D.J.

CASKS (Computer Analysis of Storage casKS) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for evaluating safety analysis reports on spent-fuel storage casks. The bulk of the complete program and this user's manual are based upon the SCANS (Shipping Cask ANalysis System) program previously developed at LLNL. A number of enhancements and improvements were added to the original SCANS program to meet requirements unique to storage casks. CASKS is an easy-to-use system that calculates the global response of storage casks to impact loads, pressure loads, and thermal conditions. This provides reviewers with a tool for an independent check on analyses submitted by licensees. CASKS runs on microcomputers compatible with the IBM-PC family of computers. The system is composed of a series of menus, input programs, cask analysis programs, and output display programs. All data are entered through fill-in-the-blank input screens that contain descriptive data requests.

  18. Computation of maximum gust loads in nonlinear aircraft using a new method based on the matched filter approach and numerical optimization

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Heeg, Jennifer; Perry, Boyd, III

    1990-01-01

Time-correlated gust loads are time histories of two or more load quantities due to the same disturbance time history. Time correlation provides knowledge of the value (magnitude and sign) of one load when another is maximum. At least two analysis methods have been identified that are capable of computing maximized time-correlated gust loads for linear aircraft. Both methods solve for the unit-energy gust profile (gust velocity as a function of time) that produces the maximum load at a given location on a linear airplane. Time-correlated gust loads are obtained by re-applying this gust profile to the airplane and computing multiple simultaneous load responses. Such time histories are physically realizable and may be applied to aircraft structures. Within the past several years there has been much interest in obtaining a practical analysis method which is capable of solving the analogous problem for nonlinear aircraft. Such an analysis method has been the focus of an international committee of gust loads specialists formed by the U.S. Federal Aviation Administration and was the topic of a panel discussion at the Gust and Buffet Loads session at the 1989 SDM Conference in Mobile, Alabama. The kinds of nonlinearities common on modern transport aircraft are indicated. The Statistical Discrete Gust method is capable of being, but so far has not been, applied to nonlinear aircraft. To make the method practical for nonlinear applications, a search procedure is essential. Another method is based on Matched Filter Theory and, in its current form, is applicable to linear systems only. The purpose here is to present the status of an attempt to extend the matched filter approach to nonlinear systems. The extension uses Matched Filter Theory as a starting point and then employs a constrained optimization algorithm to attack the nonlinear problem.
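The linear Matched Filter Theory step described above can be sketched numerically: among unit-energy inputs, the Cauchy-Schwarz bound on the peak output of a linear system is attained by the time-reversed, normalized impulse response. The impulse response below is an invented damped oscillation, not any aircraft's gust-to-load transfer function:

```python
import numpy as np

# Illustrative discrete impulse response of a linear "load" system (a damped
# oscillation standing in for a gust-to-load response; assumed, not from the
# report).
n = 200
t = np.arange(n)
h = np.exp(-0.03 * t) * np.sin(0.3 * t)

# Matched filter result: the unit-energy "critical gust profile" is the
# time-reversed, normalized impulse response.
u = h[::-1] / np.linalg.norm(h)

# Load response to that profile; its peak attains the Cauchy-Schwarz bound.
y = np.convolve(u, h)
peak = y.max()
bound = np.linalg.norm(h)
```

Any other unit-energy gust profile produces a strictly smaller peak load, which is why the matched excitation serves as the maximizing starting point for the nonlinear search.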

  19. Estimating sediment discharge: Appendix D

    USGS Publications Warehouse

    Gray, John R.; Simões, Francisco J. M.

    2008-01-01

Sediment-discharge measurements usually are available on a discrete or periodic basis. However, estimates of sediment transport often are needed for unmeasured periods, such as when daily or annual sediment-discharge values are sought, or when estimates of transport rates for unmeasured or hypothetical flows are required. Selected methods for estimating suspended-sediment, bed-load, bed-material-load, and total-load discharges have been presented in some detail elsewhere in this volume. The purposes of this contribution are to present some limitations and potential pitfalls associated with obtaining and using the requisite data and equations to estimate sediment discharges and to provide guidance for selecting appropriate estimating equations. Records of sediment discharge are derived from data collected with sufficient frequency to obtain reliable estimates for the computational interval and period. Most sediment-discharge records are computed at daily or annual intervals based on periodically collected data, although some partial records represent discrete or seasonal intervals such as those for flood periods. The method used to calculate sediment-discharge records is dependent on the types and frequency of available data. Records for suspended-sediment discharge computed by methods described by Porterfield (1972) are most prevalent, in part because measurement protocols and computational techniques are well established and because suspended sediment composes the bulk of sediment discharges for many rivers. Discharge records for bed load, total load, or in some cases bed-material load plus wash load are less common. Reliable estimation of sediment discharges presupposes that the data on which the estimates are based are comparable and reliable. Unfortunately, data describing a selected characteristic of sediment were not necessarily derived (collected, processed, analyzed, or interpreted) in a consistent manner.
For example, bed-load data collected with different types of bed-load samplers may not be comparable (Gray et al. 1991; Childers 1999; Edwards and Glysson 1999). The total suspended solids (TSS) analytical method tends to produce concentration data from open-channel flows that are biased low with respect to their paired suspended-sediment concentration values, particularly when sand-size material composes more than about a quarter of the material in suspension. Instantaneous sediment-discharge values based on TSS data may differ from the more reliable product of suspended-sediment concentration values and the same water-discharge data by an order of magnitude (Gray et al. 2000; Bent et al. 2001; Glysson et al. 2000, 2001). An assessment of data comparability and reliability is an important first step in the estimation of sediment discharges. There are two approaches to obtaining values describing sediment loads in streams. One is based on direct measurement of the quantities of interest, and the other on relations developed between hydraulic parameters and sediment-transport potential. In the next sections, the most common techniques for both approaches are briefly addressed.
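A common example of the second approach (relations between hydraulic parameters and sediment-transport potential) is a power-law sediment rating curve, Qs = a·Q^b, fitted by least squares in log space. The paired observations below are synthetic, invented for illustration only:

```python
import math

# Hypothetical paired observations: water discharge Q (m^3/s) and sediment
# discharge Qs (t/day), synthesized near Qs = 0.05 * Q**1.8 with small noise.
Q  = [12.0, 35.0, 60.0, 110.0, 240.0, 410.0]
Qs = [4.5, 29.0, 82.0, 230.0, 990.0, 2480.0]

# Fit log10(Qs) = log10(a) + b * log10(Q) by ordinary least squares.
x = [math.log10(q) for q in Q]
y = [math.log10(qs) for qs in Qs]
n = len(x)
xm, ym = sum(x) / n, sum(y) / n
b = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
    sum((xi - xm) ** 2 for xi in x)
a = 10 ** (ym - b * xm)

def estimate_qs(q):
    """Estimated sediment discharge for an unmeasured flow (uncorrected; a
    log-transform bias correction would normally be applied in practice)."""
    return a * q ** b
```

Such a curve lets transport be estimated for unmeasured or hypothetical flows, subject to the comparability and bias caveats discussed in the text.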

  20. Airborne Electro-Mechanical Actuator Test Stand for Development of Prognostic Health Management Systems

    DTIC Science & Technology

    2010-10-01

based on a pre-defined UH-60 data format, then also computes the load and position profile information. File Profile Interface In order to test the ... of the data set. Figure 13 shows a typical motion profile executed over a period of about twenty minutes. Figure 14 shows the desired (computed) ... flight. The stand is connected to the aircraft data bus and the motion profiles for the test actuators, as well as the load applied to them, are

  1. Development of Environmental Load Estimation Model for Road Drainage Systems in the Early Design Phase

    NASA Astrophysics Data System (ADS)

    Park, Jin-Young; Lee, Dong-Eun; Kim, Byung-Soo

    2017-10-01

Due to increasing concern about climate change, efforts to reduce environmental load are continuously being made in the construction industry, and LCA (life cycle assessment) is presented as an effective method to assess environmental load. However, since LCA requires the construction-quantity information used for environmental load estimation, it is not utilized in environmental reviews in the early design phase, where such information is difficult to obtain. In this study, a construction-quantity computation system based on standard cross sections of road drainage facilities was developed to supply the quantities required for LCA using only information available in the early design phase, and an environmental load estimation model built on it was developed and verified. With a mean absolute error rate of 13.39%, the model proved effective for use in the early design phase.
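The verification metric quoted, a mean absolute error rate, can be illustrated with hypothetical section-level environmental loads (the numbers are invented, not the study's data):

```python
# Hypothetical environmental loads (kg CO2-eq) from the detailed design
# (taken as "actual") versus the standard-cross-section estimate for five
# drainage sections.
actual    = [1520.0, 980.0, 2210.0, 760.0, 1340.0]
estimated = [1395.0, 1110.0, 2050.0, 845.0, 1190.0]

# Mean absolute error rate: average of per-section |estimate - actual| / actual.
error_rates = [abs(e - a) / a for a, e in zip(actual, estimated)]
mae_rate = 100.0 * sum(error_rates) / len(error_rates)  # percent
```

For these invented sections the rate comes out near 10%, the same order as the study's reported 13.39%.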

  2. Efficient load rating and quantification of life-cycle damage of Indiana bridges due to overweight loads.

    DOT National Transportation Integrated Search

    2016-02-01

    In this study, a computational approach for conducting durability analysis of bridges using detailed finite element models is developed. The underlying approach adopted is based on the hypothesis that the two main factors affecting the life of a brid...

3. Using Mosix for Wide-Area Computational Resources

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.

  4. Integrated Computational Materials Engineering Development of Alternative Cu-Be Alloys

    DTIC Science & Technology

    2012-08-01

Be alloy replacement in highly loaded wear applications. ● Development of bushing designs for the enhancement of dynamic wear performance ... Material Properties and Tribological Characterization Cu-Based and Co-Based Alloy Concept Selection Requirements Definition Bushing Design and ... properties and cost for highly loaded bushing applications ● QuesTek's NAVAIR-funded SBIR Phase II program demonstrated the feasibility of designing Be-free

  5. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W.; Ober, Curtis C.

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  6. Distributed intrusion detection system based on grid security model

    NASA Astrophysics Data System (ADS)

    Su, Jie; Liu, Yahui

    2008-03-01

Grid computing has developed rapidly with the development of network technology, and it can solve large-scale complex computing problems by sharing large-scale computing resources. In a grid environment, we can realize a distributed, load-balanced intrusion detection system. This paper first discusses the security mechanism in grid computing and the function of PKI/CA in the grid security system, then gives the application of grid computing characteristics in a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it gives a distributed intrusion detection system based on the grid security model that can reduce processing delay and assure detection rates.

  7. Numerical analysis of stiffened shells of revolution. Volume 2: Users' manual for STAR-02S - shell theory automated for rotational structures - 2 (statics), digital computer program

    NASA Technical Reports Server (NTRS)

    Svalbonas, V.

    1973-01-01

    A procedure for the structural analysis of stiffened shells of revolution is presented. A digital computer program based on the Love-Reissner first order shell theory was developed. The computer program can analyze orthotropic thin shells of revolution, subjected to unsymmetric distributed loading or concentrated line loads, as well as thermal strains. The geometrical shapes of the shells which may be analyzed are described. The shell wall cross section can be a sheet, sandwich, or reinforced sheet or sandwich. General stiffness input options are also available.

  8. Voltage profile program for the Kennedy Space Center electric power distribution system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The Kennedy Space Center voltage profile program computes voltages at all busses greater than 1 Kv in the network under various conditions of load. The computation is based upon power flow principles and utilizes a Newton-Raphson iterative load flow algorithm. Power flow conditions throughout the network are also provided. The computer program is designed for both steady state and transient operation. In the steady state mode, automatic tap changing of primary distribution transformers is incorporated. Under transient conditions, such as motor starts etc., it is assumed that tap changing is not accomplished so that transformer secondary voltage is allowed to sag.
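A Newton-Raphson load flow of the kind described can be sketched for a minimal two-bus case: one slack bus feeding one PQ load bus. The line impedance and load values below are invented (not KSC network data), and a finite-difference Jacobian is used for brevity in place of the usual analytic one:

```python
import cmath

# One slack bus (V = 1.0 pu, angle 0) feeding one PQ load bus through a line
# with impedance z (illustrative per-unit values).
z = complex(0.02, 0.10)
y = 1.0 / z
P_load, Q_load = 0.8, 0.3          # specified load at bus 2, pu

def mismatch(v, th):
    """Active/reactive power mismatch at the PQ bus for |V2| = v, angle th."""
    V1 = 1.0 + 0j
    V2 = cmath.rect(v, th)
    I2 = y * (V2 - V1)             # current injected at bus 2 by the network
    S2 = V2 * I2.conjugate()       # injected complex power (negative for load)
    return S2.real + P_load, S2.imag + Q_load

# Newton-Raphson on (v, th) from a flat start, Jacobian by forward differences.
v, th, eps = 1.0, 0.0, 1e-7
for _ in range(20):
    dp, dq = mismatch(v, th)
    if abs(dp) < 1e-10 and abs(dq) < 1e-10:
        break
    dp_dv, dq_dv = [(a - b) / eps for a, b in zip(mismatch(v + eps, th), (dp, dq))]
    dp_dt, dq_dt = [(a - b) / eps for a, b in zip(mismatch(v, th + eps), (dp, dq))]
    det = dp_dv * dq_dt - dp_dt * dq_dv
    v  -= ( dq_dt * dp - dp_dt * dq) / det   # solve 2x2 Newton step
    th -= (-dq_dv * dp + dp_dv * dq) / det
```

The iteration converges in a few steps to a sagged load-bus voltage magnitude and a small negative angle, the qualitative behavior the abstract describes for heavily loaded busses.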

  9. Energy Savings in Cellular Networks Based on Space-Time Structure of Traffic Loads

    NASA Astrophysics Data System (ADS)

    Sun, Jingbo; Wang, Yue; Yuan, Jian; Shan, Xiuming

Since most of the energy consumed by the telecommunication infrastructure is due to the Base Transceiver Station (BTS), switching off BTSs when traffic load is low has been recognized as an effective way of saving energy. In this letter, an energy saving scheme is proposed to minimize the number of active BTSs based on the space-time structure of traffic loads as determined by principal component analysis. Compared to existing methods, our approach models traffic loads more accurately and has a much smaller input size. As it is implemented in an off-line manner, our scheme also avoids excessive communication and computing overheads. Simulation results show that the proposed method has a comparable performance in energy savings.
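Extracting the space-time structure of traffic loads with principal component analysis can be sketched on synthetic data: if every station follows a shared diurnal profile (scaled per station), a single component captures almost all the variance. The traffic model below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic hourly traffic for 20 base stations over one week: a shared
# diurnal profile, scaled per station, plus small noise (invented data).
hours = np.arange(24 * 7)
diurnal = 1.0 + 0.8 * np.sin(2 * np.pi * hours / 24 - np.pi / 2)
scales = rng.uniform(0.5, 2.0, size=20)
traffic = np.outer(diurnal, scales) + 0.02 * rng.standard_normal((len(hours), 20))

# Principal component analysis via SVD of the mean-centred matrix.
X = traffic - traffic.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()   # variance fraction per component
```

The dominant component recovers the shared diurnal pattern, which is the compact "input" such a scheme can use to decide which BTSs to switch off in each time slot.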

  10. Shock Location Dominated Transonic Flight Loads on the Active Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Lokos, William A.; Lizotte, Andrew; Lindsley, Ned J.; Stauf, Rick

    2005-01-01

    During several Active Aeroelastic Wing research flights, the shadow of the over-wing shock could be observed because of natural lighting conditions. As the plane accelerated, the shock location moved aft, and as the shadow passed the aileron and trailing-edge flap hinge lines, their associated hinge moments were substantially affected. The observation of the dominant effect of shock location on aft control surface hinge moments led to this investigation. This report investigates the effect of over-wing shock location on wing loads through flight-measured data and analytical predictions. Wing-root and wing-fold bending moment and torque and leading- and trailing-edge hinge moments have been measured in flight using calibrated strain gages. These same loads have been predicted using a computational fluid dynamics code called the Euler Navier-Stokes Three Dimensional Aeroelastic Code. The computational fluid dynamics study was based on the elastically deformed shape estimated by a twist model, which in turn was derived from in-flight-measured wing deflections provided by a flight deflection measurement system. During level transonic flight, the shock location dominated the wing trailing-edge control surface hinge moments. The computational fluid dynamics analysis based on the shape provided by the flight deflection measurement system produced very similar results and substantially correlated with the measured loads data.

  11. Comparison of two methods for estimating discharge and nutrient loads from Tidally affected reaches of the Myakka and Peace Rivers, West-Central Florida

    USGS Publications Warehouse

    Levesque, V.A.; Hammett, K.M.

    1997-01-01

    The Myakka and Peace River Basins constitute more than 60 percent of the total inflow area and contribute more than half the total tributary inflow to the Charlotte Harbor estuarine system. Water discharge and nutrient enrichment have been identified as significant concerns in the estuary, and consequently, it is important to accurately estimate the magnitude of discharges and nutrient loads transported by inflows from both rivers. Two methods for estimating discharge and nutrient loads from tidally affected reaches of the Myakka and Peace Rivers were compared. The first method was a tidal-estimation method, in which discharge and nutrient loads were estimated based on stage, water-velocity, discharge, and water-quality data collected near the mouths of the rivers. The second method was a traditional basin-ratio method in which discharge and nutrient loads at the mouths were estimated from discharge and loads measured at upstream stations. Stage and water-velocity data were collected near the river mouths by submersible instruments, deployed in situ, and discharge measurements were made with an acoustic Doppler current profiler. The data collected near the mouths of the Myakka River and Peace River were filtered, using a low-pass filter, to remove daily mixed-tide effects with periods less than about 2 days. The filtered data from near the river mouths were used to calculate daily mean discharge and nutrient loads. These tidal-estimation-method values were then compared to the basin-ratio-method values. Four separate 30-day periods of differing streamflow conditions were chosen for monitoring and comparison. Discharge and nutrient load estimates computed from the tidal-estimation and basin-ratio methods were most similar during high-flow periods. However, during high flow, the values computed from the tidal-estimation method for the Myakka and Peace Rivers were consistently lower than the values computed from the basin-ratio method. 
There were substantial differences between discharges and nutrient loads computed from the tidal-estimation and basin-ratio methods during low-flow periods. Furthermore, the differences between the methods were not consistent. Discharges and nutrient loads computed from the tidal-estimation method for the Myakka River were higher than those computed from the basin-ratio method, whereas discharges and nutrient loads computed by the tidal-estimation method for the Peace River were not only lower than those computed from the basin-ratio method, but they actually reflected a negative, or upstream, net movement. Short-term tidal measurement results should be used with caution, because antecedent conditions can influence the discharge and nutrient loads. Continuous tidal data collected over a 1- or 2-year period would be necessary to more accurately estimate the tidally affected discharge and nutrient loads for the Myakka and Peace River Basins.
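The low-pass filtering step, removing mixed-tide effects with periods shorter than about 2 days, can be illustrated with a simple centred moving average on synthetic hourly data (the USGS work used its own filter design; this sketch and its numbers are invented):

```python
import numpy as np

# Synthetic hourly "discharge" series: a slowly varying river signal plus a
# semidiurnal tidal oscillation (12.42 h period), standing in for a
# stage/velocity-derived discharge record near a river mouth.
t = np.arange(0, 24 * 30)                                 # 30 days, hourly
river = 50.0 + 10.0 * np.sin(2 * np.pi * t / (24 * 15))   # 15-day variation
tide = 40.0 * np.sin(2 * np.pi * t / 12.42)               # tidal band
series = river + tide

# Centred ~2-day moving average: a crude low-pass that strongly attenuates
# the tidal band while passing the multi-day river signal.
window = 49                                               # hours, odd
kernel = np.ones(window) / window
filtered = np.convolve(series, kernel, mode="same")

# Away from the edge-affected ends, the filtered series tracks the river
# signal; the large tidal oscillation is almost entirely removed.
interior = slice(window, len(t) - window)
residual = np.abs(filtered[interior] - river[interior]).max()
```

The filtered series is what the daily mean discharges and loads would then be computed from.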

  12. Identity-Based Authentication for Cloud Computing

    NASA Astrophysics Data System (ADS)

    Li, Hongwei; Dai, Yuanshun; Tian, Ling; Yang, Haomiao

    Cloud computing is a recently developed new technology for complex systems with massive-scale services sharing among numerous users. Therefore, authentication of both users and services is a significant issue for the trust and security of the cloud computing. SSL Authentication Protocol (SAP), once applied in cloud computing, will become so complicated that users will undergo a heavily loaded point both in computation and communication. This paper, based on the identity-based hierarchical model for cloud computing (IBHMCC) and its corresponding encryption and signature schemes, presented a new identity-based authentication protocol for cloud computing and services. Through simulation testing, it is shown that the authentication protocol is more lightweight and efficient than SAP, specially the more lightweight user side. Such merit of our model with great scalability is very suited to the massive-scale cloud.

  13. 10 CFR Appendix A to Subpart U of... - Sampling Plan for Enforcement Testing of Electric Motors

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... based on a 20 percent tolerance in the total power loss at full-load and fixed output power. Given the... performance of the n1 units in the first sample as follows: ER83AD04.005 where Xi is the measured full-load efficiency of unit i. Step 3. Compute the sample standard deviation (S1) of the measured full-load efficiency...
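Steps 2 and 3 quoted above, the sample mean and the sample standard deviation S1 of the measured full-load efficiencies, amount to the following (the efficiency values are invented, not test data from the rule):

```python
import math

# Hypothetical measured full-load efficiencies (percent) for a first sample
# of n1 = 5 units.
x = [91.0, 90.6, 91.4, 90.8, 91.2]
n1 = len(x)

# Step 2: sample mean of the measured full-load efficiency.
x_bar = sum(x) / n1

# Step 3: sample standard deviation S1 (n - 1 in the denominator).
s1 = math.sqrt(sum((xi - x_bar) ** 2 for xi in x) / (n1 - 1))
```

S1 then feeds the rule's tolerance check on total power loss at full load.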

  14. 10 CFR Appendix A to Subpart U of... - Sampling Plan for Enforcement Testing of Electric Motors

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... based on a 20 percent tolerance in the total power loss at full-load and fixed output power. Given the... performance of the n1 units in the first sample as follows: ER83AD04.005 where Xi is the measured full-load efficiency of unit i. Step 3. Compute the sample standard deviation (S1) of the measured full-load efficiency...

  15. Preparation, characterization, drug release and computational modelling studies of antibiotics loaded amorphous chitin nanoparticles.

    PubMed

    Gayathri, N K; Aparna, V; Maya, S; Biswas, Raja; Jayakumar, R; Mohan, C Gopi

    2017-12-01

We present a computational investigation of the binding affinity of different types of drugs with chitin nanocarriers. Understanding the chitin polymer-drug interaction is important to design and optimize chitin-based drug delivery systems. The binding affinity of three different anti-bacterial drugs, Ethionamide (ETA), Methacycline (MET), and Rifampicin (RIF), with amorphous chitin nanoparticles (AC-NPs) was studied by integrating computational and experimental techniques. The binding energies (BE) of hydrophobic ETA, hydrophilic MET, and hydrophobic RIF with respect to AC-NPs were -7.3 kcal/mol, -5.1 kcal/mol, and -8.1 kcal/mol, respectively, from molecular docking studies. This theoretical result was in good correlation with the experimental drug loading and drug entrapment efficiencies on AC-NPs of MET (3.5±0.1 and 25±2%), ETA (5.6±0.02 and 45±4%), and RIF (8.9±0.20 and 53±5%), respectively. Stability studies of the drug-encapsulated nanoparticles showed stable values of size, zeta potential, and polydispersity index at 6°C. The correlation between computational BE and experimental drug entrapment efficiencies of RIF, ETA, and MET with four AC-NP strands was 0.999, while that for the drug loading efficiencies was 0.854. Further, the molecular docking results predict the atomic-level details, derived from the electrostatic, hydrogen bonding, and hydrophobic interactions of the drug and nanoparticle, governing encapsulation and loading in the chitin-based host-guest nanosystems. The present results thus reveal insights into drug loading and drug delivery and have the potential to reduce the time and cost of optimizing, developing, and discovering new antibiotic drug delivery nanosystems. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    1999-01-01

The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator, since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG, since refinement occurs in a more load-balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  17. Method of up-front load balancing for local memory parallel processors

    NASA Technical Reports Server (NTRS)

    Baffes, Paul Thomas (Inventor)

    1990-01-01

    In a parallel processing computer system with multiple processing units and shared memory, a method is disclosed for uniformly balancing the aggregate computational load in, and utilizing minimal memory by, a network having identical computations to be executed at each connection therein. Read-only and read-write memory are subdivided into a plurality of process sets, which function like artificial processing units. Said plurality of process sets is iteratively merged and reduced to the number of processing units without exceeding the balance load. Said merger is based upon the value of a partition threshold, which is a measure of the memory utilization. The turnaround time and memory savings of the instant method are functions of the number of processing units available and the number of partitions into which the memory is subdivided. Typical results of the preferred embodiment yielded memory savings of from sixty to seventy five percent.

  18. Analysis and Research on Spatial Data Storage Model Based on Cloud Computing Platform

    NASA Astrophysics Data System (ADS)

    Hu, Yong

    2017-12-01

In this paper, the data processing and storage characteristics of cloud computing are analyzed and studied. On this basis, a cloud computing data storage model based on a BP neural network is proposed. In this model, a server cluster is chosen according to the attributes of the data, yielding a spatial data storage model with load balancing that is feasible and offers practical application advantages.

  19. Tools for Early Prediction of Drug Loading in Lipid-Based Formulations

    PubMed Central

    2015-01-01

Identification of the usefulness of lipid-based formulations (LBFs) for delivery of poorly water-soluble drugs is to date mainly experimentally based. In this work we used a diverse drug data set and more than 2,000 solubility measurements to develop experimental and computational tools to predict the loading capacity of LBFs. Computational models were developed to enable in silico prediction of solubility, and hence drug loading capacity, in the LBFs. Drug solubility in mixed mono-, di-, triglycerides (Maisine 35-1 and Capmul MCM EP) correlated (R2 0.89) as well as the drug solubility in Carbitol and other ethoxylated excipients (PEG400, R2 0.85; Polysorbate 80, R2 0.90; Cremophor EL, R2 0.93). A melting point below 150 °C was observed to result in a reasonable solubility in the glycerides. The loading capacity in LBFs was accurately calculated from solubility data in single excipients (R2 0.91). In silico models, without the demand of experimentally determined solubility, also gave good predictions of the loading capacity in these complex formulations (R2 0.79). The framework established here gives a better understanding of drug solubility in single excipients and of LBF loading capacity. The large data set studied revealed that experimental screening efforts can be rationalized by solubility measurements in key excipients or from solid state information. For the first time it was shown that loading capacity in complex formulations can be accurately predicted using molecular information extracted from calculated descriptors and thermal properties of the crystalline drug. PMID:26568134

  20. Tools for Early Prediction of Drug Loading in Lipid-Based Formulations.

    PubMed

    Alskär, Linda C; Porter, Christopher J H; Bergström, Christel A S

    2016-01-04

Identification of the usefulness of lipid-based formulations (LBFs) for delivery of poorly water-soluble drugs is to date mainly experimentally based. In this work we used a diverse drug data set and more than 2,000 solubility measurements to develop experimental and computational tools to predict the loading capacity of LBFs. Computational models were developed to enable in silico prediction of solubility, and hence drug loading capacity, in the LBFs. Drug solubility in mixed mono-, di-, triglycerides (Maisine 35-1 and Capmul MCM EP) correlated (R(2) 0.89) as well as the drug solubility in Carbitol and other ethoxylated excipients (PEG400, R(2) 0.85; Polysorbate 80, R(2) 0.90; Cremophor EL, R(2) 0.93). A melting point below 150 °C was observed to result in a reasonable solubility in the glycerides. The loading capacity in LBFs was accurately calculated from solubility data in single excipients (R(2) 0.91). In silico models, without the demand of experimentally determined solubility, also gave good predictions of the loading capacity in these complex formulations (R(2) 0.79). The framework established here gives a better understanding of drug solubility in single excipients and of LBF loading capacity. The large data set studied revealed that experimental screening efforts can be rationalized by solubility measurements in key excipients or from solid state information. For the first time it was shown that loading capacity in complex formulations can be accurately predicted using molecular information extracted from calculated descriptors and thermal properties of the crystalline drug.
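The claim that loading capacity can be accurately calculated from single-excipient solubility data is, at its core, a regression problem. A minimal least-squares sketch with invented solubility/loading pairs (not the paper's data set) shows the shape of such a calculation:

```python
# Hypothetical data: drug solubility in a dominant single excipient (mg/g)
# versus measured loading capacity in the formulated LBF (mg/g).
solubility = [12.0, 25.0, 40.0, 55.0, 80.0, 120.0]
loading    = [10.0, 22.0, 33.0, 47.0, 70.0, 101.0]

# Ordinary least-squares fit: loading ≈ slope * solubility + intercept.
n = len(solubility)
sx, sy = sum(solubility), sum(loading)
sxx = sum(x * x for x in solubility)
sxy = sum(x * y for x, y in zip(solubility, loading))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Coefficient of determination R^2 for the fit.
pred = [slope * x + intercept for x in solubility]
ss_res = sum((y - p) ** 2 for y, p in zip(loading, pred))
ss_tot = sum((y - sy / n) ** 2 for y in loading)
r2 = 1.0 - ss_res / ss_tot
```

In the paper's framework the predictors are richer (solubilities in several key excipients, calculated descriptors, thermal properties), but the quality-of-fit reporting via R² is of this kind.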

  1. A Framework to Design the Computational Load Distribution of Wireless Sensor Networks in Power Consumption Constrained Environments

    PubMed Central

    Sánchez-Álvarez, David; Rodríguez-Pérez, Francisco-Javier

    2018-01-01

In this paper, we present a work based on the computational load distribution among the homogeneous nodes and the Hub/Sink of Wireless Sensor Networks (WSNs). The main contribution of the paper is an early decision support framework helping WSN designers make decisions about computational load distribution for those WSNs where power consumption is a key issue. (By "framework" we mean a support tool for making decisions, in which executive judgment can be combined with the WSN designer's set of mathematical tools; this work shows the need to include load distribution as an integral component of the WSN system when making early decisions regarding energy consumption.) The framework takes advantage of the idea that balancing the computational load between sensor nodes and the Hub/Sink can improve energy consumption for the whole WSN, or at least for its battery-powered nodes. The approach is not trivial, and it takes into account related issues such as the required data distribution and the availability of nodes and Hub/Sink given their connectivity features and duty cycle. For a practical demonstration, the proposed framework is applied to an agriculture case study, a sector very relevant in our region. In this kind of rural context, distances, the low margins imposed by vegetable selling prices, and the lack of continuous power supplies can make sensing solutions viable or inviable for farmers. The proposed framework systematizes and facilitates the complex calculations WSN designers require, taking into account the most relevant variables regarding power consumption and avoiding full, partial, or prototype implementations and measurements of the candidate computational load distributions for a specific WSN. PMID:29570645

  2. Comparison of sensitivity and resolution load sensor at various configuration polymer optical fiber

    NASA Astrophysics Data System (ADS)

    Arifin, A.; Yusran, Miftahuddin, Abdullah, Bualkar; Tahir, Dahlang

    2017-01-01

This study uses a load sensor based on a macro-bending polymer optical fiber loop placed between two plates with a buffer spring. The sensor works on the light intensity modulation principle: an infrared LED emits light through the polymer optical fiber, and the light is received by a phototransistor and amplifier. The output voltage from the amplifier is passed to an Arduino board and displayed on the computer. Increasing the load on the sensor increases the curvature of the polymer optical fiber, which in turn increases the power losses. Consequently, the light intensity received by the phototransistor decreases, and so does the output voltage read on the computer. The sensitivity and resolution of the load sensors were analyzed for configurations with various numbers of loops, imperfections on the jacket, and imperfections in the cladding and core of the polymer optical fiber. The results show that increasing the number of loops and introducing imperfections in the jacket and in the cladding and core of the polymer optical fiber improve the sensitivity and resolution of the load sensor. The best sensitivity and resolution, 0.037 V/N and 0.026 N, were obtained with 4 loops and 8 imperfections in the core and cladding of the polymer optical fiber. The advantages of the polymer-optical-fiber load sensor are that it is easy to make, low in cost, and simple to use.
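
    As a rough illustration of the readout described above, the applied load can be recovered from the drop in output voltage using the reported best-case sensitivity (0.037 V/N); the unloaded reference voltage used here is a hypothetical value, not one from the study.

```python
# Minimal sketch of the intensity-modulation readout: larger loads bend the
# fiber more, so the phototransistor voltage falls roughly linearly with
# load. Sensitivity and resolution are the best values reported; the
# reference voltage below is hypothetical.

SENSITIVITY_V_PER_N = 0.037   # best reported sensitivity
RESOLUTION_N = 0.026          # best reported resolution

def load_from_voltage(v_out, v_unloaded):
    """Estimate the applied load (N) from the measured voltage drop."""
    return (v_unloaded - v_out) / SENSITIVITY_V_PER_N

# A 0.37 V drop corresponds to a 10 N load at this sensitivity.
print(round(load_from_voltage(4.63, 5.00), 2))
```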

  3. Contributions of the stochastic shape wake model to predictions of aerodynamic loads and power under single wake conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doubrawa, P.; Barthelmie, R. J.; Wang, H.

The contribution of wake meandering and shape asymmetry to load and power estimates is quantified by comparing aeroelastic simulations initialized with different inflow conditions: an axisymmetric base wake, an unsteady stochastic shape wake, and a large-eddy simulation with a rotating actuator-line turbine representation. Time series of blade-root and tower-base bending moments are analyzed. We find that meandering contributes substantially to the fluctuation of the loads. Moreover, considering the wake edge intermittency via the stochastic shape model improves the simulation of load and power fluctuations and of the fatigue damage equivalent loads. Furthermore, these results indicate that the stochastic shape wake simulator is a valuable addition to simplified wake models when seeking higher-fidelity, computationally inexpensive predictions of loads and power.

  4. Contributions of the stochastic shape wake model to predictions of aerodynamic loads and power under single wake conditions

    DOE PAGES

    Doubrawa, P.; Barthelmie, R. J.; Wang, H.; ...

    2016-10-03

The contribution of wake meandering and shape asymmetry to load and power estimates is quantified by comparing aeroelastic simulations initialized with different inflow conditions: an axisymmetric base wake, an unsteady stochastic shape wake, and a large-eddy simulation with a rotating actuator-line turbine representation. Time series of blade-root and tower-base bending moments are analyzed. We find that meandering contributes substantially to the fluctuation of the loads. Moreover, considering the wake edge intermittency via the stochastic shape model improves the simulation of load and power fluctuations and of the fatigue damage equivalent loads. Furthermore, these results indicate that the stochastic shape wake simulator is a valuable addition to simplified wake models when seeking higher-fidelity, computationally inexpensive predictions of loads and power.

  5. Space shuttle solid rocket booster recovery system definition. Volume 3: SRB water impact loads computer program, user's manual

    NASA Technical Reports Server (NTRS)

    1973-01-01

    This user's manual describes the FORTRAN IV computer program developed to compute the total vertical load, normal concentrated pressure loads, and the center of pressure of typical SRB water impact slapdown pressure distributions specified in the baseline configuration. The program prepares the concentrated pressure load information in punched card format suitable for input to the STAGS computer program. In addition, the program prepares for STAGS input the inertia reacting loads to the slapdown pressure distributions.
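
    The core calculation the manual describes, reducing a sampled pressure distribution to a total vertical load and a center of pressure, might look like the following sketch; the trapezoidal discretization and sample values are illustrative, not the SRB baseline distribution.

```python
# Hedged sketch of integrating a sampled pressure distribution into a total
# load and center of pressure. The discretization and numbers are
# illustrative only, not the SRB slapdown pressure distribution.

def total_load_and_cop(x, p, width=1.0):
    """x: station coordinates; p: pressure at each station (pressure * width
    gives load per unit length). Returns (total load, center of pressure)."""
    total, moment = 0.0, 0.0
    for i in range(len(x) - 1):
        dx = x[i + 1] - x[i]
        seg = 0.5 * (p[i] + p[i + 1]) * width * dx   # trapezoid segment load
        xc = 0.5 * (x[i] + x[i + 1])                 # segment midpoint (approx.)
        total += seg
        moment += seg * xc
    return total, moment / total

F, x_cp = total_load_and_cop([0.0, 1.0, 2.0], [0.0, 10.0, 0.0])
print(F, x_cp)  # symmetric triangular distribution: load 10.0 centered at x = 1.0
```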

  6. Load balancing prediction method of cloud storage based on analytic hierarchy process and hybrid hierarchical genetic algorithm.

    PubMed

    Zhou, Xiuze; Lin, Fan; Yang, Lvqing; Nie, Jing; Tan, Qian; Zeng, Wenhua; Zhang, Nian

    2016-01-01

With the continuous expansion of cloud computing platforms and the rapid growth of users and applications, how to use system resources efficiently to improve the overall performance of cloud computing has become a crucial issue. To address this issue, this paper proposes a method that uses analytic hierarchy process group decision (AHPGD) making to evaluate the load state of server nodes. A hybrid hierarchical genetic algorithm (HHGA) was used to train and optimize a radial basis function neural network (RBFNN). The AHPGD produces an aggregate load indicator for the virtual machines in the cloud, which serves as the input to the predictive RBFNN. This paper also proposes a new dynamic load balancing scheduling algorithm combined with a weighted round-robin algorithm: the periodical load values of nodes, predicted by the HHGA-optimized RBFNN based on AHPGD, are used to calculate the corresponding node weights, which are updated continuously. The scheme thus keeps the advantages of the static weighted round-robin algorithm while avoiding its shortcomings.
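
    A minimal sketch of the scheduling idea, hypothetical and much simplified relative to the paper's AHPGD/RBFNN pipeline: predicted node loads are mapped to integer weights, and a weighted round-robin dispatcher sends more requests to the lightly loaded nodes.

```python
# Hypothetical sketch of dynamic weighted round-robin: weights are refreshed
# from periodically predicted node loads (here given directly, standing in
# for the RBFNN predictions), and lightly loaded nodes receive more requests.

import itertools

def weights_from_predicted_load(loads, scale=10):
    """Map predicted load fractions (0..1) to integer dispatch weights."""
    return {node: max(1, round(scale * (1.0 - load)))
            for node, load in loads.items()}

def round_robin(weights):
    """Yield nodes in proportion to their current weights."""
    schedule = [n for n, w in weights.items() for _ in range(w)]
    return itertools.cycle(schedule)

# Node B is predicted to be busiest, so it receives the fewest requests.
predicted = {"A": 0.2, "B": 0.9, "C": 0.5}
rr = round_robin(weights_from_predicted_load(predicted))
batch = [next(rr) for _ in range(14)]  # one full cycle: 8 x A, 1 x B, 5 x C
print(batch.count("A"), batch.count("B"), batch.count("C"))
```

    In the full scheme the weights would be recomputed each prediction period rather than fixed once, but the dispatch loop is the same.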

  7. Passively Targeted Curcumin-Loaded PEGylated PLGA Nanocapsules for Colon Cancer Therapy In Vivo

    PubMed Central

    Klippstein, Rebecca; Wang, Julie Tzu-Wen; El-Gogary, Riham I; Bai, Jie; Mustafa, Falisa; Rubio, Noelia; Bansal, Sukhvinder; Al-Jamal, Wafa T; Al-Jamal, Khuloud T

    2015-01-01

Clinical applications of curcumin for the treatment of cancer and other chronic diseases have been mainly hindered by its short biological half-life and poor water solubility. Nanotechnology-based drug delivery systems have the potential to enhance the efficacy of poorly soluble drugs for systemic delivery. This study proposes the use of poly(lactic-co-glycolic acid) (PLGA)-based polymeric oil-cored nanocapsules (NCs) for curcumin loading and delivery to colon cancer in mice after systemic injection. Formulations of different oil compositions are prepared and characterized for their curcumin loading, physico-chemical properties, and shelf-life stability. The results indicate that castor oil-cored PLGA-based NCs achieve high drug loading efficiency (≈18% w/w, drug/polymer) compared to previously reported NCs. Curcumin-loaded NCs internalize more efficiently in CT26 cells than the free drug, and exert therapeutic activity in vitro, leading to apoptosis and blocking the cell cycle. In addition, the formulated NC exhibits an extended blood circulation profile compared to the non-PEGylated NC, and accumulates in the subcutaneous CT26 tumors in mice after systemic administration. The results are confirmed by optical and single photon emission computed tomography/computed tomography (SPECT/CT) imaging. In vivo growth delay studies are performed, and significantly smaller tumor volumes are achieved compared to animals injected with empty NCs. This study shows the great potential of the formulated NC for treating colon cancer. PMID:26140363

  8. Reliability Constrained Priority Load Shedding for Aerospace Power System Automation

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Zhu, Jizhong; Kaddah, Sahar S.; Dolce, James L. (Technical Monitor)

    2000-01-01

The need for improving load shedding on board the space station is one of the goals of aerospace power system automation. To accelerate the optimum load-shedding functions, several constraints must be involved. These constraints include the congestion margin determined by weighted probability contingency, a component/system reliability index, and generation rescheduling. The impact of different faults and the indices for computing reliability were defined before optimization. The optimum load schedule is based on the priority, value, and location of loads. An optimization strategy capable of handling discrete decision making, such as Everett's optimization, is proposed. We extended Everett's method to handle expected congestion margin and reliability index as constraints. To make it effective for the real-time load dispatch process, a rule-based scheme is incorporated in the optimization method. The scheme assists in selecting which feeder load to shed, accounts for the location, value, and priority of the load, and includes a cost-benefit analysis of the load profile. The scheme is tested on a benchmark NASA system consisting of generators, loads, and a network.
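
    As a simplified stand-in for the Everett-style optimization described above, a greedy priority-ranked shedding pass illustrates the basic selection problem; the load names, powers, and priorities below are hypothetical.

```python
# Hypothetical sketch of priority-based load shedding (a greedy stand-in,
# not the paper's Everett optimization): shed the lowest-priority feeder
# loads until demand fits the available generation.

def shed_loads(loads, available_kw):
    """loads: list of (name, kw, priority); higher priority = keep longer.
    Returns (kept, shed) so that total kept power <= available_kw."""
    kept, shed = [], []
    total = 0.0
    # Consider the most important loads first.
    for name, kw, prio in sorted(loads, key=lambda l: -l[2]):
        if total + kw <= available_kw:
            kept.append(name)
            total += kw
        else:
            shed.append(name)
    return kept, shed

loads = [("life_support", 5.0, 10), ("experiment", 3.0, 4),
         ("lighting", 2.0, 6), ("spare_heater", 2.5, 1)]
kept, shed = shed_loads(loads, available_kw=8.0)
print(kept, shed)
```

    The actual scheme additionally weighs congestion margin, reliability indices, and cost-benefit terms rather than a single scalar priority.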

  9. Suspended-sediment and nutrient loads for Waiakea and Alenaio Streams, Hilo, Hawaii, 2003-2006

    USGS Publications Warehouse

    Presley, Todd K.; Jamison, Marcael T.J.; Nishimoto, Dale C.

    2008-01-01

Suspended sediment and nutrient samples were collected during wet-weather conditions at three sites on two ephemeral streams in the vicinity of Hilo, Hawaii during March 2004 to March 2006. Two sites were sampled on Waiakea Stream at 80- and 860-foot altitudes during March 2004 to August 2005. One site was sampled on Alenaio Stream at 10-foot altitude during November 2005 to March 2006. The sites were selected to represent different land uses and land covers in the area. Most of the drainage area above the upper Waiakea Stream site is conservation land. The drainage areas above the lower site on Waiakea Stream, and the site on Alenaio Stream, are a combination of conservation land, agriculture, rural, and urban land uses. In addition to the sampling, continuous-record streamflow sites were established at the three sampling sites, as well as an additional site on Alenaio Stream at an altitude of 75 feet and 0.47 miles upstream from the sampling site. Stage was measured continuously at 15-minute intervals at these sites. Discharge, for any particular instant or for selected periods of time, was computed based on a stage-discharge relation determined from individual discharge measurements. Continuous records of discharge were computed at the two sites on Waiakea Stream and the upper site on Alenaio Stream. Due to non-ideal hydraulic conditions within the channel of Alenaio Stream, a continuous record of discharge was not computed at the lower site on Alenaio Stream where samples were taken. Samples were analyzed for suspended sediment, and the nutrients total nitrogen, dissolved nitrite plus nitrate, and total phosphorus. Concentration data were converted to instantaneous load values: loads are the product of discharge and concentration, and are presented as tons per day for suspended sediment or pounds per day for nutrients. Daily-mean loads were computed by estimating concentrations relative to discharge using graphical constituent loading analysis techniques.
Daily-mean loads were computed at the two Waiakea Stream sampling sites for the analyzed constituents during the period October 1, 2003 to September 30, 2005. No record of daily-mean load was computed for the Alenaio Stream sampling site due to the problems with computing a discharge record. The maximum daily-mean suspended-sediment load for the upper site on Waiakea Stream was 79 tons per day, and the maximum daily-mean loads for total nitrogen, dissolved nitrite plus nitrate, and total phosphorus were 1,350, 13, and 300 pounds per day, respectively. The maximum daily-mean suspended-sediment load for the lower site on Waiakea Stream was 468 tons per day, and the maximum daily-mean loads for total nitrogen, nitrite plus nitrate, and total phosphorus were 913, 8.5, and 176 pounds per day, respectively. From the estimated continuous daily-mean load record, all of the maximum daily-mean loads occurred between October 2003 and September 2004, except for the suspended-sediment load at the lower site, which occurred on September 15, 2005. The maximum values were not all caused by a single storm event. Overall, the record of daily-mean loads showed lower loads during storm events for suspended sediments and nutrients at the downstream site of Waiakea Stream during 2004 than at the upstream site. During 2005, however, the suspended-sediment loads were higher at the downstream site than at the upstream site. Construction of a flood control channel between the two sites in 2005 may have contributed to the change in relative suspended-sediment loads.
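
    The instantaneous loads above are the product of discharge and concentration. A sketch using the commonly used USGS-style unit conversion (0.0027 converts ft³/s times mg/L to tons/day) follows; the sample numbers are illustrative, not measurements from these streams.

```python
# Instantaneous load = discharge x concentration, with a unit-conversion
# coefficient. The 0.0027 factor (ft^3/s x mg/L -> tons/day) is the
# conventional USGS conversion; the example values are illustrative only.

def instantaneous_load_tons_per_day(q_cfs, conc_mg_per_l):
    """Suspended-sediment load in tons/day from discharge (ft^3/s)
    and concentration (mg/L)."""
    return 0.0027 * q_cfs * conc_mg_per_l

def instantaneous_load_lbs_per_day(q_cfs, conc_mg_per_l):
    """Nutrient loads are reported in pounds/day (1 ton = 2000 lb)."""
    return instantaneous_load_tons_per_day(q_cfs, conc_mg_per_l) * 2000.0

print(instantaneous_load_tons_per_day(500.0, 120.0))  # 162 tons/day
```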

  10. A revised load estimation procedure for the Susquehanna, Potomac, Patuxent, and Choptank rivers

    USGS Publications Warehouse

    Yochum, Steven E.

    2000-01-01

The U.S. Geological Survey's Chesapeake Bay River Input Program has updated the nutrient and suspended-sediment load data base for the Susquehanna, Potomac, Patuxent, and Choptank Rivers using a multiple-window, center-estimate regression methodology. The revised method optimizes the seven-parameter regression approach that has been used historically by the program. The revised method estimates load using the fifth or center year of a sliding 9-year window. Each year a new model is run for each site and constituent, the most recent year is added, and the previous 4 years of estimates are updated. The fifth year in the 9-year window is considered the best estimate and is kept in the data base. The last year of estimation shows the most change from the previous year's estimate, and this change approaches a minimum at the fifth year. Differences between loads computed using this revised methodology and the loads populating the historical data base have been noted, but the load estimates do not typically change drastically. The data base resulting from the application of this revised methodology is populated by annual and monthly load estimates that are known with greater certainty than in the previous load data base.
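
    The multiple-window, center-estimate idea can be sketched schematically: a model is fit to each sliding 9-year window, and the estimate kept for a given year is the one from the window in which that year is the center (fifth) year. The toy `fit_and_estimate` below is a stand-in for the seven-parameter regression, which is not reproduced here.

```python
# Schematic of the sliding-window center-estimate bookkeeping. The model
# itself is a hypothetical stand-in (here: the mean of the window's years),
# not the program's seven-parameter regression.

def center_estimates(years, fit_and_estimate, window=9):
    """Return {year: estimate}, keeping the center-year estimate of each
    sliding window as the 'best' estimate for that year."""
    half = window // 2
    best = {}
    for start in range(len(years) - window + 1):
        win = years[start:start + window]
        center = win[half]
        best[center] = fit_and_estimate(win, center)
    return best

# Toy model: the "estimate" is just the mean of the window's years.
years = list(range(1985, 2000))
est = center_estimates(years, lambda win, yr: sum(win) / len(win))
print(sorted(est))  # only years with a full 9-year window around them
```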

  11. Computer program simplifies selection of structural steel columns

    NASA Technical Reports Server (NTRS)

    Vissing, G. S.

    1966-01-01

    Computer program rapidly selects appropriate size steel columns and base plates for construction of multistory structures. The program produces a printed record containing the size of a section required at a particular elevation, the stress produced by the loads, and the allowable stresses for that section.

  12. [The Key Technology Study on Cloud Computing Platform for ECG Monitoring Based on Regional Internet of Things].

    PubMed

    Yang, Shu; Qiu, Yuyan; Shi, Bo

    2016-09-01

This paper explores methods of building a regional internet of things for ECG monitoring, focusing on the implementation of an ECG monitoring center based on a cloud computing platform. It analyzes the implementation principles of automatic identification of arrhythmia types. It also studies the system architecture and key techniques of the cloud computing platform, including server load balancing, reliable storage of massive small files, and the implementation of a quick search function.

  13. Fatigue Behavior of Computer-Aided Design/Computer-Assisted Manufacture Ceramic Abutments as a Function of Design and Ceramics Processing.

    PubMed

    Kelly, J Robert; Rungruanganunt, Patchnee

    2016-01-01

Zirconia is being widely used, at times apparently by simply copying a metal design into ceramic. Structurally, ceramics are sensitive to both design and processing (fabrication) details. The aim of this work was to examine four computer-aided design/computer-assisted manufacture (CAD/CAM) abutments using a modified International Standards Organization (ISO) implant fatigue protocol to determine performance as a function of design and processing. Two full zirconia and two hybrid (Ti-based) abutments (n = 12 each) were tested wet at 15 Hz at a variety of loads to failure. Failure probability distributions were examined at each load, and when found to be the same, data from all loads were combined for lifetime analysis from accelerated to clinical conditions. Two distinctly different failure modes were found for both full zirconia and Ti-based abutments. One of these for zirconia has been reported clinically in the literature, and one for the Ti-based abutments has been reported anecdotally. The ISO protocol modification in this study forced failures in the abutments; no implant bodies failed. Extrapolated cycles for 10% failure at 70 N were: full zirconia, Atlantis 2 × 10^7 and Straumann 3 × 10^7; and Ti-based, Glidewell 1 × 10^6 and Nobel 1 × 10^21. Under accelerated conditions (200 N), performance differed significantly: Straumann clearly outperformed Astra (t test, P = .013), and the Glidewell Ti-base abutment also outperformed Atlantis zirconia at 200 N (Nobel ran-out; t test, P = .035). The modified ISO protocol in this study produced failures that were seen clinically. The manufacture matters; differences in design and fabrication that influence performance cannot be discerned clinically.

  14. A parallel implementation of an off-lattice individual-based model of multicellular populations

    NASA Astrophysics Data System (ADS)

    Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe

    2015-07-01

    As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
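
    The strip-wise domain decomposition with halo exchange that such a parallel algorithm typically relies on can be sketched as follows; this is a generic illustration, not the authors' implementation.

```python
# Hypothetical sketch of spatial-domain decomposition for an off-lattice
# cell model: each process owns a horizontal strip of the domain, and cells
# near a strip boundary are also sent to the neighbouring process as "halo"
# cells so that cross-boundary interactions are computed correctly.

def partition(cells, n_procs, y_max, interaction_radius):
    """cells: list of (x, y) centres. Returns per-process (owned, halo)."""
    strip = y_max / n_procs
    owned = [[] for _ in range(n_procs)]
    halo = [[] for _ in range(n_procs)]
    for c in cells:
        rank = min(int(c[1] / strip), n_procs - 1)
        owned[rank].append(c)
        # A cell within interaction_radius of a strip edge is also needed
        # by the neighbouring process.
        if rank > 0 and c[1] - rank * strip < interaction_radius:
            halo[rank - 1].append(c)
        if rank < n_procs - 1 and (rank + 1) * strip - c[1] < interaction_radius:
            halo[rank + 1].append(c)
    return owned, halo

cells = [(0.5, y / 10.0) for y in range(10)]          # 10 cells along y
owned, halo = partition(cells, n_procs=2, y_max=1.0, interaction_radius=0.15)
print(len(owned[0]), len(owned[1]), len(halo[0]), len(halo[1]))
```

    Dynamic load balancing would then move the strip boundaries as the population grows, so each process keeps roughly the same number of owned cells.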

  15. Analyzing the uncertainty of suspended sediment load prediction using sequential data assimilation

    NASA Astrophysics Data System (ADS)

    Leisenring, Marc; Moradkhani, Hamid

    2012-10-01

A first step in understanding the impacts of sediment and controlling the sources of sediment is to quantify the mass loading. Since mass loading is the product of flow and concentration, the quantification of loads first requires the quantification of runoff volume. Using the National Weather Service's SNOW-17 and the Sacramento Soil Moisture Accounting (SAC-SMA) models, this study employed particle filter based Bayesian data assimilation methods to predict seasonal snow water equivalent (SWE) and runoff within a small watershed in the Lake Tahoe Basin located in California, USA. A procedure was developed to scale the variance multipliers (a.k.a. hyperparameters) for model parameters and predictions based on the accuracy of the mean predictions relative to the ensemble spread. In addition, an online bias correction algorithm based on the lagged average bias was implemented to detect and correct for systematic bias in model forecasts prior to updating with the particle filter. Both of these methods significantly improved the performance of the particle filter without requiring excessively wide prediction bounds. The flow ensemble was linked to a non-linear regression model that was used to predict suspended sediment concentrations (SSCs) based on runoff rate and time of year. Runoff volumes and SSC were then combined to produce an ensemble of suspended sediment load estimates. Annual suspended sediment loads for the 5 years of simulation were finally computed along with 95% prediction intervals that account for uncertainty in both the SSC regression model and flow rate estimates. Understanding the uncertainty associated with annual suspended sediment load predictions is critical for making sound watershed management decisions aimed at maintaining the exceptional clarity of Lake Tahoe. 
The computational methods developed and applied in this research could assist with similar studies where it is important to quantify the predictive uncertainty of pollutant load estimates.
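
    The final step, propagating a flow ensemble through a concentration-discharge relation to obtain a load prediction interval, can be sketched generically; the power-law rating and its coefficients below are hypothetical, not the study's regression model.

```python
# Illustrative sketch (not the study's SAC-SMA / particle-filter code):
# push a flow ensemble through a hypothetical power-law SSC rating with
# lognormal regression error, then read a 95% prediction interval for the
# load off the ensemble percentiles.

import math
import random

def load_ensemble(flows, a=2.0, b=1.3, noise_sd=0.1, seed=0):
    """SSC ~ a * Q**b with multiplicative lognormal error; load = Q * SSC."""
    rng = random.Random(seed)
    return [q * a * q ** b * math.exp(rng.gauss(0.0, noise_sd)) for q in flows]

def percentile(xs, p):
    xs = sorted(xs)
    return xs[min(len(xs) - 1, int(p / 100.0 * len(xs)))]

rng = random.Random(1)
flows = [rng.uniform(5.0, 15.0) for _ in range(1000)]   # toy flow ensemble
loads = load_ensemble(flows)
lo, med, hi = (percentile(loads, p) for p in (2.5, 50.0, 97.5))
print(lo < med < hi)  # the 95% band brackets the median estimate
```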

  16. Twisting short dsDNA with applied tension

    NASA Astrophysics Data System (ADS)

    Zoli, Marco

    2018-02-01

    The twisting deformation of mechanically stretched DNA molecules is studied by a coarse grained Hamiltonian model incorporating the fundamental interactions that stabilize the double helix and accounting for the radial and angular base pair fluctuations. The latter are all the more important at short length scales in which DNA fragments maintain an intrinsic flexibility. The presented computational method simulates a broad ensemble of possible molecule conformations characterized by a specific average twist and determines the energetically most convenient helical twist by free energy minimization. As this is done for any external load, the method yields the characteristic twist-stretch profile of the molecule and also computes the changes in the macroscopic helix parameters i.e. average diameter and rise distance. It is predicted that short molecules under stretching should first over-twist and then untwist by increasing the external load. Moreover, applying a constant load and simulating a torsional strain which over-twists the helix, it is found that the average helix diameter shrinks while the molecule elongates, in agreement with the experimental trend observed in kilo-base long sequences. The quantitative relation between percent relative elongation and superhelical density at fixed load is derived. The proposed theoretical model and computational method offer a general approach to characterize specific DNA fragments and predict their macroscopic elastic response as a function of the effective potential parameters of the mesoscopic Hamiltonian.

  17. Progressive Damage Analysis of Laminated Composite (PDALC) (A Computational Model Implemented in the NASA COMET Finite Element Code). 2.0

    NASA Technical Reports Server (NTRS)

    Coats, Timothy W.; Harris, Charles E.; Lo, David C.; Allen, David H.

    1998-01-01

A method for analysis of progressive failure in the Computational Structural Mechanics Testbed is presented in this report. The relationship employed in this analysis describes the matrix crack damage and fiber fracture via kinematics-based volume-averaged damage variables. Damage accumulation during monotonic and cyclic loads is predicted by damage evolution laws for tensile load conditions. The implementation of this damage model required the development of two testbed processors. While this report concentrates on the theory and usage of these processors, a complete listing of all testbed processors and inputs that are required for this analysis is included. Sample calculations for laminates subjected to monotonic and cyclic loads were performed to illustrate the damage accumulation, stress redistribution, and changes to the global response that occur during the loading history. Residual strength predictions made with this information compared favorably with experimental measurements.

  18. Progressive Damage Analysis of Laminated Composite (PDALC)-A Computational Model Implemented in the NASA COMET Finite Element Code

    NASA Technical Reports Server (NTRS)

    Lo, David C.; Coats, Timothy W.; Harris, Charles E.; Allen, David H.

    1996-01-01

A method for analysis of progressive failure in the Computational Structural Mechanics Testbed is presented in this report. The relationship employed in this analysis describes the matrix crack damage and fiber fracture via kinematics-based volume-averaged variables. Damage accumulation during monotonic and cyclic loads is predicted by damage evolution laws for tensile load conditions. The implementation of this damage model required the development of two testbed processors. While this report concentrates on the theory and usage of these processors, a complete list of all testbed processors and inputs that are required for this analysis is included. Sample calculations for laminates subjected to monotonic and cyclic loads were performed to illustrate the damage accumulation, stress redistribution, and changes to the global response that occur during the load history. Residual strength predictions made with this information compared favorably with experimental measurements.

  19. Determination of stress intensity factors for interface cracks under mixed-mode loading

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.; Crews, John H., Jr.

    1992-01-01

A simple technique was developed using conventional finite element analysis to determine the stress intensity factors, K1 and K2, for interface cracks under mixed-mode loading. The technique involves calculating crack tip stresses using non-singular finite elements. These stresses are then combined and used in a linear regression procedure to calculate K1 and K2. The technique was demonstrated by calculating K1 and K2 for three different bimaterial combinations. For the normal loading case, the K's were within 2.6 percent of an exact solution. The normalized K's under shear loading were shown to be related to the normalized K's under normal loading. Based on these relations, a simple equation was derived for calculating K1 and K2 under mixed-mode loading from knowledge of the K's under normal loading. The equation was verified by computing the K's for a mixed-mode case with equal normal and shear loading; the correlation between the exact and finite element solutions is within 3.7 percent. This study provides a simple procedure to compute the K2/K1 ratio, which has been used to characterize the stress state at the crack tip for various combinations of materials and loadings. Tests conducted over a range of K2/K1 ratios could be used to fully characterize interface fracture toughness.
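
    The regression step can be illustrated for the simpler homogeneous-crack field (the paper treats the oscillatory bimaterial field): stresses sampled near the tip give sigma*sqrt(2*pi*r), which is fit linearly in r and extrapolated to r = 0 to recover K. The data below are synthetic.

```python
# Schematic of extracting a stress intensity factor by linear regression on
# near-tip stresses. Uses the homogeneous-crack form sigma ~ K/sqrt(2*pi*r)
# for simplicity; all data are synthetic.

import math

def k_from_stresses(r, sigma):
    """Least-squares line through y = sigma*sqrt(2*pi*r) vs r; the
    intercept at r = 0 approximates K."""
    y = [s * math.sqrt(2.0 * math.pi * ri) for ri, s in zip(r, sigma)]
    n = len(r)
    rbar, ybar = sum(r) / n, sum(y) / n
    sxy = sum((ri - rbar) * (yi - ybar) for ri, yi in zip(r, y))
    sxx = sum((ri - rbar) ** 2 for ri in r)
    slope = sxy / sxx
    return ybar - slope * rbar   # intercept at r = 0

# Synthetic stresses from K = 10 with a small linear far-field correction.
K_true = 10.0
r = [0.01 * i for i in range(1, 6)]
sigma = [(K_true + 50.0 * ri) / math.sqrt(2.0 * math.pi * ri) for ri in r]
print(round(k_from_stresses(r, sigma), 6))  # recovers K = 10
```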

  20. Microcomputer software development facilities

    NASA Technical Reports Server (NTRS)

    Gorman, J. S.; Mathiasen, C.

    1980-01-01

A more efficient and cost-effective method for developing microcomputer software is to utilize a host computer with high-speed peripheral support. Application programs such as cross assemblers, loaders, and simulators are implemented in the host computer for each of the microcomputers for which software development is a requirement. The host computer is configured to operate in a time-share mode for multiple users. The remote terminals, printers, and downloading capabilities provided are based on user requirements. With this configuration a user, either local or remote, can use the host computer for microcomputer software development. Once the software is developed (through the code and modular debug stage), it can be downloaded to the development system or emulator in a test area where hardware/software integration functions can proceed. The microcomputer software program sources reside in the host computer and can be edited, assembled, loaded, and then downloaded as required until the software development project has been completed.

  1. Predicted effect of dynamic load on pitting fatigue life for low-contact-ratio spur gears

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.

    1986-01-01

    How dynamic load affects the surface pitting fatigue life of external spur gears was predicted by using the NASA computer program TELSGE. Parametric studies were performed over a range of various gear parameters modeling low-contact-ratio involute spur gears. In general, gear life predictions based on dynamic loads differed significantly from those based on static loads, with the predictions being strongly influenced by the maximum dynamic load during contact. Gear mesh operating speed strongly affected predicted dynamic load and life. Meshes operating at a resonant speed or one-half the resonant speed had significantly shorter lives. Dynamic life factors for gear surface pitting fatigue were developed on the basis of the parametric studies. In general, meshes with higher contact ratios had higher dynamic life factors than meshes with lower contact ratios. A design chart was developed for hand calculations of dynamic life factors.

  2. Effect of Cyclic Thermo-Mechanical Loads on Fatigue Reliability in Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Murthy, P. L. N.; Chamis, C. C.

    1996-01-01

A methodology to compute the probabilistic fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress dependent multi-factor interaction relationship developed at NASA Lewis Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability integration method is used to compute the probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/±45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical cyclic loads and low thermal cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness; whereas at high mechanical cyclic loads and high thermal cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of the matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.

  3. An Inverse Interpolation Method Utilizing In-Flight Strain Measurements for Determining Loads and Structural Response of Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Shkarayev, S.; Krashantisa, R.; Tessler, A.

    2004-01-01

    An important and challenging technology aimed at the next generation of aerospace vehicles is that of structural health monitoring. The key problem is to determine accurately, reliably, and in real time the applied loads, stresses, and displacements experienced in flight, with such data establishing an information database for structural health monitoring. The present effort is aimed at developing a finite element-based methodology involving an inverse formulation that employs measured surface strains to recover the applied loads, stresses, and displacements in an aerospace vehicle in real time. The computational procedure uses a standard finite element model (i.e., "direct analysis") of a given airframe, with the subsequent application of the inverse interpolation approach. The inverse interpolation formulation is based on a parametric approximation of the loading and is further constructed through a least-squares minimization of calculated and measured strains. This procedure results in the governing system of linear algebraic equations, providing the unknown coefficients that accurately define the load approximation. Numerical simulations are carried out for problems involving various levels of structural approximation. These include plate-loading examples and an aircraft wing box. Accuracy and computational efficiency of the proposed method are discussed in detail. The experimental validation of the methodology by way of structural testing of an aircraft wing is also discussed.
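
    The least-squares core of the inverse formulation can be sketched with a toy two-coefficient load parameterization; the strain sensitivities below are synthetic stand-ins for the finite element direct analysis.

```python
# Hedged sketch of the inverse step: the load is parameterized by two
# coefficients c = (c0, c1), strains depend linearly on them, and c is
# recovered from measured surface strains by solving the 2x2 normal
# equations. The strain sensitivities are synthetic, not FE results.

def solve_load_coeffs(S, eps):
    """Least-squares c minimizing ||S c - eps|| for a two-column S."""
    a11 = sum(r[0] * r[0] for r in S)
    a12 = sum(r[0] * r[1] for r in S)
    a22 = sum(r[1] * r[1] for r in S)
    b1 = sum(r[0] * e for r, e in zip(S, eps))
    b2 = sum(r[1] * e for r, e in zip(S, eps))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Strain at 4 gauges per unit of each load coefficient (synthetic).
S = [(1.0, 0.2), (0.5, 1.0), (0.8, 0.4), (0.1, 0.9)]
c_true = (2.0, -1.0)
measured = [r[0] * c_true[0] + r[1] * c_true[1] for r in S]  # noise-free

print(solve_load_coeffs(S, measured))  # recovers c_true up to rounding
```

    In the real methodology the load approximation has more terms and the strains come from gauges on the airframe, but the governing linear system has this same least-squares structure.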

  4. Distriblets: Java-Based Distributed Computing on the Web.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Brennan, Brian; Brennan, Chris

    1999-01-01

Describes a system, written in the Java programming language, for using the World Wide Web to distribute computational tasks to multiple hosts on the Web. Describes the programs written to carry out the load distribution, the structure of a "distriblet" class, and experiences in using this system. (Author/LRW)

  5. Problem-Solving in the Pre-Clinical Curriculum: The Uses of Computer Simulations.

    ERIC Educational Resources Information Center

    Michael, Joel A.; Rovick, Allen A.

    1986-01-01

    Promotes the use of computer-based simulations in the pre-clinical medical curriculum as a means of providing students with opportunities for problem solving. Describes simple simulations of skeletal muscle loads, complex simulations of major organ systems and comprehensive simulation models of the entire human body. (TW)

  6. Predicting traffic load impact of alternative recreation developments

    Treesearch

    Gary H. Elsner; Ronald A. Oliveira

    1973-01-01

Traffic load changes resulting from the expansion of recreation facilities may be predicted through computations based on estimates of (a) the drawing power of the recreation attractions, overnight accommodations, and in- or out-terminals; (b) probable types of travel; (c) probable routes of travel; and (d) the total number of cars in the recreation system. Once the basic model...

  7. Measuring Cognitive Load in Test Items: Static Graphics versus Animated Graphics

    ERIC Educational Resources Information Center

    Dindar, M.; Kabakçi Yurdakul, I.; Inan Dönmez, F.

    2015-01-01

    The majority of multimedia learning studies focus on the use of graphics in learning process but very few of them examine the role of graphics in testing students' knowledge. This study investigates the use of static graphics versus animated graphics in a computer-based English achievement test from a cognitive load theory perspective. Three…

  8. Micromechanics based simulation of ductile fracture in structural steels

    NASA Astrophysics Data System (ADS)

    Yellavajjala, Ravi Kiran

The broader aim of this research is to develop a fundamental understanding of the ductile fracture process in structural steels, propose robust computational models to quantify the associated damage, and provide numerical tools to simplify the implementation of these models in a general finite element framework. Mechanical testing on different geometries of test specimens made of ASTM A992 steel is conducted to experimentally characterize ductile fracture at different stress states under monotonic and ultra-low cycle fatigue (ULCF) loading. Scanning electron microscopy studies of the fractured surfaces are conducted to decipher the underlying microscopic damage mechanisms that cause fracture in ASTM A992 steels. Detailed micromechanical analyses for monotonic and cyclic loading are conducted to understand the influence of stress triaxiality and Lode parameter on the void growth phase of ductile fracture. Based on the monotonic analyses, an uncoupled micromechanical void growth model is proposed to predict ductile fracture. This model is then incorporated into a finite element program as a weakly coupled model to simulate the loss of load-carrying capacity in the post-microvoid-coalescence regime at high triaxialities. Based on the cyclic analyses, an uncoupled micromechanics-based cyclic void growth model is developed to predict the ULCF life of ASTM A992 steels subjected to high stress triaxialities. Furthermore, a computational fracture locus for ASTM A992 steels is developed and incorporated into the finite element program as an uncoupled ductile fracture model; it can be used to predict ductile fracture initiation under monotonic loading over a wide range of triaxialities and Lode parameters. A coupled microvoid elongation and dilation based continuum damage model is also proposed, implemented, calibrated, and validated. This model is capable of simulating the local softening caused by the various phases of the ductile fracture process under monotonic loading for a wide range of stress states. Novel differentiation procedures based on complex analysis, along with existing finite difference methods and automatic differentiation, are extended using perturbation techniques to evaluate tensor derivatives. These tensor differentiation techniques are then used to automate nonlinear constitutive models in an implicit finite element framework. Finally, the efficiency of these automation procedures is demonstrated using benchmark problems.

  9. Fatigue assessment of vibrating rail vehicle bogie components under non-Gaussian random excitations using power spectral densities

    NASA Astrophysics Data System (ADS)

    Wolfsteiner, Peter; Breuer, Werner

    2013-10-01

The assessment of fatigue load under random vibrations is usually based on load spectra. Typically these are computed with counting methods (e.g. rainflow) applied to a time-domain signal. Alternatively, methods are available (e.g. Dirlik) that estimate load spectra directly from the power spectral densities (PSDs) of the corresponding time signals; knowledge of the time signal is then not necessary. These PSD-based methods have the enormous advantage that if, for example, the signal to be assessed results from a finite-element-based vibration analysis, the computation of PSDs in the frequency domain outmatches by far the simulation of time signals in the time domain. This is especially true for random vibrations with very long time-domain signals. The disadvantage of PSD-based simulation of vibrations, and also of PSD-based load spectra estimation, is their limitation to Gaussian-distributed time signals. Deviations from the Gaussian distribution cause relevant deviations in the estimated load spectra; in these cases usually only computation-intensive time-domain calculations produce accurate results. This paper presents a method for dealing with non-Gaussian signals with real statistical properties that is still able to use the efficient PSD approach with its computation-time advantages. Essentially, it is based on a decomposition of the non-Gaussian signal into Gaussian-distributed parts. The PSDs of these rearranged signals are then used to perform the usual PSD analyses. In particular, detailed methods are described for the decomposition of time signals and the derivation of PSDs and cross power spectral densities (CPSDs) from multiple real measurements without using inaccurate standard procedures.
Furthermore, the basic intention is to design a general and integrated method that is not just able to analyse a single load case for a small time interval, but to generate representative PSD and CPSD spectra that replace extensive measured time-domain loads without losing the accuracy necessary for the fatigue load results. These long measurements may even represent the whole application range of the railway vehicle. The presented work demonstrates the application of this method to railway vehicle components subjected to random vibrations caused by the wheel-rail contact. Extensive measurements of axle box accelerations have been used to verify the proposed procedure for this class of railway vehicle applications. The assumption of linearity is not a real limitation, because the structural vibrations caused by the random excitations are usually small for rail vehicle applications; the impact of nonlinearities is usually covered by separate nonlinear models and is only needed for the deterministic part of the loads. Linear vibration systems subjected to Gaussian excitations respond with vibrations that also have a Gaussian distribution. A non-Gaussian excitation signal produces a non-Gaussian response with statistical properties different from those of the excitation. A drawback is that there is no simple mathematical relation between excitation and response concerning these deviations from the Gaussian distribution (see e.g. the Ito calculus [6], which is usually not part of commercial codes). There are a number of well-established procedures for the prediction of fatigue load spectra from PSDs designed for Gaussian loads (see [4]); the question of the impact of non-Gaussian distributions on the fatigue load prediction has been studied for decades (see e.g. [3,4,11-13]) and is still the subject of ongoing research; e.g. [13] proposed a procedure capable of considering non-Gaussian broad-banded loads. It is based on knowledge of the response PSD and some statistical data defining the non-Gaussian character of the underlying time signal. As described above, these statistical data are usually not available for a PSD vibration response that has been calculated in the frequency domain. Summarizing the above, and considering the highly non-Gaussian excitations on railway vehicles caused by the wheel-rail contact, the fast PSD analysis in the frequency domain could not previously be combined with load spectra prediction methods for PSDs.
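    For the Gaussian baseline, the route from a time signal to spectral moments and a narrow-band load spectrum can be sketched as follows (an illustrative simplification, not the decomposition method of the paper; Dirlik-type estimators start from the same spectral moments, and the signal parameters here are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 1024.0                       # sampling rate, Hz
    n = 1 << 16
    x = rng.standard_normal(n)        # white Gaussian "load" signal

    # One-sided periodogram Gxx(f), scaled so that integral Gxx df equals
    # the mean square of x (Parseval's theorem).
    X = np.fft.rfft(x)
    Gxx = (np.abs(X) ** 2) / (fs * n)
    Gxx[1:-1] *= 2.0                  # double interior bins for one-sided form
    f = np.fft.rfftfreq(n, 1.0 / fs)
    df = f[1] - f[0]

    # Spectral moments m_k = integral f**k * Gxx(f) df.
    m0 = np.sum(Gxx) * df
    m2 = np.sum(f ** 2 * Gxx) * df
    m4 = np.sum(f ** 4 * Gxx) * df

    # Zero up-crossing rate, peak rate, and irregularity factor.
    nu0 = np.sqrt(m2 / m0)
    nu_p = np.sqrt(m4 / m2)
    alpha2 = nu0 / nu_p

    # Narrow-band (Rayleigh) load spectrum estimate: cycles per second whose
    # amplitude exceeds level s is nu0 * exp(-s**2 / (2 * m0)).
    s = 2.0 * np.sqrt(m0)
    exceed_rate = nu0 * np.exp(-s ** 2 / (2.0 * m0))
    ```

    The non-Gaussian correction the paper develops sits on top of exactly such moment-based estimates: the signal is first decomposed into Gaussian parts, and each part is treated as above.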

  10. STRUTEX: A prototype knowledge-based system for initially configuring a structure to support point loads in two dimensions

    NASA Technical Reports Server (NTRS)

    Robers, James L.; Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

Only recently have engineers begun making use of Artificial Intelligence (AI) tools in the area of conceptual design. To continue filling this void in the design process, a prototype knowledge-based system, called STRUTEX, has been developed to initially configure a structure to support point loads in two dimensions. The prototype was developed for testing the application of AI tools to conceptual design, as opposed to being a testbed for new methods of improving structural analysis and optimization. The system combines numerical and symbolic processing by the computer with interactive problem solving aided by the vision of the user. How the system is constructed to interact with the user is described. Of special interest is the information flow between the knowledge base and the data base under control of the algorithmic main program. Examples of computed and refined structures are presented during the explanation of the system.

  11. The modelling of the flow-induced vibrations of periodic flat and axial-symmetric structures with a wave-based method

    NASA Astrophysics Data System (ADS)

    Errico, F.; Ichchou, M.; De Rosa, S.; Bareille, O.; Franco, F.

    2018-06-01

The stochastic response of periodic flat and axial-symmetric structures subjected to random, spatially correlated loads is analysed here through an approach based on the combination of a wave finite element method and a transfer matrix method. Although it has a lower computational cost, the present approach keeps the same accuracy as classic finite element methods. When dealing with homogeneous structures, the accuracy also extends to higher frequencies without increasing the time of calculation. Depending on the complexity of the structure and the frequency range, the computational cost can be reduced by more than two orders of magnitude. The presented methodology is validated for both simple and complex structural shapes, under deterministic and random loads.

  12. Helicopter noise prediction - The current status and future direction

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Farassat, F.

    1992-01-01

The paper takes stock of the progress, assesses current prediction capabilities, and forecasts the direction of future helicopter noise prediction research. The acoustic analogy approach, specifically theories based on the Ffowcs Williams-Hawkings equation, is the most widely used for deterministic noise sources. Thickness and loading noise can be routinely predicted given good blade motion and blade loading inputs. Blade-vortex interaction (BVI) noise can also be predicted well with measured input data, but prediction of airloads with the high spatial and temporal resolution required for BVI is still difficult. Current semiempirical broadband noise predictions are useful and reasonably accurate. New prediction methods based on a Kirchhoff formula and on direct computation appear very promising, but are currently very demanding computationally.

  13. Performance of distributed multiscale simulations

    PubMed Central

    Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.

    2014-01-01

    Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258

  14. Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh

    NASA Astrophysics Data System (ADS)

    Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru

    2017-11-01

We developed a framework for distributed-memory parallel computers that enables dynamic data management for adaptive mesh refinement and load balancing. We employed the simple data structure of the building cube method (BCM), in which a computational domain is divided into multi-level cubic domains and each cube contains the same number of grid points, realising a multi-level block-structured Cartesian mesh. Solution-adaptive mesh refinement, which works efficiently with the help of dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing, and parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement, where the non-dimensional wall distance y+ near the sphere is used as the criterion for mesh refinement. The results showed that the load imbalance due to y+-adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested cube-wise algorithm switching, where explicit and implicit time integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.
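    A load-balancing step of this kind, assigning refined cubes to processors, can be illustrated with a simple greedy heuristic (a sketch only; the cube weights below are hypothetical, and the BCM framework's actual redistribution is more elaborate):

    ```python
    import heapq

    def balance(weights, n_procs):
        """Greedy LPT assignment: hand each cube (weight = its grid work,
        largest first) to the currently least-loaded processor."""
        heap = [(0.0, p) for p in range(n_procs)]   # (load, processor id)
        heapq.heapify(heap)
        assignment = {}
        for cube, w in sorted(weights.items(), key=lambda kv: -kv[1]):
            load, p = heapq.heappop(heap)
            assignment[cube] = p
            heapq.heappush(heap, (load + w, p))
        return assignment

    # Cubes refined near the sphere carry more grid work (hypothetical weights).
    weights = {0: 8.0, 1: 8.0, 2: 1.0, 3: 1.0, 4: 1.0, 5: 1.0, 6: 1.0, 7: 1.0}
    assignment = balance(weights, 2)
    ```

    On this toy input the two heavy cubes go to different ranks and the light cubes fill in around them, so both processors end up with equal load.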

  15. Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.

  16. Dynamic Load Predictions for Launchers Using Extra-Large Eddy Simulations X-Les

    NASA Astrophysics Data System (ADS)

    Maseland, J. E. J.; Soemarwoto, B. I.; Kok, J. C.

    2005-02-01

Flow-induced unsteady loads can have a strong impact on the performance and flight characteristics of aerospace vehicles and therefore play a crucial role in their design and operation. Complementary to costly flight tests and delicate wind-tunnel experiments, unsteady loads can be calculated using time-accurate Computational Fluid Dynamics. A capability to accurately predict the dynamic loads on aerospace structures at flight Reynolds numbers can be of great value for the design and analysis of aerospace vehicles. Advanced space launchers are subject to dynamic loads in the base region during the ascent to space. In particular, the engine and nozzle experience aerodynamic pressure fluctuations resulting from massive flow separations. Understanding these phenomena is essential for performance enhancements of future launchers, which will operate with larger nozzles. A new hybrid RANS-LES turbulence modelling approach termed eXtra-Large Eddy Simulation (X-LES) holds the promise of capturing the flow structures associated with massive separations and enables the prediction of the broad-band spectrum of dynamic loads. This type of method, which reduces the cost of full LES, has become a focal point, driven by the demand for applicability in an industrial environment. The industrial feasibility of X-LES is demonstrated by computing the unsteady aerodynamic loads on the main-engine nozzle of a generic space launcher configuration. The potential to calculate the dynamic loads is qualitatively assessed for transonic flow conditions in a comparison with wind-tunnel experiments. In terms of turnaround times, X-LES computations are already feasible within the time frames of the development process to support structural design. Key words: massive separated flows; buffet loads; nozzle vibrations; space launchers; time-accurate CFD; composite RANS-LES formulation.

  17. A NASTRAN-based computer program for structural dynamic analysis of Horizontal Axis Wind Turbines

    NASA Technical Reports Server (NTRS)

    Lobitz, Don W.

    1995-01-01

This paper describes a computer program developed for structural dynamic analysis of horizontal axis wind turbines (HAWTs). It is based on the finite element method through its reliance on NASTRAN for the development of mass, stiffness, and damping matrices of the tower and rotor, which are treated in NASTRAN as separate structures. The tower is modeled in a stationary frame and the rotor in one rotating at a constant angular velocity. The two structures are subsequently joined together (external to NASTRAN) using a time-dependent transformation consistent with the hub configuration. Aerodynamic loads are computed with an established flow model based on strip theory. Aeroelastic effects are included by incorporating the local velocity and twisting deformation of the blade in the load computation. The turbulent nature of the wind, both in space and time, is modeled by adding in stochastic wind increments. The resulting equations of motion are solved in the time domain using the implicit Newmark-Beta integrator. Preliminary comparisons with data from the Boeing/NASA MOD2 HAWT indicate that the code is capable of accurately and efficiently predicting the response of HAWTs driven by turbulent winds.

  18. Innovative telecommunications for law enforcement

    NASA Technical Reports Server (NTRS)

    Sohn, R. L.

    1976-01-01

    The operation of computer-aided dispatch, mobile digital communications, and automatic vehicle location systems used in law enforcement is discussed, and characteristics of systems used by different agencies are compared. With reference to computer-aided dispatch systems, the data base components, dispatcher work load, extent of usage, and design trends are surveyed. The capabilities, levels of communication, and traffic load of mobile digital communications systems are examined. Different automatic vehicle location systems are distinguished, and two systems are evaluated. Other aspects of the application of innovative technology to operational command, control, and communications systems for law enforcement agencies are described.

  19. Unstructured P2P Network Load Balance Strategy Based on Multilevel Partitioning of Hypergraph

    NASA Astrophysics Data System (ADS)

    Feng, Lv; Chunlin, Gao; Kaiyang, Ma

    2017-05-01

With the rapid development of computer performance and distributed technology, the P2P-based resource-sharing mode plays an important role in the Internet. The number of P2P network users has continued to increase, and the highly dynamic characteristics of the system make it difficult for a node to obtain the load of other nodes. Therefore, a dynamic load balance strategy based on hypergraphs is proposed in this article. The scheme develops from the idea of multilevel partitioning in hypergraph theory. It adopts optimized multilevel partitioning algorithms to partition the P2P network into several small areas, and assigns each area a supernode for the management and load transferring of the nodes in that area. Where global scheduling is difficult to achieve, load balancing within a number of small ranges can be ensured first; through node load balance in each small area, the whole network can achieve relative load balance. The experiments indicate that the load distribution of network nodes in our scheme is noticeably more compact. It effectively solves the imbalance problems in P2P networks and also improves the scalability and bandwidth utilization of the system.

  20. Computer-assisted surgery in the lower jaw: double surgical guide for immediately loaded implants in postextractive sites-technical notes and a case report.

    PubMed

    De Santis, Daniele; Canton, Luciano Claudio; Cucchi, Alessandro; Zanotti, Guglielmo; Pistoia, Enrico; Nocini, Pier Francesco

    2010-01-01

Computer-assisted surgery is based on computerized tomography (CT) scan technology to plan the placement of dental implants and computer-aided design/computer-aided manufacturing (CAD/CAM) technology to create a custom surgical template. It provides guidance for inserting implants after analysis of the existing alveolar bone and planning of the implant positions; the implants can be immediately loaded, achieving esthetic and functional results in a single surgical stage. The absence of guidelines for treating dentulous areas is often due to a lack of computer-assisted surgery. The authors have attempted to use this surgical methodology to replace residual teeth with an immediate implantoprosthetic restoration. The aim of this case report is to show the possibility of treating a dentulous patient by applying a computer-assisted surgical protocol associated with the use of a double surgical template: one before extraction and a second one after extraction of the selected teeth.

  1. Computer Program for Thin Wire Antenna over a Perfectly Conducting Ground Plane. [using Galerkins method and sinusoidal bases

    NASA Technical Reports Server (NTRS)

    Richmond, J. H.

    1974-01-01

    A computer program is presented for a thin-wire antenna over a perfect ground plane. The analysis is performed in the frequency domain, and the exterior medium is free space. The antenna may have finite conductivity and lumped loads. The output data includes the current distribution, impedance, radiation efficiency, and gain. The program uses sinusoidal bases and Galerkin's method.

  2. Transport aircraft loading and balancing system: Using a CLIPS expert system for military aircraft load planning

    NASA Technical Reports Server (NTRS)

    Richardson, J.; Labbe, M.; Belala, Y.; Leduc, Vincent

    1994-01-01

The requirement for improving aircraft utilization and responsiveness in airlift operations has been recognized for quite some time by the Canadian Forces. To date, the utilization of scarce airlift resources has been planned mainly through manpower-intensive manual methods combined with the expertise of highly qualified personnel. In this paper, we address the problem of facilitating the load planning process for military cargo aircraft through the development of a computer-based system. We introduce TALBAS (Transport Aircraft Loading and BAlancing System), a knowledge-based system designed to assist personnel in preparing valid load plans for the C130 Hercules aircraft. The main features of this system, accessible through a user-friendly graphical user interface, consist of the automatic generation of valid cargo arrangements given a list of items to be transported, user definition of load plans, and the automatic validation of such load plans.

  3. Equivalent Viscous Damping Methodologies Applied on VEGA Launch Vehicle Numerical Model

    NASA Astrophysics Data System (ADS)

    Bartoccini, D.; Di Trapani, C.; Fransen, S.

    2014-06-01

Part of the mission analysis of a spacecraft is the so-called launcher-satellite coupled loads analysis, which aims at computing the dynamic environment of the satellite and of the launch vehicle for the most severe load cases in flight. Evidently, the damping of the coupled system must be defined with care so as not to overestimate or underestimate the loads derived for the spacecraft. In this paper the application of several EqVD (Equivalent Viscous Damping) methodologies for Craig and Bampton (CB) systems is investigated. Based on the structural damping defined for the various materials in the parent FE models of the CB components, EqVD matrices can be computed according to different methodologies. The effect of these methodologies on the numerical reconstruction of the VEGA launch vehicle dynamic environment is presented.

  4. Spartan Release Engagement Mechanism (REM) stress and fracture analysis

    NASA Technical Reports Server (NTRS)

    Marlowe, D. S.; West, E. J.

    1984-01-01

The revised stress and fracture analysis of the Spartan REM hardware for current load conditions and mass properties is presented. The stress analysis was performed using a NASTRAN math model of the Spartan REM adapter, base, and payload. Appendix A contains the material properties, loads, and stress analysis of the hardware; the computer output and model description are in Appendix B. Factors of safety used in the stress analysis were 1.4 on tested items and 2.0 on all other items. Fracture analysis of the items considered fracture-critical was accomplished using the MSFC Crack Growth Analysis code, with loads and stresses obtained from the stress analysis. The fracture analysis notes are located in Appendix A and the computer output in Appendix B. All items analyzed met the design and fracture criteria.

  5. Normal loads program for aerodynamic lifting surface theory. [evaluation of spanwise and chordwise loading distributions

    NASA Technical Reports Server (NTRS)

    Medan, R. T.; Ray, K. S.

    1974-01-01

A description and users manual are presented for a U.S.A. FORTRAN IV computer program that evaluates spanwise and chordwise loading distributions, lift coefficient, pitching-moment coefficient, and other stability derivatives for thin wings in linearized, steady, subsonic flow. The program is based on a kernel function method lifting surface theory and is applicable to a large class of planforms, including asymmetrical ones and ones with mixed straight and curved edges.

  6. Angular-contact ball-bearing internal load estimation algorithm using runtime adaptive relaxation

    NASA Astrophysics Data System (ADS)

    Medina, H.; Mutu, R.

    2017-07-01

An algorithm to estimate internal loads in single-row angular-contact ball bearings due to externally applied thrust loads at high operating speeds is presented. A new runtime-adaptive relaxation procedure and blending function are proposed which ensure algorithm stability while also reducing the number of iterations needed to reach convergence, leading to an average reduction in computation time of approximately 80%. The model is validated against a 218 angular-contact bearing and shows excellent agreement with published results.
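    The runtime-adaptive relaxation idea can be illustrated on a generic fixed-point iteration (a toy sketch under assumed update and damping rules, not the bearing model itself):

    ```python
    import math

    def solve_fixed_point(g, x0, omega=1.0, tol=1e-10, max_iter=200):
        """Solve x = g(x) by relaxed iteration x <- (1-omega)*x + omega*g(x),
        halving omega whenever the residual grows (runtime-adaptive relaxation)."""
        x = x0
        res_prev = float("inf")
        for i in range(max_iter):
            x_new = (1.0 - omega) * x + omega * g(x)
            res = abs(x_new - x)
            if res > res_prev:          # divergence detected: damp the update
                omega *= 0.5
            res_prev = res
            x = x_new
            if res < tol:
                return x, i + 1
        raise RuntimeError("no convergence")

    # Example: x = cos(x) has a unique fixed point near 0.739085.
    root, iters = solve_fixed_point(math.cos, 1.0)
    ```

    The damping kicks in only when an iterate overshoots, which is what allows a large initial relaxation factor and hence fewer iterations on well-behaved steps.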

  7. Application of Dynamic Analysis in Semi-Analytical Finite Element Method.

    PubMed

    Liu, Pengfei; Xing, Qinyan; Wang, Dawei; Oeser, Markus

    2017-08-30

Analyses of dynamic responses are significantly important for the design, maintenance, and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional but requires only a two-dimensional FE discretization, incorporating a Fourier series in the third dimension. In this paper, the algorithm for applying dynamic analysis in SAFEM is introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and in the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is sufficiently high and its computation time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out, and the predictions derived from SAFEM are consistent with the measurements. Therefore, SAFEM can reliably predict the dynamic response of asphalt pavement under moving loads, proving beneficial to road administrations in assessing the pavement's state.
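    The Fourier-series idea behind such semi-analytical methods can be sketched for the load expansion alone (illustrative only; the geometry, load, and series length below are hypothetical): a load varying along the third dimension is expanded in sine harmonics, and each harmonic then drives an independent two-dimensional FE problem.

    ```python
    import numpy as np

    L = 10.0                      # extent of the third (analytical) dimension
    n_terms = 50
    z = np.linspace(0.0, L, 2001)
    dz = z[1] - z[0]

    # Patch load of unit intensity between z = 4.5 and z = 5.5 (a simple
    # stand-in for a wheel footprint along the pavement).
    f = np.where(np.abs(z - 5.0) < 0.5, 1.0, 0.0)

    # Sine-series coefficients b_m so that f(z) ~ sum_m b_m * sin(m*pi*z/L);
    # in a semi-analytical FEM each harmonic is solved as a separate 2-D problem.
    m = np.arange(1, n_terms + 1)
    b = np.array([(2.0 / L) * np.sum(f * np.sin(k * np.pi * z / L)) * dz
                  for k in m])

    # Reconstruction from the truncated series.
    f_rec = sum(bk * np.sin(k * np.pi * z / L) for k, bk in zip(m, b))
    ```

    Because the sine harmonics are orthogonal, the 2-D solutions for the individual coefficients can be superposed to recover the full three-dimensional response.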

  8. Passively Targeted Curcumin-Loaded PEGylated PLGA Nanocapsules for Colon Cancer Therapy In Vivo.

    PubMed

    Klippstein, Rebecca; Wang, Julie Tzu-Wen; El-Gogary, Riham I; Bai, Jie; Mustafa, Falisa; Rubio, Noelia; Bansal, Sukhvinder; Al-Jamal, Wafa T; Al-Jamal, Khuloud T

    2015-09-01

Clinical applications of curcumin for the treatment of cancer and other chronic diseases have been mainly hindered by its short biological half-life and poor water solubility. Nanotechnology-based drug delivery systems have the potential to enhance the efficacy of poorly soluble drugs for systemic delivery. This study proposes the use of poly(lactic-co-glycolic acid) (PLGA)-based polymeric oil-cored nanocapsules (NCs) for curcumin loading and delivery to colon cancer in mice after systemic injection. Formulations of different oil compositions are prepared and characterized for their curcumin loading, physico-chemical properties, and shelf-life stability. The results indicate that the castor oil-cored PLGA-based NC achieves high drug loading efficiency (≈18% w/w, drug/polymer) compared to previously reported NCs. Curcumin-loaded NCs internalize more efficiently in CT26 cells than the free drug, and exert therapeutic activity in vitro, leading to apoptosis and blocking the cell cycle. In addition, the formulated NC exhibits an extended blood circulation profile compared to the non-PEGylated NC, and accumulates in subcutaneous CT26 tumors in mice after systemic administration. The results are confirmed by optical and single photon emission computed tomography/computed tomography (SPECT/CT) imaging. In vivo growth delay studies are performed, and significantly smaller tumor volumes are achieved compared to animals injected with empty NCs. This study shows the great potential of the formulated NC for treating colon cancer. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Manual for computing bed load transport using BAGS (Bedload Assessment for Gravel-bed Streams) Software

    Treesearch

    John Pitlick; Yantao Cui; Peter Wilcock

    2009-01-01

    This manual provides background information and instructions on the use of a spreadsheet-based program for Bedload Assessment in Gravel-bed Streams (BAGS). The program implements six bed load transport equations developed specifically for gravel-bed rivers. Transport capacities are calculated on the basis of field measurements of channel geometry, reach-average slope,...

  10. A Survey on Data Storage and Information Discovery in the WSANs-Based Edge Computing Systems

    PubMed Central

    Liang, Junbin; Liu, Renping; Ni, Wei; Li, Yin; Li, Ran; Ma, Wenpeng; Qi, Chuanda

    2018-01-01

    In the post-Cloud era, the proliferation of Internet of Things (IoT) has pushed the horizon of Edge computing, which is a new computing paradigm with data processed at the edge of the network. As the important systems of Edge computing, wireless sensor and actuator networks (WSANs) play an important role in collecting and processing the sensing data from the surrounding environment as well as taking actions on the events happening in the environment. In WSANs, in-network data storage and information discovery schemes with high energy efficiency, high load balance and low latency are needed because of the limited resources of the sensor nodes and the real-time requirement of some specific applications, such as putting out a big fire in a forest. In this article, the existing schemes of WSANs on data storage and information discovery are surveyed with detailed analysis on their advancements and shortcomings, and possible solutions are proposed on how to achieve high efficiency, good load balance, and perfect real-time performances at the same time, hoping that it can provide a good reference for the future research of the WSANs-based Edge computing systems. PMID:29439442

  11. A Survey on Data Storage and Information Discovery in the WSANs-Based Edge Computing Systems.

    PubMed

    Ma, Xingpo; Liang, Junbin; Liu, Renping; Ni, Wei; Li, Yin; Li, Ran; Ma, Wenpeng; Qi, Chuanda

    2018-02-10

    In the post-Cloud era, the proliferation of Internet of Things (IoT) has pushed the horizon of Edge computing, which is a new computing paradigm in which data are processed at the edge of the network. As the important systems of Edge computing, wireless sensor and actuator networks (WSANs) play an important role in collecting and processing the sensing data from the surrounding environment as well as taking actions on the events happening in the environment. In WSANs, in-network data storage and information discovery schemes with high energy efficiency, high load balance and low latency are needed because of the limited resources of the sensor nodes and the real-time requirement of some specific applications, such as putting out a big fire in a forest. In this article, the existing schemes of WSANs on data storage and information discovery are surveyed with detailed analysis on their advancements and shortcomings, and possible solutions are proposed on how to achieve high efficiency, good load balance, and perfect real-time performances at the same time, hoping that it can provide a good reference for the future research of the WSANs-based Edge computing systems.

  12. Mission-based Scenario Research: Experimental Design And Analysis

    DTIC Science & Technology

    2012-01-01

    neurotechnologies called Brain-Computer Interaction Technologies. SUBJECT TERMS: neuroimaging, EEG, task loading, neurotechnologies, ground... neurotechnologies called Brain-Computer Interaction Technologies. INTRODUCTION: Imagine a system that can identify operator fatigue during a long-term...BCIT), a class of neurotechnologies that aim to improve task performance by incorporating measures of brain activity to optimize the interactions

  13. Analysis of Computer Algebra System Tutorials Using Cognitive Load Theory

    ERIC Educational Resources Information Center

    May, Patricia

    2004-01-01

    Most research in the area of Computer Algebra Systems (CAS) has been designed to compare the effectiveness of instructional technology to traditional lecture-based formats. While results are promising, research also indicates evidence of the steep learning curve imposed by the technology. Yet no studies have been conducted to investigate this…

  14. Load Balancing Strategies for Multiphase Flows on Structured Grids

    NASA Astrophysics Data System (ADS)

    Olshefski, Kristopher; Owkes, Mark

    2017-11-01

    The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise due to the additional computational effort required at the gas-liquid interface. Yet many current load balancing schemes are designed only for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated, including brute force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared memory load balancing using OpenMP. Each of these strategies is tested with a simple one-dimensional model prior to implementation into the three-dimensional NGA code. Current results show that load balancing will reduce computational time by at least 30%.
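The interface-driven imbalance described above can be illustrated with a simple one-dimensional cost model (hypothetical cell costs, not the NGA code itself): equal-size blocks ignore the extra work near the gas-liquid interface, while a cost-weighted contiguous partition built from prefix sums evens the work out.

```python
def equal_blocks(costs, nranks):
    """Equal-count block decomposition: ignores per-cell cost."""
    base, rem = divmod(len(costs), nranks)
    parts, i = [], 0
    for r in range(nranks):
        size = base + (1 if r < rem else 0)
        parts.append(costs[i:i + size])
        i += size
    return parts

def weighted_blocks(costs, nranks):
    """Contiguous partition closing each block once it reaches the
    average cost per rank (a greedy prefix-sum heuristic)."""
    target = sum(costs) / nranks
    parts, cur, acc = [], [], 0.0
    for c in costs:
        cur.append(c)
        acc += c
        if acc >= target and len(parts) < nranks - 1:
            parts.append(cur)
            cur, acc = [], 0.0
    parts.append(cur)
    return parts

def imbalance(parts):
    """Max rank load divided by mean rank load (1.0 = perfect balance)."""
    loads = [sum(p) for p in parts]
    return max(loads) / (sum(loads) / len(loads))

# 88 cells; the 8 cells around the "interface" cost 10x as much.
costs = [1.0] * 40 + [10.0] * 8 + [1.0] * 40
bad = imbalance(equal_blocks(costs, 4))        # 1.45: two ranks carry the interface
good = imbalance(weighted_blocks(costs, 4))    # 1.0: interface work spread evenly
```
The same bookkeeping carries over to a structured 3-D grid, where the "costs" would come from counting interface cells per slab.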

  15. Further studies using matched filter theory and stochastic simulation for gust loads prediction

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1993-01-01

    This paper describes two analysis methods -- one deterministic, the other stochastic -- for computing maximized and time-correlated gust loads for aircraft with nonlinear control systems. The first method is based on matched filter theory; the second is based on stochastic simulation. The paper summarizes the methods, discusses the selection of gust intensity for each method and presents numerical results. A strong similarity between the results from the two methods is seen to exist for both linear and nonlinear configurations.
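For a linear system, the matched-filter result these methods build on can be sketched in a few lines (a toy discrete-time illustration, not the paper's aeroelastic model): among all unit-energy excitations, the time-reversed load impulse response maximizes the peak load, and by the Cauchy-Schwarz inequality that peak equals the norm of the impulse response.

```python
import numpy as np

def peak_load(h, w):
    """Peak absolute load response of impulse response h to excitation w."""
    return float(np.max(np.abs(np.convolve(h, w))))

rng = np.random.default_rng(0)
n = 64
h = rng.standard_normal(n) * np.exp(-np.arange(n) / 10.0)  # toy load impulse response

w_matched = h[::-1] / np.linalg.norm(h)   # matched (time-reversed, unit-energy) gust
w_other = rng.standard_normal(n)
w_other /= np.linalg.norm(w_other)

peak_matched = peak_load(h, w_matched)    # equals ||h|| by Cauchy-Schwarz
peak_other = peak_load(h, w_other)        # never exceeds the matched peak
```
For nonlinear control systems this closed-form optimum no longer holds, which is why the papers resort to one-dimensional searches and stochastic simulation.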

  16. A computer program to obtain time-correlated gust loads for nonlinear aircraft using the matched-filter-based method

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1994-01-01

    NASA Langley Research Center has, for several years, conducted research in the area of time-correlated gust loads for linear and nonlinear aircraft. The results of this work led NASA to recommend that the Matched-Filter-Based One-Dimensional Search Method be used for gust load analyses of nonlinear aircraft. This manual describes this method, describes a FORTRAN code which performs this method, and presents example calculations for a sample nonlinear aircraft model. The name of the code is MFD1DS (Matched-Filter-Based One-Dimensional Search). The program source code, the example aircraft equations of motion, a sample input file, and a sample program output are all listed in the appendices.

  17. Application of Hybrid Optimization-Expert System for Optimal Power Management on Board Space Power Station

    NASA Technical Reports Server (NTRS)

    Momoh, James; Chattopadhyay, Deb; Basheer, Omar Ali AL

    1996-01-01

    The space power system has two sources of energy: photo-voltaic blankets and batteries. The optimal power management problem on board comprises two broad operations. The first, off-line power scheduling, determines the load allocation schedule for the next several hours based on forecasts of load and solar power availability; this part of the study puts less emphasis on computational speed and more on the optimality of the solution. The second, on-line power rescheduling, is needed in the event of a contingency to optimally reschedule the loads so as to minimize the 'unused' or 'wasted' energy while keeping the priority on certain types of load and minimizing disturbance of the original optimal schedule determined in the first-stage off-line study. The computational performance of the on-line 'rescheduler' is an important criterion and plays a critical role in the selection of the appropriate tool. The Howard University Center for Energy Systems and Control has developed a hybrid optimization-expert-system-based power management program. The pre-scheduler has been developed using a non-linear multi-objective optimization technique called the Outer Approximation method and implemented using the General Algebraic Modeling System (GAMS). The optimization model can deal with multiple conflicting objectives, viz. maximizing energy utilization, minimizing the variation of load over a day, etc., and incorporates several complex interactions between the loads in a space system. The rescheduling is performed using an expert system developed in PROLOG, which uses a rule base to reallocate the loads in an emergency condition, viz. shortage of power due to solar array failure, increase of base load, addition of a new activity, repetition of an old activity, etc.
Both the modules handle decision making on battery charging and discharging and allocation of loads over a time-horizon of a day divided into intervals of 10 minutes. The models have been extensively tested using a case study for the Space Station Freedom and the results for the case study will be presented. Several future enhancements of the pre-scheduler and the 'rescheduler' have been outlined which include graphic analyzer for the on-line module, incorporating probabilistic considerations, including spatial location of the loads and the connectivity using a direct current (DC) load flow model.

  18. Investigations of Sediment Transportation, Middle Loup River at Dunning, Nebraska: With Application of Data from Turbulence Flume

    USGS Publications Warehouse

    Hubbell, David Wellington; Matejka, Donald Quintin

    1959-01-01

    An investigation of fluvial sediments of the Middle Loup River at Dunning, Nebr., was begun in 1946 and expanded in 1949 to provide information on sediment transportation. Construction of an artificial turbulence flume at which the total sediment discharge of the Middle Loup River at Dunning, Nebr., could be measured with suspended-sediment sampling equipment was completed in 1949. Since that time, measurements have been made at the turbulence flume and at several selected sections in a reach upstream and downstream from the flume. The Middle Loup River upstream from Dunning traverses the sandhills region of north-central Nebraska and has a drainage area of approximately 1,760 square miles. The sandhills are underlain by the Ogallala formation of Tertiary age and are mantled by loess and dune sand. The topography is characterized by northwest-trending sand dunes, which are stabilized by grass cover. The valley floor upstream from Dunning is generally about half a mile wide, is about 80 feet lower than the uplands, and is composed of sand that was mostly stream deposited. The channel is defined by low banks. Bank erosion is prevalent and is the source of most of the sediment load. The flow originates mostly from ground-water accretion and varies between about 200 and 600 cfs (cubic feet per second). Measured suspended-sediment loads vary from about 200 to 2,000 tons per day, of which about 20 percent is finer than 0.062 millimeter and 100 percent is finer than 0.50 millimeter. Total sediment discharges vary from about 500 to 3,500 tons per day, of which about 10 percent is finer than 0.062 millimeter, about 90 percent is finer than 0.50 millimeter, and about 98 percent is finer than 2.0 millimeters. The measured suspended-sediment discharge in the reach near Dunning averages about one-half of the total sediment discharge as measured at the turbulence flume. This report contains information collected during the period October 1, 1948, to September 30, 1952. 
The information includes sediment discharges; particle-size analyses of total load, of measured suspended sediment, and of bed material; water discharges and other hydraulic data for the turbulence flume and the selected sections. Sediment discharges have been computed with several different formulas, and insofar as possible, each computed load has been compared with data from the turbulence flume. Sediment discharges computed with the Einstein procedure did not agree well, in general, with comparable measured loads. However, a satisfactory representative cross section for the reach could not be determined with the cross sections that were selected for this investigation. If the computed cross section was narrower and deeper than a representative cross section for the reach, computed loads were high; and if the computed cross section was wider and shallower than a representative cross section for the reach, computed loads were low. Total sediment discharges computed with the modified Einstein procedure compared very well with the loads of individual size ranges and the measured total loads at the turbulence flume. Sediment discharges computed with the Straub equation averaged about twice the measured total sediment discharge at the turbulence flume. Bed-load discharges computed with the Kalinske equation were of about the right magnitude; however, high computed loads were associated with low total loads, low unmeasured loads, and low concentrations of measured suspended sediment coarser than 0.125 millimeter. Bed-load discharges computed with the Schoklitsch equation seemed somewhat high; about one-third of the computed loads were slightly higher than comparable unmeasured loads. Although, in general, high computed discharges with the Schoklitsch equation were associated with high measured total loads, high unmeasured loads, and high concentrations of measured suspended sediment coarser than 0.125 millimeter, the trend was not consistent. 
Bed-load discharges computed

  19. Monitor-based evaluation of pollutant load from urban stormwater runoff in Beijing.

    PubMed

    Liu, Y; Che, W; Li, J

    2005-01-01

    As a major pollutant source to urban receiving waters, the non-point source pollution from urban runoff needs to be well studied and effectively controlled. Based on monitoring data from urban runoff pollutant sources, this article describes a systematic estimation of total pollutant loads from the urban areas of Beijing. A numerical model was developed to quantify main pollutant loads of urban runoff in Beijing. A sub-procedure is involved in this method, in which the flush process influences both the quantity and quality of stormwater runoff. A statistics-based method was applied in computing the annual pollutant load as an output of the runoff. The proportions of pollutant from point-source and non-point sources were compared. This provides a scientific basis for proper environmental input assessment of urban stormwater pollution to receiving waters, improvement of infrastructure performance, implementation of urban stormwater management, and utilization of stormwater.
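The core of such a statistics-based load estimate can be illustrated with a generic event-mean-concentration (EMC) calculation (illustrative catchment numbers, not the Beijing model's calibrated values): annual load is the runoff volume generated by the rainfall events times the event mean concentration.

```python
def annual_load_kg(rain_mm, area_m2, runoff_coeff, emc_mg_per_l):
    """Annual pollutant load = runoff volume x event mean concentration."""
    runoff_m3 = sum(rain_mm) / 1000.0 * area_m2 * runoff_coeff
    # mg/L x m3 = g (1 m3 = 1000 L); divide by 1000 to get kg.
    return runoff_m3 * emc_mg_per_l / 1000.0

# Hypothetical catchment: 50 ha, runoff coefficient 0.6, COD EMC 150 mg/L.
events_mm = [12.0, 8.5, 25.0, 5.0, 40.0]
load = annual_load_kg(events_mm, 50 * 10_000, 0.6, 150.0)   # 4072.5 kg
```
A first-flush sub-procedure, as described in the abstract, would replace the single EMC with a concentration that decays over the course of each event.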

  20. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 1 quarter 3 progress report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lottes, S.A.; Kulak, R.F.; Bojanowski, C.

    2011-08-26

    The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high-performance computing-based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water loads on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risk of structural failure that these loads pose. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. 
This quarterly report documents technical progress on the project tasks for the period of April through June 2011.

  1. Plausibility and parameter sensitivity of micro-finite element-based joint load prediction at the proximal femur.

    PubMed

    Synek, Alexander; Pahr, Dieter H

    2018-06-01

    A micro-finite element-based method to estimate the bone loading history based on bone architecture was recently presented in the literature. However, a thorough investigation of the parameter sensitivity and plausibility of this method to predict joint loads is still missing. The goals of this study were (1) to analyse the parameter sensitivity of the joint load predictions at one proximal femur and (2) to assess the plausibility of the results by comparing load predictions of ten proximal femora to in vivo hip joint forces measured with instrumented prostheses (available from www.orthoload.com). Joint loads were predicted by optimally scaling the magnitude of four unit loads (inclined [Formula: see text] to [Formula: see text] with respect to the vertical axis) applied to micro-finite element models created from high-resolution computed tomography scans ([Formula: see text]m voxel size). Parameter sensitivity analysis was performed by varying a total of nine parameters and showed that predictions of the peak load directions (range 10[Formula: see text]-[Formula: see text]) are more robust than the predicted peak load magnitudes (range 2344.8-4689.5 N). Comparing the results of all ten femora with the in vivo loading data of ten subjects showed that peak loads are plausible both in terms of the load direction (in vivo: [Formula: see text], predicted: [Formula: see text]) and magnitude (in vivo: [Formula: see text], predicted: [Formula: see text]). Overall, this study suggests that micro-finite element-based joint load predictions are both plausible and robust in terms of the predicted peak load direction, but predicted load magnitudes should be interpreted with caution.
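The scaling step at the heart of this approach, finding magnitudes for a handful of unit load cases so their combined response best matches a target loading state, reduces to a linear least-squares problem because finite element responses superpose linearly. A minimal sketch with made-up numbers (the actual micro-FE objective and constraints are more elaborate):

```python
import numpy as np

def scale_unit_loads(unit_responses, target):
    """Least-squares magnitudes s minimizing ||unit_responses @ s - target||."""
    s, *_ = np.linalg.lstsq(unit_responses, target, rcond=None)
    return s

# Toy example: responses of 5 "elements" to 4 unit load directions.
A = np.array([[1.0, 0.2, 0.0, 0.1],
              [0.3, 1.0, 0.2, 0.0],
              [0.0, 0.3, 1.0, 0.2],
              [0.1, 0.0, 0.3, 1.0],
              [0.2, 0.1, 0.1, 0.3]])
true_s = np.array([2.0, 0.5, 1.5, 1.0])
s = scale_unit_loads(A, A @ true_s)   # recovers the generating magnitudes
```
In the study, the "target" is a physiologically motivated tissue-loading state and the recovered magnitudes give the predicted peak joint loads per direction.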

  2. Model Reduction of Computational Aerothermodynamics for Multi-Discipline Analysis in High Speed Flows

    NASA Astrophysics Data System (ADS)

    Crowell, Andrew Rippetoe

    This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due to both the inability to accurately test scaled vehicles in wind tunnels and to the time intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. 
The developed two-pronged approach is found to exhibit balanced performance in terms of accuracy and computational expense, relative to several existing approaches. This approach enables CFD-based loads to be implemented into long duration fluid-thermal-structural simulations.

  3. Finite element analysis of 6 large PMMA skull reconstructions: A multi-criteria evaluation approach

    PubMed Central

    Ridwan-Pramana, Angela; Marcián, Petr; Borák, Libor; Narra, Nathaniel; Forouzanfar, Tymour; Wolff, Jan

    2017-01-01

    In this study 6 pre-operative designs for PMMA-based reconstructions of cranial defects were evaluated for their mechanical robustness using finite element modeling. Clinical experience and engineering principles were employed to create multiple plan options, which were subsequently computationally analyzed for mechanically relevant parameters under 50N loads: stress, strain and deformation in various components of the assembly. The factors assessed were: defect size, location and shape. The major variable in the cranioplasty assembly design was the arrangement of the fixation plates. An additional study variable introduced was the location of the 50N load within the implant area. It was found that in smaller defects, it was simpler to design a symmetric distribution of plates and, under limited variability in load location, it was possible to design an optimal assembly for the expected loads. However, for very large defects with complex shapes, the variability in the load locations introduces complications to the intuitive design of the optimal assembly. The study shows that it can be beneficial to incorporate multi-design computational analyses to decide upon the optimal plan for a clinical case. PMID:28609471

  4. Finite element analysis of 6 large PMMA skull reconstructions: A multi-criteria evaluation approach.

    PubMed

    Ridwan-Pramana, Angela; Marcián, Petr; Borák, Libor; Narra, Nathaniel; Forouzanfar, Tymour; Wolff, Jan

    2017-01-01

    In this study 6 pre-operative designs for PMMA-based reconstructions of cranial defects were evaluated for their mechanical robustness using finite element modeling. Clinical experience and engineering principles were employed to create multiple plan options, which were subsequently computationally analyzed for mechanically relevant parameters under 50N loads: stress, strain and deformation in various components of the assembly. The factors assessed were: defect size, location and shape. The major variable in the cranioplasty assembly design was the arrangement of the fixation plates. An additional study variable introduced was the location of the 50N load within the implant area. It was found that in smaller defects, it was simpler to design a symmetric distribution of plates and, under limited variability in load location, it was possible to design an optimal assembly for the expected loads. However, for very large defects with complex shapes, the variability in the load locations introduces complications to the intuitive design of the optimal assembly. The study shows that it can be beneficial to incorporate multi-design computational analyses to decide upon the optimal plan for a clinical case.

  5. FPGA-based protein sequence alignment : A review

    NASA Astrophysics Data System (ADS)

    Isa, Mohd. Nazrin Md.; Muhsen, Ku Noor Dhaniah Ku; Saiful Nurdin, Dayana; Ahmad, Muhammad Imran; Anuar Zainol Murad, Sohiful; Nizam Mohyar, Shaiful; Harun, Azizi; Hussin, Razaidi

    2017-11-01

    Sequence alignment has been optimized using several techniques that accelerate computation of the optimal score, such as implementing DP-based algorithms in hardware on FPGA-based platforms. Hardware implementations face performance challenges such as frequent memory access and a highly data-dependent computation process. Therefore, the main focus of this paper is the processing element (PE) configuration, which involves memory accesses to load the data (substitution matrix, query sequence characters), and the PE configuration time. Previous works have taken various approaches to enhance PE configuration performance, such as serial and parallel configuration chains, in which the configuration data are loaded into the PEs sequentially or simultaneously, respectively. Some researchers have shown that a parallel configuration chain improves both the configuration time and the area.
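The cell update that each PE in such systolic arrays evaluates is the classic DP recurrence; for local alignment (Smith-Waterman) with a linear gap penalty it looks like the following software sketch (the match/mismatch/gap scores are assumptions for illustration, not taken from the paper):

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment score between sequences a and b
    (Smith-Waterman DP with a linear gap penalty)."""
    rows = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = rows[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment clamps at zero; gaps extend from above/left.
            rows[i][j] = max(0, diag, rows[i - 1][j] + gap, rows[i][j - 1] + gap)
            best = max(best, rows[i][j])
    return best
```
In an FPGA array each PE typically owns one query character (one DP column) and database characters stream through, which is why loading the query and substitution scores into the PEs, the configuration step the paper studies, dominates setup time.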

  6. Computer-Assisted Drug Formulation Design: Novel Approach in Drug Delivery.

    PubMed

    Metwally, Abdelkader A; Hathout, Rania M

    2015-08-03

    We hypothesize that, by using several chemo/bio informatics tools and statistical computational methods, we can study and then predict the behavior of several drugs in model nanoparticulate lipid and polymeric systems. Accordingly, two different matrices comprising tripalmitin, a core component of solid lipid nanoparticles (SLN), and PLGA were first modeled using molecular dynamics simulation, and then the interaction of drugs with these systems was studied by means of computing the free energy of binding using the molecular docking technique. These binding energies were hence correlated with the loadings of these drugs in the nanoparticles obtained experimentally from the available literature. The obtained relations were verified experimentally in our laboratory using curcumin as a model drug. Artificial neural networks were then used to establish the effect of the drugs' molecular descriptors on the binding energies and hence on the drug loading. The results showed that the used soft computing methods can provide an accurate method for in silico prediction of drug loading in tripalmitin-based and PLGA nanoparticulate systems. These results have the prospective of being applied to other nano drug-carrier systems, and this integrated statistical and chemo/bio informatics approach offers a new toolbox to the formulation science by proposing what we present as computer-assisted drug formulation design (CADFD).

  7. Computing Shapes Of Cascade Diffuser Blades

    NASA Technical Reports Server (NTRS)

    Tran, Ken; Prueger, George H.

    1993-01-01

    Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angle for blade design based on the 65-series data base of the National Advisory Committee for Aeronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.

  8. PIV-based estimation of unsteady loads on a flat plate at high angle of attack using momentum equation approaches

    NASA Astrophysics Data System (ADS)

    Guissart, A.; Bernal, L. P.; Dimitriadis, G.; Terrapon, V. E.

    2017-05-01

    This work presents, compares and discusses results obtained with two indirect methods for the calculation of aerodynamic forces and pitching moment from 2D Particle Image Velocimetry (PIV) measurements. Both methodologies are based on the formulations of the momentum balance: the integral Navier-Stokes equations and the "flux equation" proposed by Noca et al. (J Fluids Struct 13(5):551-578, 1999), which has been extended to the computation of moments. The indirect methods are applied to spatio-temporal data for different separated flows around a plate with a 16:1 chord-to-thickness ratio. Experimental data are obtained in a water channel for both a plate undergoing a large amplitude imposed pitching motion and a static plate at high angle of attack. In addition to PIV data, direct measurements of aerodynamic loads are carried out to assess the quality of the indirect calculations. It is found that indirect methods are able to compute the mean and the temporal evolution of the loads for two-dimensional flows with a reasonable accuracy. Nonetheless, both methodologies are noise sensitive, and the parameters impacting the computation should thus be chosen carefully. It is also shown that results can be improved through the use of dynamic mode decomposition (DMD) as a pre-processing step.
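The simplest form of the momentum-balance idea behind both methods is the classic wake-deficit integral for a steady 2-D flow: drag per unit span D = ρ∫u(U∞ − u)dy across the outlet plane (the formulations in the paper add the unsteady and pressure terms needed for real PIV data). A sketch with a synthetic Gaussian wake (all values illustrative):

```python
import numpy as np

def drag_from_wake(y, u, u_inf, rho=1000.0):
    """Drag per unit span from the wake momentum deficit (trapezoidal rule)."""
    f = rho * u * (u_inf - u)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

y = np.linspace(-0.5, 0.5, 401)                 # outlet traverse, metres
u_inf = 1.0                                     # freestream velocity, m/s
u = u_inf - 0.4 * np.exp(-(y / 0.1) ** 2)       # synthetic Gaussian wake deficit
D = drag_from_wake(y, u, u_inf, rho=1000.0)     # water; about 50.8 N per metre span
```
As the abstract notes, applying this to measured fields is noise sensitive, since the deficit integrand amplifies velocity errors near the wake edges, which motivates the DMD pre-processing step.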

  9. A new perspective on the perceptual selectivity of attention under load.

    PubMed

    Giesbrecht, Barry; Sy, Jocelyn; Bundesen, Claus; Kyllingsbaek, Søren

    2014-05-01

    The human attention system helps us cope with a complex environment by supporting the selective processing of information relevant to our current goals. Understanding the perceptual, cognitive, and neural mechanisms that mediate selective attention is a core issue in cognitive neuroscience. One prominent model of selective attention, known as load theory, offers an account of how task demands determine when information is selected and an account of the efficiency of the selection process. However, load theory has several critical weaknesses that suggest that it is time for a new perspective. Here we review the strengths and weaknesses of load theory and offer an alternative biologically plausible computational account that is based on the neural theory of visual attention. We argue that this new perspective provides a detailed computational account of how bottom-up and top-down information is integrated to provide efficient attentional selection and allocation of perceptual processing resources. © 2014 New York Academy of Sciences.

  10. Load forecasting via suboptimal seasonal autoregressive models and iteratively reweighted least squares estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbamalu, G.A.N.; El-Hawary, M.E.

    The authors propose suboptimal least squares or iteratively reweighted least squares (IRWLS) procedures for estimating the parameters of a seasonal multiplicative AR model encountered during power system load forecasting. The proposed method involves using an interactive computer environment to estimate the parameters of a seasonal multiplicative AR process. The method comprises five major computational steps. The first determines the order of the seasonal multiplicative AR process, and the second uses least squares or IRWLS to estimate the optimal nonseasonal AR model parameters. In the third step one obtains the intermediate series by back forecasting, which is followed by using least squares or IRWLS to estimate the optimal seasonal AR parameters. The final step uses the estimated parameters to forecast future load. The method is applied to predict the Nova Scotia Power Corporation's 168 lead time hourly load. The results obtained are documented and compared with results based on the Box and Jenkins method.
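The IRWLS step can be sketched for a plain (non-seasonal) AR(2) regression: fit by least squares, then repeatedly reweight each sample by the inverse of its residual magnitude, which drives the fit toward a robust L1-type estimate (a toy illustration under those assumptions, not the paper's seasonal multiplicative model):

```python
import numpy as np

def irwls_ar(x, p, iters=20, eps=1e-6):
    """Estimate AR(p) coefficients by iteratively reweighted least squares."""
    n = len(x)
    y = x[p:]
    # Lag matrix: column k holds x shifted by lag k+1.
    X = np.column_stack([x[p - 1 - k : n - 1 - k] for k in range(p)])
    w = np.ones(len(y))
    a = np.zeros(p)
    for _ in range(iters):
        sw = np.sqrt(w)
        a, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        w = 1.0 / np.maximum(np.abs(y - X @ a), eps)  # L1-type reweighting
    return a

# Simulate a stable AR(2) process and recover its coefficients.
rng = np.random.default_rng(1)
true_coeffs = np.array([0.6, 0.3])
x = np.zeros(1000)
for t in range(2, len(x)):
    x[t] = (true_coeffs[0] * x[t - 1] + true_coeffs[1] * x[t - 2]
            + 0.1 * rng.standard_normal())
coeffs = irwls_ar(x, 2)
```
The seasonal multiplicative case in the paper applies the same estimator twice, once to the nonseasonal lags and once to the seasonal lags of the back-forecast intermediate series.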

  11. Probabilistic Simulation of Combined Thermo-Mechanical Cyclic Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2011-01-01

    A methodology to compute probabilistically-combined thermo-mechanical fatigue life of polymer matrix laminated composites has been developed and is demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor-interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability-integration method is used to compute the probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/-45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical-cyclic loads and low thermal-cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical-cyclic loads and high thermal-cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of the matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.
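    The fast probability-integration step can be imitated with a plain Monte Carlo sketch (FPI itself is an approximate analytical method; the life model and the distributions below are invented purely for illustration):

```python
import numpy as np

def failure_probability(life_model, dists, demand, n=100_000, seed=0):
    """Monte Carlo stand-in for fast probability integration: sample the
    primitive random variables (normal, for simplicity), push them through
    the life model, and estimate P(fatigue life < demanded life)."""
    rng = np.random.default_rng(seed)
    samples = {name: rng.normal(mu, sd, n) for name, (mu, sd) in dists.items()}
    life = life_model(samples)
    return float(np.mean(life < demand))
```

    Sensitivities of the computed reliability can then be estimated by perturbing each distribution in turn and re-running the same routine.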

  12. Probabilistic Simulation for Combined Cycle Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A methodology to compute probabilistic fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress dependent multifactor interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability integration method is used to compute the probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/- 45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical cyclic loads and low thermal cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical cyclic loads and high thermal cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of the matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.

  13. Probabilistic Simulation of Combined Thermo-Mechanical Cyclic Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A methodology to compute probabilistically-combined thermo-mechanical fatigue life of polymer matrix laminated composites has been developed and is demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor-interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability-integration method is used to compute the probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/-45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical-cyclic loads and low thermal-cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical-cyclic loads and high thermal-cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of the matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.

  14. Experimental and Numerical Evaluation of the Mechanical Behavior of Strongly Anisotropic Light-Weight Metallic Fiber Structures under Static and Dynamic Compressive Loading

    PubMed Central

    Andersen, Olaf; Vesenjak, Matej; Fiedler, Thomas; Jehring, Ulrike; Krstulović-Opara, Lovre

    2016-01-01

    Rigid metallic fiber structures made from a variety of different metals and alloys have been investigated mainly with regard to their functional properties such as heat transfer, pressure drop, or filtration characteristics. With the recent advent of aluminum and magnesium-based fiber structures, the application of such structures in light-weight crash absorbers has become conceivable. The present paper therefore elucidates the mechanical behavior of rigid sintered fiber structures under quasi-static and dynamic loading. Special attention is paid to the strongly anisotropic properties observed for different directions of loading in relation to the main fiber orientation. Basically, the structures show an orthotropic behavior; however, the finite thickness of the fiber slabs results in moderate deviations from a purely orthotropic behavior. The morphology of the tested specimens is examined by computed tomography, and experimental results for different directions of loading as well as different relative densities are presented. Numerical calculations were carried out using real structural data derived from the computed tomography data. Depending on the direction of loading, the fiber structures show a distinctly different deformation behavior both experimentally and numerically. Based on these results, the prevalent modes of deformation are discussed, and a first comparison with an established polymer foam and an assessment of the applicability of aluminum fiber structures in crash protection devices are attempted. PMID:28773522

  15. Experimental and Numerical Evaluation of the Mechanical Behavior of Strongly Anisotropic Light-Weight Metallic Fiber Structures under Static and Dynamic Compressive Loading.

    PubMed

    Andersen, Olaf; Vesenjak, Matej; Fiedler, Thomas; Jehring, Ulrike; Krstulović-Opara, Lovre

    2016-05-21

    Rigid metallic fiber structures made from a variety of different metals and alloys have been investigated mainly with regard to their functional properties such as heat transfer, pressure drop, or filtration characteristics. With the recent advent of aluminum and magnesium-based fiber structures, the application of such structures in light-weight crash absorbers has become conceivable. The present paper therefore elucidates the mechanical behavior of rigid sintered fiber structures under quasi-static and dynamic loading. Special attention is paid to the strongly anisotropic properties observed for different directions of loading in relation to the main fiber orientation. Basically, the structures show an orthotropic behavior; however, the finite thickness of the fiber slabs results in moderate deviations from a purely orthotropic behavior. The morphology of the tested specimens is examined by computed tomography, and experimental results for different directions of loading as well as different relative densities are presented. Numerical calculations were carried out using real structural data derived from the computed tomography data. Depending on the direction of loading, the fiber structures show a distinctly different deformation behavior both experimentally and numerically. Based on these results, the prevalent modes of deformation are discussed, and a first comparison with an established polymer foam and an assessment of the applicability of aluminum fiber structures in crash protection devices are attempted.

  16. A computer-based servo system for controlling isotonic contractions of muscle.

    PubMed

    Smith, J P; Barsotti, R J

    1993-11-01

    We have developed a computer-based servo system for controlling isotonic releases in muscle. This system is a composite of commercially available devices: an IBM personal computer, an analog-to-digital (A/D) board, an Akers AE801 force transducer, and a Cambridge Technology motor. The servo loop controlling the force clamp is generated by computer via the A/D board, using a program written in QuickBASIC 4.5. Results are shown that illustrate the ability of the system to clamp the force generated by either skinned cardiac trabeculae or single rabbit psoas fibers down to the resolution of the force transducer within 4 ms. This rate is independent of the level of activation of the tissue and the size of the load imposed during the release. The key to the effectiveness of the system consists of two algorithms that are described in detail. The first is used to calculate the error signal to hold force to the desired level. The second algorithm is used to calculate the appropriate gain of the servo for a particular fiber and the size of the desired load to be imposed. The results show that the described computer-based method for controlling isotonic releases in muscle represents a good compromise between simplicity and performance and is an alternative to the custom-built digital/analog servo devices currently being used in studies of muscle mechanics.
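    A minimal simulation conveys the servo idea. The "fiber" model, gain, and time constants below are invented for illustration; the paper's second algorithm, which tunes the gain per fiber and per load, is only hinted at in the comments:

```python
def force_clamp(f0, f_target, gain, k=1.0, tau=2e-3, dt=1e-5, steps=5000):
    """Toy isotonic force clamp: each tick computes the force error
    (cf. the paper's first algorithm) and nudges the motor position in
    proportion to it; the 'fiber' responds with a first-order lag of
    time constant tau. In the real system the gain would itself be
    computed per fiber and per load size (the second algorithm)."""
    force, pos = f0, 0.0
    trace = []
    for _ in range(steps):
        err = force - f_target          # error signal driving the servo
        pos += gain * err * dt          # integral action on motor position
        # fiber force relaxes toward the level set by the motor position
        force += dt * ((f0 - k * pos) - force) / tau
        trace.append(force)
    return trace
```

    With this integral action the steady-state force settles on the commanded load regardless of the initial tension, which is the essence of a force clamp.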

  17. Fracture Response Enhancement Of Aluminum Using In-Situ Selective Reinforcement

    NASA Technical Reports Server (NTRS)

    Abada, Christopher H.; Farley, Gary L.; Hyer, Michael W.

    2006-01-01

    A computer-based parametric study of the effect of reinforcement architectures on fracture response of aluminum compact-tension (CT) specimens is performed. Eleven different reinforcement architectures consisting of rectangular and triangular cross-section reinforcements were evaluated. Reinforced specimens produced between 13 and 28 percent higher fracture load than achieved with the unreinforced case. Reinforcements with blunt leading edges (rectangular reinforcements) exhibited superior performance relative to the triangular reinforcements with sharp leading edges. Relative to the rectangular reinforcements, the most important architectural feature was reinforcement thickness. At failure, the reinforcements carried between 58 and 85 percent of the load applied to the specimen, suggesting that there is considerable load transfer between the base material and the reinforcement.

  18. Effects of Mental Load and Fatigue on Steady-State Evoked Potential Based Brain Computer Interface Tasks: A Comparison of Periodic Flickering and Motion-Reversal Based Visual Attention.

    PubMed

    Xie, Jun; Xu, Guanghua; Wang, Jing; Li, Min; Han, Chengcheng; Jia, Yaguang

    The steady-state visual evoked potential (SSVEP) based paradigm is a conventional BCI method with the advantages of a high information transfer rate, high tolerance to artifacts, and robust performance across users. However, the mental load and fatigue that occur when users stare at flickering stimuli are a critical problem in the implementation of SSVEP-based BCIs. Based on the electroencephalography (EEG) power indices α, θ, θ + α, the ratio index θ/α, and the response properties of amplitude and SNR, this study quantitatively evaluated mental load and fatigue in both the conventional flickering and the novel motion-reversal visual attention tasks. Results over nine subjects revealed significantly greater alleviation of mental load in the motion-reversal task than in the flickering task. The interaction between the factors "stimulation type" and "fatigue level" also identified motion-reversal stimulation as a superior anti-fatigue solution for long-term BCI operation. Taken together, our work provides an objective method favorable to the design of more practically applicable steady-state evoked potential based BCIs.
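    The θ/α index mentioned above can be computed from a periodogram; the band edges used here are the conventional 4-8 Hz (theta) and 8-13 Hz (alpha), and this is a simplified sketch rather than the authors' pipeline:

```python
import numpy as np

def theta_alpha_ratio(x, fs):
    """theta/alpha EEG power ratio, a common fatigue index: power is
    summed over the theta (4-8 Hz) and alpha (8-13 Hz) bands of the
    signal's periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    theta = psd[(freqs >= 4) & (freqs < 8)].sum()
    alpha = psd[(freqs >= 8) & (freqs < 13)].sum()
    return theta / alpha
```

    A rising θ/α ratio over the course of a session is the usual marker of growing fatigue.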

  19. Experience with ethylene plant computer control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nasi, M.; Darby, M.L.; Sourander, M.

    This article discusses the control strategies, results, and the opinions of management and operations regarding a computer-based ethylene plant control system. The ethylene unit contains 9 cracking heaters and has a nameplate capacity of 200,000 tpa of ethylene. The article reports on control performance at different unit loadings and with different feedstock types. Converting the yield and utility-consumption benefits of computer control into monetary units gives a system payback time of less than two years.

  20. Distributing an executable job load file to compute nodes in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gooding, Thomas M.

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
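    The participation logic can be sketched as a recursive walk over the node tree (a hypothetical structure; the actual mechanism works bottom-up, with each node messaging its parent the link identification):

```python
def build_class_route(tree, participating, node=0):
    """Return the set of nodes on the class route rooted at `node`:
    a node joins the route if it participates in the job itself or if
    any descendant does (so it must forward the load file downstream).
    `tree` maps a node id to the list of its child node ids."""
    route = set()
    for child in tree.get(node, []):
        route |= build_class_route(tree, participating, child)
    if node in participating or route:
        route.add(node)
    return route
```

    Broadcasting the load file along this route then reaches every participant without touching subtrees that have none.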

  1. SCANS (Shipping Cask ANalysis System) a microcomputer-based analysis system for shipping cask design review: User's manual to Version 3a. Volume 1, Revision 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mok, G.C.; Thomas, G.R.; Gerhard, M.A.

    SCANS (Shipping Cask ANalysis System) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for evaluating safety analysis reports on spent fuel shipping casks. SCANS is an easy-to-use system that calculates the global response to impact loads, pressure loads and thermal conditions, providing reviewers with an independent check on analyses submitted by licensees. SCANS is based on microcomputers compatible with the IBM-PC family of computers. The system is composed of a series of menus, input programs, cask analysis programs, and output display programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests. Analysis options are based on regulatory cases described in the Code of Federal Regulations 10 CFR 71 and Regulatory Guides published by the US Nuclear Regulatory Commission in 1977 and 1978.

  2. Comparison of Building Loads Analysis and System Thermodynamics (BLAST) Computer Program Simulations and Measured Energy Use for Army Buildings.

    DTIC Science & Technology

    1980-05-01

    Comparison of Building Loads Analysis and System Thermodynamics (BLAST) computer program simulations with measured energy use for Army buildings. A dental clinic and a battalion headquarters and classroom building were modeled with the BLAST computer program and the simulations compared with measured energy use. Contents include: Building and HVAC System Data; Computer Simulation; Comparison of Actual and Simulated Results; Analysis and Findings.

  3. Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model

    NASA Astrophysics Data System (ADS)

    Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr

    2017-10-01

    Advanced numerical analysis of concrete building structures in sophisticated computing systems currently requires the tools of nonlinear mechanics. Efforts to design safer, more durable and, above all, more economical concrete structures are supported by advanced nonlinear concrete material models and a geometrically nonlinear approach. Applying nonlinear mechanics tools is undoubtedly a further step towards approximating the real behaviour of concrete building structures in computer simulations. The success of this application, however, depends on a thorough understanding of the behaviour of the concrete material models used and of the meaning of their parameters. The effective application of nonlinear concrete material models is often problematic because these models contain parameters (material constants) whose values are difficult to obtain, yet correct parameter values are essential for the proper functioning of the model. One approach that can solve this problem is the use of optimization algorithms for optimization-based inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it seeks the parameter values of the material model for which the computer simulation best approximates the experimental data. This paper focuses on the optimization-based inverse identification of the parameters of a concrete cap material model known as the Continuous Surface Cap Model.
Within this paper, the material parameters of the model are identified through the interaction of nonlinear computer simulations, gradient-based and nature-inspired optimization algorithms, and experimental data, the latter taking the form of a load-extension curve obtained from uniaxial tensile tests. The aim of this research was to obtain material model parameters corresponding to quasi-static tensile loading, which may be further used in research involving dynamic and high-speed tensile loading. Based on the results obtained, it can be concluded that this goal has been reached.
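The identification loop (simulate, compare with the load-extension curve, update parameters) can be sketched with a generic least-squares driver. The two-parameter softening law below is a hypothetical stand-in for the far richer Continuous Surface Cap Model, and all names are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def model_force(params, u):
    """Hypothetical load-extension law F(u) = F_max*(u/u_p)*exp(1 - u/u_p),
    standing in for a full nonlinear FE simulation of the specimen."""
    f_max, u_peak = params
    return f_max * (u / u_peak) * np.exp(1.0 - u / u_peak)

def identify(u_exp, f_exp, x0=(1.0, 1.0)):
    """Inverse identification: find the parameters whose simulated
    load-extension curve best matches the experimental data."""
    res = least_squares(lambda p: model_force(p, u_exp) - f_exp, x0,
                        bounds=([0.0, 1e-6], [np.inf, np.inf]))
    return res.x
```

In the paper's setting each residual evaluation would launch a nonlinear FE run, so gradient-based updates compete with nature-inspired (population) searches on cost per evaluation.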

  4. A generalized threshold model for computing bed load grain size distribution

    NASA Astrophysics Data System (ADS)

    Recking, Alain

    2016-12-01

    For morphodynamic studies, it is important to compute not only the transported volumes of bed load, but also the size of the transported material. A few bed load equations compute fractional transport (i.e., both the volume and the grain size distribution), but many equations compute only the bulk transport (a volume) with no consideration of the transported grain sizes. To fill this gap, a method is proposed to compute the bed load grain size distribution separately from the bed load flux. The method is called the Generalized Threshold Model (GTM) because it extends the flow competence method, for the threshold of motion of the largest transported grain size, to the full bed surface grain size distribution. This was achieved by replacing dimensional diameters with their size indices in the standard hiding function, which offers a useful framework for computation, carried out for each index in the range [1, 100]. New functions are also proposed to account for partial transport. The method is very simple to implement and is sufficiently flexible to be tested in many environments. In addition to being a good complement to standard bulk bed load equations, it could also serve as a framework to assist in analyzing the physics of bed load transport in future research.
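    The index-based idea can be caricatured in a few lines. Everything here is an assumption for illustration: the power-law mobility weighting, the hard mobility cutoff, and all parameter names are invented, not Recking's actual functions:

```python
import numpy as np

def bedload_gsd(bed_fractions, i50, i_max_mobile, gamma=0.9):
    """Toy sketch of the GTM idea: mobility of size index i relative to
    the median index i50 follows a hiding-function-like power law; the
    bed surface grain size distribution is weighted by mobility up to
    the largest mobile index and renormalized to give the transported
    (bed load) grain size distribution."""
    idx = np.arange(1, len(bed_fractions) + 1)
    mobility = np.where(idx <= i_max_mobile, (idx / i50) ** (-gamma), 0.0)
    w = np.asarray(bed_fractions) * mobility
    return w / w.sum()
```

    The point of working with indices rather than diameters is that the same computation applies unchanged to any surface distribution discretized into the same number of classes.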

  5. Effect of finite element model loading condition on fracture risk assessment in men and women: the AGES-Reykjavik study.

    PubMed

    Keyak, J H; Sigurdsson, S; Karlsdottir, G S; Oskarsdottir, D; Sigmarsdottir, A; Kornak, J; Harris, T B; Sigurdsson, G; Jonsson, B Y; Siggeirsdottir, K; Eiriksdottir, G; Gudnason, V; Lang, T F

    2013-11-01

    Proximal femoral (hip) strength computed by subject-specific CT scan-based finite element (FE) models has been explored as an improved measure for identifying subjects at risk of hip fracture. However, to our knowledge, no published study has reported the effect of loading condition on the association between incident hip fracture and hip strength. In the present study, we performed a nested age- and sex-matched case-control study in the Age Gene/Environment Susceptibility (AGES) Reykjavik cohort. Baseline (pre-fracture) quantitative CT (QCT) scans of 5500 older male and female subjects were obtained. During 4-7 years of follow-up, 51 men and 77 women sustained hip fractures. Ninety-seven men and 152 women were randomly selected as controls from a pool of age- and sex-matched subjects. From the QCT data, FE models employing nonlinear material properties computed FE-strength of the left hip of each subject in loading from a fall onto the posterolateral (FPL), posterior (FP) and lateral (FL) aspects of the greater trochanter (patent pending). For comparison, FE strength in stance loading (FStance) and total femur areal bone mineral density (aBMD) were also computed. For all loading conditions, the reductions in strength associated with fracture in men were more than twice those in women (p≤0.01). For fall loading specifically, posterolateral loading in men and posterior loading in women were most strongly associated with incident hip fracture. After adjusting for aBMD, the association between FP and fracture in women fell short of statistical significance (p=0.08), indicating that FE strength provides little advantage over aBMD for identifying female hip fracture subjects. However, in men, after controlling for aBMD, FPL was 424N (11%) less in subjects with fractures than in controls (p=0.003). Thus, in men, FE models of posterolateral loading include information about incident hip fracture beyond that in aBMD. © 2013.

  6. An optimized network for phosphorus load monitoring for Lake Okeechobee, Florida

    USGS Publications Warehouse

    Gain, W.S.

    1997-01-01

    Phosphorus load data were evaluated for Lake Okeechobee, Florida, for water years 1982 through 1991. Standard errors for load estimates were computed from available phosphorus concentration and daily discharge data. Components of error were associated with uncertainty in concentration and discharge data and were calculated for existing conditions and for 6 alternative load-monitoring scenarios for each of 48 distinct inflows. Benefit-cost ratios were computed for each alternative monitoring scenario at each site by dividing estimated reductions in load uncertainty by the 5-year average costs of each scenario in 1992 dollars. Absolute and marginal benefit-cost ratios were compared in an iterative optimization scheme to determine the most cost-effective combination of discharge and concentration monitoring scenarios for the lake. If the current (1992) discharge-monitoring network around the lake is maintained, the water-quality sampling at each inflow site twice each year is continued, and the nature of loading remains the same, the standard error of computed mean-annual load is estimated at about 98 metric tons per year compared to an absolute loading rate (inflows and outflows) of 530 metric tons per year. This produces a relative uncertainty of nearly 20 percent. The standard error in load can be reduced to about 20 metric tons per year (4 percent) by adopting an optimized set of monitoring alternatives at a cost of an additional $200,000 per year. The final optimized network prescribes changes to improve both concentration and discharge monitoring. These changes include the addition of intensive sampling with automatic samplers at 11 sites, the initiation of event-based sampling by observers at another 5 sites, the continuation of periodic sampling 12 times per year at 1 site, the installation of acoustic velocity meters to improve discharge gaging at 9 sites, and the improvement of a discharge rating at 1 site.
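    The benefit-cost selection can be sketched as a greedy pass over candidate upgrades. The study's actual scheme iterates on absolute and marginal ratios; this simplified version, with invented data shapes, picks at most one scenario per site by descending ratio until the budget is spent:

```python
def optimize_network(candidates, budget):
    """Greedy sketch of the benefit-cost scheme: `candidates` is a list of
    (site, uncertainty_reduction, annual_cost) monitoring upgrades; pick
    at most one upgrade per site, by descending benefit-cost ratio,
    while the budget lasts."""
    chosen, spent = [], 0.0
    for site, red, cost in sorted(candidates,
                                  key=lambda c: c[1] / c[2], reverse=True):
        taken = {s for s, _, _ in chosen}
        if site not in taken and spent + cost <= budget:
            chosen.append((site, red, cost))
            spent += cost
    return chosen, spent
```

    A full treatment would re-evaluate marginal ratios after each selection, since uncertainty reductions at different inflows are not strictly additive.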

  7. Time-averaged aerodynamic loads on the vane sets of the 40- by 80-foot and 80- by 120-foot wind tunnel complex

    NASA Technical Reports Server (NTRS)

    Aoyagi, Kiyoshi; Olson, Lawrence E.; Peterson, Randall L.; Yamauchi, Gloria K.; Ross, James C.; Norman, Thomas R.

    1987-01-01

    Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based primarily on data obtained from tests conducted in the NFAC 1/10-Scale Vane-Set Test Facility and from tests conducted in the NFAC 1/50-Scale Facility. For those vane sets located directly downstream of either the 40- by 80-ft test section or the 80- by 120-ft test section, aerodynamic loads caused by the impingement of model-generated wake vortices and model-generated jet and propeller wakes are also estimated.

  8. Application of Dynamic Analysis in Semi-Analytical Finite Element Method

    PubMed Central

    Oeser, Markus

    2017-01-01

    Analyses of dynamic responses are significantly important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and only requires a two-dimensional FE discretization by incorporating Fourier series in the third dimension. In this paper, the algorithm to apply the dynamic analysis to SAFEM was introduced in detail. Asphalt pavement models under moving loads were built in the SAFEM and commercial finite element software ABAQUS to verify the accuracy and efficiency of the SAFEM. The verification shows that the computational accuracy of SAFEM is high enough and its computational time is much shorter than ABAQUS. Moreover, experimental verification was carried out and the prediction derived from SAFEM is consistent with the measurement. Therefore, the SAFEM is feasible to reliably predict the dynamic response of asphalt pavement under moving loads, thus proving beneficial to road administration in assessing the pavement’s state. PMID:28867813
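    The semi-analytical idea can be stated compactly (a generic form; SAFEM's actual basis functions and boundary treatment may differ). The third dimension is carried by a truncated Fourier series,

```latex
u(x, y, z) \;=\; \sum_{n=1}^{N} \bar{u}_n(x, y)\,\sin\!\left(\frac{n\pi z}{L}\right),
```

    so that, by orthogonality of the sine terms, the 3-D problem decouples into N independent 2-D finite element problems for the coefficient fields \(\bar{u}_n(x, y)\). This is why SAFEM needs only a 2-D discretization and runs much faster than a full 3-D model such as the ABAQUS reference.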

  9. Computer-aided design of liposomal drugs: In silico prediction and experimental validation of drug candidates for liposomal remote loading.

    PubMed

    Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram

    2014-01-10

    Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs' structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., J. Control. Release 160 (2012) 147-157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-Nearest Neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used by us in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. © 2013.
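    A minimal version of the kNN step conveys how such a classifier assigns a label in descriptor space; the descriptors, labels, and k below are illustrative, not the paper's actual model:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Minimal k-nearest-neighbors classifier of the kind used in the
    QSPR models: a candidate drug is assigned the majority label
    (e.g., good vs. poor remote loading) of its k closest neighbors
    in descriptor space."""
    d = np.linalg.norm(np.asarray(X_train) - np.asarray(x), axis=1)
    nearest = np.asarray(y_train)[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```

    Virtual screening then amounts to running every database compound's descriptor vector through the trained classifier and ranking the predicted good loaders.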

  10. Computer-aided design of liposomal drugs: in silico prediction and experimental validation of drug candidates for liposomal remote loading

    PubMed Central

    Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram

    2014-01-01

    Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs' structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., J. Control. Release 160 (2012) 147-157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-nearest neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. PMID:24184343

  11. A self-analysis of the NASA-TLX workload measure.

    PubMed

    Noyes, Jan M; Bruneau, Daniel P J

    2007-04-01

    Computer use and, more specifically, the administration of tests and materials online continue to proliferate. A number of subjective, self-report workload measures exist, but the National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is probably the most well known and used. The aim of this paper is to consider the workload costs associated with the computer-based and paper versions of the NASA-TLX measure. It was found that there is a significant difference between the workload scores for the two media, with the computer version of the NASA-TLX incurring more workload. This has implications for the practical use of the NASA-TLX as well as for other computer-based workload measures.
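    For reference, the overall TLX score combines the six subscale ratings with weights from the pairwise-comparison step. This sketch follows the standard published procedure; it is not code from the paper:

```python
def nasa_tlx_score(ratings, weights):
    """Overall NASA-TLX workload: weighted mean of the six subscale
    ratings (0-100), with weights taken from the 15 pairwise
    comparisons (each weight counts how often that dimension was
    judged the more important of a pair; the weights sum to 15)."""
    assert len(ratings) == 6 and len(weights) == 6 and sum(weights) == 15
    return sum(r * w for r, w in zip(ratings, weights)) / 15.0
```

    The same arithmetic applies whether the ratings are collected on paper or on screen, which is what makes the medium comparison in this study meaningful.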

  12. Study and characterization of a MEMS micromirror device

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2004-08-01

    In this paper, advances in our study and characterization of a MEMS micromirror device are presented. The micromirror device, of 510 μm characteristic length, operates in a dynamic mode with a maximum displacement on the order of 10 μm along its principal optical axis and oscillation frequencies of up to 1.3 kHz. Developments are carried out by analytical, computational, and experimental methods. Analytical and computational nonlinear geometrical models are developed in order to determine the optimal loading-displacement operational characteristics of the micromirror. Due to the operational mode of the micromirror, the experimental characterization of its loading-displacement transfer function requires the utilization of advanced optical metrology methods. The optoelectronic holography (OEH) methodologies based on multiple wavelengths that we are developing to perform such characterization are described. It is shown that the combined analytical, computational, and experimental approach is effective in our developments.

  13. Mathematical and computational aspects of nonuniform frictional slip modeling

    NASA Astrophysics Data System (ADS)

    Gorbatikh, Larissa

    2004-07-01

    A mechanics-based model of non-uniform frictional sliding is studied from the mathematical/computational analysis point of view. This problem is of key importance for a number of applications (particularly geomechanical ones), where material interfaces undergo partial frictional sliding under compression and shear. We show that the problem reduces to Dirichlet's problem for monotonic loading and to Riemann's problem for cyclic loading. The problem may look like a traditional crack interaction problem; however, it is complicated by the fact that the locations of the n sliding intervals are not known. They are to be determined from the condition on the stress intensity factors, KII = 0, at the ends of the sliding zones. Computationally, this reduces to solving a system of 2n coupled non-linear algebraic equations involving singular integrals with unknown limits of integration.

  14. An investigation of pupil-based cognitive load measurement with low cost infrared webcam under light reflex interference.

    PubMed

    Chen, Siyuan; Epps, Julien; Chen, Fang

    2013-01-01

    Using the task-evoked pupillary response (TEPR) to index cognitive load can contribute significantly to the assessment of memory function and cognitive skills in patients. However, the measurement of pupillary response is currently limited to a well-controlled lab environment due to the light reflex, and it also relies heavily on expensive video-based eye trackers. Furthermore, commercial eye trackers are usually dedicated to gaze direction measurement, and their calibration procedures and computing resources are largely redundant for pupil-based cognitive load measurement (PCLM). In this study, we investigate the validity of cognitive load measurement (i) in the presence of the pupil light reflex in a less controlled luminance background and (ii) with a low-cost infrared (IR) webcam measuring the TEPR in a controlled luminance background. ANOVA results show that with an appropriate baseline selection and subtraction, the light reflex is significantly reduced, suggesting the possibility of less constrained practical applications of PCLM. Compared with the TEPR from a commercial remote eye tracker, a low-cost IR webcam achieved a similar TEPR pattern, and no significant difference was found between the two devices in terms of cognitive load measurement across five induced load levels.
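
    A common way to compute the TEPR (the general technique, not necessarily this study's exact pipeline) is to subtract the mean pupil diameter over a pre-stimulus baseline window from each in-task sample; components common to both windows, such as slow luminance-driven drift, cancel out. A minimal sketch with illustrative names:

```python
def tepr(trial_pupil, baseline_pupil):
    """Task-evoked pupillary response via baseline subtraction.

    trial_pupil:    pupil diameter samples recorded during the task
    baseline_pupil: samples from a pre-stimulus baseline window
    Returns the per-sample deviation from the baseline mean.
    """
    baseline = sum(baseline_pupil) / len(baseline_pupil)
    return [d - baseline for d in trial_pupil]
```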

  15. Scan Directed Load Balancing for Highly-Parallel Mesh-Connected Computers

    DTIC Science & Technology

    1991-07-01

    DTIC record AD-A242 045. Scan Directed Load Balancing for Highly-Parallel Mesh-Connected Computers. Edoardo S. Biagioni, Jan F. Prins. Department of Computer Science, University of North Carolina, Chapel Hill, N.C. 27599-3175 USA (biagioni@cs.unc.edu, prins@cs.unc.edu). The excerpts acknowledge the MasPar Computer Corporation and cite: [1] Edoardo S. Biagioni. Scan Directed Load Balancing. PhD thesis, University of North Carolina, Chapel Hill.

  16. A modular inverse elastostatics approach to resolve the pressure-induced stress state for in vivo imaging based cardiovascular modeling.

    PubMed

    Peirlinck, Mathias; De Beule, Matthieu; Segers, Patrick; Rebelo, Nuno

    2018-05-28

    Patient-specific biomechanical modeling of the cardiovascular system is complicated by the presence of a physiological pressure load, given that the imaged tissue is in a pre-stressed and pre-strained state. Neglecting this prestressed state in solid tissue mechanics models leads to erroneous metrics (e.g. wall deformation, peak stress, wall shear stress), which in turn are used for device design choices, risk assessment (e.g. procedure, rupture) and surgery planning. It is thus of utmost importance to incorporate this deformed and loaded tissue state into the computational models, which implies solving an inverse problem (calculating an undeformed geometry given the load and the deformed geometry). Methodologies to solve this inverse problem can be categorized into iterative and direct methodologies, both having their inherent advantages and disadvantages. Direct methodologies are typically based on the inverse elastostatics (IE) approach and offer a computationally efficient single-shot way to compute the in vivo stress state. However, cumbersome, problem-specific derivations of the formulations and non-trivial access to the finite element analysis (FEA) code, especially for commercial products, have hindered broad implementation of these methodologies. For that reason, we developed a novel, modular IE approach and implemented this methodology in a commercial FEA solver with minor user subroutine interventions. The accuracy of this methodology was demonstrated in an arterial tube and a porcine biventricular myocardium model. The computational power and efficiency of the methodology were shown by computing the in vivo stress and strain state, and the corresponding unloaded geometry, for two models containing multiple interacting incompressible, anisotropic (fiber-embedded) and hyperelastic material behaviors: a patient-specific abdominal aortic aneurysm and a full 4-chamber heart model. Copyright © 2018 Elsevier Ltd. All rights reserved.
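
    The iterative family of methods the abstract contrasts with direct IE can be sketched as a Sellier-style fixed-point iteration: repeatedly run the forward (loading) model and correct the unloaded-geometry guess by its mismatch with the imaged geometry. The one-dimensional "forward" model below is purely hypothetical, chosen only so the sketch is self-contained:

```python
def find_unloaded(f, x_target, x0, iters=50):
    """Fixed-point 'backward displacement' iteration: update the unloaded
    guess X by the mismatch between the loaded result f(X) and the imaged
    (deformed) target geometry x_target."""
    X = x0
    for _ in range(iters):
        X = X - (f(X) - x_target)
    return X

def forward(X):
    """Toy nonlinear loading model (hypothetical): load stretches X."""
    return X + 0.1 * X ** 2
```

    Starting the iteration from the imaged geometry itself is the usual choice; for this toy model it converges to the exact unloaded coordinate.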

  17. Impact of remote sensing upon the planning, management and development of water resources. Summary of computers and computer growth trends for hydrologic modeling and the input of ERTS image data processing load

    NASA Technical Reports Server (NTRS)

    Castruccio, P. A.; Loats, H. L., Jr.

    1975-01-01

    An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows a significant impact due to the utilization and processing of ERTS CCT data.

  18. A literature review of the effects of computer input device design on biomechanical loading and musculoskeletal outcomes during computer work.

    PubMed

    Bruno Garza, J L; Young, J G

    2015-01-01

    Extended use of conventional computer input devices is associated with negative musculoskeletal outcomes. While many alternative designs have been proposed, it is unclear whether these devices reduce biomechanical loading and musculoskeletal outcomes. To review studies describing and evaluating the biomechanical loading and musculoskeletal outcomes associated with conventional and alternative input devices. Included studies evaluated biomechanical loading and/or musculoskeletal outcomes of users' distal or proximal upper extremity regions associated with the operation of alternative input devices (pointing devices, mice, other devices) that could be used in a desktop personal computing environment during typical office work. Some alternative pointing device designs (e.g. rollerbar) were consistently associated with decreased biomechanical loading, while other designs had inconsistent results across studies. Most alternative keyboards evaluated in the literature reduce biomechanical loading and musculoskeletal outcomes. Studies of other input devices (e.g. touchscreen and gestural controls) were rare; however, those reported to date indicate that these devices are currently unsuitable as replacements for traditional devices. Alternative input devices that reduce biomechanical loading may be better choices for preventing or alleviating musculoskeletal outcomes during computer use; however, it is unclear whether many existing designs are effective.

  19. Dynamic Load Balancing for Grid Partitioning on a SP-2 Multiprocessor: A Framework

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance of processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as a decision maker, Jove, while the others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove at a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while the other processors continue working with the current data and load distribution. Jove goes through several steps to decide if the new data should be taken, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove, running on a single IBM SP2 node, has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine, the IBM SP2.
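
    The cost-versus-benefit decision step can be illustrated with a generic sketch (illustrative only, not Jove's actual cost model): accept a new partition only if the projected per-iteration saving, accumulated over the remaining iterations, outweighs the one-off data-migration cost.

```python
def imbalance(loads):
    """Load imbalance factor: max load over mean load; 1.0 is perfectly balanced."""
    return max(loads) * len(loads) / sum(loads)

def should_rebalance(loads, migration_cost, remaining_iters):
    """Accept a new partition only if the projected time saved over the
    remaining iterations exceeds the one-off data-migration cost."""
    mean = sum(loads) / len(loads)
    saving_per_iter = max(loads) - mean   # ideal per-iteration gain
    return saving_per_iter * remaining_iters > migration_cost
```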

  1. Performance Assessment of the Spare Parts for the Activation of Relocated Systems (SPARES) Forecasting Model

    DTIC Science & Technology

    1991-09-01

    constant data into the gaining base’s computer records. Among the data elements to be loaded, the 1XT434 image contains the level detail effective date...the mission support effective date, and the PBR override (19:19-203). In conjunction with the 1XT434, the Mission Change Parameter Image (Constant...the gaining base (19:19-208). The level detail effective date establishes the date the MCDDFR and MCDDR "are considered by the requirements computation

  2. Parallel performance optimizations on unstructured mesh-based simulations

    DOE PAGES

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; ...

    2015-06-01

    This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data from runs on thousands of cores of a Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
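
    The two quantities being traded off, load imbalance and communication volume, can be evaluated for any candidate partition with a simple metric sketch (illustrative names, not the MPAS-Ocean tooling): per-part load sums, the imbalance factor, and the edge cut (mesh edges whose endpoints land in different parts, which drive inter-process communication).

```python
def partition_metrics(cell_weights, edges, part):
    """Evaluate a mesh partition.

    cell_weights: per-cell work estimate
    edges:        (a, b) pairs of adjacent cell indices
    part:         part index assigned to each cell
    Returns (per-part loads, edge cut, imbalance factor).
    """
    nparts = max(part) + 1
    loads = [0.0] * nparts
    for cell, w in enumerate(cell_weights):
        loads[part[cell]] += w
    cut = sum(1 for a, b in edges if part[a] != part[b])
    imb = max(loads) * nparts / sum(loads)
    return loads, cut, imb
```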

  3. LabVIEW Serial Driver Software for an Electronic Load

    NASA Technical Reports Server (NTRS)

    Scullin, Vincent; Garcia, Christopher

    2003-01-01

    A LabVIEW-language computer program enables monitoring and control of a Transistor Devices, Inc., Dynaload WCL232 (or equivalent) electronic load via an RS-232 serial communication link between the electronic load and a remote personal computer. (The electronic load can operate at constant voltage, current, power consumption, or resistance.) The program generates a graphical user interface (GUI) at the computer that looks and acts like the front panel of the electronic load. Once the electronic load has been placed in remote-control mode, this program first queries the electronic load for the present values of all its operational and limit settings, and then drops into a cycle in which it reports the instantaneous voltage, current, and power values in displays that resemble those on the electronic load while monitoring the GUI images of pushbuttons for control actions by the user. By means of the pushbutton images and associated prompts, the user can perform such operations as changing limit values, the operating mode, or the set point. The benefit of this software is that it relieves the user of the need to learn one method for operating the electronic load locally and another method for operating it remotely via a personal computer.

  4. A general panel sizing computer code and its application to composite structural panels

    NASA Technical Reports Server (NTRS)

    Anderson, M. S.; Stroud, W. J.

    1978-01-01

    A computer code for obtaining the dimensions of optimum (least mass) stiffened composite structural panels is described. The procedure, which is based on nonlinear mathematical programming and a rigorous buckling analysis, is applicable to general cross sections under general loading conditions causing buckling. A simplified method of accounting for bow-type imperfections is also included. Design studies in the form of structural efficiency charts for axial compression loading are made with the code for blade and hat stiffened panels. The effects on panel mass of imperfections, material strength limitations, and panel stiffness requirements are also examined. Comparisons with previously published experimental data show that accounting for imperfections improves correlation between theory and experiment.
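
    The paper's sizing code handles general stiffened cross-sections under general buckling-critical loading; as a much simpler illustration of one buckling-constrained sizing step, the classical simply supported plate relation N_cr = k·π²·E·t³ / (12(1-ν²)·b²) can be inverted for the minimum skin thickness (a textbook formula, not the paper's rigorous analysis):

```python
import math

def min_thickness(N, E, nu, b, k=4.0):
    """Smallest plate thickness whose elastic buckling load per unit width,
    N_cr = k * pi^2 * E * t^3 / (12 * (1 - nu^2) * b^2),
    meets the applied compressive load N (simply supported long plate, k = 4)."""
    return (N * 12.0 * (1.0 - nu ** 2) * b ** 2 / (k * math.pi ** 2 * E)) ** (1.0 / 3.0)
```

    For an aluminum-like panel (E = 70 GPa, ν = 0.3) with b = 100 mm carrying N = 100 kN/m, this gives a thickness on the order of 1.6 mm.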

  5. A Computer Based Moire Technique To Measure Very Small Displacements

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Amadshahi, Mansour A.; Subbaraman, B.

    1987-02-01

    The accuracy that can be achieved in the measurement of very small displacements with techniques such as moire, holography and speckle is limited by the noise inherent in the optical devices used. To reduce the noise-to-signal ratio, the moire method can be utilized. Two systems of carrier fringes are introduced: an initial system recorded before the load is applied and a final system recorded after the load is applied. The moire pattern of these two systems contains the sought displacement information, and the noise common to the two patterns is eliminated. The whole process is performed by a computer on digitized versions of the patterns. Examples of application are given.
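
    The noise-cancellation idea can be sketched numerically: two digitized patterns that share the same carrier and the same stationary noise differ only by the load-induced term, so point-wise subtraction isolates it (an illustrative sketch of the principle, not the authors' full moire processing):

```python
def moire_subtract(initial, final):
    """Point-wise subtraction of two digitized fringe patterns; the carrier
    and any noise common to both recordings cancel, leaving only the
    load-induced contribution."""
    return [f - i for i, f in zip(initial, final)]
```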

  6. Description of a MIL-STD-1553B Data Bus Ada Driver for the LeRC EPS Testbed

    NASA Technical Reports Server (NTRS)

    Mackin, Michael A.

    1995-01-01

    This document describes the software designed to provide communication between control computers in the NASA Lewis Research Center Electrical Power System Testbed using MIL-STD-1553B. The software drivers are coded in the Ada programming language and were developed on an MS-DOS-based computer workstation. The Electrical Power System (EPS) Testbed is a reduced-scale prototype space station electrical power system. The power system manages and distributes electrical power from the sources (batteries or photovoltaic arrays) to the end-user loads. The primary electrical system operates at 120 V DC, and the secondary system operates at 28 V DC. The devices which direct the flow of electrical power are controlled by a network of six control computers. Data and control messages are passed between the computers using the MIL-STD-1553B network. One of the computers, the Power Management Controller (PMC), controls the primary power distribution and another, the Load Management Controller (LMC), controls the secondary power distribution. Each of these computers communicates with two other computers which act as subsidiary controllers. These subsidiary controllers are, in turn, connected to the devices which directly control the flow of electrical power.
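
    For context, a MIL-STD-1553B command word packs its fields into 16 bits: a 5-bit remote terminal (RT) address, a transmit/receive bit, a 5-bit subaddress/mode field, and a 5-bit word count/mode code. A sketch of the packing (an illustrative helper in Python, not the testbed's Ada driver):

```python
def command_word(rt_addr, transmit, subaddr, word_count):
    """Pack a MIL-STD-1553B command word:
    bits [15:11] RT address, [10] transmit/receive, [9:5] subaddress/mode,
    [4:0] word count (32 data words is encoded as 0) or mode code."""
    assert 0 <= rt_addr < 32 and 0 <= subaddr < 32
    wc = word_count % 32
    return (rt_addr << 11) | (int(transmit) << 10) | (subaddr << 5) | wc
```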

  7. Effect of Reinforcement Architecture on Fracture of Selectively Reinforced Metallic Compact Tension Specimens

    NASA Technical Reports Server (NTRS)

    Abada, Christopher H.; Farley, Gary L.; Hyer, Michael W.

    2006-01-01

    A computer-based parametric study of the effect of reinforcement architecture on the fracture response of aluminum compact-tension (CT) specimens is performed. Eleven different reinforcement architectures consisting of rectangular and triangular cross-section reinforcements were evaluated. Reinforced specimens produced fracture loads between 13 and 28 percent higher than the non-reinforced case. Reinforcements with blunt leading edges (rectangular reinforcements) exhibited superior performance relative to the triangular reinforcements with sharp leading edges. Among the rectangular reinforcements, the most important architectural feature was reinforcement thickness. At failure, the reinforcements carried between 58 and 85 percent of the load applied to the specimen, suggesting that there is considerable load transfer between the base material and the reinforcement.

  8. Development and testing of a computer assisted remote-control system for the compact loader-trammer. Report of Investigations/1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruff, T.M.

    1992-01-01

    A prototype mucking machine designed to operate in narrow-vein stopes was developed by Foster-Miller, Inc., Waltham, MA, under contract with the U.S. Bureau of Mines. The machine, called a compact loader/trammer, or minimucker, was designed to replace slusher muckers in narrow-vein underground mines. The minimucker is a six-wheel-drive, skid-steered, load-haul-dump machine that loads muck at the front with a novel slide-bucket system and ejects it out the rear so that the machine does not have to be turned around. To correct deficiencies of the tethered remote control system, a computer-based radio remote control was retrofitted to the minimucker. Initial tests indicated a need to assist the operator in guiding the machine in narrow stopes, so an automatic guidance system using ultrasonic ranging sensors and a wall-following algorithm was installed. Additional tests in a simulated test stope showed that these changes improved the operation of the minimucker. The design and functions of the minimucker and its computer-based remote control system are reviewed, and the ultrasonic, sensor-based guidance system is described.

  9. Parallel Processing of Adaptive Meshes with Load Balancing

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with PLUM, a successful global load balancing environment specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than those under PLUM by overlapping processing and data migration.

  10. Automated validation of a computer operating system

    NASA Technical Reports Server (NTRS)

    Dervage, M. M.; Milberg, B. A.

    1970-01-01

    Programs apply selected input/output loads to a complex computer operating system and measure the performance of that system under such loads. The technique lends itself to the checkout of computer software designed to monitor automated complex industrial systems.

  11. A Fog Computing and Cloudlet Based Augmented Reality System for the Industry 4.0 Shipyard.

    PubMed

    Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Suárez-Albela, Manuel; Vilar-Montesinos, Miguel

    2018-06-02

    Augmented Reality (AR) is one of the key technologies pointed out by Industry 4.0 as a tool for enhancing the next generation of automated and computerized factories. AR can also help shipbuilding operators, since they usually need to interact with information (e.g., product datasheets, instructions, maintenance procedures, quality control forms) that could be handled easily and more efficiently through AR devices. This is the reason why Navantia, one of the 10 largest shipbuilders in the world, is studying the application of AR (among other technologies) in different shipyard environments in a project called "Shipyard 4.0". This article presents Navantia's industrial AR (IAR) architecture, which is based on cloudlets and on the fog computing paradigm. Both technologies are ideal for supporting physically-distributed, low-latency and QoS-aware applications that decrease the network traffic and the computational load of traditional cloud computing systems. The proposed IAR communications architecture is evaluated in real-world scenarios with payload sizes representative of demanding Microsoft HoloLens applications, using a cloud, a cloudlet, and a fog computing system. The results show that, in terms of response delay, the fog computing system is the fastest when transferring small payloads (less than 128 KB), while for larger file sizes, the cloudlet solution is faster than the others. Moreover, under high loads (with many concurrent IAR clients), the cloudlet is in some cases more than four times faster than the fog computing system in terms of response delay.
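
    The reported crossover suggests a simple offload policy, sketched here directly from the paper's measurements (the 128 KB threshold is the value the authors report for their testbed; any other deployment would need to re-measure it):

```python
def pick_tier(payload_bytes, threshold=128 * 1024):
    """Choose the offloading tier by payload size, following the reported
    result that fog is fastest below ~128 KB and the cloudlet above it."""
    return "fog" if payload_bytes < threshold else "cloudlet"
```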

  12. Waveform distortion by 2-step modeling ground vibration from trains

    NASA Astrophysics Data System (ADS)

    Wang, F.; Chen, W.; Zhang, J.; Li, F.; Liu, H.; Chen, X.; Pan, Y.; Li, G.; Xiao, F.

    2017-10-01

    The 2-step procedure is widely used in numerical research on ground vibrations from trains. The ground is inconsistently represented, by a simplified model in the first step and by a refined model in the second step, which may lead to distortions in the simulation results. To reveal this modeling error, time histories of ground-borne vibrations were computed with the 2-step procedure and compared with the results of a benchmark procedure that models the whole system. All parameters were intentionally set equal for the two methods, which ensures that differences in the results originate from the inconsistencies of the ground model. For wheel loads at low speeds, such as 60 km/h, and low frequencies, below 8 Hz, the computed responses of the subgrade were quite close to the benchmarks. However, notable distortions were found in all loading cases at higher frequencies. Moreover, significant underestimation of intensity occurred when load frequencies equaled 16 Hz, not only at the subgrade but also at points 10 m and 20 m away from the track. When the load speed was increased to 350 km/h, all computed waveforms were distorted, including the responses to loads at very low frequencies. The modeling error found herein suggests that the ground models in the 2 steps should be calibrated for the frequency bands to be investigated, and that the speed of the train should be taken into account at the same time.

  13. "Watts per person" paradigm to design net zero energy buildings: Examining technology interventions and integrating occupant feedback to reduce plug loads in a commercial building

    NASA Astrophysics Data System (ADS)

    Yagi Kim, Mika

    As building envelopes have improved under more restrictive energy codes, internal loads have increased, largely due to the proliferation of computers, electronics, appliances, and imaging and audio-visual equipment in commercial buildings. As dependency on the internet for information and data transfer increases, the electricity demand will pose a challenge to designing and operating Net Zero Energy Buildings (NZEBs). Plug Loads (PLs) have become the largest non-regulated building energy load and represent the third highest electricity end use in California's commercial office buildings, accounting for 23% of total building electricity consumption (Ecova 2011, 2). The Annual Energy Outlook 2008 (AEO2008), prepared by the Energy Information Administration (EIA) to present long-term projections of energy supply and demand through 2030, states that office equipment and personal computers are the "fastest growing electrical end uses" in the commercial sector. This thesis, entitled "Watts Per Person" Paradigm to Design Net Zero Energy Buildings, measures the implementation of advanced controls and behavioral interventions to study the reduction of PL energy use in the commercial sector. By integrating real-world energy-use data extracted from an energy-efficient commercial building, the results produce a new methodology for estimating PL energy use on a watts-per-person basis and analyze computational simulation methods to design NZEBs.

  14. Efficient Resources Provisioning Based on Load Forecasting in Cloud

    PubMed Central

    Hu, Rongdong; Jiang, Jingfei; Liu, Guangming; Wang, Lixin

    2014-01-01

    Cloud providers should ensure QoS while maximizing resource utilization. One optimal strategy is to timely allocate resources in a fine-grained mode according to an application's actual resource demand. The necessary precondition of this strategy is obtaining future load information in advance. We propose a multi-step-ahead load forecasting method, KSwSVR, based on statistical learning theory, which is suitable for the complex and dynamic characteristics of the cloud computing environment. It integrates an improved support vector regression algorithm and a Kalman smoother. Public trace data taken from multiple types of resources were used to verify its prediction accuracy, stability, and adaptability, in comparison with AR, BPNN, and standard SVR. Subsequently, based on the predicted results, a simple and efficient strategy is proposed for resource provisioning. A CPU allocation experiment indicated it can effectively reduce resource consumption while meeting service-level agreement requirements. PMID:24701160
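
    As a stand-in for the paper's SVR-plus-Kalman-smoother pipeline, multi-step-ahead forecasting can be illustrated with Holt's linear exponential smoothing, which likewise tracks a level and a trend and extrapolates them several steps ahead (illustrative only, not KSwSVR):

```python
def holt_forecast(series, alpha=0.5, beta=0.3, steps=3):
    """Multi-step-ahead forecast with Holt's linear smoothing.

    series: observed load history (at least two samples)
    Returns forecasts for the next `steps` periods."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        last_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(steps)]
```

    On a perfectly linear load history the method simply continues the line, which is the sanity check used below.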

  15. Shaper-Based Filters for the compensation of the load cell response in dynamic mass measurement

    NASA Astrophysics Data System (ADS)

    Richiedei, Dario; Trevisani, Alberto

    2018-01-01

    This paper proposes a novel model-based signal filtering technique for dynamic mass measurement through load cells. Load cells are sensors with an underdamped oscillatory response which usually imposes a long settling time. Real-time filtering is therefore necessary to compensate for this dynamics and to quickly retrieve the mass of the measurand (the steady-state value of the load cell response) before the measured signal actually settles. This problem substantially limits the throughput of industrial weighing machines. In this paper a novel solution is developed: a model-based filtering technique is proposed to ensure accurate, robust and rapid estimation of the mass of the measurand. The digital filters proposed are referred to as Shaper-Based Filters (SBFs) and are based on the convolution of the load cell output signal with a sequence of a few impulses (typically between 2 and 5). The amplitudes and the instants of application of these impulses are computed through the analytical development of the load cell step response, by imposing an admissible residual oscillation on the steady-state filtered signal and by requiring the desired sensitivity of the filter. The inclusion of robustness specifications effectively tackles the unavoidable uncertainty and variability in the load cell frequency and damping. The effectiveness of the proposed filters is proved experimentally on an industrial setup: the load-cell-instrumented weigh bucket of a multihead weighing machine for packaging. A performance comparison with other benchmark filters is also provided and discussed.
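
    The classical two-impulse zero-vibration (ZV) shaper illustrates the idea behind such impulse sequences (the paper's SBFs add residual-oscillation and sensitivity constraints; this is only the textbook base case): for natural frequency f and damping ratio ζ, with K = exp(-ζπ/√(1-ζ²)), the impulse amplitudes are 1/(1+K) and K/(1+K), and the second impulse is applied half a damped period after the first.

```python
import math

def zv_shaper(freq_hz, zeta):
    """Two-impulse zero-vibration (ZV) shaper for an underdamped system
    with natural frequency freq_hz (Hz) and damping ratio zeta.
    Returns (amplitudes, application times); the amplitudes sum to 1 so the
    filtered signal settles to the same steady-state value."""
    wn = 2.0 * math.pi * freq_hz
    wd = wn * math.sqrt(1.0 - zeta ** 2)                    # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))
    amps = [1.0 / (1.0 + K), K / (1.0 + K)]
    times = [0.0, math.pi / wd]                             # half a damped period
    return amps, times
```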

  16. Study of fuel cell on-site, integrated energy systems in residential/commercial applications

    NASA Technical Reports Server (NTRS)

    Wakefield, R. A.; Karamchetty, S.; Rand, R. H.; Ku, W. S.; Tekumalla, V.

    1980-01-01

    Three building applications were selected for detailed study: a low-rise apartment building, a retail store, and a hospital. Building design data were then specified for each application, based on the design and construction of typical, actual buildings. Finally, a computerized building loads analysis program was used to estimate hourly end-use load profiles for each building. Conventional and fuel cell based energy systems were designed and simulated for each building in each location. Based on the results of a computer simulation of each energy system, levelized annual costs and annual energy consumption were calculated for all systems.
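
    Levelizing annual costs, as done here for each energy system, typically uses the capital recovery factor CRF = i(1+i)^n / ((1+i)^n - 1) to convert a capital cost into an equivalent annual payment (a standard engineering-economics formula; the study's exact cost model is not given in the abstract):

```python
def capital_recovery_factor(rate, years):
    """CRF converts a present capital cost into a levelized annual cost
    over `years` at discount rate `rate`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def levelized_annual_cost(capital, rate, years, annual_om, annual_energy_cost):
    """Levelized annual cost: annualized capital plus recurring O&M and energy."""
    return capital * capital_recovery_factor(rate, years) + annual_om + annual_energy_cost
```

    For instance, at a 5% discount rate over 20 years the CRF is about 0.080, so each dollar of capital adds roughly eight cents to the annual cost.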

  17. Progressive Damage and Failure Analysis of Composite Laminates

    NASA Astrophysics Data System (ADS)

    Joseph, Ashith P. K.

    Composite materials are widely used across industries for structural parts because of their higher strength-to-weight ratio, better fatigue life, corrosion resistance, and material property tailorability. To fully exploit the capability of composites, the load carrying capacity of parts made from them must be known. Unlike metals, composites are orthotropic in nature and fail in a complex manner under various loading conditions, which makes them hard to analyze. The lack of reliable and efficient failure analysis tools for composites has led industries to rely more on coupon- and component-level testing to estimate the design space. Due to the complex failure mechanisms, composite materials require a very large number of coupon-level tests to fully characterize their behavior, which makes the entire testing process time consuming and costly. The alternative is to use virtual testing tools that can predict the complex failure mechanisms accurately, reducing the cost to the associated computational expense and yielding significant savings. Some of the most desired features in a virtual testing tool are: (1) accurate representation of failure mechanisms: the failure progression predicted by the virtual tool must match that observed in experiments, and a tool has to be assessed based on the mechanisms it can capture; (2) computational efficiency: the greatest advantages of a virtual tool are the savings in time and money, so computational efficiency is one of the most needed features; (3) applicability to a wide range of problems: structural parts are subjected to a variety of loading conditions, including static, dynamic, and fatigue loading, and a good virtual testing tool should make good predictions for all of them. The aim of this PhD thesis is to develop a computational tool that can model the progressive failure of composite laminates under different quasi-static loading conditions. The analysis tool is validated by comparing simulations against experiments for a selected number of quasi-static loading cases.

  18. Comparison of computer codes for calculating dynamic loads in wind turbines

    NASA Technical Reports Server (NTRS)

    Spera, D. A.

    1977-01-01

    Seven computer codes for analyzing performance and loads in large, horizontal axis wind turbines were used to calculate blade bending moment loads for two operational conditions of the 100 kW Mod-0 wind turbine. Results were compared with test data on the basis of cyclic loads, peak loads, and harmonic contents. Four of the seven codes include rotor-tower interaction and three were limited to rotor analysis. With a few exceptions, all calculated loads were within 25 percent of nominal test data.

  19. Association of education and receiving social transfers with allostatic load in the Swiss population-based CoLaus study.

    PubMed

    Nicod, Edouard; Stringhini, Silvia; Marques-Vidal, Pedro; Paccaud, Fred; Waeber, Gérard; Lamiraud, Karine; Vollenweider, Peter; Bochud, Murielle

    2014-06-01

    Allostatic load reflects cumulative exposure to stressors throughout lifetime and has been associated with several adverse health outcomes. It is hypothesized that people with low socioeconomic status (SES) are exposed to higher chronic stress and therefore have greater levels of allostatic load. To assess the association of receiving social transfers and low education with allostatic load, we included 3589 participants (1812 women) aged over 35 years and under retirement age from the population-based CoLaus study (Lausanne, Switzerland, 2003-2006). We computed an allostatic load index aggregating cardiovascular, metabolic, dyslipidemic and inflammatory markers. A novel index additionally including markers of oxidative stress was also examined. Men with low vs. high SES were more likely to have higher levels of allostatic load (odds ratio (OR)=1.93/2.34 for social transfers/education, 95% CI from 1.45 to 4.17). The same patterns were observed among women. Associations persisted after controlling for health behaviors and marital status. Low education and receiving social transfers independently and cumulatively predict high allostatic load and dysregulation of several homeostatic systems in a Swiss population-based study. Participants with low SES are at higher risk of oxidative stress, which may justify its inclusion as a separate component of allostatic load. Copyright © 2014 Elsevier Inc. All rights reserved.
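
    A common way to operationalize such an index (the abstract does not give the CoLaus cutoffs, so the markers and thresholds below are purely illustrative) is to award one point per biomarker at or beyond a high-risk cutoff:

```python
def allostatic_load_index(subject, cutoffs):
    """Count markers at or beyond their high-risk cutoff (one point each).
    Marker names and cutoff values are illustrative, not the CoLaus ones."""
    return sum(1 for marker, value in subject.items()
               if value >= cutoffs[marker])

# Hypothetical cutoffs: systolic BP (mmHg), waist (cm), LDL (mmol/L), CRP (mg/L)
cutoffs = {"systolic_bp": 140, "waist_cm": 102, "ldl": 4.1, "crp": 3.0}
subject = {"systolic_bp": 151, "waist_cm": 95, "ldl": 4.4, "crp": 1.2}
index = allostatic_load_index(subject, cutoffs)  # 2 markers exceed cutoffs
```

Higher index values indicate dysregulation across more homeostatic systems; study-specific variants weight or group the markers differently.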

  20. On the Impact of Execution Models: A Case Study in Computational Chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram

    2015-05-25

    Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.
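
    The gap between static scheduling and work stealing arises when task costs are irregular, as in many chemistry kernels. The sketch below (not the authors' code; it approximates work stealing by greedy assignment to the first idle worker) compares the makespan of the two strategies on randomly sized tasks:

```python
import random

def makespan_static(tasks, n_workers):
    """Static scheduling: pre-assign contiguous blocks of tasks to workers;
    the slowest block determines the makespan."""
    chunk = (len(tasks) + n_workers - 1) // n_workers
    return max(sum(tasks[i:i + chunk]) for i in range(0, len(tasks), chunk))

def makespan_work_stealing(tasks, n_workers):
    """Idealized work stealing: each task goes to the first worker to become
    idle (a central-queue approximation of stealing, i.e. list scheduling)."""
    loads = [0.0] * n_workers
    for t in tasks:
        loads[loads.index(min(loads))] += t
    return max(loads)

random.seed(1)
# Irregular task costs, mimicking heterogeneous work units in a kernel
tasks = [random.expovariate(1.0) for _ in range(512)]
t_static = makespan_static(tasks, 16)
t_steal = makespan_work_stealing(tasks, 16)
```

With irregular costs, the stealing makespan stays close to the ideal average load per worker, while static blocks inherit the variance of their task sums.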

  1. Multiscale Mechanics of Articular Cartilage: Potentials and Challenges of Coupling Musculoskeletal, Joint, and Microscale Computational Models

    PubMed Central

    Halloran, J. P.; Sibole, S.; van Donkelaar, C. C.; van Turnhout, M. C.; Oomens, C. W. J.; Weiss, J. A.; Guilak, F.; Erdemir, A.

    2012-01-01

    Articular cartilage experiences significant mechanical loads during daily activities. Healthy cartilage provides the capacity for load bearing and regulates the mechanobiological processes for tissue development, maintenance, and repair. Experimental studies at multiple scales have provided a fundamental understanding of macroscopic mechanical function, evaluation of the micromechanical environment of chondrocytes, and the foundations for mechanobiological response. In addition, computational models of cartilage have offered a concise description of experimental data at many spatial levels under healthy and diseased conditions, and have served to generate hypotheses for the mechanical and biological function. Further, modeling and simulation provides a platform for predictive risk assessment, management of dysfunction, as well as a means to relate multiple spatial scales. Simulation-based investigation of cartilage comes with many challenges including both the computational burden and often insufficient availability of data for model development and validation. This review outlines recent modeling and simulation approaches to understand cartilage function from a mechanical systems perspective, and illustrates pathways to associate mechanics with biological function. Computational representations at single scales are provided from the body down to the microstructure, along with attempts to explore multiscale mechanisms of load sharing that dictate the mechanical environment of the cartilage and chondrocytes. PMID:22648577

  2. Experimental and Numerical Analysis of Axially Compressed Circular Cylindrical Fiber-Reinforced Panels with Various Boundary Conditions.

    DTIC Science & Technology

    1981-10-01

    Numerical predictions used in the comparisons were obtained from the energy-based, finite-difference computer program CLAPP. Test specimens were clamped... ...that theoretical bifurcation loads predicted by the energy method represent upper bounds to the classical bifurcation loads associated with the test

  3. Design, fabrication and test of a trace contaminant control system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A trace contaminant control system was designed, fabricated, and evaluated to determine suitability of the system concept to future manned spacecraft. Two different models were considered. The load model initially required by the contract was based on the Space Station Prototype (SSP) general specifications SVSK HS4655, reflecting a change from a 9 man crew to a 6 man crew of the model developed in previous phases of this effort. Trade studies and a system preliminary design were accomplished based on this contaminant load, including computer analyses to define the optimum system configuration in terms of component arrangements, flow rates and component sizing. At the completion of the preliminary design effort a revised contaminant load model was developed for the SSP. Additional analyses were then conducted to define the impact of this new contaminant load model on the system configuration. A full scale foam-core mock-up with the appropriate SSP system interfaces was also fabricated.

  4. Stormwater quality processes for three land-use areas in Broward County, Florida

    USGS Publications Warehouse

    Mattraw, H.C.; Miller, Robert A.

    1981-01-01

    Systematic collection and chemical analysis of stormwater runoff samples from three small urban areas in Broward County, Florida, were obtained between 1974 and 1977. Thirty or more runoff-constituent loads were computed for each of the homogeneous land-use areas. The areas sampled were single family residential, highway, and a commercial shopping center. Rainfall, runoff, and nutrient and metal analyses were stored in a data-management system. The data-management system permitted computation of loads, publication of basic-data reports and the interface of environmental and load information with a comprehensive statistical analysis system. Seven regression models relating water quality loads to characteristics of peak discharge, antecedent conditions, season, storm duration and rainfall intensity were constructed for each of the three sites. Total water-quality loads were computed for the collection period by summing loads for individual storms. Loads for unsampled storms were estimated by using regression models and records of storm precipitation. Loadings, pounds per day per acre of hydraulically effective impervious area, were computed for the three land-use types. Total nitrogen, total phosphorus, and total residue loadings were highest in the residential area. Chemical oxygen demand and total lead loadings were highest in the commercial area. Loadings of atmospheric fallout on each watershed were estimated by bulk precipitation samples collected at the highway and commercial site. (USGS)
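
    The workflow of fitting a storm-load regression and then summing sampled loads with regression estimates for unsampled storms can be sketched as follows. This is a minimal single-predictor (log load vs. log peak discharge) illustration, not the report's seven-predictor models; all data values are hypothetical:

```python
import math

def fit_loglog(discharges, loads):
    """Least-squares fit of log10(load) = a + b*log10(peak discharge)."""
    xs = [math.log10(q) for q in discharges]
    ys = [math.log10(l) for l in loads]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b

def estimate_total_load(sampled_loads, unsampled_discharges, a, b):
    """Collection-period total = sampled-storm loads plus regression
    estimates for the unsampled storms."""
    return sum(sampled_loads) + sum(10 ** (a + b * math.log10(q))
                                    for q in unsampled_discharges)

# Hypothetical storms: peak discharge (cfs) vs. constituent load (lb)
q = [12.0, 30.0, 55.0, 90.0, 140.0]
L = [0.8, 2.1, 4.0, 6.9, 11.0]
a, b = fit_loglog(q, L)
total = estimate_total_load(L, [20.0, 70.0], a, b)
```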

  5. Parallel implementation and evaluation of motion estimation system algorithms on a distributed memory multiprocessor using knowledge based mappings

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Several techniques for static and dynamic load balancing in vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same or have similar computational characteristics. These techniques are evaluated by applying them to a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant, and that the overhead of using these techniques is minimal.

  6. Formalization, equivalence and generalization of basic resonance electrical circuits

    NASA Astrophysics Data System (ADS)

    Penev, Dimitar; Arnaudov, Dimitar; Hinov, Nikolay

    2017-12-01

    This work presents basic resonant circuits used in resonant energy converters. The following resonant circuits are considered: series, series with a parallel-loaded capacitor, parallel, and parallel with a series-loaded inductance. For the circuits under consideration, expressions are derived for the natural oscillation frequencies and for the equivalence of the active power delivered to the load. The mathematical expressions are plotted and verified using computer simulations. The results obtained are used in the model-based design of resonant energy converters with DC or AC output, which guarantees the output performance of the power electronic devices.
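
    For any of these LC tank topologies, the basic quantities behind such expressions are the undamped natural frequency and the characteristic impedance. A minimal numeric sketch (the component values are arbitrary examples, not from the paper):

```python
import math

def resonant_frequency(L, C):
    """Undamped natural oscillation frequency of an LC resonant tank:
    f0 = 1 / (2*pi*sqrt(L*C)), in Hz for L in henries and C in farads."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def characteristic_impedance(L, C):
    """Z0 = sqrt(L/C); together with the load it sets the tank currents
    and hence the active power delivered to the load."""
    return math.sqrt(L / C)

f0 = resonant_frequency(100e-6, 1e-6)       # 100 uH, 1 uF tank
Z0 = characteristic_impedance(100e-6, 1e-6)  # ohms
```

Damped topologies shift the natural frequency away from f0 depending on where the load sits (in series or in parallel with the reactive elements), which is what the equivalence expressions account for.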

  7. Flow Control on Low-Pressure Turbine Airfoils Using Vortex Generator Jets

    NASA Technical Reports Server (NTRS)

    Volino, Ralph J.; Ibrahim, Mounir B.; Kartuzova, Olga

    2010-01-01

    Motivation: higher loading on Low-Pressure Turbine (LPT) airfoils can reduce airfoil count, weight, and cost and increase efficiency, but is limited by suction-side separation. A growing understanding of transition, separation, and wake effects has led to improved models that take advantage of wakes, and higher-lift airfoils are in use. Further loading increases may require flow control, either passive (trips, dimples, etc.) or active (plasma actuators, vortex generator jets (VGJs)); the question is whether increased loading can offset higher losses on high-lift airfoils. Objectives: advance knowledge of boundary layer separation and transition under LPT conditions; demonstrate and improve understanding of separation control with pulsed VGJs; produce a detailed experimental database; and test and develop computational models.

  8. Automatic mesh refinement and parallel load balancing for Fokker-Planck-DSMC algorithm

    NASA Astrophysics Data System (ADS)

    Küchlin, Stephan; Jenny, Patrick

    2018-06-01

    Recently, a parallel Fokker-Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers was developed by the authors. Fokker-Planck-DSMC (FP-DSMC) is an augmentation of the classical DSMC algorithm, which mitigates the near-continuum deficiencies in terms of computational cost of pure DSMC. At each time step, based on a local Knudsen number criterion, the discrete DSMC collision operator is dynamically switched to the Fokker-Planck operator, which is based on the integration of continuous stochastic processes in time, and has fixed computational cost per particle, rather than per collision. In this contribution, we present an extension of the previous implementation with automatic local mesh refinement and parallel load-balancing. In particular, we show how the properties of discrete approximations to space-filling curves enable an efficient implementation. Exemplary numerical studies highlight the capabilities of the new code.
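
    The space-filling-curve idea behind the load balancing can be illustrated with a Z-order (Morton) curve: cells are ordered by interleaving the bits of their grid coordinates, then the curve is cut into contiguous pieces of roughly equal weight. This is a 2-D toy sketch, not the authors' implementation, and the per-cell weights are invented:

```python
def morton_index(ix, iy, bits=16):
    """Interleave the bits of 2-D cell coordinates to get the cell's
    position along a Z-order (Morton) space-filling curve."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (2 * b)
        code |= ((iy >> b) & 1) << (2 * b + 1)
    return code

def partition_cells(cells, weights, n_ranks):
    """Sort cells along the curve, then cut it into contiguous pieces of
    roughly equal total weight (e.g., particle counts per cell)."""
    order = sorted(range(len(cells)), key=lambda i: morton_index(*cells[i]))
    target = sum(weights) / n_ranks
    parts, current, acc = [], [], 0.0
    for i in order:
        current.append(cells[i])
        acc += weights[i]
        if acc >= target and len(parts) < n_ranks - 1:
            parts.append(current)
            current, acc = [], 0.0
    parts.append(current)
    return parts

cells = [(x, y) for x in range(8) for y in range(8)]
weights = [1.0 + (x + y) % 3 for (x, y) in cells]  # uneven per-cell cost
parts = partition_cells(cells, weights, 4)
```

Because nearby cells get nearby curve indices, each contiguous piece stays spatially compact, which keeps communication between ranks low.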

  9. Analytical Prediction of Damage Growth in Notched Composite Panels Loaded in Axial Compression

    NASA Technical Reports Server (NTRS)

    Ambur, Damodar R.; McGowan, David M.; Davila, Carlos G.

    1999-01-01

    A progressive failure analysis method based on shell elements is developed for the computation of damage initiation and growth in stiffened thick-skin stitched graphite-epoxy panels loaded in axial compression. The analysis method involves a step-by-step simulation of material degradation based on ply-level failure mechanisms. High computational efficiency is derived from the use of superposed layers of shell elements to model each ply orientation in the laminate. Multiple integration points through the thickness are used to obtain the correct bending effects through the thickness without the need for ply-by-ply evaluations of the state of the material. The analysis results are compared with experimental results for three stiffened panels with notches oriented at 0, 15 and 30 degrees to the panel width dimension. A parametric study is performed to investigate the damage growth retardation characteristics of the Kevlar stitch lines in the pan

  10. Estimates of long-term suspended-sediment loads in Bay Creek at Nebo, Pike County, Illinois, 1940-80

    USGS Publications Warehouse

    Lazaro, Timothy R.; Fitzgerald, Kathleen K.; Frost, Leonard R.

    1984-01-01

    Five years of daily suspended-sediment discharges (1968, 1969, 1975, 1976, and 1980) for Bay Creek at Nebo, Illinois, computed from once- or twice-weekly samples (more often during storm events), were used to develop transport equations that can be used to estimate long-term suspended-sediment discharges from long-term water-discharge records. Discharge was divided into three groups based on changes in slope on a graph of logarithms of water discharge versus suspended-sediment discharge. Two subgroups were formed within each of the three groups by determining whether the flow was steady or increasing, or was decreasing. Seasonality was accounted for by introducing day of the year in sine and cosine functions. The suspended-sediment load estimated from the equations for the 5 years was 77.3 percent of that computed from daily sediment- and water-discharge records for those years. The mean annual suspended-sediment load for 41 years of estimated loads was 359,500 tons, which represents a yield of about 3.5 tons per acre from the Bay Creek drainage basin. (USGS)
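
    A transport equation of this form, with a log-log water-discharge term, sine and cosine seasonality terms, and separate coefficient sets for rising/steady versus falling flow, can be evaluated as below. All coefficient values are illustrative placeholders, not the fitted values from the report:

```python
import math

def sediment_load(q, rising, doy, coeffs):
    """Evaluate log10(Qs) = b0 + b1*log10(Qw)
                          + b2*sin(2*pi*d/365) + b3*cos(2*pi*d/365),
    choosing the coefficient set for rising/steady vs. falling flow."""
    b0, b1, b2, b3 = coeffs["rising" if rising else "falling"]
    angle = 2.0 * math.pi * doy / 365.0
    log_qs = (b0 + b1 * math.log10(q)
              + b2 * math.sin(angle) + b3 * math.cos(angle))
    return 10.0 ** log_qs  # daily suspended-sediment discharge

coeffs = {"rising": (-2.0, 2.1, 0.15, -0.10),   # hypothetical coefficients
          "falling": (-2.4, 2.0, 0.15, -0.10)}
# Daily estimate for a 500 cfs rising-limb day in mid-April (day 105)
qs = sediment_load(500.0, True, 105, coeffs)
```

Summing such daily estimates over a long water-discharge record gives the long-term load, as was done for the 41-year estimate.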

  11. Optimization and large scale computation of an entropy-based moment closure

    NASA Astrophysics Data System (ADS)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  12. Optimization and large scale computation of an entropy-based moment closure

    DOE PAGES

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  13. CFD-based design load analysis of 5MW offshore wind turbine

    NASA Astrophysics Data System (ADS)

    Tran, T. T.; Ryu, G. J.; Kim, Y. H.; Kim, D. H.

    2012-11-01

    The structural and aerodynamic loads acting on the NREL 5MW reference wind turbine blade are calculated and analyzed based on advanced Computational Fluid Dynamics (CFD) and unsteady Blade Element Momentum (BEM) methods. A detailed examination of the six force components has been carried out (three force components and three moment components). Structural loads (gravity and inertia) and aerodynamic loads have been obtained by additional structural calculations for the CFD and BEM methods, respectively. In the CFD method, the Reynolds-Averaged Navier-Stokes approach was applied to solve the continuity equation of mass conservation and momentum balance so that the complex flow around the wind turbine could be modeled. A User Defined Function (UDF) code written in the C programming language, which defines a transient velocity profile according to the Extreme Operating Gust (EOG) condition, was compiled into the commercial FLUENT package. Furthermore, the unsteady BEM method with a 3D stall model was also adopted to investigate load components on the wind turbine rotor. The present study introduces a comparison between advanced CFD and unsteady BEM for determining loads on the wind turbine rotor. Results indicate that there is good agreement between the two methods. Importantly, it is shown that the six load components on the wind turbine rotor are significantly affected under the EOG condition. Using advanced CFD and additional structural calculations, this study succeeded in constructing an accurate numerical methodology to estimate the total load of a wind turbine, composed of aerodynamic and structural loads.

  14. A simple approach to estimate daily loads of total, refractory, and labile organic carbon from their seasonal loads in a watershed.

    PubMed

    Ouyang, Ying; Grace, Johnny M; Zipperer, Wayne C; Hatten, Jeff; Dewey, Janet

    2018-05-22

    Loads of naturally occurring total organic carbon (TOC), refractory organic carbon (ROC), and labile organic carbon (LOC) in streams control the availability of nutrients and the solubility and toxicity of contaminants and affect biological activities through absorption of light and complexation of metals with production of carcinogenic compounds. Although computer models have become increasingly popular in understanding and management of TOC, ROC, and LOC loads in streams, the usefulness of these models hinges on the availability of daily data for model calibration and validation. Unfortunately, these daily data are usually insufficient and/or unavailable for most watersheds due to a variety of reasons, such as budget and time constraints. A simple approach was developed here to calculate daily loads of TOC, ROC, and LOC in streams based on their seasonal loads. We concluded that the predictions from our approach adequately match field measurements based on statistical comparisons between model calculations and field measurements. Our approach demonstrates that an increase in stream discharge results in increased stream TOC, ROC, and LOC concentrations and loads, although high peak discharge did not necessarily result in high peaks of TOC, ROC, and LOC concentrations and loads. The approach developed herein is a useful tool to convert seasonal loads of TOC, ROC, and LOC into daily loads in the absence of measured daily load data.

  15. Increased Memory Load during Task Completion when Procedures Are Presented on Mobile Screens

    ERIC Educational Resources Information Center

    Byrd, Keena S.; Caldwell, Barrett S.

    2011-01-01

    The primary objective of this research was to compare procedure-based task performance using three common mobile screen sizes: ultra mobile personal computer (7 in./17.8 cm), personal data assistant (3.5 in./8.9 cm), and SmartPhone (2.8 in./7.1 cm). Subjects used these three screen sizes to view and execute a computer maintenance procedure.…

  16. Thermal stress analysis of reusable surface insulation for shuttle

    NASA Technical Reports Server (NTRS)

    Ojalvo, I. U.; Levy, A.; Austin, F.

    1974-01-01

    An iterative procedure for accurately determining tile stresses associated with static mechanical and thermally induced internal loads is presented. The necessary conditions for convergence of the method are derived. A user-oriented computer program based upon the present method of analysis was developed. The program is capable of analyzing multi-tiled panels and determining the associated stresses. Typical numerical results from this computer program are presented.

  17. A computer solution for the dynamic load, lubricant film thickness, and surface temperatures in spiral-bevel gears

    NASA Technical Reports Server (NTRS)

    Chao, H. C.; Baxter, M.; Cheng, H. S.

    1983-01-01

    A computer method for determining the dynamic load between spiral bevel pinion and gear teeth contact along the path of contact is described. The dynamic load analysis governs both the surface temperature and film thickness. Computer methods for determining the surface temperature, and film thickness are presented along with results obtained for a pair of typical spiral bevel gears.

  18. Collectively loading an application in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
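
    The pattern described (identify a subset, pick a leader, let only the leader touch the file system, then broadcast) can be sketched as below. This is an illustrative simulation, not the patented Blue Gene-style implementation; a real system would use an MPI-style collective broadcast rather than the dictionary loop:

```python
def collective_load(nodes, job_nodes, read_application):
    """Sketch of collective application loading in a parallel computer."""
    subset = [n for n in nodes if n in job_nodes]  # 1. identify subset for job
    leader = subset[0]                             # 2. select job leader node
    image = read_application(leader)               # 3. only leader reads binary
    return {node: image for node in subset}        # 4. "broadcast" to subset

reads = []
def read_application(node):
    reads.append(node)  # record which nodes actually hit the file system
    return b"\x7fELF...app"

# 64-node machine, 16-node job: one file-system read serves all 16 nodes
loaded = collective_load(range(64), set(range(16)), read_application)
```

The point of the design is visible in the bookkeeping: the application binary is read from storage exactly once regardless of how many compute nodes execute the job.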

  19. Computational modelling of ovine critical-sized tibial defects with implanted scaffolds and prediction of the safety of fixator removal.

    PubMed

    Doyle, Heather; Lohfeld, Stefan; Dürselen, Lutz; McHugh, Peter

    2015-04-01

    Computational model geometries of tibial defects with two types of implanted tissue engineering scaffolds, β-tricalcium phosphate (β-TCP) and poly-ε-caprolactone (PCL)/β-TCP, are constructed from µ-CT scan images of the real in vivo defects. Simulations of each defect under four-point bending and under simulated in vivo axial compressive loading are performed. The mechanical stability of each defect is analysed using stress distribution analysis. The results of this analysis highlight the influence of callus volume, and both scaffold volume and stiffness, on the load-bearing abilities of these defects. Clinically-used image-based methods to predict the safety of removing external fixation are evaluated for each defect. Comparison of these measures with the results of computational analyses indicates that care must be taken in the interpretation of these measures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Large Eddy Simulation of Ducted Propulsors in Crashback

    NASA Astrophysics Data System (ADS)

    Jang, Hyunchul; Mahesh, Krishnan

    2009-11-01

    Flow around a ducted marine propulsor is computed using the large eddy simulation methodology under crashback conditions. Crashback is an operating condition where a propulsor rotates in the reverse direction while the vessel moves in the forward direction. It is characterized by massive flow separation and highly unsteady propeller loads, which affect both blade life and maneuverability. The simulations are performed on unstructured grids using the discrete kinetic energy conserving algorithm developed by Mahesh et al. (2004, J. Comput. Phys 197). Numerical challenges posed by sharp blade edges and small blade tip clearances are discussed. The flow is computed at the advance ratio J=-0.7 and Reynolds number Re=480,000 based on the propeller diameter. Average and RMS values of the unsteady loads such as thrust, torque, and side force on the blades and duct are compared to experiment, and the effect of the duct on crashback is discussed.

  1. Derivation of Design Loads and Random Vibration specifications for Spacecraft Instruments and Sub-Units

    NASA Astrophysics Data System (ADS)

    Fransen, S.; Yamawaki, T.; Akagi, H.; Eggens, M.; van Baren, C.

    2014-06-01

    After a first estimation based on statistics, the design loads for instruments are generally estimated by coupled spacecraft/instrument sine analysis once an FE-model of the spacecraft is available. When the design loads for the instrument have been derived, the next step in the process is to estimate the random vibration environment at the instrument base and to compute the RMS load at the centre of gravity of the instrument by means of vibro-acoustic analysis. Finally the design loads of the light-weight sub-units of the instrument can be estimated through random vibration analysis at instrument level, taking into account the notches required to protect the instrument interfaces in the hard-mounted random vibration test. This paper presents the aforementioned steps of instrument and sub-units loads derivation in the preliminary design phase of the spacecraft and identifies the problems that may be encountered in terms of design load consistency between low-frequency and high-frequency environments. The SpicA FAR-infrared Instrument (SAFARI) which is currently developed for the Space Infrared Telescope for Cosmology and Astrophysics (SPICA) will be used as a guiding example.
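
    The RMS-load step is often approximated in preliminary design with Miles' equation for a single-degree-of-freedom system driven by a flat acceleration spectral density. The sketch below shows that calculation; the frequency, amplification, ASD, mass, and 3-sigma factor are generic illustrative values, not SAFARI numbers:

```python
import math

def miles_grms(f_n, Q, asd):
    """Miles' equation: approximate RMS acceleration (g) of a 1-DOF system
    with natural frequency f_n (Hz) and amplification Q, driven by a flat
    acceleration spectral density asd (g^2/Hz) at f_n."""
    return math.sqrt((math.pi / 2.0) * f_n * Q * asd)

def quasi_static_load(grms, mass_kg, sigma=3.0):
    """Sigma-level random vibration design load (N) at the centre of gravity."""
    return sigma * grms * 9.81 * mass_kg

g_rms = miles_grms(f_n=120.0, Q=10.0, asd=0.04)  # illustrative inputs
F_design = quasi_static_load(g_rms, mass_kg=5.0)  # 3-sigma load for a 5 kg unit
```

Comparing such high-frequency 3-sigma loads with the low-frequency sine-analysis loads is exactly where the consistency problems mentioned in the paper tend to appear.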

  2. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structure in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves which is shown to lead to major efficiency gains over unbalanced methods and a previously used simpler balancing method.

  3. Computational analysis of water entry of a circular section at constant velocity based on Reynold's averaged Navier-Stokes method

    NASA Astrophysics Data System (ADS)

    Uddin, M. Maruf; Fuad, Muzaddid-E.-Zaman; Rahaman, Md. Mashiur; Islam, M. Rabiul

    2017-12-01

    With the rapid decrease in the cost of computational infrastructure and more efficient algorithms for solving non-linear problems, Reynolds-averaged Navier-Stokes (RaNS) based Computational Fluid Dynamics (CFD) is now widely used. As a preliminary evaluation tool, CFD is used to calculate the hydrodynamic loads on offshore installations, ships, and other structures in the ocean at initial design stages. Traditionally, wedges have been studied more than circular cylinders because the cylinder section has zero deadrise angle at the instant of water impact, which increases with increasing submergence. In the present study, the RaNS-based commercial code ANSYS Fluent is used to simulate the water entry of a circular section at constant velocity. The present computational results are compared with experimental results and with another numerical method.

  4. A screening-level modeling approach to estimate nitrogen ...

    EPA Pesticide Factsheets

    This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess numerical nutrient standard exceedance risk of surface waters leading to potential classification as impaired for designated use. It can also be used to explore best management practice (BMP) implementation to reduce loading. The modeling framework uses a hybrid statistical and process based approach to estimate source of pollutants, their transport and decay in the terrestrial and aquatic parts of watersheds. The framework is developed in the ArcGIS environment and is based on the total maximum daily load (TMDL) balance model. Nitrogen (N) is currently addressed in the framework, referred to as WQM-TMDL-N. Loading for each catchment includes non-point sources (NPS) and point sources (PS). NPS loading is estimated using export coefficient or event mean concentration methods depending on the temporal scales, i.e., annual or daily. Loading from atmospheric deposition is also included. The probability of a nutrient load to exceed a target load is evaluated using probabilistic risk assessment, by including the uncertainty associated with export coefficients of various land uses. The computed risk data can be visualized as spatial maps which show the load exceedance probability for all stream segments. In an application of this modeling approach to the Tippecanoe River watershed in Indiana, USA, total nitrogen (TN) loading and risk of standard exce
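
    The export-coefficient loading and probabilistic exceedance steps described above can be sketched with a small Monte Carlo calculation. The land uses, coefficient ranges, and target load below are invented placeholders, not values from WQM-TMDL-N:

```python
import random

def nps_load(landuse_areas, export_coeffs):
    """Annual non-point-source load: sum over land uses of
    area (ha) * export coefficient (kg/ha/yr)."""
    return sum(area * export_coeffs[lu] for lu, area in landuse_areas.items())

def exceedance_probability(landuse_areas, coeff_dists, target, n=10000, seed=7):
    """Monte Carlo risk: sample uncertain export coefficients (uniform
    ranges here) and report the fraction of samples whose catchment load
    exceeds the target (TMDL) load."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        coeffs = {lu: rng.uniform(lo, hi)
                  for lu, (lo, hi) in coeff_dists.items()}
        if nps_load(landuse_areas, coeffs) > target:
            hits += 1
    return hits / n

areas = {"cropland": 800.0, "pasture": 300.0, "urban": 100.0}        # ha
dists = {"cropland": (10.0, 30.0), "pasture": (3.0, 8.0),
         "urban": (5.0, 12.0)}                                        # kg/ha/yr
risk = exceedance_probability(areas, dists, target=20000.0)           # kg N/yr
```

Mapping such per-catchment exceedance probabilities over all stream segments gives the spatial risk maps described in the abstract.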

  5. A theoretical framework for strain-related trabecular bone maintenance and adaptation.

    PubMed

    Ruimerman, R; Hilbers, P; van Rietbergen, B; Huiskes, R

    2005-04-01

    It is assumed that the density and morphology of trabecular bone are partially controlled by mechanical forces. How these effects are expressed in the local metabolic functions of osteoclast resorption and osteoblast formation is not known. In order to investigate possible mechano-biological pathways for these mechanisms, we proposed a mathematical theory (Nature 405 (2000) 704). This theory is based on hypothetical osteocyte stimulation of osteoblast bone formation, as an effect of elevated strain in the bone matrix, and a role for microcracks and disuse in promoting osteoclast resorption. Applied in a 2-D finite element analysis (FEA) model, the theory explained the formation of trabecular patterns. In this article we present a 3-D FEA model based on the same theory and investigate its ability to predict the morphological effects of metabolic reactions to mechanical load. The computations simulated the development of trabecular morphological details during growth reasonably realistically, relative to measurements in growing pigs. They confirmed that the proposed mechanisms also inherently lead to optimal stress transfer. Alternative loading directions produced new trabecular orientations. Reduction of load reduced trabecular thickness, connectivity, and mass in the simulation, as is seen in disuse osteoporosis. Simulating the effects of estrogen deficiency through increased osteoclast resorption frequencies also produced osteoporotic morphologies, as seen in post-menopausal osteoporosis. We conclude that the theory provides a suitable computational framework for investigating hypothetical relationships between bone loading and metabolic expressions.
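
    The regulation the theory proposes (strain-driven osteoblast formation opposed by disuse- and microcrack-driven osteoclast resorption) can be caricatured for a single element. This is a hedged sketch with invented constants, not the authors' 3-D FEA implementation:

```python
def remodel_density(rho0, stimulus, k_thr=0.25, tau=1.0, r_oc=0.05,
                    dt=0.1, steps=200, rho_min=0.01, rho_max=1.8):
    """One-element sketch of the regulation: osteoblast formation is
    proportional to the excess of the osteocyte strain stimulus over a
    threshold k_thr, opposed by a constant osteoclast resorption term
    r_oc (microcracks/disuse). All constants are illustrative."""
    rho = rho0
    for _ in range(steps):
        drho = tau * max(stimulus - k_thr, 0.0) - r_oc
        rho = min(max(rho + dt * drho, rho_min), rho_max)  # physical bounds
    return rho

loaded = remodel_density(1.0, stimulus=0.50)   # elevated strain: density grows
disuse = remodel_density(1.0, stimulus=0.00)   # unloading: net resorption
```

    Elevated strain drives density up toward its ceiling, while unloading lets resorption dominate, mirroring the disuse-osteoporosis behavior reported in the simulations.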

  6. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    PubMed Central

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-01-01

    Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple.
Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet. PMID:16539707
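
    mGrid's toolbox is Matlab, but the calling pattern it provides can be suggested in Python with the standard library: the caller submits an ordinary function over a list of inputs, and a pool balances the calls across workers while preserving result order, so the caller never sees where the code ran. The analogy is ours, not part of mGrid:

```python
from concurrent.futures import ThreadPoolExecutor

def run_distributed(func, jobs, workers=4):
    """Sketch of transparent, load-balanced execution: the pool hands
    each job to the next free worker and returns results in input order,
    hiding the placement of the work from the caller."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, jobs))

squares = run_distributed(lambda x: x * x, range(8))
```

    Run-time variables in the real system are additionally packed and shipped with the user code; here the shared address space makes that step invisible.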

  7. mGrid: a load-balanced distributed computing environment for the remote execution of the user-defined Matlab code.

    PubMed

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-03-15

    Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple.
Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet.

  8. Computer-based teaching module design: principles derived from learning theories.

    PubMed

    Lau, K H Vincent

    2014-03-01

    The computer-based teaching module (CBTM), which has recently gained prominence in medical education, is a teaching format in which a multimedia program serves as a single source for knowledge acquisition rather than playing an adjunctive role as it does in computer-assisted learning (CAL). Despite empirical validation in the past decade, there is limited research into the optimisation of CBTM design. This review aims to summarise research in classic and modern multimedia-specific learning theories applied to computer learning, and to collapse the findings into a set of design principles to guide the development of CBTMs. Scopus was searched for: (i) studies of classic cognitivism, constructivism and behaviourism theories (search terms: 'cognitive theory' OR 'constructivism theory' OR 'behaviourism theory' AND 'e-learning' OR 'web-based learning') and their sub-theories applied to computer learning, and (ii) recent studies of modern learning theories applied to computer learning (search terms: 'learning theory' AND 'e-learning' OR 'web-based learning') for articles published between 1990 and 2012. The first search identified 29 studies, dominated in topic by the cognitive load, elaboration and scaffolding theories. The second search identified 139 studies, with diverse topics in connectivism, discovery and technical scaffolding. Based on their relative representation in the literature, the applications of these theories were collapsed into a list of CBTM design principles. Ten principles were identified and categorised into three levels of design: the global level (managing objectives, framing, minimising technical load); the rhetoric level (optimising modality, making modality explicit, scaffolding, elaboration, spaced repetition); and the detail level (managing text, managing devices). This review examined the literature in the application of learning theories to CAL to develop a set of principles that guide CBTM design.
Further research will enable educators to take advantage of this unique teaching format as it gains increasing importance in medical education. © 2014 John Wiley & Sons Ltd.

  9. DSS 14 64-meter antenna. Computed RF pathlength changes under gravity loadings

    NASA Technical Reports Server (NTRS)

    Katow, M. S.

    1981-01-01

    Using a computer model of the reflector structure and its supporting assembly of the 64-m antenna rotating about the elevation axis, the radio frequency (RF) pathlength changes resulting from gravity loadings were computed. A check on the computed values was made by comparing the computed focus offsets with actual field readings of the Z, or axial, focusing required for elevation angle changes.

  10. Computer program to compute buckling loads of simply supported anisotropic plates

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1973-01-01

    The program handles several types of composites and several load conditions for each plate, including both compressive and tensile membrane loads, and accounts for bending-stretching coupling via the concept of reduced bending rigidities. Vibration frequencies of homogeneous or layered anisotropic plates can be calculated by slightly modifying the program.
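
    For the special case of a specially orthotropic, simply supported plate under uniaxial compression (the program itself also handles full anisotropy and bending-stretching coupling), the classical closed-form buckling load can be minimized over the number of half-waves. A sketch with an isotropic sanity check:

```python
import math

def ncr_uniaxial(D11, D12, D22, D66, a, b, m_max=10):
    """Critical uniaxial buckling load N_x (force per unit length) of a
    simply supported, specially orthotropic a x b plate:
    N = pi^2 * [D11*(m/a)^4 + 2*(D12+2*D66)*(m/a)^2*(1/b)^2 + D22*(1/b)^4]
        / (m/a)^2,
    minimized over axial half-waves m (one transverse half-wave governs)."""
    best = float('inf')
    for m in range(1, m_max + 1):
        am, bn = m / a, 1.0 / b
        N = math.pi**2 * (D11 * am**4
                          + 2.0 * (D12 + 2.0 * D66) * am**2 * bn**2
                          + D22 * bn**4) / am**2
        best = min(best, N)
    return best

# Isotropic sanity check: with D12 + 2*D66 = D, a square plate recovers
# the classical buckling coefficient k = 4, i.e. Ncr = 4*pi^2*D/b^2.
D, nu = 1.0, 0.3
Ncr = ncr_uniaxial(D, nu * D, D, 0.5 * (1.0 - nu) * D, 1.0, 1.0)
```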

  11. Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Gulshan B., E-mail: gbsharma@ucalgary.ca; University of Pittsburgh, Swanson School of Engineering, Department of Bioengineering, Pittsburgh, Pennsylvania 15213; University of Calgary, Schulich School of Engineering, Department of Mechanical and Manufacturing Engineering, Calgary, Alberta T2N 1N4

    Shoulder arthroplasty success has been attributed to many factors, including bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design will withstand a lifetime of use. Finite element (FE) analyses have been used extensively to study the stresses and strains produced in implants and bone. However, these static analyses only measure a moment in time, not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three-dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally by simulating bone remodeling using an intact human scapula: the scapular bone material properties were initially reset to be uniform, sequential loading was numerically simulated, and the bone remodeling simulation results were compared to the actual scapula's material properties. A three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint loads and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties were modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent densities were plotted and compared. 
The locations of high and low predicted bone density were comparable to the actual specimen, although high predicted bone density was greater than in the actual specimen and low predicted bone density was lower. The differences were probably due to the applied muscle and joint reaction loads, boundary conditions, and values of the constants used; work is underway to study this. Nonetheless, the results demonstrate the validity and potential of three-dimensional bone remodeling simulation. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses and adaptive bone remodeling simulations to become effective tools for regenerative medicine research.
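
    The per-iteration step described (compare each element's stimulus to a reference, then adjust its material properties) can be sketched as a density update with a "lazy zone". The rule shape follows Huiskes-type strain-energy remodeling; the constants and reference values below are illustrative, not the study's calibrated ones:

```python
def remodeling_iteration(rho, sed, s_ref, window=0.25, rate=0.5,
                         rho_min=0.01, rho_max=1.74):
    """One remodeling iteration: each element's stimulus (strain-energy
    density per unit mass, sed/rho) is compared to a reference s_ref;
    density grows or shrinks only outside a lazy zone of half-width
    `window`, then is clamped to physical bounds."""
    out = []
    for r, u in zip(rho, sed):
        stim = u / r
        if stim > s_ref * (1.0 + window):
            r += rate * (stim - s_ref * (1.0 + window))      # apposition
        elif stim < s_ref * (1.0 - window):
            r += rate * (stim - s_ref * (1.0 - window))      # resorption
        out.append(min(max(r, rho_min), rho_max))
    return out

# Three hypothetical elements: overstimulated, within the lazy zone, understimulated.
new_rho = remodeling_iteration([1.0, 1.0, 1.0],
                               [0.020, 0.010, 0.004], s_ref=0.010)
```

    Iterating this update (with the FE solve refreshing the strain-energy densities each pass) is what drives the model toward the converged, specimen-like density distribution.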

  12. Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine

    NASA Astrophysics Data System (ADS)

    Sharma, Gulshan B.; Robertson, Douglas D.

    2013-07-01

    Shoulder arthroplasty success has been attributed to many factors, including bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design will withstand a lifetime of use. Finite element (FE) analyses have been used extensively to study the stresses and strains produced in implants and bone. However, these static analyses only measure a moment in time, not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three-dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally by simulating bone remodeling using an intact human scapula: the scapular bone material properties were initially reset to be uniform, sequential loading was numerically simulated, and the bone remodeling simulation results were compared to the actual scapula's material properties. A three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint loads and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties were modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent densities were plotted and compared. 
The locations of high and low predicted bone density were comparable to the actual specimen, although high predicted bone density was greater than in the actual specimen and low predicted bone density was lower. The differences were probably due to the applied muscle and joint reaction loads, boundary conditions, and values of the constants used; work is underway to study this. Nonetheless, the results demonstrate the validity and potential of three-dimensional bone remodeling simulation. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses and adaptive bone remodeling simulations to become effective tools for regenerative medicine research.

  13. A local-circulation model for Darrieus vertical-axis wind turbines

    NASA Astrophysics Data System (ADS)

    Masse, B.

    1986-04-01

    A new computational model for the aerodynamics of the vertical-axis wind turbine is presented. Based on the local-circulation method generalized for curved blades, combined with a wake model for the vertical-axis wind turbine, it differs markedly from current models based on variations in streamtube momentum and from vortex models using lifting-line theory. A computer code has been developed to calculate the loads and performance of the Darrieus vertical-axis wind turbine. The results show good agreement with experimental data and compare well with other methods.

  14. Efficient critical design load case identification for floating offshore wind turbines with a reduced nonlinear model

    NASA Astrophysics Data System (ADS)

    Matha, Denis; Sandner, Frank; Schlipf, David

    2014-12-01

    Design verification of wind turbines is performed by simulating the design load cases (DLCs) defined in the IEC 61400-1 and -3 standards or equivalent guidelines. Because of the resulting large number of necessary load simulations, a method is presented here that significantly reduces the computational effort for DLC simulations by introducing a reduced nonlinear model with simplified hydro- and aerodynamics. The advantage of the formulation is that the nonlinear ODE system contains only basic mathematical operations and no iterations or internal loops, which makes it computationally very efficient. Global turbine extreme and fatigue loads such as rotor thrust, tower base bending moment, and mooring line tension, as well as platform motions, are outputs of the model. They can be used to identify critical and less critical load situations, which can then be analysed with a higher-fidelity tool, speeding up the design process. Results from these reduced-model DLC simulations are presented and compared to higher-fidelity models. Results in the frequency and time domains, as well as extreme and fatigue load predictions, demonstrate that good agreement between the reduced and advanced models is achieved, making it possible to efficiently exclude less critical DLC simulations and to identify the most critical subset of cases for a given design. Additionally, the model is applicable to brute-force optimization of floater control system parameters.
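
    The flavor of such a reduced nonlinear model (a few rigid-body states, closed-form forces, plain arithmetic per time step, no inner iterations) can be suggested with a single platform-pitch degree of freedom. This toy is not the authors' model; every constant below is invented for illustration:

```python
def pitch_response(thrust_series, dt=0.05, I=2.0e10, C=1.5e9,
                   K=2.2e10, h=90.0):
    """Single-DOF platform-pitch sketch: rotor thrust T acting at hub
    height h drives a damped rotational oscillator,
        I*theta'' = h*T - C*theta' - K*theta,
    time-stepped with semi-implicit Euler using only basic arithmetic.
    Returns the pitch history (rad); constants are illustrative."""
    theta, omega, hist = 0.0, 0.0, []
    for T in thrust_series:
        alpha = (h * T - C * omega - K * theta) / I
        omega += dt * alpha
        theta += dt * omega
        hist.append(theta)
    return hist

hist = pitch_response([1.0e6] * 4000)   # 200 s response to a thrust step
```

    The steady pitch settles at h*T/K, and outputs like tower-base moment follow algebraically from the state, which is what makes millions of such steps cheap enough for full DLC screening.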

  15. Improvements to information management systems simulator

    NASA Technical Reports Server (NTRS)

    Bilek, R. W.

    1972-01-01

    The work performed to augment and improve the interactive IMSIM information management simulation model is summarized. With this augmented model, NASA now has even greater capability for simulating computer system configurations, the data processing loads imposed on those configurations, and the executive software that controls system operations. Through these simulations, NASA has an extremely cost-effective capability for the design and analysis of computer-based data management systems.

  16. Multiscale Fiber Kinking: Computational Micromechanics and a Mesoscale Continuum Damage Mechanics Models

    NASA Technical Reports Server (NTRS)

    Herraez, Miguel; Bergan, Andrew C.; Gonzalez, Carlos; Lopes, Claudio S.

    2017-01-01

    In this work, the fiber kinking phenomenon, which is known as the failure mechanism that takes place when a fiber reinforced polymer is loaded under longitudinal compression, is studied. A computational micromechanics model is employed to interrogate the assumptions of a recently developed mesoscale continuum damage mechanics (CDM) model for fiber kinking based on the deformation gradient decomposition (DGD) and the LaRC04 failure criteria.

  17. Time Warp Operating System, Version 2.5.1

    NASA Technical Reports Server (NTRS)

    Bellenot, Steven F.; Gieselman, John S.; Hawley, Lawrence R.; Peterson, Judy; Presley, Matthew T.; Reiher, Peter L.; Springer, Paul L.; Tupman, John R.; Wedel, John J., Jr.; Wieland, Frederick P.; hide

    1993-01-01

    Time Warp Operating System, TWOS, is special purpose computer program designed to support parallel simulation of discrete events. Complete implementation of Time Warp software mechanism, which implements distributed protocol for virtual synchronization based on rollback of processes and annihilation of messages. Supports simulations and other computations in which both virtual time and dynamic load balancing used. Program utilizes underlying resources of operating system. Written in C programming language.

  18. A Method for Computing Leading-Edge Loads

    NASA Technical Reports Server (NTRS)

    Rhode, Richard V; Pearson, Henry A

    1933-01-01

    In this report a formula is developed that enables the determination of the proper design load for the portion of the wing forward of the front spar. The formula is inherently rational in concept, as it takes into account the most important variables that affect the leading-edge load, although theoretical rigor has been sacrificed for simplicity and ease of application. Some empirical corrections, based on pressure distribution measurements on the PW-9 and M-3 airplanes have been introduced to provide properly for biplanes. Results from the formula check experimental values in a variety of cases with good accuracy in the critical loading conditions. The use of the method for design purposes is therefore felt to be justified and is recommended.

  19. Time Accurate Unsteady Pressure Loads Simulated for the Space Launch System at a Wind Tunnel Condition

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L; Glass, Christopher E.; Schuster, David M.

    2015-01-01

    Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with simulated times ranging from a minimum of 0.2 seconds to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched those on a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location for peak RMS levels, and within 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy in comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods, such as determining minimized computational errors based on CFL number and sub-iterations, evaluating the frequency content of the unsteady pressures, and evaluating oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled the development of a set of best practices for evaluating future flight vehicle designs in terms of vibratory loads.

  20. Towards optimizing server performance in an educational MMORPG for teaching computer programming

    NASA Astrophysics Data System (ADS)

    Malliarakis, Christos; Satratzemi, Maya; Xinogalos, Stelios

    2013-10-01

    Web-based games have become significantly popular during the last few years, owing to the gradual increase of internet speed, which has led to ongoing multiplayer game development and, more importantly, the emergence of the Massive Multiplayer Online Role Playing Game (MMORPG) field. In parallel, similar technologies called educational games have started to be developed for use in various educational contexts, resulting in the field of Game Based Learning. However, these technologies require significant amounts of resources, such as bandwidth, RAM, and CPU capacity. These amounts may be even larger in an educational MMORPG that supports computer programming education, owing to the usual inclusion of a compiler and the constant client/server data transmissions that occur during program coding, possibly leading to technical issues that could cause malfunctions during learning. Thus, determining the elements that affect the overall load on the game's resources is essential, so that server administrators can configure them and ensure the educational game's proper operation during computer programming education. In this paper, we propose a new methodology for monitoring and optimizing load balancing, so that the resources essential for the creation and proper execution of an educational MMORPG for computer programming can be anticipated and provisioned without overloading the system.

  1. An enhanced SOCP-based method for feeder load balancing using the multi-terminal soft open point in active distribution networks

    DOE PAGES

    Ji, Haoran; Wang, Chengshan; Li, Peng; ...

    2017-09-20

    The integration of distributed generators (DGs) exacerbates feeder power flow fluctuation and unbalanced loading in active distribution networks (ADNs). Unbalanced feeder load causes inefficient use of network assets and network congestion during system operation. Flexible interconnection based on the multi-terminal soft open point (SOP) significantly benefits the operation of ADNs. The multi-terminal SOP, a controllable power electronic device installed to replace the normally open point, provides accurate active and reactive power flow control to enable the flexible connection of feeders. An enhanced SOCP-based method for feeder load balancing using the multi-terminal SOP is proposed in this paper. By regulating the operation of the multi-terminal SOP, the proposed method mitigates the unbalanced feeder load condition and simultaneously reduces the power losses of ADNs. The original non-convex model is converted into a second-order cone programming (SOCP) model using convex relaxation, and an enhanced SOCP-based approach is developed to tighten the relaxation and improve computational efficiency. Finally, case studies are performed on the modified IEEE 33-node system to verify the effectiveness and efficiency of the proposed method.
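
    Stripped of losses, reactive power, and the SOCP relaxation itself, the load-balancing role of an SOP reduces to shifting active power from the heavier feeder to the lighter one, limited by converter capacity. A deliberately toy two-feeder illustration (the real method optimizes a full network model):

```python
def sop_shift(load_a, load_b, capacity):
    """Active power (same units as the loads) an SOP should move from
    feeder A to feeder B to equalize their loading, clipped to the
    converter capacity. Losses and reactive power are ignored."""
    p = 0.5 * (load_a - load_b)
    return max(-capacity, min(capacity, p))

balanced = sop_shift(8.0, 4.0, capacity=5.0)   # hypothetical MW figures
limited = sop_shift(10.0, 0.0, capacity=3.0)   # capacity-constrained case
```

    When the converter rating binds, full balancing is impossible, which is one reason the actual formulation trades off the balancing objective against losses inside an optimization.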

  2. An enhanced SOCP-based method for feeder load balancing using the multi-terminal soft open point in active distribution networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Haoran; Wang, Chengshan; Li, Peng

    The integration of distributed generators (DGs) exacerbates feeder power flow fluctuation and unbalanced loading in active distribution networks (ADNs). Unbalanced feeder load causes inefficient use of network assets and network congestion during system operation. Flexible interconnection based on the multi-terminal soft open point (SOP) significantly benefits the operation of ADNs. The multi-terminal SOP, a controllable power electronic device installed to replace the normally open point, provides accurate active and reactive power flow control to enable the flexible connection of feeders. An enhanced SOCP-based method for feeder load balancing using the multi-terminal SOP is proposed in this paper. By regulating the operation of the multi-terminal SOP, the proposed method mitigates the unbalanced feeder load condition and simultaneously reduces the power losses of ADNs. The original non-convex model is converted into a second-order cone programming (SOCP) model using convex relaxation, and an enhanced SOCP-based approach is developed to tighten the relaxation and improve computational efficiency. Finally, case studies are performed on the modified IEEE 33-node system to verify the effectiveness and efficiency of the proposed method.

  3. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    The computer-generated hologram (CGH) is a promising 3D display technology, but it is challenged by a heavy computational load and a vast memory requirement. To address these problems, a depth-compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved look-up table (LUT) method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth-compensating factor is defined and used to calculate the holograms of points at different depth positions, instead of using layer-based methods. The proposed method is suitable for arbitrarily sampled objects, with lower memory usage and higher computational efficiency than other CGH methods. Its effectiveness is validated by numerical and optical experiments.
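
    The symmetry being exploited can be seen in a paraxial zone-plate sketch: the phase contributed by an on-axis point depends only on |x| and |y|, so one quadrant determines the other three. This illustrates the idea only; it is not the paper's GPU implementation, improved LUT, or depth-compensating factor:

```python
import math

def zone_plate_phase(N, pitch, z, lam):
    """Paraxial zone-plate phase of an on-axis point at distance z,
    sampled on an N x N grid (N even): phi = pi/(lam*z) * (x^2 + y^2).
    Only one quadrant is evaluated; the other three are filled by the
    fourfold mirror symmetry."""
    k = math.pi / (lam * z)
    half = N // 2
    full = [[0.0] * N for _ in range(N)]
    for i in range(half):
        x2 = ((i + 0.5) * pitch) ** 2
        for j in range(half):
            v = k * (x2 + ((j + 0.5) * pitch) ** 2)
            full[half + i][half + j] = v          # (+x, +y) quadrant
            full[half - 1 - i][half + j] = v      # mirror in x
            full[half + i][half - 1 - j] = v      # mirror in y
            full[half - 1 - i][half - 1 - j] = v  # mirror in both
    return full

ph = zone_plate_phase(8, 10e-6, 0.2, 633e-9)   # illustrative parameters
```

    Only a quarter of the phase evaluations are actually performed, the kind of saving the symmetry-based method scales up on the GPU.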

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Rij, Jennifer A; Yu, Yi-Hsiang; Guo, Yi

    This study explores and verifies the generalized body-modes method for evaluating the structural loads on a wave energy converter (WEC). Historically, WEC design methodologies have focused primarily on accurately evaluating hydrodynamic loads, while methodologies for evaluating structural loads have yet to be fully considered and incorporated into the WEC design process. As wave energy technologies continue to advance, however, it has become increasingly evident that an accurate evaluation of the structural loads will enable an optimized structural design, as well as the potential utilization of composites and flexible materials, and hence reduce WEC costs. Although many computational fluid dynamics, structural analysis, and fluid-structure-interaction (FSI) codes are available, their application is typically too computationally intensive to be practical in the early stages of the WEC design process. The generalized body-modes method, however, is a reduced-order, linearized, frequency-domain FSI approach, performed in conjunction with the linear hydrodynamic analysis, with computation times that could realistically be incorporated into the WEC design process. The objective of this study is to verify that the generalized body-modes approach accurately predicts structural deflections and stress loads in a WEC, in comparison to high-fidelity FSI simulations. Two verification cases are considered: a free-floating barge and a fixed-bottom column. Details for both the generalized body-modes models and the FSI models are first provided. Results for each of the models are then compared and discussed. Finally, based on the verification results obtained, future plans for incorporating the generalized body-modes method into the WEC simulation tool, WEC-Sim, and the overall WEC design process are discussed.

  5. Aids to Computer-Based Multimedia Learning.

    ERIC Educational Resources Information Center

    Mayer, Richard E.; Moreno, Roxana

    2002-01-01

    Presents a cognitive theory of multimedia learning that draws on dual coding theory, cognitive load theory, and constructivist learning theory and derives some principles of instructional design for fostering multimedia learning. These include principles of multiple representation, contiguity, coherence, modality, and redundancy. (SLD)

  6. Method and apparatus for transfer function simulator for testing complex systems

    NASA Technical Reports Server (NTRS)

    Kavaya, M. J. (Inventor)

    1985-01-01

    A method and apparatus for testing the operation of a complex stabilization circuit in a closed-loop system is presented. The method comprises a programmed analog or digital computing system that implements the transfer function of a load, thereby providing a predictable load. The digital computing system employs a table stored in a microprocessor, in which precomputed values of the load transfer function are stored for values of the input signal from the stabilization circuit over the range of interest. This technique may be used not only to isolate faults in the stabilization circuit, but also to analyze a fault in a faulty load by varying parameters of the computing system so as to simulate operation of the actual load with the fault.
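
    The table-lookup scheme described can be sketched as a precomputed transfer-function table with interpolation between stored entries. The function, input range, and table size below are hypothetical stand-ins for the actual load characteristic:

```python
import bisect

def build_table(f, lo, hi, n=256):
    """Precompute the load transfer function f over the input range of
    interest, as the described microprocessor table does."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return xs, [f(x) for x in xs]

def table_eval(xs, ys, x):
    """Table lookup with linear interpolation between stored points;
    inputs outside the stored range are clamped to the table ends."""
    i = bisect.bisect_left(xs, x)
    if i <= 0:
        return ys[0]
    if i >= len(xs):
        return ys[-1]
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Hypothetical linear load characteristic, for illustration only.
xs, ys = build_table(lambda v: 2.0 * v + 1.0, 0.0, 10.0)
y = table_eval(xs, ys, 3.3)
```

    Because the table is computed offline, the simulator's per-sample cost is a lookup and one multiply-add, and its parameters can be perturbed to mimic a faulty load.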

  7. Preliminary weight and costs of sandwich panels to distribute concentrated loads

    NASA Technical Reports Server (NTRS)

    Belleman, G.; Mccarty, J. E.

    1976-01-01

    Minimum-mass honeycomb sandwich panels were sized for transmitting a concentrated load to a uniform reaction through various distances. The face skin gages were fully stressed with a finite element computer code. The panel general stability was evaluated with a buckling computer code labeled STAGS-B. Two skin materials were considered: aluminum and graphite-epoxy. The core was constant-thickness aluminum honeycomb. Various panel sizes and load levels were considered. The computer-generated data were generalized to allow preliminary least-mass panel designs for a wide range of panel sizes and load intensities. An assessment of panel fabrication cost was also conducted. Various comparisons between panel mass, panel size, panel loading, and panel cost are presented in both tabular and graphical form.

  8. A location selection policy of live virtual machine migration for power saving and load balancing.

    PubMed

    Zhao, Jia; Ding, Yan; Xu, Gaochao; Hu, Liang; Dong, Yushuang; Fu, Xiaodong

    2013-01-01

    Green cloud data centers have become a research hotspot in virtualized cloud computing, and load balancing is one of the most important goals in cloud data centers. Since live virtual machine (VM) migration is widely used and studied in cloud computing, we focus on the location selection (migration policy) of live VM migration for power saving and load balancing. We propose MOGA-LS, a heuristic, self-adaptive multiobjective optimization algorithm based on an improved genetic algorithm (GA). This paper presents the design and implementation of MOGA-LS, including its genetic operators, fitness values, and elitism. We introduce Pareto dominance and the simulated annealing (SA) idea into MOGA-LS and present the process for obtaining the final solution, so that the whole approach achieves a long-term, efficient optimization for power saving and load balancing. The experimental results demonstrate that, compared with existing work, MOGA-LS markedly reduces the total incremental power consumption, better preserves the performance of VM migration, and balances the system load, making the result of live VM migration more effective and meaningful.
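    The abstract does not give MOGA-LS internals, but the Pareto-dominance test at the core of any such multiobjective selection can be sketched for two minimized objectives; the (power cost, load imbalance) scores below are hypothetical.

```python
def dominates(a, b):
    """a Pareto-dominates b if a is no worse in every objective and
    strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the non-dominated candidates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical (power_cost, load_imbalance) scores for candidate target hosts
hosts = [(10.0, 0.9), (8.0, 1.2), (12.0, 0.5), (11.0, 1.0)]
front = pareto_front(hosts)
```

    A GA such as MOGA-LS would evolve candidate placements and retain the non-dominated set, then pick a final host from that front.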

  9. A Location Selection Policy of Live Virtual Machine Migration for Power Saving and Load Balancing

    PubMed Central

    Xu, Gaochao; Hu, Liang; Dong, Yushuang; Fu, Xiaodong

    2013-01-01

    Green cloud data centers have become a research hotspot in virtualized cloud computing, and load balancing is one of the most important goals in cloud data centers. Since live virtual machine (VM) migration is widely used and studied in cloud computing, we focus on the location selection (migration policy) of live VM migration for power saving and load balancing. We propose MOGA-LS, a heuristic, self-adaptive multiobjective optimization algorithm based on an improved genetic algorithm (GA). This paper presents the design and implementation of MOGA-LS, including its genetic operators, fitness values, and elitism. We introduce Pareto dominance and the simulated annealing (SA) idea into MOGA-LS and present the process for obtaining the final solution, so that the whole approach achieves a long-term, efficient optimization for power saving and load balancing. The experimental results demonstrate that, compared with existing work, MOGA-LS markedly reduces the total incremental power consumption, better preserves the performance of VM migration, and balances the system load, making the result of live VM migration more effective and meaningful. PMID:24348165

  10. Effect of therapeutic insoles on the medial longitudinal arch in patients with flatfoot deformity: a three-dimensional loading computed tomography study.

    PubMed

    Kido, Masamitsu; Ikoma, Kazuya; Hara, Yusuke; Imai, Kan; Maki, Masahiro; Ikeda, Takumi; Fujiwara, Hiroyoshi; Tokunaga, Daisaku; Inoue, Nozomu; Kubo, Toshikazu

    2014-12-01

    Insoles are frequently used in orthotic therapy as the standard conservative treatment for symptomatic flatfoot deformity to rebuild the arch and stabilize the foot. However, the effectiveness of therapeutic insoles remains unclear. In this study, we assessed the effectiveness of therapeutic insoles for flatfoot deformity using subject-based three-dimensional (3D) computed tomography (CT) models by evaluating the load responses of the bones in the medial longitudinal arch in vivo in 3D. We studied eight individuals (16 feet) with mild flatfoot deformity. CT scans were performed on both feet under non-loaded and full-body-loaded conditions, first with accessory insoles and then with therapeutic insoles under the same conditions. Three-dimensional CT models were constructed for the tibia and the tarsal and metatarsal bones of the medial longitudinal arch (i.e., first metatarsal bone, cuneiforms, navicular, talus, and calcaneus). The rotational angles between the tarsal bones were calculated under loading with accessory insoles or therapeutic insoles and compared. Compared with the accessory insoles, the therapeutic insoles significantly suppressed the eversion of the talocalcaneal joint. This is the first study to precisely verify the usefulness of therapeutic insoles (arch support and inner wedges) in vivo. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Surface loading of a viscoelastic earth-I. General theory

    NASA Astrophysics Data System (ADS)

    Tromp, Jeroen; Mitrovica, Jerry X.

    1999-06-01

    We present a new normal-mode formalism for computing the response of an aspherical, self-gravitating, linear viscoelastic earth model to an arbitrary surface load. The formalism makes use of recent advances in the theory of the Earth's free oscillations, and is based upon an eigenfunction expansion methodology, rather than the traditional Love-number approach to surface-loading problems. We introduce a surface-load representation theorem analogous to Betti's reciprocity relation in seismology. Taking advantage of this theorem and the biorthogonality of the viscoelastic modes, we determine the complete response to a surface load in the form of a Green's function. We also demonstrate that each viscoelastic mode has its own unique energy partitioning, which can be used to characterize it. In subsequent papers, we apply the theory to spherically symmetric and aspherical earth models.

  12. Subject-specific computer simulation model for determining elbow loading in one-handed tennis backhand groundstrokes.

    PubMed

    King, Mark A; Glynn, Jonathan A; Mitchell, Sean R

    2011-11-01

    A subject-specific angle-driven computer model of a tennis player, combined with a forward dynamics, equipment-specific computer model of tennis ball-racket impacts, was developed to determine the effect of ball-racket impacts on loading at the elbow for one-handed backhand groundstrokes. Matching subject-specific computer simulations of a typical topspin/slice one-handed backhand groundstroke performed by an elite tennis player were done with root mean square differences between performance and matching simulations of < 0.5 degrees over a 50 ms period starting from ball impact. Simulation results suggest that for similar ball-racket impact conditions, the difference in elbow loading for a topspin and slice one-handed backhand groundstroke is relatively small. In this study, the relatively small differences in elbow loading may be due to comparable angle-time histories at the wrist and elbow joints with the major kinematic differences occurring at the shoulder. Using a subject-specific angle-driven computer model combined with a forward dynamics, equipment-specific computer model of tennis ball-racket impacts allows peak internal loading, net impulse, and shock due to ball-racket impact to be calculated which would not otherwise be possible without impractical invasive techniques. This study provides a basis for further investigation of the factors that may increase elbow loading during tennis strokes.

  13. High-Throughput Computation and the Applicability of Monte Carlo Integration in Fatigue Load Estimation of Floating Offshore Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Peter A.; Stewart, Gordon; Lackner, Matthew

    Long-term fatigue loads for floating offshore wind turbines are hard to estimate because they require the evaluation of the integral of a highly nonlinear function over a wide variety of wind and wave conditions. Current design standards involve scanning over a uniform rectangular grid of metocean inputs (e.g., wind speed and direction and wave height and period), which becomes intractable in high dimensions as the number of required evaluations grows exponentially with dimension. Monte Carlo integration offers a potentially efficient alternative because its theoretical convergence is proportional to the inverse of the square root of the number of samples, which is independent of dimension. In this paper, we first report on the integration of the aeroelastic code FAST into NREL's systems engineering tool, WISDEM, and the development of a high-throughput pipeline capable of sampling from arbitrary distributions, running FAST on a large scale, and postprocessing the results into estimates of fatigue loads. Second, we use this tool to run a variety of studies aimed at comparing grid-based and Monte Carlo-based approaches to calculating long-term fatigue loads. We observe that for more than a few dimensions, the Monte Carlo approach can represent a large improvement in computational efficiency, but that as nonlinearity increases, the effectiveness of Monte Carlo is correspondingly reduced. The present work sets the stage for future research focusing on using advanced statistical methods for analysis of wind turbine fatigue as well as extreme loads.
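    The sampling side of such a pipeline can be sketched minimally as below. The `damage` function stands in for a full FAST aeroelastic run, and all four metocean distributions are invented for illustration; the point is only that the estimator's error decays like 1/sqrt(n) regardless of how many input dimensions are sampled.

```python
import random

def mc_expectation(f, sample, n, seed=0):
    """Monte Carlo estimate of E[f(X)]: average f over n draws.
    The standard error shrinks like 1/sqrt(n), independent of the
    dimension of X."""
    rng = random.Random(seed)
    return sum(f(sample(rng)) for _ in range(n)) / n

# Hypothetical 4-D metocean input: wind speed, wind direction,
# significant wave height, peak period (distributions invented)
def draw(rng):
    return (rng.uniform(3, 25), rng.uniform(0, 360),
            rng.uniform(0.5, 8), rng.uniform(4, 14))

# Stand-in for a full aeroelastic evaluation: some nonlinear response
def damage(x):
    wind_speed, _, wave_height, wave_period = x
    return wind_speed ** 1.5 * wave_height ** 2 / wave_period

est = mc_expectation(damage, draw, 20000)
```

    A grid scan over the same four inputs at, say, 20 levels each would need 160,000 evaluations; the Monte Carlo budget is set independently of dimension.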

  14. Computational knee ligament modeling using experimentally determined zero-load lengths.

    PubMed

    Bloemker, Katherine H; Guess, Trent M; Maletsky, Lorin; Dodd, Kevin

    2012-01-01

    This study presents a subject-specific method of determining the zero-load lengths of the cruciate and collateral ligaments in computational knee modeling. Three cadaver knees were tested in a dynamic knee simulator. The cadaver knees also underwent manual envelope of motion testing to find their passive range of motion in order to determine the zero-load lengths for each ligament bundle. Computational multibody knee models were created for each knee and model kinematics were compared to experimental kinematics for a simulated walk cycle. One-dimensional non-linear spring damper elements were used to represent cruciate and collateral ligament bundles in the knee models. This study found that knee kinematics were highly sensitive to altering of the zero-load length. The results also suggest optimal methods for defining each of the ligament bundle zero-load lengths, regardless of the subject. These results verify the importance of the zero-load length when modeling the knee joint and verify that manual envelope of motion measurements can be used to determine the passive range of motion of the knee joint. It is also believed that the method described here for determining zero-load length can be used for in vitro or in vivo subject-specific computational models.
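    The elastic part of a one-dimensional, tension-only ligament bundle element can be sketched with a common piecewise (toe-region) force law; the stiffness and toe-strain values below are illustrative, the damper term is omitted, and the paper's element may differ in detail. The zero-load length is the length below which the bundle carries no force, which is why model kinematics are so sensitive to it.

```python
def ligament_force(length, zero_load_length, k=1000.0, toe_strain=0.03):
    """Tension-only nonlinear spring: slack below the zero-load length,
    quadratic 'toe' region at small strain, linear beyond. Parameter
    values are illustrative, not the paper's."""
    eps = (length - zero_load_length) / zero_load_length
    if eps <= 0.0:
        return 0.0  # slack: a ligament carries no compressive load
    if eps <= 2.0 * toe_strain:
        return 0.25 * k * eps ** 2 / toe_strain
    return k * (eps - toe_strain)
```

    Shifting the zero-load length shifts the strain `eps` at every joint pose, so even small changes re-tension or slacken the whole bundle, consistent with the sensitivity the study reports.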

  15. Reduced Design Load Basis for Ultimate Blade Loads Estimation in Multidisciplinary Design Optimization Frameworks

    NASA Astrophysics Data System (ADS)

    Pavese, Christian; Tibaldi, Carlo; Larsen, Torben J.; Kim, Taeseong; Thomsen, Kenneth

    2016-09-01

    The aim is to provide a fast and reliable approach to estimating ultimate blade loads for a multidisciplinary design optimization (MDO) framework. For blade design purposes, the standards require a large number of computationally expensive simulations, which cannot be efficiently run at each cost function evaluation of an MDO process. This work describes a method that allows integrating the calculation of the blade load envelopes inside an MDO loop. Ultimate blade load envelopes are calculated for a baseline design and a design obtained after an iteration of an MDO. These envelopes are computed for a full standard design load basis (DLB) and a deterministic reduced DLB. Ultimate loads extracted from the two DLBs, with each of the two blade designs, are compared and analyzed. Although the reduced DLB supplies ultimate loads of different magnitude, the shapes of the estimated envelopes are similar to those computed using the full DLB. This observation is used to propose a scheme that is computationally cheap and that can be integrated inside an MDO framework, providing a sufficiently reliable estimation of the blade ultimate loading. The latter aspect is of key importance when design variables implementing passive control methodologies are included in the formulation of the optimization problem. An MDO of a 10 MW wind turbine blade is presented as an applied case study to show the efficacy of the reduced DLB concept.

  16. The effect of cyclic feathering motions on dynamic rotor loads. [for helicopters

    NASA Technical Reports Server (NTRS)

    Harvey, K. W.

    1974-01-01

    The dynamic loads of a helicopter rotor in forward flight are influenced significantly by the geometric pitch angles between the structural axes of the hub and blade sections and the plane of rotation. The analytical study presented includes elastic coupling between inplane and out-of-plane deflections as a function of geometric pitch between the plane of rotation and the principal axes of inertia of each blade. The numerical evaluation is based on a transient analysis using lumped masses and elastic substructure techniques. A comparison of cases with and without cyclic feathering motion shows the effect on computed dynamic rotor loads.

  17. Instituto Nacional de Electrification, Guatemala Load Dispatch Center and Global Communications Center. Feasibility report (Instituto Nacional de Electrificacion, Guatemala Centro Nacional de Despacho de Carga y Sistema Global de Comunicaciones). Export trade information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-07-01

    The report presents the results of a feasibility study for the National Load Dispatch Center and Global Communications System Project in Guatemala. The project consists of a communication system which will provide Institute Nacional de Electrificacion (INDE) operations personnel direct voice access to all major power system facilities. In addition, a modern computer based load dispatch center has been configured on a secure and reliable basis to provide automatic generation control of all major interconnected generating plants within Guatemala.

  18. BALANCING THE LOAD: A VORONOI BASED SCHEME FOR PARALLEL COMPUTATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinberg, Elad; Yalinewich, Almog; Sari, Re'em

    2015-01-01

    One of the key issues when running a simulation on multiple CPUs is maintaining a proper load balance throughout the run and minimizing communications between CPUs. We propose a novel method of utilizing a Voronoi diagram to achieve a nearly perfect load balance without the need of any global redistributions of data. As a showcase, we implement our method in RICH, a two-dimensional moving-mesh hydrodynamical code, but it can be extended trivially to other codes in two or three dimensions. Our tests show that this method is indeed efficient and can be used in a large variety of existing hydrodynamical codes.
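    The assignment half of a Voronoi-based decomposition can be sketched as follows: each simulation point belongs to the CPU whose generator point is nearest, i.e. to that generator's Voronoi cell. The point cloud and generator positions below are illustrative; the paper's balancing step, moving generator points away from overloaded regions, is not shown.

```python
def assign_voronoi(points, generators):
    """Assign each point to the CPU whose generator point is nearest,
    i.e. to that generator's Voronoi cell."""
    def nearest(p):
        return min(range(len(generators)),
                   key=lambda i: (p[0] - generators[i][0]) ** 2
                               + (p[1] - generators[i][1]) ** 2)
    return [nearest(p) for p in points]

def cell_loads(owner, n_cpus):
    """Number of points handled by each CPU."""
    counts = [0] * n_cpus
    for o in owner:
        counts[o] += 1
    return counts

# Illustrative setup: a 10x10 point cloud split between two CPUs
points = [(x / 10.0, y / 10.0) for x in range(10) for y in range(10)]
generators = [(0.25, 0.45), (0.65, 0.45)]
counts = cell_loads(assign_voronoi(points, generators), 2)
```

    Because ownership is defined purely by generator positions, rebalancing reduces to nudging a few generators locally, with no global redistribution of data.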

  19. Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs

    PubMed Central

    Lin, Chun-Yuan; Wang, Chung-Hung; Hung, Che-Lun; Lin, Yu-Shiang

    2015-01-01

    Compound comparison is an important task for the computational chemistry. By the comparison results, potential inhibitors can be found and then used for the pharmacy experiments. The time complexity of a pairwise compound comparison is O(n 2), where n is the maximal length of compounds. In general, the length of compounds is tens to hundreds, and the computation time is small. However, more and more compounds have been synthesized and extracted now, even more than tens of millions. Therefore, it still will be time-consuming when comparing with a large amount of compounds (seen as a multiple compound comparison problem, abbreviated to MCC). The intrinsic time complexity of MCC problem is O(k 2 n 2) with k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for MCC problem, called CUDA-MCC, on single- and multi-GPUs. Four LINGO-based load-balancing strategies are considered in CUDA-MCC in order to accelerate the computation speed among thread blocks on GPUs. CUDA-MCC was implemented by C+OpenMP+CUDA. CUDA-MCC achieved 45 times and 391 times faster than its CPU version on a single NVIDIA Tesla K20m GPU card and a dual-NVIDIA Tesla K20m GPU card, respectively, under the experimental results. PMID:26491652
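    A serial sketch of the underlying LINGO similarity may help: LINGO compares the multisets of overlapping length-q substrings of two SMILES strings with a Tanimoto coefficient. This omits the canonicalization (e.g. of ring-closure digits) that real LINGO implementations apply, and the paper's contribution, the GPU load balancing, is not shown.

```python
from collections import Counter

def lingos(smiles, q=4):
    """Multiset of overlapping length-q substrings (LINGOs) of a SMILES string."""
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def lingo_tanimoto(a, b):
    """Tanimoto similarity of two LINGO multisets:
    |intersection| / |union| with multiset (min/max) semantics."""
    la, lb = lingos(a), lingos(b)
    inter = sum((la & lb).values())
    union = sum((la | lb).values())
    return inter / union if union else 0.0
```

    Each pairwise score costs O(n^2) in string length at worst, which is why an all-against-all MCC run over k compounds is O(k^2 n^2) and worth distributing across GPU thread blocks.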

  20. Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs.

    PubMed

    Lin, Chun-Yuan; Wang, Chung-Hung; Hung, Che-Lun; Lin, Yu-Shiang

    2015-01-01

    Compound comparison is an important task in computational chemistry: the comparison results can identify potential inhibitors for pharmacy experiments. The time complexity of a pairwise compound comparison is O(n^2), where n is the maximal compound length. In general, compound lengths are tens to hundreds of characters, so a single comparison is fast. However, ever more compounds have been synthesized and extracted, now numbering in the tens of millions, so comparing against a large set of compounds (the multiple compound comparison problem, abbreviated MCC) remains time-consuming. The intrinsic time complexity of the MCC problem is O(k^2 n^2) for k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for the MCC problem, called CUDA-MCC, for single and multiple GPUs. Four LINGO-based load-balancing strategies are considered in CUDA-MCC in order to accelerate computation across thread blocks on GPUs. CUDA-MCC was implemented in C with OpenMP and CUDA. In our experiments, CUDA-MCC ran 45 times and 391 times faster than its CPU version on a single NVIDIA Tesla K20m GPU card and a dual NVIDIA Tesla K20m GPU card, respectively.

  1. Modeling Geometry and Progressive Failure of Material Interfaces in Plain Weave Composites

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Cheng, Ron-Bin

    2010-01-01

    A procedure combining a geometrically nonlinear, explicit-dynamics contact analysis, computer aided design techniques, and elasticity-based mesh adjustment is proposed to efficiently generate realistic finite element models for meso-mechanical analysis of progressive failure in textile composites. In the procedure, the geometry of fiber tows is obtained by imposing a fictitious expansion on the tows. Meshes resulting from the procedure are conformal with the computed tow-tow and tow-matrix interfaces but are incongruent at the interfaces. The mesh interfaces are treated as cohesive contact surfaces not only to resolve the incongruence but also to simulate progressive failure. The method is employed to simulate debonding at the material interfaces in a ceramic-matrix plain weave composite with matrix porosity and in a polymeric matrix plain weave composite without matrix porosity, both subject to uniaxial cyclic loading. The numerical results indicate progression of the interfacial damage during every loading and reverse loading event in a constant strain amplitude cyclic process. However, the composites show different patterns of damage advancement.

  2. Theoretical research and experimental validation of elastic dynamic load spectra on bogie frame of high-speed train

    NASA Astrophysics Data System (ADS)

    Zhu, Ning; Sun, Shouguang; Li, Qiang; Zou, Hua

    2016-05-01

    When a train runs at high speeds, the external exciting frequencies approach the natural frequencies of bogie critical components, thereby inducing strong elastic vibrations. The present international reliability test evaluation standard and design criteria of bogie frames are all based on the quasi-static deformation hypothesis. Structural fatigue damage generated by structural elastic vibrations has not yet been included. In this paper, theoretical research and experimental validation are done on elastic dynamic load spectra on bogie frame of high-speed train. The construction of the load series that correspond to elastic dynamic deformation modes is studied. The simplified form of the load series is obtained. A theory of simplified dynamic load-time histories is then deduced. Measured data from the Beijing-Shanghai Dedicated Passenger Line are introduced to derive the simplified dynamic load-time histories. The simplified dynamic discrete load spectra of bogie frame are established. Based on the damage consistency criterion and a genetic algorithm, damage consistency calibration of the simplified dynamic load spectra is finally performed. The computed result proves that the simplified load series is reasonable. The calibrated damage that corresponds to the elastic dynamic discrete load spectra can cover the actual damage at the operating conditions. The calibrated damage satisfies the safety requirement of damage consistency criterion for bogie frame. This research is helpful for investigating the standardized load spectra of bogie frame of high-speed train.

  3. Parallel implementation of the particle simulation method with dynamic load balancing: Toward realistic geodynamical simulation

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Nishiura, D.

    2015-12-01

    Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are well suited to problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent material properties (e.g., rheology). These potential advantages over mesh-based methods enable effective numerical applications to geophysical flows and tectonic processes, for example tsunamis with free surfaces and floating bodies, magma intrusion with fracture of rock, and shear-zone pattern generation in granular deformation. Investigating such geodynamical problems with particle-based methods requires millions to billions of particles for a realistic simulation, so parallel computing is important for handling the huge computational cost. An efficient parallel implementation of SPH and DEM is, however, known to be difficult, especially on distributed-memory architectures. Lagrangian methods inherently suffer workload imbalance when parallelized with domains fixed in space, because particles move around and workloads change during the simulation. Dynamic load balance is therefore the key technique for large-scale SPH and DEM simulation. In this work, we present a parallel implementation technique for SPH and DEM utilizing dynamic load-balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time across MPI processes as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration. A slice-grid algorithm provides flexible domain decomposition in space.
Numerical tests show that our approach is suitable for solving particles with different calculation costs (e.g., boundary particles) as well as heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (K computer, Earth Simulator 3, etc.).

  4. Parallel Performance Optimizations on Unstructured Mesh-based Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas

    2015-01-01

    © The Authors. Published by Elsevier B.V. This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.

  5. Transmission Loss Calculation using A and B Loss Coefficients in Dynamic Economic Dispatch Problem

    NASA Astrophysics Data System (ADS)

    Jethmalani, C. H. Ram; Dumpa, Poornima; Simon, Sishaj P.; Sundareswaran, K.

    2016-04-01

    This paper analyzes the performance of A-loss coefficients in evaluating transmission losses in a Dynamic Economic Dispatch (DED) problem. The performance analysis is carried out by comparing the losses computed using nominal A loss coefficients and nominal B loss coefficients with reference to the load flow solution obtained by the standard Newton-Raphson (NR) method. A density-based clustering method over connected regions of sufficiently high density (DBSCAN) is employed to identify the best regions of the A and B loss coefficients. Based on the results obtained through cluster analysis, a novel approach to improving the accuracy of the network loss calculation is proposed. Here, based on the change in per-unit load values between load intervals, the loss coefficients are updated for calculating the transmission losses. The proposed algorithm is tested and validated on the IEEE 6 bus, IEEE 14 bus, IEEE 30 bus, and IEEE 118 bus systems. All simulations are carried out using SCILAB 5.4 (www.scilab.org), which is open-source software.
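    The classical B-coefficient (Kron) loss formula that the paper compares against can be sketched as below; every coefficient value is illustrative, not from any real system, and the A-coefficient formulation is analogous but not shown.

```python
def transmission_loss(p, B, B0, B00):
    """Kron's loss formula: P_loss = p^T B p + B0 . p + B00,
    where p is the vector of generator outputs in per unit."""
    n = len(p)
    quad = sum(p[i] * B[i][j] * p[j] for i in range(n) for j in range(n))
    lin = sum(B0[i] * p[i] for i in range(n))
    return quad + lin + B00

# Illustrative two-generator coefficients (not from any real system)
B = [[0.0003, 0.00001], [0.00001, 0.0004]]
B0 = [0.0001, 0.0002]
B00 = 0.00005
loss = transmission_loss([1.0, 0.8], B, B0, B00)
```

    Updating the coefficients between load intervals, as the paper proposes, amounts to re-evaluating this formula with interval-specific B, B0, and B00 rather than nominal ones.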

  6. Cumulative Axial and Torsional Fatigue: An Investigation of Load-Type Sequencing Effects

    NASA Technical Reports Server (NTRS)

    Kalluri, Sreeramesh; Bonacuse, Peter J.

    2000-01-01

    Cumulative fatigue behavior of a wrought cobalt-base superalloy, Haynes 188 was investigated at 538 C under various single-step sequences of axial and torsional loading conditions. Initially, fully-reversed, axial and torsional fatigue tests were conducted under strain control at 538 C on thin-walled tubular specimens to establish baseline fatigue life relationships. Subsequently, four sequences (axial/axial, torsional/torsional, axial/torsional, and torsional/axial) of two load-level fatigue tests were conducted to characterize both the load-order (high/low) and load-type sequencing effects. For the two load-level tests, summations of life fractions and the remaining fatigue lives at the second load-level were computed by the Miner's Linear Damage Rule (LDR) and a nonlinear Damage Curve Approach (DCA). In general, for all four cases predictions by LDR were unconservative. Predictions by the DCA were within a factor of two of the experimentally observed fatigue lives for a majority of the cumulative axial and torsional fatigue tests.
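    The life-fraction bookkeeping behind the two rules is compact. The Linear Damage Rule sums applied-to-allowable cycle ratios; the Damage Curve Approach sketched here is the Manson-Halford form (with its empirical 0.4 exponent), assumed for illustration and possibly differing in detail from the variant used in the paper.

```python
def miner_damage(blocks):
    """Miner's Linear Damage Rule: damage = sum(applied / allowable cycles);
    failure is predicted when the sum reaches 1."""
    return sum(n / N for n, N in blocks)

def miner_remaining_cycles(blocks, next_level_life):
    """Cycles left at the next load level before the linear sum reaches 1."""
    return (1.0 - miner_damage(blocks)) * next_level_life

def dca_remaining_fraction(n1, N1, N2):
    """Damage Curve Approach (Manson-Halford form, assumed here): the
    nonlinear exponent makes remaining life depend on the load order."""
    return 1.0 - (n1 / N1) ** ((N1 / N2) ** 0.4)
```

    For a high/low sequence (N1 < N2), the DCA exponent is below 1, so the first block consumes more than its linear share of life and the remaining fraction falls below the linear estimate, consistent with the unconservative LDR predictions reported above.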

  7. MSC/NASTRAN Stress Analysis of Complete Models Subjected to Random and Quasi-Static Loads

    NASA Technical Reports Server (NTRS)

    Hampton, Roy W.

    2000-01-01

    Space payloads, such as those which fly on the Space Shuttle in Spacelab, are designed to withstand dynamic loads which consist of combined acoustic random loads and quasi-static acceleration loads. Methods for computing the payload stresses due to these loads are well known and appear in texts and NASA documents, but typically involve approximations such as Miles' equation, as well as possible adjustments based on "modal participation factors." Alternatively, an existing capability in MSC/NASTRAN may be used to output exact root mean square (rms) stresses due to the random loads for any specified elements in the finite element model. However, it is time consuming to use this methodology to obtain the rms stresses for the complete structural model and then combine them with the quasi-static loading induced stresses. Special processing was developed, as described here, to perform the stress analysis of all elements in the model using existing MSC/NASTRAN, MSC/PATRAN, and UNIX utilities. Applications to fail-safe and buckling analyses are also described.
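    The Miles approximation mentioned above is a one-line formula for the rms response of a lightly damped single-degree-of-freedom mode to a flat random-acceleration spectrum; the mode frequency, amplification Q, and input level below are illustrative, not from the paper.

```python
import math

def miles_grms(natural_freq_hz, q_factor, asd_g2_per_hz):
    """Miles' equation: rms acceleration response of a lightly damped
    single-DOF mode to a flat random spectrum with the given ASD level
    at the natural frequency."""
    return math.sqrt(math.pi / 2.0 * natural_freq_hz * q_factor * asd_g2_per_hz)

# Hypothetical payload mode: 80 Hz, Q = 10, 0.04 g^2/Hz input
grms = miles_grms(80.0, 10.0, 0.04)
design_load_g = 3.0 * grms  # common 3-sigma peak design assumption
```

    The exact MSC/NASTRAN random-response capability replaces this single-mode approximation with rms stresses computed from the full model, which is what the special processing described above automates for every element.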

  8. Final Report to the Office of Naval Research on Precision Engineering

    DTIC Science & Technology

    1991-09-30

    Microscope equipped with a Panasonic Video Camera and Monitor was used to view the dressing process. Two scaled, transparent templates were made to...reservoir of hydraulic fluid. Loads were monitored by a miniature strain-guage load cell. A computer-based video image system was used to measure crack...was applied in a stepwise fashion, the stressing rate being approximately 1 MPa/s with hold periods of about 5 s at 2.5 - 5 MPa intervals. Video images

  9. Time-correlated gust loads using matched filter theory and random process theory - A new way of looking at things

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Zeiler, Thomas A.; Perry, Boyd, III

    1989-01-01

    This paper describes and illustrates two ways of performing time-correlated gust-load calculations. The first is based on Matched Filter Theory; the second on Random Process Theory. Both approaches yield theoretically identical results and represent novel applications of the theories, are computationally fast, and may be applied to other dynamic-response problems. A theoretical development and example calculations using both Matched Filter Theory and Random Process Theory approaches are presented.

  10. Numerical tools to predict the environmental loads for offshore structures under extreme weather conditions

    NASA Astrophysics Data System (ADS)

    Wu, Yanling

    2018-05-01

    In this paper, extreme waves were generated using the open-source computational fluid dynamics (CFD) tools OpenFOAM and Waves2FOAM, with linear and nonlinear NewWave input, and were used to numerically simulate the wave impact process. Numerical tools based on first-order NewWave (with and without stretching) and second-order NewWave are investigated. Simulations predicting the force loading on an offshore platform under extreme weather conditions are implemented and compared.
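    The linear NewWave input used above can be sketched as follows: the most probable extreme wave has the shape of the sea-state autocorrelation function, scaled so the crest at t = 0 equals the target amplitude. The spectrum and frequency grid below are invented for illustration (a uniform frequency spacing is assumed), and the stretching and second-order corrections the paper investigates are not shown.

```python
import math

def newwave_profile(amplitude, freqs_hz, spectrum, t):
    """Linear NewWave surface elevation: the autocorrelation of the
    sea state, scaled so the crest at t = 0 equals `amplitude`.
    Assumes a uniform frequency spacing."""
    s = [spectrum(f) for f in freqs_hz]
    m0 = sum(s)  # proportional to the surface-elevation variance
    return amplitude * sum(si * math.cos(2.0 * math.pi * f * t)
                           for si, f in zip(s, freqs_hz)) / m0

# Illustrative narrow-banded spectrum peaked near 0.1 Hz (not the paper's)
def spec(f):
    return math.exp(-(((f - 0.1) / 0.03) ** 2))

freqs = [0.05 + 0.005 * i for i in range(40)]
crest = newwave_profile(10.0, freqs, spec, 0.0)
```

    A time series of this profile, fed in at the inlet boundary, is what deterministic extreme-wave CFD runs typically use in place of hours of random sea.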

  11. Time-correlated gust loads using Matched-Filter Theory and Random-Process Theory: A new way of looking at things

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Zeiler, Thomas A.; Perry, Boyd, III

    1989-01-01

    Two ways of performing time-correlated gust-load calculations are described and illustrated. The first is based on Matched Filter Theory; the second on Random Process Theory. Both approaches yield theoretically identical results and represent novel applications of the theories, are computationally fast, and may be applied to other dynamic-response problems. A theoretical development and example calculations using both Matched Filter Theory and Random Process Theory approaches are presented.

  12. Cyclic Load Effects on Long Term Behavior of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Chamis, C. C.

    1996-01-01

A methodology to compute the fatigue life for different ratios, r, of applied stress to laminate strength, based on first-ply failure criteria combined with thermal cyclic loads, has been developed and demonstrated. Degradation effects resulting from long-term environmental exposure and thermo-mechanical cyclic loads are considered in the simulation process. A unified time-stress dependent multi-factor interaction equation model developed at NASA Lewis Research Center has been used to account for the degradation of material properties caused by cyclic and aging loads. The effect of variation in the thermal cyclic load amplitude on a quasi-symmetric graphite/epoxy laminate has been studied with respect to the impending failure modes. The results show that, for the laminate under consideration, the fatigue life under combined mechanical and low-amplitude thermal cyclic loads is higher than that under mechanical loads only. However, as the thermal amplitude increases, the fatigue life decreases. The failure mode changes from tensile under mechanical loads only to compressive and shear under high combined mechanical and thermal loads. Implementation of the developed methodology in the design process is also discussed.

  13. Effects of Faded Scaffolding in Computer-Based Instruction on Learners' Performance, Cognitive Load, and Test Anxiety

    ERIC Educational Resources Information Center

    Hao, Shuang

    2016-01-01

    Scaffolding is a type of instructional support that helps students to complete a learning task that exceeds their current ability. Scaffolding plays an important role in augmenting other instructional approaches, such as problem-based learning, and facilitates gradual shifts of responsibility from the more advanced others to the learner (Belland,…

  14. Effects of Instructional Strategies Using Cross Sections on the Recognition of Anatomical Structures in Correlated CT and MR Images

    ERIC Educational Resources Information Center

    Khalil, Mohammed K.; Paas, Fred; Johnson, Tristan E.; Su, Yung K.; Payer, Andrew F.

    2008-01-01

    This research is an effort to best utilize the interactive anatomical images for instructional purposes based on cognitive load theory. Three studies explored the differential effects of three computer-based instructional strategies that use anatomical cross-sections to enhance the interpretation of radiological images. These strategies include:…

  15. Estimated nitrogen loads from selected tributaries in Connecticut draining to Long Island Sound, 1999–2009

    USGS Publications Warehouse

    Mullaney, John R.; Schwarz, Gregory E.

    2013-01-01

    The total nitrogen load to Long Island Sound from Connecticut and contributing areas to the north was estimated for October 1998 to September 2009. Discrete measurements of total nitrogen concentrations and continuous flow data from 37 water-quality monitoring stations in the Long Island Sound watershed were used to compute total annual nitrogen yields and loads. Total annual computed yields and basin characteristics were used to develop a generalized-least squares regression model for use in estimating the total nitrogen yields from unmonitored areas in coastal and central Connecticut. Significant variables in the regression included the percentage of developed land, percentage of row crops, point-source nitrogen yields from wastewater-treatment facilities, and annual mean streamflow. Computed annual median total nitrogen yields at individual monitoring stations ranged from less than 2,000 pounds per square mile in mostly forested basins (typically less than 10 percent developed land) to more than 13,000 pounds per square mile in urban basins (greater than 40 percent developed) with wastewater-treatment facilities and in one agricultural basin. Medians of computed total annual nitrogen yields for water years 1999–2009 at most stations were similar to those previously computed for water years 1988–98. However, computed medians of annual yields at several stations, including the Naugatuck River, Quinnipiac River, and Hockanum River, were lower than during 1988–98. Nitrogen yields estimated for 26 unmonitored areas downstream from monitoring stations ranged from less than 2,000 pounds per square mile to 34,000 pounds per square mile. Computed annual total nitrogen loads at the farthest downstream monitoring stations were combined with the corresponding estimates for the downstream unmonitored areas for a combined estimate of the total nitrogen load from the entire study area. 
Resulting combined total nitrogen loads ranged from 38 to 68 million pounds per year during water years 1999–2009. Total annual loads from the monitored basins represent 63 to 74 percent of the total load. Computed annual nitrogen loads from four stations near the Massachusetts border with Connecticut represent 52 to 54 percent of the total nitrogen load during water years 2008–9, the only years with data for all the border sites. During the latter part of the 1999–2009 study period, total nitrogen loads to Long Island Sound from the study area appeared to increase slightly. The apparent increase in loads may be due to higher than normal streamflows, which consequently increased nonpoint nitrogen loads during the study, offsetting major reductions of nitrogen from wastewater-treatment facilities. Nitrogen loads from wastewater treatment facilities declined as much as 2.3 million pounds per year in areas of Connecticut upstream from the monitoring stations and as much as 5.8 million pounds per year in unmonitored areas downstream in coastal and central Connecticut.
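
    The generalized-least-squares regression step described above can be sketched compactly. The predictor names, weights, and all numbers below are illustrative stand-ins, not the actual USGS model or data:

    ```python
    import numpy as np

    # Hedged sketch of a generalized-least-squares (GLS) fit of total nitrogen
    # yield on basin characteristics, in the spirit of the model above.
    # Predictors (percent developed land, percent row crops, point-source
    # yield, streamflow) and weights are made up for illustration.

    rng = np.random.default_rng(0)
    n = 37  # one row per monitoring station
    X = np.column_stack([
        np.ones(n),                 # intercept
        rng.uniform(0, 60, n),      # percent developed land
        rng.uniform(0, 20, n),      # percent row crops
        rng.uniform(0, 5000, n),    # point-source N yield
        rng.uniform(50, 400, n),    # annual mean streamflow
    ])
    beta_true = np.array([1000.0, 150.0, 80.0, 1.2, 5.0])
    y = X @ beta_true + rng.normal(0, 300, n)

    # GLS estimator: beta = (X' W X)^{-1} X' W y, W = inverse error covariance
    W = np.diag(rng.uniform(0.5, 2.0, n))  # illustrative weights
    beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    print(beta_hat)
    ```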

  16. Simulation and evaluation of latent heat thermal energy storage

    NASA Technical Reports Server (NTRS)

    Sigmon, T. W.

    1980-01-01

The relative value of thermal energy storage (TES) for heat pump storage (heating and cooling) was derived as a function of storage temperature, mode of storage (hot-side or cold-side), geographic location, and utility time-of-use rate structures. Computer models used to simulate the performance of a number of TES/heat pump configurations are described. The models are based on existing performance data for heat pump components, available building thermal load computational procedures, and generalized TES subsystem designs. Life cycle costs computed for each site, configuration, and rate structure are discussed.

  17. Development of non-linear finite element computer code

    NASA Technical Reports Server (NTRS)

    Becker, E. B.; Miller, T.

    1985-01-01

Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.
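
    The separable, symmetric energy representation can be sketched in a few lines; an Ogden-type one-term form and the material constants below are assumptions for illustration only, not the propellant model in the report:

    ```python
    import numpy as np

    # Hedged sketch: a separable, symmetric strain-energy function of the
    # principal stretches, W = sum_i f(lambda_i), in the spirit of the
    # representation described above. An Ogden-type one-term f is assumed.

    MU, ALPHA = 0.4, 2.0  # illustrative material constants (MPa, -)

    def f(lam):
        return (MU / ALPHA) * (lam**ALPHA - 1.0)

    def strain_energy(stretches):
        """Energy per unit reference volume from the principal stretches."""
        return sum(f(l) for l in stretches)

    # Incompressible uniaxial extension: stretches (lam, 1/sqrt(lam), 1/sqrt(lam))
    lam = 1.5
    W = strain_energy([lam, lam**-0.5, lam**-0.5])
    print(W)
    ```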

  18. COMPUTER INTERFACED TOXICITY TESTING SYSTEM FOR SIMULATING VARIABLE EFFLUENT LOADING

    EPA Science Inventory

    Water quality criteria and standards are based primarily on toxicity tests carried out with single chemicals whose concentration is as nearly constant as possible. In the 'real world', however, organisms are exposed to mixtures of chemicals which usually have markedly fluctuating...

  19. Analysis of rotor vibratory loads using higher harmonic pitch control

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Bliss, Donald B.; Boschitsch, Alexander H.; Wachspress, Daniel A.

    1992-01-01

Experimental studies of isolated rotors in forward flight have indicated that higher harmonic pitch control can reduce rotor noise. These tests also show that such pitch inputs can generate substantial vibratory loads. The modification of the RotorCRAFT (Computation of Rotor Aerodynamics in Forward flighT) analysis of isolated rotors to study the vibratory loading generated by high-frequency pitch inputs is summarized. The original RotorCRAFT code was developed for use in the computation of such loading, and uses a highly refined rotor wake model to facilitate this task. The extended version of RotorCRAFT incorporates a variety of new features including: arbitrary periodic root pitch control; computation of blade stresses and hub loads; improved modeling of near-wake unsteady effects; and preliminary implementation of a coupled prediction of rotor airloads and noise. Correlation studies are carried out with existing blade stress and vibratory hub load data to assess the performance of the extended code.

  20. Flight-Time Identification of a UH-60A Helicopter and Slung Load

    NASA Technical Reports Server (NTRS)

    Cicolani, Luigi S.; McCoy, Allen H.; Tischler, Mark B.; Tucker, George E.; Gatenio, Pinhas; Marmar, Dani

    1998-01-01

This paper describes a flight test demonstration of a system for identifying the stability and handling qualities parameters of a helicopter slung-load configuration simultaneously with flight testing, and the results obtained. Tests were conducted with a UH-60A Black Hawk at speeds from hover to 80 kts. The principal test load was an instrumented 8 x 6 x 6 ft cargo container. The identification used frequency-domain analysis in the frequency range to 2 Hz, and focused on the longitudinal and lateral control axes since these are the axes most affected by the load pendulum modes in the frequency range of interest for handling qualities. Results were computed for stability margins, handling qualities parameters, and load pendulum stability. The computations took an average of 4 minutes before clearing the aircraft to the next test point. Important reductions in handling qualities were computed in some cases, depending on control axis and slung-load combination. A database, including load dynamics measurements, was accumulated for subsequent simulation development and validation.

  1. Monitoring of self-healing composites: a nonlinear ultrasound approach

    NASA Astrophysics Data System (ADS)

    Malfense Fierro, Gian-Piero; Pinto, Fulvio; Dello Iacono, Stefania; Martone, Alfonso; Amendola, Eugenio; Meo, Michele

    2017-11-01

Self-healing composites using a thermally mendable polymer based on the Diels-Alder reaction were fabricated and subjected to multiple damage loads. Unlike traditional destructive methods, this work presents a nonlinear ultrasound technique to evaluate the structural recovery of the proposed self-healing laminate structures. The results were compared with computer tomography and linear ultrasound methods. The laminates were subjected to multiple loading and healing cycles, and the induced damage and recovery at each stage were evaluated. The results highlight the advantage of using a nonlinear-based methodology to monitor the structural recovery of reversibly cross-linked epoxy with efficient recycling and multiple self-healing capability.

  2. Impact of Load Balancing on Unstructured Adaptive Grid Computations for Distributed-Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak; Simon, Horst D.

    1996-01-01

    The computational requirements for an adaptive solution of unsteady problems change as the simulation progresses. This causes workload imbalance among processors on a parallel machine which, in turn, requires significant data movement at runtime. We present a new dynamic load-balancing framework, called JOVE, that balances the workload across all processors with a global view. Whenever the computational mesh is adapted, JOVE is activated to eliminate the load imbalance. JOVE has been implemented on an IBM SP2 distributed-memory machine in MPI for portability. Experimental results for two model meshes demonstrate that mesh adaption with load balancing gives more than a sixfold improvement over one without load balancing. We also show that JOVE gives a 24-fold speedup on 64 processors compared to sequential execution.
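
    The core of such global-view rebalancing can be illustrated with a greedy longest-processing-time heuristic; the data structures and weights below are illustrative, not JOVE's actual implementation:

    ```python
    # Hedged sketch of global-view dynamic load balancing in the spirit of
    # JOVE: after mesh adaption, cell blocks are reassigned greedily so every
    # processor carries roughly equal work (longest-processing-time heuristic).
    import heapq

    def balance(block_weights, nprocs):
        """Greedily assign blocks to the currently least-loaded processor."""
        loads = [(0, p, []) for p in range(nprocs)]
        heapq.heapify(loads)
        for w in sorted(block_weights, reverse=True):
            load, p, blocks = heapq.heappop(loads)
            blocks.append(w)
            heapq.heappush(loads, (load + w, p, blocks))
        return loads

    # After adaption, some blocks got much heavier:
    weights = [10, 10, 80, 15, 40, 25, 5, 60]
    result = balance(weights, 4)
    print(sorted(load for load, _, _ in result))
    ```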

  3. Computer program for thin-wire structures in a homogeneous conducting medium

    NASA Technical Reports Server (NTRS)

    Richmond, J. H.

    1974-01-01

A computer program is presented for thin-wire antennas and scatterers in a homogeneous conducting medium. The analysis is performed in the real or complex frequency domain. The program handles insulated and bare wires with finite conductivity and lumped loads. The output data include the current distribution, impedance, radiation efficiency, gain, absorption cross section, scattering cross section, echo area, and the polarization scattering matrix. The program uses sinusoidal bases and Galerkin's method.

  4. Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Huang, Zhenyu; Welch, Greg

    2012-05-24

    To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
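
    A minimal sketch of measurement-subspace selection via a generalized eigenvalue decomposition (the covariances below are synthetic and the code is not the authors' implementation):

    ```python
    import numpy as np

    # Hedged sketch: choose an informative measurement subspace by solving the
    # generalized eigenproblem A v = lam B v, where A is a (made-up) signal
    # covariance over measurements and B a noise covariance. Directions with
    # the largest generalized eigenvalues carry the best signal-to-noise.

    def top_measurement_directions(A, B, k):
        """Solve A v = lam B v for symmetric A, SPD B; keep k leading pairs."""
        L = np.linalg.cholesky(B)
        Linv = np.linalg.inv(L)
        C = Linv @ A @ Linv.T        # reduce to a symmetric standard problem
        lam, W = np.linalg.eigh(C)   # eigenvalues in ascending order
        V = Linv.T @ W               # map back to generalized eigenvectors
        order = np.argsort(lam)[::-1]
        return lam[order][:k], V[:, order[:k]]

    rng = np.random.default_rng(1)
    M = rng.normal(size=(6, 6))
    A = M @ M.T                                          # signal covariance
    B = np.eye(6) + 0.1 * np.diag(rng.uniform(size=6))   # noise covariance
    lam, V = top_measurement_directions(A, B, 2)
    print(lam)
    ```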

  5. Estimation of dynamic rotor loads for the rotor systems research aircraft: Methodology development and validation

    NASA Technical Reports Server (NTRS)

    Duval, R. W.; Bahrami, M.

    1985-01-01

The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required to allow the load cells to be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated by comparison with experimental data, the parameters must be identified. Since the parameters may vary with flight condition, it is desirable to identify the parameters directly from the flight data. A Maximum Likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples. This rapid convergence facilitates tracking of time-varying parameters of the load cell model in flight.
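
    The rapid convergence reported above can be illustrated with a recursive least-squares identifier, used here as a simpler stand-in for the Maximum Likelihood algorithm; the linear model and data below are simulated, not the load-cell model of the paper:

    ```python
    import numpy as np

    # Hedged sketch: identify parameters of a linear measurement model
    # y = theta . x from streaming samples with recursive least squares (RLS),
    # a simpler stand-in for the Maximum Likelihood scheme described above.

    def rls(xs, ys, n, lam=1.0):
        """Recursive least squares; returns the parameter-estimate history."""
        theta = np.zeros(n)
        P = np.eye(n) * 1e4          # large initial covariance (weak prior)
        history = []
        for x, y in zip(xs, ys):
            k = P @ x / (lam + x @ P @ x)       # gain
            theta = theta + k * (y - x @ theta)  # innovation update
            P = (P - np.outer(k, x @ P)) / lam   # covariance update
            history.append(theta.copy())
        return np.array(history)

    rng = np.random.default_rng(2)
    theta_true = np.array([2.0, -1.0, 0.5])
    xs = rng.normal(size=(30, 3))
    ys = xs @ theta_true  # noise-free for the sketch

    hist = rls(xs, ys, 3)
    # The estimate settles near theta_true well within ~10 samples
    print(hist[9])
    ```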

  6. Capability Extension to the Turbine Off-Design Computer Program AXOD With Applications to the Highly Loaded Fan-Drive Turbines

    NASA Technical Reports Server (NTRS)

    Chen, Shu-cheng S.

    2011-01-01

The axial flow turbine off-design computer program AXOD has been upgraded to include the outlet guide vane (OGV) among its acceptable turbine configurations. The mathematical bases and the techniques used for the code implementation are described and discussed at length in this paper. This extended capability is verified and validated with two cases of highly loaded fan-drive turbines, designed and tested in the V/STOL Program of NASA. The first case is a 4 1/2-stage turbine with an average stage loading factor of 4.66, designed by Pratt & Whitney Aircraft. The second case is a 3 1/2-stage turbine with an average loading factor of 4.0, designed in-house by the NASA Lewis Research Center (now the NASA Glenn Research Center). Both cases were experimentally tested in the turbine facility located at the Glenn Research Center. The processes conducted in these studies are described in detail in this paper, and the results are presented and discussed in comparison with the experimental data. The comparisons between the AXOD results and the experimental data are in excellent agreement.

  7. Computationally inexpensive approach for pitch control of offshore wind turbine on barge floating platform.

    PubMed

    Zuo, Shan; Song, Y D; Wang, Lei; Song, Qing-wang

    2013-01-01

Offshore floating wind turbines (OFWTs) have gained increasing attention during the past decade because of high-quality offshore wind power and a complex load environment. The control system is a tradeoff between power tracking and fatigue load reduction in the above-rated wind speed region. To address the external disturbances and uncertain system parameters of OFWTs due to the proximity to load centers and strong wave coupling, this paper proposes a computationally inexpensive robust adaptive control approach with memory-based compensation for blade pitch control. The method is tested and compared with a baseline controller and a conventional individual blade pitch controller, with the "NREL offshore 5 MW baseline wind turbine" mounted on a barge platform run in FAST and Matlab/Simulink, operating in the above-rated condition. It is shown that the advanced control approach is not only robust to complex wind and wave disturbances but also adaptive to varying and uncertain system parameters. The simulation results demonstrate that the proposed method performs better in reducing power fluctuations, fatigue loads, and platform vibration as compared to conventional individual blade pitch control.

  8. Computationally Inexpensive Approach for Pitch Control of Offshore Wind Turbine on Barge Floating Platform

    PubMed Central

    Zuo, Shan; Song, Y. D.; Wang, Lei; Song, Qing-wang

    2013-01-01

Offshore floating wind turbines (OFWTs) have gained increasing attention during the past decade because of high-quality offshore wind power and a complex load environment. The control system is a tradeoff between power tracking and fatigue load reduction in the above-rated wind speed region. To address the external disturbances and uncertain system parameters of OFWTs due to the proximity to load centers and strong wave coupling, this paper proposes a computationally inexpensive robust adaptive control approach with memory-based compensation for blade pitch control. The method is tested and compared with a baseline controller and a conventional individual blade pitch controller, with the “NREL offshore 5 MW baseline wind turbine” mounted on a barge platform run in FAST and Matlab/Simulink, operating in the above-rated condition. It is shown that the advanced control approach is not only robust to complex wind and wave disturbances but also adaptive to varying and uncertain system parameters. The simulation results demonstrate that the proposed method performs better in reducing power fluctuations, fatigue loads, and platform vibration as compared to conventional individual blade pitch control. PMID:24453834

  9. Quadratic partial eigenvalue assignment in large-scale stochastic dynamic systems for resilient and economic design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Sonjoy; Goswami, Kundan; Datta, Biswa N.

    2014-12-10

Failure of structural systems under dynamic loading can be prevented via active vibration control which shifts the damped natural frequencies of the systems away from the dominant range of the loading spectrum. The damped natural frequencies and the dynamic load typically show significant variations in practice. A computationally efficient methodology based on quadratic partial eigenvalue assignment technique and optimization under uncertainty has been formulated in the present work that will rigorously account for these variations and result in an economic and resilient design of structures. A novel scheme based on hierarchical clustering and importance sampling is also developed in this work for accurate and efficient estimation of the probability of failure to guarantee the desired resilience level of the designed system. Numerical examples are presented to illustrate the proposed methodology.
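
    The importance-sampling estimate of a small failure probability can be sketched in one dimension; the limit state below is illustrative, not the paper's structural model:

    ```python
    import numpy as np

    # Hedged sketch: estimate a small failure probability P[g(X) < 0] under a
    # nominal N(0,1) load by importance sampling, recentring the sampling
    # density near the failure region and reweighting by the likelihood ratio.

    rng = np.random.default_rng(3)

    def g(x):
        return 4.0 - x  # failure when x > 4 (illustrative limit state)

    N = 20000
    mu_is = 4.0                          # importance density centred at the boundary
    x = rng.normal(mu_is, 1.0, N)
    # Likelihood ratio of nominal N(0,1) to importance N(mu_is,1)
    w = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - mu_is) ** 2)
    pf = np.mean((g(x) < 0) * w)
    print(pf)  # close to the exact tail 1 - Phi(4) ~ 3.17e-5
    ```

    Plain Monte Carlo would need millions of samples to see this event at all; recentring makes a 20,000-sample estimate accurate.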

  10. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
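
    The Laplace asymptotic approximation incorporated in the framework can be illustrated on a one-dimensional target, where it is exact; the density below is purely illustrative:

    ```python
    import numpy as np

    # Hedged sketch of the Laplace asymptotic approximation to model evidence:
    # around the posterior mode theta_hat,
    #   Z ~ exp(logp(theta_hat)) * (2*pi)^(d/2) / sqrt(det(H)),
    # with H the negative log-posterior Hessian at the mode. The 1-D Gaussian
    # target below is illustrative (Laplace is exact for Gaussians).

    def laplace_evidence(logp, theta_hat, hess):
        d = hess.shape[0]
        return np.exp(logp(theta_hat)) * (2 * np.pi) ** (d / 2) / np.sqrt(np.linalg.det(hess))

    # Unnormalized log-density of N(1, 0.5^2) scaled by 3.0
    sigma = 0.5
    logp = lambda th: np.log(3.0) - 0.5 * ((th[0] - 1.0) / sigma) ** 2
    Z = laplace_evidence(logp, np.array([1.0]), np.array([[1.0 / sigma**2]]))
    print(Z)  # true normalization: 3.0 * sqrt(2*pi) * sigma
    ```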

  11. Computation of ancestry scores with mixed families and unrelated individuals.

    PubMed

    Zhou, Yi-Hui; Marron, James S; Wright, Fred A

    2018-03-01

    The issue of robustness to family relationships in computing genotype ancestry scores such as eigenvector projections has received increased attention in genetic association, and is particularly challenging when sets of both unrelated individuals and closely related family members are included. The current standard is to compute loadings (left singular vectors) using unrelated individuals and to compute projected scores for remaining family members. However, projected ancestry scores from this approach suffer from shrinkage toward zero. We consider two main novel strategies: (i) matrix substitution based on decomposition of a target family-orthogonalized covariance matrix, and (ii) using family-averaged data to obtain loadings. We illustrate the performance via simulations, including resampling from 1000 Genomes Project data, and analysis of a cystic fibrosis dataset. The matrix substitution approach has similar performance to the current standard, but is simple and uses only a genotype covariance matrix, while the family-average method shows superior performance. Our approaches are accompanied by novel ancillary approaches that provide considerable insight, including individual-specific eigenvalue scree plots. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
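
    The "current standard" projection approach that the Letter improves upon can be sketched with simulated genotypes (all sizes, seeds, and allele frequencies below are made up):

    ```python
    import numpy as np

    # Hedged sketch of the standard ancestry-score projection: compute
    # loadings from the unrelated individuals via SVD, then project the family
    # members onto them. The shrinkage of projected scores toward zero is the
    # issue the Letter's matrix-substitution and family-average methods address.

    rng = np.random.default_rng(4)
    n_unrel, n_fam, n_snp = 100, 10, 500
    G_unrel = rng.binomial(2, 0.3, size=(n_unrel, n_snp)).astype(float)
    G_fam = rng.binomial(2, 0.3, size=(n_fam, n_snp)).astype(float)

    # Centre by allele frequencies estimated from the unrelated set
    mu = G_unrel.mean(axis=0)
    U, s, Vt = np.linalg.svd(G_unrel - mu, full_matrices=False)

    k = 2
    scores_unrel = (G_unrel - mu) @ Vt[:k].T  # ancestry scores (PCs)
    scores_fam = (G_fam - mu) @ Vt[:k].T      # projected scores for relatives
    print(scores_unrel.shape, scores_fam.shape)
    ```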

  12. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

Methods for distributed computing are currently receiving much attention; one such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers can face security threats posed by the computational processes themselves. The authors have developed a unified agent algorithm for the control system governing the operation of computing-network nodes, with networked PCs serving as the computing nodes. The proposed multi-agent control system allows the processing power of the computers on any existing network to be harnessed quickly to solve large tasks through distributed computing. Agents deployed on a computer network can configure a distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computing nodes can be increased by connecting additional computers to the system, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (a dynamically changing number of computers on the network). The developed multi-agent system also detects falsification of results in the distributed system, which could otherwise lead to wrong decisions, and it checks and corrects erroneous results.
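
    The power-proportional load distribution described above can be sketched as follows; node names, power ratings, and the allocation rule are illustrative, not the authors' agent protocol:

    ```python
    # Hedged sketch of an agent-style workload split: tasks are assigned to
    # networked computers in proportion to each machine's measured compute
    # power, the way the multi-agent control system distributes load.

    def distribute(n_tasks, powers):
        """Split n_tasks across nodes proportionally to relative power."""
        total = sum(powers.values())
        alloc = {node: int(n_tasks * p / total) for node, p in powers.items()}
        # Hand out the rounding remainder to the most powerful nodes first
        leftover = n_tasks - sum(alloc.values())
        for node in sorted(powers, key=powers.get, reverse=True)[:leftover]:
            alloc[node] += 1
        return alloc

    powers = {"pc-a": 3.0, "pc-b": 1.0, "pc-c": 2.0}
    alloc = distribute(100, powers)
    print(alloc)
    ```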

  13. Computational techniques for design optimization of thermal protection systems for the space shuttle vehicle. Volume 1: Final report

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Computational techniques were developed and assimilated for the design optimization. The resulting computer program was then used to perform initial optimization and sensitivity studies on a typical thermal protection system (TPS) to demonstrate its application to the space shuttle TPS design. The program was developed in Fortran IV for the CDC 6400 but was subsequently converted to the Fortran V language to be used on the Univac 1108. The program allows for improvement and update of the performance prediction techniques. The program logic involves subroutines which handle the following basic functions: (1) a driver which calls for input, output, and communication between program and user and between the subroutines themselves; (2) thermodynamic analysis; (3) thermal stress analysis; (4) acoustic fatigue analysis; and (5) weights/cost analysis. In addition, a system total cost is predicted based on system weight and historical cost data of similar systems. Two basic types of input are provided, both of which are based on trajectory data. These are vehicle attitude (altitude, velocity, and angles of attack and sideslip), for external heat and pressure loads calculation, and heating rates and pressure loads as a function of time.

  14. Noise-aware dictionary-learning-based sparse representation framework for detection and removal of single and combined noises from ECG signal

    PubMed Central

    Ramkumar, Barathram; Sabarimalai Manikandan, M.

    2017-01-01

Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary-learning-based generalised ECG signal enhancement framework which can automatically learn the dictionaries based on the ECG noise type for effective representation of ECG signal and noises, and can reduce the computational load of sparse-representation-based ECG enhancement systems. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, sparse signal decomposition and reconstruction. The noise detection and identification is performed based on the moving average filter, first-order difference, and temporal features such as number of turning points, maximum absolute amplitude, zero-crossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce computational load as compared with conventional dictionary-learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noises such as baseline wander, power-line interference, muscle artefacts and their combinations without distorting the morphological content of local waves of the ECG signal. PMID:28529758
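
    The temporal features used in the noise detection stage are straightforward to compute; a hedged sketch on synthetic signals (the Letter's thresholds and windowing are omitted):

    ```python
    import numpy as np

    # Hedged sketch of the temporal features named above: number of turning
    # points, maximum absolute amplitude, zero-crossings, and a lag-1
    # autocorrelation, computed on a short signal window.

    def temporal_features(x):
        x = np.asarray(x, dtype=float)
        d = np.diff(x)
        turning = int(np.sum(np.sign(d[:-1]) * np.sign(d[1:]) < 0))
        max_abs = float(np.max(np.abs(x)))
        zero_cross = int(np.sum(x[:-1] * x[1:] < 0))
        xc = x - x.mean()
        autocorr1 = float(np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc))
        return {"turning_points": turning, "max_abs": max_abs,
                "zero_crossings": zero_cross, "autocorr_lag1": autocorr1}

    t = np.linspace(0, 1, 200, endpoint=False)
    clean = np.sin(2 * np.pi * 2 * t)  # smooth, highly autocorrelated
    noisy = clean + 0.5 * np.random.default_rng(5).normal(size=200)
    print(temporal_features(clean)["autocorr_lag1"],
          temporal_features(noisy)["autocorr_lag1"])
    ```

    Noisy windows show many more turning points and much lower lag-1 autocorrelation, which is what makes these features usable for noise detection.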

  15. Noise-aware dictionary-learning-based sparse representation framework for detection and removal of single and combined noises from ECG signal.

    PubMed

    Satija, Udit; Ramkumar, Barathram; Sabarimalai Manikandan, M

    2017-02-01

Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary-learning-based generalised ECG signal enhancement framework which can automatically learn the dictionaries based on the ECG noise type for effective representation of ECG signal and noises, and can reduce the computational load of sparse-representation-based ECG enhancement systems. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, sparse signal decomposition and reconstruction. The noise detection and identification is performed based on the moving average filter, first-order difference, and temporal features such as number of turning points, maximum absolute amplitude, zero-crossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce computational load as compared with conventional dictionary-learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noises such as baseline wander, power-line interference, muscle artefacts and their combinations without distorting the morphological content of local waves of the ECG signal.

  16. STRUTEX: A prototype knowledge-based system for initially configuring a structure to support point loads in two dimensions

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Feyock, Stefan; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

The purpose of this research effort is to investigate the benefits that might be derived from applying artificial intelligence tools in the area of conceptual design. Therefore, the emphasis is on the artificial intelligence aspects of conceptual design rather than structural and optimization aspects. A prototype knowledge-based system, called STRUTEX, was developed to initially configure a structure to support point loads in two dimensions. This system combines numerical and symbolic processing by the computer with interactive problem solving that draws on the user's visual judgment, integrating a knowledge-base interface and inference engine, a database interface, and graphics while keeping the knowledge-base and database files separate. The system writes a file which can be input into a structural synthesis system, which combines structural analysis and optimization.

  17. Towards a Computational Framework for Modeling the Impact of Aortic Coarctations Upon Left Ventricular Load

    PubMed Central

    Karabelas, Elias; Gsell, Matthias A. F.; Augustin, Christoph M.; Marx, Laura; Neic, Aurel; Prassl, Anton J.; Goubergrits, Leonid; Kuehne, Titus; Plank, Gernot

    2018-01-01

    Computational fluid dynamics (CFD) models of blood flow in the left ventricle (LV) and aorta are important tools for analyzing the mechanistic links between myocardial deformation and flow patterns. Typically, the use of image-based kinematic CFD models prevails in applications such as predicting the acute response to interventions which alter LV afterload conditions. However, such models are limited in their ability to analyze any impacts upon LV load or key biomarkers known to be implicated in driving remodeling processes, as LV function is not accounted for in a mechanistic sense. This study addresses these limitations by reporting on progress made toward a novel electro-mechano-fluidic (EMF) model that represents the entire physics of LV electromechanics (EM) based on first principles. A biophysically detailed finite element (FE) model of LV EM was coupled with an FE-based CFD solver for moving domains using an arbitrary Lagrangian-Eulerian (ALE) formulation. Two clinical cases of patients suffering from aortic coarctations (CoA) were built and parameterized based on clinical data under pre-treatment conditions. For one patient case, simulations under post-treatment conditions, after geometric repair of the CoA by a virtual stenting procedure, were compared against pre-treatment results. Numerical stability of the approach was demonstrated by analyzing mesh quality and solver performance under the significantly large deformations of the LV blood pool. Further, computational tractability and compatibility with clinical time scales were investigated by performing strong scaling benchmarks up to 1536 compute cores. The overall cost of the entire workflow for building, fitting and executing EMF simulations was comparable to those reported for image-based kinematic models, suggesting that EMF models show potential to evolve into a viable clinical research tool. PMID:29892227

  18. Study on validation method for femur finite element model under multiple loading conditions

    NASA Astrophysics Data System (ADS)

    Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu

    2018-03-01

    Acquisition of accurate and reliable constitutive parameters for bio-tissue materials is beneficial for improving the biological fidelity of a Finite Element (FE) model and predicting impact damage more effectively. In this paper, a femur FE model was established under multiple loading conditions with diverse impact positions. Then, based on the sequential response surface method and genetic algorithms, the identification of material parameters was transformed into a multi-response optimization problem. Finally, the simulation results successfully coincided with force-displacement curves obtained by numerous experiments. Thus, the computational accuracy and efficiency of the entire inverse calculation process were enhanced. This method was able to effectively reduce the computation time of the inverse identification of material parameters. Meanwhile, the material parameters obtained by the proposed method achieved higher accuracy.
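    The core idea of recasting parameter identification as an optimization that matches simulated and experimental force-displacement curves can be sketched with a toy model. The force law, parameter values, and the simple mutation-only genetic search below are all hypothetical stand-ins for the paper's FE simulations and sequential response surface method:

```python
import random

def force(d, k1, k3):
    # Toy force-displacement model: linear plus cubic stiffening term.
    return k1 * d + k3 * d ** 3

# "Experimental" curve generated from known ground-truth parameters.
TRUE = (120.0, 4.0)
disps = [i * 0.1 for i in range(21)]
f_exp = [force(d, *TRUE) for d in disps]

def sse(params):
    # Sum of squared errors between model and "experimental" forces.
    return sum((force(d, *params) - f) ** 2 for d, f in zip(disps, f_exp))

def identify(generations=60, pop_size=40, seed=1):
    """Minimal genetic-algorithm-style search: keep the best half of the
    population, refill with Gaussian-mutated copies."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 300), rng.uniform(0, 10)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sse)
        elite = pop[: pop_size // 2]
        pop = elite + [(a + rng.gauss(0, 2), b + rng.gauss(0, 0.2))
                       for a, b in elite]
    return min(pop, key=sse)

k1, k3 = identify()
```

    In the real study each objective evaluation is a full FE impact simulation, which is why the response-surface surrogate matters; the toy model makes evaluations cheap enough for a direct search.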

  19. Small Molecules Targeting the miRNA-Binding Domain of Argonaute 2: From Computer-Aided Molecular Design to RNA Immunoprecipitation.

    PubMed

    Bellissimo, Teresa; Masciarelli, Silvia; Poser, Elena; Genovese, Ilaria; Del Rio, Alberto; Colotti, Gianni; Fazi, Francesco

    2017-01-01

    The development of small-molecule-based targeted therapy design for human disease and cancer is the object of growing attention. Recently, specific microRNA (miRNA) mimicking compounds able to bind the miRNA-binding domain of the Argonaute 2 protein (AGO2) to inhibit miRNA loading and its functional activity were described. Computer-aided molecular design techniques and RNA immunoprecipitation represent suitable approaches to identify and experimentally determine whether a compound is able to impair the loading of miRNAs on the AGO2 protein. Here, we describe these two methodologies, which we recently used to select a specific compound able to interfere with AGO2 functional activity and to improve the retinoic acid-dependent myeloid differentiation of leukemic cells.

  20. Three-Dimensional Mechanical Model of the Human Spine and the Versatility of its Use

    NASA Astrophysics Data System (ADS)

    Sokol, Milan; Velísková, Petra; Rehák, Ľuboš; Žabka, Martin

    2014-03-01

    The aim of the work is the simulation or modeling of the lumbar and thoracic human spine as a load-bearing 3D system in a computer program (ANSYS). The human spine model includes a determination of the geometry based on X-ray pictures of frontal and lateral projections. For this reason, another computer code, BMPCOORDINATES, was developed as an aid to obtain the most precise and realistic model of the spine. Various positions, deformations, scoliosis, rotation and torsion can be modelled. Once the geometry is done, external loading on different spinal segments is entered; consequently, the response can be analysed. This can contribute significantly to medical practice as a tool for diagnosis, and for developing implants or other artificial instruments for fixing the spine.

  1. Unsteady wind loads for TMT: replacing parametric models with CFD

    NASA Astrophysics Data System (ADS)

    MacMartin, Douglas G.; Vogiatzis, Konstantinos

    2014-08-01

    Unsteady wind loads due to turbulence inside the telescope enclosure result in image jitter and higher-order image degradation due to M1 segment motion. Advances in computational fluid dynamics (CFD) allow unsteady simulations of the flow around realistic telescope geometry, in order to compute the unsteady forces due to wind turbulence. These simulations can then be used to understand the characteristics of the wind loads. Previous estimates used a parametric model based on a number of assumptions about the wind characteristics, such as a von Karman spectrum and frozen-flow turbulence across M1, and relied on CFD only to estimate parameters such as mean wind speed and turbulent kinetic energy. Using the CFD-computed forces avoids the need for assumptions regarding the flow. We discuss here both the loads on the telescope that lead to image jitter, and the spatially-varying force distribution across the primary mirror, using simulations with the Thirty Meter Telescope (TMT) geometry. The amplitude, temporal spectrum, and spatial distribution of wind disturbances are all estimated; these are then used to compute the resulting image motion and degradation. There are several key differences relative to our earlier parametric model. First, the TMT enclosure provides sufficient wind reduction at the top end (near M2) to render the larger cross-sectional structural areas further inside the enclosure (including M1) significant in determining the overall image jitter. Second, the temporal spectrum is not von Karman as the turbulence is not fully developed; this applies both in predicting image jitter and M1 segment motion. And third, for loads on M1, the spatial characteristics are not consistent with propagating a frozen-flow turbulence screen across the mirror: frozen flow would result in a relationship between temporal frequency content and spatial frequency content that does not hold in the CFD predictions. Incorporating the new estimates of wind load characteristics into TMT response predictions leads to revised estimates of the response of TMT to wind turbulence, and validates the aerodynamic design of the enclosure.
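    For reference, the von Karman longitudinal spectrum assumed by the earlier parametric model can be written, in one common normalization, as S(f) = 4 sigma^2 (L/U) / (1 + 70.8 (fL/U)^2)^(5/6), where the constant 70.8 makes the spectrum integrate (over 0 to infinity) to the variance sigma^2. A quick numerical check, with hypothetical flow numbers:

```python
def von_karman_psd(f, sigma2, L, U):
    """Von Karman longitudinal turbulence spectrum (one common
    normalization): the -5/3 inertial-range rolloff appears through the
    5/6 exponent on the squared frequency term."""
    n = f * L / U
    return 4.0 * sigma2 * (L / U) / (1.0 + 70.8 * n * n) ** (5.0 / 6.0)

# Hypothetical numbers: 10 m/s mean speed, 10 m outer scale, unit variance.
SIGMA2, L_SCALE, U_MEAN = 1.0, 10.0, 10.0
df = 0.001
freqs = [df * (i + 1) for i in range(100000)]   # 0.001 .. 100 Hz
var = sum(von_karman_psd(fq, SIGMA2, L_SCALE, U_MEAN) * df for fq in freqs)
```

    The Riemann sum recovers the variance to within about a percent (the small deficit is the untruncated low- and high-frequency tails), which is the sanity check one would apply before fitting such a model to CFD force spectra.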

  2. Factorial Design Based Multivariate Modeling and Optimization of Tunable Bioresponsive Arginine Grafted Poly(cystaminebis(acrylamide)-diaminohexane) Polymeric Matrix Based Nanocarriers.

    PubMed

    Yang, Rongbing; Nam, Kihoon; Kim, Sung Wan; Turkson, James; Zou, Ye; Zuo, Yi Y; Haware, Rahul V; Chougule, Mahavir B

    2017-01-03

    Desired characteristics of nanocarriers are crucial to exploring their therapeutic potential. This investigation aimed to develop tunable, bioresponsive nanocarriers based on a newly synthesized arginine-grafted poly(cystaminebis(acrylamide)-diaminohexane) [ABP] polymeric matrix by using an L9 Taguchi factorial design, a desirability function, and a multivariate method. The selected formulation and process parameters were ABP concentration, acetone concentration, the volume ratio of acetone to ABP solution, and drug concentration. The measured nanocarrier characteristics were particle size, polydispersity index, zeta potential, and percentage drug loading. Experimental validation of the nanocarrier characteristics computed from the initially developed predictive model showed nonsignificant differences (p > 0.05). The multivariate-modeling-based optimized cationic nanocarrier formulation of <100 nm loaded with hydrophilic acetaminophen was readapted for loading of hydrophobic etoposide without significant changes (p > 0.05) except for an improved loading percentage. This is the first study focusing on ABP polymeric matrix based nanocarrier development. The nanocarrier particle size was stable in PBS 7.4 for 48 h. The increase of zeta potential at the lower pH of 6.4, compared to physiological pH, showed possible endosomal escape capability. The glutathione-triggered release at physiological conditions indicated the capability for cytosolic delivery of the loaded drug from the bioresponsive nanocarriers. In conclusion, this unique systematic approach provides rational evaluation and prediction of a tunable bioresponsive ABP-based matrix nanocarrier, built on a limited number of well-chosen experiments.
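    The L9 Taguchi design mentioned above is a standard 3-level, 4-factor orthogonal array: nine runs in which every pair of factor columns contains all nine level combinations exactly once. A minimal sketch of the array and a main-effects analysis follows; the additive toy response is illustrative, not the paper's data:

```python
# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, levels 0/1/2.
L9 = [
    (0, 0, 0, 0),
    (0, 1, 1, 1),
    (0, 2, 2, 2),
    (1, 0, 1, 2),
    (1, 1, 2, 0),
    (1, 2, 0, 1),
    (2, 0, 2, 1),
    (2, 1, 0, 2),
    (2, 2, 1, 0),
]

def main_effects(responses):
    """Average response at each level of each factor -- the basis for
    picking the 'best' level per factor in a Taguchi analysis."""
    effects = []
    for factor in range(4):
        levels = []
        for lv in range(3):
            vals = [r for row, r in zip(L9, responses) if row[factor] == lv]
            levels.append(sum(vals) / len(vals))
        effects.append(levels)
    return effects

# Toy additive response: each factor contributes its level directly.
responses = [sum(row) for row in L9]
effects = main_effects(responses)
```

    Because the columns are mutually orthogonal and balanced, each factor's level means isolate that factor's contribution even though only 9 of the 81 possible runs are performed.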

  3. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by a protective branch switching operation. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor-based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
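    The shared-memory flavor of this parallelization can be sketched in Python with threads standing in for OpenMP: the state vector is partitioned into contiguous chunks and each worker integrates its own chunk in place. The decoupled toy dynamics below are purely illustrative; a real power-system simulation couples the states through the network equations:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "dynamic simulation": N decoupled first-order states dx/dt = -a*x,
# advanced with explicit Euler. The shared state vector is partitioned
# among workers, each updating only its own slice (OpenMP-style).
N, STEPS, DT, A = 1000, 100, 0.01, 2.0
x = [1.0] * N

def advance(lo_hi):
    lo, hi = lo_hi
    for i in range(lo, hi):
        for _ in range(STEPS):
            x[i] += DT * (-A * x[i])   # Euler step: x <- x*(1 - A*DT)

chunks = [(i * N // 4, (i + 1) * N // 4) for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(advance, chunks))
```

    Each state should end near exp(-A*STEPS*DT) = exp(-2); Euler gives (1 - 0.02)^100, slightly lower. An MPI implementation would instead give each rank a private slice and exchange boundary/coupling data explicitly.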

  4. Computational methods for structural load and resistance modeling

    NASA Technical Reports Server (NTRS)

    Thacker, B. H.; Millwater, H. R.; Harren, S. V.

    1991-01-01

    An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV+) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given, as well as several illustrative examples verified by Monte Carlo analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a material nonlinear structure considering material damage as a function of several primitive random variables. The results show clearly the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.
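    The Monte Carlo verification step has a simple closed-form target when load and resistance are independent normals: R - L is itself normal, so P(failure) = Phi(-beta) with reliability index beta = (mu_R - mu_L)/sqrt(sd_R^2 + sd_L^2). A sketch with hypothetical distribution parameters (the paper's AMV+ algorithm is not reproduced here):

```python
import math
import random

def failure_prob_mc(mu_r, sd_r, mu_l, sd_l, n=200_000, seed=7):
    """Monte Carlo estimate of P(failure) = P(R - L < 0) for independent
    normal resistance R and load L."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if rng.gauss(mu_r, sd_r) - rng.gauss(mu_l, sd_l) < 0)
    return fails / n

def failure_prob_exact(mu_r, sd_r, mu_l, sd_l):
    # R - L ~ Normal(mu_r - mu_l, sd_r^2 + sd_l^2), so use Phi(-beta).
    beta = (mu_r - mu_l) / math.hypot(sd_r, sd_l)   # reliability index
    return 0.5 * math.erfc(beta / math.sqrt(2))

p_mc = failure_prob_mc(50.0, 5.0, 30.0, 4.0)
p_ex = failure_prob_exact(50.0, 5.0, 30.0, 4.0)
```

    For small failure probabilities like this one, plain Monte Carlo needs many samples for a tight estimate, which is exactly why fast approximate methods such as AMV+ are attractive for implicit (finite-element) performance functions.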

  5. User's manual for a computer program for the emulation/simulation of a space station Environmental Control and Life Support System (ESCM)

    NASA Technical Reports Server (NTRS)

    Yanosy, James L.

    1988-01-01

    This manual describes how to use the Emulation Simulation Computer Model (ESCM). Based on G189A, ESCM computes the transient performance of a Space Station atmospheric revitalization subsystem (ARS) with CO2 removal provided by a solid amine water desorbed subsystem called SAWD. Many performance parameters are computed, some of which are cabin CO2 partial pressure, relative humidity, temperature, O2 partial pressure, and dew point. The program allows the user to simulate various possible combinations of man loading, metabolic profiles, cabin volumes and certain hypothesized failures that could occur.

  6. A Stratified Acoustic Model Accounting for Phase Shifts for Underwater Acoustic Networks

    PubMed Central

    Wang, Ping; Zhang, Lin; Li, Victor O. K.

    2013-01-01

    Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated. PMID:23669708
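    The phase-aware summation at the heart of SAM can be illustrated with a minimal coherent two-ray example: each eigenray contributes a spherically spread arrival (1/r) exp(j k r), and transmission loss follows from the magnitude of the summed field. Boundary reflection losses and ray bending are omitted here, and the geometry is hypothetical:

```python
import cmath
import math

def transmission_loss(ray_lengths, freq_hz, c=1500.0):
    """Coherent transmission loss from a set of eigenray path lengths:
    sum spherical-spreading arrivals (1/r)*exp(j*k*r) with their phases,
    then TL = -20*log10(|field|), referenced to 1 m."""
    k = 2 * math.pi * freq_hz / c
    field = sum(cmath.exp(1j * k * r) / r for r in ray_lengths)
    return -20 * math.log10(abs(field))

# Direct path plus a slightly longer second path (hypothetical geometry):
# the phase difference determines constructive or destructive interference.
tl_single = transmission_loss([1000.0], 1000.0)
tl_multi = transmission_loss([1000.0, 1003.0], 1000.0)
```

    The single 1 km path gives the familiar 60 dB of spherical spreading; adding the second arrival shifts the loss according to the relative phase k*(r2 - r1), the effect an incoherent empirical model cannot capture.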

  7. A stratified acoustic model accounting for phase shifts for underwater acoustic networks.

    PubMed

    Wang, Ping; Zhang, Lin; Li, Victor O K

    2013-05-13

    Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated.

  8. MODELING LONG-TERM NITRATE BASE-FLOW LOADING FROM TWO AGRICULTURAL WATERSHEDS

    EPA Science Inventory

    Nitrate contamination of ground water from agricultural practices may be contributing to the eutrophication of the Chesapeake Bay, degrading water quality and aquatic habitats. Groundwater flow and nitrate transport and fate are modeled, using MODFLOW and MT3D computer models, in...

  9. Analytical Fuselage and Wing Weight Estimation of Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Chambers, Mark C.; Ardema, Mark D.; Patron, Anthony P.; Hahn, Andrew S.; Miura, Hirokazu; Moore, Mark D.

    1996-01-01

    A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft, and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. The resulting computer program, PDCYL, has been integrated into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight. Using statistical analysis techniques, relations between the load-bearing fuselage and wing weights calculated by PDCYL and the corresponding actual weights were determined.

  10. Fine Output Voltage Control Method considering Time-Delay of Digital Inverter System for X-ray Computed Tomography

    NASA Astrophysics Data System (ADS)

    Shibata, Junji; Kaneko, Kazuhide; Ohishi, Kiyoshi; Ando, Itaru; Ogawa, Mina; Takano, Hiroshi

    This paper proposes a new output voltage control for an inverter system that has a time-delay and a nonlinear load. In next-generation X-ray computed tomography (X-ray CT) medical devices that use the contactless power transfer method, the feedback signal often contains a time-delay due to AD/DA conversion and error detection/correction time. When the PID controller of the inverter system is subjected to the adverse effects of the time-delay, the controller often produces overshoot and an oscillatory response. In order to overcome this problem, this paper proposes a compensation method based on the Smith predictor for an inverter system having a time-delay and nonlinear loads, namely the diode bridge rectifier and X-ray tube. The proposed compensation method consists of a hybrid Smith predictor system based on an equivalent analog circuit and a DSP. The experimental results confirm the validity of the proposed system.
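    The benefit of the Smith predictor can be sketched in discrete time with a toy first-order plant and a 10-sample input delay; with a perfect plant model, the controller effectively sees the delay-free response. All numbers are hypothetical, and the paper's hybrid analog-circuit/DSP realization and nonlinear rectifier load are not modeled:

```python
def simulate(use_smith, steps=300, d=10, a=0.9, b=0.1, kp=1.0, ki=0.05, r=1.0):
    """Closed-loop step response of a first-order plant y[k+1]=a*y+b*u[k-d]
    under PI control, optionally with a Smith predictor (perfect model)."""
    y = 0.0                  # true plant state
    ym = 0.0                 # model state without delay
    ymd = 0.0                # model state with delay
    ubuf = [0.0] * d         # delay line for the plant input
    integ = 0.0
    out = []
    for _ in range(steps):
        # Predictor replaces the delayed measurement with y + (ym - ymd),
        # which equals the delay-free model output when the model is exact.
        fb = y + (ym - ymd) if use_smith else y
        e = r - fb
        integ += e
        u = kp * e + ki * integ
        u_delayed = ubuf.pop(0)
        ubuf.append(u)
        y = a * y + b * u_delayed
        ymd = a * ymd + b * u_delayed
        ym = a * ym + b * u
        out.append(y)
    return out

resp_smith = simulate(True)
resp_plain = simulate(False)
```

    With the predictor, the same PI gains give a monotone, well-settled response; without it, the integrator winds up during the dead time and the output overshoots, the behavior the abstract describes for the uncompensated controller.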

  11. Thermal elastohydrodynamic lubrication of spur gears

    NASA Technical Reports Server (NTRS)

    Wang, K. L.; Cheng, H. S.

    1980-01-01

    An analysis and computer program called TELSGE were developed to predict the variations of dynamic load, surface temperature, and lubricant film thickness along the contacting path during the engagement of a pair of involute spur gears. The analysis of dynamic load includes the effect of gear inertia, the effect of load sharing of adjacent teeth, and the effect of variable tooth stiffness which are obtained by a finite-element method. Results obtained from TELSGE for the dynamic load distributions along the contacting path for various speeds of a pair of test gears show patterns similar to that observed experimentally. Effects of damping ratio, contact ratio, tip relief, and tooth error on the dynamic load were examined. In addition, two dimensionless charts are included for predicting the maximum equilibrium surface temperature, which can be used to estimate directly the lubricant film thickness based on well established EHD analysis.
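    The final step above, estimating lubricant film thickness from "well established EHD analysis", is typically done with a regression formula such as the classic Dowson-Higginson minimum-film-thickness relation for line contacts, H = 2.65 U^0.7 G^0.54 W^-0.13 in dimensionless groups. The sketch below uses hypothetical gear-contact numbers; TELSGE's exact correlation is not given in the abstract:

```python
def dowson_higginson_hmin(u_e, eta0, alpha, E_prime, R, w_line):
    """Minimum EHD film thickness (m) for a line contact via the classic
    Dowson-Higginson formula, built from the dimensionless speed (U),
    materials (G), and load (W) parameters."""
    U = eta0 * u_e / (E_prime * R)        # speed parameter
    G = alpha * E_prime                   # materials parameter
    W = w_line / (E_prime * R)            # load per unit width parameter
    H = 2.65 * U ** 0.7 * G ** 0.54 * W ** -0.13
    return H * R

# Hypothetical gear-contact numbers (SI units): entrainment speed 5 m/s,
# oil viscosity 0.05 Pa.s, pressure-viscosity coefficient 2e-8 1/Pa,
# reduced modulus 220 GPa, equivalent radius 0.02 m, load 2e5 N/m.
h = dowson_higginson_hmin(5.0, 0.05, 2e-8, 2.2e11, 0.02, 2e5)
h_fast = dowson_higginson_hmin(10.0, 0.05, 2e-8, 2.2e11, 0.02, 2e5)
```

    The strong speed exponent (0.7) and weak load exponent (-0.13) explain why the surface-temperature charts matter: film thickness responds mainly to entrainment speed and viscosity (hence temperature), not to load.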

  12. Computational Knee Ligament Modeling Using Experimentally Determined Zero-Load Lengths

    PubMed Central

    Bloemker, Katherine H; Guess, Trent M; Maletsky, Lorin; Dodd, Kevin

    2012-01-01

    This study presents a subject-specific method of determining the zero-load lengths of the cruciate and collateral ligaments in computational knee modeling. Three cadaver knees were tested in a dynamic knee simulator. The cadaver knees also underwent manual envelope-of-motion testing to find their passive range of motion in order to determine the zero-load lengths for each ligament bundle. Computational multibody knee models were created for each knee, and model kinematics were compared to experimental kinematics for a simulated walk cycle. One-dimensional non-linear spring-damper elements were used to represent cruciate and collateral ligament bundles in the knee models. This study found that knee kinematics were highly sensitive to alterations of the zero-load length. The results also suggest optimal methods for defining each of the ligament bundle zero-load lengths, regardless of the subject. These results verify the importance of the zero-load length when modeling the knee joint and verify that manual envelope-of-motion measurements can be used to determine the passive range of motion of the knee joint. It is also believed that the method described here for determining zero-load length can be used for in vitro or in vivo subject-specific computational models. PMID:22523522
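    A common way to use the zero-load length in such models is a piecewise force-strain law for each bundle: zero force when slack, a quadratic "toe" region at small strain, then linear stiffness. The sketch below uses this widely used form with hypothetical parameters; the paper's non-linear spring-damper elements may differ in detail:

```python
def ligament_force(length, zero_load_length, k=10000.0, eps_l=0.03):
    """Piecewise force-strain law often used for knee ligament bundles:
    slack bundles carry no force, a quadratic 'toe' region up to strain
    2*eps_l, then a linear region with stiffness k (units: N per unit
    strain). Strain is measured from the zero-load length."""
    eps = (length - zero_load_length) / zero_load_length
    if eps <= 0.0:
        return 0.0                       # slack: no tension
    if eps <= 2.0 * eps_l:
        return 0.25 * k * eps * eps / eps_l   # toe region
    return k * (eps - eps_l)             # linear region

l0 = 0.030   # zero-load length of 30 mm (illustrative)
```

    Shifting the zero-load length by only 1 mm moves a given bundle length from the linear region deep into the toe region and changes the computed force severalfold, consistent with the sensitivity the study reports.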

  13. Large scale cardiac modeling on the Blue Gene supercomputer.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J

    2008-01-01

    Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data in segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
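    Optimal recursive bisection is easiest to see in one dimension: repeatedly split the element range at the point that best balances the summed weights. The sketch below handles only 1-D data and power-of-two part counts (the study decomposes 3-D anatomical data), and uses the paper's second weighting scheme of 10 for tissue and 1 for non-tissue elements:

```python
def orb(weights, n_parts):
    """Optimal recursive bisection of a 1-D list of element weights:
    split the range where the two halves' total weights are closest,
    recursing until n_parts contiguous segments remain (n_parts must be
    a power of two in this sketch)."""
    if n_parts == 1:
        return [weights]
    total = sum(weights)
    best_i, best_diff = 1, float("inf")
    acc = 0
    for i in range(1, len(weights)):
        acc += weights[i - 1]
        diff = abs(acc - (total - acc))
        if diff < best_diff:
            best_diff, best_i = diff, i
    half = n_parts // 2
    return orb(weights[:best_i], half) + orb(weights[best_i:], half)

# Tissue elements weighted 10, non-tissue weighted 1.
w = [10] * 40 + [1] * 80
parts = orb(w, 4)
loads = [sum(p) for p in parts]
```

    The four parts carry identical computational loads even though they contain very different numbers of elements, mirroring the balanced decomposition reported above.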

  14. Three-dimensional turbopump flowfield analysis

    NASA Technical Reports Server (NTRS)

    Sharma, O. P.; Belford, K. A.; Ni, R. H.

    1992-01-01

    A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark quality data from two and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.

  15. Structural Design of a Horizontal-Axis Tidal Current Turbine Composite Blade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bir, G. S.; Lawson, M. J.; Li, Y.

    2011-10-01

    This paper describes the structural design of a tidal composite blade. The structural design is preceded by two steps: hydrodynamic design and determination of extreme loads. The hydrodynamic design provides the chord and twist distributions along the blade length that result in optimal performance of the tidal turbine over its lifetime. The extreme loads, i.e. the extreme flap and edgewise loads that the blade would likely encounter over its lifetime, are associated with extreme tidal flow conditions and are obtained using computational fluid dynamics (CFD) software. Given the blade external shape and the extreme loads, we use a laminate-theory-based structural design to determine the optimal layout of composite laminas such that the ultimate-strength and buckling-resistance criteria are satisfied at all points in the blade. The structural design approach allows for arbitrary specification of the chord, twist, and airfoil geometry along the blade and an arbitrary number of shear webs. In addition, certain fabrication criteria are imposed, for example, each composite laminate must be an integral multiple of its constituent ply thickness. In the present effort, the structural design uses only static extreme loads; dynamic-loads-based fatigue design will be addressed in the future. Following the blade design, we compute the distributed structural properties, i.e. flap stiffness, edgewise stiffness, torsion stiffness, mass, moments of inertia, elastic-axis offset, and center-of-mass offset along the blade. Such properties are required by hydro-elastic codes to model the tidal current turbine and to perform modal, stability, loads, and response analyses.

  16. Unsteady Thick Airfoil Aerodynamics: Experiments, Computation, and Theory

    NASA Technical Reports Server (NTRS)

    Strangfeld, C.; Rumsey, C. L.; Mueller-Vahl, H.; Greenblatt, D.; Nayeri, C. N.; Paschereit, C. O.

    2015-01-01

    An experimental, computational and theoretical investigation was carried out to study the aerodynamic loads acting on a relatively thick NACA 0018 airfoil when subjected to pitching and surging, individually and synchronously. Both pre-stall and post-stall angles of attack were considered. Experiments were carried out in a dedicated unsteady wind tunnel, with large surge amplitudes, and airfoil loads were estimated by means of unsteady surface mounted pressure measurements. Theoretical predictions were based on Theodorsen's and Isaacs' results as well as on the relatively recent generalizations of van der Wall. Both two- and three-dimensional computations were performed on structured grids employing unsteady Reynolds-averaged Navier-Stokes (URANS). For pure surging at pre-stall angles of attack, the correspondence between experiments and theory was satisfactory; this served as a validation of Isaacs' theory. Discrepancies were traced to dynamic trailing-edge separation, even at low angles of attack. Excellent correspondence was found between experiments and theory for airfoil pitching as well as combined pitching and surging; the latter appears to be the first clear validation of van der Wall's theoretical results. Although qualitatively similar to experiment at low angles of attack, two-dimensional URANS computations yielded notable errors in the unsteady load effects of pitching, surging and their synchronous combination. The main reason is believed to be that the URANS equations do not resolve wake vorticity (explicitly modeled in the theory) or the resulting rolled-up unsteady flow structures because high values of eddy viscosity tend to "smear" the wake. At post-stall angles, three-dimensional computations illustrated the importance of modeling the tunnel side walls.

  17. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for systems-on-chip designs, and makes it easy to import technology into a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed and general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.
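    Most of the reusable modules described above share one computational pattern: a small window slides over the image and a local operation is applied at each position. A software sketch of that pattern follows (a 3x3 weighted sum, here with a Sobel kernel as one example module); the FPGA modules pipeline the same arithmetic in hardware:

```python
def window_op(image, kernel):
    """Generic 3x3 window-based operation: slide the window over the
    image and emit the weighted sum at each interior pixel (borders are
    left at zero in this sketch)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * image[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# Sobel horizontal-gradient kernel as one example "module".
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]

img = [[0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10]]
edges = window_op(img, SOBEL_X)
```

    Swapping the kernel (or the inner reduction, e.g. a min/max for morphology) yields the other common window operations, which is why a single generic interface covers the whole module library.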

  18. Effect of elasticity on stress distribution in CAD/CAM dental crowns: Glass ceramic vs. polymer-matrix composite.

    PubMed

    Duan, Yuanyuan; Griggs, Jason A

    2015-06-01

    Further investigations are required to evaluate the mechanical behaviour of newly developed polymer-matrix composite (PMC) blocks for computer-aided design/computer-aided manufacturing (CAD/CAM) applications. The purpose of this study was to investigate the effect of elasticity on the stress distribution in dental crowns made of glass-ceramic and PMC materials using finite element (FE) analysis. Elastic constants of two materials were determined by ultrasonic pulse velocity using an acoustic thickness gauge. Three-dimensional solid models of a full-coverage dental crown on a first mandibular molar were generated based on X-ray micro-CT scanning images. A variety of load case-material property combinations were simulated and conducted using FE analysis. The first principal stress distribution in the crown and luting agent was plotted and analyzed. The glass-ceramic crown had stress concentrations on the occlusal surface surrounding the area of loading and the cemented surface underneath the area of loading, while the PMC crown had only stress concentration on the occlusal surface. The PMC crown had lower maximum stress than the glass-ceramic crown in all load cases, but this difference was not substantial when the loading had a lateral component. Eccentric loading did not substantially increase the maximum stress in the prosthesis. Both materials are resistant to fracture with physiological occlusal load. The PMC crown had lower maximum stress than the glass-ceramic crown, but the effect of a lateral loading component was more pronounced for a PMC crown than for a glass-ceramic crown. Knowledge of the stress distribution in dental crowns with low modulus of elasticity will aid clinicians in planning treatments that include such restorations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Implementation of a web-based, interactive polytrauma tutorial in computed tomography for radiology residents: how we do it.

    PubMed

    Schlorhaufer, C; Behrends, M; Diekhaus, G; Keberle, M; Weidemann, J

    2012-12-01

Due to the time factor in polytraumatized patients, all relevant pathologies in a polytrauma computed tomography (CT) scan have to be read and communicated very quickly. During radiology residency, acquisition of effective reading schemes based on typical polytrauma pathologies is very important. Thus, an online tutorial for the structured diagnosis of polytrauma CT was developed. A didactic concept was developed based on current multimedia theories such as cognitive load theory. The learning management system ILIAS was chosen as the web environment. CT data sets were converted into online scrollable QuickTime movies. Audiovisual tutorial movies with guided image analyses by a consultant radiologist were recorded. The polytrauma tutorial consists of chapterized text content and embedded interactive scrollable CT data sets. Selected trauma pathologies are demonstrated to the user by guiding tutor movies. Basic reading schemes are communicated with the help of detailed commented movies of normal data sets. Common and important pathologies can be explored in a self-directed manner. Ambitious didactic concepts can be supported by a web-based application on the basis of cognitive load theory and currently available software tools. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  20. The solution of the Elrod algorithm for a dynamically loaded journal bearing using multigrid techniques

    NASA Technical Reports Server (NTRS)

    Woods, Claudia M.; Brewe, David E.

    1988-01-01

    A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.
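The multigrid iteration at the heart of this solver can be illustrated on a much simpler model problem. The sketch below is illustrative only: it applies a basic two-grid V-cycle (pre-smoothing, coarse-grid correction, post-smoothing) to a 1D Poisson problem, not to the Elrod/Reynolds cavitation equations.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=0.67):
    # Weighted-Jacobi smoothing for -u'' = f (Dirichlet boundaries stay fixed).
    for _ in range(sweeps):
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def coarse_solve(rc, hc):
    # The coarse problem is tiny, so solve it directly.
    m = len(rc) - 2
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / hc**2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def two_grid_cycle(u, f, h):
    u = smooth(u, f, h)                                   # pre-smooth
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    ec = coarse_solve(r[::2].copy(), 2 * h)               # restrict + solve
    fine = np.arange(len(u))
    u += np.interp(fine, fine[::2], ec)                   # prolongate + correct
    return smooth(u, f, h)                                # post-smooth

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                          # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid_cycle(u, f, h)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
```

The point of the coarse-grid correction is the same as in the full multigrid solver: the smoother removes high-frequency error cheaply, while the low-frequency error that smoothing barely touches is eliminated on the coarser grid.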

  1. The solution of the Elrod algorithm for a dynamically loaded journal bearing using multigrid techniques

    NASA Technical Reports Server (NTRS)

    Woods, C. M.; Brewe, D. E.

    1989-01-01

    A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.

  2. Exploiting symmetries in the modeling and analysis of tires

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Andersen, Carl M.; Tanner, John A.

    1987-01-01

A simple and efficient computational strategy for reducing both the size of a tire model and the cost of the analysis of tires in the presence of symmetry-breaking conditions (unsymmetry in the tire material, geometry, or loading) is presented. The strategy is based on approximating the unsymmetric response of the tire with a linear combination of symmetric and antisymmetric global approximation vectors (or modes). Details are presented for the three main elements of the computational strategy: use of special three-field mixed finite-element models, use of operator splitting, and substantial reduction in the number of degrees of freedom. The proposed computational strategy is applied to three quasi-symmetric tire problems: linear analysis of anisotropic tires using semianalytic finite elements, nonlinear analysis of anisotropic tires using two-dimensional shell finite elements, and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry (and their combinations) exhibited by the tire response are identified.
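The core of the strategy, approximating an unsymmetric response by symmetric and antisymmetric parts, rests on the identity u_s(x) = [u(x) + u(-x)]/2 and u_a(x) = [u(x) - u(-x)]/2. A minimal numerical sketch (illustrative only, not the paper's finite-element implementation):

```python
import numpy as np

def split_symmetry(u):
    """Split a response sampled on a grid symmetric about x = 0
    into its symmetric and antisymmetric parts."""
    u_rev = u[::-1]                  # u(-x), since the grid is symmetric
    u_sym = 0.5 * (u + u_rev)
    u_anti = 0.5 * (u - u_rev)
    return u_sym, u_anti

x = np.linspace(-1.0, 1.0, 101)
u = np.exp(x)                        # an unsymmetric response
u_sym, u_anti = split_symmetry(u)
recombined = u_sym + u_anti          # recovers u exactly
```

For u = exp(x) the parts are cosh(x) and sinh(x), which illustrates why a linear combination of symmetric and antisymmetric modes can represent a fully unsymmetric response.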

  3. Streamflow and nutrient data for the Yazoo River below Steele Bayou near Long Lake, Mississippi, 1996-2000

    USGS Publications Warehouse

    Runner, Michael S.; Turnipseed, D. Phil; Coupe, Richard H.

    2002-01-01

    Increased nutrient loading to the Gulf of Mexico from off-continent flux has been identified as contributing to the increase in the areal extent of the low dissolved-oxygen zone that develops annually off the Louisiana and Texas coast. The proximity of the Yazoo River Basin in northwestern Mississippi to the Gulf of Mexico, and the intensive agricultural activities in the basin have led to speculation that the Yazoo River Basin contributes a disproportionate amount of nitrogen and phosphorus to the Mississippi River and ultimately to the Gulf of Mexico. An empirical measurement of the flux of nitrogen and phosphorus from the Yazoo Basin has not been possible due to the hydrology of the lower Yazoo River Basin. Streamflow for the Yazoo River below Steele Bayou is affected by backwater from the Mississippi River. Flow at the gage is non-uniform and varying, with bi-directional and reverse flows possible. Streamflow was computed by using remote sensing and acoustic and conventional discharge and velocity measurement techniques. Streamflow from the Yazoo River for the 1996-2000 period accounted for 2.8 percent of the flow of the Mississippi River for the same period. Water samples from the Yazoo River were collected from February 1996 through December 2000 and were analyzed for total nitrogen, nitrate, total phosphorus, and orthophosphorus as part of the U.S. Geological Survey National Water-Quality Assessment Program. These data were used to compute annual loads of nitrogen and phosphorus discharged from the Yazoo River for the period 1996-2000. Annual loads of nitrogen and phosphorus were calculated by two methods. The first method used multivariate regression and the second method multiplied the mean annual concentration by the total annual flow. Load estimates based on the product of the mean annual concentration and the total annual flow were within the 95 percent confidence interval for the load calculated by multivariate regression in 10 of 20 cases. 
The Yazoo River loads, compared to average annual loads in the Mississippi River, indicated that the Yazoo River was contributing 1.4 percent of the total nitrogen load, 0.7 percent of the nitrate load, 3.4 percent of the total phosphorus load, and 1.6 percent of the orthophosphorus load during 1996-2000. The total nitrogen, nitrate, and orthophosphorus loads in the Yazoo River Basin were less than expected, whereas the total phosphorus load was slightly higher than expected based on discharge.
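The second estimation method is simple enough to sketch. The example below uses hypothetical daily flow and concentration values; "Method A" stands in for a daily-resolution estimate (the report's first method is actually a multivariate regression, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
days = 365
flow_m3_d = rng.uniform(1e6, 1e7, days)     # hypothetical daily flow volume, m^3/day
conc_g_m3 = rng.uniform(0.5, 3.0, days)     # hypothetical N concentration, g/m^3 (= mg/L)

# Method A: sum of daily loads (needs a concentration value for every day), tonnes/year
load_daily = np.sum(flow_m3_d * conc_g_m3) / 1e6

# Method B: mean annual concentration times total annual flow, tonnes/year
load_mean_conc = np.mean(conc_g_m3) * np.sum(flow_m3_d) / 1e6
```

When concentration and flow are weakly correlated the two estimates agree closely; strong flow-concentration correlation (common in real rivers) is what drives them apart and motivates the regression approach.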

  4. Collectively loading programs in a multiple program multiple data environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.

Techniques are disclosed for loading programs efficiently in a parallel computing system. In one embodiment, nodes of the parallel computing system receive a load description file which indicates, for each program of a multiple program multiple data (MPMD) job, the nodes which are to load the program. The nodes determine, using collective operations, the total number of programs to load and the number of programs to load in parallel. The nodes further generate a class route for each program to be loaded in parallel, where the class route generated for a particular program includes only those nodes on which the program needs to be loaded. For each class route, a node is selected using a collective operation to be a load leader, which accesses a file system to load the program associated with the class route and broadcasts the program via the class route to the other nodes which require the program.
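The class-route scheme can be sketched in plain Python. This is a hypothetical, single-process simulation with made-up program names; a real implementation would use the machine's collective operations (reductions and broadcasts) rather than ordinary loops:

```python
from collections import defaultdict

# Hypothetical load description: program name -> node ranks that need it.
load_description = {
    "solver":  [0, 1, 2, 3],
    "monitor": [4, 5],
    "logger":  [6, 7, 8],
}

def build_class_routes(load_description):
    """For each program, form a 'class route' (the set of nodes needing the
    program) and elect the lowest-ranked member as load leader."""
    routes = {}
    for program, nodes in load_description.items():
        leader = min(nodes)                 # stand-in for a collective min-reduction
        routes[program] = {"members": sorted(nodes), "leader": leader}
    return routes

def simulate_load(routes):
    # The leader reads the program image from the file system once; every
    # other member receives it via a broadcast along the class route.
    received = defaultdict(list)
    for program, route in routes.items():
        image = f"<binary:{program}>"       # the leader's single file-system read
        for node in route["members"]:
            received[node].append(image)    # broadcast delivery
    return received

routes = build_class_routes(load_description)
received = simulate_load(routes)
```

The design point being illustrated: only one node per program touches the file system, so file-system contention stays constant as the node count grows.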

  5. Derivation of improved load transformation matrices for launchers-spacecraft coupled analysis, and direct computation of margins of safety

    NASA Technical Reports Server (NTRS)

    Klein, M.; Reynolds, J.; Ricks, E.

    1989-01-01

Load and stress recovery from transient dynamic studies are improved using an extended acceleration vector in the modal acceleration technique applied to structural analysis. An extension of the normal LTM (load transformation matrix) stress recovery to automatically compute margins of safety is presented, with an application to the Hubble Space Telescope.

  6. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, Taylor; Guo, Yi; Veers, Paul

Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally expensive simulations in programs such as FAST, a parameterized fatigue load spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is used in this paper only to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations, in addition to extreme loads, can be brought into a systems engineering optimization.

  7. Faded-example as a Tool to Acquire and Automate Mathematics Knowledge

    NASA Astrophysics Data System (ADS)

    Retnowati, E.

    2017-04-01

Knowledge acquisition and automation are accomplished by the students themselves. The teacher acts as a facilitator by creating mathematics tasks that help students build knowledge efficiently and effectively. The cognitive load imposed by the learning material presented by teachers should be considered a critical factor. While intrinsic cognitive load relates to the degree of complexity of the learning material, extraneous cognitive load is caused directly by how the material is presented. Strategies for presenting learning material in computational domains such as mathematics are the worked example (a fully guided task) and problem solving (a discovery task with no guidance). Empirical evidence shows that learning by problem solving may impose high extraneous cognitive load on students with limited prior knowledge, while learning from worked examples may impose high extraneous cognitive load on students who have already mastered the knowledge base. An alternative is the faded example, a partly completed task. Learning from faded examples can help students who have already acquired some knowledge of the to-be-learned material but still need practice to automate it further. This instructional strategy provides a smooth transition from fully guided learning to independent problem solving. Designs of faded examples for learning trigonometry are discussed.

  8. Finite-element computer program for axisymmetric loading situations where components may have a relative interference fit

    NASA Technical Reports Server (NTRS)

    Taylor, C. M.

    1977-01-01

    A finite element computer program which enables the analysis of distortions and stresses occurring in compounds having a relative interference is presented. The program is limited to situations in which the loading is axisymmetric. Loads arising from the interference fit(s) and external, inertial, and thermal loadings are accommodated. The components comprise several different homogeneous isotropic materials whose properties may be a function of temperature. An example illustrating the data input and program output is given.

  9. Foundational Aero Research for Development of Efficient Power Turbines With 50% Variable-speed Capability

    DTIC Science & Technology

    2011-02-01

[Only fragmentary front matter survives in this record. Recoverable text: loss increases, as expected, with increased loading (or reduced axial-chord-to-pitch ratio for a given turning), in addition to minimizing design-point loss. Recoverable figure captions: Figure 2, computed loading diagrams and Reynolds lapse rates for aft-loaded (L1A) and mid-loaded (L1M) LPT blading (Clark et al., 2009; reference 22 in Welch, 2010), accomplishing the same 95° flow turning at high aerodynamic loading (Z = 1.34); Figure 3, computed 2-D (caption truncated).]

  10. Analysis and compensation of an aircraft simulator control loading system with compliant linkage. [using hydraulic equipment

    NASA Technical Reports Server (NTRS)

    Johnson, P. R.; Bardusch, R. E.

    1974-01-01

A hydraulic control loading system for aircraft simulation was analyzed to find the causes of undesirable low frequency oscillations and loading effects in the output. The hypothesis of mechanical compliance in the control linkage was substantiated by comparing the behavior of a mathematical model of the system with previously obtained experimental data. A compensation scheme based on minimizing the integral of the squared difference between desired and actual output was shown to be effective in reducing the undesirable output effects. The structure of the proposed compensation was computed by use of a dynamic programming algorithm and a linear state space model of the fixed elements in the system.

  11. Finite element modeling and analysis of tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Andersen, C. M.

    1983-01-01

    Predicting the response of tires under various loading conditions using finite element technology is addressed. Some of the recent advances in finite element technology which have high potential for application to tire modeling problems are reviewed. The analysis and modeling needs for tires are identified. Reduction methods for large-scale nonlinear analysis, with particular emphasis on treatment of combined loads, displacement-dependent and nonconservative loadings; development of simple and efficient mixed finite element models for shell analysis, identification of equivalent mixed and purely displacement models, and determination of the advantages of using mixed models; and effective computational models for large-rotation nonlinear problems, based on a total Lagrangian description of the deformation are included.

  12. Fracture resistance of implant-supported monolithic crowns cemented to zirconia hybrid-abutments: zirconia-based crowns vs. lithium disilicate crowns

    PubMed Central

    Nawafleh, Noor; Öchsner, Andreas; George, Roy

    2018-01-01

    PURPOSE The aim of this in vitro study was to investigate the fracture resistance under chewing simulation of implant-supported posterior restorations (crowns cemented to hybrid-abutments) made of different all-ceramic materials. MATERIALS AND METHODS Monolithic zirconia (MZr) and monolithic lithium disilicate (MLD) crowns for mandibular first molar were fabricated using computer-aided design/computer-aided manufacturing technology and then cemented to zirconia hybrid-abutments (Ti-based). Each group was divided into two subgroups (n=10): (A) control group, crowns were subjected to single load to fracture; (B) test group, crowns underwent chewing simulation using multiple loads for 1.2 million cycles at 1.2 Hz with simultaneous thermocycling between 5℃ and 55℃. Data was statistically analyzed with one-way ANOVA and a Post-Hoc test. RESULTS All tested crowns survived chewing simulation resulting in 100% survival rate. However, wear facets were observed on all the crowns at the occlusal contact point. Fracture load of monolithic lithium disilicate crowns was statistically significantly lower than that of monolithic zirconia crowns. Also, fracture load was significantly reduced in both of the all-ceramic materials after exposure to chewing simulation and thermocycling. Crowns of all test groups exhibited cohesive fracture within the monolithic crown structure only, and no abutment fractures or screw loosening were observed. CONCLUSION When supported by implants, monolithic zirconia restorations cemented to hybrid abutments withstand masticatory forces. Also, fatigue loading accompanied by simultaneous thermocycling significantly reduces the strength of both of the all-ceramic materials. Moreover, further research is needed to define potentials, limits, and long-term serviceability of the materials and hybrid abutments. PMID:29503716

  13. The Use of the Direct Optimized Probabilistic Calculation Method in Design of Bolt Reinforcement for Underground and Mining Workings

    PubMed Central

    Krejsa, Martin; Janas, Petr; Yilmaz, Işık; Marschalko, Marian; Bouchal, Tomas

    2013-01-01

The load-carrying system of each construction should fulfill several conditions which represent reliability criteria in the assessment procedure. It is the theory of structural reliability that determines the probability of a construction keeping its required properties. Using this theory, it is possible to apply probabilistic computations based on probability theory and mathematical statistics. These methods have become more and more popular; they are used, in particular, in designs of load-carrying structures with a required level of reliability when at least some input variables in the design are random. The objective of this paper is to indicate the current scope which might be covered by the new method, Direct Optimized Probabilistic Calculation (DOProC), in assessments of the reliability of load-carrying structures. DOProC uses a purely numerical approach without any simulation techniques. This provides more accurate solutions to probabilistic tasks and, in some cases, considerably faster completion of computations. DOProC can be used to solve a number of probabilistic computations efficiently. A very good sphere of application for DOProC is the assessment of bolt reinforcement in underground and mining workings. For these purposes, a special software application, "Anchor," has been developed. PMID:23935412

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuller, L.C.

    The ORCENT-II digital computer program will perform calculations at valves-wide-open design conditions, maximum guaranteed rating conditions, and an approximation of part-load conditions for steam turbine cycles supplied with throttle steam characteristic of contemporary light-water reactors. Turbine performance calculations are based on a method published by the General Electric Company. Output includes all information normally shown on a turbine-cycle heat balance diagram. The program is written in FORTRAN IV for the IBM System 360 digital computers at the Oak Ridge National Laboratory.

  15. Ignition Prediction of Pressed HMX based on Hotspot Analysis Under Shock Pulse Loading

    NASA Astrophysics Data System (ADS)

    Kim, Seokpum; Miller, Christopher; Horie, Yasuyuki; Molek, Christopher; Welle, Eric; Zhou, Min

The ignition behavior of pressed HMX under shock pulse loading with a flyer is analyzed using a cohesive finite element method (CFEM) which accounts for large deformation, microcracking, frictional heating, and thermal conduction. The simulations account for the controlled loading of thin-flyer shock experiments with flyer velocities between 1.7 and 4.0 km/s. The study focuses on the computational prediction of the ignition threshold using the James criterion, which involves the loading intensity and the energy imparted to the material. The predicted thresholds are in good agreement with measurements from shock experiments. In particular, it is found that grain size significantly affects the ignition sensitivity of the materials, with smaller sizes leading to lower energy thresholds required for ignition. In addition, significant stress attenuation is observed in high-intensity pulse loading as compared to low-intensity pulse loading, which affects the density of the hotspot distribution. The microstructure-performance relations obtained can be used to design explosives with tailored attributes and safety envelopes.

  16. Novel models and algorithms of load balancing for variable-structured collaborative simulation under HLA/RTI

    NASA Astrophysics Data System (ADS)

    Yue, Yingchao; Fan, Wenhui; Xiao, Tianyuan; Ma, Cheng

    2013-07-01

High Level Architecture (HLA) is the open standard in the collaborative simulation field. Scholars have paid close attention to theoretical research on, and engineering applications of, collaborative simulation based on HLA/RTI, which extends HLA in aspects such as functionality and efficiency. However, study of the load balancing problem in HLA collaborative simulation has been insufficient. Without load balancing, collaborative simulation under HLA/RTI may suffer performance degradation or even fatal errors. In this paper, load balancing is divided into static and dynamic problems. For static load balancing, a multi-objective model is established and the randomness of model parameters is taken into consideration, which makes the model more credible. A Monte Carlo based optimization algorithm (MCOA) is devised to achieve static load balance. For dynamic load balancing, a new type of dynamic load balancing problem is formulated for variable-structured collaborative simulation under HLA/RTI. To minimize the impact on the running collaborative simulation, an ordinal optimization based algorithm (OOA) is devised to shorten the optimization time. The two algorithms are then applied in simulation experiments under different scenarios, which demonstrate their effectiveness and efficiency. An engineering experiment on collaborative simulation under HLA/RTI of high-speed electric multiple units (EMU) is also conducted to establish the credibility of the proposed models and the utility of MCOA and OOA for practical engineering systems. The proposed research preserves compatibility with traditional HLA, enhances the ability to assign simulation loads onto computing units both statically and dynamically, improves the performance of collaborative simulation systems, and makes full use of hardware resources.
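A Monte Carlo search for a static assignment, in the spirit of MCOA though much simplified (the paper's multi-objective model with stochastic parameters is not reproduced here), might look like this: sample random assignments of federate loads to computing units and keep the one with the smallest maximum unit load.

```python
import random

def mc_balance(task_loads, n_units, n_trials=2000, seed=1):
    """Monte Carlo search for a static assignment of simulation federates
    to computing units that minimizes the most-loaded unit (the makespan)."""
    rng = random.Random(seed)
    best_assign, best_makespan = None, float("inf")
    for _ in range(n_trials):
        assign = [rng.randrange(n_units) for _ in task_loads]
        unit_load = [0.0] * n_units
        for load, unit in zip(task_loads, assign):
            unit_load[unit] += load
        makespan = max(unit_load)
        if makespan < best_makespan:
            best_assign, best_makespan = assign, makespan
    return best_assign, best_makespan

task_loads = [8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]   # hypothetical federate loads
assign, makespan = mc_balance(task_loads, n_units=3)
```

The makespan can never drop below the mean unit load (here 36/3 = 12); randomized search tends to approach that bound quickly for small instances, which is the appeal of Monte Carlo methods when the objective is cheap to evaluate.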

  17. A computational parametric study on edge loading in ceramic-on-ceramic total hip joint replacements.

    PubMed

    Liu, Feng; Feng, Li; Wang, Junyuan

    2018-07-01

Edge loading in ceramic-on-ceramic total hip joint replacement is an adverse condition that occurs as the result of direct contact between the head and the cup rim. It has been associated with translational mismatch in the centres of rotation of the cup and head, and found to cause severe wear and early failure of the implants. Edge loading has been considered in particular in relation to dynamic separation of the cup and head centres during a gait cycle. Research has been carried out both experimentally and computationally to understand the mechanism, including the influence of bearing component positioning on the occurrence and severity of edge loading. However, it is experimentally difficult to measure both the load magnitude and the duration of edge loading, as it occurs as a short impact within the tight space of hip joints. Computationally, a dynamic contact model, for example one developed using the MSC ADAMS software for multi-body dynamics simulation, can be particularly useful for calculating the loads and characterising the edge loading. The aim of the present study was to further develop the computational model, and to improve the predictions of contact force and the understanding of the mechanism, in order to provide guidance on design and surgical factors to avoid or reduce edge loading and wear. The results have shown that edge loading can be avoided for a low range of translational mismatch in the centres of rotation of the cup and head during gait, at the level of approximately 1.0 mm for a cup at 45° inclination; that keeping a correct cup inclination of 45° is important for reducing edge loading severity; and that edge loading can be avoided for a certain range of translational mismatch of the cup and head centres with an increased swing phase load. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. An approximate solution to improve computational efficiency of impedance-type payload load prediction

    NASA Technical Reports Server (NTRS)

    White, C. W.

    1981-01-01

    The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.

  19. A computer program for simulating salinity loads in streams

    USGS Publications Warehouse

    Glover, Kent C.

    1978-01-01

A FORTRAN IV program that simulates salinity loads in streams is described. Daily values of stream discharge in cubic feet per second, or stream discharge and specific conductance in micromhos, are used to estimate daily loads in tons by one of five available methods. The loads are then summarized by computing either total and mean monthly loads or various statistics for each calendar day. Results are output in tabular and, if requested, punch card format. User selection of appropriate methods for estimating and summarizing daily loads is provided through the coding of program control cards. The program is designed to interface directly with data retrieved from the U.S. Geological Survey WATSTORE Daily Values File. (Woodard-USGS)
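The flavor of such a computation can be sketched as follows. The conductance-to-concentration calibration below is hypothetical (real sites use a fitted relation), while the factor 0.0027 is the usual unit conversion from ft³/s times mg/L to tons per day:

```python
def dissolved_solids_mg_l(specific_conductance_umho):
    # Hypothetical site calibration: dissolved solids ~ 0.6 x specific conductance.
    return 0.6 * specific_conductance_umho

def daily_load_tons(discharge_cfs, conc_mg_l):
    # tons/day = 0.0027 x Q (ft^3/s) x C (mg/L): the standard unit conversion.
    return 0.0027 * discharge_cfs * conc_mg_l

# A few days of (discharge, specific conductance) daily values (made up):
records = [(1200.0, 850.0), (1100.0, 900.0), (1500.0, 700.0)]
daily = [daily_load_tons(q, dissolved_solids_mg_l(sc)) for q, sc in records]
monthly_total = sum(daily)
monthly_mean = monthly_total / len(daily)
```

Summaries like `monthly_total` and `monthly_mean` correspond to the program's "total and mean monthly loads" output option.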

  20. Neural simulations on multi-core architectures.

    PubMed

    Eichner, Hubert; Klug, Tobias; Borst, Alexander

    2009-01-01

    Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e. user-transparent load balancing.

  1. Neural Simulations on Multi-Core Architectures

    PubMed Central

    Eichner, Hubert; Klug, Tobias; Borst, Alexander

    2009-01-01

    Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e. user-transparent load balancing. PMID:19636393

  2. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities, in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment within a building-block approach to validation, where cold, non-reacting test data were used first, followed by more complex reacting base flow validation.

  3. Energy-based fatigue model for shape memory alloys including thermomechanical coupling

    NASA Astrophysics Data System (ADS)

    Zhang, Yahui; Zhu, Jihong; Moumni, Ziad; Van Herpen, Alain; Zhang, Weihong

    2016-03-01

    This paper is aimed at developing a low cycle fatigue criterion for pseudoelastic shape memory alloys to take into account thermomechanical coupling. To this end, fatigue tests are carried out at different loading rates under strain control at room temperature using NiTi wires. Temperature distribution on the specimen is measured using a high speed thermal camera. Specimens are tested to failure and fatigue lifetimes of specimens are measured. Test results show that the fatigue lifetime is greatly influenced by the loading rate: as the strain rate increases, the fatigue lifetime decreases. Furthermore, it is shown that the fatigue cracks initiate when the stored energy inside the material reaches a critical value. An energy-based fatigue criterion is thus proposed as a function of the irreversible hysteresis energy of the stabilized cycle and the loading rate. Fatigue life is calculated using the proposed model. The experimental and computational results compare well.
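The irreversible hysteresis energy of a stabilized cycle is the area enclosed by the stress-strain loop, W = ∮ σ dε. A sketch on a synthetic elliptical loop (illustrative only; the paper's fatigue criterion also involves the loading rate, which is not modeled here):

```python
import numpy as np

def hysteresis_energy(strain, stress):
    """Energy dissipated over one closed cycle: the loop area W = integral of
    sigma d(epsilon) around the stress-strain loop (trapezoidal rule)."""
    eps = np.append(strain, strain[0])   # close the loop
    sig = np.append(stress, stress[0])
    return float(np.sum(0.5 * (sig[1:] + sig[:-1]) * np.diff(eps)))

# Synthetic loop: stress leads strain by a phase angle phi (viscous-like lag).
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
a, b, phi = 0.01, 400.0, 0.3             # strain amplitude, stress amplitude (MPa), phase
strain = a * np.sin(t)
stress = b * np.sin(t + phi)
W = hysteresis_energy(strain, stress)    # with sigma in MPa, W is in MJ/m^3
```

For this elliptical loop the enclosed area is pi·a·b·sin(phi) analytically, which makes the numerical routine easy to check before applying it to measured stabilized cycles.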

  4. Load monitoring of aerospace structures utilizing micro-electro-mechanical systems for static and quasi-static loading conditions

    NASA Astrophysics Data System (ADS)

    Martinez, M.; Rocha, B.; Li, M.; Shi, G.; Beltempo, A.; Rutledge, R.; Yanishevsky, M.

    2012-11-01

    The National Research Council Canada (NRC) has worked on the development of structural health monitoring (SHM) test platforms for assessing the performance of sensor systems for load monitoring applications. The first SHM platform consists of a 5.5 m cantilever aluminum beam that provides an optimal scenario for evaluating the ability of a load monitoring system to measure bending, torsion and shear loads. The second SHM platform contains an added level of structural complexity, consisting of aluminum skins with bonded/riveted stringers, typical of an aircraft lower wing structure. These two load monitoring platforms are well characterized and documented, providing loading conditions similar to those encountered during service. In this study, a micro-electro-mechanical system (MEMS) for acquiring data from triads of gyroscopes, accelerometers and magnetometers is described. The system was used to compute changes in angles at discrete stations along the platforms. The angles obtained from the MEMS were used to compute a second-, third-, or fourth-order polynomial surface from which displacements at every point could be computed. The use of a new Kalman filter was evaluated for angle estimation, from which displacements in the structure were computed. The outputs of the newly developed algorithms were then compared to the displacements obtained from the linear variable displacement transducers connected to the platforms. The displacement curves were subsequently post-processed, either analytically or with the help of a finite element model of the structure, to estimate strains and loads. The estimated strains were compared with baseline strain gauge instrumentation installed on the platforms. This new approach for load monitoring was able to provide accurate estimates of applied strains and shear loads.
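    The angle-to-displacement step described above can be sketched as follows. This is an illustrative reconstruction, not the NRC algorithm: the polynomial order, the zero-deflection boundary condition at the cantilever root, and the synthetic tip-loaded-cantilever data are all assumptions.

```python
import numpy as np

def deflection_coeffs(x, theta, order=3):
    """Fit a polynomial to slope (angle) measurements along a beam and
    integrate it to a deflection polynomial, forcing zero deflection
    at x = 0 (the cantilever root)."""
    slope = np.polyfit(x, theta, order)    # slope polynomial dw/dx
    defl = np.polyint(slope)               # antiderivative w(x) + C
    defl[-1] -= np.polyval(defl, 0.0)      # enforce w(0) = 0
    return defl

# Synthetic check: tip-loaded cantilever, theta(x) = (P/EI)*(L*x - x^2/2)
L, EI, P = 5.5, 2.0e6, 1.0e3
x = np.linspace(0.0, L, 8)                 # 8 sensor stations
theta = P / EI * (L * x - x ** 2 / 2)
w = np.polyval(deflection_coeffs(x, theta, order=2), x)
w_exact = P / EI * (L * x ** 2 / 2 - x ** 3 / 6)
```

    Because the synthetic slope data are exactly quadratic, the fitted and integrated curve reproduces the analytical cantilever deflection.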

  5. A computer software system for the generation of global ocean tides including self-gravitation and crustal loading effects

    NASA Technical Reports Server (NTRS)

    Estes, R. H.

    1977-01-01

    A computer software system is described which computes global numerical solutions of the integro-differential Laplace tidal equations, including dissipation terms and ocean loading and self-gravitation effects, for arbitrary diurnal and semidiurnal tidal constituents. The integration algorithm features a successive approximation scheme for the integro-differential system, with forward differences in the time variable and central differences in the spatial variables. Solutions for the M2, S2, N2, K2, K1, O1, P1 tidal constituents neglecting the effects of ocean loading and self-gravitation, and a converged M2 solution including ocean loading and self-gravitation effects, are presented in the form of cotidal and corange maps.
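    As a toy illustration of the forward-time, centered-space differencing mentioned above (not the spherical Laplace tidal equations themselves), consider linearized 1D shallow-water equations with a linear friction (dissipation) term on a periodic domain; all parameter values are illustrative:

```python
import numpy as np

g, H, r = 9.81, 4000.0, 1.0e-4        # gravity, mean depth, linear friction
nx, dx, dt, nsteps = 100, 5.0e4, 10.0, 200

x = np.arange(nx) * dx
eta = 0.1 * np.sin(2 * np.pi * x / (nx * dx))  # initial surface elevation (m)
u = np.zeros(nx)                               # depth-averaged velocity

def ddx(f):                                    # central difference, periodic
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

for _ in range(nsteps):                        # forward time, centered space
    u_new = u + dt * (-g * ddx(eta) - r * u)   # momentum with friction
    eta_new = eta + dt * (-H * ddx(u))         # continuity
    u, eta = u_new, eta_new
```

    The time step keeps the Courant number well below one (wave speed sqrt(gH) ≈ 198 m/s), so the elevation field stays bounded over the simulated interval.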

  6. Estimation of Local Bone Loads for the Volume of Interest.

    PubMed

    Kim, Jung Jin; Kim, Youkyung; Jang, In Gwun

    2016-07-01

    Computational bone remodeling simulations have recently received significant attention with the aid of state-of-the-art high-resolution imaging modalities. They have been performed using localized finite element (FE) models rather than full FE models due to the excessive computational costs of full FE models. However, these localized bone remodeling simulations remain to be investigated in more depth. In particular, applying simplified loading conditions (e.g., uniform and unidirectional loads) to localized FE models has a severe limitation for reliable subject-specific assessment. In order to effectively determine the physiological local bone loads for the volume of interest (VOI), this paper proposes a novel method of estimating the local loads when the global musculoskeletal loads are given. The proposed method is verified for three VOI in a proximal femur in terms of force equilibrium, displacement field, and strain energy density (SED) distribution. The effect of the global load deviation on the local load estimation is also investigated by perturbing a hip joint contact force (HCF) in the femoral head. Deviation in force magnitude produces the greatest absolute changes in the SED distribution, whereas angular deviation perpendicular to the HCF produces the greatest relative change. With further in vivo force measurements and high-resolution clinical imaging modalities, the proposed method will contribute to the development of reliable patient-specific localized FE models, which can provide enhanced computational efficiency for iterative computing processes such as bone remodeling simulations.

  7. Proceedings of the 1981 Army Numerical Analysis and Computers Conference, held U. S. Army Missile Command, Redstone Arsenal, Alabama, 26-27 February 1981

    DTIC Science & Technology

    1981-08-01

    [Abstract garbled in the source scan. The recoverable fragments discuss the benefits of subincrementing for approximations, and describe Data Acquisition Systems (DACS) and a Data Analysis System (DAN), the DACS being microprocessor-based recording devices with software control.]

  8. Nonlinear analysis of a shock-loaded membrane.

    NASA Technical Reports Server (NTRS)

    Madden, R.; Remington, P. J.

    1973-01-01

    Results from a computer method for analyzing the unsteady interaction of a fluid stream and a flat circular elastic membrane are presented. The loading on the membrane is assumed to be caused by the firing of a shock tube. The fluid pressures and velocities are determined from a scheme based on the numerical method of characteristics, and the membrane is analyzed using exact relations for membrane strain. The interactive solution is found to give peak stresses 40% lower than a solution which assumes a pressure invariant in space and time.

  9. Recommendations on Model Fidelity for Wind Turbine Gearbox Simulations; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, J.; Lacava, W.; Austin, J.

    2015-02-01

    This work investigates the minimum level of fidelity required to accurately simulate wind turbine gearboxes using state-of-the-art design tools. Excessive model fidelity, including drivetrain complexity, gearbox complexity, excitation sources, and imperfections, significantly increases computational time but may not provide a commensurate increase in the value of the results. Essential design parameters are evaluated, including the planetary load-sharing factor, gear tooth load distribution, and sun orbit motion. Based on the sensitivity study results, recommendations for the minimum model fidelities are provided.

  10. Wash load and bed-material load transport in the Yellow River

    USGS Publications Warehouse

    Yang, C.T.; Simoes, F.J.M.

    2005-01-01

    It has been the conventional assumption that wash load is supply limited and is only indirectly related to the hydraulics of a river. Hydraulic engineers also assumed that bed-material load concentration is independent of wash load concentration. This paper provides a detailed analysis of the Yellow River sediment transport data to determine whether the above assumptions are true and whether wash load concentration can be computed from the original unit stream power formula and the modified unit stream power formula for sediment-laden flows. A systematic and thorough analysis of 1,160 sets of data collected from 9 gauging stations along the Middle and Lower Yellow River confirmed that the method suggested by the conjunctive use of the two formulas can be used to compute wash load, bed-material load, and total load in the Yellow River with accuracy. Journal of Hydraulic Engineering © ASCE.
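    The general structure of a unit stream power relation can be sketched as below: sediment concentration is a power function of the excess unit stream power (velocity times slope, normalized by the fall velocity). The coefficients I and J here are hypothetical placeholders, not Yang's published values; only the form of the relation is shown.

```python
import math

def unit_stream_power_concentration(V, S, omega, Vcr, I, J):
    """Generic form of a unit stream power relation:
    log10(C) = I + J * log10(V*S/omega - Vcr*S/omega),
    where V*S is the unit stream power, omega the sediment fall
    velocity, Vcr the critical velocity, and I, J fitted coefficients."""
    usp_excess = (V * S - Vcr * S) / omega
    if usp_excess <= 0:
        return 0.0                      # below critical: no transport
    return 10.0 ** (I + J * math.log10(usp_excess))

# Hypothetical numbers, for illustration only (C in ppm by weight)
C = unit_stream_power_concentration(V=2.0, S=1e-4, omega=0.02,
                                    Vcr=0.4, I=5.0, J=1.5)
```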

  11. Computer simulation of the effects of shoe cushioning on internal and external loading during running impacts.

    PubMed

    Miller, Ross H; Hamill, Joseph

    2009-08-01

    Biomechanical aspects of running injuries are often inferred from external loading measurements. However, previous research has suggested that relationships between external loading and potential injury-inducing internal loads can be complex and nonintuitive. Further, the loading response to training interventions can vary widely between subjects. In this study, we use a subject-specific computer simulation approach to estimate internal and external loading of the distal tibia during the impact phase for two runners when running in shoes with different midsole cushioning parameters. The results suggest that: (1) changes in tibial loading induced by footwear are not reflected by changes in ground reaction force (GRF) magnitudes; (2) the GRF loading rate is a better surrogate measure of tibial loading and stress fracture risk than the GRF magnitude; and (3) averaging results across groups may potentially mask differential responses to training interventions between individuals.
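    The GRF loading-rate surrogate mentioned in point (2) is typically computed as the peak slope of the vertical GRF during early impact. A sketch with a synthetic force trace; the analysis window bounds and the ramp shape are assumptions, not the study's data:

```python
import numpy as np

def peak_loading_rate(t, grf, window=(0.02, 0.08)):
    """Peak slope of the vertical ground reaction force (BW/s) within
    an early-impact window, a common surrogate for impact severity."""
    rate = np.gradient(grf, t)                   # dGRF/dt at each sample
    mask = (t >= window[0]) & (t <= window[1])
    return rate[mask].max()

# Synthetic impact transient: linear ramp to a 1.6 BW impact peak at 40 ms
t = np.linspace(0.0, 0.1, 1001)
grf = 1.6 * np.clip(t / 0.04, 0.0, 1.0)
lr = peak_loading_rate(t, grf)                   # slope of the ramp: 40 BW/s
```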

  12. Computer program for buckling loads of orthotropic laminated stiffened panels subjected to biaxial in-plane loads (BUCLASP 2)

    NASA Technical Reports Server (NTRS)

    Viswanathan, A. V.; Tamekuni, M.

    1974-01-01

    General-purpose program performs exact instability analyses for structures such as unidirectionally-stiffened, rectangular composite panels. Program was written in FORTRAN IV and COMPASS for CDC-series computers.

  13. In Vitro Analysis of the Fracture Resistance of CAD/CAM Denture Base Resins.

    PubMed

    Steinmassl, Otto; Offermanns, Vincent; Stöckl, Wolfgang; Dumfahrt, Herbert; Grunert, Ingrid; Steinmassl, Patricia-Anca

    2018-03-08

    Computer-aided design and computer-aided manufacturing (CAD/CAM) denture base manufacturers claim to produce their resin pucks under high heat and pressure. Therefore, CAD/CAM dentures are assumed to have enhanced mechanical properties and, as a result, are often produced with lower denture base thicknesses than conventional, manually fabricated dentures. The aim of this study was to investigate if commercially available CAD/CAM denture base resins have more favourable mechanical properties than conventionally processed denture base resins. For this purpose, a series of three-point bending tests conforming to ISO specifications were performed on a total of 80 standardised, rectangular CAD/CAM denture base resin specimens from five different manufacturers (AvaDent, Baltic Denture System, Vita VIONIC, Whole You Nexteeth, and Wieland Digital Dentures). A heat-polymerising resin and an autopolymerising resin served as the control groups. The breaking load, fracture toughness, and the elastic modulus were assessed. Additionally, the fracture surface roughness and texture were investigated. Only one CAD/CAM resin showed a significantly increased breaking load. Two CAD/CAM resins had a significantly higher fracture toughness than the control groups, and all CAD/CAM resins had higher elastic moduli than the controls. Our results indicate that CAD/CAM denture base resins do not generally have better mechanical properties than manually processed resins. Therefore, the lower minimum denture base thicknesses should be regarded with some caution.
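    For context, the quantities reported (breaking load, elastic modulus) follow from the standard three-point-bend beam formulas; the specimen dimensions and loads below are illustrative, not the study's measured data:

```python
def flexural_strength(F, L, b, h):
    """Three-point-bend flexural strength (MPa) from breaking load F (N),
    support span L, specimen width b, and thickness h (all in mm)."""
    return 3.0 * F * L / (2.0 * b * h ** 2)

def flexural_modulus(dF, dd, L, b, h):
    """Elastic (flexural) modulus (MPa) from the slope dF/dd (N/mm)
    of the initial linear part of the load-deflection curve."""
    return L ** 3 * dF / (4.0 * b * h ** 3 * dd)

# Illustrative numbers only
sigma = flexural_strength(F=90.0, L=64.0, b=10.0, h=3.3)       # ~79 MPa
E = flexural_modulus(dF=15.0, dd=1.0, L=64.0, b=10.0, h=3.3)   # ~2.7 GPa
```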

  14. In Vitro Analysis of the Fracture Resistance of CAD/CAM Denture Base Resins

    PubMed Central

    Stöckl, Wolfgang; Dumfahrt, Herbert; Grunert, Ingrid

    2018-01-01

    Computer-aided design and computer-aided manufacturing (CAD/CAM) denture base manufacturers claim to produce their resin pucks under high heat and pressure. Therefore, CAD/CAM dentures are assumed to have enhanced mechanical properties and, as a result, are often produced with lower denture base thicknesses than conventional, manually fabricated dentures. The aim of this study was to investigate if commercially available CAD/CAM denture base resins have more favourable mechanical properties than conventionally processed denture base resins. For this purpose, a series of three-point bending tests conforming to ISO specifications were performed on a total of 80 standardised, rectangular CAD/CAM denture base resin specimens from five different manufacturers (AvaDent, Baltic Denture System, Vita VIONIC, Whole You Nexteeth, and Wieland Digital Dentures). A heat-polymerising resin and an autopolymerising resin served as the control groups. The breaking load, fracture toughness, and the elastic modulus were assessed. Additionally, the fracture surface roughness and texture were investigated. Only one CAD/CAM resin showed a significantly increased breaking load. Two CAD/CAM resins had a significantly higher fracture toughness than the control groups, and all CAD/CAM resins had higher elastic moduli than the controls. Our results indicate that CAD/CAM denture base resins do not generally have better mechanical properties than manually processed resins. Therefore, the lower minimum denture base thicknesses should be regarded with some caution. PMID:29518022

  15. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
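    The flavour of a heuristic remapping step can be sketched as follows: pick a one-to-one partition-to-processor assignment that keeps as much data in place as possible, so redistribution cost is low. This greedy sketch is illustrative, not the paper's actual algorithm:

```python
def greedy_remap(overlap):
    """Greedily assign each partition to a distinct processor, preferring
    assignments that keep the most data in place.
    overlap[i][j] = amount of partition i's data already on processor j."""
    n = len(overlap)
    pairs = sorted(((overlap[i][j], i, j)
                    for i in range(n) for j in range(n)), reverse=True)
    taken_part, taken_proc, mapping = set(), set(), {}
    for _, i, j in pairs:                  # largest overlaps first
        if i not in taken_part and j not in taken_proc:
            mapping[i] = j
            taken_part.add(i)
            taken_proc.add(j)
    return mapping

overlap = [[8, 1, 0],
           [2, 1, 7],
           [0, 9, 3]]
m = greedy_remap(overlap)                  # keeps 8 + 7 + 9 cells in place
```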

  16. A heuristic approach to optimization of structural topology including self-weight

    NASA Astrophysics Data System (ADS)

    Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2018-01-01

    Topology optimization of structures under a design-dependent self-weight load is investigated in this paper. The problem deserves attention because of its significant importance in engineering practice, especially nowadays, as topology optimization is increasingly applied when designing large engineering structures, for example, bridges or the carrying systems of tall buildings. It is worth noting that well-known approaches of topology optimization which have been successfully applied to structures under fixed loads cannot be directly adapted to the case of design-dependent loads, so topology generation can be a challenge even for numerical algorithms. The paper presents the application of a simple but efficient non-gradient method to topology optimization of elastic structures under self-weight loading. The algorithm is based on the Cellular Automata concept, the application of which can produce effective solutions at low computational cost.
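    A Cellular Automata sweep of the general kind the paper builds on applies a cheap local rule cell by cell. The specific rule below (a cell whose strain-energy measure exceeds its neighbourhood average gains material, otherwise it loses some) is an illustrative assumption, not the paper's rule:

```python
import numpy as np

def ca_update(rho, energy, step=0.05, rho_min=1e-3):
    """One Cellular Automata sweep over a density grid: compare each
    cell's strain energy with the average of its four von Neumann
    neighbours (periodic wrap for simplicity) and nudge its density."""
    nbr = (np.roll(energy, 1, 0) + np.roll(energy, -1, 0) +
           np.roll(energy, 1, 1) + np.roll(energy, -1, 1)) / 4.0
    rho = np.where(energy > nbr, rho + step, rho - step)
    return np.clip(rho, rho_min, 1.0)      # keep densities physical

rho = np.full((8, 8), 0.5)                 # uniform starting design
energy = np.random.default_rng(0).random((8, 8))
rho = ca_update(rho, energy)
```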

  17. Load Variation Influences on Joint Work During Squat Exercise in Reduced Gravity

    NASA Technical Reports Server (NTRS)

    DeWitt, John K.; Fincke, Renita S.; Logan, Rachel L.; Guilliams, Mark E.; Ploutz-Snyder, Lori L.

    2011-01-01

    Resistance exercises that load the axial skeleton, such as the parallel squat, are incorporated as a critical component of a space exercise program designed to maximize the stimuli for bone remodeling and muscle loading. Astronauts on the International Space Station perform regular resistance exercise using the Advanced Resistive Exercise Device (ARED). Squat exercises on Earth entail moving a portion of the body weight plus the added bar load, whereas in microgravity the body weight is 0, so all load must be applied via the bar. Crewmembers exercising in microgravity currently add approximately 70% of their body weight to the bar load as compensation for the absence of the body weight. This level of body weight replacement (BWR) was determined by crewmember feedback and personal experience, without any quantitative data. The purpose of this evaluation was to use computational simulation to determine the level of BWR in microgravity necessary to replicate lower extremity joint work during squat exercise in normal gravity. We hypothesized that joint work would be positively related to BWR load.
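    The body-weight replacement arithmetic described above is simple enough to state directly. The 70% figure comes from the abstract; the body weight and Earth bar load below are otherwise illustrative:

```python
def microgravity_bar_load(body_weight, bwr, earth_bar_load):
    """Total bar load (N) on ARED intended to mimic an Earth squat:
    the body-weight replacement fraction times body weight, plus the
    load that would have been on the bar on Earth."""
    return bwr * body_weight + earth_bar_load

# 800 N crewmember, current ~70% BWR practice, 600 N Earth bar load
load = microgravity_bar_load(body_weight=800.0, bwr=0.70,
                             earth_bar_load=600.0)
```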

  18. System and method for motor speed estimation of an electric motor

    DOEpatents

    Lu, Bin [Kenosha, WI; Yan, Ting [Brookfield, WI; Luebke, Charles John [Sussex, WI; Sharma, Santosh Kumar [Viman Nagar, IN

    2012-06-19

    A system and method for a motor management system includes a computer readable storage medium and a processing unit. The processing unit is configured to determine a voltage value of a voltage input to an alternating current (AC) motor, determine a frequency value of at least one of a voltage input and a current input to the AC motor, determine a load value from the AC motor, and access a set of motor nameplate data, where the set of motor nameplate data includes a rated power, a rated speed, a rated frequency, and a rated voltage of the AC motor. The processing unit is also configured to estimate a motor speed based on the voltage value, the frequency value, the load value, and the set of nameplate data, and to store the motor speed on the computer readable storage medium.
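    A common nameplate-based approach of this general kind estimates slip from the load and voltage. The sketch below uses the textbook approximation that slip scales with load and inversely with the square of the voltage; it is not necessarily the patented method, and the nameplate values are illustrative:

```python
def estimate_motor_speed(v, f, load_frac, rated):
    """Slip-based speed estimate (rpm) for an induction motor from
    measured voltage v (V), frequency f (Hz), and load fraction,
    using only nameplate data (a common approximation)."""
    n_sync = 120.0 * f / rated["poles"]                 # synchronous speed
    n_sync_rated = 120.0 * rated["frequency"] / rated["poles"]
    slip_rated = n_sync_rated - rated["speed"]          # rated slip (rpm)
    slip = slip_rated * load_frac * (rated["voltage"] / v) ** 2
    return n_sync - slip

# Hypothetical 4-pole, 60 Hz, 1750 rpm, 460 V nameplate
rated = {"poles": 4, "frequency": 60.0, "speed": 1750.0, "voltage": 460.0}
n = estimate_motor_speed(v=460.0, f=60.0, load_frac=0.5, rated=rated)
```

    At half load and rated voltage the estimated slip is half the rated slip, giving 1775 rpm here.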

  19. Linkability: Orientation for the 21st Century.

    ERIC Educational Resources Information Center

    Hoag, Erin; Kisiel, Valerie

    This document outlines the steps necessary to develop computer-based student service programs that will improve service quality and alleviate the problems associated with staff shortages and heavy case loads. The analysis phase of the process entails that the project manager complete the following steps: (1) define the problem; (2) conduct a…

  20. End-User Tools Towards AN Efficient Electricity Consumption: the Dynamic Smart Grid

    NASA Astrophysics Data System (ADS)

    Kamel, Fouad; Kist, Alexander A.

    2010-06-01

    Growing uncontrolled electrical demands have caused increased supply requirements. This causes volatile electricity markets and has detrimental, unsustainable environmental impacts. The market is presently characterized by regular daily peak demand conditions associated with high electricity prices. A demand-side response system can limit peak demands to an acceptable level. The proposed scheme is based on energy demand and price information which is available online. An online server is used to communicate the information of electricity suppliers to users, who are able to use the information to manage and control their own demand. A configurable, intelligent switching system is used to control local loads during peak events and manage the loads at other times as necessary. The aim is to shift end-user loads towards periods where energy demand, and therefore also prices, are at their lowest. As a result, this will flatten the load profile and avoid load peaks, which are costly for suppliers. The scheme is an endeavour towards achieving a dynamic smart grid demand-side response environment using information-based communication and computer-controlled switching. Diffusing the scheme shall lead to improved electrical supply services and controlled energy consumption and prices.
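    The switching logic described (shed controllable loads during peak-price events, keep critical loads on) can be sketched as follows; the load names and the price threshold are hypothetical:

```python
def switch_loads(price, threshold, loads):
    """Return the desired on/off state for each local load: critical
    loads stay on; deferrable loads are shed while the live price
    exceeds the threshold (illustrative controller logic)."""
    return {name: (info["critical"] or price <= threshold)
            for name, info in loads.items()}

loads = {"hvac": {"critical": False},
         "fridge": {"critical": True},
         "pool_pump": {"critical": False}}

# During a peak-price event, only the critical load stays on
state = switch_loads(price=0.45, threshold=0.30, loads=loads)
```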

  1. Reliability, Risk and Cost Trade-Offs for Composite Designs

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1996-01-01

    Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry, and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables with respect to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters, such as the mean and coefficient of variation (representing the range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (a design parameter) in the longitudinal direction.

  2. Programmable Pulse Generator

    NASA Technical Reports Server (NTRS)

    Rhim, W. K.; Dart, J. A.

    1982-01-01

    New pulse generator programmed to produce pulses from several ports at different pulse lengths and intervals and virtually any combination and sequence. Unit contains a 256-word-by-16-bit memory loaded with instructions either manually or by computer. Once loaded, unit operates independently of computer.

  3. Real-time POD-CFD Wind-Load Calculator for PV Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huayamave, Victor; Divo, Eduardo; Ceballos, Andres

    The primary objective of this project is to create an accurate web-based real-time wind-load calculator. This is of paramount importance for (1) the rapid and accurate assessments of the uplift and downforce loads on a PV mounting system, (2) identifying viable solutions from available mounting systems, and therefore helping reduce the cost of mounting hardware and installation. Wind loading calculations for structures are currently performed according to the American Society of Civil Engineers/Structural Engineering Institute Standard ASCE/SEI 7; the values in this standard were calculated from simplified models that do not necessarily take into account relevant characteristics such as those from full 3D effects, end effects, turbulence generation and dissipation, as well as minor effects derived from shear forces on installation brackets and other accessories. This standard does not include provisions that address the special requirements of rooftop PV systems, and attempts to apply this standard may lead to significant design errors as wind loads are incorrectly estimated. Therefore, an accurate calculator would be of paramount importance for the preliminary assessments of the uplift and downforce loads on a PV mounting system, identifying viable solutions from available mounting systems, and therefore helping reduce the cost of the mounting system and installation. The challenge is that although a full-fledged three-dimensional computational fluid dynamics (CFD) analysis would properly and accurately capture the complete physical effects of air flow over PV systems, it would be impractical for this tool, which is intended to be a real-time web-based calculator. CFD routinely requires enormous computation times to arrive at solutions that can be deemed accurate and grid-independent, even on powerful and massively parallel computer platforms.
This work is expected not only to accelerate solar deployment nationwide, but also to help reach the SunShot Initiative goals of reducing the total installed cost of solar energy systems by 75%. The largest percentage of the total installed cost of a solar energy system is associated with balance-of-system cost, with up to 40% going to “soft” costs, which include customer acquisition, financing, contracting, permitting, interconnection, inspection, installation, performance, operations, and maintenance. The calculator that is being developed will provide wind loads in real time for any solar system design and suggest the proper installation configuration and hardware; it is therefore anticipated to reduce system design, installation and permitting costs.

  4. Differences in muscle load between computer and non-computer work among office workers.

    PubMed

    Richter, J M; Mathiassen, S E; Slijper, H P; Over, E A B; Frens, M A

    2009-12-01

    Introduction of more non-computer tasks has been suggested to increase exposure variation and thus reduce musculoskeletal complaints (MSC) in computer-intensive office work. This study investigated whether muscle activity did, indeed, differ between computer and non-computer activities. Whole-day logs of input device use in 30 office workers were used to identify computer and non-computer work, using a range of classification thresholds (non-computer thresholds (NCTs)). Exposure during these activities was assessed by bilateral electromyography recordings from the upper trapezius and lower arm. Contrasts in muscle activity between computer and non-computer work were distinct but small, even at the individualised, optimal NCT. Using an average group-based NCT resulted in less contrast, even in smaller subgroups defined by job function or MSC. Thus, computer activity logs should be used cautiously as proxies of biomechanical exposure. Conventional non-computer tasks may have a limited potential to increase variation in muscle activity during computer-intensive office work.
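    A sketch of how an input-device log might be segmented with a non-computer threshold (NCT): each second counts as computer work if the gap since the last input event is below the NCT. The event times and threshold are illustrative, not the study's actual classification code:

```python
def classify_activity(event_times, t_end, nct):
    """Label each whole second 0..t_end-1 as computer work (True) when
    the time since the last input-device event is below the
    non-computer threshold nct, else non-computer work (False)."""
    events = sorted(event_times)
    labels, last, k = [], None, 0
    for t in range(t_end):
        while k < len(events) and events[k] <= t:
            last = events[k]           # most recent event up to time t
            k += 1
        labels.append(last is not None and t - last < nct)
    return labels

# Keyboard/mouse events at 0, 1, 2 and 10 s; NCT of 5 s
labels = classify_activity([0, 1, 2, 10], t_end=15, nct=5)
```

    With these numbers, seconds 7–9 fall outside the threshold and are labelled non-computer work; shrinking the NCT grows that non-computer segment, which is exactly how the choice of threshold shifts the computer/non-computer contrast.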

  5. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS, subjected to thermal and mechanical loads, through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite-element-based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence, a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This in turn demanded computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately in order to find the optimum design. Instead of building global surrogate models from a large number of designs, the computational resources were directed towards target regions near the constraint boundaries, for accurate representation of the constraints, using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space.
EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that with adaptive sampling the number of designs required to find the optimum was reduced drastically, while the accuracy improved. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. The separable Monte Carlo method was employed, which allows separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel, as well as error in the finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was again reduced by employing surrogate models. In order to estimate the error in the probability-of-failure estimate, the bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel, with multiple failure modes and a large number of uncertainties, using adaptive sampling techniques.

  6. Real-time state estimation in a flight simulator using fNIRS.

    PubMed

    Gateau, Thibault; Durantin, Gautier; Lancelot, Francois; Scannella, Sebastien; Dehais, Frederic

    2015-01-01

    Working memory is a key executive function for flying an aircraft. This function is particularly critical when pilots have to recall series of air traffic control instructions. However, working memory limitations may jeopardize flight safety. Since the functional near-infrared spectroscopy (fNIRS) method seems promising for assessing working memory load, our objective is to implement an on-line fNIRS-based inference system that integrates two complementary estimators. The first estimator is a real-time state estimation MACD-based algorithm dedicated to identifying the pilot's instantaneous mental state (not-on-task vs. on-task). It does not require a calibration process to perform its estimation. The second estimator is an on-line SVM-based classifier that is able to discriminate task difficulty (low working memory load vs. high working memory load). These two estimators were tested with 19 pilots who were placed in a realistic flight simulator and were asked to recall air traffic control instructions. We found that the estimated pilot's mental state matched significantly better than chance with the pilot's real state (62% global accuracy, 58% specificity, and 72% sensitivity). The second estimator, dedicated to assessing single trial working memory loads, led to 80% classification accuracy, 72% specificity, and 89% sensitivity. These two estimators establish reusable blocks for further fNIRS-based passive brain computer interface development.
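    The MACD idea behind the first estimator (a fast minus a slow exponential moving average of the fNIRS signal) can be sketched as follows; the smoothing factors and the sign-based decision rule are assumptions for illustration, not the paper's exact algorithm:

```python
def ema(xs, alpha):
    """Exponential moving average with smoothing factor alpha."""
    out, s = [], xs[0]
    for x in xs:
        s = alpha * x + (1.0 - alpha) * s
        out.append(s)
    return out

def macd_states(signal, fast=0.3, slow=0.1):
    """Label each sample on-task when the MACD (fast EMA minus slow
    EMA) of the signal is positive, i.e. the signal is rising."""
    f, s = ema(signal, fast), ema(signal, slow)
    return [fi - si > 0.0 for fi, si in zip(f, s)]

# Synthetic oxygenation trace: baseline, then a sustained task-related rise
signal = [0.0] * 10 + [1.0] * 10
states = macd_states(signal)
```

    No calibration step is needed: the rule only compares the signal's own short- and long-term averages, which matches the abstract's claim that the first estimator is calibration-free.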

  7. Development of simulation computer complex specification

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The Training Simulation Computer Complex Study was one of three studies contracted in support of preparations for procurement of a shuttle mission simulator for shuttle crew training. The subject study was concerned with definition of the software loads to be imposed on the computer complex to be associated with the shuttle mission simulator and the development of procurement specifications based on the resulting computer requirements. These procurement specifications cover the computer hardware and system software as well as the data conversion equipment required to interface the computer to the simulator hardware. The development of the necessary hardware and software specifications required the execution of a number of related tasks which included, (1) simulation software sizing, (2) computer requirements definition, (3) data conversion equipment requirements definition, (4) system software requirements definition, (5) a simulation management plan, (6) a background survey, and (7) preparation of the specifications.

  8. NASA's hypersonic fluid and thermal physics program (Aerothermodynamics)

    NASA Technical Reports Server (NTRS)

    Graves, R. A.; Hunt, J. L.

    1985-01-01

    This survey paper gives an overview of NASA's hypersonic fluid and thermal physics program (recently renamed aerothermodynamics). The purpose is to present the elements of, example results from, and rationale and projection for this program. The program is based on improving the fundamental understanding of aerodynamic and aerothermodynamic flow phenomena over hypersonic vehicles in the continuum, transitional, and rarefied flow regimes. Vehicle design capabilities, computational fluid dynamics, computational chemistry, turbulence modeling, aerothermal loads, orbiter flight data analysis, orbiter experiments, laser photodiagnostics, and facilities are discussed.

9. ARES (Automated Residential Energy Standard) 1.2: User's guide, in support of proposed interim energy conservation voluntary performance standards for new non-federal residential buildings: Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

The ARES (Automated Residential Energy Standard) User's Guide is designed to help the user successfully operate the ARES computer program. This guide assumes that the user is familiar with basic PC skills such as using a keyboard and loading a disk drive. The ARES computer program was designed to assist building code officials in creating a residential energy standard based on local climate and costs.

  10. Dynamic Magnification Factor in a Box-Shape Steel Girder

    NASA Astrophysics Data System (ADS)

    Rahbar-Ranji, A.

    2014-01-01

The dynamic effect of moving loads on structures is treated as a dynamic magnification factor when resonance is not imminent. Studies have shown that magnification factors calculated from field measurements can be higher than the values specified in design codes. The main aim of the present paper is to investigate the applicability and accuracy of a rule-based expression for calculating the dynamic magnification factor for lifting appliances used in the marine industry. A steel box-shaped girder of a crane is considered, and a transient dynamic analysis using the computer code ANSYS is carried out. The dynamic magnification factor is calculated for different loading conditions and compared with the rule-based equation. The effects of lifting speed, acceleration, damping ratio, and position of cargo are examined. It is found that the rule-based expression underestimates the dynamic magnification factor.

  11. Rotorcraft Brownout Advanced Understanding, Control, and Mitigation

    DTIC Science & Technology

    2014-10-31

rotor disk loading, blade loading, number and placement of rotors, number of blades, blade twist, blade tip shape, fuselage shape, as well as... Mechanical Engineering • Ramani Duraiswami, Ph.D., Associate Professor, Department of Computer Science & Institute for Advanced Computer Studies • Nail ...23, 2013. 71. Mulinti, R., Corfman, K., and Kiger, K. T., "Particle-Turbulence Interaction of Suspended Load by Forced Jet Impinging on a Mobile

  12. The measurement of boundary layers on a compressor blade in cascade. Volume 1: Experimental technique, analysis and results

    NASA Technical Reports Server (NTRS)

    Zierke, William C.; Deutsch, Steven

    1989-01-01

    Measurements were made of the boundary layers and wakes about a highly loaded, double-circular-arc compressor blade in cascade. These laser Doppler velocimetry measurements have yielded a very detailed and precise data base with which to test the application of viscous computational codes to turbomachinery. In order to test the computational codes at off-design conditions, the data were acquired at a chord Reynolds number of 500,000 and at three incidence angles. Moreover, these measurements have supplied some physical insight into these very complex flows. Although some natural transition is evident, laminar boundary layers usually detach and subsequently reattach as either fully or intermittently turbulent boundary layers. These transitional separation bubbles play an important role in the development of most of the boundary layers and wakes measured in this cascade and the modeling or computing of these bubbles should prove to be the key aspect in computing the entire cascade flow field. In addition, the nonequilibrium turbulent boundary layers on these highly loaded blades always have some region of separation near the trailing edge of the suction surface. These separated flows, as well as the subsequent near wakes, show no similarity and should prove to be a challenging test for the viscous computational codes.

  13. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-10

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.

  14. Impact evaluation of conducted UWB transients on loads in power-line networks

    NASA Astrophysics Data System (ADS)

    Li, Bing; Månsson, Daniel

    2017-09-01

Nowadays, faced with the ever-increasing dependence on diverse electronic devices and systems, the proliferation of potential electromagnetic interference (EMI) has become a critical threat to reliable operation. A typical issue is whether electronics connected to power-line networks can operate reliably when exposed to a harsh electromagnetic environment. In this paper, we consider a conducted ultra-wideband (UWB) disturbance, as an example of an intentional electromagnetic interference (IEMI) source, and evaluate its impact on the loads in a network. With the aid of the fast Fourier transform (FFT), the UWB transient is characterized in the frequency domain. Based on a modified Baum-Liu-Tesche (BLT) method, the EMI received at the loads, with complex impedance, is computed. Through the inverse FFT (IFFT), we obtain time-domain responses of the loads. To evaluate the impact on loads, we employ five common but important quantifiers: time-domain peak, total signal energy, peak signal power, peak time rate of change, and peak time integral of the pulse. Moreover, to perform a comprehensive analysis, we also investigate the effects of the attributes (capacitive, resistive, or inductive) of other loads connected to the network, the rise time and pulse width of the UWB transient, and the lengths of the power lines. It is seen that, for loads distributed in a network, the impact evaluation of IEMI should be based on the characteristics of the IEMI source and the network features, such as load impedances, layout, and characteristics of cables.
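The five quantifiers named in the abstract can be sketched on a synthetic transient. This is only an illustration of the quantifier arithmetic: the double-exponential pulse, sampling step, and 50-ohm load are assumptions, and no BLT network computation is performed here.

```python
import numpy as np

# Synthetic UWB-like load-voltage transient (normalized amplitude).
dt = 1e-10                                 # 0.1 ns sampling step
t = np.arange(0.0, 20e-9, dt)              # 20 ns observation window
tau_r, tau_f = 0.5e-9, 2.0e-9              # assumed rise/fall time constants
v = np.exp(-t / tau_f) - np.exp(-t / tau_r)
R_load = 50.0                              # assumed load resistance (ohms)

peak = np.max(np.abs(v))                           # time-domain peak
energy = np.sum(v**2 / R_load) * dt                # total signal energy (J)
peak_power = np.max(v**2 / R_load)                 # peak signal power (W)
peak_rate = np.max(np.abs(np.gradient(v, dt)))     # peak time rate of change
peak_integral = np.max(np.abs(np.cumsum(v) * dt))  # peak time integral

print(peak, energy, peak_power, peak_rate, peak_integral)
```

In the paper's workflow, `v` would instead be the IFFT of the load response computed from the BLT equations.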

  15. Predictive Scheduling for Electric Vehicles Considering Uncertainty of Load and User Behaviors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bin; Huang, Rui; Wang, Yubo

    2016-05-02

Uncoordinated Electric Vehicle (EV) charging can create unexpected load in the local distribution grid, which may degrade power quality and system reliability. The uncertainty of EV load, user behaviors, and other base load in the distribution grid is one of the challenges that impede optimal control of the EV charging problem. Previous research did not fully solve this problem, due to the lack of real-world EV charging data and a proper stochastic model to describe these behaviors. In this paper, we propose a new predictive EV scheduling algorithm (PESA) inspired by Model Predictive Control (MPC), which includes a dynamic load estimation module and a predictive optimization module. The user-related EV load and base load are dynamically estimated based on historical data. At each time interval, the predictive optimization program is solved for optimal schedules given the estimated parameters. Only the first element of the algorithm outputs is implemented, according to the MPC paradigm. The current-multiplexing function in each Electric Vehicle Supply Equipment (EVSE) is considered, and accordingly a virtual load is modeled to handle the uncertainties of future EV energy demands. The system is validated with real-world EV charging data collected on the UCLA campus, and the experimental results indicate that our proposed model not only reduces load variation by up to 40% but also maintains a high level of robustness. Finally, the IEC 61850 standard is utilized to standardize the data models involved, which is significant for more reliable and large-scale implementation.
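The receding-horizon structure of an MPC-style scheduler can be sketched as follows. A simple valley-filling heuristic stands in for the paper's optimization program, and the horizon, charger limit, demand, and base-load forecast are all invented for illustration.

```python
import numpy as np

H = 4                 # look-ahead horizon (intervals)
p_max = 6.6           # assumed per-interval charger limit (kW)
energy_needed = 18.0  # assumed remaining EV energy demand (kW-intervals)
base_forecast = np.array([3.0, 5.0, 9.0, 4.0, 2.0, 8.0, 3.0, 2.0])  # kW

applied = []
for k in range(len(base_forecast) - H):
    window = base_forecast[k:k + H]
    # Valley-filling stand-in for the optimization: charge hardest in the
    # lowest-base-load slots of the window until the demand share is met.
    demand = min(energy_needed, p_max * H)
    plan = np.zeros(H)
    for idx in np.argsort(window):          # lowest base load first
        plan[idx] = min(p_max, demand - plan.sum())
        if plan.sum() >= demand:
            break
    applied.append(float(plan[0]))          # MPC: implement first element only
    energy_needed = max(0.0, energy_needed - plan[0])

print(applied)
```

At each interval the plan is recomputed from fresh estimates, which is what lets the scheduler absorb forecast errors in the base load and user behavior.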

  16. Fast generation of Fresnel holograms based on multirate filtering.

    PubMed

    Tsang, Peter; Liu, Jung-Ping; Cheung, Wai-Keung; Poon, Ting-Chung

    2009-12-01

    One of the major problems in computer-generated holography is the high computation cost involved for the calculation of fringe patterns. Recently, the problem has been addressed by imposing a horizontal parallax only constraint whereby the process can be simplified to the computation of one-dimensional sublines, each representing a scan plane of the object scene. Subsequently the sublines can be expanded to a two-dimensional hologram through multiplication with a reference signal. Furthermore, economical hardware is available with which sublines can be generated in a computationally free manner with high throughput of approximately 100 M pixels/second. Apart from decreasing the computation loading, the sublines can be treated as intermediate data that can be compressed by simply downsampling the number of sublines. Despite these favorable features, the method is suitable only for the generation of white light (rainbow) holograms, and the resolution of the reconstructed image is inferior to the classical Fresnel hologram. We propose to generate holograms from one-dimensional sublines so that the above-mentioned problems can be alleviated. However, such an approach also leads to a substantial increase in computation loading. To overcome this problem we encapsulated the conversion of sublines to holograms as a multirate filtering process and implemented the latter by use of a fast Fourier transform. Evaluation reveals that, for holograms of moderate size, our method is capable of operating 40,000 times faster than the calculation of Fresnel holograms based on the precomputed table lookup method. Although there is no relative vertical parallax between object points at different distance planes, a global vertical parallax is preserved for the object scene as a whole and the reconstructed image can be observed easily.

  17. Pre- and post-processing for Cosmic/NASTRAN on personal computers and mainframes

    NASA Technical Reports Server (NTRS)

    Kamel, H. A.; Mobley, A. V.; Nagaraj, B.; Watkins, K. W.

    1986-01-01

An interface between Cosmic/NASTRAN and GIFTS has recently been released, combining the powerful pre- and post-processing capabilities of GIFTS with Cosmic/NASTRAN's analysis capabilities. The interface operates on a wide range of computers, even linking Cosmic/NASTRAN and GIFTS when the two are on different computers. GIFTS offers a wide range of elements for use in model construction, each translated by the interface into the nearest Cosmic/NASTRAN equivalent, and the options of automatic or interactive modelling and loading in GIFTS make pre-processing easy and effective. The interface itself includes three programs: GFTCOS, which creates the Cosmic/NASTRAN input deck (and, if desired, the control deck) from the GIFTS Unified Data Base; COSGFT, which translates the displacements from the Cosmic/NASTRAN analysis back into GIFTS; and HOSTR, which handles stress computations for a few higher-order elements available in the interface but not supported by the GIFTS processor STRESS. Finally, the versatile display options in GIFTS post-processing allow the user to examine the analysis results through an especially wide range of capabilities, including creating composite loading cases, plotting in color, and animating the analysis.

  18. GCLAS: a graphical constituent loading analysis system

    USGS Publications Warehouse

    McKallip, T.E.; Koltun, G.F.; Gray, J.R.; Glysson, G.D.

    2001-01-01

The U.S. Geological Survey has developed a program called GCLAS (Graphical Constituent Loading Analysis System) to aid in the computation of daily constituent loads transported in streamflow. Because most water-quality data are collected relatively infrequently, computation of daily constituent loads is moderately to highly dependent on human interpretation of the relation between stream hydraulics and constituent transport. GCLAS provides a visual environment for evaluating the relation between hydraulic and other covariate time series and the constituent chemograph. GCLAS replaces the computer program Sedcalc, the most recent USGS-sanctioned tool for constructing sediment chemographs and computing suspended-sediment loads. Written in a portable language, GCLAS has an interactive graphical interface that permits easy entry of estimated values and provides new tools to aid in making those estimates. The use of a portable language for program development imparts a degree of computer-platform independence that was difficult to obtain in the past, making implementation more straightforward within the USGS's diverse computing environment. Some of the improvements introduced in GCLAS include (1) the ability to directly handle periods of zero or reverse flow, (2) the ability to analyze and apply coefficient adjustments to concentrations as a function of time, streamflow, or both, (3) the ability to compute discharges of constituents other than suspended sediment, (4) the ability to easily view data related to the chemograph at different levels of detail, and (5) the ability to readily display covariate time-series data to provide enhanced visual cues for drawing the constituent chemograph.
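The core load arithmetic such programs automate can be sketched as follows. The 0.0027 factor is the standard USGS conversion from (ft³/s)·(mg/L) to tons/day; the discharge and chemograph values are invented sample data, and a real computation would subdivide the day much more finely.

```python
# Unit-conversion coefficient: tons/day per (ft^3/s * mg/L).
K = 0.0027

# Paired values across one day: streamflow (ft^3/s) and the concentration
# read from the interpolated chemograph (mg/L) at the same instants.
discharge = [120, 150, 400, 900, 700, 300]
concentration = [15, 20, 180, 520, 310, 60]

# Instantaneous loads at each subdivision, then their time average.
loads = [K * q * c for q, c in zip(discharge, concentration)]
daily_load = sum(loads) / len(loads)   # mean instantaneous load (tons/day)
print(daily_load)
```

The human-interpretation step the abstract describes lives in drawing the concentration chemograph between sparse samples; the load integration itself is mechanical.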

  19. Monte Carlo Simulation for Polychromatic X-Ray Fluorescence Computed Tomography with Sheet-Beam Geometry

    PubMed Central

    Jiang, Shanghai

    2017-01-01

X-ray fluorescence computed tomography (XFCT) based on a sheet beam can greatly reduce the time needed to obtain a whole set of projections using a synchrotron. However, a synchrotron source is clearly impractical for most biomedical research laboratories. In this paper, polychromatic X-ray fluorescence computed tomography with sheet-beam geometry is tested by Monte Carlo simulation. First, two phantoms (A and B) filled with PMMA are used to simulate the imaging process in GEANT4. Phantom A contains several GNP-loaded regions of the same size (10 mm in height and diameter) but different Au weight concentrations ranging from 0.3% to 1.8%. Phantom B contains twelve GNP-loaded regions with the same Au weight concentration (1.6%) but different diameters ranging from 1 mm to 9 mm. Second, a discretized representation of the imaging model is established to reconstruct more accurate XFCT images. Third, XFCT images of phantoms A and B are reconstructed by filtered back-projection (FBP) and maximum-likelihood expectation maximization (MLEM), with and without correction, respectively. The contrast-to-noise ratio (CNR) is calculated to evaluate all the reconstructed images. Our results show that a sheet-beam XFCT system based on a polychromatic X-ray source is feasible and that the discretized imaging model can be used to reconstruct more accurate images. PMID:28567054
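The MLEM update named in the abstract can be sketched on a toy linear emission model y ≈ Ax. The 3-pixel "phantom" and 4-ray system matrix are invented for demonstration; a real XFCT model would build A from the fluorescence physics and attenuation correction.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([0.3, 1.6, 0.8])   # e.g. Au concentration per pixel
y = A @ x_true                        # noiseless projections

x = np.ones(3)                        # positive initial estimate
sens = A.T @ np.ones(len(y))          # sensitivity (column sums of A)
for _ in range(500):
    # Multiplicative MLEM update: x <- x * A^T(y / Ax) / A^T 1
    x *= (A.T @ (y / (A @ x))) / sens

print(x)
```

The multiplicative form keeps the estimate nonnegative at every iteration, which is why MLEM is attractive for low-count fluorescence data.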

  20. Zero side force volute development

    NASA Technical Reports Server (NTRS)

    Anderson, P. G.; Franz, R. J.; Farmer, R. C.; Chen, Y. S.

    1995-01-01

Collector scrolls on high performance centrifugal pumps are currently designed with methods which are based on very approximate flowfield models. Such design practices result in some volute configurations causing excessive side loads even at design flowrates. The purpose of this study was to develop and verify computational design tools which may be used to optimize volute configurations with respect to avoiding excessive loads on the bearings. The new design methodology consisted of a volute grid generation module and a computational fluid dynamics (CFD) module to describe the volute geometry and predict the radial forces for a given flow condition, respectively. Initially, the CFD module was used to predict the impeller and the volute flowfields simultaneously; however, the required computation time was found to be excessive for parametric design studies. A second computational procedure was developed which utilized an analytical impeller flowfield model and an ordinary differential equation describing the impeller/volute coupling, obtained from the literature (Adkins & Brennen, 1988). The second procedure resulted in a 20- to 30-fold increase in computational speed for an analysis. The volute design analysis was validated by postulating a volute geometry, constructing a volute to this configuration, and measuring the steady radial forces over a range of flow coefficients. Excellent agreement between model predictions and observed pump operation proves the computational impeller/volute pump model to be a valuable design tool. Further applications are recommended to fully establish the benefits of this new methodology.

  1. The impact of water loading on postglacial decay times in Hudson Bay

    NASA Astrophysics Data System (ADS)

    Han, Holly Kyeore; Gomez, Natalya

    2018-05-01

    Ongoing glacial isostatic adjustment (GIA) due to surface loading (ice and water) variations during the last glacial cycle has been contributing to sea-level changes globally throughout the Holocene, especially in regions like Canada that were heavily glaciated during the Last Glacial Maximum (LGM). The spatial and temporal distribution of GIA, as manifested in relative sea-level (RSL) change, are sensitive to the ice history and the rheological structure of the solid Earth, both of which are uncertain. It has been shown that RSL curves near the center of previously glaciated regions with no ongoing surface loading follow an exponential-like form, with the postglacial decay times associated with that form having a weak sensitivity to the details of the ice loading history. Postglacial decay time estimates thus provide a powerful datum for constraining the Earth's viscous structure and improving GIA predictions. We explore spatial patterns of postglacial decay time predictions in Hudson Bay by decomposing numerically modeled RSL changes into contributions from water and ice loading effects, and computing their relative impact on the decay times. We demonstrate that ice loading can contribute a strong geographic trend on the decay time estimates if the time window used to compute decay times includes periods that are temporally close to (i.e. contemporaneous with, or soon after) periods of active loading. This variability can be avoided by choosing a suitable starting point for the decay time window. However, more surprisingly, we show that across any adopted time window, water loading effects associated with inundation into, and postglacial flux out of, Hudson Bay and James Bay will impart significant geographic variability onto decay time estimates. 
We emphasize this issue by considering both maps of predicted decay times across the region and site-specific estimates, and we conclude that variability in observed decay times (whether based on existing or future data sets) may reflect this water loading signal.
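The exponential form of the RSL curves described above can be sketched as a log-linear fit for the decay time. The synthetic series, amplitude, and τ = 5 kyr are assumptions for illustration, not values from the study.

```python
import numpy as np

# Near the center of a formerly glaciated region, RSL(t) ~ A * exp(-t / tau),
# so tau (the postglacial decay time) follows from a fit of ln(RSL) vs t.
tau_true, amp = 5.0, 120.0              # assumed decay time (kyr), amplitude (m)
t = np.linspace(0.0, 8.0, 20)           # time since chosen window start (kyr)
rsl = amp * np.exp(-t / tau_true)       # synthetic relative sea level (m)

# Log-linear fit: ln(RSL) = ln(A) - t / tau, so tau = -1 / slope.
slope, intercept = np.polyfit(t, np.log(rsl), 1)
tau_est = -1.0 / slope
print(tau_est)
```

The abstract's point is that real estimates of `tau_est` are not spatially uniform: both the choice of time window and the water-loading signal in Hudson Bay imprint geographic variability on them.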

  2. Computational Thermomechanical Modelling of Early-Age Silicate Composites

    NASA Astrophysics Data System (ADS)

    Vala, J.; Št'astník, S.; Kozák, V.

    2009-09-01

    Strains and stresses in early-age silicate composites, widely used in civil engineering, especially in fresh concrete mixtures, in addition to those caused by exterior mechanical loads, are results of complicated non-deterministic physical and chemical processes. Their numerical prediction at the macro-scale level requires the non-trivial physical analysis based on the thermodynamic principles, making use of micro-structural information from both theoretical and experimental research. The paper introduces a computational model, based on a nonlinear system of macroscopic equations of evolution, supplied with certain effective material characteristics, coming from the micro-scale analysis, and sketches the algorithm for its numerical analysis.

  3. Carbon nanotube film interlayer for strain and damage sensing in composites during dynamic compressive loading

    NASA Astrophysics Data System (ADS)

    Wu, A. S.; Na, W.-J.; Yu, W.-R.; Byun, J.-H.; Chou, T.-W.

    2012-11-01

    A major challenge in the damage assessment of materials under dynamic, high strain rate loading lies in the inability to apply most health monitoring methodologies to the analysis and evaluation of damage incurred on short timescales. Here, we present a resistance-based sensing method utilizing an electrically conductive carbon nanotube film in a fiberglass/vinyl ester composite. This method reveals that applied strain and damage in the form of matrix cracking and delamination give rise to electrical resistance increases across the composite specimen; these can be measured in real-time during high strain rate loading. Damage within the composite specimens is confirmed through pre- and post-mortem x-ray micro computed tomography imaging.

  4. Towards scalable Byzantine fault-tolerant replication

    NASA Astrophysics Data System (ADS)

    Zbierski, Maciej

    2017-08-01

    Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.

  5. Viscoelasticity of human oral mucosa: implications for masticatory biomechanics.

    PubMed

    Sawada, A; Wakabayashi, N; Ona, M; Suzuki, T

    2011-05-01

The dynamic behavior of oral soft tissues supporting removable prostheses is not well understood. We hypothesized that the stress and strain of the mucosa exhibit time-dependent behavior under masticatory loading. Displacement of the mucosa on the maxillary residual ridge was measured in vivo by means of a magnetic actuator/sensor under vertical loading in partially edentulous individuals. Subject-specific finite element models of homogeneous bone and mucosa were constructed based on computed tomography images. A mean initial elastic modulus of 8.0 × 10⁻⁵ GPa and a relaxation time of 494 s were obtained from the curve adaptation of the finite element output to the in vivo time-displacement relationship. A delayed increase of the maximum compressive strain on the surface of the mucosa was observed under sustained load, while the maximum strain inside the mucosa was relatively low and uninfluenced by the duration of the load. The compressive stress showed a slight decrease with sustained load, due to stress relaxation of the mucosa. On simulation of cyclic load, an increment of the maximum strain and evidence of residual strain were revealed after each loading. The results support our hypothesis and suggest that sustained and repetitive loads accumulate as surface strain on the mucosa.

  6. Development of a three-dimensional multistage inverse design method for aerodynamic matching of axial compressor blading

    NASA Astrophysics Data System (ADS)

    van Rooij, Michael P. C.

    Current turbomachinery design systems increasingly rely on multistage Computational Fluid Dynamics (CFD) as a means to assess performance of designs. However, design weaknesses attributed to improper stage matching are addressed using often ineffective strategies involving a costly iterative loop between blading modification, revision of design intent, and evaluation of aerodynamic performance. A design methodology is presented which greatly improves the process of achieving design-point aerodynamic matching. It is based on a three-dimensional viscous inverse design method which generates the blade camber surface based on prescribed pressure loading, thickness distribution and stacking line. This inverse design method has been extended to allow blading analysis and design in a multi-blade row environment. Blade row coupling was achieved through a mixing plane approximation. Parallel computing capability in the form of MPI has been implemented to reduce the computational time for multistage calculations. Improvements have been made to the flow solver to reach the level of accuracy required for multistage calculations. These include inclusion of heat flux, temperature-dependent treatment of viscosity, and improved calculation of stress components and artificial dissipation near solid walls. A validation study confirmed that the obtained accuracy is satisfactory at design point conditions. Improvements have also been made to the inverse method to increase robustness and design fidelity. These include the possibility to exclude spanwise sections of the blade near the endwalls from the design process, and a scheme that adjusts the specified loading area for changes resulting from the leading and trailing edge treatment. Furthermore, a pressure loading manager has been developed. Its function is to automatically adjust the pressure loading area distribution during the design calculation in order to achieve a specified design objective. 
Possible objectives are overall mass flow and compression ratio, and radial distribution of exit flow angle. To supplement the loading manager, mass flow inlet and exit boundary conditions have been implemented. Through appropriate combination of pressure or mass flow inflow/outflow boundary conditions and loading manager objectives, increased control over the design intent can be obtained. The three-dimensional multistage inverse design method with pressure loading manager was demonstrated to offer greatly enhanced blade row matching capabilities. Multistage design allows for simultaneous design of blade rows in a mutually interacting environment, which permits the redesigned blading to adapt to changing aerodynamic conditions resulting from the redesign. This ensures that the obtained blading geometry and performance implied by the prescribed pressure loading distribution are consistent with operation in the multi-blade row environment. The developed methodology offers high aerodynamic design quality and productivity, and constitutes a significant improvement over existing approaches used to address design-point aerodynamic matching.

  7. The Effect of Computer Simulations on Acquisition of Knowledge and Cognitive Load: A Gender Perspective

    ERIC Educational Resources Information Center

    Kaheru, Sam J.; Kriek, Jeanne

    2016-01-01

    A study on the effect of the use of computer simulations (CS) on the acquisition of knowledge and cognitive load was undertaken with 104 Grade 11 learners in four schools in rural South Africa on the physics topic geometrical optics. Owing to the lack of resources a teacher-centred approach was followed in the use of computer simulations. The…

  8. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA is a medium-sized device, equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a hardware description language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gooding, Thomas M.

Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.

  10. Computer Based Porosity Design by Multi Phase Topology Optimization

    NASA Astrophysics Data System (ADS)

    Burblies, Andreas; Busse, Matthias

    2008-02-01

A numerical simulation technique called Multi Phase Topology Optimization (MPTO), based on the finite element method, has been developed and refined by Fraunhofer IFAM during the last five years. MPTO is able to determine the optimum distribution of two or more different materials in components under thermal and mechanical loads. The objective of the optimization is to minimize the component's elastic energy. Conventional topology optimization methods, which simulate adaptive bone mineralization, have the disadvantage that mass changes continuously through growth processes. MPTO keeps all initial material concentrations and uses methods adapted from molecular dynamics to find the energy minimum. Applying MPTO to mechanically loaded components with a high number of different material densities, the optimization results show graded and sometimes anisotropic porosity distributions which are very similar to natural bone structures. It is now possible to design the macro- and microstructure of a mechanical component in one step. Computer-based porosity-design structures can be manufactured by new rapid prototyping technologies: Fraunhofer IFAM has successfully applied 3D printing and selective laser sintering to produce very stiff, lightweight components with graded porosities calculated by MPTO.

  11. System reliability of randomly vibrating structures: Computational modeling and laboratory testing

    NASA Astrophysics Data System (ADS)

    Sundar, V. S.; Ammanagi, S.; Manohar, C. S.

    2015-09-01

    The problem of determination of system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time-variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with a significantly smaller number of samples than is needed in a direct simulation study. Notably, we show that the ideas from Girsanov's transformation based Monte Carlo simulations can be extended to conduct laboratory testing to assess system reliability of engineering structures with a reduced number of samples and hence with reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations on road load response of an automotive system tested on a four-post test rig.
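
    The variance-reduction idea is a change of measure: sample from a shifted distribution under which failure is common, and reweight each failure by the likelihood ratio. A minimal one-dimensional sketch follows, with a shifted Gaussian standing in for the Girsanov-transformed diffusion; the threshold, shift, and sample count are arbitrary choices for illustration, not values from the study.

```python
import math, random

def failure_prob_is(threshold, shift, n=20000, seed=1):
    """Estimate P(X > threshold) for X ~ N(0,1) by sampling from
    N(shift, 1) and reweighting with the likelihood ratio -- the same
    change-of-measure idea that Girsanov's theorem provides for SDEs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)  # sample under the shifted law
        if x > threshold:
            # Radon-Nikodym weight of N(0,1) with respect to N(shift,1)
            total += math.exp(-shift * x + 0.5 * shift * shift)
    return total / n
```

    With the shift placed at the threshold, a rare tail probability near 3e-5 is recovered to within a few percent from 2e4 samples, whereas direct simulation would see almost no failures at that sample size.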

  12. Wind Energy Conversion System Analysis Model (WECSAM) computer program documentation

    NASA Astrophysics Data System (ADS)

    Downey, W. T.; Hendrick, P. L.

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation.

  13. OpenTopography: Addressing Big Data Challenges Using Cloud Computing, HPC, and Data Analytics

    NASA Astrophysics Data System (ADS)

    Crosby, C. J.; Nandigam, V.; Phan, M.; Youn, C.; Baru, C.; Arrowsmith, R.

    2014-12-01

    OpenTopography (OT) is a geoinformatics-based data facility initiated in 2009 for democratizing access to high-resolution topographic data, derived products, and tools. Hosted at the San Diego Supercomputer Center (SDSC), OT utilizes cyberinfrastructure, including large-scale data management, high-performance computing, and service-oriented architectures to provide efficient Web based access to large, high-resolution topographic datasets. OT collocates data with processing tools to enable users to quickly access custom data and derived products for their application. OT's ongoing R&D efforts aim to solve emerging technical challenges associated with exponential growth in data, higher order data products, as well as user base. Optimization of data management strategies can be informed by a comprehensive set of OT user access metrics that allows us to better understand usage patterns with respect to the data. By analyzing the spatiotemporal access patterns within the datasets, we can map areas of the data archive that are highly active (hot) versus the ones that are rarely accessed (cold). This enables us to architect a tiered storage environment consisting of high performance disk storage (SSD) for the hot areas and less expensive slower disk for the cold ones, thereby optimizing price to performance. From a compute perspective, OT is looking at cloud based solutions such as the Microsoft Azure platform to handle sudden increases in load. An OT virtual machine image in Microsoft's VM Depot can be invoked and deployed quickly in response to increased system demand. OT has also integrated SDSC HPC systems like the Gordon supercomputer into our infrastructure tier to enable compute intensive workloads like parallel computation of hydrologic routing on high resolution topography. This capability also allows OT to scale to HPC resources during high loads to meet user demand and provide more efficient processing. 
With a growing user base and maturing scientific user community comes new requests for algorithms and processing capabilities. To address this demand, OT is developing an extensible service-based architecture for integrating community-developed software. This "pluggable" approach to Web service deployment will enable new processing and analysis tools to run collocated with OT hosted data.

  14. Computing time-series suspended-sediment concentrations and loads from in-stream turbidity-sensor and streamflow data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Doug; Ziegler, Andrew C.

    2010-01-01

    Over the last decade, use of a method for computing suspended-sediment concentration and loads using turbidity sensors—primarily nephelometry, but also optical backscatter—has proliferated. Because an in-situ turbidity sensor is capable of measuring turbidity instantaneously, a turbidity time series can be recorded and related directly to time-varying suspended-sediment concentrations. Depending on the suspended-sediment characteristics of the measurement site, this method can be more reliable and, in many cases, a more accurate means for computing suspended-sediment concentrations and loads than traditional U.S. Geological Survey computational methods. Guidelines and procedures for estimating time series of suspended-sediment concentration and loading as a function of turbidity and streamflow data have been published in a U.S. Geological Survey Techniques and Methods Report, Book 3, Chapter C4. This paper is a summary of these guidelines and discusses some of the concepts, statistical procedures, and techniques used to maintain a multiyear suspended-sediment time series.
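
    The core of the approach can be sketched as a log-log regression of concentration on turbidity plus a unit conversion to load. The data and coefficients below are invented for illustration; the published method additionally includes bias corrections and uncertainty analysis not shown here.

```python
import math

def fit_log_linear(turbidity, ssc):
    """Least-squares fit of log10(SSC) = a + b * log10(turbidity).

    Returns (a, b); turbidity in NTU-like units, SSC in mg/L.
    """
    xs = [math.log10(t) for t in turbidity]
    ys = [math.log10(c) for c in ssc]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def instantaneous_load(c_mg_per_l, q_m3_per_s):
    """Suspended-sediment load in tonnes/day: mg/L equals g/m^3, so
    load = c * Q g/s, times 86400 s/day, divided by 1e6 g/tonne."""
    return c_mg_per_l * q_m3_per_s * 0.0864
```

    Applying the fitted relation to each point of the recorded turbidity time series, then multiplying by concurrent streamflow, yields the continuous load record the report describes.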

  15. 14 CFR 25.527 - Hull and main float load factors.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... float load factors. (a) Water reaction load factors n W must be computed in the following manner: (1... following values are used: (1) n W=water reaction load factor (that is, the water reaction divided by...

  16. An Experimental Investigation Into the Feasibility of Measuring Static and Dynamic Aerodynamic Derivatives in the DSTO Water Tunnel

    DTIC Science & Technology

    2013-08-01

    The SDM was subjected to forced small-amplitude (0.5) sinusoidal pitching oscillations, and derivatives were computed from measured model loads, angles of attack, reduced frequency of oscillation, and aircraft geometrical parameters.

  17. Comparison of wing-span averaging effects on lift, rolling moment, and bending moment for two span load distributions and for two turbulence representations

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.

    1978-01-01

    An analytical method of computing the averaging effect of wing-span size on the loading of a wing induced by random turbulence was adapted for use on a digital electronic computer. The turbulence input was assumed to have a Dryden power spectral density. The computations were made for lift, rolling moment, and bending moment for two span load distributions, rectangular and elliptic. Data are presented to show the wing-span averaging effect for wing-span ratios encompassing current airplane sizes. The rectangular wing-span loading showed a slightly greater averaging effect than did the elliptic loading. In the frequency range most bothersome to airplane passengers, the wing-span averaging effect can reduce the normal lift load, and thus the acceleration, by about 7 percent for a typical medium-sized transport. Some calculations were made to evaluate the effect of using a Von Karman turbulence representation. These results showed that using the Von Karman representation generally resulted in a span averaging effect about 3 percent larger.
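
    The turbulence input mentioned above can be made concrete with the commonly quoted vertical-gust form of the Dryden power spectral density in spatial frequency; the symbols sigma (gust intensity) and L (scale length) below are generic placeholders, not values from the study.

```python
import math

def dryden_vertical_psd(omega, sigma, L):
    """One common form of the Dryden vertical-gust PSD,
    Phi(Omega) = (sigma^2 L / pi) * (1 + 3 (L Omega)^2) / (1 + (L Omega)^2)^2,
    with Omega the spatial frequency in rad per unit length."""
    lo2 = (L * omega) ** 2
    return (sigma ** 2 * L / math.pi) * (1.0 + 3.0 * lo2) / (1.0 + lo2) ** 2
```

    The span-averaging analysis in the report weights this spectrum by a wing-span transfer function before integrating to obtain lift, rolling-moment, and bending-moment responses.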

  18. Optimal Load Shedding and Generation Rescheduling for Overload Suppression in Large Power Systems.

    NASA Astrophysics Data System (ADS)

    Moon, Young-Hyun

    Ever-increasing size, complexity and operation costs in modern power systems have stimulated the intensive study of an optimal Load Shedding and Generator Rescheduling (LSGR) strategy in the interest of secure and economic system operation. The conventional approach to LSGR has been based on the application of LP (Linear Programming) with the use of an approximately linearized model, and the LP algorithm is currently considered to be the most powerful tool for solving the LSGR problem. However, all of the LP algorithms presented in the literature essentially lead to the following disadvantages: (i) piecewise linearization involved in the LP algorithms requires the introduction of a number of new inequalities and slack variables, which places a significant burden on the computing facilities, and (ii) objective functions are not formulated in terms of the state variables of the adopted models, resulting in considerable numerical inefficiency in the process of computing the optimal solution. A new approach is presented, based on the development of a new linearized model and on the application of QP (Quadratic Programming). The changes in line flows as a result of changes to bus injection power are taken into account in the proposed model by the introduction of sensitivity coefficients, which avoids the second of these disadvantages. A precise method to calculate these sensitivity coefficients is given. A comprehensive review of the theory of optimization is included, in which results of the development of QP algorithms for LSGR based on Wolfe's method and Kuhn-Tucker theory are evaluated in detail. The validity of the proposed model and QP algorithms has been verified and tested on practical power systems, showing a significant reduction in both computation time and memory requirements as well as the expected lower generation costs of the optimal solution as compared with those obtained from computing the optimal solution with LP. 
Finally, it is noted that an efficient reactive power compensation algorithm is developed to suppress voltage disturbances due to load shedding, and that a new method for multiple contingency simulation is presented.
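
    The flavor of a quadratic-programming formulation can be shown on a toy problem with a closed-form KKT solution. The quadratic cost weights and single power-balance constraint below are invented for illustration and are far simpler than the LSGR model above.

```python
def optimal_shedding(weights, deficit):
    """Closed-form solution of the toy quadratic program
        minimize  sum_i w_i * x_i^2   subject to   sum_i x_i = deficit.
    By the KKT conditions 2 * w_i * x_i = lambda for all i, so each
    shed amount x_i is proportional to 1/w_i: buses where shedding is
    cheap carry more of the total deficit."""
    inv = [1.0 / w for w in weights]
    s = sum(inv)
    return [deficit * v / s for v in inv]
```

    With equal weights the deficit is split evenly; tripling one bus's weight shifts most of the shedding to the cheaper bus, which is the qualitative behavior a full QP-based LSGR solution exhibits at scale.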

  19. A computer program to predict rotor rotational noise of a stationary rotor from blade loading coefficient

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, R.; Randall, D.; Hosier, R. N.

    1976-01-01

    The programming language used is FORTRAN IV. A description of all main and subprograms is provided so that any user possessing a FORTRAN compiler and random access capability can adapt the program to his facility. Rotor blade surface-pressure spectra can be used by the program to calculate: (1) blade station loading spectra, (2) chordwise and/or spanwise integrated blade-loading spectra, and (3) far-field rotational noise spectra. Any of five standard inline functions describing the chordwise distribution of the blade loading can be chosen in order to study parametrically the acoustic predictions. The program output consists of both printed and graphic descriptions of the blade-loading coefficient spectra and far-field acoustic spectrum. The results may also be written on a binary file for future processing. Examples of the application of the program along with a description of the rotational noise prediction theory on which the program is based are also provided.

  20. Failure Resistance of Fiber-Reinforced Ultra-High Performance Concrete (FRUHPC) Subjected to Blast Loading

    NASA Astrophysics Data System (ADS)

    Ellis, Brett; Zhou, Min; McDowell, David

    2011-06-01

    As part of a hierarchy-based computational materials design program, a fully dynamic 3D mesoscale model is developed to quantify the effects of energy storage and dissipation mechanisms in Fiber-Reinforced Ultra-High Performance Concretes (FRUHPCs) subjected to blast loading. This model accounts for three constituent components: reinforcement fibers, cementitious matrix, and fiber-matrix interfaces. Microstructure instantiations encompass a range of fiber volume fraction (0-2%), fiber length (10-15 mm), and interfacial bonding strength (1-100 MPa). Blast loadings with scaled distances between 5 and 10 m/kg^(1/3) are considered. Calculations have allowed the delineation and characterization of the evolutions of kinetic energy, strain energy, work expended on interfacial damage and failure, frictional dissipation along interfaces, and bulk dissipation through granular flow as functions of microstructure, loading and constituent attributes. The relations obtained point out avenues for designing FRUHPCs with properties tailored for specific load environments and reveal trade-offs between various design scenarios.

  1. Coupled CFD/CSD Analysis of an Active-Twist Rotor in a Wind Tunnel with Experimental Validation

    NASA Technical Reports Server (NTRS)

    Massey, Steven J.; Kreshock, Andrew R.; Sekula, Martin K.

    2015-01-01

    An unsteady Reynolds averaged Navier-Stokes analysis loosely coupled with a comprehensive rotorcraft code is presented for a second-generation active-twist rotor. High fidelity Navier-Stokes results for three configurations: an isolated rotor, a rotor with fuselage, and a rotor with fuselage mounted in a wind tunnel, are compared to lifting-line theory based comprehensive rotorcraft code calculations and wind tunnel data. Results indicate that CFD/CSD predictions of flapwise bending moments are in good agreement with wind tunnel measurements for configurations with a fuselage, and that modeling the wind tunnel environment does not significantly enhance computed results. Actuated rotor results for the rotor with fuselage configuration are also validated for predictions of vibratory blade loads and fixed-system vibratory loads. Varying levels of agreement with wind tunnel measurements are observed for blade vibratory loads, depending on the load component (flap, lag, or torsion) and the harmonic being examined. Predicted trends in fixed-system vibratory loads are in good agreement with wind tunnel measurements.

  2. The Load Distribution in Bolted or Riveted Joints in Light-Alloy Structures

    NASA Technical Reports Server (NTRS)

    Vogt, F.

    1947-01-01

    This report contains a theoretical discussion of the load distribution in bolted or riveted joints in light-alloy structures which is applicable not only for loads below the limit of proportionality but also for loads above this limit. The theory is developed for double and single shear joints. The methods given are illustrated by numerical examples, and the values assumed for the bolt (or rivet) stiffnesses are based partly on theory and partly on known experimental values. It is shown that the load distribution does not vary greatly with the bolt (or rivet) stiffnesses and that for design purposes it is usually sufficient to know their order of magnitude. The theory may also be directly used for spot-welded structures and, with small modifications, for seam-welded structures. The computational work involved in the methods described is simple and may be completed in a reasonable time for most practical problems. A summary of earlier theoretical and experimental investigations on the subject is included in the report.
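
    The stiffness-based load-distribution idea can be illustrated with a 1D spring model of a single-shear lap joint. The idealization below (identical bolts of shear stiffness kb, identical plate segments of axial stiffness kp, linear behavior only) is an assumption for illustration, not Vogt's exact formulation, which also covers the range above the limit of proportionality.

```python
def bolt_load_distribution(n, P, kb, kp):
    """Bolt forces in a lap joint idealized as springs: n identical
    bolts of shear stiffness kb join two plates whose segments between
    bolts have axial stiffness kp. Compatibility of plate stretch and
    bolt slip gives, with S_i = F[0] + ... + F[i],
        (F[i+1] - F[i]) / kb = (2 * S_i - P) / kp,
    plus equilibrium sum(F) = P. Returns the list of bolt forces."""
    A = [[0.0] * n for _ in range(n)]
    rhs = [0.0] * n
    for i in range(n - 1):
        A[i][i + 1] += 1.0 / kb
        A[i][i] -= 1.0 / kb
        for j in range(i + 1):
            A[i][j] -= 2.0 / kp
        rhs[i] = -P / kp
    A[n - 1] = [1.0] * n  # equilibrium row: forces sum to P
    rhs[n - 1] = P
    # Gaussian elimination with partial pivoting.
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        rhs[c], rhs[p] = rhs[p], rhs[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            rhs[r] -= f * rhs[c]
    F = [0.0] * n
    for r in range(n - 1, -1, -1):
        F[r] = (rhs[r] - sum(A[r][k] * F[k] for k in range(r + 1, n))) / A[r][r]
    return F
```

    With three identical bolts and kb = kp this yields forces of 0.4P, 0.2P, 0.4P: the outer fasteners carry more load, and the split shifts only between the uniform (flexible-bolt) and half-and-half (rigid-bolt) limits, consistent with the report's observation that the distribution does not vary greatly with the stiffnesses.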

  3. Improvements to a method for the geometrically nonlinear analysis of compressively loaded stiffened composite panels

    NASA Technical Reports Server (NTRS)

    Stoll, Frederick

    1993-01-01

    The NLPAN computer code uses a finite-strip approach to the analysis of thin-walled prismatic composite structures such as stiffened panels. The code can model in-plane axial loading, transverse pressure loading, and constant through-the-thickness thermal loading, and can account for shape imperfections. The NLPAN code represents an attempt to extend the buckling analysis of the VIPASA computer code into the geometrically nonlinear regime. Buckling mode shapes generated using VIPASA are used in NLPAN as global functions for representing displacements in the nonlinear regime. While the NLPAN analysis is approximate in nature, it is computationally economical in comparison with finite-element analysis, and is thus suitable for use in preliminary design and design optimization. A comprehensive description of the theoretical approach of NLPAN is provided. A discussion of some operational considerations for the NLPAN code is included. NLPAN is applied to several test problems in order to demonstrate new program capabilities, and to assess the accuracy of the code in modeling various types of loading and response. User instructions for the NLPAN computer program are provided, including a detailed description of the input requirements and example input files for two stiffened-panel configurations.

  4. User document for computer programs for ring-stiffened shells of revolution

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1973-01-01

    A user manual and related program documentation is presented for six compatible computer programs for structural analysis of axisymmetric shell structures. The programs apply to a common structural model but analyze different modes of structural response. In particular, they are: (1) Linear static response under asymmetric loads; (2) Buckling of linear states under asymmetric loads; (3) Nonlinear static response under axisymmetric loads; (4) Buckling of nonlinear states under axisymmetric loads; (5) Imperfection sensitivity of buckling modes under axisymmetric loads; and (6) Vibrations about nonlinear states under axisymmetric loads. These programs treat branched shells of revolution with an arbitrary arrangement of a large number of open branches but with at most one closed branch.

  5. Structural biomechanics of the craniomaxillofacial skeleton under maximal masticatory loading: Inferences and critical analysis based on a validated computational model.

    PubMed

    Pakdel, Amir R; Whyne, Cari M; Fialkov, Jeffrey A

    2017-06-01

    The trend towards optimizing stabilization of the craniomaxillofacial skeleton (CMFS) with the minimum amount of fixation required to achieve union, and away from maximizing rigidity, requires a quantitative understanding of craniomaxillofacial biomechanics. This study uses computational modeling to quantify the structural biomechanics of the CMFS under maximal physiologic masticatory loading. Using an experimentally validated subject-specific finite element (FE) model of the CMFS, the patterns of stress and strain distribution as a result of physiological masticatory loading were calculated. The trajectories of the stresses were plotted to delineate compressive and tensile regimes over the entire CMFS volume. The lateral maxilla was found to be the primary vertical buttress under maximal bite force loading, with much smaller involvement of the naso-maxillary buttress. There was no evidence that the pterygo-maxillary region is a buttressing structure, counter to classical buttress theory. The stresses at the zygomatic sutures suggest that two-point fixation of zygomatic complex fractures may be sufficient for fixation under bite force loading. The current experimentally validated biomechanical FE model of the CMFS is a practical tool for in silico optimization of current practice techniques and may be used as a foundation for the development of design criteria for future technologies for the treatment of CMFS injury and disease. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  6. A Framework for Load Balancing of Tensor Contraction Expressions via Dynamic Task Partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Pai-Wei; Stock, Kevin; Rajbhandari, Samyam

    In this paper, we introduce the Dynamic Load-balanced Tensor Contractions (DLTC), a domain-specific library for efficient task parallel execution of tensor contraction expressions, a class of computation encountered in quantum chemistry and physics. Our framework decomposes each contraction into smaller task units, represented by an abstraction referred to as iterators. We exploit an extra level of parallelism by having tasks across independent contractions executed concurrently through a dynamic load balancing runtime. We demonstrate the improved performance, scalability, and flexibility for the computation of tensor contraction expressions on parallel computers using examples from coupled cluster methods.
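
    The benefit of dynamic over static assignment can be sketched with greedy list scheduling: each task unit goes to whichever worker frees up earliest. This is an illustrative stand-in for dynamic load balancing in general, not DLTC's actual runtime, and the task costs below are invented.

```python
import heapq

def dynamic_schedule(task_costs, n_workers):
    """Greedy list scheduling: assign each task to the worker that
    becomes free earliest, mimicking a dynamic load-balancing runtime
    that hands out task units on demand. Returns per-worker finish
    times; the makespan is their maximum."""
    workers = [(0.0, i) for i in range(n_workers)]  # (busy-until, id)
    heapq.heapify(workers)
    finish = [0.0] * n_workers
    for cost in task_costs:
        t, i = heapq.heappop(workers)  # earliest-free worker
        t += cost
        finish[i] = t
        heapq.heappush(workers, (t, i))
    return finish
```

    With uneven task costs (as arise when contractions are decomposed into iterator-defined tiles of different sizes), dynamic assignment keeps all workers busy, whereas a static even split of the task list can leave one worker idle while another finishes a long tail.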

  7. An incentive-based distributed mechanism for scheduling divisible loads in tree networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carroll, T. E.; Grosu, D.

    The underlying assumption of Divisible Load Scheduling (DLS) theory is that the processors composing the network are obedient, i.e., they do not "cheat" the scheduling algorithm. This assumption is unrealistic if the processors are owned by autonomous, self-interested organizations that have no a priori motivation for cooperation and they will manipulate the algorithm if it is beneficial to do so. In this paper, we address this issue by designing a distributed mechanism for scheduling divisible loads in tree networks, called DLS-T, which provides incentives to processors for reporting their true processing capacity and executing their assigned load at full processing capacity. We prove that the DLS-T mechanism computes the optimal allocation in an ex post Nash equilibrium. Finally, we simulate and study the mechanism under various network structures and processor parameters.
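
    The optimality condition at the heart of DLS theory is that all processors finish at the same instant. Under the simplifying assumption of negligible communication time (a sketch only; the DLS-T mechanism itself additionally handles strategic misreporting and tree-structured communication), the load fractions are simply proportional to processing speed:

```python
def divisible_load_fractions(speeds):
    """Fractions of a divisible load that make all processors finish
    simultaneously when communication time is ignored: alpha_i
    proportional to speed s_i, so every finish time is
    alpha_i / s_i = 1 / sum(speeds)."""
    total = sum(speeds)
    return [s / total for s in speeds]
```

    If a self-interested processor under-reports its speed, it is assigned less load; the mechanism's payments are designed so that such misreporting is never profitable, which is what the ex post Nash equilibrium result formalizes.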

  8. Evaluation of a method for comparing phosphorus loads from barnyards and croplands in Otter Creek Watershed, Wisconsin

    USGS Publications Warehouse

    Wierl, Judy A.; Giddings, Elise M.P.; Bannerman, Roger T.

    1998-01-01

    Control of phosphorus from rural nonpoint sources is a major focus of current efforts to improve and protect water resources in Wisconsin and is recommended in almost every priority watershed plan prepared for the State's Nonpoint Source (NPS) Program. Barnyards and croplands usually are identified as the primary rural sources of phosphorus. Numerous questions have arisen about which of these two sources to control and about the method currently being used by the NPS program to compare phosphorus loads from barnyards and croplands. To evaluate the method, the U.S. Geological Survey (USGS), in cooperation with the Wisconsin Department of Natural Resources, used phosphorus-load and sediment-load data from streams and phosphorus concentrations in soils from the Otter Creek Watershed (located in the Sheboygan River Basin: fig. 1) in conjunction with two computer-based models. 

  9. Fabrication and evaluation of cold-formed/weld-brazed beta-titanium skin-stiffened compression panels

    NASA Technical Reports Server (NTRS)

    Royster, D. M.; Bales, T. T.; Davis, R. C.; Wiant, H. R.

    1983-01-01

    The room temperature and elevated temperature buckling behavior of cold formed beta titanium hat shaped stiffeners joined by weld brazing to alpha-beta titanium skins was determined. A preliminary set of single stiffener compression panels were used to develop a data base for material and panel properties. These panels were tested at room temperature and 316 C (600 F). A final set of multistiffener compression panels were fabricated for room temperature tests by the process developed in making the single stiffener panels. The overall geometrical dimensions for the multistiffener panels were determined by the structural sizing computer code PASCO. The data presented from the panel tests include load shortening curves, local buckling strengths, and failure loads. Experimental buckling loads are compared with the buckling loads predicted by the PASCO code. Material property data obtained from tests of ASTM standard dogbone specimens are also presented.

  10. Postbuckling behavior of axially compressed graphite-epoxy cylindrical panels with circular holes

    NASA Technical Reports Server (NTRS)

    Knight, N. F., Jr.; Starnes, J. H., Jr.

    1984-01-01

    The results of an experimental and analytical study of the effects of circular holes on the postbuckling behavior of graphite-epoxy cylindrical panels loaded in axial compression are presented. The STAGSC-1 general shell analysis computer code is used to determine the buckling and postbuckling response of the panels. The loaded, curved ends of the specimens were clamped by fixtures and the unloaded, straight edges were simply supported by knife-edge restraints. The panels are loaded by uniform end shortening to several times the end shortening at buckling. The unstable equilibrium path of the postbuckling response is obtained analytically by using a method based on controlling an equilibrium-path-arc-length parameter instead of the traditional load parameter. The effects of hole diameter, panel radius, and panel thickness on postbuckling response are considered in the study. Experimental results are compared with the analytical results and the failure characteristics of the graphite-epoxy panels are described.

  11. Estimates of long-term mean-annual nutrient loads considered for use in SPARROW models of the Midcontinental region of Canada and the United States, 2002 base year

    USGS Publications Warehouse

    Saad, David A.; Benoy, Glenn A.; Robertson, Dale M.

    2018-05-11

    Streamflow and nutrient concentration data needed to compute nitrogen and phosphorus loads were compiled from Federal, State, Provincial, and local agency databases and also from selected university databases. The nitrogen and phosphorus loads are necessary inputs to Spatially Referenced Regressions on Watershed Attributes (SPARROW) models. SPARROW models are a way to estimate the distribution, sources, and transport of nutrients in streams throughout the Midcontinental region of Canada and the United States. After screening the data, approximately 1,500 sites sampled by 34 agencies were identified as having suitable data for calculating the long-term mean-annual nutrient loads required for SPARROW model calibration. These final sites represent a wide range in watershed sizes, types of nutrient sources, and land-use and watershed characteristics in the Midcontinental region of Canada and the United States.

  12. Simulation of Blast Loading on an Ultrastructurally-based Computational Model of the Ocular Lens

    DTIC Science & Technology

    2013-10-01

    [Only figure-caption fragments are recoverable from this record: deformation-gradient components in the axial (F22) and radial (F11) directions, with very large deformations (approaching 800%) observed; additional panels show the deformation gradients and a normalized force-versus-displacement curve.]

  13. A simulation-based approach for evaluating logging residue handling systems.

    Treesearch

    B. Bruce Bare; Benjamin A. Jayne; Brian F. Anholt

    1976-01-01

    Describes a computer simulation model for evaluating logging residue handling systems. The flow of resources is traced through a prespecified combination of operations including yarding, chipping, sorting, loading, transporting, and unloading. The model was used to evaluate the feasibility of converting logging residues to chips that could be used, for example, to...

  14. Risky Business or Sharing the Load?--Social Flow in Collaborative Mobile Learning

    ERIC Educational Resources Information Center

    Ryu, Hokyoung; Parsons, David

    2012-01-01

    Mobile learning has been built upon the premise that we can transform traditional classroom or computer-based learning activities into a more ubiquitous and connected form of learning. Tentative outcomes from this assertion have been witnessed in many collaborative learning activities, but few analytic observations on what triggers this…

  15. An efficient formulation of Krylov's prediction model for train induced vibrations based on the dynamic reciprocity theorem.

    PubMed

    Degrande, G; Lombaert, G

    2001-09-01

    In Krylov's analytical prediction model, the free field vibration response during the passage of a train is written as the superposition of the effect of all sleeper forces, using Lamb's approximate solution for the Green's function of a halfspace. When this formulation is extended with the Green's functions of a layered soil, considerable computational effort is required if these Green's functions are needed in a wide range of source-receiver distances and frequencies. It is demonstrated in this paper how the free field response can alternatively be computed, using the dynamic reciprocity theorem, applied to moving loads. The formulation is based on the response of the soil due to the moving load distribution for a single axle load. The equations are written in the wave-number-frequency domain, accounting for the invariance of the geometry in the direction of the track. The approach allows for a very efficient calculation of the free field vibration response, distinguishing the quasistatic contribution from the effect of the sleeper passage frequency and its higher harmonics. The methodology is validated by means of in situ vibration measurements during the passage of a Thalys high-speed train on the track between Brussels and Paris. It is shown that the model has good predictive capabilities in the near field at low and high frequencies, but underestimates the response in the midfrequency band.

  16. Improved sonic-box computer program for calculating transonic aerodynamic loads on oscillating wings with thickness

    NASA Technical Reports Server (NTRS)

    Ruo, S. Y.

    1978-01-01

    A computer program was developed to account approximately for the effects of finite wing thickness in transonic potential flow over an oscillating wing of finite span. The program is based on the original sonic box computer program for planar wings, which was extended to account for the effect of wing thickness. Computational efficiency and accuracy were improved and swept trailing edges were accounted for. The nonuniform flow caused by finite thickness was accounted for by applying the local linearization concept with an appropriate coordinate transformation. A brief description of each computer routine and the applications of the cubic spline and spline surface data fitting techniques used in the program are given, and the method of input is shown in detail. Sample calculations as well as a complete listing of the computer program are presented.

  17. Implications of the Java language on computer-based patient records.

    PubMed

    Pollard, D; Kucharz, E; Hammond, W E

    1996-01-01

    The growth of the utilization of the World Wide Web (WWW) as a medium for the delivery of computer-based patient records (CBPR) has created a new paradigm in which clinical information may be delivered. Until recently the authoring tools and environment for application development on the WWW have been limited to Hyper Text Markup Language (HTML) utilizing common gateway interface scripts. While, at times, this provides an effective medium for the delivery of CBPR, it is a less than optimal solution. The server-centric dynamics and low levels of interactivity do not provide for a robust application which is required in a clinical environment. The emergence of Sun Microsystems' Java language is a solution to the problem. In this paper we examine the Java language and its implications to the CBPR. A quantitative and qualitative assessment was performed. The Java environment is compared to HTML and Telnet CBPR environments. Qualitative comparisons include level of interactivity, server load, client load, ease of use, and application capabilities. Quantitative comparisons include data transfer time delays. The Java language has demonstrated promise for delivering CBPRs.

  18. Optimization of an electromagnetic linear actuator using a network and a finite element model

    NASA Astrophysics Data System (ADS)

    Neubert, Holger; Kamusella, Alfred; Lienig, Jens

    2011-03-01

    Model-based design optimization leads to robust solutions only if the statistical deviations of design, load and ambient parameters from nominal values are considered. We describe an optimization methodology that involves these deviations as stochastic variables for an exemplary electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design of experiment (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are involved in the form of their density functions. In order to reduce the computational effort, we use response surfaces instead of the combined system model in all stochastic analysis steps. Thus, Monte-Carlo simulations can be applied. As a result we found an optimum system design meeting our requirements with regard to function and reliability.
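
    The surrogate step described above, Monte-Carlo sampling on a response surface rather than on the expensive combined model, can be sketched as follows. The quadratic surface, the toy system model, and the normal density of the stochastic parameter are illustrative assumptions, not the actuator model from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "expensive" system model: a response that depends on a
    # design variable x and a stochastic load parameter p.
    def system_model(x, p):
        return (x - 2.0) ** 2 + 0.5 * p * x

    # Fit a quadratic response surface in (x, p) from a small design of experiments.
    X = rng.uniform([0.0, -1.0], [4.0, 1.0], size=(50, 2))
    y = system_model(X[:, 0], X[:, 1])
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def surrogate(x, p):
        return coef @ np.array([1.0, x, p, x ** 2, p ** 2, x * p])

    # Monte-Carlo on the cheap surrogate: propagate the assumed density of p
    # through the model for a candidate design x = 1.8.
    p_samples = rng.normal(0.0, 0.3, size=10_000)
    responses = np.array([surrogate(1.8, p) for p in p_samples])
    mean, std = responses.mean(), responses.std()
    ```

    Because the surrogate is a handful of polynomial terms, the 10,000-sample Monte-Carlo loop costs almost nothing compared with re-running a finite element model at every sample.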

  19. Loads calibrations of strain gage bridges on the DAST project Aeroelastic Research Wing (ARW-1)

    NASA Technical Reports Server (NTRS)

    Eckstrom, C. V.

    1980-01-01

    The details of and results from the procedure used to calibrate strain gage bridges for measurement of wing structural loads for the DAST project ARW-1 wing are presented. Results are in the form of loads equations and comparison of computed loads vs. actual loads for two simulated flight loading conditions.

  20. Smart integrated microsystems: the energy efficiency challenge (Conference Presentation) (Plenary Presentation)

    NASA Astrophysics Data System (ADS)

    Benini, Luca

    2017-06-01

    The "internet of everything" envisions trillions of connected objects loaded with high-bandwidth sensors requiring massive amounts of local signal processing, fusion, pattern extraction and classification. From the computational viewpoint, the challenge is formidable and can be addressed only by pushing computing fabrics toward massive parallelism and brain-like energy efficiency levels. CMOS technology can still take us a long way toward this goal, but technology scaling is losing steam. Energy efficiency improvement will increasingly hinge on architecture, circuits, design techniques such as heterogeneous 3D integration, mixed-signal preprocessing, event-based approximate computing and non-Von-Neumann architectures for scalable acceleration.

  1. View southeast of computer controlled energy monitoring system. System replaced ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View southeast of computer controlled energy monitoring system. System replaced strip chart recorders and other instruments under the direct observation of the load dispatcher. - Thirtieth Street Station, Load Dispatch Center, Thirtieth & Market Streets, Railroad Station, Amtrak (formerly Pennsylvania Railroad Station), Philadelphia, Philadelphia County, PA

  2. Live load testing and load rating of five reinforced concrete bridges.

    DOT National Transportation Integrated Search

    2014-10-01

    Five cast-in-place concrete T-beam bridges (Eustis #5341, Whitefield #3831, Cambridge #3291, Eddington #5107, and Albion #2832) were live load tested. Revised load ratings were computed either using test data or detailed analysis when possi...

  3. Analysis of selected data from the triservice missile data base

    NASA Technical Reports Server (NTRS)

    Allen, Jerry M.; Shaw, David S.; Sawyer, Wallace C.

    1989-01-01

    An extremely large, systematic, axisymmetric-body/tail-fin data base has been gathered through tests of an innovative missile model design which is described herein. These data were originally obtained for incorporation into a missile aerodynamics code based on engineering methods (Program MISSILE3), but these data are also valuable as diagnostic test cases for developing computational methods because of the individual-fin data included in the data base. Detailed analyses of four sample cases from these data are presented to illustrate interesting individual-fin force and moment trends. These samples quantitatively show how bow shock, fin orientation, fin deflection, and body vortices can produce strong, unusual, and computationally challenging effects on individual fin loads. Flow-visualization photographs are examined to provide physical insight into the cause of these effects.

  4. SIP: A Web-Based Astronomical Image Processing Program

    NASA Astrophysics Data System (ADS)

    Simonetti, J. H.

    1999-12-01

    I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computer. Or, the instructor can inform the students of images to load from any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image which can be used for performing simple differential photometry, or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the most common denominator image file, the FITS format.
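
    The kind of image arithmetic and box statistics described above can be illustrated in a few NumPy lines. This is a generic sketch, not SIP's Java implementation, and the array values are synthetic.

    ```python
    import numpy as np

    # Two synthetic "images": a raw frame and a dark frame.
    raw = np.full((100, 100), 1000.0)
    raw[40:60, 40:60] += 500.0          # a bright region standing in for a star
    dark = np.full((100, 100), 100.0)

    # Combine by subtraction and scale by a constant, as in SIP's image arithmetic.
    calibrated = (raw - dark) * 1.0

    # Statistics for pixels inside a user-drawn box, as for simple photometry.
    box = calibrated[40:60, 40:60]
    box_mean, box_std = box.mean(), box.std()
    ```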

  5. 3-D inelastic analysis methods for hot section components (base program). [turbine blades, turbine vanes, and combustor liners

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1984-01-01

    A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.

  6. A computer program for calculation of doses and prices of injectable medications based on body weight or body surface area

    PubMed Central

    2004-01-01

    Abstract A computer program (CalcAnesth) was developed with Visual Basic for the purpose of calculating the doses and prices of injectable medications on the basis of body weight or body surface area. The drug names, concentrations, and prices are loaded from a drug database. This database is a simple text file that the user can easily create or modify. The animal names and body weights can be loaded from a similar database. After typing the dose and the units into the user interface, the results are automatically displayed. The program is able to open and save anesthetic protocols, and export or print the results. This CalcAnesth program can be useful in clinical veterinary anesthesiology and research. The rationale for dosing on the basis of body surface area is also discussed in this article. PMID:14979437
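
    The dose arithmetic described can be sketched as follows. The Meeh-type surface-area formula and the constant k = 10.1 are common textbook assumptions for illustration, not values taken from CalcAnesth itself.

    ```python
    # Dose by body weight: dose (mg) = dose rate (mg/kg) * weight (kg).
    def dose_by_weight(dose_rate_mg_per_kg, weight_kg):
        return dose_rate_mg_per_kg * weight_kg

    # Dose by body surface area, using a Meeh-type formula:
    #   BSA (m^2) = k * weight(g)**(2/3) / 10**4,
    # where k is species-specific (k = 10.1 is a commonly quoted value
    # for dogs -- an assumption here, not a CalcAnesth constant).
    def body_surface_area_m2(weight_kg, k=10.1):
        return k * (weight_kg * 1000.0) ** (2.0 / 3.0) / 1e4

    def dose_by_bsa(dose_rate_mg_per_m2, weight_kg, k=10.1):
        return dose_rate_mg_per_m2 * body_surface_area_m2(weight_kg, k)

    # Price follows from the injected volume and the price per milliliter.
    def price(dose_mg, concentration_mg_per_ml, price_per_ml):
        volume_ml = dose_mg / concentration_mg_per_ml
        return volume_ml * price_per_ml
    ```

    For a 10 kg dog, `dose_by_weight(2.0, 10.0)` gives a 20 mg dose, and `body_surface_area_m2(10.0)` evaluates to roughly 0.47 m².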

  7. Fundamental analysis of the failure of polymer-based fiber reinforced composites

    NASA Technical Reports Server (NTRS)

    Kanninen, M. F.; Rybicki, E. F.; Griffith, W. I.; Broek, D.

    1975-01-01

    A mathematical model predicting the strength of unidirectional fiber reinforced composites containing known flaws and with linear elastic-brittle material behavior was developed. The approach was to imbed a local heterogeneous region surrounding the crack tip into an anisotropic elastic continuum. This (1) permits an explicit analysis of the micromechanical processes involved in the fracture, and (2) remains simple enough to be useful in practical computations. Computations for arbitrary flaw size and orientation under arbitrary applied loads were performed. The mechanical properties were those of graphite epoxy. With the rupture properties arbitrarily varied to test the capabilities of the model to reflect real fracture modes, it was shown that fiber breakage, matrix crazing, crack bridging, matrix-fiber debonding, and axial splitting can all occur during a period of (gradually) increasing load prior to catastrophic failure. The calculations also reveal the sequential nature of the stable crack growth process preceding fracture.

  8. Remote control missile model test

    NASA Technical Reports Server (NTRS)

    Allen, Jerry M.; Shaw, David S.; Sawyer, Wallace C.

    1989-01-01

    An extremely large, systematic, axisymmetric body/tail fin data base was gathered through tests of an innovative missile model design which is described herein. These data were originally obtained for incorporation into a missile aerodynamics code based on engineering methods (Program MISSILE3), but can also be used as diagnostic test cases for developing computational methods because of the individual-fin data included in the data base. Detailed analysis of four sample cases from these data are presented to illustrate interesting individual-fin force and moment trends. These samples quantitatively show how bow shock, fin orientation, fin deflection, and body vortices can produce strong, unusual, and computationally challenging effects on individual fin loads. Comparisons between these data and calculations from the SWINT Euler code are also presented.

  9. Dynamic Analyses Including Joints Of Truss Structures

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith

    1991-01-01

    Method for mathematically modeling joints to assess influences of joints on dynamic response of truss structures developed in study. Only structures with low-frequency oscillations considered; only Coulomb friction and viscous damping included in analysis. Focus of effort to obtain finite-element mathematical models of joints exhibiting load-vs.-deflection behavior similar to measured load-vs.-deflection behavior of real joints. Experiments performed to determine stiffness and damping nonlinearities typical of joint hardware. Algorithm for computing coefficients of analytical joint models based on test data developed to enable study of linear and nonlinear effects of joints on global structural response. Besides intended application to large space structures, applications in nonaerospace community include ground-based antennas and earthquake-resistant steel-framed buildings.
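
    A minimal joint force-deflection model combining linear stiffness, viscous damping, and Coulomb friction, of the kind fitted to joint test data in this study, might look like the sketch below. The coefficients are illustrative, and the tanh smoothing of the friction sign function is a common numerical convenience, not necessarily the study's formulation.

    ```python
    import math

    # Joint restoring force as a function of deflection x (m) and
    # deflection rate v (m/s): linear stiffness + viscous damping
    # + Coulomb friction with a smoothed sign() near v = 0.
    def joint_force(x, v, k=1.0e5, c=50.0, f_coulomb=10.0, v_eps=1e-3):
        return k * x + c * v + f_coulomb * math.tanh(v / v_eps)
    ```

    Smoothing the sign function keeps the model integrable in a time-marching dynamic analysis, at the cost of slightly underestimating friction at very low sliding velocities.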

  10. A procedure for utilization of a damage-dependent constitutive model for laminated composites

    NASA Technical Reports Server (NTRS)

    Lo, David C.; Allen, David H.; Harris, Charles E.

    1992-01-01

    Described here is the procedure for utilizing a damage constitutive model to predict progressive damage growth in laminated composites. In this model, the effects of the internal damage are represented by strain-like second order tensorial damage variables and enter the analysis through damage dependent ply level and laminate level constitutive equations. The growth of matrix cracks due to fatigue loading is predicted by an experimentally based damage evolutionary relationship. This model is incorporated into a computer code called FLAMSTR. This code is capable of predicting the constitutive response and matrix crack damage accumulation in fatigue loaded laminated composites. The structure and usage of FLAMSTR are presented along with sample input and output files to assist the code user. As an example problem, an analysis of crossply laminates subjected to two stage fatigue loading was conducted and the resulting damage accumulation and stress redistribution were examined to determine the effect of variations in fatigue load amplitude applied during the first stage of the load history. It was found that the model predicts a significant loading history effect on damage evolution.

  11. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaptation on unstructured grids is a powerful tool for efficiently computing unsteady problems and resolving solution features of interest. Unfortunately, it causes load imbalances among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35 percent of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives an almost sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are within 3 percent of the optimal solution while requiring only 1 percent of the computational time.
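
    The heuristic remapping idea, assigning each new partition to the processor that already holds most of its data so that redistribution cost stays small, can be sketched in a few lines. This is a simplified greedy version for the case of one partition per processor, not the paper's exact algorithm.

    ```python
    from collections import Counter

    def remap(new_partition_of_cell, old_processor_of_cell, n_procs):
        """Greedy heuristic: give each new partition the processor that
        already owns the most of its cells, so few cells have to move."""
        # Count shared cells for every (new partition, old processor) pair.
        overlap = {}
        for cell, part in enumerate(new_partition_of_cell):
            proc = old_processor_of_cell[cell]
            overlap.setdefault(part, Counter())[proc] += 1
        # Handle partitions with the largest overlaps first, assigning each
        # its best still-unused processor.
        assignment, used = {}, set()
        parts = sorted(overlap, key=lambda p: -max(overlap[p].values()))
        for part in parts:
            for proc, _ in overlap[part].most_common():
                if proc not in used:
                    assignment[part] = proc
                    used.add(proc)
                    break
            else:  # every overlapping processor is taken: pick any free one
                assignment[part] = next(p for p in range(n_procs) if p not in used)
                used.add(assignment[part])
        return assignment
    ```

    For example, if cells 0-5 move to new partitions `[0, 0, 1, 1, 2, 2]` and previously lived on processors `[1, 1, 2, 2, 0, 0]`, the heuristic keeps every cell in place by mapping partition 0 to processor 1, partition 1 to processor 2, and partition 2 to processor 0.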

  12. Structural Loads Analysis for Wave Energy Converters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Rij, Jennifer A; Yu, Yi-Hsiang; Guo, Yi

    2017-06-03

    This study explores and verifies the generalized body-modes method for evaluating the structural loads on a wave energy converter (WEC). Historically, WEC design methodologies have focused primarily on accurately evaluating hydrodynamic loads, while methodologies for evaluating structural loads have yet to be fully considered and incorporated into the WEC design process. As wave energy technologies continue to advance, however, it has become increasingly evident that an accurate evaluation of the structural loads will enable an optimized structural design, as well as the potential utilization of composites and flexible materials, and hence reduce WEC costs. Although there are many computational fluid dynamics, structural analysis and fluid-structure-interaction (FSI) codes available, the application of these codes is typically too computationally intensive to be practical in the early stages of the WEC design process. The generalized body-modes method, however, is a reduced order, linearized, frequency-domain FSI approach, performed in conjunction with the linear hydrodynamic analysis, with computation times that could realistically be incorporated into the WEC design process.

  13. What does germane load mean? An empirical contribution to the cognitive load theory

    PubMed Central

    Debue, Nicolas; van de Leemput, Cécile

    2014-01-01

    While over the last decades, much attention has been paid to the mental workload in the field of human computer interactions, there is still a lack of consensus concerning the factors that generate it as well as the measurement methods that could reflect workload variations. Based on the multifactorial Cognitive Load Theory (CLT), our study aims to provide some food for thought about the subjective and objective measurement that can be used to disentangle the intrinsic, extraneous, and germane load. The purpose is to provide insight into the way cognitive load can explain how users' cognitive resources are allocated in the use of hypermedia, such as an online newspaper. A two-phase experiment has been conducted on the information retention from online news stories. Phase 1 (92 participants) examined the influence of multimedia content on performance as well as the relationships between cognitive loads and cognitive absorption. In Phase 2 (36 participants), eye-tracking data were collected in order to provide reliable and objective measures. Results confirmed that performance in information retention was impacted by the presence of multimedia content such as animations and pictures. The higher number of fixations on these animations suggests that users' attention could have been attracted by them. Results showed the expected opposite relationships between germane and extraneous load, a positive association between germane load and cognitive absorption and a non-linear association between intrinsic and germane load. The trends based on eye-tracking data analysis provide some interesting findings about the relationship between longer fixations, shorter saccades and cognitive load. Some issues are raised about the respective contribution of mean pupil diameter and Index of Cognitive Activity. PMID:25324806

  14. What does germane load mean? An empirical contribution to the cognitive load theory.

    PubMed

    Debue, Nicolas; van de Leemput, Cécile

    2014-01-01

    While over the last decades, much attention has been paid to the mental workload in the field of human computer interactions, there is still a lack of consensus concerning the factors that generate it as well as the measurement methods that could reflect workload variations. Based on the multifactorial Cognitive Load Theory (CLT), our study aims to provide some food for thought about the subjective and objective measurement that can be used to disentangle the intrinsic, extraneous, and germane load. The purpose is to provide insight into the way cognitive load can explain how users' cognitive resources are allocated in the use of hypermedia, such as an online newspaper. A two-phase experiment has been conducted on the information retention from online news stories. Phase 1 (92 participants) examined the influence of multimedia content on performance as well as the relationships between cognitive loads and cognitive absorption. In Phase 2 (36 participants), eye-tracking data were collected in order to provide reliable and objective measures. Results confirmed that performance in information retention was impacted by the presence of multimedia content such as animations and pictures. The higher number of fixations on these animations suggests that users' attention could have been attracted by them. Results showed the expected opposite relationships between germane and extraneous load, a positive association between germane load and cognitive absorption and a non-linear association between intrinsic and germane load. The trends based on eye-tracking data analysis provide some interesting findings about the relationship between longer fixations, shorter saccades and cognitive load. Some issues are raised about the respective contribution of mean pupil diameter and Index of Cognitive Activity.

  15. Optimization of cryoprotectant loading into murine and human oocytes.

    PubMed

    Karlsson, Jens O M; Szurek, Edyta A; Higgins, Adam Z; Lee, Sang R; Eroglu, Ali

    2014-02-01

    Loading of cryoprotectants into oocytes is an important step of the cryopreservation process, in which the cells are exposed to potentially damaging osmotic stresses and chemical toxicity. Thus, we investigated the use of physics-based mathematical optimization to guide design of cryoprotectant loading methods for mouse and human oocytes. We first examined loading of 1.5 M dimethyl sulfoxide (Me(2)SO) into mouse oocytes at 23°C. Conventional one-step loading resulted in rates of fertilization (34%) and embryonic development (60%) that were significantly lower than those of untreated controls (95% and 94%, respectively). In contrast, the mathematically optimized two-step method yielded much higher rates of fertilization (85%) and development (87%). To examine the causes for oocyte damage, we performed experiments to separate the effects of cell shrinkage and Me(2)SO exposure time, revealing that neither shrinkage nor Me(2)SO exposure single-handedly impairs the fertilization and development rates. Thus, damage during one-step Me(2)SO addition appears to result from interactions between the effects of Me(2)SO toxicity and osmotic stress. We also investigated Me(2)SO loading into mouse oocytes at 30°C. At this temperature, fertilization rates were again lower after one-step loading (8%) in comparison to mathematically optimized two-step loading (86%) and untreated controls (96%). Furthermore, our computer algorithm generated an effective strategy for reducing Me(2)SO exposure time, using hypotonic diluents for cryoprotectant solutions. With this technique, 1.5 M Me(2)SO was successfully loaded in only 2.5 min, with 92% fertilizability. Based on these promising results, we propose new methods to load cryoprotectants into human oocytes, designed using our mathematical optimization approach. Copyright © 2013 Elsevier Inc. All rights reserved.
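
    The role of the osmotic model in this kind of optimization can be illustrated with a two-parameter membrane-transport sketch in normalized units (isotonic osmolality and isotonic cell water volume both set to 1). The permeability values, step durations, and CPA concentration below are illustrative assumptions, not the fitted oocyte parameters from the study.

    ```python
    def simulate_loading(schedule, lp=1.0, ps=0.5, dt=0.001):
        """Track normalized cell water volume w and intracellular CPA
        osmoles s while external CPA is applied in steps.
        schedule: list of (external CPA osmolality, duration) pairs."""
        w, s = 1.0, 0.0
        w_min = 1.0
        for me_cpa, duration in schedule:
            for _ in range(int(duration / dt)):
                osm_in = (1.0 + s) / w        # impermeant salt + CPA inside
                osm_out = 1.0 + me_cpa        # isotonic salt + CPA outside
                w -= lp * (osm_out - osm_in) * dt   # osmotic water efflux
                s += ps * (me_cpa - s / w) * dt     # CPA permeation
                w_min = min(w_min, w)
        return w, s, w_min

    # One-step loading of 5 osmolal CPA vs. a two-step schedule.
    w1, s1, min1 = simulate_loading([(5.0, 60.0)])
    w2, s2, min2 = simulate_loading([(2.5, 30.0), (5.0, 30.0)])
    ```

    Comparing the minimum volume reached by the two schedules reproduces the qualitative result motivating the paper's optimization: splitting the addition into steps reduces the osmotic excursion, at the cost of longer exposure time.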

  16. Optimization of Cryoprotectant Loading into Murine and Human Oocytes

    PubMed Central

    Karlsson, Jens O.M.; Szurek, Edyta A.; Higgins, Adam Z.; Lee, Sang R.; Eroglu, Ali

    2014-01-01

    Loading of cryoprotectants into oocytes is an important step of the cryopreservation process, in which the cells are exposed to potentially damaging osmotic stresses and chemical toxicity. Thus, we investigated the use of physics-based mathematical optimization to guide design of cryoprotectant loading methods for mouse and human oocytes. We first examined loading of 1.5 M dimethylsulfoxide (Me2SO) into mouse oocytes at 23°C. Conventional one-step loading resulted in rates of fertilization (34%) and embryonic development (60%) that were significantly lower than those of untreated controls (95% and 94%, respectively). In contrast, the mathematically optimized two-step method yielded much higher rates of fertilization (85%) and development (87%). To examine the causes for oocyte damage, we performed experiments to separate the effects of cell shrinkage and Me2SO exposure time, revealing that neither shrinkage nor Me2SO exposure single-handedly impairs the fertilization and development rates. Thus, damage during one-step Me2SO addition appears to result from interactions between the effects of Me2SO toxicity and osmotic stress. We also investigated Me2SO loading into mouse oocytes at 30°C. At this temperature, fertilization rates were again lower after one-step loading (8%) in comparison to mathematically optimized two-step loading (86%) and untreated controls (96%). Furthermore, our computer algorithm generated an effective strategy for reducing Me2SO exposure time, using hypotonic diluents for cryoprotectant solutions. With this technique, 1.5 M Me2SO was successfully loaded in only 2.5 min, with 92% fertilizability. Based on these promising results, we propose new methods to load cryoprotectants into human oocytes, designed using our mathematical optimization approach. PMID:24246951

  17. A novel strategy for load balancing of distributed medical applications.

    PubMed

    Logeswaran, Rajasvaran; Chen, Li-Choo

    2012-04-01

    Current trends in medicine, specifically in the electronic handling of medical applications, ranging from digital imaging, paperless hospital administration and electronic medical records, telemedicine, to computer-aided diagnosis, create a burden on the network. Distributed Service Architectures, such as Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA) and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach in load balancing, the Random Sender Initiated Algorithm, for distribution of tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms, the Random Node Selection Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded conditions.
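
    The two benchmark policies mentioned, random node selection and shortest queue, can be contrasted in a toy dispatch simulation. This sketches only the benchmarks; the paper's Random Sender Initiated Algorithm itself is not reproduced here.

    ```python
    import random

    def simulate(policy, n_nodes=8, n_tasks=2000, seed=1):
        """Toy dispatcher: each task adds one unit of work to the queue of
        the node chosen by `policy`; returns the final queue imbalance."""
        rng = random.Random(seed)
        queues = [0] * n_nodes
        for _ in range(n_tasks):
            if policy == "random":
                node = rng.randrange(n_nodes)
            elif policy == "shortest":
                node = min(range(n_nodes), key=lambda i: queues[i])
            else:
                raise ValueError(policy)
            queues[node] += 1
        return max(queues) - min(queues)

    imbalance_random = simulate("random")
    imbalance_shortest = simulate("shortest")
    ```

    Random selection is cheap but leaves a statistical imbalance of order the square root of the per-node load, while shortest-queue dispatch balances almost perfectly at the cost of querying every node's queue length on each dispatch.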

  18. A simplified computer solution for the flexibility matrix of contacting teeth for spiral bevel gears

    NASA Technical Reports Server (NTRS)

    Hsu, C. Y.; Cheng, H. S.

    1987-01-01

    A computer code, FLEXM, was developed to calculate the flexibility matrices of contacting teeth for spiral bevel gears using a simplified analysis based on the elementary beam theory for the deformation of gear and shaft. The simplified theory requires a computer time at least one order of magnitude less than that needed for the complete finite element method analysis reported earlier by H. Chao, and it is much easier to apply for different gear and shaft geometries. Results were obtained for a set of spiral bevel gears. The teeth deflections due to torsion, bending moment, shearing strain and axial force were found to be on the order of 10(-5), 10(-6), 10(-7), and 10(-8), respectively. Thus, the torsional deformation was the most predominant factor. In the analysis of dynamic load, response frequencies were found to be larger when the mass or moment of inertia was smaller or the stiffness was larger. The change in damping coefficient had little influence on the resonance frequency, but had a marked influence on the dynamic load at the resonant frequencies.

  19. Estimation of Sonic Fatigue by Reduced-Order Finite Element Based Analyses

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Przekop, Adam

    2006-01-01

    A computationally efficient, reduced-order method is presented for prediction of sonic fatigue of structures exhibiting geometrically nonlinear response. A procedure to determine the nonlinear modal stiffness using commercial finite element codes allows the coupled nonlinear equations of motion in physical degrees of freedom to be transformed to a smaller coupled system of equations in modal coordinates. The nonlinear modal system is first solved using a computationally light equivalent linearization solution to determine if the structure responds to the applied loading in a nonlinear fashion. If so, a higher fidelity numerical simulation in modal coordinates is undertaken to more accurately determine the nonlinear response. Comparisons of displacement and stress response obtained from the reduced-order analyses are made with results obtained from numerical simulation in physical degrees-of-freedom. Fatigue life predictions from nonlinear modal and physical simulations are made using the rainflow cycle counting method in a linear cumulative damage analysis. Results computed for a simple beam structure under a random acoustic loading demonstrate the effectiveness of the approach and compare favorably with results obtained from the solution in physical degrees-of-freedom.
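
    The final step described above, rainflow cycle counts fed into a linear cumulative damage analysis, can be sketched with a Palmgren-Miner sum. The Basquin-type S-N constants below are illustrative assumptions, not material data from the paper.

    ```python
    # Assumed Basquin-type S-N curve: S = sf * N**b, so N(S) = (S / sf)**(1 / b).
    def cycles_to_failure(stress_amplitude, sf=900.0, b=-0.1):
        return (stress_amplitude / sf) ** (1.0 / b)

    # Palmgren-Miner linear damage sum over counted cycles.
    # cycle_counts: (stress amplitude, number of cycles) pairs, e.g. as
    # produced by rainflow counting of a stress time history.
    def miner_damage(cycle_counts):
        return sum(n / cycles_to_failure(s) for s, n in cycle_counts)

    # Damage accumulated by one repetition of a hypothetical loading block.
    damage = miner_damage([(300.0, 1e4), (450.0, 1e3), (600.0, 1e2)])
    block_repeats_to_failure = 1.0 / damage
    ```

    Failure is predicted when the damage sum reaches 1, so the reciprocal of the per-block damage estimates how many times the counted loading block can be repeated.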

  20. Fundamental analysis of the failure of polymer-based fiber reinforced composites

    NASA Technical Reports Server (NTRS)

    Kanninen, M. F.; Rybicki, E. F.; Griffith, W. I.; Broek, D.

    1976-01-01

    A mathematical model is described which will permit predictions of the strength of fiber reinforced composites containing known flaws to be made from the basic properties of their constituents. The approach was to embed a local heterogeneous region (LHR) surrounding the crack tip into an anisotropic elastic continuum. The model should (1) permit an explicit analysis of the micromechanical processes involved in the fracture process, and (2) remain simple enough to be useful in practical computations. Computations for arbitrary flaw size and orientation under arbitrary applied load combinations were performed from unidirectional composites with linear elastic-brittle constituent behavior. The mechanical properties were nominally those of graphite epoxy. With the rupture properties arbitrarily varied to test the capability of the model to reflect real fracture modes in fiber composites, it was shown that fiber breakage, matrix crazing, crack bridging, matrix-fiber debonding, and axial splitting can all occur during a period of (gradually) increasing load prior to catastrophic fracture. The computations reveal qualitatively the sequential nature of the stable crack process that precedes fracture.

  1. Deformation and fracture of explosion-welded Ti/Al plates: A synchrotron-based study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E, J. C.; Huang, J. Y.; Bie, B. X.

    Here, explosion-welded Ti/Al plates are characterized with energy dispersive spectroscopy and x-ray computed tomography, and exhibit a smooth, well-jointed interface. We perform dynamic and quasi-static uniaxial tension experiments on Ti/Al with the loading direction either perpendicular or parallel to the Ti/Al interface, using a mini split Hopkinson tension bar and a material testing system in conjunction with time-resolved synchrotron x-ray imaging. X-ray imaging and strain-field mapping reveal different deformation mechanisms responsible for anisotropic bulk-scale responses, including yield strength, ductility and rate sensitivity. Deformation and fracture are achieved predominantly in the Al layer for perpendicular loading, but both the Ti and Al layers as well as the interface play a role for parallel loading. The rate sensitivity of Ti/Al follows those of the constituent metals. For perpendicular loading, a single deformation band develops in the Al layer under quasi-static loading, while multiple deformation bands nucleate simultaneously under dynamic loading, leading to a higher dynamic fracture strain. For parallel loading, the interface impedes the growth of deformation and results in increased ductility of Ti/Al under quasi-static loading, while interface fracture occurs under dynamic loading due to the disparity in Poisson's contraction.

  2. Deformation and fracture of explosion-welded Ti/Al plates: A synchrotron-based study

    DOE PAGES

    E, J. C.; Huang, J. Y.; Bie, B. X.; ...

    2016-08-02

    Here, explosion-welded Ti/Al plates are characterized with energy dispersive spectroscopy and x-ray computed tomography, and exhibit a smooth, well-jointed interface. We perform dynamic and quasi-static uniaxial tension experiments on Ti/Al with the loading direction either perpendicular or parallel to the Ti/Al interface, using a mini split Hopkinson tension bar and a material testing system in conjunction with time-resolved synchrotron x-ray imaging. X-ray imaging and strain-field mapping reveal different deformation mechanisms responsible for anisotropic bulk-scale responses, including yield strength, ductility and rate sensitivity. Deformation and fracture are achieved predominantly in the Al layer for perpendicular loading, but both the Ti and Al layers as well as the interface play a role for parallel loading. The rate sensitivity of Ti/Al follows those of the constituent metals. For perpendicular loading, a single deformation band develops in the Al layer under quasi-static loading, while multiple deformation bands nucleate simultaneously under dynamic loading, leading to a higher dynamic fracture strain. For parallel loading, the interface impedes the growth of deformation and results in increased ductility of Ti/Al under quasi-static loading, while interface fracture occurs under dynamic loading due to the disparity in Poisson's contraction.

  3. Application of long-term simulation programs for analysis of system islanding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sancha, J.L.; Llorens, M.L.; Moreno, J.M.

    1997-02-01

    This paper describes the main results and conclusions from the application of two different long-term stability programs to the analysis of a system islanding scenario for a study case developed by Red Electrica de Espana (REE), based on the Spanish system. The two main goals were to evaluate the performance of both programs and the influence of some important control and protection elements (tie-line loss-of-synchronism relays, underfrequency load shedding, load-frequency control, and power plant dynamics). Conclusions about modeling and computational requirements for system islanding (frequency) scenarios and the use of long-term stability programs are presented.

  4. Partitioning Strategy Using Static Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Seo, Yongjin; Soo Kim, Hyeon

    2016-08-01

    Flight software is the software that runs on satellites' on-board computers. It must satisfy requirements such as real-time operation and reliability, and the Integrated Modular Avionics (IMA) architecture is used to meet them. The IMA architecture introduces the concept of partitions, which changes how flight software is configured: software that formerly ran on one system must now be divided into many partitions when loaded. Existing studies address this issue with experience-based partitioning methods, but such methods cannot be reused. In this respect, this paper proposes a partitioning method that is reusable and consistent.

  5. Magnetic Random Access Memory for Embedded Computing

    DTIC Science & Technology

    2007-10-29

    ... the free layer, whose resistance ... 2. Develop and model data storage circuits based on the MTJ cells. 3. Integrate the MTJ cells into a CMOS ... suggested the two methods shown in Fig. 4.5 [95]. The circuit shown at the top of the figure uses NMOS pass transistors to load data, which is the simplest method but requires careful design to avoid charge sharing and to accommodate the data-dependent loading seen at the DATA input. With additional

  6. Analysis of vibrational load influence upon passengers in trains with a compulsory body tilt

    NASA Astrophysics Data System (ADS)

    Antipin, D. Ya; Kobishchanov, V. V.; Lapshin, V. F.; Mitrakov, A. S.; Shorokhov, S. G.

    2017-02-01

    A procedure is offered for forecasting the vibrational load on passengers of rolling stock equipped with a compulsory body-tilt system for railroad curves. The procedure is based on computer simulation methods and the use of solid-state models of anthropometric mannequins. As a result of the investigations carried out, criteria for estimating passenger comfort levels in the rolling stock under consideration are substantiated. The procedure is validated using the example of a promising domestic rolling stock with compulsory body tilt on railroad curves.

  7. Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing

    NASA Technical Reports Server (NTRS)

    Sterling, T. L.; Zima, H. P.

    2002-01-01

    Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook to future work.

  8. Computer use, sleep duration and health symptoms: a cross-sectional study of 15-year olds in three countries.

    PubMed

    Nuutinen, Teija; Roos, Eva; Ray, Carola; Villberg, Jari; Välimaa, Raili; Rasmussen, Mette; Holstein, Bjørn; Godeau, Emmanuelle; Beck, Francois; Léger, Damien; Tynjälä, Jorma

    2014-08-01

    This study investigated whether computer use is associated with health symptoms through sleep duration among 15-year olds in Finland, France and Denmark. We used data from the WHO cross-national Health Behaviour in School-aged Children study collected in Finland, France and Denmark in 2010, including data on 5,402 adolescents (mean age 15.61 (SD 0.37), girls 53%). Symptoms assessed included feeling low, irritability/bad temper, nervousness, headache, stomachache, backache, and feeling dizzy. We used structural equation modeling to explore the mediating effect of sleep duration on the association between computer use and symptom load. Adolescents slept approximately 8 h a night and computer use was approximately 2 h a day. Computer use was associated with shorter sleep duration and higher symptom load. Sleep duration partly mediated the association between computer use and symptom load, but the indirect effects of sleep duration were quite modest in all countries. Sleep duration may be a potential underlying mechanism behind the association between computer use and health symptoms.
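    The mediation analysis described above can be illustrated with a minimal sketch: the indirect effect is the product of the path from predictor to mediator (a) and the path from mediator to outcome controlling for the predictor (b). The data and effect sizes below are synthetic and purely illustrative; the study itself used structural equation modeling rather than plain least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
computer_use = rng.normal(2.0, 0.5, n)                       # hours/day (synthetic)
sleep = 9.0 - 0.5 * computer_use + rng.normal(0, 0.3, n)     # hours/night
symptoms = 5.0 - 0.8 * sleep + 0.2 * computer_use + rng.normal(0, 0.3, n)

def ols(X, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols([computer_use], sleep)[1]            # path a: use -> sleep
b = ols([computer_use, sleep], symptoms)[2]  # path b: sleep -> symptoms, controlling for use
indirect = a * b                             # mediated (indirect) effect of use on symptoms
```

With more computer use shortening sleep (a < 0) and shorter sleep raising symptom load (b < 0), the indirect effect a·b comes out positive, mirroring the paper's finding of a modest positive mediated path.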

  9. An efficient parallel termination detection algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.
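    As a much-simplified illustration of the tree-based idea (not the SKR algorithm or the authors' algorithm, both of which must also account for in-flight messages and reactivation of idle processors), a single bottom-up traverse can report whether every processor in a subtree is idle:

```python
def subtree_idle(node, children, idle):
    """Report True iff every processor in node's subtree is idle.
    A real detector repeats such traverses and counts messages,
    since an idle processor may be reactivated by a late message."""
    return idle[node] and all(subtree_idle(c, children, idle) for c in children[node])

# A five-processor tree rooted at 0 (topology is illustrative)
children = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}
idle = {0: True, 1: True, 2: True, 3: True, 4: False}
busy_somewhere = subtree_idle(0, children, idle)   # False: processor 4 is busy
idle[4] = True
all_done = subtree_idle(0, children, idle)         # True: root can declare termination
```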

  10. Evaluating measurement invariance across assessment modes of phone interview and computer self-administered survey for the PROMIS measures in a population-based cohort of localized prostate cancer survivors.

    PubMed

    Wang, Mian; Chen, Ronald C; Usinger, Deborah S; Reeve, Bryce B

    2017-11-01

    To evaluate measurement invariance (phone interview vs computer self-administered survey) of 15 PROMIS measures completed by a population-based cohort of localized prostate cancer survivors. Participants were part of the North Carolina Prostate Cancer Comparative Effectiveness and Survivorship Study. Of the 952 men who took the phone interview at 24 months post-treatment, 401 also completed the same survey online using a home computer. Unidimensionality of the PROMIS measures was examined using single-factor confirmatory factor analysis (CFA) models. Measurement invariance testing was conducted using longitudinal CFA via a model comparison approach. For strongly or partially strongly invariant measures, changes in the latent factors and factor autocorrelations were also estimated and tested. Six measures (sleep disturbance, sleep-related impairment, diarrhea, illness impact-negative, illness impact-positive, and global satisfaction with sex life) had locally dependent items, and therefore model modifications had to be made on these domains prior to measurement invariance testing. Overall, seven measures achieved strong invariance (all items had equal loadings and thresholds), and four measures achieved partial strong invariance (each measure had one item with unequal loadings and thresholds). Three measures (pain interference, interest in sexual activity, and global satisfaction with sex life) failed to establish configural invariance due to between-mode differences in factor patterns. This study supports the use of phone-based live interviewers in lieu of PC-based assessment (when needed) for many of the PROMIS measures.

  11. Performance analysis of parallel branch and bound search with the hypercube architecture

    NASA Technical Reports Server (NTRS)

    Mraz, Richard T.

    1987-01-01

    With the availability of commercial parallel computers, researchers are examining new classes of problems which might benefit from parallel computing. This paper presents results of an investigation of the class of search-intensive problems. The specific problem discussed is the Least-Cost Branch and Bound search method of deadline job scheduling. An object-oriented design methodology was used to map the problem into a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithm for a specific computer. The experiments analyze the computation time, the speedup over a VAX 11/785, and the load balance of the problem when using a loosely coupled multiprocessor system based on the hypercube architecture.

  12. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    NASA Technical Reports Server (NTRS)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.

  13. Estimation of the net acid load of the diet of ancestral preagricultural Homo sapiens and their hominid ancestors.

    PubMed

    Sebastian, Anthony; Frassetto, Lynda A; Sellmeyer, Deborah E; Merriam, Renée L; Morris, R Curtis

    2002-12-01

    Natural selection has had < 1% of hominid evolutionary time to eliminate the inevitable maladaptations consequent to the profound transformation of the human diet resulting from the inventions of agriculture and animal husbandry. The objective was to estimate the net systemic load of acid (net endogenous acid production; NEAP) from retrojected ancestral preagricultural diets and to compare it with that of contemporary diets, which are characterized by an imbalance of nutrient precursors of hydrogen and bicarbonate ions that induces a lifelong, low-grade, pathogenically significant systemic metabolic acidosis. Using established computational methods, we computed NEAP for a large number of retrojected ancestral preagricultural diets and compared them with computed and measured values for typical American diets. The mean (+/- SD) NEAP for 159 retrojected preagricultural diets was -88 +/- 82 mEq/d; 87% were net base-producing. The computational model predicted NEAP for the average American diet (as recorded in the third National Health and Nutrition Examination Survey) as 48 mEq/d, within a few percentage points of published measured values for free-living Americans; the model, therefore, was not biased toward generating negative NEAP values. The historical shift from negative to positive NEAP was accounted for by the displacement of high-bicarbonate-yielding plant foods in the ancestral diet by cereal grains and energy-dense, nutrient-poor foods in the contemporary diet-neither of which are net base-producing. The findings suggest that diet-induced metabolic acidosis and its sequelae in humans eating contemporary diets reflect a mismatch between the nutrient composition of the diet and genetically determined nutritional requirements for optimal systemic acid-base status.
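    The paper's full computational model is not reproduced here, but a widely used simplified estimator of NEAP from diet composition is a Frassetto-style protein-to-potassium regression. The formula below is that published estimator as best recalled, and the example intakes are invented for illustration, not taken from the study:

```python
def neap_estimate(protein_g_per_day, potassium_meq_per_day):
    """Estimated net endogenous acid production (mEq/day) from the
    protein-to-potassium ratio (Frassetto-style regression estimate;
    coefficients quoted from memory, treat as illustrative)."""
    return -10.2 + 54.5 * (protein_g_per_day / potassium_meq_per_day)

# Contemporary diet: protein high relative to potassium -> net acid producing
modern = neap_estimate(95.0, 70.0)
# Extremely potassium-rich, plant-based intake -> net base producing (NEAP < 0)
plant_rich = neap_estimate(100.0, 600.0)
```

The sign change with rising potassium intake mirrors the paper's finding that displacing high-bicarbonate-yielding plant foods shifts NEAP from negative to positive.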

  14. Parameter estimation of a nonlinear Burger's model using nanoindentation and finite element-based inverse analysis

    NASA Astrophysics Data System (ADS)

    Hamim, Salah Uddin Ahmed

    Nanoindentation involves probing a hard diamond tip into a material, while the load and the displacement experienced by the tip are recorded continuously. This load-displacement data is a direct function of the material's innate stress-strain behavior. Thus, it is theoretically possible to extract mechanical properties of a material through nanoindentation. However, due to various nonlinearities associated with nanoindentation, the process of interpreting load-displacement data into material properties is difficult. Although simple elastic behavior can be characterized easily, a method to characterize complicated material behavior such as nonlinear viscoelasticity is still lacking. In this study, a nanoindentation-based material characterization technique is developed to characterize soft materials exhibiting nonlinear viscoelasticity. The nanoindentation experiment was modeled in finite element analysis software (ABAQUS), where nonlinear viscoelastic behavior was incorporated using a user-defined subroutine (UMAT). The model parameters were calibrated using a process called inverse analysis. In this study, a surrogate model-based approach was used for the inverse analysis. The different factors affecting the surrogate model performance are analyzed in order to optimize the performance with respect to the computational cost.
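    A surrogate-model-based inverse analysis of the kind described can be sketched as follows, with a cheap analytic function standing in for the expensive finite element simulation; the model form, sampling plan, and parameter values are all hypothetical:

```python
import numpy as np

def fe_model(p):
    """Hypothetical 'simulator': indentation load at a fixed depth as a
    function of one material parameter p (stands in for an ABAQUS run)."""
    return 2.0 * p + 0.1 * p ** 2

# Step 1: sample the expensive simulator at a few design points
p_samples = np.linspace(0.5, 5.0, 8)
loads = np.array([fe_model(p) for p in p_samples])

# Step 2: fit a cheap surrogate (quadratic polynomial) to the samples
surrogate = np.poly1d(np.polyfit(p_samples, loads, 2))

# Step 3: inverse analysis -- search the surrogate for the parameter value
# that best reproduces a "measured" load (here generated at p = 3.2)
measured = fe_model(3.2)
grid = np.linspace(0.5, 5.0, 2001)
p_hat = grid[np.argmin((surrogate(grid) - measured) ** 2)]
```

Because every trial evaluation hits the surrogate rather than the simulator, the search cost is decoupled from the finite element cost, which is the point of the approach.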

  15. Collectives for Multiple Resource Job Scheduling Across Heterogeneous Servers

    NASA Technical Reports Server (NTRS)

    Tumer, K.; Lawson, J.

    2003-01-01

    Efficient management of large-scale, distributed data storage and processing systems is a major challenge for many computational applications. Many of these systems are characterized by multi-resource tasks processed across a heterogeneous network. Conventional approaches, such as load balancing, work well for centralized, single-resource problems, but break down in the more general case. In addition, most approaches are often based on heuristics which do not directly attempt to optimize the world utility. In this paper, we propose an agent-based control system using the theory of collectives. We configure the servers of our network with agents who make local job scheduling decisions. These decisions are based on local goals which are constructed to be aligned with the objective of optimizing the overall efficiency of the system. We demonstrate that multi-agent systems in which all the agents attempt to optimize the same global utility function (team game) only marginally outperform conventional load balancing. On the other hand, agents configured using collectives outperform both team games and load balancing (by up to four times for the latter), despite their distributed nature and their limited access to information.

  16. Machine Learning Approach for Classifying Multiple Sclerosis Courses by Combining Clinical Data with Lesion Loads and Magnetic Resonance Metabolic Features.

    PubMed

    Ion-Mărgineanu, Adrian; Kocevar, Gabriel; Stamile, Claudio; Sima, Diana M; Durand-Dubief, Françoise; Van Huffel, Sabine; Sappey-Marinier, Dominique

    2017-01-01

    Purpose: The purpose of this study is classifying multiple sclerosis (MS) patients in the four clinical forms as defined by the McDonald criteria using machine learning algorithms trained on clinical data combined with lesion loads and magnetic resonance metabolic features. Materials and Methods: Eighty-seven MS patients [12 Clinically Isolated Syndrome (CIS), 30 Relapse Remitting (RR), 17 Primary Progressive (PP), and 28 Secondary Progressive (SP)] and 18 healthy controls were included in this study. Longitudinal data available for each MS patient included clinical (e.g., age, disease duration, Expanded Disability Status Scale), conventional magnetic resonance imaging and spectroscopic imaging. We extract N-acetyl-aspartate (NAA), Choline (Cho), and Creatine (Cre) concentrations, and we compute three features for each spectroscopic grid by averaging metabolite ratios (NAA/Cho, NAA/Cre, Cho/Cre) over good quality voxels. We built linear mixed-effects models to test for statistically significant differences between MS forms. We test nine binary classification tasks on clinical data, lesion loads, and metabolic features, using a leave-one-patient-out cross-validation method based on 100 random patient-based bootstrap selections. We compute F1-scores and BAR values after tuning Linear Discriminant Analysis (LDA), Support Vector Machines with Gaussian kernel (SVM-rbf), and Random Forests. Results: Statistically significant differences were found between the disease starting points of each MS form using four different response variables: Lesion Load, NAA/Cre, NAA/Cho, and Cho/Cre ratios. Training SVM-rbf on clinical and lesion loads yields F1-scores of 71-72% for CIS vs. RR and CIS vs. RR+SP, respectively. For RR vs. PP we obtained good classification results (maximum F1-score of 85%) after training LDA on clinical and metabolic features, while for RR vs. SP we obtained slightly higher classification results (maximum F1-score of 87%) after training LDA and SVM-rbf on clinical, lesion loads and metabolic features. Conclusions: Our results suggest that metabolic features are better at differentiating between relapsing-remitting and primary progressive forms, while lesion loads are better at differentiating between relapsing-remitting and secondary progressive forms. Therefore, combining clinical data with magnetic resonance lesion loads and metabolic features can improve the discrimination between relapsing-remitting and progressive forms.

  17. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
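    The first computation step described, an ordinary-least-squares turbidity-to-SSC model followed by load computation from the paired streamflow series, can be sketched as follows. The calibration values are invented; the 0.0027 factor is the standard conversion from (mg/L x ft³/s) to tons/day:

```python
import numpy as np

# Paired calibration samples (illustrative values, not from the report)
turbidity = np.array([12., 35., 60., 110., 180., 250.])  # FNU
ssc = np.array([18., 55., 90., 170., 260., 380.])        # measured SSC, mg/L

# Step 1: simple linear (ordinary least squares) regression of SSC on turbidity
slope, intercept = np.polyfit(turbidity, ssc, 1)

def ssc_from_turbidity(t):
    return intercept + slope * t

# Step 2: apply the model to continuous sensor data, then compute loads
turb_series = np.array([20., 40., 80.])    # FNU, continuous in-stream record
flow_series = np.array([150., 300., 900.]) # paired streamflow, ft^3/s
conc = ssc_from_turbidity(turb_series)     # computed SSC time series, mg/L
# 0.0027 converts (mg/L x ft^3/s) to tons/day
load_tons_per_day = conc * flow_series * 0.0027
```

In the report's procedure, this simple model is only accepted if its model standard percentage error meets the criterion; otherwise a turbidity-streamflow multiple regression is compared against it.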

  18. Global strength assessment in oblique waves of a large gas carrier ship, based on a non-linear iterative method

    NASA Astrophysics Data System (ADS)

    Domnisoru, L.; Modiga, A.; Gasparotti, C.

    2016-08-01

    At the ship design stage, the first step of the hull structural assessment is the longitudinal strength analysis, with head-wave equivalent loads prescribed by the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis that considers the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on the non-linearities of the 3D hull offset lines, and involves three interlinked iterative cycles on the floating, pitch-trim and roll-trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the wave-induced loads on the ship's girder are obtained. As a numerical study case we have considered a large LPG liquefied petroleum gas carrier. The numerical results for the large LPG are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.

  19. Mass Memory Storage Devices for AN/SLQ-32(V).

    DTIC Science & Technology

    1985-06-01

    tactical programs and libraries into the AN/UYK-19 computer, the RP-16 microprocessor, and other peripheral processors (e.g., ADLS and Band 1) will be ... software must be loaded into computer memory from the 4-track magnetic tape cartridges (MTCs) on which the programs are stored. Program load begins ... software. Future computer programs, which will reside in peripheral processors, include the Automated Decoy Launching System (ADLS) and Band 1. As

  20. A computer software system for the generation of global ocean tides including self-gravitation and crustal loading effects

    NASA Technical Reports Server (NTRS)

    Estes, R. H.

    1977-01-01

    A computer software system is described which computes global numerical solutions of the integro-differential Laplace tidal equations, including dissipation terms and ocean loading and self-gravitation effects, for arbitrary diurnal and semidiurnal tidal constituents. The integration algorithm features a successive approximation scheme for the integro-differential system, with time stepping forward differences in the time variable and central differences in spatial variables.
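    The time-stepping scheme described (forward differences in time, central differences in space) can be illustrated on a 1-D diffusion toy problem rather than the full integro-differential tidal equations. The grid and coefficient values below are arbitrary, chosen only to satisfy the explicit stability bound dt*nu/dx^2 <= 1/2:

```python
import numpy as np

# Forward-time, centered-space (FTCS) update on a periodic 1-D grid.
# This is a stand-in for the spatial/temporal discretization style only;
# the actual software also iterates a successive-approximation scheme for
# the self-gravitation and loading integral terms.
nx, dx, dt, nu = 50, 1.0, 0.2, 1.0     # dt*nu/dx**2 = 0.2 <= 0.5, stable
u = np.zeros(nx)
u[nx // 2] = 1.0                        # initial bump
for _ in range(100):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # central in space
    u = u + dt * nu * lap                                   # forward in time
```

With periodic boundaries the update conserves the total of u while smoothing the bump, a quick sanity check on the stencil.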

  1. Computing Operating Characteristics Of Bearing/Shaft Systems

    NASA Technical Reports Server (NTRS)

    Moore, James D.

    1996-01-01

    SHABERTH computer program predicts operating characteristics of bearings in multibearing load-support system. Lubricated and nonlubricated bearings modeled. Calculates loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on single shaft. Provides for analysis of reaction of system to termination of supply of lubricant to bearings and other lubricated mechanical elements. Valuable in design and analysis of shaft/bearing systems. Two versions of SHABERTH available. Cray version (LEW-14860), "Computing Thermal Performances Of Shafts and Bearings". IBM PC version (MFS-28818), written for IBM PC-series and compatible computers running MS-DOS.

  2. Spatial optimization of watershed management practices for nitrogen load reduction using a modeling-optimization framework.

    PubMed

    Yang, Guoxiang; Best, Elly P H

    2015-09-15

    Best management practices (BMPs) can be used effectively to reduce nutrient loads transported from non-point sources to receiving water bodies. However, methodologies of BMP selection and placement in a cost-effective way are needed to assist watershed management planners and stakeholders. We developed a novel modeling-optimization framework that can be used to find cost-effective solutions of BMP placement to attain nutrient load reduction targets. This was accomplished by integrating a GIS-based BMP siting method, a WQM-TMDL-N modeling approach to estimate total nitrogen (TN) loading, and a multi-objective optimization algorithm. Wetland restoration and buffer strip implementation were the two BMP categories used to explore the performance of this framework, both differing greatly in complexity of spatial analysis for site identification. Minimizing TN load and BMP cost were the two objective functions for the optimization process. The performance of this framework was demonstrated in the Tippecanoe River watershed, Indiana, USA. Optimized scenario-based load reduction indicated that the wetland subset selected by the minimum scenario had the greatest N removal efficiency. Buffer strips were more effective for load removal than wetlands. The optimized solutions provided a range of trade-offs between the two objective functions for both BMPs. This framework can be expanded conveniently to a regional scale because the NHDPlus catchment serves as its spatial computational unit. The present study demonstrated the potential of this framework to find cost-effective solutions to meet a water quality target, such as a 20% TN load reduction, under different conditions. Copyright © 2015 Elsevier Ltd. All rights reserved.
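    The cost-versus-load-reduction trade-off at the heart of such a modeling-optimization framework can be illustrated with a brute-force Pareto-front enumeration over hypothetical BMP candidates. The real framework uses a multi-objective optimization algorithm over far larger decision spaces; all names, costs, and load reductions below are invented:

```python
from itertools import combinations

# Candidate BMPs: (name, cost in $1000, TN load reduction in kg/yr) -- illustrative
bmps = [("wetland_A", 120, 900), ("wetland_B", 80, 500),
        ("buffer_1", 30, 400), ("buffer_2", 25, 300)]

def pareto_portfolios(bmps):
    """Enumerate all BMP subsets, keep the (min cost, max reduction) Pareto front."""
    pts = []
    for r in range(1, len(bmps) + 1):
        for combo in combinations(bmps, r):
            cost = sum(b[1] for b in combo)
            red = sum(b[2] for b in combo)
            pts.append((cost, red, [b[0] for b in combo]))
    # A point is Pareto-optimal if no other point is at least as cheap
    # AND removes at least as much nitrogen (excluding ties with itself).
    front = [p for p in pts
             if not any(q[0] <= p[0] and q[1] >= p[1] and q[:2] != p[:2] for q in pts)]
    return sorted(front)

front = pareto_portfolios(bmps)  # each entry: (cost, reduction, portfolio)
```

Along the sorted front, every extra dollar buys additional load reduction, which is exactly the trade-off curve a planner would inspect against a target such as a 20% TN reduction.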

  3. The determination of equivalent bearing loading for the BSMT that simulate SSME high pressure oxidizer turbopump conditions using the SHABERTH/SINDA computer programs

    NASA Technical Reports Server (NTRS)

    Mcdonald, Gary H.

    1987-01-01

    The MSFC bearing seal material tester (BSMT) can be used to evaluate SSME high pressure oxygen turbopump (HPOTP) bearing performance. The four HPOTP bearings carry both imposed radial and axial loads. These radial and axial loads are caused by the HPOTP's shaft, main impeller, preburner impeller, and turbine, and by the LOX coolant flow through the bearings, respectively. These loads, coupled with bearing geometry and operating speed, define the bearing contact angle, contact Hertz stress, and heat generation rates. The BSMT can operate at HPOTP shaft speeds and provide proper coolant flow rates, but can only apply an axial load. Because the bearings in the BSMT cannot be operated with an applied radial load, it is important to develop an equivalency between the applied axial loads and the actual HPOTP loadings. A shaft-bearing-thermal computer code (SHABERTH/SINDA) is used to simulate the BSMT bearing-shaft geometry and thermal-fluid operating conditions.

  4. Non-Deterministic Dynamic Instability of Composite Shells

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2004-01-01

    A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower is the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio and the fiber longitudinal modulus, dynamic load and loading rate are the dominant uncertainties, in that order.

  5. Dynamic Probabilistic Instability of Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2009-01-01

    A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower is the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio and the fiber longitudinal modulus, dynamic load and loading rate are the dominant uncertainties in that order.

  6. Real-Time State Estimation in a Flight Simulator Using fNIRS

    PubMed Central

    Gateau, Thibault; Durantin, Gautier; Lancelot, Francois; Scannella, Sebastien; Dehais, Frederic

    2015-01-01

    Working memory is a key executive function for flying an aircraft. This function is particularly critical when pilots have to recall series of air traffic control instructions. However, working memory limitations may jeopardize flight safety. Since the functional near-infrared spectroscopy (fNIRS) method seems promising for assessing working memory load, our objective is to implement an on-line fNIRS-based inference system that integrates two complementary estimators. The first estimator is a real-time state estimation MACD-based algorithm dedicated to identifying the pilot’s instantaneous mental state (not-on-task vs. on-task). It does not require a calibration process to perform its estimation. The second estimator is an on-line SVM-based classifier that is able to discriminate task difficulty (low working memory load vs. high working memory load). These two estimators were tested with 19 pilots who were placed in a realistic flight simulator and were asked to recall air traffic control instructions. We found that the estimated pilot’s mental state matched significantly better than chance with the pilot’s real state (62% global accuracy, 58% specificity, and 72% sensitivity). The second estimator, dedicated to assessing single trial working memory loads, led to 80% classification accuracy, 72% specificity, and 89% sensitivity. These two estimators establish reusable blocks for further fNIRS-based passive brain computer interface development. PMID:25816347
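
    The paper does not spell out the parameters of its MACD-based estimator; as an illustration only, a moving-average-convergence-divergence state estimator of this general kind can be sketched as below. The window lengths, the synthetic signal, and the zero-crossing decision rule are all assumptions, not the authors' settings.

```python
import numpy as np

def ema(x, span):
    """Exponentially weighted moving average (recursive form)."""
    alpha = 2.0 / (span + 1.0)
    out = np.empty_like(x, dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i - 1]
    return out

def macd_state(signal, fast=12, slow=26):
    """Label each sample on-task (True) when the fast EMA of the
    hemodynamic signal exceeds the slow EMA, i.e. MACD > 0.
    Hypothetical window lengths; no calibration step is modeled."""
    macd = ema(signal, fast) - ema(signal, slow)
    return macd > 0.0

# A rising synthetic signal: the fast EMA overtakes the slow EMA,
# so later samples are labeled on-task.
sig = np.linspace(0.0, 1.0, 100)
states = macd_state(sig)
```

    Because the MACD compares two smoothing horizons of the same signal, no per-subject calibration is needed, which matches the paper's motivation for this estimator.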

  7. Preliminary In-Flight Loads Analysis of In-Line Launch Vehicles using the VLOADS 1.4 Program

    NASA Technical Reports Server (NTRS)

    Graham, J. B.; Luz, P. L.

    1998-01-01

    To calculate structural loads of in-line launch vehicles for preliminary design, a very useful computer program is VLOADS 1.4. This software may also be used to calculate structural loads for upper stages and planetary transfer vehicles. Launch vehicle inputs such as aerodynamic coefficients, mass properties, propellants, engine thrusts, and performance data are compiled and analyzed by VLOADS to produce distributed shear loads, bending moments, axial forces, and vehicle line loads as a function of X-station along the vehicle's length. Interface loads, if any, and translational accelerations are also computed. The major strength of the software is that it enables quick turnaround analysis of structural loads for launch vehicles during the preliminary design stage of their development. This represents a significant improvement over the alternative: the time-consuming and expensive chore of developing finite element models. VLOADS was developed as a Visual BASIC macro in a Microsoft Excel 5.0 workbook on a Macintosh, and has also been implemented on a PC using Microsoft Excel 7.0a for Windows 95. VLOADS was developed in 1996, and the current version was released to COSMIC, NASA's Software Technology Transfer Center, in 1997. The program is a copyrighted work with all copyright vested in NASA.
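
    VLOADS itself is not reproduced here, but the beam relations underlying any such station-wise load analysis (dV/dx = w, dM/dx = V) can be sketched by cumulative trapezoidal integration of a distributed line load. The grid, the uniform load, and the free-end boundary condition below are illustrative assumptions, not VLOADS internals.

```python
import numpy as np

def shear_and_moment(x, w):
    """Integrate a distributed lateral line load w(x) [force/length]
    along vehicle station x into shear V(x) and bending moment M(x),
    assuming a free end at x[0] where V = M = 0.

    This mirrors the classic beam relations dV/dx = w and dM/dx = V,
    not any specific VLOADS algorithm.
    """
    dx = np.diff(x)
    # cumulative trapezoidal integration
    V = np.concatenate(([0.0], np.cumsum(0.5 * (w[:-1] + w[1:]) * dx)))
    M = np.concatenate(([0.0], np.cumsum(0.5 * (V[:-1] + V[1:]) * dx)))
    return V, M

# Uniform unit load over a unit-length segment:
# analytically V(x) = x and M(x) = x^2 / 2.
x = np.linspace(0.0, 1.0, 101)
w = np.ones_like(x)
V, M = shear_and_moment(x, w)
```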

  8. Computation of rotor aerodynamic loads in forward flight using a full-span free wake analysis

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Bliss, Donald B.; Wachspress, Daniel A.; Boschitsch, Alexander H.; Chua, Kiat

    1990-01-01

    The development of an advanced computational analysis of unsteady aerodynamic loads on isolated helicopter rotors in forward flight is described. The primary technical focus of the development was the implementation of a freely distorting filamentary wake model composed of curved vortex elements laid out along contours of constant vortex sheet strength in the wake. This model captures the wake generated by the full span of each rotor blade and makes possible a unified treatment of the shed and trailed vorticity in the wake. This wake model was coupled to a modal analysis of the rotor blade dynamics and a vortex lattice treatment of the aerodynamic loads to produce a comprehensive model for rotor performance and air loads in forward flight dubbed RotorCRAFT (Computation of Rotor Aerodynamics in Forward Flight). The technical background on the major components of this analysis is discussed, and the correlation of predictions of performance, trim, and unsteady air loads with experimental data from several representative rotor configurations is examined. The primary conclusions of this study are that the RotorCRAFT analysis correlates well with measured loads on a variety of configurations and that application of the full span free wake model is required to capture several important features of the vibratory loading on rotor blades in forward flight.
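
    RotorCRAFT's curved vortex elements are beyond a short sketch, but the Biot-Savart building block on which such free wake models rest, the induced velocity of a straight filament segment, looks roughly like this. The segment endpoints and circulation are arbitrary test values, not data from the paper.

```python
import numpy as np

def segment_induced_velocity(p, a, b, gamma):
    """Velocity induced at point p by a straight vortex filament from a
    to b carrying circulation gamma (Biot-Savart law in the standard
    vortex-lattice form). The paper's wake uses curved elements; the
    straight segment is the textbook building block shown here."""
    r1, r2 = p - a, p - b
    cross = np.cross(r1, r2)
    denom = np.dot(cross, cross)
    if denom < 1e-12:                  # point lies on the filament axis
        return np.zeros(3)
    r0 = b - a
    coef = gamma / (4.0 * np.pi * denom)
    return coef * cross * np.dot(
        r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))

# A long segment along y approximates an infinite line vortex: the
# induced speed at distance h = 1 should approach gamma / (2*pi*h).
v = segment_induced_velocity(np.array([1.0, 0.0, 0.0]),
                             np.array([0.0, -1000.0, 0.0]),
                             np.array([0.0, 1000.0, 0.0]),
                             gamma=1.0)
```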

  9. Optimum load distribution between heat sources based on the Cournot model

    NASA Astrophysics Data System (ADS)

    Penkovskii, A. V.; Stennikov, V. A.; Khamisov, O. V.

    2015-08-01

    One of the widespread models of heat supply to consumers, represented in the "Single buyer" format, is considered. The methodological basis proposed for its description and investigation combines principles of the theory of games, basic propositions of microeconomics, and the models and methods of the theory of hydraulic circuits. The original mathematical model of a heat supply system operating under the "Single buyer" organizational structure yields a solution satisfying a market Nash equilibrium. The distinctive feature of the developed mathematical model is that, along with the problems traditionally solved within the bilateral relations between heat energy sources and heat consumers, it includes a network component, with the physicotechnical properties inherent to the heat network and the business factors connected with the costs of producing and transporting heat energy. This approach makes it possible to determine the optimum load levels of the heat energy sources: levels that meet the given consumer demand for heat while maximizing the profit of the heat energy sources and minimizing the heat network costs over a specified time. The practical search for the market equilibrium is illustrated with the example of a heat supply system in which two heat energy sources operate on integrated heat networks. The solution procedure is presented graphically and illustrates computations based on a stepwise iteration for optimizing the loading of the heat energy sources (Cournot's groping procedure), with the corresponding computation of the heat energy price for consumers.
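
    The paper couples the market game with a hydraulic network model; stripped of the network component, the "groping procedure by Cournot" it refers to is a best-response iteration. For a two-source duopoly with linear inverse demand it can be sketched as below; the demand and cost parameters are hypothetical.

```python
def cournot_tatonnement(a, b, c, iters=200):
    """Cournot 'groping' (best-response) iteration for two heat sources
    facing linear inverse demand p(Q) = a - b*Q with marginal costs c[i].
    Each source repeatedly plays its profit-maximizing output against the
    rival's last output; the iteration converges to the Nash equilibrium.
    Illustrative only: the paper's network costs are omitted here."""
    q = [0.0, 0.0]
    for _ in range(iters):
        q[0] = max(0.0, (a - c[0] - b * q[1]) / (2.0 * b))
        q[1] = max(0.0, (a - c[1] - b * q[0]) / (2.0 * b))
    price = a - b * (q[0] + q[1])
    return q, price

q, price = cournot_tatonnement(a=100.0, b=1.0, c=[10.0, 20.0])
# Analytic Nash equilibrium for comparison:
#   q1 = (a - 2*c1 + c2) / (3*b),  q2 = (a - 2*c2 + c1) / (3*b)
```

    With these numbers the iteration settles at q1 = 100/3, q2 = 70/3 and a consumer price of 130/3, matching the closed-form duopoly equilibrium.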

  10. Composite Load Spectra for Select Space Propulsion Structural Components

    NASA Technical Reports Server (NTRS)

    Ho, Hing W.; Newell, James F.

    1994-01-01

    Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components representative of the Space Shuttle Main Engine (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. These generic (coupled) models combine deterministic models for the individual load sources (dynamic, acoustic, high-pressure, high rotational speed, etc.) using statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods, with and without strategically selected experimental data. The entire simulation process is included in a CLS computer code. Applications of the computer code to various components, in conjunction with the Probabilistic Structural Analysis Method (PSAM) to perform probabilistic load evaluation and life prediction, are also described to illustrate the effectiveness of the coupled-model approach.

  11. Booster Interface Loads

    NASA Technical Reports Server (NTRS)

    Gentz, Steve; Wood, Bill; Nettles, Mindy

    2015-01-01

    The interaction between shock waves and the wake shed from the forward booster/core attach hardware results in unsteady pressure fluctuations, which can lead to large buffeting loads on the vehicle. This task investigates whether computational tools can adequately predict these flows, and whether alternative booster nose shapes can reduce these loads. Results from wind tunnel tests will be used to validate the computations and provide design information for future Space Launch System (SLS) configurations. The current work combines numerical simulations with wind tunnel testing to predict buffeting loads caused by the boosters. Variations in nosecone shape, similar to the Ariane 5 design (fig. 1), are being evaluated with regard to lowering the buffet loads. The task will provide design information for the mitigation of buffet loads for SLS, along with validated simulation tools to be used to assess future SLS designs.

  12. User's manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 2 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System analysis (SEPS) computer program is described. SEPS performs detailed load analysis, including prediction of the energy demands and consumables requirements of the shuttle electric power system, along with parametric and special-case studies. The functional flow diagram of the SEPS program is presented along with data base requirements and formats, procedure and activity definitions, and mission timeline input formats. Distribution circuit inputs and fixed data requirements are included, and run procedures and deck setups are described.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dag, Serkan; Yildirim, Bora; Sabuncuoglu, Baris

    The objective of this study is to develop crack growth analysis methods for functionally graded materials (FGMs) subjected to mode I cyclic loading. The study presents finite element based computational procedures for both two- and three-dimensional problems to examine fatigue crack growth in functionally graded materials. The developed methods allow computation of the crack length and generation of the crack front profile for a graded medium subjected to fluctuating stresses. The results, presented for an elliptical crack embedded in a functionally graded medium, illustrate the competing effects of ellipse aspect ratio and material property gradation on the fatigue crack growth behavior.
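
    The paper's finite element procedure is not available here, but the scalar backbone of mode I fatigue crack growth analysis, integration of the Paris law, can be sketched for a homogeneous material. The material constants and geometry factor Y below are hypothetical; an actual FGM analysis, as in the paper, would let such properties vary with position along the crack front.

```python
import math

def grow_crack(a0, C, m, delta_sigma, Y=1.12, cycles=10000, n_steps=10000):
    """Integrate the Paris law da/dN = C * (dK)^m with
    dK = Y * delta_sigma * sqrt(pi * a) by forward Euler over load
    cycles. Homogeneous-material sketch with hypothetical constants;
    returns the final crack length in meters."""
    a = a0
    dN = cycles / n_steps
    for _ in range(n_steps):
        dK = Y * delta_sigma * math.sqrt(math.pi * a)  # MPa*sqrt(m)
        a += C * dK ** m * dN
    return a

# Hypothetical steel-like constants (stress in MPa, lengths in m):
# a 1 mm edge crack under a 100 MPa stress range for 10,000 cycles.
a_final = grow_crack(a0=1e-3, C=1e-11, m=3.0, delta_sigma=100.0)
```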

  14. An evaluation of a computer code based on linear acoustic theory for predicting helicopter main rotor noise

    NASA Astrophysics Data System (ADS)

    Davis, S. J.; Egolf, T. A.

    1980-07-01

    Acoustic characteristics predicted using a recently developed computer code were correlated with measured acoustic data for two helicopter rotors. The analysis is based on a solution of the Ffowcs-Williams-Hawkings (FW-H) equation and includes terms accounting for both the thickness and loading components of the rotational noise. Computations are carried out in the time domain and assume free-field conditions. Results of the correlation show that the Farrassat/Nystrom analysis, when using predicted airload data as input, yields fair but encouraging correlation for the first six harmonics of blade passage. It also suggests that although the analysis represents a valuable first step toward developing a truly comprehensive helicopter rotor noise prediction capability, further work remains to be done in identifying and incorporating additional noise mechanisms into the code.

  15. Continuum topology optimization considering uncertainties in load locations based on the cloud model

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wen, Guilin

    2018-06-01

    Few researchers have paid attention to designing structures in consideration of uncertainties in the loading locations, which may significantly influence the structural performance. In this work, cloud models are employed to depict the uncertainties in the loading locations. A robust algorithm is developed in the context of minimizing the expectation of the structural compliance, while conforming to a material volume constraint. To guarantee optimal solutions, sufficient cloud drops are used, which in turn leads to low efficiency. An innovative strategy is then implemented to enormously improve the computational efficiency. A modified soft-kill bi-directional evolutionary structural optimization method using derived sensitivity numbers is used to output the robust novel configurations. Several numerical examples are presented to demonstrate the effectiveness and efficiency of the proposed algorithm.
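
    The paper does not reproduce its cloud-drop generator, but the forward normal cloud generator commonly used with cloud models can be sketched as follows. The digital characteristics (Ex, En, He) and the drop count below are illustrative values, not taken from the paper.

```python
import math
import random

def cloud_drops(Ex, En, He, n, seed=0):
    """Generate n drops from a one-dimensional normal cloud (Ex, En, He):
    for each drop, draw En' ~ N(En, He), then a location x ~ N(Ex, |En'|);
    the membership degree is exp(-(x - Ex)^2 / (2 * En'^2)).
    A sketch of the standard forward normal cloud generator, used here to
    model an uncertain load location around expectation Ex."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        En_p = rng.gauss(En, He)
        x = rng.gauss(Ex, abs(En_p))
        mu = 1.0 if En_p == 0.0 else math.exp(
            -(x - Ex) ** 2 / (2.0 * En_p ** 2))
        drops.append((x, mu))
    return drops

# 500 candidate load locations scattered about Ex = 0 with entropy 1
# and hyper-entropy 0.1; each carries a membership degree in (0, 1].
drops = cloud_drops(Ex=0.0, En=1.0, He=0.1, n=500)
```

    In the robust formulation, the structural compliance would be evaluated at each drop location and averaged with these membership degrees to approximate the expectation being minimized.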

  16. Derivatives of buckling loads and vibration frequencies with respect to stiffness and initial strain parameters

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Cohen, Gerald A.; Mroz, Zenon

    1990-01-01

    A uniform variational approach to sensitivity analysis of vibration frequencies and bifurcation loads of nonlinear structures is developed. Two methods of calculating the sensitivities of bifurcation buckling loads and vibration frequencies of nonlinear structures, with respect to stiffness and initial strain parameters, are presented. A direct method requires calculation of derivatives of the prebuckling state with respect to these parameters. An adjoint method bypasses the need for these derivatives by using instead the strain field associated with the second-order postbuckling state. An operator notation is used and the derivation is based on the principle of virtual work. The derivative computations are easily implemented in structural analysis programs. This is demonstrated by examples using a general purpose, finite element program and a shell-of-revolution program.
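
    In the simplest special case, a standard symmetric eigenproblem with an identity mass matrix and a linear parameter dependence, the direct-method sensitivity reduces to dλ/dp = φᵀ(dK/dp)φ for a unit eigenvector φ. A minimal numerical check against a finite difference, with hypothetical stiffness matrices (not the paper's shell examples), is:

```python
import numpy as np

def eigenvalue_sensitivity(K0, K1, p):
    """Lowest eigenvalue of K(p) = K0 + p*K1 and its analytic derivative
    d(lam)/dp = phi^T K1 phi for the unit eigenvector phi. This is the
    direct-method formula specialized to a standard eigenproblem; the
    paper treats the general nonlinear-prebuckling case."""
    lam, vecs = np.linalg.eigh(K0 + p * K1)
    phi = vecs[:, 0]                 # lowest mode, already unit-norm
    return lam[0], phi @ K1 @ phi

# Hypothetical 2x2 stiffness matrices
K0 = np.array([[2.0, -1.0], [-1.0, 2.0]])
K1 = np.array([[1.0, 0.0], [0.0, 0.5]])

lam, dlam = eigenvalue_sensitivity(K0, K1, p=0.3)

# Central finite difference for comparison
h = 1e-6
lp, _ = eigenvalue_sensitivity(K0, K1, 0.3 + h)
lm, _ = eigenvalue_sensitivity(K0, K1, 0.3 - h)
fd = (lp - lm) / (2.0 * h)
```

    The agreement between `dlam` and `fd` illustrates why such derivative computations are easy to embed in structural analysis programs: they reuse the eigen-solution already in hand.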

  17. Theoretical prediction on corrugated sandwich panels under bending loads

    NASA Astrophysics Data System (ADS)

    Shu, Chengfu; Hou, Shujuan

    2018-05-01

    In this paper, an aluminum corrugated sandwich panel with a triangular core under bending loads was investigated. First, the equivalent material parameters of the triangular corrugated core layer, which can be treated as an orthotropic panel, were obtained using Castigliano's theorem and an equivalent homogeneous model. Second, the contributions of the corrugated core layer and the two face panels were both considered in computing the equivalent material parameters of the whole structure through classical lamination theory, and these equivalent material parameters were compared with finite element analysis solutions. Then, based on Mindlin orthotropic plate theory, the study obtains closed-form solutions for the displacement of a corrugated sandwich panel under bending loads with specified boundary conditions; a parameter study and a comparison with the finite element method were also carried out.
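
    The paper's homogenization of the triangular core is not reproduced here, but once an equivalent core modulus is in hand, the per-unit-width flexural rigidity of a symmetric sandwich follows from a parallel-axis summation over the layers. The moduli and thicknesses below are hypothetical illustrative values.

```python
def sandwich_bending_stiffness(E_f, t_f, E_c_eq, t_c):
    """Per-unit-width flexural rigidity D of a symmetric sandwich:
    two face sheets (modulus E_f, thickness t_f) plus an equivalent
    homogeneous core layer (modulus E_c_eq, thickness t_c), summed by
    the parallel-axis theorem about the panel mid-plane. The corrugated
    core is represented only through its equivalent modulus, in the
    spirit of the paper's homogenization step."""
    d = t_c + t_f                              # distance between face mid-planes
    D_faces = E_f * t_f ** 3 / 6.0 + E_f * t_f * d ** 2 / 2.0
    D_core = E_c_eq * t_c ** 3 / 12.0
    return D_faces + D_core

# Hypothetical aluminum faces (70 GPa, 1 mm) on a soft 20 mm
# equivalent core (1 MPa); units are Pa and m, so D is in N*m.
D = sandwich_bending_stiffness(E_f=70e9, t_f=0.001, E_c_eq=1e6, t_c=0.02)
```

    As expected for a sandwich, the face-sheet parallel-axis term dominates D; the core contributes mainly through shear, which this bending-only sketch does not model.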

  18. Load leveling on industrial refrigeration systems

    NASA Astrophysics Data System (ADS)

    Bierenbaum, H. S.; Kraus, A. D.

    1982-01-01

    A computer model was constructed of a brewery with a 2000 horsepower compressor/refrigeration system. The various conservation and load management options were simulated using the validated model, and the savings available from implementing the most promising options were verified by trials in the brewery. Results show that an optimized methodology for implementing load leveling and energy conservation consists of: (1) adjusting (or tuning) the refrigeration system controller variables to minimize unnecessary compressor starts; (2) carefully controlling (modulating) the primary refrigeration system operating parameters, compressor suction pressure and discharge pressure, to satisfy product quality constraints as well as in-process material cooling rates and temperature levels; (3) evaluating the energy cost savings associated with reject heat recovery; and (4) deciding whether to implement the reject heat recovery system based on a cost/benefits analysis.

  19. Ocean Tide Loading Computation

    NASA Technical Reports Server (NTRS)

    Agnew, Duncan Carr

    2005-01-01

    September 15, 2003 through May 15, 2005. This grant funds the maintenance, updating, and distribution of programs for computing ocean tide loading, to enable the corrections for such loading to be more widely applied in space-geodetic and gravity measurements. These programs, developed under funding from the CDP and DOSE programs, incorporate the most recent global tidal models developed from TOPEX/Poseidon data, as well as local tide models for regions around North America; the design of the algorithm and software makes it straightforward to combine local and global models.

  20. Study of inducer load and stress, volume 2

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A program of analysis, design, fabrication and testing has been conducted to develop computer programs for predicting rocket engine turbopump inducer hydrodynamic loading, stress magnitude and distribution, and vibration characteristics. Methods of predicting blade loading, stress, and vibration characteristics were selected from a literature search and used as a basis for the computer programs. An inducer, representative of typical rocket engine inducers, was designed, fabricated, and tested with special instrumentation selected to provide measurements of blade surface pressures and stresses. Data from the tests were compared with predicted values and the computer programs were revised as required to improve correlation. For Volume 1 see N71-20403. For Volume 2 see N71-20404.
