Sample records for computer simulator suitable

  1. Dataflow computing approach in high-speed digital simulation

    NASA Technical Reports Server (NTRS)

    Ercegovac, M. D.; Karplus, W. J.

    1984-01-01

    New computational tools and methodologies for the digital simulation of continuous systems were explored, and programmability and cost-effective performance in multiprocessor organizations for real-time simulation were investigated. The approach is based on functional-style languages and data flow computing principles, which allow for the natural representation of parallelism in algorithms and provide a suitable basis for the design of cost-effective, high-performance distributed systems. The objectives of this research are to: (1) perform a comparative evaluation of several existing data flow languages and develop an experimental data flow language suitable for real-time simulation on multiprocessor systems; (2) investigate the main issues that arise in the architecture and organization of data flow multiprocessors for real-time simulation; and (3) develop and apply performance evaluation models in typical applications.

  2. Building an adiabatic quantum computer simulation in the classroom

    NASA Astrophysics Data System (ADS)

    Rodríguez-Laguna, Javier; Santalla, Silvia N.

    2018-05-01

    We present a didactic introduction to adiabatic quantum computation (AQC) via the explicit construction of a classical simulator of quantum computers. This constitutes a suitable route to introduce several important concepts for advanced undergraduates in physics: quantum many-body systems, quantum phase transitions, disordered systems, spin-glasses, and computational complexity theory.
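The classroom simulator the authors describe can be approximated in a few lines of NumPy. The sketch below is illustrative, not taken from the paper: it builds a small transverse-field driver Hamiltonian and a diagonal problem Hamiltonian (a hypothetical bit-counting cost), then tracks the spectral gap along the adiabatic interpolation H(s) = (1-s)H0 + sH1, the quantity that governs how slowly the sweep must proceed:

```python
import numpy as np

# Pauli-x and identity for building n-qubit operators
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def kron_all(ops):
    """Kronecker product of a list of 2x2 operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 3  # qubits; matrices are 2^n x 2^n, so keep this tiny

# Driver H0 = -sum_i sigma_x^(i): its ground state is the uniform superposition
H0 = -sum(kron_all([sx if j == i else I2 for j in range(n)]) for i in range(n))

# Problem Hamiltonian H1: a diagonal cost function whose ground state encodes
# the answer. Here the cost is the number of 1-bits, so the answer is |000>.
cost = np.array([bin(z).count("1") for z in range(2 ** n)], dtype=float)
H1 = np.diag(cost)

# Sweep H(s) = (1-s) H0 + s H1 and record the gap between the two lowest levels
gaps = []
for s in np.linspace(0.0, 1.0, 51):
    evals = np.linalg.eigvalsh((1 - s) * H0 + s * H1)
    gaps.append(evals[1] - evals[0])

min_gap = min(gaps)  # the minimum gap sets the required adiabatic sweep time
```

Exact diagonalization limits this approach to roughly a dozen qubits, which is exactly why such simulators suit classroom-scale demonstrations of gap closing and quantum phase transitions.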

  3. Theoretical and computational foundations of management class simulation

    Treesearch

    Denie Gerold

    1978-01-01

    Investigations of complicated, complex, and poorly ordered systems are possible only with the aid of mathematical methods and electronic data processing. Simulation, as a method of operations research, is particularly suitable for this purpose. Theoretical and computational foundations of management class simulation must be integrated into the planning systems of...

  4. Assessing Practical Skills in Physics Using Computer Simulations

    ERIC Educational Resources Information Center

    Walsh, Kevin

    2018-01-01

    Computer simulations have been used very effectively for many years in the teaching of science but the focus has been on cognitive development. This study, however, is an investigation into the possibility that a student's experimental skills in the real-world environment can be judged via the undertaking of a suitably chosen computer simulation…

  5. Space-filling designs for computer experiments: A review

    DOE PAGES

    Joseph, V. Roshan

    2016-01-29

    Improving the quality of a product or process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time consuming, and therefore directly optimizing on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
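As an illustration of the space-filling idea (this sketch is our own, using a simple maximin criterion rather than the maximum projection criterion the review emphasizes), a random-search Latin hypercube design in Python:

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d):
    """Random Latin hypercube: each column is a permutation of the n cell
    midpoints, so every one-dimensional projection is evenly spread."""
    u = (np.arange(n) + 0.5) / n
    return np.column_stack([rng.permutation(u) for _ in range(d)])

def min_pairwise_dist(X):
    """Smallest Euclidean distance between any two design points."""
    return min(np.linalg.norm(a - b) for a, b in combinations(X, 2))

# Crude maximin search: keep the best of many random Latin hypercubes.
# (A maximum projection design would replace this distance criterion with
# one that also spreads points in every lower-dimensional projection.)
best = max((latin_hypercube(10, 2) for _ in range(200)), key=min_pairwise_dist)
```

Dedicated packages use much better optimizers than random restarts, but the structure — generate candidates, score them by a space-filling criterion, keep the best — is the same.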

  7. Evaluation of Rankine cycle air conditioning system hardware by computer simulation

    NASA Technical Reports Server (NTRS)

    Healey, H. M.; Clark, D.

    1978-01-01

    A computer program for simulating the performance of a variety of solar-powered Rankine cycle air conditioning system (RCACS) components has been developed. The program models actual equipment by developing performance maps from manufacturers' data and is capable of simulating off-design operation of the RCACS components. The program, designed to be a subroutine of the Marshall Space Flight Center (MSFC) Solar Energy System Analysis Computer Program 'SOLRAD', is a complete package suitable for use by an occasional computer user in developing performance maps of heating, ventilation, and air conditioning components.

  8. Computers for real time flight simulation: A market survey

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.; Karplus, W. J.

    1977-01-01

    An extensive computer market survey was made to determine those available systems suitable for current and future flight simulation studies at Ames Research Center. The primary requirement is for the computation of relatively high frequency content (5 Hz) math models representing powered lift flight vehicles. The Rotor Systems Research Aircraft (RSRA) was used as a benchmark vehicle for computation comparison studies. The general nature of helicopter simulations and a description of the benchmark model are presented, and some of the sources of simulation difficulties are examined. A description of various applicable computer architectures is presented, along with detailed discussions of leading candidate systems and comparisons between them.

  9. An automated procedure for developing hybrid computer simulations of turbofan engines

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Krosel, S. M.

    1980-01-01

    A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all of the calculations and data manipulations needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. A test case is described, and comparisons between hybrid simulation and specified engine performance data are presented.

  10. Micro-Energy Rates for Damage Tolerance and Durability of Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Minnetyan, Levon

    2006-01-01

    In this paper, the adhesive bond strength of lap-jointed graphite/aluminum composites is examined by computational simulation. Computed micro-stress level energy release rates are used to identify the damage mechanisms associated with the corresponding acoustic emission (AE) signals. Computed damage regions are similarly correlated with ultrasonically scanned damage regions. Results show that computational simulation can be used with suitable NDE methods for credible in-service monitoring of composites.

  11. Undergraduate computational physics projects on quantum computing

    NASA Astrophysics Data System (ADS)

    Candela, D.

    2015-08-01

    Computational projects on quantum computing suitable for students in a junior-level quantum mechanics course are described. In these projects students write their own programs to simulate quantum computers. Knowledge is assumed of introductory quantum mechanics through the properties of spin 1/2. Initial, more easily programmed projects treat the basics of quantum computation, quantum gates, and Grover's quantum search algorithm. These are followed by more advanced projects to increase the number of qubits and implement Shor's quantum factoring algorithm. The projects can be run on a typical laptop or desktop computer, using most programming languages. Supplementing resources available elsewhere, the projects are presented here in a self-contained format especially suitable for a short computational module for physics students.
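A state-vector simulation of Grover's search, one of the projects described, fits comfortably in NumPy. The sketch below is a generic illustration (the qubit count and marked item are arbitrary choices, not taken from the article):

```python
import numpy as np

n = 3                     # qubits; the state vector has 2^n amplitudes
N = 2 ** n
target = 5                # marked item the oracle flags

# Start in the uniform superposition H^{(x)n}|0...0>
state = np.full(N, 1.0 / np.sqrt(N))

# The optimal number of Grover iterations is about (pi/4) sqrt(N)
iterations = int(round(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    state[target] *= -1               # oracle: phase-flip the marked amplitude
    state = 2 * state.mean() - state  # diffusion: inversion about the mean

prob_target = state[target] ** 2      # probability of measuring the target
```

Because the state vector doubles in size with each qubit, this brute-force approach tops out around 25-30 qubits on a laptop, which is ample for the course projects described.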

  12. Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Mitchell, Paul H.

    1991-01-01

    F77NNS (FORTRAN 77 Neural Network Simulator) computer program simulates popular back-error-propagation neural network. Designed to take advantage of vectorization when used on computers having this capability, also used on any computer equipped with ANSI-77 FORTRAN Compiler. Problems involving matching of patterns or mathematical modeling of systems fit class of problems F77NNS designed to solve. Program has restart capability so neural network solved in stages suitable to user's resources and desires. Enables user to customize patterns of connections between layers of network. Size of neural network F77NNS applied to limited only by amount of random-access memory available to user.
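The back-error-propagation network that F77NNS implements can be sketched in modern NumPy. The example below is a generic illustration (an XOR task, one hidden layer, a hand-picked learning rate), not a port of the FORTRAN 77 program:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR training set: the classic benchmark for a back-propagation network
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, b1, W2, b2):
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    return h, sigmoid(h @ W2 + b2)  # network output

# One hidden layer of 4 units, randomly initialized weights
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

_, out = forward(W1, b1, W2, b2)
initial_loss = np.mean((out - y) ** 2)

lr = 1.0
for _ in range(5000):
    h, out = forward(W1, b1, W2, b2)
    # back-propagate the output error toward the input layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out = forward(W1, b1, W2, b2)
final_loss = np.mean((out - y) ** 2)
```

The restart capability mentioned in the record corresponds to simply saving and reloading the weight arrays between training stages.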

  13. Lewis hybrid computing system, users manual

    NASA Technical Reports Server (NTRS)

    Bruton, W. M.; Cwynar, D. S.

    1979-01-01

    The Lewis Research Center's Hybrid Simulation Lab contains a collection of analog, digital, and hybrid (combined analog and digital) computing equipment suitable for the dynamic simulation and analysis of complex systems. This report is intended as a guide to users of these computing systems. The report describes the available equipment and outlines procedures for its use. Particular attention is given to the operation of the PACER 100 digital processor. System software to accomplish the usual digital tasks, such as compiling and editing, and Lewis-developed special-purpose software are described.

  14. Modeling cation/anion-water interactions in functional aluminosilicate structures.

    PubMed

    Richards, A J; Barnes, P; Collins, D R; Christodoulos, F; Clark, S M

    1995-02-01

    A need for the computer simulation of hydration/dehydration processes in functional aluminosilicate structures has been noted. Full and realistic simulations of these systems can be somewhat ambitious and require the aid of interactive computer graphics to identify key structural/chemical units, both in the devising of suitable water-ion simulation potentials and in the analysis of hydrogen-bonding schemes in the subsequent simulation studies. In this article, the former is demonstrated by the assembling of a range of essential water-ion potentials. These span the range of formal charges from +4e to -2e, and are evaluated in the context of three types of structure: a porous zeolite, calcium silicate cement, and layered clay. As an example of the latter, the computer graphics output from Monte Carlo computer simulation studies of hydration/dehydration in calcium-zeolite A is presented.

  15. Automated procedure for developing hybrid computer simulations of turbofan engines. Part 1: General description

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Krosel, S. M.; Bruton, W. M.

    1982-01-01

    A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all the calculations and data manipulations needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. Part I contains a general discussion of the methodology, describes a test case, and presents comparisons between hybrid simulation and specified engine performance data. Part II, a companion document, contains documentation, in the form of computer printouts, for the test case.

  16. Cellular automaton supercomputing

    NASA Technical Reports Server (NTRS)

    Wolfram, Stephen

    1987-01-01

    Many of the models now used in science and engineering are over a century old. And most of them can be implemented on modern digital computers only with considerable difficulty. Some new basic models are discussed which are much more directly suitable for digital computer simulation. The fundamental principle is that the models considered herein are as suitable as possible for implementation on digital computers. It is then a matter of scientific analysis to determine whether such models can reproduce the behavior seen in physical and other systems. Such analysis was carried out in several cases, and the results are very encouraging.

  17. Applications of Computer Simulation Methods in Plastic Forming Technologies for Magnesium Alloys

    NASA Astrophysics Data System (ADS)

    Zhang, S. H.; Zheng, W. T.; Shang, Y. L.; Wu, X.; Palumbo, G.; Tricarico, L.

    2007-05-01

    Applications of computer simulation methods in the plastic forming of magnesium alloy parts are discussed. Because magnesium alloys possess very poor plastic formability at room temperature, various methods have been tried to improve formability: suitable rolling processes and annealing procedures must be found to produce qualified magnesium alloy sheets with reduced anisotropy and improved formability; the blank can be heated to a warm or a hot temperature; a suitable temperature field must be designed, with the tools heated or the punch cooled; and a suitable deformation speed must be chosen to ensure a suitable strain-rate range. A damage theory considering non-isothermal forming is established. Various modeling methods have been tried to address the above situations. The following cases of modeling the forming process of magnesium alloy sheets and tubes are dealt with: (1) modeling for predicting wrinkling and anisotropy in sheet warm forming; (2) damage theory used for predicting rupture in sheet warm forming; (3) modeling for optimizing blank shape and dimensions for sheet warm forming; and (4) modeling of non-steady-state creep in hot metal gas forming of AZ31 tubes.

  18. Fast Learning for Immersive Engagement in Energy Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M

    The fast computation which is critical for immersive engagement with and learning from energy simulations would be furthered by developing a general method for creating rapidly computed simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost, with response times - typically less than one minute of wall-clock time - suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations, but instead supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.

  19. A Survey and Evaluation of Simulators Suitable for Teaching Courses in Computer Architecture and Organization

    ERIC Educational Resources Information Center

    Nikolic, B.; Radivojevic, Z.; Djordjevic, J.; Milutinovic, V.

    2009-01-01

    Courses in Computer Architecture and Organization are regularly included in Computer Engineering curricula. These courses are usually organized in such a way that students obtain not only a purely theoretical experience, but also a practical understanding of the topics lectured. This practical work is usually done in a laboratory using simulators…

  20. Supporting Undergraduate Computer Architecture Students Using a Visual MIPS64 CPU Simulator

    ERIC Educational Resources Information Center

    Patti, D.; Spadaccini, A.; Palesi, M.; Fazzino, F.; Catania, V.

    2012-01-01

    The topics of computer architecture are always taught using an Assembly dialect as an example. The most commonly used textbooks in this field use the MIPS64 Instruction Set Architecture (ISA) to help students in learning the fundamentals of computer architecture because of its orthogonality and its suitability for real-world applications. This…

  1. Simulation of isoelectric focusing processes. [stationary electrolysis of charged species]

    NASA Technical Reports Server (NTRS)

    Palusinski, O. A.

    1980-01-01

    This paper presents the computer implementation of a model for the stationary electrolysis of two or more charged species. This has specific application to the technique of isoelectric focusing, in which the stationary electrolysis of ampholytes is used to generate a pH gradient useful for the separation of proteins, peptides, and other biomolecules. The fundamental equations describing the process are given. These equations are transformed to a form suitable for digital computer implementation. Some results of computer simulation are described and compared to data obtained in the laboratory.

  2. Reference Computational Meshing Strategy for Computational Fluid Dynamics Simulation of Departure from Nucleate Boiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    The objective of this effort is to establish a strategy and process for generation of suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface-averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.

  3. Design of a bounded wave EMP (Electromagnetic Pulse) simulator

    NASA Astrophysics Data System (ADS)

    Sevat, P. A. A.

    1989-06-01

    Electromagnetic Pulse (EMP) simulators are used to simulate the EMP generated by a nuclear weapon and to harden equipment against the effects of EMP. At present, DREO has a 1 m EMP simulator for testing computer-terminal-size equipment. To develop the R and D capability for testing larger objects, such as a helicopter, a much bigger threat-level facility is required. This report concerns the design of a bounded wave EMP simulator suitable for testing large equipment. Different types of simulators are described and their pros and cons are discussed. A bounded wave parallel plate type simulator is chosen for its efficiency and minimal environmental impact. Detailed designs are given for 6 m and 10 m parallel plate type wire grid simulators. Electromagnetic fields inside and outside the simulators are computed. Preliminary specifications for a pulse generator required for the simulator are also given. Finally, the electromagnetic fields radiated from the simulator are computed and discussed.

  4. Instruction Using Experiments in a Computer. Final Report.

    ERIC Educational Resources Information Center

    Fulton, John P.; Hazeltine, Barrett

    Included are four computer programs which simulate experiments suitable for freshman engineering and physics courses. The subjects of the programs are ballistic trajectories, variable mass systems, the trajectory of a particle under various forces, and the design of an electronic amplifier. The report includes the problem statement, its objectives, the…

  5. [Application of computer-assisted 3D imaging simulation for surgery].

    PubMed

    Matsushita, S; Suzuki, N

    1994-03-01

    This article describes trends in the application of imaging technology to surgical planning, navigation, and computer-aided surgery. Imaging information is an essential factor for simulation in medicine. It includes three-dimensional (3D) image reconstruction, neurosurgical navigation, the creation of physical models based on 3D imaging data, and so on. These developments depend mostly on 3D imaging techniques, which owe much to recent computer technology. 3D imaging can offer new, intuitive information to physicians and surgeons, and this method is suitable for mechanical control. By utilizing simulated results, we can obtain more precise surgical orientation, estimation, and operation. For further advancement, automatic and high-speed recognition of medical imaging is being developed.

  6. Utility of an emulation and simulation computer model for air revitalization system hardware design, development, and test

    NASA Technical Reports Server (NTRS)

    Yanosy, J. L.; Rowell, L. F.

    1985-01-01

    Efforts to make increasing use of suitable computer programs in the design of hardware have the potential to reduce expenditures. In this context, NASA has evaluated the benefits provided by software tools through an application to the Environmental Control and Life Support (ECLS) system. The present paper is concerned with the benefits obtained by employing simulation tools in the case of the Air Revitalization System (ARS) of a Space Station life support system. Attention is given to the ARS functions and components, a computer program overview, a SAWD (solid amine water desorbed) bed model description, a model validation, and details regarding the simulation benefits.

  7. Structural, thermodynamic, and electrical properties of polar fluids and ionic solutions on a hypersphere: Results of simulations

    NASA Astrophysics Data System (ADS)

    Caillol, J. M.; Levesque, D.

    1992-01-01

    The reliability and efficiency of a new method suitable for simulations of dielectric fluids and ionic solutions are established by numerical computations. The efficiency depends on the use of a simulation cell which is the surface of a four-dimensional sphere. The reliability originates from a charge-charge potential that is a solution of the Poisson equation in this confining volume. The computation time, for systems of a few hundred molecules, is reduced by a factor of 2 or 3 compared to that of a simulation performed in a cubic volume with periodic boundary conditions and the Ewald charge-charge potential.

  8. The investigation of tethered satellite system dynamics

    NASA Technical Reports Server (NTRS)

    Lorenzini, E.

    1985-01-01

    Progress in tethered satellite system dynamics research is reported. A retrieval rate control law with no angular feedback was studied to investigate the system's dynamic response. The initial conditions for the computer code which simulates the satellite's rotational dynamics were extended to a generic orbit. The model of the satellite thrusters was modified to simulate a pulsed thrust by making the SKYHOOK integrator suitable for dealing with delta functions without losing computational efficiency. Tether breaks were simulated with the high-resolution computer code SLACK3. Shuttle maneuvers were tested. The electric potential around a severed conductive tether with insulator, in the case of a tether breakage at 20 km from the Shuttle, was computed. The electrodynamic hazards due to the breakage of the TSS electrodynamic tether in a plasma are evaluated.

  9. A computer simulation of an adaptive noise canceler with a single input

    NASA Astrophysics Data System (ADS)

    Albert, Stuart D.

    1991-06-01

    A description of an adaptive noise canceler using Widrow's LMS algorithm is presented. A computer simulation of canceler performance (adaptive convergence time and frequency transfer function) was written for use as a design tool. The simulations, assumptions, and input parameters are described in detail. The simulation is used in a design example to predict the performance of an adaptive noise canceler in the simultaneous presence of both strong and weak narrow-band signals (a cosited frequency-hopping radio scenario). On the basis of the simulation results, it is concluded that the simulation is suitable for use as an adaptive noise canceler design tool; i.e., it can be used to evaluate the effect of design parameter changes on canceler performance.
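A single-input canceler of the kind simulated here can be sketched as an adaptive line enhancer: the reference input is a delayed copy of the primary, which stays correlated with the narrow-band component but not with broadband noise, so the LMS filter learns to pass the former and reject the latter. The signal parameters below are illustrative assumptions, not taken from the report:

```python
import numpy as np

rng = np.random.default_rng(2)

# Primary input: a narrow-band tone buried in broadband (white) noise
fs, n = 1000, 4000
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 50 * t)
primary = tone + rng.normal(0, 1, n)

# Single-input canceler: the reference is the primary delayed by `delay` samples
delay, taps, mu = 25, 32, 0.001
w = np.zeros(taps)
out = np.zeros(n)
for k in range(delay + taps, n):
    # reference tap vector, most recent sample first
    x = primary[k - delay - taps + 1:k - delay + 1][::-1]
    yhat = w @ x              # filter output: estimate of the narrow-band part
    e = primary[k] - yhat     # error: the broadband residue
    w += 2 * mu * e * x       # LMS weight update
    out[k] = yhat

# How well the enhanced output tracks the buried tone after convergence
corr = float(np.corrcoef(out[n // 2:], tone[n // 2:])[0, 1])
```

The step size `mu` trades convergence speed against misadjustment, which is exactly the kind of design-parameter study the report's simulation tool was built to support.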

  10. Analysis of Inlet-Compressor Acoustic Interactions Using Coupled CFD Codes

    NASA Technical Reports Server (NTRS)

    Suresh, A.; Townsend, S. E.; Cole, G. L.; Slater, J. W.; Chima, R.

    1998-01-01

    A problem that arises in the numerical simulation of supersonic inlets is the lack of a suitable boundary condition at the engine face. In this paper, a coupled approach, in which the inlet computation is coupled dynamically to a turbomachinery computation, is proposed as a means to overcome this problem. The specific application chosen for validation of this approach is the collapsing bump experiment performed at the University of Cincinnati. The computed results are found to be in reasonable agreement with experimental results. The coupled simulation results could also be used to aid development of a simplified boundary condition.

  11. Running into Trouble with the Time-Dependent Propagation of a Wavepacket

    ERIC Educational Resources Information Center

    Garriz, Abel E.; Sztrajman, Alejandro; Mitnik, Dario

    2010-01-01

    The propagation in time of a wavepacket is a conceptually rich problem suitable to be studied in any introductory quantum mechanics course. This subject is covered analytically in most of the standard textbooks. Computer simulations have become a widespread pedagogical tool, easily implemented in computer labs and in classroom demonstrations.…
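A classroom simulation of the kind the article discusses can be written with the split-step Fourier method. The sketch below (grid sizes and packet parameters are arbitrary choices, with hbar = m = 1) propagates a free Gaussian wavepacket and checks the two quantities students typically monitor: norm conservation and the drift of the packet center at the group velocity:

```python
import numpy as np

# Spatial grid and the matching angular wavenumbers
N, L = 1024, 100.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Gaussian wavepacket centered at x = -20 with mean momentum k0
k0, sigma = 2.0, 2.0
psi = np.exp(-(x + 20.0) ** 2 / (4 * sigma ** 2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Free-particle propagation: the kinetic evolution is exact in k-space
dt, steps = 0.01, 1000
kinetic_phase = np.exp(-1j * k ** 2 / 2 * dt)
for _ in range(steps):
    psi = np.fft.ifft(kinetic_phase * np.fft.fft(psi))

norm = np.sum(np.abs(psi) ** 2) * dx        # should stay 1 (unitary evolution)
center = np.sum(x * np.abs(psi) ** 2) * dx  # should drift at velocity k0
```

After t = 10 the center has moved from -20 to about 0 while the packet spreads, which makes the dispersion of a free wavepacket directly visible; adding a potential term turns this into the full split-step scheme.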

  12. Parallel implementation of the particle simulation method with dynamic load balancing: Toward realistic geodynamical simulation

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Nishiura, D.

    2015-12-01

    Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are suitable for problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes such as tsunamis with free surfaces and floating bodies, magma intrusion with rock fracture, and shear-zone pattern generation in granular deformation. Investigating such geodynamical problems with particle-based methods requires millions to billions of particles for a realistic simulation, so parallel computing is important for handling the huge computational cost. An efficient parallel implementation of SPH and DEM is, however, known to be difficult, especially on distributed-memory architectures: Lagrangian methods inherently suffer a workload imbalance when parallelized over fixed spatial domains, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore the key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for SPH and DEM utilizing dynamic load balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in the execution time of each MPI process as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. To perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our approach is suitable for handling particles with different calculation costs (e.g. boundary particles) as well as heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (K computer, Earth Simulator 3, etc.).
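The core idea of cost-aware slicing — place domain boundaries so that measured work, not particle count, is equalized across ranks — can be illustrated in one dimension. The sketch below is our own simplification (a single axis, a static cost model, a direct prefix-sum cut) rather than the authors' Newton-iteration scheme:

```python
import numpy as np

rng = np.random.default_rng(3)

# Particles along one axis with per-particle work estimates; pretend particles
# near the right edge (e.g. boundary particles) are five times more expensive.
pos = rng.uniform(0.0, 1.0, 100_000)
cost = np.where(pos > 0.8, 5.0, 1.0)

P = 8  # number of slabs (one per MPI rank in the full scheme)

# Sort by position, then cut the cumulative cost into P equal shares
order = np.argsort(pos)
csum = np.cumsum(cost[order])
targets = csum[-1] * np.arange(1, P) / P
cuts = pos[order][np.searchsorted(csum, targets)]  # slab boundaries

# Resulting load per slab and the max/mean imbalance ratio
slab = np.searchsorted(cuts, pos)
loads = np.bincount(slab, weights=cost, minlength=P)
imbalance = loads.max() / loads.mean()
```

An equal-width decomposition of this data would overload the rightmost rank by several times; the cost-weighted cuts bring the imbalance ratio close to 1, which is the effect the dynamic scheme maintains as particles migrate during a run.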

  13. Molecular dynamics simulations in hybrid particle-continuum schemes: Pitfalls and caveats

    NASA Astrophysics Data System (ADS)

    Stalter, S.; Yelash, L.; Emamy, N.; Statt, A.; Hanke, M.; Lukáčová-Medvid'ová, M.; Virnau, P.

    2018-03-01

    Heterogeneous multiscale methods (HMM) combine molecular accuracy of particle-based simulations with the computational efficiency of continuum descriptions to model flow in soft matter liquids. In these schemes, molecular simulations typically pose a computational bottleneck, which we investigate in detail in this study. We find that it is preferable to simulate many small systems as opposed to a few large systems, and that a choice of a simple isokinetic thermostat is typically sufficient while thermostats such as Lowe-Andersen allow for simulations at elevated viscosity. We discuss suitable choices for time steps and finite-size effects which arise in the limit of very small simulation boxes. We also argue that if colloidal systems are considered as opposed to atomistic systems, the gap between microscopic and macroscopic simulations regarding time and length scales is significantly smaller. We propose a novel reduced-order technique for the coupling to the macroscopic solver, which allows us to approximate a non-linear stress-strain relation efficiently and thus further reduce computational effort of microscopic simulations.

  14. CFD: A Castle in the Sand?

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Wood, William A.

    2004-01-01

    The computational simulation community is not routinely publishing independently verifiable tests to accompany new models or algorithms. A survey reveals that only 22% of new models published are accompanied by tests suitable for independently verifying the new model. As the community develops larger codes with increased functionality, and hence increased complexity in terms of the number of building block components and their interactions, it becomes prohibitively expensive for each development group to derive the appropriate tests for each component. Therefore, the computational simulation community is building its collective castle on a very shaky foundation of components with unpublished and unrepeatable verification tests. The computational simulation community needs to begin publishing component level verification tests before the tide of complexity undermines its foundation.

  15. A Comparative Study of High and Low Fidelity Fan Models for Turbofan Engine System Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Afjeh, Abdollah A.

    1991-01-01

    In this paper, a heterogeneous propulsion system simulation method is presented. The method is based on the formulation of a cycle model of a gas turbine engine. The model includes the nonlinear characteristics of the engine components through the use of empirical data. The potential to simulate the entire engine operation on a computer without the aid of data is demonstrated by numerically generating "performance maps" for a fan component using two flow models of varying fidelity. The suitability of the fan models was evaluated by comparing the computed performance with experimental data. A discussion of the potential benefits and difficulties of connecting simulation solutions of differing fidelity is given.

  16. Turbulence simulation mechanization for Space Shuttle Orbiter dynamics and control studies

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; King, R. L.

    1977-01-01

    The current version of the NASA turbulent simulation model in the form of a digital computer program, TBMOD, is described. The logic of the program is discussed and all inputs and outputs are defined. An alternate method of shear simulation suitable for incorporation into the model is presented. The simulation is based on a von Karman spectrum and the assumption of isotropy. The resulting spectral density functions for the shear model are included.
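    The von Kármán spectrum mentioned above has a standard one-dimensional form for the longitudinal turbulence component. The sketch below is an illustration of that textbook form, not the TBMOD code; the function name and default length scale are assumptions. The constant 1.339 is chosen so that the spectral density integrates to the variance σ².

```python
import numpy as np

def von_karman_psd(omega, sigma=1.0, L=762.0):
    """One-dimensional von Karman power spectral density for the
    longitudinal turbulence component. omega is spatial frequency
    (rad/m), L the turbulence length scale (m), sigma the RMS
    turbulence intensity; integrating over omega >= 0 recovers sigma**2."""
    return (sigma**2 * 2.0 * L / np.pi) / (1.0 + (1.339 * L * omega)**2)**(5.0 / 6.0)
```

    The characteristic -5/3 high-frequency roll-off of this spectrum (in power) is what distinguishes the von Kármán model from the simpler Dryden form.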

  17. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations.

    PubMed

    Tučník, Petr; Bureš, Vladimír

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the -server parameter de/activated, altogether 12,800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used that allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method completed the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
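    Of the four methods compared, TOPSIS is the most compact to state, and a minimal sketch helps make the computational-cost comparison concrete. This is a generic textbook TOPSIS implementation, not the study's test harness; the function name and argument layout are assumptions.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS: vector-normalize the decision
    matrix, apply criterion weights, locate the ideal best and worst
    points, and score each alternative by its relative closeness to
    the ideal (higher is better). benefit[j] is True if criterion j
    is to be maximized."""
    M = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    V = (M / np.linalg.norm(M, axis=0)) * w        # normalize, then weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - anti, axis=1)
    return d_worst / (d_best + d_worst)            # closeness coefficient
```

    Each scoring pass is a handful of vectorized array operations per agent, which is why methods of this family are candidates for models with tens of thousands of agents.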

  18. Particle-based simulation of charge transport in discrete-charge nano-scale systems: the electrostatic problem

    PubMed Central

    2012-01-01

    The fast and accurate computation of the electric forces that drive the motion of charged particles at the nanometer scale represents a computational challenge. For this kind of system, where the discrete nature of the charges cannot be neglected, boundary element methods (BEM) represent a better approach than finite-difference/finite-element methods. In this article, we compare two different BEM approaches to a canonical electrostatic problem in a three-dimensional space with inhomogeneous dielectrics, emphasizing their suitability for particle-based simulations: the iterative method proposed by Hoyles et al. and the Induced Charge Computation introduced by Boda et al. PMID:22338640

  19. Particle-based simulation of charge transport in discrete-charge nano-scale systems: the electrostatic problem.

    PubMed

    Berti, Claudio; Gillespie, Dirk; Eisenberg, Robert S; Fiegna, Claudio

    2012-02-16

    The fast and accurate computation of the electric forces that drive the motion of charged particles at the nanometer scale represents a computational challenge. For this kind of system, where the discrete nature of the charges cannot be neglected, boundary element methods (BEM) represent a better approach than finite-difference/finite-element methods. In this article, we compare two different BEM approaches to a canonical electrostatic problem in a three-dimensional space with inhomogeneous dielectrics, emphasizing their suitability for particle-based simulations: the iterative method proposed by Hoyles et al. and the Induced Charge Computation introduced by Boda et al.

  20. Structure identification methods for atomistic simulations of crystalline materials

    DOE PAGES

    Stukowski, Alexander

    2012-05-28

    Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis and Voronoi analysis. In addition we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.
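    Of the techniques compared above, centrosymmetry analysis is the simplest to state: in a centrosymmetric lattice (e.g. fcc), every nearest-neighbor bond vector has an equal and opposite partner, so a sum over paired vectors vanishes in the perfect crystal and grows near defects. The sketch below illustrates the idea in the spirit of the original Kelchner et al. formulation; it is not Stukowski's code, and the greedy pairing of opposite neighbors (rather than an optimal assignment) is a simplifying assumption.

```python
import numpy as np

def centrosymmetry(center, positions, n_neighbors=12):
    """Centrosymmetry-parameter sketch: take the N nearest-neighbor
    vectors of an atom, greedily pair each vector with the remaining
    vector that best cancels it, and sum |r_i + r_j|^2 over the pairs.
    Near zero in a perfect centrosymmetric crystal (fcc: N=12),
    large near surfaces, stacking faults and dislocations."""
    rel = positions - center
    d = np.linalg.norm(rel, axis=1)
    nn = rel[np.argsort(d)[:n_neighbors]]     # N nearest neighbor vectors
    unused = list(range(n_neighbors))
    csp = 0.0
    while unused:
        i = unused.pop(0)
        # pick the remaining neighbor whose vector best cancels nn[i]
        j = min(unused, key=lambda k: np.sum((nn[i] + nn[k])**2))
        unused.remove(j)
        csp += np.sum((nn[i] + nn[j])**2)
    return csp
```

    Unlike common neighbor analysis, this parameter is a single scalar per atom, which is one reason it is popular for quick defect visualization in large-scale simulations.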

  1. Blast Load Simulator Experiments for Computational Model Validation: Report 1

    DTIC Science & Technology

    2016-08-01

    involving the inclusion of non-responding box-type structures in a BLS simulated blast environment. The BLS is a highly tunable compressed-gas-driven ... Blast Load Simulator (BLS) to evaluate its suitability for a future effort involving the inclusion of non-responding box-type structures located in ... Recommendations: Preliminary testing indicated that inclusion of the grill and diaphragm striker resulted in a decrease in peak pressure of about 12

  2. Adaptive Crack Modeling with Interface Solid Elements for Plain and Fiber Reinforced Concrete Structures.

    PubMed

    Zhan, Yijian; Meschke, Günther

    2017-07-08

    The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense.

  3. Adaptive Crack Modeling with Interface Solid Elements for Plain and Fiber Reinforced Concrete Structures

    PubMed Central

    Zhan, Yijian

    2017-01-01

    The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense. PMID:28773130

  4. A heterogeneous system based on GPU and multi-core CPU for real-time fluid and rigid body simulation

    NASA Astrophysics Data System (ADS)

    da Silva Junior, José Ricardo; Gonzalez Clua, Esteban W.; Montenegro, Anselmo; Lage, Marcos; Dreux, Marcelo de Andrade; Joselli, Mark; Pagliosa, Paulo A.; Kuryla, Christine Lucille

    2012-03-01

    Computational fluid dynamics in simulation has become an important field not only for physics and engineering areas but also for simulation, computer graphics, virtual reality and even video game development. Many efficient models have been developed over the years, but when many contact interactions must be processed, most models present difficulties or cannot achieve real-time results when executed. The advent of parallel computing has enabled the development of many strategies for accelerating the simulations. Our work proposes a new system which uses some successful algorithms already proposed, as well as a data structure organisation based on a heterogeneous architecture using CPUs and GPUs, in order to process the simulation of the interaction of fluids and rigid bodies. This successfully results in a two-way interaction between them and their surrounding objects. As far as we know, this is the first work that presents a computational collaborative environment which makes use of two different paradigms of hardware architecture for this specific kind of problem. Since our method achieves real-time results, it is suitable for virtual reality, simulation and video game fluid simulation problems.

  5. Evaluation of a grid based molecular dynamics approach for polypeptide simulations.

    PubMed

    Merelli, Ivan; Morra, Giulia; Milanesi, Luciano

    2007-09-01

    Molecular dynamics is very important for biomedical research because it makes possible the simulation of the behavior of a biological macromolecule in silico. However, molecular dynamics is computationally rather expensive: simulating a few nanoseconds of dynamics for a large macromolecule such as a protein takes a very long time, owing to the high number of operations needed to solve Newton's equations for a system of thousands of atoms. In order to obtain biologically significant data, it is desirable to use high-performance computing resources to perform these simulations. Recently, a distributed computing approach based on replacing a single long simulation with many independent short trajectories has been introduced, which in many cases provides valuable results. This study concerns the development of an infrastructure to run molecular dynamics simulations on a grid platform in a distributed way. The implemented software allows the parallel submission of different simulations that are individually short but together yield important biological information. Moreover, each simulation is divided into a chain of jobs to avoid data loss in case of system failure and to limit the size of each data transfer from the grid. The results confirm that the distributed approach on grid computing is particularly suitable for molecular dynamics simulations thanks to its high scalability.

  6. Algodoo: A Tool for Encouraging Creativity in Physics Teaching and Learning

    NASA Astrophysics Data System (ADS)

    Gregorcic, Bor; Bodin, Madelen

    2017-01-01

    Algodoo (http://www.algodoo.com) is a digital sandbox for physics 2D simulations. It allows students and teachers to easily create simulated "scenes" and explore physics through a user-friendly and visually attractive interface. In this paper, we present different ways in which students and teachers can use Algodoo to visualize and solve physics problems, investigate phenomena and processes, and engage in out-of-school activities and projects. Algodoo, with its approachable interface, inhabits a middle ground between computer games and "serious" computer modeling. It is suitable as an entry-level modeling tool for students of all ages and can facilitate discussions about the role of computer modeling in physics.

  7. Development of an autonomous video rendezvous and docking system

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Kelly, J. H.

    1982-01-01

    Video control systems using three flashing lights and two other types of docking aids were evaluated through computer simulation and other approaches. The three light system performed much better than the others. Its accuracy is affected little by tumbling of the target spacecraft, and in the simulations it was able to cope with attitude rates up to 20,000 degrees per hour about the docking axis. Its performance with rotation about other axes is determined primarily by the state estimation and goal setting portions of the control system, not by measurement accuracy. A suitable control system, and a computer program that can serve as the basis for the physical simulation are discussed.

  8. Development of an Aerothermoelastic-Acoustics Simulation Capability of Flight Vehicles

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Choi, S. B.; Ibrahim, A.

    2010-01-01

    A novel numerical, finite-element-based analysis methodology suitable for accurate and efficient simulation of practical, complex flight vehicles is presented in this paper. An associated computer code, developed in this connection, is also described in some detail. Thermal effects of high-speed flow, obtained from a heat conduction analysis, are incorporated in the modal analysis, which in turn affects the unsteady flow arising from the interaction of elastic structures with the air. Numerical examples pertaining to representative problems are given in much detail, testifying to the efficacy of the advocated techniques. This is a unique implementation of temperature effects in a finite element CFD-based multidisciplinary simulation analysis capability involving large-scale computations.

  9. A comparison of hardware description languages. [describing digital systems structure and behavior to a computer

    NASA Technical Reports Server (NTRS)

    Shiva, S. G.

    1978-01-01

    Several high level languages which evolved over the past few years for describing and simulating the structure and behavior of digital systems, on digital computers are assessed. The characteristics of the four prominent languages (CDL, DDL, AHPL, ISP) are summarized. A criterion for selecting a suitable hardware description language for use in an automatic integrated circuit design environment is provided.

  10. Simulating Human Cognition in the Domain of Air Traffic Control

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Johnston, James C.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Experiments intended to assess performance in human-machine interactions are often prohibitively expensive, unethical or otherwise impractical to run. Approximations of experimental results can be obtained, in principle, by simulating the behavior of subjects using computer models of human mental behavior. Computer simulation technology has been developed for this purpose. Our goal is to produce a cognitive model suitable to guide the simulation machinery and enable it to closely approximate a human subject's performance in experimental conditions. The described model is designed to simulate a variety of cognitive behaviors involved in routine air traffic control. As the model is elaborated, our ability to predict the effects of novel circumstances on controller error rates and other performance characteristics should increase. This will enable the system to project the impact of proposed changes to air traffic control procedures and equipment on controller performance.

  11. Unstructured grid methods for the simulation of 3D transient flows

    NASA Technical Reports Server (NTRS)

    Morgan, K.; Peraire, J.; Peiro, J.

    1994-01-01

    A description of the research work undertaken under NASA Research Grant NAGW-2962 has been given. Basic algorithmic development work, undertaken for the simulation of steady three dimensional inviscid flow, has been used as the basis for the construction of a procedure for the simulation of truly transient flows in three dimensions. To produce a viable procedure for implementation on the current generation of computers, moving boundary components are simulated by fixed boundaries plus a suitably modified boundary condition. Computational efficiency is increased by the use of an implicit time stepping scheme in which the equation system is solved by explicit multistage time stepping with multigrid acceleration. The viability of the proposed approach has been demonstrated by considering the application of the procedure to simulation of a transonic flow over an oscillating ONERA M6 wing.

  12. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations

    PubMed Central

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the -server parameter de/activated, altogether 12,800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used that allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method completed the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models. PMID:27806061

  13. Computational Modeling in Plasma Processing for 300 mm Wafers

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    Migration toward the 300 mm wafer size has been initiated recently due to process economics and to meet future demands for integrated circuits. A major issue facing the semiconductor community at this juncture is the development of suitable processing equipment, for example, plasma processing reactors that can accommodate 300 mm wafers. In this invited talk, scaling of reactors will be discussed with the aid of computational fluid dynamics results. We have undertaken reactor simulations using CFD with reactor geometry, pressure, and precursor flow rates as parameters in a systematic investigation. These simulations provide guidelines for scale-up in reactor design.

  14. SU-F-T-193: Evaluation of a GPU-Based Fast Monte Carlo Code for Proton Therapy Biological Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taleei, R; Qin, N; Jiang, S

    2016-06-15

    Purpose: Biological treatment plan optimization is of great interest for proton therapy. It requires extensive Monte Carlo (MC) simulations to compute physical dose and biological quantities. Recently, a gPMC package was developed for rapid MC dose calculations on a GPU platform. This work investigated its suitability for proton therapy biological optimization in terms of accuracy and efficiency. Methods: We performed simulations of a proton pencil beam with energies of 75, 150 and 225 MeV in a homogeneous water phantom using gPMC and FLUKA. Physical dose and energy spectra for each ion type on the central beam axis were scored. Relative Biological Effectiveness (RBE) was calculated using the repair-misrepair-fixation model. Microdosimetry calculations were performed using Monte Carlo Damage Simulation (MCDS). Results: Ranges computed by the two codes agreed within 1 mm. The physical dose difference was less than 2.5% at the Bragg peak. RBE-weighted dose agreed within 5% at the Bragg peak. Differences in microdosimetric quantities such as dose-average lineal energy transfer and specific energy were < 10%. The simulation time per source particle with FLUKA was 0.0018 sec, while gPMC was ∼600 times faster. Conclusion: Physical doses computed by FLUKA and gPMC were in good agreement. The RBE differences along the central axis were small, and the RBE-weighted dose difference was found to be acceptable. The combined accuracy and efficiency makes gPMC suitable for proton therapy biological optimization.

  15. Investigation of television transmission using adaptive delta modulation principles

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1976-01-01

    The results are presented of a study on the use of the delta modulator as a digital encoder of television signals. The computer simulation of different delta modulators was studied in order to find a satisfactory delta modulator. After finding a suitable delta modulator algorithm via computer simulation, the results were analyzed and then implemented in hardware to study its ability to encode real time motion pictures from an NTSC format television camera. The effects of channel errors on the delta modulated video signal were tested along with several error correction algorithms via computer simulation. A very high speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real time motion pictures. Delta modulators were investigated which could achieve significant bandwidth reduction without regard to complexity or speed. The first scheme investigated was a real time frame to frame encoding scheme which required the assembly of fourteen, 131,000 bit long shift registers as well as a high speed delta modulator. The other schemes involved the computer simulation of two dimensional delta modulator algorithms.
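    The adaptive delta modulation principle described above can be sketched compactly: transmit one bit per sample (the sign of the prediction error) and adapt the step size, growing it when successive bits agree (slope overload) and shrinking it when they alternate (granular noise). This is a generic textbook sketch, not the study's hardware algorithm; the function name, step bounds, and the double/halve adaptation rule are assumptions.

```python
def adm_encode_decode(samples, step0=1.0, step_min=0.25, step_max=64.0):
    """One-bit adaptive delta modulation: encode each sample as the sign
    of (sample - estimate), doubling the step when consecutive bits agree
    and halving it when they differ, within [step_min, step_max].
    Returns the bitstream and the decoder's running reconstruction."""
    bits, recon = [], []
    est, step, prev_bit = 0.0, step0, 1
    for s in samples:
        bit = 1 if s >= est else 0
        step = min(step * 2, step_max) if bit == prev_bit else max(step / 2, step_min)
        est += step if bit else -step
        bits.append(bit)
        recon.append(est)
        prev_bit = bit
    return bits, recon
```

    Because only the sign bit is transmitted and the decoder runs the same step-adaptation rule, the channel rate is one bit per sample, which is the source of the bandwidth reduction studied in the report.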

  16. FDA’s Nozzle Numerical Simulation Challenge: Non-Newtonian Fluid Effects and Blood Damage

    PubMed Central

    Trias, Miquel; Arbona, Antonio; Massó, Joan; Miñano, Borja; Bona, Carles

    2014-01-01

    Data from FDA’s nozzle challenge, a study to assess the suitability of simulating fluid flow in an idealized medical device, is used to validate the simulations obtained from a numerical finite-differences code. Various physiological indicators are computed and compared with experimental data from three different laboratories, with very good agreement. Special care is taken with the derivation of blood damage (hemolysis). The paper focuses on the laminar regime, in order to investigate non-Newtonian effects (non-constant fluid viscosity). The code can deal with these effects at only a small extra computational cost, improving on Newtonian estimates by up to ten percent. The relevance of non-Newtonian effects for hemolysis parameters is discussed. PMID:24667931

  17. Quantum Metropolis sampling.

    PubMed

    Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F

    2011-03-03

    The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
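    For contrast with the quantum algorithm, the classical Metropolis method the abstract refers to can be sketched in a few lines for a 2D Ising model: propose a single spin flip and accept it with probability min(1, exp(-βΔE)). This is a standard illustration of the classical algorithm only, not the paper's quantum version; the lattice size, coupling (J = 1), and function name are assumptions.

```python
import math
import random

def metropolis_ising(L=8, beta=0.3, sweeps=200, seed=1):
    """Classical Metropolis sampling of a 2D Ising model with periodic
    boundaries: repeatedly propose single spin flips and accept with
    probability min(1, exp(-beta * dE)). Returns the final spin lattice."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nb      # energy change of flipping (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]
    return spins
```

    The quantum analogue in the paper replaces these classical configurations with eigenstates of the Hamiltonian, which is what lets it sidestep the sign problem of classical simulations of quantum systems.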

  18. A space-efficient quantum computer simulator suitable for high-speed FPGA implementation

    NASA Astrophysics Data System (ADS)

    Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel

    2009-05-01

    Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.
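    The space-time tradeoff described above can be illustrated with a tiny Feynman path-sum: instead of storing the full 2^n state vector, one amplitude ⟨out|C|in⟩ is computed by recursing backward through the circuit and summing over the intermediate basis states of each gate. This sketch is an illustration of the general technique, not the authors' simulator, and is restricted to single-qubit gates for brevity.

```python
import math

# Hadamard gate as a plain 2x2 matrix
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def amplitude(gates, out_state, in_state):
    """Compute one amplitude <out|C|in> for a circuit given as a list of
    (2x2 matrix, qubit index) pairs, by summing over paths of intermediate
    basis states. Memory grows with circuit depth rather than 2^n;
    time grows exponentially with depth instead."""
    if not gates:
        return 1.0 if out_state == in_state else 0.0
    gate, q = gates[-1]                      # last gate applied
    out_bit = (out_state >> q) & 1
    total = 0.0
    for prev_bit in (0, 1):                  # branch over the path at this gate
        prev_state = (out_state & ~(1 << q)) | (prev_bit << q)
        amp = gate[out_bit][prev_bit]
        if amp:
            total += amp * amplitude(gates[:-1], prev_state, in_state)
    return total
```

    The recursion depth, and hence the memory footprint, is set by the number of gates, which is what makes this style of evaluation attractive for memory-constrained targets such as a single FPGA.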

  19. Indirect Reconstruction of Pore Morphology for Parametric Computational Characterization of Unidirectional Porous Iron.

    PubMed

    Kovačič, Aljaž; Borovinšek, Matej; Vesenjak, Matej; Ren, Zoran

    2018-01-26

    This paper addresses the problem of reconstructing realistic, irregular pore geometries of lotus-type porous iron for computer models that allow for simple porosity and pore size variation in computational characterization of their mechanical properties. The presented methodology uses image-recognition algorithms for the statistical analysis of pore morphology in real material specimens, from which a unique fingerprint of pore morphology at a certain porosity level is derived. The representative morphology parameter is introduced and used for the indirect reconstruction of realistic and statistically representative pore morphologies, which can be used for the generation of computational models with an arbitrary porosity. Such models were subjected to parametric computer simulations to characterize the dependence of engineering elastic modulus on the porosity of lotus-type porous iron. The computational results are in excellent agreement with experimental observations, which confirms the suitability of the presented methodology of indirect pore geometry reconstruction for computational simulations of similar porous materials.

  20. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  1. Mixtures of GAMs for habitat suitability analysis with overdispersed presence / absence data

    PubMed Central

    Pleydell, David R.J.; Chrétien, Stéphane

    2009-01-01

    A new approach to species distribution modelling based on unsupervised classification via a finite mixture of GAMs incorporating habitat suitability curves is proposed. A tailored EM algorithm is outlined for computing maximum likelihood estimates. Several submodels incorporating various parameter constraints are explored. Simulation studies confirm that, under certain constraints, the habitat suitability curves are recovered with good precision. The method is also applied to a set of real data concerning the presence/absence of observable small mammal indices collected on the Tibetan plateau. The resulting classification was found to correspond to species-level differences in habitat preference described in previous ecological work. PMID:20401331

  2. Effects of experimental design on calibration curve precision in routine analysis

    PubMed Central

    Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.

    1998-01-01

    A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges needed to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816
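    The confidence band of a straight-line calibration curve, which such plots visualize, has a closed form: its half-width at concentration x0 is t·s·sqrt(1/n + (x0 - x̄)²/Sxx), narrowest at the mean of the standards and widening toward the extremes. The sketch below illustrates that standard formula; it is not the program described in the record, and the function name and arguments are assumptions.

```python
import math

def calibration_band(x, y, x0, t_crit):
    """Half-width of the confidence band of a least-squares straight-line
    calibration curve at concentration x0. t_crit is the Student-t value
    for n-2 degrees of freedom; s is the residual standard deviation."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s = math.sqrt(sum(r * r for r in resid) / (n - 2))
    return t_crit * s * math.sqrt(1.0 / n + (x0 - xbar) ** 2 / sxx)
```

    The Sxx term in the denominator is why the placement of standard solutions matters: spreading standards widely (as D-optimal designs do) inflates Sxx and tightens the band away from the center.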

  3. Studying Transonic Gases With a Hydraulic Analog

    NASA Technical Reports Server (NTRS)

    Wagner, W.; Lepore, F.

    1986-01-01

    Water table for hydraulic-flow research yields valuable information about gas flow at transonic speeds. Used to study fuel and oxidizer flow in high-pressure rocket engines. Method applied to gas flows in such equipment as furnaces, nozzles, and chemical lasers. Especially suitable when wall contours nonuniform, discontinuous, or unusually shaped. Wall shapes changed quickly for study and evaluated on spot. Method used instead of computer simulation when computer models unavailable, inaccurate, or costly to run.

  4. Simulation research: A vital step for human missions to Mars

    NASA Astrophysics Data System (ADS)

    Perino, Maria Antonietta; Apel, Uwe; Bichi, Alessandro

    The complex nature of the challenge as humans embark on exploration missions beyond Earth orbit will require that, in the early stages, simulation facilities be established at least on Earth. Suitable facilities in Low Earth Orbit and on the lunar surface would provide complementary information of critical importance for the overall design of a human mission to Mars. A full range of simulation campaigns is required, in fact, to reach a better understanding of the complexities involved in exploration missions that will bring humans back to the Moon and then outward to Mars. The corresponding simulation means may range from small-scale environmental simulation chambers and/or computer models that will aid in the development of new materials, to full-scale mock-ups of spacecraft and planetary habitats and/or orbiting infrastructures. This paper describes how a suitable simulation campaign will contribute to the definition of the required countermeasures with respect to the expected duration of the flight. This will allow countermeasure payload and astronaut time to be traded against effort in the technological development of propulsion systems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Djidel, S.; Bouamar, M.; Khedrouche, D., E-mail: dkhedrouche@yahoo.com

    This paper presents a performance study of a UWB monopole antenna using a half-elliptic radiator conformed on an elliptical surface. The proposed antenna, simulated using CST Microwave Studio and the High Frequency Structure Simulator (HFSS), is designed to operate over the 3.1 to 40 GHz frequency range. Good return loss and radiation pattern characteristics are obtained in the frequency band of interest. The proposed antenna structure is suitable for ultra-wideband applications, as required for many wearable electronics applications.

  6. Modeling Flue Pipes: Subsonic Flow, Lattice Boltzmann, and Parallel Distributed Computers.

    DTIC Science & Technology

    1995-01-01

    The problem of simulating the hydrodynamics and the acoustic waves inside wind musical instruments such as the recorder, the organ, and the flute is considered. The problem is attacked by developing suitable local...applications such as the simulation of fluid dynamics inside wind musical instruments. In the past, he has also worked on numerical methods for ordinary differential...

  7. HRLSim: a high performance spiking neural network simulator for GPGPU clusters.

    PubMed

    Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan

    2014-02-01

    Modeling of large-scale spiking neural models is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.

  8. Imaging Near-Earth Electron Densities Using Thomson Scattering

    DTIC Science & Technology

    2009-01-15

    geocentric solar magnetospheric (GSM) coordinates. TECs were initially computed from a viewing location at the Sun-Earth L1 Lagrange point for both...further find that an elliptical Earth orbit (apogee ~30 RE) is a suitable lower-cost option for a demonstration mission. 5. SIMULATED OBSERVATIONS We

  9. A Conservation Strategy for the Florida Scrub-Jay on John F. Kennedy Space Center/Merritt Island National Wildlife Refuge: An Initial Scientific Basis for Recovery

    NASA Technical Reports Server (NTRS)

    Breininger, D. R.; Larson, V. L.; Schaub, R.; Duncan, B. W.; Schmalzer, P. A.; Oddy, D. M.; Smith, R. B.; Adrian, F.; Hill, H., Jr.

    1996-01-01

    The Florida Scrub-Jay (Aphelocoma coerulescens) is an indicator of ecosystem integrity of Florida scrub, an endangered ecosystem that requires frequent fire. One of the largest populations of this federally threatened species occurs on John F. Kennedy Space Center/Merritt Island National Wildlife Refuge. Population trends were predicted using population modeling and field data on reproduction and survival of Florida Scrub-Jays collected from 1988 - 1995. Analyses of historical photography indicated that habitat suitability has been declining for 30 years. Field data and computer simulations suggested that the population declined by at least 40% and will decline by another 40% in 10 years, if habitat management is not greatly intensified. Data and computer simulations suggest that habitat suitability cannot deviate greatly from optimal for the jay population to persist. Landscape trajectories of vegetation structure, responsible for declining habitat suitability, are associated with the disruption of natural fire regimes. Prescribed fire alone cannot reverse the trajectories. A recovery strategy was developed, based on studies of Florida Scrub-Jays and scrub vegetation. A reserve design was formulated based on conservation science principles for scrub ecosystems. The strategy emphasizes frequent fire to restore habitat, but includes mechanical tree cutting for severely degraded areas. Pine thinning across large areas can produce rapid increases in habitat quality. Site-specific strategies will need to be developed, monitored, and modified to achieve conditions suitable for population persistence.

  10. Condor-COPASI: high-throughput computing for biochemical networks

    PubMed Central

    2012-01-01

    Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage. PMID:22834945

  11. Communication interval selection in distributed heterogeneous simulation of large-scale dynamical systems

    NASA Astrophysics Data System (ADS)

    Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.

    2003-09-01

    In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
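
    The record gives no equations for the interval-selection analysis. As a toy illustration (a hypothetical two-state linear system, explicit Euler integration) of the core trade-off, the sketch below splits one system into two subsystems that refresh each other's interface variables only every `h_comm` seconds: a short communication interval tracks the monolithic solution closely, while a long one degrades accuracy.

```python
def cosimulate(h_comm, t_end=5.0, h_int=1e-3):
    """Two coupled subsystems: dx/dt = -2x + y, dy/dt = x - 2y.
    Each integrates alone (explicit Euler) and only refreshes the other's
    interface variable every h_comm seconds, mimicking distributed
    co-simulation of interconnected component models."""
    x, y = 1.0, 0.0
    t = 0.0
    while t < t_end:
        x_held, y_held = x, y          # interface variables frozen
        t_next = min(t + h_comm, t_end)
        while t < t_next - 1e-12:
            x += h_int * (-2 * x + y_held)
            y += h_int * (x_held - 2 * y)
            t += h_int
    return x, y

x_fine, _ = cosimulate(h_comm=1e-3)   # communicate every integration step
x_coarse, _ = cosimulate(h_comm=1.0)  # communicate only once per second
```

    The exact solution here is x(t) = 0.5(e^(-t) + e^(-3t)); the coarse communication interval holds stale inputs for a full system time constant and so drifts visibly from it.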

  12. Efficient electron open boundaries for simulating electrochemical cells

    NASA Astrophysics Data System (ADS)

    Zauchner, Mario G.; Horsfield, Andrew P.; Todorov, Tchavdar N.

    2018-01-01

    Nonequilibrium electrochemistry raises new challenges for atomistic simulation: we need to perform molecular dynamics for the nuclear degrees of freedom with an explicit description of the electrons, which in turn must be free to enter and leave the computational cell. Here we present a limiting form for electron open boundaries that we expect to apply when the magnitude of the electric current is determined by the drift and diffusion of ions in a solution and which is sufficiently computationally efficient to be used with molecular dynamics. We present tight-binding simulations of a parallel-plate capacitor with nothing, a dimer, or an atomic wire situated in the space between the plates. These simulations demonstrate that this scheme can be used to perform molecular dynamics simulations when there is an applied bias between two metal plates with, at most, weak electronic coupling between them. This simple system captures some of the essential features of an electrochemical cell, suggesting this approach might be suitable for simulations of electrochemical cells out of equilibrium.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaurov, Alexander A., E-mail: kaurov@uchicago.edu

    The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large-scale statistical properties. These mock catalogs are particularly useful for cosmic microwave background polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.

  14. Simulation of probabilistic wind loads and building analysis

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Chamis, Christos C.

    1991-01-01

    Probabilistic wind loads likely to occur on a structure during its design life are predicted. Described here is a suitable multifactor interactive equation (MFIE) model and its use in the Composite Load Spectra (CLS) computer program to simulate the wind pressure cumulative distribution functions on four sides of a building. The simulated probabilistic wind pressure load was applied to a building frame, and cumulative distribution functions of sway displacements and reliability against overturning were obtained using NESSUS (Numerical Evaluation of Stochastic Structure Under Stress), a stochastic finite element computer code. The geometry of the building and the properties of building members were also considered as random in the NESSUS analysis. The uncertainties of wind pressure, building geometry, and member section property were quantified in terms of their respective sensitivities on the structural response.

  15. Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing

    PubMed Central

    Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud

    2015-01-01

    This paper puts forward a new automatic clustering algorithm based on Multi-Objective Particle Swarm Optimization and Simulated Annealing, “MOPSOSA”. The proposed algorithm is capable of automatic clustering, partitioning datasets into a suitable number of clusters. MOPSOSA combines the features of multi-objective-based particle swarm optimization (PSO) and Multi-Objective Simulated Annealing (MOSA). Three cluster validity indices were optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset. The first cluster validity index is centred on Euclidean distance, the second on the point symmetry distance, and the last cluster validity index is based on short distance. A number of algorithms have been compared with the MOPSOSA algorithm in resolving clustering problems by determining the actual number of clusters and optimal clustering. Computational experiments were carried out to study fourteen artificial and five real life datasets. PMID:26132309
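
    MOPSOSA itself is multi-objective; as a hypothetical single-objective miniature of its simulated-annealing ingredient, the sketch below uses Metropolis acceptance with geometric cooling to minimize the within-cluster sum of squares for two well-separated 1-D blobs, flipping one point's cluster label per move:

```python
import numpy as np

rng = np.random.default_rng(1)
# two well-separated 1-D blobs (synthetic data for illustration)
pts = np.concatenate([rng.normal(0, 0.5, 10), rng.normal(10, 0.5, 10)])

def wcss(labels):
    # within-cluster sum of squares: the quantity being minimized
    return sum(((pts[labels == k] - pts[labels == k].mean()) ** 2).sum()
               for k in np.unique(labels))

labels = rng.integers(0, 2, pts.size)       # random initial partition
cost = wcss(labels)
best_labels, best_cost = labels.copy(), cost
T = 5.0                                     # initial temperature
for step in range(5000):
    cand = labels.copy()
    cand[rng.integers(pts.size)] ^= 1       # flip one point's cluster
    c = wcss(cand)
    # Metropolis rule: always accept improvements, sometimes accept worse
    if c < cost or rng.random() < np.exp((cost - c) / T):
        labels, cost = cand, c
        if c < best_cost:
            best_labels, best_cost = cand.copy(), c
    T *= 0.999                              # geometric cooling schedule
```

    The occasional acceptance of worse partitions at high temperature is what lets annealing escape poor local clusterings that pure descent would be stuck in.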

  16. Importance of inlet boundary conditions for numerical simulation of combustor flows

    NASA Technical Reports Server (NTRS)

    Sturgess, G. J.; Syed, S. A.; Mcmanus, K. R.

    1983-01-01

    Fluid dynamic computer codes for the mathematical simulation of problems in gas turbine engine combustion systems are required as design and diagnostic tools. To eventually achieve a performance standard with these codes of more than qualitative accuracy it is desirable to use benchmark experiments for validation studies. Typical of the fluid dynamic computer codes being developed for combustor simulations is the TEACH (Teaching Elliptic Axisymmetric Characteristics Heuristically) solution procedure. It is difficult to find suitable experiments which satisfy the present definition of benchmark quality. For the majority of the available experiments there is a lack of information concerning the boundary conditions. A standard TEACH-type numerical technique is applied to a number of test-case experiments. It is found that numerical simulations of gas turbine combustor-relevant flows can be sensitive to the plane at which the calculations start and the spatial distributions of inlet quantities for swirling flows.

  17. Research in the Aloha system

    NASA Technical Reports Server (NTRS)

    Abramson, N.

    1974-01-01

    The Aloha system was studied and developed and extended to advanced forms of computer communications networks. Theoretical and simulation studies of Aloha type radio channels for use in packet switched communications networks were performed. Improved versions of the Aloha communications techniques and their extensions were tested experimentally. A packet radio repeater suitable for use with the Aloha system operational network was developed. General studies of the organization of multiprocessor systems centered on the development of the BCC 500 computer were concluded.

  18. New tendencies in wildland fire simulation for understanding fire phenomena: An overview of the WFDS system capabilities in Mediterranean ecosystems

    NASA Astrophysics Data System (ADS)

    Pastor, E.; Tarragó, D.; Planas, E.

    2012-04-01

    Theoretical wildfire modeling endeavors to predict fire behavior characteristics, such as the rate of spread, the flame geometry, and the energy released by the fire front, by applying the laws of physics and chemistry that govern fire phenomena. Its ultimate aim is to help fire managers improve fire prevention and suppression, and hence to reduce damage to populations and protect ecosystems. WFDS is a 3D computational fluid dynamics (CFD) model of a fire-driven flow. It is particularly appropriate for predicting fire behaviour at the wildland-urban interface, since it is able to predict fire behaviour in the intermix of vegetative and structural fuels that comprises the wildland-urban interface. This model is not yet suitable for operational fire management due to computational cost constraints, but given that it is open-source and that it includes a detailed description of the fuels and of the combustion and heat transfer mechanisms, it is currently a suitable system for research purposes. In this paper we present the most important characteristics of the WFDS simulation tool in terms of the models implemented, the input information required, and the outputs the simulator provides that are useful for understanding fire phenomena. We briefly discuss its advantages and opportunities through some simulation exercises for Mediterranean ecosystems.

  19. A fast analytical undulator model for realistic high-energy FEL simulations

    NASA Astrophysics Data System (ADS)

    Tatchyn, R.; Cremer, T.

    1997-02-01

    A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.

  20. Computer simulations of optimum boost and buck-boost converters

    NASA Technical Reports Server (NTRS)

    Rahman, S.

    1982-01-01

    The development of mathematical models suitable for minimum-weight boost and buck-boost converter designs is presented. The utility of an augmented Lagrangian (ALAG) multiplier-based nonlinear programming technique is demonstrated for minimum-weight design optimization of boost and buck-boost power converters. ALAG-based computer simulation results for these two minimum-weight designs are discussed. Certain important features of ALAG are presented in the framework of a comprehensive design example for boost and buck-boost power converter design optimization. The study provides fresh design insight into power converters and presents such information as the weight and loss profiles of various semiconductor components and magnetics as a function of the switching frequency.

  1. Simulating electric field interactions with polar molecules using spectroscopic databases

    NASA Astrophysics Data System (ADS)

    Owens, Alec; Zak, Emil J.; Chubb, Katy L.; Yurchenko, Sergei N.; Tennyson, Jonathan; Yachmenev, Andrey

    2017-03-01

    Ro-vibrational Stark-associated phenomena of small polyatomic molecules are modelled using extensive spectroscopic data generated as part of the ExoMol project. The external field Hamiltonian is built from the computed ro-vibrational line list of the molecule in question. The Hamiltonian we propose is general and suitable for any polar molecule in the presence of an electric field. By exploiting precomputed data, the often prohibitively expensive computations associated with high accuracy simulations of molecule-field interactions are avoided. Applications to strong terahertz field-induced ro-vibrational dynamics of PH3 and NH3, and spontaneous emission data for optoelectrical Sisyphus cooling of H2CO and CH3Cl are discussed.
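
    The field-dressed Hamiltonian construction can be illustrated with a toy two-state polar molecule (all numbers hypothetical and in arbitrary consistent units; the actual ExoMol treatment builds the matrix from full ro-vibrational line lists):

```python
import numpy as np

# toy two-state model: field-free energies and dipole coupling matrix
E0 = np.diag([0.0, 1.0])        # field-free "ro-vibrational" energies
mu = np.array([[0.3, 0.1],      # dipole moment matrix in the same basis
               [0.1, -0.3]])    # (symmetric, as required for Hermiticity)

def stark_levels(field):
    # field-dressed Hamiltonian H = H0 - mu * E, diagonalized per field value
    H = E0 - mu * field
    return np.linalg.eigvalsh(H)

weak = stark_levels(0.0)        # recovers the field-free levels
strong = stark_levels(2.0)      # levels repel: the energy gap widens
```

    Scanning `field` over a grid of values traces out the Stark map; with precomputed line-list energies and dipole matrix elements, only this cheap diagonalization step remains, which is the point of the approach described above.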

  2. A Low-Cost Simulation Model for R-Wave Synchronized Atrial Pacing in Pediatric Patients with Postoperative Junctional Ectopic Tachycardia

    PubMed Central

    Michel, Miriam; Egender, Friedemann; Heßling, Vera; Dähnert, Ingo; Gebauer, Roman

    2016-01-01

    Background Postoperative junctional ectopic tachycardia (JET) occurs frequently after pediatric cardiac surgery. R-wave synchronized atrial (AVT) pacing is used to re-establish atrioventricular synchrony. AVT pacing is complex, with technical pitfalls. We sought to establish and to test a low-cost simulation model suitable for training and analysis in AVT pacing. Methods A simulation model was developed based on a JET simulator, a simulation doll, a cardiac monitor, and a pacemaker. A computer program simulated electrocardiograms. Ten experienced pediatric cardiologists tested the model. Their performance was analyzed using a testing protocol with 10 working steps. Results Four testers found the simulation model realistic; 6 found it very realistic. Nine claimed that the trial had improved their skills. All testers considered the model useful in teaching AVT pacing. The simulation test identified 5 working steps in which major mistakes in performance may impede safe and effective AVT pacing, thus permitting specific training. The components of the model (excluding monitor and pacemaker) cost less than $50. Assembly and training-session expenses were trivial. Conclusions A realistic, low-cost simulation model of AVT pacing is described. The model is suitable for teaching and analyzing AVT pacing technique. PMID:26943363

  3. Multiscale Modeling of UHTC: Thermal Conductivity

    NASA Technical Reports Server (NTRS)

    Lawson, John W.; Murry, Daw; Squire, Thomas; Bauschlicher, Charles W.

    2012-01-01

    We are developing a multiscale framework in computational modeling for the ultra high temperature ceramics (UHTC) ZrB2 and HfB2. These materials are characterized by high melting point, good strength, and reasonable oxidation resistance. They are candidate materials for a number of applications in extreme environments including sharp leading edges of hypersonic aircraft. In particular, we used a combination of ab initio methods, atomistic simulations and continuum computations to obtain insights into fundamental properties of these materials. Ab initio methods were used to compute basic structural, mechanical and thermal properties. From these results, a database was constructed to fit a Tersoff style interatomic potential suitable for atomistic simulations. These potentials were used to evaluate the lattice thermal conductivity of single crystals and the thermal resistance of simple grain boundaries. Finite element method (FEM) computations using atomistic results as inputs were performed with meshes constructed on SEM images thereby modeling the realistic microstructure. These continuum computations showed the reduction in thermal conductivity due to the grain boundary network.

  4. Realistic natural atmospheric phenomena and weather effects for interactive virtual environments

    NASA Astrophysics Data System (ADS)

    McLoughlin, Leigh

    Clouds and the weather are important aspects of any natural outdoor scene, but existing dynamic techniques within computer graphics only offer the simplest of cloud representations. The problem that this work looks to address is how to provide a means of simulating clouds and weather features, such as precipitation, that are suitable for virtual environments. Techniques for cloud simulation are available within the area of meteorology, but numerical weather prediction systems are computationally expensive, give more numerical accuracy than we require for graphics, and are restricted to the laws of physics. Within computer graphics, we often need to direct and adjust physical features or to bend reality to meet artistic goals, which is a key difference between the subjects of computer graphics and physical science. Pure physically-based simulations, however, evolve their solutions according to pre-set rules and are notoriously difficult to control. The challenge then is for the solution to be computationally lightweight and able to be directed in some measure while at the same time producing believable results. This work presents a lightweight physically-based cloud simulation scheme that simulates the dynamic properties of cloud formation and weather effects. The system simulates water vapour, cloud water, cloud ice, rain, snow and hail. The water model incorporates control parameters and the cloud model uses an arbitrary vertical temperature profile, with a tool described to allow the user to define this. The result of this work is that clouds can now be simulated in near real-time, complete with precipitation. The temperature profile and tool then provide a means of directing the resulting formation.

  5. Long-range interactions and parallel scalability in molecular simulations

    NASA Astrophysics Data System (ADS)

    Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko

    2007-01-01

    Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single-processor and parallel performance up to 8 nodes. We also tested scalability on four different networks: Infiniband, Gigabit Ethernet, Fast Ethernet, and a nearly uniform memory architecture in which communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of sizes 128, 512 and 2048 lipid molecules were used as the test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.

  6. Iterative load-balancing method with multigrid level relaxation for particle simulation with short-range interactions

    NASA Astrophysics Data System (ADS)

    Furuichi, Mikito; Nishiura, Daisuke

    2017-10-01

    We developed dynamic load-balancing algorithms for Particle Simulation Methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), Moving Particle Semi-implicit method (MPS), and Discrete Element method (DEM). These are needed to handle billions of particles modeled in large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in the column to be different for each row. The imbalances in the execution time between parallel logical processes are treated as a nonlinear residual. Load-balancing is achieved by minimizing the residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domain frequently by monitoring the performance of each computational process because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architecture which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using Earth simulator and K-computer supercomputer systems.
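
    As a greatly simplified, linear analogue of treating the inter-process work imbalance as a residual to be relaxed iteratively (here first-order diffusion between neighbouring processes in a 1-D chain; the measured timings are hypothetical):

```python
import numpy as np

# measured execution times of 8 parallel processes (hypothetical, seconds)
work = np.array([9.0, 1.0, 4.0, 2.0, 8.0, 3.0, 5.0, 0.5])

# first-order diffusion load balancing: each iteration, work flows from the
# heavier to the lighter of each neighbouring pair; alpha is the relaxation
# factor (alpha/2 <= 0.5 keeps the chain update stable)
alpha = 0.4
for _ in range(300):
    flux = np.diff(work)          # residual between neighbouring processes
    transfer = alpha * flux / 2
    work[:-1] += transfer         # heavier neighbour sheds work...
    work[1:] -= transfer          # ...lighter neighbour absorbs it
# total work is conserved; the distribution relaxes toward the mean
```

    The method in the record is more elaborate (orthogonal sub-domain boundaries, a nonlinear residual, and multigrid smoothing), but the same principle applies: small, cheap local adjustments per step, repeated as the workload is monitored, rather than a full repartition.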

  7. Construction of Interaction Layer on Socio-Environmental Simulation

    NASA Astrophysics Data System (ADS)

    Torii, Daisuke; Ishida, Toru

    In this study, we propose a method to construct a system, based on a legacy socio-environmental simulator, that enables the design of more realistic interaction models in socio-environmental simulations. First, to provide a computational model suitable for agent interactions, an interaction layer is constructed and connected from outside the legacy socio-environmental simulator. Next, to configure the agents' interaction capability, a connection description for controlling the flow of information in the connection area is provided. As a concrete example, we realized an interaction layer in Q, a scenario description language, and connected it to CORMAS, a socio-environmental simulator. Finally, we discuss the capability of our method, using the system, in the Fire-Fighter domain.

  8. TRIM—3D: a three-dimensional model for accurate simulation of shallow water flow

    USGS Publications Warehouse

    Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.

    1993-01-01

    A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced in the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for an efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to provide the fine details of circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results, suitable interactive graphics are also an essential tool.
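
    A one-dimensional, fully linear sketch of the semi-implicit idea (not TRIM-3D itself; grid and parameters are hypothetical): treating the free-surface gradient and velocity divergence implicitly yields a tridiagonal-type system for the surface elevation and lets the time step greatly exceed the explicit gravity-wave limit dx/sqrt(gH) while conserving volume.

```python
import numpy as np

# 1-D linear shallow-water channel, implicit in the free-surface gradient
# and velocity divergence (theta = 1), in the spirit of TRIM-type schemes
g, H, L, nx = 9.81, 10.0, 1000.0, 100
dx = L / nx
dt = 5.0                     # well above the explicit limit dx/sqrt(gH) ~ 1 s

eta = np.exp(-((np.arange(nx) * dx - L / 2) / 50.0) ** 2)  # initial hump
u = np.zeros(nx + 1)         # staggered face velocities; closed ends stay 0

r = g * H * (dt / dx) ** 2
# implicit free-surface system (I + r*A) eta_new = rhs, A = -Laplacian
A = np.zeros((nx, nx))
for i in range(nx):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < nx - 1:
        A[i, i + 1] = -1.0
A[0, 0] = A[-1, -1] = 1.0    # no-flux (closed) boundaries

vol0 = eta.sum() * dx        # initial volume, conserved by the scheme
M = np.eye(nx) + r * A
for _ in range(50):
    rhs = eta - H * dt / dx * np.diff(u)      # divergence of old velocities
    eta = np.linalg.solve(M, rhs)             # implicit elevation solve
    u[1:-1] -= g * dt / dx * np.diff(eta)     # momentum update, new gradient
```

    A production code would exploit the banded structure of the matrix (tridiagonal here) instead of a dense solve; that is where the "minimal computational cost" of the semi-implicit approach comes from.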

  9. Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO

    PubMed Central

    Zhang, Chaozhu; Han, Jinan; Li, Ke

    2014-01-01

    The numerically controlled oscillator (NCO) has wide application in radar, digital receivers, and software radio systems. Firstly, this paper introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, this paper proposes a hybrid CORDIC algorithm based on phase rotation estimation, applied in a numerically controlled oscillator. By estimating the direction of part of the phase rotations, the algorithm reduces the number of phase rotations and add-subtract units, thereby decreasing delay. Furthermore, the paper simulates and implements the numerically controlled oscillator using the Quartus II and ModelSim software tools. Finally, simulation results indicate that an improvement over the traditional CORDIC algorithm is achieved in terms of ease of computation, resource utilization, and computing speed/delay, while maintaining precision. It is suitable for high-speed and high-precision digital modulation and demodulation. PMID:25110750
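
    The traditional rotation-mode CORDIC that the paper builds on can be sketched in floating point (the actual NCO hardware uses fixed-point shift-add stages; this is an illustrative model only):

```python
import math

# precomputed elementary rotation angles atan(2^-i) and the CORDIC gain K
N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Rotation-mode CORDIC: drive the residual angle z to zero with a
    sequence of +/- atan(2^-i) micro-rotations (shifts and adds in
    hardware); returns (cos(theta), sin(theta)) for |theta| <= pi/2."""
    x, y, z = K, 0.0, theta          # pre-scaled by K to absorb the gain
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0  # rotation direction from the residual
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x, y

c, s = cordic_sin_cos(math.pi / 6)   # ~cos(30 deg), ~sin(30 deg)
```

    The hybrid scheme in the record estimates the directions `d` for part of the iterations in advance instead of deciding them sequentially, which is what shortens the critical path.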

  10. Eddylicious: A Python package for turbulent inflow generation

    NASA Astrophysics Data System (ADS)

    Mukha, Timofey; Liefvendahl, Mattias

    2018-01-01

    A Python package for generating inflow for scale-resolving computer simulations of turbulent flow is presented. The purpose of the package is to unite existing inflow generation methods in a single code-base and make them accessible to users of various Computational Fluid Dynamics (CFD) solvers. The current functionality consists of an accurate generation method for flows with a turbulent boundary layer at the inlet, together with input/output routines for coupling with the open-source CFD solver OpenFOAM.

  11. Performances study of UWB monopole antennas using half-elliptic radiator conformed on elliptical surface

    NASA Astrophysics Data System (ADS)

    Djidel, S.; Bouamar, M.; Khedrouche, D.

    2016-04-01

    This paper presents a performance study of a UWB monopole antenna using a half-elliptic radiator conformed to an elliptical surface. The proposed antenna, simulated using CST Microwave Studio and the High Frequency Structure Simulator (HFSS), is designed to operate over the frequency range from 3.1 to 40 GHz. Good return loss and radiation pattern characteristics are obtained in the frequency band of interest. The proposed antenna structure is suitable for ultra-wideband operation, as required by many wearable electronics applications.

  12. Organizational Adaptative Behavior: The Complex Perspective of Individuals-Tasks Interaction

    NASA Astrophysics Data System (ADS)

    Wu, Jiang; Sun, Duoyong; Hu, Bin; Zhang, Yu

    Organizations with different organizational structures behave differently when responding to environmental changes. In this paper, we use a computational model to examine organizational adaptation on four dimensions: Agility, Robustness, Resilience, and Survivability. We analyze the dynamics of organizational adaptation through a simulation study, taking a complex-systems perspective on the interaction between tasks and individuals in a sales enterprise. The simulation studies in different scenarios show that more flexible communication between employees and fewer hierarchy levels with suitable centralization can improve organizational adaptation.

  13. Natural three-qubit interactions in one-way quantum computing

    NASA Astrophysics Data System (ADS)

    Tame, M. S.; Paternostro, M.; Kim, M. S.; Vedral, V.

    2006-02-01

    We address the effects of natural three-qubit interactions on the computational power of one-way quantum computation. A benefit of using more sophisticated entanglement structures is the ability to construct compact and economic simulations of quantum algorithms with limited resources. We show that the features of our study are embodied by suitably prepared optical lattices, where effective three-spin interactions have been theoretically demonstrated. We use this to provide a compact construction for the Toffoli gate. Information flow and two-qubit interactions are also outlined, together with a brief analysis of relevant sources of imperfection.

  14. Monte Carlo errors with less errors

    NASA Astrophysics Data System (ADS)

    Wolff, Ulli; Alpha Collaboration

    2004-01-01

    We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more reliable error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable for benchmarking the efficiency of simulation algorithms with regard to specific observables of interest. A Matlab code that implements the method is offered for download. It can also combine independent runs (replica), allowing their consistency to be judged.
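The autocorrelation-summation estimate described above can be sketched as follows. This is a simplified Python rendition of the idea, not the authors' Matlab code; the automatic-windowing constant is chosen for illustration:

```python
import numpy as np

def tau_int(samples, w_max=None):
    """Integrated autocorrelation time
    tau = 1/2 + sum_{t=1..W} rho(t), where rho is the normalized
    autocorrelation function. The summation window W is chosen
    self-consistently: stop at the first lag t >= c * tau (c = 6
    here), in the spirit of automatic windowing. For independent
    samples tau is about 0.5; the effective number of independent
    samples is then n / (2 * tau)."""
    x = np.asarray(samples, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = np.dot(x, x) / n
    tau = 0.5
    for t in range(1, (w_max or n // 2)):
        rho = np.dot(x[:-t], x[t:]) / (n * var)
        tau += rho
        if t >= 6.0 * tau:       # automatic window cutoff
            break
    return tau
```

For a correlated chain such as an AR(1) process with coefficient phi, the exact value is tau = (1 + phi) / (2 * (1 - phi)), which the estimator reproduces up to truncation and sampling noise.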

  15. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    PubMed Central

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning-based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, at considerably lower computational cost than the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation, and curve fitting. The resulting procedure is accurate, automatic, and general enough to predict synapse behavior under experimental conditions different from those it was trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and is therefore an excellent tool for multi-scale simulations. PMID:23894367

  16. Sub-grid drag model for immersed vertical cylinders in fluidized beds

    DOE PAGES

    Verma, Vikrant; Li, Tingwen; Dietiker, Jean -Francois; ...

    2017-01-03

    Immersed vertical cylinders are often used as heat exchangers in gas-solid fluidized beds. Computational Fluid Dynamics (CFD) simulations are computationally expensive for large-scale systems with bundles of cylinders. Therefore, sub-grid models are required to facilitate simulations on a coarse grid, where internal cylinders are treated as a porous medium. The influence of cylinders on the gas-solid flow tends to enhance segregation and affect the gas-solid drag, so a correction to the gas-solid drag must be modeled using a suitable sub-grid constitutive relationship. In the past, Sarkar et al. developed a sub-grid drag model for horizontal cylinder arrays based on 2D simulations. However, the effect of a vertical cylinder arrangement was not considered due to computational complexities. In this study, highly resolved 3D simulations with vertical cylinders were performed in small periodic domains. These simulations were filtered to construct a sub-grid drag model which can then be implemented in coarse-grid simulations. Gas-solid drag was filtered for different solids fractions, and a significant reduction in drag was identified when compared with simulations without cylinders and simulations with horizontal cylinders. Slip velocities increase significantly when vertical cylinders are present. Lastly, vertical suspension drag due to vertical cylinders is insignificant; however, substantial horizontal suspension drag is observed, which is consistent with the findings for horizontal cylinders.

  17. Multilevel summation method for electrostatic force evaluation.

    PubMed

    Hardy, David J; Wu, Zhe; Phillips, James C; Stone, John E; Skeel, Robert D; Schulten, Klaus

    2015-02-10

    The multilevel summation method (MSM) offers an efficient algorithm utilizing convolution for evaluating long-range forces arising in molecular dynamics simulations. Shifting the balance of computation and communication, MSM provides key advantages over the ubiquitous particle–mesh Ewald (PME) method, offering better scaling on parallel computers and permitting more modeling flexibility, with support not only for periodic systems, as in PME, but also for semiperiodic and nonperiodic systems. The version of MSM available in the simulation program NAMD is described, and its performance and accuracy are compared with the PME method. The accuracy feasible for MSM in practical applications reproduces PME results for water property calculations of density, diffusion constant, dielectric constant, surface tension, radial distribution function, and distance-dependent Kirkwood factor, even though the numerical accuracy of PME is higher than that of MSM. Excellent agreement between MSM and PME is also found for interface potentials of air–water and membrane–water interfaces, where long-range Coulombic interactions are crucial. Applications also demonstrate the suitability of MSM for systems with semiperiodic and nonperiodic boundaries. For this purpose, simulations have been performed with periodic boundaries along directions parallel to a membrane surface but not along the surface normal, yielding membrane pore formation induced by an imbalance of charge across the membrane. Using a similar semiperiodic boundary condition, ion conduction through a graphene nanopore driven by an ion gradient has been simulated. Furthermore, proteins have been simulated inside a single spherical water droplet. Finally, parallel scalability results show the ability of MSM to outperform PME when scaling a system of modest size (less than 100 K atoms) to over a thousand processors, demonstrating the suitability of MSM for large-scale parallel simulation.

  18. A new computational approach to simulate pattern formation in Paenibacillus dendritiformis bacterial colonies

    NASA Astrophysics Data System (ADS)

    Tucker, Laura Jane

    Under the harsh conditions of limited nutrients and a hard growth surface, Paenibacillus dendritiformis colonies on agar plates form two classes of patterns (morphotypes). The first class, the dendritic morphotype, has radially directed branches. The second class, the chiral morphotype, exhibits uniform handedness. The dendritic morphotype has been modeled successfully using a continuum model on a regular lattice; however, no suitable computational approach was known for solving a continuum chiral model. This work details a new computational approach to solving the chiral continuum model of pattern formation in P. dendritiformis. The approach utilizes a random computational lattice and new methods for calculating certain derivative terms found in the model.

  19. A multiarchitecture parallel-processing development environment

    NASA Technical Reports Server (NTRS)

    Townsend, Scott; Blech, Richard; Cole, Gary

    1993-01-01

    A description is given of the hardware and software of a multiprocessor test bed - the second generation Hypercluster system. The Hypercluster architecture consists of a standard hypercube distributed-memory topology, with multiprocessor shared-memory nodes. By using standard, off-the-shelf hardware, the system can be upgraded to use rapidly improving computer technology. The Hypercluster's multiarchitecture nature makes it suitable for researching parallel algorithms in computational field simulation applications (e.g., computational fluid dynamics). The dedicated test-bed environment of the Hypercluster and its custom-built software allows experiments with various parallel-processing concepts such as message passing algorithms, debugging tools, and computational 'steering'. Such research would be difficult, if not impossible, to achieve on shared, commercial systems.

  20. GPU-accelerated computation of electron transfer.

    PubMed

    Höfinger, Siegfried; Acocella, Angela; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Beu, Titus; Zerbetto, Francesco

    2012-11-05

    Electron transfer is a fundamental process that can be studied with the help of computer simulation. The underlying quantum mechanical description renders the problem a computationally intensive application. In this study, we probe the graphics processing unit (GPU) for suitability to this type of problem. Time-critical components are identified via profiling of an existing implementation and several different variants are tested involving the GPU at increasing levels of abstraction. A publicly available library supporting basic linear algebra operations on the GPU turns out to accelerate the computation approximately 50-fold with minor dependence on actual problem size. The performance gain does not compromise numerical accuracy and is of significant value for practical purposes. Copyright © 2012 Wiley Periodicals, Inc.

  1. 4P: fast computing of population genetics statistics from large DNA polymorphism panels

    PubMed Central

    Benazzo, Andrea; Panziera, Alex; Bertorelle, Giorgio

    2015-01-01

    Massive DNA sequencing has significantly increased the amount of data available for population genetics and molecular ecology studies. However, the parallel computation of simple statistics within and between populations from large panels of polymorphic sites is not yet available, making the exploratory analyses of a set or subset of data a very laborious task. Here, we present 4P (parallel processing of polymorphism panels), a stand-alone software program for the rapid computation of genetic variation statistics (including the joint frequency spectrum) from millions of DNA variants in multiple individuals and multiple populations. It handles a standard input file format commonly used to store DNA variation from empirical or simulation experiments. The computational performance of 4P was evaluated using large SNP (single nucleotide polymorphism) datasets from human genomes or obtained by simulations. 4P was faster or much faster than other comparable programs, and the impact of parallel computing using multicore computers or servers was evident. 4P is a useful tool for biologists who need a simple and rapid computer program to run exploratory population genetics analyses in large panels of genomic data. It is also particularly suitable for analyzing multiple data sets produced in simulation studies. Unix, Windows, and macOS versions are provided, as well as the source code for easier pipeline implementations. PMID:25628874
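As an illustration of the kind of per-site statistics such a tool computes, here is a minimal Python sketch (not 4P's actual implementation) of the site frequency spectrum and nucleotide diversity from a 0/1 genotype matrix; the function names are hypothetical:

```python
def derived_sfs(genotypes):
    """Site frequency spectrum from a 0/1 genotype matrix:
    genotypes[i][j] is the allele (0 = ancestral, 1 = derived)
    carried by chromosome j at site i. Returns counts of sites
    indexed by derived-allele count 0..n_chrom."""
    n_chrom = len(genotypes[0])
    sfs = [0] * (n_chrom + 1)
    for site in genotypes:
        sfs[sum(site)] += 1
    return sfs

def nucleotide_diversity(genotypes):
    """Nucleotide diversity pi: the average number of pairwise
    differences per pair of chromosomes, summed over sites."""
    n = len(genotypes[0])
    pairs = n * (n - 1) / 2
    pi = 0.0
    for site in genotypes:
        k = sum(site)                    # derived-allele count
        pi += k * (n - k) / pairs        # heterozygous pairs at site
    return pi
```

Real panels hold millions of sites, which is why 4P parallelizes exactly this kind of per-site loop across cores.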

  2. The investigation of tethered satellite system dynamics

    NASA Technical Reports Server (NTRS)

    Lorenzini, E.

    1985-01-01

    A progress report is presented that deals with three major topics related to tethered satellite system dynamics. The SAO rotational dynamics computer code was updated; the program can now deal with inclined orbits, and its output has been modified to show the satellite Euler angles referred to the rotating orbital frame. The three-dimensional high-resolution computer program SLACK3 was developed; the code simulates the three-dimensional dynamics of a tether going slack, taking into account the effect produced by boom rotations. Preliminary simulations of the three-dimensional dynamics of a recoiling slack tether are shown in this report. A program was also developed to evaluate the electric potential around a severed tether immersed in a plasma; the potential is computed on a three-dimensional grid axially symmetric with respect to the tether's longitudinal axis. The electric potential variations due to the plasma are presently under investigation.

  3. Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources

    NASA Astrophysics Data System (ADS)

    Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi

    2017-01-01

    Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random power injections (such as wind power) makes PPF difficult to calculate. Monte Carlo simulation (MCS) and analytical methods are the two approaches commonly used to solve PPF. MCS has high accuracy but is very time consuming. Analytical methods such as the cumulants method (CM) have high computing efficiency, but calculating the cumulants is inconvenient when the wind power output does not follow any typical distribution, especially when correlated wind sources are considered. In this paper, an improved Monte Carlo simulation method (IMCS) is proposed. The joint empirical distribution is applied to model the output of different wind sources. This method combines the advantages of both MCS and analytical methods: it not only has high computing efficiency but also provides solutions with sufficient accuracy, making it very suitable for on-line analysis.
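The core idea of sampling from a joint empirical distribution, resampling whole historical records so that cross-site dependence is preserved exactly, can be sketched as follows. This is an illustrative simplification, not the paper's IMCS procedure, and the names are hypothetical:

```python
import random

def sample_joint_empirical(history, n_samples, rng=None):
    """Draw correlated wind-power scenarios by resampling whole
    records (one tuple of simultaneous site outputs) from the
    historical joint observations. Because each draw keeps all
    sites from the same time stamp together, the empirical
    cross-site correlation is reproduced exactly, with no need
    to fit a parametric joint distribution."""
    rng = rng or random.Random()
    return [rng.choice(history) for _ in range(n_samples)]
```

Sampling each site's marginal independently would destroy the correlation; row resampling is the simplest way to keep it, at the cost of never generating combinations absent from the historical record.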

  4. Solving Equations of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Lim, Christopher

    2007-01-01

    Darts++ is a computer program for solving the equations of motion of a multibody system or of a multibody model of a dynamic system. It is intended especially for dynamical simulations performed in designing and analyzing complex mechanical systems, and in developing software for their control. Darts++ is based on the Spatial-Operator-Algebra formulation for multibody dynamics. The software reads a description of a multibody system from a model data file, then constructs and implements an efficient algorithm that solves the dynamical equations of the system. The efficiency and, hence, the computational speed are sufficient to make Darts++ suitable for use in real-time closed-loop simulations. Darts++ features an object-oriented software architecture that enables reconfiguration of the system topology at run time; in contrast, in related prior software, the system topology is fixed during initialization. Darts++ provides interfaces to scripting languages, including Tcl and Python, that enable the user to configure and interact with simulation objects at run time.

  5. Symplectic multi-particle tracking on GPUs

    NASA Astrophysics Data System (ADS)

    Liu, Zhicong; Qiang, Ji

    2018-05-01

    A symplectic multi-particle tracking model is implemented on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model preserves phase-space structure and reduces non-physical effects in long-term simulation, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is very well suited to parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both single and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at the Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation reduces total computing time by more than a factor of two in comparison to the CPU implementation.
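A minimal example of a symplectic integrator of the kind used in such tracking codes is the kick-drift-kick leapfrog. The sketch below (plain Python with a unit-mass particle, not the paper's GPU code) illustrates why the energy error stays bounded in long-term simulation rather than drifting:

```python
def leapfrog(q, p, force, dt, n_steps):
    """Second-order symplectic kick-drift-kick leapfrog for a
    unit-mass particle with position q and momentum p. Being
    symplectic, it preserves phase-space structure, so the energy
    error oscillates at O(dt^2) instead of accumulating secularly
    as it would with explicit Euler."""
    for _ in range(n_steps):
        p += 0.5 * dt * force(q)   # half kick
        q += dt * p                # drift
        p += 0.5 * dt * force(q)   # half kick
    return q, p
```

For a harmonic oscillator (force = -q) started at q = 1, p = 0, the energy 0.5*(p**2 + q**2) stays near its initial value 0.5 even after many periods.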

  6. Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acharya, Naresh; Baone, Chaitanya; Veda, Santosh

    2014-12-31

    Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable assessment of dynamic stability margins and proactive real-time control, and improve grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst-case conditions such as summer peak or winter peak days. With widespread deployment of renewable generation, controllable loads, energy storage devices, and plug-in hybrid electric vehicles expected in the near future, and greater integration of cyber infrastructure (communications, computation, and control), monitoring and controlling the dynamic performance of the grid in real time will become increasingly important. State-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance on single-processor computers, but the simulation is still several times slower than real time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, expectations have been rising for more efficient and faster techniques to be implemented in power system simulators. This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed decades ago, when High Performance Computing (HPC) resources were not commonly available.

  7. Computational fluid dynamics-habitat suitability index (CFD-HSI) modelling as an exploratory tool for assessing passability of riverine migratory challenge zones for fish

    USGS Publications Warehouse

    Haro, Alexander J.; Chelminski, Michael; Dudley, Robert W.

    2015-01-01

    We developed two-dimensional computational fluid dynamics-habitat suitability index (CFD-HSI) models to identify and qualitatively assess potential zones of shallow water depth and high water velocity that may present passage challenges for five major anadromous fish species in a 2.63-km reach of the main stem Penobscot River, Maine, as a result of a dam removal downstream of the reach. Suitability parameters were based on distributions of fish lengths and body depths and transformed to cruising, maximum sustained, and sprint swimming speeds. Zones of potential depth and velocity challenges were calculated from the hydraulic models; the ability of fish to pass a challenge zone was based on the percentage of the river channel that the contiguous zone spanned and its maximum along-current length. Three river flows (low: 99.1 m³/s; normal: 344.9 m³/s; and high: 792.9 m³/s) were modelled to simulate existing hydraulic conditions and the hydraulic conditions following removal of a dam at the downstream boundary of the reach. Potential depth challenge zones were nonexistent for all low-flow simulations of existing conditions for deeper-bodied fishes. Increasing flows for existing conditions and removal of the dam under all flow conditions increased the number and size of potential velocity challenge zones, with the effects being more pronounced for smaller species. The two-dimensional CFD-HSI model has utility in demonstrating the gross effects of flow and hydraulic alteration, but may not be as precise a predictive tool as a three-dimensional model. Passability of the potential challenge zones cannot be precisely quantified for either two-dimensional or three-dimensional models, due to untested assumptions and incomplete data on fish swimming performance and behaviours.
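The qualitative screening step, flagging model cells whose depth or velocity exceeds a species' limits, can be illustrated with a hypothetical Python sketch; the function name and thresholds are illustrative and are not the authors' model:

```python
def challenge_zone_fraction(depth, velocity, min_depth, max_speed):
    """Fraction of grid cells that are a potential passage
    challenge for a given fish: too shallow for its body depth
    (depth < min_depth) or faster than its sustained swimming
    speed (velocity > max_speed). `depth` and `velocity` are
    equal-shaped 2D grids (lists of rows) from a hydraulic model."""
    cells = [(d, v)
             for row_d, row_v in zip(depth, velocity)
             for d, v in zip(row_d, row_v)]
    flagged = sum(1 for d, v in cells if d < min_depth or v > max_speed)
    return flagged / len(cells)
```

The paper's criterion is stricter than a cell count: it also considers whether flagged cells form a contiguous zone spanning the channel, which a sketch like this would need a connected-component pass to capture.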

  8. Simulating the Physical World

    NASA Astrophysics Data System (ADS)

    Berendsen, Herman J. C.

    2004-06-01

    The simulation of physical systems requires a simplified, hierarchical approach which models each level from the atomistic to the macroscopic scale. From quantum mechanics to fluid dynamics, this book systematically treats the broad scope of computer modeling and simulations, describing the fundamental theory behind each level of approximation. Berendsen evaluates each stage in relation to its applications, giving the reader insight into the possibilities and limitations of the models. Practical guidance for applications and sample programs in Python are provided. With a strong emphasis on molecular models in chemistry and biochemistry, this book will be suitable for advanced undergraduate and graduate courses on molecular modeling and simulation within physics, biophysics, physical chemistry and materials science. It will also be a useful reference to all those working in the field. Additional resources for this title, including solutions for instructors and programs, are available online at www.cambridge.org/9780521835275. It is the first book to cover the wide range of modeling and simulations, from the atomistic to the macroscopic scale, in a systematic fashion. Providing a wealth of background material, it does not assume advanced knowledge and is eminently suitable for course use. It contains practical examples and sample programs in Python.

  9. MSM/RD: Coupling Markov state models of molecular kinetics with reaction-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Dibak, Manuel; del Razo, Mauricio J.; De Sancho, David; Schütte, Christof; Noé, Frank

    2018-06-01

    Molecular dynamics (MD) simulations can model the interactions between macromolecules with high spatiotemporal resolution but at a high computational cost. By combining high-throughput MD with Markov state models (MSMs), it is now possible to obtain long time-scale behavior of small to intermediate biomolecules and complexes. To model the interactions of many molecules at large length scales, particle-based reaction-diffusion (RD) simulations are more suitable but lack molecular detail. Thus, coupling MSMs and RD simulations (MSM/RD) would be highly desirable, as they could efficiently produce simulations at large time and length scales, while still conserving the characteristic features of the interactions observed at atomic detail. While such a coupling seems straightforward, fundamental questions are still open: Which definition of MSM states is suitable? Which protocol for merging and splitting RD particles in an association/dissociation reaction will conserve the correct bimolecular kinetics and thermodynamics? In this paper, we take the first step toward MSM/RD by laying out a general theory of coupling and proposing a first implementation for association/dissociation of a protein with a small ligand (A + B ⇌ C). Applications on a toy model and CO diffusion into the heme cavity of myoglobin are reported.
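The MSM side of such a coupling rests on estimating a transition matrix from discretized trajectories. A minimal maximum-likelihood sketch (illustrative, not the paper's implementation):

```python
def transition_matrix(dtraj, n_states, lag=1):
    """Row-normalized transition matrix estimated from a discrete
    state trajectory: T[i][j] approximates P(state j at t + lag |
    state i at t). Counts transitions at the given lag time, then
    normalizes each row (maximum-likelihood estimate, without the
    reversibility constraint often imposed in practice)."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        counts[a][b] += 1
    T = []
    for row in counts:
        total = sum(row)
        T.append([c / total if total else 0.0 for c in row])
    return T
```

In an MSM/RD scheme, a matrix like this governs the internal kinetics of a particle while the RD layer handles its spatial diffusion.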

  10. Analysis of Plane-Parallel Electron Beam Propagation in Different Media by Numerical Simulation Methods

    NASA Astrophysics Data System (ADS)

    Miloichikova, I. A.; Bespalov, V. I.; Krasnykh, A. A.; Stuchebrov, S. G.; Cherepennikov, Yu. M.; Dusaev, R. R.

    2018-04-01

    Simulation by the Monte Carlo method is widely used to calculate the character of ionizing radiation interaction with matter. A wide variety of programs based on this method allows users to choose the most suitable package for their computational problems. In turn, it is important to know the exact restrictions of each numerical system in order to avoid gross errors. Results are presented from an assessment of the applicability of the program PCLab (Computer Laboratory, version 9.9) to numerical simulation of the electron energy distribution absorbed in beryllium, aluminum, gold, and water for industrial, research, and clinical beams. The data obtained with PCLab and with the programs ITS and Geant4, the most popular software packages for solving such problems, are presented in graphic form. A comparison and analysis of the results demonstrate the feasibility of using PCLab to simulate the absorbed energy distribution and dose of electrons in various materials for energies in the range of 1-20 MeV.

  11. Computing Radiative Transfer in a 3D Medium

    NASA Technical Reports Server (NTRS)

    Von Allmen, Paul; Lee, Seungwon

    2012-01-01

    A package of software computes the time-dependent propagation of a narrow laser beam in an arbitrary three- dimensional (3D) medium with absorption and scattering, using the transient-discrete-ordinates method and a direct integration method. Unlike prior software that utilizes a Monte Carlo method, this software enables simulation at very small signal-to-noise ratios. The ability to simulate propagation of a narrow laser beam in a 3D medium is an improvement over other discrete-ordinate software. Unlike other direct-integration software, this software is not limited to simulation of propagation of thermal radiation with broad angular spread in three dimensions or of a laser pulse with narrow angular spread in two dimensions. Uses for this software include (1) computing scattering of a pulsed laser beam on a material having given elastic scattering and absorption profiles, and (2) evaluating concepts for laser-based instruments for sensing oceanic turbulence and related measurements of oceanic mixed-layer depths. With suitable augmentation, this software could be used to compute radiative transfer in ultrasound imaging in biological tissues, radiative transfer in the upper Earth crust for oil exploration, and propagation of laser pulses in telecommunication applications.

  12. Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks.

    PubMed

    Slażyński, Leszek; Bohte, Sander

    2012-01-01

    The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation, previously available only at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms that fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state of each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single-precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures for the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate in better than real time plausible spiking neural networks of up to 50 000 neurons, processing over 35 million spiking events per second.
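The per-neuron update parallelism described above can be illustrated with a scalar sketch of one simulation step; on a GPU, each loop iteration would map onto its own thread. The names and constants below are illustrative, and the model is a simplified leaky integrator rather than the full Spike Response Model:

```python
import math

def step_lif_filters(v, spikes, w, dt=1.0, tau=20.0, v_th=1.0):
    """One elementwise update of a population of filter-based
    neurons: each membrane potential v[i] decays exponentially
    with time constant tau and accumulates its weighted input
    spike; neurons crossing threshold v_th emit a spike and reset.
    Every neuron updates independently of the others, which is the
    property that maps onto one GPU thread per neuron. Mutates v
    in place and returns the output spike vector."""
    decay = math.exp(-dt / tau)
    out = []
    for i in range(len(v)):
        v[i] = v[i] * decay + w[i] * spikes[i]
        fired = v[i] >= v_th
        out.append(1 if fired else 0)
        if fired:
            v[i] = 0.0
    return out
```

The additive structure noted in the abstract means the decay-and-accumulate line can be evaluated for all neurons as a single elementwise vector operation, the form GPUs execute most efficiently.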

  13. Deformation of Soft Tissue and Force Feedback Using the Smoothed Particle Hydrodynamics

    PubMed Central

    Liu, Xuemei; Wang, Ruiyi; Li, Yunhua; Song, Dongdong

    2015-01-01

    We study the deformation and haptic feedback of soft tissue in virtual surgery, based on a liver model, using the PHANTOM OMNI force-feedback device developed by SensAble (USA). Although a significant amount of research effort has been dedicated to simulating the behavior of soft tissue and implementing force feedback, it remains a challenging problem. This paper introduces a meshfree method for simulating soft-tissue deformation and computing feedback forces, based on a viscoelastic mechanical model and smoothed particle hydrodynamics (SPH). Firstly, the viscoelastic model captures the mechanical characteristics of soft tissue, which greatly promotes realism. Secondly, SPH is a meshless, self-adapting technique, which provides higher precision than mesh-based methods for force feedback computation. Finally, an SPH method based on a dynamic interaction area is proposed to improve the real-time performance of the simulation. The results reveal that the SPH methodology is suitable for simulating soft-tissue deformation and calculating force feedback, and that SPH based on a dynamic local interaction area has significantly higher computational efficiency than standard SPH. Our algorithm has a bright prospect in the area of virtual surgery. PMID:26417380
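At the heart of any SPH scheme is a compactly supported smoothing kernel that weights the contribution of neighboring particles. A common choice is Monaghan's cubic spline, sketched here in Python; the abstract does not state which kernel the authors use, so this is context rather than their method:

```python
import math

def cubic_spline_kernel(r, h):
    """Standard 3D cubic spline SPH smoothing kernel (Monaghan)
    with smoothing length h and compact support of radius 2h:
    particles farther than 2h apart do not interact, which is
    what makes local neighbor lists (and dynamic interaction
    areas) effective. Normalized so the kernel integrates to 1
    over R^3."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)   # 3D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0
```

Field quantities at a particle are then sums of neighbor values weighted by this kernel; restricting those sums to a dynamic local interaction area is the efficiency idea the paper proposes.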

  14. Error threshold for color codes and random three-body Ising models.

    PubMed

    Katzgraber, Helmut G; Bombin, H; Martin-Delgado, M A

    2009-08-28

    We study the error threshold of color codes, a class of topological quantum codes that allow a direct implementation of quantum Clifford gates suitable for entanglement distillation, teleportation, and fault-tolerant quantum computation. We map the error-correction process onto a statistical mechanical random three-body Ising model and study its phase diagram via Monte Carlo simulations. The obtained error threshold of p_c = 0.109(2) is very close to that of Kitaev's toric code, showing that enhanced computational capabilities do not necessarily imply lower resistance to noise.
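The statistical-mechanics side of this mapping can be illustrated with a minimal Metropolis simulation of a three-body Ising model. This toy uses a one-dimensional ring of three-spin terms rather than the lattice studied in the paper, and the temperature, size, and disorder fraction are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 48                       # spins on a periodic ring (toy size)
p = 0.0                      # fraction of negated couplings; p > 0 introduces
                             # the quenched randomness studied in the paper
J = np.where(rng.random(N) < p, -1, 1)
s = rng.choice([-1, 1], N)

def energy(s):
    # H = -sum_i J_i s_i s_{i+1} s_{i+2}: a ring of three-body terms
    return -int(np.sum(J * s * np.roll(s, -1) * np.roll(s, -2)))

E0 = energy(s)
E, T = E0, 0.2               # Metropolis sampling at low temperature
for _ in range(1000 * N):
    i = rng.integers(N)
    s[i] *= -1               # propose a single spin flip
    Enew = energy(s)
    if Enew <= E or rng.random() < np.exp(-(Enew - E) / T):
        E = Enew             # accept
    else:
        s[i] *= -1           # reject: undo the flip

print(E0, E)
```

Locating the error threshold amounts to repeating such simulations across temperatures and disorder strengths p and finding where the ordered phase disappears.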

  15. Computer simulations of polymer chain structure and dynamics on a hypersphere in four-space

    NASA Astrophysics Data System (ADS)

    Râsmark, Per Johan; Ekholm, Tobias; Elvingson, Christer

    2005-05-01

    There is a rapidly growing interest in performing computer simulations in a closed space, avoiding periodic boundary conditions. To extend the range of potential systems to include also macromolecules, we describe an algorithm for computer simulations of polymer chain molecules on S3, a hypersphere in four dimensions. In particular, we show how to generate initial conformations with a bond angle distribution given by the persistence length of the chain and how to calculate the bending forces for a molecule moving on S3. Furthermore, we discuss how to describe the shape of a macromolecule on S3, by deriving the radius of gyration tensor in this non-Euclidean space. The results from both Monte Carlo and Brownian dynamics simulations in the infinite dilution limit show that the results on S3 and in R3 coincide, both with respect to the size and shape as well as for the diffusion coefficient. All data on S3 can also be described by master curves by suitable scaling by the corresponding values in R3. We thus show how to extend the use of spherical boundary conditions, which are most effective for calculating electrostatic forces, to polymer chain molecules, making it possible to perform simulations on S3 also for polyelectrolyte systems.

  16. Aeroacoustic Simulations of a Nose Landing Gear with FUN3D: A Grid Refinement Study

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Lockard, David P.

    2017-01-01

    A systematic grid refinement study is presented for numerical simulations of a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise (Registered Trademark) grid generation software are used for numerical simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A set of grids was generated in this manner to create a family of uniformly refined grids. The finest grid was then modified to coarsen the wall-normal spacing to create a grid suitable for the wall-function implementation in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence modeling approach is used for these simulations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. These CFD solutions are used as input to a Ffowcs Williams-Hawkings (FW-H) noise propagation code to compute the farfield noise levels. The agreement of the computed results with the experimental data improves as the grid is refined.

  17. Multiscale Modeling of Ultra High Temperature Ceramics (UHTC) ZrB2 and HfB2: Application to Lattice Thermal Conductivity

    NASA Technical Reports Server (NTRS)

    Lawson, John W.; Daw, Murray S.; Squire, Thomas H.; Bauschlicher, Charles W.

    2012-01-01

    We are developing a multiscale framework in computational modeling for the ultra high temperature ceramics (UHTC) ZrB2 and HfB2. These materials are characterized by high melting point, good strength, and reasonable oxidation resistance. They are candidate materials for a number of applications in extreme environments including sharp leading edges of hypersonic aircraft. In particular, we used a combination of ab initio methods, atomistic simulations and continuum computations to obtain insights into fundamental properties of these materials. Ab initio methods were used to compute basic structural, mechanical and thermal properties. From these results, a database was constructed to fit a Tersoff style interatomic potential suitable for atomistic simulations. These potentials were used to evaluate the lattice thermal conductivity of single crystals and the thermal resistance of simple grain boundaries. Finite element method (FEM) computations using atomistic results as inputs were performed with meshes constructed on SEM images thereby modeling the realistic microstructure. These continuum computations showed the reduction in thermal conductivity due to the grain boundary network.

  18. Performance simulation for the design of solar heating and cooling systems

    NASA Technical Reports Server (NTRS)

    Mccormick, P. O.

    1975-01-01

    Suitable approaches for evaluating the performance and cost of a solar heating and cooling system are considered, emphasizing the value of a computer simulation of the entire system given the large number of parameters involved. Operational relations concerning the collector efficiency of a new improved collector and a reference collector are presented in a graph. Total costs for solar and conventional heating, ventilation, and air conditioning systems as a function of time are shown in another graph.

  19. dfnWorks: A discrete fracture network framework for modeling subsurface flow and transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyman, Jeffrey D.; Karra, Satish; Makedonska, Nataliia

    DFNWORKS is a parallelized computational suite to generate three-dimensional discrete fracture networks (DFN) and simulate flow and transport. Developed at Los Alamos National Laboratory over the past five years, it has been used to study flow and transport in fractured media at scales ranging from millimeters to kilometers. The networks are created and meshed using DFNGEN, which combines FRAM (the feature rejection algorithm for meshing) methodology to stochastically generate three-dimensional DFNs with the LaGriT meshing toolbox to create a high-quality computational mesh representation. The representation produces a conforming Delaunay triangulation suitable for high performance computing finite volume solvers in an intrinsically parallel fashion. Flow through the network is simulated in dfnFlow, which utilizes the massively parallel subsurface flow and reactive transport finite volume code PFLOTRAN. A Lagrangian approach to simulating transport through the DFN is adopted within DFNTRANS to determine pathlines and solute transport through the DFN. Example applications of this suite in the areas of nuclear waste repository science, hydraulic fracturing and CO2 sequestration are also included.

  20. dfnWorks: A discrete fracture network framework for modeling subsurface flow and transport

    DOE PAGES

    Hyman, Jeffrey D.; Karra, Satish; Makedonska, Nataliia; ...

    2015-11-01

    DFNWORKS is a parallelized computational suite to generate three-dimensional discrete fracture networks (DFN) and simulate flow and transport. Developed at Los Alamos National Laboratory over the past five years, it has been used to study flow and transport in fractured media at scales ranging from millimeters to kilometers. The networks are created and meshed using DFNGEN, which combines FRAM (the feature rejection algorithm for meshing) methodology to stochastically generate three-dimensional DFNs with the LaGriT meshing toolbox to create a high-quality computational mesh representation. The representation produces a conforming Delaunay triangulation suitable for high performance computing finite volume solvers in an intrinsically parallel fashion. Flow through the network is simulated in dfnFlow, which utilizes the massively parallel subsurface flow and reactive transport finite volume code PFLOTRAN. A Lagrangian approach to simulating transport through the DFN is adopted within DFNTRANS to determine pathlines and solute transport through the DFN. Example applications of this suite in the areas of nuclear waste repository science, hydraulic fracturing and CO2 sequestration are also included.

  1. Dynamic load balance scheme for the DSMC algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jin; Geng, Xiangren; Jiang, Dingwu

    The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been used on a wide range of rarefied flow problems over the past 40 years. While DSMC is suitable for parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the number of simulator particles upon it. Since most flows are impulsively started with an initial distribution of particles that is quite different from the steady state, the number of simulator particles changes dramatically, and a load balance based upon the initial distribution breaks down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and using the number of simulator particles in each cell as weight information, a repartitioning based upon the principle that each processor handles approximately the same number of simulator particles is achieved. The computation pauses several times to update the particle counts in each processor and repartition the whole domain, so the load balance across the processor array holds for the duration of the computation, and the parallel efficiency is improved effectively. The benchmark case of a cylinder submerged in hypersonic flow has been simulated numerically, as has hypersonic flow past a complex wing-body configuration. The results show that, in both cases, the computational time can be reduced by about 50%.
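The balancing principle in this record, equalizing the total number of simulator particles per processor, can be sketched with a greedy longest-processing-time assignment. Note this is only a stand-in: the paper uses METIS graph partitioning, which also respects cell adjacency, whereas the toy below balances totals alone; cell counts and processor count are made up.

```python
import heapq
import numpy as np

rng = np.random.default_rng(7)
counts = rng.integers(1, 1000, size=200)   # simulator particles per cell (toy data)
nproc = 4

# Longest-processing-time greedy: visit cells in decreasing particle count,
# always assigning the next cell to the currently least-loaded processor.
loads = [(0, p) for p in range(nproc)]
heapq.heapify(loads)
assignment = {}
for cell in np.argsort(counts)[::-1]:
    load, p = heapq.heappop(loads)
    assignment[int(cell)] = p
    heapq.heappush(loads, (load + int(counts[cell]), p))

per_proc = [sum(int(counts[c]) for c, p in assignment.items() if p == q)
            for q in range(nproc)]
print(per_proc)
```

In the dynamic scheme of the paper, the `counts` array is refreshed as the flow evolves toward steady state and the assignment is recomputed, so the balance survives the transient.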

  2. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures.

    PubMed

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-04-01

    Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable for MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%/1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
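The class-II condensed history split the record describes, soft energy losses lumped into a continuous term and hard ionizations above a threshold sampled individually, can be sketched as follows. Every number here (energies, step length, loss model, event rate) is an invented illustration, not MCsquare's physics.

```python
import numpy as np

rng = np.random.default_rng(3)

E = 200.0          # proton kinetic energy, MeV (illustrative)
threshold = 0.1    # MeV: delta rays above this are simulated individually
step = 0.1         # step length, cm (toy values throughout)

n_steps = 0
while E > 1.0 and n_steps < 100000:
    # Soft collisions: grouped into a continuous ("restricted") energy loss.
    # A real code uses restricted stopping powers; this 1/E model is made up.
    E -= 0.05 * step * (200.0 / max(E, 1.0))
    # Hard ionizations: discrete events above threshold, sampled individually.
    for _ in range(rng.poisson(0.2)):
        E -= threshold * (1.0 + rng.exponential(1.0))
    n_steps += 1

print(n_steps, E)
```

The payoff of the split is that the expensive per-event sampling is reserved for the rare hard collisions, while the vast number of soft collisions costs one arithmetic update per step.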

  3. Controlling the error on target motion through real-time mesh adaptation: Applications to deep brain stimulation.

    PubMed

    Bui, Huu Phuoc; Tomar, Satyendra; Courtecuisse, Hadrien; Audette, Michel; Cotin, Stéphane; Bordas, Stéphane P A

    2018-05-01

    An error-controlled mesh refinement procedure for needle insertion simulations is presented. As an example, the procedure is applied for simulations of electrode implantation for deep brain stimulation. We take into account the brain shift phenomena occurring when a craniotomy is performed. We observe that the error in the computation of the displacement and stress fields is localised around the needle tip and the needle shaft during needle insertion simulation. By suitably and adaptively refining the mesh in this region, our approach enables us to control, and thus reduce, the error whilst maintaining a coarser mesh in other parts of the domain. Through academic and practical examples we demonstrate that our adaptive approach, as compared with a uniform coarse mesh, increases the accuracy of the displacement and stress fields around the needle shaft and, for a given accuracy, saves computational time with respect to a uniform finer mesh. This facilitates real-time simulations. The proposed methodology has direct implications in increasing the accuracy, and controlling the computational expense of the simulation of percutaneous procedures such as biopsy, brachytherapy, regional anaesthesia, or cryotherapy. Moreover, the proposed approach can be helpful in the development of robotic surgeries because the simulation taking place in the control loop of a robot needs to be accurate, and to occur in real time. Copyright © 2018 John Wiley & Sons, Ltd.
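The refine-where-the-error-lives idea can be demonstrated on a one-dimensional toy: a field sharply peaked at a hypothetical "needle tip", an interpolation-error indicator per interval, and repeated splitting of only the flagged intervals. The field, tolerance, and tip location are all invented for illustration.

```python
import numpy as np

x0 = 0.3                                            # hypothetical needle-tip location
f = lambda x: 1.0 / (1.0 + 200.0 * (x - x0) ** 2)   # sharply peaked mock field

def indicator(a, b):
    # Midpoint interpolation error of f on [a, b], used as a local error estimate.
    return abs(f(0.5 * (a + b)) - 0.5 * (f(a) + f(b)))

intervals, tol = [(0.0, 1.0)], 1e-3
for _ in range(12):                      # bounded number of refinement passes
    new = []
    for a, b in intervals:
        if indicator(a, b) > tol and b - a > 1e-4:
            mid = 0.5 * (a + b)
            new += [(a, mid), (mid, b)]  # split flagged intervals only
        else:
            new.append((a, b))
    if new == intervals:
        break                            # error is controlled everywhere
    intervals = new

near_sizes = [b - a for a, b in intervals if abs(0.5 * (a + b) - x0) < 0.1]
far_sizes = [b - a for a, b in intervals if abs(0.5 * (a + b) - x0) >= 0.1]
print(len(intervals), np.mean(near_sizes), np.mean(far_sizes))
```

The mesh ends up fine near the peak and coarse elsewhere, which is the mechanism by which the paper trades a uniform fine mesh for real-time performance at the same accuracy.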

  4. Utilization of a CRT display light pen in the design of feedback control systems

    NASA Technical Reports Server (NTRS)

    Thompson, J. G.; Young, K. R.

    1972-01-01

    A hierarchical structure of the interlinked programs was developed to provide a flexible computer-aided design tool. A graphical input technique and a data structure are considered which provide the capability of entering the control system model description into the computer in block diagram form. An information storage and retrieval system was developed to keep track of the system description, and analysis and simulation results, and to provide them to the correct routines for further manipulation or display. Error analysis and diagnostic capabilities are discussed, and a technique was developed to reduce a transfer function to a set of nested integrals suitable for digital simulation. A general, automated block diagram reduction procedure was set up to prepare the system description for the analysis routines.

  5. Deterministic diffusion in flower-shaped billiards.

    PubMed

    Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre

    2002-08-01

    We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model, on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.
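The two routes to the diffusion coefficient mentioned here, the mean-squared-displacement (Einstein) route and the Green-Kubo route via velocity autocorrelations, can be checked against each other on the simplest memoryless baseline, a random walk of the Machta-Zwanzig type. This toy has no billiard geometry; it only illustrates the two estimators agreeing on the exact value D = 1/2.

```python
import numpy as np

rng = np.random.default_rng(5)
walkers, steps = 20000, 400
v = rng.choice([-1.0, 1.0], (walkers, steps))  # iid unit "velocities" per time step
x = np.cumsum(v, axis=1)                       # particle trajectories

# Einstein route: D = <x(t)^2> / (2 t) in one dimension.
D_msd = np.mean(x[:, -1] ** 2) / (2 * steps)

# Green-Kubo route: D = C(0)/2 + sum_{t>=1} C(t), with C the velocity
# autocorrelation function. For memoryless steps only C(0) = 1 survives.
C = np.array([np.mean(v[:, 0] * v[:, t]) for t in range(6)])
D_gk = 0.5 * C[0] + C[1:].sum()

print(D_msd, D_gk)
```

In the billiard itself the C(t) terms for t >= 1 do not vanish; those are exactly the dynamical correlations the paper shows are crucial for the irregular parameter dependence.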

  6. The structure of aqueous sodium hydroxide solutions: a combined solution x-ray diffraction and simulation study.

    PubMed

    Megyes, Tünde; Bálint, Szabolcs; Grósz, Tamás; Radnai, Tamás; Bakó, Imre; Sipos, Pál

    2008-01-28

    To determine the structure of aqueous sodium hydroxide solutions, results obtained from x-ray diffraction and computer simulation (molecular dynamics and Car-Parrinello) have been compared. The capabilities and limitations of the methods in describing the solution structure are discussed. For the solutions studied, diffraction methods were found to perform very well in describing the hydration spheres of the sodium ion and yield structural information on the anion's hydration structure. Classical molecular dynamics simulations were not able to correctly describe the bulk structure of these solutions. However, Car-Parrinello simulation proved to be a suitable tool in the detailed interpretation of the hydration sphere of ions and bulk structure of solutions. The results of Car-Parrinello simulations were compared with the findings of diffraction experiments.

  7. Fast simulation of the NICER instrument

    NASA Astrophysics Data System (ADS)

    Doty, John P.; Wampler-Doty, Matthew P.; Prigozhin, Gregory Y.; Okajima, Takashi; Arzoumanian, Zaven; Gendreau, Keith

    2016-07-01

    The NICER mission uses a complicated physical system to collect information from objects that are, by x-ray timing science standards, rather faint. To get the most out of the data we will need a rigorous understanding of all instrumental effects. We are in the process of constructing a very fast, high fidelity simulator that will help us to assess instrument performance, support simulation-based data reduction, and improve our estimates of measurement error. We will combine and extend existing optics, detector, and electronics simulations. We will employ the Compute Unified Device Architecture (CUDA) to parallelize these calculations. The price of suitable CUDA-compatible multi-giga op cores is about $0.20/core, so this approach will be very cost-effective.

  8. Molecular simulation of water removal from simple gases with zeolite NaA.

    PubMed

    Csányi, Eva; Ható, Zoltán; Kristóf, Tamás

    2012-06-01

    Water vapor removal from some simple gases using zeolite NaA was studied by molecular simulation. The equilibrium adsorption properties of H2O, CO, H2, CH4 and their mixtures in dehydrated zeolite NaA were computed by grand canonical Monte Carlo simulations. The simulations employed Lennard-Jones + Coulomb type effective pair potential models, which are suitable for the reproduction of thermodynamic properties of pure substances. Based on the comparison of the simulation results with experimental data for single-component adsorption at different temperatures and pressures, a modified interaction potential model for the zeolite is proposed. In the adsorption simulations with mixtures presented here, zeolite exhibits extremely high selectivity of water to the investigated weakly polar/non-polar gases demonstrating the excellent dehydration ability of zeolite NaA in engineering applications.
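The grand canonical Monte Carlo machinery behind these adsorption calculations is particle insertion and deletion at fixed chemical potential. The sketch below is the ideal-gas limit, which has the useful property that the exact answer is known (⟨N⟩ = zV), so the acceptance rules can be verified; an interacting model would multiply each acceptance probability by exp(-βΔU) with ΔU from the Lennard-Jones + Coulomb potentials. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

V = 20.0   # box volume (reduced units)
z = 0.5    # activity exp(beta*mu)/Lambda^3; ideal-gas target is <N> = z*V = 10
N = 0
samples = []

for step in range(200000):
    if rng.random() < 0.5:
        # Trial insertion: acc = min(1, z V / (N + 1))
        if rng.random() < z * V / (N + 1):
            N += 1
    else:
        # Trial deletion: acc = min(1, N / (z V))
        if N > 0 and rng.random() < N / (z * V):
            N -= 1
    samples.append(N)

mean_N = np.mean(samples[50000:])   # discard equilibration
print(mean_N)
```

Selectivity studies like the one in the record run such simulations for gas mixtures and compare the equilibrium loadings of the competing species.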

  9. Early experiences in developing and managing the neuroscience gateway.

    PubMed

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T

    2015-02-01

    The last few decades have seen the emergence of computational neuroscience as a mature field where researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and associated cyber infrastructure to manage computational workflow and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, with the complex user interfaces of these machines, and with data management and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use it for computational neuroscience research using high performance computing at the back end. We also look at parallel scaling of some publicly available neuronal models and analyze the recent usage data of the Neuroscience Gateway.

  10. Early experiences in developing and managing the neuroscience gateway

    PubMed Central

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas. T.

    2015-01-01

    The last few decades have seen the emergence of computational neuroscience as a mature field where researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and associated cyber infrastructure to manage computational workflow and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, with the complex user interfaces of these machines, and with data management and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use it for computational neuroscience research using high performance computing at the back end. We also look at parallel scaling of some publicly available neuronal models and analyze the recent usage data of the Neuroscience Gateway. PMID:26523124

  11. Auto-adaptive finite element meshes

    NASA Technical Reports Server (NTRS)

    Richter, Roland; Leyland, Penelope

    1995-01-01

    Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural optimization as well as geometrical optimization. The methods described are applied to a number of the ICASE test cases, which are particularly interesting for unsteady flow simulations.

  12. Investigation of a Nanowire Electronic Nose by Computer Simulation

    DTIC Science & Technology

    2009-04-14

    ...explosives in the hold of passenger aircraft. More generally they can be used to detect the presence of molecules that could be a threat to human health ...design suitable for subsequent fabrication and then characterization. SUBJECT TERMS: EOARD, Sensor Technology, electronic

  13. New variational principles for locating periodic orbits of differential equations.

    PubMed

    Boghosian, Bruce M; Fazendeiro, Luis M; Lätt, Jonas; Tang, Hui; Coveney, Peter V

    2011-06-13

    We present new methods for the determination of periodic orbits of general dynamical systems. Iterative algorithms for finding solutions by these methods, for both the exact continuum case, and for approximate discrete representations suitable for numerical implementation, are discussed. Finally, we describe our approach to the computation of unstable periodic orbits of the driven Navier-Stokes equations, simulated using the lattice Boltzmann equation.

  14. 3D automatic Cartesian grid generation for Euler flows

    NASA Technical Reports Server (NTRS)

    Melton, John E.; Enomoto, Francis Y.; Berger, Marsha J.

    1993-01-01

    We describe a Cartesian grid strategy for the study of three dimensional inviscid flows about arbitrary geometries that uses both conventional and CAD/CAM surface geometry databases. Initial applications of the technique are presented. The elimination of the body-fitted constraint allows the grid generation process to be automated, significantly reducing the time and effort required to develop suitable computational grids for inviscid flowfield simulations.

  15. Following the Ions through a Mass Spectrometer with Atmospheric Pressure Interface: Simulation of Complete Ion Trajectories from Ion Source to Mass Analyzer.

    PubMed

    Zhou, Xiaoyu; Ouyang, Zheng

    2016-07-19

    Ion trajectory simulation is an important and useful tool in instrumentation development for mass spectrometry. Accurate simulation of the ion motion through a mass spectrometer with an atmospheric pressure ionization source has been extremely challenging, due to the complexity of the gas hydrodynamic flow field across a wide pressure range as well as the computational burden. In this study, we developed a method of generating the gas flow field for an entire mass spectrometer with an atmospheric pressure interface. In combination with the electric force, for the first time simulation of ion trajectories from an atmospheric pressure ion source to a mass analyzer in vacuum has been enabled. A stage-by-stage ion repopulation method has also been implemented for the simulation, which helped to avoid an intolerable computational burden for simulations in high-pressure regions while allowing statistically meaningful results to be obtained for the mass analyzer. The method identifies a suitable joint point for combining the high- and low-pressure flow fields, which are solved individually. Experimental characterization has also been done to validate the new method for simulation. Good agreement was obtained between simulated and experimental results for ion transfer through an atmospheric pressure interface with a curtain gas.

  16. Critiquing: A Different Approach to Expert Computer Advice in Medicine

    PubMed Central

    Miller, Perry L.

    1984-01-01

    The traditional approach to computer-based advice in medicine has been to design systems which simulate a physician's decision process. This paper describes a different approach to computer advice in medicine: a critiquing approach. A critiquing system first asks how the physician is planning to manage his patient and then critiques that plan, discussing the advantages and disadvantages of the proposed approach, compared to other approaches which might be reasonable or preferred. Several critiquing systems are currently in different stages of implementation. The paper describes these systems and discusses the characteristics which make each domain suitable for critiquing. The critiquing approach may prove especially well-suited in domains where decisions involve a great deal of subjective judgement.

  17. Computer Algebra Reexamination of the Scaled Particle Theory for Hard-Sphere and Lennard-Jones Fluids

    NASA Astrophysics Data System (ADS)

    Khasare, S. B.

    In the present work, an extension of the scaled particle theory (ESPT) for fluids is developed using computer algebra to obtain an equation of state (EOS) for the Lennard-Jones fluid. A suitable functional form for the surface tension S(r,d,ε) is assumed, with intermolecular separation r as a variable: $$S(r,d,\epsilon)=S_{0}[1+2\delta(d/r)^{m}],\qquad r\geq d/2,$$ where m is an arbitrary real number, and d and ε are related to physical properties, namely an average or suitable molecular diameter and the binding energy of the molecule, respectively. It is found that for a hard-sphere fluid (ε = 0), introducing the above assumption into the scaled particle theory (SPT) framework and choosing m = 1/3 yields an EOS in good agreement with molecular dynamics (MD) computer simulation results. Furthermore, m = -1 gives the Percus-Yevick (pressure) EOS, and m = 1 corresponds to the Percus-Yevick (compressibility) EOS.
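The two Percus-Yevick limits named in the record are standard closed-form hard-sphere equations of state, so the comparison the paper makes can be reproduced numerically. The check below also includes the Carnahan-Starling EOS (not mentioned in the abstract, but the usual near-exact reference) at one illustrative packing fraction.

```python
# Compressibility factors Z = pV/(NkT) for hard spheres at packing fraction eta.
eta = 0.3

Z_py_virial = (1 + 2*eta + 3*eta**2) / (1 - eta)**2       # PY (pressure route)
Z_py_comp   = (1 + eta + eta**2) / (1 - eta)**3           # PY (compressibility),
                                                          # identical to classic SPT
Z_cs        = (1 + eta + eta**2 - eta**3) / (1 - eta)**3  # Carnahan-Starling

print(Z_py_virial, Z_cs, Z_py_comp)
```

The near-exact Carnahan-Starling value lies between the two PY routes, which is why an interpolating exponent such as the paper's m = 1/3 can land closer to simulation data than either limit.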

  18. Rigid Body Rate Inference from Attitude Variation

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, I. Y.; Harman, Richard R.; Thienel, Julie K.

    2006-01-01

    In this paper we investigate the extraction of the angular rate vector from attitude information without differentiation, in particular from quaternion measurements. We show that instead of using a Kalman filter of some kind, it is possible to obtain good rate estimates, suitable for spacecraft attitude control loop damping, using simple feedback loops, thereby eliminating the need for the recurrent covariance computation performed when a Kalman filter is used. This considerably simplifies the computations required for rate estimation in gyro-less spacecraft. Some interesting qualities of the Kalman filter gain are explored, proven and utilized. We examine two kinds of feedback loops, one with varying gain that is proportional to the well known Q matrix, which is computed using the measured quaternion, and the other with constant coefficients. The latter type includes two kinds: a proportional feedback loop and a proportional-integral feedback loop. The various schemes are examined through simulations and their performance is compared. It is shown that all schemes are adequate for extracting the angular velocity at an accuracy suitable for control loop damping.
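A proportional-integral feedback loop of the kind described here can be sketched as a simple quaternion observer: predict the attitude with the current rate estimate, form the small-angle error against the measured quaternion, and feed it back proportionally into the attitude and integrally into the rate. The gains, the constant true rate, and the noise-free measurements below are illustrative choices, not the paper's design.

```python
import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions stored as [w, x, y, z]
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def propagate(q, w, dt):
    # First-order integration of qdot = 0.5 * q x [0, w], then renormalize.
    q = q + 0.5 * dt * qmul(q, np.array([0.0, *w]))
    return q / np.linalg.norm(q)

dt, kp, ki = 0.01, 0.2, 0.05             # illustrative gains, not from the paper
w_true = np.array([0.10, -0.20, 0.05])   # constant body rate, rad/s
q_true = np.array([1.0, 0.0, 0.0, 0.0])  # "measured" attitude quaternion
q_hat = q_true.copy()                    # observer attitude state
w_hat = np.zeros(3)                      # observer rate estimate

for _ in range(20000):
    q_true = propagate(q_true, w_true, dt)  # ground truth / measurement
    q_hat = propagate(q_hat, w_hat, dt)     # predict with current rate estimate
    # Small-angle attitude error between prediction and measurement.
    dq = qmul(q_hat * np.array([1, -1, -1, -1]), q_true)
    e = 2.0 * dq[1:]
    q_hat = propagate(q_hat, kp * e / dt, dt)  # proportional attitude correction
    w_hat = w_hat + ki * e                     # integral rate correction

print(w_hat)
```

No derivative of the measurements is ever taken: the rate estimate emerges from integrating the attitude error, which is the point of the feedback-loop approach.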

  19. On the Extraction of Angular Velocity from Attitude Measurements

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, I. Y.; Harman, Richard R.; Thienel, Julie K.

    2006-01-01

    In this paper we investigate the extraction of the angular rate vector from attitude information without differentiation, in particular from quaternion measurements. We show that instead of using a Kalman filter of some kind, it is possible to obtain good rate estimates, suitable for spacecraft attitude control loop damping, using simple feedback loops, thereby eliminating the need for the recurrent covariance computation performed when a Kalman filter is used. This considerably simplifies the computations required for rate estimation in gyro-less spacecraft. Some interesting qualities of the Kalman filter gain are explored, proven and utilized. We examine two kinds of feedback loops, one with varying gain that is proportional to the well known Q matrix, which is computed using the measured quaternion, and the other with constant coefficients. The latter type includes two kinds: a proportional feedback loop and a proportional-integral feedback loop. The various schemes are examined through simulations and their performance is compared. It is shown that all schemes are adequate for extracting the angular velocity at an accuracy suitable for control loop damping.

  20. Trellis coding with Continuous Phase Modulation (CPM) for satellite-based land-mobile communications

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This volume of the final report summarizes the results of our studies on the satellite-based mobile communications project. It includes: a detailed analysis, design, and simulations of trellis coded, full/partial response CPM signals with/without interleaving over various Rician fading channels; analysis and simulation of computational cutoff rates for coherent, noncoherent, and differential detection of CPM signals; optimization of the complete transmission system; analysis and simulation of power spectrum of the CPM signals; design and development of a class of Doppler frequency shift estimators; design and development of a symbol timing recovery circuit; and breadboard implementation of the transmission system. Studies prove the suitability of the CPM system for mobile communications.

  1. Coulomb interactions in charged fluids.

    PubMed

    Vernizzi, Graziano; Guerrero-García, Guillermo Iván; de la Cruz, Monica Olvera

    2011-07-01

The use of Ewald summation schemes for calculating long-range Coulomb interactions, originally applied to ionic crystalline solids, is at present a very common practice in molecular simulations of charged fluids. Such a choice imposes an artificial periodicity which is generally absent in the liquid state. In this paper we propose a simple analytical O(N^2) method, based on Gauss's law, for computing exactly the Coulomb interaction between charged particles in a simulation box when it is averaged over all possible orientations of a surrounding infinite lattice. This method mitigates the periodicity typical of crystalline systems and is suitable for numerical studies of ionic liquids, charged molecular fluids, and colloidal systems with Monte Carlo and molecular dynamics simulations.
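For context, the O(N^2) baseline such methods build on is the plain pairwise Coulomb sum with the minimum-image convention; a generic sketch (not the orientation-averaged lattice formula of the paper; Gaussian units with Coulomb constant 1 assumed):

```python
import math

def coulomb_energy(charges, positions, box):
    """Direct O(N^2) pairwise Coulomb energy in a cubic box of side `box`,
    using the minimum-image convention for periodic boundaries."""
    n = len(charges)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = [positions[i][k] - positions[j][k] for k in range(3)]
            d = [c - box * round(c / box) for c in d]   # nearest periodic image
            r = math.sqrt(d[0]**2 + d[1]**2 + d[2]**2)
            energy += charges[i] * charges[j] / r
    return energy
```

For an ion pair with unit charges one box apart, the minimum-image fold maps the separation back to the short distance, as expected.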

  2. Formulation and implementation of a practical algorithm for parameter estimation with process and measurement noise

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A new formulation is proposed for the problem of parameter estimation of dynamic systems with both process and measurement noise. The formulation gives estimates that are maximum likelihood asymptotically in time. The means used to overcome the difficulties encountered by previous formulations are discussed. It is then shown how the proposed formulation can be efficiently implemented in a computer program. A computer program using the proposed formulation is available in a form suitable for routine application. Examples with simulated and real data are given to illustrate that the program works well.

  3. Full-color large-scaled computer-generated holograms using RGB color filters.

    PubMed

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji

    2017-02-06

    A technique using RGB color filters is proposed for creating high-quality full-color computer-generated holograms (CGHs). The fringe of these CGHs is composed of more than a billion pixels. The CGHs reconstruct full-parallax three-dimensional color images with a deep sensation of depth caused by natural motion parallax. The simulation technique as well as the principle and challenges of high-quality full-color reconstruction are presented to address the design of filter properties suitable for large-scaled CGHs. Optical reconstructions of actual fabricated full-color CGHs are demonstrated in order to verify the proposed techniques.

  4. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4

    2015-01-15

Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations that are a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter corrected reconstruction to the original uncorrected and corrected by a constant scatter correction reconstruction, as well as a reconstruction created using a set of projections taken with a small cone angle. 
Results: Pearson’s correlation, r, proved to be a suitable GOF metric with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed between 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
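The two fitting ingredients named above, a frequency-limited sum of sines and cosines and Pearson's r as a goodness-of-fit metric, can be sketched in one dimension (a toy stand-in, not the authors' 2-D implementation; direct DFT sums replace the low-pass filtered FFT):

```python
import math

def lowpass_fourier_fit(y, kmax):
    """Fit uniformly sampled data with a frequency-limited Fourier series:
    keep only the mean plus harmonics 1..kmax, then evaluate the result."""
    n = len(y)
    coef = []
    for k in range(1, kmax + 1):
        a = 2.0 / n * sum(y[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        b = 2.0 / n * sum(y[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        coef.append((a, b))
    mean = sum(y) / n
    return [mean + sum(a * math.cos(2 * math.pi * k * m / n)
                       + b * math.sin(2 * math.pi * k * m / n)
                       for k, (a, b) in enumerate(coef, start=1))
            for m in range(n)]

def pearson_r(x, y):
    """Pearson correlation coefficient, usable as a goodness-of-fit metric."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A band-limited input is reproduced exactly once `kmax` covers its harmonics, at which point r reaches 1.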

  5. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. 
These forces were to be determined using parallel surrogate and exact approximation methods, thus evidencing the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design-space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+ with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
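Latin hypercube sampling as used for design-space filling can be sketched generically (the four design DOFs, the STAR-CCM+ runs, and the Kriging step are outside this snippet's scope):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube design on the unit cube: each axis is cut into
    n_samples equal strata and every stratum is sampled exactly once,
    with strata randomly paired across dimensions."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)               # random pairing across dimensions
        cols.append([(s + rng.random()) / n_samples for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n_samples)]
```

Each one-dimensional projection of the design hits every stratum exactly once, which is the property that makes LHS efficient for surrogate training.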

  6. Volumetric visualization algorithm development for an FPGA-based custom computing machine

    NASA Astrophysics Data System (ADS)

    Sallinen, Sami J.; Alakuijala, Jyrki; Helminen, Hannu; Laitinen, Joakim

    1998-05-01

    Rendering volumetric medical images is a burdensome computational task for contemporary computers due to the large size of the data sets. Custom designed reconfigurable hardware could considerably speed up volume visualization if an algorithm suitable for the platform is used. We present an algorithm and speedup techniques for visualizing volumetric medical CT and MR images with a custom-computing machine based on a Field Programmable Gate Array (FPGA). We also present simulated performance results of the proposed algorithm calculated with a software implementation running on a desktop PC. Our algorithm is capable of generating perspective projection renderings of single and multiple isosurfaces with transparency, simulated X-ray images, and Maximum Intensity Projections (MIP). Although more speedup techniques exist for parallel projection than for perspective projection, we have constrained ourselves to perspective viewing, because of its importance in the field of radiotherapy. The algorithm we have developed is based on ray casting, and the rendering is sped up by three different methods: shading speedup by gradient precalculation, a new generalized version of Ray-Acceleration by Distance Coding (RADC), and background ray elimination by speculative ray selection.
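The Maximum Intensity Projection mentioned above reduces, per ray, to marching through the volume and keeping the largest sample. A toy nearest-neighbour version (a generic sketch, not the FPGA algorithm, and without the RADC or gradient-precalculation speedups; perspective viewing corresponds to ray directions diverging from an eye point):

```python
def mip_ray(volume, origin, direction, step=0.5, n_steps=200):
    """March one ray through a 3-D voxel grid volume[z][y][x] using
    nearest-neighbour sampling; return the maximum intensity encountered."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    x, y, z = origin
    dx, dy, dz = direction
    best = 0.0
    for _ in range(n_steps):
        i, j, k = int(round(z)), int(round(y)), int(round(x))
        if 0 <= i < nz and 0 <= j < ny and 0 <= k < nx:
            best = max(best, volume[i][j][k])
        x += dx * step
        y += dy * step
        z += dz * step
    return best
```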

  7. Computer simulation of two-dimensional unsteady flows in estuaries and embayments by the method of characteristics : basic theory and the formulation of the numerical method

    USGS Publications Warehouse

    Lai, Chintu

    1977-01-01

Two-dimensional unsteady flows of homogeneous density in estuaries and embayments can be described by hyperbolic, quasi-linear partial differential equations involving three dependent and three independent variables. A linear combination of these equations leads to a parametric equation of characteristic form, which consists of two parts: total differentiation along the bicharacteristics and partial differentiation in space. For its numerical solution, the specified-time-interval scheme has been used. The unknown partial space-derivative terms can be eliminated first by suitable combinations of difference equations, converted from the corresponding differential forms and written along four selected bicharacteristics and a streamline. The other unknowns are thus made solvable from the known variables on the current time plane. The computation is carried to second-order accuracy by using the trapezoidal rule of integration. Means to handle complex boundary conditions are developed for practical application. Computer programs have been written and a mathematical model has been constructed for flow simulation. The favorable computer outputs suggest that further exploration and development of the model are worthwhile. (Woodard-USGS)

  8. Comparative simulation study of chemical synthesis of functional DADNE material.

    PubMed

    Liu, Min Hsien; Liu, Chuan Wen

    2017-01-01

Amorphous molecular simulation to model the reaction species in the synthesis of chemically inert and energetic 1,1-diamino-2,2-dinitroethene (DADNE) explosive material was performed in this work. Nitromethane was selected as the starting reactant to undergo halogenation, nitration, deprotonation, intermolecular condensation, and dehydration to produce the target DADNE product. The Materials Studio (MS) Forcite program allowed fast energy calculations and reliable geometric optimization of all aqueous molecular reaction systems (0.1-0.5 M) at 283 K and 298 K. The MS Forcite-computed and Gaussian polarizable continuum model (PCM)-computed results were analyzed and compared in order to explore feasible reaction pathways under suitable conditions for the synthesis of DADNE. Through theoretical simulation, the findings revealed that synthesis was possible, and a total energy barrier of 449.6 kJ mol^-1 needed to be overcome in order to carry out the reaction, according to MS calculation of the energy barriers at each stage at 283 K, as shown by the reaction profiles. Local analysis of intermolecular interaction, together with calculation of the stabilization energy of each reaction system, provided information that can be used as a reference regarding molecular integrated stability. Graphical Abstract: Materials Studio software has been suggested for the computation and simulation of DADNE synthesis.

  9. A proposed definition for a pitch attitude target for the microburst escape maneuver

    NASA Technical Reports Server (NTRS)

    Bray, Richard S.

    1990-01-01

    The Windshear Training Aid promulgated by the Federal Aviation Administration (FAA) defines the practical recovery maneuver following a microburst encounter as application of maximum thrust accompanied by rotation to an aircraft-specific target pitch attitude. In search of a simple method of determining this target, appropriate to a variety of aircraft types, a computer simulation was used to explore the suitability of a pitch target equal in numerical value to that of the angle of attack associated with stall warning. For the configurations and critical microburst shears simulated, this pitch target was demonstrated to be close to optimum.

  10. Development of a rotorcraft. Propulsion dynamics interface analysis, volume 2

    NASA Technical Reports Server (NTRS)

    Hull, R.

    1982-01-01

A study was conducted to establish a coupled rotor/propulsion analysis that would be applicable to a wide range of rotorcraft systems. The effort included the following tasks: (1) developing a model structure suitable for simulating a wide range of rotorcraft configurations; (2) defining a methodology for parameterizing the model structure to represent a particular rotorcraft; (3) constructing a nonlinear coupled rotor/propulsion model as a test case to use in analyzing coupled system dynamics; and (4) attempting to develop a mostly linear coupled model derived from the complete nonlinear simulations. Documentation of the computer models developed is presented.

  11. Dancing with Black Holes

    NASA Astrophysics Data System (ADS)

    Aarseth, S. J.

    2008-05-01

    We describe efforts over the last six years to implement regularization methods suitable for studying one or more interacting black holes by direct N-body simulations. Three different methods have been adapted to large-N systems: (i) Time-Transformed Leapfrog, (ii) Wheel-Spoke, and (iii) Algorithmic Regularization. These methods have been tried out with some success on GRAPE-type computers. Special emphasis has also been devoted to including post-Newtonian terms, with application to moderately massive black holes in stellar clusters. Some examples of simulations leading to coalescence by gravitational radiation will be presented to illustrate the practical usefulness of such methods.

  12. An atomic finite element model for biodegradable polymers. Part 1. Formulation of the finite elements.

    PubMed

    Gleadall, Andrew; Pan, Jingzhe; Ding, Lifeng; Kruft, Marc-Anton; Curcó, David

    2015-11-01

    Molecular dynamics (MD) simulations are widely used to analyse materials at the atomic scale. However, MD has high computational demands, which may inhibit its use for simulations of structures involving large numbers of atoms such as amorphous polymer structures. An atomic-scale finite element method (AFEM) is presented in this study with significantly lower computational demands than MD. Due to the reduced computational demands, AFEM is suitable for the analysis of Young's modulus of amorphous polymer structures. This is of particular interest when studying the degradation of bioresorbable polymers, which is the topic of an accompanying paper. AFEM is derived from the inter-atomic potential energy functions of an MD force field. The nonlinear MD functions were adapted to enable static linear analysis. Finite element formulations were derived to represent interatomic potential energy functions between two, three and four atoms. Validation of the AFEM was conducted through its application to atomic structures for crystalline and amorphous poly(lactide). Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Numerical Simulation of a Seaway with Breaking

    NASA Astrophysics Data System (ADS)

    Dommermuth, Douglas; O'Shea, Thomas; Brucker, Kyle; Wyatt, Donald

    2012-11-01

The focus of this presentation is to describe recent efforts to simulate a fully non-linear seaway with breaking by using a high-order spectral (HOS) solution of the free-surface boundary value problem to drive a three-dimensional Volume of Fluid (VOF) solution. Historically, the two main approaches to simulating free-surface flows have been the boundary integral equation method (BIEM) and high-order spectral (HOS) methods. BIEM calculations fail at the point at which the surface impacts upon itself, if not sooner, and HOS methods can only simulate a single-valued free surface. Both also employ a single-phase approximation in which the effects of the air on the water are neglected. Due to these limitations they are unable to simulate breaking waves and air entrainment. The Volume of Fluid (VOF) method, on the other hand, is suitable for modeling breaking waves and air entrainment; however, it is computationally intractable to generate a realistic non-linear sea state with VOF alone. Here, we use the HOS solution to quickly drive, or nudge, the VOF solution into a non-linear state. The computational strategies, mathematical formulation, and numerical implementation will be discussed. The results of the VOF simulation of a seaway with breaking will also be presented and compared to the single-phase, single-valued HOS results.

  14. Simulation and Optimization of an Airfoil with Leading Edge Slat

    NASA Astrophysics Data System (ADS)

    Schramm, Matthias; Stoevesandt, Bernhard; Peinke, Joachim

    2016-09-01

A gradient-based optimization is used in order to improve the shape of a leading edge slat upstream of a DU 91-W2-250 airfoil. The simulations are performed by solving the Reynolds-Averaged Navier-Stokes (RANS) equations using the open source CFD code OpenFOAM. Gradients are computed via the adjoint approach, which is suitable for dealing with many design parameters while keeping the computational costs low. The implementation is verified by comparing the gradients from the adjoint method with gradients obtained by finite differences for a NACA 0012 airfoil. The simulations of the leading edge slat are validated against measurements from the acoustic wind tunnel of Oldenburg University at a Reynolds number of Re = 6 × 10^5. The shape of the slat is optimized using the adjoint approach, resulting in a drag reduction of 2%. Although the optimization is done for Re = 6 × 10^5, the improvements also hold at the higher Reynolds number of Re = 7.9 × 10^6, which is more realistic for modern wind turbines.
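The verification step described, comparing adjoint gradients against finite differences, is in essence a gradient check. A generic sketch, with a toy quadratic objective standing in for the CFD functional and its adjoint gradient:

```python
def grad_check(f, grad_f, x, h=1e-6):
    """Max component-wise gap between an analytic gradient (e.g. from an
    adjoint solver) and central finite differences of the objective f."""
    g_ana = grad_f(x)
    gaps = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g_fd = (f(xp) - f(xm)) / (2.0 * h)   # central difference
        gaps.append(abs(g_ana[i] - g_fd))
    return max(gaps)

# toy stand-in for the objective and its (hand-derived) gradient
f = lambda x: x[0] ** 2 + 3.0 * x[0] * x[1]
grad_f = lambda x: [2.0 * x[0] + 3.0 * x[1], 3.0 * x[0]]
```

In a real adjoint verification, `f` would be one CFD evaluation and `grad_f` one adjoint solve, so the finite-difference loop is only affordable for a few design parameters.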

  15. Economical graphics display system for flight simulation avionics

    NASA Technical Reports Server (NTRS)

    1990-01-01

    During the past academic year the focal point of this project has been to enhance the economical flight simulator system by incorporating it into the aero engineering educational environment. To accomplish this goal it was necessary to develop appropriate software modules that provide a foundation for student interaction with the system. In addition experiments had to be developed and tested to determine if they were appropriate for incorporation into the beginning flight simulation course, AERO-41B. For the most part these goals were accomplished. Experiments were developed and evaluated by graduate students. More work needs to be done in this area. The complexity and length of the experiments must be refined to match the programming experience of the target students. It was determined that few undergraduate students are ready to absorb the full extent and complexity of a real-time flight simulation. For this reason the experiments developed are designed to introduce basic computer architectures suitable for simulation, the programming environment and languages, the concept of math modules, evaluation of acquired data, and an introduction to the meaning of real-time. An overview is included of the system environment as it pertains to the students, an example of a flight simulation experiment performed by the students, and a summary of the executive programming modules created by the students to achieve a user-friendly multi-processor system suitable to an aero engineering educational program.

  16. Electronic and mechanical improvement of the receiving terminal of a free-space microwave power transmission system

    NASA Technical Reports Server (NTRS)

    Brown, W. C.

    1977-01-01

Significant advancements were made in a number of areas: improved efficiency of the basic receiving element at low power density levels; improved resolution and confidence in efficiency measurements; mathematical modelling and computer simulation of the receiving element; and the design, construction, and testing of an environmentally protected two-plane construction suitable for low-cost, highly automated fabrication of large receiving arrays.

  17. A Technology Analysis to Support Acquisition of UAVs for Gulf Coalition Forces Operations

    DTIC Science & Technology

    2017-06-01

their selection of the most suitable and cost-effective unmanned aerial vehicles to support detection operations. This study uses Map Aware Non ...being detected by Gulf Coalition Forces and improved time to detect them, support the use of UAVs in detection missions. Computer experimentations and...aerial vehicles to support detection operations. We use Map Aware Non-Uniform Automata, an agent-based simulation software platform, for the

  18. Kinetic Monte Carlo Simulation of Cation Diffusion in Low-K Ceramics

    NASA Technical Reports Server (NTRS)

    Good, Brian

    2013-01-01

    Low thermal conductivity (low-K) ceramic materials are of interest to the aerospace community for use as the thermal barrier component of coating systems for turbine engine components. In particular, zirconia-based materials exhibit both low thermal conductivity and structural stability at high temperature, making them suitable for such applications. Because creep is one of the potential failure modes, and because diffusion is a mechanism by which creep takes place, we have performed computer simulations of cation diffusion in a variety of zirconia-based low-K materials. The kinetic Monte Carlo simulation method is an alternative to the more widely known molecular dynamics (MD) method. It is designed to study "infrequent-event" processes, such as diffusion, for which MD simulation can be highly inefficient. We describe the results of kinetic Monte Carlo computer simulations of cation diffusion in several zirconia-based materials, specifically, zirconia doped with Y, Gd, Nb and Yb. Diffusion paths are identified, and migration energy barriers are obtained from density functional calculations and from the literature. We present results on the temperature dependence of the diffusivity, and on the effects of the presence of oxygen vacancies in cation diffusion barrier complexes as well.
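The core of a kinetic Monte Carlo step, as used for such infrequent-event diffusion studies, is the residence-time (BKL) algorithm: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed increment. A generic sketch (rates here are arbitrary; real rates would come from the migration barriers via an Arrhenius expression):

```python
import math
import random

def kmc_step(rates, rng):
    """One residence-time (BKL) kinetic Monte Carlo step: select event i
    with probability rates[i]/sum(rates), advance time by an Exp draw."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    idx = len(rates) - 1              # fallback guards against fp round-off
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            idx = i
            break
    dt = -math.log(1.0 - rng.random()) / total   # 1-u keeps the log finite
    return idx, dt

# demo: two hop events with equal rates on a 1-D lattice
rng = random.Random(42)
position, t = 0, 0.0
for _ in range(1000):
    event, dt = kmc_step([1.0, 1.0], rng)
    position += 1 if event == 0 else -1
    t += dt
```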

  19. Towards Automatic Processing of Virtual City Models for Simulations

    NASA Astrophysics Data System (ADS)

    Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2016-10-01

Especially in the field of numerical simulations, such as flow and acoustic simulations, interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations have already been carried out in practice have involved an extremely high manual, and therefore uneconomical, effort for the processing of models. The different ways of capturing models in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) increase the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the worlds of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce unnecessary information for a numerical simulation.
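A bilinearly blended Coons patch, as referenced above, interpolates four boundary curves; a minimal evaluation routine for the standard formula (a generic sketch, not the authors' LoD2 pipeline):

```python
def coons_point(c0, c1, d0, d1, u, v):
    """Evaluate a bilinearly blended Coons patch at (u, v). c0/c1 map u to
    the v=0/v=1 boundary curves, d0/d1 map v to the u=0/u=1 curves; each
    returns an (x, y, z) tuple and the four curves must meet at the corners."""
    def lerp(a, b, t):
        return tuple((1.0 - t) * p + t * q for p, q in zip(a, b))
    ruled_u = lerp(c0(u), c1(u), v)            # blend the u-direction curves
    ruled_v = lerp(d0(v), d1(v), u)            # blend the v-direction curves
    bilin = lerp(lerp(c0(0.0), c0(1.0), u),    # bilinear corner interpolant
                 lerp(c1(0.0), c1(1.0), u), v)
    return tuple(a + b - c for a, b, c in zip(ruled_u, ruled_v, bilin))
```

The two ruled surfaces are added and the doubly counted bilinear corner term subtracted, so the patch reproduces all four boundary curves exactly.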

  20. T-cell epitope prediction and immune complex simulation using molecular dynamics: state of the art and persisting challenges

    PubMed Central

    2010-01-01

Atomistic Molecular Dynamics provides powerful and flexible tools for the prediction and analysis of molecular and macromolecular systems. Specifically, it provides a means by which we can measure theoretically that which cannot be measured experimentally: the dynamic time-evolution of complex systems comprising atoms and molecules. It is particularly suitable for the simulation and analysis of the otherwise inaccessible details of MHC-peptide interaction and, on a larger scale, the simulation of the immune synapse. Progress has been relatively tentative, yet the emergence of truly high-performance computing and the development of coarse-grained simulation now offer us the hope of accurately predicting thermodynamic parameters and of running larger, longer simulations comprising thousands of protein molecules and the cellular-scale structures they form. We exemplify this within the context of immunoinformatics. PMID:21067546

  1. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    NASA Astrophysics Data System (ADS)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
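The ε-constraint idea underlying the method (minimise one objective while bounding the other, then sweep the bound to trace the trade-off front) can be illustrated with a brute-force toy version; the adaptive bisection and pseudospectral machinery of the paper are not reproduced here:

```python
def epsilon_constraint_front(f1, f2, candidates, eps_values):
    """Brute-force epsilon-constraint scan: for each bound eps on f2,
    minimise f1 over candidates satisfying f2(x) <= eps; collect the
    (f1, f2) values of the resulting trade-off points."""
    front = []
    for eps in eps_values:
        feasible = [x for x in candidates if f2(x) <= eps]
        if feasible:
            best = min(feasible, key=f1)
            point = (f1(best), f2(best))
            if point not in front:
                front.append(point)
    return front
```

In a trajectory optimisation setting, each `min` would be one single-objective optimal control solve rather than a discrete search.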

  2. The effect of interference on delta modulation encoded video signals

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1979-01-01

The results of a study on the use of the delta modulator as a digital encoder of television signals are presented. Computer simulations of different delta modulators were studied in order to find a satisfactory design. After a suitable delta modulator algorithm was found via computer simulation, the results were analyzed and then implemented in hardware to study the ability to encode real-time motion pictures from an NTSC-format television camera. The effects of channel errors on the delta-modulated video signal were investigated, and several error-correction algorithms were tested via computer simulation. A very high-speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real-time motion pictures. The final area of investigation concerned finding delta modulators that could achieve significant bandwidth reduction without regard to complexity or speed. The first such scheme to be investigated was a real-time frame-to-frame encoding scheme which required the assembly of fourteen 131,000-bit shift registers as well as a high-speed delta modulator. The other schemes involved two-dimensional delta modulator algorithms.
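A linear (non-adaptive) delta modulator of the kind studied fits in a few lines: one bit per sample, indicating whether the input is above or below a running staircase estimate (step size and the test signal below are arbitrary illustrations):

```python
def delta_modulate(samples, step=0.1):
    """Linear delta modulator: emit 1 if the input sample is at or above
    the running staircase estimate, else 0; the estimate moves by +-step."""
    bits, est = [], 0.0
    for s in samples:
        bit = 1 if s >= est else 0
        bits.append(bit)
        est += step if bit else -step
    return bits

def delta_demodulate(bits, step=0.1):
    """Decoder: rebuild the staircase approximation from the bit stream."""
    out, est = [], 0.0
    for b in bits:
        est += step if b else -step
        out.append(est)
    return out
```

As long as the signal slope per sample stays below `step` (no slope overload), the reconstruction tracks the input to within the granular noise of about one step.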

  3. Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements

    PubMed Central

    Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.

    2016-01-01

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037

  4. Finite-Element Methods for Real-Time Simulation of Surgery

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay

    2003-01-01

    Two finite-element methods have been developed for mathematical modeling of the time-dependent behaviors of deformable objects and, more specifically, the mechanical responses of soft tissues and organs in contact with surgical tools. These methods may afford the computational efficiency needed to satisfy the requirement to obtain computational results in real time for simulating surgical procedures as described in Simulation System for Training in Laparoscopic Surgery (NPO-21192) on page 31 in this issue of NASA Tech Briefs. Simulation of the behavior of soft tissue in real time is a challenging problem because of the complexity of soft-tissue mechanics. The responses of soft tissues are characterized by nonlinearities and by spatial inhomogeneities and rate and time dependences of material properties. Finite-element methods seem promising for integrating these characteristics of tissues into computational models of organs, but they demand much central-processing-unit (CPU) time and memory, and the demand increases with the number of nodes and degrees of freedom in a given finite-element model. Hence, as finite-element models become more realistic, it becomes more difficult to compute solutions in real time. In both of the present methods, one uses approximate mathematical models, trading some accuracy for computational efficiency and thereby increasing the feasibility of attaining real-time update rates. The first of these methods is based on modal analysis. In this method, one reduces the number of differential equations by selecting only the most significant vibration modes of an object (typically, a suitable number of the lowest-frequency modes) for computing deformations of the object in response to applied forces.
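    The modal reduction described above can be sketched with a generalized eigenproblem K φ = ω² M φ, keeping only the lowest-frequency mass-normalized modes. This toy 3-DOF spring-mass chain (stiffness and mass values are illustrative, not from the article) shows that modal superposition with all modes reproduces the direct static solution; truncating the basis is what buys real-time speed at some cost in accuracy:

```python
import numpy as np
from scipy.linalg import eigh

def modal_basis(K, M, n_modes):
    # lowest-frequency, mass-normalized modes of K phi = w^2 M phi
    w2, Phi = eigh(K, M)
    return w2[:n_modes], Phi[:, :n_modes]

# 3-DOF fixed-free spring-mass chain (hypothetical example values)
k, m = 100.0, 1.0
K = k * np.array([[2.0, -1.0, 0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0, 1.0]])
M = m * np.eye(3)
f = np.array([0.0, 0.0, 1.0])        # tip load

w2, Phi = modal_basis(K, M, 3)        # keep all 3 modes here
u_modal = Phi @ ((Phi.T @ f) / w2)    # static response by modal superposition
u_exact = np.linalg.solve(K, f)       # direct solve for comparison
```

    With `n_modes` smaller than the full count, `u_modal` becomes an approximation whose cost scales with the number of retained modes rather than with the mesh size.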

  5. Noninvasive, automatic optimization strategy in cardiac resynchronization therapy.

    PubMed

    Reumann, Matthias; Osswald, Brigitte; Doessel, Olaf

    2007-07-01

    Optimization of cardiac resynchronization therapy (CRT) is still unsolved. It has been shown that optimal electrode position, atrioventricular (AV) and interventricular (VV) delays improve the success of CRT and reduce the number of non-responders. However, no automatic, noninvasive optimization strategy exists to date. Cardiac resynchronization therapy was simulated on the Visible Man and a patient data-set including fiber orientation and ventricular heterogeneity. A cellular automaton was used for fast computation of ventricular excitation. An AV block and a left bundle branch block were simulated with 100%, 80% and 60% interventricular conduction velocity. A right apical and 12 left ventricular lead positions were set. Sequential optimization and optimization with the downhill simplex algorithm (DSA) were carried out. The minimal error between isochrones of the physiologic excitation and the therapy was computed automatically and leads to an optimal lead position and timing. Up to 1512 simulations were carried out per pathology per patient. One simulation took 4 minutes on an Apple Macintosh 2 GHz PowerPC G5. For each electrode pair an optimal pacemaker delay was found. The DSA reduced the number of simulations by an order of magnitude and the AV-delay and VV-delay were determined with a much higher resolution. The findings are well comparable with clinical studies. The presented computer model of CRT automatically evaluates an optimal lead position, AV-delay and VV-delay, which can be used to noninvasively plan an optimal therapy for an individual patient. The application of the DSA reduces the simulation time so that the strategy is suitable for pre-operative planning in clinical routine. Future work will focus on clinical evaluation of the computer models and integration of patient data for individualized therapy planning and optimization.
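    The downhill simplex (Nelder-Mead) search used above can be sketched with SciPy. The quadratic surrogate for the isochrone error and its minimum at AV = 120 ms, VV = 40 ms are hypothetical stand-ins for the simulated error map, used only to show how the derivative-free search drives the delays to an optimum in few function evaluations:

```python
from scipy.optimize import minimize

# Hypothetical surrogate: error between therapy and physiological
# isochrones as a function of (AV, VV) delay in ms; minimum at (120, 40).
def isochrone_error(delays):
    av, vv = delays
    return (av - 120.0) ** 2 / 100.0 + (vv - 40.0) ** 2 / 25.0

# Downhill simplex search starting from a deliberately poor initial guess.
res = minimize(isochrone_error, x0=[160.0, 0.0], method="Nelder-Mead",
               options={"xatol": 0.1, "fatol": 1e-6})
```

    In the actual strategy each evaluation of the objective would be one cellular-automaton simulation, which is why reducing the evaluation count by an order of magnitude matters for pre-operative planning.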

  6. A new model to compute the desired steering torque for steer-by-wire vehicles and driving simulators

    NASA Astrophysics Data System (ADS)

    Fankem, Steve; Müller, Steffen

    2014-05-01

    This paper deals with the control of the hand wheel actuator in steer-by-wire (SbW) vehicles and driving simulators (DSs). A novel model for the computation of the desired steering torque is presented. The steering torque computation aims not only to generate a realistic steering feel, such that in every driving situation the driver does not miss the basic steering functionality of a modern conventional steering system such as an electric power steering (EPS) or hydraulic power steering (HPS) system. In addition, its modular structure, combined with suitably selected tuning parameters, is intended to offer a high degree of customisability of the steering feel and thus to provide each driver with his preferred steering feel in a very intuitive manner. The task and the tuning of each module are first described. Then, the steering torque computation is parameterised such that the steering feel of a series EPS system is reproduced. For this purpose, experiments are conducted in a hardware-in-the-loop environment, where a test EPS is mounted on a steering test bench coupled with a vehicle simulator, and parameter identification techniques are applied. Subsequently, how appropriately the steering torque computation mimics the test EPS system is objectively evaluated with respect to criteria concerning the steering torque level and gradient, the feedback behaviour and the steering return ability. Finally, the intuitive tuning of the modular steering torque computation is demonstrated by deriving a sportier steering feel configuration.
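    A minimal sketch of the modular composition idea: the desired torque is a sum of independently tunable modules. The module names, gains, and functional forms below are invented for illustration; the paper's actual modules and parameterisation are not reproduced in the abstract:

```python
def steering_torque(rack_force, steer_rate, steer_angle, vehicle_speed,
                    k_feedback=0.004, k_damp=0.15, k_return=0.02):
    """Hypothetical modular desired-torque model (all gains illustrative)."""
    feedback = k_feedback * rack_force                 # road-feedback module
    damping = -k_damp * steer_rate                     # damping module
    # return-ability module, scaled down at low speed
    returnability = -k_return * steer_angle * min(vehicle_speed, 30.0) / 30.0
    return feedback + damping + returnability
```

    The appeal of such a structure is that a "sportier" feel can be dialed in by changing one module's gain (e.g. raising `k_feedback`) without re-identifying the whole model.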

  7. In vitro flow assessment: from PC-MRI to computational fluid dynamics including fluid-structure interaction

    NASA Astrophysics Data System (ADS)

    Kratzke, Jonas; Rengier, Fabian; Weis, Christian; Beller, Carsten J.; Heuveline, Vincent

    2016-04-01

    Initiation and development of cardiovascular diseases can be highly correlated to specific biomechanical parameters. To examine and assess biomechanical parameters, numerical simulation of cardiovascular dynamics has the potential to complement and enhance medical measurement and imaging techniques. As such, computational fluid dynamics (CFD) has been shown to be suitable for evaluating blood velocity and pressure in scenarios where vessel wall deformation plays a minor role. However, there is a need for further validation studies and the inclusion of vessel wall elasticity for morphologies subject to large displacement. In this work, we consider a fluid-structure interaction (FSI) model including the full elasticity equation to take the deformability of aortic wall soft tissue into account. We present a numerical framework in which either a CFD study can be performed for less deformable aortic segments or an FSI simulation for regions of large displacement such as the aortic root and arch. Both of the methods are validated by means of an aortic phantom experiment. The computational results are in good agreement with 2D phase-contrast magnetic resonance imaging (PC-MRI) velocity measurements as well as catheter-based pressure measurements. The FSI simulation shows a characteristic vessel compliance effect on the flow field induced by the elasticity of the vessel wall, an effect the CFD model cannot capture. The in vitro validated FSI simulation framework can enable the computation of complementary biomechanical parameters such as the stress distribution within the vessel wall.

  8. A second golden age of aeroacoustics?

    PubMed

    Lele, Sanjiva K; Nichols, Joseph W

    2014-08-13

    In 1992, Sir James Lighthill foresaw the dawn of a second golden age in aeroacoustics enabled by computer simulations (Hardin JC, Hussaini MY (eds) 1993 Computational aeroacoustics, New York, NY: Springer (doi:10.1007/978-1-4613-8342-0)). This review traces the progress in large-scale computations to resolve the noise-source processes and the methods devised to predict the far-field radiated sound using this information. Keeping the focus on aviation-related noise sources, a brief account of the progress in simulations of jet noise, fan noise and airframe noise is given, highlighting the key technical issues and challenges. The complex geometry of nozzle elements and airframe components as well as the high Reynolds number of target applications require careful assessment of the discretization algorithms on unstructured grids and of modelling compromises. High-fidelity simulations with 200-500 million points are not uncommon today and are used to improve scientific understanding of the noise generation process in specific situations. We attempt to discern where the future might take us, especially if exascale computing becomes a reality in 10 years. A pressing question in this context concerns the role of modelling in the coming era. While the sheer scale of the data generated by large-scale simulations will require new methods for data analysis and data visualization, it is our view that suitable theoretical formulations and reduced models will be even more important in future. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  9. Parameter Sweep and Optimization of Loosely Coupled Simulations Using the DAKOTA Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elwasif, Wael R; Bernholdt, David E; Pannala, Sreekanth

    2012-01-01

    The increasing availability of large scale computing capabilities has accelerated the development of high-fidelity coupled simulations. Such simulations typically involve the integration of models that implement various aspects of the complex phenomena under investigation. Coupled simulations are playing an integral role in fields such as climate modeling, earth systems modeling, rocket simulations, computational chemistry, fusion research, and many other computational fields. Model coupling provides scientists with systematic ways to virtually explore the physical, mathematical, and computational aspects of the problem. Such exploration is rarely done using a single execution of a simulation, but rather by aggregating the results from many simulation runs that, together, serve to bring to light novel knowledge about the system under investigation. Furthermore, it is often the case (particularly in engineering disciplines) that the study of the underlying system takes the form of an optimization regime, where the control parameter space is explored to optimize an objective function that captures system realizability, cost, performance, or a combination thereof. Novel and flexible frameworks that facilitate the integration of the disparate models into a holistic simulation are used to perform this research, while making efficient use of the available computational resources. In this paper, we describe the integration of the DAKOTA optimization and parameter sweep toolkit with the Integrated Plasma Simulator (IPS), a component-based framework for loosely coupled simulations. The integration allows DAKOTA to exploit the internal task and resource management of the IPS to dynamically instantiate simulation instances within a single IPS instance, allowing for greater control over the trade-off between efficiency of resource utilization and time to completion. We present a case study showing the use of the combined DAKOTA-IPS system to aid in the design of a lithium ion battery (LIB) cell, by studying a coupled system involving the electrochemistry and ion transport at the lower length scales and thermal energy transport at the device scales. The DAKOTA-IPS system provides a flexible tool for use in optimization and parameter sweep studies involving loosely coupled simulations that is suitable for use in situations where changes to the constituent components in the coupled simulation are impractical due to intellectual property or code heritage issues.

  10. Multiphase, multi-electrode Joule heat computations for glass melter and in situ vitrification simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowery, P.S.; Lessor, D.L.

    Waste glass melter and in situ vitrification (ISV) processes represent the combination of electrical, thermal, and fluid flow phenomena to produce a stable waste-form product. Computational modeling of the thermal and fluid flow aspects of these processes provides a useful tool for assessing the potential performance of proposed system designs. These computations can be performed at a fraction of the cost of experiment. Consequently, computational modeling of vitrification systems can also provide an economical means for assessing the suitability of a proposed process application. The computational model described in this paper employs finite difference representations of the basic continuum conservation laws governing the thermal, fluid flow, and electrical aspects of the vitrification process -- i.e., conservation of mass, momentum, energy, and electrical charge. The resulting code is a member of the TEMPEST family of codes developed at the Pacific Northwest Laboratory (operated by Battelle for the US Department of Energy). This paper provides an overview of the numerical approach employed in TEMPEST. In addition, results from several TEMPEST simulations of sample waste glass melter and ISV processes are provided to illustrate the insights to be gained from computational modeling of these processes. 3 refs., 13 figs.

  11. Contrast‐enhanced computed tomography with myocardial three‐dimensional printing can guide treatment in symptomatic hypertrophic obstructive cardiomyopathy

    PubMed Central

    Hamatani, Yasuhiro; Amaki, Makoto; Kanzaki, Hideaki; Yamashita, Kizuku; Nakashima, Yasuteru; Shibata, Atsushi; Okada, Atsushi; Takahama, Hiroyuki; Hasegawa, Takuya; Shimahara, Yusuke; Sugano, Yasuo; Fujita, Tomoyuki; Shiraishi, Isao; Yasuda, Satoshi; Kobayashi, Junjiro

    2017-01-01

    Abstract Both surgical myectomy and percutaneous transluminal septal myocardial ablation are effective treatments for drug‐refractory symptomatic hypertrophic obstructive cardiomyopathy (HOCM). However, in some cases, it is not easy to elucidate the abnormal structure of left ventricular outflow obstruction to adopt these treatments. Here, we presented a young female patient with drug‐refractory symptomatic HOCM. In this case, contrast‐enhanced computed tomography enabled us to assess the suitability of percutaneous transluminal septal myocardial ablation. By creating three‐dimensional printed models using computed tomography data, we could also visualize intracardiac structure and simulate the surgical procedure. A multimodality assessment strategy is useful for evaluating patients complicated with drug‐refractory symptomatic HOCM. PMID:29154429

  12. An atomic charge model for graphene oxide for exploring its bioadhesive properties in explicit water.

    PubMed

    Stauffer, D; Dragneva, N; Floriano, W B; Mawhinney, R C; Fanchini, G; French, S; Rubel, O

    2014-07-28

    Graphene Oxide (GO) has been shown to exhibit properties that are useful in applications such as biomedical imaging, biological sensors, and drug delivery. The binding properties of biomolecules at the surface of GO can provide insight into the potential biocompatibility of GO. Here we assess the intrinsic affinity of amino acids to GO by simulating their adsorption onto a GO surface. The simulation is done using Amber03 force-field molecular dynamics in explicit water. The emphasis is placed on developing an atomic charge model for GO. The adsorption energies are computed using atomic charges obtained from an ab initio electrostatic potential based method. The charges reported here are suitable for simulating peptide adsorption to GO.

  13. Elevated temperature crack growth

    NASA Technical Reports Server (NTRS)

    Malik, S. N.; Vanstone, R. H.; Kim, K. S.; Laflen, J. H.

    1985-01-01

    The purpose is to determine the ability of currently available P-I integrals to correlate fatigue crack propagation under conditions that simulate the turbojet engine combustor liner environment. The utility of advanced fracture mechanics measurements will also be evaluated during the course of the program. To date, an appropriate specimen design, a crack displacement measurement method, and boundary condition simulation in the computational model of the specimen were achieved. Alloy 718 was selected as an analog material based on its ability to simulate high temperature behavior at lower temperatures. Tensile and cyclic tests were run at several strain rates so that an appropriate constitutive model could be developed. Suitable P-I integrals were programmed into a finite element post-processor for eventual comparison with experimental data.

  14. Numerical investigation of field enhancement by metal nano-particles using a hybrid FDTD-PSTD algorithm.

    PubMed

    Pernice, W H; Payne, F P; Gallagher, D F

    2007-09-03

    We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as do FDTD schemes. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry conforming mesh the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.

  15. A real time Pegasus propulsion system model for VSTOL piloted simulation evaluation

    NASA Technical Reports Server (NTRS)

    Mihaloew, J. R.; Roth, S. P.; Creekmore, R.

    1981-01-01

    A real time propulsion system modeling technique suitable for use in man-in-the-loop simulator studies was developed. This technique provides the system accuracy, stability, and transient response required for integrated aircraft and propulsion control system studies. A Pegasus-Harrier propulsion system was selected as a baseline for developing mathematical modeling and simulation techniques for VSTOL. Initially, static and dynamic propulsion system characteristics were modeled in detail to form a nonlinear aerothermodynamic digital computer simulation of a Pegasus engine. From this high fidelity simulation, a real time propulsion model was formulated by applying a piece-wise linear state variable methodology. A hydromechanical and water injection control system was also simulated. The real time dynamic model includes the detail and flexibility required for the evaluation of critical control parameters and propulsion component limits over a limited flight envelope. The model was programmed for interfacing with a Harrier aircraft simulation. Typical propulsion system simulation results are presented.

  16. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond

    2016-04-15

    Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used also for in vivo range verification.

  17. Adapting to life: ocean biogeochemical modelling and adaptive remeshing

    NASA Astrophysics Data System (ADS)

    Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.

    2013-11-01

    An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, that are not well represented in global ocean models. As an example, state-of-the-art models give values of primary production approximately two orders of magnitude lower than those observed in the ocean's oligotrophic gyres, which cover a third of the Earth's surface. This is partly due to their failure to resolve sub-mesoscale phenomena, which play a significant role in nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical column (quasi 1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed mesh simulation and to observations. The simulations capture both the seasonal and inter-annual variations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3, so reducing computational overhead. We then show the potential of this method in two case studies where we change the metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa. We therefore demonstrate that adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high spatial resolution whilst minimising computational cost.
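    One common way to let a vertical mesh track the biogeochemical state, as described above, is metric equidistribution: nodes are placed so that each cell carries an equal share of an error metric. The sketch below (the chlorophyll profile, metric choice, and floor value are illustrative assumptions, not the paper's method) clusters nodes where the field gradient is large:

```python
import numpy as np

def equidistribute(z, c, n_nodes, floor=1e-3):
    """Place n_nodes along z so each interval holds equal integrated metric."""
    # metric: gradient magnitude of the field, plus a floor so that
    # quiescent regions still receive some nodes
    m = np.abs(np.gradient(c, z)) + floor
    # cumulative integral of the metric (trapezoid rule)
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(z))])
    targets = np.linspace(0.0, cum[-1], n_nodes)
    return np.interp(targets, cum, z)    # invert the cumulative metric

z = np.linspace(0.0, 500.0, 1000)           # depth (m), fine reference grid
chl = np.exp(-((z - 60.0) / 10.0) ** 2)     # hypothetical chlorophyll max at 60 m
z_adapt = equidistribute(z, chl, 40)        # 40-node adaptive column
```

    Changing the metric field (e.g. from chlorophyll to sinking detritus) changes where resolution is spent, which is exactly the lever the two case studies above exercise.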

  18. Simplified Modeling of Oxidation of Hydrocarbons

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Harstad, Kenneth

    2008-01-01

    A method of simplified computational modeling of oxidation of hydrocarbons is undergoing development. This is one of several developments needed to enable accurate computational simulation of turbulent, chemically reacting flows. At present, accurate computational simulation of such flows is difficult or impossible in most cases because (1) the numbers of grid points needed for adequate spatial resolution of turbulent flows in realistically complex geometries are beyond the capabilities of typical supercomputers now in use and (2) the combustion of typical hydrocarbons proceeds through decomposition into hundreds of molecular species interacting through thousands of reactions. Hence, the combination of detailed reaction-rate models with the fundamental flow equations yields flow models that are computationally prohibitive. Further, a reduction of at least an order of magnitude in the dimension of reaction kinetics is one of the prerequisites for feasibility of computational simulation of turbulent, chemically reacting flows. In the present method of simplified modeling, all molecular species involved in the oxidation of hydrocarbons are classified as either light or heavy; heavy molecules are those having 3 or more carbon atoms. The light molecules are not subject to meaningful decomposition, and the heavy molecules are considered to decompose into only 13 specified constituent radicals, a few of which are listed in the table. One constructs a reduced-order model, suitable for use in estimating the release of heat and the evolution of temperature in combustion, from a base comprising the 13 constituent radicals plus a total of 26 other species that include the light molecules and related light free radicals. Then, rather than following all possible species through their reaction coordinates, one follows only the reduced set of reaction coordinates of the base.
The behavior of the base was examined in test computational simulations of the combustion of heptane in a stirred reactor at various initial pressures ranging from 0.1 to 6 MPa. Most of the simulations were performed for stoichiometric mixtures; some were performed for fuel/oxygen mole ratios of 1/2 and 2.

  19. Computational Investigations in Rectangular Convergent and Divergent Ribbed Channels

    NASA Astrophysics Data System (ADS)

    Sivakumar, Karthikeyan; Kulasekharan, N.; Natarajan, E.

    2018-05-01

    Computational investigations were carried out on the rib-turbulated flow inside convergent and divergent rectangular channels with square ribs of different heights at different Reynolds numbers (Re = 20,000, 40,000 and 60,000). The ribs were arranged in a staggered fashion between the upper and lower surfaces of the test section. The investigations were performed using the computational fluid dynamics software ANSYS Fluent 14.0. Suitable solver settings, such as turbulence models and boundary conditions, were identified from the literature, and the simulations were run on a grid-independent solution. Computations were carried out for both convergent and divergent channels with 0 (smooth duct), 1.5, 3, 6, 9 and 12 mm rib heights, to identify the ribbed channel with optimal performance, assessed using a thermo-hydraulic performance parameter. The convergent and divergent rectangular channels show higher Nu values than the standard correlation values.
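    The thermo-hydraulic performance parameter used to rank the ribbed channels is commonly defined on an equal-pumping-power basis; since the abstract does not give the exact definition, the standard form below is an assumption. It rewards heat-transfer augmentation (Nu/Nu0) while penalizing the friction-factor rise (f/f0) it costs:

```python
def thermo_hydraulic_performance(nu, f, nu_smooth, f_smooth):
    """Equal-pumping-power criterion: (Nu/Nu0) / (f/f0)**(1/3)."""
    return (nu / nu_smooth) / (f / f_smooth) ** (1.0 / 3.0)
```

    A value above 1 means the rib geometry improves heat transfer more than it raises pumping cost relative to the smooth duct; for example, doubling Nu while increasing f eightfold yields exactly 1.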

  20. Space shuttle main engine controller assembly, phase C-D. [with lagging system design and analysis

    NASA Technical Reports Server (NTRS)

    1973-01-01

    System design and system analysis and simulation are slightly behind schedule, while design verification testing has improved. Input/output circuit design has improved, but digital computer unit (DCU) and mechanical design continue to lag. Part procurement was impacted by delays in printed circuit board and assembly drawing releases. These are the result of problems in generating suitable printed circuit artwork for the very complex and high density multilayer boards.

  1. Droplet sizing instrumentation used for icing research: Operation, calibration, and accuracy

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.

    1989-01-01

    The accuracy of the Forward Scattering Spectrometer Probe (FSSP) is determined using laboratory tests, wind tunnel comparisons, and computer simulations. Operation in an icing environment is discussed and a new calibration device for the FSSP (the rotating pinhole) is demonstrated to be a valuable tool. Operation of the Optical Array Probe is also presented along with a calibration device (the rotating reticle) which is suitable for performing detailed analysis of that instrument.

  2. Testing of Error-Correcting Sparse Permutation Channel Codes

    NASA Technical Reports Server (NTRS)

    Shcheglov, Kirill V.; Orlov, Sergei S.

    2008-01-01

    A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.
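    A toy Monte Carlo experiment in the spirit of the program described above (block size, sparseness, flip probability, and the weight-check detector are invented for illustration): each word has exactly K "on" bits out of N, the channel flips bits independently, and any received word whose weight differs from K is flagged. A single flip always violates the weight-K constraint, so an undetected error needs at least two compensating flips:

```python
import random

def simulate(n_bits=64, k_on=4, p_flip=0.01, n_trials=20000, seed=1):
    """Count detected vs. undetected errors for a weight-K sparse code."""
    rng = random.Random(seed)
    detected = undetected = 0
    for _ in range(n_trials):
        word = set(rng.sample(range(n_bits), k_on))      # K "on" bits
        # received bit = transmitted bit XOR independent channel flip
        received = {i for i in range(n_bits)
                    if (i in word) != (rng.random() < p_flip)}
        if received == word:
            continue                                     # no error this trial
        if len(received) != k_on:
            detected += 1        # sparseness (weight) constraint catches it
        else:
            undetected += 1      # flips exactly compensated: same weight
    return detected, undetected
```

    With these (assumed) parameters, detected errors dominate, since a compensating on-to-off plus off-to-on flip pair is second order in the flip probability.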

  3. Free-Surface Fluid-Object Interaction for the Large-Scale Computation of Ship Hydrodynamics Phenomena

    DTIC Science & Technology

    2014-05-21

    simulating air-water free-surface flow, fluid-object interaction (FOI), and fluid-structure interaction (FSI) phenomena for complex geometries, and...with no limitations on the motion of the free surface, and with particular emphasis on ship hydrodynamics. The following specific research objectives...were identified for this project: 1) Development of a theoretical framework for free-surface flow, FOI and FSI that is a suitable starting point

  4. Analysis and optimal design of moisture sensor for rice grain moisture measurement

    NASA Astrophysics Data System (ADS)

    Jain, Sweety; Mishra, Pankaj Kumar; Thakare, Vandana Vikas

    2018-04-01

    This paper presents the analysis and design of a microstrip sensor for accurate determination of moisture content (MC) in rice grains, based on the oven drying technique, which is easy, fast and less time-consuming than other techniques. The sensor is designed for low insertion loss; the reflection coefficient and maximum gain are -35 dB and 5.88 dB, respectively, at 2.68 GHz. All relevant parameters, such as the axial ratio, maximum gain and Smith chart, are discussed, which is helpful for analysing the moisture measurement. The variation of the measured moisture percentage with the magnitude and phase of the transmission coefficient is investigated at selected frequencies. The microstrip moisture sensor consists of one layer, an FR4 substrate of thickness 1.638, and is simulated using Computer Simulation Technology Microwave Studio (CST MWS). It is concluded that the proposed sensor is suitable for development as a complete sensor to accurately and sensitively estimate the optimum moisture content of rice grains, and that it is compact, versatile and suitable for determining the moisture content of other crops and agricultural products.

  5. Computer simulation of a cruise missile using brushless dc motor fin control

    NASA Astrophysics Data System (ADS)

    Franklin, G. C.

    1985-03-01

    This thesis describes a computer simulation developed in order to provide a method of establishing the potential of brushless dc motors for applications to tactical cruise missile control surface positioning. In particular, an altitude hold controller has been developed that provides an operational load test condition for the evaluation of the electromechanical actuator. A proportional integral control scheme in conjunction with tachometer feedback provides the position control for the missile tailfin surfaces. The fin control system is further imbedded in a cruise missile model to allow altitude control of the missile. The load on the fin is developed from the dynamic fluid environment that the missile will be operating in and is proportional to such factors as fin size and air density. The program, written in the CSMP language, is suitable for parametric studies including motor and torque load characteristics, and missile and control system parameters.

  6. Computer simulation and design of a three degree-of-freedom shoulder module

    NASA Technical Reports Server (NTRS)

    Marco, David; Torfason, L.; Tesar, Delbert

    1989-01-01

    An in-depth kinematic analysis of a three degree of freedom fully-parallel robotic shoulder module is presented. The major goal of the analysis is to determine appropriate link dimensions which will provide a maximized workspace along with desirable input to output velocity and torque amplification. First order kinematic influence coefficients which describe the output velocity properties in terms of actuator motions provide a means to determine suitable geometric dimensions for the device. Through the use of computer simulation, optimal or near optimal link dimensions based on predetermined design criteria are provided for two different structural designs of the mechanism. The first uses three rotational inputs to control the output motion. The second design involves the use of four inputs, actuating any three inputs for a given position of the output link. Alternative actuator placements are examined to determine the most effective approach to control the output motion.

  7. Advanced control schemes and kinematic analysis for a kinematically redundant 7 DOF manipulator

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Zhou, Zhen-Lei

    1990-01-01

    The kinematic analysis and control of a kinematically redundant manipulator is addressed. The manipulator is the slave arm of a telerobot system recently built at Goddard Space Flight Center (GSFC) to serve as a testbed for investigating research issues in telerobotics. A forward kinematic transformation is developed in its most simplified form, suitable for real-time control applications, and the manipulator Jacobian is derived using the vector cross product method. Using the developed forward kinematic transformation and quaternion representation of orientation matrices, we perform computer simulation to evaluate the efficiency of the Jacobian in converting joint velocities into Cartesian velocities and to investigate the accuracy of Jacobian pseudo-inverse for various sampling times. The equivalence between Cartesian velocities and quaternion is also verified using computer simulation. Three control schemes are proposed and discussed for controlling the motion of the slave arm end-effector.
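    The Jacobian pseudo-inverse step used in the simulations above can be illustrated in a few lines: for a 7-joint arm performing a 6-DOF task, the pseudo-inverse yields the minimum-norm joint velocities that reproduce a commanded Cartesian velocity. The 6x7 Jacobian below is a random stand-in, not the slave arm's actual kinematics.

```python
import numpy as np

# Stand-in Jacobian for a redundant arm: 6-DOF task, 7 joints.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))
x_dot = np.array([0.1, 0.0, -0.05, 0.0, 0.02, 0.0])  # desired Cartesian twist

# Pseudo-inverse gives the least-squares, minimum-norm joint velocity solution.
q_dot = np.linalg.pinv(J) @ x_dot

print(np.allclose(J @ q_dot, x_dot))   # J q_dot reproduces the task velocity
```

    For a full-row-rank Jacobian the commanded twist is reproduced exactly; the extra (seventh) degree of freedom is resolved by minimizing the joint-velocity norm.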

  8. Versatile microwave-driven trapped ion spin system for quantum information processing

    PubMed Central

    Piltz, Christian; Sriarunothai, Theeraphot; Ivanov, Svetoslav S.; Wölk, Sabine; Wunderlich, Christof

    2016-01-01

    Using trapped atomic ions, we demonstrate a tailored and versatile effective spin system suitable for quantum simulations and universal quantum computation. By simply applying microwave pulses, selected spins can be decoupled from the remaining system and, thus, can serve as a quantum memory, while simultaneously, other coupled spins perform conditional quantum dynamics. Also, microwave pulses can change the sign of spin-spin couplings, as well as their effective strength, even during the course of a quantum algorithm. Taking advantage of the simultaneous long-range coupling between three spins, a coherent quantum Fourier transform—an essential building block for many quantum algorithms—is efficiently realized. This approach, which is based on microwave-driven trapped ions and is complementary to laser-based methods, opens a new route to overcoming technical and physical challenges in the quest for a quantum simulator and a quantum computer. PMID:27419233

  9. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE PAGES

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.; ...

    2017-11-26

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
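    The rate summation concept the derivation rests on can be sketched in a few lines: development accumulates at a temperature-dependent rate, and a stage completes when the accumulated development reaches 1. The rate curve and temperature series below are illustrative, not the beetle model's.

```python
# Rate summation sketch: accumulate temperature-dependent development per day.
def dev_rate(temp_c, t_base=5.0, scale=0.002):
    """Simple degree-day style development rate per day (illustrative)."""
    return max(0.0, scale * (temp_c - t_base))

daily_temps = [12, 15, 18, 20, 22, 21, 19] * 30   # a synthetic season

accum, day_done = 0.0, None
for day, t in enumerate(daily_temps, start=1):
    accum += dev_rate(t)
    if accum >= 1.0 and day_done is None:
        day_done = day          # stage completes when development reaches 1

print(day_done)
```

    Phenotypic variability enters by letting `scale` (or `t_base`) vary across individuals; the integral projection formulation captures the resulting distribution of completion times without simulating each individual.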

  10. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.

  11. Interactive Display of Surfaces Using Subdivision Surfaces and Wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchaineau, M A; Bertram, M; Porumbescu, S

    2001-10-03

    Complex surfaces and solids are produced by large-scale modeling and simulation activities in a variety of disciplines. Productive interaction with these simulations requires that these surfaces or solids be viewable at interactive rates, yet many of them can contain hundreds of millions of polygons or polyhedra. Interactive display of these objects requires compression techniques to minimize storage, and fast view-dependent triangulation techniques to drive the graphics hardware. In this paper, we review recent advances in subdivision-surface wavelet compression and optimization that can be used to provide a framework for both compression and triangulation. These techniques can be used to produce suitable approximations of complex surfaces of arbitrary topology, and to determine suitable triangulations for display. The techniques can be used in a variety of applications in computer graphics, computer animation and visualization.

  12. Analysis of potential errors in real-time streamflow data and methods of data verification by digital computer

    USGS Publications Warehouse

    Lystrom, David J.

    1972-01-01

    Various methods of verifying real-time streamflow data are outlined in part II. Relatively large errors (those greater than 20-30 percent) can be detected readily by use of well-designed verification programs for a digital computer, and smaller errors can be detected only by discharge measurements and field observations. The capability to substitute a simulated discharge value for missing or erroneous data is incorporated in some of the verification routines described. The routines represent concepts ranging from basic statistical comparisons to complex watershed modeling and provide a selection from which real-time data users can choose a suitable level of verification.
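    The "basic statistical comparison" end of that spectrum can be sketched as a simple tolerance check of observed discharge against a simulated expected value, flagging deviations beyond the 20-30 percent band the report mentions. The threshold and data below are illustrative.

```python
# Flag real-time discharge values that deviate from a simulated (expected)
# value by more than a relative tolerance (illustrative 25% threshold).
def flag_errors(observed, simulated, tol=0.25):
    """Return indices where |obs - sim| / sim exceeds tol."""
    return [i for i, (o, s) in enumerate(zip(observed, simulated))
            if s > 0 and abs(o - s) / s > tol]

obs = [102.0, 98.0, 250.0, 101.0]   # ft3/s; third value is suspect
sim = [100.0, 100.0, 100.0, 100.0]  # simulated expected discharge

print(flag_errors(obs, sim))        # → [2]
```

    A flagged value could then be replaced by the simulated one, mirroring the substitution capability the verification routines provide.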

  13. Modeling the spatial spread of infectious diseases: the GLobal Epidemic and Mobility computational model

    PubMed Central

    Balcan, Duygu; Gonçalves, Bruno; Hu, Hao; Ramasco, José J.; Colizza, Vittoria

    2010-01-01

    Here we present the Global Epidemic and Mobility (GLEaM) model that integrates sociodemographic and population mobility data in a spatially structured stochastic disease approach to simulate the spread of epidemics at the worldwide scale. We discuss the flexible structure of the model that is open to the inclusion of different disease structures and local intervention policies. This makes GLEaM suitable for the computational modeling and anticipation of the spatio-temporal patterns of global epidemic spreading, the understanding of historical epidemics, the assessment of the role of human mobility in shaping global epidemics, and the analysis of mitigation and containment scenarios. PMID:21415939

  14. Development and Validation of a Supersonic Helium-Air Coannular Jet Facility

    NASA Technical Reports Server (NTRS)

    Carty, Atherton A.; Cutler, Andrew D.

    1999-01-01

    Data are acquired in a simple coannular He/air supersonic jet suitable for validation of CFD (Computational Fluid Dynamics) codes for high speed propulsion. Helium is employed as a non-reacting hydrogen fuel simulant, constituting the core of the coannular flow while the coflow is composed of air. The mixing layer interface between the two flows in the near field and the plume region which develops further downstream constitute the primary regions of interest, similar to those present in all hypersonic air breathing propulsion systems. A computational code has been implemented from the experiment's inception, serving as a tool for model design during the development phase.

  15. Probabilistic Composite Design

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1997-01-01

    Probabilistic composite design is described in terms of a computational simulation. This simulation tracks probabilistically the composite design evolution from constituent materials and fabrication process, through composite mechanics, to structural components. Comparisons with experimental data are provided to illustrate selection of probabilistic design allowables, test methods/specimen guidelines, and identification of in situ versus pristine strength. For example, results show that: in situ fiber tensile strength is 90% of its pristine strength; flat-wise long-tapered specimens are most suitable for setting ply tensile strength allowables; a composite radome can be designed with a reliability of 0.999999; and laminate fatigue exhibits wide-spread scatter at 90% cyclic-stress to static-strength ratios.

  16. Virtual reality in urban water management: communicating urban flooding with particle-based CFD simulations.

    PubMed

    Winkler, Daniel; Zischg, Jonatan; Rauch, Wolfgang

    2018-01-01

    For communicating urban flood risk to authorities and the public, a realistic three-dimensional visual display is frequently more suitable than detailed flood maps. Virtual reality could also serve to plan short-term flooding interventions. We introduce here an alternative approach for simulating three-dimensional flooding dynamics in large- and small-scale urban scenes by drawing on methods from computer graphics. This approach, denoted 'particle in cell', is a particle-based CFD method that is used to produce physically plausible results rather than accurate flow dynamics. We exemplify the approach for the real flooding event of July 2016 in Innsbruck.

  17. Contact Line Dynamics

    NASA Astrophysics Data System (ADS)

    Kreiss, Gunilla; Holmgren, Hanna; Kronbichler, Martin; Ge, Anthony; Brant, Luca

    2017-11-01

    The conventional no-slip boundary condition leads to a non-integrable stress singularity at a moving contact line. This makes numerical simulations of two-phase flow challenging, especially when capillarity of the contact point is essential for the dynamics of the flow. We will describe a modeling methodology, which is suitable for numerical simulations, and present results from numerical computations. The methodology is based on combining a relation between the apparent contact angle and the contact line velocity, with the similarity solution for Stokes flow at a planar interface. The relation between angle and velocity can be determined by theoretical arguments, or from simulations using a more detailed model. In our approach we have used results from phase field simulations in a small domain, but using a molecular dynamics model should also be possible. In both cases more physics is included and the stress singularity is removed.

  18. Soapy: an adaptive optics simulation written purely in Python for rapid concept development

    NASA Astrophysics Data System (ADS)

    Reeves, Andrew

    2016-07-01

    Soapy is a newly developed Adaptive Optics (AO) simulation which aims to be a flexible and fast-to-use tool-kit for many applications in the field of AO. It is written purely in the Python language, adding to and taking advantage of the already rich ecosystem of scientific libraries and programs. The simulation has been designed to be extremely modular, such that each component can be used stand-alone for projects which do not require a full end-to-end simulation. Ease of use, modularity and code clarity have been prioritised at the expense of computational performance. Though this means the code is not yet suitable for large studies of Extremely Large Telescope AO systems, it is well suited to education, exploration of new AO concepts and investigations of current-generation telescopes.

  19. Machine learning from computer simulations with applications in rail vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Taheri, Mehdi; Ahmadian, Mehdi

    2016-05-01

    The application of stochastic modelling for learning the behaviour of multibody dynamics (MBD) models is investigated. Post-processing data from a simulation run are used to train the stochastic model that estimates the relationship between model inputs (suspension relative displacement and velocity) and the output (sum of suspension forces). The stochastic model can be used to reduce the computational burden of the MBD model by replacing a computationally expensive subsystem in the model (the suspension subsystem). With minor changes, the stochastic modelling technique is able to learn the behaviour of a physical system and integrate its behaviour within MBD models. The technique is highly advantageous for MBD models where real-time simulations are necessary, or for models that have a large number of repeated substructures, e.g. modelling a train with a large number of railcars. Because the training data are acquired prior to the development of the stochastic model, conventional sampling-plan strategies such as Latin hypercube designs, in which simulations are performed using inputs dictated by the sampling plan, cannot be applied. Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, a sampling plan suitable for the process is developed in which the most space-filling subset of the acquired data, with a given number of sample points, that best describes the dynamic behaviour of the system under study is selected as the training data.
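    One concrete way to select a space-filling subset from already-acquired data is a greedy maximin rule: repeatedly pick the point farthest from everything chosen so far. This is an assumed criterion for illustration (the paper's exact selection rule may differ), and the data are synthetic.

```python
import numpy as np

# Synthetic acquired data: 200 (displacement, velocity) input samples.
rng = np.random.default_rng(1)
data = rng.uniform(size=(200, 2))

def greedy_maximin(points, k):
    """Pick k indices, each maximizing distance to the points already chosen."""
    chosen = [0]                         # start from an arbitrary point
    d = np.linalg.norm(points - points[0], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(d))          # farthest from current selection
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return chosen

subset = greedy_maximin(data, 10)
print(len(set(subset)))                  # 10 distinct, well-spread samples
```

    Unlike a Latin hypercube plan, this operates on inputs that already exist, which matches the constraint that the training data were acquired before the surrogate was designed.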

  20. Model structure identification for wastewater treatment simulation based on computational fluid dynamics.

    PubMed

    Alex, J; Kolisch, G; Krause, K

    2002-01-01

    The objective of this project is to use the results of a CFD simulation to automatically, systematically and reliably generate an appropriate model structure for simulation of the biological processes using CSTR activated sludge compartments. Models and dynamic simulation have become important tools for research, but also increasingly for the design and optimisation of wastewater treatment plants. Besides the biological models, several cases have been reported of the application of computational fluid dynamics (CFD) to wastewater treatment plants. One aim of the presented method for deriving model structures from CFD results is to exclude the influence of empirical structure selection on the results of dynamic simulation studies of WWTPs. The second application of the approach is the analysis of badly performing treatment plants where bad flow behaviour, such as short-circuit flows, is suspected to be part of the problem. The method requires as a first step the calculation of the fluid dynamics of the biological treatment step at different loading situations by means of 3-dimensional CFD simulation. This information is then used to automatically generate a suitable model structure for conventional dynamic simulation of the treatment plant, consisting of a number of CSTR modules with a pattern of exchange flows between the tanks. The method is explained in detail and its application to the WWTP Wuppertal Buchenhofen is presented.

  1. Time-Domain Simulation of Along-Track Interferometric SAR for Moving Ocean Surfaces.

    PubMed

    Yoshida, Takero; Rheem, Chang-Kyu

    2015-06-10

    A time-domain simulation of along-track interferometric synthetic aperture radar (AT-InSAR) has been developed to support ocean observations. The simulation is in the time domain and based on Bragg scattering to be applicable for moving ocean surfaces. The time-domain simulation is suitable for examining velocities of moving objects. The simulation obtains the time series of microwave backscattering as raw signals for movements of ocean surfaces. In terms of realizing Bragg scattering, the computational grid elements for generating the numerical ocean surface are set to be smaller than the wavelength of the Bragg resonant wave. In this paper, the simulation was conducted for a Bragg resonant wave and irregular waves with currents. As a result, the phases of the received signals from two antennas differ due to the movement of the numerical ocean surfaces. The phase differences shifted by currents were in good agreement with the theoretical values. Therefore, the adaptability of the simulation to observe velocities of ocean surfaces with AT-InSAR was confirmed.
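    The grid-spacing constraint follows from the Bragg resonance condition, lam_B = lam / (2 sin theta), where lam is the radar wavelength and theta the incidence angle: the surface grid must resolve waves shorter than lam_B. The radar wavelength and incidence angle below are illustrative, not the paper's configuration.

```python
import math

# Bragg resonant ocean wavelength for an illustrative X-band radar.
lam = 0.031                    # radar wavelength, m (~9.7 GHz, assumed)
theta = math.radians(35.0)     # incidence angle (assumed)

lam_bragg = lam / (2.0 * math.sin(theta))   # Bragg resonance condition
grid = lam_bragg / 5.0         # several grid cells per Bragg wavelength

print(round(lam_bragg, 4), grid < lam_bragg)
```

    With these numbers the Bragg wave is only a few centimetres long, which is why the numerical ocean surface needs such fine grid elements.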

  2. Time-Domain Simulation of Along-Track Interferometric SAR for Moving Ocean Surfaces

    PubMed Central

    Yoshida, Takero; Rheem, Chang-Kyu

    2015-01-01

    A time-domain simulation of along-track interferometric synthetic aperture radar (AT-InSAR) has been developed to support ocean observations. The simulation is in the time domain and based on Bragg scattering to be applicable for moving ocean surfaces. The time-domain simulation is suitable for examining velocities of moving objects. The simulation obtains the time series of microwave backscattering as raw signals for movements of ocean surfaces. In terms of realizing Bragg scattering, the computational grid elements for generating the numerical ocean surface are set to be smaller than the wavelength of the Bragg resonant wave. In this paper, the simulation was conducted for a Bragg resonant wave and irregular waves with currents. As a result, the phases of the received signals from two antennas differ due to the movement of the numerical ocean surfaces. The phase differences shifted by currents were in good agreement with the theoretical values. Therefore, the adaptability of the simulation to observe velocities of ocean surfaces with AT-InSAR was confirmed. PMID:26067197

  3. Database of tsunami scenario simulations for Western Iberia: a tool for the TRIDEC Project Decision Support System for tsunami early warning

    NASA Astrophysics Data System (ADS)

    Armigliato, Alberto; Pagnoni, Gianluca; Zaniboni, Filippo; Tinti, Stefano

    2013-04-01

    TRIDEC is an EU-FP7 Project whose main goal is, in general terms, to develop suitable strategies for the management of crises that may arise in the field of Earth management. The general paradigms adopted by TRIDEC to develop those strategies include intelligent information management, the capability of managing dynamically increasing volumes and dimensionality of information in complex events, and collaborative decision making in systems that are typically very loosely coupled. The two areas where TRIDEC applies and tests its strategies are tsunami early warning and industrial subsurface development. In the field of tsunami early warning, TRIDEC aims at developing a Decision Support System (DSS) that integrates 1) a set of seismic, geodetic and marine sensors devoted to the detection and characterisation of possible tsunamigenic sources and to monitoring the time and space evolution of the generated tsunami, 2) large-volume databases of pre-computed numerical tsunami scenarios, 3) a proper overall system architecture. Two test areas are dealt with in TRIDEC: the western Iberian margin and the eastern Mediterranean. In this study, we focus on the western Iberian margin with special emphasis on the Portuguese coasts. The strategy adopted in TRIDEC plans to populate two different databases, called "Virtual Scenario Database" (VSDB) and "Matching Scenario Database" (MSDB), both of which deal only with earthquake-generated tsunamis. In the VSDB we simulate numerically a few large-magnitude events generated by the major known tectonic structures in the study area. Heterogeneous slip distributions on the earthquake faults are introduced to simulate events as "realistically" as possible. The members of the VSDB represent the unknowns that the TRIDEC platform must be able to recognise and match during the early crisis management phase.
On the other hand, the MSDB contains a very large number (order of thousands) of tsunami simulations performed starting from many different simple earthquake sources of different magnitudes and located in the "vicinity" of the virtual scenario earthquake. In the DSS perspective, the members of the MSDB have to be suitably combined based on the information coming from the sensor networks, and the results are used during the crisis evolution phase to forecast the degree of exposition of different coastal areas. We provide examples from both databases whose members are computed by means of the in-house software called UBO-TSUFD, implementing the non-linear shallow-water equations and solving them over a set of nested grids that guarantee a suitable spatial resolution (few tens of meters) in specific, suitably chosen, coastal areas.

  4. Finite element simulation of articular contact mechanics with quadratic tetrahedral elements.

    PubMed

    Maas, Steve A; Ellis, Benjamin J; Rawlins, David S; Weiss, Jeffrey A

    2016-03-21

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Automation of closed environments in space for human comfort and safety

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This report culminates the work accomplished during a three year design project on the automation of an Environmental Control and Life Support System (ECLSS) suitable for space travel and colonization. The system would provide a comfortable living environment in space that is fully functional with limited human supervision. A completely automated ECLSS would increase astronaut productivity while contributing to their safety and comfort. The first section of this report, section 1.0, briefly explains the project, its goals, and the scheduling used by the team in meeting these goals. Section 2.0 presents an in-depth look at each of the component subsystems. Each subsection describes the mathematical modeling and computer simulation used to represent that portion of the system. The individual models have been integrated into a complete computer simulation of the CO2 removal process. In section 3.0, the two simulation control schemes are described. The classical control approach uses traditional methods to control the mechanical equipment. The expert control system uses fuzzy logic and artificial intelligence to control the system. By integrating the two control systems with the mathematical computer simulation, the effectiveness of the two schemes can be compared. The results are then used as proof of concept in considering new control schemes for the entire ECLSS. Section 4.0 covers the results and trends observed when the model was subjected to different test situations. These results provide insight into the operating procedures of the model and the different control schemes. The appendix, section 5.0, contains summaries of lectures presented during the past year, homework assignments, and the completed source code used for the computer simulation and control system.

  6. Sub-grid drag models for horizontal cylinder arrays immersed in gas-particle multiphase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran

    2013-09-08

    Immersed cylindrical tube arrays often are used as heat exchangers in gas-particle fluidized beds. In multiphase computational fluid dynamics (CFD) simulations of large fluidized beds, explicit resolution of small cylinders is computationally infeasible. Instead, the cylinder array may be viewed as an effective porous medium in coarse-grid simulations. The cylinders' influence on the suspension as a whole, manifested as an effective drag force, and on the relative motion between gas and particles, manifested as a correction to the gas-particle drag, must be modeled via suitable sub-grid constitutive relationships. In this work, highly resolved unit-cell simulations of flow around an array of horizontal cylinders, arranged in a staggered configuration, are filtered to construct sub-grid, or `filtered', drag models, which can be implemented in coarse-grid simulations. The force on the suspension exerted by the cylinders is comprised of, as expected, a buoyancy contribution, and a kinetic component analogous to fluid drag on a single cylinder. Furthermore, the introduction of tubes also is found to enhance segregation at the scale of the cylinder size, which, in turn, leads to a reduction in the filtered gas-particle drag.

  7. Volunteered Cloud Computing for Disaster Management

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S. R.

    2014-12-01

    Disaster management relies increasingly on interpreting earth observations and running numerical models; which require significant computing capacity - usually on short notice and at irregular intervals. Peak computing demand during event detection, hazard assessment, or incident response may exceed agency budgets; however some of it can be met through volunteered computing, which distributes subtasks to participating computers via the Internet. This approach has enabled large projects in mathematics, basic science, and climate research to harness the slack computing capacity of thousands of desktop computers. This capacity is likely to diminish as desktops give way to battery-powered mobile devices (laptops, smartphones, tablets) in the consumer market; but as cloud computing becomes commonplace, it may offer significant slack capacity -- if its users are given an easy, trustworthy mechanism for participating. Such a "volunteered cloud computing" mechanism would also offer several advantages over traditional volunteered computing: tasks distributed within a cloud have fewer bandwidth limitations; granular billing mechanisms allow small slices of "interstitial" computing at no marginal cost; and virtual storage volumes allow in-depth, reversible machine reconfiguration. Volunteered cloud computing is especially suitable for "embarrassingly parallel" tasks, including ones requiring large data volumes: examples in disaster management include near-real-time image interpretation, pattern / trend detection, or large model ensembles. In the context of a major disaster, we estimate that cloud users (if suitably informed) might volunteer hundreds to thousands of CPU cores across a large provider such as Amazon Web Services. To explore this potential, we are building a volunteered cloud computing platform and targeting it to a disaster management context. 
Using a lightweight, fault-tolerant network protocol, this platform helps cloud users join parallel computing projects; automates reconfiguration of their virtual machines; ensures accountability for donated computing; and optimizes the use of "interstitial" computing. Initial applications include fire detection from multispectral satellite imagery and flood risk mapping through hydrological simulations.

  8. Parallelization of interpolation, solar radiation and water flow simulation modules in GRASS GIS using OpenMP

    NASA Astrophysics Data System (ADS)

    Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav

    2017-10-01

    In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.

  9. Research in digital adaptive flight controllers

    NASA Technical Reports Server (NTRS)

    Kaufman, H.

    1976-01-01

    A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers, which directly utilize parameter identification, and implicit controllers, which do not require identification, were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers. Weighted least squares estimation procedures were interfaced with control logic developed using either optimal regulator theory or single-stage performance indices.

  10. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

    We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version, which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS, and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  11. Crystallographic Lattice Boltzmann Method

    PubMed Central

    Namburi, Manjusha; Krithivasan, Siddharth; Ansumali, Santosh

    2016-01-01

    Current approaches to Direct Numerical Simulation (DNS) are computationally quite expensive for most realistic scientific and engineering applications of Fluid Dynamics such as automobiles or atmospheric flows. The Lattice Boltzmann Method (LBM), with its simplified kinetic descriptions, has emerged as an important tool for simulating hydrodynamics. In a heterogeneous computing environment, it is often preferred due to its flexibility and better parallel scaling. However, direct simulation of realistic applications, without the use of turbulence models, remains a distant dream even with highly efficient methods such as LBM. In LBM, a fictitious lattice with suitable isotropy in the velocity space is considered to recover Navier-Stokes hydrodynamics in the macroscopic limit. The same lattice is mapped onto a Cartesian grid for spatial discretization of the kinetic equation. In this paper, we present an inverted argument of the LBM, making spatial discretization the central theme. We argue that the optimal spatial discretization for LBM is a Body Centered Cubic (BCC) arrangement of grid points. We illustrate an order-of-magnitude gain in efficiency for LBM and thus significant progress towards the feasibility of DNS for realistic flows. PMID:27251098
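    The collide-and-stream structure that the paper builds on can be sketched compactly. The following is a minimal D2Q9 lattice-BGK step on a periodic grid, using the standard Cartesian arrangement rather than the BCC discretization the paper proposes; it is included only to illustrate the method's structure, not the paper's contribution:

```python
import numpy as np

# Minimal D2Q9 lattice-BGK sketch on a periodic grid (standard Cartesian
# layout, not the BCC arrangement proposed in the paper).
W = np.array([4/9] + [1/9]*4 + [1/36]*4)             # lattice weights
C = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])   # lattice velocities

def equilibrium(rho, ux, uy):
    # Second-order Maxwell-Boltzmann expansion recovering Navier-Stokes.
    cu = 3.0 * (C[:, 0, None, None] * ux + C[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return rho * W[:, None, None] * (1.0 + cu + 0.5 * cu**2 - usq)

def step(f, tau=0.8):
    rho = f.sum(axis=0)                               # density moment
    ux = (f * C[:, 0, None, None]).sum(axis=0) / rho  # momentum moments
    uy = (f * C[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau         # BGK collision
    for i, (cx, cy) in enumerate(C):                  # periodic streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f
```

    A BCC grid changes only the streaming step (where populations move), which is why the authors can argue about optimal spatial discretization independently of the collision model.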

  12. Lattice Thermal Conductivity of Ultra High Temperature Ceramics (UHTC) ZrB2 and HfB2 from Atomistic Simulations

    NASA Technical Reports Server (NTRS)

    Lawson, John W.; Daw, Murray S.; Bauschlicher, Charles W.

    2012-01-01

    Ultra high temperature ceramics (UHTC) including ZrB2 and HfB2 have a number of properties that make them attractive for applications in extreme environments. One such property is their high thermal conductivity. Computational modeling of these materials will facilitate understanding of fundamental mechanisms, elucidate structure-property relationships, and ultimately accelerate the materials design cycle. Progress in computational modeling of UHTCs, however, has been limited in part due to the absence of suitable interatomic potentials. Recently, we developed Tersoff-style parameterizations of such potentials for both ZrB2 and HfB2 appropriate for atomistic simulations. As an application, Green-Kubo molecular dynamics simulations were performed to evaluate the lattice thermal conductivity for single crystals of ZrB2 and HfB2. The atomic mass difference in these binary compounds leads to oscillations in the time correlation function of the heat current, in contrast to the more typical monotonic decay seen in monoatomic materials such as silicon. Results at room temperature and at elevated temperatures will be reported.
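    The Green-Kubo route mentioned above obtains the conductivity as the time integral of the heat-current autocorrelation function, scaled by volume and temperature. A hedged numerical sketch (the heat-current series would come from an MD trajectory; the constants and data here are illustrative):

```python
import numpy as np

def autocorrelation(j):
    """Time autocorrelation <J(0)J(t)> of a 1-D heat-current series."""
    n = len(j)
    return np.array([np.mean(j[:n - lag] * j[lag:]) for lag in range(n)])

def green_kubo_conductivity(j, dt, volume, temperature, kb=1.380649e-23):
    # lambda = V / (k_B T^2) * integral of <J(0)J(t)> dt
    acf = autocorrelation(j)
    # trapezoidal time integral of the autocorrelation function
    integral = dt * (acf[0] / 2 + acf[1:-1].sum() + acf[-1] / 2)
    return volume / (kb * temperature**2) * integral
```

    The oscillations in the autocorrelation that the abstract describes for ZrB2 and HfB2 would show up directly in `acf`, which is why the integration (and its truncation time) has to be handled with care in binary compounds.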

  13. Lattice Thermal Conductivity from Atomistic Simulations: ZrB2 and HfB2

    NASA Technical Reports Server (NTRS)

    Lawson, John W.; Daw, Murray S.; Bauschlicher, Charles W.

    2012-01-01

    Ultra high temperature ceramics (UHTC) including ZrB2 and HfB2 have a number of properties that make them attractive for applications in extreme environments. One such property is their high thermal conductivity. Computational modeling of these materials will facilitate understanding of fundamental mechanisms, elucidate structure-property relationships, and ultimately accelerate the materials design cycle. Progress in computational modeling of UHTCs, however, has been limited in part due to the absence of suitable interatomic potentials. Recently, we developed Tersoff-style parameterizations of such potentials for both ZrB2 and HfB2 appropriate for atomistic simulations. As an application, Green-Kubo molecular dynamics simulations were performed to evaluate the lattice thermal conductivity for single crystals of ZrB2 and HfB2. The atomic mass difference in these binary compounds leads to oscillations in the time correlation function of the heat current, in contrast to the more typical monotonic decay seen in monoatomic materials such as silicon. Results at room temperature and at elevated temperatures will be reported.

  14. A Component-Based FPGA Design Framework for Neuronal Ion Channel Dynamics Simulations

    PubMed Central

    Mak, Terrence S. T.; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang

    2008-01-01

    Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. Field Programmable Gate Array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. Performances of the various FPGA design approaches are compared theoretically and experimentally in corresponding implementations of the AMPA and NMDA synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory economic solution as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired. PMID:17190033
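    One way to evaluate an exponential with only table lookups and multiplies, the kind of operation profile that maps well onto FPGA logic, is to split the argument into binary fractional bits and multiply precomputed factors exp(2^-k). The sketch below illustrates that general idea in Python; it is an assumption-level illustration, not the specific factoring scheme of the paper:

```python
import math

# Hypothetical table-driven "factoring" evaluation of exp(x) for
# 0 <= x < 1: the fixed-point argument is decomposed into binary bits,
# and exp(x) becomes a product of precomputed constants exp(2^-(k+1)).
# Multiply-only inner loop; illustrative, not the paper's exact scheme.
BITS = 24
FACTORS = [math.exp(2.0 ** -(k + 1)) for k in range(BITS)]

def exp_factored(x):
    assert 0.0 <= x < 1.0
    frac = int(x * (1 << BITS))            # fixed-point form of x
    result = 1.0
    for k in range(BITS):
        if frac & (1 << (BITS - 1 - k)):   # bit with weight 2^-(k+1)
            result *= FACTORS[k]
    return result
```

    In hardware, the table lives in block RAM and the conditional multiplies pipeline naturally, which is why factoring approaches conserve logic resources compared with polynomial evaluation.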

  15. Scaling up a CMS tier-3 site with campus resources and a 100 Gb/s network connection: what could go wrong?

    NASA Astrophysics Data System (ADS)

    Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Tovar, Benjamin; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2017-10-01

    The University of Notre Dame (ND) CMS group operates a modest-sized Tier-3 site suitable for local, final-stage analysis of CMS data. However, through the ND Center for Research Computing (CRC), Notre Dame researchers have opportunistic access to roughly 25k CPU cores of computing and a 100 Gb/s WAN network link. To understand the limits of what might be possible in this scenario, we undertook to use these resources for a wide range of CMS computing tasks from user analysis through large-scale Monte Carlo production (including both detector simulation and data reconstruction). We will discuss the challenges inherent in effectively utilizing CRC resources for these tasks and the solutions deployed to overcome them.

  16. Numerical Predictions of Mode Reflections in an Open Circular Duct: Comparison with Theory

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Hixon, Ray

    2015-01-01

    The NASA Broadband Aeroacoustic Stator Simulation code was used to compute the acoustic field for higher-order modes in a circular duct geometry. To test the accuracy of the results computed by the code, the duct was terminated by an open end with an infinite flange or no flange. Both open end conditions have a theoretical solution that was used to compare with the computed results. Excellent comparison for reflection matrix values was achieved after suitable refinement of the grid at the open end. The study also revealed issues with the level of the mode amplitude introduced into the acoustic field from the source boundary and the amount of reflection that occurred at the source boundary when a general nonreflecting boundary condition was applied.

  17. Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 3: Refined conceptual design report

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The results of the refined conceptual design phase (task 5) of the Simulation Computer System (SCS) study are reported. The SCS is the computational portion of the Payload Training Complex (PTC) providing simulation based training on payload operations of the Space Station Freedom (SSF). In task 4 of the SCS study, the range of architectures suitable for the SCS was explored. Identified system architectures, along with their relative advantages and disadvantages for SCS, were presented in the Conceptual Design Report. Six integrated designs, combining the most promising features from the architectural formulations, were additionally identified in the report. The six integrated designs were evaluated further to distinguish the more viable designs to be refined as conceptual designs. The three designs that were selected represent distinct approaches to achieving a capable and cost effective SCS configuration for the PTC. Here, the results of task 4 (input to this task) are briefly reviewed. Then, prior to describing individual conceptual designs, the PTC facility configuration and the SSF systems architecture that must be supported by the SCS are reviewed. Next, basic features of SCS implementation that have been incorporated into all selected SCS designs are considered. The details of the individual SCS designs are then presented before making a final comparison of the three designs.

  18. Airfoil Shape Optimization based on Surrogate Model

    NASA Astrophysics Data System (ADS)

    Mukesh, R.; Lingadurai, K.; Selvakumar, U.

    2018-02-01

    Engineering design problems always require an enormous amount of real-time experiments and computational simulations in order to assess and ensure the design objectives of the problems subject to various constraints. In most cases, the computational resources and time required per simulation are large. In cases such as sensitivity analysis and design optimisation, where thousands or millions of simulations have to be carried out, the computational burden becomes prohibitive for designers. Nowadays approximation models, also known as surrogate models (SM), are widely employed to reduce the computational resources and time needed to analyse various engineering systems. Various approaches such as Kriging, neural networks, polynomials, and Gaussian processes are used to construct the approximation models. The primary intention of this work is to employ the k-fold cross validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of the surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating panel and viscous solution algorithms which are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
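    Ordinary Kriging predicts at a new point as a weighted sum of observed responses, with weights obtained from a covariance (variogram) model plus a Lagrange multiplier that forces the weights to sum to one. A minimal one-dimensional sketch with a Gaussian covariance model (illustrative only; the paper compares several variogram models under k-fold cross validation):

```python
import numpy as np

def gaussian_cov(h, sill=1.0, rng=1.0):
    """Gaussian covariance model as a function of separation distance h."""
    return sill * np.exp(-(h / rng) ** 2)

def kriging_predict(X, y, x0, sill=1.0, rng=1.0):
    n = len(X)
    # Ordinary-Kriging system: pairwise covariances bordered by a row
    # and column of ones (the unbiasedness constraint sum(w) = 1).
    K = np.ones((n + 1, n + 1))
    K[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            K[i, j] = gaussian_cov(abs(X[i] - X[j]), sill, rng)
    k = np.ones(n + 1)
    k[:n] = gaussian_cov(np.abs(np.array(X) - x0), sill, rng)
    w = np.linalg.solve(K, k)          # weights plus Lagrange multiplier
    return float(np.dot(w[:n], y))
```

    Because the system reproduces the data exactly at sample points, the surrogate interpolates the expensive solver's outputs and is cheap to evaluate inside an optimisation loop.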

  19. Parallel Agent-Based Simulations on Clusters of GPUs and Multi-Core Processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaby, Brandon G; Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    An effective latency-hiding mechanism is presented in the parallelization of agent-based model simulations (ABMS) with millions of agents. The mechanism is designed to accommodate the hierarchical organization as well as heterogeneity of current state-of-the-art parallel computing platforms. We use it to explore the computation vs. communication trade-off continuum available with the deep computational and memory hierarchies of extant platforms and present a novel analytical model of the tradeoff. We describe our implementation and report preliminary performance results on two distinct parallel platforms suitable for ABMS: CUDA threads on multiple, networked graphical processing units (GPUs), and pthreads on multi-core processors. Message Passing Interface (MPI) is used for inter-GPU as well as inter-socket communication on a cluster of multiple GPUs and multi-core processors. Results indicate the benefits of our latency-hiding scheme, delivering as much as over 100-fold improvement in runtime for certain benchmark ABMS application scenarios with several million agents. This speed improvement is obtained on our system that is already two to three orders of magnitude faster on one GPU than an equivalent CPU-based execution in a popular simulator in Java. Thus, the overall execution of our current work is over four orders of magnitude faster when executed on multiple GPUs.

  20. Particle Models with Self Sustained Current

    NASA Astrophysics Data System (ADS)

    Colangeli, M.; De Masi, A.; Presutti, E.

    2017-06-01

    We present some computer simulations run on a stochastic cellular automaton (CA). The CA simulates a gas of particles which are in a channel, the interval [1, L] in Z, but also in "reservoirs" R_1 and R_2. The evolution in the channel simulates a lattice gas with Kawasaki dynamics with attractive Kac interactions; the temperature is chosen smaller than the mean field critical one. There are also exchanges of particles between the channel and the reservoirs and among reservoirs. When the rate of exchanges among reservoirs is in a suitable interval, the CA reaches an apparently stationary state with a nonzero current; for different choices of the initial condition the current changes sign. We have a quite satisfactory theory of the phenomenon, but a full mathematical proof is still missing.

  1. Subjective evaluation with FAA criteria: A multidimensional scaling approach. [ground track control management

    NASA Technical Reports Server (NTRS)

    Kreifeldt, J. G.; Parkin, L.; Wempe, T. E.; Huff, E. F.

    1975-01-01

    Perceived orderliness in the ground tracks of five A/C during their simulated flights was studied. Dynamically developing ground tracks for five A/C from 21 separate runs were reproduced from computer storage and displayed on CRTs to professional pilots and controllers for their evaluations and preferences under several criteria. The ground tracks were developed in 20 seconds as opposed to the 5 minutes of simulated flight using speedup techniques for display. Metric and nonmetric multidimensional scaling techniques are being used to analyze the subjective responses in an effort to: (1) determine the meaningfulness of basing decisions on such complex subjective criteria; (2) compare pilot/controller perceptual spaces; (3) determine the dimensionality of the subjects' perceptual spaces; and thereby (4) determine objective measures suitable for comparing alternative traffic management simulations.

  2. Generating Neuron Geometries for Detailed Three-Dimensional Simulations Using AnaMorph.

    PubMed

    Mörschel, Konstantin; Breit, Markus; Queisser, Gillian

    2017-07-01

    Generating realistic and complex computational domains for numerical simulations is often a challenging task. In neuroscientific research, more and more one-dimensional morphology data is becoming publicly available through databases. This data, however, only contains point and diameter information not suitable for detailed three-dimensional simulations. In this paper, we present a novel framework, AnaMorph, that automatically generates water-tight surface meshes from one-dimensional point-diameter files. These surface triangulations can be used to simulate the electrical and biochemical behavior of the underlying cell. In addition to morphology generation, AnaMorph also performs quality control of the semi-automatically reconstructed cells coming from anatomical reconstructions. This toolset allows an extension from the classical dimension-reduced modeling and simulation of cellular processes to a full three-dimensional and morphology-including method, leading to novel structure-function interplay studies in the medical field. The developed numerical methods can further be employed in other areas where complex geometries are an essential component of numerical simulations.

  3. Deployment Simulation Methods for Ultra-Lightweight Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Johnson, Arthur R.

    2003-01-01

    Two dynamic inflation simulation methods are employed for modeling the deployment of folded thin-membrane tubes. The simulations are necessary because ground tests include gravity effects and may poorly represent deployment in space. The two simulation methods are referred to as the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method. They are available in the LS-DYNA nonlinear dynamic finite element code. Both methods are suitable for modeling the interactions between the inflation gas and the thin-membrane tube structures. The CV method only considers the pressure induced by the inflation gas in the simulation, while the ALE method models the actual flow of the inflation gas. Thus, the transient fluid properties at any location within the tube can be predicted by the ALE method. Deployment simulations of three packaged tube models, namely coiled, Z-folded, and telescopically-folded configurations, are performed. Results predicted by both methods for the telescopically-folded configuration are correlated and computational efficiency issues are discussed.

  4. Numerical simulation of controlled directional solidification under microgravity conditions

    NASA Astrophysics Data System (ADS)

    Holl, S.; Roos, D.; Wein, J.

    The computer-assisted simulation of solidification processes influenced by gravity has gained increasing importance in recent years in both ground-based and microgravity research. Depending on the specific needs of the investigator, the simulation model ideally covers a broad spectrum of applications. These primarily include the optimization of furnace design in interaction with selected process parameters to meet the desired crystallization conditions. Different approaches concerning the complexity of the simulation models as well as their dedicated applications will be discussed in this paper. Special emphasis will be put on the potential of software tools to increase the scientific quality and cost-efficiency of microgravity experimentation. The results gained so far in the context of TEXUS, FSLP, D-1 and D-2 (preparatory program) experiments, highlighting their simulation-supported preparation and evaluation, will be discussed. An outlook will then be given on the possibilities to enhance the efficiency of pre-industrial research in the Columbus era through the incorporation of suitable simulation methods and tools.

  5. Boundary Conditions for Jet Flow Computations

    NASA Technical Reports Server (NTRS)

    Hayder, M. E.; Turkel, E.

    1994-01-01

    Ongoing activities are focused on capturing the sound source in a supersonic jet through careful large eddy simulation (LES). One issue that is addressed is the effect of the boundary conditions, both inflow and outflow, on the predicted flow fluctuations, which represent the sound source. In this study, we examine the accuracy of several boundary conditions to determine their suitability for computations of time-dependent flows. Various boundary conditions are used to compute the flow field of a laminar axisymmetric jet excited at the inflow by a disturbance given by the corresponding eigenfunction of the linearized stability equations. We solve the full time dependent Navier-Stokes equations by a high order numerical scheme. For very small excitations, the computed growth of the modes closely corresponds to that predicted by the linear theory. We then vary the excitation level to see the effect of the boundary conditions in the nonlinear flow regime.

  6. Modified two-layer social force model for emergency earthquake evacuation

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Liu, Hong; Qin, Xin; Liu, Baoxi

    2018-02-01

    Studies of crowd behavior with related research on computer simulation provide an effective basis for architectural design and effective crowd management. Based on low-density group organization patterns, a modified two-layer social force model is proposed in this paper to simulate and reproduce a group gathering process. First, this paper studies evacuation videos from the Luan'xian earthquake in 2012, and extends the study of group organization patterns to a higher density. Furthermore, taking full advantage of the model's strength in crowd-gathering simulations, a new method for grouping and guidance based on crowd dynamics is proposed. Second, a real-life grouping situation in earthquake evacuation is simulated and reproduced. Compared with the fundamental social force model and the existing guided crowd model, the modified model reduces congestion time and faithfully reflects group behaviors. Furthermore, the experimental results also show that a stable group pattern and a suitable leader decrease collisions and allow a safer evacuation process.
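    The underlying social force model drives each pedestrian toward a goal at a desired speed while exponentially decaying repulsion keeps pedestrians apart. A minimal single-pedestrian update (constants are illustrative; the paper's two-layer model adds group and guidance terms on top of this basic structure):

```python
import numpy as np

# Minimal social-force update for one pedestrian.  Illustrative
# constants; the two-layer model in the paper adds group/leader terms.
def social_force_step(pos, vel, goal, others, dt=0.1,
                      desired_speed=1.3, tau=0.5, A=2.0, B=0.3):
    # Driving force: relax velocity toward desired speed along the goal
    # direction with relaxation time tau.
    direction = (goal - pos) / np.linalg.norm(goal - pos)
    force = (desired_speed * direction - vel) / tau
    for q in others:                      # pairwise exponential repulsion
        diff = pos - q
        d = np.linalg.norm(diff)
        force += A * np.exp(-d / B) * diff / d
    vel = vel + dt * force                # explicit Euler integration
    return pos + dt * vel, vel
```

    Group behavior in two-layer variants is typically obtained by adding an attraction term toward the group's leader or centroid, so the same update structure carries over.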

  7. V&V Of CFD Modeling Of The Argonne Bubble Experiment: FY15 Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyt, Nathaniel C.; Wardle, Kent E.; Bailey, James L.

    2015-09-30

    In support of the development of accelerator-driven production of the fission product Mo 99, computational fluid dynamics (CFD) simulations of an electron-beam irradiated, experimental-scale bubble chamber have been conducted in order to aid in interpretation of existing experimental results, provide additional insights into the physical phenomena, and develop predictive thermal hydraulic capabilities that can be applied to full-scale target solution vessels. Toward that end, a custom hybrid Eulerian-Eulerian-Lagrangian multiphase solver was developed, and simulations have been performed on high-resolution meshes. Good agreement between experiments and simulations has been achieved, especially with respect to the prediction of the maximum temperature of the uranyl sulfate solution in the experimental vessel. These positive results suggest that the simulation methodology that has been developed will prove to be suitable to assist in the development of full-scale production hardware.

  8. High Speed Jet Noise Prediction Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Lele, Sanjiva K.

    2002-01-01

    Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on the jet noise data gathered by varying primarily the jet flow speed and jet temperature for a fixed nozzle geometry. Efforts have been made to correlate the noise data of co-annular (multi-stream) jets and the changes associated with forward flight within these empirical correlations. But ultimately these empirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods which are based on computational simulations, in an attempt to remove the empiricism of present day noise predictions.

  9. Probabilistic simulation of the human factor in structural reliability

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1993-01-01

    A formal approach is described in an attempt to computationally simulate the probable ranges of uncertainties of the human factor in structural probabilistic assessments. A multi-factor interaction equation (MFIE) model has been adopted for this purpose. Human factors such as marital status, professional status, home life, job satisfaction, work load and health are considered to demonstrate the concept. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Probabilistic sensitivity studies are subsequently performed to assess the suitability of the MFIE and the validity of the whole approach. Results obtained show that the uncertainties for no error range from five to thirty percent for the most optimistic case.

  10. Probabilistic simulation of the human factor in structural reliability

    NASA Astrophysics Data System (ADS)

    Chamis, Christos C.; Singhal, Surendra N.

    1994-09-01

    The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).

  11. Probabilistic Simulation of the Human Factor in Structural Reliability

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Singhal, Surendra N.

    1994-01-01

    The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
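    Multi-factor interaction equations of the kind used in these studies typically express a property as a reference value scaled by a product of factor ratios raised to empirical exponents. The sketch below shows one common general form; the specific factors, limits, and exponents used in the papers are not reproduced here, and all numbers in the test are illustrative assumptions:

```python
# Hedged sketch of a multi-factor interaction equation (MFIE) of the
# general product form
#     P / P0 = prod_i ((A_i - F_i) / (A_i - F0_i)) ** e_i
# where F_i is the current level of factor i (e.g. work load), A_i its
# limiting value, F0_i its reference level, and e_i an empirical
# exponent.  Illustrative form only, not the papers' exact equation.
def mfie_ratio(factors):
    """factors: list of (current, reference, limit, exponent) tuples."""
    ratio = 1.0
    for current, reference, limit, exponent in factors:
        ratio *= ((limit - current) / (limit - reference)) ** exponent
    return ratio
```

    With all factors at their reference levels the ratio is one, and pushing any factor toward its limit degrades the predicted property, which is what makes the form convenient for parametric and sensitivity studies.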

  12. LES, DNS, and RANS for the Analysis of High-Speed Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Colucci, P. J.; Jaberi, F. A.; Givi, P.

    1996-01-01

    A filtered density function (FDF) method suitable for chemically reactive flows is developed in the context of large eddy simulation. The advantage of the FDF methodology is its inherent ability to resolve subgrid-scale (SGS) scalar correlations that otherwise have to be modeled. Because of the lack of robust models to accurately predict these correlations in turbulent reactive flows, simulations involving turbulent combustion are often met with a degree of skepticism. The FDF methodology avoids the closure problem associated with these terms and treats the reaction in an exact manner. The scalar FDF approach is particularly attractive since it can be coupled with existing hydrodynamic computational fluid dynamics (CFD) codes.

  13. Free Enthalpy Differences between α-, π-, and 310-Helices of an Atomic Level Fine-Grained Alanine Deca-Peptide Solvated in Supramolecular Coarse-Grained Water.

    PubMed

    Lin, Zhixiong; Riniker, Sereina; van Gunsteren, Wilfred F

    2013-03-12

    Atomistic molecular dynamics simulations of peptides or proteins in aqueous solution are still limited to the multi-nanosecond time scale and multi-nanometer range by computational cost. Combining atomic solutes with a supramolecular solvent model in hybrid fine-grained/coarse-grained (FG/CG) simulations allows atomic detail in the region of interest while being computationally more efficient. We used enveloping distribution sampling (EDS) to calculate the free enthalpy differences between different helical conformations, i.e., α-, π-, and 310-helices, of an atomic level FG alanine deca-peptide solvated in a supramolecular CG water solvent. The free enthalpy differences obtained show that by replacing the FG solvent by the CG solvent, the π-helix is destabilized with respect to the α-helix by about 2.5 kJ mol(-1), and the 310-helix is stabilized with respect to the α-helix by about 9 kJ mol(-1). In addition, the dynamics of the peptide becomes faster. By introducing a FG water layer of 0.8 nm around the peptide, both thermodynamic and dynamic properties are recovered, while the hybrid FG/CG simulations are still four times more efficient than the atomistic simulations, even when the cutoff radius for the nonbonded interactions is increased from 1.4 to 2.0 nm. Hence, the hybrid FG/CG model, which yields an appropriate balance between reduced accuracy and enhanced computational speed, is very suitable for molecular dynamics simulation investigations of biomolecules.

  14. Validation of the thermal code of RadTherm-IR, IR-Workbench, and F-TOM

    NASA Astrophysics Data System (ADS)

    Schwenger, Frédéric; Grossmann, Peter; Malaplate, Alain

    2009-05-01

    System assessment by image simulation requires synthetic scenarios that can be viewed by the device to be simulated. In addition to physical modeling of the camera, a reliable modeling of scene elements is necessary. Software products for modeling of target data in the IR should be capable of (i) predicting surface temperatures of scene elements over a long period of time and (ii) computing sensor views of the scenario. For such applications, FGAN-FOM acquired the software products RadTherm-IR (ThermoAnalytics Inc., Calumet, USA) and IR-Workbench (OKTAL-SE, Toulouse, France). Inspection of the accuracy of simulation results by validation is necessary before using these products for applications. In the first step of validation, the performance of both "thermal solvers" was determined through comparison of the computed diurnal surface temperatures of a simple object with the corresponding values from measurements. CUBI is a rather simple geometric object with well known material parameters, which makes it suitable for testing and validating object models in IR. It was used in this study as a test body. Comparison of calculated and measured surface temperature values will be presented, together with the results from the FGAN-FOM thermal object code F-TOM. In the second validation step, radiances of the simulated sensor views computed by RadTherm-IR and IR-Workbench will be compared with radiances retrieved from the recorded sensor images taken by the sensor that was simulated. Strengths and weaknesses of the models RadTherm-IR, IR-Workbench and F-TOM will be discussed.

  15. A 3-D Finite-Volume Non-hydrostatic Icosahedral Model (NIM)

    NASA Astrophysics Data System (ADS)

    Lee, Jin

    2014-05-01

The Nonhydrostatic Icosahedral Model (NIM) incorporates the latest numerical innovations in a three-dimensional finite-volume discretization of the control volume on a quasi-uniform icosahedral grid suitable for ultra-high-resolution simulations. NIM's modeling goal is to improve numerical accuracy for weather and climate simulations and to exploit state-of-the-art computing architectures, such as massively parallel CPUs and GPUs, to deliver routine high-resolution forecasts in a timely manner. NIM's dynamical core innovations include: * a local coordinate system that remaps the spherical surface to a plane for numerical accuracy (Lee and MacDonald, 2009); * grid points in a table-driven horizontal loop that allows any horizontal point sequence (A. E. MacDonald et al., 2010); * flux-corrected transport formulated on finite-volume operators to maintain conservative, positive-definite transport (J.-L. Lee et al., 2010); * icosahedral grid optimization (Wang and Lee, 2011); * all differentials evaluated as three-dimensional finite-volume integrals around the control volume. The three-dimensional finite-volume solver in NIM is designed to improve the pressure-gradient calculation and orographic precipitation over complex terrain. The NIM dynamical core has been successfully verified with various non-hydrostatic benchmark test cases, such as internal gravity waves and mountain waves, in the Dynamical Core Model Intercomparison Project (DCMIP). Physical parameterizations suitable for NWP have been incorporated into the NIM dynamical core and successfully tested with multi-month aqua-planet simulations. Recently, NIM has begun real-data simulations using GFS initial conditions. Results from the idealized tests as well as the real-data simulations will be shown at the conference.

  16. A hybrid parallel architecture for electrostatic interactions in the simulation of dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Yang, Sheng-Chun; Lu, Zhong-Yuan; Qian, Hu-Jun; Wang, Yong-Lei; Han, Jie-Ping

    2017-11-01

In this work, we upgraded the electrostatic interaction method of CU-ENUF (Yang et al., 2016), which first applied CUNFFT (nonequispaced Fourier transforms based on CUDA) to the reciprocal-space electrostatic computation so that the electrostatic interactions are computed entirely on the GPU. The upgraded edition of CU-ENUF runs in a hybrid parallel fashion: the computation is first parallelized across multiple computer nodes, and then further parallelized on the GPU installed in each node. With this parallel strategy, the size of the simulation system is no longer restricted by the throughput of a single CPU or GPU. The most critical technical problem, parallelizing CUNFFT within this strategy, is solved through a careful analysis of its basic principles together with several algorithmic techniques. Furthermore, the upgraded method is capable of computing electrostatic interactions for both atomistic molecular dynamics (MD) and dissipative particle dynamics (DPD). Finally, benchmarks conducted for validation and performance indicate that the upgraded method not only delivers good precision with suitable parameter settings, but also provides an efficient way to compute electrostatic interactions for very large simulation systems. Program Files doi:http://dx.doi.org/10.17632/zncf24fhpv.1 Licensing provisions: GNU General Public License 3 (GPL) Programming language: C, C++, and CUDA C Supplementary material: The program is designed for effective electrostatic interactions of large-scale simulation systems and runs on computers equipped with NVIDIA GPUs. It has been tested on (a) a single computer node with an Intel(R) Core(TM) i7-3770 @ 3.40 GHz (CPU) and a GTX 980 Ti (GPU), and (b) MPI-parallel computer nodes with the same configuration.
Nature of problem: For molecular dynamics simulation, the electrostatic interaction is the most time-consuming computation because of its long-range character and slow convergence in simulation space; it typically takes up most of the total simulation time. Although the GPU-based parallel method CU-ENUF (Yang et al., 2016) achieved a qualitative leap over previous methods in the computation of electrostatic interactions, its capacity is limited by the throughput of a single GPU for super-scale simulation systems. An effective method is therefore needed to handle the calculation of electrostatic interactions efficiently for simulation systems of super-scale size. Solution method: We constructed a hybrid parallel architecture in which CPUs and GPUs are combined to accelerate the electrostatic computation effectively. First, the simulation system is divided into many subtasks via a domain-decomposition method. MPI (Message Passing Interface) is then used to implement the CPU-parallel computation, with each computer node handling a particular subtask, and each subtask is in turn executed efficiently in parallel on that node's GPU. In this hybrid parallel method, the most critical technical problem, parallelizing CUNFFT (nonequispaced fast Fourier transform based on CUDA) within the parallel strategy, is solved through a careful analysis of its basic principles together with several algorithmic techniques. Restrictions: HP-ENUF is mainly oriented to super-scale system simulations, for which its performance superiority is fully realized. For a small simulation system containing fewer than 10^6 particles, the multiple-node mode has no clear efficiency advantage over the single-node mode, and may even be slower because of network latency among the computer nodes. References: (1) S.-C. Yang, H.-J. Qian, Z.-Y. Lu, Appl. Comput. Harmon. Anal. 2016, http://dx.doi.org/10.1016/j.acha.2016.04.009. (2) S.-C. Yang, Y.-L. Wang, G.-S. Jiao, H.-J. Qian, Z.-Y. Lu, J. Comput. Chem. 37 (2016) 378. (3) S.-C. Yang, Y.-L. Zhu, H.-J. Qian, Z.-Y. Lu, Appl. Chem. Res. Chin. Univ., 2017, http://dx.doi.org/10.1007/s40242-016-6354-5. (4) Y.-L. Zhu, H. Liu, Z.-W. Li, H.-J. Qian, G. Milano, Z.-Y. Lu, J. Comput. Chem. 34 (2013) 2197.

  17. Applicability of APT aided-inertial system to crustal movement monitoring

    NASA Technical Reports Server (NTRS)

    Soltz, J. A.

    1978-01-01

    The APT system, its stage of development, hardware, and operations are described. The algorithms required to perform the real-time functions of navigation and profiling are presented. The results of computer simulations demonstrate the feasibility of APT for its primary mission: topographic mapping with an accuracy of 15 cm in the vertical. Also discussed is the suitability of modifying APT for the purpose of making vertical crustal movement measurements accurate to 2 cm in the vertical, and at least marginal feasibility is indicated.

  18. Building a Simulation Toolkit for Wireless Mesh Clusters and Evaluating the Suitability of Different Families of Ad Hoc Protocols for the Tactical Network Topology

    DTIC Science & Technology

    2005-03-01

International Conference on Computers, Communications and Networks, 153-161, Lafayette, LA. Deitel, H.M. and P.J. Deitel. 2003. C++ How to Program ...of this study is to provide an additional performance evaluation technique for the TNT program of the Naval Postgraduate School. The current approach...case are the PAMAS and DBTMA protocols. Toh (2002) illustrates how these approaches succeed in solving the problem. In order to address all the

  19. Dynamic analysis of a system of hinge-connected rigid bodies with nonrigid appendages. [equations of motion

    NASA Technical Reports Server (NTRS)

    Likins, P. W.

    1974-01-01

    Equations of motion are derived for use in simulating a spacecraft or other complex electromechanical system amenable to idealization as a set of hinge-connected rigid bodies of tree topology, with rigid axisymmetric rotors and nonrigid appendages attached to each rigid body in the set. In conjunction with a previously published report on finite-element appendage vibration equations, this report provides a complete minimum-dimension formulation suitable for generic programming for digital computer numerical integration.

  20. Fourier analysis and signal processing by use of the Moebius inversion formula

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Yu, Xiaoli; Shih, Ming-Tang; Tufts, Donald W.; Truong, T. K.

    1990-01-01

    A novel Fourier technique for digital signal processing is developed. This approach to Fourier analysis is based on the number-theoretic method of the Moebius inversion of series. The Fourier transform method developed is shown also to yield the convolution of two signals. A computer simulation shows that this method for finding Fourier coefficients is quite suitable for digital signal processing. It competes with the classical FFT (fast Fourier transform) approach in terms of accuracy, complexity, and speed.
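The number-theoretic machinery behind this approach is classical Möbius inversion: if g(n) is the sum of f over the divisors of n, then f can be recovered from g by weighting with the Möbius function. A minimal self-contained sketch of that inversion (not the authors' Fourier algorithm itself):

```python
def mobius(n):
    # Möbius function mu(n) via trial factorization
    if n == 1:
        return 1
    result, p, m = 1, 2, n
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # squared prime factor -> mu = 0
            result = -result      # one more distinct prime factor
        p += 1
    if m > 1:
        result = -result          # leftover prime factor
    return result

def divisor_sum(f, n):
    # g(n) = sum over divisors d of n of f(d)
    return sum(f(d) for d in range(1, n + 1) if n % d == 0)

def mobius_invert(g, n):
    # f(n) = sum over divisors d of n of mu(d) * g(n // d)
    return sum(mobius(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

# check: recover f from its divisor sums
f = lambda k: k * k
g = lambda k: divisor_sum(f, k)
recovered = [mobius_invert(g, n) for n in range(1, 13)]
```

The Fourier method in the abstract applies this same inversion to series of signal averages, which is what lets it compete with the FFT without complex multiplications.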

  1. Thermal insulation materials for inside applications: Hygric and thermal properties

    NASA Astrophysics Data System (ADS)

    Jerman, Miloš; Černý, Robert

    2017-11-01

Two thermal insulation materials suitable for application on the interior side of historical building envelopes, namely calcium silicate and a polyurethane-based foam, are studied. The moisture diffusivity and thermal conductivity of both materials, as fundamental moisture and heat transport parameters, are measured as functions of moisture content. The measured data will be used as input parameters in computer simulation studies, which will provide the moisture and temperature fields necessary for an appropriate design of interior thermal insulation systems.

  2. Computer modeling of test particle acceleration at oblique shocks

    NASA Technical Reports Server (NTRS)

    Decker, Robert B.

    1988-01-01

This evaluation of the basic techniques and illustrative results of numerical codes for modeling charged-particle acceleration at oblique, fast-mode collisionless shocks emphasizes the treatment of ions as test particles, calculating particle dynamics through numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.
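A common way to integrate test-particle orbits in prescribed electromagnetic fields, shown here only as an illustrative stand-in for the codes described above, is the Boris scheme, which splits the electric kick from an exactly norm-preserving magnetic rotation:

```python
import numpy as np

def boris_push(x, v, E, B, qm, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick.
    qm is the charge-to-mass ratio q/m."""
    v_minus = v + 0.5 * qm * dt * E
    t = 0.5 * qm * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)   # rotation preserves |v|
    v_new = v_plus + 0.5 * qm * dt * E
    return x + v_new * dt, v_new

# test particle gyrating in a uniform magnetic field (arbitrary units)
qm = 1.0
B = np.array([0.0, 0.0, 1.0])    # uniform field along z
E = np.zeros(3)
x = np.array([0.0, 0.0, 0.0])
v = np.array([1.0, 0.0, 0.0])    # perpendicular speed 1 -> Larmor radius 1
dt = 0.01
speeds = []
for _ in range(1000):
    x, v = boris_push(x, v, E, B, qm, dt)
    speeds.append(np.linalg.norm(v))
```

With E = 0 the particle executes a closed gyro-orbit and its speed is conserved to machine precision, which is why this family of pushers is favored for long shock-crossing integrations.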

  3. An efficient two-stage approach for image-based FSI analysis of atherosclerotic arteries

    PubMed Central

    Rayz, Vitaliy L.; Mofrad, Mohammad R. K.; Saloner, David

    2010-01-01

Patient-specific biomechanical modeling of atherosclerotic arteries has the potential to aid clinicians in characterizing lesions and determining optimal treatment plans. To attain high levels of accuracy, recent models use medical imaging data to determine plaque component boundaries in three dimensions, and fluid–structure interaction is used to capture mechanical loading of the diseased vessel. As the plaque components and vessel wall are often highly complex in shape, constructing a suitable structured computational mesh is very challenging and can require a great deal of time. Models based on unstructured computational meshes require relatively less time to construct and are capable of accurately representing plaque components in three dimensions. These models unfortunately require additional computational resources and computing time for accurate and meaningful results. A two-stage modeling strategy based on unstructured computational meshes is proposed to achieve a reasonable balance between meshing difficulty and computational resource and time demand. In this method, a coarse-grained simulation of the full arterial domain is used to guide and constrain a fine-scale simulation of a smaller region of interest within the full domain. Results for a patient-specific carotid bifurcation model demonstrate that the two-stage approach can afford large savings in both time for mesh generation and time and resources needed for computation. The effects of solid and fluid domain truncation were explored, and were shown to minimally affect accuracy of the stress fields predicted with the two-stage approach. PMID:19756798

  4. CFD simulation and experimental validation of a GM type double inlet pulse tube refrigerator

    NASA Astrophysics Data System (ADS)

    Banjare, Y. P.; Sahoo, R. K.; Sarangi, S. K.

    2010-04-01

The pulse tube refrigerator has the advantages of long life and low vibration over conventional cryocoolers, such as GM and Stirling coolers, because of the absence of moving parts at low temperature. This paper presents a three-dimensional computational fluid dynamics (CFD) simulation of a GM-type double inlet pulse tube refrigerator (DIPTR), vertically aligned and operating under a variety of thermal boundary conditions. A commercial CFD software package, Fluent 6.1, is used to model the oscillating flow inside the pulse tube refrigerator. The simulation represents a fully coupled system operating in steady-periodic mode. The externally imposed boundary conditions are a sinusoidal pressure inlet, set by a user-defined function at one end of the tube, and constant-temperature or heat-flux boundaries at the external walls of the cold-end heat exchangers. Evaluating the optimum parameters of a DIPTR experimentally is difficult; on the other hand, developing a computer code for CFD analysis is equally complex. The objectives of the present investigation are to ascertain the suitability of the CFD-based commercial package Fluent for the study of energy and fluid flow in a DIPTR and to validate the CFD simulation results against available experimental data. General results, such as the cool-down behaviour of the system, the phase relation between mass flow rate and pressure at the cold end, the temperature profile along the wall of the cooler, and the refrigeration load, are presented for different boundary conditions of the system. The results confirm that CFD-based Fluent simulations are capable of elucidating the complex periodic processes in a DIPTR. The results also show excellent agreement between the CFD simulation results and the experimental results.

  5. LES on unstructured deforming meshes: Towards reciprocating IC engines

    NASA Technical Reports Server (NTRS)

    Haworth, D. C.; Jansen, K.

    1996-01-01

    A variable explicit/implicit characteristics-based advection scheme that is second-order accurate in space and time has been developed recently for unstructured deforming meshes (O'Rourke & Sahota 1996a). To explore the suitability of this methodology for Large-Eddy Simulation (LES), three subgrid-scale turbulence models have been implemented in the CHAD CFD code (O'Rourke & Sahota 1996b): a constant-coefficient Smagorinsky model, a dynamic Smagorinsky model for flows having one or more directions of statistical homogeneity, and a Lagrangian dynamic Smagorinsky model for flows having no spatial or temporal homogeneity (Meneveau et al. 1996). Computations have been made for three canonical flows, progressing towards the intended application of in-cylinder flow in a reciprocating engine. Grid sizes were selected to be comparable to the coarsest meshes used in earlier spectral LES studies. Quantitative results are reported for decaying homogeneous isotropic turbulence, and for a planar channel flow. Computations are compared to experimental measurements, to Direct-Numerical Simulation (DNS) data, and to Rapid-Distortion Theory (RDT) where appropriate. Generally satisfactory evolution of first and second moments is found on these coarse meshes; deviations are attributed to insufficient mesh resolution. Issues include mesh resolution and computational requirements for a specified level of accuracy, analytic characterization of the filtering implied by the numerical method, wall treatment, and inflow boundary conditions. To resolve these issues, finer-mesh simulations and computations of a simplified axisymmetric reciprocating piston-cylinder assembly are in progress.

  6. An additional study and implementation of tone calibrated technique of modulation

    NASA Technical Reports Server (NTRS)

    Rafferty, W.; Bechtel, L. K.; Lay, N. E.

    1985-01-01

The Tone Calibrated Technique (TCT) was shown to be theoretically free from an error floor and to be limited, in practice, only by implementation constraints. The concept of the TCT transmission scheme is introduced along with a baseband implementation of a suitable demodulator. Two techniques for the generation of the TCT signal are considered: a Manchester source encoding scheme (MTCT) and a subcarrier-based technique (STCT). The results of the TCT link computer simulation are summarized. The hardware implementation of the MTCT system is addressed, and the digital signal processing design considerations involved in satisfying the modulator/demodulator requirements are outlined. The program findings are discussed, and future directions are suggested based on conclusions regarding the suitability of the TCT system for the transmission channel presently under consideration.

  7. Simulation model of a gear synchronisation unit for application in a real-time HiL environment

    NASA Astrophysics Data System (ADS)

    Kirchner, Markus; Eberhard, Peter

    2017-05-01

Gear shifting simulations using the multibody system approach and the finite-element method are standard in the development of transmissions. However, the corresponding models are typically large due to the complex geometries and numerous contacts, which causes long calculation times. The present work sets itself apart from these detailed shifting simulations by proposing a much simpler but powerful synchronisation model which can be computed in real time while still being more realistic than a purely rigid multibody model. The model is therefore even used as part of a Hardware-in-the-Loop (HiL) test rig. The proposed real-time capable synchronisation model combines the rigid multibody system approach with a multiscale simulation approach. The multibody system approach is suitable for the description of the large motions; the multiscale approach, which also draws on the finite-element method, is suitable for the analysis of the contact processes. An efficient contact search for the claws of a car transmission synchronisation unit is described in detail, which shortens the required calculation time of the model considerably. To further shorten the calculation time, the use of a complex pre-synchronisation model with a nonlinear contour is presented. The model has to provide realistic results at the time-step size of the HiL test rig; to meet this specification, a particularly adapted multirate method for the synchronisation model is shown. Measured test-rig results are used to check the plausibility of the real-time capable synchronisation model. The simulation model is then also used in the HiL test rig for a transmission control unit.

  8. A nonrecursive order N preconditioned conjugate gradient: Range space formulation of MDOF dynamics

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.

    1990-01-01

    While excellent progress has been made in deriving algorithms that are efficient for certain combinations of system topologies and concurrent multiprocessing hardware, several issues must be resolved to incorporate transient simulation in the control design process for large space structures. Specifically, strategies must be developed that are applicable to systems with numerous degrees of freedom. In addition, the algorithms must have a growth potential in that they must also be amenable to implementation on forthcoming parallel system architectures. For mechanical system simulation, this fact implies that algorithms are required that induce parallelism on a fine scale, suitable for the emerging class of highly parallel processors; and transient simulation methods must be automatically load balancing for a wider collection of system topologies and hardware configurations. These problems are addressed by employing a combination range space/preconditioned conjugate gradient formulation of multi-degree-of-freedom dynamics. The method described has several advantages. In a sequential computing environment, the method has the features that: by employing regular ordering of the system connectivity graph, an extremely efficient preconditioner can be derived from the 'range space metric', as opposed to the system coefficient matrix; because of the effectiveness of the preconditioner, preliminary studies indicate that the method can achieve performance rates that depend linearly upon the number of substructures, hence the title 'Order N'; and the method is non-assembling. Furthermore, the approach is promising as a potential parallel processing algorithm in that the method exhibits a fine parallel granularity suitable for a wide collection of combinations of physical system topologies/computer architectures; and the method is easily load balanced among processors, and does not rely upon system topology to induce parallelism.
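The iteration underlying the method, independent of the specific range-space metric preconditioner, is the standard preconditioned conjugate gradient loop. A minimal sketch with a simple Jacobi (diagonal) preconditioner standing in for the one described in the abstract:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive
    definite matrix A; M_inv applies the approximate inverse of the
    preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)           # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv(r)                    # preconditioned residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p       # conjugate direction update
        rz = rz_new
    return x, max_iter

# SPD test system (diagonal plus a rank-one coupling)
n = 50
A = np.diag(np.arange(1.0, n + 1)) + 0.1 * np.ones((n, n))
b = np.ones(n)
d = np.diag(A)
x, iters = pcg(A, b, lambda r: r / d)   # Jacobi preconditioner
```

The quality of `M_inv` controls the iteration count, which is exactly why the abstract's range-space metric preconditioner, rather than the coefficient matrix itself, is the key to the reported Order-N behavior.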

  9. Computer-Simulated Arthroscopic Knee Surgery: Effects of Distraction on Resident Performance.

    PubMed

    Cowan, James B; Seeley, Mark A; Irwin, Todd A; Caird, Michelle S

    2016-01-01

    Orthopedic surgeons cite "full focus" and "distraction control" as important factors for achieving excellent outcomes. Surgical simulation is a safe and cost-effective way for residents to practice surgical skills, and it is a suitable tool to study the effects of distraction on resident surgical performance. This study investigated the effects of distraction on arthroscopic knee simulator performance among residents at various levels of experience. The authors hypothesized that environmental distractions would negatively affect performance. Twenty-five orthopedic surgery residents performed a diagnostic knee arthroscopy computer simulation according to a checklist of structures to identify and tasks to complete. Participants were evaluated on arthroscopy time, number of chondral injuries, instances of looking down at their hands, and completion of checklist items. Residents repeated this task at least 2 weeks later while simultaneously answering distracting questions. During distracted simulation, the residents had significantly fewer completed checklist items (P<.02) compared with the initial simulation. Senior residents completed the initial simulation in less time (P<.001), with fewer chondral injuries (P<.005) and fewer instances of looking down at their hands (P<.012), compared with junior residents. Senior residents also completed 97% of the diagnostic checklist, whereas junior residents completed 89% (P<.019). During distracted simulation, senior residents continued to complete tasks more quickly (P<.006) and with fewer instances of looking down at their hands (P<.042). Residents at all levels appear to be susceptible to the detrimental effects of distraction when performing arthroscopic simulation. Addressing even straightforward questions intraoperatively may affect surgeon performance. Copyright 2016, SLACK Incorporated.

  10. Simulation of laser beam reflection at the sea surface: modeling and validation

    NASA Astrophysics Data System (ADS)

    Schwenger, Frédéric; Repasi, Endre

    2013-06-01

A 3D simulation of the reflection of a Gaussian-shaped laser beam on the dynamic sea surface is presented. The simulation is suitable for the pre-calculation of images for cameras operating in different spectral wavebands (visible, short-wave infrared), for a bistatic configuration of laser source and receiver, and for different atmospheric conditions. In the visible waveband, the calculated detected total power of reflected laser light from a 660 nm laser source is compared with data collected in a field trial. Our computer simulation comprises the 3D simulation of a maritime scene (open sea/clear sky) and the simulation of the laser beam reflected at the sea surface. The basic sea surface geometry is modeled by a composition of smooth wind-driven gravity waves. To predict the view of a camera, the sea surface radiance must be calculated for the specific waveband. Additionally, the radiances of laser light specularly reflected at the wind-roughened sea surface are modeled using an analytical statistical sea surface BRDF (bidirectional reflectance distribution function). Validation of simulation results is a prerequisite before applying the computer simulation to maritime laser applications. For validation purposes, data (images and meteorological data) were selected from field measurements, using a 660 nm cw-laser diode to produce laser beam reflections at the water surface and recording images with a TV camera. The validation is done by numerical comparison of the measured total laser power, extracted from the recorded images, with the corresponding simulation results. The results of the comparison are presented for different incident (zenith/azimuth) angles of the laser beam.

  11. A Second Order Semi-Discrete Cosserat Rod Model Suitable for Dynamic Simulations in Real Time

    NASA Astrophysics Data System (ADS)

    Lang, Holger; Linn, Joachim

    2009-09-01

We present an alternative approach for a semi-discrete viscoelastic Cosserat rod model that allows both fast dynamic computations within milliseconds and accurate results compared to detailed finite-element solutions. The model is able to represent extension, shearing, bending, and torsion. For inner dissipation, a consistent damping potential from Antman is chosen. The continuous equations of motion, which constitute a system of nonlinear hyperbolic partial differential-algebraic equations, are derived from a two-dimensional variational principle. The semi-discrete balance equations are obtained by spatial finite-difference schemes on a staggered grid and standard index reduction techniques. The right-hand side of the model and its Jacobian can be kept free of higher algebraic (e.g., root) or transcendental (e.g., trigonometric or exponential) functions and are therefore extremely cheap to evaluate numerically. For the time integration of the system, we use well-established stiff solvers. As our model yields computational times within milliseconds, it is suitable for interactive manipulation. It reflects structural mechanics solutions sufficiently correctly, as comparison with detailed finite-element results shows.

  12. Wave propagation in equivalent continuums representing truss lattice materials

    DOE PAGES

    Messner, Mark C.; Barham, Matthew I.; Kumar, Mukul; ...

    2015-07-29

Stiffness scales linearly with density in stretch-dominated lattice meta-materials, offering the possibility of very light yet very stiff structures. Current additive manufacturing techniques can assemble structures from lattice materials, but the design of such structures will require accurate, efficient simulation methods. Equivalent continuum models have several advantages over discrete truss models of stretch-dominated lattices, including computational efficiency and ease of model construction. However, the development of an equivalent model suitable for representing the dynamic response of a periodic truss in the small-deformation regime is complicated by microinertial effects. This study derives a dynamic equivalent continuum model for periodic truss structures suitable for representing long-wavelength wave propagation and verifies it against the full Bloch wave theory and detailed finite element simulations. The model must incorporate microinertial effects to accurately reproduce long-wavelength characteristics of the response, such as anisotropic elastic sound speeds. Finally, the formulation presented here also improves upon previous work by preserving equilibrium at truss joints for simple lattices and by improving numerical stability through eliminating vertices in the effective yield surface.

  13. Decoupled CFD-based optimization of efficiency and cavitation performance of a double-suction pump

    NASA Astrophysics Data System (ADS)

    Škerlavaj, A.; Morgut, M.; Jošt, D.; Nobile, E.

    2017-04-01

In this study the impeller geometry of a double-suction pump ensuring the best performance in terms of hydraulic efficiency and resistance to cavitation is determined using an optimization strategy driven by the modeFRONTIER optimization platform. The different impeller shapes (designs) are modified according to the optimization parameters and tested with computational fluid dynamics (CFD) software, namely ANSYS CFX. The simulations are performed using a decoupled approach, in which only the impeller domain region is numerically investigated, for computational convenience. The flow losses in the volute are estimated on the basis of the velocity distribution at the impeller outlet. The best designs are then validated with the computationally more expensive full-geometry CFD model. The overall results show that the proposed approach is suitable for quick impeller shape optimization.

  14. A 3D virtual reality simulator for training of minimally invasive surgery.

    PubMed

    Mi, Shao-Hua; Hou, Zeng-Guang; Yang, Fan; Xie, Xiao-Liang; Bian, Gui-Bin

    2014-01-01

Over the last decade, remarkable progress has been made in the field of cardiovascular disease treatment. However, these complex medical procedures require a combination of rich experience and technical skill. In this paper, a 3D virtual reality simulator for core skills training in minimally invasive surgery is presented. The system can generate realistic 3D vascular models segmented from patient datasets, including a beating heart, and provides real-time computation of force and a force feedback module for surgical simulation. Instruments, such as a catheter or guide wire, are represented by a multi-body mass-spring model. In addition, a realistic user interface with multiple windows and real-time 3D views has been developed. Moreover, the simulator is provided with a human-machine interaction module that gives doctors the sense of touch during surgical training and enables them to control the motion of a virtual catheter/guide wire inside a complex vascular model. Experimental results show that the simulator is suitable for minimally invasive surgery training.
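A multi-body mass-spring instrument model of the kind mentioned can be sketched in one dimension; the stiffness, masses, and damping below are illustrative assumptions, not the simulator's actual parameters:

```python
import numpy as np

def step_chain(x, v, k, m, dt, fixed=0):
    """One semi-implicit Euler step for a 1-D chain of point masses
    joined by springs of stiffness k and rest length 1 (a crude
    stand-in for a multi-body catheter/guide-wire model)."""
    n = len(x)
    f = np.zeros(n)
    for i in range(n - 1):
        ext = (x[i + 1] - x[i]) - 1.0   # extension beyond rest length
        f[i] += k * ext                  # neighbours pull on each other
        f[i + 1] -= k * ext
    v = v + dt * f / m
    v[fixed] = 0.0                       # clamp the proximal end
    return x + dt * v, v

# stretch a 5-mass chain by 20% and let it relax with light damping
x = np.linspace(0.0, 4.0, 5) * 1.2
v = np.zeros(5)
for _ in range(20000):
    x, v = step_chain(x, v, k=50.0, m=1.0, dt=0.001)
    v *= 0.999                           # simple velocity damping
spacings = np.diff(x)                    # relaxes toward rest length 1
```

Real surgical simulators extend this idea to 3-D chains with bending springs and collision response against the vessel wall, but the per-step structure, accumulate spring forces then integrate, is the same.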

  15. Manual for a workstation-based generic flight simulation program (LaRCsim), version 1.4

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce

    1995-01-01

LaRCsim is a set of ANSI C routines that implement a full set of equations of motion for a rigid-body aircraft in atmospheric and low-earth orbital flight, suitable for pilot-in-the-loop simulations on a workstation-class computer. All six rigid-body degrees of freedom are modeled. The modules provided include calculations of the typical aircraft rigid-body simulation variables, earth geodesy, gravity and atmospheric models, and support for several data recording options. Features and limitations of the current version include English units of measure, a 1962 atmosphere model in cubic-spline lookup form ranging from sea level to 75,000 feet, and a rotating oblate spheroidal earth model with aircraft C.G. coordinates in both geocentric and geodetic axes. Angular integrations are done using quaternion state variables. Vehicle X-Z symmetry is assumed.
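The quaternion attitude integration mentioned above follows the standard kinematic equation q_dot = 0.5 * Omega(omega) * q. The sketch below (an illustration, not LaRCsim code) integrates a constant body yaw rate with RK4 and renormalization to keep the quaternion on the unit sphere:

```python
import numpy as np

def quat_derivative(q, wx, wy, wz):
    """Quaternion kinematics q_dot = 0.5 * Omega(omega) * q
    for q = [e0, ex, ey, ez] (scalar first), body rates in rad/s."""
    e0, ex, ey, ez = q
    return 0.5 * np.array([
        -ex * wx - ey * wy - ez * wz,
         e0 * wx + ey * wz - ez * wy,
         e0 * wy - ex * wz + ez * wx,
         e0 * wz + ex * wy - ey * wx,
    ])

def integrate_attitude(q, omega, dt, steps):
    # classic RK4 with renormalization to hold |q| = 1
    for _ in range(steps):
        k1 = quat_derivative(q, *omega)
        k2 = quat_derivative(q + 0.5 * dt * k1, *omega)
        k3 = quat_derivative(q + 0.5 * dt * k2, *omega)
        k4 = quat_derivative(q + dt * k3, *omega)
        q = q + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        q /= np.linalg.norm(q)
    return q

q0 = np.array([1.0, 0.0, 0.0, 0.0])       # level attitude
# constant yaw rate of pi/2 rad/s for 1 s -> 90-degree rotation about z
q1 = integrate_attitude(q0, (0.0, 0.0, np.pi / 2), 0.001, 1000)
```

The quaternion form avoids the gimbal-lock singularity of Euler-angle integration, which is why flight simulators such as LaRCsim carry attitude as a four-component state.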

  16. Thermo-elastic wave model of the photothermal and photoacoustic signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meja, P.; Steiger, B.; Delsanto, P.P.

    1996-12-31

By means of the thermo-elastic wave equation, the dynamical propagation of mechanical stress and temperature can be described and applied to model the photothermal and photoacoustic signal. Analytical solutions exist only in particular cases. Massively parallel computers make it possible to simulate the photothermal and photoacoustic signal efficiently. In this paper, the method of the local interaction simulation approach (LISA) is presented and selected examples of its application are given. The advantages of this method, which is particularly suitable for parallel processing, are reduced computation time and a simple description of the photoacoustic signal in optical materials. The present contribution introduces the authors' model, the formalism, and some results for the 1-D case for homogeneous non-attenuating materials. The photoacoustic wave can be understood as a wave with locally limited displacement. This displacement corresponds to a temperature variation. Both variables are usually measured in photoacoustic and photothermal measurements. Therefore, the dependence of temperature and displacement on optical, elastic, and thermal constants is analysed.
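Local interaction schemes of this kind update each grid node only from its immediate neighbours. A 1-D scalar stand-in (a plain elastic displacement wave, not the full coupled thermo-elastic system of the paper) illustrates the structure of the update:

```python
import numpy as np

def simulate_wave(nx=600, nt=400, c=1.0, dx=1.0, cfl=0.5):
    """Explicit second-order finite differences for u_tt = c^2 u_xx:
    each node is updated only from its two neighbours (fixed ends)."""
    dt = cfl * dx / c
    x = np.arange(nx) * dx
    # Gaussian displacement pulse at rest (zero initial velocity)
    u_prev = np.exp(-((x - 100.0) / 5.0) ** 2)
    u = u_prev.copy()
    r2 = (c * dt / dx) ** 2
    for _ in range(nt):
        u_next = np.zeros(nx)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return x, u

x, u = simulate_wave()
peak = x[np.argmax(u)]   # right-going half-pulse, expected near x = 300
```

Because every node's update touches only local neighbours, the grid can be partitioned across processors with only boundary exchange, which is exactly the property that makes LISA-style methods attractive on massively parallel machines.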

  17. Color Helmet Mounted Display System with Real Time Computer Generated and Video Imagery for In-Flight Simulation

    NASA Technical Reports Server (NTRS)

    Sawyer, Kevin; Jacobsen, Robert; Aiken, Edwin W. (Technical Monitor)

    1995-01-01

    NASA Ames Research Center and the US Army are developing the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL), using a Sikorsky UH-60 helicopter, for the purpose of flight systems research. A primary use of the RASCAL is in-flight simulation, for which the visual scene will use computer-generated imagery and synthetic vision. This research is made possible in part by a full-color, wide-field-of-view Helmet Mounted Display (HMD) system that provides high-performance color imagery suitable for daytime operations in a flight-rated package. This paper describes the design and performance characteristics of the HMD system. Emphasis is placed on the design specifications, testing, and integration into the aircraft of Kaiser Electronics' RASCAL HMD system, which was designed and built under contract for NASA. The optical performance and design of the helmet-mounted display unit are discussed, as well as the unique capabilities provided by the system's Programmable Display Generator (PDG).

  18. Efficient preconditioning of the electronic structure problem in large scale ab initio molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiffmann, Florian; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch

    2015-06-28

    We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling's iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.
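Hotelling's iterative inversion referred to above is the classic Newton-Schulz fixed point X ← X(2I − AX), which converges quadratically once the initial guess satisfies ||I − AX₀|| < 1 — exactly why a previous MD step's solution is such a good warm start. A minimal dense sketch (the paper's sparse, filtered variant is not reproduced here):

```python
import numpy as np

def hotelling_inverse(A, X0, iters=30):
    """Hotelling / Newton-Schulz iteration for A^{-1}: X <- X(2I - AX).
    Converges quadratically when ||I - A X0|| < 1."""
    I = np.eye(A.shape[0])
    X = X0
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X
```

A standard cold-start guess is X₀ = Aᵀ / (||A||₁ ||A||∞), which guarantees the convergence condition; in an MD setting one would instead reuse the previous step's inverse.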

  19. Analysis of physics-based preconditioning for single-phase subchannel equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansel, J. E.; Ragusa, J. C.; Allu, S.

    2013-07-01

    The (single-phase) subchannel approximations are used throughout nuclear engineering to provide efficient flow simulation, because the computational burden is much smaller than for computational fluid dynamics (CFD) simulations, and empirical relations have been developed and validated to provide accurate solutions in appropriate flow regimes. Here, the subchannel equations have been recast in a residual form suitable for a multi-physics framework. The eigenvalue spectrum of the Jacobian matrix, along with several potential physics-based preconditioning approaches, is evaluated, and the potential for improved convergence from preconditioning is assessed. The physics-based preconditioner options include several forms of reduced equations that decouple the subchannels by neglecting crossflow, conduction, and/or both turbulent momentum and energy exchange between subchannels. Eigenvalue analysis shows that preconditioning moves clusters of eigenvalues away from zero and toward one. A test problem is run with and without preconditioning. Without preconditioning, the solution failed to converge using GMRES, but application of any of the preconditioners allowed the solution to converge. (authors)

  20. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate-summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
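The rate-summation concept underlying the derivation can be illustrated with a minimal sketch: development proceeds at a temperature-dependent rate, and a life stage completes when the accumulated rate reaches one. The linear degree-day rate below (a hypothetical 10 °C threshold and an 80 degree-day requirement) is an illustrative choice, not the authors' mountain pine beetle model.

```python
def development_time(temps, rate_fn):
    """Rate summation: accumulate daily development r(T_t) until the
    total reaches 1 (stage complete); returns the day index, or None
    if development never completes over the given temperature series."""
    acc = 0.0
    for day, T in enumerate(temps):
        acc += rate_fn(T)
        if acc >= 1.0:
            return day
    return None

# Hypothetical linear degree-day rate: 10 C threshold, 80 degree-days.
rate = lambda T: max(0.0, T - 10.0) / 80.0
```

At a constant 20 °C this rate is 0.125 per day, so the stage completes on the 8th day (index 7); below the threshold it never completes.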

  1. Multigrid treatment of implicit continuum diffusion

    NASA Astrophysics Data System (ADS)

    Francisquez, Manaure; Zhu, Ben; Rogers, Barrett

    2017-10-01

    Implicit treatment of diffusive terms of various differential orders common in continuum mechanics modeling, such as computational fluid dynamics, is investigated with spectral and multigrid algorithms in non-periodic 2D domains. In doubly periodic, time-dependent problems these terms can be efficiently and implicitly handled by spectral methods, but in non-periodic systems solved with distributed-memory parallel computing and 2D domain decomposition, this efficiency is lost for large numbers of processors. We built and present here a multigrid algorithm for these types of problems which outperforms a spectral solution that employs the highly optimized FFTW library. This multigrid algorithm is not only suitable for high performance computing but may also be able to efficiently treat implicit diffusion of arbitrary order by introducing auxiliary equations of lower order. We test these solvers for fourth and sixth order diffusion with idealized harmonic test functions as well as a turbulent 2D magnetohydrodynamic simulation. It is also shown that an anisotropic operator without cross-terms can improve model accuracy and speed, and we examine the impact that the various diffusion operators have on the energy, the enstrophy, and the qualitative aspect of a simulation. This work was supported by DOE-SC-0010508. This research used resources of the National Energy Research Scientific Computing Center (NERSC).
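The idea of treating high-order implicit diffusion via auxiliary equations of lower order can be sketched in 1D: a backward-Euler step of u_t = -nu*u_xxxx is rewritten with the auxiliary variable v = u_xx, so that only second-order operators appear in the (block) system. This hypothetical dense demo uses a direct solve where the paper would apply multigrid, and is not the authors' 2D implementation.

```python
import numpy as np

def second_difference(n, h):
    """Dense 1D second-difference operator with Dirichlet boundaries."""
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

def implicit_hyperdiffusion_step(u, nu, dt, h):
    """One backward-Euler step of u_t = -nu * u_xxxx via the auxiliary
    variable v = u_xx, so only second-order operators appear:
        u^{n+1} + nu*dt * D2 v^{n+1} = u^n,   v^{n+1} - D2 u^{n+1} = 0."""
    n = u.size
    D2 = second_difference(n, h)
    I = np.eye(n)
    # Block system [[I, nu*dt*D2], [-D2, I]] @ [u; v] = [u^n; 0].
    A = np.block([[I, nu * dt * D2], [-D2, I]])
    rhs = np.concatenate([u, np.zeros(n)])
    return np.linalg.solve(A, rhs)[:n]
```

Eliminating v recovers the monolithic fourth-order system (I + nu·dt·D2²) u = uⁿ, so the two formulations agree; the payoff of the split form is that each sub-block is a standard Laplacian, amenable to ordinary multigrid.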

  2. On purpose simulation model for molten salt CSP parabolic trough

    NASA Astrophysics Data System (ADS)

    Caranese, Carlo; Matino, Francesca; Maccari, Augusto

    2017-06-01

    The utilization of computer codes and simulation software is one of the fundamental aspects of the development of any kind of technology and, in particular in the CSP sector, for researchers, energy institutions, EPCs and other stakeholders. To that end, several models for the simulation of CSP plants have been developed with different main objectives (dynamic simulation, productivity analysis, techno-economic optimization, etc.), each of which has shown its own validity and suitability. Some of those models have been designed to study several plant configurations, taking into account different CSP plant technologies (parabolic trough, linear Fresnel, solar tower or dish) and different settings for the heat transfer fluid, the thermal storage systems and the overall plant operating logic. Due to a lack of direct experience of Molten Salt Parabolic Trough (MSPT) commercial plant operation, most of the simulation tools do not foresee a suitable management of the thermal energy storage logic and of the solar field freeze protection system, but follow standard schemes. ASSALT, Ase Software for SALT csp plants, has been developed to improve MSPT plant simulations by exploiting the most correct operational strategies, in order to provide more accurate technical and economic results. In particular, ASSALT applies MSPT-specific control logics for the electric energy production and delivery strategy as well as the operation modes of the solar field in off-normal sunshine conditions. With this approach, the estimated plant efficiency is increased and the electricity consumption required for plant operation and management is drastically reduced. Here we present a first comparative study of a real case, a 55 MWe Molten Salt Parabolic Trough CSP plant placed in the Tibetan highlands, using ASSALT and SAM (System Advisor Model), which is a commercially available simulation tool.

  3. Computational Study of Scenarios Regarding Explosion Risk Mitigation

    NASA Astrophysics Data System (ADS)

    Vlasin, Nicolae-Ioan; Mihai Pasculescu, Vlad; Florea, Gheorghe-Daniel; Cornel Suvar, Marius

    2016-10-01

    Exploration to discover new deposits of natural gas, upgraded techniques to exploit these resources, and new ways to convert the heat capacity of these gases into industrially usable energy are research areas of great interest around the globe. But all activities involving the handling of natural gas (exploitation, transport, combustion) are subject to the same type of risk: the risk of explosion. Experiments carried out as physical scenarios to determine ways to reduce this risk can be extremely costly, requiring suitable premises, equipment and apparatus, manpower and time and, not least, presenting a risk of personnel injury. Taking into account the above, the present paper deals with the possibility of studying scenarios of gas-explosion-type events in the virtual domain, exemplified by a computer simulation of a stoichiometric air-methane explosion (methane is the main component of natural gas). The advantages of computer-assisted simulation include the possibility of using complex virtual geometries of any form as the area of the deployed phenomenon, the use of the same geometry for an unlimited number of settings of initial input parameters, total elimination of the risk of personnel injury, decreased execution time, etc. Although computer simulations consume considerable hardware resources and require specialized personnel to use CFD (Computational Fluid Dynamics) techniques, the costs and risks associated with these methods are greatly diminished, while presenting at the same time a major benefit in terms of execution time.

  4. Identifying Structure-Property Relationships Through DREAM.3D Representative Volume Elements and DAMASK Crystal Plasticity Simulations: An Integrated Computational Materials Engineering Approach

    NASA Astrophysics Data System (ADS)

    Diehl, Martin; Groeber, Michael; Haase, Christian; Molodov, Dmitri A.; Roters, Franz; Raabe, Dierk

    2017-05-01

    Predicting, understanding, and controlling mechanical behavior is the most important task when designing structural materials. Modern alloy systems—in which multiple deformation mechanisms, phases, and defects are introduced to overcome the inverse strength-ductility relationship—give rise to multiple possibilities for modifying the deformation behavior, rendering traditional, exclusively experiment-based alloy development workflows inappropriate. For fast and efficient alloy design, it is therefore desirable to predict the mechanical performance of candidate alloys through simulation studies, replacing time- and resource-consuming mechanical tests. Simulation tools suitable for this task need to correctly predict the mechanical behavior as a function of alloy composition, microstructure, texture, phase fractions, and processing history. Here, an integrated computational materials engineering approach based on the open source software packages DREAM.3D and DAMASK (Düsseldorf Advanced Materials Simulation Kit) that enables such virtual material development is presented. More specifically, our approach consists of the following three steps: (1) acquire statistical quantities that describe a microstructure, (2) build a representative volume element based on these quantities employing DREAM.3D, and (3) evaluate the representative volume element using a predictive crystal plasticity material model provided by DAMASK. Exemplarily, these steps are here conducted for a high-manganese steel.

  5. A Virtual Reality System for PTCD Simulation Using Direct Visuo-Haptic Rendering of Partially Segmented Image Data.

    PubMed

    Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz

    2016-01-01

    This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We use only partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in CUDA and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and the deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results, with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.

  6. Improving the mixing performances of rice straw anaerobic digestion for higher biogas production by computational fluid dynamics (CFD) simulation.

    PubMed

    Shen, Fei; Tian, Libin; Yuan, Hairong; Pang, Yunzhi; Chen, Shulin; Zou, Dexun; Zhu, Baoning; Liu, Yanping; Li, Xiujin

    2013-10-01

    As a lignocellulose-based substrate for anaerobic digestion, rice straw is characterized by low density, high water absorbability, and poor fluidity. Its mixing performances in digestion are completely different from traditional substrates such as animal manures. Computational fluid dynamics (CFD) simulation was employed to investigate mixing performances and determine suitable stirring parameters for efficient biogas production from rice straw. The results from CFD simulation were applied in the anaerobic digestion tests to further investigate their reliability. The results indicated that the mixing performances could be improved by triple impellers with pitched blade, and complete mixing was easily achieved at the stirring rate of 80 rpm, as compared to 20-60 rpm. However, mixing could not be significantly improved when the stirring rate was further increased from 80 to 160 rpm. The simulation results agreed well with the experimental results. The determined mixing parameters could achieve the highest biogas yield of 370 mL (g TS)(-1) (729 mL (g TS(digested))(-1)) and 431 mL (g TS)(-1) (632 mL (g TS(digested))(-1)) with the shortest technical digestion time (T 80) of 46 days. The results obtained in this work could provide useful guides for the design and operation of biogas plants using rice straw as substrates.

  7. Simulation model for port shunting yards

    NASA Astrophysics Data System (ADS)

    Rusca, A.; Popa, M.; Rosca, E.; Rosca, M.; Dragu, V.; Rusca, F.

    2016-08-01

    Sea ports are important nodes in the supply chain, joining two high-capacity transport modes: rail and maritime transport. The huge cargo flows transiting a port require high-capacity constructions and installations such as berths, large-capacity cranes and shunting yards. However, the specificity of port shunting yards raises several problems, such as limited access, since these are terminus stations of the rail network; the input and output of large transit flows of cargo relative to the infrequent departure/arrival of ships; and limited land availability for implementing solutions to serve these flows. It is necessary to identify technological solutions that address these problems. The paper proposes a simulation model, developed with the ARENA computer simulation software, suitable for shunting yards which serve sea ports with access to the rail network. It investigates the principal aspects of shunting yards and adequate measures to increase their transit capacity. The operating capacity of the shunting-yard sub-system is assessed taking into consideration the required operating standards, and performance measures of the railway station (e.g. waiting time for freight wagons, number of railway lines in the station, storage area, etc.) are computed. The conclusions and results drawn from the simulation help transport and logistics specialists to test proposals for improving port management.

  8. Molecular Dynamics based on a Generalized Born solvation model: application to protein folding

    NASA Astrophysics Data System (ADS)

    Onufriev, Alexey

    2004-03-01

    An accurate description of the aqueous environment is essential for realistic biomolecular simulations, but may become very expensive computationally. We have developed a version of the Generalized Born model suitable for describing large conformational changes in macromolecules. The model represents the solvent implicitly as a continuum with the dielectric properties of water, and includes the charge-screening effects of salt. The computational cost associated with the use of this model in Molecular Dynamics simulations is generally considerably smaller than the cost of representing water explicitly. Also, compared to traditional Molecular Dynamics simulations based on explicit water representation, conformational changes occur much faster in an implicit solvation environment due to the absence of viscosity. The combined speed-up allows one to probe conformational changes that occur on much longer effective time-scales. We apply the model to the folding of a 46-residue three-helix bundle protein (residues 10-55 of protein A, PDB ID 1BDD). Starting from an unfolded structure at 450 K, the protein folds to the lowest energy state in 6 ns of simulation time, which takes about a day on a 16-processor SGI machine. The predicted structure differs from the native one by 2.4 A (backbone RMSD). Analysis of the structures seen on the folding pathway reveals details of the folding process unavailable from experiment.
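The Generalized Born pairwise solvation energy has a standard functional form (due to Still and coworkers); the sketch below evaluates it for given charges and effective Born radii, in reduced units with the Coulomb constant set to 1. It illustrates the model class — for a single charge it reduces to the Born ion formula — and is not the authors' specific parameterization or their salt-screening extension.

```python
import numpy as np

def gb_energy(q, R, pos, eps_w=78.5):
    """Generalized Born solvation energy (Still et al. functional form):
        dG = -0.5 * (1 - 1/eps_w) * sum_ij q_i q_j / f_GB(r_ij),
        f_GB = sqrt(r^2 + Ri*Rj*exp(-r^2 / (4*Ri*Rj))).
    q: charges, R: effective Born radii, pos: (n, 3) coordinates.
    Reduced units: Coulomb constant = 1."""
    n = len(q)
    E = 0.0
    for i in range(n):
        for j in range(n):   # double sum includes i == j self terms
            r2 = np.sum((pos[i] - pos[j])**2)
            fgb = np.sqrt(r2 + R[i] * R[j] * np.exp(-r2 / (4.0 * R[i] * R[j])))
            E += q[i] * q[j] / fgb
    return -0.5 * (1.0 - 1.0 / eps_w) * E
```

The smooth interpolation in f_GB is what lets one energy expression cover both the self (Born) limit at r = 0 and the screened Coulomb limit at large separation.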

  9. Measurement-derived heat-budget approaches for simulating coastal wetland temperature with a hydrodynamic model

    USGS Publications Warehouse

    Swain, Eric; Decker, Jeremy

    2010-01-01

    Numerical modeling is needed to predict environmental temperatures, which affect a number of biota in southern Florida, U.S.A., such as the West Indian manatee (Trichechus manatus), which uses thermal basins for refuge from lethal winter cold fronts. To numerically simulate heat transport through a dynamic coastal wetland region, an algorithm was developed for the FTLOADDS coupled hydrodynamic surface-water/ground-water model that uses formulations and coefficients suited to the coastal wetland thermal environment. In this study, two field sites provided atmospheric data to develop coefficients for the heat-flux terms representing this particular study area. Several methods were examined to represent the heat-flux components used to compute temperature. A Dalton equation was compared with a Penman formulation for latent heat computations, producing similar daily-average temperatures. Simulation of heat transport in the southern Everglades indicates that the model represents the daily fluctuation in coastal temperatures better than at inland locations, possibly due to the lack of information on spatial variations in heat-transport parameters such as soil heat capacity and surface albedo. These simulation results indicate that the new formulation is suitable for defining the existing thermohydrologic system and evaluating the ecological effects of proposed restoration efforts in the southern Everglades of Florida.

  10. MRXCAT: Realistic numerical phantoms for cardiovascular magnetic resonance

    PubMed Central

    2014-01-01

    Background Computer simulations are important for validating novel image acquisition and reconstruction strategies. In cardiovascular magnetic resonance (CMR), numerical simulations need to combine anatomical information and the effects of cardiac and/or respiratory motion. To this end, a framework for realistic CMR simulations is proposed and its use for image reconstruction from undersampled data is demonstrated. Methods The extended Cardiac-Torso (XCAT) anatomical phantom framework with various motion options was used as a basis for the numerical phantoms. Different tissue, dynamic contrast and signal models, multiple receiver coils and noise are simulated. Arbitrary trajectories and undersampled acquisition can be selected. The utility of the framework is demonstrated for accelerated cine and first-pass myocardial perfusion imaging using k-t PCA and k-t SPARSE. Results MRXCAT phantoms allow for realistic simulation of CMR including optional cardiac and respiratory motion. Example reconstructions from simulated undersampled k-t parallel imaging demonstrate the feasibility of simulated acquisition and reconstruction using the presented framework. Myocardial blood flow assessment from simulated myocardial perfusion images highlights the suitability of MRXCAT for quantitative post-processing simulation. Conclusion The proposed MRXCAT phantom framework enables versatile and realistic simulations of CMR including breathhold and free-breathing acquisitions. PMID:25204441

  11. NETIMIS: Dynamic Simulation of Health Economics Outcomes Using Big Data.

    PubMed

    Johnson, Owen A; Hall, Peter S; Hulme, Claire

    2016-02-01

    Many healthcare organizations are now making good use of electronic health record (EHR) systems to record clinical information about their patients and the details of their healthcare. Electronic data in EHRs are generated by people engaged in complex processes within complex environments, and their human input, albeit shaped by computer systems, is compromised by many human factors. These data are potentially valuable to health economists and outcomes researchers, but are large and complex enough to be considered part of the new frontier of 'big data'. This paper describes emerging methods that draw together data mining, process modelling, activity-based costing and dynamic simulation models. Our research infrastructure includes safe links to Leeds hospital's EHRs with 3 million secondary and tertiary care patients. We created a multidisciplinary team of health economists, clinical specialists, and data and computer scientists, and developed a dynamic simulation tool called NETIMIS (Network Tools for Intervention Modelling with Intelligent Simulation; http://www.netimis.com ) suitable for visualization of both human-designed and data-mined processes, which can then be used for 'what-if' analysis by stakeholders interested in costing, designing and evaluating healthcare interventions. We present two examples of model development to illustrate how dynamic simulation can be informed by big data from an EHR. We found the tool provided a focal point for the multidisciplinary team, helping them iteratively and collaboratively 'deep dive' into big data.

  12. Multi-physics CFD simulations in engineering

    NASA Astrophysics Data System (ADS)

    Yamamoto, Makoto

    2013-08-01

    Nowadays, Computational Fluid Dynamics (CFD) software is adopted as a design and analysis tool in a great number of engineering fields. We can say that single-physics CFD has sufficiently matured from a practical point of view. The main target of existing CFD software is single-phase flows such as water and air. However, many multi-physics problems exist in engineering. Most of them consist of flow and other physics, and the interactions between the different physics are very important. Obviously, multi-physics phenomena are critical in developing machines and processes. A multi-physics phenomenon can be very complex, and it is difficult to predict simply by adding other physics to the flow phenomenon. Therefore, multi-physics CFD techniques are still under research and development. This stems from the facts that the processing speed of current computers is not fast enough to conduct multi-physics simulations, and that physical models other than those for flow physics have not been suitably established. Therefore, in the near future, we have to develop various physical models and efficient CFD techniques in order to succeed with multi-physics simulations in engineering. In the present paper, I describe the present state of multi-physics CFD simulations, and then show some numerical results, such as ice accretion and the electro-chemical machining process of a three-dimensional compressor blade, which were obtained in my laboratory. Multi-physics CFD simulation will be a key technology in the near future.

  13. Semi-physical simulation test for micro CMOS star sensor

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Zhang, Guang-jun; Jiang, Jie; Fan, Qiao-yun

    2008-03-01

    A designed star sensor must be extensively tested before launch. Testing a star sensor requires a complicated process with much time and many resources. Even observing the sky on the ground is a challenging and time-consuming job, requiring complicated and expensive equipment and a suitable time and location, and it is prone to interference from weather. Moreover, not all stars distributed across the sky can be observed by this testing method. Semi-physical simulation in the laboratory reduces the testing cost and helps to debug, analyze and evaluate the star sensor system while developing the model. The test system is composed of an optical platform, a star field simulator, a star field simulator computer, the star sensor and the central data processing computer. The test system simulates starlight with high accuracy and good parallelism, and creates static or dynamic images in the FOV (Field of View). The conditions of the test are close to observing the real sky. With this system, the test of a micro star tracker designed by Beijing University of Aeronautics and Astronautics has been performed successfully. Several indices, including full-sky autonomous star identification time, attitude update frequency and attitude precision, meet the design requirements of the star sensor. Error sources of the testing system are also analyzed. It is concluded that the testing system is cost-saving and efficient, and contributes to optimizing the embedded algorithms, shortening the development cycle and improving engineering design processes.

  14. Load Balancing Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearce, Olga Tkachyshyn

    2014-12-01

    The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate the behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
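A classic example of the fast-and-simple end of the algorithm spectrum discussed above is longest-processing-time-first (LPT) greedy assignment: sort tasks by decreasing cost and always hand the next task to the least-loaded processor. This is a generic illustration of the idea, not an algorithm taken from the dissertation.

```python
import heapq

def lpt_assign(task_costs, n_procs):
    """LPT greedy load balance: largest task first, onto the
    currently least-loaded processor. Returns per-processor loads."""
    heap = [(0.0, p) for p in range(n_procs)]  # (load, processor id)
    heapq.heapify(heap)
    loads = [0.0] * n_procs
    for c in sorted(task_costs, reverse=True):
        load, p = heapq.heappop(heap)          # least-loaded processor
        loads[p] = load + c
        heapq.heappush(heap, (loads[p], p))
    return loads
```

LPT is cheap (O(n log n)) and comes with a 4/3 worst-case makespan guarantee, which is often good enough between expensive global rebalances.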

  15. A satellite-based radar wind sensor

    NASA Technical Reports Server (NTRS)

    Xin, Weizhuang

    1991-01-01

    The objective is to investigate the application of Doppler radar systems to global wind measurement. A model of the satellite-based radar wind sounder (RAWS) is discussed, and many critical problems in the design process, such as the antenna scan pattern, tracking the Doppler shift caused by satellite motion, and the backscattering of radar signals from different types of clouds, are discussed along with their computer simulations. In addition, algorithms for measuring the mean frequency of radar echoes, such as the Fast Fourier Transform (FFT) estimator, the covariance estimator, and estimators based on autoregressive models, are discussed. Monte Carlo computer simulations were used to compare the performance of these algorithms. Anti-aliasing methods are discussed for the FFT and autoregressive methods. Several algorithms for reducing radar ambiguity were studied, such as random phase coding methods and staggered pulse repetition frequency (PRF) methods. Computer simulations showed that these methods are not applicable to the RAWS because of the broad spectral widths of the radar echoes from clouds. A waveform modulation method using the concepts of spread spectrum and correlation detection was developed to resolve the radar ambiguity. Radar ambiguity functions were used to analyze the effective signal-to-noise ratios for the waveform modulation method. The results showed that, with a suitable bandwidth product and modulation of the waveform, this method can achieve the desired maximum range and maximum frequency of the radar system.
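The covariance estimator mentioned above is usually implemented as the pulse-pair algorithm: the mean Doppler frequency is the phase of the lag-one autocorrelation of the complex I/Q samples, scaled by the PRF. A minimal generic sketch (not the RAWS implementation):

```python
import numpy as np

def pulse_pair_frequency(iq, prf):
    """Pulse-pair (lag-1 covariance) mean-frequency estimator:
        f = (prf / 2*pi) * arg( sum_n s[n+1] * conj(s[n]) ).
    Unambiguous only for |f| < prf/2 (the Nyquist co-interval)."""
    r1 = np.sum(iq[1:] * np.conj(iq[:-1]))
    return prf * np.angle(r1) / (2.0 * np.pi)
```

Unlike the FFT estimator, the pulse-pair estimate needs only one complex multiply-accumulate per sample, but it inherits the same aliasing limit, which is why broad cloud echo spectra defeat the simple ambiguity-reduction schemes discussed in the abstract.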

  16. Multifaceted free-space image distributor for optical interconnects in massively parallel processing

    NASA Astrophysics Data System (ADS)

    Zhao, Feng; Frietman, Edward E. E.; Han, Zhong; Chen, Ray T.

    1999-04-01

    A characteristic feature of a conventional von Neumann computer is that computing power is delivered by a single processing unit. Although increasing the clock frequency improves the performance of the computer, the switching speed of the semiconductor devices and the finite speed at which electrical signals propagate along the bus set the boundaries. Architectures containing large numbers of nodes can resolve this performance dilemma, with the caveat that the main obstacles in designing such systems lie in finding solutions that guarantee efficient communication among the nodes. Exchanging data becomes a real bottleneck should all nodes be connected by a shared resource. Only optics, due to its inherent parallelism, can remove that bottleneck. Here, we explore a multifaceted free-space image distributor to be used in optical interconnects for massively parallel processing. In this paper, physical and optical models of the image distributor are developed, from the diffraction theory of light waves to optical simulations. The general features and the performance of the image distributor are also described, and the new structure of the image distributor and the simulations for it are discussed. From the digital simulations and experiments, it is found that the multifaceted free-space image distributing technique is quite suitable for free-space optical interconnection in massively parallel processing, and that the new structure of the multifaceted free-space image distributor performs better.

  17. Computational fluid dynamics assessment: Volume 1, Computer simulations of the METC (Morgantown Energy Technology Center) entrained-flow gasifier: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celik, I.; Chattree, M.

    1988-07-01

    An assessment of the theoretical and numerical aspects of the computer code, PCGC-2, is made, and the results of the application of this code to the Morgantown Energy Technology Center (METC) advanced gasification facility entrained-flow reactor, ''the gasifier,'' are presented. PCGC-2 is a code suitable for simulating pulverized coal combustion or gasification under axisymmetric (two-dimensional) flow conditions. The governing equations for the gas and particulate phases have been reviewed. The numerical procedure and the related programming difficulties have been elucidated. A single-particle model similar to the one used in PCGC-2 has been developed, programmed, and applied to some simple situations in order to gain insight into the physics of coal particle heat-up, devolatilization, and char oxidation processes. PCGC-2 was applied to the METC entrained-flow gasifier to study numerically the flash pyrolysis of coal and the gasification of coal with steam or carbon dioxide. The results from the simulations are compared with measurements. The gas and particle residence times, particle temperature, and mass component history were also calculated and the results were analyzed. The results provide useful information for understanding the fundamentals of coal gasification and for the assessment of experimental results obtained using the reactor considered. 69 refs., 35 figs., 23 tabs.

  18. An 8-node tetrahedral finite element suitable for explicit transient dynamic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Key, S.W.; Heinstein, M.W.; Stone, C.M.

    1997-12-31

    Considerable effort has been expended in perfecting the algorithmic properties of 8-node hexahedral finite elements. Today the element is well understood and performs exceptionally well when used in modeling three-dimensional explicit transient dynamic events. However, the automatic generation of all-hexahedral meshes remains an elusive achievement. The alternative of automatic all-tetrahedral mesh generation relies on the 4-node tetrahedral finite element, a notoriously poor performer, while the 10-node quadratic tetrahedral finite element, though numerically a better performer, is computationally expensive. To use the all-tetrahedral mesh generation extant today, the authors have explored the creation of a quality 8-node tetrahedral finite element (a 4-node tetrahedral finite element enriched with four midface nodal points). The derivation of the element's gradient operator, studies in obtaining a suitable mass lumping, and the element's performance in applications are presented. In particular, they examine the 8-node tetrahedral finite element's behavior in longitudinal plane wave propagation, in transverse cylindrical wave propagation, and in simulating Taylor bar impacts. The element only samples constant strain states and, therefore, has 12 hourglass modes. In this regard, it bears similarities to the 8-node, mean-quadrature hexahedral finite element. Given automatic all-tetrahedral meshing, the 8-node, constant-strain tetrahedral finite element is a suitable replacement for the 8-node hexahedral finite element and hand-built meshes.

  19. Proper Orthogonal Decomposition in Optimal Control of Fluids

    NASA Technical Reports Server (NTRS)

    Ravindran, S. S.

    1999-01-01

    In this article, we present a reduced order modeling approach suitable for active control of fluid dynamical systems, based on proper orthogonal decomposition (POD). The rationale behind reduced order modeling is that numerical simulation of the Navier-Stokes equations is still too costly for the purpose of optimization and control of unsteady flows. We examine the possibility of obtaining reduced order models that lower the computational complexity associated with the Navier-Stokes equations while capturing the essential dynamics, by using the POD. The POD allows the extraction of an optimal set of basis functions, perhaps few in number, from a computational or experimental database through an eigenvalue analysis. The solution is then obtained as a linear combination of this optimal set of basis functions by means of Galerkin projection. This makes the approach attractive for optimal control and estimation of systems governed by partial differential equations. We use it here in the active control of fluid flows governed by the Navier-Stokes equations. We show that the resulting reduced order model can be very efficient for the computation of optimization and control problems in unsteady flows. Finally, implementation issues and numerical experiments are presented for simulations and optimal control of fluid flow through channels.
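    In the snapshot formulation, the POD extraction described here amounts to a singular value decomposition of a data matrix followed by projection onto the leading modes. A minimal sketch with synthetic snapshot data (the two-mode test signal and the 99% energy threshold are illustrative assumptions, not the article's setup):

    ```python
    import numpy as np

    # Hypothetical snapshot matrix: each column is one "flow" snapshot (synthetic data)
    rng = np.random.default_rng(1)
    n_points, n_snaps = 200, 40
    x = np.linspace(0.0, 1.0, n_points)
    snapshots = (np.outer(np.sin(np.pi * x), rng.standard_normal(n_snaps))
                 + 0.3 * np.outer(np.sin(2 * np.pi * x), rng.standard_normal(n_snaps))
                 + 0.01 * rng.standard_normal((n_points, n_snaps)))

    # POD basis = left singular vectors of the snapshot matrix, ordered by the
    # "energy" carried in the singular values
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.99)) + 1   # modes capturing 99% of the energy
    basis = U[:, :r]

    # Reduced-order reconstruction: project the data onto the few retained modes
    coeffs = basis.T @ snapshots                 # modal coefficients
    recon = basis @ coeffs
    err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
    print(r, round(err, 4))
    ```

    In a genuine reduced-order model the coefficients would evolve in time via Galerkin projection of the Navier-Stokes equations onto `basis`; here the projection only illustrates the compression step.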

  20. Real-time Adaptive EEG Source Separation using Online Recursive Independent Component Analysis

    PubMed Central

    Hsu, Sheng-Hsiou; Mullen, Tim; Jung, Tzyy-Ping; Cauwenberghs, Gert

    2016-01-01

    Independent Component Analysis (ICA) has been widely applied to electroencephalographic (EEG) biosignal processing and brain-computer interfaces. The practical use of ICA, however, is limited by its computational complexity, data requirements for convergence, and assumption of data stationarity, especially for high-density data. Here we study and validate an optimized online recursive ICA algorithm (ORICA) with online recursive least squares (RLS) whitening for blind source separation of high-density EEG data, which offers instantaneous incremental convergence upon presentation of new data. Empirical results of this study demonstrate the algorithm's: (a) suitability for accurate and efficient source identification in high-density (64-channel) realistically-simulated EEG data; (b) capability to detect and adapt to non-stationarity in 64-ch simulated EEG data; and (c) utility for rapidly extracting principal brain and artifact sources in real 61-channel EEG data recorded by a dry and wearable EEG system in a cognitive experiment. ORICA was implemented as functions in BCILAB and EEGLAB and was integrated in an open-source Real-time EEG Source-mapping Toolbox (REST), supporting applications in ICA-based online artifact rejection, feature extraction for real-time biosignal monitoring in clinical environments, and adaptable classifications in brain-computer interfaces. PMID:26685257

  1. Monte Carlo simulations of quantum dot solar concentrators: ray tracing based on fluorescence mapping

    NASA Astrophysics Data System (ADS)

    Schuler, A.; Kostro, A.; Huriet, B.; Galande, C.; Scartezzini, J.-L.

    2008-08-01

    One promising application of semiconductor nanostructures in the field of photovoltaics might be quantum dot solar concentrators. Quantum-dot-containing nanocomposite thin films are synthesized at EPFL-LESO by a low-cost sol-gel process. In order to study the potential of the novel planar photoluminescent concentrators, reliable computer simulations are needed. A computer code for ray tracing simulations of quantum dot solar concentrators has been developed at EPFL-LESO on the basis of Monte Carlo methods that are applied to polarization-dependent reflection/transmission at interfaces, photon absorption by the semiconductor nanocrystals, and photoluminescent reemission. The software allows importing measured or theoretical absorption/reemission spectra describing the photoluminescent properties of the quantum dots. The properties of photoluminescent reemission are described by a set of emission spectra that depend on the energy of the incoming photon, allowing the photoluminescent emission to be simulated using the inverse function method. Our simulations reveal the importance of two main factors: an emission spectrum matched to the spectral efficiency curve of the photovoltaic cell, and a large Stokes shift, which is advantageous for the lateral energy transport. No significant energy losses are implied when the quantum dots are contained within a nanocomposite coating instead of being dispersed in the entire volume of the pane. Together with knowledge of the optoelectronic properties of suitable photovoltaic cells, the simulations allow prediction of the total efficiency of the envisaged concentrating PV systems and optimization of the photoluminescent emission frequencies, optical densities, and pane dimensions.
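    The inverse function method mentioned for sampling photoluminescent reemission is standard inverse-CDF sampling of a tabulated spectrum. A sketch with an invented Gaussian emission band (the 620 nm center and 15 nm width are illustrative, not measured LESO data):

    ```python
    import numpy as np

    def sample_emission(wavelengths, spectrum, n, rng):
        """Draw reemission wavelengths from a tabulated spectrum by inverting its CDF."""
        weights = spectrum / spectrum.sum()          # normalize to a discrete PDF
        cdf = np.cumsum(weights)
        idx = np.searchsorted(cdf, rng.random(n))    # invert the CDF with uniform draws
        return wavelengths[np.clip(idx, 0, len(wavelengths) - 1)]

    # Hypothetical Gaussian-shaped emission band centered at 620 nm (illustrative only)
    wl = np.linspace(500.0, 750.0, 501)
    spec = np.exp(-0.5 * ((wl - 620.0) / 15.0) ** 2)
    rng = np.random.default_rng(2)
    samples = sample_emission(wl, spec, 100_000, rng)
    print(round(float(samples.mean()), 1))
    ```

    In a full ray tracer, one such tabulated spectrum would be selected per absorption event according to the energy of the absorbed photon, as the abstract describes.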

  2. Insights into the deactivation of 5-bromouracil after ultraviolet excitation

    NASA Astrophysics Data System (ADS)

    Peccati, Francesca; Mai, Sebastian; González, Leticia

    2017-03-01

    5-Bromouracil is a nucleobase analogue that can replace thymine in DNA strands and acts as a strong radiosensitizer, with potential applications in molecular biology and cancer therapy. Here, the deactivation of 5-bromouracil after ultraviolet irradiation is investigated in the singlet and triplet manifolds by accurate quantum chemistry calculations and non-adiabatic dynamics simulations. It is found that, after irradiation to the bright ππ* state, three main relaxation pathways are, in principle, possible: relaxation back to the ground state, intersystem crossing (ISC) and C-Br photodissociation. Based on accurate MS-CASPT2 optimizations, we propose that ground-state relaxation should be the predominant deactivation pathway in the gas phase. We then employ different electronic structure methods to assess their suitability to carry out excited-state dynamics simulations. MRCIS (multi-reference configuration interaction including single excitations) was used in surface hopping simulations to compute the ultrafast ISC dynamics, which mostly involves the ¹nOπ* and ³ππ* states. This article is part of the themed issue 'Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces'.

  3. Insights into the deactivation of 5-bromouracil after ultraviolet excitation

    PubMed Central

    2017-01-01

    5-Bromouracil is a nucleobase analogue that can replace thymine in DNA strands and acts as a strong radiosensitizer, with potential applications in molecular biology and cancer therapy. Here, the deactivation of 5-bromouracil after ultraviolet irradiation is investigated in the singlet and triplet manifolds by accurate quantum chemistry calculations and non-adiabatic dynamics simulations. It is found that, after irradiation to the bright ππ* state, three main relaxation pathways are, in principle, possible: relaxation back to the ground state, intersystem crossing (ISC) and C–Br photodissociation. Based on accurate MS-CASPT2 optimizations, we propose that ground-state relaxation should be the predominant deactivation pathway in the gas phase. We then employ different electronic structure methods to assess their suitability to carry out excited-state dynamics simulations. MRCIS (multi-reference configuration interaction including single excitations) was used in surface hopping simulations to compute the ultrafast ISC dynamics, which mostly involves the ¹nOπ* and ³ππ* states. This article is part of the themed issue ‘Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces’. PMID:28320905

  4. Characterization and Simulation of a New Design Parallel-Plate Ionization Chamber for CT Dosimetry at Calibration Laboratories

    NASA Astrophysics Data System (ADS)

    Perini, Ana P.; Neves, Lucio P.; Maia, Ana F.; Caldas, Linda V. E.

    2013-12-01

    In this work, a new extended-length parallel-plate ionization chamber was tested in the standard radiation qualities for computed tomography, established according to the half-value layers defined in the IEC 61267 standard, at the Calibration Laboratory of the Instituto de Pesquisas Energéticas e Nucleares (IPEN). The experimental characterization was made following the IEC 61674 standard recommendations. The experimental results obtained with the ionization chamber studied in this work were compared to those obtained with a commercial pencil ionization chamber, showing good agreement. With the use of the PENELOPE Monte Carlo code, simulations were undertaken to evaluate the influence of the cables, insulator, PMMA body, collecting electrode, guard ring, and screws, as well as of different materials and geometrical arrangements, on the energy deposited in the ionization chamber's sensitive volume. The maximum influence observed was 13.3% for the collecting electrode, and regarding the use of different materials and designs, the substitutions showed that the original project presented the most suitable configuration. The experimental and simulated results obtained in this work show that this ionization chamber has appropriate characteristics to be used at calibration laboratories for dosimetry in standard computed tomography and diagnostic radiology quality beams.

  5. Photo-induced reactions from efficient molecular dynamics with electronic transitions using the FIREBALL local-orbital density functional theory formalism.

    PubMed

    Zobač, Vladimír; Lewis, James P; Abad, Enrique; Mendieta-Moreno, Jesús I; Hapala, Prokop; Jelínek, Pavel; Ortega, José

    2015-05-08

    The computational simulation of photo-induced processes in large molecular systems is a very challenging problem. Firstly, to properly simulate photo-induced reactions, the potential energy surfaces corresponding to excited states must be appropriately accessed; secondly, understanding the mechanisms of these processes requires the exploration of complex configurational spaces and the localization of conical intersections; finally, photo-induced reactions are probabilistic events that require the simulation of hundreds of trajectories to obtain the statistical information needed for the analysis of the reaction profiles. Here, we present a detailed description of our implementation of a molecular dynamics with electronic transitions algorithm within the local-orbital density functional theory code FIREBALL, suitable for the computational study of these problems. As an example of the application of this approach, we also report results on the [2 + 2] cycloaddition of ethylene with maleic anhydride and on the [2 + 2] photo-induced polymerization reaction of two C60 molecules. We identify different deactivation channels of the initial electron excitation, depending on the time of the electronic transition from LUMO to HOMO and the character of the HOMO after the transition.

  6. Variability of hemodynamic parameters using the common viscosity assumption in a computational fluid dynamics analysis of intracranial aneurysms.

    PubMed

    Suzuki, Takashi; Takao, Hiroyuki; Suzuki, Takamasa; Suzuki, Tomoaki; Masuda, Shunsuke; Dahmani, Chihebeddine; Watanabe, Mitsuyoshi; Mamori, Hiroya; Ishibashi, Toshihiro; Yamamoto, Hideki; Yamamoto, Makoto; Murayama, Yuichi

    2017-01-01

    In most simulations of intracranial aneurysm hemodynamics, blood is assumed to be a Newtonian fluid. However, it is a non-Newtonian fluid, and its viscosity profile differs among individuals. Therefore, the common viscosity assumption may not be valid for all patients. This study aims to test the suitability of the common viscosity assumption. Blood viscosity datasets were obtained from two healthy volunteers. Three simulations were performed for each of three different-sized aneurysms: two using measured-value-based non-Newtonian models and one using a Newtonian model. The parameters proposed to predict an aneurysmal rupture obtained using the non-Newtonian models were compared with those obtained using the Newtonian model. The largest difference (25%) in the normalized wall shear stress (NWSS) was observed in the smallest aneurysm. When the ratios of the NWSS relative to the Newtonian model were compared between the two non-Newtonian models, they differed by 17.3%. Irrespective of the aneurysmal size, computational fluid dynamics simulations with either the common Newtonian or a common non-Newtonian viscosity assumption could yield hemodynamic parameters, such as the NWSS, that differ from those of a patient-specific viscosity model.
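    To make the Newtonian-versus-non-Newtonian contrast concrete, the sketch below compares a constant viscosity with a Carreau-type shear-thinning model; the coefficients are generic literature-style values for blood, not the volunteers' measured datasets from this study.

    ```python
    import numpy as np

    def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
        """Carreau shear-thinning viscosity in Pa*s (illustrative coefficients)."""
        return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

    mu_newtonian = 0.0035  # Pa*s, a common constant-viscosity assumption for blood

    # At low shear rates the two models diverge strongly; at high shear they converge,
    # which is why the constant-viscosity assumption matters most in slow-flow regions
    for gd in (1.0, 10.0, 100.0, 1000.0):
        ratio = carreau_viscosity(gd) / mu_newtonian
        print(f"shear rate {gd:7.1f} 1/s  ->  viscosity ratio {ratio:5.2f}")
    ```

    Slow recirculating flow inside an aneurysm sac sits exactly in the low-shear regime where the ratio is largest, consistent with the sensitivity the study reports.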

  7. Bayesian Treed Multivariate Gaussian Process with Adaptive Design: Application to a Carbon Capture Unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik

    2014-05-16

    Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. The computational cost of the simulations at high resolution is often expensive, making parametric studies at different input values impractical. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and prior distributions facilitates the different Markov chain Monte Carlo (MCMC) moves. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and the expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and the BTMGP to model the multiphase flow in a full-scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.

  8. A DNA network as an information processing system.

    PubMed

    Santini, Cristina Costa; Bath, Jonathan; Turberfield, Andrew J; Tyrrell, Andy M

    2012-01-01

    Biomolecular systems that can process information are sought for computational applications, because of their potential for parallelism and miniaturization and because their biocompatibility also makes them suitable for future biomedical applications. DNA has been used to design machines, motors, finite automata, logic gates, reaction networks and logic programs, amongst many other structures and dynamic behaviours. Here we design and program a synthetic DNA network to implement computational paradigms abstracted from cellular regulatory networks. These show information processing properties that are desirable in artificial, engineered molecular systems, including robustness of the output in relation to different sources of variation. We show the results of numerical simulations of the dynamic behaviour of the network and preliminary experimental analysis of its main components.

  9. A Brief Description of the Kokkos implementation of the SNAP potential in ExaMiniMD.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Aidan P.; Trott, Christian Robert

    2017-11-01

    Within the EXAALT project, the SNAP [1] approach is being used to develop high-accuracy potentials for use in large-scale, long-time molecular dynamics simulations of materials behavior. In particular, we have developed a new SNAP potential that is suitable for describing the interplay between helium atoms and vacancies in high-temperature tungsten [2]. This model is now being used to study plasma-surface interactions in nuclear fusion reactors for energy production. The high accuracy of SNAP potentials comes at the price of increased computational cost per atom and increased computational complexity. The increased cost is mitigated by improvements in strong scaling that can be achieved using advanced algorithms [3].

  10. A novel track-before-detect algorithm based on optimal nonlinear filtering for detecting and tracking infrared dim target

    NASA Astrophysics Data System (ADS)

    Tian, Yuexin; Gao, Kun; Liu, Ying; Han, Lu

    2015-08-01

    Aiming at the nonlinear and non-Gaussian features of real infrared scenes, an optimal nonlinear filtering based algorithm for infrared dim target track-before-detect applications is proposed. It uses nonlinear theory to construct the state and observation models and uses the spectral-separation-scheme-based Wiener chaos expansion method to solve the stochastic differential equation of the constructed models. In order to improve computational efficiency, the most time-consuming operations, which are independent of the observation data, are processed in advance of the observation stage; the remaining fast, observation-dependent computations are implemented subsequently. Simulation results show that the algorithm possesses excellent detection performance and is well suited for real-time processing.

  11. A VLBI variance-covariance analysis interactive computer program. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bock, Y.

    1980-01-01

    An interactive computer program (in FORTRAN) for the variance-covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies, and optimal design problems. The interactive mode is especially suited to these types of analyses, providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process, emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.
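    The core of a variance-covariance analysis of this kind is propagating observation uncertainties into parameter uncertainties through the weighted least-squares normal equations. A generic sketch (the design matrix and noise level are invented; the program's actual VLBI observation model is far richer):

    ```python
    import numpy as np

    # Hypothetical linear observation model y = A x + e (the real VLBI model relates
    # delay observables to baseline components and Earth-orientation parameters)
    rng = np.random.default_rng(3)
    A = rng.standard_normal((20, 3))   # 20 observations, 3 unknown parameters
    sigma = 0.02                       # assumed observation standard deviation
    W = np.eye(20) / sigma**2          # weight matrix = inverse observation covariance

    # Parameter variance-covariance matrix of the weighted least-squares estimate:
    # Cov(x_hat) = (A^T W A)^(-1)
    cov_x = np.linalg.inv(A.T @ W @ A)
    formal_errors = np.sqrt(np.diag(cov_x))
    print(formal_errors)
    ```

    Note that `cov_x` depends only on the observation geometry and weights, not on any measured values, which is exactly why such an analysis is useful for experiment planning and schedule optimization before data are collected.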

  12. Magnetic Skyrmion as a Nonlinear Resistive Element: A Potential Building Block for Reservoir Computing

    NASA Astrophysics Data System (ADS)

    Prychynenko, Diana; Sitte, Matthias; Litzius, Kai; Krüger, Benjamin; Bourianoff, George; Kläui, Mathias; Sinova, Jairo; Everschor-Sitte, Karin

    2018-01-01

    Inspired by the human brain, a strong effort is under way to find alternative models of information processing capable of imitating the high energy efficiency of neuromorphic information processing. One possible realization of cognitive computing involves reservoir computing networks. These networks are built out of nonlinear resistive elements which are recursively connected. We propose that a skyrmion network embedded in magnetic films may provide a suitable physical implementation for reservoir computing applications. The significant key ingredient of such a network is a two-terminal device with nonlinear voltage characteristics originating from magnetoresistive effects, such as the anisotropic magnetoresistance or the recently discovered noncollinear magnetoresistance. The most basic element for a reservoir computing network built from "skyrmion fabrics" is a single skyrmion embedded in a ferromagnetic ribbon. In order to pave the way towards reservoir computing systems based on skyrmion fabrics, we simulate and analyze (i) the current flow through a single magnetic skyrmion due to the anisotropic magnetoresistive effect and (ii) the combined physics of local pinning and the anisotropic magnetoresistive effect.
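    The reservoir computing paradigm that the skyrmion network is meant to implement physically can be illustrated in software with a conventional echo state network: a fixed random recurrent reservoir plus a trained linear readout. The sketch below is such a generic software reservoir (the sizes, spectral radius, and delay-recall task are all assumptions), not a simulation of the magnetoresistive device.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_res, t_len = 100, 500

    # Fixed random reservoir: recurrent weights scaled to spectral radius 0.9,
    # plus random input weights -- neither is trained
    W = rng.standard_normal((n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.5, 0.5, (n_res, 1))

    u = rng.uniform(-1.0, 1.0, (t_len, 1))   # random input signal
    target = np.roll(u[:, 0], 3)             # task: recall the input from 3 steps back

    x = np.zeros(n_res)
    states = np.empty((t_len, n_res))
    for t in range(t_len):
        x = np.tanh(W @ x + W_in @ u[t])     # nonlinear reservoir update
        states[t] = x

    # Only the linear readout is trained (ridge regression), as in reservoir computing
    washout = 50                              # discard the initial transient states
    A, y = states[washout:], target[washout:]
    W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ y)
    err = np.sqrt(np.mean((A @ W_out - y) ** 2)) / np.std(y)
    print(round(err, 3))                      # normalized RMS error of the recall task
    ```

    In the proposed hardware, the tanh reservoir would be replaced by the nonlinear current-voltage response of the skyrmion fabric, with only the readout trained in software.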

  13. Vortex rings from Sphagnum moss capsules

    NASA Astrophysics Data System (ADS)

    Whitaker, Dwight; Strassman, Sam; Cha, Jung; Chang, Emily; Guo, Xinyi; Edwards, Joan

    2010-11-01

    The capsules of Sphagnum moss use vortex rings to disperse spores to suitable habitats many kilometers away. Vortex rings are created by the sudden release of pressurized air when the capsule ruptures, and are an efficient way to carry the small spores with low terminal velocities to heights where they can be carried by turbulent wind currents. We will present our computational model of these explosions, which are carried out using a 2-D large eddy simulation (LES) on FLUENT. Our simulations can reproduce the observed motion of the spore clouds observed from moss capsules with high-speed videos, and we will discuss the roles of bursting pressure, cap mass, and capsule morphology on the formation and quality of vortex rings created by this plant.

  14. Macrosegregation Resulting from Directional Solidification Through an Abrupt Change in Cross-Sections

    NASA Technical Reports Server (NTRS)

    Lauer, M.; Poirier, D. R.; Ghods, M.; Tewari, S. N.; Grugel, R. N.

    2017-01-01

    Simulations of the directional solidification of two hypoeutectic alloys (Al-7Si and Al-19Cu) and the resulting macrosegregation patterns are presented. The casting geometries include abrupt changes in cross-section from a larger width of 9.5 mm to a narrower 3.2 mm width, and then through an expansion back to a width of 9.5 mm. The alloys were chosen as model alloys because they have similar solidification shrinkages, but the effect of Cu on changing the density of the liquid alloy is about an order of magnitude greater than that of Si. The simulations compare well with experimental castings that were directionally solidified in a graphite mold in a Bridgman furnace. In addition to the simulations of directional solidification in graphite molds, some simulations were performed for solidification in an alumina mold. This study showed that the mold must be included in numerical simulations of directional solidification because of its effect on the temperature field and solidification. For the model alloys used in the study, the simulations clearly show the interaction of the convection field with the solidifying alloys to produce a macrosegregation pattern known as "steepling" in sections with a uniform width. Details of the complex convection and segregation patterns at both the contraction and the expansion of the cross-sectional area are revealed by the computer simulations. The convection and solidification through the expansions suggest a possible mechanism for the formation of stray grains. The computer simulations and the experimental castings have been part of on-going ground-based research with the goal of providing the necessary background for eventual experiments aboard the ISS. For casting practitioners, the results demonstrate that computer simulations should be applied to reveal the effects of interactions among alloy solidification properties, solidification conditions, and mold geometry on macrosegregation.
The simulations also present the possibility of engineering the mold material to avoid, or mitigate, the effects of thermosolutal convection and macrosegregation by selecting a mold material with suitable thermal properties, especially its thermal conductivity.

  15. Large eddy simulation in a rotary blood pump: Viscous shear stress computation and comparison with unsteady Reynolds-averaged Navier-Stokes simulation.

    PubMed

    Torner, Benjamin; Konnigk, Lucas; Hallier, Sebastian; Kumar, Jitendra; Witte, Matthias; Wurm, Frank-Hendrik

    2018-06-01

    Numerical flow analysis (computational fluid dynamics) in combination with the prediction of blood damage is an important procedure for investigating the hemocompatibility of a blood pump, since blood trauma due to shear stresses remains a problem in these devices. Today, numerical damage prediction is conducted using unsteady Reynolds-averaged Navier-Stokes simulations. Investigations with large eddy simulations are rarely performed for blood pumps. Hence, the aim of this study is to examine the viscous shear stresses of a large eddy simulation in a blood pump and compare the results with an unsteady Reynolds-averaged Navier-Stokes simulation. The simulations were carried out at two operating points of a blood pump. The flow was simulated on a 100M-element mesh for the large eddy simulation and a 20M-element mesh for the unsteady Reynolds-averaged Navier-Stokes simulation. As a first step, the large eddy simulation was verified by analyzing internal dissipative losses within the pump. Then, the pump characteristics and the mean and turbulent viscous shear stresses were compared between the two simulation methods. The verification showed that the large eddy simulation is able to reproduce the significant portion of dissipative losses, which is a global indication that the equivalent viscous shear stresses are adequately resolved. The comparison with the unsteady Reynolds-averaged Navier-Stokes simulation revealed that the hydraulic parameters were in agreement, but differences were found for the shear stresses. The results show the potential of the large eddy simulation as a high-quality comparative case for checking the suitability of a chosen Reynolds-averaged Navier-Stokes setup and turbulence model. Furthermore, the results suggest that large eddy simulations are superior to unsteady Reynolds-averaged Navier-Stokes simulations when instantaneous stresses are applied for blood damage prediction.
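    The viscous shear stresses compared in such studies derive from the strain-rate tensor of the resolved velocity field. A minimal sketch of one common scalar measure (conventions differ across the blood-damage literature; the viscosity value and velocity gradient here are illustrative):

    ```python
    import numpy as np

    def viscous_shear_scalar(grad_u, mu):
        """Scalar viscous stress from a 3x3 velocity-gradient tensor grad_u.
        Uses the symmetric strain-rate tensor S and the measure mu * sqrt(2 S:S)."""
        S = 0.5 * (grad_u + grad_u.T)          # strain-rate tensor
        return mu * np.sqrt(2.0 * np.sum(S * S))

    mu = 0.0035                                # Pa*s, assumed dynamic viscosity of blood
    # Pure shear du/dy = 1000 1/s, a magnitude plausible in a rotary pump gap
    grad_u = np.zeros((3, 3))
    grad_u[0, 1] = 1000.0
    print(viscous_shear_scalar(grad_u, mu))    # 3.5 Pa for this pure-shear case
    ```

    In a large eddy simulation this quantity is evaluated from the resolved field cell by cell; the study's point is that time-averaged (RANS) gradients can miss the instantaneous peaks of exactly this kind of stress measure.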

  16. Ion transfer from an atmospheric pressure ion funnel into a mass spectrometer with different interface options: Simulation-based optimization of ion transmission efficiency.

    PubMed

    Mayer, Thomas; Borsdorf, Helko

    2016-02-15

    We optimized an atmospheric pressure ion funnel (APIF) with different interface options (pinhole, capillary, and nozzle) for maximal ion transmission. Previous computer simulations consider the ion funnel itself and do not include the geometry of the following components, which can considerably influence the ion transmission into the vacuum stage. Initially, a three-dimensional computer-aided design (CAD) model of our setup was created using Autodesk Inventor. This model was imported into the Autodesk Simulation CFD program, where the computational fluid dynamics (CFD) were calculated. The flow field was then transferred to SIMION 8.1. Investigations of ion trajectories were carried out using the SDS (statistical diffusion simulation) tool of SIMION, which allowed us to evaluate the flow regime, pressure, and temperature values that we obtained. The simulation-based optimization of different interfaces between an atmospheric pressure ion funnel and the first vacuum stage of a mass spectrometer requires the consideration of fluid dynamics. The use of a Venturi nozzle ensures the highest transmission efficiency in comparison with capillaries or pinholes. However, the application of radiofrequency (RF) voltage and an appropriate direct current (DC) field leads to process optimization and maximum ion transfer. The nozzle does not hinder the transfer of small ions. Our high-resolution SIMION model (0.01 mm per grid unit), which takes fluid dynamics into consideration, is generally suitable for predicting the ion transmission through an atmospheric-vacuum system for mass spectrometry and enables the optimization of operational parameters. A Venturi nozzle inserted between the ion funnel and the mass spectrometer permits maximal ion transmission. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Evaluation of Aircraft Platforms for SOFIA by Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Klotz, S. P.; Srinivasan, G. R.; VanDalsem, William (Technical Monitor)

    1995-01-01

    The selection of an airborne platform for the Stratospheric Observatory for Infrared Astronomy (SOFIA) is based not only on economic cost but on technical criteria as well. Technical issues include aircraft fatigue, the resonant characteristics of the cavity-port shear layer, aircraft stability, the drag penalty of the open telescope bay, and telescope performance. Recently, two versions of the Boeing 747 aircraft, viz., the -SP and -200 configurations, were evaluated by computational fluid dynamics (CFD) for their suitability as SOFIA platforms. In each configuration the telescope was mounted behind the wings in an open bay with a nearly circular aperture. The geometry of the cavity, cavity aperture, and telescope was identical in both platforms. The aperture was located on the port side of the aircraft, and the elevation angle of the telescope, measured with respect to the vertical axis, was 50 degrees. The unsteady, viscous, three-dimensional aerodynamic and acoustic flow fields in the vicinity of SOFIA were simulated by an implicit, finite-difference Navier-Stokes flow solver (OVERFLOW) on a Chimera overset grid system. The computational domain was discretized by structured grids. Computations were performed at wind-tunnel and flight Reynolds numbers corresponding to one free-stream flow condition (M = 0.85, angle of attack alpha = 2.5 degrees, and sideslip angle beta = 0 degrees). The computational domains consisted of twenty-nine (29) overset grids in the wind-tunnel simulations and forty-five (45) grids in the simulations run at cruise flight conditions. The maximum number of grid points in the simulations was approximately 4 × 10^6. Issues considered in the evaluation study included analysis of the unsteady flow field in the cavity, the influence of the cavity on the flow across empennage surfaces, the drag penalty caused by the open telescope bay, and the noise radiating from cavity surfaces and the cavity-port shear layer. 
Wind-tunnel data were also available to compare to the CFD results; the data permitted an assessment of CFD as a design tool for the SOFIA program.

  18. 3D Fiber Orientation Simulation for Plastic Injection Molding

    NASA Astrophysics Data System (ADS)

    Lin, Baojiu; Jin, Xiaoshi; Zheng, Rong; Costa, Franco S.; Fan, Zhiliang

    2004-06-01

Glass-fiber-reinforced polymer is widely used in products made by injection molding. The distribution of fiber orientation inside plastic parts has a direct effect on the quality of molded parts. Using computer simulation to predict the fiber orientation distribution is one of the most efficient ways to assist engineers in warpage analysis and in finding a good design solution for producing high-quality plastic parts. Fiber orientation simulation software based on 2-1/2D (midplane/dual-domain mesh) techniques has been used in industry for a decade. However, the 2-1/2D technique is based on the planar Hele-Shaw approximation and is not suitable when the geometry has complex three-dimensional features that cannot be well approximated by 2D shells. Recently, full 3D fiber orientation simulation has been developed and integrated into the Moldflow Plastics Insight 3D simulation software. The theory for this new 3D fiber orientation calculation module is described in this paper. Several examples are also presented to show the benefits of using 3D fiber orientation simulation.

  19. An investigation of the information propagation and entropy transport aspects of Stirling machine numerical simulation

    NASA Technical Reports Server (NTRS)

    Goldberg, Louis F.

    1992-01-01

    Aspects of the information propagation modeling behavior of integral machine computer simulation programs are investigated in terms of a transmission line. In particular, the effects of pressure-linking and temporal integration algorithms on the amplitude ratio and phase angle predictions are compared against experimental and closed-form analytic data. It is concluded that the discretized, first order conservation balances may not be adequate for modeling information propagation effects at characteristic numbers less than about 24. An entropy transport equation suitable for generalized use in Stirling machine simulation is developed. The equation is evaluated by including it in a simulation of an incompressible oscillating flow apparatus designed to demonstrate the effect of flow oscillations on the enhancement of thermal diffusion. Numerical false diffusion is found to be a major factor inhibiting validation of the simulation predictions with experimental and closed-form analytic data. A generalized false diffusion correction algorithm is developed which allows the numerical results to match their analytic counterparts. Under these conditions, the simulation yields entropy predictions which satisfy Clausius' inequality.

  20. Range Process Simulation Tool

    NASA Technical Reports Server (NTRS)

    Phillips, Dave; Haas, William; Barth, Tim; Benjamin, Perakath; Graul, Michael; Bagatourova, Olga

    2005-01-01

Range Process Simulation Tool (RPST) is a computer program that assists managers in rapidly predicting and quantitatively assessing the operational effects of proposed technological additions to, and/or upgrades of, complex facilities and engineering systems such as the Eastern Test Range. Originally designed for application to space transportation systems, RPST is also suitable for assessing effects of proposed changes in industrial facilities and large organizations. RPST follows a model-based approach that includes finite-capacity schedule analysis and discrete-event process simulation. A component-based, scalable, open architecture makes RPST easily and rapidly tailorable for diverse applications. Specific RPST functions include: (1) definition of analysis objectives and performance metrics; (2) selection of process templates from a process-template library; (3) configuration of process models for detailed simulation and schedule analysis; (4) design of operations-analysis experiments; (5) schedule and simulation-based process analysis; and (6) optimization of performance by use of genetic algorithms and simulated annealing. The main benefits afforded by RPST are provision of information that can be used to reduce costs of operation and maintenance, and the capability for affordable, accurate, and reliable prediction and exploration of the consequences of many alternative proposed decisions.

  1. Adapting to life: ocean biogeochemical modelling and adaptive remeshing

    NASA Astrophysics Data System (ADS)

    Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.

    2014-05-01

An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, which are not well represented in global ocean models. This is partly due to their failure to resolve sub-mesoscale phenomena, which play a significant role in vertical nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical column (quasi-1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed mesh simulation and to observations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3. Unlike previous work, the adaptivity metric used here is flexible, and we show that capturing the physical behaviour of the model is paramount to achieving a reasonable solution. Adding biological quantities to the adaptivity metric further refines the solution. We then show the potential of this method in two case studies where we change the adaptivity metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa. We therefore demonstrate that adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high vertical resolution whilst minimising the number of elements in the mesh. 
More work is required to move this to fully 3-D simulations.
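The core idea of mesh adaptivity, refining where a field varies sharply and coarsening where it is smooth, can be illustrated for a single vertical column. The sketch below is a generic illustration of that principle only; the metric, the spacing bounds and the idealised nutrient profile are all invented here and are not the paper's adaptivity metric or model.

```python
import numpy as np

def adapt_mesh(z, field, zmin_spacing=1.0, zmax_spacing=50.0):
    """Build a vertical mesh whose node spacing shrinks where the field
    gradient is large (e.g. a nutricline) and grows where it is smooth.
    Illustrative metric, not the one used in the paper."""
    grad = np.abs(np.gradient(field, z))
    grad = grad / (grad.max() + 1e-12)        # normalised metric in [0, 1]
    # Target spacing: fine where the metric is ~1, coarse where it is ~0.
    spacing = zmax_spacing - (zmax_spacing - zmin_spacing) * grad
    new_z = [z[0]]
    while new_z[-1] < z[-1]:
        h = np.interp(new_z[-1], z, spacing)  # local target spacing
        new_z.append(min(new_z[-1] + h, z[-1]))
    return np.array(new_z)

# Idealised nutrient profile with a sharp gradient near 100 m depth.
z = np.linspace(0.0, 500.0, 501)
nutrient = 1.0 / (1.0 + np.exp(-(z - 100.0) / 10.0))
mesh = adapt_mesh(z, nutrient)
print(len(mesh), len(z))  # far fewer nodes than the uniform 1 m grid
```

The adaptive column concentrates roughly metre-scale elements around the nutricline while using tens-of-metres elements in the smooth deep layer, which is the element-count saving the abstract describes.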

  2. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    NASA Astrophysics Data System (ADS)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
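The key SBM ingredients, classical nucleation theory plus a Gaussian distribution of contact angles over a particle's nucleation sites, can be sketched numerically. The rate constants below are toy values chosen only so the curve behaves qualitatively; they are not the authors' fitted parameterization.

```python
import numpy as np

def frozen_fraction(T, t, n_sites=10, mu_deg=75.0, sigma_deg=10.0):
    """Illustrative soccer-ball-style frozen fraction (toy constants, NOT
    the authors' fit): each particle carries n_sites nucleation sites
    whose contact angles follow a Gaussian; a CNT-like rate, reduced by
    the shape factor f(theta), acts independently at each site over t."""
    theta = np.linspace(0.01, np.pi, 400)
    dth = theta[1] - theta[0]
    p = np.exp(-0.5 * ((theta - np.deg2rad(mu_deg)) / np.deg2rad(sigma_deg)) ** 2)
    p /= p.sum() * dth                                        # normalised pdf
    f = (2 + np.cos(theta)) * (1 - np.cos(theta)) ** 2 / 4.0  # CNT shape factor
    dT = max(273.15 - T, 0.0)                                 # supercooling (K)
    J = 1e-3 * np.exp(5.0 * (1.0 - f) * dT / 10.0)            # toy rate (1/s)
    p_site_unfrozen = (p * np.exp(-J * t)).sum() * dth        # angle average
    return 1.0 - p_site_unfrozen ** n_sites

# Frozen fraction grows with supercooling, as in immersion-freezing data.
for T in (268.0, 263.0, 258.0):
    print(T, round(frozen_fraction(T, t=10.0), 3))
```

Averaging the per-site survival probability over the contact-angle distribution, rather than Monte Carlo sampling of individual sites, is what makes this style of formulation cheap enough for cloud parcel models.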

  3. Numerical simulations of earthquakes and the dynamics of fault systems using the Finite Element method.

    NASA Astrophysics Data System (ADS)

    Kettle, L. M.; Mora, P.; Weatherley, D.; Gross, L.; Xing, H.

    2006-12-01

Simulations using the Finite Element method are widely used in many engineering applications and for the solution of partial differential equations (PDEs). Computational models based on the solution of PDEs play a key role in earth systems simulations. We present numerical modelling of crustal fault systems where the dynamic elastic wave equation is solved using the Finite Element method. This is achieved using a high level computational modelling language, escript, available as open source software from ACcESS (Australian Computational Earth Systems Simulator) at the University of Queensland. Escript is an advanced geophysical simulation software package developed at ACcESS which includes parallel equation solvers, data visualisation and data analysis software. The escript library was used to develop a flexible Finite Element model which reliably simulates the mechanism of faulting and the physics of earthquakes. Both 2D and 3D elastodynamic models are being developed to study the dynamics of crustal fault systems. Our final goal is to build a flexible model which can be applied to any fault system with user-defined geometry and input parameters. To study the physics of earthquake processes, two different time scales must be modelled: firstly, the quasi-static loading phase which gradually increases stress in the system (~100 years), and secondly, the dynamic rupture process which rapidly redistributes stress in the system (~100 seconds). We will discuss the solution of the time-dependent elastic wave equation for an arbitrary fault system using escript. This involves prescribing the correct initial stress distribution in the system to simulate the quasi-static loading of faults to failure; determining a suitable frictional constitutive law which accurately reproduces the dynamics of the stick/slip instability at the faults; and using a robust time integration scheme. These dynamic models generate data and information that can be used for earthquake forecasting.

  4. SimDoseCT: dose reporting software based on Monte Carlo simulation for a 320 detector-row cone-beam CT scanner and ICRP computational adult phantoms

    NASA Astrophysics Data System (ADS)

    Cros, Maria; Joemai, Raoul M. S.; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal

    2017-08-01

    This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. 
The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.

  5. SimDoseCT: dose reporting software based on Monte Carlo simulation for a 320 detector-row cone-beam CT scanner and ICRP computational adult phantoms.

    PubMed

    Cros, Maria; Joemai, Raoul M S; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal

    2017-07-17

    This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. 
The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.

  6. Linearly scaling and almost Hamiltonian dielectric continuum molecular dynamics simulations through fast multipole expansions

    NASA Astrophysics Data System (ADS)

    Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul

    2015-11-01

    Hamiltonian Dielectric Solvent (HADES) is a recent method [S. Bauer et al., J. Chem. Phys. 140, 104103 (2014)] which enables atomistic Hamiltonian molecular dynamics (MD) simulations of peptides and proteins in dielectric solvent continua. Such simulations become rapidly impractical for large proteins, because the computational effort of HADES scales quadratically with the number N of atoms. If one tries to achieve linear scaling by applying a fast multipole method (FMM) to the computation of the HADES electrostatics, the Hamiltonian character (conservation of total energy, linear, and angular momenta) may get lost. Here, we show that the Hamiltonian character of HADES can be almost completely preserved, if the structure-adapted fast multipole method (SAMM) as recently redesigned by Lorenzen et al. [J. Chem. Theory Comput. 10, 3244-3259 (2014)] is suitably extended and is chosen as the FMM module. By this extension, the HADES/SAMM forces become exact gradients of the HADES/SAMM energy. Their translational and rotational invariance then guarantees (within the limits of numerical accuracy) the exact conservation of the linear and angular momenta. Also, the total energy is essentially conserved—up to residual algorithmic noise, which is caused by the periodically repeated SAMM interaction list updates. These updates entail very small temporal discontinuities of the force description, because the employed SAMM approximations represent deliberately balanced compromises between accuracy and efficiency. The energy-gradient corrected version of SAMM can also be applied, of course, to MD simulations of all-atom solvent-solute systems enclosed by periodic boundary conditions. However, as we demonstrate in passing, this choice does not offer any serious advantages.

  7. Modified animal model and computer-assisted approach for dentoalveolar distraction osteogenesis to reconstruct unilateral maxillectomy defect.

    PubMed

    Feng, Zhihong; Zhao, Jinlong; Zhou, Libin; Dong, Yan; Zhao, Yimin

    2009-10-01

    The purpose of this report is to show the establishment of an animal model with a unilateral maxilla defect, application of virtual reality and rapid prototyping in the surgical planning for dentoalveolar distraction osteogenesis (DO). Two adult dogs were used to develop an animal model with a unilateral maxillary defect. The 3-dimensional model of the canine craniofacial skeleton was reconstructed with computed tomography data using the software Mimics, version 12.0 (Materialise Group, Leuven, Belgium). A virtual individual distractor was designed and transferred onto the model with the defect, and the osteotomies and distraction processes were simulated. A precise casting technique and numeric control technology were applied to produce the titanium distraction device, which was installed on the physical model with the defect, which was generated using Selective Laser Sintering technology, and the in vitro simulation of osteotomies and DO was done. The 2 dogs survived the operation and were lively. The osteotomies and distraction process were simulated successfully whether on the virtual or the physical model. The bone transport could be distracted to the desired position both in the virtual environment and on the physical model. The novel method to develop an animal model with a unilateral maxillary defect was feasible, and the animal model was suitable to develop the reconstruction method for unilateral maxillary defect cases with dentoalveolar DO. Computer-assisted surgical planning and simulation improved the reliability of the maxillofacial surgery, especially for the complex cases. The novel idea to reconstruct the unilateral maxillary defect with dentoalveolar DO was proved through the model experiment.

  8. Bio-Inspired Controller on an FPGA Applied to Closed-Loop Diaphragmatic Stimulation

    PubMed Central

    Zbrzeski, Adeline; Bornat, Yannick; Hillen, Brian; Siu, Ricardo; Abbas, James; Jung, Ranu; Renaud, Sylvie

    2016-01-01

Cervical spinal cord injury can disrupt connections between the brain respiratory network and the respiratory muscles, which can lead to partial or complete loss of ventilatory control and require ventilatory assistance. Unlike current open-loop technology, a closed-loop diaphragmatic pacing system could overcome the drawbacks of manual titration as well as respond to changing ventilation requirements. We present an original bio-inspired assistive technology for real-time ventilation assistance, implemented on a configurable digital Field Programmable Gate Array (FPGA). The bio-inspired controller, which is a spiking neural network (SNN) inspired by the medullary respiratory network, is as robust as a classic controller while having a flexible, low-power and low-cost hardware design. The system was simulated in MATLAB with FPGA-specific constraints and tested with a computational model of rat breathing; the model reproduced experimentally collected respiratory data in eupneic animals. The open-loop version of the bio-inspired controller was implemented on the FPGA. Electrical test bench characterizations confirmed the system functionality. Open- and closed-loop paradigms were simulated to test the FPGA system's real-time behavior using the rat computational model. The closed-loop system monitors breathing and changes in respiratory demands to drive diaphragmatic stimulation. The simulation results inform future acute animal experiments and constitute the first step toward the development of a neuromorphic, adaptive, compact, low-power, implantable device. The bio-inspired hardware design optimizes the FPGA resource and time costs while harnessing the computational power of spike-based neuromorphic hardware. Its real-time feature makes it suitable for in vivo applications. PMID:27378844
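As a generic illustration of why spiking neural networks map well onto FPGA fabric, the building block of such a controller can be written with integer-only arithmetic, which synthesizes directly to counters and comparators. The leaky integrate-and-fire neuron below is a sketch with invented parameters, not the network or values from the paper.

```python
def lif_neuron(inputs, weight=16, leak=1, threshold=100):
    """Integer-only leaky integrate-and-fire neuron (illustrative; the
    parameters are made up, not taken from the paper). Takes a binary
    input spike train and returns the output spike train."""
    v = 0                        # membrane potential (integer state register)
    spikes = []
    for s in inputs:
        v += weight * s          # integrate weighted input spikes
        v = max(v - leak, 0)     # constant leak toward rest, clamped at 0
        if v >= threshold:       # fire and reset
            spikes.append(1)
            v = 0
        else:
            spikes.append(0)
    return spikes

train = lif_neuron([1] * 20)     # steady input drives periodic firing
print(sum(train))                # fires twice in 20 steps with these values
```

Because the update uses only additions and comparisons on a small integer state, one such neuron costs a handful of logic slices, which is what makes a whole medullary-style network feasible in real time on a modest FPGA.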

  9. Synchronization techniques for all digital 16-ary QAM receivers operating over land mobile satellite links

    NASA Technical Reports Server (NTRS)

    Fines, P.; Aghvami, A. H.

    1990-01-01

The performance of a low bit rate (64 Kb/s) all-digital 16-ary Differentially Encoded Quadrature Amplitude Modulation (16-DEQAM) demodulator operating over a mobile satellite channel is considered. The synchronization and detection techniques employed to overcome the Rician channel impairments are described. The acquisition and steady-state performance of this modem are evaluated by computer simulation over AWGN and Rician channels. The results verify the suitability of 16-DEQAM transmission over slowly and/or mildly faded channels.

  10. Using the Unified Modelling Language (UML) to guide the systemic description of biological processes and systems.

    PubMed

    Roux-Rouquié, Magali; Caritey, Nicolas; Gaubert, Laurent; Rosenthal-Sabroux, Camille

    2004-07-01

One of the main issues in Systems Biology is dealing with semantic data integration. Previously, we examined the requirements for a reference conceptual model to guide semantic integration based on systemic principles. In the present paper, we examine the usefulness of the Unified Modelling Language (UML) for describing and specifying biological systems and processes. This yields unambiguous representations of biological systems that are suitable for translation into mathematical and computational formalisms, enabling analysis, simulation and prediction of these systems' behaviours.

  11. Computational investigation of suitable polymer gel composition for the QA of the beam components of a BNCT irradiation field.

    PubMed

    Tanaka, Kenichi; Sakurai, Yoshinori; Hayashi, Shin-Ichiro; Kajimoto, Tsuyoshi; Uchida, Ryohei; Tanaka, Hiroki; Takata, Takushi; Bengua, Gerard; Endo, Satoru

    2017-09-01

This study investigated the optimum composition of the MAGAT polymer gel to be used in quality assurance measurements of the thermal neutron, fast neutron and gamma-ray components of the irradiation field used for boron neutron capture therapy at the Kyoto University Reactor. Simulations using the PHITS code showed that, when combined with the gel, 6Li concentrations of 0, 10 and 100 ppm were potentially usable.

  12. Thermophysical properties of hydrogen along the liquid-vapor coexistence

    NASA Astrophysics Data System (ADS)

    Osman, S. M.; Sulaiman, N.; Bahaa Khedr, M.

    2016-05-01

We present theoretical calculations of the liquid-vapor coexistence (LVC) curve of fluid hydrogen within first-order perturbation theory with a suitable first-order quantum correction to the free energy. In the present equation of state, we incorporate the dimerization of the H2 molecule by treating the fluid as a hard-convex-body fluid. The thermophysical properties of fluid H2 along the LVC curve, including the pressure-temperature dependence, density-temperature asymmetry, volume expansivity, entropy and enthalpy, are calculated and compared with computer simulation and empirical results.

  13. Digital adaptive flight controller development

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Alag, G.; Berry, P.; Kotob, S.

    1974-01-01

    A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Two designs are described for an example aircraft. Each of these designs uses a weighted least squares procedure to identify parameters defining the dynamics of the aircraft. The two designs differ in the way in which control law parameters are determined. One uses the solution of an optimal linear regulator problem to determine these parameters while the other uses a procedure called single stage optimization. Extensive simulation results and analysis leading to the designs are presented.

  14. Performance Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis with Different Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel

Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility testing of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today's power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
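The counter-based idea evaluated in this class of schemes can be sketched in a few lines: a shared counter hands the next contingency index to whichever worker becomes free, so faster workers naturally process more cases. The sketch below is illustrative only (Python threads standing in for HPC processes; the sleep stands in for a power-flow solve); it is not the paper's implementation.

```python
import random
import threading
import time

def run_contingencies(n_cases, n_workers=4):
    """Counter-based dynamic load balancing sketch: workers repeatedly
    perform an atomic fetch-and-increment on a shared counter to claim
    the next contingency case until all cases are exhausted."""
    counter = {"next": 0}
    lock = threading.Lock()
    done = [0] * n_workers            # cases completed per worker

    def worker(wid):
        while True:
            with lock:                # atomic fetch-and-increment
                case = counter["next"]
                if case >= n_cases:
                    return            # no work left
                counter["next"] += 1
            time.sleep(random.uniform(0, 0.002))  # stand-in for a solve
            done[wid] += 1

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done

counts = run_contingencies(200)
print(counts, sum(counts))            # all 200 cases processed exactly once
```

Compared with a static split of cases, the counter absorbs the large run-time variation between contingency cases, which is the load-imbalance problem these schemes target.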

  15. Quantum computing applied to calculations of molecular energies: CH2 benchmark.

    PubMed

    Veis, Libor; Pittner, Jiří

    2010-11-21

    Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that they, if available, would be able to perform the full configuration interaction (FCI) energy calculations with a polynomial scaling. This is in contrast to conventional computers where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest lying electronic states of CH(2) molecule. This molecule was chosen as a benchmark, since its two lowest lying (1)A(1) states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.
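The iterative phase estimation at the heart of such quantum FCI calculations can be followed classically when the register holds an exact eigenstate. The sketch below walks through the circuit logic with ideal, noiseless measurements (a didactic illustration, not the authors' simulation code): bits of the eigenphase are measured least significant first, and each result is fed back as a phase correction for the next round.

```python
import math

def ipea(phi, m=8):
    """Iterative phase estimation for a known eigenphase phi of U, with
    the register in an exact eigenstate and ideal measurements. Returns
    the m-bit binary-fraction estimate of phi."""
    bits = []
    omega = 0.0                               # accumulated feedback phase
    for k in range(m, 0, -1):
        # Phase kicked onto the ancilla by controlled-U^(2^(k-1)),
        # minus the correction for the bits already measured.
        theta = 2.0 * math.pi * phi * 2 ** (k - 1) - omega
        p1 = math.sin(theta / 2.0) ** 2       # P(measure ancilla = 1)
        bit = 1 if p1 > 0.5 else 0            # ideal (most likely) outcome
        bits.append(bit)
        omega = omega / 2.0 + math.pi * bit / 2.0
    bits.reverse()                            # most significant bit first
    return sum(b / 2.0 ** (i + 1) for i, b in enumerate(bits))

print(ipea(0.625, m=3))   # recovers 0.101 in binary -> 0.625
```

When the register is not an exact eigenstate, as for the multireference CH(2) states discussed above, p1 is no longer 0 or 1, and the success probability of each bit depends on the overlap with the target eigenstate; that is the regime where a suitably chosen initial state enables the probability amplification the abstract mentions.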

  16. Figures of Merit Software: Description, User's Guide, Installation Notes, Versions Description, and License Agreement

    NASA Technical Reports Server (NTRS)

Hoelzer, H. D.; Fourroux, K. A.; Rickman, D. L.; Schrader, C. M.

    2011-01-01

Figures of Merit (FoMs) and the FoM software provide a method for quantitatively evaluating the quality of a regolith simulant by comparing the simulant to a reference material. FoMs may be used for comparing a simulant to actual regolith material; for specification, by stating the values a simulant's FoMs must attain to be suitable for a given application; and for comparing simulants from different vendors or production runs. FoMs may even be used to compare different simulants to each other. A single FoM is conceptually an algorithm that computes a single number quantifying the similarity or difference of a single characteristic of a simulant material and a reference material, providing a clear measure of how well the simulant and reference material match. FoMs have been constructed to lie between zero and 1, with zero indicating a poor or no match and 1 indicating a perfect match. FoMs are defined for modal composition, particle size distribution, particle shape distribution (aspect ratio and angularity), and density. This TM covers the mathematics, use, installation, and licensing for the existing FoM code in detail.
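One simple way to build a figure of merit of the "single number in [0, 1]" kind described above is to score a distribution-type characteristic, such as modal composition, with one minus the total-variation distance between the two distributions. This is a stand-in construction for illustration, not the algorithms defined in the TM, and the compositions below are invented.

```python
import numpy as np

def fom_distribution(ref_fractions, sim_fractions):
    """Illustrative figure of merit for two distributions given as
    fractions over the same bins: 1.0 = perfect match, 0.0 = fully
    disjoint distributions. Not the TM's actual algorithm."""
    ref = np.asarray(ref_fractions, dtype=float)
    sim = np.asarray(sim_fractions, dtype=float)
    ref /= ref.sum()                              # normalise to fractions
    sim /= sim.sum()
    return 1.0 - 0.5 * np.abs(ref - sim).sum()    # 1 - total variation distance

# Hypothetical modal composition of a reference regolith vs. two simulants
# (bins could be e.g. plagioclase, pyroxene, olivine, glass).
reference = [0.45, 0.30, 0.15, 0.10]
simulant_a = [0.40, 0.33, 0.17, 0.10]
simulant_b = [0.10, 0.10, 0.30, 0.50]
print(round(fom_distribution(reference, simulant_a), 3))  # -> 0.95
print(round(fom_distribution(reference, simulant_b), 3))  # -> 0.45
```

The construction is bounded by design: the total-variation distance between two probability vectors lies in [0, 1], so the score always lands in the [0, 1] range the TM specifies for its FoMs.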

  17. Visualization in simulation tools: requirements and a tool specification to support the teaching of dynamic biological processes.

    PubMed

    Jørgensen, Katarina M; Haddow, Pauline C

    2011-08-01

Simulation tools are playing an increasingly important role behind advances in the field of systems biology. However, the current generation of biological science students has little or no experience with such tools. As such, this educational gap limits both the potential use of such tools and the potential for tighter cooperation between their designers and users. Although some simulation tool producers encourage their use in teaching, little attempt has hitherto been made to analyze and discuss their suitability as educational tools for non-computing-science students. In general, today's simulation tools assume that the user has a stronger mathematical and computing background than that found in most biological science curricula, thus making the introduction of such tools a considerable pedagogical challenge. This paper provides an evaluation of the pedagogical attributes of existing simulation tools for cell signal transduction based on Cognitive Load theory. Further, design recommendations for an improved educational simulation tool are provided. The study is based on simulation tools for cell signal transduction. However, the discussions are relevant to a broader biological simulation tool set.

  18. Numerical simulation of magnetic field for compact electromagnet consisting of REBCO coils and iron yoke

    NASA Astrophysics Data System (ADS)

    You, Shuangrong; Chi, Changxin; Guo, Yanqun; Bai, Chuanyi; Liu, Zhiyong; Lu, Yuming; Cai, Chuanbing

    2018-07-01

This paper presents the numerical simulation of a high-temperature superconductor electromagnet consisting of REBCO (REBa2Cu3O7-x, RE: rare earth) superconducting tapes and a ferromagnetic iron yoke. The REBCO coils, with a multi-width design, operate at 77 K, with the iron yoke at room temperature, providing a magnetic space with a 32 mm gap between two poles. The finite element method is applied to compute the 3D model of the studied magnet. Simulated results show that the magnet generates a 1.5 T magnetic field at an operating current of 38.7 A, and the spatial inhomogeneity of the field is 0.8% in a 20 mm diameter spherical volume. Compared with a conventional iron electromagnet, the present compact design is more suitable for practical application.

  19. Finite element simulation of adaptive aerospace structures with SMA actuators

    NASA Astrophysics Data System (ADS)

    Frautschi, Jason; Seelecke, Stefan

    2003-07-01

The particular demands of aerospace engineering have spawned many of the developments in the field of adaptive structures. Shape memory alloys are particularly attractive as actuators in these types of structures due to their large strains, high specific work output and potential for structural integration. However, the requisite extensive physical testing has slowed development of potential applications and highlighted the need for a simulation tool for feasibility studies. In this paper we present an implementation of an extended version of the Müller-Achenbach SMA model into a commercial finite element code suitable for such studies. Interaction between the SMA model and the solution algorithm for the global FE equations is thoroughly investigated with respect to the effect of tolerances and time step size on convergence, computational cost and accuracy. Finally, a simulation of a SMA-actuated flexible trailing edge of an aircraft wing modeled with beam elements is presented.

  20. Arrays of individually controlled ions suitable for two-dimensional quantum simulations

    PubMed Central

    Mielenz, Manuel; Kalis, Henning; Wittemer, Matthias; Hakelberg, Frederick; Warring, Ulrich; Schmied, Roman; Blain, Matthew; Maunz, Peter; Moehring, David L.; Leibfried, Dietrich; Schaetz, Tobias

    2016-01-01

    A precisely controlled quantum system may reveal a fundamental understanding of another, less accessible system of interest. A universal quantum computer is currently out of reach, but an analogue quantum simulator that makes relevant observables, interactions and states of a quantum model accessible could permit insight into complex dynamics. Several platforms have been suggested and proof-of-principle experiments have been conducted. Here, we operate two-dimensional arrays of three trapped ions in individually controlled harmonic wells forming equilateral triangles with side lengths 40 and 80 μm. In our approach, which is scalable to arbitrary two-dimensional lattices, we demonstrate individual control of the electronic and motional degrees of freedom, preparation of a fiducial initial state with ion motion close to the ground state, as well as a tuning of couplings between ions within experimental sequences. Our work paves the way towards a quantum simulator of two-dimensional systems designed at will. PMID:27291425

  1. Ensembler: Enabling High-Throughput Molecular Simulations at the Superfamily Scale.

    PubMed

    Parton, Daniel L; Grinaway, Patrick B; Hanson, Sonya M; Beauchamp, Kyle A; Chodera, John D

    2016-06-01

    The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because the software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences, from a single sequence to an entire superfamily, and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics, such as Markov state models (MSMs), which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates. Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.

  2. SU-E-I-107: Suitability of Various Radiation Detectors Used in Radiation Therapy for X-Ray Dosimetry in Computed Tomography.

    PubMed

    Liebmann, M; Poppe, B; von Boetticher, H

    2012-06-01

    Assessment of the suitability for X-ray dosimetry in computed tomography of various ionization chambers, diodes and two-dimensional detector arrays primarily used in radiation therapy. An Oldelft X-ray simulation unit was used to irradiate PTW 60008 and 60012 dosimetry diodes, PTW 23332, 31013, 31010 and 31006 axially symmetric ionization chambers, PTW 23343 and 34001 plane-parallel ionization chambers, the PTW Starcheck and 2D-Array seven29, as well as a prototype Farmer chamber with a copper wall. Peak tube potential was varied from 50 kV up to 125 kV and beam qualities were quantified through half-value-layer measurements. Energy response was investigated free in air as well as at 2 cm depth in a solid water phantom, and refers to a manufacturer-calibrated PTW 60004 diode for kV dosimetry. The thimble ionization chambers PTW 31010 and 31013, the unencapsulated diode PTW 60012 and the PTW 2D-Array seven29 exhibit an energy response deviation in the investigated energy region of approximately 10% or lower, thus proving good usability in X-ray dosimetry when higher spatial resolution is needed or rotational irradiations occur. It could be shown that detectors routinely used in radiation therapy are also usable in a much lower energy region. The rotational symmetry is advantageous in computed tomography dosimetry and enables dose profile as well as point dose measurements in a suitable phantom for the estimation of organ doses. Additionally, the PTW 2D-Array seven29 can give a quick overview of radiation fields in non-rotating tasks. © 2012 American Association of Physicists in Medicine.

  3. An Agent-Based Epidemic Simulation of Social Behaviors Affecting HIV Transmission among Taiwanese Homosexuals

    PubMed Central

    2015-01-01

    Computational simulations are currently used to identify epidemic dynamics, to test potential prevention and intervention strategies, and to study the effects of social behaviors on HIV transmission. The author describes an agent-based epidemic simulation model of a network of individuals who participate in high-risk sexual practices, using number of partners, condom usage, and relationship length to distinguish between high- and low-risk populations. Two new concepts—free links and fixed links—are used to indicate tendencies among individuals who either have large numbers of short-term partners or stay in long-term monogamous relationships. An attempt was made to reproduce epidemic curves of reported HIV cases among male homosexuals in Taiwan prior to using the agent-based model to determine the effects of various policies on epidemic dynamics. Results suggest that when suitable adjustments are made based on available social survey statistics, the model accurately simulates real-world behaviors on a large scale. PMID:25815047
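
    The free-link/fixed-link distinction can be illustrated with a minimal agent-based sketch. All names and parameter values below are hypothetical, not taken from the paper's model: monogamous pairs interact through a fixed link each step, free links are drawn at random, and condom usage gates each contact.

```python
import random

# Illustrative agent-based transmission sketch (hypothetical parameters).
# "Fixed" links model long-term monogamous partners; "free" links model
# short-term random contacts.
random.seed(1)

N = 200                 # population size
P_TRANSMIT = 0.05       # per-contact transmission probability (assumed)
P_CONDOM = 0.6          # probability a contact is protected (assumed)
STEPS = 50

infected = {0}          # index case
fixed_partner = {i: i + 1 for i in range(0, N, 2)}   # monogamous pairs
fixed_partner.update({v: k for k, v in fixed_partner.items()})

for _ in range(STEPS):
    new_infections = set()
    for i in list(infected):
        # one fixed-link contact plus one random free-link contact
        contacts = [fixed_partner[i], random.randrange(N)]
        for j in contacts:
            if j not in infected and random.random() > P_CONDOM:
                if random.random() < P_TRANSMIT:
                    new_infections.add(j)
    infected |= new_infections

print(f"prevalence after {STEPS} steps: {len(infected)}/{N}")
```

    In a full model, relationship length and partner counts would be calibrated against social survey statistics, as the author does for the Taiwanese data.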

  4. Energy efficient hybrid computing systems using spin devices

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank

    Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin currents facilitate non-Boolean computation like majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, thereby resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-von-Neumann architectures. The spin-based designs involve `mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS designs, for optimal spin-device parameters.

  5. Lattice Boltzmann multi-phase simulations in porous media using Multiple GPUs

    NASA Astrophysics Data System (ADS)

    Toelke, J.; De Prisco, G.; Mu, Y.

    2011-12-01

    Ingrain's digital rock physics lab computes the physical properties and fluid flow characteristics of oil and gas reservoir rocks including shales, carbonates and sandstones. Ingrain uses advanced lattice Boltzmann methods (LBM) to simulate multiphase flow in the rocks (porous media). We present a very efficient implementation of these methods based on CUDA. Because LBM operates on a finite difference grid, is explicit in nature, and requires only next-neighbor interactions, it is well suited to implementation on GPUs. Since GPU hardware allows for very fine-grained parallelism, every lattice site can be handled by a different core. Data has to be loaded from and stored to the device memory in such a way that dense access to the memory is ensured. This can be achieved by accessing the lattice nodes with respect to their contiguous memory locations [1,2]. The simulation engine uses a sparse data structure to represent the grid and advanced algorithms to handle the moving fluid-fluid interface. The simulations are accelerated on one GPU by one order of magnitude compared to a state-of-the-art multicore desktop computer. The engine is parallelized using MPI and runs on multiple GPUs in the same node or across an Infiniband network. Simulations with up to 50 GPUs in parallel are presented. With this simulator it is possible to perform pore-scale multi-phase (oil-water-matrix) simulations in natural porous media in a commercial manner and to predict important rock properties like absolute permeability, relative permeabilities and capillary pressure [3,4]. Results and videos of these simulations in complex real-world porous media and rocks are presented and discussed.
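
    The next-neighbor locality that makes LBM attractive for GPUs can be seen in a minimal NumPy sketch of the D2Q9 streaming step. This is illustrative only; the engine described above is a CUDA/MPI implementation on a sparse grid.

```python
import numpy as np

# Minimal D2Q9 lattice Boltzmann streaming step (illustrative sketch).
# Each of the 9 distributions moves to its next neighbor, so every node
# can be updated independently -- one GPU thread per lattice site.
NX, NY = 8, 8
velocities = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)]

rng = np.random.default_rng(0)
f = rng.random((9, NX, NY))            # distribution functions
mass_before = f.sum()

# Streaming: shift each distribution along its lattice velocity.
# np.roll gives periodic boundaries; a real pore-scale code applies
# bounce-back at solid pore walls instead.
for k, (cx, cy) in enumerate(velocities):
    f[k] = np.roll(np.roll(f[k], cx, axis=0), cy, axis=1)

assert np.isclose(f.sum(), mass_before)   # streaming conserves mass
print("streamed OK, mass conserved")
```

    Because streaming is a pure permutation of memory, the performance question on a GPU reduces to laying the distributions out so that neighboring threads read contiguous addresses, as the abstract notes.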

  6. Full-Body Musculoskeletal Model for Muscle-Driven Simulation of Human Gait.

    PubMed

    Rajagopal, Apoorva; Dembia, Christopher L; DeMers, Matthew S; Delp, Denny D; Hicks, Jennifer L; Delp, Scott L

    2016-10-01

    Musculoskeletal models provide a non-invasive means to study human movement and predict the effects of interventions on gait. Our goal was to create an open-source 3-D musculoskeletal model with high-fidelity representations of the lower limb musculature of healthy young individuals that can be used to generate accurate simulations of gait. Our model includes bony geometry for the full body, 37 degrees of freedom to define joint kinematics, Hill-type models of 80 muscle-tendon units actuating the lower limbs, and 17 ideal torque actuators driving the upper body. The model's musculotendon parameters are derived from previous anatomical measurements of 21 cadaver specimens and magnetic resonance images of 24 young healthy subjects. We tested the model by evaluating its computational time and accuracy of simulations of healthy walking and running. Generating muscle-driven simulations of normal walking and running took approximately 10 minutes on a typical desktop computer. The differences between our muscle-generated and inverse dynamics joint moments were within 3% (RMSE) of the peak inverse dynamics joint moments in both walking and running, and our simulated muscle activity showed qualitative agreement with salient features from experimental electromyography data. These results suggest that our model is suitable for generating muscle-driven simulations of healthy gait. We encourage other researchers to further validate and apply the model to study other motions of the lower extremity. The model is implemented in the open-source software platform OpenSim. The model and data used to create and test the simulations are freely available at https://simtk.org/home/full_body/, allowing others to reproduce these results and create their own simulations.
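
    The RMSE-as-percentage-of-peak metric used to compare muscle-generated and inverse dynamics moments can be sketched as follows; the joint-moment values below are made up for illustration.

```python
import math

def rmse_percent_of_peak(simulated, reference):
    """RMSE between two signals, expressed as a percentage of the peak
    absolute value of the reference signal."""
    rmse = math.sqrt(sum((s - r) ** 2 for s, r in zip(simulated, reference))
                     / len(reference))
    peak = max(abs(r) for r in reference)
    return 100.0 * rmse / peak

# toy joint-moment curves (hypothetical values, in N*m)
ref = [0.0, 20.0, 50.0, 30.0, 5.0]
sim = [0.5, 20.5, 49.0, 30.5, 5.5]
print(f"{rmse_percent_of_peak(sim, ref):.2f}% of peak")
```

    A value below 3% for both walking and running is the threshold the authors report for their model.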

  7. Full body musculoskeletal model for muscle-driven simulation of human gait

    PubMed Central

    Rajagopal, Apoorva; Dembia, Christopher L.; DeMers, Matthew S.; Delp, Denny D.; Hicks, Jennifer L.; Delp, Scott L.

    2017-01-01

    Objective Musculoskeletal models provide a non-invasive means to study human movement and predict the effects of interventions on gait. Our goal was to create an open-source, three-dimensional musculoskeletal model with high-fidelity representations of the lower limb musculature of healthy young individuals that can be used to generate accurate simulations of gait. Methods Our model includes bony geometry for the full body, 37 degrees of freedom to define joint kinematics, Hill-type models of 80 muscle-tendon units actuating the lower limbs, and 17 ideal torque actuators driving the upper body. The model’s musculotendon parameters are derived from previous anatomical measurements of 21 cadaver specimens and magnetic resonance images of 24 young healthy subjects. We tested the model by evaluating its computational time and accuracy of simulations of healthy walking and running. Results Generating muscle-driven simulations of normal walking and running took approximately 10 minutes on a typical desktop computer. The differences between our muscle-generated and inverse dynamics joint moments were within 3% (RMSE) of the peak inverse dynamics joint moments in both walking and running, and our simulated muscle activity showed qualitative agreement with salient features from experimental electromyography data. Conclusion These results suggest that our model is suitable for generating muscle-driven simulations of healthy gait. We encourage other researchers to further validate and apply the model to study other motions of the lower-extremity. Significance The model is implemented in the open source software platform OpenSim. The model and data used to create and test the simulations are freely available at https://simtk.org/home/full_body/, allowing others to reproduce these results and create their own simulations. PMID:27392337

  8. ASP-G: an ASP-based method for finding attractors in genetic regulatory networks

    PubMed Central

    Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine

    2014-01-01

    Motivation: Boolean network models are suitable to simulate GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions on how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of the assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors of previously published studies. We also provide an example of how performing simulations of network models under different settings helps determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is in its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than more dedicated systems but still achieves good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
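
    For readers unfamiliar with attractor detection, the following minimal sketch enumerates the attractors of a toy three-gene Boolean network under a synchronous update scheme. ASP-G itself is implemented declaratively in Answer Set Programming; the network and interaction rules here are hypothetical.

```python
from itertools import product

# Toy 3-gene Boolean GRN with hypothetical interaction rules.
rules = {
    "A": lambda s: not s["C"],          # C represses A
    "B": lambda s: s["A"],              # A activates B
    "C": lambda s: s["A"] and s["B"],   # A and B jointly activate C
}
genes = sorted(rules)

def step(state):
    # synchronous update scheme: all genes updated at once
    named = {g: v for g, v in zip(genes, state)}
    return tuple(rules[g](named) for g in genes)

attractors = set()
for init in product([False, True], repeat=len(genes)):
    seen, state = [], init
    while state not in seen:            # iterate until a state repeats
        seen.append(state)
        state = step(state)
    cycle = tuple(seen[seen.index(state):])   # the attractor reached
    # canonical rotation so the same cycle is counted only once
    rotations = [cycle[i:] + cycle[:i] for i in range(len(cycle))]
    attractors.add(min(rotations))

print(f"{len(attractors)} attractor(s) found")
```

    Swapping in an asynchronous update scheme, or altering a single interaction rule, can change the attractor landscape entirely, which is exactly the sensitivity to assumptions that motivates ASP-G's modular design.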

  9. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    NASA Astrophysics Data System (ADS)

    Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan

    2017-10-01

    Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains at varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.
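
    The Nash-Sutcliffe efficiency coefficient used to score the simulations can be computed as below; the hydrograph values are illustrative, not taken from the study.

```python
def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    is no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# toy discharge hydrographs (hypothetical values, in m^3/s)
obs = [100.0, 250.0, 900.0, 600.0, 300.0]
sim = [120.0, 240.0, 850.0, 630.0, 280.0]
print(f"NSE = {nash_sutcliffe(sim, obs):.3f}")
```

    A drop of more than 50 % in this score, as reported for the SDC runs between 10 km and 1 km resolution, indicates a substantial loss of predictive skill.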

  10. Determination of tissue equivalent materials of a physical 8-year-old phantom for use in computed tomography

    NASA Astrophysics Data System (ADS)

    Akhlaghi, Parisa; Miri Hakimabad, Hashem; Rafat Motavalli, Laleh

    2015-07-01

    This paper reports on the methodology applied to select suitable tissue equivalent materials of an 8-year-old phantom for use in computed tomography (CT) examinations. To find appropriate tissue substitutes, the physical properties (physical density, electronic density, effective atomic number, mass attenuation coefficient and CT number) of different materials were first studied. Results showed that the physical properties of water and polyurethane (as soft tissue), B-100 and polyvinyl chloride (PVC) (as bone) and polyurethane foam (as lung) agree best with those of the original tissues. In the next step, the absorbed doses at the locations of 25 thermoluminescent dosimeters (TLDs), as well as the dose distribution in one slice of the phantom, were calculated for the original and proposed materials by Monte Carlo simulation at different tube voltages. The comparisons suggested that at tube voltages of 80 and 100 kVp, using B-100 as bone, water as soft tissue and polyurethane foam as lung is suitable for dosimetric studies in pediatric CT examinations. In addition, it was concluded that by considering just the mass attenuation coefficients of different materials, appropriate tissue equivalent substitutes for any desired X-ray energy range can be found.

  11. Numerical simulation of cavitation flow characteristic on Pelton turbine bucket surface

    NASA Astrophysics Data System (ADS)

    Zeng, C. J.; Xiao, Y. X.; Zhu, W.; Yao, Y. Y.; Wang, Z. W.

    2015-01-01

    The internal flow in the rotating bucket of a Pelton turbine is a free water-sheet flow with a moving boundary. The runner operates at atmospheric pressure, and cavitation in the bucket remains a controversial problem. However, growing field experience has shown that cavitation does occur in Pelton turbine buckets and that, in the worst case, cavitation erosion can damage the bucket. A reliable prediction of the cavitation flow on the bucket surface of a Pelton turbine, and of the resulting cavitation erosion characteristics, can therefore effectively guide the optimization of the Pelton runner bucket and the stable operation of the unit. This paper investigates appropriate numerical models and methods for the unsteady 3D water-air-vapour multiphase cavitation flow which may occur on the Pelton bucket surface. The computational domain includes the nozzle pipe flow, the semi-free surface jet and the runner domain. By comparing the numerical results of different turbulence, cavitation and multiphase models, this paper determines a suitable numerical model and method for the simulation of cavitation on the Pelton bucket surface. To investigate the conditions corresponding to cavitation phenomena on the bucket surface, the suitable model is then used to simulate various operating conditions with different water heads and needle travels. Finally, the characteristics of the cavitation flow and the development process of cavitation are analysed in great detail.

  12. Particle/Continuum Hybrid Simulation in a Parallel Computing Environment

    NASA Technical Reports Server (NTRS)

    Baganoff, Donald

    1996-01-01

    The objective of this study was to modify an existing parallel particle code based on the direct simulation Monte Carlo (DSMC) method to include a Navier-Stokes (NS) calculation so that a hybrid solution could be developed. In carrying out this work, it was determined that the following five issues had to be addressed before extensive program development of a three dimensional capability was pursued: (1) find a set of one-sided kinetic fluxes that are fully compatible with the DSMC method, (2) develop a finite volume scheme to make use of these one-sided kinetic fluxes, (3) make use of the one-sided kinetic fluxes together with DSMC type boundary conditions at a material surface so that velocity slip and temperature slip arise naturally for near-continuum conditions, (4) find a suitable sampling scheme so that the values of the one-sided fluxes predicted by the NS solution at an interface between the two domains can be converted into the correct distribution of particles to be introduced into the DSMC domain, (5) carry out a suitable number of tests to confirm that the developed concepts are valid, individually and in concert for a hybrid scheme.

  13. Building A Simulation Model For The Prediction Of Temperature Distribution In Pulsed Laser Spot Welding Of Dissimilar Low Carbon Steel 1020 To Aluminum Alloy 6061

    NASA Astrophysics Data System (ADS)

    Yousef, Adel K. M.; Taha, Ziad. A.; Shehab, Abeer A.

    2011-01-01

    This paper describes the development of a computer model used to analyze the heat flow during pulsed Nd:YAG laser spot welding of dissimilar metals: low carbon steel (1020) to aluminum alloy (6061). The model is built using ANSYS FLUENT 3.6 software, with the simulated environment set up to closely match the experimental conditions. A simulation analysis was implemented based on conduction heat transfer outside the keyhole, where no melting occurs. The effects of laser power and pulse duration were studied. Three peak powers of 1, 1.66 and 2.5 kW were applied during pulsed laser spot welding (keeping the energy constant), and the effect of two pulse durations, 4 and 8 ms (at constant peak power), on the transient temperature distribution and weld pool dimensions was predicted using the present simulation. It was found that the present simulation model can give an indication for choosing suitable laser parameters (i.e. pulse duration, peak power and interaction time) for pulsed laser spot welding of dissimilar metals.

  14. A Simple Evacuation Modeling and Simulation Tool for First Responders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Daniel B; Payne, Patricia W

    2015-01-01

    Although modeling and simulation of mass evacuations during a natural or man-made disaster is an on-going and vigorous area of study, tool adoption by front-line first responders is uneven. Some of the factors that account for this situation include the cost and complexity of the software. For several years, Oak Ridge National Laboratory has been actively developing the free Incident Management Preparedness and Coordination Toolkit (IMPACT) to address these issues. One of the components of IMPACT is a multi-agent simulation module for area-based and path-based evacuations. The user interface is designed so that anyone familiar with typical computer drawing tools can quickly author a geospatially-correct evacuation visualization suitable for table-top exercises. Since IMPACT is designed for use in the field where network communications may not be available, quick on-site evacuation alternatives can be evaluated to keep pace with a fluid threat situation. Realism is enhanced by incorporating collision avoidance into the simulation. Statistics are gathered as the simulation unfolds, including, most importantly, time-to-evacuate, to help first responders choose the best course of action.

  15. Multi-mode horn antenna simulation

    NASA Technical Reports Server (NTRS)

    Dod, L. R.; Wolf, J. D.

    1980-01-01

    Radiation patterns were computed for a circular multimode horn antenna using waveguide electric field radiation expressions. The circular multimode horn was considered as a possible reflector feed antenna for the Large Antenna Multifrequency Microwave Radiometer (LAMMR). This horn antenna uses a summation of the TE11 and TM11 modes to generate far-field primary radiation patterns with equal E- and H-plane beamwidths and low sidelobes. A computer program for the radiation field expressions using the summation of waveguide radiation modes is described. The sensitivity of the multimode horn antenna radiation patterns to phase variations between the two modes is given. Sample radiation pattern calculations for a reflector feed horn for LAMMR are shown. The multimode horn antenna provides a low noise feed suitable for radiometric applications.

  16. In Silico Investigations of the Anti-Catabolic Effects of Pamidronate and Denosumab on Multiple Myeloma-Induced Bone Disease

    PubMed Central

    Wang, Yan; Lin, Bo

    2012-01-01

    It is unclear whether the new anti-catabolic agent denosumab represents a viable alternative to the widely used anti-catabolic agent pamidronate in the treatment of Multiple Myeloma (MM)-induced bone disease. This lack of clarity primarily stems from the lack of sufficient clinical investigations, which are costly and time consuming. However, in silico investigations require less time and expense, suggesting that they may be a useful complement to traditional clinical investigations. In this paper, we aim to (i) develop integrated computational models that are suitable for investigating the effects of pamidronate and denosumab on MM-induced bone disease and (ii) evaluate the responses to pamidronate and denosumab treatments using these integrated models. To achieve these goals, pharmacokinetic models of pamidronate and denosumab are first developed and then calibrated and validated using different clinical datasets. Next, the integrated computational models are developed by incorporating the simulated transient concentrations of pamidronate and denosumab and simulations of their actions on the MM-bone compartment into the previously proposed MM-bone model. These integrated models are further calibrated and validated by different clinical datasets so that they are suitable to be applied to investigate the responses to the pamidronate and denosumab treatments. Finally, these responses are evaluated by quantifying the bone volume, bone turnover, and MM-cell density. This evaluation identifies four denosumab regimes that potentially produce an overall improved bone-related response compared with the recommended pamidronate regime. This in silico investigation supports the idea that denosumab represents an appropriate alternative to pamidronate in the treatment of MM-induced bone disease. PMID:23028650
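
    As background, a pharmacokinetic model of the kind calibrated in such studies can be as simple as a one-compartment model with first-order elimination. The sketch below uses hypothetical parameter values, not the paper's calibrated PK parameters for pamidronate or denosumab.

```python
import math

# One-compartment PK sketch with first-order elimination
# (all parameter values below are hypothetical, for illustration only).
def concentration(dose_mg, v_d_litres, k_elim_per_h, t_hours):
    """Plasma concentration after an IV bolus: C(t) = (D/Vd) * exp(-k*t)."""
    return (dose_mg / v_d_litres) * math.exp(-k_elim_per_h * t_hours)

dose, vd, k = 90.0, 40.0, 0.1    # hypothetical dose, volume, rate constant
for t in (0, 12, 24):
    print(f"t = {t:2d} h  C = {concentration(dose, vd, k, t):.3f} mg/L")
```

    In the integrated models, the transient concentration curve produced by such a PK model drives the drug's action term in the bone-remodeling compartment.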

  17. Computer simulation of population dynamics inside the urban environment

    NASA Astrophysics Data System (ADS)

    Andreev, A. S.; Inovenkov, I. N.; Echkina, E. Yu.; Nefedov, V. V.; Ponomarenko, L. S.; Tikhomirov, V. V.

    2017-12-01

    In this paper, using a mathematical model based on the so-called “space-dynamic” approach, we investigate the development and temporal dynamics of different urban population groups. For simplicity we consider the interaction of only two population groups inside a single urban area with axial symmetry. This problem can be described qualitatively by a system of two non-stationary nonlinear differential equations of the diffusion type with boundary conditions of the third type. The results of numerical simulations show that with a suitable choice of the diffusion coefficients and interaction functions between the population groups we can obtain different scenarios of population dynamics: from the complete displacement of one population group by another (originally more “aggressive”) to the “peaceful” situation of their co-existence.
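
    A one-dimensional explicit finite-difference sketch of two diffusing, competing groups illustrates the type of system described above. The coefficients, reaction terms and initial conditions are hypothetical stand-ins; the paper's model is axially symmetric with third-type boundary conditions.

```python
import numpy as np

# Two diffusing population groups with logistic growth and weak
# cross-competition on a 1-D domain (illustrative sketch only).
NX, STEPS = 50, 2000
dx, dt = 1.0, 0.1
D1, D2 = 1.0, 0.5          # diffusion coefficients (assumed)
a12, a21 = 0.02, 0.01      # competition strengths (assumed)

u = np.zeros(NX); u[:10] = 1.0     # group 1 starts on the left
v = np.zeros(NX); v[-10:] = 1.0    # group 2 starts on the right

for _ in range(STEPS):
    lap_u = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap_v = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2
    # logistic growth plus competition between the two groups
    u = u + dt * (D1 * lap_u + u * (1 - u) - a12 * u * v)
    v = v + dt * (D2 * lap_v + v * (1 - v) - a21 * u * v)
    # zero-flux boundaries (a simplified stand-in for third-type conditions)
    u[0], u[-1] = u[1], u[-2]
    v[0], v[-1] = v[1], v[-2]

print(f"final totals: group 1 = {u.sum():.1f}, group 2 = {v.sum():.1f}")
```

    With these weak (hypothetical) competition coefficients both groups persist, the "peaceful" co-existence scenario; raising a12 or a21 sufficiently tips the system toward displacement of one group by the other.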

  18. Incorporating individual health-protective decisions into disease transmission models: a mathematical framework.

    PubMed

    Durham, David P; Casman, Elizabeth A

    2012-03-07

    It is anticipated that the next generation of computational epidemic models will simulate both infectious disease transmission and dynamic human behaviour change. Individual agents within a simulation will not only infect one another, but will also have situational awareness and a decision algorithm that enables them to modify their behaviour. This paper develops such a model of behavioural response, presenting a mathematical interpretation of a well-known psychological model of individual decision making, the health belief model, suitable for incorporation within an agent-based disease-transmission model. We formalize the health belief model and demonstrate its application in modelling the prevalence of facemask use observed over the course of the 2003 Hong Kong SARS epidemic, a well-documented example of behaviour change in response to a disease outbreak.
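
    One hedged way to formalize the health belief model for an agent is to map its constructs (perceived susceptibility, severity, benefits, barriers and cues to action) through a logistic link to an adoption probability. The weights and the logistic form below are illustrative assumptions, not the authors' exact formalization.

```python
import math

# Sketch of a health-belief-model agent deciding whether to adopt a
# protective behaviour (e.g. wearing a facemask). Weights are hypothetical.
def adoption_probability(susceptibility, severity, benefits, barriers,
                         cue_to_action, weights=(2.0, 1.5, 1.0, 1.5, 1.0)):
    """All inputs are in [0, 1]; returns a probability of acting."""
    w_sus, w_sev, w_ben, w_bar, w_cue = weights
    utility = (w_sus * susceptibility + w_sev * severity
               + w_ben * benefits - w_bar * barriers
               + w_cue * cue_to_action - 2.0)   # -2.0: baseline reluctance
    return 1.0 / (1.0 + math.exp(-utility))

# as perceived susceptibility and cues rise during an outbreak, adoption rises
low = adoption_probability(0.1, 0.5, 0.5, 0.4, 0.2)
high = adoption_probability(0.9, 0.5, 0.5, 0.4, 0.8)
print(f"low-risk perception: {low:.2f}, high-risk perception: {high:.2f}")
```

    Embedded in an agent-based disease-transmission model, each agent would re-evaluate this probability as local prevalence (and hence perceived susceptibility) changes, reproducing behaviour dynamics such as the rise in facemask use during the 2003 SARS epidemic.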

  19. Simulation of dense amorphous polymers by generating representative atomistic models

    NASA Astrophysics Data System (ADS)

    Curcó, David; Alemán, Carlos

    2003-08-01

    A method for generating atomistic models of dense amorphous polymers is presented. The generated models can be used as starting structures for Monte Carlo and molecular dynamics simulations, but are also suitable for the direct evaluation of physical properties. The method is organized as a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, an iterative algorithm is applied to relax the nonbonding interactions. In order to check the performance of the method we examined structure-dependent properties for three polymeric systems: polyethylene (ρ=0.85 g/cm3), poly(L,D-lactic) acid (ρ=1.25 g/cm3), and polyglycolic acid (ρ=1.50 g/cm3). The method successfully generated representative packings for these dense systems using minimal computational resources.

  20. Incorporating individual health-protective decisions into disease transmission models: a mathematical framework

    PubMed Central

    Durham, David P.; Casman, Elizabeth A.

    2012-01-01

    It is anticipated that the next generation of computational epidemic models will simulate both infectious disease transmission and dynamic human behaviour change. Individual agents within a simulation will not only infect one another, but will also have situational awareness and a decision algorithm that enables them to modify their behaviour. This paper develops such a model of behavioural response, presenting a mathematical interpretation of a well-known psychological model of individual decision making, the health belief model, suitable for incorporation within an agent-based disease-transmission model. We formalize the health belief model and demonstrate its application in modelling the prevalence of facemask use observed over the course of the 2003 Hong Kong SARS epidemic, a well-documented example of behaviour change in response to a disease outbreak. PMID:21775324

  1. Dual-Source Linear Energy Prediction (LINE-P) Model in the Context of WSNs.

    PubMed

    Ahmed, Faisal; Tamberg, Gert; Le Moullec, Yannick; Annus, Paul

    2017-07-20

    Energy harvesting technologies such as miniature power solar panels and micro wind turbines are increasingly used to help power wireless sensor network nodes. However, a major drawback of energy harvesting is its varying and intermittent characteristic, which can negatively affect the quality of service. This calls for careful design and operation of the nodes, possibly by means of, e.g., dynamic duty cycling and/or dynamic frequency and voltage scaling. In this context, various energy prediction models have been proposed in the literature; however, they are typically compute-intensive or only suitable for a single type of energy source. In this paper, we propose Linear Energy Prediction "LINE-P", a lightweight, yet relatively accurate model based on approximation and sampling theory; LINE-P is suitable for dual-source energy harvesting. Simulations and comparisons against existing similar models have been conducted with low and medium resolutions (i.e., 60 and 22 min intervals/24 h) for the solar energy source (low variations) and with high resolutions (15 min intervals/24 h) for the wind energy source. The results show that the accuracy of the solar-based and wind-based predictions is up to approximately 98% and 96%, respectively, while requiring a lower complexity and memory than the other models. For the cases where LINE-P's accuracy is lower than that of other approaches, it still has the advantage of lower computing requirements, making it more suitable for embedded implementation, e.g., in wireless sensor network coordinator nodes or gateways.
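    The record above describes LINE-P as a lightweight prediction model based on approximation and sampling theory. A minimal sketch of the general idea, assuming a coarsely sampled daily harvesting profile and plain linear interpolation between samples (the actual LINE-P formulation is not reproduced here), might look like:

```python
def predict_energy(profile, t_min):
    """Linearly interpolate harvested-energy samples.

    profile: list of (minute_of_day, energy) pairs, sorted by time.
    t_min:   query time in minutes since midnight.
    Returns an interpolated energy estimate. This is an illustrative
    stand-in for a linear prediction model, not the LINE-P algorithm.
    """
    for (t0, e0), (t1, e1) in zip(profile, profile[1:]):
        if t0 <= t_min <= t1:
            frac = (t_min - t0) / (t1 - t0)
            return e0 + frac * (e1 - e0)
    return profile[-1][1]  # clamp queries outside the sampled range
```

    The appeal of such linear schemes on sensor nodes is that each prediction costs a handful of arithmetic operations and the stored profile is only a few dozen samples.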

  2. Implementation and evaluation of the Level Set method: Towards efficient and accurate simulation of wet etching for microengineering applications

    NASA Astrophysics Data System (ADS)

    Montoliu, C.; Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Colom, R. J.

    2013-10-01

    The use of atomistic methods, such as the Continuous Cellular Automaton (CCA), is currently regarded as a computationally efficient and experimentally accurate approach for the simulation of anisotropic etching of various substrates in the manufacture of Micro-electro-mechanical Systems (MEMS). However, when the features of the chemical process are modified, a time-consuming calibration process needs to be used to transform the new macroscopic etch rates into a corresponding set of atomistic rates. Furthermore, changing the substrate requires a labor-intensive effort to reclassify most atomistic neighborhoods. In this context, the Level Set (LS) method provides an alternative approach where the macroscopic forces affecting the front evolution are directly applied at the discrete level, thus avoiding the need for reclassification and/or calibration. Correspondingly, we present a fully operational Sparse Field Method (SFM) implementation of the LS approach, discussing the algorithm in detail and providing a thorough characterization of the computational cost and simulation accuracy, including a comparison to the performance of the most recent CCA model. We conclude that the SFM implementation achieves accuracy similar to that of the CCA method, with fewer fluctuations in the etch front, while requiring roughly 4 times less memory. Although SFM can be up to 2 times slower than CCA for the simulation of anisotropic etchants, it can also be up to 10 times faster than CCA for isotropic etchants. In addition, we present a parallel, GPU-based implementation (gSFM) and compare it to an optimized, multicore CPU version (cSFM), demonstrating that the SFM algorithm can be successfully parallelized and the simulation times consequently reduced while keeping the accuracy of the simulations. Although modern multicore CPUs provide an acceptable option, the massively parallel architecture of modern GPUs is more suitable, as reflected by computational times for gSFM up to 7.4 times faster than for cSFM.

  3. Three-dimensional elliptic grid generation for an F-16

    NASA Technical Reports Server (NTRS)

    Sorenson, Reese L.

    1988-01-01

    A case history depicting the effort to generate a computational grid for the simulation of transonic flow about an F-16 aircraft at realistic flight conditions is presented. The flow solver for which this grid is designed is a zonal one, using the Reynolds-averaged Navier-Stokes equations near the surface of the aircraft and the Euler equations in regions removed from the aircraft. A body-conforming global grid, suitable for the Euler equations, is first generated using 3-D Poisson equations with inhomogeneous terms modeled after the 2-D GRAPE code. Regions of the global grid are then designated for zonal refinement as appropriate to accurately model the flow physics. Grid spacing suitable for solution of the Navier-Stokes equations is generated in the refinement zones by simple subdivision of the given coarse-grid intervals. That grid generation project is described, with particular emphasis on the global coarse grid.

  4. Access Protocol For An Industrial Optical Fibre LAN

    NASA Astrophysics Data System (ADS)

    Senior, John M.; Walker, William M.; Ryley, Alan

    1987-09-01

    A structure for OSI levels 1 and 2 of a local area network suitable for use in a variety of industrial environments is reported. It is intended that the LAN will utilise optical fibre technology at the physical level and a hybrid of dynamically optimisable token-passing and CSMA/CD techniques at the data link (IEEE 802 medium access control - logical link control) level. An intelligent token-passing algorithm is employed which dynamically allocates tokens according to the known upper limits on the requirements of each device. In addition, a system of stochastic tokens is used to increase efficiency when the stochastic traffic is significant. The protocol also allows user-defined priority systems to be employed and is suitable for distributed or centralised implementation. Computer-simulated performance characteristics for the protocol using a star-ring topology are reported, demonstrating its ability to perform efficiently with the device and traffic loads anticipated within an industrial environment.

  5. In Vitro Simulation and Validation of the Circulation with Congenital Heart Defects

    PubMed Central

    Figliola, Richard S.; Giardini, Alessandro; Conover, Tim; Camp, Tiffany A.; Biglino, Giovanni; Chiulli, John; Hsia, Tain-Yen

    2010-01-01

    Despite the recent advances in computational modeling, experimental simulation of the circulation with congenital heart defects using mock flow circuits remains an important tool for device testing and for detailing the probable flow consequences resulting from surgical and interventional corrections. Validated mock circuits can be applied to qualify the results from novel computational models. New mathematical tools, coupled with advanced clinical imaging methods, allow for improved assessment of experimental circuit performance relative to human function, as well as the potential for patient-specific adaptation. In this review, we address the development of three in vitro mock circuits specific to studies of congenital heart defects. The performance of an in vitro right-heart circulation circuit through a series of verification and validation exercises is described, including correlations with animal studies and quantification of the effects of circuit inertance on test results. We present our experience in the design of mock circuits suitable for investigations of the characteristics of the Fontan circulation. We use one such mock circuit to evaluate the accuracy of Doppler predictions in the presence of aortic coarctation. PMID:21218147

  6. The application of an MPM-MFM method for simulating weapon-target interaction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, X.; Zou, Q.; Zhang, D. Z.

    2005-01-01

    During the past two decades, Los Alamos National Laboratory (LANL) has developed computational algorithms and software for analysis of multiphase flow suitable for high-speed projectile penetration of metallic and nonmetallic materials, using a material point method (MPM)-multiphase flow method (MFM). Recently, ACTA has teamed with LANL to advance a computational algorithm for simulating complex weapon-target interaction for penetrating and exploding munitions, such as tank rounds and artillery shells, as well as non-exploding kinetic energy penetrators. This paper will outline the mathematical basis for the MPM-MFM method as implemented in LANL's CartaBlanca code. CartaBlanca, written entirely in Java using object-oriented design, is used to solve complex problems involving (a) failure and penetration of solids, (b) heat transfer, (c) phase change, (d) chemical reactions, and (e) multiphase flow. We will present its application to the penetration of a steel target by a tungsten cylinder and compare results with time-resolved experimental data published by Anderson et al., Int. J. Impact Engng., Vol. 16, No. 1, pp. 1-18, 1995.

  7. A quantum–quantum Metropolis algorithm

    PubMed Central

    Yung, Man-Hong; Aspuru-Guzik, Alán

    2012-01-01

    The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584
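    The classical Metropolis method that the abstract generalizes can be shown concretely. A minimal sketch, sampling the thermal (Boltzmann) distribution of a 1-D Ising chain with the standard accept/reject rule (model choice and parameters are illustrative, not from the paper):

```python
import math
import random

def metropolis_ising(n=20, beta=0.5, steps=5000, seed=1):
    """Classical Metropolis sampling of a 1-D Ising chain (J = 1).

    Proposes single-spin flips and accepts each with probability
    min(1, exp(-beta * dE)), the standard Metropolis rule for
    sampling a thermal distribution exp(-beta * E) / Z.
    """
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        # Energy change of flipping spin i (periodic boundaries)
        dE = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins
```

    The quantum generalization in this record replaces the classical configuration updates with quantum circuits, precisely because eigenstates of a quantum Hamiltonian cannot, in general, be prepared efficiently on a classical computer.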

  8. A discrete mechanics framework for real time virtual surgical simulations with application to virtual laparoscopic nephrectomy.

    PubMed

    Zhou, Xiangmin; Zhang, Nan; Sha, Desong; Shen, Yunhe; Tamma, Kumar K; Sweet, Robert

    2009-01-01

    The inability to render realistic soft-tissue behavior in real time has remained a barrier to the face and content validity of many virtual reality surgical training systems. Biophysically based models are suitable not only for training purposes but also for patient-specific clinical applications, physiological modeling and surgical planning. Among the existing approaches to modeling soft tissue for virtual reality surgical simulation, the computer graphics-based approach lacks predictive capability; the mass-spring model (MSM) approach lacks biophysically realistic soft-tissue dynamic behavior; and finite element method (FEM) approaches fail to meet the real-time requirement. The present development stems from the first law of thermodynamics: for a space-discrete dynamic system, it directly formulates the space-discrete but time-continuous governing equation with an embedded material constitutive relation, resulting in a discrete mechanics framework that strikes a unique balance between computational effort and physically realistic soft-tissue dynamic behavior. We describe the development of the discrete mechanics framework with focused attention towards a virtual laparoscopic nephrectomy application.

  9. Shipboard communications center modernization network simulation report

    DOT National Transportation Integrated Search

    1995-08-01

    Commercially available simulation packages were investigated to determine their suitability for modeling the USCG Cutter Communications Center (CCC). The suitability of a candidate package was based upon its meeting the operational goals and hardware ...

  10. Wind Tunnel Model Design for Sonic Boom Studies of Nozzle Jet with Shock Interactions

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Denison, Marie; Sozer, Emre; Moini-Yekta, Shayan

    2016-01-01

    NASA and Industry are performing vehicle studies of configurations with low sonic boom pressure signatures. The computational analyses of modern configuration designs have matured to the point where there is confidence in the prediction of the pressure signature from the front of the vehicle, but uncertainty remains in the aft signatures, with often greater boundary layer effects and nozzle jet pressures. Wind tunnel testing at significantly lower Reynolds numbers than in flight, and without inlet and nozzle jet pressures, makes it difficult to accurately assess the computational solutions of flight vehicles. A wind tunnel test in the NASA Ames 9- by 7-Foot Supersonic Wind Tunnel from Mach 1.6 to 2.0 will be used to assess the effects of shocks from components passing through nozzle jet plumes on the sonic boom pressure signature and provide datasets for comparison with CFD codes. A large number of high-fidelity numerical simulations of wind tunnel test models with a variety of shock generators that simulate horizontal tails and aft decks have been studied to provide suitable models for sonic boom pressure measurements using a minimally intrusive pressure rail in the wind tunnel. The computational results are presented, and the evolution of candidate wind tunnel models is summarized and discussed in this paper.

  11. Fully implicit adaptive mesh refinement algorithm for reduced MHD

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Pernice, Michael; Chacon, Luis

    2006-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof of principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and to employ AMR-aware multilevel techniques, such as fast adaptive composite grid (FAC) algorithms, for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations in challenging dissipation regimes will be presented for a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).

  12. Simulation of floods caused by overloaded sewer systems: extensions of shallow-water equations

    NASA Astrophysics Data System (ADS)

    Hilden, Michael

    2005-03-01

    The outflow of water from a manhole onto a street is a typical flow problem within the simulation of floods in urban areas that are caused by overloaded sewer systems in the event of heavy rains. The reliable assessment of the flood risk for the connected houses requires accurate simulations of the water flow processes in the sewer system and in the street. The Navier-Stokes equations (NSEs) describe the free-surface flow of water accurately, but since their numerical solution requires high CPU times and much memory, their application is not practical. However, their solutions for selected flow problems are applied as reference states to assess the results of other model approaches. The classical shallow-water equations (SWEs) require only a fraction (roughly 1/100) of the NSEs' computational effort. They assume hydrostatic pressure distribution and depth-averaged horizontal velocities, and neglect vertical velocities. These shallow-water assumptions are not fulfilled for the outflow of water from a manhole onto the street. Accordingly, calculations show differences between NSE and SWE solutions. The SWEs are extended in order to assess the flood risks in urban areas reliably within applicable computational efforts. Separating vortex regions from the main flow and approximating vertical velocities to include their contributions in a pressure correction yield suitable results.
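    The classical SWEs that this record extends can be advanced numerically with very simple schemes. A minimal sketch of one Lax-Friedrichs step for the 1-D case (the scheme, crude boundary handling, and parameters are illustrative assumptions; the paper's vertical-velocity and vortex-separation extensions are not modelled):

```python
def swe_step(h, hu, dx, dt, g=9.81):
    """One Lax-Friedrichs step for the 1-D shallow-water equations.

    State per cell: water depth h and discharge hu. Boundary cells
    are simply left unchanged, a crude placeholder for real boundary
    conditions. Illustrative of the classical SWEs only.
    """
    n = len(h)
    # Physical fluxes F(U) = (hu, hu^2/h + g h^2 / 2)
    f1 = hu[:]
    f2 = [hu[i] ** 2 / h[i] + 0.5 * g * h[i] ** 2 for i in range(n)]
    new_h, new_hu = h[:], hu[:]
    for i in range(1, n - 1):
        new_h[i] = (0.5 * (h[i - 1] + h[i + 1])
                    - dt / (2 * dx) * (f1[i + 1] - f1[i - 1]))
        new_hu[i] = (0.5 * (hu[i - 1] + hu[i + 1])
                     - dt / (2 * dx) * (f2[i + 1] - f2[i - 1]))
    return new_h, new_hu
```

    The low cost of such depth-averaged updates, compared with a full 3-D Navier-Stokes solve, is exactly the factor-of-about-100 saving the abstract refers to; the extensions then correct for the violated shallow-water assumptions near the manhole.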

  13. Dynamical Approach Study of Spurious Numerics in Nonlinear Computations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data-limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in the PAR of numerical simulations have been revealed by the author and world-expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes, but is not limited to, the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high-speed turbulence, acoustics and combustion simulations will be sought. These steps are cornerstones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.

  14. Development of Comprehensive Reduced Kinetic Models for Supersonic Reacting Shear Layer Simulations

    NASA Technical Reports Server (NTRS)

    Zambon, A. C.; Chelliah, H. K.; Drummond, J. P.

    2006-01-01

    Large-scale simulations of multi-dimensional unsteady turbulent reacting flows with detailed chemistry and transport can be computationally extremely intensive even on distributed computing architectures. With the development of suitable reduced chemical kinetic models, the number of scalar variables to be integrated can be decreased, leading to a significant reduction in the computational time required for the simulation with limited loss of accuracy in the results. A general MATLAB-based automated mechanism reduction procedure is presented to reduce any complex starting mechanism (detailed or skeletal) with minimal human intervention. Based on the application of the quasi steady-state (QSS) approximation for certain chemical species and on the elimination of the fast reaction rates in the mechanism, several comprehensive reduced models, capable of handling different fuels such as C2H4, CH4 and H2, have been developed and thoroughly tested for several combustion problems (ignition, propagation and extinction) and physical conditions (reactant compositions, temperatures, and pressures). A key feature of the present reduction procedure is the explicit solution of the concentrations of the QSS species, needed for the evaluation of the elementary reaction rates. In contrast, previous approaches relied on an implicit solution due to the strong coupling between QSS species, requiring computationally expensive inner iterations. A novel algorithm, based on the definition of a QSS species coupling matrix, is presented to (i) introduce appropriate truncations to the QSS algebraic relations and (ii) identify the optimal sequence for the explicit solution of the concentration of the QSS species. With the automatic generation of the relevant source code, the resulting reduced models can be readily implemented into numerical codes.
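    The key algorithmic step described above, ordering the QSS species so their concentrations can be solved explicitly, one after another, amounts to a topological sort of the (truncated) coupling matrix. A hedged sketch of that idea (species names, the set-based coupling representation, and the alphabetical tie-break are illustrative assumptions, not the authors' implementation):

```python
def qss_solution_order(coupling):
    """Order QSS species for explicit, sequential solution.

    coupling[s] is the set of QSS species whose concentrations
    appear in the algebraic relation for species s (after any
    truncation of weak couplings). Returns an ordering in which
    each species depends only on previously solved species;
    raises if a cyclic coupling remains, signalling that further
    truncation (or inner iteration) would be needed.
    """
    order, solved = [], set()
    pending = dict(coupling)
    while pending:
        ready = [s for s, deps in pending.items() if deps <= solved]
        if not ready:
            raise ValueError("cyclic QSS coupling; truncate further")
        for s in sorted(ready):  # deterministic tie-break
            order.append(s)
            solved.add(s)
            del pending[s]
    return order
```

    Once such an ordering exists, each QSS concentration is a closed-form function of already-known quantities, which is what removes the computationally expensive inner iterations of the implicit approach.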

  15. Numerical Simulations of Hypersonic Boundary Layer Transition

    NASA Astrophysics Data System (ADS)

    Bartkowicz, Matthew David

    Numerical schemes for supersonic flows tend to use large amounts of artificial viscosity for stability, which tends to damp out the small-scale structures in the flow. Recently, some low-dissipation methods have been proposed which selectively eliminate the artificial viscosity in regions that do not require it. This work builds upon the low-dissipation method of Subbareddy and Candler, which uses the flux vector splitting method of Steger and Warming but identifies the dissipation portion in order to eliminate it. Computing accurate fluxes typically relies on large grid stencils or coupled linear systems that become computationally expensive to solve. Unstructured grids allow CFD solutions to be obtained on complex geometries; unfortunately, it then becomes difficult to create a large stencil or the coupled linear system, and accurate solutions require grids that quickly become too large to be feasible. In this thesis a method is proposed to obtain more accurate solutions using relatively local data, making it suitable for unstructured grids composed of hexahedral elements. Fluxes are reconstructed using local gradients to extend the range of data used. The method is then validated on several test problems. Simulations of boundary layer transition are then performed. An elliptic cone at Mach 8 is simulated based on an experiment at the Princeton Gasdynamics Laboratory; a simulated acoustic-noise boundary condition is imposed to model the noisy conditions of the wind tunnel, and the transitioning boundary layer is observed. A computation of an isolated roughness element is performed based on an experiment in Purdue's Mach 6 quiet wind tunnel. The mechanism for transition is identified as an instability in the upstream separation region, and a comparison is made to experimental data. In the CFD a fully turbulent boundary layer is observed downstream.

  16. Linearly scaling and almost Hamiltonian dielectric continuum molecular dynamics simulations through fast multipole expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul, E-mail: tavan@physik.uni-muenchen.de

    2015-11-14

    Hamiltonian Dielectric Solvent (HADES) is a recent method [S. Bauer et al., J. Chem. Phys. 140, 104103 (2014)] which enables atomistic Hamiltonian molecular dynamics (MD) simulations of peptides and proteins in dielectric solvent continua. Such simulations become rapidly impractical for large proteins, because the computational effort of HADES scales quadratically with the number N of atoms. If one tries to achieve linear scaling by applying a fast multipole method (FMM) to the computation of the HADES electrostatics, the Hamiltonian character (conservation of total energy, linear, and angular momenta) may get lost. Here, we show that the Hamiltonian character of HADES can be almost completely preserved if the structure-adapted fast multipole method (SAMM), as recently redesigned by Lorenzen et al. [J. Chem. Theory Comput. 10, 3244-3259 (2014)], is suitably extended and is chosen as the FMM module. By this extension, the HADES/SAMM forces become exact gradients of the HADES/SAMM energy. Their translational and rotational invariance then guarantees (within the limits of numerical accuracy) the exact conservation of the linear and angular momenta. Also, the total energy is essentially conserved, up to residual algorithmic noise caused by the periodically repeated SAMM interaction-list updates. These updates entail very small temporal discontinuities of the force description, because the employed SAMM approximations represent deliberately balanced compromises between accuracy and efficiency. The energy-gradient-corrected version of SAMM can also be applied, of course, to MD simulations of all-atom solvent-solute systems enclosed by periodic boundary conditions. However, as we demonstrate in passing, this choice does not offer any serious advantages.

  17. Multi-Physics Computational Grains (MPCGs): Newly-Developed Accurate and Efficient Numerical Methods for Micromechanical Modeling of Multifunctional Materials and Composites

    NASA Astrophysics Data System (ADS)

    Bishay, Peter L.

    This study presents a new family of highly accurate and efficient computational methods for modeling the multi-physics of multifunctional materials and composites in the micro-scale, named "Multi-Physics Computational Grains" (MPCGs). Each "mathematical grain" has a random polygonal/polyhedral geometrical shape that resembles the natural shapes of the material grains in the micro-scale, where each grain is surrounded by an arbitrary number of neighboring grains. The physics incorporated in this study include: Linear Elasticity, Electrostatics, Magnetostatics, Piezoelectricity, Piezomagnetism and Ferroelectricity. However, the methods proposed here can be extended to include more physics (thermo-elasticity, pyroelectricity, electric conduction, heat conduction, etc.) in their formulation, different analysis types (dynamics, fracture, fatigue, etc.), nonlinearities, and different defect shapes, and some of the 2D methods can also be extended to a 3D formulation. We present "Multi-Region Trefftz Collocation Grains" (MTCGs) as a simple and efficient method for direct and inverse problems, "Trefftz-Lekhnitskii Computational Grains" (TLCGs) for modeling porous and composite smart materials, "Hybrid Displacement Computational Grains" (HDCGs) as a general method for modeling multifunctional materials and composites, and finally "Radial-Basis-Functions Computational Grains" (RBFCGs) for modeling functionally-graded materials, magneto-electro-elastic (MEE) materials and the switching phenomena in ferroelectric materials. The first three proposed methods are suitable for direct numerical simulation (DNS) of the micromechanics of smart composite/porous materials with non-symmetrical arrangements of voids/inclusions, and require minimal effort in meshing and minimal computation time, since each grain can represent the matrix of a composite and can include a pore or an inclusion. The last three methods provide a stiffness matrix in their formulation and hence can be readily implemented in a finite element routine. Several numerical examples are provided to show the ability and accuracy of the proposed methods to determine the effective material properties of different types of piezo-composites, and to detect the damage-prone sites in a microstructure under certain loading types. The last method (RBFCGs) is also suitable for modeling the switching phenomena in ferro-materials (ferroelectric, ferromagnetic, etc.) after incorporating a certain nonlinear constitutive model and a switching criterion. Since the interaction between grains during loading cycles has a profound influence on the switching phenomena, it is important to simulate the grains with geometrical shapes that are similar to the real shapes of grains as seen in lab experiments. Hence the use of the 3D RBFCGs, which allow for the presence of all six variants of the constitutive relations, together with the randomly generated crystallographic axes in each grain, as done in the present study, is considered to be the most realistic model that can be used for the direct mesoscale numerical simulation (DMNS) of polycrystalline ferro-materials.

  18. WRF added value to capture the spatio-temporal drought variability

    NASA Astrophysics Data System (ADS)

    García-Valdecasas Ojeda, Matilde; Quishpe-Vásquez, César; Raquel Gámiz-Fortis, Sonia; Castro-Díez, Yolanda; Jesús Esteban-Parra, María

    2017-04-01

    Regional Climate Models (RCMs) have been widely used as a tool to produce high-resolution climate fields in areas with high climate variability such as Spain. However, the outputs provided by downscaling techniques carry many sources of uncertainty associated with different aspects of the modelling. In this study, the ability of the Weather Research and Forecasting (WRF) model to capture drought conditions has been analyzed. The WRF simulation was carried out for a period spanning 1980 to 2010 over a domain centered on the Iberian Peninsula with a spatial resolution of 0.088°, nested in the coarser EURO-CORDEX domain (0.44° spatial resolution). To investigate the spatiotemporal drought variability, the Standardized Precipitation Index (SPI) and the Standardized Precipitation Evapotranspiration Index (SPEI) have been computed at two timescales, 3 and 12 months, due to their suitability for studying agricultural and hydrological droughts. The drought indices computed from WRF outputs were compared with those obtained from the observational (MOTEDAS and MOPREDAS) datasets. In order to assess the added value provided by the downscaled fields, these indices were also computed from the ERA-Interim Re-Analysis database, which provides the lateral and boundary conditions of the WRF simulations. Results from this study indicate that WRF provides a noticeable benefit with respect to ERA-Interim for many regions in Spain in terms of drought indices, greater for SPI than for SPEI. The improvement offered by WRF depends on the region, index and timescale analyzed, being greater at longer timescales. These findings demonstrate the reliability of the downscaled fields for detecting drought events and make them a valuable source of knowledge for decision-making related to water-resource management. Keywords: Drought, added value, Regional Climate Models, WRF, SPEI, SPI.
Acknowledgements: This work has been financed by the projects P11-RNM-7941 (Junta de Andalucía-Spain) and CGL2013-48539-R (MINECO-Spain, FEDER).
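    The standardisation step behind the SPI can be sketched in a few lines. The sketch below accumulates monthly precipitation over the chosen timescale and maps each sum to a standard-normal quantile via an empirical (Gringorten) plotting position; this is a simplified, nonparametric stand-in for the gamma-distribution fit normally used for the SPI, and the input values are invented for illustration.

    ```python
    import statistics

    def spi(precip, scale=3):
        """Empirical Standardised Precipitation Index: accumulate precipitation
        over `scale` months, assign each sum a Gringorten plotting-position
        probability, and map it to a standard-normal quantile. A nonparametric
        sketch of the index idea; the usual SPI instead fits a gamma
        distribution to the accumulations."""
        acc = [sum(precip[i:i + scale]) for i in range(len(precip) - scale + 1)]
        n = len(acc)
        order = sorted(range(n), key=lambda i: acc[i])
        nd = statistics.NormalDist()
        z = [0.0] * n
        for rank, i in enumerate(order, start=1):
            p = (rank - 0.44) / (n + 0.12)   # Gringorten plotting position
            z[i] = nd.inv_cdf(p)             # standard-normal quantile
        return z

    # wetter-than-usual accumulation periods get positive index values
    index = spi([10, 20, 5, 40, 80, 15, 30, 25, 60, 10], scale=3)
    ```

    By construction the index values are symmetric around zero and ordered like the accumulations, which is the property the drought classification relies on.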

  19. Multi-material 3D Models for Temporal Bone Surgical Simulation.

    PubMed

    Rose, Austin S; Kimbell, Julia S; Webster, Caroline E; Harrysson, Ola L A; Formeister, Eric J; Buchman, Craig A

    2015-07-01

    A simulated, multicolor, multi-material temporal bone model can be created using 3-dimensional (3D) printing that will prove both safe and beneficial in training for actual temporal bone surgical cases. As the process of additive manufacturing, or 3D printing, has become more practical and affordable, a number of applications for the technology in the field of Otolaryngology-Head and Neck Surgery have been considered. One area of promise is temporal bone surgical simulation. Three-dimensional representations of human temporal bones were created from temporal bone computed tomography (CT) scans using biomedical image processing software. Multi-material models were then printed and dissected in a temporal bone laboratory by attending and resident otolaryngologists. A 5-point Likert scale was used to grade the models for their anatomical accuracy and suitability as a simulation of cadaveric and operative temporal bone drilling. The models produced for this study demonstrate significant anatomic detail and a likeness to human cadaver specimens for drilling and dissection. Simulated temporal bones created by this process have potential benefit in surgical training, preoperative simulation for challenging otologic cases, and the standardized testing of temporal bone surgical skills. © The Author(s) 2015.

  20. MITHRA 1.0: A full-wave simulation tool for free electron lasers

    NASA Astrophysics Data System (ADS)

    Fallahi, Arya; Yahaghi, Alireza; Kärtner, Franz X.

    2018-07-01

    Free Electron Lasers (FELs) are a solution for providing intense, coherent, and bright radiation in the hard X-ray regime. Owing to the low wall-plug efficiency of FEL facilities, it is crucial, and additionally very useful, to develop complete and accurate simulation tools for better optimization of the FEL interaction. The highly sophisticated dynamics involved in the FEL process has been the main obstacle hindering the development of general simulation tools for this problem. We present a numerical algorithm based on finite-difference time-domain/particle-in-cell (FDTD/PIC) methods in a Lorentz-boosted coordinate system, which is able to carry out a full-wave simulation of the FEL process. The developed software offers a suitable tool for the analysis of FEL interactions without invoking any of the usual approximations. A coordinate transformation to the bunch rest frame brings the very different length scales of the bunch size, the optical wavelength, and the undulator period to values of the same order. Consequently, FDTD/PIC simulations in conjunction with efficient parallelization techniques make full-wave simulation feasible with the available computational resources. Several free electron laser examples are analyzed using the developed software; the results are benchmarked against standard FEL codes and discussed in detail.
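    The benefit of the Lorentz-boosted frame can be seen from the resonance condition alone: in the laboratory the undulator period and the radiated wavelength differ by a factor of order 2γ², while in the bunch rest frame the contracted undulator period and the Doppler-expanded wavelength nearly coincide. A small sketch with hypothetical parameters (not taken from a MITHRA example):

    ```python
    import math

    def fel_scales(lambda_u, gamma, K=0.0):
        """Length scales of an FEL in the bunch rest frame, from the standard
        on-axis resonance condition. Illustrative only; the numbers below are
        hypothetical, not parameters of any benchmark in the paper."""
        beta = math.sqrt(1.0 - 1.0 / gamma**2)
        lambda_r = lambda_u / (2.0 * gamma**2) * (1.0 + K**2 / 2.0)  # lab radiation wavelength
        lambda_u_rest = lambda_u / gamma                  # undulator period, Lorentz-contracted
        lambda_r_rest = gamma * (1.0 + beta) * lambda_r   # radiation wavelength, Doppler-expanded
        return lambda_u_rest, lambda_r_rest

    # 3 cm undulator period, gamma = 1000: the two scales differ by ~2e6 in the
    # lab frame but are nearly equal (~30 micrometres) in the bunch rest frame
    u_rest, r_rest = fel_scales(0.03, 1000.0)
    ```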

  1. Optimization of buffer injection for the effective bioremediation of chlorinated solvents in aquifers

    NASA Astrophysics Data System (ADS)

    Brovelli, A.; Robinson, C.; Barry, A.; Kouznetsova, I.; Gerhard, J.

    2008-12-01

    Various techniques have been proposed to enhance biologically-mediated reductive dechlorination of chlorinated solvents in the subsurface, including the addition of fermentable organic substrate for the generation of H2 as an electron donor. One rate-limiting factor for enhanced dechlorination is the pore fluid pH. Organic acids and H+ ions accumulate in dechlorination zones, generating unfavorable conditions for microbial activity (pH < 6.5). The pH variation is a nonlinear function of the amount of reduced chlorinated solvents, and is affected by the organic material fermented, the chemical composition of the pore fluid and the soil's buffering capacity. Consequently, in some cases enhanced remediation schemes rely on buffer injection (e.g., bicarbonate) to alleviate this problem, particularly in the presence of solvent nonaqueous phase liquid (NAPL) source zones. However, the amount of buffer required - particularly in complex, evolving biogeochemical environments - is not well understood. To investigate this question, this work builds upon a geochemical numerical model (Robinson et al., Science of the Total Environment, submitted), which computes the amount of additional buffer required to maintain the pH at a level suitable for bacterial activity for batch systems. The batch model was coupled to a groundwater flow/solute transport/chemical reaction simulator to permit buffer optimization computations within the context of flowing systems exhibiting heterogeneous hydraulic, physical and chemical properties. A suite of simulations was conducted in which buffer optimization was examined within the bounds of the minimum concentration necessary to sustain a pH favorable to microbial activity and the maximum concentration to avoid excessively high pH values (also not suitable to bacterial activity) and mineral precipitation (e.g., calcite, which may lead to pore-clogging). 
These simulations include an examination of the sensitivity of this buffer concentration range to aquifer heterogeneity and groundwater velocity. This work is part of SABRE (Source Area BioREmediation), a collaborative international research project that aims to evaluate and improve enhanced bioremediation of chlorinated solvent source zones. In this context, numerical simulations are supporting the upscaling of the technique, including identifying the most appropriate buffer injection strategies for field applications.
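    The batch-scale buffering argument can be illustrated with a closed-system Henderson-Hasselbalch estimate: each mole of acid converts one mole of bicarbonate to carbonic acid, so keeping the pH above a floor fixes a minimum buffer dose. This is a deliberately crude sketch, far simpler than the geochemical model the study couples to the transport simulator, and the acid load below is a made-up number:

    ```python
    PKA1 = 6.35   # carbonic acid / bicarbonate pKa at 25 degC

    def min_bicarbonate(acid_load, ph_floor=6.5):
        """Minimum bicarbonate dose (mol/L) such that, after neutralising a
        given acid load (mol/L of H+), the remaining HCO3-/H2CO3* ratio still
        gives pH >= ph_floor. Closed system, activities taken equal to
        concentrations: an illustrative sketch only."""
        ratio = 10.0 ** (ph_floor - PKA1)    # required [HCO3-]/[H2CO3*]
        # each mole of H+ converts one mole of HCO3- into H2CO3*:
        # (B - A) / A >= ratio  =>  B >= A * (1 + ratio)
        return acid_load * (1.0 + ratio)

    dose = min_bicarbonate(0.001)   # hypothetical 1 mmol/L acid production
    ```

    The real computation must also track mineral equilibria (e.g., calcite) to respect the upper pH bound discussed above, which this sketch ignores.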

  2. A class of all digital phase locked loops - Modeling and analysis

    NASA Technical Reports Server (NTRS)

    Reddy, C. P.; Gupta, S. C.

    1973-01-01

    An all-digital phase locked loop which tracks the phase of the incoming signal once per carrier cycle is proposed. The different elements and their functions, and the phase-lock operation, are explained in detail. The general digital loop operation is governed by a nonlinear difference equation, from which a suitable model is developed. The lock range for the general model is derived. The performance of the digital loop for phase-step and frequency-step inputs at different levels of quantization, without a loop filter, is studied. The analytical results are checked by simulating the actual system on a digital computer.
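    A first-order loop of this kind is easy to simulate: a phase detector measures the error once per carrier cycle and the estimate is nudged by a fixed gain. The sketch below is a generic first-order DPLL without a loop filter or quantization, not the specific loop proposed in the paper:

    ```python
    def simulate_dpll(phase_step, cycles=200, gain=0.1):
        """Generic first-order all-digital PLL responding to a phase step.
        One correction per carrier cycle; no loop filter, no quantizer.
        Returns the phase-error sequence, which decays as (1 - gain)^n."""
        estimate = 0.0
        history = []
        for _ in range(cycles):
            error = phase_step - estimate   # phase detector output
            estimate += gain * error        # loop update once per cycle
            history.append(error)
        return history

    errors = simulate_dpll(phase_step=0.5)
    ```

    Adding a quantizer to the phase detector (as studied in the paper) turns the smooth geometric decay into a staircase response with a residual bounded by the quantization step.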

  3. Modeling the Effects of Lipid Composition on Stratum Corneum Bilayers Using Molecular Dynamics Simulations

    NASA Astrophysics Data System (ADS)

    Huzil, J. Torin; Sivaloganathan, Siv; Kohandel, Mohammad; Foldvari, Marianna

    2011-11-01

    The advancement of dermal and transdermal drug delivery requires the development of delivery systems that are suitable for large protein and nucleic acid-based therapeutic agents. However, a complete mechanistic understanding of the physical barrier properties associated with the epidermis, specifically the membrane structures within the stratum corneum, has yet to be developed. Here, we describe the assembly and computational modeling of stratum corneum lipid bilayers constructed from varying ratios of their constituent lipids (ceramide, free fatty acids and cholesterol) to determine if there is a difference in the physical properties of stratum corneum compositions.

  4. Neuron array with plastic synapses and programmable dendrites.

    PubMed

    Ramakrishnan, Shubha; Wunderlich, Richard; Hasler, Jennifer; George, Suma

    2013-10-01

    We describe a novel neuromorphic chip architecture that models neurons for efficient computation. Traditional architectures of neuron array chips consist of large-scale systems that are interfaced with AER (address-event representation) for implementing intra- or inter-chip connectivity. We present a chip that uses AER for inter-chip communication but uses fast, reconfigurable FPGA-style routing with local memory for intra-chip connectivity. We model neurons with biologically realistic channel models, synapses and dendrites. This chip is suitable for small-scale network simulations and can also be used for sequence detection, utilizing the directional selectivity properties of dendrites, ultimately for use in word recognition.

  5. Fuzzy Logic Based Controller for a Grid-Connected Solid Oxide Fuel Cell Power Plant.

    PubMed

    Chatterjee, Kalyan; Shankar, Ravi; Kumar, Amit

    2014-10-01

    This paper describes a mathematical model of a solid oxide fuel cell (SOFC) power plant integrated into a multimachine power system. The utilization factor of the fuel stack is held at steady state by tuning the fuel valve in the fuel processor at a rate proportional to the current drawn from the fuel stack. A suitable fuzzy logic controller is used for the overall system, its objective being to control the current drawn by the power conditioning unit and to meet a desired output power demand. The proposed control scheme is verified through computer simulations.
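    The control idea can be sketched as a minimal Mamdani-style fuzzy map from a normalized power error to a fuel-valve adjustment, using triangular membership functions and weighted-average defuzzification. The rule base and ranges below are invented for illustration and are not the controller of the paper:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function with feet at a, c and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_valve_rate(power_error):
        """Map a normalised power error in [-1, 1] to a valve adjustment in
        [-1, 1]. Three hypothetical rules: error negative -> close valve,
        near zero -> hold, positive -> open."""
        mu_neg = tri(power_error, -2.0, -1.0, 0.0)
        mu_zero = tri(power_error, -1.0, 0.0, 1.0)
        mu_pos = tri(power_error, 0.0, 1.0, 2.0)
        # singleton rule outputs: close (-1), hold (0), open (+1)
        outs = {-1.0: mu_neg, 0.0: mu_zero, 1.0: mu_pos}
        num = sum(v * mu for v, mu in outs.items())
        den = sum(outs.values())
        return num / den if den else 0.0
    ```

    The weighted-average defuzzifier interpolates smoothly between the rules, e.g. a half-scale error yields a half-scale valve correction.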

  6. A platform for evolving intelligently interactive adversaries.

    PubMed

    Fogel, David B; Hays, Timothy J; Johnson, Douglas R

    2006-07-01

    Entertainment software developers face significant challenges in designing games with broad appeal. One of the challenges concerns creating nonplayer (computer-controlled) characters that can adapt their behavior in light of the current and prospective situation, possibly emulating human behaviors. This adaptation should be inherently novel, unrepeatable, yet within the bounds of realism. Evolutionary algorithms provide a suitable method for generating such behaviors. This paper provides background on the entertainment software industry, and details a prior and current effort to create a platform for evolving nonplayer characters with genetic and behavioral traits within a World War I combat flight simulator.
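    The underlying mechanism, evolving a parameter vector that encodes character behavior against a fitness function, can be sketched in a few lines. The loop below is a generic elitist evolutionary algorithm applied to a toy fitness, not the simulator's actual genetic and behavioral representation:

    ```python
    import random

    def evolve(fitness, dim=4, pop_size=20, generations=50, sigma=0.1, seed=1):
        """Minimal elitist evolutionary loop for tuning a behaviour-parameter
        vector: keep the better half, refill with Gaussian-mutated copies.
        A generic sketch; the paper's platform evolves richer structures."""
        rng = random.Random(seed)
        pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=fitness, reverse=True)
            parents = scored[: pop_size // 2]          # elitist selection
            children = [[g + rng.gauss(0, sigma) for g in rng.choice(parents)]
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return max(pop, key=fitness)

    # toy task: maximise -(sum of squares), optimum at the origin
    best = evolve(lambda v: -sum(g * g for g in v))
    ```

    Because the parents are carried over unmutated, the best fitness is monotonically non-decreasing across generations, one simple way to keep adapted behaviors "within the bounds of realism" established by earlier generations.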

  7. Hormone purification by isoelectric focusing in space

    NASA Technical Reports Server (NTRS)

    Bier, M.

    1982-01-01

    The performance of a ground prototype of an apparatus for recycling isoelectric focusing was evaluated in an effort to provide technology for large-scale purification of peptide hormones, proteins, and other biologicals. Special emphasis was given to the effects of gravity on the function of the apparatus and to the determination of potential advantages derivable from its use in a microgravity environment. A theoretical model of isoelectric focusing using chemically defined buffer systems for the establishment of the pH gradients was developed. The model was transformed to a form suitable for computer simulations and was used extensively for the design of experimental buffers.

  8. PHAST--a program for simulating ground-water flow, solute transport, and multicomponent geochemical reactions

    USGS Publications Warehouse

    Parkhurst, David L.; Kipp, Kenneth L.; Engesgaard, Peter; Charlton, Scott R.

    2004-01-01

    The computer program PHAST simulates multi-component, reactive solute transport in three-dimensional saturated ground-water flow systems. PHAST is a versatile ground-water flow and solute-transport simulator with capabilities to model a wide range of equilibrium and kinetic geochemical reactions. The flow and transport calculations are based on a modified version of HST3D that is restricted to constant fluid density and constant temperature. The geochemical reactions are simulated with the geochemical model PHREEQC, which is embedded in PHAST. PHAST is applicable to the study of natural and contaminated ground-water systems at a variety of scales ranging from laboratory experiments to local and regional field scales. PHAST can be used in studies of migration of nutrients, inorganic and organic contaminants, and radionuclides; in projects such as aquifer storage and recovery or engineered remediation; and in investigations of the natural rock-water interactions in aquifers. PHAST is not appropriate for unsaturated-zone flow, multiphase flow, density-dependent flow, or waters with high ionic strengths. A variety of boundary conditions are available in PHAST to simulate flow and transport, including specified-head, flux, and leaky conditions, as well as the special cases of rivers and wells. Chemical reactions in PHAST include (1) homogeneous equilibria using an ion-association thermodynamic model; (2) heterogeneous equilibria between the aqueous solution and minerals, gases, surface complexation sites, ion exchange sites, and solid solutions; and (3) kinetic reactions with rates that are a function of solution composition. The aqueous model (elements, chemical reactions, and equilibrium constants), minerals, gases, exchangers, surfaces, and rate expressions may be defined or modified by the user. A number of options are available to save results of simulations to output files. 
The data may be saved in three formats: a format suitable for viewing with a text editor; a format suitable for exporting to spreadsheets and post-processing programs; or in Hierarchical Data Format (HDF), which is a compressed binary format. Data in the HDF file can be visualized on Windows computers with the program Model Viewer and extracted with the utility program PHASTHDF; both programs are distributed with PHAST. Operator splitting of the flow, transport, and geochemical equations is used to separate the three processes into three sequential calculations. No iterations between transport and reaction calculations are implemented. A three-dimensional Cartesian coordinate system and finite-difference techniques are used for the spatial and temporal discretization of the flow and transport equations. The non-linear chemical equilibrium equations are solved by a Newton-Raphson method, and the kinetic reaction equations are solved by a Runge-Kutta or an implicit method for integrating ordinary differential equations. The PHAST simulator may require large amounts of memory and long Central Processing Unit (CPU) times. To reduce the long CPU times, a parallel version of PHAST has been developed that runs on a multiprocessor computer or on a collection of computers that are networked. The parallel version requires Message Passing Interface, which is currently (2004) freely available. The parallel version is effective in reducing simulation times. This report documents the use of the PHAST simulator, including running the simulator, preparing the input files, selecting the output files, and visualizing the results. It also presents four examples that verify the numerical method and demonstrate the capabilities of the simulator. PHAST requires three input files. Only the flow and transport file is described in detail in this report. 
The other two files, the chemistry data file and the database file, are identical to PHREEQC files and the detailed description of these files is found in the PHREEQC documentation.
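    The operator-splitting scheme described above, a transport sweep followed by an independent reaction step within each time step, can be illustrated on a one-dimensional problem with first-order decay standing in for the geochemistry. This is a schematic of the splitting idea only, not PHAST's actual discretization:

    ```python
    import math

    def advect_react(c, velocity, dx, dt, k, steps):
        """Operator splitting in the spirit of PHAST: each time step performs a
        transport sweep (explicit upwind advection) and then a separate
        reaction step (first-order decay, integrated exactly), with no
        iteration between the two. Illustrative parameters only."""
        cr = velocity * dt / dx           # Courant number; keep <= 1 for stability
        decay = math.exp(-k * dt)
        c = list(c)
        for _ in range(steps):
            new = c[:]
            for i in range(1, len(c)):    # transport: upwind differencing
                new[i] = c[i] - cr * (c[i] - c[i - 1])
            c = [x * decay for x in new]  # reaction: applied after transport
        return c

    # a unit pulse advected two cells (Courant number 1) while decaying
    out = advect_react([0.0, 1.0, 0.0, 0.0], 1.0, 1.0, 1.0, 0.1, 2)
    ```

    With the Courant number equal to one the pulse translates exactly one cell per step, so the splitting error vanishes and the result is the analytic decay factor, a convenient sanity check for this kind of sequential scheme.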

  9. Simulation of external contamination into water distribution systems through defects in pipes

    NASA Astrophysics Data System (ADS)

    López, P. A.; Mora, J. J.; García, F. J.; López, G.

    2009-04-01

    Water quality can be defined as a set of properties (physical, biological and chemical) that determine its suitability for human use or for its role in the biosphere. In this contribution we focus on the possible impact on water quality in distribution systems of external contaminant fluids entering through defects in pipes. The physical integrity of the distribution system is a primary barrier against the entry of external contaminants and the loss in quality of the treated drinking water, but this integrity can be broken. Deficiencies in physical and hydraulic integrity can lead not only to water losses, but also to the influx of contaminants through pipe walls, either through breaks, coming from external subsoil waters, or via cross connections with sewerage or other facilities. These external contamination events (the so-called pathogen intrusion phenomenon) can introduce nutrients and sediments and decrease disinfectant concentrations within the distribution system, thus degrading the quality of the distributed water. The objective of this contribution is to represent this pathogen intrusion phenomenon. The combination of defects in the infrastructure (equipment failure), depression and back-siphonage, and lack of disinfection causes the propagation of contamination into the clean water stream. Intrusion of pathogenic microorganisms has been studied and recorded even in well-maintained systems. This situation can therefore occur when negative pressure conditions arise in the system combined with the presence of defects in pipes near the depression. A simulation of the process by which external fluids can enter pipes through their defects under steady-state conditions is considered, combining numerical and experimental techniques to achieve a successful model. 
The proposed modeling process is based on experimental and computational simulations. An analysis of the intrusion behavior considering hydrodynamics and pollutant transport phenomena has been developed, comparing the influence of the turbulence treatment and the agreement between computational and experimental results. This paper focuses on the analysis of the external intrusion phenomenon: the relationship between the intrusion flow and the pressure inside the pipe, depending on the characteristics of the defect and the pressure level, as well as the effect on water quality of the dispersion of the entering substances. Two different experiments have been developed. In order to represent the intrusion phenomenon in steady state, two suitable assemblies have been implemented in the laboratory. At lower pressures, a Venturi tube has been used to generate the depression; at higher pressures, a pumping system has been used. The defect in the pipe has been represented by a circular hole, and the dispersion of the pollutant has been modeled by means of salinity as a conservative contaminant. The simulated depressions range from 0.001 to 0.7 bar. The prototypes are also simulated by numerical modeling in two and three dimensions using Computational Fluid Dynamics techniques. For this purpose Fluent 6.3™ has been used, which provides the fields of the hydrodynamic variables and of salinity. After a proper calibration process, the comparison between models will allow us to establish the foundation for further pathogen intrusion simulations in the distribution system. Different turbulence models based on turbulent viscosity and different boundary conditions will also be considered. The agreement between experimental and computational models will be analyzed, and the differences between series of results will be compared, thus validating the use of computational models for representing the pathogen intrusion problem. 
Through both mathematical and physical models, the aim is to gain better knowledge of quantities that cannot be measured directly, such as velocity fields, aspects of turbulence, pressure fields, and concentrations, in the mixing processes related to external intrusion.
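    The steady-state relationship between intrusion flow, defect size and pressure difference can be sketched with the standard orifice equation Q = Cd·A·√(2Δp/ρ). The discharge coefficient and defect diameter below are illustrative values, not calibrated results from the experiments described above:

    ```python
    import math

    def intrusion_flow(delta_p_bar, hole_diameter_m, cd=0.62, rho=1000.0):
        """Steady intrusion flow rate (m^3/s) through a circular pipe defect
        under a pressure deficit, via the standard sharp-edged orifice
        equation. cd = 0.62 is a typical textbook value, not a fitted one."""
        area = math.pi * (hole_diameter_m / 2.0) ** 2
        delta_p = delta_p_bar * 1e5        # bar -> Pa
        return cd * area * math.sqrt(2.0 * delta_p / rho)

    # flow through a 2 mm hole at the largest simulated depression (0.7 bar)
    q = intrusion_flow(0.7, 0.002)
    ```

    The square-root dependence on Δp is what makes even the small depressions in the 0.001-0.7 bar range above hydraulically significant.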

  10. Efficient Pricing Technique for Resource Allocation Problem in Downlink OFDM Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    Abdulghafoor, O. B.; Shaat, M. M. R.; Ismail, M.; Nordin, R.; Yuwono, T.; Alwahedy, O. N. A.

    2017-05-01

    In this paper, the problem of resource allocation in OFDM-based downlink cognitive radio (CR) networks is addressed. The purpose of this research is to decrease the computational complexity of the resource allocation algorithm for the downlink CR network while respecting the interference constraint of the primary network. This objective is achieved by adopting a pricing scheme to develop a power allocation algorithm with the following concerns: (i) reducing the complexity of the proposed algorithm and (ii) providing firm control of the interference introduced to primary users (PUs). The performance of the proposed algorithm is tested for OFDM-based CR networks. The simulation results show that the performance of the proposed algorithm approaches that of the optimal algorithm at a lower computational complexity, i.e., O(NlogN), which makes the proposed algorithm suitable for more practical applications.
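    For context, the classic water-filling allocation already achieves O(N log N) through a single sort over the subchannels, which is the complexity class the proposed pricing algorithm targets. The sketch below is that standard baseline, not the paper's pricing-based algorithm:

    ```python
    def waterfill(gains, total_power):
        """Water-filling power allocation over N subchannels in O(N log N):
        sort the inverse gains once, then find the largest set of channels
        that all receive positive power at a common water level. A textbook
        baseline, without the interference constraints handled in the paper."""
        inv = sorted(1.0 / g for g in gains)      # noise-to-gain levels, ascending
        n = len(inv)
        for k in range(n, 0, -1):                 # try filling the k best channels
            level = (total_power + sum(inv[:k])) / k
            if level > inv[k - 1]:                # all k channels get positive power
                break
        return [max(0.0, level - v) for v in (1.0 / g for g in gains)]

    # strong channels get more power; the weakest channel may get none
    alloc = waterfill([1.0, 2.0, 4.0], total_power=1.0)
    ```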

  11. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sherlock, M.; Brodrick, J. P.; Ridgers, C. P.

    Here, we compare a reduced non-local electron transport model to Vlasov-Fokker-Planck simulations. Two new test cases are considered: the propagation of a heat wave through a high-density region into a lower-density gas, and a one-dimensional hohlraum ablation problem. We find that the reduced model reproduces the peak heat flux well in the ablation region but significantly over-predicts the coronal preheat. The suitability of the reduced model for computing non-local transport effects other than thermal conductivity is considered by comparing the computed distribution function to the Vlasov-Fokker-Planck distribution function. It is shown that even when the reduced model reproduces the correct heat flux, the distribution function is significantly different from the Vlasov-Fokker-Planck prediction. Two simple modifications are considered which improve agreement between the models in the coronal region.

  13. Latency Hiding in Dynamic Partitioning and Load Balancing of Grid Computing Applications

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak

    2001-01-01

    The Information Power Grid (IPG) concept developed by NASA is aimed at providing a metacomputing platform for large-scale distributed computations, hiding the intricacies of a highly heterogeneous environment while maintaining adequate security. In this paper, we propose a latency-tolerant partitioning scheme that dynamically balances processor workloads on the IPG and minimizes data movement and runtime communication. By simulating an unsteady adaptive mesh application on a wide area network, we study the performance of our load balancer under the Globus environment. The number of IPG nodes, the number of processors per node, and the interconnect speeds are parameterized to derive conditions under which the IPG would be suitable for parallel distributed processing of such applications. Experimental results demonstrate that effective solutions are achieved when the IPG nodes are connected by a high-speed asynchronous interconnection network.

  14. Improvements in the Scalability of the NASA Goddard Multiscale Modeling Framework for Hurricane Climate Studies

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Chern, Jiun-Dar

    2007-01-01

    Improving our understanding of hurricane inter-annual variability and the impact of climate change (e.g., doubling CO2 and/or global warming) on hurricanes brings both scientific and computational challenges to researchers. As hurricane dynamics involves multiscale interactions among synoptic-scale flows, mesoscale vortices, and small-scale cloud motions, an ideal numerical model suitable for hurricane studies should demonstrate its capabilities in simulating these interactions. The newly-developed multiscale modeling framework (MMF, Tao et al., 2007) and the substantial computing power of the NASA Columbia supercomputer show promise for pursuing the related studies, as the MMF inherits the advantages of two NASA state-of-the-art modeling components: the GEOS4/fvGCM and 2D GCEs. This article focuses on the computational issues and proposes a revised methodology to improve the MMF's performance and scalability. It is shown that this prototype implementation enables 12-fold performance improvements with 364 CPUs, thereby making it more feasible to study hurricane climate.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Oishik, E-mail: oishik-sen@uiowa.edu; Gaul, Nicholas J., E-mail: nicholas-gaul@ramdosolutions.com; Choi, K.K., E-mail: kyung-choi@uiowa.edu

    Macro-scale computations of shocked particulate flows require closure laws that model the exchange of momentum/energy between the fluid and particle phases. Closure laws are constructed in this work in the form of surrogate models derived from highly resolved mesoscale computations of shock-particle interactions. The mesoscale computations are performed to calculate the drag force on a cluster of particles for different values of Mach number and particle volume fraction. Two Kriging-based methods, viz. the Dynamic Kriging Method (DKG) and the Modified Bayesian Kriging Method (MBKG), are evaluated for their ability to construct surrogate models with sparse data, i.e., using the least number of mesoscale simulations. It is shown that if the input data is noise-free, the DKG method converges monotonically; convergence is less robust in the presence of noise. The MBKG method converges monotonically even with noisy input data and is therefore more suitable for surrogate model construction from numerical experiments. This work is the first step towards a full multiscale modeling of the interaction of shocked particle-laden flows.
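    The surrogate-modeling idea, interpolating sparse mesoscale results with a Kriging predictor, can be sketched with plain Gaussian-process interpolation under a squared-exponential covariance. This generic sketch is not the DKG or MBKG method, and the training data below are synthetic:

    ```python
    import numpy as np

    def krige(x_train, y_train, x_query, length=1.0, noise=1e-8):
        """Simple-Kriging / GP interpolation in one dimension with a
        squared-exponential covariance. A generic illustration of building a
        surrogate from a handful of expensive simulations; hyperparameters
        are fixed rather than estimated as in DKG/MBKG."""
        def k(a, b):
            return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
        K = k(x_train, x_train) + noise * np.eye(len(x_train))  # jitter for stability
        weights = np.linalg.solve(K, y_train)
        return k(x_query, x_train) @ weights

    # synthetic "mesoscale results": four samples of a smooth response curve
    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.sin(x)
    pred = krige(x, y, np.array([1.5]))
    ```

    The surrogate reproduces the training samples almost exactly and interpolates smoothly between them, which is precisely the property that lets a macro-scale solver query the closure law at arbitrary Mach number and volume fraction.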

  16. Nursing students' attitudes toward video games and related new media technologies.

    PubMed

    Lynch-Sauer, Judith; Vandenbosch, Terry M; Kron, Frederick; Gjerde, Craig Livingston; Arato, Nora; Sen, Ananda; Fetters, Michael D

    2011-09-01

    Little is known about Millennial nursing students' attitudes toward computer games and new media in nursing education and whether these attitudes differ between undergraduates and graduates. This study elicited nursing students' experience with computer games and new media, their attitudes toward various instructional styles and methods, and the role of computer games and new media technologies in nursing education. We e-mailed all nursing students enrolled in two universities to invite their participation in an anonymous cross-sectional online survey. The survey collected demographic data and participants' experience with and attitudes toward video gaming and multi-player online health care simulations. We used descriptive statistics and logistic regression to compare the differences between undergraduates and graduates. Two hundred eighteen nursing students participated. Many of the nursing students support using new media technologies in nursing education. Nurse educators should identify areas suitable for new media integration and further evaluate the effectiveness of these technologies. Copyright 2011, SLACK Incorporated.

  17. Rapid Airplane Parametric Input Design(RAPID)

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Bloor, Malcolm I. G.; Wilson, Michael J.; Thomas, Almuttil M.

    2004-01-01

    An efficient methodology is presented for defining a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. A small set of design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. The wing, tail, and canard components are manifested by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Grid sensitivity is obtained by applying the automatic differentiation precompiler ADIFOR to software for the grid generation. The computed surface grids, volume grids, and sensitivity derivatives are suitable for a wide range of Computational Fluid Dynamics simulation and configuration optimizations.

  18. Development of a Multi-Disciplinary Computing Environment (MDICE)

    NASA Technical Reports Server (NTRS)

    Kingsley, Gerry; Siegel, John M., Jr.; Harrand, Vincent J.; Lawrence, Charles; Luker, Joel J.

    1999-01-01

    The growing need for and importance of multi-component and multi-disciplinary engineering analysis has been understood for many years. For many applications, loose (or semi-implicit) coupling is optimal, and allows the use of various legacy codes without requiring major modifications. For this purpose, CFDRC and NASA LeRC have developed a computational environment to enable coupling between various flow analysis codes at several levels of fidelity. This has been referred to as the Visual Computing Environment (VCE), and is being successfully applied to the analysis of several aircraft engine components. Recently, CFDRC and AFRL/VAAC (WL) have extended the framework and scope of VCE to enable complex multi-disciplinary simulations. The chosen initial focus is on aeroelastic aircraft applications. The developed software is referred to as MDICE-AE, an extensible system suitable for integration of several engineering analysis disciplines. This paper describes the methodology, basic architecture, chosen software technologies, salient library modules, and the current status of and plans for MDICE. A fluid-structure interaction application is described in a separate companion paper.

  19. Optical signal processing using photonic reservoir computing

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Dehyadegari, Louiza

    2014-10-01

    As a new approach to recognition and classification problems, photonic reservoir computing has such advantages as parallel information processing, power efficiency, and high speed. In this paper, a photonic structure is proposed for reservoir computing, investigated using a simple yet noisy time series prediction task. This study includes the application of a suitable topology with self-feedback in a network of SOAs, which lends the system a strong memory, and leads to adjusting adequate parameters resulting in perfect recognition accuracy (100%) for noise-free time series, a 3% improvement over previous results. For the classification of noisy time series, the accuracy showed a 4% increase, amounting to 96%. Furthermore, an analytical approach is suggested for solving the rate equations, which leads to a substantial decrease in simulation time, an important parameter in the classification of large signals such as speech, and better results are obtained compared with previous works.
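    The reservoir-computing principle, a fixed random dynamical system plus a trained linear readout, is easy to demonstrate in software. The sketch below is a minimal echo state network trained for one-step-ahead prediction of a clean sine wave; it is a generic stand-in, not the photonic SOA network with self-feedback studied in the paper:

    ```python
    import numpy as np

    def esn_fit(series, n_res=50, rho=0.9, washout=20, seed=0):
        """Minimal echo state network: drive a fixed random tanh reservoir
        with the input series, discard the transient, and fit a linear
        least-squares readout that predicts the next sample. Returns the
        training predictions and the matching targets."""
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(n_res, n_res))
        W *= rho / np.abs(np.linalg.eigvals(W)).max()   # set spectral radius < 1
        w_in = rng.normal(size=n_res)
        x = np.zeros(n_res)
        states = []
        for u in series[:-1]:                           # drive the reservoir
            x = np.tanh(W @ x + w_in * u)
            states.append(x.copy())
        S = np.array(states[washout:])                  # drop the transient
        targets = series[1 + washout:]                  # next-step targets
        w_out = np.linalg.lstsq(S, targets, rcond=None)[0]  # linear readout
        return S @ w_out, targets

    t = np.linspace(0.0, 8.0 * np.pi, 400)
    pred, targets = esn_fit(np.sin(t))
    ```

    Only the readout weights are trained; the reservoir itself stays fixed, which is what makes physical (here photonic) implementations of the reservoir attractive.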

  20. The Development of Duct for a Horizontal Axis Turbine Using CFD

    NASA Astrophysics Data System (ADS)

    Ghani, Mohamad Pauzi Abdul; Yaacob, Omar; Aziz, Azliza Abdul

    2010-06-01

Malaysia is heavily dependent on fossil fuels to satisfy its energy demand. A renewable source that has recently attracted great interest is marine current energy, which is extracted by a device called a marine current turbine. This energy resource has great potential to be exploited on a large scale because of its predictability and intensity. This paper focuses on developing a Horizontal Axis Marine Current Turbine (HAMCT) rotor to extract marine current energy suitable for Malaysian sea conditions. The work incorporates the characteristics of Malaysia's shallow, low-speed coastal waters in developing the turbine. The HAMCT rotor is developed and simulated using CAD and CFD software for various combinations of inlet and outlet duct designs. The computer simulation results of the HAMCT being developed are presented.
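As a back-of-envelope companion to this record, the power available to such a rotor is commonly estimated with the actuator-disc relation P = ½ρACpV³; the density, diameter, and power coefficient below are illustrative assumptions, not values from the paper:

```python
import math

# Illustrative actuator-disc estimate of marine-current turbine power.
# All numbers are assumptions for illustration, not from the paper.
rho = 1025.0      # seawater density, kg/m^3
diameter = 4.0    # rotor diameter, m (hypothetical)
Cp = 0.35         # power coefficient, below the Betz limit of 16/27
A = math.pi * (diameter / 2) ** 2   # swept area, m^2

def rotor_power(v):
    """Mechanical power (W) extracted from a current of speed v (m/s)."""
    return 0.5 * rho * A * Cp * v ** 3

for v in (0.5, 1.0, 1.5):   # low-speed currents typical of shallow seas
    print(f"V = {v:.1f} m/s -> P = {rotor_power(v) / 1e3:.2f} kW")
```

The cubic dependence on current speed is why even a modest increase in site current speed matters far more than rotor refinements.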

  1. An LBM based model for initial stenosis development in the carotid artery

    NASA Astrophysics Data System (ADS)

    Stamou, A. C.; Buick, J. M.

    2016-05-01

A numerical scheme is proposed to simulate the early stages of stenosis development based on the properties of blood flow in the carotid artery, computed using the lattice Boltzmann method. The model is developed on the premise, supported by evidence from the literature, that the stenosis develops in regions of low velocity and low wall shear stress. The model is based on two spatial parameters which relate to the extent to which the stenosis can grow in each development phase. Simulations of stenosis development are presented for a range of the spatial parameters to determine suitable ranges for their application. Flow fields are also presented which indicate that the stenosis is developing in a realistic manner, providing evidence that stenosis development is indeed influenced by the low shear stress, rather than occurring in such areas coincidentally.
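A one-dimensional toy version of the growth rule suggested by the abstract might look as follows; the threshold and reach parameters, the synthetic shear profile, and the rule itself are assumptions for illustration, and the lattice Boltzmann flow solver is not reproduced:

```python
import numpy as np

# Toy 1D sketch of a two-parameter growth rule: wall sites with low wall
# shear stress (WSS), close enough to the existing stenosis, turn solid in
# each development phase. All parameters are invented stand-ins.
rng = np.random.default_rng(2)
n = 200
wss = 1.0 + 0.2 * rng.standard_normal(n)   # synthetic wall shear stress
wss[90:110] = 0.1 * rng.random(20)         # a low-shear pocket

solid = np.zeros(n, dtype=bool)
threshold, reach = 0.5, 3                  # hypothetical spatial parameters

for phase in range(5):                     # development phases
    low = wss < threshold
    if not solid.any():
        grow = low                         # seed the stenosis at low-WSS sites
    else:
        near = np.convolve(solid.astype(float),
                           np.ones(2 * reach + 1), "same") > 0
        grow = low & near                  # grow only near the existing plaque
    solid |= grow

print("stenosis extent (sites):", int(solid.sum()))
```

In the real model the flow (and hence the WSS field) would be recomputed with the lattice Boltzmann solver after each phase, since the growing stenosis alters the flow that drives further growth.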

  2. Instantaneous-to-daily GPP upscaling schemes based on a coupled photosynthesis-stomatal conductance model: correcting the overestimation of GPP by directly using daily average meteorological inputs.

    PubMed

    Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin

    2014-11-01

Daily canopy photosynthesis is usually temporally upscaled from the instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes for daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated against tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed that IDM closely followed the seasonal trend of the tower-derived GPP, with an average RMSE of 1.63 g C m⁻² day⁻¹ and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34% and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.
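The overestimation mechanism described here (a saturating, concave response evaluated at averaged inputs exceeds the average of instantaneous responses, by Jensen's inequality) can be demonstrated with a toy model. The rectangular-hyperbola light response and all parameter values below are illustrative assumptions, not the coupled photosynthesis-stomatal conductance model of the record:

```python
import numpy as np

# Toy rectangular-hyperbola light response (an assumption, not the paper's
# model) showing why feeding daily-average radiation into a nonlinear model
# overestimates daily GPP.
Pmax, k = 30.0, 400.0   # assumed saturation rate and half-saturation PAR

def photosynthesis(par):
    return Pmax * par / (par + k)

hours = np.linspace(0, 24, 48, endpoint=False)
# Half-sine daylight between 06:00 and 18:00, zero at night.
par = np.maximum(0.0, 1500.0 * np.sin(np.pi * (hours - 6) / 12))

gpp_integrated = photosynthesis(par).mean()   # IDM-like: respond, then average
gpp_daily_avg = photosynthesis(par.mean())    # SADM-like: average inputs first

print(f"integrated: {gpp_integrated:.2f}, daily-average inputs: {gpp_daily_avg:.2f}")
print(f"overestimation: {100 * (gpp_daily_avg / gpp_integrated - 1):.1f}%")
```

Because the response saturates (is concave), the daily-average-input shortcut systematically overestimates the properly integrated value, mirroring the SADM bias the record reports.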

  3. High-performance parallel computing in the classroom using the public goods game as an example

    NASA Astrophysics Data System (ADS)

    Perc, Matjaž

    2017-07-01

    The use of computers in statistical physics is common because the sheer number of equations that describe the behaviour of an entire system particle by particle often makes it impossible to solve them exactly. Monte Carlo methods form a particularly important class of numerical methods for solving problems in statistical physics. Although these methods are simple in principle, their proper use requires a good command of statistical mechanics, as well as considerable computational resources. The aim of this paper is to demonstrate how the usage of widely accessible graphics cards on personal computers can elevate the computing power in Monte Carlo simulations by orders of magnitude, thus allowing live classroom demonstration of phenomena that would otherwise be out of reach. As an example, we use the public goods game on a square lattice where two strategies compete for common resources in a social dilemma situation. We show that the second-order phase transition to an absorbing phase in the system belongs to the directed percolation universality class, and we compare the time needed to arrive at this result by means of the main processor and by means of a suitable graphics card. Parallel computing on graphics processing units has been developed actively during the last decade, to the point where today the learning curve for entry is anything but steep for those familiar with programming. The subject is thus ripe for inclusion in graduate and advanced undergraduate curricula, and we hope that this paper will facilitate this process in the realm of physics education. To that end, we provide a documented source code for an easy reproduction of presented results and for further development of Monte Carlo simulations of similar systems.
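A serial Monte Carlo sketch of the lattice public goods game described above may look as follows; the GPU parallelization that is the record's actual subject is omitted, and the lattice size, synergy factor r and noise K are illustrative choices:

```python
import numpy as np

# Minimal CPU Monte Carlo sketch of the spatial public goods game on a
# square lattice. Tiny lattice and short run, purely for illustration.
rng = np.random.default_rng(1)
L, r, K = 20, 3.8, 0.5              # lattice size, synergy factor, noise
s = rng.integers(0, 2, (L, L))      # 1 = cooperator, 0 = defector

def payoff(x, y):
    """Accumulated payoff of site (x, y) over the five groups it belongs to."""
    total = 0.0
    for cx, cy in [(x, y), (x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]:
        members = [s[cx % L, cy % L], s[(cx - 1) % L, cy % L],
                   s[(cx + 1) % L, cy % L], s[cx % L, (cy - 1) % L],
                   s[cx % L, (cy + 1) % L]]
        total += r * sum(members) / 5.0 - s[x, y]   # share minus own cost
    return total

moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
for _ in range(20 * L * L):          # random sequential (Monte Carlo) updates
    x, y = rng.integers(0, L, 2)
    dx, dy = moves[rng.integers(4)]
    nx, ny = (x + dx) % L, (y + dy) % L
    if s[x, y] != s[nx, ny]:
        # Fermi rule: imitate the neighbour with noise-dependent probability.
        p = 1.0 / (1.0 + np.exp((payoff(x, y) - payoff(nx, ny)) / K))
        if rng.random() < p:
            s[x, y] = s[nx, ny]

frac = s.mean()
print(f"cooperator fraction after relaxation: {frac:.2f}")
```

Establishing the directed-percolation exponents of the absorbing transition requires far larger lattices and longer relaxation than this sketch, which is precisely the workload the paper offloads to the GPU.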

  4. Efficient computation of electrograms and ECGs in human whole heart simulations using a reaction-eikonal model.

    PubMed

    Neic, Aurel; Campos, Fernando O; Prassl, Anton J; Niederer, Steven A; Bishop, Martin J; Vigmond, Edward J; Plank, Gernot

    2017-10-01

Anatomically accurate and biophysically detailed bidomain models of the human heart have proven a powerful tool for gaining quantitative insight into the links between electrical sources in the myocardium and the concomitant current flow in the surrounding medium as they represent their relationship mechanistically based on first principles. Such models are increasingly considered as a clinical research tool with the perspective of being used, ultimately, as a complementary diagnostic modality. An important prerequisite in many clinical modeling applications is the ability of models to faithfully replicate potential maps and electrograms recorded from a given patient. However, while the personalization of electrophysiology models based on the gold standard bidomain formulation is in principle feasible, the associated computational expenses are significant, rendering their use incompatible with clinical time frames. In this study we report on the development of a novel computationally efficient reaction-eikonal (R-E) model for modeling extracellular potential maps and electrograms. Using a biventricular human electrophysiology model, which incorporates a topologically realistic His-Purkinje system (HPS), we demonstrate by comparing against a high-resolution reaction-diffusion (R-D) bidomain model that the R-E model predicts extracellular potential fields, electrograms as well as ECGs at the body surface with high fidelity and offers vast computational savings of more than three orders of magnitude. Due to their efficiency, R-E models are ideally suited to forward simulations in clinical modeling studies which attempt to personalize electrophysiological model features.

  5. r.avaflow v1, an advanced open-source computational framework for the propagation and interaction of two-phase mass flows

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Fischer, Jan-Thomas; Krenn, Julia; Pudasaini, Shiva P.

    2017-02-01

r.avaflow represents an innovative open-source computational tool for routing rapid mass flows, avalanches, or process chains from a defined release area down an arbitrary topography to a deposition area. In contrast to most existing computational tools, r.avaflow (i) employs a two-phase, interacting solid and fluid mixture model (Pudasaini, 2012); (ii) is suitable for modelling more or less complex process chains and interactions; (iii) explicitly considers both entrainment and stopping with deposition, i.e. the change of the basal topography; (iv) allows for the definition of multiple release masses and/or hydrographs; and (v) provides built-in functionalities for validation, parameter optimization, and sensitivity analysis. r.avaflow is freely available as a raster module of the GRASS GIS software, employing the programming languages Python and C along with the statistical software R. We exemplify the functionalities of r.avaflow by means of two sets of computational experiments: (1) generic process chains consisting of bulk mass and hydrograph release into a reservoir with entrainment of the dam and impact downstream; (2) the prehistoric Acheron rock avalanche, New Zealand. The simulation results are generally plausible for (1) and, after the optimization of two key parameters, reasonably in line with the corresponding observations for (2). However, we identify some potential to enhance the analytic and numerical concepts. Further, thorough parameter studies will be necessary in order to make r.avaflow fit for reliable forward simulations of possible future mass flow events.

  6. Efficient computation of electrograms and ECGs in human whole heart simulations using a reaction-eikonal model

    NASA Astrophysics Data System (ADS)

    Neic, Aurel; Campos, Fernando O.; Prassl, Anton J.; Niederer, Steven A.; Bishop, Martin J.; Vigmond, Edward J.; Plank, Gernot

    2017-10-01

Anatomically accurate and biophysically detailed bidomain models of the human heart have proven a powerful tool for gaining quantitative insight into the links between electrical sources in the myocardium and the concomitant current flow in the surrounding medium as they represent their relationship mechanistically based on first principles. Such models are increasingly considered as a clinical research tool with the perspective of being used, ultimately, as a complementary diagnostic modality. An important prerequisite in many clinical modeling applications is the ability of models to faithfully replicate potential maps and electrograms recorded from a given patient. However, while the personalization of electrophysiology models based on the gold standard bidomain formulation is in principle feasible, the associated computational expenses are significant, rendering their use incompatible with clinical time frames. In this study we report on the development of a novel computationally efficient reaction-eikonal (R-E) model for modeling extracellular potential maps and electrograms. Using a biventricular human electrophysiology model, which incorporates a topologically realistic His-Purkinje system (HPS), we demonstrate by comparing against a high-resolution reaction-diffusion (R-D) bidomain model that the R-E model predicts extracellular potential fields, electrograms as well as ECGs at the body surface with high fidelity and offers vast computational savings of more than three orders of magnitude. Due to their efficiency, R-E models are ideally suited to forward simulations in clinical modeling studies which attempt to personalize electrophysiological model features.

  7. Shape optimization of pulsatile ventricular assist devices using FSI to minimize thrombotic risk

    NASA Astrophysics Data System (ADS)

    Long, C. C.; Marsden, A. L.; Bazilevs, Y.

    2014-10-01

In this paper we perform shape optimization of a pediatric pulsatile ventricular assist device (PVAD). The device simulation is carried out using fluid-structure interaction (FSI) modeling techniques within a computational framework that combines FEM for fluid mechanics and isogeometric analysis for structural mechanics modeling. The PVAD FSI simulations are performed under realistic conditions (i.e., flow speeds, pressure levels, boundary conditions, etc.), and account for the interaction of air, blood, and a thin structural membrane separating the two fluid subdomains. The shape optimization study is designed to reduce thrombotic risk, a major clinical problem in PVADs. Thrombotic risk is quantified in terms of particle residence time in the device blood chamber. Methods to compute particle residence time in the context of moving spatial domains are presented in a companion paper published in the same issue (Comput Mech, doi: 10.1007/s00466-013-0931-y, 2013). The surrogate management framework, a derivative-free pattern search optimization method that relies on surrogates for increased efficiency, is employed in this work. For the optimization study shown here, particle residence time is used to define a suitable cost or objective function, while four adjustable design optimization parameters are used to define the device geometry. The FSI-based optimization framework is implemented in a parallel computing environment, and deployed with minimal user intervention. Using five SEARCH/POLL steps the optimization scheme identifies a PVAD design with significantly better throughput efficiency than the original device.
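The surrogate management framework itself is beyond a short sketch, but its derivative-free POLL step can be illustrated on a toy cost function standing in for particle residence time; the quadratic objective, step sizes, and tolerances below are invented for illustration (the surrogate-driven SEARCH step is omitted):

```python
import numpy as np

# Toy derivative-free pattern search: poll the 2n coordinate directions,
# move on improvement, halve the mesh when the poll fails.
def poll_search(f, x0, step=1.0, tol=1e-6, max_iter=200):
    x = np.asarray(x0, float)
    fx = f(x)
    n = x.size
    while step > tol and max_iter > 0:
        max_iter -= 1
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):   # 2n poll directions
            trial = x + step * d
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5          # refine the mesh when the poll fails
    return x, fx

# Hypothetical smooth "cost" standing in for particle residence time,
# over two of the four design parameters. Minimum at (1, -2).
cost = lambda p: float((p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)
x_opt, f_opt = poll_search(cost, [0.0, 0.0])
print(x_opt, f_opt)
```

Pattern search needs only function values, which is why it suits expensive black-box FSI simulations where gradients are unavailable.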

  8. GPU-accelerated algorithms for many-particle continuous-time quantum walks

    NASA Astrophysics Data System (ADS)

    Piccinini, Enrico; Benedetti, Claudia; Siloi, Ilaria; Paris, Matteo G. A.; Bordone, Paolo

    2017-06-01

Many-particle continuous-time quantum walks (CTQWs) represent a resource for several tasks in quantum technology, including quantum search algorithms and universal quantum computation. In order to design and implement CTQWs in a realistic scenario, one needs effective simulation tools for Hamiltonians that take into account static noise and fluctuations in the lattice, i.e. Hamiltonians containing stochastic terms. To this aim, we suggest a parallel algorithm based on the Taylor series expansion of the evolution operator, and compare its performance with that of algorithms based on exact diagonalization of the Hamiltonian or 4th-order Runge-Kutta integration. We show that both the Taylor-series expansion and Runge-Kutta algorithms are reliable and have low computational cost, the Taylor-series expansion showing the additional advantage of a memory allocation that does not depend on the precision of calculation. Both algorithms are also highly parallelizable within the SIMT paradigm, and are thus suitable for GPGPU computing. We have benchmarked 4 NVIDIA GPUs and 3 quad-core Intel CPUs for a 2-particle system over lattices of increasing dimension, showing that the speedup provided by GPU computing, with respect to the OPENMP parallelization, lies in the range between 8x and (more than) 20x, depending on the frequency of post-processing. GPU-accelerated codes thus allow one to overcome concerns about execution time, and make simulations with many interacting particles on large lattices possible, with the only limit being the memory available on the device.
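The Taylor-series propagation the record compares against exact diagonalization can be sketched for a single particle on a small lattice; the lattice, time step, and truncation order below are illustrative, and the paper's many-particle, stochastic-Hamiltonian GPU code is not reproduced:

```python
import numpy as np

# Taylor-series propagator: psi(t+dt) = exp(-i H dt) psi, approximated by
# truncating sum_k (-i H dt)^k / k! psi. Only matrix-vector products are
# needed, so memory does not grow with the truncation order.
def taylor_step(H, psi, dt, order=10):
    term = psi.astype(complex)
    out = psi.astype(complex)
    for k in range(1, order + 1):
        term = (-1j * dt / k) * (H @ term)   # builds (-i H dt)^k / k! psi
        out = out + term
    return out

# Illustrative single-particle tight-binding chain with hopping -1.
n = 64
H = np.zeros((n, n))
idx = np.arange(n - 1)
H[idx, idx + 1] = H[idx + 1, idx] = -1.0

psi0 = np.zeros(n, complex)
psi0[n // 2] = 1.0                           # walker starts at the centre
psi = taylor_step(H, psi0, dt=0.1)

# Reference: exact diagonalization of the same (small) Hamiltonian.
w, V = np.linalg.eigh(H)
psi_exact = V @ (np.exp(-1j * w * 0.1) * (V.conj().T @ psi0))
print("max error vs exact diagonalization:", np.max(np.abs(psi - psi_exact)))
```

Because each step costs only a handful of matrix-vector products, this scheme maps naturally onto SIMT hardware, which is the basis of the GPU speedups reported in the record.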

  9. High-Efficiency High-Resolution Global Model Developments at the NASA Goddard Data Assimilation Office

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; Atlas, Robert (Technical Monitor)

    2002-01-01

The Data Assimilation Office (DAO) has been developing a new generation of ultra-high resolution General Circulation Model (GCM) that is suitable for 4-D data assimilation, numerical weather predictions, and climate simulations. These three applications have conflicting requirements. For 4-D data assimilation and weather predictions, it is highly desirable to run the model at the highest possible spatial resolution (e.g., 55 km or finer) so as to be able to resolve and predict socially and economically important weather phenomena such as tropical cyclones, hurricanes, and severe winter storms. For climate change applications, the model simulations need to be carried out for decades, if not centuries. To reduce uncertainty in climate change assessments, the next generation model would also need to be run at a fine enough spatial resolution that can at least marginally simulate the effects of intense tropical cyclones. Scientific problems (e.g., parameterization of subgrid scale moist processes) aside, all three areas of application require the model's computational performance to be dramatically improved as compared to the previous generation. In this talk, I will present the current and future developments of the "finite-volume dynamical core" at the Data Assimilation Office. This dynamical core applies modern monotonicity preserving algorithms and is genuinely conservative by construction, not by an ad hoc fixer. The "discretization" of the conservation laws is purely local, which is clearly advantageous for resolving sharp gradient flow features. In addition, the local nature of the finite-volume discretization also has a significant advantage on distributed memory parallel computers. Together with a unique vertically Lagrangian control volume discretization that essentially reduces the dimension of the computational problem from three to two, the finite-volume dynamical core is very efficient, particularly at high resolutions.
I will also present the computational design of the dynamical core using a hybrid distributed-shared memory programming paradigm that is portable to virtually any of today's high-end parallel super-computing clusters.
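The "conservative by construction" property of flux-form finite-volume schemes can be seen in a one-dimensional toy advection example; first-order upwind on a periodic domain, an illustration only, not the actual dynamical core:

```python
import numpy as np

# Generic 1D finite-volume advection sketch: cell averages change only
# through flux differences at faces, so the total is conserved to round-off
# with no ad hoc fixer. First-order upwind fluxes, periodic domain.
n, c = 100, 0.5                 # number of cells, Courant number u*dt/dx
q = np.zeros(n)
q[40:60] = 1.0                  # initial square pulse (cell averages)
total0 = q.sum()

for _ in range(200):
    # The upwind flux through the left face of cell i is c*q[i-1]; the
    # update is the difference of face fluxes, hence locally conservative.
    q = q + c * (np.roll(q, 1) - q)

print("mass change after 200 steps:", q.sum() - total0)
```

Because the upwind update is a convex combination of neighbouring cell values for 0 ≤ c ≤ 1, the scheme is also monotone: no new extrema are created, the property the monotonicity-preserving algorithms above generalize to higher order.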

  11. Transition to turbulence in plane channel flows

    NASA Technical Reports Server (NTRS)

    Biringen, S.

    1984-01-01

Results obtained from a numerical simulation of the final stages of transition to turbulence in plane channel flow are described. The three-dimensional, incompressible Navier-Stokes equations are numerically integrated to obtain the time evolution of two- and three-dimensional finite-amplitude disturbances. Computations are performed on a CYBER-203 vector processor for a 32x51x32 grid. Results are presented for no-slip boundary conditions at the solid walls as well as for periodic suction-blowing to simulate active control of transition by mass transfer. Solutions indicate that the method is capable of simulating the complex character of vorticity dynamics during the various stages of transition and final breakdown. In particular, evidence points to the formation of a lambda-shaped vortex and the subsequent system of horseshoe vortices inclined to the main flow direction as the main elements of transition. Calculations involving periodic suction-blowing indicate that interference with a wave of suitable phase and amplitude reduces the disturbance growth rates.

  12. Molecular Dynamics Simulations of Intrinsically Disordered Proteins: On the Accuracy of the TIP4P-D Water Model and the Representativeness of Protein Disorder Models.

    PubMed

    Henriques, João; Skepö, Marie

    2016-07-12

Here, we first present a follow-up to previous work by our group on the difficulties of molecular dynamics simulations of intrinsically disordered proteins (IDPs) [Henriques et al., J. Chem. Theory Comput. 2015, 11, 3420-3431], using the recently developed TIP4P-D water model. When used in conjunction with the standard AMBER ff99SB-ILDN force field and applied to the simulation of Histatin 5, our IDP model, we obtain results which are in excellent agreement with the best-performing IDP-suitable force field from the earlier study and with experiment. We then assess the representativeness of the IDP models used in these and similar studies, finding that most are too short in comparison to the average IDP and contain a bias toward hydrophilic amino acid residues. Moreover, several key order- and disorder-promoting residues are also found to be misrepresented. It seems appropriate for future studies to address these issues.

  13. Numerical Simulation of the Effects of Water Surface in Building Environment

    NASA Astrophysics Data System (ADS)

    Li, Guangyao; Pan, Yuqing; Yang, Li

    2018-03-01

A water body can affect the thermal environment and airflow field in building districts because of its special thermal characteristics, evaporation and flat surface. The thermal influence of the water body in the front area of Tongji University's Jiading Campus was evaluated. First, a suitable evaporation model was selected and applied to calculate the boundary conditions of the water surface in the Fluent software. Next, computational fluid dynamics (CFD) simulations were conducted on models both with and without water, following CFD practice guidelines. Finally, the outputs of the two simulations were compared with each other. Results showed that the effect of evaporative cooling from the water surface strongly depends on the wind direction, with a temperature decrease of about 2∼5°C. The relative humidity within the enclosed area was affected by both the building arrangement and the surrounding water. An increase of about 0.1∼0.2 m/s in wind speed induced by the water evaporation was observed in the open space.

  14. Arrays of individually controlled ions suitable for two-dimensional quantum simulations

    DOE PAGES

    Mielenz, Manuel; Kalis, Henning; Wittemer, Matthias; ...

    2016-06-13

A precisely controlled quantum system may reveal a fundamental understanding of another, less accessible system of interest. A universal quantum computer is currently out of reach, but an analogue quantum simulator that makes relevant observables, interactions and states of a quantum model accessible could permit insight into complex dynamics. Several platforms have been suggested and proof-of-principle experiments have been conducted. Here, we operate two-dimensional arrays of three trapped ions in individually controlled harmonic wells forming equilateral triangles with side lengths 40 and 80 μm. In our approach, which is scalable to arbitrary two-dimensional lattices, we demonstrate individual control of the electronic and motional degrees of freedom, preparation of a fiducial initial state with ion motion close to the ground state, as well as a tuning of couplings between ions within experimental sequences. Lastly, our work paves the way towards a quantum simulator of two-dimensional systems designed at will.

  15. MMAPDNG: A new, fast code backed by a memory-mapped database for simulating delayed γ-ray emission with MCNPX package

    NASA Astrophysics Data System (ADS)

    Lou, Tak Pui; Ludewigt, Bernhard

    2015-09-01

The simulation of the emission of beta-delayed gamma rays following nuclear fission and the calculation of time-dependent energy spectra is a computational challenge. The widely used radiation transport code MCNPX includes a delayed gamma-ray routine that is inefficient and not suitable for simulating complex problems. This paper describes the code "MMAPDNG" (Memory-Mapped Delayed Neutron and Gamma), an optimized delayed gamma module written in C, discusses usage and merits of the code, and presents results. The approach is based on storing the required Fission Product Yield (FPY) data, decay data, and delayed particle data in a memory-mapped file. When compared to the original delayed gamma-ray code in MCNPX, memory utilization is reduced by two orders of magnitude and delayed gamma-ray sampling is sped up by three orders of magnitude. Other delayed particles such as neutrons and electrons can be implemented in future versions of the MMAPDNG code using its existing framework.
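The memory-mapping idea behind MMAPDNG can be illustrated with a small sketch: a table is written to disk once and then accessed through a memory map, so the operating system pages in only what is touched instead of loading the whole dataset. The flat file layout below is a made-up example, not the MMAPDNG database format:

```python
import mmap
import os
import struct
import tempfile

# Write a hypothetical flat table of float64 "yields" to disk once.
path = os.path.join(tempfile.mkdtemp(), "fpy.bin")
yields = [0.01 * i for i in range(1000)]
with open(path, "wb") as f:
    f.write(struct.pack(f"<{len(yields)}d", *yields))

# Map the file and read one record by offset, without loading the rest.
with open(path, "rb") as f, \
        mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
    (value,) = struct.unpack_from("<d", mm, 500 * 8)   # record 500

print(value)
```

For sampling-heavy Monte Carlo codes this pattern also lets many processes share one read-only mapping of the nuclear-data tables, rather than each holding a private in-memory copy.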

  16. Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem

    PubMed Central

    Akutsah, Francis; Olusanya, Micheal O.; Adewumi, Aderemi O.

    2018-01-01

The intelligent water drop algorithm is a swarm-based metaheuristic algorithm, inspired by the characteristics of water drops in a river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm was introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from getting trapped in local minima and to improve its solution quality. In addition, potential problematic issues associated with using simulated annealing, including high computational runtime and the exponential calculation of the acceptance probability, are investigated. The exponential calculation of the acceptance probability in simulated-annealing-based techniques is computationally expensive; therefore, in order to maximize the performance of the intelligent water drop algorithm with simulated annealing, a cheaper way of calculating the acceptance probability is considered. The performance of the proposed hybrid algorithm is evaluated using 33 standard test problems, with the results obtained compared with the solutions offered by four well-known techniques from the literature. Experimental results and statistical tests show that the new method possesses outstanding performance in terms of solution quality and runtime. In addition, the proposed algorithm is suitable for solving large-scale problems. PMID:29554662

  17. Comparison of validation methods for forming simulations

    NASA Astrophysics Data System (ADS)

    Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus

    2018-05-01

The forming simulation of fibre-reinforced thermoplastics could reduce development time and improve forming results. But to take advantage of the full potential of the simulations, it has to be ensured that the predicted material behaviour is correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are, for example, the outer contour, the occurrence of defects and the fibre paths. Various methods are available to measure these features. The most relevant, and also the most difficult to measure, are the emerging fibre orientations; the focus of this study was therefore on measuring this feature. The aim was to give an overview of the properties of different measuring systems and to select the most promising systems for a comparison survey. Selected were an optical, an eddy current and a computer-assisted tomography system, with the focus on measuring the fibre orientations. Different formed 3D parts made of unidirectional glass-fibre and carbon-fibre reinforced thermoplastics were measured, revealing the advantages and disadvantages of the tested systems. Optical measurement systems are easy to use but are limited to the surface plies. With an eddy current system, lower plies can also be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system, all plies can be measured, but the system is limited to small parts and challenging to evaluate.

  18. Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem.

    PubMed

    Ezugwu, Absalom E; Akutsah, Francis; Olusanya, Micheal O; Adewumi, Aderemi O

    2018-01-01

The intelligent water drop algorithm is a swarm-based metaheuristic algorithm, inspired by the characteristics of water drops in a river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm was introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from getting trapped in local minima and to improve its solution quality. In addition, potential problematic issues associated with using simulated annealing, including high computational runtime and the exponential calculation of the acceptance probability, are investigated. The exponential calculation of the acceptance probability in simulated-annealing-based techniques is computationally expensive; therefore, in order to maximize the performance of the intelligent water drop algorithm with simulated annealing, a cheaper way of calculating the acceptance probability is considered. The performance of the proposed hybrid algorithm is evaluated using 33 standard test problems, with the results obtained compared with the solutions offered by four well-known techniques from the literature. Experimental results and statistical tests show that the new method possesses outstanding performance in terms of solution quality and runtime. In addition, the proposed algorithm is suitable for solving large-scale problems.
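The abstract does not spell out its cheaper acceptance computation; one common way to avoid the exponential in the Metropolis test, shown here purely as an assumption and not as the paper's method, is to compare the logarithm of the uniform draw against -delta/T:

```python
import math
import random

# Classic Metropolis acceptance test for simulated annealing.
def accept_exp(delta, T, u):
    return u < math.exp(-delta / T)

# Mathematically equivalent test (log is monotone, both sides positive),
# avoiding the exp() call on every proposal.
def accept_log(delta, T, u):
    return math.log(u) < -delta / T

# Verify the two tests agree over random worsening moves and temperatures.
random.seed(0)
for _ in range(10000):
    delta = random.uniform(0.0, 5.0)   # cost increase of the proposed move
    T = random.uniform(0.1, 2.0)       # current temperature
    u = random.random()                # uniform draw in (0, 1)
    assert accept_exp(delta, T, u) == accept_log(delta, T, u)

print("both acceptance tests agree on 10000 random cases")
```

Other common variants cache exp values per temperature level or precompute a lookup table; which trick the authors actually use is not stated in the abstract.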

  19. Effective Energy Simulation and Optimal Design of Side-lit Buildings with Venetian Blinds

    NASA Astrophysics Data System (ADS)

    Cheng, Tian

    Venetian blinds are widely used in buildings to control the amount of incoming daylight, improving visual comfort and reducing heat gains in air-conditioning systems. Studies have shown that proper design and operation of window systems can result in significant energy savings in both lighting and cooling. However, there is currently no convenient computer tool that allows effective and efficient optimization of the envelope of side-lit buildings with blinds. Three computer tools widely used for this purpose, Adeline, DOE2 and EnergyPlus, have been experimentally examined in this study. Results indicate that the first two give unacceptable accuracy because of unrealistic assumptions, while the last may generate large errors in certain conditions. Moreover, current computer tools have to conduct hourly energy simulations, which are not necessary for life-cycle energy analysis and optimal design, to provide annual cooling loads. This is not computationally efficient, and is particularly unsuitable for optimizing a building design at the initial stage, when the impacts of many design variations and optional features have to be evaluated. A methodology is therefore developed for efficient and effective thermal and daylighting simulation and optimal design of buildings with blinds. Based on geometric optics and the radiosity method, a mathematical model is developed to simulate the daylighting behaviour of venetian blinds, so that indoor illuminance at any reference point can be computed directly and efficiently. The models have been validated against both experiments and simulations with Radiance. Validation results show that indoor illuminances computed by the new models agree well with the measured data, and that their accuracy is equivalent to that of Radiance. The computational efficiency of the new models is much higher than that of Radiance as well as EnergyPlus.
Two new methods are developed for the thermal simulation of buildings. A fast Fourier transform (FFT) method is presented to avoid the root-searching process in the inverse Laplace transform of multilayered walls. Generalized explicit FFT formulae for calculating the discrete Fourier transform (DFT) are developed for the first time; they largely facilitate the implementation of FFT. The new method also provides a basis for generating the symbolic response factors. Validation simulations show that it generates response factors as accurately as the analytical solutions. The second method directly estimates annual or seasonal cooling loads without the need for tedious hourly energy simulations; it is validated against hourly simulation results with DOE2. A symbolic long-term cooling load can then be created by combining the two methods with thermal network analysis. The symbolic long-term cooling load keeps the design parameters of interest as symbols, which is particularly useful for optimal design and sensitivity analysis. The methodology is applied to an office building in Hong Kong for the optimal design of the building envelope. Design variables such as window-to-wall ratio, building orientation, and glazing optical and thermal properties are included in the study. Results show that the selected design values can significantly affect the energy performance of windows, and that optimal design of side-lit buildings can greatly enhance energy savings. The application example also demonstrates that the developed methodology significantly facilitates optimal building design and sensitivity analysis, and leads to high computational efficiency.
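    The response-factor formulation behind these methods treats the hourly cooling load as a discrete convolution of wall response factors with a temperature-difference history, which is exactly the operation an FFT accelerates. A minimal sketch with purely illustrative response factors and temperatures (not the paper's data):

```python
import numpy as np

# Hypothetical wall response factors (kW per K) and a sol-air
# temperature-difference series (K); values are illustrative only.
response = np.array([0.40, 0.25, 0.12, 0.05, 0.02])
delta_T = np.random.default_rng(0).uniform(0.0, 10.0, size=48)

# Direct convolution: load(t) = sum_j r_j * dT(t - j)
load_direct = np.convolve(delta_T, response)[:len(delta_T)]

# The same convolution computed via the FFT (zero-padded to full length)
n = len(delta_T) + len(response) - 1
load_fft = np.fft.irfft(np.fft.rfft(delta_T, n) * np.fft.rfft(response, n), n)[:len(delta_T)]
```

    Both paths give the same load series; for long multi-year series the FFT route scales as O(n log n) instead of O(n²).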

  20. DSC: software tool for simulation-based design of control strategies applied to wastewater treatment plants.

    PubMed

    Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2011-01-01

    This paper presents a computer tool called DSC (Simulation based Controllers Design) that enables easy design of control systems and strategies applied to wastewater treatment plants (WWTPs). Although the control systems are developed and evaluated by simulation, this tool aims to facilitate the direct implementation of the designed control system on the PC of the full-scale WWTP. The designed control system can be programmed in a dedicated control application and can be connected to either the simulation software or the SCADA of the plant. To this end, the developed DSC incorporates an OPC server (OLE for Process Control), which provides an open-standard communication protocol for different industrial process applications. The potential capabilities of the DSC tool are illustrated through the example of a full-scale application. An aeration control system applied to a nutrient-removing WWTP was designed, tuned and evaluated with the DSC tool before its implementation in the full-scale plant. The control parameters obtained by simulation were suitable for the full-scale plant with only a few modifications to improve the control performance. With the DSC tool, the performance of control systems can be easily evaluated by simulation. Once developed and tuned by simulation, the control systems can be directly applied to the full-scale WWTP.

  1. Influence of Torrefaction on the Conversion Efficiency of the Gasification Process of Sugarcane Bagasse

    PubMed Central

    Anukam, Anthony; Mamphweli, Sampson; Okoh, Omobola; Reddy, Prashant

    2017-01-01

    Sugarcane bagasse was torrefied to improve its quality in terms of properties prior to gasification. Torrefaction was undertaken at 300 °C in an inert atmosphere of N₂ at a 10 °C·min⁻¹ heating rate. A residence time of 5 min allowed for rapid reaction of the material during torrefaction. Torrefied and untorrefied bagasse were characterized to compare their suitability as feedstocks for gasification. The results showed that torrefied bagasse had lower O–C and H–C atomic ratios of about 0.5 and 0.84, compared with 0.82 and 1.55, respectively, for untorrefied bagasse. A calorific value of about 20.29 MJ·kg⁻¹ was also measured for torrefied bagasse, around 13% higher than that of untorrefied bagasse at ca. 17.9 MJ·kg⁻¹. This confirms the former as a much more suitable feedstock for gasification, since the efficiency of gasification is a function of feedstock calorific value. SEM results also revealed a fibrous structure and pith in the micrographs of both torrefied and untorrefied bagasse, indicating the carbonaceous nature of both materials, with torrefied bagasse exhibiting a more permeable structure with larger surface area, features that favour gasification. The gasification process of torrefied bagasse relied on computer simulation to establish the impact of torrefaction on gasification efficiency. Optimum efficiency was achieved with torrefied bagasse because of its slightly modified properties. The conversion efficiency of the gasification process of torrefied bagasse increased from 50% to approximately 60% in the computer simulation, whereas that of untorrefied bagasse remained constant at 50%, even as the gasification time increased. PMID:28952501
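    Atomic O–C and H–C ratios of this kind follow from the ultimate analysis by dividing each element's mass fraction by its molar mass. A small sketch with hypothetical mass fractions, chosen only to illustrate the calculation and not taken from the paper:

```python
def atomic_ratios(c_wt, h_wt, o_wt):
    """O/C and H/C atomic ratios from elemental mass fractions (wt%),
    using molar masses C = 12.011, H = 1.008, O = 15.999 g/mol."""
    c, h, o = c_wt / 12.011, h_wt / 1.008, o_wt / 15.999
    return o / c, h / c

# Hypothetical ultimate-analysis values (wt%), for illustration only
oc, hc = atomic_ratios(c_wt=55.0, h_wt=3.9, o_wt=36.6)
```

    Lower O/C and H/C ratios after torrefaction indicate the loss of oxygen- and hydrogen-rich volatiles, which is why they track the rise in calorific value.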

  2. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    NASA Astrophysics Data System (ADS)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.

    2014-06-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. Openstack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.

  3. The Junior Computer Dictionary. 101 Useful Words and Definitions to Introduce Students to Computer Terminology.

    ERIC Educational Resources Information Center

    Willing, Kathlene R.; Girard, Suzanne

    Suitable for children from grades four to seven, this dictionary is designed to introduce children to computer terminology at a level that they will understand and find useful. It is also suitable as a home resource for parents, for library use, and as a handbook for teachers. For each word, the first sentence of the definition contains the kernel…

  4. Dual-Source Linear Energy Prediction (LINE-P) Model in the Context of WSNs

    PubMed Central

    Ahmed, Faisal

    2017-01-01

    Energy harvesting technologies such as miniature power solar panels and micro wind turbines are increasingly used to help power wireless sensor network nodes. However, a major drawback of energy harvesting is its varying and intermittent characteristic, which can negatively affect the quality of service. This calls for careful design and operation of the nodes, possibly by means of, e.g., dynamic duty cycling and/or dynamic frequency and voltage scaling. In this context, various energy prediction models have been proposed in the literature; however, they are typically compute-intensive or only suitable for a single type of energy source. In this paper, we propose Linear Energy Prediction “LINE-P”, a lightweight, yet relatively accurate model based on approximation and sampling theory; LINE-P is suitable for dual-source energy harvesting. Simulations and comparisons against existing similar models have been conducted with low and medium resolutions (i.e., 60 and 22 min intervals/24 h) for the solar energy source (low variations) and with high resolutions (15 min intervals/24 h) for the wind energy source. The results show that the accuracy of the solar-based and wind-based predictions is up to approximately 98% and 96%, respectively, while requiring a lower complexity and memory than the other models. For the cases where LINE-P’s accuracy is lower than that of other approaches, it still has the advantage of lower computing requirements, making it more suitable for embedded implementation, e.g., in wireless sensor network coordinator nodes or gateways. PMID:28726745
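    The internals of LINE-P are not given in the abstract, but the flavour of a lightweight, sampling-based linear predictor can be illustrated with a toy solar profile; the curve shape, sampling interval and power levels below are all hypothetical:

```python
import numpy as np

# Dense "ground truth" harvested-power profile over 24 h: an
# illustrative bell-shaped solar curve (watts), one sample per minute.
t_min = np.arange(0, 24 * 60)
truth = np.maximum(0.0, np.sin((t_min - 6 * 60) / (12 * 60) * np.pi)) * 5.0

# Keep only sparse samples (every 60 min) and reconstruct the profile
# by piecewise-linear approximation, in the spirit of a lightweight
# sampling-based predictor.
sample_t = t_min[::60]
pred = np.interp(t_min, sample_t, truth[::60])

# Mean absolute error relative to the profile's peak
err = np.mean(np.abs(pred - truth)) / truth.max()
```

    Even with hour-level samples the linear reconstruction of a smooth solar curve stays within a couple of percent of the peak, which is the kind of accuracy/complexity trade-off a node-level predictor targets.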

  5. Augmenting Sand Simulation Environments through Subdivision and Particle Refinement

    NASA Astrophysics Data System (ADS)

    Clothier, M.; Bailey, M.

    2012-12-01

    Recent advances in computer graphics and parallel processing hardware have provided disciplines with new methods to evaluate and visualize data. These advances have proven useful for earth and planetary scientists as many researchers are using this hardware to process large amounts of data for analysis. As such, this has provided opportunities for collaboration between computer graphics and the earth sciences. Through collaboration with the Oregon Space Grant and IGERT Ecosystem Informatics programs, we are investigating techniques for simulating the behavior of sand. We are also collaborating with the Jet Propulsion Laboratory's (JPL) DARTS Lab to exchange ideas and gain feedback on our research. The DARTS Lab specializes in simulation of planetary vehicles, such as the Mars rovers. Their simulations utilize a virtual "sand box" to test how a planetary vehicle responds to different environments. Our research builds upon this idea to create a sand simulation framework so that planetary environments, such as the harsh, sandy regions on Mars, are more fully realized. More specifically, we are focusing our research on the interaction between a planetary vehicle, such as a rover, and the sand beneath it, providing further insight into its performance. Unfortunately, this can be a computationally complex problem, especially if trying to represent the enormous quantities of sand particles interacting with each other. However, through the use of high-performance computing, we have developed a technique to subdivide areas of actively participating sand regions across a large landscape. Similar to a Level of Detail (LOD) technique, we only subdivide regions of a landscape where sand particles are actively participating with another object. While the sand is within this subdivision window and moves closer to the surface of the interacting object, the sand region subdivides into smaller regions until individual sand particles are left at the surface. 
As an example, consider a planetary rover interacting with our sand simulation environment. Sand that is actively interacting with a rover wheel is represented as individual particles, whereas sand further below the surface is represented by larger regions of sand. This technique allows many particles to be represented without the full computational complexity. In developing this method, we have further generalized these subdivision regions into any volumetric area suitable for use in the simulation. This is a further improvement of our method, as it allows for more compact subdivision of sand regions and helps to fine-tune the simulation so that more emphasis can be placed on regions of actively participating sand. We feel that, through this generalization of our technique, our research can provide other opportunities within the earth and planetary sciences. Through collaboration with our academic colleagues, we continue to refine our technique and look for other opportunities to apply our research.
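    A minimal 2D sketch of this window-based subdivision idea: a square region splits toward a single contact point until cells reach a minimum size that stands in for individual particles. The activity test and all parameters are illustrative, not the actual implementation:

```python
def subdivide(cell, contact, min_size):
    """Recursively split a square cell (x, y, size) into quadrants
    while it lies in the 'active' zone around the contact point;
    cells of min_size stand in for individual sand particles."""
    x, y, size = cell
    cx, cy = x + size / 2, y + size / 2
    # Active if the contact point lies within one cell-size of the centre
    active = abs(contact[0] - cx) <= size and abs(contact[1] - cy) <= size
    if not active or size <= min_size:
        return [cell]
    half = size / 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            leaves += subdivide((x + dx, y + dy, half), contact, min_size)
    return leaves

# A 16 x 16 region refining toward a wheel-contact point at (3, 3)
leaves = subdivide((0.0, 0.0, 16.0), contact=(3.0, 3.0), min_size=1.0)
```

    Only the cells near the contact point reach particle resolution; the far side of the region stays as a handful of coarse cells, which is the source of the computational savings.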

  6. Computational design of a Diels-Alderase from a thermophilic esterase: the importance of dynamics

    NASA Astrophysics Data System (ADS)

    Linder, Mats; Johansson, Adam Johannes; Olsson, Tjelvar S. G.; Liebeschuetz, John; Brinck, Tore

    2012-09-01

    A novel computational Diels-Alderase design, based on a relatively rare form of carboxylesterase from Geobacillus stearothermophilus, is presented and theoretically evaluated. The structure was found by mining the PDB for a suitable oxyanion hole-containing structure, followed by a combinatorial approach to find suitable substrates and rational mutations. Four lead designs were selected and thoroughly modeled to obtain realistic estimates of substrate binding and prearrangement. Molecular dynamics simulations and DFT calculations were used to optimize and estimate binding affinity and activation energies. A large quantum chemical model was used to capture the salient interactions in the crucial transition state (TS). Our quantitative estimation of kinetic parameters was validated against four experimentally characterized Diels-Alderases with good results. The final designs in this work are predicted to have rate enhancements of ≈10³-10⁶ and high predicted proficiencies. This work emphasizes the importance of considering protein dynamics in the design approach, and provides a quantitative estimate of how the TS stabilization observed in most de novo and redesigned enzymes is decreased compared to a minimal, `ideal' model. The presented design is highly interesting for further optimization and applications since it is based on a thermophilic enzyme (Topt = 70 °C).

  7. Assessment of disintegration of rapidly disintegrating tablets by a visiometric liquid jet-mediated disintegration apparatus.

    PubMed

    Desai, Parind M; Liew, Celine V; Heng, Paul W S

    2013-02-14

    The aim of this study was to develop a responsive disintegration test apparatus that is particularly suitable for rapidly disintegrating tablets (RDTs). The designed RDT disintegration apparatus consisted of a disintegration compartment, a stereomicroscope and a high-speed video camera. Computational fluid dynamics (CFD) was used to simulate 3 different designs of the compartment and to predict velocity and pressure patterns inside it. The CFD preprocessor established the compartment models and the CFD solver determined the numerical solutions of the governing equations that described the disintegration medium flow. The simulation was validated by good agreement between CFD and experimental results. Based on these results, the most suitable disintegration compartment was selected. Six types of commercial RDTs were used, and the disintegration times of these tablets were determined using both the designed RDT disintegration apparatus and the USP disintegration apparatus. The results obtained using the designed apparatus correlated well with those obtained by the USP apparatus. Thus, the applied CFD approach had the potential to predict the fluid hydrodynamics for the design of an optimal disintegration apparatus. The designed visiometric liquid jet-mediated disintegration apparatus for RDTs provided efficient and precise determination of the very short disintegration times of rapidly disintegrating dosage forms. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. SURVEY SIMULATIONS OF A NEW NEAR-EARTH ASTEROID DETECTION SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mainzer, A.; Bauer, J.; Giorgini, J.

    We have carried out simulations to predict the performance of a new space-based telescopic survey operating at thermal infrared wavelengths that seeks to discover and characterize a large fraction of the potentially hazardous near-Earth asteroid (NEA) population. Two potential architectures for the survey were considered: one located at the Earth–Sun L1 Lagrange point, and one in a Venus-trailing orbit. A sample cadence was formulated and tested, allowing for the self-follow-up necessary for objects discovered in the daytime sky on Earth. Synthetic populations of NEAs with sizes as small as 140 m in effective spherical diameter were simulated using recent determinations of their physical and orbital properties. Estimates of the instrumental sensitivity, integration times, and slew speeds were included for both architectures assuming the properties of newly developed large-format 10 μm HgCdTe detector arrays capable of operating at ∼35 K. Our simulation included the creation of a preliminary version of a moving object processing pipeline suitable for operating on the trial cadence. We tested this pipeline on a simulated sky populated with astrophysical sources such as stars and galaxies extrapolated from Spitzer Space Telescope and Wide-field Infrared Explorer data, the catalog of known minor planets (including Main Belt asteroids, comets, Jovian Trojans, planets, etc.), and the synthetic NEA model. Trial orbits were computed for simulated position-time pairs extracted from the synthetic surveys to verify that the tested cadence would result in orbits suitable for recovering objects at a later time. Our results indicate that the Earth–Sun L1 and Venus-trailing surveys achieve similar levels of integral completeness for potentially hazardous asteroids larger than 140 m; placing the telescope in an interior orbit does not yield an improvement in discovery rates.
This work serves as a necessary first step for the detailed planning of a next-generation NEA survey.

  9. Uncertainty Quantification applied to flow simulations in thoracic aortic aneurysms

    NASA Astrophysics Data System (ADS)

    Boccadifuoco, Alessandro; Mariotti, Alessandro; Celi, Simona; Martini, Nicola; Salvetti, Maria Vittoria

    2015-11-01

    The thoracic aortic aneurysm is a progressive dilatation of the thoracic aorta that weakens the aortic wall and may eventually cause life-threatening events. Clinical decisions on treatment strategies are currently based on empirical criteria, such as the aortic diameter or its growth rate. Numerical simulations can quantify important indexes that are impossible to obtain through in-vivo measurements and can provide supplementary information. Hemodynamic simulations are carried out using the open-source tool SimVascular and patient-specific geometries. One of the main issues in these simulations is the choice of suitable boundary conditions, modeling the organs and vessels not included in the computational domain. The current practice is to use outflow conditions based on resistance and capacitance, whose values are tuned to obtain a physiological behavior of the patient's pressure. However, it is not known a priori how this choice affects the results of the simulation. The impact of the uncertainties in these outflow parameters is investigated here by using the generalized Polynomial Chaos approach. This analysis also permits calibration of the outflow-boundary parameters when patient-specific in-vivo data are available.
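    Non-intrusive polynomial chaos of this kind can be sketched with Gauss-Hermite quadrature: the uncertain outflow parameter is sampled at quadrature nodes and the output statistics are recovered from weighted model evaluations. The closed-form "model" below is a stand-in for a hemodynamic simulation, purely for illustration:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gpc_moments(model, mu, sigma, order=8):
    """Mean and variance of model(R) for R ~ N(mu, sigma^2), via
    Gauss-Hermite quadrature (probabilists' weight exp(-x^2/2))."""
    x, w = hermegauss(order)
    w = w / np.sqrt(2 * np.pi)  # normalise weights to a probability measure
    y = model(mu + sigma * x)   # evaluate the model at the quadrature nodes
    mean = np.sum(w * y)
    var = np.sum(w * (y - mean) ** 2)
    return mean, var

# Hypothetical pressure response to an uncertain outflow resistance R
mean, var = gpc_moments(lambda R: 2.0 * R + 1.0, mu=1.0, sigma=0.1)
```

    With a handful of nodes this reproduces the exact moments of smooth responses, which is why the approach needs far fewer full simulations than plain Monte Carlo sampling.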

  10. PHAST Version 2-A Program for Simulating Groundwater Flow, Solute Transport, and Multicomponent Geochemical Reactions

    USGS Publications Warehouse

    Parkhurst, David L.; Kipp, Kenneth L.; Charlton, Scott R.

    2010-01-01

    The computer program PHAST (PHREEQC And HST3D) simulates multicomponent, reactive solute transport in three-dimensional saturated groundwater flow systems. PHAST is a versatile groundwater flow and solute-transport simulator with capabilities to model a wide range of equilibrium and kinetic geochemical reactions. The flow and transport calculations are based on a modified version of HST3D that is restricted to constant fluid density and constant temperature. The geochemical reactions are simulated with the geochemical model PHREEQC, which is embedded in PHAST. Major enhancements in PHAST Version 2 allow spatial data to be defined in a combination of map and grid coordinate systems, independent of a specific model grid (without node-by-node input). At run time, aquifer properties are interpolated from the spatial data to the model grid; regridding requires only redefinition of the grid without modification of the spatial data. PHAST is applicable to the study of natural and contaminated groundwater systems at a variety of scales ranging from laboratory experiments to local and regional field scales. PHAST can be used in studies of migration of nutrients, inorganic and organic contaminants, and radionuclides; in projects such as aquifer storage and recovery or engineered remediation; and in investigations of the natural rock/water interactions in aquifers. PHAST is not appropriate for unsaturated-zone flow, multiphase flow, or density-dependent flow. A variety of boundary conditions are available in PHAST to simulate flow and transport, including specified-head, flux (specified-flux), and leaky (head-dependent) conditions, as well as the special cases of rivers, drains, and wells. 
Chemical reactions in PHAST include (1) homogeneous equilibria using an ion-association or Pitzer specific interaction thermodynamic model; (2) heterogeneous equilibria between the aqueous solution and minerals, ion exchange sites, surface complexation sites, solid solutions, and gases; and (3) kinetic reactions with rates that are a function of solution composition. The aqueous model (elements, chemical reactions, and equilibrium constants), minerals, exchangers, surfaces, gases, kinetic reactants, and rate expressions may be defined or modified by the user. A number of options are available to save results of simulations to output files. The data may be saved in three formats: a format suitable for viewing with a text editor; a format suitable for exporting to spreadsheets and postprocessing programs; and in Hierarchical Data Format (HDF), which is a compressed binary format. Data in the HDF file can be visualized on Windows computers with the program Model Viewer and extracted with the utility program PHASTHDF; both programs are distributed with PHAST.

  11. Mapping suitability areas for concentrated solar power plants using remote sensing data

    DOE PAGES

    Omitaomu, Olufemi A.; Singh, Nagendra; Bhaduri, Budhendra L.

    2015-05-14

    The political push to increase power generation from renewable sources such as solar energy requires knowing the best places to site new solar power plants with respect to the applicable regulatory, operational, engineering, environmental, and socioeconomic criteria. Therefore, in this paper, we present applications of remote sensing data for mapping suitability areas for concentrated solar power plants. Our approach uses a digital elevation model derived from NASA's Shuttle Radar Topographic Mission (SRTM) at a resolution of 3 arc seconds (approx. 90 m) for estimating global solar radiation over the study area. We then develop a computational model, built on a Geographic Information System (GIS) platform, that divides the study area into a grid of cells and estimates a site suitability value for each cell by computing a list of metrics based on applicable siting requirements using GIS data. The computed metrics include population density, solar energy potential, federal lands, and hazardous facilities. Overall, some 30 GIS datasets are used to compute eight metrics. The site suitability value for each cell is computed as the algebraic sum of all metrics for the cell, with the assumption that all metrics have equal weight. Finally, we color each cell according to its suitability value, and we present results for concentrated solar power driving a steam turbine and for a parabolic mirror connected to a Stirling engine.
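    The equal-weight scoring described here reduces to an element-wise sum of normalised metric rasters. A toy sketch with three hypothetical metrics on a small grid (the study itself combines eight metrics derived from about 30 GIS layers):

```python
import numpy as np

# Illustrative per-cell metrics on a small grid, each scaled to [0, 1];
# the names and values are made up for the sketch.
rng = np.random.default_rng(42)
solar_potential = rng.uniform(0, 1, (4, 4))
low_pop_density = rng.uniform(0, 1, (4, 4))
slope_ok = rng.uniform(0, 1, (4, 4))

# Equal-weight algebraic sum, as in the described suitability model
suitability = solar_potential + low_pop_density + slope_ok

# Locate the most suitable cell for colouring or ranking
best_cell = np.unravel_index(np.argmax(suitability), suitability.shape)
```

    Unequal criterion importance would be a one-line change, multiplying each raster by its weight before the sum.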

  12. High resolution, MRI-based, segmented, computerized head phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zubal, I.G.; Harrell, C.R.; Smith, E.O.

    1999-01-01

    The authors have created a high-resolution software phantom of the human brain which is applicable to voxel-based radiation transport calculations yielding nuclear medicine simulated images and/or internal dose estimates. A software head phantom was created from 124 transverse MRI images of a healthy normal individual. The transverse T2 slices, recorded in a 256x256 matrix from a GE Signa 2 scanner, have isotropic voxel dimensions of 1.5 mm and were manually segmented by the clinical staff. Each voxel of the phantom contains one of 62 index numbers designating anatomical, neurological, and taxonomical structures. The result is stored as a 256x256x128 byte array. Internal volumes compare favorably to those described in the ICRP Reference Man. The computerized array represents a high resolution model of a typical human brain and serves as a voxel-based anthropomorphic head phantom suitable for computer-based modeling and simulation calculations. It offers improved realism over previous mathematically described software brain phantoms, and creates a reference standard for comparing results of newly emerging voxel-based computations. Such voxel-based computations lead the way to developing diagnostic and dosimetry calculations which can utilize patient-specific diagnostic images. However, such individualized approaches lack fast, automatic segmentation schemes for routine use; therefore, the high resolution, typical head geometry gives the most realistic patient model currently available.

  13. Oak Ridge Institutional Cluster Autotune Test Drive Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jibonananda, Sanyal; New, Joshua Ryan

    2014-02-01

    The Oak Ridge Institutional Cluster (OIC) provides general purpose computational resources for ORNL staff to run computation-heavy jobs that are larger than desktop applications but do not quite require the scale and power of the Oak Ridge Leadership Computing Facility (OLCF). This report details the efforts made and conclusions derived in performing a short test drive of the cluster resources on Phase 5 of the OIC. EnergyPlus was used in the analysis as a candidate user program, and the overall software environment was evaluated against challenges anticipated from experience with resources such as the shared-memory Nautilus (JICS) and Titan (OLCF). The OIC performed within reason and was found to be acceptable in the context of running EnergyPlus simulations. The number of cores per node and the availability of scratch space per node allow non-traditional, desktop-focused applications to leverage parallel ensemble execution. Although only individual runs of EnergyPlus were executed, the software environment on the OIC appeared suitable for running ensemble simulations with some modifications to the Autotune workflow. From a standpoint of general usability, the system supports common Linux libraries, compilers, standard job scheduling software (Torque/Moab), and the OpenMPI library (the only MPI library) for MPI communications. The file system is a Panasas file system, which the literature indicates to be efficient.

  14. PGOPHER in the Classroom and the Laboratory

    NASA Astrophysics Data System (ADS)

    Western, Colin

    2015-06-01

    PGOPHER is a general purpose program for simulating and fitting rotational, vibrational and electronic spectra. As it uses a graphical user interface, the basic operation is sufficiently straightforward to make it suitable for use in undergraduate practicals and computer-based classes. This talk will present two experiments that have been in regular use by Bristol undergraduates for some years, based on the analysis of infra-red spectra of cigarette smoke and, for more advanced students, visible and near ultra-violet spectra of a nitrogen discharge and a hydrocarbon flame. For all of these the rotational structure is analysed and used to explore ideas of bonding. The talk will discuss the requirements for the apparatus and the support required. Ideas for other possible experiments and computer-based exercises will also be presented, including a group exercise. The PGOPHER program is open source, and is available for Microsoft Windows, Apple Mac and Linux. It can be freely downloaded from the supporting website http://pgopher.chm.bris.ac.uk. The program does not require any installation process, so it can be run on students' own machines or easily set up on classroom or laboratory computers. PGOPHER, a Program for Simulating Rotational, Vibrational and Electronic Structure, C. M. Western, University of Bristol, http://pgopher.chm.bris.ac.uk PGOPHER version 8.0, C M Western, 2014, University of Bristol Research Data Repository, doi:10.5523/bris.huflggvpcuc1zvliqed497r2

  15. Research approaches to mass casualty incidents response: development from routine perspectives to complexity science.

    PubMed

    Shen, Weifeng; Jiang, Libing; Zhang, Mao; Ma, Yuefeng; Jiang, Guanyu; He, Xiaojun

    2014-01-01

    To review the research methods for mass casualty incidents (MCIs) systematically and introduce the concepts and characteristics of complexity science and the artificial systems, computational experiments and parallel execution (ACP) method. We searched the PubMed, Web of Knowledge, China Wanfang and China Biology Medicine (CBM) databases for relevant studies. Searches were performed without year or language restrictions and used combinations of the following key words: "mass casualty incident", "MCI", "research method", "complexity science", "ACP", "approach", "science", "model", "system" and "response". Only articles involving the research methods of MCIs were included. Research methods for MCIs have increased markedly over the past few decades. At present, the dominant research methods for MCIs are the theory-based approach, the empirical approach, evidence-based science, mathematical modeling and computer simulation, simulation experiments, experimental methods, the scenario approach and complexity science. This article provides an overview of the development of research methodology for MCIs, briefly presenting the progress of routine research approaches and of complexity science. Furthermore, the authors conclude that the reductionism underlying the exact sciences is not suitable for complex MCI systems, and that the only feasible alternative is complexity science. Finally, the authors review how the ACP method, combining artificial systems, computational experiments and parallel execution, provides a new way to address research on complex MCIs.

  16. MODA: a new algorithm to compute optical depths in multidimensional hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Perego, Albino; Gafton, Emanuel; Cabezón, Rubén; Rosswog, Stephan; Liebendörfer, Matthias

    2014-08-01

    Aims: We introduce the multidimensional optical depth algorithm (MODA) for the calculation of optical depths in approximate multidimensional radiative transport schemes, equally applicable to neutrinos and photons. Motivated by (but not limited to) neutrino transport in three-dimensional simulations of core-collapse supernovae and neutron star mergers, our method makes no assumptions about the geometry of the matter distribution, apart from expecting optically transparent boundaries. Methods: Based on local information about opacities, the algorithm finds an escape route that tends to minimize the optical depth without assuming any predefined paths for radiation. Its adaptivity makes it suitable for a variety of astrophysical settings with complicated geometry (e.g., core-collapse supernovae, compact binary mergers, tidal disruptions, star formation, etc.). We implement the MODA algorithm in both an Eulerian hydrodynamics code with a fixed, uniform grid and an SPH code, where we use the tree structure that is otherwise employed for searching neighbors and calculating gravity. Results: In a series of numerical experiments, we compare the MODA results with analytically known solutions. We also use snapshots from actual 3D simulations and compare the results of MODA with those obtained with other methods, such as the global and local ray-by-ray method. It turns out that MODA achieves excellent accuracy at a moderate computational cost. In the appendix we also discuss implementation details and parallelization strategies.
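    The escape-route idea can be illustrated with a toy greedy search on a 2-D opacity grid. This is a deliberate simplification, not the published MODA implementation: the function name, the four-neighbour step rule and the half-cell start contribution are illustrative assumptions.

```python
import numpy as np

def moda_like_tau(kappa, start, dx=1.0):
    """Greedy escape-route estimate of the optical depth (illustrative sketch).

    Starting from cell `start`, repeatedly step to the unvisited neighbouring
    cell with the lowest opacity until the grid boundary is reached, and
    accumulate tau = sum(kappa * dx) along the way.
    """
    ny, nx = kappa.shape
    i, j = start
    tau = 0.5 * kappa[i, j] * dx          # half-cell contribution at the start
    visited = {(i, j)}
    while 0 < i < ny - 1 and 0 < j < nx - 1:
        nbrs = [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
        nbrs = [p for p in nbrs if p not in visited]
        if not nbrs:
            break
        i, j = min(nbrs, key=lambda p: kappa[p])  # locally cheapest step
        visited.add((i, j))
        tau += kappa[i, j] * dx
    return tau
```

    In a uniform medium the greedy path reduces to the shortest straight line to the boundary; with a low-opacity channel present, the path follows the channel, which is the behaviour the adaptive escape-route search is meant to capture.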

  17. A theoretical framework for strain-related trabecular bone maintenance and adaptation.

    PubMed

    Ruimerman, R; Hilbers, P; van Rietbergen, B; Huiskes, R

    2005-04-01

    It is assumed that the density and morphology of trabecular bone are partially controlled by mechanical forces. How these effects are expressed in the local metabolic functions of osteoclast resorption and osteoblast formation is not known. In order to investigate possible mechano-biological pathways for these mechanisms we have proposed a mathematical theory (Nature 405 (2000) 704). This theory is based on hypothetical osteocyte stimulation of osteoblast bone formation, as an effect of elevated strain in the bone matrix, and a role for microcracks and disuse in promoting osteoclast resorption. Applied in a 2-D finite element analysis (FEA) model, the theory explained the formation of trabecular patterns. In this article we present a 3-D FEA model based on the same theory and investigate its ability to predict the morphological effects of metabolic reactions to mechanical loads. The computations simulated the development of trabecular morphological details during growth reasonably realistically, relative to measurements in growing pigs. They confirmed that the proposed mechanisms also inherently lead to optimal stress transfer. Alternative loading directions produced new trabecular orientations. Reduction of load reduced trabecular thickness, connectivity and mass in the simulation, as is seen in disuse osteoporosis. Simulating the effects of estrogen deficiency through increased osteoclast resorption frequencies likewise produced osteoporotic morphologies, as seen in post-menopausal osteoporosis. We conclude that the theory provides a suitable computational framework to investigate hypothetical relationships between bone loading and metabolic expressions.
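    A strain-regulated formation/resorption rule of the kind described can be sketched as a minimal per-site update law. The function name, rate constants and lazy-zone width below are invented for illustration; they are not the parameters of the cited theory.

```python
def remodel_step(rho, stimulus, k_form=0.05, k_res=0.05, s_ref=1.0, lazy=0.2):
    """One remodelling step for a relative bone density rho in [0, 1]:
    osteoblast formation when the local (osteocyte-mediated) stimulus exceeds
    the reference band, osteoclast resorption in disuse, and no net change
    inside the 'lazy zone' around the reference stimulus."""
    hi = s_ref * (1.0 + lazy)
    lo = s_ref * (1.0 - lazy)
    if stimulus > hi:
        rho += k_form * (stimulus - hi)   # net formation
    elif stimulus < lo:
        rho -= k_res * (lo - stimulus)    # net resorption (disuse)
    return min(max(rho, 0.0), 1.0)        # clamp to physical range
```

    Iterating such a rule over an FE-computed stimulus field is what lets load direction reorient trabeculae and disuse thin them, as in the abstract's simulations.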

  18. A New Reliability Analysis Model of the Chegongzhuang Heat-Supplying Tunnel Structure Considering the Coupling of Pipeline Thrust and Thermal Effect

    PubMed Central

    Zhang, Jiawen; He, Shaohui; Wang, Dahai; Liu, Yangpeng; Yao, Wenbo; Liu, Xiabing

    2018-01-01

    Based on the operating Chegongzhuang heat-supplying tunnel in Beijing, the reliability of its lining structure under the action of large pipeline thrust and thermal effect is studied. According to the characteristics of heat-supplying tunnel service, a three-dimensional numerical analysis model was established based on mechanical tests on in-situ specimens. The stress and strain of the tunnel structure were obtained before and after the operation. Comparison with field monitoring data verified the rationality of the model. After extracting the internal force of the lining structure as the performance function, an improved subset simulation method was proposed to calculate the reliability of the main control section of the tunnel. In contrast to the traditional calculation method, the analytic relationship between the sample numbers in the subset simulation method and the Monte Carlo method was given. The results indicate that the lining structure is greatly influenced by the coupling within six meters of the fixed brackets, especially at the tunnel floor. The improved subset simulation method can greatly reduce computation time and improve computational efficiency while ensuring the accuracy of the calculation. It is suitable for reliability calculations in tunnel engineering because "the lower the probability, the more efficient the calculation." PMID:29401691
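    The efficiency claim ("the lower the probability, the more efficient the calculation") can be made concrete with back-of-the-envelope sample counts. The conditional probability p0 = 0.1 and 1000 samples per level are common textbook choices for subset simulation, not the paper's values, and the count ignores MCMC burn-in.

```python
import math

def mc_samples(pf, cov=0.3):
    """Crude Monte Carlo sample count needed to estimate a failure
    probability pf at a target coefficient of variation, using
    delta ~ sqrt((1 - pf) / (N * pf))."""
    return math.ceil((1.0 - pf) / (pf * cov ** 2))

def subset_samples(pf, p0=0.1, n_per_level=1000):
    """Illustrative sample count for subset simulation: about
    m = ceil(log pf / log p0) conditional levels, each with
    n_per_level samples."""
    m = math.ceil(math.log(pf) / math.log(p0))
    return m * n_per_level
```

    For pf = 1e-6 the subset count grows only logarithmically in 1/pf, while the crude Monte Carlo count grows like 1/pf, which is the relationship the abstract alludes to.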

  19. Photonic-Doppler-Velocimetry, Paraxial-Scalar Diffraction Theory and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ambrose, W. P.

    2015-07-20

    In this report I describe current progress on a paraxial, scalar-field theory suitable for simulating what is measured in Photonic Doppler Velocimetry (PDV) experiments in three dimensions. I have introduced a number of approximations in this work in order to bring the total computation time for one experiment down to around 20 hours. My goals were: to develop an approximate method of calculating the peak frequency in a spectral sideband at an instant of time based on an optical diffraction theory for a moving target, to compare the ‘measured’ velocity to the ‘input’ velocity to gain insights into how and to what precision PDV measures the component of the mass velocity along the optical axis, and to investigate the effects of small amounts of roughness on the measured velocity. This report illustrates the progress I have made in describing how to perform such calculations with a full three dimensional picture including tilted target, tilted mass velocity (not necessarily in the same direction), and small amounts of surface roughness. With the method established for a calculation at one instant of time, measured velocities can be simulated for a sequence of times, similar to the process of sampling velocities in experiments. Improvements in these methods are certainly possible at hugely increased computational cost. I am hopeful that readers appreciate the insights possible at the current level of approximation.

  20. LOOP- SIMULATION OF THE AUTOMATIC FREQUENCY CONTROL SUBSYSTEM OF A DIFFERENTIAL MINIMUM SHIFT KEYING RECEIVER

    NASA Technical Reports Server (NTRS)

    Davarian, F.

    1994-01-01

    The LOOP computer program was written to simulate the Automatic Frequency Control (AFC) subsystem of a Differential Minimum Shift Keying (DMSK) receiver with a bit rate of 2400 baud. The AFC simulated by LOOP is a first order loop configuration with a first order R-C filter. NASA has been investigating the concept of mobile communications based on low-cost, low-power terminals linked via geostationary satellites. Studies have indicated that low bit rate transmission is suitable for this application, particularly from the frequency and power conservation point of view. A bit rate of 2400 BPS is attractive due to its applicability to the linear predictive coding of speech. Input to LOOP includes the following: 1) the initial frequency error; 2) the double-sided loop noise bandwidth; 3) the filter time constants; 4) the amount of intersymbol interference; and 5) the bit energy to noise spectral density. LOOP output includes: 1) the bit number and the frequency error of that bit; 2) the computed mean of the frequency error; and 3) the standard deviation of the frequency error. LOOP is written in MS SuperSoft FORTRAN 77 for interactive execution and has been implemented on an IBM PC operating under PC DOS with a memory requirement of approximately 40K of 8 bit bytes. This program was developed in 1986.
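    The behaviour of a first-order AFC loop like the one LOOP models can be sketched in a few lines of Python. This is not the LOOP FORTRAN code: the discriminator model, gain and statistics helper are illustrative assumptions, but the outputs mirror LOOP's (per-bit frequency error, plus its mean and standard deviation).

```python
import math
import random

def simulate_afc(f_err0, loop_gain, n_bits, noise_std=0.0, seed=1):
    """First-order AFC loop sketch: once per bit the frequency discriminator
    measures the (noisy) frequency error and the loop removes a fraction
    loop_gain of it.  Returns the per-bit frequency-error history."""
    rng = random.Random(seed)
    err = f_err0
    history = []
    for _ in range(n_bits):
        measured = err + rng.gauss(0.0, noise_std)
        err -= loop_gain * measured
        history.append(err)
    return history

def error_stats(history):
    """Mean and standard deviation of the frequency error, as LOOP reports."""
    n = len(history)
    mean = sum(history) / n
    var = sum((e - mean) ** 2 for e in history) / n
    return mean, math.sqrt(var)
```

    In the noiseless case the error decays geometrically, err_k = f_err0 * (1 - loop_gain)**k, which is the expected pull-in behaviour of a first-order loop.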

  1. Development of advanced control schemes for telerobot manipulators

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Zhou, Zhen-Lei

    1991-01-01

    To study space applications of telerobotics, Goddard Space Flight Center (NASA) has recently built a testbed composed mainly of a pair of redundant slave arms having seven degrees of freedom and a master hand controller system. The mathematical developments required for the computerized simulation study and motion control of the slave arms are presented. The slave arm forward kinematic transformation is presented which is derived using the D-H notation and is then reduced to its most simplified form suitable for real-time control applications. The vector cross product method is then applied to obtain the slave arm Jacobian matrix. Using the developed forward kinematic transformation and the quaternion representation of the slave arm end-effector orientation, computer simulation is conducted to evaluate the efficiency of the Jacobian in converting joint velocities into Cartesian velocities and to investigate the accuracy of the Jacobian pseudo-inverse for various sampling times. In addition, the equivalence between Cartesian velocities and quaternion rates is also verified using computer simulation. The motion control of the slave arm is examined. Three control schemes, the joint-space adaptive control scheme, the Cartesian adaptive control scheme, and the hybrid position/force control scheme are proposed for controlling the motion of the slave arm end-effector. Development of the Cartesian adaptive control scheme is presented and some preliminary results of the remaining control schemes are presented and discussed.

  2. LES of a Jet Excited by the Localized Arc Filament Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2011-01-01

    The fluid dynamics of a high-speed jet are governed by the instability waves that form in the free-shear boundary layer of the jet. Jet excitation manipulates the growth and saturation of particular instability waves to control the unsteady flow structures that characterize the energy cascade in the jet. The results may include jet noise mitigation or a reduction in the infrared signature of the jet. The Localized Arc Filament Plasma Actuators (LAFPA) have demonstrated the ability to excite high-speed jets in laboratory experiments. Extending and optimizing this excitation technology, however, is a complex process that will require many tests and trials. Computational simulations can play an important role in understanding and optimizing this actuator technology for real-world applications. Previous research has focused on developing a suitable actuator model and coupling it with the appropriate computational fluid dynamics (CFD) methods using two-dimensional spatial flow approximations. This work is now extended to three dimensions (3-D) in space. The actuator model is adapted to a series of discrete actuators and a 3-D LES simulation of an excited jet is run. The results are used to study the fluid dynamics near the actuator and in the jet plume.

  3. Simulation and optimization of a dc SQUID with finite capacitance

    NASA Astrophysics Data System (ADS)

    de Waal, V. J.; Schrijner, P.; Llurba, R.

    1984-02-01

    This paper deals with the calculation of the noise and the optimization of the energy resolution of a dc SQUID with finite junction capacitance. Up to now, noise calculations of dc SQUIDs were performed using a model without parasitic capacitances across the Josephson junctions. As the capacitances limit the performance of the SQUID, a good optimization must take them into account. The model consists of two coupled nonlinear second-order differential equations. The equations are very suitable for simulation with an analog circuit. We implemented the model on a hybrid computer. The noise spectrum from the model is calculated with a fast Fourier transform. A calculation of the energy resolution for one set of parameters takes about 6 min of computer time. Detailed results of the optimization are given for products of inductance and temperature of LT = 1.2 and 5 nH K. Within a range of β and βc between 1 and 2, which is optimum, the energy resolution is nearly independent of these variables. In this region the energy resolution is near the value calculated without parasitic capacitances. Results of the optimized energy resolution are given as a function of LT between 1.2 and 10 nH K.

  4. Analysis of PANDA Passive Containment Cooling Steady-State Tests with the Spectra Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stempniewicz, Marek M

    2000-07-15

    Results of a post-test simulation of the PANDA passive containment cooling (PCC) steady-state tests (S-series tests), performed at the PANDA facility at the Paul Scherrer Institute, Switzerland, are presented. The simulation has been performed using the computer code SPECTRA, a thermal-hydraulic code designed specifically for analyzing the containment behavior of nuclear power plants. Results of the present calculations are compared to the measurement data as well as the results obtained earlier with the codes MELCOR, TRAC-BF1, and TRACG. The calculated PCC efficiencies are somewhat lower than the measured values. A similar underestimation of PCC efficiencies had been obtained in the past with the other computer codes. To explain this difference, it is postulated that condensate coming into the tubes forms a stream of liquid in one or two tubes, leaving most of the tubes unaffected. The condensate entering the water box is assumed to fall down in the form of droplets. With these assumptions, the results calculated with SPECTRA are close to the experimental data. It is concluded that the SPECTRA code is a suitable tool for analyzing containments of advanced reactors equipped with passive containment cooling systems.

  5. CFD-CAA Coupled Calculations of a Tandem Cylinder Configuration to Assess Facility Installation Effects

    NASA Technical Reports Server (NTRS)

    Redonnet, Stephane; Lockard, David P.; Khorrami, Mehdi R.; Choudhari, Meelan M.

    2011-01-01

    This paper presents a numerical assessment of acoustic installation effects in the tandem cylinder (TC) experiments conducted in the NASA Langley Quiet Flow Facility (QFF), an open-jet, anechoic wind tunnel. Calculations that couple the Computational Fluid Dynamics (CFD) and Computational Aeroacoustics (CAA) of the TC configuration within the QFF are conducted using the CFD simulation results previously obtained at NASA LaRC. The coupled simulations enable the assessment of installation effects associated with several specific features in the QFF facility that may have impacted the measured acoustic signature during the experiment. The CFD-CAA coupling is based on CFD data along a suitably chosen surface, and employs a technique that was recently improved to account for installed configurations involving acoustic backscatter into the CFD domain. First, a CFD-CAA calculation is conducted for an isolated TC configuration to assess the coupling approach, as well as to generate a reference solution for subsequent assessments of QFF installation effects. Direct comparisons between the CFD-CAA calculations associated with the various installed configurations allow the assessment of the effects of each component (nozzle, collector, etc.) or feature (confined vs. free jet flow, etc.) characterizing the NASA LaRC QFF facility.

  6. NOTE: Wobbled splatting—a fast perspective volume rendering method for simulation of x-ray images from CT

    NASA Astrophysics Data System (ADS)

    Birkfellner, Wolfgang; Seemann, Rudolf; Figl, Michael; Hummel, Johann; Ede, Christopher; Homolka, Peter; Yang, Xinhui; Niederer, Peter; Bergmann, Helmar

    2005-05-01

    3D/2D registration, the automatic assignment of a global rigid-body transformation matching the coordinate systems of patient and preoperative volume scan using projection images, is an important topic in image-guided therapy and radiation oncology. A crucial part of most 3D/2D registration algorithms is the fast computation of digitally rendered radiographs (DRRs) to be compared iteratively to radiographs or portal images. Since registration is an iterative process, fast generation of DRRs—which are perspective summed voxel renderings—is desired. In this note, we present a simple and rapid method for generation of DRRs based on splat rendering. As opposed to conventional splatting, antialiasing of the resulting images is not achieved by means of computing a discrete point spread function (a so-called footprint), but by stochastic distortion of either the voxel positions in the volume scan or by the simulation of a focal spot of the x-ray tube with non-zero diameter. Our method generates slightly blurred DRRs suitable for registration purposes at frame rates of approximately 10 Hz when rendering volume images with a size of 30 MB.
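    The stochastic-distortion idea can be sketched in a few lines. The function name and the Gaussian jitter model are illustrative assumptions; the note's method distorts voxel positions (or simulates a finite focal spot) before splatting them into the DRR.

```python
import random

def wobble(points, sigma, seed=0):
    """Stochastically distort 3-D voxel positions ('wobbling') so that the
    splatted DRR comes out slightly blurred, without computing an explicit
    footprint (point spread function) for each splat."""
    rng = random.Random(seed)
    return [(x + rng.gauss(0.0, sigma),
             y + rng.gauss(0.0, sigma),
             z + rng.gauss(0.0, sigma))
            for (x, y, z) in points]
```

    With sigma = 0 the voxel grid is unchanged; increasing sigma trades sharpness for the antialiasing that registration needs.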

  7. An Evaluation Framework and Comparative Analysis of the Widely Used First Programming Languages

    PubMed Central

    Farooq, Muhammad Shoaib; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed; Abid, Adnan

    2014-01-01

    Computer programming is the core of the computer science curriculum. Several programming languages have been used to teach the first course in computer programming, and such languages are referred to as the first programming language (FPL). The pool of programming languages has been evolving with the development of new languages, and from this pool different languages have been used as the FPL at different times. Though the selection of an appropriate FPL is very important, it has been a controversial issue in the presence of many choices. Many efforts have been made to design a good FPL; however, there is no adequate way to evaluate and compare the existing languages so as to find the most suitable FPL. In this article, we have proposed a framework to evaluate the existing imperative and object-oriented languages for their suitability as an appropriate FPL. Furthermore, based on the proposed framework we have devised a customizable scoring function to compute a quantitative suitability score for a language, which reflects its conformance to the proposed framework. Lastly, we have also evaluated the conformance of the widely used FPLs to the proposed framework, and have also computed their suitability scores. PMID:24586449
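    A customizable scoring function of the kind described can be sketched as a normalized weighted sum. The criteria names, ratings and weights below are invented for illustration; the paper's framework defines its own criteria.

```python
def suitability_score(ratings, weights):
    """Weighted suitability score: each criterion's rating is scaled by its
    (customizable) weight and the total is normalized by the weight sum,
    so the score stays on the same scale as the ratings."""
    total = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total

# Hypothetical ratings for one candidate FPL on a 0-10 scale.
python_ratings = {"readability": 9, "tool_support": 8, "simplicity": 9}
weights = {"readability": 3, "tool_support": 1, "simplicity": 2}
score = suitability_score(python_ratings, weights)
```

    Changing the weights re-ranks candidate languages without touching the ratings, which is what makes such a function "customizable".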

  8. An evaluation framework and comparative analysis of the widely used first programming languages.

    PubMed

    Farooq, Muhammad Shoaib; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed; Abid, Adnan

    2014-01-01

    Computer programming is the core of the computer science curriculum. Several programming languages have been used to teach the first course in computer programming, and such languages are referred to as the first programming language (FPL). The pool of programming languages has been evolving with the development of new languages, and from this pool different languages have been used as the FPL at different times. Though the selection of an appropriate FPL is very important, it has been a controversial issue in the presence of many choices. Many efforts have been made to design a good FPL; however, there is no adequate way to evaluate and compare the existing languages so as to find the most suitable FPL. In this article, we have proposed a framework to evaluate the existing imperative and object-oriented languages for their suitability as an appropriate FPL. Furthermore, based on the proposed framework we have devised a customizable scoring function to compute a quantitative suitability score for a language, which reflects its conformance to the proposed framework. Lastly, we have also evaluated the conformance of the widely used FPLs to the proposed framework, and have also computed their suitability scores.

  9. A class of all digital phase locked loops - Modelling and analysis.

    NASA Technical Reports Server (NTRS)

    Reddy, C. P.; Gupta, S. C.

    1972-01-01

    An all digital phase locked loop which tracks the phase of the incoming signal once per carrier cycle is proposed. The different elements and their functions, and the phase lock operation, are explained in detail. The general digital loop operation is governed by a nonlinear difference equation from which a suitable model is developed. The lock range for the general model is derived. The performance of the digital loop for phase-step and frequency-step inputs, for different levels of quantization without a loop filter, is studied. The analytical results are checked by simulating the actual system on a digital computer.
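    The once-per-cycle quantized correction can be sketched as a nonlinear difference equation. The sinusoidal phase detector, uniform quantizer and loop gain below are illustrative assumptions, not the paper's loop parameters.

```python
import math

def ddpll_phase_step(phi0, gain, levels, n_cycles):
    """First-order all-digital PLL (illustrative model): once per carrier
    cycle the phase-detector output sin(phi) is quantized to `levels`
    uniform levels spanning [-1, 1], scaled by `gain`, and used to
    correct the phase estimate.  Returns the phase-error history."""
    phi = phi0
    history = []
    step = 2.0 / (levels - 1)             # quantizer step size
    for _ in range(n_cycles):
        detector = math.sin(phi)
        quantized = step * round(detector / step)  # uniform quantizer
        phi -= gain * quantized
        history.append(phi)
    return history
```

    With a finite quantizer the loop settles into a dead zone around zero rather than converging exactly, which is one reason the abstract studies different levels of quantization.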

  10. Application of square-root filtering for spacecraft attitude control

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Schmidt, S. F.; Goka, T.

    1978-01-01

    Suitable digital algorithms are developed and tested for providing on-board precision attitude estimation and pointing control for potential use in the Landsat-D spacecraft. These algorithms provide pointing accuracy of better than 0.01 deg. To obtain the necessary precision with efficient software, a six-state-variable square-root Kalman filter combines two star tracker measurements to update attitude estimates obtained from processing three gyro outputs. The validity of the estimation and control algorithms is established, and the sensitivity of their performance to various error sources and software parameters is investigated by detailed digital simulation. Spacecraft computer memory, cycle time, and accuracy requirements are estimated.
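    The square-root form of the Kalman measurement update, used so that short-wordlength flight computers keep the covariance positive-definite, can be sketched with the textbook Potter algorithm for a single scalar measurement. This is a standard algorithm, not the Landsat-D flight code (that filter had six states and two star-tracker measurements); the test values below are made up.

```python
import numpy as np

def potter_update(x, S, z, H, r):
    """Potter square-root measurement update for a scalar measurement
    z = H @ x + v with var(v) = r.  S is a covariance square root,
    P = S @ S.T; the update modifies S directly instead of P, which
    preserves positive-definiteness under roundoff."""
    phi = S.T @ H                       # n-vector
    a = 1.0 / (phi @ phi + r)
    gamma = 1.0 / (1.0 + np.sqrt(a * r))
    K = a * (S @ phi)                   # Kalman gain
    x_new = x + K * (z - H @ x)
    S_new = S - gamma * np.outer(K, phi)
    return x_new, S_new
```

    One can check that S_new @ S_new.T reproduces the conventional covariance update (I - K H) P, while only well-conditioned quantities are ever formed.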

  11. The swirl turbine

    NASA Astrophysics Data System (ADS)

    Haluza, M.; Pochylý, F.; Rudolf, P.

    2012-11-01

    This article introduces a new type of turbine, the swirl turbine. This turbine is based on the opposite principle to the Kaplan turbine: the Euler equation is satisfied in the form gHη_h = -u_2 v_u2. From this equation it is seen that the liquid enters the runner without rotation and leaves it with a rotation opposite to that of the runner. This turbine is suitable for small heads and large discharges. Some design variants of this turbine are introduced in the article, together with theoretical aspects regarding losses in the draft tube. The theory is followed by computational simulations in Fluent and by experiments using laser Doppler anemometry.
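    The quoted Euler equation, gHη_h = -u_2 v_u2, directly gives the hydraulic efficiency once the head and the outlet velocity triangle are known. The numbers in the test below are made up for illustration.

```python
G = 9.81  # gravitational acceleration, m/s^2

def hydraulic_efficiency(head_m, u2, vu2):
    """Hydraulic efficiency from g*H*eta_h = -u2*vu2 for the swirl turbine:
    swirl-free inflow, outlet swirl velocity vu2 opposite in sign to the
    blade speed u2 (so -u2*vu2 > 0 and energy is extracted)."""
    return -u2 * vu2 / (G * head_m)
```

    For example, a 2 m head with u2 = 5 m/s and vu2 = -3 m/s gives an efficiency of 15/19.62, i.e. roughly 0.76.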

  12. The Application of Leap Motion in Astronaut Virtual Training

    NASA Astrophysics Data System (ADS)

    Qingchao, Xie; Jiangang, Chao

    2017-03-01

    With the development of computer vision, virtual reality has been applied to astronaut training. As an advanced optical device for hand tracking, Leap Motion provides precise and fluid tracking of the hands, which makes it suitable as a gesture input device in astronaut virtual training. This paper builds an astronaut virtual training system based on Leap Motion and establishes a mathematical model of hand occlusion. Finally, the ability of Leap Motion to handle occlusion is analysed. A virtual assembly simulation platform was developed for astronaut training, in which occluded gestures influence the recognition process. The experimental results can guide astronaut virtual training.

  13. The inverse problem of the calculus of variations for discrete systems

    NASA Astrophysics Data System (ADS)

    Barbero-Liñán, María; Farré Puiggalí, Marta; Ferraro, Sebastián; Martín de Diego, David

    2018-05-01

    We develop a geometric version of the inverse problem of the calculus of variations for discrete mechanics and constrained discrete mechanics. The geometric approach consists of using suitable Lagrangian and isotropic submanifolds. We also provide a transition between the discrete and the continuous problems and propose variationality as an interesting geometric property to take into account in the design and computer simulation of numerical integrators for constrained systems. For instance, nonholonomic mechanics is generally non-variational but some special cases admit an alternative variational description. We apply some standard nonholonomic integrators to such an example to study which ones conserve this property.

  14. Optical fiber sensors and signal processing for intelligent structure monitoring

    NASA Technical Reports Server (NTRS)

    Thomas, Daniel; Cox, Dave; Lindner, D. K.; Claus, R. O.

    1989-01-01

    Few-mode optical fibers have been shown to produce predictable interference patterns when placed under strain. The use of a modal domain sensor in a vibration control experiment is described. An optical fiber is bonded along the length of a flexible beam. Output from the modal domain sensor is used to suppress vibrations induced in the beam. A distributed-effect model for the modal domain sensor is developed. This model is combined with the beam and actuator dynamics to produce a system suitable for control design. Computer simulations predict open- and closed-loop dynamic responses. An experimental apparatus is described and experimental results are presented.

  15. Wind conditions in urban layout - Numerical and experimental research

    NASA Astrophysics Data System (ADS)

    Poćwierz, Marta; Zielonko-Jung, Katarzyna

    2018-01-01

    This paper presents research comparing numerical and experimental results for different cases of airflow around a few urban layouts. The study is concerned mostly with the analysis of parameters, such as pressure and velocity fields, which are essential in the building industry. Numerical simulations have been performed with the commercial software Fluent, using several turbulence models, including the popular k-ɛ, realizable k-ɛ and k-ω models. Particular attention has been paid to an accurate description of the inlet conditions and to the selection of a suitable computational grid. Pressure measurements near the buildings and oil-flow visualization were also undertaken and are described accordingly.

  16. Investigation on filter method for smoothing spiral phase plate

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanhang; Wen, Shenglin; Luo, Zijian; Tang, Caixue; Yan, Hao; Yang, Chunlin; Liu, Mincai; Zhang, Qinghua; Wang, Jian

    2018-03-01

    Spiral phase plates (SPPs) for generating hollow vortex beams are highly efficient in various applications. However, it is difficult to obtain an ideal spiral phase plate because of its continuously varying helical phase and discontinuous phase step. This paper describes the demonstration of a continuous spiral phase plate using filter methods. The numerical simulations indicate that each filter method, whether in the spatial domain or the frequency domain, has a distinct impact on the surface topography of the SPP and on the characteristics of the optical vortex. The experimental results reveal that the spatial Gaussian filter method for smoothing the SPP is suitable for the Computer Controlled Optical Surfacing (CCOS) technique and yields good optical properties.

  17. Efficient and accurate modeling of electron photoemission in nanostructures with TDDFT

    NASA Astrophysics Data System (ADS)

    Wopperer, Philipp; De Giovannini, Umberto; Rubio, Angel

    2017-03-01

    We derive and extend the time-dependent surface-flux method introduced in [L. Tao, A. Scrinzi, New J. Phys. 14, 013021 (2012)] within a time-dependent density-functional theory (TDDFT) formalism and use it to calculate photoelectron spectra and angular distributions of atoms and molecules when excited by laser pulses. We present other, existing computational TDDFT methods that are suitable for the calculation of electron emission in compact spatial regions, and compare their results. We illustrate the performance of the new method by simulating strong-field ionization of C60 fullerene and discuss final state effects in the orbital reconstruction of planar organic molecules.

  18. Mechanical discrete simulator of the electro-mechanical lift with n:1 roping

    NASA Astrophysics Data System (ADS)

    Alonso, F. J.; Herrera, I.

    2016-05-01

    The design process of new products in lift engineering is a difficult task due mainly to the complexity and slenderness of the lift system, demanding a predictive tool for the lift mechanics. A mechanical ad-hoc discrete simulator, as an alternative to ‘general purpose’ mechanical simulators, is proposed. Firstly, the synthesis and experimentation process that has led to a model capable of accurately simulating the response of the electromechanical lift is discussed. Then, the equations of motion are derived. The model comprises a discrete system of 5 vertically displaceable masses (car, counterweight, car frame, passengers/loads and lift drive), an inertial mass of the assembly tension pulley-rotor shaft which can rotate about the machine axis, and 6 mechanical connectors with a 1:1 suspension layout. The model is extended to any n:1 roping lift by setting 6 equivalent mechanical components (suspension systems for car and counterweight, lift drive silent blocks, tension pulley-lift drive stator and passengers/load equivalent spring-damper) by inductive inference from the 1:1 and generalized 2:1 roping systems. The application to the simulation of real elevator systems is achieved by numerical time integration of the governing equations using the Kutta-Meden algorithm, implemented in a computer program for ad-hoc elevator simulation called ElevaCAD.

  19. Simulation of Sweep-Jet Flow Control, Single Jet and Full Vertical Tail

    NASA Technical Reports Server (NTRS)

    Childs, Robert E.; Stremel, Paul M.; Garcia, Joseph A.; Heineck, James T.; Kushner, Laura K.; Storms, Bruce L.

    2016-01-01

    This work is a simulation technology demonstration of sweep-jet flow control used to suppress boundary layer separation and increase the maximum achievable load coefficients. A sweep jet is a discrete Coanda jet that oscillates in the plane parallel to an aerodynamic surface. It injects mass and momentum in the approximate streamwise direction. It also generates turbulent eddies at the oscillation frequency, which are typically large relative to the scales of boundary layer turbulence, and which augment mixing across the boundary layer to attack flow separation. Simulations of a fluidic oscillator, the sweep jet emerging from a nozzle downstream of the oscillator, and an array of sweep jets which suppresses boundary layer separation are performed. Simulation results are compared to data from a dedicated validation experiment of a single oscillator and its sweep jet, and from a wind tunnel test of a full-scale Boeing 757 vertical tail augmented with an array of sweep jets. A critical step in the work is the development of realistic time-dependent sweep jet inflow boundary conditions, derived from the results of the single-oscillator simulations, which create the sweep jets in the full-tail simulations. Simulations were performed using the computational fluid dynamics (CFD) solver Overflow, with high-order spatial discretization and a range of turbulence modeling. Good results were obtained for all flows simulated, when suitable turbulence modeling was used.

  20. Constrained Local UniversE Simulations: a Local Group factory

    NASA Astrophysics Data System (ADS)

    Carlesi, Edoardo; Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I.; Pilipenko, Sergey V.; Knebe, Alexander; Courtois, Hélène; Tully, R. Brent; Steinmetz, Matthias

    2016-05-01

    Near-field cosmology is practised by studying the Local Group (LG) and its neighbourhood. This paper describes a framework for simulating the `near field' on the computer. Assuming the Λ cold dark matter (ΛCDM) model as a prior and applying the Bayesian tools of the Wiener filter and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the ΛCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of haloes must obey specific isolation, mass and separation criteria. At the second level, the orbital angular momentum and energy are constrained, and on the third one the phase of the orbit is constrained. Out of the 300 constrained simulations, 146 LGs obey the first set of criteria, 51 the second and 6 the third. The robustness of our LG `factory' enables the construction of a large ensemble of simulated LGs. Suitable candidates for high-resolution hydrodynamical simulations of the LG can be drawn from this ensemble, which can be used to perform comprehensive studies of the formation of the LG.

  1. A comparison of non-local electron transport models for laser-plasmas relevant to inertial confinement fusion

    DOE PAGES

    Sherlock, M.; Brodrick, J. P.; Ridgers, C. P.

    2017-08-08

    Here, we compare a reduced non-local electron transport model to Vlasov-Fokker-Planck simulations. Two new test cases are considered: the propagation of a heat wave through a high-density region into a lower-density gas, and a one-dimensional hohlraum ablation problem. We find that the reduced model reproduces the peak heat flux well in the ablation region but significantly over-predicts the coronal preheat. The suitability of the reduced model for computing non-local transport effects other than thermal conductivity is considered by comparing the computed distribution function to the Vlasov-Fokker-Planck distribution function. It is shown that even when the reduced model reproduces the correct heat flux, the distribution function is significantly different from the Vlasov-Fokker-Planck prediction. Two simple modifications are considered which improve agreement between the models in the coronal region.

  2. An efficient direct method for image registration of flat objects

    NASA Astrophysics Data System (ADS)

    Nikolaev, Dmitry; Tihonkih, Dmitrii; Makovetskii, Artyom; Voronin, Sergei

    2017-09-01

    Image alignment of rigid surfaces is a rapidly developing area of research with many practical applications. Alignment methods can be roughly divided into two types: feature-based methods and direct methods. The well-known SURF and SIFT algorithms are examples of feature-based methods. Direct methods are those that exploit the pixel intensities without resorting to image features; image-based deformation is a general direct method for aligning images of deformable objects in 3D space. Nevertheless, it is ill-suited to the registration of images of 3D rigid objects, since the underlying structure cannot be directly evaluated. In this article, we propose a model that is suitable for image alignment of rigid flat objects under various illumination models. The brightness-constancy assumption is used for reconstruction of the optimal geometrical transformation. Computer simulation results are provided to illustrate the performance of the proposed algorithm for computing the correspondence between the pixels of two images.

  3. NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Sohlberg, A.; Watabe, H.; Iida, H.

    2008-07-01

    Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate MC-based scatter compensation using coarse-grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake, and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered-subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse-grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.

  4. Assessing performance of flaw characterization methods through uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Miorelli, R.; Le Bourdais, F.; Artusi, X.

    2018-04-01

    In this work, we assess inversion performance in terms of crack characterization and localization based on synthetic signals associated with ultrasonic and eddy-current physics. More precisely, two different standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to speed up the computation and avoid the computational burden often associated with iterative inversion algorithms, we replace the standard forward solver by a suitable metamodel fitted on a database built offline. In a second step, we assess the inversion performance by adding uncertainties on a subset of the database parameters and then, through the metamodel, propagating these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficient evaluation of the impact of the lack of knowledge of some parameters used to describe the inspection scenarios, a situation commonly encountered in the industrial NDE context.
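The metamodel-based inversion loop described above can be sketched in a few lines. The forward model, database grid, and noise level below are hypothetical stand-ins (the actual solvers are ultrasonic/eddy-current codes); the sketch only illustrates the pattern: an offline database, a cheap interpolating metamodel, iterative inversion, and fast uncertainty propagation by re-inverting perturbed measurements.

```python
import bisect
import random

# Hypothetical scalar forward model: maps a crack depth (mm) to a
# synthetic probe signal amplitude. Stands in for a full UT/EC solver.
def forward_solver(depth_mm, liftoff_mm=0.30):
    return depth_mm ** 1.5 / (1.0 + 4.0 * liftoff_mm)

# Offline database: sample the forward model on a grid of depths.
grid = [0.1 * i for i in range(1, 51)]          # 0.1 .. 5.0 mm
table = [forward_solver(d) for d in grid]

# Metamodel: piecewise-linear interpolation in the database, replacing
# the expensive solver inside the inversion loop.
def metamodel(depth_mm):
    i = min(max(bisect.bisect_left(grid, depth_mm), 1), len(grid) - 1)
    d0, d1 = grid[i - 1], grid[i]
    t = (depth_mm - d0) / (d1 - d0)
    return (1 - t) * table[i - 1] + t * table[i]

# Iterative inversion: bisection on the (monotone) metamodel response.
def invert(measured, lo=0.1, hi=5.0, iters=60):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if metamodel(mid) < measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# "Measurement" from a 2.0 mm crack, then cheap uncertainty propagation:
# perturb the measurement and re-invert through the metamodel many times.
truth = 2.0
measured = forward_solver(truth)
random.seed(0)
estimates = [invert(measured * (1 + random.gauss(0, 0.02))) for _ in range(500)]
mean = sum(estimates) / len(estimates)
```

Because each inversion touches only the interpolation table, hundreds of perturbed inversions cost far less than a single run of the real forward solver.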

  5. Electro-optic Mach-Zehnder Interferometer based Optical Digital Magnitude Comparator and 1's Complement Calculator

    NASA Astrophysics Data System (ADS)

    Kumar, Ajay; Raghuwanshi, Sanjeev Kumar

    2016-06-01

    Optical switching is one of the most essential phenomena in the optical domain. Electro-optic-effect-based switching can be applied to generate effective combinational and sequential logic circuits. Processing digital computational techniques in the optical domain brings the considerable advantages of optical communication technology, e.g. immunity to electromagnetic interference, compact size, signal security, parallel computing and larger bandwidth. The paper describes an efficient technique to implement a single-bit magnitude comparator and a 1's complement calculator using the electro-optic effect. The proposed techniques are simulated in MATLAB, and their suitability is verified using the highly reliable Opti-BPM software. The circuits are analyzed in order to specify optimized device parameters and thereby improve performance-affecting figures of merit, e.g. crosstalk, extinction ratio, and signal losses through the curved and straight waveguide sections.

  6. Modeling Methodologies for Design and Control of Solid Oxide Fuel Cell APUs

    NASA Astrophysics Data System (ADS)

    Pianese, C.; Sorrentino, M.

    2009-08-01

    Among the existing fuel cell technologies, Solid Oxide Fuel Cells (SOFC) are particularly suitable for both stationary and mobile applications due to their high energy-conversion efficiencies, modularity, high fuel flexibility, and low emissions and noise. Moreover, their high working temperatures enable efficient cogeneration applications. SOFCs are entering a pre-industrial era, and interest in design tools has grown strongly in recent years. Optimal system configuration, component sizing, and control and diagnostic system design require computational tools that meet the conflicting needs of accuracy, affordable computational time, limited experimental effort and flexibility. The paper gives an overview of control-oriented modeling of SOFCs at both the single-cell and stack level. Such an approach provides useful simulation tools for designing and controlling SOFC APUs destined for a wide application area, ranging from automotive to marine and airplane APUs.

  7. Computer simulations of phase field drops on super-hydrophobic surfaces

    NASA Astrophysics Data System (ADS)

    Fedeli, Livio

    2017-09-01

    We present a novel quasi-Newton continuation procedure that efficiently solves the system of nonlinear equations arising from the discretization of a phase field model for wetting phenomena. We perform a comparative numerical analysis that shows the improved speed of convergence gained with respect to other numerical schemes. Moreover, we discuss the conditions that, on a theoretical level, guarantee the convergence of this method. At each iterative step, a suitable continuation procedure develops and passes to the nonlinear solver an accurate initial guess. Discretization is performed with cell-centered finite differences. The resulting system of equations is solved on a composite grid that uses dynamic mesh refinement and multi-grid techniques. The final code enables three-dimensional, realistic computer experiments comparable to those produced in laboratory settings. This code offers not only new insights into the phenomenology of super-hydrophobicity, but also serves as a reliable predictive tool for the study of hydrophobic surfaces.

  8. A correlated nickelate synaptic transistor.

    PubMed

    Shi, Jian; Ha, Sieu D; Zhou, You; Schoofs, Frank; Ramanathan, Shriram

    2013-01-01

    Inspired by biological neural systems, neuromorphic devices may open up new computing paradigms to explore cognition, learning and limits of parallel computation. Here we report the demonstration of a synaptic transistor with SmNiO₃, a correlated electron system with insulator-metal transition temperature at 130°C in bulk form. Non-volatile resistance and synaptic multilevel analogue states are demonstrated by control over composition in ionic liquid-gated devices on silicon platforms. The extent of the resistance modulation can be dramatically controlled by the film microstructure. By simulating the time difference between postneuron and preneuron spikes as the input parameter of a gate bias voltage pulse, synaptic spike-timing-dependent plasticity learning behaviour is realized. The extreme sensitivity of electrical properties to defects in correlated oxides may make them a particularly suitable class of materials to realize artificial biological circuits that can be operated at and above room temperature and seamlessly integrated into conventional electronic circuits.

  9. Satellite interference analysis and simulation using personal computers

    NASA Astrophysics Data System (ADS)

    Kantak, Anil

    1988-03-01

    This report presents the complete analysis and formulas necessary to quantify the interference experienced by a generic satellite communications receiving station due to an interfering satellite. Both satellites, the desired as well as the interfering satellite, are considered to be in elliptical orbits. Formulas are developed for the satellite look angles and the satellite transmit angles generally related to the land mask of the receiving station site for both satellites. Formulas for considering the Doppler effect due to satellite motion as well as the Earth's rotation are developed. The effect of the interfering-satellite signal modulation and the Doppler effect on the power received are considered. The statistical formulation of the interference effect is presented in the form of a histogram of the interference to the desired signal power ratio. Finally, a computer program suitable for microcomputers such as the IBM AT is provided, with the flowchart, a sample run, results of the run, and the program code.
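As a rough illustration of two of the ingredients above, the sketch below computes a first-order Doppler shift from a satellite's range-rate and builds a histogram of a toy interference-to-desired-signal power ratio. The antenna pattern and link-budget numbers are placeholders chosen for illustration, not the report's actual formulas.

```python
import math
import random

C = 299_792_458.0  # speed of light, m/s

# First-order Doppler shift seen at the ground station: the received
# frequency scales with the range-rate (radial velocity) of the satellite;
# a receding satellite (positive range-rate) lowers the frequency.
def doppler_shifted(f_carrier_hz, range_rate_m_s):
    return f_carrier_hz * (1.0 - range_rate_m_s / C)

# Toy statistical picture of interference: sample relative geometries and
# histogram the interference-to-desired-signal power ratio (I/S, in dB).
# This off-axis gain envelope and the 40 dB desired-link figure are
# placeholders, not the report's link budget.
def off_axis_gain_db(angle_deg):
    return max(32.0 - 25.0 * math.log10(max(angle_deg, 1.0)), -10.0)

random.seed(1)
ratios_db = []
for _ in range(10_000):
    angle = random.uniform(1.0, 48.0)          # off-axis angle to interferer
    i_over_s = off_axis_gain_db(angle) - 40.0  # toy desired-link gain, dB
    ratios_db.append(i_over_s)

# Crude histogram of I/S over 5 dB bins, as in the report's presentation.
bins = {}
for r in ratios_db:
    key = 5 * int(r // 5)
    bins[key] = bins.get(key, 0) + 1
```

The same histogram-of-ratios presentation is what the report produces, with the toy geometry replaced by the full elliptical-orbit look-angle and Doppler formulas.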

  10. Satellite Interference Analysis and Simulation Using Personal Computers

    NASA Technical Reports Server (NTRS)

    Kantak, Anil

    1988-01-01

    This report presents the complete analysis and formulas necessary to quantify the interference experienced by a generic satellite communications receiving station due to an interfering satellite. Both satellites, the desired as well as the interfering satellite, are considered to be in elliptical orbits. Formulas are developed for the satellite look angles and the satellite transmit angles generally related to the land mask of the receiving station site for both satellites. Formulas for considering the Doppler effect due to satellite motion as well as the Earth's rotation are developed. The effect of the interfering-satellite signal modulation and the Doppler effect on the power received are considered. The statistical formulation of the interference effect is presented in the form of a histogram of the interference to the desired signal power ratio. Finally, a computer program suitable for microcomputers such as the IBM AT is provided, with the flowchart, a sample run, results of the run, and the program code.

  11. Automatic Learning of Fine Operating Rules for Online Power System Security Control.

    PubMed

    Sun, Hongbin; Zhao, Feng; Wang, Hao; Wang, Kang; Jiang, Weiyong; Guo, Qinglai; Zhang, Boming; Wehenkel, Louis

    2016-08-01

    Fine operating rules for security control and an automatic system for their online discovery were developed to adapt to the development of smart grids. The automatic system uses the real-time system state to determine critical flowgates, and then a continuation power-flow-based security analysis is used to compute the initial transfer capability of critical flowgates. Next, the system applies Monte Carlo simulation of expected short-term operating-condition changes, feature selection, and linear least-squares fitting to derive the fine operating rules. The proposed system was validated both on an academic test system and on a provincial power system in China. The results indicated that the derived rules are accurate, offer good interpretability, and are suitable for real-time power system security control. The use of high-performance computing systems enables these fine operating rules to be refreshed online every 15 min.
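The rule-fitting step can be illustrated with a minimal sketch: Monte Carlo samples of operating-condition changes feed a linear least-squares fit of flowgate transfer capability against two features. The toy "security analysis" and its coefficients are assumptions for illustration, not the paper's continuation power flow.

```python
import random

# Hypothetical Monte Carlo scenario generator: each sample perturbs two
# operating features (a load level and a generation level, in MW) and
# evaluates the transfer capability of a critical flowgate with a toy
# linear-plus-noise "security analysis".
def transfer_capability(load_mw, gen_mw):
    return 1200.0 - 0.8 * load_mw + 0.5 * gen_mw + random.gauss(0, 5.0)

random.seed(42)
samples = []
for _ in range(2000):
    load = random.uniform(400.0, 900.0)
    gen = random.uniform(200.0, 600.0)
    samples.append((load, gen, transfer_capability(load, gen)))

# Linear least-squares fit of the fine operating rule
#   capability ~ a*load + b*gen + c
# via the normal equations, solved with Gaussian elimination.
def lstsq(rows):
    # Build X^T X and X^T y for the columns [load, gen, 1].
    xtx = [[0.0] * 3 for _ in range(3)]
    xty = [0.0] * 3
    for load, gen, y in rows:
        x = (load, gen, 1.0)
        for i in range(3):
            xty[i] += x[i] * y
            for j in range(3):
                xtx[i][j] += x[i] * x[j]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, 3):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, 3):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution.
    coef = [0.0] * 3
    for r in (2, 1, 0):
        s = sum(xtx[r][c] * coef[c] for c in range(r + 1, 3))
        coef[r] = (xty[r] - s) / xtx[r][r]
    return coef  # [a, b, c]

a, b, c = lstsq(samples)
```

The fitted coefficients recover the generating model's slopes, which is the interpretability property the paper emphasizes: an operator can read the rule directly as MW-per-MW sensitivities.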

  12. Chaste: A test-driven approach to software development for biological modelling

    NASA Astrophysics Data System (ADS)

    Pitt-Francis, Joe; Pathmanathan, Pras; Bernabeu, Miguel O.; Bordas, Rafel; Cooper, Jonathan; Fletcher, Alexander G.; Mirams, Gary R.; Murray, Philip; Osborne, James M.; Walter, Alex; Chapman, S. Jon; Garny, Alan; van Leeuwen, Ingeborg M. M.; Maini, Philip K.; Rodríguez, Blanca; Waters, Sarah L.; Whiteley, Jonathan P.; Byrne, Helen M.; Gavaghan, David J.

    2009-12-01

    Chaste ('Cancer, heart and soft-tissue environment') is a software library and a set of test suites for computational simulations in the domain of biology. Current functionality has arisen from modelling in the fields of cancer, cardiac physiology and soft-tissue mechanics. It is released under the LGPL 2.1 licence. Chaste has been developed using agile programming methods. The project began in 2005 when it was reasoned that the modelling of a variety of physiological phenomena required both a generic mathematical modelling framework, and a generic computational/simulation framework. The Chaste project evolved from the Integrative Biology (IB) e-Science Project, an inter-institutional project aimed at developing a suitable IT infrastructure to support physiome-level computational modelling, with a primary focus on cardiac and cancer modelling.
    Program summary
    Program title: Chaste
    Catalogue identifier: AEFD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFD_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: LGPL 2.1
    No. of lines in distributed program, including test data, etc.: 5 407 321
    No. of bytes in distributed program, including test data, etc.: 42 004 554
    Distribution format: tar.gz
    Programming language: C++
    Operating system: Unix
    Has the code been vectorised or parallelized?: Yes. Parallelized using MPI.
    RAM: < 90 Megabytes for two of the scenarios described in Section 6 of the manuscript (monodomain re-entry on a slab, or cylindrical crypt simulation); up to 16 Gigabytes (distributed across processors) for a full-resolution bidomain cardiac simulation.
    Classification: 3.
    External routines: Boost, CodeSynthesis XSD, CxxTest, HDF5, METIS, MPI, PETSc, Triangle, Xerces
    Nature of problem: Chaste may be used for solving coupled ODE and PDE systems arising from modelling biological systems. Use of Chaste in two application areas is described in this paper: cardiac electrophysiology and intestinal crypt dynamics.
    Solution method: Coupled multi-physics with PDE, ODE and discrete mechanics simulation.
    Running time: The largest cardiac simulation described in the manuscript takes about 6 hours to run on a single 3 GHz core. See the results section (Section 6) of the manuscript for discussion of parallel scaling.

  13. Geometrically motivated coordinate system for exploring spacetime dynamics in numerical-relativity simulations using a quasi-Kinnersley tetrad

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Brink, Jeandrew; Szilágyi, Béla; Lovelace, Geoffrey

    2012-10-01

    We investigate the suitability and properties of a quasi-Kinnersley tetrad and a geometrically motivated coordinate system as tools for quantifying both strong-field and wave-zone effects in numerical relativity (NR) simulations. We fix two of the coordinate degrees of freedom of the metric, namely, the radial and latitudinal coordinates, using the Coulomb potential associated with the quasi-Kinnersley transverse frame. These coordinates are invariants of the spacetime and can be used to unambiguously fix the outstanding spin-boost freedom associated with the quasi-Kinnersley frame (and thus can be used to choose a preferred quasi-Kinnersley tetrad). In the limit of small perturbations about a Kerr spacetime, these geometrically motivated coordinates and quasi-Kinnersley tetrad reduce to Boyer-Lindquist coordinates and the Kinnersley tetrad, irrespective of the simulation gauge choice. We explore the properties of this construction both analytically and numerically, and we gain insights regarding the propagation of radiation described by a super-Poynting vector, further motivating the use of this construction in NR simulations. We also quantify in detail the peeling properties of the chosen tetrad and gauge. We argue that these choices are particularly well-suited for a rapidly converging wave-extraction algorithm as the extraction location approaches infinity, and we explore numerically the extent to which this property remains applicable on the interior of a computational domain. Using a number of additional tests, we verify numerically that the prescription behaves as required in the appropriate limits regardless of simulation gauge; these tests could also serve to benchmark other wave extraction methods. 
We explore the behavior of the geometrically motivated coordinate system in dynamical binary-black-hole NR mergers; while we obtain no unexpected results, we do find that these coordinates turn out to be useful for visualizing NR simulations (for example, for vividly illustrating effects such as the initial burst of spurious junk radiation passing through the computational domain). Finally, we carefully scrutinize the head-on collision of two black holes and, for example, the way in which the extracted waveform changes as it moves through the computational domain.

  14. Estimating long-term evolution of fine sediment budget in the Iffezheim reservoir using a simplified method based on classification of boundary conditions

    NASA Astrophysics Data System (ADS)

    Zhang, Qing; Hillebrand, Gudrun; Hoffmann, Thomas; Hinkelmann, Reinhard

    2017-04-01

    The Iffezheim reservoir is the last of a series of reservoirs on the Upper Rhine in Germany. Since its construction in 1977, approximately 115,000 m³ of fine sediments have accumulated annually in the weir channel (WSA Freiburg, 2011). In order to obtain detailed information about the space-time development of the topography, the riverbed evolution was measured using echo sounding by the German Federal Waterways and Shipping Administration (WSV). 37 sets of sounding data, obtained between July 2000 and February 2011, were used in this research. In a previous work, the morphodynamic processes in the Iffezheim reservoir were investigated using a high-resolution 3D model. The 3D computational fluid dynamics software SSIIM II (Olsen, 2014) was used for this purpose (Zhang et al., 2015). The model was calibrated using field measurements. A computational time of 14.5 hours, using 24 cores of a 2.4 GHz reference computer, was needed to simulate a period of three months on a grid of 238,013 cells. Thus, long-term (e.g. 30-year) simulation of the morphodynamics of the fine sediment budget in the Iffezheim reservoir with this model is not feasible. A low-complexity approach, "classification of the boundary conditions of discharge and suspended sediment concentration", was applied in this research for a long-term numerical simulation. The basic idea of the approach is to replace unsteady or quasi-steady simulations of deposition by a limited series of stationary ones. For these, daily volume changes were calculated considering representative discharge and concentration. Representative boundary conditions were determined by subdividing the time series of discharge and concentration into classes and using the central value of each class. The amount of deposition in the reservoir for a given period can then be obtained by adding up the calculated daily depositions.
    This approach was applied to 10 short-term periods, each between two successive echo sounding measurements, and to 2 longer ones comprising several short-term periods. Short-term periods span 1 to 3 months, whereas the long-term periods cover 2 and 5 years. The simulation results showed acceptable agreement with the measurements. It was also found that the long-term periods deviated less from the measurements than the short ones. This simplified method yielded clear savings in computational time compared with the unsteady simulations; in this case only 3 hours of computational time were needed for a 5-year simulation period using the reference computer mentioned above. Further research is needed with respect to the limits of this linear approach, i.e. with respect to the frequency with which the set of steady simulations has to be updated due to significant changes in morphology and, in turn, in hydraulics. Yet, the preliminary results are promising, suggesting that the developed approach is well suited to long-term simulation of riverbed evolution. REFERENCES Olsen, N.R.B. 2014. A three-dimensional numerical model for simulation of sediment movements in water intakes with multiblock option. Version 1 and 2. User's manual. Department of Hydraulic and Environmental Engineering, The Norwegian University of Science and Technology, Trondheim, Norway. Wasser- und Schifffahrtsamt (WSA) Freiburg. 2011. Sachstandsbericht oberer Wehrkanal Staustufe Iffezheim. Technical report - Upper weir channel of the Iffezheim hydropower reservoir. Zhang, Q., Hillebrand, G., Moser, H. & Hinkelmann, R. 2015. Simulation of non-uniform sediment transport in a German reservoir with the SSIIM model and sensitivity analysis. Proceedings of the 36th IAHR World Congress, The Hague, The Netherlands.
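A minimal sketch of the classification idea, with a placeholder deposition-rate model standing in for the stationary SSIIM runs: discharge and concentration are binned into classes, one stationary deposition rate is precomputed per class, and the long-term budget is the sum of daily class rates. The forcing series, class edges, and rate model below are illustrative assumptions.

```python
import random

# Toy daily forcing: a year of discharge Q (m3/s) and suspended-sediment
# concentration C (mg/l), standing in for the measured time series.
random.seed(7)
days = [(random.uniform(500, 3000), random.uniform(10, 120)) for _ in range(365)]

# Classify the boundary conditions: subdivide Q and C into classes and
# use the class-central values as representative stationary conditions.
q_edges = [500, 1000, 1500, 2000, 2500, 3000]
c_edges = [10, 40, 70, 100, 120]

def class_index(x, edges):
    for i in range(len(edges) - 1):
        if x < edges[i + 1]:
            return i
    return len(edges) - 2

# One stationary "simulation" per (Q-class, C-class) pair: a placeholder
# deposition-rate model (m3/day) in place of the 3D SSIIM run.
def stationary_deposition(q_mid, c_mid):
    return 0.004 * c_mid * (1.0 - q_mid / 4000.0)

rate = {}
for i in range(len(q_edges) - 1):
    for j in range(len(c_edges) - 1):
        q_mid = 0.5 * (q_edges[i] + q_edges[i + 1])
        c_mid = 0.5 * (c_edges[j] + c_edges[j + 1])
        rate[(i, j)] = stationary_deposition(q_mid, c_mid)

# Long-term budget: add up the precomputed daily depositions, so no
# unsteady simulation of the full period is needed.
total_m3 = sum(rate[(class_index(q, q_edges), class_index(c, c_edges))]
               for q, c in days)
```

Only 20 stationary runs are needed here regardless of the simulated period's length, which is the source of the computational savings the abstract reports.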

  15. Vibration of a string against multiple spring-mass-damper stoppers

    NASA Astrophysics Data System (ADS)

    Shin, Ji-Hwan; Talib, Ezdiani; Kwak, Moon K.

    2018-02-01

    When a building sways due to strong wind or an earthquake, the elevator rope can undergo resonance, resulting in collision with the hoistway wall. In this study, a hard stopper and a soft stopper comprising a spring-mass-damper system installed along the hoistway wall were considered to prevent the string from undergoing excessive vibrations. The collision of the string with multiple hard stoppers and multiple spring-mass-damper stoppers was investigated using an analytical method. The analysis yielded new formulas and computational algorithms suitable for simulating the vibration of the string against multiple stoppers. The numerical results show that the spring-mass-damper stopper is more effective in suppressing the vibrations of the string and reducing structural failure. The proposed algorithms were shown to be efficient in simulating the motion of the string against a vibration stopper.

  16. Is a matrix exponential specification suitable for the modeling of spatial correlation structures?

    PubMed Central

    Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha

    2018-01-01

    This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375

  17. Product selectivity control induced by using liquid-liquid parallel laminar flow in a microreactor.

    PubMed

    Amemiya, Fumihiro; Matsumoto, Hideyuki; Fuse, Keishi; Kashiwagi, Tsuneo; Kuroda, Chiaki; Fuchigami, Toshio; Atobe, Mahito

    2011-06-07

    Product selectivity control based on a liquid-liquid parallel laminar flow has been successfully demonstrated by using a microreactor. Our electrochemical microreactor system enables regioselective cross-coupling of aldehydes with allylic chlorides via chemoselective cathodic reduction of the substrate through the combined use of a suitable flow mode and a corresponding cathode material. The formation of liquid-liquid parallel laminar flow in the microreactor was supported by estimation of the benzaldehyde diffusion coefficient and by computational fluid dynamics simulation. The diffusion coefficient for benzaldehyde in Bu₄NClO₄-HMPA medium was determined to be 1.32 × 10⁻⁷ cm² s⁻¹ by electrochemical measurements, and flow simulation using this value revealed the formation of a clear concentration gradient of benzaldehyde in the microreactor channel over a specific channel length. In addition, the necessity of the liquid-liquid parallel laminar flow was confirmed by flow-mode experiments.
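A quick back-of-the-envelope check of why the laminar interface can stay sharp: with the measured diffusion coefficient, diffusive broadening over a plausible residence time is only on the order of ten micrometres. The channel length and mean velocity below are illustrative assumptions, not values from the paper.

```python
import math

# Estimate how far benzaldehyde diffuses across the channel during its
# residence time, to check that a parallel laminar interface stays sharp.
D = 1.32e-7             # measured diffusion coefficient, cm^2/s
channel_len_cm = 2.0    # assumed channel length
mean_velocity_cm_s = 0.5  # assumed mean flow velocity

residence_s = channel_len_cm / mean_velocity_cm_s

# 1D diffusive broadening scale: L_diff ~ sqrt(2 * D * t)
l_diff_cm = math.sqrt(2 * D * residence_s)
l_diff_um = l_diff_cm * 1e4
```

With these assumed numbers the broadening is roughly 10 µm, small compared with typical microchannel widths of a few hundred micrometres, consistent with the sharp concentration gradient the CFD simulation shows.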

  18. Comparison Between Simulated and Experimentally Measured Performance of a Four Port Wave Rotor

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Wilson, Jack; Welch, Gerard E.

    2007-01-01

    Performance and operability testing has been completed on a laboratory-scale, four-port wave rotor of the type suitable for use as a topping cycle on a gas turbine engine. Many design aspects and performance estimates for the wave rotor were determined using a time-accurate, one-dimensional, computational fluid dynamics-based simulation code developed specifically for wave rotors. The code follows a single rotor passage as it moves past the various ports, which in this reference frame become boundary conditions. This paper compares wave rotor performance predicted with the code to that measured during laboratory testing. Both on- and off-design operating conditions were examined. Overall, the match between code and rig was found to be quite good. At operating points where there were disparities, the assumption of larger-than-expected internal leakage rates successfully realigned code predictions and laboratory measurements. Possible mechanisms for such leakage rates are discussed.

  19. Large-deformation modal coordinates for nonrigid vehicle dynamics

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Fleischer, G. E.

    1972-01-01

    The derivation of minimum-dimension sets of discrete-coordinate and hybrid-coordinate equations of motion of a system consisting of an arbitrary number of hinge-connected rigid bodies assembled in tree topology is presented. These equations are useful for the simulation of dynamical systems that can be idealized as tree-like arrangements of substructures, with each substructure consisting of either a rigid body or a collection of elastically interconnected rigid bodies restricted to small relative rotations at each connection. Thus, some of the substructures represent elastic bodies subjected to small strains or local deformations, but possibly large gross deformations. In the hybrid formulation, distributed coordinates, referred to herein as large-deformation modal coordinates, are used for the deformations of these substructures. The equations are in a form suitable for incorporation into one or more computer programs to be used as multipurpose tools in the simulation of spacecraft and other complex electromechanical systems.

  20. Lagrangian study of transport of subarctic water across the Subpolar Front in the Japan Sea

    NASA Astrophysics Data System (ADS)

    Prants, Sergey V.; Uleysky, Michael Yu.; Budyansky, Maxim V.

    2018-06-01

    The southward near-surface transport of transformed subarctic water across the Subpolar Front in the Japan Sea is simulated and analyzed based on altimeter data from January 1, 1993 to December 31, 2017. Computing Lagrangian indicators for a large number of synthetic particles, advected by the AVISO velocity field, we find preferred transport pathways across the Subpolar Front. The southward transport occurs mainly in the central part of the frontal zone due to suitable dispositions of mesoscale eddies promoting propagation of subarctic water to the south. It is documented with the help of Lagrangian origin and L-maps and verified by the tracks of available drifters. The transport of transformed subarctic water to the south is compared with the transport of transformed subtropical water to the north simulated by Prants et al. (Nonlinear Process Geophys 24(1):89-99, 2017c).

  1. Darrieus rotor aerodynamics

    NASA Astrophysics Data System (ADS)

    Klimas, P. C.

    1982-05-01

    A summary of the progress in modeling the aerodynamic effects on the blades of a Darrieus wind turbine is presented. Interference is discussed in terms of blade/blade-wake interaction and improvements in single and multiple stream tube models, of vortex simulations of blades and their wakes, and of a hybrid momentum/vortex code that combines fast computation time with interference-describing capabilities. An empirical model has been developed for treating the properties of dynamic stall, such as airfoil geometry, Reynolds number, reduced frequency, angle of attack, and Mach number. Pitching circulation has been simulated as potential flow about a two-dimensional flat plate, along with applications of the concepts of virtual camber and virtual incidence to a cambered airfoil operating in a rectilinear flowfield. Finally, a need is indicated to develop a loading model suitable for nonsymmetrical blade sections, as well as for blade behavior in a dynamic, curvilinear regime.

  2. Metallic Fuel Casting Development and Parameter Optimization Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R.S. Fielding; J. Crapps; C. Unal

    One of the advantages of metallic fuel is the ability to cast the fuel slugs to near net shape with little additional processing. However, the high aspect ratio of the fuel is not ideal for casting. EBR-II fuel was cast using counter-gravity injection casting (CGIC), but concerns have been raised about the feasibility of this process for americium-bearing alloys. The Fuel Cycle Research and Development program has begun developing gravity casting techniques suitable for fuel production. Compared to CGIC, gravity casting does not require a large heel that must then be recycled, does not require application of a vacuum during melting, and is conducive to reusable molds. Development has included fabrication of two separate bench-scale (approximately 300 gram) systems. To shorten development time, computer simulations have been used to ensure mold and crucible designs are feasible and to identify which fluid properties most affect casting behavior and therefore require more characterization.

  3. A new kind of metal detector based on chaotic oscillator

    NASA Astrophysics Data System (ADS)

    Hu, Wenjing

    2017-12-01

    The sensitivity of a metal detector greatly depends on its ability to identify weak signals from the probe. To improve this sensitivity, this paper applies the Duffing chaotic oscillator to metal detection, exploiting its characteristic sensitivity to weak periodic signals. To construct a suitable Duffing system for detectors, this paper computes two Lyapunov characteristic exponents of the Duffing oscillator, which help to obtain the threshold of the Duffing system in the critical state accurately and give quantitative criteria for chaos. A corresponding simulation model of the chaotic oscillator is built with the Simulink toolbox of Matlab. Simulation results show that the Duffing oscillator is very sensitive to sinusoidal signals at high frequencies, and experimental results show that the measurable diameter of metal particles is about 1.5 mm. This indicates that the new method can feasibly and effectively improve metal detector sensitivity.
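
The threshold-finding step above rests on estimating the largest Lyapunov exponent of the driven Duffing system. As a rough illustration (not the authors' Matlab/Simulink model; the equation form and all parameter values here are assumptions for the sketch), the exponent can be estimated with Benettin's two-trajectory renormalization method:

```python
import numpy as np

def duffing_rhs(state, t, delta=0.25, gamma=0.3, omega=1.0):
    # Duffing oscillator in the common double-well form:
    #   x'' + delta*x' - x + x**3 = gamma*cos(omega*t)
    x, v = state
    return np.array([v, -delta * v + x - x**3 + gamma * np.cos(omega * t)])

def rk4_step(f, state, t, dt, **kw):
    # one classical fourth-order Runge-Kutta step
    k1 = f(state, t, **kw)
    k2 = f(state + 0.5 * dt * k1, t + 0.5 * dt, **kw)
    k3 = f(state + 0.5 * dt * k2, t + 0.5 * dt, **kw)
    k4 = f(state + dt * k3, t + dt, **kw)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def largest_lyapunov(gamma, omega=1.0, dt=0.01, steps=20_000, d0=1e-8):
    # Benettin's method: advance a reference and a perturbed trajectory,
    # renormalize their separation to d0 every step; the mean log-growth
    # rate of the separation estimates the largest Lyapunov exponent.
    a = np.array([0.1, 0.0])
    b = a + np.array([d0, 0.0])
    total, t = 0.0, 0.0
    for _ in range(steps):
        a = rk4_step(duffing_rhs, a, t, dt, gamma=gamma, omega=omega)
        b = rk4_step(duffing_rhs, b, t, dt, gamma=gamma, omega=omega)
        t += dt
        d = np.linalg.norm(b - a)
        total += np.log(d / d0)
        b = a + (b - a) * (d0 / d)   # renormalize the separation
    return total / (steps * dt)
```

For the unforced, damped case (gamma=0) the oscillator settles into one of its wells, so the estimated exponent is negative; a sign change of this estimate as the drive amplitude grows is the kind of quantitative chaos criterion the abstract refers to.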

  4. Hypersonic Inlet for a Laser Powered Propulsion System

    NASA Astrophysics Data System (ADS)

    Harrland, Alan; Doolan, Con; Wheatley, Vincent; Froning, Dave

    2011-11-01

    Propulsion within the lightcraft concept is produced via laser-induced detonation of an incoming hypersonic air stream. This process requires engine configurations that offer good performance over all flight speeds and angles of attack, so that the required thrust is maintained. Stream-traced hypersonic inlets have demonstrated the required performance in conventional hydrocarbon-fuelled scramjet engines, and this approach has been applied to the laser-powered lightcraft vehicle. This paper outlines the current methodology employed in the inlet design, with a particular focus on the performance of the lightcraft inlet at angles of attack. Fully three-dimensional turbulent computational fluid dynamics simulations have been performed on a variety of inlet configurations, and the performance of the lightcraft inlets has been evaluated at differing angles of attack. An idealized laser detonation simulation has also been performed to verify that the lightcraft inlet does not unstart during the laser-powered propulsion cycle.

  5. Operation analysis of a Chebyshev-Pantograph leg mechanism for a single DOF biped robot

    NASA Astrophysics Data System (ADS)

    Liang, Conghui; Ceccarelli, Marco; Takeda, Yukio

    2012-12-01

    In this paper, an operation analysis of a Chebyshev-Pantograph leg mechanism is presented for a single degree of freedom (DOF) biped robot. The proposed leg mechanism is composed of a Chebyshev four-bar linkage and a pantograph mechanism. In contrast to general fully actuated anthropomorphic leg mechanisms, the proposed leg mechanism has peculiar features such as compactness, low cost, and easy operation. Kinematic equations of the proposed leg mechanism are formulated for a computer-oriented simulation. Simulation results show the operation performance of the proposed leg mechanism with suitable characteristics. A parametric study has been carried out to evaluate the operation performance as a function of design parameters. A prototype of a single-DOF biped robot equipped with two proposed leg mechanisms has been built at LARM (Laboratory of Robotics and Mechatronics). Experimental tests show the practical, feasible walking ability of the prototype; drawbacks of the mechanical design are also discussed.
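
The kinematics of a Chebyshev four-bar stage can be sketched with a standard circle-intersection position analysis. The link ratios below (base 4, rockers 5, coupler 2) are the commonly quoted Chebyshev straight-line proportions, assumed here for illustration; they are not taken from the paper, and the pantograph stage is omitted:

```python
import numpy as np

def chebyshev_pose(theta, base=4.0, rocker=5.0, coupler=2.0):
    """Position analysis of a Chebyshev four-bar linkage.
    Ground pivots O1=(0,0) and O2=(base,0); theta is the left rocker
    angle. Joint B is found by intersecting a circle of radius `coupler`
    about A with a circle of radius `rocker` about O2. Returns joints
    A, B and the coupler midpoint P (the approximately straight-moving
    'foot' point of the classical linkage)."""
    o2 = np.array([base, 0.0])
    a = rocker * np.array([np.cos(theta), np.sin(theta)])
    d = np.linalg.norm(o2 - a)
    if not abs(coupler - rocker) <= d <= coupler + rocker:
        raise ValueError("linkage cannot be assembled at this angle")
    # circle-circle intersection, upper branch
    t = (coupler**2 - rocker**2 + d**2) / (2 * d)
    h = np.sqrt(max(coupler**2 - t**2, 0.0))
    u = (o2 - a) / d                  # unit vector from A toward O2
    n = np.array([-u[1], u[0]])       # unit normal to that direction
    b = a + t * u + h * n
    p = 0.5 * (a + b)                 # coupler midpoint
    return a, b, p
```

Sweeping theta and recording P gives the coupler curve; the self-consistency of the solution can be checked by confirming that the computed joints respect the link lengths.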

  6. BIPV: a real-time building performance study for a roof-integrated facility

    NASA Astrophysics Data System (ADS)

    Aaditya, Gayathri; Mani, Monto

    2018-03-01

    A building-integrated photovoltaic (BIPV) system is a photovoltaic (PV) installation that generates energy while serving as part of the building envelope. A building element (e.g. roof or wall) is selected on the basis of its functional performance, which can include structure, durability, maintenance, weathering, thermal insulation, and acoustics. The present paper discusses the suitability of PV as a building element in terms of thermal performance, based on a case study of a 5.25 kWp roof-integrated BIPV system in a tropical region. The performance of PV has been compared with conventional construction materials, and various scenarios have been simulated to understand the impact on occupant comfort levels. In the current case study, PV as a roofing material has been shown to cause significant thermal discomfort to the occupants. The study is based on real-time data monitoring supported by a computer-based building simulation model.

  7. Lagrangian study of transport of subarctic water across the Subpolar Front in the Japan Sea

    NASA Astrophysics Data System (ADS)

    Prants, Sergey V.; Uleysky, Michael Yu.; Budyansky, Maxim V.

    2018-05-01

    The southward near-surface transport of transformed subarctic water across the Subpolar Front in the Japan Sea is simulated and analyzed based on altimeter data from January 1, 1993 to December 31, 2017. Computing Lagrangian indicators for a large number of synthetic particles, advected by the AVISO velocity field, we find preferred transport pathways across the Subpolar Front. The southward transport occurs mainly in the central part of the frontal zone due to suitable dispositions of mesoscale eddies promoting propagation of subarctic water to the south. It is documented with the help of Lagrangian origin and L-maps and verified by the tracks of available drifters. The transport of transformed subarctic water to the south is compared with the transport of transformed subtropical water to the north simulated by Prants et al. (Nonlinear Process Geophys 24(1):89-99, 2017c).

  8. Chemical warfare agent simulants for human volunteer trials of emergency decontamination: A systematic review

    PubMed Central

    Wyke, Stacey; Marczylo, Tim; Collins, Samuel; Gaulton, Tom; Foxall, Kerry; Amlôt, Richard; Duarte‐Davidson, Raquel

    2017-01-01

    Incidents involving the release of chemical agents can pose significant risks to public health. In such an event, emergency decontamination of affected casualties may need to be undertaken to reduce injury and possible loss of life. To ensure these methods are effective, human volunteer trials (HVTs) of decontamination protocols, using simulant contaminants, have been conducted. Simulants must mimic the physicochemical properties of more harmful chemicals while remaining non-toxic at the dose applied. This review focuses on studies that employed chemical warfare agent simulants in decontamination contexts, to identify those simulants most suitable for use in HVTs of emergency decontamination. Twenty-two simulants were identified, of which 17 were determined unsuitable for use in HVTs. The remaining simulants (n = 5) were further scrutinized for potential suitability according to toxicity, physicochemical properties and similarity to their more toxic counterparts. Three suitable simulants for use in HVTs were identified: methyl salicylate (a simulant for sulphur mustard), diethyl malonate (a simulant for soman) and malathion (a simulant for VX or toxic industrial chemicals). All have been safely used in previous HVTs and have a range of physicochemical properties that allow useful inference to more toxic chemicals when employed in future studies of emergency decontamination systems. PMID:28990191

  9. The Primary Computer Dictionary.

    ERIC Educational Resources Information Center

    Girard, Suzanne; Willing, Kathlene

    Suitable for children from kindergarten to grade three, this dictionary is designed to introduce young children to computer terminology at a level that they will understand and find useful. It is also suitable for parents as a home resource, for library use, and as a handbook for teachers. The first sentence of each definition contains the kernel…

  10. A Dynamic Finite Element Method for Simulating the Physics of Faults Systems

    NASA Astrophysics Data System (ADS)

    Saez, E.; Mora, P.; Gross, L.; Weatherley, D.

    2004-12-01

    We introduce a dynamic finite element method using a novel high-level scripting language to describe the physical equations, boundary conditions and time integration scheme. We use the parallel Finley library, a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. Stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time using a Verlet-type scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models; we adapt the 2D model for simulating the dynamics of parallel fault systems to the finite element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, together with the numerical integration approach introduced by Mora and Place in the lattice solid model. To illustrate the new finite element model, single- and multi-fault simulation examples are presented.
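
The explicit time stepping described above can be illustrated on a toy 1D elastic problem. The sketch below is not the Finley/escript code; a simple mass-spring chain stands in for the assembled finite element matrices, and displacements are advanced with a central-difference (velocity-Verlet-style) update:

```python
import numpy as np

def simulate_chain(n=50, steps=2000, dt=1e-3, k=100.0, m=1.0):
    """Explicit central-difference (Verlet-type) integration of a 1D
    chain of masses coupled by linear springs, with both ends fixed --
    a toy stand-in for explicit elastodynamic time stepping."""
    u = np.zeros(n)          # nodal displacements
    v = np.zeros(n)          # nodal velocities
    u[n // 2] = 0.01         # initial perturbation in the middle

    def accel(u):
        a = np.zeros(n)
        # internal elastic force: discrete Laplacian of displacements
        a[1:-1] = k * (u[2:] - 2 * u[1:-1] + u[:-2]) / m
        return a             # fixed ends: a[0] = a[-1] = 0

    a = accel(u)
    for _ in range(steps):
        u += dt * v + 0.5 * dt**2 * a    # position update
        a_new = accel(u)
        v += 0.5 * dt * (a + a_new)      # velocity update
        a = a_new
    return u, v
```

Because the scheme is symplectic, the total mechanical energy of the chain stays close to its initial value over the run, which is a convenient correctness check for this kind of explicit integrator.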

  11. Systematic and Automated Development of Quantum Mechanically Derived Force Fields: The Challenging Case of Halogenated Hydrocarbons.

    PubMed

    Prampolini, Giacomo; Campetella, Marco; De Mitri, Nicola; Livotto, Paolo Roberto; Cacelli, Ivo

    2016-11-08

    A robust and automated protocol for the derivation of sound force field parameters, suitable for condensed-phase classical simulations, is here tested and validated on several halogenated hydrocarbons, a class of compounds for which standard force fields have often been reported to deliver rather inaccurate results. The major strength of the proposed protocol is that all of the parameters are derived from first principles alone, because all of the information required is retrieved from quantum mechanical data purposely computed for the investigated molecule. This a priori parametrization is carried out separately for the intra- and intermolecular contributions to the force field, exploiting the Joyce and Picky programs, respectively, previously developed in our group. To avoid high computational costs, all quantum mechanical calculations were performed with density functional theory. Because the choice of the functional is known to be crucial for the description of the intermolecular interactions, a specific procedure is proposed that allows for a reliable benchmark of different functionals against higher-level data. The intramolecular and intermolecular contributions are eventually joined together, and the resulting quantum mechanically derived force field is thereafter employed in lengthy molecular dynamics simulations to compute several thermodynamic properties that characterize the resulting bulk phase. The accuracy of the proposed parametrization protocol is finally validated by comparing the computed macroscopic observables with the available experimental counterparts. It is found that, on average, the proposed approach is capable of yielding a consistent description of the investigated set, often outperforming the standard literature force fields, or at least delivering results of similar accuracy.

  12. Numerical Study of Boundary Layer Interaction with Shocks: Method Improvement and Test Computation

    NASA Technical Reports Server (NTRS)

    Adams, N. A.

    1995-01-01

    The objective is the development of a high-order, high-resolution method for the direct numerical simulation of shock/turbulent-boundary-layer interaction. Details concerning the spatial discretization of the convective terms can be found in Adams and Shariff (1995). The computer code based on this method, as introduced in Adams (1994), was formulated in Cartesian coordinates and thus has been limited to simple rectangular domains. For more general two-dimensional geometries, such as a compression corner, an extension to generalized coordinates is necessary. To keep the requirements on grid generation low, the extended formulation should allow for non-orthogonal grids. Still, for simplicity and cost efficiency, periodicity can be assumed in one cross-flow direction. For easy vectorization, the compact-ENO coupling algorithm as used in Adams (1994) treated whole planes normal to the derivative direction with the ENO scheme whenever at least one point of the plane satisfied the detection criterion. This is too restrictive for more general geometries and more complex shock patterns. Here we introduce a localized compact-ENO coupling algorithm, which is efficient as long as the overall number of grid points treated by the ENO scheme is small compared to the total number of grid points. Validation and test computations with the final code are performed to assess its efficiency and suitability for the problems of interest. We define a set of parameters for which a direct numerical simulation of a turbulent boundary layer along a compression corner with reasonably fine resolution is affordable.
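
The localized coupling idea, flagging individual grid points rather than whole planes, can be sketched with a simple 1D smoothness sensor. The criterion below is illustrative only; it is not the detection criterion of Adams (1994):

```python
import numpy as np

def detect_points(u, eps=0.1):
    """Point-local discontinuity detector: flag only those grid points
    whose normalized second difference exceeds a threshold eps, instead
    of flagging an entire plane when any single point triggers.
    Near a smooth profile the ratio scales like h^2 and stays tiny;
    across a jump it is O(1)."""
    d2 = np.abs(u[2:] - 2 * u[1:-1] + u[:-2])
    scale = np.abs(u[2:]) + 2 * np.abs(u[1:-1]) + np.abs(u[:-2]) + 1e-12
    flags = np.zeros(u.shape, dtype=bool)
    flags[1:-1] = d2 / scale > eps
    return flags
```

In a compact-ENO coupling, the high-order compact scheme would then be replaced by the shock-capturing ENO scheme only at (and around) the flagged points, keeping the ENO footprint small relative to the total grid.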

  13. Coupled hydrodynamic and ecological simulation for prognosticating land reclamation impacts in river estuaries

    NASA Astrophysics Data System (ADS)

    Xu, Yan; Cai, Yanpeng; Sun, Tao; Yang, Zhifeng; Hao, Yan

    2018-03-01

    A multiphase finite-element hydrodynamic model and a phytoplankton simulation approach are coupled into a general modeling framework that can help quantify the impacts of land reclamation. Compared with previous studies, it has the following improvements: (a) reflection of physical currents and suitable growth areas for phytoplankton, and (b) advancement of a simulation method to describe the suitability of phytoplankton habitat in sea water. As a result, water velocity is 16.7% higher than in the original state without human disturbance. The related filling engineering has shortened sediment settling paths, weakened the vortex flow and reduced the capacity for material exchange. Additionally, coastal reclamation leads to a decrease of the growth suitability index (GSI), reducing the stability of phytoplankton species by approximately 4-12%. The proposed GSI can be applied to the management of coastal reclamation to minimize ecological impacts and will be helpful for identifying suitable phytoplankton growth areas.

  14. GPU Implementation of High Rayleigh Number Three-Dimensional Mantle Convection

    NASA Astrophysics Data System (ADS)

    Sanchez, D. A.; Yuen, D. A.; Wright, G. B.; Barnett, G. A.

    2010-12-01

    Although we have entered the age of petascale computing, many factors still prevent high-performance computing (HPC) from reaching all suitable scientific disciplines. For this reason and others, the application of GPUs to HPC is gaining traction in the scientific world. With its low price point, high performance potential, and competitive scalability, the GPU has been an option well worth considering for the last few years. Moreover, with the advent of NVIDIA's Fermi architecture, which brings ECC memory, better double-precision performance, and more RAM to the GPU, there is a strong message of corporate support for GPUs in HPC. However, many doubts linger concerning the practicality of using GPUs for scientific computing. In particular, the GPU has a reputation for being difficult to program and suitable for only a small subset of problems. Although inroads have been made in addressing these concerns, for many scientists the GPU still has hurdles to clear before becoming an acceptable choice. We explore the applicability of GPUs to geophysics by implementing a three-dimensional, second-order finite-difference model of Rayleigh-Benard thermal convection on an NVIDIA GPU using C for CUDA. Our code reaches sufficient resolution, on the order of 500x500x250 evenly-spaced finite-difference gridpoints, on a single GPU. We make extensive use of highly optimized CUBLAS routines, allowing us to achieve performance on the order of 0.1 µs per timestep per gridpoint at this resolution. This performance has allowed us to study simulations at high Rayleigh numbers, on the order of 2x10^7, on a single GPU.

  15. Simulations of Congenital Septal Defect Closure and Reactivity Testing in Patient-Specific Models of the Pediatric Pulmonary Vasculature: A 3D Numerical Study With Fluid-Structure Interaction

    PubMed Central

    Hunter, Kendall S.; Lanning, Craig J.; Chen, Shiuh-Yung J.; Zhang, Yanhang; Garg, Ruchira; Ivy, D. Dunbar; Shandas, Robin

    2014-01-01

    Clinical imaging methods are highly effective in the diagnosis of vascular pathologies, but they do not currently provide enough detail to shed light on the cause or progression of such diseases, and would be hard pressed to foresee the outcome of surgical interventions. Greater detail of and prediction capabilities for vascular hemodynamics and arterial mechanics are obtained here through the coupling of clinical imaging methods with computational techniques. Three-dimensional, patient-specific geometric reconstructions of the pediatric proximal pulmonary vasculature were obtained from x-ray angiogram images and meshed for use with commercial computational software. Two such models from hypertensive patients, one with multiple septal defects, the other who underwent vascular reactivity testing, were each completed with two sets of suitable fluid and structural initial and boundary conditions and used to obtain detailed transient simulations of artery wall motion and hemodynamics in both clinically measured and predicted configurations. The simulation of septal defect closure, in which input flow and proximal vascular stiffness were decreased, exhibited substantial decreases in proximal velocity, wall shear stress (WSS), and pressure in the post-op state. The simulation of vascular reactivity, in which distal vascular resistance and proximal vascular stiffness were decreased, displayed negligible changes in velocity and WSS but a significant drop in proximal pressure in the reactive state. This new patient-specific technique provides much greater detail regarding the function of the pulmonary circuit than can be obtained with current medical imaging methods alone, and holds promise for enabling surgical planning. PMID:16813447

  16. Experimental validation of numerical simulations on a cerebral aneurysm phantom model

    PubMed Central

    Seshadhri, Santhosh; Janiga, Gábor; Skalej, Martin; Thévenin, Dominique

    2012-01-01

    The treatment of cerebral aneurysms, found in roughly 5% of the population and associated, in case of rupture, with a high mortality rate, is a major challenge for neurosurgery and neuroradiology due to the complexity of the intervention and the resulting high hazard ratio. Improvements are possible but require a better understanding of the associated unsteady blood flow patterns in complex 3D geometries. It would be very useful to carry out such studies using suitable numerical models, provided it is proven that they reproduce the real conditions accurately enough. This validation step is classically based on comparisons with measured data. Since in vivo measurements are extremely difficult and therefore of limited accuracy, complementary model-based investigations considering realistic configurations are essential. In the present study, simulations based on computational fluid dynamics (CFD) have been compared with in situ laser-Doppler velocimetry (LDV) measurements in a phantom model of a cerebral aneurysm. The employed 1:1 model is made from transparent silicone. A liquid mixture composed of water, glycerin, xanthan gum and sodium chloride has been specifically adapted for the present investigation: it shows physical flow properties similar to real blood and has a refraction index perfectly matched to that of the silicone model, allowing accurate optical measurements of the flow velocity. For both experiments and simulations, complex pulsatile flow waveforms and flow rates were accounted for. This finally allows a direct, quantitative comparison between measurements and simulations, so that the accuracy of the employed computational model can be checked. PMID:24265876

  17. Using numeric simulation in an online e-learning environment to teach functional physiological contexts.

    PubMed

    Christ, Andreas; Thews, Oliver

    2016-04-01

    Mathematical models are suitable for simulating complex biological processes by a set of non-linear differential equations. These simulation models can be used as an e-learning tool in medical education. However, in many cases these mathematical systems have to be treated numerically, which is computationally intensive. The aim of the study was to develop a system for numerical simulation to be used in an online e-learning environment. In the software system, the simulation runs on the server as a CGI application. The user (student) selects the boundary conditions for the simulation (e.g., properties of a simulated patient) in the browser. With these parameters the simulation on the server is started and the simulation result is transferred back to the browser. With this system, two examples of e-learning units were realized. The first uses a multi-compartment model of the glucose-insulin control loop to simulate the plasma glucose level after a simulated meal or during diabetes (including treatment by subcutaneous insulin application). The second simulates the ion transport leading to the resting and action potentials in nerves. The student can vary parameters systematically to explore the biological behavior of the system. The described system is able to simulate complex biological processes and offers the possibility to use these models in an online e-learning environment. As far as the underlying principles can be described mathematically, this type of system can be applied to a broad spectrum of biomedical or natural-scientific topics. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
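
The server-side simulation described above amounts to numerically integrating a small ODE system per request. A toy glucose-insulin feedback loop can be stepped with forward Euler; the equations and all parameter values below are illustrative assumptions, not the multi-compartment model used in the actual e-learning unit:

```python
import numpy as np

def simulate_meal(g_basal=5.0, meal_rate=0.5, meal_min=30.0,
                  p1=0.03, p2=0.01, n=0.1, q=0.01,
                  dt=0.1, t_end=600.0):
    """Toy glucose-insulin feedback loop, integrated with forward Euler:
        dG/dt = -p1*(G - Gb) - p2*I*G + meal(t)
        dI/dt = -n*I + q*max(G - Gb, 0)
    A meal adds glucose at meal_rate for the first meal_min minutes.
    Returns arrays of time (min) and plasma glucose."""
    steps = int(t_end / dt)
    g, ins = g_basal, 0.0
    ts, gs = [], []
    for k in range(steps):
        t = k * dt
        meal = meal_rate if t < meal_min else 0.0
        dg = -p1 * (g - g_basal) - p2 * ins * g + meal
        di = -n * ins + q * max(g - g_basal, 0.0)
        g += dt * dg       # explicit Euler update of glucose
        ins += dt * di     # ...and of insulin
        ts.append(t)
        gs.append(g)
    return np.array(ts), np.array(gs)
```

A student-facing front end would expose parameters such as the insulin sensitivity p2 as form fields, re-run the integration on the server, and plot the returned glucose trace: the simulated level rises during the meal and relaxes back toward the basal value through the insulin feedback.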

  18. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study.

    PubMed

    Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo

    2015-07-01

    Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
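
The event-driven versus time-driven trade-off can be seen on a single leaky integrator membrane trace (a toy model, not the cerebellar network of the study): a time-driven loop pays for every step dt, while an event-driven update jumps analytically from input spike to input spike. Both give the same potentials at the event times:

```python
import numpy as np

def v_time_driven(spikes, w=1.5, tau=20.0, dt=0.01, t_end=100.0):
    # advance a leaky membrane on a fixed grid; add the synaptic weight
    # whenever an input spike falls within the current step
    v, t, out, i = 0.0, 0.0, [], 0
    spikes = sorted(spikes)
    for _ in range(int(round(t_end / dt))):
        v *= np.exp(-dt / tau)          # leak over one step
        t += dt
        while i < len(spikes) and spikes[i] <= t:
            v += w
            out.append((spikes[i], v))
            i += 1
    return out

def v_event_driven(spikes, w=1.5, tau=20.0):
    # jump directly from event to event, decaying analytically
    v, t_prev, out = 0.0, 0.0, []
    for t in sorted(spikes):
        v = v * np.exp(-(t - t_prev) / tau) + w
        out.append((t, v))
        t_prev = t
    return out
```

With sparse input the event-driven version does four updates instead of ten thousand, which is exactly why the simulator described above routes low-activity subnetworks to event-driven CPU code and dense, high-activity layers to time-driven GPU kernels.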

  19. Application of the polynomial chaos expansion to approximate the homogenised response of the intervertebral disc.

    PubMed

    Karajan, N; Otto, D; Oladyshkin, S; Ehlers, W

    2014-10-01

    A possibility to simulate the mechanical behaviour of the human spine is to model the stiffer structures, i.e. the vertebrae, as a discrete multi-body system (MBS), whereas the softer connecting tissue, i.e. the intervertebral discs (IVD), is represented in a continuum-mechanical sense using the finite element method (FEM). From a modelling point of view, the mechanical behaviour of the IVD can be included in the MBS in two different ways: it can either be computed online in a so-called co-simulation of the MBS and the FEM, or offline in a pre-computation step, where a representation of the discrete mechanical response of the IVD is defined in terms of the applied degrees of freedom (DOF) of the MBS. For both methods, an appropriate homogenisation step is needed to obtain the discrete mechanical response of the IVD, i.e. the resulting forces and moments. The goal of this paper was to present an efficient method to approximate the mechanical response of an IVD in an offline computation. In a previous paper (Karajan et al. in Biomech Model Mechanobiol 12(3):453-466, 2012), it was proven that a cubic polynomial for the homogenised forces and moments of the FE model is a suitable choice to approximate the purely elastic response as a coupled function of the DOF of the MBS. In this contribution, the polynomial chaos expansion (PCE) is applied to generate these high-dimensional polynomials. The main challenge is then to determine suitable deformation states of the IVD for pre-computation, such that the polynomials can be constructed with high accuracy and low numerical cost. For the sake of a simple verification, the coupling method and the PCE are applied to the same simplified motion segment of the spine as in the previous paper, i.e. two cylindrical vertebrae and a cylindrical IVD in between. As a next step, the loading rates are included as variables in the polynomial response functions to account for a more realistic response of the overall viscoelastic intervertebral disc. Here, an additive split into elastic and inelastic contributions to the homogenised forces and moments is applied.
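
The core idea, replacing the expensive FE computation by a cubic polynomial in the MBS degrees of freedom, can be sketched with an ordinary least-squares fit over sampled deformation states. The PCE machinery itself (orthogonal polynomial bases, efficient sampling of deformation states) is beyond this illustration:

```python
import numpy as np
from itertools import combinations_with_replacement

def cubic_features(Q):
    # all monomials of total degree <= 3 in the columns (DOFs) of Q
    n, d = Q.shape
    cols = [np.ones(n)]
    for deg in (1, 2, 3):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(Q[:, list(idx)], axis=1))
    return np.column_stack(cols)

def fit_response(Q, F):
    # least-squares cubic response surface: F ≈ cubic_features(Q) @ c,
    # where each row of Q is one pre-computed deformation state and F
    # holds the corresponding homogenised force (or moment) component
    c, *_ = np.linalg.lstsq(cubic_features(Q), F, rcond=None)
    return c

def eval_response(c, q):
    # cheap surrogate evaluation inside the MBS time loop
    return cubic_features(np.atleast_2d(q)) @ c
```

Once fitted, `eval_response` replaces the FE solve at every MBS time step; the challenge the abstract points to is choosing the sample states in Q so the fit stays accurate with few expensive pre-computations.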

  20. Tailoring non-equilibrium atmospheric pressure plasmas for healthcare technologies

    NASA Astrophysics Data System (ADS)

    Gans, Timo

    2012-10-01

    Non-equilibrium plasmas operated at ambient atmospheric pressure are very efficient sources for energy transport through reactive neutral particles (radicals and metastables), charged particles (ions and electrons), UV radiation, and electro-magnetic fields. This includes the unique opportunity to deliver short-lived, highly reactive species such as atomic oxygen and atomic nitrogen. Reactive oxygen and nitrogen species can initiate a wide range of reactions in biochemical systems, both therapeutic and toxic. The toxicological implications are not clear, e.g. potential risks through DNA damage, and it is anticipated that interactions with biological systems will be governed by synergies between two or more species. Suitably optimized plasma sources are unlikely to be found through empirical investigation alone. It is crucial to quantify the power dissipation and energy transport mechanisms through the different interfaces from the plasma regime to ambient air and towards the liquid interface, together with the associated impact on the biological system through a new regime of liquid chemistry initiated by the synergy of delivering multiple energy-carrying species. The major obstacle to quantifying energy transport and controlling power dissipation has been the severe lack of suitable plasma sources and diagnostic techniques. Diagnostics and simulations of this plasma regime are very challenging: the highly pronounced collision-dominated plasma dynamics at very small dimensions require extraordinarily high resolution, simultaneously in space (microns) and time (picoseconds). Numerical simulations are equally challenging due to the inherent multi-scale character, with very rapid electron collisions at one extreme and the transport of chemically stable species characterizing completely different domains. This presentation will discuss our recent progress in actively combining advanced optical diagnostics and multi-scale computer simulations.

  1. Evaluation of the new EMAC-SWIFT chemistry climate model

    NASA Astrophysics Data System (ADS)

    Scheffler, Janice; Langematz, Ulrike; Wohltmann, Ingo; Rex, Markus

    2016-04-01

    It is well known that the representation of atmospheric ozone chemistry in weather and climate models is essential for a realistic simulation of the atmospheric state. Atmospheric ozone chemistry is usually included in climate simulations by prescribing a climatological ozone field, by including a fast linear ozone scheme in the model, or by using a climate model with complex interactive chemistry. While prescribed climatological ozone fields are often not aligned with the modelled dynamics, a linear ozone scheme may not be applicable over a wide range of climatological conditions. Although interactive chemistry provides a realistic representation of atmospheric chemistry, such model simulations are computationally very expensive and hence not suitable for ensemble simulations or simulations with multiple climate change scenarios. A new approach to representing atmospheric chemistry in climate models, which can cope with non-linearities in ozone chemistry and is applicable to a wide range of climatic states, is the Semi-empirical Weighted Iterative Fit Technique (SWIFT), which is driven by reanalysis data and has been validated against observational satellite data and runs of a full chemistry and transport model. SWIFT has recently been implemented into the ECHAM/MESSy (EMAC) chemistry climate model, which uses a modular approach to climate modelling in which individual model components can be switched on and off. Here, we show first results of EMAC-SWIFT simulations and validate them against EMAC simulations using the complex interactive chemistry scheme MECCA, and against observations.

  2. Real-time simulation of the nonlinear visco-elastic deformations of soft tissues.

    PubMed

    Basafa, Ehsan; Farahmand, Farzam

    2011-05-01

    Mass-spring-damper (MSD) models are often used for real-time surgery simulation due to their fast response and fairly realistic deformation replication. An improved real-time simulation model of soft tissue deformation due to a laparoscopic surgical indenter was developed and tested. The mechanical realism of conventional MSD models was improved using nonlinear springs and nodal dampers, while their high computational efficiency was maintained using an adapted implicit integration algorithm. New practical algorithms for model parameter tuning, collision detection, and simulation were incorporated. The model was able to replicate complex biological soft tissue mechanical properties under large deformations, i.e., the nonlinear and viscoelastic behaviors. The simulated response of the model, after tuning its parameters to experimental data from a deer liver sample, closely tracked the reference data with high correlation and maximum relative differences of less than 5% and 10% for the tuning and testing data sets, respectively. Finally, implementation of the proposed model and algorithms in a graphical environment resulted in a real-time simulation with update rates of 150 Hz for interactive deformation and haptic manipulation, and 30 Hz for visual rendering. The proposed real-time simulation model of soft tissue deformation due to a laparoscopic surgical indenter was efficient, realistic, and accurate in ex vivo testing. This model is a suitable candidate for in vivo testing during laparoscopic surgery.
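The MSD idea above can be sketched in one dimension. This is a generic illustration with invented constants (m, k1, k3, c, f_ext), not the authors' tuned liver model, and it uses symplectic (semi-implicit) Euler rather than their adapted implicit integrator:

```python
def simulate_msd(steps=2000, dt=1e-3, m=0.05, k1=40.0, k3=5e4, c=0.8, f_ext=1.0):
    """Semi-implicit Euler for one mass with a nonlinear (cubic-stiffening)
    spring and a nodal damper: m*a = f_ext - k1*x - k3*x**3 - c*v."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = (f_ext - k1 * x - k3 * x ** 3 - c * v) / m
        v += dt * a          # update velocity first ...
        x += dt * v          # ... then position (symplectic Euler)
    return x

# the damped mass settles near static equilibrium: f_ext = k1*x + k3*x**3
x_eq = simulate_msd()
```

The cubic term is what lets a single spring reproduce the stiffening seen in soft tissue under large indentation; a real simulator would assemble many such elements into a mesh.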

  3. Reward-based learning under hardware constraints-using a RISC processor embedded in a neuromorphic substrate.

    PubMed

    Friedmann, Simon; Frémaux, Nicolas; Schemmel, Johannes; Gerstner, Wulfram; Meier, Karlheinz

    2013-01-01

    In this study, we propose and analyze in simulations a new, highly flexible method of implementing synaptic plasticity in a wafer-scale, accelerated neuromorphic hardware system. The study focuses on globally modulated STDP as a special use-case of this method. Flexibility is achieved by embedding a general-purpose processor dedicated to plasticity into the wafer. To evaluate the suitability of the proposed system, we use a reward-modulated STDP rule in a spike train learning task. A single layer of neurons is trained to fire at specific points in time with only the reward as feedback. This model is simulated to measure its performance, i.e., the increase in received reward after learning. Using this performance as a baseline, we then simulate the model with various constraints imposed by the proposed implementation and compare the performance. The simulated constraints include discretized synaptic weights, a restricted interface between analog synapses and embedded processor, and mismatch of analog circuits. We find that probabilistic updates can increase the performance of low-resolution weights, that a simple interface between analog synapses and processor is sufficient for learning, and that performance is insensitive to mismatch. Further, we consider the communication latency between the wafer and the conventional control computer that simulates the environment. This latency increases the delay with which the reward is delivered to the embedded processor. Because the analog synapses operate in continuous time, this delay can cause the updates to deviate from the undelayed case. We find that for highly accelerated systems the latency has to be kept to a minimum. This study demonstrates the suitability of the proposed implementation to emulate the selected reward-modulated STDP learning rule. It is therefore an ideal candidate for implementation in an upgraded version of the wafer-scale system developed within the BrainScaleS project.
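The finding that probabilistic updates help low-resolution weights can be illustrated with generic stochastic rounding, which is unbiased in expectation. This is a sketch of the idea only, not the BrainScaleS hardware mechanism, and the 16-level grid is an assumption:

```python
import random

def stochastic_round(w_frac, levels=16):
    """Probabilistically round a continuous weight in [0, 1] onto a grid of
    `levels` discrete values; the rounded value is unbiased in expectation."""
    step = 1.0 / (levels - 1)
    n = w_frac / step
    lo = int(n)
    p_up = n - lo                      # probability of rounding up
    q = lo + (1 if random.random() < p_up else 0)
    return min(q, levels - 1) * step

# averaged over many updates, the discretized weight matches the target
random.seed(0)
target = 0.473
mean = sum(stochastic_round(target) for _ in range(20000)) / 20000
```

Deterministic rounding to the same grid would lose any update smaller than half a step; the probabilistic variant preserves small updates on average, which is why low-resolution synapses can still learn.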

  4. Reward-based learning under hardware constraints—using a RISC processor embedded in a neuromorphic substrate

    PubMed Central

    Friedmann, Simon; Frémaux, Nicolas; Schemmel, Johannes; Gerstner, Wulfram; Meier, Karlheinz

    2013-01-01

    In this study, we propose and analyze in simulations a new, highly flexible method of implementing synaptic plasticity in a wafer-scale, accelerated neuromorphic hardware system. The study focuses on globally modulated STDP as a special use-case of this method. Flexibility is achieved by embedding a general-purpose processor dedicated to plasticity into the wafer. To evaluate the suitability of the proposed system, we use a reward-modulated STDP rule in a spike train learning task. A single layer of neurons is trained to fire at specific points in time with only the reward as feedback. This model is simulated to measure its performance, i.e., the increase in received reward after learning. Using this performance as a baseline, we then simulate the model with various constraints imposed by the proposed implementation and compare the performance. The simulated constraints include discretized synaptic weights, a restricted interface between analog synapses and embedded processor, and mismatch of analog circuits. We find that probabilistic updates can increase the performance of low-resolution weights, that a simple interface between analog synapses and processor is sufficient for learning, and that performance is insensitive to mismatch. Further, we consider the communication latency between the wafer and the conventional control computer that simulates the environment. This latency increases the delay with which the reward is delivered to the embedded processor. Because the analog synapses operate in continuous time, this delay can cause the updates to deviate from the undelayed case. We find that for highly accelerated systems the latency has to be kept to a minimum. This study demonstrates the suitability of the proposed implementation to emulate the selected reward-modulated STDP learning rule. It is therefore an ideal candidate for implementation in an upgraded version of the wafer-scale system developed within the BrainScaleS project. PMID:24065877

  5. Gradient Theory simulations of pure fluid interfaces using a generalized expression for influence parameters and a Helmholtz energy equation of state for fundamentally consistent two-phase calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahms, Rainer N.

    2014-12-31

    The fidelity of Gradient Theory simulations depends on the accuracy of saturation properties and influence parameters, and requires equations of state (EoS) which exhibit a fundamentally consistent behavior in the two-phase regime. Widely applied multi-parameter EoS, however, are generally invalid inside this region. Hence, they may not be fully suitable for application in concert with Gradient Theory despite their ability to accurately predict saturation properties. The commonly assumed temperature-dependence of pure component influence parameters usually restricts their validity to subcritical temperature regimes. This may distort predictions for general multi-component interfaces, where temperatures often exceed the critical temperature of vapor-phase components; there, the calculation of influence parameters is not well defined. In this paper, one of the first studies is presented in which Gradient Theory is combined with a next-generation Helmholtz energy EoS which facilitates fundamentally consistent calculations over the entire two-phase regime. Illustrated on pentafluoroethane as an example, reference simulations using this method are performed. They demonstrate the significance of such high-accuracy and fundamentally consistent calculations for the computation of interfacial properties. These reference simulations are compared to corresponding results from the cubic PR EoS, widely applied in combination with Gradient Theory, and the mBWR EoS. The analysis reveals that neither of those two methods succeeds in consistently capturing the qualitative distribution of the key thermodynamic properties obtained in Gradient Theory. Furthermore, a generalized expression for the pure component influence parameter is presented. This development is informed by its fundamental definition based on the direct correlation function of the homogeneous fluid and by presented high-fidelity simulations of interfacial density profiles. As a result, the new model preserves the accuracy of previous temperature-dependent expressions, remains well-defined at supercritical temperatures, and is fully suitable for calculations of general multi-component two-phase interfaces.
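For a planar pure-fluid interface with a constant influence parameter c, Gradient Theory gives the surface tension as sigma = integral of sqrt(2*c*dOmega(rho)) between the bulk densities, where dOmega is the grand-potential excess. The sketch below uses a toy double-well dOmega with invented numbers, not the Helmholtz-energy EoS of the paper:

```python
import math

def surface_tension(rho_v, rho_l, c, A, n=10_000):
    """Gradient Theory surface tension: trapezoidal quadrature of
    sqrt(2*c*dOmega(rho)) with a toy double-well excess
    dOmega(rho) = A*(rho - rho_v)**2 * (rho_l - rho)**2."""
    d_omega = lambda r: A * (r - rho_v) ** 2 * (rho_l - r) ** 2
    h = (rho_l - rho_v) / n
    total = 0.0
    for i in range(n + 1):
        r = rho_v + i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid end weights
        total += w * math.sqrt(2.0 * c * d_omega(r))
    return total * h

sigma = surface_tension(rho_v=0.1, rho_l=0.9, c=1e-3, A=4.0)
# for this dOmega the integral is analytic: sqrt(2*c*A)*(rho_l - rho_v)**3 / 6
```

With a real EoS, dOmega would be evaluated from the Helmholtz energy across the two-phase region, which is exactly why the EoS must remain well-behaved there.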

  6. SmartVeh: Secure and Efficient Message Access Control and Authentication for Vehicular Cloud Computing.

    PubMed

    Huang, Qinlong; Yang, Yixian; Shi, Yuxiang

    2018-02-24

    With the growing number of vehicles and popularity of various services in vehicular cloud computing (VCC), message exchanging among vehicles under traffic conditions and in emergency situations is one of the most pressing demands, and has attracted significant attention. However, it is an important challenge to authenticate the legitimate sources of broadcast messages and achieve fine-grained message access control. In this work, we propose SmartVeh, a secure and efficient message access control and authentication scheme in VCC. A hierarchical, attribute-based encryption technique is utilized to achieve fine-grained and flexible message sharing, which ensures that vehicles whose persistent or dynamic attributes satisfy the access policies can access the broadcast message with equipped on-board units (OBUs). Message authentication is enforced by integrating an attribute-based signature, which achieves message authentication and maintains the anonymity of the vehicles. In order to reduce the computations of the OBUs in the vehicles, we outsource the heavy computations of encryption, decryption and signing to a cloud server and road-side units. The theoretical analysis and simulation results reveal that our secure and efficient scheme is suitable for VCC.
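The fine-grained access control can be pictured, in a purely illustrative and non-cryptographic way, as threshold-policy satisfaction over a vehicle's attribute set; real attribute-based encryption enforces this check cryptographically, and the policy and attribute names below are invented:

```python
def satisfies(policy, attrs):
    """Recursively evaluate a threshold access policy against an attribute
    set. A policy is either an attribute string, or a tuple
    (k, [subpolicies]) meaning 'at least k of these must be satisfied'."""
    if isinstance(policy, str):
        return policy in attrs
    k, children = policy
    return sum(satisfies(c, attrs) for c in children) >= k

# 'emergency' OR ('ambulance' AND 'zone-5')
policy = (1, ["emergency", (2, ["ambulance", "zone-5"])])
ok = satisfies(policy, {"ambulance", "zone-5"})    # True
denied = satisfies(policy, {"taxi", "zone-5"})     # False
```

In the paper's scheme a vehicle's OBU can decrypt a broadcast message only when its (persistent or dynamic) attributes satisfy such a policy, with the heavy cryptographic work outsourced to the cloud and road-side units.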

  7. SmartVeh: Secure and Efficient Message Access Control and Authentication for Vehicular Cloud Computing

    PubMed Central

    Yang, Yixian; Shi, Yuxiang

    2018-01-01

    With the growing number of vehicles and popularity of various services in vehicular cloud computing (VCC), message exchanging among vehicles under traffic conditions and in emergency situations is one of the most pressing demands, and has attracted significant attention. However, it is an important challenge to authenticate the legitimate sources of broadcast messages and achieve fine-grained message access control. In this work, we propose SmartVeh, a secure and efficient message access control and authentication scheme in VCC. A hierarchical, attribute-based encryption technique is utilized to achieve fine-grained and flexible message sharing, which ensures that vehicles whose persistent or dynamic attributes satisfy the access policies can access the broadcast message with equipped on-board units (OBUs). Message authentication is enforced by integrating an attribute-based signature, which achieves message authentication and maintains the anonymity of the vehicles. In order to reduce the computations of the OBUs in the vehicles, we outsource the heavy computations of encryption, decryption and signing to a cloud server and road-side units. The theoretical analysis and simulation results reveal that our secure and efficient scheme is suitable for VCC. PMID:29495269

  8. IETI – Isogeometric Tearing and Interconnecting

    PubMed Central

    Kleiss, Stefan K.; Pechstein, Clemens; Jüttler, Bert; Tomar, Satyendra

    2012-01-01

    Finite Element Tearing and Interconnecting (FETI) methods are a powerful approach to designing solvers for large-scale problems in computational mechanics. The numerical simulation problem is subdivided into a number of independent sub-problems, which are then coupled in appropriate ways. NURBS- (Non-Uniform Rational B-spline) based isogeometric analysis (IGA) applied to complex geometries requires representing the computational domain as a collection of several NURBS geometries. Since there is a natural decomposition of the computational domain into several subdomains, NURBS-based IGA is particularly well suited for using FETI methods. This paper proposes the new IsogEometric Tearing and Interconnecting (IETI) method, which combines the advanced solver design of FETI with the exact geometry representation of IGA. We describe the IETI framework for two classes of simple model problems (Poisson and linearized elasticity) and discuss the coupling of the subdomains along interfaces (both for matching interfaces and for interfaces with T-joints, i.e. hanging nodes). Special attention is paid to the construction of a suitable preconditioner for the iterative linear solver used for the interface problem. We report several computational experiments to demonstrate the performance of the proposed IETI method. PMID:24511167
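The subdomain coupling exploited by FETI-type methods can be illustrated on a toy problem. The sketch below statically condenses two 1D Poisson subdomains onto a single interface unknown (a primal Schur complement, a deliberate simplification of the dual FETI formulation; sizes and the dense solver are illustrative):

```python
def solve(A, b):
    """Tiny dense Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def tridiag(n, h):
    """FD stiffness matrix for -u'' on n interior nodes, spacing h."""
    return [[(2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0) / h ** 2
             for j in range(n)] for i in range(n)]

# -u'' = 1 on (0,1), u(0)=u(1)=0; 9 interior nodes, interface node at x=0.5,
# two mirror-symmetric subdomains with m=4 interior nodes each
h, m = 0.1, 4
K = tridiag(m, h)
e = [0.0] * (m - 1) + [1.0]        # unit load at the node coupled to the interface
w = solve(K, e)                    # K_AA^{-1} e   (B is the mirror image of A)
v = solve(K, [1.0] * m)            # K_AA^{-1} f_A
s = 2.0 / h ** 2 - 2.0 * (1.0 / h ** 2) ** 2 * w[-1]   # scalar Schur complement
g = 1.0 + 2.0 * (1.0 / h ** 2) * v[-1]                  # condensed right-hand side
u_gamma = g / s
# exact solution u(x) = x(1-x)/2 gives u(0.5) = 0.125
```

Once the interface value is known, each subdomain can be solved completely independently, which is the source of the parallelism in FETI/IETI solvers.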

  9. GPU-based acceleration of computations in nonlinear finite element deformation analysis.

    PubMed

    Mafi, Ramin; Sirouspour, Shahin

    2014-03-01

    The physics of deformation for biological soft tissue is best described by nonlinear continuum mechanics-based models, which can then be discretized by the FEM for a numerical solution. However, the computational complexity of such models has limited their use in applications requiring real-time or fast response. In this work, we propose a graphics processing unit (GPU)-based implementation of the FEM using implicit time integration for dynamic nonlinear deformation analysis. This is the most general formulation of the deformation analysis. It is valid for large deformations and strains and can account for material nonlinearities. The data-parallel nature and the intense arithmetic computations of nonlinear FEM equations make them particularly suitable for implementation on a parallel computing platform such as a GPU. In this work, we present and compare two different designs based on the matrix-free and conventional preconditioned conjugate gradients algorithms for solving the FEM equations arising in deformation analysis. The speedup achieved with the proposed parallel implementations of the algorithms will be instrumental in the development of advanced surgical simulators and medical image registration methods involving soft-tissue deformation. Copyright © 2013 John Wiley & Sons, Ltd.
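A matrix-free solver of the kind compared above applies the operator on the fly instead of assembling and storing a global stiffness matrix. Below is a minimal CPU sketch of the idea; the paper's GPU FEM operator is replaced by a hypothetical 1D Laplacian stencil, and no preconditioner is used:

```python
def cg_matfree(apply_A, b, tol=1e-10, max_iter=500):
    """Conjugate gradients driven only by a matrix-vector product callback,
    as in matrix-free FEM solvers (the matrix is never formed)."""
    x = [0.0] * len(b)
    r = b[:]                      # residual for x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def apply_lap(v):
    """1D Laplacian stencil applied on the fly (Dirichlet ends)."""
    n = len(v)
    return [2 * v[i] - (v[i - 1] if i else 0) - (v[i + 1] if i < n - 1 else 0)
            for i in range(n)]

u = cg_matfree(apply_lap, [1.0] * 20)
```

On a GPU the stencil application (or per-element FEM operator) is exactly the data-parallel kernel; avoiding global assembly trades memory traffic for recomputation, which often wins on graphics hardware.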

  10. Analysis and Modeling of Realistic Compound Channels in Transparent Relay Transmissions

    PubMed Central

    Kanjirathumkal, Cibile K.; Mohammed, Sameer S.

    2014-01-01

    Analytical approaches for the characterisation of the compound channels in transparent multihop relay transmissions over independent fading channels are considered in this paper. Compound channels with homogeneous links are considered first. Using Mellin transform technique, exact expressions are derived for the moments of cascaded Weibull distributions. Subsequently, two performance metrics, namely, coefficient of variation and amount of fade, are derived using the computed moments. These metrics quantify the possible variations in the channel gain and signal to noise ratio from their respective average values and can be used to characterise the achievable receiver performance. This approach is suitable for analysing more realistic compound channel models for scattering density variations of the environment, experienced in multihop relay transmissions. The performance metrics for such heterogeneous compound channels having distinct distribution in each hop are computed and compared with those having identical constituent component distributions. The moments and the coefficient of variation computed are then used to develop computationally efficient estimators for the distribution parameters and the optimal hop count. The metrics and estimators proposed are complemented with numerical and simulation results to demonstrate the impact of the accuracy of the approaches. PMID:24701175
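For independent Weibull factors, the Mellin-transform route to the cascaded-channel moments reduces to multiplying per-hop moments, E[X^n] = scale**n * Gamma(1 + n/shape), from which the coefficient of variation follows. A sketch with invented two-hop parameters, cross-checked by Monte Carlo:

```python
import math
import random

def cascaded_weibull_cv(params):
    """Coefficient of variation of a product of independent Weibull RVs.
    Per-hop moments multiply: E[X^n] = scale**n * Gamma(1 + n/shape)."""
    m1 = m2 = 1.0
    for scale, shape in params:
        m1 *= scale * math.gamma(1.0 + 1.0 / shape)
        m2 *= scale ** 2 * math.gamma(1.0 + 2.0 / shape)
    return math.sqrt(m2 / m1 ** 2 - 1.0)

params = [(1.0, 2.0), (1.5, 3.0)]          # heterogeneous two-hop cascade
cv_exact = cascaded_weibull_cv(params)

# Monte Carlo cross-check of the analytical moments
random.seed(1)
N = 200_000
prod = [math.prod(random.weibullvariate(s, k) for s, k in params)
        for _ in range(N)]
mu = sum(prod) / N
cv_mc = math.sqrt(sum((p - mu) ** 2 for p in prod) / N) / mu
```

The same multiplied moments feed the amount-of-fade metric and the moment-based parameter estimators discussed in the paper.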

  11. Combining wet and dry research: experience with model development for cardiac mechano-electric structure-function studies

    PubMed Central

    Quinn, T. Alexander; Kohl, Peter

    2013-01-01

    Since the development of the first mathematical cardiac cell model 50 years ago, computational modelling has become an increasingly powerful tool for the analysis of data and for the integration of information related to complex cardiac behaviour. Current models build on decades of iteration between experiment and theory, representing a collective understanding of cardiac function. All models, whether computational, experimental, or conceptual, are simplified representations of reality and, like tools in a toolbox, suitable for specific applications. Their range of applicability can be explored (and expanded) by iterative combination of ‘wet’ and ‘dry’ investigation, where experimental or clinical data are used to first build and then validate computational models (allowing integration of previous findings, quantitative assessment of conceptual models, and projection across relevant spatial and temporal scales), while computational simulations are utilized for plausibility assessment, hypotheses-generation, and prediction (thereby defining further experimental research targets). When implemented effectively, this combined wet/dry research approach can support the development of a more complete and cohesive understanding of integrated biological function. This review illustrates the utility of such an approach, based on recent examples of multi-scale studies of cardiac structure and mechano-electric function. PMID:23334215

  12. Simulation of the vibrational chemistry and the infrared signature induced by a Sprite streamer in the mesosphere

    NASA Astrophysics Data System (ADS)

    Romand, F.; Payan, S.; Croize, L.

    2017-12-01

    Since their first observation in 1989, the effect of TLEs on atmospheric composition has become an open and important question. The lack of suitable experimental data is a shortcoming that hampers our understanding of the physics and chemistry induced by these events. HALESIS (High-Altitude Luminous Events Studied by Infrared Spectro-imagery) is a future experiment dedicated to the measurement of the atmospheric perturbation induced by a TLE in the minutes following its occurrence, from a stratospheric balloon flying at an altitude of 25 km to 40 km. This work aims to quantify the local chemical impact of sprites in the stratosphere and mesosphere. In this paper, we present the development of a tool which simulates (i) the impact of a sprite on the vibrational chemistry, (ii) the resulting infrared signature, and (iii) the propagation of this signature through the atmosphere to an observer. First, the non-local thermodynamic equilibrium populations of a background atmosphere were computed using the SAMM2 code. The initial thermodynamic and chemical description of the atmosphere comes from the Whole Atmosphere Community Climate Model (WACCM). Then a perturbation was applied to simulate a sprite. Chemistry due to TLEs was computed using the Gordillo-Vazquez kinetic model. Rate coefficients that depend on the electron energy distribution function were calculated from collision cross-section data by solving the electron Boltzmann equation (BE). Time evolutions of the species densities and of the vibrational populations in the non-thermal plasma following the sprite discharge were simulated using the computer code ZDPlasKin (S. Pancheshnyi et al.). Finally, the resulting infrared signatures were propagated from the disturbed area through the atmosphere to an instrument placed in a limb line of sight using a line-by-line radiative transfer model. We conclude that sprites could produce a significant infrared signature that lasts a few tens of seconds after the visible flash.

  13. Section 1. Simulation of surface-water integrated flow and transport in two-dimensions: SWIFT2D user's manual

    USGS Publications Warehouse

    Schaffranek, Raymond W.

    2004-01-01

    A numerical model for simulation of surface-water integrated flow and transport in two (horizontal-space) dimensions is documented. The model solves vertically integrated forms of the equations of mass and momentum conservation and solute transport equations for heat, salt, and constituent fluxes. An equation of state for salt balance directly couples solution of the hydrodynamic and transport equations to account for the horizontal density gradient effects of salt concentrations on flow. The model can be used to simulate the hydrodynamics, transport, and water quality of well-mixed bodies of water, such as estuaries, coastal seas, harbors, lakes, rivers, and inland waterways. The finite-difference model can be applied to geographical areas bounded by any combination of closed land or open water boundaries. The simulation program accounts for sources of internal discharges (such as tributary rivers or hydraulic outfalls), tidal flats, islands, dams, and movable flow barriers or sluices. Water-quality computations can treat reactive and (or) conservative constituents simultaneously. Input requirements include bathymetric and topographic data defining land-surface elevations, time-varying water level or flow conditions at open boundaries, and hydraulic coefficients. Optional input includes the geometry of hydraulic barriers and constituent concentrations at open boundaries. Time-dependent water level, flow, and constituent-concentration data are required for model calibration and verification. Model output consists of printed reports and digital files of numerical results in forms suitable for postprocessing by graphical software programs and (or) scientific visualization packages. The model is compatible with most mainframe, workstation, mini- and micro-computer operating systems and FORTRAN compilers. 
This report defines the mathematical formulation and computational features of the model, explains the solution technique and related model constraints, describes the model framework, documents the type and format of inputs required, and identifies the type and format of output available.

  14. Finite Element Methods and Multiphase Continuum Theory for Modeling 3D Air-Water-Sediment Interactions

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Miller, C. T.; Dimakopoulos, A.; Farthing, M.

    2016-12-01

    The last decade has seen an expansion in the development and application of 3D free surface flow models in the context of environmental simulation. These models are based primarily on the combination of effective algorithms, namely level set and volume-of-fluid methods, with high-performance, parallel computing. These models are still computationally expensive and suitable primarily when high-fidelity modeling near structures is required. While most research on algorithms and implementations has been conducted in the context of finite volume methods, recent work has extended a class of level set schemes to finite element methods on unstructured meshes. This work considers models of three-phase flow in domains containing air, water, and granular phases. These multi-phase continuum mechanical formulations show great promise for applications such as analysis of coastal and riverine structures. This work will consider formulations proposed in the literature over the last decade as well as new formulations derived using the thermodynamically constrained averaging theory, an approach to deriving and closing macroscale continuum models for multi-phase and multi-component processes. The target applications require the ability to simulate wave breaking and structure over-topping, particularly the fully three-dimensional, non-hydrostatic flows that drive these phenomena. A conservative level set scheme suitable for higher-order finite element methods is used to describe the air/water phase interaction. The interaction of these air/water flows with granular materials, such as sand and rubble, must also be modeled. The range of granular media dynamics targeted includes flow and wave transmission through the solid media, as well as erosion and deposition of granular media and moving-bed dynamics.
For the granular phase we consider volume- and time-averaged continuum mechanical formulations that are discretized with the finite element method and coupled to the underlying air/water flow via operator splitting (fractional step) schemes. Particular attention will be given to verification and validation of the numerical model and important qualitative features of the numerical methods including phase conservation, wave energy dissipation, and computational efficiency in regimes of interest.

  15. Simulation of Turbulent Combustion Fields of Shock-Dispersed Aluminum Using the AMR Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhl, A L; Bell, J B; Beckner, V E

    2006-11-02

    We present a model for simulating experiments of combustion in Shock-Dispersed-Fuel (SDF) explosions. The SDF charge consisted of a 0.5-g spherical PETN booster, surrounded by 1 g of fuel powder (flake aluminum). Detonation of the booster charge creates a high-temperature, high-pressure source (PETN detonation product gases) that both disperses the fuel and heats it. Combustion ensues when the fuel mixes with air. The gas phase is governed by the gas-dynamic conservation laws, while the particle phase obeys the continuum mechanics laws for heterogeneous media. The two phases exchange mass, momentum and energy according to inter-phase interaction terms. The kinetics model used an empirical particle burn relation. The thermodynamic model considers the air, fuel and booster products to be of frozen composition, while the Al combustion products are assumed to be in equilibrium. The thermodynamic states were calculated by the Cheetah code; the resulting state points were fit with analytic functions suitable for numerical simulations. Numerical simulations of combustion of an aluminum SDF charge in a 6.4-liter chamber were performed. Computed pressure histories agree with measurements.

  16. Comparison of one-dimensional probabilistic finite element method with direct numerical simulation of dynamically loaded heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Robbins, Joshua; Voth, Thomas

    2011-06-01

    Material response to dynamic loading is often dominated by microstructure such as grain topology, porosity, inclusions, and defects; however, many models rely on assumptions of homogeneity. We use the probabilistic finite element method (WK Liu, IJNME, 1986) to introduce local uncertainty to account for material heterogeneity. The PFEM uses statistical information about the local material response (i.e., its expectation, coefficient of variation, and autocorrelation) drawn from knowledge of the microstructure, single crystal behavior, and direct numerical simulation (DNS) to determine the expectation and covariance of the system response (velocity, strain, stress, etc). This approach is compared to resolved grain-scale simulations of the equivalent system. The microstructures used for the DNS are produced using Monte Carlo simulations of grain growth, and a sufficient number of realizations are computed to ensure a meaningful comparison. Finally, comments are made regarding the suitability of one-dimensional PFEM for modeling material heterogeneity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  17. Simulation-Based Joint Estimation of Body Deformation and Elasticity Parameters for Medical Image Analysis

    PubMed Central

    Foskey, Mark; Niethammer, Marc; Krajcevski, Pavel; Lin, Ming C.

    2014-01-01

    Estimation of tissue stiffness is an important means of noninvasive cancer detection. Existing elasticity reconstruction methods usually depend on a dense displacement field (inferred from ultrasound or MR images) and known external forces. Many imaging modalities, however, cannot provide details within an organ and therefore cannot provide such a displacement field. Furthermore, force exertion and measurement can be difficult for some internal organs, making boundary forces another missing parameter. We propose a general method for estimating elasticity and boundary forces automatically using an iterative optimization framework, given the desired (target) output surface. During the optimization, the input model is deformed by the simulator, and an objective function based on the distance between the deformed surface and the target surface is minimized numerically. The optimization framework does not depend on a particular simulation method and is therefore suitable for different physical models. We show a positive correlation between clinical prostate cancer stage (a clinical measure of severity) and the recovered elasticity of the organ. Since the surface correspondence is established, our method also provides a non-rigid image registration, where the quality of the deformation fields is guaranteed, as they are computed using a physics-based simulation. PMID:22893381
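The iterative estimation loop can be illustrated in one dimension: a forward "simulator" maps a stiffness to a deformation, and the mismatch with the target surface (here a single displacement) is driven to zero. The bar geometry, loads, and bisection search below are invented stand-ins for the paper's FEM simulator and numerical optimizer:

```python
def forward(E, force=10.0, length=0.2, area=1e-4):
    """Toy 'simulator': tip displacement of a linear elastic bar
    under axial load, u = F*L / (E*A)."""
    return force * length / (E * area)

def recover_stiffness(u_target, lo=1e3, hi=1e7, iters=200):
    """Recover Young's modulus by minimizing the displacement mismatch.
    forward() is monotone decreasing in E, so bisection suffices here."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if forward(mid) > u_target:   # too soft: displacement too large
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# generate a 'target' deformation from a known stiffness, then recover it
E_true = 5e4
E_rec = recover_stiffness(forward(E_true))
```

In the paper the same loop runs over a full physics-based deformation, with an objective built from surface distances, so a general-purpose optimizer replaces the bisection.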

  18. Experimental and numerical investigation of strain rate effect on low cycle fatigue behaviour of AA 5754 alloy

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Singh, A.

    2018-04-01

    The present study deals with the evaluation of the low cycle fatigue (LCF) behavior of aluminum alloy 5754 (AA 5754) at different strain rates. This alloy has magnesium (Mg) as its main alloying element (Al-Mg alloy), which makes it suitable for marine and cryogenic applications. The testing procedure and specimen preparation follow the ASTM E606 standard. The tests are performed at 0.5% strain amplitude with three different strain rates, i.e. 0.5×10⁻³ s⁻¹, 1×10⁻³ s⁻¹ and 2×10⁻³ s⁻¹; the test frequency varies accordingly. The experimental results show a significant decrease in fatigue life with increasing strain rate. The LCF behavior of AA 5754 is also simulated at the different strain rates by the finite element method. The Chaboche kinematic hardening cyclic plasticity model is used to simulate the hardening behavior of the material. An axisymmetric finite element model is created to reduce the computational cost of the simulation. The material coefficients used for the Chaboche model are determined from the experimentally obtained stabilized hysteresis loop. The results obtained from the finite element simulation are compared with those obtained through the LCF experiments.
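A one-dimensional sketch of the Chaboche (Armstrong-Frederick) kinematic hardening update helps fix ideas: the backstress X evolves as dX = C*dp*sign(xi) - gamma*X*dp, which produces the shifted, saturating hysteresis loops the model is fitted to. The material constants and the simple linearized return mapping below are illustrative, not the coefficients identified from the AA 5754 data:

```python
def chaboche_cycle(strain_path, E=70e3, sigma_y=120.0, C=30e3, gamma=300.0):
    """Strain-driven 1D plasticity with one Armstrong-Frederick (Chaboche)
    backstress, integrated with a linearized consistency condition.
    Units: MPa for stresses, dimensionless strain."""
    X = eps_p = 0.0
    out = []
    for eps in strain_path:
        sig_tr = E * (eps - eps_p)          # elastic trial stress
        xi = sig_tr - X
        f = abs(xi) - sigma_y               # yield function
        if f > 0.0:
            n = 1.0 if xi > 0.0 else -1.0
            dp = f / (E + C - gamma * X * n)  # plastic multiplier
            eps_p += dp * n
            X += C * dp * n - gamma * X * dp  # Armstrong-Frederick recall term
        out.append(E * (eps - eps_p))
    return out

# one full strain cycle at 0.5% amplitude with small increments
path = [0.005 * i / 200 for i in range(201)]
path += [0.005 - 0.01 * i / 400 for i in range(1, 401)]
stress = chaboche_cycle(path)
```

The recall term gamma*X*dp bounds the backstress at C/gamma, so the peak stress saturates below sigma_y + C/gamma, which is the feature calibrated against the stabilized hysteresis loop.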

  19. Model Analysis of the Factors Regulating Trends and Variability of Methane, Carbon Monoxide and OH: 1. Model Validation

    NASA Technical Reports Server (NTRS)

    Elshorbany, Y. F.; Strode, S.; Wang, J.; Duncan, B.

    2014-01-01

    Methane (CH4) is the second most important anthropogenic greenhouse gas (GHG); its 100-year global warming potential (GWP) is 25 times that of carbon dioxide. The 100-yr integrated GWP of CH4 is sensitive to changes in OH levels. Methane's atmospheric growth rate was estimated to be more than 10 ppb yr⁻¹ in 1998 but less than zero in 2001, 2004 and 2005 (Kirschke et al., 2013). Since 2006, CH4 has been increasing again. This phenomenon is not yet well understood. Oxidation by OH is the main loss process for CH4, thus affecting the oxidizing capacity of the atmosphere and contributing to the global ozone background. Current models typically use an annual cycle of offline OH fields to simulate CH4, with the implemented OH fields tuned so that simulated CH4 growth rates match those measured. For future and climate simulations, this OH-tuning technique may not be suitable. In addition, running full-chemistry, multi-decadal CH4 simulations is a serious challenge and, due to the computational intensity, currently almost impossible.
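The role of a prescribed (offline) OH field can be caricatured with a one-box budget, dB/dt = E - B/tau, where the OH level sets the lifetime tau: scaling OH up shortens tau and lowers the equilibrium burden. All numbers below are illustrative round values, not the model's:

```python
def ch4_box_model(years=80.0, dt_yr=0.01, emis=560.0, oh_scale=1.0):
    """Single-box CH4 budget with a prescribed OH sink.
    dB/dt = E - B/tau, with tau = 9.1 yr / oh_scale.
    Units: burden B in Tg, emissions E in Tg/yr (illustrative values)."""
    tau = 9.1 / oh_scale        # more OH -> shorter CH4 lifetime
    B = 4800.0                  # illustrative initial burden (Tg)
    for _ in range(int(years / dt_yr)):
        B += dt_yr * (emis - B / tau)
    return B

# steady state is E*tau; a 10% OH increase lowers the equilibrium burden
B_base = ch4_box_model()
B_high_oh = ch4_box_model(oh_scale=1.1)
```

This is why tuning OH can force the simulated growth rate to match observations for the historical period while providing no guarantee for future or altered climates, where both E and the effective tau change.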

  20. Simulating Sand Behavior through Terrain Subdivision and Particle Refinement

    NASA Astrophysics Data System (ADS)

    Clothier, M.

    2013-12-01

    Advances in computer graphics, GPUs, and parallel processing hardware have provided researchers with new methods to visualize scientific data. These advances have also spurred new research opportunities between computer graphics and other disciplines, such as the Earth sciences. Through collaboration, Earth and planetary scientists have benefited by using these advances in hardware technology to process large amounts of data for visualization and analysis. At Oregon State University, we are collaborating with the Oregon Space Grant and IGERT Ecosystem Informatics programs to investigate techniques for simulating the behavior of sand. We have also been collaborating with the Jet Propulsion Laboratory's DARTS Lab to exchange ideas on our research. The DARTS Lab specializes in the simulation of planetary vehicles, such as the Mars rovers; one aspect of their work is testing these vehicles in a virtual "sand box" to evaluate their performance in different environments. Our research builds upon this idea to create a sand simulation framework that allows for more complex and diverse environments. As a basis for our framework, we have focused on planetary environments, such as the harsh, sandy regions on Mars. To evaluate the framework, we have used simulated planetary vehicles, such as a rover, to gain insight into the performance and interaction between the surface sand and the vehicle. Simulating the vast number of individual sand particles and their interactions with each other has historically been a computationally complex problem. However, through the use of high-performance computing, we have developed a technique to subdivide physically active terrain regions across a large landscape. We only subdivide terrain regions where sand particles are actively interacting with another object or force, such as a rover wheel. This is similar to a Level of Detail (LOD) technique, except that the density of subdivisions is determined by proximity to the object or force interacting with the sand. For example, as a rover wheel moves forward and approaches a particular sand region, that region continues to subdivide until individual sand particles are represented; conversely, if the rover wheel moves away, previously subdivided sand regions recombine. Thus, individual sand particles are available when an interacting force is present but stored away when there is none, allowing many particles to be represented without the full computational cost. We have further generalized these subdivision regions in our sand framework into any volumetric area suitable for use in the simulation. This allows for more compact subdivision regions and has fine-tuned our framework so that more emphasis can be placed on regions of actively participating sand. We feel that this increases the framework's usefulness across scientific applications and can provide for other research opportunities within the Earth and planetary sciences. Through continued collaboration with our academic partners, we continue to build upon our sand simulation framework and look for other opportunities to utilize this research.
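    The proximity-driven subdivision rule described above can be sketched with a simple quadtree: a square terrain cell splits whenever the interacting object (here a single hypothetical wheel-contact point) is closer than the cell's own width, stopping at particle scale. The split criterion and sizes are illustrative assumptions, not the authors' implementation:

    ```python
    def subdivide(cell, contact, min_size):
        """Split a square cell (x, y, size) while the contact point lies
        within one cell-width of its center, down to particle scale."""
        x, y, size = cell
        cx, cy = x + size / 2, y + size / 2
        dist = ((cx - contact[0]) ** 2 + (cy - contact[1]) ** 2) ** 0.5
        if size <= min_size or dist > size:
            return [cell]  # far-away coarse cell, or particle-scale cell
        half = size / 2
        leaves = []
        for dx in (0.0, half):
            for dy in (0.0, half):
                leaves.extend(subdivide((x + dx, y + dy, half), contact, min_size))
        return leaves

    # A 16x16 terrain patch with a wheel contact near one corner:
    cells = subdivide((0.0, 0.0, 16.0), (1.0, 1.0), 1.0)
    ```

    Near the contact the patch resolves to particle-scale cells while distant regions stay coarse; re-running with the contact moved far away yields only coarse cells, mirroring the recombination behavior the abstract describes.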
