Sample records for addition computer simulations

  1. Training auscultatory skills: computer simulated heart sounds or additional bedside training? A randomized trial on third-year medical students

    PubMed Central

    2010-01-01

    Background The present study compares the value of adding computer-simulated heart sounds to conventional bedside auscultation training on the cardiac auscultation skills of 3rd-year medical students at Oslo University Medical School. Methods In addition to their usual curriculum courses, groups of seven students each were randomized to receive four hours of additional auscultation training, either employing a computer simulator system or adding more conventional bedside training. Cardiac auscultation skills were afterwards tested using live patients. Each student gave a written description of the auscultation findings in four selected patients and was awarded 0-10 points for each patient. Differences between the two study groups were evaluated using Student's t-test. Results At the auscultation test, no significant difference in mean score was found between the students who had received additional computer-based sound simulation and those who had received additional bedside training. Conclusions Students at an early stage of their cardiology training demonstrated equal performance in cardiac auscultation whether they had received an additional short auscultation course based on computer-simulated training or additional bedside training. PMID:20082701
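
    For readers unfamiliar with the test used above, the sketch below runs a two-sample Student's t-test on two hypothetical groups of per-student auscultation scores; the values are invented for illustration and are not the study's data.

    ```python
    # Two-sample Student's t-test on hypothetical auscultation scores
    # (0-40 total across four patients); illustrative values only.
    from scipy import stats

    simulator_group = [24, 31, 28, 22, 27, 30, 25]   # hypothetical scores
    bedside_group   = [26, 29, 23, 28, 31, 24, 27]   # hypothetical scores

    t_stat, p_value = stats.ttest_ind(simulator_group, bedside_group)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference
    ```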

  2. Using Computational Simulations to Confront Students' Mental Models

    ERIC Educational Resources Information Center

    Rodrigues, R.; Carvalho, P. Simeão

    2014-01-01

    In this paper we show an example of how to use a computational simulation to obtain visual feedback for students' mental models, and compare their predictions with the simulated system's behaviour. Additionally, we use the computational simulation to incrementally modify the students' mental models in order to accommodate new data,…

  3. Numerical simulation of turbulent jet noise, part 2

    NASA Technical Reports Server (NTRS)

    Metcalfe, R. W.; Orszag, S. A.

    1976-01-01

    Results on the numerical simulation of jet flow fields were used to study the radiated sound field, and in addition, to extend and test the capabilities of the turbulent jet simulation codes. The principal result of the investigation was the computation of the radiated sound field from a turbulent jet. In addition, the computer codes were extended to account for the effects of compressibility and eddy viscosity, and the treatment of the nonlinear terms of the Navier-Stokes equations was modified so that they can be computed in a semi-implicit way. A summary of the flow model and a description of the numerical methods used for its solution are presented. Calculations of the radiated sound field are reported. In addition, the extensions that were made to the fundamental dynamical codes are described. Finally, the current state-of-the-art for computer simulation of turbulent jet noise is summarized.

  4. LOS selective fading and AN/FRC-170(V) radio hybrid computer simulation phase A report

    NASA Astrophysics Data System (ADS)

    Klukis, M. K.; Lyon, T. I.; Walker, R.

    1981-09-01

    This report documents results of the first phase of modeling, simulation, and study of the dual-diversity AN/FRC-170(V) radio and the frequency-selective fading line-of-sight channel. Both hybrid computer and circuit technologies were used to develop a fast, accurate, and flexible simulation tool to investigate changes and proposed improvements to the design of the AN/FRC-170(V) radio. In addition to the simulation study, a remote hybrid computer terminal was provided to DCEC for interactive study of the modeled radio and channel. Simulated performance of the radio for Rayleigh and line-of-sight two-ray channels with additive noise is included in the report.

  5. Computer simulation: A modern day crystal ball?

    NASA Technical Reports Server (NTRS)

    Sham, Michael; Siprelle, Andrew

    1994-01-01

    It has long been the desire of managers to be able to look into the future and predict the outcome of decisions. With the advent of computer simulation and the tremendous capability provided by personal computers, that desire can now be realized. This paper presents an overview of computer simulation and modeling and discusses the capabilities of Extend. Extend is an icon-driven, Macintosh-based software tool that brings the power of simulation to the average computer user. An example of an Extend-based model is presented in the form of the Space Transportation System (STS) Processing Model. The STS Processing Model simulates eight shuttle launches per year, yet takes only about ten minutes to run. In addition, statistical data such as facility utilization, wait times, and processing bottlenecks are produced. The addition or deletion of resources, such as orbiters or facilities, can be easily modeled and their impact analyzed. Through the use of computer simulation, it is possible to look into the future to see the impact of today's decisions.

  6. Programmable Quantum Photonic Processor Using Silicon Photonics

    DTIC Science & Technology

    2017-04-01

    quantum information processing and quantum sensing, ranging from linear optics quantum computing and quantum simulation to quantum ... transformers have driven experimental and theoretical advances in quantum simulation, cluster-state quantum computing, all-optical quantum repeaters ... neuromorphic computing, and other applications. In addition, we developed new schemes for ballistic quantum computation, new methods for ...

  7. Computer-based simulation training to improve learning outcomes in mannequin-based simulation exercises.

    PubMed

    Curtin, Lindsay B; Finn, Laura A; Czosnowski, Quinn A; Whitman, Craig B; Cawley, Michael J

    2011-08-10

    To assess the impact of computer-based simulation on the achievement of student learning outcomes during mannequin-based simulation. Participants were randomly assigned to rapid response teams of 5-6 students, and teams were then randomly assigned to complete either the computer-based or the mannequin-based simulation cases first. In both simulations, students used their critical thinking skills and selected interventions independent of facilitator input. A predetermined rubric was used to record and assess students' performance in the mannequin-based simulations. Feedback and student performance scores were generated by the software in the computer-based simulations. More of the teams in the group that completed the computer-based simulation before the mannequin-based simulation achieved the primary outcome for the exercise, survival of the simulated patient (41.2% vs. 5.6%). The majority of students (>90%) recommended the continuation of simulation exercises in the course. Students in both groups felt the computer-based simulation should be completed prior to the mannequin-based simulation. The use of computer-based simulation prior to mannequin-based simulation improved the achievement of learning goals and outcomes. In addition to improving participants' skills, completing the computer-based simulation first may improve participants' confidence in the more realistic setting of the mannequin-based simulation.

  8. Virtual Transgenics: Using a Molecular Biology Simulation to Impact Student Academic Achievement and Attitudes

    NASA Astrophysics Data System (ADS)

    Shegog, Ross; Lazarus, Melanie M.; Murray, Nancy G.; Diamond, Pamela M.; Sessions, Nathalie; Zsigmond, Eva

    2012-10-01

    The transgenic mouse model is useful for studying the causes and potential cures for human genetic diseases. Exposing high school biology students to laboratory experience in developing transgenic animal models is logistically prohibitive. Computer-based simulation, however, offers this potential, in addition to advantages of fidelity and reach. This study describes and evaluates a computer-based simulation that trains advanced placement high school science students in the laboratory protocols by which a transgenic mouse model is produced. A simulation module on preparing a gene construct in the molecular biology lab was evaluated using a randomized controlled design with advanced placement high school biology students in Mercedes, Texas (n = 44). Pre-post tests assessed procedural and declarative knowledge, time on task, and attitudes toward computers for learning and toward science careers. Students who used the simulation increased their procedural and declarative knowledge regarding molecular biology compared to those in the control condition (both p < 0.005). Significant increases continued to occur with additional use of the simulation (p < 0.001). Students in the treatment group became more positive toward using computers for learning (p < 0.001). The simulation did not significantly affect attitudes toward science in general. Computer simulations of complex transgenic protocols have the potential to provide a "virtual" laboratory experience as an adjunct to conventional educational approaches.

  9. Frontiers in the Teaching of Physiology. Computer Literacy and Simulation.

    ERIC Educational Resources Information Center

    Tidball, Charles S., Ed.; Shelesnyak, M. C., Ed.

    Provided is a collection of papers on computer literacy and simulation originally published in The Physiology Teacher, supplemented by additional papers and a glossary of terms relevant to the field. The 12 papers are presented in five sections. An affirmation of conventional physiology laboratory exercises, coping with computer terminology, and…

  10. Computational Fluid Dynamics and Additive Manufacturing to Diagnose and Treat Cardiovascular Disease.

    PubMed

    Randles, Amanda; Frakes, David H; Leopold, Jane A

    2017-11-01

    Noninvasive engineering models are now being used for diagnosing and planning the treatment of cardiovascular disease. Techniques in computational modeling and additive manufacturing have matured concurrently, and results from simulations can inform and enable the design and optimization of therapeutic devices and treatment strategies. The emerging synergy between large-scale simulations and 3D printing is having a two-fold benefit: first, 3D printing can be used to validate the complex simulations, and second, the flow models can be used to improve treatment planning for cardiovascular disease. In this review, we summarize and discuss recent methods and findings for leveraging advances in both additive manufacturing and patient-specific computational modeling, with an emphasis on new directions in these fields and remaining open questions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions of larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations arising from them vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.

  12. Computer Simulation of Diffraction Patterns.

    ERIC Educational Resources Information Center

    Dodd, N. A.

    1983-01-01

    Describes an Apple computer program (listing available from author) which simulates Fraunhofer and Fresnel diffraction using vector addition techniques (vector chaining) and allows user to experiment with different shaped multiple apertures. Graphics output include vector resultants, phase difference, diffraction patterns, and the Cornu spiral…
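
    A minimal sketch of the vector-addition (phasor-chaining) idea such a program uses, applied here to a single-slit Fraunhofer pattern; the wavelength, slit width, and source count below are illustrative assumptions, not values from the article.

    ```python
    # Phasor-chaining calculation of a single-slit Fraunhofer pattern:
    # each point source across the aperture contributes a unit phasor
    # whose phase follows from its path difference y*sin(theta).
    import numpy as np

    wavelength = 633e-9          # m (He-Ne red, illustrative)
    slit_width = 50e-6           # m
    n_sources  = 200             # point sources spanning the aperture

    ys = np.linspace(-slit_width / 2, slit_width / 2, n_sources)
    thetas = np.linspace(-0.05, 0.05, 500)        # observation angles (rad)

    k = 2 * np.pi / wavelength
    amplitude = np.array([np.sum(np.exp(1j * k * ys * np.sin(t))) for t in thetas])
    intensity = np.abs(amplitude) ** 2 / n_sources ** 2  # normalized to 1 at center

    print(intensity.max())       # central maximum ~ 1.0
    ```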

  13. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.
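
    For context, a common formulation of ring-polymer contraction splits the potential into a cheap part evaluated on all beads and an expensive part evaluated on a few contracted beads; the notation below is a sketch of that general idea, not reproduced from the paper.

    ```latex
    % Ring-polymer contraction (general form): the cheap potential is
    % evaluated on all P beads, the expensive one on P' << P contracted
    % beads \tilde{q}_k; with P' = 1 (the centroid), the ab initio
    % potential is called once per step, i.e. at essentially classical cost.
    \sum_{j=1}^{P} V(q_j) \approx \sum_{j=1}^{P} V_{\mathrm{cheap}}(q_j)
      + \frac{P}{P'} \sum_{k=1}^{P'}
        \left[ V(\tilde{q}_k) - V_{\mathrm{cheap}}(\tilde{q}_k) \right]
    ```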

  14. A scalable parallel black oil simulator on distributed memory parallel computers

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
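
    A minimal sketch of the inexact Newton idea mentioned above, on a toy 2x2 system: the Newton linear system is solved only loosely with GMRES, using a residual-dependent forcing tolerance. The problem, Jacobian, and tolerances are illustrative (SciPy >= 1.12 assumed for the rtol keyword), not the simulator's code.

    ```python
    # Inexact Newton: solve J dx = -F(x) only approximately, with a
    # forcing term that loosens the inner solve when far from the root.
    import numpy as np
    from scipy.sparse.linalg import gmres

    def F(x):  # toy nonlinear residual with a root at (1, 1)
        return np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])

    def jacobian(x, eps=1e-7):  # simple finite-difference Jacobian
        f0, n = F(x), len(x)
        J = np.empty((n, n))
        for i in range(n):
            xp = x.copy()
            xp[i] += eps
            J[:, i] = (F(xp) - f0) / eps
        return J

    x = np.array([2.0, 0.5])
    for _ in range(20):
        r = F(x)
        if np.linalg.norm(r) < 1e-10:
            break
        eta = min(0.1, np.linalg.norm(r))          # forcing tolerance
        dx, _ = gmres(jacobian(x), -r, rtol=eta)   # loose inner solve
        x = x + dx
    print(x)  # converges near (1, 1)
    ```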

  15. Simulation of turbulent separated flows using a novel, evolution-based, eddy-viscosity formulation

    NASA Astrophysics Data System (ADS)

    Castellucci, Paul

    Currently, there exists a lack of confidence in the computational simulation of turbulent separated flows at large Reynolds numbers. The most accurate methods available are too computationally costly to use in engineering applications. Thus, inexpensive models, developed using the Reynolds-averaged Navier-Stokes (RANS) equations, are often extended beyond their applicability. Although these methods will often reproduce integrated quantities within engineering tolerances, such metrics are often insensitive to details within a separated wake, and therefore, poor indicators of simulation fidelity. Using concepts borrowed from large-eddy simulation (LES), a two-equation RANS model is modified to simulate the turbulent wake behind a circular cylinder. This modification involves the computation of one additional scalar field, adding very little to the overall computational cost. When properly inserted into the baseline RANS model, this modification mimics LES in the separated wake, yet reverts to the unmodified form at the cylinder surface. In this manner, superior predictive capability may be achieved without the additional cost of fine spatial resolution associated with LES near solid boundaries. Simulations using modified and baseline RANS models are benchmarked against both LES and experimental data for a circular cylinder wake at Reynolds number 3900. In addition, the computational tool used in this investigation is subject to verification via the Method of Manufactured Solutions. Post-processing of the resultant flow fields includes both mean value and triple-decomposition analysis. These results reveal substantial improvements using the modified system and appear to drive the baseline wake solution toward that of LES, as intended.

  16. Comprehensive silicon solar cell computer modeling

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.

    1984-01-01

    The development of an efficient, comprehensive Si solar cell modeling program that has the capability of simulation accuracy of 5 percent or less is examined. A general investigation of computerized simulation is provided. Computer simulation programs are subdivided into a number of major tasks: (1) analytical method used to represent the physical system; (2) phenomena submodels that comprise the simulation of the system; (3) coding of the analysis and the phenomena submodels; (4) coding scheme that results in efficient use of the CPU so that CPU costs are low; and (5) modularized simulation program with respect to structures that may be analyzed, addition and/or modification of phenomena submodels as new experimental data become available, and the addition of other photovoltaic materials.

  17. Training Knowledge Bots for Physics-Based Simulations Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Wong, Jay Ming

    2014-01-01

    Millions of complex physics-based simulations are required for the design of an aerospace vehicle. These simulations are usually performed by highly trained and skilled analysts, who execute, monitor, and steer each simulation. Analysts rely heavily on broad experience that may have taken 20-30 years to accumulate. In addition, the simulation software is complex in nature, requiring significant computational resources. Simulations of systems of systems become even more complex and are beyond human capacity to effectively learn their behavior. IBM has developed machines that can learn and compete successfully with a chess grandmaster and with the most successful Jeopardy! contestants. These machines are capable of learning some complex problems much faster than humans can. In this paper, we propose using artificial neural networks to train knowledge bots to identify the idiosyncrasies of simulation software and recognize patterns that can lead to successful simulations. We examine the use of knowledge bots for applications in computational fluid dynamics (CFD), trajectory analysis, commercial finite-element analysis software, and slosh propellant dynamics. We show that machine learning algorithms can be used to learn the idiosyncrasies of computational simulations and identify regions of instability without any additional information about their mathematical form or applied discretization approaches.
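
    A minimal sketch of the "knowledge bot" idea, under the assumption that it amounts to training a network on (simulation inputs -> outcome) pairs and using it to flag likely-unstable parameter regions; the divergence rule below is invented for illustration and stands in for a real CFD or slosh simulation.

    ```python
    # Train a small neural network to predict whether a simulation with
    # given input parameters will "diverge" (hypothetical labeling rule).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(2000, 2))      # e.g. (Mach, time-step) pairs
    y = ((X[:, 0] > 0.7) & (X[:, 1] > 0.6)).astype(int)  # invented instability corner

    bot = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    bot.fit(X[:1500], y[:1500])
    print("held-out accuracy:", bot.score(X[1500:], y[1500:]))
    ```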

  18. A demonstrative model of a lunar base simulation on a personal computer

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The initial demonstration model of a lunar base simulation is described. This initial model was developed at the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. Lotus Symphony Version 1.1 software was used to build the demonstration model on a personal computer with an MS-DOS operating system. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personal computer-based demonstration model defined a modeling structure that could be employed on a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model are planned in the near future.

  19. Visual Computing Environment Workshop

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles (Compiler)

    1998-01-01

    The Visual Computing Environment (VCE) is a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis.

  20. Fast Learning for Immersive Engagement in Energy Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M

    The fast computation that is critical for immersive engagement with, and learning from, energy simulations would be furthered by developing a general method for creating rapidly computed, simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost, with response times - typically less than one minute of wall-clock time - suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations, but instead supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.

  1. Advanced computational simulations of water waves interacting with wave energy converters

    NASA Astrophysics Data System (ADS)

    Pathak, Ashish; Freniere, Cole; Raessi, Mehdi

    2017-03-01

    Wave energy converter (WEC) devices harness the renewable ocean wave energy and convert it into useful forms of energy, e.g. mechanical or electrical. This paper presents an advanced 3D computational framework to study the interaction between water waves and WEC devices. The computational tool solves the full Navier-Stokes equations and considers all important effects impacting the device performance. To enable large-scale simulations in fast turnaround times, the computational solver was developed in an MPI parallel framework. A fast multigrid preconditioned solver is introduced to solve the computationally expensive pressure Poisson equation. The computational solver was applied to two surface-piercing WEC geometries: bottom-hinged cylinder and flap. Their numerically simulated response was validated against experimental data. Additional simulations were conducted to investigate the applicability of Froude scaling in predicting full-scale WEC response from the model experiments.
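
    For reference, these are the standard Froude-similitude relations behind the scaling study mentioned above, stated from general theory rather than taken from the paper, for a geometric scale ratio lambda = L_full/L_model with the same fluid at both scales.

    ```latex
    % Matching the Froude number between model and full scale fixes how
    % measured quantities scale with the geometric ratio \lambda:
    \mathrm{Fr} = \frac{U}{\sqrt{g L}}, \qquad
    U_{\mathrm{full}} = \sqrt{\lambda}\, U_{\mathrm{model}}, \qquad
    T_{\mathrm{full}} = \sqrt{\lambda}\, T_{\mathrm{model}}, \qquad
    F_{\mathrm{full}} = \lambda^{3} F_{\mathrm{model}}, \qquad
    P_{\mathrm{full}} = \lambda^{3.5} P_{\mathrm{model}}
    ```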

  2. Understanding Islamist political violence through computational social simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, Jennifer H; Mackerrow, Edward P; Patelli, Paolo G

    Understanding the process that enables political violence is of great value in reducing the future demand for and support of violent opposition groups. Methods are needed that allow alternative scenarios and counterfactuals to be scientifically researched. Computational social simulation shows promise in developing 'computer experiments' that would be unfeasible or unethical in the real world. Additionally, the process of modeling and simulation reveals and challenges assumptions that may not be noted in theories, exposes areas where data is not available, and provides a rigorous, repeatable, and transparent framework for analyzing the complex dynamics of political violence. This paper demonstrates themore » computational modeling process using two simulation techniques: system dynamics and agent-based modeling. The benefits and drawbacks of both techniques are discussed. In developing these social simulations, we discovered that the social science concepts and theories needed to accurately simulate the associated psychological and social phenomena were lacking.« less

  3. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    PubMed Central

    Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.

    2016-01-01

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity. PMID:27930676
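
    A minimal sketch of the kind of discrete stochastic simulation StochSS automates: Gillespie's direct method on a toy birth-death network. The model and rate constants are illustrative, and this bypasses the StochSS interface entirely.

    ```python
    # Gillespie stochastic simulation of a birth-death process:
    # * -> X at rate k_birth, X -> * at rate k_death * X.
    import random
    import math

    k_birth, k_death = 10.0, 0.1       # illustrative rate constants
    x, t, t_end = 0, 0.0, 100.0
    while t < t_end:
        a1, a2 = k_birth, k_death * x             # reaction propensities
        a0 = a1 + a2
        t += -math.log(1.0 - random.random()) / a0  # exponential waiting time
        if random.random() * a0 < a1:
            x += 1                                 # birth event
        else:
            x -= 1                                 # death event
    print("X(t_end) =", x, "(steady state ~ k_birth/k_death = 100)")
    ```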

  4. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    DOE PAGES

    Drawert, Brian; Hellander, Andreas; Bales, Ben; ...

    2016-12-08

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We also demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.

  5. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
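
    The efficiency claim above follows from the standard discrete-adjoint identity: with state u satisfying the residual equation R(u, p) = 0 and output J(u, p), a single adjoint solve yields sensitivities with respect to all parameters at once.

    ```latex
    % One adjoint solve for \lambda (comparable in cost to one flow
    % solution) gives dJ/dp for every design parameter p:
    \left(\frac{\partial R}{\partial u}\right)^{T} \lambda
        = \left(\frac{\partial J}{\partial u}\right)^{T},
    \qquad
    \frac{dJ}{dp} = \frac{\partial J}{\partial p}
        - \lambda^{T}\,\frac{\partial R}{\partial p}
    ```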

  6. Computer Simulations to Support Science Instruction and Learning: A critical review of the literature

    NASA Astrophysics Data System (ADS)

    Smetana, Lara Kathleen; Bell, Randy L.

    2012-06-01

    Researchers have explored the effectiveness of computer simulations for supporting science teaching and learning during the past four decades. The purpose of this paper is to provide a comprehensive, critical review of the literature on the impact of computer simulations on science teaching and learning, with the goal of summarizing what is currently known and providing guidance for future research. We report on the outcomes of 61 empirical studies dealing with the efficacy of, and implications for, computer simulations in science instruction. The overall findings suggest that simulations can be as effective, and in many ways more effective, than traditional (i.e. lecture-based, textbook-based and/or physical hands-on) instructional practices in promoting science content knowledge, developing process skills, and facilitating conceptual change. As with any other educational tool, the effectiveness of computer simulations is dependent upon the ways in which they are used. Thus, we outline specific research-based guidelines for best practice. Computer simulations are most effective when they (a) are used as supplements; (b) incorporate high-quality support structures; (c) encourage student reflection; and (d) promote cognitive dissonance. Used appropriately, computer simulations involve students in inquiry-based, authentic science explorations. Additionally, as educational technologies continue to evolve, advantages such as flexibility, safety, and efficiency deserve attention.

  7. Challenges in Reproducibility, Replicability, and Comparability of Computational Models and Tools for Neuronal and Glial Networks, Cells, and Subcellular Structures.

    PubMed

    Manninen, Tiina; Aćimović, Jugoslava; Havela, Riikka; Teppola, Heidi; Linne, Marja-Leena

    2018-01-01

    The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, there are thousands of models available. However, it is rarely possible to reimplement the models based on the information in the original publication, let alone rerun the models just because the model implementations have not been made publicly available. We evaluate and discuss the comparability of a versatile choice of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including the models for intracellular signal transduction of neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another to comprehend if the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tool used to simulate and develop computational models. Constant improvement on experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, enabling progressive building of advanced computational models and tools, can be achieved only through adopting publishing standards which underline replicability and reproducibility of research results.

  8. Challenges in Reproducibility, Replicability, and Comparability of Computational Models and Tools for Neuronal and Glial Networks, Cells, and Subcellular Structures

    PubMed Central

    Manninen, Tiina; Aćimović, Jugoslava; Havela, Riikka; Teppola, Heidi; Linne, Marja-Leena

    2018-01-01

    The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, there are thousands of models available. However, it is rarely possible to reimplement the models based on the information in the original publication, let alone rerun the models just because the model implementations have not been made publicly available. We evaluate and discuss the comparability of a versatile choice of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including the models for intracellular signal transduction of neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another to comprehend if the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tool used to simulate and develop computational models. Constant improvement on experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, enabling progressive building of advanced computational models and tools, can be achieved only through adopting publishing standards which underline replicability and reproducibility of research results. PMID:29765315

  9. A cis-regulatory logic simulator.

    PubMed

    Zeigler, Robert D; Gertz, Jason; Cohen, Barak A

    2007-07-27

    A major goal of computational studies of gene regulation is to accurately predict the expression of genes based on the cis-regulatory content of their promoters. The development of computational methods to decode the interactions among cis-regulatory elements has been slow, in part, because it is difficult to know, without extensive experimental validation, whether a particular method identifies the correct cis-regulatory interactions that underlie a given set of expression data. There is an urgent need for test expression data in which the interactions among cis-regulatory sites that produce the data are known. The ability to rapidly generate such data sets would facilitate the development and comparison of computational methods that predict gene expression patterns from promoter sequence. We developed a gene expression simulator which generates expression data using user-defined interactions between cis-regulatory sites. The simulator can incorporate additive, cooperative, competitive, and synergistic interactions between regulatory elements. Constraints on the spacing, distance, and orientation of regulatory elements and their interactions may also be defined and Gaussian noise can be added to the expression values. The simulator allows for a data transformation that simulates the sigmoid shape of expression levels from real promoters. We found good agreement between sets of simulated promoters and predicted regulatory modules from real expression data. We present several data sets that may be useful for testing new methodologies for predicting gene expression from promoter sequence. We developed a flexible gene expression simulator that rapidly generates large numbers of simulated promoters and their corresponding transcriptional output based on specified interactions between cis-regulatory sites. When appropriate rule sets are used, the data generated by our simulator faithfully reproduces experimentally derived data sets. We anticipate that using simulated gene expression data sets will facilitate the direct comparison of computational strategies to predict gene expression from promoter sequence. The source code is available online and as additional material. The test sets are available as additional material.
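
    A schematic of the kind of transformation the simulator described above performs: additive site contributions, an optional cooperative bonus, Gaussian noise, and a sigmoid squashing. The weights and rules below are invented for illustration and are not the published tool's code.

    ```python
    # Toy promoter-to-expression mapping: additive site contributions,
    # a cooperative bonus when two sites co-occur, Gaussian noise, and
    # a sigmoid transformation of the summed activity.
    import numpy as np

    rng = np.random.default_rng(1)

    def expression(sites, weights, coop_pairs, noise_sd=0.1):
        """sites: 0/1 occupancy vector; coop_pairs: {(i, j): bonus}."""
        activity = float(np.dot(sites, weights))
        for (i, j), bonus in coop_pairs.items():
            if sites[i] and sites[j]:
                activity += bonus                 # cooperative interaction
        activity += rng.normal(0.0, noise_sd)     # Gaussian noise
        return 1.0 / (1.0 + np.exp(-activity))    # sigmoid expression level

    promoter = np.array([1, 1, 0])                # which sites are present
    weights  = np.array([0.8, -0.5, 1.2])         # activator, repressor, activator
    print(expression(promoter, weights, {(0, 1): -0.4}))
    ```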

  10. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.

    PubMed

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-08

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration.
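
    A toy illustration of the MapReduce decomposition described above, using Python built-ins in place of Hadoop: "map" produces one target's raw echo and "reduce" performs the associative accumulation into the raw data array. The point-target response is a placeholder, not a real SAR echo model.

    ```python
    # MapReduce pattern for echo accumulation: map over targets,
    # reduce by summing their contributions into one raw-data array.
    from functools import reduce
    import numpy as np

    n_samples = 1024
    targets = [(100, 1.0), (340, 0.6), (700, 0.3)]  # (delay bin, reflectivity)

    def map_echo(target):
        delay, sigma = target
        echo = np.zeros(n_samples, dtype=complex)
        echo[delay] = sigma              # placeholder point-target response
        return echo

    def reduce_add(a, b):
        return a + b                     # accumulation is associative

    raw = reduce(reduce_add, map(map_echo, targets), np.zeros(n_samples, complex))
    print(np.flatnonzero(raw))           # echoes landed at the delay bins
    ```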

  11. A Guide to Computer Simulations of Three Adaptive Instructional Models for the Advanced Instructional System Phases II and III. Final Report.

    ERIC Educational Resources Information Center

    Hansen, Duncan N.; And Others

    Computer simulations of three individualized adaptive instructional models (AIM) were undertaken to determine if these models function as prescribed in Air Force technical training programs. In addition, the project sought to develop a user's guide for effective understanding of adaptive models during field implementation. Successful simulations…

  12. Moose: An Open-Source Framework to Enable Rapid Development of Collaborative, Multi-Scale, Multi-Physics Simulation Tools

    NASA Astrophysics Data System (ADS)

    Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.

    2014-12-01

    The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org) is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly-coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
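
    For reference, the matrix-free idea at the heart of JFNK (standard textbook form, not MOOSE-specific code): Krylov methods need only Jacobian-vector products, which are approximated by finite differences of the nonlinear residual F.

    ```latex
    % Jacobian-vector product without ever forming J explicitly:
    J(u)\,v \approx \frac{F(u + \epsilon v) - F(u)}{\epsilon},
    \qquad \epsilon \ \text{a small perturbation parameter}
    ```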

  13. Comparison of the Melting Temperatures of Classical and Quantum Water Potential Models

    NASA Astrophysics Data System (ADS)

    Du, Sen; Yoo, Soohaeng; Li, Jinjin

    2017-08-01

    As theoretical approaches and technical methods improve over time, the field of computer simulations for water has greatly progressed. Water potential models become much more complex when additional interactions and advanced theories are considered. Macroscopic properties of water predicted by computer simulations using water potential models are expected to be consistent with experimental outcomes. As such, discrepancies between computer simulations and experiments could be a criterion to comment on the performances of various water potential models. Notably, water can occur not only as liquid phases but also as solid and vapor phases. Therefore, the melting temperature related to the solid and liquid phase equilibrium is an effective parameter to judge the performances of different water potential models. As a mini review, our purpose is to introduce some water models developed in recent years and the melting temperatures obtained through simulations with such models. Moreover, some explanations referred to in the literature are described for the additional evaluation of the water potential models.

  14. Mission: Define Computer Literacy. The Illinois-Wisconsin ISACS Computer Coordinators' Committee on Computer Literacy Report (May 1985).

    ERIC Educational Resources Information Center

    Computing Teacher, 1985

    1985-01-01

    Defines computer literacy and describes a computer literacy course which stresses ethics, hardware, and disk operating systems throughout. Core units on keyboarding, word processing, graphics, database management, problem solving, algorithmic thinking, and programing are outlined, together with additional units on spreadsheets, simulations,…

  15. Shock compression response of cold-rolled Ni/Al multilayer composites

    DOE PAGES

    Specht, Paul E.; Weihs, Timothy P.; Thadhani, Naresh N.

    2017-01-06

    Uniaxial strain, plate-on-plate impact experiments were performed on cold-rolled Ni/Al multilayer composites and the resulting Hugoniot was determined through time-resolved measurements combined with impedance matching. The experimental Hugoniot agreed with that previously predicted by two dimensional (2D) meso-scale calculations. Additional 2D meso-scale simulations were performed using the same computational method as the prior study to reproduce the experimentally measured free surface velocities and stress profiles. Finally, these simulations accurately replicated the experimental profiles, providing additional validation for the previous computational work.

  16. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Reynolds, Daniel R.

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
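
    A sketch of spectral (Fourier) filtering of a signal corrupted by additive white noise, in the spirit of the method described; the cutoff rule here (zeroing modes below an estimated noise floor) is a simplified stand-in for the paper's automatic optimum.

    ```python
    # Spectral filtering: FFT the noisy signal, zero modes below an
    # estimated white-noise floor, inverse-FFT back.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 512
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    clean = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
    noisy = clean + rng.normal(0.0, 0.3, n)       # additive white noise

    spec = np.fft.rfft(noisy)
    floor = np.median(np.abs(spec))               # crude white-noise floor
    spec[np.abs(spec) < 4 * floor] = 0.0          # zero sub-threshold modes
    filtered = np.fft.irfft(spec, n)

    print("rmse noisy:   ", np.sqrt(np.mean((noisy - clean) ** 2)))
    print("rmse filtered:", np.sqrt(np.mean((filtered - clean) ** 2)))
    ```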

  17. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE PAGES

    Gardner, David J.; Reynolds, Daniel R.

    2017-01-05

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.

  18. Signal-to-noise ratio estimation in digital computer simulation of lowpass and bandpass systems with applications to analog and digital communications, volume 3

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.; Turner, M. D.

    1977-01-01

    Techniques are developed to estimate power gain, delay, signal-to-noise ratio, and mean square error in digital computer simulations of lowpass and bandpass systems. The techniques are applied to analog and digital communications. The signal-to-noise ratio estimates are shown to be maximum likelihood estimates in additive white Gaussian noise. The methods are seen to be especially useful for digital communication systems where the mapping from the signal-to-noise ratio to the error probability can be obtained. Simulation results show the techniques developed to be accurate and quite versatile in evaluating the performance of many systems through digital computer simulation.
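
    A sketch of the kind of estimators described above, on synthetic data: delay from the peak of the cross-correlation, gain by least squares, and SNR from the ratio of fitted signal power to residual power. The signal model and parameters are illustrative, not the report's.

    ```python
    # Estimate delay, gain, and SNR of y = g * x(t - d) + noise.
    import numpy as np

    rng = np.random.default_rng(3)
    n, true_delay, true_gain = 4096, 25, 0.8
    x = rng.normal(size=n)                            # transmitted reference
    y = true_gain * np.roll(x, true_delay) + rng.normal(0, 0.5, n)

    corr = np.array([np.dot(np.roll(x, d), y) for d in range(64)])
    d_hat = int(np.argmax(corr))                      # delay estimate
    xs = np.roll(x, d_hat)
    g_hat = np.dot(xs, y) / np.dot(xs, xs)            # least-squares gain
    resid = y - g_hat * xs
    snr_hat = (g_hat**2 * np.mean(xs**2)) / np.mean(resid**2)
    print(d_hat, round(g_hat, 3), round(10 * np.log10(snr_hat), 2), "dB")
    ```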

  19. An investigation of the use of microcomputer-based laboratory simulations in promoting conceptual understanding in secondary physics instruction

    NASA Astrophysics Data System (ADS)

    Tomshaw, Stephen G.

    Physics education research has shown that students bring alternate conceptions to the classroom which can be quite resistant to traditional instruction methods (Clement, 1982; Halloun & Hestenes, 1985; McDermott, 1991). Microcomputer-based laboratory (MBL) experiments that employ an active-engagement strategy have been shown to improve student conceptual understanding in high school and introductory university physics courses (Thornton & Sokoloff, 1998). These MBL experiments require a specialized computer interface, type-specific sensors (e.g. motion detectors, force probes, accelerometers), and specialized software in addition to the standard physics experimental apparatus. Tao and Gunstone (1997) have shown that computer simulations used in an active-engagement environment can also lead to conceptual change. This study investigated 69 secondary physics students' use of computer simulations of MBL activities in place of the hands-on MBL laboratory activities. The average normalized gain in students' conceptual understanding was measured using the Force and Motion Conceptual Evaluation (FMCE). Student attitudes towards physics and computers were probed using the Views About Science Survey (VASS) and the Computer Attitude Scale (CAS). While it may be possible to obtain an equivalent level of conceptual understanding using computer simulations in combination with an active-engagement environment, this study found no significant gains in students' conceptual understanding (⟨g⟩ = -0.02) after they completed a series of nine simulated experiments from the Tools for Scientific Thinking curriculum (Thornton & Sokoloff, 1990). The absence of gains in conceptual understanding may indicate that either the simulations were ineffective in promoting conceptual change or problems with the implementation of the treatment inhibited its effectiveness. There was a positive shift in students' attitudes towards physics in the VASS dimensions of structure and reflective thinking, while there was a negative shift in students' attitudes towards computers in the CAS subscales of anxiety and usefulness. The negative shift in attitudes towards computers may be due to the additional time and work required by the students to perform the simulation experiments with no apparent reward in terms of their physics grade. Suggestions for future research include a qualitative element to observe student interactions and alternate formats for the simulations themselves.
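
    For reference, the "average normalized gain" cited above is conventionally Hake's normalized gain, the fraction of the possible pre-to-post improvement actually achieved; ⟨g⟩ = -0.02 therefore means post-test scores were essentially unchanged from (slightly below) pre-test.

    ```latex
    % Hake's average normalized gain from pre- and post-test percentages:
    \langle g \rangle
      = \frac{\%_{\mathrm{post}} - \%_{\mathrm{pre}}}{100 - \%_{\mathrm{pre}}}
    ```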

  20. Computational aeroacoustics and numerical simulation of supersonic jets

    NASA Technical Reports Server (NTRS)

    Morris, Philip J.; Long, Lyle N.

    1996-01-01

    The research project has been a computational study of computational aeroacoustics algorithms and numerical simulations of the flow and noise of supersonic jets. During this study a new method for the implementation of solid wall boundary conditions for complex geometries in three dimensions has been developed. In addition, a detailed study of the simulation of the flow in and noise from supersonic circular and rectangular jets has been conducted. Extensive comparisons have been made with experimental measurements. A summary of the results of the research program are attached as the main body of this report in the form of two publications. Also, the report lists the names of the students who were supported by this grant, their degrees, and the titles of their dissertations. In addition, a list of presentations and publications made by the Principal Investigators and the research students is also included.

  1. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing

    PubMed Central

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-01

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration. PMID:28075343

  2. Computer technique for simulating the combustion of cellulose and other fuels

    Treesearch

    Andrew M. Stein; Brian W. Bauske

    1971-01-01

    A computer method has been developed for simulating the combustion of wood and other cellulosic fuels. The products of combustion are used as input for a convection model that simulates real fires. The method allows the chemical process to proceed to equilibrium and then examines the effects of mass addition and repartitioning on the fluid mechanics of the convection...

  3. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    PubMed

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
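
    A local sketch of the embarrassingly parallel structure the cloud deployment exploits, with a process pool standing in for worker nodes; the "history" here is a trivial stand-in for EGS5 particle transport, not the paper's code. (The reported 2.58 h to 3.3 min on 100 nodes is consistent with near-linear scaling: 154.8 min / 100 is about 1.5 min of compute, plus overhead.)

    ```python
    # Independent Monte Carlo batches run in parallel, results aggregated
    # at the end; each worker gets its own seed so batches are independent.
    import random
    from multiprocessing import Pool

    def batch(args):
        seed, n = args
        rng = random.Random(seed)
        # Stand-in "history": estimate pi by rejection sampling.
        hits = sum(rng.random()**2 + rng.random()**2 < 1.0 for _ in range(n))
        return hits, n

    if __name__ == "__main__":
        with Pool(8) as pool:                      # 8 local "nodes"
            parts = pool.map(batch, [(seed, 250_000) for seed in range(8)])
        hits = sum(h for h, _ in parts)
        total = sum(n for _, n in parts)
        print("pi ~", 4.0 * hits / total)
    ```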

  4. Computer simulation of reconstructed image for computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Yasuda, Tomoki; Kitamura, Mitsuru; Watanabe, Masachika; Tsumuta, Masato; Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2009-02-01

    This report presents the results of computer-simulated images for image-type Computer-Generated Holograms (CGHs), observable under white light, fabricated with an electron beam lithography system. The simulated image is obtained by calculating the wavelength and intensity of diffracted light traveling toward the viewing point from the CGH. The wavelength and intensity of the diffracted light are calculated using an FFT image generated from the interference fringe data. A parallax image of the CGH corresponding to the viewing point can easily be obtained using this simulation method. The simulated image from the interference fringe data was compared with the reconstructed image of a real CGH fabricated with an Electron Beam (EB) lithography system. The simulated image closely resembled the reconstructed image of the CGH in shape, parallax, coloring and shade. In addition, depending on the shape of the light sources, the simulated images varied in chroma saturation and blur under two kinds of simulations: the several-light-sources method and the smoothing method. Finally, as applications of the CGH, a full-color CGH and a CGH with multiple images were simulated; the simulated images of these CGHs also closely resembled the reconstructed images of the real CGHs.

  5. Brief Survey of TSC Computing Facilities

    DOT National Transportation Integrated Search

    1972-05-01

    The Transportation Systems Center (TSC) has four essentially separate in-house computing facilities. We shall call them the Honeywell Facility, the Hybrid Facility, the Multimode Simulation Facility, and the Central Facility. In addition to these four,...

  6. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    USDA-ARS?s Scientific Manuscript database

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  7. A method for three-dimensional modeling of wind-shear environments for flight simulator applications

    NASA Technical Reports Server (NTRS)

    Bray, R. S.

    1984-01-01

    A computational method for modeling severe wind shears of the type that have been documented during severe convective atmospheric conditions is offered for use in research and training flight simulation. The procedure was developed with the objectives of operational flexibility and minimum computer load. From one to five simple downburst wind models can be configured and located to produce the wind field desired for specific simulated flight scenarios. A definition of related turbulence parameters is offered as an additional product of the computations. The use of the method to model several documented examples of severe wind shear is demonstrated.
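
    The report's downburst equations are not reproduced here, but the superposition idea can be sketched as follows: each of up to five cells contributes a simple axisymmetric outflow whose radial wind peaks at a characteristic radius, and the contributions are summed at the aircraft position. All profiles and parameters below are illustrative assumptions.

    # Hedged sketch of superposing up to five downburst outflow cells.
    import math

    def downburst_wind(x, y, cells):
        """Sum horizontal wind (u, v) from (cx, cy, R, vmax) cells."""
        u = v = 0.0
        for cx, cy, R, vmax in cells:
            dx, dy = x - cx, y - cy
            r = math.hypot(dx, dy) or 1e-9
            # Illustrative radial profile: linear rise inside R, 1/r decay outside.
            speed = vmax * (r / R) if r < R else vmax * (R / r)
            u += speed * dx / r
            v += speed * dy / r
        return u, v

    cells = [(0.0, 0.0, 600.0, 18.0), (2500.0, 400.0, 800.0, 12.0)]
    print(downburst_wind(1000.0, 0.0, cells))   # wind at one flight-path point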

  8. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    NASA Astrophysics Data System (ADS)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

    This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), albeit only as a case-study approach. Thanks to the big leap in computational performance of the K computer, we could greatly increase the number of MJO events simulated, in addition to extending the integration time and horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with the relatively coarse operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.

  9. Instructional support and implementation structure during elementary teachers' science education simulation use

    NASA Astrophysics Data System (ADS)

    Gonczi, Amanda L.; Chiu, Jennifer L.; Maeng, Jennifer L.; Bell, Randy L.

    2016-07-01

    This investigation sought to identify patterns in elementary science teachers' computer simulation use, particularly implementation structures and instructional supports commonly employed by teachers. Data included video-recorded science lessons of 96 elementary teachers who used computer simulations in one or more science lessons. Results indicated teachers used a one-to-one student-to-computer ratio most often either during class-wide individual computer use or during a rotating station structure. Worksheets, general support, and peer collaboration were the most common forms of instructional support. The least common instructional support forms included lesson pacing, initial play, and a closure discussion. Students' simulation use was supported in the fewest ways during a rotating station structure. Results suggest that simulation professional development with elementary teachers needs to explicitly focus on implementation structures and instructional support to enhance participants' pedagogical knowledge and improve instructional simulation use. In addition, research is needed to provide theoretical explanations for the observed patterns that should subsequently be addressed in supporting teachers' instructional simulation use during professional development or in teacher preparation programs.

  10. Shock compression response of cold-rolled Ni/Al multilayer composites

    NASA Astrophysics Data System (ADS)

    Specht, Paul E.; Weihs, Timothy P.; Thadhani, Naresh N.

    2017-01-01

    Uniaxial strain, plate-on-plate impact experiments were performed on cold-rolled Ni/Al multilayer composites and the resulting Hugoniot was determined through time-resolved measurements combined with impedance matching. The experimental Hugoniot agreed with that previously predicted by two dimensional (2D) meso-scale calculations [Specht et al., J. Appl. Phys. 111, 073527 (2012)]. Additional 2D meso-scale simulations were performed using the same computational method as the prior study to reproduce the experimentally measured free surface velocities and stress profiles. These simulations accurately replicated the experimental profiles, providing additional validation for the previous computational work.
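
    The impedance-matching step named above can be illustrated generically (this is not the authors' code): with linear Us-up Hugoniots, Us = c0 + s*up, the interface state is found where flyer and target pressures balance. The material constants below are placeholders for illustration.

    # Generic impedance match for a flyer impacting a target, by bisection.
    def hugoniot_pressure(rho0, c0, s, up):
        return rho0 * (c0 + s * up) * up      # P = rho0 * Us * up

    def impedance_match(flyer, target, v_impact, tol=1e-9):
        """Bisect for the interface particle velocity u in (0, v_impact)."""
        rf, cf, sf = flyer
        rt, ct, st = target
        lo, hi = 0.0, v_impact
        while hi - lo > tol:
            u = 0.5 * (lo + hi)
            f = (hugoniot_pressure(rf, cf, sf, v_impact - u)
                 - hugoniot_pressure(rt, ct, st, u))
            if f > 0.0:
                lo = u    # flyer pressure still higher: move right
            else:
                hi = u
        return u

    # Units: g/cm^3 and km/s, so pressures come out in GPa.
    copper = (8.93, 3.94, 1.49)      # illustrative flyer constants
    sample = (5.40, 4.50, 1.60)      # placeholder composite constants
    u = impedance_match(copper, sample, 1.0)
    print(u, hugoniot_pressure(*sample, u))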

  11. Universal, computer facilitated, steady state oscillator, closed loop analysis theory and some applications to precision oscillators

    NASA Technical Reports Server (NTRS)

    Parzen, Benjamin

    1992-01-01

    The theory of oscillator analysis in the immittance domain should be read in conjunction with the additional theory presented here. The combined theory enables the computer simulation of the steady state oscillator. The simulation makes the calculation of the oscillator total steady state performance practical, including noise at all oscillator locations. Some specific precision oscillators are analyzed.

  12. Proton transport in functionalised additives for PEM fuel cells: contributions from atomistic simulations.

    PubMed

    Tölle, Pia; Köhler, Christof; Marschall, Roland; Sharifi, Monir; Wark, Michael; Frauenheim, Thomas

    2012-08-07

    The conventional polymer electrolyte membrane (PEM) materials for fuel cell applications strongly rely on temperature and pressure conditions for optimal performance. In order to expand the range of operating conditions of these conventional PEM materials, mesoporous functionalised SiO2 additives are being developed. It has been demonstrated that these additives themselves achieve proton conductivities approaching those of conventional materials. However, the proton conduction mechanisms, and especially the factors influencing charge carrier mobility under different hydration conditions, are not well known and are difficult to separate from concentration effects in experiments. This tutorial review highlights contributions of atomistic computer simulations to the basic understanding and eventual design of these materials. Some basic introduction to the theoretical and computational framework is provided to orient the reader; the techniques are, in principle, applicable to a wide range of other situations as well. Simulation results are directly compared to experimental data as far as possible.

  13. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorizing and inlining capabilities. The code runs at only about 5 percent of the Cray's peak speed because it makes ineffective use of the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.

  14. FACE computer simulation. [Flexible Arm Controls Experiment

    NASA Technical Reports Server (NTRS)

    Sadeh, Willy Z.; Szmyd, Jeffrey A.

    1990-01-01

    A computer simulation of the FACE (Flexible Arm Controls Experiment) was conducted to assess its design for use in the Space Shuttle. The FACE is designed as a 14-ft-long articulated structure with four degrees of freedom: shoulder pitch and yaw, elbow pitch, and wrist pitch. The kinematics of the FACE were simulated to obtain data on arm operation, function, workspace and interaction, and payload capture ability was modeled. The simulation demonstrates the capability for detailed kinematic simulation and payload-capture analysis, and the feasibility of real-time simulation was established. In addition, the potential for interactive real-time training through integration of the simulation with various interface controllers was revealed. At this stage, the flexibility of the arm was not yet considered.

  15. Predicting Flows of Rarefied Gases

    NASA Technical Reports Server (NTRS)

    LeBeau, Gerald J.; Wilmoth, Richard G.

    2005-01-01

    DSMC Analysis Code (DAC) is a flexible, highly automated, easy-to-use computer program for predicting flows of rarefied gases -- especially flows of upper-atmospheric, propulsion, and vented gases impinging on spacecraft surfaces. DAC implements the direct simulation Monte Carlo (DSMC) method, which is widely recognized as standard for simulating flows at densities so low that the continuum-based equations of computational fluid dynamics are invalid. DAC enables users to model complex surface shapes and boundary conditions quickly and easily. The discretization of a flow field into computational grids is automated, thereby relieving the user of a traditionally time-consuming task while ensuring (1) appropriate refinement of grids throughout the computational domain, (2) determination of optimal settings for temporal discretization and other simulation parameters, and (3) satisfaction of the fundamental constraints of the method. In so doing, DAC ensures an accurate and efficient simulation. In addition, DAC can utilize parallel processing to reduce computation time. The domain decomposition needed for parallel processing is completely automated, and the software employs a dynamic load-balancing mechanism to ensure optimal parallel efficiency throughout the simulation.

  16. Ku-Band rendezvous radar performance computer simulation model

    NASA Technical Reports Server (NTRS)

    Magnusson, H. G.; Goff, M. F.

    1984-01-01

    All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescopes.

  17. Computational methods for coupling microstructural and micromechanical materials response simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  18. Simulation: an evolving methodology for health administration education.

    PubMed

    Taylor, J K; Moore, J A; Holland, M G

    1985-01-01

    Simulation provides a valuable addition to a university's teaching methods. Computer-assisted gaming is especially effective in teaching advanced business strategy and corporate policy when the nature and complexity of the simulation permit. The potential for using simulation techniques in postgraduate professional education and in managerial self-assessment appears to be significant over the next several years.

  19. High-Fidelity Simulations of Electromagnetic Propagation and RF Communication Systems

    DTIC Science & Technology

    2017-05-01

    In addition to high-fidelity RF propagation modeling, lower-fidelity models, which are less computationally burdensome, are available via a C++ API... expensive to perform, requiring roughly one hour of computer time with 36 available cores and ray tracing performed by a single high-end GPU... (ERDC TR-17-2, Military Engineering Applied Research: High-Fidelity Simulations of Electromagnetic Propagation and RF Communication)

  1. An Initial Multi-Domain Modeling of an Actively Cooled Structure

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur

    1997-01-01

    A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).

  2. Stochastic simulation of human pulmonary blood flow and transit time frequency distribution based on anatomic and elasticity data.

    PubMed

    Huang, Wei; Shi, Jun; Yen, R T

    2012-12-01

    The objective of our study was to develop a computer program for computing the transit-time frequency distributions of red blood cells in the human pulmonary circulation, based on our anatomic and elasticity data for blood vessels in the human lung. A stochastic simulation model was introduced to simulate blood flow in the human pulmonary circulation. In this model, the connectivity data of the pulmonary blood vessels in the human lung was converted into a probability matrix. Based on this model, the transit time of red blood cells in the human pulmonary circulation and the output blood pressure were studied. Additionally, the stochastic simulation model can be used to predict changes of blood flow in the human pulmonary circulation, with the advantages of lower computing cost and higher flexibility. In conclusion, a stochastic simulation approach was introduced to simulate blood flow in the hierarchical structure of the pulmonary circulation system, and to calculate the transit-time distributions and the blood pressure outputs.
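
    A minimal sketch of this stochastic approach (with a purely illustrative three-node network, not the authors' anatomic data): vessel connectivity is encoded as a row-stochastic matrix, and a red blood cell's transit time is sampled by walking from the arterial inlet to the venous outlet.

    # Hedged sketch: sampling transit times from a connectivity probability matrix.
    import numpy as np

    P = np.array([[0.0, 0.7, 0.3],     # node 0: arteriole
                  [0.0, 0.0, 1.0],     # node 1: capillary
                  [0.0, 0.0, 0.0]])    # node 2: venule (absorbing outlet)
    dwell = np.array([0.2, 0.6, 0.1])  # illustrative dwell time per node [s]
    rng = np.random.default_rng(0)

    def transit_time():
        node, t = 0, dwell[0]
        while node != 2:               # walk until the outlet is reached
            node = rng.choice(3, p=P[node])
            t += dwell[node]
        return t

    samples = [transit_time() for _ in range(10_000)]
    print(np.mean(samples), np.std(samples))   # transit-time distribution moments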

  3. 2000 Numerical Propulsion System Simulation Review

    NASA Technical Reports Server (NTRS)

    Lytle, John; Follen, Greg; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac

    2001-01-01

    The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia, and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 1999 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2000 by the High Performance Computing and Communications Program.

  4. 2001 Numerical Propulsion System Simulation Review

    NASA Technical Reports Server (NTRS)

    Lytle, John; Follen, Gregory; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac

    2002-01-01

    The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 2000 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2001 by the High Performance Computing and Communications Program.

  5. Modeling and simulation of ocean wave propagation using lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Nuraiman, Dian

    2017-10-01

    In this paper, we present the modeling and simulation of ocean wave propagation from the deep sea to the shoreline. Simulating such a large domain incurs a high computational cost. We propose to couple a 1D shallow water equations (SWE) model with a 2D incompressible Navier-Stokes equations (NSE) model in order to reduce the computational cost. The coupled model is solved using the lattice Boltzmann method (LBM) with the lattice Bhatnagar-Gross-Krook (BGK) scheme. Additionally, a special method is implemented to treat the complex behavior of the free surface close to the shoreline. The results show that the coupled model reduces the computational cost significantly compared to the full NSE model.
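
    The 1D shallow-water half of such a coupled model can be sketched with the BGK scheme on a D1Q3 lattice (velocities 0, +e, -e) using the standard shallow-water equilibrium distributions. The coupling to the 2D NSE region and the shoreline treatment are omitted, and all parameters are illustrative.

    # Hedged D1Q3 lattice-BGK sketch for the 1D shallow water equations.
    import numpy as np

    NX, e, g, tau = 200, 1.0, 9.81e-4, 0.8    # lattice units, illustrative
    h = 1.0 + 0.1 * np.exp(-((np.arange(NX) - 50.0) ** 2) / 50.0)  # initial hump
    u = np.zeros(NX)

    def equilibria(h, u):
        f0 = h - g * h**2 / (2 * e**2) - h * u**2 / e**2
        fp = g * h**2 / (4 * e**2) + h * u / (2 * e) + h * u**2 / (2 * e**2)
        fm = g * h**2 / (4 * e**2) - h * u / (2 * e) + h * u**2 / (2 * e**2)
        return f0, fp, fm

    f0, fp, fm = equilibria(h, u)             # start at equilibrium
    for _ in range(500):
        e0, ep, em = equilibria(h, u)         # collision: relax toward equilibrium
        f0 += (e0 - f0) / tau
        fp += (ep - fp) / tau
        fm += (em - fm) / tau
        fp = np.roll(fp, 1)                   # streaming (periodic for brevity)
        fm = np.roll(fm, -1)
        h = f0 + fp + fm                      # macroscopic depth
        u = e * (fp - fm) / h                 # macroscopic velocity
    print(h.min(), h.max())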

  6. Numerical Simulation of a High-Lift Configuration with Embedded Fluidic Actuators

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Casalino, Damiano; Lin, John C.; Appelbaum, Jason

    2014-01-01

    Numerical simulations have been performed for a vertical tail configuration with deflected rudder. The suction surface of the main element of this configuration is embedded with an array of 32 fluidic actuators that produce oscillating sweeping jets. Such oscillating jets have been found to be very effective for flow control applications in the past. In the current paper, a high-fidelity computational fluid dynamics (CFD) code known as the PowerFLOW(Registered TradeMark) code is used to simulate the entire flow field associated with this configuration, including the flow inside the actuators. The computed results for the surface pressure and integrated forces compare favorably with measured data. In addition, numerical solutions predict the correct trends in forces with active flow control compared to the no-control case. Effects of varying yaw and rudder deflection angles are also presented. Finally, computations have been performed at a higher Reynolds number to assess the performance of the fluidic actuators at flight conditions.

  7. Science Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1985

    1985-01-01

    Presents 23 experiments, demonstrations, activities, and computer programs in biology, chemistry, and physics. Topics include lead in petrol, production of organic chemicals, reduction of water, enthalpy, X-ray diffraction model, nuclear magnetic resonance spectroscopy, computer simulation for additive mixing of colors, Archimedes Principle, and…

  8. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572

  9. Gray: a ray tracing-based Monte Carlo simulator for PET

    NASA Astrophysics Data System (ADS)

    Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.

    2018-05-01

    Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.

  10. Solving search problems by strongly simulating quantum circuits

    PubMed Central

    Johnson, T. H.; Biamonte, J. D.; Clark, S. R.; Jaksch, D.

    2013-01-01

    Simulating quantum circuits using classical computers lets us analyse the inner workings of quantum algorithms. The most complete type of simulation, strong simulation, is believed to be generally inefficient. Nevertheless, several efficient strong simulation techniques are known for restricted families of quantum circuits and we develop an additional technique in this article. Further, we show that strong simulation algorithms perform another fundamental task: solving search problems. Efficient strong simulation techniques allow solutions to a class of search problems to be counted and found efficiently. This enhances the utility of strong simulation methods, known or yet to be discovered, and extends the class of search problems known to be efficiently simulable. Relating strong simulation to search problems also bounds the computational power of efficiently strongly simulable circuits; if they could solve all problems in P this would imply that all problems in NP and #P could be solved in polynomial time. PMID:23390585

  11. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.

  12. Software for Brain Network Simulations: A Comparative Study

    PubMed Central

    Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.

    2017-01-01

    Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database, namely NEURON, GENESIS, and BRIAN, and perform an independent evaluation of these simulators. In addition, we study NEST, one of the lead simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability for high-performance computing reveals that NEST can almost transparently map an existing model onto a cluster or multicore computer, while NEURON requires code modification if a model developed for a single computer has to be mapped onto a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators have a bias in their computational performance toward specific types of brain network models. PMID:28775687

  13. Computer model to simulate testing at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.

    1995-01-01

    A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.

  14. Advanced Techniques for Simulating the Behavior of Sand

    NASA Astrophysics Data System (ADS)

    Clothier, M.; Bailey, M.

    2009-12-01

    Computer graphics and visualization techniques continue to provide untapped research opportunities, particularly in the earth sciences. Through collaboration with the Oregon Space Grant and IGERT Ecosystem Informatics programs, we are developing new techniques for simulating sand. In addition, we have been communicating with the Jet Propulsion Laboratory (JPL) to exchange ideas and gain feedback on our work. More specifically, JPL’s DARTS Laboratory specializes in planetary vehicle simulation, such as the Mars rovers. This simulation utilizes a virtual "sand box" to test how planetary rovers respond to different terrains while traversing them. Unfortunately, this simulation is unable to fully mimic the harsh, sandy environments found on Mars. Ideally, these simulations should allow a rover to interact with the sand beneath it, particularly for different sand granularities and densities. In particular, there may be situations where a rover becomes stuck in sand due to a lack of friction between the sand and the wheels. In fact, in May 2009, the Spirit rover became stuck in the Martian sand, which has provided additional motivation for this research. In developing a new sand simulation model, high performance computing will play a very important role. More specifically, graphics processing units (GPUs) are useful due to their ability to run general-purpose algorithms and perform massively parallel computations. In prior research, simulating vast quantities of sand has been difficult to compute in real time due to the computational complexity of many colliding particles. With GPUs, however, each particle collision can be parallelized, allowing a dramatic performance increase. In addition, spatial partitioning provides a further speed boost by limiting the number of particle-collision calculations. However, since the goal of this research is to simulate the look and behavior of sand, this work will go beyond simple particle collision. In particular, we can continue to use our parallel algorithms not only on single particles but on particle “clumps” that consist of multiple combined particles. Since sand is typically not spherical in nature, these particle “clumps” help to simulate the coarse nature of sand. In a simulation environment, multiple combined particles could be used to simulate the polygonal and granular nature of sand grains. Thus, a diversity of sand particles can be generated, and the interactions between these particles can then be parallelized using GPU hardware. As such, this research will investigate different graphics and physics techniques and determine the tradeoffs in performance and visual quality for sand simulation. An enhanced sand model built on high performance computing and GPUs has great potential to impact research for both earth and space scientists. Interaction with JPL has provided an opportunity for us to refine our simulation techniques so that they can ultimately be used in their vehicle simulator. As an added benefit of this work, advancements in simulating sand can also benefit scientists here on earth, especially in regard to understanding landslides and debris flows.
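
    The spatial-partitioning idea mentioned above can be sketched on the CPU as a uniform grid hash: particles are binned by cell so collision tests are limited to the 27 neighboring cells instead of all pairs. A GPU version would parallelize the per-particle loop; everything here, including the particle radius, is illustrative.

    # Hedged sketch of uniform-grid spatial partitioning for collision culling.
    from collections import defaultdict
    from itertools import product
    import math, random

    RADIUS = 0.05
    CELL = 2 * RADIUS                 # cell edge = particle diameter

    def build_grid(positions):
        grid = defaultdict(list)
        for i, (x, y, z) in enumerate(positions):
            grid[(int(x // CELL), int(y // CELL), int(z // CELL))].append(i)
        return grid

    def colliding_pairs(positions):
        grid = build_grid(positions)
        pairs = set()
        for i, (x, y, z) in enumerate(positions):
            cx, cy, cz = int(x // CELL), int(y // CELL), int(z // CELL)
            for dx, dy, dz in product((-1, 0, 1), repeat=3):   # 27 neighbor cells
                for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                    if j > i and math.dist(positions[i], positions[j]) < 2 * RADIUS:
                        pairs.add((i, j))
        return pairs

    random.seed(1)
    pts = [(random.random(), random.random(), random.random()) for _ in range(500)]
    print(len(colliding_pairs(pts)))   # candidate contacts, without an all-pairs sweep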

  15. A Comparative Study on Real Lab and Simulation Lab in Communication Engineering from Students' Perspectives

    ERIC Educational Resources Information Center

    Balakrishnan, B.; Woods, P. C.

    2013-01-01

    Over the years, rapid development in computer technology has engendered simulation-based laboratory (lab) in addition to the traditional hands-on (physical) lab. Many higher education institutions adopt simulation lab, replacing some existing physical lab experiments. The creation of new systems for conducting engineering lab activities has raised…

  16. Computer simulation of white pine blister rust epidemics

    Treesearch

    Geral I. McDonald; Raymond J. Hoff; William R. Wykoff

    1981-01-01

    A simulation of white pine blister rust is described in both word and mathematical models. The objective of this first generation simulation was to organize and analyze the available epidemiological knowledge to produce a foundation for integrated management of this destructive rust of 5-needle pines. Verification procedures and additional research needs are also...

  17. Aortic dissection simulation models for clinical support: fluid-structure interaction vs. rigid wall models.

    PubMed

    Alimohammadi, Mona; Sherwood, Joseph M; Karimpour, Morad; Agu, Obiekezie; Balabani, Stavroula; Díaz-Zuccarini, Vanessa

    2015-04-15

    The management and prognosis of aortic dissection (AD) are often challenging and the use of personalised computational models is being explored as a tool to improve clinical outcome. Including vessel wall motion in such simulations can provide more realistic and potentially more accurate results, but requires significant additional computational resources, as well as expertise. With clinical translation as the final aim, trade-offs between complexity, speed and accuracy are inevitable. The present study explores whether modelling wall motion is worth the additional expense in the case of AD, by carrying out fluid-structure interaction (FSI) simulations based on a sample patient case. Patient-specific anatomical details were extracted from computed tomography images to provide the fluid domain, from which the vessel wall was extrapolated. Two-way fluid-structure interaction simulations were performed, with coupled Windkessel boundary conditions and hyperelastic wall properties. The blood was modelled using the Carreau-Yasuda viscosity model and turbulence was accounted for via a shear stress transport model. A simulation without wall motion (rigid wall) was carried out for comparison purposes. The displacement of the vessel wall was comparable to reports from imaging studies in terms of intimal flap motion and contraction of the true lumen. Analysis of the haemodynamics around the proximal and distal false lumen in the FSI model showed complex flow structures caused by the expansion and contraction of the vessel wall. These flow patterns led to significantly different predictions of wall shear stress, particularly its oscillatory component, which were not captured by the rigid wall model. Through comparison with imaging data, the results of the present study indicate that the fluid-structure interaction methodology employed herein is appropriate for simulations of aortic dissection. Regions of high wall shear stress were not significantly altered by the wall motion; however, certain collocated regions of low and oscillatory wall shear stress that may be critical for disease progression were only identified in the FSI simulation. We conclude that, if patient-tailored simulations of aortic dissection are to be used as an interventional planning tool, then the additional complexity, expertise and computational expense required to model wall motion is indeed justified.
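
    A coupled Windkessel outlet of the kind mentioned above can be sketched as a three-element model: outlet pressure follows P = Rp*Q + Pc, with the distal compliance C and resistance Rd governing dPc/dt. This is a generic illustration with placeholder parameters, not the authors' implementation.

    # Hedged three-element Windkessel outlet, integrated with forward Euler.
    import math

    R_p, R_d, C = 0.05, 1.0, 1.2       # illustrative [mmHg*s/mL], [mL/mmHg]
    dt, T = 1e-3, 0.8                  # time step and cardiac period [s]

    def flow(t):
        """Toy systolic flow waveform [mL/s], standing in for the 3D model's outlet flow."""
        tc = t % T
        return 400.0 * math.sin(math.pi * tc / 0.3) if tc < 0.3 else 0.0

    P_c = 80.0                         # distal (compliance) pressure [mmHg]
    t = 0.0
    while t < 5 * T:                   # run a few cardiac cycles
        Q = flow(t)
        P_c += dt * (Q - P_c / R_d) / C    # C*dPc/dt = Q - Pc/Rd
        P_outlet = P_c + R_p * Q           # pressure handed back to the 3D solver
        t += dt
    print(P_c, P_outlet)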

  18. Remote control system for high-performance computer simulation of crystal growth by the PFC method

    NASA Astrophysics Data System (ADS)

    Pavlyuk, Evgeny; Starodumov, Ilya; Osipov, Sergei

    2017-04-01

    Modeling of the crystallization process by the phase-field crystal (PFC) method is one of the important directions of modern computational materials science. In this paper, the practical side of computer simulation of the crystallization process by the PFC method is investigated. To solve problems using this method, it is necessary to use high-performance computing clusters, data storage systems, and other, often expensive, complex computer systems. Access to such resources is often limited, unstable and accompanied by various administrative problems. In addition, the variety of software and settings across computing clusters sometimes does not allow researchers to use unified program code; the code must be adapted for each configuration of the computing complex. The practical experience of the authors has shown that creating a special control system for computations, with the possibility of remote use, can greatly simplify the implementation of simulations and increase the productivity of scientific research. In the current paper, we present the principal idea of such a system and justify its efficiency.

  19. Effectiveness evaluation of STOL transport operations (phase 2). [computer simulation program of commercial short haul aircraft operations

    NASA Technical Reports Server (NTRS)

    Welp, D. W.; Brown, R. A.; Ullman, D. G.; Kuhner, M. B.

    1974-01-01

    A computer simulation program which models a commercial short-haul aircraft operating in the civil air system was developed. The purpose of the program is to evaluate the effect of a given aircraft avionics capability on the ability of the aircraft to perform on-time carrier operations. The program outputs consist primarily of those quantities which can be used to determine direct operating costs. These include: (1) schedule reliability or delays, (2) repairs/replacements, (3) fuel consumption, and (4) cancellations. More comprehensive models of the terminal area environment were added and a simulation of an existing airline operation was conducted to obtain a form of model verification. The capability of the program to provide comparative results (sensitivity analysis) was then demonstrated by modifying the aircraft avionics capability for additional computer simulations.

  20. CHARMM additive and polarizable force fields for biophysics and computer-aided drug design

    PubMed Central

    Vanommeslaeghe, K.

    2014-01-01

    Background Molecular Mechanics (MM) is the method of choice for computational studies of biomolecular systems owing to its modest computational cost, which makes it possible to routinely perform molecular dynamics (MD) simulations on chemical systems of biophysical and biomedical relevance. Scope of Review As one of the main factors limiting the accuracy of MD results is the empirical force field used, the present paper offers a review of recent developments in the CHARMM additive force field, one of the most popular biomolecular force fields. Additionally, we present a detailed discussion of the CHARMM Drude polarizable force field, anticipating a growth in the importance and utilization of polarizable force fields in the near future. Throughout the discussion emphasis is placed on the force fields’ parametrization philosophy and methodology. Major Conclusions Recent improvements in the CHARMM additive force field are mostly related to newly found weaknesses in the previous generation of additive force fields. Beyond the additive approximation is the newly available CHARMM Drude polarizable force field, which allows for MD simulations of up to 1 microsecond on proteins, DNA, lipids and carbohydrates. General Significance Addressing the limitations ensures the reliability of the new CHARMM36 additive force field for the types of calculations that are presently coming into routine computational reach, while the availability of the Drude polarizable force fields offers an inherently more accurate model of the underlying physical forces driving macromolecular structures and dynamics. PMID:25149274

  1. Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks.

    PubMed

    Slażyński, Leszek; Bohte, Sander

    2012-01-01

    The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms to fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state for each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures to the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate plausible spiking neural networks of up to 50,000 neurons in better than real time, processing over 35 million spiking events per second.
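
    The additive update described above can be illustrated with a NumPy stand-in for the GPU kernels: with exponential (filter-based) membrane dynamics, each step is a multiplicative decay plus an additive spike-driven term, so all neurons update in parallel. Network size, weights, and drive below are illustrative assumptions; a GPU version would map the same arrays onto device memory.

    # Hedged NumPy sketch of the parallel filter-based neuron update.
    import numpy as np

    N, dt, tau, v_th = 2000, 1e-3, 20e-3, 1.0   # small N so the demo fits in RAM
    rng = np.random.default_rng(0)
    W = (rng.random((N, N)) < 0.02) * rng.normal(0.0, 0.1, (N, N))  # sparse weights

    v = np.zeros(N)
    decay = np.exp(-dt / tau)          # exact decay of the exponential filter
    for step in range(100):
        spikes = v >= v_th
        v[spikes] = 0.0                # reset neurons that fired
        drive = rng.normal(0.05, 0.02, N)        # external input
        # One parallel step: multiplicative decay, additive inputs.
        v = v * decay + W @ spikes + drive
    print(int(spikes.sum()), "spikes on the last step")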

  2. Simulation techniques in hyperthermia treatment planning

    PubMed Central

    Paulides, MM; Stauffer, PR; Neufeld, E; Maccarini, P; Kyriakou, A; Canters, RAM; Diederich, C; Bakker, JF; Van Rhoon, GC

    2013-01-01

    Clinical trials have shown that hyperthermia (HT), i.e. an increase of tissue temperature to 39-44°C, significantly enhances the effectiveness of radiotherapy and chemotherapy (1). Driven by the developments in computational techniques and computing power, personalized hyperthermia treatment planning (HTP) has matured and has become a powerful tool for optimizing treatment quality. Electromagnetic, ultrasound, and thermal simulations using realistic clinical setups are now being performed to achieve patient-specific treatment optimization. In addition, extensive studies aimed at properly implementing novel HT tools and techniques, and at assessing the quality of HT, are becoming more common. In this paper, we review the simulation tools and techniques developed for clinical hyperthermia, and evaluate their current status on the path from “model” to “clinic”. In addition, we illustrate the major techniques employed for validation and optimization. HTP has become an essential tool for improvement, control, and assessment of HT treatment quality. As such, it plays a pivotal role in the quest to establish HT as an efficacious addition to multi-modality treatment of cancer. PMID:23672453

  3. Long range Debye-Hückel correction for computation of grid-based electrostatic forces between biomacromolecules

    PubMed Central

    2014-01-01

    Background Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme. Results We found that the inclusion of the long-range electrostatic correction increased the accuracy of both the protein-protein interaction profiles and the protein diffusion coefficients at low ionic strength. Conclusions An advantage of this method is the low additional computational cost required to treat long-range electrostatic interactions in large biomacromolecular systems. Moreover, the implementation described here for BD simulations of protein solutions can also be applied in implicit solvent molecular dynamics simulations that make use of gridded interaction potentials. PMID:25045516
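
    The correction uses the screened (Debye-Hueckel) form of the electrostatic potential beyond the grid edge. As a simplified illustration, treating a solute as a single point net charge, the potential and the inverse Debye length can be computed as follows; the example charge and solution conditions are illustrative assumptions, not the SDA implementation.

    # Hedged sketch of a Debye-Hueckel screened Coulomb potential, SI units.
    import math

    EPS0 = 8.8541878128e-12                      # vacuum permittivity [F/m]
    KB, NA, E = 1.380649e-23, 6.02214076e23, 1.602176634e-19

    def inverse_debye_length(I_molar, eps_r=78.5, T=298.15):
        """kappa [1/m] for a 1:1 electrolyte of ionic strength I [mol/L]."""
        n = I_molar * 1e3 * NA                   # ion number density per m^3
        return math.sqrt(2 * n * E**2 / (EPS0 * eps_r * KB * T))

    def dh_potential(q_net, r, I_molar, eps_r=78.5):
        """Screened Coulomb potential V(r) = q*exp(-kappa*r)/(4*pi*eps*r)."""
        kappa = inverse_debye_length(I_molar, eps_r)
        return q_net * math.exp(-kappa * r) / (4 * math.pi * EPS0 * eps_r * r)

    # Illustrative: potential of a -8e net charge at 10 nm in 5 mM salt.
    print(dh_potential(-8 * E, 10e-9, 0.005))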

  4. Parallel computing in enterprise modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  5. Efficiency and Accuracy in Thermal Simulation of Powder Bed Fusion of Bulk Metallic Glass

    NASA Astrophysics Data System (ADS)

    Lindwall, J.; Malmelöv, A.; Lundbäck, A.; Lindgren, L.-E.

    2018-05-01

    Additive manufacturing by powder bed fusion can be used to create bulk metallic glass because the process yields very high cooling rates. However, there is a risk that previously deposited layers, when reheated, may devitrify, i.e., crystallize. It is therefore advantageous to simulate the process in order to understand it fully and to design it so that this risk is avoided. A detailed simulation, however, is computationally demanding: the computational speed must be increased while the accuracy of the computed temperature field is maintained in critical regions. The current study evaluates a few approaches based on temporal reduction to achieve this. The evaluated approaches are found to reduce computation time substantially while accurately predicting the temperature history.

  6. Adding computationally efficient realism to Monte Carlo turbulence simulation

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1985-01-01

    Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
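
    The one-dimensional (along-path) case already works this way in practice, which makes the idea easy to illustrate: the Dryden longitudinal gust spectrum is rational, and white noise driven through a stable, explicit first-order difference equation reproduces it. The sketch below shows such a shaping filter; the parameter values are illustrative, and the paper's actual contribution, the spanwise cross-spectra, would add further coupled filters of the same form.

    ```python
    import numpy as np

    # Illustrative shaping filter: a stable, explicit difference equation
    # driven by white noise whose output matches a rational gust spectrum
    # (here the Dryden longitudinal form). All parameter values assumed.

    def dryden_longitudinal(n_steps, dt, airspeed, scale_length, sigma, seed=0):
        """Gust velocity series u_k with correlation time L/V."""
        rng = np.random.default_rng(seed)
        tau = scale_length / airspeed        # gust correlation time, s
        a = np.exp(-dt / tau)                # recursion coefficient, |a| < 1
        b = sigma * np.sqrt(1.0 - a * a)     # keeps output variance sigma^2
        u = np.zeros(n_steps)
        for k in range(1, n_steps):
            u[k] = a * u[k - 1] + b * rng.standard_normal()
        return u

    gust = dryden_longitudinal(10_000, dt=0.01, airspeed=150.0,
                               scale_length=530.0, sigma=1.5)
    ```

    Because the recursion coefficient satisfies |a| < 1 for any time step, the filter is unconditionally stable, which is the property the simulator community cares about most.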

  7. Numerical Propulsion System Simulation (NPSS) 1999 Industry Review

    NASA Technical Reports Server (NTRS)

    Lytle, John; Follen, Greg; Naiman, Cynthia; Evans, Austin

    2000-01-01

    The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia, and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. In addition, the paper contains a summary of the feedback received from industry partners in the development effort and the actions taken over the past year to respond to that feedback. The NPSS development was supported in FY99 by the High Performance Computing and Communications Program.

  8. A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1986-01-01

    The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo simulation study evaluating the usefulness of Weibull methods for samples with very few failures and extensive censoring are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, a random censoring model was used, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs used in the SSME Weibull analysis are described. The documented methods were supplemented by adding computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, as are the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
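
    The censoring scheme described is simple to reproduce: each unit's Weibull failure time competes with an independent uniform censoring time, and only the smaller of the two is observed. A minimal sketch of that data-generation step follows; the shape, scale, censoring window, and sample sizes are made-up illustration values, and the maximum-likelihood fitting step the study applies afterwards is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    shape, scale = 2.0, 1000.0          # assumed Weibull shape and scale
    n_units, n_trials = 20, 5000        # small samples, many MC repetitions

    uncensored_fraction = np.empty(n_trials)
    for t in range(n_trials):
        life = scale * rng.weibull(shape, size=n_units)       # failure times
        censor = rng.uniform(0.0, 1.5 * scale, size=n_units)  # censoring times
        observed = np.minimum(life, censor)   # what the analyst actually sees
        failed = life <= censor               # censoring indicator per unit
        uncensored_fraction[t] = failed.mean()

    print(f"mean fraction of observed failures: {uncensored_fraction.mean():.2f}")
    ```

    The pairs (observed, failed) are what a censored-data maximum-likelihood fit would then consume.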

  9. Architectural Improvements and New Processing Tools for the Open XAL Online Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, Christopher K; Pelaia II, Tom; Freed, Jonathan M

    The online model is the component of Open XAL providing accelerator modeling, simulation, and dynamic synchronization to live hardware. Significant architectural changes and feature additions have recently been made in two separate areas: 1) the managing and processing of simulation data, and 2) the modeling of RF cavities. Simulation data and data processing have been completely decoupled. A single class manages all simulation data while standard tools were developed for processing the simulation results. RF accelerating cavities are now modeled as composite structures where parameter and dynamics computations are distributed. The beam and hardware models both maintain their relative phase information, which allows for dynamic phase slip and elapsed time computation.

  10. Computers in the Foreign Language Classroom: Not Just Another "Fad or Frill."

    ERIC Educational Resources Information Center

    Cox, Ruth

    Designed to assist foreign language teachers to become more computer literate, this paper discusses five major types of educational software currently on the market: (1) drill and practice; (2) tutorials; (3) simulations; (4) computer games; and (5) problem solvers. Possible uses for each type of program are given; in addition, specific programs…

  11. Novel 3D/VR interactive environment for MD simulations, visualization and analysis.

    PubMed

    Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P

    2014-12-18

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.

  12. Novel 3D/VR Interactive Environment for MD Simulations, Visualization and Analysis

    PubMed Central

    Doblack, Benjamin N.; Allis, Tim; Dávila, Lilian P.

    2014-01-01

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced. PMID:25549300

  13. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation. Appendix B: ROBSIM programmer's guide

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.; Wolfe, W. J.; Nguyen, T.

    1986-01-01

    The purpose of the Robotic Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. The programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With the manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.

  14. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation, appendix B

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.

    1984-01-01

    The purpose of the Robotics Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. This programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With this manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.

  15. Computer-intensive simulation of solid-state NMR experiments using SIMPSON.

    PubMed

    Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas

    2014-09-01

    Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package, adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups, and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations.

  16. Current implementation and future plans on new code architecture, programming language and user interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brun, B.

    1997-07-01

    Computer technology has improved tremendously in recent years, with larger media capacity, more memory, and more computational power. Visual computing, with high-performance graphic interfaces and desktop computational power, has changed the way engineers accomplish everyday tasks, development work, and safety studies analysis. The emergence of parallel computing will permit simulation over larger domains. In addition, new development methods, languages, and tools have appeared in the last several years.

  17. Li-Doped Ionic Liquid Electrolytes: From Bulk Phase to Interfacial Behavior

    NASA Technical Reports Server (NTRS)

    Haskins, Justin B.; Lawson, John W.

    2016-01-01

    Ionic liquids have been proposed as candidate electrolytes for high-energy density, rechargeable batteries. We present an extensive computational analysis supported by experimental comparisons of the bulk and interfacial properties of a representative set of these electrolytes as a function of Li-salt doping. We begin by investigating the bulk electrolyte using quantum chemistry and ab initio molecular dynamics to elucidate the solvation structure of Li(+). MD simulations using the polarizable force field of Borodin and coworkers were then performed, from which we obtain an array of thermodynamic and transport properties. Excellent agreement is found with experiments for diffusion, ionic conductivity, and viscosity. Combining MD simulations with electronic structure computations, we computed the electrochemical window of the electrolytes across a range of Li(+)-doping levels and comment on the role of the liquid environment. Finally, we performed a suite of simulations of these Li-doped electrolytes at ideal electrified interfaces to evaluate the differential capacitance and the equilibrium Li(+) distribution in the double layer. The magnitude of differential capacitance is in good agreement with our experiments and exhibits the characteristic camel-shaped profile. In addition, the simulations reveal Li(+) to be highly localized to the second molecular layer of the double layer, which is supported by additional computations that find this layer to be a free energy minimum with respect to Li(+) translation.
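
    Of the transport properties mentioned, diffusion is the most standard to extract, which makes it a useful concrete example: in three dimensions the Einstein relation gives D as one sixth of the long-time slope of the mean-squared displacement. The sketch below is generic MD post-processing, not the authors' code; the array layout and fit window are assumptions.

    ```python
    import numpy as np

    def diffusion_coefficient(positions, dt, fit_start, fit_stop):
        """Einstein-relation estimate of D from unwrapped coordinates.

        positions: array of shape (n_frames, n_particles, 3)
        dt: time between frames; fit_start/fit_stop bound the linear regime.
        """
        n_frames = positions.shape[0]
        lags = np.arange(1, n_frames // 2)
        # Mean-squared displacement averaged over time origins and particles.
        msd = np.array([
            np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2, axis=-1))
            for lag in lags
        ])
        t = lags * dt
        mask = (t >= fit_start) & (t <= fit_stop)   # diffusive regime only
        slope = np.polyfit(t[mask], msd[mask], 1)[0]
        return slope / 6.0                          # 3D Einstein relation
    ```

    Fitting only the late-time window matters for ionic liquids, whose sub-diffusive short-time behavior would otherwise bias the estimate.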

  18. Biocellion: accelerating computer simulation of multicellular biological system models.

    PubMed

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information.

  19. The effects of computer simulation versus hands-on dissection and the placement of computer simulation within the learning cycle on student achievement and attitude

    NASA Astrophysics Data System (ADS)

    Hopkins, Kathryn Susan

    The value of dissection as an instructional strategy has been debated, but not evidenced in research literature. The purpose of this study was to examine the efficacy of using computer simulated frog dissection as a substitute for traditional hands-on frog dissection and to examine the possible enhancement of achievement by combining the two strategies in a specific sequence. In this study, 134 biology students at two Central Texas schools were divided into the five following treatment groups: computer simulation of frog dissection, computer simulation before dissection, traditional hands-on frog dissection, dissection before computer simulation, and textual worksheet materials. The effects on achievement were evaluated by labeling 10 structures on three diagrams, identifying 11 pinned structures on a prosected frog, and answering 9 multiple-choice questions over the dissection process. Attitude was evaluated using a thirty item survey with a five-point Likert scale. The quasi-experimental design was pretest/post-test/post-test nonequivalent group for both control and experimental groups, a 2 x 2 x 5 completely randomized factorial design (gender, school, five treatments). The pretest/post-test design was incorporated to control for prior knowledge using analysis of covariance. The dissection only group evidenced a significantly higher performance than all other treatments except dissection-then-computer on the post-test segment requiring students to label pinned anatomical parts on a prosected frog. Interactions between treatment and school in addition to interaction between treatment and gender were found to be significant. The diagram and attitude post-tests evidenced no significant difference. Results on the nine multiple-choice questions about dissection procedures indicated a significant difference between schools. The interaction between treatment and school was also found to be significant. On a delayed post-test, a significant difference in gender was found on the diagram labeling segment of the post-test. Males were reported to have the higher score. Since existing research conflicts with this study's results, additional research using authentic assessment is recommended. Instruction should be aligned with dissection content and process objectives for each treatment group, and the teacher variable should be controlled.

  20. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.

  1. The updated algorithm of the Energy Consumption Program (ECP): A computer model simulating heating and cooling energy loads in buildings

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.

    1979-01-01

    The Energy Consumption Program (ECP) was developed to simulate building heating and cooling loads and to compute thermal and electric energy consumption and cost. This article reports on the new algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of computations is not sacrificed, however: the results are expected to lie within ±10% of actual energy meter readings.

  2. Parallel Signal Processing and System Simulation using aCe

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2003-01-01

    Recently, networked and cluster computation have become very popular for both signal processing and system simulation. The aCe language is well suited to parallel signal processing applications and system simulation since it allows the programmer to explicitly express the computations that can be performed concurrently. In addition, this new C-based parallel language for architecture-adaptive programming allows programmers to implement algorithms and system simulation applications on parallel architectures with the assurance that future parallel architectures will be able to run their applications with a minimum of modification. In this paper, we focus on some fundamental features of aCe and present a signal processing application (FFT).

  3. A qualitative analysis of bus simulator training on transit incidents : a case study in Florida.

    DOT National Transportation Integrated Search

    2013-06-01

    The purpose of this research was to track and observe three Florida public transit agencies as they incorporated and integrated computer-based transit bus simulators into their existing bus operator training programs. In addition to the three Florida...

  4. Gray: a ray tracing-based Monte Carlo simulator for PET.

    PubMed

    Freese, David L; Olcott, Peter D; Buss, Samuel R; Levin, Craig S

    2018-05-21

    Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a [Formula: see text] speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within [Formula: see text]% when accounting for differences in peak NECR. We also estimate the peak NECR to be [Formula: see text] kcps, or within [Formula: see text]% of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
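
    Whatever the geometry engine, photon-transport Monte Carlo codes of this kind share one sampling kernel that is easy to show: draw the free path from the total attenuation coefficient, then select the interaction type in proportion to the individual cross sections. The coefficients below are placeholders for illustration, not Gray's physics tables.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Assumed attenuation coefficients (1/mm) for one material at 511 keV.
    MU = {"photoelectric": 0.002, "compton": 0.090, "rayleigh": 0.007}

    def next_interaction():
        """Sample the free path length and the interaction type."""
        mu_total = sum(MU.values())
        path_length = -np.log(rng.uniform()) / mu_total  # exponential law
        u = rng.uniform() * mu_total                     # roulette selection
        running = 0.0
        for kind, mu in MU.items():
            running += mu
            if u < running:
                return path_length, kind

    length, kind = next_interaction()
    print(f"photon travels {length:.2f} mm, then undergoes {kind}")
    ```

    Ray tracing accelerates the step this sketch hides: finding which material, and how much of it, lies along the photon's path through a complex scanner geometry.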

  5. Quantum Spin Glasses, Annealing and Computation

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Bikas K.; Inoue, Jun-ichi; Tamura, Ryo; Tanaka, Shu

    2017-05-01

    List of tables; List of figures; Preface; 1. Introduction; Part I. Quantum Spin Glass, Annealing and Computation: 2. Classical spin models from ferromagnetic spin systems to spin glasses; 3. Simulated annealing; 4. Quantum spin glass; 5. Quantum dynamics; 6. Quantum annealing; Part II. Additional Notes: 7. Notes on adiabatic quantum computers; 8. Quantum information and quenching dynamics; 9. A brief historical note on the studies of quantum glass, annealing and computation.

  6. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  7. An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon

    NASA Technical Reports Server (NTRS)

    Rutherford, Brian

    2000-01-01

    The ability to make credible system assessments, predictions, and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities, reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainty in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluating the potential information provided by each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components: one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected and the computational simulations are performed. Examples are provided to demonstrate this approach on small-scale problems. These examples give encouraging results. Directions for further research are indicated.
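
    The uncertainty-partitioning criterion above is specific to the authors' response models, but its flavor, picking the run that promises the largest reduction in uncertainty, can be conveyed with a deliberately generic stand-in: a Gaussian-process surrogate whose predictive variance identifies where one additional run would be most informative. Everything below (kernel, length scale, candidate set) is an assumption for illustration, not the paper's method.

    ```python
    import numpy as np

    def rbf(a, b, length=0.3):
        """Squared-exponential kernel on scalar inputs."""
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

    def pick_next_run(x_done, x_candidates, noise=1e-6):
        """Return the candidate input with the largest predictive variance."""
        k_xx = rbf(x_done, x_done) + noise * np.eye(x_done.size)
        k_sx = rbf(x_candidates, x_done)
        # GP predictive variance: prior variance minus the explained part.
        # It depends only on where runs were placed, not on their outputs,
        # which is what makes this kind of pre-selection cheap.
        var = 1.0 - np.einsum('ij,ij->i', k_sx @ np.linalg.inv(k_xx), k_sx)
        return x_candidates[np.argmax(var)]

    x_done = np.array([0.1, 0.5, 0.9])        # inputs of completed simulations
    candidates = np.linspace(0.0, 1.0, 101)   # inputs under consideration
    print("next run at x =", pick_next_run(x_done, candidates))
    ```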

  8. Crocodile Technology. [CD-ROM].

    ERIC Educational Resources Information Center

    2000

    This high school physics computer software resource is a systems and control simulator that covers the topics of electricity, electronics, mechanics, and programming. Circuits can easily be simulated on the screen and electronic and mechanical components can be combined. In addition to those provided in Crocodile Technology, a student can create…

  9. Computational Investigation of Fluidic Counterflow Thrust Vectoring

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.; Deere, Karen A.

    1999-01-01

    A computational study of fluidic counterflow thrust vectoring has been conducted. Two-dimensional numerical simulations were run using the computational fluid dynamics code PAB3D with two-equation turbulence closure and linear Reynolds stress modeling. For validation, computational results were compared to experimental data obtained at the NASA Langley Jet Exit Test Facility. In general, computational results were in good agreement with experimental performance data, indicating that efficient thrust vectoring can be obtained with low secondary flow requirements (less than 1% of the primary flow). An examination of the computational flowfield has revealed new details about the generation of a countercurrent shear layer, its relation to secondary suction, and its role in thrust vectoring. In addition to providing new information about the physics of counterflow thrust vectoring, this work appears to be the first documented attempt to simulate the counterflow thrust vectoring problem using computational fluid dynamics.

  10. Computation of Unsteady Flow in Flame Trench For Prediction of Ignition Overpressure Waves

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin

    2010-01-01

    Computational processes and issues in supporting mission tasks are discussed using an example from launch environment simulation. The entire CFD process is discussed using an existing code. STS-124 conditions were revisited to support the wall repair effort for the STS-125 flight; when water bags were not included, computed results indicate that the IOP waves with the peak values were reflected from the SRB's own exhaust hole. Ares I-X simulations show that there is a shock wave going through the unused exhaust hole, but that it plays a secondary role. All three Ares I-X cases and the STS-1 simulations showed very similar IOP magnitudes and patterns on the vehicle. With the addition of water bags and water injection, the IOP effects will be further diminished.

  11. Lumber values from computerized simulation of hardwood log sawing

    Treesearch

    D.B. Richards; W.K. Adkins; H. Hallock; E.H. Bulgrin

    1980-01-01

    Computer simulation sawing programs were used to study the sawing of mathematical models of hardwood logs by live sawing and three 4-sided sawing methods. One of the 4-sided methods simulated "grade sawing" by sawing each successive board from the log face with the highest potential grade. Logs from 10 through 28 inches in diameter were sawn. In addition,...

  12. Experimental Validation of Numerical Simulations for an Acoustic Liner in Grazing Flow

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Pastouchenko, Nikolai N.; Jones, Michael G.; Watson, Willie R.

    2013-01-01

    A coordinated experimental and numerical simulation effort is carried out to improve our understanding of the physics of acoustic liners in a grazing flow, as well as our computational aeroacoustics (CAA) method prediction capability. A numerical simulation code based on advanced CAA methods is developed. In a parallel effort, experiments are performed using the Grazing Flow Impedance Tube at the NASA Langley Research Center. In the experiment, a liner is installed in the upper wall of a rectangular flow duct with a 2 inch by 2.5 inch cross section. Spatial distributions of sound pressure levels and relative phases are measured on the wall opposite the liner in the presence of a Mach 0.3 grazing flow. The computer code is validated by comparing computed results with experimental measurements. Good agreement is found. The numerical simulation code is then used to investigate the physical properties of the acoustic liner. It is shown that an acoustic liner can produce self-noise in the presence of a grazing flow and that a feedback acoustic resonance mechanism is responsible for the generation of this liner self-noise. In addition, the same mechanism also creates additional liner drag. An estimate, based on numerical simulation data, indicates that for a resonant liner with a 10% open area ratio, the drag increase would be about 4% of the turbulent boundary layer drag over a flat wall.

  13. CHARMM additive and polarizable force fields for biophysics and computer-aided drug design.

    PubMed

    Vanommeslaeghe, K; MacKerell, A D

    2015-05-01

    Molecular Mechanics (MM) is the method of choice for computational studies of biomolecular systems owing to its modest computational cost, which makes it possible to routinely perform molecular dynamics (MD) simulations on chemical systems of biophysical and biomedical relevance. As one of the main factors limiting the accuracy of MD results is the empirical force field used, the present paper offers a review of recent developments in the CHARMM additive force field, one of the most popular biomolecular force fields. Additionally, we present a detailed discussion of the CHARMM Drude polarizable force field, anticipating a growth in the importance and utilization of polarizable force fields in the near future. Throughout the discussion emphasis is placed on the force fields' parametrization philosophy and methodology. Recent improvements in the CHARMM additive force field are mostly related to newly found weaknesses in the previous generation of additive force fields. Beyond the additive approximation is the newly available CHARMM Drude polarizable force field, which allows for MD simulations of up to 1μs on proteins, DNA, lipids and carbohydrates. Addressing the limitations ensures the reliability of the new CHARMM36 additive force field for the types of calculations that are presently coming into routine computational reach while the availability of the Drude polarizable force fields offers an inherently more accurate model of the underlying physical forces driving macromolecular structures and dynamics.
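
    For readers outside the field, "additive" refers to the fixed-charge, pairwise-additive form of the potential energy function. A typical CHARMM-style additive energy function has the following form, shown here without the Urey-Bradley, improper, and CMAP correction terms; the Drude polarizable variant augments the fixed charges with charge-on-spring oscillators.

    ```latex
    U = \sum_{\text{bonds}} k_b (b - b_0)^2
      + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2
      + \sum_{\text{dihedrals}} k_\phi \left[ 1 + \cos(n\phi - \delta) \right]
      + \sum_{i<j} \varepsilon_{ij} \left[
            \left( \frac{R_{\min,ij}}{r_{ij}} \right)^{12}
          - 2 \left( \frac{R_{\min,ij}}{r_{ij}} \right)^{6} \right]
      + \sum_{i<j} \frac{q_i q_j}{4\pi \varepsilon_0 \varepsilon r_{ij}}
    ```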

  14. Additivity of Factor Effects in Reading Tasks Is Still a Challenge for Computational Models: Reply to Ziegler, Perry, and Zorzi (2009)

    ERIC Educational Resources Information Center

    Besner, Derek; O'Malley, Shannon

    2009-01-01

    J. C. Ziegler, C. Perry, and M. Zorzi (2009) have claimed that their connectionist dual process model (CDP+) can simulate the data reported by S. O'Malley and D. Besner. Most centrally, they have claimed that the model simulates additive effects of stimulus quality and word frequency on the time to read aloud when words and nonwords are randomly…

  15. Collaborative Simulation Grid: Multiscale Quantum-Mechanical/Classical Atomistic Simulations on Distributed PC Clusters in the US and Japan

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv; Nakano, Aiichiro; Vashishta, Priya; Iyetomi, Hiroshi; Ogata, Shuji; Kouno, Takahisa; Shimojo, Fuyuki; Tsuruta, Kanji; Saini, Subhash; hide

    2002-01-01

    A multidisciplinary, collaborative simulation has been performed on a Grid of geographically distributed PC clusters. The multiscale simulation approach seamlessly combines i) atomistic simulation based on the molecular dynamics (MD) method and ii) quantum mechanical (QM) calculation based on the density functional theory (DFT), so that accurate but less scalable computations are performed only where they are needed. The multiscale MD/QM simulation code has been Grid-enabled using i) a modular, additive hybridization scheme, ii) multiple QM clustering, and iii) computation/communication overlapping. The Gridified MD/QM simulation code has been used to study environmental effects of water molecules on fracture in silicon. A preliminary run of the code has achieved a parallel efficiency of 94% on 25 PCs distributed over 3 PC clusters in the US and Japan, and a larger test involving 154 processors on 5 distributed PC clusters is in progress.
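
    In additive hybridization schemes of this general kind, the quantum correction enters as a difference term evaluated on each embedded cluster, so the expensive QM calculation is confined to the regions that need it. A generic form (not necessarily the authors' exact expression) is:

    ```latex
    E_{\text{total}} = E^{\text{MD}}_{\text{system}}
      + \sum_{c \in \text{QM clusters}}
        \left( E^{\text{QM}}_{c} - E^{\text{MD}}_{c} \right)
    ```

    Because each cluster's correction is independent of the others, the QM terms can be farmed out to separate clusters of the Grid, which is what makes the approach naturally distributable.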

  16. Multi-man flight simulator

    NASA Technical Reports Server (NTRS)

    Macdonald, G.

    1983-01-01

    A prototype Air Traffic Control facility and multi-man flight simulator facility were designed, and one of the component simulators was fabricated as a proof of concept. The facility was designed to provide a number of independent, simple simulator cabs with some local, stand-alone processing capability that would in turn interface with a larger host computer. The system can accommodate up to eight flight simulators (commercially available instrument trainers), which can be operated stand-alone if no graphics are required or can operate in a common simulated airspace if connected to the host computer. A proposed addition to the original design is the capability of feeding pilot inputs and the quantities displayed on the flight and navigation instruments to the microcomputer when a simulator operates in stand-alone mode, allowing independent use of these commercially available instrument trainers for research. The conceptual design of the system and the progress made to date on its implementation are described.

  17. Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations

    PubMed Central

    Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey

    2012-01-01

    Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N²) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID:22897254
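
    The kernel these methods accelerate is the computation of a correlated noise vector y = D^(1/2) z from the diffusion matrix D without ever factorizing D. A bare-bones Lanczos version is sketched below (fixed subspace size, no reorthogonalization or convergence test, and a dense stand-in matrix); it is a simplification of what a production Brownian dynamics code would do, not the paper's algorithm.

    ```python
    import numpy as np

    def lanczos_sqrt(matvec, z, m=40):
        """Approximate sqrt(B) @ z for SPD B given only B-vector products."""
        n = z.shape[0]
        V = np.zeros((n, m))
        alpha = np.zeros(m)
        beta = np.zeros(m - 1)
        V[:, 0] = z / np.linalg.norm(z)
        for j in range(m):                   # build the Krylov basis
            w = matvec(V[:, j])
            if j > 0:
                w -= beta[j - 1] * V[:, j - 1]
            alpha[j] = V[:, j] @ w
            w -= alpha[j] * V[:, j]
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                V[:, j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        evals, evecs = np.linalg.eigh(T)     # T is only m x m
        sqrt_T = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
        return np.linalg.norm(z) * (V @ sqrt_T[:, 0])   # ||z|| V sqrt(T) e1

    # Dense SPD stand-in for a diffusion matrix (illustration only).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200))
    B = A @ A.T + 200.0 * np.eye(200)
    z = rng.standard_normal(200)                 # uncorrelated noise
    y = lanczos_sqrt(lambda v: B @ v, z)         # correlated noise

    w, Q = np.linalg.eigh(B)                     # exact check, small n only
    exact = Q @ (np.sqrt(w) * (Q.T @ z))
    print(np.linalg.norm(y - exact) / np.linalg.norm(exact))
    ```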

  18. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C. J.; Edwards, M. J.; McCoy, M. G.

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high-performance computing platform capabilities on which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure that numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  19. Real-Time Computer-Mediated Communication: Email and Instant Messaging Simulation

    ERIC Educational Resources Information Center

    Newman, Amy

    2007-01-01

    As computer-mediated communication becomes increasingly prevalent in the workplace, students need to apply effective writing principles to today's technologies. Email, in particular, requires interns and new hires to manage incoming messages, use an appropriate tone, and craft clear, concise messages. In addition, with instant messaging (IM)…

  20. Software for Simulating a Complex Robot

    NASA Technical Reports Server (NTRS)

    Goza, S. Michael

    2003-01-01

    RoboSim (Robot Simulation) is a computer program that simulates the poses and motions of Robonaut, a developmental anthropomorphic robot that has a complex system of joints with 43 degrees of freedom and multiple modes of operation and control. RoboSim performs a full kinematic simulation of all degrees of freedom. It also includes interface components that duplicate the functionality of the real Robonaut interface with control software and human operators. Basically, users see no difference between the real Robonaut and the simulation. Consequently, new control algorithms can be tested by computational simulation, without risk to the Robonaut hardware, and without using excessive Robonaut-hardware experimental time, which is always at a premium. Previously developed software incorporated into RoboSim includes Enigma (for graphical displays), OSCAR (for kinematical computations), and NDDS (for communication between the Robonaut and external software). In addition, RoboSim incorporates unique inverse-kinematical algorithms for chains of joints that have fewer than six degrees of freedom (e.g., finger joints). In comparison with the algorithms of OSCAR, these algorithms are more readily adaptable and provide better results when using equivalent sets of data.
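
    The abstract does not spell those inverse-kinematics algorithms out, but the underlying difficulty is easy to demonstrate: with fewer than six degrees of freedom the Jacobian is not square, and a damped least-squares iteration is one common remedy because it converges to a sensible least-squares pose even when a target is not exactly reachable. The planar three-joint sketch below is generic; the link lengths, damping, and target are illustrative values, not Robonaut data.

    ```python
    import numpy as np

    LINKS = np.array([0.04, 0.03, 0.02])   # assumed link lengths, metres

    def fingertip(theta):
        """Planar forward kinematics: fingertip (x, y) for joint angles."""
        angles = np.cumsum(theta)
        return np.array([np.dot(LINKS, np.cos(angles)),
                         np.dot(LINKS, np.sin(angles))])

    def jacobian(theta, h=1e-6):
        """Numerical 2x3 Jacobian of the fingertip position."""
        J = np.empty((2, 3))
        for i in range(3):
            d = np.zeros(3)
            d[i] = h
            J[:, i] = (fingertip(theta + d) - fingertip(theta - d)) / (2.0 * h)
        return J

    def solve_ik(target, theta0=None, damping=1e-3, iters=200):
        theta = np.zeros(3) if theta0 is None else np.asarray(theta0, float)
        for _ in range(iters):
            err = target - fingertip(theta)
            J = jacobian(theta)
            # Damped least-squares step: (J^T J + lambda I)^-1 J^T err.
            theta = theta + np.linalg.solve(
                J.T @ J + damping * np.eye(3), J.T @ err)
        return theta

    print(solve_ik(np.array([0.05, 0.05])))   # reachable planar target
    ```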

  1. Web-Based Computational Chemistry Education with CHARMMing I: Lessons and Tutorial

    PubMed Central

    Miller, Benjamin T.; Singh, Rishi P.; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S.; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R.; Woodcock, H. Lee

    2014-01-01

    This article describes the development, implementation, and use of web-based “lessons” to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that “point and click” simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance. PMID:25057988

  2. Web-based computational chemistry education with CHARMMing I: Lessons and tutorial.

    PubMed

    Miller, Benjamin T; Singh, Rishi P; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R; Woodcock, H Lee

    2014-07-01

    This article describes the development, implementation, and use of web-based "lessons" to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that "point and click" simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance.

  3. 3D Parallel Multigrid Methods for Real-Time Fluid Simulation

    NASA Astrophysics Data System (ADS)

    Wan, Feifei; Yin, Yong; Zhang, Suiyu

    2018-03-01

    The multigrid method is widely used in fluid simulation because of its strong convergence. Beyond accuracy, computational efficiency is also an important factor in enabling real-time fluid simulation in computer graphics. For this problem, we compare the performance of the algebraic multigrid and the geometric multigrid in the V-Cycle and Full-Cycle schemes, respectively, and analyze the convergence and speed of the different methods. All calculations are performed with parallel computing on the GPU. Finally, we run experiments on 3D grids at several scales and report the experimental results.
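
    In one dimension the entire V-cycle fits in a few lines, which makes the structure being benchmarked above easy to see; the 3D GPU solvers elaborate the same smooth/restrict/correct/prolong skeleton. A textbook sketch for the Poisson problem -u'' = f with a weighted-Jacobi smoother (grid sizes and sweep counts are illustrative choices):

    ```python
    import numpy as np

    def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
        """Weighted-Jacobi sweeps for -u'' = f on a uniform grid."""
        for _ in range(sweeps):
            u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
                                - u[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        return r

    def v_cycle(u, f, h):
        if u.size <= 3:                      # coarsest grid: solve directly
            u[1:-1] = 0.5 * h * h * f[1:-1]
            return u
        u = smooth(u, f, h)                  # pre-smoothing
        r = residual(u, f, h)
        rc = np.zeros((u.size + 1) // 2)     # restrict residual (full weighting)
        rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
        ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
        e = np.zeros_like(u)                 # prolong correction (interpolation)
        e[::2] = ec
        e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
        return smooth(u + e, f, h)           # post-smoothing

    n = 129                                  # 2^k + 1 points on [0, 1]
    x = np.linspace(0.0, 1.0, n)
    f = np.pi ** 2 * np.sin(np.pi * x)       # exact solution: sin(pi x)
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, f, 1.0 / (n - 1))
    print("max error:", np.abs(u - np.sin(np.pi * x)).max())
    ```

    The geometric variant above exploits the regular grid directly; an algebraic multigrid builds its coarse levels from the matrix instead, which is why the two are worth comparing on GPUs.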

  4. Extended MHD modeling of nonlinear instabilities in fusion and space plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Germaschewski, Kai

    A number of different sub-projects were pursued within this DOE early career project. The primary focus was on using fully nonlinear, curvilinear, extended MHD simulations of instabilities with applications to fusion and space plasmas. In particular, we performed comprehensive studies of the dynamics of the double tearing mode in different regimes and configurations, using Cartesian and cylindrical geometry and investigating both linear and nonlinear dynamics. In addition to traditional extended MHD involving the Hall term and electron pressure gradient, we also employed a new multi-fluid moment model, which shows great promise for incorporating kinetic effects, in particular off-diagonal elements of the pressure tensor, in a fluid model, which is naturally computationally much cheaper than fully kinetic particle or Vlasov simulations. We used our Vlasov code for detailed studies of how weak collisions affect plasma echoes. In addition, we have played an important supporting role working with the PPPL theory group around Will Fox and Amitava Bhattacharjee on providing simulation support for HED plasma experiments performed at high-powered laser facilities like OMEGA-EP in Rochester, NY. This project has supported a great number of computational advances in our fluid and kinetic plasma models, and has been crucial to winning multiple INCITE computer time awards that supported our computational modeling.

  5. Lower- and higher-order aberrations predicted by an optomechanical model of arcuate keratotomy for astigmatism.

    PubMed

    Navarro, Rafael; Palos, Fernando; Lanchares, Elena; Calvo, Begoña; Cristóbal, José A

    2009-01-01

    To develop a realistic model of the optomechanical behavior of the cornea after curved relaxing incisions, to simulate the induced astigmatic change, and to predict the optical aberrations produced by the incisions. ICMA Consejo Superior de Investigaciones Científicas and Universidad de Zaragoza, Zaragoza, Spain. A 3-dimensional finite element model of the anterior hemisphere of the ocular surface was used. The corneal tissue was modeled with a quasi-incompressible, anisotropic hyperelastic constitutive behavior strongly dependent on the physiological collagen fibril distribution. Similar behaviors were assigned to the limbus and sclera. With this model, corneal incisions were computer simulated according to the Lindstrom nomogram. The resulting geometry of the biomechanical simulation was analyzed in the optical zone, and finite ray tracing was performed to compute refractive power and higher-order aberrations (HOAs). The finite-element simulation provided the new geometry of the corneal surfaces, from which elevation topographies were obtained. The surgically induced astigmatism (SIA) of the simulated incisions according to the Lindstrom nomogram was computed by finite ray tracing; paraxial computations, however, would yield slightly different results (undercorrection of astigmatism). In addition, arcuate incisions would induce significant amounts of HOAs. Finite-element models, together with finite ray-tracing computations, yielded realistic simulations of the biomechanical and optical changes induced by relaxing incisions. The model reproduced the SIA indicated by the Lindstrom nomogram for the simulated incisions and predicted a significant increase in optical aberrations induced by arcuate keratotomy.

  6. Toward high-efficiency and detailed Monte Carlo simulation study of the granular flow spallation target

    NASA Astrophysics Data System (ADS)

    Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei

    2018-02-01

    The dense granular flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform simulation studies of the beam-target interaction. Owing to the complexity of the target geometry, the computational cost of the MC simulation of particle tracks is very high. Thus, improving computational efficiency is essential for detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high-efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.

  7. Simulation of solid-liquid flows in a stirred bead mill based on computational fluid dynamics (CFD)

    NASA Astrophysics Data System (ADS)

    Winardi, S.; Widiyastuti, W.; Septiani, E. L.; Nurtono, T.

    2018-05-01

    The selection of the simulation model is an important step in computational fluid dynamics (CFD) for obtaining agreement with experimental work. In addition, computational time and processor speed also influence the performance of the simulation. Here, we report the simulation of solid-liquid flow in a bead mill using the Eulerian model. The Multiple Reference Frame (MRF) approach was used to model the interaction between the moving zone (shaft and disk) and the stationary zone (the chamber excluding the shaft and disk). The bead mill dimensions were based on the experimental work of Yamada and Sakai (2013). The effect of shaft rotation speeds of 1200 and 1800 rpm on the particle distribution and the flow field is discussed. At 1200 rpm, the particles spread evenly throughout the bead mill chamber. At 1800 rpm, by contrast, the particles tend to be thrown toward the near-wall region, resulting in a dead zone with no particles in the center region. The selected model agreed well with the experimental data, with average discrepancies of less than 10%. Furthermore, the simulation ran without excessive computational cost.

  8. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component of the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated it into the TeraShake computational platform, which provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the parallel performance of the TS-AWP on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  9. Seismic waveform modeling over cloud

    NASA Astrophysics Data System (ADS)

    Luo, Cong; Friederich, Wolfgang

    2016-04-01

    With fast-growing computational technologies, numerical simulation of seismic wave propagation has achieved great success, and obtaining synthetic waveforms through numerical simulation is receiving increasing attention from seismologists. However, computational seismology is a data-intensive research field, and the numerical packages usually come with a steep learning curve: users are expected to master a considerable amount of computer knowledge and data processing skills. Training users to use the numerical packages and to access and utilize computational resources correctly is a difficult task. Beyond that, access to HPC is also a common difficulty for many users. To address these problems, a cloud-based solution dedicated to shallow seismic waveform modeling has been developed with state-of-the-art web technologies. It is a web platform integrating both software and hardware in a multilayer architecture: a well-designed SQL database serves as the data layer, while HPC and a dedicated pipeline for it form the business layer. Through this platform, users no longer need to compile and manipulate various packages on a local machine within a local network to perform a simulation. By providing professional access to the computational code through its interfaces and delivering our computational resources to users over the cloud, the platform allows users to customize simulations at expert level and to submit and run jobs through it.

  10. Experience with Ada on the F-18 High Alpha Research Vehicle Flight Test Program

    NASA Technical Reports Server (NTRS)

    Regenie, Victoria A.; Earls, Michael; Le, Jeanette; Thomson, Michael

    1992-01-01

    Considerable experience was acquired with Ada at the NASA Dryden Flight Research Facility during the on-going High Alpha Technology Program. In this program, an F-18 aircraft was highly modified by the addition of thrust-vectoring vanes to the airframe. In addition, substantial alteration was made in the original quadruplex flight control system. The result is the High Alpha Research Vehicle. An additional research flight control computer was incorporated in each of the four channels. Software for the research flight control computer was written in Ada. To date, six releases of this software have been flown. This paper provides a detailed description of the modifications to the research flight control system. Efficient ground-testing of the software was accomplished by using simulations that used Ada for portions of their software. These simulations are also described. Modifying and transferring the Ada flight software to the software simulation configuration has allowed evaluation of this language. This paper also discusses such significant issues in using Ada as portability, modifiability, and testability, as well as documentation requirements.

  11. Experience with Ada on the F-18 High Alpha Research Vehicle flight test program

    NASA Technical Reports Server (NTRS)

    Regenie, Victoria A.; Earls, Michael; Le, Jeanette; Thomson, Michael

    1994-01-01

    Considerable experience has been acquired with Ada at the NASA Dryden Flight Research Facility during the on-going High Alpha Technology Program. In this program, an F-18 aircraft has been highly modified by the addition of thrust-vectoring vanes to the airframe. In addition, substantial alteration was made in the original quadruplex flight control system. The result is the High Alpha Research Vehicle. An additional research flight control computer was incorporated in each of the four channels. Software for the research flight control computer was written in Ada. To date, six releases of this software have been flown. This paper provides a detailed description of the modifications to the research flight control system. Efficient ground-testing of the software was accomplished by using simulations that used Ada for portions of their software. These simulations are also described. Modifying and transferring the Ada flight software to the software simulation configuration has allowed evaluation of this language. This paper also discusses such significant issues in using Ada as portability, modifiability, and testability, as well as documentation requirements.

  12. Computer Modeling of Direct Metal Laser Sintering

    NASA Technical Reports Server (NTRS)

    Cross, Matthew

    2014-01-01

    A computational approach to modeling the direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is determining the temperature history of parts fabricated using DMLS in order to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with embedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.

  13. Computational chemistry

    NASA Technical Reports Server (NTRS)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the Numerical Aerodynamic Simulation (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has application in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  14. Simulation Of Seawater Intrusion With 2D And 3D Models: Nauru Island Case Study

    NASA Astrophysics Data System (ADS)

    Ghassemi, F.; Jakeman, A. J.; Jacobson, G.; Howard, K. W. F.

    1996-03-01

    With the advent of large computing capacities during the past few decades, sophisticated models have been developed for the simulation of seawater intrusion in coastal and island aquifers. Currently, several models are commercially available for the simulation of this problem. This paper describes the mathematical basis and application of the SUTRA and HST3D models to simulate seawater intrusion in Nauru Island, in the central Pacific Ocean. A comparison of the performance and limitations of these two models in simulating a real problem indicates that three-dimensional simulation of seawater intrusion with the HST3D model has the major advantage of being able to specify natural boundary conditions as well as pumping stresses. However, HST3D requires a small grid size and short time steps in order to maintain numerical stability and accuracy. These requirements lead to solution of a large set of linear equations that requires the availability of powerful computing facilities in terms of memory and computing speed. Combined results of the two simulation models indicate a safe pumping rate of 400 m3/d for the aquifer on Nauru Island, where additional fresh water is presently needed for the rehabilitation of mined-out land.

  15. Large Eddy Simulation Study for Fluid Disintegration and Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2011-01-01

    A new modeling approach is based on the concept of large eddy simulation (LES), within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce into the simulation the physics lost because the computation only resolves the large scales. These models are called subgrid-scale (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equations, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that, without the additional term in the momentum equation, the high density-gradient magnitude regions experimentally identified as a characteristic feature of these flows would not be accurately predicted; these regions were experimentally shown to redistribute turbulence in the flow. It was also inferred that, without the additional term in the energy equation, the heat flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines necessary wall material properties. The present work involves situations where only the term in the momentum equation is important. Without this additional term in the momentum equation, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant-coefficient Scale-Similarity model was the most successful in this endeavor, although not totally satisfactory. With a model for the additional term in the momentum equation, the predictions of the constant-coefficient Smagorinsky and constant-coefficient Scale-Similarity models improved to a certain extent; however, most of the improvement was obtained for the Gradient model. The previously derived model and a newly developed model for the additional term in the momentum equation were both tested, with the new model proving even more successful than the previous model at reproducing the high density-gradient magnitude regions. Several dynamic SGS-flux models, in which the SGS-flux model coefficient is computed as part of the simulation, were tested in conjunction with the new model for this additional term in the momentum equation. The most successful dynamic model was a "mixed" model combining the Smagorinsky and Gradient models. This work is directly applicable to simulations of gas turbine engines (aeronautics) and rocket engines (astronautics).
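
    To make the SGS closures compared above concrete, the sketch below shows what the simplest of them, the constant-coefficient Smagorinsky model, computes: an eddy viscosity built from the resolved strain-rate magnitude, nu_t = (Cs*Delta)^2 |S|. This is a minimal illustration on a uniform periodic grid, not the authors' implementation; the coefficient value and the differencing scheme are assumptions.

        import numpy as np

        def smagorinsky_nu_t(u, v, w, dx, Cs=0.17):
            """SGS eddy viscosity nu_t = (Cs*dx)**2 * |S| on a uniform grid."""
            grad = [np.gradient(f, dx, dx, dx) for f in (u, v, w)]
            S2 = np.zeros_like(u)
            for i in range(3):
                for j in range(3):
                    Sij = 0.5 * (grad[i][j] + grad[j][i])  # resolved strain rate
                    S2 += 2.0 * Sij * Sij                  # accumulate 2 S_ij S_ij
            return (Cs * dx) ** 2 * np.sqrt(S2)            # |S| = sqrt(2 S_ij S_ij)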

  16. Modeling a Wireless Network for International Space Station

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Yaprak, Ece; Lamouri, Saad

    2000-01-01

    This paper describes the application of wireless local area network (LAN) simulation modeling methods to the hybrid LAN architecture designed for supporting crew-computing tools aboard the International Space Station (ISS). These crew-computing tools, such as wearable computers and portable advisory systems, will provide crew members with real-time vehicle and payload status information and access to digital technical and scientific libraries, significantly enhancing human capabilities in space. A wireless network, therefore, will provide wearable computers and remote instruments with the high-performance computational power needed by next-generation 'intelligent' software applications. Wireless network performance in such simulated environments is characterized by the sustainable throughput of data under different traffic conditions. This data will be used to help plan the addition of more access points supporting new modules and more nodes for increased network capacity as the ISS grows.

  17. Using Palm Technology in Participatory Simulations of Complex Systems: A New Take on Ubiquitous and Accessible Mobile Computing

    NASA Astrophysics Data System (ADS)

    Klopfer, Eric; Yoon, Susan; Perry, Judy

    2005-09-01

    This paper reports on teachers' perceptions of the educational affordances of a handheld application called Participatory Simulations. It presents evidence from five cases representing each of the populations who work with these computational tools. Evidence across multiple data sources yields results similar to previous research evaluations of handheld activities with respect to enhancing motivation, engagement and self-directed learning. Three additional themes are discussed that provide insight into understanding the curricular applicability of Participatory Simulations and that suggest a new take on ubiquitous and accessible mobile computing. These themes generally point to the multiple layers of social and cognitive flexibility intrinsic to their design: ease of adaptation to subject-matter content knowledge and curricular integration; facility in attending to teacher-individualized goals; and encouraging the adoption of learner-centered strategies.

  18. MOOSE: A parallel computational framework for coupled systems of nonlinear equations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derek Gaston; Chris Newman; Glen Hansen

    Systems of coupled, nonlinear partial differential equations (PDEs) often arise in simulation of nuclear processes. MOOSE: Multiphysics Object Oriented Simulation Environment, a parallel computational framework targeted at the solution of such systems, is presented. As opposed to traditional data-flow oriented computational frameworks, MOOSE is instead founded on the mathematical principle of Jacobian-free Newton-Krylov (JFNK) solution methods. Utilizing the mathematical structure present in JFNK, physics expressions are modularized into "Kernels," allowing for rapid production of new simulation tools. In addition, systems are solved implicitly and fully coupled, employing physics-based preconditioning, which provides great flexibility even with large variance in time scales. A summary of the mathematics, an overview of the structure of MOOSE, and several representative solutions from applications built on the framework are presented.
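
    A minimal sketch of the Jacobian-free Newton-Krylov idea on which MOOSE is founded (illustrative only; this is not MOOSE's C++ API): the Krylov solver never forms the Jacobian, it only needs the Jacobian's action on a vector, which is approximated by a finite difference of the residual.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def jfnk_solve(residual, u0, tol=1e-8, max_newton=20, eps=1e-7):
            u = u0.copy()
            for _ in range(max_newton):
                r = residual(u)
                if np.linalg.norm(r) < tol:
                    break
                # Matrix-free Jacobian action: J v ~ (R(u + eps*v) - R(u)) / eps
                J = LinearOperator((u.size, u.size),
                                   matvec=lambda v: (residual(u + eps * v) - r) / eps)
                du, _ = gmres(J, -r)   # inner Krylov solve of J du = -R(u)
                u = u + du             # Newton update
            return u

        # Toy coupled "physics": R(u) = u**3 + u - 1, solved componentwise.
        print(jfnk_solve(lambda u: u**3 + u - 1.0, np.zeros(4)))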

  19. Estimating the economic opportunity cost of water use with river basin simulators in a computationally efficient way

    NASA Astrophysics Data System (ADS)

    Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.

    2017-04-01

    The marginal opportunity cost of water refers to the benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management, as it can be used for better water allocation or better system operation and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet such models' use of optimization means they have difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding a small quantity of water to the model at the place and time where the opportunity cost should be computed, then running a simulation and comparing the difference in system benefits. The added system benefits per unit of water added to the system then provide an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. Yet this method requires one simulation run per node and per time step, which is computationally demanding for large-scale systems and short time steps (e.g., a day or a week). Besides, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time. Both backward inductions only require linear operations, and the resulting algorithm tracks the maximal benefit that can be obtained by having an additional unit of water at any node in the network and at any date in time. Results 1) can be obtained from the results of a rule-based simulation using a single post-processing run, and 2) are exactly the (gross) benefit forgone by not allocating an additional unit of water to its most productive use. The proposed method is applied to London's water resource system to track the value of storage in the city's water supply reservoirs on the Thames River throughout a weekly 85-year simulation. Results, obtained in 0.4 seconds on a single processor, reflect the environmental cost of water shortage. This fast computation allows visualizing the seasonal variations of the opportunity cost depending on reservoir levels, demonstrating the potential of this approach for exploring water values and their variations using simulation models with multiple runs (e.g., of stochastically generated plausible future river inflows).
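
    A minimal sketch of the double backward induction described above, under simplifying assumptions (nodes topologically ordered so that downstream indices are larger, lossless routing, and lossless storage); the network and benefit data are hypothetical:

        import numpy as np

        def marginal_water_value(mb, downstream, has_storage):
            """mb[n, t]: marginal benefit of using one extra unit at node n, time t.
            downstream[n]: index of the next node downriver (-1 at the outlet);
            nodes are assumed numbered so that downstream[n] > n.
            has_storage[n]: True if node n can carry water to time t + 1."""
            n_nodes, n_steps = mb.shape
            value = np.zeros_like(mb)
            for t in reversed(range(n_steps)):            # backwards in time
                for n in reversed(range(n_nodes)):        # outlet -> headwaters
                    best = mb[n, t]                       # use the unit locally
                    if downstream[n] >= 0:
                        best = max(best, value[downstream[n], t])  # pass it on
                    if has_storage[n] and t + 1 < n_steps:
                        best = max(best, value[n, t + 1])          # store it
                    value[n, t] = best
            return value

        # Hypothetical 3-node river (0 -> 1 -> 2 -> outlet) over 4 time steps.
        mb = np.array([[1.0, 0.5, 0.2, 0.1],
                       [0.3, 0.8, 0.4, 0.2],
                       [0.2, 0.1, 1.5, 0.3]])
        print(marginal_water_value(mb, downstream=np.array([1, 2, -1]),
                                   has_storage=np.array([False, True, False])))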

  20. Challenges & Roadmap for Beyond CMOS Computing Simulation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodrigues, Arun F.; Frank, Michael P.

    Simulating HPC systems is a difficult task and the emergence of “Beyond CMOS” architectures and execution models will increase that difficulty. This document presents a “tutorial” on some of the simulation challenges faced by conventional and non-conventional architectures (Section 1) and goals and requirements for simulating Beyond CMOS systems (Section 2). These provide background for proposed short- and long-term roadmaps for simulation efforts at Sandia (Sections 3 and 4). Additionally, a brief explanation of a proof-of-concept integration of a Beyond CMOS architectural simulator is presented (Section 2.3).

  1. Design of a real-time wind turbine simulator using a custom parallel architecture

    NASA Technical Reports Server (NTRS)

    Hoffman, John A.; Gluck, R.; Sridhar, S.

    1995-01-01

    The design of a new parallel-processing digital simulator is described. The new simulator has been developed specifically for analysis of wind energy systems in real time. The new processor has been named the Wind Energy System Time-domain simulator, version 3 (WEST-3). Like previous WEST versions, WEST-3 performs many computations in parallel. The modules in WEST-3 are pure digital processors, however. These digital processors can be programmed individually and operated in concert to achieve real-time simulation of wind turbine systems. Because of this programmability, WEST-3 is very much more flexible and general than its two predecessors. The design features of WEST-3 are described to show how the system produces high-speed solutions of nonlinear time-domain equations. WEST-3 has two very fast Computational Units (CU's) that use minicomputer technology plus special architectural features that make them many times faster than a microcomputer. These CU's are needed to perform the complex computations associated with the wind turbine rotor system in real time. The parallel architecture of the CU causes several tasks to be done in each cycle, including an I/O operation and the combination of a multiply, add, and store. The WEST-3 simulator can be expanded at any time for additional computational power. This is possible because the CU's interface to each other and to other portions of the simulation using special serial buses. These buses can be 'patched' together in essentially any configuration (in a manner very similar to the programming methods used in analog computation) to balance the input/output requirements. CU's can be added in any number to share a given computational load. This flexible bus feature is very different from that of many other parallel processors, which usually have a throughput limit because of rigid bus architecture.

  2. Efficiently passing messages in distributed spiking neural network simulation.

    PubMed

    Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan

    2013-01-01

    Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased, so has the size of the computing systems required to simulate them. In addition, information exchange among these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware, is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
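
    A minimal sketch of one spike-exchange pattern of the kind benchmarked in such studies, written with mpi4py rather than the C MPI interface used by production simulators: an all-to-all handshake of per-rank spike counts followed by nonblocking point-to-point payloads. The spike-ID format is an assumption for illustration (run with, e.g., mpirun -n 4).

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # IDs of neurons that spiked this step, grouped by destination rank.
        outgoing = [np.array([rank * 100 + d], dtype='i') for d in range(size)]

        # Step 1: every rank learns how many spikes to expect from every other rank.
        counts = comm.alltoall([len(m) for m in outgoing])

        # Step 2: exchange the spike IDs themselves with nonblocking sends.
        reqs = [comm.Isend(m, dest=d) for d, m in enumerate(outgoing)]
        incoming = []
        for src in range(size):
            buf = np.empty(counts[src], dtype='i')
            comm.Recv(buf, source=src)
            incoming.append(buf)
        MPI.Request.Waitall(reqs)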

  3. Estimation of conformational entropy in protein-ligand interactions: a computational perspective.

    PubMed

    Polyansky, Anton A; Zubac, Ruben; Zagrovic, Bojan

    2012-01-01

    Conformational entropy is an important component of the change in free energy upon binding of a ligand to its target protein. As a consequence, the development of computational techniques for reliable estimation of conformational entropies is currently receiving an increased level of attention in the context of computational drug design. Here, we review the most commonly used techniques for conformational entropy estimation from classical molecular dynamics simulations. Although by and large still not directly used in practical drug design, these techniques provide a gold standard for developing other, computationally less demanding methods for such applications, in addition to furthering our understanding of protein-ligand interactions in general. In particular, we focus on the quasi-harmonic approximation and discuss different approaches that can be used to go beyond it, most notably when it comes to treating anharmonic and/or correlated motions. In addition to reviewing basic theoretical formalisms, we provide a concrete set of steps required to successfully calculate conformational entropy from molecular dynamics simulations, as well as discuss a number of practical issues that may arise in such calculations.
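
    A minimal sketch of the quasi-harmonic approximation discussed above: quasi-harmonic frequencies are obtained from the eigenvalues of the mass-weighted covariance matrix of the trajectory, and the entropy is that of the corresponding quantum harmonic oscillators. SI units, the zero-mode cutoff, and prior removal of overall translation/rotation are assumptions.

        import numpy as np

        kB, hbar = 1.380649e-23, 1.054571817e-34   # SI units

        def quasi_harmonic_entropy(xyz, masses, T=300.0):
            """xyz: (n_frames, n_atoms, 3) coordinates in meters; masses in kg."""
            n_frames, n_atoms, _ = xyz.shape
            x = xyz.reshape(n_frames, 3 * n_atoms)
            w = np.sqrt(np.repeat(masses, 3))       # mass-weighting per coordinate
            xw = (x - x.mean(axis=0)) * w
            cov = np.cov(xw, rowvar=False)          # mass-weighted covariance
            lam = np.linalg.eigvalsh(cov)
            lam = lam[lam > 1e-48]                  # drop near-zero modes (cutoff assumed)
            omega = np.sqrt(kB * T / lam)           # quasi-harmonic frequencies
            a = hbar * omega / (kB * T)
            # Quantum harmonic oscillator entropy summed over modes:
            return kB * np.sum(a / np.expm1(a) - np.log1p(-np.exp(-a)))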

  4. On the error statistics of Viterbi decoding and the performance of concatenated codes

    NASA Technical Reports Server (NTRS)

    Miller, R. L.; Deutsch, L. J.; Butman, S. A.

    1981-01-01

    Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.

  5. TESS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dmitriy Morozov, Tom Peterka

    2014-07-29

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets. As the scale of simulations and observations surpasses billions of particles, a distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this software is a distributed-memory parallel Delaunay and Voronoi tessellation algorithm based on existing serial computational geometry libraries that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include the addition of periodic and wall boundary conditions.
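
    A minimal serial sketch of the neighbor-point idea behind such distributed tessellations, with scipy standing in for the serial computational geometry library: each block triangulates its own points plus ghost points owned by neighboring subdomains. The fixed halo width here is an assumption; the actual algorithm determines the needed exchanges automatically.

        import numpy as np
        from scipy.spatial import Delaunay

        def local_delaunay(points, lo, hi, neighbor_points, halo=0.1):
            """Triangulate block [lo, hi] with ghost points within `halo` of it."""
            near = np.all((neighbor_points > lo - halo) &
                          (neighbor_points < hi + halo), axis=1)
            ghosts = neighbor_points[near]
            tri = Delaunay(np.vstack([points, ghosts]))
            return tri, len(points)       # vertex ids >= len(points) are ghosts

        rng = np.random.default_rng(1)
        pts = rng.uniform(0.0, 1.0, size=(1000, 2))
        inside = np.all((pts >= 0.0) & (pts <= 0.5), axis=1)   # this rank's block
        tri, n_owned = local_delaunay(pts[inside], np.array([0.0, 0.0]),
                                      np.array([0.5, 0.5]), pts[~inside])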

  6. Structure identification methods for atomistic simulations of crystalline materials

    DOE PAGES

    Stukowski, Alexander

    2012-05-28

    Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis and Voronoi analysis. In addition we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.
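
    A minimal sketch of one of the techniques compared in the article, the centrosymmetry parameter: in a perfect centrosymmetric lattice, opposite-neighbor vector pairs nearly cancel, so large values flag defects. This follows the Kelchner et al. pairing scheme (sum of the N/2 smallest opposite-pair magnitudes); N=12 suits fcc, and the brute-force neighbor search is an assumption for small systems.

        import numpy as np
        from itertools import combinations

        def centrosymmetry(positions, N=12):
            csp = np.empty(len(positions))
            for a, ra in enumerate(positions):
                d = positions - ra
                r2 = np.einsum('ij,ij->i', d, d)
                nbr = d[np.argsort(r2)[1:N + 1]]           # N nearest neighbors
                sums = [np.dot(nbr[i] + nbr[j], nbr[i] + nbr[j])
                        for i, j in combinations(range(N), 2)]
                csp[a] = np.sum(np.sort(sums)[:N // 2])    # N/2 most-opposite pairs
            return csp

        # Interior atom of a perfect simple-cubic lattice (N=6) gives CSP ~ 0.
        grid = np.stack(np.meshgrid(*[np.arange(5.0)] * 3), -1).reshape(-1, 3)
        print(centrosymmetry(grid, N=6)[62])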

  7. Virtual Transgenics: Using a Molecular Biology Simulation to Impact Student Academic Achievement and Attitudes

    ERIC Educational Resources Information Center

    Shegog, Ross; Lazarus, Melanie M.; Murray, Nancy G.; Diamond, Pamela M.; Sessions, Nathalie; Zsigmond, Eva

    2012-01-01

    The transgenic mouse model is useful for studying the causes and potential cures for human genetic diseases. Exposing high school biology students to laboratory experience in developing transgenic animal models is logistically prohibitive. Computer-based simulation, however, offers this potential in addition to advantages of fidelity and reach.…

  8. Database Driven 6-DOF Trajectory Simulation for Debris Transport Analysis

    NASA Technical Reports Server (NTRS)

    West, Jeff

    2008-01-01

    Debris mitigation and risk assessment have been carried out by NASA and its contractors supporting Space Shuttle Return-To-Flight (RTF). As a part of this assessment, analysis of the transport potential of debris that may be liberated from the vehicle or from pad facilities prior to tower clear (Lift-Off Debris) is being performed by MSFC. This class of debris includes plume-driven and wind-driven sources for which lift as well as drag is critical for the determination of the debris trajectory. As a result, NASA MSFC needs a debris transport or trajectory simulation that supports the computation of lift effects in addition to drag without the computational expense of fully coupled CFD with 6-DOF. A database-driven 6-DOF simulation that interpolates aerodynamic force and moment coefficients for the debris shape from a database has been developed to meet this need. The design, implementation, and verification of the database-driven six-degree-of-freedom (6-DOF) simulation addition to the Lift-Off Debris Transport Analysis (LODTA) software are discussed in this paper.
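
    A minimal sketch of the database-driven idea: force coefficients for the debris shape are tabulated against flow angles and interpolated during trajectory integration instead of being recomputed by coupled CFD. The table below is synthetic and the variables are assumptions; the real LODTA database is not shown in the abstract.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        alpha = np.linspace(-180.0, 180.0, 73)     # angle-of-attack grid, deg
        beta = np.linspace(-90.0, 90.0, 37)        # sideslip grid, deg
        cl_table = (np.sin(2 * np.radians(alpha))[:, None]
                    * np.cos(np.radians(beta))[None, :])   # stand-in CL data

        cl = RegularGridInterpolator((alpha, beta), cl_table)

        def lift_force(q_dyn, area, a, b):
            """Dynamic pressure * reference area * interpolated lift coefficient."""
            return q_dyn * area * cl([[a, b]])[0]

        print(lift_force(q_dyn=500.0, area=0.05, a=12.0, b=-3.0))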

  9. A Low Cost Simulation System to Demonstrate Pilot Induced Oscillation Phenomenon

    NASA Technical Reports Server (NTRS)

    Ali, Syed Firasat

    1997-01-01

    A flight simulation system with graphics and software on Silicon Graphics computer workstations has been installed in the Flight Vehicle Design Laboratory at Tuskegee University. The system has F-15E flight simulation software from NASA Dryden which uses the graphics of SGI flight simulation demos. On the system thus installed, a study of pilot-induced oscillations is planned for future work. Preliminary research was conducted by obtaining two sets of straight-and-level flights with a pilot in the loop. In one set of flights, no additional delay was used between the stick input and the appearance of the airplane response on the computer monitor. In the other set of flights, a 500 ms additional delay was used. The flight data were analyzed to find cross-correlations between deflections of control surfaces and the response of the airplane. The pilot dynamics features inferred from the cross-correlations of the straight-and-level flights are discussed in this report. The correlations presented here will serve as reference material for the corresponding correlations in a future study of pitch attitude tracking tasks involving pilot-induced oscillations.
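
    A minimal sketch of the cross-correlation analysis described, on synthetic signals: the lag at the correlation peak between stick input and airplane response recovers the added delay. The 500 ms value mirrors the experiment; the signal model itself is an assumption.

        import numpy as np

        dt, delay = 0.02, 0.5                       # 50 Hz samples, 500 ms delay
        t = np.arange(0, 30, dt)
        rng = np.random.default_rng(0)
        stick = np.sin(0.8 * t) + 0.1 * rng.standard_normal(t.size)
        response = np.roll(stick, int(delay / dt))  # response lags the input

        xc = np.correlate(response - response.mean(),
                          stick - stick.mean(), 'full')
        lag = (np.argmax(xc) - (t.size - 1)) * dt   # lag at the correlation peak
        print(f"estimated delay: {lag:.2f} s")      # ~0.50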

  10. Software for Engineering Simulations of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Shireman, Kirk; McSwain, Gene; McCormick, Bernell; Fardelos, Panayiotis

    2005-01-01

    Spacecraft Engineering Simulation II (SES II) is a C-language computer program for simulating diverse aspects of operation of a spacecraft characterized by either three or six degrees of freedom. A functional model in SES can include a trajectory flight plan; a submodel of a flight computer running navigational and flight-control software; and submodels of the environment, the dynamics of the spacecraft, and sensor inputs and outputs. SES II features a modular, object-oriented programming style. SES II supports event-based simulations, which, in turn, create an easily adaptable simulation environment in which many different types of trajectories can be simulated by use of the same software. The simulation output consists largely of flight data. SES II can be used to perform optimization and Monte Carlo dispersion simulations. It can also be used to perform simulations for multiple spacecraft. In addition to its generic simulation capabilities, SES offers special capabilities for space-shuttle simulations: for this purpose, it incorporates submodels of the space-shuttle dynamics and a C-language version of the guidance, navigation, and control components of the space-shuttle flight software.

  11. Quantification of Wear and Deformation in Different Configurations of Polyethylene Acetabular Cups Using Micro X-ray Computed Tomography

    PubMed Central

    Affatato, Saverio; Zanini, Filippo; Carmignato, Simone

    2017-01-01

    Wear is currently quantified as mass loss of the bearing materials measured using gravimetric methods. However, this method does not provide other information, such as volumetric loss or surface deviation. In this work, we validated a technique to quantify polyethylene wear in three different batches of ultra-high-molecular-weight polyethylene acetabular cups used for hip implants using nondestructive micro-computed tomography. Three different configurations of polyethylene acetabular cups, previously tested under the ISO 14242 parameters, were tested on a hip simulator for an additional 2 million cycles using a modified ISO 14242 load waveform. In this context, a new approach was proposed in order to simulate high-demand activities on a hip joint simulator. In addition, the effects of these activities were analyzed in terms of wear and deformation of the polyethylenes by means of the gravimetric method and micro X-ray computed tomography. In particular, while the gravimetric method was used for weight-loss assessment, micro-computed tomography allowed for the acquisition of additional quantitative information about the evolution of local wear and deformation through three-dimensional surface deviation maps for the entire surface of the cups. Experimental results showed that the wear and deformation behavior of these materials changes according to different mechanical simulations. PMID:28772616

  12. The effectiveness of using computer simulated experiments on junior high students' understanding of the volume displacement concept

    NASA Astrophysics Data System (ADS)

    Choi, Byung-Soon; Gennaro, Eugene

    Several researchers have suggested that the computer holds much promise as a tool for science teachers for use in their classrooms (Bork, 1979; Lunetta & Hofstein, 1981). It also has been said that there needs to be more research in determining the effectiveness of computer software (Tinker, 1983). This study compared the effectiveness of microcomputer-simulated experiences with that of parallel instruction involving hands-on laboratory experiences for teaching the concept of volume displacement to junior high school students. This study also assessed the differential effect on students' understanding of the volume displacement concept using sex of the students as another independent variable. In addition, it compared the degree of retention, after 45 days, of both treatment groups. It was found that computer-simulated experiences were as effective as hands-on laboratory experiences, and that males, having had hands-on laboratory experiences, performed better on the posttest than females having had the hands-on laboratory experiences. There were no significant differences in performance when comparing males with females using the computer simulation in the learning of the displacement concept. This study also showed that there were no significant differences in the retention levels when the retention scores of the computer simulation groups were compared to those that had the hands-on laboratory experiences. However, an ANOVA of the retention test scores revealed that males in both treatment conditions retained knowledge of volume displacement better than females.

  13. Dynamic provisioning of local and remote compute resources with OpenStack

    NASA Astrophysics Data System (ADS)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events and for Monte-Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rising complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point-of-entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.

  14. Noise Radiation From a Leading-Edge Slat

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Choudhari, Meelan M.

    2009-01-01

    This paper extends our previous computations of unsteady flow within the slat cove region of a multi-element high-lift airfoil configuration, which showed that both statistical and structural aspects of the experimentally observed unsteady flow behavior can be captured via 3D simulations over a computational domain of narrow spanwise extent. Although such narrow domain simulation can account for the spanwise decorrelation of the slat cove fluctuations, the resulting database cannot be applied towards acoustic predictions of the slat without invoking additional approximations to synthesize the fluctuation field over the rest of the span. This deficiency is partially alleviated in the present work by increasing the spanwise extent of the computational domain from 37.3% of the slat chord to nearly 226% (i.e., 15% of the model span). The simulation database is used to verify consistency with previous computational results and, then, to develop predictions of the far-field noise radiation in conjunction with a frequency-domain Ffowcs Williams-Hawkings solver.

  15. Advanced ballistic range technology

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1993-01-01

    Optical images, such as experimental interferograms, schlieren, and shadowgraphs, are routinely used to identify and locate features in experimental flow fields and for validating computational fluid dynamics (CFD) codes. Interferograms can also be used for comparing experimental and computed integrated densities. By constructing these optical images from flow-field simulations, one-to-one comparisons of computation and experiment are possible. During the period from February 1, 1992, to November 30, 1992, work has continued on the development of CISS (Constructed Interferograms, Schlieren, and Shadowgraphs), a code that constructs images from ideal- and real-gas flow-field simulations. In addition, research connected with the automated film-reading system and the proposed reactivation of the radiation facility has continued.

  16. A digital computer program for the dynamic interaction simulation of controls and structure (DISCOS), volume 1

    NASA Technical Reports Server (NTRS)

    Bodley, C. S.; Devers, A. D.; Park, A. C.; Frisch, H. P.

    1978-01-01

    A theoretical development and associated digital computer program system for the dynamic simulation and stability analysis of passive and actively controlled spacecraft are presented. The dynamic system (spacecraft) is modeled as an assembly of rigid and/or flexible bodies not necessarily in a topological tree configuration. The computer program system is used to investigate total system dynamic characteristics, including interaction effects between rigid and/or flexible bodies, control systems, and a wide range of environmental loadings. In addition, the program system is used for designing attitude control systems and for evaluating total dynamic system performance, including time domain response and frequency domain stability analyses.

  17. Advances in computer simulation of genome evolution: toward more realistic evolutionary genomics analysis by approximate bayesian computation.

    PubMed

    Arenas, Miguel

    2015-04-01

    NGS technologies enable fast and cheap generation of genomic data. Nevertheless, ancestral genome inference is not so straightforward due to complex evolutionary processes acting on this material, such as inversions, translocations, and other genome rearrangements that, in addition to their implicit complexity, can co-occur and confound ancestral inferences. Recently, models of genome evolution that accommodate such complex genomic events are emerging. This letter explores these novel evolutionary models and proposes their incorporation into robust statistical approaches based on computer simulations, such as approximate Bayesian computation, that may produce a more realistic evolutionary analysis of genomic data. Advantages and pitfalls in using these analytical methods are discussed. Potential applications of these ancestral genomic inferences are also pointed out.
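
    A minimal sketch of approximate Bayesian computation by rejection, the statistical approach advocated above: simulate under parameters drawn from the prior and keep draws whose simulated summary statistics fall close to the observed ones. The one-parameter Poisson "rearrangement-count" simulator and the tolerance are toy stand-ins for a genome-evolution model.

        import numpy as np

        rng = np.random.default_rng(0)
        observed_stat = 7.0                           # e.g., observed rearrangement count

        def abc_rejection(n_draws=100_000, eps=0.5):
            rates = rng.uniform(0.0, 20.0, n_draws)   # prior on the rearrangement rate
            stats = rng.poisson(rates)                # summary stat of each simulation
            return rates[np.abs(stats - observed_stat) <= eps]

        posterior = abc_rejection()
        print(posterior.mean(), posterior.size)       # accepted draws approximate the posterior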

  18. Computer program for a four-cylinder-Stirling-engine controls simulation

    NASA Technical Reports Server (NTRS)

    Daniels, C. J.; Lorenzo, C. F.

    1982-01-01

    A transient engine simulation computer program for a four-cylinder Stirling engine is presented. The program is intended for controls analysis. The associated engine model was simplified to shorten computer calculation time. The model includes engine mechanical drive dynamics and vehicle load effects. The computer program also includes subroutines that allow: (1) acceleration of the engine by addition of hydrogen to the system, and (2) braking of the engine by short-circuiting of the working spaces. Subroutines to calculate degraded engine performance (e.g., due to piston ring and piston rod leakage) are provided. Input data required to run the program are described, and flow charts are provided. The program is modular to allow easy modification of individual routines. Examples of steady-state and transient results are presented.

  19. Dynamic Mesh CFD Simulations of Orion Parachute Pendulum Motion During Atmospheric Entry

    NASA Technical Reports Server (NTRS)

    Halstrom, Logan D.; Schwing, Alan M.; Robinson, Stephen K.

    2016-01-01

    This paper demonstrates the usage of computational fluid dynamics to study the effects of the pendulum motion dynamics of NASA's Orion Multi-Purpose Crew Vehicle parachute system on the stability of the vehicle's atmospheric entry and descent. Significant computational fluid dynamics testing has already been performed at NASA's Johnson Space Center, but this study sought to investigate the effect of bulk motion of the parachute, such as pitching, on the induced aerodynamic forces. Simulations were performed with a moving grid geometry oscillating according to the parameters observed in flight tests. As with the previous simulations, the OVERFLOW computational fluid dynamics tool is used with the assumption of rigid, non-permeable geometry. Comparison to parachute wind tunnel tests is included for a preliminary validation of the dynamic mesh model. Results show qualitative differences in the flow fields of the static and dynamic simulations and quantitative differences in the induced aerodynamic forces, suggesting that dynamic mesh modeling of the parachute pendulum motion may uncover additional dynamic effects.

  20. Design of a dynamic optical tissue phantom to model extravasation pharmacokinetics

    NASA Astrophysics Data System (ADS)

    Zhang, Jane Y.; Ergin, Aysegul; Andken, Kerry Lee; Sheng, Chao; Bigio, Irving J.

    2010-02-01

    We describe an optical tissue phantom that enables the simulation of drug extravasation from microvessels and validates computational compartmental models of drug delivery. The phantom consists of a microdialysis tubing bundle to simulate the permeable blood vessels, immersed in either an aqueous suspension of titanium dioxide (TiO2) or a TiO2-mixed agarose scattering medium. Drug administration is represented by a dye circulated through this porous microdialysis tubing bundle. Optical pharmacokinetic (OP) methods are used to measure changes in the absorption coefficient of the scattering medium due to the arrival and diffusion of the dye. We have established particle size-dependent concentration-time profiles of phantom drug delivery by intravenous (IV) and intra-arterial (IA) routes. Additionally, pharmacokinetic compartmental models are implemented in computer simulations for the conditions studied within the phantom. The simulated concentration-time profiles agree well with measurements from the phantom. The results are encouraging for future optical pharmacokinetic method development, both physical and computational, to understand drug extravasation under various physiological conditions.
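
    A minimal sketch of the kind of compartmental model implemented in the computer simulations: dye in a "vessel" (tubing) compartment exchanging with a "tissue" (scattering medium) compartment. The rate constants and infusion profile below are assumptions, not the paper's fitted values.

        import numpy as np
        from scipy.integrate import odeint

        def two_compartment(c, t, k_in, k_out, k_ex):
            cv, ct = c        # vessel and tissue concentrations
            dcv = k_in(t) - k_out * cv - k_ex * (cv - ct)   # infusion, clearance, exchange
            dct = k_ex * (cv - ct)                          # extravasation into tissue
            return [dcv, dct]

        bolus = lambda t: 5.0 if t < 1.0 else 0.0           # 1-s infusion of dye
        t = np.linspace(0.0, 60.0, 600)
        c = odeint(two_compartment, [0.0, 0.0], t, args=(bolus, 0.2, 0.05))
        print(c[-1])          # vessel and tissue concentrations at t = 60 s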

  1. Large-Eddy Simulation of Internal Flow through Human Vocal Folds

    NASA Astrophysics Data System (ADS)

    Lasota, Martin; Šidlof, Petr

    2018-06-01

    The phonatory process occurs when air is expelled from the lungs through the glottis and the pressure drop causes flow-induced oscillations of the vocal folds. The flow fields created in phonation are highly unsteady, and coherent vortex structures are also generated. For accuracy, it is essential to compute on a humanlike computational domain with an appropriate mathematical model. The work deals with numerical simulation of air flow within the space between the plicae vocales and the plicae vestibulares. In addition to the dynamic width of the rima glottidis, where the sound is generated, the lateral ventriculus laryngis and sacculus laryngis are included in the computational domain as well. The paper presents results from OpenFOAM obtained with large-eddy simulation using second-order finite-volume discretization of the incompressible Navier-Stokes equations. Large-eddy simulations with different subgrid-scale models are executed on a structured mesh. In these cases, only subgrid-scale models that represent turbulence via a turbulent viscosity and the Boussinesq approximation are used in the subglottal and supraglottal areas of the larynx.

  2. Numerical Simulations of Plasma Based Flow Control Applications

    NASA Technical Reports Server (NTRS)

    Suzen, Y. B.; Huang, P. G.; Jacob, J. D.; Ashpis, D. E.

    2005-01-01

    A mathematical model was developed to simulate flow control applications using plasma actuators. The effects of the plasma actuators on the external flow are incorporated into Navier-Stokes computations as a body force vector. In order to compute this body force vector, the model solves two additional equations: one for the electric field due to the applied AC voltage at the electrodes and the other for the charge density representing the ionized air. The model is calibrated against an experiment having plasma-driven flow in a quiescent environment and is then applied to simulate a low-pressure turbine flow with large flow separation. The effects of the plasma actuator on control of flow separation are demonstrated numerically.

  3. A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture

    NASA Astrophysics Data System (ADS)

    Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.

    2017-12-01

    A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach considers the application of local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective to study the evolution of statistical representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.

  4. Monte Carlo Methodology Serves Up a Software Success

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.

  5. GPU based 3D feature profile simulation of high-aspect ratio contact hole etch process under fluorocarbon plasmas

    NASA Astrophysics Data System (ADS)

    Chun, Poo-Reum; Lee, Se-Ah; Yook, Yeong-Geun; Choi, Kwang-Sung; Cho, Deog-Geun; Yu, Dong-Hun; Chang, Won-Seok; Kwon, Deuk-Chul; Im, Yeon-Ho

    2013-09-01

    Although plasma etch profile simulation has attracted much interest for developing reliable plasma etching, large gaps still exist between the current state of research and predictive modeling due to the inherent complexity of plasma processes. In an effort to address this issue, we present a 3D feature profile simulation coupled with a well-defined plasma-surface kinetic model for the silicon dioxide etching process under fluorocarbon plasmas. To capture realistic plasma surface reaction behaviors, a polymer-layer-based surface kinetic model was proposed to consider simultaneous polymer deposition and oxide etching. The realistic plasma surface model was then used to calculate the speed function for the 3D topology simulation, which consists of a multiple-level-set based moving algorithm and a ballistic transport module. In addition, the time-consuming computations in the ballistic transport calculation were improved drastically by GPU-based numerical computation, enabling real-time computation. Finally, we demonstrated that the surface kinetic model could be coupled successfully for 3D etch profile simulations of high-aspect-ratio contact hole plasma etching.

  6. First-principles simulations of heat transport

    NASA Astrophysics Data System (ADS)

    Puligheddu, Marcello; Gygi, Francois; Galli, Giulia

    2017-11-01

    Advances in understanding heat transport in solids were recently reported by both experiment and theory. However an efficient and predictive quantum simulation framework to investigate thermal properties of solids, with the same complexity as classical simulations, has not yet been developed. Here we present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at close to equilibrium conditions, which only requires calculations of first-principles trajectories and atomic forces, thus avoiding direct computation of heat currents and energy densities. In addition the method requires much shorter sequential simulation times than ordinary molecular dynamics techniques, making it applicable within density functional theory. We discuss results for a representative oxide, MgO, at different temperatures and for ordered and nanostructured morphologies, showing the performance of the method in different conditions.

  7. CFD Based Added Mass Prediction in Cruise Condition of Underwater Vehicle Dynamic

    NASA Astrophysics Data System (ADS)

    Agoes Moelyadi, Mochammad; Bambang Riswandi, Bagus

    2018-04-01

    One of the unsteady flow effects on the hydrodynamic characteristics of an underwater vehicle is the presence of added mass. In cruising conditions, the underwater vehicle may require additional speed or experience disturbances in the form of unsteady flow, causing hydrodynamic interaction between the surface of the vehicle and the surrounding fluid. This leads to a rise in the local flow velocity and large changes in the hydrodynamic forces, which strongly influence the stability of the underwater vehicle. One result is an additional force called added mass, a very useful parameter for controlling underwater vehicle dynamics. This paper reports research on the added mass coefficient of underwater vehicles obtained through the Computational Fluid Dynamics (CFD) simulation method using CFX software. The added mass coefficient is calculated by performing an unsteady simulation, also known as a transient simulation. Computational simulations are based on the solution of the Reynolds-Averaged Navier-Stokes (RANS) equations. The simulated vehicle moves forward and backward according to a sine function, with a frequency of 0.25 Hz, an amplitude of 2 m, a cruising depth of 10 m below sea level, and Vcruise = 1.54 m/s (Re = 9,000,000). Simulation result data include velocity contours, the variation of force and acceleration with frequency, and the added mass coefficient.
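
    A minimal sketch of how an added-mass coefficient can be extracted from a transient force history like the one described: with prescribed sinusoidal motion, fit the hydrodynamic force to F = -m_a*a - b*v and read off m_a. The force history below is synthetic; the CFX post-processing itself is not shown in the abstract.

        import numpy as np

        f, A = 0.25, 2.0                    # oscillation frequency [Hz], amplitude [m]
        t = np.linspace(0.0, 8.0, 2000)
        w = 2 * np.pi * f
        v = A * w * np.cos(w * t)           # prescribed velocity
        a = -A * w**2 * np.sin(w * t)       # prescribed acceleration

        m_a_true, b_true = 120.0, 35.0      # stand-in added mass [kg] and damping
        force = -m_a_true * a - b_true * v  # surrogate for the CFD force history

        coef, *_ = np.linalg.lstsq(np.column_stack([-a, -v]), force, rcond=None)
        print(f"added mass ~ {coef[0]:.1f} kg, damping ~ {coef[1]:.1f}")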

  8. Generalized image charge solvation model for electrostatic interactions in molecular dynamics simulations of aqueous solutions

    PubMed Central

    Deng, Shaozhong; Xue, Changfeng; Baumketner, Andriy; Jacobs, Donald; Cai, Wei

    2013-01-01

    This paper extends the image charge solvation model (ICSM) [J. Chem. Phys. 131, 154103 (2009)], a hybrid explicit/implicit method to treat electrostatic interactions in computer simulations of biomolecules formulated for spherical cavities, to prolate spheroidal and triaxial ellipsoidal cavities, designed to better accommodate non-spherical solutes in molecular dynamics (MD) simulations. In addition to the utilization of a general truncated octahedron as the MD simulation box, central to the proposed extension is an image approximation method to compute the reaction field for a point charge placed inside such a non-spherical cavity by using a single image charge located outside the cavity. The resulting generalized image charge solvation model (GICSM) is tested in simulations of liquid water, and the results are analyzed in comparison with those obtained from ICSM simulations as a reference. We find that, with improved computational efficiency due to smaller simulation cells and consequently a smaller number of explicit solvent molecules, the generalized model can still faithfully reproduce known static and dynamic properties of liquid water, at least for the systems considered in the present paper, indicating its great potential to become an accurate but more efficient alternative to the ICSM when bio-macromolecules of irregular shapes are to be simulated. PMID:23913979
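
    A minimal sketch of the classical single-image construction on which image-charge solvation models build: for a charge q at distance d from the center of a spherical cavity of radius R (d < R), the reaction field is represented by an image charge outside the sphere at the inverse point. This is the textbook conducting-boundary (Kelvin) form; the dielectric scaling and the spheroidal/ellipsoidal generalizations of the paper are not shown.

        import numpy as np

        def kelvin_image(q, r_src, R):
            """Image charge and position for a source inside a sphere at the origin."""
            d = np.linalg.norm(r_src)
            q_img = -q * R / d                # image magnitude
            r_img = r_src * (R**2 / d**2)     # image lies outside, along the same ray
            return q_img, r_img

        q_img, r_img = kelvin_image(q=1.0, r_src=np.array([0.3, 0.0, 0.0]), R=1.0)
        print(q_img, r_img)                   # -3.333..., [3.333..., 0, 0]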

  9. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  10. On the usage of ultrasound computational models for decision making under ambiguity

    NASA Astrophysics Data System (ADS)

    Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron

    2018-04-01

    Computer modeling and simulation are becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantifies the differences using maximum amplitude and power spectral density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.

  11. GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes

    NASA Astrophysics Data System (ADS)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-04-01

    Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
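
    A minimal CPU sketch of the FTLE computation that the GPU algorithm block-decomposes: advect a grid of seeds to build the flow map, differentiate it numerically, and take the largest eigenvalue of the Cauchy-Green tensor. A steady double-gyre velocity field stands in for the multi-body gravitational dynamics.

        import numpy as np
        from scipy.integrate import solve_ivp

        def vel(t, xy):
            x, y = xy
            return [-np.pi * np.sin(np.pi * x) * np.cos(np.pi * y),
                    np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)]

        def ftle_field(nx=60, ny=30, T=2.0):
            xs, ys = np.linspace(0, 2, nx), np.linspace(0, 1, ny)
            fmap = np.empty((nx, ny, 2))
            for i, x in enumerate(xs):          # flow map: advect each grid seed
                for j, y in enumerate(ys):
                    sol = solve_ivp(vel, (0.0, T), [x, y], rtol=1e-6)
                    fmap[i, j] = sol.y[:, -1]
            dx, dy = xs[1] - xs[0], ys[1] - ys[0]
            Fx = np.gradient(fmap, dx, axis=0)  # d(flow map)/dx0
            Fy = np.gradient(fmap, dy, axis=1)  # d(flow map)/dy0
            ftle = np.empty((nx, ny))
            for i in range(nx):
                for j in range(ny):
                    F = np.column_stack([Fx[i, j], Fy[i, j]])  # deformation gradient
                    lam = np.linalg.eigvalsh(F.T @ F).max()    # Cauchy-Green max eig
                    ftle[i, j] = np.log(lam) / (2.0 * T)
            return ftle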

  12. Visual Computing Environment

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Putt, Charles W.

    1997-01-01

    The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.

  13. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    PubMed Central

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning-based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, at considerably lower computational cost than the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation, and curve fitting. The resulting procedure is accurate, automatic, and general enough to predict synapse behavior under experimental conditions different from the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for simulating large numbers of synapses and is therefore an excellent tool for multi-scale simulations. PMID:23894367
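
    A toy version of this surrogate idea, with invented features and targets rather than the authors' corpus, can be written with scikit-learn: train a regressor on Monte Carlo outcomes, validate by cross-validation, then predict for unseen synapse configurations:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      # toy corpus: features = (cleft width nm, receptor count, release distance nm)
      X = rng.uniform([15, 50, 0], [30, 200, 300], size=(500, 3))
      # toy target: peak fraction of open receptors from "Monte Carlo" runs
      y = 0.8 * np.exp(-X[:, 2] / 150) * (X[:, 1] / 200) + 0.02 * rng.normal(size=500)

      model = RandomForestRegressor(n_estimators=200, random_state=0)
      print(cross_val_score(model, X, y, cv=5).mean())      # validation stage
      model.fit(X, y)
      print(model.predict([[22.0, 120.0, 80.0]]))           # unseen configuration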

  14. Learning Reverse Engineering and Simulation with Design Visualization

    NASA Technical Reports Server (NTRS)

    Hemsworth, Paul J.

    2018-01-01

    The Design Visualization (DV) group supports work at the Kennedy Space Center by utilizing metrology data with Computer-Aided Design (CAD) models and simulations to provide accurate visual representations that aid in decision-making. The capability to measure and simulate objects in real time helps to predict and avoid potential problems before they become expensive in addition to facilitating the planning of operations. I had the opportunity to work on existing and new models and simulations in support of DV and NASA’s Exploration Ground Systems (EGS).

  15. Computer Simulations Improve University Instructional Laboratories

    PubMed Central

    2004-01-01

    Laboratory classes are commonplace and essential in biology departments but can sometimes be cumbersome, unreliable, and a drain on time and resources. As university intakes increase, pressure on budgets and staff time can often lead to reduction in practical class provision. Frequently, the ability to use laboratory equipment, mix solutions, and manipulate test animals are essential learning outcomes, and “wet” laboratory classes are thus appropriate. In others, however, interpretation and manipulation of the data are the primary learning outcomes, and here, computer-based simulations can provide a cheaper, easier, and less time- and labor-intensive alternative. We report the evaluation of two computer-based simulations of practical exercises: the first in chromosome analysis, the second in bioinformatics. Simulations can provide significant time savings to students (by a factor of four in our first case study) without affecting learning, as measured by performance in assessment. Moreover, under certain circumstances, performance can be improved by the use of simulations (by 7% in our second case study). We concluded that the introduction of these simulations can significantly enhance student learning where consideration of the learning outcomes indicates that it might be appropriate. In addition, they can offer significant benefits to teaching staff. PMID:15592599

  16. The Structure and Properties of Silica Glass Nanostructures using Novel Computational Systems

    NASA Astrophysics Data System (ADS)

    Doblack, Benjamin N.

    The structure and properties of silica glass nanostructures are examined using computational methods in this work. Standard synthesis methods for silica and its associated material properties are first discussed briefly. A review of prior experiments on this amorphous material is also presented. Background and methodology for the simulation of mechanical tests on amorphous bulk silica and nanostructures are later presented. A new computational system for the accurate and fast simulation of silica glass is also presented, using an appropriate interatomic potential for this material within the open-source molecular dynamics (MD) program LAMMPS. This alternative computational method uses modern graphics processors, Nvidia CUDA technology, and specialized scientific codes to overcome processing-speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model select materials, this enhancement allows the addition of accelerated molecular dynamics simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling, and analysis. The research goal of this project is to investigate the structure and size-dependent mechanical properties of silica glass nanohelical structures under tensile MD conditions using the innovative computational system. Specifically, silica nanoribbons and nanosprings are evaluated, revealing unique size-dependent elastic moduli compared to the bulk material. For the nanoribbons, the tensile behavior differed widely between the models simulated, with distinct characteristic extended elastic regions. In the case of the nanosprings simulated, clearer trends are observed. In particular, larger nanospring wire cross-sectional radii (r) lead to larger Young's moduli, while larger helical diameters (2R) result in smaller Young's moduli. Structural transformations and theoretical models are also analyzed to identify possible factors which might affect the mechanical response of silica nanostructures under tension. The work presented outlines an innovative simulation methodology, and discusses how results can be validated against prior experimental and simulation findings. The ultimate goal is to develop new computational methods for the study of nanostructures which will make the field of materials science more accessible, cost effective and efficient.
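
    By way of illustration (not the thesis' analysis scripts), a Young's modulus can be extracted from a simulated tensile stress-strain curve by fitting the initial linear region; the curve and the 2% cut-off below are invented numbers:

      import numpy as np

      strain = np.linspace(0.0, 0.10, 101)
      stress = 70e9 * strain - 2.0e11 * strain ** 2     # toy curve with softening, Pa

      elastic = strain < 0.02                           # assume linearity below 2% strain
      E = np.polyfit(strain[elastic], stress[elastic], 1)[0]
      print(f"Young's modulus ~ {E / 1e9:.1f} GPa")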

  17. Final report for the DOE Early Career Award #DE-SC0003912

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayaraman, Arthi

    This DOE-supported early career project was aimed at developing computational models, theory, and simulation methods that would then be used to predict assembly and morphology in polymer nanocomposites. In particular, the focus was on composites in active layers of devices, containing conducting polymers that act as electron donors and nanoscale additives that act as electron acceptors. During the course of this work, we developed first-of-their-kind molecular models to represent conducting polymers, enabling simulations at experimentally relevant length and time scales. By comparison with experimentally observed morphologies we validated these models. Furthermore, using these models and molecular dynamics simulations on graphics processing units (GPUs), we predicted the molecular-level design features in polymers and additives that lead to morphologies with optimal features for charge-carrier behavior in solar cells. Additionally, we computationally predicted new design rules for better dispersion of additives in polymers, which have been confirmed through experiments. Achieving dispersion in polymer nanocomposites is valuable for controlling the macroscopic properties of the composite. The results obtained during the course of this DOE-funded project enable the optimal design of higher-efficiency organic electronic and photovoltaic devices, and the engineering of such devices stands to improve everyday life.

  18. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal locations, orientations, and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are then actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
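
    As a toy stand-in for the packaging-optimization step (the patent's actual algorithms and constraints are not specified here), a first-fit-decreasing heuristic packs item volumes into capacity-limited containers:

      def first_fit_decreasing(volumes, capacity):
          bins = []                      # remaining capacity per container
          packing = []                   # items assigned to each container
          for v in sorted(volumes, reverse=True):
              for i, cap in enumerate(bins):
                  if v <= cap:
                      bins[i] -= v
                      packing[i].append(v)
                      break
              else:                      # no container fits: open a new one
                  bins.append(capacity - v)
                  packing.append([v])
          return packing

      print(first_fit_decreasing([4.0, 2.5, 3.3, 1.2, 2.2, 5.1], capacity=6.0))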

  19. Computer simulations of austenite decomposition of microalloyed 700 MPa steel during cooling

    NASA Astrophysics Data System (ADS)

    Pohjonen, Aarne; Paananen, Joni; Mourujärvi, Juho; Manninen, Timo; Larkiola, Jari; Porter, David

    2018-05-01

    We present computer simulations of austenite decomposition to ferrite and bainite during cooling. The phase transformation model is based on Johnson-Mehl-Avrami-Kolmogorov type equations. The model is parameterized by numerical fitting to continuous cooling data obtained with a Gleeble thermo-mechanical simulator, and it can be used to calculate the transformation behavior occurring during cooling along any cooling path. The phase transformation model has been coupled with heat conduction simulations. The model includes separate parameters to account for the incubation stage and for the kinetics after the transformation has started. The incubation time is calculated by inverting the CCT transformation start times. For the heat conduction simulations we employed our own parallelized two-dimensional finite difference code. In addition, the transformation model was implemented as a subroutine in the commercial finite-element software Abaqus, which allows the model to be used in various engineering applications.
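
    A compact sketch of this kind of transformation model is given below: Johnson-Mehl-Avrami-Kolmogorov kinetics marched along a discretized cooling path, with a Scheil-type additivity sum standing in for the incubation (CCT start) treatment; all rate parameters are invented for the example, not the fitted values of the paper:

      import numpy as np

      def jmak_on_cooling(times, temps, k_of_T, tau_of_T, n=2.5):
          # march along the cooling path; Scheil sum S >= 1 triggers transformation start
          S, X, started = 0.0, 0.0, False
          for i in range(1, len(times)):
              dt, T = times[i] - times[i - 1], temps[i]
              if not started:
                  S += dt / tau_of_T(T)
                  started = S >= 1.0
              else:
                  # isothermal JMAK step via the fictitious-time (additivity) construction
                  k = k_of_T(T)
                  t_fict = (-np.log(1.0 - X) / k) ** (1.0 / n)
                  X = 1.0 - np.exp(-k * (t_fict + dt) ** n)
          return X

      times = np.linspace(0.0, 100.0, 1001)                 # s
      temps = 850.0 - 5.0 * times                           # 5 K/s linear cooling, deg C
      X = jmak_on_cooling(times, temps,
                          k_of_T=lambda T: 1e-4 * np.exp(-((T - 600.0) / 80.0) ** 2),
                          tau_of_T=lambda T: 5.0 + ((T - 600.0) / 40.0) ** 2)
      print(f"transformed fraction at end of cooling: {X:.2f}")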

  20. Facilitating Integration of Electron Beam Lithography Devices with Interactive Videodisc, Computer-Based Simulation and Job Aids.

    ERIC Educational Resources Information Center

    Von Der Linn, Robert Christopher

    A needs assessment of the Grumman E-Beam Systems Group identified the requirement for additional skill mastery for the engineers who assemble, integrate, and maintain devices used to manufacture integrated circuits. Further analysis of the tasks involved led to the decision to develop interactive videodisc, computer-based job aids to enable…

  1. Simulation of Foam Impact Effects on Components of the Space Shuttle Thermal Protection System. Chapter 7

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.; Park, Young-Keun

    2004-01-01

    A series of three dimensional simulations has been performed to investigate analytically the effect of insulating foam impacts on ceramic tile and reinforced carbon-carbon components of the Space Shuttle thermal protection system. The simulations employed a hybrid particle-finite element method and a parallel code developed for use in spacecraft design applications. The conclusions suggested by the numerical study are in general consistent with experiment. The results emphasize the need for additional material testing work on the dynamic mechanical response of thermal protection system materials, and additional impact experiments for use in validating computational models of impact effects.

  2. Provable classically intractable sampling with measurement-based computation in constant time

    NASA Astrophysics Data System (ADS)

    Sanders, Stephen; Miller, Jacob; Miyake, Akimasa

    We present a constant-time measurement-based quantum computation (MQC) protocol to perform a classically intractable sampling problem. We sample from the output probability distribution of a subclass of the instantaneous quantum polynomial time circuits introduced by Bremner, Montanaro, and Shepherd. In contrast with the usual circuit model, our MQC implementation includes additional randomness due to byproduct operators associated with the computation. Despite this additional randomness, we show that our sampling task cannot be efficiently simulated by a classical computer. We extend previous results to verify the quantum supremacy of our sampling protocol efficiently using only single-qubit Pauli measurements.

  3. How To Create Complex Measurement Models: A Case Study of Principled Assessment Design.

    ERIC Educational Resources Information Center

    Bauer, Malcolm; Williamson, David M.; Steinberg, Linda S.; Mislevy, Robert J.; Behrens, John T.

    In computer-based simulations, students must bring a wide range of relevant knowledge, skills, and abilities to bear jointly as they solve meaningful problems in a learning domain. To function effectively as an assessment, a simulation system must additionally be able to evoke and interpret observable evidence about targeted knowledge in a manner…

  4. Sensitivity analysis of a pulse nutrient addition technique for estimating nutrient uptake in large streams

    Treesearch

    Laurence Lin; J.R. Webster

    2012-01-01

    The constant nutrient addition technique has been used extensively to measure nutrient uptake in streams. However, this technique is impractical for large streams, and the pulse nutrient addition (PNA) has been suggested as an alternative. We developed a computer model to simulate nutrient uptake with Monod kinetics in large rivers and used this model to evaluate the...
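
    A minimal sketch of Monod-kinetics uptake applied to a nutrient pulse is given below; the parameters are illustrative only and are not taken from the Lin and Webster model:

      import numpy as np

      def pulse_uptake(C0, Umax, Ks, depth, dt=1.0, steps=3600):
          # concentration (mg/L) of a pulse subject to Monod uptake at the streambed
          C = np.empty(steps)
          C[0] = C0
          for t in range(1, steps):
              U = Umax * C[t - 1] / (Ks + C[t - 1])     # areal uptake, mg m^-2 s^-1
              C[t] = max(C[t - 1] - (U / depth) * dt * 1e-3, 0.0)
          return C

      C = pulse_uptake(C0=0.5, Umax=0.02, Ks=0.3, depth=0.8)
      print(f"fraction of pulse remaining after 1 h: {C[-1] / C[0]:.2f}")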

  6. Large-scale large eddy simulation of nuclear reactor flows: Issues and perspectives

    DOE PAGES

    Merzari, Elia; Obabko, Aleks; Fischer, Paul; ...

    2016-11-03

    Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. Finally, the focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.

  7. A real-time, dual processor simulation of the rotor system research aircraft

    NASA Technical Reports Server (NTRS)

    Mackie, D. B.; Alderete, T. S.

    1977-01-01

    A real-time, man-in-the-loop simulation of the rotor system research aircraft (RSRA) was conducted. The unique feature of this simulation was that two digital computers were used in parallel to solve the equations of the RSRA mathematical model. The design, development, and implementation of the simulation are documented. Program validation is discussed, and examples of data recordings are given. This simulation provided an important research tool for the RSRA project in terms of safe and cost-effective design analysis. In addition, valuable knowledge concerning parallel processing and a powerful simulation hardware and software system was gained.

  8. Volatilisation and competing processes computed for a pesticide applied to plants in a wind tunnel system.

    PubMed

    Leistra, Minze; Wolters, André; van den Berg, Frederik

    2008-06-01

    Volatilisation of pesticides from crop canopies can be an important emission pathway. In addition to pesticide properties, competing processes in the canopy and environmental conditions play a part. A computation model is being developed to simulate the processes, but only some of the input data can be obtained directly from the literature. Three well-defined experiments on the volatilisation of radiolabelled parathion-methyl (as example compound) from plants in a wind tunnel system were simulated with the computation model. Missing parameter values were estimated by calibration against the experimental results. The resulting thickness of the air boundary layer, rate of plant penetration, and rate of phototransformation were compared with a diversity of literature data. The sequence of importance of the canopy processes was: volatilisation > plant penetration > phototransformation. Computer simulation of wind tunnel experiments, with radiolabelled pesticide sprayed on plants, yields values for the rate coefficients of processes at the plant surface. As some input data for simulations are not required in the framework of registration procedures, attempts to estimate missing parameter values on the basis of divergent experimental results have to be continued.
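
    The competing-processes picture can be illustrated with three first-order rate constants acting on the surface deposit; the values below are invented, not the calibrated wind-tunnel results:

      import numpy as np

      k_vol, k_pen, k_photo = 0.06, 0.02, 0.005     # first-order rates, h^-1
      t = np.linspace(0.0, 72.0, 289)               # three days
      k_tot = k_vol + k_pen + k_photo
      remaining = np.exp(-k_tot * t)                # fraction still on the leaf surface
      volatilised = k_vol / k_tot * (1 - remaining)
      penetrated = k_pen / k_tot * (1 - remaining)
      print(f"after 72 h: volatilised {volatilised[-1]:.0%}, penetrated {penetrated[-1]:.0%}")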

  9. Extending the Capabilities of Closed-loop Distributed Engine Control Simulations Using LAN Communication

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot D.; Zinnecker, Alicia Mae; Culley, Dennis E.

    2014-01-01

    Distributed Engine Control (DEC) is an enabling technology that has the potential to advance the state-of-the-art in gas turbine engine control. To analyze the capabilities that DEC offers, a Hardware-In-the-Loop (HIL) test bed is being developed at NASA Glenn Research Center. This test bed will support a systems-level analysis of control capabilities in closed-loop engine simulations. The structure of the HIL emulates a virtual test cell by implementing the operator functions, control system, and engine on three separate computers. This implementation increases the flexibility and extensibility of the HIL. Here, a method is discussed for implementing these interfaces by connecting the three platforms over a dedicated Local Area Network (LAN). This approach is verified using the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k), which is typically implemented on one computer. There are only marginal differences between the results from the typical single-computer implementation and the three-computer implementation. Additional analysis of the LAN network, including characterization of network load, packet drop, and latency, is presented. The three-computer setup supports the incorporation of complex control models and proprietary engine models into the HIL framework.
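
    A bare-bones sketch of the kind of LAN characterization described above is a UDP round-trip probe measuring latency and packet drop; the host, port, and payload are placeholders, not the NASA test-bed configuration:

      import socket, struct, time

      def probe_latency(host="127.0.0.1", port=5005, n=100):
          # requires a UDP echo service listening on (host, port)
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.settimeout(0.5)
          rtts = []
          for i in range(n):
              t0 = time.perf_counter()
              sock.sendto(struct.pack("!Id", i, t0), (host, port))
              try:
                  sock.recvfrom(64)
                  rtts.append(time.perf_counter() - t0)
              except socket.timeout:
                  pass                               # counted as a dropped packet
          drop = 1.0 - len(rtts) / n
          mean_rtt = sum(rtts) / len(rtts) if rtts else float("nan")
          return drop, mean_rtt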

  10. Development of massive multilevel molecular dynamics simulation program, Platypus (PLATform for dYnamic Protein Unified Simulation), for the elucidation of protein functions.

    PubMed

    Takano, Yu; Nakata, Kazuto; Yonezawa, Yasushige; Nakamura, Haruki

    2016-05-05

    A massively parallel program for quantum mechanical/molecular mechanical (QM/MM) molecular dynamics simulation, called Platypus (PLATform for dYnamic Protein Unified Simulation), was developed to elucidate protein functions. The speedup and parallelization ratio of Platypus in QM and QM/MM calculations were assessed for a bacteriochlorophyll dimer in the photosynthetic reaction center (DIMER) on the K computer, a massively parallel computer achieving 10 PetaFLOPS with 705,024 cores. Platypus exhibited increasing speedup up to 20,000 cores for HF/cc-pVDZ and B3LYP/cc-pVDZ calculations, and up to 10,000 cores for CASCI(16,16)/6-31G** calculations. We also performed excited-state QM/MM-MD simulations on the chromophore of Sirius (SIRIUS) in water. Sirius is a pH-insensitive and photostable ultramarine fluorescent protein. Platypus accelerated on-the-fly excited-state QM/MM-MD simulations for SIRIUS in water using over 4,000 cores, and it also succeeded in a 50-ps (200,000-step) on-the-fly excited-state QM/MM-MD simulation for SIRIUS in water.

  11. A new computational growth model for sea urchin skeletons.

    PubMed

    Zachos, Louis G

    2009-08-07

    A new computational model has been developed to simulate growth of regular sea urchin skeletons. The model incorporates the processes of plate addition and individual plate growth into a composite model of whole-body (somatic) growth. A simple developmental model based on hypothetical morphogens underlies the assumptions used to define the simulated growth processes. The data model is based on a Delaunay triangulation of plate growth center points, using the dual Voronoi polygons to define plate topologies. A spherical frame of reference is used for growth calculations, with affine deformation of the sphere (based on a Young-Laplace membrane model) to result in an urchin-like three-dimensional form. The model verifies that the patterns of coronal plates in general meet the criteria of Voronoi polygonalization, that a morphogen/threshold inhibition model for plate addition results in the alternating plate addition pattern characteristic of sea urchins, and that application of the Bertalanffy growth model to individual plates results in simulated somatic growth that approximates that seen in living urchins. The model suggests avenues of research that could explain some of the distinctions between modern sea urchins and the much more disparate groups of forms that characterized the Paleozoic Era.
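
    Two ingredients of such a model are easy to sketch with SciPy (illustrative numbers, not the published implementation): Voronoi polygons from plate growth centers, and Bertalanffy growth s(t) = s_inf * (1 - exp(-k t)) for an individual plate:

      import numpy as np
      from scipy.spatial import Voronoi

      centers = np.random.rand(30, 2)        # plate growth centers on a flattened patch
      vor = Voronoi(centers)                 # dual polygons stand in for plate outlines

      def bertalanffy(t, s_inf=5.0, k=0.3):
          # asymptotic plate growth toward the limiting size s_inf
          return s_inf * (1.0 - np.exp(-k * t))

      print(len(vor.regions), bertalanffy(np.array([1.0, 5.0, 20.0])))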

  12. Are metastases from metastases clinically relevant? Computer modelling of cancer spread in a case of hepatocellular carcinoma.

    PubMed

    Bethge, Anja; Schumacher, Udo; Wree, Andreas; Wedemann, Gero

    2012-01-01

    Metastasis formation remains an enigmatic process, and one of the main questions recently asked is whether metastases are able to generate further metastases. Different models have been proposed to answer this question; however, their clinical significance remains unclear. Therefore a computer model was developed that permits quantitative comparison of the different models with clinical data and that additionally predicts the outcome of treatment interventions. The computer model is based on a discrete-event simulation approach. On the basis of a case from an untreated patient with hepatocellular carcinoma and its multiple metastases in the liver, it was evaluated whether metastases are able to metastasise and, in particular, whether late disseminated tumour cells are still capable of forming metastases. Additionally, the resection of the primary tumour was simulated. The simulation results were compared with clinical data. The simulation results reveal that the number of metastases varies significantly between scenarios where metastases metastasise and scenarios where they do not. In contrast, the total tumour mass is nearly unaffected by the two different modes of metastasis formation. Furthermore, the results provide evidence that metastasis formation is an early event and that late disseminated tumour cells are still capable of forming metastases. The simulations also allow one to estimate how long resection of the primary tumour delays the patient's death. The simulation results indicate that for this particular case of a hepatocellular carcinoma late metastases, i.e., metastases from metastases, are irrelevant in terms of total tumour mass. Hence metastases seeded from metastases are clinically irrelevant in our model system. Only the first metastases seeded from the primary tumour contribute significantly to the tumour burden and thus cause the patient's death.

  13. Experimentally valid predictions of muscle force and EMG in models of motor-unit function are most sensitive to neural properties.

    PubMed

    Keenan, Kevin G; Valero-Cuevas, Francisco J

    2007-09-01

    Computational models of motor-unit populations are the objective implementations of the hypothesized mechanisms by which neural and muscle properties give rise to electromyograms (EMGs) and force. However, the variability/uncertainty of the parameters used in these models--and how they affect predictions--confounds assessing these hypothesized mechanisms. We perform a large-scale computational sensitivity analysis on the state-of-the-art computational model of surface EMG, force, and force variability by combining a comprehensive review of published experimental data with Monte Carlo simulations. To exhaustively explore model performance and robustness, we ran numerous iterative simulations each using a random set of values for nine commonly measured motor neuron and muscle parameters. Parameter values were sampled across their reported experimental ranges. Convergence after 439 simulations found that only 3 simulations met our two fitness criteria: approximating the well-established experimental relations for the scaling of EMG amplitude and force variability with mean force. An additional 424 simulations preferentially sampling the neighborhood of those 3 valid simulations converged to reveal 65 additional sets of parameter values for which the model predictions approximate the experimentally known relations. We find the model is not sensitive to muscle properties but very sensitive to several motor neuron properties--especially peak discharge rates and recruitment ranges. Therefore to advance our understanding of EMG and muscle force, it is critical to evaluate the hypothesized neural mechanisms as implemented in today's state-of-the-art models of motor unit function. We discuss experimental and analytical avenues to do so as well as new features that may be added in future implementations of motor-unit models to improve their experimental validity.
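
    The Monte Carlo scheme can be caricatured in a few lines: sample parameter sets uniformly over reported ranges and keep those whose simulated output meets the fitness criteria; the model, ranges, and criterion below are toy stand-ins, not the published motor-unit model:

      import numpy as np

      rng = np.random.default_rng(1)
      ranges = {"peak_rate": (20, 50), "recruit_range": (10, 70), "twitch_time": (30, 90)}

      def simulate(p):                       # stand-in for the motor-unit population model
          return 0.5 * p["peak_rate"] / 50 + 0.4 * p["recruit_range"] / 70

      valid = []
      for _ in range(1000):
          p = {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
          if 0.55 < simulate(p) < 0.65:      # fitness criterion on the predicted relation
              valid.append(p)
      print(f"{len(valid)} of 1000 parameter sets met the criterion")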

  14. Virtual fragment preparation for computational fragment-based drug design.

    PubMed

    Ludington, Jennifer L

    2015-01-01

    Fragment-based drug design (FBDD) has become an important component of the drug discovery process. The use of fragments can accelerate both the search for a hit molecule and the development of that hit into a lead molecule for clinical testing. In addition to experimental methodologies for FBDD such as NMR and X-ray crystallography screens, computational techniques are playing an increasingly important role. The success of the computational simulations is due in large part to how the database of virtual fragments is prepared. In order to prepare the fragments appropriately it is necessary to understand how FBDD differs from other approaches and the issues inherent in building up molecules from smaller fragment pieces. The ultimate goal of these calculations is to link two or more simulated fragments into a molecule that has an experimental binding affinity consistent with the additive predicted binding affinities of the virtual fragments. Computationally predicting binding affinities is a complex process, with many opportunities for introducing error. Therefore, care should be taken with the fragment preparation procedure to avoid introducing additional inaccuracies. This chapter is focused on the preparation process used to create a virtual fragment database. Several key issues of fragment preparation which affect the accuracy of binding affinity predictions are discussed. The first issue is the selection of the two-dimensional atomic structure of the virtual fragment. Although the particular usage of the fragment can affect this choice (i.e., whether the fragment will be used for calibration, binding site characterization, hit identification, or lead optimization), general factors such as synthetic accessibility, size, and flexibility are major considerations in selecting the 2D structure. Other aspects of preparing the virtual fragments for simulation are the generation of three-dimensional conformations and the assignment of the associated atomic point charges.

  15. Numerical Simulations of Single Flow Element in a Nuclear Thermal Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See

    2007-01-01

    The objective of this effort is to develop an efficient and accurate computational methodology to predict both detailed and global thermo-fluid environments of a single flow element in a hypothetical solid-core nuclear thermal thrust chamber assembly. Several numerical and multi-physics thermo-fluid models, such as chemical reactions, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver. The numerical simulations of a single flow element provide a detailed thermo-fluid environment for thermal stress estimation and insight into the possible occurrence of mid-section corrosion. In addition, detailed conjugate heat transfer simulations were employed to develop the porosity models for efficient pressure drop and thermal load calculations.

  16. Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme

    DOE PAGES

    Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.; ...

    2016-11-07

    Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton’s method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. Performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.
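
    A small serial sketch of the Krylov-Schwarz idea using SciPy is shown below: GMRES preconditioned with a non-overlapping block-Jacobi operator, i.e., a zero-overlap additive Schwarz; the matrix, block count, and sizes are illustrative, not a power-system Jacobian:

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n, nblocks = 1000, 4
      A = sp.diags([-1.0, 2.001, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
      b = np.ones(n)

      bounds = np.linspace(0, n, nblocks + 1, dtype=int)
      blocks = list(zip(bounds[:-1], bounds[1:]))
      lus = [spla.splu(A[s:e, s:e].tocsc()) for s, e in blocks]   # factor each block once

      def precond(r):                   # block solves = zero-overlap additive Schwarz
          z = np.empty_like(r)
          for (s, e), lu in zip(blocks, lus):
              z[s:e] = lu.solve(r[s:e])
          return z

      M = spla.LinearOperator((n, n), matvec=precond)
      x, info = spla.gmres(A, b, M=M, restart=30)
      print(info, np.linalg.norm(A @ x - b))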

  18. Automatic domain updating technique for improving computational efficiency of 2-D flood-inundation simulation

    NASA Astrophysics Data System (ADS)

    Tanaka, T.; Tachikawa, Y.; Ichikawa, Y.; Yorozu, K.

    2017-12-01

    Flooding is one of the most hazardous disasters and causes serious damage to people and property around the world. To prevent or mitigate flood damage through early-warning systems and/or river management planning, numerical modelling of flood-inundation processes is essential. In the literature, flood-inundation models have been extensively developed and improved to achieve flood flow simulation with complex topography at high resolution. With increasing demands on flood-inundation modelling, its computational burden is now one of the key issues. Improvements to the computational efficiency of full shallow water equations have been made from various perspectives, such as approximations of the momentum equations, parallelization techniques, and coarsening approaches. To complement these techniques and further improve the computational efficiency of flood-inundation simulations, this study proposes an Automatic Domain Updating (ADU) method for 2-D flood-inundation simulation. The ADU method traces the wet-dry interface and automatically updates the simulation domain in response to the progress and recession of flood propagation. The updating algorithm is as follows: first, register the simulation cells potentially flooded at the initial stage (such as floodplains near river channels); then, whenever a registered cell is flooded, register its surrounding cells. The time for this additional process is kept small by checking only cells at the wet-dry interface, and the overall computation time is reduced by skipping the non-flooded area. This algorithm is easily applied to any type of 2-D flood-inundation model. The proposed ADU method is implemented with 2-D local inertial equations for the Yodo River basin, Japan. Case studies for two flood events show that the simulation finishes two to ten times faster while giving the same results as simulations without the ADU method.
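
    The domain-updating idea can be sketched independently of any particular solver; the cell-update routine below is a placeholder, not the authors' local inertial scheme:

      import numpy as np

      def step_with_adu(depth, active, update_cell):
          # advance only registered cells; register neighbors of any wet cell
          ny, nx = depth.shape
          new_active = set(active)
          for i, j in active:
              depth[i, j] = update_cell(depth, i, j)
              if depth[i, j] > 0.0:
                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                      ni, nj = i + di, j + dj
                      if 0 <= ni < ny and 0 <= nj < nx:
                          new_active.add((ni, nj))
          return new_active

      depth = np.zeros((100, 100)); depth[0, 0] = 1.0   # one wetted cell near the channel
      active = {(0, 0)}
      for _ in range(3):
          active = step_with_adu(depth, active, lambda d, i, j: d[i, j])
      print(len(active))    # only cells on the wet-dry front were ever touched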

  19. Electron and ion heating by whistler turbulence: Three-dimensional particle-in-cell simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, R. Scott; Gary, S. Peter; Wang, Joseph

    2014-12-17

    Three-dimensional particle-in-cell simulations of decaying whistler turbulence are carried out on a collisionless, homogeneous, magnetized, electron-ion plasma model. The simulations use an initial ensemble of relatively long-wavelength whistler modes with a broad range of initial propagation directions and an initial electron beta βₑ = 0.05. The computations follow the temporal evolution of the fluctuations as they cascade into broadband turbulent spectra at shorter wavelengths. Three simulations correspond to successively larger simulation boxes and successively longer wavelengths of the initial fluctuations. The computations confirm previous results showing that electron heating is preferentially parallel to the background magnetic field B₀, and ion heating is preferentially perpendicular to B₀. The new results here are that larger simulation boxes and longer initial whistler wavelengths yield weaker overall dissipation (consistent with linear dispersion theory predictions of decreased damping), stronger ion heating (consistent with a stronger ion Landau resonance), and weaker electron heating.

  20. Site Identification by Ligand Competitive Saturation (SILCS) simulations for fragment-based drug design.

    PubMed

    Faller, Christina E; Raman, E Prabhu; MacKerell, Alexander D; Guvench, Olgun

    2015-01-01

    Fragment-based drug design (FBDD) involves screening low molecular weight molecules ("fragments") that correspond to functional groups found in larger drug-like molecules to determine their binding to target proteins or nucleic acids. Based on the principle of thermodynamic additivity, two fragments that bind nonoverlapping nearby sites on the target can be combined to yield a new molecule whose binding free energy is the sum of those of the fragments. Experimental FBDD approaches, like NMR and X-ray crystallography, have proven very useful but can be expensive in terms of time, materials, and labor. Accordingly, a variety of computational FBDD approaches have been developed that provide different levels of detail and accuracy.The Site Identification by Ligand Competitive Saturation (SILCS) method of computational FBDD uses all-atom explicit-solvent molecular dynamics (MD) simulations to identify fragment binding. The target is "soaked" in an aqueous solution with multiple fragments having different identities. The resulting computational competition assay reveals what small molecule types are most likely to bind which regions of the target. From SILCS simulations, 3D probability maps of fragment binding called "FragMaps" can be produced. Based on the probabilities relative to bulk, SILCS FragMaps can be used to determine "Grid Free Energies (GFEs)," which provide per-atom contributions to fragment binding affinities. For essentially no additional computational overhead relative to the production of the FragMaps, GFEs can be used to compute Ligand Grid Free Energies (LGFEs) for arbitrarily complex molecules, and these LGFEs can be used to rank-order the molecules in accordance with binding affinities.
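
    The additive LGFE bookkeeping reduces, in caricature, to summing per-atom grid free energies looked up in the matching FragMap; the grids and coordinates below are random placeholders, not real SILCS output:

      import numpy as np

      grid_spacing, origin = 1.0, np.zeros(3)
      fragmaps = {"apolar": np.random.randn(20, 20, 20),        # fake GFE grids, kcal/mol
                  "hbond_donor": np.random.randn(20, 20, 20)}

      def lgfe(atoms):
          # atoms: list of (xyz, fragmap type); nearest-voxel lookup, no interpolation
          total = 0.0
          for xyz, kind in atoms:
              idx = tuple(np.round((np.asarray(xyz) - origin) / grid_spacing).astype(int))
              total += fragmaps[kind][idx]
          return total

      ligand = [((5.2, 7.9, 10.1), "apolar"), ((6.0, 8.4, 11.3), "hbond_donor")]
      print(lgfe(ligand))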

  1. High frequency dynamic engine simulation. [TF-30 engine

    NASA Technical Reports Server (NTRS)

    Schuerman, J. A.; Fischer, K. E.; Mclaughlin, P. W.

    1977-01-01

    A digital computer simulation of a mixed-flow, twin-spool turbofan engine was assembled to evaluate and improve the dynamic characteristics of the engine simulation at disturbance frequencies up to at least 100 Hz. One-dimensional forms of the dynamic mass, momentum, and energy equations were used to model the engine. A TF30 engine was simulated so that dynamic characteristics could be evaluated against results obtained from testing of the TF30 engine at the NASA Lewis Research Center. Dynamic characteristics of the engine simulation were improved by modifying the compression system model. Modifications to the compression system model were established by investigating the influence of the size and number of finite dynamic elements. Based on the results of this program, high-frequency engine simulations using finite dynamic elements can be assembled so that the engine dynamic configuration is optimal with respect to dynamic characteristics and computer execution time. Resizing the compression system's finite elements improved the dynamic characteristics of the engine simulation but showed that additional refinements are required to obtain close agreement between simulated and actual engine dynamic characteristics.

  2. Computer Simulation for Emergency Incident Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, D L

    2004-12-03

    This report describes the findings and recommendations resulting from the Department of Homeland Security (DHS) Incident Management Simulation Workshop held by the DHS Advanced Scientific Computing Program in May 2004. This workshop brought senior representatives of the emergency response and incident-management communities together with modeling and simulation technologists from Department of Energy laboratories. The workshop provided an opportunity for incident responders to describe the nature and substance of the primary personnel roles in an incident response, to identify current and anticipated roles of modeling and simulation in support of incident response, and to begin a dialog between the incident response and simulation technology communities that will guide and inform planned modeling and simulation development for incident response. This report provides a summary of the discussions at the workshop as well as a summary of simulation capabilities that are relevant to incident-management training, and recommendations for the use of simulation in both incident management and in incident management training, based on the discussions at the workshop. In addition, the report discusses areas where further research and development will be required to support future needs in this area.

  3. Computable general equilibrium model fiscal year 2013 capability development report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Brian Keith; Rivera, Michael Kelly; Boero, Riccardo

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine its simulation properties in more detail.

  4. The analysis of a generic air-to-air missile simulation model

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Chappell, Alan R.; Mcmanus, John W.

    1994-01-01

    A generic missile model was developed to evaluate the benefits of using a dynamic missile fly-out simulation system versus a static missile launch envelope system for air-to-air combat simulation. This paper examines the performance of a launch envelope model and a missile fly-out model. The launch envelope model bases its probability of killing the target aircraft on the target aircraft's position at the launch time of the weapon. The benefits gained from a launch envelope model are the simplicity of implementation and the minimal computational overhead required. A missile fly-out model takes into account the physical characteristics of the missile as it simulates the guidance, propulsion, and movement of the missile. The missile's probability of kill is based on the missile miss distance (or the minimum distance between the missile and the target aircraft). The problems associated with this method of modeling are a larger computational overhead, the additional complexity required to determine the missile miss distance, and the additional complexity of determining the reason(s) the missile missed the target. This paper evaluates the two methods and compares the results of running each method on a comprehensive set of test conditions.

  5. Commentary on the Integration of Model Sharing and Reproducibility Analysis to Scholarly Publishing Workflow in Computational Biomechanics

    PubMed Central

    Erdemir, Ahmet; Guess, Trent M.; Halloran, Jason P.; Modenese, Luca; Reinbolt, Jeffrey A.; Thelen, Darryl G.; Umberger, Brian R.

    2016-01-01

    Objective The overall goal of this document is to demonstrate that dissemination of models and analyses for assessing the reproducibility of simulation results can be incorporated in the scientific review process in biomechanics. Methods As part of a special issue on model sharing and reproducibility in IEEE Transactions on Biomedical Engineering, two manuscripts on computational biomechanics were submitted: A. Rajagopal et al., IEEE Trans. Biomed. Eng., 2016 and A. Schmitz and D. Piovesan, IEEE Trans. Biomed. Eng., 2016. Models used in these studies were shared with the scientific reviewers and the public. In addition to the standard review of the manuscripts, the reviewers downloaded the models and performed simulations that reproduced results reported in the studies. Results There was general agreement between the simulation results of the authors and those of the reviewers. Discrepancies were resolved during the necessary revisions. The manuscripts and instructions for download and simulation were updated in response to the reviewers’ feedback, incorporating changes that might otherwise have been missed had explicit model sharing and simulation reproducibility analysis not been conducted in the review process. The increased burden on the authors and the reviewers, to facilitate model sharing and to repeat simulations, was noted. Conclusion When the authors of computational biomechanics studies provide access to models and data, the scientific reviewers can download and thoroughly explore the model, perform simulations, and evaluate simulation reproducibility beyond the traditional manuscript-only review process. Significance Model sharing and reproducibility analysis in scholarly publishing will result in a more rigorous review process, which will enhance the quality of modeling and simulation studies and inform future users of computational models. PMID:28072567

  6. An Analysis of Failure Handling in Chameleon, A Framework for Supporting Cost-Effective Fault Tolerant Services

    NASA Technical Reports Server (NTRS)

    Haakensen, Erik Edward

    1998-01-01

    The desire for low-cost reliable computing is increasing. Most current fault-tolerant computing solutions are not very flexible, i.e., they cannot adapt to the reliability requirements of newly emerging applications in business, commerce, and manufacturing. It is important that users have a flexible, reliable platform to support both critical and noncritical applications. Chameleon, under development at the Center for Reliable and High-Performance Computing at the University of Illinois, is a software framework for supporting cost-effective, adaptable, networked fault-tolerant services. This thesis details a simulation of fault injection, detection, and recovery in Chameleon. The simulation was written in C++ using the DEPEND simulation library. The results obtained from the simulation included the amount of overhead incurred by the fault detection and recovery mechanisms supported by Chameleon. In addition, information was gained about fault scenarios from which Chameleon cannot recover. The results of the simulation showed that both critical and noncritical applications can be executed in the Chameleon environment with a fairly small amount of overhead. No single point of failure from which Chameleon could not recover was found. Chameleon was also found to be capable of recovering from several multiple-failure scenarios.

  7. Numerical simulations of clinical focused ultrasound functional neurosurgery

    NASA Astrophysics Data System (ADS)

    Pulkkinen, Aki; Werner, Beat; Martin, Ernst; Hynynen, Kullervo

    2014-04-01

    A computational model utilizing grid and finite difference methods was developed to simulate focused ultrasound functional neurosurgery interventions. The model couples the propagation of ultrasound in fluids (soft tissues) and solids (skull) with acoustic and visco-elastic wave equations. The computational model was applied to simulate clinical focused ultrasound functional neurosurgery treatments performed in patients suffering from therapy-resistant chronic neuropathic pain. Datasets of five patients were used to derive the treatment geometry. Eight sonications performed in the treatments were then simulated with the developed model. Computations were performed by driving the simulated phased-array ultrasound transducer with the acoustic parameters used in the treatments. Resulting focal temperatures and sizes of the thermal foci were compared quantitatively, in addition to qualitative inspection of the simulated pressure and temperature fields. This study found that the computational model and the simulation parameters predicted an average of 24 ± 13% lower focal temperature elevations than observed in the treatments. The size of the simulated thermal focus was found to be 40 ± 13% smaller in the anterior-posterior direction and 22 ± 14% smaller in the inferior-superior direction than in the treatments. The location of the simulated thermal focus was offset from the prescribed target by 0.3 ± 0.1 mm, while the peak focal temperature elevation observed in the measurements was offset by 1.6 ± 0.6 mm. Although the results of the simulations suggest that there could be some inaccuracies in either the tissue parameters used or in the simulation methods, the simulations were able to predict the focal spot locations and temperature elevations adequately for initial treatment planning performed to assess, for example, the feasibility of sonication. The accuracy of the simulations could be improved if more precise ultrasound tissue properties (especially of the skull bone) could be obtained.

  8. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language.

    PubMed

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-12-15

    The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined.

  9. Solutions of the Taylor-Green Vortex Problem Using High-Resolution Explicit Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2013-01-01

    A computational fluid dynamics code that solves the compressible Navier-Stokes equations was applied to the Taylor-Green vortex problem to examine the code's ability to accurately simulate the vortex decay and subsequent turbulence. The code, WRLES (Wave Resolving Large-Eddy Simulation), uses explicit central differencing to compute the spatial derivatives and explicit low-dispersion Runge-Kutta methods for the temporal discretization. The flow was first studied and characterized using Bogey & Bailly's 13-point dispersion relation preserving (DRP) scheme. The kinetic energy dissipation rate, computed both directly and from the enstrophy field, vorticity contours, and the energy spectra are examined. Results are in excellent agreement with a reference solution obtained using a spectral method and provide insight into computations of turbulent flows. In addition, the following studies were performed: a comparison of 4th-, 8th-, and 12th-order and DRP spatial differencing schemes, the effect of solution filtering on the results, the effect of large-eddy simulation sub-grid scale models, and the effect of high-order discretization of the viscous terms.
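
    For reference, the standard Taylor-Green initial condition on a 2π-periodic box, together with its box-averaged kinetic energy (which evaluates to 1/8 for this field), is reproduced below; the grid resolution is arbitrary and this is not the WRLES setup:

      import numpy as np

      N, L = 64, 2.0 * np.pi
      x = np.linspace(0.0, L, N, endpoint=False)
      X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

      u = np.cos(X) * np.sin(Y) * np.cos(Z)
      v = -np.sin(X) * np.cos(Y) * np.cos(Z)
      w = np.zeros_like(u)

      Ek = 0.5 * np.mean(u ** 2 + v ** 2 + w ** 2)
      print(Ek)          # 1/8 = 0.125 for this field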

  10. A new constitutive model for simulation of softening, plateau, and densification phenomena for trabecular bone under compression.

    PubMed

    Lee, Chi-Seung; Lee, Jae-Myung; Youn, BuHyun; Kim, Hyung-Sik; Shin, Jong Ki; Goh, Tae Sik; Lee, Jung Sub

    2017-01-01

    A new type of constitutive model and its computational implementation procedure for the simulation of trabecular bone are proposed in the present study. A yield surface-independent Frank-Brockman elasto-viscoplastic model is introduced to express nonlinear material behaviors such as softening beyond the yield point, plateau, and densification under compressive loads. In particular, hardening- and softening-dominant material functions are introduced and adopted in the plastic multiplier to describe each nonlinear material behavior separately. In addition, the elasto-viscoplastic model is transformed into an implicit-type discrete model and programmed as a user-defined material subroutine in a commercial finite element analysis code. Moreover, the consistent tangent modulus method is proposed to improve computational convergence and to reduce computational time during finite element analysis. Through the developed material library, the nonlinear stress-strain relationship is analyzed qualitatively and quantitatively, and the simulation results are compared with the results of compression tests on trabecular bone to validate the proposed constitutive model, computational method, and material library. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Gene regulatory networks: a coarse-grained, equation-free approach to multiscale computation.

    PubMed

    Erban, Radek; Kevrekidis, Ioannis G; Adalsteinsson, David; Elston, Timothy C

    2006-02-28

    We present computer-assisted methods for analyzing stochastic models of gene regulatory networks. The main idea that underlies this equation-free analysis is the design and execution of appropriately initialized short bursts of stochastic simulations; the results of these are processed to estimate coarse-grained quantities of interest, such as mesoscopic transport coefficients. In particular, using a simple model of a genetic toggle switch, we illustrate the computation of an effective free energy Phi and of a state-dependent effective diffusion coefficient D that characterize an unavailable effective Fokker-Planck equation. Additionally we illustrate the linking of equation-free techniques with continuation methods for performing a form of stochastic "bifurcation analysis"; estimation of mean switching times in the case of a bistable switch is also implemented in this equation-free context. The accuracy of our methods is tested by direct comparison with long-time stochastic simulations. This type of equation-free analysis appears to be a promising approach to computing features of the long-time, coarse-grained behavior of certain classes of complex stochastic models of gene regulatory networks, circumventing the need for long Monte Carlo simulations.
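
    The burst-and-estimate recipe can be sketched compactly; in the snippet below a bistable Langevin update is a stand-in for the stochastic gene-network simulator (e.g., a Gillespie run of the toggle switch), and all rates and parameters are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def short_burst(x0, dt=0.01, steps=10):
            # stand-in stochastic simulator: bistable drift plus noise
            x = x0
            for _ in range(steps):
                x += (x - x**3) * dt + 0.3 * np.sqrt(dt) * rng.standard_normal()
            return x

        def drift_and_diffusion(x, n=2000, dt=0.01, steps=10):
            # launch an ensemble of appropriately initialized short bursts and
            # read off the local drift and effective diffusion coefficient D(x)
            T = dt * steps
            dx = np.array([short_burst(x, dt, steps) for _ in range(n)]) - x
            return dx.mean() / T, dx.var() / (2.0 * T)

        for x0 in (-1.0, 0.0, 1.0):
            print(x0, drift_and_diffusion(x0))

    An effective free energy Phi along the coarse variable can then be reconstructed, up to an additive constant, by integrating the ratio of drift to diffusion, which is how the unavailable effective Fokker-Planck equation is characterized.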

  12. Spectral-based propagation schemes for time-dependent quantum systems with application to carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Chen, Zuojing; Polizzi, Eric

    2010-11-01

    Effective modeling and numerical spectral-based propagation schemes are proposed for addressing the challenges in time-dependent quantum simulations of systems ranging from atoms, molecules, and nanostructures to emerging nanoelectronic devices. While time-dependent Hamiltonian problems can be formally solved by propagating the solutions along tiny simulation time steps, a direct numerical treatment is often considered too computationally demanding. In this paper, however, we propose to go beyond these limitations by introducing high-performance numerical propagation schemes to compute the solution of the time-ordered evolution operator. In addition to the direct Hamiltonian diagonalizations that can be efficiently performed using the new eigenvalue solver FEAST, we have designed a Gaussian propagation scheme and a basis-transformed propagation scheme (BTPS) which considerably reduce the simulation time needed per time interval. BTPS is shown to offer the best computational efficiency, opening new perspectives in time-dependent simulations. Finally, these numerical schemes are applied to study the ac response of a (5,5) carbon nanotube within a three-dimensional real-space mesh framework.
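
    The direct-diagonalization propagation underlying such schemes can be sketched in a few lines, assuming the Hamiltonian is approximately constant over each step (atomic units, hbar = 1). NumPy's eigh stands in here for the FEAST eigenvalue solver used in the paper, and the random Hermitian matrix is purely illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 50
        A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        H = 0.5 * (A + A.conj().T)          # Hermitian model Hamiltonian

        psi = np.zeros(n, dtype=complex)
        psi[0] = 1.0                        # initial state

        dt, steps = 0.01, 100
        E, V = np.linalg.eigh(H)            # spectral decomposition of H
        phase = np.exp(-1j * E * dt)        # exact step in the eigenbasis
        for _ in range(steps):
            psi = V @ (phase * (V.conj().T @ psi))

        print(np.vdot(psi, psi).real)       # unitary: the norm stays at 1

    For a genuinely time-dependent H(t), the diagonalization must be refreshed as the Hamiltonian changes, which is exactly the cost that the Gaussian and basis-transformed schemes are designed to reduce.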

  13. Influence of sample preparation on the transformation of low-density to high-density amorphous ice: An explanation based on the potential energy landscape

    NASA Astrophysics Data System (ADS)

    Giovambattista, Nicolas; Starr, Francis W.; Poole, Peter H.

    2017-07-01

    Experiments and computer simulations of the transformations of amorphous ices display different behaviors depending on sample preparation methods and on the rates of change of temperature and pressure to which samples are subjected. In addition to these factors, simulation results also depend strongly on the chosen water model. Using computer simulations of the ST2 water model, we study how the sharpness of the compression-induced transition from low-density amorphous ice (LDA) to high-density amorphous ice (HDA) is influenced by the preparation of LDA. By studying LDA samples prepared using widely different procedures, we find that the sharpness of the LDA-to-HDA transformation is correlated with the depth of the initial LDA sample in the potential energy landscape (PEL), as characterized by the inherent structure energy. Our results show that the complex phenomenology of the amorphous ices reported in experiments and computer simulations can be understood and predicted in a unified way from knowledge of the PEL of the system.

  14. Implementation of extended Lagrangian dynamics in GROMACS for polarizable simulations using the classical Drude oscillator model.

    PubMed

    Lemkul, Justin A; Roux, Benoît; van der Spoel, David; MacKerell, Alexander D

    2015-07-15

    Explicit treatment of electronic polarization in empirical force fields used for molecular dynamics simulations represents an important advancement in simulation methodology. A straightforward means of treating electronic polarization in these simulations is the inclusion of Drude oscillators, which are auxiliary, charge-carrying particles bonded to the cores of atoms in the system. The additional degrees of freedom make these simulations more computationally expensive relative to simulations using traditional fixed-charge (additive) force fields. Thus, efficient tools are needed for conducting these simulations. Here, we present the implementation of highly scalable algorithms in the GROMACS simulation package that allow for the simulation of polarizable systems using extended Lagrangian dynamics with a dual Nosé-Hoover thermostat as well as simulations using a full self-consistent field treatment of polarization. The performance of systems of varying size is evaluated, showing that the present code parallelizes efficiently and is the fastest implementation of the extended Lagrangian methods currently available for simulations using the Drude polarizable force field. © 2015 Wiley Periodicals, Inc.

  15. [Computer simulation of a clinical magnet resonance tomography scanner for training purposes].

    PubMed

    Hackländer, T; Mertens, H; Cramer, B M

    2004-08-01

    The idea for this project was born of the necessity to offer medical students an easy approach to the theoretical basics of magnetic resonance imaging. The aim was to simulate the features and functions of such a scanner on a commercially available computer by means of a computer program. The simulation was programmed in pure Java under the GNU General Public License and is freely available for commercially available computers with Windows, Macintosh or Linux operating systems. The graphical user interface is oriented to a real scanner. In an external program, parameter images for the proton density and the relaxation times T1 and T2 are calculated on the basis of clinical examinations. From these, the image calculation is carried out in the simulation program pixel by pixel, on the basis of a pulse sequence chosen and modified by the user. The images can be stored and printed. In addition, it is possible to display and modify k-space images. Seven classes of pulse sequences are implemented, and up to 14 relevant sequence parameters, such as repetition time and echo time, can be altered. Aliasing and motion artifacts can be simulated. As the image calculation takes only a few seconds, interactive working is possible. The simulation has been used in university education for more than a year, successfully illustrating the dependence of the MR images on the measuring parameters. This should facilitate students' approach to understanding MR imaging in the future.
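
    The pixel-by-pixel calculation can be illustrated for the simplest implemented sequence class, a spin echo, using the standard signal equation S = PD*(1 - exp(-TR/T1))*exp(-TE/T2). The sketch below is in Python rather than the simulator's Java, and the tiny parameter maps are invented numbers, not clinical data.

        import numpy as np

        # Illustrative 2x2 parameter images (PD relative; T1, T2 in ms)
        PD = np.array([[1.0, 0.8], [0.9, 0.7]])
        T1 = np.array([[900.0, 600.0], [1200.0, 300.0]])
        T2 = np.array([[100.0, 80.0], [200.0, 60.0]])

        def spin_echo(PD, T1, T2, TR, TE):
            # classic spin-echo signal equation, evaluated per pixel
            return PD * (1.0 - np.exp(-TR / T1)) * np.exp(-TE / T2)

        # Changing TR and TE shifts the image weighting, as in the simulator:
        print(spin_echo(PD, T1, T2, TR=500.0, TE=15.0))    # T1-weighted
        print(spin_echo(PD, T1, T2, TR=2500.0, TE=90.0))   # T2-weighted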

  16. Exploitation of realistic computational anthropomorphic phantoms for the optimization of nuclear imaging acquisition and processing protocols.

    PubMed

    Loudos, George K; Papadimitroulas, Panagiotis G; Kagadis, George C

    2014-01-01

    Monte Carlo (MC) simulations play a crucial role in nuclear medical imaging since they can provide the ground truth for clinical acquisitions by integrating and quantifying all physical parameters that affect image quality. Over the last decade, a number of realistic computational anthropomorphic models have been developed to serve imaging as well as other biomedical engineering applications. The combination of MC techniques with realistic computational phantoms can provide a powerful tool for pre- and post-processing in imaging, data analysis, and dosimetry. This work aims to create a global database of simulated Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) exams; the methodology, as well as its first elements, is presented. Simulations are performed using the well-validated open-source GATE toolkit, standard anthropomorphic phantoms, and activity distributions of various radiopharmaceuticals derived from the literature. The resulting images, projections, and sinograms of each study are provided in the database and can be further exploited to evaluate processing and reconstruction algorithms. Patient studies with different characteristics are included in the database, and different computational phantoms were tested for the same acquisitions. These include the XCAT, Zubal, and Virtual Family phantoms, some of which are used for the first time in nuclear imaging. The created database will be freely available, and our current work is directed towards its extension by simulating additional clinical pathologies.

  17. Using computer simulations to facilitate conceptual understanding of electromagnetic induction

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Fen

    This study investigated the use of computer simulations to facilitate conceptual understanding in physics. The use of computer simulations in the present study was grounded in a conceptual framework drawn from findings related to the use of computer simulations in physics education. To achieve the goal of effective utilization of computers for physics education, I first reviewed studies pertaining to computer simulations in physics education, categorized by three different learning frameworks, along with studies comparing the effects of different simulation environments. My intent was to identify the learning context and factors for successful use of computer simulations in past studies and to learn from the studies which did not obtain a significant result. Based on the analysis of the reviewed literature, I proposed effective approaches to integrating computer simulations in physics education. These approaches are consistent with well-established education principles such as those suggested by How People Learn (Bransford, Brown, Cocking, Donovan, & Pellegrino, 2000). The research-based approaches to integrating computer simulations in physics education form a learning framework called Concept Learning with Computer Simulations (CLCS) in the current study. The second component of this study was to examine the CLCS learning framework empirically. The participants were recruited from a public high school in Beijing, China. All participating students were randomly assigned to two groups, the experimental (CLCS) group and the control (TRAD) group. Research-based computer simulations developed by the physics education research group at the University of Colorado at Boulder were used to tackle common conceptual difficulties in learning electromagnetic induction. While interacting with the computer simulations, CLCS students were asked to answer reflective questions designed to stimulate qualitative reasoning and explanation. After receiving model reasoning online, students were asked to submit their revised answers electronically. Students in the TRAD group were not granted access to the CLCS material and followed their normal classroom routine. At the end of the study, both the CLCS and TRAD students took a post-test. Questions on the post-test were divided into "what" questions, "how" questions, and an open response question. Analysis of students' post-test performance showed mixed results. While the TRAD students scored higher on the "what" questions, the CLCS students scored higher on the "how" questions and the open response question. This result suggested that more TRAD students knew what kinds of conditions may or may not cause electromagnetic induction without understanding how electromagnetic induction works. Analysis of the CLCS students' learning also suggested that frequent disruptions and technical trouble might pose threats to the effectiveness of the CLCS learning framework. Despite the mixed post-test results, the study revealed some limitations of the CLCS learning framework in promoting conceptual understanding in physics. Improvement can be made by providing students with the background knowledge necessary to understand model reasoning and by incorporating the CLCS learning framework with other learning frameworks to promote integration of various physics concepts. In addition, the reflective questions in the CLCS learning framework may be refined to better address students' difficulties. Limitations of the study, as well as suggestions for future research, are also presented.

  18. Graph Representations of Flow and Transport in Fracture Networks using Machine Learning

    NASA Astrophysics Data System (ADS)

    Srinivasan, G.; Viswanathan, H. S.; Karra, S.; O'Malley, D.; Godinez, H. C.; Hagberg, A.; Osthus, D.; Mohd-Yusof, J.

    2017-12-01

    Flow and transport of fluids through fractured systems is governed by properties and interactions at the micro-scale. Retaining information about the micro-structure, such as fracture length, orientation, aperture, and connectivity, in mesh-based computational models results in solving for millions to billions of degrees of freedom and quickly renders the problem computationally intractable. Our approach represents fracture networks as graphs, mapping fractures to nodes and intersections to edges, thereby greatly reducing the computational burden. Additionally, we use machine learning techniques to build simulators on the graph representation, trained on data from the mesh-based high-fidelity simulations, to speed up computation by orders of magnitude. We demonstrate our methodology on ensembles of discrete fracture networks, dividing the data into training and validation sets. Our machine-learned graph-based solvers yield over 3 orders of magnitude speedup without any significant sacrifice in accuracy.
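
    The graph reduction itself is simple to express; the sketch below uses the networkx library (an assumption; the authors' tooling is not specified) and made-up fracture attributes in place of the discrete-fracture-network ensembles.

        import networkx as nx

        # Each fracture becomes a node, each intersection becomes an edge;
        # micro-structural attributes survive on the nodes. Values invented.
        fractures = {
            "f1": {"length": 12.0, "aperture": 1e-4},
            "f2": {"length": 7.5,  "aperture": 5e-5},
            "f3": {"length": 20.0, "aperture": 2e-4},
        }
        intersections = [("f1", "f2"), ("f2", "f3")]

        G = nx.Graph()
        for fid, attrs in fractures.items():
            G.add_node(fid, **attrs)
        G.add_edges_from(intersections)

        # Degrees of freedom drop from a full mesh to a handful of nodes.
        print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")

    A machine-learned surrogate can then be trained to map node and edge attributes to quantities of interest (e.g., breakthrough times) computed by the high-fidelity mesh-based runs.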

  19. Accelerating 3D Hall MHD Magnetosphere Simulations with Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Bard, C.; Dorelli, J.

    2017-12-01

    The resolution required to simulate planetary magnetospheres with Hall magnetohydrodynamics results in program sizes approaching several hundred million grid cells. These would take years to run on a single computational core and require hundreds or thousands of cores to complete in a reasonable time, which in turn requires access to the largest supercomputers. Graphics processing units (GPUs) provide a viable alternative: one GPU can do the work of roughly 100 cores, bringing Hall MHD simulations of Ganymede within reach of modest GPU clusters (~8 GPUs). We report our progress in developing a GPU-accelerated, three-dimensional Hall magnetohydrodynamic code and present Hall MHD simulation results for both Ganymede (run on 8 GPUs) and Mercury (56 GPUs). We benchmark our Ganymede simulation against previous results for the Galileo G8 flyby, namely that adding the Hall term to ideal MHD simulations changes the global convection pattern within the magnetosphere. Additionally, we present new results for the G1 flyby as well as initial results from Hall MHD simulations of Mercury, and compare them with the corresponding ideal MHD runs.

  20. Unsteady flow simulations around complex geometries using stationary or rotating unstructured grids

    NASA Astrophysics Data System (ADS)

    Sezer-Uzol, Nilay

    In this research, three-dimensional, unsteady, separated, vortical flows around complex geometries are analyzed computationally using stationary or moving unstructured grids. Two main engineering problems are investigated. The first is the unsteady simulation of a ship airwake, in which helicopter operations become even more challenging, using stationary unstructured grids. The second is the unsteady simulation of wind turbine rotor flow fields using moving unstructured grids that rotate with the whole three-dimensional rigid rotor geometry. The three-dimensional, unsteady, parallel, unstructured, finite volume flow solver PUMA2 is used for the computational fluid dynamics (CFD) simulations considered in this research. The code is modified to have a moving-grid capability in order to perform three-dimensional, time-dependent rotor simulations. An instantaneous log-law wall model for large eddy simulations is also implemented in PUMA2 to investigate the very large Reynolds number flow fields of rotating blades. To verify the code modifications, several sample test cases are considered. In addition, interdisciplinary studies, aiming to provide new tools and insights to the aerospace and wind energy scientific communities, were carried out during this research by focusing on the coupling of ship airwake CFD simulations with helicopter flight dynamics and control analysis, the coupling of wind turbine rotor CFD simulations with aeroacoustic analysis, and the analysis of these time-dependent, large-scale CFD simulations with the help of a computational monitoring, steering and visualization tool, POSSE.

  1. Combined bending and thermal fatigue of high-temperature metal-matrix composites - Computational simulation

    NASA Technical Reports Server (NTRS)

    Gotsis, Pascal K.; Chamis, Christos C.

    1992-01-01

    The nonlinear behavior of a high-temperature metal-matrix composite (HT-MMC) was simulated by using the metal matrix composite analyzer (METCAN) computer code. The simulation started with the fabrication process, proceeded to thermomechanical cyclic loading, and ended with the application of a monotonic load. Classical laminate theory and composite micromechanics and macromechanics are used in METCAN, along with a multifactor interaction model for the constituents' behavior. The stress-strain behavior from the macromechanical and the micromechanical points of view was examined in detail, as were the initiation and final failure of the constituents and the plies in the composite. It was shown that, when the fibers and the matrix were perfectly bonded, the fracture started in the matrix and then propagated with increasing load to the fibers. After the fibers fractured, the composite lost its capacity to carry additional load and fractured.

  2. Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, J.H.; Michelotti, M.D.; Riemer, N.

    2016-10-01

    Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition under atmospherically relevant conditions, we demonstrate an approximately 50-fold increase in algorithm efficiency.
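
    The idea can be sketched as rejection sampling with a per-bin bound on the removal rate, so that most particles in a bin are handled in one inexpensive draw. The deposition-rate law below is a made-up monotone function, not the paper's dry-deposition model, and all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)

        def removal_rate(d):          # stand-in removal rate vs. diameter
            return 1e-3 * np.sqrt(d)

        diam = rng.uniform(0.01, 10.0, size=100_000)   # diameters (um)
        edges = np.logspace(-2, 1, 31)                 # 30 log-spaced bins
        which = np.digitize(diam, edges) - 1

        dt = 1.0
        removed = np.zeros(diam.size, dtype=bool)
        for b in range(edges.size - 1):
            idx = np.flatnonzero(which == b)
            if idx.size == 0:
                continue
            k_max = removal_rate(edges[b + 1])   # bound for the whole bin
            n_prop = rng.binomial(idx.size, min(1.0, k_max * dt))
            prop = rng.choice(idx, size=n_prop, replace=False)
            # accept each proposal with (true rate) / (bin bound)
            keep = rng.random(n_prop) < removal_rate(diam[prop]) / k_max
            removed[prop[keep]] = True

        print(removed.sum(), "particles removed this step")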

  3. Combined thermal and bending fatigue of high-temperature metal-matrix composites: Computational simulation

    NASA Technical Reports Server (NTRS)

    Gotsis, Pascal K.

    1991-01-01

    The nonlinear behavior of a high-temperature metal-matrix composite (HT-MMC) was simulated by using the metal matrix composite analyzer (METCAN) computer code. The simulation started with the fabrication process, proceeded to thermomechanical cyclic loading, and ended with the application of a monotonic load. Classical laminate theory and composite micromechanics and macromechanics are used in METCAN, along with a multifactor interaction model for the constituents' behavior. The stress-strain behavior from the macromechanical and the micromechanical points of view was examined in detail, as were the initiation and final failure of the constituents and the plies in the composite. It was shown that, when the fibers and the matrix were perfectly bonded, the fracture started in the matrix and then propagated with increasing load to the fibers. After the fibers fractured, the composite lost its capacity to carry additional load and fractured.

  4. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Lytle, John K. (Technical Monitor)

    2002-01-01

    Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never before thought possible. Full-engine, three-dimensional computational fluid dynamics simulations of propulsion systems were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT). This paper discusses the salient features of the NPSS Architecture, including its interface layer, object layer, implementation for accessing legacy codes, numerical zooming infrastructure, and computing layer. The computing layer focuses on the use and deployment of these propulsion simulations on parallel and distributed computing platforms, which has been the focus of NASA Ames. Additional features of the object-oriented architecture that support multidisciplinary (MD) coupling, computer-aided design (CAD) access, and MD coupling objects will be discussed. Included will be a discussion of the successes, challenges, and benefits of implementing this architecture.

  5. A Grand Canonical Monte Carlo simulation program for computing ion distributions around biomolecules in hard sphere solvents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The GIBS software program is a Grand Canonical Monte Carlo (GCMC) simulation program, written in C++, that can be used (1) to compute the excess chemical potential of ions and the mean activity coefficients of salts in homogeneous electrolyte solutions and (2) to compute the distribution of ions around fixed macromolecules such as nucleic acids and proteins. The solvent can be represented as neutral hard spheres or as a dielectric continuum. The ions are represented as charged hard spheres that can interact via Coulomb, hard-sphere, or Lennard-Jones potentials. In addition to hard-sphere repulsions, the ions can also be made to interact with the solvent hard spheres via short-ranged attractive square-well potentials.
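
    A bare-bones sketch of the grand canonical insertion/deletion moves for the pure hard-sphere case is given below, using the textbook acceptance rules with activity z = exp(beta*mu)/Lambda^3. The box size, diameter, and activity are illustrative; GIBS itself adds the Coulomb/Lennard-Jones energies and solvent spheres described above.

        import numpy as np

        rng = np.random.default_rng(3)

        L, sigma, z = 10.0, 1.0, 0.3    # box edge, diameter, activity
        V = L**3
        pos = np.empty((0, 3))          # start from an empty box

        def overlaps(x, others):
            if others.shape[0] == 0:
                return False
            d = others - x
            d -= L * np.round(d / L)    # minimum-image convention
            return (np.sum(d * d, axis=1) < sigma**2).any()

        for _ in range(20_000):
            if rng.random() < 0.5:      # insertion attempt
                x = rng.uniform(0.0, L, size=3)
                N = pos.shape[0]
                if not overlaps(x, pos) and rng.random() < z * V / (N + 1):
                    pos = np.vstack([pos, x])
            elif pos.shape[0] > 0:      # deletion attempt (energy change is 0)
                N = pos.shape[0]
                if rng.random() < N / (z * V):
                    pos = np.delete(pos, rng.integers(N), axis=0)

        print("equilibrium density ~", pos.shape[0] / V)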

  6. Sensitivity analysis of a ground-water-flow model

    USGS Publications Warehouse

    Torak, Lynn J.; ,

    1991-01-01

    A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.
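
    The bookkeeping behind such sensitivity curves is easy to sketch: perturb one factor at a time by a multiplier, rerun the model, and record the sum of squared residuals. The forward "model", factor names, and observations below are hypothetical stand-ins for the finite-element code and its calibrated inputs.

        import numpy as np

        def model(f):
            # hypothetical forward model: heads at three observation wells
            wells = np.array([1.0, 2.0, 3.0])
            return f["transmissivity"] * wells + f["recharge"]

        observed = np.array([3.1, 5.0, 6.9])
        base = {"transmissivity": 1.0, "recharge": 2.0}

        for name in base:
            curve = []
            for m in np.linspace(0.5, 1.5, 11):
                trial = dict(base, **{name: base[name] * m})
                r = observed - model(trial)
                curve.append(round(float(r @ r), 3))
            print(name, curve)   # steep curves flag influential factors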

  7. Particle-in-cell simulations of Hall plasma thrusters

    NASA Astrophysics Data System (ADS)

    Miranda, Rodrigo; Ferreira, Jose Leonardo; Martins, Alexandre

    2016-07-01

    Hall plasma thrusters can be modelled using particle-in-cell (PIC) simulations. In these simulations, the plasma is described by a set of equations which represent a coupled system of charged particles and electromagnetic fields. The fields are computed using a spatial grid (i.e., a discretization in space), whereas the particles can move continuously in space. Briefly, the particle and field dynamics are computed as follows. First, the forces due to electric and magnetic fields are employed to calculate the velocities and positions of the particles. Next, the velocities and positions of the particles are used to compute the charge and current densities at discrete positions in space. Finally, these densities are used to solve the electromagnetic field equations on the grid, which are interpolated at the positions of the particles to obtain the acting forces, restarting the cycle. We will present numerical simulations using PIC software to study the turbulence, waves, and instabilities that arise in Hall plasma thrusters. We have successfully reproduced a numerical simulation of an SPT-100 Hall thruster using a two-dimensional (2D) model. In addition, we are developing a 2D model of a cylindrical Hall thruster. The results of these simulations will contribute to improving the performance of plasma thrusters to be used in CubeSats currently under development at the Plasma Laboratory at the University of Brasília.
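
    The cycle described above compresses naturally into a short 1D electrostatic sketch (normalized units, periodic boundaries, nearest-grid-point weighting). Production Hall-thruster PIC codes are, of course, multidimensional and include magnetic fields, collisions, and ionization; every parameter here is illustrative.

        import numpy as np

        rng = np.random.default_rng(4)

        ng, L, npart, dt = 64, 2.0 * np.pi, 10_000, 0.1
        dx = L / ng
        x = rng.uniform(0.0, L, npart)       # electron positions
        v = rng.standard_normal(npart)       # electron velocities

        for _ in range(100):
            # 1) deposit charge on the grid (ions form a fixed background)
            cell = (x / dx).astype(int) % ng
            ne = np.bincount(cell, minlength=ng) * (L / npart) / dx
            rho = 1.0 - ne
            # 2) field solve dE/dx = rho in Fourier space (k = 0 dropped)
            k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
            rho_k = np.fft.fft(rho)
            E_k = np.zeros_like(rho_k)
            E_k[1:] = rho_k[1:] / (1j * k[1:])
            E = np.fft.ifft(E_k).real
            # 3) gather the field at the particles and 4) push them
            v -= E[cell] * dt                # electron charge/mass = -1
            x = (x + v * dt) % L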

  8. Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2010-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
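
    For reference, the proper orthogonal decomposition step shared by all three procedures reduces in practice to a singular value decomposition of a snapshot matrix, with the squared singular values ranking the modes by captured energy. The random low-rank "snapshots" below are illustrative stand-ins for the structural response histories.

        import numpy as np

        rng = np.random.default_rng(5)

        ndof, nsnap = 200, 400
        true_modes = rng.standard_normal((ndof, 3))
        snapshots = true_modes @ rng.standard_normal((3, nsnap)) \
                    + 0.01 * rng.standard_normal((ndof, nsnap))

        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        energy = s**2 / np.sum(s**2)

        # keep the smallest basis capturing 99.9% of the snapshot energy
        r = int(np.searchsorted(np.cumsum(energy), 0.999)) + 1
        basis = U[:, :r]
        print("modes kept:", r)

    The reduced-order equations of motion are then obtained by projecting the full model onto this basis, which is where the computational savings over the physical-degree-of-freedom simulation come from.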

  9. Manned systems utilization analysis (study 2.1). Volume 5: Program listing for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1975-01-01

    The LOVES computer code developed to investigate the concept of space servicing operational satellites as an alternative to replacing expendable satellites or returning satellites to earth for ground refurbishment is presented. In addition to having the capability to simulate the expendable satellite operation and the ground refurbished satellite operation, the program is designed to simulate the logistics of space servicing satellites using an upper stage vehicle and/or the earth to orbit shuttle. The program not only provides for the initial deployment of the satellite but also simulates the random failure and subsequent replacement of various equipment modules comprising the satellite. The program has been used primarily to conduct trade studies and/or parametric studies of various space program operational philosophies.

  10. Feasibility study for a numerical aerodynamic simulation facility. Volume 3: FMP language specification/user manual

    NASA Technical Reports Server (NTRS)

    Kenner, B. G.; Lincoln, N. R.

    1979-01-01

    The manual is intended to show the revisions and additions to the current STAR FORTRAN. The changes are made to incorporate an FMP (Flow Model Processor) for use in the Numerical Aerodynamic Simulation Facility (NASF) for the purpose of simulating fluid flow over three-dimensional bodies in wind tunnel environments and in free space. The FORTRAN programming language for the STAR-100 computer contains both CDC and unique STAR extensions to the standard FORTRAN. Several of the STAR FORTRAN extensions to standard FORTRAN allow the FORTRAN user to exploit the vector processing capabilities of the STAR computer. In STAR FORTRAN, vectors can be expressed with an explicit notation, functions are provided that return vector results, and special call statements enable access to any machine instruction.

  11. Computer simulation analysis of the behavior of renal-regulating hormones during hypogravic stress

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1982-01-01

    A computer simulation of a mathematical circulation model is used to study the alterations of body fluids and their electrolyte composition that occur in weightlessness. The behavior of the renal-regulating hormones which control these alterations is compared in simulations of several one-g analogs of weightlessness and space flight. It is shown that the renal-regulating hormones represent a tightly coupled system that responds acutely to volume disturbances and chronically to electrolyte disturbances. During hypogravic conditions these responses lead to an initial suppression of hormone levels and a long-term effect which varies depending on metabolic factors that can alter the plasma electrolytes. In addition, it is found that if pressure effects normalize rapidly, a transition phase may exist which leads to a dynamic multiphasic endocrine response.

  12. Virtual reality simulation: using three-dimensional technology to teach nursing students.

    PubMed

    Jenson, Carole E; Forsyth, Diane McNally

    2012-06-01

    The use of computerized technology is rapidly growing in the classroom and in healthcare. An emerging computer technology strategy for nursing education is the use of virtual reality simulation. This computer-based, three-dimensional educational tool simulates real-life patient experiences in a risk-free environment, allows for repeated practice sessions, requires clinical decision making, exposes students to diverse patient conditions, provides immediate feedback, and is portable. The purpose of this article was to review the importance of virtual reality simulation as a computerized teaching strategy. In addition, a project exploring the readiness of nursing faculty at one major Midwestern university for the use of virtual reality simulation as a computerized teaching strategy is described; faculty thought virtual reality simulation would increase students' knowledge of an intravenous line insertion procedure. Faculty who practiced intravenous catheter insertion via virtual reality simulation reported a wide range of learning experiences, congruent with the literature on barriers to student learning. Innovative teaching strategies such as virtual reality simulation address the barriers of increasing patient acuity, high student-to-faculty ratios, faculty concerns about patient safety, and student anxiety, and can offer rapid feedback to students.

  13. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    NASA Astrophysics Data System (ADS)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and a PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation, for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators, and with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA- and PSO-optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and the additions required to generate successful problem-specific parameter sets.
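
    A minimal PSO with an exponentially decaying inertia term conveys the flavor of the velocity-update modifications listed above; the authors' exact decay function, "execution best" particle, and attractive-force term are not reproduced here, and the sphere objective, swarm size, and coefficients are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)

        def objective(x):                    # stand-in fitness: sphere function
            return np.sum(x * x, axis=1)

        n, dim, c1, c2, w0, lam = 30, 5, 1.5, 1.5, 0.9, 0.01
        x = rng.uniform(-5.0, 5.0, (n, dim))
        v = np.zeros((n, dim))
        pbest, pbest_f = x.copy(), objective(x)
        g = pbest[np.argmin(pbest_f)]        # swarm (global) best

        for t in range(200):
            w = w0 * np.exp(-lam * t)        # exponentially decaying inertia
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            f = objective(x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[np.argmin(pbest_f)]

        print("best fitness:", pbest_f.min())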

  14. Numerical Simulation of the Water Cycle Change Over the 20th Century

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Schubert, Siegfried D.

    2003-01-01

    We have used numerical models to test the impact of changes in sea surface temperatures (SSTs) and carbon dioxide (CO2) concentration on the global circulation, focusing particularly on the hydrologic cycle, namely the global cycling of water and continental recycling of water. We have run four numerical simulations using mean annual SSTs from the early part of the 20th century (1900-1920) and the later part (1980-2000), and we vary the CO2 concentrations for these periods as well. The duration of the simulations is 15 years, and the spatial resolution is 2 degrees. We use passive tracers to study the geographical sources of water. Surface evaporation from predetermined continental and oceanic regions provides the source of water for each passive tracer. In this way, we compute the percentage of precipitation over the globe contributed by each region, which can also be used to estimate precipitation recycling. In addition, we use the passive tracers to independently compute the global cycling of water (compared with the traditional Q/P calculation).

  15. Simulation of Nonlinear Instabilities in an Attachment-Line Boundary Layer

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.

    1996-01-01

    The linear and the nonlinear stability of disturbances that propagate along the attachment line of a three-dimensional boundary layer is considered. The spatially evolving disturbances in the boundary layer are computed by direct numerical simulation (DNS) of the unsteady, incompressible Navier-Stokes equations. Disturbances are introduced either by forcing at the inflow or by applying suction and blowing at the wall. Quasi-parallel linear stability theory and a nonparallel theory yield notably different stability characteristics for disturbances near the critical Reynolds number; the DNS results confirm the latter theory. Previously, a weakly nonlinear theory and computations revealed a high wave-number region of subcritical disturbance growth. More recent computations have failed to achieve this subcritical growth. The present computational results indicate the presence of subcritically growing disturbances; the results support the weakly nonlinear theory. Furthermore, an explanation is provided for the previous theoretical and computational discrepancy. In addition, the present results demonstrate that steady suction can be used to stabilize disturbances that otherwise grow subcritically along the attachment line.

  16. Computational simulations of the interaction of water waves with pitching flap-type ocean wave energy converters

    NASA Astrophysics Data System (ADS)

    Pathak, Ashish; Raessi, Mehdi

    2016-11-01

    Using an in-house computational framework, we have studied the interaction of water waves with pitching flap-type ocean wave energy converters (WECs). The computational framework solves the full 3D Navier-Stokes equations and captures important effects, including fluid-solid interaction and nonlinear and viscous effects. The results of the computational tool are first compared against experimental data on the response of a flap-type WEC in a wave tank, and excellent agreement is demonstrated. Further simulations at the model and prototype scales are presented to assess the validity of Froude scaling. The simulations are used to address some important questions, such as the validity range of common WEC modeling approaches that rely heavily on Froude scaling and inviscid potential flow theory. Additionally, the simulations examine the role of the Keulegan-Carpenter (KC) number, which is often used as a measure of the relative importance of viscous drag on bodies exposed to oscillating flows. The performance of the flap-type WECs is investigated at various KC numbers to establish the relationship between viscous drag and KC number for such a geometry. That is of significant importance because such a relationship exists only for simple geometries, e.g., a cylinder. Support from the National Science Foundation is gratefully acknowledged.

  17. Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockard, David P.

    2013-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine grid solutions and experimental data. In addition, the grid adaptation strategy discussed here simplifies the grid generation process and improves the computational efficiency of CFD simulations.

  18. 2D modeling of direct laser metal deposition process using a finite particle method

    NASA Astrophysics Data System (ADS)

    Anedaf, T.; Abbès, B.; Abbès, F.; Li, Y. M.

    2018-05-01

    Direct laser metal deposition is one of the additive manufacturing processes used to produce complex metallic parts. A thorough understanding of the underlying physical phenomena is required to obtain high-quality parts. In this work, a mathematical model is presented to simulate the coaxial laser direct deposition process, taking into account mass addition, heat transfer, and fluid flow with a free surface and melting. The fluid flow in the melt pool, together with the mass and energy balances, is solved using the computational fluid dynamics (CFD) software NOGRID-points, based on the meshless Finite Pointset Method (FPM). The basis of the computations is a point cloud, which represents the continuum fluid domain. Each finite point carries all fluid information (density, velocity, pressure, and temperature). The dynamic shape of the molten zone is explicitly described by the point cloud. The proposed model is used to simulate single-layer cladding.

  19. Ligand Binding: Molecular Mechanics Calculation of the Streptavidin-Biotin Rupture Force

    NASA Astrophysics Data System (ADS)

    Grubmuller, Helmut; Heymann, Berthold; Tavan, Paul

    1996-02-01

    The force required to rupture the streptavidin-biotin complex was calculated here by computer simulations. The computed force agrees well with that obtained by recent single molecule atomic force microscope experiments. These simulations suggest a detailed multiple-pathway rupture mechanism involving five major unbinding steps. Binding forces and specificity are attributed to a hydrogen bond network between the biotin ligand and residues within the binding pocket of streptavidin. During rupture, additional water bridges substantially enhance the stability of the complex and even dominate the binding interactions. In contrast, steric restraints do not appear to contribute to the binding forces, although conformational motions were observed.

  20. Scaling up Planetary Dynamo Modeling to Massively Parallel Computing Systems: The Rayleigh Code at ALCF

    NASA Astrophysics Data System (ADS)

    Featherstone, N. A.; Aurnou, J. M.; Yadav, R. K.; Heimpel, M. H.; Soderlund, K. M.; Matsui, H.; Stanley, S.; Brown, B. P.; Glatzmaier, G.; Olson, P.; Buffett, B. A.; Hwang, L.; Kellogg, L. H.

    2017-12-01

    In the past three years, CIG's Dynamo Working Group has successfully ported the Rayleigh code to the Argonne Leadership Computing Facility's Mira BG/Q system. In this poster, we present some of our first results, showing simulations of (1) convection in the solar convection zone, (2) dynamo action in Earth's core, and (3) convection in the jovian deep atmosphere. These simulations have made efficient use of 131 thousand, 131 thousand, and 232 thousand cores, respectively, on Mira. In addition to our novel results, the joys and logistical challenges of carrying out such large runs will also be discussed.

  1. A comprehensive combined experimental and computational framework for pre-clinical wear simulation of total knee replacements.

    PubMed

    Abdelgaied, A; Fisher, J; Jennings, L M

    2018-02-01

    A more robust pre-clinical wear simulation framework is required in order to simulate the wider and higher ranges of activities observed in different patient populations, such as younger, more active patients. Such a framework will help to understand and address the reported higher failure rates for younger and more active patients (National Joint Registry, 2016). The current study has developed and validated a comprehensive combined experimental and computational framework for pre-clinical wear simulation of total knee replacements (TKR). The input mechanical (elastic modulus and Poisson's ratio) and wear parameters of the moderately cross-linked ultra-high molecular weight polyethylene (UHMWPE) bearing material were independently measured from experimental studies under realistic test conditions, similar to the loading conditions found in total knee replacements. The wear predictions from the computational wear simulation were validated against direct experimental wear measurements for size 3 Sigma curved total knee replacements (DePuy, UK) in an independent experimental wear simulation study under three different daily activities: walking, deep squat, and stair ascent kinematic conditions. The measured compressive mechanical properties of the moderately cross-linked UHMWPE material were more than 20% lower than those reported in the literature under tensile test conditions. The pin-on-plate wear coefficient of moderately cross-linked UHMWPE was significantly dependent on the contact stress and the degree of cross-shear at the articulating surfaces. The computational wear predictions for the TKR from the current framework were consistent and in good agreement with the independent full-TKR experimental wear simulation measurements, with a coefficient of determination of 0.94. In addition, the comprehensive combined experimental and computational framework was able to explain the complex experimental wear trends from the three different daily activities investigated. Therefore, such a framework can be adopted as a pre-clinical simulation approach to optimise different designs and materials, as well as patient-specific total knee replacements, for a range of activities. Copyright © 2017. Published by Elsevier Ltd.

  2. Quantitative validation of carbon-fiber laminate low velocity impact simulations

    DOE PAGES

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    2015-09-26

    Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks, and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction with the simulations and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to the experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.

  3. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.

    1999-08-10

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints, including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation, and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints, including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are then actually segmented and packaged; the segmentation and packaging may also be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.

  4. Testing the Use of Implicit Solvent in the Molecular Dynamics Modelling of DNA Flexibility

    NASA Astrophysics Data System (ADS)

    Mitchell, J.; Harris, S.

    DNA flexibility controls packaging, looping and in some cases sequence specific protein binding. Molecular dynamics simulations carried out with a computationally efficient implicit solvent model are potentially a powerful tool for studying larger DNA molecules than can be currently simulated when water and counterions are represented explicitly. In this work we compare DNA flexibility at the base pair step level modelled using an implicit solvent model to that previously determined from explicit solvent simulations and database analysis. Although much of the sequence dependent behaviour is preserved in implicit solvent, the DNA is considerably more flexible when the approximate model is used. In addition we test the ability of the implicit solvent to model stress induced DNA disruptions by simulating a series of DNA minicircle topoisomers which vary in size and superhelical density. When compared with previously run explicit solvent simulations, we find that while the levels of DNA denaturation are similar using both computational methodologies, the specific structural form of the disruptions is different.

  5. GENESIS 1.1: A hybrid-parallel molecular dynamics simulator with enhanced sampling algorithms on multiple computational platforms.

    PubMed

    Kobayashi, Chigusa; Jung, Jaewoon; Matsunaga, Yasuhiro; Mori, Takaharu; Ando, Tadashi; Tamura, Koichi; Kamiya, Motoshi; Sugita, Yuji

    2017-09-30

    GENeralized-Ensemble SImulation System (GENESIS) is a software package for molecular dynamics (MD) simulation of biological systems. It is designed to push back the limitations on system size and accessible time scale by adopting highly parallelized schemes and enhanced conformational sampling algorithms. In this new version, GENESIS 1.1, new functions and advanced algorithms have been added. The all-atom and coarse-grained potential energy functions used in the AMBER and GROMACS packages are now available in addition to the CHARMM energy functions. The performance of MD simulations has been greatly improved by further optimization, multiple time-step integration, and hybrid (CPU + GPU) computing. The string method and replica-exchange umbrella sampling with flexible collective-variable choice are used for finding the minimum free-energy pathway and obtaining free-energy profiles for conformational changes of a macromolecule. These new features increase the usefulness and power of GENESIS for modeling and simulation in biological research. © 2017 Wiley Periodicals, Inc.

  6. Multiscale Computer Simulation of Failure in Aerogels

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2008-01-01

    Aerogels have been of interest to the aerospace community primarily for their thermal properties, notably their low thermal conductivities. While such gels are typically fragile, recent advances in the application of conformal polymer layers to these gels have made them potentially useful as lightweight structural materials as well. We have previously performed computer simulations of aerogel thermal conductivity and tensile and compressive failure, with results that are in qualitative, and sometimes quantitative, agreement with experiment. However, recent experiments in our laboratory suggest that gels having similar densities may exhibit substantially different properties. In this work, we extend our original diffusion-limited cluster aggregation (DLCA) model for gel structure to incorporate additional variation in DLCA simulation parameters, with the aim of producing DLCA clusters of similar densities that nevertheless have different fractal dimensions and secondary particle coordination. We perform particle statics simulations of gel strain on these clusters and consider the effects of differing DLCA simulation conditions, and the resultant differences in fractal dimension and coordination, on gel strain properties.

  7. Computed tomographic colonography to screen for colorectal cancer, extracolonic cancer, and aortic aneurysm: model simulation with cost-effectiveness analysis.

    PubMed

    Hassan, Cesare; Pickhardt, Perry J; Laghi, Andrea; Kim, Daniel H; Zullo, Angelo; Iafrate, Franco; Di Giulio, Lorenzo; Morini, Sergio

    2008-04-14

    In addition to detecting colorectal neoplasia, abdominal computed tomography (CT) with colonography technique (CTC) can also detect unsuspected extracolonic cancers and abdominal aortic aneurysms (AAA). The efficacy and cost-effectiveness of this combined abdominal CT screening strategy are unknown. A computerized Markov model was constructed to simulate the occurrence of colorectal neoplasia, extracolonic malignant neoplasms, and AAA in a hypothetical cohort of 100,000 subjects from the United States who were 50 years of age. Simulated screening with CTC, using a 6-mm polyp size threshold for reporting, was compared with a competing model of optical colonoscopy (OC), both without and with abdominal ultrasonography for AAA detection (OC-US strategy). In the simulated population, CTC was the dominant screening strategy, gaining an additional 1458 and 462 life-years compared with the OC and OC-US strategies and being less costly, with a savings of $266 and $449 per person, respectively. The additional gains for CTC were largely due to a decrease in AAA-related deaths, whereas the modeled benefit from extracolonic cancer downstaging was a relatively minor factor. In sensitivity analysis, OC-US became more cost-effective only when the CTC sensitivity for large polyps dropped to 61% or when broad variations of costs were simulated, such as an increase in CTC cost from $814 to $1300 or a decrease in OC cost from $1100 to $500. With the OC-US approach, suboptimal compliance had a strong negative influence on efficacy and cost-effectiveness. The estimated mortality from CT-induced cancer was lower than the estimated colonoscopy-related mortality (8 vs 22 deaths), both of which were minor compared with the positive benefit from screening. When the detection of extracolonic findings such as AAA and extracolonic cancer is considered in addition to colorectal neoplasia in our model simulation, CT colonography is a dominant screening strategy (i.e., more clinically effective and more cost-effective) over both colonoscopy and colonoscopy with one-time ultrasonography.
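
    The mechanics of such a Markov cohort simulation are straightforward to sketch: a state vector is advanced by a transition matrix once per cycle while life-years and costs accumulate. The states, transition probabilities, and costs below are invented for illustration and are unrelated to the calibrated inputs of the published model.

        import numpy as np

        # states: [well, colorectal cancer, dead]; annual transition matrix
        P = np.array([
            [0.988, 0.002, 0.010],
            [0.000, 0.800, 0.200],
            [0.000, 0.000, 1.000],
        ])
        cost = np.array([0.0, 30_000.0, 0.0])     # annual cost per state ($)

        cohort = np.array([100_000.0, 0.0, 0.0])  # 50-year-olds at entry
        life_years = total_cost = 0.0
        for year in range(50):
            life_years += cohort[:2].sum()        # alive states accrue years
            total_cost += (cohort * cost).sum()
            cohort = cohort @ P                   # advance one cycle

        print(life_years, total_cost)

    Competing screening strategies are compared by modifying the transition probabilities (e.g., fewer cancers under screening) and adding each strategy's costs, then contrasting the accumulated life-years and dollars.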

  8. Development of an Efficient CFD Model for Nuclear Thermal Thrust Chamber Assembly Design

    NASA Technical Reports Server (NTRS)

    Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See

    2007-01-01

    The objective of this effort is to develop an efficient and accurate computational methodology to predict both the detailed thermo-fluid environments and the global characteristics of the internal ballistics for a hypothetical solid-core nuclear thermal thrust chamber assembly (NTTCA). Several numerical and multi-physics thermo-fluid models, such as real fluid, chemically reacting, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver as the underlying computational methodology. Numerical simulations of the detailed thermo-fluid environment of a single flow element provide a mechanism to estimate the thermal stress and the possible occurrence of mid-section corrosion of the solid core. In addition, the results of the detailed simulation were employed to fine-tune the porosity model to mimic the pressure drop and thermal load of the coolant flow through a single flow element. The tuned porosity model enables an efficient simulation of the entire NTTCA system and an evaluation of its performance during the design cycle.

  9. Kinetic barriers in the isomerization of substituted ureas: implications for computer-aided drug design.

    PubMed

    Loeffler, Johannes R; Ehmki, Emanuel S R; Fuchs, Julian E; Liedl, Klaus R

    2016-05-01

    Urea derivatives are ubiquitously found in many chemical disciplines. N,N'-substituted ureas may show different conformational preferences depending on their substitution pattern. The high energy barrier for isomerization between the cis and trans states poses additional challenges for computational simulation techniques aiming at a reproduction of the biological properties of urea derivatives. Herein, we investigate the energetics of urea conformations and their interconversion using a broad spectrum of methodologies, ranging from data mining via quantum chemistry to molecular dynamics simulation and free energy calculations. We find that the inversion of urea conformations is inherently slow and beyond the time scale of typical simulation protocols. Therefore, extra care needs to be taken by computational chemists to work with appropriate model systems. We find that both knowledge-driven approaches and physics-based methods may guide molecular modelers towards accurate starting structures for expensive calculations, to ensure that conformations of urea derivatives are modeled as adequately as possible.

  10. Random walk numerical simulation for hopping transport at finite carrier concentrations: diffusion coefficient and transport energy concept.

    PubMed

    Gonzalez-Vazquez, J P; Anta, Juan A; Bisquert, Juan

    2009-11-28

    The random walk numerical simulation (RWNS) method is used to compute diffusion coefficients for hopping transport in a fully disordered medium at finite carrier concentrations. We use Miller-Abrahams jumping rates and an exponential distribution of energies to compute the hopping times in the random walk simulation. The computed diffusion coefficient shows an exponential dependence on the Fermi level and Arrhenius behavior with respect to temperature. This result indicates that there is a well-defined transport level implicit in the system dynamics. To establish the origin of this transport level we construct histograms to monitor the energies of the most visited sites. In addition, we construct "corrected" histograms in which backward moves are removed. Since these moves do not contribute to transport, these histograms provide a better estimate of the effective transport level energy. This concept is thoroughly discussed in connection with the Fermi-level dependence of the diffusion coefficient and the regime of interest for the functioning of dye-sensitised solar cells.
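
    To make the ingredients concrete, the sketch below runs a continuous-time random walk with Miller-Abrahams rates on a toy 1-D ring with an exponential distribution of site energies. It is a single-carrier toy (the paper treats finite carrier concentrations and full spatial disorder), and the lattice, parameters, and single-trajectory diffusion estimate are all assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    kT, nu0, E0 = 0.025, 1.0, 0.05          # eV; attempt rate; DOS energy scale
    N = 500
    E = -rng.exponential(scale=E0, size=N)  # exponential distribution of site energies

    def rate(i, j):
        """Miller-Abrahams rate for a hop i -> j; the constant tunneling factor
        exp(-2r/a) for unit-spacing neighbors is absorbed into nu0."""
        dE = E[j] - E[i]
        return nu0 * np.exp(-dE / kT) if dE > 0 else nu0

    site, t, x = 0, 0.0, 0.0
    for _ in range(200_000):
        left, right = (site - 1) % N, (site + 1) % N
        r = np.array([rate(site, left), rate(site, right)])
        R = r.sum()
        t += rng.exponential(1.0 / R)       # continuous-time waiting step
        if rng.random() < r[1] / R:
            site, x = right, x + 1.0
        else:
            site, x = left, x - 1.0

    D = x**2 / (2.0 * t)                    # crude single-trajectory estimate
    print(f"Estimated diffusion coefficient: {D:.3e}")
    ```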

  11. Parallel implementation of the particle simulation method with dynamic load balancing: Toward realistic geodynamical simulation

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Nishiura, D.

    2015-12-01

    Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are well suited to problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flows and tectonic processes, for example, tsunamis with free surfaces and floating bodies, magma intrusion with fracture of rock, and shear-zone pattern generation in granular deformation. To investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulations, so parallel computing is important for handling the huge computational cost. An efficient parallel implementation of SPH and DEM is, however, known to be difficult, especially on distributed-memory architectures: Lagrangian methods inherently suffer from workload imbalance when parallelized over domains fixed in space, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation of the SPH and DEM methods utilizing dynamic load-balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time among MPI processes as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. To allow flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our approach handles particles with different calculation costs (e.g. boundary particles) as well as heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (K computer, Earth Simulator 3, etc.).
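
    The core feedback loop (measure per-rank times, move slice boundaries toward equal workloads, repeat) can be sketched in a few lines. The damped relaxation update below is a simplified stand-in for the Newton-like iteration described in the abstract, and all numbers are invented.

    ```python
    import numpy as np

    def rebalance(bounds, times, relax=0.5):
        """One damped step of a slice-grid balancer: shift slice widths so the
        measured per-rank wall-clock times approach their mean (stand-in for
        the Newton-like iteration on MPI-time imbalances)."""
        widths = np.diff(bounds)
        density = times / widths                 # local cost per unit width
        new_widths = widths + relax * (times.mean() - times) / density
        new_widths *= (bounds[-1] - bounds[0]) / new_widths.sum()  # keep domain size
        return np.concatenate(([bounds[0]], bounds[0] + np.cumsum(new_widths)))

    bounds = np.array([0.0, 2.5, 5.0, 7.5, 10.0])    # 4 slices on [0, 10]
    cost_density = np.array([1.6, 0.4, 0.4, 0.8])    # toy per-rank cost densities
    for _ in range(20):
        times = np.diff(bounds) * cost_density       # fake re-measurement step
        bounds = rebalance(bounds, times)
    print(np.round(bounds, 3), np.round(times, 3))   # times converge toward equal
    ```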

  12. Quasi-dynamic versus fully dynamic simulations of earthquakes and aseismic slip with and without enhanced coseismic weakening

    NASA Astrophysics Data System (ADS)

    Thomas, Marion Y.; Lapusta, Nadia; Noda, Hiroyuki; Avouac, Jean-Philippe

    2014-03-01

    Physics-based numerical simulations of earthquakes and slow slip, coupled with field observations and laboratory experiments, can, in principle, be used to determine fault properties and potential fault behaviors. Because of the computational cost of simulating inertial wave-mediated effects, their representation is often simplified. The quasi-dynamic (QD) approach approximately accounts for inertial effects through a radiation damping term. We compare QD and fully dynamic (FD) simulations by exploring the long-term behavior of rate-and-state fault models with and without additional weakening during seismic slip. The models incorporate a velocity-strengthening (VS) patch in a velocity-weakening (VW) zone, to consider rupture interaction with a slip-inhibiting heterogeneity. Without additional weakening, the QD and FD approaches generate qualitatively similar slip patterns with quantitative differences, such as slower slip velocities and rupture speeds during earthquakes and more propensity for rupture arrest at the VS patch in the QD cases. Simulations with additional coseismic weakening produce qualitatively different patterns of earthquakes, with near-periodic pulse-like events in the FD simulations and much larger crack-like events accompanied by smaller events in the QD simulations. This is because the FD simulations with additional weakening allow earthquake rupture to propagate at a much lower level of prestress than the QD simulations. The resulting much larger ruptures in the QD simulations are more likely to propagate through the VS patch, unlike for the cases with no additional weakening. Overall, the QD approach should be used with caution, as the QD simulation results could drastically differ from the true response of the physical model considered.

  13. Site Identification by Ligand Competitive Saturation (SILCS) Simulations for Fragment-Based Drug Design

    PubMed Central

    Faller, Christina E.; Raman, E. Prabhu; MacKerell, Alexander D.; Guvench, Olgun

    2015-01-01

    Fragment-based drug design (FBDD) involves screening low molecular weight molecules (“fragments”) that correspond to functional groups found in larger drug-like molecules to determine their binding to target proteins or nucleic acids. Based on the principle of thermodynamic additivity, two fragments that bind non-overlapping nearby sites on the target can be combined to yield a new molecule whose binding free energy is the sum of those of the fragments. Experimental FBDD approaches, like NMR and X-ray crystallography, have proven very useful but can be expensive in terms of time, materials, and labor. Accordingly, a variety of computational FBDD approaches have been developed that provide different levels of detail and accuracy. The Site Identification by Ligand Competitive Saturation (SILCS) method of computational FBDD uses all-atom explicit-solvent molecular dynamics (MD) simulations to identify fragment binding. The target is “soaked” in an aqueous solution with multiple fragments having different identities. The resulting computational competition assay reveals what small molecule types are most likely to bind which regions of the target. From SILCS simulations, 3D probability maps of fragment binding called “FragMaps” can be produced. Based on the probabilities relative to bulk, SILCS FragMaps can be used to determine “Grid Free Energies (GFEs),” which provide per-atom contributions to fragment binding affinities. For essentially no additional computational overhead relative to the production of the FragMaps, GFEs can be used to compute Ligand Grid Free Energies (LGFEs) for arbitrarily complex molecules, and these LGFEs can be used to rank-order the molecules in accordance with binding affinities. PMID:25709034
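
    The Boltzmann-inversion step described above (occupancy probabilities relative to bulk converted to Grid Free Energies, then summed per atom into an LGFE) is simple enough to sketch. The toy grids, fragment-type names, and the penalty assigned to unvisited voxels below are all illustrative assumptions, not the SILCS implementation.

    ```python
    import numpy as np

    kT = 0.593  # kcal/mol at ~298 K

    def grid_free_energy(prob_map, bulk_prob):
        """Boltzmann-invert a FragMap: GFE = -kT ln(p_voxel / p_bulk);
        voxels never visited get a flat penalty (an assumption here)."""
        with np.errstate(divide="ignore"):
            gfe = -kT * np.log(prob_map / bulk_prob)
        return np.where(prob_map > 0, gfe, 3.0)

    def ligand_gfe(atom_coords, atom_types, fragmaps, origin, spacing):
        """Sum per-atom GFE contributions looked up on the grid -> LGFE score."""
        idx = np.floor((atom_coords - origin) / spacing).astype(int)
        return sum(fragmaps[t][tuple(i)] for t, i in zip(atom_types, idx))

    # Toy 3x3x3 maps for two hypothetical fragment types.
    rng = np.random.default_rng(2)
    maps = {t: grid_free_energy(rng.random((3, 3, 3)), 0.5)
            for t in ("apolar", "hbond_don")}
    coords = np.array([[0.5, 0.5, 0.5], [1.5, 0.5, 0.5]])
    print(ligand_gfe(coords, ["apolar", "hbond_don"], maps, origin=0.0, spacing=1.0))
    ```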

  14. Combining cell-based hydrodynamics with hybrid particle-field simulations: efficient and realistic simulation of structuring dynamics.

    PubMed

    Sevink, G J A; Schmid, F; Kawakatsu, T; Milano, G

    2017-02-22

    We have extended an existing hybrid MD-SCF simulation technique that employs a coarsening step to enhance the computational efficiency of evaluating non-bonded particle interactions. This technique is conceptually equivalent to the single chain in mean-field (SCMF) method in polymer physics, in the sense that non-bonded interactions are derived from the non-ideal chemical potential in self-consistent field (SCF) theory, after a particle-to-field projection. In contrast to SCMF, however, MD-SCF evolves particle coordinates by the usual Newton's equation of motion. Since collisions are seriously affected by the softening of non-bonded interactions that originates from their evaluation at the coarser continuum level, we have devised a way to reinsert the effect of collisions on the structural evolution. Merging MD-SCF with multi-particle collision dynamics (MPCD), we mimic particle collisions at the level of computational cells and at the same time properly account for the momentum transfer that is important for a realistic system evolution. The resulting hybrid MD-SCF/MPCD method was validated for a particular coarse-grained model of phospholipids in aqueous solution, against reference full-particle simulations and the original MD-SCF model. We additionally implemented and tested an alternative and more isotropic finite difference gradient. Our results show that efficiency is improved by merging MD-SCF with MPCD, as properly accounting for hydrodynamic interactions considerably speeds up the phase separation dynamics, with negligible additional computational costs compared to efficient MD-SCF. This new method enables realistic simulations of large-scale systems that are needed to investigate the applications of self-assembled structures of lipids in nanotechnologies.

  15. Testing and Validating Gadget2 for GPUs

    NASA Astrophysics Data System (ADS)

    Wibking, Benjamin; Holley-Bockelmann, K.; Berlind, A. A.

    2013-01-01

    We are currently upgrading a version of Gadget2 (Springel et al., 2005) that is optimized for NVIDIA's CUDA GPU architecture (Frigaard, unpublished) to work with the latest libraries and graphics cards. Preliminary tests of its performance indicate a ~40x speedup in the particle force tree approximation calculation, with an overall speedup of 5-10x for cosmological simulations run with GPUs compared to running on the same CPU cores without GPU acceleration. We believe this speedup can reasonably be increased by an additional factor of two with further optimization, including overlap of computation on CPU and GPU. Tests of single-precision GPU numerical fidelity currently indicate accuracy of the mass function and the spectral power density to within a few percent of extended-precision CPU results with the unmodified form of Gadget. Additionally, we plan to test and optimize the GPU code for Millennium-scale "grand challenge" simulations of >10^9 particles, a scale previously untested with this code, with the aid of the NSF XSEDE flagship GPU-based supercomputing cluster codenamed "Keeneland." Current work involves additional validation of numerical results, extending the numerical precision of the GPU calculations to double precision, and evaluating performance/accuracy tradeoffs. We believe that this project, if successful, will yield substantial computational performance benefits to the N-body research community as the next generation of GPU supercomputing resources becomes available, both increasing the electrical power efficiency of ever-larger computations (making simulations possible a decade from now at scales and resolutions unavailable today) and accelerating the pace of research in the field.

  16. Computable General Equilibrium Model Fiscal Year 2013 Capability Development Report - April 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Brian Keith; Rivera, Michael K.; Boero, Riccardo

    2014-04-01

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the model's simulation properties in more detail.

  17. NREL Software Aids Offshore Wind Turbine Designs (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2013-10-01

    NREL researchers are supporting offshore wind power development with computer models that allow detailed analyses of both fixed and floating offshore wind turbines. While existing computer-aided engineering (CAE) models can simulate the conditions and stresses that a land-based wind turbine experiences over its lifetime, offshore turbines require the additional considerations of variations in water depth, soil type, and wind and wave severity, which also necessitate the use of a variety of support-structure types. NREL's core wind CAE tool, FAST, models the additional effects of incident waves, sea currents, and the foundation dynamics of the support structures.

  18. System support software for the Space Ultrareliable Modular Computer (SUMC)

    NASA Technical Reports Server (NTRS)

    Hill, T. E.; Hintze, G. C.; Hodges, B. C.; Austin, F. A.; Buckles, B. P.; Curran, R. T.; Lackey, J. D.; Payne, R. E.

    1974-01-01

    The highly transportable programming system designed and implemented to support the development of software for the Space Ultrareliable Modular Computer (SUMC) is described. The SUMC system support software consists of program modules called processors. The initial set of processors consists of the supervisor, the general purpose assembler for SUMC instruction and microcode input, linkage editors, an instruction level simulator, a microcode grid print processor, and user oriented utility programs. A FORTRAN 4 compiler is under development. The design facilitates the addition of new processors with minimum effort and provides the user quasi host independence on the ground-based operational software development computer. Additional capability is provided to accommodate variations in the SUMC architecture without consequent major modifications to the initial processors.

  19. Predicting debris-flow initiation and run-out with a depth-averaged two-phase model and adaptive numerical methods

    NASA Astrophysics Data System (ADS)

    George, D. L.; Iverson, R. M.

    2012-12-01

    Numerically simulating debris-flow motion presents many challenges due to the complicated physics of flowing granular-fluid mixtures, the diversity of spatial scales (ranging from a characteristic particle size to the extent of the debris flow deposit), and the unpredictability of the flow domain prior to a simulation. Accurately predicting debris flows requires models that are complex enough to represent the dominant effects of granular-fluid interaction, while remaining mathematically and computationally tractable. We have developed a two-phase depth-averaged mathematical model for debris-flow initiation and subsequent motion. Additionally, we have developed software that numerically solves the model equations efficiently on large domains. A unique feature of the mathematical model is that it includes the feedback between pore-fluid pressure and the evolution of the solid grain volume fraction, a process that regulates flow resistance. This feature endows the model with the ability to represent the transition from a stationary mass to a dynamic flow. With traditional approaches, slope stability analysis and flow simulation are treated separately, and the latter models are often initialized with force balances that are unrealistically far from equilibrium. Additionally, our new model relies on relatively few dimensionless parameters that are functions of well-known material properties constrained by physical data (e.g., hydraulic permeability, pore-fluid viscosity, debris compressibility, and Coulomb friction coefficient). We have developed numerical methods and software for accurately solving the model equations. By employing adaptive mesh refinement (AMR), the software can efficiently resolve an evolving debris flow as it advances through irregular topography, without needing terrain-fit computational meshes. The AMR algorithms utilize multiple levels of grid resolution, so that computationally inexpensive coarse grids can be used where the flow is absent, and much higher resolution grids evolve with the flow. The reduction in computational cost due to AMR makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparison of numerical predictions and empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the ≈50 million m^3 debris flow that originated on Mt. Meager, British Columbia, in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times, and deposition. Our simulations reproduce many of the complex patterns of the event, such as run-out geometry and extent, and the large scale of the flow and the complex topographical features demonstrate the utility of AMR in flow simulations.

  20. Surgical robot setup simulation with consistent kinematics and haptics for abdominal surgery.

    PubMed

    Hayashibe, Mitsuhiro; Suzuki, Naoki; Hattori, Asaki; Suzuki, Shigeyuki; Konishi, Kozo; Kakeji, Yoshihiro; Hashizume, Makoto

    2005-01-01

    Preoperative simulation and planning of surgical robot setup should accompany advanced robotic surgery if its advantages are to be further pursued. Feedback from the planning system will play an essential role in computer-aided robotic surgery, in addition to detailed preoperative geometric information from patient CT/MRI images. Surgical robot setup simulation systems for appropriate trocar site placement have been developed, especially for abdominal surgery. The motion of the surgical robot can be simulated and rehearsed with kinematic constraints at the trocar site and the inverse kinematics of the robot. Results from simulation using clinical patient data verify the effectiveness of the proposed system.

  1. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part I: Template-Based Generic Programming

    DOE PAGES

    Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.

    2012-01-01

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
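
    The paper's mechanism is C++ templates with operator overloading; the same idea can be miniaturized in Python, where overloading alone lets an unchanged "simulation" routine propagate derivatives alongside values. The dual-number sketch below is an analogue for illustration, not the Trilinos implementation.

    ```python
    class Dual:
        """Forward-mode AD value: operator overloading propagates d/dx
        alongside the value, so generic calculation code written once in
        terms of its scalar type yields an extra quantity (the derivative)."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    def residual(x):
        # The "simulation code" is written once, generically in its scalar type.
        return 3 * x * x + 2 * x + 1

    r = residual(Dual(2.0, 1.0))   # seed dx/dx = 1
    print(r.val, r.der)            # 17.0 and dR/dx = 6x + 2 = 14.0
    ```

    In the C++ setting the same `residual` would be a function template, and swapping the scalar type (double, AD type, polynomial-chaos type, etc.) yields the additional quantities without touching the physics code.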

  2. [INVITED] Computational intelligence for smart laser materials processing

    NASA Astrophysics Data System (ADS)

    Casalino, Giuseppe

    2018-03-01

    Computational intelligence (CI) involves using a computer algorithm to capture hidden knowledge from data and to use it for training an "intelligent machine" to make complex decisions without human intervention. As simulation is becoming more prevalent from design and planning to manufacturing and operations, laser materials processing can also benefit from computer-generated knowledge through soft computing. This work is a review of the state of the art in the methodology and applications of CI in laser materials processing (LMP), which is nowadays receiving increasing interest from world-class manufacturers and Industry 4.0. The focus is on the methods that have proven effective and robust in solving several problems in welding, cutting, drilling, surface treating, and additive manufacturing using the laser beam. After a basic description of the most common computational intelligence techniques employed in manufacturing, four sections, covering laser joining, machining, surface treatment, and additive manufacturing, survey the most recent applications in the already extensive literature on CI in LMP. Finally, emerging trends and future challenges are identified and discussed.

  3. Virtual planning for craniomaxillofacial surgery--7 years of experience.

    PubMed

    Adolphs, Nicolai; Haberl, Ernst-Johannes; Liu, Weichen; Keeve, Erwin; Menneking, Horst; Hoffmeister, Bodo

    2014-07-01

    Contemporary computer-assisted surgery systems now allow for virtual simulation of even complex surgical procedures with increasingly realistic predictions. Preoperative workflows are established, and different commercial software solutions are available. The potential and feasibility of virtual craniomaxillofacial surgery as an additional planning tool were assessed retrospectively by comparing predictions and surgical results. Since 2006, virtual simulation has been performed in selected patient cases affected by complex craniomaxillofacial disorders (n = 8) in addition to standard surgical planning based on patient-specific 3d models. Virtual planning could be performed for all levels of the craniomaxillofacial framework within a reasonable preoperative workflow. Simulation of even complex skeletal displacements corresponded well with the real surgical result, and soft tissue simulation proved to be helpful. In combination with classic 3d models showing the underlying skeletal pathology, virtual simulation improved planning and transfer of craniomaxillofacial corrections. The additional work and expense may be justified by the increased possibilities of visualisation, information, instruction and documentation in selected craniomaxillofacial procedures.

  4. Influence of simulation parameters on the speed and accuracy of Monte Carlo calculations using PENEPMA

    NASA Astrophysics Data System (ADS)

    Llovet, X.; Salvat, F.

    2018-01-01

    The accuracy of Monte Carlo simulations of EPMA measurements is primarily determined by that of the adopted interaction models and atomic relaxation data. The code PENEPMA implements the most reliable general models available, and it is known to provide a realistic description of electron transport and X-ray emission. Nonetheless, efficiency (i.e., the simulation speed) of the code is determined by a number of simulation parameters that define the details of the electron tracking algorithm, which may also have an effect on the accuracy of the results. In addition, to reduce the computer time needed to obtain X-ray spectra with a given statistical accuracy, PENEPMA allows the use of several variance-reduction techniques, defined by a set of specific parameters. In this communication we analyse and discuss the effect of using different values of the simulation and variance-reduction parameters on the speed and accuracy of EPMA simulations. We also discuss the effectiveness of using multi-core computers along with a simple practical strategy implemented in PENEPMA.

  5. Nonlinear relaxation algorithms for circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, R.A.

    Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (IC). However, the standard techniques used in programs such as SPICE result in very long computer run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.

  6. Physical Processes and Applications of the Monte Carlo Radiative Energy Deposition (MRED) Code

    NASA Astrophysics Data System (ADS)

    Reed, Robert A.; Weller, Robert A.; Mendenhall, Marcus H.; Fleetwood, Daniel M.; Warren, Kevin M.; Sierawski, Brian D.; King, Michael P.; Schrimpf, Ronald D.; Auden, Elizabeth C.

    2015-08-01

    MRED is a Python-language scriptable computer application that simulates radiation transport. It is the computational engine for the on-line tool CRÈME-MC. MRED is based on C++ code from Geant4 with additional Fortran components to simulate electron transport and nuclear reactions with high precision. We provide a detailed description of the structure of MRED and the implementation of the simulation of physical processes used to simulate radiation effects in electronic devices and circuits. Extensive discussion and references are provided that illustrate the validation of models used to implement specific simulations of relevant physical processes. Several applications of MRED are summarized that demonstrate its ability to predict and describe basic physical phenomena associated with irradiation of electronic circuits and devices. These include effects from single particle radiation (including both direct ionization and indirect ionization effects), dose enhancement effects, and displacement damage effects. MRED simulations have also helped to identify new single event upset mechanisms not previously observed by experiment, but since confirmed, including upsets due to muons and energetic electrons.

  7. Gamma Ray Observatory (GRO) OBC attitude error analysis

    NASA Technical Reports Server (NTRS)

    Harman, R. R.

    1990-01-01

    This analysis involves an in-depth look into the onboard computer (OBC) attitude determination algorithm. A review of the TRW error analysis and the ground simulations necessary to understand the onboard attitude determination process is performed. In addition, a plan is generated for the in-flight calibration and validation of OBC-computed attitudes. Pre-mission expected accuracies are summarized, and the sensitivity of the onboard algorithms to sensor anomalies and filter tuning parameters is addressed.

  8. Simulation tools for robotics research and assessment

    NASA Astrophysics Data System (ADS)

    Fields, MaryAnne; Brewer, Ralph; Edge, Harris L.; Pusey, Jason L.; Weller, Ed; Patel, Dilip G.; DiBerardino, Charles A.

    2016-05-01

    The Robotics Collaborative Technology Alliance (RCTA) program focuses on four overlapping technology areas: Perception, Intelligence, Human-Robot Interaction (HRI), and Dexterous Manipulation and Unique Mobility (DMUM). In addition, the RCTA program has a requirement to assess the progress of this research in standalone as well as integrated form. Since the research is evolving, and the robotic platforms with unique mobility and dexterous manipulation are at an early development stage and very expensive, an alternate approach is needed for efficient assessment. Simulation of robotic systems, platforms, sensors, and algorithms is an attractive alternative to expensive field-based testing. Simulation can provide insight during development and debugging unavailable by many other means. This paper explores the maturity of robotic simulation systems for applications to real-world problems in robotic systems research. Open-source (such as Gazebo and Moby), commercial (Simulink, Actin, LMS), government (ANVEL/VANE), and the RCTA-developed RIVET simulation environments are examined with respect to their application in the robotic research domains of Perception, Intelligence, HRI, and DMUM. Tradeoffs for applications to representative problems from each domain are presented, along with known deficiencies and disadvantages. In particular, no single robotic simulation environment adequately covers the needs of the robotic researcher in all of the domains. Simulation for DMUM poses unique constraints on the development of physics-based computational models of the robot, the environment and objects within the environment, and the interactions between them. Most current robot simulations focus on quasi-static systems, but dynamic robotic motion places an increased emphasis on the accuracy of the computational models. In order to understand the interaction of dynamic multi-body systems, such as limbed robots, with the environment, it may be necessary to build component-level computational models to provide sufficient simulation fidelity. However, the Perception domain remains the most problematic for adequate simulation performance, due to the often cartoonish nature of computer rendering and the inability to model realistic electromagnetic radiation effects, such as multiple reflections, in real time.

  9. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model as additional surrogate models. The surrogate model is key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique used to solve GCSI problems, especially for aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, the influence of parameter optimization and of the structure of the training sample dataset on the approximation accuracy of the surrogate model is analyzed. It was found that the KELM model was the most accurate surrogate model, and that its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model relative to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses under given operating conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process while maintaining high computational accuracy.
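
    As a concrete picture of the surrogate idea, the sketch below fits an SVR to samples of a stand-in "simulator" and then predicts new responses cheaply; the grid search echoes the finding that parameter optimization matters. The toy response function and sampling scheme are assumptions (a KELM would play the same role as the regressor).

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(3)

    def simulation_model(x):
        """Stand-in for the expensive contaminant-transport simulator."""
        return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + 0.05 * rng.standard_normal(len(x))

    X_train = rng.random((60, 2))          # e.g. Latin-hypercube samples in practice
    y_train = simulation_model(X_train)

    # Parameter optimization of the surrogate (cross-validated grid search):
    search = GridSearchCV(SVR(kernel="rbf"),
                          {"C": [1, 10, 100], "gamma": [0.5, 1, 2],
                           "epsilon": [0.01, 0.05]},
                          cv=5)
    search.fit(X_train, y_train)
    surrogate = search.best_estimator_

    X_new = rng.random((5, 2))
    print(surrogate.predict(X_new))        # cheap stand-in inside the optimization loop
    ```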

  10. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duque, Earl P.N.; Whitlock, Brad J.

    High performance computers have for many years been on a trajectory that gives them extraordinary compute power with the addition of more and more compute cores. At the same time, other system parameters, such as the amount of memory per core and bandwidth to storage, have remained constant or have barely increased. This creates an imbalance in the computer, giving it the ability to compute a lot of data that it cannot reasonably save out due to time and storage constraints. While technologies have been invented to mitigate this problem (burst buffers, etc.), software has been adapting to employ in situ libraries, which perform data analysis and visualization on simulation data while it is still resident in memory. This avoids ever having to pay the cost of writing many terabytes of data files. Instead, in situ processing enables the creation of more concentrated data products such as statistics, plots, and data extracts, which are all far smaller than the full-sized volume data. With the increasing popularity of in situ methods, multiple in situ infrastructures have been created, each with its own mechanism for integrating with a simulation. To make it easier to instrument a simulation with multiple in situ infrastructures and include custom analysis algorithms, this project created the SENSEI framework.

  11. Tinker-OpenMM: Absolute and relative alchemical free energies using AMOEBA on GPUs.

    PubMed

    Harger, Matthew; Li, Daniel; Wang, Zhi; Dalby, Kevin; Lagardère, Louis; Piquemal, Jean-Philip; Ponder, Jay; Ren, Pengyu

    2017-09-05

    The capabilities of polarizable force fields for alchemical free energy calculations have been limited by the high computational cost and complexity of the underlying potential energy functions. In this work, we present a GPU-based general alchemical free energy simulation platform for the polarizable AMOEBA potential. Tinker-OpenMM, the OpenMM implementation of the AMOEBA simulation engine, has been modified to enable both absolute and relative alchemical simulations on GPUs, which leads to a ∼200-fold improvement in simulation speed over a single CPU core. We show that free energy values calculated using this platform agree with the results of Tinker simulations for the hydration of organic compounds and binding of host-guest systems within the statistical errors. In addition to absolute binding, we designed a relative alchemical approach for computing relative binding affinities of ligands to the same host, where a special path was applied to avoid numerical instability due to polarization between the different ligands that bind to the same site. This scheme is general and does not require ligands to have similar scaffolds. We show that relative hydration and binding free energies calculated using this approach match those computed from the absolute free energy approach.
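
    For orientation, alchemical free energies are commonly assembled from a series of coupling-parameter (λ) windows; a generic thermodynamic-integration sketch is shown below. It illustrates the bookkeeping only and is not the Tinker-OpenMM API; the ⟨dU/dλ⟩ values are invented.

    ```python
    import numpy as np

    # Generic thermodynamic integration: dG = integral over lambda of <dU/dl>.
    lambdas = np.linspace(0.0, 1.0, 11)      # 11 alchemical windows
    # <dU/dlambda> per window would come from the alchemical MD runs; toy values:
    dudl = 5.0 * (1 - lambdas) ** 2 - 2.0
    dG = np.trapz(dudl, lambdas)             # trapezoid rule over the windows
    print(f"Delta G ~ {dG:.2f} (arbitrary units)")
    ```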

  12. Evaluation of Airframe Noise Reduction Concepts via Simulations Using a Lattice Boltzmann Approach

    NASA Technical Reports Server (NTRS)

    Fares, Ehab; Casalino, Damiano; Khorrami, Mehdi R.

    2015-01-01

    Unsteady computations are presented for a high-fidelity, 18% scale, semi-span Gulfstream aircraft model in landing configuration, i.e. flap deflected at 39 degrees and main landing gear deployed. The simulations employ the lattice Boltzmann solver PowerFLOW® to simultaneously capture the flow physics and acoustics in the near field. Sound propagation to the far field is obtained using a Ffowcs Williams and Hawkings acoustic analogy approach. In addition to the baseline geometry, which was presented previously, various noise reduction concepts for the flap and main landing gear are simulated. In particular, care is taken to fully resolve the complex geometrical details associated with these concepts in order to capture the resulting intricate local flow field, thus enabling accurate prediction of their acoustic behavior. To determine aeroacoustic performance, the far-field noise predicted with the concepts applied is compared to high-fidelity simulations of the untreated baseline configurations. To assess the accuracy of the computed results, the aerodynamic and aeroacoustic impact of the noise reduction concepts is evaluated numerically and compared to experimental results for the same model. The trends and effectiveness of the simulated noise reduction concepts compare well with measured values and demonstrate that the computational approach is capable of capturing the primary effects of the acoustic treatment on a full aircraft model.

  13. [Acquiring skills in malignant hyperthermia crisis management: comparison of high-fidelity simulation versus computer-based case study].

    PubMed

    Mejía, Vilma; Gonzalez, Carlos; Delfino, Alejandro E; Altermatt, Fernando R; Corvetto, Marcia A

    The primary purpose of this study was to compare the effect of high-fidelity simulation versus computer-based case-solving self-study on the acquisition of malignant hyperthermia management skills by first-year anesthesiology residents. After institutional ethics committee approval, 31 first-year anesthesiology residents were enrolled in this prospective, randomized, single-blinded study. Participants were randomized to either a high-fidelity simulation scenario or a computer-based case study about malignant hyperthermia. After the intervention, all subjects' performance was assessed through a high-fidelity simulation scenario using a previously validated assessment rubric. Additionally, knowledge tests and a satisfaction survey were applied. Finally, a semi-structured interview was conducted to assess self-perception of the reasoning process and decision-making. 28 first-year residents successfully finished the study. Residents' management skill scores were globally higher with high-fidelity simulation than with the case study, though the differences were significant in only 4 of the 8 performance rubric elements: recognizing signs and symptoms (p = 0.025), prioritizing initial management actions (p = 0.003), recognizing complications (p = 0.025), and communication (p = 0.025). Average scores on the pre- and post-test knowledge questionnaires improved from 74% to 85% in the high-fidelity simulation group and decreased from 78% to 75% in the case study group (p = 0.032). Regarding the qualitative analysis, there was no difference between the two teaching strategies in the factors influencing the students' process of reasoning and decision-making. Simulation-based training with a malignant hyperthermia high-fidelity scenario was superior to computer-based case study, improving knowledge and skills in malignant hyperthermia crisis management, with a very good satisfaction level among anesthesia residents.

  14. Numerical models for the diffuse ionized gas in galaxies. I. Synthetic spectra of thermally excited gas with turbulent magnetic reconnection as energy source

    NASA Astrophysics Data System (ADS)

    Hoffmann, T. L.; Lieb, S.; Pauldrach, A. W. A.; Lesch, H.; Hultzsch, P. J. N.; Birk, G. T.

    2012-08-01

    Aims: The aim of this work is to verify whether turbulent magnetic reconnection can provide the additional energy input required to explain the up to now only poorly understood ionization mechanism of the diffuse ionized gas (DIG) in galaxies and its observed emission line spectra. Methods: We use a detailed non-LTE radiative transfer code that does not make use of the usual restrictive gaseous nebula approximations to compute synthetic spectra for gas at low densities. Excitation of the gas is via an additional heating term in the energy balance as well as by photoionization. Numerical values for this heating term are derived from three-dimensional resistive magnetohydrodynamic two-fluid plasma-neutral-gas simulations to compute energy dissipation rates for the DIG under typical conditions. Results: Our simulations show that magnetic reconnection can liberate enough energy to by itself fully or partially ionize the gas. However, synthetic spectra from purely thermally excited gas are incompatible with the observed spectra; a photoionization source must additionally be present to establish the correct (observed) ionization balance in the gas.

  15. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and adaptive mesh refinement is used to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database of 120 cases computed for a NACA 0012 airfoil. We compare the cost and accuracy of two approaches to aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs of both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or stiffness in the governing equations near the incompressible limit is shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes that span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
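
    For reference, the adjoint-weighted residual estimate for an output such as lift or drag has the standard form below (general notation assumed here, not taken from the paper): the discrete adjoint of the output weights the residual of the coarse solution to estimate the error in that output.

    ```latex
    % Discretization error in output J estimated by weighting the residual
    % R_h of the coarse solution Q_h with the discrete adjoint \psi_h:
    J(Q) - J_h(Q_h) \;\approx\; -\,\psi_h^{\mathsf{T}}\, R_h(Q_h)
    ```

    Cells with large contributions to this weighted residual are the natural targets for the adaptive refinement described above.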

  16. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not simply a matter of compiling and executing existing code on the supercomputer, since there are significant differences between the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We describe the Cray's shared-memory vector architecture and discuss our rationale for selecting the Cray. We describe porting the model to the Cray, executing and verifying a baseline version, and the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.

  17. Coarse Grid CFD for underresolved simulation

    NASA Astrophysics Data System (ADS)

    Class, Andreas G.; Viellieber, Mathias O.; Himmel, Steffen R.

    2010-11-01

    CFD simulation of the complete reactor core of a nuclear power plant requires exceedingly huge computational resources, so this brute-force approach has not been pursued yet. The traditional approach is 1D subchannel analysis employing calibrated transport models. Coarse Grid CFD is an attractive alternative technique based on strongly under-resolved CFD and the inviscid Euler equations. Obviously, using inviscid equations and coarse grids does not resolve all of the physics, requiring additional volumetric source terms that model viscosity and other sub-grid effects. The source terms are implemented via correlations derived from fully resolved representative simulations, which can be tabulated or computed on the fly. The technique is demonstrated for a Carnot diffusor and a wire-wrap fuel assembly [1]. [1] Himmel, S. R., Ph.D. thesis, Stuttgart University, Germany, 2009, http://bibliothek.fzk.de/zb/berichte/FZKA7468.pdf

  18. Capabilities and applications of the Program to Optimize Simulated Trajectories (POST). Program summary document

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Stevenson, R.

    1977-01-01

    The capabilities and applications of the three-degree-of-freedom (3DOF) and six-degree-of-freedom (6DOF) versions of the Program to Optimize Simulated Trajectories (POST) are summarized. The document supplements the detailed program manuals by providing additional information that motivates and clarifies the basic capabilities, input procedures, applications, and computer requirements of these programs. The information will enable prospective users to evaluate the programs and to determine whether they are applicable to their problems, and gives managerial personnel enough information to evaluate the programs' capabilities. The report describes the POST structure, formulation, input and output procedures, sample cases, and computer requirements, and provides answers to basic questions concerning planet and vehicle modeling, simulation accuracy, optimization capabilities, and general input rules. Several sample cases are presented.

  19. What Fermenter?

    ERIC Educational Resources Information Center

    Terry, John

    1987-01-01

    Discusses the feasibility of using fermenters in secondary school laboratories. Includes discussions of equipment, safety, and computer interfacing. Describes how a simple fermenter could be used to simulate large-scale processes. Concludes that, although teachers and technicians will require additional training, the prospects for biotechnology in…

  20. De-quantisation

    NASA Astrophysics Data System (ADS)

    Gruska, Jozef

    2012-06-01

    One of the most basic tasks in quantum information processing, communication and security (QIPCC) research, theoretically deep and practically important, is to find bounds on how important inherently quantum resources really are for speeding up computations. This area of research is producing a variety of results that imply, often in a very unexpected and counter-intuitive way, that: (a) surprisingly large classes of quantum circuits and algorithms can be efficiently simulated on classical computers; (b) the border line between quantum processes that can and cannot be efficiently simulated on classical computers is often surprisingly thin; (c) the addition of a seemingly very simple resource or tool often enormously increases the power of the available quantum tools. These discoveries have also put a new light on our understanding of quantum phenomena and quantum physics, and on the potential of its inherently quantum and often mysterious-looking phenomena. The paper motivates and surveys research and its outcomes in the area of de-quantisation, and in particular presents various approaches, and their outcomes, concerning efficient classical simulations of various families of quantum circuits and algorithms. To motivate this area of research, some outcomes in the area of de-randomization of classical randomized computations are also presented.

  1. Multiple point statistical simulation using uncertain (soft) conditional data

    NASA Astrophysics Data System (ADS)

    Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou

    2018-05-01

    Geostatistical simulation methods have been used to quantify the spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditioned on uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not properly account for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. We then suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, in which more informed model parameters are visited before less informed ones. The second involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly on uncertain (soft) data, and hence provide a computationally attractive approach for integrating information about a reservoir model.
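
    A minimal sketch of the preferential-path idea: rank grid nodes by the entropy of their soft probabilities so that the best-informed nodes are simulated first. The binary-facies setup and the tie-breaking jitter below are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def preferential_path(soft_probs, jitter=1e-3, rng=None):
        """Order grid nodes so that the most informed nodes (lowest-entropy
        soft data) are visited first; ties are broken randomly."""
        rng = rng or np.random.default_rng()
        p = np.clip(soft_probs, 1e-12, 1 - 1e-12)
        entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))  # binary case
        return np.argsort(entropy + jitter * rng.random(len(p)))

    soft = np.array([0.5, 0.95, 0.6, 0.99, 0.5])  # P(facies = 1) at five nodes
    print(preferential_path(soft))                 # informed nodes (3, 1) come first
    ```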

  2. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Lalime, Aimee L.; Grosveld, Ferdinand W.; Rizzi, Stephen A.; Sullivan, Brenda M.

    2003-01-01

    Applying binaural simulation techniques to structural acoustic data can be very computationally intensive, as the number of discrete noise sources can be very large. Typically, Head Related Transfer Functions (HRTFs) are used to individually filter the signals from each of the sources in the acoustic field; creating a binaural simulation therefore implies the use of potentially hundreds of real-time filters. This paper details two methods of reducing the number of real-time computations required: (i) using the singular value decomposition (SVD) to reduce the complexity of the HRTFs by breaking them into dominant singular values and vectors, and (ii) using equivalent source reduction (ESR) to reduce the number of sources to be analyzed in real time by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. The ESR and SVD reduction methods can be combined to provide an estimated computation time reduction of 99.4% for the structural acoustic data tested. In addition, preliminary tests have shown a 97% correlation between the results of the combined reduction methods and the results found with current binaural simulation techniques.
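
    The SVD reduction is easy to sketch with NumPy: keep only the k dominant singular vectors of the HRTF matrix, so that k fixed filters run in real time and each source contributes through a k-element mixing vector. The synthetic HRTF matrix below, given an artificially decaying spectrum, is a stand-in for measured data; real HRTF sets also compress well.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_freq, n_dirs, k = 256, 120, 8   # HRTF length, source directions, rank kept

    # Stand-in HRTF set with a decaying spectrum (assumption for illustration).
    H = (rng.standard_normal((n_freq, 16)) * 0.5 ** np.arange(16)) @ rng.standard_normal((16, n_dirs))

    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_k = (U[:, :k] * s[:k]) @ Vt[:k, :]   # rank-k reconstruction

    # Real-time cost drops from n_dirs filters to k filters (columns of U*s),
    # with each source mixed in through its k-element column of Vt.
    err = np.linalg.norm(H - H_k) / np.linalg.norm(H)
    print(f"rank-{k} relative error: {err:.2%}; filters: {n_dirs} -> {k}")
    ```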

  3. References and benchmarks for pore-scale flow simulated using micro-CT images of porous media and digital rocks

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Hofmann, Ronny; Alpak, Faruk O.; Berg, Steffen; Dietderich, Jesse; Agarwal, Umang; Tandon, Kunj; Hunter, Sander; Freeman, Justin; Wilson, Ove Bjorn

    2017-11-01

    We generate a novel reference dataset to quantify the impact of numerical solvers, boundary conditions, and simulation platforms. We consider a variety of microstructures ranging from idealized pipes to digital rocks. Pore throats of the digital rocks considered are large enough to be well resolved with state-of-the-art micro-computerized tomography technology. Permeability is computed using multiple numerical engines, 12 in total, including Lattice-Boltzmann, computational fluid dynamics, voxel-based, fast semi-analytical, and known empirical models. Thus, we provide a measure of the uncertainty associated with flow computations of digital media. Moreover, the reference and standards dataset generated is the first of its kind and can be used to test and improve new fluid flow algorithms. We find that there is overall good agreement between solvers for idealized cross-section-shape pipes. As expected, the disagreement increases with the complexity of the pore space. Numerical solutions for pipes with sinusoidal variation of cross section show larger variability compared to pipes of constant cross-section shape. We notice relatively larger variability in the computed permeability of digital rocks, with a coefficient of variation of up to 25% in computed values between the various solvers. Still, these differences are small given other subsurface uncertainties. The observed differences between solvers can be attributed to several causes, including differences in boundary conditions, numerical convergence criteria, and parameterization of fundamental physics equations. Solvers that perform additional meshing of irregular pore shapes require an additional step in practical workflows, which involves skill and can introduce further uncertainty. Computation times for digital rocks vary from minutes to several days depending on the algorithm and available computational resources. We find that more stringent convergence criteria can improve solver accuracy, but at the expense of longer computation time.
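    Whatever the numerical engine, the reported permeability is typically backed out of Darcy's law from the computed flux. A minimal sketch of that common post-processing step is shown below, checked against the analytic Hagen-Poiseuille result k = R^2/8 for a circular pipe; the dimensions and fluid properties are illustrative.

```python
import numpy as np

def darcy_permeability(q_total, mu, length, area, dp):
    """Back permeability out of Darcy's law: Q = k * A * dP / (mu * L)."""
    return q_total * mu * length / (area * dp)

# Illustrative circular pipe: the "solver output" here is the exact
# Hagen-Poiseuille flow rate Q = pi * R^4 * dP / (8 * mu * L).
R, L = 1e-5, 1e-4        # radius and length (m)
mu, dp = 1e-3, 100.0     # viscosity (Pa s) and pressure drop (Pa)
q = np.pi * R**4 * dp / (8 * mu * L)

k = darcy_permeability(q, mu, L, np.pi * R**2, dp)
print(f"computed k = {k:.3e} m^2, analytic R^2/8 = {R**2 / 8:.3e} m^2")
```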

  4. Parallel Computing for Brain Simulation.

    PubMed

    Pastur-Romay, L A; Porto-Pazos, A B; Cedron, F; Pazos, A

    2017-01-01

    The human brain is the most complex system in the known universe and is therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet it is still not understood how and why most of these abilities are produced. For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have enabled the creation of the first simulation with a number of neurons similar to that of a human brain. This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital models, analog models and hybrid models. This review covers the current applications of these works, as well as future trends. It focuses on works seeking advances in Neuroscience as well as on others pursuing new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized, and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing.

  5. Process Modeling and Validation for Metal Big Area Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan; Nycz, Andrzej; Noakes, Mark W.

    Metal Big Area Additive Manufacturing (mBAAM) is a new additive manufacturing (AM) technology based on metal arc welding. A continuously fed metal wire is melted by an electric arc that forms between the wire and the substrate, and is deposited in the form of a bead of molten metal along a predetermined path. Objects are manufactured one layer at a time, starting from the base plate. The final properties of the manufactured object depend on its geometry and the metal deposition path, in addition to the basic welding process parameters. Computational modeling can be used to accelerate the development of the mBAAM technology and to serve as a design and optimization tool for the actual manufacturing process. We have developed a finite element method simulation framework for mBAAM using new features of the ABAQUS software. The computational simulation of material deposition with heat transfer is performed first, followed by a structural analysis based on the temperature history for predicting the final deformation and stress state. In this formulation, we assume that the two physics phenomena are coupled in only one direction, i.e., the temperatures drive the deformation and internal stresses, but their feedback on the temperatures is negligible. The experimental instrumentation (measurement types, sensor types, sensor locations, sensor placements, measurement intervals) and the measurements are presented. The temperatures and distortions from the simulations show good correlation with the experimental measurements. Ongoing modeling work is also briefly discussed.
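    A heavily simplified illustration of this one-way coupling is sketched below: an explicit 1D heat-conduction pass produces the temperature history, which then drives thermal stress in a fully constrained elastic bar. The material values, dimensions, and boundary conditions are placeholders, not the mBAAM/ABAQUS model.

```python
import numpy as np

# --- Pass 1: explicit 1D heat conduction gives the temperature history ---
n, steps = 50, 2000
alpha, dx, dt = 1e-5, 1e-3, 0.02        # diffusivity (m^2/s), grid and time step
assert alpha * dt / dx**2 <= 0.5        # explicit stability limit

T = np.full(n, 300.0)                   # initial temperature (K)
T[0] = 1700.0                           # hot end, e.g. near the arc
history = []
for _ in range(steps):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    history.append(T.copy())

# --- Pass 2: structure driven by the temperature field only (one-way) ---
E, a_th, T_ref = 200e9, 1.2e-5, 300.0   # modulus (Pa), expansion (1/K), ref (K)
sigma = -E * a_th * (history[-1] - T_ref)  # constrained bar: sigma = -E*a*dT
# Purely elastic sketch: real models add plasticity and temperature-dependent
# properties, and the feedback of deformation on temperature is neglected.
print(f"peak compressive thermal stress: {sigma.min() / 1e6:.0f} MPa")
```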

  6. Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.

    PubMed

    Salis, Howard; Kaznessis, Yiannis

    2005-02-01

    The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
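    For reference, the exact stochastic simulation algorithm that such hybrid methods build on can be written very compactly. The sketch below is Gillespie's direct method for a toy reversible isomerization A <-> B; it is not the paper's Next Reaction implementation, and the rate constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy network: A -> B with rate k1*nA, B -> A with rate k2*nB.
k1, k2 = 1.0, 0.5
state = np.array([100, 0])              # (nA, nB)
stoich = np.array([[-1, +1],            # A -> B
                   [+1, -1]])           # B -> A

t, t_end = 0.0, 5.0
while t < t_end:
    propensities = np.array([k1 * state[0], k2 * state[1]])
    a0 = propensities.sum()
    if a0 == 0.0:
        break                           # no reaction can fire
    t += rng.exponential(1.0 / a0)      # exponential waiting time
    r = rng.choice(2, p=propensities / a0)  # which reaction fires
    state += stoich[r]

print(f"t = {t:.2f}, state = {state}")  # near the nB/nA = k1/k2 = 2 equilibrium
```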

  7. Benchmarking of Neutron Production of Heavy-Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  8. Benchmarking of Heavy Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  9. Capacity Maximizing Constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Jones, Christopher

    2010-01-01

    Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations, as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
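    The capacity gap mentioned above can be estimated by Monte Carlo. The sketch below computes the constellation-constrained (joint, coded-modulation) mutual information of a QPSK alphabet on the complex AWGN channel using the standard formula I = log2(M) - E[log2 sum_x' exp(-(|y-x'|^2 - |y-x|^2)/N0)]; the parallel-decoding (BICM) capacity additionally conditions on bit levels. The constellation and SNR points are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def awgn_mutual_info(constellation, snr_db, n_samples=200_000):
    """Monte Carlo estimate of I(X;Y) for equiprobable symbols on complex AWGN."""
    x = constellation / np.sqrt(np.mean(np.abs(constellation) ** 2))  # unit power
    M = len(x)
    N0 = 10 ** (-snr_db / 10)                 # noise variance per complex symbol
    sym = rng.choice(M, size=n_samples)
    noise = np.sqrt(N0 / 2) * (rng.standard_normal(n_samples)
                               + 1j * rng.standard_normal(n_samples))
    y = x[sym] + noise
    # I = log2(M) - E[ log2 sum_x' exp(-(|y-x'|^2 - |y-x|^2)/N0) ]
    d = np.abs(y[:, None] - x[None, :]) ** 2  # distances to all symbols
    num = np.exp(-(d - d[np.arange(n_samples), sym][:, None]) / N0).sum(axis=1)
    return np.log2(M) - np.mean(np.log2(num))

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
for snr in (0, 5, 10):
    gauss = np.log2(1 + 10 ** (snr / 10))     # unconstrained Gaussian capacity
    print(f"SNR {snr:2d} dB: QPSK {awgn_mutual_info(qpsk, snr):.3f} "
          f"vs Gaussian {gauss:.3f} bits/symbol")
```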

  10. Engineering studies related to Skylab program. [assessment of automatic gain control data

    NASA Technical Reports Server (NTRS)

    Hayne, G. S.

    1973-01-01

    The relationship between the S-193 Automatic Gain Control data and the magnitude of received signal power was studied in order to characterize performance parameters for Skylab equipment. The r-factor was used for the assessment; it is defined to be less than unity and is a function of off-nadir angle, ocean surface roughness, and receiver signal-to-noise ratio. A digital computer simulation was also used to assess the effects of additive receiver (white) noise. The system model for the digital simulation is described, along with the intermediate-frequency and video impulse response functions used, details of the input waveforms, and results to date. A specific discussion of the digital computer programs used is also provided.

  11. Credibility, Replicability, and Reproducibility in Simulation for Biomedicine and Clinical Applications in Neuroscience

    PubMed Central

    Mulugeta, Lealem; Drach, Andrew; Erdemir, Ahmet; Hunt, C. A.; Horner, Marc; Ku, Joy P.; Myers Jr., Jerry G.; Vadigepalli, Rajanikanth; Lytton, William W.

    2018-01-01

    Modeling and simulation in computational neuroscience is currently a research enterprise to better understand neural systems. It is not yet directly applicable to the problems of patients with brain disease. To be used for clinical applications, there must not only be considerable progress in the field but also a concerted effort to use best practices in order to demonstrate model credibility to regulatory bodies, to clinics and hospitals, to doctors, and to patients. In doing this for neuroscience, we can learn lessons from long-standing practices in other areas of simulation (aircraft, computer chips), from software engineering, and from other biomedical disciplines. In this manuscript, we introduce some basic concepts that will be important in the development of credible clinical neuroscience models: reproducibility and replicability; verification and validation; model configuration; and procedures and processes for credible mechanistic multiscale modeling. We also discuss how garnering strong community involvement can promote model credibility. Finally, in addition to direct usage with patients, we note the potential for simulation usage in the area of Simulation-Based Medical Education, an area which to date has been primarily reliant on physical models (mannequins) and scenario-based simulations rather than on numerical simulations. PMID:29713272

  12. Credibility, Replicability, and Reproducibility in Simulation for Biomedicine and Clinical Applications in Neuroscience.

    PubMed

    Mulugeta, Lealem; Drach, Andrew; Erdemir, Ahmet; Hunt, C A; Horner, Marc; Ku, Joy P; Myers, Jerry G; Vadigepalli, Rajanikanth; Lytton, William W

    2018-01-01

    Modeling and simulation in computational neuroscience is currently a research enterprise to better understand neural systems. It is not yet directly applicable to the problems of patients with brain disease. To be used for clinical applications, there must not only be considerable progress in the field but also a concerted effort to use best practices in order to demonstrate model credibility to regulatory bodies, to clinics and hospitals, to doctors, and to patients. In doing this for neuroscience, we can learn lessons from long-standing practices in other areas of simulation (aircraft, computer chips), from software engineering, and from other biomedical disciplines. In this manuscript, we introduce some basic concepts that will be important in the development of credible clinical neuroscience models: reproducibility and replicability; verification and validation; model configuration; and procedures and processes for credible mechanistic multiscale modeling. We also discuss how garnering strong community involvement can promote model credibility. Finally, in addition to direct usage with patients, we note the potential for simulation usage in the area of Simulation-Based Medical Education, an area which to date has been primarily reliant on physical models (mannequins) and scenario-based simulations rather than on numerical simulations.

  13. Convergence of Molecular Dynamics Simulation of Protein Native States: Feasibility vs Self-Consistency Dilemma.

    PubMed

    Sawle, Lucas; Ghosh, Kingshuk

    2016-02-09

    All-atom molecular dynamics simulations need convergence tests to evaluate the quality of the data. The notion of "true" convergence is elusive, and one can only hope to satisfy self-consistency checks (SCC). There are multiple SCC criteria, and their assessment of all-atom simulations of the native state of real globular proteins is sparse. Here, we present a systematic study of different SCC algorithms, both in terms of their ability to detect the lack of self-consistency and their computational demand, for all-atom native-state simulations of four globular proteins (CSP, CheA, CheW, and BPTI). Somewhat surprisingly, we notice that some of the most stringent SCC criteria, e.g., those demanding similarity of the cluster probability distribution between the first and second halves of the trajectory, or the comparison of fluctuations between different blocks using the covariance overlap measure, can require tens of microseconds of simulation even for proteins with fewer than 100 amino acids. We notice that such long simulation times can sometimes be associated with traps, but these traps cannot be detected by some of the common SCC methods. We suggest an additional, and simple, SCC algorithm to quickly detect such traps by monitoring the constancy of the cluster entropy (CCE). CCE is a necessary but not sufficient criterion, and additional SCC algorithms must be combined with it. Furthermore, as seen in the explicit-solvent simulation of a 1 ms long trajectory of BPTI [1], passing self-consistency checks at an earlier stage may be misleading due to conformational changes taking place later in the simulation, resulting in different, but segregated, regions of SCC. Although there is a hierarchy of complex SCC algorithms, caution must be exercised in their application, with knowledge of their limitations and computational expense.
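    The proposed CCE check is simple to state: cluster the trajectory, then monitor the Shannon entropy of the cluster populations as the trajectory grows and look for a plateau. A toy version on synthetic cluster labels is sketched below; the labels and the trap scenario are illustrative, not the protein data.

```python
import numpy as np

rng = np.random.default_rng(4)

def cluster_entropy(labels):
    """Shannon entropy (bits) of the cluster population distribution."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

# Hypothetical cluster labels along a trajectory: the system first explores
# 3 states, then falls into a trap (state 3) for a long stretch.
labels = np.concatenate([rng.choice(3, 30_000), np.full(20_000, 3)])

# Constancy-of-cluster-entropy check: entropy of the cumulative trajectory.
for frac in (0.2, 0.4, 0.6, 0.8, 1.0):
    n = int(frac * len(labels))
    print(f"{frac:.0%} of trajectory: H = {cluster_entropy(labels[:n]):.3f} bits")
# A drifting H (as here, once the trap is entered) flags a lack of convergence.
```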

  14. Development and testing of a numerical simulation method for thermally nonequilibrium dissociating flows in ANSYS Fluent

    NASA Astrophysics Data System (ADS)

    Shoev, G. V.; Bondar, Ye. A.; Oblapenko, G. P.; Kustova, E. V.

    2016-03-01

    Various issues of numerical simulation of supersonic gas flows with allowance for thermochemical nonequilibrium, on the basis of fluid dynamic equations in the two-temperature approximation, are discussed. The computational tool for modeling flows with thermochemical nonequilibrium is the commercial software package ANSYS Fluent with an additional user-defined open-code module. A comparative analysis of results obtained with various models of vibration-dissociation coupling in binary gas mixtures of nitrogen and oxygen is performed. Results of numerical simulations are compared with available experimental data.
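    A core ingredient of such two-temperature models is a vibrational-energy relaxation equation of Landau-Teller type, dE_v/dt = (E_v^eq(T) - E_v)/tau. The sketch below integrates it for a single N2 cell with a placeholder relaxation time, purely to illustrate vibration-translation coupling; it is not the Fluent module described above.

```python
import numpy as np

# Harmonic-oscillator vibrational energy per unit mass at temperature T.
theta_v = 3371.0   # characteristic vibrational temperature of N2 (K)
R_N2 = 296.8       # specific gas constant of N2 (J/(kg K))

def e_vib(T):
    return R_N2 * theta_v / np.expm1(theta_v / T)

# Landau-Teller relaxation toward equilibrium at the translational temperature.
T_trans = 6000.0   # translational temperature behind a shock (illustrative)
tau = 1e-5         # relaxation time (s), placeholder for a Millikan-White fit
ev = e_vib(300.0)  # start from a cold, frozen vibrational state

dt = 1e-7
for step in range(1, 501):
    ev += dt * (e_vib(T_trans) - ev) / tau
    if step % 100 == 0:
        print(f"t = {step * dt:.1e} s, e_v = {ev:.1f} J/kg")
```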

  15. Scientific Visualization and Computational Science: Natural Partners

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization is developing rapidly, stimulated by computational science, which is gaining acceptance as a third alternative to theory and experiment. Computational science is based on numerical simulations of mathematical models derived from theory. But each individual simulation is like a hypothetical experiment; initial conditions are specified, and the result is a record of the observed conditions. Experiments can be simulated for situations that cannot really be created or controlled. Results impossible to measure can be computed. Even for observable values, computed samples are typically much denser. Numerical simulations also extend scientific exploration where the mathematics is analytically intractable. Numerical simulations are used to study phenomena from subatomic to intergalactic scales and from abstract mathematical structures to pragmatic engineering of everyday objects. But computational science methods would be almost useless without visualization. The obvious reason is that the huge amounts of data produced require the high bandwidth of the human visual system, and interactivity adds to the power. Visualization systems also provide a single context for all the activities involved, from debugging the simulations, to exploring the data, to communicating the results. Most of the presentations today have their roots in image processing, where the fundamental task is: given an image, extract information about the scene. Visualization has developed from computer graphics and the inverse task: given a scene description, make an image. Visualization extends the graphics paradigm by expanding the possible input. The goal is still to produce images; the difficulty is that the input is not a scene description displayable by standard graphics methods. Visualization techniques must either transform the data into a scene description or extend graphics techniques to display this odd input. Computational science is a fertile field for visualization research because the results vary so widely and include things that have no known appearance. The amount of data creates additional challenges for both hardware and software systems. Evaluations of visualization should ultimately reflect the insight gained into the scientific phenomena. So making good visualizations requires consideration of the characteristics of the user and the purpose of the visualization. Knowledge about human perception and graphic design is also relevant. It is this breadth of knowledge that stimulates proposals for multidisciplinary visualization teams and intelligent visualization assistant software. Visualization is an immature field, but computational science is stimulating research on a broad front.

  16. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    PubMed Central

    2011-01-01

    Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined. PMID:22172142

  17. Simulation of a Geiger-Mode Imaging LADAR System for Performance Assessment

    PubMed Central

    Kim, Seongjoon; Lee, Impyeong; Kwon, Yong Joon

    2013-01-01

    As LADAR system applications gradually become more diverse, new types of systems are being developed. When developing new systems, simulation studies are an essential prerequisite. A simulator enables performance prediction and selection of optimal system parameters at the design level, as well as providing sample data for developing and validating application algorithms. The purpose of this study is to propose a method for simulating a Geiger-mode imaging LADAR system. We develop simulation software to assess system performance and generate sample data for the applications. The simulation is based on three aspects of modeling: the geometry, the radiometry and the detection. The geometric model computes the ranges to the reflection points of the laser pulses. The radiometric model generates the return signals, including the noise. The detection model determines the flight times of the laser pulses based on the nature of the Geiger-mode detector. We generated sample data using the simulator with the system parameters and analyzed the detection performance by comparing the simulated points to the reference points. The proportion of outliers in the simulated points reached 25.53%, indicating the need for efficient outlier elimination algorithms. In addition, the false alarm rate and dropout rate of the designed system were computed as 1.76% and 1.06%, respectively. PMID:23823970
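    Detection models for Geiger-mode receivers typically rest on Poisson photon statistics: a pixel fires if at least one primary electron is generated during the range gate, so P(fire) = 1 - exp(-lambda), with lambda combining signal photons (scaled by detection efficiency) and dark/background counts. The sketch below illustrates that relation with assumed values; it is not the authors' detection model.

```python
import numpy as np

def p_fire(mean_signal_photons, eta=0.3, mean_noise_electrons=0.05):
    """Probability that a Geiger-mode pixel avalanches during the range gate.

    Poisson statistics: the pixel fires if >= 1 primary electron occurs,
    from signal (detection efficiency eta) or from noise/dark counts.
    """
    lam = eta * mean_signal_photons + mean_noise_electrons
    return 1.0 - np.exp(-lam)

for n in (0.0, 0.5, 1.0, 5.0, 20.0):
    print(f"mean signal photons {n:5.1f}: P(fire) = {p_fire(n):.3f}")
# With zero signal, P(fire) is the per-gate false-alarm probability.
```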

  18. Investigations into the low temperature behavior of jet fuels: Visualization, modeling, and viscosity studies

    NASA Astrophysics Data System (ADS)

    Atkins, Daniel L.

    Aircraft operation in arctic regions or at high altitudes exposes jet fuel to temperatures below freeze point temperature specifications. Fuel constituents may solidify and remain within tanks or block fuel system components. Military and scientific requirements have been met with costly, low freeze point specialty jet fuels. Commercial airline interest in polar routes and the use of high altitude unmanned aerial vehicles (UAVs) has spurred interest in the effects of low temperatures and low-temperature additives on jet fuel. The solidification of jet fuel due to freezing is not well understood and limited visualization of fuel freezing existed prior to the research presented in this dissertation. Consequently, computational fluid dynamics (CFD) modeling that simulates jet fuel freezing and model validation were incomplete prior to the present work. The ability to simulate jet fuel freezing is a necessary tool for fuel system designers. An additional impediment to the understanding and simulation of jet fuel freezing has been the absence of published low-temperature thermo-physical properties, including viscosity, which the present work addresses. The dissertation is subdivided into three major segments covering visualization, modeling and validation, and viscosity studies. In the first segment samples of jet fuel, JPTS, kerosene, Jet A and Jet A containing additives, were cooled below their freeze point temperatures in a rectangular, optical cell. Images and temperature data recorded during the solidification process provided information on crystal habit, crystallization behavior, and the influence of the buoyancy-driven flow on freezing. N-alkane composition of the samples was determined. The Jet A sample contained the least n-alkane mass. The cooling of JPTS resulted in the least wax formation while the cooling of kerosene yielded the greatest wax formation. The JPTS and kerosene samples exhibited similar crystallization behavior and crystal habits during cooling. Low-temperature additives modified the crystal habit of the Jet A fuel. Crystal shapes and sizes were recorded for use in future computational modeling. In the second segment, a computational fluid dynamics model was developed that simulates the solidification of jet fuel due to freezing in a buoyancy-driven flow. Flow resistance caused by porous crystal structures that exist in liquid-solid regions is simulated through the use of a momentum resistance source term. (Abstract shortened by UMI.)

  19. Advanced Computation Dynamics Simulation of Protective Structures Research

    DTIC Science & Technology

    2013-02-01

    additional load with increased cracking and deflection. Eventually, the walls failed in flexure due to self-weight and did not indicate any signs of shear...overall volume of the FEM block to be 432.2 in3, instead of 415.1 in3; the overall volume increase is 1.041%. This additional material is...

  20. Cyber Technology for Materials and Structures in Aeronautics and Aerospace

    NASA Technical Reports Server (NTRS)

    Pipes, R. Byron

    1999-01-01

    This report summarizes efforts undertaken during the 1998-99 program year and includes a survey of the field of computational mechanics, a discussion of biomimetics and intelligent simulation, a survey of the field of biomimetics, and an illustration of biomimetics and computational mechanics through the example of the high-performance composite tensile structure. In addition, the preliminary results of a state-of-the-art survey of composite materials technology are presented.

  1. A dynamical-systems approach for computing ice-affected streamflow

    USGS Publications Warehouse

    Holtschlag, David J.

    1996-01-01

    A dynamical-systems approach was developed and evaluated for computing ice-affected streamflow. The approach provides for dynamic simulation and parameter estimation of site-specific equations relating ice effects to routinely measured environmental variables. Comparison indicates that results from the dynamical-systems approach ranked higher than results from 11 analytical methods previously investigated on the basis of accuracy and feasibility criteria. Additional research will likely lead to further improvements in the approach.

  2. Effects of Energetic Additives on Combustion Dynamics

    DTIC Science & Technology

    2010-04-19

    ...and ethanol drops loaded with nano-Al additives burned differently. An exploratory computational study using Large Eddy Simulation indicated that

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed and described in conjunction. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.

  4. A priori and a posteriori analyses of the flamelet/progress variable approach for supersonic combustion

    NASA Astrophysics Data System (ADS)

    Saghafian, Amirreza; Pitsch, Heinz

    2012-11-01

    A compressible flamelet/progress variable approach (CFPV) has been devised for high-speed flows. Temperature is computed from the transported total energy and the tabulated species mass fractions, and the source term of the progress variable is rescaled with pressure and temperature. The combustion is thus modeled by three additional scalar equations and a chemistry table that is computed in a pre-processing step. Three-dimensional direct numerical simulation (DNS) databases of a reacting supersonic turbulent mixing layer with detailed chemistry are analyzed to assess the underlying assumptions of CFPV. Large eddy simulations (LES) of the same configuration using the CFPV method have been performed and compared with the DNS results. The LES computations are based on presumed subgrid PDFs of the mixture fraction and progress variable, a beta function and a delta function respectively, which are assessed using the DNS databases. The flamelet equation budget is also computed to verify the validity of the CFPV method for high-speed flows.
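    The presumed-PDF closure assessed here is mechanically simple: given the filtered mixture fraction and its subgrid variance, a beta distribution is assumed and any tabulated flamelet quantity is integrated against it (the delta assumption on the progress variable reduces that direction to a lookup). The sketch below uses a made-up "tabulated" source term purely to show the mechanics.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import beta as beta_dist

def beta_filtered(func, z_mean, z_var):
    """Integrate a flamelet quantity against a presumed beta PDF of z."""
    # Beta parameters from the first two moments
    # (requires z_var < z_mean * (1 - z_mean)).
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0
    a, b = z_mean * gamma, (1.0 - z_mean) * gamma
    pdf = beta_dist(a, b).pdf
    val, _ = quad(lambda z: func(z) * pdf(z), 0.0, 1.0, points=[z_mean])
    return val

# Made-up "tabulated" source term peaking near a stoichiometric z of 0.3.
src = lambda z: np.exp(-((z - 0.3) / 0.05) ** 2)

for var in (1e-4, 1e-3, 1e-2):
    print(f"z_var = {var:.0e}: filtered source = {beta_filtered(src, 0.3, var):.4f}")
# Increasing subgrid variance smears the peak and lowers the filtered source.
```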

  5. Accelerating three-dimensional FDTD calculations on GPU clusters for electromagnetic field simulation.

    PubMed

    Nagaoka, Tomoaki; Watanabe, Soichi

    2012-01-01

    Electromagnetic simulation with an anatomically realistic computational human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the computational human model, we adapted a three-dimensional FDTD code to a multi-GPU cluster environment with Compute Unified Device Architecture and Message Passing Interface. Our multi-GPU cluster system consists of three nodes, with seven GPU boards (NVIDIA Tesla C2070) mounted on each node. We examined the performance of the FDTD calculation in the multi-GPU cluster environment. We confirmed that the FDTD calculation on the multi-GPU cluster is faster than on a single multi-GPU workstation, and we also found that the GPU cluster system calculates faster than a vector supercomputer. In addition, our GPU cluster system allowed us to perform large-scale FDTD calculations because we were able to use over 100 GB of GPU memory.
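    For readers unfamiliar with the kernel being accelerated, a minimal 1D FDTD (Yee) update is sketched below. The production code is three-dimensional with an anatomical human model, CUDA, and MPI; this is only the serial core of the scheme, in normalized units.

```python
import numpy as np

# 1D vacuum FDTD on a staggered (Yee) grid, normalized units with c = 1.
n, steps = 400, 600
courant = 0.5                 # Courant number S = c*dt/dx (<= 1 for stability)
ez = np.zeros(n)              # electric field at integer grid points
hy = np.zeros(n - 1)          # magnetic field at half-integer grid points

for t in range(steps):
    hy += courant * np.diff(ez)                    # update H from the curl of E
    ez[1:-1] += courant * np.diff(hy)              # update E from the curl of H
    ez[n // 4] += np.exp(-((t - 40) / 12.0) ** 2)  # soft Gaussian source

print(f"peak |Ez| after {steps} steps: {np.abs(ez).max():.3f}")
```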

  6. Testing trivializing maps in the Hybrid Monte Carlo algorithm

    PubMed Central

    Engel, Georg P.; Schaefer, Stefan

    2011-01-01

    We test a recent proposal to use approximate trivializing maps in a field theory to speed up Hybrid Monte Carlo simulations. Simulating the CP^{N-1} model, we find a small improvement with the leading order transformation, which is however compensated by the additional computational overhead. The scaling of the algorithm towards the continuum is not changed. In particular, the effect of the topological modes on the autocorrelation times is studied. PMID:21969733

  7. Mathematic models for a ray tracing method and its applications in wireless optical communications.

    PubMed

    Zhang, Minglun; Zhang, Yangan; Yuan, Xueguang; Zhang, Jinnan

    2010-08-16

    This paper presents a new ray tracing method, which contains a whole set of mathematic models, and its validity is verified by simulations. In addition, both theoretical analysis and simulation results show that the computational complexity of the method is much lower than that of previous ones. Therefore, the method can be used to rapidly calculate the impulse response of wireless optical channels for complicated systems.

  8. Computational fluid dynamics - The coming revolution

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1982-01-01

    The development of aerodynamic theory is traced from the days of Aristotle to the present, with the next stage in computational fluid dynamics dependent on superspeed computers for flow calculations. Additional attention is given to the history of numerical methods inherent in writing computer codes applicable to viscous and inviscid analyses for complex configurations. The advent of the superconducting Josephson junction is noted to place configurational demands on computer design to avoid limitations imposed by the speed of light, and a Japanese projection of a computer capable of several hundred billion operations/sec is mentioned. The NASA Numerical Aerodynamic Simulator is described, showing capabilities of a billion operations/sec with a memory of 240 million words using existing technology. Near-term advances in fluid dynamics are discussed.

  9. PARALLEL HOP: A SCALABLE HALO FINDER FOR MASSIVE COSMOLOGICAL DATA SETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skory, Stephen; Turk, Matthew J.; Norman, Michael L.

    2010-11-15

    Modern N-body cosmological simulations contain billions (10^9) of dark matter particles. These simulations require hundreds to thousands of gigabytes of memory and employ hundreds to tens of thousands of processing cores on many compute nodes. In order to study the distribution of dark matter in a cosmological simulation, the dark matter halos must be identified using a halo finder, which establishes the halo membership of every particle in the simulation. The resources required for halo finding are similar to the requirements for the simulation itself. In particular, simulations have become too extensive to use commonly employed halo finders, such that the computational requirements to identify halos must now be spread across multiple nodes and cores. Here, we present a scalable-parallel halo finding method called Parallel HOP for large-scale cosmological simulation data. Based on the halo finder HOP, it utilizes the message passing interface and domain decomposition to distribute the halo finding workload across multiple compute nodes, enabling analysis of much larger data sets than is possible with the strictly serial or previous parallel implementations of HOP. We provide a reference implementation of this method as a part of the toolkit yt, an analysis toolkit for adaptive mesh refinement data that includes complementary analysis modules. Additionally, we discuss a suite of benchmarks that demonstrate that this method scales well up to several hundred tasks and data sets in excess of 2000^3 particles. The Parallel HOP method and our implementation can be readily applied to any kind of N-body simulation data and is therefore widely applicable.

  10. SAFSIM theory manual: A computer program for the engineering simulation of flow systems

    NASA Astrophysics Data System (ADS)

    Dobranich, Dean

    1993-12-01

    SAFSIM (System Analysis Flow SIMulator) is a FORTRAN computer program for simulating the integrated performance of complex flow systems. SAFSIM provides sufficient versatility to allow the engineering simulation of almost any system, from a backyard sprinkler system to a clustered nuclear reactor propulsion system. In addition to versatility, speed and robustness are primary SAFSIM development goals. SAFSIM contains three basic physics modules: (1) a fluid mechanics module with flow network capability; (2) a structure heat transfer module with multiple convection and radiation exchange surface capability; and (3) a point reactor dynamics module with reactivity feedback and decay heat capability. Any or all of the physics modules can be implemented, as the problem dictates. SAFSIM can be used for compressible and incompressible, single-phase, multicomponent flow systems. Both the fluid mechanics and structure heat transfer modules employ a one-dimensional finite element modeling approach. This document contains a description of the theory incorporated in SAFSIM, including the governing equations, the numerical methods, and the overall system solution strategies.

  11. Experimental evaluation of atmospheric effects on radiometric measurements using the EREP of Skylab. [Salton Sea and Great Salt Lake

    NASA Technical Reports Server (NTRS)

    Chang, D. T. (Principal Investigator); Isaacs, R. G.

    1975-01-01

    The author has identified the following significant results. Test sites were located near the Great Salt Lake and the Salton Sea. Calculations were performed for a set of atmospheric models corresponding to the test sites, in addition to standard models for summer and winter midlatitude atmospheres with respective integrated water vapor amount of 2.4 g/sq cm and 0.9 g/sq cm. Each atmosphere was found to contain an average amount of continental aerosol. Computations were valid for high solar elevation angles. Atmospheric attenuation quantities were computed in addition to simulated EREP S192 radiances.

  12. Three-dimensional simulation and auto-stereoscopic 3D display of the battlefield environment based on the particle system algorithm

    NASA Astrophysics Data System (ADS)

    Ning, Jiwei; Sang, Xinzhu; Xing, Shujun; Cui, Huilong; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Combat training is very important for the army, and simulation of the real battlefield environment is of great significance. Two-dimensional information can no longer meet present demands. With the development of virtual reality technology, three-dimensional (3D) simulation of the battlefield environment is possible. In the simulation of a 3D battlefield environment, in addition to the terrain, combat personnel and combat tools, the simulation of explosions, fire, smoke and other effects is also very important, since these effects enhance the senses of realism and immersion in the 3D scene. However, these special effects are irregular objects, which makes them difficult to simulate with general geometry. Therefore, the simulation of irregular objects has always been a hot and difficult research topic in computer graphics. Here, the particle system algorithm is used to simulate irregular objects. We design simulations of explosions, fire and smoke based on the particle system and apply them to the battlefield 3D scene. In addition, the battlefield 3D scene is presented on a glasses-free 3D display using an algorithm based on a GPU 4K super-multiview real-time 3D video transformation method. At the same time, with a human-computer interaction function, we ultimately realize a glasses-free 3D display of a more realistic and immersive simulated 3D battlefield environment.
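    The particle-system idea itself is easy to sketch: each effect is a pool of short-lived particles with randomized birth states, integrated, aged, and culled every frame. The minimal emitter below is illustrative only; the rendering, textures, and GPU pipeline of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

class Emitter:
    """Minimal particle system: spawn, integrate, age, and cull each frame."""
    def __init__(self):
        self.pos = np.empty((0, 3))
        self.vel = np.empty((0, 3))
        self.life = np.empty(0)

    def step(self, dt=1 / 30, births=50):
        # Spawn new particles with randomized velocity and lifetime.
        v = rng.normal([0, 0, 3.0], [0.5, 0.5, 1.0], (births, 3))  # mostly upward
        self.pos = np.vstack([self.pos, np.zeros((births, 3))])
        self.vel = np.vstack([self.vel, v])
        self.life = np.concatenate([self.life, rng.uniform(0.5, 2.0, births)])
        # Integrate motion (buoyancy and drag omitted) and age the pool.
        self.pos += self.vel * dt
        self.life -= dt
        alive = self.life > 0  # cull dead particles
        self.pos, self.vel = self.pos[alive], self.vel[alive]
        self.life = self.life[alive]

e = Emitter()
for _ in range(90):            # simulate 3 seconds at 30 fps
    e.step()
print(f"live particles: {len(e.life)}, mean height: {e.pos[:, 2].mean():.2f}")
```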

  13. Generalized simulation technique for turbojet engine system analysis

    NASA Technical Reports Server (NTRS)

    Seldner, K.; Mihaloew, J. R.; Blaha, R. J.

    1972-01-01

    A nonlinear analog simulation of a turbojet engine was developed. The purpose of the study was to establish simulation techniques applicable to propulsion system dynamics and controls research. A schematic model was derived from a physical description of a J85-13 turbojet engine. Basic conservation equations were applied to each component along with their individual performance characteristics to derive a mathematical representation. The simulation was mechanized on an analog computer. The simulation was verified in both steady-state and dynamic modes by comparing analytical results with experimental data obtained from tests performed at the Lewis Research Center with a J85-13 engine. In addition, comparison was also made with performance data obtained from the engine manufacturer. The comparisons established the validity of the simulation technique.

  14. Computer-based simulation of the Bielschowsky head-tilt test using the SEE++ software system.

    PubMed

    Kaltofen, Thomas; Buchberger, Michael; Priglinger, Siegfried

    2008-01-01

    Latest measurements of the vestibulo-ocular reflex (VOR) allowed the integration of the simulation of the Bielschowsky head-tilt test (BHTT) into the SEE++ software system. SEE++ realizes a biomechanical model of the human eye in order to simulate eye motility disorders and strabismus surgeries. With the addition of the BHTT it can now also be used for differential-diagnostic simulations of complex disorders (e.g., superior oblique palsies). In order to simulate the BHTT in SEE++, the user can freely choose the desired head-tilt angle from -45 degrees to +45 degrees. The chosen angle is shown in the 3D view with a human body model and is also used in the calculation of the Hess-Lancaster test. The integration of the BHTT offers an additional improvement of the possibilities for simulating eye motility disorders. Moreover, SEE++ allows the creation of a video of the "virtual patient" while tilting the head from one side to the other, which shows dynamic changes in the simulated Hess-diagrams. Comparisons of simulation results with patient-measured data showed a good correlation between the simulated and the measured data. Further comparisons with patient data are planned.

  15. Theory and Simulation of Multicomponent Osmotic Systems

    PubMed Central

    Karunaweera, Sadish; Gee, Moon Bae; Weerasinghe, Samantha; Smith, Paul E.

    2012-01-01

    Most cellular processes occur in systems containing a variety of components many of which are open to material exchange. However, computer simulations of biological systems are almost exclusively performed in systems closed to material exchange. In principle, the behavior of biomolecules in open and closed systems will be different. Here, we provide a rigorous framework for the analysis of experimental and simulation data concerning open and closed multicomponent systems using the Kirkwood-Buff (KB) theory of solutions. The results are illustrated using computer simulations for various concentrations of the solutes Gly, Gly2 and Gly3 in both open and closed systems, and in the absence or presence of NaCl as a cosolvent. In addition, KB theory is used to help rationalize the aggregation properties of the solutes. Here one observes that the picture of solute association described by the KB integrals, which are directly related to the solution thermodynamics, and that provided by more physical clustering approaches are different. It is argued that the combination of KB theory and simulation data provides a simple and powerful tool for the analysis of complex multicomponent open and closed systems. PMID:23329894
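    The central quantity in a KB analysis is the Kirkwood-Buff integral G_ij = 4*pi * int_0^R (g_ij(r) - 1) r^2 dr, evaluated from radial distribution functions sampled in the simulations. A minimal numerical version with a synthetic g(r) is sketched below; real applications must also handle the truncation at finite R and the corrections between open and closed ensembles discussed above.

```python
import numpy as np

def kb_integral(r, g):
    """Kirkwood-Buff integral G = 4*pi * int (g(r) - 1) r^2 dr, truncated at r[-1]."""
    y = (g - 1.0) * r**2
    return 4.0 * np.pi * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r))  # trapezoid

# Synthetic RDF: excluded core, damped oscillatory solvation shells, g -> 1.
r = np.linspace(0.01, 2.0, 2000)   # nm
g = np.where(r < 0.25, 0.0,
             1.0 + 0.8 * np.exp(-(r - 0.3) / 0.1) * np.cos(12.0 * (r - 0.3)))

G = kb_integral(r, g)              # nm^3
print(f"G = {G:.4f} nm^3 (positive G indicates net mutual affinity)")
```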

  16. Providing a parallel and distributed capability for JMASS using SPEEDES

    NASA Astrophysics Data System (ADS)

    Valinski, Maria; Driscoll, Jonathan; McGraw, Robert M.; Meyer, Bob

    2002-07-01

    The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be extended for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high speed communications. By providing support for these services, JMASS can better address modeling domains requiring parallel, computationally intense calculations, such as clutter, vulnerability and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.

  17. Overview of the Tusas Code for Simulation of Dendritic Solidification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trainer, Amelia J.; Newman, Christopher Kyle; Francois, Marianne M.

    2016-01-07

    The aim of this project is to conduct a parametric investigation into the modeling of two-dimensional dendritic solidification using the phase field model. Specifically, we use the Tusas code, which performs coupled heat and phase-field simulation of dendritic solidification. Dendritic solidification, which may occur in the presence of an unstable solidification interface, results in treelike microstructures that often grow perpendicular to the rest of the growth front. The interface may become unstable if the enthalpy of the solid material is less than that of the liquid material, or if the solute is less soluble in solid than it is in liquid, potentially causing a partition [1]. A key motivation behind this research is that a broadened understanding of phase-field formulation and microstructural development can be utilized for macroscopic simulations of phase change. This may be directly implemented as part of the Telluride project at Los Alamos National Laboratory (LANL), through which a computational additive manufacturing simulation tool is being developed, ultimately to become part of the Advanced Simulation and Computing Program within the U.S. Department of Energy [2].

  18. Coarse kMC-based replica exchange algorithms for the accelerated simulation of protein folding in explicit solvent.

    PubMed

    Peter, Emanuel K; Shea, Joan-Emma; Pivkin, Igor V

    2016-05-14

    In this paper, we present a coarse replica exchange molecular dynamics (REMD) approach based on kinetic Monte Carlo (kMC). The new development can significantly reduce the number of replicas and the computational cost needed to enhance sampling in protein simulations. We introduce 2 different methods, which differ primarily in the exchange scheme between the parallel ensembles. We apply this approach to the folding of 2 different β-stranded peptides: the C-terminal β-hairpin fragment of GB1 and TrpZip4. Additionally, we use the new simulation technique to study the folding of TrpCage, a small fast-folding α-helical peptide. Subsequently, we apply the new methodology to conformational changes in the signaling of the light-oxygen-voltage (LOV) sensitive domain from Avena sativa (AsLOV2). Our results agree well with data reported in the literature. In simulations of dialanine, we compare the statistical sampling of the 2 techniques with conventional REMD and analyze their performance. The new techniques can reduce the computational cost of REMD significantly and can be used in enhanced sampling simulations of biomolecules.

  19. [Cost analysis for navigation in knee endoprosthetics].

    PubMed

    Cerha, O; Kirschner, S; Günther, K-P; Lützner, J

    2009-12-01

    Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors, including alignment of the leg and positioning of the implant, in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has been established especially in Europe and North America. The additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study, a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5 and 10 year depreciation), annual costs for maintenance and software updates, as well as the accompanying costs per operation (consumables, additional operating time) were considered. The additional operating time was determined on the basis of a meta-analysis of the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The incremental cost of computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year, an additional operating time of 14 mins, and a 10 year depreciation of the investment costs, the incremental expenses amount to 300-395 depending on the navigation system. Computer-assisted TKA is associated with additional costs. From an economic point of view, a volume of more than 50 procedures per year appears to be favourable. The cost-effectiveness could be estimated if long-term results show a reduction of revisions or a better clinical outcome.
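    The per-case arithmetic behind such an analysis is worth making explicit: fixed annual costs (straight-line depreciation plus maintenance) are spread over the annual case volume, then consumables and the cost of the extra operating time are added. The figures below are placeholders, not the study's prices; only the 14 min extra operating time is taken from the text.

```python
def incremental_cost_per_case(acquisition, depreciation_years, annual_maintenance,
                              cases_per_year, consumables_per_case,
                              extra_or_minutes, or_cost_per_minute):
    """Incremental cost of navigated TKA per case (simple straight-line model)."""
    fixed_per_year = acquisition / depreciation_years + annual_maintenance
    return (fixed_per_year / cases_per_year + consumables_per_case
            + extra_or_minutes * or_cost_per_minute)

# Placeholder figures: 10-year depreciation, 14 min extra OR time as in the study.
for n in (25, 50, 100, 200, 500):
    c = incremental_cost_per_case(acquisition=100_000, depreciation_years=10,
                                  annual_maintenance=5_000, cases_per_year=n,
                                  consumables_per_case=80,
                                  extra_or_minutes=14, or_cost_per_minute=10)
    print(f"{n:3d} cases/year: incremental cost per case = {c:7.2f}")
# The fixed-cost share shrinks quickly with volume, flattening beyond ~100 cases.
```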

  20. An Examination of Unsteady Airloads on a UH-60A Rotor: Computation Versus Measurement

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Lee-Rausch, Elizabeth

    2012-01-01

    An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids is used to simulate the flow over a UH-60A rotor. Traditionally, the computed pressure and shear stresses are integrated on the computational mesh at selected radial stations and compared to measured airloads. However, the corresponding integration of experimental data uses only the pressure contribution, and the set of integration points (pressure taps) is modest compared to the computational mesh resolution. This paper examines the difference between the traditional integration of computed airloads and an integration consistent with that used for the experimental data. In addition, a comparison of chordwise pressure distributions between computation and measurement is made. Examination of this unsteady pressure data provides new opportunities to understand differences between computation and flight measurement.
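    The consistency issue can be demonstrated in a few lines: integrating a chordwise pressure distribution on a dense "computational" grid versus on a handful of tap locations gives noticeably different sectional normal-force values when a sharp suction peak is undersampled. The pressure distributions below are synthetic, not the UH-60A data.

```python
import numpy as np

def normal_coeff(xc, cp_lower, cp_upper):
    """c_n = integral of (Cp_lower - Cp_upper) d(x/c), by the trapezoidal rule."""
    y = cp_lower - cp_upper
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(xc))

# Synthetic chordwise pressures with a sharp suction peak near the leading edge.
cp_u = lambda x: -4.0 * np.exp(-x / 0.05) - 0.4 * (1 - x)   # upper surface
cp_l = lambda x: 0.6 * (1 - x) ** 0.5                       # lower surface

x_dense = np.linspace(0.0, 1.0, 500)                        # "computational mesh"
x_taps = np.array([0.02, 0.05, 0.1, 0.2, 0.35, 0.5, 0.7, 0.9])  # sparse taps

cn_dense = normal_coeff(x_dense, cp_l(x_dense), cp_u(x_dense))
cn_taps = normal_coeff(x_taps, cp_l(x_taps), cp_u(x_taps))
print(f"c_n on dense mesh: {cn_dense:.4f}; on {len(x_taps)} taps: {cn_taps:.4f}")
```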

  1. A network-analysis-based comparative study of the throughput behavior of polymer melts in barrier screw geometries

    NASA Astrophysics Data System (ADS)

    Aigner, M.; Köpplmayr, T.; Kneidinger, C.; Miethlinger, J.

    2014-05-01

    Barrier screws are widely used in the plastics industry. Due to the extreme diversity of their geometries, describing the flow behavior is difficult and rarely done in practice. We present a systematic network-based approach that uses tensor algebra and numerical methods to model and calculate selected barrier screw geometries in terms of pressure, mass flow, and residence time. In addition, we report the results of three-dimensional simulations using the commercially available ANSYS Polyflow software. The major drawbacks of three-dimensional finite-element-method (FEM) simulations are that they require vast computational power and large quantities of memory, and consume considerable time to create a geometric model by computer-aided design (CAD) and to complete a flow calculation. Consequently, a modified 2.5-dimensional finite volume method, termed network analysis, is preferable. The results obtained by network analysis and FEM simulations correlated well. Network analysis provides an efficient alternative to complex FEM software in terms of computing power and memory consumption. Furthermore, typical barrier screw geometries can be parameterized and used for flow calculations without time-consuming CAD constructions.

  2. Comparison of DSMC and CFD Solutions of Fire II Including Radiative Heating

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.; Johnston, Christopher O.; Lewis, Mark J.

    2011-01-01

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. These flows may also involve significant radiative heating. To prepare for these missions, NASA is developing the capability to simulate rarefied, ionized flows and to then calculate the resulting radiative heating to the vehicle's surface. In this study, the DSMC codes DAC and DS2V are used to obtain charge-neutral ionization solutions. NASA's direct simulation Monte Carlo code DAC is currently being updated to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced Quantum-Kinetic chemistry model, and include electronic energy levels as an additional internal energy mode. The Fire II flight test is used in this study to assess these new capabilities. The 1634-second data point was chosen so that comparisons could be made to computational fluid dynamics solutions: the Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid. It is shown that there can be considerable variability in the vibrational temperature inferred from DSMC solutions and that, given how radiative heating is computed, the electronic temperature is much better suited for radiative calculations. To include the radiative portion of heating, the flow-field solutions are post-processed by the non-equilibrium radiation code HARA. Acceptable agreement between CFD and DSMC flow-field solutions is demonstrated, and the progress of the updates to DAC, along with an appropriate radiative heating solution, is discussed. In addition, future plans to generate more high-fidelity radiative heat transfer solutions are discussed.

  3. Chapter 28: Theory SkyNode

    NASA Astrophysics Data System (ADS)

    Wagner, R.; Norman, M. L.

    Here we present a working example of a Basic SkyNode serving theoretical data. The data is taken from the Simulated Cluster Archive (SCA), a set of simulated X-ray clusters, where each cluster was computed using four different physics models. The LCA Theory SkyNode (LCATheory) tables contain columns of the integrated physical properties of the clusters at various redshifts. The ease of setting up a Theory SkyNode is an important result, because it represents a clear way to present theory data to the Virtual Observatory. Also, our Theory SkyNode provides a prototype for additional simulated object catalogs, which will be created from other simulations by our group, and hopefully others.

  4. Octree-based Global Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Ramirez-Guzman, L.; Juarez, A.; Bielak, J.; Salazar Monroy, E. F.

    2017-12-01

    Seismological research has motivated recent efforts to construct more accurate three-dimensional (3D) velocity models of the Earth, perform global simulations of wave propagation to validate models, and study the interaction of seismic fields with 3D structures. However, traditional methods for seismogram computation at global scales, which rely primarily on normal mode summation or two-dimensional numerical methods, are limited by computational resources. We present an octree-based finite element implementation to perform global earthquake simulations with 3D models, using topography and bathymetry with a staircase approximation, as modeled by the Carnegie Mellon Finite Element Toolchain Hercules (Tu et al., 2006). To verify the implementation, we compared the synthetic seismograms computed in a spherical earth against waveforms calculated using normal mode summation for the Preliminary Reference Earth Model (PREM) for a point source representation of the 2014 Mw 7.3 Papanoa, Mexico earthquake. We considered a 3 km-thick ocean layer for stations with predominantly oceanic paths. Eigenfrequencies and eigenfunctions were computed for toroidal, radial, and spherical oscillations in the first 20 branches. Simulations are valid at frequencies up to 0.05 Hz. Agreement between the waveforms computed by the two approaches, especially for long-period surface waves, is excellent. Additionally, we modeled the Mw 9.0 Tohoku-Oki earthquake using the USGS finite fault inversion. Topography and bathymetry from ETOPO1 are included in a mesh with more than 3 billion elements, constrained by the available computational resources. We compared estimated velocity and GPS synthetics against observations at regional and teleseismic stations of the Global Seismological Network and discuss the differences between observations and synthetics, revealing that heterogeneity, particularly in the crust, needs to be considered.

  5. Towards real-time photon Monte Carlo dose calculation in the cloud

    NASA Astrophysics Data System (ADS)

    Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe

    2017-06-01

    Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in the case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server-side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets, with absolute runtimes of 1.1 to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.
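
    The scalability claim rests on the statistical independence of MC histories: samples can be split across any number of cloud nodes and merged afterwards, with uncertainty shrinking as 1/sqrt(N). The toy attenuation sketch below (ours, not the paper's framework; the coefficient and geometry are placeholders) shows the kind of independent sampling such a system distributes.

        # Toy 1D photon attenuation Monte Carlo (illustration only).
        import numpy as np

        rng = np.random.default_rng(0)
        mu = 0.2                        # hypothetical attenuation coefficient, 1/cm
        depth, nbins, nphot = 30.0, 60, 100_000

        # Sample exponential free paths; bin each photon's interaction depth.
        x = rng.exponential(1.0 / mu, nphot)
        x = x[x < depth]
        dose, _ = np.histogram(x, bins=nbins, range=(0.0, depth))
        dose = dose / nphot             # fraction of photons interacting per bin
        # Doubling the number of nodes doubles nphot for the same wall time,
        # cutting the statistical uncertainty by sqrt(2).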

  6. Direct numerical simulation of reactor two-phase flows enabled by high-performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Jun; Cambareri, Joseph J.; Brown, Cameron S.

    Nuclear reactor two-phase flows remain a great engineering challenge: the high-resolution two-phase flow databases that can inform practical model development are still sparse due to the extreme reactor operating conditions and measurement difficulties. Owing to the rapid growth of computing power, direct numerical simulation (DNS) is enjoying renewed interest for investigating the related flow problems. A combination of DNS and an interface tracking method provides a unique opportunity to study two-phase flows based on first-principles calculations. More importantly, state-of-the-art high-performance computing (HPC) facilities are helping unlock this great potential. This paper reviews the recent research progress of two-phase flow DNS related to reactor applications. The progress in large-scale bubbly flow DNS has focused not only on the sheer size of those simulations in terms of resolved Reynolds number, but also on the associated advanced modeling and analysis techniques. Specifically, the current areas of active research include modeling of sub-cooled boiling, bubble coalescence, as well as the advanced post-processing toolkit for bubbly flow simulations in reactor geometries. A novel bubble tracking method has been developed to track the evolution of bubbles in two-phase bubbly flow. Also, spectral analysis of DNS databases in different geometries has been performed to investigate the modulation of the energy spectrum slope due to bubble-induced turbulence. In addition, single- and two-phase analysis results are presented for turbulent flows within pressurized water reactor (PWR) core geometries. Such simulations are possible only on the world's leading HPC platforms, and they are enabling more complex turbulence model development and validation for use in 3D multiphase computational fluid dynamics (M-CFD) codes.

  7. Towards real-time photon Monte Carlo dose calculation in the cloud.

    PubMed

    Ziegenhein, Peter; Kozin, Igor N; Kamerling, Cornelis Ph; Oelfke, Uwe

    2017-06-07

    Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in the case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server-side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets, with absolute runtimes of 1.1 to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.

  8. Simulating and assessing boson sampling experiments with phase-space representations

    NASA Astrophysics Data System (ADS)

    Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.

    2018-04-01

    The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and the potential to yield a quantum computer that solves problems believed to be classically intractable. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply it to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.

  9. Parallel Proximity Detection for Computer Simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor); Wieland, Frederick P. (Inventor)

    1997-01-01

    The present invention discloses a system for performing proximity detection in computer simulations on parallel processing architectures utilizing a distribution list which includes movers and sensor coverages which check in and out of grids. Each mover maintains a list of sensors that detect the mover's motion as the mover and sensor coverages check in and out of the grids. Fuzzy grids are included by fuzzy resolution parameters to allow movers and sensor coverages to check in and out of grids without computing exact grid crossings. The movers check in and out of grids while moving sensors periodically inform the grids of their coverage. In addition, a lookahead function is also included for providing a generalized capability without making any limiting assumptions about the particular application to which it is applied. The lookahead function is initiated so that risk-free synchronization strategies never roll back grid events. The lookahead function adds fixed delays as events are scheduled for objects on other nodes.
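
    The grid bookkeeping at the heart of this scheme can be illustrated in a few lines (a schematic of the check-in/check-out idea only, not the patented system; the cell size and names are ours).

        # Movers register in coarse cells; a sensor only tests movers found
        # in the cells its coverage overlaps (illustrative sketch).
        from collections import defaultdict

        CELL = 10.0
        grid = defaultdict(set)

        def cell(p):
            return (int(p[0] // CELL), int(p[1] // CELL))

        def check_in(mover, pos):
            grid[cell(pos)].add(mover)

        def check_out(mover, pos):
            grid[cell(pos)].discard(mover)

        def candidates(sensor_cells):
            found = set()
            for c in sensor_cells:      # union over the covered cells
                found |= grid[c]
            return found

        check_in("m1", (12.0, 3.0))
        check_in("m2", (55.0, 40.0))
        print(candidates([(1, 0), (2, 0)]))   # -> {'m1'}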

  10. Parallel Proximity Detection for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor); Wieland, Frederick P. (Inventor)

    1998-01-01

    The present invention discloses a system for performing proximity detection in computer simulations on parallel processing architectures utilizing a distribution list which includes movers and sensor coverages which check in and out of grids. Each mover maintains a list of sensors that detect the mover's motion as the mover and sensor coverages check in and out of the grids. Fuzzy grids are included by fuzzy resolution parameters to allow movers and sensor coverages to check in and out of grids without computing exact grid crossings. The movers check in and out of grids while moving sensors periodically inform the grids of their coverage. In addition, a lookahead function is also included for providing a generalized capability without making any limiting assumptions about the particular application to which it is applied. The lookahead function is initiated so that risk-free synchronization strategies never roll back grid events. The lookahead function adds fixed delays as events are scheduled for objects on other nodes.

  11. CFD Assessment of Aerodynamic Degradation of a Subsonic Transport Due to Airframe Damage

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.; Pirzadeh, Shahyar Z.; Atkins, Harold L.; Viken, Sally A.; Morrison, Joseph H.

    2010-01-01

    A computational study is presented to assess the utility of two NASA unstructured Navier-Stokes flow solvers for capturing the degradation in static stability and aerodynamic performance of a NASA Generic Transport Model (GTM) due to airframe damage. The approach is to correlate computational results with a substantial subset of experimental data for the GTM undergoing progressive losses to the wing, vertical tail, and horizontal tail components. The ultimate goal is to advance the use of computational data in the creation of advanced flight simulation models of damaged subsonic aircraft in order to improve pilot training. Results presented in this paper demonstrate good correlations with slope-derived quantities, such as pitch static margin and static directional stability, and with the incremental rolling moment due to wing damage. This study further demonstrates that high-fidelity Navier-Stokes flow solvers could augment flight simulation models with additional aerodynamic data for various airframe damage scenarios.

  12. TSaT-MUSIC: a novel algorithm for rapid and accurate ultrasonic 3D localization

    NASA Astrophysics Data System (ADS)

    Mizutani, Kyohei; Ito, Toshio; Sugimoto, Masanori; Hashizume, Hiromichi

    2011-12-01

    We describe a fast and accurate indoor localization technique using the multiple signal classification (MUSIC) algorithm. The MUSIC algorithm is known as a high-resolution method for estimating directions of arrival (DOAs) or propagation delays. A critical problem in using the MUSIC algorithm for localization is its computational complexity. Therefore, we devised a novel algorithm called Time Space additional Temporal-MUSIC (TSaT-MUSIC), which can rapidly and simultaneously identify DOAs and delays of multicarrier ultrasonic waves from transmitters. Computer simulations have shown that the computation time of the proposed algorithm is almost constant in spite of increasing numbers of incoming waves and is faster than that of existing methods based on the MUSIC algorithm. The robustness of the proposed algorithm is discussed through simulations. Experiments in real environments showed that the standard deviation of position estimations in 3D space is less than 10 mm, which is satisfactory for indoor localization.
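
    For reference, a minimal classical MUSIC pseudospectrum for a uniform linear array is sketched below (our illustration; TSaT-MUSIC adds the time-space/temporal factorization that keeps the cost nearly constant as the number of incoming waves grows).

        # Classical narrowband MUSIC for a uniform linear array.
        import numpy as np

        def music_spectrum(X, n_src, d=0.5, grid=np.linspace(-90, 90, 361)):
            """X: (n_sensors, n_snapshots) complex baseband data; d in wavelengths."""
            m = X.shape[0]
            R = X @ X.conj().T / X.shape[1]          # sample covariance
            w, V = np.linalg.eigh(R)                 # eigenvalues ascending
            En = V[:, : m - n_src]                   # noise subspace
            p = np.empty(grid.size)
            for k, th in enumerate(np.deg2rad(grid)):
                a = np.exp(2j * np.pi * d * np.arange(m) * np.sin(th))  # steering vector
                p[k] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
            return grid, p                           # peaks mark the DOAs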

  13. WE-C-217BCD-08: Rapid Monte Carlo Simulations of DQE(f) of Scintillator-Based Detectors.

    PubMed

    Star-Lack, J; Abel, E; Constantin, D; Fahrig, R; Sun, M

    2012-06-01

    Monte Carlo simulations of DQE(f) can greatly aid in the design of scintillator-based detectors by helping optimize key parameters including scintillator material and thickness, pixel size, surface finish, and septa reflectivity. However, the additional optical transport significantly increases simulation times, necessitating a large number of parallel processors to adequately explore the parameter space. To address this limitation, we have optimized the DQE(f) algorithm, reducing simulation times per design iteration to 10 minutes on a single CPU. DQE(f) is proportional to the ratio MTF(f)^2/NPS(f). The LSF-MTF simulation uses a slanted line source and is rapidly performed with relatively few gammas launched. However, the conventional NPS simulation for standard radiation exposure levels requires the acquisition of multiple flood fields (nRun), each requiring billions of input gamma photons (nGamma), many of which will scintillate, thereby producing thousands of optical photons (nOpt) per deposited MeV. The resulting execution time is proportional to the product nRun x nGamma x nOpt. In this investigation, we revisit the theoretical derivation of DQE(f) and reveal significant computation time savings through the optimization of nRun, nGamma, and nOpt. Using GEANT4, we determine optimal values for these three variables for a GOS scintillator-amorphous silicon portal imager. Both isotropic and Mie optical scattering processes were modeled. Simulation results were validated against the literature. We found that, depending on the radiative and optical attenuation properties of the scintillator, the NPS can be accurately computed using values for nGamma below 1000 and values for nOpt below 500/MeV. nRun should remain above 200. Using these parameters, typical computation times for a complete NPS ranged from 2-10 minutes on a single CPU. The number of launched particles and corresponding execution times for a DQE simulation can be dramatically reduced, allowing for accurate computation with modest computer hardware. NIH R01 CA138426. Several authors work for Varian Medical Systems. © 2012 American Association of Physicists in Medicine.
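
    Two relations in this record are worth making explicit. The cost model is execution time proportional to nRun x nGamma x nOpt, so a saving in each factor multiplies through; and the quoted proportionality can be evaluated directly once the MTF and NPS curves are tabulated. A sketch under our naming, with normalization prefactors deliberately omitted:

        # DQE(f) up to a dose/gain prefactor, from sampled MTF and NPS curves.
        import numpy as np

        def dqe(mtf, nps, eps=1e-12):
            mtf, nps = np.asarray(mtf, float), np.asarray(nps, float)
            return mtf**2 / np.maximum(nps, eps)   # guard against NPS -> 0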

  14. Structure simulation with calculated NMR parameters - integrating COSMOS into the CCPN framework.

    PubMed

    Schneider, Olaf; Fogh, Rasmus H; Sternberg, Ulrich; Klenin, Konstantin; Kondov, Ivan

    2012-01-01

    The Collaborative Computing Project for NMR (CCPN) has built a software framework consisting of the CCPN data model (with APIs) for NMR-related data, the CcpNmr Analysis program, and additional tools like CcpNmr FormatConverter. The open architecture allows for the integration of external software to extend the abilities of the CCPN framework with additional calculation methods. Recently, we have carried out the first steps toward integrating our software Computer Simulation of Molecular Structures (COSMOS) into the CCPN framework. The COSMOS-NMR force field unites quantum chemical routines for the calculation of molecular properties with a molecular mechanics force field yielding the relative molecular energies. COSMOS-NMR allows introducing NMR parameters as constraints into molecular mechanics calculations. The resulting infrastructure will be made available to the NMR community. As a first application we have tested the evaluation of calculated protein structures using COSMOS-derived 13C Cα and Cβ chemical shifts. In this paper we give an overview of the methodology and a roadmap for future developments and applications.

  15. An evaluation of noise reduction algorithms for particle-based fluid simulations in multi-scale applications

    NASA Astrophysics Data System (ADS)

    Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.

    2016-11-01

    Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to further improve the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results, and sensitivity to the choice of input parameters. The results provide useful insights into strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.
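
    As an illustration of the filter classes being compared (our toy example, not the paper's data or parameter choices), a noisy one-dimensional profile de-noised with a window average and a Savitzky-Golay filter:

        # Two simple de-noising filters applied to a synthetic noisy profile.
        import numpy as np
        from scipy.signal import savgol_filter

        x = np.linspace(0.0, 1.0, 200)
        clean = np.sin(2 * np.pi * x)
        noisy = clean + 0.2 * np.random.default_rng(1).normal(size=x.size)

        smooth_ma = np.convolve(noisy, np.ones(9) / 9.0, mode="same")   # moving average
        smooth_sg = savgol_filter(noisy, window_length=17, polyorder=3)
        print(np.std(smooth_ma - clean), np.std(smooth_sg - clean))     # residual noise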

  16. Comparison of remotely sensed continental-shelf wave spectra with spectra computed by using a wave refraction computer model

    NASA Technical Reports Server (NTRS)

    Poole, L. R.

    1976-01-01

    An initial attempt was made to verify the Langley Research Center and Virginia Institute of Marine Science mid-Atlantic continental-shelf wave refraction model. The model was used to simulate refraction occurring during a continental-shelf remote sensing experiment conducted on August 17, 1973. Simulated wave spectra compared favorably, in a qualitative sense, with the experimental spectra. However, it was observed that most of the wave energy resided at frequencies higher than those for which refraction and shoaling effects were predicted. In addition, variations among the experimental spectra were so small that they were not considered statistically significant. In order to verify the refraction model, simulation must be performed in conjunction with a set of significantly varying spectra in which a considerable portion of the total energy resides at frequencies for which refraction and shoaling effects are likely.

  17. A potential-energy scaling model to simulate the initial stages of thin-film growth

    NASA Technical Reports Server (NTRS)

    Heinbockel, J. H.; Outlaw, R. A.; Walker, G. H.

    1983-01-01

    A solid-on-solid (SOS) Monte Carlo computer simulation employing a potential energy scaling technique was used to model the initial stages of thin film growth. The model monitors variations in the vertical interaction potential that occur due to the arrival or departure of selected adatoms or impurities at all sites in the 400-site array. Boltzmann ordered statistics are used to simulate fluctuations in vibrational energy at each site in the array, and the resulting site energy is compared with threshold levels of possible atomic events. In addition to adsorption, desorption, and surface migration, adatom incorporation and diffusion of a substrate atom to the surface are also included. The lateral interaction of nearest, second nearest, and third nearest neighbors is also considered. A series of computer experiments is conducted to illustrate the behavior of the model.
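
    A stripped-down sketch of an SOS update of this flavor is given below; the energies, temperature, and event set are invented placeholders, and the model's Boltzmann-ordered statistics, impurities, and second- and third-neighbor interactions are omitted.

        # Toy solid-on-solid lattice: random arrival plus thermally activated
        # departure, with the barrier raised by lateral nearest-neighbor bonds.
        import numpy as np

        rng = np.random.default_rng(2)
        L, kT = 20, 0.05
        E_ads, E_bond = 0.4, 0.1          # hypothetical energies (eV)
        h = np.zeros((L, L), dtype=int)   # column heights

        def neighbors(i, j):
            return [((i + 1) % L, j), ((i - 1) % L, j),
                    (i, (j + 1) % L), (i, (j - 1) % L)]

        def desorb_prob(i, j):
            lateral = sum(h[n] >= h[i, j] for n in neighbors(i, j))
            return np.exp(-(E_ads + E_bond * lateral) / kT)

        for _ in range(L * L):            # one sweep
            i, j = rng.integers(L, size=2)
            h[i, j] += 1                  # arrival
            if rng.random() < desorb_prob(i, j):
                h[i, j] -= 1              # departure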

  18. Modeling the Hydration Layer around Proteins: Applications to Small- and Wide-Angle X-Ray Scattering

    PubMed Central

    Virtanen, Jouko Juhani; Makowski, Lee; Sosnick, Tobin R.; Freed, Karl F.

    2011-01-01

    Small-/wide-angle x-ray scattering (SWAXS) experiments can aid in determining the structures of proteins and protein complexes, but success requires accurate computational treatment of solvation. We compare two methods by which to calculate SWAXS patterns. The first approach uses all-atom explicit-solvent molecular dynamics (MD) simulations. The second, far less computationally expensive method involves prediction of the hydration density around a protein using our new HyPred solvation model, which is applied without the need for additional MD simulations. The SWAXS patterns obtained from the HyPred model compare well to both experimental data and the patterns predicted by the MD simulations. Both approaches exhibit advantages over existing methods for analyzing SWAXS data. The close correspondence between calculated and observed SWAXS patterns provides strong experimental support for the description of hydration implicit in the HyPred model. PMID:22004761
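
    For orientation, the generic reference calculation behind any such comparison is the Debye equation, I(q) = sum_i sum_j f_i f_j sin(q r_ij)/(q r_ij). A direct O(N^2) sketch, with q-independent form factors assumed for brevity:

        # Debye-equation scattering intensity from atomic coordinates.
        import numpy as np

        def debye_intensity(coords, f, q):
            """coords: (N, 3) positions; f: (N,) form factors; q: (M,) magnitudes."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            qr = q[:, None, None] * d[None, :, :]
            # np.sinc(x) = sin(pi x)/(pi x), hence the division by pi
            return np.einsum("i,j,qij->q", f, f, np.sinc(qr / np.pi))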

  19. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
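
    For the single-best special case, the allocation has a closed form: N_i/N_j = (sigma_i/delta_i)^2 / (sigma_j/delta_j)^2 for non-best designs i and j, and N_b = sigma_b sqrt(sum over i != b of N_i^2/sigma_i^2) for the best design b, where delta_i is the gap to the best sample mean. The sketch below implements these ratios assuming known means and standard deviations; the paper's contribution generalizes the allocation to the top-m subset.

        # Closed-form OCBA budget fractions for the single-best case
        # (illustration; assumes distinct sample means).
        import numpy as np

        def ocba_fractions(means, stds):
            means, stds = np.asarray(means, float), np.asarray(stds, float)
            b = int(np.argmin(means))             # current best (minimization)
            delta = means - means[b]              # optimality gaps
            alloc = np.zeros_like(means)
            rest = np.arange(means.size) != b
            alloc[rest] = (stds[rest] / delta[rest]) ** 2
            alloc[b] = stds[b] * np.sqrt(np.sum((alloc[rest] / stds[rest]) ** 2))
            return alloc / alloc.sum()            # fraction of the budget per design

        print(ocba_fractions([1.0, 1.2, 1.5], [0.3, 0.3, 0.3]))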

  20. A Model of In vitro Plasticity at the Parallel Fiber—Molecular Layer Interneuron Synapses

    PubMed Central

    Lennon, William; Yamazaki, Tadashi; Hecht-Nielsen, Robert

    2015-01-01

    Theoretical and computational models of the cerebellum typically focus on the role of parallel fiber (PF)—Purkinje cell (PKJ) synapses for learned behavior, but few emphasize the role of the molecular layer interneurons (MLIs)—the stellate and basket cells. A number of recent experimental results suggest the role of MLIs is more important than previous models put forth. We investigate learning at PF—MLI synapses and propose a mathematical model to describe plasticity at this synapse. We perform computer simulations with this form of learning using a spiking neuron model of the MLI and show that it reproduces six in vitro experimental results in addition to simulating four novel protocols. Further, we show how this plasticity model can predict the results of other experimental protocols that are not simulated. Finally, we hypothesize what the biological mechanisms are for changes in synaptic efficacy that embody the phenomenological model proposed here. PMID:26733856

  1. Recovery Act: Web-based CO{sub 2} Subsurface Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paolini, Christopher; Castillo, Jose

    2012-11-30

    The Web-based CO{sub 2} Subsurface Modeling project focused primarily on extending an existing text-only, command-line driven, isothermal and isobaric, geochemical reaction-transport simulation code, developed and donated by Sienna Geodynamics, into an easier-to-use Web-based application for simulating long-term storage of CO{sub 2} in geologic reservoirs. The Web-based interface developed through this project, publicly accessible via URL http://symc.sdsu.edu/, enables rapid prototyping of CO{sub 2} injection scenarios and allows students without advanced knowledge of geochemistry to set up a typical sequestration scenario, invoke a simulation, analyze results, and then vary one or more problem parameters and quickly re-run a simulation to answer what-if questions. symc.sdsu.edu has 2x12 core AMD Opteron™ 6174 2.20GHz processors and 16GB RAM. The Web-based application was used to develop a new computational science course at San Diego State University, COMP 670: Numerical Simulation of CO{sub 2} Sequestration, which was taught during the fall semester of 2012. The purpose of the class was to introduce graduate students to Carbon Capture, Use and Storage (CCUS) through numerical modeling and simulation, and to teach students how to interpret simulation results to make predictions about long-term CO{sub 2} storage capacity in deep brine reservoirs. In addition to the training and education component of the project, significant software development efforts took place. Two computational science doctoral students and one geological science master's student, under the direction of the PIs, extended the original code developed by Sienna Geodynamics, named Sym.8. New capabilities were added to Sym.8 to simulate non-isothermal and non-isobaric flows of charged aqueous solutes in porous media, in addition to incorporating HPC support into the code for execution on many-core XSEDE clusters. A successful outcome of this project was the funding and training of three new computational science students and one geological science student in technologies relevant to carbon sequestration and problems involving flow in subsurface media. The three computational science students are currently finishing their doctoral studies on different aspects of modeling CO{sub 2} sequestration, while the geological science student completed his master's thesis on modeling the thermal response of CO{sub 2} injection in brine and, as a direct result of participation in this project, is now employed at ExxonMobil as a full-time staff geologist.

  2. Development of a methodology to compute solvation free energies on the basis of the theory of energy representation for solutions represented with a polarizable force field.

    PubMed

    Suzuoka, Daiki; Takahashi, Hideaki; Ishiyama, Tatsuya; Morita, Akihiro

    2012-12-07

    We have developed a method of molecular simulations utilizing a polarizable force field in combination with the theory of energy representation (ER) for the purpose of establishing an efficient and accurate methodology to compute solvation free energies. The standard version of the ER method is, however, based on the assumption that the solute-solvent interaction is pairwise additive for its construction. A crucial step in the present method is to introduce an intermediate state in the solvation process to treat separately the many-body interaction associated with the polarizable model. The intermediate state is chosen so that the solute-solvent interaction can be formally written in the pairwise form, though the solvent molecules are interacting with each other with polarizable charges dependent on the solvent configuration. It is, then, possible to extract the free energy contribution δμ due to the many-body interaction between solute and solvent from the total solvation free energy Δμ. It is shown that the free energy δμ can be computed by an extension of the recent development implemented in quantum mechanical/molecular mechanical simulations. To assess the numerical robustness of the approach, we computed the solvation free energies of a water and a methanol molecule in water solvent, where two paths for the solvation processes were examined by introducing different intermediate states. The solvation free energies of a water molecule associated with the two paths were obtained as -5.3 and -5.8 kcal/mol. Those of a methanol molecule were determined as -3.5 and -3.7 kcal/mol. These results of the ER simulations were also compared with those computed by a numerically exact approach. It was demonstrated that the present approach produces solvation free energies with accuracy comparable to thermodynamic integration (TI) simulations within a tenth of the computational time used for the TI simulations.
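
    For context, the TI benchmark mentioned at the end computes the same quantity through the standard coupling-parameter integral

        Δμ(TI) = ∫₀¹ ⟨∂U(λ)/∂λ⟩_λ dλ,

    where λ = 0 and λ = 1 denote the decoupled and fully coupled solute-solvent interaction and ⟨·⟩_λ is an ensemble average at fixed λ. TI therefore requires sampling at a series of intermediate λ values; avoiding that series of simulations is the source of the roughly tenfold speedup quoted above.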

  3. Enabling Co-Design of Multi-Layer Exascale Storage Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carothers, Christopher

    Growing demands for computing power in applications such as energy production, climate analysis, computational chemistry, and bioinformatics have propelled computing systems toward the exascale: systems with 10^18 floating-point operations per second. These systems, to be designed and constructed over the next decade, will create unprecedented challenges in component counts, power consumption, resource limitations, and system complexity. Data storage and access are an increasingly important and complex component in extreme-scale computing systems, and significant design work is needed to develop successful storage hardware and software architectures at exascale. Co-design of these systems will be necessary to find the best possible design points for exascale systems. The goal of this work has been to enable the exploration and co-design of exascale storage systems by providing a detailed, accurate, and highly parallel simulation of exascale storage and the surrounding environment. Specifically, this simulation has (1) portrayed realistic application checkpointing and analysis workloads, (2) captured the complexity, scale, and multilayer nature of exascale storage hardware and software, and (3) executed in a timeframe that enables "what if" exploration of design concepts. We developed models of the major hardware and software components in an exascale storage system, as well as the application I/O workloads that drive them. We used our simulation system to investigate critical questions in reliability and concurrency at exascale, helping guide the design of future exascale hardware and software architectures. Additionally, we provided this system to interested vendors and researchers so that others can explore the design space. We validated the capabilities of our simulation environment by configuring the simulation to represent the Argonne Leadership Computing Facility Blue Gene/Q system and comparing simulation results for application I/O patterns to the results of executions of these I/O kernels on the actual system.

  4. Insights from molecular dynamics simulations for computational protein design.

    PubMed

    Childers, Matthew Carter; Daggett, Valerie

    2017-02-01

    A grand challenge in the field of structural biology is to design and engineer proteins that exhibit targeted functions. Although much success on this front has been achieved, design success rates remain low, an ever-present reminder of our limited understanding of the relationship between amino acid sequences and the structures they adopt. In addition to experimental techniques and rational design strategies, computational methods have been employed to aid in the design and engineering of proteins. Molecular dynamics (MD) is one such method that simulates the motions of proteins according to classical dynamics. Here, we review how insights into protein dynamics derived from MD simulations have influenced the design of proteins. One of the greatest strengths of MD is its capacity to reveal information beyond what is available in the static structures deposited in the Protein Data Bank. In this regard, simulations can be used to directly guide protein design by providing atomistic details of the dynamic molecular interactions contributing to protein stability and function. MD simulations can also be used as a virtual screening tool to rank, select, identify, and assess potential designs. MD is uniquely poised to inform protein design efforts where the application requires realistic models of protein dynamics and atomic level descriptions of the relationship between dynamics and function. Here, we review cases where MD simulations were used to modulate protein stability and protein function by providing information regarding the conformation(s), conformational transitions, interactions, and dynamics that govern stability and function. In addition, we discuss cases where conformations from protein folding/unfolding simulations have been exploited for protein design, yielding novel outcomes that could not be obtained from static structures.

  5. Insights from molecular dynamics simulations for computational protein design

    PubMed Central

    Childers, Matthew Carter; Daggett, Valerie

    2017-01-01

    A grand challenge in the field of structural biology is to design and engineer proteins that exhibit targeted functions. Although much success on this front has been achieved, design success rates remain low, an ever-present reminder of our limited understanding of the relationship between amino acid sequences and the structures they adopt. In addition to experimental techniques and rational design strategies, computational methods have been employed to aid in the design and engineering of proteins. Molecular dynamics (MD) is one such method that simulates the motions of proteins according to classical dynamics. Here, we review how insights into protein dynamics derived from MD simulations have influenced the design of proteins. One of the greatest strengths of MD is its capacity to reveal information beyond what is available in the static structures deposited in the Protein Data Bank. In this regard, simulations can be used to directly guide protein design by providing atomistic details of the dynamic molecular interactions contributing to protein stability and function. MD simulations can also be used as a virtual screening tool to rank, select, identify, and assess potential designs. MD is uniquely poised to inform protein design efforts where the application requires realistic models of protein dynamics and atomic level descriptions of the relationship between dynamics and function. Here, we review cases where MD simulations were used to modulate protein stability and protein function by providing information regarding the conformation(s), conformational transitions, interactions, and dynamics that govern stability and function. In addition, we discuss cases where conformations from protein folding/unfolding simulations have been exploited for protein design, yielding novel outcomes that could not be obtained from static structures. PMID:28239489

  6. Parametric Study of Decay of Homogeneous Isotropic Turbulence Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Rumsey, Christopher L.; Rubinstein, Robert; Balakumar, Ponnampalam; Zang, Thomas A.

    2012-01-01

    Numerical simulations of decaying homogeneous isotropic turbulence are performed with both low-order and high-order spatial discretization schemes. The turbulent Mach and Reynolds numbers for the simulations are 0.2 and 250, respectively. For the low-order schemes we use either second-order central or third-order upwind biased differencing. For higher order approximations we apply weighted essentially non-oscillatory (WENO) schemes, both with linear and nonlinear weights. There are two objectives in this preliminary effort to investigate possible schemes for large eddy simulation (LES). One is to explore the capability of a widely used low-order computational fluid dynamics (CFD) code to perform LES computations. The other is to determine the effect of higher order accuracy (fifth, seventh, and ninth order) achieved with high-order upwind biased WENO-based schemes. Turbulence statistics, such as kinetic energy, dissipation, and skewness, along with the energy spectra from simulations of the decaying turbulence problem are used to assess and compare the various numerical schemes. In addition, results from the best performing schemes are compared with those from a spectral scheme. The effects of grid density, ranging from 32^3 to 192^3, on the computations are also examined. The fifth-order WENO-based scheme is found to be too dissipative, especially on the coarser grids. However, with the seventh-order and ninth-order WENO-based schemes we observe a significant improvement in accuracy relative to the lower order LES schemes, as revealed by the computed peak in the energy dissipation and by the energy spectrum.
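
    The main assessment statistic here, the shell-summed kinetic-energy spectrum E(k), can be computed from a periodic velocity field in a few lines (our sketch; normalization conventions vary).

        # Spherically binned kinetic-energy spectrum of a periodic n^3 field.
        import numpy as np

        def energy_spectrum(u, v, w):
            n = u.shape[0]
            ke = 0.5 * sum(np.abs(np.fft.fftn(c) / n**3) ** 2 for c in (u, v, w))
            k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
            kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
            shell = np.sqrt(kx**2 + ky**2 + kz**2).round().astype(int)
            E = np.bincount(shell.ravel(), weights=ke.ravel())
            return np.arange(E.size), E               # E(k) per wavenumber shell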

  7. Integrative metabolomics for characterizing unknown low-abundance metabolites by capillary electrophoresis-mass spectrometry with computer simulations.

    PubMed

    Lee, Richard; Ptolemy, Adam S; Niewczas, Liliana; Britz-McKibbin, Philip

    2007-01-15

    Characterization of unknown low-abundance metabolites in biological samples is one of the most significant challenges in metabolomic research. In this report, an integrative strategy based on capillary electrophoresis-electrospray ionization-ion trap mass spectrometry (CE-ESI-ITMS) with computer simulations is examined as a multiplexed approach for studying the selective nutrient uptake behavior of E. coli within a complex broth medium. On-line sample preconcentration with desalting by CE-ESI-ITMS was performed directly without off-line sample pretreatment in order to improve detector sensitivity over 50-fold for cationic metabolites with nanomolar detection limits. The migration behavior of charged metabolites was also modeled in CE as a qualitative tool to support MS characterization based on two fundamental analyte physicochemical properties, namely, absolute mobility (muo) and acid dissociation constant (pKa). Computer simulations using Simul 5.0 were used to better understand the dynamics of analyte electromigration, as well as to aid de novo identification of unknown nutrients. There was excellent agreement between computer-simulated and experimental electropherograms for several classes of cationic metabolites as reflected by their relative migration times, with an average error of <2.0%. Our studies revealed differential uptake of specific amino acids and nucleoside nutrients associated with distinct stages of bacterial growth. Herein, we demonstrate that CE can serve as an effective preconcentrator, desalter, and separator prior to ESI-MS, while providing additional qualitative information for unambiguous identification among isobaric and isomeric metabolites. The proposed strategy is particularly relevant for characterizing unknown yet biologically relevant metabolites that are not readily synthesized or commercially available.
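
    The two properties named here combine into the usual effective-mobility model: for a monoprotic base B migrating as its cation BH+, the effective mobility is the absolute mobility muo scaled by the protonated fraction fixed by pKa and buffer pH. A sketch with illustrative values, not the paper's analytes:

        # Effective electrophoretic mobility of a monoprotic base.
        def effective_mobility(mu_o, pKa, pH):
            alpha = 1.0 / (1.0 + 10.0 ** (pH - pKa))   # fraction present as BH+
            return mu_o * alpha

        # A hypothetical amine (pKa 9.0) keeps ~99% of its absolute mobility
        # at pH 7.0 but only ~9% at pH 10.0.
        print(effective_mobility(30e-9, 9.0, 7.0), effective_mobility(30e-9, 9.0, 10.0))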

  8. Analytical investigation of critical phenomena in MHD power generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1980-07-31

    Critical phenomena in the Arnold Engineering Development Center (AEDC) High Performance Demonstration Experiment (HPDE) and the US U-25 Experiment are analyzed. Also analyzed are the performance of a NASA-specified 500 MW(th) flow train and computations concerning critical issues for the scale-up of MHD generators. The HPDE is characterized by computational simulations of both the nominal conditions and the conditions during the experimental runs. The steady-state performance is discussed along with the Hall voltage overshoots during the start-up and shutdown transients. The results of simulations of the HPDE runs with codes from the Q3D and TRANSIENT code families are compared to the experimental results and are in good agreement with the experimental data. Additional critical phenomena analyzed in the AEDC/HPDE are the optimal load schedules, parametric variations, the parametric dependence of the electrode voltage drops, the boundary layer behavior, near-electrode phenomena with finite electrode segmentation, and current distribution in the end regions. The US U-25 experiment is characterized by computational simulations of the nominal operating conditions. The steady-state performance for the nominal design of the US U-25 experiment is analyzed, as is the dependence of performance on the mass flow rate. A NASA-specified 500 MW(th) MHD flow train is characterized for computer simulation and the electrical, transport, and thermodynamic properties at the inlet plane are analyzed. Issues for the scale-up of MHD power trains are discussed. The AEDC/HPDE performance is analyzed to compare these experimental results to scale-up rules.

  9. Computational Analysis of a Prototype Martian Rotorcraft Experiment

    NASA Technical Reports Server (NTRS)

    Corfeld, Kelly J.; Strawn, Roger C.; Long, Lyle N.

    2002-01-01

    This paper presents Reynolds-averaged Navier-Stokes calculations for a prototype Martian rotorcraft. The computations are intended for comparison with an ongoing Mars rotor hover test at NASA Ames Research Center. These computational simulations present a new and challenging problem, since rotors that operate on Mars will experience a unique low Reynolds number and high Mach number environment. Computed results for the 3-D rotor differ substantially from 2-D sectional computations in that the 3-D results exhibit a stall delay phenomenon caused by rotational forces along the blade span. Computational results have yet to be compared to experimental data, but computed performance predictions match the experimental design goals fairly well. In addition, the computed results provide a high level of detail in the rotor wake and blade surface aerodynamics. These details provide an important supplement to the expected experimental performance data.

  10. Appendices to the user's manual for a computer program for the emulation/simulation of a space station environmental control and life support system

    NASA Technical Reports Server (NTRS)

    Yanosy, James L.

    1988-01-01

    A user's manual for the emulation/simulation computer model was published previously. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware - SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from the test. In addition, slight changes were also made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is to create two system simulations using these models. The first system presented consists of one air and one water processing system, the second a potential Space Station air revitalization system.

  11. Modeling and additive manufacturing of bio-inspired composites with tunable fracture mechanical properties.

    PubMed

    Dimas, Leon S; Buehler, Markus J

    2014-07-07

    Flaws, imperfections and cracks are ubiquitous in material systems and are commonly the catalysts of catastrophic material failure. As stresses and strains tend to concentrate around cracks and imperfections, structures tend to fail far before large regions of material have ever been subjected to significant loading. Therefore, a major challenge in material design is to engineer systems that perform on par with pristine structures despite the presence of imperfections. In this work we integrate knowledge of biological systems with computational modeling and state-of-the-art additive manufacturing to synthesize advanced composites with tunable fracture mechanical properties. Supported by extensive mesoscale computer simulations, we demonstrate the design and manufacturing of composites that exhibit deformation mechanisms characteristic of pristine systems, featuring flaw-tolerant properties. We analyze the results by directly comparing strain fields for the synthesized composites, obtained through digital image correlation (DIC), and the computationally tested composites. Moreover, we plot Ashby diagrams for the range of simulated and experimental composites. Our findings show good agreement between simulation and experiment, confirming that the proposed mechanisms have a significant potential for vastly improving the fracture response of composite materials. We elucidate the role of stiffness ratio variations of composite constituents as an important feature in determining the composite properties. Moreover, our work validates the predictive ability of our models, presenting them as useful tools for guiding further material design. This work enables the tailored design and manufacturing of composites assembled from inferior building blocks that obtain optimal combinations of stiffness and toughness.

  12. Parallel tempering simulation of the three-dimensional Edwards-Anderson model with compact asynchronous multispin coding on GPU

    NASA Astrophysics Data System (ADS)

    Fang, Ye; Feng, Sheng; Tam, Ka-Ming; Yun, Zhifeng; Moreno, Juana; Ramanujam, J.; Jarrell, Mark

    2014-10-01

    Monte Carlo simulations of the Ising model play an important role in the field of computational statistical physics, and they have revealed many properties of the model over the past few decades. However, the effect of frustration due to random disorder, in particular the possible spin glass phase, remains a crucial but poorly understood problem. One of the obstacles in the Monte Carlo simulation of random frustrated systems is their long relaxation time making an efficient parallel implementation on state-of-the-art computation platforms highly desirable. The Graphics Processing Unit (GPU) is such a platform that provides an opportunity to significantly enhance the computational performance and thus gain new insight into this problem. In this paper, we present optimization and tuning approaches for the CUDA implementation of the spin glass simulation on GPUs. We discuss the integration of various design alternatives, such as GPU kernel construction with minimal communication, memory tiling, and look-up tables. We present a binary data format, Compact Asynchronous Multispin Coding (CAMSC), which provides an additional 28.4% speedup compared with the traditionally used Asynchronous Multispin Coding (AMSC). Our overall design sustains a performance of 33.5 ps per spin flip attempt for simulating the three-dimensional Edwards-Anderson model with parallel tempering, which significantly improves the performance over existing GPU implementations.
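
    Independent of the GPU details, the parallel-tempering building block is the replica-exchange test between neighboring temperature levels, shown schematically below (standard Metropolis form).

        # Replica-exchange (parallel tempering) acceptance test.
        import math, random

        def attempt_swap(beta1, E1, beta2, E2):
            """Accept or reject exchanging configurations between inverse
            temperatures beta1 and beta2 with current energies E1 and E2."""
            delta = (beta1 - beta2) * (E1 - E2)
            return delta >= 0 or random.random() < math.exp(delta)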

  13. Characterization of three-dimensional anisotropic heart valve tissue mechanical properties using inverse finite element analysis.

    PubMed

    Abbasi, Mostafa; Barakat, Mohammed S; Vahidkhah, Koohyar; Azadani, Ali N

    2016-09-01

    Computational modeling has an important role in the design and assessment of medical devices. In computational simulations, considering accurate constitutive models is of the utmost importance to capture the mechanical response of soft tissue and biomedical materials under physiological loading conditions. Lack of comprehensive three-dimensional constitutive models for soft tissue limits the effectiveness of computational modeling in research and development of medical devices. The aim of this study was to use inverse finite element (FE) analysis to determine three-dimensional mechanical properties of bovine pericardial leaflets of a surgical bioprosthesis under dynamic loading conditions. Using inverse parameter estimation, 3D anisotropic Fung model parameters were estimated for the leaflets. The FE simulations were validated using experimental in vitro measurements, and the impact of different constitutive material models on leaflet stress distribution was investigated. The results of this study showed that the anisotropic Fung model accurately simulated the leaflet deformation and coaptation during valve opening and closing. During systole, the peak stress reached 3.17 MPa at the leaflet boundary, while during diastole high stress regions were primarily observed in the commissures, with a peak stress of 1.17 MPa. In addition, the Rayleigh damping coefficient that was introduced to the FE simulations to model the viscous damping effects of the surrounding fluid was determined. Copyright © 2016 Elsevier Ltd. All rights reserved.
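
    For context, the anisotropic Fung-type strain-energy function referred to here is conventionally written as W = (c/2)(exp(Q) - 1), with Q a quadratic form in the Green-Lagrange strain components, Q = sum b_ijkl E_ij E_kl; the inverse FE procedure adjusts c and the b coefficients until the simulated leaflet deformation reproduces the measurements. This is the standard form of the model; the paper's specific parameterization may differ.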

  14. Kinetic particle simulation of discharge and wall erosion of a Hall thruster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Shinatora; Komurasaki, Kimiya; Arakawa, Yoshihiro

    2013-06-15

    The primary lifetime limiting factor of Hall thrusters is the wall erosion caused by ion-induced sputtering, which is dominated by the dielectric wall sheath and pre-sheath. However, so far only fluid or hybrid simulation models have been applied to wall erosion and lifetime studies, in which this non-quasi-neutral and non-equilibrium region cannot be treated directly. Thus, in this study, a 2D fully kinetic particle-in-cell model was presented for Hall thruster discharge and lifetime simulation. Because fully kinetic lifetime simulation had yet to be achieved due to the high computational cost, a semi-implicit field solver and the technique of mass ratio manipulation were employed to accelerate the computation. However, other artificial manipulations like permittivity or geometry scaling were not used, in order to avoid unrecoverable changes to the physics. Additionally, a new physics-recovering model for the mass ratio was presented for better preservation of electron mobility in the weakly magnetically confined plasma region. The validity of the presented model was examined by various parametric studies, and the thrust performance and wall erosion rate of a laboratory-model magnetic layer type Hall thruster were modeled for different operation conditions. The simulation results successfully reproduced the measurement results with typically less than 10% discrepancy without tuning any numerical parameters. It is also shown that the computational cost was reduced to the level at which fully kinetic Hall thruster lifetime simulation is feasible.

  15. Numerical Zooming Between a NPSS Engine System Simulation and a One-Dimensional High Compressor Analysis Code

    NASA Technical Reports Server (NTRS)

    Follen, Gregory; auBuchon, M.

    2000-01-01

    Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming from zero-dimensional to one-, two-, and three-dimensional component engine codes. In addition, the NPSS is refining the computing and communication technologies necessary to capture complex physical processes in a timely and cost-effective manner. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Of the different technology areas that contribute to the development of the NPSS environment, the subject of this paper is numerical zooming between an NPSS engine simulation and higher fidelity representations of the engine components (fan, compressor, burner, turbines, etc.). What follows is a description of successfully zooming one-dimensional (row-by-row) high-pressure compressor analysis results back to a zero-dimensional NPSS engine simulation and a discussion of the results illustrated using an advanced data visualization tool. This type of high fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the capability of the engine system simulation and increase the level of virtual testing conducted prior to committing the design to hardware.

  16. Quantum simulator review

    NASA Astrophysics Data System (ADS)

    Bednar, Earl; Drager, Steven L.

    2007-04-01

    The objective of quantum information processing is to harness the paradigm shift offered by quantum computing to solve classically hard, computationally challenging problems. Some of our computationally challenging problems of interest include: the capability for rapid image processing, rapid optimization of logistics, protecting information, secure distributed simulation, and massively parallel computation. Currently, one important problem with quantum information processing is that quantum computers are difficult to realize due to poor scalability and a high incidence of errors. Therefore, we have supported the development of Quantum eXpress and QuIDD Pro, two quantum computer simulators running on classical computers for the development and testing of new quantum algorithms and processes. This paper examines the different methods used by these two quantum computing simulators. It reviews both simulators, highlighting each simulator's background, interface, and special features. It also demonstrates the implementation of current quantum algorithms on each simulator. It concludes with summary comments on both simulators.

  17. Third-order accurate conservative method on unstructured meshes for gasdynamic simulations

    NASA Astrophysics Data System (ADS)

    Shirobokov, D. A.

    2017-04-01

    A third-order accurate finite-volume method on unstructured meshes is proposed for solving viscous gasdynamic problems. The method is described as applied to the advection equation. The accuracy of the method is verified by computing the evolution of a vortex on meshes of various degrees of detail with variously shaped cells. Additionally, unsteady flows around a cylinder and a symmetric airfoil are computed. The numerical results are presented in the form of plots and tables.

  18. User's guide to Monte Carlo methods for evaluating path integrals

    NASA Astrophysics Data System (ADS)

    Westbroek, Marise J. E.; King, Peter R.; Vvedensky, Dimitri D.; Dürr, Stephan

    2018-04-01

    We give an introduction to the calculation of path integrals on a lattice, with the quantum harmonic oscillator as an example. In addition to providing an explicit computational setup and corresponding pseudocode, we pay particular attention to the existence of autocorrelations and the calculation of reliable errors. The over-relaxation technique is presented as a way to counter strong autocorrelations. The simulation methods can be extended to compute observables for path integrals in other settings.
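
    As a concrete flavor of the workflow the guide describes, here is a minimal Metropolis sketch for the lattice harmonic oscillator in Python (our own parameter choices and units m = omega = 1; not the authors' pseudocode, and without the over-relaxation or error analysis they emphasize):

        import numpy as np

        # Euclidean lattice action: S = sum_i [ (x_{i+1}-x_i)^2/(2a) + a*x_i^2/2 ]
        rng = np.random.default_rng(0)
        N, a, delta = 100, 0.5, 1.0          # sites, lattice spacing, proposal width
        x = np.zeros(N)                      # cold start

        def dS(i, xnew):
            # change in action when site i is moved (periodic boundaries)
            xl, xr = x[(i - 1) % N], x[(i + 1) % N]
            old = ((x[i] - xl)**2 + (xr - x[i])**2) / (2 * a) + a * x[i]**2 / 2
            new = ((xnew - xl)**2 + (xr - xnew)**2) / (2 * a) + a * xnew**2 / 2
            return new - old

        obs = []
        for sweep in range(4000):
            for i in range(N):
                prop = x[i] + delta * rng.uniform(-1, 1)
                if rng.random() < np.exp(-dS(i, prop)):   # Metropolis accept
                    x[i] = prop
            if sweep >= 1000 and sweep % 10 == 0:         # thermalize, then thin
                obs.append(np.mean(x**2))
        print("<x^2> =", np.mean(obs))   # approaches 1/2 as a -> 0

    Successive configurations remain correlated despite the thinning, which is exactly why the authors stress autocorrelation analysis when quoting errors.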

  19. The Agent-Based Simulation (ABS) Verification, validation and Accreditation (VV&A) Study Phase 2 - Joint/DoD

    DTIC Science & Technology

    2008-09-15

    however, a variety of so-called variance-reduction techniques (VRTs) that have been developed, which reduce output variance with little or no...additional computational effort. VRTs typically achieve this via judicious and careful reuse of the basic underlying random numbers. Perhaps the best-known...typical simulation situation - change a weapons-system configuration and see what difference it makes). Key to making CRN and most other VRTs work

  20. Adaptation of the Carter-Tracy water influx calculation to groundwater flow simulation

    USGS Publications Warehouse

    Kipp, Kenneth L.

    1986-01-01

    The Carter-Tracy calculation for water influx is adapted to groundwater flow simulation, with additional clarifying explanation not present in the original papers. The Van Everdingen and Hurst aquifer-influence functions for radial flow from an outer aquifer region are employed. This technique, based on convolution of unit-step response functions, offers a simple but approximate method for embedding an inner region of groundwater flow simulation within a much larger aquifer region where flow can be treated in an approximate fashion. The use of aquifer-influence functions in groundwater flow modeling reduces the size of the computational grid, with a corresponding reduction in computer storage and execution time. The Carter-Tracy approximation to the convolution integral enables the aquifer-influence-function calculation to be made with an additional storage requirement of only twice the number of boundary nodes beyond that required for the inner-region simulation. It is a good approximation for constant flow rates but is poor for time-varying flow rates where the variation is large relative to the mean. A variety of outer aquifer region geometries, exterior boundary conditions, and flow rate versus potentiometric head relations can be used. The radial, transient-flow case presented is representative. An analytical approximation to the Van Everdingen and Hurst functions for dimensionless potentiometric head versus dimensionless time is given.
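
    To make the convolution of unit-step responses concrete, here is a toy superposition sum in Python (a sketch with our own simplified pressure differencing and an early-time infinite-aquifer response function; the paper's Carter-Tracy recursion approximates exactly this sum so the full pressure history need not be re-summed at every step):

        import numpy as np

        def influx_superposition(tD, p_bnd, p_init, B, QD):
            # We(tD_n) = B * sum_j dP_j * QD(tD_n - tD_{j-1})
            t_prev = np.concatenate(([0.0], tD[:-1]))            # tD_{j-1}
            dP = np.concatenate(([p_init], p_bnd[:-1])) - p_bnd  # step pressure drops
            return np.array([B * np.sum(dP[:k + 1] * QD(tD[k] - t_prev[:k + 1]))
                             for k in range(len(tD))])

        # Placeholder early-time response for an infinite radial aquifer.
        QD = lambda td: 2.0 * np.sqrt(np.maximum(td, 0.0) / np.pi)

        tD = np.linspace(0.1, 5.0, 50)
        p = 100.0 - 2.0 * tD               # hypothetical declining boundary pressure
        print(influx_superposition(tD, p, p_init=100.0, B=10.0, QD=QD)[-1])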

  1. Computational Nanotechnology at NASA Ames Research Center, 1996

    NASA Technical Reports Server (NTRS)

    Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) Simulate a hypothetical programmable molecular machine replicating itself and building other products. (2) Develop molecular manufacturing CAD (computer aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components. (3) Characterize nanotechnologically accessible materials of aerospace interest. Such materials may have excellent strength and thermal properties. (4) Collaborate with experimentalists. Current in-house activities include: (1) Development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes. Early work focuses on gears. (2) A design for high density atomically precise memory. (3) Design of nanotechnology systems based on biology. (4) Characterization of diamondoid mechanosynthetic pathways. (5) Studies of the Laplacian of the electronic charge density to understand molecular structure and reactivity. (6) Studies of entropic effects during self-assembly. (7) Characterization of properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) supercomputer division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996, held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at Caltech, Ralph Merkle at Xerox PARC, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.

  2. A new model to compute the desired steering torque for steer-by-wire vehicles and driving simulators

    NASA Astrophysics Data System (ADS)

    Fankem, Steve; Müller, Steffen

    2014-05-01

    This paper deals with the control of the hand wheel actuator in steer-by-wire (SbW) vehicles and driving simulators (DSs). A novel model for the computation of the desired steering torque is presented. The introduced steering torque computation does not only aim to generate a realistic steering feel, meaning that the driver should not miss the basic steering functionality of a modern conventional steering system, such as electric power steering (EPS) or hydraulic power steering (HPS), in any driving situation. In addition, the modular structure of the steering torque computation, combined with suitably selected tuning parameters, aims to offer a high degree of customisability of the steering feel and thus to provide each driver with his preferred steering feel in a very intuitive manner. The task and the tuning of each module are first described. Then, the steering torque computation is parameterised such that the steering feel of a series EPS system is reproduced. For this purpose, experiments are conducted in a hardware-in-the-loop environment, where a test EPS is mounted on a steering test bench coupled with a vehicle simulator, and parameter identification techniques are applied. Subsequently, how well the steering torque computation mimics the test EPS system is objectively evaluated with respect to criteria concerning the steering torque level and gradient, the feedback behaviour and the steering return ability. Finally, the intuitive tuning of the modular steering torque computation is demonstrated by deriving a sportier steering feel configuration.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preece, D.S.; Knudsen, S.D.

    The spherical element computer code DMC (Distinct Motion Code) used to model rock motion resulting from blasting has been enhanced to allow routine computer simulations of bench blasting. The enhancements required for bench blast simulation include: (1) modifying the gas flow portion of DMC, (2) adding a new explosive gas equation of state capability, (3) modifying the porosity calculation, and (4) accounting for blastwell spacing parallel to the face. A parametric study performed with DMC shows logical variation of the face velocity as burden, spacing, blastwell diameter and explosive type are varied. These additions represent a significant advance in the capability of DMC, which will not only aid in understanding the physics involved in blasting but will also become a blast design tool. 8 refs., 7 figs., 1 tab.

  4. Hardware accelerator for molecular dynamics: MDGRAPE-2

    NASA Astrophysics Data System (ADS)

    Susukita, Ryutaro; Ebisuzaki, Toshikazu; Elmegreen, Bruce G.; Furusawa, Hideaki; Kato, Kenya; Kawai, Atsushi; Kobayashi, Yoshinao; Koishi, Takahiro; McNiven, Geoffrey D.; Narumi, Tetsu; Yasuoka, Kenji

    2003-10-01

    We developed MDGRAPE-2, a hardware accelerator that calculates forces at high speed in molecular dynamics (MD) simulations. MDGRAPE-2 is connected to a PC or a workstation as an extension board. The sustained performance of one MDGRAPE-2 board is 15 Gflops, roughly equivalent to the peak performance of the fastest supercomputer processing element. One board is able to calculate all forces between 10 000 particles in 0.28 s (i.e. 310000 time steps per day). If 16 boards are connected to one computer and operated in parallel, this calculation speed becomes ˜10 times faster. In addition to MD, MDGRAPE-2 can be applied to gravitational N-body simulations, the vortex method and smoothed particle hydrodynamics in computational fluid dynamics.
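
    The kernel such boards accelerate is the all-pairs force sum; a plain O(N^2) NumPy reference for a Coulomb-style interaction (illustrative only, not the MDGRAPE-2 pipeline):

        import numpy as np

        def all_pairs_forces(pos, q):
            # F_i = sum_{j != i} q_i * q_j * (r_i - r_j) / |r_i - r_j|^3
            d = pos[:, None, :] - pos[None, :, :]      # (N, N, 3) displacements
            r2 = np.einsum('ijk,ijk->ij', d, d)        # squared distances
            np.fill_diagonal(r2, np.inf)               # exclude self-interaction
            w = q[:, None] * q[None, :] * r2 ** -1.5
            return np.einsum('ij,ijk->ik', w, d)

        rng = np.random.default_rng(0)
        pos, q = rng.random((1000, 3)), rng.random(1000)
        F = all_pairs_forces(pos, q)                   # the O(N^2) hot spot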

  5. Quench simulation studies of TAC jelly roll superferric dipole corrector elements for the SSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez, G.

    Using the computer program SSC-DTAC-T, which is a modification of the quench computer program SSC-RR to model Jelly Roll coils, the quench behavior of the dipole corrector element (TAC design with Jelly Roll winding) is studied. The simulations are made as a function of the length of the magnet, the copper-to-superconducting ratio, and the thickness of insulation surrounding the wires. The magnet is self-protected under all of the conditions considered, which implies that other corrector multipoles (quadrupole, sextupole, octupole, etc.) using the same conductor winding technique are also self-protected. A passive protection system should work for these elements. 9 refs., 18 figs., 1 tab.

  6. Benchmarking of neutron production of heavy-ion transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, I.; Ronningen, R. M.; Heilbronn, L.

    Document available in abstract form only, full text of document follows: Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required. (authors)

  7. Physics-based subsurface visualization of human tissue.

    PubMed

    Sharp, Richard; Adams, Jacob; Machiraju, Raghu; Lee, Robert; Crane, Robert

    2007-01-01

    In this paper, we present a framework for simulating light transport in three-dimensional tissue with inhomogeneous scattering properties. Our approach employs a computational model to simulate light scattering in tissue through the finite element solution of the diffusion equation. Although our model handles both visible and nonvisible wavelengths, we especially focus on the interaction of near infrared (NIR) light with tissue. Since most human tissue is permeable to NIR light, tools are being constructed to noninvasively image tumors and blood vasculature and to monitor blood oxygenation levels. We apply this model to a numerical phantom to visually reproduce the images generated by these real-world tools. Therefore, in addition to enabling inverse design of detector instruments, our computational tools produce physically accurate visualizations of subsurface structures.

  8. An efficient approach to the analysis of rail surface irregularities accounting for dynamic train-track interaction and inelastic deformations

    NASA Astrophysics Data System (ADS)

    Andersson, Robin; Torstensson, Peter T.; Kabo, Elena; Larsson, Fredrik

    2015-11-01

    A two-dimensional computational model for assessment of rolling contact fatigue induced by discrete rail surface irregularities, especially in the context of so-called squats, is presented. Dynamic excitation in a wide frequency range is considered in computationally efficient time-domain simulations of high-frequency dynamic vehicle-track interaction accounting for transient non-Hertzian wheel-rail contact. Results from dynamic simulations are mapped onto a finite element model to resolve the cyclic, elastoplastic stress response in the rail. Ratcheting under multiple wheel passages is quantified. In addition, low cycle fatigue impact is quantified using the Jiang-Sehitoglu fatigue parameter. The functionality of the model is demonstrated by numerical examples.

  9. Integrated solar energy system optimization

    NASA Astrophysics Data System (ADS)

    Young, S. K.

    1982-11-01

    The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.

  10. Development of axisymmetric lattice Boltzmann flux solver for complex multiphase flows

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Shu, Chang; Yang, Li-Ming; Yuan, Hai-Zhuan

    2018-05-01

    This paper presents an axisymmetric lattice Boltzmann flux solver (LBFS) for simulating axisymmetric multiphase flows. In the solver, the two-dimensional (2D) multiphase LBFS is applied to reconstruct macroscopic fluxes excluding axisymmetric effects. Source terms accounting for axisymmetric effects are introduced directly into the governing equations. Compared to the conventional axisymmetric multiphase lattice Boltzmann (LB) method, the present solver retains the kinetic feature for flux evaluation and avoids complex derivations of external forcing terms. In addition, the present solver saves considerable computational effort in comparison with three-dimensional (3D) computations. The capability of the proposed solver in simulating complex multiphase flows is demonstrated by studying a single bubble rising in a circular tube. The obtained results compare well with published data.

  11. GPU-accelerated computational tool for studying the effectiveness of asteroid disruption techniques

    NASA Astrophysics Data System (ADS)

    Zimmerman, Ben J.; Wie, Bong

    2016-10-01

    This paper presents the development of a new Graphics Processing Unit (GPU) accelerated computational tool for asteroid disruption techniques. Numerical simulations are completed using the high-order spectral difference (SD) method. Due to the compact nature of the SD method, it is well suited for implementation with the GPU architecture, hence solutions are generated at orders of magnitude faster than the Central Processing Unit (CPU) counterpart. A multiphase model integrated with the SD method is introduced, and several asteroid disruption simulations are conducted, including kinetic-energy impactors, multi-kinetic energy impactor systems, and nuclear options. Results illustrate the benefits of using multi-kinetic energy impactor systems when compared to a single impactor system. In addition, the effectiveness of nuclear options is observed.

  12. Analysis of the Space Shuttle main engine simulation

    NASA Technical Reports Server (NTRS)

    Deabreu-Garcia, J. Alex; Welch, John T.

    1993-01-01

    This is a final report on an analysis of the Space Shuttle Main Engine simulation program, a digital simulator code written in Fortran. The research was undertaken in ultimate support of future design studies of a shuttle life-extending Intelligent Control System (ICS). These studies are to be conducted by NASA Lewis Research Center. The primary purpose of the analysis was to define the means to achieve a faster running simulation, and to determine whether additional hardware would be necessary for speeding up simulations for the ICS project. In particular, the analysis was to consider the use of custom integrators based on the Matrix Stability Region Placement (MSRP) method. In addition to speed of execution, other qualities of the software were examined, among them the accuracy of computations, the usability of the simulation system, and the maintainability of the program and data files. Accuracy involves control of the truncation error of the methods and the roundoff error induced by floating point operations. It also involves the requirement that the user be fully aware of the model that the simulator is implementing.

  13. Quantum Computing Architectural Design

    NASA Astrophysics Data System (ADS)

    West, Jacob; Simms, Geoffrey; Gyure, Mark

    2006-03-01

    Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.

  14. Users' Manual for ILSS (Revised ILSLOC) : Simulation for Derogation Effects on the Instrument Landing System

    DOT National Transportation Integrated Search

    1976-12-01

    The manual presents the complete ILSLOC computer program package. In addition to including a thorough description of the program itself and a commented listing, the manual contains a brief description of the ILS system and antenna patterns. To illust...

  15. Progressive Fracture of Laminated Fiber-Reinforced Composite Stiffened Plate Under Pressure

    NASA Technical Reports Server (NTRS)

    Gotsis, Pascalis K.; Abdi, Frank; Chamis, Christos C.; Tsouros, Konstantinos

    2007-01-01

    S-Glass/epoxy laminated fiber-reinforced composite stiffened plate structure with laminate configuration (0/90)5 was simulated to investigate damage and fracture progression under uniform pressure. For comparison, a simple plate was examined in addition to the stiffened plate. An integrated computer code was used for the simulation. The damage initiation began with matrix failure in tension, continued with damage and/or fracture progression as a result of additional matrix failure and fiber fracture, and was followed by additional interply delamination. Fracture through the thickness began when the damage accumulation was 90%. After that stage, the cracks propagated rapidly and the structures collapsed. The collapse load for the simple plate is 21.57 MPa (3120 psi) and for the stiffened plate 25.24 MPa (3660 psi).

  16. Modeling approaches for the simulation of ultrasonic inspections of anisotropic composite structures in the CIVA software platform

    NASA Astrophysics Data System (ADS)

    Jezzine, Karim; Imperiale, Alexandre; Demaldent, Edouard; Le Bourdais, Florian; Calmon, Pierre; Dominguez, Nicolas

    2018-04-01

    Models for the simulation of ultrasonic inspections of flat and curved plate-like composite structures, as well as stiffeners, are available in the CIVA-COMPOSITE module released in 2016. A first modelling approach using a ray-based model is able to predict the ultrasonic propagation in an anisotropic effective medium obtained after having homogenized the composite laminate. Fast 3D computations can be performed on configurations featuring delaminations, flat bottom holes or inclusions for example. In addition, computations on ply waviness using this model will be available in CIVA 2017. Another approach is proposed in the CIVA-COMPOSITE module. It is based on the coupling of CIVA ray-based model and a finite difference scheme in time domain (FDTD) developed by AIRBUS. The ray model handles the ultrasonic propagation between the transducer and the FDTD computation zone that surrounds the composite part. In this way, the computational efficiency is preserved and the ultrasound scattering by the composite structure can be predicted. Alternatively, a high order finite element approach is currently developed at CEA but not yet integrated in CIVA. The advantages of this approach will be discussed and first simulation results on Carbon Fiber Reinforced Polymers (CFRP) will be shown. Finally, the application of these modelling tools to the construction of metamodels is discussed.

  17. Credibility Assessment of Deterministic Computational Models and Simulations for Space Biomedical Research and Operations

    NASA Technical Reports Server (NTRS)

    Mulugeta, Lealem; Walton, Marlei; Nelson, Emily; Myers, Jerry

    2015-01-01

    Human missions beyond low Earth orbit to destinations such as Mars and asteroids will expose astronauts to novel operational conditions that may pose health risks that are currently not well understood and perhaps unanticipated. In addition, there are limited clinical and research data to inform development and implementation of health risk countermeasures for these missions. Consequently, NASA's Digital Astronaut Project (DAP) is working to develop and implement computational models and simulations (M&S) to help predict and assess spaceflight health and performance risks, and enhance countermeasure development. In order to effectively accomplish these goals, the DAP evaluates its models and simulations via a rigorous verification, validation and credibility assessment process to ensure that the computational tools are sufficiently reliable both to inform research intended to mitigate potential risk and to guide countermeasure development. In doing so, DAP works closely with end-users, such as space life science researchers, to establish appropriate M&S credibility thresholds. We will present and demonstrate the process the DAP uses to vet computational M&S for space biomedical analysis using real M&S examples. We will also provide recommendations on how the larger space biomedical community can employ these concepts to enhance the credibility of their M&S codes.

  18. Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.

    PubMed

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2018-02-01

    The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators, and it estimates the EVSI accurately. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation, so it can be expensive in models with a large computational cost.
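
    To see where the nested-simulation burden comes from, consider a toy EVSI calculation in a conjugate-normal setting (our own illustration, not the paper's moment-matching estimator; conjugacy gives the inner posterior expectation in closed form, whereas a realistic model would need an inner simulation loop for every outer sample):

        import numpy as np

        rng = np.random.default_rng(1)
        mu0, s0 = 0.2, 1.0            # prior on incremental net benefit (INB)
        s, n = 2.0, 50                # sampling sd and size of the proposed study

        theta = rng.normal(mu0, s0, 100_000)           # outer loop: draw INB
        xbar = rng.normal(theta, s / np.sqrt(n))       # simulate study outcome
        w = s0**2 / (s0**2 + s**2 / n)                 # posterior-mean weight
        post_mean = w * xbar + (1 - w) * mu0           # E[INB | study data]

        evsi = np.mean(np.maximum(post_mean, 0)) - max(mu0, 0)
        print(f"EVSI ~ {evsi:.4f}")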

  19. Multiscale Mechanics of Articular Cartilage: Potentials and Challenges of Coupling Musculoskeletal, Joint, and Microscale Computational Models

    PubMed Central

    Halloran, J. P.; Sibole, S.; van Donkelaar, C. C.; van Turnhout, M. C.; Oomens, C. W. J.; Weiss, J. A.; Guilak, F.; Erdemir, A.

    2012-01-01

    Articular cartilage experiences significant mechanical loads during daily activities. Healthy cartilage provides the capacity for load bearing and regulates the mechanobiological processes for tissue development, maintenance, and repair. Experimental studies at multiple scales have provided a fundamental understanding of macroscopic mechanical function, evaluation of the micromechanical environment of chondrocytes, and the foundations for mechanobiological response. In addition, computational models of cartilage have offered a concise description of experimental data at many spatial levels under healthy and diseased conditions, and have served to generate hypotheses for the mechanical and biological function. Further, modeling and simulation provides a platform for predictive risk assessment, management of dysfunction, as well as a means to relate multiple spatial scales. Simulation-based investigation of cartilage comes with many challenges including both the computational burden and often insufficient availability of data for model development and validation. This review outlines recent modeling and simulation approaches to understand cartilage function from a mechanical systems perspective, and illustrates pathways to associate mechanics with biological function. Computational representations at single scales are provided from the body down to the microstructure, along with attempts to explore multiscale mechanisms of load sharing that dictate the mechanical environment of the cartilage and chondrocytes. PMID:22648577

  20. Modeling nonlinear ultrasound propagation in heterogeneous media with power law absorption using a k-space pseudospectral method.

    PubMed

    Treeby, Bradley E; Jaros, Jiri; Rendell, Alistair P; Cox, B T

    2012-06-01

    The simulation of nonlinear ultrasound propagation through tissue realistic media has a wide range of practical applications. However, this is a computationally difficult problem due to the large size of the computational domain compared to the acoustic wavelength. Here, the k-space pseudospectral method is used to reduce the number of grid points required per wavelength for accurate simulations. The model is based on coupled first-order acoustic equations valid for nonlinear wave propagation in heterogeneous media with power law absorption. These are derived from the equations of fluid mechanics and include a pressure-density relation that incorporates the effects of nonlinearity, power law absorption, and medium heterogeneities. The additional terms accounting for convective nonlinearity and power law absorption are expressed as spatial gradients making them efficient to numerically encode. The governing equations are then discretized using a k-space pseudospectral technique in which the spatial gradients are computed using the Fourier-collocation method. This increases the accuracy of the gradient calculation and thus relaxes the requirement for dense computational grids compared to conventional finite difference methods. The accuracy and utility of the developed model is demonstrated via several numerical experiments, including the 3D simulation of the beam pattern from a clinical ultrasound probe.
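
    The Fourier-collocation gradient that underpins the scheme's relaxed grid requirement can be shown in one dimension (a sketch with a periodic test field of our choosing):

        import numpy as np

        N, L = 64, 2 * np.pi
        x = np.arange(N) * L / N
        u = np.exp(np.sin(x))                        # smooth periodic field
        ik = 2j * np.pi * np.fft.fftfreq(N, d=L / N) # spectral derivative factor
        dudx = np.real(np.fft.ifft(ik * np.fft.fft(u)))
        print(np.max(np.abs(dudx - np.cos(x) * u)))  # ~machine precision

    Because the derivative is exact for band-limited data, far fewer grid points per wavelength are needed than with finite differences, which is the practical payoff claimed above.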

  1. Assessment of electron propagator methods for the simulation of vibrationally-resolved valence and core photoionization spectra

    PubMed Central

    Baiardi, A.; Paoloni, L.; Barone, V.; Zakrzewski, V.G.; Ortiz, J.V.

    2017-01-01

    The analysis of photoelectron spectra is usually facilitated by quantum mechanical simulations. Due to the recent improvement of experimental techniques, the resolution of experimental spectra is rapidly increasing, and the inclusion of vibrational effects is usually mandatory to obtain a reliable reproduction of the spectra. With the aim of defining a robust computational protocol, a general time-independent formulation to compute different kinds of vibrationally-resolved electronic spectra has been generalized to also support photoelectron spectroscopy. The electronic structure data underlying the simulation are computed using different electron propagator approaches. In addition to the more standard approaches, a new and robust implementation of the second-order self-energy approximation of the electron propagator based on a transition operator reference (TOEP2) is presented. To validate our implementation, a series of molecules has been used as test cases. The results of the simulations show that, for ultraviolet photoionization spectra, the more accurate non-diagonal approaches are needed to obtain a reliable reproduction of vertical ionization energies, but diagonal approaches are sufficient for energy gradients and pole strengths. For X-ray photoelectron spectroscopy, the TOEP2 approach, besides being more efficient, is also the most accurate in the reproduction of both vertical ionization energies and vibrationally-resolved bandshapes. PMID:28521087

  2. RotCFD Analysis of the AH-56 Cheyenne Hub Drag

    NASA Technical Reports Server (NTRS)

    Solis, Eduardo; Bass, Tal A.; Keith, Matthew D.; Oppenheim, Rebecca T.; Runyon, Bryan T.; Veras-Alba, Belen

    2016-01-01

    In 2016, the U.S. Army Aviation Development Directorate (ADD) conducted tests in the U.S. Army 7- by 10- Foot Wind Tunnel at NASA Ames Research Center of a nonrotating 2/5th-scale AH-56 rotor hub. The objective of the tests was to determine how removing the mechanical control gyro affected the drag. Data for the lift, drag, and pitching moment were recorded for the 4-bladed rotor hub in various hardware configurations, azimuth angles, and angles of attack. Numerical simulations of a selection of the configurations and orientations were then performed, and the results were compared with the test data. To generate the simulation results, the hardware configurations were modeled using Creo and Rhinoceros 5, three-dimensional surface modeling computer-aided design (CAD) programs. The CAD model was imported into Rotorcraft Computational Fluid Dynamics (RotCFD), a computational fluid dynamics (CFD) tool used for analyzing rotor flow fields. RotCFD simulation results were compared with the experimental results of three hardware configurations at two azimuth angles, two angles of attack, and with and without wind tunnel walls. The results help validate RotCFD as a tool for analyzing low-drag rotor hub designs for advanced high-speed rotorcraft concepts. Future work will involve simulating additional hub geometries to reduce drag or tailor to other desired performance levels.

  3. A membrane computing simulator of trans-hierarchical antibiotic resistance evolution dynamics in nested ecological compartments (ARES).

    PubMed

    Campos, Marcelino; Llorens, Carlos; Sempere, José M; Futami, Ricardo; Rodriguez, Irene; Carrasco, Purificación; Capilla, Rafael; Latorre, Amparo; Coque, Teresa M; Moya, Andres; Baquero, Fernando

    2015-08-05

    Antibiotic resistance is a major biomedical problem for which public health systems demand solutions to understand the dynamics and epidemiological risk of resistant bacteria in anthropogenically-altered environments. The implementation of computable models with reciprocity within and between levels of biological organization (i.e. essential nesting) is central for studying antibiotic resistances. Antibiotic resistance is not just the result of antibiotic-driven selection but, more properly, the consequence of a complex hierarchy of processes shaping the ecology and evolution of the distinct subcellular, cellular and supra-cellular vehicles involved in the dissemination of resistance genes. Such a complex background motivated us to explore the P-system standards of membrane computing, an innovative natural computing formalism that abstracts the notion of movement across membranes, to simulate antibiotic resistance evolution processes across nested levels of micro- and macro-environmental organization in a given ecosystem. In this article, we introduce ARES (Antibiotic Resistance Evolution Simulator), a software device that simulates P-system model scenarios with five types of nested computing membranes oriented to emulate a hierarchy of eco-biological compartments, i.e. (a) peripheral ecosystem; (b) local environment; (c) reservoir of supplies; (d) animal host; and (e) the host's associated bacterial organisms (microbiome). Computational objects emulating molecular entities such as plasmids, antibiotic resistance genes, antimicrobials, and/or other substances can be introduced into this framework and may interact and evolve together with the membranes, according to a set of pre-established rules and specifications. ARES has been implemented as an online server and offers additional tools for storage, model editing and downstream analysis. The stochastic nature of the P-system model implemented in ARES explicitly links within- and between-host dynamics into a simulation, with feedback reciprocity among the different units of selection influenced by antibiotic exposure at various ecological levels. ARES offers the possibility of modeling predictive multilevel scenarios of antibiotic resistance evolution that can be interrogated, edited and re-simulated, if necessary with different parameters, until a correct model description of the process in the real world is convincingly approached. ARES can be accessed at http://gydb.org/ares.

  4. Transient Three-Dimensional Side Load Analysis of a Film Cooled Nozzle

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Guidos, Mike

    2008-01-01

    Transient three-dimensional numerical investigations of the side load physics for an engine encompassing a film-cooled nozzle extension and a regeneratively cooled thrust chamber were performed. The objectives of this study were to identify the three-dimensional side load physics and to compute the associated aerodynamic side load using an anchored computational methodology. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, and a transient inlet history based on an engine system simulation. Ultimately, the computational results will be provided to the nozzle designers for estimating the effect of the peak side load on the nozzle structure. Computations simulating engine startup at ambient pressures corresponding to sea level and three high altitudes were performed. In addition, computations for both engine startup and shutdown transients were performed for a stub nozzle operating at sea level. For the engine with the full nozzle extension, the computational results show that during startup at sea level the peak side load occurs when the lambda shock steps into the turbine exhaust flow, while the side load caused by the transition from free-shock separation to restricted-shock separation comes second; the side loads decrease rapidly and progressively as the ambient pressure decreases. For the stub nozzle operating at sea level, the computed side loads during both startup and shutdown become very small due to the much reduced flow area.

  5. Using mixed reality, force feedback and tactile augmentation to improve the realism of medical simulation.

    PubMed

    Fisher, J Brian; Porter, Susan M

    2002-01-01

    This paper describes an application of a display approach which uses chromakey techniques to composite real and computer-generated images allowing a user to see his hands and medical instruments collocated with the display of virtual objects during a medical training simulation. Haptic feedback is provided through the use of a PHANTOM force feedback device in addition to tactile augmentation, which allows the user to touch virtual objects by introducing corresponding real objects in the workspace. A simplified catheter introducer insertion simulation was developed to demonstrate the capabilities of this approach.

  6. Developing 3D morphologies for simulating building energy demand in urban microclimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    New, Joshua Ryan; Omitaomu, Olufemi A.; Allen, Melissa R.

    In order to simulate the effect of interactions between urban morphology and microclimate on demand for heating and cooling in buildings, we utilize source elevation data to create 3D building geometries at the neighborhood and city scale. Additionally, we use urban morphology concepts to design virtual morphologies for simulation scenarios in an undeveloped land parcel. Using these morphologies, we compute building-energy parameters such as the density for each surface and the frontal area index for each of the buildings to be able to effectively model the microclimate for the urban area.
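
    For concreteness, the frontal area index referred to above is simply building area projected toward the wind per unit plan area; a hypothetical helper (names and values are ours):

        def frontal_area_index(frontal_widths, heights, plan_area):
            # Sum of wind-facing building faces, per unit plan area.
            return sum(w * h for w, h in zip(frontal_widths, heights)) / plan_area

        # e.g. three buildings on a 10,000 m^2 parcel
        print(frontal_area_index([30, 30, 30], [10, 12, 8], 1e4))  # -> 0.09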

  7. Analysis of Test Case Computations and Experiments for the First Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Heeg, Jennifer; Wieseman, Carol D.; Chwalowski, Pawel

    2013-01-01

    This paper compares computational and experimental data from the Aeroelastic Prediction Workshop (AePW) held in April 2012. This workshop was designed as a series of technical interchange meetings to assess the state of the art of computational methods for predicting unsteady flowfields and static and dynamic aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques to simulate aeroelastic problems and to identify computational and experimental areas needing additional research and development. Three subject configurations were chosen from existing wind-tunnel data sets where there is pertinent experimental data available for comparison. Participant researchers analyzed one or more of the subject configurations, and results from all of these computations were compared at the workshop.

  8. Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS

    NASA Astrophysics Data System (ADS)

    Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L.; Bolch, Wesley E.

    2017-06-01

    A new function to treat tetrahedral-mesh geometry was implemented in the particle and heavy ion transport code systems. To accelerate the computational speed in the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with a varying number of meshes, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also an improvement in computational efficiency for radiation transport. Due to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented by far fewer tetrahedral meshes than in the voxelized representation. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, about 4 times, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel mesh geometry.
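
    The "decomposition map" can be pictured as a coarse uniform grid over the container box, with each cell listing the tetrahedra whose bounding boxes overlap it, so that a transport step tests only a few candidates (our illustration of the general technique, not the PHITS implementation):

        import numpy as np

        def build_decomposition_map(verts, tets, n=32):
            # Bucket tetrahedra by the coarse cells their bounding boxes touch.
            lo, hi = verts.min(0), verts.max(0)
            cell = (hi - lo) / n
            buckets = {}
            for t, tet in enumerate(tets):
                p = verts[tet]
                i0 = np.clip(((p.min(0) - lo) / cell).astype(int), 0, n - 1)
                i1 = np.clip(((p.max(0) - lo) / cell).astype(int), 0, n - 1)
                for i in range(i0[0], i1[0] + 1):
                    for j in range(i0[1], i1[1] + 1):
                        for k in range(i0[2], i1[2] + 1):
                            buckets.setdefault((i, j, k), []).append(t)
            return lo, cell, buckets

        def candidate_tets(point, lo, cell, buckets):
            # Only these few tetrahedra need an exact point-in-tet test.
            return buckets.get(tuple(((point - lo) // cell).astype(int)), [])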

  10. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1994-01-01

    Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (Aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high-altitude rocket plumes, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp nozzle (SERN) associated with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock induced combustion phenomena, high enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.

  11. Concept verification of three dimensional free motion simulator for space robot

    NASA Technical Reports Server (NTRS)

    Okamoto, Osamu; Nakaya, Teruomi; Pokines, Brett

    1994-01-01

    In the development of automatic assembly technologies for space structures, it is indispensable to investigate and simulate the movements of robot satellites involved in mission operations. On the ground, such investigation and simulation can be effectively realized with a free motion simulator. Various types of ground systems for simulating free motion have been proposed and utilized. Some of these methods are a neutral buoyancy system, an air or magnetic suspension system, a passive suspension balance system, and a free flying aircraft or drop tower system. In addition, systems can be simulated by computers using an analytical model. Each free motion simulation method has limitations and well known problems, specifically, disturbance by water viscosity, a limited number of degrees of freedom, complex dynamics induced by the attachment of the simulation system, short experiment time, and the lack of high speed supercomputer simulation systems, respectively. The basic idea presented here is to realize 3-dimensional free motion. This is achieved by combining a spherical air bearing, a cylindrical air bearing, and a flat air bearing. A conventional air bearing system has difficulty realizing free vertical motion suspension. The idea of free vertical suspension is that a cylindrical air bearing and a counterbalance weight realize vertical free motion. This paper presents the design concept, configuration, and basic performance characteristics of an innovative free motion simulator. A prototype simulator verifies the feasibility of 3-dimensional free motion simulation.

  12. A breakthrough for experiencing and understanding simulated physics

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1988-01-01

    The use of computer simulation in physics research is discussed, focusing on improvements to graphic workstations. Simulation capabilities and applications of enhanced visualization tools are outlined. The elements of an ideal computer simulation are presented and the potential for improving various simulation elements is examined. The interface between the human and the computer and simulation models are considered. Recommendations are made for changes in computer simulation practices and applications of simulation technology in education.

  13. Mobility analysis, simulation, and scale model testing for the design of wheeled planetary rovers

    NASA Technical Reports Server (NTRS)

    Lindemann, Randel A.; Eisen, Howard J.

    1993-01-01

    The use of computer-based techniques to model and simulate wheeled rovers on rough natural terrains is considered. Physical models of a prototype vehicle can be used to test the correlation of the simulations in scaled testing. The computer approaches include a quasi-static, planar (two-dimensional) analysis and design tool based on the traction necessary for the vehicle to have imminent mobility. The computer program modeled a six-by-six wheel drive vehicle with an original kinematic configuration, called the Rocker Bogie. The Rocker Bogie was optimized with respect to its articulation parameters using the quasi-static software prior to fabrication of a prototype. In another approach, the dynamics of the Rocker Bogie vehicle in 3-D space was modeled on an engineering workstation using commercial software. The model included the complex and nonlinear interaction of the tire and terrain. The investigation yielded numerical and graphical results of the rover traversing rough terrain on the Earth, the Moon, and Mars. In addition, animations of the rover excursions were generated. A prototype vehicle was then used in a series of testbed and field experiments, and correspondence was established between the computer models and the physical model. The results indicated the utility of the quasi-static tool for configurational design, as well as the predictive ability of the 3-D simulation to model the dynamic behavior of the vehicle over short traverses.

  14. A geostatistical extreme-value framework for fast simulation of natural hazard events

    PubMed Central

    Stephenson, David B.

    2016-01-01

    We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. PMID:27279768
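
    The generalized Pareto building block is easy to exercise with SciPy (parameter values here are invented for illustration, and the return-level formula assumes lam exceedances of the threshold per year):

        import numpy as np
        from scipy import stats

        xi, sigma, u = 0.1, 4.0, 25.0   # shape, scale, threshold (e.g. gusts, m/s)
        gusts = u + stats.genpareto.rvs(c=xi, scale=sigma, size=10_000,
                                        random_state=2)

        lam = 10.0                       # assumed exceedances per year
        m = 50 * lam                     # exceedances in a 50-year return period
        r50 = u + sigma / xi * (m ** xi - 1)   # 50-year return level
        print(gusts.mean(), r50)

    In the paper's framework these marginals are then tied together spatially by the Student's t-process, which this sketch does not attempt.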

  15. Experimental Study and CFD Simulation of a 2D Circulating Fluidized Bed

    NASA Astrophysics Data System (ADS)

    Kallio, S.; Guldén, M.; Hermanson, A.

    Computational fluid dynamics (CFD) is gaining popularity in fluidized bed modeling. For model validation, there is a need for detailed measurements under well-defined conditions. In the present study, experiments were carried out in a 40 cm wide and 3 m high 2D circulating fluidized bed. Two experiments were simulated by means of the Eulerian multiphase models of the Fluent CFD software. The vertical pressure and solids volume fraction profiles and the solids circulation rate obtained from the simulation were compared to the experimental results. In addition, lateral volume fraction profiles could be compared. The simulated CFB flow patterns and the profiles obtained from the simulations were in general in good agreement with the experimental results.

  16. Microscopic simulation of xenon-based optical TPCs in the presence of molecular additives

    NASA Astrophysics Data System (ADS)

    Azevedo, C. D. R.; González-Díaz, D.; Biagi, S. F.; Oliveira, C. A. B.; Henriques, C. A. O.; Escada, J.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; Gutiérrez, R. M.; Hauptman, J.; Hernandez, A. I.; Morata, J. A. Hernando; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; Lopez-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Vidal, J. Muñoz; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Serra, L.; Shuman, D.; Simón, A.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.

    2018-01-01

    We introduce a simulation framework for the transport of high- and low-energy electrons in xenon-based optical time projection chambers (OTPCs). The simulation relies on elementary cross sections (electron-atom and electron-molecule) and incorporates, in order to compute the gas scintillation, the reaction/quenching rates (atom-atom and atom-molecule) of the first 41 excited states of xenon and the relevant associated excimers, together with their radiative cascade. The results compare favorably with observations made in pure xenon and its mixtures with CO2 and CF4 in a range of pressures from 0.1 to 10 bar. This work sheds some light on the elementary processes responsible for the primary and secondary xenon-scintillation mechanisms in the presence of additives, which are of interest to OTPC technology.

  17. Adaptive thinking & leadership simulation game training for special forces officers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raybourn, Elaine Marie; Mendini, Kip; Heneghan, Jerry

    Complex problem solving approaches and novel strategies employed by the military at the squad, team, and commander level are often best learned experientially. Since live action exercises can be costly, advances in simulation game training technology offer exciting ways to enhance current training. Computer games provide an environment for active, critical learning. Games open up possibilities for simultaneous learning on multiple levels; players may learn from contextual information embedded in the dynamics of the game, the organic process generated by the game, and through the risks, benefits, costs, outcomes, and rewards of alternative strategies that result from decision making. In the present paper we discuss a multiplayer computer game simulation created for the Adaptive Thinking & Leadership (ATL) Program to train Special Forces Team Leaders. The ATL training simulation consists of a scripted single-player and an immersive multiplayer environment for classroom use which leverages immersive computer game technology. We define adaptive thinking as consisting of competencies such as negotiation and consensus building skills, the ability to communicate effectively, analyze ambiguous situations, be self-aware, think innovatively, and critically use effective problem solving skills. Each of these competencies is an essential element of leader development training for the U.S. Army Special Forces. The ATL simulation is used to augment experiential learning in the curriculum for the U.S. Army JFK Special Warfare Center & School (SWCS) course in Adaptive Thinking & Leadership. The school is incorporating the ATL simulation game into two additional training pipelines (PSYOPS and Civil Affairs Qualification Courses) that are also concerned with developing cultural awareness, interpersonal communication adaptability, and rapport-building skills. In the present paper, we discuss the design, development, and deployment of the training simulation, and emphasize how the multiplayer simulation game is successfully used in the Special Forces Officer training program.

  18. An algorithm for fast elastic wave simulation using a vectorized finite difference operator

    NASA Astrophysics Data System (ADS)

    Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna

    2018-07-01

    Modern geophysical imaging techniques exploit the full wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, a big derivative stencil, and a huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we introduce a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme, thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (the Marmousi model) by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and nearly 100 for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step, and it depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
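
    The flavor of the vectorized operator, updating every node at once with array shifts instead of nested loops, can be shown with a one-dimensional fourth-order staggered stencil in Python (our own illustration; the authors' 'FDwave' package is the full scheme):

        import numpy as np

        c1, c2 = 9.0 / 8.0, -1.0 / 24.0   # 4th-order staggered-grid coefficients

        def Dx(f, dx):
            # Derivative at half-node positions i+1/2, all interior nodes at once.
            d = np.zeros_like(f)
            d[1:-2] = (c1 * (f[2:-1] - f[1:-2]) + c2 * (f[3:] - f[:-3])) / dx
            return d

        x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
        dx = x[1] - x[0]
        err = Dx(np.sin(x), dx)[1:-2] - np.cos(x[1:-2] + dx / 2)
        print(np.abs(err).max())          # small: the stencil is 4th order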

  19. The SCEC Broadband Platform: A Collaborative Open-Source Software Package for Strong Ground Motion Simulation and Validation

    NASA Astrophysics Data System (ADS)

    Silva, F.; Maechling, P. J.; Goulet, C.; Somerville, P.; Jordan, T. H.

    2013-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving SCEC researchers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform is open-source scientific software that can generate broadband (0-100Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Broadband Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms of a historical earthquake for which observed strong ground motion data is available. Also in validation mode, the Broadband Platform calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. During the past year, we have modified the software to enable the addition of a large number of historical events, and we are now adding validation simulation inputs and observational data for 23 historical events covering the Eastern and Western United States, Japan, Taiwan, Turkey, and Italy. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. By establishing an interface between scientific modules with a common set of input and output files, the Broadband Platform facilitates the addition of new scientific methods, which are written by earth scientists in a number of languages such as C, C++, Fortran, and Python. The Broadband Platform's modular design also supports the reuse of existing software modules as building blocks to create new scientific methods. Additionally, the Platform implements a wrapper around each scientific module, converting input and output files to and from the specific formats required (or produced) by individual scientific codes. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes the addition of 3 new simulation methods and several new data products, such as map and distance-based goodness of fit plots. Finally, as the number and complexity of scenarios simulated using the Broadband Platform increase, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.

  20. The 3-D numerical simulation research of vacuum injector for linear induction accelerator

    NASA Astrophysics Data System (ADS)

    Liu, Dagang; Xie, Mengjun; Tang, Xinbing; Liao, Shuqing

    2017-01-01

    A simulation method for the voltage in-feed and electron injection of a vacuum injector is given, and the simulated voltage and current are verified. The numerical simulation of the solenoid's magnetic field is implemented, and a comparative analysis is conducted between the simulation results and experimental results. A semi-implicit difference algorithm is adopted to suppress numerical noise, and a parallel acceleration algorithm is used to increase the computation speed. The RMS emittance calculation method based on the beam envelope equations is analyzed. In addition, the simulated RMS emittance results are compared with the experimental data. Finally, the influences of the ferromagnetic rings on the radial and axial magnetic fields of the solenoid, as well as on the beam emittance, are studied.
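
    For reference, the standard statistical RMS emittance commonly compared against beam-envelope results is (textbook definition in terms of position x and slope x'; the paper's specific formulation is not reproduced here):

        \varepsilon_{\mathrm{rms}} = \sqrt{\langle x^2\rangle\,\langle x'^2\rangle - \langle x\,x'\rangle^2}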

  1. Configuration Management File Manager Developed for Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.

    1997-01-01

    One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager, integrated with the Common Desktop Environment (CDE) window management system, provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, the CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with a simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.

  2. Symplectic molecular dynamics simulations on specially designed parallel computers.

    PubMed

    Borstnik, Urban; Janezic, Dusanka

    2005-01-01

    We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which treats high-frequency vibrational motion analytically and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. Together, these approaches require fewer and cheaper time steps and so enable fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations, up to 16-fold over a single PC processor.
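
    The splitting idea can be sketched generically (a toy Strang-splitting step in Python; this is not the actual SISM normal-mode treatment, and the force and flow functions are placeholders):

        import numpy as np

        def split_step(q, p, dt, harmonic_flow, slow_force):
            # Half kick from the slow (numerically treated) forces,
            # exact analytic flow of the stiff harmonic part, half kick.
            p = p + 0.5 * dt * slow_force(q)
            q, p = harmonic_flow(q, p, dt)
            p = p + 0.5 * dt * slow_force(q)
            return q, p

        def ho_flow(q, p, dt, k=1.0, m=1.0):
            # Exact flow of a harmonic oscillator (the analytic part).
            w = np.sqrt(k / m)
            c, s = np.cos(w * dt), np.sin(w * dt)
            return c * q + (s / (m * w)) * p, -m * w * s * q + c * p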

  3. Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.

    NASA Astrophysics Data System (ADS)

    Elliott, William Dewey

    1995-01-01

    A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime as O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime as less than O(N^2). One algorithm, which scales as O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst-case error bounds. The worst-case error bounds for MDMA are also derived in this work; these bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions is converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region, which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over several simulation timesteps. One MD application described here highlights the utility of including long-range contributions to the Lennard-Jones potential in constant pressure simulations. Another application shows the time dependence of long-range forces in a multiple time step MD simulation.
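
    For contrast with the O(N) multipole approach, the O(N^2) direct evaluation that MDMA-class algorithms replace looks like this (a minimal Python sketch; the parameter arrays are illustrative):

        import numpy as np

        def direct_sum(pos, q, c6):
            # All-pairs 1/r (Coulomb) and 1/r^6 (dispersion) energies;
            # the cost grows quadratically with the number of atoms.
            u_coul, u_disp = 0.0, 0.0
            n = len(pos)
            for i in range(n):
                for j in range(i + 1, n):
                    r = np.linalg.norm(pos[i] - pos[j])
                    u_coul += q[i] * q[j] / r
                    u_disp -= c6[i] * c6[j] / r**6
            return u_coul, u_disp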

  4. Properties of Organic Liquids when Simulated with Long-Range Lennard-Jones Interactions.

    PubMed

    Fischer, Nina M; van Maaren, Paul J; Ditz, Jonas C; Yildirim, Ahmet; van der Spoel, David

    2015-07-14

    In order to increase the accuracy of classical computer simulations, existing methodologies may need to be adapted. Hitherto, most force fields employ a truncated potential function to model van der Waals interactions, sometimes augmented with an analytical correction. Although such corrections are accurate for homogeneous systems with a long cutoff, they should not be used in inherently inhomogeneous systems such as biomolecular and interface systems. For such cases, a variant of the particle mesh Ewald algorithm (Lennard-Jones PME) was already proposed 20 years ago (Essmann et al. J. Chem. Phys. 1995, 103, 8577-8593), but it was implemented only recently (Wennberg et al. J. Chem. Theory Comput. 2013, 9, 3527-3537) in a major simulation code (GROMACS). The availability of this method allows surface tensions of liquids as well as bulk properties to be established, such as density and enthalpy of vaporization, without approximations due to truncation. Here, we report on simulations of ≈150 liquids (taken from a force field benchmark: Caleman et al. J. Chem. Theory Comput. 2012, 8, 61-74) using three different force fields and compare simulations with and without explicit long-range van der Waals interactions. We find that the density and enthalpy of vaporization increase for most liquids using the generalized Amber force field (GAFF, Wang et al. J. Comput. Chem. 2004, 25, 1157-1174) and the Charmm generalized force field (CGenFF, Vanommeslaeghe et al. J. Comput. Chem. 2010, 31, 671-690) but less so for OPLS/AA (Jorgensen and Tirado-Rives, Proc. Natl. Acad. Sci. U.S.A. 2005, 102, 6665-6670), which was parametrized with an analytical correction to the van der Waals potential. The surface tension increases by ≈10⁻² N/m for all force fields. These results suggest that van der Waals attractions in force fields are too strong, in particular for the GAFF and CGenFF. In addition to the simulation results, we introduce a new version of a web server, http://virtualchemistry.org, aimed at facilitating sharing and reuse of input files for molecular simulations.

  5. User's manual for ILSLOC : simulation for derogation effects on the localizer portion of the Instrument Landing System

    DOT National Transportation Integrated Search

    1973-08-01

    The manual presents the complete ILSLOC computer program package. In addition to including a thorough description of the program itself and a commented listing, the manual contains a brief description of the ILS system and antenna patterns. To illust...

  6. Demonstration of the Water Erosion Prediction Project (WEPP) internet interface and services

    USDA-ARS?s Scientific Manuscript database

    The Water Erosion Prediction Project (WEPP) model is a process-based FORTRAN computer simulation program for prediction of runoff and soil erosion by water at hillslope profile, field, and small watershed scales. To effectively run the WEPP model and interpret results additional software has been de...

  7. Using Virtual Reality with and without Gaming Attributes for Academic Achievement

    ERIC Educational Resources Information Center

    Vogel, Jennifer J.; Greenwood-Ericksen, Adams; Cannon-Bowers, Jan; Bowers, Clint A.

    2006-01-01

    A subcategory of computer-assisted instruction (CAI), games have additional attributes such as motivation, reward, interactivity, score, and challenge. This study used a quasi-experimental design to determine if previous findings generalize to non-simulation-based game designs. Researchers observed significant improvement in the overall population…

  8. Numerical simulation of steady supersonic flow. [spatial marching

    NASA Technical Reports Server (NTRS)

    Schiff, L. B.; Steger, J. L.

    1981-01-01

    A noniterative, implicit, space-marching, finite-difference algorithm was developed for the steady thin-layer Navier-Stokes equations in conservation-law form. The numerical algorithm is applicable to steady supersonic viscous flow over bodies of arbitrary shape. In addition, the same code can be used to compute supersonic inviscid flow or three-dimensional boundary layers. Computed results from two-dimensional and three-dimensional versions of the numerical algorithm are in good agreement with those obtained from more costly time-marching techniques.

  9. Rapid Human-Computer Interactive Conceptual Design of Mobile and Manipulative Robot Systems

    DTIC Science & Technology

    2015-05-19

    algorithm based on Age-Fitness Pareto Optimization (AFPO) ([9]) with an additional user preference objective and a neural network-based user model, we...greater than 40, which is about 5 times further than any robot traveled in our experiments. ... The algorithm uses a client-server computational architecture. The client here is an interactive program which takes a pair of controllers as input, simulates two copies of the robot with

  10. Obtaining valid geologic models from 3-D resistivity inversion of magnetotelluric data at Pahute Mesa, Nevada

    USGS Publications Warehouse

    Rodriguez, Brian D.; Sweetkind, Donald S.

    2015-01-01

    The 3-D inversion was generally able to reproduce the gross resistivity structure of the “known” model, but the simulated conductive volcanic composite unit horizons were often too shallow when compared to the “known” model. Additionally, the chosen computation parameters such as station spacing appear to have resulted in computational artifacts that are difficult to interpret but could potentially be removed with further refinements of the 3-D resistivity inversion modeling technique.

  11. Space Station communications and tracking systems modeling and RF link simulation

    NASA Technical Reports Server (NTRS)

    Tsang, Chit-Sang; Chie, Chak M.; Lindsey, William C.

    1986-01-01

    This final report describes in detail the effort spent on Space Station Communications and Tracking System Modeling and RF Link Simulation. The effort is divided into three main parts: frequency division multiple access (FDMA) system simulation modeling and software implementation; a study on the design and evaluation of a functional computerized RF link simulation/analysis system for the Space Station; and a study on the design and evaluation of the simulation system architecture. This report documents the results of these studies. In addition, a separate User's Manual on the Space Communications Simulation System (SCSS) (Version 1) documents the software developed for the Space Station FDMA communications system simulation. The final report, the SCSS user's manual, and the software on the NASA JSC system analysis division's VAX 750 computer together serve as the deliverables from LinCom for this project effort.

  12. Simulation of a G-tolerance curve using the pulsatile cardiovascular model

    NASA Technical Reports Server (NTRS)

    Solomon, M.; Srinivasan, R.

    1985-01-01

    A computer simulation study, performed to assess the ability of the cardiovascular model to reproduce the G-tolerance curve (G level versus tolerance time), is reported. A composite strength-duration curve derived from experimental data obtained in human centrifugation studies was used for comparison. The effects of abolishing autonomic control and of blood volume loss on G tolerance were also simulated. The results provide additional validation of the model. The need for the presence of autonomic reflexes even at low levels of G is pointed out. The low margin of safety with a loss of blood volume indicated by the simulation results underscores the necessity for protective measures during Shuttle reentry.

  13. A translator and simulator for the Burroughs D machine

    NASA Technical Reports Server (NTRS)

    Roberts, J.

    1972-01-01

    The D Machine is described as a small, user-microprogrammable computer designed to be a versatile building block for such diverse functions as disk file controllers, I/O controllers, and emulators. TRANSLANG is an ALGOL-like language which allows D Machine users to write microprograms in an English-like format as opposed to creating binary bit-pattern maps. The TRANSLANG translator parses TRANSLANG programs into D Machine microinstruction bit patterns which can be executed on the D Machine simulator. In addition to simulation and translation, the two programs also offer several debugging tools, such as a full set of diagnostic error messages, register dumps, simulated memory dumps, traces on instructions and groups of instructions, and breakpoints.

  14. Dynamic and fluid-structure interaction simulations of bioprosthetic heart valves using parametric design with T-splines and Fung-type material models

    NASA Astrophysics Data System (ADS)

    Hsu, Ming-Chen; Kamensky, David; Xu, Fei; Kiendl, Josef; Wang, Chenglong; Wu, Michael C. H.; Mineroff, Joshua; Reali, Alessandro; Bazilevs, Yuri; Sacks, Michael S.

    2015-06-01

    This paper builds on a recently developed immersogeometric fluid-structure interaction (FSI) methodology for bioprosthetic heart valve (BHV) modeling and simulation. It enhances the proposed framework in the areas of geometry design and constitutive modeling. With these enhancements, BHV FSI simulations may be performed with greater levels of automation, robustness and physical realism. In addition, the paper presents a comparison between FSI analysis and standalone structural dynamics simulation driven by prescribed transvalvular pressure, the latter being a more common modeling choice for this class of problems. The FSI computation achieved better physiological realism in predicting the valve leaflet deformation than its standalone structural dynamics counterpart.

  15. Swarming Robot Design, Construction and Software Implementation

    NASA Technical Reports Server (NTRS)

    Stolleis, Karl A.

    2014-01-01

    This paper presents an overview of the hardware design, construction, software design, and software implementation of a small, low-cost robot to be used for swarming-robot development. In addition to the work done on the robot, a full simulation of the robotic system was developed using the Robot Operating System (ROS) and its associated simulation tools. The eventual use of the robots will be the exploration of evolving behaviors via genetic algorithms, building on the work done at the University of New Mexico Biological Computation Lab.

  16. ΔE/ΔE Measurements of Energetic Ions Using CVD Diamond Detectors

    DOE PAGES

    Alghamdi, Ahmed; Heilbronn, Lawrence; Castellanos, Luis A.; ...

    2018-06-20

    Experimental and computational results of a ΔE/ΔE diamond detection system are presented. The ΔE/ΔE detection system was evaluated using energetic proton and iron beams striking thick polyethylene targets at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL). The measured data for diamond sensor A show good agreement with the Geant4 simulation. In addition, simulations have demonstrated the ability to identify hydrogen isotopes using a diamond detection system.

  17. ΔE/ΔE Measurements of Energetic Ions Using CVD Diamond Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alghamdi, Ahmed; Heilbronn, Lawrence; Castellanos, Luis A.

    Experimental and computational results of a ΔE/ΔE diamond detection system are presented. The ΔE/ΔE detection system was evaluated using energetic proton and iron beams striking thick polyethylene targets at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL). The measured data for diamond sensor A show good agreement with the Geant4 simulation. In addition, simulations have demonstrated the ability to identify hydrogen isotopes using a diamond detection system.

  18. The Primary Care Computer Simulation: Optimal Primary Care Manager Empanelment.

    DTIC Science & Technology

    1997-05-01

    explored in which a team consisted of two providers, two nurses, and a nurse aide. Each team had a specific exam room assigned to them. Additionally, a...team consisting of one provider, one nurse, and one nurse aide was simulated. The model also examined the effects of adding two exam rooms. The study...minutes. The optimal solution, which reduced patient time to below 90 minutes, was the mix of one provider, a nurse, and a nurse aide in which each

  19. ASCR Workshop on Quantum Computing for Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward

    This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

  20. NetMOD version 1.0 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion John

    2014-01-01

    NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulation of seismic networks. Specifically, NetMOD simulates the detection capabilities of seismic monitoring networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probability of detection can be computed given a detection threshold. This manual describes how to configure and operate NetMOD to perform seismic detection simulations. In addition, NetMOD is distributed with a simulation dataset for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) International Monitoring System (IMS) seismic network for the purpose of demonstrating NetMOD's capabilities and providing user training. The tutorial sections of this manual use this dataset when describing how to perform the steps involved when running a simulation.

  1. Synthesis and conformational analysis by 1H NMR and restrained molecular dynamics simulations of the cyclic decapeptide [Ser-Tyr-Ser-Met-Glu-His-Phe-Arg-Trp-Gly]

    NASA Astrophysics Data System (ADS)

    Buono, Ronald A.; Kucharczyk, Nathalie; Neuenschwander, Magrit; Kemmink, Johan; Hwang, Lih-Yueh; Fauchère, Jean-Luc; Venanzi, Carol A.

    1996-06-01

    The design of enzyme mimics with therapeutic and industrial applications has interested both experimental and computational chemists for several decades. Recent advances in the computational methodology of restrained molecular dynamics, used in conjunction with data obtained from two-dimensional 1H NMR spectroscopy, make it a promising method to study peptide and protein structure and function. Several issues, however, need to be addressed in order to assess the validity of this method for its explanatory and predictive value. Among the issues addressed in this study are: the accuracy and generalizability of the GROMOS peptide molecular mechanics force field; the effect of inclusion of solvent on the simulations; and the effect of different types of restraining algorithms on the computational results. The decapeptide Ser-Tyr-Ser-Met-Glu-His-Phe-Arg-Trp-Gly, which corresponds to the sequence of ACTH1-10, has been synthesized, cyclized, and studied by two-dimensional 1H NMR spectroscopy. Restrained molecular dynamics (RMD) and time-averaged restrained molecular dynamics (TARMD) simulations were carried out on four different distance-geometry starting structures in order to determine and contrast the behavior of cyclic ACTH1-10 in vacuum and in solution. For the RMD simulations, the structures did not fit the NOE data well, even at high values of the restraining potential. The TARMD simulation method, however, was able to give structures that fit the NOE data at high values of the restraining potential. In both cases, inclusion of explicit solvent molecules in the simulation had little effect on the quality of the fit, although it was found to dampen the motion of the cyclic peptide. For both simulation techniques, the number and size of the NOE violations increased as the restraining potential approached zero, presumably due to inadequacies in the force field. Additional TARMD vacuum-phase simulations, run with a larger memory length or with a larger sampling size (16 additional distance-geometry structures), yielded no significantly different results. The computed data were then analyzed to help explain the sparse NOE data and poor chymotryptic activity of the cyclic peptide. Cyclic ACTH1-10, which contains the functional moieties of the catalytic triad of chymotrypsin, was evaluated as a potential mimic of chymotrypsin by measurement of the rate of hydrolysis of esters of L- and D-phenylalanine. The poor rate of hydrolysis is attributed to the flexibility of the decapeptide, the motion of the side chains (which results in the absence of long-range NOEs), the small size of the macrocycle relative to that of the substrate, and the inappropriate orientation of the Gly, His, and Ser residues. The results demonstrate the utility of this method in computer-aided molecular design of cyclic peptides and suggest structural modifications for future work based on a larger and more rigid peptide framework.

  2. The effects of model composition design choices on high-fidelity simulations of motoneuron recruitment and firing behaviors

    NASA Astrophysics Data System (ADS)

    Allen, John M.; Elbasiouny, Sherif M.

    2018-06-01

    Objective. Computational models often require tradeoffs, such as balancing detail with efficiency; yet optimal balance should incorporate sound design features that do not bias the results of the specific scientific question under investigation. The present study examines how model design choices impact simulation results. Approach. We developed a rigorously-validated high-fidelity computational model of the spinal motoneuron pool to study three long-standing model design practices which have yet to be examined for their impact on motoneuron recruitment, firing rate, and force simulations. The practices examined were the use of: (1) generic cell models to simulate different motoneuron types, (2) discrete property ranges for different motoneuron types, and (3) biological homogeneity of cell properties within motoneuron types. Main results. Our results show that each of these practices accentuates conditions of motoneuron recruitment based on the size principle, and minimizes conditions of mixed and reversed recruitment orders, which have been observed in animal and human recordings. Specifically, strict motoneuron orderly size recruitment occurs, but in a compressed range, after which mixed and reverse motoneuron recruitment occurs due to the overlap in electrical properties of different motoneuron types. Additionally, these practices underestimate the motoneuron firing rates and force data simulated by existing models. Significance. Our results indicate that current modeling practices increase conditions of motoneuron recruitment based on the size principle, and decrease conditions of mixed and reversed recruitment order, which, in turn, impacts the predictions made by existing models on motoneuron recruitment, firing rate, and force. Additionally, mixed and reverse motoneuron recruitment generated higher muscle force than orderly size motoneuron recruitment in these simulations and represents one potential scheme to increase muscle efficiency. The examined model design practices, as well as the present results, are applicable to neuronal modeling throughout the nervous system.

  3. The effects of model composition design choices on high-fidelity simulations of motoneuron recruitment and firing behaviors.

    PubMed

    Allen, John M; Elbasiouny, Sherif M

    2018-06-01

    Computational models often require tradeoffs, such as balancing detail with efficiency; yet optimal balance should incorporate sound design features that do not bias the results of the specific scientific question under investigation. The present study examines how model design choices impact simulation results. We developed a rigorously-validated high-fidelity computational model of the spinal motoneuron pool to study three long-standing model design practices which have yet to be examined for their impact on motoneuron recruitment, firing rate, and force simulations. The practices examined were the use of: (1) generic cell models to simulate different motoneuron types, (2) discrete property ranges for different motoneuron types, and (3) biological homogeneity of cell properties within motoneuron types. Our results show that each of these practices accentuates conditions of motoneuron recruitment based on the size principle, and minimizes conditions of mixed and reversed recruitment orders, which have been observed in animal and human recordings. Specifically, strict motoneuron orderly size recruitment occurs, but in a compressed range, after which mixed and reverse motoneuron recruitment occurs due to the overlap in electrical properties of different motoneuron types. Additionally, these practices underestimate the motoneuron firing rates and force data simulated by existing models. Our results indicate that current modeling practices increase conditions of motoneuron recruitment based on the size principle, and decrease conditions of mixed and reversed recruitment order, which, in turn, impacts the predictions made by existing models on motoneuron recruitment, firing rate, and force. Additionally, mixed and reverse motoneuron recruitment generated higher muscle force than orderly size motoneuron recruitment in these simulations and represents one potential scheme to increase muscle efficiency. The examined model design practices, as well as the present results, are applicable to neuronal modeling throughout the nervous system.

  4. 3D printing in chemical engineering and catalytic technology: structured catalysts, mixers and reactors.

    PubMed

    Parra-Cabrera, Cesar; Achille, Clement; Kuhn, Simon; Ameloot, Rob

    2018-01-02

    Computer-aided fabrication technologies combined with simulation and data processing approaches are changing our way of manufacturing and designing functional objects. Also in the field of catalytic technology and chemical engineering the impact of additive manufacturing, also referred to as 3D printing, is steadily increasing thanks to a rapidly decreasing equipment threshold. Although still in an early stage, the rapid and seamless transition between digital data and physical objects enabled by these fabrication tools will benefit both research and manufacture of reactors and structured catalysts. Additive manufacturing closes the gap between theory and experiment, by enabling accurate fabrication of geometries optimized through computational fluid dynamics and the experimental evaluation of their properties. This review highlights the research using 3D printing and computational modeling as digital tools for the design and fabrication of reactors and structured catalysts. The goal of this contribution is to stimulate interactions at the crossroads of chemistry and materials science on the one hand and digital fabrication and computational modeling on the other.

  5. Parallel task processing of very large datasets

    NASA Astrophysics Data System (ADS)

    Romig, Phillip Richardson, III

    This research concerns the use of distributed computer technologies for the analysis and management of very large datasets. Improvements in sensor technology, an emphasis on global change research, and greater access to data warehouses are all increasing the number of non-traditional users of remotely sensed data. We present a framework for distributed solutions to the challenges of datasets which exceed the online storage capacity of individual workstations. This framework, called parallel task processing (PTP), incorporates both the task- and data-level parallelism exemplified by many image processing operations. An implementation based on the principles of PTP, called Tricky, is also presented. Additionally, we describe the challenges and practical issues in modeling the performance of parallel task processing with large datasets. We present a mechanism for estimating the running time of each unit of work within a system and an algorithm that uses these estimates to simulate the execution environment and produce estimated runtimes. Finally, we describe and discuss experimental results which validate the design. Specifically, the system (a) is able to perform computation on datasets which exceed the capacity of any one disk, (b) provides a reduction of overall computation time as a result of the task distribution, even with the additional cost of data transfer and management, and (c) in simulation mode accurately predicts the performance of the real execution environment.
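
    The runtime-estimation idea can be illustrated with a toy list-scheduling simulation (Python; this is a sketch of the concept, not Tricky's actual performance model, which also accounts for data transfer and management):

        import heapq

        def predicted_makespan(units, n_workers, est_runtime):
            # Assign each work unit to the earliest-free worker using
            # its estimated runtime; the largest finish time is the
            # predicted overall completion time.
            free_at = [0.0] * n_workers
            heapq.heapify(free_at)
            for u in units:
                t = heapq.heappop(free_at)
                heapq.heappush(free_at, t + est_runtime(u))
            return max(free_at)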

  6. Theoretical Study of White Dwarf Double Stars

    NASA Astrophysics Data System (ADS)

    Hira, Ajit; Koetter, Ted; Rivera, Ruben; Diaz, Juan

    2015-04-01

    We continue our interest in the computational simulation of astrophysical phenomena with a study of gravitationally-bound binary stars composed of at least one white dwarf star. Of particular interest to astrophysicists are the conditions inside a white dwarf star in the time frame leading up to its explosive end as a Type Ia supernova, for an understanding of these massive stellar explosions. In addition, studies of the evolution of white dwarfs could serve as promising probes of theories of gravitation. We developed FORTRAN computer programs to implement our models for white dwarfs and other stars. These codes allow for different sizes and masses of stars. Simulations were done in the mass interval from 0.1 to 2.0 solar masses. Our goal was to obtain both atmospheric and orbital parameters. The computational results thus obtained are compared with relevant observational data, and the data are further analyzed to identify trends in terms of the sizes and masses of the stars. We hope to extend our computational studies to blue giant stars in the future. Research supported by the National Science Foundation.

  7. A program to compute the soft Robinson-Foulds distance between phylogenetic networks.

    PubMed

    Lu, Bingxin; Zhang, Louxin; Leong, Hon Wai

    2017-03-14

    Over the past two decades, phylogenetic networks have been studied to model reticulate evolutionary events. The relationships among phylogenetic networks, phylogenetic trees and clusters serve as the basis for reconstruction and comparison of phylogenetic networks. To understand these relationships, two problems are raised: the tree containment problem, which asks whether a phylogenetic tree is displayed in a phylogenetic network, and the cluster containment problem, which asks whether a cluster is represented at a node in a phylogenetic network. Both problems are NP-complete. A fast exponential-time algorithm for the cluster containment problem on arbitrary networks is developed and implemented in C. The resulting program is further extended into a computer program for fast computation of the soft Robinson-Foulds distance between phylogenetic networks. Two computer programs are developed for facilitating reconstruction and validation of phylogenetic network models in evolutionary and comparative genomics. Our simulation tests indicated that they are fast enough for use in practice. Additionally, our simulation data demonstrate that the distribution of the soft Robinson-Foulds distance between phylogenetic networks is unlikely to be normal.
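
    The classic (hard) Robinson-Foulds distance on trees, which the soft variant generalizes to networks via cluster containment, can be computed from cluster sets (a minimal Python sketch, not the authors' C program):

        def rf_distance(clusters_a, clusters_b):
            # Each tree is given as the set of its nontrivial clusters
            # (frozensets of leaf labels); one common convention takes
            # the size of the symmetric difference (sometimes halved).
            return len(clusters_a ^ clusters_b)

        t1 = {frozenset("ab"), frozenset("cd")}   # ((a,b),(c,d))
        t2 = {frozenset("ac"), frozenset("bd")}   # ((a,c),(b,d))
        print(rf_distance(t1, t2))                # 4: no shared cluster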

  8. Multiphysics Simulations: Challenges and Opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyes, D.; McInnes, L. C.; Woodward, C.

    This report is an outcome of the workshop Multiphysics Simulations: Challenges and Opportunities, sponsored by the Institute of Computing in Science (ICiS). Additional information about the workshop, including relevant reading and presentations on multiphysics issues in applications, algorithms, and software, is available via https://sites.google.com/site/icismultiphysics2011/. We consider multiphysics applications from algorithmic and architectural perspectives, where 'algorithmic' includes both mathematical analysis and computational complexity and 'architectural' includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities. We also initiate a modest suite of test problems encompassing features present in many applications.

  9. Fast computation of high energy elastic collision scattering angle for electric propulsion plume simulation

    NASA Astrophysics Data System (ADS)

    Araki, Samuel J.

    2016-11-01

    In the plumes of Hall thrusters and ion thrusters, high-energy ions experience elastic collisions with slow neutral atoms. These collisions involve a process of momentum exchange, altering the initial velocity vectors of the collision pair. In addition to the momentum exchange process, ions and atoms can exchange electrons, resulting in slow charge-exchange ions and fast atoms. In these simulations, it is particularly important to compute ion-atom elastic collisions accurately when determining the plume current profile and assessing the integration of spacecraft components. The existing models are capable of accurate calculation but are not fast, so the collision calculation can become a bottleneck of plume simulations. This study investigates methods to accelerate an ion-atom elastic collision calculation that includes both momentum- and charge-exchange processes. The scattering angles are pre-computed through a classical approach with an ab initio spin-orbit-free potential and are stored in a two-dimensional array as functions of impact parameter and energy. When performing a collision calculation for an ion-atom pair, the scattering angle is computed by a table lookup and multiple linear interpolations, given the relative energy and a randomly determined impact parameter. To further accelerate the calculations, the number of collision calculations is reduced by properly defining two cut-off cross-sections for the elastic scattering. In the MCC method, the target atom needs to be sampled; however, it is confirmed that the initial target-atom velocity does not play a significant role in typical electric propulsion plume simulations, so the sampling process is unnecessary. With these implementations, the computational run-time of a collision calculation is reduced significantly compared to previous methods, while retaining the accuracy of the high-fidelity models.
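
    The table lookup with interpolation can be sketched as follows (a minimal NumPy version; the grid names and the uniform, ascending-grid assumption are illustrative, not the paper's code):

        import numpy as np

        def lookup_angle(table, b_grid, e_grid, b, e):
            # Bilinear interpolation into a precomputed scattering-angle
            # table chi(b, E) indexed by impact parameter and energy.
            i = int(np.clip(np.searchsorted(b_grid, b) - 1, 0, len(b_grid) - 2))
            j = int(np.clip(np.searchsorted(e_grid, e) - 1, 0, len(e_grid) - 2))
            tb = (b - b_grid[i]) / (b_grid[i + 1] - b_grid[i])
            te = (e - e_grid[j]) / (e_grid[j + 1] - e_grid[j])
            return ((1 - tb) * (1 - te) * table[i, j]
                    + tb * (1 - te) * table[i + 1, j]
                    + (1 - tb) * te * table[i, j + 1]
                    + tb * te * table[i + 1, j + 1])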

  10. Toward an in-situ analytics and diagnostics framework for earth system models

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine; Wolf, Matthew; Rasch, Philip; Klasky, Scott; Williams, Dean; Jacob, Rob; Ma, Po-Lun; Kuo, Kwo-Sen

    2017-04-01

    The development roadmaps for many earth system models (ESM) aim for a globally cloud-resolving model targeting the pre-exascale and exascale systems of the future. The ESMs will also incorporate more complex physics, chemistry and biology, thereby vastly increasing the fidelity of the information content simulated by the model. We will then be faced with an unprecedented volume of simulation output that would need to be processed and analyzed concurrently in order to derive the valuable scientific results. We are already at this threshold with the current generation of ESMs at higher-resolution simulations. Currently, the nominal I/O throughput in the Community Earth System Model (CESM) via the Parallel IO (PIO) library is around 100 MB/s. The high-frequency I/O requirements would add roughly 1 GB per simulated hour, translating to roughly 4 mins wallclock per simulated day, i.e., about 24.33 wallclock hours per simulated model year, or 1,752,000 core-hours of charge per simulated model year on the Titan supercomputer at the Oak Ridge Leadership Computing Facility. There is also a pending need for 3X more volume of simulation output. Meanwhile, many ESMs use instrument simulators to run forward models to compare model simulations against satellite and ground-based instruments, such as radars and radiometers. The CFMIP Observation Simulator Package (COSP) is used in CESM as well as in the Accelerated Climate Model for Energy (ACME), one of the ESMs specifically targeting current and emerging leadership-class computing platforms. These simulators can be computationally expensive, accounting for as much as 30% of the computational cost. Hence the data are often written to output files that are then used for offline calculations. Again, the I/O bottleneck becomes a limitation. Detection and attribution studies also use large volumes of data for pattern recognition and feature extraction to analyze weather and climate phenomena such as tropical cyclones, atmospheric rivers, and blizzards. It is evident that ESMs need an in-situ framework to decouple the diagnostics and analytics from the prognostics and physics computations of the models so that the diagnostic computations can be performed concurrently without limiting model throughput. We are designing a science-driven online analytics framework for earth system models. Our approach is to adopt several data workflow technologies, such as the Adaptable IO System (ADIOS) being developed under the U.S. Exascale Computing Project (ECP), and integrate them to allow for extreme-performance I/O, in situ workflow integration, and science-driven analytics and visualization, all in an easy-to-use computational framework. This will allow science teams to write data 100-1000 times faster and to move seamlessly from post-processing the output for validation and verification purposes to performing these calculations in situ. We can easily envision a near-term future where earth system models like ACME and CESM will have to address not only the challenges of the volume of data but also the velocity of the data. Earth system models of the future in the exascale era, as they incorporate more complex physics at higher resolutions, will be able to analyze more simulation content without having to compromise targeted model throughput.
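
    The quoted cost chain is easy to verify (Python; the implied charge rate of 72,000 cores is an inference from the numbers given, not a figure stated in the abstract):

        mins_per_sim_day = 4
        hours_per_sim_year = mins_per_sim_day * 365 / 60       # ~24.33 wallclock hours
        core_hours_per_sim_year = 1_752_000
        implied_cores = core_hours_per_sim_year / hours_per_sim_year   # 72,000
        print(round(hours_per_sim_year, 2), round(implied_cores))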

  11. Simulation and Flight Test Capability for Testing Prototype Sense and Avoid System Elements

    NASA Technical Reports Server (NTRS)

    Howell, Charles T.; Stock, Todd M.; Verstynen, Harry A.; Wehner, Paul J.

    2012-01-01

    NASA Langley Research Center (LaRC) and The MITRE Corporation (MITRE) have developed, and successfully demonstrated, an integrated simulation-to-flight capability for evaluating sense and avoid (SAA) system elements. This integrated capability consists of a MITRE-developed fast-time computer simulation for evaluating SAA algorithms, and a NASA LaRC surrogate unmanned aircraft system (UAS) equipped to support hardware- and software-in-the-loop evaluation of SAA system elements (e.g., algorithms, sensors, architecture, communications, autonomous systems), concepts, and procedures. The fast-time computer simulation subjects algorithms to simulated flight encounters/conditions and generates a fitness report that records strengths, weaknesses, and overall performance. Reviewed algorithms (and their fitness report) are then transferred to NASA LaRC, where additional (joint) airworthiness evaluations are performed on the candidate SAA system-element configurations, concepts, and/or procedures of interest; software and hardware components are integrated into the Surrogate UAS research systems; and flight safety and mission planning activities are completed. Onboard the Surrogate UAS, candidate SAA system element configurations, concepts, and/or procedures are subjected to flight evaluations and in-flight performance is monitored. The Surrogate UAS, which can be controlled remotely via generic Ground Station uplink or automatically via onboard systems, operates with a NASA Safety Pilot/Pilot in Command onboard to permit safe operations in mixed airspace with manned aircraft. An end-to-end demonstration of a typical application of the capability was performed in non-exclusionary airspace in October 2011; additional research, development, flight testing, and evaluation efforts using this integrated capability are planned throughout fiscal years 2012 and 2013.

  12. Quantum analogue computing.

    PubMed

    Kendon, Vivien M; Nemoto, Kae; Munro, William J

    2010-08-13

    We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.

  13. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance.

    PubMed

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-12-01

    Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis, that facilitate the setup of computational experiments via hybridmantis. The visualization tools visualmantis and webmantis enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. The output consists of the point response, pulse-height spectrum, and optical transport statistics generated by hybridmantis. Users can download the output images and statistics as a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. The visualization tools visualmantis and webmantis provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers, while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.

  14. Ocean-Atmosphere Coupled Model Simulations of Precipitation in the Central Andes

    NASA Technical Reports Server (NTRS)

    Nicholls, Stephen D.; Mohr, Karen I.

    2015-01-01

    The meridional extent and complex orography of the South American continent contribute to a wide diversity of climate regimes ranging from hyper-arid deserts to tropical rainforests to sub-polar highland regions. In addition, South American meteorology and climate are made further complicated by ENSO, a powerful coupled ocean-atmosphere phenomenon. Modelling studies in this region have typically resorted to either atmospheric mesoscale models or atmosphere-ocean coupled global climate models. The former offer full physics and high spatial resolution but are computationally expensive and typically lack an interactive ocean, whereas the latter offer computational efficiency and ocean-atmosphere coupling but lack adequate spatial and temporal resolution to resolve the complex orography and explicitly simulate precipitation. Explicit simulation of precipitation is vital in the Central Andes, where rainfall rates are light (0.5-5 mm hr⁻¹), there is strong seasonality, and most precipitation is associated with weak mesoscale-organized convection. Recent increases in both computational power and model development have led to the advent of coupled ocean-atmosphere mesoscale models for both weather and climate study applications. These modelling systems, while computationally expensive, include two-way ocean-atmosphere coupling, high resolution, and explicit simulation of precipitation. In this study, we use the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model, a fully-coupled mesoscale atmosphere-ocean modeling system. Previous work has shown COAWST to reasonably simulate the entire 2003-2004 wet season (Dec-Feb), as validated against both satellite and model analysis data, when ECMWF interim analysis data were used for boundary conditions on a 27-9-km grid configuration (outer grid extent: 60.4S to 17.7N and 118.6W to 17.4W).

  15. A Modular Three-Dimensional Finite-Difference Ground-Water Flow Model

    USGS Publications Warehouse

    McDonald, Michael G.; Harbaugh, Arlen W.; Guo, Weixing; Lu, Guoping

    1988-01-01

    This report presents a finite-difference model and its associated modular computer program. The model simulates flow in three dimensions. The report includes detailed explanations of the physical and mathematical concepts on which the model is based and an explanation of how those concepts are incorporated in the modular structure of the computer program. The modular structure consists of a Main Program and a series of highly independent subroutines called 'modules.' The modules are grouped into 'packages.' Each package deals with a specific feature of the hydrologic system which is to be simulated, such as flow from rivers or flow into drains, or with a specific method of solving the linear equations which describe the flow system, such as the Strongly Implicit Procedure or Slice-Successive Overrelaxation. The division of the program into modules permits the user to examine specific hydrologic features of the model independently. This also facilitates the development of additional capabilities, because new packages can be added to the program without modifying the existing packages. The input and output systems of the computer program are also designed to permit maximum flexibility. Ground-water flow within the aquifer is simulated using a block-centered finite-difference approach. Layers can be simulated as confined, unconfined, or a combination of confined and unconfined. Flow associated with external stresses, such as wells, areal recharge, evapotranspiration, drains, and streams, can also be simulated. The finite-difference equations can be solved using either the Strongly Implicit Procedure or Slice-Successive Overrelaxation. The program is written in FORTRAN 77 and will run without modification on most computers that have a FORTRAN 77 compiler. For each program module, this report includes a narrative description, a flow chart, a list of variables, and a module listing.
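
    The iterative solution of the finite-difference flow equations can be illustrated with a toy successive-overrelaxation loop (shown in Python rather than the program's FORTRAN 77, and far simpler than the SIP or SSOR packages; the grid and parameters are illustrative):

        import numpy as np

        def sor_laplace(h, omega=1.7, tol=1e-6, max_sweeps=10000):
            # Relax interior heads toward the 5-point average of their
            # neighbors; boundary cells of h hold fixed-head conditions.
            for _ in range(max_sweeps):
                err = 0.0
                for i in range(1, h.shape[0] - 1):
                    for j in range(1, h.shape[1] - 1):
                        avg = 0.25 * (h[i-1, j] + h[i+1, j] + h[i, j-1] + h[i, j+1])
                        d = omega * (avg - h[i, j])
                        h[i, j] += d
                        err = max(err, abs(d))
                if err < tol:
                    break
            return h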

  16. Lattice Boltzmann simulation of nonequilibrium effects in oscillatory gas flow.

    PubMed

    Tang, G H; Gu, X J; Barber, R W; Emerson, D R; Zhang, Y H

    2008-08-01

    Accurate evaluation of damping in laterally oscillating microstructures is challenging due to the complex flow behavior. In addition, device fabrication techniques and surface properties will have an important effect on the flow characteristics. Although kinetic approaches such as the direct simulation Monte Carlo (DSMC) method and directly solving the Boltzmann equation can address these challenges, they are beyond the reach of current computer technology for large scale simulation. As the continuum Navier-Stokes equations become invalid for nonequilibrium flows, we take advantage of the computationally efficient lattice Boltzmann method to investigate nonequilibrium oscillating flows. We have analyzed the effects of the Stokes number, Knudsen number, and tangential momentum accommodation coefficient for oscillating Couette flow and Stokes' second problem. Our results are in excellent agreement with DSMC data for Knudsen numbers up to Kn=O(1) and show good agreement for Knudsen numbers as large as 2.5. In addition to increasing the Stokes number, we demonstrate that increasing the Knudsen number or decreasing the accommodation coefficient can also expedite the breakdown of symmetry for oscillating Couette flow. This results in an earlier transition from quasisteady to unsteady flow. Our paper also highlights the deviation in velocity slip between Stokes' second problem and the confined Couette case.
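
    For reference, the classical no-slip continuum solution of Stokes' second problem, against which such kinetic results are typically contrasted, is (standard result; the notation assumes plate velocity amplitude U_0, oscillation frequency omega, and kinematic viscosity nu):

        u(y,t) = U_0 \, e^{-y\sqrt{\omega/2\nu}} \cos\!\left(\omega t - y\sqrt{\omega/2\nu}\right)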

  17. Student Ability, Confidence, and Attitudes Toward Incorporating a Computer into a Patient Interview.

    PubMed

    Ray, Sarah; Valdovinos, Katie

    2015-05-25

    The objectives were to improve pharmacy students' ability to effectively incorporate a computer into a simulated patient encounter and to improve their awareness of barriers to, attitudes toward, and confidence in using a computer during simulated patient encounters. Students completed a survey that assessed their awareness of, confidence in, and attitudes toward computer use during simulated patient encounters. Students were evaluated with a rubric on their ability to incorporate a computer into a simulated patient encounter. Students were resurveyed and reevaluated after instruction. Students improved in their ability to effectively incorporate computer usage into a simulated patient encounter. They also became more aware of, and improved their attitudes toward, barriers regarding such usage, and gained more confidence in their ability to use a computer during simulated patient encounters. Instruction can improve pharmacy students' ability to incorporate a computer into simulated patient encounters. This skill is critical to developing efficiency while maintaining rapport with patients.

  18. Simulation of Physical Experiments in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Wasfy, Tamer M.

    2001-01-01

    An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.
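
    The object model described can be sketched minimally (hypothetical Python classes; the VL itself is not written in Python, and the names are illustrative):

        class VLObject:
            # A virtual-lab object bundles properties with named events
            # and the handlers (methods) subscribed to them.
            def __init__(self, **props):
                self.props = props
                self._handlers = {}

            def on(self, event, handler):
                self._handlers.setdefault(event, []).append(handler)

            def fire(self, event, *args):
                for h in self._handlers.get(event, []):
                    h(*args)

        class Container(VLObject):
            # Groups several objects so they can be handled as one.
            def __init__(self, *children, **props):
                super().__init__(**props)
                self.children = list(children)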

  19. Assess and improve the sustainability of water treatment facility using Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Tejada-Martinez, Andres; Lei, Hongxia; Zhang, Qiong

    2016-11-01

    Fluid-flow problems in the water treatment industry are often simplified or omitted, since the focus is usually on the chemical processes only. However, hydraulics also plays an important role in determining effluent water quality. Recent studies have demonstrated that computational fluid dynamics (CFD) has the ability to simulate the physical and chemical processes in reactive flows in water treatment facilities, such as in chlorine and ozone disinfection tanks. This study presents the results from CFD simulations of reactive flow in an existing full-scale ozone disinfection tank and in potential designs. Through analysis of the simulation results, we found that the baffling factor and CT10 are not optimal indicators of disinfection performance. We also found that the relationship between the effluent CT (the product of disinfectant concentration and contact time) obtained from CT transport simulation and the baffling factor depends on the location of ozone release. In addition, we analyzed the environmental and economic impacts of ozone disinfection tank designs and developed a composite indicator to quantify the sustainability of an ozone disinfection tank in the technological, environmental and economic dimensions.

  20. The JINR Tier1 Site Simulation for Research and Development Purposes

    NASA Astrophysics Data System (ADS)

    Korenkov, V.; Nechaevskiy, A.; Ososkov, G.; Pryahina, D.; Trofimov, V.; Uzhinskiy, A.; Voytishin, N.

    2016-02-01

    Distributed complex computing systems for data storage and processing are in common use in the majority of modern scientific centers. The design of such systems is usually based on recommendations obtained via a preliminary simulation model that is built and executed only once. However, big experiments last for years and decades, and their computing systems develop not only quantitatively but also qualitatively. Even with the substantial efforts invested in the design phase to understand the system's configuration, it would be hard to develop a system without additional research into its future evolution. Developers and operators face the problem of predicting the system's behaviour after planned modifications. A system for simulating grid and cloud services has been developed at LIT (JINR, Dubna). This simulation system is focused on improving the efficiency of grid/cloud structure development by using the work quality indicators of a real system. The development of this kind of software is very important for building new grid/cloud infrastructures for big scientific experiments such as the JINR Tier1 site for WLCG. The simulation of some processes of the Tier1 site is considered as an example of our approach.

  1. Simulation of Transcritical CO2 Refrigeration System with Booster Hot Gas Bypass in Tropical Climate

    NASA Astrophysics Data System (ADS)

    Santosa, I. D. M. C.; Sudirman; Waisnawa, IGNS; Sunu, PW; Temaja, IW

    2018-01-01

    Computer simulation is important for performance analysis, since building an experimental rig demands a large allocation of cost and time, especially for a CO2 refrigeration system, and modifying the rig also requires additional cost and time. One simulation program that is well suited to refrigeration systems is the Engineering Equation Solver (EES). For CO2 refrigeration in particular, environmental concerns have become a priority in refrigeration system development, since carbon dioxide (CO2) is a natural and clean refrigerant. This study aims to analyze the effectiveness of an EES simulation of a transcritical CO2 refrigeration system with booster hot gas bypass at high outdoor temperatures. The research was carried out through theoretical study and numerical analysis of the refrigeration system using the EES program. Data input and simulation validation were obtained from experimental and secondary data. The results showed that the coefficient of performance (COP) decreased gradually as the outdoor temperature increased, and that the program can calculate the performance of the refrigeration system accurately and with a short running time. The simulation therefore provides a useful preliminary reference for improving CO2 refrigeration system design for hot climates.
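
    An equation-solver model of this kind boils down to property lookups plus energy balances. As a hedged illustration only, the sketch below estimates the COP of a basic transcritical CO2 cycle in Python with the CoolProp library, assuming saturated suction, isentropic compression, and isenthalpic expansion; it is not the authors' EES booster model, and all state-point values are assumptions.

    ```python
    # Minimal transcritical CO2 cycle sketch using CoolProp (assumed states).
    from CoolProp.CoolProp import PropsSI

    T_evap = 273.15 - 5.0      # evaporating temperature [K] (assumed)
    P_gc = 10.0e6              # gas cooler pressure [Pa] (assumed)
    T_gc_out = 273.15 + 40.0   # gas cooler outlet temp [K] (hot climate, assumed)

    h1 = PropsSI('H', 'T', T_evap, 'Q', 1, 'CO2')       # saturated suction vapour
    s1 = PropsSI('S', 'T', T_evap, 'Q', 1, 'CO2')
    h2 = PropsSI('H', 'P', P_gc, 'S', s1, 'CO2')        # isentropic discharge
    h3 = PropsSI('H', 'P', P_gc, 'T', T_gc_out, 'CO2')  # gas cooler outlet
    h4 = h3                                             # isenthalpic expansion

    cop = (h1 - h4) / (h2 - h1)   # refrigerating effect over compressor work
    print(f"COP = {cop:.2f}")
    ```

    Raising the assumed gas cooler outlet temperature in this sketch reproduces the qualitative trend reported above: COP falls as the outdoor temperature rises.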

  2. Shock simulations of a single-site coarse-grain RDX model using the dissipative particle dynamics method with reactivity

    NASA Astrophysics Data System (ADS)

    Sellers, Michael S.; Lísal, Martin; Schweigert, Igor; Larentzos, James P.; Brennan, John K.

    2017-01-01

    In discrete particle simulations, when an atomistic model is coarse-grained, a tradeoff is made: a boost in computational speed for a reduction in accuracy. The Dissipative Particle Dynamics (DPD) methods help to recover lost accuracy of the viscous and thermal properties, while giving back a relatively small amount of computational speed. Since its initial development for polymers, one of the most notable extensions of DPD has been the introduction of chemical reactivity, called DPD-RX. In 2007, Maillet, Soulard, and Stoltz introduced implicit chemical reactivity in DPD through the concept of particle reactors and simulated the decomposition of liquid nitromethane. We present an extended and generalized version of the DPD-RX method, and have applied it to solid hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX). Demonstration simulations of reacting RDX are performed under shock conditions using a recently developed single-site coarse-grain model and a reduced RDX decomposition mechanism. A description of the methods used to simulate RDX and its transition to hot product gases within DPD-RX is presented. Additionally, we discuss several examples of the effect of shock speed and microstructure on the corresponding material chemistry.
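
    For readers unfamiliar with DPD, the method's pairwise forces are simple to state. The sketch below shows the standard conservative, dissipative, and random contributions for one particle pair with assumed parameters; it illustrates plain DPD only, not the DPD-RX reactivity extension described in the paper.

    ```python
    # Standard DPD pair force (Groot-Warren form) for one particle pair.
    import numpy as np

    def dpd_pair_force(ri, rj, vi, vj, a=25.0, gamma=4.5, kT=1.0, rc=1.0,
                       dt=0.01, rng=np.random.default_rng()):
        rij = ri - rj
        r = np.linalg.norm(rij)
        if r >= rc:                              # beyond the cutoff: no force
            return np.zeros(3)
        e = rij / r                              # unit separation vector
        w = 1.0 - r / rc                         # standard DPD weight function
        sigma = np.sqrt(2.0 * gamma * kT)        # fluctuation-dissipation relation
        f_cons = a * w * e                       # soft conservative repulsion
        f_diss = -gamma * w**2 * np.dot(e, vi - vj) * e           # pair friction
        f_rand = sigma * w * rng.standard_normal() / np.sqrt(dt) * e  # thermal kick
        return f_cons + f_diss + f_rand
    ```

    The dissipative and random terms are what let DPD recover viscous and thermal behaviour: together they act as a momentum-conserving pairwise thermostat.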

  3. A Progressive Damage Model for unidirectional Fibre Reinforced Composites with Application to Impact and Penetration Simulation

    NASA Astrophysics Data System (ADS)

    Kerschbaum, M.; Hopmann, C.

    2016-06-01

    The computationally efficient simulation of the progressive damage behaviour of continuous fibre reinforced plastics is still a challenging task with currently available computer-aided engineering methods. This paper presents an original approach for an energy-based continuum damage model which accounts for stress-strain nonlinearities, transverse and shear stress interaction phenomena, quasi-plastic shear strain components, strain rate effects, regularised damage evolution, and load reversal effects. The physically based modelling approach enables experimental determination of all parameters at the ply level, avoiding expensive inverse analysis procedures. The modelling strategy, implementation, and verification of this model using commercially available explicit finite element software are detailed. The model is then applied to simulate the impact and penetration of carbon fibre reinforced cross-ply specimens at varying impact speeds. The simulation results show that the presented approach provides a good representation of the force-displacement curves and especially good agreement with the experimentally observed fracture patterns. In addition, the mesh dependency of the results was assessed for one impact case, showing only very little change in the simulation results, which emphasises the general applicability of the presented method.

  4. Spacecraft applications of advanced global positioning system technology

    NASA Technical Reports Server (NTRS)

    1988-01-01

    This is the final report on the Texas Instruments Incorporated (TI) simulations study of Spacecraft Application of Advanced Global Positioning System (GPS) Technology. This work was conducted for the NASA Johnson Space Center (JSC) under contract NAS9-17781. GPS, in addition to its baselined capability as a highly accurate spacecraft navigation system, can provide traffic control, attitude control, structural control, and a uniform time base. In Phase 1 of this program, another contractor investigated the potential of GPS in these four areas and compared GPS to other techniques. This contract covered the Phase 2 effort: studying the performance of GPS for these spacecraft applications through computer simulations. TI had previously developed simulation programs for GPS differential navigation and attitude measurement, and these programs were adapted for the specific spacecraft applications. In addition, TI has extensive expertise in the design and production of advanced GPS receivers, including space-qualified GPS receivers. We have drawn on this background to augment the simulation results in the system-level overview, which is Section 2 of this report.

  5. ASIS v1.0: an adaptive solver for the simulation of atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Cariolle, Daniel; Moinat, Philippe; Teyssèdre, Hubert; Giraud, Luc; Josse, Béatrice; Lefèvre, Franck

    2017-04-01

    This article reports on the development and testing of the adaptive semi-implicit scheme (ASIS) solver for the simulation of atmospheric chemistry. To solve the ordinary differential equation systems associated with the time evolution of species concentrations, ASIS adopts a one-step linearized implicit scheme with specific treatments of the Jacobian of the chemical fluxes. It conserves mass and has a time-stepping module to control the accuracy of the numerical solution. In idealized box-model simulations, ASIS gives results similar to the higher-order implicit schemes derived from the Rosenbrock and Gear methods, and requires less computation and run time at the moderate precision required for atmospheric applications. When implemented in the MOCAGE chemical transport model and the Laboratoire de Météorologie Dynamique Mars general circulation model, the ASIS solver performs well and reveals weaknesses and limitations of the original semi-implicit solvers used by these two models. ASIS can be easily adapted to various chemical schemes, and further developments are foreseen to increase its computational efficiency and to include the computation of species concentrations in the aqueous phase in addition to gas-phase chemistry.
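
    The core idea of a one-step linearized implicit scheme can be shown compactly. Below is a hedged sketch, not the ASIS code: a Rosenbrock-Euler step y_{n+1} = y_n + dt (I - dt J)^{-1} f(y_n), applied to an assumed stiff two-species toy reaction. Because the columns of the Jacobian sum to zero, the step conserves mass up to round-off, mirroring the conservation property noted above.

    ```python
    # One-step linearized implicit (Rosenbrock-Euler) integrator sketch.
    import numpy as np

    def linearized_implicit_step(f, jac, y, dt):
        """Advance y by one step of y' = f(y) using the linearized Jacobian."""
        A = np.eye(y.size) - dt * jac(y)
        return y + dt * np.linalg.solve(A, f(y))

    # Toy stiff reversible reaction A <-> B with assumed rate constants.
    k1, k2 = 1.0e4, 1.0
    f = lambda y: np.array([-k1 * y[0] + k2 * y[1], k1 * y[0] - k2 * y[1]])
    jac = lambda y: np.array([[-k1, k2], [k1, -k2]])

    y = np.array([1.0, 0.0])
    for _ in range(100):                 # dt far beyond the explicit stability limit
        y = linearized_implicit_step(f, jac, y, dt=1.0e-3)
    print(y, y.sum())                    # total mass stays 1.0 to round-off
    ```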

  6. The Cell Collective: Toward an open and collaborative approach to systems biology

    PubMed Central

    2012-01-01

    Background Despite decades of new discoveries in biomedical research, the overwhelming complexity of cells has been a significant barrier to a fundamental understanding of how cells work as a whole. As such, the holistic study of biochemical pathways requires computer modeling. Due to the complexity of cells, it is not feasible for one person or group to model the cell in its entirety. Results The Cell Collective is a platform that allows the world-wide scientific community to create these models collectively. Its interface enables users to build and use models without specifying any mathematical equations or computer code, addressing one of the major hurdles in computational research. In addition, this platform allows scientists to simulate and analyze the models in real time on the web, including simulating loss/gain of function and testing what-if scenarios. Conclusions The Cell Collective is a web-based platform that enables laboratory scientists from across the globe to collaboratively build large-scale models of various biological processes and simulate/analyze them in real time. In this manuscript, we show examples of its application to a large-scale model of signal transduction. PMID:22871178

  7. Using Numerical Modeling to Simulate Space Capsule Ground Landings

    NASA Technical Reports Server (NTRS)

    Heymsfield, Ernie; Fasanella, Edwin L.

    2009-01-01

    Experimental work is being conducted at the National Aeronautics and Space Administration's (NASA) Langley Research Center (LaRC) to investigate ground landing capabilities of the Orion crew exploration vehicle (CEV). The Orion capsule is NASA's replacement for the Space Shuttle; it will service the International Space Station and be used for future space missions to the Moon and to Mars. To evaluate the feasibility of Orion ground landings, a series of capsule impact tests are being performed at the NASA Langley Landing and Impact Research Facility (LandIR). The experimental results derived at LandIR provide a means to validate and calibrate the nonlinear dynamic finite element models that are also being developed during this study. Because of the high cost and time intrinsic to full-scale testing, numerical simulations are favored over experimental work. Once a numerical model has been validated against actual test responses, impact simulations will be conducted to study multiple impact scenarios not practical to test. Twenty-one swing tests using the LandIR gantry were conducted from June 2007 through October 2007 to evaluate the Orion's impact response. Results for two capsule initial pitch angles, 0 deg and -15 deg, along with their computer simulations using LS-DYNA, are presented in this article. A soil-vehicle friction coefficient of 0.45 was determined by comparing the test stopping distance with computer simulations. In addition, soil modeling accuracy is presented by comparing vertical penetrometer impact tests with computer simulations for the soil model used during the swing tests.
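
    As a rough sanity check on how a friction coefficient ties stopping distance to touchdown speed, the sketch below applies the classic sliding-friction relation d = v^2 / (2 mu g) with illustrative numbers; the speed is an assumption, not LandIR data.

    ```python
    # Back-of-envelope sliding-friction stopping distance (illustrative only).
    g = 9.81     # gravitational acceleration [m/s^2]
    mu = 0.45    # soil-vehicle friction coefficient (from the study)
    v = 8.0      # horizontal touchdown speed [m/s] (assumed)

    d = v**2 / (2.0 * mu * g)   # kinetic energy dissipated by friction
    print(f"stopping distance = {d:.2f} m")
    ```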

  8. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-02-01

    To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM LPJ-GUESS to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by approximately a factor of 8, we were able to detect shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations regarding extreme climatic events. It allowed us for the first time to obtain area-wide, detailed, high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.

  9. High resolution global climate modelling; the UPSCALE project, a large simulation campaign

    NASA Astrophysics Data System (ADS)

    Mizielinski, M. S.; Roberts, M. J.; Vidale, P. L.; Schiemann, R.; Demory, M.-E.; Strachan, J.; Edwards, T.; Stephens, A.; Lawrence, B. N.; Pritchard, M.; Chiu, P.; Iwi, A.; Churchill, J.; del Cano Novales, C.; Kettleborough, J.; Roseblade, W.; Selwood, P.; Foster, M.; Glover, M.; Malcolm, A.

    2014-01-01

    The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project constructed and ran an ensemble of HadGEM3 (Hadley Centre Global Environment Model 3) atmosphere-only global climate simulations over the period 1985-2011, at resolutions of N512 (25 km), N216 (60 km) and N96 (130 km) as used in current global weather forecasting, seasonal prediction and climate modelling respectively. Alongside these present-climate simulations, a parallel ensemble looking at extremes of future climate was run, using a time-slice methodology to consider conditions at the end of this century. These simulations were primarily performed using a 144 million core hour, single-year grant of computing time from PRACE (the Partnership for Advanced Computing in Europe) in 2012, with additional resources supplied by the Natural Environment Research Council (NERC) and the Met Office. Almost 400 terabytes of simulation data were generated on the HERMIT supercomputer at the High Performance Computing Center Stuttgart (HLRS), and transferred to the JASMIN super-data cluster provided by the Science and Technology Facilities Council Centre for Data Archival (STFC CEDA) for analysis and storage. In this paper we describe the implementation of the project, present the technical challenges in terms of optimisation, data output, transfer and storage that such a project involves, and include details of the model configuration and the composition of the UPSCALE dataset. This dataset is available for scientific analysis to allow assessment of the value of model resolution in both present and potential future climate conditions.

  10. High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign

    NASA Astrophysics Data System (ADS)

    Mizielinski, M. S.; Roberts, M. J.; Vidale, P. L.; Schiemann, R.; Demory, M.-E.; Strachan, J.; Edwards, T.; Stephens, A.; Lawrence, B. N.; Pritchard, M.; Chiu, P.; Iwi, A.; Churchill, J.; del Cano Novales, C.; Kettleborough, J.; Roseblade, W.; Selwood, P.; Foster, M.; Glover, M.; Malcolm, A.

    2014-08-01

    The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project constructed and ran an ensemble of HadGEM3 (Hadley Centre Global Environment Model 3) atmosphere-only global climate simulations over the period 1985-2011, at resolutions of N512 (25 km), N216 (60 km) and N96 (130 km) as used in current global weather forecasting, seasonal prediction and climate modelling respectively. Alongside these present climate simulations a parallel ensemble looking at extremes of future climate was run, using a time-slice methodology to consider conditions at the end of this century. These simulations were primarily performed using a 144 million core hour, single year grant of computing time from PRACE (the Partnership for Advanced Computing in Europe) in 2012, with additional resources supplied by the Natural Environment Research Council (NERC) and the Met Office. Almost 400 terabytes of simulation data were generated on the HERMIT supercomputer at the High Performance Computing Center Stuttgart (HLRS), and transferred to the JASMIN super-data cluster provided by the Science and Technology Facilities Council Centre for Data Archival (STFC CEDA) for analysis and storage. In this paper we describe the implementation of the project, present the technical challenges in terms of optimisation, data output, transfer and storage that such a project involves and include details of the model configuration and the composition of the UPSCALE data set. This data set is available for scientific analysis to allow assessment of the value of model resolution in both present and potential future climate conditions.

  11. Simulation Training for Residents Focused on Mechanical Ventilation: A Randomized Trial Using Mannequin-Based Versus Computer-Based Simulation.

    PubMed

    Spadaro, Savino; Karbing, Dan Stieper; Fogagnolo, Alberto; Ragazzi, Riccardo; Mojoli, Francesco; Astolfi, Luca; Gioia, Antonio; Marangoni, Elisabetta; Rees, Stephen Edward; Volta, Carlo Alberto

    2017-12-01

    Advances in mechanical ventilation (MV), in particular lung-protective ventilation strategies, have been shown to reduce mortality. However, the translation of these advances into better therapeutic performance in real-life clinical settings continues to lag. High-fidelity simulation with a mannequin allows students to interact in lifelike situations and may be a valuable addition to traditional didactic teaching. The purpose of this study is to compare computer-based and mannequin-based approaches for training residents in MV. This prospective randomized single-blind trial involved 50 residents. All participants attended the same didactic lecture on respiratory pathophysiology and were subsequently randomized into two groups: the mannequin group (n = 25) and the computer screen-based simulator group (n = 25). One week later, each participant underwent a training assessment using five different scenarios of acute respiratory failure of different etiologies. Later, both groups underwent further testing of patient management, using in situ high-fidelity simulation of a patient with acute respiratory distress syndrome. Baseline knowledge was not significantly different between the two groups (P = 0.72). In the training assessment, no significant differences were detected between the groups. In the final assessment, only the mannequin group's scores improved significantly between the training and final sessions, in terms of both the global rating score [3.0 (2.5-4.0) vs. 2.0 (2.0-3.0), P = 0.005] and the percentage key score (82% vs. 71%, P = 0.001). Mannequin-based simulation has the potential to improve skills in managing MV.

  12. Steady and Unsteady Nozzle Simulations Using the Conservation Element and Solution Element Method

    NASA Technical Reports Server (NTRS)

    Friedlander, David Joshua; Wang, Xiao-Yen J.

    2014-01-01

    This paper presents results from computational fluid dynamic (CFD) simulations of a three-stream plug nozzle. Time-accurate, Euler, quasi-1D and 2D-axisymmetric simulations were performed as part of an effort to provide a CFD-based approach to modeling nozzle dynamics. The CFD code used for the simulations is based on the space-time Conservation Element and Solution Element (CESE) method. Steady-state results were validated using the Wind-US code and a code utilizing the MacCormack method, while the unsteady results were partially validated via an aeroacoustic benchmark problem. The CESE steady-state flow field solutions showed excellent agreement with solutions derived from the other methods and codes, and preliminary unsteady results for the three-stream plug nozzle are also shown. Additionally, a study was performed to explore the sensitivity of gross thrust computations to the control surface definition. The results showed that most of the sensitivity in computing the gross thrust is attributable to the control surface stencil resolution and the choice of stencil end points, and not to the control surface definition itself. Finally, comparisons between the quasi-1D and 2D-axisymmetric solutions were performed in order to gain insight into whether a quasi-1D solution can capture the steady and unsteady nozzle phenomena without the cost of a 2D-axisymmetric simulation. Initial results show that while the quasi-1D solutions are similar to the 2D-axisymmetric solutions, the inability of the quasi-1D simulations to predict two-dimensional phenomena limits their accuracy.

  13. Quench simulation studies of the TAC Jelly Roll superferric dipole corrector elements for the SSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez, G.

    Using the computer program SSC-DTAC-T, a modification of the quench computer program SSC-RR that models Jelly Roll coils, the quench behavior of the dipole corrector element (TAC design with Jelly Roll winding) is studied. The simulations are made as a function of the length of the magnet, the copper-to-superconductor ratio, and the thickness of insulation surrounding the wires. The magnet is quite well self-protected under all of these conditions. This implies that the other corrector multipoles (quadrupole, sextupole, octupole, etc.), which use the same conductor winding technique, are also self-protected. A passive protection system is likely to work for these elements. 6 refs., 2 figs., 1 tab.

  14. Three-dimensional heat transfer effects during the growth of LiCaAlF6 in a modified Bridgman furnace

    NASA Astrophysics Data System (ADS)

    Brandon, Simon; Derby, Jeffrey J.; Atherton, L. Jeffrey; Roberts, David H.; Vital, Russel L.

    1993-09-01

    A novel process modification, the simultaneous growth of three cylindrical Cr:LiCaAlF6 (Cr:LiCAF) crystals grown from a common seed in a vertical Bridgman furnace of rectangular cross section, is assessed using computational modeling. The analysis employs the FIDAP finite-element package and accounts for three-dimensional, steady-state, conductive heat transfer throughout the system. The induction heating system is rigorously simulated via solution of Maxwell's equations. The implementation of realistic thermal boundary conditions and furnace details is shown to be important. Furnace design features are assessed through calculations, and simulations indicate expected growth conditions to be favorable. In addition, the validity of using ampoules containing "dummy" charges for experimental furnace characterization measurements is examined through test computations.

  15. An analytical study of electric vehicle handling dynamics

    NASA Technical Reports Server (NTRS)

    Greene, J. E.; Segal, D. J.

    1979-01-01

    Hypothetical electric vehicle configurations were studied by applying available analytical methods. Elementary linearized models were used in addition to a highly sophisticated vehicle dynamics computer simulation technique. Physical properties of specific EV's were defined for various battery and powertrain packaging approaches applied to a range of weight distribution and inertial properties which characterize a generic class of EV's. Computer simulations of structured maneuvers were performed for predicting handling qualities in the normal driving range and during various extreme conditions related to accident avoidance. Results indicate that an EV with forward weight bias will possess handling qualities superior to a comparable EV that is rear-heavy or equally balanced. The importance of properly matching tires, suspension systems, and brake system front/rear torque proportioning to a given EV configuration during the design stage is demonstrated.

  16. Rehearsing decisions may help teenagers: an evaluation of a simulation game.

    PubMed

    Alemi, F; Cherry, F; Meffert, G

    1989-01-01

    This paper presents a new approach to preventing adolescent pregnancy. Information alone is not sufficient to prevent teenage pregnancy; the teenager's ability to choose and remain committed to a decision also needs to be developed. Because decision-making skills are best learned through practice in an environment with frequent feedback, we have developed a computer game which simulates the consequences of different sexual roles. In addition, the game is intended to increase communication about sex between teenagers and their role models (peers, teachers and/or parents). Increased communication is expected to reduce feelings of guilt and lead to either consistent abstention from sex or consistent contraceptive use. The paper reports on the development of the computer game and a preliminary evaluation of its impact.

  17. Frequency modulation television analysis: Distortion analysis

    NASA Technical Reports Server (NTRS)

    Hodge, W. H.; Wong, W. H.

    1973-01-01

    Computer simulation is used to calculate the time-domain waveform of a standard T-pulse-and-bar test signal distorted in passing through an FM television system. The simulator includes flat or preemphasized systems and requires specification of the RF predetection filter characteristics. The predetection filters are modeled with frequency-symmetric Chebyshev (0.1-dB ripple) and Butterworth filters. The computer was used to calculate distorted output signals for sixty-four different specified systems, and the output waveforms are plotted for all sixty-four. Comparison of the plotted graphs indicates that a four-pole Chebyshev predetection filter causes slightly more signal distortion than a corresponding Butterworth filter, and that signal distortion increases as the number of poles increases. An increase in the peak deviation also increases signal distortion. Distortion increases further with the addition of preemphasis.
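
    The filter comparison above is easy to reproduce qualitatively. The sketch below, a hedged illustration with an assumed cutoff frequency rather than the study's actual predetection parameters, builds a four-pole 0.1-dB-ripple Chebyshev and a four-pole Butterworth low-pass with SciPy and compares their step responses; the Chebyshev's extra overshoot is consistent with the higher distortion reported.

    ```python
    # Compare 4-pole Chebyshev (0.1 dB ripple) and Butterworth step responses.
    import numpy as np
    from scipy import signal

    fs = 100e6   # sample rate [Hz] (assumed)
    fc = 10e6    # filter cutoff [Hz] (assumed)

    b_c, a_c = signal.cheby1(4, 0.1, fc, fs=fs)   # Chebyshev type I
    b_b, a_b = signal.butter(4, fc, fs=fs)        # Butterworth

    t = np.arange(0, 2e-6, 1.0 / fs)
    step = np.ones_like(t)
    y_cheby = signal.lfilter(b_c, a_c, step)      # shows more overshoot/ringing
    y_butter = signal.lfilter(b_b, a_b, step)     # smoother settling
    print(y_cheby.max(), y_butter.max())
    ```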

  18. Indonesia’s Electricity Demand Dynamic Modelling

    NASA Astrophysics Data System (ADS)

    Sulistio, J.; Wirabhuana, A.; Wiratama, M. G.

    2017-06-01

    Electricity systems modelling is one of the emerging areas in global energy policy studies. The system dynamics approach and computer simulation have become common methods in energy systems planning and evaluation under many conditions. At the same time, Indonesia is experiencing several major issues in its electricity system, such as fossil fuel domination, demand-supply imbalances, distribution inefficiency, and bio-devastation. This paper aims to explain the development of system dynamics modelling approaches and computer simulation techniques for representing and predicting electricity demand in Indonesia. In addition, this paper describes the typical characteristics and relationships of the commercial business sector, the industrial sector, and the family/domestic sector as electricity subsystems in Indonesia. Moreover, it presents direct structure, behavioural, and statistical tests as model validation approaches, and ends with conclusions.
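
    In system dynamics terms, each sector's demand is a stock grown by a reinforcing feedback flow. The sketch below is an illustrative toy, with assumed initial demands and growth rates rather than the paper's calibrated model, showing how such stocks are typically integrated with simple Euler steps.

    ```python
    # Toy stock-flow simulation of sectoral electricity demand (assumed data).
    demand = {"commercial": 40.0, "industrial": 70.0, "domestic": 90.0}  # TWh
    growth = {"commercial": 0.06, "industrial": 0.05, "domestic": 0.07}  # 1/yr
    dt, years = 0.25, 10.0

    for _ in range(int(years / dt)):
        for sector in demand:
            inflow = growth[sector] * demand[sector]  # reinforcing growth loop
            demand[sector] += inflow * dt             # Euler-integrate the stock

    print({k: round(v, 1) for k, v in demand.items()},
          round(sum(demand.values()), 1))
    ```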

  19. Integration of Russian Tier-1 Grid Center with High Performance Computers at NRC-KI for LHC experiments and beyond HENP

    NASA Astrophysics Data System (ADS)

    Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.

    2015-12-01

    The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run2). The need for simulation, data processing and analysis would overwhelm the expected capacity of the grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at the Kurchatov Institute (NRC-KI) in Moscow is part of the WLCG and will process, simulate and store up to 10% of the total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, the Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. Delegating even a fraction of these supercomputing resources to LHC computing will notably increase the total capacity. In 2014 the development of a portal combining the Tier-1 facility and a supercomputer at the Kurchatov Institute was started, to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences, such as biology (genome sequencing analysis) and astrophysics (cosmic ray analysis, antimatter and dark matter searches, etc.).

  20. Multiphase, multi-electrode Joule heat computations for glass melter and in situ vitrification simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowery, P.S.; Lessor, D.L.

    Waste glass melter and in situ vitrification (ISV) processes combine electrical, thermal, and fluid flow phenomena to produce a stable waste-form product. Computational modeling of the thermal and fluid flow aspects of these processes provides a useful tool for assessing the potential performance of proposed system designs. These computations can be performed at a fraction of the cost of experiment. Consequently, computational modeling of vitrification systems can also provide an economical means for assessing the suitability of a proposed process application. The computational model described in this paper employs finite difference representations of the basic continuum conservation laws governing the thermal, fluid flow, and electrical aspects of the vitrification process, i.e., conservation of mass, momentum, energy, and electrical charge. The resulting code is a member of the TEMPEST family of codes developed at the Pacific Northwest Laboratory (operated by Battelle for the US Department of Energy). This paper provides an overview of the numerical approach employed in TEMPEST. In addition, results from several TEMPEST simulations of sample waste glass melter and ISV processes are provided to illustrate the insights to be gained from computational modeling of these processes. 3 refs., 13 figs.

  1. Quantum computing without wavefunctions: time-dependent density functional theory for universal quantum computation.

    PubMed

    Tempel, David G; Aspuru-Guzik, Alán

    2012-01-01

    We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint, this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e., as density functionals). Additionally, we demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize, two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms.

  2. The nature of undergraduates' conceptual understanding of oxygen transport and utilization in humans: Can cardiopulmonary simulation software enhance learning of propositional knowledge and/or diagnose alternative conceptions in novices and intermediates?

    NASA Astrophysics Data System (ADS)

    Wissing, Dennis Robert

    The purpose of this research was to explore undergraduates' conceptual development regarding oxygen transport and utilization, as a component of a cardiopulmonary physiology and advanced respiratory care course in an allied health program. This exploration focused on the students' development of knowledge and the presence of alternative conceptions prior to, during, and after completing cardiopulmonary physiology and advanced respiratory care courses. Using the simulation program SimBioSys (Samsel, 1994), student-participants completed a series of laboratory exercises focusing on cardiopulmonary disease states. This study examined data gathered from: (1) a novice group receiving the simulation program prior to instruction, (2) a novice group that experienced the simulation program following course completion in cardiopulmonary physiology, and (3) an intermediate group who experienced the simulation program following completion of formal education in respiratory care. This research was based on the theory of human constructivism as described by Mintzes, Wandersee, and Novak (1997). Data-gathering techniques were based on theories supported by Novak (1984), Wandersee (1997), and Chi (1997). Data were generated by exams, interviews, verbal analysis (Chi, 1997), and concept mapping. Results suggest that simulation may be an effective instructional method for assessing conceptual development and diagnosing alternative conceptions in undergraduates enrolled in a cardiopulmonary science program. Use of simulation in conjunction with clinical interviews and concept mapping may assist in verifying gaps in learning and conceptual knowledge. This study found only limited evidence to support the use of computer simulation prior to lecture to augment learning. However, it was demonstrated that students' prelecture experience with the computer simulation helped the instructor assess what the learner knew so he or she could be taught accordingly. In addition, use of computer simulation after formal instruction was shown to be useful in aiding students identified by the instructor as needing remediation.

  3. A Probabilistic Framework for the Validation and Certification of Computer Simulations

    NASA Technical Reports Server (NTRS)

    Ghanem, Roger; Knio, Omar

    2000-01-01

    The paper presents a methodology for quantifying, propagating, and managing the uncertainty in the data required to initialize computer simulations of complex phenomena. The purpose of the methodology is to permit the quantitative assessment of a certification level to be associated with the predictions from the simulations, as well as the design of a data acquisition strategy to achieve a target level of certification. The value of a methodology that can address the above issues is obvious, especially in light of the trends in the availability of computational resources and in sensor technology. These two trends make it possible to probe physical phenomena both with physical sensors and with complex models at previously inconceivable levels. With these new abilities arises the need to develop the knowledge to integrate the information from sensors and computer simulations. This is achieved in the present work by tracing both activities back to a level of abstraction that highlights their commonalities, thus allowing them to be manipulated in a mathematically consistent fashion. In particular, the mathematical theory underlying computer simulations has long been associated with partial differential equations and functional analysis concepts such as Hilbert spaces and orthogonal projections. By relying on a probabilistic framework for the modeling of data, a Hilbert space framework emerges that permits the modeling of coefficients in the governing equations as random variables, or equivalently, as elements in a Hilbert space. This permits the development of an approximation theory for probabilistic problems that parallels that of deterministic approximation theory. According to this formalism, the solution of the problem is identified by its projection on a basis in the Hilbert space of random variables, as opposed to more traditional techniques where the solution is approximated by its first- or second-order statistics. The present representation, in addition to capturing significantly more information than the traditional approach, facilitates the linkage between different interacting stochastic systems, as is typically observed in real-life situations.
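
    The projection idea described above is the essence of polynomial chaos. As a hedged toy sketch under assumed distributions (not the paper's formulation), the code below projects a random solution u(xi) onto probabilists' Hermite polynomials by Monte Carlo, using E[He_n^2] = n! for normalization.

    ```python
    # Monte Carlo projection of a random solution onto a Hermite (polynomial
    # chaos) basis; the model k*u = 1 with lognormal k is an assumed toy.
    import numpy as np
    from numpy.polynomial.hermite_e import hermeval
    from math import factorial

    rng = np.random.default_rng(0)
    xi = rng.standard_normal(200_000)     # standard Gaussian germ
    k = np.exp(0.3 * xi)                  # lognormal random coefficient
    u = 1.0 / k                           # exact solution of k * u = 1

    coeffs = []
    for n in range(5):
        c = np.zeros(n + 1)
        c[n] = 1.0                        # coefficient vector selecting He_n
        He_n = hermeval(xi, c)            # evaluate the n-th Hermite polynomial
        coeffs.append(np.mean(u * He_n) / factorial(n))

    print(np.round(coeffs, 4))            # spectral coefficients of u
    ```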

  4. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.

    PubMed

    Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia

    2016-03-08

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented.

  5. Efficient morse decompositions of vector fields.

    PubMed

    Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene

    2008-01-01

    Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories, such as fixed points, periodic orbits, and separatrices, that are sensitive to noise and to errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretation. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structure of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful in applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational cost. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach to constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces, including engine simulation data sets.
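
    To make the tau-map idea concrete, here is a deliberately simplified sketch, not the authors' implementation: each grid cell is advected for time tau, an edge is drawn from the cell to the cell containing its image, and the strongly connected components of the resulting directed graph serve as candidate Morse sets. The vector field and all parameters are assumptions for illustration.

    ```python
    # Toy tau-map Morse decomposition on a uniform grid (illustrative only).
    import numpy as np
    import networkx as nx

    n, tau, steps = 32, 1.0, 50
    v = lambda x, y: (y, -x + 0.1 * y * (1 - x**2))  # assumed vector field

    G = nx.DiGraph()
    centers = (np.arange(n) + 0.5) / n * 4.0 - 2.0   # cell centers in [-2, 2]
    for i, x0 in enumerate(centers):
        for j, y0 in enumerate(centers):
            px, py = x0, y0
            for _ in range(steps):                   # forward Euler over time tau
                vx, vy = v(px, py)
                px += vx * tau / steps
                py += vy * tau / steps
            ii = int(np.clip((px + 2.0) / 4.0 * n, 0, n - 1))
            jj = int(np.clip((py + 2.0) / 4.0 * n, 0, n - 1))
            G.add_edge((i, j), (ii, jj))             # cell -> image cell

    # Recurrent sets: nontrivial SCCs, or single cells mapping to themselves.
    morse_sets = [c for c in nx.strongly_connected_components(G)
                  if len(c) > 1 or next(iter(c)) in G[next(iter(c))]]
    print(len(morse_sets))
    ```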

  6. Development of simulation computer complex specification

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The Training Simulation Computer Complex Study was one of three studies contracted in support of preparations for procurement of a shuttle mission simulator for shuttle crew training. The subject study was concerned with definition of the software loads to be imposed on the computer complex to be associated with the shuttle mission simulator and the development of procurement specifications based on the resulting computer requirements. These procurement specifications cover the computer hardware and system software as well as the data conversion equipment required to interface the computer to the simulator hardware. The development of the necessary hardware and software specifications required the execution of a number of related tasks which included, (1) simulation software sizing, (2) computer requirements definition, (3) data conversion equipment requirements definition, (4) system software requirements definition, (5) a simulation management plan, (6) a background survey, and (7) preparation of the specifications.

  7. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-07-01

    To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM (dynamic global vegetation model) LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator) to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by approximately a factor of 8, we were able to detect the shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations regarding extreme climatic events. It allowed us for the first time to obtain area-wide, detailed high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.

  8. Adaptive screening for depression--recalibration of an item bank for the assessment of depression in persons with mental and somatic diseases and evaluation in a simulated computer-adaptive test environment.

    PubMed

    Forkmann, Thomas; Kroehne, Ulf; Wirtz, Markus; Norra, Christine; Baumeister, Harald; Gauggel, Siegfried; Elhan, Atilla Halil; Tennant, Alan; Boecker, Maren

    2013-11-01

    This study performed a simulation study of computer-adaptive testing (CAT) based on the Aachen Depression Item Bank (ADIB), which was developed for the assessment of depression in persons with somatic diseases. Prior to the CAT simulation, the ADIB was newly calibrated. Recalibration was performed in a sample of 161 patients treated for a depressive syndrome, 103 patients from cardiology, and 103 patients from otorhinolaryngology (mean age 44.1, SD=14.0; 44.7% female) and was cross-validated in a sample of 117 patients undergoing rehabilitation for cardiac diseases (mean age 58.4, SD=10.5; 24.8% women). Unidimensionality of the item bank was checked, and a Rasch analysis was performed to evaluate local dependency (LD), differential item functioning (DIF), item fit, and reliability. The CAT simulation was conducted with the total sample and additional simulated data. Recalibration resulted in a strictly unidimensional item bank with 36 items, showing good Rasch model fit (item fit residuals<|2.5|) and no DIF or LD. The CAT simulation revealed that 13 items on average were necessary to estimate depression in the range of -2 to +2 logits when terminating at SE≤0.32, and 4 items when using SE≤0.50. Receiver operating characteristic analysis showed that θ estimates based on the CAT algorithm have good criterion validity with regard to depression diagnoses (area under the curve ≥.78 for all cut-off criteria). The recalibration of the ADIB succeeded, and the simulation studies suggest that it has good screening performance in the samples investigated and may reasonably add to the improvement of depression assessment.
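
    The CAT loop described above (maximum-information item selection with an SE-based stopping rule) can be sketched in a few lines. The following is a hedged toy under a plain Rasch model with assumed item difficulties and a simulated respondent, not the study's calibrated ADIB software.

    ```python
    # Minimal Rasch-model CAT loop: pick the most informative item, update the
    # ML ability estimate, stop when SE(theta) <= 0.32 (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    b = rng.uniform(-2.0, 2.0, 36)        # 36 item difficulties (assumed)
    theta_true = 0.5                      # simulated respondent's level (assumed)

    def prob(theta, b):                   # Rasch endorsement probability
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    asked, responses, theta = [], [], 0.0
    while True:
        info = prob(theta, b) * (1.0 - prob(theta, b))   # Fisher information
        info[asked] = -np.inf                            # never reuse an item
        j = int(np.argmax(info))                         # max-information pick
        asked.append(j)
        responses.append(float(rng.random() < prob(theta_true, b[j])))
        for _ in range(25):                              # Newton-Raphson ML update
            pj = prob(theta, b[asked])
            theta -= np.sum(np.array(responses) - pj) / (-np.sum(pj * (1.0 - pj)))
            theta = float(np.clip(theta, -4.0, 4.0))     # guard against divergence
        se = 1.0 / np.sqrt(np.sum(pj * (1.0 - pj)))
        if se <= 0.32 or len(asked) == len(b):
            break

    print(len(asked), round(theta, 2), round(se, 2))
    ```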

  9. Computational simulation of concurrent engineering for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1992-01-01

    Results are summarized of an investigation to assess the infrastructure available and the technology readiness for developing computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties, all fundamental in developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.

  10. Computational simulation for concurrent engineering of aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness for developing computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties, all fundamental to developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  11. Computational simulation for concurrent engineering of aerospace propulsion systems

    NASA Astrophysics Data System (ADS)

    Chamis, C. C.; Singhal, S. N.

    1993-02-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness for developing computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties, all fundamental to developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  12. VCSim3: a VR simulator for cardiovascular interventions.

    PubMed

    Korzeniowski, Przemyslaw; White, Ruth J; Bello, Fernando

    2018-01-01

    Effective and safe performance of cardiovascular interventions requires excellent catheter/guidewire manipulation skills. These skills are currently gained mainly through an apprenticeship on real patients, which may not be safe or cost-effective. Computer simulation offers an alternative for core skills training. However, replicating the physical behaviour of real instruments navigated through blood vessels is a challenging task. We have developed VCSim3, a virtual reality simulator for cardiovascular interventions. The simulator leverages an inextensible Cosserat rod to model virtual catheters and guidewires. Their mechanical properties were optimized with respect to their real counterparts scanned in a silicone phantom using X-ray CT imaging. The instruments are manipulated via a VSP haptic device. Supporting features, such as fluoroscopic visualization, contrast flow propagation, cardiac motion, balloon inflation, and stent deployment, enable performing a complete angioplasty procedure. We present detailed results on the simulation accuracy of the virtual instruments, along with their computational performance. In addition, the results of a preliminary face and content validation study conducted with a group of 17 interventional radiologists are given. VR simulation of cardiovascular procedures can contribute to surgical training and improve the educational experience without putting patients at risk, raising ethical issues, or requiring expensive animal or cadaver facilities. VCSim3 is still a prototype, yet the initial results indicate that it provides promising foundations for further development.

  13. Star Clusters Simulations Using GRAPE-5

    NASA Astrophysics Data System (ADS)

    Fukushige, Toshiyuki

    We discuss simulations of star clusters, such as globular clusters, galaxies, and galaxy clusters, using GRAPE (GRAvity PipE)-5. GRAPE-5 is a new version of the special-purpose computer for many-body simulations, GRAPE. GRAPE-5 has eight custom pipeline LSIs (G5 chips) per board, and its peak performance is 38.4 Gflops. GRAPE-5 differs from its predecessor, GRAPE-3, in four respects: (a) the calculation speed per chip is 8 times faster; (b) the PCI bus is adopted as the interface between the host computer and GRAPE-5, so the communication speed is an order of magnitude faster; (c) in addition to the pure 1/r potential, GRAPE-5 can calculate forces with an arbitrary cutoff function, so it can be applied to the Ewald or P3M methods; and (d) the pairwise force calculated on GRAPE-5 is about 10 times more accurate. Using the GRAPE-5 system with the Barnes-Hut tree algorithm, we can complete the force calculations for one timestep in 10 (N/10^6) seconds. This speed enables us to perform a pre-collapse globular cluster simulation with a realistic number of particles, and a galaxy simulation with more than 1 million particles, within several days. We also present some results of star cluster simulations using the GRAPE-5 system.

  14. Three-dimensional time dependent computation of turbulent flow

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Reynolds, W. C.; Ferziger, J. H.

    1975-01-01

    The three-dimensional, primitive equations of motion are solved numerically for the case of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to the governing equations to define the large-scale field. This gives rise to additional second-order computed-scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth-order differencing scheme in space and a second-order Adams-Bashforth predictor for explicit time stepping. The results are compared to experiments, and statistical information is extracted from the computer-generated data.
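
    The Gaussian filtering step mentioned above is simple to demonstrate in one dimension. The sketch below, with assumed grid and filter-width values, applies the standard Gaussian filter transfer function G(k) = exp(-k^2 Delta^2 / 24) in spectral space to split a velocity field into large-scale and residual parts.

    ```python
    # Spectral Gaussian filtering of a 1-D velocity field (assumed parameters).
    import numpy as np

    N, delta = 256, 0.1                        # grid size and filter width
    x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    u = np.sin(x) + 0.2 * np.sin(16.0 * x)     # toy field with small scales

    k = np.fft.fftfreq(N, d=x[1] - x[0]) * 2.0 * np.pi   # angular wavenumbers
    G = np.exp(-(k * delta)**2 / 24.0)         # Gaussian filter transfer function
    u_large = np.fft.ifft(np.fft.fft(u) * G).real        # resolved (filtered) field
    u_resid = u - u_large                      # residual (subfilter) field
    print(abs(u_resid).max())
    ```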

  15. Antenna design and development for the microwave subsystem experiments for the terminal configured vehicle project

    NASA Technical Reports Server (NTRS)

    Becher, J.; Cohen, N.; Rublee, J.

    1981-01-01

    The feasibility of classifying an airport terminal area for multipath effects, i.e., fadeout potentials or limits of video resolution, is examined. Established transmission links in terminal areas were modeled for landing approaches and overflight patterns. A computer program was written to obtain signal strength along a specified flight path. The application of this model to evaluate the signal transmission obtained in an actual flight equipped with additional signal strength monitoring equipment is described. The actual and computed received signals are compared, and the feasibility of using the computer simulation to predict signal amplitude fluctuations is evaluated.

  16. Simulation of Local Blood Flow in Human Brain under Altered Gravity

    NASA Technical Reports Server (NTRS)

    Kim, Chang Sung; Kiris, Cetin; Kwak, Dochan

    2003-01-01

    In addition to the altered gravitational forces, specific shapes and connections of arteries in the brain vary in the human population (Cebral et al., 2000; Ferrandez et al., 2002). Considering the geometric variations, pulsatile unsteadiness, and moving walls, computational approach in analyzing altered blood circulation will offer an economical alternative to experiments. This paper presents a computational approach for modeling the local blood flow through the human brain under altered gravity. This computational approach has been verified through steady and unsteady experimental measurements and then applied to the unsteady blood flows through a carotid bifurcation model and an idealized Circle of Willis (COW) configuration under altered gravity conditions.

  17. Simulation and analysis of a model dinoflagellate predator-prey system

    NASA Astrophysics Data System (ADS)

    Mazzoleni, M. J.; Antonelli, T.; Coyne, K. J.; Rossi, L. F.

    2015-12-01

    This paper analyzes the dynamics of a model dinoflagellate predator-prey system and uses simulations to validate theoretical and experimental studies. A simple model for predator-prey interactions is derived by drawing upon analogies from chemical kinetics. This model is then modified to account for inefficiencies in predation. Simulation results are shown to closely match the model predictions. Additional simulations are then run which are based on experimental observations of predatory dinoflagellate behavior, and this study specifically investigates how the predatory dinoflagellate Karlodinium veneficum uses toxins to immobilize its prey and increase its feeding rate. These simulations account for complex dynamics that were not included in the basic models, and the results from these computational simulations closely match the experimentally observed predatory behavior of K. veneficum and reinforce the notion that predatory dinoflagellates utilize toxins to increase their feeding rate.
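
    The kinetics analogy described above leads directly to mass-action rate equations. The sketch below is a hedged illustration of that basic predator-prey system with assumed rate constants, integrated with SciPy; it omits the paper's inefficiency and toxin-immobilization extensions.

    ```python
    # Mass-action predator-prey system integrated with SciPy (assumed rates).
    from scipy.integrate import solve_ivp

    a, b, c, d = 1.0, 0.02, 0.01, 0.5   # growth, predation, conversion, death

    def rhs(t, y):
        prey, pred = y
        return [a * prey - b * prey * pred,   # prey growth minus predation
                c * prey * pred - d * pred]   # predator growth minus mortality

    sol = solve_ivp(rhs, (0.0, 50.0), [40.0, 9.0], max_step=0.1)
    print(sol.y[:, -1])                       # populations at the final time
    ```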

  18. Distributed communication and psychosocial performance in simulated space dwelling groups

    NASA Astrophysics Data System (ADS)

    Hienz, R. D.; Brady, J. V.; Hursh, S. R.; Ragusa, L. C.; Rouse, C. O.; Gasior, E. D.

    2005-05-01

    The present report describes the development and application of a distributed interactive multi-person simulation in a computer-generated planetary environment as an experimental test bed for modeling the human performance effects of variations in the types of communication modes available, and in the types of stress and incentive conditions underlying the completion of mission goals. The results demonstrated a high degree of interchangeability between communication modes (audio, text) when one mode was not available. In addition, imposing time-pressure stress on task completion reduced performance effectiveness, and these performance reductions were ameliorated by introducing positive incentives contingent upon improved performance. The results confirmed that cooperative and productive psychosocial interactions can be maintained between individually isolated and dispersed members of simulated spaceflight crews communicating and problem-solving effectively over extended time intervals without the benefit of one another's physical presence.

  19. Expanding the catalog of binary black-hole simulations: aligned-spin configurations

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; SXS Collaboration

    2015-04-01

    A major goal of numerical relativity is to model the inspiral and merger of binary black holes through sufficiently accurate and long simulations, to enable the successful detection of gravitational waves. However, covering the full parameter space of binary configurations is a computationally daunting task. The SXS Collaboration has made important progress in this direction recently, with a catalog of 174 publicly available binary black-hole simulations [black-holes.org/waveforms]. Nevertheless, the parameter-space coverage remains sparse, even for non-precessing binaries. In this talk, I will describe an addition to the SXS catalog to improve its coverage, consisting of 95 new simulations of aligned-spin binaries with moderate mass ratios and dimensionless spins as high as 0.9. Some applications of these new simulations will also be mentioned.

  20. Advanced EUV mask and imaging modeling

    NASA Astrophysics Data System (ADS)

    Evanschitzky, Peter; Erdmann, Andreas

    2017-10-01

    The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.
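
    The sketch below illustrates the Abbe imaging principle in a strongly simplified scalar form (the model reviewed in the paper is fully vectorial, with Jones pupils and rigorous mask spectra); the array shapes, pixel-offset source representation, and names are illustrative assumptions.

        import numpy as np

        # Scalar Abbe imaging: the image is the source-weighted incoherent sum
        # of coherent images, each formed by shifting the mask spectrum by a
        # source point and filtering it with the projector pupil.
        def abbe_image(mask, pupil, source):
            spectrum = np.fft.fft2(mask)
            image = np.zeros(mask.shape, dtype=float)
            total = 0.0
            for sx, sy, weight in source:     # source points as pixel offsets
                shifted = np.roll(spectrum, (sx, sy), axis=(0, 1))
                field = np.fft.ifft2(shifted * pupil)
                image += weight * np.abs(field) ** 2
                total += weight
            return image / total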

  1. Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem

    PubMed Central

    Akutsah, Francis; Olusanya, Micheal O.; Adewumi, Aderemi O.

    2018-01-01

    The intelligent water drop algorithm is a swarm-based metaheuristic inspired by the characteristics of water drops in a river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm is introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from becoming trapped in local minima and to improve its solution quality. In addition, some potentially problematic issues associated with simulated annealing, including high computational runtime and the exponential calculation of the acceptance probability, are investigated. Because this exponential calculation is computationally expensive, a better way of evaluating the acceptance criterion is considered in order to maximize the performance of the intelligent water drop algorithm. The performance of the proposed hybrid algorithm is evaluated on 33 standard test problems, with the results compared against the solutions offered by four well-known techniques from the literature. Experimental results and statistical tests show that the new method offers outstanding performance in terms of both solution quality and runtime. In addition, the proposed algorithm is suitable for solving large-scale problems. PMID:29554662
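
    For reference, the standard Metropolis acceptance test whose exponential the paper identifies as costly is sketched below; the threshold shortcut in the comments is one plausible cheaper variant, an assumption here rather than necessarily the authors' scheme.

        import math
        import random

        # Standard Metropolis acceptance test for simulated annealing: always
        # accept improving moves, and accept worsening moves with probability
        # exp(-delta_cost / temperature).
        def metropolis_accept(delta_cost, temperature):
            if delta_cost <= 0.0:
                return True
            # Hypothetical shortcut (an assumption, not necessarily the
            # authors' scheme): skip the exp() call when the move is clearly
            # hopeless at the current temperature.
            if delta_cost > 10.0 * temperature:   # exp(-10) is about 4.5e-5
                return False
            return random.random() < math.exp(-delta_cost / temperature)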

  2. Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem.

    PubMed

    Ezugwu, Absalom E; Akutsah, Francis; Olusanya, Micheal O; Adewumi, Aderemi O

    2018-01-01

    The intelligent water drop algorithm is a swarm-based metaheuristic inspired by the characteristics of water drops in a river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm is introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from becoming trapped in local minima and to improve its solution quality. In addition, some potentially problematic issues associated with simulated annealing, including high computational runtime and the exponential calculation of the acceptance probability, are investigated. Because this exponential calculation is computationally expensive, a better way of evaluating the acceptance criterion is considered in order to maximize the performance of the intelligent water drop algorithm. The performance of the proposed hybrid algorithm is evaluated on 33 standard test problems, with the results compared against the solutions offered by four well-known techniques from the literature. Experimental results and statistical tests show that the new method offers outstanding performance in terms of both solution quality and runtime. In addition, the proposed algorithm is suitable for solving large-scale problems.

  3. Numerical Investigation of Plasma Detachment in Magnetic Nozzle Experiments

    NASA Technical Reports Server (NTRS)

    Sankaran, Kamesh; Polzin, Kurt A.

    2008-01-01

    At present there exists no generally accepted theoretical model that provides a consistent physical explanation of plasma detachment from an externally-imposed magnetic nozzle. To make progress towards that end, simulation of plasma flow in the magnetic nozzle of an arcjet experiment is performed using a multidimensional numerical simulation tool that includes theoretical models of the various dispersive and dissipative processes present in the plasma. This is an extension of the simulation tool employed in previous work by Sankaran et al. The aim is to compare the computational results with various proposed magnetic nozzle detachment theories to develop an understanding of the physical mechanisms that cause detachment. An applied magnetic field topology is obtained using a magnetostatic field solver (see Fig. 1), and this field is superimposed on the time-dependent magnetic field induced in the plasma to provide a self-consistent field description. The applied magnetic field and model geometry match those found in experiments by Kuriki and Okada. This geometry is modeled because there is a substantial amount of experimental data that can be compared to the computational results, allowing for validation of the model. In addition, comparison of the simulation results with the experimentally obtained plasma parameters will provide insight into the mechanisms that lead to plasma detachment, revealing how they scale with different input parameters. Further studies will focus on modeling literature experiments both for additional code validation and to extract physical insight regarding the mechanisms driving detachment.

  4. Mars oxygen production system design

    NASA Technical Reports Server (NTRS)

    Cotton, Charles E.; Pillow, Linda K.; Perkinson, Robert C.; Brownlie, R. P.; Chwalowski, P.; Carmona, M. F.; Coopersmith, J. P.; Goff, J. C.; Harvey, L. L.; Kovacs, L. A.

    1989-01-01

    The design and construction phase of the Mars oxygen demonstration project is summarized. The basic hardware required to produce oxygen from simulated Mars atmosphere was assembled and tested. Some design problems still remain with the sample collection and storage system. In addition, design and development of computer-compatible data acquisition and control instrumentation is ongoing.

  5. Evaluating the Cognitive Consequences of Playing "Portal" for a Short Duration

    ERIC Educational Resources Information Center

    Adams, Deanne M.; Pilegard, Celeste; Mayer, Richard E.

    2016-01-01

    Learning physics often requires overcoming common misconceptions based on naïve interpretations of observations in the everyday world. One proposed way to help learners build appropriate physics intuitions is to expose them to computer simulations in which motion is based on Newtonian principles. In addition, playing video games that require…

  6. Mars oxygen production system design

    NASA Technical Reports Server (NTRS)

    1988-01-01

    This report summarizes the design and construction of the Mars oxygen demonstration project. The basic hardware required to produce oxygen from simulated Mars atmosphere has been assembled and tested. Some design problems still remain with the sample collection and storage system. In addition, design and development of computer data acquisition and control instrumentation is continuing.

  7. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    ERIC Educational Resources Information Center

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  8. Additions and improvements to the high energy density physics capabilities in the FLASH code

    NASA Astrophysics Data System (ADS)

    Lamb, D. Q.; Flocke, N.; Graziani, C.; Tzeferacos, P.; Weide, K.

    2016-10-01

    FLASH is an open source, finite-volume Eulerian, spatially adaptive radiation magnetohydrodynamics code that has the capabilities to treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures, and has a broad user base. Extensive high energy density physics (HEDP) capabilities have been added to FLASH to make it an open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. In particular, we showcase the ability of FLASH to simulate the Faraday Rotation Measure produced by the presence of magnetic fields, as well as proton radiography, proton self-emission, and Thomson scattering diagnostics with and without magnetic fields. We also describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at the University of Chicago by the DOE NNSA ASC through the Argonne Institute for Computing in Science under field work proposal 57789; and the NSF under Grant PHY-0903997.

  9. Simulating Responses of Gravitational-Wave Instrumentation

    NASA Technical Reports Server (NTRS)

    Armstrong, John; Edlund, Jeffrey; Vallisneri, Michele

    2006-01-01

    Synthetic LISA is a computer program for simulating the responses of the instrumentation of the NASA/ESA Laser Interferometer Space Antenna (LISA) mission, the purpose of which is to detect and study gravitational waves. Synthetic LISA generates synthetic time series of the LISA fundamental noises, as filtered through all the time-delay-interferometry (TDI) observables. (TDI is a method of canceling phase noise in temporally varying unequal-arm interferometers.) Synthetic LISA provides a streamlined module to compute the TDI responses to gravitational waves, according to a full model of TDI (including the motion of the LISA array and the temporal and directional dependence of the arm lengths). Synthetic LISA is written in the C++ programming language as a modular package that accommodates the addition of code for specific gravitational wave sources or for new noise models. In addition, time series for waves and noises can be easily loaded from disk storage or electronic memory. The package includes a Python-language interface for easy, interactive steering and scripting. Through Python, Synthetic LISA can read and write data files in Flexible Image Transport System (FITS), which is a commonly used astronomical data format.
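
    As a generic illustration (not Synthetic LISA's own API), the sketch below round-trips a simulated time series through FITS using astropy; the file and column names are assumptions.

        import numpy as np
        from astropy.io import fits

        # Write a synthetic noise time series to a FITS binary table and read
        # it back; column names are illustrative.
        t = np.arange(0.0, 1024.0)                   # sample times, s
        x = np.random.normal(size=t.size)            # synthetic noise series
        cols = fits.ColDefs([
            fits.Column(name='TIME', format='D', array=t),
            fits.Column(name='STRAIN', format='D', array=x),
        ])
        fits.BinTableHDU.from_columns(cols).writeto('series.fits', overwrite=True)
        table = fits.open('series.fits')[1].data     # read the series back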

  10. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  11. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE PAGES

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake; ...

    2017-03-24

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  12. A flow-simulation model of the tidal Potomac River

    USGS Publications Warehouse

    Schaffranek, Raymond W.

    1987-01-01

    A one-dimensional model capable of simulating flow in a network of interconnected channels has been applied to the tidal Potomac River including its major tributaries and embayments between Washington, D.C., and Indian Head, Md. The model can be used to compute water-surface elevations and flow discharges at any of 66 predetermined locations or at any alternative river cross sections definable within the network of channels. In addition, the model can be used to provide tidal-interchange flow volumes and to evaluate tidal excursions and the flushing properties of the riverine system. Comparisons of model-computed results with measured water-surface elevations and discharges demonstrate the validity and accuracy of the model. Tidal-cycle flow volumes computed by the calibrated model have been verified to be within an accuracy of ±10 percent. Quantitative characteristics of the hydrodynamics of the tidal river are identified and discussed. The comprehensive flow data provided by the model can be used to better understand the geochemical, biological, and other processes affecting the river's water quality.
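
    The sketch below shows how a tidal-interchange volume can be derived from a computed discharge series by integrating the flood and ebb phases separately; the series here is a synthetic stand-in for model output at one cross section.

        import numpy as np

        # Integrate a discharge series over one tidal cycle, separating flood
        # (positive) and ebb (negative) phases to obtain interchange volumes.
        t = np.linspace(0.0, 12.42 * 3600.0, 746)       # one M2 cycle, seconds
        q = 2500.0 * np.sin(2.0 * np.pi * t / t[-1])    # discharge, m^3/s
        dt = t[1] - t[0]
        flood_volume = np.sum(np.where(q > 0.0, q, 0.0)) * dt   # m^3
        ebb_volume = np.sum(np.where(q < 0.0, -q, 0.0)) * dt    # m^3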

  13. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    PubMed

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
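
    A minimal sketch of the velocity-driven torque computation described above follows; the gain value and names are assumptions rather than the paper's implementation.

        import numpy as np

        # Velocity-driven tracking: joint torques come from the error between
        # desired and current joint angular velocities, avoiding the gain
        # tuning that PD position tracking requires (gain value is assumed).
        def velocity_driven_torques(omega_desired, omega_current, k_v=40.0):
            return k_v * (omega_desired - omega_current)

        # One simulation step: torques feed the dynamics integrator.
        omega_des = np.array([1.2, -0.4, 0.0])   # rad/s, from adjusted trajectory
        omega_cur = np.array([0.9, -0.1, 0.2])   # rad/s, from simulation state
        tau = velocity_driven_torques(omega_des, omega_cur)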

  14. Error Analysis of Indirect Broadband Monitoring of Multilayer Optical Coatings using Computer Simulations

    NASA Astrophysics Data System (ADS)

    Semenov, Z. V.; Labusov, V. A.

    2017-11-01

    Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.

  15. Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor; Kurth, Thorsten; Carrier, Pierre; Wichmann, Nathan; Prendergast, David; Kent, Paul; Deslippe, Jack

    Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon-Phi "Knights Landing" architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.

  16. Computer Simulations to Study Diffraction Effects of Stacking Faults in Beta-SiC: II. Experimental Verification

    NASA Technical Reports Server (NTRS)

    Pujar, Vijay V.; Cawley, James D.; Levine, S. (Technical Monitor)

    2000-01-01

    Earlier results from computer simulation studies suggest a correlation between the spatial distribution of stacking errors in the Beta-SiC structure and features observed in X-ray diffraction patterns of the material. Reported here are experimental results obtained from two types of nominally Beta-SiC specimens, which yield distinct XRD data. These samples were analyzed using high-resolution transmission electron microscopy (HRTEM), and the stacking error distribution was directly determined. The HRTEM results compare well with those deduced by matching the XRD data with simulated spectra, confirming the hypothesis that the XRD data indicate not only the presence and density of stacking errors but can also yield information regarding their distribution. In addition, the stacking error population in both specimens is related to their synthesis conditions, and this relation appears consistent with the one developed by others to explain the formation of the corresponding polytypes.

  17. Overcoming free energy barriers using unconstrained molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hénin, Jérôme; Chipot, Christophe

    2004-08-01

    Association of unconstrained molecular dynamics (MD) and the formalisms of thermodynamic integration and average force [Darve and Pohorille, J. Chem. Phys. 115, 9169 (2001)] have been employed to determine potentials of mean force. When implemented in a general MD code, the additional computational effort, compared to other standard, unconstrained simulations, is marginal. The force acting along a chosen reaction coordinate ξ is estimated from the individual forces exerted on the chemical system and accumulated as the simulation progresses. The estimated free energy derivative computed for small intervals of ξ is canceled by an adaptive bias to overcome the barriers of the free energy landscape. Evolution of the system along the reaction coordinate is, thus, limited by its sole self-diffusion properties. The illustrative examples of the reversible unfolding of deca-L-alanine, the association of acetate and guanidinium ions in water, the dimerization of methane in water, and its transfer across the water liquid-vapor interface are examined to probe the efficiency of the method.
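
    A minimal sketch of the adaptive-bias bookkeeping follows, assuming a binned reaction coordinate; the MD force evaluation itself is outside the scope of the sketch, and all names and bin settings are illustrative.

        import numpy as np

        # Binned bookkeeping for the adaptive bias: instantaneous forces
        # along xi are accumulated as sampling progresses, and the applied
        # bias cancels the running mean force in each bin.
        n_bins = 100
        force_sum = np.zeros(n_bins)
        count = np.zeros(n_bins, dtype=int)

        def accumulate_and_bias(xi, f_xi, xi_min=0.0, xi_max=1.0):
            b = min(int((xi - xi_min) / (xi_max - xi_min) * n_bins), n_bins - 1)
            force_sum[b] += f_xi
            count[b] += 1
            return -force_sum[b] / count[b]   # bias force applied along xi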

  18. Micromagnetics of rare-earth efficient permanent magnets

    NASA Astrophysics Data System (ADS)

    Fischbacher, Johann; Kovacs, Alexander; Gusenbauer, Markus; Oezelt, Harald; Exl, Lukas; Bance, Simon; Schrefl, Thomas

    2018-05-01

    The development of permanent magnets containing less or no rare-earth elements is linked to profound knowledge of the coercivity mechanism. Prerequisites for a promising permanent magnet material are a high spontaneous magnetization and a sufficiently high magnetic anisotropy. In addition to the intrinsic magnetic properties, the microstructure of the magnet plays a significant role in establishing coercivity. The influence of the microstructure on coercivity, remanence, and energy density product can be understood by using micromagnetic simulations. With advances in computer hardware and numerical methods, hysteresis curves of magnets can be computed quickly, so that the simulations can readily provide guidance for the development of permanent magnets. The potential of rare-earth reduced and rare-earth free permanent magnets is investigated using micromagnetic simulations. The results show that excellent hard magnetic properties can be achieved in grain boundary engineered NdFeB, rare-earth magnets with a ThMn12 structure, Co-based nano-wires, and L10-FeNi, provided that the magnet’s microstructure is optimized.

  19. Numerical simulation of a full-loop circulating fluidized bed under different operating conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Yupeng; Musser, Jordan M.; Li, Tingwen

    Both experimental and computational studies of the fluidization of high-density polyethylene (HDPE) particles in a small-scale full-loop circulating fluidized bed are conducted. Experimental measurements of pressure drop are taken at different locations along the bed. The solids circulation rate is measured with an advanced Particle Image Velocimetry (PIV) technique. The bed height of the quasi-static region in the standpipe is also measured. Comparative numerical simulations are performed with a Computational Fluid Dynamics solver utilizing a Discrete Element Method (CFD-DEM). This paper reports a detailed and direct comparison between CFD-DEM results and experimental data for realistic gas-solid fluidization in a full-loop circulating fluidized bed system. The comparison reveals good agreement with respect to system component pressure drop and inventory height in the standpipe. In addition, the effect of different drag laws applied within the CFD simulation is examined and compared with experimental results.
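
    As an illustration of the kind of drag-law sensitivity examined, the sketch below compares the standard Schiller-Naumann single-particle correlation against the Stokes limit; which specific drag laws the authors tested is not stated in this record.

        import numpy as np

        # Single-particle drag coefficient correlations over a range of
        # particle Reynolds numbers: Schiller-Naumann versus the Stokes limit.
        def cd_schiller_naumann(re):
            re = np.maximum(re, 1e-12)
            return np.where(re < 1000.0,
                            24.0 / re * (1.0 + 0.15 * re**0.687),
                            0.44)

        def cd_stokes(re):
            return 24.0 / np.maximum(re, 1e-12)

        re = np.logspace(-2, 4, 200)
        enhancement = cd_schiller_naumann(re) / cd_stokes(re)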

  20. Overcoming free energy barriers using unconstrained molecular dynamics simulations.

    PubMed

    Hénin, Jérôme; Chipot, Christophe

    2004-08-15

    Association of unconstrained molecular dynamics (MD) and the formalisms of thermodynamic integration and average force [Darve and Pohorille, J. Chem. Phys. 115, 9169 (2001)] have been employed to determine potentials of mean force. When implemented in a general MD code, the additional computational effort, compared to other standard, unconstrained simulations, is marginal. The force acting along a chosen reaction coordinate xi is estimated from the individual forces exerted on the chemical system and accumulated as the simulation progresses. The estimated free energy derivative computed for small intervals of xi is canceled by an adaptive bias to overcome the barriers of the free energy landscape. Evolution of the system along the reaction coordinate is, thus, limited by its sole self-diffusion properties. The illustrative examples of the reversible unfolding of deca-L-alanine, the association of acetate and guanidinium ions in water, the dimerization of methane in water, and its transfer across the water liquid-vapor interface are examined to probe the efficiency of the method. (c) 2004 American Institute of Physics.
