Sample records for computer simulation aspect

  1. Framework to model neutral particle flux in convex high aspect ratio structures using one-dimensional radiosity

    NASA Astrophysics Data System (ADS)

    Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried

    2017-02-01

    We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort, required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution, conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures as well as of the particle sticking probability on the neutral particle flux.
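The one-dimensional radiosity balance described in this abstract can be sketched in a few lines. The geometry factors below (the direct-flux visibility term and the strip-to-strip view factor for an infinitely long trench) and all parameter values are illustrative assumptions for a toy 2D trench, not the paper's actual formulation:

```python
import numpy as np

def trench_flux(aspect_ratio=10.0, n_seg=200, sticking=0.1):
    """Neutral flux along one sidewall of an infinitely long trench,
    from a 1D radiosity balance with diffuse re-emission."""
    w = 1.0                             # trench width (arbitrary units)
    depth = aspect_ratio * w
    dz = depth / n_seg
    z = (np.arange(n_seg) + 0.5) * dz   # segment centers along the depth

    # Direct flux from the opening: an assumed visibility factor that
    # decays with depth, normalized to ~1 at the opening.
    direct = w**2 / (w**2 + z**2)

    # View factor between differential strips on opposite walls of an
    # infinitely long trench: dF = 0.5 * w^2 / (w^2 + dx^2)^(3/2) * dz.
    dx = z[:, None] - z[None, :]
    F = 0.5 * w**2 / (w**2 + dx**2) ** 1.5 * dz

    # Radiosity balance: phi = direct + (1 - s) * F @ phi, i.e. particles
    # that do not stick are re-emitted diffusely toward the other wall.
    phi = np.linalg.solve(np.eye(n_seg) - (1.0 - sticking) * F, direct)
    return z, phi

z, phi = trench_flux()
```

Solving the small linear system replaces the ray-tracing step entirely, which is the source of the speedup the abstract claims; the flux decays toward the trench bottom, more steeply for higher sticking probabilities.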

  2. Computational structural mechanics engine structures computational simulator

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1989-01-01

The Computational Structural Mechanics (CSM) program at Lewis encompasses: (1) fundamental aspects of formulating and solving structural mechanics problems, and (2) development of integrated software systems to computationally simulate the performance/durability/life of engine structures.

  3. A Primer on Simulation and Gaming.

    ERIC Educational Resources Information Center

    Barton, Richard F.

    In a primer intended for the administrative professions, for the behavioral sciences, and for education, simulation and its various aspects are defined, illustrated, and explained. Man-model simulation, man-computer simulation, all-computer simulation, and analysis are discussed as techniques for studying object systems (parts of the "real…

  4. Technology, Pedagogy, and Epistemology: Opportunities and Challenges of Using Computer Modeling and Simulation Tools in Elementary Science Methods

    ERIC Educational Resources Information Center

    Schwarz, Christina V.; Meyer, Jason; Sharma, Ajay

    2007-01-01

    This study infused computer modeling and simulation tools in a 1-semester undergraduate elementary science methods course to advance preservice teachers' understandings of computer software use in science teaching and to help them learn important aspects of pedagogy and epistemology. Preservice teachers used computer modeling and simulation tools…

  5. Mechanical Aspects of Interfaces and Surfaces in Ceramic Containing Systems.

    DTIC Science & Technology

    1984-12-14

of a computer model to simulate the crack damage. The model is based on the fracture mechanics of cracks engulfed by the short stress pulse generated...by drop impact. Inertial effects of the crack faces are a particularly important aspect of the model. The computer scheme thereby allows the stress...W. R. Beaumont, "On the Toughness of Particulate Filled Polymers." Water Drop Impact X. E. D. Case and A. G. Evans, "A Computer-Generated Simulation

  6. Computational composite mechanics for aerospace propulsion structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1986-01-01

    Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating (1) complex composite structural behavior in general and (2) specific aerospace propulsion structural components in particular.

  7. Computational composite mechanics for aerospace propulsion structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1987-01-01

Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating: (1) complex composite structural behavior in general, and (2) specific aerospace propulsion structural components in particular.

  8. Epistemological Issues Concerning Computer Simulations in Science and Their Implications for Science Education

    ERIC Educational Resources Information Center

    Greca, Ileana M.; Seoane, Eugenia; Arriassecq, Irene

    2014-01-01

    Computers and simulations represent an undeniable aspect of daily scientific life, the use of simulations being comparable to the introduction of the microscope and the telescope, in the development of knowledge. In science education, simulations have been proposed for over three decades as useful tools to improve the conceptual understanding of…

  9. Computer-generated forces in distributed interactive simulation

    NASA Astrophysics Data System (ADS)

    Petty, Mikel D.

    1995-04-01

    Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.

  10. Computer modeling and simulators as part of university training for NPP operating personnel

    NASA Astrophysics Data System (ADS)

    Volman, M.

    2017-01-01

This paper considers aspects of a program for training future nuclear power plant personnel developed by the NPP Department of Ivanovo State Power Engineering University. Computer modeling is used for numerical experiments on the kinetics of nuclear reactors in Mathcad. Simulation modeling is carried out on computer-based and full-scale simulators of a water-cooled power reactor to simulate neutron-physical reactor measurements and start-up and shutdown processes.
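Numerical experiments on reactor kinetics of the kind mentioned above typically start from the point-kinetics equations. A minimal forward-Euler integration with one delayed-neutron group is sketched below; the parameter values are illustrative, not those used in the course:

```python
# Point-reactor kinetics, one delayed-neutron group:
#   dn/dt = (rho - beta)/Lambda * n + lambda * C
#   dC/dt = beta/Lambda * n - lambda * C
beta = 0.0065      # delayed-neutron fraction
lam = 0.08         # precursor decay constant (1/s)
Lam = 1.0e-4       # prompt neutron generation time (s)
rho = 0.5 * beta   # step reactivity insertion, below prompt critical

n = 1.0                      # relative neutron density
C = beta * n / (Lam * lam)   # precursor concentration at equilibrium
dt, steps = 1.0e-4, 50_000   # 5 s of simulated time, forward Euler

for _ in range(steps):
    dn = (rho - beta) / Lam * n + lam * C
    dC = beta / Lam * n - lam * C
    n += dt * dn
    C += dt * dC
```

For a positive reactivity step below prompt critical, the neutron density shows the classic prompt jump followed by slow growth on the delayed-neutron time scale, which is the behavior such exercises are designed to expose.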

  11. Structural Technology Evaluation and Analysis Program (STEAP). Delivery Order 0037: Prognosis-Based Control Reconfiguration for an Aircraft with Faulty Actuator to Enable Performance in a Degraded State

    DTIC Science & Technology

    2010-12-01

computers in 1953. HIL motion simulators were also built for the dynamic testing of vehicle components (e.g. suspensions, bodies) with hydraulic or...complex, comprehensive mechanical systems can be simulated in real-time by parallel computers; examples include multibody systems, brake systems...hard constraints in a multivariable control framework. And the third aspect is the ability to perform online optimization. These aspects result in

  12. Ocean modelling on the CYBER 205 at GFDL

    NASA Technical Reports Server (NTRS)

    Cox, M.

    1984-01-01

    At the Geophysical Fluid Dynamics Laboratory, research is carried out for the purpose of understanding various aspects of climate, such as its variability, predictability, stability and sensitivity. The atmosphere and oceans are modelled mathematically and their phenomenology studied by computer simulation methods. The present state-of-the-art in the computer simulation of large scale oceans on the CYBER 205 is discussed. While atmospheric modelling differs in some aspects, the basic approach used is similar. The equations of the ocean model are presented along with a short description of the numerical techniques used to find their solution. Computational considerations and a typical solution are presented in section 4.

  13. Computer simulation of space charge

    NASA Astrophysics Data System (ADS)

    Yu, K. W.; Chung, W. K.; Mak, S. S.

    1991-05-01

Using the particle-mesh (PM) method, a one-dimensional simulation of the well-known Langmuir-Child law is performed on an INTEL 80386-based personal computer system. The program is coded in Turbo Basic (trademark of Borland International, Inc.). The numerical results obtained are in excellent agreement with theoretical predictions, and the computational time required is quite modest. This simulation exercise demonstrates that simple computer simulations using particles may be implemented successfully on the PCs available today, and hopefully this will provide the necessary incentive for newcomers to the field who wish to acquire a flavor of the elementary aspects of the practice.
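One core ingredient of the particle-mesh method used above is the cloud-in-cell (CIC) deposition of particle charge onto the grid. The sketch below shows that step on a periodic 1D mesh; the grid size and particle data are illustrative, and the original Turbo Basic program is not reproduced here:

```python
import numpy as np

def deposit_cic(x, q, ng, L):
    """Deposit particle charges q at positions x onto ng cells of a
    periodic domain of length L, returning the charge density rho.
    Each particle is shared linearly between its two nearest nodes."""
    dx = L / ng
    rho = np.zeros(ng)
    s = x / dx
    j = np.floor(s).astype(int) % ng
    f = s - np.floor(s)                       # offset within the cell
    np.add.at(rho, j, q * (1.0 - f) / dx)     # weight to left node
    np.add.at(rho, (j + 1) % ng, q * f / dx)  # weight to right node
    return rho

rng = np.random.default_rng(0)
L, ng, npart = 1.0, 64, 1000
x = rng.uniform(0.0, L, npart)
q = np.full(npart, -1.0 / npart)   # one unit of charge, split evenly
rho = deposit_cic(x, q, ng, L)
```

Linear weighting conserves total charge exactly, which is the property a PM space-charge simulation relies on before the mesh Poisson solve.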

  14. Modelling and simulation techniques for membrane biology.

    PubMed

    Burrage, Kevin; Hancock, John; Leier, André; Nicolau, Dan V

    2007-07-01

    One of the most important aspects of Computational Cell Biology is the understanding of the complicated dynamical processes that take place on plasma membranes. These processes are often so complicated that purely temporal models cannot always adequately capture the dynamics. On the other hand, spatial models can have large computational overheads. In this article, we review some of these issues with respect to chemistry, membrane microdomains and anomalous diffusion and discuss how to select appropriate modelling and simulation paradigms based on some or all the following aspects: discrete, continuous, stochastic, delayed and complex spatial processes.

15. Application of the TEMPEST computer code for simulating hydrogen distribution in model containment structures. [PWR; BWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trent, D.S.; Eyler, L.L.

    In this study several aspects of simulating hydrogen distribution in geometric configurations relevant to reactor containment structures were investigated using the TEMPEST computer code. Of particular interest was the performance of the TEMPEST turbulence model in a density-stratified environment. Computed results illustrated that the TEMPEST numerical procedures predicted the measured phenomena with good accuracy under a variety of conditions and that the turbulence model used is a viable approach in complex turbulent flow simulation.

  16. Programming Video Games and Simulations in Science Education: Exploring Computational Thinking through Code Analysis

    ERIC Educational Resources Information Center

    Garneli, Varvara; Chorianopoulos, Konstantinos

    2018-01-01

    Various aspects of computational thinking (CT) could be supported by educational contexts such as simulations and video-games construction. In this field study, potential differences in student motivation and learning were empirically examined through students' code. For this purpose, we performed a teaching intervention that took place over five…

  17. Fractal Simulations of African Design in Pre-College Computing Education

    ERIC Educational Resources Information Center

    Eglash, Ron; Krishnamoorthy, Mukkai; Sanchez, Jason; Woodbridge, Andrew

    2011-01-01

    This article describes the use of fractal simulations of African design in a high school computing class. Fractal patterns--repetitions of shape at multiple scales--are a common feature in many aspects of African design. In African architecture we often see circular houses grouped in circular complexes, or rectangular houses in rectangular…

  18. Modeling aspects of human memory for scientific study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caudell, Thomas P.; Watson, Patrick; McDaniel, Mark A.

Working with leading experts in the field of cognitive neuroscience and computational intelligence, SNL has developed a computational architecture that represents neurocognitive mechanisms associated with how humans remember experiences in their past. The architecture represents how knowledge is organized and updated through information from individual experiences (episodes) via the cortical-hippocampal declarative memory system. We compared the simulated behavioral characteristics with those of humans measured under well-established experimental standards, controlling for unmodeled aspects of human processing, such as perception. We used this knowledge to create robust simulations of human memory behaviors that should help move the scientific community closer to understanding how humans remember information. These behaviors were experimentally validated against actual human subjects, and the results were published. An important outcome of the validation process will be the joining of specific experimental testing procedures from the field of neuroscience with computational representations from the field of cognitive modeling and simulation.

  19. GPU based 3D feature profile simulation of high-aspect ratio contact hole etch process under fluorocarbon plasmas

    NASA Astrophysics Data System (ADS)

    Chun, Poo-Reum; Lee, Se-Ah; Yook, Yeong-Geun; Choi, Kwang-Sung; Cho, Deog-Geun; Yu, Dong-Hun; Chang, Won-Seok; Kwon, Deuk-Chul; Im, Yeon-Ho

    2013-09-01

Although plasma etch profile simulation has attracted much interest for developing reliable plasma etching, large gaps remain between the current state of research and predictive modeling due to the inherent complexity of plasma processes. In an effort to address this issue, we present a 3D feature profile simulation coupled with a well-defined plasma-surface kinetic model for the silicon dioxide etching process under fluorocarbon plasmas. To capture realistic plasma-surface reaction behaviors, a polymer-layer-based surface kinetic model was proposed to account for simultaneous polymer deposition and oxide etching. The resulting surface model was then used to calculate the speed function for the 3D topology simulation, which consists of a multiple-level-set-based moving algorithm and a ballistic transport module. In addition, the time-consuming computations in the ballistic transport calculation were accelerated drastically by GPU-based numerical computation, enabling real-time computation. Finally, we demonstrated that the surface kinetic model could be coupled successfully for 3D etch profile simulations of high-aspect-ratio contact hole plasma etching.
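The level-set moving algorithm at the heart of such profile simulators can be illustrated in one dimension: a front at phi = 0 advances with constant normal speed F under a first-order upwind (Godunov) scheme. Grid, speed, and times below are illustrative only, and the real simulator is of course multi-dimensional:

```python
import numpy as np

nx, dx = 400, 0.01
x = np.arange(nx) * dx
F = 0.5                        # etch-front speed (arbitrary units)
phi = x - 1.0                  # signed distance; front starts at x = 1.0

dt = 0.5 * dx / F              # CFL-stable time step
steps = 200                    # advance for T = steps * dt = 2.0
for _ in range(steps):
    # Godunov upwinding for F > 0: take the parts of the one-sided
    # differences that correspond to information flowing into the front.
    dmin = np.maximum(np.diff(phi, prepend=phi[0]) / dx, 0.0)
    dplu = np.minimum(np.diff(phi, append=phi[-1]) / dx, 0.0)
    grad = np.sqrt(dmin**2 + dplu**2)
    phi = phi - dt * F * grad  # level-set equation: phi_t + F|grad phi| = 0

# Locate the zero crossing (the etch front) by linear interpolation.
i = int(np.argmax(phi > 0.0))
front = x[i - 1] - phi[i - 1] * dx / (phi[i] - phi[i - 1])
```

After time T the front should sit near x = 1.0 + F*T = 2.0; the same update, driven by a surface-kinetics speed function instead of a constant F, is what moves the 3D etch profile.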

  20. Composite mechanics for engine structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1987-01-01

    Recent research activities and accomplishments at Lewis Research Center on composite mechanics for engine structures are summarized. The activities focused mainly on developing procedures for the computational simulation of composite intrinsic and structural behavior. The computational simulation encompasses all aspects of composite mechanics, advanced three-dimensional finite-element methods, damage tolerance, composite structural and dynamic response, and structural tailoring and optimization.

  1. Composite mechanics for engine structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1989-01-01

    Recent research activities and accomplishments at Lewis Research Center on composite mechanics for engine structures are summarized. The activities focused mainly on developing procedures for the computational simulation of composite intrinsic and structural behavior. The computational simulation encompasses all aspects of composite mechanics, advanced three-dimensional finite-element methods, damage tolerance, composite structural and dynamic response, and structural tailoring and optimization.

  2. Huntington II Simulation Program - MALAR. Student Workbook, Teacher's Guide, and Resource Handbook.

    ERIC Educational Resources Information Center

    Friedland, James; Frishman, Austin

    Described is the computer model "MALAR" which deals with malaria and its eradication. A computer program allows the tenth- to twelfth-grade student to attempt to control a malaria epidemic. This simulation provides a context within which to study the biological, economic, social, political, and ecological aspects of a classic world health problem.…

  3. Winter Simulation Conference, Miami Beach, Fla., December 4-6, 1978, Proceedings. Volumes 1 & 2

    NASA Technical Reports Server (NTRS)

    Highland, H. J. (Editor); Nielsen, N. R.; Hull, L. G.

    1978-01-01

    The papers report on the various aspects of simulation such as random variate generation, simulation optimization, ranking and selection of alternatives, model management, documentation, data bases, and instructional methods. Simulation studies in a wide variety of fields are described, including system design and scheduling, government and social systems, agriculture, computer systems, the military, transportation, corporate planning, ecosystems, health care, manufacturing and industrial systems, computer networks, education, energy, production planning and control, financial models, behavioral models, information systems, and inventory control.

  4. Some issues related to simulation of the tracking and communications computer network

    NASA Technical Reports Server (NTRS)

    Lacovara, Robert C.

    1989-01-01

    The Communications Performance and Integration branch of the Tracking and Communications Division has an ongoing involvement in the simulation of its flight hardware for Space Station Freedom. Specifically, the communication process between central processor(s) and orbital replaceable units (ORU's) is simulated with varying degrees of fidelity. The results of investigations into three aspects of this simulation effort are given. The most general area involves the use of computer assisted software engineering (CASE) tools for this particular simulation. The second area of interest is simulation methods for systems of mixed hardware and software. The final area investigated is the application of simulation methods to one of the proposed computer network protocols for space station, specifically IEEE 802.4.

  5. Some issues related to simulation of the tracking and communications computer network

    NASA Astrophysics Data System (ADS)

    Lacovara, Robert C.

    1989-12-01

    The Communications Performance and Integration branch of the Tracking and Communications Division has an ongoing involvement in the simulation of its flight hardware for Space Station Freedom. Specifically, the communication process between central processor(s) and orbital replaceable units (ORU's) is simulated with varying degrees of fidelity. The results of investigations into three aspects of this simulation effort are given. The most general area involves the use of computer assisted software engineering (CASE) tools for this particular simulation. The second area of interest is simulation methods for systems of mixed hardware and software. The final area investigated is the application of simulation methods to one of the proposed computer network protocols for space station, specifically IEEE 802.4.

  6. Global two-fluid simulations of geodesic acoustic modes in strongly shaped tight aspect ratio tokamak plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, J. R.; Hnat, B.; Thyagaraja, A.

    2013-05-15

Following recent observations suggesting the presence of the geodesic acoustic mode (GAM) in ohmically heated discharges in the Mega Amp Spherical Tokamak (MAST) [J. R. Robinson et al., Plasma Phys. Controlled Fusion 54, 105007 (2012)], the behaviour of the GAM is studied numerically using the two-fluid, global code CENTORI [P. J. Knight et al., Comput. Phys. Commun. 183, 2346 (2012)]. We examine mode localisation and effects of magnetic geometry, given by aspect ratio, elongation, and safety factor, on the observed frequency of the mode. An excellent agreement between simulations and experimental data is found for simulation plasma parameters matched to those of MAST. Increasing aspect ratio yields good agreement between the GAM frequency found in the simulations and an analytical result obtained for elongated large aspect ratio plasmas.

  7. Perspectives on the Future of CFD

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    2000-01-01

This viewgraph presentation gives an overview of the future of computational fluid dynamics (CFD), which has pioneered the field of flow simulation. Over time, CFD has progressed in step with computing power: numerical methods have advanced as CPU and memory capacity have increased, complex configurations are now computed routinely, and direct numerical simulations (DNS) and large eddy simulations (LES) are used to study turbulence. As computing resources shifted to parallel and distributed platforms, computer science aspects such as scalability (algorithmic and implementation), portability, and transparent coding have advanced. Examples of potential future (or current) challenges include risk assessment, limitations of heuristic models, and the development of CFD and information technology (IT) tools.

  8. Visual Computing Environment Workshop

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles (Compiler)

    1998-01-01

    The Visual Computing Environment (VCE) is a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis.

  9. LLNL Mercury Project Trinity Open Science Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, Shawn A.

    The Mercury Monte Carlo particle transport code is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. In the proposed Trinity Open Science calculations, I will investigate computer science aspects of the code which are relevant to convergence of the simulation quantities with increasing Monte Carlo particle counts.

  10. A Framework for Modeling and Simulation of the Artificial

    DTIC Science & Technology

    2012-01-01

y or n) >> y Name: petra Simple Aspects: face_shape/thin, nose/small, skintone/light, hair_color/black, hair_type/curly Integrated Aspects...Multiconference. Orlando, FL (2012) 23. Mittal, S., Risco-Martin, J.: Netcentric System of Systems Engineering with DEVS Unified Process. CRC Press (2012) 24...Mittal, S., Risco-Martin, J., Zeigler, B.: DEVS-based simulation web services for net-centric T&E. In: Proceedings of the 2007 summer computer

  11. Applications of a Pharmacokinetic Simulation Program in Pharmacy Courses.

    ERIC Educational Resources Information Center

    Ingram, D.; And Others

    1979-01-01

    Presents a multicompartment model which illustrates aspects of drug absorption, distribution, and elimination in the human body for a course in pharmacokinetics. The course work consists of the interpretation of computer generated simulated data. (Author/CMV)

  12. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    PubMed

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
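The embarrassingly parallel pattern the tutorial describes can be sketched in Python rather than the article's MATLAB and R: each worker runs an independent batch of Monte Carlo replications with its own seed, and the batches are aggregated afterwards. The toy model (estimating pi) and a thread pool are illustrative choices; for CPU-bound simulation models a process pool plays the same role:

```python
import math
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def run_batch(seed, n=100_000):
    """One independent simulation batch: estimate pi by sampling points
    in the unit square and counting hits inside the quarter circle."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2))
    return 4.0 * np.mean(pts[:, 0] ** 2 + pts[:, 1] ** 2 <= 1.0)

seeds = [0, 1, 2, 3]               # one reproducible stream per worker
with ThreadPoolExecutor(max_workers=4) as pool:
    estimates = list(pool.map(run_batch, seeds))
pi_hat = sum(estimates) / len(estimates)
```

Because the batches share no state, the only coordination cost is the final aggregation, which is why this class of simulation parallelizes so cleanly; per-worker seeds keep the run reproducible.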

  13. Helical gears with circular arc teeth: Generation, geometry, precision and adjustment to errors, computer aided simulation of conditions of meshing and bearing contact

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Tsay, Chung-Biau

    1987-01-01

    The authors have proposed a method for the generation of circular arc helical gears which is based on the application of standard equipment, worked out all aspects of the geometry of the gears, proposed methods for the computer aided simulation of conditions of meshing and bearing contact, investigated the influence of manufacturing and assembly errors, and proposed methods for the adjustment of gears to these errors. The results of computer aided solutions are illustrated with computer graphics.

  14. Computer simulation of electron flow in linear-beam microwave tubes

    NASA Astrophysics Data System (ADS)

    Kumar, Lalit

    1990-12-01

    The computer simulation of electron flow in linear-beam microwave tubes, such as a travelling-wave tube (TWT) and klystron, is used for designing and optimising the electron gun and collector and for analysing the large-signal beam-wave interaction phenomenon. Major aspects of simulation of electron flow in static and rf fields present in such tubes are discussed. Some advancements made in this respect and results obtained from computer programs developed by the research group at CEERI for a gridded electron gun, depressed collector, and large-signal analysis of TWT and klystron are presented.

  15. Advances in Integrated Computational Materials Engineering "ICME"

    NASA Astrophysics Data System (ADS)

    Hirsch, Jürgen

The methods of Integrated Computational Materials Engineering that were developed and successfully applied for Aluminium have been constantly improved. The main aspects and recent advances of integrated material and process modeling are simulations of material properties, such as strength and forming properties, and of the specific microstructure evolution during processing (rolling, extrusion, annealing) under the influence of material constitution and process variations, through the production process down to the final application. Examples are discussed for the through-process simulation of microstructures and related properties of Aluminium sheet, including DC ingot casting, pre-heating and homogenization, hot and cold rolling, and final annealing. New results are included on the simulation of solution annealing and age hardening of 6xxx alloys for automotive applications. Physically based quantitative descriptions and computer-assisted evaluation methods are new ICME approaches for integrating new simulation tools into customer applications, such as heat-affected zones in the welding of age-hardening alloys. Aspects of estimating the effect of specific elements introduced by growing recycling volumes, also requested for high-end Aluminium products, are discussed as well, being of special interest in the Aluminium-producing industries.

  16. AceCloud: Molecular Dynamics Simulations in the Cloud.

    PubMed

    Harvey, M J; De Fabritiis, G

    2015-05-26

    We present AceCloud, an on-demand service for molecular dynamics simulations. AceCloud is designed to facilitate the secure execution of large ensembles of simulations on an external cloud computing service (currently Amazon Web Services). The AceCloud client, integrated into the ACEMD molecular dynamics package, provides an easy-to-use interface that abstracts all aspects of interaction with the cloud services. This gives the user the experience that all simulations are running on their local machine, minimizing the learning curve typically associated with the transition to using high performance computing services.

  17. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    PubMed Central

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. 
Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367
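The corpus-then-predict idea in this abstract can be shown in miniature with a toy kinetic model standing in for the Monte Carlo synapse simulator: a "corpus" of open-receptor curves y(t) = exp(-k t) is precomputed for a grid of rate constants k, and curves for unseen k are predicted by interpolating in the corpus instead of rerunning the expensive simulation. The model, grid, and parameters are all illustrative assumptions, not the paper's trained predictor:

```python
import numpy as np

t = np.linspace(0.0, 5.0, 101)          # time since transmitter release
k_grid = np.arange(0.5, 2.0001, 0.05)   # training corpus of rate constants
corpus = np.exp(-np.outer(k_grid, t))   # one "simulated" curve per k

def predict(k):
    """Predict the open-receptor curve for an unseen k by linear
    interpolation between the nearest corpus curves, per time point."""
    out = np.empty_like(t)
    for j in range(t.size):
        out[j] = np.interp(k, k_grid, corpus[:, j])
    return out

k_query = 1.23                          # not in the training grid
y_pred = predict(k_query)
y_true = np.exp(-k_query * t)           # direct evaluation for comparison
err = np.max(np.abs(y_pred - y_true))
```

Once the corpus exists, each prediction costs a table lookup rather than a fresh stochastic simulation, which is the trade the paper exploits at much larger scale with a learned model.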

  18. Analysis, preliminary design and simulation systems for control-structure interaction problems

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, Kenneth F.

    1991-01-01

    Software aspects of control-structure interaction (CSI) analysis are discussed. The following subject areas are covered: (1) implementation of a partitioned algorithm for simulation of large CSI problems; (2) second-order discrete Kalman filtering equations for CSI simulations; and (3) parallel computations and control of adaptive structures.

  19. A Framework for the Optimization of Discrete-Event Simulation Models

    NASA Technical Reports Server (NTRS)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed when optimizing via stochastic simulation models: the optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for the optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
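    A minimal sketch of the chance-constraint idea described above, with an invented stochastic cost model standing in for the launch-vehicle simulation; the candidate resource levels, cost function, noise, and limit are all hypothetical.

```python
import random
import statistics

random.seed(0)  # reproducible replications

def simulate_cost(n_servers, n_reps=200):
    """Hypothetical terminating simulation: replicated noisy costs
    for a given resource level (stand-in for the real model)."""
    return [100.0 / n_servers + 5.0 * n_servers + random.gauss(0, 2)
            for _ in range(n_reps)]

def meets_chance_constraint(samples, limit, prob=0.95):
    """Chance constraint P(cost <= limit) >= prob, estimated empirically."""
    return sum(s <= limit for s in samples) / len(samples) >= prob

best = None
for n in range(1, 11):                      # candidate resource levels
    samples = simulate_cost(n)
    if meets_chance_constraint(samples, limit=60.0):
        mean = statistics.mean(samples)      # point estimate of expected cost
        if best is None or mean < best[1]:
            best = (n, mean)
```

    Candidates that violate the chance constraint are discarded before the mean-cost comparison, which is the essence of formulating the problem with chance constraints rather than on expected values alone.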

  20. Computational structural mechanics for engine structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1989-01-01

    The computational structural mechanics (CSM) program at Lewis encompasses: (1) fundamental aspects of formulating and solving structural mechanics problems, and (2) development of integrated software systems to computationally simulate the performance/durability/life of engine structures. It is structured mainly to supplement, complement, and whenever possible replace costly experimental efforts, which are unavoidable during engineering research and development programs. Specific objectives include investigating the unique advantages of parallel and multiprocessor computing for reformulating/solving structural mechanics and formulating/solving multidisciplinary mechanics, and developing integrated structural system computational simulators for predicting structural performance, evaluating newly developed methods, and identifying and prioritizing improved or missing methods. Herein the CSM program is summarized with emphasis on the Engine Structures Computational Simulator (ESCS). Typical results obtained using ESCS are described to illustrate its versatility.

  1. Computer modeling of human decision making

    NASA Technical Reports Server (NTRS)

    Gevarter, William B.

    1991-01-01

    Models of human decision making are reviewed. Models that treat just the cognitive aspects of human behavior are included, as well as models that incorporate motivation. Both models that have associated computer programs and those that do not are considered. Since flow diagrams that assist in constructing computer simulations of such models were not generally available, such diagrams were constructed and are presented. The result provides a rich source of information that can aid in the construction of more realistic future simulations of human decision making.

  2. Implementing ADM1 for plant-wide benchmark simulations in Matlab/Simulink.

    PubMed

    Rosen, C; Vrecko, D; Gernaey, K V; Pons, M N; Jeppsson, U

    2006-01-01

    The IWA Anaerobic Digestion Model No. 1 (ADM1) was presented in 2002 and is expected to represent the state-of-the-art model within this field in the future. Due to its complexity, the implementation of the model is not a simple task, and several computational aspects need to be considered, in particular if the ADM1 is to be included in dynamic simulations of plant-wide or even integrated systems. In this paper, the experiences gained from a Matlab/Simulink implementation of ADM1 into the extended COST/IWA Benchmark Simulation Model (BSM2) are presented. Aspects related to system stiffness, model interfacing with the ASM family, mass balances, acid-base equilibrium and algebraic solvers for pH and other troublesome state variables, numerical solvers and simulation time are discussed. The main conclusion is that, if implemented properly, the ADM1 will also produce high-quality results in dynamic plant-wide simulations including noise, discrete sub-systems, etc., without imposing any major restrictions due to extensive computational efforts.
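    The stiffness issue mentioned above is the classic motivation for implicit solvers. A toy illustration (not ADM1 itself), comparing explicit and implicit Euler on a single stiff decay equation with an invented rate constant:

```python
def forward_euler(lmbda, y0, dt, steps):
    """Explicit Euler for y' = -lmbda*y; unstable when dt > 2/lmbda."""
    y = y0
    for _ in range(steps):
        y = y + dt * (-lmbda * y)
    return y

def backward_euler(lmbda, y0, dt, steps):
    """Implicit Euler: y_{n+1} = y_n - dt*lmbda*y_{n+1}, solved here
    in closed form as y_{n+1} = y_n / (1 + dt*lmbda); stable for any dt."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + dt * lmbda)
    return y

# A stiff decay rate, as might arise from fast acid-base equilibria.
explicit = forward_euler(lmbda=1000.0, y0=1.0, dt=0.01, steps=50)
implicit = backward_euler(lmbda=1000.0, y0=1.0, dt=0.01, steps=50)
```

    At this step size the explicit scheme multiplies the solution by -9 every step and diverges, while the implicit scheme decays monotonically toward the true solution; this is why stiff plant-wide models are normally paired with implicit (stiff) solvers.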

  3. The Use of Computer Simulation Methods to Reach Data for Economic Analysis of Automated Logistic Systems

    NASA Astrophysics Data System (ADS)

    Neradilová, Hana; Fedorko, Gabriel

    2016-12-01

    Automated logistic systems are becoming more widely used within enterprise logistics processes. Their main advantage is that they allow increasing the efficiency and reliability of logistics processes. In terms of evaluating their effectiveness, it is necessary to take into account the economic aspect of the entire process. However, many users ignore and underestimate this area, which is not correct. One of the reasons why the economic aspect is overlooked is the fact that obtaining information for such an analysis is not easy. The aim of this paper is to present the possibilities of computer simulation methods for obtaining data for full-scale economic analysis implementation.

  4. United States Air Force Training Line Simulator. Final Report.

    ERIC Educational Resources Information Center

    Nauta, Franz; Pierce, Michael B.

    This report describes the technical aspects and potential applications of a computer-based model simulating the flow of airmen through basic training and entry-level technical training. The objective of the simulation is to assess the impacts of alternative recruit classification and training policies under a wide variety of assumptions regarding…

  5. Computational aspects in high intensity ultrasonic surgery planning.

    PubMed

    Pulkkinen, A; Hynynen, K

    2010-01-01

    Therapeutic ultrasound treatment planning is discussed and computational aspects regarding it are reviewed. Nonlinear ultrasound simulations were solved with a combined frequency domain Rayleigh and KZK model. Ultrasonic simulations were combined with thermal simulations and were used to compute heating of muscle tissue in vivo for four different focused ultrasound transducers. The simulations were compared with measurements and good agreement was found for large F-number transducers. However, at F# 1.9 the simulated rate of temperature rise was approximately a factor of 2 higher than the measured one. The power levels used with the F# 1 transducer were too low to show any nonlinearity. The simulations were used to investigate the importance of nonlinearities generated in the coupling water, and also the importance of including skin in the simulations. Ignoring either of these in the model would lead to larger errors. Most notably, the nonlinearities generated in the water can enhance the focal temperature by more than 100%. The simulations also demonstrated that pulsed high power sonications may provide an opportunity to significantly (up to a factor of 3) reduce the treatment time. In conclusion, nonlinear propagation can play an important role in shaping the energy distribution during a focused ultrasound treatment and it should not be ignored in planning. However, the current simulation methods are accurate only with relatively large F-numbers and better models need to be developed for sharply focused transducers. Copyright 2009 Elsevier Ltd. All rights reserved.

  6. Educational aspects of molecular simulation

    NASA Astrophysics Data System (ADS)

    Allen, Michael P.

    This article addresses some aspects of teaching simulation methods to undergraduates and graduate students. Simulation is increasingly a cross-disciplinary activity, which means that the students who need to learn about simulation methods may have widely differing backgrounds. Also, they may have a wide range of views on what constitutes an interesting application of simulation methods. Almost always, a successful simulation course includes an element of practical, hands-on activity: a balance always needs to be struck between treating the simulation software as a 'black box', and becoming bogged down in programming issues. With notebook computers becoming widely available, students often wish to take away the programs to run themselves, and access to raw computer power is not the limiting factor that it once was; on the other hand, the software should be portable and, if possible, free. Examples will be drawn from the author's experience in three different contexts. (1) An annual simulation summer school for graduate students, run by the UK CCP5 organization, in which practical sessions are combined with an intensive programme of lectures describing the methodology. (2) A molecular modelling module, given as part of a doctoral training centre in the Life Sciences at Warwick, for students who might not have a first degree in the physical sciences. (3) An undergraduate module in Physics at Warwick, also taken by students from other disciplines, teaching high performance computing, visualization, and scripting in the context of a physical application such as Monte Carlo simulation.
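    The hands-on element described above often begins with the classic first exercise of a simulation course: a Monte Carlo estimate of pi. A minimal, self-contained version (this generic textbook exercise is not taken from the courses described):

```python
import random

def estimate_pi(n_samples, seed=42):
    """Monte Carlo estimate of pi: the fraction of uniform random points
    in the unit square that fall inside the quarter circle, times 4."""
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(n_samples))
    return 4.0 * inside / n_samples

pi_hat = estimate_pi(100_000)
```

    The exercise works well pedagogically because students can watch the estimate converge as n_samples grows, and the statistical error scales as 1/sqrt(n), a key lesson for all Monte Carlo work.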

  7. Experimental and Computational Study of Sonic and Supersonic Jet Plumes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, E.; Naughton, J. W.; Fletcher, D. G.; Edwards, Thomas A. (Technical Monitor)

    1994-01-01

    The study of sonic and supersonic jet plumes is relevant to understanding such phenomena as jet noise, plume signatures, and rocket base heating and radiation. Jet plumes are simple to simulate and yet have complex flow structures such as Mach disks, triple points, shear layers, barrel shocks, shock-shear-layer interaction, etc. Experimental and computational simulations of sonic and supersonic jet plumes have been performed for under- and over-expanded, axisymmetric plume conditions. The computational simulations compare very well with the experimental schlieren observations. Experimental data such as temperature measurements with hot-wire probes are yet to be taken and will be compared with computed values. Extensive analysis of the computational simulations presents a clear picture of how the complex flow structure develops and the conditions under which self-similar flow structures evolve. From the computations, the plume structure can be further classified into many sub-groups. In the proposed paper, detailed results from the experimental and computational simulations for single, axisymmetric, under- and over-expanded, sonic and supersonic plumes will be compared and the fluid dynamic aspects of the flow structures will be discussed.

  8. Computational Embryology and Predictive Toxicology of Hypospadias (SOT)

    EPA Science Inventory

    Hypospadias, one of the most common birth defects in human male infants, is a condition in which the urethral opening is misplaced along the ventral aspect of the penis. We developed an Adverse Outcome Pathway (AOP) framework and computer simulation that describes the pathogenesis of...

  9. Mesoscopic modelling and simulation of soft matter.

    PubMed

    Schiller, Ulf D; Krüger, Timm; Henrich, Oliver

    2017-12-20

    The deformability of soft condensed matter often requires modelling of hydrodynamical aspects to gain quantitative understanding. This, however, requires specialised methods that can resolve the multiscale nature of soft matter systems. We review a number of the most popular simulation methods that have emerged, such as Langevin dynamics, dissipative particle dynamics, multi-particle collision dynamics, sometimes also referred to as stochastic rotation dynamics, and the lattice-Boltzmann method. We conclude this review with a short glance at current compute architectures for high-performance computing and community codes for soft matter simulation.
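    Of the methods reviewed above, Langevin dynamics is the simplest to sketch. Below is a generic Euler-Maruyama integrator for an overdamped particle in a harmonic trap (a textbook scheme, not code from the review; all parameter values are arbitrary):

```python
import random

def langevin_trajectory(k=1.0, gamma=1.0, kT=1.0, dt=0.01,
                        steps=20_000, seed=1):
    """Euler-Maruyama integration of overdamped Langevin dynamics in a
    harmonic potential U(x) = k*x^2/2:
        x += -(k/gamma)*x*dt + sqrt(2*kT*dt/gamma) * N(0, 1)
    """
    rng = random.Random(seed)
    noise_amp = (2.0 * kT * dt / gamma) ** 0.5
    x, xs = 0.0, []
    for _ in range(steps):
        x += -(k / gamma) * x * dt + noise_amp * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

xs = langevin_trajectory()
# The sampled variance should approach the equipartition value kT/k = 1.
var = sum(x * x for x in xs) / len(xs)
```

    The same drift-plus-noise structure underlies the more elaborate mesoscopic methods in the review; dissipative particle dynamics, for example, adds pairwise momentum-conserving friction and noise to recover hydrodynamics.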

  10. A digital computer simulation and study of a direct-energy-transfer power-conditioning system

    NASA Technical Reports Server (NTRS)

    Burns, W. W., III; Owen, H. A., Jr.; Wilson, T. G.; Rodriguez, G. E.; Paulkovich, J.

    1974-01-01

    A digital computer simulation technique, which can be used to study composite power-conditioning systems, was applied to a spacecraft direct-energy-transfer power-processing system. The results obtained duplicate actual system performance with considerable accuracy. The validity of the approach and its usefulness in studying various aspects of system performance, such as steady-state characteristics and transient responses to severely varying operating conditions, are demonstrated experimentally.

  11. Mathematical and Computational Aspects Related to Soil Modeling and Simulation

    DTIC Science & Technology

    2017-09-26

    Modeling and simulation challenges at the interface of applied mathematics (homogenization, handling of discontinuous behavior, discrete vs. continuum representations) are discussed. Applied math tools need to be established and used to figure out how to impose compatible boundary conditions and how to better approximate the gradient…

  12. Physically Based Virtual Surgery Planning and Simulation Tools for Personal Health Care Systems

    NASA Astrophysics Data System (ADS)

    Dogan, Firat; Atilgan, Yasemin

    Virtual surgery planning and simulation tools have gained a great deal of importance in the last decade as a consequence of increasing capacities at the information technology level. Modern hardware architectures, large-scale database systems, grid-based computer networks, agile development processes, better 3D visualization, and all the other strong aspects of information technology bring the necessary instruments to almost every desk. The special software and sophisticated supercomputer environments of the last decade are now serving individual needs inside “tiny smart boxes” for reasonable prices. However, resistance to learning new computerized environments, insufficient training, and all the other old habits prevent effective utilization of IT resources by the specialists of the health sector. In this paper, all aspects of the former and current developments in surgery planning and simulation related tools are presented, and future directions and expectations are investigated for better electronic health care systems.

  13. Large eddy simulation of forest canopy flow for wildland fire modeling

    Treesearch

    Eric Mueller; William Mell; Albert Simeoni

    2014-01-01

    Large eddy simulation (LES) based computational fluid dynamics (CFD) simulators have obtained increasing attention in the wildland fire research community, as these tools allow the inclusion of important driving physics. However, due to the complexity of the models, individual aspects must be isolated and tested rigorously to ensure meaningful results. As wind is a...

  14. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.

    PubMed

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-08

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, a pattern that greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy yields an improvement of about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration.
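    The MapReduce model named above can be illustrated with a toy single-process pipeline; the records and key names below are invented stand-ins for the paper's per-range-bin echo accumulation, not actual SAR data handling.

```python
from collections import defaultdict

# Toy stand-in for SAR raw-data accumulation: each record is an echo
# contribution that must be summed into its range bin.
records = [("bin3", 0.5), ("bin1", 1.0), ("bin3", 0.25),
           ("bin2", 2.0), ("bin1", 0.5)]

def map_phase(records):
    """Map: emit (key, value) pairs; here the records are already keyed."""
    for key, value in records:
        yield key, value

def shuffle(pairs):
    """Shuffle: group values by key, as the MapReduce runtime would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: accumulate each bin's contributions independently."""
    return {key: sum(values) for key, values in groups.items()}

result = reduce_phase(shuffle(map_phase(records)))
```

    The point of the model is that the reduce step is independent per key, so irregular accumulations that serialize badly on a GPU can instead be distributed across cluster nodes, with HDFS handling the bulk I/O.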

  15. Sonic and Supersonic Jet Plumes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, E.; Naughton, J. W.; Fletcher, D. G.; Edwards, Thomas A. (Technical Monitor)

    1994-01-01

    The study of sonic and supersonic jet plumes is relevant to understanding such phenomena as jet noise, plume signatures, and rocket base heating and radiation. Jet plumes are simple to simulate and yet have complex flow structures such as Mach disks, triple points, shear layers, barrel shocks, shock-shear-layer interaction, etc. Experimental and computational simulations of sonic and supersonic jet plumes have been performed for under- and over-expanded, axisymmetric plume conditions. The computational simulations compare very well with the experimental schlieren observations. Experimental data such as temperature measurements with hot-wire probes are yet to be taken and will be compared with computed values. Extensive analysis of the computational simulations presents a clear picture of how the complex flow structure develops and the conditions under which self-similar flow structures evolve. From the computations, the plume structure can be further classified into many sub-groups. In the proposed paper, detailed results from the experimental and computational simulations for single, axisymmetric, under- and over-expanded, sonic and supersonic plumes will be compared and the fluid dynamic aspects of the flow structures will be discussed.

  16. The Shuttle Mission Simulator computer generated imagery

    NASA Technical Reports Server (NTRS)

    Henderson, T. H.

    1984-01-01

    Equipment available in the primary training facility for the Space Transportation System (STS) flight crews includes the Fixed Base Simulator, the Motion Base Simulator, the Spacelab Simulator, and the Guidance and Navigation Simulator. The Shuttle Mission Simulator (SMS) consists of the Fixed Base Simulator and the Motion Base Simulator. The SMS utilizes four visual Computer Generated Image (CGI) systems. The Motion Base Simulator has a forward crew station with six-degree-of-freedom motion simulation. Operation of the Spacelab Simulator is planned for the spring of 1983. The Guidance and Navigation Simulator went into operation in 1982. Aspects of orbital visual simulation are discussed, taking into account the earth scene, payload simulation, the generation and display of 1079 stars, the simulation of sun glare, and Reaction Control System jet firing plumes. Attention is also given to landing site visual simulation, and night launch and landing simulation.

  17. Interactive Computation for Undergraduates: The Next Generation

    NASA Astrophysics Data System (ADS)

    Kolan, Amy J.

    2017-05-01

    A generation ago (29 years ago), Leo Kadanoff and Michael Vinson created the Computers, Chaos, and Physics course. A major pedagogical thrust of this course was to help students form and test hypotheses via computer simulation of small problems in physics. Recently, this aspect of the 1987 course has been revived for use with first year physics undergraduate students at St. Olaf College.

  18. A review of the use of simulation in dental education.

    PubMed

    Perry, Suzanne; Bridges, Susan Margaret; Burrow, Michael Francis

    2015-02-01

    In line with the advances in technology and communication, medical simulations are being developed to support the acquisition of requisite psychomotor skills before real-life clinical applications. This review article aimed to give a general overview of simulation in a cognate field, clinical dental education. Simulations in dentistry are not a new phenomenon; however, recent developments in virtual-reality technology using computer-generated medical simulations of 3-dimensional images or environments are providing more optimal practice conditions to smooth the transition from the traditional model-based simulation laboratory to the clinic. Evidence as to the positive aspects of virtual reality includes increased effectiveness in comparison with traditional simulation teaching techniques, more efficient learning, objective and reproducible feedback, unlimited training hours, and enhanced cost-effectiveness for teaching establishments. Negative aspects that have been indicated include initial setup costs, faculty training, and the limited variety of content in current educational simulation programs.

  19. Architectural Aspects of Grid Computing and its Global Prospects for E-Science Community

    NASA Astrophysics Data System (ADS)

    Ahmad, Mushtaq

    2008-05-01

    The paper reviews the imminent architectural aspects of Grid Computing for the e-Science community, for scientific research and business/commercial collaboration beyond physical boundaries. Grid Computing provides all the needed facilities: hardware, software, communication interfaces, high-speed internet, safe authentication and a secure environment for collaboration on research projects around the globe. It provides a very fast compute engine for those scientific and engineering research projects and business/commercial applications which are heavily compute intensive and/or require humongous amounts of data. It also makes possible the use of very advanced methodologies, simulation models, expert systems and the treasure of knowledge available around the globe under the umbrella of knowledge sharing. Thus it helps realize the dream of a global village for the benefit of the e-Science community across the globe.

  20. A Virtual Look at Epstein–Barr Virus Infection: Biological Interpretations

    PubMed Central

    Delgado-Eckert, Edgar; Hadinoto, Vey; Jarrah, Abdul S; Laubenbacher, Reinhard; Lee, Kichol; Luzuriaga, Katherine; Polys, Nicholas F; Thorley-Lawson, David A

    2007-01-01

    The possibility of using computer simulation and mathematical modeling to gain insight into biological and other complex systems is receiving increased attention. However, it is as yet unclear to what extent these techniques will provide useful biological insights or even what the best approach is. Epstein–Barr virus (EBV) provides a good candidate to address these issues. It persistently infects most humans and is associated with several important diseases. In addition, a detailed biological model has been developed that provides an intricate understanding of EBV infection in the naturally infected human host and accounts for most of the virus' diverse and peculiar properties. We have developed an agent-based computer model/simulation (PathSim, Pathogen Simulation) of this biological model. The simulation is performed on a virtual grid that represents the anatomy of the tonsils of the nasopharyngeal cavity (Waldeyer ring) and the peripheral circulation—the sites of EBV infection and persistence. The simulation is presented via a user friendly visual interface and reproduces quantitative and qualitative aspects of acute and persistent EBV infection. The simulation also had predictive power in validation experiments involving certain aspects of viral infection dynamics. Moreover, it allows us to identify switch points in the infection process that direct the disease course towards the end points of persistence, clearance, or death. Lastly, we were able to identify parameter sets that reproduced aspects of EBV-associated diseases. These investigations indicate that such simulations, combined with laboratory and clinical studies and animal models, will provide a powerful approach to investigating and controlling EBV infection, including the design of targeted anti-viral therapies. PMID:17953479
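    PathSim itself is not reproduced here; the following is a generic, minimal agent-based infection model that only illustrates the simulation style the record describes (random mixing with stochastic transmission and recovery; the states, parameters, and seed are all invented).

```python
import random

def run_abm(n_agents=200, p_transmit=0.05, p_recover=0.02,
            steps=300, seed=7):
    """Minimal agent-based infection model: each step, every agent meets
    one random partner; infection spreads and resolves stochastically."""
    rng = random.Random(seed)
    state = ["S"] * n_agents       # S(usceptible), I(nfected), R(ecovered)
    state[0] = "I"                 # index case
    history = []
    for _ in range(steps):
        for i in range(n_agents):
            j = rng.randrange(n_agents)   # random contact
            if state[i] == "I" and state[j] == "S" \
                    and rng.random() < p_transmit:
                state[j] = "I"
            if state[i] == "I" and rng.random() < p_recover:
                state[i] = "R"
        history.append(state.count("I"))
    return history

infected = run_abm()   # infected count per step
```

    Full-scale models such as PathSim replace the random-mixing contact rule with anatomy (a virtual Waldeyer ring and peripheral circulation) and far richer agent state, which is what lets them identify switch points between persistence, clearance, and death.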

  1. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing

    PubMed Central

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-01

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, a pattern that greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy yields an improvement of about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration. PMID:28075343

  2. NASA/ESA CV-990 Spacelab simulation. Appendixes: C, data-handling: Planning and implementation; D, communications; E, mission documentation

    NASA Technical Reports Server (NTRS)

    Reller, J. O., Jr.

    1976-01-01

    Data handling, communications, and documentation aspects of the ASSESS mission are described. Most experiments provided their own data handling equipment, although some used the airborne computer for backup, and one experiment required real-time computations. Communications facilities were set up to simulate those to be provided between Spacelab and the ground, including a downlink TV system. Mission documentation was kept to a minimum and proved sufficient. Examples are given of the basic documents of the mission.

  3. A Theory for the Neural Basis of Language Part 2: Simulation Studies of the Model

    ERIC Educational Resources Information Center

    Baron, R. J.

    1974-01-01

    Computer simulation studies of the proposed model are presented. Processes demonstrated are (1) verbally directed recall of visual experience; (2) understanding of verbal information; (3) aspects of learning and forgetting; (4) the dependence of recognition and understanding in context; and (5) elementary concepts of sentence production. (Author)

  4. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high-level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  5. Using spatial principles to optimize distributed computing for enabling the physical science discoveries

    PubMed Central

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-01-01

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779

  6. Using spatial principles to optimize distributed computing for enabling the physical science discoveries.

    PubMed

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-04-05

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.

  7. Random walk on lattices: Graph-theoretic approach to simulating long-range diffusion-attachment growth models

    NASA Astrophysics Data System (ADS)

    Limkumnerd, Surachate

    2014-03-01

    Interest in thin-film fabrication for industrial applications has driven both the theoretical and computational aspects of modeling film growth. One of the earliest attempts toward understanding the morphological structure of a film's surface is through a class of solid-on-solid limited-mobility growth models such as the Family, Wolf-Villain, or Das Sarma-Tamborenea models, which have produced fascinating surface roughening behaviors. These models, however, restrict the motion of an incident atom to the neighborhood of its landing site, which renders them inept for simulating long-distance surface diffusion such as that observed in thin-film growth using a molecular-beam epitaxy technique. Naive extension of these models, repeatedly applying the local diffusion rules for each hop to simulate a large diffusion length, can be computationally very costly when certain statistical aspects are demanded. We present a graph-theoretic approach to simulating a long-range diffusion-attachment growth model. Using the Markovian assumption and given a local diffusion bias, we derive the transition probabilities for a random walker to traverse from one lattice site to the others after a large, possibly infinite, number of steps. Only computation with linear-time complexity is required for the surface morphology calculation, without other probabilistic measures. The formalism is applied, as illustrations, to simulate surface growth on a two-dimensional flat substrate and around a screw dislocation under the modified Wolf-Villain diffusion rule. A rectangular spiral ridge is observed in the latter case, with a smooth front feature similar to that obtained from simulations using the well-known multiple registration technique. An algorithm for computing the inverse of a class of substochastic matrices is derived as a corollary.
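
    The substochastic-matrix corollary suggests a simple worked example. The sketch below (our own illustration, not the paper's formalism) treats a biased walker on a one-dimensional lattice with absorbing "attachment" sites at both ends; the probability of attaching at each end after arbitrarily many hops is B = (I - Q)^(-1) R, where Q holds the transient-to-transient transition probabilities and R the transient-to-absorbing ones. The function names and the pure-Python solver are ours.

```python
# Long-range diffusion as an absorbing Markov chain (hypothetical 1-D
# lattice, not the paper's exact formalism). Interior sites are transient;
# "attachment" sites absorb the walker. The attachment probabilities after
# arbitrarily many hops are B = (I - Q)^-1 R.

def solve(A, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def attachment_probabilities(n_sites, p_right):
    """Probability that a walker started at each interior site of a 1-D
    lattice (absorbing ends at 0 and n_sites-1) attaches at the right end."""
    n = n_sites - 2                      # number of transient sites
    # Build (I - Q) for a nearest-neighbour biased walk
    I_minus_Q = [[(1.0 if i == j else 0.0)
                  - (p_right if j == i + 1 else 0.0)
                  - ((1.0 - p_right) if j == i - 1 else 0.0)
                  for j in range(n)] for i in range(n)]
    # R column for absorption at the right end: reachable only from the
    # last transient site
    r_right = [p_right if i == n - 1 else 0.0 for i in range(n)]
    return solve(I_minus_Q, r_right)

probs = attachment_probabilities(5, 0.5)   # unbiased walk, interior sites 1..3
print(probs)  # ≈ [0.25, 0.5, 0.75] for the unbiased walk
```

    For the unbiased walk this reproduces the classical gambler's-ruin probabilities i/N, a quick sanity check on the fundamental-matrix approach.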

  8. Parameter Sweep and Optimization of Loosely Coupled Simulations Using the DAKOTA Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elwasif, Wael R; Bernholdt, David E; Pannala, Sreekanth

    2012-01-01

    The increasing availability of large-scale computing capabilities has accelerated the development of high-fidelity coupled simulations. Such simulations typically involve the integration of models that implement various aspects of the complex phenomena under investigation. Coupled simulations play an integral role in fields such as climate modeling, earth systems modeling, rocket simulations, computational chemistry, fusion research, and many other computational fields. Model coupling provides scientists with systematic ways to virtually explore the physical, mathematical, and computational aspects of the problem. Such exploration is rarely done using a single execution of a simulation, but rather by aggregating the results from many simulation runs that, together, serve to bring to light novel knowledge about the system under investigation. Furthermore, it is often the case (particularly in engineering disciplines) that the study of the underlying system takes the form of an optimization regime, where the control parameter space is explored to optimize an objective function that captures system realizability, cost, performance, or a combination thereof. Novel and flexible frameworks that facilitate the integration of the disparate models into a holistic simulation are used to perform this research, while making efficient use of the available computational resources. In this paper, we describe the integration of the DAKOTA optimization and parameter sweep toolkit with the Integrated Plasma Simulator (IPS), a component-based framework for loosely coupled simulations. The integration allows DAKOTA to exploit the internal task and resource management of the IPS to dynamically instantiate simulation instances within a single IPS instance, allowing for greater control over the trade-off between efficiency of resource utilization and time to completion. We present a case study showing the use of the combined DAKOTA-IPS system to aid in the design of a lithium-ion battery (LIB) cell, by studying a coupled system involving the electrochemistry and ion transport at the lower length scales and thermal energy transport at the device scale. The DAKOTA-IPS system provides a flexible tool for use in optimization and parameter sweep studies involving loosely coupled simulations, suitable for situations where changes to the constituent components in the coupled simulation are impractical due to intellectual property or code heritage issues.
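
    The coordination pattern described above can be sketched in a few lines (a hypothetical harness with invented names; it does not use DAKOTA's or the IPS's actual APIs): a driver expands a parameter grid and hands each point to a worker pool that bounds how many simulation instances run concurrently, mirroring the trade-off between resource utilization and time to completion.

```python
# Hypothetical parameter-sweep driver (names and API are ours, not
# DAKOTA's or the IPS's): expand a Cartesian grid and evaluate a
# simulation callback at each point, with bounded concurrency standing
# in for the framework's task/resource manager.
from itertools import product
from concurrent.futures import ThreadPoolExecutor

def sweep(param_grid, run_simulation, max_concurrent=4):
    """Evaluate run_simulation at every point of a Cartesian parameter grid."""
    names = sorted(param_grid)
    points = [dict(zip(names, vals))
              for vals in product(*(param_grid[n] for n in names))]
    with ThreadPoolExecutor(max_workers=max_concurrent) as pool:
        results = list(pool.map(run_simulation, points))
    return list(zip(points, results))

# Toy stand-in for a coupled electrochemistry/thermal objective
def cell_objective(p):
    return (p["electrode_thickness"] - 0.3) ** 2 + (p["porosity"] - 0.4) ** 2

grid = {"electrode_thickness": [0.1, 0.3, 0.5], "porosity": [0.2, 0.4]}
best = min(sweep(grid, cell_objective), key=lambda pr: pr[1])
print(best[0])  # {'electrode_thickness': 0.3, 'porosity': 0.4}
```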

  9. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
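
    The "time dependence between time steps" challenge is easy to illustrate with a toy model (ours, not the report's): a regulator tap position at one step is set from the voltage at the previous step, so the year of power flows must be solved in sequence rather than as independent snapshots.

```python
# Toy QSTS loop (illustrative, not an actual distribution feeder model):
# the regulator tap carries state between time steps, so solving each
# step independently would give a different, wrong answer.
def qsts(load_profile, v_source=1.0, band=(0.98, 1.02), step=0.00625):
    tap = 0
    voltages = []
    for load in load_profile:
        v = v_source - 0.05 * load + tap * step   # crude feeder-voltage proxy
        if v < band[0]:
            tap += 1          # tap for the NEXT step set from this voltage
        elif v > band[1]:
            tap -= 1
        voltages.append(v)
    return voltages, tap

v, final_tap = qsts([0.2, 0.8, 0.8, 0.8, 0.2])  # the tap steps up three times
```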

  10. Multiphase, multi-electrode Joule heat computations for glass melter and in situ vitrification simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowery, P.S.; Lessor, D.L.

    Waste glass melter and in situ vitrification (ISV) processes represent the combination of electrical, thermal, and fluid flow phenomena to produce a stable waste-form product. Computational modeling of the thermal and fluid flow aspects of these processes provides a useful tool for assessing the potential performance of proposed system designs. These computations can be performed at a fraction of the cost of experiment. Consequently, computational modeling of vitrification systems can also provide an economical means for assessing the suitability of a proposed process application. The computational model described in this paper employs finite difference representations of the basic continuum conservation laws governing the thermal, fluid flow, and electrical aspects of the vitrification process -- i.e., conservation of mass, momentum, energy, and electrical charge. The resulting code is a member of the TEMPEST family of codes developed at the Pacific Northwest Laboratory (operated by Battelle for the US Department of Energy). This paper provides an overview of the numerical approach employed in TEMPEST. In addition, results from several TEMPEST simulations of sample waste glass melter and ISV processes are provided to illustrate the insights to be gained from computational modeling of these processes. 3 refs., 13 figs.

  11. Micromagnetics on high-performance workstation and mobile computational platforms

    NASA Astrophysics Data System (ADS)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including a multi-core Intel central processing unit, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  12. Crystal Nucleation in Liquids: Open Questions and Future Challenges in Molecular Dynamics Simulations

    PubMed Central

    2016-01-01

    The nucleation of crystals in liquids is one of nature’s most ubiquitous phenomena, playing an important role in areas such as climate change and the production of drugs. As the early stages of nucleation involve exceedingly small time and length scales, atomistic computer simulations can provide unique insights into the microscopic aspects of crystallization. In this review, we take stock of the numerous molecular dynamics simulations that, in the past few decades, have unraveled crucial aspects of crystal nucleation in liquids. We put into context the theoretical framework of classical nucleation theory and the state-of-the-art computational methods by reviewing simulations of such processes as ice nucleation and the crystallization of molecules in solutions. We shall see that molecular dynamics simulations have provided key insights into diverse nucleation scenarios, ranging from colloidal particles to natural gas hydrates, and that, as a result, the general applicability of classical nucleation theory has been repeatedly called into question. We have attempted to identify the most pressing open questions in the field. We believe that, by improving (i) existing interatomic potentials and (ii) currently available enhanced sampling methods, the community can move toward accurate investigations of realistic systems of practical interest, thus bringing simulations a step closer to experiments. PMID:27228560

  13. Crystal Nucleation in Liquids: Open Questions and Future Challenges in Molecular Dynamics Simulations.

    PubMed

    Sosso, Gabriele C; Chen, Ji; Cox, Stephen J; Fitzner, Martin; Pedevilla, Philipp; Zen, Andrea; Michaelides, Angelos

    2016-06-22

    The nucleation of crystals in liquids is one of nature's most ubiquitous phenomena, playing an important role in areas such as climate change and the production of drugs. As the early stages of nucleation involve exceedingly small time and length scales, atomistic computer simulations can provide unique insights into the microscopic aspects of crystallization. In this review, we take stock of the numerous molecular dynamics simulations that, in the past few decades, have unraveled crucial aspects of crystal nucleation in liquids. We put into context the theoretical framework of classical nucleation theory and the state-of-the-art computational methods by reviewing simulations of such processes as ice nucleation and the crystallization of molecules in solutions. We shall see that molecular dynamics simulations have provided key insights into diverse nucleation scenarios, ranging from colloidal particles to natural gas hydrates, and that, as a result, the general applicability of classical nucleation theory has been repeatedly called into question. We have attempted to identify the most pressing open questions in the field. We believe that, by improving (i) existing interatomic potentials and (ii) currently available enhanced sampling methods, the community can move toward accurate investigations of realistic systems of practical interest, thus bringing simulations a step closer to experiments.

  14. The Enhancement of Simulation Based Learning Exercises through Formalised Reflection, Focus Groups and Group Presentation

    ERIC Educational Resources Information Center

    Mawdesley, M.; Long, G.; Al-jibouri, S.; Scott, D.

    2011-01-01

    Computer based simulations and games can be useful tools in teaching aspects of construction project management that are not easily transmitted through traditional lecture based approaches. However, it can be difficult to quantify their utility and it is essential to ensure that students are achieving the learning outcomes required rather than…

  15. a Discrete Mathematical Model to Simulate Malware Spreading

    NASA Astrophysics Data System (ADS)

    Del Rey, A. Martin; Sánchez, G. Rodriguez

    2012-10-01

    With the advent and worldwide development of the Internet, the study and control of malware spreading has become very important. To this end, some mathematical models to simulate malware propagation have been proposed in the scientific literature; they are usually based on differential equations, exploiting the similarities with mathematical epidemiology. The great majority of these models study the behavior of a particular type of malware called computer worms; indeed, to the best of our knowledge, no model has been proposed to simulate the spreading of a computer virus (the traditional type of malware, which differs from computer worms in several aspects). The purpose of this work is therefore to introduce a new mathematical model, based not on continuous mathematical tools but on discrete ones, to analyze and study the epidemic behavior of computer viruses. Specifically, cellular automata are used to design the model.
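
    A minimal sketch of such a discrete model (illustrative rules of our own choosing, not the authors' automaton) assigns each cell of a two-dimensional lattice a Susceptible/Infected/Recovered state and updates all cells synchronously from their von Neumann neighbourhoods:

```python
# Hedged sketch of a discrete malware-spreading model in the spirit of
# the record above (the rules here are illustrative, not the authors'):
# a susceptible cell becomes infected when at least `threshold` of its
# four von Neumann neighbours (on a torus) are infected; an infected
# cell recovers after one step.
S, I, R = 0, 1, 2

def step(grid, threshold=1, recover_after=True):
    n = len(grid)
    nxt = [row[:] for row in grid]
    for x in range(n):
        for y in range(n):
            if grid[x][y] == S:
                infected = sum(grid[(x + dx) % n][(y + dy) % n] == I
                               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                if infected >= threshold:
                    nxt[x][y] = I
            elif grid[x][y] == I and recover_after:
                nxt[x][y] = R
    return nxt

grid = [[S] * 5 for _ in range(5)]
grid[2][2] = I                       # a single infected host
for _ in range(2):
    grid = step(grid)
count_I = sum(row.count(I) for row in grid)
print(count_I)  # 8: the infection front after two synchronous updates
```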

  16. Method and Apparatus for Encouraging Physiological Self-Regulation Through Modulation of an Operator's Control Input to a Video Game or Training Simulator

    NASA Technical Reports Server (NTRS)

    Palsson, Olafur S. (Inventor); Harris, Randall L., Sr. (Inventor); Pope, Alan T. (Inventor)

    2002-01-01

    Apparatus and methods for modulating the control authority (i.e., control function) of a computer simulation or game input device (e.g., joystick, button control) using physiological information so as to affect the user's ability to impact or control the simulation or game with the input device. One aspect is to use the present invention, along with a computer simulation or game, to affect physiological state or physiological self-regulation according to some programmed criterion (e.g., increase, decrease, or maintain) in order to perform better at the game task. When the affected physiological state or physiological self-regulation is the target of self-regulation or biofeedback training, the simulation or game play reinforces therapeutic changes in the physiological signal(s).

  17. The application of virtual reality systems as a support of digital manufacturing and logistics

    NASA Astrophysics Data System (ADS)

    Golda, G.; Kampa, A.; Paprocka, I.

    2016-08-01

    Modern trends in the development of computer-aided techniques are heading toward the integration of the design of competitive products with so-called "digital manufacturing and logistics", supported by computer simulation software. All phases of the product lifecycle, starting from the design of a new product, through planning and control of manufacturing, assembly, internal logistics and repairs, quality control, distribution to customers and after-sale service, up to recycling or disposal, should be aided and managed by advanced product lifecycle management software packages. Important problems in providing an efficient flow of materials in supply chain management over the whole product lifecycle using computer simulation are described in this paper. The authors pay attention to the processes of acquiring relevant information and correct data, which are necessary for virtual modeling and computer simulation of integrated manufacturing and logistics systems. The article describes possible applications of virtual reality software for modeling and simulating production and logistics processes in an enterprise across different aspects of product lifecycle management. The authors demonstrate an effective method of creating computer simulations for digital manufacturing and logistics, present modeled and programmed examples and solutions, and draw attention to development trends and to options of the applications that go beyond the enterprise.

  18. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Sewy, D.; Pickering, C.; Sauers, R.

    1984-01-01

    The purpose of phase 2 of the power subsystem automation study was to demonstrate the feasibility of using computer software to manage an aspect of the electrical power subsystem on a space station. The state of the art in expert systems software was investigated in this study. This effort resulted in the demonstration of prototype expert system software for managing one aspect of a simulated space station power subsystem.

  19. The Global Energy Situation on Earth, Student Guide. Computer Technology Program Environmental Education Units.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This is the student guide in a set of five computer-oriented environmental/energy education units. Contents of this guide are: (1) Introduction to the unit; (2) The "EARTH" program; (3) Exercises; and (4) Sources of information on the energy crisis. This guide supplements a simulation which allows students to analyze different aspects of…

  20. Computational Simulation of Acoustic Modes in Rocket Combustors

    NASA Technical Reports Server (NTRS)

    Harper, Brent (Technical Monitor); Merkle, C. L.; Sankaran, V.; Ellis, M.

    2004-01-01

    A combination of computational fluid dynamic analysis and analytical solutions is being used to characterize the dominant modes in liquid rocket engines in conjunction with laboratory experiments. The analytical solutions are based on simplified geometries and flow conditions and are used for careful validation of the numerical formulation. The validated computational model is then extended to realistic geometries and flow conditions to test the effects of various parameters on chamber modes, to guide and interpret companion laboratory experiments in simplified combustors, and to scale the measurements to engine operating conditions. In turn, the experiments are used to validate and improve the model. The present paper gives an overview of the numerical and analytical techniques along with comparisons illustrating the accuracy of the computations as a function of grid resolution. A representative parametric study of the effect of combustor mean flow Mach number and combustor aspect ratio on the chamber modes is then presented for both transverse and longitudinal modes. The results show that higher mean flow Mach numbers drive the modes to lower frequencies. Estimates of transverse wave mechanics in a high aspect ratio combustor are then contrasted with longitudinal modes in a long and narrow combustor to provide understanding of potential experimental simulations.
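
    The reported trend of higher mean flow lowering the mode frequencies matches the textbook result for longitudinal duct modes, f_n = n·c·(1 - M²)/(2L) (classical duct acoustics with uniform mean flow, not the authors' CFD model):

```python
# Back-of-the-envelope check of the trend reported above (textbook duct
# acoustics, not the authors' computational model): longitudinal resonant
# frequencies of a chamber of length L with mean-flow Mach number M scale
# as f_n = n*c*(1 - M**2) / (2*L), so a higher mean-flow Mach number
# drives the modes to lower frequencies.
def longitudinal_mode(n, c, L, M=0.0):
    return n * c * (1.0 - M * M) / (2.0 * L)

f_static = longitudinal_mode(1, c=1000.0, L=0.5)        # no mean flow
f_moving = longitudinal_mode(1, c=1000.0, L=0.5, M=0.3)
print(f_static, f_moving)  # ≈ 1000 Hz vs ≈ 910 Hz
```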

  1. Virtual Reality and Computer-Enhanced Training Applied to Wheeled Mobility: An Overview of Work in Pittsburgh

    ERIC Educational Resources Information Center

    Cooper, Rory A.; Ding, Dan; Simpson, Richard; Fitzgerald, Shirley G.; Spaeth, Donald M.; Guo, Songfeng; Koontz, Alicia M.; Cooper, Rosemarie; Kim, Jongbae; Boninger, Michael L.

    2005-01-01

    Some aspects of assistive technology can be enhanced by the application of virtual reality. Although virtual simulation offers a range of new possibilities, learning to navigate in a virtual environment is not equivalent to learning to navigate in the real world. Therefore, virtual reality simulation is advocated as a useful preparation for…

  2. Computational Infrastructure for Engine Structural Performance Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1997-01-01

    Select computer codes developed over the years to simulate specific aspects of engine structures are described. These codes include blade-impact integrated multidisciplinary analysis and optimization, progressive structural fracture, quantification of uncertainties for structural reliability and risk, benefit estimation for new technology insertion, and hierarchical simulation of engine structures made from metal matrix and ceramic matrix composites. Collectively these codes constitute a unique infrastructure ready to credibly evaluate new and future engine structural concepts throughout the development cycle: from initial concept, to design and fabrication, to service performance, maintenance, and repairs, to retirement for cause, and even to possible recycling. Stated differently, they provide 'virtual' concurrent engineering for the total life-cycle cost of engine structures.

  3. Computational Aerodynamic Simulations of a Spacecraft Cabin Ventilation Fan Design

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.

    2010-01-01

    Quieter working environments for astronauts are needed if future long-duration space exploration missions are to be safe and productive. Ventilation and payload cooling fans are known to be dominant sources of noise, with the International Space Station being a good case in point. To address this issue cost effectively, early attention to fan design, selection, and installation has been recommended, leading to an effort by NASA to examine the potential for small-fan noise reduction by improving fan aerodynamic design. As a preliminary part of that effort, the aerodynamics of a cabin ventilation fan designed by Hamilton Sundstrand has been simulated using computational fluid dynamics codes, and the computed solutions analyzed to quantify various aspects of the fan aerodynamics and performance. Four simulations were performed at the design rotational speed: two at the design flow rate and two at off-design flow rates. Following a brief discussion of the computational codes, various aerodynamic- and performance-related quantities derived from the computed flow fields are presented along with relevant flow field details. The results show that the computed fan performance is in generally good agreement with stated design goals.

  4. Simulation of Dynamics of a Flexible Miniature Airplane

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    2005-01-01

    A short report discusses selected aspects of the development of the University of Florida micro-aerial vehicle (UFMAV), basically a miniature airplane that has a flexible wing and is representative of a new class of airplanes that would operate autonomously or under remote control and be used for surveillance and/or scientific observation. The flexibility of the wing is to be optimized such that passive deformation of the wing in the presence of aerodynamic disturbances reduces the overall response of the airplane to disturbances, thereby rendering the airplane more stable as an observation platform. The aspect of the development emphasized in the report is the computational simulation of the dynamics of the UFMAV in flight, for the purpose of generating mathematical models for use in designing control systems for the airplane. The simulations are performed using data from a wind-tunnel test of the airplane in combination with commercial software, in which are codified a standard set of equations of motion of an airplane and a set of mathematical routines to compute trim conditions and extract linear state-space models.

  5. Numerical simulations of vortex-induced vibrations of a flexible riser with different aspect ratios in uniform and shear currents

    NASA Astrophysics Data System (ADS)

    Duanmu, Yu; Zou, Lu; Wan, De-cheng

    2017-12-01

    This paper describes numerical simulations of vortex-induced vibrations (VIVs) of a long flexible riser with different length-to-diameter (aspect) ratios in uniform and shear currents. Three aspect ratios were simulated: L/D = 500, 750, and 1000. The simulations were carried out with the in-house computational fluid dynamics (CFD) solver viv-FOAM-SJTU developed by the authors, which is coupled with the strip method and built on the OpenFOAM platform. Moreover, the radial basis function (RBF) dynamic grid technique is applied in the viv-FOAM-SJTU solver to simulate the VIV of a high-aspect-ratio flexible riser in both the in-line (IL) and cross-flow (CF) directions. A benchmark validation case was completed. With the other parameters fixed, the aspect ratio shows a significant influence on the VIV of a long flexible riser. Increasing the aspect ratio exerted a strong effect on the IL equilibrium position of the riser while producing little effect on its curvature. As the aspect ratio rose from 500 to 1000, the maximum IL mean displacement increased from 3 to 8 diameters. The vibration mode of the riser also increases with aspect ratio: at an aspect ratio of 500, the CF vibration appeared as a standing wave with a single 3rd-order mode, while at an aspect ratio of 1000 the modal weights of the 5th and 6th modes were high, serving as the dominant modes. The effect of the flow profile on the oscillating mode becomes more apparent at high aspect ratio, and the dominant mode of a riser in shear flow is usually higher than that in uniform flow. At an aspect ratio of 750, the CF oscillations in both uniform and shear flow showed multi-mode vibration of the 4th and 5th modes, with the 4th mode dominant in uniform flow and the 5th dominant in shear flow.

  6. Application of Psychological Theories in Agent-Based Modeling: The Case of the Theory of Planned Behavior.

    PubMed

    Scalco, Andrea; Ceschi, Andrea; Sartori, Riccardo

    2018-01-01

    It is likely that computer simulations will assume a greater role in the near future in investigating and understanding reality (Rand & Rust, 2011). In particular, agent-based models (ABMs) represent a method of investigating social phenomena that blends the knowledge of the social sciences with the advantages of virtual simulations. Within this context, the development of algorithms able to recreate the reasoning engine of autonomous virtual agents represents one of the most fragile aspects, and it is crucial to ground such models in well-supported psychological theoretical frameworks. For this reason, the present work discusses the application of the theory of planned behavior (TPB; Ajzen, 1991) in the context of agent-based modeling: it is argued that this framework might be more helpful than others in developing a valid representation of human behavior in computer simulations. Accordingly, the current contribution considers issues related to applying the TPB model inside computer simulations and suggests potential solutions, in the hope of helping to shorten the distance between the fields of psychology and computer science.
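
    As a concrete illustration, a TPB agent's reasoning engine can be reduced to a weighted combination of the theory's three antecedents (the weights, threshold, and gating rule below are our own toy operationalization, not Ajzen's equations or the authors' implementation):

```python
# Toy TPB agent (our operationalization, for illustration only): the
# behavioral intention is a weighted sum of attitude, subjective norm,
# and perceived behavioral control; the act is performed when intention
# crosses a threshold AND actual control is sufficient.
class TPBAgent:
    def __init__(self, attitude, norm, control, w=(0.4, 0.3, 0.3)):
        self.attitude, self.norm, self.control = attitude, norm, control
        self.w = w

    def intention(self):
        wa, wn, wc = self.w
        return wa * self.attitude + wn * self.norm + wc * self.control

    def acts(self, threshold=0.5):
        # Perceived control also gates actual performance of the behavior
        return self.intention() >= threshold and self.control > 0.2

agent = TPBAgent(attitude=0.9, norm=0.6, control=0.7)
print(round(agent.intention(), 2), agent.acts())  # 0.75 True
```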

  7. Construct validity of individual and summary performance metrics associated with a computer-based laparoscopic simulator.

    PubMed

    Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason

    2014-06-01

    Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
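
    One plausible way to build such task-specific summary metrics (our assumption; the authors' exact equations are not reproduced here) is to z-score each validated metric against a reference group, flip the sign of "lower is better" metrics such as time and path length, and average:

```python
# Hypothetical summary-metric construction (an assumption, not the
# authors' equations): z-score each validated metric against novice
# statistics, orient all metrics so higher is better, and average.
def summary_score(metrics, novice_stats, lower_is_better):
    zs = []
    for name, value in metrics.items():
        mean, sd = novice_stats[name]
        z = (value - mean) / sd
        zs.append(-z if name in lower_is_better else z)
    return sum(zs) / len(zs)

# Illustrative novice means/SDs for two validated metrics
novice_stats = {"time_s": (120.0, 20.0), "path_mm": (900.0, 100.0)}
expert = {"time_s": 80.0, "path_mm": 700.0}
score = summary_score(expert, novice_stats, {"time_s", "path_mm"})
print(score)  # 2.0 -> two SDs better than the novice mean
```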

  8. Using Open Source Software in Visual Simulation Development

    DTIC Science & Technology

    2005-09-01

    increased the use of the technology in training activities. Using open source/free software tools in the process can expand these possibilities...resulting in even greater cost reduction and allowing the flexibility needed in a training environment. This thesis presents a configuration and architecture...to be used when developing training visual simulations using both personal computers and open source tools. Aspects of the requirements needed in a

  9. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
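
    The core of pseudo-predator construction can be sketched as follows (simplified, with function names of our own; the real QFASA workflow includes calibration coefficients and post-processing not shown here): bootstrap-sample each prey type's signatures, average them, and mix the prey means according to a known diet, so the resulting signature has a known true diet.

```python
# Simplified pseudo-predator construction (our sketch; QFASA software
# also applies calibration coefficients and other post-processing).
import random

def mean_signature(sample):
    n, k = len(sample), len(sample[0])
    return [sum(sig[j] for sig in sample) / n for j in range(k)]

def pseudo_predator(prey_sigs, diet, n_boot, rng):
    """prey_sigs: {prey: [signature, ...]}; diet: {prey: proportion}."""
    k = len(next(iter(prey_sigs.values()))[0])
    mixed = [0.0] * k
    for prey, sigs in prey_sigs.items():
        boot = [rng.choice(sigs) for _ in range(n_boot)]   # bootstrap sample
        m = mean_signature(boot)
        mixed = [mixed[j] + diet[prey] * m[j] for j in range(k)]
    return mixed

rng = random.Random(42)
prey = {"seal": [[0.7, 0.3], [0.6, 0.4]], "fish": [[0.2, 0.8], [0.3, 0.7]]}
sig = pseudo_predator(prey, {"seal": 0.5, "fish": 0.5}, n_boot=50, rng=rng)
print(abs(sum(sig) - 1.0) < 1e-9)  # proportions still sum to 1 -> True
```

    The bootstrap sample size `n_boot` is exactly the quantity the algorithm in this record sets objectively rather than arbitrarily.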

  10. The transesophageal echocardiography simulator based on computed tomography images.

    PubMed

    Piórkowski, Adam; Kempny, Aleksander

    2013-02-01

    Simulators are a new tool in education in many fields, including medicine, where they greatly improve familiarity with medical procedures, reduce costs and, importantly, cause no harm to patients. This is the case in transesophageal echocardiography (TEE), where the use of a simulator facilitates spatial orientation and helps in case studies. The aim of the project described in this paper is to simulate a TEE examination. This research makes use of available computed tomography data to simulate the corresponding echocardiographic view. The paper describes the essential characteristics that distinguish these two modalities and the key principles of the wave phenomena that should be considered in the simulation process, taking into account the conditions specific to echocardiography. The construction of CT2TEE (a Web-based TEE simulator) is also presented. The considerations include ray-tracing and ray-casting techniques in the context of ultrasound beam and artifact simulation. An important aspect of the interaction with the user is also addressed.

  11. High-performance computational fluid dynamics: a custom-code approach

    NASA Astrophysics Data System (ADS)

    Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.

    2016-07-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFD) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFD, while also providing insight for those interested in more general aspects of high-performance computing.
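    In the steady, fully developed limit, the pressure-driven single-phase channel flow studied above reduces to a one-dimensional Poisson equation for the streamwise velocity, which a few lines of code can solve and check against the exact parabolic (Poiseuille) profile. This is a pedagogical sketch in Python rather than the paper's Fortran 90/MPI code, and the names are illustrative:

```python
def poiseuille_profile(n, H=1.0, G=-1.0):
    """Solve u''(y) = G on (0, H) with u(0) = u(H) = 0 by central differences.

    G = (1/(rho*nu)) * dp/dx; for a favourable pressure gradient G < 0.
    Returns the n interior grid values of u.
    """
    h = H / (n + 1)
    # Tridiagonal system: (u[i-1] - 2 u[i] + u[i+1]) / h^2 = G
    a = [1.0] * n          # sub-diagonal
    b = [-2.0] * n         # diagonal
    c = [1.0] * n          # super-diagonal
    d = [G * h * h] * n
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u
```

    Because the exact solution u(y) = -(G/2) y (H - y) is quadratic, the second-order central difference reproduces it to round-off, which makes this a convenient correctness check before moving to the full time-dependent, parallel solver.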

  12. Computers in Traffic Education.

    ERIC Educational Resources Information Center

    Alexander, O. P.

    1983-01-01

    Traffic education covers basic road skills, legal/insurance aspects, highway code, accident causation/prevention, and vehicle maintenance. Microcomputer applications to traffic education are outlined, followed by a selected example of programs currently available (focusing on drill/practice, simulation, problem-solving, data manipulation, games,…

  13. 'The Monkey and the Hunter' and Other Projectile Motion Experiments with Logo.

    ERIC Educational Resources Information Center

    Kolodiy, George Oleh

    1988-01-01

    Presents the LOGO computer language as a source to experience and investigate scientific laws. Discusses aspects and uses of LOGO. Lists two LOGO programs, one to simulate a gravitational field and the other projectile motion. (MVL)
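    The "monkey and hunter" demonstration in the title translates readily into any language. Below is a hedged Python sketch (not the article's LOGO listings) that integrates both the dart and the falling target with the same simple Euler step; because both fall under the same gravity, the dart meets the target regardless of launch speed:

```python
import math

def monkey_and_hunter(d=30.0, h=20.0, v0=25.0, g=9.8, dt=1e-4):
    """Fire a dart aimed directly at a target that starts free-falling
    at the instant of firing; return the vertical miss distance when
    the dart reaches the target's horizontal position x = d."""
    theta = math.atan2(h, d)                 # aim straight at the target
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    x = y = 0.0
    y_target, v_target = h, 0.0
    while x < d:
        x += vx * dt
        y += vy * dt
        vy -= g * dt                         # the dart falls under gravity...
        y_target -= v_target * dt
        v_target += g * dt                   # ...and so does the target
    return abs(y - y_target)
```

    The identical fall terms cancel exactly, so the residual separation comes only from the discrete stopping condition and shrinks with `dt`.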

  14. A Hybrid Cloud Computing Service for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Yang, C. P.

    2016-12-01

    Cloud computing is becoming the norm for providing computing capabilities for advancing Earth sciences, including big Earth data management, processing, analytics, model simulations, and many other aspects. A hybrid spatiotemporal cloud computing service has been built at the George Mason NSF spatiotemporal innovation center to meet these demands. This paper reports on several aspects of the service: 1) the hardware includes 500 computing servers and close to 2 PB of storage, as well as connections to XSEDE Jetstream and the Caltech experimental cloud computing environment for resource sharing; 2) the cloud service is geographically distributed across the east coast, west coast, and central region; 3) the cloud includes private clouds managed using OpenStack and Eucalyptus, with DC2 used to bridge these and the public AWS cloud for interoperability and for sharing computing resources when demand surges; 4) the cloud service supports the NSF EarthCube program through the ECITE project and ESIP through the ESIP cloud computing cluster, the semantics testbed cluster, and other clusters; 5) the cloud service is also available to the Earth science communities for conducting geoscience research. A brief introduction on how to use the cloud service is included.

  15. Explicit finite-difference simulation of optical integrated devices on massive parallel computers.

    PubMed

    Sterkenburgh, T; Michels, R M; Dress, P; Franke, H

    1997-02-20

    An explicit method for the numerical simulation of optical integrated circuits by means of the finite-difference time-domain (FDTD) method is presented. This method, based on an explicit solution of Maxwell's equations, is well established in microwave technology. Although the simulation areas are small, we verified the behavior of three interesting problems, especially nonparaxial problems, with typical aspects of integrated optical devices. Because numerical losses are within acceptable limits, we suggest the use of the FDTD method to achieve promising quantitative simulation results.
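    The core of an explicit FDTD scheme is a pair of leapfrogged curl updates for the electric and magnetic fields. The following is a minimal one-dimensional free-space sketch, far simpler than a full integrated-optics simulation: it assumes normalised units with Courant number 1 and a hard sinusoidal source, and all names are illustrative.

```python
import math

def fdtd_1d(n_cells=200, n_steps=200, src=50, period=40.0):
    """Minimal 1D FDTD (Yee) update for Ez and Hy in free space.

    Normalised units with Courant number S = 1, so the update
    coefficients are unity; a hard sinusoidal source drives cell `src`.
    No absorbing boundaries: the grid edges act as perfect reflectors.
    """
    ez = [0.0] * n_cells
    hy = [0.0] * n_cells
    for t in range(n_steps):
        for k in range(1, n_cells):              # E update from the curl of H
            ez[k] += hy[k - 1] - hy[k]
        ez[src] = math.sin(2 * math.pi * t / period)   # hard source
        for k in range(n_cells - 1):             # H update from the curl of E
            hy[k] += ez[k] - ez[k + 1]
    return ez
```

    Even this toy version exhibits the scheme's key property used in the paper: the field front propagates one cell per step, so causality (and hence numerical loss bookkeeping) can be checked directly on the grid.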

  16. Patient-specific coronary artery blood flow simulation using myocardial volume partitioning

    NASA Astrophysics Data System (ADS)

    Kim, Kyung Hwan; Kang, Dongwoo; Kang, Nahyup; Kim, Ji-Yeon; Lee, Hyong-Euk; Kim, James D. K.

    2013-03-01

    Using computational simulation, we can analyze cardiovascular disease in a non-invasive and quantitative manner. More specifically, computational modeling and simulation technology has enabled us to analyze functional aspects such as blood flow, as well as anatomical aspects such as stenosis, from medical images without invasive measurements. The simplest way to perform blood flow simulation is to apply patient-specific coronary anatomy with otherwise average-valued properties; such conditions, however, cannot fully reflect the accurate physiological properties of a patient. To resolve this limitation, we present a new patient-specific coronary blood flow simulation method based on myocardial volume partitioning that considers the structural correspondence between arteries and myocardium. We focus on the fact that blood supply is closely related to the mass of each myocardial segment corresponding to an artery. We therefore applied this concept when setting up simulation conditions, so as to incorporate as many patient-specific features as possible from the medical image: first, we segmented the coronary arteries and the myocardium separately from cardiac CT; the myocardium was then partitioned into multiple regions based on the coronary vasculature. The myocardial mass and required blood mass for each artery are estimated by converting the myocardial volume fraction. Finally, the required blood mass is used as the boundary condition for each artery outlet, with a given average aortic blood flow rate and pressure. To show the effectiveness of the proposed method, fractional flow reserve (FFR) from simulation using CT images was compared with invasive FFR measurements from real patient data, and an accuracy of 77% was obtained.
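    The allocation step described above — distributing total coronary inflow over artery outlets in proportion to the myocardial mass each artery supplies — reduces to a few lines. This is a simplified sketch of the idea only: the paper's actual boundary-condition setup is more involved, and the function names, linear mass-flow scaling, and density value here are assumptions.

```python
def outlet_flows(segment_volumes, q_aorta, rho_myo=1.05):
    """Distribute a given coronary inflow over artery outlets in
    proportion to the myocardial mass each artery supplies.

    segment_volumes: dict artery_name -> partitioned myocardial volume (mL)
    q_aorta:         total coronary inflow (mL/s)
    rho_myo:         assumed myocardial density (g/mL), volume -> mass
    """
    masses = {a: v * rho_myo for a, v in segment_volumes.items()}
    total = sum(masses.values())
    return {a: q_aorta * m / total for a, m in masses.items()}
```

    With a uniform density the flow split depends only on the volume fractions, which is why converting the partitioned volumes is the essential image-derived quantity.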

  17. Noise Radiation From a Leading-Edge Slat

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Choudhari, Meelan M.

    2009-01-01

    This paper extends our previous computations of unsteady flow within the slat cove region of a multi-element high-lift airfoil configuration, which showed that both statistical and structural aspects of the experimentally observed unsteady flow behavior can be captured via 3D simulations over a computational domain of narrow spanwise extent. Although such narrow domain simulation can account for the spanwise decorrelation of the slat cove fluctuations, the resulting database cannot be applied towards acoustic predictions of the slat without invoking additional approximations to synthesize the fluctuation field over the rest of the span. This deficiency is partially alleviated in the present work by increasing the spanwise extent of the computational domain from 37.3% of the slat chord to nearly 226% (i.e., 15% of the model span). The simulation database is used to verify consistency with previous computational results and, then, to develop predictions of the far-field noise radiation in conjunction with a frequency-domain Ffowcs Williams-Hawkings solver.

  18. Employment of adaptive learning techniques for the discrimination of acoustic emissions

    NASA Astrophysics Data System (ADS)

    Erkes, J. W.; McDonald, J. F.; Scarton, H. A.; Tam, K. C.; Kraft, R. P.

    1983-11-01

    The following aspects of this study on the discrimination of acoustic emissions (AE) were examined: (1) The analytical development and assessment of digital signal processing techniques for AE signal dereverberation, noise reduction, and source characterization; (2) The modeling and verification of some aspects of key selected techniques through a computer-based simulation; and (3) The study of signal propagation physics and their effect on received signal characteristics for relevant physical situations.

  19. Computational Control Workstation: Users' perspectives

    NASA Technical Reports Server (NTRS)

    Roithmayr, Carlos M.; Straube, Timothy M.; Tave, Jeffrey S.

    1993-01-01

    A Workstation has been designed and constructed for rapidly simulating motions of rigid and elastic multibody systems. We examine the Workstation from the point of view of analysts who use the machine in an industrial setting. Two aspects of the device distinguish it from other simulation programs. First, one uses a series of windows and menus on a computer terminal, together with a keyboard and mouse, to provide a mathematical and geometrical description of the system under consideration. The second hallmark is a facility for animating simulation results. An assessment of the amount of effort required to numerically describe a system to the Workstation is made by comparing the process to that used with other multibody software. The apparatus for displaying results as a motion picture is critiqued as well. In an effort to establish confidence in the algorithms that derive, encode, and solve equations of motion, simulation results from the Workstation are compared to answers obtained with other multibody programs. Our study includes measurements of computational speed.

  20. Blade Displacement Predictions for the Full-Scale UH-60A Airloads Rotor

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Lee-Rausch, Elizabeth M.

    2014-01-01

    An unsteady Reynolds-Averaged Navier-Stokes solver for unstructured grids is loosely coupled to a rotorcraft comprehensive code and used to simulate two different test conditions from a wind-tunnel test of a full-scale UH-60A rotor. Performance data and sectional airloads from the simulation are compared with corresponding tunnel data to assess the level of fidelity of the aerodynamic aspects of the simulation. The focus then turns to a comparison of the blade displacements, both rigid (blade root) and elastic. Comparisons of computed root motions are made with data from three independent measurement systems. Finally, comparisons are made between computed elastic bending and elastic twist, and the corresponding measurements obtained from a photogrammetry system. Overall the correlation between computed and measured displacements was good, especially for the root pitch and lag motions and the elastic bending deformation. The correlation of root lead-lag motion and elastic twist deformation was less favorable.

  1. Simulation of Hazards and Poses for a Rocker-Bogie Rover

    NASA Technical Reports Server (NTRS)

    Backes, Paul; Norris, Jeffrey; Powell, Mark; Tharp, Gregory

    2004-01-01

    Provisions for specification of hazards faced by a robotic vehicle (rover) equipped with a rocker-bogie suspension, for prediction of collisions between the vehicle and the hazards, and for simulation of poses of the vehicle at selected positions on the terrain have been incorporated into software that simulates the movements of the vehicle on planned paths across the terrain. The software in question is that of the Web Interface for Telescience (WITS), selected aspects of which have been described in a number of prior NASA Tech Briefs articles. To recapitulate: The WITS is a system of computer software that enables scientists, located at geographically dispersed computer terminals connected to the World Wide Web, to command instrumented robotic vehicles (rovers) during exploration of Mars and perhaps eventually of other planets. The WITS also has potential for adaptation to terrestrial use in telerobotics and other applications that involve computer-based remote monitoring, supervision, control, and planning.

  2. Investigation of Carbohydrate Recognition via Computer Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Quentin R.; Lindsay, Richard J.; Petridis, Loukas

    Carbohydrate recognition by proteins, such as lectins and other (bio)molecules, can be essential for many biological functions. Interest has arisen due to potential protein and drug design and future bioengineering applications. A quantitative measurement of carbohydrate-protein interaction is thus important for the full characterization of sugar recognition. In this review, we focus on utilizing computer simulations and biophysical models to evaluate the strength and specificity of carbohydrate recognition. With increasing computational resources, better algorithms and refined modeling parameters, using state-of-the-art supercomputers to calculate the strength of the interaction between molecules has become increasingly mainstream. We review the current state of this technique and its successful applications for studying protein-sugar interactions in recent years.

  3. Investigation of Carbohydrate Recognition via Computer Simulation

    DOE PAGES

    Johnson, Quentin R.; Lindsay, Richard J.; Petridis, Loukas; ...

    2015-04-28

    Carbohydrate recognition by proteins, such as lectins and other (bio)molecules, can be essential for many biological functions. Interest has arisen due to potential protein and drug design and future bioengineering applications. A quantitative measurement of carbohydrate-protein interaction is thus important for the full characterization of sugar recognition. In this review, we focus on utilizing computer simulations and biophysical models to evaluate the strength and specificity of carbohydrate recognition. With increasing computational resources, better algorithms and refined modeling parameters, using state-of-the-art supercomputers to calculate the strength of the interaction between molecules has become increasingly mainstream. We review the current state of this technique and its successful applications for studying protein-sugar interactions in recent years.

  4. 3D nozzle flow simulations including state-to-state kinetics calculation

    NASA Astrophysics Data System (ADS)

    Cutrone, L.; Tuttafesta, M.; Capitelli, M.; Schettino, A.; Pascazio, G.; Colonna, G.

    2014-12-01

    In supersonic and hypersonic flows, thermal and chemical non-equilibrium is one of the fundamental aspects that must be taken into account for the accurate characterization of the plasma. In this paper, we present an optimized methodology to approach plasma numerical simulation by state-to-state kinetics calculations in a fully 3D Navier-Stokes CFD solver. Numerical simulations of an expanding flow are presented aimed at comparing the behavior of state-to-state chemical kinetics models with respect to the macroscopic thermochemical non-equilibrium models that are usually used in the numerical computation of high temperature hypersonic flows. The comparison is focused both on the differences in the numerical results and on the computational effort associated with each approach.

  5. Software for Engineering Simulations of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Shireman, Kirk; McSwain, Gene; McCormick, Bernell; Fardelos, Panayiotis

    2005-01-01

    Spacecraft Engineering Simulation II (SES II) is a C-language computer program for simulating diverse aspects of operation of a spacecraft characterized by either three or six degrees of freedom. A functional model in SES can include a trajectory flight plan; a submodel of a flight computer running navigational and flight-control software; and submodels of the environment, the dynamics of the spacecraft, and sensor inputs and outputs. SES II features a modular, object-oriented programming style. SES II supports event-based simulations, which, in turn, create an easily adaptable simulation environment in which many different types of trajectories can be simulated by use of the same software. The simulation output consists largely of flight data. SES II can be used to perform optimization and Monte Carlo dispersion simulations. It can also be used to perform simulations for multiple spacecraft. In addition to its generic simulation capabilities, SES offers special capabilities for space-shuttle simulations: for this purpose, it incorporates submodels of the space-shuttle dynamics and a C-language version of the guidance, navigation, and control components of the space-shuttle flight software.

  6. Quantum-assisted biomolecular modelling.

    PubMed

    Harris, Sarah A; Kendon, Vivien M

    2010-08-13

    Our understanding of the physics of biological molecules, such as proteins and DNA, is limited because the approximations we usually apply to model inert materials are not, in general, applicable to soft, chemically inhomogeneous systems. The configurational complexity of biomolecules means the entropic contribution to the free energy is a significant factor in their behaviour, requiring detailed dynamical calculations to fully evaluate. Computer simulations capable of taking all interatomic interactions into account are therefore vital. However, even with the best current supercomputing facilities, we are unable to capture enough of the most interesting aspects of their behaviour to properly understand how they work. This limits our ability to design new molecules, to treat diseases, for example. Progress in biomolecular simulation depends crucially on increasing the computing power available. Faster classical computers are in the pipeline, but these provide only incremental improvements. Quantum computing offers the possibility of performing huge numbers of calculations in parallel, when it becomes available. We discuss the current open questions in biomolecular simulation, how these might be addressed using quantum computation and speculate on the future importance of quantum-assisted biomolecular modelling.

  7. A comparative study on real lab and simulation lab in communication engineering from students' perspectives

    NASA Astrophysics Data System (ADS)

    Balakrishnan, B.; Woods, P. C.

    2013-05-01

    Over the years, rapid development in computer technology has engendered simulation-based laboratories (labs) in addition to the traditional hands-on (physical) lab. Many higher education institutions have adopted simulation labs, replacing some existing physical lab experiments. The creation of new systems for conducting engineering lab activities has raised concerns among educators about the merits and shortcomings of both physical and simulation labs; at the same time, many arguments have been raised about the differences between the two. Investigating the effectiveness of both labs is complicated, as there are multiple factors that should be considered. In view of this challenge, a study of students' perspectives on their experience with key aspects of engineering laboratory exercises was conducted. In this study, the Visual, Auditory, Read and Kinesthetic (VARK) model was utilised to measure the students' cognitive styles. The investigation was done through a survey among participants from Multimedia University, Malaysia. The findings revealed significant differences for most of the aspects in physical and simulation labs.

  8. Comparing nonlinear MHD simulations of low-aspect-ratio RFPs to RELAX experiments

    NASA Astrophysics Data System (ADS)

    McCollam, K. J.; den Hartog, D. J.; Jacobson, C. M.; Sovinec, C. R.; Masamune, S.; Sanpei, A.

    2016-10-01

    Standard reversed-field pinch (RFP) plasmas provide a nonlinear dynamical system as a validation domain for numerical MHD simulation codes, with applications in general toroidal confinement scenarios including tokamaks. Using the NIMROD code, we simulate the nonlinear evolution of RFP plasmas similar to those in the RELAX experiment. The experiment's modest Lundquist numbers S (as low as a few times 10⁴) make closely matching MHD simulations tractable given present computing resources. Its low aspect ratio (≈2) motivates a comparison study using cylindrical and toroidal geometries in NIMROD. We present initial results from nonlinear single-fluid runs at S = 10⁴ for both geometries and a range of equilibrium parameters, which preliminarily show that the magnetic fluctuations are roughly similar between the two geometries and between simulation and experiment, though there appear to be some qualitative differences in their temporal evolution. Runs at higher S are planned. This work is supported by the U.S. DOE and by the Japan Society for the Promotion of Science.

  9. Computer simulation of the processes of inactivation of bacterial cells by dynamic low-coherent speckles

    NASA Astrophysics Data System (ADS)

    Ulianova, Onega V.; Ulyanov, Sergey S.; Sazanova, Elena V.; Zhihong, Zhang; Sibo, Zhou; Luo, Qingming; Zudina, Irina; Bednov, Andrey

    2006-05-01

    The biochemical, biophysical and optical aspects of the interaction of low-coherent light with bacterial cells are discussed. The influence of low-coherent speckles on colony growth is investigated. It has been demonstrated that the inhibitory effects of light on cells (Francisella tularensis) are connected with speckle dynamics. Regimes of illumination of a cell suspension for the devitalization of hazardous bacteria that cause very dangerous infections, such as tularemia, have been found. A mathematical model of the interaction of low-coherent laser radiation with a bacterial suspension has been proposed. Computer simulations of the processes of laser-cell interaction have been carried out.

  10. Simulation Speed Analysis and Improvements of Modelica Models for Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jorissen, Filip; Wetter, Michael; Helsen, Lieve

    This paper presents an approach for speeding up Modelica models. Insight is provided into how Modelica models are solved and what determines the tool’s computational speed. Aspects such as algebraic loops, code efficiency and integrator choice are discussed. This is illustrated using simple building simulation examples and Dymola. The generality of the work is in some cases verified using OpenModelica. Using this approach, a medium sized office building including building envelope, heating ventilation and air conditioning (HVAC) systems and control strategy can be simulated at a speed five hundred times faster than real time.

  11. Molecular dynamics simulations of collision-induced absorption: Implementation in LAMMPS

    NASA Astrophysics Data System (ADS)

    Fakhardji, W.; Gustafsson, M.

    2017-02-01

    We pursue simulations of collision-induced absorption in a mixture of argon and xenon gas at room temperature by means of classical molecular dynamics. The established theoretical approach (Hartmann et al. 2011 J. Chem. Phys. 134 094316) is implemented with the molecular dynamics package LAMMPS. The bound state features in the absorption spectrum are well reproduced with the molecular dynamics simulation in comparison with a laboratory measurement. The magnitude of the computed absorption, however, is underestimated in a large part of the spectrum. We suggest some aspects of the simulation that could be improved.

  12. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, D.; Alfonsi, A.; Talbot, P.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the computational cost of RISMC analysis by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much shorter time (microseconds instead of hours/days).
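    The surrogate idea — replacing an expensive simulation code with a cheap approximation fitted to a handful of training runs — can be illustrated with the simplest possible case, a least-squares quadratic fit in one input variable. This is a toy stand-in for the RISMC surrogates, not the project's actual method; names are illustrative.

```python
def fit_quadratic_surrogate(xs, ys):
    """Least-squares quadratic surrogate y ~ c0 + c1*x + c2*x^2.

    Solves the 3x3 normal equations directly; the returned callable
    stands in for the expensive simulation code between training runs.
    """
    # Moments of x up to order 4 and mixed moments with y.
    s = [sum(x ** k for x in xs) for k in range(5)]
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # Normal equations A c = t with A[i][j] = s[i+j].
    a = [[float(s[i + j]) for j in range(3)] for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        t[col], t[piv] = t[piv], t[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 3):
                a[r][c] -= f * a[col][c]
            t[r] -= f * t[col]
    c = [0.0] * 3
    for r in (2, 1, 0):
        c[r] = (t[r] - sum(a[r][k] * c[k] for k in range(r + 1, 3))) / a[r][r]
    return lambda x: c[0] + c[1] * x + c[2] * x * x
```

    Once fitted, each surrogate evaluation is a handful of multiplications, which is the microseconds-versus-hours trade-off the article describes; production surrogates (e.g. Gaussian processes or neural networks) follow the same train-once, evaluate-cheaply pattern.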

  13. From good intentions to healthy habits: towards integrated computational models of goal striving and habit formation.

    PubMed

    Pirolli, Peter

    2016-08-01

    Computational models were developed in the ACT-R neurocognitive architecture to address some aspects of the dynamics of behavior change. The simulations aim to address the day-to-day goal achievement data available from mobile health systems. The models refine current psychological theories of self-efficacy, intended effort, and habit formation, and provide an account for the mechanisms by which goal personalization, implementation intentions, and remindings work.

  14. Computer Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pronskikh, V. S.

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  15. HRLSim: a high performance spiking neural network simulator for GPGPU clusters.

    PubMed

    Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan

    2014-02-01

    Modeling of large-scale spiking neural models is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.

  16. Science Education: An Experiment in Facilitating the Learning of Neurophysiology.

    ERIC Educational Resources Information Center

    Levitan, Herbert

    1981-01-01

    Summarizes the experiences of a zoology professor attempting to construct a student-centered course in neurophysiology. Various aspects of the organization and conduct of the course are described, including the beginning experience, topics of interest, lecture, laboratory, computer simulation, examinations, student lectures. Evaluation of the…

  17. Methods for improving simulations of biological systems: systemic computation and fractal proteins

    PubMed Central

    Bentley, Peter J.

    2009-01-01

    Modelling and simulation are becoming essential for new fields such as synthetic biology. Perhaps the most important aspect of modelling is to follow a clear design methodology that will help to highlight unwanted deficiencies. The use of tools designed to aid the modelling process can be of benefit in many situations. In this paper, the modelling approach called systemic computation (SC) is introduced. SC is an interaction-based language, which enables individual-based expression and modelling of biological systems, and the interactions between them. SC permits a precise description of a hypothetical mechanism to be written using an intuitive graph-based or a calculus-based notation. The same description can then be directly run as a simulation, merging the hypothetical mechanism and the simulation into the same entity. However, even when using well-designed modelling tools to produce good models, the best model is not always the most accurate one. Frequently, computational constraints or lack of data make it infeasible to model an aspect of biology. Simplification may provide one way forward, but with inevitable consequences of decreased accuracy. Instead of attempting to replace an element with a simpler approximation, it is sometimes possible to substitute the element with a different but functionally similar component. In the second part of this paper, this modelling approach is described and its advantages are summarized using an exemplar: the fractal protein model. Finally, the paper ends with a discussion of good biological modelling practice by presenting lessons learned from the use of SC and the fractal protein model. PMID:19324681

  18. SiMon: Simulation Monitor for Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Xuran Qian, Penny; Cai, Maxwell Xu; Portegies Zwart, Simon; Zhu, Ming

    2017-09-01

    Scientific discovery via numerical simulations is important in modern astrophysics. This relatively new branch of astrophysics has become possible due to the development of reliable numerical algorithms and the high performance of modern computing technologies. These enable the analysis of large collections of observational data and the acquisition of new data via simulations at unprecedented accuracy and resolution. Ideally, simulations run until they reach some pre-determined termination condition, but other factors often cause extensive numerical approaches to break down at an earlier stage, with processes interrupted by unexpected events in the software or the hardware. In those cases, the scientist handles the interrupt manually, which is time-consuming and prone to errors. We present the Simulation Monitor (SiMon) to automate the farming of large and extensive simulation processes. Our method is light-weight, fully automates the entire workflow management, operates concurrently across multiple platforms, and can be installed in user space. Inspired by the process of crop farming, we perceive each simulation as a crop in the field, and running a simulation becomes analogous to growing a crop. With the development of SiMon we relax the technical burden of simulation management. The initial package was developed for extensive parameter searches in numerical simulations, but it turns out to work equally well for automating the computational processing and reduction of observational data.
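    The monitoring idea described above can be sketched as a simple supervision loop: each simulation is a "crop" with a state, and the monitor restarts interrupted runs until every one reaches its termination condition. This is an illustration of the concept only; class and method names are hypothetical, not SiMon's actual API.

```python
class Simulation:
    """A stand-in for one long-running simulation process ("crop")."""
    def __init__(self, name, steps_needed):
        self.name = name
        self.steps_needed = steps_needed
        self.steps_done = 0
        self.state = "RUNNING"

    def advance(self):
        # Pretend to run one chunk of work; interrupted runs make no progress.
        if self.state != "RUNNING":
            return
        self.steps_done += 1
        if self.steps_done >= self.steps_needed:
            self.state = "DONE"

def monitor(sims, max_cycles=100):
    """Automatically restart interrupted simulations and advance running ones."""
    for _ in range(max_cycles):
        for sim in sims:
            if sim.state == "INTERRUPTED":
                sim.state = "RUNNING"      # automatic restart, no manual handling
            sim.advance()
        if all(s.state == "DONE" for s in sims):
            return True
    return False

sims = [Simulation("run_a", 3), Simulation("run_b", 5)]
sims[1].state = "INTERRUPTED"              # simulate a hardware/software failure
assert monitor(sims)                       # all runs eventually complete
```

The key point is that the farming loop, not the scientist, notices and handles the interrupt.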

  19. Remote visualization and scale analysis of large turbulence datasets

    NASA Astrophysics Data System (ADS)

    Livescu, D.; Pulido, J.; Burns, R.; Canada, C.; Ahrens, J.; Hamann, B.

    2015-12-01

    Accurate simulations of turbulent flows require solving all the dynamically relevant scales of motions. This technique, called Direct Numerical Simulation, has been successfully applied to a variety of simple flows; however, the large-scale flows encountered in Geophysical Fluid Dynamics (GFD) would require meshes outside the range of the most powerful supercomputers for the foreseeable future. Nevertheless, the current generation of petascale computers has enabled unprecedented simulations of many types of turbulent flows which focus on various GFD aspects, from the idealized configurations extensively studied in the past to more complex flows closer to the practical applications. The pace at which such simulations are performed only continues to increase; however, the simulations themselves are restricted to a small number of groups with access to large computational platforms. Yet the petabytes of turbulence data offer almost limitless information on many different aspects of the flow, from the hierarchy of turbulence moments, spectra and correlations, to structure-functions, geometrical properties, etc. The ability to share such datasets with other groups can significantly reduce the time to analyze the data, help the creative process and increase the pace of discovery. Using the largest DOE supercomputing platforms, we have performed some of the biggest turbulence simulations to date, in various configurations, addressing specific aspects of turbulence production and mixing mechanisms. Until recently, the visualization and analysis of such datasets was restricted by access to large supercomputers. The public Johns Hopkins Turbulence database simplifies the access to multi-Terabyte turbulence datasets and facilitates turbulence analysis through the use of commodity hardware. 
First, one of our datasets, which is part of the database, will be described and then a framework that adds high-speed visualization and wavelet support for multi-resolution analysis of turbulence will be highlighted. The addition of wavelet support reduces the latency and bandwidth requirements for visualization, allowing for many concurrent users, and enables new types of analyses, including scale decomposition and coherent feature extraction.
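    The wavelet support described above can be illustrated with a single level of the orthonormal Haar transform: the signal splits into a coarse approximation plus detail coefficients, and coarse levels alone suffice for a low-resolution preview, which is what cuts latency and bandwidth for remote visualization. This is a minimal sketch of the idea, not the database's actual pipeline.

```python
import math

def haar_level(signal):
    """One level of the orthonormal Haar transform: (approximation, detail)."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact reconstruction from one level of coefficients."""
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([s * (a + d), s * (a - d)])
    return out

u = [4.0, 2.0, 5.0, 7.0, 1.0, 1.0, 3.0, 0.0]   # toy 1D "velocity" samples
a, d = haar_level(u)
rec = haar_inverse(a, d)
assert all(abs(x - y) < 1e-12 for x, y in zip(u, rec))   # perfect reconstruction
```

Because the transform is orthonormal, energy splits exactly between scales, which is what makes scale decomposition of turbulence statistics meaningful.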

  20. MO-E-18C-04: Advanced Computer Simulation and Visualization Tools for Enhanced Understanding of Core Medical Physics Concepts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naqvi, S

    2014-06-15

    Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and the prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which therefore gradually decay away from memory. For example, few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects, and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools were developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics; e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays trajectories of a collection of electrons under an arbitrary space- and time-dependent Lorentz force using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics. The simulations serve as virtual experiments that give a deeper and longer-lasting understanding of core principles. The student can then make sound judgements in novel situations encountered beyond routine clinical activities.
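    The second tool's core computation, a relativistic charged-particle trajectory under the Lorentz force, can be sketched with the standard Boris-style rotation for a pure magnetic field. The units and field configuration here are illustrative, not those of the abstract's simulator.

```python
import math

Q, M, C = 1.0, 1.0, 1.0                  # charge, mass, speed of light (natural units)

def gamma(u):
    """Lorentz factor from proper velocity u = gamma * v."""
    return math.sqrt(1.0 + sum(ui * ui for ui in u) / C**2)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def boris_push(u, B, dt):
    """Rotate the proper velocity about B (electric field set to zero here)."""
    g = gamma(u)
    t = tuple(Q * dt / (2.0 * M * g) * Bi for Bi in B)
    s_fac = 2.0 / (1.0 + sum(ti * ti for ti in t))
    s = tuple(s_fac * ti for ti in t)
    u_prime = tuple(ui + ci for ui, ci in zip(u, cross(u, t)))
    return tuple(ui + ci for ui, ci in zip(u, cross(u_prime, s)))

u = (0.9, 0.0, 0.0)                       # relativistic proper velocity
B = (0.0, 0.0, 1.0)                       # uniform field along z
g0 = gamma(u)
for _ in range(1000):
    u = boris_push(u, B, 0.01)
# A magnetic field does no work: the Lorentz factor (energy) is conserved.
assert abs(gamma(u) - g0) < 1e-9
```

The Boris split is an exact rotation of the proper velocity, which is why energy conservation holds to machine precision here.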

  1. The Overshoot Phenomenon in Geodynamics Codes

    NASA Astrophysics Data System (ADS)

    Kommu, R. K.; Heien, E. M.; Kellogg, L. H.; Bangerth, W.; Heister, T.; Studley, E. H.

    2013-12-01

    The overshoot phenomenon is a common occurrence in numerical software when a continuous function on a finite-dimensional discretized space is used to approximate a discontinuous jump (in temperature or material concentration, for example). The resulting solution overshoots, and undershoots, the discontinuous jump. Numerical simulations play an extremely important role in mantle convection research, both because of the strong temperature and stress dependence of viscosity and because of the inaccessibility of the deep earth. Under these circumstances, it is essential that mantle convection simulations be extremely accurate and reliable. CitcomS and ASPECT are two finite element based mantle convection codes developed and maintained by the Computational Infrastructure for Geodynamics. CitcomS is designed to run on multiple high-performance computing platforms. ASPECT, an adaptive mesh refinement (AMR) code built on the deal.II library, also scales well on various HPC platforms. Both CitcomS and ASPECT exhibit the overshoot phenomenon. One attempt at controlling the overshoot uses the Entropy Viscosity method, which introduces an artificial diffusion term in the energy equation of mantle convection; this artificial diffusion term is small where the temperature field is smooth. We present results from CitcomS and ASPECT that quantify the effect of the Entropy Viscosity method in reducing the overshoot phenomenon. In the discontinuous Galerkin (DG) finite element method, the test functions are continuous within each element but discontinuous across inter-element boundaries, so the solution space is discontinuous. FEniCS is a collection of free software tools that automate the solution of differential equations using finite element methods. In this work we also present results from a finite element mantle convection simulation implemented in FEniCS that investigates the effect of using DG elements in reducing the overshoot problem.
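    The overshoot and its suppression by artificial diffusion can be demonstrated in one dimension: advecting a sharp step with a second-order scheme (Lax-Wendroff) over- and undershoots the jump, while adding a small diffusion term renders the scheme monotone. This is a minimal sketch in the spirit of the entropy-viscosity idea; the parameters are illustrative, not those of CitcomS or ASPECT.

```python
def advect(u0, c, steps, art_diff=0.0):
    """Periodic 1D advection: Lax-Wendroff plus optional artificial diffusion."""
    u = list(u0)
    n = len(u)
    for _ in range(steps):
        new = []
        for i in range(n):
            um, up = u[i - 1], u[(i + 1) % n]
            lw = u[i] - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2 * u[i] + um)
            new.append(lw + art_diff * (up - 2 * u[i] + um))
        u = new
    return u

step = [1.0] * 20 + [0.0] * 20               # discontinuous "temperature" jump
c = 0.5                                      # Courant number
sharp = advect(step, c, 20)
damped = advect(step, c, 20, art_diff=0.5 * c * (1 - c))  # upwind-equivalent
assert max(sharp) > 1.001                    # overshoot above the jump
assert max(damped) <= 1.0 + 1e-12            # added diffusion suppresses it
```

With `art_diff = c*(1-c)/2` the scheme reduces exactly to first-order upwind, which is monotone but smears the front; the entropy-viscosity method is a smarter version of the same trade-off, applying diffusion only where the solution is rough.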

  2. An Efficient Finite Element Framework to Assess Flexibility Performances of SMA Self-Expandable Carotid Artery Stents

    PubMed Central

    Ferraro, Mauro; Auricchio, Ferdinando; Boatti, Elisa; Scalet, Giulia; Conti, Michele; Morganti, Simone; Reali, Alessandro

    2015-01-01

    Computer-based simulations are nowadays widely exploited for the prediction of the mechanical behavior of different biomedical devices. In this context, structural finite element analyses (FEA) are currently the preferred computational tool to evaluate the stent response under bending. This work aims at developing a computational framework based on linear and higher-order FEA to evaluate the flexibility of self-expandable carotid artery stents. In particular, numerical simulations involving large deformations and inelastic shape memory alloy constitutive modeling are performed. The results suggest that higher-order FEA represents the computational domain accurately and yields a better approximation of the solution with a greatly reduced number of degrees of freedom with respect to linear FEA. Moreover, when buckling occurs, higher-order FEA has a superior capability of reproducing the associated nonlinear local effects. PMID:26184329

  3. An investigation of the information propagation and entropy transport aspects of Stirling machine numerical simulation

    NASA Technical Reports Server (NTRS)

    Goldberg, Louis F.

    1992-01-01

    Aspects of the information propagation modeling behavior of integral machine computer simulation programs are investigated in terms of a transmission line. In particular, the effects of pressure-linking and temporal integration algorithms on the amplitude ratio and phase angle predictions are compared against experimental and closed-form analytic data. It is concluded that the discretized, first order conservation balances may not be adequate for modeling information propagation effects at characteristic numbers less than about 24. An entropy transport equation suitable for generalized use in Stirling machine simulation is developed. The equation is evaluated by including it in a simulation of an incompressible oscillating flow apparatus designed to demonstrate the effect of flow oscillations on the enhancement of thermal diffusion. Numerical false diffusion is found to be a major factor inhibiting validation of the simulation predictions with experimental and closed-form analytic data. A generalized false diffusion correction algorithm is developed which allows the numerical results to match their analytic counterparts. Under these conditions, the simulation yields entropy predictions which satisfy Clausius' inequality.

  4. The Integrated Plasma Simulator: A Flexible Python Framework for Coupled Multiphysics Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, Samantha S; Elwasif, Wael R; Bernholdt, David E

    2011-11-01

    High-fidelity coupled multiphysics simulations are an increasingly important aspect of computational science. In many domains, however, there has been very limited experience with simulations of this sort; therefore, research in coupled multiphysics often requires computational frameworks with significant flexibility to respond to the changing directions of the physics and mathematics. This paper presents the Integrated Plasma Simulator (IPS), a framework designed for loosely coupled simulations of fusion plasmas. The IPS provides users with a simple component architecture into which a wide range of existing plasma physics codes can be inserted as components. Simulations can take advantage of multiple levels of parallelism supported in the IPS, and can be controlled by a high-level ``driver'' component, or by other coordination mechanisms, such as an asynchronous event service. We describe the requirements and design of the framework, and how they were implemented in the Python language. We also illustrate the flexibility of the framework by providing examples of different types of simulations that utilize various features of the IPS.
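    The component-plus-driver architecture described above can be sketched in a few lines of Python: physics codes are wrapped behind a uniform interface, and a driver sequences their execution while passing shared state between them. All class and method names here are illustrative, not the IPS's actual API.

```python
class Component:
    """Uniform interface that each wrapped physics code implements."""
    def step(self, state):
        raise NotImplementedError

class HeatingModel(Component):
    def step(self, state):
        state["temperature"] += state.get("heating_power", 1.0)

class TransportModel(Component):
    def step(self, state):
        state["temperature"] *= 0.9        # crude loss term

class Driver:
    """High-level driver: runs registered components in order each time step."""
    def __init__(self, components):
        self.components = components

    def run(self, state, n_steps):
        for _ in range(n_steps):
            for comp in self.components:
                comp.step(state)
        return state

state = Driver([HeatingModel(), TransportModel()]).run({"temperature": 0.0}, 50)
# The iteration T <- 0.9 * (T + 1) converges to the fixed point 0.9 / (1 - 0.9) = 9.
assert abs(state["temperature"] - 9.0) < 0.1
```

Loose coupling means a component only sees the shared state, so swapping one physics code for another never touches the driver.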

  5. [The history of development of evolutionary methods in St. Petersburg school of computer simulation in biology].

    PubMed

    Menshutkin, V V; Kazanskiĭ, A B; Levchenko, V F

    2010-01-01

    The history of the rise and development of evolutionary methods in the Saint Petersburg school of biological modelling is traced and analyzed. Some pioneering works in the simulation of ecological and evolutionary processes performed in the St. Petersburg school became exemplary for many followers in Russia and abroad. The individual-based approach became the crucial point in the history of the school, as an adequate instrument for constructing models of biological evolution. This approach is natural for simulating the evolution of life-history parameters and adaptive processes in populations and communities. In some cases the simulated evolutionary process was used to solve an inverse problem, i.e., to estimate uncertain life-history parameters of a population. Evolutionary computation is one more application of this approach in a great many fields. The problems and prospects of ecological and evolutionary modelling in general are discussed.

  6. Computer Simulations of Ion Transport in Polymer Electrolyte Membranes.

    PubMed

    Mogurampelly, Santosh; Borodin, Oleg; Ganesan, Venkat

    2016-06-07

    Understanding the mechanisms and optimizing ion transport in polymer membranes have been the subject of active research for more than three decades. We present an overview of the progress and challenges involved with the modeling and simulation aspects of the ion transport properties of polymer membranes. We are concerned mainly with atomistic and coarser level simulation studies and discuss some salient work in the context of pure binary and single ion conducting polymer electrolytes, polymer nanocomposites, block copolymers, and ionic liquid-based hybrid electrolytes. We conclude with an outlook highlighting future directions.
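    A routine common to the simulation studies surveyed above is extracting an ion's diffusion coefficient from the mean-squared displacement (MSD) via the Einstein relation MSD(t) = 2 d D t, with d the dimensionality. As a hedged stand-in for a real MD trajectory, the sketch below uses a 1D lattice random walk, for which D = 1/2 in lattice units.

```python
import random

def msd(trajectories, t):
    """Mean-squared displacement of an ensemble at lag t (from the origin)."""
    return sum(traj[t] ** 2 for traj in trajectories) / len(trajectories)

random.seed(42)
n_walkers, n_steps = 2000, 100
trajectories = []
for _ in range(n_walkers):
    x, path = 0, [0]
    for _ in range(n_steps):
        x += random.choice((-1, 1))       # unit hop, a toy stand-in for ion motion
        path.append(x)
    trajectories.append(path)

# Einstein relation in 1D: D = MSD(t) / (2 t); a unit-step walk has MSD(t) = t.
D = msd(trajectories, n_steps) / (2.0 * n_steps)
assert abs(D - 0.5) < 0.1
```

In a real polymer-electrolyte study the same estimator is applied to unwrapped ion coordinates, and the linear-in-time regime of the MSD must be identified before fitting.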

  7. Guidebook for solar process-heat applications

    NASA Astrophysics Data System (ADS)

    Fazzolare, R.; Mignon, G.; Campoy, L.; Luttmann, F.

    1981-01-01

    The potential for solar process heat in Arizona and some of the general technical aspects of solar, such as insolation, siting, and process analysis, are explored. Major aspects of a solar plant design are presented. Collectors, storage, and heat exchange are discussed. Relating hardware costs to annual dollar benefits is also discussed. Rate of return, cash flow, and payback are discussed as they relate to solar systems. Design analysis procedures are presented. A design cost-optimization technique using a yearly computer simulation of solar process operation is demonstrated.
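    The economic screening the guidebook describes can be sketched as a simple payback and cumulative cash-flow calculation: given an installed cost and a stream of annual fuel-cost savings, find the year in which the plant pays for itself. The figures below are purely illustrative, not from the guidebook.

```python
def simple_payback(installed_cost, annual_savings):
    """Years until cumulative (undiscounted) savings recover the installed cost."""
    return installed_cost / annual_savings

def cumulative_cash_flow(installed_cost, annual_savings, years):
    """Undiscounted cumulative cash flow at the end of each year 0..years."""
    return [-installed_cost + annual_savings * y for y in range(years + 1)]

cost, savings = 120_000.0, 15_000.0        # hypothetical solar process-heat plant
assert simple_payback(cost, savings) == 8.0
flow = cumulative_cash_flow(cost, savings, 10)
assert flow[0] == -120_000.0 and flow[8] == 0.0 and flow[10] > 0.0
```

A full design study would replace the flat annual savings with the output of the yearly simulation and discount the cash flows to compute rate of return.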

  8. Realistic natural atmospheric phenomena and weather effects for interactive virtual environments

    NASA Astrophysics Data System (ADS)

    McLoughlin, Leigh

    Clouds and the weather are important aspects of any natural outdoor scene, but existing dynamic techniques within computer graphics offer only the simplest of cloud representations. The problem that this work addresses is how to provide a means of simulating clouds and weather features, such as precipitation, that is suitable for virtual environments. Techniques for cloud simulation are available within the area of meteorology, but numerical weather prediction systems are computationally expensive, give more numerical accuracy than we require for graphics, and are restricted to the laws of physics. Within computer graphics, we often need to direct and adjust physical features or to bend reality to meet artistic goals, which is a key difference between the subjects of computer graphics and physical science. Pure physically-based simulations, however, evolve their solutions according to pre-set rules and are notoriously difficult to control. The challenge then is for the solution to be computationally lightweight and able to be directed in some measure, while at the same time producing believable results. This work presents a lightweight physically-based cloud simulation scheme that simulates the dynamic properties of cloud formation and weather effects. The system simulates water vapour, cloud water, cloud ice, rain, snow and hail. The water model incorporates control parameters, and the cloud model uses an arbitrary vertical temperature profile, with a tool described to allow the user to define this. The result of this work is that clouds can now be simulated in near real-time, complete with precipitation. The temperature profile and tool then provide a means of directing the resulting formation.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demeure, I.M.

    The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM-Parallel, Distributed computation Graph Model-a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made of two complementary submodels, the DCPG-Distributed Computing Precedence Graph-model, and the PAM-Process Architecture Model-model. DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message-passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. He illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise. They can be used in documenting the design and the implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. He then describes VISA-VISual Assistant, a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations, which are procedures that can be executed by VISA.

  10. Toward Theory-Based Instruction in Scientific Problem Solving.

    ERIC Educational Resources Information Center

    Heller, Joan I.; And Others

    Several empirical and theoretical analyses related to scientific problem-solving are reviewed, including: detailed studies of individuals at different levels of expertise, and computer models simulating some aspects of human information processing during problem solving. Analysis of these studies has revealed many facets about the nature of the…

  11. BASIC Simulation Programs; Volumes I and II. Biology, Earth Science, Chemistry.

    ERIC Educational Resources Information Center

    Digital Equipment Corp., Maynard, MA.

    Computer programs which teach concepts and processes related to biology, earth science, and chemistry are presented. The seven biology problems deal with aspects of genetics, evolution and natural selection, gametogenesis, enzymes, photosynthesis, and the transport of material across a membrane. Four earth science problems concern climates, the…

  12. Applying WEPP technologies to western alkaline surface coal mines

    Treesearch

    J. Q. Wu; S. Dun; H. Rhee; X. Liu; W. J. Elliot; T. Golnar; J. R. Frankenberger; D. C. Flanagan; P. W. Conrad; R. L. McNearny

    2011-01-01

    One aspect of planning surface mining operations, regulated by the National Pollutant Discharge Elimination System (NPDES), is estimating potential environmental impacts during mining operations and the reclamation period that follows. Practical computer simulation tools are effective for evaluating site-specific sediment control and reclamation plans for the NPDES....

  13. Visual Computing Environment

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Putt, Charles W.

    1997-01-01

    The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. 
Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.

  14. Physically-based in silico light sheet microscopy for visualizing fluorescent brain models

    PubMed Central

    2015-01-01

    Background We present a physically-based computational model of the light sheet fluorescence microscope (LSFM). Based on Monte Carlo ray tracing and geometric optics, our method simulates the operational aspects and image formation process of the LSFM. This simulated, in silico LSFM creates synthetic images of digital fluorescent specimens that can resemble those generated by a real LSFM, as opposed to established visualization methods producing merely visually-plausible images. We also propose an accurate fluorescence rendering model which takes into account the intrinsic characteristics of fluorescent dyes to simulate the light interaction with fluorescent biological specimens. Results We demonstrate first results of our visualization pipeline applied to a simplified brain tissue model reconstructed from the somatosensory cortex of a young rat. The modeling aspects of the LSFM units are qualitatively analysed, and the results of the fluorescence model are quantitatively validated against the fluorescence brightness equation and characteristic emission spectra of different fluorescent dyes. AMS subject classification: Modelling and simulation. PMID:26329404

  15. Force fields and scoring functions for carbohydrate simulation.

    PubMed

    Xiong, Xiuming; Chen, Zhaoqiang; Cossins, Benjamin P; Xu, Zhijian; Shao, Qiang; Ding, Kai; Zhu, Weiliang; Shi, Jiye

    2015-01-12

    Carbohydrate dynamics plays a vital role in many biological processes, but we are not currently able to probe this with experimental approaches. The highly flexible nature of carbohydrate structures differs in many aspects from other biomolecules, posing significant challenges for studies employing computational simulation. Over past decades, computational study of carbohydrates has focused on the development of structure prediction methods, force field optimization, molecular dynamics simulation, and scoring functions for carbohydrate-protein interactions. Advances in carbohydrate force fields and scoring functions can be largely attributed to enhanced computational algorithms, the application of quantum mechanics, and the increasing number of experimental structures determined by X-ray and NMR techniques. The conformational analysis of carbohydrates is challenging and has been studied intensively to elucidate the anomeric, exo-anomeric, and gauche effects. Here, we review the issues associated with carbohydrate force fields and scoring functions, which will have broad application in the field of carbohydrate-based drug design.

  16. Atomdroid: a computational chemistry tool for mobile platforms.

    PubMed

    Feldt, Jonas; Mata, Ricardo A; Dieterich, Johannes M

    2012-04-23

    We present the implementation of a new molecular mechanics program designed for use in mobile platforms, the first specifically built for these devices. The software is designed to run on Android operating systems and is compatible with several modern tablet-PCs and smartphones available in the market. It includes molecular viewer/builder capabilities with integrated routines for geometry optimizations and Monte Carlo simulations. These functionalities allow it to work as a stand-alone tool. We discuss some particular development aspects, as well as the overall feasibility of using computational chemistry software packages in mobile platforms. Benchmark calculations show that through efficient implementation techniques even hand-held devices can be used to simulate midsized systems using force fields.
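    The kind of Monte Carlo routine such a molecular mechanics code runs can be sketched with Metropolis sampling of a single coordinate in a harmonic potential. The potential and all parameters here are illustrative stand-ins, not Atomdroid's force field or API.

```python
import math
import random

def metropolis(energy, x0, n_steps, beta=1.0, step=0.5, seed=1):
    """Metropolis Monte Carlo on one coordinate; returns the sampled positions."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    samples = []
    for _ in range(n_steps):
        trial = x + rng.uniform(-step, step)
        e_trial = energy(trial)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_trial <= e or rng.random() < math.exp(-beta * (e_trial - e)):
            x, e = trial, e_trial
        samples.append(x)
    return samples

harmonic = lambda x: 0.5 * x * x           # k = 1 spring as a toy force field
samples = metropolis(harmonic, x0=4.0, n_steps=20000)
mean_e = sum(harmonic(x) for x in samples[2000:]) / len(samples[2000:])
# Equipartition: <E> = 1 / (2 * beta) = 0.5 for one harmonic degree of freedom.
assert abs(mean_e - 0.5) < 0.1
```

The same accept/reject loop, applied to all atomic coordinates with a molecular mechanics energy function, is what makes Monte Carlo feasible even on hand-held hardware.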

  17. Verification of component mode techniques for flexible multibody systems

    NASA Technical Reports Server (NTRS)

    Wiens, Gloria J.

    1990-01-01

    Investigations were conducted into the modeling aspects of flexible multibodies undergoing large angular displacements. Models were to be generated and analyzed through the application of computer simulation packages employing 'component mode synthesis' techniques. The Multibody Modeling, Verification and Control (MMVC) laboratory plan was implemented, which includes running experimental tests on flexible multibody test articles. From these tests, data were to be collected for later correlation and verification of the theoretical results predicted by the modeling and simulation process.

  18. A Distributed Data Base System Concept for Defense Test and Evaluation.

    DTIC Science & Technology

    1983-03-01

    …measure adequately all variables that affect the outcome of the test. 7. Free-Play: The second aspect of variable control that should be considered is the…amount of free-play permitted by a particular simulation. In a computer simulation free-play, if any, is limited to those elements designed into it by…the programmers. In operational testing of prototypes, however, a great deal of free-play can be introduced by allowing players to react to situations

  19. Numerical approach of the injection molding process of fiber-reinforced composite with considering fiber orientation

    NASA Astrophysics Data System (ADS)

    Nguyen Thi, T. B.; Yokoyama, A.; Ota, K.; Kodama, K.; Yamashita, K.; Isogai, Y.; Furuichi, K.; Nonomura, C.

    2014-05-01

    One of the most important challenges in the injection molding process of short-glass-fiber/thermoplastic composite parts is being able to predict the fiber orientation, since it controls the mechanical and physical properties of the final parts. Folgar and Tucker added to the Jeffery equation a diffusion-type term, which introduces a phenomenological coefficient modeling the randomizing effect of the mechanical interactions between fibers, to predict the fiber orientation in concentrated suspensions. Their experiments indicated that this coefficient depends on the fiber volume fraction and aspect ratio. However, a definition of the fiber interaction coefficient, which is essential in fiber orientation simulations, has not yet been established. Consequently, this study proposed an extended fiber interaction model that incorporates a fiber dynamics simulation in order to obtain a global fiber interaction coefficient. The model assumes that the coefficient is a function of the fiber concentration, aspect ratio, and angular velocity. The proposed model was incorporated into the computer-aided engineering simulation package C-Mold. Short-glass-fiber/polyamide-6 composites were produced by injection molding with fiber weight concentrations of 30 wt.%, 50 wt.%, and 70 wt.%. The physical properties of these composites were examined, and their fiber orientation distributions were measured by micro-computed-tomography (μ-CT) equipment. The simulation results showed good agreement with the experimental results.
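    The single-fiber kinematics underlying such orientation models can be sketched with Jeffery's equation for the in-plane angle of an ellipsoidal fiber in simple shear; the Folgar-Tucker model adds a rotary-diffusion term, scaled by the interaction coefficient, on top of this. The angle convention and parameters below are illustrative, and the tumbling period is checked against Jeffery's closed form T = 2*pi*(r + 1/r)/gamma_dot.

```python
import math

def jeffery_rate(phi, shear_rate, r):
    """d(phi)/dt for aspect ratio r; phi measured from the velocity-gradient axis."""
    return shear_rate / (r * r + 1.0) * (r * r * math.cos(phi) ** 2
                                         + math.sin(phi) ** 2)

def orbit_period(shear_rate, r, dt=1e-3):
    """Integrate (forward Euler) until the fiber has tumbled through 2*pi."""
    phi, t = 0.0, 0.0
    while phi < 2.0 * math.pi:
        phi += dt * jeffery_rate(phi, shear_rate, r)
        t += dt
    return t

gamma_dot, r = 1.0, 10.0                   # shear rate, fiber aspect ratio
T_theory = 2.0 * math.pi * (r + 1.0 / r) / gamma_dot   # Jeffery's closed form
assert abs(orbit_period(gamma_dot, r) - T_theory) / T_theory < 0.01
```

Note how the rotation rate is smallest when the fiber is aligned with the flow, so slender fibers spend most of each orbit nearly aligned; this is the behavior the interaction coefficient perturbs in concentrated suspensions.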

  20. Validation of computer simulation training for esophagogastroduodenoscopy: Pilot study.

    PubMed

    Sedlack, Robert E

    2007-08-01

    Little is known regarding the value of esophagogastroduodenoscopy (EGD) simulators in education. The purpose of the present paper was to validate the use of computer simulation in novice EGD training. In phase 1, expert endoscopists evaluated various aspects of simulation fidelity as compared to live endoscopy. Additionally, computer-recorded performance metrics were assessed by comparing the recorded scores from users of three different experience levels. In phase 2, the transfer of simulation-acquired skills to the clinical setting was assessed in a two-group, randomized pilot study. The setting was a large gastroenterology (GI) Fellowship training program; in phase 1, 21 subjects (seven expert, intermediate and novice endoscopist), made up the three experience groups. In phase 2, eight novice GI fellows were involved in the two-group, randomized portion of the study examining the transfer of simulation skills to the clinical setting. During the initial validation phase, each of the 21 subjects completed two standardized EDG scenarios on a computer simulator and their performance scores were recorded for seven parameters. Following this, staff participants completed a questionnaire evaluating various aspects of the simulator's fidelity. Finally, four novice GI fellows were randomly assigned to receive 6 h of simulator-augmented training (SAT group) in EGD prior to beginning 1 month of patient-based EGD training. The remaining fellows experienced 1 month of patient-based training alone (PBT group). Results of the seven measured performance parameters were compared between three groups of varying experience using a Wilcoxon ranked sum test. The staffs' simulator fidelity survey used a 7-point Likert scale (1, very unrealistic; 4, neutral; 7, very realistic) for each of the parameters examined. During the second phase of this study, supervising staff rated both SAT and PBT fellows' patient-based performance daily. 
Scoring in each skill was completed using a 7-point Likert scale (1, strongly disagree; 4, neutral; 7, strongly agree). Median scores were compared between groups using the Wilcoxon rank-sum test. Staff evaluations of fidelity found that only two of the parameters examined (anatomy and scope maneuverability) had a significant degree of realism. The remaining areas were felt to be limited in their fidelity. Of the computer-recorded performance scores, only the novice group could be reliably distinguished from the other two experience groups. In the clinical application phase, the median Patient Discomfort ratings were superior in the PBT group (6; interquartile range [IQR], 5-6) as compared to the SAT group (5; IQR, 4-6; P = 0.015). PBT fellows' ratings were also superior in Sedation, Patient Discomfort, Independence and Competence during various phases of the evaluation. At no point were SAT fellows rated higher than the PBT group in any of the parameters examined. This EGD simulator is limited in its degree of fidelity and can differentiate only novice endoscopists from other levels of experience. Finally, skills learned during EGD simulation training do not appear to translate well into patient-based endoscopy skills. These findings argue against a key element of validity for the use of this computer simulator in novice EGD training.
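    The group comparisons above rest on the Wilcoxon rank-sum test applied to 7-point Likert ratings. A minimal sketch of such a comparison, using invented ratings for a PBT-style and an SAT-style group (not the study's data) and the normal approximation without tie correction:

```python
from statistics import NormalDist

# Hypothetical 7-point Likert ratings (1 = strongly disagree ... 7 = strongly
# agree) for two small training groups; values are illustrative only.
pbt = [6, 6, 5, 6, 7, 6, 5, 6]     # patient-based training group
sat = [5, 4, 5, 6, 4, 5, 5, 4]     # simulator-augmented training group

def rank_sum_test(x, y):
    """Wilcoxon rank-sum z-statistic and two-sided p (normal approximation)."""
    pooled = sorted(x + y)
    # average rank for tied values
    rank = {v: sum(i + 1 for i, p in enumerate(pooled) if p == v) / pooled.count(v)
            for v in set(pooled)}
    w = sum(rank[v] for v in x)                      # rank sum of group x
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = (n1 * n2 * (n1 + n2 + 1) / 12.0) ** 0.5
    z = (w - mean) / sd
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return z, p

z, p = rank_sum_test(pbt, sat)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```

    With these made-up ratings the higher-median PBT group comes out significantly different at the 0.05 level, mirroring the structure (though not the data) of the comparisons reported above.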

  1. Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems

    NASA Astrophysics Data System (ADS)

    Sandwell, David; Smith-Konter, Bridget

    2018-05-01

    We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations embedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
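    The computational advantage mentioned above comes from evaluating the convolution of a source distribution with a point-source response in the Fourier domain. A toy 1-D sketch, with a hypothetical Gaussian kernel standing in for the paper's layered viscoelastic solution:

```python
import numpy as np

# Convolution-theorem shortcut: the response to an arbitrary distribution of
# sources is the source density convolved with a point-source kernel. The
# Gaussian kernel is a placeholder, not the paper's viscoelastic solution;
# the grid and amplitudes are illustrative.
n = 256
x = np.linspace(-50.0, 50.0, n)
kernel = np.exp(-x**2 / (2.0 * 5.0**2))       # hypothetical point-source response
rng = np.random.default_rng(0)
source = rng.random(n)                        # arbitrary force-couple density

# Direct O(n^2) circular convolution by definition...
direct = np.array([sum(source[m] * kernel[(k - m) % n] for m in range(n))
                   for k in range(n)])
# ...versus O(n log n) via the convolution theorem in the Fourier domain.
spectral = np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(kernel)))

print("max |difference| =", np.max(np.abs(direct - spectral)))
```

    The two results agree to round-off; the Fourier route costs O(n log n) per evaluation, which is what makes rapid earthquake-cycle calculations feasible.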

  2. A Computational and Experimental Study of Resonators in Three Dimensions

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W.; Ju, H.; Jones, Michael G.; Watson, Willie R.; Parrott, Tony L.

    2009-01-01

    In a previous work by the present authors, a computational and experimental investigation of the acoustic properties of two-dimensional slit resonators was carried out. The present paper reports the results of a study extending the previous work to three dimensions. This investigation has two basic objectives. The first is to validate the computed results from direct numerical simulations of the flow and acoustic fields of slit resonators in three dimensions by comparison with experimental measurements in a normal incidence impedance tube. The second objective is to study the flow physics of resonant liners responsible for sound wave dissipation. Extensive comparisons are provided between computed and measured acoustic liner properties with both discrete frequency and broadband sound sources. Good agreement is found over a wide range of frequencies and sound pressure levels. Direct numerical simulation confirms the previous finding in two dimensions that vortex shedding is the dominant dissipation mechanism at high sound pressure intensity. However, it is observed that the behavior of the shed vortices in three dimensions is quite different from that in two dimensions. In three dimensions, the shed vortices tend to evolve into ring (circular in plan form) vortices, even though the slit resonator opening from which the vortices are shed has an aspect ratio of 2.5. Under the excitation of discrete frequency sound, the shed vortices align themselves into two regularly spaced vortex trains moving away from the resonator opening in opposite directions. This is different from the chaotic shedding of vortices found in two-dimensional simulations. The effect of slit aspect ratio at a fixed porosity is briefly studied. For the range of liners considered in this investigation, it is found that the absorption coefficient of a liner increases when the open area of the single slit is subdivided into multiple, smaller slits.

  3. Learning and evolution in bacterial taxis: an operational amplifier circuit modeling the computational dynamics of the prokaryotic 'two component system' protein network.

    PubMed

    Di Paola, Vieri; Marijuán, Pedro C; Lahoz-Beltra, Rafael

    2004-01-01

    Adaptive behavior in unicellular organisms (i.e., bacteria) depends on highly organized networks of proteins governing purposefully the myriad of molecular processes occurring within the cellular system. For instance, bacteria are able to explore the environment within which they develop by utilizing the motility of their flagellar system as well as a sophisticated biochemical navigation system that samples the environmental conditions surrounding the cell, searching for nutrients or moving away from toxic substances or dangerous physical conditions. In this paper we discuss how proteins of the intervening signal transduction network could be modeled as artificial neurons, simulating the dynamical aspects of the bacterial taxis. The model is based on the assumption that, in some important aspects, proteins can be considered as processing elements or McCulloch-Pitts artificial neurons that transfer and process information from the bacterium's membrane surface to the flagellar motor. This simulation of bacterial taxis has been carried out on a hardware realization of a McCulloch-Pitts artificial neuron using an operational amplifier. Based on the behavior of the operational amplifier we produce a model of the interaction between CheY and FliM, elements of the prokaryotic two component system controlling chemotaxis, as well as a simulation of learning and evolution processes in bacterial taxis. On the one side, our simulation results indicate that, computationally, these protein 'switches' are similar to McCulloch-Pitts artificial neurons, suggesting a bridge between evolution and learning in dynamical systems at cellular and molecular levels and the evolutive hardware approach. On the other side, important protein 'tactilizing' properties are not tapped by the model, and this suggests further complexity steps to explore in the approach to biological molecular computing.
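    The paper's central analogy treats a protein switch as a McCulloch-Pitts threshold unit. A minimal sketch of such a unit, with illustrative weights and threshold (not fitted to chemotaxis data), where output 1 stands for a motor reversal (tumble):

```python
# Minimal McCulloch-Pitts threshold unit, after the record's analogy: the
# FliM motor switch "fires" (motor reverses, causing a tumble) when the
# weighted input from phosphorylated CheY crosses a threshold. The weights,
# threshold and input values below are illustrative, not measured quantities.

def mcculloch_pitts(inputs, weights, threshold):
    """Binary threshold neuron: output 1 iff the weighted sum >= threshold."""
    s = sum(i * w for i, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# Hypothetical two-input switch: CheY-P level (excitatory) and a competing
# dephosphorylation signal (inhibitory).
w = [1.0, -1.0]
theta = 0.5
print(mcculloch_pitts([0.9, 0.1], w, theta))  # high CheY-P: tumble (1)
print(mcculloch_pitts([0.3, 0.4], w, theta))  # low CheY-P: keep running (0)
```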

  4. Enhanced Representation of Turbulent Flow Phenomena in Large-Eddy Simulations of the Atmospheric Boundary Layer using Grid Refinement with Pseudo-Spectral Numerics

    NASA Astrophysics Data System (ADS)

    Torkelson, G. Q.; Stoll, R., II

    2017-12-01

    Large Eddy Simulation (LES) is a tool commonly used to study the turbulent transport of momentum, heat, and moisture in the Atmospheric Boundary Layer (ABL). For a wide range of ABL LES applications, representing the full range of turbulent length scales in the flow field is a challenge. This is an acute problem in regions of the ABL with strong velocity or scalar gradients, which are typically poorly resolved by standard computational grids (e.g., near the ground surface, in the entrainment zone). Most efforts to address this problem have focused on advanced sub-grid scale (SGS) turbulence model development, or on the use of massive computational resources. While some work exists using embedded meshes, very little has been done on the use of grid refinement. Here, we explore the benefits of grid refinement in a pseudo-spectral LES numerical code. The code utilizes both uniform refinement of the grid in horizontal directions, and stretching of the grid in the vertical direction. Combining the two techniques allows us to refine areas of the flow while maintaining an acceptable grid aspect ratio. In tests that used only refinement of the vertical grid spacing, large grid aspect ratios were found to cause a significant unphysical spike in the stream-wise velocity variance near the ground surface. This was especially problematic in simulations of stably-stratified ABL flows. The use of advanced SGS models was not sufficient to alleviate this issue. The new refinement technique is evaluated using a series of idealized simulation test cases of neutrally and stably stratified ABLs. These test cases illustrate the ability of grid refinement to increase computational efficiency without loss in the representation of statistical features of the flow field.

  5. User Delay Cost Model and Facilities Maintenance Cost Model for a Terminal Control Area : Volume 1. Model Formulation and Demonstration

    DOT National Transportation Integrated Search

    1978-05-01

    The User Delay Cost Model (UDCM) is a Monte Carlo computer simulation of essential aspects of Terminal Control Area (TCA) air traffic movements that would be affected by facility outages. The model can also evaluate delay effects due to other factors...

  6. On the influence of viaduct and ground heating on pollutant dispersion in 2D street canyons and toward single-sided ventilated buildings

    EPA Science Inventory

    This paper employs Computational Fluid Dynamic (CFD) simulations to investigate the influence of ground heating intensities and viaduct configurations on gaseous and particle dispersion within two-dimensional idealized street canyons (typical aspect ratio H/W=1) and their transpo...

  7. The Use of Computer-Simulated Trajectories to Teach Real Particle Flight

    ERIC Educational Resources Information Center

    Gagnon, Michel

    2011-01-01

    The close relationship between charged particles and electromagnetic fields has been well known since the 19th century, thanks to James Clerk Maxwell's brilliant unified theory of electricity and magnetism. Today, electromagnetism is recognized as an essential aspect of human activity and has consequently become a major component of senior…

  8. Multiple-relaxation-time lattice Boltzmann study of the magnetic field effects on natural convection of non-Newtonian fluids

    NASA Astrophysics Data System (ADS)

    Yang, Xuguang; Wang, Lei

    In this paper, the magnetic field effects on natural convection of power-law non-Newtonian fluids in rectangular enclosures are numerically studied by the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM). To maintain the locality of the LBM, a local computing scheme for shear rate is used. Thus, all simulations can be easily performed on the Graphics Processing Unit (GPU) using NVIDIA’s CUDA, and high computational efficiency can be achieved. The numerical simulations presented here span a wide range of thermal Rayleigh number (104≤Ra≤106), Hartmann number (0≤Ha≤20), power-law index (0.5≤n≤1.5) and aspect ratio (0.25≤AR≤4.0) to identify the different flow patterns and temperature distributions. The results show that the heat transfer rate is increased with the increase of thermal Rayleigh number, while it is decreased with the increase of Hartmann number, and the average Nusselt number is found to decrease with an increase in the power-law index. Moreover, the effects of aspect ratio have also been investigated in detail.
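    The non-Newtonian behavior studied here enters through the power-law (Ostwald-de Waele) apparent viscosity, mu = K * (shear rate)^(n-1). A short sketch with illustrative parameters showing the shear-thinning (n < 1), Newtonian (n = 1) and shear-thickening (n > 1) regimes:

```python
import numpy as np

# Power-law (Ostwald-de Waele) apparent viscosity for the fluid model in the
# record: mu_app = K * shear_rate**(n - 1). The consistency index K and the
# sample shear rates are illustrative values.

def apparent_viscosity(shear_rate, K=1.0, n=1.0):
    return K * shear_rate ** (n - 1.0)

gamma = np.array([0.1, 1.0, 10.0])                    # sample shear rates
shear_thinning   = apparent_viscosity(gamma, n=0.5)   # viscosity falls with rate
newtonian        = apparent_viscosity(gamma, n=1.0)   # constant viscosity
shear_thickening = apparent_viscosity(gamma, n=1.5)   # viscosity rises with rate
print(shear_thinning, newtonian, shear_thickening)
```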

  9. Coarse Grained Model for Biological Simulations: Recent Refinements and Validation

    PubMed Central

    Vicatos, Spyridon; Rychkova, Anna; Mukherjee, Shayantani; Warshel, Arieh

    2014-01-01

    Exploring the free energy landscape of proteins and modeling the corresponding functional aspects presents a major challenge for computer simulation approaches. This challenge is due to the complexity of the landscape and the enormous computer time needed for converging simulations. The use of various simplified coarse grained (CG) models offers an effective way of sampling the landscape, but most current models are not expected to give a reliable description of protein stability and functional aspects. The main problem is associated with insufficient focus on the electrostatic features of the model. In this respect our recent CG model offers a significant advantage, as it has been refined while focusing on its electrostatic free energy. Here we review the current state of our model, describing recent refinements, extensions and validation studies while focusing on demonstrating key applications. These include studies of protein stability, extensions of the model to include membranes, electrolytes and electrodes, as well as studies of voltage-activated proteins, protein insertion through the translocon, the action of molecular motors, and even the coupling of the stalled ribosome and the translocon. These examples illustrate the general potential of our approach in overcoming major challenges in studies of structure-function correlation in proteins and large macromolecular complexes. PMID:25050439

  10. Silicon material task. Part 3: Low-cost silicon solar array project

    NASA Technical Reports Server (NTRS)

    Roques, R. A.; Coldwell, D. M.

    1977-01-01

    The feasibility of a process for carbon reduction of low impurity silica in a plasma heat source was investigated to produce low-cost solar-grade silicon. Theoretical aspects of the reaction chemistry were studied with the aid of a computer program using iterative free energy minimization. These calculations indicate a threshold temperature exists at 2400 K below which no silicon is formed. The computer simulation technique of molecular dynamics was used to study the quenching of product species.

  11. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.

    PubMed

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data.
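    The training strategy described above alternates stochastic restarts with deterministic refinement. A toy sketch of that combination on a one-dimensional multi-minimum objective, with plain gradient descent standing in for Møller's scaled conjugate gradient; the objective and all constants are illustrative, not SAGRAD's actual loss:

```python
import math, random

# Toy version of the SAGRAD idea: simulated annealing proposes a starting
# point in "weight space", then a gradient method refines it. Plain gradient
# descent stands in for Moller's scaled conjugate gradient, and the
# multi-minimum objective below is illustrative, not a network loss.

def f(w):  return math.sin(3 * w) + 0.1 * w * w     # many local minima
def df(w): return 3 * math.cos(3 * w) + 0.2 * w

def anneal_start(seed=0, steps=200, temp=2.0):
    """Metropolis-style annealing with a linearly cooled temperature."""
    rng = random.Random(seed)
    w = rng.uniform(-4, 4)
    for k in range(steps):
        t = temp * (1 - k / steps) + 1e-3
        cand = w + rng.gauss(0, 0.5)
        if f(cand) < f(w) or rng.random() < math.exp((f(w) - f(cand)) / t):
            w = cand
    return w

def descend(w, lr=0.01, iters=500):
    """Fixed-step gradient descent from the annealed starting point."""
    for _ in range(iters):
        w -= lr * df(w)
    return w

w0 = anneal_start()
w_star = descend(w0)
print(f"start {w0:.3f} -> minimum {w_star:.3f}, f = {f(w_star):.3f}")
```

    The annealing stage supplies a start (or restart) away from poor basins; the gradient stage then converges quickly, which is the division of labor the record describes.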

  12. Nonlinear relaxation algorithms for circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, R.A.

    Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (IC). However, the standard techniques used in programs such as SPICE result in very long computer-run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.
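    The waveform-relaxation idea underlying the ITA and WRN methods can be sketched on a pair of coupled RC nodes: each node's ODE is integrated over the whole time window using the other node's waveform from the previous sweep. The element values, unit-step source, and backward-Euler integrator below are illustrative:

```python
import numpy as np

# Gauss-Seidel waveform relaxation on two coupled RC nodes, sketching the
# decoupling that relaxation-based circuit simulators exploit. Values of the
# conductance-to-capacitance ratios (g1, gc) are illustrative.
T, dt = 5.0, 0.01
t = np.arange(0.0, T + dt, dt)
u = np.ones_like(t)                 # unit-step source waveform
g1, gc = 1.0, 0.5                   # input and coupling ratios

def backward_euler(drive, a):
    """Solve v' = a*(drive - v), v(0) = 0, by backward Euler on the grid."""
    v = np.zeros_like(t)
    for k in range(1, len(t)):
        v[k] = (v[k - 1] + dt * a * drive[k]) / (1.0 + dt * a)
    return v

v1 = np.zeros_like(t)
v2 = np.zeros_like(t)
for sweep in range(30):             # relaxation sweeps over the whole window
    # node 1: v1' = g1*(u - v1) + gc*(v2 - v1), rewritten as (g1+gc)*(d1 - v1)
    d1 = (g1 * u + gc * v2) / (g1 + gc)
    v1 = backward_euler(d1, g1 + gc)
    # node 2: v2' = gc*(v1 - v2), using the freshly updated v1 (Gauss-Seidel)
    v2 = backward_euler(v1, gc)

print(f"v1(T) = {v1[-1]:.3f}, v2(T) = {v2[-1]:.3f}")
```

    Inactive nodes whose waveforms stop changing can simply be skipped in later sweeps, which is the source of the speed-up over direct SPICE-style matrix solution.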

  13. Plane-Wave DFT Methods for Chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bylaska, Eric J.

    A detailed description of modern plane-wave DFT methods and software (contained in the NWChem package) are described that allow for both geometry optimization and ab initio molecular dynamics simulations. Significant emphasis is placed on aspects of these methods that are of interest to computational chemists and useful for simulating chemistry, including techniques for calculating charged systems, exact exchange (i.e. hybrid DFT methods), and highly efficient AIMD/MM methods. Sample applications on the structure of the goethite+water interface and the hydrolysis of nitroaromatic molecules are described.

  14. Improvement on a simplified model for protein folding simulation.

    PubMed

    Zhang, Ming; Chen, Changjun; He, Yi; Xiao, Yi

    2005-11-01

    Improvements were made on a simplified protein model, the Ramachandran model, to achieve better computer simulation of protein folding. To check the validity of these improvements, we chose the ultrafast-folding protein Engrailed Homeodomain as an example and explored several aspects of its folding. The Engrailed Homeodomain is a mainly alpha-helical protein of 61 residues from Drosophila melanogaster. We found that the simplified model of Engrailed Homeodomain can fold into a global minimum state with a tertiary structure in good agreement with its native structure.

  15. Numerical simulation of h-adaptive immersed boundary method for freely falling disks

    NASA Astrophysics Data System (ADS)

    Zhang, Pan; Xia, Zhenhua; Cai, Qingdong

    2018-05-01

    In this work, a freely falling disk with aspect ratio 1/10 is directly simulated by using an adaptive numerical model implemented on a parallel computation framework JASMIN. The adaptive numerical model is a combination of the h-adaptive mesh refinement technique and the implicit immersed boundary method (IBM). Our numerical results agree well with the experimental results in all of the six degrees of freedom of the disk. Furthermore, very similar vortex structures observed in the experiment were also obtained.

  16. Multiscale Aspects of Modeling Gas-Phase Nanoparticle Synthesis

    PubMed Central

    Buesser, B.; Gröhn, A.J.

    2013-01-01

    Aerosol reactors are utilized to manufacture nanoparticles in industrially relevant quantities. The development, understanding and scale-up of aerosol reactors can be facilitated with models and computer simulations. This review aims to provide an overview of recent developments of models and simulations and discuss their interconnection in a multiscale approach. A short introduction of the various aerosol reactor types and gas-phase particle dynamics is presented as a background for the later discussion of the models and simulations. Models are presented with decreasing time and length scales in sections on continuum, mesoscale, molecular dynamics and quantum mechanics models. PMID:23729992

  17. Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.

    PubMed

    Dematté, Lorenzo

    2012-01-01

    Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space is becoming more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transportation phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation could be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. To deliver on the promise made by systems biology to understand a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely diffused algorithm for stochastic simulation of chemical reactions with spatial resolution and single molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular, and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel on each molecule of the system. The implementation offers good speed-ups and real time, high quality graphics output
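    The computationally dominant step that such GPU implementations parallelize per molecule is the Brownian diffusion update, in which each coordinate is displaced by sqrt(2*D*dt)*N(0,1). A minimal NumPy sketch with illustrative parameters, checked against the Einstein relation:

```python
import numpy as np

# Minimal Brownian-dynamics diffusion update of the kind Smoldyn applies to
# every molecule each time step. The diffusion coefficient, time step, and
# molecule count are illustrative values.
rng = np.random.default_rng(1)
D, dt, steps = 1.0, 1e-3, 1000
pos = np.zeros((10000, 3))          # 10,000 molecules starting at the origin

for _ in range(steps):
    pos += np.sqrt(2.0 * D * dt) * rng.standard_normal(pos.shape)

# Einstein relation check: mean squared displacement <r^2> = 6*D*t in 3-D.
msd = np.mean(np.sum(pos**2, axis=1))
print(f"MSD = {msd:.3f} (theory {6 * D * dt * steps:.3f})")
```

    Because each molecule's update is independent, the loop over molecules maps naturally onto one GPU thread per molecule, which is the parallelization the record describes.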

  18. A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model

    DOE PAGES

    Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...

    2016-09-16

    Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.
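    The LES validation above compares simulated profiles with the logarithmic law of the wall for a neutral boundary layer, u(z) = (u*/kappa) ln(z/z0). A short sketch with illustrative friction velocity and roughness length:

```python
import numpy as np

# Logarithmic law of the wall used as the reference profile for the neutral
# boundary layer LES in the record. The friction velocity and roughness
# length below are illustrative, not values from the paper.
kappa = 0.4          # von Karman constant
u_star = 0.45        # friction velocity, m/s (illustrative)
z0 = 0.1             # aerodynamic roughness length, m (illustrative)

def log_law(z):
    """Mean wind speed at height z (m) for a neutral surface layer."""
    return (u_star / kappa) * np.log(z / z0)

z = np.array([1.0, 10.0, 100.0])
print(log_law(z))    # mean wind speed at 1 m, 10 m, 100 m
```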

  20. Simulations of DNA stretching by flow field in microchannels with complex geometry.

    PubMed

    Huang, Chiou-De; Kang, Dun-Yen; Hsieh, Chih-Chen

    2014-01-01

    Recently, we have reported the experimental results of DNA stretching by flow field in three microchannels (C. H. Lee and C. C. Hsieh, Biomicrofluidics 7(1), 014109 (2013)) designed specifically for the purpose of preconditioning DNA conformation for easier stretching. The experimental results not only demonstrate the superiority of the new devices but also provide detailed observations of DNA behavior in a complex flow field that were not available before. In this study, we use the Brownian dynamics-finite element method (BD-FEM) to simulate DNA behavior in these microchannels and compare the results against the experiments. Although the hydrodynamic interaction (HI) between DNA segments and between DNA and the device boundaries was not included in the simulations, the simulation results are in fairly good agreement with the experimental data, both in terms of single-molecule behavior and of ensemble-averaged properties. The discrepancy between the simulation and the experimental results can be explained by the neglect of the HI effect in the simulations. Considering the huge savings in computational cost from neglecting HI, we conclude that BD-FEM can be used as an efficient and economical design tool for developing new microfluidic devices for DNA manipulation.

  1. A Vectorial Model to Compute Terrain Parameters, Local and Remote Sheltering, Scattering and Albedo using TIN Domains for Hydrologic Modeling.

    NASA Astrophysics Data System (ADS)

    Moreno, H. A.; Ogden, F. L.; Steinke, R. C.; Alvarez, L. V.

    2015-12-01

    Triangulated Irregular Networks (TINs) are increasingly popular for terrain representation in high performance surface and hydrologic modeling, owing to their ability to capture significant changes in surface form such as topographic summits, slope breaks, ridges, valley floors, pits and cols. This work presents a methodology for estimating slope, aspect and the components of the incoming solar radiation using a vectorial approach in a topocentric coordinate system, establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing a unit vector defining the position of the sun at each hour and day of year (DOY). Thus, a dot product determines the radiation flux at each TIN element. Remote shading is computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector. Sky view factors are computed by a simplified scanning algorithm in prescribed directions and are useful for determining diffuse radiation. Finally, remote radiation scattering is computed from the sky view factor complementary functions for prescribed albedo values of the surrounding terrain, only for significant angles above the horizon. This methodology improves on current algorithms for computing terrain and radiation parameters on TINs in an efficient manner. All terrain features (e.g. slope, aspect, sky view factors and remote sheltering) can be pre-computed and stored for easy access by a subsequent ground surface or hydrologic simulation.
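    The vectorial operations described above reduce to a cross product for the facet normal (giving slope and aspect) and a dot product with the solar unit vector (giving direct-beam flux). A sketch for a single TIN facet, with illustrative coordinates and sun angles:

```python
import numpy as np

# Slope and direct-beam flux on one TIN facet via the vectorial approach in
# the record: facet normal from a cross product, sun position as a unit
# vector, incident flux from their dot product. The (east, north, up)
# coordinates and the sun angles below are illustrative.

def facet_normal(p0, p1, p2):
    n = np.cross(p1 - p0, p2 - p0)
    n = n if n[2] >= 0 else -n           # orient the normal upward
    return n / np.linalg.norm(n)

def sun_vector(azimuth_deg, elevation_deg):
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.sin(az) * np.cos(el),    # east component
                     np.cos(az) * np.cos(el),    # north component
                     np.sin(el)])                # up component

tri = [np.array([0.0, 0.0, 0.0]),
       np.array([1.0, 0.0, 0.0]),
       np.array([0.0, 1.0, 0.5])]                # a tilted facet
n = facet_normal(*tri)
slope = np.degrees(np.arccos(n[2]))              # angle from horizontal
s = sun_vector(azimuth_deg=180.0, elevation_deg=45.0)  # sun due south, 45 deg up
flux = max(0.0, n @ s)                           # clamp: 0 if self-shaded
print(f"slope = {slope:.1f} deg, relative direct flux = {flux:.3f}")
```

    Because every quantity here depends only on facet geometry and the sun angles, all of it can be pre-computed and stored per TIN element, as the record proposes.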

  2. Using a million cell simulation of the cerebellum: network scaling and task generality.

    PubMed

    Li, Wen-Ke; Hausknecht, Matthew J; Stone, Peter; Mauk, Michael D

    2013-11-01

    Several factors combine to make it feasible to build computer simulations of the cerebellum and to test them in biologically realistic ways. These simulations can be used to help understand the computational contributions of various cerebellar components, including the relevance of the enormous number of neurons in the granule cell layer. In previous work we have used a simulation containing 12000 granule cells to develop new predictions and to account for various aspects of eyelid conditioning, a form of motor learning mediated by the cerebellum. Here we demonstrate the feasibility of scaling up this simulation to over one million granule cells using parallel graphics processing unit (GPU) technology. We observe that this increase in the number of granule cells requires only twice the execution time of the smaller simulation on the GPU. We demonstrate that this simulation, like its smaller predecessor, can emulate certain basic features of conditioned eyelid responses, with a slight improvement in performance in one measure. We also use this simulation to examine the generality of the computational properties that we have derived from studying eyelid conditioning. We demonstrate that this scaled-up simulation can learn a high level of performance in a classic machine learning task, the cart-pole balancing task. These results suggest that this parallel GPU technology can be used to build very large-scale simulations whose connectivity ratios match those of the real cerebellum and that these simulations can be used to guide future studies on cerebellar-mediated tasks and on machine learning problems. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Data mining to support simulation modeling of patient flow in hospitals.

    PubMed

    Isken, Mark W; Rajagopalan, Balaji

    2002-04-01

    Spiraling health care costs in the United States are driving institutions to continually address the challenge of optimizing the use of scarce resources. One of the first steps towards optimizing resources is to utilize capacity effectively. For hospital capacity planning problems such as the allocation of inpatient beds, computer simulation is often the method of choice. One of the more difficult aspects of using simulation models for such studies is the creation of a manageable set of patient types to include in the model. The objective of this paper is to demonstrate the potential of using data mining techniques, specifically clustering techniques such as K-means, to help guide the development of patient type definitions for purposes of building computer simulation or analytical models of patient flow in hospitals. Using data from a hospital in the Midwest, this study brings forth several important issues that researchers need to address when applying clustering techniques in general, and specifically to hospital data.
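    The clustering step proposed above can be sketched with a small K-means implementation on synthetic two-feature patient records (length of stay, acuity score); the features and both synthetic groups are invented for illustration, not drawn from the study's hospital data:

```python
import numpy as np

# Toy version of the paper's idea: cluster patients into a manageable set of
# "patient types" for a flow simulation. The two synthetic groups and their
# features (length of stay in days, daily acuity score) are illustrative.
rng = np.random.default_rng(42)
short_stay = rng.normal([2.0, 1.0], 0.5, size=(50, 2))   # e.g. observation cases
long_stay  = rng.normal([9.0, 3.0], 1.0, size=(50, 2))   # e.g. surgical cases
X = np.vstack([short_stay, long_stay])

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; keeps a center in place if its cluster empties."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

centers, labels = kmeans(X, k=2)
print("patient-type centroids:\n", centers.round(2))
```

    Each centroid then becomes one "patient type" whose arrival rate and service-time distribution feed the downstream patient-flow simulation.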

  4. Generalized Born Models of Macromolecular Solvation Effects

    NASA Astrophysics Data System (ADS)

    Bashford, Donald; Case, David A.

    2000-10-01

    It would often be useful in computer simulations to use a simple description of solvation effects, instead of explicitly representing the individual solvent molecules. Continuum dielectric models often work well in describing the thermodynamic aspects of aqueous solvation, and approximations to such models that avoid the need to solve the Poisson equation are attractive because of their computational efficiency. Here we give an overview of one such approximation, the generalized Born model, which is simple and fast enough to be used for molecular dynamics simulations of proteins and nucleic acids. We discuss its strengths and weaknesses, both for its fidelity to the underlying continuum model and for its ability to replace explicit consideration of solvent molecules in macromolecular simulations. We focus particularly on versions of the generalized Born model that have a pair-wise analytical form, and therefore fit most naturally into conventional molecular mechanics calculations.
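    A pair-wise analytical form of the kind highlighted above is the classic generalized Born expression of Still and co-workers, with f_GB interpolating smoothly between the Coulomb and Born limits. A sketch with illustrative charges, effective Born radii and coordinates (not a parameterized force field):

```python
import numpy as np

# Pairwise generalized Born solvation energy in the classic Still et al.
# form: a full double sum over atom pairs with the smooth f_GB interpolation.
# The charges, effective Born radii and coordinates are illustrative only.
COUL = 332.06                  # Coulomb constant, kcal/mol * Angstrom / e^2
eps_in, eps_out = 1.0, 78.5    # interior and solvent dielectric constants

def gb_energy(q, R, xyz):
    pref = -0.5 * COUL * (1.0 / eps_in - 1.0 / eps_out)
    e = 0.0
    n = len(q)
    for i in range(n):
        for j in range(n):     # the i == j terms give the Born self-energies
            r2 = np.sum((xyz[i] - xyz[j]) ** 2)
            f = np.sqrt(r2 + R[i] * R[j] * np.exp(-r2 / (4.0 * R[i] * R[j])))
            e += pref * q[i] * q[j] / f
    return e

q = np.array([0.5, -0.5])                           # partial charges, e
R = np.array([1.5, 1.7])                            # effective Born radii, A
xyz = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.0]])  # atom positions, A
print(f"GB solvation energy = {gb_energy(q, R, xyz):.2f} kcal/mol")
```

    At zero separation f_GB reduces to the Born radius and at large separation to the interatomic distance, which is why this single closed form can replace a Poisson-equation solve in molecular dynamics.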

  5. LB3D: A parallel implementation of the Lattice-Boltzmann method for simulation of interacting amphiphilic fluids

    NASA Astrophysics Data System (ADS)

    Schmieschek, S.; Shamardin, L.; Frijters, S.; Krüger, T.; Schiller, U. D.; Harting, J.; Coveney, P. V.

    2017-08-01

    We introduce the lattice-Boltzmann code LB3D, version 7.1. Building on a parallel program and supporting tools which have enabled research utilising high performance computing resources for nearly two decades, LB3D version 7 provides a subset of the research code functionality as an open source project. Here, we describe the theoretical basis of the algorithm as well as computational aspects of the implementation. The software package is validated against simulations of meso-phases resulting from self-assembly in ternary fluid mixtures comprising immiscible and amphiphilic components such as water-oil-surfactant systems. The impact of the surfactant species on the dynamics of spinodal decomposition is tested, and quantitative measurement of the permeability of a body centred cubic (BCC) model porous medium for a simple binary mixture is described. Single-core performance and scaling behaviour of the code are reported for simulations on current supercomputer architectures.

  6. Simulations of acoustic waves in channels and phonation in glottal ducts

    NASA Astrophysics Data System (ADS)

    Yang, Jubiao; Krane, Michael; Zhang, Lucy

    2014-11-01

    Numerical simulations of acoustic wave propagation were performed by solving the compressible Navier-Stokes equations using a finite element method. To avoid numerical contamination of the acoustic field induced by non-physical reflections at computational boundaries, a Perfectly Matched Layer (PML) scheme was implemented to attenuate the acoustic waves and their reflections near these boundaries. The acoustic simulation was further combined with the simulation of the interaction of vocal fold vibration and glottal flow, using our fully-coupled Immersed Finite Element Method (IFEM) approach, to study phonation in the glottal channel. In order to decouple the aeroelastic and aeroacoustic aspects of phonation, the airway duct used has a uniform cross section with PML properly applied. The dynamics of phonation were then studied by computing the terms of the equations of motion for a control volume comprised of the fluid in the vicinity of the vocal folds. It is shown that the principal dynamics consists of the near cancellation of the pressure force driving the flow through the glottis and the aerodynamic drag on the vocal folds. Aeroacoustic source strengths are also presented, estimated from integral quantities computed in the source region, as well as from the radiated acoustic field.

  7. Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ

    Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two were based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published nonlinear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.
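The nonlinear dependence of the dispersion coefficient on the Peclet number mentioned above is commonly modeled as a power law of the form D_L/D_m = 1 + a * Pe^b, fitted on log-log axes. The abstract does not state which relation was used, so the functional form and the numbers below are purely illustrative.

```python
import math

def fit_power_law(pe, ratio_minus_1):
    """Least-squares fit of log(D_L/D_m - 1) = log(a) + b * log(Pe)."""
    xs = [math.log(p) for p in pe]
    ys = [math.log(r) for r in ratio_minus_1]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

# synthetic data following D_L/D_m = 1 + 0.5 * Pe**1.2 (illustrative numbers only)
pe = [1, 10, 100, 1000]
ratio = [1 + 0.5 * p ** 1.2 for p in pe]
a, b = fit_power_law(pe, [r - 1 for r in ratio])
```

A continuum simulator would then invert such a fitted relation to obtain the dispersivity input values the abstract refers to.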

  8. Automated Knowledge Discovery from Simulators

    NASA Technical Reports Server (NTRS)

    Burl, Michael C.; DeCoste, D.; Enke, B. L.; Mazzoni, D.; Merline, W. J.; Scharenbroich, L.

    2006-01-01

    In this paper, we explore one aspect of knowledge discovery from simulators, the landscape characterization problem, where the aim is to identify regions in the input/parameter/model space that lead to a particular output behavior. Large-scale numerical simulators are in widespread use by scientists and engineers across a range of government agencies, academia, and industry; in many cases, simulators provide the only means to examine processes that are infeasible or impossible to study otherwise. However, the cost of simulation studies can be quite high, both in terms of the time and computational resources required to conduct the trials and the manpower needed to sift through the resulting output. Thus, there is strong motivation to develop automated methods that enable more efficient knowledge extraction.

  9. Numerical approach of the injection molding process of fiber-reinforced composite with considering fiber orientation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen Thi, T. B., E-mail: thanhbinh.skku@gmail.com, E-mail: yokoyama@kit.ac.jp; Yokoyama, A., E-mail: thanhbinh.skku@gmail.com, E-mail: yokoyama@kit.ac.jp; Ota, K., E-mail: kei-ota@toyobo.jp, E-mail: katsuhiro-kodama@toyobo.jp, E-mail: katsuhisa-yamashita@toyobo.jp, E-mail: yumiko-isogai@toyobo.jp, E-mail: kenji-furuichi@toyobo.jp, E-mail: chisato-nonomura@toyobo.jp

    2014-05-15

    One of the most important challenges in the injection molding process of short-glass fiber/thermoplastic composite parts is predicting the fiber orientation, since it controls the mechanical and physical properties of the final parts. Folgar and Tucker added to the Jeffery equation a diffusive term, which introduces a phenomenological coefficient modeling the randomizing effect of the mechanical interactions between fibers, to predict the fiber orientation in concentrated suspensions. Their experiments indicated that this coefficient depends on the fiber volume fraction and aspect ratio. However, a definition of the fiber interaction coefficient, which is essential for fiber orientation simulations, has not yet been established. Consequently, this study proposed a fiber interaction model developed by introducing a fiber dynamics simulation in order to obtain a global fiber interaction coefficient. The coefficient was assumed to be a function of the fiber concentration, aspect ratio, and angular velocity. The proposed model was incorporated into the computer-aided engineering simulation package C-Mold. Short-glass fiber/polyamide-6 composites were produced by injection molding with fiber weight concentrations of 30 wt.%, 50 wt.%, and 70 wt.%. The physical properties of these composites were examined, and their fiber orientation distributions were measured by micro-computed-tomography (μ-CT) equipment. The simulation results showed good agreement with the experimental results.

  10. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1982-01-01

    Models, measures, and techniques for evaluating the effectiveness of aircraft computing systems were developed. By "effectiveness" in this context we mean the extent to which the user, i.e., a commercial air carrier, may expect to benefit from the computational tasks accomplished by a computing system in the environment of an advanced commercial aircraft. Thus, the concept of effectiveness involves aspects of system performance, reliability, and worth (value, benefit) which are appropriately integrated in the process of evaluating system effectiveness. Specifically, the primary objectives are: the development of system models that provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer.

  11. Computational open-channel hydraulics for movable-bed problems

    USGS Publications Warehouse

    Lai, Chintu; ,

    1990-01-01

    Notable advances have been made in numerical modeling of unsteady open-channel flow, a major branch of computational hydraulics, since the beginning of the computer age. According to the broader definition and scope of 'computational hydraulics,' the basic concepts and technology of modeling unsteady open-channel flow have been systematically studied previously. As a natural extension, computational open-channel hydraulics for movable-bed problems are addressed in this paper. The introduction of the multimode method of characteristics (MMOC) has made the modeling of this class of unsteady flows both practical and effective. New modeling techniques are developed, thereby shedding light on several aspects of computational hydraulics. Some special features of movable-bed channel-flow simulation are discussed here in the same order as given by the author in the fixed-bed case.

  12. Efficiently passing messages in distributed spiking neural network simulation.

    PubMed

    Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan

    2013-01-01

    Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.

  13. Modeling the fundamental characteristics and processes of the spacecraft functioning

    NASA Technical Reports Server (NTRS)

    Bazhenov, V. I.; Osin, M. I.; Zakharov, Y. V.

    1986-01-01

    The fundamental aspects of modeling spacecraft characteristics by computational means are considered. Particular attention is devoted to the design studies, the description of the physical appearance of the spacecraft, and simulation modeling of spacecraft systems. The fundamental questions of organizing on-the-ground spacecraft testing and the methods of mathematical modeling are also presented.

  14. Stochastic Evolutionary Algorithms for Planning Robot Paths

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard

    2006-01-01

    A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
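The simulated-annealing core described in this record can be sketched generically: accept any downhill move, accept uphill moves with Boltzmann probability, and cool the temperature geometrically. The two-joint arm cost function below is a toy stand-in for the energy-like collision/error measure the abstract mentions; none of the constants or names come from the actual NASA software.

```python
import math
import random

def simulated_annealing(energy, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=5000, seed=1):
    """Generic simulated annealing over a real-valued parameter vector."""
    rng = random.Random(seed)
    x, e = list(x0), energy(x0)
    best_x, best_e = list(x), e
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        ec = energy(cand)
        # always accept downhill; accept uphill with Boltzmann probability
        if ec < e or rng.random() < math.exp(-(ec - e) / t):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = list(x), e
        t *= cooling  # geometric cooling schedule
    return best_x, best_e

# toy cost: squared distance of a planar 2-joint arm endpoint from a goal,
# standing in for the collision/error measure used for path fitness
def cost(q):
    x = math.cos(q[0]) + math.cos(q[0] + q[1])
    y = math.sin(q[0]) + math.sin(q[0] + q[1])
    return (x - 1.0) ** 2 + (y - 1.0) ** 2

q, e = simulated_annealing(cost, [0.0, 0.0])
```

Early on, the high temperature lets the search hop out of poor basins; as it cools, the same loop degenerates into cheap local refinement, which is why the method suits low-power onboard computers.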

  15. Correlating Computed and Flight Instructor Assessments of Straight-In Landing Approaches by Novice Pilots on a Flight Simulator

    NASA Technical Reports Server (NTRS)

    Heath, Bruce E.; Khan, M. Javed; Rossi, Marcia; Ali, Syed Firasat

    2005-01-01

    The rising cost of flight training and the low cost of powerful computers have resulted in increasing use of PC-based flight simulators. This has prompted FAA standards regulating such use and allowing aspects of training on simulators meeting these standards to be substituted for flight time. However, the FAA regulations require an authorized flight instructor as part of the training environment. Thus, while costs associated with flight time have been reduced, the cost associated with the need for a flight instructor still remains. The obvious area of research, therefore, has been to develop intelligent simulators. However, the two main challenges of such attempts have been training strategies and assessment. The research reported in this paper was conducted to evaluate various performance metrics of a straight-in landing approach by 33 novice pilots flying a light single engine aircraft simulation. These metrics were compared to assessments of these flights by two flight instructors to establish a correlation between the two techniques in an attempt to determine a composite performance metric for this flight maneuver.

  16. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE PAGES

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  17. Stochastic optimization of GeantV code by use of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
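As a hedged illustration of treating parameter tuning as black-box optimization with a genetic algorithm: the sketch below uses elitist truncation selection, blend crossover, and Gaussian mutation on a cheap stand-in objective. It does not reproduce the GeantV tuning procedure or its multivariate analysis operator; in the real setting each fitness evaluation would be an expensive simulation run.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, gens=60, mut=0.3, seed=7):
    """Minimal real-coded GA: keep the best half, breed the rest.

    `fitness` is treated strictly as a black box to be minimized, matching
    the point-wise-evaluation setting described in the abstract.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            # blend crossover, then Gaussian mutation clipped to the bounds
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            child = [min(max(x + rng.gauss(0, mut), lo), hi)
                     for x, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# cheap stand-in for (negated) simulation throughput over 3 tunable parameters
best = genetic_optimize(lambda p: sum(x * x for x in p), [(-5.0, 5.0)] * 3)
```

When fitness evaluations are expensive, the per-generation population would be evaluated in parallel and, as the abstract notes, surrogate operators can be introduced to reduce the number of true evaluations.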

  18. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  19. A modular finite-element model (MODFE) for areal and axisymmetric ground-water-flow problems, Part 3: Design philosophy and programming details

    USGS Publications Warehouse

    Torak, L.J.

    1993-01-01

    A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water-flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desk-top personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into a complete coefficient just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. 
Computational aspects of changing stresses and boundary conditions with time and of mass-balance and error terms are given for each hydrologic feature. Program variables are listed and defined according to their occurrence in the main programs and in subroutines. Listings of the main programs and subroutines are given.

  20. Computational Modeling And Analysis Of Synthetic Jets

    NASA Technical Reports Server (NTRS)

    Mittal, Rajat; Cattafesta, Lou

    2005-01-01

    In the last report we focused on the study of 3D synthetic jets of moderate jet aspect ratio, investigating jets in both quiescent and cross-flow cases. Since most synthetic jets in practical applications have large aspect ratios, the focus shifted to studying synthetic jets of large aspect ratio. In the current year, further progress has been made by studying jets of aspect ratio 8 and infinity. Beyond the vortex dynamics, velocity profiles, and other dynamical characteristics of the jet, additional aspects such as the vorticity flux were examined, providing insight into the effect of these modifications on jet performance. Efforts were also made to qualitatively validate the simulated results against the NASA Langley test cases at higher jet Reynolds number for the quiescent jet case.

  1. Report from the MPP Working Group to the NASA Associate Administrator for Space Science and Applications

    NASA Technical Reports Server (NTRS)

    Fischer, James R.; Grosch, Chester; Mcanulty, Michael; Odonnell, John; Storey, Owen

    1987-01-01

    NASA's Office of Space Science and Applications (OSSA) gave a select group of scientists the opportunity to test and implement their computational algorithms on the Massively Parallel Processor (MPP) located at Goddard Space Flight Center, beginning in late 1985. One year later, the Working Group presented its report, which addressed the following: algorithms, programming languages, architecture, programming environments, the way theory relates, and performance measured. The findings point to a number of demonstrated computational techniques for which the MPP architecture is ideally suited. For example, besides executing much faster on the MPP than on conventional computers, systolic VLSI simulation (where distances are short), lattice simulation, neural network simulation, and image problems were found to be easier to program on the MPP's architecture than on a CYBER 205 or even a VAX. The report also makes technical recommendations covering all aspects of MPP use, and recommendations concerning the future of the MPP and machines based on similar architectures, expansion of the Working Group, and study of the role of future parallel processors for space station, EOS, and the Great Observatories era.

  2. A parallel Monte Carlo code for planar and SPECT imaging: implementation, verification and applications in (131)I SPECT.

    PubMed

    Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F

    2002-02-01

    This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
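The equal partitioning of photon histories among processors described above reduces, on the bookkeeping side, to a deterministic load split. The function below sketches only that logic; the actual interprocessor communication (MPI) and the per-rank uncorrelated random streams (SPRNG) are outside this sketch, and the function name is ours, not SIMIND's.

```python
def partition_photons(n_photons, n_procs):
    """Split photon histories as evenly as possible across MPI ranks.

    The first (n_photons % n_procs) ranks each take one extra history, so
    counts differ by at most one. Offsets give each rank's starting history
    index, which lets every rank seed an independent random stream.
    """
    base, extra = divmod(n_photons, n_procs)
    counts = [base + (1 if r < extra else 0) for r in range(n_procs)]
    offsets = [sum(counts[:r]) for r in range(n_procs)]
    return counts, offsets
```

Because the histories are statistically independent, this split yields the near-linear speed-up the paper reports, limited mainly by the final reduction of the partial images.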

  3. A Computational Study of the Flow Physics of Acoustic Liners

    NASA Technical Reports Server (NTRS)

    Tam, Christopher

    2006-01-01

    The present investigation is a continuation of a previous joint project between the Florida State University and the NASA Langley Research Center Liner Physics Team. In the previous project, a study of acoustic liners, in two dimensions, inside a normal incidence impedance tube was carried out. The study consisted of two parts. The NASA team was responsible for the experimental part of the project. This involved performing measurements in an impedance tube with a large aspect ratio slit resonator. The FSU team was responsible for the computational part of the project. This involved performing direct numerical simulation (DNS) of the NASA experiment in two dimensions using CAA methodology. It was agreed that upon completion of the numerical simulation, the computed values of the liner impedance were to be sent to NASA for validation against experimental results. Following this procedure, good agreement was found between numerical results and experimental measurements over a wide range of frequencies and sound pressure levels. Broadband incident sound waves were also simulated numerically and measured experimentally. Overall, good agreement was also found.

  4. Simulations of High Speed Fragment Trajectories

    NASA Astrophysics Data System (ADS)

    Yeh, Peter; Attaway, Stephen; Arunajatesan, Srinivasan; Fisher, Travis

    2017-11-01

    Shrapnel from an explosion can travel at supersonic speeds and over distances much farther than expected due to aerodynamic interactions. Predicting the trajectories and stable tumbling modes of arbitrarily shaped fragments is a fundamental problem applicable to range safety calculations, damage assessment, and military technology. Traditional approaches rely on characterizing fragment flight using a single drag coefficient, which may be inaccurate for fragments with large aspect ratios. In our work we develop a procedure to simulate trajectories of arbitrarily shaped fragments with higher fidelity using high performance computing. We employ a two-step approach in which the force and moment coefficients are first computed as a function of orientation using compressible computational fluid dynamics. The force and moment data are then input into a six-degree-of-freedom rigid body dynamics solver to integrate trajectories in time. Results of these high fidelity simulations allow us to further understand the flight dynamics and tumbling modes of a single fragment. Furthermore, we use these results to determine the validity and uncertainty of inexpensive methods such as the single drag coefficient model.
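The traditional single-drag-coefficient model that this record contrasts with its two-step approach can be sketched as a point-mass integration under gravity and quadratic drag. All parameter values below are illustrative; the real fragments' coefficients would come from the CFD tabulation the abstract describes.

```python
import math

def fragment_trajectory(v0, angle_deg, cd, area, mass,
                        rho=1.225, g=9.81, dt=1e-3):
    """Range of a point-mass fragment with a single drag coefficient.

    Semi-implicit Euler integration of quadratic drag: F_d = 0.5*rho*cd*A*v^2,
    directed opposite the velocity. Returns downrange distance at ground impact.
    """
    th = math.radians(angle_deg)
    vx, vy = v0 * math.cos(th), v0 * math.sin(th)
    x, y = 0.0, 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        f = 0.5 * rho * cd * area * v  # drag magnitude divided by speed
        ax, ay = -f * vx / mass, -g - f * vy / mass
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x
```

With cd = 0 this recovers the vacuum ballistic range v0^2 sin(2*theta)/g, a useful sanity check; the six-degree-of-freedom solver in the paper replaces the scalar cd with orientation-dependent force and moment coefficients.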

  5. Dish layouts analysis method for concentrative solar power plant.

    PubMed

    Xu, Jinshan; Gan, Shaocong; Li, Song; Ruan, Zhongyuan; Chen, Shengyong; Wang, Yong; Gui, Changgui; Wan, Bin

    2016-01-01

    Designs that maximize the use of solar radiation for a given reflective area, without increasing investment expense, are important to solar power plant construction. We here provide a method that allows one to compute the shaded area at any given time as well as the total shading effect over a day. By establishing a local coordinate system with the origin at the apex of a parabolic dish and the z-axis pointing to the sun, only neighboring dishes with [Formula: see text] would shade onto the dish when in tracking mode. This procedure reduces the required computational resources, simplifies the calculation, and allows a quick search for the optimum layout by considering all aspects leading to an optimized arrangement: aspect ratio, shifting, and rotation. Computer simulations, performed with information on a dish Stirling system and DNI data released by NREL, show that regular spacing is not an optimal layout; shifting and rotating columns by certain amounts can bring more benefits.

  6. Simulation of a Geiger-Mode Imaging LADAR System for Performance Assessment

    PubMed Central

    Kim, Seongjoon; Lee, Impyeong; Kwon, Yong Joon

    2013-01-01

    As LADAR systems applications gradually become more diverse, new types of systems are being developed. When developing new systems, simulation studies are an essential prerequisite. A simulator enables performance predictions and optimal system parameters at the design level, as well as providing sample data for developing and validating application algorithms. The purpose of the study is to propose a method for simulating a Geiger-mode imaging LADAR system. We develop simulation software to assess system performance and generate sample data for the applications. The simulation is based on three aspects of modeling—the geometry, radiometry and detection. The geometric model computes the ranges to the reflection points of the laser pulses. The radiometric model generates the return signals, including the noises. The detection model determines the flight times of the laser pulses based on the nature of the Geiger-mode detector. We generated sample data using the simulator with the system parameters and analyzed the detection performance by comparing the simulated points to the reference points. The proportion of the outliers in the simulated points reached 25.53%, indicating the need for efficient outlier elimination algorithms. In addition, the false alarm rate and dropout rate of the designed system were computed as 1.76% and 1.06%, respectively. PMID:23823970
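Two pieces of the performance analysis above reduce to simple formulas: the detection model's conversion from laser-pulse flight time to range (two-way time of flight), and the outlier proportion obtained by comparing simulated points against reference points. The 0.5 m tolerance below is an assumed illustration, not the threshold used in the paper.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(t_flight):
    """Two-way time of flight to range: the pulse travels out and back."""
    return 0.5 * C * t_flight

def outlier_fraction(simulated, reference, tol=0.5):
    """Fraction of simulated ranges deviating more than `tol` metres
    from the corresponding reference ranges."""
    bad = sum(1 for s, r in zip(simulated, reference) if abs(s - r) > tol)
    return bad / len(simulated)
```

In a Geiger-mode simulator, many of the flagged points come from noise-triggered avalanches, which is why the abstract's 25.53% outlier proportion motivates dedicated outlier-elimination algorithms.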

  7. A detailed model for simulation of catchment scale subsurface hydrologic processes

    NASA Technical Reports Server (NTRS)

    Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    A catchment scale numerical model is developed based on the three-dimensional transient Richards equation describing fluid flow in variably saturated porous media. The model is designed to take advantage of digital elevation data bases and of information extracted from these data bases by topographic analysis. The practical application of the model is demonstrated in simulations of a small subcatchment of the Konza Prairie reserve near Manhattan, Kansas. In a preliminary investigation of computational issues related to model resolution, we obtain satisfactory numerical results using large aspect ratios, suggesting that horizontal grid dimensions may not be unreasonably constrained by the typically much smaller vertical length scale of a catchment and by vertical discretization requirements. Additional tests are needed to examine the effects of numerical constraints and parameter heterogeneity in determining acceptable grid aspect ratios. In other simulations we attempt to match the observed streamflow response of the catchment, and we point out the small contribution of the streamflow component to the overall water balance of the catchment.

  8. Fluid–Structure Interaction Analysis of Papillary Muscle Forces Using a Comprehensive Mitral Valve Model with 3D Chordal Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toma, Milan; Jensen, Morten Ø.; Einstein, Daniel R.

    2015-07-17

    Numerical models of native heart valves are being used to study valve biomechanics to aid design and development of repair procedures and replacement devices. These models have evolved from simple two-dimensional approximations to complex three-dimensional, fully coupled fluid-structure interaction (FSI) systems. Such simulations are useful for predicting the mechanical and hemodynamic loading on implanted valve devices. A current challenge for improving the accuracy of these predictions is choosing and implementing modeling boundary conditions. In order to address this challenge, we are utilizing an advanced in vitro system to validate FSI conditions for the mitral valve system. Explanted ovine mitral valves were mounted in an in vitro setup, and structural data for the mitral valve was acquired with μCT. Experimental data from the in vitro ovine mitral valve system were used to validate the computational model. As the valve closes, the hemodynamic data, high speed leaflet dynamics, and force vectors from the in vitro system were compared to the results of the FSI simulation computational model. The total force of 2.6 N per papillary muscle is matched by the computational model. In vitro and in vivo force measurements are important in validating and adjusting material parameters in computational models. The simulations can then be used to answer questions that are otherwise not possible to investigate experimentally. This work is important to maximize the validity of computational models of not just the mitral valve, but any biomechanical aspect using computational simulation in designing medical devices.

  9. Fluid-Structure Interaction Analysis of Papillary Muscle Forces Using a Comprehensive Mitral Valve Model with 3D Chordal Structure.

    PubMed

    Toma, Milan; Jensen, Morten Ø; Einstein, Daniel R; Yoganathan, Ajit P; Cochran, Richard P; Kunzelman, Karyn S

    2016-04-01

    Numerical models of native heart valves are being used to study valve biomechanics to aid design and development of repair procedures and replacement devices. These models have evolved from simple two-dimensional approximations to complex three-dimensional, fully coupled fluid-structure interaction (FSI) systems. Such simulations are useful for predicting the mechanical and hemodynamic loading on implanted valve devices. A current challenge for improving the accuracy of these predictions is choosing and implementing modeling boundary conditions. In order to address this challenge, we are utilizing an advanced in vitro system to validate FSI conditions for the mitral valve system. Explanted ovine mitral valves were mounted in an in vitro setup, and structural data for the mitral valve was acquired with [Formula: see text]CT. Experimental data from the in vitro ovine mitral valve system were used to validate the computational model. As the valve closes, the hemodynamic data, high speed leaflet dynamics, and force vectors from the in vitro system were compared to the results of the FSI simulation computational model. The total force of 2.6 N per papillary muscle is matched by the computational model. In vitro and in vivo force measurements enable validating and adjusting material parameters to improve the accuracy of computational models. The simulations can then be used to answer questions that are otherwise not possible to investigate experimentally. This work is important to maximize the validity of computational models of not just the mitral valve, but any biomechanical aspect using computational simulation in designing medical devices.

  10. Two-dimensional simulation of high-power laser-surface interaction

    NASA Astrophysics Data System (ADS)

    Goldman, S. Robert; Wilke, Mark D.; Green, Ray E.; Busch, George E.; Johnson, Randall P.

    1998-09-01

    For laser intensities in the range of 10^8 - 10^9 W/cm^2, and pulse lengths of order 10 microseconds or longer, we have modified the inertial confinement fusion code Lasnex to simulate gaseous and some dense material aspects of the laser-matter interaction. The unique aspect of our treatment consists of an ablation model which defines a dense material-vapor interface and then calculates the mass flow across this interface. The model treats the dense material as a rigid two-dimensional mass and heat reservoir, suppressing all hydrodynamic motion in the dense material. The computer simulations and additional post-processors provide predictions for measurements including impulse given to the target, pressures at the target interface, electron temperatures and densities in the vapor-plasma plume region, and emission of radiation from the target. We will present an analysis of some relatively well diagnosed experiments which have been useful in developing our modeling. The simulations match experimentally obtained target impulses, pressures at the target surface inside the laser spot, and radiation emission from the target to within about 20%. Hence our simulation technique appears to form a useful basis for further investigation of laser-surface interaction in this intensity and pulse-width range.

  11. A Computing Infrastructure for Supporting Climate Studies

    NASA Astrophysics Data System (ADS)

    Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team

    2011-12-01

    Climate change is one of the major challenges facing our planet in the 21st century. Scientists build many models to simulate past climate and predict climate change over the next decades or century. Most of the models run at low resolution, with some targeting high resolution in support of practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA Climate@Home project, an effort to build a virtual supercomputer based on advanced computing technologies such as cloud computing and grid computing. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage potential spikes in access to the centralized components, such as the grid computing server for dispatching model runs and collecting results; 2) a grid computing engine is developed based on MapReduce to dispatch models and model configurations, and to collect simulation results and contribution statistics; 3) a portal serves as the entry point to the project, providing management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can follow the project's Twitter and Facebook accounts for the latest news. This paper will introduce the latest progress of the project and demonstrate the operational system during the AGU fall meeting. It will also discuss how this technology can become a trailblazer for other climate studies and relevant sciences, and will share how the challenges in computation and software integration were solved.

  12. Computational study of textured ferroelectric polycrystals: Dielectric and piezoelectric properties of template-matrix composites

    NASA Astrophysics Data System (ADS)

    Zhou, Jie E.; Yan, Yongke; Priya, Shashank; Wang, Yu U.

    2017-01-01

    Quantitative relationships between processing, microstructure, and properties in textured ferroelectric polycrystals and the underlying responsible mechanisms are investigated by phase field modeling and computer simulation. This study focuses on three important aspects of textured ferroelectric ceramics: (i) grain microstructure evolution during templated grain growth processing, (ii) crystallographic texture development as a function of volume fraction and seed size of the templates, and (iii) dielectric and piezoelectric properties of the obtained template-matrix composites of textured polycrystals. Findings on the third aspect are presented here, while an accompanying paper of this work reports findings on the first two aspects. In this paper, the competing effects of crystallographic texture and template seed volume fraction on the dielectric and piezoelectric properties of ferroelectric polycrystals are investigated. The phase field model of ferroelectric composites consisting of template seeds embedded in matrix grains is developed to simulate domain evolution, polarization-electric field (P-E), and strain-electric field (ɛ-E) hysteresis loops. The coercive field, remnant polarization, dielectric permittivity, piezoelectric coefficient, and dissipation factor are studied as a function of grain texture and template seed volume fraction. It is found that, while crystallographic texture significantly improves the polycrystal properties towards those of single crystals, a higher volume fraction of template seeds tends to decrease the electromechanical properties, thus canceling the advantage of ferroelectric polycrystals textured by templated grain growth processing. This competing detrimental effect is shown to arise from the composite effect, where the template phase possesses material properties inferior to the matrix phase, causing mechanical clamping and charge accumulation at inter-phase interfaces between matrix and template inclusions. The computational results are compared with complementary experiments, where good agreement is obtained.

  13. A computer model of context-dependent perception in a very simple world

    NASA Astrophysics Data System (ADS)

    Lara-Dammer, Francisco; Hofstadter, Douglas R.; Goldstone, Robert L.

    2017-11-01

    We propose the foundations of a computer model of scientific discovery that takes into account certain psychological aspects of human observation of the world. To this end, we simulate two main components of such a system. The first is a dynamic microworld in which physical events take place, and the second is an observer that visually perceives entities and events in the microworld. For reasons of space, this paper focuses only on the starting phase of discovery, namely the relatively simple visual perception of objects and collisions.

  14. Numerical simulation of a hovering rotor using embedded grids

    NASA Technical Reports Server (NTRS)

    Duque, Earl-Peter N.; Srinivasan, Ganapathi R.

    1992-01-01

    The flow field for a rotor blade in hover was computed by numerically solving the compressible thin-layer Navier-Stokes equations on embedded grids. In this work, three embedded grids were used to discretize the flow field - one for the rotor blade and two to convect the rotor wake. The computations were performed at two hovering test conditions, for a two-bladed rectangular rotor of aspect ratio six. The results compare fairly well with experiment and illustrate the use of embedded grids in solving helicopter-type flow fields.

  15. Module-based multiscale simulation of angiogenesis in skeletal muscle

    PubMed Central

    2011-01-01

    Background Mathematical modeling of angiogenesis has been gaining momentum as a means to shed new light on the biological complexity underlying blood vessel growth. A variety of computational models have been developed, each focusing on different aspects of the angiogenesis process and occurring at different biological scales, ranging from the molecular to the tissue levels. Integration of models at different scales is a challenging and currently unsolved problem. Results We present an object-oriented module-based computational integration strategy to build a multiscale model of angiogenesis that links currently available models. As an example case, we use this approach to integrate modules representing microvascular blood flow, oxygen transport, vascular endothelial growth factor transport and endothelial cell behavior (sensing, migration and proliferation). Modeling methodologies in these modules include algebraic equations, partial differential equations and agent-based models with complex logical rules. We apply this integrated model to simulate exercise-induced angiogenesis in skeletal muscle. The simulation results compare capillary growth patterns between different exercise conditions for a single bout of exercise. Results demonstrate how the computational infrastructure can effectively integrate multiple modules by coordinating their connectivity and data exchange. Model parameterization offers simulation flexibility and a platform for performing sensitivity analysis. Conclusions This systems biology strategy can be applied to larger scale integration of computational models of angiogenesis in skeletal muscle, or other complex processes in other tissues under physiological and pathological conditions. PMID:21463529

  16. Image synthesis for SAR system, calibration and processor design

    NASA Technical Reports Server (NTRS)

    Holtzman, J. C.; Abbott, J. L.; Kaupp, V. H.; Frost, V. S.

    1978-01-01

    The Point Scattering Method of simulating radar imagery rigorously models all aspects of the imaging radar phenomena. Its computational algorithms operate on a symbolic representation of the terrain test site to calculate such parameters as range, angle of incidence, resolution cell size, etc. Empirical backscatter data and elevation data are utilized to model the terrain. Additionally, the important geometrical/propagation effects such as shadow, foreshortening, layover, and local angle of incidence are rigorously treated. Applications of radar image simulation to a proposed calibrated SAR system are highlighted: soil moisture detection and vegetation discrimination.

  17. Kinetics of the electric double layer formation modelled by the finite difference method

    NASA Astrophysics Data System (ADS)

    Valent, Ivan

    2017-11-01

    Dynamics of the electric double layer formation in a 100 mM NaCl solution for sudden potential steps of 10 and 20 mV was simulated using the Poisson-Nernst-Planck theory and the VLUGR2 solver for partial differential equations. The approach was verified by comparing the obtained steady-state solution with the available exact solution. The simulations allowed for a detailed analysis of the relaxation processes of the individual ions and the electric potential. Some computational aspects of the problem are discussed.
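    The steady-state verification step this abstract mentions can be sketched as follows. This is a rough illustration, not the author's code: the paper solves the transient Poisson-Nernst-Planck equations, whereas the snippet below only evaluates the exact (Gouy-Chapman) steady-state profile for a 1:1 electrolyte, against which such a solver can be checked:

```python
import numpy as np

# physical constants (SI units)
KB, E0, NA, EPS0 = 1.380649e-23, 1.602176634e-19, 6.02214076e23, 8.8541878128e-12

def debye_length(c_molar, eps_r=78.5, T=298.15):
    """Debye screening length of a 1:1 electrolyte (e.g. NaCl)."""
    n = c_molar * 1000.0 * NA                  # number density, ions/m^3
    return np.sqrt(eps_r * EPS0 * KB * T / (2.0 * n * E0**2))

def gouy_chapman(x, psi0, lam, T=298.15):
    """Exact steady-state potential at distance x from an electrode
    held at psi0, for a 1:1 electrolyte with Debye length lam."""
    g = np.tanh(E0 * psi0 / (4.0 * KB * T))
    return 4.0 * KB * T / E0 * np.arctanh(g * np.exp(-x / lam))
```

    For 100 mM NaCl the Debye length is about 1 nm, so a 10-20 mV step relaxes the potential over a nanometer-scale layer; at the electrode (x = 0) the profile reproduces the applied potential exactly.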

  18. Advances in the computation of transonic separated flows over finite wings

    NASA Technical Reports Server (NTRS)

    Kaynak, Unver; Flores, Jolen

    1989-01-01

    Problems encountered in numerical simulations of transonic wind-tunnel experiments with low-aspect-ratio wings are surveyed and illustrated. The focus is on the zonal Euler/Navier-Stokes program developed by Holst et al. (1985) and its application to shock-induced separation. The physical basis and numerical implementation of the method are reviewed, and results are presented from studies of the effects of artificial dissipation, boundary conditions, grid refinement, the turbulence model, and geometry representation on the simulation accuracy. Extensive graphs and diagrams and typical flow visualizations are provided.

  19. Optical implementation of the synthetic discriminant function

    NASA Astrophysics Data System (ADS)

    Butler, S.; Riggins, J.

    1984-10-01

    Much attention is focused on the use of coherent optical pattern recognition (OPR) using matched spatial filters for robotics and intelligent systems. The OPR problem consists of three aspects -- information input, information processing, and information output. This paper discusses the information processing aspect which consists of choosing a filter to provide robust correlation with high efficiency. The filter should ideally be invariant to image shift, rotation and scale, provide a reasonable signal-to-noise (S/N) ratio and allow high throughput efficiency. The physical implementation of a spatial matched filter involves many choices. These include the use of conventional holograms or computer-generated holograms (CGH) and utilizing absorption or phase materials. Conventional holograms inherently modify the reference image by non-uniform emphasis of spatial frequencies. Proper use of film nonlinearity provides improved filter performance by emphasizing frequency ranges crucial to target discrimination. In the case of a CGH, the emphasis of the reference magnitude and phase can be controlled independently of the continuous tone or binary writing processes. This paper describes computer simulation and optical implementation of a geometrical shape and a Synthetic Discriminant Function (SDF) matched filter. The authors chose the binary Allebach-Keegan (AK) CGH algorithm to produce actual filters. The performances of these filters were measured to verify the simulation results. This paper provides a brief summary of the matched filter theory, the SDF, CGH algorithms, Phase-Only-Filtering, simulation procedures, and results.

  20. Direct Numerical Simulation of Automobile Cavity Tones

    NASA Technical Reports Server (NTRS)

    Kurbatskii, Konstantin; Tam, Christopher K. W.

    2000-01-01

    The Navier-Stokes equations are solved computationally by the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R(sub delta*) < 3400, the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem, namely, the effect of boundary layer thickness on the cavity tone frequency and intensity and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal mode type acoustic oscillations in the entire computation domain leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be computation domain size independent.

  1. ChemCalc: a building block for tomorrow's chemical infrastructure.

    PubMed

    Patiny, Luc; Borel, Alain

    2013-05-24

    Web services, as an aspect of cloud computing, are becoming an important part of the general IT infrastructure, and scientific computing is no exception to this trend. We propose a simple approach to develop chemical Web services, through which servers could expose the essential data manipulation functionality that students and researchers need for chemical calculations. These services return their results as JSON (JavaScript Object Notation) objects, which facilitates their use for Web applications. The ChemCalc project http://www.chemcalc.org demonstrates this approach: we present three Web services related with mass spectrometry, namely isotopic distribution simulation, peptide fragmentation simulation, and molecular formula determination. We also developed a complete Web application based on these three Web services, taking advantage of modern HTML5 and JavaScript libraries (ChemDoodle and jQuery).

  2. Optimal mapping of irregular finite element domains to parallel processors

    NASA Technical Reports Server (NTRS)

    Flower, J.; Otto, S.; Salama, M.

    1987-01-01

    Mapping the solution domain of n finite elements onto N subdomains that may be processed in parallel by N processors is optimal if the subdomain decomposition results in a well-balanced workload distribution among the processors. The problem is discussed in the context of irregular finite element domains as an important aspect of the efficient utilization of the capabilities of emerging multiprocessor computers. Finding the optimal mapping is an intractable combinatorial optimization problem, for which a satisfactory approximate solution is obtained here by analogy to a method used in statistical mechanics for simulating the annealing process in solids. The simulated annealing analogy and algorithm are described, and numerical results are given for mapping an irregular two-dimensional finite element domain containing a singularity onto the Hypercube computer.
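    The annealing analogy can be sketched on a toy mapping problem. The cost function below (load imbalance plus cut edges as a proxy for interprocessor communication) and all parameter values are illustrative choices of ours, not those of the paper:

```python
import math
import random

def mapping_cost(assign, n_procs, edges):
    """Load imbalance plus the number of element edges cut by the mapping."""
    loads = [0] * n_procs
    for p in assign:
        loads[p] += 1
    cut = sum(1 for i, j in edges if assign[i] != assign[j])
    return (max(loads) - min(loads)) + cut

def anneal_mapping(n_elems, n_procs, edges, t0=2.0, cooling=0.995,
                   steps=20000, seed=1):
    """Simulated annealing over element-to-processor assignments."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_procs) for _ in range(n_elems)]
    c, t = mapping_cost(assign, n_procs, edges), t0
    for _ in range(steps):
        i = rng.randrange(n_elems)       # propose moving one element
        old = assign[i]
        assign[i] = rng.randrange(n_procs)
        c_new = mapping_cost(assign, n_procs, edges)
        # Metropolis rule: accept improvements outright; accept uphill
        # moves with probability exp(-delta/T), vanishing as T cools
        if c_new <= c or rng.random() < math.exp((c - c_new) / t):
            c = c_new
        else:
            assign[i] = old              # reject: restore previous state
        t *= cooling
    return assign, c
```

    On a mesh made of two tightly coupled element clusters joined by a single edge, the optimum for two processors is to cut only the bridge (cost 1); the random uphill moves let the search escape the poor local minima a pure greedy descent would get stuck in.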

  3. Efficient modeling of laser-plasma accelerator staging experiments using INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2017-03-01

    The computational framework INF&RNO (INtegrated Fluid & paRticle simulatioN cOde) allows for fast and accurate modeling, in 2D cylindrical geometry, of several aspects of laser-plasma accelerator physics. In this paper, we present some of the new features of the code, including the quasistatic Particle-In-Cell (PIC)/fluid modality, and describe using different computational grids and time steps for the laser envelope and the plasma wake. These and other features allow for a speedup of several orders of magnitude compared to standard full 3D PIC simulations while still retaining physical fidelity. INF&RNO is used to support the experimental activity at the BELLA Center, and we will present an example of the application of the code to the laser-plasma accelerator staging experiment.

  4. Understanding Slat Noise Sources

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.

    2003-01-01

    Model-scale aeroacoustic tests of large civil transports point to the leading-edge slat as a dominant high-lift noise source in the low- to mid-frequencies during aircraft approach and landing. Using generic multi-element high-lift models, complementary experimental and numerical tests were carefully planned and executed at NASA in order to isolate slat noise sources and the underlying noise generation mechanisms. In this paper, a brief overview of the supporting computational effort undertaken at NASA Langley Research Center is provided. Both tonal and broadband aspects of slat noise are discussed. Recent gains in predicting a slat's far-field acoustic noise, current shortcomings of numerical simulations, and other remaining open issues are presented. Finally, an example of the ever-expanding role of computational simulations in noise reduction studies is also given.

  5. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method

    PubMed Central

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller’s scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller’s algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller’s algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller’s algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data. PMID:26958442

  6. Effectiveness of a Computer-Mediated Simulations Program in School Biology on Pupils' Learning Outcomes in Cell Theory

    ERIC Educational Resources Information Center

    Kiboss, Joel K.; Ndirangu, Mwangi; Wekesa, Eric W.

    2004-01-01

    Biology knowledge and understanding is important not only for the conversion of the loftiest dreams into reality for a better life of individuals but also for preparing secondary pupils for such fields as agriculture, medicine, biotechnology, and genetic engineering. But a recent study has revealed that many aspects of school science (biology…

  7. The Management of Cognitive Load During Complex Cognitive Skill Acquisition by Means of Computer-Simulated Problem Solving

    ERIC Educational Resources Information Center

    Kester, Liesbeth; Kirschner, Paul A.; van Merrienboer, Jeroen J.G.

    2005-01-01

    This study compared the effects of two information presentation formats on learning to solve problems in electrical circuits. In one condition, the split-source format, information relating to procedural aspects of the functioning of an electrical circuit was not integrated in a circuit diagram, while information in the integrated format condition…

  8. Patentability aspects of computational cancer models

    NASA Astrophysics Data System (ADS)

    Lishchuk, Iryna

    2017-07-01

    Multiscale cancer models, implemented in silico, simulate tumor progression at various spatial and temporal scales. Having innovative substance and possessing the potential to be applied as decision support tools in clinical practice, cancer models seem prima facie patentable. In this paper, we inquire what legal hurdles cancer models need to overcome in order to be patented.

  9. Skills-O-Mat: Computer Supported Interactive Motion- and Game-Based Training in Mixing Alginate in Dental Education

    ERIC Educational Resources Information Center

    Hannig, Andreas; Lemos, Martin; Spreckelsen, Cord; Ohnesorge-Radtke, Ulla; Rafai, Nicole

    2013-01-01

    The training of motor skills is a crucial aspect of medical education today. Serious games and haptic virtual simulations have been used in the training of surgical procedures. Otherwise, however, a combination of serious games and motor skills training is rarely used in medical education. This article presents Skills-O-Mat, an interactive serious…

  10. Computational Modeling of Reading in Semantic Dementia: Comment on Woollams, Lambon Ralph, Plaut, and Patterson (2007)

    ERIC Educational Resources Information Center

    Coltheart, Max; Tree, Jeremy J.; Saunders, Steven J.

    2010-01-01

    Woollams, Lambon Ralph, Plaut, and Patterson (see record 2007-05396-004) reported detailed data on reading in 51 cases of semantic dementia. They simulated some aspects of these data using a connectionist parallel distributed processing (PDP) triangle model of reading. We argue here that a different model of reading, the dual route cascaded (DRC)…

  11. Probabilistic Finite Elements (PFEM) structural dynamics and fracture mechanics

    NASA Technical Reports Server (NTRS)

    Liu, Wing-Kam; Belytschko, Ted; Mani, A.; Besterfield, G.

    1989-01-01

    The purpose of this work is to develop computationally efficient methodologies for assessing the effects of randomness in loads, material properties, and other aspects of a problem by a finite element analysis. The resulting group of methods is called probabilistic finite elements (PFEM). The overall objective of this work is to develop methodologies whereby the lifetime of a component can be predicted, accounting for the variability in the material and geometry of the component, the loads, and other aspects of the environment; and the range of response expected in a particular scenario can be presented to the analyst in addition to the response itself. Emphasis has been placed on methods which are not statistical in character; that is, they do not involve Monte Carlo simulations. The reason for this choice of direction is that Monte Carlo simulations of complex nonlinear response require a tremendous amount of computation. The focus of efforts so far has been on nonlinear structural dynamics. However, in the continuation of this project, emphasis will be shifted to probabilistic fracture mechanics so that the effect of randomness in crack geometry and material properties can be studied interactively with the effect of random load and environment.

  12. A multiscale method for modeling high-aspect-ratio micro/nano flows

    NASA Astrophysics Data System (ADS)

    Lockerby, Duncan; Borg, Matthew; Reese, Jason

    2012-11-01

    In this paper we present a new multiscale scheme for simulating micro/nano flows of high aspect ratio in the flow direction, e.g. within long ducts, tubes, or channels, of varying section. The scheme consists of applying a simple hydrodynamic description over the entire domain, and allocating micro sub-domains in very small ``slices'' of the channel. Every micro element is a molecular dynamics simulation (or other appropriate model, e.g., a direct simulation Monte Carlo method for micro-channel gas flows) over the local height of the channel/tube. The number of micro elements as well as their streamwise position is chosen to resolve the geometrical features of the macro channel. While there is no direct communication between individual micro elements, coupling occurs via an iterative imposition of mass and momentum-flux conservation on the macro scale. The greater the streamwise scale of the geometry, the more significant is the computational speed-up when compared to a full MD simulation. We test our new multiscale method on the case of a converging/diverging nanochannel conveying a simple Lennard-Jones liquid. We validate the results from our simulations by comparing them to a full MD simulation of the same test case. Supported by EPSRC Programme Grant, EP/I011927/1.
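    The macro-scale coupling described in this abstract can be illustrated with a one-dimensional sketch. Here each micro "slice" is replaced by an analytic Poiseuille closure relating flux to the local pressure gradient; in the actual scheme an MD (or DSMC) simulation of the local channel height would supply that relation, and the flux matching would be iterated rather than solved directly. All names and numbers below are ours:

```python
import numpy as np

def couple_slices(h, dp_total, mu=1.0, length=1.0):
    """Impose a single mass flux Q through every slice of a varying channel.

    h        : local channel heights at the slice positions (array-like)
    dp_total : total pressure drop over the channel
    Stand-in micro model: plane Poiseuille flow, Q = h^3/(12*mu) * (-dp/dx)
    per unit width, so each slice acts as a hydraulic resistance.
    """
    h = np.asarray(h, dtype=float)
    dx = length / h.size
    R = 12.0 * mu * dx / h**3      # hydraulic resistance of each slice
    Q = dp_total / R.sum()         # mass conservation: the same Q everywhere
    return Q, Q * R                # flow rate and per-slice pressure drops
```

    For a uniform channel the pressure drop distributes evenly across the slices; in a converging section the narrower slices absorb most of the drop, which is the geometric feature the streamwise placement of micro elements is meant to resolve.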

  13. 3D Simulation of External Flooding Events for the RISMC Pathway

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prescott, Steven; Mandelli, Diego; Sampath, Ramprasad

    2015-09-01

    Incorporating 3D simulations as part of the Risk-Informed Safety Margins Characterization (RISMC) Toolkit allows analysts to obtain a more complete picture of complex system behavior for events including external plant hazards. External events such as flooding have become more important recently; however, these can be analyzed with existing, validated physics simulation toolkits. In this report, we describe these approaches specific to flooding-based analysis using an approach called Smoothed Particle Hydrodynamics. The theory, validation, and example applications of the 3D flooding simulation are described. Integrating these 3D simulation methods into computational risk analysis provides a spatial/visual aspect to the design, improves the realism of results, and can provide visual understanding to validate the flooding analysis.

  14. Design and Analyses of High Aspect Ratio Nozzles for Distributed Propulsion Acoustic Measurements

    NASA Technical Reports Server (NTRS)

    Dippold, Vance F., III

    2016-01-01

    A series of three convergent, round-to-rectangular high aspect ratio (HAR) nozzles were designed for acoustic testing at the NASA Glenn Research Center Nozzle Acoustic Test Rig (NATR). The HAR nozzles had exit area aspect ratios of 8:1, 12:1, and 16:1. The nozzles were designed to mimic a distributed propulsion system array with a slot nozzle. The nozzle designs were screened using Reynolds-Averaged Navier-Stokes (RANS) simulations. In addition to meeting the geometric constraints required for testing in the NATR, the HAR nozzles were designed to be free of flow features that would produce unwanted noise (e.g., flow separations) and to have uniform flow at the nozzle exit. Multiple methods were used to generate HAR nozzle designs. The final HAR nozzle designs were generated in segments using a computer code that parameterized each segment. RANS screening simulations showed that intermediate nozzle designs suffered flow separation, a normal shockwave at the nozzle exit (caused by an aerodynamic throat produced by boundary layer growth), and non-uniform flow at the nozzle exit. The RANS simulations showed that the final HAR nozzle designs were free of flow separations, but were not entirely successful at producing fully uniform flow at the nozzle exit. The final designs exhibited a pair of counter-rotating vortices along the outboard walls of the nozzle. The 16:1 aspect ratio HAR nozzle had the least uniform flow at the exit plane; the 8:1 aspect ratio HAR nozzle had a fairly uniform flow at the nozzle exit plane.

  15. Molecular dynamics simulations of membrane proteins and their interactions: from nanoscale to mesoscale.

    PubMed

    Chavent, Matthieu; Duncan, Anna L; Sansom, Mark Sp

    2016-10-01

    Molecular dynamics simulations provide a computational tool to probe membrane proteins and systems at length scales ranging from nanometers to close to a micrometer, and on microsecond timescales. All-atom and coarse-grained simulations may be used to explore in detail the interactions of membrane proteins and specific lipids, yielding predictions of lipid binding sites in good agreement with available structural data. Building on the success of protein-lipid interaction simulations, larger scale simulations reveal crowding and clustering of proteins, resulting in slow and anomalous diffusional dynamics, within realistic models of cell membranes. Current methods allow near atomic resolution simulations of small membrane organelles, and of enveloped viruses, to be performed, revealing key aspects of their structure and functionally important dynamics. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  16. [A new concept in digestive surgery: the computer assisted surgical procedure, from virtual reality to telemanipulation].

    PubMed

    Marescaux, J; Clément, J M; Nord, M; Russier, Y; Tassetti, V; Mutter, D; Cotin, S; Ayache, N

    1997-11-01

    Surgical simulation increasingly appears to be an essential aspect of tomorrow's surgery. The development of a hepatic surgery simulator is an advanced concept calling for a new writing system which will transform the medical world: virtual reality. Virtual reality extends the perception of our five senses by representing more than the real state of things by means of computer sciences and robotics. It consists of three concepts: immersion, navigation and interaction. Three reasons have led us to develop this simulator: the first is to provide the surgeon with a comprehensive visualisation of the organ. The second reason is to allow for planning and surgical simulation that could be compared with the detailed flight-plan for a commercial jet pilot. The third lies in the fact that virtual reality is an integrated part of the concept of computer assisted surgical procedure. The project consists of a sophisticated simulator which has to meet five requirements: visual fidelity, interactivity, physical properties, physiological properties, and sensory input and output. In this report we describe how to obtain a realistic 3D model of the liver from two-dimensional (2D) medical images for anatomical and surgical training. The introduction of a tumor and the consequent planning and virtual resection are also described, as are force feedback and real-time interaction.

  17. PROCESS SIMULATION OF COLD PRESSING OF ARMSTRONG CP-Ti POWDERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabau, Adrian S; Gorti, Sarma B; Peter, William H

    A computational methodology is presented for the process simulation of cold pressing of Armstrong CP-Ti powders. The computational model was implemented in the commercial finite element program ABAQUS™. Since the powder deformation and consolidation are governed by specific pressure-dependent constitutive equations, several solution algorithms were developed for the ABAQUS user material subroutine, UMAT. The solution algorithms were developed for computing the plastic strain increments based on an implicit integration of the nonlinear yield function, flow rule, and hardening equations that describe the evolution of the state variables. Since ABAQUS requires the use of a full Newton-Raphson algorithm for the stress-strain equations, an algorithm for obtaining the tangent/linearization moduli, consistent with the return-mapping algorithm, also was developed. Numerical simulation results are presented for the cold compaction of the Ti powders. Several simulations were conducted for cylindrical samples with different aspect ratios. The numerical simulation results showed that for the disk samples, the minimum von Mises stress was approximately half of its maximum value. The hydrostatic stress distribution exhibited a variation smaller than that of the von Mises stress. It was found that for the disk and cylinder samples the minimum hydrostatic stresses were approximately 23% and 50% less than their maximum values, respectively. It was also found that the minimum density was noticeably affected by the sample height.
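    The implicit return-mapping update mentioned above can be illustrated with the simplest possible case. This sketch uses 1D rate-independent plasticity with linear isotropic hardening rather than the paper's pressure-dependent model; the name `return_map_1d` and the material constants are illustrative assumptions:

```python
def return_map_1d(eps_total, eps_p, alpha, E=200.0e3, H=10.0e3, sig_y=250.0):
    """One implicit return-mapping step for 1D rate-independent
    plasticity with linear isotropic hardening. Returns the updated
    stress, plastic strain, hardening variable, and the consistent
    (algorithmic) tangent that a UMAT must also supply."""
    sig_trial = E * (eps_total - eps_p)                 # elastic predictor
    f_trial = abs(sig_trial) - (sig_y + H * alpha)      # trial yield check
    if f_trial <= 0.0:
        return sig_trial, eps_p, alpha, E               # elastic step
    dgamma = f_trial / (E + H)                          # closed-form return
    sign = 1.0 if sig_trial > 0.0 else -1.0
    sig = sig_trial - E * dgamma * sign                 # plastic corrector
    return sig, eps_p + dgamma * sign, alpha + dgamma, E * H / (E + H)
```

In the linear-hardening case the return has a closed form; for the pressure-dependent yield surfaces of the paper the same predictor-corrector structure is solved iteratively, and the consistent tangent replaces the elastic modulus in the global Newton-Raphson loop.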

  18. Cerebro-cerebellar interactions underlying temporal information processing.

    PubMed

    Aso, Kenji; Hanakawa, Takashi; Aso, Toshihiko; Fukuyama, Hidenao

    2010-12-01

    The neural basis of temporal information processing remains unclear, but it is proposed that the cerebellum plays an important role through its internal clock or feed-forward computation functions. In this study, fMRI was used to investigate the brain networks engaged in perceptual and motor aspects of subsecond temporal processing without accompanying coprocessing of spatial information. Direct comparison between perceptual and motor aspects of time processing was made with a categorical-design analysis. The right lateral cerebellum (lobule VI) was active during a time discrimination task, whereas the left cerebellar lobule VI was activated during a timed movement generation task. These findings were consistent with the idea that the cerebellum contributed to subsecond time processing in both perceptual and motor aspects. The feed-forward computational theory of the cerebellum predicted increased cerebro-cerebellar interactions during time information processing. In fact, a psychophysiological interaction analysis identified the supplementary motor and dorsal premotor areas, which had a significant functional connectivity with the right cerebellar region during a time discrimination task and with the left lateral cerebellum during a timed movement generation task. The involvement of cerebro-cerebellar interactions may provide supportive evidence that temporal information processing relies on the simulation of timing information through feed-forward computation in the cerebellum.

  19. Cause and Cure - Deterioration in Accuracy of CFD Simulations With Use of High-Aspect-Ratio Triangular Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar

    2017-01-01

    Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions and also cause numerical instability. Although for certain complex geometries, the use of high-aspect-ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where triangular/tetrahedral elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in the use of such high-aspect-ratio triangular/tetrahedral elements is presented here. As will be shown, it turns out that the degree of accuracy deterioration of gradient computation involving a triangular element hinges on the value of its shape factor Γ = sin²α₁ + sin²α₂ + sin²α₃, where α₁, α₂, and α₃ are the internal angles of the element. In fact, it is shown that the degree of accuracy deterioration increases monotonically as the value of Γ decreases monotonically from its maximal value 9/4 (attained by an equilateral triangle only) to a value much less than 1 (associated with a highly obtuse triangle). By taking advantage of the fact that a high-aspect-ratio triangle is not necessarily highly obtuse, and in fact can have a shape factor whose value is close to the maximal value 9/4, a potential solution to avoid accuracy deterioration of gradient computation associated with a high-aspect-ratio triangular grid is given. A brief discussion of the extension of the current mathematical framework to the tetrahedral-grid case, along with some practical results of this extension, is also provided. Furthermore, through numerical simulations of practical viscous problems involving high-Reynolds-number flows, the effectiveness of the gradient evaluation procedures within the CESE framework (which have their basis in the analysis presented here) in producing accurate and stable results on such high-aspect-ratio meshes is also showcased.
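    The shape factor is easy to evaluate numerically. A minimal sketch (the helper `shape_factor` is a hypothetical name; angles are taken in degrees for readability) confirms that the equilateral triangle attains the maximum 9/4, that a thin right triangle keeps the factor near 2, and that a highly obtuse triangle drives it toward zero:

```python
import math

def shape_factor(a1_deg, a2_deg, a3_deg):
    """Gamma = sin^2(a1) + sin^2(a2) + sin^2(a3) for the internal
    angles of a triangle, given here in degrees."""
    assert abs(a1_deg + a2_deg + a3_deg - 180.0) < 1e-9
    return sum(math.sin(math.radians(a)) ** 2 for a in (a1_deg, a2_deg, a3_deg))

equilateral   = shape_factor(60, 60, 60)   # maximal value: 9/4
thin_right    = shape_factor(90, 89, 1)    # high aspect ratio, Gamma near 2
highly_obtuse = shape_factor(178, 1, 1)    # Gamma much less than 1
```

This illustrates the paper's key observation: high aspect ratio alone is harmless (the thin right triangle keeps a healthy shape factor); only high obtuseness collapses it.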

  20. Simulated lumbar minimally invasive surgery educational model with didactic and technical components.

    PubMed

    Chitale, Rohan; Ghobrial, George M; Lobel, Darlene; Harrop, James

    2013-10-01

    The learning and development of technical skills are paramount for neurosurgical trainees. External influences and a need for maximizing efficiency and proficiency have encouraged advancements in simulator-based learning models. To confirm the importance of establishing an educational curriculum for teaching minimally invasive techniques of pedicle screw placement using a computer-enhanced physical model of percutaneous pedicle screw placement with simultaneous didactic and technical components. A 2-hour educational curriculum was created to educate neurosurgical residents on anatomy, pathophysiology, and technical aspects associated with image-guided pedicle screw placement. Predidactic and postdidactic practical and written scores were analyzed and compared. Scores were calculated for each participant on the basis of the optimal pedicle screw starting point and trajectory for both fluoroscopy and computed tomographic navigation. Eight trainees participated in this module. Mean scores on the written didactic test improved from 78% to 100%. The technical component scores for fluoroscopic guidance improved from 58.8 to 52.9, and the technical scores for computed tomography-navigated guidance improved from 28.3 to 26.6 (lower technical scores reflect improved accuracy). Didactic and technical quantitative scores with a simulator-based educational curriculum improved objectively measured resident performance. A minimally invasive spine simulation model and curriculum may serve a valuable function in the education of neurosurgical residents and outcomes for patients.

  1. 1D-3D hybrid modeling-from multi-compartment models to full resolution models in space and time.

    PubMed

    Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M; Queisser, Gillian

    2014-01-01

    Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the associated computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated, methods-spanning simulation approaches could, depending on the network size, establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models, using, e.g., the NEURON simulator, which couple to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D and 3D simulations, we present a geometry, membrane potential, and intracellular concentration mapping framework, with which graph-based morphologies, e.g., in the swc or hoc format, are mapped to full surface and volume representations of the neuron, and computational data from 1D simulations can be used as boundary conditions for full 3D simulations and vice versa. Thus, established models and data, based on general purpose 1D simulators, can be directly coupled to the emerging field of fully resolved, highly detailed 3D modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics.
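    The core of such a 1D-to-3D mapping can be sketched with a nearest-neighbour assignment: each 3D surface vertex inherits the membrane potential of the closest 1D compartment, usable as a boundary condition for the 3D solver. This is a hedged simplification of the paper's framework (which maps swc/hoc graphs onto full surface and volume grids); the function name is hypothetical:

```python
import numpy as np

def map_potentials_to_surface(comp_xyz, comp_v, surf_xyz):
    """Nearest-neighbour sketch of the 1D -> 3D mapping: for every
    surface vertex, find the closest 1D compartment and copy its
    membrane potential onto the vertex."""
    d = np.linalg.norm(surf_xyz[:, None, :] - comp_xyz[None, :, :], axis=2)
    return comp_v[d.argmin(axis=1)]
```

The reverse direction (3D back to 1D) would average vertex values over the set of vertices owned by each compartment.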

  2. Publicly Open Virtualized Gaming Environment For Simulation of All Aspects Related to '100 Year Starship Study'

    NASA Astrophysics Data System (ADS)

    Obousy, R. K.

    2012-09-01

    Sending a mission to distant stars will require our civilization to develop new technologies and change the way we live. The complexity of the task is enormous [1]; thus, the thought is to involve people from around the globe through the ``citizen scientist'' paradigm. The suggestion is a ``Gaming Virtual Reality Network'' (GVRN) to simulate sociological and technological aspects involved in this project. Work is currently being done [2] to develop a technology which will construct computer games within GVRN. This technology will provide quick and easy ways for individuals to develop game scenarios related to various aspects of the ``100YSS'' project. People will be involved in solving certain tasks just by playing games. Players will be able to modify conditions, add new technologies, geological conditions, and social movements, and assemble new strategies just by writing scenarios. The system will interface with textual and video information, extract scenarios written in millions of texts, and use them to assemble new games. Thus, players will be able to simulate an enormous number of possibilities. Information technologies will continue to evolve, which will require us to build the system in such a way that any module can be easily replaced. Thus, GVRN should be modular and open to the community.

  3. Numerical simulation of the tip vortex off a low-aspect-ratio wing at transonic speed

    NASA Technical Reports Server (NTRS)

    Mansour, N. N.

    1984-01-01

    The viscous transonic flow around a low-aspect-ratio wing was computed by an implicit, three-dimensional, thin-layer Navier-Stokes solver. The grid around the geometry of interest is obtained numerically as a solution to a Dirichlet problem for the cube. A low-aspect-ratio wing with large sweep, twist, taper, and camber is the chosen geometry. The topology chosen to wrap the mesh around the wing with good tip resolution is a C-O type mesh. The flow around the wing was computed for a free stream Mach number of 0.82 at an angle of attack of 5 deg. At this Mach number, an oblique shock forms on the upper surface of the wing, and a tip vortex and three-dimensional flow separation off the wing surface are observed. Particle path lines indicate that the three-dimensional flow separation on the wing surface is at the root of the tip vortex formation. The lifting of the tip vortex ahead of the wing trailing edge is observed by following the trajectories of particles released around the wing tip.

  4. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de

    2017-04-15

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.

  5. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-04-01

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
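    The Lloyd iteration at the heart of the CVT step can be sketched on a discrete sample cloud: assign each sample to its nearest generator, then move every generator to the centroid of its region, which cannot increase the CVT energy. This is a hedged illustration, not the authors' implementation; `lloyd_cvt` and `cvt_energy` are hypothetical names:

```python
import numpy as np

def lloyd_cvt(samples, generators, iters=30):
    """Lloyd iteration on a discrete sample cloud. Each sweep assigns
    samples to their nearest generator and moves every generator to
    the centroid of its region."""
    g = generators.astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(samples[:, None, :] - g[None, :, :], axis=2)
        owner = d.argmin(axis=1)
        for k in range(g.shape[0]):
            region = samples[owner == k]
            if region.size:
                g[k] = region.mean(axis=0)  # centroid update
    return g, owner

def cvt_energy(samples, g):
    """CVT energy: sum of squared distances to the nearest generator."""
    d = np.linalg.norm(samples[:, None, :] - g[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()
```

Each resulting region is a (discrete) Voronoi cell, hence convex and compact, which is what keeps the subdomain aspect ratios small in the paper's partitioner.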

  6. Cart3D Simulations for the Second AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Anderson, George R.; Aftosmis, Michael J.; Nemec, Marian

    2017-01-01

    Simulation results are presented for all test cases prescribed in the Second AIAA Sonic Boom Prediction Workshop. For each of the four nearfield test cases, we compute pressure signatures at specified distances and off-track angles, using an inviscid, embedded-boundary Cartesian-mesh flow solver with output-based mesh adaptation. The cases range in complexity from an axisymmetric body to a full low-boom aircraft configuration with a powered nacelle. For efficiency, boom carpets are decomposed into sets of independent meshes and computed in parallel. This also facilitates the use of more effective meshing strategies - each off-track angle is computed on a mesh with good azimuthal alignment, higher aspect ratio cells, and more tailored adaptation. The nearfield signatures generally exhibit good convergence with mesh refinement. We introduce a local error estimation procedure to highlight regions of the signatures most sensitive to mesh refinement. Results are also presented for the two propagation test cases, which investigate the effects of atmospheric profiles on ground noise. Propagation is handled with an augmented Burgers' equation method (NASA's sBOOM), and ground noise metrics are computed with LCASB.

  7. Search and rescue in collapsed structures: engineering and social science aspects.

    PubMed

    El-Tawil, Sherif; Aguirre, Benigno

    2010-10-01

    This paper discusses the social science and engineering dimensions of search and rescue (SAR) in collapsed buildings. First, existing information is presented on factors that influence the behaviour of trapped victims, particularly human, physical, socioeconomic and circumstantial factors. Trapped victims are most often discussed in the context of structural collapse and injuries sustained. Most studies in this area focus on earthquakes as the type of disaster that produces the most extensive structural damage. Second, information is set out on the engineering aspects of urban search and rescue (USAR) in the United States, including the role of structural engineers in USAR operations, training and certification of structural specialists, and safety and general procedures. The use of computational simulation to link the engineering and social science aspects of USAR is discussed. This could supplement training of local SAR groups and USAR teams, allowing them to understand better the collapse process and how voids form in a rubble pile. A preliminary simulation tool developed for this purpose is described. © 2010 The Author(s). Journal compilation © Overseas Development Institute, 2010.

  8. Aspects of numerical and representational methods related to the finite-difference simulation of advective and dispersive transport of freshwater in a thin brackish aquifer

    USGS Publications Warehouse

    Merritt, M.L.

    1993-01-01

    The simulation of the transport of injected freshwater in a thin brackish aquifer, overlain and underlain by confining layers containing more saline water, is shown to be influenced by the choice of the finite-difference approximation method, the algorithm for representing vertical advective and dispersive fluxes, and the values assigned to parametric coefficients that specify the degree of vertical dispersion and molecular diffusion that occurs. Computed potable water recovery efficiencies will differ depending upon the choice of algorithm and approximation method, as will dispersion coefficients estimated based on the calibration of simulations to match measured data. A comparison of centered and backward finite-difference approximation methods shows that substantially different transition zones between injected and native waters are depicted by the different methods, and computed recovery efficiencies vary greatly. Standard and experimental algorithms and a variety of values for molecular diffusivity, transverse dispersivity, and vertical scaling factor were compared in simulations of freshwater storage in a thin brackish aquifer. Computed recovery efficiencies vary considerably, and appreciable differences are observed in the distribution of injected freshwater in the various cases tested. The results demonstrate both a qualitatively different description of transport using the experimental algorithms and the interrelated influences of molecular diffusion and transverse dispersion on simulated recovery efficiency. When simulating natural aquifer flow in cross-section, flushing of the aquifer occurred for all tested coefficient choices using both standard and experimental algorithms. © 1993.
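    The centered-versus-backward distinction has a compact 1D analogue. The sketch below (illustrative only, not the USGS model) advances a periodic advection equation with forward-Euler time stepping and shows the classic trade-off: the backward (upwind) difference is monotone but numerically diffusive, smearing the freshwater front, while the centered difference produces spurious oscillations around it:

```python
import numpy as np

def advect(c0, cfl, steps, scheme):
    """Advance the periodic 1D advection equation c_t + u c_x = 0 with
    either a centered or a backward (upwind, u > 0) spatial difference
    and forward-Euler time stepping. cfl = u*dt/dx."""
    c = c0.astype(float).copy()
    for _ in range(steps):
        if scheme == "backward":
            c = c - cfl * (c - np.roll(c, 1))
        else:  # centered
            c = c - 0.5 * cfl * (np.roll(c, -1) - np.roll(c, 1))
    return c

# square pulse standing in for an injected freshwater slug
c0 = np.zeros(100)
c0[40:60] = 1.0
upwind = advect(c0, cfl=0.4, steps=50, scheme="backward")
centered = advect(c0, cfl=0.4, steps=50, scheme="centered")
```

Both schemes conserve total mass on the periodic grid, but the transition zones they depict differ markedly, which is the qualitative behavior the abstract reports for recovery-efficiency estimates.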

  9. Geometric saliency to characterize radar exploitation performance

    NASA Astrophysics Data System (ADS)

    Nolan, Adam; Keserich, Brad; Lingg, Andrew; Goley, Steve

    2014-06-01

    Based on the fundamental scattering mechanisms of facetized computer-aided design (CAD) models, we are able to define expected contributions (EC) to the radar signature. The net result of this analysis is the prediction of the salient aspects and contributing vehicle morphology based on the aspect. Although this approach does not provide the fidelity of an asymptotic electromagnetic (EM) simulation, it does provide very fast estimates of the unique scattering that can be consumed by a signature exploitation algorithm. The speed of this approach is particularly relevant when considering the high dimensionality of target configuration variability due to articulating parts which are computationally burdensome to predict. The key scattering phenomena considered in this work are the specular response from a single bounce interaction with surfaces and dihedral response formed between the ground plane and vehicle. Results of this analysis are demonstrated for a set of civilian target models.

  10. Simulation of Physical Experiments in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Wasfy, Tamer M.

    2001-01-01

    An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.

  11. Curvilinear immersed-boundary method for simulating unsteady flows in shallow natural streams with arbitrarily complex obstacles

    NASA Astrophysics Data System (ADS)

    Kang, Seokkoo; Borazjani, Iman; Sotiropoulos, Fotis

    2008-11-01

    Unsteady 3D simulation of flows in natural streams is a challenging task due to the complexity of the bathymetry, the shallowness of the flow, and the presence of multiple natural and man-made obstacles. This work is motivated by the need to develop a powerful numerical method for simulating such flows using coherent-structure-resolving turbulence models. We employ the curvilinear immersed boundary method of Ge and Sotiropoulos (Journal of Computational Physics, 2007) and address the critical issue of numerical efficiency in large-aspect-ratio computational domains and grids such as those encountered in long and shallow open channels. We show that the matrix-free Newton-Krylov method for solving the momentum equations, coupled with an algebraic multigrid method with incomplete LU preconditioner for solving the Poisson equation, yields a robust and efficient procedure for obtaining time-accurate solutions in such problems. We demonstrate the potential of the numerical approach by carrying out a direct numerical simulation of flow in a long and shallow meandering stream with multiple hydraulic structures.
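    The matrix-free Newton-Krylov ingredient can be illustrated with SciPy's `newton_krylov`, which needs only residual evaluations: the Jacobian is never assembled, but probed by finite differences inside the Krylov iterations. The boundary-value problem below is a toy stand-in for the momentum equations, chosen only to show the interface; the grid size and source term are arbitrary assumptions:

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 64
h = 1.0 / (n + 1)

def residual(u):
    """Residual of a small nonlinear boundary-value problem
    u'' + u^3 = 1 with u = 0 at both ends (illustrative only)."""
    lap = -2.0 * u
    lap[1:] += u[:-1]
    lap[:-1] += u[1:]
    return lap / h**2 + u**3 - 1.0

# matrix-free solve: only residual evaluations are supplied
u = newton_krylov(residual, np.zeros(n), f_tol=1e-8)
```

The same pattern scales to the momentum equations of the paper, where assembling a Jacobian on a large 3D curvilinear grid would be prohibitive.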

  12. Modeling Criminal Activity in Urban Landscapes

    NASA Astrophysics Data System (ADS)

    Brantingham, Patricia; Glässer, Uwe; Jackson, Piper; Vajihollahi, Mona

    Computational and mathematical methods arguably have an enormous potential for serving practical needs in crime analysis and prevention by offering novel tools for crime investigations and experimental platforms for evidence-based policy making. We present a comprehensive formal framework and tool support for mathematical and computational modeling of criminal behavior to facilitate systematic experimental studies of a wide range of criminal activities in urban environments. The focus is on spatial and temporal aspects of different forms of crime, including opportunistic and serial violent crimes. However, the proposed framework provides a basis to push beyond conventional empirical research and engage the use of computational thinking and social simulations in the analysis of terrorism and counter-terrorism.

  13. Computing and Visualizing the Complex Dynamics of Earthquake Fault Systems: Towards Ensemble Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Rundle, J.; Rundle, P.; Donnellan, A.; Li, P.

    2003-12-01

    We consider the problem of the complex dynamics of earthquake fault systems, and whether numerical simulations can be used to define an ensemble forecasting technology similar to that used in weather and climate research. To effectively carry out such a program, we need 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike slip faults extending throughout California, from the Mexico-California border to the Mendocino Triple Junction. We use the historic data set of earthquakes larger than magnitude M > 6 to define the frictional properties of all 654 fault segments (degrees of freedom) in the model. Previous versions of Virtual California had used only 215 fault segments to model the strike slip faults in southern California. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a small Beowulf cluster consisting of 10 CPUs. We are also planning to run the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess scaling properties of the code. We present results of simulations both as static images and as mpeg movies, so that the dynamical aspects of the computation can be assessed by the viewer. We also compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems.

  14. Towards an orientation-distribution-based multi-scale approach for remodelling biological tissues.

    PubMed

    Menzel, A; Harrysson, M; Ristinmaa, M

    2008-10-01

    The mechanical behaviour of soft biological tissues is governed by phenomena occurring on different scales of observation. From the computational modelling point of view, a vital aspect consists of the appropriate incorporation of micromechanical effects into macroscopic constitutive equations. In this work, particular emphasis is placed on the simulation of soft fibrous tissues with the orientation of the underlying fibres being determined by distribution functions. A straightforward but convenient Taylor-type homogenisation approach links the micro- or rather meso-level of fibres to the overall macro-level and allows us to reflect macroscopically orthotropic response. As a key aspect of this work, evolution equations for the fibre orientations are accounted for so that physiological effects like turnover or rather remodelling are captured. Concerning numerical applications, the derived set of equations can be embedded into a nonlinear finite element context so that first elementary simulations are finally addressed.
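
    The orientation averaging described above can be sketched for a planar fibre distribution; the fourth-power weighting and all parameter values below are illustrative assumptions, not the constitutive model of the paper:

```python
import math

def effective_fibre_modulus(E_fibre, density, n=3600):
    """Average the uniaxial fibre contribution E_f * cos^4(theta) over a planar
    orientation distribution density(theta) on [0, pi) by the midpoint rule
    (a Taylor-type, affine averaging assumption)."""
    total, weight = 0.0, 0.0
    for i in range(n):
        theta = math.pi * (i + 0.5) / n
        rho = density(theta)
        total += rho * math.cos(theta) ** 4 * E_fibre
        weight += rho
    return total / weight

uniform = lambda theta: 1.0 / math.pi                      # isotropic fibres
E_iso = effective_fibre_modulus(100.0, uniform)            # 3/8 of E_fibre
aligned = lambda theta: math.exp(50.0 * math.cos(2.0 * theta))  # fibres near theta = 0
E_aligned = effective_fibre_modulus(100.0, aligned)        # approaches E_fibre
```

For a uniform distribution the average of cos^4 is exactly 3/8, so the effective modulus drops to 37.5% of the fibre modulus, while a sharply aligned distribution recovers nearly the full fibre stiffness along the preferred direction.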

  15. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and components/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During the FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
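
    As a toy illustration of the surrogate idea, a cheap model fitted to a handful of expensive runs can stand in for the code itself. The quadratic interpolation below is a deliberately simple stand-in for the reduced order techniques actually used in RAVEN, and the response function is invented:

```python
def build_quadratic_surrogate(expensive_model, x0, x1, x2):
    """Fit an exact quadratic through three runs of the expensive code and
    return a cheap callable surrogate (Lagrange interpolation form)."""
    y0, y1, y2 = expensive_model(x0), expensive_model(x1), expensive_model(x2)
    def surrogate(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return surrogate

# Stand-in for an hours-long simulation run: a response vs. a power level
expensive = lambda p: 600.0 + 40.0 * (p - 1.0) ** 2
fast = build_quadratic_surrogate(expensive, 0.8, 1.0, 1.2)
# fast(x) now evaluates in microseconds and can be sampled millions of times
```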

  16. Aspect-related Vegetation Differences Amplify Soil Moisture Variability in Semiarid Landscapes

    NASA Astrophysics Data System (ADS)

    Yetemen, O.; Srivastava, A.; Kumari, N.; Saco, P. M.

    2017-12-01

    Soil moisture variability (SMV) in semiarid landscapes is affected by vegetation, soil texture, climate, aspect, and topography. The heterogeneity in vegetation cover that results from the effects of microclimate, terrain attributes (slope gradient, aspect, drainage area, etc.), soil properties, and spatial variability in precipitation has been reported to act as the dominant factor modulating SMV in semiarid ecosystems. However, the role of hillslope aspect in SMV, though reported in many field studies, has not received the same degree of attention, probably due to the lack of extensive large datasets. Numerical simulations can then be used to elucidate the contribution of aspect-driven vegetation patterns to this variability. In this work, we perform a sensitivity analysis to study the variables driving SMV using the CHILD landscape evolution model equipped with a spatially-distributed solar-radiation component that couples vegetation dynamics and surface hydrology. To explore how aspect-driven vegetation heterogeneity contributes to the SMV, CHILD was run using a range of parameters selected to reflect different scenarios (from uniform to heterogeneous vegetation cover). Throughout the simulations, the spatial distributions of soil moisture and vegetation cover are computed to estimate the corresponding coefficients of variation. Under uniform spatial precipitation forcing and uniform soil properties, the factors affecting the spatial distribution of solar insolation are found to play a key role in the SMV through the emergence of aspect-driven vegetation patterns. Hence, factors such as catchment gradient, aspect, and latitude define water stress and vegetation growth, and in turn affect the available soil moisture content. Interestingly, changes in soil properties (porosity, root depth, and pore-size distribution) over the domain are not as effective as the other factors. These findings show that the factors associated with aspect-related vegetation differences amplify the soil moisture variability of semi-arid landscapes.

  17. Kinetic Theory and Simulation of Single-Channel Water Transport

    NASA Astrophysics Data System (ADS)

    Tajkhorshid, Emad; Zhu, Fangqiang; Schulten, Klaus

    Water translocation between various compartments of a system is a fundamental process in the biology of all living cells and in a wide variety of technological problems. The process is of interest in different fields of physiology, physical chemistry, and physics, and many scientists have tried to describe the process through physical models. Owing to advances in computer simulation of molecular processes at an atomic level, water transport has been studied in a variety of molecular systems ranging from biological water channels to artificial nanotubes. While simulations have successfully described various kinetic aspects of water transport, offering a simple, unified model to describe trans-channel translocation of water turned out to be a nontrivial task.

  18. Time-accurate simulations of a shear layer forced at a single frequency

    NASA Technical Reports Server (NTRS)

    Claus, R. W.; Huang, P. G.; Macinnes, J. M.

    1988-01-01

    Calculations are presented for the forced shear layer studied experimentally by Oster and Wygnanski, and Weisbrot. Two different computational approaches are examined: Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES). The DNS approach solves the full three dimensional Navier-Stokes equations for a temporally evolving mixing layer, while the LES approach solves the two dimensional Navier-Stokes equations with a subgrid scale turbulence model. While the comparison between these calculations and experimental data was hampered by a lack of information on the inflow boundary conditions, the calculations are shown to qualitatively agree with several aspects of the experiment. The sensitivity of these calculations to factors such as mesh refinement and Reynolds number is illustrated.

  19. 1D-3D hybrid modeling—from multi-compartment models to full resolution models in space and time

    PubMed Central

    Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M.; Queisser, Gillian

    2014-01-01

    Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the attached computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated, methods-spanning simulation approaches could, depending on the network size, establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models (using, e.g., the NEURON simulator) coupled to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D- and 3D-simulations, we present a geometry-, membrane potential- and intracellular concentration-mapping framework, with which graph-based morphologies, e.g., in the swc- or hoc-format, are mapped to full surface and volume representations of the neuron, and computational data from 1D-simulations can be used as boundary conditions for full 3D simulations and vice versa. Thus, established models and data, based on general purpose 1D-simulators, can be directly coupled to the emerging field of fully resolved, highly detailed 3D-modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics. PMID:25120463
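
    The swc-to-surface step mentioned above can be sketched in a few lines: each parent-child pair in an swc morphology defines a truncated cone whose lateral area contributes to the surface representation. The column layout follows the swc convention (id, type, x, y, z, radius, parent); the code itself is an illustration, not part of the published framework:

```python
import math

def parse_swc(text):
    """Parse swc lines into {id: (x, y, z, radius, parent_id)}."""
    nodes = {}
    for line in text.strip().splitlines():
        if line.startswith('#') or not line.strip():
            continue
        i, _type, x, y, z, r, p = line.split()
        nodes[int(i)] = (float(x), float(y), float(z), float(r), int(p))
    return nodes

def segment_lateral_area(nodes, child_id):
    """Lateral surface area of the truncated cone between a node and its parent
    (roots, with parent -1, contribute nothing)."""
    x, y, z, r, p = nodes[child_id]
    if p == -1:
        return 0.0
    px, py, pz, pr, _ = nodes[p]
    axial = math.dist((x, y, z), (px, py, pz))
    slant = math.hypot(axial, r - pr)
    return math.pi * (r + pr) * slant

swc_text = """
# toy two-point morphology: a cylinder of radius 1 and length 2
1 1 0 0 0 1.0 -1
2 3 2 0 0 1.0 1
"""
nodes = parse_swc(swc_text)
area = segment_lateral_area(nodes, 2)   # 2*pi*r*L = 4*pi for this segment
```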

  20. An omics perspective to the molecular mechanisms of anticancer metallo-drugs in the computational microscope era.

    PubMed

    Spinello, Angelo; Magistrato, Alessandra

    2017-08-01

    Metallo-drugs have attracted enormous interest for cancer treatment. The achievements of this drug-type are summarized by the success story of cisplatin. That being said, there have been many drawbacks with its clinical use, which prompted decades worth of research efforts to move towards safer and more effective agents, either containing platinum or different metals. Areas covered: In this review, the authors provide an atomistic picture of the molecular mechanisms involving selected metallo-drugs from structural and molecular simulation studies. They also provide an omics perspective, pointing out many unsettled aspects of the most relevant families of metallo-drugs at an epigenetic level. Expert opinion: Molecular simulations are able to provide detailed information at atomistic and temporal (ps) resolutions that are rarely accessible to experiments. The increasing accuracy of computational methods and the growing performance of computational platforms allow us to mirror wet lab experiments in silico. Consequently, the molecular mechanisms of drug action/failure can be directly viewed on a computer screen, like a 'computational microscope', allowing us to harness this knowledge for the design of the next-generation of metallo-drugs.

  1. Design, Materials, and Mechanobiology of Biodegradable Scaffolds for Bone Tissue Engineering

    PubMed Central

    Velasco, Marco A.; Narváez-Tovar, Carlos A.; Garzón-Alvarado, Diego A.

    2015-01-01

    A review about design, manufacture, and mechanobiology of biodegradable scaffolds for bone tissue engineering is given. First, fundamental aspects about bone tissue engineering and considerations related to scaffold design are established. Second, issues related to scaffold biomaterials and manufacturing processes are discussed. Finally, mechanobiology of bone tissue and computational models developed for simulating how bone healing occurs inside a scaffold are described. PMID:25883972

  2. User's guide to STIPPAN: A panel method program for slotted tunnel interference prediction

    NASA Technical Reports Server (NTRS)

    Kemp, W. B., Jr.

    1985-01-01

    Guidelines are presented for use of the computer program STIPPAN to simulate the subsonic flow in a slotted wind tunnel test section with a known model disturbance. Input data requirements are defined in detail and other aspects of the program usage are discussed in more general terms. The program is written for use in a CDC CYBER 200 class vector processing system.

  3. Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostadin, Damevski

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  4. The Application of Modeling and Simulation to the Behavioral Deficit of Autism

    NASA Technical Reports Server (NTRS)

    Anton, John J.

    2010-01-01

    This abstract describes a research effort to apply technological advances in virtual reality simulation and computer-based games to create behavioral modification programs for individuals with Autism Spectrum Disorder (ASD). The research investigates virtual social skills training within a 3D game environment to diminish the impact of ASD social impairments and to increase learning capacity for optimal intellectual capability. Individuals with autism will encounter prototypical social contexts via computer interface and will interact with 3D avatars with predefined roles within a game-like environment. Incremental learning objectives will combine to form a collaborative social environment. A secondary goal of the effort is to begin the research and development of virtual reality exercises aimed at triggering the release of neurotransmitters to promote critical aspects of synaptic maturation at an early age to change the course of the disease.

  5. Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI

    NASA Astrophysics Data System (ADS)

    Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan

    2016-10-01

    Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI, which evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th-order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.
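
    The conservative 4th-order differencing mentioned above can be illustrated on a 1D advection toy problem, a deliberately reduced stand-in for the full Vlasov system (all parameters illustrative). The centred stencil telescopes on a periodic grid, so the discrete "mass" sum(u) is conserved to round-off:

```python
import math

def advect_step(u, a, dx, dt):
    """One RK4 step of du/dt = -a du/dx using the 4th-order central stencil
    (-u[i+2] + 8 u[i+1] - 8 u[i-1] + u[i-2]) / (12 dx) on a periodic grid."""
    n = len(u)
    def rhs(v):
        return [-a * (-v[(i + 2) % n] + 8 * v[(i + 1) % n]
                      - 8 * v[(i - 1) % n] + v[(i - 2) % n]) / (12 * dx)
                for i in range(n)]
    k1 = rhs(u)
    k2 = rhs([u[i] + 0.5 * dt * k1[i] for i in range(n)])
    k3 = rhs([u[i] + 0.5 * dt * k2[i] for i in range(n)])
    k4 = rhs([u[i] + dt * k3[i] for i in range(n)])
    return [u[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(n)]

n = 64
dx = 1.0 / n
dt = 0.5 * dx                      # well inside the RK4 stability limit
u = [1.0 + math.sin(2 * math.pi * i * dx) for i in range(n)]
for _ in range(100):
    u = advect_step(u, a=1.0, dx=dx, dt=dt)
```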

  6. Cause and Cure - Deterioration in Accuracy of CFD Simulations with Use of High-Aspect-Ratio Triangular/Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar

    2017-01-01

    Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions. Although for certain complex geometries the use of high-aspect-ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where simplex elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in the use of such high-aspect-ratio simplex elements is formulated using two different approaches and presented here. Drawing insights from the analysis, a potential solution to avoid that pitfall is also provided as part of this work. Furthermore, through the use of numerical simulations of practical viscous problems involving high-Reynolds-number flows, it is showcased how the gradient evaluation procedures of the CESE framework can be effectively used to produce accurate and stable results on such high-aspect-ratio simplex meshes.
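
    The accuracy problem can be made concrete by looking at the conditioning of the small linear system that a cell-based gradient reconstruction solves from edge vectors; this sketch is generic, not the CESE formulation itself:

```python
import math

def gradient_condition(p0, p1, p2):
    """Condition number of the 2x2 edge-vector system used to reconstruct a
    linear gradient on a triangle from vertex differences; large values mean
    the gradient computation amplifies round-off and interpolation error."""
    a11, a12 = p1[0] - p0[0], p1[1] - p0[1]
    a21, a22 = p2[0] - p0[0], p2[1] - p0[1]
    # Singular values via the eigenvalues of A^T A (2x2 closed form)
    t = a11 ** 2 + a12 ** 2 + a21 ** 2 + a22 ** 2
    d = (a11 * a22 - a12 * a21) ** 2
    disc = math.sqrt(max(t * t - 4 * d, 0.0))
    lam_max, lam_min = (t + disc) / 2, (t - disc) / 2
    return math.sqrt(lam_max / lam_min)

regular = gradient_condition((0, 0), (1, 0), (0, 1))         # aspect ratio ~1
stretched = gradient_condition((0, 0), (1, 0), (0.5, 0.01))  # wall-normal squeezed
```

The condition number grows roughly in proportion to the cell aspect ratio, which is why naive gradient reconstruction deteriorates on thin near-wall simplex cells.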

  7. Towards a supported common NEAMS software stack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cormac Garvey

    2012-04-01

    The NEAMS IPSCs are developing multidimensional, multiphysics, multiscale simulation codes based on first principles that will be capable of predicting all aspects of current and future nuclear reactor systems. This new breed of simulation codes will include rigorous verification, validation and uncertainty quantification checks to quantify the accuracy and quality of the simulation results. The resulting NEAMS IPSC simulation codes will be an invaluable tool in designing the next generation of nuclear reactors and will also contribute to a speedier acquisition of licenses from the NRC for new reactor designs. Due to the high resolution of the models, the complexity of the physics and the added computational resources to quantify the accuracy/quality of the results, the NEAMS IPSC codes will require large HPC resources to carry out the production simulation runs.

  8. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten

    2008-09-01

    Inverse simulations of musculoskeletal models compute the internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing and not pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a first step towards ensuring better results of inverse musculoskeletal simulations from a numerical point of view.
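
    The role of an activation-norm regularization term can be illustrated on a toy muscle-redundancy problem: many activation patterns produce the same joint torque, and penalizing the activation norm picks a unique, well-conditioned one. Moment arms and torque below are invented:

```python
def regularized_activations(moment_arms, torque, lam):
    """Minimize (r . a - torque)^2 + lam * ||a||^2 over activations a.
    Setting the gradient to zero gives the closed form
    a = r * torque / (||r||^2 + lam); lam -> 0 recovers the minimum-norm
    solution that shares the load across redundant muscles."""
    norm2 = sum(r * r for r in moment_arms)
    scale = torque / (norm2 + lam)
    return [r * scale for r in moment_arms]

# Two hypothetical muscles with moment arms 3 cm and 5 cm producing 1.6 Nm
a = regularized_activations([0.03, 0.05], torque=1.6, lam=0.0)
```

With lam = 0 the torque constraint is met exactly; a small positive lam trades a little torque error for a smaller (better-conditioned) activation vector, which is the qualitative effect of the regularization term discussed above.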

  9. Design issues for optimum solar cell configuration

    NASA Astrophysics Data System (ADS)

    Kumar, Atul; Thakur, Ajay D.

    2018-05-01

    A computer-based simulation of solar cell structure is performed to study the optimization of the pn junction configuration for photovoltaic action. The fundamental aspects of photovoltaic action, viz. absorption, separation, and collection, and their dependence on material properties and details of the device structure, are discussed. Using SCAPS 1D we have simulated the ideal pn junction and shown the effect of band offset and carrier densities on solar cell performance. The optimum configuration can be achieved by optimizing the transport of carriers in the pn junction under the effect of field-dependent recombination (tunneling) and density-dependent recombination (SRH, Auger) mechanisms.
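
    The performance optimization discussed above can be illustrated with the textbook single-diode model. SCAPS solves the full drift-diffusion problem, so the sketch below is only a zeroth-order stand-in with invented parameter values:

```python
import math

def mpp_single_diode(i_ph, i_0, v_t, n=2000):
    """Scan the I-V curve of an ideal single-diode cell,
    I(V) = i_ph - i_0 * (exp(V / v_t) - 1),
    and return (V, I, P) at the maximum power point."""
    v_oc = v_t * math.log(i_ph / i_0 + 1.0)   # open-circuit voltage
    best = (0.0, i_ph, 0.0)
    for k in range(1, n):
        v = v_oc * k / n
        i = i_ph - i_0 * (math.exp(v / v_t) - 1.0)
        if v * i > best[2]:
            best = (v, i, v * i)
    return best

# Illustrative values: 1 A photocurrent, 1 nA saturation current, 25 mV thermal voltage
v_mp, i_mp, p_mp = mpp_single_diode(i_ph=1.0, i_0=1e-9, v_t=0.025)
```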

  10. Simulation of Atmospheric-Entry Capsules in the Subsonic Regime

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Childs, Robert E.; Garcia, Joseph A.

    2015-01-01

    The accuracy of Computational Fluid Dynamics predictions of subsonic capsule aerodynamics is examined by comparison against recent NASA wind-tunnel data at high-Reynolds-number flight conditions. Several aspects of numerical and physical modeling are considered, including inviscid numerical scheme, mesh adaptation, rough-wall modeling, rotation and curvature corrections for eddy-viscosity models, and Detached-Eddy Simulations of the unsteady wake. All of these are considered in isolation against relevant data where possible. The results indicate that an improved predictive capability is developed by considering physics-based approaches and validating the results against flight-relevant experimental data.

  11. Component Framework for Loosely Coupled High Performance Integrated Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Elwasif, W. R.; Bernholdt, D. E.; Shet, A. G.; Batchelor, D. B.; Foley, S.

    2010-11-01

    We present the design and implementation of a component-based simulation framework for the execution of coupled time-dependent plasma modeling codes. The Integrated Plasma Simulator (IPS) provides a flexible lightweight component model that streamlines the integration of standalone codes into coupled simulations. Standalone codes are adapted to the IPS component interface specification using a thin wrapping layer implemented in the Python programming language. The framework provides services for inter-component method invocation, configuration, task, and data management, asynchronous event management, simulation monitoring, and checkpoint/restart capabilities. Services are invoked, as needed, by the computational components to coordinate the execution of different aspects of coupled simulations on Massively Parallel Processing (MPP) machines. A common plasma state layer serves as the foundation for inter-component, file-based data exchange. The IPS design principles, implementation details, and execution model will be presented, along with an overview of several use cases.
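
    A thin wrapping layer of the kind described might look roughly as follows. The class and method names are hypothetical, for illustration only, and are not the actual IPS component interface:

```python
class Component:
    """Hypothetical component interface: a standalone code is adapted by
    overriding the three lifecycle phases the framework drives."""
    def init(self, services): pass
    def step(self, services, t): pass
    def finalize(self, services): pass

class Services:
    """Toy stand-in for framework services: a shared 'plasma state' store
    plus an event log (the real IPS exchanges state through files)."""
    def __init__(self):
        self.state = {}
        self.log = []

class EquilibriumSolver(Component):
    def init(self, services):
        services.state["pressure"] = 1.0
    def step(self, services, t):
        services.state["pressure"] *= 0.9   # fake relaxation step
        services.log.append(("equilibrium", t))

def run_simulation(components, times):
    """Minimal driver: init all components, step them through the time loop,
    then finalize."""
    services = Services()
    for c in components:
        c.init(services)
    for t in times:
        for c in components:
            c.step(services, t)
    for c in components:
        c.finalize(services)
    return services

s = run_simulation([EquilibriumSolver()], [0.0, 0.1])
```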

  12. Strategies for efficient numerical implementation of hybrid multi-scale agent-based models to describe biological systems

    PubMed Central

    Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.

    2015-01-01

    Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
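
    The basic pattern for linking a continuum model to discrete agent rules is operator splitting: advance the continuum equation for a sub-step, then apply the agent rules against the updated field. A minimal sketch with invented toy dynamics (not the M. tuberculosis model of the paper):

```python
def hybrid_step(agents, nutrient, dt, uptake=0.1, resupply=0.05, threshold=0.2):
    """One operator-split step of a hybrid ABM:
    (1) continuum sub-step, forward Euler on dN/dt = resupply - uptake*live*N;
    (2) discrete agent sub-step driven by the updated field."""
    live = sum(1 for a in agents if a["alive"])
    nutrient += dt * (resupply - uptake * live * nutrient)
    for a in agents:
        if a["alive"] and nutrient < threshold:
            a["alive"] = False   # toy rule: agents starve below the threshold
    return agents, nutrient

agents = [{"alive": True} for _ in range(5)]
nutrient = 1.0
for _ in range(200):
    agents, nutrient = hybrid_step(agents, nutrient, dt=0.1)
```

The emergent behaviour (the population collapses once uptake drives the nutrient below threshold, after which the field recovers) arises only from the coupling, which is the point of hybrid multi-scale models.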

  13. POLARIS: Agent-based modeling framework development and implementation for integrated travel demand and network and operations simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auld, Joshua; Hope, Michael; Ley, Hubert

    This paper discusses the development of an agent-based modelling software development kit, and the implementation and validation of a model using it that integrates dynamic simulation of travel demand, network supply and network operations. A description is given of the core utilities in the kit: a parallel discrete event engine, interprocess exchange engine, and memory allocator, as well as a number of ancillary utilities: visualization library, database IO library, and scenario manager. The overall framework emphasizes the design goals of generality, code agility, and high performance. This framework allows the modeling of several aspects of the transportation system that are typically done with separate stand-alone software applications, in a high-performance and extensible manner. Integrating such models as dynamic traffic assignment and disaggregate demand models has been a long-standing issue for transportation modelers. The integrated approach shows a possible way to resolve this difficulty. The simulation model built from the POLARIS framework is a single, shared-memory process for handling all aspects of the integrated urban simulation. The resulting gains in computational efficiency and performance allow planning models to be extended to include previously separate aspects of the urban system, enhancing the utility of such models from the planning perspective. Initial tests with case studies involving traffic management center impacts on various network events such as accidents, congestion and weather events, show the potential of the system.
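
    The discrete event engine at the core of such a kit follows a standard pattern: a priority queue of timestamped events whose handlers may schedule further events. A serial sketch (all names invented, not the POLARIS API):

```python
import heapq
import itertools

class EventEngine:
    """Minimal discrete-event engine: events fire in timestamp order and
    handlers may schedule follow-up events."""
    def __init__(self):
        self._queue = []
        self._ids = itertools.count()   # tie-breaker for equal timestamps
        self.now = 0.0

    def schedule(self, time, handler, *args):
        heapq.heappush(self._queue, (time, next(self._ids), handler, args))

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler, args = heapq.heappop(self._queue)
            handler(self, *args)

log = []

def vehicle_departs(engine, vid):
    log.append((engine.now, "depart", vid))
    engine.schedule(engine.now + 5.0, vehicle_arrives, vid)  # 5-minute trip

def vehicle_arrives(engine, vid):
    log.append((engine.now, "arrive", vid))

engine = EventEngine()
engine.schedule(1.0, vehicle_departs, "A")
engine.schedule(2.0, vehicle_departs, "B")
engine.run(until=10.0)
```

The integer tie-breaker keeps simultaneous events deterministic without ever comparing handler objects; parallel engines add partitioning and synchronization on top of this same core loop.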

  14. Functional requirements for the man-vehicle systems research facility. [identifying and correcting human errors during flight simulation

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Allen, R. W.; Heffley, R. K.; Jewell, W. F.; Jex, H. R.; Mcruer, D. T.; Schulman, T. M.; Stapleford, R. L.

    1980-01-01

    The NASA Ames Research Center proposed a man-vehicle systems research facility to support flight simulation studies which are needed for identifying and correcting the sources of human error associated with current and future air carrier operations. The organization of the research facility is reviewed and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.

  15. Extended model of restricted beam for FSO links

    NASA Astrophysics Data System (ADS)

    Poliak, Juraj; Wilfert, Otakar

    2012-10-01

    Modern wireless optical communication systems in many aspects overcome wire or radio communications. Their advantages are license-free operation and the broad bandwidth that they offer. The medium in free-space optical (FSO) links is the atmosphere. Operation of outdoor FSO links struggles with many atmospheric phenomena that deteriorate the phase and amplitude of the transmitted optical beam. This beam originates in the transmitter and is affected by its individual parts, especially by the lens socket and the transmitter aperture, where attenuation and diffraction effects take place. Both of these phenomena unfavourably influence the beam and cause degradation of link availability, or its total malfunction. Therefore, both of these phenomena should be modelled and simulated, so that one can judge the link function prior to the realization of the system. Not only link availability and reliability are concerned, but also economic aspects. In addition, the transmitted beam is not, generally speaking, circularly symmetrical, which makes the link simulation more difficult. In a comprehensive model, it is necessary to take into account the ellipticity of the beam that is restricted by a circularly symmetrical aperture, where the attenuation and diffraction then occur. The general model is too computationally extensive; therefore, simplification of the calculations by means of analytical and numerical approaches will be discussed. The presented model is not only simulated on a computer, but also experimentally proven. One can then deduce the ability of the model to describe reality and estimate how far one can go with approximations, i.e. the limitations of the model are discussed.
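
    The aperture-truncation part of such a model is easy to sketch numerically: integrate the (possibly elliptical) Gaussian intensity over the circular aperture and normalize by the total beam power. For a circular beam this has the closed form 1 - exp(-2*a^2/w^2), which serves as a check; the code below is an illustration, not the model of the paper:

```python
import math

def transmitted_fraction(a, wx, wy, n=400):
    """Fraction of an elliptical Gaussian beam's power passing a circular
    aperture of radius a, by midpoint-rule integration of the intensity
    I(x, y) ~ exp(-2 x^2 / wx^2 - 2 y^2 / wy^2) over the aperture disk."""
    h = 2.0 * a / n
    inside = 0.0
    for i in range(n):
        x = -a + (i + 0.5) * h
        for j in range(n):
            y = -a + (j + 0.5) * h
            if x * x + y * y <= a * a:
                inside += math.exp(-2 * x * x / wx ** 2 - 2 * y * y / wy ** 2)
    total = math.pi * wx * wy / 2.0      # closed-form total beam power
    return inside * h * h / total

# Circular special case: compare against 1 - exp(-2 a^2 / w^2)
f_num = transmitted_fraction(a=1.0, wx=1.0, wy=1.0)
```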

  16. Processing of Visual Imagery by an Adaptive Model of the Visual System: Its Performance and its Significance. Final Report, June 1969-March 1970.

    ERIC Educational Resources Information Center

    Tallman, Oliver H.

    A digital simulation of a model for the processing of visual images is derived from known aspects of the human visual system. The fundamental principle of computation suggested by a biological model is a transformation that distributes information contained in an input stimulus everywhere in a transform domain. Each sensory input contributes under…

  17. Integrals for IBS and beam cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burov, A.; /Fermilab

    Simulation of beam cooling usually requires performing certain integral transformations every time step or so, which is a significant burden on the CPU. Examples are the dispersion integrals (Hilbert transforms) in the stochastic cooling, wake fields and IBS integrals. An original method is suggested for fast and sufficiently accurate computation of the integrals. This method is applied for the dispersion integral. Some methodical aspects of the IBS analysis are discussed.
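
    The dispersion integral (Hilbert transform) mentioned above is a standard example of an integral transformation that can be evaluated fast via the FFT rather than by direct quadrature every time step. A sketch of the common discrete approach (illustrative, not the original method of the report):

```python
import numpy as np

def hilbert_transform(x):
    """Discrete Hilbert transform via the FFT: multiply positive frequencies
    by -1j and negative frequencies by +1j; DC and Nyquist bins are zeroed
    (even-length real input assumed)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n, dtype=complex)
    h[1 : n // 2] = -1j
    h[n // 2 + 1 :] = 1j
    return np.fft.ifft(X * h).real

t = np.arange(256)
c = np.cos(2 * np.pi * 4 * t / 256)
s_est = hilbert_transform(c)   # the Hilbert transform of cos is sin
```

Two FFTs cost O(n log n) versus O(n^2) for direct evaluation of the principal-value sum, which is what makes this attractive inside a per-time-step cooling loop.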

  19. Monostatic Radar Cross Section Estimation of Missile Shaped Object Using Physical Optics Method

    NASA Astrophysics Data System (ADS)

    Sasi Bhushana Rao, G.; Nambari, Swathi; Kota, Srikanth; Ranga Rao, K. S.

    2017-08-01

    Stealth technology manages many signatures of a target; most radar systems use the radar cross section (RCS) to discriminate targets and classify them with regard to stealth. In wartime, a target's RCS must be very small to render the target invisible to enemy radar. In this study, the radar cross section of perfectly conducting objects such as a cylinder, a truncated cone (frustum), and a circular flat plate is estimated with respect to parameters such as size, frequency, and aspect angle. Because of the difficulty of predicting RCS exactly, approximate methods become the alternative. The majority of approximate methods are valid in the optical region, where each has its own strengths and weaknesses. The analysis given in this study is therefore based purely on far-field monostatic RCS measurements in the optical region. Computation is done using the Physical Optics (PO) method for determining the RCS of simple models. Not only the RCS of simple models but also that of missile-shaped and rocket-shaped models, obtained by cascading objects with backscatter, is computed using Matlab simulation. Rectangular plots of RCS in dBsm versus aspect angle are obtained for the simple and missile-shaped objects. The treatment of RCS in this study is based on narrow-band analysis.
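
    As an illustration of the physical-optics approach described above, the textbook closed form for a perfectly conducting rectangular flat plate (a stand-in for the paper's circular plate, chosen here to avoid Bessel functions; this is not the authors' Matlab code) can be sketched as:

```python
import math

def sinc(x):
    """Unnormalized sinc, sin(x)/x, with the removable singularity handled."""
    return 1.0 if x == 0 else math.sin(x) / x

def po_rcs_plate(a, b, wavelength, theta):
    """Monostatic physical-optics RCS (m^2) of a perfectly conducting
    a x b rectangular plate at aspect angle theta (rad) from the normal,
    measured in the principal plane containing the side of length a."""
    k = 2.0 * math.pi / wavelength
    return (4.0 * math.pi * (a * b) ** 2 / wavelength ** 2) \
        * (sinc(k * a * math.sin(theta)) * math.cos(theta)) ** 2

def to_dbsm(sigma):
    """Convert an RCS in m^2 to decibels relative to one square meter."""
    return 10.0 * math.log10(sigma)
```

    At normal incidence the expression reduces to the familiar peak value 4*pi*A^2/lambda^2, and the sinc factor produces the lobed fall-off versus aspect angle seen in the rectangular plots.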

  20. Towards pattern generation and chaotic series prediction with photonic reservoir computers

    NASA Astrophysics Data System (ADS)

    Antonik, Piotr; Hermans, Michiel; Duport, François; Haelterman, Marc; Massar, Serge

    2016-03-01

    Reservoir Computing is a bio-inspired computing paradigm for processing time dependent signals that is particularly well suited for analog implementations. Our team has demonstrated several photonic reservoir computers with performance comparable to digital algorithms on a series of benchmark tasks such as channel equalisation and speech recognition. Recently, we showed that our opto-electronic reservoir computer could be trained online with a simple gradient descent algorithm programmed on an FPGA chip. This setup makes it in principle possible to feed the output signal back into the reservoir, and thus highly enrich the dynamics of the system. This will make it possible to tackle complex prediction tasks in hardware, such as pattern generation and chaotic and financial series prediction, which have so far only been studied in digital implementations. Here we report simulation results of our opto-electronic setup with an FPGA chip and output feedback applied to pattern generation and Mackey-Glass chaotic series prediction. The simulations take into account the major aspects of our experimental setup. We find that pattern generation can be easily implemented on the current setup with very good results. The Mackey-Glass series prediction task is more complex and requires a large reservoir and a more elaborate training algorithm. With these adjustments promising results are obtained, and we now know what improvements are needed to match previously reported numerical results. These simulation results will serve as a basis of comparison for experiments we will carry out in the coming months.
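
    The Mackey-Glass benchmark mentioned above is generated from a scalar delay differential equation. A minimal sketch (forward-Euler integration with the commonly used chaotic parameter values; an illustration, not the authors' simulation code) is:

```python
def mackey_glass(n_steps, beta=0.2, gamma=0.1, n=10, tau=17, dt=0.1, x0=1.2):
    """Integrate dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t)
    with forward Euler and a constant initial history, returning the
    sampled trajectory. tau = 17 gives the standard chaotic regime."""
    delay = int(round(tau / dt))
    hist = [x0] * (delay + 1)          # sliding window of past values
    out = []
    for _ in range(n_steps):
        x_tau = hist[0]                # delayed state x(t - tau)
        x = hist[-1]                   # current state x(t)
        x_new = x + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x)
        hist.pop(0)
        hist.append(x_new)
        out.append(x_new)
    return out
```

    The resulting bounded, aperiodic series is what the reservoir computer is trained to predict one step ahead.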

  1. Colonoscopy procedure simulation: virtual reality training based on a real time computational approach.

    PubMed

    Wen, Tingxi; Medveczky, David; Wu, Jackie; Wu, Jianhuang

    2018-01-25

    Colonoscopy plays an important role in the clinical screening and management of colorectal cancer. The traditional 'see one, do one, teach one' training style for such an invasive procedure is resource intensive and ineffective. Given that colonoscopy is difficult and time-consuming to master, the use of virtual reality simulators to train gastroenterologists in colonoscopy operations offers a promising alternative. In this paper, a realistic, real-time interactive simulator for training the colonoscopy procedure is presented, which can also include polypectomy simulation. Our approach models the colonoscope as thick, flexible elastic rods whose resolution adapts dynamically to the curvature of the colon. Additional material characteristics of this deformable material are integrated into our discrete model to realistically simulate the behavior of the colonoscope. We also propose a set of key aspects of our simulator that give fast, high-fidelity feedback to trainees, and we conducted an initial validation of this colonoscopic simulator to determine its clinical utility and efficacy.

  2. Evaluating variability with atomistic simulations: the effect of potential and calculation methodology on the modeling of lattice and elastic constants

    NASA Astrophysics Data System (ADS)

    Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.

    2018-07-01

    Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
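
    As a toy analogue of the property calculations described above (a hypothetical one-dimensional nearest-neighbour chain with a Lennard-Jones pair potential, not the paper's high-throughput framework), the simplest "lattice constant" calculation is a direct energy minimisation over the spacing:

```python
def lj_energy(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def equilibrium_spacing(lo=0.9, hi=2.0, tol=1e-8):
    """Golden-section search for the spacing that minimises the
    nearest-neighbour pair energy; the analytic answer is 2**(1/6)*sigma."""
    g = (5 ** 0.5 - 1) / 2
    while hi - lo > tol:
        a = hi - g * (hi - lo)
        b = lo + g * (hi - lo)
        if lj_energy(a) < lj_energy(b):
            hi = b
        else:
            lo = a
    return 0.5 * (lo + hi)
```

    Even in this trivial setting the answer depends on choices such as the search bracket and tolerance, which is the kind of methodological sensitivity the paper characterises systematically across 120 potentials.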

  3. Transient Three-Dimensional Analysis of Nozzle Side Load in Regeneratively Cooled Engines

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    2005-01-01

    Three-dimensional numerical investigations of the start-up side-load physics for a regeneratively cooled, high-aspect-ratio nozzle were performed. The objectives of this study are to identify the three-dimensional side-load physics and to compute the associated aerodynamic side load using an anchored computational methodology. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation and a transient inlet condition based on an engine system simulation. Computations were performed for both adiabatic and cooled walls in order to understand the effect of boundary conditions. Finite-rate chemistry was used throughout the study so that the combustion effect is always included. The results show that three types of shock evolution are responsible for side loads: generation of a combustion wave; transitions among free-shock separation, restricted-shock separation, and simultaneous free-shock and restricted-shock separations; and oscillation of shocks across the lip. Wall boundary conditions drastically affect the computed side-load physics: the adiabatic nozzle prefers free-shock separation while the cooled nozzle favors restricted-shock separation, resulting in a higher peak side load for the cooled nozzle than for the adiabatic nozzle. By comparing the computed physics with test observations, it is concluded that the cooled wall is the more realistic boundary condition, and that the oscillation of the restricted-shock separation flow pattern across the lip, along with its associated tangential shock motion, is the dominant side-load physics for a regeneratively cooled, high-aspect-ratio rocket engine.

  4. Ordering Unstructured Meshes for Sparse Matrix Computations on Leading Parallel Systems

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Li, Xiaoye; Heber, Gerd; Biswas, Rupak

    2000-01-01

    The ability of computers to solve hitherto intractable problems and simulate complex processes using mathematical models makes them an indispensable part of modern science and engineering. Computer simulations of large-scale realistic applications usually require solving a set of non-linear partial differential equations (PDEs) over a finite region. For example, one thrust area in the DOE Grand Challenge projects is to design future accelerators such as the Spallation Neutron Source (SNS). Our colleagues at SLAC need to model complex RFQ cavities with large aspect ratios. Unstructured grids are currently used to resolve the small features in a large computational domain; dynamic mesh adaptation will be added in the future for additional efficiency. The PDEs for electromagnetics are discretized by the finite element method, which leads to a generalized eigenvalue problem Kx = λMx, where K and M are the stiffness and mass matrices, and are very sparse. In a typical cavity model, the number of degrees of freedom is about one million. For such large eigenproblems, direct solution techniques quickly reach the memory limits. Instead, the most widely-used methods are Krylov subspace methods, such as Lanczos or Jacobi-Davidson. In all the Krylov-based algorithms, sparse matrix-vector multiplication (SPMV) must be performed repeatedly. Therefore, the efficiency of SPMV usually determines the eigensolver speed. SPMV is also one of the most heavily used kernels in large-scale numerical simulations.
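
    The SPMV kernel at the heart of these Krylov methods can be sketched as follows (an illustrative pure-Python version of the standard compressed sparse row product, not the production code discussed above):

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix A in compressed sparse row (CSR)
    form: values holds the nonzeros row by row, col_idx their column
    indices, and row_ptr[i]:row_ptr[i+1] delimits row i."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y
```

    The irregular, indirect access pattern through col_idx is precisely why mesh and matrix ordering matter so much for SPMV performance on parallel systems.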

  5. Noise in Neuronal and Electronic Circuits: A General Modeling Framework and Non-Monte Carlo Simulation Techniques.

    PubMed

    Kilinc, Deniz; Demir, Alper

    2017-08-01

    The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.

  6. A Framework for Image-Based Modeling of Acute Myocardial Ischemia Using Intramurally Recorded Extracellular Potentials.

    PubMed

    Burton, Brett M; Aras, Kedar K; Good, Wilson W; Tate, Jess D; Zenger, Brian; MacLeod, Rob S

    2018-05-21

    The biophysical basis for electrocardiographic evaluation of myocardial ischemia stems from the notion that ischemic tissues develop, with relative uniformity, along the endocardial aspects of the heart. These injured regions of subendocardial tissue give rise to intramural currents that lead to ST segment deflections within electrocardiogram (ECG) recordings. The concept of subendocardial ischemic regions is often used in clinical practice, providing a simple and intuitive description of ischemic injury; however, such a model grossly oversimplifies the presentation of ischemic disease, inadvertently leading to errors in ECG-based diagnoses. Furthermore, recent experimental studies have brought into question the subendocardial ischemia paradigm, suggesting instead a more distributed pattern of tissue injury. These findings come from experiments and so have both the impact and the limitations of measurements from living organisms. Computer models have often been employed to overcome the constraints of experimental approaches and have a robust history in cardiac simulation. To this end, we have developed a computational simulation framework aimed at elucidating the effects of ischemia on measurable cardiac potentials. To validate our framework, we simulated, visualized, and analyzed 226 experimentally derived acute myocardial ischemic events. Simulation outcomes agreed both qualitatively (feature comparison) and quantitatively (correlation, average error, and significance) with experimentally obtained epicardial measurements, particularly under conditions of elevated ischemic stress. Our simulation framework introduces a novel approach to incorporating subject-specific, geometric models and experimental results that are highly resolved in space and time into computational models. We propose this framework as a means to advance the understanding of the underlying mechanisms of ischemic disease while simultaneously putting in place the computational infrastructure necessary to study and improve ischemia models aimed at reducing diagnostic errors in the clinic.

  7. Research study demonstrates computer simulation can predict warpage and assist in its elimination

    NASA Astrophysics Data System (ADS)

    Glozer, G.; Post, S.; Ishii, K.

    1994-10-01

    Programs for predicting warpage in injection molded parts are relatively new. Commercial software for simulating the flow and cooling stages of injection molding have steadily gained acceptance; however, warpage software is not yet as readily accepted. This study focused on gaining an understanding of the predictive capabilities of the warpage software. The following aspects of this study were unique. (1) Quantitative results were found using a statistically designed set of experiments. (2) Comparisons between experimental and simulation results were made with parts produced in a well-instrumented and controlled injection molding machine. (3) The experimental parts were accurately measured on a coordinate measuring machine with a non-contact laser probe. (4) The effect of part geometry on warpage was investigated.

  8. Numerical solutions of atmospheric flow over semielliptical simulated hills

    NASA Technical Reports Server (NTRS)

    Shieh, C. F.; Frost, W.

    1981-01-01

    Atmospheric motion over obstacles on plane surfaces was studied in order to compute simulated wind fields over terrain features. A semielliptical, two-dimensional geometry is considered, and the numerical simulation of flow over rectangular geometries is also discussed. The partial differential equations for the vorticity, stream function, turbulence kinetic energy, and turbulence length scale were solved by a finite difference technique. The mechanism of flow separation induced by a semiellipse is the same as for flow over a gradually sloping surface, for which flow separation is caused by the interaction between the viscous force, the pressure force, and the turbulence level. For flow over bluff bodies, a downstream recirculation bubble is created; increasing the aspect ratio and/or the turbulence level results in flow reattachment close behind the obstacle.

  9. Simulations of the Neutron Star Crust

    NASA Astrophysics Data System (ADS)

    Schramm, Stefan; Nandi, Rana

    The properties of the neutron star crust are crucially important for many physical processes occurring in the star. For instance, the crustal transport coefficients define the temperature evolution of accreting stars after bursts, which can be compared to observation. Furthermore, the structure of the inner crust can modify the neutrino transport through the matter considerably, significantly impacting the dynamics of supernova explosions. Therefore, we perform numerical studies of the inner crust, and among other aspects, investigate the dependence of the pasta phase on the isospin properties of the nuclear interactions. To this end we developed an efficient computer code to simulate the inner and outer crust using molecular dynamics techniques. First results of the simulations and insights into the crust-core transition are presented.

  10. Mapping the Limitations of Breakthrough Analysis in Fixed-Bed Adsorption

    NASA Technical Reports Server (NTRS)

    Knox, James Clinton

    2017-01-01

    The separation of gases through adsorption plays an important role in the chemical processing industry, where the separation step is often the costliest part of a chemical process and thus worthy of careful study and optimization. This work developed a number of new, archival aspects of the computer simulations used for the refinement and design of these gas adsorption processes: (1) it presented a new approach to fitting the undetermined heat and mass transfer coefficients in the axially dispersed plug flow equation and associated balance equations; (2) it examined and described the conditions under which non-physical simulation results can arise; and (3) it presented an approach to determine the limits of the axial dispersion and LDF mass transfer terms above which non-physical simulation results occur.

  11. On the simulation and mitigation of anisoplanatic optical turbulence for long range imaging

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; LeMaster, Daniel A.

    2017-05-01

    We describe a numerical wave propagation method for simulating long range imaging of an extended scene under anisoplanatic conditions. Our approach computes an array of point spread functions (PSFs) for a 2D grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. To validate the simulation we compare simulated outputs with the theoretical anisoplanatic tilt correlation and differential tilt variance, in addition to comparing the long- and short-exposure PSFs and the isoplanatic angle. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. The simulation tool is also used here to quantitatively evaluate a recently proposed block-matching and Wiener filtering (BMWF) method for turbulence mitigation. In this method, a block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged and processed with a Wiener filter for restoration. A novel aspect of the proposed BMWF method is that the PSF model used for restoration takes into account the level of geometric correction achieved during image registration. This way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. The BMWF method is relatively simple computationally, and yet has excellent performance in comparison to state-of-the-art benchmark methods.
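
    The spatially varying weighted-sum operation described above can be illustrated in one dimension (a simplified sketch with two hypothetical grid PSFs and linear interpolation weights across the field; this is not the authors' implementation):

```python
def convolve_same(x, psf):
    """1-D 'same'-size convolution with zero padding (psf length odd)."""
    half = len(psf) // 2
    n = len(x)
    out = []
    for i in range(n):
        s = 0.0
        for j, p in enumerate(psf):
            k = i + j - half
            if 0 <= k < n:
                s += p * x[k]
        out.append(s)
    return out

def anisoplanatic_blur(x, psf_left, psf_right):
    """Spatially varying blur: convolve the ideal signal with each grid
    PSF, then form a per-pixel weighted sum whose weights interpolate
    linearly from the left field point to the right one."""
    n = len(x)
    yl = convolve_same(x, psf_left)
    yr = convolve_same(x, psf_right)
    return [(1 - i / (n - 1)) * yl[i] + (i / (n - 1)) * yr[i]
            for i in range(n)]
```

    In the real simulator the same idea is applied in 2D with a full grid of turbulence-derived PSFs rather than two.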

  12. Collecting data from a sensor network in a single-board computer

    NASA Astrophysics Data System (ADS)

    Casciati, F.; Casciati, S.; Chen, Z.-C.; Faravelli, L.; Vece, M.

    2015-07-01

    The EU-FP7 project SPARTACUS, currently in progress, sees the international cooperation of several partners toward the design and implementation of a satellite based asset tracking for supporting emergency management in crisis operations. Due to the emergency environment, one has to rely on a low power consumption wireless communication. Therefore, the communication hardware and software must be designed to match requirements which can only be foreseen at the level of more or less likely scenarios. The latter aspect suggests a deep use of a simulator (instead of a real network of sensors) to cover extreme situations. The former power consumption remark suggests the use of a minimal computer (Raspberry Pi) as data collector. In this paper, the results of a broad simulation campaign are reported in order to investigate the accuracy of the received data and the global power consumption for each of the considered scenarios.

  13. Using Computational Cognitive Modeling to Diagnose Possible Sources of Aviation Error

    NASA Technical Reports Server (NTRS)

    Byrne, M. D.; Kirlik, Alex

    2003-01-01

    We present a computational model of a closed-loop, pilot-aircraft-visual scene-taxiway system created to shed light on possible sources of taxi error. Creating the cognitive aspects of the model using ACT-R required us to conduct studies with subject matter experts to identify experiential adaptations pilots bring to taxiing. Five decision strategies were found, ranging from cognitively-intensive but precise, to fast, frugal but robust. We provide evidence for the model by comparing its behavior to a NASA Ames Research Center simulation of Chicago O'Hare surface operations. Decision horizons were highly variable; the model selected the most accurate strategy given time available. We found a signature in the simulation data of the use of globally robust heuristics to cope with short decision horizons as revealed by errors occurring most frequently at atypical taxiway geometries or clearance routes. These data provided empirical support for the model.

  14. Computational model of polarized actin cables and cytokinetic actin ring formation in budding yeast

    PubMed Central

    Tang, Haosu; Bidone, Tamara C.

    2015-01-01

    The budding yeast actin cables and contractile ring are important for polarized growth and division, revealing basic aspects of cytoskeletal function. To study these formin-nucleated structures, we built a 3D computational model with actin filaments represented as beads connected by springs. Polymerization by formins at the bud tip and bud neck, crosslinking, severing, and myosin pulling, are included. Parameter values were estimated from prior experiments. The model generates actin cable structures and dynamics similar to those of wild type and formin deletion mutant cells. Simulations with increased polymerization rate result in long, wavy cables. Simulated pulling by type V myosin stretches actin cables. Increasing the affinity of actin filaments for the bud neck together with reduced myosin V pulling promotes the formation of a bundle of antiparallel filaments at the bud neck, which we suggest as a model for the assembly of actin filaments to the contractile ring. PMID:26538307
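
    The beads-connected-by-springs representation of a filament can be sketched as follows (an illustrative Hookean force computation only; the paper's model layers polymerization, crosslinking, severing, and myosin pulling on top of this):

```python
def spring_forces(beads, k, rest_len):
    """Forces on the beads of one filament modeled as beads connected
    by Hookean springs; beads are (x, y, z) tuples. Each spring pulls
    its endpoints together when stretched beyond rest_len and pushes
    them apart when compressed."""
    n = len(beads)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n - 1):
        dx = [beads[i + 1][d] - beads[i][d] for d in range(3)]
        dist = sum(c * c for c in dx) ** 0.5
        mag = k * (dist - rest_len) / dist   # signed tension / distance
        for d in range(3):
            forces[i][d] += mag * dx[d]      # toward bead i+1 if stretched
            forces[i + 1][d] -= mag * dx[d]  # equal and opposite
    return forces
```

    Time-stepping such forces (plus the other interactions) with overdamped dynamics yields the cable shapes the simulations compare against wild type and mutant cells.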

  15. Lattice Boltzmann simulation of antiplane shear loading of a stationary crack

    NASA Astrophysics Data System (ADS)

    Schlüter, Alexander; Kuhn, Charlotte; Müller, Ralf

    2018-01-01

    In this work, the lattice Boltzmann method is applied to study the dynamic behaviour of linear elastic solids under antiplane shear deformation. In this case, the governing set of partial differential equations reduces to a scalar wave equation for the out-of-plane displacement in a two-dimensional domain. The lattice Boltzmann approach developed by Guangwu (J Comput Phys 161(1):61-69, 2000) is used to solve the problem numerically. Some aspects of the scheme are highlighted, including the treatment of the boundary conditions. Subsequently, the performance of the lattice Boltzmann scheme is tested for a stationary crack problem for which an analytic solution exists. The treatment of cracks is new compared to the examples discussed in Guangwu's work. Furthermore, the lattice Boltzmann simulations are compared to finite element computations. Finally, the influence of the lattice Boltzmann relaxation parameter on the stability of the scheme is illustrated.

  16. Computer simulation of the effects of shoe cushioning on internal and external loading during running impacts.

    PubMed

    Miller, Ross H; Hamill, Joseph

    2009-08-01

    Biomechanical aspects of running injuries are often inferred from external loading measurements. However, previous research has suggested that relationships between external loading and potential injury-inducing internal loads can be complex and nonintuitive. Further, the loading response to training interventions can vary widely between subjects. In this study, we use a subject-specific computer simulation approach to estimate internal and external loading of the distal tibia during the impact phase for two runners when running in shoes with different midsole cushioning parameters. The results suggest that: (1) changes in tibial loading induced by footwear are not reflected by changes in ground reaction force (GRF) magnitudes; (2) the GRF loading rate is a better surrogate measure of tibial loading and stress fracture risk than the GRF magnitude; and (3) averaging results across groups may potentially mask differential responses to training interventions between individuals.

  17. Irregular-Mesh Terrain Analysis and Incident Solar Radiation for Continuous Hydrologic Modeling in Mountain Watersheds

    NASA Astrophysics Data System (ADS)

    Moreno, H. A.; Ogden, F. L.; Alvarez, L. V.

    2016-12-01

    This research work presents a methodology for estimating terrain slope, aspect (slope orientation), and total incoming solar radiation from Triangular Irregular Network (TIN) terrain models. The algorithm accounts for self-shading and cast shadows, sky view fractions for diffuse radiation, remote albedo, and atmospheric backscattering by using a vectorial approach within a topocentric coordinate system and establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing a unit vector defining the position of the sun at each hour and day of the year. Thus, a dot product determines the radiation flux at each TIN element. Cast shadows are computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector, only in the visible horizon range. Sky view fractions, used to determine diffuse radiation, are computed by a simplified scanning algorithm from the highest to the lowest triangles along prescribed directions and visible distances. Finally, remote albedo is computed from the sky-view-fraction complementary functions for prescribed albedo values of the surrounding terrain, only for significant angles above the horizon. The sensitivity of the different radiative components to seasonal changes in weather and surrounding albedo (snow) is tested in a mountainous watershed in Wyoming. This methodology represents an improvement on current algorithms for computing terrain and radiation values on triangular-based models in an accurate and efficient manner. All terrain-related features (e.g. slope, aspect, sky view fraction) can be pre-computed and stored for easy access by a subsequent, progressive-in-time numerical simulation.
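
    The normal-vector and dot-product computations described above can be sketched for a single TIN facet (an illustrative fragment assuming east-north-up coordinates, counter-clockwise vertex order seen from above, and an arbitrary solar constant; shadows and diffuse terms are not modeled here):

```python
import math

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def facet_normal(p0, p1, p2):
    """Unit normal of a triangle with (east, north, up) vertices."""
    return unit(cross([p1[i] - p0[i] for i in range(3)],
                      [p2[i] - p0[i] for i in range(3)]))

def tin_slope_aspect(p0, p1, p2):
    """Slope (rad from horizontal) and aspect (rad clockwise from
    north, the downslope direction) from the facet normal."""
    n = facet_normal(p0, p1, p2)
    slope = math.acos(n[2])
    aspect = math.atan2(n[0], n[1]) % (2 * math.pi)
    return slope, aspect

def direct_irradiance(p0, p1, p2, sun_dir, s0=1000.0):
    """Direct flux on the facet: dot product of the facet normal with
    the unit sun vector, clipped at zero for self-shaded facets."""
    n = facet_normal(p0, p1, p2)
    s = unit(sun_dir)
    return s0 * max(0.0, sum(n[i] * s[i] for i in range(3)))
```

    In the full methodology these per-facet quantities are pre-computed once and combined with cast-shadow scans and sky view fractions at each simulated hour.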

  18. Toward Automatic Verification of Goal-Oriented Flow Simulations

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2014-01-01

    We demonstrate the power of adaptive mesh refinement with adjoint-based error estimates in verification of simulations governed by the steady Euler equations. The flow equations are discretized using a finite volume scheme on a Cartesian mesh with cut cells at the wall boundaries. The discretization error in selected simulation outputs is estimated using the method of adjoint-weighted residuals. Practical aspects of the implementation are emphasized, particularly in the formulation of the refinement criterion and the mesh adaptation strategy. Following a thorough code verification example, we demonstrate simulation verification of two- and three-dimensional problems. These involve an airfoil performance database, a pressure signature of a body in supersonic flow and a launch abort with strong jet interactions. The results show reliable estimates and automatic control of discretization error in all simulations at an affordable computational cost. Moreover, the approach remains effective even when theoretical assumptions, e.g., steady-state and solution smoothness, are relaxed.
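
    The method of adjoint-weighted residuals estimates the error in a selected output from the residual of the approximate solution. For a linear system and a linear output the identity g.(u - u_h) = psi.(f - A u_h), with A^T psi = g, is exact, as this small sketch on a hypothetical 2x2 system verifies:

```python
def solve2(A, b):
    """Cramer's rule for a 2x2 linear system."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

A = [[4.0, 1.0], [2.0, 3.0]]
f = [1.0, 2.0]
g = [1.0, 1.0]                       # output functional J(u) = g . u

u = solve2(A, f)                     # exact discrete solution
u_h = [u[0] + 0.05, u[1] - 0.02]     # perturbed 'inexact' solution
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
psi = solve2(At, g)                  # adjoint solution, A^T psi = g

residual = [f[i] - sum(A[i][j] * u_h[j] for j in range(2)) for i in range(2)]
estimate = sum(psi[i] * residual[i] for i in range(2))
true_err = sum(g[i] * (u[i] - u_h[i]) for i in range(2))
```

    In the paper's setting the same weighting is applied to the nonlinear flow residual on each cell, and cells with large adjoint-weighted residual are flagged for refinement.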

  19. Hybrid neuro-heuristic methodology for simulation and control of dynamic systems over time interval.

    PubMed

    Woźniak, Marcin; Połap, Dawid

    2017-09-01

    Simulation and positioning are very important aspects of computer-aided engineering. Both can be handled by traditional methods or by intelligent techniques, which differ in the way they process information. In the first case, to simulate an object in a particular state of action, an entire process must be performed just to read the parameter values, which is inconvenient for objects whose simulation takes a long time, i.e. when the mathematical calculations are complicated. In the second case, an intelligent solution supports a dedicated mode of simulation, which enables simulating the object only in the situations required by the development process. We present research results on an intelligent simulation and control model of an electric-drive engine vehicle. For a dedicated simulation method based on intelligent computation, in which an evolutionary strategy simulates the states of the dynamic model, an intelligent system based on a dedicated neural network is introduced to control the co-working modules while the motion is within the time interval. The experimental results show the implemented solution in a situation where a vehicle transports goods over an area with many obstacles, which provokes sudden changes in stability that may lead to destruction of the load. The applied neural network controller therefore protects the load by adjusting characteristics such as pressure, acceleration, and stiffness voltage to absorb the adverse changes of the ground.

  20. Advanced EUV mask and imaging modeling

    NASA Astrophysics Data System (ADS)

    Evanschitzky, Peter; Erdmann, Andreas

    2017-10-01

    The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.

  1. Microtomographic imaging in the process of bone modeling and simulation

    NASA Astrophysics Data System (ADS)

    Mueller, Ralph

    1999-09-01

    Micro-computed tomography (μCT) is an emerging technique to nondestructively image and quantify trabecular bone in three dimensions. Where the early implementations of μCT focused more on technical aspects of the systems and required equipment not normally available to the general public, a more recent development emphasized practical aspects of microtomographic imaging. That system is based on a compact fan-beam type of tomograph, also referred to as desktop μCT. Desktop μCT has been used extensively for the investigation of osteoporosis related health problems gaining new insight into the organization of trabecular bone and the influence of osteoporotic bone loss on bone architecture and the competence of bone. Osteoporosis is a condition characterized by excessive bone loss and deterioration in bone architecture. The reduced quality of bone increases the risk of fracture. Current imaging technologies do not allow accurate in vivo measurements of bone structure over several decades or the investigation of the local remodeling stimuli at the tissue level. Therefore, computer simulations and new experimental modeling procedures are necessary for determining the long-term effects of age, menopause, and osteoporosis on bone. Microstructural bone models allow us to study not only the effects of osteoporosis on the skeleton but also to assess and monitor the effectiveness of new treatment regimens. The basis for such approaches are realistic models of bone and a sound understanding of the underlying biological and mechanical processes in bone physiology. In this article, strategies for new approaches to bone modeling and simulation in the study and treatment of osteoporosis and age-related bone loss are presented. The focus is on the bioengineering and imaging aspects of osteoporosis research. With the introduction of desktop μCT, a new generation of imaging instruments has entered the arena allowing easy and relatively inexpensive access to the three-dimensional microstructure of bone, thereby giving bone researchers a powerful tool for the exploration of age-related bone loss and osteoporosis.

  2. Distributed and collaborative synthetic environments

    NASA Technical Reports Server (NTRS)

    Bajaj, Chandrajit L.; Bernardini, Fausto

    1995-01-01

    Fast graphics workstations and increased computing power, together with improved interface technologies, have created new and diverse possibilities for developing and interacting with synthetic environments. A synthetic environment system is generally characterized by input/output devices that constitute the interface between the human senses and the synthetic environment generated by the computer; and a computation system running a real-time simulation of the environment. A basic need of a synthetic environment system is that of giving the user a plausible reproduction of the visual aspect of the objects with which he is interacting. The goal of our Shastra research project is to provide a substrate of geometric data structures and algorithms which allow the distributed construction and modification of the environment, efficient querying of objects attributes, collaborative interaction with the environment, fast computation of collision detection and visibility information for efficient dynamic simulation and real-time scene display. In particular, we address the following issues: (1) A geometric framework for modeling and visualizing synthetic environments and interacting with them. We highlight the functions required for the geometric engine of a synthetic environment system. (2) A distribution and collaboration substrate that supports construction, modification, and interaction with synthetic environments on networked desktop machines.

  3. Crystal nucleation of colloidal hard dumbbells

    NASA Astrophysics Data System (ADS)

    Ni, Ran; Dijkstra, Marjolein

    2011-01-01

    Using computer simulations, we investigate the homogeneous crystal nucleation in suspensions of colloidal hard dumbbells. The free energy barriers are determined by Monte Carlo simulations using the umbrella sampling technique. We calculate the nucleation rates for the plastic crystal and the aperiodic crystal phase using the kinetic prefactor as determined from event driven molecular dynamics simulations. We find good agreement with the nucleation rates determined from spontaneous nucleation events observed in event driven molecular dynamics simulations within error bars of one order of magnitude. We study the effect of aspect ratio of the dumbbells on the nucleation of plastic and aperiodic crystal phases, and we also determine the structure of the critical nuclei. Moreover, we find that the nucleation of the aligned close-packed crystal structure is strongly suppressed by a high free energy barrier at low supersaturations and slow dynamics at high supersaturations.

  4. Man-vehicle systems research facility advanced aircraft flight simulator throttle mechanism

    NASA Technical Reports Server (NTRS)

    Kurasaki, S. S.; Vallotton, W. C.

    1985-01-01

    The Advanced Aircraft Flight Simulator is equipped with a motorized mechanism that simulates a two-engine throttle control system that can be operated via a computer-driven performance management system or manually by the pilots. The throttle control system incorporates features to simulate normal engine operations and thrust reverse, and to vary the force feel to meet a variety of research needs. The additional work required for integration and testing is now principally in software design, since the mechanical aspects function correctly. The mechanism is an important part of the flight control system and provides the capability to conduct human factors research on flight crews with advanced aircraft systems under various flight conditions such as go-arounds, coupled instrument flight rule approaches, normal and ground operations, and emergencies that may or may not be encountered in actual flight.

  5. Detecting aircraft with a low-resolution infrared sensor.

    PubMed

    Jakubowicz, Jérémie; Lefebvre, Sidonie; Maire, Florian; Moulines, Eric

    2012-06-01

    Existing computer simulations of aircraft infrared signature (IRS) do not account for dispersion induced by uncertainty on input data, such as aircraft aspect angles and meteorological conditions. As a result, they are of little use to estimate the detection performance of IR optronic systems; in this case, the scenario encompasses many possible situations that must indeed be addressed but cannot each be simulated individually. In this paper, we focus on low-resolution infrared sensors and we propose a methodological approach for predicting simulated IRS dispersion of poorly known aircraft and performing aircraft detection on the resulting set of low-resolution infrared images. It is based on a sensitivity analysis, which identifies inputs that have negligible influence on the computed IRS and can be set at a constant value, on a quasi-Monte Carlo survey of the code output dispersion, and on a new detection test taking advantage of level sets estimation. This method is illustrated in a typical scenario, i.e., a daylight air-to-ground full-frontal attack by a generic combat aircraft flying at low altitude, over a database of 90,000 simulated aircraft images. Assuming a white noise or a fractional Brownian background model, detection performances are very promising.
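
    The quasi-Monte Carlo survey mentioned above covers the input-uncertainty space with a low-discrepancy sequence rather than pseudo-random draws. A generic Halton-sequence sketch (the abstract does not specify which sequence the authors used; this is illustrative only):

```python
import numpy as np

def halton(index, base):
    """Radical-inverse of `index` in the given (prime) base."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def halton_sequence(n, bases=(2, 3)):
    """n low-discrepancy points in [0, 1)^d, one prime base per dimension."""
    return np.array([[halton(i, b) for b in bases] for i in range(1, n + 1)])
```

    The points fill the unit cube more evenly than random sampling, which typically reduces the number of expensive IRS code runs needed to map the output dispersion.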

  6. Computational modeling of optical projection tomographic microscopy using the finite difference time domain method.

    PubMed

    Coe, Ryan L; Seibel, Eric J

    2012-12-01

    We present a method for modeling image formation in optical projection tomographic microscopy (OPTM) using high numerical aperture (NA) condensers and objectives. Similar to techniques used in computed tomography, OPTM produces three-dimensional, reconstructed images of single cells from two-dimensional projections. The model is capable of simulating axial scanning of a microscope objective to produce projections, which are reconstructed using filtered backprojection. Simulation of optical scattering in transmission optical microscopy is designed to analyze all aspects of OPTM image formation, such as degree of specimen staining, refractive-index matching, and objective scanning. In this preliminary work, a set of simulations is performed to examine the effect of changing the condenser NA, objective scan range, and complex refractive index on the final reconstruction of a microshell with an outer radius of 1.5 μm and an inner radius of 0.9 μm. The model lays the groundwork for optimizing OPTM imaging parameters and triaging efforts to further improve the overall system design. As the model is expanded in the future, it will be used to simulate a more realistic cell, which could lead to even greater impact.
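
    The filtered backprojection step named above can be sketched for the parallel-beam case as follows (a toy ramp filter with nearest-neighbour backprojection; the OPTM system's high-NA optics and axial scanning are not modeled here):

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """Filtered backprojection of a parallel-beam sinogram.

    sinogram   -- shape (n_angles, n_det)
    angles_deg -- projection angles in degrees
    Returns an (n_det, n_det) reconstruction.
    """
    n_angles, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))               # ideal ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    recon = np.zeros((n_det, n_det))
    axis = np.arange(n_det) - n_det / 2
    xx, yy = np.meshgrid(axis, axis)
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = xx * np.cos(theta) + yy * np.sin(theta)    # detector coordinate of each pixel
        idx = np.clip(np.round(t + n_det / 2).astype(int), 0, n_det - 1)
        recon += proj[idx]                             # smear the projection back
    return recon * np.pi / (2 * n_angles)
```

    Production reconstructions would use interpolated backprojection and an apodized filter, but the structure (1-D filter, then angular accumulation) is the same.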

  7. Comparative study on the performance of power and bandwidth efficient modulations in LMSS under fading and interference

    NASA Technical Reports Server (NTRS)

    Liu, Jian; Kim, Junghwan; Kwatra, S. C.; Stevens, Grady H.

    1991-01-01

    Aspects of error performance of various power and bandwidth efficient modulations for the land mobile satellite systems (LMSS) were investigated under multipath fading and interferences by using Monte-Carlo simulation. A differential detection for 16QAM (quadrature amplitude modulation) was proposed to cope with Ricean fading and Doppler shift. Computer simulation results show that the performance of 16QAM with differential detection is as good as that of 16PSK with coherent detection and 3 dB better than that of 16PSK with differential detection, although it degrades by about 4.5 dB as compared to 16QAM with coherent detection under an additive white Gaussian noise (AWGN) channel. For the nonlinear channels, 16QAM with modified signal constellations is introduced and analyzed. The simulation results show that the modified 16QAM exhibits a gain of 2.5 dB over 16PSK under traveling-wave tube nonlinearity, and about 4 dB gain over 16PSK at the bit error rate of 10^-5 under AWGN. Computer simulation results for modified 16QAM under cochannel interference and adjacent-channel interference are also presented.
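
    The Monte-Carlo error-rate methodology used in the study can be illustrated with a minimal 16QAM symbol-error simulation over a plain AWGN channel (no fading, Doppler, or nonlinearity; constellation scaling and detection details are generic assumptions, not the paper's exact setup):

```python
import numpy as np

def qam16_ser(snr_db, n_symbols=100_000, seed=0):
    """Monte Carlo symbol-error rate of square 16QAM on an AWGN channel (Es/N0 in dB)."""
    rng = np.random.default_rng(seed)
    levels = np.array([-3, -1, 1, 3])
    i = rng.choice(levels, n_symbols)
    q = rng.choice(levels, n_symbols)
    symbols = i + 1j * q
    es = 10.0                                   # average symbol energy of this constellation
    n0 = es / (10 ** (snr_db / 10))
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_symbols)
                               + 1j * rng.standard_normal(n_symbols))
    r = symbols + noise
    # minimum-distance detection: quantize each rail back to the nearest level
    detect = lambda x: levels[np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)]
    errors = (detect(r.real) != i) | (detect(r.imag) != q)
    return errors.mean()
```

    Sweeping `snr_db` and counting errors per point reproduces the familiar waterfall curves against which the differential and modified schemes would be compared.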

  8. Performance of a high-work, low-aspect-ratio turbine stator tested with a realistic inlet radial temperature gradient

    NASA Technical Reports Server (NTRS)

    Stabe, Roy G.; Schwab, John R.

    1991-01-01

    A 0.767-scale model of a turbine stator designed for the core of a high-bypass-ratio aircraft engine was tested with uniform inlet conditions and with an inlet radial temperature profile simulating engine conditions. The principal measurements were radial and circumferential surveys of stator-exit total temperature, total pressure, and flow angle. The stator-exit flow field was also computed by using a three-dimensional Navier-Stokes solver. Other than temperature, there were no apparent differences in performance due to the inlet conditions. The computed results compared quite well with the experimental results.

  9. Design of a Neurally Plausible Model of Fear Learning

    PubMed Central

    Krasne, Franklin B.; Fanselow, Michael S.; Zelikowsky, Moriel

    2011-01-01

    A neurally oriented conceptual and computational model of fear conditioning manifested by freezing behavior (FRAT), which accounts for many aspects of delay and context conditioning, has been constructed. Conditioning and extinction are the result of neuromodulation-controlled LTP at synapses of thalamic, cortical, and hippocampal afferents on principal cells and inhibitory interneurons of lateral and basal amygdala. The phenomena accounted for by the model (and simulated by the computational version) include conditioning, secondary reinforcement, blocking, the immediate shock deficit, extinction, renewal, and a range of empirically valid effects of pre- and post-training ablation or inactivation of hippocampus or amygdala nuclei. PMID:21845175

  10. Aspects of GPU performance in algorithms with random memory access

    NASA Astrophysics Data System (ADS)

    Kashkovsky, Alexander V.; Shershnev, Anton A.; Vashchenkov, Pavel V.

    2017-10-01

    The numerical code for solving the Boltzmann equation on a hybrid computational cluster using the Direct Simulation Monte Carlo (DSMC) method showed that on Tesla K40 accelerators computational performance drops dramatically as the percentage of occupied GPU memory increases. Testing revealed that memory access time increases tens of times after a certain critical percentage of memory is occupied. Moreover, this appears to be a problem common to all NVIDIA GPUs, arising from their architecture. A few modifications of the numerical algorithm were suggested to overcome this problem. One of them, based on splitting the memory into "virtual" blocks, resulted in a 2.5 times speedup.

  11. Influence of writing and reading intertrack interferences in terms of bit aspect ratio in shingled magnetic recording

    NASA Astrophysics Data System (ADS)

    Nobuhara, Hirofumi; Okamoto, Yoshihiro; Yamashita, Masato; Nakamura, Yasuaki; Osawa, Hisashi; Muraoka, Hiroaki

    2014-05-01

    In this paper, we investigate the influence of the writing and reading intertrack interferences (ITIs) in terms of bit aspect ratio (BAR) in shingled magnetic recording by computer simulation using a read/write model which consists of a writing process based on the Stoner-Wohlfarth switching asteroid with a one-side shielded isosceles triangular write head and a reading process by an around-shielded read head for a discrete Voronoi medium model. The results show that the BAR should be 3 to reduce the influence of writing and reading ITIs, media noise, and additive white Gaussian noise at an assumed areal density of 4.61 Tbpsi.

  12. Provenance-aware optimization of workload for distributed data production

    NASA Astrophysics Data System (ADS)

    Makatun, Dzmitry; Lauret, Jérôme; Rudová, Hana; Šumbera, Michal

    2017-10-01

    Distributed data processing in High Energy and Nuclear Physics (HENP) is a prominent example of big data analysis. Having petabytes of data being processed at tens of computational sites with thousands of CPUs, standard job scheduling approaches either do not address the problem's complexity well or are dedicated to one specific aspect of the problem only (CPU, network, or storage). Previously we have developed a new job scheduling approach dedicated to distributed data production - an essential part of data processing in HENP (preprocessing in big data terminology). In this contribution, we discuss the load balancing with multiple data sources and data replication, present recent improvements made to our planner, and provide results of simulations which demonstrate the advantage against standard scheduling policies for the new use case. Multiple data sources (provenance) are common in the computing models of many applications, where the data may be copied to several destinations. The initial input data set would hence already be partially replicated across multiple locations, and the task of the scheduler is to maximize overall computational throughput considering possible data movements and CPU allocation. The studies have shown that our approach can provide a significant gain in overall computational performance in a wide scope of simulations considering a realistic size of computational Grid and various input data distributions.
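
    The scheduling problem described above trades off data locality against CPU load. A deliberately simplified greedy sketch of data-aware placement (not the authors' planner; the cost model and names are hypothetical):

```python
def schedule(jobs, sites):
    """Greedy data-aware placement: send each job to the site with the earliest
    estimated finish time = current backlog + input-transfer cost + CPU time."""
    load = {s: 0.0 for s in sites}
    plan = {}
    for name, cpu, transfer in jobs:   # transfer: dict site -> cost of staging inputs
        best = min(sites, key=lambda s: load[s] + transfer[s] + cpu)
        plan[name] = best
        load[best] += transfer[best] + cpu
    return plan
```

    With partially replicated inputs, `transfer` is near zero at sites already holding a copy, so the greedy rule naturally prefers them until their backlog outweighs the transfer saving.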

  13. Real-time simulation of the retina allowing visualization of each processing stage

    NASA Astrophysics Data System (ADS)

    Teeters, Jeffrey L.; Werblin, Frank S.

    1991-08-01

    The retina computes to let us see, but can we see the retina compute? Until now, the answer has been no, because the unconscious nature of the processing hides it from our view. Here the authors describe a method of seeing computations performed throughout the retina. This is achieved by using neurophysiological data to construct a model of the retina, and using a special-purpose image processing computer (PIPE) to implement the model in real time. Processing in the model is organized into stages corresponding to computations performed by each retinal cell type. The final stage is the transient (change detecting) ganglion cell. A CCD camera forms the input image, and the activity of a selected retinal cell type is the output which is displayed on a TV monitor. By changing the retina cell driving the monitor, the progressive transformations of the image by the retina can be observed. These simulations demonstrate the ubiquitous presence of temporal and spatial variations in the patterns of activity generated by the retina which are fed into the brain. The dynamical aspects make these patterns very different from those generated by the common DOG (Difference of Gaussian) model of receptive field. Because the retina is so successful in biological vision systems, the processing described here may be useful in machine vision.
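
    The abstract contrasts the dynamic retina model with the common DOG (Difference of Gaussians) receptive-field model. For reference, a minimal static DoG center-surround stage (pure NumPy with separable convolution; the sigmas and kernel radius are arbitrary illustration parameters, not values from the PIPE implementation):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def dog_response(image, sigma_c=1.0, sigma_s=3.0, radius=8):
    """Center-surround (difference-of-Gaussians) response of a grayscale image."""
    def blur(img, sigma):
        k = gaussian_kernel(sigma, radius)
        tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)
    return blur(image, sigma_c) - blur(image, sigma_s)  # narrow center minus wide surround
```

    The point of the article is precisely what this static filter misses: the temporal dynamics that each retinal cell stage adds on top of spatial center-surround structure.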

  14. Mechanistic insight into prolonged electromechanical delay in dyssynchronous heart failure: a computational study

    PubMed Central

    Constantino, Jason; Hu, Yuxuan; Lardo, Albert C.

    2013-01-01

    In addition to the left bundle branch block type of electrical activation, there are further remodeling aspects associated with dyssynchronous heart failure (HF) that affect the electromechanical behavior of the heart. Among the most important are altered ventricular structure (both geometry and fiber/sheet orientation), abnormal Ca2+ handling, slowed conduction, and reduced wall stiffness. In dyssynchronous HF, the electromechanical delay (EMD), the time interval between local myocyte depolarization and myofiber shortening onset, is prolonged. However, the contributions of the four major HF remodeling aspects in extending EMD in the dyssynchronous failing heart remain unknown. The goal of this study was to determine the individual and combined contributions of HF-induced remodeling aspects to EMD prolongation. We used MRI-based models of dyssynchronous nonfailing and HF canine electromechanics and constructed additional models in which varying combinations of the four remodeling aspects were represented. A left bundle branch block electrical activation sequence was simulated in all models. The simulation results revealed that deranged Ca2+ handling is the primary culprit in extending EMD in dyssynchronous HF, with the other aspects of remodeling contributing insignificantly. Mechanistically, we found that abnormal Ca2+ handling in dyssynchronous HF slows myofiber shortening velocity at the early-activated septum and depresses both myofiber shortening and stretch rate at the late-activated lateral wall. These changes in myofiber dynamics delay the onset of myofiber shortening, thus giving rise to prolonged EMD in dyssynchronous HF. PMID:23934857

  15. Particle simulation on heterogeneous distributed supercomputers

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.; Dagum, Leonardo

    1993-01-01

    We describe the implementation and performance of a three dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sweeney, J. J.; Ford, S. R.

    The experience of IFE14 emphasizes the need for a better way to simulate aftershocks during an OSI exercise. The obvious approach is to develop a digital model of aftershocks that can be used either for a real field exercise or for a computer simulation that can be done in an office, for training for example. However, this approach involves consideration of several aspects, such as how and when to introduce waveforms in a way that maximizes the realism of the data and that will be convincing to a savvy, experienced seismic analyst. The purpose of this report is to outline a plan for how this approach can be implemented.

  17. The Use of Air Injection Nozzles for the Forced Excitation of Axial Compressor Blades

    NASA Astrophysics Data System (ADS)

    Raubenheimer, G. A.; van der Spuy, S. J.; von Backström, T. W.

    2013-03-01

    Turbomachines are exposed to many factors which may cause failure of their components. One of these, high cycle fatigue, can be caused by blade flutter. This paper evaluates the use of an air injection nozzle as a means of exciting vibrations on the first-stage rotor blades of a rotating axial compressor. Unsteady simulations of the excitation velocity perturbations were performed using the Computational Fluid Dynamics (CFD) software Numeca FINE™/Turbo. Experimental testing on a three-stage, low Mach number axial flow compressor provided data that were used to implement boundary conditions and to verify certain aspects of the unsteady simulation results.

  18. Modelling and simulation of complex sociotechnical systems: envisioning and analysing work environments

    PubMed Central

    Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter

    2015-01-01

    Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
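
    In AdResS, each solvent molecule carries a resolution weight that switches smoothly between atomistic (w = 1) and coarse-grained (w = 0) as it crosses the hybrid region. A sketch of the cos² switching function commonly used in the AdResS literature (the region radii here are hypothetical parameters, not values from this study):

```python
import numpy as np

def adress_weight(d, r_at, r_hy):
    """AdResS-style resolution weight versus distance d from the atomistic centre.

    Returns 1 inside the atomistic region (d < r_at), 0 beyond the hybrid shell
    (d > r_at + r_hy), and a smooth cos^2 crossover in between.
    """
    d = np.asarray(d, dtype=float)
    w = np.cos(np.pi * (d - r_at) / (2 * r_hy)) ** 2
    return np.where(d < r_at, 1.0, np.where(d > r_at + r_hy, 0.0, w))
```

    Forces (or potentials, depending on the AdResS variant) between two molecules are then interpolated with the product of their weights, so a molecule gradually gains or sheds atomistic detail as it drifts through the hybrid shell.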

  20. Resources and Approaches for Teaching Quantitative and Computational Skills in the Geosciences and Allied Fields

    NASA Astrophysics Data System (ADS)

    Orr, C. H.; Mcfadden, R. R.; Manduca, C. A.; Kempler, L. A.

    2016-12-01

    Teaching with data, simulations, and models in the geosciences can increase many facets of student success in the classroom, and in the workforce. Teaching undergraduates about programming and improving students' quantitative and computational skills expands their perception of Geoscience beyond field-based studies. Processing data and developing quantitative models are critically important for Geoscience students. Students need to be able to perform calculations, analyze data, create numerical models and visualizations, and more deeply understand complex systems—all essential aspects of modern science. These skills require students to have comfort and skill with languages and tools such as MATLAB. To achieve comfort and skill, computational and quantitative thinking must build over a 4-year degree program across courses and disciplines. However, in courses focused on Geoscience content it can be challenging to get students comfortable with using computational methods to answer Geoscience questions. To help bridge this gap, we have partnered with MathWorks to develop two workshops focused on collecting and developing strategies and resources to help faculty teach students to incorporate data, simulations, and models into the curriculum at the course and program levels. We brought together faculty members from the sciences, including Geoscience and allied fields, who teach computation and quantitative thinking skills using MATLAB to build a resource collection for teaching. These materials and the outcomes of the workshops are freely available on our website. The workshop outcomes include a collection of teaching activities, essays, and course descriptions that can help faculty incorporate computational skills at the course or program level. The teaching activities include in-class assignments, problem sets, labs, projects, and toolboxes. These activities range from programming assignments to creating and using models.
The outcomes also include workshop syntheses that highlight best practices, a set of webpages to support teaching with software such as MATLAB, and an interest group actively discussing aspects of these issues in Geoscience and allied fields. Learn more and view the resources at http://serc.carleton.edu/matlab_computation2016/index.html

  1. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    NASA Astrophysics Data System (ADS)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphic Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphic processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphic processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
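
    One of the thesis's findings is that sorting particles by cell index before the grid interpolation phase improves memory behavior. The idea can be sketched with a 1-D nearest-grid-point charge deposition (a NumPy stand-in for the CUDA kernel; names and the 1-D setting are illustrative assumptions, not the thesis code):

```python
import numpy as np

def deposit_sorted(positions, charges, n_cells, cell_size):
    """Nearest-grid-point charge deposition with a cell-index sort.

    Sorting particles by cell index groups together all particles that update the
    same grid location, so the deposition pass touches grid memory sequentially --
    the same locality idea that improves memory coalescing on a GPU.
    """
    cells = np.minimum((positions / cell_size).astype(int), n_cells - 1)
    order = np.argsort(cells, kind='stable')       # group particles by target cell
    grid = np.zeros(n_cells)
    np.add.at(grid, cells[order], charges[order])  # unbuffered per-cell accumulation
    return grid
```

    On the GPU the sort additionally reduces write conflicts, since threads in a warp deposit into neighbouring (often identical) cells instead of scattered ones.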

  2. Mechanical testing and finite element analysis of orthodontic teardrop loop.

    PubMed

    Coimbra, Maria Elisa Rodrigues; Penedo, Norman Duque; de Gouvêa, Jayme Pereira; Elias, Carlos Nelson; de Souza Araújo, Mônica Tirre; Coelho, Paulo Guilherme

    2008-02-01

    Understanding how teeth move in response to mechanical loads is an important aspect of orthodontic treatment. Treatment planning should include consideration of the appliances that will meet the desired loading of the teeth to result in optimized treatment outcomes. The purpose of this study was to evaluate the use of computer simulation to predict the force and the torsion obtained after the activation of teardrop loops of 3 heights. Seventy-five retraction loops were divided into 3 groups according to height (6, 7, and 8 mm). The loops were subjected to tensile load through displacements of 0.5, 1.0, 1.5, and 2.0 mm, and the resulting forces and torques were recorded. The loops were designed in AutoCAD software (2005; Autodesk Systems, Alpharetta, GA), and finite element analysis was performed with Ansys software (version 7.0; Swanson Analysis System, Canonsburg, PA). Statistical analysis of the mechanical experiment results was obtained by ANOVA and the Tukey post-hoc test (P < .01). The correlation test and the paired t test (P < .05) were used to compare the computer simulation with the mechanical experiment. The computer simulation accurately predicted the experimentally determined mechanical behavior of teardrop loops of different heights and should be considered an alternative for designing orthodontic appliances before treatment.

  3. A Micro-Level Data-Calibrated Agent-Based Model: The Synergy between Microsimulation and Agent-Based Modeling.

    PubMed

    Singh, Karandeep; Ahn, Chang-Won; Paik, Euihyun; Bae, Jang Won; Lee, Chun-Hee

    2018-01-01

    Artificial life (ALife) examines systems related to natural life, its processes, and its evolution, using simulations with computer models, robotics, and biochemistry. In this article, we focus on the computer modeling, or "soft," aspects of ALife and prepare a framework for scientists and modelers to be able to support such experiments. The framework is designed and built to be a parallel as well as distributed agent-based modeling environment, and does not require end users to have expertise in parallel or distributed computing. Furthermore, we use this framework to implement a hybrid model using microsimulation and agent-based modeling techniques to generate an artificial society. We leverage this artificial society to simulate and analyze population dynamics using Korean population census data. The agents in this model derive their decisional behaviors from real data (microsimulation feature) and interact among themselves (agent-based modeling feature) to proceed in the simulation. The behaviors, interactions, and social scenarios of the agents are varied to perform an analysis of population dynamics. We also estimate the future cost of pension policies based on the future population structure of the artificial society. The proposed framework and model demonstrates how ALife techniques can be used by researchers in relation to social issues and policies.

  4. Efficiency of the neighbor-joining method in reconstructing deep and shallow evolutionary relationships in large phylogenies.

    PubMed

    Kumar, S; Gadagkar, S R

    2000-12-01

    The neighbor-joining (NJ) method is widely used in reconstructing large phylogenies because of its computational speed and its high accuracy in phylogenetic inference, as revealed in computer simulation studies. However, most computer simulation studies have quantified the overall performance of the NJ method in terms of the percentage of branches inferred correctly or the percentage of replications in which the correct tree is recovered. We have examined other aspects of its performance, such as the relative efficiency in correctly reconstructing shallow (close to the external branches of the tree) and deep branches in large phylogenies; the contribution of zero-length branches to topological errors in the inferred trees; and the influence of increasing the tree size (number of sequences), evolutionary rate, and sequence length on the efficiency of the NJ method. Results show that the correct reconstruction of deep branches is no more difficult than that of shallower branches. The presence of zero-length branches in realized trees contributes significantly to the overall error observed in the NJ tree, especially in large phylogenies or slowly evolving genes. Furthermore, the tree size does not influence the efficiency of NJ in reconstructing shallow and deep branches in our simulation study, in which the evolutionary process is assumed to be homogeneous in all lineages.
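
    For readers unfamiliar with the algorithm being benchmarked here, a minimal pure-Python sketch of neighbor joining follows. It recovers only the unrooted topology (branch lengths are omitted for brevity) and is an illustration of the standard Saitou-Nei procedure, not the code used in the study.

```python
from itertools import combinations

def neighbor_joining(dist):
    """Neighbor joining from a dict of pairwise distances.

    dist maps frozenset({a, b}) -> distance between taxa a and b.
    Returns the topology as nested 2-tuples of taxon labels.
    """
    dist = dict(dist)                       # work on a copy
    nodes = sorted({t for pair in dist for t in pair})

    def d(a, b):
        return dist[frozenset((a, b))]

    while len(nodes) > 2:
        n = len(nodes)
        # net divergence of each node
        r = {i: sum(d(i, k) for k in nodes if k != i) for i in nodes}
        # join the pair minimising the Q-criterion
        i, j = min(combinations(nodes, 2),
                   key=lambda p: (n - 2) * d(*p) - r[p[0]] - r[p[1]])
        u = (i, j)                          # new internal node
        for k in nodes:
            if k not in (i, j):
                dist[frozenset((u, k))] = 0.5 * (d(i, k) + d(j, k) - d(i, j))
        nodes = [k for k in nodes if k not in (i, j)] + [u]
    return tuple(nodes)
```

    On an additive distance matrix this recovers the true tree topology; the simulation studies discussed above probe how it degrades when distances are noisy estimates.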

  5. Ceramic matrix composite behavior -- Computational simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamis, C.C.; Murthy, P.L.N.; Mital, S.K.

    Development of analytical modeling and computational capabilities for the prediction of high temperature ceramic matrix composite behavior has been an ongoing research activity at NASA-Lewis Research Center. These research activities have resulted in the development of micromechanics based methodologies to evaluate different aspects of ceramic matrix composite behavior. The basis of the approach is micromechanics together with a unique fiber substructuring concept. In this new concept the conventional unit cell (the smallest representative volume element of the composite) of the micromechanics approach has been modified by substructuring the unit cell into several slices and developing the micromechanics based equations at the slice level. The main advantage of this technique is that it can provide much greater detail in the response of composite behavior as compared to a conventional micromechanics based analysis and still maintains a very high computational efficiency. This methodology has recently been extended to model plain weave ceramic composites. The objective of the present paper is to describe the important features of the modeling and simulation and illustrate with select examples of laminated as well as woven composites.

  6. Domain modeling and grid generation for multi-block structured grids with application to aerodynamic and hydrodynamic configurations

    NASA Technical Reports Server (NTRS)

    Spekreijse, S. P.; Boerstoel, J. W.; Vitagliano, P. L.; Kuyvenhoven, J. L.

    1992-01-01

    About five years ago, a joint development was started of a flow simulation system for engine-airframe integration studies on propeller as well as jet aircraft. The initial system was based on the Euler equations and made operational for industrial aerodynamic design work. The system consists of three major components: a domain modeller, for the graphical interactive subdivision of flow domains into an unstructured collection of blocks; a grid generator, for the graphical interactive computation of structured grids in blocks; and a flow solver, for the computation of flows on multi-block grids. The industrial partners of the collaboration and NLR have demonstrated that the domain modeller, grid generator and flow solver can be applied to simulate Euler flows around complete aircraft, including propulsion system simulation. Extension to Navier-Stokes flows is in progress. Delft Hydraulics has shown that both the domain modeller and grid generator can also be applied successfully for hydrodynamic configurations. An overview is given of the main aspects of both domain modelling and grid generation.

  7. GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA

    NASA Astrophysics Data System (ADS)

    Spiechowicz, J.; Kostur, M.; Machura, L.

    2015-06-01

    This work presents an updated and extended guide to methods for properly accelerating the Monte Carlo integration of stochastic differential equations with the commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of scientific computing on graphics cards and demonstrate them with two models of the well known phenomenon of the noise induced transport of Brownian motors in periodic structures. As a source of fluctuations in the considered systems we selected the three most commonly occurring noises: Gaussian white noise, white Poissonian noise, and the dichotomous process also known as a random telegraph signal. A detailed discussion of various aspects of the applied numerical schemes is also presented. The measured speedup can be of the astonishing order of about 3000 when compared to a typical CPU. This number significantly expands the range of problems solvable by use of stochastic simulations, allowing even interactive research in some cases.
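
    The core numerical scheme behind such simulations is Euler-Maruyama integration of an overdamped Langevin equation. The serial sketch below (pure Python, Gaussian white noise only; the potential and parameter values are illustrative, not taken from the paper) shows the per-trajectory update that a CUDA kernel would evaluate for thousands of trajectories in parallel.

```python
import math
import random

def simulate_brownian_motor(steps=20000, dt=1e-3, F=0.5, D=0.2, seed=1):
    """Euler-Maruyama integration of an overdamped Brownian particle in a
    ratchet-like periodic potential V(x) = -cos(x) - 0.25*cos(2x), tilted
    by a constant bias F and driven by Gaussian white noise of strength D:

        x_{n+1} = x_n + (-V'(x_n) + F) * dt + sqrt(2 * D * dt) * xi_n
    """
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        det_force = -(math.sin(x) + 0.5 * math.sin(2.0 * x))   # -V'(x)
        x += (det_force + F) * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
    return x
```

    On a GPU, each thread would run this loop with an independent random stream, and the transport velocity would be estimated from the ensemble-averaged displacement.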

  8. What's the Technology For? Teacher Attention and Pedagogical Goals in a Modeling-Focused Professional Development Workshop

    NASA Astrophysics Data System (ADS)

    Wilkerson, Michelle Hoda; Andrews, Chelsea; Shaban, Yara; Laina, Vasiliki; Gravel, Brian E.

    2016-02-01

    This paper explores the role that technology can play in engaging pre-service teachers with the iterative, "messy" nature of model-based inquiry. Over the course of 5 weeks, 11 pre-service teachers worked in groups to construct models of diffusion using a computational animation and simulation toolkit, and designed lesson plans for the toolkit. Content analyses of group discussions and lesson plans document attention to content, representation, revision, and evaluation as interwoven aspects of modeling over the course of the workshop. When animating, only content and representation were heavily represented in group discussions. When simulating, all four aspects were represented to different extents across groups. Those differences corresponded with different planned uses for the technology during lessons: to teach modeling, to engage learners with one another's ideas, or to reveal student ideas. We identify specific ways in which technology served an important role in eliciting teachers' knowledge and goals related to scientific modeling in the classroom.

  9. Exhaustively sampling peptide adsorption with metadynamics.

    PubMed

    Deighan, Michael; Pfaendtner, Jim

    2013-06-25

    Simulating the adsorption of a peptide or protein and obtaining quantitative estimates of thermodynamic observables remains challenging for many reasons. One reason is the dearth of molecular scale experimental data available for validating such computational models. We also lack simulation methodologies that effectively address the dual challenges of simulating protein adsorption: overcoming strong surface binding and sampling conformational changes. Unbiased classical simulations do not address either of these challenges. Previous attempts that apply enhanced sampling generally focus on only one of the two issues, leaving the other to chance or brute force computing. To improve our ability to accurately resolve adsorbed protein orientation and conformational states, we have applied the Parallel Tempering Metadynamics in the Well-Tempered Ensemble (PTMetaD-WTE) method to several explicitly solvated protein/surface systems. We simulated the adsorption behavior of two peptides, LKα14 and LKβ15, onto two self-assembled monolayer (SAM) surfaces with carboxyl and methyl terminal functionalities. PTMetaD-WTE proved effective at achieving rapid convergence of the simulations, whose results elucidated different aspects of peptide adsorption, including binding free energies, side-chain orientations, and preferred conformations. We investigated how specific molecular features of the surface/protein interface change the shape of the multidimensional peptide binding free energy landscape. Additionally, we compared our enhanced sampling technique with umbrella sampling and also evaluated three commonly used molecular dynamics force fields.
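
    The well-tempered metadynamics ingredient of PTMetaD-WTE can be illustrated on a toy one-dimensional system. The sketch below (a double-well potential and arbitrary parameter values, chosen for illustration only) shows the defining feature of the method: deposited Gaussian hill heights decay exponentially with the bias already accumulated, controlled by the bias factor gamma.

```python
import math
import random

def wt_metadynamics(nsteps=5000, dt=0.01, kT=1.0, gamma=5.0,
                    w0=0.1, sigma=0.2, stride=50, seed=7):
    """Well-tempered metadynamics along one collective variable s for the
    double-well potential V(s) = (s^2 - 1)^2, sampled with overdamped
    Langevin dynamics; hill heights decay as the bias accumulates."""
    rng = random.Random(seed)
    hills = []                                   # deposited (center, height)

    def bias(s):
        return sum(h * math.exp(-(s - c) ** 2 / (2.0 * sigma ** 2))
                   for c, h in hills)

    def bias_force(s):                           # -d(bias)/ds
        return sum(h * (s - c) / sigma ** 2
                   * math.exp(-(s - c) ** 2 / (2.0 * sigma ** 2))
                   for c, h in hills)

    s = -1.0
    for step in range(nsteps):
        f = -4.0 * s * (s * s - 1.0) + bias_force(s)   # -dV/ds + bias force
        s += f * dt + math.sqrt(2.0 * kT * dt) * rng.gauss(0.0, 1.0)
        if step % stride == 0:
            height = w0 * math.exp(-bias(s) / (kT * (gamma - 1.0)))
            hills.append((s, height))
    return s, hills
```

    In the full method, this bias acts alongside parallel-tempering exchanges; the free energy is recovered from the converged bias up to the well-tempered scaling factor gamma/(gamma - 1).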

  10. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    NASA Astrophysics Data System (ADS)

    Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt

    2015-05-01

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
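
    The essence of the AdResS coupling described above is a smooth, position-dependent resolution weight that interpolates atomistic and coarse-grained pair forces across a hybrid layer. A minimal sketch, with an assumed cos^2 switching profile (one common choice in the AdResS literature) and hypothetical parameter names:

```python
import math

def adress_weight(x, x_at, d_hy):
    """AdResS resolution weight: 1 inside the atomistic zone, 0 in the
    coarse-grained reservoir, and a smooth cos^2 interpolation across the
    hybrid layer of width d_hy; x is the distance from the atomistic-region
    center and x_at the atomistic radius."""
    if x <= x_at:
        return 1.0
    if x >= x_at + d_hy:
        return 0.0
    return math.cos(math.pi * (x - x_at) / (2.0 * d_hy)) ** 2

def coupled_pair_force(f_atomistic, f_coarse, w_i, w_j):
    """Force interpolation on a particle pair (i, j): a weighted mix of the
    atomistic and coarse-grained pair forces via the product of weights."""
    lam = w_i * w_j
    return lam * f_atomistic + (1.0 - lam) * f_coarse
```

    Solvent molecules crossing the hybrid layer thus change resolution on the fly, which is what allows only a few shells of atomistic water to be retained around the protein.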

  11. Future opportunities and trends for e-infrastructures and life sciences: going beyond the grid to enable life science data analysis

    PubMed Central

    Duarte, Afonso M. S.; Psomopoulos, Fotis E.; Blanchet, Christophe; Bonvin, Alexandre M. J. J.; Corpas, Manuel; Franc, Alain; Jimenez, Rafael C.; de Lucas, Jesus M.; Nyrönen, Tommi; Sipos, Gergely; Suhr, Stephanie B.

    2015-01-01

    With the increasingly rapid growth of data in life sciences we are witnessing a major transition in the way research is conducted, from hypothesis-driven studies to data-driven simulations of whole systems. Such approaches necessitate the use of large-scale computational resources and e-infrastructures, such as the European Grid Infrastructure (EGI). EGI, one of the key enablers of the digital European Research Area, is a federation of resource providers set up to deliver sustainable, integrated and secure computing services to European researchers and their international partners. Here we aim to provide the state of the art of Grid/Cloud computing in EU research as viewed from within the field of life sciences, focusing on key infrastructures and projects within the life sciences community. Rather than focusing purely on the technical aspects underlying the currently provided solutions, we outline the design aspects and key characteristics that can be identified across major research approaches. Overall, we aim to provide significant insights into the road ahead by establishing ever-strengthening connections between EGI as a whole and the life sciences community. PMID:26157454

  13. Virtual geotechnical laboratory experiments using a simulator

    NASA Astrophysics Data System (ADS)

    Penumadu, Dayakar; Zhao, Rongda; Frost, David

    2000-04-01

    The details of a test simulator that provides a realistic environment for performing virtual laboratory experiments in soil mechanics are presented. A computer program, Geo-Sim, that can be used to perform virtual experiments and allows for real-time observations of material response is presented. The results of experiments, for a given set of input parameters, are obtained with the test simulator using well-trained artificial neural-network-based soil models for different soil types and stress paths. Multimedia capabilities are integrated in Geo-Sim, using software that links and controls a laser disc player with a real-time parallel processing ability. During the simulation of a virtual experiment, relevant portions of the video image of a previously recorded test on an actual soil specimen are displayed along with the graphical presentation of the response from the feedforward ANN model predictions. The pilot simulator developed to date includes all aspects related to performing a triaxial test on cohesionless soil under undrained and drained conditions. The benefits of the test simulator are also presented.

  14. Hierarchical Modeling of Activation Mechanisms in the ABL and EGFR Kinase Domains: Thermodynamic and Mechanistic Catalysts of Kinase Activation by Cancer Mutations

    PubMed Central

    Dixit, Anshuman; Verkhivker, Gennady M.

    2009-01-01

    Structural and functional studies of the ABL and EGFR kinase domains have recently suggested a common mechanism of activation by cancer-causing mutations. However, dynamics and mechanistic aspects of kinase activation by cancer mutations that stimulate conformational transitions and thermodynamic stabilization of the constitutively active kinase form remain elusive. We present a large-scale computational investigation of activation mechanisms in the ABL and EGFR kinase domains by a panel of clinically important cancer mutants ABL-T315I, ABL-L387M, EGFR-T790M, and EGFR-L858R. We have also simulated the activating effect of the gatekeeper mutation on conformational dynamics and allosteric interactions in functional states of the ABL-SH2-SH3 regulatory complexes. A comprehensive analysis was conducted using a hierarchy of computational approaches that included homology modeling, molecular dynamics simulations, protein stability analysis, targeted molecular dynamics, and molecular docking. Collectively, the results of this study have revealed thermodynamic and mechanistic catalysts of kinase activation by major cancer-causing mutations in the ABL and EGFR kinase domains. By using multiple crystallographic states of ABL and EGFR, computer simulations have allowed one to map dynamics of conformational fluctuations and transitions in the normal (wild-type) and oncogenic kinase forms. A proposed multi-stage mechanistic model of activation involves a series of cooperative transitions between different conformational states, including assembly of the hydrophobic spine, the formation of the Src-like intermediate structure, and a cooperative breakage and formation of characteristic salt bridges, which signify transition to the active kinase form. 
We suggest that molecular mechanisms of activation by cancer mutations may mimic the activation process of the normal kinase, exploiting conserved structural catalysts to accelerate the conformational transition and enhance stabilization of the active kinase form. The results of this study reconcile current experimental data with insights from theoretical approaches, pointing to general mechanistic aspects of activating transitions in protein kinases. PMID:19714203

  15. Sustainability of transport structures - some aspects of the nonlinear reliability assessment

    NASA Astrophysics Data System (ADS)

    Pukl, Radomír; Sajdlová, Tereza; Strauss, Alfred; Lehký, David; Novák, Drahomír

    2017-09-01

    Efficient techniques for both nonlinear numerical analysis of concrete structures and advanced stochastic simulation methods have been combined in order to offer an advanced tool for assessment of the realistic behaviour, failure and safety of transport structures. The utilized approach is based on randomization of the non-linear finite element analysis of the structural models. Degradation aspects such as carbonation of concrete can be accounted for in order to predict the durability of the investigated structure and its sustainability. Results can serve as a rational basis for the performance and sustainability assessment based on advanced nonlinear computer analysis of the structures of transport infrastructure such as bridges or tunnels. In the stochastic simulation, the input material parameters obtained from material tests, including their randomness and uncertainty, are represented as random variables or random fields. Appropriate identification of material parameters is crucial for the virtual failure modelling of structures and structural elements. An inverse analysis combining artificial neural networks with virtual stochastic simulations is applied to determine the fracture mechanical parameters of the structural material and its numerical model. Structural response, reliability and sustainability have been investigated on different types of transport structures made from various materials using the above mentioned methodology and tools.
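
    The randomization idea at the heart of such reliability assessments can be shown with a deliberately simple toy: crude Monte Carlo sampling of a limit state g = R - S (resistance minus load effect). The distributions below are hypothetical placeholders, not values from the study; in practice each sample would be a full nonlinear finite element run.

```python
import random

def failure_probability(n=100_000, seed=42):
    """Crude Monte Carlo estimate of the failure probability for the
    limit state g = R - S, with illustrative normal distributions
    R ~ N(30, 3) for resistance and S ~ N(20, 4) for the load effect."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        R = rng.gauss(30.0, 3.0)
        S = rng.gauss(20.0, 4.0)
        if R - S < 0.0:          # limit state violated -> failure
            failures += 1
    return failures / n
```

    For these placeholder numbers the analytic reliability index is beta = (30 - 20) / sqrt(3^2 + 4^2) = 2, so the estimate should hover near Phi(-2) ≈ 0.023; variance-reduction schemes such as Latin hypercube sampling replace the plain loop when each sample is expensive.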

  16. The Elastic Behaviour of Sintered Metallic Fibre Networks: A Finite Element Study by Beam Theory

    PubMed Central

    Bosbach, Wolfram A.

    2015-01-01

    Background The finite element method has complemented research in the field of network mechanics in the past years in numerous studies about various materials. Numerical predictions and the planning efficiency of experimental procedures are two of the motivational aspects for these numerical studies. The widespread availability of high performance computing facilities has been the enabler for the simulation of sufficiently large systems. Objectives and Motivation In the present study, finite element models were built for sintered, metallic fibre networks and validated by previously published experimental stiffness measurements. The validated models were the basis for predictions about so far unknown properties. Materials and Methods The finite element models were built by transferring previously published skeletons of fibre networks into finite element models. Beam theory was applied as a simplification method. Results and Conclusions The obtained material stiffness is not a constant but rather a function of variables such as sample size and boundary conditions. Beam theory offers an efficient finite element method for the simulated fibre networks. The experimental results can be approximated by the simulated systems. Two worthwhile aspects for future work will be the influence of size and shape and the mechanical interaction with matrix materials. PMID:26569603
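
    The beam-theory simplification means each fibre segment is represented by an Euler-Bernoulli beam element rather than a solid mesh. As a minimal single-element illustration (a textbook cantilever, not the network models of the study), the sketch below assembles the reduced 2x2 stiffness system for the free node and solves it by Cramer's rule:

```python
def cantilever_tip_response(E, I, L, P):
    """Tip deflection and rotation of a cantilever under a tip load P,
    modelled with one Euler-Bernoulli beam element (dofs per node:
    deflection v, rotation theta).  Eliminating the clamped node leaves
    the 2x2 system [[12k, -6kL], [-6kL, 4kL^2]] * (v, theta) = (P, 0)
    with k = E*I/L^3."""
    k = E * I / L ** 3
    a, b, c = 12.0 * k, -6.0 * k * L, 4.0 * k * L ** 2
    det = a * c - b * b      # = 12 (E I)^2 / L^4
    v = c * P / det          # tip deflection -> P L^3 / (3 E I)
    theta = -b * P / det     # tip rotation   -> P L^2 / (2 E I)
    return v, theta
```

    A fibre-network model repeats exactly this element stiffness assembly for every fibre segment, with elements sharing degrees of freedom at the sinter joints.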

  17. Understanding Climate Uncertainty with an Ocean Focus

    NASA Astrophysics Data System (ADS)

    Tokmakian, R. T.

    2009-12-01

    Uncertainty in climate simulations arises from various aspects of the end-to-end process of modeling the Earth’s climate. First, there is uncertainty from the structure of the climate model components (e.g. ocean/ice/atmosphere). Even the most complex models are deficient, not only in the complexity of the processes they represent, but in which processes are included in a particular model. Next, uncertainties arise from the inherent error in the initial and boundary conditions of a simulation. Initial conditions describe the state of the weather or climate at the beginning of the simulation and typically come from observations. Finally, there is the uncertainty associated with the values of parameters in the model. These parameters may represent physical constants or effects, such as ocean mixing, or non-physical aspects of modeling and computation. The uncertainty in these input parameters propagates through the non-linear model to give uncertainty in the outputs. The models in 2020 will no doubt be better than today’s models, but they will still be imperfect, and development of uncertainty analysis technology is a critical aspect of understanding model realism and prediction capability. Smith [2002] and Cox and Stephenson [2007] discuss the need for methods to quantify the uncertainties within complicated systems so that limitations or weaknesses of the climate model can be understood. In making climate predictions, we need to have available both the most reliable model or simulation and methods to quantify the reliability of a simulation. If quantitative uncertainty questions of the internal model dynamics are to be answered with complex simulations such as AOGCMs, then the only known path forward is based on model ensembles that characterize behavior with alternative parameter settings [e.g. Rougier, 2007].
The relevance and feasibility of using "Statistical Analysis of Computer Code Output" (SACCO) methods for examining uncertainty in ocean circulation due to parameter specification will be described and early results using the ocean/ice components of the CCSM climate model in a designed experiment framework will be shown. Cox, P. and D. Stephenson, Climate Change: A Changing Climate for Prediction, 2007, Science 317 (5835), 207, DOI: 10.1126/science.1145956. Rougier, J. C., 2007: Probabilistic Inference for Future Climate Using an Ensemble of Climate Model Evaluations, Climatic Change, 81, 247-264. Smith L., 2002, What might we learn from climate forecasts? Proc. Nat’l Academy of Sciences, Vol. 99, suppl. 1, 2487-2492 doi:10.1073/pnas.012580599.
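
    Designed-experiment ensembles of the kind described here typically choose parameter settings by stratified space-filling designs rather than plain random sampling. A minimal Latin hypercube sampler (an illustrative sketch of one standard design, not the authors' tooling) looks like this:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube sample in the unit cube: each dimension is divided
    into n_samples equal strata, and each stratum is used exactly once,
    with a uniform jitter inside the stratum."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        cols.append([(s + rng.random()) / n_samples for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n_samples)]
```

    Each row is then mapped onto the physical ranges of the uncertain model parameters (e.g. ocean mixing coefficients) to define one ensemble member.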

  18. Self-organisation of semi-flexible rod-like particles

    NASA Astrophysics Data System (ADS)

    de Braaf, Bart; Oshima Menegon, Mariana; Paquay, Stefan; van der Schoot, Paul

    2017-12-01

    We report on a comprehensive computer simulation study of the liquid-crystal phase behaviour of purely repulsive, semi-flexible rod-like particles. For the four aspect ratios we consider, the particles form five distinct phases depending on their packing fraction and bending flexibility: the isotropic, nematic, smectic A, smectic B, and crystal phase. Upon increasing the particle bending flexibility, the various phase transitions shift to larger packing fractions. Increasing the aspect ratio achieves the opposite effect. We find two different ways in which the layer thickness of the particles in the smectic A phase may respond to an increase in concentration. The layer thickness may either decrease or increase depending on the aspect ratio and flexibility. For the smectic B and the crystalline phases, increasing the concentration always decreases the layer thickness. Finally, we find that the layer spacing jumps to a larger value on transitioning from the smectic A phase to the smectic B phase.
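
    The isotropic-nematic distinction in such studies is usually quantified by the uniaxial order parameter. As a small illustrative helper (simplified to measure order against a fixed director rather than diagonalizing the full Q-tensor), one can compute it as:

```python
def nematic_order(orientations, director=(0.0, 0.0, 1.0)):
    """Uniaxial nematic order parameter S = <(3 cos^2(theta) - 1) / 2> of a
    set of unit orientation vectors, measured against a fixed director."""
    total = 0.0
    for u in orientations:
        c = sum(a * b for a, b in zip(u, director))   # cos(theta)
        total += 1.5 * c * c - 0.5
    return total / len(orientations)
```

    S is 1 for perfect alignment, about 0 for an isotropic phase, and -1/2 for rods lying perpendicular to the director; tracking S against packing fraction locates the isotropic-nematic transition.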

  19. Simulation-Based Probabilistic Seismic Hazard Assessment Using System-Level, Physics-Based Models: Assembling Virtual California

    NASA Astrophysics Data System (ADS)

    Rundle, P. B.; Rundle, J. B.; Morein, G.; Donnellan, A.; Turcotte, D.; Klein, W.

    2004-12-01

    The research community is rapidly moving towards the development of an earthquake forecast technology based on the use of complex, system-level earthquake fault system simulations. Using these topologically and dynamically realistic simulations, it is possible to develop ensemble forecasting methods similar to those used in weather and climate research. To effectively carry out such a program, one needs 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike slip faults in California, from the Mexico-California border to the Mendocino Triple Junction. Virtual California is a "backslip model", meaning that the long term rate of slip on each fault segment in the model is matched to the observed rate. We use the historic data set of earthquakes larger than magnitude M > 6 to define the frictional properties of 650 fault segments (degrees of freedom) in the model. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on Beowulf clusters consisting of >10 CPUs. We will also report results from implementing the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess scaling properties of the code. We present results of simulations both as static images and as mpeg movies, so that the dynamical aspects of the computation can be assessed by the viewer. We compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems.
We report recent results on use of Virtual California for probabilistic earthquake forecasting for several sub-groups of major faults in California. These methods have the advantage that system-level fault interactions are explicitly included, as well as laboratory-based friction laws.
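
    One of the magnitude-frequency statistics mentioned above is the Gutenberg-Richter b-value. A common way to estimate it from a simulated or observed catalog is Aki's maximum-likelihood formula; the sketch below applies it to a synthetic catalog (the catalog is generated here for illustration and is not Virtual California output).

```python
import math
import random

def b_value_mle(magnitudes, m_c):
    """Aki maximum-likelihood estimate of the Gutenberg-Richter b-value
    from event magnitudes at or above the completeness threshold m_c:
        b = log10(e) / (mean(M) - m_c)
    """
    above = [m for m in magnitudes if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)

# synthetic catalog with a true b-value of 1.0 (exponential magnitudes)
rng = random.Random(3)
catalog = [3.0 + rng.expovariate(math.log(10.0)) for _ in range(20000)]
b_hat = b_value_mle(catalog, m_c=3.0)
```

    Comparing b-values (and full magnitude-frequency curves) between simulated and real catalogs is one basic check of a simulator's statistical realism.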

  20. Computer simulations and experimental study on crash box of automobile in low speed collision

    NASA Astrophysics Data System (ADS)

    Liu, Yanjie; Ding, Lin; Yan, Shengyuan; Yang, Yongsheng

    2008-11-01

    To address the behavior of energy-absorbing components during low-speed automobile collisions, a frontal crash test of a crash box at low speed was taken as an example, and a simulation analysis of the crash box impact process was carried out with HyperMesh and LS-DYNA. The influence of each modeling parameter was analyzed through mathematical analytical solutions and comparison with tests, which ensured that the model was accurate. The combination of experimental and simulation results identified the weak parts of the crash box structure with respect to crashworthiness, and methods for improving the crashworthiness of the crash box were discussed. The analysis results obtained from the numerical simulation of the impact process were used to optimize the design of the crash box, helping to improve the vehicle structure and to minimize losses from collision accidents. The work also provides a useful method for further research on automobile collisions.
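
    Crashworthiness of an energy-absorbing component is commonly summarized by the energy absorbed and the mean crushing force extracted from the force-displacement curve. A small post-processing sketch (generic trapezoidal integration, not tied to any particular solver output format):

```python
def absorbed_energy(displacement, force):
    """Energy absorbed by the crash box: trapezoidal integration of the
    measured (or simulated) force-displacement curve."""
    pairs = list(zip(displacement, force))
    return sum(0.5 * (f0 + f1) * (x1 - x0)
               for (x0, f0), (x1, f1) in zip(pairs, pairs[1:]))

def mean_crush_force(displacement, force):
    """Mean crushing force = absorbed energy / total crush distance."""
    return absorbed_energy(displacement, force) / (displacement[-1] - displacement[0])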

  1. Assessment of zero-equation SGS models for simulating indoor environment

    NASA Astrophysics Data System (ADS)

    Taghinia, Javad; Rahman, Md Mizanur; Tse, Tim K. T.

    2016-12-01

    The understanding of air-flow in enclosed spaces plays a key role in designing ventilation systems and indoor environments. From a computational fluid dynamics standpoint, large eddy simulation (LES) offers a suitable means to analyze complex flows with recirculation and streamline curvature effects, providing more robust and accurate details than Reynolds-averaged Navier-Stokes simulations. This work assesses the performance of two zero-equation sub-grid scale models: the Rahman-Agarwal-Siikonen-Taghinia (RAST) model with a single grid-filter and the dynamic Smagorinsky model with grid-filter and test-filter scales. This in turn allows a cross-comparison of the effect of two different LES methods in simulating indoor air-flows with forced and mixed (natural + forced) convection. A better performance against experiments is indicated with the RAST model in wall-bounded non-equilibrium indoor air-flows; this is due to its sensitivity toward both the shear and vorticity parameters.
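
    For readers unfamiliar with zero-equation SGS models: the classic (static) Smagorinsky model, from which the dynamic variant studied here descends, computes an eddy viscosity algebraically from the resolved strain rate with no extra transport equations. A minimal sketch (illustrative only; the paper's RAST and dynamic-Smagorinsky formulations differ in how the coefficient is obtained):

```python
import math

def smagorinsky_viscosity(grad_u, delta, c_s=0.17):
    """Zero-equation Smagorinsky SGS eddy viscosity:
        nu_t = (C_s * Delta)^2 * |S|,   |S| = sqrt(2 S_ij S_ij),
    where S is the symmetric part of the resolved velocity gradient
    grad_u (a 3x3 nested list) and Delta is the filter width."""
    S = [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(3)]
         for i in range(3)]
    mag = math.sqrt(2.0 * sum(S[i][j] ** 2 for i in range(3) for j in range(3)))
    return (c_s * delta) ** 2 * mag
```

    The dynamic model replaces the fixed c_s with a coefficient computed on the fly from a test filter, which is what makes it better behaved near walls.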

  2. Space station dynamics, attitude control and momentum management

    NASA Technical Reports Server (NTRS)

    Sunkel, John W.; Singh, Ramen P.; Vengopal, Ravi

    1989-01-01

    The Space Station Attitude Control System software test-bed provides a rigorous environment for the design, development and functional verification of GN and C algorithms and software. The approach taken for the simulation of the vehicle dynamics and environmental models using a computationally efficient algorithm is discussed. The simulation includes capabilities for docking/berthing dynamics, prescribed motion dynamics associated with the Mobile Remote Manipulator System (MRMS) and microgravity disturbances. The vehicle dynamics module interfaces with the test-bed through the central Communicator facility which is in turn driven by the Station Control Simulator (SCS) Executive. The Communicator addresses issues such as the interface between the discrete flight software and the continuous vehicle dynamics, and multi-programming aspects such as the complex flow of control in real-time programs. Combined with the flight software and redundancy management modules, the facility provides a flexible, user-oriented simulation platform.

  3. On time discretizations for the simulation of the batch settling-compression process in one dimension.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Mejías, Camilo

    2016-01-01

    The main purpose of the recently introduced Bürger-Diehl simulation model for secondary settling tanks was to resolve spatial discretization problems when both hindered settling and the phenomena of compression and dispersion are included. Straightforward time integration unfortunately means long computational times. The next step in the development is to introduce and investigate time-integration methods for more efficient simulations, but where other aspects such as implementation complexity and robustness are equally considered. This is done for batch settling simulations. The key findings are partly a new time-discretization method and partly its comparison with other specially tailored and standard methods. Several advantages and disadvantages for each method are given. One conclusion is that the new linearly implicit method is easier to implement than another one (semi-implicit method), but less efficient based on two types of batch sedimentation tests.
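
    The trade-off between explicit and linearly implicit time stepping that motivates this work can be seen on the stiff scalar test equation y' = -lam*y (a standard illustration, not the settling model itself): explicit Euler blows up for large steps, while the linearly implicit update stays stable at the cost of a linear solve per step.

```python
def explicit_euler(lam, y0, dt, steps):
    """Explicit Euler for y' = -lam * y; diverges when dt > 2 / lam."""
    y = y0
    for _ in range(steps):
        y = y + dt * (-lam * y)
    return y

def linearly_implicit_euler(lam, y0, dt, steps):
    """Linearly implicit (backward) Euler for y' = -lam * y: each step
    solves (1 + dt * lam) * y_new = y_old, stable for any dt > 0."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + dt * lam)
    return y
```

    For the settling-compression PDE the "linear solve" is a banded system over the spatial grid rather than a scalar division, but the stability picture is the same.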

  4. Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne

    We present a new approach to improve the convergence of Monte Carlo (MC) simulations of molecular systems belonging to complex energetic landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for proper convergence in phase-space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked against conventional procedures on a model of melt linear polyethylene. We record significant improvements in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies may offer interesting physical insight into the particular systems simulated. This last aspect should provide a new tool for designing more efficient new MC moves.
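    The core idea, dynamically reallocating move frequencies from their measured past efficiency, can be sketched in a few lines. Here the acceptance rate stands in for the paper's phase-space sampling criteria, and the move names and floor parameter are invented for illustration:

```python
import random

class AdaptiveMoveScheduler:
    """Dynamically reallocate MC move frequencies from observed efficiency.
    Acceptance rate is used as the efficiency measure here -- a simple
    stand-in for the sampling criteria defined in the paper."""

    def __init__(self, moves, floor=0.05):
        self.moves = list(moves)
        self.trials = {m: 1 for m in self.moves}    # optimistic prior
        self.accepts = {m: 1 for m in self.moves}
        self.floor = floor                          # keep every move alive

    def pick(self):
        rates = {m: self.accepts[m] / self.trials[m] for m in self.moves}
        total = sum(rates.values())
        weights = [max(self.floor, rates[m] / total) for m in self.moves]
        return random.choices(self.moves, weights=weights)[0]

    def record(self, move, accepted):
        self.trials[move] += 1
        if accepted:
            self.accepts[move] += 1
```

    A move that keeps being rejected is tried less and less often, but the floor keeps it alive in case it becomes useful later in the run.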

  5. Modeling, simulation, and analysis of optical remote sensing systems

    NASA Technical Reports Server (NTRS)

    Kerekes, John Paul; Landgrebe, David A.

    1989-01-01

    Remote sensing of the Earth's resources from space-based sensors has evolved in the past 20 years from a scientific experiment to a commonly used technological tool. The scientific applications and engineering aspects of remote sensing systems have been studied extensively. However, most of these studies have been aimed at understanding individual aspects of the remote sensing process, while relatively few have studied their interrelations. A motivation for studying these interrelationships has arisen with the advent of highly sophisticated configurable sensors as part of the Earth Observing System (EOS) proposed by NASA for the 1990s. Two approaches to investigating remote sensing systems are developed. In one approach, detailed models of the scene, the sensor, and the processing aspects of the system are implemented in a discrete simulation. This approach is useful in creating simulated images with desired characteristics for use in sensor or processing algorithm development. A less complete, but computationally simpler, method based on a parametric model of the system is also developed. In this analytical model the various informational classes are parameterized by their spectral mean vector and covariance matrix. These class statistics are modified by models for the atmosphere, the sensor, and processing algorithms, and an estimate is made of the resulting classification accuracy among the informational classes. Application of these models is made to the study of the proposed High Resolution Imaging Spectrometer (HRIS). The interrelationships among observational conditions, sensor effects, and processing choices are investigated, with several interesting results.
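    The parametric approach, pushing class mean vectors and covariance matrices through system models and estimating the resulting separability, can be sketched as follows. The linear sensor model and the Bhattacharyya error bound below are standard textbook choices, not necessarily the report's exact formulation:

```python
import numpy as np

def transform_class_stats(mu, Sigma, A, b, noise_cov):
    """Push a class's mean and covariance through a linear sensor model
    y = A x + b + n, with n zero-mean noise of covariance noise_cov."""
    return A @ mu + b, A @ Sigma @ A.T + noise_cov

def bhattacharyya_error_bound(mu1, S1, mu2, S2):
    """Upper bound on the pairwise Bayes error between two Gaussian classes."""
    S = 0.5 * (S1 + S2)
    d = mu1 - mu2
    B = 0.125 * d @ np.linalg.solve(S, d) + 0.5 * np.log(
        np.linalg.det(S) / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return 0.5 * np.exp(-B)
```

    Classes whose statistics remain well separated after the atmosphere, sensor, and processing models are applied keep a small error bound; identical class statistics give the chance level of 0.5.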

  6. High performance transcription factor-DNA docking with GPU computing

    PubMed Central

    2012-01-01

    Background: Protein-DNA docking is a very challenging problem in structural bioinformatics and has important implications in a number of applications, such as structure-based prediction of transcription factor binding sites and rational drug design. Protein-DNA docking is computationally demanding due to the high cost of energy calculation and the statistical nature of conformational sampling algorithms. More importantly, experiments show that the docking quality depends on the coverage of the conformational sampling space. It is therefore desirable to accelerate the computation of the docking algorithm, not only to reduce computing time, but also to improve docking quality. Methods: In an attempt to accelerate the sampling process and to improve the docking performance, we developed a graphics processing unit (GPU)-based protein-DNA docking algorithm. The algorithm employs a potential-based energy function to describe the binding affinity of a protein-DNA pair, and integrates Monte Carlo simulation and a simulated annealing method to search through the conformational space. Algorithmic techniques were developed to improve the computation efficiency and scalability on GPU-based high performance computing systems. Results: The effectiveness of our approach is tested on a non-redundant set of 75 TF-DNA complexes and a newly developed TF-DNA docking benchmark. We demonstrated that the GPU-based docking algorithm can significantly accelerate the simulation process and thereby improve the chance of finding near-native TF-DNA complex structures. This study also suggests that further improvement in protein-DNA docking research would require efforts from two integral aspects: improvement in computation efficiency and energy function design. Conclusions: We present a high performance computing approach for improving the prediction accuracy of protein-DNA docking. The GPU-based docking algorithm accelerates the search of the conformational space and thus increases the chance of finding more near-native structures. To the best of our knowledge, this is the first ad hoc effort applying GPUs or GPU clusters to the protein-DNA docking problem. PMID:22759575
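    The search scheme described, Metropolis sampling wrapped in simulated annealing, is generic enough to sketch. The energy landscape and move set below are one-dimensional toy stand-ins for the docking potential and rigid-body moves:

```python
import math, random

def anneal(energy, propose, x0, t0=2.0, cooling=0.999, steps=2000, seed=7):
    """Metropolis sampling under a geometric cooling schedule; returns the
    lowest-energy configuration visited."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    temp = t0
    for _ in range(steps):
        y = propose(x, rng)
        ey = energy(y)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if ey <= e or rng.random() < math.exp(-(ey - e) / temp):
            x, e = y, ey
            if e < best_e:
                best_x, best_e = x, e
        temp *= cooling
    return best_x, best_e

# Toy 1D "binding energy" landscape with its global minimum near x = 2.5
best_x, best_e = anneal(lambda x: (x - 2.0) ** 2 - math.cos(5 * x),
                        lambda x, rng: x + rng.gauss(0.0, 0.5), x0=8.0)
```

    On a GPU, many such chains run in parallel, which is what widens the coverage of the conformational space in the study.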

  7. The Core Avionics System for the DLR Compact-Satellite Series

    NASA Astrophysics Data System (ADS)

    Montenegro, S.; Dittrich, L.

    2008-08-01

    The Standard Satellite Bus's core avionics system is a further step in the development line of the software and hardware architecture first used in the bispectral infrared detector (BIRD) mission. The next step improves the dependability, flexibility and simplicity of the whole core avionics system. Important aspects of this concept were already implemented, simulated and tested in other ESA and industrial projects; therefore we can say the basic concept is proven. This paper deals with different aspects of core avionics development and proposes an extension to the existing core avionics system of BIRD to meet current and future requirements regarding the flexibility, availability and reliability of small satellites and the continuously increasing demand for mass memory and computational power.

  8. Mathematical and Computational Modeling for Tumor Virotherapy with Mediated Immunity.

    PubMed

    Timalsina, Asim; Tian, Jianjun Paul; Wang, Jin

    2017-08-01

    We propose a new mathematical modeling framework based on partial differential equations to study tumor virotherapy with mediated immunity. The model incorporates both innate and adaptive immune responses and represents the complex interaction among tumor cells, oncolytic viruses, and immune systems on a domain with a moving boundary. Using carefully designed computational methods, we conduct extensive numerical simulations of the model. The results allow us to examine tumor development under a wide range of settings and provide insight into several important aspects of the virotherapy, including the dependence of its efficacy on a few key parameters and the delay in the adaptive immunity. Our findings also suggest possible ways to improve the virotherapy for tumor treatment.

  9. Exploring the early steps of amyloid peptide aggregation by computers.

    PubMed

    Mousseau, Normand; Derreumaux, Philippe

    2005-11-01

    The assembly of normally soluble proteins into amyloid fibrils is a hallmark of neurodegenerative diseases. Because protein aggregation is very complex, involving a variety of oligomeric metastable intermediates, the detailed aggregation paths and structural characterization of the intermediates remain to be determined. Yet, there is strong evidence that these oligomers, which form early in the process of fibrillogenesis, are cytotoxic. In this paper, we review our current understanding of the underlying factors that promote the aggregation of peptides into amyloid fibrils. We focus here on the structural and dynamic aspects of the aggregation as observed in state-of-the-art computer simulations of amyloid-forming peptides with an emphasis on the activation-relaxation technique.

  10. HYDRODYNAMIC SIMULATION OF THE UPPER POTOMAC ESTUARY.

    USGS Publications Warehouse

    Schaffranck, Raymond W.

    1986-01-01

    Hydrodynamics of the upper extent of the Potomac Estuary between Indian Head and Morgantown, Md., are simulated using a two-dimensional model. The model computes water-surface elevations and depth-averaged velocities by numerically integrating finite-difference forms of the equations of mass and momentum conservation using the alternating-direction implicit method. The fundamental, non-linear, unsteady-flow equations upon which the model is formulated include additional terms to account for Coriolis acceleration and meteorological influences. Preliminary model/prototype data comparisons show agreement to within 9% for tidal flow volumes and phase differences within the measured-data recording interval. Use of the model to investigate the hydrodynamics and certain aspects of transport within this Potomac Estuary reach is demonstrated.
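    The alternating-direction implicit (ADI) idea used above is easiest to see on a simpler equation than the shallow-water system. The sketch below applies one Peaceman-Rachford ADI step to 2D diffusion with zero boundaries, a stand-in for the estuary equations: each half-step is implicit in one direction only, so only cheap tridiagonal systems are solved:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a = sub-, b = main, c = super-diagonal)."""
    n = len(d)
    b = b.astype(float).copy()
    d = d.astype(float).copy()
    for i in range(1, n):                     # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on a square grid
    with zero Dirichlet boundaries; r = dt/(2*h^2)."""
    n = u.shape[0]
    sub = np.full(n - 2, -r)
    diag = np.full(n - 2, 1 + 2 * r)
    sup = np.full(n - 2, -r)
    half = np.zeros_like(u)
    for j in range(1, n - 1):                 # sweep 1: implicit in x, explicit in y
        rhs = u[1:-1, j] + r * (u[1:-1, j + 1] - 2 * u[1:-1, j] + u[1:-1, j - 1])
        half[1:-1, j] = thomas(sub, diag, sup, rhs)
    out = np.zeros_like(u)
    for i in range(1, n - 1):                 # sweep 2: implicit in y, explicit in x
        rhs = half[i, 1:-1] + r * (half[i + 1, 1:-1] - 2 * half[i, 1:-1] + half[i - 1, 1:-1])
        out[i, 1:-1] = thomas(sub, diag, sup, rhs)
    return out
```

    The same direction-splitting structure carries over to the coupled mass and momentum equations, which is what makes ADI attractive for estuary-scale grids.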

  11. User's guide for the IEBT application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartoletti, T

    INFOSEC Experience-Based Training (IEBT) is a simulation and modeling approach to education in the arena of information security issues and their application to system-specific operations. The IEBT philosophy is that "experience is the best teacher". This approach to computer-based training aims to bridge the gap between unappealing "read the text, answer the questions" types of training (largely a test of short-term memory) and the far more costly, time-consuming and inconvenient "real hardware" laboratory experience. Simulation and modeling support this bridge by allowing the critical or salient features to be exercised while avoiding those aspects of a real-world experience unrelated to the training goal.

  12. Saturated Widths of Magnetic Islands in Tokamak Discharges

    NASA Astrophysics Data System (ADS)

    Halpern, F.; Pankin, A. Y.

    2005-10-01

    The new ISLAND module described in reference [1] implements a quasi-linear model to compute the widths of multiple magnetic islands driven by saturated tearing modes in toroidal plasmas of arbitrary aspect ratio and cross-sectional shape. The distortion of the island shape caused by the radial variation in the perturbation is computed in the new module. In transport simulations, the enhanced transport caused by the magnetic islands has the effect of flattening the pressure and current density profiles. This self-consistent treatment of the magnetic islands alters the development of the plasma profiles. In addition, it is found that islands closer to the magnetic axis influence the evolution of islands further out in the plasma. In order to investigate such phenomena, the ISLAND module is used within the BALDUR predictive modeling code to compute the widths of multiple magnetic islands in tokamak discharges. The interaction between the islands and sawtooth crashes is examined in simulations of DIII-D and JET discharges. The module is used to compute saturated neoclassical tearing mode island widths for multiple modes in ITER. Preliminary results for island widths in ITER are consistent with those presented [2] by Hegna. [1] F.D. Halpern, G. Bateman, A.H. Kritz and A.Y. Pankin, "The ISLAND Module for Computing Magnetic Island Widths in Tokamaks," submitted to J. Plasma Physics (2005). [2] C.C. Hegna, 2002 Fusion Snowmass Meeting.

  13. Multigrid treatment of implicit continuum diffusion

    NASA Astrophysics Data System (ADS)

    Francisquez, Manaure; Zhu, Ben; Rogers, Barrett

    2017-10-01

    Implicit treatment of diffusive terms of various differential orders common in continuum mechanics modeling, such as computational fluid dynamics, is investigated with spectral and multigrid algorithms in non-periodic 2D domains. In doubly periodic time-dependent problems these terms can be efficiently and implicitly handled by spectral methods, but in non-periodic systems solved with distributed-memory parallel computing and 2D domain decomposition, this efficiency is lost for large numbers of processors. We built and present here a multigrid algorithm for these types of problems which outperforms a spectral solution that employs the highly optimized FFTW library. This multigrid algorithm is not only suitable for high performance computing but may also be able to efficiently treat implicit diffusion of arbitrary order by introducing auxiliary equations of lower order. We test these solvers for fourth- and sixth-order diffusion with idealized harmonic test functions as well as a turbulent 2D magnetohydrodynamic simulation. It is also shown that an anisotropic operator without cross-terms can improve model accuracy and speed, and we examine the impact that the various diffusion operators have on the energy, the enstrophy, and the qualitative aspects of a simulation. This work was supported by DOE-SC-0010508. This research used resources of the National Energy Research Scientific Computing Center (NERSC).
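    The multigrid machinery the authors build for high-order diffusion can be illustrated with the textbook 1D Poisson version of a V-cycle: smooth, restrict the residual to a coarser grid, recurse, prolong the correction back, and smooth again. This is a generic sketch, not the paper's solver:

```python
import numpy as np

def relax(u, f, h, sweeps=3, w=2 / 3):
    # weighted-Jacobi smoothing for -u'' = f with zero Dirichlet boundaries
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def v_cycle(u, f, h):
    """One V-cycle on a grid of 2^k + 1 points."""
    u = relax(u, f, h)
    if u.size <= 3:
        return u
    r = np.zeros_like(u)                        # residual of -u'' = f
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    rc = np.zeros((u.size - 1) // 2 + 1)        # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)  # coarse-grid correction
    e = np.zeros_like(u)                        # linear prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return relax(u + e, f, h)
```

    The cost per cycle is linear in the number of unknowns, which is why the approach scales where a distributed FFT does not; higher-order operators can be folded in through the lower-order auxiliary equations mentioned in the abstract.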

  14. Simulation model for port shunting yards

    NASA Astrophysics Data System (ADS)

    Rusca, A.; Popa, M.; Rosca, E.; Rosca, M.; Dragu, V.; Rusca, F.

    2016-08-01

    Sea ports are important nodes in the supply chain, joining two high-capacity transport modes: rail and maritime transport. The huge cargo flows transiting a port require high-capacity constructions and installations such as berths, large-capacity cranes, and shunting yards. However, the specificity of port shunting yards raises several problems: limited access, since these are terminus stations of the rail network; the input and output of large transit flows of cargo relative to the infrequent departures and arrivals of ships; and limited land availability for implementing solutions to serve these flows. It is necessary to identify technological solutions that address these problems. The paper proposes a simulation model, developed with the ARENA computer simulation software, suitable for shunting yards that serve sea ports with access to the rail network. It investigates the principal aspects of shunting yards and adequate measures to increase their transit capacity. The operating capacity of the shunting-yard sub-system is assessed taking into consideration the required operating standards, and measures of performance of the railway station (e.g. waiting time for freight wagons, number of railway lines in the station, storage area, etc.) are computed. The conclusions and results drawn from the simulation help transport and logistics specialists to test proposals for improving port management.
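    The queueing behaviour such a model captures can be sketched with a toy discrete-event simulation; the rates, track counts and first-come-first-served service discipline below are illustrative assumptions, not taken from the paper:

```python
import heapq
import random

def simulate_yard(n_wagons=500, arrival_rate=1 / 6.0, service_rate=1 / 5.0,
                  n_tracks=2, seed=42):
    """Toy discrete-event model of a shunting yard: wagons arrive (Poisson),
    queue FCFS for one of n_tracks sorting tracks, and occupy a track for an
    exponential service time. Rates are per minute and purely illustrative."""
    rng = random.Random(seed)
    t = 0.0
    free = [0.0] * n_tracks            # time at which each track next frees up
    heapq.heapify(free)
    waits = []
    for _ in range(n_wagons):
        t += rng.expovariate(arrival_rate)      # next wagon arrival
        start = max(t, heapq.heappop(free))     # earliest available track
        waits.append(start - t)
        heapq.heappush(free, start + rng.expovariate(service_rate))
    return sum(waits) / len(waits)              # mean waiting time, minutes
```

    Adding sorting tracks, one of the kinds of measures the paper evaluates, can only shorten waits for the same arrival stream, and the model quantifies by how much.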

  15. Computational Fluid Dynamics (CFD) Simulation of Drag Reduction by Riblets on Automobile

    NASA Astrophysics Data System (ADS)

    Ghazali, N. N. N.; Yau, Y. H.; Badarudin, A.; Lim, Y. C.

    2010-05-01

    One of the ongoing automotive technological developments is the reduction of aerodynamic drag, because this has a direct impact on fuel consumption, which is a major topic due to its influence on many other requirements. Passive drag-reduction techniques are the most portable and feasible to implement in real applications. One passive technique is longitudinal microgrooves aligned in the flow direction, known as riblets. In this study, the simulation of turbulent flow over an automobile in a virtual wind tunnel has been conducted by computational fluid dynamics (CFD). Two important aspects of this study are the drag-reduction effect of riblets on a smooth-surfaced automobile, and the influence of riblet position and geometry on drag reduction. The simulation involves three stages: geometry modeling, meshing, and solving and analysis. The simulation results show that the attachment of riblets on the rear roof surface reduces the drag coefficient by 2.74%. By adjusting the attachment position of the riblet film, reduction rates in the range 0.5%-9.51% are obtained, with the top middle roof position giving the largest effect. Four riblet geometries are investigated, among which the semi-hexagonal trapezoidal riblet is considered the most effective. Drag reduction rates are found ranging from -3.34% to 6.36%.

  16. Anthropomorphic thorax phantom for cardio-respiratory motion simulation in tomographic imaging

    NASA Astrophysics Data System (ADS)

    Bolwin, Konstantin; Czekalla, Björn; Frohwein, Lynn J.; Büther, Florian; Schäfers, Klaus P.

    2018-02-01

    Patient motion during medical imaging using techniques such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), or single emission computed tomography (SPECT) is well known to degrade images, leading to blurring effects or severe artifacts. Motion correction methods try to overcome these degrading effects. However, they need to be validated under realistic conditions. In this work, a sophisticated anthropomorphic thorax phantom is presented that combines several aspects of a simulator for cardio-respiratory motion. The phantom allows us to simulate various types of cardio-respiratory motions inside a human-like thorax, including features such as inflatable lungs, beating left ventricular myocardium, respiration-induced motion of the left ventricle, moving lung lesions, and moving coronary artery plaques. The phantom is constructed to be MR-compatible. This means that we can not only perform studies in PET, SPECT and CT, but also inside an MRI system. The technical features of the anthropomorphic thorax phantom Wilhelm are presented with regard to simulating motion effects in hybrid emission tomography and radiotherapy. This is supplemented by a study on the detectability of small coronary plaque lesions in PET/CT under the influence of cardio-respiratory motion, and a study on the accuracy of left ventricular blood volumes.

  17. Multiscale Simulations of Protein Landscapes: Using Coarse Grained Models as Reference Potentials to Full Explicit Models

    PubMed Central

    Messer, Benjamin M.; Roca, Maite; Chu, Zhen T.; Vicatos, Spyridon; Kilshtain, Alexandra Vardi; Warshel, Arieh

    2009-01-01

    Evaluating the free energy landscape of proteins and the corresponding functional aspects presents a major challenge for computer simulation approaches. This challenge is due to the complexity of the landscape and the enormous computer time needed for converging simulations. The use of simplified coarse-grained (CG) folding models offers an effective way of sampling the landscape, but such a treatment may not give the correct description of the effect of the actual protein residues. A general way around this problem, put forward in our early work (Fan et al, Theor Chem Acc (1999) 103:77-80), uses the CG model as a reference potential for free energy calculations of different properties of the explicit model. This method is refined and extended here, focusing on improving the electrostatic treatment and on demonstrating key applications. These applications include: evaluation of changes in folding energy upon mutation, calculation of transition-state binding free energies (which are crucial for rational enzyme design), evaluation of catalytic landscapes, and simulation of time-dependent responses to pH changes. Furthermore, the general potential of our approach for overcoming major challenges in studies of structure-function correlation in proteins is discussed. PMID:20052756
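    The reference-potential idea rests on free energy perturbation: sample the cheap (coarse grained) potential, then reweight to the expensive (explicit) one. A one-dimensional toy with harmonic potentials, where the exact answer is 0.5 ln 2, shows the Zwanzig estimator; the potentials and sample counts are illustrative, not the paper's:

```python
import numpy as np

def fep_from_reference(samples, e_ref, e_full, kT=1.0):
    """Zwanzig free-energy perturbation: configurations sampled on a cheap
    reference potential are reweighted to the full potential."""
    dE = e_full(samples) - e_ref(samples)
    return -kT * np.log(np.mean(np.exp(-dE / kT)))

# 1D toy: reference E0 = x^2/2 (sampled exactly as a unit Gaussian), full E1 = x^2
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
dF = fep_from_reference(x, lambda s: 0.5 * s ** 2, lambda s: s ** 2)
# exact answer: 0.5 * ln 2
```

    The estimator converges quickly only when the reference and full ensembles overlap well, which is exactly why the quality of the CG reference matters in the multiscale scheme.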

  18. Finite element analysis simulations for ultrasonic array NDE inspections

    NASA Astrophysics Data System (ADS)

    Dobson, Jeff; Tweedie, Andrew; Harvey, Gerald; O'Leary, Richard; Mulholland, Anthony; Tant, Katherine; Gachagan, Anthony

    2016-02-01

    Advances in manufacturing techniques and materials have led to an increase in the demand for reliable and robust inspection techniques to maintain safety-critical features. The application of modelling methods to develop and evaluate inspections is becoming an essential tool for the NDE community. Current analytical methods are inadequate for simulation of arbitrary components and heterogeneous materials, such as anisotropic welds or composite structures. Finite element analysis (FEA) software, such as PZFlex, can simulate the inspection of these arrangements, making it possible to economically prototype and evaluate improved NDE methods. FEA is often seen as computationally expensive for ultrasound problems; however, advances in computing power have made it a more viable tool. This paper aims to illustrate the capability of appropriate FEA to produce accurate simulations of ultrasonic array inspections - minimizing the requirement for expensive test-piece fabrication. Validation is afforded via corroboration of the FE-derived and experimentally generated data sets for a test-block comprising 1D and 2D defects. The modelling approach is extended to consider the more troublesome aspects of heterogeneous materials, where defect dimensions can be of the same length scale as the grain structure. The model is used to facilitate the implementation of new ultrasonic array inspection methods for such materials. This is exemplified by considering the simulation of ultrasonic NDE in a weld structure in order to assess new approaches to imaging such structures.

  19. Creating and Testing Simulation Software

    NASA Technical Reports Server (NTRS)

    Heinich, Christina M.

    2013-01-01

    The goal of this project is to learn about the software development process, specifically the process of testing and fixing components of the software. The paper will cover techniques for testing code, and the benefits of using one style of testing over another. It will also discuss the overall software design and development lifecycle, and how code testing plays an integral role in it. Code is notorious for needing to be debugged due to coding errors or faulty program design. Writing tests, either before or during program creation, that cover all aspects of the code provides a relatively easy way to locate and fix errors, which will in turn decrease the need to fix a program after it is released for common use. The backdrop for this paper is the Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI), a project whose goal is to simulate a launch using simulated models of the ground systems and the connections between them and the control room. The simulations will be used for training and to ensure that all possible outcomes and complications are prepared for before the actual launch day. The code being tested is the Programmable Logic Controller Interface (PLCIF) code, the component responsible for transferring information from the models to the model Programmable Logic Controllers (PLCs), basic computers that are used for very simple tasks.
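    A test-first sketch in the spirit described: the helper below and its tests are invented for illustration (the actual PLCIF code is not shown in the paper), but they show how endpoint and out-of-range cases can be pinned down before or during implementation:

```python
import unittest

def scale_command(raw, lo, hi):
    """Clamp a 12-bit sensor count and scale it into engineering units.
    (Hypothetical helper, standing in for a PLC interface conversion.)"""
    raw = max(0, min(raw, 4095))
    return lo + (hi - lo) * raw / 4095

class ScaleCommandTest(unittest.TestCase):
    def test_endpoints(self):
        self.assertEqual(scale_command(0, 0.0, 100.0), 0.0)
        self.assertEqual(scale_command(4095, 0.0, 100.0), 100.0)

    def test_out_of_range_is_clamped(self):
        self.assertEqual(scale_command(-5, 0.0, 100.0), 0.0)
        self.assertEqual(scale_command(9999, 0.0, 100.0), 100.0)
```

    Writing the clamping tests before the clamp exists forces the failure mode (a stuck or noisy sensor count) to be handled by design rather than patched after release.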

  20. Update to Computational Aspects of Nitrogen-Rich HEDMs

    DTIC Science & Technology

    2016-04-01

    ARL-TR-7656 ● APR 2016. US Army Research Laboratory. Update to "Computational Aspects of Nitrogen-Rich HEDMs" by Betsy M Rice, Edward FC Byrd, and William D Mattson, Weapons and Materials Research Directorate...

  1. Control aspects of quantum computing using pure and mixed states.

    PubMed

    Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J

    2012-10-13

    Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. Beyond that, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underlie the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: it is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems.
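    Gate synthesis, one of the control tasks reviewed, amounts to composing a target unitary from elementary rotations. A minimal numerical check using standard single-qubit identities (not the authors' algorithms): the Hadamard gate is realized, up to a global phase, by a z-rotation followed by a y-rotation:

```python
import numpy as np

def ry(theta):
    # rotation about the y-axis of the Bloch sphere
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(theta):
    # rotation about the z-axis of the Bloch sphere
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
U = ry(np.pi / 2) @ rz(np.pi)          # candidate two-pulse sequence

# gate fidelity is phase-insensitive: |Tr(U† H)| / 2 == 1 iff U = e^{ia} H
fidelity = abs(np.trace(U.conj().T @ H)) / 2
```

    The phase-insensitive fidelity is the natural figure of merit here, because a global phase is unobservable whether the state being steered is pure or mixed.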

  2. Control aspects of quantum computing using pure and mixed states

    PubMed Central

    Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J.

    2012-01-01

    Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. Beyond that, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underlie the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: it is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems. PMID:22946034

  3. Discovery of a regioselectivity switch in nitrating P450s guided by molecular dynamics simulations and Markov models

    NASA Astrophysics Data System (ADS)

    Dodani, Sheel C.; Kiss, Gert; Cahn, Jackson K. B.; Su, Ye; Pande, Vijay S.; Arnold, Frances H.

    2016-05-01

    The dynamic motions of protein structural elements, particularly flexible loops, are intimately linked with diverse aspects of enzyme catalysis. Engineering of these loop regions can alter protein stability, substrate binding and even dramatically impact enzyme function. When these flexible regions are unresolvable structurally, computational reconstruction in combination with large-scale molecular dynamics simulations can be used to guide the engineering strategy. Here we present a collaborative approach that consists of both experiment and computation and led to the discovery of a single mutation in the F/G loop of the nitrating cytochrome P450 TxtE that simultaneously controls loop dynamics and completely shifts the enzyme's regioselectivity from the C4 to the C5 position of L-tryptophan. Furthermore, we find that this loop mutation is naturally present in a subset of homologous nitrating P450s and confirm that these uncharacterized enzymes exclusively produce 5-nitro-L-tryptophan, a previously unknown biosynthetic intermediate.
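    On the Markov-model side of such workflows, the standard estimator builds a transition matrix from a discretized trajectory: count lagged transitions between states, then normalize. This generic sketch (toy two-state trajectory, symmetrized counts) is not the study's actual pipeline:

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Transition matrix of a Markov state model from a state-index
    trajectory, counting transitions at the given lag time."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i, j] += 1.0
    counts += counts.T            # symmetrize counts (crude detailed balance)
    return counts / counts.sum(axis=1, keepdims=True)

# toy trajectory hopping between an "open" (0) and "closed" (1) loop state
traj = [0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0]
T = estimate_msm(traj, 2)
```

    With many short MD trajectories, as in such loop-dynamics studies, the same counting is simply accumulated across all of them before normalizing.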

  4. Bioinspired decision architectures containing host and microbiome processing units.

    PubMed

    Heyde, K C; Gallagher, P W; Ruder, W C

    2016-09-27

    Biomimetic robots have been used to explore and explain natural phenomena ranging from the coordination of ants to the locomotion of lizards. Here, we developed a series of decision architectures inspired by the information exchange between a host organism and its microbiome. We first modeled the biochemical exchanges of a population of synthetically engineered E. coli. We then built a physical, differential-drive robot that contained an integrated, onboard computer vision system. A relay was established between the simulated population of cells and the robot's microcontroller. By placing the robot within a target-containing two-dimensional arena, we explored how different aspects of the simulated cells and the robot's microcontroller could be integrated to form hybrid decision architectures. We found that distinct decision architectures allow us to develop models of computation with specific strengths such as runtime efficiency or minimal memory allocation. Taken together, our hybrid decision architectures provide a new strategy for developing bioinspired control systems that integrate both living and nonliving components.

  5. On the interpretation of kernels - Computer simulation of responses to impulse pairs

    NASA Technical Reports Server (NTRS)

    Hung, G.; Stark, L.; Eykhoff, P.

    1983-01-01

    A method is presented for the use of a unit impulse response and responses to impulse pairs of variable separation in the calculation of the second-degree kernels of a quadratic system. A quadratic system may be built from simple linear terms of known dynamics and a multiplier. Computer simulation results on quadratic systems with building elements of various time constants indicate that the larger time-constant term before multiplication dominates the envelope of the off-diagonal kernel curves as these move perpendicular to and away from the main diagonal. The smaller time-constant term before multiplication combines with the effect of the time constant after multiplication to dominate the kernel curves in the direction of the second-degree impulse response, i.e., parallel to the main diagonal. Such insight may be helpful in recognizing essential aspects of (second-degree) kernels; it may be used in simplifying the model structure and, perhaps, add to the physical/physiological understanding of the underlying processes.
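    The impulse-pair method itself can be sketched directly: for a quadratic system, the pair response minus the two single-impulse responses isolates the second-degree cross term. The toy system below (two first-order filters feeding a multiplier, with invented time constants) mirrors the construction described:

```python
import numpy as np

def kernel_slice_from_pairs(system, T, sep, eps=1.0):
    """Second-degree kernel slice via the impulse-pair method: y_pair minus
    the two single-impulse responses isolates the quadratic cross term."""
    def respond(times):
        u = np.zeros(T)
        for t in times:
            u[t] += eps
        return system(u)
    y1, y2, y12 = respond([0]), respond([sep]), respond([0, sep])
    return (y12 - y1 - y2) / (2.0 * eps ** 2)

# toy quadratic system: two first-order filters feeding a multiplier
T = 64
ha = 0.9 ** np.arange(T)      # slower branch (larger time constant)
hb = 0.6 ** np.arange(T)      # faster branch
def quadratic(u):
    a = np.convolve(u, ha)[:T]
    b = np.convolve(u, hb)[:T]
    return a * b

slice5 = kernel_slice_from_pairs(quadratic, T, sep=5)
```

    Sweeping the separation traces out the off-diagonal kernel curves whose envelopes, as the abstract notes, are dominated by the larger time-constant branch.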

  6. Neoclassical simulation of tokamak plasmas using the continuum gyrokinetic code TEMPEST.

    PubMed

    Xu, X Q

    2008-07-01

    We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field using a fully nonlinear (full-f) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a method of lines approach where the phase-space derivatives are discretized with finite differences, and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With a four-dimensional (ψ,θ,γ,μ) version of the TEMPEST code, we compute the radial particle and heat fluxes, the geodesic-acoustic mode, and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme for self-consistently studying important dynamical aspects of neoclassical transport and electric field in toroidal magnetic fusion devices.

  7. Neoclassical simulation of tokamak plasmas using the continuum gyrokinetic code TEMPEST

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.

    2008-07-01

    We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field using a fully nonlinear (full-f) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a method of lines approach where the phase-space derivatives are discretized with finite differences, and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With a four-dimensional (ψ,θ,γ,μ) version of the TEMPEST code, we compute the radial particle and heat fluxes, the geodesic-acoustic mode, and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme for self-consistently studying important dynamical aspects of neoclassical transport and electric field in toroidal magnetic fusion devices.

  8. Delayed fission and multifragmentation in sub-keV C60 - Au(0 0 1) collisions via molecular dynamics simulations: Mass distributions and activated statistical decay

    NASA Astrophysics Data System (ADS)

    Bernstein, V.; Kolodney, E.

    2017-10-01

    We have recently observed, both experimentally and computationally, the phenomenon of postcollision multifragmentation in sub-keV surface collisions of a C60 projectile. Namely, delayed multiparticle breakup of a strongly impact deformed and vibrationally excited large cluster collider into several large fragments, after leaving the surface. Molecular dynamics simulations with extensive statistics revealed a nearly simultaneous event, within a sub-psec time window. Here we study, computationally, additional essential aspects of this new delayed collisional fragmentation which were not addressed before. Specifically, we study here the delayed (binary) fission channel for different impact energies both by calculating mass distributions over all fission events and by calculating and analyzing lifetime distributions of the scattered projectile. We observe an asymmetric fission resulting in a most probable fission channel and we find an activated exponential (statistical) decay. Finally, we also calculate and discuss the fragment mass distribution in (triple) multifragmentation over different time windows, in terms of most abundant fragments.

  9. Free-Swinging Failure Tolerance for Robotic Manipulators

    NASA Technical Reports Server (NTRS)

    English, James

    1997-01-01

    Under this GSRP fellowship, software-based failure-tolerance techniques were developed for robotic manipulators. The focus was on failures characterized by the loss of actuator torque at a joint, called free-swinging failures. The research results spanned many aspects of the free-swinging failure-tolerance problem, from preparing for an expected failure to discovery of postfailure capabilities to establishing efficient methods to realize those capabilities. Developed algorithms were verified using computer-based dynamic simulations, and these were further verified using hardware experiments at Johnson Space Center.

  10. An Infrared Camera Simulation for Estimating Spatial Temperature Profiles and Signal-to-Noise Ratios of an Airborne Laser-Illuminated Target

    DTIC Science & Technology

    2007-06-01

    of SNR, she incorporated the effects that an InGaAs photovoltaic detector have in producing the signal along with the photon, Johnson, and shot noises ...the photovoltaic FPA detector modeled? • What detector noise sources limit the computed signal? 3.1 Modeling Methodology Two aspects in the IR camera...Another shot noise source in photovoltaic detectors is dark current. This current represents the current flowing in the detector when no optical radiation

  11. Speculations on the consequences to biology of space shuttle-associated increases in global UV-B radiation

    NASA Technical Reports Server (NTRS)

    Averner, M. M.; Macelroy, R. D.

    1977-01-01

    Various aspects of the impact of ozone depletion on the biosphere are assessed and discussed. Speculations on the factors which determine the extent and nature of biological damage due to an increased flux of ultra violet light are presented. It is concluded that a complete assessment must consider both direct effects (organisms) as well as indirect effects (ecosystems). The role of computer simulation of ecosystem models as a predictive tool is examined.

  12. US-Latin American Workshop on Molecular and Materials Sciences: Theoretical and Computational Aspects Held at the University of Florida, Gainesville, on February 8-10, 1994

    DTIC Science & Technology

    1994-08-09

    Observables During a Collision Inst. de Fisica, Cuernavaca, Mexico Ruben D. Santiago Acosta An Algebraic Model for 3-dimensional Atom-Diatom Inst C...STRUCTURES. MOLECULAR DYNAMICS SIMULATION M. C. Donnamaria and J. R. Grigera Instituto de Fisica de Liquidos y Sistemas Biologicos (IFLYSIB), CONICET...Crybiology, 1981, 18, 631. ACKNOWLEDGMENTS This work has been partially funded by the Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET) of

  13. Cultures of simulations vs. cultures of calculations? The development of simulation practices in meteorology and astrophysics

    NASA Astrophysics Data System (ADS)

    Sundberg, Mikaela

    While the distinction between theory and experiment is often used to discuss the place of simulation from a philosophical viewpoint, other distinctions are possible from a sociological perspective. Turkle (1995) distinguishes between cultures of calculation and cultures of simulation and relates these cultures to the distinction between modernity and postmodernity, respectively. What can we understand about contemporary simulation practices in science by looking at them from the point of view of these two computer cultures? What new questions does such an analysis raise for further studies? On the basis of two case studies, the present paper compares and discusses simulation activities in astrophysics and meteorology. It argues that simulation practices manifest aspects of both of these cultures simultaneously, but in different situations. By employing the dichotomies surface/depth, play/seriousness, and extreme/reasonable to characterize and operationalize cultures of calculation and cultures of simulation as sensitizing concepts, the analysis shows how simulation code work shifts from development to use, the importance of, but also resistance towards, extensive visualization, and how simulation modelers play with extreme values yet also try to achieve reasonable results compared to observations.

  14. Consortium for Mathematics in the Geosciences (CMG++): Promoting the application of mathematics, statistics, and computational sciences to the geosciences

    NASA Astrophysics Data System (ADS)

    Mead, J.; Wright, G. B.

    2013-12-01

    The collection of massive amounts of high quality data from new and greatly improved observing technologies and from large-scale numerical simulations is drastically improving our understanding and modeling of the earth system. However, these datasets are also revealing important knowledge gaps and limitations of our current conceptual models for explaining key aspects of these new observations. These limitations are impeding progress on questions that have both fundamental scientific and societal significance, including climate and weather, natural disaster mitigation, earthquake and volcano dynamics, earth structure and geodynamics, resource exploration, and planetary evolution. New conceptual approaches and numerical methods for characterizing and simulating these systems are needed - methods that can handle processes which vary through a myriad of scales in heterogeneous, complex environments. Additionally, as certain aspects of these systems may be observable only indirectly or not at all, new statistical methods are also needed. This type of research will demand integrating the expertise of geoscientists with that of mathematicians, statisticians, and computer scientists. If the past is any indicator, this interdisciplinary research will no doubt lead to advances in all these fields in addition to vital improvements in our ability to predict the behavior of the planetary environment. The Consortium for Mathematics in the Geosciences (CMG++) arose from two scientific workshops held at Northwestern and Princeton in 2011 and 2012 with participants from mathematics, statistics, geoscience and computational science. The mission of CMG++ is to accelerate the traditional interaction between people in these disciplines through the promotion of both collaborative research and interdisciplinary education. We will discuss current activities, describe how people can get involved, and solicit input from the broader AGU community.

  15. Atomistic study of mixing at high Z / low Z interfaces at Warm Dense Matter Conditions

    NASA Astrophysics Data System (ADS)

    Haxhimali, Tomorr; Glosli, James; Rudd, Robert; Lawrence Livermore National Laboratory Team

    2016-10-01

    We use atomistic simulations to study different aspects of mixing occurring at an initially sharp interface of high Z and low Z plasmas in the Warm/Hot Dense Matter regime. We consider a system of Diamond (the low Z component) in contact with Ag (the high Z component), which undergoes rapid isochoric heating from room temperature up to 10 eV, rapidly changing the solids into warm dense matter at solid density. We simulate the motion of ions via the screened Coulomb potential. The electric field, electron density, and ionization levels are computed on the fly by solving the Poisson equation. The spatially varying screening lengths computed from the electron cloud are included in this effective interaction; the electrons are not simulated explicitly. We compute the electric field generated at the Ag-C interface as well as the dynamics of the ions during the mixing process occurring at the plasma interface. Preliminary results indicate an anomalous transport of high Z ions (Ag) into the low Z component (C), a phenomenon that is partially related to the enhanced transport of ions due to the generated electric field. These results are in agreement with recent experimental observations on the Au-diamond plasma interface. This work was performed under the auspices of the US Dept. of Energy by Lawrence Livermore National Security, LLC under Contract DE-AC52-07NA27344.
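
    The screened Coulomb interaction mentioned above has a simple closed form. The sketch below (illustrative constants and units, not taken from the paper) evaluates the Yukawa pair potential and its radial force for a given screening length:

```python
import numpy as np

# Sketch of a screened Coulomb (Yukawa) pair interaction of the kind used to
# move the ions: V(r) = (Z1*Z2*e^2 / r) * exp(-r / lambda_s), with screening
# length lambda_s. Units and values here are illustrative, not from the paper.
E2 = 1.44  # e^2 / (4*pi*eps0), expressed in eV*nm for convenience

def yukawa_potential(r_nm, z1, z2, lam_nm):
    """Screened Coulomb potential energy (eV) between charges z1*e and z2*e."""
    return z1 * z2 * E2 * np.exp(-r_nm / lam_nm) / r_nm

def yukawa_force(r_nm, z1, z2, lam_nm):
    """Radial force magnitude -dV/dr (eV/nm); positive means repulsive."""
    return z1 * z2 * E2 * np.exp(-r_nm / lam_nm) * (1.0 / r_nm**2 + 1.0 / (lam_nm * r_nm))

# Screening suppresses the bare Coulomb tail at separations beyond lambda_s:
print(yukawa_potential(1.0, 1, 1, 0.1) / (E2 / 1.0))   # ratio = exp(-r/lambda)
```

    In the simulation described, the screening length itself varies in space with the local electron density, which is what couples the ion dynamics to the Poisson solve.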

  16. Computational fluid dynamics assessment: Volume 1, Computer simulations of the METC (Morgantown Energy Technology Center) entrained-flow gasifier: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celik, I.; Chattree, M.

    1988-07-01

    An assessment of the theoretical and numerical aspects of the computer code, PCGC-2, is made, and the results of the application of this code to the Morgantown Energy Technology Center (METC) advanced gasification facility entrained-flow reactor, ''the gasifier,'' are presented. PCGC-2 is a code suitable for simulating pulverized coal combustion or gasification under axisymmetric (two-dimensional) flow conditions. The governing equations for the gas and particulate phase have been reviewed. The numerical procedure and the related programming difficulties have been elucidated. A single-particle model similar to the one used in PCGC-2 has been developed, programmed, and applied to some simple situations in order to gain insight into the physics of coal particle heat-up, devolatilization, and char oxidation processes. PCGC-2 was applied to the METC entrained-flow gasifier to study numerically the flash pyrolysis of coal, and gasification of coal with steam or carbon dioxide. The results from the simulations are compared with measurements. The gas and particle residence times, particle temperature, and mass component history were also calculated and the results were analyzed. The results provide useful information for understanding the fundamentals of coal gasification and for assessment of experimental results performed using the reactor considered. 69 refs., 35 figs., 23 tabs.

  17. Simulation of an SEIR infectious disease model on the dynamic contact network of conference attendees

    PubMed Central

    2011-01-01

    Background The spread of infectious diseases crucially depends on the pattern of contacts between individuals. Knowledge of these patterns is thus essential to inform models and computational efforts. However, there are few empirical studies available that provide estimates of the number and duration of contacts between social groups. Moreover, their space and time resolutions are limited, so that data are not explicit at the person-to-person level, and the dynamic nature of the contacts is disregarded. In this study, we aimed to assess the role of data-driven dynamic contact patterns between individuals, and in particular of their temporal aspects, in shaping the spread of a simulated epidemic in the population. Methods We considered high-resolution data about face-to-face interactions between the attendees at a conference, obtained from the deployment of an infrastructure based on radiofrequency identification (RFID) devices that assessed mutual face-to-face proximity. The spread of epidemics along these interactions was simulated using an SEIR (Susceptible, Exposed, Infectious, Recovered) model, using both the dynamic network of contacts defined by the collected data, and two aggregated versions of such networks, to assess the role of the data temporal aspects. Results We show that, on the timescales considered, an aggregated network taking into account the daily duration of contacts is a good approximation to the full resolution network, whereas a homogeneous representation that retains only the topology of the contact network fails to reproduce the size of the epidemic. Conclusions These results have important implications for understanding the level of detail needed to correctly inform computational models for the study and management of real epidemics. Please see related article BMC Medicine, 2011, 9:88 PMID:21771290
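
    The SEIR dynamics on a time-stamped contact list can be sketched compactly. The toy example below uses randomly generated contacts, illustrative latent/infectious periods, and a transmission probability, none of which come from the conference data set:

```python
import random

# Toy SEIR epidemic over a time-stamped contact list: a minimal sketch of the
# kind of simulation described (S -> E on contact with an infectious node,
# E -> I after the latent period, I -> R after the infectious period).
random.seed(0)

N = 50                                   # number of individuals
contacts = [(t, random.randrange(N), random.randrange(N)) for t in range(200)]
latent, infectious, p_transmit = 5, 10, 0.8

state = {i: "S" for i in range(N)}       # S, E, I or R per node
since = {}                               # node -> time it entered E or I
state[0], since[0] = "I", 0              # patient zero

for t, a, b in contacts:                 # contacts are already time-ordered
    for n in (a, b):                     # progress compartments lazily
        if state[n] == "E" and t - since[n] >= latent:
            state[n], since[n] = "I", t
        if state[n] == "I" and t - since[n] >= infectious:
            state[n] = "R"
    for u, v in ((a, b), (b, a)):        # transmission along this contact
        if state[u] == "I" and state[v] == "S" and random.random() < p_transmit:
            state[v], since[v] = "E", t

sizes = {s: sum(1 for v in state.values() if v == s) for s in "SEIR"}
print(sizes)
```

    Replacing the event-by-event loop with a network aggregated over each day is precisely the approximation the study evaluates.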

  18. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling and examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPU using ARCHER and explore the potential optimization methods. Phase Space-based source modelling has been implemented. Good agreements were found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.

  19. Structure and modeling of turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novikov, E.A.

    The "vortex strings" scale l_s ≈ L·Re^(-3/10) (L is the external scale, Re the Reynolds number) is suggested as a grid scale for the large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for the large-eddy simulations (LES).

  20. The Flostation - an Immersive Cyberspace System

    NASA Technical Reports Server (NTRS)

    Park, Brian

    2006-01-01

    A flostation is a computer-controlled apparatus that, along with one or more computer(s) and other computer-controlled equipment, is part of an immersive cyberspace system. The system is said to be immersive in two senses of the word: (1) It supports the body in a modified form of the neutral posture experienced in zero gravity and (2) it is equipped with computer-controlled display equipment that helps to give the occupant of the chair a feeling of immersion in an environment that the system is designed to simulate. Neutral immersion was conceived during the Gemini program as a means of training astronauts for working in a zero-gravity environment. Current derivatives include neutral-buoyancy tanks and the KC-135 airplane, each of which mimics the effects of zero gravity. While these have performed well in simulating the shorter-duration flights typical of the space program to date, a training device that can take astronauts to the next level will be needed for simulating longer-duration flights such as that of the International Space Station. The flostation is expected to satisfy this need. The flostation could also be adapted and replicated for use in commercial ventures ranging from home entertainment to medical treatment. The use of neutral immersion in the flostation enables the occupant to recline in an optimal posture of rest and meditation. This posture combines savasana (known to practitioners of yoga) and a modified form of the neutral posture assumed by astronauts in outer space. As the occupant relaxes, awareness of the physical body is reduced. The neutral body posture, which can be maintained for hours without discomfort, is extended to the eyes, ears, and hands. The occupant can be surrounded with a full-field-of-view visual display and nearphone sound, and can be stimulated with full-body vibration and motion cueing.
Once fully immersed, the occupant can use neutral hand controllers (that is, hand-posture sensors) to control various aspects of the simulated environment.

  1. Implementing Subduction Models in the New Mantle Convection Code Aspect

    NASA Astrophysics Data System (ADS)

    Arredondo, Katrina; Billen, Magali

    2014-05-01

    The geodynamic community has utilized various numerical modeling codes as scientific questions arise and computer processing power increases. Citcom, a widely used mantle convection code, has limitations and vulnerabilities such as temperature overshoots of hundreds or thousands of kelvins (e.g., Kommu et al., 2013). Recently, Aspect, intended as a more powerful successor, has been in active development with additions such as Adaptive Mesh Refinement (AMR) and improved solvers (Kronbichler et al., 2012). The validity and ease of use of Aspect is important to its survival and role as a possible upgrade and replacement for Citcom. Development of publishable models illustrates the capacity of Aspect. We present work on the addition of non-linear solvers and stress-dependent rheology to Aspect. With a solid foundational knowledge of C++, these additions were easily added into Aspect and tested against CitcomS. Time-dependent subduction models akin to those in Billen and Hirth (2007) are built and compared in CitcomS and Aspect. Comparison with CitcomS assists in Aspect development and showcases its flexibility, usability and capabilities. References: Billen, M. I., and G. Hirth, 2007. Rheologic controls on slab dynamics. Geochemistry, Geophysics, Geosystems. Kommu, R., E. Heien, L. H. Kellogg, W. Bangerth, T. Heister, E. Studley, 2013. The Overshoot Phenomenon in Geodynamics Codes. American Geophysical Union Fall Meeting. M. Kronbichler, T. Heister, W. Bangerth, 2012, High Accuracy Mantle Convection Simulation through Modern Numerical Methods, Geophys. J. Int.
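
    The non-linear, stress-dependent rheology mentioned above is typically handled with a Picard (fixed-point) iteration, because the effective viscosity depends on the strain rate being solved for. Below is a zero-dimensional sketch with an illustrative power-law viscosity; the constants are not from the paper, Aspect, or CitcomS:

```python
# Picard (fixed-point) iteration for a stress-dependent power-law rheology:
# the effective viscosity eta = A * edot**(1/n - 1) depends on the strain
# rate edot being solved for, so tau = 2*eta(edot)*edot must be iterated.
# All constants are illustrative (arbitrary units).
A, n, tau = 1.0e4, 3.5, 1.0e6       # prefactor, stress exponent, driving stress

def eta(edot):
    """Power-law effective viscosity."""
    return A * edot ** (1.0 / n - 1.0)

edot = 1.0e-6                        # initial guess for the strain rate
for _ in range(100):
    edot = tau / (2.0 * eta(edot))   # Picard update; a contraction for n > 1

exact = (tau / (2.0 * A)) ** n       # closed-form answer for this 0-D problem
print(abs(edot - exact) / exact)     # relative error after the iteration
```

    In a full convection code the same idea appears per nonlinear solve: freeze the viscosity from the previous velocity iterate, solve the linear Stokes problem, and repeat until the strain-rate field stops changing.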

  2. The Prediction of Unsteady Aerodynamic Loading in High Aspect Ratio Wall Bounded Jets

    NASA Astrophysics Data System (ADS)

    Lurie, Michael B.

    Stealth aircraft are becoming increasingly prevalent in the aircraft industry. One of the features of many stealth aircraft is an integrated engine that is mounted above the aircraft fuselage. The engine nozzle is often rectangular with a high aspect ratio, and exhausts onto a jet deck formed by the aircraft fuselage. This configuration allows the aircraft fuselage to shield the noise and other detectable features caused by the engine from the ground. The Northrop Grumman B-2 Bomber is perhaps the most well-known example of this configuration. Additionally, stealth technology combined with unmanned aerial vehicles (UAVs) has led to the Joint Unmanned Combat Air Systems project, or J-UCAS. Both of the aircraft in development in this project use a wall-bounded high aspect ratio nozzle for stealth purposes. While these engine configurations provide a low radar profile and reduce the noise levels on the ground, they do introduce additional considerations. Since the engine is mounted above the aircraft, the nozzle jet is wall bounded by the fuselage of the aircraft. This is known as the flight deck. The jet stream exiting the nozzle can travel at supersonic speeds and potentially generates shock or expansion waves that impinge on the surface of the deck. The oscillations of these shockwaves on the deck produce localized unsteady forces acting on the aircraft. In addition, the interaction between the high speed jet stream and the slower ambient air causes a shear layer to form from the trailing edge of the nozzle. Turbulent eddies form and increase in size as they move downstream. The interactions of the shear layer with the flight deck produce additional unsteady forces on the aircraft. This thesis presents a study to predict the forces on a flight deck caused by a high aspect ratio wall bounded nozzle using both experimental methods and numerical simulations. The experiments performed were conducted on two different nozzles with aspect ratios of 4-1 and 8-1.
Several different run conditions, including subsonic, over-expanded, on-design, and under-expanded, are included to study the effects of Mach number on the unsteady pressure. An aluminum flat plate is used to represent the aft deck. The plate is instrumented with Endevco pressure transducers to capture the fluctuating pressure on the aft deck. A spectral analysis performed on the individual sensors shows that the primary sources of fluctuating pressure are the shear layer along with shock-boundary layer interaction. Additional scaling with the nozzle heights is also presented. The numerical simulations were performed using a fully viscous, hybrid RANS/LES model. They matched the nozzle characteristics and run conditions performed in the experiment. A detailed comparison between the unsteady pressures predicted by the computational simulations and those measured by the experiment is presented. Several discrepancies between the experimental and numerical results are the result of numerical error caused by the time marching scheme used in the simulations. A proper orthogonal decomposition (POD) method is introduced to further analyze the computational simulations and provide a filtering method to obtain more accurate results.
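
    The POD filtering step described above reduces, in practice, to a singular value decomposition of a mean-subtracted snapshot matrix. A minimal sketch on synthetic data (the snapshots here are fabricated for illustration, not the thesis measurements):

```python
import numpy as np

# Proper orthogonal decomposition (POD) of a snapshot matrix via the SVD.
# Synthetic snapshots: two coherent modes plus noise, standing in for
# measured unsteady-pressure data.
rng = np.random.default_rng(1)
nx, nt = 64, 200
x = np.linspace(0.0, 2.0 * np.pi, nx)
t = np.linspace(0.0, 10.0, nt)
snapshots = (np.outer(np.sin(x), np.cos(3.0 * t))
             + 0.3 * np.outer(np.sin(2.0 * x), np.sin(5.0 * t))
             + 0.01 * rng.standard_normal((nx, nt)))

# Subtract the temporal mean, then SVD: the columns of U are the POD modes,
# and the singular values rank them by captured energy.
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = s**2 / np.sum(s**2)
print(np.round(energy[:3], 4))               # leading modes dominate

# Truncating to the leading modes acts as the filter: noise lives in the tail.
filtered = mean + U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]
```

    Reconstructing from only the energetic modes is exactly the filtering role POD plays for the simulated pressure fields.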

  3. The Finer Details: Climate Modeling

    NASA Technical Reports Server (NTRS)

    2000-01-01

    If you want to know whether you will need sunscreen or an umbrella for tomorrow's picnic, you can simply read the local weather report. However, if you are calculating the impact of gas combustion on global temperatures, or anticipating next year's rainfall levels to set water conservation policy, you must conduct a more comprehensive investigation. Such complex matters require long-range modeling techniques that predict broad trends in climate development rather than day-to-day details. Climate models are built from equations that calculate the progression of weather-related conditions over time. Based on the laws of physics, climate model equations have been developed to predict a number of environmental factors, for example: 1. Amount of solar radiation that hits the Earth. 2. Varying proportions of gases that make up the air. 3. Temperature at the Earth's surface. 4. Circulation of ocean and wind currents. 5. Development of cloud cover. Numerical modeling of the climate can improve our understanding of both the past and the future. A model can confirm the accuracy of environmental measurements taken in the past and can even fill in gaps in those records. In addition, by quantifying the relationship between different aspects of climate, scientists can estimate how a future change in one aspect may alter the rest of the world. For example, could an increase in the temperature of the Pacific Ocean somehow set off a drought on the other side of the world? A computer simulation could lead to an answer for this and other questions. Quantifying the chaotic, nonlinear activities that shape our climate is no easy matter. You cannot run these simulations on your desktop computer and expect results by the time you have finished checking your morning e-mail. Efficient and accurate climate modeling requires powerful computers that can process billions of mathematical calculations in a single second. The NCCS exists to provide this degree of vast computing capability.

  4. User Interface Developed for Controls/CFD Interdisciplinary Research

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The NASA Lewis Research Center, in conjunction with the University of Akron, is developing analytical methods and software tools to create a cross-discipline "bridge" between controls and computational fluid dynamics (CFD) technologies. Traditionally, the controls analyst has used simulations based on large lumping techniques to generate low-order linear models convenient for designing propulsion system controls. For complex, high-speed vehicles such as the High Speed Civil Transport (HSCT), simulations based on CFD methods are required to capture the relevant flow physics. The use of CFD should also help reduce the development time and costs associated with experimentally tuning the control system. The initial application for this research is the High Speed Civil Transport inlet control problem. A major aspect of this research is the development of a controls/CFD interface for non-CFD experts, to facilitate the interactive operation of CFD simulations and the extraction of reduced-order, time-accurate models from CFD results. A distributed computing approach for implementing the interface is being explored. Software being developed as part of the Integrated CFD and Experiments (ICE) project provides the basis for the operating environment, including run-time displays and information (data base) management. Message-passing software is used to communicate between the ICE system and the CFD simulation, which can reside on distributed, parallel computing systems. Initially, the one-dimensional Large-Perturbation Inlet (LAPIN) code is being used to simulate a High Speed Civil Transport type inlet. LAPIN can model real supersonic inlet features, including bleeds, bypasses, and variable geometry, such as translating or variable-ramp-angle centerbodies. Work is in progress to use parallel versions of the multidimensional NPARC code.

  5. Use of Computational Fluid Dynamics for improving freeze-dryers design and process understanding. Part 1: Modelling the lyophilisation chamber.

    PubMed

    Barresi, Antonello A; Rasetto, Valeria; Marchisio, Daniele L

    2018-05-15

    This manuscript shows how computational models, mainly based on Computational Fluid Dynamics (CFD), can be used to simulate different parts of industrial freeze-drying equipment and to properly design them; in particular, the freeze-dryer chamber and the duct connecting the chamber with the condenser, together with any valves and vanes present, are analysed in this work. In Part 1, it will be shown how CFD can be employed to improve specific designs, to perform geometry optimization, to evaluate different design choices and how it is useful to evaluate the effect on product drying and batch variance. Such an approach allows an in-depth process understanding and assessment of the critical aspects of lyophilisation. This can be done by running either steady-state or transient simulations with imposed sublimation rates or with multi-scale approaches. This methodology will be demonstrated on freeze-drying equipment of different sizes, investigating the influence of the equipment geometry and shelf inter-distance. The effect of valve type (butterfly and mushroom) and shape on duct conductance and critical flow conditions will instead be investigated in Part 2. Copyright © 2018. Published by Elsevier B.V.

  6. ASME V&V challenge problem: Surrogate-based V&V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beghini, Lauren L.; Hough, Patricia D.

    2015-12-18

    The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
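
    The surrogate-based workflow the authors describe can be illustrated with the simplest case: a one-dimensional polynomial chaos expansion fit by least squares. The model function, sample counts, and degree below are illustrative stand-ins, not the challenge-problem model:

```python
import numpy as np

# Minimal surrogate sketch: fit a polynomial chaos expansion (Legendre basis,
# least squares) to a handful of "expensive" model runs, then read the output
# mean off the surrogate instead of running many full simulations.
rng = np.random.default_rng(0)

def expensive_model(xi):               # stand-in for a long-running simulation
    return np.exp(0.5 * xi) + 0.1 * xi**3

# Training data: a few model runs at inputs sampled from U(-1, 1).
x_train = rng.uniform(-1.0, 1.0, 30)
y_train = expensive_model(x_train)

# Degree-5 Legendre least-squares fit.
V = np.polynomial.legendre.legvander(x_train, 5)
coef, *_ = np.linalg.lstsq(V, y_train, rcond=None)

# For a Legendre basis over U(-1, 1), the surrogate's mean is simply coef[0];
# compare against a brute-force Monte Carlo estimate of the true mean.
mc_mean = expensive_model(rng.uniform(-1.0, 1.0, 100_000)).mean()
print(round(float(coef[0]), 4), round(float(mc_mean), 4))
```

    Thirty model runs stand in for the hundred thousand Monte Carlo evaluations, which is precisely the tradeoff-tipping the abstract aims at.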

  7. Combining fragment homology modeling with molecular dynamics aims at prediction of Ca2+ binding sites in CaBPs

    NASA Astrophysics Data System (ADS)

    Pang, ChunLi; Cao, TianGuang; Li, JunWei; Jia, MengWen; Zhang, SuHua; Ren, ShuXi; An, HaiLong; Zhan, Yong

    2013-08-01

    The family of calcium-binding proteins (CaBPs) consists of dozens of members and contributes to all aspects of the cell's function, from homeostasis to learning and memory. However, the Ca2+-binding mechanism is still unclear for most CaBPs. To identify the Ca2+-binding sites of CaBPs, this study presented a computational approach which combined fragment homology modeling with molecular dynamics simulation. For validation, we adopted a two-step strategy: first, the approach is used to identify the Ca2+-binding sites of CaBPs which have the EF-hand Ca2+-binding site and a known binding mechanism. To accomplish this, eighteen crystal structures of CaBPs with 49 Ca2+-binding sites, including calmodulin, are selected for analysis. The computational method identified 43 of the 49 Ca2+-binding sites. Second, we applied the approach to large-conductance Ca2+-activated K+ (BK) channels, which do not have a clear Ca2+-binding mechanism. The simulated results are consistent with the experimental data. The computational approach may shed some light on the identification of Ca2+-binding sites in CaBPs.

  8. Target-type probability combining algorithms for multisensor tracking

    NASA Astrophysics Data System (ADS)

    Wigren, Torbjorn

    2001-08-01

    Algorithms for the handling of target type information in an operational multi-sensor tracking system are presented. The paper discusses recursive target type estimation, computation of crosses from passive data (strobe track triangulation), as well as the computation of the quality of the crosses for deghosting purposes. The focus is on Bayesian algorithms that operate in the discrete target type probability space, and on the approximations introduced for computational complexity reduction. The centralized algorithms are able to fuse discrete data from a variety of sensors and information sources, including IFF equipment, ESM's, IRST's as well as flight envelopes estimated from track data. All algorithms are asynchronous and can be tuned to handle clutter, erroneous associations as well as missed and erroneous detections. A key to obtaining this ability is the inclusion of data forgetting by a procedure for propagation of target type probability states between measurement time instances. Other important properties of the algorithms are their abilities to handle ambiguous data and scenarios. The above aspects are illustrated in a simulation study. The simulation setup includes 46 air targets of 6 different types that are tracked by 5 airborne sensor platforms using ESM's and IRST's as data sources.
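
    The recursive type estimation with data forgetting described above can be sketched in a few lines: a discrete Bayes update per measurement, and a propagation step between measurements that relaxes the type probabilities toward the uniform prior. The type names and likelihood values below are illustrative, not from the paper:

```python
# Sketch of recursive Bayesian target-type estimation with data forgetting:
# the discrete type-probability state is updated with each sensor likelihood,
# and between measurements it is relaxed toward the uniform prior so that
# stale evidence decays.
types = ["fighter", "bomber", "transport"]
p = [1.0 / 3.0] * 3                     # prior over target types

def bayes_update(p, likelihood):
    """Multiply by the sensor likelihood vector and renormalize."""
    post = [pi * li for pi, li in zip(p, likelihood)]
    total = sum(post)
    return [q / total for q in post]

def forget(p, alpha=0.9):
    """Propagation between measurement instances: shrink toward uniform."""
    u = 1.0 / len(p)
    return [alpha * pi + (1.0 - alpha) * u for pi in p]

# Two ESM-like reports favouring "fighter", separated by a propagation step.
p = bayes_update(p, [0.7, 0.2, 0.1])
p = forget(p)
p = bayes_update(p, [0.7, 0.2, 0.1])
print([round(q, 3) for q in p])         # mass concentrates on "fighter"
```

    The forgetting step is what keeps the estimator responsive to clutter and erroneous associations: without it, early wrong reports would dominate indefinitely.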

  9. Computational modeling to predict mechanical function of joints: application to the lower leg with simulation of two cadaver studies.

    PubMed

    Liacouras, Peter C; Wayne, Jennifer S

    2007-12-01

    Computational models of musculoskeletal joints and limbs can provide useful information about joint mechanics. Validated models can be used as predictive devices for understanding joint function and serve as clinical tools for predicting the outcome of surgical procedures. A new computational modeling approach was developed for simulating joint kinematics that are dictated by bone/joint anatomy, ligamentous constraints, and applied loading. Three-dimensional computational models of the lower leg were created to illustrate the application of this new approach. Model development began with generating three-dimensional surfaces of each bone from CT images, which were then imported into the three-dimensional solid modeling software SOLIDWORKS and the motion simulation package COSMOSMOTION. Through SOLIDWORKS and COSMOSMOTION, each bone surface file was filled to create a solid object and positioned, the necessary components were added, and simulations were executed. Three-dimensional contacts were added to inhibit intersection of the bones during motion. Ligaments were represented as linear springs. Model predictions were then validated by comparison to two different cadaver studies: syndesmotic injury and repair, and ankle inversion following ligament transection. The syndesmotic injury model was able to predict tibial rotation, fibular rotation, and anterior/posterior displacement. In the inversion simulation, calcaneofibular ligament extension and angles of inversion compared well. Some experimental data proved harder to simulate accurately, due to certain software limitations and a lack of complete experimental data. Other parameters that cannot be easily obtained experimentally can be predicted and analyzed by the computational simulations. In the syndesmotic injury study, the force generated in the tibionavicular and calcaneofibular ligaments was reduced with the insertion of the staple, indicating how this repair technique changes joint function.
After transection of the calcaneofibular ligament in the inversion stability study, a major increase in force was seen in several of the ligaments on the lateral aspect of the foot and ankle, indicating the recruitment of other structures to permit function after injury. Overall, the computational models were able to predict joint kinematics of the lower leg with particular focus on the ankle complex. This same approach can be taken to create models of other limb segments such as the elbow and wrist. Additional parameters can be calculated in the models that are not easily obtained experimentally such as ligament forces, force transmission across joints, and three-dimensional movement of all bones. Muscle activation can be incorporated in the model through the action of applied forces within the software for future studies.
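
The linear-spring ligament representation mentioned above can be sketched minimally as a tension-only spring (a common simplification; the endpoints, rest length, and stiffness below are made-up illustrative values, not parameters from the study):

```python
import numpy as np

def ligament_force(p_origin, p_insert, rest_length, stiffness):
    """Tension-only linear spring: a ligament resists elongation past its
    rest length but cannot push when slack."""
    d = np.asarray(p_insert, float) - np.asarray(p_origin, float)
    length = np.linalg.norm(d)
    stretch = length - rest_length
    if stretch <= 0.0:                         # slack ligament carries no load
        return np.zeros(3)
    return stiffness * stretch * (d / length)  # force on origin, toward insertion

# Stretched case (length 2.0 vs rest 1.5) and slack case (length 1.0).
f = ligament_force([0, 0, 0], [0, 0, 2.0], rest_length=1.5, stiffness=100.0)
slack = ligament_force([0, 0, 0], [0, 0, 1.0], rest_length=1.5, stiffness=100.0)
```

Summing such forces per ligament is also how quantities like the post-transection force redistribution described above can be extracted from a simulation.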

  10. Computational Insights into Materials and Interfaces for Capacitive Energy Storage

    PubMed Central

    Zhan, Cheng; Lian, Cheng; Zhang, Yu; Thompson, Matthew W.; Xie, Yu; Wu, Jianzhong; Kent, Paul R. C.; Cummings, Peter T.; Wesolowski, David J.

    2017-01-01

    Supercapacitors such as electric double-layer capacitors (EDLCs) and pseudocapacitors are becoming increasingly important in the field of electrical energy storage. Theoretical study of energy storage in EDLCs focuses on solving for the electric double-layer structure in different electrode geometries and electrolyte components, which can be achieved by molecular simulations such as classical molecular dynamics (MD), classical density functional theory (classical DFT), and Monte Carlo (MC) methods. In recent years, combining first-principles and classical simulations to investigate carbon-based EDLCs has shed light on the importance of quantum capacitance in graphene-like 2D systems. More recently, the development of joint density functional theory (JDFT) has enabled self-consistent electronic-structure calculation for an electrode solvated by an electrolyte. In contrast with the large amount of theoretical and computational effort on EDLCs, theoretical understanding of pseudocapacitance is very limited. In this review, we first introduce popular modeling methods and then focus on several important aspects of EDLCs including nanoconfinement, quantum capacitance, dielectric screening, and novel 2D electrode design; we also briefly touch upon the pseudocapacitive mechanism in RuO2. We summarize and conclude with an outlook for the future of materials simulation and design for capacitive energy storage. PMID:28725531
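
The reason quantum capacitance matters in graphene-like electrodes is that it combines in series with the double-layer capacitance, so the smaller of the two limits the total. A minimal sketch (the numerical values are illustrative, not from the review):

```python
def total_capacitance(c_quantum, c_edl):
    """Series combination: 1/C_total = 1/C_Q + 1/C_EDL. The smaller
    contribution (often C_Q in graphene-like 2D electrodes) limits
    the total interfacial capacitance."""
    return 1.0 / (1.0 / c_quantum + 1.0 / c_edl)

# Illustrative numbers (uF/cm^2): a small quantum capacitance dominates.
ct = total_capacitance(5.0, 20.0)  # -> 4.0, below both contributions
```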

  11. Development and application of unified algorithms for problems in computational science

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm is one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, along with their application to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.

  12. Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology

    NASA Astrophysics Data System (ADS)

    Macioł, Piotr; Michalik, Kazimierz

    2016-10-01

    Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demands. Among other solutions, the parallelization of multiscale computations is a promising one. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models, employing a MatCalc thermodynamic simulator. The main issues investigated in this work are: (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in the quality of computations enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law. The problem of `delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
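
The Amdahl's-law evaluation mentioned above has a standard closed form: if a fraction p of a multiscale step is parallelizable fine-scale work, the serial remainder bounds the speed-up. A sketch with illustrative numbers (the 90% figure is an assumption, not a result from the paper):

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n). The serial
    fraction (1 - p) bounds the achievable speed-up no matter how many
    workers compute the fine-scale sub-models."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# If 90% of a multiscale step were parallel fine-scale work:
s16 = amdahl_speedup(0.9, 16)   # ~6.4x on 16 workers
limit = 1.0 / (1.0 - 0.9)       # asymptotic bound: ~10x, however many workers
```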

  13. Simulation analysis of air flow and turbulence statistics in a rib grit roughened duct.

    PubMed

    Vogiatzis, I I; Denizopoulou, A C; Ntinas, G K; Fragos, V P

    2014-01-01

    The implementation of variable artificial roughness patterns on a surface is an effective technique to enhance the rate of heat transfer to fluid flow in the ducts of solar air heaters. The different geometries of roughness elements investigated have demonstrated the pivotal role that vortices and the associated turbulence play in the heat transfer characteristics of solar air heater ducts by increasing the convective heat transfer coefficient. In this paper we investigate the two-dimensional, turbulent, unsteady flow around rectangular ribs of variable aspect ratios by directly solving the transient Navier-Stokes and continuity equations using the finite element method. Flow characteristics and several aspects of turbulent flow are presented and discussed, including velocity components and turbulence statistics. The results reveal the impact that different rib lengths have on the computed mean quantities and turbulence statistics of the flow. The computed turbulence parameters show a clear tendency to diminish downstream with increasing rib length. Furthermore, the applied numerical method is capable of capturing small-scale flow structures resulting from the direct solution of the Navier-Stokes and continuity equations.

  14. Computational aspects in mechanical modeling of the articular cartilage tissue.

    PubMed

    Mohammadi, Hadi; Mequanint, Kibret; Herzog, Walter

    2013-04-01

    This review focuses on the modeling of articular cartilage (at the tissue level), chondrocyte mechanobiology (at the cell level) and a combination of both in a multiscale computation scheme. The primary objective is to evaluate the advantages and disadvantages of conventional models implemented to study the mechanics of the articular cartilage tissue and chondrocytes. From monophasic material models as the simplest form to more complicated multiscale theories, these approaches have been frequently used to model articular cartilage and have contributed significantly to modeling joint mechanics, addressing and resolving numerous issues regarding cartilage mechanics and function. It should be noted that attentiveness is important when using different modeling approaches, as the choice of the model limits the applications available. In this review, we discuss the conventional models applicable to some of the mechanical aspects of articular cartilage such as lubrication, swelling pressure and chondrocyte mechanics and address some of the issues associated with the current modeling approaches. We then suggest future pathways for a more realistic modeling strategy as applied for the simulation of the mechanics of the cartilage tissue using multiscale and parallelized finite element method.

  15. The effectiveness of using multimedia computer simulations coupled with social constructivist pedagogy in a college introductory physics classroom

    NASA Astrophysics Data System (ADS)

    Chou, Chiu-Hsiang

    Electricity and Magnetism (E & M) is notoriously regarded as incomprehensible to students at the introductory college level. From a social constructivist perspective, learners are encouraged to assess the quantity and quality of their prior knowledge in a subject domain and to co-construct shared knowledge and understanding by implementing and building on each other's ideas. They are challenged by new data and perspectives, which stimulates a reconceptualization of knowledge and active engagement in discovering new meanings based on experiences grounded in the real-world phenomena they are expected to learn. This process characterizes a conceptual change learning environment and can facilitate the learning of E & M. Computer simulations are an excellent tool to assist the teacher and learner in achieving these goals and were used in this study. This study examined the effectiveness of computer simulations within a conceptual change learning environment and compared it to more lecture-centered, traditional ways of teaching E & M. An experimental and a control group were compared and the following differences were observed. Statistical analyses were performed with ANOVA (F-tests). The results indicated that the treatment group significantly outperformed the control group on the achievement test, F(1,54) = 12.34, p < .05, and the treatment group had a higher rate of improvement than the control group on two subscales: Isolation of Variables and Abstract Transformation. The results from the Maryland Physics Expectations Survey (MPEX) showed that the treatment students became more field independent and were aware of the more fundamental role played by physics concepts in complex problem solving. The protocol analysis of structured interviews revealed that students in the treatment group tended to visualize a problem from different aspects and articulated their thinking in a more scientific manner. Responses to the instructional evaluation questionnaire indicated overwhelmingly positive ratings of the appropriateness and instructional effectiveness of computer simulation instruction. In conclusion, the CSI developed and evaluated in this study provided opportunities for students to refine their preconceptions and practice using new understandings. It suggests substantial promise for computer simulation in a classroom environment.

  16. Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu; Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715

    2014-11-28

    To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected “hubs” such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present “Lazy Updating,” an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases, with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
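
The postpone-until-threshold idea can be sketched as follows; this is a hedged illustration of the trigger logic only (class names and the 5% relative threshold are assumptions, not the paper's Sorting Direct Method implementation):

```python
def should_update(last_count, current_count, rel_threshold=0.05):
    """Lazy Updating trigger: recompute hub-dependent propensities only
    when the hub count has drifted by more than rel_threshold since the
    propensities were last made exact."""
    if last_count == 0:
        return current_count != 0
    return abs(current_count - last_count) / last_count > rel_threshold

class LazyHub:
    """Track a hub species (e.g. ATP) and postpone propensity updates."""
    def __init__(self, count, rel_threshold=0.05):
        self.count = count
        self.snapshot = count          # count at last propensity update
        self.rel_threshold = rel_threshold

    def change(self, delta):
        """Apply a count change; return True if dependents must be updated."""
        self.count += delta
        if should_update(self.snapshot, self.count, self.rel_threshold):
            self.snapshot = self.count
            return True
        return False

# 100 single-molecule decrements of an abundant hub trigger only 2 updates.
hub = LazyHub(1000)
updates = sum(hub.change(-1) for _ in range(100))
```

The speed-accuracy trade-off lives entirely in the threshold: a tighter threshold means more frequent propensity recomputation and less error.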

  17. Spatial Analysis of Traffic and Routing Path Methods for Tsunami Evacuation

    NASA Astrophysics Data System (ADS)

    Fakhrurrozi, A.; Sari, A. M.

    2018-02-01

    A tsunami strikes relatively quickly and therefore has a very large-scale impact, both material and non-material. Community evacuation can cause mass panic, crowding, and traffic congestion, so further research into spatially based modelling, traffic engineering, and split-zone evacuation simulation is crucial as an effort to reduce such losses. This paper reviews information from previous research. The complex parameters include route selection, destination selection, the timing of both departure from the source and arrival at the destination, and other resulting parameters across the various methods. The discussion emphasizes the simulation processes and results, traffic modelling, and routing analyses that come closest to real conditions in a tsunami evacuation. The method we highlight is the Clearance Time Estimate based on Location Priority, whose computational results are superior to the others despite several drawbacks. This study is expected to provide input for improving and inventing new methods that can become part of decision support systems for tsunami disaster risk reduction.

  18. A Spatial Perspective of Droughts and Pluvials in the Tropics and their Relationships to ENSO in CMIP5 Model Simulations

    NASA Astrophysics Data System (ADS)

    Perez Arango, J. D.; Lintner, B. R.; Lyon, B.

    2016-12-01

    Although many aspects of the tropical response to ENSO are well-known, the spatial characteristics of the rainfall response to ENSO remain relatively unexplored. Moreover, in current generation climate models, the spatial signatures of the ENSO tropical teleconnection are more uncertain than other aspects of ENSO variability, such as the amplitude of rainfall anomalies. Following the approach of Lyon (2004) and Lyon and Barnston (2005), we analyze here integrated measures of the spatial extent of drought and pluvial conditions in the tropics and their relationship to ENSO in observations as well as simulations of Phase 5 of the Coupled Model Intercomparison Project (CMIP5) with prescribed SST forcing. We compute diagnostics including the model ensemble-means and standard deviations of moderate, intermediate, and severe droughts and pluvials and the lagged correlations with respect to ENSO-based SST indices like NINO3. Overall, in a tropics-wide sense, the models generally capture the areal extent of observed droughts and pluvials and their phasing with respect to ENSO. However, at more local scales, e.g., tropical South America, the simulated metrics agree less strongly with observations, underscoring the role of errors in the spatial patterns of ENSO-induced rainfall anomalies.
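
The lagged-correlation diagnostic described above can be sketched on synthetic data; this is an illustrative assumption-laden toy (white noise standing in for a NINO3 index, a fixed two-step lag standing in for the drought-area response), not the CMIP5 analysis itself:

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Correlate x(t) with y(t + lag); a positive lag means y follows x."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

# Synthetic example: a drought-area series that follows the index by 2 steps.
rng = np.random.default_rng(0)
nino3 = rng.standard_normal(200)
drought_area = np.roll(nino3, 2)          # shift the index by two time steps
best = max(range(-4, 5), key=lambda L: lagged_corr(nino3, drought_area, L))
```

Scanning `best` over a window of lags is the standard way to recover the phasing of droughts and pluvials with respect to ENSO from such series.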

  19. Visual cognition during real social interaction

    PubMed Central

    Skarratt, Paul A.; Cole, Geoff G.; Kuhn, Gustav

    2012-01-01

    Laboratory studies of social visual cognition often simulate the critical aspects of joint attention by having participants interact with a computer-generated avatar. Recently, there has been a movement toward examining these processes during authentic social interaction. In this review, we will focus on attention to faces, attentional misdirection, and a phenomenon we have termed social inhibition of return (Social IOR), that have revealed aspects of social cognition that were hitherto unknown. We attribute these discoveries to the use of paradigms that allow for more realistic social interactions to take place. We also point to an area that has begun to attract a considerable amount of interest—that of Theory of Mind (ToM) and automatic perspective taking—and suggest that this too might benefit from adopting a similar approach. PMID:22754521

  20. Hybrid quantum and classical methods for computing kinetic isotope effects of chemical reactions in solutions and in enzymes.

    PubMed

    Gao, Jiali; Major, Dan T; Fan, Yao; Lin, Yen-Lin; Ma, Shuhua; Wong, Kin-Yiu

    2008-01-01

    A method for incorporating quantum mechanics into enzyme kinetics modeling is presented. Three aspects are emphasized: 1) combined quantum mechanical and molecular mechanical methods are used to represent the potential energy surface for modeling bond forming and breaking processes, 2) instantaneous normal mode analyses are used to incorporate quantum vibrational free energies to the classical potential of mean force, and 3) multidimensional tunneling methods are used to estimate quantum effects on the reaction coordinate motion. Centroid path integral simulations are described to make quantum corrections to the classical potential of mean force. In this method, the nuclear quantum vibrational and tunneling contributions are not separable. An integrated centroid path integral-free energy perturbation and umbrella sampling (PI-FEP/UM) method along with a bisection sampling procedure was summarized, which provides an accurate, easily convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. In the ensemble-averaged variational transition state theory with multidimensional tunneling (EA-VTST/MT), these three aspects of quantum mechanical effects can be individually treated, providing useful insights into the mechanism of enzymatic reactions. These methods are illustrated by applications to a model process in the gas phase, the decarboxylation reaction of N-methyl picolinate in water, and the proton abstraction and reprotonation process catalyzed by alanine racemase. These examples show that the incorporation of quantum mechanical effects is essential for enzyme kinetics simulations.

  1. Hyper-Systolic Processing on APE100/QUADRICS: n^2-Loop Computations

    NASA Astrophysics Data System (ADS)

    Lippert, Thomas; Ritzenhöfer, Gero; Glaessner, Uwe; Hoeber, Henning; Seyfried, Armin; Schilling, Klaus

    We investigate the performance gains from hyper-systolic implementations of n^2-loop problems on the massively parallel computer Quadrics, exploiting its three-dimensional interprocessor connectivity. For illustration we study the communication aspects of an exact molecular dynamics simulation of n particles with Coulomb (or gravitational) interactions. We compare the interprocessor communication costs of the standard-systolic and the hyper-systolic approaches for various granularities. We predict gain factors as large as three on the Q4 and eight on the QH4 and measure actual performances on these machine configurations. We conclude that it appears feasible to investigate the thermodynamics of a full gravitating n-body problem with O(16,000) particles using the new method on a QH4 system.
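
A minimal sketch of the standard-systolic pattern for an n^2-loop (the hyper-systolic variant further reduces communication and is not shown; the 1/|dx| "interaction" and the particle positions are toy assumptions):

```python
import numpy as np

def systolic_pair_sums(x):
    """Standard-systolic n^2-loop: keep a resident copy of the coordinates
    and a circulating copy; after n-1 shifts, every particle has met every
    other exactly once per direction. The per-pair 'interaction' here is a
    toy Coulomb-like magnitude 1/|dx| accumulated per particle."""
    n = len(x)
    moving = x.copy()
    acc = np.zeros(n)
    for _ in range(n - 1):
        moving = np.roll(moving, 1)       # one systolic shift
        acc += 1.0 / np.abs(x - moving)   # interact with the current partner
    return acc

x = np.array([0.0, 1.0, 3.0])
acc = systolic_pair_sums(x)               # matches the brute-force pair sums
```

On a parallel machine each shift is one nearest-neighbour communication, which is what makes the communication cost, rather than the arithmetic, the quantity to optimize.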

  2. Utilizing remote sensing of Thematic Mapper data to improve our understanding of estuarine processes and their influence on the productivity of estuarine-dependent fisheries

    NASA Technical Reports Server (NTRS)

    Browder, J. A.; May, L. N., Jr.; Rosenthal, A.; Baumann, R. H.; Gosselink, J. G.

    1986-01-01

    LANDSAT thematic mapper (TM) data are being used to refine and validate a stochastic spatial computer model to be applied to coastal resource management problems in Louisiana. Two major aspects of the research are: (1) the measurement of area of land (or emergent vegetation) and water and the length of the interface between land and water in TM imagery of selected coastal wetlands (sample marshes); and (2) the comparison of spatial patterns of land and water in the sample marshes of the imagery to that in marshes simulated by a computer model. In addition to activities in these two areas, the potential use of a published autocorrelation statistic is analyzed.

  3. Color reproducibility and dyestuff concentration

    NASA Astrophysics Data System (ADS)

    Csanyi, Sandor

    2002-06-01

    The purpose of this study was to develop a new sensitivity index connected with color matching, which makes it possible to investigate the effects of dyestuff concentration deviations in a larger part of the color space in a comprehensive manner. With the help of computer simulation and experimental design, we examined the color differences resulting from minor concentration changes in approximately 500 formulas of different compositions, altering their total concentration and the proportion of the individual dyes in them. The new sensitivity index makes it possible for the colorist to select the recipe that is the least sensitive to concentration deviations from among the computer color formulas, as well as to add a new aspect to the ranking applied in color matching so far.

  4. Statistics of the stochastically forced Lorenz attractor by the Fokker-Planck equation and cumulant expansions.

    PubMed

    Allawala, Altan; Marston, J B

    2016-11-01

    We investigate the Fokker-Planck description of the equal-time statistics of the three-dimensional Lorenz attractor with additive white noise. The invariant measure is found by computing the zero (or null) mode of the linear Fokker-Planck operator as a problem of sparse linear algebra. Two variants are studied: a self-adjoint construction of the linear operator and the replacement of diffusion with hyperdiffusion. We also access the low-order statistics of the system by a perturbative expansion in equal-time cumulants. A comparison is made to statistics obtained by the standard approach of accumulation via direct numerical simulation. Theoretical and computational aspects of the Fokker-Planck and cumulant expansion methods are discussed.
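
The null-mode computation described above can be illustrated on a toy problem; as a hedged sketch, the 3-state generator below stands in for the (much larger) discretized Fokker-Planck operator, and shift-invert sparse eigensolving picks out the eigenvalue closest to zero:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import eigs

# Toy 3-state generator (columns sum to zero): dP/dt = L_op @ P.
L_op = csc_matrix(np.array([[-2.0,  1.0,  0.0],
                            [ 2.0, -2.0,  2.0],
                            [ 0.0,  1.0, -2.0]]))

# The invariant measure is the zero (null) mode of the linear operator:
# shift-invert around a small negative sigma finds the eigenvalue nearest 0.
vals, vecs = eigs(L_op, k=1, sigma=-0.1)
p = np.real(vecs[:, 0])
p = p / p.sum()          # normalise to a probability distribution
```

For this chain the stationary distribution is [0.25, 0.5, 0.25]; the same recipe scales to large sparse discretizations where dense eigensolvers are infeasible.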

  5. Perspective on computational and structural aspects of kinase discovery from IPK2014.

    PubMed

    Martin, Eric; Knapp, Stefan; Engh, Richard A; Moebitz, Henrik; Varin, Thibault; Roux, Benoit; Meiler, Jens; Berdini, Valerio; Baumann, Alexander; Vieth, Michal

    2015-10-01

    Recent advances in understanding the activity and selectivity of kinase inhibitors and their relationships to protein structure are presented. Conformational selection in kinases is studied from empirical, data-driven and simulation approaches. Ligand binding and its affinity are, in many cases, determined by the predetermined active and inactive conformation of kinases. Binding affinity and selectivity predictions highlight the current state of the art and advances in computational chemistry as it applies to kinase inhibitor discovery. Kinome wide inhibitor profiling and cell panel profiling lead to a better understanding of selectivity and allow for target validation and patient tailoring hypotheses. This article is part of a Special Issue entitled: Inhibitors of Protein Kinases. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Sustaining Open Source Communities through Hackathons - An Example from the ASPECT Community

    NASA Astrophysics Data System (ADS)

    Heister, T.; Hwang, L.; Bangerth, W.; Kellogg, L. H.

    2016-12-01

    The ecosystem surrounding a successful scientific open source software package combines both social and technical aspects. Much thought has been given to the technology side of writing sustainable software for large infrastructure projects and software libraries, but less about building the human capacity to perpetuate scientific software used in computational modeling. One effective format for building capacity is regular multi-day hackathons. Scientific hackathons bring together a group of science domain users and scientific software contributors to make progress on a specific software package. Innovation comes through the chance to work with established and new collaborations. Especially in the domain sciences with small communities, hackathons give geographically distributed scientists an opportunity to connect face-to-face. They foster lively discussions amongst scientists with different expertise, promote new collaborations, and increase transparency in both the technical and scientific aspects of code development. ASPECT is an open source, parallel, extensible finite element code to simulate thermal convection, that began development in 2011 under the Computational Infrastructure for Geodynamics. ASPECT hackathons for the past 3 years have grown the number of authors to >50, training new code maintainers in the process. Hackathons begin with leaders establishing project-specific conventions for development, demonstrating the workflow for code contributions, and reviewing relevant technical skills. Each hackathon expands the developer community. Over 20 scientists add >6,000 lines of code during the >1 week event. Participants grow comfortable contributing to the repository and over half continue to contribute afterwards. A high return rate of participants ensures continuity and stability of the group as well as mentoring for novice members. We hope to build other software communities on this model, but anticipate each to bring their own unique challenges.

  7. Hormone Purification by Isoelectric Focusing

    NASA Technical Reports Server (NTRS)

    Bier, M.

    1985-01-01

    Various ground-based research approaches are being applied to a more definitive evaluation of the natures and degrees of electroosmosis effects on the separation capabilities of the Isoelectric Focusing (IEF) process. A primary instrumental system for this work involves rotationally stabilized, horizontal electrophoretic columns specially adapted for the IEF process. Representative adaptations include segmentation, baffles/screens, and surface coatings. Comparative performance and development testing are pursued against the type of column or cell established as an engineering model. Previously developed computer simulation capabilities are used to predict low-gravity behavior patterns and performance for IEF apparatus geometries of direct project interest. Three existing mathematical models plus potential new routines for particular aspects of simulating instrument fluid patterns with varied wall electroosmosis influences are being exercised.

  8. A modified Wright-Fisher model that incorporates Ne: A variant of the standard model with increased biological realism and reduced computational complexity.

    PubMed

    Zhao, Lei; Gossmann, Toni I; Waxman, David

    2016-03-21

    The Wright-Fisher model is an important model in evolutionary biology and population genetics. It has been applied in numerous analyses of finite populations with discrete generations. It is recognised that real populations can behave, in some key aspects, as though their size is not the census size, N, but rather a smaller size, namely the effective population size, Ne. However, in the Wright-Fisher model there is no distinction between the effective and census population sizes; equivalently, in this model Ne coincides with N. The Wright-Fisher model therefore lacks an important aspect of biological realism. Here, we present a method that allows Ne to be directly incorporated into the Wright-Fisher model. The modified model involves matrices whose size is determined by Ne. Thus, apart from increased biological realism, the modified model also has reduced computational complexity, particularly so when Ne ≪ N. For complex problems, it may be hard or impossible to numerically analyse the most commonly used approximation of the Wright-Fisher model that incorporates Ne, namely the diffusion approximation. An alternative approach is simulation. However, the simulations need to be sufficiently detailed that they yield an effective size that differs from the census size. Simulations may also be time consuming and have attendant statistical errors. The method presented in this work may then be the only alternative to simulation when Ne differs from N. We illustrate the straightforward application of the method to some problems involving allele fixation and the determination of the equilibrium site frequency spectrum. We then apply the method to the problem of fixation when three alleles are segregating in a population. This latter problem is significantly more complex than a two-allele problem and, since the diffusion equation cannot be numerically solved, the only other way Ne can be incorporated into the analysis is by simulation.
We have achieved good accuracy in all cases considered. In summary, the present work extends the realism and tractability of an important model of evolutionary biology and population genetics. Copyright © 2016 Elsevier Ltd. All rights reserved.
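
To make the matrix-size point concrete: the familiar Wright-Fisher binomial transition matrix, dimensioned by an effective size Ne rather than the census size N, can be sketched as below. This is a simplified illustration of the idea, not the paper's actual construction:

```python
import numpy as np
from scipy.stats import binom

def wf_transition_matrix(ne):
    """Wright-Fisher transition matrix built on the effective size Ne:
    entry [j, i] is P(j copies of an allele next generation | i copies now),
    from binomial sampling of 2*Ne gene copies. Matrix is (2*Ne+1) square,
    so the cost is set by Ne, not by the census size N."""
    n = 2 * ne
    counts = np.arange(n + 1)
    freqs = counts / n                     # current allele frequencies
    return binom.pmf(counts[:, None], n, freqs[None, :])

T = wf_transition_matrix(ne=10)            # 21 x 21, however large N is
```

Columns are probability distributions, and the loss (0 copies) and fixation (2*Ne copies) states are absorbing, which is what makes fixation problems tractable by repeated matrix application.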

  9. Figures of Merit Software: Description, User's Guide, Installation Notes, Versions Description, and License Agreement

    NASA Technical Reports Server (NTRS)

    hoelzer, H. D.; Fourroux, K. A.; Rickman, D. L.; Schrader, C. M.

    2011-01-01

    Figures of Merit (FoMs) and the FoM software provide a method for quantitatively evaluating the quality of a regolith simulant by comparing the simulant to a reference material. FoMs may be used for comparing a simulant to actual regolith material, for specification by stating the values a simulant's FoMs must attain to be suitable for a given application, and for comparing simulants from different vendors or production runs. FoMs may even be used to compare different simulants to each other. A single FoM is conceptually an algorithm that computes a single number quantifying the similarity or difference of a single characteristic of a simulant material and a reference material, providing a clear measure of how well the simulant and reference material match. FoMs have been constructed to lie between 0 and 1, with 0 indicating a poor or no match and 1 indicating a perfect match. FoMs are defined for modal composition, particle size distribution, particle shape distribution (aspect ratio and angularity), and density. This TM covers the mathematics, use, installation, and licensing for the existing FoM code in detail.
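
As a toy illustration of a [0, 1]-valued figure of merit (the TM defines the actual algorithms; the histogram-overlap metric below is only an assumed stand-in), one can compare a normalised simulant property distribution to a reference:

```python
import numpy as np

def overlap_fom(simulant_hist, reference_hist):
    """Toy figure of merit: overlap of two normalised property histograms
    (e.g. particle size bins). Returns 1.0 for a perfect match and 0.0
    for completely disjoint distributions."""
    s = np.asarray(simulant_hist, float)
    r = np.asarray(reference_hist, float)
    s, r = s / s.sum(), r / r.sum()
    return float(np.minimum(s, r).sum())

perfect = overlap_fom([1, 2, 3], [2, 4, 6])   # same shape after normalisation
partial = overlap_fom([1, 0, 0], [0, 0, 1])   # disjoint bins
```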

  10. Employing inquiry-based computer simulations and embedded scientist videos to teach challenging climate change and nature of science concepts

    NASA Astrophysics Data System (ADS)

    Cohen, Edward Charles

    Design-based research was utilized to investigate how students use a greenhouse effect simulation, in order to derive best learning practices. During this process, students recognized the authentic scientific process involving computer simulations. The simulation used is embedded within an inquiry-based, technology-mediated science curriculum known as the Web-based Inquiry Science Environment (WISE). For this research, students from a suburban, diverse middle school setting used the simulations as part of a two-week class unit on climate change. A pilot study was conducted during phase one of the research that informed phase two, which encompasses the dissertation. During the pilot study, as students worked through the simulation, evidence of shifts in student motivation, understanding of science content, and ideas about the nature of science emerged from a combination of student interviews, focus groups, and students' conversations. Outcomes of the pilot study included improvements to the pedagogical approach: allowing students to do "Extreme Testing" (e.g., making the world as hot or cold as possible) and increasing the time for free exploration of the simulation. In the dissertation (phase two of the research design), these findings were implemented in a new curriculum scaled for 85 new students from the same school during the next school year. The modifications included new components implementing simulations as an assessment tool for all students and embedded modeling tools. All students were asked to build pre- and post-models; however, due to technological constraints, these were not an effective tool. 
A non-video group of 44 students was established, and another group of 41 students received a WISE curriculum which included twelve minutes of scientists' conversational videos referencing explicit aspects of the nature of science, specifically the use of models and simulations in science. The students in the video group showed marked improvement compared to the non-video group on questions regarding modeling as a tool for representing objects and processes of science, as evidenced by multiple data sources. The findings from the dissertation have potential impacts on improving Nature of Science (NOS) concepts around modeling by efficiently embedding short authentic scientific videos that can be easily used by many educators. Following the curriculum interventions, both groups scored higher than the average United States middle school student on many NOS and climate content constructs of published assessments by the American Association for the Advancement of Science (AAAS).

  11. Employing Inquiry-Based Computer Simulations and Embedded Scientist Videos To Teach Challenging Climate Change and Nature of Science Concepts

    NASA Astrophysics Data System (ADS)

    Cohen, E.

    2013-12-01

    Design-based research was utilized to investigate how students use a greenhouse effect simulation, in order to derive best learning practices. During this process, students recognized the authentic scientific process involving computer simulations. The simulation used is embedded within an inquiry-based, technology-mediated science curriculum known as the Web-based Inquiry Science Environment (WISE). For this research, students from a suburban, diverse middle school setting used the simulations as part of a two-week class unit on climate change. A pilot study was conducted during phase one of the research that informed phase two, which encompasses the dissertation. During the pilot study, as students worked through the simulation, evidence of shifts in student motivation, understanding of science content, and ideas about the nature of science emerged from a combination of student interviews, focus groups, and students' conversations. Outcomes of the pilot study included improvements to the pedagogical approach: allowing students to do 'Extreme Testing' (e.g., making the world as hot or cold as possible) and increasing the time for free exploration of the simulation. In the dissertation (phase two of the research design), these findings were implemented in a new curriculum scaled for 85 new students from the same school during the next school year. The modifications included new components implementing simulations as an assessment tool for all students and embedded modeling tools. All students were asked to build pre- and post-models; however, due to technological constraints, these were not an effective tool. 
A non-video group of 44 students was established, and another group of 41 students received a WISE curriculum which included twelve minutes of scientists' conversational videos referencing explicit aspects of the nature of science, specifically the use of models and simulations in science. The students in the video group showed marked improvement compared to the non-video group on questions regarding modeling as a tool for representing objects and processes of science, as evidenced by multiple data sources. The findings from the dissertation have potential impacts on improving Nature of Science (NOS) concepts around modeling by efficiently embedding short authentic scientific videos that can be easily used by many educators. Following the curriculum interventions, both groups scored higher than the average United States middle school student on many NOS and climate content constructs of published assessments by the American Association for the Advancement of Science (AAAS).

  12. Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture

    DOEpatents

    Muller, George; Perkins, Casey J.; Lancaster, Mary J.; MacDonald, Douglas G.; Clements, Samuel L.; Hutton, William J.; Patrick, Scott W.; Key, Bradley Robert

    2015-07-28

    Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture are described. According to one aspect, a computer-implemented security evaluation method includes accessing information regarding a physical architecture and a cyber architecture of a facility, building a model of the facility comprising a plurality of physical areas of the physical architecture, a plurality of cyber areas of the cyber architecture, and a plurality of pathways between the physical areas and the cyber areas, identifying a target within the facility, executing the model a plurality of times to simulate a plurality of attacks against the target by an adversary traversing at least one of the areas in the physical domain and at least one of the areas in the cyber domain, and using results of the executing, providing information regarding a security risk of the facility with respect to the target.
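    As a rough illustration of the "executing the model a plurality of times" step described above, the sketch below runs repeated random attack walks over a toy facility graph spanning physical and cyber areas. The area names, pathway structure, and traversal probabilities are all hypothetical and are not drawn from the patent:

```python
import random

# Hypothetical facility graph: areas (physical and cyber) linked by
# pathways, each with an assumed probability of successful traversal.
PATHWAYS = {
    "perimeter": [("lobby", 0.8), ("wifi", 0.5)],        # physical / cyber entry
    "lobby": [("server_room", 0.3)],
    "wifi": [("scada_network", 0.4)],
    "server_room": [("target", 0.9)],
    "scada_network": [("target", 0.7)],
}

def attack_succeeds(rng, start="perimeter", target="target"):
    """One simulated attack: walk pathways until the target is reached
    or a traversal attempt fails."""
    area = start
    while area != target:
        options = PATHWAYS.get(area, [])
        if not options:
            return False            # dead end: adversary is stuck
        nxt, p = rng.choice(options)
        if rng.random() > p:        # traversal attempt fails: attack ends
            return False
        area = nxt
    return True

def estimate_risk(trials=10000, seed=0):
    """Fraction of simulated attacks that reach the target."""
    rng = random.Random(seed)
    return sum(attack_succeeds(rng) for _ in range(trials)) / trials
```

    Repeating the walk many times and reporting the success fraction is one generic way to turn a pathway model into a quantitative security-risk estimate.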

  13. Computer-aided design and experimental investigation of a hydrodynamic device: the microwire electrode

    PubMed

    Fulian; Gooch; Fisher; Stevens; Compton

    2000-08-01

    The development and application of a new electrochemical device using a computer-aided design strategy is reported. This novel design is based on the flow of electrolyte solution past a microwire electrode situated centrally within a large duct. In the design stage, finite element simulations were employed to evaluate feasible working geometries and mass transport rates. The computer-optimized designs were then exploited to construct experimental devices. Steady-state voltammetric measurements were performed for a reversible one-electron-transfer reaction to establish the experimental relationship between electrolysis current and solution velocity. The experimental results are compared to those predicted numerically, and good agreement is found. The numerical studies are also used to establish an empirical relationship between the mass transport limited current and the volume flow rate, providing a simple and quantitative alternative for workers who would prefer to exploit this device without the need to develop the numerical aspects.

  14. An interactive physics-based unmanned ground vehicle simulator leveraging open source gaming technology: progress in the development and application of the virtual autonomous navigation environment (VANE) desktop

    NASA Astrophysics Data System (ADS)

    Rohde, Mitchell M.; Crawford, Justin; Toschlog, Matthew; Iagnemma, Karl D.; Kewlani, Guarav; Cummins, Christopher L.; Jones, Randolph A.; Horner, David A.

    2009-05-01

    It is widely recognized that simulation is pivotal to vehicle development, whether manned or unmanned. There are few dedicated choices, however, for those wishing to perform realistic, end-to-end simulations of unmanned ground vehicles (UGVs). The Virtual Autonomous Navigation Environment (VANE), under development by US Army Engineer Research and Development Center (ERDC), provides such capabilities but utilizes a High Performance Computing (HPC) Computational Testbed (CTB) and is not intended for on-line, real-time performance. A product of the VANE HPC research is a real-time desktop simulation application under development by the authors that provides a portal into the HPC environment as well as interaction with wider-scope semi-automated force simulations (e.g. OneSAF). This VANE desktop application, dubbed the Autonomous Navigation Virtual Environment Laboratory (ANVEL), enables analysis and testing of autonomous vehicle dynamics and terrain/obstacle interaction in real-time with the capability to interact within the HPC constructive geo-environmental CTB for high fidelity sensor evaluations. ANVEL leverages rigorous physics-based vehicle and vehicle-terrain interaction models in conjunction with high-quality, multimedia visualization techniques to form an intuitive, accurate engineering tool. The system provides an adaptable and customizable simulation platform that allows developers a controlled, repeatable testbed for advanced simulations. ANVEL leverages several key technologies not common to traditional engineering simulators, including techniques from the commercial video-game industry. These enable ANVEL to run on inexpensive commercial, off-the-shelf (COTS) hardware. In this paper, the authors describe key aspects of ANVEL and its development, as well as several initial applications of the system.

  15. Downscaling seasonal to centennial simulations on distributed computing infrastructures using WRF model. The WRF4G project

    NASA Astrophysics Data System (ADS)

    Cofino, A. S.; Fernández Quiruelas, V.; Blanco Real, J. C.; García Díez, M.; Fernández, J.

    2013-12-01

    Nowadays, Grid Computing is a powerful computational tool which is ready to be used by the scientific community in different areas (such as biomedicine, astrophysics, climate, etc.). However, the use of these distributed computing infrastructures (DCIs) is not yet common practice in climate research, and only a few teams and applications in this area take advantage of them. Thus, the objective of the WRF4G project is to popularize the use of this technology in the atmospheric sciences area. In order to achieve this objective, one of the most widely used applications has been taken (WRF, a limited-area model, successor of the MM5 model), which has a user community of more than 8000 researchers worldwide. This community develops its research activity in different areas and could benefit from the advantages of Grid resources (case-study simulations, regional hindcast/forecast, sensitivity studies, etc.). The WRF model is used by many groups in the climate research community to carry out downscaling simulations, so this community will also benefit. However, Grid infrastructures have some drawbacks for the execution of applications that make intensive use of CPU and memory for a long period of time. This makes it necessary to develop a specific framework (middleware). This middleware encapsulates the application and provides appropriate services for the monitoring and management of the simulations and the data. Thus, another objective of the WRF4G project is the development of a generic adaptation of WRF to DCIs. It should simplify access to the DCIs for researchers, and also free them from the technical and computational aspects of using these DCIs. 
Finally, in order to demonstrate the ability of WRF4G to solve actual scientific challenges of interest and relevance to climate science (implying a high computational cost), we show results from different kinds of downscaling experiments, such as ERA-Interim re-analysis, CMIP5 models, and seasonal simulations. WRF4G is being used to run WRF simulations which contribute to the CORDEX initiative and other projects such as SPECS and EUPORIAS. This work has been partially funded by the European Regional Development Fund (ERDF) and the Spanish National R&D Plan 2008-2011 (CGL2011-28864).

  16. Fermi-level effects in semiconductor processing: A modeling scheme for atomistic kinetic Monte Carlo simulators

    NASA Astrophysics Data System (ADS)

    Martin-Bragado, I.; Castrillo, P.; Jaraiz, M.; Pinacho, R.; Rubio, J. E.; Barbolla, J.; Moroz, V.

    2005-09-01

    Atomistic process simulation is expected to play an important role for the development of next generations of integrated circuits. This work describes an approach for modeling electric charge effects in a three-dimensional atomistic kinetic Monte Carlo process simulator. The proposed model has been applied to the diffusion of electrically active boron and arsenic atoms in silicon. Several key aspects of the underlying physical mechanisms are discussed: (i) the use of the local Debye length to smooth out the atomistic point-charge distribution, (ii) algorithms to correctly update the charge state in a physically accurate and computationally efficient way, and (iii) an efficient implementation of the drift of charged particles in an electric field. High-concentration effects such as band-gap narrowing and degenerate statistics are also taken into account. The efficiency, accuracy, and relevance of the model are discussed.

  17. Dosimetry in MARS spectral CT: TOPAS Monte Carlo simulations and ion chamber measurements.

    PubMed

    Lu, Gray; Marsh, Steven; Damet, Jerome; Carbonez, Pierre; Laban, John; Bateman, Christopher; Butler, Anthony; Butler, Phil

    2017-06-01

    Spectral computed tomography (CT) is an up-and-coming imaging modality which shows great promise in revealing unique diagnostic information. Because this imaging modality is based on X-ray CT, it is of utmost importance to study the radiation dose aspects of its use. This study reports on the implementation and evaluation of a Monte Carlo simulation tool using TOPAS for estimating dose in a pre-clinical spectral CT scanner known as the MARS scanner. Simulated estimates were compared with measurements from an ionization chamber. For a typical MARS scan of a 30 mm diameter cylindrical phantom, TOPAS estimated a CT dose index (CTDI) of 29.7 mGy; the CTDI measured by ion chamber agreed with the TOPAS estimate to within 3%. Although further development is required, our investigation of TOPAS for estimating MARS scan dosimetry has shown its potential for further study of spectral scanning protocols and dose to scanned objects.
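    For readers unfamiliar with the dose metric quoted above, the standard weighted and volume CTDI definitions (general CT dosimetry conventions, not MARS-specific formulas) combine central and peripheral phantom measurements:

```python
def ctdi_w(center_mGy, periphery_mGy):
    """Weighted CTDI: one-third the central measurement plus two-thirds
    the mean of the peripheral measurements (standard CT dosimetry
    convention)."""
    peripheral_mean = sum(periphery_mGy) / len(periphery_mGy)
    return center_mGy / 3.0 + 2.0 * peripheral_mean / 3.0

def ctdi_vol(ctdiw_mGy, pitch):
    """Volume CTDI for a helical scan: weighted CTDI divided by pitch."""
    return ctdiw_mGy / pitch
```

    With a uniform dose profile (equal central and peripheral readings), the weighted CTDI simply reproduces the measured value.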

  18. Computational Aerothermodynamic Simulation Issues on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; White, Jeffery A.

    2004-01-01

    The synthesis of physical models for gas chemistry and turbulence from the structured grid codes LAURA and VULCAN into the unstructured grid code FUN3D is described. A directionally symmetric, total variation diminishing (STVD) algorithm and an entropy fix (eigenvalue limiter) keyed to the local cell Reynolds number are introduced to improve solution quality for hypersonic aeroheating applications. A simple grid-adaptation procedure is incorporated within the flow solver. Simulations of flow over an ellipsoid (perfect gas, inviscid) and the Shuttle Orbiter (viscous, chemical nonequilibrium), and comparisons to the structured grid solvers LAURA (cylinder, Shuttle Orbiter) and VULCAN (flat plate), are presented to show current capabilities. The quality of heating in 3D stagnation regions is very sensitive to algorithm options. In general, high aspect ratio tetrahedral elements complicate the simulation of high Reynolds number, viscous flow as compared to locally structured meshes aligned with the flow.

  19. Computational analysis of nonlinearities within dynamics of cable-based driving systems

    NASA Astrophysics Data System (ADS)

    Anghelache, G. D.; Nastac, S.

    2017-08-01

    This paper deals with the computational nonlinear dynamics of mechanical systems containing flexural parts within the actuating scheme; in particular, cable-based driving systems are treated. Both functional nonlinearities and the real characteristic of the power supply were assumed, in order to obtain a realistic computer simulation model able to provide reliable results regarding the system dynamics. The transitory and steady regimes during a regular operating cycle were taken into account. The authors present a particular case of a lift system, considered representative for the objective of this study. The simulations were based on values of the essential parameters acquired from experimental tests and/or regular practice in the field. The results analysis and the final discussion reveal correlated dynamic aspects within the mechanical parts, the driving system, and the power supply, all of which supply potential sources of particular resonances within some transitory phases of the working cycle that can affect structural and functional dynamics. In addition, the influences of the computational hypotheses on both the quantitative and qualitative behaviour of the system are underlined. The most significant outcome of this theoretical and computational research consists in the development of a unitary and feasible model, useful for identifying the nonlinear dynamic effects in systems with a cable-based driving scheme, and thereby helping to optimize the operating regime, including dynamics control measures.

  20. Quantifying Groundwater Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Poeter, E.; Foglia, L.

    2007-12-01

    Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. 
This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.

  1. Secondary flow in turbulent ducts with increasing aspect ratio

    NASA Astrophysics Data System (ADS)

    Vinuesa, R.; Schlatter, P.; Nagib, H. M.

    2018-05-01

    Direct numerical simulations of turbulent duct flows with aspect ratios 1, 3, 5, 7, 10, and 14.4 at a center-plane friction Reynolds number Reτ,c≃180, and aspect ratios 1 and 3 at Reτ,c≃360, were carried out with the spectral-element code nek5000. The aim of these simulations is to gain insight into the kinematics and dynamics of Prandtl's secondary flow of the second kind and its impact on the flow physics of wall-bounded turbulence. The secondary flow is characterized in terms of the cross-plane component of the mean kinetic energy and its variation in the spanwise direction of the flow. Our results show that averaging times of around 3000 convective time units (based on duct half-height h) are required to reach a converged state of the secondary flow, which extends up to a spanwise distance of around ≃5 h measured from the side walls. We also show that if the duct is not wide enough to accommodate the whole extent of the secondary flow, then its structure is modified, as reflected through a different spanwise distribution of energy. Another confirmation of the extent of the secondary flow is the decay of the kinetic energy of any remnant secondary motions for zc/h > 5 (where zc is the spanwise distance from the corner) in aspect ratios 7, 10, and 14.4, which exhibits a decreasing level of energy with increasing averaging time ta, with a rapid rate of decay given by ~ta^-1. This is the same rate of decay observed in a spanwise-periodic channel simulation, which suggests that at the core, the kinetic energy of the secondary flow integrated over the cross-sectional area behaves as a random variable with zero mean, with a rate of decay consistent with the central limit theorem. 
Long-time averages of statistics in a region of rectangular ducts extending about the width of a well-designed channel simulation (i.e., extending about ≃3 h on each side of the center plane) indicate that ducts or experimental facilities with aspect ratios larger than 10 may, if properly designed, exhibit good agreement with results obtained from spanwise-periodic channel computations.
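    The ~ta^-1 decay invoked above is exactly what the central limit theorem predicts for the squared time-average of a zero-mean signal: the mean of N independent samples has variance σ²/N. A minimal numerical check on synthetic white noise (not the DNS data) is:

```python
import random

def mean_square_of_average(n_samples, n_trials=400, seed=0):
    """Average n_samples of zero-mean unit-variance noise; return the
    mean of average^2 over n_trials repetitions.

    For independent samples the CLT gives E[average^2] = 1 / n_samples,
    i.e. the 'energy' of the residual mean decays like 1/t_a."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        avg = sum(rng.gauss(0.0, 1.0) for _ in range(n_samples)) / n_samples
        total += avg * avg
    return total / n_trials
```

    Increasing the averaging length reduces the residual energy roughly in inverse proportion, mirroring the ta^-1 decay of the remnant secondary motions reported above.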

  2. Application of linear logic to simulation

    NASA Astrophysics Data System (ADS)

    Clarke, Thomas L.

    1998-08-01

    Linear logic, since its introduction by Girard in 1987, has proven expressive and powerful. Linear logic has provided natural encodings of Turing machines, Petri nets, and other computational models. Linear logic is also capable of naturally modeling resource-dependent aspects of reasoning. The distinguishing characteristic of linear logic is that it accounts for resources; two instances of the same variable are considered differently from a single instance. Linear logic thus must obey a form of the linear superposition principle. A proposition can be reasoned with only once, unless a special operator is applied. Informally, linear logic distinguishes two kinds of conjunction and two kinds of disjunction, and also introduces a modal storage operator that explicitly indicates propositions that can be reused. This paper discusses the application of linear logic to simulation. A wide variety of logics have been developed; in addition to classical logic, there are fuzzy logics, affine logics, quantum logics, etc. All of these have found application in simulations of one sort or another. The special characteristics of linear logic and its benefits for simulation will be discussed. Of particular interest is a connection that can be made between linear logic and simulated dynamics by using the concepts of Lie algebras and Lie groups. Lie groups provide the connection between the exponential modal storage operators of linear logic and the eigenfunctions of dynamic differential operators. Particularly suggestive are possible relations between complexity results for linear logic and non-computability results for dynamical systems.

  3. SU-F-J-178: A Computer Simulation Model Observer for Task-Based Image Quality Assessment in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolly, S; Mutic, S; Anastasio, M

    Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a singular value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02%, at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose). 
Framework flexibility allows for incorporation of additional modules to include any aspect of the treatment process, and therefore has great potential for both assessment and optimization within radiation therapy.
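    The AUTOC figure used above is the area under a sampled TOC curve; numerically this is ordinary trapezoidal integration. A minimal sketch (the sample points below are hypothetical, not the study's data):

```python
def area_under_curve(xs, ys):
    """Trapezoidal area under a sampled curve, e.g. a TOC curve
    sampled at points (xs[i], ys[i]) with xs in increasing order."""
    return sum((x1 - x0) * (y0 + y1) / 2.0
               for (x0, y0), (x1, y1) in zip(zip(xs, ys),
                                             zip(xs[1:], ys[1:])))
```

    Summarizing a whole operating-characteristic curve as a single area is what allows treatment efficacy to be compared across imaging protocols with one number.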

  4. Aeroacoustic Simulations of Tandem Cylinders with Subcritical Spacing

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Choudhari, Meelan M.; Khorrami, Mehdi R.; Neuhart, Dan H.; Hutcheson, Florence V.; Brooks, Thomas F.; Stead, Daniel J.

    2008-01-01

    Tandem cylinders are being studied because they model a variety of component-level interactions of landing gear. The present effort is directed at the case of two identical cylinders with their centroids separated in the streamwise direction by 1.435 diameters. Experiments in the Basic Aerodynamic Research Tunnel and Quiet Flow Facility at NASA Langley Research Center have provided an extensive experimental database of the nearfield flow and radiated noise. The measurements were conducted at a Mach number of 0.1285 and a Reynolds number of 1.66×10^5 based on the cylinder diameter. A trip was used on the upstream cylinder to ensure a fully turbulent flow separation and, hence, to simulate a major aspect of high Reynolds number flow. The parallel computational effort uses the three-dimensional Navier-Stokes solver CFL3D with a hybrid, zonal turbulence model that turns off the turbulence production term everywhere except in a narrow ring surrounding solid surfaces. The experiments exhibited an asymmetry in the surface pressure that was persistent despite attempts to eliminate it through small changes in the configuration. To model the asymmetry, the simulations were run with the cylinder configuration at a nonzero but small angle of attack. The computed results and experiments are in general agreement that vortex shedding for the spacing studied herein is weak relative to that observed at supercritical spacings. Although the shedding was subdued in the simulations, it was still more prominent than in the experiments. Overall, the simulation comparisons with measured near-field data and the radiated acoustics are reasonable, especially if one is concerned with capturing the trends relative to larger cylinder spacings. However, the flow details of the 1.435 diameter spacing have not been captured in full even though very fine grid computations have been performed. 
Some of the discrepancy may be associated with the simulation's inexact representation of the experimental configuration, but numerical and flow modeling errors are also likely contributors to the observed differences.

  5. Crystal engineering of ibuprofen compounds: From molecule to crystal structure to morphology prediction by computational simulation and experimental study

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Liang, Zuozhong; Wu, Fei; Chen, Jian-Feng; Xue, Chunyu; Zhao, Hong

    2017-06-01

    We generated crystal structures of ibuprofen in seven common space groups (Cc, P21/c, P212121, P21, Pbca, Pna21, and Pbcn) from the ibuprofen molecule by molecular simulation. The predicted crystal structure of ibuprofen with space group P21/c has the lowest total energy and the largest density, and is nearly indistinguishable from the experimental result. In addition, the XRD patterns of the predicted crystal structure are highly consistent with those of ibuprofen recrystallized from solvent. This indicates that the simulation can accurately predict the crystal structure of ibuprofen from the molecule. Furthermore, based on this crystal structure, we predicted the crystal habit in vacuum using the attachment energy (AE) method and considered solvent effects in a systematic way using the modified attachment energy (MAE) model. The simulation can thus reconstruct a complete process from molecule to crystal structure to morphology prediction. Experimentally, we observed crystal morphologies in four solvents of different polarity (ethanol, acetonitrile, ethyl acetate, and toluene). The aspect ratio of the crystal habits in this ibuprofen system was found to decrease with increasing solvent relative polarity. Moreover, the modified crystal morphologies are in good agreement with the observed experimental morphologies. Finally, this work may guide computer-aided design of desirable crystal morphologies.

  6. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    DOE PAGES

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures, and the performance impact of different HPC architecture choices, is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim also offers a highly accurate simulation mode for better tracking of injected MPI process failures; in this mode, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.
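
    The bookkeeping that any MPI-style message matching algorithm must perform (a posted-receive queue and an unexpected-message queue, matched in arrival order with wildcard source/tag) can be sketched as follows. This is a generic illustration of the matching semantics, not xSim's actual optimized algorithm, and all names are invented.

```python
from collections import deque

ANY_SOURCE = -1
ANY_TAG = -1

class Matcher:
    def __init__(self):
        self.unexpected = deque()  # messages that arrived before a matching recv
        self.posted = deque()      # receives posted before a matching message

    @staticmethod
    def _matches(msg, recv):
        """A (source, tag) receive matches a message if each field is equal
        or the receive uses a wildcard."""
        return recv[0] in (ANY_SOURCE, msg[0]) and recv[1] in (ANY_TAG, msg[1])

    def arrive(self, source, tag, payload):
        """Message arrival: match the oldest compatible posted receive,
        otherwise append to the unexpected-message queue."""
        for i, recv in enumerate(self.posted):
            if self._matches((source, tag), recv):
                del self.posted[i]
                return (recv, payload)
        self.unexpected.append((source, tag, payload))
        return None

    def post_recv(self, source=ANY_SOURCE, tag=ANY_TAG):
        """Receive posting: match the oldest compatible unexpected message,
        otherwise append to the posted-receive queue."""
        for i, (s, t, payload) in enumerate(self.unexpected):
            if self._matches((s, t), (source, tag)):
                del self.unexpected[i]
                return payload
        self.posted.append((source, tag))
        return None
```

    Under oversubscription, these two queues grow with the number of simulated ranks per core, which is why a faster matching algorithm reduces xSim's management overhead.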

  7. Acoustic wave simulation using an overset grid for the global monitoring system

    NASA Astrophysics Data System (ADS)

    Kushida, N.; Le Bras, R.

    2017-12-01

    The International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) monitors hydro-acoustic and infrasound waves across the globe. Because of the complex nature of the oceans and the atmosphere, computer simulation can play an important role in understanding the observed signals. In this regard, methods that depend on partial differential equations and require minimal modelling are preferable. To the best of our knowledge, acoustic wave propagation simulations based on partial differential equations have not previously been performed on such a large scale (pp. 147-161 of ref. [1], [2]). The main difficulties in building such simulation codes are: (1) accounting for the inhomogeneity of the medium, including background flows; (2) the high aspect ratio of the computational domain; (3) stability during long time integration. To overcome these difficulties, we employ a two-dimensional finite difference (FDM) scheme in spherical coordinates with the Yin-Yang overset grid [3], solving the governing equations of acoustic waves introduced by Ostashev et al. [4]. A comparison with real hydro-acoustic recordings will be presented at the conference. [1] Paul C. Etter: Underwater Acoustic Modeling and Simulation, Fourth Edition, CRC Press, 2013. [2] Lian Wang et al.: Review of Underwater Acoustic Propagation Models, NPL Report AC 12, 2014. [3] A. Kageyama and T. Sato: "Yin-Yang grid": An overset grid in spherical geometry, Geochem. Geophys. Geosyst., 5, Q09005, 2004. [4] Vladimir E. Ostashev et al.: Equations for finite-difference, time-domain simulation of sound propagation in moving inhomogeneous media and numerical implementation, Acoustical Society of America, DOI: 10.1121/1.1841531, 2005.
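
    The Yin-Yang overset grid [3] covers the sphere with two identical latitude-longitude patches related by a fixed rotation, which is what avoids the pole singularities and extreme cell aspect ratios of a single global grid. A minimal sketch of the coordinate mapping between the two patches, via the Cartesian relation (x, y, z) -> (-x, z, y) given by Kageyama and Sato:

```python
import math

def yin_to_yang(theta, phi):
    """Map spherical coordinates (colatitude theta, longitude phi) from the
    Yin patch to the Yang patch via (x, y, z) -> (-x, z, y).
    The mapping is an involution: applying it twice recovers the input."""
    x = math.sin(theta) * math.cos(phi)
    y = math.sin(theta) * math.sin(phi)
    z = math.cos(theta)
    xp, yp, zp = -x, z, y
    return math.acos(zp), math.atan2(yp, xp)
```

    In an overset solver, this transform is applied at each patch boundary to interpolate field values from the partner grid.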

  8. Assessment by Monte Carlo computer simulations of the phase behavior of hard spherocylinders confined within cylindrical cavities.

    PubMed

    Viveros-Méndez, Perla X; Gil-Villegas, Alejandro; Aranda Espinoza, Said

    2017-12-21

    The phase behavior of hard spherocylinders (HSCs) confined in cylindrical cavities is studied using Monte Carlo simulations in the canonical ensemble. Results are presented for different values of the particles' aspect ratio l/σ, where l and σ are the length and diameter of the cylinder and hemispherical caps, respectively. Finite cavities with periodic boundary conditions along the principal axis of the cavities have been considered, where the cavity's principal axis is along the z-direction. We first focus our study on the structure induced by varying the degree of confinement, determining the HSC phase diagram for aspect ratios l/σ = 3, 5, 7, and 9, at a fixed packing fraction η = 0.071. By compressing the cavities along the radial direction, the isotropic phase becomes stable before the nematic phase as the length of the cavities is increased, resulting in a second-order transition. The occurrence of phase transitions has also been determined by varying η for constant values of the cavity's length L. Systems with low aspect ratios, l/σ = 3, 5, 7, and 9, exhibit first-order transitions with chiral, paranematic, and isotropic phases, whereas for larger HSCs, l/σ = 50, 70, and 100, the transitions are second order with paranematic, nematic, and isotropic phases, in contrast with the behavior of non-confined systems, with first-order transitions for isotropic, nematic, smectic-A, and solid phases.
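
    For concreteness, the packing fraction η used above follows directly from the spherocylinder volume (a cylinder of length l and diameter σ plus two hemispherical caps). A small sketch, idealizing the cavity as a plain cylinder:

```python
import math

def packing_fraction(n, l, sigma, cavity_radius, cavity_length):
    """Packing fraction eta of n hard spherocylinders (cylindrical length l,
    diameter sigma, hemispherical end caps) inside a cylindrical cavity."""
    # Spherocylinder volume: cylinder (pi sigma^2 l / 4) + sphere (pi sigma^3 / 6).
    v_particle = math.pi * sigma**2 * l / 4.0 + math.pi * sigma**3 / 6.0
    v_cavity = math.pi * cavity_radius**2 * cavity_length
    return n * v_particle / v_cavity
```

    In a canonical-ensemble study, η is fixed by choosing n and the cavity dimensions before sampling configurations.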

  9. Infrared target simulation environment for pattern recognition applications

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas E.; George, Nicholas

    1994-07-01

    The generation of complete databases of IR data is extremely useful for training human observers and testing automatic pattern recognition algorithms. Field data may be used for realism, but require expensive and time-consuming procedures. IR scene simulation methods have emerged as a more economical and efficient alternative for the generation of IR databases. A novel approach to IR target simulation is presented in this paper. Model vehicles at 1:24 scale are used for the simulation of real targets. The temperature profile of the model vehicles is controlled using resistive circuits embedded inside the models. The IR target is recorded using an Inframetrics dual-channel IR camera system. Using computer processing, we place the recorded IR target in a prerecorded background. The advantages of this approach are: (1) the range and 3D target aspect can be controlled by the relative position between the camera and model vehicle; (2) the temperature profile can be controlled by adjusting the power delivered to the resistive circuit; (3) the IR sensor effects are directly incorporated in the recording process, because the real sensor is used; (4) the recorded target can be embedded in various types of backgrounds recorded under different weather conditions, times of day, etc. The effectiveness of this approach is demonstrated by generating an IR database of three vehicles, which is used to train a back-propagation neural network. The neural network is capable of classifying vehicle type, vehicle aspect, and relative temperature with a high degree of accuracy.

  10. Hybrid Grid Techniques for Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Koomullil, Roy P.; Soni, Bharat K.; Thornburg, Hugh J.

    1996-01-01

    During the past decade, computational simulation of fluid flow for propulsion activities has progressed significantly, and many notable successes have been reported in the literature. However, the generation of a high quality mesh for such problems has often been reported as a pacing item. Hence, much effort has been expended to speed this portion of the simulation process. Several approaches have evolved for grid generation. Two of the most common are structured multi-block, and unstructured based procedures. Structured grids tend to be computationally efficient, and have the high aspect ratio cells necessary for efficiently resolving viscous layers. Structured multi-block grids may or may not exhibit grid line continuity across the block interface. This relaxation of the continuity constraint at the interface is intended to ease the grid generation process, which is still time consuming. Flow solvers supporting non-contiguous interfaces require specialized interpolation procedures which may not ensure conservation at the interface. Unstructured or generalized indexing data structures offer greater flexibility, but require explicit connectivity information and are not easy to generate for three-dimensional configurations. In addition, unstructured mesh based schemes tend to be less efficient, and viscous layers are difficult to resolve. Recently, hybrid or generalized element solution and grid generation techniques have been developed with the objective of combining the attractive features of both structured and unstructured techniques. In the present work, recently developed procedures for hybrid grid generation and flow simulation are critically evaluated, and compared to existing structured and unstructured procedures in terms of accuracy and computational requirements.

  11. Free-Swinging Failure Tolerance for Robotic Manipulators. Degree awarded by Purdue Univ.

    NASA Technical Reports Server (NTRS)

    English, James

    1997-01-01

    Under this GSRP fellowship, software-based failure-tolerance techniques were developed for robotic manipulators. The focus was on failures characterized by the loss of actuator torque at a joint, called free-swinging failures. The research results spanned many aspects of the free-swinging failure-tolerance problem, from preparing for an expected failure to discovery of postfailure capabilities to establishing efficient methods to realize those capabilities. Developed algorithms were verified using computer-based dynamic simulations, and these were further verified using hardware experiments at Johnson Space Center.

  12. Precision Attitude Determination System (PADS) system design and analysis: Single-axis gimbal star tracker

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The feasibility of an evolutionary development of a single-axis gimbal star tracker from prior system applications based on two-axis gimbal star trackers is evaluated. A detailed evaluation of the star tracker's gimbal encoder is included. A brief system description covers the aspects of tracker evolution and encoder evaluation. System analysis includes evaluation of star availability and mounting constraints for the geosynchronous orbit application, and a covariance simulation analysis to evaluate performance potential. Star availability and covariance analysis digital computer programs are included.

  13. Nature and origins of virtual environments - A bibliographical essay

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.

    1991-01-01

    Virtual environments presented via head-mounted, computer-driven displays provide a new medium for communication. They may be analyzed by considering: (1) what may be meant by an environment; (2) what is meant by the process of virtualization; and (3) some aspects of human performance that constrain environmental design. Their origins are traced from previous work in vehicle simulation and multimedia research. Pointers are provided to key technical references, in the dispersed, archival literature, that are relevant to the development and evaluation of virtual-environment interface systems.

  14. Integrated approach for stress analysis of high performance diesel engine cylinder head

    NASA Astrophysics Data System (ADS)

    Chainov, N. D.; Myagkov, L. L.; Malastowski, N. S.; Blinov, A. S.

    2018-03-01

    Growing thermal and mechanical loads due to the development of engines with high mean effective pressures place stringent requirements on cylinder head durability. In this paper, computational schemes for thermal and mechanical stress analysis of a high-performance diesel engine cylinder head are described. The most important aspects of this approach are accounting for the temperature fields of mating parts (valves and valve seats), modeling heat transfer in the cooling jacket of the cylinder head, and topology optimization of the component's load-bearing structure. Simulation results are shown and analyzed.

  15. A survey of Applied Psychological Services' models of the human operator

    NASA Technical Reports Server (NTRS)

    Siegel, A. I.; Wolf, J. J.

    1979-01-01

    A historical perspective is presented on the major features and status of two families of computer simulation models in which the human operator plays the primary role. Both task-oriented and message-oriented models are included. Two other recent efforts dealing with visual information processing are summarized; they involve not whole-model development but a family of subroutines customized to add human aspects to existing models. A global diagram of the generalized model development/validation process is presented and related to 15 criteria for model evaluation.

  16. Approximate Micromechanics Treatise of Composite Impact

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Handler, Louis M.

    2005-01-01

    A formalism is described for the micromechanical impact of composites. The formalism consists of numerous equations that describe all aspects of impact, from impactor and composite conditions to impact contact, damage progression, and penetration or containment. It is based on a through-the-thickness displacement-increment simulation, which makes it convenient to track local damage in terms of microfailure modes and their respective characteristics. A flow chart is provided to cast the formalism (numerous equations) into a computer code for embedding in composite mechanics codes and/or finite element composite structural analysis.

  17. Electrical Conductivity in Transparent Silver Nanowire Networks: Simulations and Experiments

    NASA Astrophysics Data System (ADS)

    Sherrott, Michelle; Mutiso, Rose; Rathmell, Aaron; Wiley, Benjamin; Winey, Karen

    2012-02-01

    We model and experimentally measure the electrical conductivity of two-dimensional networks containing finite, conductive cylinders with aspect ratio ranging from 33 to 333. We have previously used our simulations to explore the effects of cylinder orientation and aspect ratio in three-dimensional composites, and now extend the simulation to consider two-dimensional silver nanowire networks. Preliminary results suggest that increasing the aspect ratio and area fraction of these rods significantly decreases the sheet resistance of the film. For all simulated aspect ratios, this sheet resistance approaches a constant value for high area fractions of rods. This implies that regardless of aspect ratio, there is a limiting minimum sheet resistance that is characteristic of the properties of the nanowires. Experimental data from silver nanowire networks will be incorporated into the simulations to define the contact resistance and corroborate experimentally measured sheet resistances of transparent thin films.
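
    The kind of conductive-stick network simulation described above rests on counting junctions between randomly placed finite rods, since junction density controls sheet conductance. A minimal 2D sketch (not the authors' code) illustrating that, at a fixed number of sticks, longer sticks produce more contacts:

```python
import math
import random

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p, q, r, s):
    """Proper intersection test for segments pq and rs via orientation signs."""
    d1, d2 = _cross(p, q, r), _cross(p, q, s)
    d3, d4 = _cross(r, s, p), _cross(r, s, q)
    return d1 * d2 < 0 and d3 * d4 < 0

def contact_count(n_sticks, stick_len, seed=0):
    """Drop n randomly placed, randomly oriented sticks in the unit square and
    count pairwise junctions, the quantity that sets network conductance."""
    rng = random.Random(seed)
    sticks = []
    for _ in range(n_sticks):
        x, y, a = rng.random(), rng.random(), rng.random() * math.pi
        dx, dy = 0.5 * stick_len * math.cos(a), 0.5 * stick_len * math.sin(a)
        sticks.append(((x - dx, y - dy), (x + dx, y + dy)))
    return sum(segments_intersect(*sticks[i], *sticks[j])
               for i in range(n_sticks) for j in range(i + 1, n_sticks))
```

    A full network model would additionally assign a contact resistance to each junction and solve Kirchhoff's laws on the resulting resistor graph.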

  18. A systematic review to identify areas of enhancements of pandemic simulation models for operational use at provincial and local levels

    PubMed Central

    2012-01-01

    Background In recent years, computer simulation models have supported development of pandemic influenza preparedness policies. However, U.S. policymakers have raised several concerns about the practical use of these models. In this review paper, we examine the extent to which the current literature already addresses these concerns and identify means of enhancing the current models for higher operational use. Methods We surveyed PubMed and other sources for published research literature on simulation models for influenza pandemic preparedness. We identified 23 models published between 1990 and 2010 that consider single-region (e.g., country, province, city) outbreaks and multi-pronged mitigation strategies. We developed a plan for examination of the literature based on the concerns raised by the policymakers. Results While examining the concerns about the adequacy and validity of data, we found that though the epidemiological data supporting the models appear to be adequate, they should be validated through as many updates as possible during an outbreak. Interfaces for accessing, retrieving, and translating demographic data into model parameters must improve. Regarding the concern about the credibility and validity of modeling assumptions, we found that the models often simplify reality to reduce computational burden. Such simplifications may be permissible if they do not interfere with the performance assessment of the mitigation strategies. We also agreed with the concern that social behavior is inadequately represented in pandemic influenza models. Our review showed that the models consider only a few social-behavioral aspects, including contact rates, withdrawal from work or school due to symptom appearance or to care for sick relatives, and compliance with social distancing, vaccination, and antiviral prophylaxis.
The concern about the degree of accessibility of the models is palpable, since we found three models that are currently accessible by the public while other models are seeking public accessibility. Policymakers would prefer models scalable to any population size that can be downloadable and operable in personal computers. But scaling models to larger populations would often require computational needs that cannot be handled with personal computers and laptops. As a limitation, we state that some existing models could not be included in our review due to their limited available documentation discussing the choice of relevant parameter values. Conclusions To adequately address the concerns of the policymakers, we need continuing model enhancements in critical areas including: updating of epidemiological data during a pandemic, smooth handling of large demographical databases, incorporation of a broader spectrum of social-behavioral aspects, updating information for contact patterns, adaptation of recent methodologies for collecting human mobility data, and improvement of computational efficiency and accessibility. PMID:22463370
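
    Most of the reviewed models build on compartmental epidemic dynamics. A minimal discrete-time SIR sketch (purely illustrative, with none of the social-behavioral layers discussed above) shows the core computation such models scale up:

```python
def sir(beta, gamma, s0, i0, days, dt=0.1):
    """Forward-Euler SIR model: susceptible s, infectious i, recovered r.
    beta is the transmission rate, gamma the recovery rate (1/infectious period)."""
    s, i, r = float(s0), float(i0), 0.0
    n = s0 + i0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt  # new infections this step
        new_rec = gamma * i * dt         # new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r
```

    Operational models replace the homogeneous-mixing term beta*s*i/n with contact networks, age structure, and behavioral responses, which is where the scalability and data concerns above arise.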

  19. Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.

    PubMed

    Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M

    2014-12-01

    In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
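
    The data-depth idea the paper extends can be illustrated in its simplest functional form: band depth for sampled curves, where a curve's depth is the fraction of curve pairs whose pointwise envelope fully contains it (following López-Pintado and Romo's band depth; this is a simplified sketch, not the authors' curve-boxplot method):

```python
def band_depth(curves):
    """Band depth (J=2) for curves sampled on a common grid: the fraction of
    curve pairs whose pointwise min/max band contains the whole curve."""
    n = len(curves)
    depths = []
    for c in curves:
        inside, pairs = 0, 0
        for j in range(n):
            for k in range(j + 1, n):
                pairs += 1
                if all(min(a, b) <= x <= max(a, b)
                       for x, a, b in zip(c, curves[j], curves[k])):
                    inside += 1
        depths.append(inside / pairs)
    return depths
```

    Ranking an ensemble by depth yields the median curve (deepest), central bands, and outlier candidates, which is exactly the structure a curve boxplot renders.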

  20. pypet: A Python Toolkit for Data Management of Parameter Explorations

    PubMed Central

    Meyer, Robert; Obermayer, Klaus

    2016-01-01

    pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines. PMID:27610080
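
    The core of pypet's parameter exploration, Cartesian-product sampling of a parameter space with each result stored alongside the parameters that produced it, can be sketched in plain Python. This sketch does not use pypet's actual API or its HDF5 storage; the `explore` function and the toy runner are invented for illustration.

```python
import itertools

def explore(runner, grid):
    """Run `runner` once per point of the Cartesian product of the parameter
    grid, pairing every result with its parameter combination."""
    keys = sorted(grid)
    results = []
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        results.append((params, runner(**params)))
    return results

# Hypothetical "simulation": multiply the two explored parameters.
runs = explore(lambda x, y: x * y, {"x": [1.0, 2.0], "y": [3.0, 4.0]})
```

    pypet adds to this core the persistent trajectory (single HDF5 file), multiprocessing, and version-control integration described above.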

  2. Gstat: a program for geostatistical modelling, prediction and simulation

    NASA Astrophysics Data System (ADS)

    Pebesma, Edzer J.; Wesseling, Cees G.

    1998-01-01

    Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), which uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ASCII and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.
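
    The empirical semivariogram underlying gstat's variogram modelling can be sketched directly from its definition, γ(h) = (1/2N(h)) Σ (z_i − z_j)² over the N(h) point pairs separated by approximately h. An illustrative Python sketch, not gstat's implementation:

```python
import math

def semivariogram(points, values, lags, tol):
    """Empirical semivariance gamma(h) for each lag h, averaging squared value
    differences over point pairs whose separation lies within tol of h."""
    out = []
    for h in lags:
        acc, n = 0.0, 0
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                if abs(math.dist(points[i], points[j]) - h) <= tol:
                    acc += (values[i] - values[j]) ** 2
                    n += 1
        out.append(acc / (2 * n) if n else float("nan"))
    return out
```

    A variogram model (spherical, exponential, etc.) is then fitted to these binned estimates before kriging or sequential simulation.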

  3. Accurate and general treatment of electrostatic interaction in Hamiltonian adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Heidari, M.; Cortes-Huerto, R.; Donadio, D.; Potestio, R.

    2016-10-01

    In adaptive resolution simulations the same system is concurrently modeled with different resolution in different subdomains of the simulation box, thereby enabling an accurate description in a small but relevant region, while the rest is treated with a computationally parsimonious model. In this framework, electrostatic interaction, whose accurate treatment is a crucial aspect in the realistic modeling of soft matter and biological systems, represents a particularly acute problem due to the intrinsic long-range nature of Coulomb potential. In the present work we propose and validate the usage of a short-range modification of Coulomb potential, the Damped shifted force (DSF) model, in the context of the Hamiltonian adaptive resolution simulation (H-AdResS) scheme. This approach, which is here validated on bulk water, ensures a reliable reproduction of the structural and dynamical properties of the liquid, and enables a seamless embedding in the H-AdResS framework. The resulting dual-resolution setup is implemented in the LAMMPS simulation package, and its customized version employed in the present work is made publicly available.
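
    The DSF modification can be written down compactly: the Coulomb term is damped by a complementary error function and then shifted so that both the potential and its derivative vanish at the cutoff. A sketch for unit charges, following the commonly used Fennell-Gezelter form (assumed here, since the abstract does not give the formula):

```python
import math

def dsf_potential(r, rc, alpha):
    """Damped shifted-force Coulomb potential for unit charges: erfc-damped 1/r,
    shifted so that V(rc) = 0 and dV/dr(rc) = 0 (no force jump at the cutoff)."""
    def v(x):
        return math.erfc(alpha * x) / x
    shift_v = v(rc)  # value shift: potential vanishes at the cutoff
    shift_f = (math.erfc(alpha * rc) / rc**2
               + (2.0 * alpha / math.sqrt(math.pi))
               * math.exp(-(alpha * rc) ** 2) / rc)  # force shift: zero slope at rc
    return v(r) - shift_v + shift_f * (r - rc)
```

    The smooth cutoff is what allows a strictly short-range electrostatic treatment to be embedded in the H-AdResS scheme without lattice-sum machinery.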

  4. Hidden Statistics Approach to Quantum Simulations

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2010-01-01

    Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent and structure data, and that quantum mechanisms can be devised and built to perform operations on these data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large masses of highly correlated data. Unfortunately, these advantages of quantum mechanics came at a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world causes the system to decohere. That is why the hardware implementation of a quantum computer remains unsolved. The basic idea of this work is to create a new kind of dynamical system that preserves the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing its state variables to be measured using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both the quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing its state variables to be measured using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of the quantum potential (overlooked in previous treatments of the Madelung equation). The role of the transitional potential is to provide a jump from a deterministic state to a random state with prescribed probability density. This jump is triggered by blowup instability due to violation of the Lipschitz condition generated by the quantum potential. As a result, the dynamics attains quantum properties on a classical scale. The model can be implemented physically as an analog VLSI-based (very-large-scale-integration-based) computer, or numerically on a digital computer. This work opens a way of developing fundamentally new algorithms for quantum simulations of exponentially complex problems, expanding NASA's capabilities in conducting space activities. It has been illustrated that the complexity of simulating particle interactions can be reduced from exponential to polynomial.
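
    For reference, the standard Madelung transform mentioned above rewrites the Schroedinger equation in hydrodynamic form: substituting the polar decomposition of the wave function yields a continuity equation and a Hamilton-Jacobi equation augmented by the quantum potential Q.

```latex
\psi = \sqrt{\rho}\, e^{iS/\hbar}
\quad\Longrightarrow\quad
\frac{\partial \rho}{\partial t} + \nabla\cdot\left(\rho\,\frac{\nabla S}{m}\right) = 0,
\qquad
\frac{\partial S}{\partial t} + \frac{|\nabla S|^{2}}{2m} + V + Q = 0,
\qquad
Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}} .
```

    The transitional component of Q discussed in the abstract is an addition to this standard form; it is Q's dependence on ρ that can violate the Lipschitz condition and trigger the blowup instability described above.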

  5. Parametric Simulations of the Great Dark Spots of Neptune

    NASA Astrophysics Data System (ADS)

    Deng, Xiaolong; Le Beau, R.

    2006-09-01

    Observations by Voyager II and the Hubble Space Telescope of the Great Dark Spots (GDS) of Neptune suggest that large vortices with lifespans of years are not uncommon occurrences in the atmosphere of Neptune. The variability of these features over time, in particular the complex motions of GDS-89, makes them challenging candidates to simulate in atmospheric models. Previously, using the Explicit Planetary Isentropic-Coordinate (EPIC) General Circulation Model, LeBeau and Dowling (1998) simulated GDS-like vortex features. Qualitatively, the drift, oscillation, and tail-like features of GDS-89 were recreated, although precise numerical matches were achieved only for the meridional drift rate. In 2001, Stratman et al. applied EPIC to simulate the formation of bright companion clouds to the Great Dark Spots. In 2006, Dowling et al. presented a new version of EPIC, which includes a hybrid vertical coordinate, cloud physics, advanced chemistry, and new turbulence models. With the new version of EPIC, more observational results, and more powerful computers, it is time to revisit CFD simulations of Neptune's atmosphere and do more detailed work on GDS-like vortices. In this presentation, we apply the new version of EPIC to simulate GDS-89. We test the influences of different parameters in the EPIC model: potential vorticity gradient, wind profile, initial latitude, vortex shape, and vertical structure. The observed motions, especially the latitudinal drift and oscillations in orientation angle and aspect ratio, are used as diagnostics of these unobserved atmospheric conditions. Increased computing power allows for more refined and longer simulations and greater coverage of the parameter space than previous efforts. Improved quantitative results have been achieved, including vortices with near-eight-day oscillations and variations in shape comparable to GDS-89. This research has been supported by Kentucky NASA EPSCoR.

  6. Virtual fragment preparation for computational fragment-based drug design.

    PubMed

    Ludington, Jennifer L

    2015-01-01

    Fragment-based drug design (FBDD) has become an important component of the drug discovery process. The use of fragments can accelerate both the search for a hit molecule and the development of that hit into a lead molecule for clinical testing. In addition to experimental methodologies for FBDD such as NMR and X-ray crystallography screens, computational techniques are playing an increasingly important role. The success of the computational simulations is due in large part to how the database of virtual fragments is prepared. In order to prepare the fragments appropriately it is necessary to understand how FBDD differs from other approaches and the issues inherent in building up molecules from smaller fragment pieces. The ultimate goal of these calculations is to link two or more simulated fragments into a molecule that has an experimental binding affinity consistent with the additive predicted binding affinities of the virtual fragments. Computationally predicting binding affinities is a complex process, with many opportunities for introducing error. Therefore, care should be taken with the fragment preparation procedure to avoid introducing additional inaccuracies. This chapter is focused on the preparation process used to create a virtual fragment database. Several key issues of fragment preparation which affect the accuracy of binding affinity predictions are discussed. The first issue is the selection of the two-dimensional atomic structure of the virtual fragment. Although the particular usage of the fragment can affect this choice (i.e., whether the fragment will be used for calibration, binding site characterization, hit identification, or lead optimization), general factors such as synthetic accessibility, size, and flexibility are major considerations in selecting the 2D structure.
Other aspects of preparing the virtual fragments for simulation are the generation of three-dimensional conformations and the assignment of the associated atomic point charges.

  7. Quantification of uncertainties in the tsunami hazard for Cascadia using statistical emulation

    NASA Astrophysics Data System (ADS)

    Guillas, S.; Day, S. J.; Joakim, B.

    2016-12-01

    We present new high-resolution tsunami wave propagation and coastal inundation simulations for the Cascadia region in the Pacific Northwest. The coseismic representation in this analysis is novel, and more realistic than in previous studies, as we jointly parametrize multiple aspects of the seabed deformation. Due to the large computational cost of such simulators, statistical emulation is required in order to carry out uncertainty quantification tasks, as emulators efficiently approximate simulators. The emulator replaces the tsunami model VOLNA by a fast surrogate, so we are able to efficiently propagate uncertainties from the source characteristics to wave heights, in order to probabilistically assess tsunami hazard for Cascadia. We employ a new method for the design of the computer experiments in order to reduce the number of runs while maintaining good approximation properties of the emulator. Out of the initial nine parameters, mostly describing the geometry and time variation of the seabed deformation, we drop two parameters since these turn out to have no influence on the resulting tsunami waves at the coast. We model the impact of another parameter linearly as its influence on the wave heights is identified as linear. We combine this screening approach with the sequential design algorithm MICE (Mutual Information for Computer Experiments), which adaptively selects the input values at which to run the computer simulator in order to maximize the expected information gain (mutual information) over the input space. As a result, the emulation is made possible and accurate. Starting from distributions of the source parameters that encapsulate geophysical knowledge of the possible source characteristics, we derive distributions of the tsunami wave heights along the coastline.
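The emulation idea in this record can be sketched with a toy Gaussian-process surrogate: a cheap analytic function stands in for an expensive simulator such as VOLNA, a GP is fit to a handful of runs, and the next design point is chosen where predictive variance is largest (a simplified stand-in for the MICE criterion; the kernel, length scale, and design grid below are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np

def rbf_kernel(A, B, length=0.5, var=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs.
    d2 = (A[:, None] - B[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X_train, y_train, X_new, noise=1e-8):
    # Gaussian-process regression: posterior mean and variance at X_new.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_new, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = rbf_kernel(X_new, X_new) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Toy "simulator" standing in for an expensive code like VOLNA.
simulator = lambda x: np.sin(3 * x) + 0.5 * x

X = np.linspace(0.0, 2.0, 8)        # design points = simulator runs
y = simulator(X)
X_test = np.linspace(0.0, 2.0, 50)
mean, var = gp_posterior(X, y, X_test)

# Sequential design in the spirit of MICE: run the simulator next where
# the emulator is most uncertain, then refit.
x_next = X_test[np.argmax(var)]
```

The surrogate's predictions can then be evaluated thousands of times at negligible cost to propagate source-parameter uncertainty to outputs.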

  8. Virtual ellipsometry on layered micro-facet surfaces.

    PubMed

    Wang, Chi; Wilkie, Alexander; Harcuba, Petr; Novosad, Lukas

    2017-09-18

    Microfacet-based BRDF models are a common tool to describe light scattering from glossy surfaces. Apart from their wide-ranging applications in optics, such models also play a significant role in computer graphics for photorealistic rendering purposes. In this paper, we mainly investigate the computer graphics aspect of this technology, and present a polarisation-aware brute force simulation of light interaction with both single and multiple layered micro-facet surfaces. Such surface models are commonly used in computer graphics, but the resulting BRDF is ultimately often only approximated. Recently, there has been work to try to make these approximations more accurate, and to better understand the behaviour of existing analytical models. However, these brute force verification attempts still omitted the polarisation state of light and, as we found out, this renders them prone to mis-estimating the shape of the resulting BRDF lobe for some particular material types, such as smooth layered dielectric surfaces. For these materials, non-polarising computations can mis-estimate some areas of the resulting BRDF shape by up to 23%. But we also identified some other material types, such as dielectric layers over rough conductors, for which the difference turned out to be almost negligible. The main contribution of our work is to clearly demonstrate that the effect of polarisation is important for accurate simulation of certain material types, and that there are also other common materials for which it can apparently be ignored. As this required a BRDF simulator that we could rely on, a secondary contribution is that we went to considerable lengths to validate our software. We compare it against a state-of-the-art model from graphics, a library from optics, and also against ellipsometric measurements of real surface samples.

  9. Brain without mind: Computer simulation of neural networks with modifiable neuronal interactions

    NASA Astrophysics Data System (ADS)

    Clark, John W.; Rafelski, Johann; Winston, Jeffrey V.

    1985-07-01

    Aspects of brain function are examined in terms of a nonlinear dynamical system of highly interconnected neuron-like binary decision elements. The model neurons operate synchronously in discrete time, according to deterministic or probabilistic equations of motion. Plasticity of the nervous system, which underlies such cognitive collective phenomena as adaptive development, learning, and memory, is represented by temporal modification of interneuronal connection strengths depending on momentary or recent neural activity. A formal basis is presented for the construction of local plasticity algorithms, or connection-modification routines, spanning a large class. To build an intuitive understanding of the behavior of discrete-time network models, extensive computer simulations have been carried out (a) for nets with fixed, quasirandom connectivity and (b) for nets with connections that evolve under one or another choice of plasticity algorithm. From the former experiments, insights are gained concerning the spontaneous emergence of order in the form of cyclic modes of neuronal activity. In the course of the latter experiments, a simple plasticity routine (“brainwashing,” or “anti-learning”) was identified which, applied to nets with initially quasirandom connectivity, creates model networks which provide more felicitous starting points for computer experiments on the engramming of content-addressable memories and on learning more generally. The potential relevance of this algorithm to developmental neurobiology and to sleep states is discussed. The model considered is at the same time a synthesis of earlier synchronous neural-network models and an elaboration upon them; accordingly, the present article offers both a focused review of the dynamical properties of such systems and a selection of new findings derived from computer simulation.
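The synchronous dynamics described above are easy to reproduce in miniature. The sketch below (a hypothetical toy, not the authors' code) runs a deterministic, synchronous binary network with quasirandom connectivity until its state repeats, exposing a cyclic mode, and then applies one step of a simple Hebbian-style local plasticity rule of the general kind the paper formalizes:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
W = rng.normal(size=(N, N)) / np.sqrt(N)  # fixed quasirandom connectivity
np.fill_diagonal(W, 0.0)                  # no self-connections

def step(state, W):
    # Synchronous deterministic update: a neuron fires iff its net input > 0.
    return (W @ state > 0.0).astype(float)

# Iterate from a random initial state; a finite deterministic system must
# eventually revisit a state, exposing a cyclic mode of activity.
state = (rng.random(N) > 0.5).astype(float)
seen = {}
for t in range(2**N + 1):
    key = state.tobytes()
    if key in seen:
        cycle_len = t - seen[key]
        break
    seen[key] = t
    state = step(state, W)
print("cycle length:", cycle_len)

# One step of a simple local plasticity rule (Hebbian-style, purely
# illustrative): strengthen connections between co-active neurons.
eta = 0.01
W += eta * np.outer(state, state)
np.fill_diagonal(W, 0.0)
```

A "brainwashing" routine of the kind discussed would instead weaken such correlated connections; either way, plasticity here is just a local update to W driven by recent activity.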

  10. Integrating Thermodynamic Models in Geodynamic Simulations: The Example of the Community Software ASPECT

    NASA Astrophysics Data System (ADS)

    Dannberg, J.; Heister, T.; Grove, R. R.; Gassmoeller, R.; Spiegelman, M. W.; Bangerth, W.

    2017-12-01

    Earth's surface shows many features whose genesis can only be understood through the interplay of geodynamic and thermodynamic models. This is particularly important in the context of melt generation and transport: mantle convection determines the distribution of temperature and chemical composition; the melting process itself is then controlled by the thermodynamic relations and in turn influences the properties and the transport of melt. Here, we present our extension of the community geodynamics code ASPECT, which solves the equations of coupled magma/mantle dynamics, and allows the integration of different parametrizations of reactions and phase transitions: they may alternatively be implemented as simple analytical expressions, look-up tables, or computed by thermodynamics software. As ASPECT uses a variety of numerical methods and solvers, this also gives us the opportunity to compare different approaches to modelling the melting process. In particular, we will elaborate on the spatial and temporal resolution that is required to accurately model phase transitions, and show the potential of adaptive mesh refinement when applied to melt generation and transport. We will assess the advantages and disadvantages of iterating between fluid dynamics and chemical reactions derived from thermodynamic models within each time step, or decoupling them, allowing for different time step sizes. Beyond that, we will expand on the functionality required for an interface between computational thermodynamics and fluid dynamics models from the geodynamics side. Finally, using a simple example of melting of a two-phase, two-component system, we compare different time-stepping and solver schemes in terms of accuracy and efficiency, depending on the time scales of fluid flow and chemical reactions relative to each other. 
Our software provides a framework to integrate thermodynamic models in high-resolution, 3D simulations of coupled magma/mantle dynamics, and can be used as a tool to study links between physical processes and geochemical signals in the Earth.
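The alternative parametrizations mentioned (analytic expression versus look-up table) can be illustrated with a deliberately simple melting law; the linear form and the solidus/liquidus values here are assumptions for illustration, not ASPECT's actual material model:

```python
import numpy as np

# Hypothetical analytic parametrization: melt fraction increases linearly
# between assumed solidus and liquidus temperatures (illustrative values).
def melt_fraction_analytic(T, T_sol=1300.0, T_liq=1500.0):
    return np.clip((T - T_sol) / (T_liq - T_sol), 0.0, 1.0)

# The same relation precomputed as a look-up table, as a thermodynamics
# package might supply, then interpolated at run time.
T_grid = np.linspace(1200.0, 1600.0, 41)
F_grid = melt_fraction_analytic(T_grid)

def melt_fraction_table(T):
    return np.interp(T, T_grid, F_grid)

T = np.array([1250.0, 1400.0, 1550.0])
print(melt_fraction_analytic(T))  # 0 below solidus, 0.5 midway, 1 above liquidus
print(melt_fraction_table(T))     # the table interpolation agrees on this grid
```

The trade-off sketched here is the one the abstract raises: an analytic expression is cheap and smooth, while a table can carry the full output of a thermodynamics code at the cost of resolution and interpolation error.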

  11. Collaborative simulations and experiments for a novel yield model of coal devolatilization in oxy-coal combustion conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.

    Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational efforts required. The development of scale-bridging models is therefore necessary, to find a good compromise between computational efforts and the physical-chemical modeling precision. This paper presents a procedure for scale-bridging modeling of coal devolatilization, in the presence of experimental error, that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed, to improve the understanding of the uncertainty associated with the devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven to be consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.

  12. Collaborative simulations and experiments for a novel yield model of coal devolatilization in oxy-coal combustion conditions

    DOE PAGES

    Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.; ...

    2017-06-03

    Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational efforts required. The development of scale-bridging models is therefore necessary, to find a good compromise between computational efforts and the physical-chemical modeling precision. This paper presents a procedure for scale-bridging modeling of coal devolatilization, in the presence of experimental error, that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed, to improve the understanding of the uncertainty associated with the devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven to be consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.

  13. Parametrics on 2D Navier-Stokes analysis of a Mach 2.68 bifurcated rectangular mixed-compression inlet

    NASA Technical Reports Server (NTRS)

    Mizukami, M.; Saunders, J. D.

    1995-01-01

    The supersonic diffuser of a Mach 2.68 bifurcated, rectangular, mixed-compression inlet was analyzed using a two-dimensional (2D) Navier-Stokes flow solver. Parametric studies were performed on turbulence models, computational grids and bleed models. The computed flowfield was substantially different from the original inviscid design, due to interactions of shocks, boundary layers, and bleed. Good agreement with experimental data was obtained in many aspects. Many of the discrepancies were thought to originate primarily from 3D effects. Therefore, a balance should be struck between expending resources on a high fidelity 2D simulation, and the inherent limitations of 2D analysis. The solutions were fairly insensitive to turbulence models, grids and bleed models. Overall, the k-e turbulence model, and the bleed models based on unchoked bleed hole discharge coefficients or uniform velocity are recommended. The 2D Navier-Stokes methods appear to be a useful tool for the design and analysis of supersonic inlets, by providing a higher fidelity simulation of the inlet flowfield than inviscid methods, in a reasonable turnaround time.

  14. Development of an explicit multiblock/multigrid flow solver for viscous flows in complex geometries

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Liou, M. S.; Povinelli, L. A.

    1993-01-01

    A new computer program is being developed for doing accurate simulations of compressible viscous flows in complex geometries. The code employs the full compressible Navier-Stokes equations. The eddy viscosity model of Baldwin and Lomax is used to model the effects of turbulence on the flow. A cell centered finite volume discretization is used for all terms in the governing equations. The Advection Upwind Splitting Method (AUSM) is used to compute the inviscid fluxes, while central differencing is used for the diffusive fluxes. A four-stage Runge-Kutta time integration scheme is used to march solutions to steady state, while convergence is enhanced by a multigrid scheme, local time-stepping, and implicit residual smoothing. To enable simulations of flows in complex geometries, the code uses composite structured grid systems where all grid lines are continuous at block boundaries (multiblock grids). Example results shown are a flow in a linear cascade, a flow around a circular pin extending between the main walls in a high aspect-ratio channel, and a flow of air in a radial turbine coolant passage.
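The four-stage Runge-Kutta pseudo-time marching used by such solvers can be sketched on a toy problem; the stage coefficients and the linear relaxation "residual" below are illustrative assumptions, standing in for the actual AUSM-based flux residual:

```python
import numpy as np

def rk4_stage_march(u, residual, dt, coeffs=(0.25, 1.0 / 3.0, 0.5, 1.0)):
    # Four-stage Runge-Kutta pseudo-time step of the kind commonly used
    # to march CFD solutions toward steady state: each stage re-evaluates
    # the residual from the original state u0.
    u0 = u.copy()
    for a in coeffs:
        u = u0 + a * dt * residual(u)
    return u

# Toy "residual": linear relaxation toward a target state; a real solver
# would evaluate inviscid (AUSM) and viscous fluxes here.
target = np.array([1.0, 2.0, 3.0])
residual = lambda u: target - u

u = np.zeros(3)
for _ in range(100):
    u = rk4_stage_march(u, residual, dt=0.5)
# u converges to target as the residual is driven toward zero
```

Multigrid, local time-stepping, and residual smoothing, as in the abstract, are accelerators layered on top of exactly this kind of stage loop.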

  15. Development of an explicit multiblock/multigrid flow solver for viscous flows in complex geometries

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Liou, M.-S.; Povinelli, L. A.

    1993-01-01

    A new computer program is being developed for doing accurate simulations of compressible viscous flows in complex geometries. The code employs the full compressible Navier-Stokes equations. The eddy viscosity model of Baldwin and Lomax is used to model the effects of turbulence on the flow. A cell centered finite volume discretization is used for all terms in the governing equations. The Advection Upwind Splitting Method (AUSM) is used to compute the inviscid fluxes, while central differencing is used for the diffusive fluxes. A four-stage Runge-Kutta time integration scheme is used to march solutions to steady state, while convergence is enhanced by a multigrid scheme, local time-stepping and implicit residual smoothing. To enable simulations of flows in complex geometries, the code uses composite structured grid systems where all grid lines are continuous at block boundaries (multiblock grids). Example results shown are a flow in a linear cascade, a flow around a circular pin extending between the main walls in a high aspect-ratio channel, and a flow of air in a radial turbine coolant passage.

  16. A discrete mechanics framework for real time virtual surgical simulations with application to virtual laparoscopic nephrectomy.

    PubMed

    Zhou, Xiangmin; Zhang, Nan; Sha, Desong; Shen, Yunhe; Tamma, Kumar K; Sweet, Robert

    2009-01-01

    The inability to render realistic soft-tissue behavior in real time has remained a barrier to the face and content validity of many virtual reality surgical training systems. Biophysically based models are suitable not only for training purposes but also for patient-specific clinical applications, physiological modeling and surgical planning. Among the existing approaches for modeling soft tissue for virtual reality surgical simulation, the computer graphics-based approach lacks predictive capability; the mass-spring model (MSM) based approach lacks biophysically realistic soft-tissue dynamic behavior; and finite element method (FEM) approaches fail to meet the real-time requirement. The present development stems from the first law of thermodynamics: for a space-discrete dynamic system it directly formulates the space-discrete but time-continuous governing equation with an embedded material constitutive relation, resulting in a discrete mechanics framework that strikes a unique balance between computational effort and physically realistic soft-tissue dynamic behavior. We describe the development of the discrete mechanics framework with focused attention on a virtual laparoscopic nephrectomy application.

  17. Neoclassical Simulation of Tokamak Plasmas using Continuum Gyrokinetic Code TEMPEST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, X Q

    We present gyrokinetic neoclassical simulations of tokamak plasmas with self-consistent electric field for the first time using a fully nonlinear (full-f) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five dimensional computational grid in phase space. The present implementation is a Method of Lines approach where the phase-space derivatives are discretized with finite differences and implicit backwards differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With our 4D (ψ, θ, ε, μ) version of the TEMPEST code we compute radial particle and heat flux, the Geodesic-Acoustic Mode (GAM), and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme and a new capability for self-consistently studying important aspects of neoclassical transport and rotations in toroidal magnetic fusion devices.

  18. EOS MLS Level 2 Data Processing Software Version 3

    NASA Technical Reports Server (NTRS)

    Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; Wang, Shuhui; Manney, Gloria L.

    2011-01-01

    This software accepts the EOS MLS calibrated measurements of microwave radiance products and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that controls all aspects of the software: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading appropriate a priori spectroscopic and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results in appropriate files. In production mode, the software operates in a parallel form, with one instance of the program acting as a master, coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition to performing conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks. This also dramatically reduces the complexity of the code maintenance effort.

  19. The theory of reasoned action as parallel constraint satisfaction: towards a dynamic computational model of health behavior.

    PubMed

    Orr, Mark G; Thrush, Roxanne; Plaut, David C

    2013-01-01

    The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual's pre-existing belief structure and the beliefs of others in the individual's social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics.
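A parallel constraint satisfaction mechanism of the kind described can be sketched with a tiny recurrent network; the three "beliefs", their weights, and the external input standing in for immediate social context are invented for illustration, not taken from the authors' model:

```python
import numpy as np

# Three "beliefs" as units; weights encode assumed mutual consistency
# (positive = beliefs support each other). All values are hypothetical.
W = np.array([[ 0.0,  0.8, -0.5],
              [ 0.8,  0.0, -0.5],
              [-0.5, -0.5,  0.0]])
external = np.array([0.5, 0.0, 0.2])  # immediate social context as input

def settle(W, external, steps=50, tau=0.2):
    # Parallel constraint satisfaction: activations relax toward a state
    # that maximally satisfies the weighted constraints plus the input.
    a = np.zeros(len(external))
    for _ in range(steps):
        a += tau * (np.tanh(W @ a + external) - a)
    return a

intention = settle(W, external)  # settled activations ~ behavioral intention
```

Learning in such a model would adjust W from experience, so the same settling process yields different intentions as the belief structure and the social input change.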

  20. The Theory of Reasoned Action as Parallel Constraint Satisfaction: Towards a Dynamic Computational Model of Health Behavior

    PubMed Central

    Orr, Mark G.; Thrush, Roxanne; Plaut, David C.

    2013-01-01

    The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual’s pre-existing belief structure and the beliefs of others in the individual’s social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics. PMID:23671603

  1. Tracking interface and common curve dynamics for two-fluid flow in porous media

    DOE PAGES

    Mcclure, James E.; Miller, Cass T.; Gray, W. G.; ...

    2016-04-29

    Pore-scale studies of multiphase flow in porous medium systems can be used to understand transport mechanisms and quantitatively determine closure relations that better incorporate microscale physics into macroscale models. Multiphase flow simulators constructed using the lattice Boltzmann method provide a means to conduct such studies, including both the equilibrium and dynamic aspects. Moving, storing, and analyzing the large state space presents a computational challenge when highly-resolved models are applied. We present an approach to simulate multiphase flow processes in which in-situ analysis is applied to track multiphase flow dynamics at high temporal resolution. We compute a comprehensive set of measures of the phase distributions and the system dynamics, which can be used to aid fundamental understanding and inform closure relations for macroscale models. The measures computed include microscale point representations and macroscale averages of fluid saturations, the pressure and velocity of the fluid phases, interfacial areas, interfacial curvatures, interface and common curve velocities, interfacial orientation tensors, phase velocities and the contact angle between the fluid-fluid interface and the solid surface. Test cases are studied to validate the approach and illustrate how measures of system state can be obtained and used to inform macroscopic theory.

  2. Bimorph Silk Microsheets with Programmable Actuating Behavior: Experimental Analysis and Computer Simulations.

    PubMed

    Ye, Chunhong; Nikolov, Svetoslav V; Geryak, Ren D; Calabrese, Rossella; Ankner, John F; Alexeev, Alexander; Kaplan, David L; Tsukruk, Vladimir V

    2016-07-13

    Microscale self-rolling sheets have been fabricated from silk protein materials, containing a silk bimorph composed of silk ionomers as the active layer and cross-linked silk β-sheet as the passive layer. The programmable morphology was explored experimentally along with computational simulation to understand the mechanism of shape reconfiguration. Neutron reflectivity shows that the active silk ionomer layer undergoes remarkable swelling (an eight-fold increase in thickness) after deprotonation while the passive silk β-sheet retains constant volume under the same conditions and supports the bimorph construct. This selective swelling within the silk-on-silk bimorph microsheets generates strong interfacial stress between layers and out-of-plane forces, which trigger autonomous self-rolling into various 3D constructs such as cylindrical and helical tubules. The experimental observations and computational modeling confirmed the role of interfacial stresses and allow the morphology of the 3D constructs to be programmed for a particular design. We demonstrated that the biaxial stress distribution over the 2D planar films depends upon the lateral dimensions, thickness and aspect ratio of the microsheets. The results allow fine-tuning of autonomous shape transformations for the further design of complex micro-origami constructs, and the silk-based rolling/unrolling structures provide a promising platform for polymer-based biomimetic devices for implant applications.

  3. Design of a Nanoscale, CMOS-Integrable, Thermal-Guiding Structure for Boolean-Logic and Neuromorphic Computation.

    PubMed

    Loke, Desmond; Skelton, Jonathan M; Chong, Tow-Chong; Elliott, Stephen R

    2016-12-21

    One of the requirements for achieving faster CMOS electronics is to mitigate the unacceptably large chip areas required to steer heat away from or, more recently, toward the critical nodes of state-of-the-art devices. Thermal-guiding (TG) structures can efficiently direct heat by "meta-materials" engineering; however, some key aspects of the behavior of these systems are not fully understood. Here, we demonstrate control of the thermal-diffusion properties of TG structures by using nanometer-scale, CMOS-integrable, graphene-on-silica stacked materials through finite-element-methods simulations. It has been shown that it is possible to implement novel, controllable, thermally based Boolean-logic and spike-timing-dependent plasticity operations for advanced (neuromorphic) computing applications using such thermal-guide architectures.

  4. Research on rolling element bearing fault diagnosis based on genetic algorithm matching pursuit

    NASA Astrophysics Data System (ADS)

    Rong, R. W.; Ming, T. F.

    2017-12-01

    In order to solve the problem of slow computation speed, the matching pursuit algorithm is applied to rolling element bearing fault diagnosis, with improvements in two respects: the construction of the dictionary and the search for atoms. Specifically, the Gabor function, which reflects time-frequency localization characteristics well, is used to construct the dictionary, and a genetic algorithm is used to speed up the search. A time-frequency analysis method based on genetic algorithm matching pursuit (GAMP) is proposed, and the setting of parameters to improve the decomposition results is studied. Simulation and experimental results illustrate that weak fault features of rolling bearings can be extracted effectively by the proposed method while the computation speed increases markedly.
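The greedy atom-selection core of matching pursuit with a Gabor dictionary can be sketched as follows. Here a fixed dictionary grid is scored exhaustively, whereas GAMP replaces this exhaustive search over atom parameters with a genetic algorithm; the signal, grid, and parameter values are illustrative assumptions:

```python
import numpy as np

def gabor_atom(n, t0, f, sigma):
    # Discrete Gabor atom: Gaussian-windowed cosine, unit-normalized,
    # localized at time t0 with frequency f (cycles/sample).
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - t0) / sigma) ** 2) * np.cos(2 * np.pi * f * t)
    return g / np.linalg.norm(g)

def matching_pursuit(signal, dictionary, n_iter=3):
    # Greedy matching pursuit: at each step pick the atom with the largest
    # inner product with the residual and subtract its projection.
    residual = signal.copy()
    picks = []
    for _ in range(n_iter):
        corr = dictionary @ residual
        k = int(np.argmax(np.abs(corr)))
        picks.append((k, corr[k]))
        residual = residual - corr[k] * dictionary[k]
    return picks, residual

n = 256
atoms = np.array([gabor_atom(n, t0, f, 12.0)
                  for t0 in range(0, n, 32)
                  for f in (0.05, 0.1, 0.2)])
signal = 3.0 * atoms[7] + 0.1 * np.random.default_rng(1).normal(size=n)
picks, residual = matching_pursuit(signal, atoms)
```

Because the dominant atom is recovered in the first pass, the residual shrinks quickly; the genetic search in GAMP aims to find the same best atom without scoring every candidate.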

  5. Computational Modeling of Morphogenesis Regulated by Mechanical Feedback

    PubMed Central

    Ramasubramanian, Ashok; Taber, Larry A.

    2008-01-01

    Mechanical forces cause changes in form during embryogenesis and likely play a role in regulating these changes. This paper explores the idea that changes in homeostatic tissue stress (target stress), possibly modulated by genes, drive some morphogenetic processes. Computational models are presented to illustrate how regional variations in target stress can cause a range of complex behaviors involving the bending of epithelia. These models include growth and cytoskeletal contraction regulated by stress-based mechanical feedback. All simulations were carried out using the commercial finite element code ABAQUS, with growth and contraction included by modifying the zero-stress state in the material constitutive relations. Results presented for bending of bilayered beams and invagination of cylindrical and spherical shells provide insight into some of the mechanical aspects that must be considered in studying morphogenetic mechanisms. PMID:17318485

  6. Successes and Challenges for Flow Control Simulations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2008-01-01

    A survey is made of recent computations published for synthetic jet flow control cases from a CFD workshop held in 2004. The three workshop cases were originally chosen to represent different aspects of flow control physics: nominally 2-D synthetic jet into quiescent air, 3-D circular synthetic jet into turbulent boundary-layer crossflow, and nominally 2-D flow-control (both steady suction and oscillatory zero-net-mass-flow) for separation control on a simple wall-mounted aerodynamic hump shape. The purpose of this survey is to summarize the progress as related to these workshop cases, particularly noting successes and remaining challenges for computational methods. It is hoped that this summary will also by extension serve as an overview of the state-of-the-art of CFD for these types of flow-controlled flow fields in general.

  7. Progress on the DPASS project

    NASA Astrophysics Data System (ADS)

    Galkin, Sergei A.; Bogatu, I. N.; Svidzinski, V. A.

    2015-11-01

A novel project to develop a Disruption Prediction And Simulation Suite (DPASS) of comprehensive computational tools to predict, model, and analyze disruption events in tokamaks has recently been started at FAR-TECH Inc. DPASS will eventually address the following aspects of the disruption problem: MHD, plasma edge dynamics, plasma-wall interaction, and the generation and losses of runaway electrons. DPASS uses the 3-D Disruption Simulation Code (DSC-3D) as a core tool and will have a modular structure. DSC is a one-fluid, non-linear, time-dependent 3D MHD code that simulates the dynamics of a tokamak plasma surrounded by a pure vacuum B-field in the real geometry of a conducting tokamak vessel. DSC utilizes an adaptive meshless technique with adaptation to the moving plasma boundary, with accurate magnetic flux conservation and resolution of the plasma surface current. DSC also has an option to neglect the plasma inertia, eliminating the fast magnetosonic time scale; this option can be turned on/off as needed. During Phase I of the project, two modules will be developed: a computational module for modeling massive gas injection and the main plasma response, and a module for nanoparticle plasma jet injection as an innovative disruption mitigation scheme. We report on the progress of this development. Work is supported by the US DOE SBIR grant # DE-SC0013727.

  8. Computational Insights into Materials and Interfaces for Capacitive Energy Storage

    DOE PAGES

    Zhan, Cheng; Lian, Cheng; Zhang, Yu; ...

    2017-04-24

Supercapacitors such as electric double-layer capacitors (EDLCs) and pseudocapacitors are becoming increasingly important in the field of electrical energy storage. Theoretical study of energy storage in EDLCs focuses on solving for the electric double-layer structure in different electrode geometries and electrolyte components, which can be achieved by molecular simulations such as classical molecular dynamics (MD), classical density functional theory (classical DFT), and Monte-Carlo (MC) methods. In recent years, combining first-principles and classical simulations to investigate carbon-based EDLCs has shed light on the importance of quantum capacitance in graphene-like 2D systems. More recently, the development of joint density functional theory (JDFT) enables self-consistent electronic-structure calculation for an electrode being solvated by an electrolyte. In contrast with the large amount of theoretical and computational effort on EDLCs, theoretical understanding of pseudocapacitance is very limited. In this review, we first introduce popular modeling methods and then focus on several important aspects of EDLCs including nanoconfinement, quantum capacitance, dielectric screening, and novel 2D electrode design; we also briefly touch upon the pseudocapacitive mechanism in RuO2. We summarize and conclude with an outlook for the future of materials simulation and design for capacitive energy storage.
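The role of quantum capacitance mentioned above can be sketched with the standard linear-DOS expression for graphene in series with the double-layer capacitance (a minimal illustration; the Fermi level and EDL capacitance values below are assumptions):

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19      # elementary charge, C
HBAR = 1.054571817e-34          # reduced Planck constant, J*s
V_FERMI = 1.0e6                 # graphene Fermi velocity, m/s (typical assumed value)

def graphene_quantum_capacitance(e_fermi_ev: float) -> float:
    """Ideal quantum capacitance per unit area (F/m^2) for a linear
    graphene-like DOS: Cq = 2 e^2 |E_F| / (pi hbar^2 vF^2)."""
    e_f = abs(e_fermi_ev) * E_CHARGE
    return 2.0 * E_CHARGE**2 * e_f / (math.pi * HBAR**2 * V_FERMI**2)

def total_interface_capacitance(c_q: float, c_dl: float) -> float:
    """Quantum and double-layer capacitances act in series, so the
    smaller of the two limits the total."""
    return 1.0 / (1.0 / c_q + 1.0 / c_dl)

cq = graphene_quantum_capacitance(0.2)    # assumed E_F = 0.2 eV
cdl = 0.10                                # assumed EDL capacitance, F/m^2 (10 uF/cm^2)
ctot = total_interface_capacitance(cq, cdl)
print(cq, ctot)
```

With these assumed numbers the quantum capacitance (a few uF/cm^2) is smaller than the double-layer term and therefore dominates the series combination, which is why it cannot be neglected in 2D electrodes.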

  9. A Numerical Model of Viscoelastic Layer Entrainment by Airflow in Cough

    NASA Astrophysics Data System (ADS)

    Mitran, Sorin M.

    2008-07-01

    Coughing is an alternative mode of ensuring mucus clearance in the lung when normal cilia induced flow breaks down. A numerical model of this process is presented with the following aspects. (1) A portion of the airway comprising the first three bronchus generations is modeled as radially reinforced elastic tubes. Elasticity equations are solved to predict airway deformation under effect of airway pressure. (2) The compressible, turbulent flow induced by rapid lung contraction is modeled by direct numerical simulation for Reynolds numbers in the range 5,000-10,000 and by Large Eddy Simulation for Reynolds numbers in the range 5,000-40,000. (3) A two-layer model of the airway surface liquid (ASL) covering the airway epithelial layer is used. The periciliary liquid (PCL) in direct contact with the epithelial layer is considered to be a Newtonian fluid. Forces modeling cilia beating can act upon this layer. The mucus layer between the PCL and the interior airflow is modeled as an Oldroyd-B fluid. The overall computation is a fluid-structure interaction simulation that tracks changes in ASL thickness and airway diameters that result from impulsive airflow boundary conditions imposed at bronchi ends. In particular, the amount of mucus that is evacuated from the system is computed as a function of cough intensity and mucus rheological properties.

  10. Numerical modeling on air quality in an urban environment with changes of the aspect ratio and wind direction.

    PubMed

    Yassin, Mohamed F

    2013-06-01

Due to heavy traffic emissions within urban environments, air quality has worsened year by year over the last decade and has become a hazard to public health. In the present work, the flow and dispersion of gaseous emissions from vehicle exhaust in a street canyon were numerically investigated under changes of the aspect ratio and wind direction. The three-dimensional flow and dispersion of gaseous pollutants were modeled using a computational fluid dynamics (CFD) model that numerically solves the Reynolds-averaged Navier-Stokes (RANS) equations. The diffusion flow field in the atmospheric boundary layer within the street canyon was studied for different aspect ratios (W/H=1/2, 3/4, and 1) and wind directions (θ=90°, 112.5°, 135°, and 157.5°). The numerical models were validated against wind tunnel results to optimize the turbulence model, and the numerical results agreed well with the wind tunnel results. The simulation demonstrated that the minimum concentration at human respiration height within the street canyon was on the windward side for aspect ratios W/H=1/2 and 1 and wind directions θ=112.5°, 135°, and 157.5°. The pollutant concentration level decreases as the wind direction and aspect ratio increase, while the wind velocity and turbulence intensity increase as the aspect ratio and wind direction increase.

  11. Some aspects of robotics calibration, design and control

    NASA Technical Reports Server (NTRS)

    Tawfik, Hazem

    1990-01-01

    The main objective is to introduce techniques in the areas of testing and calibration, design, and control of robotic systems. A statistical technique is described that analyzes a robot's performance and provides quantitative three-dimensional evaluation of its repeatability, accuracy, and linearity. Based on this analysis, a corrective action should be taken to compensate for any existing errors and enhance the robot's overall accuracy and performance. A comparison between robotics simulation software packages that were commercially available (SILMA, IGRIP) and that of Kennedy Space Center (ROBSIM) is also included. These computer codes simulate the kinematics and dynamics patterns of various robot arm geometries to help the design engineer in sizing and building the robot manipulator and control system. A brief discussion on an adaptive control algorithm is provided.

  12. Advances in quantum simulations of ATPase catalysis in the myosin motor.

    PubMed

    Kiani, Farooq Ahmad; Fischer, Stefan

    2015-04-01

During its contraction cycle, the myosin motor catalyzes the hydrolysis of ATP. Several combined quantum/classical mechanics (QM/MM) studies of this step have been published, which substantially contributed to our thinking about the catalytic mechanism. The methodological difficulties encountered over the years in the simulation of this complex reaction are now understood: (a) Polarization of the protein peptide groups surrounding the highly charged ATP(4-) cannot be neglected. (b) Some unsuspected protein groups need to be treated QM. (c) Interactions with the γ-phosphate versus the β-phosphate favor a concurrent versus a sequential mechanism, respectively. Thus, these practical aspects strongly influence the computed mechanism, and should be considered when studying other catalyzed phosphoester hydrolysis reactions, such as in ATPases or GTPases. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Multiscale modeling and characterization for performance and safety of lithium-ion batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pannala, Sreekanth; Turner, John A.; Allu, Srikanth

Lithium-ion batteries are highly complex electrochemical systems whose performance and safety are governed by coupled nonlinear electrochemical-electrical-thermal-mechanical processes over a range of spatiotemporal scales. In this paper we describe a new, open source computational framework for Lithium-ion battery simulations that is designed to support a variety of model types and formulations. This framework has been used to create three-dimensional cell and battery pack models that explicitly simulate all the battery components (current collectors, electrodes, and separator). The models are used to predict battery performance under normal operations and to study thermal and mechanical safety aspects under adverse conditions. The model development and validation are supported by experimental methods such as IR-imaging, X-ray tomography and micro-Raman mapping.

  14. Multiscale modeling and characterization for performance and safety of lithium-ion batteries

    DOE PAGES

    Pannala, Sreekanth; Turner, John A.; Allu, Srikanth; ...

    2015-08-19

Lithium-ion batteries are highly complex electrochemical systems whose performance and safety are governed by coupled nonlinear electrochemical-electrical-thermal-mechanical processes over a range of spatiotemporal scales. In this paper we describe a new, open source computational framework for Lithium-ion battery simulations that is designed to support a variety of model types and formulations. This framework has been used to create three-dimensional cell and battery pack models that explicitly simulate all the battery components (current collectors, electrodes, and separator). The models are used to predict battery performance under normal operations and to study thermal and mechanical safety aspects under adverse conditions. The model development and validation are supported by experimental methods such as IR-imaging, X-ray tomography and micro-Raman mapping.

  15. GIDL analysis of the process variation effect in gate-all-around nanowire FET

    NASA Astrophysics Data System (ADS)

    Kim, Shinkeun; Seo, Youngsoo; Lee, Jangkyu; Kang, Myounggon; Shin, Hyungcheol

    2018-02-01

In this paper, gate-induced drain leakage (GIDL) is analyzed in a gate-all-around (GAA) nanowire FET (NW FET) with an ellipse-shaped channel induced by the process variation effect (PVE). The nanowire fabrication process can change the shape of the channel cross section from a circle to an ellipse. The effect of the distorted channel shape on the GIDL current is investigated and verified by technology computer-aided design (TCAD) simulation. The simulation results demonstrate that the GIDL current comprises two mechanisms: longitudinal band-to-band tunneling (L-BTBT) at the body/drain junction and transverse band-to-band tunneling (T-BTBT) at the gate/drain junction. These two mechanisms are investigated as functions of the channel radius (rnw) and the aspect ratio of the ellipse, both separately and together.

  16. Aeroelastic Analysis of SUGAR Truss-Braced Wing Wind-Tunnel Model Using FUN3D and a Nonlinear Structural Model

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Scott, Robert C.; Allen, Timothy J.; Sexton, Bradley W.

    2015-01-01

Considerable attention has been given in recent years to the design of highly flexible aircraft. The results of numerous studies demonstrate the significant performance benefits of strut-braced wing (SBW) and truss-braced wing (TBW) configurations. Critical aspects of the TBW configuration are its larger aspect ratio and wing span and its thinner wings. These aspects increase the importance of considering fluid/structure and control system coupling. This paper presents high-fidelity Navier-Stokes simulations of the dynamic response of the flexible Boeing Subsonic Ultra Green Aircraft Research (SUGAR) truss-braced wing wind-tunnel model. The latest version of the SUGAR TBW finite element model (FEM), v.20, is used in the present simulations. Limit cycle oscillations (LCOs) of the TBW wing/strut/nacelle are simulated at angle-of-attack (AoA) values of -1, 0 and +1 degree. The modal data derived from nonlinear static aeroelastic MSC.Nastran solutions are used at AoAs of -1 and +1 degrees. The LCO amplitude is observed to be dependent on AoA: LCO amplitudes at -1 degree are larger than those at +1 degree, and the LCO amplitude at zero degrees is larger than at either -1 or +1 degrees. These results correlate well with both wind-tunnel data and the behavior observed in previous studies using linear aerodynamics. The LCO onset at zero degrees AoA has also been computed using unloaded v.20 FEM modes. While the v.20 model increases the dynamic pressure at which LCO onset is observed, it is found that the LCO onset at and above Mach 0.82 is much different from that produced by an earlier version of the FEM, v.19.

  17. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Schöbi, Roland; Sudret, Bruno

    2017-06-01

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.
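The non-intrusive, regression-based polynomial chaos idea behind this surrogate can be sketched in one dimension (not the authors' sparse two-level implementation; the toy model and sample sizes are illustrative) using probabilists' Hermite polynomials, which are orthogonal with respect to a standard Gaussian input:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(0)

def model(x):
    # Stand-in "expensive" computational model with a standard normal
    # input; the real use case would be e.g. a finite element solver.
    return x**3 + x

# Experimental design: a small number of exact-model evaluations.
x_train = rng.standard_normal(50)
y_train = model(x_train)

# Regression matrix of probabilists' Hermite polynomials He_0..He_4,
# orthogonal w.r.t. the standard normal density.
degree = 4
A = np.column_stack([
    hermeval(x_train, np.eye(degree + 1)[k]) for k in range(degree + 1)
])
coeffs, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def surrogate(x):
    # Cheap polynomial chaos surrogate of the exact model.
    return hermeval(x, coeffs)

# x^3 + x = He_3(x) + 4*He_1(x), so here the surrogate is exact
# up to round-off on new points.
x_test = rng.standard_normal(5)
err = np.max(np.abs(surrogate(x_test) - model(x_test)))
print(coeffs.round(6), err)
```

Once fitted from a handful of model runs, the surrogate can be evaluated millions of times at negligible cost, which is what makes the propagation of p-boxes through the two-level scheme tractable.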

  18. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.

  19. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch

    2017-06-15

In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

20. Characterization of Unsteady Flow Structures Near Leading-Edge Slat. Part 2; 2D Computations

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi; Choudhari, Meelan M.; Jenkins, Luther N.

    2004-01-01

In our previous computational studies of a generic high-lift configuration, quasi-laminar (as opposed to fully turbulent) treatment of the slat cove region proved to be an effective approach for capturing the unsteady dynamics of the cove flow field. Combined with acoustic propagation via the Ffowcs Williams and Hawkings formulation, the quasi-laminar simulations captured some important features of the slat cove noise measured with microphone array techniques. However, a direct assessment of the computed cove flow field was not feasible due to the unavailability of off-surface flow measurements. To remedy this shortcoming, we have undertaken a combined experimental and computational study aimed at characterizing the flow structures and fluid mechanical processes within the slat cove region. Part 1 of this paper outlines the experimental aspects of this investigation focused on the 30P30N high-lift configuration; the present paper describes the accompanying computational results, including a comparison between computation and experiment at various angles of attack. Even though predictions of the time-averaged flow field agree well with the measured data, the study indicates the need for further refinement of the zonal turbulence approach in order to capture the full dynamics of the cove's fluctuating flow field.

  1. A computer model for the simulation of nanoparticle deposition in the alveolar structures of the human lungs.

    PubMed

    Sturm, Robert

    2015-11-01

According to epidemiological and experimental studies, inhalation of nanoparticles is commonly believed to be a main trigger for several pulmonary dysfunctions and lung diseases. Concerning the transport and deposition of such nano-scale particles in the different structures of the human lungs, some essential questions still need clarification. Therefore, the main objective of the study was to simulate nanoparticle deposition in the alveolar region of the human respiratory tract (HRT). The factors describing the aerodynamic behavior of spherical and non-spherical particles in the inhaled air stream (i.e., Cunningham slip correction factors, dynamic shape factors, equivalent-volume diameters, and aerodynamic diameters) were computed. Alveolar deposition of diverse nanomaterials according to several known mechanisms, among which Brownian diffusion and sedimentation play a superior role, was approximated using empirical and analytical formulae. Deposition calculations were conducted with a newly developed program, termed NANODEP, which allows the variation of numerous input parameters with regard to particle geometry, lung morphometry, and aerosol inhalation. Generally, alveolar deposition of the nanoparticles considered in this study varies between 0.1% and 12.4% during sitting breathing and between 2.0% and 20.1% during heavy-exercise breathing. Prolate particles (e.g., nanotubes) exhibit a significant increase in deposition when their aspect ratio is increased. In contrast, deposition of oblate particles (e.g., nanoplatelets) declines markedly with any reduction of the aspect ratio. The study clearly demonstrates that alveolar deposition of nanoparticles will remain a topic of major interest for physicists and respiratory physicians in the future.
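The aerodynamic quantities listed in this abstract (Cunningham slip correction, Brownian diffusion coefficient, aerodynamic diameter) follow standard aerosol-physics relations. The sketch below is not the NANODEP code; the air properties and the simplified shape-factor treatment are assumptions:

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
MU_AIR = 1.81e-5        # assumed dynamic viscosity of air, Pa*s (~20 C)
MFP_AIR = 66e-9         # assumed mean free path of air molecules, m
RHO_UNIT = 1000.0       # unit density, kg/m^3

def cunningham(d: float) -> float:
    """Cunningham slip correction for a sphere of diameter d (m),
    using a common three-coefficient parameterization."""
    kn = 2.0 * MFP_AIR / d                       # Knudsen number
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def diffusion_coeff(d: float, temp: float = 293.15) -> float:
    """Stokes-Einstein Brownian diffusion coefficient (m^2/s),
    the dominant deposition mechanism for nanoparticles."""
    return K_B * temp * cunningham(d) / (3.0 * math.pi * MU_AIR * d)

def aerodynamic_diameter(d_ve: float, rho_p: float, chi: float = 1.0) -> float:
    """Aerodynamic diameter from equivalent-volume diameter d_ve,
    particle density rho_p (kg/m^3) and dynamic shape factor chi
    (slip corrections neglected in this simplified sketch)."""
    return d_ve * math.sqrt(rho_p / (chi * RHO_UNIT))

d = 50e-9
print(cunningham(d), diffusion_coeff(d))
```

Slip correction grows rapidly below ~100 nm, so the diffusion coefficient, and hence alveolar deposition by diffusion, increases sharply as particles shrink.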

  2. Numerical sedimentation particle-size analysis using the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.

    2015-12-01

Sedimentation tests are widely used to determine the particle-size distribution of a granular sample. In this work, the Discrete Element Method (DEM) is coupled with a flow simulation using the well-known one-way-coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected on two dimensions, with diameters ranging from 2.5 × 10^-6 m to 70 × 10^-6 m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement considering laminar flow interactions of buoyant, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests, since they need the final granulometry as initial data, but, as the results show, these simulations can identify the strong and weak points of each method and eventually recommend useful variations and draw conclusions on their validity, aspects very difficult to achieve in the laboratory.
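The grain-size inversion underlying the hydrometer and pipette tests rests on the Stokes settling relation, in which laminar drag balances the buoyant weight of each sphere. A minimal sketch (not the paper's DEM code; quartz-grain and water properties are assumed, and hindered settling is neglected):

```python
G = 9.81            # gravitational acceleration, m/s^2
RHO_F = 1000.0      # water density, kg/m^3
MU = 1.0e-3         # assumed water viscosity, Pa*s
RHO_P = 2650.0      # assumed grain density (quartz), kg/m^3

def stokes_terminal_velocity(d: float) -> float:
    """Terminal settling velocity (m/s) of a small sphere in laminar
    (Stokes) flow: drag 3*pi*mu*d*v balances the buoyant weight."""
    return (RHO_P - RHO_F) * G * d**2 / (18.0 * MU)

def settling_time(d: float, depth: float) -> float:
    """Time (s) for a grain to fall a given depth; hydrometer and
    pipette analyses invert this relation to obtain grain size."""
    return depth / stokes_terminal_velocity(d)

# Velocity scales with d^2: a 70 um grain settles (70/2.5)^2 = 784x
# faster than a 2.5 um grain, spanning the paper's size range.
for d in (2.5e-6, 70e-6):
    print(d, stokes_terminal_velocity(d), settling_time(d, 0.1))
```

The quadratic dependence on diameter is what stretches a single sedimentation test over hours for the finest fraction, and why simulating it numerically is so time-consuming.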

  3. Infrared radiative transfer through a regular array of cuboidal clouds

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN; Weinman, J. A.

    1981-01-01

    Infrared radiative transfer through a regular array of cuboidal clouds is studied and the interaction of the sides of the clouds with each other and the ground is considered. The theory is developed for black clouds and is extended to scattering clouds using a variable azimuth two-stream approximation. It is shown that geometrical considerations often dominate over the microphysical aspects of radiative transfer through the clouds. For example, the difference in simulated 10 micron brightness temperature between black isothermal cubic clouds and cubic clouds of optical depth 10, is less than 2 deg for zenith angles less than 50 deg for all cloud fractions when viewed parallel to the array. The results show that serious errors are made in flux and cooling rate computations if broken clouds are modeled as planiform. Radiances computed by the usual practice of area-weighting cloudy and clear sky radiances are in error by 2 to 8 K in brightness temperature for cubic clouds over a wide range of cloud fractions and zenith angles. It is also shown that the lapse rate does not markedly affect the exiting radiances for cuboidal clouds of unit aspect ratio and optical depth 10.
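The plane-parallel area-weighting baseline that the abstract compares against can be reproduced with the Planck function and its inverse. A minimal sketch with standard physical constants; the cloud-top and ground temperatures and the cloud fraction are assumed for illustration:

```python
import math

# Planck constants (SI units)
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
K_B = 1.380649e-23      # Boltzmann constant, J/K
LAM = 10e-6             # 10 micron window wavelength, m

def planck(temp: float) -> float:
    """Spectral radiance B(lambda, T), W/(m^2 sr m)."""
    return (2.0 * H * C**2 / LAM**5) / math.expm1(H * C / (LAM * K_B * temp))

def brightness_temp(rad: float) -> float:
    """Invert the Planck function for the brightness temperature."""
    return (H * C / (LAM * K_B)) / math.log1p(2.0 * H * C**2 / (LAM**5 * rad))

# Area-weight cloudy and clear-sky radiances for an assumed 40% cloud
# fraction, then convert the mixed radiance back to a brightness temp.
t_cloud, t_ground, f = 230.0, 290.0, 0.4
rad_mix = f * planck(t_cloud) + (1.0 - f) * planck(t_ground)
tb = brightness_temp(rad_mix)
print(tb)
```

Because the Planck radiance is strongly convex in temperature at 10 microns, the brightness temperature of the mixed radiance sits several kelvin above the area-weighted mean temperature; the abstract's point is that for cuboidal clouds even this radiance-weighted value is in error by 2 to 8 K once the cloud sides are accounted for.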

  4. Newly synthesized dihydroquinazoline derivative from the aspect of combined spectroscopic and computational study

    NASA Astrophysics Data System (ADS)

    El-Azab, Adel S.; Mary, Y. Sheena; Mary, Y. Shyma; Panicker, C. Yohannan; Abdel-Aziz, Alaa A.-M.; El-Sherbeny, Magda A.; Armaković, Stevan; Armaković, Sanja J.; Van Alsenoy, Christian

    2017-04-01

In this work, spectroscopic characterization of 2-(2-(4-oxo-3-phenethyl-3,4-dihydroquinazolin-2-ylthio)ethyl)isoindoline-1,3-dione has been carried out both experimentally and theoretically. Complete assignments of the fundamental vibrations were performed on the basis of the potential energy distribution of the vibrational modes, and good agreement between the experimental and scaled wavenumbers has been achieved. Frontier molecular orbitals have been used as indicators of stability and reactivity. Intramolecular interactions have been investigated by NBO analysis. The dipole moment, linear polarizability, and first- and second-order hyperpolarizability values were also computed. In order to determine molecular sites prone to electrophilic attack, DFT calculations of the average local ionization energy (ALIE) and Fukui functions have been performed as well. Intra-molecular non-covalent interactions have been determined and analyzed by the analysis of charge density. The stability of the title molecule has also been investigated from the aspect of autoxidation, by calculation of bond dissociation energies (BDE), and hydrolysis, by calculation of radial distribution functions after molecular dynamics (MD) simulations. In order to assess the biological potential of the title compound, a molecular docking study towards the breast cancer type 2 complex has been performed.

  5. Plant metabolic modeling: achieving new insight into metabolism and metabolic engineering.

    PubMed

    Baghalian, Kambiz; Hajirezaei, Mohammad-Reza; Schreiber, Falk

    2014-10-01

    Models are used to represent aspects of the real world for specific purposes, and mathematical models have opened up new approaches in studying the behavior and complexity of biological systems. However, modeling is often time-consuming and requires significant computational resources for data development, data analysis, and simulation. Computational modeling has been successfully applied as an aid for metabolic engineering in microorganisms. But such model-based approaches have only recently been extended to plant metabolic engineering, mainly due to greater pathway complexity in plants and their highly compartmentalized cellular structure. Recent progress in plant systems biology and bioinformatics has begun to disentangle this complexity and facilitate the creation of efficient plant metabolic models. This review highlights several aspects of plant metabolic modeling in the context of understanding, predicting and modifying complex plant metabolism. We discuss opportunities for engineering photosynthetic carbon metabolism, sucrose synthesis, and the tricarboxylic acid cycle in leaves and oil synthesis in seeds and the application of metabolic modeling to the study of plant acclimation to the environment. The aim of the review is to offer a current perspective for plant biologists without requiring specialized knowledge of bioinformatics or systems biology. © 2014 American Society of Plant Biologists. All rights reserved.

  6. Plant Metabolic Modeling: Achieving New Insight into Metabolism and Metabolic Engineering

    PubMed Central

    Baghalian, Kambiz; Hajirezaei, Mohammad-Reza; Schreiber, Falk

    2014-01-01

    Models are used to represent aspects of the real world for specific purposes, and mathematical models have opened up new approaches in studying the behavior and complexity of biological systems. However, modeling is often time-consuming and requires significant computational resources for data development, data analysis, and simulation. Computational modeling has been successfully applied as an aid for metabolic engineering in microorganisms. But such model-based approaches have only recently been extended to plant metabolic engineering, mainly due to greater pathway complexity in plants and their highly compartmentalized cellular structure. Recent progress in plant systems biology and bioinformatics has begun to disentangle this complexity and facilitate the creation of efficient plant metabolic models. This review highlights several aspects of plant metabolic modeling in the context of understanding, predicting and modifying complex plant metabolism. We discuss opportunities for engineering photosynthetic carbon metabolism, sucrose synthesis, and the tricarboxylic acid cycle in leaves and oil synthesis in seeds and the application of metabolic modeling to the study of plant acclimation to the environment. The aim of the review is to offer a current perspective for plant biologists without requiring specialized knowledge of bioinformatics or systems biology. PMID:25344492

7. Navier-Stokes Simulation of Air-Conditioning Facility of a Large Modern Computer Room

    NASA Technical Reports Server (NTRS)

    2005-01-01

NASA recently assembled one of the world's fastest operational supercomputers to meet the agency's new high performance computing needs. This large-scale system, named Columbia, consists of 20 interconnected SGI Altix 512-processor systems, for a total of 10,240 Intel Itanium-2 processors. High-fidelity CFD simulations were performed for the NASA Advanced Supercomputing (NAS) computer room at Ames Research Center. The purpose of the simulations was to assess the adequacy of the existing air handling and conditioning system and make recommendations for changes in the design of the system if needed. The simulations were performed with NASA's OVERFLOW-2 CFD code, which utilizes overset structured grids. A new set of boundary conditions was developed and added to the flow solver for modeling the room's air-conditioning and proper cooling of the equipment. Boundary condition parameters for the flow solver are based on cooler CFM (flow rate) ratings and some reasonable assumptions of flow and heat transfer data for the floor and central processing units (CPU). The geometry modeling from blueprints and grid generation were handled by the NASA Ames software package Chimera Grid Tools (CGT). This geometric model was developed as a CGT-scripted template, which can be easily modified to accommodate any changes in the shape and size of the room, or the locations and dimensions of the CPU racks, disk racks, coolers, power distribution units, and mass-storage system. The compute nodes are grouped in pairs of racks with an aisle in the middle. High-speed connection cables connect the racks with overhead cable trays. The cool air from the cooling units is pumped into the computer room from a sub-floor through perforated floor tiles. The CPU cooling fans draw cool air from the floor tiles, which run along the outside length of each rack, and eject warm air into the center aisle between the racks. This warm air is eventually drawn into the cooling units located near the walls of the room.
One major concern is that the hot air ejected into the middle aisle might recirculate back to the cool side of the racks and cause thermal short-cycling. The simulations analyzed and addressed the following important elements of the computer room: 1) High-temperature build-up in certain regions of the room; 2) Areas of low air circulation in the room; 3) Potential short-cycling of the computer rack cooling system; 4) Effectiveness of the perforated cooling floor tiles; 5) Effect of changes in various aspects of the cooling units. Detailed flow visualization is performed to show the temperature distribution, air-flow streamlines and velocities in the computer room.

  8. Some aspects of steam-water flow simulation in geothermal wells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shulyupin, Alexander N.

    1996-01-24

    Topical aspects of steam-water flow simulation in geothermal wells are considered: the required qualities of a simulator, flow regimes, the mass conservation equation, the momentum conservation equation, the energy conservation equation, and equations of state. Shortcomings of the traditional hydraulic approach are noted. The main questions of simulator development under the hydraulic approach are considered, and new possibilities offered by the structure approach are noted.

  9. A numerical simulation of finite-length Taylor-Couette flow

    NASA Technical Reports Server (NTRS)

    Streett, C. L.; Hussaini, M. Y.

    1988-01-01

    Results from numerical simulations of finite-length Taylor-Couette flow are presented. Included are time-accurate and steady-state studies of the change in the nature of the symmetric two-cell/asymmetric one-cell bifurcation with varying aspect ratio and of the Reynolds number/aspect ratio locus of the two-cell/four-cell bifurcation. Preliminary results from wavy-vortex simulations at low aspect ratios are also presented.

  10. Noise of Embedded High Aspect Ratio Nozzles

    NASA Technical Reports Server (NTRS)

    Bridges, James E.

    2011-01-01

    A family of high aspect ratio nozzles was designed to provide a parametric database of canonical embedded propulsion concepts. Nozzle throat geometries with aspect ratios of 2:1, 4:1, and 8:1 were chosen, all with convergent nozzle areas. The transition from the typical round duct to the rectangular nozzle was designed very carefully to produce a flow at the nozzle exit that was uniform and free from swirl. Once the basic rectangular nozzles were designed, external features common to embedded propulsion systems were added: an extended lower lip (a.k.a. bevel, aft deck), differing sidewalls, and chevrons. For the latter, detailed Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) simulations were made to predict the thrust performance and to optimize parameters such as bevel length, and chevron penetration and azimuthal curvature. Seventeen of these nozzles were fabricated at a scale providing a 2.13 inch diameter equivalent area throat. The seventeen nozzles were tested for far-field noise, and selected data are presented here on the effects of aspect ratio, bevel length, and chevron count and penetration. The sound field of the 2:1 aspect ratio rectangular jet was very nearly axisymmetric, but the 4:1 and 8:1 jets were not, the noise on their minor axes being louder than on their major axes. Adding bevel length increased the noise of these nozzles, especially on their minor axes, both toward the long and short sides of the beveled nozzle. Chevrons were only added to the 2:1 rectangular jet. Adding four chevrons per wide side produced some decrease at aft angles, but increased the high frequency noise at right angles to the jet flow. This trend increased with increasing chevron penetration. Doubling the number of chevrons while maintaining their penetration decreased these effects. Empirical models of the parametric effects of these nozzles were constructed and quantify the trends stated above.
Because it is the objective of the Supersonics Project that future design work be done more by physics-based computations and less by experiments, several codes under development were evaluated against these test cases. Preliminary results show that the RANS-based code JeNo predicts the spectral directivity of the low aspect ratio jets well, but has no capability to predict the non-axisymmetry. An effort to address this limitation, using the RANS-based code of Leib and Goldstein, overpredicted the impact of aspect ratio. The broadband shock noise code RISN, also limited to axisymmetric assumptions, did a good job of predicting the spectral directivity of the underexpanded 2:1 cold jet case, but was not as successful on high aspect ratio jets, particularly when they are hot. All results are preliminary because the underlying CFD has not yet been validated. An effort using a Large Eddy Simulation code by Stanford University predicted noise that agreed with experiments to within a few dB.

  11. Shorebird Migration Patterns in Response to Climate Change: A Modeling Approach

    NASA Technical Reports Server (NTRS)

    Smith, James A.

    2010-01-01

    The availability of satellite remote sensing observations at multiple spatial and temporal scales, coupled with advances in climate modeling and information technologies, offers new opportunities for the application of mechanistic models to predict how continental scale bird migration patterns may change in response to environmental change. In earlier studies, we explored the phenotypic plasticity of a migratory population of Pectoral sandpipers by simulating the movement patterns of an ensemble of 10,000 individual birds in response to changes in stopover locations as an indicator of the impacts of wetland loss and inter-annual variability on the fitness of migratory shorebirds. We used an individual-based, biophysical migration model, driven by remotely sensed land surface data, climate data, and biological field data. Mean stop-over durations and stop-over frequency with latitude predicted from our model for nominal cases were consistent with results reported in the literature and available field data. In this study, we take advantage of new computing capabilities enabled by recent GP-GPU computing paradigms (general-purpose computing on graphics processing units) and commodity hardware. Several aspects of our individual based (agent modeling) approach lend themselves well to GP-GPU computing. We have been able to allocate compute-intensive tasks to the graphics processing units, and now simulate ensembles of 400,000 birds at varying spatial resolutions along the central North American flyway. We are incorporating additional, species specific, mechanistic processes to better reflect the processes underlying bird phenotypic plasticity responses to different climate change scenarios in the central U.S.
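The reason such agent models map well onto GP-GPU hardware is that each bird's update is independent of every other bird's. A minimal sketch of one data-parallel step is given below; the parameter names, thresholds, and update rule are hypothetical illustrations, not the published model.

```python
import random

# Sketch: one step of an individual-based migration model. Each bird
# updates independently of the others, which is exactly the structure
# that maps onto one-thread-per-agent GPU execution. All parameters
# and the update rule itself are hypothetical.

def step_bird(lat, fuel, depart_threshold=0.8, hop_deg=2.0, refuel=0.1):
    """Advance one bird: refuel at a stopover, depart when fuel suffices."""
    if fuel >= depart_threshold:
        return lat + hop_deg, fuel - depart_threshold  # fly one hop north
    return lat, min(1.0, fuel + refuel)                # keep refuelling

def step_population(birds):
    # Independent per-bird updates: on a GPU each would be one thread.
    return [step_bird(lat, fuel) for lat, fuel in birds]

random.seed(0)
birds = [(30.0, random.random()) for _ in range(10_000)]
birds = step_population(birds)
```

On a GPU the list comprehension becomes a kernel launch over the ensemble, which is what makes 400,000-bird ensembles tractable.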

  12. Four PPPPerspectives on computational creativity in theory and in practice

    NASA Astrophysics Data System (ADS)

    Jordanous, Anna

    2016-04-01

    Computational creativity is the modelling, simulating or replicating of creativity computationally. In examining and learning from these "creative systems", from what perspective should the creativity of a system be considered? Are we interested in the creativity of the system's output? Or of its creative processes? Features of the system? Or how it operates within its environment? Traditionally, computational creativity has focused more on creative systems' products or processes, though this focus has widened recently. Creativity research offers the Four Ps of creativity: Person/Producer, Product, Process and Press/Environment. This paper presents the Four Ps, explaining each in the context of creativity research and how it relates to computational creativity. To illustrate the usefulness of the Four Ps in taking broader perspectives on creativity in its computational treatment, the concepts of novelty and value are explored using the Four Ps, highlighting aspects of novelty and value that may otherwise be overlooked. Analysis of recent research in computational creativity finds that although each of the Four Ps appears in the body of computational creativity work, individual pieces of work often do not acknowledge all Four Ps, missing opportunities to widen their work's relevance. We can see, though, that high-status computational creativity papers do typically address all Four Ps. This paper argues that the broader views of creativity afforded by the Four Ps are vital in guiding us towards more comprehensively useful computational investigations of creativity.

  13. Modeling and simulation of protein-surface interactions: achievements and challenges.

    PubMed

    Ozboyaci, Musa; Kokh, Daria B; Corni, Stefano; Wade, Rebecca C

    2016-01-01

    Understanding protein-inorganic surface interactions is central to the rational design of new tools in biomaterial sciences, nanobiotechnology and nanomedicine. Although a significant amount of experimental research on protein adsorption onto solid substrates has been reported, many aspects of the recognition and interaction mechanisms of biomolecules and inorganic surfaces are still unclear. Theoretical modeling and simulations provide complementary approaches to experimental studies, and they have been applied for exploring protein-surface binding mechanisms, the determinants of binding specificity towards different surfaces, as well as the thermodynamics and kinetics of adsorption. Although the general computational approaches employed to study the dynamics of proteins and materials are similar, the models and force fields (FFs) used for describing the physical properties and interactions of material surfaces and biological molecules differ. In particular, FFs and water models designed for use in biomolecular simulations are often not directly transferable to surface simulations and vice versa. The adsorption events span a wide range of time- and length-scales that vary from nanoseconds to days, and from nanometers to micrometers, respectively, rendering the use of multi-scale approaches unavoidable. Further, changes in the atomic structure of material surfaces that can lead to surface reconstruction, and in the structure of proteins that can result in complete denaturation of the adsorbed molecules, can create many intermediate structural and energetic states that complicate sampling. In this review, we address the challenges posed to theoretical and computational methods in achieving accurate descriptions of the physical, chemical and mechanical properties of protein-surface systems.
In this context, we discuss the applicability of different modeling and simulation techniques ranging from quantum mechanics through all-atom molecular mechanics to coarse-grained approaches. We examine uses of different sampling methods, as well as free energy calculations. Furthermore, we review computational studies of protein-surface interactions and discuss the successes and limitations of current approaches.

  14. SIMPSON: A General Simulation Program for Solid-State NMR Spectroscopy

    NASA Astrophysics Data System (ADS)

    Bak, Mads; Rasmussen, Jimmy T.; Nielsen, Niels Chr.

    2000-12-01

    A computer program for fast and accurate numerical simulation of solid-state NMR experiments is described. The program is designed to emulate an NMR spectrometer by letting the user specify high-level NMR concepts such as spin systems, nuclear spin interactions, RF irradiation, free precession, phase cycling, coherence-order filtering, and implicit/explicit acquisition. These elements are implemented using the Tcl scripting language to ensure a minimum of programming overhead and direct interpretation without the need for compilation, while maintaining the flexibility of a full-featured programming language. Basically, there are no intrinsic limitations to the number of spins, types of interactions, sample conditions (static or spinning, powders, uniaxially oriented molecules, single crystals, or solutions), and the complexity or number of spectral dimensions for the pulse sequence. The applicability ranges from simple 1D experiments to advanced multiple-pulse and multiple-dimensional experiments, series of simulations, parameter scans, complex data manipulation/visualization, and iterative fitting of simulated to experimental spectra. A major effort has been devoted to optimizing the computation speed using state-of-the-art algorithms for the time-consuming parts of the calculations implemented in the core of the program using the C programming language. Modification and maintenance of the program are facilitated by releasing the program as open source software (General Public License) currently at http://nmr.imsb.au.dk. The general features of the program are demonstrated by numerical simulations of various aspects of REDOR, rotational resonance, DRAMA, DRAWS, HORROR, C7, TEDOR, POST-C7, CW decoupling, TPPM, F-SLG, SLF, SEMA-CP, PISEMA, RFDR, QCPMG-MAS, and MQ-MAS experiments.

  16. Snowfall retrieval at X, Ka and W bands: consistency of backscattering and microphysical properties using BAECC ground-based measurements

    NASA Astrophysics Data System (ADS)

    Tecla Falconi, Marta; von Lerber, Annakaisa; Ori, Davide; Silvio Marzano, Frank; Moisseev, Dmitri

    2018-05-01

    Radar-based snowfall intensity retrieval is investigated at centimeter and millimeter wavelengths using co-located ground-based multi-frequency radar and video-disdrometer observations. Using data from four snowfall events, recorded during the Biogenic Aerosols Effects on Clouds and Climate (BAECC) campaign in Finland, measurements of liquid-water-equivalent snowfall rate S are correlated to radar equivalent reflectivity factors Ze, measured by the Atmospheric Radiation Measurement (ARM) cloud radars operating at X, Ka and W frequency bands. From these combined observations, power-law Ze-S relationships are derived for all three frequencies considering the influence of riming. Using microwave radiometer observations of liquid water path, the measured precipitation is divided into lightly, moderately and heavily rimed snow. Interestingly, lightly rimed snow events show a spectrally distinct signature of Ze-S with respect to moderately or heavily rimed snow cases. In order to understand the connection between snowflake microphysical and multi-frequency backscattering properties, numerical simulations are performed using the particle size distribution provided by the in situ video disdrometer and retrieved ice particle masses. The scattering computations are carried out both with the T-matrix method (TMM), applied to soft-spheroid particle models with different aspect ratios, and by exploiting a pre-computed discrete dipole approximation (DDA) database for rimed aggregates. Based on the presented results, it is concluded that the soft-spheroid approximation can be adopted to explain the observed multi-frequency Ze-S relations if a proper spheroid aspect ratio is selected. The latter may depend on the degree of riming in snowfall. A further analysis of the backscattering simulations reveals that TMM cross sections are higher than the DDA ones for small ice particles, but lower for larger particles.
The differences in the computed cross sections for larger and smaller particles compensate for each other. This may explain why the soft-spheroid approximation is satisfactory for the radar reflectivity simulations under study.
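A power-law Ze-S retrieval of the kind derived above reduces to inverting Ze = a·S^b for S, with reflectivity handled in linear units rather than dBZ. The sketch below illustrates the arithmetic only; the coefficients a and b are hypothetical placeholders, not the BAECC-derived values, which in practice would also depend on frequency band and degree of riming.

```python
# Sketch: snowfall-rate retrieval from a power-law Ze-S relation,
# Ze = a * S**b, inverted to S = (Ze / a)**(1/b).
# Ze is used in linear units (mm^6 m^-3); dBZ = 10*log10(Ze).
# The coefficients below are hypothetical placeholders.

def dbz_to_linear(dbz):
    """Convert reflectivity from dBZ to linear units."""
    return 10.0 ** (dbz / 10.0)

def snowfall_rate(ze_dbz, a, b):
    """Liquid-water-equivalent snowfall rate S (mm/h) from Ze (dBZ)."""
    ze = dbz_to_linear(ze_dbz)
    return (ze / a) ** (1.0 / b)

# Hypothetical coefficients for, say, lightly rimed snow at X band:
s = snowfall_rate(20.0, a=100.0, b=1.6)
```

Separate (a, b) pairs per frequency and riming class would then encode the riming-dependent signatures the study reports.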

  17. What has finite element analysis taught us about diabetic foot disease and its management? A systematic review.

    PubMed

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2014-01-01

    Over the past two decades finite element (FE) analysis has become a popular tool for researchers seeking to simulate the biomechanics of the healthy and diabetic foot. The primary aims of these simulations have been to improve our understanding of the foot's complicated mechanical loading in health and disease and to inform interventions designed to prevent plantar ulceration, a major complication of diabetes. This article provides a systematic review and summary of the findings from FE analysis-based computational simulations of the diabetic foot. A systematic literature search was carried out and 31 relevant articles were identified, covering three primary themes: methodological aspects relevant to modelling the diabetic foot; investigations of the pathomechanics of the diabetic foot; and simulation-based design of interventions to reduce ulceration risk. Methodological studies illustrated appropriate use of FE analysis for simulation of foot mechanics, incorporating nonlinear tissue mechanics, contact and rigid body movements. FE studies of pathomechanics have provided estimates of internal soft tissue stresses, and suggest that such stresses may often be considerably larger than those measured at the plantar surface and are proportionally greater in the diabetic foot compared to controls. FE analysis has allowed evaluation of insole performance and the development of new insole designs, footwear, and corrective surgery as effective intervention strategies. The technique also presents the opportunity to simulate the effect of changes associated with the diabetic foot on non-mechanical factors such as blood supply to local tissues.
While significant advancement in diabetic foot research has been made possible by the use of FE analysis, translational utility of this powerful tool for routine clinical care at the patient level requires adoption of cost-effective (both in terms of labour and computation) and reliable approaches with clear clinical validity for decision making.

  18. Recovery Act: Web-based CO{sub 2} Subsurface Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paolini, Christopher; Castillo, Jose

    2012-11-30

    The Web-based CO{sub 2} Subsurface Modeling project focused primarily on extending an existing text-only, command-line driven, isothermal and isobaric, geochemical reaction-transport simulation code, developed and donated by Sienna Geodynamics, into an easier-to-use Web-based application for simulating long-term storage of CO{sub 2} in geologic reservoirs. The Web-based interface developed through this project, publicly accessible via URL http://symc.sdsu.edu/, enables rapid prototyping of CO{sub 2} injection scenarios and allows students without advanced knowledge of geochemistry to set up a typical sequestration scenario, invoke a simulation, analyze results, and then vary one or more problem parameters and quickly re-run a simulation to answer what-if questions. symc.sdsu.edu has two 12-core AMD Opteron™ 6174 2.20 GHz processors and 16 GB RAM. The Web-based application was used to develop a new computational science course at San Diego State University, COMP 670: Numerical Simulation of CO{sub 2} Sequestration, which was taught during the fall semester of 2012. The purpose of the class was to introduce graduate students to Carbon Capture, Use and Storage (CCUS) through numerical modeling and simulation, and to teach students how to interpret simulation results to make predictions about long-term CO{sub 2} storage capacity in deep brine reservoirs. In addition to the training and education component of the project, significant software development efforts took place. Two computational science doctoral students and one geological science masters student, under the direction of the PIs, extended the original code developed by Sienna Geodynamics, named Sym.8. New capabilities were added to Sym.8 to simulate non-isothermal and non-isobaric flows of charged aqueous solutes in porous media, in addition to incorporating HPC support into the code for execution on many-core XSEDE clusters.
A successful outcome of this project was the funding and training of three new computational science students and one geological science student in technologies relevant to carbon sequestration and problems involving flow in subsurface media. The three computational science students are currently finishing their doctoral studies on different aspects of modeling CO{sub 2} sequestration, while the geological science student completed his master's thesis in modeling the thermal response of CO{sub 2} injection in brine and, as a direct result of participation in this project, is now employed at ExxonMobil as a full-time staff geologist.

  19. Comparison of validation methods for forming simulations

    NASA Astrophysics Data System (ADS)

    Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus

    2018-05-01

    The forming simulation of fibre reinforced thermoplastics could reduce development time and improve forming results. To take advantage of the full potential of the simulations, however, it has to be ensured that the predictions of material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are, for example, the outer contour, the occurrence of defects and the fibre paths. To measure these features various methods are available. Most relevant, and also most difficult to measure, are the emerging fibre orientations, so this study focused on measuring that feature. The aim was to give an overview of the properties of different measuring systems and select the most promising systems for a comparison survey. An optical system, an eddy current system, and a computer-assisted tomography system were selected, with the focus on measuring fibre orientations. Different formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured, revealing the advantages and disadvantages of the tested systems. Optical measurement systems are easy to use, but are limited to the surface plies. An eddy current system can also measure lower plies, but it is only suitable for carbon fibres. A computer-assisted tomography system can measure all plies, but it is limited to small parts and challenging to evaluate.

  20. Design, Modeling, Fabrication, and Evaluation of the Air Amplifier for Improved Detection of Biomolecules by Electrospray Ionization Mass Spectrometry

    PubMed Central

    Robichaud, Guillaume; Dixon, R. Brent; Potturi, Amarnatha S.; Cassidy, Dan; Edwards, Jack R.; Sohn, Alex; Dow, Thomas A.; Muddiman, David C.

    2010-01-01

    Through a multi-disciplinary approach, the air amplifier is being evolved as a highly engineered device to improve detection limits of biomolecules when using electrospray ionization. Several key aspects have driven the modifications to the device through experimentation and simulations. We have developed a computer simulation that accurately portrays actual conditions, and the results from these simulations are corroborated by the experimental data. These computer simulations can be used to predict outcomes from future designs, resulting in a design process that is efficient in terms of financial cost and time. We have fabricated a new device with annular gap control over a range of 50 to 70 μm using piezoelectric actuators. This has enabled us to obtain better aerodynamic performance when compared to the previous design (2× more vacuum) and also more reproducible results, allowing us to study a broader experimental space than the previous design, which is critical in guiding future directions. This work also presents and explains the principles behind a fractional factorial design of experiments methodology for testing a large number of experimental parameters in an orderly and efficient manner, to understand and optimize the critical parameters that lead to improved detection limits while minimizing the number of experiments performed. Preliminary results showed that several-fold improvements could be obtained under certain operating conditions (up to 34-fold). PMID:21499524
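The fractional factorial methodology mentioned above trades a complete sweep of parameter combinations for a carefully aliased subset. A minimal sketch of a two-level 2^(4-1) design is shown below; the defining relation D = ABC and the factor interpretation (e.g. annular gap, inlet pressure, flow rate, emitter voltage) are hypothetical illustrations, not the design used in the study.

```python
from itertools import product

# Sketch: a two-level fractional factorial design, 2^(4-1), with the
# hypothetical defining relation D = A*B*C. This halves the 16 runs of
# the full 2^4 factorial to 8, at the cost of aliasing D with the
# three-way interaction ABC. Factor meanings are hypothetical.

def fractional_factorial():
    """Return the 8 runs of a 2^(4-1) design as (-1, +1) level tuples."""
    runs = []
    for a, b, c in product((-1, 1), repeat=3):  # full 2^3 base design
        d = a * b * c                           # generator: D = ABC
        runs.append((a, b, c, d))
    return runs

design = fractional_factorial()  # 8 runs instead of 16
```

Each tuple is one experiment at low (-1) or high (+1) settings; main effects are then estimated from only these 8 runs.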

  1. RNA Structural Dynamics As Captured by Molecular Simulations: A Comprehensive Overview.

    PubMed

    Šponer, Jiří; Bussi, Giovanni; Krepl, Miroslav; Banáš, Pavel; Bottaro, Sandro; Cunha, Richard A; Gil-Ley, Alejandro; Pinamonti, Giovanni; Poblete, Simón; Jurečka, Petr; Walter, Nils G; Otyepka, Michal

    2018-04-25

    With both catalytic and genetic functions, ribonucleic acid (RNA) is perhaps the most pluripotent chemical species in molecular biology, and its functions are intimately linked to its structure and dynamics. Computer simulations, and in particular atomistic molecular dynamics (MD), allow structural dynamics of biomolecular systems to be investigated with unprecedented temporal and spatial resolution. We here provide a comprehensive overview of the fast-developing field of MD simulations of RNA molecules. We begin with an in-depth, evaluatory coverage of the most fundamental methodological challenges that set the basis for the future development of the field, in particular, the current developments and inherent physical limitations of the atomistic force fields and the recent advances in a broad spectrum of enhanced sampling methods. We also survey the closely related field of coarse-grained modeling of RNA systems. After dealing with the methodological aspects, we provide an exhaustive overview of the available RNA simulation literature, ranging from studies of the smallest RNA oligonucleotides to investigations of the entire ribosome. Our review encompasses tetranucleotides, tetraloops, a number of small RNA motifs, A-helix RNA, kissing-loop complexes, the TAR RNA element, the decoding center and other important regions of the ribosome, as well as assorted other systems. Extended sections are devoted to RNA-ion interactions, ribozymes, riboswitches, and protein/RNA complexes. Our overview is written for as broad an audience as possible, aiming to provide a much-needed interdisciplinary bridge between computation and experiment, together with a perspective on the future of the field.

  2. Quantifying learning in medical students during a critical care medicine elective: a comparison of three evaluation instruments.

    PubMed

    Rogers, P L; Jacob, H; Rashwan, A S; Pinsky, M R

    2001-06-01

    To compare three different evaluative instruments and determine which is able to measure different aspects of medical student learning. Student learning was evaluated by using written examinations, an objective structured clinical examination, and a patient simulator with two clinical scenarios, before and after a structured critical care elective, in a crossover design. Twenty-four 4th-yr students enrolled in the critical care medicine elective. All students took a multiple-choice written examination; evaluated a live simulated critically ill patient, requested data from a nurse, and intervened as appropriate at different stations (objective structured clinical examination); and evaluated the computer-controlled patient simulator and intervened as appropriate. Students' knowledge was assessed by using a multiple-choice examination containing the same data incorporated into the other examinations. Student performance on the objective structured clinical examination was evaluated at five stations. Both the objective structured clinical examination and simulator tests were videotaped for subsequent scoring of responses, quality of responses, and response time. The videotapes were reviewed for specific behaviors by faculty masked to time of examination. Students were expected to perform the following: a) assess airway, breathing, and circulation; b) prepare a mannequin for intubation; c) provide appropriate ventilator settings; d) manage hypotension; and e) request, interpret, and provide appropriate intervention for pulmonary artery catheter data. Students were expected to perform identical behaviors during the simulator examination; however, the entire examination was performed on the whole-body computer-controlled mannequin. The primary outcome measure was the difference in examination scores before and after the rotation.
The mean preelective scores were 77 +/- 16%, 47 +/- 15%, and 41 +/- 14% for the written examination, objective structured clinical examination, and simulator, respectively, compared with 89 +/- 11%, 76 +/- 12%, and 62 +/- 15% after the elective (p <.0001). Prerotation scores for the written examination were significantly higher than the objective structured clinical examination or the simulator; postrotation scores were highest for the written examination and lowest for the simulator. Written examinations measure acquisition of knowledge but fail to predict if students can apply knowledge to problem solving, whereas both the objective structured clinical examination and the computer-controlled patient simulator can be used as effective performance evaluation tools.

  3. Multiscale molecular dynamics simulations of membrane remodeling by Bin/Amphiphysin/Rvs family proteins

    NASA Astrophysics Data System (ADS)

    Chun, Chan; Haohua, Wen; Lanyuan, Lu; Jun, Fan

    2016-01-01

    Membrane curvature is no longer thought of as a passive property of the membrane; rather, it is considered as an active, regulated state that serves various purposes in the cell, such as communication between cells and the definition of organelles. While transport is usually mediated by tiny membrane bubbles known as vesicles or membrane tubules, such communication requires complex interplay between the lipid bilayers and cytosolic proteins such as members of the Bin/Amphiphysin/Rvs (BAR) superfamily of proteins. With rapid developments in novel experimental techniques, membrane remodeling has become a rapidly emerging new field in recent years. Molecular dynamics (MD) simulations are important tools for obtaining atomistic information regarding the structural and dynamic aspects of biological systems and for understanding the physics-related aspects. The availability of more sophisticated experimental data poses challenges to the theoretical community for developing novel theoretical and computational techniques that can be used to better interpret the experimental results to obtain further functional insights. In this review, we summarize the general mechanisms underlying membrane remodeling controlled or mediated by proteins. While studies combining experiments and molecular dynamics simulations recapitulate existing mechanistic models, they concurrently extend the roles of different BAR domain proteins during membrane remodeling processes. We review these recent findings, focusing on how multiscale molecular dynamics simulations aid in understanding the physical basis of BAR domain proteins, as a representative of membrane-remodeling proteins. Project supported by the National Natural Science Foundation of China (Grant No. 21403182) and the Research Grants Council of Hong Kong, China (Grant No. CityU 21300014).

  4. Reproducible Computing: a new Technology for Statistics Education and Educational Research

    NASA Astrophysics Data System (ADS)

    Wessa, Patrick

    2009-05-01

    This paper explains how the R Framework (http://www.wessa.net) and a newly developed Compendium Platform (http://www.freestatistics.org) allow us to create, use, and maintain documents that contain empirical research results which can be recomputed and reused in derived work. It is illustrated that this technological innovation can be used to create educational applications that can be shown to support effective learning of statistics and associated analytical skills. It is explained how a Compendium can be created by anyone, without the need to understand the technicalities of scientific word processing (LaTeX) or statistical computing (R code). The proposed Reproducible Computing system allows educational researchers to objectively measure key aspects of the actual learning process based on individual and constructivist activities such as peer review, collaboration in research, and computational experimentation. The system was implemented and tested in three statistics courses in which Compendia were used to create an interactive e-learning environment that simulated the real-world process of empirical scientific research.

  5. POPEYE: A production rule-based model of multitask supervisory control (POPCORN)

    NASA Technical Reports Server (NTRS)

    Townsend, James T.; Kadlec, Helena; Kantowitz, Barry H.

    1988-01-01

    Recent studies of the relationships between subjective ratings of mental workload, performance, and human operator and task characteristics have indicated that these relationships are quite complex. In order to study the various relationships and place subjective mental workload within a theoretical framework, we developed a production system model for the performance component of the complex supervisory task called POPCORN. The production system model is represented by a hierarchical structure of goals and subgoals, and the information flow is controlled by a set of condition-action rules. The implementation of this production system, called POPEYE, generates computer-simulated data under different task difficulty conditions which are comparable to those of human operators performing the task. This model is the performance aspect of an overall dynamic psychological model which we are developing to examine and quantify relationships between performance and psychological aspects in a complex environment.

  6. Wind-tunnel investigation of longitudinal and lateral-directional stability and control characteristics of a 0.237-scale model of a remotely piloted research vehicle with a thick, high-aspect-ratio supercritical wing

    NASA Technical Reports Server (NTRS)

    Byrdsong, T. A.; Brooks, C. W., Jr.

    1980-01-01

    A 0.237-scale model of a remotely piloted research vehicle equipped with a thick, high-aspect-ratio supercritical wing was tested in the Langley 8-foot transonic tunnel to provide experimental data for a prediction of the static stability and control characteristics of the research vehicle, as well as to provide an estimate of vehicle flight characteristics for a computer simulation program used in the planning and execution of specific flight-research missions. Data were obtained at a Reynolds number of 16.5 × 10^6 per meter for Mach numbers up to 0.92. The results indicate regions of longitudinal instability; however, an adequate margin of longitudinal stability exists at a selected cruise condition. Satisfactory effectiveness of pitch, roll, and yaw control was also demonstrated.

  7. Computational Enzymology and Organophosphorus Degrading Enzymes: Promising Approaches Toward Remediation Technologies of Warfare Agents and Pesticides

    DOE PAGES

    Ramalho, Teodorico C.; DeCastro, Alexandre A.; Silva, Daniela R.; ...

    2015-08-26

    The re-emergence of chemical weapons as a global threat in the hands of terrorist groups, together with an increasing number of pesticide intoxications and environmental contaminations worldwide, has called the attention of the scientific community to the need for improved technologies for the detoxification of organophosphorus (OP) compounds. A compelling strategy is bioremediation by enzymes that are able to hydrolyze these molecules to harmless chemical species. Several enzymes have been studied and engineered for this purpose; however, their mechanisms of action are not well understood. Theoretical investigations may help elucidate important aspects of these mechanisms and aid in the development of more efficient bioremediators. In this review, we point out the major contributions of computational methodologies applied to enzyme-based detoxification of OPs. Furthermore, we highlight the use of PTE, PON, DFP, and BuChE as enzymes in the OP detoxification process and how computational tools such as molecular docking, molecular dynamics simulations, and combined quantum mechanics/molecular mechanics have contributed, and will continue to contribute, to this very important area of research.

  8. Computational Enzymology and Organophosphorus Degrading Enzymes: Promising Approaches Toward Remediation Technologies of Warfare Agents and Pesticides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramalho, Teodorico C.; DeCastro, Alexandre A.; Silva, Daniela R.

    The re-emergence of chemical weapons as a global threat in the hands of terrorist groups, together with an increasing number of pesticide intoxications and environmental contaminations worldwide, has called the attention of the scientific community to the need for improved technologies for the detoxification of organophosphorus (OP) compounds. A compelling strategy is bioremediation by enzymes that are able to hydrolyze these molecules to harmless chemical species. Several enzymes have been studied and engineered for this purpose; however, their mechanisms of action are not well understood. Theoretical investigations may help elucidate important aspects of these mechanisms and aid in the development of more efficient bioremediators. In this review, we point out the major contributions of computational methodologies applied to enzyme-based detoxification of OPs. Furthermore, we highlight the use of PTE, PON, DFP, and BuChE as enzymes in the OP detoxification process and how computational tools such as molecular docking, molecular dynamics simulations, and combined quantum mechanics/molecular mechanics have contributed, and will continue to contribute, to this very important area of research.

  9. Carbonate aquifer of the Central Roswell Basin: recharge estimation by numerical modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rehfeldt, K.R.; Gross, G.W.

    The flow of ground water in the Roswell Artesian Basin, New Mexico, has been studied since the early 1900s, and varied ideas have been proposed to explain different aspects of the ground water flow system. The purpose of the present study was to delineate the spatial distribution and source, or sources, of recharge to the carbonate aquifer of the central Roswell Basin. A computer model was used to simulate ground water flow in the carbonate aquifer, beneath and west of Roswell, and in the Glorieta Sandstone and Yeso Formation west of the carbonate aquifer.

  10. Soft Tissue Structure Modelling for Use in Orthopaedic Applications and Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Audenaert, E. A.; Mahieu, P.; van Hoof, T.; Pattyn, C.

    2009-12-01

    We present our methodology for the three-dimensional anatomical and geometrical description of soft tissues, relevant for orthopaedic surgical applications and musculoskeletal biomechanics. The technique involves the segmentation and geometrical description of muscles and neurovascular structures from high-resolution computed tomography scanning for the reconstruction of generic anatomical models. These models can be used for quantitative interpretation of anatomical and biomechanical aspects of different soft tissue structures. This approach should allow the use of these data in other application fields, such as musculoskeletal modelling, simulations for radiation therapy, and databases for use in minimally invasive, navigated and robotic surgery.

  11. Human sleep and circadian rhythms: a simple model based on two coupled oscillators.

    PubMed

    Strogatz, S H

    1987-01-01

    We propose a model of the human circadian system. The sleep-wake and body temperature rhythms are assumed to be driven by a pair of coupled nonlinear oscillators described by phase variables alone. The novel aspect of the model is that its equations may be solved analytically. Computer simulations are used to test the model against sleep-wake data pooled from 15 studies of subjects living for weeks in unscheduled, time-free environments. On these tests the model performs about as well as the existing models, although its mathematical structure is far simpler.
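
    The two-oscillator structure described above can be sketched as a pair of coupled phase equations with sinusoidal coupling. This is a generic illustration, not the paper's fitted model: the frequencies (roughly circadian) and coupling strengths `c12`, `c21` below are assumed, illustrative values.

```python
import math

def simulate(days=30, dt=0.01, w1=2 * math.pi / 24.0, w2=2 * math.pi / 24.8,
             c12=0.04, c21=0.16):
    """Euler integration of two coupled phase oscillators.

    phi1 ~ sleep-wake phase, phi2 ~ temperature phase (illustrative
    assignment only). Time unit: hours. Returns (t, phi1, phi2) samples.
    """
    phi1, phi2 = 0.0, 0.0
    t, hours = 0.0, days * 24.0
    trace = []
    while t < hours:
        d1 = w1 + c12 * math.sin(phi2 - phi1)   # pull of oscillator 2 on 1
        d2 = w2 + c21 * math.sin(phi1 - phi2)   # pull of oscillator 1 on 2
        phi1 += d1 * dt
        phi2 += d2 * dt
        t += dt
        trace.append((t, phi1, phi2))
    return trace
```

    With the coupling strengths larger than the frequency detuning, the two phases lock with a small constant offset, which is the qualitative behavior (internal synchronization) such models are built to capture.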

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sklenka, L.; Rataj, J.; Frybort, J.

    Research reactors play an important role in providing key nuclear power plant personnel with hands-on experience of operation and experiments at nuclear facilities. Training of NPP (Nuclear Power Plant) staff is usually deeply theoretical, with extensive utilisation of simulators and computer visualisation, but direct sensing of the reactor's response to various actions can only improve personnel awareness of important aspects of reactor operation. The Training Reactor VR-1 and its utilization for training of NPP operators and other professionals from the Czech Republic and Slovakia is described. Typical experimental exercises and good practices in the organization of a training program are demonstrated. (authors)

  13. [A simulative biomechanical experiment on different positions of non-cemented acetabular components influencing the load distribution around the acetabulum].

    PubMed

    Li, Dongsong; Liu, Jianguo; Li, Shuqiang; Fan, Honghui; Guan, Jikui

    2008-02-01

    In the present study, a three-dimensional finite-element model of the human pelvis was reconstructed. Then, under different acetabular component positions (abduction angles ranging from 30 degrees to 70 degrees and anteversion angles ranging from 5 degrees to 30 degrees), the load distribution around the acetabulum was evaluated with a computer biomechanical analysis program (SolidWorks). From the obtained load distributions, the most even and reasonable range was selected; the safe range of acetabular component implantation can therefore be validated from a biomechanical standpoint.

  14. Sodium dopants in helium clusters: Structure, equilibrium and submersion kinetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calvo, F.

    Alkali impurities bind to helium nanodroplets very differently depending on their size and charge state: large neutral or charged dopants are wetted by the droplet, whereas small neutral impurities prefer to reside at the droplet surface. Using various computational modeling tools, such as quantum Monte Carlo and path-integral molecular dynamics simulations, we have revisited some aspects of the physical chemistry of helium droplets interacting with sodium impurities, including the onset of snowball formation in the presence of many-body polarization forces, the transition from non-wetted to wetted behavior in larger sodium clusters, and the kinetics of submersion of small dopants after sudden ionization.

  15. MCNPX Cosmic Ray Shielding Calculations with the NORMAN Phantom Model

    NASA Technical Reports Server (NTRS)

    James, Michael R.; Durkee, Joe W.; McKinney, Gregg; Singleterry, Robert

    2008-01-01

    The United States is planning manned lunar and interplanetary missions in the coming years. Shielding from cosmic rays is a critical aspect of manned spaceflight. These ventures will present exposure issues involving the interplanetary Galactic Cosmic Ray (GCR) environment. GCRs are comprised primarily of protons (approximately 84.5%) and alpha particles (approximately 14.7%), while the remainder is comprised of massive, highly energetic nuclei. The National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) has commissioned a joint study with Los Alamos National Laboratory (LANL) to investigate the interaction of the GCR environment with humans using high-fidelity, state-of-the-art computer simulations. The simulations involve shielding and dose calculations in order to assess radiation effects in various organs. The simulations are being conducted using high-resolution voxel-phantom models and the MCNPX[1] Monte Carlo radiation-transport code. Recent advances in MCNPX physics packages now enable simulated transport of over 2200 types of ions of widely varying energies in large, intricate geometries. We report here initial results obtained using a GCR spectrum and a NORMAN[3] phantom.
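
    The core sampling idea behind Monte Carlo radiation transport, drawing exponentially distributed free paths between interactions, can be shown in a few lines. The toy below (function name and parameters are hypothetical) estimates uncollided transmission through a uniform slab; it carries none of MCNPX's actual physics.

```python
import math
import random

def transmitted_fraction(mu, thickness, n=200_000, seed=1):
    """Fraction of mono-energetic particles crossing a slab of
    attenuation coefficient mu (1/cm) and given thickness (cm)
    without interacting. Free paths are sampled as -ln(u)/mu."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if -math.log(1.0 - rng.random()) / mu > thickness)
    return hits / n
```

    The estimate converges to the analytic value exp(-mu * thickness) as the number of histories grows, which is a handy sanity check for any transport sampler.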

  16. A dual-porosity model for simulating solute transport in oil shale

    USGS Publications Warehouse

    Glover, K.C.

    1987-01-01

    A model is described for simulating three-dimensional groundwater flow and solute transport in oil shale and associated geohydrologic units. The model treats oil shale as a dual-porosity medium by simulating flow and transport within fractures using the finite-element method. Diffusion of solute between fractures and the essentially static water of the shale matrix is simulated by including an analytical solution that acts as a source-sink term to the differential equation of solute transport. While knowledge of fracture orientation and spacing is needed to effectively use the model, it is not necessary to map the locations of individual fractures. The computer program listed in the report incorporates many of the features of previous dual-porosity models while retaining a practical approach to solving field problems. As a result the theory of solute transport is not extended in any appreciable way. The emphasis is on bringing together various aspects of solute transport theory in a manner that is particularly suited to the unusual groundwater flow and solute transport characteristics of oil shale systems. (Author's abstract)
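
    The fracture-matrix source-sink coupling described above can be sketched as a first-order exchange term. The report uses an analytical diffusion solution rather than a lumped coefficient, so the mass-transfer coefficient `alpha` below (and the explicit upwind scheme) is a simplifying assumption for illustration only.

```python
import numpy as np

def dual_porosity_step(cf, cm, v, dx, dt, alpha, phi_f, phi_m):
    """One explicit step of 1-D transport in the fracture continuum
    (concentration cf, porosity phi_f) with first-order exchange to an
    immobile matrix continuum (cm, phi_m).

    alpha is a hypothetical lumped fracture-matrix mass-transfer
    coefficient standing in for the report's analytical diffusion term.
    """
    # first-order upwind advection in the fractures (flow in +x)
    adv = -v * dt / dx * (cf - np.roll(cf, 1))
    adv[0] = 0.0                        # inlet cell held by the caller
    exch = alpha * (cf - cm) * dt       # fracture -> matrix mass flux
    cf_new = cf + adv - exch / phi_f
    cm_new = cm + exch / phi_m
    return cf_new, cm_new
```

    With no flow, the exchange term simply relaxes the two concentrations toward each other while conserving total mass phi_f*cf + phi_m*cm, which is the defining behavior of a dual-porosity source-sink term.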

  17. Computation of Steady-State Probability Distributions in Stochastic Models of Cellular Networks

    PubMed Central

    Hallen, Mark; Li, Bochong; Tanouchi, Yu; Tan, Cheemeng; West, Mike; You, Lingchong

    2011-01-01

    Cellular processes are “noisy”. In each cell, concentrations of molecules are subject to random fluctuations due to the small numbers of these molecules and to environmental perturbations. While noise varies with time, it is often measured at steady state, for example by flow cytometry. When interrogating aspects of a cellular network by such steady-state measurements of network components, a key need is to develop efficient methods to simulate and compute these distributions. We describe innovations in stochastic modeling coupled with approaches to this computational challenge: first, an approach to modeling intrinsic noise via solution of the chemical master equation, and second, a convolution technique to account for contributions of extrinsic noise. We show how these techniques can be combined in a streamlined procedure for evaluation of different sources of variability in a biochemical network. Evaluation and illustrations are given in analysis of two well-characterized synthetic gene circuits, as well as a signaling network underlying the mammalian cell cycle entry. PMID:22022252
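
    For intuition, the chemical-master-equation approach mentioned above can be shown on a toy birth-death process (constant production, linear degradation), whose truncated generator is small enough to solve directly. This is a generic illustration, not the authors' method; the rate values are arbitrary.

```python
import numpy as np

def cme_steady_state(k=10.0, g=1.0, N=60):
    """Stationary distribution of a birth-death chemical master
    equation: production at rate k, degradation at rate g*n,
    truncated at N molecules. The exact answer is Poisson(k/g)."""
    A = np.zeros((N + 1, N + 1))        # generator, columns = "from" state
    for n in range(N + 1):
        if n < N:
            A[n + 1, n] += k            # birth: n -> n+1
            A[n, n] -= k
        if n > 0:
            A[n - 1, n] += g * n        # death: n -> n-1
            A[n, n] -= g * n
    # steady state: A p = 0 together with the normalization sum(p) = 1
    M = np.vstack([A, np.ones(N + 1)])
    b = np.zeros(N + 2)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p
```

    For networks too large to enumerate this way, the same stationary distribution is what stochastic simulation or the specialized solvers discussed in the paper approximate.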

  18. Support vector machine firefly algorithm based optimization of lens system.

    PubMed

    Shamshirband, Shahaboddin; Petković, Dalibor; Pavlović, Nenad T; Ch, Sudheer; Altameem, Torki A; Gani, Abdullah

    2015-01-01

    Lens system design is an important factor in image quality. The main aspect of the lens system design methodology is the optimization procedure. Since optimization is a complex, nonlinear task, soft computing optimization algorithms can be used. There are many tools that can be employed to measure optical performance, but the spot diagram is the most useful. The spot diagram gives an indication of the image of a point object. In this paper, the spot size radius is considered an optimization criterion. An intelligent soft computing scheme, support vector machines (SVMs) coupled with the firefly algorithm (FFA), is implemented. The performance of the proposed estimators is confirmed with the simulation results. The result of the proposed SVM-FFA model has been compared with support vector regression (SVR), artificial neural networks, and genetic programming methods. The results show that the SVM-FFA model performs more accurately than the other methodologies. Therefore, SVM-FFA can be used as an efficient soft computing technique in the optimization of lens system designs.
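
    A minimal sketch of the firefly algorithm in its standard (Yang-style) form, not necessarily the exact variant used in the paper: each firefly moves toward every brighter (better) one with an attractiveness that decays with distance, plus a shrinking random walk. The objective here stands in for a spot-radius merit function, and all parameter values are illustrative assumptions.

```python
import math
import random

def firefly_minimize(f, dim=2, n=15, iters=100, beta0=1.0,
                     gamma=1.0, alpha=0.2, seed=0):
    """Minimize f over [-2, 2]^dim with a basic firefly algorithm."""
    rng = random.Random(seed)
    X = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    F = [f(x) for x in X]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                      # j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [a + beta * (b - a)
                            + alpha * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
                    F[i] = f(X[i])
        alpha *= 0.97                                # cool the random walk
    best = min(range(n), key=lambda i: F[i])
    return X[best], F[best]
```

    In the paper's hybrid scheme, an algorithm of this kind tunes the SVM hyperparameters rather than the lens variables directly; the sketch only shows the optimizer itself.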

  19. Neighbour lists for smoothed particle hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Winkler, Daniel; Rezavand, Massoud; Rauch, Wolfgang

    2018-04-01

    The efficient iteration of neighbouring particles is a performance-critical aspect of any high performance smoothed particle hydrodynamics (SPH) solver. SPH solvers that implement a constant smoothing length generally divide the simulation domain into a uniform grid to reduce the computational complexity of the neighbour search. Based on this method, particle neighbours are stored either per grid cell or for each individual particle; the latter is denoted a Verlet list. While the latter approach has significantly higher memory requirements, it has the potential for a significant computational speedup. A theoretical comparison is performed to estimate the potential improvements of the method based on unknown hardware-dependent factors. Subsequently, the computational performance of both approaches is empirically evaluated on graphics processing units. It is shown that the speedup differs significantly for different hardware, dimensionality and floating point precision. The Verlet list algorithm is implemented as an alternative to the cell linked list approach in the open-source SPH solver DualSPHysics and provided as a standalone software package.
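
    The uniform-grid construction described above can be sketched in a few lines: hash each particle into a cell of side equal to the smoothing length, then build per-particle Verlet lists by scanning the 27 surrounding cells. This pure-Python 3-D sketch only illustrates the idea and is not DualSPHysics code.

```python
from collections import defaultdict

def build_cell_list(positions, h):
    """Map each cell index triple to the particle indices it contains."""
    cells = defaultdict(list)
    for idx, (x, y, z) in enumerate(positions):
        cells[(int(x // h), int(y // h), int(z // h))].append(idx)
    return cells

def verlet_lists(positions, h):
    """Per-particle neighbour lists for an interaction radius h."""
    cells = build_cell_list(positions, h)
    neigh = [[] for _ in positions]
    for i, (x, y, z) in enumerate(positions):
        cx, cy, cz = int(x // h), int(y // h), int(z // h)
        for dx in (-1, 0, 1):            # scan the 3x3x3 cell block
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                        if j == i:
                            continue
                        r2 = ((x - positions[j][0]) ** 2
                              + (y - positions[j][1]) ** 2
                              + (z - positions[j][2]) ** 2)
                        if r2 < h * h:
                            neigh[i].append(j)
    return neigh
```

    The trade-off discussed in the paper is visible here: the cell list alone stores one short list per cell and redoes the 27-cell scan every interaction loop, while the explicit Verlet lists cost O(neighbours) memory per particle but let each subsequent loop touch exactly the interacting pairs.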

  20. Trends in computer applications in science assessment

    NASA Astrophysics Data System (ADS)

    Kumar, David D.; Helgeson, Stanley L.

    1995-03-01

    Seven computer applications to science assessment are reviewed. Conventional test administration includes record keeping, grading, and managing test banks. Multiple-choice testing involves forced selection of an answer from a menu, whereas constructed-response testing involves options for students to present their answers within a set standard deviation. Adaptive testing attempts to individualize the test to minimize the number of items and time needed to assess a student's knowledge. Figural response testing assesses science proficiency in pictorial or graphic mode and requires the student to construct a mental image rather than select a response from a multiple-choice menu. Simulations have been found useful for performance assessment on a large-scale basis in part because they make it possible to independently specify different aspects of a real experiment. An emerging approach to performance assessment is solution pathway analysis, which permits the analysis of the steps a student takes in solving a problem. Virtually all computer-based testing systems improve the quality and efficiency of record keeping and data analysis.
