Science.gov

Sample records for accurate computer simulation

  1. Computationally efficient and accurate enantioselectivity modeling by clusters of molecular dynamics simulations.

    PubMed

    Wijma, Hein J; Marrink, Siewert J; Janssen, Dick B

    2014-07-28

    Computational approaches could decrease the need for the laborious high-throughput experimental screening that is often required to improve enzymes by mutagenesis. Here, we report that using multiple short molecular dynamics (MD) simulations makes it possible to accurately model enantioselectivity for large numbers of enzyme-substrate combinations at low computational cost. We chose four different haloalkane dehalogenases as model systems because of the availability of a large set of experimental data on the enantioselective conversion of 45 different substrates. To model the enantioselectivity, we quantified the frequency of occurrence of catalytically productive conformations (near attack conformations) for pairs of enantiomers during MD simulations. We found that the angle of nucleophilic attack that leads to carbon-halogen bond cleavage was a critical variable that limited the occurrence of productive conformations; enantiomers for which this angle reached values close to 180° were preferentially converted. A cluster of 20-40 very short (10 ps) MD simulations allowed adequate conformational sampling and resulted in much better agreement with experimental enantioselectivities than single long MD simulations (22 ns), while the computational costs were 50-100-fold lower. With single long MD simulations, the dynamics of enzyme-substrate complexes remained confined to a conformational subspace that rarely changed significantly, whereas with multiple short MD simulations a larger diversity of conformations of enzyme-substrate complexes was observed. PMID:24916632
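    As a concrete illustration of the counting approach described in this record, the sketch below (Python; not the authors' code) estimates near-attack-conformation frequencies from a cluster of short trajectories and approximates enantioselectivity as their ratio. The angle arrays, cutoff, and run counts are hypothetical stand-ins for real MD output.

        import numpy as np

        # Hypothetical per-frame nucleophilic-attack angles (degrees) for the two
        # enantiomers: one row per short MD run, one column per frame.
        rng = np.random.default_rng(0)
        angles_R = rng.normal(165.0, 10.0, size=(30, 100))   # 30 runs x 100 frames
        angles_S = rng.normal(140.0, 10.0, size=(30, 100))

        ANGLE_CUTOFF = 157.0   # assumed threshold for a catalytically productive pose

        f_R = np.mean(angles_R >= ANGLE_CUTOFF)   # NAC frequency, (R)-enantiomer
        f_S = np.mean(angles_S >= ANGLE_CUTOFF)   # NAC frequency, (S)-enantiomer

        # If conversion rates scale with NAC frequency, the enantioselectivity E
        # can be approximated by the ratio of the two frequencies.
        print(f"f_R={f_R:.3f}, f_S={f_S:.3f}, E ~ {f_R / f_S:.1f}")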

  2. Suite of finite element algorithms for accurate computation of soft tissue deformation for surgical simulation

    PubMed Central

    Joldes, Grand Roman; Wittek, Adam; Miller, Karol

    2008-01-01

    Real-time computation of soft tissue deformation is important for the use of augmented reality devices and for providing haptic feedback during operation or surgeon training. This requires algorithms that are fast, accurate, and can handle material nonlinearities and large deformations. A set of such algorithms is presented in this paper, starting with the finite element formulation and the integration scheme used, and addressing common problems such as hourglass control and locking. The computation examples presented prove that by using these algorithms, real-time computations become possible without sacrificing the accuracy of the results. For a brain model having more than 7000 degrees of freedom, we computed the reaction forces due to indentation at a frequency of around 1000 Hz using a standard dual core PC. Similarly, we conducted a simulation of brain shift using a model with more than 50 000 degrees of freedom in less than a minute. The speed benefits of our models result from combining the Total Lagrangian formulation with explicit time integration and low order finite elements. PMID:19152791
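    A minimal sketch of the explicit central-difference update that underlies Total Lagrangian explicit dynamics is given below (Python; illustrative only, with a toy linear routine standing in for the element-level internal-force computation). Note that no system matrix is assembled or inverted, which is the source of the speed the abstract describes.

        import numpy as np

        def internal_forces(u, k=1.0):
            # Placeholder for the nonlinear Total Lagrangian internal force vector.
            return k * u

        n, dt, steps = 8, 1.0e-3, 1000
        m = np.ones(n)                          # lumped (diagonal) mass matrix
        u_prev = np.zeros(n)                    # displacements at t - dt
        u = np.zeros(n)                         # displacements at t
        f_ext = np.zeros(n); f_ext[-1] = 0.1    # hypothetical applied load

        for _ in range(steps):
            a = (f_ext - internal_forces(u)) / m       # nodal accelerations
            u_next = 2.0 * u - u_prev + dt**2 * a      # central-difference update
            u_prev, u = u, u_next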

  3. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    SciTech Connect

    Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Ko, K.; /SLAC

    2009-06-19

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  4. Time-Accurate Computational Fluid Dynamics Simulation of a Pair of Moving Solid Rocket Boosters

    NASA Technical Reports Server (NTRS)

    Strutzenberg, Louise L.; Williams, Brandon R.

    2011-01-01

    Since the Columbia accident, the threat to the Shuttle launch vehicle from debris during the liftoff timeframe has been assessed by the Liftoff Debris Team at NASA/MSFC. In addition to engineering methods of analysis, CFD-generated flow fields during the liftoff timeframe have been used in conjunction with 3-DOF debris transport methods to predict the motion of liftoff debris. Early models made use of a quasi-steady flow field approximation with the vehicle positioned at a fixed location relative to the ground; however, a moving overset mesh capability has recently been developed for the Loci/CHEM CFD software which enables higher-fidelity simulation of the Shuttle transient plume startup and liftoff environment. The present work details the simulation of the launch pad and mobile launch platform (MLP) with truncated solid rocket boosters (SRBs) moving in a prescribed liftoff trajectory derived from Shuttle flight measurements. Using Loci/CHEM, time-accurate RANS and hybrid RANS/LES simulations were performed for the timeframe T0+0 to T0+3.5 seconds, which covers SRB startup to a vehicle altitude of approximately 90 feet above the MLP. Analysis of the transient flowfield focuses on the evolution of the SRB plumes in the MLP plume holes and the flame trench, impingement on the flame deflector, and especially impingement on the MLP deck, which produces upward flow that acts as a transport mechanism for debris. The results show excellent qualitative agreement with the visual record from past Shuttle flights, and comparisons to pressure measurements in the flame trench and on the MLP provide confidence in these simulation capabilities.
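    For context, a 3-DOF debris-transport method of the kind mentioned here integrates a point mass through the CFD flow field under gravity and quadratic drag. The sketch below is a hedged, generic illustration (the flow sampler, mass, and drag properties are assumed, not taken from the NASA tool):

        import numpy as np

        def u_air(x, t):
            # Stand-in for interpolating the CFD flow field at position x, time t.
            return np.array([0.0, 0.0, 30.0])   # assumed upward plume-driven flow, m/s

        rho, Cd, A, m = 1.2, 1.0, 1.0e-3, 0.05  # air density, drag coeff., area, mass
        g = np.array([0.0, 0.0, -9.81])

        x, v, dt = np.zeros(3), np.zeros(3), 1.0e-3
        for step in range(3500):                 # 3.5 s of simulated time
            u_rel = u_air(x, step * dt) - v
            a = g + 0.5 * rho * Cd * A / m * np.linalg.norm(u_rel) * u_rel
            v += dt * a
            x += dt * v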

  5. Accurate treatments of electrostatics for computer simulations of biological systems: A brief survey of developments and existing problems

    NASA Astrophysics Data System (ADS)

    Yi, Sha-Sha; Pan, Cong; Hu, Zhong-Han

    2015-12-01

    Modern computer simulations of biological systems often involve an explicit treatment of the complex interactions among a large number of molecules. While it is straightforward to compute the short-ranged van der Waals interaction in classical molecular dynamics simulations, it has been a long-standing challenge to develop accurate methods for the long-ranged Coulomb interaction. In this short review, we discuss three types of methodologies for the accurate treatment of electrostatics in simulations of explicit molecules: truncation-type methods, Ewald-type methods, and mean-field-type methods. Throughout the discussion, we outline the formulations and developments of these methods, emphasize the intrinsic connections among the three types of methods, and focus on the existing problems, which are often associated with the boundary conditions of electrostatics. This brief survey concludes with a short perspective on future trends in method development and applications in the field of biological simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 91127015 and 21522304), the Open Project from the State Key Laboratory of Theoretical Physics, and the Innovation Project from the State Key Laboratory of Supramolecular Structure and Materials.
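    For reference, the Ewald-type decomposition this survey discusses splits the Coulomb kernel into a short-ranged part summed in real space and a smooth part summed in reciprocal space. In its standard textbook form (Gaussian units, conducting boundary conditions), in LaTeX notation:

        \frac{1}{r} = \frac{\operatorname{erfc}(\alpha r)}{r} + \frac{\operatorname{erf}(\alpha r)}{r}

        E_{\mathrm{Ewald}} = \frac{1}{2}\sum_{i \neq j} q_i q_j \frac{\operatorname{erfc}(\alpha r_{ij})}{r_{ij}}
          + \frac{1}{2V}\sum_{\mathbf{k} \neq 0} \frac{4\pi}{k^2}\, e^{-k^2/4\alpha^2} \Bigl|\sum_j q_j e^{i\mathbf{k}\cdot\mathbf{r}_j}\Bigr|^2
          - \frac{\alpha}{\sqrt{\pi}} \sum_j q_j^2

    The splitting parameter alpha trades work between the two sums; the boundary-condition issues the review highlights enter through the treatment of the k = 0 term and the surrounding dielectric.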

  6. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    PubMed Central

    Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational. PMID:25615870

  7. IGS global ionospheric maps for accurate computation of GPS single-frequency ionospheric delay: simulation study

    NASA Astrophysics Data System (ADS)

    Farah, A.

    The ionospheric delay is still one of the largest sources of error affecting the positioning accuracy of any satellite positioning system. Because the ionosphere is dispersive, the problem can be solved by combining simultaneous measurements of signals at two different frequencies, but it remains for single-frequency users. Much effort has been made to establish models that make this effect as small as possible for single-frequency users. These models vary in accuracy, input data and computational complexity, so the choice between the different models depends on the individual circumstances of the user. From the simulation point of view, the model needed should be accurate, offer global coverage, and describe the ionosphere's variation with both time and location. The author reviews some of these established models, starting with the BENT model, the Klobuchar model and the IRI (International Reference Ionosphere) model. For quite a long time the Klobuchar model has been the most widely used model in this field, owing to its simplicity and low computational cost. Any GPS user can find the Klobuchar model's coefficients in the broadcast navigation message. CODE (Centre for Orbit Determination in Europe) provides a new set of coefficients for the Klobuchar model, which gives more accurate results for the ionospheric delay computation. IGS (International GPS Service) services include providing the GPS community with global ionospheric maps in IONEX format (IONosphere map EXchange format), which enable the computation of the ionospheric delay at the desired location and time. The study was undertaken from the GPS-data simulation point of view. The aim was to select a model for the simulation of GPS data that gives a good description of the ionosphere's nature and a high degree of accuracy in computing the ionospheric delay, yielding better-simulated data. A new model was developed by the author based on IGS global ionospheric maps. A comparison
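    To make the delay computation concrete, the sketch below (Python; assumed grid and interfaces, not an IGS tool) bilinearly interpolates a gridded vertical-TEC map of the kind distributed in IONEX files and applies the standard first-order group-delay formula d = 40.3 TEC / f^2 (metres, with TEC in electrons/m^2 and f in Hz):

        import numpy as np

        lats = np.arange(-87.5, 90.0, 2.5)               # hypothetical IONEX grid
        lons = np.arange(-180.0, 185.0, 5.0)
        vtec = np.full((lats.size, lons.size), 20.0)     # TEC units (1 TECU = 1e16 e/m^2)

        def vertical_delay(lat, lon, f_hz=1.57542e9):    # GPS L1 frequency
            # Bilinear interpolation; interior grid points assumed.
            i = np.searchsorted(lats, lat) - 1
            j = np.searchsorted(lons, lon) - 1
            wy = (lat - lats[i]) / (lats[i + 1] - lats[i])
            wx = (lon - lons[j]) / (lons[j + 1] - lons[j])
            tecu = (vtec[i, j] * (1 - wy) * (1 - wx) + vtec[i + 1, j] * wy * (1 - wx)
                    + vtec[i, j + 1] * (1 - wy) * wx + vtec[i + 1, j + 1] * wy * wx)
            return 40.3 * tecu * 1e16 / f_hz**2          # metres of signal delay

        print(vertical_delay(30.0, 31.0))                # about 3.2 m for 20 TECU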

  8. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    SciTech Connect

    Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  9. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    NASA Astrophysics Data System (ADS)

    Mehmani, Yashar; Oostrom, Mart; Balhoff, Matthew T.

    2014-03-01

    Several approaches have been developed in the literature for solving flow and transport at the pore scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect-mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and validated against micromodel experiments; excellent matches were obtained across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3-D disordered granular media.
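    As background on the network formulation, a mixed-cell-type solve enforces mass conservation at every pore, with the flux through the throat connecting pores i and j given by q_ij = g_ij (p_i - p_j); SSM then resolves streamline splitting within pores instead of assuming perfect mixing. A minimal flow-solve sketch (Python; hypothetical network and conductances, not the authors' code):

        import numpy as np

        throats = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4)]   # pore connectivity
        g = {t: 1.0 for t in throats}                        # hydraulic conductances

        n = 5
        A, b = np.zeros((n, n)), np.zeros(n)
        for (i, j), gij in g.items():
            A[i, i] += gij; A[j, j] += gij
            A[i, j] -= gij; A[j, i] -= gij

        for pore, p in [(0, 1.0), (4, 0.0)]:     # Dirichlet inlet/outlet pressures
            A[pore, :] = 0.0; A[pore, pore] = 1.0; b[pore] = p

        p = np.linalg.solve(A, b)                # pore pressures
        q01 = g[(0, 1)] * (p[0] - p[1])          # throat fluxes follow directly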

  10. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    SciTech Connect

    Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew

    2014-03-20

    Several approaches have been developed in the literature for solving flow and transport at the pore-scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore-scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.

  11. CAFE: A Computer Tool for Accurate Simulation of the Regulatory Pool Fire Environment for Type B Packages

    SciTech Connect

    Gritzo, L.A.; Koski, J.A.; Suo-Anttila, A.J.

    1999-03-16

    The Container Analysis Fire Environment computer code (CAFE) is intended to provide Type B package designers with an enhanced engulfing fire boundary condition when combined with the PATRAN/P-Thermal commercial code. Historically, an engulfing fire boundary condition has been modeled as σT^4, where σ is the Stefan-Boltzmann constant and T is the fire temperature. The CAFE code includes the necessary chemistry, thermal radiation, and fluid mechanics to model an engulfing fire. Effects included are the local cooling of gases that form a protective boundary layer, which reduces the incoming radiant heat flux to values lower than expected from a simple σT^4 model. In addition, the effect of object shape on mixing that may increase the local fire temperature is included. Both high and low temperature regions that depend upon the local availability of oxygen are also calculated. Thus the competing effects that can both increase and decrease the local values of radiant heat flux are included in a manner that is not predictable a priori. The CAFE package consists of a group of computer subroutines that can be linked to workstation-based thermal analysis codes in order to predict package performance during regulatory and other accident fire scenarios.
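    For scale, the classical boundary condition that CAFE refines can be sketched in a few lines (Python; grey-body exchange with assumed emissivities, not the CAFE model):

        SIGMA = 5.670374419e-8       # Stefan-Boltzmann constant, W/(m^2 K^4)

        def net_radiant_flux(T_fire, T_surf, eps_fire=0.9, eps_surf=0.8):
            # Simple grey-body exchange; CAFE instead resolves the chemistry,
            # radiation and fluid mechanics that modulate this flux locally.
            return eps_fire * eps_surf * SIGMA * (T_fire**4 - T_surf**4)

        # Regulatory 800 C pool fire onto a 38 C package surface:
        print(net_radiant_flux(800.0 + 273.15, 38.0 + 273.15))   # roughly 5e4 W/m^2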

  12. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
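    The kind of performance model described here can be illustrated with a toy per-step cost function (Python; the constants are hypothetical, not the paper's calibrated values): each processor's time is computation plus boundary communication, and the synchronized step time is the maximum over processors.

        t_cell = 2.0e-6     # seconds of compute per grid cell      [assumed]
        t_msg = 1.0e-4      # per-message latency                   [assumed]
        t_word = 1.0e-6     # per-word transfer time                [assumed]

        def step_time(partition_sizes, boundary_words=2):
            # Two neighbour exchanges per processor per step (1-D partition).
            return max(n * t_cell + 2 * (t_msg + boundary_words * t_word)
                       for n in partition_sizes)

        print(step_time([2500, 2500, 2500, 2500]))   # balanced partition
        print(step_time([4000, 3000, 2000, 1000]))   # imbalance dominates the step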

  13. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  14. Progress in fast, accurate multi-scale climate simulations

    SciTech Connect

    Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  15. Progress in Fast, Accurate Multi-scale Climate Simulations

    SciTech Connect

    Collins, William D; Johansen, Hans; Evans, Katherine J; Woodward, Carol S.; Caldwell, Peter

    2015-01-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  16. Software simulator for multiple computer simulation system

    NASA Technical Reports Server (NTRS)

    Ogrady, E. P.

    1983-01-01

    A description is given of the structure and use of a computer program that simulates the operation of a parallel processor simulation system. The program is part of an investigation to determine algorithms that are suitable for simulating continuous systems on a parallel processor configuration. The simulator is designed to accurately simulate the problem-solving phase of a simulation study. Care has been taken to ensure the integrity and correctness of data exchanges and to correctly sequence periods of computation and periods of data exchange. It is pointed out that the functions performed during a problem-setup phase or a reset phase are not simulated. In particular, there is no attempt to simulate the downloading process that loads object code into the local, transfer, and mapping memories of processing elements or the memories of the run control processor and the system control processor. The main program of the simulator carries out some problem-setup functions of the system control processor in that it requests the user to enter values for simulation system parameters and problem parameters. The method by which these values are transferred to the other processors, however, is not simulated.

  17. Time accurate simulations of compressible shear flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Steinberger, Craig J.; Vidoni, Thomas J.; Madnia, Cyrus K.

    1993-01-01

    The objectives of this research are to employ direct numerical simulation (DNS) to study the phenomenon of mixing (or lack thereof) in compressible free shear flows and to suggest new means of enhancing mixing in such flows. The shear flow configurations under investigation are those of parallel mixing layers and planar jets under both non-reacting and reacting nonpremixed conditions. During the three-years of this research program, several important issues regarding mixing and chemical reactions in compressible shear flows were investigated.

  18. Accurate Langevin approaches to simulate Markovian channel dynamics

    NASA Astrophysics Data System (ADS)

    Huang, Yandong; Rüdiger, Sten; Shuai, Jianwei

    2015-12-01

    The stochasticity of ion-channel dynamics is significant for physiological processes on neuronal cell membranes. Microscopic simulations of ion-channel gating with Markov chains can be considered an accurate standard. However, such Markovian simulations are computationally demanding for membrane areas of physiologically relevant sizes, which makes the noise-approximating or Langevin equation methods advantageous in many cases. In this review, we discuss the Langevin-like approaches, including the channel-based and simplified subunit-based stochastic differential equations proposed by Fox and Lu, and the effective Langevin approaches in which colored noise is added to deterministic differential equations. In the framework of Fox and Lu's classical models, several variants of numerical algorithms, which have been recently developed to improve accuracy as well as efficiency, are also discussed. Through the comparison of different simulation algorithms of ion-channel noise with the standard Markovian simulation, we aim to reveal the extent to which the existing Langevin-like methods approximate results using Markovian methods. Open questions for future studies are also discussed.
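    As a minimal, self-contained example of the Langevin-like approach reviewed here, the Euler-Maruyama sketch below simulates the open fraction x of N identical two-state channels (illustrative rates; this is the generic chemical-Langevin form, not Fox and Lu's full Na+/K+ model):

        import numpy as np

        rng = np.random.default_rng(1)
        alpha, beta, N = 0.5, 0.2, 1000.0    # opening/closing rates (1/ms), channel count
        dt, steps = 0.01, 10000
        x = alpha / (alpha + beta)           # start at the deterministic fixed point

        trace = np.empty(steps)
        for k in range(steps):
            drift = alpha * (1.0 - x) - beta * x
            noise = np.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / N)
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            x = min(max(x, 0.0), 1.0)        # keep the open fraction in [0, 1]
            trace[k] = x

        print(trace.mean(), trace.std())     # to be compared with Markov-chain statistics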

  19. Accurate simulation of optical properties in dyes.

    PubMed

    Jacquemin, Denis; Perpète, Eric A; Ciofini, Ilaria; Adamo, Carlo

    2009-02-17

    Since Antiquity, humans have produced and commercialized dyes. To this day, extraction of natural dyes often requires lengthy and costly procedures. In the 19th century, global markets and new industrial products drove a significant effort to synthesize artificial dyes, characterized by low production costs, huge quantities, and new optical properties (colors). Dyes that encompass classes of molecules absorbing in the UV-visible part of the electromagnetic spectrum now have a wider range of applications, including coloring (textiles, food, paintings), energy production (photovoltaic cells, OLEDs), or pharmaceuticals (diagnostics, drugs). Parallel to the growth in dye applications, researchers have increased their efforts to design and synthesize new dyes to customize absorption and emission properties. In particular, dyes containing one or more metallic centers allow for the construction of fairly sophisticated systems capable of selectively reacting to light of a given wavelength and behaving as molecular devices (photochemical molecular devices, PMDs). Theoretical tools able to predict and interpret the excited-state properties of organic and inorganic dyes allow for an efficient screening of photochemical centers. In this Account, we report recent developments defining a quantitative ab initio protocol (based on time-dependent density functional theory) for modeling dye spectral properties. In particular, we discuss the importance of several parameters, such as the methods used for electronic structure calculations, solvent effects, and statistical treatments. In addition, we illustrate the performance of such simulation tools through case studies. We also comment on current weak points of these methods and ways to improve them. PMID:19113946

  20. Computationally efficient multibody simulations

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Kumar, Manoj

    1994-01-01

    Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.

  1. Efficient and accurate computation of generalized singular-value decompositions

    NASA Astrophysics Data System (ADS)

    Drmac, Zlatko

    2001-11-01

    We present a new family of algorithms for accurate floating-point computation of the singular value decomposition (SVD) of various forms of products (quotients) of two or three matrices. The main goal of such an algorithm is to compute all singular values to high relative accuracy. This means that we are seeking a guaranteed number of accurate digits even in the smallest singular values. We also want to achieve computational efficiency, while maintaining high accuracy. To illustrate, consider the SVD of the product A = B^T S C. The new algorithm uses certain preconditioning (based on diagonal scalings, the LU and QR factorizations) to replace A with A' = (B')^T S' C', where A and A' have the same singular values and the matrix A' is computed explicitly. Theoretical analysis and numerical evidence show that, in the case of full rank B, C, S, the accuracy of the new algorithm is unaffected by replacing B, S, C with, respectively, D_1 B, D_2 S D_3, D_4 C, where D_i, i = 1,...,4, are arbitrary diagonal matrices. As an application, the paper proposes new accurate algorithms for computing the (H,K)-SVD and (H_1,K)-SVD of S.

  2. Computer Modeling and Simulation

    SciTech Connect

    Pronskikh, V. S.

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  3. Accelerator simulation using computers

    SciTech Connect

    Lee, M.; Zambre, Y.; Corbett, W.

    1992-01-01

    Every accelerator or storage ring system consists of a charged particle beam propagating through a beam line. Although a number of computer programs exist that simulate the propagation of a beam in a given beam line, only a few provide the capabilities for designing, commissioning and operating the beam line. This paper shows how a "multi-track" simulation and analysis code can be used for these applications.

  4. Accelerator simulation using computers

    SciTech Connect

    Lee, M.; Zambre, Y.; Corbett, W.

    1992-01-01

    Every accelerator or storage ring system consists of a charged particle beam propagating through a beam line. Although a number of computer programs exist that simulate the propagation of a beam in a given beam line, only a few provide the capabilities for designing, commissioning and operating the beam line. This paper shows how a "multi-track" simulation and analysis code can be used for these applications.

  5. Computer-simulated phacoemulsification

    NASA Astrophysics Data System (ADS)

    Laurell, Carl-Gustaf; Nordh, Leif; Skarman, Eva; Andersson, Mats; Nordqvist, Per

    2001-06-01

    Phacoemulsification makes the cataract operation easier for the patient but involves a demanding technique for the surgeon. It is therefore important to increase the quality of surgical training in order to shorten the learning period for the beginner. This should diminish the risks of the patient. We are developing a computer-based simulator for training of phacoemulsification. The simulator is built on a platform that can be used as a basis for several different training simulators. A prototype has been made that has been partly tested by experienced surgeons.

  6. Probabilistic Fatigue: Computational Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2002-01-01

    Fatigue is a primary consideration in the design of aerospace structures for long term durability and reliability. There are several types of fatigue that must be considered in the design, including low-cycle, high-cycle, and combined fatigue for different cyclic loading conditions - for example, mechanical, thermal, erosion, etc. The traditional approach to evaluating fatigue has been to conduct many tests in the various service-environment conditions that the component will be subjected to in a specific design. This approach is reasonable and robust for that specific design. However, it is time consuming, costly, and in general needs to be repeated for designs in different operating conditions. Recent research has demonstrated that fatigue of structural components/structures can be evaluated by computational simulation based on a novel paradigm. Main features of this novel paradigm are progressive telescoping scale mechanics, progressive scale substructuring and progressive structural fracture, encompassed with probabilistic simulation. The generic features of this approach are to probabilistically telescope scale local material point damage all the way up to the structural component and to probabilistically scale decompose structural loads and boundary conditions all the way down to the material point. Additional features include a multifactor interaction model that probabilistically describes material properties evolution and any changes due to various cyclic loads and other mutually interacting effects. The objective of the proposed paper is to describe this novel paradigm of computational simulation and present typical fatigue results for structural components. Additionally, advantages, versatility and inclusiveness of computational simulation versus testing are discussed. Guidelines for complementing simulated results with strategic testing are outlined. Typical results are shown for computational simulation of fatigue in metallic composite structures to demonstrate the

  7. Computer simulation of earthquakes

    NASA Technical Reports Server (NTRS)

    Cohen, S. C.

    1977-01-01

    In a computer simulation study of earthquakes a seismically active strike slip fault is represented by coupled mechanical blocks which are driven by a moving plate and which slide on a friction surface. Elastic forces and time independent friction are used to generate main shock events, while viscoelastic forces and time dependent friction add aftershock features. The study reveals that the size, length, and time and place of event occurrence are strongly influenced by the magnitude and degree of homogeneity in the elastic, viscous, and friction parameters of the fault region. For example, periodically reoccurring similar events are observed in simulations with near-homogeneous parameters along the fault, whereas seismic gaps are a common feature of simulations employing large variations in the fault parameters. The study also reveals correlations between strain energy release and fault length and average displacement and between main shock and aftershock displacements.
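    Models of this kind are straightforward to prototype. The sketch below (Python) is a generic quasi-static spring-block fault in the spirit described: blocks coupled to their neighbours and to a plate moving at speed v slip when the force on them exceeds a heterogeneous static-friction threshold (all parameters illustrative, not the paper's):

        import numpy as np

        n, kc, kp, v, dt = 50, 1.0, 0.1, 1.0e-3, 1.0
        rng = np.random.default_rng(2)
        x = np.zeros(n)                           # block positions
        thresh = rng.uniform(1.0, 2.0, n)         # heterogeneous friction thresholds

        events = []
        for step in range(1, 20001):
            t = step * dt
            left, right = np.roll(x, 1), np.roll(x, -1)
            left[0], right[-1] = x[0], x[-1]      # free ends
            force = kc * (left - 2 * x + right) + kp * (v * t - x)
            slipping = force > thresh
            if slipping.any():
                # Slipped blocks relax toward local force balance.
                x[slipping] += force[slipping] / (2 * kc + kp)
                events.append((t, int(slipping.sum())))   # event time and size

    Homogeneous versus strongly varying thresh values reproduce the qualitative contrast the abstract describes between periodic events and seismic gaps.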

  8. Neutron supermirrors: an accurate theory for layer thickness computation

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2001-11-01

    We present a new theory for the computation of Super-Mirror stacks, using accurate formulas derived from the classical optics field. Approximations are introduced into the computation, but at a later stage than existing theories, providing a more rigorous treatment of the problem. The final result is a continuous thickness stack, whose properties can be determined at the outset of the design. We find that the well-known fourth power dependence of number of layers versus maximum angle is (of course) asymptotically correct. We find a formula giving directly the relation between desired reflectance, maximum angle, and number of layers (for a given pair of materials). Note: The author of this article, a classical opticist, has limited knowledge of the Neutron world, and begs forgiveness for any shortcomings, erroneous assumptions and/or misinterpretation of previous authors' work on the subject.

  9. Accurate Computation of Survival Statistics in Genome-Wide Studies

    PubMed Central

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J.; Upfal, Eli

    2015-01-01

    A key challenge in genomics is to identify genetic variants that distinguish patients with different survival time following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for any size populations. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known association to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations. PMID:25950620
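    For contrast with the exact approach, the asymptotic log-rank computation that the authors show to be unreliable for unbalanced populations looks like this (Python; hypothetical survival data):

        import numpy as np
        from scipy import stats

        def logrank(times, events, group):
            obs1 = exp1 = var = 0.0
            for t in np.unique(times[events]):
                at_risk = times >= t
                n, n1 = at_risk.sum(), (at_risk & group).sum()
                d = ((times == t) & events).sum()       # deaths at time t
                d1 = ((times == t) & events & group).sum()
                obs1 += d1                              # observed deaths, group 1
                exp1 += d * n1 / n                      # expected under the null
                if n > 1:
                    var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
            chi2 = (obs1 - exp1) ** 2 / var
            return chi2, stats.chi2.sf(chi2, df=1)      # asymptotic p-value

        rng = np.random.default_rng(3)
        times = rng.exponential(10.0, 20)
        events = np.ones(20, dtype=bool)
        group = np.zeros(20, dtype=bool); group[:3] = True   # tiny mutated group
        print(logrank(times, events, group))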

  10. Computer simulation of earthquakes

    NASA Technical Reports Server (NTRS)

    Cohen, S. C.

    1976-01-01

    Two computer simulation models of earthquakes were studied for the dependence of the pattern of events on the model assumptions and input parameters. Both models represent the seismically active region by mechanical blocks which are connected to one another and to a driving plate. The blocks slide on a friction surface. In the first model elastic forces were employed and time independent friction to simulate main shock events. The size, length, and time and place of event occurrence were influenced strongly by the magnitude and degree of homogeniety in the elastic and friction parameters of the fault region. Periodically reoccurring similar events were frequently observed in simulations with near homogeneous parameters along the fault, whereas, seismic gaps were a common feature of simulations employing large variations in the fault parameters. The second model incorporated viscoelastic forces and time-dependent friction to account for aftershock sequences. The periods between aftershock events increased with time and the aftershock region was confined to that which moved in the main event.

  11. Time Accurate Unsteady Pressure Loads Simulated for the Space Launch System at a Wind Tunnel Condition

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L.; Glass, Christopher E.; Schuster, David M.

    2015-01-01

    Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached-eddy simulation combined with Reynolds-averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched those on a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location for peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods such as determining minimized computed errors based on CFL number and sub-iterations, as well as evaluating frequency content of the unsteady pressures and evaluation of oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled the development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.

  12. Direct computation of parameters for accurate polarizable force fields

    SciTech Connect

    Verstraelen, Toon Vandenbrande, Steven; Ayers, Paul W.

    2014-11-21

    We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (next to KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.
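    For orientation, ACKS2 generalizes linear-response models of the fluctuating-charge type, in which an energy E(q) = chi·q + (1/2) q·J·q is minimized subject to total-charge conservation, yielding one bordered linear system. A minimal sketch (Python; chi and J are hypothetical values, and real force fields use geometry-dependent Coulomb interactions rather than fixed matrices):

        import numpy as np

        chi = np.array([0.3, -0.1, -0.2])        # atomic electronegativity parameters
        J = np.array([[1.0, 0.2, 0.1],
                      [0.2, 0.9, 0.2],
                      [0.1, 0.2, 1.1]])           # hardness/Coulomb matrix
        Q_total = 0.0

        n = chi.size
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = J
        A[:n, n] = A[n, :n] = 1.0                 # Lagrange multiplier row/column
        b = np.concatenate([-chi, [Q_total]])     # stationarity + charge constraint

        sol = np.linalg.solve(A, b)
        q = sol[:n]                               # equilibrated atomic charges
        print(q, q.sum())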

  13. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  14. Intelligence Assessment with Computer Simulations

    ERIC Educational Resources Information Center

    Kroner, S.; Plass, J.L.; Leutner, D.

    2005-01-01

    It has been suggested that computer simulations may be used for intelligence assessment. This study investigates what relationships exist between intelligence and computer-simulated tasks that mimic real-world problem-solving behavior, and discusses design requirements that simulations have to meet in order to be suitable for intelligence…

  15. Massively Parallel Processing for Fast and Accurate Stamping Simulations

    NASA Astrophysics Data System (ADS)

    Gress, Jeffrey J.; Xu, Siguang; Joshi, Ramesh; Wang, Chuan-tao; Paul, Sabu

    2005-08-01

    The competitive automotive market drives automotive manufacturers to speed up the vehicle development cycles and reduce the lead-time. Fast tooling development is one of the key areas to support fast and short vehicle development programs (VDP). In the past ten years, the stamping simulation has become the most effective validation tool in predicting and resolving all potential formability and quality problems before the dies are physically made. The stamping simulation and formability analysis has become an critical business segment in GM math-based die engineering process. As the simulation becomes as one of the major production tools in engineering factory, the simulation speed and accuracy are the two of the most important measures for stamping simulation technology. The speed and time-in-system of forming analysis becomes an even more critical to support the fast VDP and tooling readiness. Since 1997, General Motors Die Center has been working jointly with our software vendor to develop and implement a parallel version of simulation software for mass production analysis applications. By 2001, this technology was matured in the form of distributed memory processing (DMP) of draw die simulations in a networked distributed memory computing environment. In 2004, this technology was refined to massively parallel processing (MPP) and extended to line die forming analysis (draw, trim, flange, and associated spring-back) running on a dedicated computing environment. The evolution of this technology and the insight gained through the implementation of DM0P/MPP technology as well as performance benchmarks are discussed in this publication.

  16. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  17. Exploring accurate Poisson–Boltzmann methods for biomolecular simulations

    PubMed Central

    Wang, Changhao; Wang, Jun; Cai, Qin; Li, Zhilin; Zhao, Hong-Kai; Luo, Ray

    2013-01-01

    Accurate and efficient treatment of electrostatics is a crucial step in computational analyses of biomolecular structures and dynamics. In this study, we have explored a second-order finite-difference numerical method to solve the widely used Poisson–Boltzmann equation for electrostatic analyses of realistic biomolecules. The so-called immersed interface method was first validated and found to be consistent with the classical weighted harmonic averaging method for a diversified set of test biomolecules. The numerical accuracy and convergence behaviors of the new method were next analyzed in its computation of numerical reaction field grid potentials, energies, and atomic solvation forces. Overall similar convergence behaviors were observed as those by the classical method. Interestingly, the new method was found to deliver more accurate and better-converged grid potentials than the classical method on or nearby the molecular surface, though the numerical advantage of the new method is reduced when grid potentials are extrapolated to the molecular surface. Our exploratory study indicates the need for further improving interpolation/extrapolation schemes in addition to the developments of higher-order numerical methods that have attracted most attention in the field. PMID:24443709

  18. A fast and accurate computational approach to protein ionization

    PubMed Central

    Spassov, Velin Z.; Yan, Lisa

    2008-01-01

    We report a very fast and accurate physics-based method to calculate pH-dependent electrostatic effects in protein molecules and to predict the pK values of individual sites of titration. In addition, a CHARMm-based algorithm is included to construct and refine the spatial coordinates of all hydrogen atoms at a given pH. The present method combines electrostatic energy calculations based on the Generalized Born approximation with an iterative mobile clustering approach to calculate the equilibria of proton binding to multiple titration sites in protein molecules. The use of the GBIM (Generalized Born with Implicit Membrane) CHARMm module makes it possible to model not only water-soluble proteins but membrane proteins as well. The method includes a novel algorithm for preliminary refinement of hydrogen coordinates. Another difference from existing approaches is that, instead of monopeptides, a set of relaxed pentapeptide structures are used as model compounds. Tests on a set of 24 proteins demonstrate the high accuracy of the method. On average, the RMSD between predicted and experimental pK values is close to 0.5 pK units on this data set, and the accuracy is achieved at very low computational cost. The pH-dependent assignment of hydrogen atoms also shows very good agreement with protonation states and hydrogen-bond network observed in neutron-diffraction structures. The method is implemented as a computational protocol in Accelrys Discovery Studio and provides a fast and easy way to study the effect of pH on many important mechanisms such as enzyme catalysis, ligand binding, protein–protein interactions, and protein stability. PMID:18714088
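    The single-site limit of the proton-binding equilibrium treated here reduces to the Henderson-Hasselbalch relation, sketched below (Python; the paper's method handles many electrostatically coupled sites, which this deliberately ignores, and the sign convention for the interaction energy is an assumption):

        def frac_protonated(pH, pKa):
            # Henderson-Hasselbalch: [HA] / ([HA] + [A-]) = 1 / (1 + 10^(pH - pKa))
            return 1.0 / (1.0 + 10.0 ** (pH - pKa))

        # Shift of a model-compound pKa by an electrostatic stabilization dG of the
        # protonated form (kcal/mol) at T = 300 K: dpKa = dG / (2.303 RT).
        RT = 0.001987 * 300.0
        def shifted_pKa(pKa_model, dG_elec):
            return pKa_model + dG_elec / (2.303 * RT)

        print(frac_protonated(7.0, shifted_pKa(6.5, 0.8)))   # hypothetical His site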

  19. Photoacoustic computed tomography without accurate ultrasonic transducer responses

    NASA Astrophysics Data System (ADS)

    Sheng, Qiwei; Wang, Kun; Xia, Jun; Zhu, Liren; Wang, Lihong V.; Anastasio, Mark A.

    2015-03-01

    Conventional photoacoustic computed tomography (PACT) image reconstruction methods assume that the object and surrounding medium are described by a constant speed-of-sound (SOS) value. In order to accurately recover fine structures, SOS heterogeneities should be quantified and compensated for during PACT reconstruction. To address this problem, several groups have proposed hybrid systems that combine PACT with ultrasound computed tomography (USCT). In such systems, a SOS map is reconstructed first via USCT. Consequently, this SOS map is employed to inform the PACT reconstruction method. Additionally, the SOS map can provide structural information regarding tissue, which is complementary to the functional information from the PACT image. We propose a paradigm shift in the way that images are reconstructed in hybrid PACT-USCT imaging. Inspired by our observation that information about the SOS distribution is encoded in PACT measurements, we propose to jointly reconstruct the absorbed optical energy density and SOS distributions from a combined set of USCT and PACT measurements, thereby reducing the two reconstruction problems into one. This innovative approach has several advantages over conventional approaches in which PACT and USCT images are reconstructed independently: (1) Variations in the SOS will automatically be accounted for, optimizing PACT image quality; (2) The reconstructed PACT and USCT images will possess minimal systematic artifacts because errors in the imaging models will be optimally balanced during the joint reconstruction; (3) Due to the exploitation of information regarding the SOS distribution in the full-view PACT data, our approach will permit high-resolution reconstruction of the SOS distribution from sparse array data.

  20. D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data.

    PubMed

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well. PMID:26930054

  2. An accurate and computationally efficient model for membrane-type circular-symmetric micro-hotplates.

    PubMed

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation-tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hot-plates which takes advantage of modified Bessel functions, computationally efficient matrix-approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated to the undesired heating in the electrical contacts, are small (e.g., few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
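
    For intuition about why modified Bessel functions appear: the steady radial heat balance of a thin circular membrane with linearized surface losses reduces to a modified Bessel equation, whose solutions are combinations of I0 and K0. A minimal sketch with assumed parameters (the paper's matrix approach couples several such regions through boundary conditions):

        import numpy as np
        from scipy.special import i0, k0

        # theta(r) = A*I0(r/L) + B*K0(r/L), with theta = T - T_ambient,
        # valid in one annular region of the membrane.
        L_char = 1e-4                     # characteristic thermal length [m] (assumed)
        A, B = 50.0, 0.0                  # fixed by boundary conditions (assumed)
        r = np.linspace(1e-6, 5e-4, 200)  # radial coordinate [m]
        theta = A * i0(r / L_char) + B * k0(r / L_char)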

  4. How Accurate Are Transition States from Simulations of Enzymatic Reactions?

    PubMed Central

    2015-01-01

    The rate expression of traditional transition state theory (TST) assumes no recrossing of the transition state (TS) and thermal quasi-equilibrium between the ground state and the TS. Currently, it is not well understood to what extent these assumptions influence the nature of the activated complex obtained in traditional TST-based simulations of processes in the condensed phase in general and in enzymes in particular. Here we scrutinize these assumptions by characterizing the TSs for hydride transfer catalyzed by the enzyme Escherichia coli dihydrofolate reductase obtained using various simulation approaches. Specifically, we compare the TSs obtained with common TST-based methods and a dynamics-based method. Using a recently developed accurate hybrid quantum mechanics/molecular mechanics potential, we find that the TST-based and dynamics-based methods give considerably different TS ensembles. This discrepancy, which could be due to equilibrium solvation effects and to the nature of the reaction coordinate employed and its motion, raises major questions about how to interpret the TSs determined by common simulation methods. We conclude that further investigation is needed to characterize the impact of various TST assumptions on the TS phase-space ensemble and on the reaction kinetics. PMID:24860275

  5. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly being adopted in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of an MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field, as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, with both proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically relevant cases will be presented in terms of both absorbed dose and biological dose calculations, describing the various available features. PMID:27242956

  7. Anisotropic Turbulence Modeling for Accurate Rod Bundle Simulations

    SciTech Connect

    Baglietto, Emilio

    2006-07-01

    An improved anisotropic eddy viscosity model has been developed for accurate prediction of the thermal-hydraulic performance of nuclear reactor fuel assemblies. The proposed model adopts a non-linear formulation of the stress-strain relationship in order to reproduce anisotropic phenomena, combined with an optimized low-Reynolds-number formulation, calibrated against Direct Numerical Simulation (DNS) data, that produces correct damping of the turbulent viscosity in the near-wall region. This work underlines the importance of accurate anisotropic modeling to faithfully reproduce the scale of the turbulence-driven secondary flows inside the bundle subchannels, by comparison with various isothermal and heated experimental cases. The very small-scale secondary motion is responsible for the increased turbulence transport, which produces a noticeable homogenization of the velocity distribution and consequently of the circumferential cladding temperature distribution, which is of main interest in bundle design. Various fully developed bare bundle test cases are shown for different geometrical and flow conditions, where the proposed model shows clearly improved predictions, in close agreement with experimental findings, for regular as well as distorted geometries. Finally, the applicability of the model for practical bundle calculations is evaluated through its application in the high-Reynolds form on coarse grids, with excellent results. (author)

  8. CgWind: A high-order accurate simulation tool for wind turbines and wind farms

    SciTech Connect

    Chand, K K; Henshaw, W D; Lundquist, K A; Singer, M A

    2010-02-22

    CgWind is a high-fidelity large eddy simulation (LES) tool designed to meet the modeling needs of wind turbine and wind park engineers. This tool combines several advanced computational technologies in order to model accurately the complex and dynamic nature of wind energy applications. The composite grid approach provides high-quality structured grids for the efficient implementation of high-order accurate discretizations of the incompressible Navier-Stokes equations. Composite grids also provide a natural mechanism for modeling bodies in relative motion and complex geometry. Advanced algorithms such as matrix-free multigrid, compact discretizations and approximate factorization will allow CgWind to perform highly resolved calculations efficiently on a wide class of computing resources. Also in development are nonlinear LES subgrid-scale models required to simulate the many interacting scales present in large wind turbine applications. This paper outlines our approach, the current status of CgWind and future development plans.

  9. Massively parallel quantum computer simulator

    NASA Astrophysics Data System (ADS)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700, and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
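
    The memory figures quoted above follow directly from state-vector simulation: an n-qubit register requires 2^n complex amplitudes, so 36 qubits at 16 bytes per amplitude is about 1 TB. A minimal single-node sketch of the gate-application kernel that such simulators distribute across processors (illustrative, not the paper's code):

        import numpy as np

        def apply_single_qubit_gate(state, gate, target, n_qubits):
            # Reshape the state vector so the target qubit is its own axis,
            # contract with the 2x2 gate, and restore the original layout.
            state = state.reshape([2] * n_qubits)
            state = np.moveaxis(state, target, 0)
            state = np.tensordot(gate, state, axes=([1], [0]))
            state = np.moveaxis(state, 0, target)
            return state.reshape(-1)

        n = 20                                   # 2^20 amplitudes ~ 16 MB as complex128
        psi = np.zeros(2**n, dtype=complex)
        psi[0] = 1.0                             # |00...0>
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        psi = apply_single_qubit_gate(psi, H, target=0, n_qubits=n)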

  10. Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues.

    PubMed

    Zhang, Hao; Zhao, Yan; Cao, Liangcai; Jin, Guofan

    2015-02-23

    We propose an algorithm based on fully computed holographic stereogram for calculating full-parallax computer-generated holograms (CGHs) with accurate depth cues. The proposed method integrates the point source algorithm and the holographic stereogram based algorithm to reconstruct the three-dimensional (3D) scenes. Precise accommodation cues and occlusion effects can be created, and computer graphics rendering techniques can be employed in the CGH generation to enhance the image fidelity. Optical experiments have been performed using a spatial light modulator (SLM) and a fabricated high-resolution hologram; the results show that our proposed algorithm can produce high-quality reconstructions of 3D scenes with arbitrary depth information. PMID:25836429
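
    A minimal sketch of the point-source ingredient of such CGH algorithms: each scene point contributes a spherical wave exp(ikr)/r on the hologram plane. The wavelength, pixel pitch, and scene points below are illustrative; occlusion handling and the holographic-stereogram decomposition of the paper are omitted.

        import numpy as np

        wavelength = 532e-9                  # green laser (assumed)
        k = 2 * np.pi / wavelength
        pitch, N = 8e-6, 512                 # SLM pixel pitch and grid size (assumed)
        x = (np.arange(N) - N / 2) * pitch
        X, Y = np.meshgrid(x, x)

        # Object points as (x, y, z, amplitude); z is the distance to the hologram plane.
        points = [(0.0, 0.0, 0.10, 1.0), (1e-3, -1e-3, 0.12, 0.8)]
        H = np.zeros((N, N), dtype=complex)
        for px, py, pz, amp in points:
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            H += amp / r * np.exp(1j * k * r)
        phase_hologram = np.angle(H)         # phase-only encoding for an SLM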

  11. Symphony: a framework for accurate and holistic WSN simulation.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2015-01-01

    Research on wireless sensor networks has progressed rapidly over the last decade, and these technologies have been widely adopted for both industrial and domestic uses. Several operating systems have been developed, along with a multitude of network protocols for all layers of the communication stack. Industrial Wireless Sensor Network (WSN) systems must satisfy strict criteria and are typically more complex and larger in scale than domestic systems. Together with the non-deterministic behavior of network hardware in real settings, this greatly complicates the debugging and testing of WSN functionality. To facilitate the testing, validation, and debugging of large-scale WSN systems, we have developed a simulation framework that accurately reproduces the processes that occur inside real equipment, including both hardware- and software-induced delays. The core of the framework consists of a virtualized operating system and an emulated hardware platform that is integrated with the general purpose network simulator ns-3. Our framework enables the user to adjust the real code base as would be done in real deployments and also to test the boundary effects of different hardware components on the performance of distributed applications and protocols. Additionally we have developed a clock emulator with several different skew models and a component that handles sensory data feeds. The new framework should substantially shorten WSN application development cycles. PMID:25723144

  13. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    NASA Astrophysics Data System (ADS)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in a rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture, and orientation. Fracture roughness and aperture were measured with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The laser wavelength is 488 nm, and the scanning is managed by a light-polarization method using two galvanometer scanner mirrors. The confocal optics improve resolution along the optical (z) axis. Sampling is performed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse- and fine-grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of the roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of the roughness. The FFT results showed that low-frequency components were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired while applying five stages of uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. The measurements show that the reduction in aperture differs from place to place owing to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes in hydraulic conductivity related to aperture variation at different stress levels. The results showed non-uniform reduction of hydraulic conductivity with increasing normal stress and different values of
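
    The FFT-based spectral analysis described above can be sketched as follows; the 2.5 μm spacing matches the CLSM sampling, while the random-walk profile is only a stand-in for measured data:

        import numpy as np

        dx = 2.5e-6                                          # CLSM sampling interval [m]
        rng = np.random.default_rng(1)
        profile = np.cumsum(rng.normal(0.0, 0.05e-6, 4096))  # placeholder for measured z(x)

        z = profile - profile.mean()
        power = np.abs(np.fft.rfft(z)) ** 2                  # power spectrum of the profile
        freqs = np.fft.rfftfreq(z.size, d=dx)                # spatial frequencies [1/m]
        # Dominant low-frequency (long-wavelength) roughness appears as power
        # concentrated near freqs ~ 0, as reported above.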

  14. Recommendations for accurate numerical blood flow simulations of stented intracranial aneurysms.

    PubMed

    Janiga, Gábor; Berg, Philipp; Beuing, Oliver; Neugebauer, Mathias; Gasteiger, Rocco; Preim, Bernhard; Rose, Georg; Skalej, Martin; Thévenin, Dominique

    2013-06-01

    The number of scientific publications dealing with stented intracranial aneurysms is rapidly increasing. Powerful computational facilities are now available; an accurate computational modeling of hemodynamics in patient-specific configurations is, however, still being sought. Furthermore, there is still no general agreement on the quantities that should be computed and on the most adequate analysis for intervention support. In this article, the accurate representation of patient geometry is first discussed, involving successive improvements. Concerning the second step, the mesh required for the numerical simulation is especially challenging when deploying a stent with very fine wire structures. Third, the description of the fluid properties is a major challenge. Finally, a founded quantitative analysis of the simulation results is obviously needed to support interventional decisions. In the present work, an attempt has been made to review the most important steps for a high-quality computational fluid dynamics computation of virtually stented intracranial aneurysms. In consequence, this leads to concrete recommendations, whereby the obtained results are not discussed for their medical relevance but for the evaluation of their quality. This investigation might hopefully be helpful for further studies considering stent deployment in patient-specific geometries, in particular regarding the generation of the most appropriate computational model. PMID:23729530

  15. Computer simulation of space charge

    NASA Astrophysics Data System (ADS)

    Yu, K. W.; Chung, W. K.; Mak, S. S.

    1991-05-01

    Using the particle-mesh (PM) method, a one-dimensional simulation of the well-known Langmuir-Child law is performed on an INTEL 80386-based personal computer system. The program is coded in Turbo Basic (trademark of Borland International, Inc.). The numerical results obtained are in excellent agreement with theoretical predictions, and the computational time required is quite modest. This simulation exercise demonstrates that simple particle simulations can be implemented successfully on the PCs available today, which will hopefully provide an incentive for newcomers to the field who wish to acquire a flavor of the elementary aspects of the practice.
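
    The core of any particle-mesh step is depositing the particle charge onto the grid before solving for the field. A minimal cloud-in-cell (CIC) deposition sketch (the original program was written in Turbo Basic; this Python version and all parameters are illustrative):

        import numpy as np

        n_grid, length = 64, 1.0
        dx = length / n_grid
        rng = np.random.default_rng(2)
        x_p = rng.uniform(0.0, length, 10_000)             # particle positions
        q = 1.0 / x_p.size                                 # normalized charge per particle

        rho = np.zeros(n_grid)
        idx = (x_p / dx).astype(int) % n_grid
        frac = x_p / dx - idx                              # fractional position in the cell
        np.add.at(rho, idx, q * (1 - frac) / dx)           # share charge with left node
        np.add.at(rho, (idx + 1) % n_grid, q * frac / dx)  # and with right node
        # Next steps: solve Poisson's equation for the potential on the grid,
        # then interpolate the field back to the particles and push them.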

  16. Accurate direct Eulerian simulation of dynamic elastic-plastic flow

    SciTech Connect

    Kamm, James R; Walter, John W

    2009-01-01

    The simulation of dynamic, large strain deformation is an important, difficult, and unsolved computational challenge. Existing Eulerian schemes for dynamic material response are plagued by unresolved issues. We present a new scheme for the first-order system of elasto-plasticity equations in the Eulerian frame. This system has an intrinsic constraint on the inverse deformation gradient. Standard Godunov schemes do not satisfy this constraint. The method of Flux Distributions (FD) was devised to discretely enforce such constraints for numerical schemes with cell-centered variables. We describe a Flux Distribution approach that enforces the inverse deformation gradient constraint. As this approach is new, we do not yet have numerical results to validate our claims. This paper is the first installment of our program to develop this new method.

  17. Toward the Accurate Simulation of Two-Dimensional Electronic Spectra

    NASA Astrophysics Data System (ADS)

    Giussani, Angelo; Nenov, Artur; Segarra-Martí, Javier; Jaiswal, Vishal K.; Rivalta, Ivan; Dumont, Elise; Mukamel, Shaul; Garavelli, Marco

    2015-06-01

    Two-dimensional pump-probe electronic spectroscopy is a powerful technique able to provide both high spectral and temporal resolution, allowing the analysis of ultrafast complex reactions occurring via complementary pathways by the identification of decay-specific fingerprints. [1-2] The understanding of the origin of the experimentally recorded signals in a two-dimensional electronic spectrum requires the characterization of the electronic states involved in the electronic transitions photoinduced by the pump/probe pulses in the experiment. Such a goal constitutes a considerable computational challenge, since up to 100 states need to be described, for which state-of-the-art methods such as RASSCF and RASPT2 have to be employed wisely. [3] With the present contribution, the main features and potentialities of two-dimensional electronic spectroscopy are presented, together with the machinery in continuous development in our groups in order to compute two-dimensional electronic spectra. The results obtained using different levels of theory and simulation are shown, bringing as examples the computed two-dimensional electronic spectra for some specific cases studied. [2-4] [1] Rivalta I, Nenov A, Cerullo G, Mukamel S, Garavelli M, Int. J. Quantum Chem., 2014, 114, 85 [2] Nenov A, Segarra-Martí J, Giussani A, Conti I, Rivalta I, Dumont E, Jaiswal V K, Altavilla S, Mukamel S, Garavelli M, Faraday Discuss. 2015, DOI: 10.1039/C4FD00175C [3] Nenov A, Giussani A, Segarra-Martí J, Jaiswal V K, Rivalta I, Cerullo G, Mukamel S, Garavelli M, J. Chem. Phys. submitted [4] Nenov A, Giussani A, Fingerhut B P, Rivalta I, Dumont E, Mukamel S, Garavelli M, Phys. Chem. Chem. Phys. submitted [5] Krebs N, Pugliesi I, Hauer J, Riedle E, New J. Phys., 2013, 15, 08501

  18. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.

    2002-01-01

    Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat-cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.

  19. Computer Simulation of Mutagenesis.

    ERIC Educational Resources Information Center

    North, J. C.; Dent, M. T.

    1978-01-01

    A FORTRAN program is described which simulates point-substitution mutations in the DNA strands of typical organisms. Its objective is to help students to understand the significance and structure of the genetic code, and the mechanisms and effect of mutagenesis. (Author/BB)

  20. Liquid propellant rocket engine combustion simulation with a time-accurate CFD method

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Shang, H. M.; Liaw, Paul; Hutt, J.

    1993-01-01

    Time-accurate computational fluid dynamics (CFD) algorithms are among the basic requirements for an engineering or research tool aimed at realistic simulations of transient combustion phenomena, such as combustion instability, transient start-up, etc., inside the rocket engine combustion chamber. A time-accurate pressure-based method is employed in the FDNS code for combustion model development. This is in connection with other program development activities such as spray combustion model development and efficient finite-rate chemistry solution method implementation. In the present study, a second-order time-accurate time-marching scheme is employed. For better spatial resolution near discontinuities (e.g., shocks, contact discontinuities), a 3rd-order accurate TVD scheme for modeling the convection terms is implemented in the FDNS code. Necessary modifications to the predictor/multi-corrector solution algorithm in order to maintain time-accurate wave propagation are also investigated. Benchmark 1-D and multidimensional test cases, which include the classical shock tube wave propagation problems, a resonant pipe test case, unsteady flow development of a blast tube test case, and an H2/O2 rocket engine chamber combustion start-up transient simulation, are investigated to validate and demonstrate the accuracy and robustness of the present numerical scheme and solution algorithm.
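
    To illustrate the TVD idea, below is a minimal second-order MUSCL/minmod scheme for linear advection; the FDNS code uses a 3rd-order TVD convection scheme, so this sketch shows the concept rather than the actual implementation:

        import numpy as np

        def minmod(a, b):
            # Slope limiter: zero at extrema, smallest slope elsewhere (TVD property)
            return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        def tvd_advect(u, c):
            # One explicit step of periodic linear advection at CFL number c (0 < c <= 1)
            s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited slopes
            u_face = u + 0.5 * (1.0 - c) * s                   # reconstructed face states
            flux = c * u_face
            return u - (flux - np.roll(flux, 1))

        x = np.linspace(0.0, 1.0, 200)
        u = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)          # square pulse
        for _ in range(100):
            u = tvd_advect(u, 0.5)                             # advects without spurious oscillations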

  1. Cluster computing software for GATE simulations

    SciTech Connect

    Beenhouwer, Jan de; Staelens, Steven; Kruecker, Dirk; Ferrer, Ludovic; D'Asseler, Yves; Lemahieu, Ignace; Rannou, Fernando R.

    2007-06-15

    Geometry and tracking (GEANT4) is a Monte Carlo package designed for high energy physics experiments. It is used as the basis layer for Monte Carlo simulations of nuclear medicine acquisition systems in GEANT4 Application for Tomographic Emission (GATE). GATE allows the user to realistically model experiments using accurate physics models and time synchronization for detector movement through a script language contained in a macro file. The downside of this high accuracy is long computation time. This paper describes a platform independent computing approach for running GATE simulations on a cluster of computers in order to reduce the overall simulation time. Our software automatically creates fully resolved, nonparametrized macros accompanied with an on-the-fly generated cluster specific submit file used to launch the simulations. The scalability of GATE simulations on a cluster is investigated for two imaging modalities, positron emission tomography (PET) and single photon emission computed tomography (SPECT). Due to a higher sensitivity, PET simulations are characterized by relatively high data output rates that create rather large output files. SPECT simulations, on the other hand, have lower data output rates but require a long collimator setup time. Both of these characteristics hamper scalability as a function of the number of CPUs. The scalability of PET simulations is improved here by the development of a fast output merger. The scalability of SPECT simulations is improved by greatly reducing the collimator setup time. Accordingly, these two new developments result in higher scalability for both PET and SPECT simulations and reduce the computation time to more practical values.
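
    The macro-splitting idea can be sketched as follows: one long run becomes many independent jobs with distinct random seeds and an equal share of the primaries, and the outputs are merged afterwards. The macro command names below are indicative of GATE's macro language but should be checked against the GATE version in use; the file naming is hypothetical.

        import pathlib

        total_primaries, n_jobs = 10_000_000, 40
        per_job = total_primaries // n_jobs

        for job in range(n_jobs):
            macro = (
                f"/gate/random/setEngineSeed {1000 + job}\n"   # distinct RNG stream per job
                f"/gate/application/setTotalNumberOfPrimaries {per_job}\n"
            )
            pathlib.Path(f"job_{job:03d}.mac").write_text(macro)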

  3. Accurate computation of Stokes flow driven by an open immersed interface

    NASA Astrophysics Data System (ADS)

    Li, Yi; Layton, Anita T.

    2012-06-01

    We present numerical methods for computing two-dimensional Stokes flow driven by forces singularly supported along an open, immersed interface. Two second-order accurate methods are developed: one for accurately evaluating boundary integral solutions at a point, and another for computing Stokes solution values on a rectangular mesh. We first describe a method for computing singular or nearly singular integrals, such as a double layer potential due to sources on a curve in the plane, evaluated at a point on or near the curve. To improve accuracy of the numerical quadrature, we add corrections for the errors arising from discretization, which are found by asymptotic analysis. When used to solve the Stokes equations with sources on an open, immersed interface, the method generates second-order approximations, for both the pressure and the velocity, and preserves the jumps in the solutions and their derivatives across the boundary. We then combine the method with a mesh-based solver to yield a hybrid method for computing Stokes solutions at N² grid points on a rectangular grid. Numerical results are presented which exhibit second-order accuracy. To demonstrate the applicability of the method, we use the method to simulate fluid dynamics induced by the beating motion of a cilium. The method preserves the sharp jumps in the Stokes solution and their derivatives across the immersed boundary. Model results illustrate the distinct hydrodynamic effects generated by the effective stroke and by the recovery stroke of the ciliary beat cycle.

  4. Computer Simulation and Library Management.

    ERIC Educational Resources Information Center

    Main, Linda

    1992-01-01

    Reviews the literature on computer simulation modeling for library management and examines whether simulation is underutilized because the models are too complex and mathematical. Other problems with implementation are considered, including incomplete problem definition, lack of a conceptual framework, system constraints, lack of interaction…

  5. Plasma physics via computer simulation

    SciTech Connect

    Birdsall, C.K.; Langdon, A.B.

    1985-01-01

    This book describes the computerized simulation of plasma kinetics. Topics considered include why attempting to do plasma physics via computer simulation using particles makes good physical sense; overall view of a one-dimensional electrostatic program; a one-dimensional electrostatic program; introduction to the numerical methods used; a 1d electromagnetic program; projects for EM1; effects of the spatial grid; effects of the finite time step; energy-conserving simulation models; multipole models; kinetic theory for fluctuations and noise; collisions; statistical mechanics of a sheet plasma; electrostatic programs in two and three dimensions; electromagnetic programs in 2D and 3D; design of computer experiments; and the choice of parameters.

  6. High-performance computing and networking as tools for accurate emission computed tomography reconstruction.

    PubMed

    Passeri, A; Formiconi, A R; De Cristofaro, M T; Pupi, A; Meldolesi, U

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64x64) slices could be reconstructed from a set of 90 (64x64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation of effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. PMID:9096089

  7. Direct Simulations of Transition and Turbulence Using High-Order Accurate Finite-Difference Schemes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    1997-01-01

    In recent years the techniques of computational fluid dynamics (CFD) have been used to compute flows associated with geometrically complex configurations. However, success in terms of accuracy and reliability has been limited to cases where the effects of turbulence and transition could be modeled in a straightforward manner. Even in simple flows, the accurate computation of skin friction and heat transfer using existing turbulence models has proved to be a difficult task, one that has required extensive fine-tuning of the turbulence models used. In more complex flows (for example, in turbomachinery flows in which vortices and wakes impinge on airfoil surfaces causing periodic transitions from laminar to turbulent flow) the development of a model that accounts for all scales of turbulence and predicts the onset of transition may prove to be impractical. Fortunately, current trends in computing suggest that it may be possible to perform direct simulations of turbulence and transition at moderate Reynolds numbers in some complex cases in the near future. This seminar will focus on direct simulations of transition and turbulence using high-order accurate finite-difference methods. The advantage of the finite-difference approach over spectral methods is that complex geometries can be treated in a straightforward manner. Additionally, finite-difference techniques are the prevailing methods in existing application codes. In this seminar high-order-accurate finite-difference methods for the compressible and incompressible formulations of the unsteady Navier-Stokes equations and their applications to direct simulations of turbulence and transition will be presented.

  8. Composite Erosion by Computational Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2006-01-01

    Composite degradation is evaluated by computational simulation when the erosion degradation occurs on a ply-by-ply basis and the degrading medium (device) is normal to the ply. The computational simulation is performed with a multi-factor interaction model and with an available multi-scale, multi-physics computer code. The erosion process degrades both the fiber and the matrix simultaneously in the same slice (ply). Both the fiber volume ratio and the matrix volume ratio approach zero, while the void volume ratio increases, as the ply degrades. The multi-factor interaction model simulates the erosion degradation, provided that the exponents and factor ratios are selected judiciously. Results obtained by computational composite mechanics show that most composite characterization properties degrade monotonically and approach zero as the ply degrades completely.

  9. Time-Accurate Unsteady Pressure Loads Simulated for the Space Launch System at Wind Tunnel Conditions

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.

    2015-01-01

    A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement for frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed, including an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date has been established for FUN3D simulations for SLS launch vehicle analysis. To the authors' knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.
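
    The quantities compared above (surface-pressure RMS level, frequency content, and power) can be extracted from a sensor time history as sketched below; the sampling rate is assumed and random data stands in for the CFD output:

        import numpy as np
        from scipy.signal import welch

        fs = 10_000.0                                   # sampling rate [Hz] (assumed)
        rng = np.random.default_rng(5)
        p = rng.normal(0.0, 1.0, 15_000)                # placeholder for p'(t) at one sensor

        fluct = p - p.mean()
        p_rms = np.sqrt(np.mean(fluct ** 2))            # RMS pressure level
        freqs, psd = welch(fluct, fs=fs, nperseg=1024)  # power spectral density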

  10. Computational algorithms for simulations in atmospheric optics.

    PubMed

    Konyaev, P A; Lukin, V P

    2016-04-20

    A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors. PMID:27140113
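
    A serial sketch of the spectral-phase idea for a single random phase screen: filter complex white noise with the square root of a Kolmogorov-type power spectrum and inverse-FFT. Normalization constants are omitted, and the paper's method additionally handles time-variant fields and parallel execution:

        import numpy as np

        N, dx, r0 = 256, 0.01, 0.1               # grid size, spacing [m], Fried parameter [m]
        fx = np.fft.fftfreq(N, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        f = np.sqrt(FX**2 + FY**2)
        f[0, 0] = np.inf                         # remove the undefined zero-frequency term

        psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)  # Kolmogorov phase PSD
        rng = np.random.default_rng(3)
        noise = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        screen = np.real(np.fft.ifft2(noise * np.sqrt(psd)))   # normalization omitted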

  11. Accurate and precise determination of critical properties from Gibbs ensemble Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A.; Siepmann, J. Ilja

    2015-09-01

    Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor-liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields Tc = 1.3128 ± 0.0016, ρc = 0.316 ± 0.004, and pc = 0.1274 ± 0.0013 in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρt ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using rcut = 3.5σ yield Tc and pc that are higher by 0.2% and 1.4% than simulations with rcut = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that rcut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard-core square-well particles with various ranges of the
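
    The extrapolation to the critical point mentioned above is conventionally done with the scaling law for the density difference and the law of rectilinear diameters. A sketch with made-up coexistence data of roughly the right magnitude for the Lennard-Jones fluid (β ≈ 0.326, the 3D Ising value):

        import numpy as np

        T = np.array([1.27, 1.28, 1.29, 1.30, 1.305])          # illustrative temperatures
        rho_l = np.array([0.457, 0.444, 0.429, 0.409, 0.395])  # illustrative liquid densities
        rho_v = np.array([0.188, 0.198, 0.210, 0.227, 0.239])  # illustrative vapor densities
        beta = 0.326

        # Scaling law: rho_l - rho_v = B*(Tc - T)^beta, so (rho_l - rho_v)^(1/beta)
        # is linear in T and vanishes at Tc.
        p = np.polyfit(T, (rho_l - rho_v) ** (1.0 / beta), 1)
        Tc = -p[1] / p[0]
        # Law of rectilinear diameters: (rho_l + rho_v)/2 = rho_c + A*(Tc - T).
        q = np.polyfit(Tc - T, (rho_l + rho_v) / 2.0, 1)
        rho_c = q[1]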

  13. Computing accurate age and distance factors in cosmology

    NASA Astrophysics Data System (ADS)

    Christiansen, Jodi L.; Siver, Andrew

    2012-05-01

    As the universe expands, astronomical observables such as brightness and angular size on the sky change in ways that differ from our simple Cartesian expectation. We show how observed quantities depend on the expansion of space and demonstrate how to calculate such quantities using the Friedmann equations. The general solution to the Friedmann equations requires a numerical solution, which is easily coded in any computing language (including Excel). We use these numerical calculations in four projects that help students build their understanding of high-redshift phenomena and cosmology. Instructions for these projects are available as supplementary materials.
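
    As a concrete instance of such a numerical solution, the line-of-sight comoving distance in a flat ΛCDM model is a one-dimensional integral over the Friedmann expansion rate; the cosmological parameters below are illustrative:

        import numpy as np
        from scipy.integrate import quad

        c_km_s, H0 = 299792.458, 70.0            # speed of light [km/s], H0 [km/s/Mpc]
        Om, OL = 0.3, 0.7                        # matter and dark-energy densities (flat)

        def E(z):
            # Dimensionless expansion rate from the Friedmann equation
            return np.sqrt(Om * (1.0 + z) ** 3 + OL)

        def comoving_distance(z):
            integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
            return c_km_s / H0 * integral        # in Mpc

        print(comoving_distance(1.0))            # ~3300 Mpc for these parameters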

  14. Computer Simulation of Aircraft Aerodynamics

    NASA Technical Reports Server (NTRS)

    Inouye, Mamoru

    1989-01-01

    The role of Ames Research Center in conducting basic aerodynamics research through computer simulations is described. The computer facilities, including supercomputers and peripheral equipment that represent the state of the art, are described. The methodology of computational fluid dynamics is explained briefly. Fundamental studies of turbulence and transition are being pursued to understand these phenomena and to develop models that can be used in the solution of the Reynolds-averaged Navier-Stokes equations. Four applications of computer simulations for aerodynamics problems are described: subsonic flow around a fuselage at high angle of attack, subsonic flow through a turbine stator-rotor stage, transonic flow around a flexible swept wing, and transonic flow around a wing-body configuration that includes an inlet and a tail.

  15. High-Resolution Tsunami Inundation Simulations Based on Accurate Estimations of Coastal Waveforms

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.; Furumura, T.

    2015-12-01

    We evaluate the accuracy of high-resolution tsunami inundation simulations in detail using the actual observational data of the 2011 Tohoku-Oki earthquake (Mw9.0) and investigate methodologies to improve the simulation accuracy. Due to the recent development of parallel computing technologies, high-resolution tsunami inundation simulations are conducted more commonly than before. To evaluate how accurately these simulations can reproduce inundation processes, we test several types of simulation configurations on a parallel computer, where we can utilize the observational data (e.g., offshore and coastal waveforms and inundation properties) that were recorded during the Tohoku-Oki earthquake. Before discussing the accuracy of inundation processes on land, the incident waves at coastal sites must be accurately estimated. However, for megathrust earthquakes, it is difficult to find a tsunami source that provides accurate estimations of tsunami waveforms at every coastal site because of the complex spatiotemporal distribution of the source and the limitations of observation. To overcome this issue, we employ a site-specific source inversion approach that increases the estimation accuracy within a specific coastal site by applying appropriate weighting to the observational data in the inversion process. We applied our source inversion technique to the Tohoku tsunami and conducted inundation simulations using 5-m resolution digital elevation model (DEM) data for the coastal areas around Miyako Bay and Sendai Bay. The estimated waveforms at the coastal wave gauges of these bays agree well with the observed waveforms. However, the simulations overestimate the inundation extent, indicating the need to improve the inundation model. We find that the value of Manning's roughness coefficient should be modified from the often-used value of n = 0.025 to n = 0.033 to obtain proper results at both cities. In this presentation, the simulation results with several

  16. Accurate and general treatment of electrostatic interaction in Hamiltonian adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Heidari, M.; Cortes-Huerto, R.; Donadio, D.; Potestio, R.

    2016-07-01

    In adaptive resolution simulations the same system is concurrently modeled with different resolution in different subdomains of the simulation box, thereby enabling an accurate description in a small but relevant region, while the rest is treated with a computationally parsimonious model. In this framework, electrostatic interaction, whose accurate treatment is a crucial aspect in the realistic modeling of soft matter and biological systems, represents a particularly acute problem due to the intrinsic long-range nature of the Coulomb potential. In the present work we propose and validate the usage of a short-range modification of the Coulomb potential, the damped shifted force (DSF) model, in the context of the Hamiltonian adaptive resolution simulation (H-AdResS) scheme. This approach, which is here validated on bulk water, ensures a reliable reproduction of the structural and dynamical properties of the liquid, and enables a seamless embedding in the H-AdResS framework. The resulting dual-resolution setup is implemented in the LAMMPS simulation package, and the customized version employed in the present work is made publicly available.
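
    The DSF interaction shifts both the potential and the force to zero at the cutoff, removing the discontinuities of a bare truncation. A sketch of the standard Fennell-Gezelter form for unit charges, which may differ in detail from the variant used in the H-AdResS implementation:

        import numpy as np
        from scipy.special import erfc

        def dsf_potential(r, alpha, r_cut):
            # Damped shifted force Coulomb pair potential: both V and dV/dr
            # vanish at r = r_cut.
            v_c = erfc(alpha * r_cut) / r_cut
            f_c = (v_c / r_cut
                   + 2.0 * alpha / np.sqrt(np.pi) * np.exp(-(alpha * r_cut) ** 2) / r_cut)
            return erfc(alpha * r) / r - v_c + f_c * (r - r_cut)

        r = np.linspace(0.5, 12.0, 200)          # illustrative distances
        v = dsf_potential(r, alpha=0.2, r_cut=12.0)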

  17. Taxis through Computer Simulation Programs.

    ERIC Educational Resources Information Center

    Park, David

    1983-01-01

    Describes a sequence of five computer programs (listings for Apple II available from author) on tactic responses (oriented movement of a cell, cell group, or whole organism in response to stimuli). The simulation programs are useful in helping students examine mechanisms at work in real organisms. (JN)

  18. Computer Simulation of Diffraction Patterns.

    ERIC Educational Resources Information Center

    Dodd, N. A.

    1983-01-01

    Describes an Apple computer program (listing available from author) which simulates Fraunhofer and Fresnel diffraction using vector addition techniques (vector chaining) and allows user to experiment with different shaped multiple apertures. Graphics output include vector resultants, phase difference, diffraction patterns, and the Cornu spiral…

  19. Accurate and efficient halo-based galaxy clustering modelling with simulations

    NASA Astrophysics Data System (ADS)

    Zheng, Zheng; Guo, Hong

    2016-06-01

    Small- and intermediate-scale galaxy clustering can be used to establish the galaxy-halo connection to study galaxy formation and evolution and to tighten constraints on cosmological parameters. With the increasing precision of galaxy clustering measurements from ongoing and forthcoming large galaxy surveys, accurate models are required to interpret the data and extract relevant information. We introduce a method based on high-resolution N-body simulations to accurately and efficiently model the galaxy two-point correlation functions (2PCFs) in projected and redshift spaces. The basic idea is to tabulate all information of haloes in the simulations necessary for computing the galaxy 2PCFs within the framework of halo occupation distribution or conditional luminosity function. It is equivalent to populating galaxies to dark matter haloes and using the mock 2PCF measurements as the model predictions. Besides the accurate 2PCF calculations, the method is also fast and therefore enables an efficient exploration of the parameter space. As an example of the method, we decompose the redshift-space galaxy 2PCF into different components based on the type of galaxy pairs and show the redshift-space distortion effect in each component. The generalizations and limitations of the method are discussed.
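
    As an illustration of the halo occupation distribution framework the tabulation method works within, the sketch below (hypothetical parameter values, not the authors' pipeline) evaluates a common five-parameter mean-occupation form and populates a toy halo catalogue:

    ```python
    # Hypothetical HOD sketch: mean central and satellite occupations as a
    # function of halo mass, then a toy halo catalogue is populated.
    import numpy as np
    from scipy.special import erf

    def mean_centrals(log_m, log_m_min=12.0, sigma=0.2):
        return 0.5 * (1.0 + erf((log_m - log_m_min) / sigma))

    def mean_satellites(log_m, log_m0=12.2, log_m1=13.3, alpha=1.0):
        m = 10.0 ** log_m
        frac = np.clip((m - 10.0 ** log_m0) / 10.0 ** log_m1, 0.0, None)
        return mean_centrals(log_m) * frac ** alpha

    rng = np.random.default_rng(0)
    log_masses = rng.uniform(11.5, 14.5, size=10000)       # toy halo masses
    centrals = rng.random(log_masses.size) < mean_centrals(log_masses)
    satellites = rng.poisson(mean_satellites(log_masses))  # Poisson satellites
    print(centrals.sum(), "centrals,", satellites.sum(), "satellites")
    ```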

  20. Accurate and Fast Simulation of Channel Noise in Conductance-Based Model Neurons by Diffusion Approximation

    PubMed Central

    Linaro, Daniele; Storace, Marco; Giugliano, Michele

    2011-01-01

    Stochastic channel gating is the major source of intrinsic neuronal noise whose functional consequences at the microcircuit- and network-levels have been only partly explored. A systematic study of this channel noise in large ensembles of biophysically detailed model neurons calls for the availability of fast numerical methods. In fact, exact techniques employ the microscopic simulation of the random opening and closing of individual ion channels, usually based on Markov models, whose computational loads are prohibitive for next generation massive computer models of the brain. In this work, we operatively define a procedure for translating any Markov model describing voltage- or ligand-gated membrane ion-conductances into an effective stochastic version, whose computer simulation is efficient, without compromising accuracy. Our approximation is based on an improved Langevin-like approach, which employs stochastic differential equations and no Monte Carlo methods. As opposed to an earlier proposal recently debated in the literature, our approximation reproduces accurately the statistical properties of the exact microscopic simulations, under a variety of conditions, from spontaneous to evoked response features. In addition, our method is not restricted to the Hodgkin-Huxley sodium and potassium currents and is general for a variety of voltage- and ligand-gated ion currents. As a by-product, the analysis of the properties emerging in exact Markov schemes by standard probability calculus enables us for the first time to analytically identify the sources of inaccuracy of the previous proposal, while providing solid ground for the modification and improvement we present here. PMID:21423712
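
    The paper's improved Langevin-like scheme is more elaborate, but the flavor of replacing microscopic Markov gating with a stochastic differential equation can be seen in this minimal Euler-Maruyama sketch of the textbook diffusion approximation for N two-state channels (rates and channel counts are illustrative, not taken from the paper):

    ```python
    # Generic diffusion approximation for N two-state ion channels with
    # opening rate alpha and closing rate beta (voltage dependence omitted).
    import numpy as np

    def simulate_open_fraction(n_channels=1000, alpha=0.5, beta=0.3,
                               dt=0.01, steps=5000, seed=1):
        rng = np.random.default_rng(seed)
        x = alpha / (alpha + beta)        # start at steady-state open fraction
        trace = np.empty(steps)
        for i in range(steps):
            drift = alpha * (1.0 - x) - beta * x
            diffusion = np.sqrt((alpha * (1.0 - x) + beta * x) / n_channels)
            x += drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
            x = min(max(x, 0.0), 1.0)     # keep the fraction in [0, 1]
            trace[i] = x
        return trace

    trace = simulate_open_fraction()
    print(trace.mean(), trace.std())      # mean ~ alpha / (alpha + beta)
    ```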

  1. Application of the G-JF discrete-time thermostat for fast and accurate molecular simulations

    NASA Astrophysics Data System (ADS)

    Grønbech-Jensen, Niels; Hayre, Natha Robert; Farago, Oded

    2014-02-01

    A new Langevin-Verlet thermostat that preserves the fluctuation-dissipation relationship for discrete time steps is applied to molecular modeling and tested against several popular suites (AMBER, GROMACS, LAMMPS) using a small molecule as an example that can be easily simulated by all three packages. Contrary to existing methods, the new thermostat exhibits no detectable changes in the sampling statistics as the time step is varied in the entire numerical stability range. The simple form of the method, which we express in the three common forms (Velocity-Explicit, Störmer-Verlet, and Leap-Frog), allows for easy implementation within existing molecular simulation packages to achieve faster and more accurate results with no cost in either computing time or programming complexity.
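
    A minimal sketch of one G-JF step for a single one-dimensional particle, following the published discrete-time update (the harmonic test force and all parameter values are illustrative, not taken from the benchmarks above):

    ```python
    # One G-JF Langevin-Verlet step; the noise impulse beta has variance
    # 2 * gamma * kT * dt, and the a, b coefficients preserve the
    # fluctuation-dissipation balance for finite dt.
    import numpy as np

    def gjf_step(x, v, force, m, gamma, kT, dt, rng):
        f = force(x)
        beta = rng.normal(0.0, np.sqrt(2.0 * gamma * kT * dt))
        b = 1.0 / (1.0 + gamma * dt / (2.0 * m))
        a = (1.0 - gamma * dt / (2.0 * m)) / (1.0 + gamma * dt / (2.0 * m))
        x_new = x + b * dt * v + b * dt**2 / (2.0 * m) * f + b * dt / (2.0 * m) * beta
        f_new = force(x_new)
        v_new = a * v + dt / (2.0 * m) * (a * f + f_new) + b / m * beta
        return x_new, v_new

    rng = np.random.default_rng(0)
    k, m, gamma, kT, dt = 1.0, 1.0, 1.0, 1.0, 0.25   # deliberately large step
    x, v, xs = 1.0, 0.0, []
    for _ in range(200000):
        x, v = gjf_step(x, v, lambda y: -k * y, m, gamma, kT, dt, rng)
        xs.append(x)
    # Configurational sampling should give k <x^2> ~ kT even at this step size.
    print(k * np.mean(np.square(xs)))
    ```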

  2. Computer simulation of martensitic transformations

    SciTech Connect

    Xu, Ping

    1993-11-01

    The characteristics of martensitic transformations in solids are largely determined by the elastic strain that develops as martensite particles grow and interact. To study the development of microstructure, a finite-element computer simulation model was constructed to mimic the transformation process. The transformation is athermal and simulated at each incremental step by transforming the cell which maximizes the decrease in the free energy. To determine the free energy change, the elastic energy developed during martensite growth is calculated from the theory of linear elasticity for elastically homogeneous media, and updated as the transformation proceeds.

  3. Temperature estimators in computer simulation

    NASA Astrophysics Data System (ADS)

    Jara, César; González-Cataldo, Felipe; Davis, Sergio; Gutiérrez, Gonzalo

    2016-05-01

    Temperature is a key physical quantity that is used to describe equilibrium between two bodies in thermal contact. In computer simulations, the temperature is usually estimated by means of the equipartition theorem, as an average over the kinetic energy. However, recent studies have shown that the temperature can also be estimated using only the particle positions, yielding what has been called the configurational temperature. Through classical molecular dynamics simulations of a 108-argon-atom system, we compare the performance of four different temperature estimators: the usual kinetic temperature and three configurational temperatures. Our results show that the different estimators converge to the same value, but their fluctuations are different.
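
    The contrast between the two families of estimators can be illustrated on a toy system (a one-dimensional harmonic oscillator with exact Boltzmann sampling; this is not the authors' argon setup), where both the kinetic estimator m<v^2>/k_B and the configurational estimator <(dU/dx)^2>/(k_B <d^2U/dx^2>) recover the sampling temperature:

    ```python
    # Toy comparison of temperature estimators (units with k_B = 1):
    # exact Boltzmann samples stand in for an MD trajectory.
    import numpy as np

    rng = np.random.default_rng(0)
    k, m, T, n = 2.0, 1.0, 0.75, 200000

    x = rng.normal(0.0, np.sqrt(T / k), n)  # positions, U(x) = k x^2 / 2
    v = rng.normal(0.0, np.sqrt(T / m), n)  # Maxwell-Boltzmann velocities

    t_kinetic = m * np.mean(v**2)           # equipartition estimator
    t_config = np.mean((k * x)**2) / k      # <U'(x)^2> / <U''(x)>
    print(t_kinetic, t_config)              # both ~ 0.75
    ```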

  4. Developing accurate simulations for high-speed fiber links

    NASA Astrophysics Data System (ADS)

    Searcy, Steven; Stark, Andrew; Hsueh, Yu-Ting; Detwiler, Thomas; Tibuleac, Sorin; Chang, GK; Ralph, Stephen E.

    2011-01-01

    Reliable simulations of high-speed fiber optic links are necessary to understand, design, and deploy fiber networks. Laboratory experiments cannot explore all possible component variations and fiber environments that are found in today's deployed systems. Simulations typically depict relative penalties compared to a reference link. However, absolute performance metrics are required to assess actual deployment configurations. Here we detail the efforts within the Georgia Tech 100G Consortium towards achieving high absolute accuracy between simulation and experimental performance with a goal of +/-0.25 dB for back-to-back configuration, and +/-0.5 dB for transmission over multiple spans with different dispersion maps. We measure all possible component parameters including fiber length, loss, and dispersion for use in simulation. We also validate experimental methods of performance evaluation including OSNR assessment and DSP-based demodulation. We investigate a wide range of parameters including modulator chirp, polarization state, polarization dependent loss, transmit spectrum, laser linewidth, and fiber nonlinearity. We evaluate 56 Gb/s (single-polarization) and 112 Gb/s (dual-polarization) DQPSK and coherent QPSK within a 50 GHz DWDM environment with 10 Gb/s OOK adjacent channels for worst-case XPM effects. We demonstrate good simulation accuracy within linear and some nonlinear regimes for a wide range of OSNR in both back-to-back configuration and up to eight spans, over a range of launch powers. This allows us to explore a wide range of environments not available in the lab, including different fiber types, ROADM passbands, and levels of crosstalk. Continued exploration is required to validate robustness over various demodulation algorithms.

  5. Computer Simulations of Space Plasmas

    NASA Astrophysics Data System (ADS)

    Goertz, C. K.

    Even a superficial scanning of the latest issues of the Journal of Geophysical Research reveals that numerical simulation of space plasma processes is an active and growing field. The complexity and sophistication of numerically produced “data” rivals that of the real stuff. Sometimes numerical results need interpretation in terms of a simple “theory,” very much as the results of real experiments and observations do. Numerical simulation has indeed become a third independent tool of space physics, somewhere between observations and analytic theory. There is thus a strong need for textbooks and monographs that report the latest techniques and results in an easily accessible form. This book is an attempt to satisfy this need. The editors want it not only to be “proceedings of selected lectures (given) at the first ISSS (International School of Space Simulations in Kyoto, Japan, November 1-2, 1982) but rather…a form of textbook of computer simulations of space plasmas.” This is, of course, a difficult task when many authors are involved. Unavoidable redundancies and differences in notation may confuse the beginner. Some important questions, like numerical stability, are not discussed in sufficient detail. The recent book by C. K. Birdsall and A. B. Langdon (Plasma Physics via Computer Simulation, McGraw-Hill, New York, 1985) is more complete and detailed and seems more suitable as a textbook for simulations. Nevertheless, this book is useful to the beginner and the specialist because it contains not only descriptions of various numerical techniques but also many applications of simulations to space physics phenomena.

  6. Biomes computed from simulated climatologies

    SciTech Connect

    Claussen, M.; Esch, M.

    1994-01-01

    The biome model of Prentice et al. is used to predict global patterns of potential natural plant formations, or biomes, from climatologies simulated by ECHAM, a model used for climate simulations at the Max-Planck-Institut fuer Meteorologie. This study was undertaken in order to show the advantage of this biome model in diagnosing the performance of a climate model and assessing effects of past and future climate changes predicted by a climate model. Good overall agreement is found between global patterns of biomes computed from observed and simulated data of present climate. But there are also major discrepancies, indicated by a difference in biomes in Australia, in the Kalahari Desert, and in the Middle West of North America. These discrepancies can be traced back to deficiencies in simulated rainfall as well as summer or winter temperatures. Global patterns of biomes computed from an ice age simulation reveal that North America, Europe, and Siberia should have been covered largely by tundra and taiga, whereas only small differences are found for the tropical rain forests. A potential northeast shift of biomes is expected from a simulation with enhanced CO{sub 2} concentration according to the IPCC Scenario A. Little change is seen in the tropical rain forest and the Sahara. Since the biome model used is not capable of predicting changes in vegetation patterns due to a rapid climate change, the latter simulation has to be taken as a prediction of changes in conditions favourable for the existence of certain biomes, not as a prediction of a future distribution of biomes. 15 refs., 8 figs., 2 tabs.

  7. Computer simulation of nonequilibrium processes

    SciTech Connect

    Wallace, D.C.

    1985-07-01

    The underlying concepts of nonequilibrium statistical mechanics, and of irreversible thermodynamics, will be described. The question at hand is then, how are these concepts to be realized in computer simulations of many-particle systems. The answer will be given for dissipative deformation processes in solids, on three hierarchical levels: heterogeneous plastic flow, dislocation dynamics, and molecular dynamics. Application to the shock process will be discussed.

  8. Accurate, practical simulation of satellite infrared radiometer spectral data

    SciTech Connect

    Sullivan, T.J.

    1982-09-01

    This study's purpose is to determine whether a relatively simple random band model formulation of atmospheric radiation transfer in the infrared region can provide valid simulations of narrow interval satellite-borne infrared sounder system data. Detailed ozonesondes provide the pertinent atmospheric information and sets of calibrated satellite measurements provide the validation. High resolution line-by-line model calculations are included to complete the evaluation.

  9. Fast and accurate simulations of diffusion-weighted MRI signals for the evaluation of acquisition sequences

    NASA Astrophysics Data System (ADS)

    Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime

    2016-03-01

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities, such as Cube and Sphere (CUSP) imaging, provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.

  10. Inversion based on computational simulations

    SciTech Connect

    Hanson, K.M.; Cunningham, G.S.; Saquib, S.S.

    1998-09-01

    A standard approach to solving inversion problems that involve many parameters uses gradient-based optimization to find the parameters that best match the data. The authors discuss enabling techniques that facilitate application of this approach to large-scale computational simulations, which are the only way to investigate many complex physical phenomena. Such simulations may not seem to lend themselves to calculation of the gradient with respect to numerous parameters. However, adjoint differentiation allows one to efficiently compute the gradient of an objective function with respect to all the variables of a simulation. When combined with advanced gradient-based optimization algorithms, adjoint differentiation permits one to solve very large problems of optimization or parameter estimation. These techniques will be illustrated through the simulation of the time-dependent diffusion of infrared light through tissue, which has been used to perform optical tomography. The techniques discussed have a wide range of applicability to modeling including the optimization of models to achieve a desired design goal.
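
    A minimal sketch of the adjoint idea (on a linear explicit diffusion model rather than the optical tomography application above): one forward sweep plus one reverse sweep yields the gradient of the objective with respect to every component of the initial field, at a cost independent of the number of parameters:

    ```python
    # Adjoint differentiation on a linear explicit diffusion model: gradient
    # of J = 0.5 * ||u_N - d||^2 with respect to the initial field u_0. The
    # update operator is symmetric here, so it serves as its own adjoint.
    import numpy as np

    NU_DT, N_STEPS = 0.2, 50

    def step(u):
        # Explicit heat-equation update with a periodic discrete Laplacian.
        return u + NU_DT * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

    def objective(u0, data):
        u = u0.copy()
        for _ in range(N_STEPS):
            u = step(u)
        return 0.5 * np.sum((u - data) ** 2)

    def gradient_via_adjoint(u0, data):
        u = u0.copy()
        for _ in range(N_STEPS):      # forward sweep
            u = step(u)
        lam = u - data                # adjoint "initial" condition, dJ/du_N
        for _ in range(N_STEPS):      # reverse sweep with the adjoint operator
            lam = step(lam)
        return lam                    # dJ/du_0

    rng = np.random.default_rng(0)
    u0, data = rng.random(64), rng.random(64)
    g = gradient_via_adjoint(u0, data)

    e = np.zeros(64)
    e[10] = 1e-6                      # finite-difference check of one component
    fd = (objective(u0 + e, data) - objective(u0 - e, data)) / 2e-6
    print(g[10], fd)                  # should agree to several digits
    ```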

  11. An accurate and efficient 3-D micromagnetic simulation of metal evaporated tape

    NASA Astrophysics Data System (ADS)

    Jones, M.; Miles, J. J.

    1997-07-01

    Metal evaporated tape (MET) has a complex column-like structure in which magnetic domains are arranged randomly. In order to accurately simulate the behaviour of MET it is important to capture these aspects of the material in a high-resolution 3-D micromagnetic model. The scale of this problem prohibits the use of traditional scalar computers and leads us to develop algorithms for a vector processor architecture. We demonstrate that despite the material's highly non-uniform structure, it is possible to develop fast vector algorithms for the computation of the magnetostatic interaction field. We do this by splitting the field calculation into near and far components. The near field component is calculated exactly using an efficient vector algorithm, whereas the far field is calculated approximately using a novel fast Fourier transform (FFT) technique. Results are presented which demonstrate that, in practice, the algorithms require sub-O(N log(N)) computation time. In addition, results of a highly realistic simulation of hysteresis in MET are presented.

  12. A hierarchical approach to accurate predictions of macroscopic thermodynamic behavior from quantum mechanics and molecular simulations

    NASA Astrophysics Data System (ADS)

    Garrison, Stephen L.

    2005-07-01

    The combination of molecular simulations and potentials obtained from quantum chemistry is shown to be able to provide reasonably accurate thermodynamic property predictions. Gibbs ensemble Monte Carlo simulations are used to understand the effects of small perturbations to various regions of the model Lennard-Jones 12-6 potential. However, when the phase behavior and second virial coefficient are scaled by the critical properties calculated for each potential, the results obey a corresponding states relation, suggesting a non-uniqueness problem for interaction potentials fit to experimental phase behavior. Several variations of a procedure collectively referred to as quantum mechanical Hybrid Methods for Interaction Energies (HM-IE) are developed and used to accurately estimate interaction energies from CCSD(T) calculations with a large basis set in a computationally efficient manner for the neon-neon, acetylene-acetylene, and nitrogen-benzene systems. Using these results and methods, an ab initio, pairwise-additive, site-site potential for acetylene is determined and then improved using results from molecular simulations using this initial potential. The initial simulation results also indicate that only a limited range of energies is important for accurate phase behavior predictions. Second virial coefficients calculated from the improved potential indicate that one set of experimental data in the literature is likely erroneous. This prescription is then applied to methanethiol. Difficulties in modeling the effects of the lone pair electrons suggest that charges on the lone pair sites negatively impact the ability of the intermolecular potential to describe certain orientations, but that the lone pair sites may be necessary to reasonably duplicate the interaction energies for several orientations. Two possible methods for incorporating the effects of three-body interactions into simulations within the pairwise-additivity formulation are also developed. A low density

  13. Hydration free energies of cyanide and hydroxide ions from molecular dynamics simulations with accurate force fields

    USGS Publications Warehouse

    Lee, M.W.; Meuwly, M.

    2013-01-01

    The evaluation of hydration free energies is a sensitive test to assess force fields used in atomistic simulations. We showed recently that the vibrational relaxation times, 1D- and 2D-infrared spectroscopies for CN(-) in water can be quantitatively described from molecular dynamics (MD) simulations with multipolar force fields and slightly enlarged van der Waals radii for the C- and N-atoms. To validate such an approach, the present work investigates the solvation free energy of cyanide in water using MD simulations with accurate multipolar electrostatics. It is found that larger van der Waals radii are indeed necessary to obtain results close to the experimental values when a multipolar force field is used. For CN(-), the van der Waals ranges refined in our previous work yield hydration free energy between -72.0 and -77.2 kcal mol(-1), which is in excellent agreement with the experimental data. In addition to the cyanide ion, we also study the hydroxide ion to show that the method used here is readily applicable to similar systems. Hydration free energies are found to sensitively depend on the intermolecular interactions, while bonded interactions are less important, as expected. We also investigate in the present work the possibility of applying the multipolar force field in scoring trajectories generated using computationally inexpensive methods, which should be useful in broader parametrization studies with reduced computational resources, as scoring is much faster than the generation of the trajectories.

  15. Development of modified cable models to simulate accurate neuronal active behaviors

    PubMed Central

    2014-01-01

    In large network and single three-dimensional (3-D) neuron simulations, high computing speed dictates using reduced cable models to simulate neuronal firing behaviors. However, these models are unwarranted under active conditions and lack accurate representation of dendritic active conductances that greatly shape neuronal firing. Here, realistic 3-D (R3D) models (which contain full anatomical details of dendrites) of spinal motoneurons were systematically compared with their reduced single unbranched cable (SUC, which reduces the dendrites to a single electrically equivalent cable) counterpart under passive and active conditions. The SUC models matched the R3D model's passive properties but failed to match key active properties, especially active behaviors originating from dendrites. For instance, persistent inward currents (PIC) hysteresis, frequency-current (FI) relationship secondary range slope, firing hysteresis, plateau potential partial deactivation, staircase currents, synaptic current transfer ratio, and regional FI relationships were not accurately reproduced by the SUC models. The dendritic morphology oversimplification and lack of dendritic active conductances spatial segregation in the SUC models caused significant underestimation of those behaviors. Next, SUC models were modified by adding key branching features in an attempt to restore their active behaviors. The addition of primary dendritic branching only partially restored some active behaviors, whereas the addition of secondary dendritic branching restored most behaviors. Importantly, the proposed modified models successfully replicated the active properties without sacrificing model simplicity, making them attractive candidates for running R3D single neuron and network simulations with accurate firing behaviors. The present results indicate that using reduced models to examine PIC behaviors in spinal motoneurons is unwarranted. PMID:25277743

  16. Accurate simulation of terahertz transmission through doped silicon junctions

    NASA Astrophysics Data System (ADS)

    Jen, Chih-Yu; Richter, Christiaan

    2015-03-01

    In previous work we presented results demonstrating the ability of transmission-mode terahertz time domain spectroscopy (THz-TDS) to detect doping profile differences and deviations in silicon. This capability is potentially useful for quality control in the semiconductor and photovoltaic industries. Subsequent experimental results revealed that terahertz interactions with both electrons and holes are strong enough to recognize both n- and p-type doping profile changes. We also showed that the relatively long wavelength (~1 mm) of THz radiation allows this approach to be compatible with surface treatments such as the texturing (scattering layer) typically used in the solar industry. In this work we further demonstrate the accuracy with which current terahertz optical models can simulate the power spectrum of terahertz radiation transmitted through junctions with known doping profiles (as determined with SIMS). We conclude that current optical models predict the terahertz transmission and absorption in silicon junctions well.

  17. Accurate Position Sensing of Defocused Beams Using Simulated Beam Templates

    SciTech Connect

    Awwal, A; Candy, J; Haynam, C; Widmayer, C; Bliss, E; Burkhart, S

    2004-09-29

    In position detection using matched filtering one is faced with the challenge of determining the best position in the presence of distortions such as defocus and diffraction noise. This work evaluates the performance of simulated defocused images as the template against the real defocused beam. It was found that an amplitude modulated phase-only filter is better equipped to deal with real defocused images that suffer from diffraction noise effects resulting in a textured spot intensity pattern. It is shown that there is a tradeoff of performance dependent upon the type and size of the defocused image. A novel automated system was developed that can automatically select the right template type and size. Results of this automation for real defocused images are presented.

  18. Equilibrium gas flow computations. I - Accurate and efficient calculation of equilibrium gas properties

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    1989-01-01

    This paper treats the accurate and efficient calculation of thermodynamic properties of arbitrary gas mixtures for equilibrium flow computations. New improvements in the Stupochenko-Jaffe model for the calculation of thermodynamic properties of diatomic molecules are presented. A unified formulation of equilibrium calculations for gas mixtures in terms of irreversible entropy is given. Using a highly accurate thermo-chemical data base, a new, efficient and vectorizable search algorithm is used to construct piecewise interpolation procedures which generate accurate thermodynamic variables and their derivatives as required by modern computational algorithms. Results are presented for equilibrium air, and compared with those given by the Srinivasan program.

  19. Computer simulation of metal-organic materials

    NASA Astrophysics Data System (ADS)

    Stern, Abraham C.

    Computer simulations of metal-organic frameworks are conducted to both investigate the mechanism of hydrogen sorption and to elucidate a detailed, molecular-level understanding of the physical interactions that can lead to successful material design strategies. To this end, important intermolecular interactions are identified and individually parameterized to yield a highly accurate representation of the potential energy landscape. Polarization, one such interaction found to play a significant role in H2 sorption, is included explicitly for the first time in simulations of metal-organic frameworks. Permanent electrostatics are usually accounted for by means of an approximate fit to model compounds. The application of this method to simulations involving metal-organic frameworks introduces several substantial problems that are characterized in this work. To circumvent this, a method is developed and tested in which atomic point partial charges are computed more directly, fit to the fully periodic electrostatic potential. In this manner, long-range electrostatics are explicitly accounted for via Ewald summation. Grand canonical Monte Carlo simulations are conducted employing the force field parameterization developed here. Several of the major findings of this work are: Polarization is found to play a critical role in determining the overall structure of H2 sorbed in metal-organic frameworks, although not always the determining factor in uptake. The parameterization of atomic point charges by means of a fit to the periodic electrostatic potential is a robust, efficient method and consistently results in a reliable description of Coulombic interactions without introducing ambiguity associated with other procedures. After careful development of both hydrogen and framework potential energy functions, quantitatively accurate results have been obtained. Such predictive accuracy will aid greatly in the rational, iterative design cycle between experimental and theoretical

  20. Parallel kinetic Monte Carlo simulation framework incorporating accurate models of adsorbate lateral interactions

    SciTech Connect

    Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James; Stamatakis, Michail

    2013-12-14

    Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
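
    Zacros implements graph-theoretical KMC with full cluster-expansion energetics; the toy rejection-free sketch below illustrates only the basic ingredient of lateral interactions, with first-nearest-neighbour repulsion accelerating desorption on a 1-D periodic lattice (all rate constants and energies are made up):

    ```python
    # Toy rejection-free (Gillespie-type) KMC of adsorption/desorption on a
    # 1-D periodic lattice with repulsive first-nearest-neighbour interactions.
    import math, random

    random.seed(1)
    L, k_ads, k_des0, eps_nn, kT = 50, 1.0, 2.0, 0.3, 1.0
    occ = [0] * L
    t = 0.0

    def desorption_rate(i):
        # Repulsive neighbours destabilize the adsorbate, speeding desorption.
        n_nn = occ[(i - 1) % L] + occ[(i + 1) % L]
        return k_des0 * math.exp(n_nn * eps_nn / kT)

    for _ in range(10000):
        rates = [k_ads if occ[i] == 0 else desorption_rate(i) for i in range(L)]
        total = sum(rates)
        t += -math.log(random.random()) / total   # exponential waiting time
        r, acc = random.random() * total, 0.0
        for i, rate in enumerate(rates):          # pick an event by its rate
            acc += rate
            if r < acc:
                occ[i] ^= 1                       # flip: adsorb or desorb
                break

    print(f"coverage = {sum(occ) / L:.2f} at t = {t:.1f}")
    ```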

  1. Computer simulation results of attitude estimation of earth orbiting satellites

    NASA Technical Reports Server (NTRS)

    Kou, S. R.

    1976-01-01

    Computer simulation results of attitude estimation of Earth-orbiting satellites (including Space Telescope) subjected to environmental disturbances and noises are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and were run on HP 9830A and HP 9866A computers. Simulation results show that a decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher order systems, this filter has computational advantages (i.e., less integration errors and roundoff errors) over a Kalman filter.

  2. Cartesian Off-Body Grid Adaption for Viscous Time- Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
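
    A hedged one-dimensional sketch of the undivided second-difference sensor mentioned above (OVERFLOW's actual flagging logic, thresholds, and multi-level grid handling are more involved):

    ```python
    # Cells where the undivided second difference |u[i-1] - 2 u[i] + u[i+1]|
    # exceeds a threshold are flagged for a finer Cartesian level.
    import numpy as np

    def flag_for_refinement(u, threshold):
        sensor = np.abs(u[:-2] - 2.0 * u[1:-1] + u[2:])
        flags = np.zeros(u.size, dtype=bool)
        flags[1:-1] = sensor > threshold   # interior cells only
        return flags

    x = np.linspace(0.0, 1.0, 101)
    u = np.tanh(50.0 * (x - 0.5))          # sharp feature mimicking a shock
    print(np.where(flag_for_refinement(u, 0.05))[0])   # indices near x = 0.5
    ```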

  3. Object-Oriented NeuroSys: Parallel Programs for Simulating Large Networks of Biologically Accurate Neurons

    SciTech Connect

    Pacheco, P; Miller, P; Kim, J; Leese, T; Zabiyaka, Y

    2003-05-07

    Object-oriented NeuroSys (ooNeuroSys) is a collection of programs for simulating very large networks of biologically accurate neurons on distributed memory parallel computers. It includes two principal programs: ooNeuroSys, a parallel program for solving the large systems of ordinary differential equations arising from the interconnected neurons, and Neurondiz, a parallel program for visualizing the results of ooNeuroSys. Both programs are designed to be run on clusters and use the MPI library to obtain parallelism. ooNeuroSys also includes an easy-to-use Python interface. This interface allows neuroscientists to quickly develop and test complex neuron models. Both ooNeuroSys and Neurondiz have a design that allows for both high performance and relative ease of maintenance.

  4. Computational Methods for Jet Noise Simulation

    NASA Technical Reports Server (NTRS)

    Goodrich, John W. (Technical Monitor); Hagstrom, Thomas

    2003-01-01

    The purpose of our project is to develop, analyze, and test novel numerical technologies central to the long term goal of direct simulations of subsonic jet noise. Our current focus is on two issues: accurate, near-field domain truncations and high-order, single-step discretizations of the governing equations. The Direct Numerical Simulation (DNS) of jet noise poses a number of extreme challenges to computational technique. In particular, the problem involves multiple temporal and spatial scales as well as flow instabilities and is posed on an unbounded spatial domain. Moreover, the basic phenomenon of interest, the radiation of acoustic waves to the far field, involves only a minuscule fraction of the total energy. The best current simulations of jet noise are at low Reynolds number. It is likely that an increase of one to two orders of magnitude will be necessary to reach a regime where the separation between the energy-containing and dissipation scales is sufficient to make the radiated noise essentially independent of the Reynolds number. Such an increase in resolution cannot be obtained in the near future solely through increases in computing power. Therefore, new numerical methodologies of maximal efficiency and accuracy are required.

  5. Evaluation of the Time-Derivative Coupling for Accurate Electronic State Transition Probabilities from Numerical Simulations.

    PubMed

    Meek, Garrett A; Levine, Benjamin G

    2014-07-01

    Spikes in the time-derivative coupling (TDC) near surface crossings make the accurate integration of the time-dependent Schrödinger equation in nonadiabatic molecular dynamics simulations a challenge. To address this issue, we present an approximation to the TDC based on a norm-preserving interpolation (NPI) of the adiabatic electronic wave functions within each time step. We apply NPI and two other schemes for computing the TDC in numerical simulations of the Landau-Zener model, comparing the simulated transfer probabilities to the exact solution. Though NPI does not require the analytical calculation of nonadiabatic coupling matrix elements, it consistently yields unsigned population transfer probability errors of ∼0.001, whereas analytical calculation of the TDC yields errors of 0.0-1.0 depending on the time step, the offset of the maximum in the TDC from the beginning of the time step, and the coupling strength. The approximation of Hammes-Schiffer and Tully yields errors intermediate between NPI and the analytical scheme. PMID:26279558
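
    For orientation, the finite-difference overlap formula of Hammes-Schiffer and Tully referenced above can be sketched as follows (the two-dimensional model states and step sizes are illustrative; NPI refines this by interpolating the adiabatic states within the step):

    ```python
    # Finite-difference time-derivative coupling from wave-function overlaps:
    # d_ij(t + dt/2) ~ [<phi_i(t)|phi_j(t+dt)> - <phi_i(t+dt)|phi_j(t)>] / (2 dt)
    import numpy as np

    def tdc_hst(phi_i_t, phi_j_t, phi_i_tdt, phi_j_tdt, dt):
        return (np.dot(phi_i_t, phi_j_tdt) - np.dot(phi_i_tdt, phi_j_t)) / (2.0 * dt)

    def states(theta):
        # Two orthonormal electronic states rotating in a 2-D model space.
        phi1 = np.array([np.cos(theta), np.sin(theta)])
        phi2 = np.array([-np.sin(theta), np.cos(theta)])
        return phi1, phi2

    dt, dtheta = 0.1, 0.02
    phi1, phi2 = states(0.0)
    phi1n, phi2n = states(dtheta)
    print(tdc_hst(phi1, phi2, phi1n, phi2n, dt))   # ~ -dtheta/dt = -0.2
    ```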

  6. Computer simulations of particle packing

    SciTech Connect

    Cesarano, J. III; McEuen, M.J.; Swiler, T.

    1996-09-01

    Computer code has been developed to rapidly simulate the random packing of disks and spheres in two and three dimensions. Any size distribution may be packed. The code simulates varying degrees of interparticle conditions ranging from sticky to free flowing. The code will also calculate the overall packing density, density distributions, and void size distributions (in two dimensions). An important aspect of the code is that it is written in C++ and incorporates a user-friendly graphical interface for standard Macintosh and Power PC platforms. Investigations of how well the code simulates realistic random packing have begun. The code has been developed in consideration of the problem of filling a container (or die) with spray-dried granules of ceramic powder (represented by spheres). Although not presented here, the long-term goal of this work is to give users the ability to predict the homogeneity of filled dies prior to dry pressing. Additionally, this software has educational utility for studying relationships between particle size distributions and macrostructures.
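
    A minimal sketch in the spirit of the described code (random sequential placement of hard disks in a unit square; the real package also models interparticle stickiness, size distributions, and die filling):

    ```python
    # Random sequential packing of hard disks: candidates are rejected on
    # overlap, and the final area fraction is reported.
    import math, random

    random.seed(0)
    radius, centers = 0.025, []
    for _ in range(20000):
        x = random.uniform(radius, 1.0 - radius)
        y = random.uniform(radius, 1.0 - radius)
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * radius) ** 2
               for cx, cy in centers):
            centers.append((x, y))

    density = len(centers) * math.pi * radius**2
    print(len(centers), "disks, packing density =", round(density, 3))
    ```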

  7. Computation simulation of the nonlinear response of suspension bridges

    SciTech Connect

    McCallen, D.B.; Astaneh-Asl, A.

    1997-10-01

    Accurate computational simulation of the dynamic response of long-span bridges presents one of the greatest challenges facing the earthquake engineering community. The size of these structures, in terms of physical dimensions and number of main load bearing members, makes computational simulation of transient response an arduous task. Discretization of a large bridge with general purpose finite element software often results in a computational model of such size that excessive computational effort is required for three dimensional nonlinear analyses. The aim of the current study was the development of efficient, computationally based methodologies for the nonlinear analysis of cable supported bridge systems which would allow accurate characterization of a bridge with a relatively small number of degrees of freedom. This work has led to the development of a special purpose software program for the nonlinear analysis of cable supported bridges, and the methodologies and software are described and illustrated in this paper.

  8. Determining Peptide Partitioning Properties via Computer Simulation

    PubMed Central

    Ulmschneider, Jakob P.; Ulmschneider, Martin B.

    2010-01-01

    The transfer of polypeptide segments into lipid bilayers to form transmembrane helices represents the crucial first step in cellular membrane protein folding and assembly. This process is driven by complex and poorly understood atomic interactions of peptides with the lipid bilayer environment. The lack of suitable experimental techniques that can resolve these processes both at atomic resolution and nanosecond timescales has spurred the development of computational techniques. In this review, we summarize the significant progress achieved in the last few years in elucidating the partitioning of peptides into lipid bilayer membranes using atomic detail molecular dynamics simulations. Indeed, partitioning simulations can now provide a wealth of structural and dynamic information. Furthermore, we show that peptide-induced bilayer distortions, insertion pathways, transfer free energies, and kinetic insertion barriers are now accurate enough to complement experiments. Further advances in simulation methods and force field parameter accuracy promise to turn molecular dynamics simulations into a powerful tool for investigating a wide range of membrane active peptide phenomena. PMID:21107546

  9. The Clinical Impact of Accurate Cystine Calculi Characterization Using Dual-Energy Computed Tomography

    PubMed Central

    Haley, William E.; Ibrahim, El-Sayed H.; Qu, Mingliang; Cernigliaro, Joseph G.; Goldfarb, David S.; McCollough, Cynthia H.

    2015-01-01

    Dual-energy computed tomography (DECT) has recently been suggested as the imaging modality of choice for kidney stones due to its ability to provide information on stone composition. Standard postprocessing of the dual-energy images accurately identifies uric acid stones, but not other types. Cystine stones can be identified from DECT images when analyzed with advanced postprocessing. This case report describes clinical implications of accurate diagnosis of cystine stones using DECT. PMID:26688770

  11. Unfitted Two-Phase Flow Simulations in Pore-Geometries with Accurate

    NASA Astrophysics Data System (ADS)

    Heimann, Felix; Engwer, Christian; Ippisch, Olaf; Bastian, Peter

    2013-04-01

    The development of better macro scale models for multi-phase flow in porous media is still impeded by the lack of suitable methods for the simulation of such flow regimes on the pore scale. The highly complicated geometry of natural porous media imposes requirements with regard to stability and computational efficiency which current numerical methods fail to meet. Therefore, current simulation environments are still unable to provide a thorough understanding of porous media in multi-phase regimes and still fail to reproduce well known effects like hysteresis or the more peculiar dynamics of the capillary fringe with satisfying accuracy. Although flow simulations in pore geometries were initially the domain of Lattice-Boltzmann and other particle methods, the development of Galerkin methods for such applications is important as they complement the range of feasible flow and parameter regimes. In the recent past, it has been shown that unfitted Galerkin methods can be applied efficiently to topologically demanding geometries. However, in the context of two-phase flows, the interface of the two immiscible fluids effectively separates the domain into two sub-domains. The exact representation of such setups with multiple independent and time-dependent geometries exceeds the functionality of common unfitted methods. We present a new approach to pore scale simulations with an unfitted discontinuous Galerkin (UDG) method. Utilizing a recursive sub-triangulation algorithm, we extend the UDG method to setups with multiple independent geometries. This approach allows an accurate representation of the moving contact line and the interface conditions, i.e. the pressure jump across the interface. Example simulations in two and three dimensions illustrate and verify the stability and accuracy of this approach.

  12. A Three Dimensional Parallel Time Accurate Turbopump Simulation Procedure Using Overset Grid Systems

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Chan, William; Kwak, Dochan

    2001-01-01

    The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start up, and non-uniform inflows, and will eventually impact on system vibration and structures. In the proposed paper, the progress toward the capability of complete simulation of the turbo-pump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbo-pump is used as a test case for evaluation of the hybrid MPI/Open-MP and MLP versions of the INS3D code. CAD to solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbo-pump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability will be presented along with the performance of parallel versions of the code.

  13. A Three-Dimensional Parallel Time-Accurate Turbopump Simulation Procedure Using Overset Grid System

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Chan, William; Kwak, Dochan

    2002-01-01

    The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start up, and nonuniform inflows, and will eventually impact on system vibration and structures. In the proposed paper, the progress toward the capability of complete simulation of the turbo-pump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbo-pump is used as a test case for evaluation of the hybrid MPI/Open-MP and MLP versions of the INS3D code. CAD to solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbo-pump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability are presented along with the performance of parallel versions of the code.

  14. Computer simulation of microstructural dynamics

    SciTech Connect

    Grest, G.S.; Anderson, M.P.; Srolovitz, D.J.

    1985-01-01

    Since many of the physical properties of materials are determined by their microstructure, it is important to be able to predict and control microstructural development. A number of approaches have been taken to study this problem, but they assume that the grains can be described as spherical or hexagonal and that growth occurs in an average environment. We have developed a new technique to bridge the gap between the atomistic interactions and the macroscopic scale by discretizing the continuum system such that the microstructure retains its topological connectedness, yet is amenable to computer simulations. Using this technique, we have studied grain growth in polycrystalline aggregates. The temporal evolution and grain morphology of our model are in excellent agreement with experimental results for metals and ceramics.
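
    The grain-growth technique associated with these authors is commonly realized as a Q-state Potts Monte Carlo model; the compact sketch below (zero-temperature dynamics, illustrative lattice size and step count, not their original code) shows orientations coarsening as grain-boundary energy is reduced:

    ```python
    # Q-state Potts Monte Carlo sketch of curvature-driven grain growth:
    # each site carries a grain orientation, and reorientation trials that
    # do not raise the boundary energy are accepted.
    import random

    random.seed(2)
    N, Q, steps = 64, 32, 200000
    grid = [[random.randrange(Q) for _ in range(N)] for _ in range(N)]

    def neighbors(i, j):
        return [grid[(i - 1) % N][j], grid[(i + 1) % N][j],
                grid[i][(j - 1) % N], grid[i][(j + 1) % N]]

    def boundary_energy(i, j, state):
        return sum(1 for s in neighbors(i, j) if s != state)

    for _ in range(steps):
        i, j = random.randrange(N), random.randrange(N)
        trial = random.choice(neighbors(i, j))   # try a neighbouring orientation
        if boundary_energy(i, j, trial) <= boundary_energy(i, j, grid[i][j]):
            grid[i][j] = trial

    print(len({grid[i][j] for i in range(N) for j in range(N)}), "orientations left")
    ```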

  15. Priority Queues for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor)

    1998-01-01

    The present invention is embodied in new priority queue data structures for event list management of computer simulations, and includes a new priority queue data structure and an improved event horizon applied to priority queue data structures. The new priority queue data structure is a Qheap and is made out of linked lists for robust, fast, reliable, and stable event list management; it uses a temporary unsorted list to store all items until one of the items is needed. Then the list is sorted, the highest priority item is removed, and the rest of the list is inserted in the Qheap. Also, an event horizon is applied to binary tree and splay tree priority queue data structures to form the improved event horizon for event management.
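
    A hedged Python sketch of the pop behaviour described above (plain lists and a binary heap stand in for the patent's linked-list Qheap): pushes accumulate in an unsorted pending list, and the next pop sorts that list, returns the highest-priority item, and merges the remainder into the main structure:

    ```python
    # Deferred-sorting priority queue in the spirit of the described Qheap.
    import heapq

    class QheapLike:
        def __init__(self):
            self.pending = []   # unsorted until an item is needed
            self.main = []      # heap holding previously sorted items

        def push(self, key, item):
            self.pending.append((key, item))

        def pop(self):
            if self.pending:
                self.pending.sort()
                best = self.pending.pop(0)          # highest priority pending
                for entry in self.pending:          # merge the rest
                    heapq.heappush(self.main, entry)
                self.pending.clear()
                if not self.main or best <= self.main[0]:
                    return best
                heapq.heappush(self.main, best)     # an older item wins
            return heapq.heappop(self.main)

    q = QheapLike()
    for t, ev in [(5.0, "c"), (1.0, "a"), (3.0, "b")]:
        q.push(t, ev)
    print(q.pop(), q.pop(), q.pop())   # events come out in time order
    ```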

  16. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  17. A Mass Spectrometer Simulator in Your Computer

    NASA Astrophysics Data System (ADS)

    Gagnon, Michel

    2012-12-01

    Introduced to study components of ionized gas, the mass spectrometer has evolved into a highly accurate device now used in many undergraduate and research laboratories. Unfortunately, despite their importance in the formation of future scientists, mass spectrometers remain beyond the financial reach of many high schools and colleges. As a result, it is not possible for instructors to take full advantage of this equipment. Therefore, to facilitate accessibility to this tool, we have developed a realistic computer-based simulator. Using this software, students are able to practice their ability to identify the components of the original gas, thereby gaining a better understanding of the underlying physical laws. The software is available as a free download.

  18. Computer simulations of liquid crystals

    NASA Astrophysics Data System (ADS)

    Smondyrev, Alexander M.

    Liquid crystal physics is an exciting interdisciplinary field of research with important practical applications. The complexity of liquid crystals and the presence of strong translational and orientational fluctuations require a computational approach, especially in studies of nonequilibrium phenomena. In this dissertation we present the results of computer simulation studies of liquid crystals using the molecular dynamics technique. We employed the Gay-Berne phenomenological model of liquid crystals to describe the interaction between the molecules. Both equilibrium and non-equilibrium phenomena were studied. In the first case we studied the flow properties of the liquid crystal system in equilibrium as well as the dynamics of the director. We measured the viscosities of the Gay-Berne model in the nematic and isotropic phases. The temperature-dependence of the rotational and shear viscosities, including the nonmonotonic behavior of one shear viscosity, are in good agreement with experimental data. The bulk viscosities are significantly larger than the shear viscosities, again in agreement with experiment. The director motion was found to be ballistic at short times and diffusive at longer times. The second class of problems we focused on is the properties of the system when rapidly quenched to very low temperatures from the nematic phase. We find a glass transition to a metastable phase with nematic order and frozen translational and orientational degrees of freedom. For fast quench rates the local structure is nematic-like, while for slower quench rates smectic order is present as well. Finally, we considered a system in the isotropic phase which is then cooled to temperatures below the isotropic-nematic transition temperature. We expect topological defects to play a central role in the subsequent equilibration of the system. To identify and study these defects we require a simulation of a system with several thousand particles. We present the results of large

  19. Finite domain simulations with adaptive boundaries: accurate potentials and nonequilibrium movesets.

    PubMed

    Wagoner, Jason A; Pande, Vijay S

    2013-12-21

    We extend the theory of hybrid explicit/implicit solvent models to include an explicit domain that grows and shrinks in response to a solute's evolving configuration. The goal of this model is to provide an appropriate but not excessive amount of solvent detail, and the inclusion of an adjustable boundary provides a significant computational advantage for solutes that explore a range of configurations. In addition to the theoretical development, a successful implementation of this method requires (1) an efficient moveset that propagates the boundary as a new coordinate of the system, and (2) an accurate continuum solvent model with parameters that are transferable to an explicit domain of any size. We address these challenges and develop boundary updates using Monte Carlo moves biased by nonequilibrium paths. We obtain the desired level of accuracy using a "decoupling interface" that we have previously shown to remove boundary artifacts common to hybrid solvent models. Using an uncharged, coarse-grained solvent model, we then study the efficiency of nonequilibrium paths that a simulation takes by quantifying the dissipation. In the spirit of optimization, we study this quantity over a range of simulation parameters. PMID:24359359

  20. Industrial Compositional Streamline Simulation for Efficient and Accurate Prediction of Gas Injection and WAG Processes

    SciTech Connect

    Margot Gerritsen

    2008-10-31

    Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. The understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline based solver for gas injection processes that is computationally very attractive: as compared to traditional Eulerian solvers in use by industry it computes solutions with a computational speed orders of magnitude higher and a comparable accuracy provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework allowing improved streamline coverage and partial streamline tracing, amongst others; parallelization of the streamline code, which significantly improves wall clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We designed several novel ideas in the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational efforts where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids

  1. The Shuttle Mission Simulator computer generated imagery

    NASA Technical Reports Server (NTRS)

    Henderson, T. H.

    1984-01-01

    Equipment available in the primary training facility for the Space Transportation System (STS) flight crews includes the Fixed Base Simulator, the Motion Base Simulator, the Spacelab Simulator, and the Guidance and Navigation Simulator. The Shuttle Mission Simulator (SMS) consists of the Fixed Base Simulator and the Motion Base Simulator. The SMS utilizes four visual Computer Generated Image (CGI) systems. The Motion Base Simulator has a forward crew station with six-degree-of-freedom motion simulation. Operation of the Spacelab Simulator is planned for the spring of 1983. The Guidance and Navigation Simulator went into operation in 1982. Aspects of orbital visual simulation are discussed, taking into account the earth scene, payload simulation, the generation and display of 1079 stars, the simulation of sun glare, and Reaction Control System jet firing plumes. Attention is also given to landing site visual simulation, and night launch and landing simulation.

  2. Understanding Islamist political violence through computational social simulation

    SciTech Connect

    Watkins, Jennifer H; Mackerrow, Edward P; Patelli, Paolo G; Eberhardt, Ariane; Stradling, Seth G

    2008-01-01

    Understanding the process that enables political violence is of great value in reducing the future demand for and support of violent opposition groups. Methods are needed that allow alternative scenarios and counterfactuals to be scientifically researched. Computational social simulation shows promise in developing 'computer experiments' that would be unfeasible or unethical in the real world. Additionally, the process of modeling and simulation reveals and challenges assumptions that may not be noted in theories, exposes areas where data is not available, and provides a rigorous, repeatable, and transparent framework for analyzing the complex dynamics of political violence. This paper demonstrates the computational modeling process using two simulation techniques: system dynamics and agent-based modeling. The benefits and drawbacks of both techniques are discussed. In developing these social simulations, we discovered that the social science concepts and theories needed to accurately simulate the associated psychological and social phenomena were lacking.

  3. Development of simulation computer complex specification

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The Training Simulation Computer Complex Study was one of three studies contracted in support of preparations for procurement of a shuttle mission simulator for shuttle crew training. The subject study was concerned with definition of the software loads to be imposed on the computer complex to be associated with the shuttle mission simulator and the development of procurement specifications based on the resulting computer requirements. These procurement specifications cover the computer hardware and system software as well as the data conversion equipment required to interface the computer to the simulator hardware. The development of the necessary hardware and software specifications required the execution of a number of related tasks which included: (1) simulation software sizing, (2) computer requirements definition, (3) data conversion equipment requirements definition, (4) system software requirements definition, (5) a simulation management plan, (6) a background survey, and (7) preparation of the specifications.

  4. Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK.

    PubMed

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting

    2016-01-01

    This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs) for performing rapid and accurate simulations. Inverter-based TDSTSs, which offer the benefits of low cost and a simple structure for temperature-to-digital conversion, have been developed. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluations. However, such tools require extremely long simulation times and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) by which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in the SIMULINK environment was developed to substantially accelerate the simulation and simplify the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator are in favorable agreement with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507
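
    A minimal behavioral sketch of such a time-domain sensor (all coefficients invented for illustration; the record's actual TDM equations are not reproduced here): the per-inverter delay grows with temperature, and a counter digitizes the delay-line propagation time.

      def stage_delay_ns(T, d0=1.0, k=2.5e-3):
          # Hypothetical temperature-dependent model (TDM): per-inverter
          # delay rising roughly linearly with temperature.
          return d0 * (1.0 + k * (T - 25.0))

      def sensor_code(T, n_stages=1024, t_clk_ns=0.5):
          # Digital output: reference-clock ticks counted while a pulse
          # propagates through the inverter delay line at temperature T.
          return int(n_stages * stage_delay_ns(T) / t_clk_ns)

      for T in (0, 25, 50, 85):
          print(T, "C ->", sensor_code(T))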

  5. Accurate charge capture and cost allocation: cost justification for bedside computing.

    PubMed Central

    Grewal, R.; Reed, R. L.

    1993-01-01

    This paper shows that cost justification for bedside clinical computing can be made by recouping charges with accurate charge capture. Twelve months worth of professional charges for a sixteen bed surgical intensive care unit are computed from charted data in a bedside clinical database and are compared to the professional charges actually billed by the unit. A substantial difference in predicted charges and billed charges was found. This paper also discusses the concept of appropriate cost allocation in the inpatient environment and the feasibility of appropriate allocation as a by-product of bedside computing. PMID:8130444

  6. Space Ultrareliable Modular Computer (SUMC) instruction simulator

    NASA Technical Reports Server (NTRS)

    Curran, R. T.

    1972-01-01

    The design principles, description, functional operation, and recommended expansion and enhancements are presented for the Space Ultrareliable Modular Computer interpretive simulator. Included as appendices are the user's manual, program module descriptions, target instruction descriptions, simulator source program listing, and a sample program printout. In discussing the design and operation of the simulator, the key problems involving host computer independence and target computer architectural scope are brought into focus.

  7. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.

  8. Simulating Drosophila Genetics with the Computer.

    ERIC Educational Resources Information Center

    Small, James W., Jr.; Edwards, Kathryn L.

    1979-01-01

    Presents some techniques developed to help improve student understanding of Mendelian principles through the use of a computer simulation model by the genetic system of the fruit fly. Includes discussion and evaluation of this computer assisted program. (MA)

  9. Protocols for Handling Messages Between Simulation Computers

    NASA Technical Reports Server (NTRS)

    Balcerowski, John P.; Dunnam, Milton

    2006-01-01

    Practical Simulator Network (PSimNet) is a set of data-communication protocols designed especially for use in handling messages between computers that are engaging cooperatively in real-time or nearly-real-time training simulations. In a typical application, computers that provide individualized training at widely dispersed locations would communicate, by use of PSimNet, with a central host computer that would provide a common computational- simulation environment and common data. Originally intended for use in supporting interfaces between training computers and computers that simulate the responses of spacecraft scientific payloads, PSimNet could be especially well suited for a variety of other applications -- for example, group automobile-driver training in a classroom. Another potential application might lie in networking of automobile-diagnostic computers at repair facilities to a central computer that would compile the expertise of numerous technicians and engineers and act as an expert consulting technician.

  10. Chip level simulation of fault tolerant computers

    NASA Technical Reports Server (NTRS)

    Armstrong, J. R.

    1982-01-01

    Chip-level modeling techniques for the evaluation of fault-tolerant systems were researched. A fault-tolerant computer was modeled. An efficient approach to functional fault simulation was developed. Simulation software was also developed.

  11. Monte Carlo Computer Simulation of a Rainbow.

    ERIC Educational Resources Information Center

    Olson, Donald; And Others

    1990-01-01

    Discusses making a computer-simulated rainbow using principles of physics, such as reflection and refraction. Provides BASIC program for the simulation. Appends a program illustrating the effects of dispersion of the colors. (YP)

  12. Computer Based Simulation of Laboratory Experiments.

    ERIC Educational Resources Information Center

    Edward, Norrie S.

    1997-01-01

    Examines computer based simulations of practical laboratory experiments in engineering. Discusses the aims and achievements of lab work (cognitive, process, psychomotor, and affective); types of simulations (model building and behavioral); and the strengths and weaknesses of simulations. Describes the development of a centrifugal pump simulation,…

  13. Constructivist Design of Graphic Computer Simulations.

    ERIC Educational Resources Information Center

    Black, John B.; And Others

    Two graphic computer simulations have been prepared for teaching high school and middle school students about how business organizations and financial systems work: "Parkside," which simulates managing a hotel; and "Guestwear," which simulates managing a clothing manufacturer. Both simulations are based on six principles of constructivist design…

  14. Computer-based personality judgments are more accurate than those made by humans

    PubMed Central

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-01

    Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507

  15. Computer-based personality judgments are more accurate than those made by humans.

    PubMed

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507
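
    As a rough illustration of the modeling approach described above (a sketch on synthetic data; the study's actual features, sample, and model details are not reproduced), ridge regression can map a binary Likes matrix to a trait score, with accuracy reported as a Pearson r on held-out users.

      import numpy as np

      rng = np.random.default_rng(1)
      n_users, n_likes = 2000, 400
      X = (rng.random((n_users, n_likes)) < 0.05).astype(float)  # binary "Likes"
      beta = rng.normal(size=n_likes)
      y = X @ beta + rng.normal(scale=2.0, size=n_users)  # synthetic trait score

      Xtr, ytr, Xte, yte = X[:1500], y[:1500], X[1500:], y[1500:]
      alpha = 10.0  # ridge penalty (invented)
      w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(n_likes), Xtr.T @ ytr)
      r = np.corrcoef(Xte @ w, yte)[0, 1]  # judgment accuracy on held-out users
      print(f"held-out accuracy: r = {r:.2f}")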

  16. Computer Simulation in Chemical Kinetics

    ERIC Educational Resources Information Center

    Anderson, Jay Martin

    1976-01-01

    Discusses the use of the System Dynamics technique in simulating a chemical reaction for kinetic analysis. Also discusses the use of simulation modelling in biology, ecology, and the social sciences, where experimentation may be impractical or impossible. (MLH)

  17. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    NASA Technical Reports Server (NTRS)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  18. Evaluation of a Second-Order Accurate Navier-Stokes Code for Detached Eddy Simulation Past a Circular Cylinder

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Singer, Bart A.

    2003-01-01

    We evaluate the applicability of a production computational fluid dynamics code for conducting detached eddy simulation for unsteady flows. A second-order accurate Navier-Stokes code developed at NASA Langley Research Center, known as TLNS3D, is used for these simulations. We focus our attention on high Reynolds number flow (Re = 5 x 10(sup 4) - 1.4 x 10(sup 5)) past a circular cylinder to simulate flows with large-scale separations. We consider two types of flow situations: one in which the flow at the separation point is laminar, and the other in which the flow is already turbulent when it detaches from the surface of the cylinder. Solutions are presented for two- and three-dimensional calculations using both the unsteady Reynolds-averaged Navier-Stokes paradigm and the detached eddy simulation treatment. All calculations use the standard Spalart-Allmaras turbulence model as the base model.

  19. Computer simulation of CPM dye lasers

    SciTech Connect

    Wang Qingyue; Zhao Xingjun

    1990-01-01

    Quantitative analysis of the laser pulses of various intracavity elements in a CPM dye laser is carried out in this study. The pulse formation is simulated with a computer, resulting in an asymmetric numerical solution for the pulse shape. The mechanisms of pulse formation are also discussed based on the results of computer simulation.

  20. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    PubMed

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose by studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo Zernike and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of color moment invariants. PMID:24216719

  1. VLSI circuit simulation using a vector computer

    NASA Technical Reports Server (NTRS)

    Mcgrogan, S. K.

    1984-01-01

    Simulation of circuits having more than 2000 active devices requires the largest, fastest computers available. A vector computer, such as the CYBER 205, can yield great speed and cost advantages if efforts are made to adapt the simulation program to the strengths of the computer. ASPEC and SPICE (1), two widely used circuit simulation programs, are discussed. ASPECV and VAMOS (5) are respectively vector adaptations of these two simulators. They demonstrate the substantial performance enhancements possible for this class of algorithm on the CYBER 205.

  2. Nonlinear preconditioning for efficient and accurate interface capturing in simulation of multicomponent compressible flows

    NASA Astrophysics Data System (ADS)

    Shukla, Ratnesh K.

    2014-11-01

    Single fluid schemes that rely on an interface function for phase identification in multicomponent compressible flows are widely used to study hydrodynamic flow phenomena in several diverse applications. Simulations based on standard numerical implementation of these schemes suffer from an artificial increase in the width of the interface function owing to the numerical dissipation introduced by an upwind discretization of the governing equations. In addition, monotonicity requirements which ensure that the sharp interface function remains bounded at all times necessitate use of low-order accurate discretization strategies. This results in a significant reduction in accuracy along with a loss of intricate flow features. In this paper we develop a nonlinear transformation based interface capturing method which achieves superior accuracy without compromising the simplicity, computational efficiency and robustness of the original flow solver. A nonlinear map from the signed distance function to the sigmoid type interface function is used to effectively couple a standard single fluid shock and interface capturing scheme with a high-order accurate constrained level set reinitialization method in a way that allows for oscillation-free transport of the sharp material interface. Imposition of a maximum principle, which ensures that the multidimensional preconditioned interface capturing method does not produce new maxima or minima even in the extreme events of interface merger or breakup, allows for an explicit determination of the interface thickness in terms of the grid spacing. A narrow band method is formulated in order to localize computations pertinent to the preconditioned interface capturing method. Numerical tests in one dimension reveal a significant improvement in accuracy and convergence; in stark contrast to the conventional scheme, the proposed method retains its accuracy and convergence characteristics in a shifted reference frame. Results from the test
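
    A one-dimensional toy version of this preconditioning idea (a sketch only; the interface map, thickness parameter, and first-order upwind transport used here are simplifications of what the record describes): transport the nearly linear distance-like variable, then map it back through a sigmoid so the interface function stays sharp and bounded in (0, 1). Because the transported variable is locally linear near the interface, the upwind step introduces almost no smearing of the mapped interface function.

      import numpy as np

      N = 400
      x = np.linspace(0.0, 1.0, N, endpoint=False)
      dx = x[1] - x[0]
      eps = 1.5 * dx                           # interface half-thickness (assumed)
      u, dt, nsteps = 1.0, 0.4 * dx, 200       # speed, CFL = 0.4, travel = 0.2

      phi = x - 0.3                            # signed-distance-like field
      psi = 1.0 / (1.0 + np.exp(-phi / eps))   # sharp sigmoid interface function

      for _ in range(nsteps):
          # precondition: map the bounded interface function back to the
          # (nearly linear) distance variable before transporting it
          p = np.clip(psi, 1e-12, 1.0 - 1e-12)
          phi = eps * np.log(p / (1.0 - p))
          phi[1:] -= u * dt / dx * (phi[1:] - phi[:-1])   # first-order upwind, u > 0
          psi = 1.0 / (1.0 + np.exp(-phi / eps))          # map forward again

      print("interface located near x =", x[np.argmin(np.abs(psi - 0.5))])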

  3. Optimum spaceborne computer system design by simulation

    NASA Technical Reports Server (NTRS)

    Williams, T.; Kerner, H.; Weatherbee, J. E.; Taylor, D. S.; Hodges, B.

    1973-01-01

    A deterministic simulator is described which models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Its use as a tool to study and determine the minimum computer system configuration necessary to satisfy the on-board computational requirements of a typical mission is presented. The paper describes how the computer system configuration is determined in order to satisfy the data processing demand of the various shuttle booster subsystems. The configuration which is developed as a result of studies with the simulator is optimal with respect to the efficient use of computer system resources.

  4. Analyzing Robotic Kinematics Via Computed Simulations

    NASA Technical Reports Server (NTRS)

    Carnahan, Timothy M.

    1992-01-01

    Computing system assists in evaluation of kinematics of conceptual robot. Displays positions and motions of robotic manipulator within work cell. Also displays interactions between robotic manipulator and other objects. Results of simulation displayed on graphical computer workstation. System includes both off-the-shelf software originally developed for automotive industry and specially developed software. Simulation system also used to design human-equivalent hand, to model optical train in infrared system, and to develop graphical interface for teleoperator simulation system.

  5. Computationally Lightweight Air-Traffic-Control Simulation

    NASA Technical Reports Server (NTRS)

    Knight, Russell

    2005-01-01

    An algorithm for computationally lightweight simulation of automated air traffic control (ATC) at a busy airport has been derived. The algorithm is expected to serve as the basis for development of software that would be incorporated into flight-simulator software, the ATC component of which is not yet capable of handling realistic airport loads. Software based on this algorithm could also be incorporated into other computer programs that simulate a variety of scenarios for purposes of training or amusement.

  6. An accurate and efficient computation method of the hydration free energy of a large, complex molecule

    NASA Astrophysics Data System (ADS)

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-01

    The hydration free energy (HFE) is a crucially important physical quantity to discuss various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of ⟨u_UV⟩/2 (⟨u_UV⟩ is the ensemble average of the sum of the pair interaction energies between the solute and water molecules) and the water reorganization term mainly reflecting the excluded volume effect. Since ⟨u_UV⟩ can readily be computed through an MD simulation of the system composed of the solute and water, an efficient computation of the latter term leads to a reduction of computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of a solute and the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, the use of the MA enables us to provide an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with substantial reduction of the computational load.
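
    As a worked illustration of the morphometric form of the reorganization term (the coefficients below are invented, not the ER-fitted values; the solute is idealized as a sphere so that its four geometric measures have closed forms):

      import numpy as np

      # Hypothetical MA coefficients; in the paper they are determined
      # with the energy-representation (ER) method.
      c = np.array([0.08, -0.01, 0.002, -0.0005])

      def morphometric_reorg(R):
          # Four geometric measures of a sphere of radius R: excluded volume,
          # surface area, integrated mean curvature, integrated Gaussian curvature.
          V = 4.0 / 3.0 * np.pi * R**3
          A = 4.0 * np.pi * R**2
          C = 4.0 * np.pi * R
          X = 4.0 * np.pi
          return c @ np.array([V, A, C, X])

      def hydration_free_energy(u_uv_mean, R):
          # HFE ~ <u_UV>/2 + water-reorganization term from the MA
          return 0.5 * u_uv_mean + morphometric_reorg(R)

      print(hydration_free_energy(u_uv_mean=-120.0, R=8.0))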

  7. Computer Clinical Simulations in Health Sciences.

    ERIC Educational Resources Information Center

    Jones, Gary L; Keith, Kenneth D.

    1983-01-01

    Discusses the key characteristics of clinical simulation, some developmental foundations, two current research studies, and some implications for the future of health science education. Investigations of the effects of computer-based simulation indicate that acquisition of decision-making skills is greater than with noncomputerized simulations.…

  8. Computer simulation of nonequilibrium processes

    SciTech Connect

    Hoover, W.G.; Moran, B.; Holian, B.L.; Posch, H.A.; Bestiale, S.

    1987-01-01

    Recent atomistic simulations of irreversible macroscopic hydrodynamic flows are illustrated. An extension of Nosé's reversible atomistic mechanics makes it possible to simulate such non-equilibrium systems with completely reversible equations of motion. The new techniques show that macroscopic irreversibility is a natural inevitable consequence of time-reversible Lyapunov-unstable microscopic equations of motion.

  9. Performance evaluation using SYSTID time domain simulation. [computer-aid design and analysis for communication systems

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.; Ziemer, R. E.; Fashano, M. J.

    1975-01-01

    This paper reviews the SYSTID technique for performance evaluation of communication systems using time-domain computer simulation. An example program illustrates the language. The inclusion of both Gaussian and impulse noise models makes accurate simulation possible in a wide variety of environments. A very flexible postprocessor makes possible accurate and efficient performance evaluation.
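
    A minimal sketch of the kind of time-domain run such a simulator performs (BPSK symbols with invented noise levels; SYSTID's actual language and models are not reproduced): combine a Gaussian background with rare, strong impulses and estimate the bit error rate.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000
      bits = rng.integers(0, 2, n)
      s = 2.0 * bits - 1.0                       # BPSK symbols

      noise = rng.normal(scale=0.5, size=n)      # Gaussian background noise
      hits = rng.random(n) < 1e-3                # rare impulsive events
      noise += hits * rng.normal(scale=10.0, size=n)

      ber = np.mean((s + noise > 0).astype(int) != bits)
      print(f"simulated bit error rate: {ber:.2e}")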

  10. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today (a common model is the actuator disk concept) are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of

  11. Filtration theory using computer simulations

    SciTech Connect

    Bergman, W.; Corey, I.

    1997-08-01

    We have used commercially available fluid dynamics codes based on Navier-Stokes theory and the Langevin particle equation of motion to compute the particle capture efficiency and pressure drop through selected two- and three-dimensional fiber arrays. The approach we used was to first compute the air velocity vector field throughout a defined region containing the fiber matrix. The particle capture in the fiber matrix is then computed by superimposing the Langevin particle equation of motion over the flow velocity field. Using the Langevin equation combines the particle Brownian motion, inertia and interception mechanisms in a single equation. In contrast, most previous investigations treat the different capture mechanisms separately. We have computed the particle capture efficiency and the pressure drop through one 2-D and two 3-D fiber matrix elements. 5 refs., 11 figs.
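
    The following sketch illustrates the superposition idea in simplified form (a uniform stand-in flow instead of a computed Navier-Stokes field; drag and noise parameters are invented): one common Langevin form combines drag toward the local fluid velocity with Brownian kicks, and capture is checked against the fiber radius.

      import numpy as np

      rng = np.random.default_rng(3)

      def flow(pos):
          # Stand-in velocity field: uniform flow in +x past a fiber at the
          # origin (a real run would use the computed field around the fibers).
          return np.array([1.0, 0.0])

      def langevin_capture(y0, tau=0.05, D=1e-3, dt=1e-3, r_fiber=0.1):
          # dv = (u(x) - v)/tau dt + sqrt(2 D / tau^2) dW  (one common form)
          pos, vel = np.array([-1.0, y0]), np.array([1.0, 0.0])
          for _ in range(4000):
              vel += (flow(pos) - vel) / tau * dt \
                     + np.sqrt(2.0 * D * dt) / tau * rng.normal(size=2)
              pos += vel * dt
              if np.hypot(*pos) < r_fiber:      # intercepted by the fiber
                  return True
          return False

      starts = rng.uniform(-0.3, 0.3, 200)
      print("capture efficiency ~", np.mean([langevin_capture(y) for y in starts]))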

  12. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.

    PubMed

    Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan

    2015-10-01

    Fast calculation and correct depth cue are crucial issues in the calculation of computer-generated hologram (CGH) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a layer-corresponded sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
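
    A compact sketch of the non-paraxial angular-spectrum propagation step at the heart of such layer-oriented methods (grid, wavelength, and distance below are illustrative; evanescent components are simply dropped). A full CGH would repeat this for every depth layer and sum the sub-holograms.

      import numpy as np

      def angular_spectrum(u0, wavelength, dx, z):
          # Propagate field u0 a distance z with the transfer function
          # H = exp(i 2 pi z sqrt(1/lambda^2 - fx^2 - fy^2)), no paraxial approx.
          ny, nx = u0.shape
          fx = np.fft.fftfreq(nx, dx)
          fy = np.fft.fftfreq(ny, dx)
          FX, FY = np.meshgrid(fx, fy)
          arg = 1.0 / wavelength**2 - FX**2 - FY**2
          H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
          H[arg < 0] = 0.0                     # drop evanescent components
          return np.fft.ifft2(np.fft.fft2(u0) * H)

      # one layer of a 3-D scene: a point source mapped to its sub-hologram
      layer = np.zeros((256, 256), complex)
      layer[128, 128] = 1.0
      sub_hologram = angular_spectrum(layer, wavelength=532e-9, dx=8e-6, z=0.05)
      print(abs(sub_hologram).max())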

  13. Material Models for Accurate Simulation of Sheet Metal Forming and Springback

    NASA Astrophysics Data System (ADS)

    Yoshida, Fusahito

    2010-06-01

    For anisotropic sheet metals, modeling of anisotropy and the Bauschinger effect is discussed in the framework of the Yoshida-Uemori kinematic hardening model combined with anisotropic yield functions. The performance of the models in predicting yield loci and cyclic stress-strain responses for several types of steel and aluminum sheets is demonstrated by comparing the numerical simulation results with the corresponding experimental observations. From some examples of FE simulation of sheet metal forming and springback, it is concluded that modeling of both the anisotropy and the Bauschinger effect is essential for accurate numerical simulation.

  14. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

    Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL's below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machine's CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
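
    For reference, a sketch of a Gottlieb-Turkel-type 2-4 MacCormack step applied to linear advection at CFL = 0.25 (a scalar model problem only; the Jameson-type artificial-viscosity terms studied in the record are omitted):

      import numpy as np

      # 2-4 MacCormack: 2nd order in time, 4th in space, for u_t + a u_x = 0
      N, a, cfl = 200, 1.0, 0.25
      x = np.linspace(0.0, 1.0, N, endpoint=False)
      dx = x[1] - x[0]
      dt = cfl * dx / a
      lam = a * dt / dx
      u0 = np.exp(-200.0 * (x - 0.5) ** 2)      # smooth pulse, periodic grid

      u = u0.copy()
      for _ in range(round(1.0 / dt)):          # advect through one full period
          # predictor: one-sided forward differences with 7/-1 weighting
          up = u - lam / 6.0 * (7.0 * (np.roll(u, -1) - u)
                                - (np.roll(u, -2) - np.roll(u, -1)))
          # corrector: mirrored backward differences on the predicted state
          u = 0.5 * (u + up - lam / 6.0 * (7.0 * (up - np.roll(up, 1))
                                           - (np.roll(up, 1) - np.roll(up, 2))))

      print("max error after one period:", float(np.abs(u - u0).max()))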

  15. Palm computer demonstrates a fast and accurate means of burn data collection.

    PubMed

    Lal, S O; Smith, F W; Davis, J P; Castro, H Y; Smith, D W; Chinkes, D L; Barrow, R E

    2000-01-01

    Manual biomedical data collection and entry of the data into a personal computer is time-consuming and can be prone to errors. The purpose of this study was to compare data entry into a hand-held computer versus handwritten data followed by entry of the data into a personal computer. A Palm (3Com Palm IIIx, Santa Clara, Calif) computer with a custom menu-driven program was used for the entry and retrieval of burn-related variables. These variables were also used to create an identical sheet that was filled in by hand. Identical data were retrieved twice from 110 charts 48 hours apart and then used to create an Excel (Microsoft, Redmond, Wash) spreadsheet. One time, data were recorded by the Palm entry method; the other time, the data were handwritten. The method of retrieval was alternated between the Palm system and the handwritten system every 10 charts. The total time required to log data and to generate an Excel spreadsheet was recorded and used as a study endpoint. The Palm method of data collection and downloading to a personal computer was 23% faster than hand recording followed by personal computer entry (P < 0.05), and 58% fewer errors were generated with the Palm method. The Palm is a faster and more accurate means of data collection than a handwritten technique. PMID:11194811

  16. Evaluation of Visual Computer Simulator for Computer Architecture Education

    ERIC Educational Resources Information Center

    Imai, Yoshiro; Imai, Masatoshi; Moritoh, Yoshio

    2013-01-01

    This paper presents a trial evaluation, conducted in 2009-2011, of a visual computer simulator that has been developed to play the roles of both an instruction facility and a learning tool simultaneously. It also illustrates an example of Computer Architecture education for university students and the usage of an e-Learning tool for Assembly Programming in order to…

  17. High performance computing for domestic petroleum reservoir simulation

    SciTech Connect

    Zyvoloski, G.; Auer, L.; Dendy, J.

    1996-06-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory. High-performance computing offers the prospect of greatly increasing the resolution at which petroleum reservoirs can be represented in simulation models. The increases in resolution can be achieved through large increases in computational speed and memory, if machine architecture and numerical methods for solution of the multiphase flow equations can be used to advantage. Perhaps more importantly, the increased speed and size of today's computers make it possible to add physical processes to simulation codes that heretofore were too expensive in terms of computer time and memory to be practical. These factors combine to allow the development of new, more accurate methods for optimizing petroleum reservoir production.

  18. Computer simulation of upset welding

    SciTech Connect

    Spingarn, J R; Mason, W E; Swearengen, J C

    1982-04-01

    Useful process modeling of upset welding requires contributions from metallurgy, welding engineering, thermal analysis and experimental mechanics. In this report, the significant milestones for such an effort are outlined and probable difficult areas are pointed out. Progress to date is summarized and directions for future research are offered. With regard to the computational aspects of this problem, a 2-D heat conduction computer code has been modified to incorporate electrical heating, and computations have been run for an axisymmetric problem with simple viscous material laws and d.c. electrical boundary conditions. In the experimental endeavor, the boundary conditions have been measured during the welding process, although interpretation of voltage drop measurements is not straightforward. The ranges of strain, strain rate and temperature encountered during upset welding have been measured or calculated, and the need for a unifying constitutive law is described. Finally, the possible complications of microstructure and interfaces are clarified.

  19. Novel electromagnetic surface integral equations for highly accurate computations of dielectric bodies with arbitrarily low contrasts

    SciTech Connect

    Erguel, Ozguer; Guerel, Levent

    2008-12-01

    We present a novel stabilization procedure for accurate surface formulations of electromagnetic scattering problems involving three-dimensional dielectric objects with arbitrarily low contrasts. Conventional surface integral equations provide inaccurate results for the scattered fields when the contrast of the object is low, i.e., when the electromagnetic material parameters of the scatterer and the host medium are close to each other. We propose a stabilization procedure involving the extraction of nonradiating currents and rearrangement of the right-hand side of the equations using fictitious incident fields. Then, only the radiating currents are solved to calculate the scattered fields accurately. This technique can easily be applied to the existing implementations of conventional formulations, it requires negligible extra computational cost, and it is also appropriate for the solution of large problems with the multilevel fast multipole algorithm. We show that the stabilization leads to robust formulations that are valid even for the solutions of extremely low-contrast objects.

  20. An accurate quadrature technique for the contact boundary in 3D finite element computations

    NASA Astrophysics Data System (ADS)

    Duong, Thang X.; Sauer, Roger A.

    2015-01-01

    This paper presents a new numerical integration technique for 3D contact finite element implementations, focusing on a remedy for the inaccurate integration due to discontinuities at the boundary of contact surfaces. The method is based on the adaptive refinement of the integration domain along the boundary of the contact surface, and is accordingly denoted RBQ for refined boundary quadrature. It can be used for common element types of any order, e.g. Lagrange, NURBS, or T-Spline elements. In terms of both computational speed and accuracy, RBQ exhibits great advantages over a naive increase of the number of quadrature points. Also, the RBQ method is shown to remain accurate for large deformations. Furthermore, since the sharp boundary of the contact surface is determined, it can be used for various purposes like the accurate post-processing of the contact pressure. Several examples are presented to illustrate the new technique.

  1. Computational Spectrum of Agent Model Simulation

    SciTech Connect

    Perumalla, Kalyan S

    2010-01-01

    The study of human social behavioral systems is finding renewed interest in military, homeland security and other applications. Simulation is the most generally applied approach to studying complex scenarios in such systems. Here, we outline some of the important considerations that underlie the computational aspects of simulation-based study of human social systems. The fundamental imprecision underlying questions and answers in social science makes it necessary to carefully distinguish among different simulation problem classes and to identify the most pertinent set of computational dimensions associated with those classes. We identify a few such classes and present their computational implications. The focus is then shifted to the most challenging combinations in the computational spectrum, namely, large-scale entity counts at moderate to high levels of fidelity. Recent developments in furthering the state-of-the-art in these challenging cases are outlined. A case study of large-scale agent simulation is provided, in which large numbers (millions) of social entities are simulated at real-time speeds on inexpensive hardware. Recent computational results are identified that highlight the potential of modern high-end computing platforms to push the envelope with respect to speed, scale and fidelity of social system simulations. Finally, the problem of shielding the modeler or domain expert from the complex computational aspects is discussed and a few potential solution approaches are identified.

  2. A Generic Scheduling Simulator for High Performance Parallel Computers

    SciTech Connect

    Yoo, B S; Choi, G S; Jette, M A

    2001-08-01

    It is well known that efficient job scheduling plays a crucial role in achieving high system utilization in large-scale high performance computing environments. A good scheduling algorithm should schedule jobs to achieve high system utilization while satisfying various user demands in an equitable fashion. Designing such a scheduling algorithm is a non-trivial task even in a static environment. In practice, the computing environment and workload are constantly changing. There are several reasons for this. First, the computing platforms constantly evolve as the technology advances. For example, the availability of relatively powerful commodity off-the-shelf (COTS) components at steadily diminishing prices has made it feasible to construct ever larger massively parallel computers in recent years [1, 4]. Second, the workload imposed on the system also changes constantly. The rapidly increasing compute resources have provided many application developers with the opportunity to radically alter program characteristics and take advantage of these additional resources. New developments in software technology may also trigger changes in user applications. Finally, political climate change may alter user priorities or the mission of the organization. System designers in such dynamic environments must be able to accurately forecast the effect of changes in the hardware, software, and/or policies under consideration. If the environmental changes are significant, one must also reassess scheduling algorithms. Simulation has frequently been relied upon for this analysis, because other methods such as analytical modeling or actual measurements are usually too difficult or costly. A drawback of the simulation approach, however, is that developing a simulator is a time-consuming process. Furthermore, an existing simulator cannot be easily adapted to a new environment. In this research, we attempt to develop a generic job-scheduling simulator, which facilitates the evaluation of

  3. An accurate Fortran code for computing hydrogenic continuum wave functions at a wide range of parameters

    NASA Astrophysics Data System (ADS)

    Peng, Liang-You; Gong, Qihuang

    2010-12-01

    The accurate computations of hydrogenic continuum wave functions are very important in many branches of physics such as electron-atom collisions, cold atom physics, and atomic ionization in strong laser fields, etc. Although there already exist various algorithms and codes, most of them are only reliable in certain ranges of parameters. In some practical applications, accurate continuum wave functions need to be calculated at extremely low energies, large radial distances and/or large angular momentum number. Here we provide such a code, which can generate accurate hydrogenic continuum wave functions and corresponding Coulomb phase shifts at a wide range of parameters. Without any essential restriction on the angular momentum number, the present code is able to give reliable results at the electron energy range [10,10] eV for radial distances of [10,10] a.u. We also find the present code is very efficient, which should find numerous applications in many fields such as strong field physics. Program summary: Program title: HContinuumGautchi Catalogue identifier: AEHD_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1233 No. of bytes in distributed program, including test data, etc.: 7405 Distribution format: tar.gz Programming language: Fortran90 in fixed format Computer: AMD Processors Operating system: Linux RAM: 20 MBytes Classification: 2.7, 4.5 Nature of problem: The accurate computation of atomic continuum wave functions is very important in many research fields such as strong field physics and cold atom physics. Although various algorithms and codes already exist, most of them are applicable and reliable only in a certain range of parameters. We present here an accurate FORTRAN program for
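
    Independent spot checks of such codes can be made with arbitrary-precision libraries; the snippet below (parameters chosen arbitrarily) evaluates the regular and irregular Coulomb wave functions with mpmath at low energy and large radius. It is a cross-check sketch, not the distributed Fortran program.

      from mpmath import mp, coulombf, coulombg

      mp.dps = 30                     # work at 30 significant digits
      l, k = 0, 0.01                  # angular momentum; momentum in a.u.
      eta = -1.0 / k                  # Sommerfeld parameter, attractive Z = 1
      for r in (1.0, 100.0, 10000.0): # radial distances in a.u.
          rho = k * r
          print(r, coulombf(l, eta, rho), coulombg(l, eta, rho))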

  4. Towards fast and accurate algorithms for processing fuzzy data: interval computations revisited

    NASA Astrophysics Data System (ADS)

    Xiang, Gang; Kreinovich, Vladik

    2013-02-01

    In many practical applications, we need to process data, e.g. to predict the future values of different quantities based on their current values. Often, the only information that we have about the current values comes from experts, and is described in informal ('fuzzy') terms like 'small'. To process such data, it is natural to use fuzzy techniques, techniques specifically designed by Lotfi Zadeh to handle such informal information. In this survey, we start by revisiting the motivation behind Zadeh's formulae for processing fuzzy data, and explain how the algorithmic problem of processing fuzzy data can be described in terms of interval computations (α-cuts). Many fuzzy practitioners claim 'I tried interval computations, they did not work' - meaning that they got estimates which are much wider than the desired α-cuts. We show that such statements are usually based on a (widely spread) misunderstanding - that interval computations simply mean replacing each arithmetic operation with the corresponding operation with intervals. We show that while such straightforward interval techniques indeed often lead to over-wide estimates, the current advanced interval computations techniques result in estimates which are much more accurate. We overview such advanced interval computations techniques, and show that by using them, we can efficiently and accurately process fuzzy data. We wrote this survey with three audiences in mind. First, we want fuzzy researchers and practitioners to understand the current advanced interval computations techniques and to use them to come up with faster and more accurate algorithms for processing fuzzy data. For this 'fuzzy' audience, we explain these current techniques in detail. Second, we also want interval researchers to better understand this important application area for their techniques. For this 'interval' audience, we want to explain where fuzzy techniques come from, what are possible variants of these techniques, and what are the
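
    A tiny example of the point being made (for f(x) = x(1 - x) on the α-cut [0.1, 0.3]): naive operation-by-operation interval evaluation overestimates the range, while a mean-value (centered) form, one of the simplest advanced techniques, gives a tighter enclosure.

      from itertools import product

      def imul(a, b):
          # interval multiplication: min/max over endpoint products
          ps = [x * y for x, y in product(a, b)]
          return (min(ps), max(ps))

      def iadd(a, b):
          return (a[0] + b[0], a[1] + b[1])

      X = (0.1, 0.3)                       # an alpha-cut of "small"
      # naive evaluation: replace each arithmetic operation by its interval version
      naive = imul(X, (1.0 - X[1], 1.0 - X[0]))

      # mean-value form: f(c) + f'(X) (X - c), with c the midpoint
      c = 0.5 * (X[0] + X[1])
      fprime = (1.0 - 2.0 * X[1], 1.0 - 2.0 * X[0])    # enclosure of 1 - 2x
      mv = iadd((c * (1 - c), c * (1 - c)), imul(fprime, (X[0] - c, X[1] - c)))

      print("naive:", naive, " mean-value form:", mv, " true range:", (0.09, 0.21))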

  5. Economic Analysis. Computer Simulation Models.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    A multimedia course in economic analysis was developed and used in conjunction with the United States Naval Academy. (See ED 043 790 and ED 043 791 for final reports of the project evaluation and development model.) This volume of the text discusses the simulation of behavioral relationships among variable elements in an economy and presents…

  6. Astronomy Simulation with Computer Graphics.

    ERIC Educational Resources Information Center

    Thomas, William E.

    1982-01-01

    "Planetary Motion Simulations" is a system of programs designed for students to observe motions of a superior planet (one whose orbit lies outside the orbit of the earth). Programs run on the Apple II microcomputer and employ high-resolution graphics to present the motions of Saturn. (Author/JN)

  7. Computer simulation of engine systems

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.

    1980-01-01

    The use of computerized simulations of the steady state and transient performance of jet engines throughout the flight regime is discussed. In addition, installation effects on thrust and specific fuel consumption are accounted for, as well as engine weight, dimensions and cost. The availability throughout the government and industry of analytical methods for calculating these quantities is pointed out.

  8. Fiber Composite Sandwich Thermostructural Behavior: Computational Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Aiello, R. A.; Murthy, P. L. N.

    1986-01-01

    Several computational levels of progressive sophistication/simplification are described to computationally simulate composite sandwich hygral, thermal, and structural behavior. The computational levels of sophistication include: (1) three-dimensional detailed finite element modeling of the honeycomb, the adhesive and the composite faces; (2) three-dimensional finite element modeling of the honeycomb assumed to be an equivalent continuous, homogeneous medium, the adhesive and the composite faces; (3) laminate theory simulation where the honeycomb (metal or composite) is assumed to consist of plies with equivalent properties; and (4) derivations of approximate, simplified equations for thermal and mechanical properties by simulating the honeycomb as an equivalent homogeneous medium. The approximate equations are combined with composite hygrothermomechanical and laminate theories to provide a simple and effective computational procedure for simulating the thermomechanical/thermostructural behavior of fiber composite sandwich structures.

  9. Augmented Reality Simulations on Handheld Computers

    ERIC Educational Resources Information Center

    Squire, Kurt; Klopfer, Eric

    2007-01-01

    Advancements in handheld computing, particularly its portability, social interactivity, context sensitivity, connectivity, and individuality, open new opportunities for immersive learning environments. This article articulates the pedagogical potential of augmented reality simulations in environmental engineering education by immersing students in…

  10. Computer Simulation of F=m/a.

    ERIC Educational Resources Information Center

    Hayden, Howard C.

    1984-01-01

    Discusses a computer simulation which: (1) describes an experiment investigating F=m/a; (2) generates data; (3) allows students to see the data; and (4) generates the equation with a least-squares fit. (JN)
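
    A sketch of what such a simulation does (synthetic data; the mass and noise level are invented): generate noisy accelerations for a set of applied forces and recover the mass from a least-squares fit of the force-acceleration data.

      import numpy as np

      rng = np.random.default_rng(7)
      m = 2.0                                   # "unknown" mass (kg)
      F = np.linspace(1.0, 10.0, 20)            # applied forces (N)
      a = F / m + rng.normal(scale=0.05, size=F.size)   # measured accelerations

      slope, intercept = np.polyfit(a, F, 1)    # least-squares fit of F = m a
      print(f"fitted mass ~ {slope:.2f} kg, intercept ~ {intercept:.2f} N")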

  11. Computer simulation and optimization of radioelectronic devices

    NASA Astrophysics Data System (ADS)

    Benenson, Z. M.; Elistratov, M. R.; Ilin, L. K.; Kravchenko, S. V.; Sukhov, D. M.; Udler, M. A.

    Methods of simulating and optimizing radioelectronic devices in an automated design system are discussed, along with algorithms used in the computer-aided design of these devices. A special description language for these devices is also described.

  12. Special purpose hybrid transfinite elements and unified computational methodology for accurately predicting thermoelastic stress waves

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper represents an attempt to apply extensions of a hybrid transfinite element computational approach for accurately predicting thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating for the well known Danilovskaya problems is demonstrated. A unique feature of the proposed formulations for applicability to the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special purpose transfinite elements in conjunction with the classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and superior capability to capture the thermal stress waves induced due to boundary heating.

  13. Accurate Experiment to Computation Coupling for Understanding QH-mode physics using NIMROD

    NASA Astrophysics Data System (ADS)

    King, J. R.; Burrell, K. H.; Garofalo, A. M.; Groebner, R. J.; Hanson, J. D.; Hebert, J. D.; Hudson, S. R.; Pankin, A. Y.; Kruger, S. E.; Snyder, P. B.

    2015-11-01

    It is desirable to have an ITER H-mode regime that is quiescent to edge-localized modes (ELMs). The quiescent H-mode (QH-mode) with edge harmonic oscillations (EHO) is one such regime. High quality equilibria are essential for accurate EHO simulations with initial-value codes such as NIMROD. We include profiles outside the LCFS which generate associated currents when we solve the Grad-Shafranov equation with open-flux regions using the NIMEQ solver. The new solution is an equilibrium that closely resembles the original reconstruction (which does not contain open-flux currents). This regenerated equilibrium is consistent with the profiles that are measured by the high quality diagnostics on DIII-D. Results from nonlinear NIMROD simulations of the EHO are presented. The full measured rotation profiles are included in the simulation. The simulation develops into a saturated state. The saturation mechanism of the EHO is explored and simulation is compared to magnetic-coil measurements. This work is currently supported in part by the US DOE Office of Science under awards DE-FC02-04ER54698, DE-AC02-09CH11466 and the SciDAC Center for Extended MHD Modeling.

  14. Computer-simulated phacoemulsification improvements

    NASA Astrophysics Data System (ADS)

    Soederberg, Per G.; Laurell, Carl-Gustaf; Artzen, D.; Nordh, Leif; Skarman, Eva; Nordqvist, P.; Andersson, Mats

    2002-06-01

    A simulator for phacoemulsification cataract extraction is developed. A three-dimensional visual interface and foot pedals for phacoemulsification power, x-y positioning, zoom and focus were established. An algorithm that allows real time visual feedback of the surgical field was developed. Cataract surgery is the most common surgical procedure. The operation requires input from both feet and both hands and provides visual feedback through the operation microscope essentially without tactile feedback. Experience demonstrates that the number of complications for an experienced surgeon learning phacoemulsification, decreases exponentially, reaching close to the asymptote after the first 500 procedures despite initial wet lab training on animal eyes. Simulator training is anticipated to decrease training time, decrease complication rate for the beginner and reduce expensive supervision by a high volume surgeon.

  15. Teaching Environmental Systems Modelling Using Computer Simulation.

    ERIC Educational Resources Information Center

    Moffatt, Ian

    1986-01-01

    A computer modeling course in environmental systems and dynamics is presented. The course teaches senior undergraduates to analyze a system of interest, construct a system flow chart, and write computer programs to simulate real world environmental processes. An example is presented along with a course evaluation, figures, tables, and references.…
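
    A first assignment in such a course might look like the hypothetical stock-and-flow sketch below: a lake with pollutant inflow, through-flow, and first-order decay, integrated with plain Euler steps. All parameter values are assumed for illustration.

        # Minimal stock-and-flow model of a lake pollutant (hypothetical
        # example of the kind of program such a course assigns).
        V = 1.0e6          # lake volume, m^3 (assumed)
        Q = 1.0e4          # through-flow, m^3/day (assumed)
        k = 0.02           # first-order decay rate, 1/day (assumed)
        c_in = 5.0         # inflow concentration, mg/L (assumed)

        c, dt = 0.0, 0.5   # initial concentration and time step (days)
        for step in range(int(365 / dt)):
            # dc/dt = (Q/V)*(c_in - c) - k*c
            c += dt * ((Q / V) * (c_in - c) - k * c)
        print(f"concentration after one year: {c:.2f} mg/L")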

  16. Psychology on Computers: Simulations, Experiments and Projects.

    ERIC Educational Resources Information Center

    Belcher, Duane M.; Smith, Stephen D.

    PSYCOM is a unique mixed media package which combines high interest projects on the computer with a written text of expository material. It goes beyond most computer-assisted instruction which emphasizes drill and practice and testing of knowledge. A project might consist of a simulation or an actual experiment, or it might be a demonstration, a…

  17. Computer Simulation Of A Small Turboshaft Engine

    NASA Technical Reports Server (NTRS)

    Ballin, Mark G.

    1991-01-01

    Component-type mathematical model of small turboshaft engine developed for use in real-time computer simulations of dynamics of helicopter flight. Yields shaft speeds, torques, fuel-consumption rates, and other operating parameters with sufficient accuracy for use in real-time simulation of maneuvers involving large transients in power and/or severe accelerations.
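
    A component-type model in this spirit can be reduced to a single illustrative equation: shaft acceleration follows the imbalance between turbine torque and load torque. The sketch below is a toy stand-in with assumed component maps and constants, not the NASA model.

        # Toy one-spool rotor dynamics of the kind used in component-type
        # real-time engine models: shaft speed responds to the imbalance
        # between turbine torque and load torque (all values assumed).
        I = 0.002                        # rotor inertia, kg*m^2 (assumed)

        def turbine_torque(wf, omega):   # algebraic "component map" (assumed)
            return 25000.0 * wf / max(omega, 1.0)

        def load_torque(omega):          # load grows with speed squared (assumed)
            return 1.0e-6 * omega ** 2

        omega, dt = 800.0, 0.001         # rad/s, s
        for step in range(5000):
            wf = 0.02 if step < 2500 else 0.04     # fuel-flow step at t = 2.5 s
            omega += dt * (turbine_torque(wf, omega) - load_torque(omega)) / I
        print(f"shaft speed after the transient: {omega:.0f} rad/s")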

  18. Simulations of Probabilities for Quantum Computing

    NASA Technical Reports Server (NTRS)

    Zak, M.

    1996-01-01

    It has been demonstrated that classical probabilities, and in particular a probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without utilization of any man-made devices (such as random number generators). Self-organizing properties of systems coupling simulated and calculated probabilities and their link to quantum computations are discussed.
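
    The chaos half of this idea can be illustrated in a few lines: the fully chaotic logistic map is deterministic, yet thresholding its iterates reproduces fair coin-flip statistics without any random number generator. This is a toy sketch of that one ingredient, not of Zak's non-Lipschitz construction.

        # The logistic map x -> 4x(1-x) is deterministic chaos; its invariant
        # density 1/(pi*sqrt(x(1-x))) puts half its mass above x = 0.5, so
        # thresholding the iterates simulates a fair coin with no RNG.
        x = 0.123456789            # arbitrary non-degenerate seed
        heads, n = 0, 100_000
        for _ in range(n):
            x = 4.0 * x * (1.0 - x)
            if x > 0.5:
                heads += 1
        print(f"empirical P(heads) = {heads / n:.4f}")   # close to 0.5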

  19. Criterion Standards for Evaluating Computer Simulation Courseware.

    ERIC Educational Resources Information Center

    Wholeben, Brent Edward

    This paper explores the role of computerized simulations as a decision-modeling intervention strategy, and views the strategy's different attribute biases based upon the varying primary missions of instruction versus application. The common goals associated with computer simulations as a training technique are discussed and compared with goals of…

  20. Salesperson Ethics: An Interactive Computer Simulation

    ERIC Educational Resources Information Center

    Castleberry, Stephen

    2014-01-01

    A new interactive computer simulation designed to teach sales ethics is described. Simulation learner objectives include gaining a better understanding of legal issues in selling; realizing that ethical dilemmas do arise in selling; realizing the need to be honest when selling; seeing that there are conflicting demands from a salesperson's…

  1. Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces, and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling, and compare our results with those obtained with a well-established Monte Carlo model and with real skin reflectance images.
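
    For readers unfamiliar with the Monte Carlo ingredient, the sketch below runs a deliberately minimal photon-weight simulation in a single homogeneous slab: exponentially distributed step lengths, weight loss to absorption, and isotropic scattering. The optical coefficients are assumed round numbers; the paper's model (multispectral, 3D parametric skin layers, anisotropic scattering) is far richer.

        import numpy as np

        # Minimal Monte Carlo photon transport in a homogeneous slab.
        rng = np.random.default_rng(0)
        mu_a, mu_s, d = 0.1, 10.0, 1.0     # absorption/scattering (1/mm), depth (assumed)
        mu_t = mu_a + mu_s
        albedo = mu_s / mu_t

        reflectance, n_photons = 0.0, 10_000
        for _ in range(n_photons):
            z, uz, w = 0.0, 1.0, 1.0       # depth, direction cosine, photon weight
            while w > 1e-4:
                z += uz * (-np.log(rng.random()) / mu_t)   # sample free path
                if z < 0.0:                 # escaped back through the surface
                    reflectance += w
                    break
                if z > d:                   # transmitted out of the slab
                    break
                w *= albedo                 # absorb a fraction of the weight
                uz = 2.0 * rng.random() - 1.0   # isotropic scatter (no anisotropy g)
        print(f"diffuse reflectance ~ {reflectance / n_photons:.3f}")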

  2. Computer simulation of gear tooth manufacturing processes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri; Huston, Ronald L.

    1990-01-01

    The use of computer graphics to simulate gear tooth manufacturing procedures is discussed. An analytical basis for the simulation is established for spur gears. The simulation itself, however, is developed not only for spur gears, but for straight bevel gears as well. The applications of the developed procedure extend from the development of finite element models of heretofore intractable geometrical forms, to exploring the fabrication of nonstandard tooth forms.

  3. A Variable Coefficient Method for Accurate Monte Carlo Simulation of Dynamic Asset Price

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Hung, Chih-Young; Yu, Shao-Ming; Chiang, Su-Yun; Chiang, Yi-Hui; Cheng, Hui-Wen

    2007-07-01

    In this work, we propose an adaptive Monte Carlo (MC) simulation technique to compute the sample paths of a dynamic asset price. In contrast to conventional MC simulation with constant drift and volatility (μ,σ), our MC simulation is performed with variable-coefficient methods for (μ,σ) in the solution scheme, where the explored dynamic asset pricing model starts from the formulation of geometric Brownian motion. With the method of simultaneously updated (μ,σ), more than 5,000 runs of MC simulation are performed to achieve the basic accuracy required of the large-scale computation and to suppress the statistical variance. Daily changes of the stock market indices in Taiwan and Japan are investigated and analyzed.
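
    A hedged sketch of the core idea, variable drift and volatility along each Monte Carlo path of a geometric Brownian motion, is given below. The coefficient functions mu(t) and sigma(t) are hypothetical stand-ins for the paper's data-driven updates.

        import numpy as np

        # Monte Carlo paths of GBM with time-varying coefficients, using an
        # exact log-Euler step with locally frozen (mu, sigma).
        rng = np.random.default_rng(1)
        S0, T, n_steps, n_paths = 100.0, 1.0, 252, 5000
        dt = T / n_steps

        def mu(t):    return 0.05 + 0.02 * np.sin(2 * np.pi * t)   # assumed drift
        def sigma(t): return 0.20 + 0.10 * t                       # assumed volatility

        S = np.full(n_paths, S0)
        for i in range(n_steps):
            t = i * dt
            dW = rng.standard_normal(n_paths) * np.sqrt(dt)
            S *= np.exp((mu(t) - 0.5 * sigma(t) ** 2) * dt + sigma(t) * dW)

        print(f"mean terminal price: {S.mean():.2f} "
              f"+/- {S.std(ddof=1) / np.sqrt(n_paths):.2f}")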

  4. Accurate methods for computing inviscid and viscous Kelvin-Helmholtz instability

    NASA Astrophysics Data System (ADS)

    Chen, Michael J.; Forbes, Lawrence K.

    2011-02-01

    The Kelvin-Helmholtz instability is modelled for inviscid and viscous fluids. Here, two bounded fluid layers flow parallel to each other with the interface between them growing in an unstable fashion when subjected to a small perturbation. In the various configurations of this problem, and the related problem of the vortex sheet, there are several phenomena associated with the evolution of the interface; notably the formation of a finite-time curvature singularity and the "roll-up" of the interface. Two contrasting computational schemes will be presented. A spectral method is used to follow the evolution of the interface in the inviscid version of the problem. This allows the interface shape to be computed up to the time that a curvature singularity forms, with several computational difficulties overcome to reach that point. A weakly compressible viscous version of the problem is studied using finite difference techniques and a vorticity-streamfunction formulation. The two versions have comparable, but not identical, initial conditions and so the results exhibit some differences in timing. By including a small amount of viscosity the interface may be followed to the point that it rolls up into a classic "cat's-eye" shape. Particular attention was given to computing a consistent initial condition and solving the continuity equation both accurately and efficiently.

  5. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation.

    PubMed

    Subramanian, Swetha; Mast, T Douglas

    2015-10-01

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. PMID:26352462
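
    To make the unscented step concrete, the sketch below performs a single unscented-transform measurement update for a three-parameter state (specific heat c, thermal conductivity k, electrical conductivity sigma_e). The forward_model here is a hypothetical algebraic surrogate standing in for the paper's finite-element RFA simulation, and every number is an assumed illustration.

        import numpy as np

        def forward_model(theta):
            # Hypothetical surrogate mapping tissue parameters to "ablated area".
            c, k, sigma_e = theta
            return np.atleast_1d(sigma_e * k / c * 1.0e6)

        n = 3
        theta = np.array([3600.0, 0.5, 0.3])              # prior parameter mean
        P = np.diag([400.0 ** 2, 0.05 ** 2, 0.05 ** 2])   # prior covariance
        R = np.array([[4.0]])                             # measurement noise variance
        z = np.array([50.0])                              # measured ablated area

        # 2n symmetric sigma points (UKF with kappa = 0, so the centre point
        # carries zero weight and is omitted).
        S = np.linalg.cholesky(n * P)
        sigmas = np.vstack([theta + S[:, i] for i in range(n)] +
                           [theta - S[:, i] for i in range(n)])
        W = 1.0 / (2 * n)

        Z = np.array([forward_model(p) for p in sigmas])  # propagate through model
        z_mean = W * Z.sum(axis=0)
        Pzz = R + W * sum(np.outer(d, d) for d in Z - z_mean)
        Pxz = W * sum(np.outer(dx, dz) for dx, dz in zip(sigmas - theta, Z - z_mean))

        K = Pxz @ np.linalg.inv(Pzz)                      # Kalman gain
        theta_post = theta + K @ (z - z_mean)
        P_post = P - K @ Pzz @ K.T
        print("updated [c, k, sigma_e]:", theta_post)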

  6. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation

    NASA Astrophysics Data System (ADS)

    Subramanian, Swetha; Mast, T. Douglas

    2015-09-01

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.

  7. Optimum spaceborne computer system design by simulation

    NASA Technical Reports Server (NTRS)

    Williams, T.; Weatherbee, J. E.; Taylor, D. S.

    1972-01-01

    A deterministic digital simulation model is described which models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Use of the model as a tool in configuring a minimum computer system for a typical mission is demonstrated. The configuration which is developed as a result of studies with the simulator is optimal with respect to the efficient use of computer system resources, i.e., the configuration derived is a minimal one. Other considerations such as increased reliability through the use of standby spares would be taken into account in the definition of a practical system for a given mission.

  8. Computer simulation of bubble formation.

    SciTech Connect

    Insepov, Z.; Bazhirov, T.; Norman, G.; Stegailov, V.; Mathematics and Computer Science; Institute for High Energy Densities of Joint Institute for High Temperatures of RAS

    2007-01-01

    Properties of liquid metals (Li, Pb, Na) containing nanoscale cavities were studied by atomistic Molecular Dynamics (MD). Two atomistic models of cavity simulation were developed that cover a wide area in the phase diagram with negative pressure. In the first model, the thermodynamics of cavity formation, stability and the dynamics of cavity evolution in bulk liquid metals have been studied. Radial densities, pressures, surface tensions, and work functions of nano-scale cavities of various radii were calculated for liquid Li, Na, and Pb at various temperatures and densities, and at small negative pressures near the liquid-gas spinodal; the work functions for cavity formation in liquid Li were compared with the available experimental data. The cavitation rate can further be obtained by using the classical nucleation theory (CNT). The second model is based on the stability study and on the kinetics of cavitation of the stretched liquid metals. An MD method was used to simulate cavitation in metastable Pb and Li melts and determine the stability limits. States at temperatures below critical (T < 0.5Tc) and large negative pressures were considered. The kinetic boundary of liquid phase stability was shown to be different from the spinodal. The kinetics and dynamics of cavitation were studied. The pressure dependences of cavitation frequencies were obtained for several temperatures. The results of MD calculations were compared with estimates based on classical nucleation theory.
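
    The classical nucleation theory estimate mentioned above is simple enough to show inline. The sketch below evaluates the CNT critical radius, barrier, and cavitation rate for assumed liquid-metal parameters (illustrative values, not the paper's MD results).

        import numpy as np

        # CNT estimate of the cavitation rate in a stretched liquid metal.
        kB = 1.380649e-23               # J/K
        h_planck = 6.62607015e-34       # J s
        gamma = 0.4                     # surface tension, N/m (order of liquid Li, assumed)
        P = -2.0e9                      # tension (negative pressure), Pa (assumed)
        T = 800.0                       # temperature, K (assumed)
        n_liq = 4.5e28                  # liquid number density, 1/m^3 (assumed)
        J0 = n_liq * kB * T / h_planck  # crude kinetic prefactor, 1/(m^3 s) (assumed)

        dP = abs(P)
        r_star = 2.0 * gamma / dP                              # critical cavity radius
        dG_star = 16.0 * np.pi * gamma ** 3 / (3.0 * dP ** 2)  # nucleation barrier
        J = J0 * np.exp(-dG_star / (kB * T))                   # cavitation rate

        print(f"r* = {r_star * 1e9:.2f} nm, barrier = {dG_star / (kB * T):.1f} kT, "
              f"J = {J:.2e} m^-3 s^-1")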

  9. Computer Code for Nanostructure Simulation

    NASA Technical Reports Server (NTRS)

    Filikhin, Igor; Vlahovic, Branislav

    2009-01-01

    Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relation among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.

  10. Pseudospark discharges via computer simulation

    SciTech Connect

    Boeuf, J.P.; Pitchford, L.C.

    1991-04-01

    The authors of this paper developed a hybrid fluid-particle (Monte Carlo) model to describe the initiation phase of pseudospark discharges. In this model, time-dependent fluid equations for the electrons and positive ions are solved self-consistently with Poisson's equation for the electric field in a two-dimensional, cylindrically symmetrical geometry. The Monte Carlo simulation is used to determine the ionization source term in the fluid equations. This model has been used to study the evolution of a discharge in helium at 0.5 torr, with an applied voltage of 2 kV and in a typical pseudospark geometry. From the numerical results, the authors have identified a sequence of physical events that lead to the rapid rise in current associated with the onset of the pseudospark discharge mode. For the conditions the authors have simulated, they find that there is a maximum in the electron multiplication at the time which corresponds to the onset of the hollow cathode effect, and although the multiplication later decreases, it is always greater than needed for a steady-state discharge. Thus the sheaths inside the hollow cathode tend to collapse against the walls, and eventually cathode emission mechanisms (such as field-enhanced thermionic emission) which the authors have not included will start to play a role. In spite of the approximation in this model, the picture which has emerged provides insight into the mechanisms controlling the onset of this potentially important discharge mode.

  11. Computational simulations and experimental validation of a furnace brazing process

    SciTech Connect

    Hosking, F.M.; Gianoulakis, S.E.; Malizia, L.A.

    1998-12-31

    Modeling of a furnace brazing process is described. The computational tools predict the thermal response of loaded hardware in a hydrogen brazing furnace to programmed furnace profiles. Experiments were conducted to validate the model and resolve computational uncertainties. Critical boundary conditions that affect materials and processing response to the furnace environment were determined. "Global" and local issues (i.e., at the furnace/hardware and joint levels, respectively) are discussed. The ability to accurately simulate and control furnace conditions is examined.

  12. Rotorcraft Damage Tolerance Evaluated by Computational Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Minnetyan, Levon; Abdi, Frank

    2000-01-01

    An integrally stiffened graphite/epoxy composite rotorcraft structure is evaluated via computational simulation. A computer code that scales up constituent micromechanics level material properties to the structure level and accounts for all possible failure modes is used for the simulation of composite degradation under loading. Damage initiation, growth, accumulation, and propagation to fracture are included in the simulation. Design implications with regard to defect and damage tolerance of integrally stiffened composite structures are examined. A procedure is outlined regarding the use of this type of information for setting quality acceptance criteria, design allowables, damage tolerance, and retirement-for-cause criteria.

  13. Polymer Composites Corrosive Degradation: A Computational Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Minnetyan, Levon

    2007-01-01

    A computational simulation of polymer composites corrosive durability is presented. The corrosive environment is assumed to manage the polymer composite degradation on a ply-by-ply basis. The degradation is correlated with a measured pH factor and is represented by voids, temperature and moisture, which vary parabolically for voids and linearly for temperature and moisture through the laminate thickness. The simulation is performed by a computational composite mechanics computer code which includes micro, macro, combined stress failure and laminate theories. The simulation thus starts from constitutive material properties and proceeds up to the laminate scale, at which the laminate is exposed to the corrosive environment. Results obtained for one laminate indicate that the ply-by-ply degradation degrades the laminate down to the last ply or the last several plies. Results also demonstrate that the simulation is applicable to other polymer composite systems as well.

  14. Computer simulations of lung surfactant.

    PubMed

    Baoukina, Svetlana; Tieleman, D Peter

    2016-10-01

    Lung surfactant lines the gas-exchange interface in the lungs and reduces the surface tension, which is necessary for breathing. Lung surfactant consists mainly of lipids with a small amount of proteins and forms a monolayer at the air-water interface connected to bilayer reservoirs. Lung surfactant function involves transfer of material between the monolayer and bilayers during the breathing cycle. Lipids and proteins are organized laterally in the monolayer; selected species are possibly preferentially transferred to bilayers. The complex 3D structure of lung surfactant and the exact roles of lipid organization and proteins remain important goals for research. We review recent simulation studies on the properties of lipid monolayers, monolayers with phase coexistence, monolayer-bilayer transformations, lipid-protein interactions, and effects of nanoparticles on lung surfactant. This article is part of a Special Issue entitled: Biosimulations edited by Ilpo Vattulainen and Tomasz Róg. PMID:26922885

  15. Creating science simulations through Computational Thinking Patterns

    NASA Astrophysics Data System (ADS)

    Basawapatna, Ashok Ram

    Computational thinking aims to outline fundamental skills from computer science that everyone should learn. As currently defined, with help from the National Science Foundation (NSF), these skills include problem formulation, logically organizing data, automating solutions through algorithmic thinking, and representing data through abstraction. One aim of the NSF is to integrate these and other computational thinking concepts into the classroom. End-user programming tools offer a unique opportunity to accomplish this goal. An end-user programming tool that allows students with little or no prior experience to create simulations based on phenomena they see in class could be a first step towards meeting most, if not all, of the above computational thinking goals. This thesis describes the creation, implementation and initial testing of a programming tool, called the Simulation Creation Toolkit, with which users apply high-level agent interactions called Computational Thinking Patterns (CTPs) to create simulations. Employing Computational Thinking Patterns obviates lower behavior-level programming and allows users to directly create agent interactions in a simulation by making an analogy with the real-world phenomena they are trying to represent. Data collected from 21 sixth grade students with no prior programming experience and 45 seventh grade students with minimal programming experience indicate that this is an effective first step towards enabling students to create simulations in the classroom environment. Furthermore, an analogical reasoning study that looked at how users might apply patterns to create simulations from high-level descriptions with little guidance shows promising results. These initial results indicate that the high-level strategy employed by the Simulation Creation Toolkit is a promising strategy towards incorporating Computational Thinking concepts in the classroom environment.

  16. Computer simulation of space station computer steered high gain antenna

    NASA Technical Reports Server (NTRS)

    Beach, S. W.

    1973-01-01

    The mathematical modeling and programming of a complete simulation program for a space station computer-steered high-gain antenna are described. The program provides for reading input data cards, numerically integrating up to 50 first-order differential equations, and monitoring up to 48 variables on printed output and on plots. The program system consists of a high-gain antenna, an antenna gimbal control system, an on-board computer, and the environment in which all are to operate.
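
    The numerical core of such a program is a fixed-step ODE integrator. Below is a generic fourth-order Runge-Kutta sketch driving a made-up two-state gimbal axis with an assumed PD servo law; the integrator is standard, while the dynamics are purely illustrative.

        import numpy as np

        # Generic fixed-step RK4 integrator for a system of first-order ODEs.
        def rk4_step(f, t, y, h):
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # Example system: a damped gimbal axis driven toward a setpoint
        # by an assumed PD servo law (illustrative dynamics only).
        def f(t, y):
            angle, rate = y
            torque = 4.0 * (0.5 - angle) - 1.2 * rate
            return np.array([rate, torque])

        y, t, h = np.array([0.0, 0.0]), 0.0, 0.01
        for _ in range(1000):
            y = rk4_step(f, t, y, h)
            t += h
        print(f"angle after {t:.1f} s: {y[0]:.4f} rad")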

  17. Computer Models Simulate Fine Particle Dispersion

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.

  18. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation

    SciTech Connect

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro

    2012-06-15

    % for the homogeneous and heterogeneous block phantoms, and agreement for the transverse dose profiles was within 6%. Conclusions: The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide for a clinically feasible approach to characterizing a kV energy spectrum to be used for patient specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm.

  19. Optical computed tomography of radiochromic gels for accurate three-dimensional dosimetry

    NASA Astrophysics Data System (ADS)

    Babic, Steven

    In this thesis, three-dimensional (3-D) radiochromic Ferrous Xylenol-orange (FX) and Leuco Crystal Violet (LCV) micelle gels were imaged by laser and cone-beam (Vista™) optical computed tomography (CT) scanners. The objective was to develop optical CT of radiochromic gels for accurate 3-D dosimetry of intensity-modulated radiation therapy (IMRT) and small field techniques used in modern radiotherapy. First, the cause of a threshold dose response in FX gel dosimeters when scanned with a yellow light source was determined. This effect stems from a spectral sensitivity to multiple chemical complexes that are at different dose levels between ferric ions and xylenol-orange. To negate the threshold dose, an initial concentration of ferric ions is needed in order to shift the chemical equilibrium so that additional dose results in a linear production of a coloured complex that preferentially absorbs at longer wavelengths. Second, a low-diffusion leuco-based radiochromic gel consisting of Triton X-100 micelles was developed. The diffusion coefficient of the LCV micelle gel was found to be minimal (0.036 ± 0.001 mm² hr⁻¹). Although a dosimetric characterization revealed a reduced sensitivity to radiation, this was offset by a lower auto-oxidation rate and base optical density, higher melting point and no spectral sensitivity. Third, the Radiological Physics Centre (RPC) head-and-neck IMRT protocol was extended to 3-D dose verification using laser and cone-beam (Vista™) optical CT scans of FX gels. Both optical systems yielded comparable measured dose distributions in high-dose regions and low gradients. The FX gel dosimetry results were cross-checked against independent thermoluminescent dosimeter and GAFChromic™ EBT film measurements made by the RPC. It was shown that optical CT scanned FX gels can be used for accurate IMRT dose verification in 3-D. Finally, corrections for FX gel diffusion and scattered stray light in the Vista™ scanner were developed to

  20. Accurate 3-D finite difference computation of traveltimes in strongly heterogeneous media

    NASA Astrophysics Data System (ADS)

    Noble, M.; Gesret, A.; Belayouni, N.

    2014-12-01

    Seismic traveltimes and their spatial derivatives are the basis of many imaging methods such as pre-stack depth migration and tomography. A common approach to compute these quantities is to solve the eikonal equation with a finite-difference scheme. Although many recently published algorithms for solving the eikonal equation now yield fairly accurate traveltimes for most applications, the spatial derivatives of traveltimes remain very approximate. To address this accuracy issue, we develop a new hybrid eikonal solver that combines a spherical approximation when close to the source and a plane wave approximation when far away. This algorithm properly reproduces the spherical behaviour of wave fronts in the vicinity of the source. We implement a combination of 16 local operators that enables us to handle velocity models with sharp vertical and horizontal velocity contrasts. We associate to these local operators a global fast sweeping method to take into account all possible directions of wave propagation. Our formulation allows us to introduce a variable grid spacing in all three directions of space. We demonstrate the efficiency of this algorithm in terms of computational time and the gain in accuracy of the computed traveltimes and their derivatives on several numerical examples.
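
    For context, a baseline first-order fast sweeping solver for the 2D eikonal equation |grad T| = s(x, y) fits in a few dozen lines; the sketch below uses the standard Godunov upwind update and four alternating sweep orderings. It is the kind of scheme the paper improves upon near the source, and the slowness model here is an assumed toy.

        import numpy as np

        n, h = 101, 0.01
        s = np.ones((n, n))                    # slowness field (assumed)
        s[:, 60:] = 2.0                        # sharp vertical contrast
        T = np.full((n, n), np.inf)
        T[50, 20] = 0.0                        # point source

        def local_update(i, j):
            a = min(T[i - 1, j] if i > 0 else np.inf,
                    T[i + 1, j] if i < n - 1 else np.inf)
            b = min(T[i, j - 1] if j > 0 else np.inf,
                    T[i, j + 1] if j < n - 1 else np.inf)
            if np.isinf(a) and np.isinf(b):
                return T[i, j]
            sh = s[i, j] * h
            if abs(a - b) >= sh:               # causal one-sided update
                t_new = min(a, b) + sh
            else:                              # two-sided quadratic update
                t_new = 0.5 * (a + b + np.sqrt(2.0 * sh * sh - (a - b) ** 2))
            return min(T[i, j], t_new)

        for sweep in range(4):                 # Gauss-Seidel sweeps, 4 orderings each
            for rows in (range(n), range(n - 1, -1, -1)):
                for cols in (range(n), range(n - 1, -1, -1)):
                    for i in rows:
                        for j in cols:
                            T[i, j] = local_update(i, j)
        print(f"traveltime at far corner: {T[-1, -1]:.4f} s")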

  1. Utilizing fast multipole expansions for efficient and accurate quantum-classical molecular dynamics simulations.

    PubMed

    Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul

    2015-03-14

    Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 10³-10⁵ molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online. PMID:25770527

  2. Utilizing fast multipole expansions for efficient and accurate quantum-classical molecular dynamics simulations

    SciTech Connect

    Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul

    2015-03-14

    Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 10³-10⁵ molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online.

  3. Utilizing fast multipole expansions for efficient and accurate quantum-classical molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul

    2015-03-01

    Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 10³-10⁵ molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online.
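
    The expansion at the heart of any fast multipole method can be illustrated without the hierarchy: the potential of a distant charge cluster is well approximated by just its monopole and dipole moments. The sketch below compares that two-term far-field approximation against direct summation; it is a toy demonstration of the expansion idea, not the linear-scaling FMM of the papers above.

        import numpy as np

        rng = np.random.default_rng(2)
        q = rng.uniform(0.1, 1.0, 50)                 # source charges
        r = rng.uniform(-0.5, 0.5, (50, 3))           # sources clustered near origin
        x = np.array([10.0, 4.0, -3.0])               # distant evaluation point

        direct = np.sum(q / np.linalg.norm(x - r, axis=1))

        Q = q.sum()                                   # monopole moment
        p = (q[:, None] * r).sum(axis=0)              # dipole moment
        R = np.linalg.norm(x)
        approx = Q / R + p @ x / R ** 3               # first two expansion terms

        print(f"direct = {direct:.6f}, multipole = {approx:.6f}, "
              f"rel. error = {abs(direct - approx) / direct:.2e}")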

  4. Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The Virtual National Airspace Simulation (VNAS) will improve the safety of Air Transportation. In 2001, using simulation and information management software running over a distributed network of super-computers, researchers at NASA Ames, Glenn, and Langley Research Centers developed a working prototype of a virtual airspace. This VNAS prototype modeled daily operations of the Atlanta airport by integrating measured operational data and simulation data on up to 2,000 flights a day. The concepts and architecture developed by NASA for this prototype are integral to the National Airspace Simulation, supporting the development of strategies that improve aviation safety and identify precursors to component failure.

  5. Highly Accurate Frequency Calculations of Crab Cavities Using the VORPAL Computational Framework

    SciTech Connect

    Austin, T.M.; Cary, J.R.; Bellantoni, L.; /Argonne

    2009-05-01

    We have applied the Werner-Cary method [J. Comp. Phys. 227, 5200-5214 (2008)] for extracting modes and mode frequencies from time-domain simulations of crab cavities, as are needed for the ILC and the beam delivery system of the LHC. This method for frequency extraction relies on a small number of simulations and post-processing using the SVD algorithm with Tikhonov regularization. The time-domain simulations were carried out using the VORPAL computational framework, which is based on the eminently scalable finite-difference time-domain algorithm. A validation study was performed on an aluminum model of the 3.9 GHz RF separators built originally at Fermi National Accelerator Laboratory in the US. Comparisons with measurements of the A15 cavity show that this method can provide accuracy to within 0.01% of experimental results after accounting for manufacturing imperfections. To capture the near degeneracies, two simulations, requiring in total a few hours on 600 processors, were employed. This method has applications across many areas including obtaining MHD spectra from time-domain simulations.
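
    The regularized post-processing step is a standard piece of numerical linear algebra. As a sketch, the function below solves a Tikhonov-regularized least-squares problem through the SVD filter-factor formula on an assumed ill-conditioned test matrix; it illustrates the technique named in the abstract, not the Werner-Cary mode-extraction code itself.

        import numpy as np

        def tikhonov_solve(A, b, lam):
            U, sv, Vt = np.linalg.svd(A, full_matrices=False)
            # Filter factors sv/(sv^2 + lam^2) damp small-singular-value directions.
            coeffs = sv / (sv ** 2 + lam ** 2) * (U.T @ b)
            return Vt.T @ coeffs

        rng = np.random.default_rng(3)
        A = rng.standard_normal((100, 10)) @ np.diag(10.0 ** -np.arange(10))
        x_true = rng.standard_normal(10)
        b = A @ x_true + 1e-6 * rng.standard_normal(100)   # noisy data

        for lam in (0.0, 1e-6, 1e-3):
            x = tikhonov_solve(A, b, lam)
            print(f"lambda = {lam:g}: error = {np.linalg.norm(x - x_true):.3e}")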

  6. An accurate elasto-plastic frictional tangential force displacement model for granular-flow simulations: Displacement-driven formulation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Vu-Quoc, Loc

    2007-07-01

    We present in this paper the displacement-driven version of a tangential force-displacement (TFD) model that accounts for both elastic and plastic deformations together with interfacial friction occurring in collisions of spherical particles. This elasto-plastic frictional TFD model, with its force-driven version presented in [L. Vu-Quoc, L. Lesburg, X. Zhang. An accurate tangential force-displacement model for granular-flow simulations: contacting spheres with plastic deformation, force-driven formulation, Journal of Computational Physics 196(1) (2004) 298-326], is consistent with the elasto-plastic frictional normal force-displacement (NFD) model presented in [L. Vu-Quoc, X. Zhang. An elasto-plastic contact force-displacement model in the normal direction: displacement-driven version, Proceedings of the Royal Society of London, Series A 455 (1999) 4013-4044]. Both the NFD model and the present TFD model are based on the concept of additive decomposition of the radius of contact area into an elastic part and a plastic part. The effect of permanent indentation after impact is represented by a correction to the radius of curvature. The effect of material softening due to plastic flow is represented by a correction to the elastic moduli. The proposed consistent displacement-driven, elasto-plastic NFD and TFD models are designed for implementation in computer codes using the discrete-element method (DEM) for granular-flow simulations. The model is shown to be accurate and is validated against nonlinear elasto-plastic finite-element analyses involving plastic flows in both loading and unloading conditions.
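
    To show where such force-displacement laws plug into a granular-flow code, the sketch below advances a two-sphere collision with a deliberately simplified linear spring-dashpot normal contact law (all constants assumed). The elasto-plastic NFD/TFD models of the paper would replace the single force line with their much richer loading/unloading relations.

        # Two-sphere normal collision with a linear spring-dashpot contact law.
        kn, cn = 1.0e5, 5.0            # normal stiffness (N/m) and damping (N s/m), assumed
        m, R = 1.0e-3, 5.0e-3          # particle mass (kg) and radius (m), assumed
        x1, x2 = 0.0, 2.0 * R + 1.0e-4 # centres; initially just out of contact
        v1, v2 = 1.0, -1.0             # approach velocities (m/s)
        dt = 1.0e-6

        for _ in range(2000):
            overlap = 2.0 * R - (x2 - x1)
            if overlap > 0.0:
                # Elastic repulsion plus viscous dissipation along the normal;
                # a cohesionless contact cannot pull, hence the clamp at zero.
                fn = max(kn * overlap + cn * (v1 - v2), 0.0)
                v1 -= fn / m * dt
                v2 += fn / m * dt
            x1 += v1 * dt
            x2 += v2 * dt
        print(f"post-impact velocities: v1 = {v1:.2f} m/s, v2 = {v2:.2f} m/s")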

  7. Time Accurate CFD Simulations of the Orion Launch Abort Vehicle in the Transonic Regime

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph; Rojahn, Josh

    2011-01-01

    Significant asymmetries in the fluid dynamics were calculated for some cases in the CFD simulations of the Orion Launch Abort Vehicle through its abort trajectories. The CFD simulations were performed steady state with symmetric boundary conditions and geometries. The trajectory points at issue were in the transonic regime, at 0° and 5° angles of attack, with the Abort Motors with and without the Attitude Control Motors (ACM) firing. In some of the cases the asymmetric fluid dynamics resulted in aerodynamic side forces that were large enough to overcome the control authority of the ACMs. MSFC's Fluid Dynamics Group supported the investigation into the cause of the flow asymmetries with time-accurate CFD simulations, utilizing a hybrid RANS-LES turbulence model. The results show that the flow over the vehicle and the subsequent interaction with the AB and ACM motor plumes were unsteady. The resulting instantaneous aerodynamic forces were oscillatory with fairly large magnitudes. Time-averaged aerodynamic forces were essentially symmetric.

  8. Recommendations for Achieving Accurate Numerical Simulation of Tip Clearance Flows in Transonic Compressor Rotors

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because such measurements are rare in the detail necessary to be useful in high-speed machines. In this paper we compare measured tip clearance flow details (e.g., trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.

  9. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made

  10. CoMOGrad and PHOG: From Computer Vision to Fast and Accurate Protein Tertiary Structure Retrieval

    PubMed Central

    Karim, Rezaul; Aziz, Mohd. Momin Al; Shatabda, Swakkhar; Rahman, M. Sohel; Mia, Md. Abul Kashem; Zaman, Farhana; Rakin, Salman

    2015-01-01

    The number of entries in a structural database of proteins is increasing day by day. Methods for retrieving protein tertiary structures from such a large database have turned out to be key to the comparative analysis of structures, which plays an important role in understanding proteins and their functions. In this paper, we present fast and accurate methods for retrieving, from a large database, proteins whose tertiary structures are similar to a query protein. Our proposed methods borrow ideas from the field of computer vision. The speed and accuracy of our methods come from two newly introduced features, the co-occurrence matrix of the oriented gradient and the pyramid histogram of oriented gradient, and from the use of Euclidean distance as the distance measure. Experimental results clearly indicate the superiority of our approach in both running time and accuracy. Our method is readily available for use from this website: http://research.buet.ac.bd:8080/Comograd/. PMID:26293226

  11. CoMOGrad and PHOG: From Computer Vision to Fast and Accurate Protein Tertiary Structure Retrieval.

    PubMed

    Karim, Rezaul; Aziz, Mohd Momin Al; Shatabda, Swakkhar; Rahman, M Sohel; Mia, Md Abul Kashem; Zaman, Farhana; Rakin, Salman

    2015-01-01

    The number of entries in a structural database of proteins is increasing day by day. Methods for retrieving protein tertiary structures from such a large database have turned out to be key to the comparative analysis of structures, which plays an important role in understanding proteins and their functions. In this paper, we present fast and accurate methods for retrieving, from a large database, proteins whose tertiary structures are similar to a query protein. Our proposed methods borrow ideas from the field of computer vision. The speed and accuracy of our methods come from two newly introduced features, the co-occurrence matrix of the oriented gradient and the pyramid histogram of oriented gradient, and from the use of Euclidean distance as the distance measure. Experimental results clearly indicate the superiority of our approach in both running time and accuracy. Our method is readily available for use from this website: http://research.buet.ac.bd:8080/Comograd/. PMID:26293226
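
    The retrieval principle, reduce each structure to a gradient-orientation feature vector and rank candidates by Euclidean distance, can be sketched in a few lines. The toy below computes a single magnitude-weighted orientation histogram of a 2D matrix; the paper's CoMOGrad and PHOG features are substantially richer, and the data here are synthetic.

        import numpy as np

        def orientation_histogram(img, n_bins=16):
            gy, gx = np.gradient(img.astype(float))
            angles = np.arctan2(gy, gx)                   # orientation in [-pi, pi]
            mags = np.hypot(gx, gy)                       # gradient magnitude weights
            hist, _ = np.histogram(angles, bins=n_bins,
                                   range=(-np.pi, np.pi), weights=mags)
            return hist / (hist.sum() + 1e-12)            # normalize for comparison

        rng = np.random.default_rng(4)
        query = rng.random((64, 64))                      # synthetic "distance matrix"
        database = {"near_copy": query + 0.01 * rng.random((64, 64)),
                    "unrelated": rng.random((64, 64))}

        qf = orientation_histogram(query)
        ranked = sorted(database,
                        key=lambda name: np.linalg.norm(
                            orientation_histogram(database[name]) - qf))
        print("best match:", ranked[0])                   # expected: near_copy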

  12. Symbolic computation in system simulation and design

    NASA Astrophysics Data System (ADS)

    Evans, Brian L.; Gu, Steve X.; Kalavade, Asa; Lee, Edward A.

    1995-06-01

    This paper examines some of the roles that symbolic computation plays in assisting system-level simulation and design. By symbolic computation, we mean programs like Mathematica that perform symbolic algebra and apply transformation rules based on algebraic identities. At a behavioral level, symbolic computation can compute parameters, generate new models, and optimize parameter settings. At the synthesis level, symbolic computation can work in tandem with synthesis tools to rewrite cascade and parallel combinations of components in sub-systems to meet design constraints. Symbolic computation represents one type of tool that may be invoked in the complex flow of the system design process. The paper discusses the qualities that a formal infrastructure for managing system design should have. The paper also describes an implementation of this infrastructure called DesignMaker, implemented in the Ptolemy environment, which manages the flow of tool invocations in an efficient manner using a graphical file dependency mechanism.
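
    A small example of the kind of symbolic rewriting described above, done with SymPy rather than Mathematica: two blocks combined in cascade with a unity parallel path, then a parameter solved symbolically against a design constraint. The system itself is hypothetical.

        import sympy as sp

        # Symbolically combine a cascade of two blocks plus a unity parallel
        # path, then solve for a gain that meets a DC design constraint.
        s, k, tau = sp.symbols("s k tau", positive=True)
        G1 = k / (tau * s + 1)          # first-order block (assumed)
        G2 = 1 / (s + 2)                # second block in cascade (assumed)
        H = sp.simplify(G1 * G2 + 1)    # cascade plus unity parallel path

        # Constraint: DC gain H(0) must equal 3; solve symbolically for k.
        k_required = sp.solve(sp.Eq(H.subs(s, 0), 3), k)[0]
        print("combined transfer function:", H)
        print("k for DC gain 3:", k_required)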

  13. Computer Series, 108. Computer Simulation of Chemical Equilibrium.

    ERIC Educational Resources Information Center

    Cullen, John F., Jr.

    1989-01-01

    Presented is a computer simulation called "The Great Chemical Bead Game" which can be used to teach the concepts of equilibrium and kinetics to introductory chemistry students more clearly than through an experiment. Discussed are the rules of the game, the application of rate laws and graphical analysis. (CW)

  14. Enabling Computational Technologies for Terascale Scientific Simulations

    SciTech Connect

    Ashby, S.F.

    2000-08-24

    We develop scalable algorithms and object-oriented code frameworks for terascale scientific simulations on massively parallel processors (MPPs). Our research in multigrid-based linear solvers and adaptive mesh refinement enables Laboratory programs to use MPPs to explore important physical phenomena. For example, our research aids stockpile stewardship by making detailed 3D simulations of radiation transport practical. The need to solve large linear systems arises in many applications, including radiation transport, structural dynamics, combustion, and flow in porous media. These systems result from discretizations of partial differential equations on computational meshes. Our first research objective is to develop multigrid preconditioned iterative methods for such problems and to demonstrate their scalability on MPPs. Scalability describes how total computational work grows with problem size; it measures how effectively additional resources can help solve increasingly larger problems. Many factors contribute to scalability: computer architecture, parallel implementation, and choice of algorithm. Scalable algorithms have been shown to decrease simulation times by several orders of magnitude.
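
    As a miniature of the multigrid idea, the sketch below runs a geometric V-cycle on the 1D Poisson problem with weighted-Jacobi smoothing, full-weighting restriction, and linear interpolation. Production solvers of the kind described above apply the same cycle as a preconditioner on massively parallel 3D meshes; everything here is a textbook toy.

        import numpy as np

        # Geometric multigrid V-cycle for -u'' = f on (0, 1), u(0) = u(1) = 0.
        def vcycle(u, f, h, n_smooth=3):
            def smooth():   # weighted (0.8) Jacobi sweep
                u[1:-1] += 0.8 * 0.5 * (u[2:] + u[:-2]
                                        + h * h * f[1:-1] - 2.0 * u[1:-1])
            for _ in range(n_smooth):
                smooth()
            if u.size <= 3:
                return u
            r = np.zeros_like(u)                   # residual r = f + u''
            r[1:-1] = f[1:-1] + (u[2:] - 2.0 * u[1:-1] + u[:-2]) / (h * h)
            rc = r[::2].copy()                     # full-weighting restriction
            rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
            ec = vcycle(np.zeros_like(rc), rc, 2.0 * h)
            e = np.zeros_like(u)                   # linear interpolation
            e[::2] = ec
            e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
            u += e                                 # coarse-grid correction
            for _ in range(n_smooth):
                smooth()
            return u

        n = 129
        h = 1.0 / (n - 1)
        x = np.linspace(0.0, 1.0, n)
        f = np.pi ** 2 * np.sin(np.pi * x)         # exact solution sin(pi x)
        u = np.zeros(n)
        for _ in range(10):
            u = vcycle(u, f, h)
        print("max error vs exact:", np.abs(u - np.sin(np.pi * x)).max())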

  15. Computer simulation of breathing systems for divers

    SciTech Connect

    Sexton, P.G.; Nuckols, M.L.

    1983-02-01

    A powerful new tool for the analysis and design of underwater breathing gas systems is being developed. A versatile computer simulator is described which makes possible the modular "construction" of any conceivable breathing gas system from computer memory-resident components. The analysis of a typical breathing gas system is demonstrated using this simulation technique, and the effects of system modifications on performance of the breathing system are shown. This modeling technique will ultimately serve as the foundation for a proposed breathing system simulator under development by the Navy. The marriage of this computer modeling technique with an interactive graphics system will provide the designer with an efficient, cost-effective tool for the development of new and improved diving systems.

  16. Towards the computations of accurate spectroscopic parameters and vibrational spectra for organic compounds

    NASA Astrophysics Data System (ADS)

    Hochlaf, M.; Puzzarini, C.; Senent, M. L.

    2015-07-01

    We present multi-component computations for rotational constants, vibrational and torsional levels of medium-sized molecules. Through the treatment of two organic sulphur molecules, ethyl mercaptan and dimethyl sulphide, which are relevant for atmospheric and astrophysical media, we point out the outstanding capabilities of the explicitly correlated coupled-cluster (CCSD(T)-F12) method in conjunction with the cc-pVTZ-F12 basis set for the accurate prediction of such quantities. Indeed, we show that the CCSD(T)-F12/cc-pVTZ-F12 equilibrium rotational constants are in good agreement with those obtained by means of a composite scheme based on CCSD(T) calculations that accounts for the extrapolation to the complete basis set (CBS) limit and core-correlation effects [CCSD(T)/CBS+CV], thus leading to values of ground-state rotational constants rather close to the corresponding experimental data. For vibrational and torsional levels, our analysis reveals that the anharmonic frequencies derived from CCSD(T)-F12/cc-pVTZ-F12 harmonic frequencies and anharmonic corrections (Δν = ω - ν) at the CCSD/cc-pVTZ level closely agree with experimental results. The pattern of the torsional transitions and the shape of the potential energy surfaces along the torsional modes are also well reproduced using the CCSD(T)-F12/cc-pVTZ-F12 energies. Interestingly, this good accuracy is accompanied by a strong reduction of the computational costs. This makes the procedures proposed here schemes of choice for the effective and accurate prediction of spectroscopic properties of organic compounds. Finally, popular density functional approaches are compared with the coupled cluster (CC) methodologies in torsional studies. The long-range-corrected CAM-B3LYP functional of Handy and co-workers is recommended for large systems.

  17. Software Engineering for Scientific Computer Simulations

    NASA Astrophysics Data System (ADS)

    Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.

    2004-11-01

    Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.

  18. Time-Accurate Computation of Viscous Flow Around Deforming Bodies Using Overset Grids

    SciTech Connect

    Fast, P; Henshaw, W D

    2001-04-02

    Dynamically evolving boundaries and deforming bodies interacting with a flow are commonly encountered in fluid dynamics. However, the numerical simulation of flows with dynamic boundaries is difficult with current methods. We propose a new method for studying such problems. The key idea is to use the overset grid method with a thin, body-fitted grid near the deforming boundary, while using fixed Cartesian grids to cover most of the computational domain. Our approach combines the strengths of earlier moving overset grid methods for rigid body motion, and unstructured grid methods for flow-structure interactions. Large scale deformation of the flow boundaries can be handled without a global regridding, and in a computationally efficient way. In terms of computational cost, even a full overset grid regridding is significantly cheaper than a full regridding of an unstructured grid for the same domain, especially in three dimensions. Numerical studies are used to verify accuracy and convergence of our flow solver. As a computational example, we consider two-dimensional incompressible flow past a flexible filament with prescribed dynamics.

  19. Structural Composites Corrosive Management by Computational Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Minnetyan, Levon

    2006-01-01

    A simulation of corrosive management on polymer composites durability is presented. The corrosive environment is assumed to manage the polymer composite degradation on a ply-by-ply basis. The degradation is correlated with a measured pH factor and is represented by voids, temperature, and moisture, which vary parabolically for voids and linearly for temperature and moisture through the laminate thickness. The simulation is performed by a computational composite mechanics computer code which includes micro, macro, combined stress failure, and laminate theories. The simulation thus starts from constitutive material properties and proceeds up to the laminate scale, at which the laminate is exposed to the corrosive environment. Results obtained for one laminate indicate that the ply-by-ply managed degradation degrades the laminate down to the last ply or the last several plies. Results also demonstrate that the simulation is applicable to other polymer composite systems as well.

  20. Learning features in computer simulation skills training.

    PubMed

    Johannesson, Eva; Olsson, Mats; Petersson, Göran; Silén, Charlotte

    2010-09-01

    New simulation tools imply new opportunities to teach skills and train health care professionals. The aim of this study was to investigate the learning gained from computer simulation skills training. The study was designed for optimal educational settings, which benefit student-centred learning. Twenty-four second year undergraduate nursing students practised intravenous catheterization with the computer simulation program CathSim. Questionnaires were answered before and after the skills training, and after the skills examination. When using CathSim, the students appreciated the variation in patient cases, the immediate feedback, and a better understanding of anatomy, but they missed having an arm model to hold. We concluded that CathSim was useful in the students' learning process and skills training when appropriately integrated into the curriculum. Learning features to be aware of when organizing curricula with simulators are motivation, realism, variation, meaningfulness and feedback. PMID:20015690

  1. Task simulation in computer-based training

    SciTech Connect

    Gardner, P.R.

    1988-02-01

    Westinghouse Hanford Company (WHC) makes extensive use of job-task simulations in company-developed computer-based training (CBT) courseware. This courseware is different from most others because it does not simulate process control machinery or other computer programs; instead, the WHC Exercises model day-to-day tasks such as physical work preparations, progress, and incident handling. These Exercises provide a higher level of motivation and enable the testing of more complex patterns of behavior than those typically measured by multiple-choice and short-answer questions. Examples from the WHC Radiation Safety and Crane Safety courses will be used as illustrations. 3 refs.

  2. Student Choices when Learning with Computer Simulations

    NASA Astrophysics Data System (ADS)

    Podolefsky, Noah S.; Adams, Wendy K.; Wieman, Carl E.

    2009-11-01

    We examine student choices while using PhET computer simulations (sims) to learn physics content. In interviews, students were given questions from the Force Concept Inventory (FCI) and were allowed to choose from 12 computer simulations in order to answer them. We find that while students initially choose sims that match problem situations at a surface level, deeper connections may be noticed by students later on. These results inform us about how people may choose educational resources when learning on their own.

  3. Computer simulation: A modern day crystal ball?

    NASA Technical Reports Server (NTRS)

    Sham, Michael; Siprelle, Andrew

    1994-01-01

    It has long been the desire of managers to be able to look into the future and predict the outcome of decisions. With the advent of computer simulation and the tremendous capability provided by personal computers, that desire can now be realized. This paper presents an overview of computer simulation and modeling, and discusses the capabilities of Extend. Extend is an icon-driven, Macintosh-based software tool that brings the power of simulation to the average computer user. An example of an Extend-based model is presented in the form of the Space Transportation System (STS) Processing Model. The STS Processing Model simulates eight shuttle launches per year, yet takes only about ten minutes to run. In addition, statistical data such as facility utilization, wait times, and processing bottlenecks are produced. The addition or deletion of resources, such as orbiters or facilities, can be easily modeled and their impact analyzed. Through the use of computer simulation, it is possible to look into the future to see the impact of today's decisions.
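
    The discrete-event flavor of such facility models is easy to sketch with the open-source SimPy library; the two processing bays, 30-day flow time, and eight-vehicle manifest below are invented for illustration and are not the Extend STS model itself:

        import simpy, statistics

        WAITS = []

        def orbiter(env, name, bays):
            arrive = env.now
            with bays.request() as req:          # queue for a processing bay
                yield req
                WAITS.append(env.now - arrive)   # facility wait-time statistic
                yield env.timeout(30)            # hypothetical 30-day processing flow

        env = simpy.Environment()
        bays = simpy.Resource(env, capacity=2)   # two hypothetical processing bays
        for i in range(8):                       # eight launches per simulated year
            env.process(orbiter(env, f"OV-{i}", bays))
        env.run()
        print("mean wait (days):", statistics.mean(WAITS))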

  4. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    PubMed

    Li, Xiangrui; Lu, Zhong-Lin

    2012-01-01

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect
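
    The red/blue channel combination can be illustrated with a toy calculation (the real VideoSwitcher uses a calibrated hardware weight, so the numbers here are assumptions): the blue channel supplies coarse 8-bit steps while the attenuated red channel interpolates between them:

        w = 1.0 / 255.0                       # assumed red-channel attenuation weight

        def channels_for(t):
            """Split a continuous gray level t in [0, 255] into (blue, red) bytes."""
            blue = min(int(t), 254)           # coarse step from the blue channel
            red = round((t - blue) / w)       # fine step from the attenuated red channel
            return blue, min(red, 255)

        blue, red = channels_for(100.3)
        print(blue, red, blue + w * red)      # reconstructs ~100.3 in blue-step units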

  5. Virtual ambulatory care. Computer simulation applications.

    PubMed

    Zilm, Frank; Culp, Kristyna; Dorney, Beverley

    2003-01-01

    Computer simulation modeling has evolved during the past twenty years into an effective tool for analyzing and planning ambulatory care facilities. This article explains the use of this tool in three ambulatory care case studies--a GI lab, holding beds for a cardiac catheterization laboratory, and emergency services. These examples also illustrate the use of three software packages currently available: MedModel, Simul8, and WITNESS. PMID:12545512

  6. Accurate and Scalable O(N) Algorithm for First-Principles Molecular-Dynamics Computations on Large Parallel Computers

    NASA Astrophysics Data System (ADS)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    We present the first truly scalable first-principles molecular dynamics algorithm with O(N) complexity and controllable accuracy, capable of simulating systems with finite band gaps of sizes that were previously impossible with this degree of accuracy. By avoiding global communications, we provide a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wave functions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 101 952 atoms on 23 328 processors, with a wall-clock time of the order of 1 min per molecular dynamics time step and numerical error on the forces of less than 7×10-4 Ha/Bohr.
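
    The effect of the overlap-matrix cutoff can be demonstrated with a small numpy experiment (a toy model with exponentially decaying overlaps, not the authors' O(N) solver): entries beyond the cutoff are dropped, and selected elements of the inverse are compared against the full calculation:

        import numpy as np

        rng = np.random.default_rng(0)
        pos = rng.uniform(0, 20.0, size=(200, 3))    # hypothetical atom positions

        d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        S = np.exp(-d)                               # toy decaying overlap matrix
        np.fill_diagonal(S, 1.0)

        exact = np.diag(np.linalg.inv(S))
        for rcut in (4.0, 6.0, 8.0):
            S_cut = np.where(d <= rcut, S, 0.0)      # omit components beyond the cutoff
            err = np.abs(np.diag(np.linalg.inv(S_cut)) - exact).max()
            print(f"rcut = {rcut}: max error on selected inverse elements = {err:.1e}")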

  7. Accurate and Scalable O(N) Algorithm for First-Principles Molecular-Dynamics Computations on Large Parallel Computers

    SciTech Connect

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    We present the first truly scalable first-principles molecular dynamics algorithm with O(N) complexity and controllable accuracy, capable of simulating systems with finite band gaps of sizes that were previously impossible with this degree of accuracy. By avoiding global communications, we provide a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wave functions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 101 952 atoms on 23 328 processors, with a wall-clock time of the order of 1 min per molecular dynamics time step and numerical error on the forces of less than 7×10-4 Ha/Bohr.

  8. A fast but accurate excitonic simulation of the electronic circular dichroism of nucleic acids: how can it be achieved?

    PubMed

    Loco, Daniele; Jurinovich, Sandro; Di Bari, Lorenzo; Mennucci, Benedetta

    2016-01-14

    We present and discuss a simple and fast computational approach to the calculation of electronic circular dichroism spectra of nucleic acids. It is based on an exciton model in which the couplings are obtained in terms of the full transition-charge distributions, as resulting from TDDFT methods applied on the individual nucleobases. We validated the method on two systems, a DNA G-quadruplex and an RNA β-hairpin, whose solution structures have been accurately determined by means of NMR. We have shown that the different characteristics of composition and structure of the two systems can lead to quite important differences in the dependence of the accuracy of the simulation on the excitonic parameters. The accurate reproduction of the CD spectra, together with their interpretation in terms of the excitonic composition, suggests that this method may lend itself as a general computational tool both to predict the spectra of hypothetical structures and to define clear relationships between structural and ECD properties. PMID:26646952
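
    The excitonic machinery reduces to a small eigenvalue problem once the couplings are known; the numpy sketch below (with invented site energies and transition charges, not the TDDFT-derived quantities used in the paper) computes a transition-charge Coulomb coupling and diagonalizes a two-site exciton Hamiltonian:

        import numpy as np

        def coupling(q1, r1, q2, r2):
            """Coulomb coupling between two transition-charge distributions (a.u.)."""
            d = np.linalg.norm(r1[:, None] - r2[None, :], axis=-1)
            return np.sum(q1[:, None] * q2[None, :] / d)

        E_site = np.array([0.170, 0.175])                           # toy site energies [Ha]
        q1, q2 = np.array([0.10, -0.10]), np.array([0.12, -0.12])   # toy transition charges
        r1 = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])           # atom positions [Bohr]
        r2 = np.array([[0.0, 0.0, 7.0], [1.5, 0.0, 7.0]])

        H = np.diag(E_site)
        H[0, 1] = H[1, 0] = coupling(q1, r1, q2, r2)
        energies, states = np.linalg.eigh(H)
        print("exciton energies (Ha):", energies)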

  9. Development of highly accurate approximate scheme for computing the charge transfer integral.

    PubMed

    Pershin, Anton; Szalay, Péter G

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high-level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both the energy-split-in-dimer and fragment-charge-difference methods are equivalent to the exact formulation for symmetrical displacements, they are less efficient when describing the transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility of obtaining the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature. PMID:26298117
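
    For a symmetric two-level model the energy-split-in-dimer recipe is one line of arithmetic, and the Taylor-expansion idea follows directly; the sketch below uses an invented model Hamiltonian and an invented t(x), not EOM-CC energies:

        import numpy as np

        def dimer_energies(t, delta=0.0):
            """Eigenvalues of a 2x2 site Hamiltonian with transfer integral t."""
            return np.linalg.eigvalsh(np.array([[delta / 2, t], [t, -delta / 2]]))

        E1, E2 = dimer_energies(t=0.05)
        print("recovered t from energy split:", (E2 - E1) / 2)

        def t_of_x(x):                      # invented dependence on a distortion coordinate
            return 0.05 * np.exp(-0.3 * x)

        x0, h, x = 0.0, 1e-3, 0.5
        t0 = t_of_x(x0)
        t1 = (t_of_x(x0 + h) - t_of_x(x0 - h)) / (2 * h)         # first derivative
        t2 = (t_of_x(x0 + h) - 2 * t0 + t_of_x(x0 - h)) / h**2   # second derivative
        print("Taylor estimate:", t0 + t1 * x + 0.5 * t2 * x**2, "exact:", t_of_x(x))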

  10. Development of highly accurate approximate scheme for computing the charge transfer integral

    SciTech Connect

    Pershin, Anton; Szalay, Péter G.

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high-level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both the energy-split-in-dimer and fragment-charge-difference methods are equivalent to the exact formulation for symmetrical displacements, they are less efficient when describing the transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found computationally expensive, we examine the possibility of obtaining the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  11. A fast and accurate simulator for the design of birdcage coils in MRI.

    PubMed

    Giovannetti, Giulio; Landini, Luigi; Santarelli, Maria Filomena; Positano, Vincenzo

    2002-11-01

    Birdcage coils are extensively used in MRI systems since they provide a high signal-to-noise ratio and a high radiofrequency magnetic field homogeneity that guarantee a large field of view. The present article describes the implementation of a birdcage coil simulator, operating in high-pass and low-pass modes, based on magnetostatic analysis of the coil. Compared with other simulators described in the literature, our simulator quickly provides not only the dominant frequency mode but also the complete resonant frequency spectrum and the corresponding magnetic field pattern with high accuracy. It accounts for all the inductances, including the mutual inductances between conductors, and the inductance calculation includes an accurate description of the birdcage geometry and the effect of a radiofrequency shield. Knowledge of all the resonance modes of a birdcage coil is doubly useful during design: higher-order modes should be pushed far from the fundamental one, and, for particular applications, it is necessary to localize other resonant modes (such as the Helmholtz mode) jointly with the dominant mode. Knowledge of the magnetic field pattern allows one to verify a priori the field homogeneity created inside the coil as the coil dimensions and, especially, the number of legs are varied. The coil is analyzed using the equivalent circuit method. Finally, the simulator is validated by implementing a low-pass birdcage coil and comparing our data with the literature. PMID:12413563
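
    A caricature of the equivalent-circuit route (a toy lumped low-pass ladder with invented component values, ignoring the mutual inductances and shield the paper accounts for) already yields a full mode spectrum from a generalized eigenvalue problem:

        import numpy as np
        from scipy.linalg import eigh

        N = 16                                    # number of legs/meshes
        L_leg, L_ring, C = 50e-9, 20e-9, 30e-12   # toy values [H, H, F]

        K = np.zeros((N, N)); M = np.zeros((N, N))
        for n in range(N):
            K[n, n] = 2 / C; M[n, n] = 2 * L_leg + 2 * L_ring
            for m in ((n - 1) % N, (n + 1) % N):
                K[n, m] = -1 / C; M[n, m] = -L_leg

        w2 = eigh(K, M, eigvals_only=True)        # mesh equations: K I = w^2 M I
        freqs_MHz = np.sqrt(np.abs(w2)) / (2 * np.pi) / 1e6
        print("resonant modes (MHz):", np.round(np.sort(freqs_MHz), 1))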

  12. Accurate, efficient, and scalable parallel simulation of mesoscale electrostatic/magnetostatic problems accelerated by a fast multipole method

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Karpeev, Dmitry; Li, Jiyuan; de Pablo, Juan; Hernandez-Ortiz, Juan; Heinonen, Olle

    Boundary integrals arise in many electrostatic and magnetostatic problems. In computational modeling of these problems, although the integral is performed only on the boundary of a domain, its direct evaluation needs O(N^2) operations, where N is the number of unknowns on the boundary. The O(N^2) scaling impedes a wider usage of the boundary integral method in the scientific and engineering communities. We have developed a parallel computational approach that utilizes the Fast Multipole Method to evaluate the boundary integral in O(N) operations. To demonstrate the accuracy, efficiency, and scalability of our approach, we consider two test cases. In the first case, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space using a hybrid finite element-boundary integral method. In the second case, we solve an electrostatic problem involving the polarization of dielectric objects in free space using the boundary element method. The results from the test cases show that our parallel approach can enable highly efficient and accurate simulations of mesoscale electrostatic/magnetostatic problems. Computing resources were provided by Blues, a high-performance cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory. Work at Argonne was supported by U.S. DOE, Office of Science under Contract No. DE-AC02-06CH11357.

  13. Perspective: Computer simulations of long time dynamics.

    PubMed

    Elber, Ron

    2016-02-14

    Atomically detailed computer simulations of complex molecular events have attracted the imagination of many researchers in the field as providing comprehensive information on chemical, biological, and physical processes. However, one of the greatest limitations of these simulations is that of time scales. The physical time scales accessible to straightforward simulations are too short to address many interesting and important molecular events. In the last decade significant advances were made in different directions (theory, software, and hardware) that significantly expand the capabilities and accuracies of these techniques. This perspective describes and critically examines some of these advances. PMID:26874473

  14. Perspective: Computer simulations of long time dynamics

    PubMed Central

    Elber, Ron

    2016-01-01

    Atomically detailed computer simulations of complex molecular events have attracted the imagination of many researchers in the field as providing comprehensive information on chemical, biological, and physical processes. However, one of the greatest limitations of these simulations is that of time scales. The physical time scales accessible to straightforward simulations are too short to address many interesting and important molecular events. In the last decade significant advances were made in different directions (theory, software, and hardware) that significantly expand the capabilities and accuracies of these techniques. This perspective describes and critically examines some of these advances. PMID:26874473

  15. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications, requiring the design and production of many novel devices in record time. To achieve this, the TWT industry relies heavily on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This three-dimensional model has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed, requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, along with suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  16. Metal cutting simulation of 4340 steel using an accurate mechanical description of material strength and fracture

    SciTech Connect

    Maudlin, P.J.; Stout, M.G.

    1996-09-01

    Strength and fracture constitutive relationships containing strain-rate dependence and thermal softening are important for accurate simulation of metal cutting. The mechanical behavior of a hardened 4340 steel was characterized using the von Mises yield function, the Mechanical Threshold Stress model, and the Johnson-Cook fracture model. This constitutive description was implemented into the explicit Lagrangian FEM continuum-mechanics code EPIC, and orthogonal plane-strain metal cutting calculations were performed. Heat conduction and friction at the tool/work-piece interface were included in the simulations. These transient calculations were advanced in time until steady-state machining behavior (force) was realized. Experimental cutting force data (cutting and thrust forces) were measured for a planing operation and compared to the calculations. 13 refs., 6 figs.
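
    For flavor, the Johnson-Cook form itself is a compact formula; the sketch below evaluates the standard Johnson-Cook flow stress with constants commonly quoted in the literature for 4340 steel (note that the paper's strength model is the Mechanical Threshold Stress model, so this is a related illustration, not the paper's calculation):

        import numpy as np

        def johnson_cook(eps_p, eps_dot, T, A=792e6, B=510e6, n=0.26,
                         C=0.014, m=1.03, eps0=1.0, T_room=293.0, T_melt=1793.0):
            """Johnson-Cook flow stress [Pa] with strain-rate and thermal terms."""
            T_star = (T - T_room) / (T_melt - T_room)
            return ((A + B * eps_p ** n)
                    * (1.0 + C * np.log(max(eps_dot / eps0, 1e-12)))
                    * (1.0 - T_star ** m))

        # Thermal softening at a plastic strain of 0.2 and a cutting-like strain rate:
        for T in (293.0, 600.0, 900.0):
            print(f"T = {T:5.0f} K: {johnson_cook(0.2, 1e4, T) / 1e6:7.1f} MPa")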

  17. Uncertainty and error in computational simulations

    SciTech Connect

    Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.

    1997-10-01

    The present paper addresses the question: ``What are the general classes of uncertainty and error sources in complex, computational simulations?`` This is the first step of a two-step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built upon combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general, but examples are given for the fields of computational fluid dynamics and heat transfer. They argue that a clear distinction should be made between uncertainty and error that can arise in each of these phases. The present definitions for uncertainty and error are inadequate and, therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, they discuss a coupled-physics example simulation.

  18. Simulating physical phenomena with a quantum computer

    NASA Astrophysics Data System (ADS)

    Ortiz, Gerardo

    2003-03-01

    In a keynote speech at MIT in 1981, Richard Feynman raised some provocative questions in connection to the exact simulation of physical systems using a special device named a ``quantum computer'' (QC). At the time it was known that deterministic simulations of quantum phenomena in classical computers required a number of resources that scaled exponentially with the number of degrees of freedom, and also that the probabilistic simulation of certain quantum problems was limited by the so-called sign or phase problem, a problem believed to be of exponential complexity. Such a QC was intended to mimic physical processes exactly as Nature does. Certainly, remarks coming from such an influential figure generated widespread interest in these ideas, and today, after 21 years, there are still some open questions. What kind of physical phenomena can be simulated with a QC? How? And what are its limitations? Addressing and attempting to answer these questions is what this talk is about. Definitively, the goal of physics simulation using controllable quantum systems (``physics imitation'') is to exploit quantum laws to advantage, and thus accomplish efficient imitation. Fundamental is the connection between a quantum computational model and a physical system by transformations of operator algebras. This concept is a necessary one because in Quantum Mechanics each physical system is naturally associated with a language of operators and thus can be considered as a possible model of quantum computation. The remarkable result is that an arbitrary physical system is naturally simulatable by another physical system (or QC) whenever a ``dictionary'' between the two operator algebras exists. I will explain these concepts and address some of Feynman's concerns regarding the simulation of fermionic systems. Finally, I will illustrate the main ideas by imitating simple physical phenomena borrowed from condensed matter physics using quantum algorithms, and present experimental
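
    A classical-computer illustration of the decomposition a quantum computer exploits: Trotterized time evolution of a toy two-spin transverse-field Ising model (invented couplings), compared with the exact propagator:

        import numpy as np
        from scipy.linalg import expm

        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.diag([1.0 + 0j, -1.0])
        I2 = np.eye(2)

        J, hx, t, n = 1.0, 0.7, 2.0, 50            # toy couplings, time, Trotter steps
        HZZ = J * np.kron(Z, Z)
        HX = hx * (np.kron(X, I2) + np.kron(I2, X))

        U_exact = expm(-1j * (HZZ + HX) * t)
        step = expm(-1j * HZZ * t / n) @ expm(-1j * HX * t / n)
        U_trot = np.linalg.matrix_power(step, n)   # gate-by-gate style evolution

        psi0 = np.array([1, 0, 0, 0], dtype=complex)
        print("Trotter error:", np.linalg.norm((U_exact - U_trot) @ psi0))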

  19. Numerical Simulation of the 2004 Indian Ocean Tsunami: Accurate Flooding and drying in Banda Aceh

    NASA Astrophysics Data System (ADS)

    Cui, Haiyang; Pietrzak, Julie; Stelling, Guus; Androsov, Alexey; Harig, Sven

    2010-05-01

    The earthquake in the Indian Ocean on December 26, 2004 caused one of the largest tsunamis in recent times and led to widespread devastation and loss of life. One of the worst hit regions was Banda Aceh, the capital of the Aceh province, located in the northern part of Sumatra, 150 km from the source of the earthquake. A German-Indonesian Tsunami Early Warning System (GITEWS) (www.gitews.de) is currently under active development. The work presented here is carried out within the GITEWS framework. One of the aims of this project is the development of accurate models with which to simulate the propagation, flooding and drying, and run-up of a tsunami. In this context, TsunAWI has been developed by the Alfred Wegener Institute; it is an explicit P1NC - P1 finite element model. However, the accurate numerical simulation of flooding and drying requires the conservation of mass and momentum. This is not possible in the current version of TsunAWI. The P1NC - P1 element guarantees mass conservation in a global sense, yet as we show here it is important to guarantee mass conservation at the local level, that is, within each individual cell. Here an unstructured grid, finite volume ocean model is presented. It is derived from the P1NC - P1 element, and is shown to be mass and momentum conserving. Then a number of simulations are presented, including dam break problems with flooding over both wet and dry beds. Excellent agreement is found. Then we present simulations for Banda Aceh, and compare the results to on-site survey data, as well as to results from the original TsunAWI code.
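
    A one-dimensional toy (not TsunAWI or the finite volume model described above) shows what locally conservative flooding and drying means in practice: a dam break onto a dry bed advanced with a Rusanov flux, with the mass balance checked at the end:

        import numpy as np

        g, eps = 9.81, 1e-6                        # gravity, dry-cell tolerance
        N, L = 200, 10.0
        dx = L / N
        xc = (np.arange(N) + 0.5) * dx
        h = np.where(xc < 5.0, 1.0, 0.0)           # water column: dam break onto dry bed
        hu = np.zeros(N)

        t, mass0 = 0.0, h.sum() * dx
        while t < 0.5:
            u = np.where(h > eps, hu / np.maximum(h, eps), 0.0)
            c = np.sqrt(g * h)
            dt = 0.4 * dx / (np.abs(u) + c).max()
            F = np.array([hu, hu * u + 0.5 * g * h * h])   # physical fluxes
            a = np.maximum((np.abs(u) + c)[:-1], (np.abs(u) + c)[1:])
            Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * np.array(
                [h[1:] - h[:-1], hu[1:] - hu[:-1]])        # Rusanov interface fluxes
            h[1:-1] -= dt / dx * (Fi[0, 1:] - Fi[0, :-1])
            hu[1:-1] -= dt / dx * (Fi[1, 1:] - Fi[1, :-1])
            hu[h <= eps] = 0.0                             # dry cells carry no momentum
            t += dt

        print("relative mass error:", abs(h.sum() * dx - mass0) / mass0)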

  20. Quantitative computer simulations of extraterrestrial processing operations

    NASA Technical Reports Server (NTRS)

    Vincent, T. L.; Nikravesh, P. E.

    1989-01-01

    The automation of a small, solid propellant mixer was studied. Temperature control is under investigation. A numerical simulation of the system is under development and will be tested using different control options. Control system hardware is currently being put into place. The construction of mathematical models and simulation techniques for understanding various engineering processes is also studied. Computer graphics packages were utilized for better visualization of the simulation results. The mechanical mixing of propellants is examined. Simulation of the mixing process is being done to study how one can control for chaotic behavior to meet specified mixing requirements. An experimental mixing chamber is also being built. It will allow visual tracking of particles under mixing. The experimental unit will be used to test ideas from chaos theory, as well as to verify simulation results. This project has applications to extraterrestrial propellant quality and reliability.

  1. Computer simulation of screw dislocation in aluminum

    NASA Technical Reports Server (NTRS)

    Esterling, D. M.

    1976-01-01

    The atomic structure in a 110 screw dislocation core for aluminum is obtained by computer simulation. The lattice statics technique is employed since it entails no artificially imposed elastic boundary around the defect. The interatomic potential has no adjustable parameters and was derived from pseudopotential theory. The resulting atomic displacements were allowed to relax in all three dimensions.

  2. Eliminating Computational Instability In Multibody Simulations

    NASA Technical Reports Server (NTRS)

    Watts, Gaines L.

    1994-01-01

    TWOBODY implements an improved version of the Lagrange multiplier method. The program utilizes a programming technique that eliminates computational instability in multibody simulations in which Lagrange multipliers are used. In this technique, one uses constraint equations, instead of integration, to determine the coordinates that are not independent. To illustrate the technique, the program includes a simple mathematical model of a solid rocket booster and parachute connected by a frictionless swivel. Written in FORTRAN 77.
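
    A toy version of the idea (not the TWOBODY code): for a planar pendulum, integrate only the independent angle and recover the dependent Cartesian coordinates from the constraint equation at every step, so the constraint can never drift:

        import numpy as np

        g, l, dt = 9.81, 1.0, 1e-3
        theta, omega = 0.5, 0.0                        # independent coordinate and rate

        for _ in range(2000):
            omega += -(g / l) * np.sin(theta) * dt     # integrate the independent DOF only
            theta += omega * dt
            # Dependent coordinates from the constraint x^2 + y^2 = l^2:
            x, y = l * np.sin(theta), -l * np.cos(theta)

        print(f"constraint residual: {x * x + y * y - l * l:.1e}")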

  3. Macromod: Computer Simulation For Introductory Economics

    ERIC Educational Resources Information Center

    Ross, Thomas

    1977-01-01

    The Macroeconomic model (Macromod) is a computer assisted instruction simulation model designed for introductory economics courses. An evaluation of its utilization at a community college indicates that it yielded a 10 percent to 13 percent greater economic comprehension than lecture classes and that it met with high student approval. (DC)

  4. Designing Online Scaffolds for Interactive Computer Simulation

    ERIC Educational Resources Information Center

    Chen, Ching-Huei; Wu, I-Chia; Jen, Fen-Lan

    2013-01-01

    The purpose of this study was to examine the effectiveness of online scaffolds in computer simulation to facilitate students' science learning. We first introduced online scaffolds to assist and model students' science learning and to demonstrate how a system embedded with online scaffolds can be designed and implemented to help high…

  5. Assessing Moderator Variables: Two Computer Simulation Studies.

    ERIC Educational Resources Information Center

    Mason, Craig A.; And Others

    1996-01-01

    A strategy is proposed for conceptualizing moderating relationships based on their type (strictly correlational and classically correlational) and form, whether continuous, noncontinuous, logistic, or quantum. Results of computer simulations comparing three statistical approaches for assessing moderator variables are presented, and advantages of…

  6. Decision Making in Computer-Simulated Experiments.

    ERIC Educational Resources Information Center

    Suits, J. P.; Lagowski, J. J.

    A set of interactive, computer-simulated experiments was designed to respond to the large range of individual differences in aptitude and reasoning ability generally exhibited by students enrolled in first-semester general chemistry. These experiments give students direct experience in the type of decision making needed in an experimental setting.…

  7. Progress in Computational Simulation of Earthquakes

    NASA Technical Reports Server (NTRS)

    Donnellan, Andrea; Parker, Jay; Lyzenga, Gregory; Judd, Michele; Li, P. Peggy; Norton, Charles; Tisdale, Edwin; Granat, Robert

    2006-01-01

    GeoFEST(P) is a computer program written for use in the QuakeSim project, which is devoted to development and improvement of means of computational simulation of earthquakes. GeoFEST(P) models interacting earthquake fault systems from the fault-nucleation to the tectonic scale. The development of GeoFEST(P) has involved coupling of two programs: GeoFEST and the Pyramid Adaptive Mesh Refinement Library. GeoFEST is a message-passing-interface-parallel code that utilizes a finite-element technique to simulate evolution of stress, fault slip, and plastic/elastic deformation in realistic materials like those of faulted regions of the crust of the Earth. The products of such simulations are synthetic observable time-dependent surface deformations on time scales from days to decades. The Pyramid Adaptive Mesh Refinement Library is a software library that facilitates the generation of computational meshes for solving physical problems. In an application of GeoFEST(P), a computational grid can be dynamically adapted as stress grows on a fault. Simulations on workstations using a few tens of thousands of stress and displacement finite elements can now be expanded to multiple millions of elements with greater than 98-percent scaled efficiency on many hundreds of parallel processors.

  8. Computation applied to particle accelerator simulations

    SciTech Connect

    Herrmannsfeldt, W.B.; Yan, Y.T.

    1991-07-01

    The rapid growth in the power of large-scale computers has had a revolutionary effect on the study of charged-particle accelerators that is similar to the impact of smaller computers on everyday life. Before an accelerator is built, it is now the absolute rule to simulate every component and subsystem by computer to establish modes of operation and tolerances. We will bypass the important and fruitful areas of control and operation and consider only application to design and diagnostic interpretation. Applications of computers can be divided into separate categories including: component design, system design, stability studies, cost optimization, and operating condition simulation. For the purposes of this report, we will choose a few examples taken from the above categories to illustrate the methods and we will discuss the significance of the work to the project, and also briefly discuss the accelerator project itself. The examples that will be discussed are: (1) the tracking analysis done for the main ring of the Superconducting Supercollider, which contributed to the analysis which ultimately resulted in changing the dipole coil diameter to 5 cm from the earlier design for a 4-cm coil-diameter dipole magnet; (2) the design of accelerator structures for electron-positron linear colliders and circular colliding beam systems (B-factories); (3) simulation of the wake fields from multibunch electron beams for linear colliders; and (4) particle-in-cell simulation of space-charge dominated beams for an experimental liner induction accelerator for Heavy Ion Fusion. 8 refs., 9 figs.

  9. Factors Promoting Engaged Exploration with Computer Simulations

    ERIC Educational Resources Information Center

    Podolefsky, Noah S.; Perkins, Katherine K.; Adams, Wendy K.

    2010-01-01

    This paper extends prior research on student use of computer simulations (sims) to engage with and explore science topics, in this case wave interference. We describe engaged exploration; a process that involves students actively interacting with educational materials, sense making, and exploring primarily via their own questioning. We analyze…

  10. Spiking network simulation code for petascale computers.

    PubMed

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M; Plesser, Hans E; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682
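
    The "double collapse" can be mimicked schematically in a few lines (an illustration, not the actual data structure presented in the paper): each compute node keeps one homogeneous container per synapse type actually present, so types that never appear on a node cost it nothing:

        from collections import defaultdict

        class ComputeNode:
            """Toy per-node synapse store: one homogeneous list per synapse type."""
            def __init__(self):
                self.by_type = defaultdict(list)   # created lazily, only for types present

            def add_synapse(self, syn_type, source, target, weight):
                self.by_type[syn_type].append((source, target, weight))

        node = ComputeNode()
        node.add_synapse("static", source=17, target=4, weight=0.3)
        node.add_synapse("stdp", source=17, target=9, weight=0.1)
        print({t: len(s) for t, s in node.by_type.items()})   # only types in use are stored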

  11. Spiking network simulation code for petascale computers

    PubMed Central

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682

  12. Canonical Decomposition of Ictal Scalp EEG and Accurate Source Localisation: Principles and Simulation Study

    PubMed Central

    De Vos, Maarten; De Lathauwer, Lieven; Vanrumste, Bart; Van Huffel, Sabine; Van Paesschen, W.

    2007-01-01

    Long-term electroencephalographic (EEG) recordings are important in the presurgical evaluation of refractory partial epilepsy for the delineation of the ictal onset zones. In this paper, we introduce a new concept for an automatic, fast, and objective localisation of the ictal onset zone in ictal EEG recordings. Canonical decomposition of ictal EEG decomposes the EEG into atoms. One or more atoms are related to the seizure activity. A single dipole was then fitted to model the potential distribution of each epileptic atom. We performed a simulation study in order to estimate the dipole localisation error. Ictal dipole localisation was very accurate, even at low signal-to-noise ratios, was not affected by seizure activity frequency or frequency changes, and was minimally affected by the waveform and depth of the ictal onset zone location. The ictal dipole localisation error using 21 electrodes was around 10.0 mm and improved more than tenfold, into the range of 0.5–1.0 mm, using 148 channels. In conclusion, our simulation study of canonical decomposition of ictal scalp EEG allowed a robust and accurate localisation of the ictal onset zone. PMID:18301715
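
    Canonical (CP/PARAFAC) decomposition itself fits in a short alternating-least-squares loop; this generic numpy sketch (not the authors' pipeline) factorizes a synthetic channels × time × frequency array into rank-2 atoms whose spatial signatures could then be passed to a dipole fit:

        import numpy as np

        def khatri_rao(B, C):
            return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

        def cp_als(T, R, n_iter=100):
            """Rank-R canonical decomposition of a 3-way array via alternating LS."""
            I, J, K = T.shape
            rng = np.random.default_rng(0)
            A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
            for _ in range(n_iter):
                A = T.reshape(I, J * K) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
                B = T.transpose(1, 0, 2).reshape(J, I * K) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
                C = T.transpose(2, 0, 1).reshape(K, I * J) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
            return A, B, C    # spatial, temporal, spectral signatures of the atoms

        rng = np.random.default_rng(1)
        A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (21, 200, 30))
        T = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.standard_normal((21, 200, 30))
        A, B, C = cp_als(T, R=2)
        rel = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T)
        print("relative fit residual:", rel)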

  13. Computer simulations of WIGWAM underwater experiment

    SciTech Connect

    Kamegai, Minao; White, J.W.

    1993-11-01

    We performed computer simulations of the WIGWAM underwater experiment with a 2-D hydro-code, CALE. First, we calculated the bubble pulse and the signal strength at the closest gauge in one-dimensional geometry. The calculation shows excellent agreement with the measured data. Next, we made two-dimensional simulations of WIGWAM applying the gravity over-pressure, and calculated the signals at three selected gauge locations where measurements were recorded. The computed peak pressures at those gauge locations come well within the 15% experimental error bars. The signal at the farthest gauge is of the order of 200 bars. This is significant, because at this pressure the CALE output can be linked to a hydro-acoustics computer program, NPE Code (Nonlinear Progressive Wave-equation Code), to analyze the long distance propagation of acoustical signals from the underwater explosions on a global scale.

  14. Computer simulation of underwater nuclear effects

    SciTech Connect

    Kamegai, M.

    1987-01-30

    We investigated underwater nuclear effects by computer simulations. First, we computed a long distance wave propagation in water by the 1-D LASNEX code by modeling the energy source and the underwater environment. The pressure-distance data were calculated for two quite different yields; pressures range from 300 GPa to 15 MPa. They were found to be in good agreement with Snay's theoretical points and the Wigwam measurements. The computed data also agree with the similarity solution at high pressures and the empirical equation at low pressures. After completion of the 1-D study, we investigated a free surface effect commonly referred to as irregular surface rarefaction by applying two hydrocodes (LASNEX and ALE), linked at the appropriate time. Using these codes, we simulated near-surface explosions for three depths of burst (3 m, 21 m and 66.5 m), which represent the strong, intermediate, and weak surface shocks, respectively.

  15. TRIM-3D: a three-dimensional model for accurate simulation of shallow water flow

    USGS Publications Warehouse

    Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.

    1993-01-01

    A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced in the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for an efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to provide the fine details of circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results, suitable interactive graphics is also an essential tool.

  16. Computer simulation of surface and film processes

    NASA Technical Reports Server (NTRS)

    Tiller, W. A.; Halicioglu, M. T.

    1983-01-01

    Adequate computer methods, based on interactions between discrete particles, provide information leading to an atomic-level understanding of various physical processes. The success of these simulation methods, however, is related to the accuracy of the potential energy function representing the interactions among the particles. The development of a potential energy function for crystalline SiO2 forms that can be employed in lengthy computer modelling procedures was investigated. In many of the simulation methods which deal with discrete particles, semiempirical two-body potentials were employed to analyze energy- and structure-related properties of the system. However, many-body interactions are required for a proper representation of the total energy of many systems. Many-body interactions for simulations based on discrete particles are discussed.
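
    The difference between two-body and many-body descriptions is easy to see in code; this toy sketch (generic functional forms with invented parameters, not the SiO2 potential developed in the study) sums a Lennard-Jones pair term and adds a Stillinger-Weber-style three-body angular penalty:

        import numpy as np
        from itertools import combinations

        def pair_energy(r, eps=1.0, sigma=1.0):
            """Two-body Lennard-Jones term."""
            s6 = (sigma / r) ** 6
            return 4 * eps * (s6 * s6 - s6)

        def triplet_energy(ri, rj, rk, lam=2.0, cos0=-1 / 3):
            """Three-body term penalizing deviation from an ideal bond angle at i."""
            a, b = rj - ri, rk - ri
            cos_t = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            return lam * (cos_t - cos0) ** 2

        pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0],
                        [-0.4, 1.0, 0.0], [0.2, -0.9, 0.6]])
        E2 = sum(pair_energy(np.linalg.norm(pos[i] - pos[j]))
                 for i, j in combinations(range(len(pos)), 2))
        E3 = sum(triplet_energy(pos[i], pos[j], pos[k])
                 for i in range(len(pos))
                 for j, k in combinations([m for m in range(len(pos)) if m != i], 2))
        print(f"two-body: {E2:.3f}  three-body: {E3:.3f}  total: {E2 + E3:.3f}")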

  17. Computational Simulation of Composite Structural Fatigue

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon

    2004-01-01

    Progressive damage and fracture of composite structures subjected to monotonically increasing static, tension-tension cyclic, pressurization, and flexural cyclic loading are evaluated via computational simulation. Constituent material properties, stress and strain limits are scaled up to the structure level to evaluate the overall damage and fracture propagation for composites. Damage initiation, growth, accumulation, and propagation to fracture due to monotonically increasing static and cyclic loads are included in the simulations. Results show the number of cycles to failure at different temperatures and the damage progression sequence during different degradation stages. A procedure is outlined for use of computational simulation data in the assessment of damage tolerance, determination of sensitive parameters affecting fracture, and interpretation of results with insight for design decisions.

  18. Computational Simulation of Composite Structural Fatigue

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon; Chamis, Christos C. (Technical Monitor)

    2005-01-01

    Progressive damage and fracture of composite structures subjected to monotonically increasing static, tension-tension cyclic, pressurization, and flexural cyclic loading are evaluated via computational simulation. Constituent material properties, stress and strain limits are scaled up to the structure level to evaluate the overall damage and fracture propagation for composites. Damage initiation, growth, accumulation, and propagation to fracture due to monotonically increasing static and cyclic loads are included in the simulations. Results show the number of cycles to failure at different temperatures and the damage progression sequence during different degradation stages. A procedure is outlined for use of computational simulation data in the assessment of damage tolerance, determination of sensitive parameters affecting fracture, and interpretation of results with insight for design decisions.

  19. Accurate simulation of two-dimensional optical microcavities with uniquely solvable boundary integral equations and trigonometric Galerkin discretization.

    PubMed

    Boriskina, Svetlana V; Sewell, Phillip; Benson, Trevor M; Nosich, Alexander I

    2004-03-01

    A fast and accurate method is developed to compute the natural frequencies and scattering characteristics of arbitrary-shape two-dimensional dielectric resonators. The problem is formulated in terms of a uniquely solvable set of second-kind boundary integral equations and discretized by the Galerkin method with angular exponents as global test and trial functions. The log-singular term is extracted from one of the kernels, and closed-form expressions are derived for the main parts of all the integral operators. The resulting discrete scheme has a very high convergence rate. The method is used in the simulation of several optical microcavities for modern dense wavelength-division-multiplexed systems. PMID:15005404

  20. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    NASA Astrophysics Data System (ADS)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of dental implant-supported prostheses is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67+/-34 μm and 108 μm, and angular misfits of 0.15+/-0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
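
    A stripped-down 2-D analogue of the registration step (an illustrative SciPy sketch, not the authors' framework) optimizes a rigid rotation and translation that minimize the intensity mismatch between a simulated template and the acquired image, then reads the pose off the optimal parameters:

        import numpy as np
        from scipy import ndimage, optimize

        def make_implant(shift=(0.0, 0.0), angle_deg=0.0, size=64):
            """Toy binary 'implant' image: a bar, rotated then shifted."""
            img = np.zeros((size, size))
            img[24:40, 30:34] = 1.0
            img = ndimage.rotate(img, angle_deg, reshape=False, order=1)
            return ndimage.shift(img, shift, order=1)

        target = make_implant(shift=(3.2, -2.1), angle_deg=7.5)   # stand-in patient data

        def mismatch(p):
            dy, dx, ang = p
            return np.sum((make_implant((dy, dx), ang) - target) ** 2)

        res = optimize.minimize(mismatch, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
        print("recovered pose (dy, dx, angle):", np.round(res.x, 2))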

  1. EASY-GOING deconvolution: Combining accurate simulation and evolutionary algorithms for fast deconvolution of solid-state quadrupolar NMR spectra

    NASA Astrophysics Data System (ADS)

    Grimminck, Dennis L. A. G.; Polman, Ben J. W.; Kentgens, Arno P. M.; Leo Meerts, W.

    2011-08-01

    A fast and accurate fit program is presented for deconvolution of one-dimensional solid-state quadrupolar NMR spectra of powdered materials. Computational costs of the synthesis of theoretical spectra are reduced by the use of libraries containing simulated time/frequency domain data. These libraries are calculated once, using second-party simulation software readily available in the NMR community, to ensure maximum flexibility and accuracy with respect to experimental conditions. EASY-GOING deconvolution (EGdeconv) is equipped with evolutionary algorithms that provide robust many-parameter fitting and offers efficient parallelised computing. The program supports quantification of relative chemical site abundances and (dis)order in the solid state by incorporation of (extended) Czjzek and order-parameter models. To illustrate EGdeconv's current capabilities, we provide three case studies. Given the program's simple concept, it allows straightforward extension to include other NMR interactions. The program is available as is for 64-bit Linux operating systems.
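
    Evolutionary fitting needs only a candidate generator and a residual; here is a minimal (mu + lambda)-style sketch in numpy, with an invented two-parameter Gaussian line shape standing in for the library-based quadrupolar line-shape synthesis:

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(-50, 50, 512)                   # frequency axis [kHz]

        def simulate(center, width):                    # stand-in for a library lookup
            return np.exp(-0.5 * ((x - center) / width) ** 2)

        measured = simulate(12.0, 6.0) + 0.02 * rng.standard_normal(x.size)

        def residual(p):
            return np.sum((simulate(*p) - measured) ** 2)

        pop = rng.uniform([-40.0, 1.0], [40.0, 20.0], size=(20, 2))
        for gen in range(60):                           # (mu + lambda) evolution strategy
            children = pop[rng.integers(0, 20, 60)] + rng.normal(0.0, 1.0, (60, 2))
            children[:, 1] = np.maximum(children[:, 1], 0.5)    # keep widths physical
            everyone = np.vstack([pop, children])
            everyone = everyone[np.argsort([residual(p) for p in everyone])]
            pop = everyone[:20]                         # keep the 20 fittest
        print("best (center, width):", np.round(pop[0], 2))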

  2. Real-Time Simulation Computation System. [for digital flight simulation of research aircraft

    NASA Technical Reports Server (NTRS)

    Fetter, J. L.

    1981-01-01

    The Real-Time Simulation Computation System, which will provide the flexibility necessary for operation in the research environment at the Ames Research Center, is discussed. Designing the system with common subcomponents and using modular construction techniques enhances its expandability and maintainability. The 10-MHz serial transmission scheme is the basis of the Input/Output Unit System and is the driving force providing the system's flexibility. Error checking and detection performed on the transmitted data provide reliability measurements and assurance that accurate data are received at the simulators.

  3. A novel fast and accurate pseudo-analytical simulation approach for MOAO

    NASA Astrophysics Data System (ADS)

    Gendron, É.; Charara, A.; Abdelfattah, A.; Gratadour, D.; Keyes, D.; Ltaief, H.; Morel, C.; Vidal, F.; Sevin, A.; Rousset, G.

    2014-08-01

    Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique for wide-field multi-object spectrographs (MOS). MOAO aims at applying dedicated wavefront corrections to numerous separated tiny patches spread over a large field of view (FOV), limited only by that of the telescope. The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. We have developed a novel hybrid, pseudo-analytical simulation scheme, somewhere in between the end-to-end and purely analytical approaches, that allows us to simulate in detail the tomographic problem as well as noise and aliasing with high fidelity, and including fitting and bandwidth errors thanks to a Fourier-based code. Our tomographic approach is based on the computation of the minimum mean square error (MMSE) reconstructor, from which we derive numerically the covariance matrix of the tomographic error, including aliasing and propagated noise. We are then able to simulate the point-spread function (PSF) associated with this covariance matrix of the residuals, as in PSF reconstruction algorithms. The advantage of our approach is that we compute the same tomographic reconstructor that would be computed when operating the real instrument, so that our developments open the way for a future on-sky implementation of the tomographic control, together with joint PSF and performance estimation. The main challenge resides in the computation of the tomographic reconstructor, which involves the inversion of a large matrix (typically 40 000 × 40 000 elements). To perform this computation efficiently, we chose an optimized approach based on the use of GPUs as accelerators and an optimized linear algebra library, MORSE, providing a significant speedup over standard CPU-oriented libraries such as Intel MKL. Because the covariance matrix is
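
    The MMSE step has a compact linear-algebra core; the numpy sketch below (tiny invented covariances standing in for the 40 000 × 40 000 matrices mentioned above) forms the reconstructor and the residual covariance from which a PSF could then be synthesized:

        import numpy as np

        rng = np.random.default_rng(0)
        n_meas, n_phase = 12, 8                      # toy sizes; real systems: ~40 000

        G = rng.standard_normal((n_meas, n_phase))   # toy phase-to-measurement operator
        C_xx = np.eye(n_phase)                       # phase covariance
        C_nn = 0.1 * np.eye(n_meas)                  # WFS noise covariance
        C_ss = G @ C_xx @ G.T + C_nn                 # measurement covariance
        C_xs = C_xx @ G.T                            # phase/measurement cross-covariance

        R = C_xs @ np.linalg.inv(C_ss)               # MMSE tomographic reconstructor
        C_err = C_xx - R @ C_xs.T                    # covariance of the tomographic error
        print("residual phase variance:", np.trace(C_err) / n_phase)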

  4. Tracking Non-rigid Structures in Computer Simulations

    SciTech Connect

    Gezahegne, A; Kamath, C

    2008-01-10

    A key challenge in tracking moving objects is the correspondence problem, that is, the correct propagation of object labels from one time step to another. This is especially true when the objects are non-rigid structures, changing shape, and merging and splitting over time. In this work, we describe a general approach to tracking thousands of non-rigid structures in an image sequence. We show how we can minimize memory requirements and generate accurate results while working with only two frames of the sequence at a time. We demonstrate our results using data from computer simulations of a fluid mix problem.
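
    Label propagation between two frames can be done by maximum overlap of connected components; this short SciPy sketch (a generic illustration, not the authors' algorithm) labels the new frame and carries each structure's dominant old label forward, minting fresh labels for newly appeared structures:

        import numpy as np
        from scipy import ndimage

        def propagate_labels(curr_mask, prev_labels):
            """Carry labels from the previous frame onto overlapping new structures."""
            curr_labels, n_curr = ndimage.label(curr_mask)
            out = np.zeros_like(prev_labels)
            next_label = prev_labels.max() + 1
            for c in range(1, n_curr + 1):
                region = curr_labels == c
                overlap = prev_labels[region]
                overlap = overlap[overlap > 0]
                if overlap.size:                     # inherit the dominant old label
                    out[region] = np.bincount(overlap).argmax()
                else:                                # a newly appeared structure
                    out[region] = next_label
                    next_label += 1
            return out

        f0 = np.zeros((20, 20), bool); f0[2:6, 2:6] = True
        f1 = np.zeros((20, 20), bool); f1[3:7, 4:8] = True; f1[12:15, 12:15] = True
        l0, _ = ndimage.label(f0)
        print(np.unique(propagate_labels(f1, l0)))   # -> [0 1 2]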

  5. Computer simulation of underwater nuclear events

    SciTech Connect

    Kamegai, M.

    1986-09-01

    This report describes the computer simulation of two underwater nuclear explosions, Operation Wigwam and a modern hypothetical explosion of greater yield. The computer simulations were done in spherical geometry with the LASNEX computer code. Comparison of the LASNEX calculation with Snay's analytical results and the Wigwam measurements shows that agreement in the shock pressure versus range in water is better than 5%. The results of the calculations are also consistent with the cube root scaling law for an underwater blast wave. The time constant of the wave front was determined from the wave profiles taken at several points. The LASNEX time-constant calculation and Snay's theoretical results agree to within 20%. A time-constant-versus-range relation empirically fitted by Snay is valid only within a limited range at low pressures, whereas a time-constant formula based on Sedov's similarity solution holds at very high pressures. This leaves the intermediate pressure range with neither an empirical nor a theoretical formula for the time constant. These one-dimensional simulations demonstrate applicability of the computer code to investigations of this nature, and justify the use of this technique for more complex two-dimensional problems, namely, surface effects on underwater nuclear explosions. 16 refs., 8 figs., 2 tabs.

  6. Accurate computation and interpretation of spin-dependent properties in metalloproteins

    NASA Astrophysics Data System (ADS)

    Rodriguez, Jorge

    2006-03-01

    Nature uses the properties of open-shell transition metal ions to carry out a variety of functions associated with vital life processes. Mononuclear and binuclear iron centers, in particular, are intriguing structural motifs present in many heme and non-heme proteins. Hemerythrin and methane monooxygenase, for example, are members of the latter class whose diiron active sites display magnetic ordering. We have developed a computational protocol based on spin density functional theory (SDFT) to accurately predict physico-chemical parameters of metal sites in proteins and bioinorganic complexes which traditionally had been determined only from experiment. We have used this new methodology to perform a comprehensive study of the electronic structure and magnetic properties of heme and non-heme iron proteins and related model compounds. We have been able to predict with a high degree of accuracy spectroscopic (Mössbauer, EPR, UV-vis, Raman) and magnetization parameters of iron proteins and, at the same time, gained unprecedented microscopic understanding of their physico-chemical properties. Our results have allowed us to establish important correlations between the electronic structure, geometry, spectroscopic data, and biochemical function of heme and non-heme iron proteins.

  7. Aeroacoustic Flow Phenomena Accurately Captured by New Computational Fluid Dynamics Method

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.

    2002-01-01

    One of the challenges in the computational fluid dynamics area is the accurate calculation of aeroacoustic phenomena, especially in the presence of shock waves. One such phenomenon is "transonic resonance," where an unsteady shock wave at the throat of a convergent-divergent nozzle results in the emission of acoustic tones. The space-time Conservation-Element and Solution-Element (CE/SE) method developed at the NASA Glenn Research Center can faithfully capture the shock waves, their unsteady motion, and the generated acoustic tones. The CE/SE method is a revolutionary new approach to the numerical modeling of physical phenomena where features with steep gradients (e.g., shock waves, phase transition, etc.) must coexist with those having weaker variations. The CE/SE method does not require the complex interpolation procedures (that allow for the possibility of a shock between grid cells) used by many other methods to transfer information between grid cells. These interpolation procedures can add too much numerical dissipation to the solution process. Thus, while shocks are resolved, weaker waves, such as acoustic waves, are washed out.

  8. Fast and accurate computation of two-dimensional non-separable quadratic-phase integrals.

    PubMed

    Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus

    2010-06-01

    We report a fast and accurate algorithm for numerical computation of two-dimensional non-separable linear canonical transforms (2D-NS-LCTs). Also known as quadratic-phase integrals, this class of integral transforms represents a broad class of optical systems including Fresnel propagation in free space, propagation in graded-index media, passage through thin lenses, and arbitrary concatenations of any number of these, including anamorphic/astigmatic/non-orthogonal cases. The general two-dimensional non-separable case poses several challenges which do not exist in the one-dimensional case or the separable two-dimensional case. The algorithm takes approximately N log N time, where N is the two-dimensional space-bandwidth product of the signal. Our method properly tracks and controls the space-bandwidth products in two dimensions, in order to achieve information-theoretically sufficient, but not wastefully redundant, sampling required for the reconstruction of the underlying continuous functions at any stage of the algorithm. Additionally, we provide an alternative definition of general 2D-NS-LCTs that shows the kernel explicitly in terms of its ten parameters, and relate these parameters bidirectionally to conventional ABCD matrix parameters. PMID:20508697
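
    As a point of reference for the fast-algorithm idea, the sketch below evaluates the simplest member of the LCT family, one-dimensional Fresnel propagation in free space, with a single FFT-based transfer-function step in O(N log N) time. The restriction to the separable 1D case and all sampling choices are our own simplifications; the paper's algorithm handles the general non-separable 2D kernel.

    ```python
    import numpy as np

    def fresnel_propagate_1d(u0, dx, wavelength, z):
        """Paraxial free-space propagation, the simplest quadratic-phase
        integral: FFT, multiply by the Fresnel transfer function, inverse
        FFT -- an O(N log N) evaluation of one special-case LCT."""
        fx = np.fft.fftfreq(u0.size, d=dx)                # spatial frequencies
        H = np.exp(-1j * np.pi * wavelength * z * fx**2)  # quadratic-phase kernel
        return np.fft.ifft(np.fft.fft(u0) * H)

    # Example: propagate a Gaussian beam 0.5 m at 633 nm (hypothetical values)
    x = np.linspace(-5e-3, 5e-3, 2048)
    u0 = np.exp(-(x / 0.5e-3) ** 2)
    uz = fresnel_propagate_1d(u0, x[1] - x[0], 633e-9, 0.5)
    ```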

  9. Accurate computation of surface stresses and forces with immersed boundary methods

    NASA Astrophysics Data System (ADS)

    Goza, Andres; Liska, Sebastian; Morley, Benjamin; Colonius, Tim

    2016-09-01

    Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is applied as a post-processing procedure, so that the convergence of the velocity field is unaffected. We demonstrate the efficacy of the method by computing stresses and forces that converge to the physical stresses and forces for several test problems.

  10. Computational Challenges in Nuclear Weapons Simulation

    SciTech Connect

    McMillain, C F; Adams, T F; McCoy, M G; Christensen, R B; Pudliner, B S; Zika, M R; Brantley, P S; Vetter, J S; May, J M

    2003-08-29

    After a decade of experience, the Stockpile Stewardship Program continues to ensure the safety, security and reliability of the nation's nuclear weapons. The Advanced Simulation and Computing (ASCI) program was established to provide leading edge, high-end simulation capabilities needed to meet the program's assessment and certification requirements. The great challenge of this program lies in developing the tools and resources necessary for the complex, highly coupled, multi-physics calculations required to simulate nuclear weapons. This paper describes the hardware and software environment we have applied to fulfill our nuclear weapons responsibilities. It also presents the characteristics of our algorithms and codes, especially as they relate to supercomputing resource capabilities and requirements. It then addresses impediments to the development and application of nuclear weapon simulation software and hardware and concludes with a summary of observations and recommendations on an approach for working with industry and government agencies to address these impediments.

  11. Application of computer simulators in population genetics.

    PubMed

    Feng, Gao; Haipeng, Li

    2016-08-01

    The genomes of more and more organisms have been sequenced due to advances in next-generation sequencing technologies. As a powerful tool, computer simulators play a critical role in studying genome-wide DNA polymorphism patterns. Simulations can be performed forwards-in-time or backwards-in-time; the two approaches complement each other and suit different needs, such as studying the effects of evolutionary dynamics, estimating parameters, and validating evolutionary hypotheses as well as new methods. In this review, we briefly introduce the theoretical framework of population genetics and provide a detailed comparison of 32 simulators published over the last ten years. The future development of new simulators is also discussed. PMID:27531609
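
    For readers unfamiliar with the forwards-in-time approach the review compares, the following minimal Wright-Fisher sketch (our own illustration, not code from any of the 32 reviewed simulators) shows the core resampling step that forward simulators elaborate with selection, recombination, and demography.

    ```python
    import numpy as np

    def wright_fisher(n_diploid, p0, n_generations, seed=42):
        """Forward-in-time Wright-Fisher simulation of one neutral biallelic
        locus: each generation the allele count is binomially resampled
        from the current frequency (drift only, no selection/mutation)."""
        rng = np.random.default_rng(seed)
        p, freqs = p0, [p0]
        for _ in range(n_generations):
            p = rng.binomial(2 * n_diploid, p) / (2 * n_diploid)
            freqs.append(p)
        return np.array(freqs)

    trajectory = wright_fisher(n_diploid=500, p0=0.5, n_generations=200)
    ```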

  12. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, along with the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
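
    To make the Green-Kubo starting point concrete, here is a minimal sketch of the unaccelerated GK estimate, κ(t) = V / (3 k_B T²) ∫₀ᵗ ⟨J(0)·J(t′)⟩ dt′, from a heat-flux time series; the slow time and size convergence of exactly this running integral is what the effective-harmonic interpolation scheme described above accelerates. Variable names and the SI-unit convention for the total heat flux J are our assumptions.

    ```python
    import numpy as np

    def green_kubo_kappa(J, dt, volume, temperature, k_B=1.380649e-23):
        """Running Green-Kubo integral for the thermal conductivity from a
        heat-flux time series J of shape (n_steps, 3), all in SI units."""
        n = len(J)
        acf = np.zeros(n)
        for a in range(3):  # FFT-based autocorrelation, component by component
            f = np.fft.fft(J[:, a], 2 * n)
            acf += np.fft.ifft(f * np.conj(f)).real[:n]
        acf /= np.arange(n, 0, -1)          # unbiased normalization per lag
        return volume / (3.0 * k_B * temperature**2) * np.cumsum(acf) * dt
    ```

    In practice the running integral is averaged over independent trajectories and read off at its plateau, which is precisely where the long-time noise the paper addresses becomes dominant.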

  13. Making it Easy to Construct Accurate Hydrological Models that Exploit High Performance Computers (Invited)

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.

    2013-12-01

    This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers depend on an entire software distribution, possibly involving multiple compilers and special instructions depending on the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.

  14. Time Accurate CFD Simulations of the Orion Launch Abort Vehicle in the Transonic Regime

    NASA Technical Reports Server (NTRS)

    Rojahn, Josh; Ruf, Joe

    2011-01-01

    Significant asymmetries in the fluid dynamics were calculated for some cases in the CFD simulations of the Orion Launch Abort Vehicle through its abort trajectories. The CFD simulations were performed steady state and in three dimensions with symmetric geometries, no freestream sideslip angle, and motors firing. The trajectory points at issue were in the transonic regime, at 0 and +/- 5 degree angles of attack, with the abort motors firing both with and without the Attitude Control Motors (ACM). In some of the cases the asymmetric fluid dynamics resulted in aerodynamic side forces large enough to overcome the control authority of the ACMs. MSFC's Fluid Dynamics Group supported the investigation into the cause of the flow asymmetries with time-accurate CFD simulations, utilizing a hybrid RANS-LES turbulence model. The results show that the flow over the vehicle and its subsequent interaction with the abort motor and ACM plumes were unsteady. The resulting instantaneous aerodynamic forces were oscillatory with fairly large magnitudes. Time-averaged aerodynamic forces were essentially symmetric.

  15. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    SciTech Connect

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang E-mail: jing.xiong@siat.ac.cn; Hu, Ying; Xiong, Jing E-mail: jing.xiong@siat.ac.cn; Zhang, Jianwei

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice-by-slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, the developed hybrid level set model is applied to segment tooth contours from each slice. A tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
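
    As an illustration of the volume overlap metrics reported above (our own minimal sketch, not the authors' evaluation code), VD and DSC can be computed directly from binary masks:

    ```python
    import numpy as np

    def overlap_metrics(seg, ref, voxel_volume_mm3):
        """Volume difference VD (mm^3) and Dice similarity coefficient
        DSC (%) between binary segmentation and reference masks."""
        seg, ref = seg.astype(bool), ref.astype(bool)
        vd = abs(int(seg.sum()) - int(ref.sum())) * voxel_volume_mm3
        dsc = 200.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
        return vd, dsc
    ```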

  16. Computer Simulations Improve University Instructional Laboratories

    PubMed Central

    2004-01-01

    Laboratory classes are commonplace and essential in biology departments but can sometimes be cumbersome, unreliable, and a drain on time and resources. As university intakes increase, pressure on budgets and staff time can often lead to a reduction in practical class provision. In many classes, the ability to use laboratory equipment, mix solutions, and manipulate test animals is an essential learning outcome, and “wet” laboratory classes are thus appropriate. In others, however, interpretation and manipulation of the data are the primary learning outcomes, and here, computer-based simulations can provide a cheaper, easier, and less time- and labor-intensive alternative. We report the evaluation of two computer-based simulations of practical exercises: the first in chromosome analysis, the second in bioinformatics. Simulations can provide significant time savings to students (by a factor of four in our first case study) without affecting learning, as measured by performance in assessment. Moreover, under certain circumstances, performance can be improved by the use of simulations (by 7% in our second case study). We concluded that the introduction of these simulations can significantly enhance student learning where consideration of the learning outcomes indicates that it might be appropriate. In addition, they can offer significant benefits to teaching staff. PMID:15592599

  17. Computational simulation of Faraday probe measurements

    NASA Astrophysics Data System (ADS)

    Boerner, Jeremiah J.

    Electric propulsion devices, including ion thrusters and Hall thrusters, are becoming increasingly popular for long duration space missions. Ground-based experimental testing of such devices is performed in vacuum chambers, which develop an unavoidable background gas due to pumping limitations and facility leakage. Besides directly altering the operating environment, the background gas may indirectly affect the performance of immersed plasma probe diagnostics. This work focuses on computational modeling research conducted to evaluate the performance of a current-collecting Faraday probe. Initial findings from one-dimensional analytical models of plasma sheaths are used as reference cases for subsequent modeling. A two-dimensional, axisymmetric, hybrid electron-fluid and Particle-In-Cell computational code is used for extensive simulation of the plasma flow around a representative Faraday probe geometry. The hybrid fluid-PIC code is used to simulate a range of inflowing plasma conditions, from a simple ion beam consistent with one-dimensional models to a multiple-component plasma representative of a low-power Hall thruster plume. These simulations produce profiles of plasma properties and simulated current measurements at the probe surface. Interpretation of the simulation results leads to recommendations for probe design and experimental techniques. Significant contributions of this work include the development and use of two new non-neutral detailed electron fluid models and the recent incorporation of multigrid capabilities.

  18. Computer Simulation for Emergency Incident Management

    SciTech Connect

    Brown, D L

    2004-12-03

    This report describes the findings and recommendations resulting from the Department of Homeland Security (DHS) Incident Management Simulation Workshop held by the DHS Advanced Scientific Computing Program in May 2004. This workshop brought senior representatives of the emergency response and incident-management communities together with modeling and simulation technologists from Department of Energy laboratories. The workshop provided an opportunity for incident responders to describe the nature and substance of the primary personnel roles in an incident response, to identify current and anticipated roles of modeling and simulation in support of incident response, and to begin a dialog between the incident response and simulation technology communities that will guide and inform planned modeling and simulation development for incident response. This report provides a summary of the discussions at the workshop as well as a summary of simulation capabilities that are relevant to incident-management training, and recommendations for the use of simulation in both incident management and in incident management training, based on the discussions at the workshop. In addition, the report discusses areas where further research and development will be required to support future needs in this area.

  19. Memory interface simulator: A computer design aid

    NASA Technical Reports Server (NTRS)

    Taylor, D. S.; Williams, T.; Weatherbee, J. E.

    1972-01-01

    Results are presented of a study conducted with a digital simulation model being used in the design of the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. The model simulates the activity involved as instructions are fetched from random access memory for execution in one of the system central processing units. A series of model runs measured instruction execution time under various assumptions pertaining to the CPUs and the interface between the CPUs and RAM. Design tradeoffs are presented in the following areas: bus widths, CPU microprogram read-only memory cycle time, multiple instruction fetch, and instruction mix.

  20. Computer Simulation of the VASIMR Engine

    NASA Technical Reports Server (NTRS)

    Garrison, David

    2005-01-01

    The goal of this project is to develop a magneto-hydrodynamic (MHD) computer code for simulation of the VASIMR engine. This code is designed to be easy to modify and use. We achieve this using the Cactus framework, a system originally developed for research in numerical relativity. Since its release, Cactus has become an extremely powerful and flexible open source framework. The development of the code will be done in stages, starting with a basic fluid dynamic simulation and working towards a more complex MHD code. Once developed, this code can be used by students and researchers to further test and improve the VASIMR engine.

  1. Chemically Accurate Simulation of a Polyatomic Molecule-Metal Surface Reaction.

    PubMed

    Nattino, Francesco; Migliorini, Davide; Kroes, Geert-Jan; Dombrowski, Eric; High, Eric A; Killelea, Daniel R; Utz, Arthur L

    2016-07-01

    Although important to heterogeneous catalysis, the ability to accurately model reactions of polyatomic molecules with metal surfaces has not kept pace with developments in gas phase dynamics. Partnering the specific reaction parameter (SRP) approach to density functional theory with ab initio molecular dynamics (AIMD) extends our ability to model reactions with metals with quantitative accuracy from only the lightest reactant, H2, to essentially all molecules. This is demonstrated with AIMD calculations on CHD3 + Ni(111) in which the SRP functional is fitted to supersonic beam experiments, and validated by showing that AIMD with the resulting functional reproduces initial-state selected sticking measurements with chemical accuracy (4.2 kJ/mol ≈ 1 kcal/mol). The need for only semilocal exchange makes our scheme computationally tractable for dissociation on transition metals. PMID:27284787

  2. Chemically Accurate Simulation of a Polyatomic Molecule-Metal Surface Reaction

    PubMed Central

    2016-01-01

    Although important to heterogeneous catalysis, the ability to accurately model reactions of polyatomic molecules with metal surfaces has not kept pace with developments in gas phase dynamics. Partnering the specific reaction parameter (SRP) approach to density functional theory with ab initio molecular dynamics (AIMD) extends our ability to model reactions with metals with quantitative accuracy from only the lightest reactant, H2, to essentially all molecules. This is demonstrated with AIMD calculations on CHD3 + Ni(111) in which the SRP functional is fitted to supersonic beam experiments, and validated by showing that AIMD with the resulting functional reproduces initial-state selected sticking measurements with chemical accuracy (4.2 kJ/mol ≈ 1 kcal/mol). The need for only semilocal exchange makes our scheme computationally tractable for dissociation on transition metals. PMID:27284787

  3. A fast and accurate method for computing the Sunyaev-Zel'dovich signal of hot galaxy clusters

    NASA Astrophysics Data System (ADS)

    Chluba, Jens; Nagai, Daisuke; Sazonov, Sergey; Nelson, Kaylea

    2012-10-01

    New-generation ground- and space-based cosmic microwave background experiments have ushered in discoveries of massive galaxy clusters via the Sunyaev-Zel'dovich (SZ) effect, providing a new window for studying cluster astrophysics and cosmology. Many of the newly discovered, SZ-selected clusters contain hot intracluster plasma (kTe ≳ 10 keV) and exhibit disturbed morphology, indicative of frequent mergers with large peculiar velocity (v ≳ 1000 km s-1). It is well known that for the interpretation of the SZ signal from hot, moving galaxy clusters, relativistic corrections must be taken into account, and in this work, we present a fast and accurate method for computing these effects. Our approach is based on an alternative derivation of the Boltzmann collision term which provides new physical insight into the sources of different kinematic corrections in the scattering problem. In contrast to previous works, this allows us to obtain a clean separation of kinematic and scattering terms. We also briefly mention additional complications connected with kinematic effects that should be considered when interpreting future SZ data for individual clusters. One of the main outcomes of this work is SZPACK, a numerical library which allows very fast and precise (≲0.001 per cent at frequencies hν ≲ 20kTγ) computation of the SZ signals up to high electron temperature (kTe ≃ 25 keV) and large peculiar velocity (v/c ≃ 0.01). The accuracy is well beyond the current and future precision of SZ observations and practically eliminates uncertainties which are usually overcome with more expensive numerical evaluation of the Boltzmann collision term. Our new approach should therefore be useful for analysing future high-resolution, multifrequency SZ observations as well as computing the predicted SZ effect signals from numerical simulations.
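
    For orientation, the non-relativistic thermal SZ distortion that the paper's relativistic treatment generalizes has the closed form ΔT/T_CMB = y (x coth(x/2) - 4) with x = hν / (k_B T_CMB). The sketch below (our own illustration, not SZPACK's interface) evaluates it; the relativistic temperature and velocity corrections that SZPACK computes are omitted.

    ```python
    import numpy as np

    def tsz_distortion(nu_ghz, y, T_cmb=2.725):
        """Non-relativistic thermal SZ distortion Delta T / T_CMB
        = y * (x * coth(x/2) - 4), with x = h nu / (k_B T_CMB)."""
        h, k_B = 6.62607015e-34, 1.380649e-23
        x = h * np.asarray(nu_ghz) * 1e9 / (k_B * T_cmb)
        return y * (x / np.tanh(x / 2.0) - 4.0)

    # Sign change (the thermal SZ null) occurs near 217 GHz:
    print(tsz_distortion([150.0, 217.0, 353.0], y=1e-4))
    ```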

  4. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information on the flow fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. The present work is focused on the development of a simulation methodology for coupled, time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the main-path flow is solved using TURBO, a density-based code with the capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between them for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  5. An accurate and scalable O(N) algorithm for First-Principles Molecular Dynamics computations on petascale computers and beyond

    NASA Astrophysics Data System (ADS)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-03-01

    We present a truly scalable First-Principles Molecular Dynamics algorithm with O(N) complexity and fully controllable accuracy, capable of simulating systems of sizes that were previously impossible with this degree of accuracy. By avoiding global communication, we have extended W. Kohn's condensed matter "nearsightedness" principle to a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wavefunctions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 100,000 atoms on 100,000 processors, with a wall-clock time of the order of one minute per molecular dynamics time step. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  6. Accurate reaction-diffusion operator splitting on tetrahedral meshes for parallel stochastic molecular simulations.

    PubMed

    Hepburn, I; Chen, W; De Schutter, E

    2016-08-01

    Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space either in a discrete time or discrete space framework, which has led to the development of parallel methods that can take advantage of the power of modern supercomputers in recent years. We systematically test suggested components of stochastic reaction-diffusion operator splitting in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy with minimal performance cost. We test a range of models in small-scale MPI simulations from simple diffusion models to realistic biological models and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification. PMID:27497550
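
    The operator-splitting idea being tested can be illustrated with a deterministic one-dimensional Strang-split sketch. This is our simplification: the paper's setting is stochastic (SSA-style) reaction-diffusion on tetrahedral meshes partitioned across MPI ranks, but the split structure of each step is the same.

    ```python
    import numpy as np

    def diffuse(u, dt, D, dx):
        """Explicit diffusion step with zero-flux ends (stable for
        D * dt / dx**2 <= 0.5)."""
        lap = np.empty_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        lap[0], lap[-1] = u[1] - u[0], u[-2] - u[-1]
        return u + D * dt / dx**2 * lap

    def strang_step(u, dt, D, dx, react):
        """One Strang-split step for du/dt = D u_xx + R(u):
        half diffusion, full reaction, half diffusion."""
        u = diffuse(u, 0.5 * dt, D, dx)
        u = u + dt * react(u)              # explicit reaction update
        return diffuse(u, 0.5 * dt, D, dx)

    # Example: logistic reaction spreading along a line (hypothetical values)
    x = np.linspace(0.0, 1.0, 101)
    u = np.where(x < 0.1, 1.0, 0.0)
    for _ in range(2000):
        u = strang_step(u, 1e-5, 1.0, x[1] - x[0], lambda v: 10.0 * v * (1.0 - v))
    ```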

  7. Accurate reaction-diffusion operator splitting on tetrahedral meshes for parallel stochastic molecular simulations

    NASA Astrophysics Data System (ADS)

    Hepburn, I.; Chen, W.; De Schutter, E.

    2016-08-01

    Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space either in a discrete time or discrete space framework, which has led to the development of parallel methods that can take advantage of the power of modern supercomputers in recent years. We systematically test suggested components of stochastic reaction-diffusion operator splitting in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy with minimal performance cost. We test a range of models in small-scale MPI simulations from simple diffusion models to realistic biological models and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification.

  8. Computer Simulation For Design Of TWT's

    NASA Technical Reports Server (NTRS)

    Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard

    1992-01-01

    A three-dimensional finite-element analytical technique facilitates the design and fabrication of traveling-wave-tube (TWT) slow-wave structures. It is used to perform thermal and mechanical analyses of TWTs designed with a variety of configurations, geometries, and materials. Using three-dimensional computer analysis, the designer is able to simulate the building and testing of a TWT, with a consequent substantial saving of time and money. The technique enables a detailed look into the operation of traveling-wave tubes to help improve performance for future communications systems.

  9. Achieving accurate simulations of urban impacts on ozone at high resolution

    NASA Astrophysics Data System (ADS)

    Li, J.; Georgescu, M.; Hyde, P.; Mahalov, A.; Moustaoui, M.

    2014-11-01

    The effects of urbanization on ozone levels have been widely investigated over cities primarily located in temperate and/or humid regions. In this study, nested WRF-Chem simulations with a finest grid resolution of 1 km are conducted to investigate ozone concentrations [O3] due to urbanization within cities in arid/semi-arid environments. First, a method based on shape-preserving Monotonic Cubic Interpolation (MCI) is developed and used to downscale anthropogenic emissions from the 4 km resolution 2005 National Emissions Inventory (NEI05) to the finest model resolution of 1 km. Using the rapidly expanding Phoenix metropolitan region as the area of focus, we demonstrate that the proposed MCI method achieves ozone simulation results with appreciably improved correspondence to observations relative to the default interpolation method of the WRF-Chem system. Next, two additional sets of experiments are conducted with the recommended MCI approach to examine the impacts of urbanization on ozone production: (1) the urban land cover is included (i.e., urbanization experiments), and (2) the urban land cover is replaced with the region's native shrubland. Impacts of the built environment on [O3] are highly heterogeneous across the metropolitan area. Urbanization-induced increases in near-surface [O3] of 10-20 ppb are predominantly a nighttime phenomenon, while simulated impacts during daytime are negligible. Urbanization narrows the daily [O3] range (by virtue of increasing nighttime minima), an impact largely due to the region's urban heat island. Our results demonstrate the importance of the MCI method for accurate representation of the diurnal profile of ozone, and highlight its utility for high-resolution air quality simulations for urban areas.
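
    A shape-preserving monotonic cubic interpolant of the kind described is available in SciPy as PchipInterpolator. The one-dimensional toy downscaling below is our own illustration of the idea (the study applies it to 2-D gridded NEI05 emissions); the key property is that the interpolant introduces no overshoot or new extrema between data points.

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    # Coarse (4 km) synthetic emission profile and a 1 km target grid
    x4 = np.arange(0.0, 41.0, 4.0)           # 4 km cell centers (km)
    e4 = np.exp(-((x4 - 20.0) / 8.0) ** 2)   # synthetic plume (arbitrary units)
    x1 = np.arange(0.0, 40.01, 1.0)          # 1 km cell centers (km)

    e1 = PchipInterpolator(x4, e4)(x1)  # shape-preserving: no overshoot,
                                        # no spurious extrema
    ```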

  10. OBSERVING SIMULATED PROTOSTARS WITH OUTFLOWS: HOW ACCURATE ARE PROTOSTELLAR PROPERTIES INFERRED FROM SEDs?

    SciTech Connect

    Offner, Stella S. R.; Robitaille, Thomas P.; Hansen, Charles E.; Klein, Richard I.; McKee, Christopher F.

    2012-07-10

    The properties of unresolved protostars and their local environment are frequently inferred from spectral energy distributions (SEDs) using radiative transfer modeling. In this paper, we use synthetic observations of realistic star formation simulations to evaluate the accuracy of properties inferred from fitting model SEDs to observations. We use ORION, an adaptive mesh refinement (AMR) three-dimensional gravito-radiation-hydrodynamics code, to simulate low-mass star formation in a turbulent molecular cloud including the effects of protostellar outflows. To obtain the dust temperature distribution and SEDs of the forming protostars, we post-process the simulations using HYPERION, a state-of-the-art Monte Carlo radiative transfer code. We find that the ORION and HYPERION dust temperatures typically agree within a factor of two. We compare synthetic SEDs of embedded protostars for a range of evolutionary times, simulation resolutions, aperture sizes, and viewing angles. We demonstrate that complex, asymmetric gas morphology leads to a variety of classifications for individual objects as a function of viewing angle. We derive best-fit source parameters for each SED through comparison with a pre-computed grid of radiative transfer models. While the SED models correctly identify the evolutionary stage of the synthetic sources as embedded protostars, we show that the disk and stellar parameters can be very discrepant from the simulated values, which is expected since the disk and central source are obscured by the protostellar envelope. Parameters such as the stellar accretion rate, stellar mass, and disk mass show better agreement, but can still deviate significantly, and the agreement may in some cases be artificially good due to the limited range of parameters in the set of model SEDs. Lack of correlation between the model and simulation properties in many individual instances cautions against overinterpreting properties inferred from SEDs for unresolved protostellar sources.

  11. A hybrid method for efficient and accurate simulations of diffusion compartment imaging signals

    NASA Astrophysics Data System (ADS)

    Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime

    2015-12-01

    Diffusion-weighted imaging is sensitive to the movement of water molecules through the tissue microstructure and can therefore be used to gain insight into the tissue cellular architecture. While the diffusion signal arising from simple geometrical microstructure is known analytically, it remains unclear what diffusion signal arises from complex microstructural configurations. Such knowledge is important to design optimal acquisition sequences, to understand the limitations of diffusion-weighted imaging and to validate novel models of the brain microstructure. We present a novel framework for the efficient simulation of high-quality DW-MRI signals based on the hybrid combination of exact analytic expressions in simple geometric compartments such as cylinders and spheres and Monte Carlo simulations in more complex geometries. We validate our approach on synthetic arrangements of parallel cylinders representing the geometry of white matter fascicles, by comparing it to complete, all-out Monte Carlo simulations commonly used in the literature. For typical configurations, equal levels of accuracy are obtained with our hybrid method in less than one fifth of the computational time required for Monte Carlo simulations.
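
    The "all-out" Monte Carlo baseline that the hybrid method is compared against can be sketched for free 1-D diffusion under an idealized PGSE gradient, where the simulated signal should recover the analytic result exp(-bD). All parameter values below are hypothetical illustrations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    gamma = 2.675e8                 # 1H gyromagnetic ratio (rad s^-1 T^-1)
    D = 2.0e-9                      # free diffusivity (m^2 s^-1)
    G, delta, Delta = 0.04, 10e-3, 20e-3  # PGSE amplitude (T/m), timings (s)
    dt, n_walkers = 5e-5, 5000

    t = np.arange(0.0, Delta + delta, dt)
    g = np.where(t < delta, G, 0.0) - np.where(t >= Delta, G, 0.0)  # bipolar pair
    x = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * D * dt),
                             (n_walkers, t.size)), axis=1)  # Brownian paths
    phase = gamma * (x * g * dt).sum(axis=1)      # accumulated spin phase
    E_mc = np.abs(np.exp(1j * phase).mean())      # Monte Carlo DW signal
    E_exact = np.exp(-(gamma * G * delta) ** 2 * (Delta - delta / 3.0) * D)
    ```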

  12. Integrated computer simulation on FIR FEL dynamics

    SciTech Connect

    Furukawa, H.; Kuruma, S.; Imasaki, K.

    1995-12-31

    An integrated computer simulation code has been developed to analyze RF-Linac FEL dynamics. First, a simulation code for the electron beam acceleration and transport processes in the RF-Linac (LUNA) was developed to analyze the characteristics of the electron beam in the RF-Linac and to optimize the parameters of the RF-Linac. Second, a space-time dependent 3D FEL simulation code (Shipout) was developed. Total RF-Linac FEL simulations have been performed by using the electron beam data from LUNA in Shipout. The number of particles used in a total RF-Linac FEL simulation is approximately 1000. The CPU time for the simulation of one round trip is about 1.5 minutes. At ILT/ILE, Osaka, an 8.5 MeV RF-Linac with a photo-cathode RF gun is used for FEL oscillation experiments. Using a 2 cm wiggler, FEL oscillation at wavelengths of approximately 46 μm is investigated. Simulations using LUNA with the parameters of an ILT/ILE experiment estimate the pulse shape and energy spectra of the electron beam at the end of the linac: the pulse has a sharp rise and then decays slowly as a function of time. Total RF-Linac FEL simulations with the same parameters estimate how the start-up of the FEL oscillation depends on this pulse shape. Coherent spontaneous emission effects and a quick start-up of the FEL oscillation were observed in these simulations.

  13. Computer simulations in the science classroom

    NASA Astrophysics Data System (ADS)

    Richards, John; Barowy, William; Levin, Dov

    1992-03-01

    In this paper we describe software for science instruction that is based upon a constructivist epistemology of learning. From a constructivist perspective, the process of learning is viewed as an active construction of knowledge, rather than a passive reception of information. The computer has the potential to provide an environment in which students can explore their understanding and better construct scientific knowledge. The Explorer is an interactive environment that integrates animated computer models with analytic capabilities for learning and teaching science. The system includes graphs, a spreadsheet, scripting, and interactive tools. During formative evaluation of Explorer in the classroom, we focused on the function and effectiveness of computer models in teaching science. Models have helped students relate theory to experiment when used in conjunction with hands-on activities and when the simulation addressed students' naive understanding of the phenomena. Two classroom examples illustrate our findings. The first is based on the dynamics of colliding objects. The second describes a class modeling the function of simple electric circuits. The simulations bridge phenomena and theory by providing an abstract representation on which students may make measurements. Simulations based on scientific theory help to provide a set of interrelated experiences that challenge students' informal understanding of the science.

  14. Metal matrix composites microfracture: Computational simulation

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Caruso, John J.; Chamis, Christos C.

    1990-01-01

    Fiber/matrix fracture and fiber-matrix interface debonding in a metal matrix composite (MMC) are computationally simulated. These simulations are part of a research activity to develop computational methods for microfracture, microfracture propagation and fracture toughness of metal matrix composites. The three-dimensional finite element model used in the simulation consists of a group of nine unidirectional fibers in a three-by-three unit cell array of SiC/Ti15 metal matrix composite with a fiber volume ratio of 0.35. This computational procedure is used to predict the fracture process and establish the hierarchy of fracture modes based on strain energy release rate. It is also used to predict stress redistribution to the surrounding matrix and fibers due to initial and progressive fracture of fiber/matrix and due to debonding of the fiber-matrix interface. Microfracture results for various loading cases such as longitudinal, transverse, shear and bending are presented and discussed. Step-by-step procedures are outlined to evaluate composite microfracture for a given composite system.

  15. Accelerating Climate Simulations Through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors and two IBM QS22 Cell blades, connected with InfiniBand), allowing for seamless offloading of compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  16. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of "family of secular functions", which we herein call "adaptive mode observers", is thus naturally introduced to implement this strategy; the underlying idea, noted distinctly here for the first time, may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee both no loss of any physically existent modes and high precision, without excessive calculations. Finally, the conventional definition of the fundamental mode, which is entailed in the cases under study, is reconsidered. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of "turning point", our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.
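
    The basic mode-search step that the adaptive "mode observers" make robust can be sketched as a scan-and-bisect root search on a secular function of wavenumber. This is our own generic illustration; the paper's contribution is choosing where (on which interface) the secular function is evaluated so trapped and Stoneley modes are not missed.

    ```python
    import numpy as np

    def find_roots(secular, k_grid, n_bisect=60):
        """Scan a secular function on a dense grid and refine each sign
        change by bisection -- the elementary step behind mode searching."""
        f = np.array([secular(k) for k in k_grid])
        roots = []
        for i in np.where(np.sign(f[:-1]) * np.sign(f[1:]) < 0)[0]:
            a, b = k_grid[i], k_grid[i + 1]
            for _ in range(n_bisect):
                m = 0.5 * (a + b)
                if secular(a) * secular(m) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
        return roots

    print(find_roots(np.cos, np.linspace(0.0, 10.0, 201)))  # ~pi/2, 3pi/2, 5pi/2
    ```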

  17. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chen, Xiaofei

    2016-06-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of "family of secular functions", which we herein call "adaptive mode observers", is thus naturally introduced to implement this strategy; the underlying idea, noted distinctly here for the first time, may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee both no loss of any physically existent modes and high precision, without excessive calculations. Finally, the conventional definition of the fundamental mode, which is entailed in the cases under study, is reconsidered. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of "turning point", our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.

  18. Fast, Accurate Simulation of Polaron Dynamics and Multidimensional Spectroscopy by Multiple Davydov Trial States.

    PubMed

    Zhou, Nengji; Chen, Lipeng; Huang, Zhongkai; Sun, Kewei; Tanimura, Yoshitaka; Zhao, Yang

    2016-03-10

    By employing the Dirac-Frenkel time-dependent variational principle, we study the dynamical properties of the Holstein molecular crystal model with diagonal and off-diagonal exciton-phonon coupling. A linear combination of the Davydov D1 (D2) ansatz, referred to as the "multi-D1 ansatz" ("multi-D2 ansatz"), is used as the trial state with enhanced accuracy but without sacrificing efficiency. The time evolution of the exciton probability is found to be in perfect agreement with that of the hierarchy equations of motion, demonstrating the promise the multiple Davydov trial states hold as an efficient, robust description of dynamics of complex quantum systems. In addition to the linear absorption spectra computed for both diagonal and off-diagonal cases, for the first time, 2D spectra have been calculated for systems with off-diagonal exciton-phonon coupling by employing the multiple D2 ansatz to compute the nonlinear response function, testifying to the great potential of the multiple D2 ansatz for fast, accurate implementation of multidimensional spectroscopy. It is found that the signal exhibits a single peak for weak off-diagonal coupling, while a vibronic multipeak structure appears for strong off-diagonal coupling. PMID:26871592

  19. Ku-Band rendezvous radar performance computer simulation model

    NASA Technical Reports Server (NTRS)

    Magnusson, H. G.; Goff, M. F.

    1984-01-01

    All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescope.

  20. How to obtain accurate resist simulations in very low-k1 era?

    NASA Astrophysics Data System (ADS)

    Chiou, Tsann-Bim; Park, Chan-Ha; Choi, Jae-Seung; Min, Young-Hong; Hansen, Steve; Tseng, Shih-En; Chen, Alek C.; Yim, Donggyu

    2006-03-01

    A procedure for calibrating a resist model iteratively adjusts appropriate parameters until the simulations of the model match the experimental data. The tunable parameters may include the shape of the illuminator, the geometry and transmittance/phase of the mask, light source and scanner-related parameters that affect imaging quality, resist process control and most importantly the physical/chemical factors in the resist model. The resist model can be accurately calibrated by measuring critical dimensions (CD) of a focus-exposure matrix (FEM) and the technique has been demonstrated to be very successful in predicting lithographic performance. However, resist model calibration is more challenging in the low k1 (<0.3) regime because numerous uncertainties, such as mask and resist CD metrology errors, are becoming too large to be ignored. This study demonstrates a resist model calibration procedure for a 0.29 k1 process using a 6% halftone mask containing 2D brickwall patterns. The influence of different scanning electron microscopes (SEM) and their wafer metrology signal analysis algorithms on the accuracy of the resist model is evaluated. As an example of the metrology issue of the resist pattern, the treatment of a sidewall angle is demonstrated for the resist line ends where the contrast is relatively low. Additionally, the mask optical proximity correction (OPC) and corner rounding are considered in the calibration procedure that is based on captured SEM images. Accordingly, the average root-mean-square (RMS) error, which is the difference between simulated and experimental CDs, can be improved by considering the metrological issues. Moreover, a weighting method and a measured CD tolerance are proposed to handle the different CD variations of the various edge points of the wafer resist pattern. After the weighting method is implemented and the CD selection criteria applied, the RMS error can be further suppressed. Therefore, the resist CD and process window can

  1. Pre-Stall Behavior of a Transonic Axial Compressor Stage via Time-Accurate Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Chen, Jen-Ping; Hathaway, Michael D.; Herrick, Gregory P.

    2008-01-01

    CFD calculations using high-performance parallel computing were conducted to simulate the pre-stall flow of a transonic compressor stage, NASA compressor Stage 35. The simulations were run with a full-annulus grid that models the 3D, viscous, unsteady blade row interaction without the need for an artificial inlet distortion to induce stall. The simulation demonstrates the development of the rotating stall from the growth of instabilities. Pressure-rise performance and pressure traces are compared with published experimental data before the study of flow evolution prior to the rotating stall. Spatial FFT analysis of the flow indicates a rotating long-length disturbance of one rotor circumference, which is followed by a spike-type breakdown. The analysis also links the long-length wave disturbance with the initiation of the spike inception. The spike instabilities occur when the trajectory of the tip clearance flow becomes perpendicular to the axial direction. When approaching stall, the passage shock changes from a single oblique shock to a dual-shock, which distorts the perpendicular trajectory of the tip clearance vortex but shows no evidence of flow separation that may contribute to stall.
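
    The spatial FFT diagnostic mentioned above can be illustrated with synthetic circumferential pressure data (hypothetical values, not the NASA Stage 35 results): a disturbance with one lobe per rotor circumference appears as a peak at wavenumber 1.

    ```python
    import numpy as np

    # Pressure samples at 256 equally spaced circumferential positions:
    # a weak one-lobe-per-circumference disturbance plus measurement noise
    theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
    p = (0.02 * np.cos(theta)
         + 0.005 * np.random.default_rng(3).normal(size=theta.size))

    amps = np.abs(np.fft.rfft(p)) / theta.size
    dominant_wavenumber = int(np.argmax(amps[1:]) + 1)   # expect 1
    ```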

  2. A computer simulation of chromosomal instability

    NASA Astrophysics Data System (ADS)

    Goodwin, E.; Cornforth, M.

    The transformation of a normal cell into a cancerous growth can be described as a process of mutation and selection occurring within the context of clonal expansion. Radiation, in addition to initial DNA damage, induces a persistent and still poorly understood genomic instability process that contributes to the mutational burden. It will be essential to include a quantitative description of this phenomenon in any attempt at science-based risk assessment. Monte Carlo computer simulations are a relatively simple way to model processes that are characterized by an element of randomness. A properly constructed simulation can capture the essence of a phenomenon that, as is often the case in biology, can be extraordinarily complex, and can do so even though the phenomenon itself is incompletely understood. A simple computer simulation of one manifestation of genomic instability known as chromosomal instability will be presented. The model simulates clonal expansion of a single chromosomally unstable cell into a colony. Instability is characterized by a single parameter, the rate of chromosomal rearrangement. With each new chromosome aberration, a unique subclone arises (subclones are defined as having a unique karyotype). The subclone initially has just one cell, but it can expand with cell division if the aberration is not lethal. The computer program automatically keeps track of the number of subclones within the expanding colony, and the number of cells within each subclone. Because chromosome aberrations kill some cells during colony growth, colonies arising from unstable cells tend to be smaller than those arising from stable cells. For any chosen level of instability, the computer program calculates the mean number of cells per colony averaged over many runs. These outputs should prove useful for investigating how such radiobiological phenomena as slow-growth colonies, increased doubling time, and delayed cell death depend on chromosomal instability. Also of
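
    The abstract describes the model closely enough to sketch its core loop. The sketch below is our reading of that description, with the lethality probability and all parameter values as placeholder assumptions rather than the authors' settings.

    ```python
    import random

    def grow_colony(generations, rearr_rate, p_lethal=0.4, seed=1):
        """Expand one chromosomally unstable cell into a colony. Each
        daughter may acquire a new chromosome rearrangement (probability
        rearr_rate per division), founding a new subclone; a fraction
        p_lethal of rearrangements kills the daughter."""
        rng = random.Random(seed)
        next_id, cells = 1, [0]          # each cell carries its subclone id
        for _ in range(generations):
            daughters = []
            for clone in cells:
                for _ in range(2):       # two daughters per division
                    if rng.random() < rearr_rate:
                        if rng.random() < p_lethal:
                            continue     # lethal aberration: daughter dies
                        clone_d, next_id = next_id, next_id + 1
                    else:
                        clone_d = clone
                    daughters.append(clone_d)
            cells = daughters
        return cells

    cells = grow_colony(generations=10, rearr_rate=0.05)
    print(len(cells), "cells in", len(set(cells)), "subclones")
    ```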

  3. Procedure for computer-controlled milling of accurate surfaces of revolution for millimeter and far-infrared mirrors

    NASA Technical Reports Server (NTRS)

    Emmons, Louisa; De Zafra, Robert

    1991-01-01

    A simple method for milling accurate off-axis parabolic mirrors with a computer-controlled milling machine is discussed. For machines with a built-in circle-cutting routine, an exact paraboloid can be milled with few computer commands and without the use of the spherical or linear approximations. The proposed method can be adapted easily to cut off-axis sections of elliptical or spherical mirrors.
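
    For the on-axis case, the idea reduces to issuing one circle-cutting command per depth pass with radius r = sqrt(4 f z), from the sag equation z = r²/(4f). The sketch below is our own illustration of that reduction; the paper's off-axis procedure additionally offsets and tilts the blank.

    ```python
    import math

    def paraboloid_passes(focal_length, depth, z_step):
        """Radii of successive circular cuts for an on-axis paraboloid
        z = r**2 / (4 f): one circle-cutting command per depth pass."""
        passes, z = [], z_step
        while z <= depth + 1e-12:
            passes.append((z, math.sqrt(4.0 * focal_length * z)))
            z += z_step
        return passes

    # Example: f = 300 mm mirror cut to 10 mm depth in 0.5 mm passes
    for z, r in paraboloid_passes(300.0, 10.0, 0.5):
        print(f"depth {z:6.2f} mm -> circle radius {r:7.2f} mm")
    ```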

  4. New Computer Simulations of Macular Neural Functioning

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Doshay, D.; Linton, S.; Parnas, B.; Montgomery, K.; Chimento, T.

    1994-01-01

    We use high performance graphics workstations and supercomputers to study the functional significance of the three-dimensional (3-D) organization of gravity sensors. These sensors have a prototypic architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, 3-D versions run on a Cray Y-MP supercomputer. A semi-automated method of reconstruction of neural tissue from serial sections studied in a transmission electron microscope has been developed to eliminate tedious conventional photography. The reconstructions use a mesh as a step in generating a neural surface for visualization. Two meshes are required to model calyx surfaces. The meshes are connected and the resulting prisms represent the cytoplasm and the bounding membranes. A finite volume analysis method is employed to simulate voltage changes along the calyx in response to synapse activation on the calyx or on calyceal processes. The finite volume method insures that charge is conserved at the calyx-process junction. These and other models indicate that efferent processes act as voltage followers, and that the morphology of some afferent processes affects their functioning. In a final application, morphological information is symbolically represented in three dimensions in a computer. The possible functioning of the connectivities is tested using mathematical interpretations of physiological parameters taken from the literature. Symbolic, 3-D simulations are in progress to probe the functional significance of the connectivities. This research is expected to advance computer-based studies of macular functioning and of synaptic plasticity.

  5. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    SciTech Connect

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial-scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, a concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern Minnesota.

  6. Accurate technique for complete geometric calibration of cone-beam computed tomography systems.

    PubMed

    Cho, Youngbin; Moseley, Douglas J; Siewerdsen, Jeffrey H; Jaffray, David A

    2005-04-01

    Cone-beam computed tomography systems have been developed to provide in situ imaging for the purpose of guiding radiation therapy. Clinical systems have been constructed using this approach on a clinical linear accelerator (Elekta Synergy RP) and on an isocentric C-arm. Geometric calibration involves the estimation of a set of parameters that describes the geometry of such systems, and is essential for accurate image reconstruction. We have developed a general analytic algorithm and corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems. The performance of the calibration algorithm is evaluated and its application is discussed. The algorithm makes use of a calibration phantom to estimate the geometric parameters of the system. The phantom consists of 24 steel ball bearings (BBs) in a known geometry: twelve BBs are spaced evenly at 30-degree intervals in each of two plane-parallel circles separated by a given distance along the tube axis. The detector (e.g., a flat-panel detector) is assumed to have no spatial distortion. The method estimates geometric parameters including the position of the x-ray source, the position and rotation of the detector, and the gantry angle, and can describe complex source-detector trajectories. The accuracy and sensitivity of the calibration algorithm were analyzed. The calibration algorithm estimates the geometric parameters with a level of accuracy such that the quality of the CT reconstruction is not degraded by estimation error. Sensitivity analysis shows uncertainty of 0.01 degrees (around the beam direction) to 0.3 degrees (normal to the beam direction) in rotation, and 0.2 mm (orthogonal to the beam direction) to 4.9 mm (along the beam direction) in position for the medical linear accelerator geometry. Experimental measurements using a laboratory bench cone-beam CT system of known geometry demonstrate the sensitivity of the method in detecting small changes in the imaging geometry with an uncertainty of 0
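    As a hedged sketch of the core idea (not the paper's analytic algorithm), the following Python snippet recovers a small set of geometric parameters by nonlinear least squares, given known 3-D BB positions and measured 2-D detector centroids. The pinhole parameterization, the assumed source-to-axis distance, and all numbers are invented for illustration.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)

        def project(params, bbs):
            """Project 3-D BB positions (mm) onto a flat detector (mm)."""
            sdd, u0, v0, phi = params      # source-detector distance, piercing
                                           # point offsets, in-plane rotation
            mag = sdd / (sdd / 2.0 + bbs[:, 2])   # assume source-axis dist sdd/2
            u, w = mag * bbs[:, 0], mag * bbs[:, 1]
            cu, su = np.cos(phi), np.sin(phi)
            return np.stack([cu * u - su * w + u0, su * u + cu * w + v0], axis=1)

        def residuals(params, bbs, measured):
            return (project(params, bbs) - measured).ravel()

        # 24 BBs: two plane-parallel circles of 12, every 30 degrees.
        ang = np.deg2rad(np.arange(0, 360, 30))
        circle = lambda z: np.stack(
            [50 * np.cos(ang), 50 * np.sin(ang), np.full(12, z)], axis=1)
        bbs = np.vstack([circle(-40.0), circle(+40.0)])

        true = np.array([1536.0, 2.0, -3.0, 0.01])    # "unknown" geometry
        measured = project(true, bbs) + rng.normal(0, 0.05, (24, 2))

        fit = least_squares(residuals, x0=[1500.0, 0, 0, 0],
                            args=(bbs, measured))
        print(fit.x)              # recovered [sdd, u0, v0, phi]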

  7. Application of special-purpose digital computers to rotorcraft real-time simulation

    NASA Technical Reports Server (NTRS)

    Mackie, D. B.; Michelson, S.

    1978-01-01

    The use of an array processor as a computational element in rotorcraft real-time simulation is studied. A multilooping scheme was considered in which the rotor would loop over its calculations a number of times while the remainder of the model cycled once on a host computer. To demonstrate that such a method would realistically simulate rotorcraft, a FORTRAN program was constructed to emulate a typical host-array processor computing configuration. The multilooping of an expanded rotor model, which included appropriate kinematic equations, resulted in an accurate and stable simulation.
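    A minimal sketch of the multilooping scheme, under stated assumptions: the rotor model iterates several times at a finer step while the remainder of the vehicle model advances once per host frame. The toy first-order dynamics below are placeholders, not a rotor model.

        N_ROTOR_LOOPS = 4                 # rotor iterations per host cycle
        DT_HOST = 0.02                    # host frame time (s)
        DT_ROTOR = DT_HOST / N_ROTOR_LOOPS

        def advance_rotor(rotor, vehicle, dt):
            # Placeholder fast dynamics: the rotor state relaxes toward the
            # vehicle state with a short time constant.
            return rotor + dt * (vehicle - rotor) / 0.01

        def advance_airframe(vehicle, rotor, dt):
            # Placeholder slow dynamics driven by the rotor "loads".
            return vehicle + dt * 0.5 * rotor

        rotor_state, vehicle_state = 0.0, 1.0
        for frame in range(100):                      # 2 s of simulated flight
            for _ in range(N_ROTOR_LOOPS):            # rotor multiloops...
                rotor_state = advance_rotor(rotor_state, vehicle_state, DT_ROTOR)
            # ...then the rest of the model cycles once on the host.
            vehicle_state = advance_airframe(vehicle_state, rotor_state, DT_HOST)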

  8. Parallel Proximity Detection for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor); Wieland, Frederick P. (Inventor)

    1998-01-01

    The present invention discloses a system for performing proximity detection in computer simulations on parallel processing architectures utilizing a distribution list which includes movers and sensor coverages which check in and out of grids. Each mover maintains a list of sensors that detect the mover's motion as the mover and sensor coverages check in and out of the grids. Fuzzy grids are included by fuzzy resolution parameters to allow movers and sensor coverages to check in and out of grids without computing exact grid crossings. The movers check in and out of grids while moving sensors periodically inform the grids of their coverage. In addition, a lookahead function is also included for providing a generalized capability without making any limiting assumptions about the particular application to which it is applied. The lookahead function is initiated so that risk-free synchronization strategies never roll back grid events. The lookahead function adds fixed delays as events are scheduled for objects on other nodes.
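    The grid check-in idea lends itself to a compact sketch. The Python below is an illustration only (class and parameter names are invented, and the patent's distribution lists and rollback machinery are not shown): a sensor coverage registers in every cell its radius touches, padded by a fuzzy margin, and a mover simply reads the sensor list of its current cell instead of computing exact grid crossings.

        from collections import defaultdict

        CELL = 10.0      # grid cell size
        FUZZ = 1.0       # fuzzy resolution parameter (padding margin)

        class Grid:
            def __init__(self):
                self.sensors_in_cell = defaultdict(set)   # cell -> sensor ids

            def cell_of(self, pos):
                return (int(pos[0] // CELL), int(pos[1] // CELL))

            def register_sensor(self, sensor_id, pos, radius):
                # A sensor coverage checks in to every cell its (padded)
                # radius touches.
                reach = int((radius + FUZZ) // CELL) + 1
                cx, cy = self.cell_of(pos)
                for dx in range(-reach, reach + 1):
                    for dy in range(-reach, reach + 1):
                        self.sensors_in_cell[(cx + dx, cy + dy)].add(sensor_id)

            def sensors_near(self, mover_pos):
                # A mover checks in to one cell and reads the sensors there.
                return self.sensors_in_cell[self.cell_of(mover_pos)]

        grid = Grid()
        grid.register_sensor("radar-1", (25.0, 40.0), radius=15.0)
        print(grid.sensors_near((30.0, 35.0)))        # {'radar-1'}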

  9. Fast computation algorithms for speckle pattern simulation

    SciTech Connect

    Nascov, Victor; Samoilă, Cornel; Ursuţiu, Doru

    2013-11-13

    We present our development of a series of efficient computation algorithms, usable generally to calculate light diffraction and particularly for speckle pattern simulation. We use mainly the scalar diffraction theory in the form of the Rayleigh-Sommerfeld diffraction formula and its Fresnel approximation. Our algorithms are based on a special form of the convolution theorem and the Fast Fourier Transform. They are able to evaluate the diffraction formula much faster than by direct computation, and we have circumvented the restrictions regarding the relative sizes of the input and output domains that are encountered in commonly used procedures. Moreover, the input and output planes can be tilted with respect to each other, and the output domain can be shifted off-axis.
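    The convolution-theorem approach can be illustrated in a few lines. The sketch below propagates a field with the Fresnel transfer function via FFTs; it assumes matching, untilted input and output grids, which is exactly the restriction the paper's algorithms go on to relax. Grid sizes and the aperture are arbitrary choices for the example.

        import numpy as np

        wavelength = 633e-9          # He-Ne wavelength (m)
        z = 0.5                      # propagation distance (m)
        n, pitch = 512, 10e-6        # samples per side, sample spacing (m)

        # Input field: a square aperture illuminated by a plane wave.
        x = (np.arange(n) - n // 2) * pitch
        X, Y = np.meshgrid(x, x)
        field = ((np.abs(X) < 0.5e-3) & (np.abs(Y) < 0.5e-3)).astype(complex)

        # Fresnel transfer function H = exp(ikz) exp(-i*pi*lambda*z*(fx^2+fy^2)).
        fx = np.fft.fftfreq(n, d=pitch)
        FX, FY = np.meshgrid(fx, fx)
        H = np.exp(1j * 2 * np.pi * z / wavelength) \
          * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))

        # Convolution theorem: multiply spectra instead of convolving directly.
        out = np.fft.ifft2(np.fft.fft2(field) * H)
        intensity = np.abs(out)**2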

  10. Investigation of Carbohydrate Recognition via Computer Simulation

    SciTech Connect

    Johnson, Quentin R.; Lindsay, Richard J.; Petridis, Loukas; Shen, Tongye

    2015-04-28

    Carbohydrate recognition by proteins, such as lectins and other (bio)molecules, can be essential for many biological functions. Interest has arisen due to potential protein and drug design and future bioengineering applications. A quantitative measurement of carbohydrate-protein interaction is thus important for the full characterization of sugar recognition. In this review, we focus on utilizing computer simulations and biophysical models to evaluate the strength and specificity of carbohydrate recognition. With increasing computational resources, better algorithms and refined modeling parameters, using state-of-the-art supercomputers to calculate the strength of the interaction between molecules has become increasingly mainstream. We review the current state of this technique and its successful applications for studying protein-sugar interactions in recent years.

  11. Parallel Proximity Detection for Computer Simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor); Wieland, Frederick P. (Inventor)

    1997-01-01

    The present invention discloses a system for performing proximity detection in computer simulations on parallel processing architectures utilizing a distribution list which includes movers and sensor coverages which check in and out of grids. Each mover maintains a list of sensors that detect the mover's motion as the mover and sensor coverages check in and out of the grids. Fuzzy grids are included by fuzzy resolution parameters to allow movers and sensor coverages to check in and out of grids without computing exact grid crossings. The movers check in and out of grids while moving sensors periodically inform the grids of their coverage. In addition, a lookahead function is also included for providing a generalized capability without making any limiting assumptions about the particular application to which it is applied. The lookahead function is initiated so that risk-free synchronization strategies never roll back grid events. The lookahead function adds fixed delays as events are scheduled for objects on other nodes.

  12. Computer simulation of spacecraft/environment interaction.

    PubMed

    Krupnikov, K K; Makletsov, A A; Mileev, V N; Novikov, L S; Sinolits, V V

    1999-10-01

    This report presents some examples of computer simulation of spacecraft interaction with the space environment. We analysed a set of data on electron and ion fluxes measured in 1991-1994 on the geostationary satellite GORIZONT-35. The influence of spacecraft eclipse and device eclipse by the solar-cell panel on spacecraft charging was investigated. A simple method was developed for estimating spacecraft potentials in LEO. Effects of various particle flux impacts and spacecraft orientation are discussed. A computer engineering model for the calculation of space radiation is presented. This model is used as a client/server model with a WWW interface, including spacecraft model description and results representation based on the Virtual Reality Modeling Language (VRML). PMID:11542669

  13. Investigation of Carbohydrate Recognition via Computer Simulation

    DOE PAGES Beta

    Johnson, Quentin R.; Lindsay, Richard J.; Petridis, Loukas; Shen, Tongye

    2015-04-28

    Carbohydrate recognition by proteins, such as lectins and other (bio)molecules, can be essential for many biological functions. Interest has arisen due to potential protein and drug design and future bioengineering applications. A quantitative measurement of carbohydrate-protein interaction is thus important for the full characterization of sugar recognition. In this review, we focus on utilizing computer simulations and biophysical models to evaluate the strength and specificity of carbohydrate recognition. With increasing computational resources, better algorithms and refined modeling parameters, using state-of-the-art supercomputers to calculate the strength of the interaction between molecules has become increasingly mainstream. We review the current state of this technique and its successful applications for studying protein-sugar interactions in recent years.

  14. Molecular physiology of rhodopsin: Computer simulation

    NASA Astrophysics Data System (ADS)

    Fel'Dman, T. B.; Kholmurodov, Kh. T.; Ostrovsky, M. A.

    2008-03-01

    Computer simulation is used for a comparative investigation of the molecular dynamics of rhodopsin containing the chromophore group (11-cis-retinal) and free opsin. Molecular dynamics is traced within a time interval of 3000 ps; 3 × 10^6 discrete conformational states of rhodopsin and opsin are obtained and analyzed. It is demonstrated that the presence of the chromophore group in the chromophore center of opsin considerably influences the nearest protein environment of 11-cis-retinal, both in the region of the β-ionone ring and in the region of the protonated Schiff base bond. Based on simulation results, a possible intramolecular mechanism of keeping rhodopsin as a G-protein-coupled receptor in the inactive state, i.e., the chromophore functioning as an efficient ligand antagonist, is discussed.

  15. Computer Simulation Studies of Gramicidin Channel

    NASA Astrophysics Data System (ADS)

    Song, Hyundeok; Beck, Thomas

    2009-04-01

    Ion channels are large membrane proteins, and their function is to facilitate the passage of ions across biological membranes. Recently, Dr. John Cuppoletti's group at UC showed that the gramicidin channel can function at high temperatures (360-390 K) with significant currents. This finding may have large implications for fuel cell technology. In this project, we will examine the experimental system by computer simulation. We will investigate how temperature affects the current, and how the magnitudes of the currents differ between two forms of gramicidin, A and D. This research will help to elucidate the underlying molecular mechanism in this promising new technology.

  16. Sampling strategies for accurate computational inferences of gametic phase across highly polymorphic major histocompatibility complex loci

    PubMed Central

    2011-01-01

    Background Genes of the Major Histocompatibility Complex (MHC) are very popular genetic markers among evolutionary biologists because of their potential role in pathogen confrontation and sexual selection. However, MHC genotyping still remains challenging and time-consuming in spite of substantial methodological advances. Although computational haplotype inference has brought interesting alternatives into focus, high heterozygosity, extensive genetic variation and population admixture are known to cause inaccuracies. We have investigated the role of sample size, genetic polymorphism and genetic structuring on the performance of the popular Bayesian PHASE algorithm. To this end, we took advantage of a large database of known genotypes (obtained with traditional laboratory-based techniques) at single MHC class I (N = 56 individuals and 50 alleles) and MHC class II B (N = 103 individuals and 62 alleles) loci in the lesser kestrel Falco naumanni. Findings Analyses carried out over real MHC genotypes showed that the accuracy of gametic phase reconstruction improved with sample size as a result of the reduction in the allele to individual ratio. We then simulated different data sets introducing variations in this parameter to define an optimal ratio. Conclusions Our results demonstrate a critical influence of the allele to individual ratio on PHASE performance. We found that a minimum allele to individual ratio (1:2) yielded 100% accuracy for both MHC loci. Sampling effort is therefore a crucial step in obtaining reliable MHC haplotype reconstructions and must be scaled according to the degree of MHC polymorphism. We expect our findings to provide a foothold for the design of straightforward and cost-effective genotyping strategies for those MHC loci for which locus-specific primers are available. PMID:21615903

  17. Multiscale Computer Simulation of Failure in Aerogels

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2008-01-01

    Aerogels have been of interest to the aerospace community primarily for their thermal properties, notably their low thermal conductivities. While such gels are typically fragile, recent advances in the application of conformal polymer layers to these gels have made them potentially useful as lightweight structural materials as well. We have previously performed computer simulations of aerogel thermal conductivity and tensile and compressive failure, with results that are in qualitative, and sometimes quantitative, agreement with experiment. However, recent experiments in our laboratory suggest that gels having similar densities may exhibit substantially different properties. In this work, we extend our original diffusion limited cluster aggregation (DLCA) model for gel structure to incorporate additional variation in DLCA simulation parameters, with the aim of producing DLCA clusters of similar densities that nevertheless have different fractal dimension and secondary particle coordination. We perform particle statics simulations of gel strain on these clusters, and consider the effects of differing DLCA simulation conditions, and the resultant differences in fractal dimension and coordination, on gel strain properties.
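    For readers unfamiliar with the DLCA family of models, the hedged sketch below shows the simpler particle-cluster variant (diffusion-limited aggregation) on a lattice: walkers diffuse until they touch the cluster and stick. The full DLCA model aggregates clusters of clusters, and the launch/kill radii here are common efficiency tricks, not parameters from the study.

        import math
        import random

        occupied = {(0, 0)}                  # seed particle at the origin
        r_max = 0                            # current cluster extent

        def touches_cluster(x, y):
            return any((x + dx, y + dy) in occupied
                       for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

        for _ in range(300):                 # grow 300 particles
            r0 = r_max + 5                   # launch ring just outside cluster
            theta = random.uniform(0.0, 2.0 * math.pi)
            x, y = round(r0 * math.cos(theta)), round(r0 * math.sin(theta))
            while not touches_cluster(x, y):
                dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
                x, y = x + dx, y + dy
                if abs(x) > r_max + 20 or abs(y) > r_max + 20:
                    # Walker strayed too far: relaunch it on the ring.
                    theta = random.uniform(0.0, 2.0 * math.pi)
                    x, y = round(r0 * math.cos(theta)), round(r0 * math.sin(theta))
            occupied.add((x, y))             # stick on first contact
            r_max = max(r_max, abs(x), abs(y))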

  18. Computer Simulation of Fracture in Aerogels

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2006-01-01

    Aerogels are of interest to the aerospace community primarily for their thermal properties, notably their low thermal conductivities. While the gels are typically fragile, recent advances in the application of conformal polymer layers to these gels have made them potentially useful as lightweight structural materials as well. In this work, we investigate the strength and fracture behavior of silica aerogels using a molecular statics-based computer simulation technique. The gels' structure is simulated via a Diffusion Limited Cluster Aggregation (DLCA) algorithm, which produces fractal structures representing experimentally observed aggregates of so-called secondary particles, themselves composed of amorphous silica primary particles an order of magnitude smaller. We have performed multi-length-scale simulations of fracture in silica aerogels, in which the interaction between two secondary particles is assumed to be described by a Morse pair potential parameterized such that the potential range is much smaller than the secondary particle size. These Morse parameters are obtained by atomistic simulation of models of the experimentally observed amorphous silica "bridges," with the fracture behavior of these bridges modeled via molecular statics using a Morse/Coulomb potential for silica. We consider the energetics of the fracture, and compare qualitative features of low- and high-density gel fracture.
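    The Morse pair potential named above is simple to write down; a minimal sketch follows, with placeholder parameters rather than the fitted values from the atomistic bridge simulations. A large width parameter makes the range short relative to the secondary-particle size, as the text requires.

        import numpy as np

        def morse(r, D_e=1.0, a=8.0, r_e=1.0):
            """Return (energy, force) for the Morse pair potential
            V(r) = D_e*(1 - exp(-a*(r - r_e)))**2 - D_e,
            whose minimum -D_e lies at r = r_e; large 'a' means short range."""
            e = np.exp(-a * (r - r_e))
            energy = D_e * (1.0 - e)**2 - D_e
            force = -2.0 * D_e * a * e * (1.0 - e)    # -dV/dr
            return energy, force

        r = np.linspace(0.8, 2.0, 7)
        print(morse(r))          # energies and forces sampled across the well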

  19. Computational model for protein unfolding simulation

    NASA Astrophysics Data System (ADS)

    Tian, Xu-Hong; Zheng, Ye-Han; Jiao, Xiong; Liu, Cai-Xing; Chang, Shan

    2011-06-01

    The protein folding problem is one of the fundamental and important questions in molecular biology. However, all-atom molecular dynamics studies of protein folding and unfolding are still computationally expensive and severely limited by the time scale of simulation. In this paper, a simple and fast protein unfolding method is proposed based on conformational stability analyses and structure modeling. In this method, two structure-based conditions are considered to identify the unstable regions of proteins during the unfolding processes. The protein unfolding trajectories are mimicked through iterative structure modeling according to conformational stability analyses. Two proteins, chymotrypsin inhibitor 2 (CI2) and the α-spectrin SH3 domain (SH3), were simulated by this method. Their unfolding pathways are consistent with previous molecular dynamics simulations. Furthermore, the transition states of the two proteins were identified in the unfolding processes, and the theoretical Φ values of these transition states showed significant correlations with the experimental data (correlation coefficients >0.8). The results indicate that this method is effective in studying protein unfolding. Moreover, we analyzed and discussed the influence of parameters on the unfolding simulation. This simple coarse-grained model may provide a general and fast approach for mechanism studies of protein folding.

  20. Computer simulation of solder joint failure

    SciTech Connect

    Burchett, S.N.; Frear, D.R.; Rashid, M.M.

    1997-04-01

    The thermomechanical fatigue failure of solder joints is increasingly becoming an important reliability issue for electronic packages. The purpose of this Laboratory Directed Research and Development (LDRD) project was to develop computational tools for simulating the behavior of solder joints under strain and temperature cycling, taking into account the microstructural heterogeneities that exist in as-solidified near eutectic Sn-Pb joints, as well as subsequent microstructural evolution. The authors present two computational constitutive models, a two-phase model and a single-phase model, that were developed to predict the behavior of near eutectic Sn-Pb solder joints under fatigue conditions. Unique metallurgical tests provide the fundamental input for the constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near eutectic Sn-Pb solder. The finite element simulations with this model agree qualitatively with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure could significantly degrade the fatigue lifetime. The single-phase model was developed to predict solder joint behavior using materials data for constitutive relation constants that could be determined through straightforward metallurgical experiments. Special thermomechanical fatigue tests were developed to give fundamental materials input to the models, and an in situ SEM thermomechanical fatigue test system was developed to characterize microstructural evolution and the mechanical behavior of solder joints during the test. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests. The simulation results from the two-phase model showed good fit to the experimental test results.

  1. Consistent Multigroup Theory Enabling Accurate Coarse-Group Simulation of Gen IV Reactors

    SciTech Connect

    Rahnema, Farzad; Haghighat, Alireza; Ougouag, Abderrafi

    2013-11-29

    The objective of this proposal is the development of a consistent multi-group theory that accurately accounts for the energy-angle coupling associated with collapsed-group cross sections. This will allow for coarse-group transport and diffusion theory calculations that exhibit continuous energy accuracy and implicitly treat cross-section resonances. This is of particular importance when considering the highly heterogeneous and optically thin reactor designs within the Next Generation Nuclear Plant (NGNP) framework. In such reactors, ignoring the influence of anisotropy in the angular flux on the collapsed cross section, especially at the interface between core and reflector near which control rods are located, results in inaccurate estimates of the rod worth, a serious safety concern. The scope of this project will include the development and verification of a new multi-group theory enabling high-fidelity transport and diffusion calculations in coarse groups, as well as a methodology for the implementation of this method in existing codes. This will allow for a higher accuracy solution of reactor problems while using fewer groups and will reduce the computational expense. The proposed research represents a fundamental advancement in the understanding and improvement of multi-group theory for reactor analysis.

  2. Towards an accurate representation of electrostatics in classical force fields: Efficient implementation of multipolar interactions in biomolecular simulations

    NASA Astrophysics Data System (ADS)

    Sagui, Celeste; Pedersen, Lee G.; Darden, Thomas A.

    2004-01-01

    The accurate simulation of biologically active macromolecules faces serious limitations that originate in the treatment of electrostatics in the empirical force fields. The current use of "partial charges" is a significant source of errors, since these vary widely with different conformations. By contrast, the molecular electrostatic potential (MEP) obtained through the use of a distributed multipole moment description has been shown to converge to the quantum MEP outside the van der Waals surface when higher order multipoles are used. However, in spite of the considerable improvement to the representation of the electronic cloud, higher order multipoles are not part of current classical biomolecular force fields due to the excessive computational cost. In this paper we present an efficient formalism for the treatment of higher order multipoles in a Cartesian tensor formalism. The Ewald "direct sum" is evaluated through a McMurchie-Davidson formalism [L. McMurchie and E. Davidson, J. Comput. Phys. 26, 218 (1978)]. The "reciprocal sum" has been implemented in three different ways: using an Ewald scheme, a particle mesh Ewald (PME) method, and a multigrid-based approach. We find that even though the use of the McMurchie-Davidson formalism considerably reduces the cost of the calculation with respect to the standard matrix implementation of multipole interactions, the calculation in direct space remains expensive. When most of the calculation is moved to reciprocal space via the PME method, the cost of a calculation where all multipolar interactions (up to hexadecapole-hexadecapole) are included is only about 8.5 times more expensive than a regular AMBER 7 [D. A. Pearlman et al., Comput. Phys. Commun. 91, 1 (1995)] implementation with only charge-charge interactions. The multigrid implementation is slower but shows very promising results for parallelization. It provides a natural way to interface with continuous, Gaussian-based electrostatics in the future. It is
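    For orientation, the kernel of the Ewald "direct sum" is easy to exhibit for point charges. The sketch below is a hedged, charges-only illustration: the paper's McMurchie-Davidson machinery generalizes exactly this erfc-screened kernel to higher-order Cartesian multipoles, which is not attempted here.

        import numpy as np
        from scipy.special import erfc

        def ewald_real_space(positions, charges, box, alpha, r_cut):
            """Real-space part of the Ewald energy for a cubic periodic box
            (minimum-image convention; Gaussian units, charge-charge only)."""
            n = len(charges)
            energy = 0.0
            for i in range(n):
                for j in range(i + 1, n):
                    d = positions[i] - positions[j]
                    d -= box * np.round(d / box)      # minimum image
                    r = np.linalg.norm(d)
                    if r < r_cut:
                        energy += charges[i] * charges[j] * erfc(alpha * r) / r
            return energy

        rng = np.random.default_rng(0)
        pos = rng.random((20, 3)) * 10.0              # 20 charges in a 10^3 box
        q = rng.choice([-1.0, 1.0], size=20)
        print(ewald_real_space(pos, q, box=10.0, alpha=0.35, r_cut=5.0))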

  3. Molecular Simulation of Carbon Dioxide Capture by Montmorillonite Using an Accurate and Flexible Force Field

    SciTech Connect

    Romanov, V N; Cygan, R T; Myshakin, E M

    2012-06-21

    Naturally occurring clay minerals provide a distinctive material for carbon capture and carbon dioxide sequestration. Swelling clay minerals, such as the smectite variety, possess an aluminosilicate structure that is controlled by low-charge layers that readily expand to accommodate water molecules and, potentially, CO2. Recent experimental studies have demonstrated the efficacy of intercalating CO2 in the interlayer of layered clays, but little is known about the molecular mechanisms of the process and the extent of carbon capture as a function of clay charge and structure. A series of molecular dynamics simulations and vibrational analyses have been completed to assess the molecular interactions associated with incorporation of CO2 and H2O in the interlayer of montmorillonite clay and to help validate the models with experimental observation. An accurate and fully flexible set of interatomic potentials for CO2 is developed and combined with Clayff potentials to help evaluate the intercalation mechanism and examine the effect of molecular flexibility on the diffusion rate of CO2 in water.

  4. Computer Simulations in Science Education: Implications for Distance Education

    ERIC Educational Resources Information Center

    Sahin, Sami

    2006-01-01

    This paper is a review of literature about the use of computer simulations in science education. This review examines types and examples of computer simulations. The literature review indicated that although computer simulations cannot replace science classroom and laboratory activities completely, they offer various advantages both for classroom…

  5. The Learning Effects of Computer Simulations in Science Education

    ERIC Educational Resources Information Center

    Rutten, Nico; van Joolingen, Wouter R.; van der Veen, Jan T.

    2012-01-01

    This article reviews the (quasi)experimental research of the past decade on the learning effects of computer simulations in science education. The focus is on two questions: how use of computer simulations can enhance traditional education, and how computer simulations are best used in order to improve learning processes and outcomes. We report on…

  6. Accurate and efficient integration for molecular dynamics simulations at constant temperature and pressure.

    PubMed

    Lippert, Ross A; Predescu, Cristian; Ierardi, Douglas J; Mackenzie, Kenneth M; Eastwood, Michael P; Dror, Ron O; Shaw, David E

    2013-10-28

    In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity. PMID:24182003
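    A minimal sketch of the split-update idea follows: Newtonian velocity Verlet every step, with a thermostat applied only every k_thermo steps. A simple velocity-rescaling thermostat (and a toy harmonic force) stands in for the schemes the paper analyzes; in a parallel run, the global reduction the thermostat needs would then occur only at those infrequent steps.

        import numpy as np

        def run(pos, vel, mass, force_fn, dt, n_steps, t_target, k_thermo=100):
            kB = 1.0                               # reduced units
            n_dof = vel.size
            f = force_fn(pos)
            for step in range(n_steps):
                # Newtonian particle update (velocity Verlet), every step.
                vel += 0.5 * dt * f / mass
                pos += dt * vel
                f = force_fn(pos)
                vel += 0.5 * dt * f / mass
                # Thermostat update, applied infrequently.
                if step % k_thermo == 0:
                    t_now = mass * np.sum(vel**2) / (kB * n_dof)
                    vel *= np.sqrt(t_target / t_now)
            return pos, vel

        # Toy usage: ten independent harmonic oscillators (scalar mass assumed).
        pos, vel = run(np.zeros(10), np.ones(10), 1.0, lambda q: -q,
                       dt=0.01, n_steps=20000, t_target=0.5)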

  7. Benchmarking computational fluid dynamics models for lava flow simulation

    NASA Astrophysics Data System (ADS)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi

    2016-04-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, and COMSOL. Using the new benchmark scenarios defined in Cordonnier et al. (Geol Soc SP, 2015) as a guide, we model viscous, cooling, and solidifying flows over horizontal and sloping surfaces, topographic obstacles, and digital elevation models of natural topography. We compare model results to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Overall, the models accurately simulate viscous flow, with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. As natural test cases, we apply these models to reconstruct past lava flows in Hawai'i and Saudi Arabia using parameters assembled from morphology, textural analysis, and eruption observations. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection.

  8. Simulating Subsurface Reactive Flows on Ultrascale Computers with PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Hammond, G. E.; Lichtner, P. C.; Lu, C.; Smith, B. F.; Philip, B.

    2009-12-01

    To provide true predictive utility, subsurface simulations often must accurately resolve--in three dimensions--complicated, multi-phase flow fields in highly heterogeneous geology with numerous chemical species and complex chemistry. This task is especially daunting because of the wide range of spatial scales involved--from the pore scale to the field scale--ranging over six orders of magnitude, and the wide range of time scales ranging from seconds or less to millions of years. This represents a true "Grand Challenge" computational problem, requiring not only the largest-scale ("ultrascale") supercomputers, but accompanying advances in algorithms for the efficient numerical solution of systems of PDEs using these machines, and in mathematical modeling techniques that can adequately capture the truly multi-scale nature of these problems. We describe some of the specific challenges involved and present the software and algorithmic approaches that are being used in the computer code PFLOTRAN to provide scalable performance for such simulations on tens of thousands of processors. We focus particularly on scalable techniques for solving the large (up to billions of total degrees of freedom), sparse algebraic systems that arise. We also describe ongoing work to address disparate time and spatial scales by both the development of adaptive mesh refinement methods and the use of multiple continuum formulations. Finally, we present some examples from recent simulations conducted on Jaguar, the 150,152-processor-core Cray XT5 system at Oak Ridge National Laboratory that is currently one of the most powerful supercomputers in the world.

  9. A mechanistic approach for accurate simulation of village scale malaria transmission

    PubMed Central

    Bomblies, Arne; Duchemin, Jean-Bernard; Eltahir, Elfatih AB

    2009-01-01

    Background Malaria transmission models commonly incorporate spatial environmental and climate variability for making regional predictions of disease risk. However, a mismatch of these models' typical spatial resolutions and the characteristic scale of malaria vector population dynamics may confound disease risk predictions in areas of high spatial hydrological variability such as the Sahel region of Africa. Methods Field observations spanning two years from two Niger villages are compared. The two villages are separated by only 30 km but exhibit a ten-fold difference in anopheles mosquito density. These two villages would be covered by a single grid cell in many malaria models, yet their entomological activity differs greatly. Environmental conditions and associated entomological activity are simulated at high spatial- and temporal resolution using a mechanistic approach that couples a distributed hydrology scheme and an entomological model. Model results are compared to regular field observations of Anopheles gambiae sensu lato mosquito populations and local hydrology. The model resolves the formation and persistence of individual pools that facilitate mosquito breeding and predicts spatio-temporal mosquito population variability at high resolution using an agent-based modeling approach. Results Observations of soil moisture, pool size, and pool persistence are reproduced by the model. The resulting breeding of mosquitoes in the simulated pools yields time-integrated seasonal mosquito population dynamics that closely follow observations from captured mosquito abundance. Interannual difference in mosquito abundance is simulated, and the inter-village difference in mosquito population is reproduced for two years of observations. These modeling results emulate the known focal nature of malaria in Niger Sahel villages. Conclusion Hydrological variability must be represented at high spatial and temporal resolution to achieve accurate predictive ability of malaria risk

  10. A Computational Framework for Bioimaging Simulation

    PubMed Central

    Watabe, Masaki; Arjunan, Satya N. V.; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi

    2015-01-01

    Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher level functions emerge from the combined action of biomolecules. However, there still remain formidable challenges in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from the stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of the cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units. PMID:26147508

  11. Neural network computer simulation of medical aerosols.

    PubMed

    Richardson, C J; Barlow, D J

    1996-06-01

    Preliminary investigations have been conducted to assess the potential for using artificial neural networks to simulate aerosol behaviour, with a view to employing this type of methodology in the evaluation and design of pulmonary drug-delivery systems. Details are presented of the general purpose software developed for these tasks; it implements a feed-forward back-propagation algorithm with weight decay and connection pruning, the user having complete run-time control of the network architecture and mode of training. A series of exploratory investigations is then reported in which different network structures and training strategies are assessed in terms of their ability to simulate known patterns of fluid flow in simple model systems. The first of these involves simulations of cellular automata-generated data for fluid flow through a partially obstructed two-dimensional pipe. The artificial neural networks are shown to be highly successful in simulating the behaviour of this simple linear system, but with important provisos relating to the information content of the training data and the criteria used to judge when the network is properly trained. A second set of investigations is then reported in which similar networks are used to simulate patterns of fluid flow through aerosol generation devices, using training data furnished through rigorous computational fluid dynamics modelling. These more complex three-dimensional systems are modelled with equal success. It is concluded that carefully tailored, well trained networks could provide valuable tools not just for predicting but also for analysing the spatial dynamics of pharmaceutical aerosols. PMID:8832491
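    As a rough illustration of the class of network described (feed-forward, trained by back-propagation with weight decay; connection pruning is omitted), here is a self-contained NumPy sketch on toy data. Sizes, learning rate, and decay are arbitrary choices, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hid, n_out = 4, 8, 1
        W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
        W2 = rng.normal(0.0, 0.5, (n_hid, n_out))
        lr, decay = 0.1, 1e-4                 # learning rate, weight decay

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def train_step(x, y):
            global W1, W2
            h = sigmoid(x @ W1)               # forward pass
            out = sigmoid(h @ W2)
            d_out = (out - y) * out * (1 - out)          # backward pass
            d_hid = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * (h.T @ d_out + decay * W2)        # decay shrinks weights
            W1 -= lr * (x.T @ d_hid + decay * W1)
            return float(np.mean((out - y) ** 2))

        x = rng.random((16, n_in))                       # toy batch
        y = (x.sum(axis=1, keepdims=True) > 2.0).astype(float)
        for epoch in range(500):
            mse = train_step(x, y)
        print(mse)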

  12. Computational simulation for concurrent engineering of aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure (multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties), fundamental to developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  13. Computational simulation of concurrent engineering for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1992-01-01

    Results are summarized of an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure (multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties), fundamental to developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.

  14. Computational considerations for the simulation of shock-induced sound

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Carpenter, Mark H.

    1996-01-01

    The numerical study of aeroacoustic problems places stringent demands on the choice of a computational algorithm, because it requires the ability to propagate disturbances of small amplitude and short wavelength. The demands are particularly high when shock waves are involved, because the chosen algorithm must also resolve discontinuities in the solution. The extent to which a high-order-accurate shock-capturing method can be relied upon for aeroacoustics applications that involve the interaction of shocks with other waves has not been previously quantified. Such a study is initiated in this work. A fourth-order-accurate essentially nonoscillatory (ENO) method is used to investigate the solutions of inviscid, compressible flows with shocks in a quasi-one-dimensional nozzle flow. The design order of accuracy is achieved in the smooth regions of a steady-state test case. However, in an unsteady test case, only first-order results are obtained downstream of a sound-shock interaction. The difficulty in obtaining a globally high-order-accurate solution in such a case with a shock-capturing method is demonstrated through the study of a simplified, linear model problem. Some of the difficult issues and ramifications for aeroacoustics simulations of flows with shocks that are raised by these results are discussed.

  15. Limited rotational and rovibrational line lists computed with highly accurate quartic force fields and ab initio dipole surfaces.

    PubMed

    Fortenberry, Ryan C; Huang, Xinchuan; Schwenke, David W; Lee, Timothy J

    2014-02-01

    In this work, computational procedures are employed to compute the rotational and rovibrational spectra and line lists for H2O, CO2, and SO2. Building on the established use of quartic force fields, MP2 and CCSD(T) Dipole Moment Surfaces (DMSs) are computed for each system of study in order to produce line intensities as well as the transition energies. The computed results exhibit a clear correlation to reference data available in the HITRAN database. Additionally, even though CCSD(T) DMSs produce more accurate intensities as compared to experiment, the use of MP2 DMSs results in reliable line lists that are still comparable to experiment. The use of the less computationally costly MP2 method is beneficial in the study of larger systems where use of CCSD(T) would be more costly. PMID:23692860

  16. Computer simulation of surface and film processes

    NASA Technical Reports Server (NTRS)

    Tiller, W. A.; Halicioglu, M. T.

    1984-01-01

    All of the investigations performed employed, in one way or another, computer simulation techniques based on atomistic-level considerations. In general, three types of simulation methods were used for modeling systems with discrete particles that interact via well defined potential functions: molecular dynamics (a general method for solving the classical equations of motion of a model system); Monte Carlo (the use of a Markov chain ensemble averaging technique to model equilibrium properties of a system); and molecular statics (provides properties of a system at T = 0 K). The effects of three-body forces on the vibrational frequencies of triatomic clusters were investigated. The multilayer relaxation phenomena for low index planes of an fcc crystal were also analyzed as a function of the three-body interactions. Various surface properties for Si and SiC systems were calculated. Results obtained from static simulation calculations for slip formation were presented. The more elaborate molecular dynamics calculations on the propagation of cracks in two-dimensional systems were outlined.
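    Of the three methods listed, Monte Carlo is the most compact to sketch. The following is a minimal Metropolis sampler for particles interacting through a well-defined pair potential; the Lennard-Jones form and all parameters are illustrative stand-ins, not the potentials used in the work.

        import numpy as np

        rng = np.random.default_rng(1)
        n, box, T = 32, 6.0, 1.0             # particles, box edge, temperature

        def pair_energy(r2):
            inv6 = 1.0 / r2**3
            return 4.0 * (inv6**2 - inv6)    # Lennard-Jones, reduced units

        def energy_of(i, pos):
            d = pos - pos[i]
            d -= box * np.round(d / box)     # minimum image
            r2 = np.sum(d * d, axis=1)
            r2[i] = np.inf                   # skip self-interaction
            return np.sum(pair_energy(r2))

        # Start from a sparse lattice so no particles overlap.
        side = (np.arange(4) + 0.5) * (box / 4)
        pos = np.array([(x, y, z) for x in side for y in side for z in side[:2]])

        for sweep in range(200):
            for i in range(n):
                old = pos[i].copy()
                e_old = energy_of(i, pos)
                pos[i] = (old + rng.normal(0.0, 0.1, 3)) % box
                e_new = energy_of(i, pos)
                # Metropolis rule: accept with probability min(1, e^{-dE/T}).
                if rng.random() >= np.exp(-(e_new - e_old) / T):
                    pos[i] = old             # reject the move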

  17. Computer simulation of fatigue under diametrical compression

    SciTech Connect

    Carmona, H. A.; Kun, F.; Andrade, J. S. Jr.; Herrmann, H. J.

    2007-04-15

    We study the fatigue fracture of disordered materials by means of computer simulations of a discrete element model. We extend a two-dimensional fracture model to capture the microscopic mechanisms relevant for fatigue and we simulate the diametric compression of a disc shape specimen under a constant external force. The model allows us to follow the development of the fracture process on the macrolevel and microlevel varying the relative influence of the mechanisms of damage accumulation over the load history and healing of microcracks. As a specific example we consider recent experimental results on the fatigue fracture of asphalt. Our numerical simulations show that for intermediate applied loads the lifetime of the specimen presents a power law behavior. Under the effect of healing, more prominent for small loads compared to the tensile strength of the material, the lifetime of the sample increases and a fatigue limit emerges below which no macroscopic failure occurs. The numerical results are in a good qualitative agreement with the experimental findings.

  18. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time-accurate, general-purpose adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction-correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock-vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  19. Solid rocket booster internal flow analysis by highly accurate adaptive computational methods

    NASA Technical Reports Server (NTRS)

    Huang, C. Y.; Tworzydlo, W.; Oden, J. T.; Bass, J. M.; Cullen, C.; Vadaketh, S.

    1991-01-01

    The primary objective of this project was to develop an adaptive finite element flow solver for simulating internal flows in the solid rocket booster. Described here is a unique flow simulator code for analyzing highly complex flow phenomena in the solid rocket booster. New methodologies and features incorporated into this analysis tool are described.

  20. Combining Theory and Experiment to Compute Highly Accurate Line Lists for Stable Molecules, and Purely AB Initio Theory to Compute Accurate Rotational and Rovibrational Line Lists for Transient Molecules

    NASA Astrophysics Data System (ADS)

    Lee, Timothy J.; Huang, Xinchuan; Fortenberry, Ryan C.; Schwenke, David W.

    2013-06-01

    Theoretical chemists have been computing vibrational and rovibrational spectra of small molecules for more than 40 years, but over the last decade the interest in this application has grown significantly. The increased interest in computing accurate rotational and rovibrational spectra for small molecules could not come at a better time, as NASA and ESA have begun to acquire a mountain of high-resolution spectra from the Herschel mission, and soon will from the SOFIA and JWST missions. In addition, the ground-based telescope, ALMA, has begun to acquire high-resolution spectra in the same time frame. Hence the need for highly accurate line lists for many small molecules, including their minor isotopologues, will only continue to increase. I will present the latest developments from our group on using the "Best Theory + High-Resolution Experimental Data" strategy to compute highly accurate rotational and rovibrational spectra for small molecules, including NH3, CO2, and SO2. I will also present the latest work from our group in producing purely ab initio line lists and spectroscopic constants for small molecules thought to exist in various astrophysical environments, but for which there is either limited or no high-resolution experimental data available. These more limited line lists include purely rotational transitions as well as rovibrational transitions for bands up through a few combination/overtones.

  1. Chip level simulation of fault tolerant computers

    NASA Technical Reports Server (NTRS)

    Armstrong, J. R.

    1983-01-01

    Chip level modeling techniques, functional fault simulation, simulation software development, a more efficient, high level version of GSP, and a parallel architecture for functional simulation are discussed.

  2. A fourth order accurate finite difference scheme for the computation of elastic waves

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Jordan, K. E.; Lemesurier, B. J.; Turkel, E.

    1986-01-01

    A finite difference scheme for elastic waves is introduced. The model is based on the first order system of equations for the velocities and stresses. The differencing is fourth order accurate in the spatial derivatives and second order accurate in time. The model is tested on a series of examples including the Lamb problem, scattering from plane interfaces, and scattering from a fluid-elastic interface. The scheme is shown to be effective for these problems. The accuracy and stability are insensitive to the Poisson ratio. For the class of problems considered here it is found that the fourth order scheme requires two-thirds to one-half the resolution of a typical second order scheme to give comparable accuracy.
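    In the same spirit (though for the scalar wave equation rather than the velocity-stress system of the paper), the sketch below combines the standard fourth-order central stencil in space with second-order leapfrog in time; the boundary closure is deliberately crude, and all parameters are illustrative.

        import numpy as np

        n, c = 400, 1.0
        dx = 1.0 / n
        dt = 0.4 * dx / c                    # well inside the stability limit
        x = np.linspace(0.0, 1.0, n)

        u_prev = np.exp(-((x - 0.5) / 0.05) ** 2)   # Gaussian pulse at rest
        u = u_prev.copy()

        def u_xx4(u):
            """Fourth-order accurate second derivative on interior points."""
            d = np.zeros_like(u)
            d[2:-2] = (-u[:-4] + 16 * u[1:-3] - 30 * u[2:-2]
                       + 16 * u[3:-1] - u[4:]) / (12.0 * dx**2)
            return d

        for step in range(2000):
            u_next = 2 * u - u_prev + (c * dt)**2 * u_xx4(u)
            u_next[:2] = 0.0                 # crude fixed-end closure
            u_next[-2:] = 0.0
            u_prev, u = u, u_next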

  3. Space radiator simulation manual for computer code

    NASA Technical Reports Server (NTRS)

    Black, W. Z.; Wulff, W.

    1972-01-01

    A computer program that simulates the performance of a space radiator is presented. The program basically consists of a rigorous analysis which analyzes a symmetrical fin panel and an approximate analysis that predicts system characteristics for cases of non-symmetrical operation. The rigorous analysis accounts for both transient and steady state performance including aerodynamic and radiant heating of the radiator system. The approximate analysis considers only steady state operation with no aerodynamic heating. A description of the radiator system and instructions to the user for program operation are included. The input required for the execution of all program options is described. Several examples of program output are contained in this section. Sample output includes the radiator performance during ascent, reentry and orbit.

  4. Computational simulation methods for composite fracture mechanics

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1988-01-01

    Structural integrity, durability, and damage tolerance of advanced composites are assessed by studying damage initiation at various scales (micro, macro, and global) and accumulation and growth leading to global failure, quantitatively and qualitatively. In addition, various fracture toughness parameters associated with a typical damage and its growth must be determined. Computational structural analysis codes to aid the composite design engineer in performing these tasks were developed. CODSTRAN (COmposite Durability STRuctural ANalysis) is used to qualitatively and quantitatively assess the progressive damage occurring in composite structures due to mechanical and environmental loads. Next, methods are covered that are currently being developed and used at Lewis to predict interlaminar fracture toughness and related parameters of fiber composites given a prescribed damage. The general purpose finite element code MSC/NASTRAN was used to simulate the interlaminar fracture and the associated individual as well as mixed-mode strain energy release rates in fiber composites.

  5. Computational simulation of hot composite structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Murthy, P. L. N.; Singhal, S. N.

    1991-01-01

    Three different computer codes developed in-house are described for application to hot composite structures. These codes include capabilities for: (1) laminate behavior (METCAN); (2) thermal/structural analysis of hot structures made from high temperature metal matrix composites (HITCAN); and (3) laminate tailoring (MMLT). Results for select sample cases are described to demonstrate the versatility as well as the application of these codes to specific situations. The sample case results show that METCAN can be used to simulate cyclic life in high temperature metal matrix composites; HITCAN can be used to evaluate the structural performance of curved panels as well as respective sensitivities of various nonlinearities, and MMLT can be used to tailor the fabrication process in order to reduce residual stresses in the matrix upon cool-down.

  6. Computational simulation of hot composites structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Murthy, P. L. N.; Singhal, S. N.

    1991-01-01

    Three different computer codes developed in-house are described for application to hot composite structures. These codes include capabilities for: (1) laminate behavior (METCAN); (2) thermal/structural analysis of hot structures made from high temperature metal matrix composites (HITCAN); and (3) laminate tailoring (MMLT). Results for select sample cases are described to demonstrate the versatility as well as the application of these codes to specific situations. The sample case results show that METCAN can be used to simulate cyclic life in high temperature metal matrix composites; HITCAN can be used to evaluate the structural performance of curved panels as well as respective sensitivities of various nonlinearities, and MMLT can be used to tailor the fabrication process in order to reduce residual stresses in the matrix upon cool-down.

  7. Miller experiments in atomistic computer simulations

    PubMed Central

    Saitta, Antonino Marco; Saija, Franz

    2014-01-01

    The celebrated Miller experiments reported on the spontaneous formation of amino acids from a mixture of simple molecules reacting under an electric discharge, giving birth to the research field of prebiotic chemistry. However, the chemical reactions involved in those experiments have never been studied at the atomic level. Here we report on, to our knowledge, the first ab initio computer simulations of Miller-like experiments in the condensed phase. Our study, based on the recent method of treatment of aqueous systems under electric fields and on metadynamics analysis of chemical reactions, shows that glycine spontaneously forms from mixtures of simple molecules once an electric field is switched on and identifies formic acid and formamide as key intermediate products of the early steps of the Miller reactions, and the crucible of formation of complex biological molecules. PMID:25201948

  8. Computer simulation of fracture using long range pair potentials

    NASA Technical Reports Server (NTRS)

    Mullins, M.

    1984-01-01

    The behavior of a crack in iron is studied using computer simulation. A Morse interatomic potential is used that is longer ranged than the potentials used in previous studies. With this potential, blunting of the crack tip by spontaneous dislocation nucleation occurs under most conditions. The presence of hydrogen appears to inhibit this blunting and thus to encourage brittle fracture. In the absence of hydrogen, a temperature-dependent brittle-ductile transition is observed for some loads. This behavior is quite different from that observed when shorter ranged potentials are used, and it appears to be in somewhat better agreement with experimental results. This agreement with experiment is not conclusive, however, indicating the need for further work to determine more accurate potentials.

  9. Experiential Learning through Computer-Based Simulations.

    ERIC Educational Resources Information Center

    Maynes, Bill; And Others

    1992-01-01

    Describes experiential learning instructional model and simulation for student principals. Describes interactive laser videodisc simulation. Reports preliminary findings about student principal learning from simulation. Examines learning approaches by unsuccessful and successful students and learning levels of model learners. Simulation's success…

  10. Modeling and Simulation Reliable Spacecraft On-Board Computing

    NASA Technical Reports Server (NTRS)

    Park, Nohpill

    1999-01-01

    The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast and cost-effective on-board computing system, which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before employing fault tolerance in the system. Testing and fault tolerance strategies should be driven by accurate performance models (i.e., throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module, and a module for fault tolerance, all of which interact through a central graphical user interface.

  11. Methods for increased computational efficiency of multibody simulations

    NASA Astrophysics Data System (ADS)

    Epple, Alexander

    This thesis is concerned with the efficient numerical simulation of finite element based flexible multibody systems. Scaling operations are systematically applied to the governing index-3 differential algebraic equations in order to solve the problem of ill conditioning for small time step sizes. The importance of augmented Lagrangian terms is demonstrated. The use of fast sparse solvers is justified for the solution of the linearized equations of motion resulting in significant savings of computational costs. Three time stepping schemes for the integration of the governing equations of flexible multibody systems are discussed in detail. These schemes are the two-stage Radau IIA scheme, the energy decaying scheme, and the generalized-alpha method. Their formulations are adapted to the specific structure of the governing equations of flexible multibody systems. The efficiency of the time integration schemes is comprehensively evaluated on a series of test problems. Formulations for structural and constraint elements are reviewed and the problem of interpolation of finite rotations in geometrically exact structural elements is revisited. This results in the development of a new improved interpolation algorithm, which preserves the objectivity of the strain field and guarantees stable simulations in the presence of arbitrarily large rotations. Finally, strategies for the spatial discretization of beams in the presence of steep variations in cross-sectional properties are developed. These strategies reduce the number of degrees of freedom needed to accurately analyze beams with discontinuous properties, resulting in improved computational efficiency.
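
    As a sketch of the generalized-alpha method named in this record, the following implements the standard Chung-Hulbert parametrization for a linear single-degree-of-freedom oscillator m*d'' + c*d' + k*d = f(t); it is a scalar illustration only, not the thesis's adaptation to flexible multibody differential algebraic equations.

      import numpy as np

      def generalized_alpha(m, c, k, f, d0, v0, h, n_steps, rho_inf=0.8):
          # rho_inf in [0, 1] controls high-frequency numerical damping
          # (1 = none, 0 = maximal).
          am = (2.0 * rho_inf - 1.0) / (rho_inf + 1.0)
          af = rho_inf / (rho_inf + 1.0)
          beta = 0.25 * (1.0 - am + af) ** 2
          gamma = 0.5 - am + af
          d, v = d0, v0
          a = (f(0.0) - c * v0 - k * d0) / m  # consistent initial acceleration
          history = [(0.0, d)]
          for i in range(n_steps):
              t, t1 = i * h, (i + 1) * h
              # Newmark predictors (terms not involving the new acceleration)
              d_pred = d + h * v + h * h * (0.5 - beta) * a
              v_pred = v + h * (1.0 - gamma) * a
              # Balance enforced at the alpha-weighted intermediate points
              lhs = (1.0 - am) * m + (1.0 - af) * (gamma * h * c + beta * h * h * k)
              rhs = ((1.0 - af) * f(t1) + af * f(t)
                     - am * m * a
                     - c * ((1.0 - af) * v_pred + af * v)
                     - k * ((1.0 - af) * d_pred + af * d))
              a = rhs / lhs
              d = d_pred + beta * h * h * a
              v = v_pred + gamma * h * a
              history.append((t1, d))
          return history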

  12. Engineering Fracking Fluids with Computer Simulation

    NASA Astrophysics Data System (ADS)

    Shaqfeh, Eric

    2015-11-01

    There are no comprehensive simulation-based tools for engineering the flows of viscoelastic fluid-particle suspensions in fully three-dimensional geometries. On the other hand, the need for such a tool in engineering applications is immense. Suspensions of rigid particles in viscoelastic fluids play key roles in many energy applications. For example, in oil drilling the ``drilling mud'' is a very viscous, viscoelastic fluid designed to shear-thin during drilling, but thicken at stoppage so that the ``cuttings'' can remain suspended. In a related application, hydraulic fracturing, suspensions of solids called ``proppant'' are pumped into the well to prop open the fracture. It is well known that particle flow and settling in a viscoelastic fluid can be quite different from what is observed in Newtonian fluids. First, the ``fluid-particle split'' at bifurcation cracks is controlled by fluid rheology in a manner that is not understood. Second, in Newtonian fluids, the presence of an imposed shear flow in the direction perpendicular to gravity (which we term a cross or orthogonal shear flow) has no effect on the settling of a spherical particle in Stokes flow (i.e. at vanishingly small Reynolds number). By contrast, in a non-Newtonian liquid, the complex rheological properties induce a nonlinear coupling between sedimentation and shear flow. Recent experimental data have shown that both the shear thinning and the elasticity of the suspending polymeric solutions significantly affect the fluid-particle split at bifurcations, as well as the settling rate of the solids. In the present work, we use the Immersed Boundary Method to develop computer simulations of viscoelastic flow in suspensions of spheres to study these problems. These simulations allow us to understand the detailed physical mechanisms for the remarkable physical behavior seen in practice, and actually suggest design rules for creating new fluid recipes.

  13. A computer simulation model to compute the radiation transfer of mountainous regions

    NASA Astrophysics Data System (ADS)

    Li, Yuguang; Zhao, Feng; Song, Rui

    2011-11-01

    In mountainous regions, the radiometric signal recorded at the sensor depends on a number of factors such as sun angle, atmospheric conditions, surface cover type, and topography. In this paper, a computer simulation model of radiation transfer is designed and evaluated. This model implements Monte Carlo ray-tracing techniques and is specifically dedicated to the study of light propagation in mountainous regions. The radiative processes between sunlight and the objects within the mountainous region are realized by using forward Monte Carlo ray-tracing methods. The performance of the model is evaluated through detailed comparisons with the well-established 3D computer simulation model RGM (Radiosity-Graphics combined Model), based on the same scenes and identical spectral parameters, which show good agreement between the two models' results. By using the newly developed computer model, a series of typical mountainous scenes is generated to analyze the physical mechanism of mountainous radiation transfer. The results show that the effects of the adjacent slopes are important for deep valleys and particularly affect shadowed pixels, and that the topographic effect needs to be considered in mountainous terrain before accurate inferences can be made from remotely sensed data.

  14. Accurate prediction of unsteady and time-averaged pressure loads using a hybrid Reynolds-Averaged/large-eddy simulation technique

    NASA Astrophysics Data System (ADS)

    Bozinoski, Radoslav

    Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Up until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady, Navier-Stokes solvers have on the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular-arc bump, a laminar flat plate, a laminar cylinder, and a turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 are presented and show efficiencies of 90% and higher for processes of no less than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, validation data for separated, non-reattaching flows over a NACA 0012 airfoil, wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions

  15. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    SciTech Connect

    Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.

    1999-10-01

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
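
    The channel-space arithmetic behind a CHO statistic is compact; the sketch below assumes the class means and covariance in channel space are already available (the paper derives them theoretically for MAP reconstructions rather than estimating them from samples).

      import numpy as np

      def cho_statistic(g, mean1, mean2, cov):
          # Hotelling template in channel space: w = S^{-1} (mean1 - mean2);
          # the scalar statistic t = w^T g is thresholded to decide
          # between the two classes (e.g., signal present/absent).
          w = np.linalg.solve(cov, mean1 - mean2)
          return w @ g

      def cho_snr(mean1, mean2, cov):
          # Observer detectability: SNR^2 = dg^T S^{-1} dg.
          dg = mean1 - mean2
          return np.sqrt(dg @ np.linalg.solve(cov, dg))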

  16. Time-Accurate Computations of Isolated Circular Synthetic Jets in Crossflow

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Schaeffler, N. W.; Milanovic, I. M.; Zaman, K. B. M. Q.

    2007-01-01

    Results from unsteady Reynolds-averaged Navier-Stokes computations are described for two different synthetic jet flows issuing into a turbulent boundary layer crossflow through a circular orifice. In one case the jet effect is mostly contained within the boundary layer, while in the other case the jet effect extends beyond the boundary layer edge. Both cases have momentum flux ratios less than 2. Several numerical parameters are investigated, and some lessons learned regarding the CFD methods for computing these types of flow fields are summarized. Results in both cases are compared to experiment.

  17. Time-Accurate Computations of Isolated Circular Synthetic Jets in Crossflow

    NASA Technical Reports Server (NTRS)

    Rumsey, Christoper L.; Schaeffler, Norman W.; Milanovic, I. M.; Zaman, K. B. M. Q.

    2005-01-01

    Results from unsteady Reynolds-averaged Navier-Stokes computations are described for two different synthetic jet flows issuing into a turbulent boundary layer crossflow through a circular orifice. In one case the jet effect is mostly contained within the boundary layer, while in the other case the jet effect extends beyond the boundary layer edge. Both cases have momentum flux ratios less than 2. Several numerical parameters are investigated, and some lessons learned regarding the CFD methods for computing these types of flow fields are outlined. Results in both cases are compared to experiment.

  18. Duality quantum computer and the efficient quantum simulations

    NASA Astrophysics Data System (ADS)

    Wei, Shi-Jie; Long, Gui-Lu

    2016-03-01

    Duality quantum computing is a new mode of quantum computing that simulates a moving quantum computer passing through a multi-slit. It exploits the particle-wave duality property for computing. A quantum computer with n qubits and a qudit simulates a moving quantum computer with n qubits passing through a d-slit. Duality quantum computing can realize an arbitrary sum of unitaries and therefore a general quantum operator, which is called a generalized quantum gate. All linear bounded operators can be realized by generalized quantum gates, and unitary operators are just the extreme points of the set of generalized quantum gates. Duality quantum computing provides flexibility and a clear physical picture in designing quantum algorithms, and serves as a powerful bridge between quantum and classical algorithms. In this paper, after a brief review of the theory of duality quantum computing, we concentrate on the applications of duality quantum computing to simulations of Hamiltonian systems. We show that duality quantum computing can efficiently simulate quantum systems by providing descriptions of the recent efficient quantum simulation algorithm of Childs and Wiebe (Quantum Inf Comput 12(11-12):901-924, 2012) for the fast simulation of quantum systems with a sparse Hamiltonian, and the quantum simulation algorithm by Berry et al. (Phys Rev Lett 114:090502, 2015), which provides exponential improvement in precision for simulating systems with a sparse Hamiltonian.
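
    The "sum of unitaries" at the heart of a generalized quantum gate can be illustrated with plain linear algebra; the NumPy sketch below only emulates the arithmetic classically and does not represent a duality quantum device, where the sum is realized physically with some success probability.

      import numpy as np

      I2 = np.eye(2, dtype=complex)
      X = np.array([[0, 1], [1, 0]], dtype=complex)   # example unitaries
      Z = np.array([[1, 0], [0, -1]], dtype=complex)

      def generalized_gate(coeffs, unitaries, state):
          # Apply A = sum_i c_i U_i to a state vector and renormalize.
          # The squared norm reduction plays the role of the success
          # probability on the physical device.
          A = sum(c * U for c, U in zip(coeffs, unitaries))
          out = A @ state
          norm = np.linalg.norm(out)
          return out / norm, norm ** 2

      psi = np.array([1.0, 0.0], dtype=complex)
      phi, weight = generalized_gate([0.6, 0.3, 0.1], [I2, X, Z], psi)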

  19. Computer subroutine ISUDS accurately solves large system of simultaneous linear algebraic equations

    NASA Technical Reports Server (NTRS)

    Collier, G.

    1967-01-01

    The ISUDS computer program (an Iterative Scheme Using a Direct Solution) obtains double-precision accuracy using a single-precision coefficient matrix. ISUDS solves a system of equations written in matrix form as AX = B, where A is a square, non-singular coefficient matrix, X is a vector, and B is a vector.
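
    A sketch of mixed-precision iterative refinement in the spirit of ISUDS (the original NASA routine is not reproduced here): factor the matrix once in single precision, then repeatedly correct the solution using residuals computed in double precision.

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      def iterative_refinement(A, b, n_iter=5):
          # Factor in float32; the cheap factors are reused every iteration.
          lu, piv = lu_factor(A.astype(np.float32))
          x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
          for _ in range(n_iter):
              r = b - A @ x                            # residual in float64
              x += lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
          return x

      A = np.random.rand(100, 100) + 100.0 * np.eye(100)  # well conditioned
      b = np.random.rand(100)
      x = iterative_refinement(A, b)   # accuracy close to a float64 solve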

  20. Using Computational Simulations to Confront Students' Mental Models

    ERIC Educational Resources Information Center

    Rodrigues, R.; Carvalho, P. Simeão

    2014-01-01

    In this paper we show an example of how to use a computational simulation to obtain visual feedback for students' mental models, and compare their predictions with the simulated system's behaviour. Additionally, we use the computational simulation to incrementally modify the students' mental models in order to accommodate new data,…

  1. Computer-aided simulation study of photomultiplier tubes

    NASA Technical Reports Server (NTRS)

    Zaghloul, Mona E.; Rhee, Do Jun

    1989-01-01

    A computer model that simulates the response of photomultiplier tubes (PMTs) and the associated voltage divider circuit is developed. An equivalent circuit that approximates the operation of the device is derived and then used to develop a computer simulation of the PMT. Simulation results are presented and discussed.

  2. MULTEM: A new multislice program to perform accurate and fast electron diffraction and imaging simulations using Graphics Processing Units with CUDA.

    PubMed

    Lobato, I; Van Dyck, D

    2015-09-01

    The main features and the GPU implementation of the MULTEM program are presented and described. This new program performs accurate and fast multislice simulations by including a higher-order expansion of the multislice solution of the high-energy Schrödinger equation, the correct subslicing of the three-dimensional potential, and top-bottom surfaces. The program implements different kinds of simulation for CTEM, STEM, ED, PED, CBED, ADF-TEM and ABF-HC with proper treatment of the spatial and temporal incoherences. The multislice approach described here treats the specimen as an amorphous material, which allows a straightforward implementation of the frozen phonon approximation. The generalized transmission function for each slice is calculated when it is needed and then discarded. This allows us to perform large simulations that can include millions of atoms and keep the computer memory requirements at a reasonable level. PMID:25965576
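
    One step of the conventional multislice recursion that MULTEM builds on -- transmission through the slice followed by FFT-based Fresnel propagation -- is sketched below; MULTEM's higher-order expansion, subslicing and GPU implementation are not reproduced here.

      import numpy as np

      def multislice_step(psi, t_slice, dz, wavelength, dx):
          # psi     : complex 2D wave entering the slice (N x N grid)
          # t_slice : complex 2D transmission function of the slice
          # dx      : real-space sampling (same units as wavelength and dz)
          n = psi.shape[0]
          k = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
          kx, ky = np.meshgrid(k, k, indexing="ij")
          fresnel = np.exp(-1j * np.pi * wavelength * dz * (kx**2 + ky**2))
          return np.fft.ifft2(fresnel * np.fft.fft2(t_slice * psi))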

  3. Accurate Analysis and Computer Aided Design of Microstrip Dual Mode Resonators and Filters.

    NASA Astrophysics Data System (ADS)

    Grounds, Preston Whitfield, III

    1995-01-01

    Microstrip structures are of interest due to their many applications in microwave circuit design. Their small size and ease of connection to both passive and active components make them well suited for use in systems where size and space are at a premium. These include satellite communication systems, radar systems, satellite navigation systems, cellular phones and many others. In general, space is always at a premium for any mobile system. Microstrip resonators find particular application in oscillators and filters. In typical filters, each microstrip patch corresponds to one resonator. However, when dual mode patches are employed, each patch acts as two resonators and therefore reduces the amount of space required to build the filter. This dissertation focuses on the accurate electromagnetic analysis of the components of planar dual mode filters. Highly accurate analyses are required so that the resonator-to-resonator coupling and the resonator-to-input/output coupling can be predicted with precision. Hence, filters can be built with a minimum of design iterations and tuning. The analysis used herein is an integral equation formulation in the spectral domain. The analysis is done in the spectral domain since the Green's function can be derived in closed form, and the spatial domain convolution becomes a simple product. The resulting set of equations is solved using the Method of Moments with Galerkin's procedure. The electromagnetic analysis is applied to a range of problems including unloaded dual mode patches, dual mode patches coupled to microstrip feedlines, and complete filter structures. At each step calculated results are compared to measured results and good agreement is found. The calculated results are also compared to results from the circuit analysis program HP EESOF™ and again good agreement is found. A dual mode elliptic filter is built and good performance is obtained.

  4. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES Beta

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
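
    The subcycling idea can be sketched generically for two coupled subdomains, with the finely stepped subdomain seeing an interpolation of the coarsely stepped one across the large step; this illustrates multi-time-step integration only, not the paper's peridynamic discretization.

      import numpy as np

      def subcycle_step(xa, xb, fa, fb, dt, m):
          # Subdomain B (e.g., far from the crack) takes one explicit
          # Euler step of size dt; subdomain A (the region of interest)
          # takes m substeps of size dt/m against a linearly
          # interpolated coupling state from B.
          xb_new = xb + dt * fb(xb, xa)
          h = dt / m
          for j in range(m):
              xb_mid = xb + (j + 0.5) / m * (xb_new - xb)
              xa = xa + h * fa(xa, xb_mid)
          return xa, xb_new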

  5. Matrix-vector multiplication using digital partitioning for more accurate optical computing

    NASA Technical Reports Server (NTRS)

    Gary, C. K.

    1992-01-01

    Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
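
    The partitioning itself is simple to sketch: each base-b digit plane of the input vector gets its own low-precision matrix-vector product (the part an optical processor would perform in analog), and the partial products are recombined exactly in digital electronics. Base and digit count below are illustrative.

      import numpy as np

      def partitioned_matvec(A, x, base=16, n_digits=4):
          # y = sum_k base**k * (A @ d_k), where d_k is the k-th digit
          # plane of x; each A @ d_k needs only low-precision hardware,
          # yet the recombined result is exact.
          y = np.zeros(A.shape[0], dtype=np.int64)
          xs = x.copy()
          for k in range(n_digits):
              digits = xs % base
              xs //= base
              y += (base ** k) * (A @ digits)
          return y

      A = np.random.randint(0, 16, (4, 4))
      x = np.random.randint(0, 16 ** 4, 4)
      assert np.array_equal(partitioned_matvec(A, x), A @ x)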

  6. Molecular Simulation of the Free Energy for the Accurate Determination of Phase Transition Properties of Molecular Solids

    NASA Astrophysics Data System (ADS)

    Sellers, Michael; Lisal, Martin; Brennan, John

    2015-06-01

    Investigating the ability of a molecular model to accurately represent a real material is crucial to model development and use. When the model simulates materials in extreme conditions, one such property worth evaluating is the phase transition point. However, phase transitions are often overlooked or approximated because of the difficulty or inaccuracy of simulating them. Techniques such as super-heating or super-squeezing a material to induce a phase change suffer from inherent timescale limitations that lead to ``over-driving,'' while dual-phase simulations require many long runs and frequently yield an inexact location of phase coexistence. We present a compilation of methods for the determination of solid-solid and solid-liquid phase transition points through the accurate calculation of the chemical potential. The methods are applied to the Smith-Bharadwaj atomistic potential's representation of cyclotrimethylene trinitramine (RDX) to accurately determine its melting point (Tm) and the alpha-to-gamma solid phase transition pressure. We also determine Tm for a coarse-grain model of RDX, and compare its value to experiment and its atomistic counterpart. All methods are employed via the LAMMPS simulator, resulting in 60-70 simulations that total 30-50 ns. Approved for public release. Distribution is unlimited.

  7. Numerical parameter constraints for accurate PIC-DSMC simulation of breakdown from arc initiation to stable arcs

    NASA Astrophysics Data System (ADS)

    Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith

    2015-09-01

    Simulation of breakdown is important for understanding and designing a variety of applications such as mitigating undesirable discharge events. Such simulations need to be accurate through early time arc initiation to late time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields is small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
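
    The constraints quoted above are easy to evaluate once the mean free path is estimated; the sketch below uses a hard-sphere estimate, and the input values in the example are placeholders rather than numbers from the study.

      def pic_dsmc_constraints(n_neutral, sigma, v_electron):
          # dt ~ (mean time between collisions) / 100
          # dx ~ (mean free path) / 25
          mfp = 1.0 / (n_neutral * sigma)      # mean free path [m]
          tau = mfp / v_electron               # mean collision time [s]
          return tau / 100.0, mfp / 25.0

      # Illustrative inputs: density [m^-3], cross section [m^2], speed [m/s]
      dt, dx = pic_dsmc_constraints(2.5e25, 1.0e-19, 1.0e6)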

  8. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model that might include perturbing forces, such as the gravitational effect of multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
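
    Numerical STM propagation can be sketched by augmenting the equations of motion with the variational equations dPhi/dt = A(t) Phi and integrating both with an eighth-order Dormand-Prince method (SciPy's DOP853 below); simple two-body dynamics stand in here for the full hardware and ephemeris models described in the record.

      import numpy as np
      from scipy.integrate import solve_ivp

      MU = 3.986004418e14  # Earth's GM [m^3 s^-2]

      def eom_with_stm(t, y):
          r, v = y[:3], y[3:6]
          phi = y[6:].reshape(6, 6)
          rn = np.linalg.norm(r)
          acc = -MU * r / rn**3
          # Gravity gradient d(acc)/dr for the variational equations
          G = MU * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)
          A = np.zeros((6, 6))
          A[:3, 3:] = np.eye(3)
          A[3:, :3] = G
          return np.concatenate([v, acc, (A @ phi).ravel()])

      y0 = np.concatenate([[7.0e6, 0.0, 0.0], [0.0, 7.5e3, 0.0],
                           np.eye(6).ravel()])
      sol = solve_ivp(eom_with_stm, (0.0, 3600.0), y0, method="DOP853",
                      rtol=1e-10, atol=1e-12)
      stm = sol.y[6:, -1].reshape(6, 6)  # maps initial to final deviations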

  9. Iofetamine I 123 single photon emission computed tomography is accurate in the diagnosis of Alzheimer's disease

    SciTech Connect

    Johnson, K.A.; Holman, B.L.; Rosen, T.J.; Nagel, J.S.; English, R.J.; Growdon, J.H. )

    1990-04-01

    To determine the diagnostic accuracy of iofetamine hydrochloride I 123 (IMP) with single photon emission computed tomography in Alzheimer's disease (AD), we studied 58 patients with AD and 15 age-matched healthy control subjects. We used a qualitative method to assess regional IMP uptake in the entire brain and to rate image data sets as normal or abnormal without knowledge of subjects' clinical classification. The sensitivity and specificity of IMP with single photon emission computed tomography in AD were 88% and 87%, respectively. In 15 patients with mild cognitive deficits (Blessed Dementia Scale score, less than or equal to 10), sensitivity was 80%. With the use of a semiquantitative measure of regional cortical IMP uptake, the parietal lobes were the most functionally impaired in AD and the most strongly associated with the patients' Blessed Dementia Scale scores. These results indicated that IMP with single photon emission computed tomography may be a useful adjunct in the clinical diagnosis of AD in early, mild disease.

  10. Time-accurate unsteady flow simulations supporting the SRM T+68-second pressure spike anomaly investigation (STS-54B)

    NASA Astrophysics Data System (ADS)

    Dougherty, N. S.; Burnette, D. W.; Holt, J. B.; Matienzo, Jose

    1993-07-01

    Time-accurate unsteady flow simulations are being performed supporting the SRM T+68-sec pressure 'spike' anomaly investigation. The anomaly occurred in the RH SRM during the STS-54 flight (STS-54B) but not in the LH SRM (STS-54A), causing a momentary thrust mismatch approaching the allowable limit at that time into the flight. Full-motor internal flow simulations using the USA-2D axisymmetric code are in progress for the nominal propellant burn-back geometry and flow conditions at T+68 sec (Pc = 630 psi, gamma = 1.1381, Tc = 6200 R, perfect gas without aluminum particulate). In a cooperative effort with other investigation team members, CFD-derived pressure loading on the NBR and castable inhibitors was used iteratively to obtain the nominal deformed geometry of each inhibitor, and the deformed (bent back) inhibitor geometry was entered into this model. Deformed geometry was computed using structural finite-element models. A solution for the unsteady flow has been obtained for the nominal flow conditions (existing prior to the occurrence of the anomaly) showing sustained standing pressure oscillations at nominally 14.5 Hz in the motor 1L acoustic mode that flight and static test data confirm to be normally present at this time. Average mass flow discharged from the nozzle was confirmed to be the nominal expected value (9550 lbm/sec). The local inlet boundary condition is being perturbed at the location of the presumed reconstructed anomaly as identified by interior ballistics performance specialist team members. A time variation in local mass flow is used to simulate a sudden increase in burning area due to localized propellant grain cracks. The solution will proceed to develop a pressure rise (proportional to the total mass flow rate change squared). The volume-filling time constant (equivalent to 0.5 Hz) comes into play in shaping the rise rate of the developing pressure 'spike' as it propagates at the speed of sound in both directions to the motor head end and nozzle.

  11. Time-Accurate Unsteady Flow Simulations Supporting the SRM T+68-Second Pressure Spike Anomaly Investigation (STS-54B)

    NASA Technical Reports Server (NTRS)

    Dougherty, N. S.; Burnette, D. W.; Holt, J. B.; Matienzo, Jose

    1993-01-01

    Time-accurate unsteady flow simulations are being performed supporting the SRM T+68-sec pressure 'spike' anomaly investigation. The anomaly occurred in the RH SRM during the STS-54 flight (STS-54B) but not in the LH SRM (STS-54A), causing a momentary thrust mismatch approaching the allowable limit at that time into the flight. Full-motor internal flow simulations using the USA-2D axisymmetric code are in progress for the nominal propellant burn-back geometry and flow conditions at T+68 sec (Pc = 630 psi, gamma = 1.1381, Tc = 6200 R, perfect gas without aluminum particulate). In a cooperative effort with other investigation team members, CFD-derived pressure loading on the NBR and castable inhibitors was used iteratively to obtain the nominal deformed geometry of each inhibitor, and the deformed (bent back) inhibitor geometry was entered into this model. Deformed geometry was computed using structural finite-element models. A solution for the unsteady flow has been obtained for the nominal flow conditions (existing prior to the occurrence of the anomaly) showing sustained standing pressure oscillations at nominally 14.5 Hz in the motor 1L acoustic mode that flight and static test data confirm to be normally present at this time. Average mass flow discharged from the nozzle was confirmed to be the nominal expected value (9550 lbm/sec). The local inlet boundary condition is being perturbed at the location of the presumed reconstructed anomaly as identified by interior ballistics performance specialist team members. A time variation in local mass flow is used to simulate a sudden increase in burning area due to localized propellant grain cracks. The solution will proceed to develop a pressure rise (proportional to the total mass flow rate change squared). The volume-filling time constant (equivalent to 0.5 Hz) comes into play in shaping the rise rate of the developing pressure 'spike' as it propagates at the speed of sound in both directions to the motor head end and nozzle.

  12. Fast methods for computing scene raw signals in millimeter-wave sensor simulations

    NASA Astrophysics Data System (ADS)

    Olson, Richard F.; Reynolds, Terry M.; Satterfield, H. Dewayne

    2010-04-01

    Modern millimeter wave (mmW) radar sensor systems employ wideband transmit waveforms and efficient receiver signal processing methods for resolving accurate measurements of targets embedded in complex backgrounds. Fast Fourier Transform processing of pulse return signal samples is used to resolve range and Doppler locations, and amplitudes of scattered RF energy. Angle glint from RF scattering centers can be measured by performing monopulse arithmetic on signals resolved in both delta and sum antenna channels. Environment simulations for these sensors - including all-digital and hardware-in-the-loop (HWIL) scene generators - require fast, efficient methods for computing radar receiver input signals to support accurate simulations with acceptable execution time and computer cost. Although all-digital and HWIL simulations differ in their representations of the radar sensor (which is itself a simulation in the all-digital case), the signal computations for mmW scene modeling are closely related for both types. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) have developed various fast methods for computing mmW scene raw signals to support both HWIL scene projection and all-digital receiver model input signal synthesis. These methods range from high-level methods of decomposing radar scenes for accurate application of spatially-dependent nonlinear scatterer phase history, to low-level methods of efficiently computing individual scatterer complex signals and single-precision transcendental functions. The efficiencies of these computations are intimately tied to math and memory resources provided by computer architectures. The paper concludes with a summary of radar scene computing performance on available computer architectures, and an estimate of future growth potential for this computational performance.
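
    A toy version of the raw-signal synthesis is sketched below: point scatterers contribute phase histories over fast and slow time, and a 2D FFT resolves them into a range-Doppler map. The waveform model and parameters are simplified placeholders, not AMRDEC's methods.

      import numpy as np

      def range_doppler_map(ranges, velocities, amps, n_pulses=64,
                            n_samples=128, fc=94e9, bw=500e6, prf=10e3):
          c = 3.0e8
          t_fast = np.arange(n_samples) / bw          # deramped sampling
          t_slow = np.arange(n_pulses) / prf
          sig = np.zeros((n_pulses, n_samples), dtype=complex)
          for R, v, a in zip(ranges, velocities, amps):
              f_beat = 2.0 * R * bw / (c * t_fast[-1])  # range -> beat freq
              f_dopp = 2.0 * v * fc / c                 # velocity -> Doppler
              sig += a * np.exp(2j * np.pi * (f_beat * t_fast[None, :]
                                              + f_dopp * t_slow[:, None]))
          # FFT over fast time resolves range; over slow time, Doppler
          return np.fft.fftshift(np.abs(np.fft.fft2(sig)), axes=0)

      rd = range_doppler_map([150.0, 300.0], [5.0, -12.0], [1.0, 0.5])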

  13. An accurate modeling, simulation, and analysis tool for predicting and estimating Raman LIDAR system performance

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Russo, Leonard P.; Barrett, John L.; Odhner, Jefferson E.; Egbert, Paul I.

    2007-09-01

    BAE Systems presents the results of a program to model the performance of Raman LIDAR systems for the remote detection of atmospheric gases, air polluting hydrocarbons, chemical and biological weapons, and other molecular species of interest. Our model, which integrates remote Raman spectroscopy, 2D and 3D LADAR, and USAF atmospheric propagation codes, permits accurate determination of the performance of a Raman LIDAR system. The very high predictive performance accuracy of our model is due to the very accurate calculation of the differential scattering cross section for the species of interest at user selected wavelengths. We show excellent correlation of our calculated cross section data, used in our model, with experimental data obtained from both laboratory measurements and the published literature. In addition, the use of standard USAF atmospheric models provides very accurate determination of the atmospheric extinction at both the excitation and Raman shifted wavelengths.
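
    The per-range-bin bookkeeping in such a performance model follows the standard single-scatter Raman lidar equation; the sketch below evaluates it with all inputs as placeholder assumptions (the actual BAE Systems model is far more detailed).

      def raman_return(n_photons_tx, n_density, dsigma_domega, dR,
                       area, R, T_out, T_ret, eff):
          # Expected Raman photon counts from a bin of width dR at range R:
          # N = N_tx * n * (dsigma/dOmega) * dR * (A / R^2)
          #     * T_out * T_ret * eff
          # T_out / T_ret: one-way transmittances at the laser and
          # Raman-shifted wavelengths (e.g., from a propagation code).
          return (n_photons_tx * n_density * dsigma_domega * dR
                  * area / R**2 * T_out * T_ret * eff)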

  14. Computer simulation and the features of novel empirical data.

    PubMed

    Lusk, Greg

    2016-04-01

    In an attempt to determine the epistemic status of computer simulation results, philosophers of science have recently explored the similarities and differences between computer simulations and experiments. One question that arises is whether and, if so, when, simulation results constitute novel empirical data. It is often supposed that computer simulation results could never be empirical or novel because simulations never interact with their targets, and cannot go beyond their programming. This paper argues against this position by examining whether, and under what conditions, the features of empiricality and novelty could be displayed by computer simulation data. I show that, to the extent that certain familiar measurement results have these features, so can some computer simulation results. PMID:27083094

  15. iTagPlot: an accurate computation and interactive drawing tool for tag density plot

    PubMed Central

    Kim, Sung-Hwan; Ezenwoye, Onyeka; Cho, Hwan-Gue; Robertson, Keith D.; Choi, Jeong-Hyeon

    2015-01-01

    Motivation: Tag density plots are very important to intuitively reveal biological phenomena from capture-based sequencing data by visualizing the normalized read depth in a region. Results: We have developed iTagPlot to compute tag density across functional features in parallel using multicores and a grid engine and to interactively explore it in a graphical user interface. It allows us to stratify features by defining groups based on biological function and measurement, summary statistics and unsupervised clustering. Availability and implementation: http://sourceforge.net/projects/itagplot/. Contact: jechoi@gru.edu and jeochoi@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25792550
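
    A minimal, single-region version of the underlying computation -- per-base read depth across a feature, normalized into a density -- is sketched below; iTagPlot evaluates this across many features in parallel and adds clustering and plotting, none of which is attempted here.

      import numpy as np

      def tag_density(read_starts, read_len, region_start, region_end):
          # Accumulate per-base coverage over [region_start, region_end),
          # then normalize so the profile sums to 1.
          width = region_end - region_start
          cov = np.zeros(width)
          for s in read_starts:
              lo = max(s, region_start)
              hi = min(s + read_len, region_end)
              if hi > lo:
                  cov[lo - region_start:hi - region_start] += 1
          total = cov.sum()
          return cov / total if total > 0 else cov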

  16. Computer simulation of FCC riser reactors.

    SciTech Connect

    Chang, S. L.; Golchert, B.; Lottes, S. A.; Petrick, M.; Zhou, C. Q.

    1999-04-20

    A three-dimensional computational fluid dynamics (CFD) code, ICRKFLO, was developed to simulate the multiphase reacting flow system in a fluid catalytic cracking (FCC) riser reactor. The code solves flow properties based on fundamental conservation laws of mass, momentum, and energy for gas, liquid, and solid phases. Useful phenomenological models were developed to represent the controlling FCC processes, including droplet dispersion and evaporation, particle-solid interactions, and interfacial heat transfer between gas, droplets, and particles. Techniques were also developed to facilitate numerical calculations. These techniques include a hybrid flow-kinetic treatment to include detailed kinetic calculations, a time-integral approach to overcome numerical stiffness problems of chemical reactions, and a sectional coupling and blocked-cell technique for handling complex geometry. The copyrighted ICRKFLO software has been validated with experimental data from pilot- and commercial-scale FCC units. The code can be used to evaluate the impacts of design and operating conditions on the production of gasoline and other oil products.

  17. Computational simulation of liquid rocket injector anomalies

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Singhal, A. K.; Tam, L. T.; Davidian, K.

    1986-01-01

    A computer model has been developed to analyze the three-dimensional two-phase reactive flows in liquid fueled rocket combustors. The model is designed to study the influence of liquid propellant injection nonuniformities on the flow pattern, combustion and heat transfer within the combustor. The Eulerian-Lagrangian approach for simulating polydisperse spray flow, evaporation and combustion has been used. Full coupling between the phases is accounted for. A nonorthogonal, body fitted coordinate system along with a conservative control volume formulation is employed. The physical models built into the model include a kappa-epsilon turbulence model, a two-step chemical reaction, and the six-flux radiation model. Semiempirical models are used to describe all interphase coupling terms as well as chemical reaction rates. The purpose of this study was to demonstrate an analytical capability to predict the effects of reactant injection nonuniformities (injection anomalies) on combustion and heat transfer within the rocket combustion chamber. The results show promising application of the model to comprehensive modeling of liquid propellant rocket engines.

  18. Quantitative and Qualitative Simulation in Computer Based Training.

    ERIC Educational Resources Information Center

    Stevens, Albert; Roberts, Burce

    1983-01-01

    Computer-based systems combining quantitative simulation with qualitative tutorial techniques provide learners with sophisticated individualized training. The teaching capabilities and operating procedures of Steamer, a simulated steam plant, are described. (Author/MBR)

  19. Simulating complex intracellular processes using object-oriented computational modelling.

    PubMed

    Johnson, Colin G; Goldman, Jacki P; Gullick, William J

    2004-11-01

    The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation. PMID:15302205

  20. Gravitational Focusing and the Computation of an Accurate Moon/Mars Cratering Ratio

    NASA Technical Reports Server (NTRS)

    Matney, Mark J.

    2006-01-01

    There have been a number of attempts to use asteroid populations to simultaneously compute cratering rates on the Moon and bodies elsewhere in the Solar System to establish the cratering ratio (e.g., [1],[2]). These works use current asteroid orbit population databases combined with collision rate calculations based on orbit intersections alone. As recent work on meteoroid fluxes [3] has highlighted, however, collision rates alone are insufficient to describe the cratering rates on planetary surfaces - especially planets with stronger gravitational fields than the Moon, such as Earth and Mars. Such calculations also need to include the effects of gravitational focusing, whereby the spatial density of the slower-moving impactors is preferentially "focused" by the gravity of the body. This leads overall to higher fluxes and cratering rates, and is highly dependent on the detailed velocity distributions of the impactors. In this paper, a comprehensive gravitational focusing algorithm originally developed to describe fluxes of interplanetary meteoroids [3] is applied to the collision rates and cratering rates of populations of asteroids and long-period comets to compute better cratering ratios for terrestrial bodies in the Solar System. These results are compared to the calculations of other researchers.
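
    The core two-body enhancement that gravitational focusing adds on top of raw collision rates is compact: flux is boosted by 1 + (v_esc/v_inf)^2, so slow impactors are focused most strongly, and the factor must be averaged over the impactor velocity distribution. The sketch below uses a toy velocity distribution together with published escape speeds; it is an illustration, not the paper's full algorithm.

      import numpy as np

      def focusing_factor(v_inf, v_esc):
          # Two-body flux enhancement for hyperbolic excess speed v_inf
          # at a body with escape speed v_esc.
          return 1.0 + (v_esc / v_inf) ** 2

      def mean_focused_flux(v_inf_samples, v_esc, weights=None):
          # Average the enhancement over an impactor velocity distribution.
          f = focusing_factor(np.asarray(v_inf_samples), v_esc)
          return np.average(f, weights=weights)

      v = np.random.uniform(5e3, 25e3, 10000)  # toy impactor speeds [m/s]
      # Moon (2.38 km/s) vs Mars (5.03 km/s) escape speeds:
      ratio = mean_focused_flux(v, 5.03e3) / mean_focused_flux(v, 2.38e3)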

  1. Accurate simulation of the electron cloud in the Fermilab Main Injector with VORPAL

    SciTech Connect

    Lebrun, Paul L.G.; Spentzouris, Panagiotis; Cary, John R.; Stoltz, Peter; Veitzer, Seth A.; /Tech-X, Boulder

    2010-05-01

    Precision simulations of the electron cloud at the Fermilab Main Injector have been studied using the plasma simulation code VORPAL. Fully 3D, self-consistent solutions that include E.M. field maps generated by the cloud and the proton bunches have been obtained, as well as detailed distributions of the electrons' 6D phase space. We plan to include such maps in the ongoing simulation of the space charge effects in the Main Injector. Simulations of the response of beam position monitors, retarding field analyzers and microwave transmission experiments are ongoing.

  2. Computer Simulation Methods for Defect Configurations and Nanoscale Structures

    SciTech Connect

    Gao, Fei

    2010-01-01

    This chapter will describe general computer simulation methods, including ab initio calculations, molecular dynamics and the kinetic Monte-Carlo method, and their applications to the calculations of defect configurations in various materials (metals, ceramics and oxides) and the simulations of nanoscale structures due to ion-solid interactions. The multiscale theory, modeling, and simulation techniques (both time scale and space scale) will be emphasized, and comparisons between computer simulation results and experimental observations will be made.

  3. Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.

    1995-01-01

    The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical time subiterations are subject to time-step limitations in practice that are removed by pseudo-time subiterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.

  4. A model for the accurate computation of the lateral scattering of protons in water.

    PubMed

    Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-02-21

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time. PMID:26808380
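
    The core-plus-tail structure of such a pencil beam model can be sketched as below; the Gaussian core is only a stand-in for the full Moliere distribution the authors use, and the exponential tail is one plausible two-parameter choice, not their fitted nuclear tail function.

      import numpy as np

      def lateral_profile(r, sigma_em, w_tail, b_tail):
          # Electromagnetic core plus nuclear tail, each normalized to
          # unit integral over the transverse plane, mixed by w_tail.
          core = np.exp(-r**2 / (2.0 * sigma_em**2)) / (2.0 * np.pi * sigma_em**2)
          tail = b_tail**2 * np.exp(-b_tail * r) / (2.0 * np.pi)
          return (1.0 - w_tail) * core + w_tail * tail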

  5. A model for the accurate computation of the lateral scattering of protons in water

    NASA Astrophysics Data System (ADS)

    Bellinzona, E. V.; Ciocca, M.; Embriaco, A.; Ferrari, A.; Fontana, A.; Mairani, A.; Parodi, K.; Rotondi, A.; Sala, P.; Tessonnier, T.

    2016-02-01

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.

  6. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    NASA Technical Reports Server (NTRS)

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  7. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    PubMed Central

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  8. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    PubMed

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  9. Computational Chemical Imaging for Cardiovascular Pathology: Chemical Microscopic Imaging Accurately Determines Cardiac Transplant Rejection

    PubMed Central

    Tiwari, Saumya; Reddy, Vijaya B.; Bhargava, Rohit; Raman, Jaishankar

    2015-01-01

    Rejection is a common problem after cardiac transplants leading to significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires a lot of expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients’ biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real time in situ imaging systems, which can assist interventionists and surgeons actively during procedures. PMID:25932912

  10. Ring polymer molecular dynamics fast computation of rate coefficients on accurate potential energy surfaces in local configuration space: Application to the abstraction of hydrogen from methane

    NASA Astrophysics Data System (ADS)

    Meng, Qingyong; Chen, Jun; Zhang, Dong H.

    2016-04-01

    To compute rate coefficients of the H/D + CH4 → H2/HD + CH3 reactions quickly and accurately, we propose a segmented strategy for fitting a suitable potential energy surface (PES), on which ring-polymer molecular dynamics (RPMD) simulations are performed. On the basis of the recently developed permutation invariant polynomial neural-network approach [J. Li et al., J. Chem. Phys. 142, 204302 (2015)], PESs in local configuration spaces are constructed. In this strategy, the global PES is divided into three parts, including asymptotic, intermediate, and interaction parts, along the reaction coordinate. Since fewer fitting parameters are involved in the local PESs, the computational efficiency of the PES routine is enhanced by a factor of ~20 compared with the global PES. On the interaction part, the RPMD computational time for the transmission coefficient can be further reduced by cutting off the redundant part of the child trajectories. For H + CH4, good agreement among the present RPMD rates and those from previous simulations as well as experimental results is found. For D + CH4, on the other hand, qualitative agreement between present RPMD and experimental results is predicted.

  11. Development of a Space Radiation Monte Carlo Computer Simulation

    NASA Technical Reports Server (NTRS)

    Pinsky, Lawrence S.

    1997-01-01

    The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.

  12. Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation.

    PubMed

    Mangado, Nerea; Ceresa, Mario; Duchateau, Nicolas; Kjer, Hans Martin; Vera, Sergio; Dejea Velardo, Hector; Mistrik, Pavel; Paulsen, Rasmus R; Fagertun, Jens; Noailly, Jérôme; Piella, Gemma; González Ballester, Miguel Ángel

    2016-08-01

    Recent developments in computational modeling of cochlear implantation show promise for studying the performance of the implant in silico before surgery. However, creating a complete computational model of the patient's anatomy while including an external device geometry remains challenging. To address such a challenge, we propose an automatic framework for the generation of patient-specific meshes for finite element modeling of the implanted cochlea. First, a statistical shape model is constructed from high-resolution anatomical μCT images. Then, by fitting the statistical model to a patient's CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns constitutive parameters to all components of the finite element model. This model can then be used to study in silico the effects of the electrical stimulation of the cochlear implant. Results are shown for a total of 25 patient models. In all cases, a final mesh suitable for finite element simulations was obtained, in an average time of 94 s. The framework has proven to be fast and robust, and is promising for a detailed prognosis of the cochlear implantation surgery. PMID:26715210

  13. Accurate computation of the radiation from simple antennas using the finite-difference time-domain method

    NASA Astrophysics Data System (ADS)

    Maloney, James G.; Smith, Glenn S.; Scott, Waymond R., Jr.

    1990-07-01

    Two antennas are considered, a cylindrical monopole and a conical monopole. Both are driven through an image plane from a coaxial transmission line. Each of these antennas corresponds to a well-posed theoretical electromagnetic boundary value problem and a realizable experimental model. These antennas are analyzed by a straightforward application of the time-domain finite-difference method. The computed results for these antennas are shown to be in excellent agreement with accurate experimental measurements for both the time domain and the frequency domain. The graphical displays presented for the transient near-zone and far-zone radiation from these antennas provide physical insight into the radiation process.
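
    To make the update structure concrete, a minimal one-dimensional FDTD sketch in Python follows (free space, normalized "magic time step" units). It illustrates only the leapfrog E/H update at the core of the method; the grid size, step count, and Gaussian source are illustrative choices, not the monopole/coax setup analyzed above.

        import numpy as np

        # 1D free-space FDTD in normalized units (Courant number 1).
        nz, nt = 200, 500
        ez = np.zeros(nz)      # electric field samples
        hy = np.zeros(nz - 1)  # magnetic field, staggered half a cell
        for n in range(nt):
            hy += ez[1:] - ez[:-1]        # update H from the curl of E
            ez[1:-1] += hy[1:] - hy[:-1]  # update E from the curl of H
            ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source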

  14. Advanced material testing in support of accurate sheet metal forming simulations

    NASA Astrophysics Data System (ADS)

    Kuwabara, Toshihiko

    2013-05-01

    This presentation is a review of experimental methods for accurately measuring and modeling the anisotropic plastic deformation behavior of metal sheets under a variety of loading paths: biaxial compression test, hydraulic bulge test, biaxial tension test using a cruciform specimen, multiaxial tube expansion test using a closed-loop electrohydraulic testing machine for the measurement of forming limit strains and stresses, combined tension-shear test, and in-plane stress reversal test. Observed material responses are compared with predictions using phenomenological plasticity models to highlight the importance of accurate material testing. Special attention is paid to the plastic deformation behavior of sheet metals commonly used in industry, and to verifying the validity of constitutive models based on anisotropic yield functions at a large plastic strain range. The effects of using appropriate material models on the improvement of predictive accuracy for forming defects, such as springback and fracture, are also presented.

  15. Simulation of reliability in multiserver computer networks

    NASA Astrophysics Data System (ADS)

    Minkevičius, Saulius

    2012-11-01

    This paper is motivated by the reliability performance of multiserver computer networks. A probability limit theorem on the extreme queue length in open multiserver queueing networks in heavy traffic is derived and applied to a reliability model for multiserver computer networks, in which the time to failure of the network is related to the system parameters.

  16. Computational simulations of vorticity enhanced diffusion

    NASA Astrophysics Data System (ADS)

    Vold, Erik L.

    1999-11-01

    Computer simulations are used to investigate a phenomenon of vorticity enhanced diffusion (VED), a net transport and mixing of a passive scalar across a prescribed vortex flow field driven by a background gradient in the scalar quantity. The central issue under study here is the increase in scalar flux down the gradient and across the vortex field. The numerical scheme uses cylindrical coordinates centered on the vortex flow, which allows an exact advective solution and 1D or 2D diffusion using simple numerical methods. In the results, the ratio of transport across a localized vortex region in the presence of the vortex flow over that expected for diffusion alone is evaluated as a measure of VED. This ratio is seen to increase dramatically while the absolute flux across the vortex decreases slowly as the diffusion coefficient is decreased. Similar results are found and compared for varying diffusion coefficient, D, or vortex rotation time, τv, for a constant background gradient in the transported scalar vs an interface in the transported quantity, and for vortex flow fields constant in time vs flow which evolves in time from an initial state and with a Schmidt number of order unity. A simple analysis shows that for a small diffusion coefficient, the flux ratio measure of VED scales as the vortex radius over the thickness for mass diffusion in a viscous shear layer within the vortex characterized by (Dτv)^(1/2). The phenomenon is linear as investigated here and suggests that a significant enhancement of mixing in fluids may be a relatively simple linear process. Discussion touches on how this vorticity enhanced diffusion may be related to mixing in nonlinear turbulent flows.
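
    Restating the quoted scaling in equation form (notation assumed here: R is the flux ratio, r_v the vortex radius, and δ the diffusive shear-layer thickness):

        \[
        R \;\sim\; \frac{r_v}{\delta}, \qquad \delta \;\sim\; \sqrt{D\,\tau_v}
        \quad\Longrightarrow\quad R \;\sim\; \frac{r_v}{\sqrt{D\,\tau_v}} ,
        \]

    which is consistent with the observation above that the flux ratio grows as D decreases even while the absolute flux slowly falls.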

  17. Computer simulator for a mobile telephone system

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Ziegler, C.

    1983-01-01

    A software simulator was developed to help NASA in the design of the LMSS. The simulator will be used to study the characteristics and implementation requirements of the LMSS configuration, with specifications as outlined by NASA.

  18. Simulation of the stress computation in shells

    NASA Technical Reports Server (NTRS)

    Salama, M.; Utku, S.

    1978-01-01

    A self-teaching computer program is described, whereby the stresses in thin shells can be computed with good accuracy using the best fit approach. The program is designed for use in interactive game mode to allow the structural engineer to learn about (1) the major sources of difficulties and associated errors in the computation of stresses in thin shells, (2) possible ways to reduce the errors, and (3) trade-off between computational cost and accuracy. Included are derivation of the computational approach, program description, and several examples illustrating the program usage.

  19. Towards an accurate and computationally-efficient modelling of Fe(II)-based spin crossover materials.

    PubMed

    Vela, Sergi; Fumanal, Maria; Ribas-Arino, Jordi; Robert, Vincent

    2015-07-01

    The DFT + U methodology is regarded as one of the most promising strategies to treat the solid state of molecular materials, as it may provide good energetic accuracy at a moderate computational cost. However, a careful parametrization of the U-term is mandatory since the results may be dramatically affected by the selected value. Herein, we benchmarked the Hubbard-like U-term for seven Fe(II)N6-based pseudo-octahedral spin crossover (SCO) compounds, using as a reference an estimation of the electronic enthalpy difference (ΔHelec) extracted from experimental data (T1/2, ΔS and ΔH). The parametrized U-value obtained for each of those seven compounds ranges from 2.37 eV to 2.97 eV, with an average value of U = 2.65 eV. Interestingly, we have found that this average value can be taken as a good starting point since it leads to an unprecedented mean absolute error (MAE) of only 4.3 kJ mol⁻¹ in the evaluation of ΔHelec for the studied compounds. Moreover, by comparing our results on the solid state and the gas phase of the materials, we quantify the influence of the intermolecular interactions on the relative stability of the HS and LS states, with an average effect of ca. 5 kJ mol⁻¹, whose sign cannot be generalized. Overall, the findings reported in this manuscript pave the way for future studies devoted to understanding the crystalline phase of SCO compounds, or the adsorption of individual molecules on organic or metallic surfaces, in which the rational incorporation of the U-term within DFT + U yields the required energetic accuracy that is dramatically missing when using bare-DFT functionals. PMID:26040609
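
    For context, a standard spin-crossover relation that plausibly underlies the extraction of the reference enthalpy from experiment (stated as an assumption, not a detail taken from the abstract): at the transition temperature T1/2 the high-spin and low-spin phases have equal Gibbs energies, so

        \[
        \Delta G(T_{1/2}) = 0 \quad\Longrightarrow\quad \Delta H = T_{1/2}\,\Delta S ,
        \]

    from which an electronic enthalpy difference can be estimated once vibrational and other corrections are separated out.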

  20. Accurate micro-computed tomography imaging of pore spaces in collagen-based scaffold.

    PubMed

    Zidek, Jan; Vojtova, Lucy; Abdel-Mohsen, A M; Chmelik, Jiri; Zikmund, Tomas; Brtnikova, Jana; Jakubicek, Roman; Zubal, Lukas; Jan, Jiri; Kaiser, Jozef

    2016-06-01

    In this work we have used X-ray micro-computed tomography (μCT) as a method to observe the morphology of 3D porous pure collagen and collagen-composite scaffolds useful in tissue engineering. Two aspects of visualization were taken into consideration: improvement of the scan and investigation of its sensitivity to the scan parameters. Due to the low material density, some parts of collagen scaffolds are invisible in a μCT scan. Therefore, here we present different contrast agents, which increase the contrast of the scanned biopolymeric sample for μCT visualization. The increase of contrast of collagenous scaffolds was performed with ceramic hydroxyapatite microparticles (HAp), silver ions (Ag⁺) and silver nanoparticles (Ag-NPs). Since a relatively small change in imaging parameters (e.g. in 3D volume rendering, threshold value and μCT acquisition conditions) leads to a completely different visualized pattern, we have optimized these parameters to obtain the most realistic picture for visual and qualitative evaluation of the biopolymeric scaffold. Moreover, scaffold images were stereoscopically visualized in order to better see the 3D biopolymer composite scaffold morphology. However, the optimized visualization has some discontinuities in the zoomed view, which can be problematic for further analysis of interconnected pores by commonly used numerical methods. Therefore, we applied a locally adaptive method to solve the discontinuity issue. The combination of contrast agents and imaging techniques presented in this paper helps us to better understand the structure and morphology of the biopolymeric scaffold, which is crucial in the design of new biomaterials useful in tissue engineering. PMID:27153826

  1. Cognitive Effects from Process Learning with Computer-Based Simulations.

    ERIC Educational Resources Information Center

    Breuer, Klaus; Kummer, Ruediger

    1990-01-01

    Discusses content learning versus process learning, describes process learning with computer-based simulations, and highlights an empirical study on the effects of process learning with problem-oriented, computer-managed simulations in technical vocational education classes in West Germany. Process learning within a model of the cognitive system…

  2. Computer Simulation Models of Economic Systems in Higher Education.

    ERIC Educational Resources Information Center

    Smith, Lester Sanford

    The increasing complexity of educational operations makes analytical tools, such as computer simulation models, especially desirable for educational administrators. This MA thesis examined the feasibility of developing computer simulation models of economic systems in higher education to assist decision makers in allocating resources. The report…

  3. Explore Effective Use of Computer Simulations for Physics Education

    ERIC Educational Resources Information Center

    Lee, Yu-Fen; Guo, Yuying

    2008-01-01

    The dual purpose of this article is to provide a synthesis of the findings related to the use of computer simulations in physics education and to present implications for teachers and researchers in science education. We try to establish a conceptual framework for the utilization of computer simulations as a tool for learning and instruction in…

  4. The Link between Computer Simulations and Social Studies Learning: Debriefing.

    ERIC Educational Resources Information Center

    Chiodo, John J.; Flaim, Mary L.

    1993-01-01

    Asserts that debriefing is the missing link between learning achievement and simulations in social studies. Maintains that teachers who employ computer-assisted instruction must utilize effective debriefing activities. Provides a four-step debriefing model using the computer simulation, Oregon Trail. (CFR)

  5. The Role of Computer Simulations in Engineering Education.

    ERIC Educational Resources Information Center

    Smith, P. R.; Pollard, D.

    1986-01-01

    Discusses role of computer simulation in complementing and extending conventional components of undergraduate engineering education process in United Kingdom universities and polytechnics. Aspects of computer-based learning are reviewed (laboratory simulation, lecture and tutorial support, inservice teacher education) with reference to programs in…

  6. How Effective Is Instructional Support for Learning with Computer Simulations?

    ERIC Educational Resources Information Center

    Eckhardt, Marc; Urhahne, Detlef; Conrad, Olaf; Harms, Ute

    2013-01-01

    The study examined the effects of two different instructional interventions as support for scientific discovery learning using computer simulations. In two well-known categories of difficulty, data interpretation and self-regulation, instructional interventions for learning with computer simulations on the topic "ecosystem water" were developed…

  7. Evaluation of Computer Simulations for Teaching Apparel Merchandising Concepts.

    ERIC Educational Resources Information Center

    Jolly, Laura D.; Sisler, Grovalynn

    1988-01-01

    The study developed and evaluated computer simulations for teaching apparel merchandising concepts. Evaluation results indicated that teaching method (computer simulation versus case study) does not significantly affect cognitive learning. Student attitudes varied, however, according to topic (profitable merchandising analysis versus retailing…

  8. New Pedagogies on Teaching Science with Computer Simulations

    ERIC Educational Resources Information Center

    Khan, Samia

    2011-01-01

    Teaching science with computer simulations is a complex undertaking. This case study examines how an experienced science teacher taught chemistry using computer simulations and the impact of his teaching on his students. Classroom observations over 3 semesters, teacher interviews, and student surveys were collected. The data was analyzed for (1)…

  9. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

    With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRFs) keeps increasing, and so does the demand for their maintenance. According to the guiding principle of "time analog spatial distance" in simulated testing of pulsed range finders, the key to distance simulation precision lies in the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method is proposed to improve the accuracy of the circuit delay without increasing the count frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the novel circuit delay method proposed in this paper was verified by measurement with a high-sampling-rate oscilloscope. The results show that the accuracy of the distance simulated by the circuit delay is improved from ±0.75 m to ±0.15 m. The accuracy of the simulated distance is thus greatly improved in simulated testing of high-precision pulsed range finders.
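
    The "time analog spatial distance" idea can be made explicit with the standard pulsed round-trip relation (not taken from the abstract itself): the simulated range d and the delay τ satisfy

        \[
        d = \frac{c\,\tau}{2} \quad\Longrightarrow\quad
        \Delta\tau = \frac{2\,\Delta d}{c}
                   = \frac{2 \times 0.15\ \mathrm{m}}{3 \times 10^{8}\ \mathrm{m/s}}
                   = 1\ \mathrm{ns} ,
        \]

    so the reported improvement from ±0.75 m to ±0.15 m corresponds to tightening the delay accuracy from about ±5 ns to about ±1 ns.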

  10. Accurate guidance for percutaneous access to a specific target in soft tissues: preclinical study of computer-assisted pericardiocentesis.

    PubMed

    Chavanon, O; Barbe, C; Troccaz, J; Carrat, L; Ribuot, C; Noirclerc, M; Maitrasse, B; Blin, D

    1999-06-01

    In the field of percutaneous access to soft tissues, our project was to improve classical pericardiocentesis by performing accurate guidance to a selected target, according to a model of the pericardial effusion acquired through three-dimensional (3D) data recording. Required hardware is an echocardiographic device and a needle, both linked to a 3D localizer, and a computer. After acquiring echographic data, a modeling procedure allows definition of the optimal puncture strategy, taking into consideration the mobility of the heart, by determining a stable region, whatever the period of the cardiac cycle. A passive guidance system is then used to reach the planned target accurately, generally a site in the middle of the stable region. After validation on a dynamic phantom and a feasibility study in dogs, an accuracy and reliability analysis protocol was carried out on pigs with experimental pericardial effusion. Ten consecutive successful punctures using various trajectories were performed on eight pigs. Nonbloody liquid was collected from pericardial effusions in the stable region (5 to 9 mm wide) within 10 to 15 minutes from echographic acquisition to drainage. Accuracy of at least 2.5 mm was demonstrated. This study demonstrates the feasibility of computer-assisted pericardiocentesis. Beyond the simple improvement of the current technique, this method could be a new way to reach the heart or a new tool for percutaneous access and image-guided puncture of soft tissues. Further investigation will be necessary before routine human application. PMID:10414543

  11. Computers for real time flight simulation: A market survey

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.; Karplus, W. J.

    1977-01-01

    An extensive computer market survey was made to determine those available systems suitable for current and future flight simulation studies at Ames Research Center. The primary requirement is for the computation of relatively high frequency content (5 Hz) math models representing powered lift flight vehicles. The Rotor Systems Research Aircraft (RSRA) was used as a benchmark vehicle for computation comparison studies. The general nature of helicopter simulations and a description of the benchmark model are presented, and some of the sources of simulation difficulties are examined. A description of various applicable computer architectures is presented, along with detailed discussions of leading candidate systems and comparisons between them.

  12. Enabling R&D for accurate simulation of non-ideal explosives.

    SciTech Connect

    Aidun, John Bahram; Thompson, Aidan Patrick; Schmitt, Robert Gerard

    2010-09-01

    We implemented two numerical simulation capabilities essential to reliably predicting the effect of non-ideal explosives (NXs). To begin treating the multiple, competing, multi-step reaction paths and slower kinetics of NXs, Sandia's CTH shock physics code was extended to include the TIGER thermochemical equilibrium solver as an in-line routine. To facilitate efficient exploration of reaction pathways that need to be identified for the CTH simulations, we implemented in Sandia's LAMMPS molecular dynamics code the MSST method, which is a reactive molecular dynamics technique for simulating steady shock wave response. Our preliminary demonstrations of these two capabilities serve several purposes: (i) they demonstrate proof-of-principle for our approach; (ii) they provide illustration of the applicability of the new functionality; and (iii) they begin to characterize the use of the new functionality and identify where improvements will be needed for the ultimate capability to meet national security needs. Next steps are discussed.

  13. A computationally efficient and accurate numerical representation of thermodynamic properties of steam and water for computations of non-equilibrium condensing steam flow in steam turbines

    NASA Astrophysics Data System (ADS)

    Hrubý, Jan

    2012-04-01

    Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem, both concerning the physical concepts and the required computational power. Available accurate formulations of steam properties, IAPWS-95 and IAPWS-IF97, require much computation time. For this reason, modelers often accept the unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of mass, energy, and momentum conservation for both phases.
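
    A minimal sketch of the tabulation idea in Python (an assumption-laden illustration, not the authors' scheme): precompute a property table on a (density, internal energy) grid once, then evaluate by interpolation with no iterations or variable transformations inside the flow solver. The ideal-gas closure below is a toy stand-in for the IAPWS-95 fits, and the grid bounds are arbitrary.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        def make_property_table(prop, rho_grid, u_grid):
            # Fill the table once from the (expensive) property function.
            table = np.array([[prop(r, u) for u in u_grid] for r in rho_grid])
            return RegularGridInterpolator((rho_grid, u_grid), table)

        # Toy stand-in equation of state: ideal gas with gamma = 1.3.
        p_ideal = lambda rho, u: (1.3 - 1.0) * rho * u
        p_lookup = make_property_table(p_ideal,
                                       np.linspace(0.1, 10.0, 50),
                                       np.linspace(1e5, 3e6, 50))
        print(p_lookup((1.0, 2.5e6)))  # fast, iteration-free evaluation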

  14. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
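
    A minimal discrete event sketch of the two-part framework described above, using the Python SimPy library (the request types, rates, and four-server capacity are illustrative assumptions, not values from the paper):

        import random
        import simpy

        def request(env, name, servers, service_mean):
            arrive = env.now
            with servers.request() as req:
                yield req  # wait in queue for a free server slot
                yield env.timeout(random.expovariate(1.0 / service_mean))
            print(f"{name} finished after {env.now - arrive:.2f} time units")

        def demand(env, servers):
            for i in range(20):
                yield env.timeout(random.expovariate(1.0))  # stochastic arrivals
                env.process(request(env, f"req{i}", servers, service_mean=2.0))

        env = simpy.Environment()
        servers = simpy.Resource(env, capacity=4)  # resource constraint
        env.process(demand(env, servers))
        env.run()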

  15. GPU-accelerated micromagnetic simulations using cloud computing

    NASA Astrophysics Data System (ADS)

    Jermain, C. L.; Rowlands, G. E.; Buhrman, R. A.; Ralph, D. C.

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.

  16. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    ERIC Educational Resources Information Center

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  17. Computational Electromagnetics (CEM) Laboratory: Simulation Planning Guide

    NASA Technical Reports Server (NTRS)

    Khayat, Michael A.

    2011-01-01

    The simulation process, milestones and inputs are unknowns to first-time users of the CEM Laboratory. The Simulation Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their engineering personnel in simulation planning and execution. Material covered includes a roadmap of the simulation process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, facility interfaces, and inputs necessary to define scope, cost, and schedule are included as an appendix to the guide.

  18. Computer simulated plant design for waste minimization/pollution prevention

    SciTech Connect

    Bumble, S.

    2000-07-01

    The book discusses several paths to pollution prevention and waste minimization by using computer simulation programs. It explains new computer technologies used in the field of pollution prevention and waste management; provides information pertaining to overcoming technical, economic, and environmental barriers to waste reduction; gives case studies from industry; and covers computer-aided flow sheet design and analysis for nuclear fuel reprocessing.

  19. A Mass Spectrometer Simulator in Your Computer

    ERIC Educational Resources Information Center

    Gagnon, Michel

    2012-01-01

    Introduced to study components of ionized gas, the mass spectrometer has evolved into a highly accurate device now used in many undergraduate and research laboratories. Unfortunately, despite their importance in the formation of future scientists, mass spectrometers remain beyond the financial reach of many high schools and colleges. As a result,…

  20. Accurate Ab Initio Quantum Mechanics Simulations of Bi2Se3 and Bi2Te3 Topological Insulator Surfaces.

    PubMed

    Crowley, Jason M; Tahir-Kheli, Jamil; Goddard, William A

    2015-10-01

    It has been established experimentally that Bi2Te3 and Bi2Se3 are topological insulators, with zero band gap surface states exhibiting linear dispersion at the Fermi energy. Standard density functional theory (DFT) methods such as PBE lead to large errors in the band gaps for such strongly correlated systems, while more accurate GW methods are too expensive computationally to apply to the thin films studied experimentally. We show here that the hybrid B3PW91 density functional yields GW-quality results for these systems at a computational cost comparable to PBE. The efficiency of our approach stems from the use of Gaussian basis functions instead of plane waves or augmented plane waves. This remarkable success without empirical corrections of any kind opens the door to computational studies of real chemistry involving the topological surface state, and our approach is expected to be applicable to other semiconductors with strong spin-orbit coupling. PMID:26722872

  1. Creating Science Simulations through Computational Thinking Patterns

    ERIC Educational Resources Information Center

    Basawapatna, Ashok Ram

    2012-01-01

    Computational thinking aims to outline fundamental skills from computer science that everyone should learn. As currently defined, with help from the National Science Foundation (NSF), these skills include problem formulation, logically organizing data, automating solutions through algorithmic thinking, and representing data through abstraction.…

  2. Computer Simulations as an Integral Part of Intermediate Macroeconomics.

    ERIC Educational Resources Information Center

    Millerd, Frank W.; Robertson, Alastair R.

    1987-01-01

    Describes the development of two interactive computer simulations which were fully integrated with other course materials. The simulations illustrate the effects of various real and monetary "demand shocks" on aggregate income, interest rates, and components of spending and economic output. Includes an evaluation of the simulations' effects on…

  3. Full vectorial simulation of multilayer anisotropic waveguides with an accurate and automated finite-element program.

    PubMed

    Zhao, A P; Cvetkovic, S R

    1994-08-20

    An efficient, accurate, and automated vectorial finite-element software package (named WAVEGIDE), which is implemented within a PDE/Protran problem-solving environment, has been extended to general multilayer anisotropic waveguides. With our system, through an interactive question-and-answer session, the problem can be simply defined with high-level PDE/Protran commands. The problem can then be solved easily and quickly by the main processor within this intelligent environment. In particular, in our system the eigenvalue of waveguide problems may be either a propagation constant (β) or an operating light frequency (F). Furthermore, the cutoff frequencies of propagation modes in waveguides can be calculated. As an application of this approach, numerical results for both scalar and hybrid modes in multilayer anisotropic waveguides are presented and are also compared with results obtained with the domain-integral method. These results clearly illustrate the unique flexibility, accuracy, and ease of use of the WAVEGIDE program. PMID:20935964

  4. Differential-equation-based representation of truncation errors for accurate numerical simulation

    NASA Astrophysics Data System (ADS)

    MacKinnon, Robert J.; Johnson, Richard W.

    1991-09-01

    High-order compact finite difference schemes for 2D convection-diffusion-type differential equations with constant and variable convection coefficients are derived. The governing equations are employed to represent leading truncation terms, including cross-derivatives, making the overall O(h^4) schemes conform to a 3 × 3 stencil. It is shown that the two-dimensional constant coefficient scheme collapses to the optimal scheme for the one-dimensional case wherein the finite difference equation yields nodally exact results. The two-dimensional schemes are tested against standard model problems, including a Navier-Stokes application. Results show that the two schemes are generally more accurate, on comparable grids, than O(h^2) centered differencing and commonly used O(h) and O(h^3) upwinding schemes.
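
    The central trick can be illustrated on a one-dimensional model problem (an illustrative special case, not the paper's full 2D derivation). For the steady convection-diffusion equation c u_x = ν u_xx, differentiating the governing equation gives

        \[
        u_{xxx} = \frac{c}{\nu}\,u_{xx}, \qquad
        u_{xxxx} = \left(\frac{c}{\nu}\right)^{2} u_{xx} ,
        \]

    so the leading truncation terms of second-order central differencing can be rewritten in terms of derivatives already resolved on the stencil, raising the scheme to O(h^4) without widening it beyond three points.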

  5. A new class of accurate, mesh-free hydrodynamic simulation methods

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2015-06-01

    We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume "overlap". We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration, in the code GIZMO; this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require "artificial diffusion" terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced "particle noise" and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of "grid alignment" effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize "grid noise". These differences are important for a range of astrophysical problems.

  6. Genetic Crossing vs Cloning by Computer Simulation

    NASA Astrophysics Data System (ADS)

    Dasgupta, Subinay

    We perform Monte Carlo simulation using Penna's bit string model, and compare the process of asexual reproduction by cloning with that by genetic crossover. We find them to be comparable as regards survival of a species, and also if a natural disaster is simulated.

  7. Genetic crossing vs cloning by computer simulation

    SciTech Connect

    Dasgupta, S.

    1997-06-01

    We perform Monte Carlo simulation using Penna's bit string model, and compare the process of asexual reproduction by cloning with that by genetic crossover. We find them to be comparable as regards survival of a species, and also if a natural disaster is simulated.
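
    Since both listings above describe the same study, a minimal Python sketch of the standard Penna bit-string model with asexual cloning may be useful (assumed from the general literature; all parameters are illustrative and this is not the authors' code). Bit i of the genome is a deleterious mutation that becomes active at age i; an individual dies once THRESHOLD mutations are active, and a Verhulst factor keeps the population bounded.

        import random

        BITS, THRESHOLD, REPRO_AGE, NMAX = 32, 3, 8, 2000

        def alive(genome, age):
            active = genome & ((1 << age) - 1)  # mutations expressed so far
            return bin(active).count("1") < THRESHOLD

        def step(pop):
            nxt = []
            for genome, age in pop:
                age += 1
                if age >= BITS or not alive(genome, age):
                    continue  # death from accumulated mutation load
                if random.random() < len(pop) / NMAX:
                    continue  # Verhulst (crowding) death
                nxt.append((genome, age))
                if age >= REPRO_AGE:  # clone the parent with one new mutation
                    child = genome | (1 << random.randrange(BITS))
                    nxt.append((child, 0))
            return nxt

        pop = [(0, 0)] * 500
        for _ in range(200):
            pop = step(pop)
        print(len(pop), "individuals after 200 steps")

    The genetic-crossover variant replaces the cloning line with a child genome assembled from two parents split at a random crossover point.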

  8. Spatial Learning and Computer Simulations in Science

    ERIC Educational Resources Information Center

    Lindgren, Robb; Schwartz, Daniel L.

    2009-01-01

    Interactive simulations are entering mainstream science education. Their effects on cognition and learning are often framed by the legacy of information processing, which emphasized amodal problem solving and conceptual organization. In contrast, this paper reviews simulations from the vantage of research on perception and spatial learning,…

  9. H2 Adsorption in a Porous Crystal: Accurate First-Principles Quantum Simulation.

    PubMed

    D'Arcy, Jordan H; Jordan, Meredith J T; Frankcombe, Terry J; Collins, Michael A

    2015-12-17

    A general method is presented for constructing, from ab initio quantum chemistry calculations, the potential energy surface (PES) for H2 adsorbed in a porous crystalline material. The method is illustrated for the metal-organic framework material MOF-5. Rigid body quantum diffusion Monte Carlo simulations are used in the construction of the PES and to evaluate the quantum ground state of H2 in MOF-5, the zero-point energy, and the enthalpy of adsorption at 0 K. PMID:26322374

  10. A procedure for computing accurate ab initio quartic force fields: Application to HO2+ and H2O

    NASA Astrophysics Data System (ADS)

    Huang, Xinchuan; Lee, Timothy J.

    2008-07-01

    A procedure for the calculation of molecular quartic force fields (QFFs) is proposed and investigated. The goal is to generate highly accurate ab initio QFFs that include many of the so-called "small" effects that are necessary to achieve high accuracy. The small effects investigated in the present study include correlation of the core electrons (core correlation), extrapolation to the one-particle basis set limit, correction for scalar relativistic contributions, correction for higher-order correlation effects, and inclusion of diffuse functions in the one-particle basis set. The procedure is flexible enough to allow for some effects to be computed directly, while others may be added as corrections. A single grid of points is used and is centered about an initial reference geometry that is designed to be as close as possible to the final ab initio equilibrium structure (with all effects included). It is shown that the least-squares fit of the QFF is not compromised by the added corrections, and the balance between elimination of contamination from higher-order force constants while retaining energy differences large enough to yield meaningful quartic force constants is essentially unchanged from the standard procedures we have used for many years. The initial QFF determined from the least-squares fit is transformed to the exact minimum in order to eliminate gradient terms and allow for the use of second-order perturbation theory for evaluation of spectroscopic constants. It is shown that this step has essentially no effect on the quality of the QFF largely because the initial reference structure is, by design, very close to the final ab initio equilibrium structure. The procedure is used to compute an accurate, purely ab initio QFF for the H2O molecule, which is used as a benchmark test case. The procedure is then applied to the ground and first excited electronic states of the HO2+ molecular cation. Fundamental vibrational frequencies and spectroscopic
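
    For context (the standard definition, not specific to this paper): a quartic force field truncates the Taylor expansion of the potential about the equilibrium geometry at fourth order,

        \[
        V \approx \frac{1}{2}\sum_{ij} F_{ij}\,\Delta q_i\,\Delta q_j
          + \frac{1}{6}\sum_{ijk} F_{ijk}\,\Delta q_i\,\Delta q_j\,\Delta q_k
          + \frac{1}{24}\sum_{ijkl} F_{ijkl}\,\Delta q_i\,\Delta q_j\,\Delta q_k\,\Delta q_l ,
        \]

    where the linear term vanishes at the minimum; this is why the fitted QFF is transformed to the exact minimum above, eliminating gradient terms before second-order perturbation theory is applied.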

  11. Accurate simulation of the electron cloud in the Fermilab Main Injector with VORPAL

    SciTech Connect

    Lebrun, Paul L.G.; Spentzouris, Panagiotis; Cary, John R.; Stoltz, Peter; Veitzer, Seth A.; /Tech-X, Boulder

    2011-01-01

    We present results from a precision simulation of the electron cloud (EC) in the Fermilab Main Injector using the code VORPAL. This is a fully 3D, self-consistent treatment of the EC. Both distributions of electrons in 6D phase space and E.M. field maps have been generated. This has been done for various configurations of the magnetic fields found around the machine. Plasma waves associated with the density fluctuations of the cloud have been analyzed. Our results are compared with those obtained with the POSINST code. The response of a Retarding Field Analyzer (RFA) to the EC has been simulated, as well as the more challenging microwave absorption experiment. Definite predictions of their exact response are difficult to obtain, mostly because of the uncertainties in the secondary emission yield and, in the case of the RFA, because of the sensitivity of the electron collection efficiency to unknown stray magnetic fields. Nonetheless, our simulations do provide guidance to the experimental program.

  12. High Fidelity Simulation of a Computer Room

    NASA Technical Reports Server (NTRS)

    Ahmad, Jasim; Chan, William; Chaderjian, Neal; Pandya, Shishir

    2005-01-01

    This viewgraph presentation reviews NASA's Columbia supercomputer and the mesh technology used to test the adequacy of the fluid flow and cooling of a computer room. A technical description of the Columbia supercomputer is also presented, along with its performance capability.

  13. Some theoretical issues on computer simulations

    SciTech Connect

    Barrett, C.L.; Reidys, C.M.

    1998-02-01

    The subject of this paper is the development of mathematical foundations for a theory of simulation. Sequentially updated cellular automata (sCA) over arbitrary graphs are employed as a paradigmatic framework. In the development of the theory, the authors focus on the properties of causal dependencies among local mappings in a simulation. The main object of study is the mapping between a graph representing the dependencies among entities of a simulation and a graph representing the equivalence classes of systems obtained by all possible updates.
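
    A minimal Python sketch of a sequentially updated cellular automaton over a graph (the graph, the local majority rule, and the update order are all illustrative assumptions):

        # Vertices carry binary states and are updated one at a time;
        # later vertices see the already-updated states of earlier ones,
        # which is exactly the causal dependency structure studied above.
        adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
        state = {0: 1, 1: 0, 2: 1, 3: 0}

        def local_rule(v):
            neigh = [state[u] for u in adj[v]] + [state[v]]
            return 1 if 2 * sum(neigh) > len(neigh) else 0  # majority vote

        for v in sorted(adj):  # the update order is part of the system
            state[v] = local_rule(v)
        print(state)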

  14. Use of advanced computers for aerodynamic flow simulation

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F.

    1980-01-01

    The current and projected use of advanced computers for large-scale aerodynamic flow simulation applied to engineering design and research is discussed. The design use of mature codes run on conventional, serial computers is compared with the fluid research use of new codes run on parallel and vector computers. The role of flow simulations in design is illustrated by the application of a three dimensional, inviscid, transonic code to the Sabreliner 60 wing redesign. Research computations that include a more complete description of the fluid physics by use of Reynolds averaged Navier-Stokes and large-eddy simulation formulations are also presented. Results of studies for a numerical aerodynamic simulation facility are used to project the feasibility of design applications employing these more advanced three dimensional viscous flow simulations.

  15. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory and its run time increases to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort made to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.

  16. Three-dimensional Euler time accurate simulations of fan rotor-stator interactions

    NASA Technical Reports Server (NTRS)

    Boretti, A. A.

    1990-01-01

    A numerical method useful to describe unsteady 3-D flow fields within turbomachinery stages is presented. The method solves the compressible, time dependent, Euler conservation equations with a finite volume, flux splitting, total variation diminishing, approximately factored, implicit scheme. Multiblock composite gridding is used to partition the flow field into a specified arrangement of blocks with static and dynamic interfaces. The code is optimized to take full advantage of the processing power and speed of the Cray Y/MP supercomputer. The method is applied to the computation of the flow field within a single stage, axial flow fan, thus reproducing the unsteady 3-D rotor-stator interaction.

  17. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262

  18. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    PubMed Central

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262
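
    A minimal sketch of the modeling-plus-MPPT loop the two listings above describe (ideal single-diode panel; series and shunt resistances and the irradiance/temperature dependence are omitted, and all parameter values are illustrative rather than datasheet-derived):

        import numpy as np

        I_PH, I_0, VT_MOD = 8.0, 1e-9, 1.5  # photocurrent [A], saturation current [A], module thermal voltage [V]

        def panel_current(v):
            # Ideal single-diode I-V characteristic.
            return I_PH - I_0 * (np.exp(v / VT_MOD) - 1.0)

        def mppt_perturb_observe(v=20.0, dv=0.2, steps=200):
            p_prev, direction = 0.0, 1.0
            for _ in range(steps):
                p = v * panel_current(v)
                if p < p_prev:
                    direction = -direction  # power fell: reverse the perturbation
                p_prev = p
                v += direction * dv
            return v, p_prev

        v_mpp, p_mpp = mppt_perturb_observe()
        print(f"MPP near {v_mpp:.1f} V, {p_mpp:.0f} W")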

  19. Computational field simulation of temporally deforming geometries

    SciTech Connect

    Boyalakuntla, K.; Soni, B.K.; Thornburg, H.J.

    1996-12-31

    A NURBS-based moving grid generation technique is presented to simulate temporally deforming geometries. Grid generation for a complex configuration can be a time-consuming process, and temporally varying geometries necessitate the regeneration of such a grid for every time step. The Non-Uniform Rational B-Spline (NURBS) based control point information is used for geometry description. The parametric definition of the NURBS is utilized in the development of the methodology to generate a well-distributed grid in a timely manner. The numerical simulation involving temporally deforming geometry is accomplished by appropriately linking to an unsteady, multi-block, thin-layer Navier-Stokes solver. The present method greatly reduces CPU requirements for time-dependent remeshing, facilitating the simulation of more complex unsteady problems. This current effort is the first step towards multidisciplinary design optimization, which involves coupling aerodynamics, heat transfer, and structural analysis. Applications include simulation of temporally deforming bodies.

  20. High-order accurate multi-phase simulations: building blocks and what's tricky about them

    NASA Astrophysics Data System (ADS)

    Kummer, Florian

    2015-11-01

    We are going to present a high-order numerical method for multi-phase flow problems, which employs a sharp interface representation by a level-set and an extended discontinuous Galerkin (XDG) discretization for the flow properties. The shape of the XDG basis functions is dynamically adapted to the position of the fluid interface, so that the spatial approximation space can represent jumps in pressure and kinks in velocity accurately. By this approach, the "hp-convergence" property of the classical discontinuous Galerkin (DG) method can be preserved for the low-regularity, discontinuous solutions, such as those appearing in multi-phase flows. Within the past years, several building blocks of such a method were presented: this includes numerical integration on cut-cells, the spatial discretization by the XDG method, precise evaluation of curvature, and level-set algorithms tailored to the special requirements of XDG methods. The presentation covers a short review of these building blocks and their integration into a full multi-phase solver. A special emphasis is put on the discussion of the several pitfalls one may encounter in the formulation of such a solver. This work was supported by the German Research Foundation.

  1. Computer simulation of water reclamation processors

    NASA Technical Reports Server (NTRS)

    Fisher, John W.; Hightower, T. M.; Flynn, Michael T.

    1991-01-01

    The development of detailed simulation models of water reclamation processors based on the ASPEN PLUS simulation program is discussed. Individual models have been developed for vapor compression distillation, vapor phase catalytic ammonia removal, and supercritical water oxidation. These models are used for predicting process behavior. Particular attention is given to the methodology used to complete this work and to the insights gained from this type of model development.

  2. Estimating computer communication network performance using network simulations

    SciTech Connect

    Garcia, A.B.

    1985-01-01

    A generalized queuing model simulation of store-and-forward computer communication networks is developed and implemented using Simulation Language for Alternative Modeling (SLAM). A baseline simulation model is validated by comparison with published analytic models. The baseline model is expanded to include an ACK/NAK data link protocol, four-level message precedence, finite queues, and a response traffic scenario. Network performance, as indicated by average message delay and message throughput, is estimated using the simulation model.

  3. How accurate are volcanic ash simulations of the 2010 Eyjafjallajökull eruption?

    NASA Astrophysics Data System (ADS)

    Dacre, Helen; Harvey, Natalie; Webley, Peter; Morton, Don

    2016-04-01

    In the event of a volcanic eruption the decision to close airspace is based on forecast ash maps, produced using volcanic ash transport and dispersion models. In this paper we quantitatively evaluate the spatial skill of volcanic ash simulations using satellite retrievals of ash from the Eyjafjallajökull eruption during the period from 7-16 May 2010. We find that at the start of this period, 7-10 May, the model (FLEXPART) has excellent skill and can predict the spatial distribution of the satellite retrieved ash to within 0.5° × 0.5° lat/lon. However, on the 10 May there is a decrease in the spatial accuracy of the model, to 2.5° × 2.5° lat/lon, and between 11-12 May the simulated ash location errors grow rapidly. On the 11 May ash is located close to a bifurcation point in the atmosphere, resulting in a rapid divergence in the modeled and satellite ash locations. In general, the model skill reduces as the residence time of ash increases. However, the error growth is not always steady. Rapid increases in error growth are linked to critical points in the ash trajectories. Ensemble modeling using perturbed meteorological data would help to represent this uncertainty and assimilation of satellite ash data would help to reduce uncertainty in volcanic ash forecasts.

  4. How accurate are volcanic ash simulations of the 2010 Eyjafjallajökull eruption?

    NASA Astrophysics Data System (ADS)

    Dacre, H. F.; Harvey, N. J.; Webley, P. W.; Morton, D.

    2016-04-01

    In the event of a volcanic eruption the decision to close airspace is based on forecast ash maps, produced using volcanic ash transport and dispersion models. In this paper we quantitatively evaluate the spatial skill of volcanic ash simulations using satellite retrievals of ash from the Eyjafjallajökull eruption during the period from 7 to 16 May 2010. We find that at the start of this period, 7-10 May, the model (FLEXible PARTicle) has excellent skill and can predict the spatial distribution of the satellite-retrieved ash to within 0.5° × 0.5° latitude/longitude. However, on 10 May there is a decrease in the spatial accuracy of the model to 2.5° × 2.5° latitude/longitude, and between 11 and 12 May the simulated ash location errors grow rapidly. On 11 May ash is located close to a bifurcation point in the atmosphere, resulting in a rapid divergence in the modeled and satellite ash locations. In general, the model skill reduces as the residence time of ash increases. However, the error growth is not always steady. Rapid increases in error growth are linked to key points in the ash trajectories. Ensemble modeling using perturbed meteorological data would help to represent this uncertainty, and assimilation of satellite ash data would help to reduce uncertainty in volcanic ash forecasts.

  5. Computer Simulation Performed for Columbia Project Cooling System

    NASA Technical Reports Server (NTRS)

    Ahmad, Jasim

    2005-01-01

    This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia (10,240 Intel Itanium processors) system. The simulation assesses the performance of the cooling system, identifies deficiencies, and recommends modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools to generate a geometric model of the computer room, and the OVERFLOW-2 code for fluid and thermal simulation. This state-of-the-art technology can be easily extended to provide a general capability for air flow analyses in any modern computer room.

  6. Effective Control of Computationally Simulated Wing Rock in Subsonic Flow

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.; Menzies, Margaret A.

    1997-01-01

    The unsteady compressible, full Navier-Stokes (NS) equations and the Euler equations of rigid-body dynamics are sequentially solved to simulate the delta wing rock phenomenon. The NS equations are solved time accurately, using the implicit, upwind, Roe flux-difference splitting, finite-volume scheme. The rigid-body dynamics equations are solved using a four-stage Runge-Kutta scheme. Once the wing reaches the limit-cycle response, an active control model using a mass injection system is applied from the wing surface to suppress the limit-cycle oscillation. The active control model is based on state feedback and the control law is established using pole placement techniques. The control law is based on the feedback of two states: the roll-angle and roll velocity. The primary model of the computational applications consists of a 80 deg swept, sharp edged, delta wing at 30 deg angle of attack in a freestream of Mach number 0.1 and Reynolds number of 0.4 × 10^6. With a limit-cycle roll amplitude of 41.1 deg, the control model is applied, and the results show that within one and one half cycles of oscillation, the wing roll amplitude and velocity are brought to zero.
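
    The state-feedback design described can be sketched with standard pole placement tools (a hedged illustration: the roll-dynamics matrices below are invented placeholders, not the paper's aerodynamic model):

        import numpy as np
        from scipy.signal import place_poles

        A = np.array([[0.0, 1.0],
                      [2.0, 0.1]])  # linearized roll dynamics (illustrative, unstable)
        B = np.array([[0.0],
                      [1.0]])       # assumed mass-injection control effectiveness
        K = place_poles(A, B, [-2.0, -3.0]).gain_matrix
        print("feedback gains:", K)  # control law u = -K @ [roll angle, roll rate]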

  7. Avoiding pitfalls in simulating real-time computer systems

    NASA Technical Reports Server (NTRS)

    Smith, R. S.

    1984-01-01

    The software simulation of a computer target system on a computer host system, known as an interpretive computer simulator (ICS), functionally models and implements the action of the target hardware. For an ICS to function as efficiently as possible and to avoid certain pitfalls in designing an ICS, it is important that the details of the hardware architectural design of both the target and the host computers be known. This paper discusses both host selection considerations and ICS design features that, without proper consideration, could make the resulting ICS too slow to use or too costly to maintain and expand.

  8. Computational fluid dynamics simulations of oscillating wings and comparison to lifting-line theory

    NASA Astrophysics Data System (ADS)

    Keddington, Megan

    Computational fluid dynamics (CFD) analysis was performed in order to compare the solutions of oscillating wings with Prandtl's lifting-line theory. Quasi-steady and steady-periodic simulations were completed using the CFD software Star-CCM+. The simulations were performed for a number of frequencies in a pure plunging setup. Additional simulations were then completed using a setup of combined pitching and plunging at multiple frequencies. Results from the CFD simulations were compared to the quasi-steady lifting-line solution in the form of the axial-force, normal-force, power, and thrust coefficients, as well as the efficiency obtained for each simulation. The mean values were evaluated for each simulation and compared to the quasi-steady lifting-line solution. It was found that as the frequency of oscillation increased, the quasi-steady lifting-line solution became progressively less accurate in predicting the solutions.

  9. Simulating Expert Clinical Comprehension: Adapting Latent Semantic Analysis to Accurately Extract Clinical Concepts from Psychiatric Narrative

    PubMed Central

    Cohen, Trevor; Blatter, Brett; Patel, Vimla

    2008-01-01

    Cognitive studies reveal that less-than-expert clinicians are less able to recognize meaningful patterns of data in clinical narratives. Accordingly, psychiatric residents early in training fail to attend to information that is relevant to diagnosis and the assessment of dangerousness. This manuscript presents a cognitively motivated methodology for the simulation of expert ability to organize relevant findings supporting intermediate diagnostic hypotheses. Latent Semantic Analysis is used to generate a semantic space from which meaningful associations between psychiatric terms are derived. Diagnostically meaningful clusters are modeled as geometric structures within this space and compared to elements of psychiatric narrative text using semantic distance measures. A learning algorithm is defined that alters components of these geometric structures in response to labeled training data. Extraction and classification of relevant text segments is evaluated against expert annotation, with system-rater agreement approximating rater-rater agreement. A range of biomedical informatics applications for these methods is suggested. PMID:18455483
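
    The LSA core of such a system can be sketched with standard tools (a hedged illustration using scikit-learn; the toy corpus and query are invented, and the real system's semantic space, concept clusters, and learning step are far richer):

        from sklearn.decomposition import TruncatedSVD
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        corpus = [
            "patient reports auditory hallucinations and paranoid ideation",
            "denies suicidal ideation, mood stable on current medication",
            "history of command hallucinations with prior hospitalization",
        ]
        vec = TfidfVectorizer()
        X = vec.fit_transform(corpus)
        lsa = TruncatedSVD(n_components=2).fit(X)  # the latent semantic space
        docs = lsa.transform(X)
        query = lsa.transform(vec.transform(["hallucinations"]))
        print(cosine_similarity(query, docs))  # semantic relatedness of each segment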

  10. Evaluation of the EURO-CORDEX RCMs to accurately simulate the Etesian wind system

    NASA Astrophysics Data System (ADS)

    Dafka, Stella; Xoplaki, Elena; Toreti, Andrea; Zanis, Prodromos; Tyrlis, Evangelos; Luterbacher, Jürg

    2016-04-01

    The Etesians are among the most persistent regional-scale wind systems in the lower troposphere that blow over the Aegean Sea during the extended summer season. An evaluation of the high-spatial-resolution EURO-CORDEX Regional Climate Models (RCMs) is presented here. The study documents the performance of the individual models in representing the basic spatiotemporal pattern of the Etesian wind system for the period 1989-2004. The analysis is mainly focused on evaluating the abilities of the RCMs in simulating the surface wind over the Aegean Sea and the associated large-scale atmospheric circulation. Mean sea level pressure (SLP), wind speed, and geopotential height at 500 hPa are used. The simulated results are validated against reanalysis datasets (20CR-v2c and ERA-20C) and daily observational measurements (12:00 UTC) from mainland Greece and the Aegean Sea. The analysis highlights the general ability of the RCMs to capture the basic features of the Etesians, but also indicates considerable deficiencies for selected metrics, regions, and subperiods. Some of these deficiencies include the significant underestimation (overestimation) of the mean SLP in the northeastern part of the analysis domain in all subperiods (for May and June) when compared to 20CR-v2c (ERA-20C), the significant overestimation of the anomalous ridge over the Balkans and central Europe, and the underestimation of the wind speed over the Aegean Sea. Future work will include an assessment of the Etesians for the next decades using EURO-CORDEX projections under different RCP scenarios and an estimate of the future potential for wind energy production.

  11. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include computer-code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading the simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  12. Two inviscid computational simulations of separated flow about airfoils

    NASA Technical Reports Server (NTRS)

    Barnwell, R. W.

    1976-01-01

    Two inviscid computational simulations of separated flow about airfoils are described. The basic computational method is the line relaxation finite-difference method. Viscous separation is approximated with inviscid free-streamline separation. The point of separation is specified, and the pressure in the separation region is calculated. In the first simulation, the empiricism of constant pressure in the separation region is employed. This empiricism is easier to implement with the present method than with singularity methods. In the second simulation, acoustic theory is used to determine the pressure in the separation region. The results of both simulations are compared with experiment.

  13. Computer simulator for a mobile telephone system

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1981-01-01

    A software simulator was developed to assist NASA in the design of the land mobile satellite service. Structured programming techniques were used: the algorithm was developed in an ALGOL-like pseudo-language and then encoded in FORTRAN IV. The basic input data to the system is a sine wave signal, although future plans call for actual sampled voice as the input signal. The simulator is capable of studying all the possible combinations of types and modes of calls through the use of five communication scenarios: single-hop systems; double-hop, single-gateway systems; double-hop, double-gateway systems; mobile-to-wireline systems; and wireline-to-mobile systems. The transmitter, fading channel, and interference source simulation are also discussed.

  14. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572

  15. Incorporation of shuttle CCT parameters in computer simulation models

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry

    1990-01-01

    Computer simulations of shuttle missions have become increasingly important during recent years. The complexity of mission planning for satellite launch and repair operations, which usually involve EVA, has led to the need for accurate visibility and access studies. The PLAID modeling package used in the Man-Systems Division at Johnson currently has the necessary capabilities for such studies. In addition, the modeling package is used for spatial location and orientation of shuttle components for film overlay studies such as the current investigation of the hydrogen leaks found in the shuttle flight. However, there are a number of differences between the simulation studies and actual mission viewing. These include image blur caused by the finite resolution of the CCT monitors in the shuttle and signal noise from the video tubes of the cameras. During the course of this investigation the shuttle CCT camera and monitor parameters are incorporated into the existing PLAID framework. These parameters are specific to certain camera/lens combinations, and the SNR characteristics of these combinations are included in the noise models. The monitor resolution is incorporated using a Gaussian spread function like that produced by the screen phosphors in the shuttle monitors. Another difference between the traditional PLAID-generated images and actual mission viewing lies in the lack of shadows and reflections of light from surfaces. Ray tracing of the scene explicitly includes the lighting and material characteristics of surfaces. The results of some preliminary studies using ray-tracing techniques for the image generation process combined with the camera and monitor effects are also reported.
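
    A minimal sketch of the two degradation effects described above, assuming an isotropic Gaussian spread function and additive Gaussian noise at a given SNR; the sigma and SNR values, and the simulate_cct_view name, are illustrative assumptions rather than PLAID parameters.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def simulate_cct_view(image, sigma_px, snr_db):
          # Monitor phosphor spread: isotropic Gaussian blur (assumed model).
          blurred = gaussian_filter(image, sigma=sigma_px)
          # Camera/video-tube noise: additive Gaussian scaled to the given SNR.
          sig_power = np.mean(blurred ** 2)
          noise_power = sig_power / (10.0 ** (snr_db / 10.0))
          noisy = blurred + np.random.normal(0.0, np.sqrt(noise_power), image.shape)
          return np.clip(noisy, 0.0, 1.0)

      frame = np.random.rand(128, 128)     # placeholder for a rendered scene
      degraded = simulate_cct_view(frame, sigma_px=1.5, snr_db=30.0)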

  16. Chaotic versus nonchaotic stochastic dynamics in Monte Carlo simulations: a route for accurate energy differences in N-body systems.

    PubMed

    Assaraf, Roland; Caffarel, Michel; Kollias, A C

    2011-04-15

    We present a method to efficiently evaluate small energy differences of two close N-body systems by employing stochastic processes having a stability versus chaos property. By using the same random noise, energy differences are computed from close trajectories without reweighting procedures. The approach is presented for quantum systems but can be applied to classical N-body systems as well. It is exemplified with diffusion Monte Carlo simulations for long chains of hydrogen atoms and molecules for which it is shown that the long-standing problem of computing energy derivatives is solved. PMID:21568537
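
    The key idea, computing a small difference between two close systems by driving both with the same random noise, can be seen in a classical toy analogue: estimating the difference in mean-square displacement of two harmonic oscillators whose spring constants differ by 1% (at kT = 1). Everything below is an illustration of correlated sampling, not the paper's diffusion Monte Carlo implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      k1, k2 = 1.00, 1.01                 # two "close" systems
      n = 100_000

      # Correlated estimate: the SAME noise stream samples both systems, so
      # statistical fluctuations cancel in the difference (no reweighting).
      z = rng.normal(size=n)              # Boltzmann samples: x = z / sqrt(k)
      diff_corr = np.mean(z**2 / k1 - z**2 / k2)

      # Independent estimate of the same quantity: far noisier.
      z1, z2 = rng.normal(size=n), rng.normal(size=n)
      diff_indep = np.mean(z1**2 / k1) - np.mean(z2**2 / k2)

      exact = 1 / k1 - 1 / k2             # <x^2> = 1/(beta k) at beta = 1
      print(exact, diff_corr, diff_indep)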

  17. Efficient and physically accurate modeling and simulation of anisoplanatic imaging through the atmosphere: a space-variant volumetric image blur method

    NASA Astrophysics Data System (ADS)

    Reinhardt, Colin N.; Ritcey, James A.

    2015-09-01

    We present a novel method for efficient and physically accurate modeling and simulation of anisoplanatic imaging through the atmosphere; in particular, we present a new space-variant volumetric image blur algorithm. The method is based on the use of physical atmospheric meteorology models, such as vertical turbulence profiles and aerosol/molecular profiles, which can in general be fully spatially varying in three dimensions and also evolving in time. The space-variant modeling method relies on the metadata provided by 3D computer graphics modeling and rendering systems to decompose the image into a set of slices which can be treated in an independent but physically consistent manner, achieving simulated image blur effects that are more accurate and realistic than the homogeneous and stationary blurring methods commonly used today. We also present a simple illustrative example of the application of our algorithm, and show that its results and performance agree with the expected relative trends and behavior of the prescribed turbulence profile physical model used to define the initial spatially varying environmental scenario conditions. We present the details of an efficient Fourier-transform-domain formulation of the space-variant volumetric blur algorithm, a detailed pseudocode description of the method's implementation, and clarification of some nonobvious technical details.
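
    The slice-decomposition idea can be sketched as follows: each depth slice is blurred in the Fourier domain with its own kernel (here a Gaussian whose width stands in for the turbulence strength along that slice's path) and the slices are composited back to front. The Gaussian kernel, the per-slice sigmas, and the alpha compositing are simplifying assumptions, not the paper's full physical model.

      import numpy as np

      def fft_blur(img, sigma):
          # Isotropic Gaussian blur applied as a transfer function in Fourier space.
          ny, nx = img.shape
          fy = np.fft.fftfreq(ny)[:, None]
          fx = np.fft.fftfreq(nx)[None, :]
          otf = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
          return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

      def space_variant_blur(slices, sigmas):
          # slices: (image, alpha) pairs from the renderer, ordered near to far;
          # sigmas: per-slice blur widths, e.g. derived from a turbulence profile.
          out = np.zeros_like(slices[0][0])
          for (img, alpha), s in zip(reversed(slices), reversed(sigmas)):
              b_img = fft_blur(img, s)
              b_a = np.clip(fft_blur(alpha, s), 0.0, 1.0)
              out = b_img + (1.0 - b_a) * out      # back-to-front "over" composite
          return out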

  18. A novel approach for accurate radiative transfer in cosmological hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Petkova, Margarita; Springel, Volker

    2011-08-01

    accurately deal with non-equilibrium effects. We discuss several tests of the new method, including shadowing configurations in two and three dimensions, ionized sphere expansion in static and dynamic density fields and the ionization of a cosmological density field. The tests agree favourably with analytical expectations and results based on other numerical radiative transfer approximations.

  19. Computer simulation of bounded plasma systems

    SciTech Connect

    Lawson, W.S.

    1987-03-05

    The physical and numerical problems of kinetic simulation of a bounded electrostatic plasma system in one planar dimension are examined, and solutions to them are presented. These problems include particle absorption, reflection and emission at boundaries, the solution of Poisson's equation under non-periodic boundary conditions, and the treatment of an external circuit connecting the boundaries. Some comments are also made regarding the problems of higher dimensions. The methods which are described here are implemented in a code named PDW1, which is available from Professor C.K. Birdsall, Plasma Theory and Simulation Group, Cory Hall, University of California, Berkeley, CA 94720.
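
    Of the problems listed, the non-periodic Poisson solve is the easiest to illustrate. The sketch below solves phi'' = -rho on a 1D grid with fixed wall potentials (Dirichlet conditions) by direct solution of the tridiagonal finite-difference system; units, grid size, and the uniform charge density are arbitrary assumptions, and a production code like PDW1 would also couple the wall potentials to the external circuit self-consistently.

      import numpy as np

      def poisson_1d(rho, dx, phi_left, phi_right):
          # Finite-difference phi'' = -rho with Dirichlet (fixed-potential) walls.
          n = len(rho)
          A = (np.diag(-2.0 * np.ones(n)) +
               np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
          b = -rho * dx ** 2
          b[0] -= phi_left          # known boundary values move to the RHS
          b[-1] -= phi_right
          return np.linalg.solve(A, b)

      phi = poisson_1d(rho=0.01 * np.ones(99), dx=0.01, phi_left=0.0, phi_right=5.0)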

  20. Computer Simulations in the Science Classroom.

    ERIC Educational Resources Information Center

    Richards, John; And Others

    1992-01-01

    Explorer is an interactive environment based on a constructivist epistemology of learning that integrates animated computer models with analytic capabilities for learning science. The system includes graphs, a spreadsheet, scripting, and interactive tools. Two examples involving the dynamics of colliding objects and electric circuits illustrate…

  1. Computer Simulation of Electric Field Lines.

    ERIC Educational Resources Information Center

    Kirkup, L.

    1985-01-01

    Describes a computer program which plots electric field lines. Includes a program listing, sample diagrams produced on a BBC Model B microcomputer (which could be produced on other microcomputers by modifying the program), and a discussion of the properties of field lines. (JN)
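
    The original program is in BBC BASIC and is not reproduced in this record; as a rough modern sketch of the same idea, the Python code below traces a field line from a seed point by repeatedly stepping along the local field direction of a two-point-charge system (k = 1; the step size and stopping radius are arbitrary choices).

      import numpy as np

      charges = [(+1.0, np.array([-1.0, 0.0])),   # (q, position): a dipole
                 (-1.0, np.array([+1.0, 0.0]))]

      def E(p):
          # Total electric field at point p from all point charges (k = 1).
          e = np.zeros(2)
          for q, pos in charges:
              r = p - pos
              e += q * r / np.linalg.norm(r) ** 3
          return e

      def trace_line(start, ds=0.01, steps=2000):
          # March along the unit field direction; stop when a charge is reached.
          pts = [np.array(start, dtype=float)]
          for _ in range(steps):
              e = E(pts[-1])
              pts.append(pts[-1] + ds * e / np.linalg.norm(e))
              if any(np.linalg.norm(pts[-1] - pos) < 0.05 for _, pos in charges):
                  break
          return np.array(pts)

      line = trace_line((-0.95, 0.05))   # seed just outside the positive charge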

  2. Using Computer Simulations to Integrate Learning.

    ERIC Educational Resources Information Center

    Liao, Thomas T.

    1983-01-01

    Describes the primary design criteria and the classroom activities involved in "The Yellow Light Problem," a minicourse on decision making in the secondary school Mathematics, Engineering and Science Achievement (MESA) program in California. Activities include lectures, discussions, science and math labs, computer labs, and development of…

  3. Accurate simulation of near-wall turbulence over a compliant tensegrity fabric

    NASA Astrophysics Data System (ADS)

    Luo, Haoxiang; Bewley, Thomas R.

    2005-05-01

    This paper presents a new class of compliant surfaces, dubbed tensegrity fabrics, for the problem of reducing the drag induced by near-wall turbulent flows. The substructure upon which this compliant surface is built is based on the "tensegrity" structural paradigm, and is formed as a stable pretensioned network of compressive members ("bars") interconnected by tensile members ("tendons"). Compared with existing compliant surface studies, most of which are based on spring-supported plates or membranes, tensegrity fabrics appear to be better configured to respond to the shear stress fluctuations (in addition to the pressure fluctuations) generated by near-wall turbulence. As a result, once the several parameters affecting the compliance characteristics of the structure are tuned appropriately, the tensegrity fabric might exhibit an improved capacity for dampening the fluctuations of near-wall turbulence, thereby reducing drag. This paper improves on our previous work (SPIE Paper 5049-57) by using a 3D time-dependent coordinate transformation in the flow simulations to account for the motion of the channel walls, with the Cartesian components of the velocity used as the flow variables. For the spatial discretization, a dealiased pseudospectral scheme is used in the homogeneous directions and a second-order finite difference scheme is used in the wall-normal direction. The code is first validated with several benchmark results that are available in the published literature for flows past both stationary and nonstationary walls. Direct numerical simulations of turbulent flows at Re_tau=150 over the compliant tensegrity fabric are then presented. It is found that, when the stiffness, mass, damping, and orientation of the members of the unit cell defining the tensegrity fabric are selected appropriately, the near-wall statistics of the turbulence are altered significantly. The flow/structure interface is found to form streamwise-travelling waves reminiscent of those

  4. Numerical simulation of supersonic wake flow with parallel computers

    SciTech Connect

    Wong, C.C.; Soetrisno, M.

    1995-07-01

    Simulating a supersonic wake flow field behind a conical body is a computing-intensive task. It requires a large number of computational cells to capture the dominant flow physics and a robust numerical algorithm to obtain a reliable solution. High-performance parallel computers with unique distributed processing and data storage capability can meet this need. They have larger computational memory and faster computing time than conventional vector computers. We apply the PINCA Navier-Stokes code to simulate a wind-tunnel supersonic wake experiment on Intel Gamma, Intel Paragon, and IBM SP2 parallel computers. These simulations are performed to study the mean flow in the near wake region of a sharp, 7-degree half-angle, adiabatic cone at Mach number 4.3 and freestream Reynolds number of 40,600. Overall, the numerical solutions capture the general features of the hypersonic laminar wake flow and compare favorably with the wind tunnel data. With a refined and clustered grid distribution in the recirculation zone, the calculated location of the rear stagnation point is consistent with the 2D axisymmetric and 3D experiments. In this study, we also demonstrate the importance of having a large local memory capacity within a computer node and the effective utilization of the number of computer nodes to achieve good parallel performance when simulating a complex, large-scale wake flow problem.

  5. Computer simulation of on-orbit manned maneuvering unit operations

    NASA Technical Reports Server (NTRS)

    Stuart, G. M.; Garcia, K. D.

    1986-01-01

    Simulation of spacecraft on-orbit operations is discussed in reference to Martin Marietta's Space Operations Simulation laboratory's use of computer software models to drive a six-degree-of-freedom moving base carriage and two target gimbal systems. In particular, key simulation issues and related computer software models associated with providing real-time, man-in-the-loop simulations of the Manned Maneuvering Unit (MMU) are addressed with special attention given to how effectively these models and motion systems simulate the MMU's actual on-orbit operations. The weightless effects of the space environment require the development of entirely new devices for locomotion. Since the access to space is very limited, it is necessary to design, build, and test these new devices within the physical constraints of earth using simulators. The simulation method that is discussed here is the technique of using computer software models to drive a Moving Base Carriage (MBC) that is capable of providing simultaneous six-degree-of-freedom motions. This method, utilized at Martin Marietta's Space Operations Simulation (SOS) laboratory, provides the ability to simulate the operation of manned spacecraft, provides the pilot with proper three-dimensional visual cues, and allows training of on-orbit operations. The purpose here is to discuss significant MMU simulation issues, the related models that were developed in response to these issues and how effectively these models simulate the MMU's actual on-orbit operations.

  6. Reanalysis and Simulation Suggest a Phylogenetic Microarray Does Not Accurately Profile Microbial Communities

    PubMed Central

    Midgley, David J.; Greenfield, Paul; Shaw, Janet M.; Oytam, Yalchin; Li, Dongmei; Kerr, Caroline A.; Hendry, Philip

    2012-01-01

    The second-generation (G2) PhyloChip is designed to detect over 8700 bacterial and archaeal taxa and has been used in over 50 publications and conference presentations. Many of those publications reveal that the PhyloChip measures of species richness greatly exceed statistical estimates of richness based on other methods. An examination of probes downloaded from Greengenes suggested that the system may have the potential to distort the observed community structure. This may be due to the sharing of probes by taxa; more than 21% of the taxa in that downloaded data have no unique probes. In-silico simulations using these data showed that a population of 64 taxa representing a typical anaerobic subterranean community returned 96 different taxa, including 15 families incorrectly called present and 19 families incorrectly called absent. A study of nasal and oropharyngeal microbial communities by Lemon et al (2010) found some 1325 taxa using the G2 PhyloChip; however, about 950 of these taxa have, in the downloaded data, no unique probes and cannot be definitively called present. Finally, data from Brodie et al (2007), when re-examined, indicate that the abundances of the majority of detected taxa are highly correlated with one another, suggesting that many probe sets do not act independently. Based on our analyses of downloaded data, we conclude that outputs from the G2 PhyloChip should be treated with some caution, and that the presence of taxa represented solely by non-unique probes should be independently verified. PMID:22457798
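
    The probe-sharing artifact is easy to reproduce in miniature. In the sketch below, a taxon is called present when all of its probes fluoresce; because taxon C's (hypothetical) probes are a subset of taxon B's, any sample containing B also lights up C. The taxa and probe sets are invented, but the mechanism mirrors the in-silico simulations described above.

      # Hypothetical taxon -> probe-set map; C has no probes unique to it
      probes = {"A": {1, 2, 3}, "B": {4, 5, 6}, "C": {4, 5}, "D": {7, 8}}

      def call_present(community):
          # A taxon is called present when ALL of its probes fluoresce.
          lit = set().union(*(probes[t] for t in community))
          return {t for t, ps in probes.items() if ps <= lit}

      print(call_present({"B"}))   # {'B', 'C'}: C is a false positive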

  7. Advanced Simulation and Computing Business Plan

    SciTech Connect

    Rummel, E.

    2015-07-09

    To maintain a credible nuclear weapons program, the National Nuclear Security Administration’s (NNSA’s) Office of Defense Programs (DP) needs to make certain that the capabilities, tools, and expert staff are in place and are able to deliver validated assessments. This requires a complete and robust simulation environment backed by an experimental program to test ASC Program models. This ASC Business Plan document encapsulates a complex set of elements, each of which is essential to the success of the simulation component of the Nuclear Security Enterprise. The ASC Business Plan addresses the hiring, mentoring, and retaining of programmatic technical staff responsible for building the simulation tools of the nuclear security complex. The ASC Business Plan describes how the ASC Program engages with industry partners—partners upon whom the ASC Program relies for today’s and tomorrow’s high-performance architectures. Each piece in this chain is essential to assure policymakers, who must make decisions based on the results of simulations, that they are receiving all the actionable information they need.

  8. Multidimensional computer simulation of Stirling cycle engines

    NASA Astrophysics Data System (ADS)

    Hall, Charles A.; Porsching, Thomas A.

    1992-07-01

    This report summarizes the activities performed under NASA-Grant NAG3-1097 during 1991. During that period, work centered on the following tasks: (1) to investigate more effective solvers for ALGAE; (2) to modify the plotting package for ALGAE; and (3) to validate ALGAE by simulating oscillating flow problems similar to those studied by Kurzweg and Ibrahim.

  9. Multidimensional computer simulation of Stirling cycle engines

    NASA Technical Reports Server (NTRS)

    Hall, Charles A.; Porsching, Thomas A.

    1992-01-01

    This report summarizes the activities performed under NASA-Grant NAG3-1097 during 1991. During that period, work centered on the following tasks: (1) to investigate more effective solvers for ALGAE; (2) to modify the plotting package for ALGAE; and (3) to validate ALGAE by simulating oscillating flow problems similar to those studied by Kurzweg and Ibrahim.

  10. Student Ecosystems Problem Solving Using Computer Simulation.

    ERIC Educational Resources Information Center

    Howse, Melissa A.

    The purpose of this study was to determine the procedural knowledge brought to, and created within, a pond ecology simulation by students. Environmental Decision Making (EDM) is an ecosystems modeling tool that allows users to pose their own problems and seek satisfying solutions. Of specific interest was the performance of biology majors who had…

  11. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    SciTech Connect

    Chang, Chih-Hao. E-mail: chchang@engineering.ucsb.edu; Liou, Meng-Sing. E-mail: meng-sing.liou@grc.nasa.gov

    2007-07-01

    In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems show the capability to capture enormous detail and complicated wave patterns in flows having large disparities in the fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosions.
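
    For orientation, the sketch below implements the standard AUSM+ split Mach number polynomials (Liou's fourth-degree version with beta = 1/8), from which an interface Mach number, and hence the upwinded mass flux, is assembled. The additional pressure-velocity diffusion ("-up") terms that the paper introduces for liquid flows are omitted here; the left/right states are arbitrary.

      def mach_split(M, beta=0.125):
          # AUSM+ split Mach numbers (4th-degree polynomials in the subsonic range).
          if abs(M) >= 1.0:
              Mp = 0.5 * (M + abs(M))
              Mm = 0.5 * (M - abs(M))
          else:
              Mp = +0.25 * (M + 1.0) ** 2 + beta * (M * M - 1.0) ** 2
              Mm = -0.25 * (M - 1.0) ** 2 - beta * (M * M - 1.0) ** 2
          return Mp, Mm

      # Interface Mach number from the left/right cell states; the mass flux is
      # then upwinded according to the sign of M_half.
      Ml, Mr = 0.3, -0.1
      M_half = mach_split(Ml)[0] + mach_split(Mr)[1]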

  12. Process Training Derived from a Computer Simulation Theory

    ERIC Educational Resources Information Center

    Holzman, Thomas G.; And Others

    1976-01-01

    Discusses a study which investigated whether a computer simulation model could suggest subroutines that were instructable and whether instruction on these subroutines could facilitate subjects' solutions to the problem task. (JM)

  13. Development and Formative Evaluation of Computer Simulated College Chemistry Experiments.

    ERIC Educational Resources Information Center

    Cavin, Claudia S.; Cavin, E. D.

    1978-01-01

    This article describes the design, preparation, and initial evaluation of a set of computer-simulated chemistry experiments. The experiments entailed the use of an atomic emission spectroscope and a single-beam visible absorption spectrophotometer. (Author/IRT)

  14. Computer Simulations as a Teaching Tool in Community Colleges

    ERIC Educational Resources Information Center

    Grimm, Floyd M., III

    1978-01-01

    Describes the implementation of a computer assisted instruction program at Harford Community College. Eight different biology simulation programs are used covering topics in ecology, genetics, biochemistry, and sociobiology. (MA)

  15. MINEXP, A Computer-Simulated Mineral Exploration Program

    ERIC Educational Resources Information Center

    Smith, Michael J.; And Others

    1978-01-01

    This computer simulation is designed to put students into a realistic decision making situation in mineral exploration. This program can be used with different exploration situations such as ore deposits, petroleum, ground water, etc. (MR)

  16. Issues in Computer Simulation in Military Maintenance Training

    ERIC Educational Resources Information Center

    Brock, John F.

    1978-01-01

    This article discusses the state of computer-based simulation, reviews the early phases of ISD, suggests that current ISD approaches are missing critical inputs, and proposes a research and development program. (Author)

  17. An Exercise in Biometrical Genetics Based on a Computer Simulation.

    ERIC Educational Resources Information Center

    Murphy, P. J.

    1983-01-01

    Describes an exercise in biometrical genetics based on the noninteractive use of a computer simulation of a wheat hybridization program. Advantages of using the material in this way are also discussed. (Author/JN)

  18. Physical resist models and their calibration: their readiness for accurate EUV lithography simulation

    NASA Astrophysics Data System (ADS)

    Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.

    2010-04-01

    In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is done on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than a thousand CD values. We address practical aspects such as the speed of calibration and selection of calibration patterns. The model is calibrated on 12 process window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at nominal process condition is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and, where necessary, correct the nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~ 1.0 nm. We show what elements are important to obtain a well-calibrated model. We discuss the impact of 3D mask effects on the Bossung tilt. We demonstrate that a correct representation of the flare level during the calibration is important to achieve a high predictability at various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one-dimensional structures only), its accuracy is validated based on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end-of-line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model, where EUV tool-specific signatures are taken into account.
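
    Calibration here means tuning model parameters to minimize the RMS deviation between simulated and measured CDs, then checking that RMS on patterns held out of the calibration. The sketch below does this with an obviously fake analytic stand-in for the resist model (resist_cd) and invented dose/focus/CD numbers; a real calibration drives a physical resist simulator.

      import numpy as np
      from scipy.optimize import least_squares

      def resist_cd(params, dose, focus):
          # Placeholder compact model; a real flow evaluates a physical simulator.
          a, b, c = params
          return a + b * dose + c * focus ** 2

      # Hypothetical calibration set: (dose, focus) -> measured CD in nm
      dose = np.array([14.0, 15.0, 16.0, 15.0, 15.0])
      focus = np.array([0.00, 0.00, 0.00, -0.06, 0.06])
      cd = np.array([34.1, 32.0, 30.2, 33.0, 32.9])

      fit = least_squares(lambda p: resist_cd(p, dose, focus) - cd,
                          x0=[60.0, -2.0, 100.0])
      rms = np.sqrt(np.mean(fit.fun ** 2))   # calibration RMS in nm
      print(f"calibration RMS = {rms:.2f} nm")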

  19. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute-long simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
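
    The reported runtime behavior can be summarized by fitting the inverse power model T(N) = a * N^-b. In the sketch below, the 1-node (53 min) and 20-node (3.11 min) points come from the abstract, while the intermediate timings are invented for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      def runtime(n, a, b):
          # Inverse power model T(N) = a * N^(-b); b near 1 is near-linear speedup.
          return a * n ** (-b)

      nodes = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
      t_min = np.array([53.0, 27.5, 11.8, 6.2, 3.11])   # middle points hypothetical

      (a, b), _ = curve_fit(runtime, nodes, t_min, p0=[53.0, 1.0])
      print(f"T(N) ~ {a:.1f} * N^-{b:.2f} minutes")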

  20. Launch Site Computer Simulation and its Application to Processes

    NASA Technical Reports Server (NTRS)

    Sham, Michael D.

    1995-01-01

    This paper provides an overview of computer simulation, the Lockheed-developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon-driven model that uses commercial off-the-shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation, changes can be made and processes perfected before they are implemented.

  1. Computer Simulated Growth of Icosahedral Glass

    NASA Astrophysics Data System (ADS)

    Leino, Y. A. J.; Salomaa, M. M.

    1990-01-01

    One possible model for materials displaying classically forbidden symmetry properties (apart from perfect quasicrystals) is the icosahedral glass model. We simulate the random growth of two types of two-dimensional icosahedral glasses consisting of Penrose tiles. First we restrict the growth with the arrow rules; then we let the structure develop totally freely. The diffraction patterns have a clear five-fold symmetry in both cases. The diffraction peak intensities do not differ, but the shapes of the central peaks vary depending on whether the arrow rules are imposed or not. Finally, we show that the half-width of the central peak decreases as the size of the simulation increases, until a finite disorder-limited value is reached. This phenomenon is in agreement with the behaviour of physical quasicrystallites and in contradiction with perfect mathematical quasicrystals, which have Bragg peaks of zero width.

  2. Computational Aerothermodynamic Simulation Issues on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; White, Jeffery A.

    2004-01-01

    The synthesis of physical models for gas chemistry and turbulence from the structured-grid codes LAURA and VULCAN into the unstructured-grid code FUN3D is described. A directionally symmetric, total variation diminishing (STVD) algorithm and an entropy fix (eigenvalue limiter) keyed to the local cell Reynolds number are introduced to improve solution quality for hypersonic aeroheating applications. A simple grid-adaptation procedure is incorporated within the flow solver. Simulations of flow over an ellipsoid (perfect gas, inviscid) and the Shuttle Orbiter (viscous, chemical nonequilibrium), and comparisons to the structured-grid solvers LAURA (cylinder, Shuttle Orbiter) and VULCAN (flat plate), are presented to show current capabilities. The quality of heating in 3D stagnation regions is very sensitive to algorithm options; in general, high-aspect-ratio tetrahedral elements complicate the simulation of high-Reynolds-number viscous flow as compared to locally structured meshes aligned with the flow.

  3. Computer simulation of a geomagnetic substorm

    NASA Technical Reports Server (NTRS)

    Lyon, J. G.; Brecht, S. H.; Huba, J. D.; Fedder, J. A.; Palmadesso, P. J.

    1981-01-01

    A global two-dimensional simulation of a substormlike process occurring in earth's magnetosphere is presented. The results are consistent with an empirical substorm model - the neutral-line model. Specifically, the introduction of a southward interplanetary magnetic field forms an open magnetosphere. Subsequently, a substorm neutral line forms at about 15 earth radii or closer in the magnetotail, and plasma sheet thinning and plasma acceleration occur. Eventually the substorm neutral line moves tailward toward its presubstorm position.

  4. Computer simulation of surface and film processes

    NASA Technical Reports Server (NTRS)

    Tiller, W. A.

    1981-01-01

    A molecular dynamics technique based upon Lennard-Jones type pair interactions is used to investigate time-dependent as well as equilibrium properties. The case study deals with systems containing Si and O atoms. In this case a more involved potential energy function (PEF) is employed and the system is simulated via a Monte-Carlo procedure. This furnishes the equilibrium properties of the system at its interfaces and surfaces as well as in the bulk.
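
    As a bare-bones illustration of the two ingredients named above, a Lennard-Jones pair potential and a Metropolis Monte Carlo loop, the sketch below relaxes a small 1D chain of atoms in reduced units. The Si-O study summarized here used a more involved potential energy function; the chain size, temperature, and step width in the sketch are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(1)

      def lj(r, eps=1.0, sigma=1.0):
          # Lennard-Jones pair energy in reduced units.
          s6 = (sigma / r) ** 6
          return 4.0 * eps * (s6 * s6 - s6)

      def total_energy(x):
          # Sum of pair energies for a small 1D chain of atoms.
          n = len(x)
          return sum(lj(abs(x[i] - x[j])) for i in range(n) for j in range(i + 1, n))

      # Metropolis Monte Carlo at temperature kT
      x, kT = np.arange(5) * 1.1, 0.2
      E = total_energy(x)
      for _ in range(10_000):
          i = rng.integers(len(x))
          trial = x.copy()
          trial[i] += rng.normal(0.0, 0.05)
          dE = total_energy(trial) - E
          if dE < 0.0 or rng.random() < np.exp(-dE / kT):
              x, E = trial, E + dE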

  5. Evaluation of a Computer Simulation in a Therapeutics Case Discussion.

    ERIC Educational Resources Information Center

    Kinkade, Raenel E.; And Others

    1995-01-01

    A computer program was used to simulate a case presentation in pharmacotherapeutics. Students (n=24) used their knowledge of the disease (glaucoma) and various topical agents on the computer program's formulary to "treat" the patient. Comparison of results with a control group found the method as effective as traditional case presentation on…

  6. Use of Computer Simulations in Microbial and Molecular Genetics.

    ERIC Educational Resources Information Center

    Wood, Peter

    1984-01-01

    Describes five computer programs: four simulations of genetic and physical mapping experiments and one interactive learning program on the genetic coding mechanism. The programs were originally written in BASIC for the VAX-11/750 V.3. mainframe computer and have been translated into Applesoft BASIC for Apple IIe microcomputers. (JN)

  7. COFLO: A Computer Aid for Teaching Ecological Simulation.

    ERIC Educational Resources Information Center

    Levow, Roy B.

    A computer-assisted course was designed to provide students with an understanding of modeling and simulation techniques in quantitiative ecology. It deals with continuous systems and has two segments. One develops mathematical and computer tools, beginning with abstract systems and their relation to physical systems. Modeling principles are next…

  8. Coached, Interactive Computer Simulations: A New Technology for Training.

    ERIC Educational Resources Information Center

    Hummel, Thomas J.

    This paper provides an overview of a prototype simulation-centered intelligent computer-based training (CBT) system--implemented using expert system technology--which provides: (1) an environment in which trainees can learn and practice complex skills; (2) a computer-based coach or mentor to critique performance, suggest improvements, and provide…

  9. Frontiers in the Teaching of Physiology. Computer Literacy and Simulation.

    ERIC Educational Resources Information Center

    Tidball, Charles S., Ed.; Shelesnyak, M. C., Ed.

    Provided is a collection of papers on computer literacy and simulation originally published in The Physiology Teacher, supplemented by additional papers and a glossary of terms relevant to the field. The 12 papers are presented in five sections. An affirmation of conventional physiology laboratory exercises, coping with computer terminology, and…

  10. Generating dynamic simulations of movement using computed muscle control.

    PubMed

    Thelen, Darryl G; Anderson, Frank C; Delp, Scott L

    2003-03-01

    Computation of muscle excitation patterns that produce coordinated movements of muscle-actuated dynamic models is an important and challenging problem. Using dynamic optimization to compute excitation patterns comes at a large computational cost, which has limited the use of muscle-actuated simulations. This paper introduces a new algorithm, which we call computed muscle control, that uses static optimization along with feedforward and feedback controls to drive the kinematic trajectory of a musculoskeletal model toward a set of desired kinematics. We illustrate the algorithm by computing a set of muscle excitations that drive a 30-muscle, 3-degree-of-freedom model of pedaling to track measured pedaling kinematics and forces. Only 10 min of computer time were required to compute muscle excitations that reproduced the measured pedaling dynamics, which is over two orders of magnitude faster than conventional dynamic optimization techniques. Simulated kinematics were within 1 degree of experimental values, simulated pedal forces were within one standard deviation of measured pedal forces for nearly all of the crank cycle, and computed muscle excitations were similar in timing to measured electromyographic patterns. The speed and accuracy of this new algorithm improves the feasibility of using detailed musculoskeletal models to simulate and analyze movement. PMID:12594980
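
    The tracking layer of such an algorithm can be sketched compactly: a feedforward experimental acceleration plus proportional-derivative feedback on the kinematic errors yields the accelerations that static optimization then resolves into muscle excitations. The gains below (kv = 2*sqrt(kp) for critical damping) and all numbers are illustrative assumptions, not values from the paper.

      def desired_accel(q_exp, qd_exp, qdd_exp, q, qd, kp=100.0, kv=20.0):
          # Feedforward experimental acceleration plus PD feedback on the
          # kinematic errors (the tracking step of computed muscle control).
          return qdd_exp + kv * (qd_exp - qd) + kp * (q_exp - q)

      # One generalized coordinate at one control step; static optimization
      # would then resolve this acceleration into individual muscle excitations.
      qdd_star = desired_accel(q_exp=1.00, qd_exp=0.50, qdd_exp=0.10,
                               q=0.98, qd=0.45)
      print(qdd_star)   # 0.10 + 20*0.05 + 100*0.02 = 3.1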

  11. Teaching Macroeconomics with a Computer Simulation. Final Report.

    ERIC Educational Resources Information Center

    Dolbear, F. Trenery, Jr.

    The study of macroeconomics--the determination and control of aggregative variables such as gross national product, unemployment and inflation--may be facilitated by the use of a computer simulation policy game. An aggregative model of the economy was constructed and programed for a computer and (hypothetical) historical data were generated. The…

  12. The Use of Computer Simulations in High School Curricula.

    ERIC Educational Resources Information Center

    Visich, Marian, Jr.; Braun, Ludwig

    The Huntington Computer Project has developed 17 simulation games which can be used for instructional purposes in high schools. These games were designed to run on digital computers and to deal with material from either biology, physics, or social studies. Distribution was achieved through the Digital Equipment Corporation, which disseminated…

  13. A computer simulator for development of engineering system design methodologies

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Sobieszczanski-Sobieski, J.

    1987-01-01

    A computer program designed to simulate and improve engineering system design methodology is described. The simulator mimics the qualitative behavior and data couplings occurring among the subsystems of a complex engineering system. It eliminates the engineering analyses in the subsystems by replacing them with judiciously chosen analytical functions. With the cost of analysis eliminated, the simulator is used for experimentation with a large variety of candidate algorithms for multilevel design optimization to choose the best ones for the actual application. Thus, the simulator serves as a development tool for multilevel design optimization strategy. The simulator concept, implementation, and status are described and illustrated with examples.

  14. A New Boundary Condition for Computer Simulations of Interfacial Systems

    SciTech Connect

    Wong, Ka-Yiu; Pettitt, Bernard M.; Montgomery, B.

    2000-08-18

    A new boundary condition for computer simulations of interfacial systems is presented. The simulation box used in this boundary condition is the asymmetric unit of space group Pb, and it contains only one interface. Compared to a simulation box using common periodic boundary conditions, which contains two interfaces, the number of particles in the simulation is reduced by half. This boundary condition was tested against common periodic boundary conditions in molecular dynamics simulations of liquid water interacting with hydroxylated silica surfaces. It yielded results essentially identical to those of the periodic boundary conditions and consumed less CPU time for comparable statistics.

  15. A new boundary condition for computer simulations of interfacial systems

    NASA Astrophysics Data System (ADS)

    Wong, Ka-Yiu; Pettitt, B. Montgomery

    2000-08-01

    A new boundary condition for computer simulations of interfacial systems is presented. The simulation box used in this boundary condition is the asymmetric unit of space group Pb, and it contains only one interface. Compared to a simulation box using common periodic boundary conditions, which contains two interfaces, the number of particles in the simulation is reduced by half. This boundary condition was tested against common periodic boundary conditions in molecular dynamics simulations of liquid water interacting with hydroxylated silica surfaces. It yielded results essentially identical to those of the periodic boundary conditions and consumed less CPU time for comparable statistics.

  16. Developing Computer-Based Interactive Video Simulations on Questioning Strategies.

    ERIC Educational Resources Information Center

    Rogers, Randall; Rieff, Judith

    1989-01-01

    This article presents a rationale for development and implementation of computer based interactive videotape (CBIV) in preservice teacher education; identifies advantages of CBIV simulations over other practice exercises; describes economical production procedures; discusses implications and importance of these simulations; and makes…

  17. Computer Simulation of Incomplete-Data Interpretation Exercise.

    ERIC Educational Resources Information Center

    Robertson, Douglas Frederick

    1987-01-01

    Described is a computer simulation that was used to help general education students enrolled in a large introductory geology course. The purpose of the simulation is to learn to interpret incomplete data. Students design a plan to collect bathymetric data for an area of the ocean. Procedures used by the students and instructor are included.

  18. Learner Perceptions of Realism and Magic in Computer Simulations.

    ERIC Educational Resources Information Center

    Hennessy, Sara; O'Shea, Tim

    1993-01-01

    Discusses the possible lack of credibility in educational interactive computer simulations. Topics addressed include "Shopping on Mars," a collaborative adventure game for arithmetic calculation that uses direct manipulation in the microworld; the Alternative Reality Kit, a graphical animated environment for creating interactive simulations; and…

  19. Preliminary Evaluation of a Computer Simulation of Long Cane Use.

    ERIC Educational Resources Information Center

    Chubon, Robert A.; Keith, Ashley D.

    1989-01-01

    Developed and evaluated a long-cane mobility computer simulation as a visual rehabilitation training device and research tool with graduate students assigned to basic instruction (BI) (N=10) or enhanced instruction (EI) (N=9). Found that a higher percentage of EI students completed the simulation task. Concluded that students registered positive understanding changes,…

  20. Enhancing Computer Science Education with a Wireless Intelligent Simulation Environment

    ERIC Educational Resources Information Center

    Cook, Diane J.; Huber, Manfred; Yerraballi, Ramesh; Holder, Lawrence B.

    2004-01-01

    The goal of this project is to develop a unique simulation environment that can be used to increase students' interest and expertise in Computer Science curriculum. Hands-on experience with physical or simulated equipment is an essential ingredient for learning, but many approaches to training develop a separate piece of equipment or software for…