NASA Astrophysics Data System (ADS)
Cohen, F.; Kasahara, K.
As described in an accompanying paper (Kasahara), full Monte Carlo (MC) simulation of air showers in the GZK region is possible with a distributed parallel-processing method. However, it still requires long computation times, even with the ~50 to ~100 CPUs available in many PC-cluster environments. Air showers fluctuate strongly from event to event, so one or a few events are not sufficient for practical applications. We note, however, that the fluctuations appear only in the longitudinal development: if we examine the shower ingredients (energy spectrum, angular distribution, arrival-time distribution, etc., and their correlations) at the same shower "age", they are almost identical, or can at least be scaled (e.g., for the lateral distribution, we may use an appropriate Moliere length). In some cases (for muons and hadrons), another parameter may be used instead of the age. Based on this fact, we developed a new fast and accurate MC simulation scheme that utilizes a database in which full MC results are stored (FDD). We generate a number of air showers by the usual thin-sampling method. Thin sampling can be very dangerous when detailed ingredients are discussed (say, lateral distributions, energy spectra, and their correlations) but can safely be used to obtain the total number of particles in the longitudinal development (LDD; ~1000 LDD showers can be generated with 50 CPUs in a day). Then, for one particular such event at a certain depth, every detail can be extracted from the FDD by a correspondence rule such as the one using the age. We describe the method and its current status, and show some results for the TA experiment.
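The database lookup described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the table contents, radial grid, and function name are invented, and a real FDD would store many correlated ingredient distributions rather than a single lateral profile.

```python
# Hypothetical sketch of the FDD lookup: full-MC "ingredient" tables
# (here just a lateral distribution) are stored per shower age s; a
# thin-sampled shower supplies (age, total size), and details are read
# from the nearest tabulated age, with radii scaled by the Moliere length.
FDD = {
    # age s : normalized lateral density sampled at r/Moliere = 0.1, 1, 10
    0.8: [2.10, 0.45, 0.012],
    1.0: [1.80, 0.52, 0.020],
    1.2: [1.40, 0.60, 0.031],
}

def lookup_lateral(age, size, moliere_m, r_m):
    """Particle density at radius r_m for a shower of the given age and
    total size, using nearest-neighbor lookup in age and scaled radius."""
    ages = sorted(FDD)
    i = min(range(len(ages)), key=lambda k: abs(ages[k] - age))
    table = FDD[ages[i]]
    grid = [0.1, 1.0, 10.0]            # tabulated r/Moliere values
    j = min(range(len(grid)), key=lambda k: abs(grid[k] - r_m / moliere_m))
    return size * table[j] / moliere_m**2
```

In this sketch the expensive full MC is run once to fill `FDD`; each fast thin-sampled shower then only needs its longitudinal profile to reconstruct detailed ingredients at any depth.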
Time accurate simulations of compressible shear flows
NASA Technical Reports Server (NTRS)
Givi, Peyman; Steinberger, Craig J.; Vidoni, Thomas J.; Madnia, Cyrus K.
1993-01-01
The objectives of this research are to employ direct numerical simulation (DNS) to study the phenomenon of mixing (or lack thereof) in compressible free shear flows and to suggest new means of enhancing mixing in such flows. The shear flow configurations under investigation are those of parallel mixing layers and planar jets under both non-reacting and reacting nonpremixed conditions. During the three years of this research program, several important issues regarding mixing and chemical reactions in compressible shear flows were investigated.
Accurate simulation of optical properties in dyes.
Jacquemin, Denis; Perpète, Eric A; Ciofini, Ilaria; Adamo, Carlo
2009-02-17
Since Antiquity, humans have produced and commercialized dyes. To this day, extraction of natural dyes often requires lengthy and costly procedures. In the 19th century, global markets and new industrial products drove a significant effort to synthesize artificial dyes, characterized by low production costs, huge quantities, and new optical properties (colors). Dyes that encompass classes of molecules absorbing in the UV-visible part of the electromagnetic spectrum now have a wider range of applications, including coloring (textiles, food, paintings), energy production (photovoltaic cells, OLEDs), and pharmaceuticals (diagnostics, drugs). Parallel to the growth in dye applications, researchers have increased their efforts to design and synthesize new dyes to customize absorption and emission properties. In particular, dyes containing one or more metallic centers allow for the construction of fairly sophisticated systems capable of selectively reacting to light of a given wavelength and behaving as molecular devices (photochemical molecular devices, PMDs). Theoretical tools able to predict and interpret the excited-state properties of organic and inorganic dyes allow for an efficient screening of photochemical centers. In this Account, we report recent developments defining a quantitative ab initio protocol (based on time-dependent density functional theory) for modeling dye spectral properties. In particular, we discuss the importance of several parameters, such as the methods used for electronic structure calculations, solvent effects, and statistical treatments. In addition, we illustrate the performance of such simulation tools through case studies. We also comment on current weak points of these methods and ways to improve them. PMID:19113946
Accurate Langevin approaches to simulate Markovian channel dynamics
NASA Astrophysics Data System (ADS)
Huang, Yandong; Rüdiger, Sten; Shuai, Jianwei
2015-12-01
The stochasticity of ion-channel dynamics is significant for physiological processes on neuronal cell membranes. Microscopic simulations of ion-channel gating with Markov chains can be considered an accurate standard. However, such Markovian simulations are computationally demanding for membrane areas of physiologically relevant sizes, which makes the noise-approximating, or Langevin equation, methods advantageous in many cases. In this review, we discuss the Langevin-like approaches, including the channel-based and simplified subunit-based stochastic differential equations proposed by Fox and Lu, and the effective Langevin approaches in which colored noise is added to deterministic differential equations. In the framework of Fox and Lu's classical models, several variants of numerical algorithms, which have been developed recently to improve accuracy as well as efficiency, are also discussed. Through the comparison of different simulation algorithms for ion-channel noise with the standard Markovian simulation, we aim to reveal the extent to which the existing Langevin-like methods approximate results from Markovian methods. Open questions for future studies are also discussed.
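A minimal sketch of the subunit-based Fox-Lu Langevin scheme for a single gating variable, assuming voltage-clamped (constant) rate constants; the parameter values below are illustrative, not taken from the review.

```python
import numpy as np

def fox_lu_gate(alpha, beta, n_channels, n0, dt, steps, rng):
    """Subunit-based Fox-Lu Langevin update for one gating variable n:
    drift alpha*(1-n) - beta*n plus Gaussian noise whose variance
    (alpha*(1-n) + beta*n)/N vanishes as the channel count N grows,
    recovering the deterministic rate equation."""
    n = n0
    traj = np.empty(steps)
    for i in range(steps):
        drift = alpha * (1.0 - n) - beta * n
        var = max(alpha * (1.0 - n) + beta * n, 0.0) / n_channels
        n += drift * dt + np.sqrt(var * dt) * rng.standard_normal()
        n = min(max(n, 0.0), 1.0)  # clip: the open fraction stays in [0, 1]
        traj[i] = n
    return traj

rng = np.random.default_rng(0)
# With 10,000 channels the trajectory stays near the steady state
# alpha/(alpha+beta) = 0.5 with only small fluctuations.
traj = fox_lu_gate(alpha=0.5, beta=0.5, n_channels=10_000,
                   n0=0.5, dt=0.01, steps=5_000, rng=rng)
```

The clipping step is one of the known weak points such schemes must address: the Gaussian noise can push `n` outside [0, 1], which the exact Markovian simulation never does.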
Accurate stress resultants equations for laminated composite deep thick shells
Qatu, M.S.
1995-11-01
This paper derives accurate equations for the normal and shear force resultants as well as the bending and twisting moment resultants for laminated composite deep, thick shells. The stress-resultant equations for laminated composite thick shells are shown to differ from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated on a trapezoid-like shell element to obtain the stress resultants. Numerical results are obtained and show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.
Progress in fast, accurate multi-scale climate simulations
Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.
2015-06-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Anisotropic Turbulence Modeling for Accurate Rod Bundle Simulations
Baglietto, Emilio
2006-07-01
An improved anisotropic eddy viscosity model has been developed for accurate prediction of the thermal-hydraulic performance of nuclear reactor fuel assemblies. The proposed model adopts a non-linear formulation of the stress-strain relationship in order to reproduce anisotropic phenomena, in combination with an optimized low-Reynolds-number formulation, based on Direct Numerical Simulation (DNS) data, that produces the correct damping of the turbulent viscosity in the near-wall region. This work underlines the importance of accurate anisotropic modeling to faithfully reproduce the scale of the turbulence-driven secondary flows inside the bundle subchannels, by comparison with various isothermal and heated experimental cases. The very-low-scale secondary motion is responsible for the increased turbulence transport, which produces a noticeable homogenization of the velocity distribution and consequently of the circumferential cladding temperature distribution, which is of main interest in bundle design. Various fully developed bare-bundle test cases are shown for different geometrical and flow conditions, where the proposed model shows clearly improved predictions, in close agreement with experimental findings, for regular as well as distorted geometries. Finally, the applicability of the model to practical bundle calculations is evaluated through its application in high-Reynolds form on coarse grids, with excellent results. (author)
D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data.
Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried
2016-01-01
Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well. PMID:26930054
How Accurate Are Transition States from Simulations of Enzymatic Reactions?
2015-01-01
The rate expression of traditional transition state theory (TST) assumes no recrossing of the transition state (TS) and thermal quasi-equilibrium between the ground state and the TS. Currently, it is not well understood to what extent these assumptions influence the nature of the activated complex obtained in traditional TST-based simulations of processes in the condensed phase in general and in enzymes in particular. Here we scrutinize these assumptions by characterizing the TSs for hydride transfer catalyzed by the enzyme Escherichia coli dihydrofolate reductase obtained using various simulation approaches. Specifically, we compare the TSs obtained with common TST-based methods and a dynamics-based method. Using a recently developed accurate hybrid quantum mechanics/molecular mechanics potential, we find that the TST-based and dynamics-based methods give considerably different TS ensembles. This discrepancy, which could be due to equilibrium solvation effects and the nature of the reaction coordinate employed and its motion, raises major questions about how to interpret the TSs determined by common simulation methods. We conclude that further investigation is needed to characterize the impact of various TST assumptions on the TS phase-space ensemble and on the reaction kinetics. PMID:24860275
Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting
2016-01-01
This study proposes a new behavioral simulator that uses SIMULINK to perform rapid and accurate simulations of all-digital CMOS time-domain smart temperature sensors (TDSTSs). Inverter-based TDSTSs, which offer the benefits of low cost and a simple structure for temperature-to-digital conversion, have been developed previously. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluation. However, such tools require extremely long simulation times and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) by which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in the SIMULINK environment was developed that substantially accelerates the simulation and simplifies the evaluation procedures. Experiments demonstrate that the results of the proposed simulator show favorable agreement with those obtained from HSPICE simulations, confirming that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507
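The temperature-dependent-model idea can be illustrated with a toy behavioral sketch. The linear delay law, its coefficients, and the counter architecture below are invented for illustration and are not the paper's TDM.

```python
def gate_delay_ns(temp_c, d0=1.0, k=0.002):
    """Assumed linear TDM: inverter delay grows 0.2 %/degC (invented numbers)."""
    return d0 * (1.0 + k * temp_c)

def sensor_ticks(temp_c, stages=101, cycles=1000, ref_ns=0.1):
    """Digital output: reference-clock ticks counted while a ring
    oscillator of `stages` inverters completes `cycles` periods.
    Because the period is linear in temperature, so is the tick count."""
    period_ns = 2 * stages * gate_delay_ns(temp_c)
    return round(cycles * period_ns / ref_ns)

def calibrate(c1, t1, c2, t2):
    """Two-point linear calibration mapping a tick count to a temperature."""
    slope = (t2 - t1) / (c2 - c1)
    return lambda c: t1 + slope * (c - c1)

to_temp = calibrate(sensor_ticks(0.0), 0.0, sensor_ticks(100.0), 100.0)
est = to_temp(sensor_ticks(50.0))   # recovers roughly 50 degC
```

A behavioral model of this kind evaluates in microseconds per temperature point, which is the speed advantage the abstract contrasts with transistor-level HSPICE runs.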
Symphony: a framework for accurate and holistic WSN simulation.
Riliskis, Laurynas; Osipov, Evgeny
2015-01-01
Research on wireless sensor networks has progressed rapidly over the last decade, and these technologies have been widely adopted for both industrial and domestic uses. Several operating systems have been developed, along with a multitude of network protocols for all layers of the communication stack. Industrial Wireless Sensor Network (WSN) systems must satisfy strict criteria and are typically more complex and larger in scale than domestic systems. Together with the non-deterministic behavior of network hardware in real settings, this greatly complicates the debugging and testing of WSN functionality. To facilitate the testing, validation, and debugging of large-scale WSN systems, we have developed a simulation framework that accurately reproduces the processes that occur inside real equipment, including both hardware- and software-induced delays. The core of the framework consists of a virtualized operating system and an emulated hardware platform that is integrated with the general purpose network simulator ns-3. Our framework enables the user to adjust the real code base as would be done in real deployments and also to test the boundary effects of different hardware components on the performance of distributed applications and protocols. Additionally we have developed a clock emulator with several different skew models and a component that handles sensory data feeds. The new framework should substantially shorten WSN application development cycles. PMID:25723144
Accurate simulation of terahertz transmission through doped silicon junctions
NASA Astrophysics Data System (ADS)
Jen, Chih-Yu; Richter, Christiaan
2015-03-01
In previous work we presented results demonstrating the ability of transmission-mode terahertz time-domain spectroscopy (THz-TDS) to detect doping-profile differences and deviations in silicon. This capability is potentially useful for quality control in the semiconductor and photovoltaic industries. We shared subsequent experimental results revealing that terahertz interactions with both electrons and holes are strong enough to recognize both n- and p-type doping-profile changes. We also showed that the relatively long wavelength (~1 mm) of THz radiation allows this approach to be compatible with surface treatments such as the texturing (scattering layer) typically used in the solar industry. In this work we further demonstrate the accuracy with which current terahertz optical models can simulate the power spectrum of terahertz radiation transmitted through junctions with known doping profiles (as determined with SIMS). We conclude that current optical models predict the terahertz transmission and absorption in silicon junctions well.
Accurate Position Sensing of Defocused Beams Using Simulated Beam Templates
Awwal, A; Candy, J; Haynam, C; Widmayer, C; Bliss, E; Burkhart, S
2004-09-29
In position detection using matched filtering, one is faced with the challenge of determining the best position in the presence of distortions such as defocus and diffraction noise. This work evaluates the performance of simulated defocused images used as templates against real defocused beams. It was found that an amplitude-modulated phase-only filter is better equipped to deal with real defocused images that suffer from diffraction-noise effects, which produce a textured spot intensity pattern. It is shown that there is a performance tradeoff dependent upon the type and size of the defocused image. A novel automated system was developed that can automatically select the right template type and size. Results of this automation for real defocused images are presented.
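A sketch of the underlying filtering idea, not the authors' implementation: compressing the template's spectral magnitude while keeping its phase interpolates between a classical matched filter (power = 1) and a pure phase-only filter (power = 0), the latter being less dominated by the bright, textured core of a diffracted spot. The exponent value and test pattern are illustrative.

```python
import numpy as np

def ampof_correlate(image, template, power=0.3):
    """Cross-correlate `image` with an amplitude-modulated phase-only
    version of `template`: the template spectrum keeps its phase but its
    magnitude is raised to `power`. Returns the correlation-peak index."""
    F = np.fft.fft2(image)
    T = np.fft.fft2(template, s=image.shape)
    mag = np.abs(T)
    filt = (mag ** power) * np.exp(1j * np.angle(T))
    corr = np.real(np.fft.ifft2(F * np.conj(filt)))
    return np.unravel_index(np.argmax(corr), corr.shape)

img = np.zeros((64, 64))
img[40:44, 20:24] = 1.0          # a small bright "beam spot"
tmpl = np.ones((4, 4))           # idealized template of the spot
peak = ampof_correlate(img, tmpl)  # peaks where the template aligns with the spot
```

Because the combined spectrum stays real and nonnegative up to the shift phase, the correlation peak sits exactly at the template's alignment position for a noise-free input; the `power` knob then trades peak sharpness against robustness to texture in real images.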
Material Models for Accurate Simulation of Sheet Metal Forming and Springback
NASA Astrophysics Data System (ADS)
Yoshida, Fusahito
2010-06-01
For anisotropic sheet metals, modeling of anisotropy and the Bauschinger effect is discussed in the framework of the Yoshida-Uemori kinematic hardening model combined with anisotropic yield functions. The performance of the models in predicting yield loci and cyclic stress-strain responses for several types of steel and aluminum sheets is demonstrated by comparing numerical simulation results with the corresponding experimental observations. From examples of FE simulation of sheet metal forming and springback, it is concluded that modeling both the anisotropy and the Bauschinger effect is essential for accurate numerical simulation.
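As a much-reduced illustration of why kinematic hardening matters here, the following 1D linear kinematic-hardening return-mapping sketch (not the Yoshida-Uemori model itself; all material constants are invented) reproduces the Bauschinger effect: after forward yielding, the back stress shifts the elastic range, so reverse yielding begins at a lower absolute stress.

```python
def kinematic_hardening(strain_path, E=200e3, sy=300.0, H=20e3):
    """Return the stress (MPa) at each total strain in `strain_path`
    for 1D elasto-plasticity with linear kinematic hardening:
    yield condition |sigma - alpha| <= sy, back stress rate H."""
    alpha = 0.0      # back stress (center of the elastic range)
    eps_p = 0.0      # plastic strain
    out = []
    for eps in strain_path:
        sigma_trial = E * (eps - eps_p)
        f = abs(sigma_trial - alpha) - sy
        if f > 0.0:                          # plastic step: radial return
            dgamma = f / (E + H)
            sign = 1.0 if sigma_trial - alpha > 0.0 else -1.0
            eps_p += dgamma * sign
            alpha += H * dgamma * sign
            sigma = E * (eps - eps_p)        # lands back on |sigma-alpha|=sy
        else:
            sigma = sigma_trial              # purely elastic step
        out.append(sigma)
    return out

stresses = kinematic_hardening([0.01, -0.01])
# Forward loading hardens past sy; on reversal the shifted elastic
# range makes reverse yielding start at a reduced absolute stress.
```

The Yoshida-Uemori model elaborates this basic mechanism with a two-surface formulation and cyclic-hardening refinements, which is what the abstract credits for accurate springback prediction.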
DKIST Adaptive Optics System: Simulation Results
NASA Astrophysics Data System (ADS)
Marino, Jose; Schmidt, Dirk
2016-05-01
The 4 m class Daniel K. Inouye Solar Telescope (DKIST), currently under construction, will be equipped with an ultra-high-order solar adaptive optics (AO) system. The requirements and capabilities of such a solar AO system are beyond those of any other solar AO system currently in operation, so we must rely on simulations to estimate and quantify its performance. We present performance estimates for the DKIST AO system obtained with a new solar AO simulation tool. This tool is a flexible and fast end-to-end solar AO simulator that produces accurate simulations while taking advantage of current multi-core computer technology. It relies on full imaging simulations of the extended-field Shack-Hartmann wavefront sensor (WFS), which directly include important secondary effects such as field-dependent distortions and the varying contrast of the WFS sub-aperture images.
Accurate direct Eulerian simulation of dynamic elastic-plastic flow
Kamm, James R; Walter, John W
2009-01-01
The simulation of dynamic, large-strain deformation is an important, difficult, and unsolved computational challenge. Existing Eulerian schemes for dynamic material response are plagued by unresolved issues. We present a new scheme for the first-order system of elasto-plasticity equations in the Eulerian frame. This system has an intrinsic constraint on the inverse deformation gradient, which standard Godunov schemes do not satisfy. The method of Flux Distributions (FD) was devised to discretely enforce such constraints for numerical schemes with cell-centered variables. We describe a Flux Distribution approach that enforces the inverse deformation gradient constraint. As this approach is novel, we do not yet have numerical results to validate our claims. This paper is the first installment of our program to develop this new method.
Open cherry picker simulation results
NASA Technical Reports Server (NTRS)
Nathan, C. A.
1982-01-01
The simulation program associated with a key piece of support equipment to be used to service satellites directly from the Shuttle is assessed. The Open Cherry Picker (OCP) is a manned platform mounted at the end of the remote manipulator system (RMS) and is used to enhance extra vehicular activities (EVA). The results of simulations performed on the Grumman Large Amplitude Space Simulator (LASS) and at the JSC Water Immersion Facility are summarized.
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
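The Monte Carlo core of such a model can be illustrated with a deliberately simplified toy: a single homogeneous, isotropically scattering slab tracked in one dimension. The paper's model is 3D, multispectral, and uses realistic skin geometry; all coefficients below are invented for illustration.

```python
import math
import random

def mc_reflectance(mu_a, mu_s, depth_cm, n_photons, rng):
    """Fraction of launched photon weight that re-emerges from the top
    surface of a slab with absorption mu_a and scattering mu_s (1/cm),
    using weighted photons and isotropic re-scattering (1D toy model)."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    reflected = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0          # launch at surface, heading down
        while True:
            # sample an exponential free path; 1-random() avoids log(0)
            z += uz * (-math.log(1.0 - rng.random()) / mu_t)
            if z < 0.0:                   # photon escaped back out the top
                reflected += w
                break
            if z > depth_cm:              # transmitted through the slab
                break
            w *= albedo                   # deposit the absorbed fraction
            if w < 1e-4:                  # negligible weight: terminate
                break
            uz = 2.0 * rng.random() - 1.0  # isotropic re-scatter (direction cosine)
    return reflected / n_photons
```

Even this toy shows the behavior the imaging model exploits: raising the absorption coefficient (as hemoglobin in a subcutaneous vein does at suitable wavelengths) measurably lowers the diffuse reflectance seen at the surface.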
Toward the Accurate Simulation of Two-Dimensional Electronic Spectra
NASA Astrophysics Data System (ADS)
Giussani, Angelo; Nenov, Artur; Segarra-Martí, Javier; Jaiswal, Vishal K.; Rivalta, Ivan; Dumont, Elise; Mukamel, Shaul; Garavelli, Marco
2015-06-01
Two-dimensional pump-probe electronic spectroscopy is a powerful technique able to provide both high spectral and temporal resolution, allowing the analysis of ultrafast complex reactions occurring via complementary pathways by the identification of decay-specific fingerprints. [1-2] Understanding the origin of the experimentally recorded signals in a two-dimensional electronic spectrum requires the characterization of the electronic states involved in the electronic transitions photoinduced by the pump/probe pulses in the experiment. Such a goal constitutes a considerable computational challenge, since up to 100 states need to be described, for which state-of-the-art methods such as RASSCF and RASPT2 have to be wisely employed. [3] With the present contribution, the main features and potentialities of two-dimensional electronic spectroscopy are presented, together with the machinery under continuous development in our groups to compute two-dimensional electronic spectra. Results obtained using different levels of theory are shown, with the computed two-dimensional electronic spectra for some specific cases as examples. [2-4] [1] Rivalta I, Nenov A, Cerullo G, Mukamel S, Garavelli M, Int. J. Quantum Chem., 2014, 114, 85 [2] Nenov A, Segarra-Martí J, Giussani A, Conti I, Rivalta I, Dumont E, Jaiswal V K, Altavilla S, Mukamel S, Garavelli M, Faraday Discuss. 2015, DOI: 10.1039/C4FD00175C [3] Nenov A, Giussani A, Segarra-Martí J, Jaiswal V K, Rivalta I, Cerullo G, Mukamel S, Garavelli M, J. Chem. Phys., submitted [4] Nenov A, Giussani A, Fingerhut B P, Rivalta I, Dumont E, Mukamel S, Garavelli M, Phys. Chem. Chem. Phys., submitted [5] Krebs N, Pugliesi I, Hauer J, Riedle E, New J. Phys., 2013, 15, 08501
Massively Parallel Processing for Fast and Accurate Stamping Simulations
NASA Astrophysics Data System (ADS)
Gress, Jeffrey J.; Xu, Siguang; Joshi, Ramesh; Wang, Chuan-tao; Paul, Sabu
2005-08-01
The competitive automotive market drives automotive manufacturers to speed up vehicle development cycles and reduce lead time. Fast tooling development is one of the key areas supporting fast and short vehicle development programs (VDP). In the past ten years, stamping simulation has become the most effective validation tool in predicting and resolving all potential formability and quality problems before the dies are physically made. Stamping simulation and formability analysis has become a critical business segment in the GM math-based die engineering process. As simulation has become one of the major production tools in the engineering factory, speed and accuracy are two of the most important measures of stamping simulation technology. The speed and time-in-system of forming analysis become even more critical in supporting fast VDP and tooling readiness. Since 1997, the General Motors Die Center has been working jointly with our software vendor to develop and implement a parallel version of simulation software for mass production analysis applications. By 2001, this technology had matured in the form of distributed memory processing (DMP) of draw die simulations in a networked distributed-memory computing environment. In 2004, this technology was refined to massively parallel processing (MPP) and extended to line die forming analysis (draw, trim, flange, and associated spring-back) running on a dedicated computing environment. The evolution of this technology and the insight gained through the implementation of DMP/MPP technology, as well as performance benchmarks, are discussed in this publication.
The FLUKA Code: An Accurate Simulation Tool for Particle Therapy
Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis
2016-01-01
Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of an MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution addresses the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field, as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, with both proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically similar cases will be presented both in terms of absorbed dose and biological dose calculations, describing the various available features. PMID:27242956
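The CT-to-phantom translation mentioned above amounts to mapping Hounsfield units onto material/density bins. A minimal sketch of that idea (a coarse, Schneider-style segmentation; the bin edges, material names, and densities are illustrative assumptions, not FLUKA/Flair's actual calibration):

```python
def hu_to_material(hu):
    """Map a CT Hounsfield value to a (material, density g/cm3) pair using a
    coarse segmentation table. All thresholds and densities here are
    hypothetical placeholders for illustration."""
    table = [
        (-1000, -950, "air", 0.0012),
        (-950, -120, "lung", 0.26),
        (-120, 120, "soft tissue", 1.03),
        (120, 2000, "bone", 1.6),
    ]
    for lo, hi, name, density in table:
        if lo <= hu < hi:
            return name, density
    return "undefined", 0.0
```

Applying this voxel by voxel to a CT volume yields the voxel-based computational phantom that the transport code then tracks particles through.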
Exploring accurate Poisson–Boltzmann methods for biomolecular simulations
Wang, Changhao; Wang, Jun; Cai, Qin; Li, Zhilin; Zhao, Hong-Kai; Luo, Ray
2013-01-01
Accurate and efficient treatment of electrostatics is a crucial step in computational analyses of biomolecular structures and dynamics. In this study, we have explored a second-order finite-difference numerical method to solve the widely used Poisson–Boltzmann equation for electrostatic analyses of realistic biomolecules. The so-called immersed interface method was first validated and found to be consistent with the classical weighted harmonic averaging method for a diversified set of test biomolecules. The numerical accuracy and convergence behaviors of the new method were next analyzed in its computation of numerical reaction field grid potentials, energies, and atomic solvation forces. Overall, convergence behaviors similar to those of the classical method were observed. Interestingly, the new method was found to deliver more accurate and better-converged grid potentials than the classical method on or near the molecular surface, though the numerical advantage of the new method is reduced when grid potentials are extrapolated to the molecular surface. Our exploratory study indicates the need to further improve interpolation/extrapolation schemes in addition to the development of higher-order numerical methods, which have attracted most attention in the field. PMID:24443709
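The "weighted harmonic averaging" baseline referenced above can be illustrated in one dimension, where it reproduces the exact jump in the potential gradient across a dielectric interface. A minimal sketch (not the immersed interface method of the paper; a 1D finite-volume Laplace solve with an assumed dielectric jump at x = 0.5 and illustrative permittivities):

```python
import numpy as np

def solve_piecewise_laplace(eps1=2.0, eps2=80.0, n=64, phi_l=0.0, phi_r=1.0):
    """Solve d/dx(eps(x) dphi/dx) = 0 on [0,1] with Dirichlet ends and a
    dielectric jump at x = 0.5, using harmonic averaging of eps at cell
    faces (cell-centered unknowns)."""
    h = 1.0 / n
    centers = (np.arange(n) + 0.5) * h
    eps = np.where(centers < 0.5, eps1, eps2)  # piecewise-constant dielectric

    # face conductances: harmonic mean between neighbor cells, half-cell
    # distance at the two Dirichlet boundaries
    g_int = 2.0 * eps[:-1] * eps[1:] / (eps[:-1] + eps[1:]) / h
    g_l = eps[0] / (0.5 * h)
    g_r = eps[-1] / (0.5 * h)

    A = np.zeros((n, n))
    b = np.zeros(n)
    for j in range(n):
        if j > 0:
            A[j, j] += g_int[j - 1]; A[j, j - 1] -= g_int[j - 1]
        if j < n - 1:
            A[j, j] += g_int[j]; A[j, j + 1] -= g_int[j]
    A[0, 0] += g_l;   b[0] += g_l * phi_l
    A[-1, -1] += g_r; b[-1] += g_r * phi_r
    phi = np.linalg.solve(A, b)

    # exact solution: constant flux J through two dielectric "resistors"
    J = (phi_r - phi_l) / (0.5 / eps1 + 0.5 / eps2)
    exact = np.where(centers < 0.5,
                     phi_l + J * centers / eps1,
                     phi_r - J * (1.0 - centers) / eps2)
    return centers, phi, exact
```

With the interface lying on a cell face, harmonic averaging recovers the exact piecewise-linear potential at the cell centers, which is why it serves as the reference scheme in interface-method comparisons.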
Developing accurate simulations for high-speed fiber links
NASA Astrophysics Data System (ADS)
Searcy, Steven; Stark, Andrew; Hsueh, Yu-Ting; Detwiler, Thomas; Tibuleac, Sorin; Chang, GK; Ralph, Stephen E.
2011-01-01
Reliable simulations of high-speed fiber optic links are necessary to understand, design, and deploy fiber networks. Laboratory experiments cannot explore all possible component variations and fiber environments that are found in today's deployed systems. Simulations typically depict relative penalties compared to a reference link. However, absolute performance metrics are required to assess actual deployment configurations. Here we detail the efforts within the Georgia Tech 100G Consortium towards achieving high absolute accuracy between simulation and experimental performance with a goal of +/-0.25 dB for back-to-back configuration, and +/-0.5 dB for transmission over multiple spans with different dispersion maps. We measure all possible component parameters including fiber length, loss, and dispersion for use in simulation. We also validate experimental methods of performance evaluation including OSNR assessment and DSP-based demodulation. We investigate a wide range of parameters including modulator chirp, polarization state, polarization dependent loss, transmit spectrum, laser linewidth, and fiber nonlinearity. We evaluate 56 Gb/s (single-polarization) and 112 Gb/s (dual-polarization) DQPSK and coherent QPSK within a 50 GHz DWDM environment with 10 Gb/s OOK adjacent channels for worst-case XPM effects. We demonstrate good simulation accuracy within linear and some nonlinear regimes for a wide range of OSNR in both back-to-back configuration and up to eight spans, over a range of launch powers. This allows us to explore a wide range of environments not available in the lab, including different fiber types, ROADM passbands, and levels of crosstalk. Continued exploration is required to validate robustness over various demodulation algorithms.
NASA Astrophysics Data System (ADS)
Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A.; Siepmann, J. Ilja
2015-09-01
Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor-liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields Tc = 1.3128 ± 0.0016, ρc = 0.316 ± 0.004, and pc = 0.1274 ± 0.0013 in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρt ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using rcut = 3.5σ yield Tc and pc that are higher by 0.2% and 1.4% than simulations with rcut = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that rcut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard-core square-well particles with various ranges of the
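The analytical tail corrections at issue here are the standard textbook expressions for a truncated Lennard-Jones fluid, which assume g(r) = 1 beyond the cutoff. A minimal sketch in reduced units, evaluated at the near-critical density from the abstract for the three cutoffs compared there:

```python
import math

def lj_tail_corrections(rho, rcut, epsilon=1.0, sigma=1.0):
    """Standard analytical tail corrections for a truncated Lennard-Jones
    fluid (Allen & Tildesley form), assuming g(r) = 1 beyond rcut:
    per-particle energy term and pressure term for the neglected
    interactions beyond the cutoff."""
    sr3 = (sigma / rcut) ** 3
    sr9 = sr3 ** 3
    u_tail = (8.0 / 3.0) * math.pi * rho * epsilon * sigma**3 * (sr9 / 3.0 - sr3)
    p_tail = (16.0 / 3.0) * math.pi * rho**2 * epsilon * sigma**3 * (2.0 * sr9 / 3.0 - sr3)
    return u_tail, p_tail

# near-critical density from the abstract, cutoffs compared there
for rc in (3.5, 5.0, 8.0):
    u, p = lj_tail_corrections(rho=0.316, rcut=rc)
    print(f"rcut = {rc:4.1f} sigma: u_tail = {u:+.5f}, p_tail = {p:+.5f}")
```

The corrections shrink rapidly with the cutoff, which is why the 3.5σ and 8σ results in the abstract differ only at the few-tenths-of-a-percent level once tail corrections are applied.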
Accurate, practical simulation of satellite infrared radiometer spectral data
Sullivan, T.J.
1982-09-01
This study's purpose is to determine whether a relatively simple random band model formulation of atmospheric radiation transfer in the infrared region can provide valid simulations of narrow interval satellite-borne infrared sounder system data. Detailed ozonesondes provide the pertinent atmospheric information and sets of calibrated satellite measurements provide the validation. High resolution line-by-line model calculations are included to complete the evaluation.
Time Accurate CFD Simulations of the Orion Launch Abort Vehicle in the Transonic Regime
NASA Technical Reports Server (NTRS)
Ruf, Joseph; Rojahn, Josh
2011-01-01
Significant asymmetries in the fluid dynamics were calculated for some cases in the CFD simulations of the Orion Launch Abort Vehicle through its abort trajectories. The CFD simulations were performed steady state and in three dimensions with symmetric geometries, no freestream sideslip angle, and motors firing. The trajectory points at issue were in the transonic regime, at 0 and +/- 5 deg angles of attack with the Abort Motors firing, with and without the Attitude Control Motors (ACM) firing. In some of the cases the asymmetric fluid dynamics resulted in aerodynamic side forces large enough to overcome the control authority of the ACMs. MSFC's Fluid Dynamics Group supported the investigation into the cause of the flow asymmetries with time accurate CFD simulations, utilizing a hybrid RANS-LES turbulence model. The results show that the flow over the vehicle and the subsequent interaction with the AB and ACM motor plumes were unsteady. The resulting instantaneous aerodynamic forces were oscillatory with fairly large magnitudes. Time averaged aerodynamic forces were essentially symmetric.
High-Resolution Tsunami Inundation Simulations Based on Accurate Estimations of Coastal Waveforms
NASA Astrophysics Data System (ADS)
Oishi, Y.; Imamura, F.; Sugawara, D.; Furumura, T.
2015-12-01
We evaluate the accuracy of high-resolution tsunami inundation simulations in detail using the actual observational data of the 2011 Tohoku-Oki earthquake (Mw 9.0) and investigate methodologies to improve the simulation accuracy. Due to the recent development of parallel computing technologies, high-resolution tsunami inundation simulations are conducted more commonly than before. To evaluate how accurately these simulations can reproduce inundation processes, we test several types of simulation configurations on a parallel computer, where we can utilize the observational data (e.g., offshore and coastal waveforms and inundation properties) recorded during the Tohoku-Oki earthquake. Before discussing the accuracy of inundation processes on land, the incident waves at coastal sites must be accurately estimated. However, for megathrust earthquakes, it is difficult to find a tsunami source that provides accurate estimations of tsunami waveforms at every coastal site because of the complex spatiotemporal distribution of the source and the limitations of observation. To overcome this issue, we employ a site-specific source inversion approach that increases the estimation accuracy within a specific coastal site by applying appropriate weighting to the observational data in the inversion process. We applied our source inversion technique to the Tohoku tsunami and conducted inundation simulations using 5-m resolution digital elevation model (DEM) data for the coastal areas around Miyako Bay and Sendai Bay. The estimated waveforms at the coastal wave gauges of these bays successfully agree with the observed waveforms. However, the simulations overestimate the inundation extent, indicating the necessity to improve the inundation model. We find that the value of Manning's roughness coefficient should be modified from the often-used value of n = 0.025 to n = 0.033 to obtain proper results at both cities. In this presentation, the simulation results with several
NASA Astrophysics Data System (ADS)
Garrison, Stephen L.
2005-07-01
The combination of molecular simulations and potentials obtained from quantum chemistry is shown to provide reasonably accurate thermodynamic property predictions. Gibbs ensemble Monte Carlo simulations are used to understand the effects of small perturbations to various regions of the model Lennard-Jones 12-6 potential. However, when the phase behavior and second virial coefficient are scaled by the critical properties calculated for each potential, the results obey a corresponding states relation, suggesting a non-uniqueness problem for interaction potentials fit to experimental phase behavior. Several variations of a procedure collectively referred to as quantum mechanical Hybrid Methods for Interaction Energies (HM-IE) are developed and used to accurately estimate interaction energies from CCSD(T) calculations with a large basis set in a computationally efficient manner for the neon-neon, acetylene-acetylene, and nitrogen-benzene systems. Using these results and methods, an ab initio, pairwise-additive, site-site potential for acetylene is determined and then improved using results from molecular simulations using this initial potential. The initial simulation results also indicate that a limited range of energies is important for accurate phase behavior predictions. Second virial coefficients calculated from the improved potential indicate that one set of experimental data in the literature is likely erroneous. This prescription is then applied to methanethiol. Difficulties in modeling the effects of the lone pair electrons suggest that charges on the lone pair sites negatively impact the ability of the intermolecular potential to describe certain orientations, but that the lone pair sites may be necessary to reasonably duplicate the interaction energies for several orientations. Two possible methods for incorporating the effects of three-body interactions into simulations within the pairwise-additivity formulation are also developed. A low density
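The second virial coefficient used above as a consistency check has a simple closed quadrature form, B2(T) = -2π ∫ (exp(-u(r)/kT) - 1) r² dr. A minimal sketch for the Lennard-Jones 12-6 potential in reduced units (σ = ε = 1; the grid parameters are illustrative):

```python
import numpy as np

def lj_b2(T, rmax=20.0, n=20001):
    """Reduced second virial coefficient of the Lennard-Jones 12-6 fluid,
    B2(T) = -2*pi * integral of (exp(-u(r)/T) - 1) * r^2 dr, evaluated by
    trapezoidal quadrature on a uniform grid."""
    r = np.linspace(1e-4, rmax, n)
    u = 4.0 * (r**-12.0 - r**-6.0)          # LJ 12-6 pair potential
    f = (np.exp(-u / T) - 1.0) * r * r      # Mayer-function integrand
    # trapezoidal rule (written out to avoid version-specific numpy helpers)
    return float(-2.0 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))
```

B2 changes sign at the Boyle temperature (about T* = 3.42 for the LJ fluid), so the sign and zero crossing provide a quick sanity check when comparing candidate potentials against experimental virial data.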
NASA Astrophysics Data System (ADS)
Heidari, M.; Cortes-Huerto, R.; Donadio, D.; Potestio, R.
2016-07-01
In adaptive resolution simulations the same system is concurrently modeled with different resolution in different subdomains of the simulation box, thereby enabling an accurate description in a small but relevant region, while the rest is treated with a computationally parsimonious model. In this framework, electrostatic interaction, whose accurate treatment is a crucial aspect in the realistic modeling of soft matter and biological systems, represents a particularly acute problem due to the intrinsic long-range nature of Coulomb potential. In the present work we propose and validate the usage of a short-range modification of Coulomb potential, the Damped shifted force (DSF) model, in the context of the Hamiltonian adaptive resolution simulation (H-AdResS) scheme. This approach, which is here validated on bulk water, ensures a reliable reproduction of the structural and dynamical properties of the liquid, and enables a seamless embedding in the H-AdResS framework. The resulting dual-resolution setup is implemented in the LAMMPS simulation package, and its customized version employed in the present work is made publicly available.
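The damped shifted force model referenced above has a compact closed form in which both the pair energy and the pair force go smoothly to zero at the cutoff. A minimal sketch (the damping parameter and cutoff below are illustrative choices, not the values used in the paper):

```python
import math

def dsf_potential(r, qq, alpha=0.2, rcut=12.0):
    """Damped shifted-force (DSF) Coulomb pair potential for charge product
    qq (Gaussian-units sketch): the erfc-damped Coulomb term is shifted so
    that both the potential and its first derivative vanish at rcut,
    removing the need for long-range lattice sums."""
    if r >= rcut:
        return 0.0
    shift_v = math.erfc(alpha * rcut) / rcut
    shift_f = (math.erfc(alpha * rcut) / rcut**2
               + 2.0 * alpha / math.sqrt(math.pi)
               * math.exp(-(alpha * rcut) ** 2) / rcut)
    return qq * (math.erfc(alpha * r) / r - shift_v + shift_f * (r - rcut))
```

Because the interaction and its gradient both vanish at the cutoff, the potential is strictly short-ranged, which is what makes it embeddable in a dual-resolution (H-AdResS) setup without a reciprocal-space sum.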
Wijma, Hein J; Marrink, Siewert J; Janssen, Dick B
2014-07-28
Computational approaches could decrease the need for the laborious high-throughput experimental screening that is often required to improve enzymes by mutagenesis. Here, we report that using multiple short molecular dynamics (MD) simulations makes it possible to accurately model enantioselectivity for large numbers of enzyme-substrate combinations at low computational cost. We chose four different haloalkane dehalogenases as model systems because of the availability of a large set of experimental data on the enantioselective conversion of 45 different substrates. To model the enantioselectivity, we quantified the frequency of occurrence of catalytically productive conformations (near attack conformations) for pairs of enantiomers during MD simulations. We found that the angle of nucleophilic attack that leads to carbon-halogen bond cleavage was a critical variable that limited the occurrence of productive conformations; enantiomers for which this angle reached values close to 180° were preferentially converted. A cluster of 20-40 very short (10 ps) MD simulations allowed adequate conformational sampling and resulted in much better agreement with experimental enantioselectivities than single long MD simulations (22 ns), while the computational costs were 50- to 100-fold lower. With single long MD simulations, the dynamics of enzyme-substrate complexes remained confined to a conformational subspace that rarely changed significantly, whereas with multiple short MD simulations a larger diversity of conformations of enzyme-substrate complexes was observed. PMID:24916632
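The quantity being counted above is a per-frame geometric test. A minimal sketch of the bookkeeping (the numeric angle and distance criteria below are hypothetical placeholders, not the published near-attack-conformation definition, and the E-value proxy as a ratio of NAC frequencies is likewise an illustrative assumption):

```python
import numpy as np

def nac_fraction(angles_deg, distances, angle_min=157.0, dist_max=3.41):
    """Fraction of MD frames in a near-attack conformation (NAC):
    attack angle close to 180 degrees AND a short nucleophile-carbon
    distance. Both thresholds are illustrative assumptions."""
    angles = np.asarray(angles_deg)
    dists = np.asarray(distances)
    return float(np.mean((angles >= angle_min) & (dists <= dist_max)))

def enantioselectivity_estimate(frac_r, frac_s):
    """Crude E-value proxy: ratio of NAC frequencies of the two enantiomers
    (assumes NAC frequency is rate-limiting)."""
    return frac_r / frac_s if frac_s > 0 else float("inf")

# synthetic example: the R enantiomer reaches near-linear attack
# geometries more often than the S enantiomer
rng = np.random.default_rng(0)
ang_r = rng.normal(165.0, 10.0, 5000)
ang_s = rng.normal(150.0, 10.0, 5000)
dist = rng.normal(3.2, 0.2, 5000)
f_r = nac_fraction(ang_r, dist)
f_s = nac_fraction(ang_s, dist)
print(f"NAC fraction R: {f_r:.3f}, S: {f_s:.3f}, E ~ {f_r / f_s:.1f}")
```

In the multiple-short-simulation protocol these fractions would be pooled over the 20-40 independent 10 ps trajectories per enzyme-substrate pair.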
NASA Technical Reports Server (NTRS)
VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.
2000-01-01
The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data, because such measurements are rare in the detail necessary to be useful in high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.
Development of modified cable models to simulate accurate neuronal active behaviors
2014-01-01
In large network and single three-dimensional (3-D) neuron simulations, high computing speed dictates using reduced cable models to simulate neuronal firing behaviors. However, these models are unwarranted under active conditions and lack accurate representation of dendritic active conductances that greatly shape neuronal firing. Here, realistic 3-D (R3D) models (which contain full anatomical details of dendrites) of spinal motoneurons were systematically compared with their reduced single unbranched cable (SUC, which reduces the dendrites to a single electrically equivalent cable) counterpart under passive and active conditions. The SUC models matched the R3D model's passive properties but failed to match key active properties, especially active behaviors originating from dendrites. For instance, persistent inward currents (PIC) hysteresis, frequency-current (FI) relationship secondary range slope, firing hysteresis, plateau potential partial deactivation, staircase currents, synaptic current transfer ratio, and regional FI relationships were not accurately reproduced by the SUC models. The dendritic morphology oversimplification and lack of dendritic active conductances spatial segregation in the SUC models caused significant underestimation of those behaviors. Next, SUC models were modified by adding key branching features in an attempt to restore their active behaviors. The addition of primary dendritic branching only partially restored some active behaviors, whereas the addition of secondary dendritic branching restored most behaviors. Importantly, the proposed modified models successfully replicated the active properties without sacrificing model simplicity, making them attractive candidates for running R3D single neuron and network simulations with accurate firing behaviors. The present results indicate that using reduced models to examine PIC behaviors in spinal motoneurons is unwarranted. PMID:25277743
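The reason reduced cables can match passive behavior is that steady-state passive attenuation depends only on aggregate parameters (electrotonic length and space constant), not on branching detail. A minimal sketch of that classical result (sealed-end cable; all parameter values are illustrative, and this is not the authors' motoneuron model):

```python
import math

def passive_cable_voltage(x, length, lam, v0=1.0):
    """Steady-state voltage along a passive cable with a sealed distal end:
    V(x) = V0 * cosh((L - x)/lambda) / cosh(L/lambda),
    for injection at x = 0. Depends only on L and the space constant
    lambda, which is why an electrically equivalent single cable can
    reproduce passive properties of a branched tree."""
    return v0 * math.cosh((length - x) / lam) / math.cosh(length / lam)

# attenuation profile for an electrotonic length L/lambda = 1 (illustrative)
profile = [passive_cable_voltage(x, length=1.0, lam=1.0) for x in (0.0, 0.5, 1.0)]
print([round(v, 3) for v in profile])
```

Active conductances break this equivalence: their spatial segregation along real dendrites has no counterpart in a single equivalent cable, which is the failure mode the abstract documents.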
NASA Astrophysics Data System (ADS)
Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime
2016-03-01
Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities such as a Cube and Sphere (CUSP) imaging provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.
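The hybrid strategy above rests on the fact that in simple geometries the DW-MRI signal has a closed form against which Monte Carlo phase accumulation can be checked. A toy illustration for free diffusion under the narrow-pulse approximation (reduced units, all parameter values illustrative; this is not the paper's hybrid code):

```python
import numpy as np

def free_diffusion_signal_mc(q=1.0, D=1.0, Delta=0.5, n=200_000, seed=1):
    """Monte Carlo DW-MRI signal for free diffusion under the narrow-pulse
    approximation: each spin acquires a phase q * (net displacement over
    the diffusion time Delta), and the normalized signal is the ensemble
    average of cos(phase)."""
    rng = np.random.default_rng(seed)
    dx = rng.normal(0.0, np.sqrt(2.0 * D * Delta), n)  # 1D Gaussian displacements
    return float(np.mean(np.cos(q * dx)))

def free_diffusion_signal_analytic(q=1.0, D=1.0, Delta=0.5):
    """Closed-form counterpart: S/S0 = exp(-q^2 * D * Delta)."""
    return float(np.exp(-q * q * D * Delta))
```

In the hybrid scheme, analytic expressions of this kind handle spins inside spheres and cylinders, while random walkers of the Monte Carlo type above are used only in the irregular extra-cellular space, which is where the reported ~90% runtime saving comes from.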
Application of the G-JF discrete-time thermostat for fast and accurate molecular simulations
NASA Astrophysics Data System (ADS)
Grønbech-Jensen, Niels; Hayre, Natha Robert; Farago, Oded
2014-02-01
A new Langevin-Verlet thermostat that preserves the fluctuation-dissipation relationship for discrete time steps is applied to molecular modeling and tested against several popular suites (AMBER, GROMACS, LAMMPS) using a small molecule as an example that can be easily simulated by all three packages. Contrary to existing methods, the new thermostat exhibits no detectable changes in the sampling statistics as the time step is varied in the entire numerical stability range. The simple form of the method, which we express in the three common forms (Velocity-Explicit, Störmer-Verlet, and Leap-Frog), allows for easy implementation within existing molecular simulation packages to achieve faster and more accurate results with no cost in either computing time or programming complexity.
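The G-JF update has a compact published form. The sketch below follows the velocity-explicit version (reduced units; the harmonic-oscillator test system and all parameter values are illustrative, not taken from the paper's benchmarks):

```python
import math, random

def gjf_step(x, v, f, force, dt, gamma, kT, m=1.0, rng=random):
    """One step of the Gronbech-Jensen/Farago (G-JF) Langevin-Verlet
    integrator, velocity-explicit form."""
    b = 1.0 / (1.0 + gamma * dt / (2.0 * m))
    a = (1.0 - gamma * dt / (2.0 * m)) / (1.0 + gamma * dt / (2.0 * m))
    beta = rng.gauss(0.0, math.sqrt(2.0 * gamma * kT * dt))  # thermal noise
    x_new = x + b * dt * v + (b * dt * dt / (2.0 * m)) * f + (b * dt / (2.0 * m)) * beta
    f_new = force(x_new)
    v_new = a * v + (dt / (2.0 * m)) * (a * f + f_new) + (b / m) * beta
    return x_new, v_new, f_new

# sanity check on a harmonic oscillator: configurational sampling should
# give <x^2> = kT/k, with no drift as the time step is varied
random.seed(2)
k, kT, dt, gamma = 1.0, 1.0, 0.2, 1.0
force = lambda y: -k * y
x, v = 0.0, 0.0
f = force(x)
acc = 0.0
nsteps = 400_000
for _ in range(nsteps):
    x, v, f = gjf_step(x, v, f, force, dt, gamma, kT)
    acc += x * x
x2_mean = acc / nsteps
print(f"<x^2> = {x2_mean:.3f} (target {kT / k:.3f})")
```

The defining property tested here is the one the abstract emphasizes: the configurational statistics stay on target across the stable time-step range, rather than only in the small-dt limit.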
Numerical simulations of catastrophic disruption: Recent results
NASA Technical Reports Server (NTRS)
Benz, W.; Asphaug, E.; Ryan, E. V.
1994-01-01
Numerical simulations have been used to study high velocity two-body impacts. In this paper, a two-dimensional Lagrangian finite difference hydro-code and a three-dimensional smooth particle hydro-code (SPH) are described and initial results reported. These codes can be, and have been, used to make specific predictions about particular objects in our solar system. But more significantly, they allow us to explore a broad range of collisional events. Certain parameters (size, time) can be studied only over a very restricted range within the laboratory; other parameters (initial spin, low gravity, exotic structure or composition) are difficult to study at all experimentally. The outcomes of numerical simulations lead to a more general and accurate understanding of impacts in their many forms.
Numerical Simulation of the 2004 Indian Ocean Tsunami: Accurate Flooding and drying in Banda Aceh
NASA Astrophysics Data System (ADS)
Cui, Haiyang; Pietrzak, Julie; Stelling, Guus; Androsov, Alexey; Harig, Sven
2010-05-01
The Indian Ocean Tsunami on December 26, 2004 caused one of the largest tsunamis in recent times and led to widespread devastation and loss of life. One of the worst hit regions was Banda Aceh, the capital of the Aceh province, located in the northern part of Sumatra, 150 km from the source of the earthquake. A German-Indonesian Tsunami Early Warning System (GITEWS) (www.gitews.de) is currently under active development. The work presented here is carried out within the GITEWS framework. One of the aims of this project is the development of accurate models with which to simulate the propagation, flooding and drying, and run-up of a tsunami. In this context, TsunAWI has been developed by the Alfred Wegener Institute; it is an explicit finite element model. However, the accurate numerical simulation of flooding and drying requires the conservation of mass and momentum. This is not possible in the current version of TsunAWI. The P1NC-P1 element guarantees mass conservation in a global sense, yet as we show here it is important to guarantee mass conservation at the local level, that is, within each individual cell. Here an unstructured grid, finite volume ocean model is presented. It is derived from the P1NC-P1 element, and is shown to be mass and momentum conserving. Then a number of simulations are presented, including dam break problems with flooding over both wet and dry beds. Excellent agreement is found. Finally, we present simulations for Banda Aceh, and compare the results to on-site survey data, as well as to results from the original TsunAWI code.
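The cell-level conservation property argued for above is a structural feature of finite-volume schemes: each face flux is added to one cell and subtracted from its neighbor, so the total telescopes to the boundary fluxes. A minimal 1D shallow-water dam-break sketch (first-order Rusanov flux, reflective walls; a toy illustration, not the TsunAWI or derived discretization, and without wetting/drying):

```python
import numpy as np

def shallow_water_dam_break(n=200, t_end=0.1, g=9.81):
    """1D shallow-water dam break on [0,1] with a first-order finite-volume
    scheme. Reflective walls carry zero mass flux, so total water mass is
    conserved to round-off, cell by cell and globally."""
    dx = 1.0 / n
    h = np.where(np.arange(n) < n // 2, 2.0, 1.0)   # initial depth (dam at x=0.5)
    hu = np.zeros(n)                                # initial momentum

    def rusanov(hl, hul, hr, hur):
        ul, ur = hul / hl, hur / hr
        fl = np.array([hul, hul * ul + 0.5 * g * hl * hl])
        fr = np.array([hur, hur * ur + 0.5 * g * hr * hr])
        s = max(abs(ul) + np.sqrt(g * hl), abs(ur) + np.sqrt(g * hr))
        return 0.5 * (fl + fr) - 0.5 * s * np.array([hr - hl, hur - hul])

    t = 0.0
    while t < t_end:
        c = np.max(np.abs(hu / h) + np.sqrt(g * h))
        dt = min(0.4 * dx / c, t_end - t)           # CFL-limited step
        # ghost cells implement reflective walls (mirror h, negate hu)
        He = np.concatenate(([h[0]], h, [h[-1]]))
        Qe = np.concatenate(([-hu[0]], hu, [-hu[-1]]))
        F = np.array([rusanov(He[j], Qe[j], He[j + 1], Qe[j + 1])
                      for j in range(n + 1)])       # fluxes at n+1 faces
        h = h - dt / dx * (F[1:, 0] - F[:-1, 0])
        hu = hu - dt / dx * (F[1:, 1] - F[:-1, 1])
        t += dt
    return h, hu, dx
```

Summing the depth update over all cells cancels every interior face flux in pairs, leaving only the (zero) wall fluxes, which is the local-conservation argument made in the abstract.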
Recommendations for accurate numerical blood flow simulations of stented intracranial aneurysms.
Janiga, Gábor; Berg, Philipp; Beuing, Oliver; Neugebauer, Mathias; Gasteiger, Rocco; Preim, Bernhard; Rose, Georg; Skalej, Martin; Thévenin, Dominique
2013-06-01
The number of scientific publications dealing with stented intracranial aneurysms is rapidly increasing. Powerful computational facilities are now available; an accurate computational modeling of hemodynamics in patient-specific configurations is, however, still being sought. Furthermore, there is still no general agreement on the quantities that should be computed and on the most adequate analysis for intervention support. In this article, the accurate representation of patient geometry is first discussed, involving successive improvements. Concerning the second step, the mesh required for the numerical simulation is especially challenging when deploying a stent with very fine wire structures. Third, the description of the fluid properties is a major challenge. Finally, a founded quantitative analysis of the simulation results is obviously needed to support interventional decisions. In the present work, an attempt has been made to review the most important steps for a high-quality computational fluid dynamics computation of virtually stented intracranial aneurysms. This leads to concrete recommendations, whereby the obtained results are discussed not for their medical relevance but for the evaluation of their quality. This investigation will hopefully be helpful for further studies considering stent deployment in patient-specific geometries, in particular regarding the generation of the most appropriate computational model. PMID:23729530
Time Accurate CFD Simulations of the Orion Launch Abort Vehicle in the Transonic Regime
NASA Technical Reports Server (NTRS)
Rojahn, Josh; Ruf, Joe
2011-01-01
Significant asymmetries in the fluid dynamics were calculated for some cases in the CFD simulations of the Orion Launch Abort Vehicle through its abort trajectories. The CFD simulations were performed as steady-state, three-dimensional computations with symmetric geometries, no freestream sideslip angle, and motors firing. The trajectory points at issue were in the transonic regime, at 0 and +/- 5 degrees angle of attack with the Abort Motors firing, with and without the Attitude Control Motors (ACM) firing. In some of the cases the asymmetric fluid dynamics resulted in aerodynamic side forces large enough to overcome the control authority of the ACMs. MSFC's Fluid Dynamics Group supported the investigation into the cause of the flow asymmetries with time-accurate CFD simulations utilizing a hybrid RANS-LES turbulence model. The results show that the flow over the vehicle and the subsequent interaction with the abort motor and ACM plumes were unsteady. The resulting instantaneous aerodynamic forces were oscillatory with fairly large magnitudes. Time-averaged aerodynamic forces were essentially symmetric.
Lee, Myung Won; Meuwly, Markus
2013-12-14
The evaluation of hydration free energies is a sensitive test to assess force fields used in atomistic simulations. We showed recently that the vibrational relaxation times, 1D- and 2D-infrared spectroscopies for CN(-) in water can be quantitatively described from molecular dynamics (MD) simulations with multipolar force fields and slightly enlarged van der Waals radii for the C- and N-atoms. To validate such an approach, the present work investigates the solvation free energy of cyanide in water using MD simulations with accurate multipolar electrostatics. It is found that larger van der Waals radii are indeed necessary to obtain results close to the experimental values when a multipolar force field is used. For CN(-), the van der Waals ranges refined in our previous work yield hydration free energy between -72.0 and -77.2 kcal mol(-1), which is in excellent agreement with the experimental data. In addition to the cyanide ion, we also study the hydroxide ion to show that the method used here is readily applicable to similar systems. Hydration free energies are found to sensitively depend on the intermolecular interactions, while bonded interactions are less important, as expected. We also investigate in the present work the possibility of applying the multipolar force field in scoring trajectories generated using computationally inexpensive methods, which should be useful in broader parametrization studies with reduced computational resources, as scoring is much faster than the generation of the trajectories. PMID:24170171
Pacheco, P; Miller, P; Kim, J; Leese, T; Zabiyaka, Y
2003-05-07
Object-oriented NeuroSys (ooNeuroSys) is a collection of programs for simulating very large networks of biologically accurate neurons on distributed memory parallel computers. It includes two principal programs: ooNeuroSys, a parallel program for solving the large systems of ordinary differential equations arising from the interconnected neurons, and Neurondiz, a parallel program for visualizing the results of ooNeuroSys. Both programs are designed to be run on clusters and use the MPI library to obtain parallelism. ooNeuroSys also includes an easy-to-use Python interface, which allows neuroscientists to quickly develop and test complex neuron models. Both ooNeuroSys and Neurondiz have a design that allows for both high performance and relative ease of maintenance.
Time-Accurate Computational Fluid Dynamics Simulation of a Pair of Moving Solid Rocket Boosters
NASA Technical Reports Server (NTRS)
Strutzenberg, Louise L.; Williams, Brandon R.
2011-01-01
Since the Columbia accident, the threat to the Shuttle launch vehicle from debris during the liftoff timeframe has been assessed by the Liftoff Debris Team at NASA/MSFC. In addition to engineering methods of analysis, CFD-generated flow fields during the liftoff timeframe have been used in conjunction with 3-DOF debris transport methods to predict the motion of liftoff debris. Early models made use of a quasi-steady flow field approximation with the vehicle positioned at a fixed location relative to the ground; however, a moving overset mesh capability has recently been developed for the Loci/CHEM CFD software which enables higher-fidelity simulation of the Shuttle transient plume startup and liftoff environment. The present work details the simulation of the launch pad and mobile launch platform (MLP) with truncated solid rocket boosters (SRBs) moving in a prescribed liftoff trajectory derived from Shuttle flight measurements. Using Loci/CHEM, time-accurate RANS and hybrid RANS/LES simulations were performed for the timeframe T0+0 to T0+3.5 seconds, which spans SRB startup to a vehicle altitude of approximately 90 feet above the MLP. Analysis of the transient flowfield focuses on the evolution of the SRB plumes in the MLP plume holes and the flame trench, impingement on the flame deflector, and especially impingement on the MLP deck, which results in upward flow that is a transport mechanism for debris. The results show excellent qualitative agreement with the visual record from past Shuttle flights, and comparisons to pressure measurements in the flame trench and on the MLP provide confidence in these simulation capabilities.
Achieving accurate simulations of urban impacts on ozone at high resolution
NASA Astrophysics Data System (ADS)
Li, J.; Georgescu, M.; Hyde, P.; Mahalov, A.; Moustaoui, M.
2014-11-01
The effects of urbanization on ozone levels have been widely investigated over cities primarily located in temperate and/or humid regions. In this study, nested WRF-Chem simulations with a finest grid resolution of 1 km are conducted to investigate ozone concentrations [O3] due to urbanization within cities in arid/semi-arid environments. First, a method based on a shape preserving Monotonic Cubic Interpolation (MCI) is developed and used to downscale anthropogenic emissions from the 4 km resolution 2005 National Emissions Inventory (NEI05) to the finest model resolution of 1 km. Using the rapidly expanding Phoenix metropolitan region as the area of focus, we demonstrate the proposed MCI method achieves ozone simulation results with appreciably improved correspondence to observations relative to the default interpolation method of the WRF-Chem system. Next, two additional sets of experiments are conducted, with the recommended MCI approach, to examine impacts of urbanization on ozone production: (1) the urban land cover is included (i.e., urbanization experiments) and, (2) the urban land cover is replaced with the region’s native shrubland. Impacts due to the presence of the built environment on [O3] are highly heterogeneous across the metropolitan area. Increased near surface [O3] due to urbanization of 10-20 ppb is predominantly a nighttime phenomenon while simulated impacts during daytime are negligible. Urbanization narrows the daily [O3] range (by virtue of increasing nighttime minima), an impact largely due to the region’s urban heat island. Our results demonstrate the importance of the MCI method for accurate representation of the diurnal profile of ozone, and highlight its utility for high-resolution air quality simulations for urban areas.
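The shape-preserving monotone cubic idea behind the MCI downscaling can be illustrated with SciPy's PCHIP interpolant, a standard monotone cubic scheme. This is a generic 1-D sketch; the paper's exact MCI formulation is not given, and the grid and emission values below are hypothetical.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Coarse 4 km grid cell centres and emission rates (hypothetical values)
x_coarse = np.arange(0.0, 32.0, 4.0)                           # km
e_coarse = np.array([1.0, 1.2, 3.5, 8.0, 7.5, 2.0, 1.1, 1.0])  # arbitrary units

# Shape-preserving (monotone) cubic interpolant: no spurious overshoots
# between grid points, unlike an unconstrained cubic spline.
mci = PchipInterpolator(x_coarse, e_coarse)

x_fine = np.arange(0.0, 28.01, 1.0)   # 1 km target grid
e_fine = mci(x_fine)

# PCHIP stays within the range of neighbouring coarse-grid values,
# so the downscaled field introduces no artificial emission peaks.
assert e_fine.min() >= e_coarse.min() - 1e-9
assert e_fine.max() <= e_coarse.max() + 1e-9
```

The boundedness property is what distinguishes a shape-preserving scheme from simple bilinear or spline interpolation when downscaling sharply peaked urban emission fields.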
An accurate and efficient 3-D micromagnetic simulation of metal evaporated tape
NASA Astrophysics Data System (ADS)
Jones, M.; Miles, J. J.
1997-07-01
Metal evaporated tape (MET) has a complex column-like structure in which magnetic domains are arranged randomly. In order to accurately simulate the behaviour of MET it is important to capture these aspects of the material in a high-resolution 3-D micromagnetic model. The scale of this problem prohibits the use of traditional scalar computers and leads us to develop algorithms for a vector processor architecture. We demonstrate that despite the material's highly non-uniform structure, it is possible to develop fast vector algorithms for the computation of the magnetostatic interaction field. We do this by splitting the field calculation into near and far components. The near-field component is calculated exactly using an efficient vector algorithm, whereas the far field is calculated approximately using a novel fast Fourier transform (FFT) technique. Results are presented which demonstrate that, in practice, the algorithms require sub-O(N log N) computation time. In addition, results of a highly realistic simulation of hysteresis in MET are presented.
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L.; Glass, Christopher E.; Schuster, David M.
2015-01-01
Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time and as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location for peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy in comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods, such as determining minimized computed errors based on CFL number and sub-iterations, as well as evaluating the frequency content of the unsteady pressures and oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled the development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.
NASA Astrophysics Data System (ADS)
Sellers, Michael; Lisal, Martin; Brennan, John
2015-06-01
Investigating the ability of a molecular model to accurately represent a real material is crucial to model development and use. When the model simulates materials in extreme conditions, one property worth evaluating is the phase transition point. However, phase transitions are often overlooked or approximated because of the difficulty or inaccuracy of simulating them. Techniques such as super-heating or super-squeezing a material to induce a phase change suffer from inherent timescale limitations leading to ``over-driving,'' and dual-phase simulations require many long runs and frequently yield only an inexact location of phase coexistence. We present a compilation of methods for the determination of solid-solid and solid-liquid phase transition points through the accurate calculation of the chemical potential. The methods are applied to the Smith-Bharadwaj atomistic potential's representation of cyclotrimethylene trinitramine (RDX) to accurately determine its melting point (Tm) and the alpha-to-gamma solid phase transition pressure. We also determine Tm for a coarse-grained model of RDX, and compare its value to experiment and to its atomistic counterpart. All methods are employed via the LAMMPS simulator, resulting in 60-70 simulations totaling 30-50 ns. Approved for public release. Distribution is unlimited.
Liquid propellant rocket engine combustion simulation with a time-accurate CFD method
NASA Technical Reports Server (NTRS)
Chen, Y. S.; Shang, H. M.; Liaw, Paul; Hutt, J.
1993-01-01
Time-accurate computational fluid dynamics (CFD) algorithms are among the basic requirements for an engineering or research tool aimed at realistic simulations of transient combustion phenomena, such as combustion instability and transient start-up, inside the rocket engine combustion chamber. A time-accurate, pressure-based method is employed in the FDNS code for combustion model development, in connection with other program development activities such as spray combustion model development and efficient finite-rate chemistry solution method implementation. In the present study, a second-order time-accurate time-marching scheme is employed. For better spatial resolution near discontinuities (e.g., shocks and contact discontinuities), a third-order accurate TVD scheme for modeling the convection terms is implemented in the FDNS code. Necessary modifications to the predictor/multi-corrector solution algorithm to maintain time-accurate wave propagation are also investigated. Benchmark 1-D and multidimensional test cases, which include the classical shock tube wave propagation problems, a resonant pipe test case, the unsteady flow development of a blast tube test case, and an H2/O2 rocket engine chamber combustion start-up transient simulation, are investigated to validate and demonstrate the accuracy and robustness of the present numerical scheme and solution algorithm.
A mechanistic approach for accurate simulation of village scale malaria transmission
Bomblies, Arne; Duchemin, Jean-Bernard; Eltahir, Elfatih AB
2009-01-01
Background: Malaria transmission models commonly incorporate spatial environmental and climate variability for making regional predictions of disease risk. However, a mismatch between these models' typical spatial resolutions and the characteristic scale of malaria vector population dynamics may confound disease risk predictions in areas of high spatial hydrological variability such as the Sahel region of Africa. Methods: Field observations spanning two years from two Niger villages are compared. The two villages are separated by only 30 km but exhibit a ten-fold difference in Anopheles mosquito density; they would be covered by a single grid cell in many malaria models, yet their entomological activity differs greatly. Environmental conditions and associated entomological activity are simulated at high spatial and temporal resolution using a mechanistic approach that couples a distributed hydrology scheme and an entomological model. Model results are compared to regular field observations of Anopheles gambiae sensu lato mosquito populations and local hydrology. The model resolves the formation and persistence of individual pools that facilitate mosquito breeding and predicts spatio-temporal mosquito population variability at high resolution using an agent-based modeling approach. Results: Observations of soil moisture, pool size, and pool persistence are reproduced by the model. The resulting breeding of mosquitoes in the simulated pools yields time-integrated seasonal mosquito population dynamics that closely follow observations of captured mosquito abundance. Interannual differences in mosquito abundance are simulated, and the inter-village difference in mosquito population is reproduced for two years of observations. These modeling results emulate the known focal nature of malaria in Niger Sahel villages. Conclusion: Hydrological variability must be represented at high spatial and temporal resolution to achieve accurate predictive ability of malaria risk.
Joldes, Grand Roman; Wittek, Adam; Miller, Karol
2008-01-01
Real time computation of soft tissue deformation is important for the use of augmented reality devices and for providing haptic feedback during operation or surgeon training. This requires algorithms that are fast, accurate and can handle material nonlinearities and large deformations. A set of such algorithms is presented in this paper, starting with the finite element formulation and the integration scheme used and addressing common problems such as hourglass control and locking. The computation examples presented prove that by using these algorithms, real time computations become possible without sacrificing the accuracy of the results. For a brain model having more than 7000 degrees of freedom, we computed the reaction forces due to indentation with frequency of around 1000 Hz using a standard dual core PC. Similarly, we conducted simulation of brain shift using a model with more than 50 000 degrees of freedom in less than a minute. The speed benefits of our models results from combining the Total Lagrangian formulation with explicit time integration and low order finite elements. PMID:19152791
CgWind: A high-order accurate simulation tool for wind turbines and wind farms
Chand, K K; Henshaw, W D; Lundquist, K A; Singer, M A
2010-02-22
CgWind is a high-fidelity large eddy simulation (LES) tool designed to meet the modeling needs of wind turbine and wind park engineers. This tool combines several advanced computational technologies in order to model accurately the complex and dynamic nature of wind energy applications. The composite grid approach provides high-quality structured grids for the efficient implementation of high-order accurate discretizations of the incompressible Navier-Stokes equations. Composite grids also provide a natural mechanism for modeling bodies in relative motion and complex geometry. Advanced algorithms such as matrix-free multigrid, compact discretizations and approximate factorization will allow CgWind to perform highly resolved calculations efficiently on a wide class of computing resources. Also in development are nonlinear LES subgrid-scale models required to simulate the many interacting scales present in large wind turbine applications. This paper outlines our approach, the current status of CgWind and future development plans.
Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.
2002-01-01
Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat-cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.
SARDA HITL Simulations: System Performance Results
NASA Technical Reports Server (NTRS)
Gupta, Gautam
2012-01-01
This presentation gives an overview of the 2012 SARDA human-in-the-loop simulation and presents a summary of system performance results from the simulation, including delay, throughput, and fuel consumption.
A Three Dimensional Parallel Time Accurate Turbopump Simulation Procedure Using Overset Grid Systems
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Chan, William; Kwak, Dochan
2001-01-01
The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows have been based on relatively lower-fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start-up and non-uniform inflows, and will eventually impact system vibration and structures. In this paper, the progress toward the capability of complete simulation of the turbopump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbopump is used as a test case for evaluation of the hybrid MPI/OpenMP and MLP versions of the INS3D code. CAD-to-solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbopump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability will be presented along with the performance of parallel versions of the code.
A Three-Dimensional Parallel Time-Accurate Turbopump Simulation Procedure Using Overset Grid System
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Chan, William; Kwak, Dochan
2002-01-01
The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows have been based on relatively lower-fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start-up and non-uniform inflows, and will eventually impact system vibration and structures. In this paper, the progress toward the capability of complete simulation of the turbopump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbopump is used as a test case for evaluation of the hybrid MPI/OpenMP and MLP versions of the INS3D code. CAD-to-solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbopump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability are presented along with the performance of parallel versions of the code.
Computer simulation results of attitude estimation of earth orbiting satellites
NASA Technical Reports Server (NTRS)
Kou, S. R.
1976-01-01
Computer simulation results of attitude estimation of Earth-orbiting satellites (including the Space Telescope) subjected to environmental disturbances and noises are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and run on HP 9830A and HP 9866A computers. Simulation results show that the decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher-order systems, this filter has computational advantages (i.e., smaller integration and roundoff errors) over a Kalman filter.
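The Kalman filter used as one of the estimation tools above can be illustrated with a minimal scalar example. This is a generic textbook sketch, not the paper's satellite attitude filter; the random-walk state model and the noise variances `q` and `r` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D problem: estimate a slowly drifting attitude angle from
# noisy sensor readings with a scalar Kalman filter.
q, r = 1e-4, 0.05            # process and measurement noise variances
x_true, x_hat, p = 0.0, 0.0, 1.0
errors = []
for _ in range(500):
    x_true += rng.normal(0.0, q**0.5)      # true state: random walk
    z = x_true + rng.normal(0.0, r**0.5)   # noisy measurement
    # Predict: random-walk model, so the estimate carries over,
    # while its uncertainty grows by the process noise
    p += q
    # Update: blend prediction and measurement via the Kalman gain
    k = p / (p + r)
    x_hat += k * (z - x_hat)
    p *= (1.0 - k)
    errors.append(abs(x_hat - x_true))

# Filtered error settles well below the raw measurement noise level
print(np.mean(errors[100:]))
```

The same predict/update recursion generalizes to the vector case with matrix covariances, which is the form an attitude estimator would use.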
NASA Astrophysics Data System (ADS)
Shukla, Ratnesh K.
2014-11-01
Single fluid schemes that rely on an interface function for phase identification in multicomponent compressible flows are widely used to study hydrodynamic flow phenomena in several diverse applications. Simulations based on standard numerical implementation of these schemes suffer from an artificial increase in the width of the interface function owing to the numerical dissipation introduced by an upwind discretization of the governing equations. In addition, monotonicity requirements which ensure that the sharp interface function remains bounded at all times necessitate use of low-order accurate discretization strategies. This results in a significant reduction in accuracy along with a loss of intricate flow features. In this paper we develop a nonlinear transformation based interface capturing method which achieves superior accuracy without compromising the simplicity, computational efficiency and robustness of the original flow solver. A nonlinear map from the signed distance function to the sigmoid type interface function is used to effectively couple a standard single fluid shock and interface capturing scheme with a high-order accurate constrained level set reinitialization method in a way that allows for oscillation-free transport of the sharp material interface. Imposition of a maximum principle, which ensures that the multidimensional preconditioned interface capturing method does not produce new maxima or minima even in the extreme events of interface merger or breakup, allows for an explicit determination of the interface thickness in terms of the grid spacing. A narrow band method is formulated in order to localize computations pertinent to the preconditioned interface capturing method. Numerical tests in one dimension reveal a significant improvement in accuracy and convergence; in stark contrast to the conventional scheme, the proposed method retains its accuracy and convergence characteristics in a shifted reference frame. Results from the test
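The nonlinear map between a signed distance function and a sigmoid-type interface function, central to the preconditioned interface-capturing idea above, can be sketched as follows. The tanh profile and the half-width parameter `eps` are illustrative choices, not the paper's exact transformation.

```python
import numpy as np

# Map a signed distance function phi to a bounded, sigmoid-type interface
# function psi in (0, 1); eps sets the interface half-width (a modelling
# choice here, not a value prescribed by the paper).
def to_interface(phi, eps):
    return 0.5 * (1.0 + np.tanh(phi / (2.0 * eps)))

def to_distance(psi, eps):
    # Inverse map, used to recover phi for level-set reinitialization
    psi = np.clip(psi, 1e-12, 1.0 - 1e-12)
    return 2.0 * eps * np.arctanh(2.0 * psi - 1.0)

x = np.linspace(-1.0, 1.0, 401)
phi = x - 0.25                      # planar interface located at x = 0.25
psi = to_interface(phi, eps=0.1)

# psi stays strictly bounded (no new maxima or minima can appear),
# and the round trip recovers phi for reinitialization
assert np.all((psi > 0.0) & (psi < 1.0))
assert np.allclose(to_distance(psi, 0.1), phi, atol=1e-8)
```

Transporting `phi` (or equivalently a bounded `psi`) instead of a sharp indicator function is what lets an upwind scheme advect the interface without the artificial thickening described above.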
Accurate Analytic Results for the Steady State Distribution of the Eigen Model
NASA Astrophysics Data System (ADS)
Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun
2016-04-01
The Eigen model of molecular evolution is popular in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have calculated analytic expressions for the steady state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the length of the genome. Our results can be applied to the case of small genome length N, as well as to cases where direct numerics cannot give accurate results, e.g., the tail of the distribution.
Accurate Navier-Stokes results for the hypersonic flow over a spherical nosetip
Blottner, F.G.
1989-01-01
The unsteady thin-layer Navier-Stokes equations for a perfect gas are solved with a linearized block Alternating Direction Implicit finite-difference solution procedure. Solution errors due to numerical dissipation added to the governing equations are evaluated. Errors in the numerical predictions on three different grids are determined where Richardson extrapolation is used to estimate the exact solution. Accurate computational results are tabulated for the hypersonic laminar flow over a spherical body which can be used as a benchmark test case. Predictions obtained from the code are in good agreement with inviscid numerical results and experimental data. 9 refs., 11 figs., 3 tabs.
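Richardson extrapolation, used above to estimate the exact solution from results on successively refined grids, combines two solutions of a method of known order p to cancel the leading error term. The central-difference example below is a generic illustration, not the paper's Navier-Stokes solver.

```python
import numpy as np

# Second-order central difference for f'(x); leading error ~ C h^2
def dcentral(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

f, x = np.sin, 1.0
d1 = dcentral(f, x, 0.1)    # coarse grid, spacing h
d2 = dcentral(f, x, 0.05)   # fine grid, spacing h/2
# Richardson extrapolation for a p = 2 method: cancels the h^2 error
# term, leaving a fourth-order accurate estimate
d_rich = d2 + (d2 - d1) / (2**2 - 1)

exact = np.cos(1.0)
print(abs(d1 - exact), abs(d2 - exact), abs(d_rich - exact))
```

The same formula with p matching the scheme's order is what lets three grid solutions both estimate the exact value and quantify the discretization error, as done in the benchmark study above.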
Accurate and efficient halo-based galaxy clustering modelling with simulations
NASA Astrophysics Data System (ADS)
Zheng, Zheng; Guo, Hong
2016-06-01
Small- and intermediate-scale galaxy clustering can be used to establish the galaxy-halo connection to study galaxy formation and evolution and to tighten constraints on cosmological parameters. With the increasing precision of galaxy clustering measurements from ongoing and forthcoming large galaxy surveys, accurate models are required to interpret the data and extract relevant information. We introduce a method based on high-resolution N-body simulations to accurately and efficiently model the galaxy two-point correlation functions (2PCFs) in projected and redshift spaces. The basic idea is to tabulate all information of haloes in the simulations necessary for computing the galaxy 2PCFs within the framework of halo occupation distribution or conditional luminosity function. It is equivalent to populating galaxies to dark matter haloes and using the mock 2PCF measurements as the model predictions. Besides the accurate 2PCF calculations, the method is also fast and therefore enables an efficient exploration of the parameter space. As an example of the method, we decompose the redshift-space galaxy 2PCF into different components based on the type of galaxy pairs and show the redshift-space distortion effect in each component. The generalizations and limitations of the method are discussed.
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today, a common example being the actuator disk concept, are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal-axis, devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of
Direct Simulations of Transition and Turbulence Using High-Order Accurate Finite-Difference Schemes
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
1997-01-01
In recent years the techniques of computational fluid dynamics (CFD) have been used to compute flows associated with geometrically complex configurations. However, success in terms of accuracy and reliability has been limited to cases where the effects of turbulence and transition could be modeled in a straightforward manner. Even in simple flows, the accurate computation of skin friction and heat transfer using existing turbulence models has proved to be a difficult task, one that has required extensive fine-tuning of the turbulence models used. In more complex flows (for example, in turbomachinery flows in which vortices and wakes impinge on airfoil surfaces causing periodic transitions from laminar to turbulent flow) the development of a model that accounts for all scales of turbulence and predicts the onset of transition may prove to be impractical. Fortunately, current trends in computing suggest that it may be possible to perform direct simulations of turbulence and transition at moderate Reynolds numbers in some complex cases in the near future. This seminar will focus on direct simulations of transition and turbulence using high-order accurate finite-difference methods. The advantage of the finite-difference approach over spectral methods is that complex geometries can be treated in a straightforward manner. Additionally, finite-difference techniques are the prevailing methods in existing application codes. In this seminar high-order-accurate finite-difference methods for the compressible and incompressible formulations of the unsteady Navier-Stokes equations and their applications to direct simulations of turbulence and transition will be presented.
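A minimal example of the high-order finite-difference machinery described above is the standard fourth-order central difference. The sketch below (generic, not code from the seminar) verifies the stencil on a periodic grid:

```python
# Sketch: fourth-order-accurate central difference for du/dx on a periodic
# grid, the kind of stencil used in high-order finite-difference DNS.
# The stencil is f'(x) ~ (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h).
import numpy as np

def ddx4(u, dx):
    """Fourth-order central difference with periodic boundaries."""
    return (-np.roll(u, -2) + 8.0 * np.roll(u, -1)
            - 8.0 * np.roll(u, 1) + np.roll(u, 2)) / (12.0 * dx)

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(ddx4(np.sin(x), dx) - np.cos(x)))   # O(dx^4) error
```

Unlike a spectral differentiation matrix, this stencil generalizes directly to stretched and curvilinear grids, which is the advantage for complex geometries noted above.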
A fast and accurate simulator for the design of birdcage coils in MRI.
Giovannetti, Giulio; Landini, Luigi; Santarelli, Maria Filomena; Positano, Vincenzo
2002-11-01
Birdcage coils are extensively used in MRI systems because they provide a high signal-to-noise ratio and high radiofrequency magnetic field homogeneity, guaranteeing a large field of view. The present article describes the implementation of a birdcage coil simulator, operating in high-pass and low-pass modes, based on magnetostatic analysis of the coil. Compared with other simulators described in the literature, ours quickly obtains not only the dominant frequency mode but also the complete resonant frequency spectrum and the corresponding magnetic field pattern with high accuracy. Our simulator accounts for all the inductances, including the mutual inductances between conductors. Moreover, the inductance calculation includes an accurate description of the birdcage geometry and the effect of a radiofrequency shield. Knowledge of all the resonance modes introduced by a birdcage coil is useful in two ways during coil design: higher-order modes should be pushed far from the fundamental one, and, for particular applications, it is necessary to localize other resonant modes (such as the Helmholtz mode) jointly with the dominant mode. Knowledge of the magnetic field pattern allows a priori verification of the field homogeneity created inside the coil as the coil dimensions, and in particular the number of legs, are varied. The coil is analyzed using the equivalent circuit method. Finally, the simulator is validated by implementing a low-pass birdcage coil and comparing our data with the literature. PMID:12413563
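The circulant structure underlying equivalent-circuit birdcage analysis can be illustrated with a deliberately simplified toy: a periodic LC ladder whose mode frequencies follow from the eigenvalues of a circulant matrix. This sketch ignores the mutual inductances and end-ring details that the simulator described above accounts for; all component values are assumptions.

```python
# Idealized normal modes of an N-cell periodic LC ladder (series L, shunt C),
# a toy for the circulant structure of birdcage equivalent circuits.
# Mutual inductances and end-ring details are deliberately ignored here.
import numpy as np

N, L, C = 16, 80e-9, 10e-12          # illustrative values, not a real coil

# Circulant second-difference matrix of the periodic ladder network
M = (2.0 * np.eye(N)
     - np.roll(np.eye(N), 1, axis=0)
     - np.roll(np.eye(N), -1, axis=0))
eig = np.maximum(np.linalg.eigvalsh(M), 0.0)      # clip tiny negative roundoff
omega_num = np.sort(np.sqrt(eig / (L * C)))       # numeric mode frequencies

# Analytic modes of the same circulant problem: 2/sqrt(LC) * |sin(pi m / N)|
m = np.arange(N)
omega_ana = np.sort(2.0 / np.sqrt(L * C) * np.abs(np.sin(np.pi * m / N)))
```

The degenerate mode pairs visible in the spectrum are the analogue of the birdcage's co-rotating/counter-rotating mode pairs; a full simulator resolves how mutual inductances shift and split them.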
NASA Astrophysics Data System (ADS)
Grasso, Robert J.; Russo, Leonard P.; Barrett, John L.; Odhner, Jefferson E.; Egbert, Paul I.
2007-09-01
BAE Systems presents the results of a program to model the performance of Raman LIDAR systems for the remote detection of atmospheric gases, air polluting hydrocarbons, chemical and biological weapons, and other molecular species of interest. Our model, which integrates remote Raman spectroscopy, 2D and 3D LADAR, and USAF atmospheric propagation codes, permits accurate determination of the performance of a Raman LIDAR system. The very high predictive accuracy of our model is due to the very accurate calculation of the differential scattering cross section for the species of interest at user-selected wavelengths. We show excellent correlation of our calculated cross section data, used in our model, with experimental data obtained from both laboratory measurements and the published literature. In addition, the use of standard USAF atmospheric models provides very accurate determination of the atmospheric extinction at both the excitation and Raman shifted wavelengths.
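The Raman shifted wavelength mentioned above follows from simple wavenumber arithmetic: a Stokes line sits at the excitation wavenumber minus the molecular Raman shift. The example values below (355 nm excitation, the ~2331 cm⁻¹ N₂ shift) are generic illustrations, not data from this program.

```python
# Stokes-shifted Raman wavelength from an excitation wavelength and a
# molecular Raman shift in wavenumbers: 1/lambda_s = 1/lambda_0 - dnu.
# Example numbers are generic, not from the program described above.
def raman_stokes_nm(lambda0_nm, shift_cm1):
    nu0 = 1.0e7 / lambda0_nm          # excitation wavenumber, cm^-1
    return 1.0e7 / (nu0 - shift_cm1)  # shifted wavelength, nm

lam_n2 = raman_stokes_nm(355.0, 2330.7)   # N2 Q-branch shift ~2331 cm^-1
```

Atmospheric extinction must be evaluated at both wavelengths, which is why the abstract stresses extinction "at both the excitation and Raman shifted wavelengths."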
Subramanian, Swetha; Mast, T Douglas
2015-10-01
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. PMID:26352462
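At the core of any UKF is the unscented transform: propagate a small set of deterministically chosen sigma points through a nonlinear map and recombine them into an output mean and covariance. The sketch below shows that machinery in isolation (with the common scaled-sigma-point parameters, which are assumptions here, not values from this study); for a linear map the transform is exact, which makes it easy to check.

```python
# Minimal unscented transform, the sigma-point machinery at the heart of a
# UKF. Parameter names follow the common scaled form; nothing here is taken
# from the tissue-parameter study above.
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-1, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    # 2n+1 sigma points: the mean, plus/minus the Cholesky columns
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigma])
    mean_y = wm @ Y
    d = Y - mean_y
    cov_y = (wc[:, None] * d).T @ d
    return mean_y, cov_y

# Sanity check: for a linear map y = A x the transform is exact.
A = np.array([[1.0, 0.5], [0.0, 2.0]])
mx = np.array([1.0, -1.0])
Px = np.array([[0.04, 0.01], [0.01, 0.09]])
my, Py = unscented_transform(mx, Px, lambda x: A @ x)
```

In an inverse-solver setting like the one above, the nonlinear map `f` would be the forward finite-element simulation from tissue parameters to ablation area, and the UKF update would pull the parameter estimate toward the measured value.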
Maudlin, P.J.; Stout, M.G.
1996-09-01
Strength and fracture constitutive relationships containing strain rate dependence and thermal softening are important for accurate simulation of metal cutting. The mechanical behavior of a hardened 4340 steel was characterized using the von Mises yield function, the Mechanical Threshold Stress model and the Johnson-Cook fracture model. This constitutive description was implemented into the explicit Lagrangian FEM continuum-mechanics code EPIC, and orthogonal plane-strain metal cutting calculations were performed. Heat conduction and friction at the tool-workpiece interface were included in the simulations. These transient calculations were advanced in time until steady-state machining behavior (force) was realized. Experimental cutting force data (cutting and thrust forces) were measured for a planing operation and compared to the calculations. 13 refs., 6 figs.
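The strain-rate dependence and thermal softening mentioned above are often summarized by the Johnson-Cook flow stress form (note that this study used the Mechanical Threshold Stress model for strength and Johnson-Cook only for fracture; the sketch below uses the simpler JC flow stress purely as an illustration, with parameter values commonly quoted for 4340 steel, assumed here rather than taken from the paper).

```python
# Johnson-Cook flow stress, sigma = (A + B*eps^n)(1 + C*ln(rate*))(1 - T*^m),
# as an illustration of rate dependence and thermal softening. Parameters are
# commonly quoted values for 4340 steel and are assumptions, not this paper's.
import math

def johnson_cook_stress(eps, eps_rate, T,
                        A=792e6, B=510e6, n=0.26, C=0.014, m=1.03,
                        eps_rate0=1.0, T_room=293.0, T_melt=1793.0):
    """Flow stress in Pa as a function of plastic strain, strain rate, temperature."""
    rate_term = 1.0 + C * math.log(max(eps_rate / eps_rate0, 1e-12))
    T_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
    return (A + B * eps**n) * rate_term * (1.0 - T_star**m)

sigma0 = johnson_cook_stress(0.0, 1.0, 293.0)   # reduces to the yield term A
```

The competition between the hardening term and the thermal-softening term is exactly what makes coupled heat conduction essential in metal-cutting simulations like those described above.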
Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James; Stamatakis, Michail
2013-12-14
Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
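The elementary bookkeeping of a rejection-free KMC step can be sketched in a few lines: choose an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. This is generic KMC machinery (Gillespie-type selection), not the graph-theoretical Zacros implementation, and the rates below are illustrative.

```python
# One rejection-free (Gillespie-type) KMC step: pick an event with probability
# proportional to its rate and advance the clock by an exponential waiting
# time. Generic KMC bookkeeping, not the Zacros implementation.
import numpy as np

def kmc_step(rates, rng):
    ktot = rates.sum()
    # Select event i with probability rates[i] / ktot
    i = np.searchsorted(np.cumsum(rates), rng.uniform(0.0, ktot))
    dt = rng.exponential(1.0 / ktot)    # waiting time to the next event
    return i, dt

rng = np.random.default_rng(0)
rates = np.array([2.0, 0.5, 1.5])       # illustrative elementary-event rates
i, dt = kmc_step(rates, rng)
```

The expense of long-range lateral interactions enters through the `rates` array: after every executed event, the rates of all affected elementary steps must be recomputed from the cluster-expansion Hamiltonian, which is the step the paper parallelizes with OpenMP.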
De Vos, Maarten; De Lathauwer, Lieven; Vanrumste, Bart; Van Huffel, Sabine; Van Paesschen, W.
2007-01-01
Long-term electroencephalographic (EEG) recordings are important in the presurgical evaluation of refractory partial epilepsy for the delineation of the ictal onset zones. In this paper, we introduce a new concept for an automatic, fast, and objective localisation of the ictal onset zone in ictal EEG recordings. Canonical decomposition of ictal EEG decomposes the EEG into atoms. One or more atoms are related to the seizure activity. A single dipole was then fitted to model the potential distribution of each epileptic atom. In this study, we performed a simulation study in order to estimate the dipole localisation error. Ictal dipole localisation was very accurate, even at low signal-to-noise ratios, was not affected by seizure activity frequency or frequency changes, and was minimally affected by the waveform and depth of the ictal onset zone location. Ictal dipole localisation error using 21 electrodes was around 10.0 mm and improved more than tenfold, to the range of 0.5–1.0 mm, using 148 channels. In conclusion, our simulation study of canonical decomposition of ictal scalp EEG allowed a robust and accurate localisation of the ictal onset zone. PMID:18301715
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.
2015-01-01
A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement for frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed, including an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date has been established for FUN3D simulations for SLS launch vehicle analysis. To the authors' knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.
Milestone M4900: Simulant Mixing Analytical Results
Kaplan, D.I.
2001-07-26
This report addresses Milestone M4900, ''Simulant Mixing Sample Analysis Results,'' and contains the data generated during the ''Mixing of Process Heels, Process Solutions, and Recycle Streams: Small-Scale Simulant'' task. The Task Technical and Quality Assurance Plan for this task is BNF-003-98-0079A. A report with a narrative description and discussion of the data will be issued separately.
SCEC Earthquake Simulator Comparison Results for California
NASA Astrophysics Data System (ADS)
Tullis, T. E.; Richards-Dinger, K. B.; Barall, M.; Dieterich, J. H.; Field, E. H.; Heien, E. M.; Kellogg, L. H.; Pollitz, F. F.; Rundle, J. B.; Sachs, M. K.; Turcotte, D. L.; Ward, S. N.; Zielke, O.
2011-12-01
This is our first report on comparisons of earthquake simulator results with one another and with actual earthquake data for all of California, excluding Cascadia. Earthquake simulators are computer programs that simulate long sequences of earthquakes and therefore allow study of a much longer earthquake history than is possible from instrumental, historical and paleoseismic data. The usefulness of simulated histories for anticipating the probabilities of future earthquakes and for contributing to public policy decisions depends on whether simulated earthquake catalogs properly represent actual earthquakes. Thus, we compare simulated histories generated by five different earthquake simulators with one another and with what is known about actual earthquake history in order to evaluate the usefulness of the simulator results. Although sharing common features, our simulators differ from one another in their details in many important ways. All simulators use the same fault geometry and the same ~15,000, 3x3 km elements to represent the strike-slip and thrust faults in California. The set of faults and the input slip rates on them are essentially those of the UCERF2 fault and deformation model; we will switch to the UCERF3 model once it is available. All simulators use the boundary element method to compute stress transfer between elements. Differences between the simulators include how they represent fault friction and what assumptions they make to promote rupture propagation from one element to another. The behavior of the simulators is encouragingly similar and the results are similar to what is known about real earthquakes, although some refinements are being made to some of the simulators to improve these comparisons as a result of our initial results. The frequency magnitude distributions of simulated events from M6 to M7.5 for a 30,000 year simulated history agree well with instrumental observations for all of California. Scaling relations, as seen on plots of
Unfitted Two-Phase Flow Simulations in Pore-Geometries with Accurate
NASA Astrophysics Data System (ADS)
Heimann, Felix; Engwer, Christian; Ippisch, Olaf; Bastian, Peter
2013-04-01
The development of better macro scale models for multi-phase flow in porous media is still impeded by the lack of suitable methods for the simulation of such flow regimes on the pore scale. The highly complicated geometry of natural porous media imposes requirements with regard to stability and computational efficiency which current numerical methods fail to meet. Therefore, current simulation environments are still unable to provide a thorough understanding of porous media in multi-phase regimes and still fail to reproduce well known effects like hysteresis or the more peculiar dynamics of the capillary fringe with satisfying accuracy. Although flow simulations in pore geometries were initially the domain of Lattice-Boltzmann and other particle methods, the development of Galerkin methods for such applications is important as they complement the range of feasible flow and parameter regimes. In the recent past, it has been shown that unfitted Galerkin methods can be applied efficiently to topologically demanding geometries. However, in the context of two-phase flows, the interface of the two immiscible fluids effectively separates the domain into two sub-domains. The exact representation of such setups with multiple independent and time-dependent geometries exceeds the functionality of common unfitted methods. We present a new approach to pore scale simulations with an unfitted discontinuous Galerkin (UDG) method. Utilizing a recursive sub-triangulation algorithm, we extend the UDG method to setups with multiple independent geometries. This approach allows an accurate representation of the moving contact line and the interface conditions, i.e. the pressure jump across the interface. Example simulations in two and three dimensions illustrate and verify the stability and accuracy of this approach.
Linaro, Daniele; Storace, Marco; Giugliano, Michele
2011-01-01
Stochastic channel gating is the major source of intrinsic neuronal noise whose functional consequences at the microcircuit- and network-levels have been only partly explored. A systematic study of this channel noise in large ensembles of biophysically detailed model neurons calls for the availability of fast numerical methods. In fact, exact techniques employ the microscopic simulation of the random opening and closing of individual ion channels, usually based on Markov models, whose computational loads are prohibitive for next generation massive computer models of the brain. In this work, we operatively define a procedure for translating any Markov model describing voltage- or ligand-gated membrane ion-conductances into an effective stochastic version, whose computer simulation is efficient, without compromising accuracy. Our approximation is based on an improved Langevin-like approach, which employs stochastic differential equations and no Monte Carlo methods. As opposed to an earlier proposal recently debated in the literature, our approximation accurately reproduces the statistical properties of the exact microscopic simulations, under a variety of conditions, from spontaneous to evoked response features. In addition, our method is not restricted to the Hodgkin-Huxley sodium and potassium currents and is general for a variety of voltage- and ligand-gated ion currents. As a by-product, the analysis of the properties emerging in exact Markov schemes by standard probability calculus enables us for the first time to analytically identify the sources of inaccuracy of the previous proposal, while providing solid ground for its modification and the improvement we present here. PMID:21423712
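For a single two-state gate, the Langevin-like approach described above reduces to one stochastic differential equation for the open fraction, with a state-dependent noise term scaled by the channel count. The sketch below is a deliberately minimal toy (a single gate with fixed rates, far simpler than the multi-state Markov schemes treated in the paper); rates and channel numbers are illustrative assumptions.

```python
# Effective Langevin approximation for N two-state channels with opening and
# closing rates alpha, beta (single-gate toy). The open fraction n obeys
#   dn = (alpha(1-n) - beta n) dt + sqrt((alpha(1-n) + beta n)/N) dW.
# Rates (1/ms), N, and dt (ms) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
alpha, beta, N, dt = 0.5, 0.25, 10000, 0.01
n_inf = alpha / (alpha + beta)          # deterministic steady state

n = n_inf
trace = []
for _ in range(20000):
    drift = alpha * (1.0 - n) - beta * n
    diff = np.sqrt(max(alpha * (1.0 - n) + beta * n, 0.0) / N)
    n += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    n = min(max(n, 0.0), 1.0)           # keep the open fraction in [0, 1]
    trace.append(n)

mean_n = np.mean(trace)                 # fluctuates around n_inf
```

The 1/N scaling of the noise is the key point: for large channel populations the trajectory hugs the deterministic solution, while small patches show the channel noise whose network consequences the paper targets.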
Meek, Garrett A; Levine, Benjamin G
2014-07-01
Spikes in the time-derivative coupling (TDC) near surface crossings make the accurate integration of the time-dependent Schrödinger equation in nonadiabatic molecular dynamics simulations a challenge. To address this issue, we present an approximation to the TDC based on a norm-preserving interpolation (NPI) of the adiabatic electronic wave functions within each time step. We apply NPI and two other schemes for computing the TDC in numerical simulations of the Landau-Zener model, comparing the simulated transfer probabilities to the exact solution. Though NPI does not require the analytical calculation of nonadiabatic coupling matrix elements, it consistently yields unsigned population transfer probability errors of ∼0.001, whereas analytical calculation of the TDC yields errors of 0.0-1.0 depending on the time step, the offset of the maximum in the TDC from the beginning of the time step, and the coupling strength. The approximation of Hammes-Schiffer and Tully yields errors intermediate between NPI and the analytical scheme. PMID:26279558
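The Landau-Zener model used as the benchmark above has a closed-form answer: for a linearly swept diabatic gap, the probability of remaining on the diabatic state is exponential in the squared coupling over the sweep rate. The sketch below states that reference formula in natural units (ħ = 1); values are illustrative.

```python
# Landau-Zener transition probability for a linearly swept diabatic gap:
#   P_diabatic = exp(-2*pi*H12^2 / (hbar * |d(E1 - E2)/dt|)),
# the analytic reference against which TDC integration schemes are
# benchmarked above. Natural units, hbar = 1; values are illustrative.
import math

def landau_zener_p(H12, sweep_rate, hbar=1.0):
    """Probability of staying on the diabatic state through the crossing."""
    return math.exp(-2.0 * math.pi * H12**2 / (hbar * abs(sweep_rate)))

p_weak = landau_zener_p(H12=0.01, sweep_rate=1.0)    # weak coupling: diabatic
p_strong = landau_zener_p(H12=1.0, sweep_rate=1.0)   # strong coupling: adiabatic
```

The difficulty the paper addresses is that the TDC spikes sharply near the crossing, so a numerical integrator can recover this probability accurately or poorly depending on where the spike falls within a time step.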
Cartesian Off-Body Grid Adaption for Viscous Time- Accurate Flow Simulation
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2011-01-01
An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
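The undivided second-difference sensor mentioned above is easy to state concretely: flag a point when |u[i+1] - 2u[i] + u[i-1]| exceeds a threshold, with no division by the grid spacing. The one-dimensional sketch below is illustrative only (thresholds and names are not OVERFLOW's).

```python
# Flag grid points for refinement using the undivided second difference of a
# solution variable (sketch; threshold and names are illustrative, not
# OVERFLOW's). A kink in the solution triggers the sensor at the kink only.
import numpy as np

def refine_flags(u, tol):
    d2 = np.abs(u[2:] - 2.0 * u[1:-1] + u[:-2])   # undivided second difference
    flags = np.zeros(u.size, dtype=bool)
    flags[1:-1] = d2 > tol
    return flags

x = np.linspace(-1.0, 1.0, 101)
u = np.abs(x)                  # kink at x = 0 -> large second difference there
flags = refine_flags(u, tol=1e-3)
```

Leaving the difference undivided makes the sensor scale-aware: as a region is refined, the same feature produces a smaller undivided difference, so refinement naturally terminates once the feature is resolved.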
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.
2006-01-01
Detailed information on the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. The present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the main-path flow is solved using TURBO, a density based code with the capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.
Margot Gerritsen
2008-10-31
Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. The understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline based solver for gas injection processes that is computationally very attractive: as compared to traditional Eulerian solvers in use by industry it computes solutions with a computational speed orders of magnitude higher and a comparable accuracy provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework allowing improved streamline coverage and partial streamline tracing, amongst others; parallelization of the streamline code, which significantly improves wall clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We designed several novel ideas in the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational efforts where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids
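The tracing step at the heart of any streamline solver is numerical integration of particle paths through the velocity field. The sketch below is a toy stand-in for that step (an RK4 trace through an analytic rotational field, not the reservoir tracing algorithm of this work); the field and step sizes are illustrative.

```python
# RK4 tracing of a single streamline through a steady velocity field, a toy
# stand-in for the tracing step of a streamline solver. The solid-body
# rotation field and step sizes are illustrative assumptions.
import numpy as np

def velocity(p):
    x, y = p
    return np.array([-y, x])           # solid-body rotation about the origin

def trace_streamline(p0, dt, nsteps):
    p = np.array(p0, dtype=float)
    path = [p.copy()]
    for _ in range(nsteps):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        p = p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(p.copy())
    return np.array(path)

path = trace_streamline([1.0, 0.0], dt=0.01, nsteps=100)
# Streamlines of this field are circles; the traced radius should stay ~1.
radius_drift = np.abs(np.linalg.norm(path, axis=1) - 1.0).max()
```

In the solver described above, 1D transport equations are then solved along each traced line, and the adaptive-coverage and partial-streamline ideas govern where such traces are seeded.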
Finite domain simulations with adaptive boundaries: accurate potentials and nonequilibrium movesets.
Wagoner, Jason A; Pande, Vijay S
2013-12-21
We extend the theory of hybrid explicit/implicit solvent models to include an explicit domain that grows and shrinks in response to a solute's evolving configuration. The goal of this model is to provide an appropriate but not excessive amount of solvent detail, and the inclusion of an adjustable boundary provides a significant computational advantage for solutes that explore a range of configurations. In addition to the theoretical development, a successful implementation of this method requires (1) an efficient moveset that propagates the boundary as a new coordinate of the system, and (2) an accurate continuum solvent model with parameters that are transferable to an explicit domain of any size. We address these challenges and develop boundary updates using Monte Carlo moves biased by nonequilibrium paths. We obtain the desired level of accuracy using a "decoupling interface" that we have previously shown to remove boundary artifacts common to hybrid solvent models. Using an uncharged, coarse-grained solvent model, we then study the efficiency of nonequilibrium paths that a simulation takes by quantifying the dissipation. In the spirit of optimization, we study this quantity over a range of simulation parameters. PMID:24359359
How to obtain accurate resist simulations in very low-k1 era?
NASA Astrophysics Data System (ADS)
Chiou, Tsann-Bim; Park, Chan-Ha; Choi, Jae-Seung; Min, Young-Hong; Hansen, Steve; Tseng, Shih-En; Chen, Alek C.; Yim, Donggyu
2006-03-01
A procedure for calibrating a resist model iteratively adjusts appropriate parameters until the simulations of the model match the experimental data. The tunable parameters may include the shape of the illuminator, the geometry and transmittance/phase of the mask, light source and scanner-related parameters that affect imaging quality, resist process control and most importantly the physical/chemical factors in the resist model. The resist model can be accurately calibrated by measuring critical dimensions (CD) of a focus-exposure matrix (FEM) and the technique has been demonstrated to be very successful in predicting lithographic performance. However, resist model calibration is more challenging in the low k1 (<0.3) regime because numerous uncertainties, such as mask and resist CD metrology errors, are becoming too large to be ignored. This study demonstrates a resist model calibration procedure for a 0.29 k1 process using a 6% halftone mask containing 2D brickwall patterns. The influence of different scanning electron microscopes (SEM) and their wafer metrology signal analysis algorithms on the accuracy of the resist model is evaluated. As an example of the metrology issue of the resist pattern, the treatment of a sidewall angle is demonstrated for the resist line ends where the contrast is relatively low. Additionally, the mask optical proximity correction (OPC) and corner rounding are considered in the calibration procedure that is based on captured SEM images. Accordingly, the average root-mean-square (RMS) error, which is the difference between simulated and experimental CDs, can be improved by considering the metrological issues. Moreover, a weighting method and a measured CD tolerance are proposed to handle the different CD variations of the various edge points of the wafer resist pattern. After the weighting method is implemented and the CD selection criteria applied, the RMS error can be further suppressed. Therefore, the resist CD and process window can
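The weighting method and CD tolerance described above amount to a gated, weighted RMS error between simulated and measured CDs. The sketch below illustrates the idea with invented numbers; both the weighting scheme and the tolerance value are assumptions, not the paper's.

```python
# Weighted RMS error between simulated and measured CDs, with a tolerance
# gate that drops unreliable measurement points before assessing the fit.
# Both the weights and the numbers are illustrative, not the paper's.
import numpy as np

def weighted_rms(cd_sim, cd_meas, weights, cd_tol=np.inf):
    keep = np.abs(cd_sim - cd_meas) <= cd_tol      # CD selection criterion
    d = (cd_sim - cd_meas)[keep]
    w = weights[keep]
    return np.sqrt(np.sum(w * d**2) / np.sum(w))

cd_sim = np.array([50.0, 51.0, 49.5, 70.0])        # nm
cd_meas = np.array([50.5, 50.0, 49.0, 52.0])       # last point is an outlier
w = np.array([1.0, 1.0, 2.0, 1.0])                 # e.g. weight by edge type
rms_all = weighted_rms(cd_sim, cd_meas, w)
rms_gated = weighted_rms(cd_sim, cd_meas, w, cd_tol=3.0)
```

Gating out metrology outliers before computing the RMS is what prevents a few low-contrast edge points (such as the line ends discussed above) from dominating the calibration objective.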
NASA Astrophysics Data System (ADS)
Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.
2015-03-01
The success of dental implant-supported prostheses is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67+/-34μm and 108μm, and angular misfits of 0.15+/-0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
Cassini radar : system concept and simulation results
NASA Astrophysics Data System (ADS)
Melacci, P. T.; Orosei, R.; Picardi, G.; Seu, R.
1998-10-01
The Cassini mission is an international venture, involving NASA, the European Space Agency (ESA) and the Italian Space Agency (ASI), for the investigation of the Saturn system and, in particular, Titan. The Cassini radar will be able to see through Titan's thick, optically opaque atmosphere, allowing us to better understand the composition and the morphology of its surface, but the interpretation of the results, due to the complex interplay of many different factors determining the radar echo, will not be possible without extensive modeling of the radar system's functioning and of the surface reflectivity. In this paper, a simulator of the multimode Cassini radar is described, after a brief review of our current knowledge of Titan and a discussion of the contribution of the Cassini radar in answering currently open questions. Finally, the results of the simulator will be discussed. The simulator has been implemented on a RISC 6000 computer by considering only the active modes of operation, that is, altimeter and synthetic aperture radar. In the instrument simulation, strict reference has been made to the presently planned sequence of observations and to the radar settings, including burst and single pulse duration, pulse bandwidth, pulse repetition frequency and all other parameters which may be changed, and possibly optimized, according to the operative mode. The observed surfaces are simulated by a facet model, allowing the generation of surfaces with Gaussian or non-Gaussian roughness statistics, together with the possibility of assigning to the surface an average behaviour which can represent, for instance, a flat surface or a crater. The results of the simulation will be discussed in order to check the analytical evaluations of the models of the average received echoes and of the attainable performances. In conclusion, the simulation results should allow the validation of the theoretical evaluations of the capabilities of microwave instruments, when
Zhao, A P; Cvetkovic, S R
1994-08-20
An efficient, accurate, and automated vectorial finite-element software package (named WAVEGIDE), which is implemented within a PDE/Protran problem-solving environment, has been extended to general multilayer anisotropic waveguides. With our system, through an interactive question-and-answer session, the problem can be simply defined with high-level PDE/Protran commands. The problem can then be solved easily and quickly by the main processor within this intelligent environment. In particular, in our system the eigenvalue of waveguide problems may be either a propagation constant (β) or an operating light frequency (F). Furthermore, the cutoff frequencies of propagation modes in waveguides can be calculated. As an application of this approach, numerical results for both scalar and hybrid modes in multilayer anisotropic waveguides are presented and are also compared with results obtained with the domain-integral method. These results clearly illustrate the unique flexibility, accuracy, and ease of use of the WAVEGIDE program. PMID:20935964
Accurate time delay technology in simulated test for high precision laser range finder
NASA Astrophysics Data System (ADS)
Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi
2015-10-01
With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRFs) keeps improving, and the maintenance demands on LRFs are rising accordingly. According to the dominant ideology of "time simulates spatial distance" in simulated testing of pulsed range finders, the key to distance-simulation precision lies in the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method is proposed to improve the accuracy of the circuit delay without increasing the count frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the novel circuit delay method proposed in this paper was measured with a high-sampling-rate oscilloscope. The measurements show that the accuracy of the distance simulated by the circuit delay is improved from +/- 0.75 m to +/- 0.15 m. The accuracy of the simulated distance is thus greatly improved in simulated testing of high-precision pulsed range finders.
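The "time simulates distance" relation underlying such test equipment is simple enough to sketch. The following illustrative Python (a sketch, not code from the paper) shows why a delay error of about 1 ns corresponds to the ±0.15 m distance accuracy reported:

```python
# Illustrative sketch of the LRF test principle d = c * t / 2:
# a controllable round-trip delay t simulates a target at distance d.
C = 299_792_458.0  # speed of light in vacuum, m/s

def delay_for_distance(d_m: float) -> float:
    """Round-trip delay (s) that simulates a target at distance d_m."""
    return 2.0 * d_m / C

def distance_error(delay_error_s: float) -> float:
    """Distance error (m) caused by a given delay error."""
    return C * delay_error_s / 2.0

# A +/-1 ns delay error corresponds to ~+/-0.15 m of simulated distance,
# the accuracy scale reported in the abstract.
print(round(distance_error(1e-9), 3))  # → 0.15
```

This also makes clear why reaching ±0.15 m without a faster counter requires sub-nanosecond delay compensation, as the proposed circuit provides.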
Titan's organic chemistry: Results of simulation experiments
NASA Technical Reports Server (NTRS)
Sagan, Carl; Thompson, W. Reid; Khare, Bishun N.
1992-01-01
Recent low-pressure continuous plasma discharge simulations of the auroral-electron-driven organic chemistry in Titan's mesosphere are reviewed. These simulations yielded results in good accord with Voyager observations of gas phase organic species. Optical constants of the brownish solid tholins produced in similar experiments are in good accord with Voyager observations of the Titan haze. Titan tholins are rich in prebiotic organic constituents; the Huygens entry probe may shed light on some of the processes that led to the origin of life on Earth.
Romanov, V N; Cygan, R T; Myshakin, E M
2012-06-21
Naturally occurring clay minerals provide a distinctive material for carbon capture and carbon dioxide sequestration. Swelling clay minerals, such as the smectite variety, possess an aluminosilicate structure that is controlled by low-charge layers that readily expand to accommodate water molecules and, potentially, CO2. Recent experimental studies have demonstrated the efficacy of intercalating CO2 in the interlayer of layered clays, but little is known about the molecular mechanisms of the process and the extent of carbon capture as a function of clay charge and structure. A series of molecular dynamics simulations and vibrational analyses have been completed to assess the molecular interactions associated with incorporation of CO2 and H2O in the interlayer of montmorillonite clay and to help validate the models with experimental observation. An accurate and fully flexible set of interatomic potentials for CO2 is developed and combined with Clayff potentials to help evaluate the intercalation mechanism and examine the effect of molecular flexibility on the diffusion rate of CO2 in water.
Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Ko, K.; /SLAC
2009-06-19
Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.
Differential-equation-based representation of truncation errors for accurate numerical simulation
NASA Astrophysics Data System (ADS)
MacKinnon, Robert J.; Johnson, Richard W.
1991-09-01
High-order compact finite difference schemes for 2D convection-diffusion-type differential equations with constant and variable convection coefficients are derived. The governing equations are employed to represent leading truncation terms, including cross-derivatives, making the overall O(h^4) schemes conform to a 3 x 3 stencil. It is shown that the two-dimensional constant coefficient scheme collapses to the optimal scheme for the one-dimensional case wherein the finite difference equation yields nodally exact results. The two-dimensional schemes are tested against standard model problems, including a Navier-Stokes application. Results show that the two schemes are generally more accurate, on comparable grids, than O(h^2) centered differencing and commonly used O(h) and O(h^3) upwinding schemes.
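As a hedged illustration of the compact-scheme idea (a textbook fourth-order Padé approximation to the second derivative on a 3-point stencil, not the authors' convection-diffusion scheme), the following sketch verifies O(h^4) convergence numerically:

```python
import numpy as np

def compact_second_derivative(f, h, d2_left, d2_right):
    """Fourth-order compact (Pade) scheme on a 3-point stencil:
       f''_{i-1} + 10*f''_i + f''_{i+1} = 12*(f_{i-1} - 2f_i + f_{i+1})/h^2,
       with exact second derivatives supplied at both boundaries."""
    n = len(f)
    m = n - 2  # interior unknowns
    A = np.zeros((m, m))
    rhs = (f[:-2] - 2 * f[1:-1] + f[2:]) / h**2
    for i in range(m):
        A[i, i] = 10 / 12
        if i > 0:
            A[i, i - 1] = 1 / 12
        if i < m - 1:
            A[i, i + 1] = 1 / 12
    rhs[0] -= d2_left / 12   # move known boundary values to the RHS
    rhs[-1] -= d2_right / 12
    return np.linalg.solve(A, rhs)

# Convergence check on f = sin(x), whose exact second derivative is -sin(x).
errors = []
for n in (17, 33):
    x = np.linspace(0.0, np.pi, n)
    h = x[1] - x[0]
    d2 = compact_second_derivative(np.sin(x), h, -np.sin(x[0]), -np.sin(x[-1]))
    errors.append(np.max(np.abs(d2 + np.sin(x[1:-1]))))
print(errors[0] / errors[1])  # ~16: halving h cuts the error by ~2^4
```

The point mirrored from the abstract: fourth-order accuracy is obtained while the stencil stays compact, unlike wide explicit fourth-order differences.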
NASA Astrophysics Data System (ADS)
Farah, A.
The ionospheric delay is still one of the largest sources of error affecting the positioning accuracy of any satellite positioning system. Owing to the dispersive nature of the ionosphere, this error can be removed by combining simultaneous measurements of signals at two different frequencies, but it remains a problem for single-frequency users. Much effort has gone into establishing models for single-frequency users to make this effect as small as possible. These models vary in accuracy, input data and computational complexity, so the choice between the different models depends on the individual circumstances of the user. From the simulation point of view, the model needed should be accurate, have global coverage and describe well the ionosphere's variation with both time and location. The author reviews some of these established models, starting with the BENT model, the Klobuchar model and the IRI (International Reference Ionosphere) model. For quite a long time the Klobuchar model has been the most widely used model in this field, owing to its simplicity and low computational cost. Any GPS user can find the Klobuchar model's coefficients in the broadcast navigation message. CODE, the Centre for Orbit Determination in Europe, provides a new set of coefficients for the Klobuchar model, which gives more accurate results for the ionospheric delay computation. IGS (International GPS Service) services include providing the GPS community with global ionospheric maps in IONEX format (IONosphere map EXchange format), which enable the computation of the ionospheric delay at the desired location and time. The study was undertaken from the GPS-data simulation point of view. The aim was to select a model for the simulation of GPS data that gives a good description of the ionosphere's nature and a high degree of accuracy in computing the ionospheric delay, yielding better-simulated data. A new model was developed by the author based on IGS global ionospheric maps. A comparison
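The dual-frequency correction mentioned at the start can be sketched concretely. In the illustrative code below (a sketch, not the author's model), the frequencies are the standard GPS L1/L2 values and the synthetic range and delay values are assumed; the first-order ionospheric delay scales as 1/f², so two pseudoranges can be combined into an "ionosphere-free" observable:

```python
# Ionosphere-free combination: P_IF = (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2).
F1 = 1575.42e6  # GPS L1 frequency, Hz
F2 = 1227.60e6  # GPS L2 frequency, Hz

def ionosphere_free(p1_m: float, p2_m: float) -> float:
    """Combine L1/L2 pseudoranges (m) to cancel the first-order iono delay."""
    g1, g2 = F1**2, F2**2
    return (g1 * p1_m - g2 * p2_m) / (g1 - g2)

# Synthetic check (assumed values): true range 20,000 km plus a
# 1/f^2-scaled ionospheric delay of 5 m on L1.
true_range = 20_000_000.0
i1 = 5.0                      # iono delay on L1 (m)
i2 = i1 * (F1 / F2) ** 2      # same electron content seen at L2
p_if = ionosphere_free(true_range + i1, true_range + i2)
print(abs(p_if - true_range) < 1e-6)  # → True: the iono term cancels
```

Single-frequency users have no second observable to form this combination, which is why the broadcast Klobuchar coefficients and IGS maps reviewed above matter.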
Chemically Accurate Simulation of a Polyatomic Molecule-Metal Surface Reaction.
Nattino, Francesco; Migliorini, Davide; Kroes, Geert-Jan; Dombrowski, Eric; High, Eric A; Killelea, Daniel R; Utz, Arthur L
2016-07-01
Although important to heterogeneous catalysis, the ability to accurately model reactions of polyatomic molecules with metal surfaces has not kept pace with developments in gas phase dynamics. Partnering the specific reaction parameter (SRP) approach to density functional theory with ab initio molecular dynamics (AIMD) extends our ability to model reactions with metals with quantitative accuracy from only the lightest reactant, H2, to essentially all molecules. This is demonstrated with AIMD calculations on CHD3 + Ni(111) in which the SRP functional is fitted to supersonic beam experiments, and validated by showing that AIMD with the resulting functional reproduces initial-state selected sticking measurements with chemical accuracy (4.2 kJ/mol ≈ 1 kcal/mol). The need for only semilocal exchange makes our scheme computationally tractable for dissociation on transition metals. PMID:27284787
A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction
NASA Technical Reports Server (NTRS)
Bockelie, Michael J.; Eiseman, Peter R.
1990-01-01
A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.
Accurate simulation of the electron cloud in the Fermilab Main Injector with VORPAL
Lebrun, Paul L.G.; Spentzouris, Panagiotis; Cary, John R.; Stoltz, Peter; Veitzer, Seth A.; /Tech-X, Boulder
2011-01-01
We present results from a precision simulation of the electron cloud (EC) in the Fermilab Main Injector using the code VORPAL. This is a fully 3D and self-consistent treatment of the EC. Both distributions of electrons in 6D phase space and E.M. field maps have been generated for the various configurations of the magnetic fields found around the machine. Plasma waves associated with the density fluctuations of the cloud have been analyzed. Our results are compared with those obtained with the POSINST code. The response of a Retarding Field Analyzer (RFA) to the EC has been simulated, as well as the more challenging microwave absorption experiment. Definite predictions of their exact response are difficult to obtain, mostly because of the uncertainties in the secondary emission yield and, in the case of the RFA, because of the sensitivity of the electron collection efficiency to unknown stray magnetic fields. Nonetheless, our simulations do provide guidance to the experimental program.
NASA Astrophysics Data System (ADS)
Jolivet, L.; Cohen, M.; Ruas, A.
2015-08-01
Landscape influences fauna movement at different levels, from habitat selection to choices of movement direction. Our goal is to provide a development frame in order to test simulation functions for animal movement. We describe our approach for such simulations and we compare two types of functions to calculate trajectories. To do so, we first modelled the role of landscape elements, differentiating between elements that facilitate movements and those that hinder them. Different influences are identified depending on landscape elements and on animal species. Knowledge was gathered from ecologists, the literature and observation datasets. Second, we analysed the description of animal movement recorded with GPS at fine scale, corresponding to high temporal frequency and good location accuracy. Analysing this type of data provides information on the relation between landscape features and movements. We implemented an agent-based simulation approach to calculate potential trajectories constrained by the spatial environment and the individual's behaviour. We tested two functions that consider space differently: one function takes into account the geometry and the types of landscape elements, and one cost function sums up the spatial surroundings of an individual. Results highlight the fact that the cost function exaggerates the distances travelled by an individual and simplifies movement patterns. The geometry-accurate function represents a good bottom-up approach for discovering interesting areas or obstacles for movements.
Boriskina, Svetlana V; Sewell, Phillip; Benson, Trevor M; Nosich, Alexander I
2004-03-01
A fast and accurate method is developed to compute the natural frequencies and scattering characteristics of arbitrary-shape two-dimensional dielectric resonators. The problem is formulated in terms of a uniquely solvable set of second-kind boundary integral equations and discretized by the Galerkin method with angular exponents as global test and trial functions. The log-singular term is extracted from one of the kernels, and closed-form expressions are derived for the main parts of all the integral operators. The resulting discrete scheme has a very high convergence rate. The method is used in the simulation of several optical microcavities for modern dense wavelength-division-multiplexed systems. PMID:15005404
Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul
2015-03-14
Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 10^3-10^5 molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online. PMID:25770527
Simulation of diurnal thermal energy storage systems: Preliminary results
NASA Astrophysics Data System (ADS)
Katipamula, S.; Somasundaram, S.; Williams, H. R.
1994-12-01
This report describes the results of a simulation of thermal energy storage (TES) integrated with a simple-cycle gas turbine cogeneration system. Integrating TES with cogeneration can serve the electrical and thermal loads independently while firing all fuel in the gas turbine. The detailed engineering and economic feasibility of diurnal TES systems integrated with cogeneration systems has been described in two previous PNL reports. The objective of this study was to lay the groundwork for optimization of the TES system designs using a simulation tool called TRNSYS (TRaNsient SYstem Simulation). TRNSYS is a transient simulation program with a sequential-modular structure developed at the Solar Energy Laboratory, University of Wisconsin-Madison. The two TES systems selected for the base-case simulations were: (1) a one-tank storage model to represent the oil/rock TES system; and (2) a two-tank storage model to represent the molten nitrate salt TES system. Results of the study clearly indicate that an engineering optimization of the TES system using TRNSYS is possible. The one-tank stratified oil/rock storage model described here is a good starting point for parametric studies of a TES system. Further developments to the TRNSYS library of available models (economizer, evaporator, gas turbine, etc.) are recommended so that phase-change processes are accurately treated.
Loco, Daniele; Jurinovich, Sandro; Di Bari, Lorenzo; Mennucci, Benedetta
2016-01-14
We present and discuss a simple and fast computational approach to the calculation of electronic circular dichroism spectra of nucleic acids. It is based on an exciton model in which the couplings are obtained in terms of the full transition-charge distributions, as resulting from TDDFT methods applied to the individual nucleobases. We validated the method on two systems, a DNA G-quadruplex and an RNA β-hairpin, whose solution structures have been accurately determined by means of NMR. We have shown that the different characteristics of composition and structure of the two systems can lead to quite important differences in the dependence of the accuracy of the simulation on the excitonic parameters. The accurate reproduction of the CD spectra, together with their interpretation in terms of the excitonic composition, suggests that this method may lend itself as a general computational tool both to predict the spectra of hypothetical structures and to define clear relationships between structural and ECD properties. PMID:26646952
Fast Plasma Instrument for MMS: Simulation Results
NASA Technical Reports Server (NTRS)
Figueroa-Vinas, Adolfo; Adrian, Mark L.; Lobell, James V.; Simpson, David G.; Barrie, Alex; Winkert, George E.; Yeh, Pen-Shu; Moore, Thomas E.
2008-01-01
The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. The Dual Electron Spectrometer (DES) of the Fast Plasma Instrument (FPI) for MMS meets these demanding requirements by acquiring the electron velocity distribution functions (VDFs) for the full sky with high-resolution angular measurements every 30 ms. This will provide unprecedented access to electron scale dynamics within the reconnection diffusion region. The DES consists of eight half-top-hat energy analyzers. Each analyzer has a 6 deg. x 11.25 deg. field of view (FOV). Full-sky coverage is achieved by electrostatically stepping the FOV of each of the eight sensors through four discrete deflection look directions. Data compression and burst memory management will provide approximately 30 minutes of high time resolution data during each orbit of the four MMS spacecraft. Each spacecraft will intelligently downlink the data sequences that contain the greatest amount of temporal structure. Here we present the results of a simulation of the DES analyzer measurements, data compression and decompression, as well as ground-based analysis, using re-processed Cluster/PEACE electron measurements as a seed. The Cluster/PEACE electron measurements have been reprocessed through virtual DES analyzers with their proper geometrical, energy, and timing scale factors and re-mapped via interpolation to the DES angular and energy phase-space sampling measurements. The results of the simulated DES measurements are analyzed and the full moments of the simulated VDFs are compared with those obtained from the Cluster/PEACE spectrometer using a standard quadrature moment, a newly implemented spectral spherical harmonic method, and a singular value decomposition method. Our preliminary moment calculations show a remarkable agreement within the uncertainties of the measurements, with the
ANOVA parameters influence in LCF experimental data and simulation results
NASA Astrophysics Data System (ADS)
Delprete, C.; Sesanaa, R.; Vercelli, A.
2010-06-01
The virtual design of components undergoing thermomechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method gives a useful instrument which becomes increasingly effective as the geometrical and numerical modelling gets more accurate. The constitutive model definition plays an important role in the effectiveness of the numerical simulation [1, 2] as, for example, shown in Figure 1. In this picture it is shown how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. The component life estimation is the subsequent phase, and it needs complex damage and life estimation models [3-5] which take into account several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper the main topic of the research activity is to investigate whether the parameters which prove influential in the experimental activity also influence the numerical simulations, thus defining the effectiveness of the models in taking into account all the phenomena actually influencing the life of the component. To this aim, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be easy and to allow calibrating both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity has been developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests, and low cycle fatigue (LCF) tests. The numerical structural FEM simulations have been run on a commercial nonlinear solver, ABAQUS® 6.8. The simulations replicated the experimental tests. The stress, strain, thermal results from the thermo structural FEM
TRIM-3D: a three-dimensional model for accurate simulation of shallow water flow
Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.
1993-01-01
A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced in the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for an efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to provide the fine details of circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results suitable interactive graphics is also an essential tool.
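A back-of-envelope sketch (with assumed, typical values, not figures from the paper) of why the semi-implicit treatment of the surface-gravity terms permits the large time steps mentioned above:

```python
# Explicit schemes for the shallow water equations are CFL-limited by the
# fast surface gravity wave speed sqrt(g*H); treating that term implicitly
# leaves only the (much slower) advective limit u. All values assumed.
import math

g, H = 9.81, 50.0      # gravity (m/s^2), representative depth (m)
u, dx = 1.0, 500.0     # typical current speed (m/s), grid spacing (m)

dt_explicit = dx / math.sqrt(g * H)   # gravity-wave CFL limit, ~22.6 s
dt_semi_implicit = dx / u             # advective limit only, 500 s

print(round(dt_explicit, 1), round(dt_semi_implicit, 1))  # → 22.6 500.0
```

For these assumed values the semi-implicit step is over 20 times larger, which is the "large time steps at a minimal computational cost" claim in concrete terms.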
Consistent Multigroup Theory Enabling Accurate Coarse-Group Simulation of Gen IV Reactors
Rahnema, Farzad; Haghighat, Alireza; Ougouag, Abderrafi
2013-11-29
The objective of this proposal is the development of a consistent multi-group theory that accurately accounts for the energy-angle coupling associated with collapsed-group cross sections. This will allow for coarse-group transport and diffusion theory calculations that exhibit continuous energy accuracy and implicitly treat cross-section resonances. This is of particular importance when considering the highly heterogeneous and optically thin reactor designs within the Next Generation Nuclear Plant (NGNP) framework. In such reactors, ignoring the influence of anisotropy in the angular flux on the collapsed cross section, especially at the interface between core and reflector near which control rods are located, results in inaccurate estimates of the rod worth, a serious safety concern. The scope of this project will include the development and verification of a new multi-group theory enabling high-fidelity transport and diffusion calculations in coarse groups, as well as a methodology for the implementation of this method in existing codes. This will allow for a higher accuracy solution of reactor problems while using fewer groups and will reduce the computational expense. The proposed research represents a fundamental advancement in the understanding and improvement of multi-group theory for reactor analysis.
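For orientation, conventional group collapse uses simple flux weighting. The sketch below (with made-up fine-group data) shows that standard collapse; it is exactly this scalar-flux weighting, which ignores the energy-angle coupling, that the proposed consistent theory aims to improve upon:

```python
# Conventional flux-weighted cross-section collapse:
#   sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g)  for fine groups g in G.
# Fine-group data below are illustrative, not from any evaluated library.
fine_sigma = [2.0, 1.5, 1.2, 0.9, 0.7, 0.5]   # fine-group sigma_t (barns)
fine_flux  = [0.1, 0.4, 0.8, 1.0, 0.6, 0.2]   # fine-group scalar flux
groups = [(0, 3), (3, 6)]                     # collapse 6 fine -> 2 coarse

def collapse(sigma, flux, groups):
    """Coarse-group cross sections preserving reaction rate per group."""
    out = []
    for lo, hi in groups:
        num = sum(s * f for s, f in zip(sigma[lo:hi], flux[lo:hi]))
        den = sum(flux[lo:hi])
        out.append(num / den)
    return out

print([round(s, 4) for s in collapse(fine_sigma, fine_flux, groups)])
# → [1.3538, 0.7889]
```

Because the weighting uses only the scalar flux, the collapsed values are blind to angular anisotropy at interfaces, which is the inaccuracy the abstract highlights near core-reflector boundaries.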
Simulation results for the Viterbi decoding algorithm
NASA Technical Reports Server (NTRS)
Batson, B. H.; Moorehead, R. W.; Taqvi, S. Z. H.
1972-01-01
Concepts involved in determining the performance of coded digital communications systems are introduced. The basic concepts of convolutional encoding and decoding are summarized, and hardware implementations of sequential and maximum likelihood decoders are described briefly. Results of parametric studies of the Viterbi decoding algorithm are summarized. Bit error probability is chosen as the measure of performance and is calculated, by using digital computer simulations, for various encoder and decoder parameters. Results are presented for code rates of one-half and one-third, for constraint lengths of 4 to 8, for both hard-decision and soft-decision bit detectors, and for several important systematic and nonsystematic codes. The effect of decoder block length on bit error rate also is considered, so that a more complete estimate of the relationship between performance and decoder complexity can be made.
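A minimal hard-decision Viterbi decoder makes the maximum-likelihood decoding studied above concrete. The sketch below uses the textbook rate-1/2, constraint-length-3 code with generators (7, 5) octal, deliberately smaller than the constraint-length 4 to 8 codes simulated in the report:

```python
# Rate-1/2, K=3 convolutional code with generators (7,5) octal, and a
# hard-decision Viterbi (maximum-likelihood) decoder over its 4-state trellis.
G = (0b111, 0b101)  # generator polynomials
N_STATES = 4        # 2^(K-1) shift-register states

def parity(x: int) -> int:
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [parity(reg & G[0]), parity(reg & G[1])]
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    INF = 10**9
    metric = [0] + [INF] * (N_STATES - 1)     # encoder starts in state 0
    paths = [[] for _ in range(N_STATES)]
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            for b in (0, 1):
                reg = (b << 2) | state
                expect = (parity(reg & G[0]), parity(reg & G[1]))
                dist = (expect[0] != r[0]) + (expect[1] != r[1])
                nxt = reg >> 1
                m = metric[state] + dist
                if m < new_metric[nxt]:   # keep the survivor path
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[3] ^= 1  # flip one channel bit
print(viterbi_decode(rx, len(msg)) == msg)  # → True: single error corrected
```

The free distance of this code is 5, so isolated single channel errors are corrected, illustrating how bit error probability falls with decoder metric comparisons of the kind parametrized in the study.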
How accurate are volcanic ash simulations of the 2010 Eyjafjallajökull eruption?
NASA Astrophysics Data System (ADS)
Dacre, Helen; Harvey, Natalie; Webley, Peter; Morton, Don
2016-04-01
In the event of a volcanic eruption the decision to close airspace is based on forecast ash maps, produced using volcanic ash transport and dispersion models. In this paper we quantitatively evaluate the spatial skill of volcanic ash simulations using satellite retrievals of ash from the Eyjafjallajökull eruption during the period from 7-16 May 2010. We find that at the start of this period, 7-10 May, the model (FLEXPART) has excellent skill and can predict the spatial distribution of the satellite retrieved ash to within 0.5°× 0.5° lat/lon. However, on the 10 May there is a decrease in the spatial accuracy of the model, to 2.5°× 2.5° lat/lon, and between 11-12 May the simulated ash location errors grow rapidly. On the 11 May ash is located close to a bifurcation point in the atmosphere, resulting in a rapid divergence in the modeled and satellite ash locations. In general, the model skill reduces as the residence time of ash increases. However, the error growth is not always steady. Rapid increases in error growth are linked to critical points in the ash trajectories. Ensemble modeling using perturbed meteorological data would help to represent this uncertainty and assimilation of satellite ash data would help to reduce uncertainty in volcanic ash forecasts.
NASA Astrophysics Data System (ADS)
Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith
2015-09-01
Simulation of breakdown is important for understanding and designing a variety of applications, such as mitigating undesirable discharge events. Such simulations need to be accurate from early-time arc initiation through late-time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus, gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields is small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
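The timestep and mesh-size constraints quoted above become concrete once a mean free path and mean collision time are estimated. The gas parameters below are illustrative assumptions, not values taken from the paper:

```python
# Back-of-envelope PIC-DSMC resolution constraints (illustrative numbers only).
n_neutral = 3.3e22        # neutral density, m^-3 (~1 Torr at 300 K), assumed
sigma = 1e-19             # representative collision cross section, m^2, assumed
v_electron = 1e6          # typical electron speed, m/s, assumed

mean_free_path = 1.0 / (n_neutral * sigma)          # lambda = 1/(n * sigma)
mean_collision_time = mean_free_path / v_electron   # tau = lambda / v

# Constraints reported in the abstract: dt ~ tau/100, dx ~ lambda/25
dt = mean_collision_time / 100.0
dx = mean_free_path / 25.0
print(f"lambda = {mean_free_path:.3e} m, tau = {mean_collision_time:.3e} s")
print(f"required dt ~ {dt:.3e} s, dx ~ {dx:.3e} m")
```

Note how much tighter these are than the usual PIC-DSMC rules of thumb (dt a fraction of tau, dx a fraction of lambda), which is the abstract's central point.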
Medical Simulation Practices 2010 Survey Results
NASA Technical Reports Server (NTRS)
McCrindle, Jeffrey J.
2011-01-01
Medical Simulation Centers are an essential component of our learning infrastructure to prepare doctors and nurses for their careers. Unlike the military and aerospace simulation industry, very little has been published regarding the best practices currently in use within medical simulation centers. This survey attempts to provide insight into the current simulation practices at medical schools, hospitals, university nursing programs and community college nursing programs. Students within the MBA program at Saint Joseph's University conducted a survey of medical simulation practices during the summer 2010 semester. A total of 115 institutions responded to the survey. The survey results discuss the overall effectiveness of current simulation centers as well as the tools and techniques used to conduct the simulation activity.
Evaluation of the EURO-CORDEX RCMs to accurately simulate the Etesian wind system
NASA Astrophysics Data System (ADS)
Dafka, Stella; Xoplaki, Elena; Toreti, Andrea; Zanis, Prodromos; Tyrlis, Evangelos; Luterbacher, Jürg
2016-04-01
The Etesians are among the most persistent regional-scale wind systems in the lower troposphere that blow over the Aegean Sea during the extended summer season. An evaluation of the high-spatial-resolution EURO-CORDEX Regional Climate Models (RCMs) is here presented. The study documents the performance of the individual models in representing the basic spatiotemporal pattern of the Etesian wind system for the period 1989-2004. The analysis is mainly focused on evaluating the abilities of the RCMs in simulating the surface wind over the Aegean Sea and the associated large-scale atmospheric circulation. Mean Sea Level Pressure (SLP), wind speed and geopotential height at 500 hPa are used. The simulated results are validated against reanalysis datasets (20CR-v2c and ERA20-C) and daily observational measurements (12:00 UTC) from mainland Greece and the Aegean Sea. The analysis highlights the general ability of the RCMs to capture the basic features of the Etesians, but also indicates considerable deficiencies for selected metrics, regions and subperiods. Some of these deficiencies include the significant underestimation (overestimation) of the mean SLP in the northeastern part of the analysis domain in all subperiods (for May and June) when compared to 20CR-v2c (ERA20-C), the significant overestimation of the anomalous ridge over the Balkans and central Europe, and the underestimation of the wind speed over the Aegean Sea. Future work will include an assessment of the Etesians for the next decades using EURO-CORDEX projections under different RCP scenarios and an estimate of the future potential for wind energy production.
Interhemispheric Field-Aligned Currents: Simulation Results
NASA Astrophysics Data System (ADS)
Lyatsky, Sonya
2016-04-01
We present simulation results of the 3-D magnetosphere-ionosphere current system, including the Region 1, Region 2, and interhemispheric (IHC) field-aligned currents flowing between the Northern and Southern conjugate ionospheres in the case of asymmetry in ionospheric conductivities in the two hemispheres (observed, for instance, during the summer-winter seasons). We also computed the maps of ionospheric and equivalent ionospheric currents in the two hemispheres. The IHCs are an important part of the global 3-D current system in high-latitude ionospheres. These currents are especially significant during summer and winter months. In the winter ionosphere, they may be comparable to and even exceed both Region 1 and Region 2 field-aligned currents. An important feature of these interhemispheric currents is that they link together processes in the two hemispheres, so that the currents observed in one hemisphere can provide us with information about the currents in the opposite hemisphere. Despite the significant role of these IHCs in the global 3-D current system, they have not been sufficiently studied yet. The main results of our research may be summarized as follows: 1) in the winter hemisphere, the IHCs may significantly exceed and substitute for the local Region 1 and Region 2 currents; 2) the IHCs may strongly affect the magnitude, location, and direction of the ionospheric and equivalent ionospheric currents (especially in the nightside winter auroral ionosphere); 3) the IHCs in the winter hemisphere may, in fact, be an important (and sometimes even major) source of the Westward Auroral Electrojet, observed in both hemispheres during substorm activity. The study of the contribution from the IHCs to the total global 3-D current system allows us to improve the understanding and forecasting of geomagnetic, auroral, and ionospheric disturbances in the two hemispheres.
Accurate simulation of the electron cloud in the Fermilab Main Injector with VORPAL
Lebrun, Paul L.G.; Spentzouris, Panagiotis; Cary, John R.; Stoltz, Peter; Veitzer, Seth A.; /Tech-X, Boulder
2010-05-01
Precision simulations of the electron cloud at the Fermilab Main Injector have been studied using the plasma simulation code VORPAL. Fully 3D and self-consistent solutions that include electromagnetic field maps generated by the cloud and the proton bunches have been obtained, as well as detailed distributions of the electrons' 6D phase space. We plan to include such maps in the ongoing simulation of space-charge effects in the Main Injector. Simulations of the response of beam position monitors, retarding field analyzers, and microwave transmission experiments are ongoing.
Accurate simulation of near-wall turbulence over a compliant tensegrity fabric
NASA Astrophysics Data System (ADS)
Luo, Haoxiang; Bewley, Thomas R.
2005-05-01
This paper presents a new class of compliant surfaces, dubbed tensegrity fabrics, for the problem of reducing the drag induced by near-wall turbulent flows. The substructure upon which this compliant surface is built is based on the "tensegrity" structural paradigm, and is formed as a stable pretensioned network of compressive members ("bars") interconnected by tensile members ("tendons"). Compared with existing compliant surface studies, most of which are based on spring-supported plates or membranes, tensegrity fabrics appear to be better configured to respond to the shear stress fluctuations (in addition to the pressure fluctuations) generated by near-wall turbulence. As a result, once the several parameters affecting the compliance characteristics of the structure are tuned appropriately, the tensegrity fabric might exhibit an improved capacity for damping the fluctuations of near-wall turbulence, thereby reducing drag. This paper improves on our previous work (SPIE Paper 5049-57) and uses a 3D time-dependent coordinate transformation in the flow simulations to account for the motion of the channel walls, and the Cartesian components of the velocity are used as the flow variables. For the spatial discretization, a dealiased pseudospectral scheme is used in the homogeneous directions and a second-order finite difference scheme is used in the wall-normal direction. The code is first validated with several benchmark results that are available in the published literature for flows past both stationary and nonstationary walls. Direct numerical simulations of turbulent flows at Re_tau=150 over the compliant tensegrity fabric are then presented. It is found that, when the stiffness, mass, damping, and orientation of the members of the unit cell defining the tensegrity fabric are selected appropriately, the near-wall statistics of the turbulence are altered significantly. The flow/structure interface is found to form streamwise-travelling waves reminiscent of those
A novel approach for accurate radiative transfer in cosmological hydrodynamic simulations
NASA Astrophysics Data System (ADS)
Petkova, Margarita; Springel, Volker
2011-08-01
accurately deal with non-equilibrium effects. We discuss several tests of the new method, including shadowing configurations in two and three dimensions, ionized sphere expansion in static and dynamic density fields and the ionization of a cosmological density field. The tests agree favourably with analytical expectations and results based on other numerical radiative transfer approximations.
Advanced material testing in support of accurate sheet metal forming simulations
NASA Astrophysics Data System (ADS)
Kuwabara, Toshihiko
2013-05-01
This presentation is a review of experimental methods for accurately measuring and modeling the anisotropic plastic deformation behavior of metal sheets under a variety of loading paths: biaxial compression test, hydraulic bulge test, biaxial tension test using a cruciform specimen, multiaxial tube expansion test using a closed-loop electrohydraulic testing machine for the measurement of forming limit strains and stresses, combined tension-shear test, and in-plane stress reversal test. Observed material responses are compared with predictions using phenomenological plasticity models to highlight the importance of accurate material testing. Special attention is paid to the plastic deformation behavior of sheet metals commonly used in industry, and to verifying the validity of constitutive models based on anisotropic yield functions at a large plastic strain range. The effects of using appropriate material models on the improvement of predictive accuracy for forming defects, such as springback and fracture, are also presented.
SALTSTONE MATRIX CHARACTERIZATION AND STADIUM SIMULATION RESULTS
Langton, C.
2009-07-30
SIMCO Technologies, Inc. was contracted to evaluate the durability of the saltstone matrix material and to measure saltstone transport properties. This information will be used to: (1) Parameterize the STADIUM® service life code, (2) Predict the leach rate (degradation rate) for the saltstone matrix over 10,000 years using the STADIUM® concrete service life code, and (3) Validate the modeled results by conducting leaching (water immersion) tests. Saltstone durability for this evaluation is limited to changes in the matrix itself and does not include changes in the chemical speciation of the contaminants in the saltstone. This report summarizes results obtained to date, which include: characterization data for saltstone cured up to 365 days and characterization of saltstone cured for 137 days and immersed in water for 31 days. Chemicals for preparing simulated non-radioactive salt solution were obtained from chemical suppliers. The saltstone slurry was mixed according to directions provided by SRNL. However, SIMCO Technologies Inc. personnel made a mistake in the premix proportions. The formulation SIMCO personnel used to prepare saltstone premix was not the reference mix proportions of 45 wt% slag, 45 wt% fly ash, and 10 wt% cement; they used the following proportions: 21 wt% slag, 65 wt% fly ash, and 14 wt% cement. The mistake was acknowledged and new mixes have been prepared and are curing. The results presented in this report are assumed to be conservative since excess fly ash was used in the SIMCO saltstone. The SIMCO mixes are low in slag, which is very reactive in the caustic salt solution. The impact is that the results presented in this report are expected to be conservative since the samples prepared were deficient in slag and contained excess fly ash. The hydraulic reactivity of slag is about four times that of fly ash so the amount of hydrated binder formed per unit volume in the SIMCO saltstone samples is
A Variable Coefficient Method for Accurate Monte Carlo Simulation of Dynamic Asset Price
NASA Astrophysics Data System (ADS)
Li, Yiming; Hung, Chih-Young; Yu, Shao-Ming; Chiang, Su-Yun; Chiang, Yi-Hui; Cheng, Hui-Wen
2007-07-01
In this work, we propose an adaptive Monte Carlo (MC) simulation technique to compute the sample paths for the dynamic asset price. In contrast to conventional MC simulation with constant drift and volatility (μ,σ), our MC simulation is performed with variable-coefficient methods for (μ,σ) in the solution scheme, where the explored dynamic asset pricing model starts from the formulation of geometric Brownian motion. With the method of simultaneously updated (μ,σ), more than 5,000 runs of MC simulation are performed to fulfill the basic accuracy of the large-scale computation and suppress statistical variance. Daily changes of the stock market index in Taiwan and Japan are investigated and analyzed.
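A minimal version of a variable-coefficient geometric Brownian motion Monte Carlo can be sketched as follows. The time-dependent forms of μ(t) and σ(t) below are hypothetical placeholders, not the coefficients used in the study:

```python
import math
import random

def simulate_gbm_path(s0, mu_fn, sigma_fn, T, n_steps, rng):
    """One sample path of dS = mu(t) S dt + sigma(t) S dW, advanced with the
    exact log-space update over each step (coefficients frozen per step)."""
    dt = T / n_steps
    s = s0
    for i in range(n_steps):
        t = i * dt
        mu, sigma = mu_fn(t), sigma_fn(t)
        z = rng.gauss(0.0, 1.0)
        s *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
    return s

rng = random.Random(42)
# Hypothetical time-varying coefficients, for illustration only
mu_fn = lambda t: 0.05 + 0.02 * t
sigma_fn = lambda t: 0.2 + 0.1 * t
paths = [simulate_gbm_path(100.0, mu_fn, sigma_fn, 1.0, 252, rng)
         for _ in range(5000)]
print("mean terminal price:", sum(paths) / len(paths))
```

With constant (μ,σ) this reduces to the conventional GBM simulation; the abstract's scheme differs in that (μ,σ) are updated from data as the path evolves rather than prescribed in advance.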
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems, including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Singer, Bart A.
2003-01-01
We evaluate the applicability of a production computational fluid dynamics code for conducting detached eddy simulation for unsteady flows. A second-order accurate Navier-Stokes code developed at NASA Langley Research Center, known as TLNS3D, is used for these simulations. We focus our attention on high Reynolds number flow (Re = 5 x 10(sup 4) - 1.4 x 10(sup 5)) past a circular cylinder to simulate flows with large-scale separations. We consider two types of flow situations: one in which the flow at the separation point is laminar, and the other in which the flow is already turbulent when it detaches from the surface of the cylinder. Solutions are presented for two- and three-dimensional calculations using both the unsteady Reynolds-averaged Navier-Stokes paradigm and the detached eddy simulation treatment. All calculations use the standard Spalart-Allmaras turbulence model as the base model.
NASA Astrophysics Data System (ADS)
Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.
2010-04-01
In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is done on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than a thousand CD values. We address practical aspects such as the speed of calibration and selection of calibration patterns. The model is calibrated on 12 process window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at nominal process condition is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and eventually correct the nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~ 1.0 nm. We show which elements are important to obtain a well-calibrated model. We discuss the impact of 3D mask effects on the Bossung tilt. We demonstrate that a correct representation of the flare level during the calibration is important to achieve high predictability at various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one-dimensional structures only), its accuracy is validated based on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end-of-line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model in which EUV tool-specific signatures are taken into account.
NASA Astrophysics Data System (ADS)
Jiang, Xikai; Karpeev, Dmitry; Li, Jiyuan; de Pablo, Juan; Hernandez-Ortiz, Juan; Heinonen, Olle
Boundary integrals arise in many electrostatic and magnetostatic problems. In computational modeling of these problems, although the integral is performed only on the boundary of a domain, its direct evaluation needs O(N²) operations, where N is the number of unknowns on the boundary. The O(N²) scaling impedes a wider usage of the boundary integral method in the scientific and engineering communities. We have developed a parallel computational approach that utilizes the Fast Multipole Method to evaluate the boundary integral in O(N) operations. To demonstrate the accuracy, efficiency, and scalability of our approach, we consider two test cases. In the first case, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space using a hybrid finite element-boundary integral method. In the second case, we solve an electrostatic problem involving the polarization of dielectric objects in free space using the boundary element method. The results from the test cases show that our parallel approach can enable highly efficient and accurate simulations of mesoscale electrostatic/magnetostatic problems. Computing resources were provided by Blues, a high-performance cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory. Work at Argonne was supported by U.S. DOE, Office of Science under Contract No. DE-AC02-06CH11357.
Enabling R&D for accurate simulation of non-ideal explosives.
Aidun, John Bahram; Thompson, Aidan Patrick; Schmitt, Robert Gerard
2010-09-01
We implemented two numerical simulation capabilities essential to reliably predicting the effect of non-ideal explosives (NXs). To begin to be able to treat the multiple, competing, multi-step reaction paths and slower kinetics of NXs, Sandia's CTH shock physics code was extended to include the TIGER thermochemical equilibrium solver as an in-line routine. To facilitate efficient exploration of reaction pathways that need to be identified for the CTH simulations, we implemented in Sandia's LAMMPS molecular dynamics code the MSST method, which is a reactive molecular dynamics technique for simulating steady shock wave response. Our preliminary demonstrations of these two capabilities serve several purposes: (i) they demonstrate proof-of-principle for our approach; (ii) they provide illustration of the applicability of the new functionality; and (iii) they begin to characterize the use of the new functionality and identify where improvements will be needed for the ultimate capability to meet national security needs. Next steps are discussed.
Offner, Stella S. R.; Robitaille, Thomas P.; Hansen, Charles E.; Klein, Richard I.; McKee, Christopher F.
2012-07-10
The properties of unresolved protostars and their local environment are frequently inferred from spectral energy distributions (SEDs) using radiative transfer modeling. In this paper, we use synthetic observations of realistic star formation simulations to evaluate the accuracy of properties inferred from fitting model SEDs to observations. We use ORION, an adaptive mesh refinement (AMR) three-dimensional gravito-radiation-hydrodynamics code, to simulate low-mass star formation in a turbulent molecular cloud including the effects of protostellar outflows. To obtain the dust temperature distribution and SEDs of the forming protostars, we post-process the simulations using HYPERION, a state-of-the-art Monte Carlo radiative transfer code. We find that the ORION and HYPERION dust temperatures typically agree within a factor of two. We compare synthetic SEDs of embedded protostars for a range of evolutionary times, simulation resolutions, aperture sizes, and viewing angles. We demonstrate that complex, asymmetric gas morphology leads to a variety of classifications for individual objects as a function of viewing angle. We derive best-fit source parameters for each SED through comparison with a pre-computed grid of radiative transfer models. While the SED models correctly identify the evolutionary stage of the synthetic sources as embedded protostars, we show that the disk and stellar parameters can be very discrepant from the simulated values, which is expected since the disk and central source are obscured by the protostellar envelope. Parameters such as the stellar accretion rate, stellar mass, and disk mass show better agreement, but can still deviate significantly, and the agreement may in some cases be artificially good due to the limited range of parameters in the set of model SEDs. Lack of correlation between the model and simulation properties in many individual instances cautions against overinterpreting properties inferred from SEDs for unresolved protostellar
Results of a new polarization simulation
NASA Astrophysics Data System (ADS)
Fetrow, Matthew P.; Wellems, David; Sposato, Stephanie H.; Bishop, Kenneth P.; Caudill, Thomas R.; Davis, Michael L.; Simrell, Elizabeth R.
2002-01-01
Including polarization signatures of material samples in passive sensing may enhance target detection capabilities. To obtain more information on this potential improvement, a simulation is being developed to aid in interpreting IR polarization measurements in a complex environment. The simulation accounts for the background, or incident illumination, and the scattering and emission from the target into the sensor. MODTRAN, in combination with a dipole approximation to singly scattered radiance, is used to polarimetrically model the background, or sky conditions. The scattering and emission from rough surfaces are calculated using an energy-conserving polarimetric Torrance and Sparrow BRDF model. The simulation can be used to examine the surface properties of materials in a laboratory environment, to investigate IR polarization signatures in the field, or a complex environment, and to predict trends in LWIR polarization data. In this paper we discuss the simulation architecture and the process for determining the index of refraction and surface roughness as a function of wavelength, which involves making polarization measurements of flat glass plates at various angles and temperatures in the laboratory at Kirtland Air Force Base, and the comparison of the simulation with field data taken at Eglin Air Force Base. The latter process entails using the extrapolated index of refraction and surface roughness, and a polarimetric incident sky dome generated by MODTRAN. We also present some parametric studies in which the sky condition, the sky temperature and the sensor declination angle were all varied.
A hybrid method for efficient and accurate simulations of diffusion compartment imaging signals
NASA Astrophysics Data System (ADS)
Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime
2015-12-01
Diffusion-weighted imaging is sensitive to the movement of water molecules through the tissue microstructure and can therefore be used to gain insight into the tissue cellular architecture. While the diffusion signal arising from simple geometrical microstructure is known analytically, it remains unclear what diffusion signal arises from complex microstructural configurations. Such knowledge is important to design optimal acquisition sequences, to understand the limitations of diffusion-weighted imaging and to validate novel models of the brain microstructure. We present a novel framework for the efficient simulation of high-quality diffusion-weighted MRI (DW-MRI) signals based on the hybrid combination of exact analytic expressions in simple geometric compartments such as cylinders and spheres and Monte Carlo simulations in more complex geometries. We validate our approach on synthetic arrangements of parallel cylinders representing the geometry of white matter fascicles, by comparing it to complete, all-out Monte Carlo simulations commonly used in the literature. For typical configurations, equal levels of accuracy are obtained with our hybrid method in less than one fifth of the computational time required for Monte Carlo simulations.
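The core idea of checking Monte Carlo signals against exact analytic expressions can be illustrated in the simplest setting: free 1-D diffusion under a narrow-pulse gradient sequence, where the analytic attenuation is E = exp(-q²DΔ). All parameter values below are illustrative assumptions, not those of the paper:

```python
import math
import random

# Narrow-pulse signal for free 1-D diffusion: analytically E = exp(-q^2 * D * Delta).
# A Monte Carlo estimate of <cos(q * dx)>, with dx ~ N(0, 2*D*Delta), should match.
D = 2.0e-9        # diffusivity, m^2/s (roughly free water; assumed)
Delta = 20e-3     # diffusion time, s (assumed)
q = 2.0e5         # gradient wavenumber, 1/m (assumed)

rng = random.Random(1)
n_walkers = 200000
sigma = math.sqrt(2.0 * D * Delta)          # std of 1-D displacement
mc_signal = sum(math.cos(q * rng.gauss(0.0, sigma))
                for _ in range(n_walkers)) / n_walkers
analytic_signal = math.exp(-q * q * D * Delta)
print(f"MC: {mc_signal:.4f}  analytic: {analytic_signal:.4f}")
```

In the hybrid framework described above, compartments with such closed-form signals are evaluated analytically, and walkers are spent only on the geometries that lack them, which is where the factor-of-five speedup comes from.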
NASA Astrophysics Data System (ADS)
Yi, Sha-Sha; Pan, Cong; Hu, Zhong-Han
2015-12-01
Modern computer simulations of biological systems often involve an explicit treatment of the complex interactions among a large number of molecules. While it is straightforward to compute the short-ranged van der Waals interaction in classical molecular dynamics simulations, it has been a long-standing issue to develop accurate methods for the long-ranged Coulomb interaction. In this short review, we discuss three types of methodologies for the accurate treatment of electrostatics in simulations of explicit molecules: truncation-type methods, Ewald-type methods, and mean-field-type methods. Throughout the discussion, we outline the formulations and developments of these methods, emphasize the intrinsic connections among the three types of methods, and focus on the existing problems, which are often associated with the boundary conditions of electrostatics. This brief survey is summarized with a short perspective on future trends in method developments and applications in the field of biological simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 91127015 and 21522304) and the Open Project from the State Key Laboratory of Theoretical Physics, and the Innovation Project from the State Key Laboratory of Supramolecular Structure and Materials.
A new class of accurate, mesh-free hydrodynamic simulation methods
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2015-06-01
We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume `overlap'. We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration, in the code GIZMO: this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require `artificial diffusion' terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced `particle noise' and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of `grid alignment' effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize `grid noise'. These differences are important for a range of astrophysical problems.
Hepburn, I; Chen, W; De Schutter, E
2016-08-01
Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space either in a discrete time or discrete space framework, which has led to the development of parallel methods that can take advantage of the power of modern supercomputers in recent years. We systematically test suggested components of stochastic reaction-diffusion operator splitting in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy with minimal performance cost. We test a range of models in small-scale MPI simulations from simple diffusion models to realistic biological models and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification. PMID:27497550
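A deterministic toy version of the reaction-diffusion operator splitting tested above can be sketched in one dimension: advance diffusion with an explicit finite-difference step, then apply the reaction (here simple decay) exactly. The grid, rate constant, and first-order (Lie) splitting are illustrative choices, not those of the authors' stochastic, irregular-mesh implementation:

```python
import math

# Lie operator splitting for u_t = D u_xx - k u on a 1-D grid with
# zero-flux boundaries: diffusion substep, then exact reaction substep.
D, k = 1.0, 0.5
nx, dx, dt, nsteps = 50, 0.1, 0.002, 500   # D*dt/dx^2 = 0.2 (stable)

u = [0.0] * nx
u[nx // 2] = 1.0 / dx          # approximate point source
mass0 = sum(u) * dx

for _ in range(nsteps):
    # diffusion substep (explicit, reflecting boundaries conserve mass)
    lap = [0.0] * nx
    for i in range(nx):
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < nx - 1 else u[i]
        lap[i] = (left - 2 * u[i] + right) / dx ** 2
    u = [ui + dt * D * li for ui, li in zip(u, lap)]
    # reaction substep: first-order decay integrated exactly over dt
    decay = math.exp(-k * dt)
    u = [ui * decay for ui in u]

mass = sum(u) * dx
print(f"mass ratio: {mass / mass0:.4f}  exact decay: {math.exp(-k * dt * nsteps):.4f}")
```

Because diffusion conserves mass and the decay substep is exact, the split scheme reproduces the analytic total-mass decay here; the accuracy questions studied in the paper arise when the substeps do not commute and molecules are tracked stochastically across mesh partitions.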
H2 Adsorption in a Porous Crystal: Accurate First-Principles Quantum Simulation.
D'Arcy, Jordan H; Jordan, Meredith J T; Frankcombe, Terry J; Collins, Michael A
2015-12-17
A general method is presented for constructing, from ab initio quantum chemistry calculations, the potential energy surface (PES) for H2 adsorbed in a porous crystalline material. The method is illustrated for the metal-organic framework material MOF-5. Rigid-body quantum diffusion Monte Carlo simulations are used in the construction of the PES and to evaluate the quantum ground state of H2 in MOF-5, the zero-point energy, and the enthalpy of adsorption at 0 K. PMID:26322374
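The diffusion Monte Carlo machinery used here can be illustrated in miniature for a 1-D harmonic oscillator (ħ = m = ω = 1), whose exact zero-point energy is 0.5: walkers diffuse freely, then branch with weight exp(−(V − E_ref)Δτ), and the reference energy doubles as the ground-state energy estimate. The population-control gain, walker count and time step below are illustrative choices, not the paper's rigid-body settings.

```python
import numpy as np

def dmc_zero_point_energy(n_walkers=4000, dt=0.02, n_steps=3000, n_equil=1000,
                          seed=0):
    # Unguided diffusion Monte Carlo for V(x) = x^2 / 2.
    rng = np.random.default_rng(seed)
    x = np.zeros(n_walkers)
    e_ref = 0.0
    samples = []
    for step in range(n_steps):
        x = x + np.sqrt(dt) * rng.standard_normal(x.size)   # free diffusion
        v = 0.5 * x * x
        w = np.exp(-(v - e_ref) * dt)                       # branching weights
        m = (w + rng.random(x.size)).astype(int)            # stochastic rounding
        x = np.repeat(x, m)                                 # birth/death of walkers
        # Potential (growth) estimator plus weak population-control feedback:
        e_ref = np.mean(0.5 * x * x) + 0.1 * (1.0 - x.size / n_walkers) / dt
        if step >= n_equil:
            samples.append(e_ref)
    return float(np.mean(samples))

e0 = dmc_zero_point_energy()
```

With these settings the averaged reference energy lands close to the exact value 0.5; the real calculation replaces the toy potential with the ab initio PES and adds rigid-body rotational moves.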
Pre-Stall Behavior of a Transonic Axial Compressor Stage via Time-Accurate Numerical Simulation
NASA Technical Reports Server (NTRS)
Chen, Jen-Ping; Hathaway, Michael D.; Herrick, Gregory P.
2008-01-01
CFD calculations using high-performance parallel computing were conducted to simulate the pre-stall flow of a transonic compressor stage, NASA compressor Stage 35. The simulations were run with a full-annulus grid that models the 3D, viscous, unsteady blade row interaction without the need for an artificial inlet distortion to induce stall. The simulation demonstrates the development of the rotating stall from the growth of instabilities. Pressure-rise performance and pressure traces are compared with published experimental data before the study of flow evolution prior to the rotating stall. Spatial FFT analysis of the flow indicates a rotating long-length disturbance of one rotor circumference, which is followed by a spike-type breakdown. The analysis also links the long-length wave disturbance with the initiation of the spike inception. The spike instabilities occur when the trajectory of the tip clearance flow becomes perpendicular to the axial direction. When approaching stall, the passage shock changes from a single oblique shock to a dual-shock, which distorts the perpendicular trajectory of the tip clearance vortex but shows no evidence of flow separation that may contribute to stall.
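The spatial FFT diagnostic mentioned above amounts to decomposing a ring of circumferential pressure samples into integer mode numbers; a disturbance with one lobe per rotor circumference shows up as mode 1. The synthetic signal below is illustrative, not Stage 35 data.

```python
import numpy as np

def dominant_circumferential_mode(p):
    # p: pressure samples at equally spaced positions around the annulus.
    spec = np.abs(np.fft.rfft(p))
    spec[0] = 0.0                      # drop the circumferential mean (mode 0)
    return int(np.argmax(spec))

theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
# Long-length-scale disturbance (one lobe per circumference) plus a weak,
# localized spike-like feature:
p = np.cos(theta) + 0.3 * np.exp(-((theta - np.pi) / 0.1) ** 2)
mode = dominant_circumferential_mode(p)
```

Tracking this dominant mode number in time is how a long-length-scale modal wave can be separated from the short-length-scale spike that triggers breakdown.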
NASA Astrophysics Data System (ADS)
Bozinoski, Radoslav
Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady, Navier-Stokes solvers have in the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular arc bump, laminar flat plate, laminar cylinder, and turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 are presented and show efficiencies of 90% and higher for processes of no fewer than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, the validation data for separated, non-reattaching flows over a NACA 0012 airfoil, wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions
Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.
Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel
2015-01-01
A new, simple, and quick-calculation methodology for obtaining a solar panel model from the manufacturer's datasheet, in order to perform MPPT simulations, is described. The method takes into account variations in the ambient conditions (sun irradiation and solar-cell temperature) and allows fast comparison of MPPT methods, or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day and under realistic ambient conditions. PMID:25874262
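Of the MPPT algorithms such a framework can compare, perturb-and-observe is the simplest: step the operating voltage, keep stepping in the same direction while power rises, and reverse when it falls. The panel curve below is a crude stand-in with a single maximum near 17.6 V, not the paper's datasheet-based single-diode model.

```python
def panel_power(v):
    # Illustrative P-V curve: current is nearly flat up to a knee near the
    # open-circuit voltage (i_sc = 8.21 A, v_oc = 21.8 V are placeholders).
    i_sc, v_oc = 8.21, 21.8
    i = i_sc * (1.0 - (v / v_oc) ** 12)
    return v * max(i, 0.0)

def perturb_and_observe(v0=12.0, dv=0.1, n_steps=400):
    v, p_prev, step = v0, panel_power(v0), dv
    for _ in range(n_steps):
        v += step
        p = panel_power(v)
        if p < p_prev:        # power fell: reverse the perturbation direction
            step = -step
        p_prev = p
    return v

v_mpp = perturb_and_observe()
```

The tracker climbs to the maximum power point and then oscillates around it with amplitude set by the perturbation size dv, which is exactly the trade-off (speed versus steady-state ripple) that MPPT comparisons quantify.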
High-order accurate multi-phase simulations: building blocks and what's tricky about them
NASA Astrophysics Data System (ADS)
Kummer, Florian
2015-11-01
We are going to present a high-order numerical method for multi-phase flow problems, which employs a sharp interface representation by a level-set and an extended discontinuous Galerkin (XDG) discretization for the flow properties. The shape of the XDG basis functions is dynamically adapted to the position of the fluid interface, so that the spatial approximation space can represent jumps in pressure and kinks in velocity accurately. By this approach, the `hp-convergence' property of the classical discontinuous Galerkin (DG) method can be preserved for the low-regularity, discontinuous solutions, such as those appearing in multi-phase flows. Within the past years, several building blocks of such a method were presented: this includes numerical integration on cut-cells, the spatial discretization by the XDG method, precise evaluation of curvature, and level-set algorithms tailored to the special requirements of XDG methods. The presentation covers a short review of these building blocks and their integration into a full multi-phase solver. A special emphasis is put on the discussion of the several pitfalls one may encounter in the formulation of such a solver. Funded by the German Research Foundation.
Zhou, Nengji; Chen, Lipeng; Huang, Zhongkai; Sun, Kewei; Tanimura, Yoshitaka; Zhao, Yang
2016-03-10
By employing the Dirac-Frenkel time-dependent variational principle, we study the dynamical properties of the Holstein molecular crystal model with diagonal and off-diagonal exciton-phonon coupling. A linear combination of the Davydov D1 (D2) ansatz, referred to as the "multi-D1 ansatz" ("multi-D2 ansatz"), is used as the trial state with enhanced accuracy but without sacrificing efficiency. The time evolution of the exciton probability is found to be in perfect agreement with that of the hierarchy equations of motion, demonstrating the promise the multiple Davydov trial states hold as an efficient, robust description of dynamics of complex quantum systems. In addition to the linear absorption spectra computed for both diagonal and off-diagonal cases, for the first time, 2D spectra have been calculated for systems with off-diagonal exciton-phonon coupling by employing the multiple D2 ansatz to compute the nonlinear response function, testifying to the great potential of the multiple D2 ansatz for fast, accurate implementation of multidimensional spectroscopy. It is found that the signal exhibits a single peak for weak off-diagonal coupling, while a vibronic multipeak structure appears for strong off-diagonal coupling. PMID:26871592
Lippert, Ross A; Predescu, Cristian; Ierardi, Douglas J; Mackenzie, Kenneth M; Eastwood, Michael P; Dror, Ron O; Shaw, David E
2013-10-28
In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity. PMID:24182003
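A minimal sketch of the decoupled-update idea: an inner velocity Verlet loop advances independent harmonic "particles" (forces F = −x, m = kB = 1), and a simple velocity-rescaling thermostat, standing in for the paper's more sophisticated thermostats, is applied only every 20 steps rather than every step.

```python
import numpy as np

def simulate(n=256, dt=0.005, n_steps=2000, t_target=1.0, thermo_interval=20,
             seed=1):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    v = rng.standard_normal(n)
    for step in range(n_steps):
        v += 0.5 * dt * (-x)          # velocity Verlet half-kick, F = -x
        x += dt * v
        v += 0.5 * dt * (-x)          # second half-kick with updated forces
        if step % thermo_interval == 0:
            # Infrequent thermostat update: rescale to the target temperature.
            t_inst = np.mean(v * v)   # 1-D kinetic temperature, kB = 1
            v *= np.sqrt(t_target / t_inst)
    return float(np.mean(v * v))

t_final = simulate()
```

Between thermostat applications the trajectory is purely Newtonian, which is the property that lets a parallel code skip the all-processor communication a per-step thermostat would require.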
A novel fast and accurate pseudo-analytical simulation approach for MOAO
NASA Astrophysics Data System (ADS)
Gendron, É.; Charara, A.; Abdelfattah, A.; Gratadour, D.; Keyes, D.; Ltaief, H.; Morel, C.; Vidal, F.; Sevin, A.; Rousset, G.
2014-08-01
Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique for wide-field multi-object spectrographs (MOS). MOAO aims at applying dedicated wavefront corrections to numerous separated tiny patches spread over a large field of view (FOV), limited only by that of the telescope. The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. We have developed a novel hybrid, pseudo-analytical simulation scheme, somewhere in between the end-to-end and purely analytical approaches, that allows us to simulate in detail the tomographic problem as well as noise and aliasing with high fidelity, including fitting and bandwidth errors thanks to a Fourier-based code. Our tomographic approach is based on the computation of the minimum mean square error (MMSE) reconstructor, from which we derive numerically the covariance matrix of the tomographic error, including aliasing and propagated noise. We are then able to simulate the point-spread function (PSF) associated with this covariance matrix of the residuals, as in PSF reconstruction algorithms. The advantage of our approach is that we compute the same tomographic reconstructor that would be computed when operating the real instrument, so that our developments open the way for a future on-sky implementation of the tomographic control, plus the joint PSF and performance estimation. The main challenge resides in the computation of the tomographic reconstructor, which involves the inversion of a large matrix (typically 40 000 × 40 000 elements). To perform this computation efficiently, we chose an optimized approach based on the use of GPUs as accelerators and an optimized linear algebra library, MORSE, providing a significant speedup against standard CPU-oriented libraries such as Intel MKL. Because the covariance matrix is
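The MMSE reconstructor at the core of this scheme has the textbook closed form R = C_φ D^T (D C_φ D^T + C_n)^(−1) for measurements s = Dφ + n. The dimensions and covariances below are tiny illustrative stand-ins for the 40 000 × 40 000 tomographic problem, used only to show why the statistics-aware reconstructor beats a noise-blind least-squares inverse.

```python
import numpy as np

rng = np.random.default_rng(2)

def mmse_reconstructor(d, c_phi, c_noise):
    # R = C_phi D^T (D C_phi D^T + C_n)^-1 minimizes E|R s - phi|^2
    # for zero-mean phi and n with the given covariances.
    s_cov = d @ c_phi @ d.T + c_noise
    return c_phi @ d.T @ np.linalg.inv(s_cov)

n_phi, n_meas = 8, 12
a = rng.standard_normal((n_phi, n_phi))
c_phi = a @ a.T + n_phi * np.eye(n_phi)     # SPD phase covariance (toy)
c_noise = 4.0 * np.eye(n_meas)              # measurement noise, std = 2
d = rng.standard_normal((n_meas, n_phi))    # toy "WFS geometry" matrix

r = mmse_reconstructor(d, c_phi, c_noise)
pinv = np.linalg.pinv(d)                    # noise-blind least squares
chol = np.linalg.cholesky(c_phi)

err_mmse = err_pinv = 0.0
for _ in range(500):
    phi = chol @ rng.standard_normal(n_phi)
    s = d @ phi + 2.0 * rng.standard_normal(n_meas)
    err_mmse += np.sum((r @ s - phi) ** 2)
    err_pinv += np.sum((pinv @ s - phi) ** 2)
```

The residual covariance of R, computed once, is exactly the quantity the authors feed into PSF estimation.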
Cohen, Trevor; Blatter, Brett; Patel, Vimla
2008-01-01
Cognitive studies reveal that less-than-expert clinicians are less able to recognize meaningful patterns of data in clinical narratives. Accordingly, psychiatric residents early in training fail to attend to information that is relevant to diagnosis and the assessment of dangerousness. This manuscript presents a cognitively motivated methodology for the simulation of expert ability to organize relevant findings supporting intermediate diagnostic hypotheses. Latent Semantic Analysis is used to generate a semantic space from which meaningful associations between psychiatric terms are derived. Diagnostically meaningful clusters are modeled as geometric structures within this space and compared to elements of psychiatric narrative text using semantic distance measures. A learning algorithm is defined that alters components of these geometric structures in response to labeled training data. Extraction and classification of relevant text segments is evaluated against expert annotation, with system-rater agreement approximating rater-rater agreement. A range of biomedical informatics applications for these methods is suggested. PMID:18455483
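Latent Semantic Analysis itself is a term-document SVD: after projecting terms into a low-rank latent space, semantic distance is just cosine similarity between term vectors. The four-"document" corpus below is an illustrative toy, not the psychiatric narratives used in the study.

```python
import numpy as np

docs = [
    "patient reports suicidal ideation and hopelessness",
    "suicidal thoughts with a plan indicate high risk",
    "patient denies hallucinations or delusions",
    "auditory hallucinations and paranoid delusions noted",
]
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
# Term-document count matrix (rows = terms, columns = documents):
tdm = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

u, s, vt = np.linalg.svd(tdm, full_matrices=False)
term_vecs = u[:, :2] * s[:2]          # rank-2 latent space for the terms

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_within = cosine(term_vecs[index["suicidal"]],
                    term_vecs[index["hopelessness"]])
sim_across = cosine(term_vecs[index["suicidal"]],
                    term_vecs[index["hallucinations"]])
```

Terms that co-occur in the dangerousness-related "documents" end up closer in the latent space than terms from the psychosis cluster, which is the associative structure the paper's geometric cluster models are built on.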
NASA Astrophysics Data System (ADS)
Sagui, Celeste; Pedersen, Lee G.; Darden, Thomas A.
2004-01-01
The accurate simulation of biologically active macromolecules faces serious limitations that originate in the treatment of electrostatics in the empirical force fields. The current use of "partial charges" is a significant source of errors, since these vary widely with different conformations. By contrast, the molecular electrostatic potential (MEP) obtained through the use of a distributed multipole moment description, has been shown to converge to the quantum MEP outside the van der Waals surface, when higher order multipoles are used. However, in spite of the considerable improvement to the representation of the electronic cloud, higher order multipoles are not part of current classical biomolecular force fields due to the excessive computational cost. In this paper we present an efficient formalism for the treatment of higher order multipoles in Cartesian tensor formalism. The Ewald "direct sum" is evaluated through a McMurchie-Davidson formalism [L. McMurchie and E. Davidson, J. Comput. Phys. 26, 218 (1978)]. The "reciprocal sum" has been implemented in three different ways: using an Ewald scheme, a particle mesh Ewald (PME) method, and a multigrid-based approach. We find that even though the use of the McMurchie-Davidson formalism considerably reduces the cost of the calculation with respect to the standard matrix implementation of multipole interactions, the calculation in direct space remains expensive. When most of the calculation is moved to reciprocal space via the PME method, the cost of a calculation where all multipolar interactions (up to hexadecapole-hexadecapole) are included is only about 8.5 times more expensive than a regular AMBER 7 [D. A. Pearlman et al., Comput. Phys. Commun. 91, 1 (1995)] implementation with only charge-charge interactions. The multigrid implementation is slower but shows very promising results for parallelization. It provides a natural way to interface with continuous, Gaussian-based electrostatics in the future. It is
Recent Results on the Accurate Measurements of the Dielectric Constant of Seawater at 1.413 GHz
NASA Technical Reports Server (NTRS)
Lang, R.H.; Tarkocin, Y.; Utku, C.; Le Vine, D.M.
2008-01-01
Measurements of the complex dielectric constant of seawater at 30.00 psu, 35.00 psu and 38.27 psu over the temperature range from 5 C to 35 C at 1.413 GHz are given and compared with the Klein-Swift results. A resonant cavity technique is used. The calibration constant used in the cavity perturbation formulas is determined experimentally using methanol and ethanediol (ethylene glycol) as reference liquids. Analysis of the data shows that the measurements are accurate to better than 1.0% in almost all cases studied.
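The cavity perturbation step is a pair of one-line formulas: the resonance frequency shift gives the real part of the permittivity and the Q degradation gives the imaginary part, with a measured calibration constant absorbing mode- and geometry-dependent prefactors. The prefactors and all numbers below are illustrative, not the paper's measurements.

```python
def complex_permittivity(f_empty, f_sample, q_empty, q_sample, vc_over_vs,
                         a_cal=1.0):
    # Small-perturbation cavity formulas (prefactors folded into a_cal, the
    # calibration constant determined with reference liquids such as methanol):
    #   eps' - 1  ~  (f_empty - f_sample)/f_sample * Vc/Vs
    #   eps''     ~  (1/Q_sample - 1/Q_empty)      * Vc/Vs
    eps_re = 1.0 + 2.0 * a_cal * vc_over_vs * (f_empty - f_sample) / f_sample
    eps_im = a_cal * vc_over_vs * (1.0 / q_sample - 1.0 / q_empty)
    return complex(eps_re, eps_im)

# Illustrative numbers: 0.5 MHz shift and a Q drop from 5000 to 1200.
eps = complex_permittivity(f_empty=1.4130e9, f_sample=1.4125e9,
                           q_empty=5000.0, q_sample=1200.0, vc_over_vs=1.0e4)
```

Because the cavity-to-sample volume ratio and field distribution enter only through a_cal, calibrating against liquids of known permittivity removes the dominant systematic error, which is the approach the abstract describes.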
Midgley, David J.; Greenfield, Paul; Shaw, Janet M.; Oytam, Yalchin; Li, Dongmei; Kerr, Caroline A.; Hendry, Philip
2012-01-01
The second generation (G2) PhyloChip is designed to detect over 8700 bacterial and archaeal taxa and has been used in over 50 publications and conference presentations. Many of those publications reveal that the PhyloChip measures of species richness greatly exceed statistical estimates of richness based on other methods. An examination of probes downloaded from Greengenes suggested that the system may have the potential to distort the observed community structure. This may be due to the sharing of probes by taxa; more than 21% of the taxa in the downloaded data have no unique probes. In-silico simulations using these data showed that a population of 64 taxa representing a typical anaerobic subterranean community returned 96 different taxa, including 15 families incorrectly called present and 19 families incorrectly called absent. A study of nasal and oropharyngeal microbial communities by Lemon et al (2010) found some 1325 taxa using the G2 PhyloChip; however, about 950 of these taxa have no unique probes in the downloaded data and cannot be definitively called present. Finally, data from Brodie et al (2007), when re-examined, indicate that the abundances of the majority of detected taxa are highly correlated with one another, suggesting that many probe sets do not act independently. Based on our analyses of downloaded data, we conclude that outputs from the G2 PhyloChip should be treated with some caution, and that the presence of taxa represented solely by non-unique probes be independently verified. PMID:22457798
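The probe-sharing failure mode is easy to reproduce in silico: if a taxon's probes are a subset of another taxon's, any community containing the superset taxon lights all of the subset taxon's probes, and a presence call based on "all probes lit" returns a false positive. The probe sets below are toy assumptions, not Greengenes data.

```python
# Toy probe sets; taxon "C" has no unique probes (a subset of "B"'s probes).
probes = {
    "A": {1, 2, 3},
    "B": {4, 5, 6, 7},
    "C": {4, 5},
    "D": {8, 9},
}

def call_present(community, probe_sets):
    # A taxon is called present when every one of its probes is lit.
    lit = set().union(*(probe_sets[t] for t in community))
    return {t for t, p in probe_sets.items() if p <= lit}

observed = call_present({"A", "B"}, probes)   # true community: A and B only
```

The call set comes back larger than the true community, which is the richness inflation mechanism the re-analysis describes.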
Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.
2015-01-01
Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational. PMID:25615870
NASA Astrophysics Data System (ADS)
Adidharma, Hertanto; Tan, Sugata P.
2016-07-01
Canonical Monte Carlo simulations of face-centered cubic (FCC) and hexagonal close-packed (HCP) Lennard-Jones (LJ) solids are conducted at very low temperatures (0.10 ≤ T∗ ≤ 1.20) and high densities (0.96 ≤ ρ∗ ≤ 1.30). A simple and robust method is introduced to determine whether or not the cutoff distance used in the simulation is large enough to provide accurate thermodynamic properties, which enables us to distinguish the properties of FCC from those of HCP LJ solids with confidence, despite their close similarities. Free-energy expressions derived from the simulation results are also proposed, not only to describe the properties of those individual structures but also the FCC-liquid, FCC-vapor, and FCC-HCP solid phase equilibria.
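The spirit of the cutoff check can be conveyed with a static (0 K) lattice sum: compute the FCC LJ energy per atom at increasing cutoff radii and watch it converge toward the known full-sum minimum of about −8.610 ε near nearest-neighbour spacing 1.09 σ. This toy omits thermal motion and is not the authors' actual criterion.

```python
import itertools
import numpy as np

def fcc_lattice_energy(rho, r_cut, n_cells=6):
    # Static LJ energy per atom of an FCC crystal at reduced density rho,
    # summing pair interactions out to r_cut (LJ reduced units).
    a = (4.0 / rho) ** (1.0 / 3.0)    # conventional cubic cell, 4 atoms
    basis = np.array([[0, 0, 0], [0, .5, .5], [.5, 0, .5], [.5, .5, 0]]) * a
    cells = np.array(list(itertools.product(range(-n_cells, n_cells + 1),
                                            repeat=3))) * a
    e = 0.0
    for b in basis:                    # all neighbours of the atom at the origin
        r = np.linalg.norm(cells + b, axis=1)
        r = r[(r > 1e-12) & (r < r_cut)]
        e += np.sum(4.0 * (r ** -12 - r ** -6))
    return 0.5 * e                     # half: each pair is shared by two atoms

e_short = fcc_lattice_energy(rho=1.0914, r_cut=3.0)
e_long = fcc_lattice_energy(rho=1.0914, r_cut=8.0)
```

The short-cutoff energy misses a substantial attractive tail, and the FCC-HCP energy difference is far smaller than that tail, which is why a cutoff-adequacy test is needed before the two structures can be distinguished.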
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Vu-Quoc, Loc
2007-07-01
We present in this paper the displacement-driven version of a tangential force-displacement (TFD) model that accounts for both elastic and plastic deformations together with interfacial friction occurring in collisions of spherical particles. This elasto-plastic frictional TFD model, with its force-driven version presented in [L. Vu-Quoc, L. Lesburg, X. Zhang, An accurate tangential force-displacement model for granular-flow simulations: contacting spheres with plastic deformation, force-driven formulation, Journal of Computational Physics 196(1) (2004) 298-326], is consistent with the elasto-plastic frictional normal force-displacement (NFD) model presented in [L. Vu-Quoc, X. Zhang, An elasto-plastic contact force-displacement model in the normal direction: displacement-driven version, Proceedings of the Royal Society of London, Series A 455 (1999) 4013-4044]. Both the NFD model and the present TFD model are based on the concept of additive decomposition of the radius of contact area into an elastic part and a plastic part. The effect of permanent indentation after impact is represented by a correction to the radius of curvature. The effect of material softening due to plastic flow is represented by a correction to the elastic moduli. The proposed consistent displacement-driven, elasto-plastic NFD and TFD models are designed for implementation in computer codes using the discrete-element method (DEM) for granular-flow simulations. The proposed TFD model is shown to be accurate, and is validated against nonlinear elasto-plastic finite-element analyses involving plastic flow under both loading and unloading conditions.
Simulation of optical diagnostics for crystal growth: models and results
NASA Astrophysics Data System (ADS)
Banish, Michele R.; Clark, Rodney L.; Kathman, Alan D.; Lawson, Shelah M.
1991-12-01
A computer simulation of a two-color holographic interferometric (TCHI) optical system was performed using a physical (wave) optics model. This model accurately simulates propagation through time-varying, 2-D or 3-D concentration and temperature fields as a wave phenomenon. The model calculates wavefront deformations that can be used to generate fringe patterns. This simulation modeled a proposed triglycine sulphate (TGS) flight experiment by propagating through the simplified onion-like refractive index distribution of the growing crystal and calculating the recorded wavefront deformation. The phase of this wavefront was used to generate sample interferograms that map index-of-refraction variation. Two such fringe patterns, generated at different wavelengths, were used to extract the original temperature and concentration field characteristics within the growth chamber. This demonstrates the feasibility of the TCHI crystal growth diagnostic technique. The simulation provides feedback to the experimental design process.
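The two-color extraction step is linear algebra: each wavelength's phase change is Δφ_λ = (2π/λ) L (∂n/∂T·ΔT + ∂n/∂C·ΔC), so two wavelengths give a 2×2 system for the temperature and concentration changes. The path length and refractive-index sensitivities below are placeholder values, not TGS properties.

```python
import numpy as np

L_PATH = 0.05                                  # optical path length, m (assumed)
LAMS = (633e-9, 532e-9)                        # two probe wavelengths, m
DNDT = {633e-9: -1.0e-4, 532e-9: -1.1e-4}      # dn/dT per K (placeholder)
DNDC = {633e-9: 1.5e-3, 532e-9: 1.9e-3}        # dn/dC per conc. unit (placeholder)

def phase_change(lam, d_temp, d_conc):
    return (2.0 * np.pi / lam) * L_PATH * (DNDT[lam] * d_temp
                                           + DNDC[lam] * d_conc)

def recover_fields(dphi_a, dphi_b):
    # Invert the 2x2 linear system relating the two phase maps to (dT, dC).
    a = np.array([[(2.0 * np.pi / lam) * L_PATH * DNDT[lam],
                   (2.0 * np.pi / lam) * L_PATH * DNDC[lam]] for lam in LAMS])
    return np.linalg.solve(a, np.array([dphi_a, dphi_b]))

# Round-trip check: forward-model a known (dT, dC), then invert it.
d_temp, d_conc = recover_fields(phase_change(633e-9, 0.8, 2.0e-3),
                                phase_change(532e-9, 0.8, 2.0e-3))
```

The separation works only because the sensitivity ratios differ between wavelengths; a nearly singular 2×2 matrix is the practical limit on how well temperature and concentration can be decoupled.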
NASA Astrophysics Data System (ADS)
Baccarini, Lane Maria Rabelo; de Menezes, Benjamim Rodrigues; Caminhas, Walmir Matos
2010-01-01
The study of induction motor behavior under abnormal conditions, and the ability to detect and predict these conditions, has been an area of increasing interest. Early detection and diagnosis of incipient faults are desirable for interactive evaluation of the running condition, product quality guarantee, and improved operational efficiency of induction motors. The main difficulty in this task is the lack of accurate analytical models to describe a faulty motor. This paper proposes a dynamic model to analyze electrical and mechanical faults in induction machines that includes net asymmetries and load conditions. The model permits analysis of the interactions between different faults in order to detect possible false alarms. Simulations and experiments were performed to confirm the validity of the model.
NASA Astrophysics Data System (ADS)
Shauly, Eitan; Rotstein, Israel; Peltinov, Ram; Latinski, Sergei; Adan, Ofer; Levi, Shimon; Menadeva, Ovadya
2009-03-01
The continuing transistor scaling efforts, toward smaller devices with similar (or larger) drive current per um and faster operation, increase the challenge of predicting and controlling the transistor off-state current. Typically, electrical simulators like SPICE use the design intent (as-drawn GDS data). In more sophisticated cases, the simulators are fed with the pattern after lithography and etch process simulations. As the importance of electrical simulation accuracy increases and leakage becomes more dominant, there is a need to feed these simulators with more accurate information extracted from physical on-silicon transistors. Our methodology to predict changes in device performance due to systematic lithography and etch effects was used in this paper. In general, the methodology consists of using OPCCmaxTM for systematic Edge-Contour-Extraction (ECE) from transistors along the manufacturing flow, including image distortions such as line-end shortening, corner rounding and line-edge roughness. These measurements are used for SPICE modeling. A possible application of this new metrology is to provide, ahead of time, physical and electrical statistical data, improving time to market. In this work, we applied our methodology to analyze small and large arrays of 2.14um2 6T-SRAM, manufactured using the Tower Standard Logic for General Purposes Platform. 4 of the 6 transistors used a "U-Shape AA", known to have higher variability. The predicted electrical performance of the transistors' drive current and leakage current, in terms of nominal values and variability, is presented. We also used the methodology to analyze an entire SRAM block array. A study of isolation leakage and variability is also presented.
Superspreading: molecular dynamics simulations and experimental results
NASA Astrophysics Data System (ADS)
Theodorakis, Panagiotis; Kovalchuk, Nina; Starov, Victor; Muller, Erich; Craster, Richard; Matar, Omar
2015-11-01
The intriguing ability of certain surfactant molecules to drive the superspreading of liquids to complete wetting on hydrophobic substrates is central to numerous applications that range from coating flow technology to enhanced oil recovery. Recently, we have observed that for superspreading to occur, two key conditions must be simultaneously satisfied: the adsorption of surfactants from the liquid-vapor surface onto the three-phase contact line, augmented by local bilayer formation. Crucially, this must be coordinated with the rapid replenishment of liquid-vapor and solid-liquid interfaces with surfactants from the interior of the droplet. Here, we present the structural characteristics and kinetics of the droplet spreading during the different stages of this process, and we compare our results with experimental data for trisiloxane and polyoxyethylene surfactants. In this way, we highlight and explore the differences between surfactants, paving the way for the design of molecular architectures tailored specifically for applications that rely on the control of wetting. EPSRC Platform Grant MACIPh (EP/L020564/).
Chocholousová, Jana; Feig, Michael
2006-04-30
Different integrator time steps in NVT and NVE simulations of protein and nucleic acid systems are tested with the GBMV (Generalized Born using Molecular Volume) and GBSW (Generalized Born with simple SWitching) methods. The simulation stability and energy conservation are investigated in relation to the agreement with the Poisson theory. It is found that very close agreement between generalized Born methods and the Poisson theory, based on the commonly used sharp molecular surface definition, results in energy drift and simulation artifacts in molecular dynamics simulation protocols with standard 2-fs time steps. New parameters are proposed for the GBMV method, which maintain very good agreement with the Poisson theory while providing energy conservation and stable simulations at time steps of 1 to 1.5 fs. PMID:16518883
NASA Astrophysics Data System (ADS)
Grimminck, Dennis L. A. G.; Polman, Ben J. W.; Kentgens, Arno P. M.; Leo Meerts, W.
2011-08-01
A fast and accurate fit program is presented for deconvolution of one-dimensional solid-state quadrupolar NMR spectra of powdered materials. Computational costs of the synthesis of theoretical spectra are reduced by the use of libraries containing simulated time/frequency domain data. These libraries are calculated once, with the use of second-party simulation software readily available in the NMR community, to ensure maximum flexibility and accuracy with respect to experimental conditions. EASY-GOING deconvolution (EGdeconv) is equipped with evolutionary algorithms that provide robust many-parameter fitting and offer efficient parallelised computing. The program supports quantification of relative chemical site abundances and (dis)order in the solid state by incorporation of (extended) Czjzek and order-parameter models. To illustrate EGdeconv's current capabilities, we provide three case studies. Given the program's simple concept, it allows a straightforward extension to include other NMR interactions. The program is available as is for 64-bit Linux operating systems.
NASA Astrophysics Data System (ADS)
Pasternack, G. B.; Wyrick, J. R.; Jackson, J. R.
2014-12-01
Long practiced in fisheries, visual substrate mapping of coarse-bedded rivers is eschewed by geomorphologists for its inaccuracy and limited sizing data. Geomorphologists instead perform time-consuming measurements of surficial grains, with the few sampled locations precluding spatially explicit mapping and analysis of sediment facies. Remote sensing works for bare land, but not for vegetated or subaqueous sediments. As visual systems apply the log2 Wentworth scale made for sieving, they suffer from human inability to readily discern those classes. We hypothesized that size classes centered on the PDF of the anticipated sediment size distribution would enable field crews to accurately (i) identify presence/absence of each class in a facies patch and (ii) estimate the relative amount of each class to within 10%. We first tested 6 people using 14 measured samples with different mixtures. Next, we carried out facies mapping for ~ 37 km of the lower Yuba River in California. Finally, we tested the resulting data to see if it produced statistically significant hydraulic-sedimentary-geomorphic results. Presence/absence performance error was 0-4% for four people, 13% for one person, and 33% for one person. The last person was excluded from further effort. For the abundance estimation, performance error was 1% for one person, 7-12% for three people, and 33% for one person. This last person was further trained and re-tested. We found that the samples easiest to visually quantify were unimodal and bimodal, while those most difficult had nearly equal amounts of each size. This confirms psychological studies showing that humans have a more difficult time quantifying abundances of subgroups when confronted with well-mixed groups. In the Yuba, mean grain size decreased downstream, as is typical for an alluvial river. When averaged by reach, mean grain size and bed slope were correlated with an r2 of 0.95. At the morphological unit (MU) scale, eight in-channel bed MU types had an r2 of 0.90 between mean
Silva, Romesh; Amouzou, Agbessi; Munos, Melinda; Marsh, Andrew; Hazel, Elizabeth; Victora, Cesar; Black, Robert; Bryce, Jennifer
2016-01-01
Introduction Most low-income countries lack complete and accurate vital registration systems. As a result, measures of under-five mortality rates rely mostly on household surveys. In collaboration with partners in Ethiopia, Ghana, Malawi, and Mali, we assessed the completeness and accuracy of reporting of births and deaths by community-based health workers, and the accuracy of annualized under-five mortality rate estimates derived from these data. Here we report on results from Ethiopia, Malawi and Mali. Method In all three countries, community health workers (CHWs) were trained, equipped and supported to report pregnancies, births and deaths within defined geographic areas over a period of at least fifteen months. In-country institutions collected these data every month. At each study site, we administered a full birth history (FBH) or full pregnancy history (FPH) to women of reproductive age, via a census of households in Mali and via household surveys in Ethiopia and Malawi. Using these FBHs/FPHs as a validation data source, we assessed the completeness of the counts of births and deaths and the accuracy of under-five, infant, and neonatal mortality rates from the community-based method against the retrospective FBH/FPH for rolling twelve-month periods. For each method we calculated total cost, average annual cost per 1,000 population, and average cost per vital event reported. Results On average, CHWs submitted monthly vital event reports for over 95 percent of catchment areas in Ethiopia and Malawi, and for 100 percent of catchment areas in Mali. The completeness of vital events reporting by CHWs varied: we estimated that 30%-90% of annualized expected births (i.e. the number of births estimated using a FPH) were documented by CHWs, and 22%-91% of annualized expected under-five deaths were documented by CHWs. Resulting annualized under-five mortality rates based on the CHW vital events reporting were, on average, underestimated by 28% in Ethiopia, 32% in
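The completeness and rate-bias measures described above are simple ratios; a hedged sketch with hypothetical counts (not the study's data) for one twelve-month window:

```python
# All counts below are invented for illustration.
chw_births, expected_births = 720, 900            # CHW-reported vs FPH-expected
chw_u5_deaths, expected_u5_deaths = 40, 62

# Completeness: fraction of expected events the CHWs documented.
birth_completeness = chw_births / expected_births
death_completeness = chw_u5_deaths / expected_u5_deaths

# Under-five mortality rate (U5MR) per 1,000 live births from each source.
u5mr_chw = 1000 * chw_u5_deaths / chw_births
u5mr_ref = 1000 * expected_u5_deaths / expected_births

# Negative relative bias means the CHW-based rate under-estimates the reference.
relative_bias = (u5mr_chw - u5mr_ref) / u5mr_ref
```

Because deaths are missed more often than births in this toy example, the CHW-based U5MR comes out biased low, the pattern the study reports.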
NASA Astrophysics Data System (ADS)
Reinhardt, Colin N.; Ritcey, James A.
2015-09-01
We present a novel method for efficient and physically accurate modeling and simulation of anisoplanatic imaging through the atmosphere; in particular, we present a new space-variant volumetric image blur algorithm. The method is based on the use of physical atmospheric meteorology models, such as vertical turbulence profiles and aerosol/molecular profiles, which in general can be fully spatially varying in three dimensions and evolving in time. The space-variant modeling method relies on the metadata provided by 3D computer graphics modeling and rendering systems to decompose the image into a set of slices which can be treated in an independent but physically consistent manner, achieving simulated image blur effects that are more accurate and realistic than the homogeneous and stationary blurring methods commonly used today. We also present a simple illustrative example of the application of our algorithm, and show that its results and performance agree with the expected relative trends and behavior of the prescribed turbulence-profile physical model used to define the initial spatially varying environmental scenario conditions. We present the details of an efficient Fourier-transform-domain formulation of the SV volumetric blur algorithm, a detailed pseudocode description of the method's implementation, and clarification of some non-obvious technical details.
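The slice-wise Fourier-domain idea can be illustrated compactly. The sketch below is a strong simplification of the paper's algorithm: each depth slice gets its own Gaussian PSF (a stand-in for turbulence-derived PSFs from a Cn^2 profile) applied by FFT, and the blurred slices are composited:

```python
import numpy as np

def gaussian_otf(shape, sigma):
    """OTF of a unit-sum Gaussian PSF, built in wrapped FFT coordinates."""
    fy = np.fft.fftfreq(shape[0]) * shape[0]
    fx = np.fft.fftfreq(shape[1]) * shape[1]
    y, x = np.meshgrid(fy, fx, indexing="ij")
    g = np.exp(-(y ** 2 + x ** 2) / (2.0 * sigma ** 2))
    return np.fft.fft2(g / g.sum())

def blur_slices(slices, sigmas):
    """Blur each depth slice with its own PSF in the Fourier domain,
    then composite. Space-variance comes from the per-slice sigma."""
    out = np.zeros(slices[0].shape)
    for img, sigma in zip(slices, sigmas):
        out += np.real(np.fft.ifft2(np.fft.fft2(img) * gaussian_otf(img.shape, sigma)))
    return out

# Two point sources: a near one (mild blur) and a far one (strong blur).
near = np.zeros((64, 64)); near[20, 20] = 1.0
far = np.zeros((64, 64)); far[40, 40] = 1.0
image = blur_slices([near, far], sigmas=[0.8, 3.0])
```

Because each PSF is normalized, total energy is conserved while the far point is spread much more than the near one, the qualitative behavior a space-variant volumetric blur must reproduce.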
NASA Astrophysics Data System (ADS)
Pokorna, Lucie; Kliegrova, Stanislava; Huth, Radan; Farda, Ales; Stepanek, Petr
2014-05-01
Regional climate models (RCMs) are a useful tool for simulating surface climate with respect to the conditions of individual regions. A realistic representation of surface elements at the local scale is particularly important in terrain with complex orography. The Czech Republic, with mountain chains along its border and both highlands and lowlands in the interior, is a good representative of such a region. Good model performance in reproducing the recent temporal and spatial distribution of temperature and precipitation can enhance our confidence in the changes projected for future climate conditions. In this study, we compare two versions of the RCM ALARO covering a 30-year climate period (1961-1990): a simulation at a common 25-km resolution and a simulation at a very high 6-km resolution. The ALARO-Climate RCM has been developed in recent years at the Czech Hydrometeorological Institute on the basis of the numerical weather prediction model ALADIN and is already operated at five other national meteorological services. Both simulations are driven by the ERA-40 reanalysis and run on the large pan-European integration domain (the "ENSEMBLES / Euro-CORDEX domain"). As the reference dataset we use technically homogenized series based on station time series from the Czech Republic, interpolated to the same network as both model simulations but with the real altitude of the grid points (GriSt). The seasonal and monthly values of mean, maximum and minimum temperature, as well as precipitation amounts, are examined. We display the spatial distribution of biases of seasonal means and the temporal distribution of biases based on monthly values with respect to altitude for both simulations. The results indicate that a higher model resolution tends to improve the simulation of present-day climate, with larger improvements in areas affected by mountains.
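The bias diagnostics described above are straightforward to compute once model and reference series sit on the same grid; a minimal sketch with invented monthly temperatures standing in for one grid point of the simulation and the GriSt reference:

```python
import numpy as np

# Hypothetical monthly mean temperatures (degC) at one matching grid point:
# RCM simulation vs station-based reference (GriSt analogue). Invented values.
model = np.array([-2.1, 0.4, 4.8, 9.9, 15.2, 18.1])
reference = np.array([-1.5, 1.0, 5.5, 10.3, 15.0, 17.6])

bias = model - reference        # per-month bias, the quantity mapped in the study
mean_bias = bias.mean()         # a cold bias shows up as a negative mean
```

In the study these per-month biases are aggregated seasonally and stratified by grid-point altitude; here the toy series yields a small cold bias.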
Sharapov, Vladimir A; Mandelshtam, Vladimir A
2007-10-18
We consider systems undergoing very-low-temperature solid-solid transitions associated with minima of similar energy but different symmetry, separated by a high potential barrier. In such cases the well-known "broken-ergodicity" problem is often difficult to overcome, even using the most advanced Monte Carlo (MC) techniques, including the replica exchange method (REM). The methodology that we develop in this paper is suitable for the above-specified cases and is numerically accurate and efficient. It is based on a new MC move implemented within the REM framework, in which trial points are generated analytically using an auxiliary harmonic superposition system that mimics the true system well at low temperatures. Due to the new move, the low-temperature random walks are able to switch frequently between the relevant potential-energy funnels, leading to efficient sampling. Numerically accurate results are obtained for a number of Lennard-Jones clusters, including those that have so far been treated only by the harmonic superposition approximation (HSA). The latter is believed to provide good estimates for low-temperature equilibrium properties but is manifestly uncontrollable and difficult to validate. The present results provide a good test for the HSA and demonstrate its reliability, particularly for the estimation of solid-solid transition temperatures in most cases considered. PMID:17685597
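The REM framework that the new move plugs into rests on a standard Metropolis criterion for swapping configurations between neighboring temperatures; a minimal sketch of that criterion (the harmonic-superposition trial move itself is not reproduced here):

```python
import math

def swap_accept(e_i, e_j, beta_i, beta_j):
    """Metropolis acceptance probability for exchanging configurations
    between replicas at inverse temperatures beta_i and beta_j:
    min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
    delta = (beta_i - beta_j) * (e_i - e_j)
    return min(1.0, math.exp(delta))

# A cold replica (large beta) holding a higher-energy state swaps with
# certainty with a hotter replica holding a lower-energy state.
p_favorable = swap_accept(e_i=-120.0, e_j=-123.0, beta_i=10.0, beta_j=8.0)
p_unfavorable = swap_accept(e_i=-123.0, e_j=-120.0, beta_i=10.0, beta_j=8.0)
```

The barrier-hopping move in the paper supplements these swaps so that even the coldest replicas can jump between funnels rather than waiting for a diffusive crossing.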
The VIIRS ocean data simulator enhancements and results
NASA Astrophysics Data System (ADS)
Robinson, Wayne D.; Patt, Frederick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.
2011-10-01
The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.
The VIIRS Ocean Data Simulator Enhancements and Results
NASA Technical Reports Server (NTRS)
Robinson, Wayne D.; Patt, Fredrick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.
2011-01-01
The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.
Accurate path integral molecular dynamics simulation of ab-initio water at near-zero added cost
NASA Astrophysics Data System (ADS)
Elton, Daniel; Fritz, Michelle; Soler, José; Fernandez-Serra, Marivi
It is now established that nuclear quantum motion plays an important role in determining water's structure and dynamics. These effects are important to consider when evaluating DFT functionals and attempting to develop better ones for water. The standard way of treating nuclear quantum effects, path integral molecular dynamics (PIMD), multiplies the number of energy/force calculations by the number of beads, which is typically 32. Here we introduce a method whereby PIMD can be incorporated into a DFT molecular dynamics simulation at virtually zero cost. The method is based on the cluster (many-body) expansion of the energy. We first subtract the DFT monomer energies, using a custom DFT-based monomer potential energy surface. The evolution of the PIMD beads is then performed using only the more accurate Partridge-Schwenke monomer energy surface. The DFT calculations are done using the centroid positions. Various bead thermostats can be employed to speed up the sampling of the quantum ensemble. The method bears some resemblance to multiple-timestep algorithms and other schemes used to speed up PIMD with classical force fields. We show that our method correctly captures some of the key effects of nuclear quantum motion on both the structure and dynamics of water. We acknowledge support from DOE Award No. DE-FG02-09ER16052 (D.E.) and DOE Early Career Award No. DE-SC0003871 (M.V.F.S.).
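The energy decomposition behind the near-zero-cost scheme, one expensive full calculation at the centroid plus a cheap, accurate monomer correction on every bead, can be sketched in a toy one-dimensional form. All potentials below are invented quadratic placeholders, not the real DFT or Partridge-Schwenke surfaces, and thermostatting and forces are omitted:

```python
def total_bead_energy(beads, dft_total, dft_mono, accurate_mono):
    """Toy 1-D version of the decomposition: the expensive 'DFT' call is
    made once at the bead centroid (with its monomer part subtracted),
    while the accurate monomer surface is evaluated on every bead."""
    centroid = sum(beads) / len(beads)
    e = dft_total(centroid) - dft_mono(centroid)   # one expensive call
    e += sum(accurate_mono(x) for x in beads) / len(beads)  # cheap per bead
    return e

# Placeholder potentials (assumptions, purely for illustration):
dft_total = lambda x: 0.5 * x * x + 0.1 * x   # stands in for full DFT
dft_mono = lambda x: 0.5 * x * x              # DFT-based monomer PES
accurate = lambda x: 0.49 * x * x             # Partridge-Schwenke-like PES

e = total_bead_energy([0.9, 1.0, 1.1], dft_total, dft_mono, accurate)
```

The point of the construction is visible in the call counts: the bead loop touches only the cheap monomer surfaces, so adding beads adds essentially no DFT cost.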
NASA Astrophysics Data System (ADS)
Dougherty, N. S.; Burnette, D. W.; Holt, J. B.; Matienzo, Jose
1993-07-01
Time-accurate unsteady flow simulations are being performed supporting the SRM T+68 sec pressure 'spike' anomaly investigation. The anomaly occurred in the RH SRM during the STS-54 flight (STS-54B) but not in the LH SRM (STS-54A), causing a momentary thrust mismatch approaching the allowable limit at that time into the flight. Full-motor internal flow simulations using the USA-2D axisymmetric code are in progress for the nominal propellant burn-back geometry and flow conditions at T+68 sec: Pc = 630 psi, gamma = 1.1381, T(sub c) = 6200 R, perfect gas without aluminum particulate. In a cooperative effort with other investigation team members, CFD-derived pressure loading on the NBR and castable inhibitors was used iteratively to obtain the nominal deformed geometry of each inhibitor, and the deformed (bent back) inhibitor geometry was entered into this model. Deformed geometry was computed using structural finite-element models. A solution for the unsteady flow has been obtained for the nominal flow conditions (existing prior to the occurrence of the anomaly) showing sustained standing pressure oscillations at nominally 14.5 Hz in the motor 1L acoustic mode, which flight and static test data confirm to be normally present at this time. Average mass flow discharged from the nozzle was confirmed to be the nominal expected (9550 lbm/sec). The local inlet boundary condition is being perturbed at the location of the presumed reconstructed anomaly as identified by interior ballistics performance specialist team members. A time variation in local mass flow is used to simulate a sudden increase in burning area due to localized propellant grain cracks. The solution will proceed to develop a pressure rise (proportional to the total mass flow rate change squared). The volume-filling time constant (equivalent to 0.5 Hz) comes into play in shaping the rise rate of the developing pressure 'spike' as it propagates at the speed of sound in both directions to the motor head end and nozzle. The
NASA Technical Reports Server (NTRS)
Dougherty, N. S.; Burnette, D. W.; Holt, J. B.; Matienzo, Jose
1993-01-01
Time-accurate unsteady flow simulations are being performed supporting the SRM T+68 sec pressure 'spike' anomaly investigation. The anomaly occurred in the RH SRM during the STS-54 flight (STS-54B) but not in the LH SRM (STS-54A), causing a momentary thrust mismatch approaching the allowable limit at that time into the flight. Full-motor internal flow simulations using the USA-2D axisymmetric code are in progress for the nominal propellant burn-back geometry and flow conditions at T+68 sec: Pc = 630 psi, gamma = 1.1381, T(sub c) = 6200 R, perfect gas without aluminum particulate. In a cooperative effort with other investigation team members, CFD-derived pressure loading on the NBR and castable inhibitors was used iteratively to obtain the nominal deformed geometry of each inhibitor, and the deformed (bent back) inhibitor geometry was entered into this model. Deformed geometry was computed using structural finite-element models. A solution for the unsteady flow has been obtained for the nominal flow conditions (existing prior to the occurrence of the anomaly) showing sustained standing pressure oscillations at nominally 14.5 Hz in the motor 1L acoustic mode, which flight and static test data confirm to be normally present at this time. Average mass flow discharged from the nozzle was confirmed to be the nominal expected (9550 lbm/sec). The local inlet boundary condition is being perturbed at the location of the presumed reconstructed anomaly as identified by interior ballistics performance specialist team members. A time variation in local mass flow is used to simulate a sudden increase in burning area due to localized propellant grain cracks. The solution will proceed to develop a pressure rise (proportional to the total mass flow rate change squared). The volume-filling time constant (equivalent to 0.5 Hz) comes into play in shaping the rise rate of the developing pressure 'spike' as it propagates at the speed of sound in both directions to the motor head end and nozzle. The
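The role of the volume-filling time constant in shaping the spike's rise rate can be illustrated with a first-order filling model; the spike amplitude below is invented, and only the 0.5 Hz corner frequency and the 630 psi nominal chamber pressure come from the abstract:

```python
import math

tau = 1.0 / (2.0 * math.pi * 0.5)   # s, time constant from the 0.5 Hz corner
p0 = 630.0                          # psi, nominal chamber pressure (abstract)
dp_final = 20.0                     # psi, asymptotic spike amplitude (invented)

def pressure(t):
    """First-order response to a step in mass generation:
    p(t) = p0 + dp_final * (1 - exp(-t / tau))."""
    return p0 + dp_final * (1.0 - math.exp(-t / tau))

p_1s = pressure(1.0)   # well into the rise, approaching p0 + dp_final
```

The low-pass character of the chamber volume is what keeps a sudden burning-area increase from appearing instantaneously in head-end pressure: the rise is smeared over a few tenths of a second.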
Crowley, Jason M; Tahir-Kheli, Jamil; Goddard, William A
2015-10-01
It has been established experimentally that Bi2Te3 and Bi2Se3 are topological insulators, with zero band gap surface states exhibiting linear dispersion at the Fermi energy. Standard density functional theory (DFT) methods such as PBE lead to large errors in the band gaps for such strongly correlated systems, while more accurate GW methods are too expensive computationally to apply to the thin films studied experimentally. We show here that the hybrid B3PW91 density functional yields GW-quality results for these systems at a computational cost comparable to PBE. The efficiency of our approach stems from the use of Gaussian basis functions instead of plane waves or augmented plane waves. This remarkable success without empirical corrections of any kind opens the door to computational studies of real chemistry involving the topological surface state, and our approach is expected to be applicable to other semiconductors with strong spin-orbit coupling. PMID:26722872
NASA Astrophysics Data System (ADS)
Mehmani, Yashar; Oostrom, Mart; Balhoff, Matthew T.
2014-03-01
Several approaches have been developed in the literature for solving flow and transport at the pore scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect-mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and validated against micromodel experiments; excellent matches were obtained across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3-D disordered granular media.
Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew
2014-03-20
Several approaches have been developed in the literature for solving flow and transport at the pore-scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore-scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.
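The mixed cell method's flow step amounts to enforcing mass conservation at each pore, with throat conductances linking pore pressures; a minimal sketch on an invented four-pore chain (this is the flow solve only, not the SSM, which additionally splits streamlines within pores to avoid the perfect-mixing assumption):

```python
import numpy as np

# 4 pores in a line: 0 (inlet) - 1 - 2 - 3 (outlet). Conductances invented.
g = {(0, 1): 2.0, (1, 2): 1.0, (2, 3): 2.0}
p_in, p_out = 1.0, 0.0

# Interior pores satisfy sum_j g_ij (p_i - p_j) = 0. Unknowns: p1, p2.
A = np.array([[g[(0, 1)] + g[(1, 2)], -g[(1, 2)]],
              [-g[(1, 2)], g[(1, 2)] + g[(2, 3)]]])
b = np.array([g[(0, 1)] * p_in, g[(2, 3)] * p_out])
p1, p2 = np.linalg.solve(A, b)

# Mass conservation: flow entering pore 1 equals flow leaving pore 2.
q_in = g[(0, 1)] * (p_in - p1)
q_out = g[(2, 3)] * (p2 - p_out)
```

For this series chain the solve reproduces the hand result (total conductance 1/2, so q = 0.5 for a unit pressure drop), and the inlet/outlet fluxes balance exactly.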
NASA Astrophysics Data System (ADS)
Kairn, T.; Crowe, S. B.; Charles, P. H.; Trapp, J. V.
2014-03-01
This study investigates the variation of photon field penumbra shape with initial electron beam diameter, for very narrow beams. A Varian Millennium MLC (Varian Medical Systems, Palo Alto, USA) and a Brainlab m3 microMLC (Brainlab AG, Feldkirchen, Germany) were used, with one Varian iX linear accelerator, to produce fields that were (nominally) 0.20 cm across. Dose profiles for these fields were measured using radiochromic film and compared with the results of simulations completed using BEAMnrc and DOSXYZnrc, where the initial electron beam was set to FWHM = 0.02, 0.10, 0.12, 0.15, 0.20 and 0.50 cm. Increasing the electron-beam FWHM produced increasing occlusion of the photon source by the closely spaced collimator leaves and resulted in blurring of the simulated profile widths from 0.24 to 0.58 cm for the MLC, and from 0.11 to 0.40 cm for the microMLC. Comparison with measurement data suggested that the electron spot size in the clinical linear accelerator was between FWHM = 0.10 and 0.15 cm, encompassing the result of our previous output-factor based work, which identified a FWHM of 0.12 cm. Investigation of narrow-beam penumbra variation has been found to be a useful procedure, with results varying noticeably with linear accelerator spot size and allowing FWHM estimates obtained using other methods to be verified.
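To first order, the spot-size blurring studied here is a convolution of the ideal aperture fluence with the finite electron source; a hedged geometric sketch (scatter, leaf leakage, and source occlusion are ignored, so only the qualitative trend carries over):

```python
import numpy as np

def penumbra_width(fwhm_mm, aperture_mm=2.0, grid_mm=0.01):
    """Convolve an ideal slit fluence profile with a Gaussian source of the
    given FWHM and return the 20%-80% penumbra width on one edge (mm).
    Pure geometry; a stand-in for the full Monte Carlo transport."""
    x = np.arange(-10.0, 10.0, grid_mm)
    slit = (np.abs(x) <= aperture_mm / 2).astype(float)
    sigma = fwhm_mm / 2.355                      # FWHM -> standard deviation
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    profile = np.convolve(slit, kernel, mode="same")
    profile /= profile.max()
    left = x[profile >= 0.2][0]                  # 20% crossing on left edge
    right = x[profile >= 0.8][0]                 # 80% crossing on left edge
    return right - left

# Larger spot FWHM -> broader penumbra, the trend seen in the simulations.
w_small = penumbra_width(1.0)   # 0.10 cm spot
w_large = penumbra_width(1.5)   # 0.15 cm spot
```

This monotonic spot-size dependence is what makes narrow-field penumbra measurements usable as an independent check on FWHM estimates from output factors.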
NASA Astrophysics Data System (ADS)
Zhang, Na; Yao, Jun; Huang, Zhaoqin; Wang, Yueying
2013-06-01
Numerical simulation in naturally fractured media is challenging because of the coexistence of porous media and fractures on multiple scales that need to be coupled. We present a new approach to reservoir simulation that gives accurate resolution of both large-scale and fine-scale flow patterns. Multiscale methods are suitable for this type of modeling, because they enable capturing the large-scale behavior of the solution without resolving all the small features. Dual-porosity models, in view of their strength and simplicity, are mainly used for sugar-cube representations of fractured media. In such a representation, the transfer function between the fracture and the matrix block can be readily calculated for water-wet media. For a mixed-wet system, the evaluation of the transfer function becomes complicated due to the effect of gravity. In this work, we use a multiscale finite element method (MsFEM) for two-phase flow in fractured media using the discrete-fracture model. By combining MsFEM with the discrete-fracture model, we aim towards a numerical scheme that facilitates fractured reservoir simulation without upscaling. MsFEM uses a standard Darcy model to approximate the pressure and saturation on a coarse grid, whereas fine-scale effects are captured through basis functions constructed by solving local flow problems with the discrete-fracture model. The accuracy and robustness of MsFEM are shown through several examples. In the first example, we consider several small fractures in a matrix and compare the results with those obtained by the finite element method. Then, we use the MsFEM in more complex models. The results indicate that the MsFEM is a promising path toward direct simulation of highly resolved geomodels.
Aerosol kinetic code "AERFORM": Model, validation and simulation results
NASA Astrophysics Data System (ADS)
Gainullin, K. G.; Golubev, A. I.; Petrov, A. M.; Piskunov, V. N.
2016-06-01
The aerosol kinetic code "AERFORM" is modified to simulate droplet and ice particle formation in mixed clouds. The splitting method is used to calculate condensation and coagulation simultaneously. The method is calibrated against analytic solutions of the kinetic equations. The condensation kinetic model is based on the cloud particle growth equation and the mass and heat balance equations. The coagulation kinetic model includes Brownian, turbulent and precipitation effects. Realistic values are used for the condensation and coagulation growth of water droplets and ice particles. The model and the simulation results for two full-scale cloud experiments are presented. The simulation model and code may be used autonomously or as an element of another code.
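The splitting approach mentioned above advances the two processes sequentially within each time step; a toy sketch with placeholder growth and merging laws (not the AERFORM condensation/coagulation models, and real kernels depend on particle size):

```python
def step(masses, dt, cond_rate=0.1, coag_frac=0.05):
    """One split kinetics step: condensation sub-step, then coagulation
    sub-step over the same dt. Both laws are invented placeholders."""
    # 1) Condensation: each particle grows (adds mass from the vapor phase).
    masses = [m * (1.0 + cond_rate * dt) for m in masses]
    # 2) Coagulation: a fraction of the smallest particles merge pairwise;
    #    this conserves mass while reducing particle count.
    masses.sort()
    n_merge = int(len(masses) * coag_frac) // 2 * 2
    merged = [masses[i] + masses[i + 1] for i in range(0, n_merge, 2)]
    return merged + masses[n_merge:]

pop = [1.0] * 100
pop = step(pop, dt=1.0)
# Condensation increased total mass; coagulation conserved it but cut count.
```

Calibrating such a scheme against analytic kinetic-equation solutions, as the abstract describes, checks that the split sub-steps reproduce the coupled evolution to acceptable accuracy.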
Experimental and simulation results on multipacting in a 112 MHz QWR injector
Xin, T.; Ben-Zvi, I.; Belomestnykh, S.; Brutus, J. C.; Skaritka, J.; Wu, Q.; Xiao, B.
2015-05-03
The first RF commissioning of the 112 MHz QWR superconducting electron gun was done in late 2014. The coaxial Fundamental Power Coupler (FPC) and Cathode Stalk (stalk) were installed and tested for the first time. During this experiment, we observed several multipacting barriers at different gun voltage levels. Simulation work was done over the same range. The comparison between the experimental observations and the simulation results is presented in this paper. The observations during the test are consistent with the simulation predictions. We were able to overcome most of the multipacting barriers and reach 1.8 MV gun voltage under pulsed mode after several rounds of conditioning.
Preliminary Results from SCEC Earthquake Simulator Comparison Project
NASA Astrophysics Data System (ADS)
Tullis, T. E.; Barall, M.; Richards-Dinger, K. B.; Ward, S. N.; Heien, E.; Zielke, O.; Pollitz, F. F.; Dieterich, J. H.; Rundle, J. B.; Yikilmaz, M. B.; Turcotte, D. L.; Kellogg, L. H.; Field, E. H.
2010-12-01
Earthquake simulators are computer programs that simulate long sequences of earthquakes. If such simulators could be shown to produce synthetic earthquake histories that are good approximations to actual earthquake histories, they could be of great value in helping to anticipate the probabilities of future earthquakes and so could play an important role in helping to make public policy decisions. Consequently, it is important to discover how realistic the earthquake histories resulting from these simulators are. One way to do this is to compare their behavior with the limited knowledge we have from the instrumental, historic, and paleoseismic records of past earthquakes. Another way, though a slow process for large events, is to use them to make predictions about future earthquake occurrence and to evaluate how well the predictions match what occurs. A final approach is to compare the results of many varied earthquake simulators to determine the extent to which the results depend on the details of the approaches and assumptions made by each simulator. Five independently developed simulators, capable of running simulations on complicated geometries containing multiple faults, are in use by some of the authors of this abstract. Although similar in their overall purpose and design, these simulators differ widely from one another in many important details. They require as input for each fault element a value for the average slip rate as well as a value for friction parameters or stress reduction due to slip. They share the use of the boundary element method to compute stress transfer between elements. None use dynamic stress transfer by seismic waves. A notable difference is the assumption different simulators make about the constitutive properties of the faults. The earthquake simulator comparison project is designed to allow comparisons among the simulators and between the simulators and past earthquake history. The project uses sets of increasingly detailed
Hyper-X Stage Separation: Simulation Development and Results
NASA Technical Reports Server (NTRS)
Reubush, David E.; Martin, John G.; Robinson, Jeffrey S.; Bose, David M.; Strovers, Brian K.
2001-01-01
This paper provides an overview of stage separation simulation development and results for NASA's Hyper-X program; a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an account of the development of the current 14 degree of freedom stage separation simulation tool (SepSim) and results from use of the tool in a Monte Carlo analysis to evaluate the risk of failure for the separation event. Results from use of the tool show that there is only a very small risk of failure in the separation event.
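A Monte Carlo risk evaluation of the kind described reduces to sampling dispersed initial conditions, propagating each case, and counting failures. The toy clearance model below is invented purely for illustration; SepSim itself integrates the full 14-degree-of-freedom separation dynamics:

```python
import random

random.seed(1)  # reproducible draws for the sketch

def clearance(rate_error_dps, q_error_psf):
    """Toy separation-clearance margin: shrinks with attitude-rate and
    dynamic-pressure dispersions. Coefficients are invented."""
    return 1.0 - 0.15 * abs(rate_error_dps) - 0.02 * abs(q_error_psf)

n, failures = 100_000, 0
for _ in range(n):
    rate = random.gauss(0.0, 1.0)   # deg/s dispersion (assumed sigma)
    q = random.gauss(0.0, 5.0)      # psf dispersion (assumed sigma)
    if clearance(rate, q) <= 0.0:   # margin exhausted -> failed separation
        failures += 1

risk = failures / n   # estimated probability of failure
```

With these assumed dispersions a failure requires a multi-sigma excursion, so the estimated risk is very small, consistent in spirit with the "very small risk of failure" conclusion above.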
LiF TLD-100 as a Dosimeter in High Energy Proton Beam Therapy-Can It Yield Accurate Results?
Zullo, John R.; Kudchadker, Rajat J.; Zhu, X. Ronald; Sahoo, Narayan; Gillin, Michael T.
2010-04-01
In the region of high-dose gradients at the end of the proton range, the stopping power ratio of the protons undergoes significant changes, allowing for a broad spectrum of proton energies to be deposited within a relatively small volume. Because of the potential linear energy transfer dependence of LiF TLD-100 (thermoluminescent dosimeter), dose measurements made in the distal fall-off region of a proton beam may be less accurate than those made in regions of low-dose gradients. The purpose of this study is to determine the accuracy and precision of dose measured using TLD-100 for a pristine Bragg peak, particularly in the distal fall-off region. All measurements were made along the central axis of an unmodulated 200-MeV proton beam from a Probeat passive beam-scattering proton accelerator (Hitachi, Ltd., Tokyo, Japan) at varying depths along the Bragg peak. Measurements were made using TLD-100 powder flat packs placed in a virtual water slab phantom. The measurements were repeated using a parallel plate ionization chamber. The dose measurements using TLD-100 in a proton beam were accurate to within ±5.0% of the expected dose, as previously seen in our past photon and electron measurements. The ionization chamber and the TLD relative dose measurements agreed well with each other. Absolute dose measurements using TLD agreed with ionization chamber measurements to within ±3.0 cGy for an exposure of 100 cGy. In our study, the differences between the dose measured by the ionization chamber and that measured by TLD-100 were minimal, indicating that the accuracy and precision of measurements made in the distal fall-off region of a pristine Bragg peak are within the expected range. Thus, the rapid change in stopping power ratios at the end of the range should not affect such measurements, and TLD-100 may be used with confidence as an in vivo dosimeter for proton beam therapy.
NASA Astrophysics Data System (ADS)
Ahmed, Mahmoud; Eslamian, Morteza
2015-07-01
Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on the Nu number and the convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces, which create a relative drift or slip velocity between the particles and the base fluid, are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to using wrong thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogeneous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and the Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated) and the maximum heat transfer rate occurs at an inclination angle which varies with the Ra number.
Ahmed, Mahmoud; Eslamian, Morteza
2015-12-01
Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on the Nusselt number and the convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces, which create a relative drift or slip velocity between the particles and the base fluid, are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to the use of wrong thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogeneous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and the Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated), and the maximum heat transfer rate occurs at an inclination angle which varies with the Ra number. PMID:26183389
Advanced Thermal Simulator Testing: Thermal Analysis and Test Results
Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe
2008-01-21
Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the potential development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a liquid metal cooled reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate embedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.
Advanced Thermal Simulator Testing: Thermal Analysis and Test Results
NASA Technical Reports Server (NTRS)
Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe
2008-01-01
Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a SNAP derivative reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate embedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.
Results from Binary Black Hole Simulations in Astrophysics Applications
NASA Technical Reports Server (NTRS)
Baker, John G.
2007-01-01
Present and planned gravitational wave observatories are opening a new astronomical window on the sky. A key source of gravitational waves is the merger of two black holes. The Laser Interferometer Space Antenna (LISA), in particular, is expected to observe these events with signal-to-noise ratios in the thousands. To fully reap the scientific benefits of these observations requires a detailed understanding, based on numerical simulations, of the predictions of General Relativity for the waveform signals. New techniques for simulating binary black hole mergers, introduced two years ago, have led to dramatic advances in applied numerical simulation work. Over the last two years, numerical relativity researchers have made tremendous strides in understanding the late stages of binary black hole mergers. Simulations have been applied to test much of the basic physics of binary black hole interactions, showing robust results for merger waveform predictions and illuminating such phenomena as spin precession. Calculations have shown that merging systems can be kicked at up to 2500 km/s by the thrust from asymmetric emission. Recently, long-lasting simulations of ten or more orbits allow tests of post-Newtonian (PN) approximation results for radiation from the last orbits of the binary's inspiral. Already, analytic waveform models based on PN techniques with incorporated information from numerical simulations may be adequate for observations with current ground-based observatories. As new advances in simulations continue to rapidly improve our theoretical understanding of these systems, it seems certain that high-precision predictions will be available in time for LISA and other advanced ground-based instruments, which are expected to make precision measurements of these sources.
Simulating lightning into the RAMS model: implementation and preliminary results
NASA Astrophysics Data System (ADS)
Federico, S.; Avolio, E.; Petracca, M.; Panegrossi, G.; Sanò, P.; Casella, D.; Dietrich, S.
2014-05-01
This paper shows the results of a tailored version of a previously published methodology, designed to simulate lightning activity, implemented into the Regional Atmospheric Modeling System (RAMS). The method gives the flash density at the resolution of the RAMS grid scale, allowing for a detailed analysis of the evolution of simulated lightning activity. The system is applied in detail to two case studies that occurred over the Lazio Region, in Central Italy. Simulations are compared with the lightning activity detected by the LINET network. The cases refer to two thunderstorms of different intensity. Results show that the model predicts both cases reasonably well and that the lightning activity is well reproduced, especially for the most intense case. However, there are errors in the timing and positioning of the convection, whose magnitude depends on the case study, which are mirrored in timing and positioning errors of the lightning distribution. To assess the performance of the methodology objectively, standard scores are presented for four additional case studies. The scores show the ability of the methodology to simulate the daily lightning activity for different spatial scales and for two different minimum thresholds of flash number density. The performance decreases at finer spatial scales and for higher thresholds. The comparison of simulated and observed lightning activity is an immediate and powerful tool to assess the model's ability to reproduce the intensity and the evolution of the convection. This shows the importance of using computationally efficient lightning schemes, such as the one described in this paper, in forecast models.
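Threshold-based standard scores of the kind mentioned above are typically built from a contingency table of exceedances. A minimal sketch, with hypothetical gridded flash densities and a hypothetical threshold (the paper's actual thresholds and score set are not reproduced here), computing probability of detection (POD) and false alarm ratio (FAR):

```python
def contingency_scores(simulated, observed, threshold):
    """POD and FAR for exceedance of a flash-density threshold,
    comparing simulated and observed values cell by cell."""
    hits = misses = false_alarms = 0
    for sim, obs in zip(simulated, observed):
        sim_yes, obs_yes = sim >= threshold, obs >= threshold
        if sim_yes and obs_yes:
            hits += 1
        elif obs_yes:
            misses += 1
        elif sim_yes:
            false_alarms += 1
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far

# Hypothetical daily flash densities (flashes per grid cell) on 8 cells.
sim = [0, 3, 5, 1, 0, 7, 2, 0]
obs = [0, 4, 6, 0, 1, 5, 0, 0]
print(contingency_scores(sim, obs, threshold=1))
```

Raising the threshold or refining the grid shrinks the population of "yes" cells, which is one way to see why the scores degrade at finer scales and higher thresholds.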
NASA Astrophysics Data System (ADS)
van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis
2014-11-01
In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second order accurate, un-split, conservative, three-dimensional VOF scheme providing second order density fluxes and capable of robust and accurate high density ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.
Kim, Ellen S; Satter, Martin; Reed, Marilyn; Fadell, Ronald; Kardan, Arash
2016-06-01
Glioblastoma multiforme (GBM) is the most common and lethal malignant glioma in adults. Currently, the modality of choice for diagnosing brain tumors is high-resolution magnetic resonance imaging (MRI) with contrast, which provides anatomic detail and localization. Studies have demonstrated, however, that MRI may have limited utility in delineating the full tumor extent precisely. Studies suggest that MR spectroscopy (MRS) can also be used to distinguish high-grade from low-grade gliomas. However, due to operator-dependent variables and the heterogeneous nature of gliomas, the potential for error in diagnostic accuracy with MRS is a concern. Positron emission tomography (PET) imaging with (11)C-methionine (MET) and (18)F-fluorodeoxyglucose (FDG) has been shown to add additional information with respect to tumor grade, extent, and prognosis, based on the premise that biochemical changes precede anatomic changes. Combined PET/MRS is a technique that integrates information from PET in guiding the location for the most accurate metabolic characterization of a lesion via MRS. We describe a case of glioblastoma multiforme in which MRS was initially non-diagnostic for malignancy but, when repeated with PET guidance, demonstrated an elevated choline/N-acetylaspartate (Cho/NAA) ratio in the right parietal mass consistent with a high-grade malignancy. Stereotactic biopsy, followed by PET image-guided resection, confirmed the diagnosis of grade IV GBM. To our knowledge, this is the first reported case of an integrated PET/MRS technique for the voxel placement of MRS. Our findings suggest that integrated PET/MRS may potentially improve diagnostic accuracy in high-grade gliomas. PMID:27122050
Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results
Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.
2015-03-31
The French Alternative Energies and Atomic Energy Commission (CEA) has for years developed the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach, based on ray tracing for transmission beam simulation, and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDEC 2014 RT modelling benchmark with the RT models implemented in the CIVA software.
Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results
NASA Astrophysics Data System (ADS)
Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.
2015-03-01
The French Alternative Energies and Atomic Energy Commission (CEA) has for years developed the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach, based on ray tracing for transmission beam simulation, and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDEC 2014 RT modelling benchmark with the RT models implemented in the CIVA software.
Recent results in analysis and simulation of beam halo
Ryne, Robert D.; Wangler, Thomas P.
1995-09-15
Understanding and predicting beam halo is a major issue for accelerator driven transmutation technologies. If strict beam loss requirements are not met, the resulting radioactivation can reduce the availability of the accelerator facility and may lead to the necessity for time-consuming remote maintenance. Recently there has been much activity related to the core-halo model of halo evolution [1-5]. In this paper we will discuss the core-halo model in the context of constant focusing channels and periodic focusing channels. We will present numerical results based on this model and we will show comparisons with results from large scale particle simulations run on a massively parallel computer. We will also present results from direct Vlasov simulations.
Recent results in analysis and simulation of beam halo
Ryne, R.D.; Wangler, T.P.
1994-09-01
Understanding and predicting beam halo is a major issue for accelerator driven transmutation technologies. If strict beam loss requirements are not met, the resulting radioactivation can reduce the availability of the accelerator facility and may lead to the necessity for time-consuming remote maintenance. Recently there has been much activity related to the core-halo model of halo evolution. In this paper the authors will discuss the core-halo model in the context of constant focusing channels and periodic focusing channels. They will present numerical results based on this model and they will show comparisons with results from large scale particle simulations run on a massively parallel computer. They will also present results from direct Vlasov simulations.
LENS: μLENS Simulations, Analysis, and Results
NASA Astrophysics Data System (ADS)
Rasco, Charles
2013-04-01
Simulations of the Low-Energy Neutrino Spectrometer prototype, μLENS, have been performed in order to benchmark the first measurements of the μLENS detector at the Kimballton Underground Research Facility (KURF). μLENS is a 6x6x6-celled scintillation lattice filled with a linear alkylbenzene based scintillator. We have performed simulations of μLENS using the GEANT4 toolkit. We have measured various radioactive sources, LEDs, and the environmental background radiation at KURF using up to 96 PMTs with a simplified data acquisition system of QDCs and TDCs. In this talk we will demonstrate our understanding of the light propagation in the detector and compare simulation results with μLENS measurements of the various radioactive sources, LEDs, and the environmental background radiation.
NASA Astrophysics Data System (ADS)
Sun, Yuansheng; Periasamy, Ammasi
2010-03-01
Förster resonance energy transfer (FRET) microscopy is commonly used to monitor protein interactions with filter-based imaging systems, which require spectral bleedthrough (or cross talk) correction to accurately measure energy transfer efficiency (E). The double-label (donor+acceptor) specimen is excited at the donor wavelength; the acceptor emission provides the uncorrected FRET signal, and the donor emission (the donor channel) represents the quenched donor (qD), the basis for the E calculation. Our results indicate this is not the most accurate determination of the quenched donor signal, as it fails to consider the donor spectral bleedthrough (DSBT) signals in the qD for the E calculation; our new model addresses this, leading to a more accurate E result. This refinement improves E comparisons made with lifetime and spectral FRET imaging microscopy, as shown here using several genetic (FRET standard) constructs in which cerulean and venus fluorescent proteins are tethered by different amino acid linkers.
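In filter-based FRET, E is computed from the quenched donor relative to the unquenched donor, E = 1 - qD/D. A minimal numerical sketch of the kind of refinement described above, treating donor photons detected outside the donor channel (the DSBT) as part of the quenched-donor signal; the intensities, the DSBT estimate, and the form of the correction are illustrative assumptions, not the authors' calibration procedure:

```python
def fret_efficiency(quenched_donor, unquenched_donor):
    """E = 1 - qD / D for matched donor signals (arbitrary units)."""
    return 1.0 - quenched_donor / unquenched_donor

# Hypothetical intensities (a.u.).
donor_only = 1000.0     # unquenched donor reference (donor-only specimen)
donor_channel = 700.0   # donor channel of the double-label specimen
dsbt_estimate = 50.0    # donor photons detected outside the donor channel

# Naive estimate uses the raw donor-channel signal as qD.
e_uncorrected = fret_efficiency(donor_channel, donor_only)
# Refined estimate counts the DSBT photons as donor emission too,
# so the donor is less quenched than the raw channel suggests.
e_corrected = fret_efficiency(donor_channel + dsbt_estimate, donor_only)
print(e_uncorrected, e_corrected)
```

With these hypothetical numbers, ignoring the DSBT overstates E (0.30 vs 0.25), illustrating why the accounting matters when comparing against lifetime-based measurements.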
NASA Astrophysics Data System (ADS)
Hansen, K. C.; Fougere, N.; Bieler, A. M.; Altwegg, K.; Combi, M. R.; Gombosi, T. I.; Huang, Z.; Rubin, M.; Tenishev, V.; Toth, G.; Tzou, C. Y.
2015-12-01
We have previously published results from the AMPS DSMC (Adaptive Mesh Particle Simulator, Direct Simulation Monte Carlo) model and its characterization of the neutral coma of comet 67P/Churyumov-Gerasimenko through detailed comparison with data collected by the ROSINA/COPS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor) instrument aboard the Rosetta spacecraft [Bieler, 2015]. Results from these DSMC models have been used to create an empirical model of the near-comet coma (<200 km) of comet 67P. The empirical model characterizes the neutral coma in a comet-centered, sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time, and declination. The model is a significant improvement over simpler empirical models, such as the Haser model. While the DSMC results are a more accurate representation of the coma at any given time, the advantage of a mean-state empirical model is its ease and speed of use. One use of such an empirical model is in the calculation of a total cometary coma production rate from the ROSINA/COPS data. The COPS data are in situ measurements of gas density and velocity along the Rosetta spacecraft track. Converting the measured neutral density into a production rate requires knowledge of the neutral gas distribution in the coma. Our empirical model provides this information and therefore allows us to correct for the spacecraft location to calculate a production rate as a function of heliocentric distance. We will present the full empirical model as well as the calculated neutral production rate for the period of August 2014 - August 2015 (perihelion).
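For contrast with the empirical model, the simpler Haser-type description mentioned above gives the coma number density in closed form, n(r) = Q / (4π v r²) · exp(-r / (v τ)). A minimal sketch with illustrative parameter values (not mission-derived); inverting the same formula is the basic idea behind converting an in situ density measurement into a production rate Q:

```python
import math

def haser_density(r, Q, v, tau):
    """Haser-model number density [molecules/m^3] at cometocentric
    distance r [m], for production rate Q [molecules/s], radial outflow
    speed v [m/s], and photodestruction lifetime tau [s]."""
    return Q / (4.0 * math.pi * v * r**2) * math.exp(-r / (v * tau))

# Illustrative values for a weakly active comet (NOT 67P-specific).
Q = 1.0e26    # molecules/s
v = 700.0     # m/s
tau = 1.0e5   # s

for r_km in (10.0, 100.0):
    r = r_km * 1e3
    print(r_km, haser_density(r, Q, v, tau))
```

The limitation the abstract points out is visible here: the Haser form is spherically symmetric, whereas the DSMC-based empirical model retains the dependence on local time and declination.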
Primary simulation and experimental results of a coaxial plasma accelerator
NASA Astrophysics Data System (ADS)
Chen, Z.; Huang, J.; Han, J.; Zhang, Z.; Quan, R.; Wang, L.; Yang, X.; Feng, C.
A coaxial plasma accelerator with a compressing coil has been developed to simulate the impact and erosion effects of space debris on the exposed materials of spacecraft. During its adjustment operation, several measurements were conducted, including the discharge current (by Rogowski coil), the average plasma speed in the coaxial gun (by magnetic coils), and the ejected particle speed (by piezoelectric sensor). In concert with the experiment, a primary physical model was constructed in which only the coaxial gun is taken into account; the compressor coil is not considered because of its unimportant contribution to the plasma ejection speed. The calculated results from the model agree well with the diagnostic results, given some simplifying assumptions. Based on the simulation results, some important suggestions for the optimum design and adjustment of the accelerator are obtained for its later operation.
NASA Astrophysics Data System (ADS)
Yu, D. O.; Kwon, O. J.
2014-06-01
In the present study, aeroelastic simulations of horizontal-axis wind turbine rotor blades were conducted using a coupled CFD-CSD method. The unsteady blade aerodynamic loads and the dynamic blade response due to yaw misalignment and non-uniform sheared wind were investigated. For this purpose, a CFD code solving the RANS equations on unstructured meshes and a FEM-based CSD beam solver were used. The coupling of the CFD and CSD solvers was made by exchanging the data between the two solvers in a loosely coupled manner. The present coupled CFD-CSD method was applied to the NREL 5MW reference wind turbine rotor, and the results were compared with those of CFD-alone rigid blade calculations. It was found that aeroelastic blade deformation leads to a significant reduction of blade aerodynamic loads, and alters the unsteady load behaviours, mainly due to the torsional deformation. The reduction of blade aerodynamic loads is particularly significant at the advancing rotor blade side for yawed flow conditions, and at the upper half of rotor disk where wind velocity is higher due to wind shear.
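The loose coupling described above can be viewed as a fixed-point iteration: the CFD solver returns aerodynamic loads for a given deformed blade shape, the CSD solver returns deflections for given loads, and the exchanged quantity is under-relaxed between coupling steps. A minimal sketch with scalar toy stand-ins for the two solvers (the relaxation factor, the linear compliance, and the load model are illustrative assumptions, not the NREL 5MW blade):

```python
def loosely_coupled_iteration(aero_loads, structural_response,
                              n_steps=10, relax=0.5, tol=1e-6):
    """Loosely coupled CFD-CSD exchange: loads and deflections are
    traded once per coupling step (e.g. once per rotor revolution)
    rather than at every time step, with under-relaxation."""
    deflection = 0.0
    for _ in range(n_steps):
        loads = aero_loads(deflection)               # "CFD": loads on deformed blade
        new_deflection = structural_response(loads)  # "CSD": deform under loads
        if abs(new_deflection - deflection) < tol:
            break
        deflection += relax * (new_deflection - deflection)
    return deflection

# Toy stand-ins: torsional deformation reduces loading (as the paper
# reports), and the structure responds linearly to the load.
aero = lambda defl: 100.0 * (1.0 - 0.3 * defl)  # load drops as the blade twists
struct = lambda loads: 0.01 * loads             # linear compliance
print(loosely_coupled_iteration(aero, struct))
```

Even this scalar caricature shows the paper's qualitative finding: the converged loads on the deformed blade are lower than the rigid-blade (zero-deflection) loads.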
Preliminary Simulation Results of the 23 June, 2001 Peruvian Tsunami
NASA Astrophysics Data System (ADS)
Titov, V. V.; Koshimura, S.; Ortiz, M.; Borrero, J.
2001-12-01
The tsunami generated by the June 23, 2001 Peruvian earthquake devastated a 50-km section of coast near the earthquake epicenter and was recorded on tide gages throughout the Pacific. The coastal town of Camana sustained the most damage, with tsunami waves penetrating up to 1 km inland and runup exceeding 5 m. The extreme local effects and widespread impact motivated modeling efforts to produce a realistic tsunami simulation of this event. Preliminary results were produced by the TIME center using two resident numerical models, TUNAMI-2 and MOST. Both models were used to produce preliminary simulations shortly after the earthquake, and first results were posted on the Internet a day after the event (http://www.pmel.noaa.gov/tsunami/peru_pmel.html). These numerical results aimed to quantify the magnitude of the tsunami and, to a certain extent, to guide the post-tsunami survey. The first simulations have been revised using new data about the seismic source and the results of the post-tsunami survey. Measured inundation distances, flow depths, and runup along topographic transects are used to constrain the inundation model. Preliminary numerical analysis of tsunami inundation will be presented.
Simulating lightning into the RAMS model: implementation and preliminary results
NASA Astrophysics Data System (ADS)
Federico, S.; Avolio, E.; Petracca, M.; Panegrossi, G.; Sanò, P.; Casella, D.; Dietrich, S.
2014-11-01
This paper shows the results of a tailored version of a previously published methodology, designed to simulate lightning activity, implemented into the Regional Atmospheric Modeling System (RAMS). The method gives the flash density at the resolution of the RAMS grid scale, allowing for a detailed analysis of the evolution of simulated lightning activity. The system is applied in detail to two case studies that occurred over the Lazio Region, in Central Italy. Simulations are compared with the lightning activity detected by the LINET network. The cases refer to two thunderstorms of different intensity which occurred, respectively, on 20 October 2011 and on 15 October 2012. The number of flashes simulated (observed) over Lazio is 19435 (16231) for the first case and 7012 (4820) for the second case, and the model correctly reproduces the larger number of flashes that characterized the 20 October 2011 event compared to the 15 October 2012 event. There are, however, errors in the timing and positioning of the convection, whose magnitude depends on the case study, which are mirrored in timing and positioning errors of the lightning distribution. For the 20 October 2011 case study, spatial errors are of the order of a few tens of kilometres and the timing of the event is correctly simulated. For the 15 October 2012 case study, the spatial error in the positioning of the convection is of the order of 100 km and the event has a longer duration in the simulation than in reality. To assess the performance of the methodology objectively, standard scores are presented for four additional case studies. The scores show the ability of the methodology to simulate the daily lightning activity for different spatial scales and for two different minimum thresholds of flash number density. The performance decreases at finer spatial scales and for higher thresholds. The comparison of simulated and observed lightning activity is an immediate and powerful tool to assess the model's ability to reproduce the intensity and the evolution of the convection.
Enhanced vision systems: results of simulation and operational tests
NASA Astrophysics Data System (ADS)
Hecker, Peter; Doehler, Hans-Ullrich
1998-07-01
Today's aircrews have to handle more and more complex situations. The most critical tasks in the field of civil aviation are landing approaches and taxiing; especially under bad weather conditions, the crew has to handle a tremendous workload. Therefore, DLR's Institute of Flight Guidance has developed a concept for an enhanced vision system (EVS), which increases the performance and safety of the aircrew and provides comprehensive situational awareness. In previous contributions some elements of this concept have been presented, e.g. the 'Simulation of Imaging Radar for Obstacle Detection and Enhanced Vision' by Doehler and Bollmeyer, 1996. The present paper gives an overview of DLR's enhanced vision concept and research approach, which consists of two main components: simulation and experimental evaluation. In a first step, the simulation environment for enhanced vision research with a pilot in the loop is introduced. An existing fixed-base flight simulator is supplemented by real-time simulations of imaging sensors, i.e. imaging radar and infrared. By applying methods of data fusion, an enhanced vision display is generated combining different levels of information, such as terrain model data, processed images acquired by sensors, aircraft state vectors, and data transmitted via datalink. The second part of this contribution presents some experimental results. In cooperation with Daimler Benz Aerospace Sensorsystems Ulm, a test van and a test aircraft were equipped with a prototype of an imaging millimeter wave radar. This sophisticated HiVision radar is up to now one of the most promising sensors for all-weather operations. Images acquired by this sensor are shown, as well as results of data fusion processes based on digital terrain models. The contribution is concluded by a short video presentation.
Key results from SB8 simulant flowsheet studies
Koopman, D. C.
2013-04-26
Key technically reviewed results are presented here in support of the Defense Waste Processing Facility (DWPF) acceptance of Sludge Batch 8 (SB8). This report summarizes results from simulant flowsheet studies of the DWPF Chemical Process Cell (CPC). Results include: Hydrogen generation rate for the Sludge Receipt and Adjustment Tank (SRAT) and Slurry Mix Evaporator (SME) cycles of the CPC on a 6,000 gallon basis; Volume percent of nitrous oxide, N2O, produced during the SRAT cycle; Ammonium ion concentrations recovered from the SRAT and SME off-gas; and, Dried weight percent solids (insoluble, soluble, and total) measurements and density.
NASA Astrophysics Data System (ADS)
West, J. B.; Ehleringer, J. R.; Cerling, T.
2006-12-01
Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, in turn, has led to increased urgency in the scientific community to try to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterns over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and, as such, if modeled correctly over Earth's surface, allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment driven with spatially continuous global rasters of precipitation and climate normals largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across spatial scales.
NASA Astrophysics Data System (ADS)
Crivellini, A.
2016-02-01
This paper deals with the numerical performance of a sponge layer as a non-reflective boundary condition. This technique is well known and widely adopted, but only recently have the reasons for a sponge failure been recognised, in an analysis by Mani. For multidimensional problems, the ineffectiveness of the method is due to self-reflections of the sponge occurring when it interacts with an oblique acoustic wave. Based on his theoretical investigations, Mani gives some useful guidelines for implementing effective sponge layers. However, in our opinion, some practical indications are still missing from the current literature. Here, an extensive numerical study of the performance of this technique is presented. Moreover, we analyse a reduced sponge implementation characterised by undamped partial differential equations for the velocity components. The main aim of this paper is the determination of the minimal width of the layer, as well as of the corresponding strength, required to obtain a reflection error of no more than a few per cent of that observed when solving the same problem on the same grid, but without employing the sponge layer term. For this purpose, a test case of computational aeroacoustics, the single airfoil gust response problem, has been addressed in several configurations. As a direct consequence of our investigation, we present a well documented and highly validated reference solution for the far-field acoustic intensity, a result that is not well established in the literature. Lastly, proof of the accuracy of an algorithm for coupling sub-domains solved by the linear and non-linear Euler governing equations is given. This result is exploited here to adopt a linear-based sponge layer even in a non-linear computation.
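The sponge-layer idea is a damping term dq/dt = -σ(x)(q - q_ref) that is active only in a buffer zone near the boundary; the layer width and strength are exactly the tuning parameters studied in the paper. A minimal 1D sketch with an illustrative polynomial ramp for σ (the ramp shape, width, and strength here are assumptions for illustration, not the paper's recommended values):

```python
import math

def sponge_sigma(x, x_start, width, strength, power=2):
    """Damping coefficient: zero outside the layer, ramping up
    polynomially from the layer entrance to the boundary."""
    if x < x_start:
        return 0.0
    return strength * ((x - x_start) / width) ** power

def apply_sponge(q, q_ref, xs, x_start, width, strength, dt):
    """One explicit step of the damping term dq/dt = -sigma(x) (q - q_ref);
    stability of this simple step requires sigma * dt <= 1."""
    return [qi - sponge_sigma(x, x_start, width, strength) * dt * (qi - qr)
            for qi, qr, x in zip(q, q_ref, xs)]

# Outgoing disturbance on a 1D grid, sponge over the last 20% of the domain.
xs = [i * 0.01 for i in range(101)]       # x in [0, 1]
q = [math.sin(10 * x) for x in xs]        # disturbance field
q_ref = [0.0] * len(xs)                   # target (quiescent) state
q_new = apply_sponge(q, q_ref, xs, x_start=0.8, width=0.2,
                     strength=50.0, dt=0.01)
print(q_new[50], q_new[100])
```

Mani's point about oblique waves does not show up in 1D; it is the multidimensional interaction with such a layer that produces the self-reflections the paper quantifies.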
Frolov, Andrey I
2015-05-12
Accurate calculation of solvation free energies (SFEs) is a fundamental problem of theoretical chemistry. In this work we perform a careful validation of the theory of solutions in energy representation (ER method) developed by Matubayasi et al. [J. Chem. Phys. 2000, 113, 6070-6081] for SFE calculations in supercritical solvents. This method can be seen as a bridge between molecular simulations and classical (not quantum) density functional theory (DFT) formulated in energy representation. We performed extensive calculations of the SFEs of organic molecules of different chemical natures in pure supercritical CO2 (sc-CO2) and in sc-CO2 with the addition of 6 mol % of ethanol, acetone, and n-hexane as cosolvents. We show that the ER method reproduces SFE data calculated by a method free of theoretical approximations (the Bennett acceptance ratio) with a mean absolute error of only 0.05 kcal/mol, while requiring an order of magnitude less computational resources. We also show that the quality of ER calculations should be carefully monitored, since a lack of sampling can result in a considerable bias in the predictions. The present calculations reproduce the trends in the cosolvent-induced solubility enhancement factors observed in experimental data. Thus, we think that molecular simulations coupled with the ER method can be used for quick calculations of the effect of variations in temperature, pressure, and cosolvent concentration on the SFE, and hence on the solubility of bioactive compounds in supercritical fluids. This should dramatically reduce the burden of experimental work in optimizing the solvency of supercritical solvents. PMID:26574423
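The reference method named above, Bennett's acceptance ratio (BAR), estimates the free-energy difference ΔF by solving a self-consistency condition on forward and reverse work samples. A minimal sketch for the equal-sample-size case in reduced units (β = 1 by default), solved by bisection; the synthetic work values in the usage line are illustrative, not simulation data:

```python
import math

def bar_delta_f(w_forward, w_reverse, beta=1.0, tol=1e-9):
    """Bennett acceptance ratio estimate of the free-energy difference
    from forward (0->1) and reverse (1->0) work values, assuming equal
    sample sizes. Solves the BAR self-consistency equation
        sum_i f(beta*(W_F,i - dF)) = sum_j f(beta*(W_R,j + dF)),
    where f is the Fermi function, by bisection (the left-hand side
    minus the right-hand side is monotonically increasing in dF)."""
    def imbalance(df):
        fwd = sum(1.0 / (1.0 + math.exp(beta * (w - df))) for w in w_forward)
        rev = sum(1.0 / (1.0 + math.exp(beta * (w + df))) for w in w_reverse)
        return fwd - rev
    lo, hi = -50.0, 50.0  # bracket for dF in reduced units
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if imbalance(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic, perfectly symmetric work samples: the estimate is 2.0
# by construction.
print(bar_delta_f([2.0, 2.0], [-2.0, -2.0]))
```

BAR is exact in the limit of adequate sampling but needs simulations at both end states, which is the computational cost the ER method avoids.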
Preliminary Results of Laboratory Simulation of Magnetic Reconnection
NASA Astrophysics Data System (ADS)
Zhang, Shou-Biao; Xie, Jin-Lin; Hu, Guang-Hai; Li, Hong; Huang, Guang-Li; Liu, Wan-Dong
2011-10-01
In the Linear Magnetized Plasma (LMP) device of the University of Science and Technology of China, by driving parallel currents on two parallel copper plates, we have realized magnetic reconnection in laboratory plasma. With emissive probes, we have measured the parallel (along the axial direction) electric field in the process of reconnection and verified the dependence of the reconnection current on passing particles. Using a magnetic probe, we have measured the time evolution of the magnetic flux; the measured result shows no pileup of magnetic flux, consistent with the result of numerical simulation.
Airflow Hazard Visualization for Helicopter Pilots: Flight Simulation Study Results
NASA Technical Reports Server (NTRS)
Aragon, Cecilia R.; Long, Kurtis R.
2005-01-01
Airflow hazards such as vortices or low level wind shear have been identified as a primary contributing factor in many helicopter accidents. US Navy ships generate airwakes over their decks, creating potentially hazardous conditions for shipboard rotorcraft launch and recovery. Recent sensor developments may enable the delivery of airwake data to the cockpit, where visualizing the hazard data may improve safety and possibly extend ship/helicopter operational envelopes. A prototype flight-deck airflow hazard visualization system was implemented on a high-fidelity rotorcraft flight dynamics simulator. Experienced helicopter pilots, including pilots from all five branches of the military, participated in a usability study of the system. Data were collected both objectively from the simulator and subjectively from post-test questionnaires. Results of the data analysis are presented, demonstrating a reduction in crash rate and other trends that illustrate the potential of airflow hazard visualization to improve flight safety.
BWR Full Integral Simulation Test (FIST). Phase I test results
Hwang, W S; Alamgir, M; Sutherland, W A
1984-09-01
A new full height BWR system simulator has been built under the Full-Integral-Simulation-Test (FIST) program to investigate the system responses to various transients. The test program consists of two test phases. This report provides a summary, discussions, highlights and conclusions of the FIST Phase I tests. Eight matrix tests were conducted in FIST Phase I. These tests investigated the large break, small break and steamline break LOCAs, as well as natural circulation and power transients. The results and governing phenomena of each test have been evaluated and discussed in detail in this report. One of the FIST program objectives is to assess the TRAC code by comparison with test data. Two pretest predictions made with TRACB02 are presented and compared with test data in this report.
The route to MBxNyCz molecular wheels: II. Results using accurate functionals and basis sets
NASA Astrophysics Data System (ADS)
Güthler, A.; Mukhopadhyay, S.; Pandey, R.; Boustani, I.
2014-04-01
Applying ab initio quantum chemical methods, we investigated molecular wheels composed of metal and light atoms. The high-quality basis sets 6-31G*, TZVP, and cc-pVTZ, as well as the exchange and non-local correlation functionals B3LYP, BP86 and B3P86, were used. The ground-state energies and structures of cyclic planar and pyramidal clusters TiBn (for n = 3-10) were computed. In addition, the relative stability and electronic structures of the molecular wheels TiBxNyCz (for x, y, z = 0-10) and MBnC10-n (for n = 2 to 5 and M = Sc to Zn) were determined. This paper constitutes a follow-up to the previous study of Boustani and Pandey [Solid State Sci. 14 (2012) 1591], in which the calculations were carried out at the HF-SCF/STO3G/6-31G level of theory to determine the initial stability and properties. The results show that there is a competition between the 2D planar and the 3D pyramidal TiBn clusters (for n = 3-8). Different isomers of TiB10 clusters were also studied, and a structural transition of the 3D isomer into the 2D wheel is presented. Substituting boron in TiB10 by carbon and/or nitrogen atoms enhances the stability and leads to the most stable wheel, TiB3C7. Furthermore, the computations show that Sc, Ti and V at the center of the molecular wheels are energetically favored over the other transition metal atoms of the first row.
Analysis of Numerical Simulation Results of LIPS-200 Lifetime Experiments
NASA Astrophysics Data System (ADS)
Chen, Juanjuan; Zhang, Tianping; Geng, Hai; Jia, Yanhui; Meng, Wei; Wu, Xianming; Sun, Anbang
2016-06-01
Accelerator grid structural and electron backstreaming failures are the most important factors affecting an ion thruster's lifetime. During the thruster's operation, charge-exchange xenon (CEX) ions are generated from collisions between plasma and neutral atoms. Those CEX ions frequently bombard the accelerator grid's barrel and wall, causing failures of the grid system. In order to validate whether the 20 cm Lanzhou Ion Propulsion System (LIPS-200) satisfies the application requirement of China's communication satellite platform for North-South Station Keeping (NSSK), this study analyzed the measured depth of the pit/groove on the accelerator grid's wall, as well as the variation of the aperture diameter, and estimated the operating lifetime of the ion thruster. Differing from previous methods, the experimental results after 5500 h of accumulated operation of the LIPS-200 ion thruster are presented first. Then, based on these results, theoretical analysis and numerical calculations were performed to predict the on-orbit lifetime of LIPS-200, enabling a more accurate calculation of the reliability and analysis of the failure modes of the ion thruster. The results indicated that the predicted lifetime of LIPS-200 is about 13218.1 h, which satisfies the required lifetime of 11000 h.
Modeling results for a linear simulator of a divertor
Hooper, E.B.; Brown, M.D.; Byers, J.A.; Casper, T.A.; Cohen, B.I.; Cohen, R.H.; Jackson, M.C.; Kaiser, T.B.; Molvik, A.W.; Nevins, W.M.; Nilson, D.G.; Pearlstein, L.D.; Rognlien, T.D.
1993-06-23
A divertor simulator, IDEAL, has been proposed by S. Cohen to study the difficult power-handling requirements of the tokamak program in general and the ITER program in particular. Projections of the power density in the ITER divertor reach ~1 GW/m² along the magnetic fieldlines and > 10 MW/m² on a surface inclined at a shallow angle to the fieldlines. These power densities are substantially greater than can be handled reliably on the surface, so new techniques are required to reduce the power density to a reasonable level. Although the divertor physics must be demonstrated in tokamaks, a linear device could contribute to the development because of its flexibility, the easy access to the plasma and to tested components, and long pulse operation (essentially cw). However, a decision to build a simulator requires not just the recognition of its programmatic value, but also confidence that it can meet the required parameters at an affordable cost. Accordingly, as reported here, it was decided to examine the physics of the proposed device, including kinetic effects resulting from the intense heating required to reach the plasma parameters, and to conduct an independent cost estimate. The detailed role of the simulator in a divertor program is not explored in this report.
NASA Astrophysics Data System (ADS)
Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning
2016-02-01
Topography and biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from those obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and to enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing heterogeneities.
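The coarse-to-fine mapping idea behind PODMM can be sketched in miniature: stack coarse and fine training snapshots, extract a joint POD basis by SVD, then reconstruct a new fine field from its coarse counterpart alone. The synthetic data and mode count below are illustrative assumptions, not the PAWS+CLM setup:

```python
import numpy as np

# Joint-POD coarse-to-fine mapping sketch. Each column of Xc/Xf is one
# training solution on the coarse/fine grid; the hidden linear relation
# A stands in for the physics linking the two resolutions.
rng = np.random.default_rng(0)
m, nc, nf = 20, 50, 400                     # snapshots, coarse/fine sizes
Xc = rng.standard_normal((nc, m))           # coarse training snapshots
A = rng.standard_normal((nf, nc))           # hidden coarse->fine relation
Xf = A @ Xc                                 # fine training snapshots

mean_c, mean_f = Xc.mean(axis=1), Xf.mean(axis=1)
X = np.vstack([Xc - mean_c[:, None], Xf - mean_f[:, None]])
U, s, _ = np.linalg.svd(X, full_matrices=False)
r = m - 1                                   # retained POD modes
Uc, Uf = U[:nc, :r], U[nc:, :r]

def downscale(yc):
    """Approximate the fine-grid field from a coarse-grid field alone."""
    # Fit modal coefficients on the coarse block, reconstruct fine block.
    a, *_ = np.linalg.lstsq(Uc, yc - mean_c, rcond=None)
    return Uf @ a + mean_f
```

For a new coarse solution lying in the span of the training set, the reconstruction is exact up to round-off; in practice an error estimator, as in the abstract, monitors how far a new solution falls outside that span.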
Lobato, I; Van Dyck, D
2015-09-01
The main features and the GPU implementation of the MULTEM program are presented and described. This new program performs accurate and fast multislice simulations by including a higher order expansion of the multislice solution of the high energy Schrödinger equation, the correct subslicing of the three-dimensional potential, and top-bottom surfaces. The program implements different kinds of simulation for CTEM, STEM, ED, PED, CBED, ADF-TEM and ABF-HC with proper treatment of the spatial and temporal incoherences. The multislice approach described here treats the specimen as amorphous material, which allows a straightforward implementation of the frozen phonon approximation. The generalized transmission function for each slice is calculated when it is needed and then discarded. This allows us to perform large simulations that can include millions of atoms while keeping the computer memory requirements at a reasonable level. PMID:25965576
Ovchinnikov, Victor; Nam, Kwangho; Karplus, Martin
2016-08-25
A method is developed to obtain simultaneously free energy profiles and diffusion constants from restrained molecular simulations in diffusive systems. The method is based on low-order expansions of the free energy and diffusivity as functions of the reaction coordinate. These expansions lead to simple analytical relationships between simulation statistics and model parameters. The method is tested on 1D and 2D model systems; its accuracy is found to be comparable to or better than that of the existing alternatives, which are briefly discussed. An important aspect of the method is that the free energy is constructed by integrating its derivatives, which can be computed without the need for overlapping sampling windows. The implementation of the method in any molecular simulation program that supports external umbrella potentials (e.g., CHARMM) requires modification of only a few lines of code. As a demonstration of its applicability to realistic biomolecular systems, the method is applied to model the α-helix ↔ β-sheet transition in a 16-residue peptide in implicit solvent, with the reaction coordinate provided by the string method. Possible modifications of the method are briefly discussed; they include generalization to multidimensional reaction coordinates [in the spirit of the model of Ermak and McCammon (Ermak, D. L.; McCammon, J. A. J. Chem. Phys. 1978, 69, 1352-1360)], a higher-order expansion of the free energy surface, applicability in nonequilibrium systems, and a simple test for Markovianity. In view of the small overhead of the method relative to standard umbrella sampling, we suggest its routine application in cases where umbrella potential simulations are appropriate. PMID:27135391
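The "integrate the derivatives, no window overlap needed" idea can be illustrated with an umbrella-integration-style sketch (an illustration of the principle, not the authors' exact estimator): under a harmonic restraint U_i(x) = k(x - x_i)^2/2, the mean restraint force k(x_i - ⟨x⟩_i) estimates dF/dx at the window's mean position, and the profile follows by quadrature:

```python
import numpy as np

# Reconstruct a free-energy profile from per-window restraint statistics.
# centers: restraint centers x_i; window_means: sampled <x> per window;
# k: harmonic spring constant. The mean restraint force k*(x_i - <x>_i)
# estimates dF/dx at <x>_i, so F is recovered by trapezoid integration
# of derivatives, with no overlap between sampling windows required.
def profile_from_restraints(centers, window_means, k):
    centers = np.asarray(centers, dtype=float)
    x = np.asarray(window_means, dtype=float)   # evaluation points
    dFdx = k * (centers - x)                    # mean force per window
    F = np.concatenate(([0.0],
        np.cumsum(0.5 * (dFdx[1:] + dFdx[:-1]) * np.diff(x))))
    return x, F                                 # F is zeroed at x[0]
```

For a harmonic test free energy F(x) = Kx²/2, the window means are ⟨x⟩_i = k x_i/(K+k) and the reconstruction is exact, since the mean force is linear in x.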
Simulation results of corkscrew motion in DARHT-II
Chan, K. D.; Ekdahl, C. A.; Chen, Y. J.; Hughes, T. P.
2003-01-01
DARHT-II, the second axis of the Dual-Axis Radiographic Hydrodynamics Test Facility, is being commissioned. DARHT-II is a linear induction accelerator producing 2-microsecond electron beam pulses at 20 MeV and 2 kA. These 2-microsecond pulses will be chopped into four short pulses to produce time resolved x-ray images. Radiographic application requires the DARHT-II beam to have excellent beam quality, and it is important to study various beam effects that may cause quality degradation of a DARHT-II beam. One of the beam dynamics effects under study is 'corkscrew' motion, in which the beam centroid is deflected off axis due to misalignments of the solenoid magnets. The deflection depends on the beam energy variation, which is expected to vary by ±0.5% during the 'flat-top' part of a beam pulse. Such chromatic aberration will result in broadening of the beam spot size. In this paper, we report simulation results of our study of corkscrew motion in DARHT-II. Sensitivities of beam spot size to various accelerator parameters and the strategy for minimizing corkscrew motion are described. Measured magnet misalignment is used in the simulation.
Electron transport in the solar wind -results from numerical simulations
NASA Astrophysics Data System (ADS)
Smith, Håkan; Marsch, Eckart; Helander, Per
A conventional fluid approach is in general insufficient for a correct description of electron transport in weakly collisional plasmas such as the solar wind. The classical Spitzer-Härm theory is not valid when the Knudsen number (the mean free path divided by the length scale of temperature variation) is greater than ~10^-2. Despite this, the heat transport from Spitzer-Härm theory is widely used in situations with relatively long mean free paths. For realistic Knudsen numbers in the solar wind, the electron distribution function develops suprathermal tails, and the departure from a local Maxwellian can be significant at the energies which contribute the most to the heat flux moment. To accurately model heat transport, a kinetic approach is therefore more adequate. Different techniques have been used previously, e.g. particle simulations [Landi, 2003], spectral methods [Pierrard, 2001], the so-called 16-moment method [Lie-Svendsen, 2001], and approximation by kappa functions [Dorelli, 2003]. In the present study we solve the Fokker-Planck equation for electrons in one spatial dimension and two velocity dimensions. The distribution function is expanded in Laguerre polynomials in energy, and a finite difference scheme is used to solve the equation in the spatial dimension and the velocity pitch angle. The ion temperature and density profiles are assumed to be known, but the electric field is calculated self-consistently to guarantee quasi-neutrality. The kinetic equation is of a two-way diffusion type, for which the distribution of particles entering the computational domain at both ends of the spatial dimension must be specified, leaving the outgoing distributions to be calculated. The long mean free path of the suprathermal electrons has the effect that the details of the boundary conditions play an important role in determining the particle and heat fluxes as well as the electric potential drop across the domain. Dorelli, J. C., and J. D. Scudder, J. D
Diamond-NICAM-SPRINTARS: downscaling and simulation results
NASA Astrophysics Data System (ADS)
Uchida, J.
2012-12-01
As part of the initiative "Research Program on Climate Change Adaptation" (RECCA), which investigates how predicted large-scale climate change may affect local weather and examines possible atmospheric hazards that cities may encounter due to such climate change, so as to guide policy makers in implementing new environmental measures, the "Development of Seamless Chemical AssimiLation System and its Application for Atmospheric Environmental Materials" (SALSA) project is funded by the Japanese Ministry of Education, Culture, Sports, Science and Technology and is focused on creating a regional (local) scale assimilation system that can accurately recreate and predict the transport of carbon dioxide and other air pollutants. In this study, a regional version of the next-generation global cloud-resolving model NICAM (Non-hydrostatic ICosahedral Atmospheric Model) (Tomita and Satoh, 2004) is run together with the transport model SPRINTARS (Spectral Radiation Transport Model for Aerosol Species) (Takemura et al, 2000) and the chemical transport model CHASER (Sudo et al, 2002) to simulate aerosols across urban cities (over the Kanto region including metropolitan Tokyo). The presentation will mainly be on the "Diamond-NICAM" (Figure 1), a regional climate model version of the global climate model NICAM, and its dynamical downscaling methodologies. Originally, the global NICAM can be described as twenty identical equilateral triangular panels covering the entire globe, with grid points at the corners of those panels; to increase the resolution (called the "global-level" in NICAM), additional points are added at the middle of existing adjacent point pairs, so the number of panels increases fourfold with each increment of one global-level. On the other hand, a Diamond-NICAM uses only two of those initial triangular panels, thus covering only part of the globe. In addition, NICAM uses an adaptive mesh scheme and its grid size can gradually decrease, as the grid
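The fourfold refinement described above implies a simple closed form for the panel count at each global-level. A small sketch (the mean-spacing formula is an illustrative assumption, treating the Earth as a sphere of radius 6371 km; it is not NICAM's own resolution definition):

```python
import math

# Icosahedral-grid bookkeeping implied by the abstract: each increment of
# one "global-level" quadruples the number of triangular panels.
EARTH_RADIUS_KM = 6371.0

def n_panels(glevel):
    """Number of triangular panels of a global icosahedral grid."""
    return 20 * 4 ** glevel

def mean_spacing_km(glevel):
    """Rough mean grid spacing: sqrt of the mean panel area on the sphere."""
    area = 4.0 * math.pi * EARTH_RADIUS_KM ** 2 / n_panels(glevel)
    return math.sqrt(area)
```

Each global-level increment thus quadruples the panel count and halves the mean spacing, which is why a two-panel Diamond-NICAM over a limited region is so much cheaper than the full global grid at the same level.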
Assaraf, Roland; Caffarel, Michel; Kollias, A C
2011-04-15
We present a method to efficiently evaluate small energy differences of two close N-body systems by employing stochastic processes having a stability versus chaos property. By using the same random noise, energy differences are computed from close trajectories without reweighting procedures. The approach is presented for quantum systems but can be applied to classical N-body systems as well. It is exemplified with diffusion Monte Carlo simulations for long chains of hydrogen atoms and molecules for which it is shown that the long-standing problem of computing energy derivatives is solved. PMID:21568537
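The central trick, driving two close systems with the same random noise so that their energy difference is computed from correlated trajectories, can be shown in a toy setting. The harmonic "energies" below are illustrative assumptions, not the diffusion Monte Carlo estimators of the paper:

```python
import numpy as np

# Common random numbers for small energy differences: the same noise
# drives both close systems, so their sampled energies are strongly
# correlated and the difference estimator has far smaller variance than
# with independent samples.
rng = np.random.default_rng(1)
n = 100000
x = rng.standard_normal(n)                      # shared noise

energy = lambda x, k: 0.5 * k * x**2            # toy "energy" per sample
diff_correlated = energy(x, 1.01) - energy(x, 1.00)
diff_independent = (energy(rng.standard_normal(n), 1.01)
                    - energy(rng.standard_normal(n), 1.00))
```

Both estimators target the same mean difference (0.005 kT in these units), but the common-noise version has a standard deviation smaller by roughly two orders of magnitude, which is the "stability versus chaos" advantage exploited in the abstract.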
Simulation Results for Airborne Precision Spacing along Continuous Descent Arrivals
NASA Technical Reports Server (NTRS)
Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.; Baxley, Brian T.
2008-01-01
This paper describes the results of a fast-time simulation experiment and a high-fidelity simulator validation with merging streams of aircraft flying Continuous Descent Arrivals through generic airspace to a runway at Dallas-Ft Worth. Aircraft made small speed adjustments based on an airborne-based spacing algorithm, so as to arrive at the threshold exactly at the assigned time interval behind their Traffic-To-Follow. The 40 aircraft were initialized at different altitudes and speeds on one of four different routes, and then merged at different points and altitudes while flying Continuous Descent Arrivals. This merging and spacing using flight deck equipment and procedures to augment or implement Air Traffic Management directives is called Flight Deck-based Merging and Spacing, an important subset of a larger Airborne Precision Spacing functionality. This research indicates that Flight Deck-based Merging and Spacing initiated while at cruise altitude and well prior to the Terminal Radar Approach Control entry can significantly contribute to the delivery of aircraft at a specified interval to the runway threshold with a high degree of accuracy and at a reduced pilot workload. Furthermore, previously documented work has shown that using a Continuous Descent Arrival instead of a traditional step-down descent can save fuel, reduce noise, and reduce emissions. Research into Flight Deck-based Merging and Spacing is a cooperative effort between government and industry partners.
Some results on ethnic conflicts based on evolutionary game simulation
NASA Astrophysics Data System (ADS)
Qin, Jun; Yi, Yunfei; Wu, Hongrun; Liu, Yuhang; Tong, Xiaonian; Zheng, Bojin
2014-07-01
The force of ethnic separatism, essentially originating from the negative effects of ethnic identity, damages the stability and harmony of multiethnic countries. In order to eliminate the foundation of ethnic separatism and establish a harmonious ethnic relationship, some scholars have proposed that ethnic harmony could be promoted by popularizing civic identity. However, this viewpoint has been discussed only from a philosophical perspective and still lacks the support of scientific evidence. Because ethnic groups and ethnic identity are products of evolution, and ethnic identity is a parochialism strategy from the perspective of game theory, this paper proposes an evolutionary game simulation model to study the relationship between civic identity and ethnic conflict based on evolutionary game theory. The simulation results indicate that: (1) the ratio of individuals with civic identity has a negative association with the frequency of ethnic conflicts; (2) ethnic conflict will not die out by killing all ethnic members once and for all, and it also cannot be reduced by forcible pressure, i.e., forcibly increasing the ratio of individuals with civic identity; (3) the average frequency of conflicts can stay at a low level if civic identity is promoted periodically and persistently.
HOMs simulation and measurement results of IHEP02 cavity
NASA Astrophysics Data System (ADS)
Zheng, Hong-Juan; Zhai, Ji-Yuan; Zhao, Tong-Xian; Gao, Jie
2015-11-01
In accelerator RF cavities, there exists not only the fundamental mode, which is used to accelerate the beam, but also higher order modes (HOMs). The higher order modes excited by the beam can seriously affect beam quality, especially the modes with higher R/Q. For the 1.3 GHz low-loss 9-cell superconducting cavity, a candidate for the ILC high-gradient cavity, the properties of the higher order modes have not been studied carefully. Based on the existing low-loss cavity, IHEP designed and developed a large-grain-size 1.3 GHz low-loss 9-cell superconducting cavity (the IHEP02 cavity). The higher order mode coupler of IHEP02 follows the TESLA coupler design; as a result of limitations of the mechanical design, the distance between the higher order mode coupler and the end cell is larger than in the TESLA cavity. This paper reports measured results of higher order modes in the IHEP02 1.3 GHz low-loss 9-cell superconducting cavity. Using different methods, the external quality factors Qe of the dangerous mode passbands have been obtained. The results are compared with TESLA cavity results. R/Q values of the first three passbands have also been obtained by simulation and compared with the results for the TESLA cavity. Supported by the Knowledge Innovation Project of the Chinese Academy of Sciences
SLAC E144 Plots, Simulation Results, and Data
The 1997 E144 experiments at the Stanford Linear Accelerator Center (SLAC) utilized extremely high laser intensities and collided huge groups of photons together so violently that electron-positron pairs, actual particles of matter and antimatter, were briefly created. Instead of matter exploding into heat and light, light actually became matter. That accomplishment opened a new path into the exploration of the interactions of electrons and photons, or quantum electrodynamics (QED). The E144 information at this website includes Feynman diagrams, simulation results, and data files. See also a series of frames showing the E144 laser colliding with a beam electron and producing an electron-positron pair at http://www.slac.stanford.edu/exp/e144/focpic/focpic.html, and lists of collaborators' papers, theses, and a page of press articles.
Wastewater neutralization control based in fuzzy logic: Simulation results
Garrido, R.; Adroer, M.; Poch, M.
1997-05-01
Neutralization is a technique widely used as part of wastewater treatment processes. Due to the importance of this technique, extensive study has been devoted to its control. However, industrial wastewater neutralization control is a procedure with many problems (nonlinearity of the titration curve, variable buffering, changes in loading) and, despite the efforts devoted to this subject, the problem has not been totally solved. In this paper, the authors present the development of a controller based on fuzzy logic (FLC). In order to study its effectiveness, it has been compared, by simulation, with other advanced controllers (using identification techniques and adaptive control algorithms with reference models) when faced with various types of wastewater with different buffer capacities or when changes in the concentration of the acid present in the wastewater take place. Results obtained show that the FLC can be considered a powerful alternative for wastewater neutralization processes.
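A fuzzy-logic controller of the kind compared above can be sketched minimally: fuzzify the pH error with triangular membership functions, apply a small rule base, and defuzzify with a weighted average. The membership functions and rules here are our own illustrative assumptions, not those of the paper's FLC:

```python
import numpy as np

# Minimal fuzzy-logic controller sketch for pH neutralization.
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def flc_dose(ph_error):
    """Map pH error (setpoint - measured) to a normalized reagent change."""
    # Fuzzify the error into negative / zero / positive fuzzy sets.
    mu = {
        "neg":  tri(ph_error, -6.0, -3.0, 0.0),
        "zero": tri(ph_error, -1.0,  0.0, 1.0),
        "pos":  tri(ph_error,  0.0,  3.0, 6.0),
    }
    # Rule consequents (singletons): add acid, hold, add base.
    out = {"neg": -1.0, "zero": 0.0, "pos": +1.0}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values()) + 1e-12
    return num / den    # weighted-average (centroid) defuzzification
```

The appeal for neutralization is that the rule base handles the steep, nonlinear titration curve with qualitative knowledge rather than an explicit process model.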
Governance of complex systems: results of a sociological simulation experiment.
Adelt, Fabian; Weyer, Johannes; Fink, Robin D
2014-01-01
Social sciences have discussed the governance of complex systems for a long time. The following paper tackles the issue by means of experimental sociology, in order to investigate the performance of different modes of governance empirically. The simulation framework developed is based on Esser's model of sociological explanation as well as on Kroneberg's model of frame selection. The performance of governance has been measured by means of three macro and two micro indicators. Surprisingly, central control mostly performs better than decentralised coordination. However, the results not only depend on the mode of governance; there is also a relation between performance and the composition of actor populations, which has not yet been investigated sufficiently. Practitioner Summary: Practitioners can gain insights into the functioning of complex systems and learn how to better manage them. Additionally, they are provided with indicators to measure the performance of complex systems. PMID:24456093
NASA Astrophysics Data System (ADS)
Baiardi, Alberto; Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien
2014-06-01
Two parallel theories including Franck-Condon, Herzberg-Teller and Duschinsky (i.e., mode-mixing) effects, and allowing different approximations for the description of the excited-state PES, have been developed in order to simulate realistic, asymmetric electronic spectral line shapes taking into account the vibrational structure: the so-called sum-over-states or time-independent (TI) method, and the alternative time-dependent (TD) approach, which exploits the properties of the Fourier transform. The integrated TI-TD procedure, included within a general-purpose QM code [1,2], allows the computation of one-photon absorption, fluorescence, phosphorescence, electronic circular dichroism, circularly polarized luminescence and resonance Raman spectra. Combining both approaches, which use a single set of starting data, makes it possible to profit from their respective advantages and minimize their respective limits: the time-dependent route automatically includes all vibrational states and, possibly, temperature effects, while the time-independent route allows single vibronic transitions to be identified and assigned. Interpretation, analysis and assignment of experimental spectra based on integrated TI-TD vibronic computations will be illustrated for challenging cases of medium-sized open-shell systems in the gas and condensed phases, with inclusion of the leading anharmonic effects. 1. V. Barone, A. Baiardi, M. Biczysko, J. Bloino, C. Cappelli, F. Lipparini, Phys. Chem. Chem. Phys., 14, 12404 (2012) 2. A. Baiardi, V. Barone, J. Bloino, J. Chem. Theory Comput., 9, 4097-4115 (2013)
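The TD route can be illustrated in miniature: a vibronic line shape is the Fourier transform of a time correlation function. For a single displaced harmonic mode with Huang-Rhys factor S (an illustrative textbook model, not the full TI-TD machinery of the abstract), the correlation function is C(t) = exp(S(e^{-iω0·t} − 1)) and its transform recovers the Franck-Condon progression I_n = e^{-S} S^n/n!:

```python
import numpy as np

# TD route in miniature: FFT of the autocorrelation function of a single
# displaced harmonic mode yields the Franck-Condon band intensities.
S, N = 1.0, 64
t = 2.0 * np.pi * np.arange(N) / N          # one vibrational period (w0 = 1)
C = np.exp(S * (np.exp(-1j * t) - 1.0))     # autocorrelation function
intensities = np.fft.ifft(C).real           # index n -> |0> to |n> intensity
```

This is the sense in which the TD route "automatically includes all vibrational states": every band of the progression comes out of one transform, whereas the TI route would enumerate the transitions one by one.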
NASA Astrophysics Data System (ADS)
Lemoine, X.; Sriram, S.; Kergen, R.
2011-05-01
ArcelorMittal continuously develops new high-performance advanced high-strength steel (AHSS) grades for the automotive industry to improve weight reduction and passive safety. The wide market introduction of AHSS raises a new challenge for manufacturers in terms of material models for forming prediction, especially formability and springback. The relatively low uniform elongation, high UTS and low forming limit curve of these AHSS may cause difficulties in forming simulations. One of these difficulties is the consequence of the relatively low uniform elongation for the parameter identification of isotropic hardening models. Different experimental tests allow large plastic strain levels to be reached (hydraulic bulge test, stack compression test, shear test, etc.). After a description of how the flow curve is determined in these experimental tests, a comparison of the different flow curves is made for different steel grades. The ArcelorMittal identification protocol for hardening models is based only on stress-strain curves determined in uniaxial tension. Experimental tests in which large plastic strain levels are reached are used to validate our identification protocol and to recommend some hardening models. Finally, the influence of isotropic hardening models and yield loci on forming prediction for AHSS steels is presented.
NASA Astrophysics Data System (ADS)
Tirupathi, S.; Schiemenz, A. R.; Liang, Y.; Parmentier, E.; Hesthaven, J.
2013-12-01
The style and mode of melt migration in the mantle are important to the interpretation of basalts erupted on the surface. Both grain-scale diffuse porous flow and channelized melt migration have been proposed. To better understand the mechanisms and consequences of melt migration in a heterogeneous mantle, we have undertaken a numerical study of reactive dissolution in an upwelling and viscously deformable mantle in which the solubility of pyroxene increases upwards. Our setup is similar to that described in [1], except that we use a larger domain size in 2D and 3D and a new numerical method. To enable efficient simulations in 3D through parallel computing, we developed a high-order accurate numerical method for the magma dynamics problem using discontinuous Galerkin methods and constructed the problem using the numerical library deal.II [2]. Linear stability analyses of the reactive dissolution problem reveal three dynamically distinct regimes [3], and the simulations reported in this study were run in the stable regime and in the unstable wave regime, where small perturbations in porosity grow periodically. The wave regime is more relevant to melt migration beneath mid-ocean ridges but computationally more challenging. Extending the 2D simulations in the stable regime of [1] to 3D using various combinations of sustained perturbations in porosity at the base of the upwelling column (which may result from a veined mantle), we show that the geometry and distribution of dunite channels and high-porosity melt channels are highly correlated with the inflow perturbation through superposition. Strong nonlinear interactions among compaction, dissolution, and upwelling give rise to porosity waves and high-porosity melt channels in the wave regime. These compaction-dissolution waves have well organized but time-dependent structures in the lower part of the simulation domain. High-porosity melt channels nucleate along nodal lines of the porosity waves, growing downwards. The wavelength scales
Mid-Holocene permafrost: Results from CMIP5 simulations
NASA Astrophysics Data System (ADS)
Liu, Yeyi; Jiang, Dabang
2016-01-01
The distribution of frozen ground and active layer thickness in the Northern Hemisphere during the mid-Holocene (MH), and differences with respect to the preindustrial (PI), were investigated here using the Coupled Model Intercomparison Project Phase 5 (CMIP5) models. Two typical diagnostic methods, based respectively on soil temperature (Ts-based; a direct method) and air temperature (Ta-based; an indirect method), were employed to classify the categories and extents of frozen ground. In relation to orbitally induced changes in climate, and in turn in freezing and thawing indices, the MH permafrost extent was 20.5% (1.8%) smaller than the PI, whereas seasonally frozen ground increased by 9.2% (0.8%) in the Northern Hemisphere according to the Ts-based (Ta-based) method. Active layer thickness became larger, but by ≤ 1.0 m in most permafrost areas during the MH. Intermodel disagreement remains in areas near the permafrost boundary in both the Ts-based and Ta-based results, with the former showing less agreement among the CMIP5 models because of larger variation in the abilities of land models to represent permafrost processes. However, both methods were able to reproduce the relatively degraded MH permafrost and increased active layer thickness (although with smaller magnitudes) seen in data reconstructions. Disparity between simulation and reconstruction was mainly found in the seasonally frozen ground regions at low to middle latitudes, where the reconstruction suggested a reduction of the seasonally frozen ground extent to the north, whereas the simulation demonstrated a slight expansion to the south for the MH compared to the PI.
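An air-temperature-based ("Ta-based") frozen-ground diagnostic of the kind used above can be sketched with annual freezing/thawing degree-day indices and the surface frost number of Nelson and Outcalt (1987); whether the CMIP5 study applies exactly this criterion is our assumption here:

```python
import numpy as np

# Ta-based frozen-ground classification sketch from daily mean air
# temperature: freezing/thawing degree-day indices feed a "frost number",
# with F >= 0.5 taken to indicate permafrost (Nelson & Outcalt style).
def frost_number(daily_tair_c):
    """daily_tair_c: one year of daily mean air temperature in deg C."""
    ddf = -np.sum(np.minimum(daily_tair_c, 0.0))   # freezing degree-days
    ddt = np.sum(np.maximum(daily_tair_c, 0.0))    # thawing degree-days
    return np.sqrt(ddf) / (np.sqrt(ddf) + np.sqrt(ddt) + 1e-12)

def classify(daily_tair_c):
    f = frost_number(daily_tair_c)
    if f >= 0.5:
        return "permafrost"           # freezing dominates the annual cycle
    elif f > 0.0:
        return "seasonally frozen"    # some freezing, but thawing dominates
    return "unfrozen"
```

Applied to each model grid cell and climate state (MH vs. PI), such an index lets the frozen-ground categories and their extents be compared across models without requiring well-resolved soil temperatures.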
NASA Astrophysics Data System (ADS)
Reile, E.; Radons, U.; Hennecke, D. K.
1985-09-01
The development of advanced compressors for modern aero-engines requires detailed knowledge of the transient thermal behavior of the rotor disks to enable accurate prediction of rotor life and, additionally, of the thermal growth of the rotor for the evaluation of tip clearances. In the quest for longer life and higher reliability of the parts as well as reduced clearances even at transient conditions, the designer has to be able to influence the thermal behavior of the rotor. A very effective way is to vent small amounts of air through the rotor cavities. The design of such a vented rotor is presented. The main emphasis is placed on a detailed description of a test rig specially built for this purpose. The testing was carried out under simulated engine conditions for a wide range of parameters. The results are compared with those obtained with a theoretical model derived from fundamental tests at the University of Sussex, where heat transfer in rotating cavities is investigated. Good agreement is observed. Some final tests were done in an engine. The results also exhibit good agreement with the rig results under simulated conditions, when the proper dimensionless parameters are considered, confirming the validity of the simulation.
Stellar hydrodynamical modeling of dwarf galaxies: simulation methodology, tests, and first results
NASA Astrophysics Data System (ADS)
Vorobyov, Eduard I.; Recchi, Simone; Hensler, Gerhard
2015-07-01
Context. In spite of enormous progress and brilliant achievements in cosmological simulations, they still lack numerical resolution or physical processes to simulate dwarf galaxies in sufficient detail. Accurate numerical simulations of individual dwarf galaxies are thus still in demand. Aims: We aim to improve available numerical techniques to simulate individual dwarf galaxies. In particular, we aim to (i) study in detail the coupling between stars and gas in a galaxy, exploiting the so-called stellar hydrodynamical approach; and (ii) study for the first time the chemodynamical evolution of individual galaxies starting from self-consistently calculated initial gas distributions. Methods: We present a novel chemodynamical code for studying the evolution of individual dwarf galaxies. In this code, the dynamics of gas is computed using the usual hydrodynamics equations, while the dynamics of stars is described by the stellar hydrodynamics approach, which solves for the first three moments of the collisionless Boltzmann equation. The feedback from stellar winds and dying stars is followed in detail. In particular, a novel and detailed approach has been developed to trace the aging of various stellar populations, which facilitates an accurate calculation of the stellar feedback depending on the stellar age. The code has been accurately benchmarked, allowing us to provide a recipe for improving the code performance on the Sedov test problem. Results: We build initial equilibrium models of dwarf galaxies that take gas self-gravity into account and present different levels of rotational support. Models with high rotational support (and hence high degrees of flattening) develop prominent bipolar outflows; a newly-born stellar population in these models is preferentially concentrated to the galactic midplane. Models with little rotational support blow away a large fraction of the gas and the resulting stellar distribution is extended and diffuse. Models that start from non
NASA Astrophysics Data System (ADS)
Byun, Jaeseung; Bodony, Daniel; Pantano, Carlos
2014-11-01
Improved order-of-accuracy discretizations often require careful consideration of their numerical stability. We report on new high-order finite difference schemes using Summation-By-Parts (SBP) operators along with the Simultaneous-Approximation-Term (SAT) boundary condition treatment for first- and second-order spatial derivatives with variable coefficients. In particular, we present a highly accurate operator for SBP-SAT-based approximations of second-order derivatives with variable coefficients for Dirichlet and Neumann boundary conditions. These terms are responsible for approximating the physical dissipation of kinetic and thermal energy in a simulation, and contain grid metrics when the grid is curvilinear. Analysis using the Laplace transform method shows that strong stability is ensured with Dirichlet boundary conditions, while weaker stability is obtained for Neumann boundary conditions. Furthermore, the benefits of the scheme are shown in the direct numerical simulation (DNS) of a Mach 1.5 compressible turbulent supersonic jet using curvilinear grids and skew-symmetric discretization. In particular, we show that the improved methods allow minimization of the numerical filter often employed in these simulations, and we discuss the qualities of the simulation.
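As an illustration of the SBP idea only (not the paper's variable-coefficient second-derivative operator), here is a minimal sketch of the classical second-order SBP first-derivative operator D = H⁻¹Q, verifying the defining property Q + Qᵀ = B that makes discrete integration by parts exact:

```python
import numpy as np

def sbp_d1(n, h):
    """Classical 2nd-order SBP first-derivative operator D = H^{-1} Q,
    with diagonal norm H and Q satisfying Q + Q^T = B = diag(-1,0,...,0,1)."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0        # halved boundary weights
    Q = np.zeros((n, n))
    for i in range(n - 1):               # skew-symmetric interior stencil
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0] = -0.5                       # one-sided boundary closures
    Q[-1, -1] = 0.5
    return np.linalg.inv(H) @ Q, H, Q

n, h = 11, 0.1
D, H, Q = sbp_d1(n, h)
x = np.linspace(0.0, 1.0, n)

# SBP property: summation by parts mimics integration by parts exactly.
B = np.zeros((n, n)); B[0, 0] = -1.0; B[-1, -1] = 1.0
assert np.allclose(Q + Q.T, B)
# The operator differentiates linear data exactly, boundaries included.
assert np.allclose(D @ x, np.ones(n))
```

The SAT treatment in the paper then imposes Dirichlet/Neumann data weakly through penalty terms added to the right-hand side, so that an energy estimate (and hence stability) can be proved with the same H norm.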
Short-time dynamics of isotropic and anisotropic Bak-Sneppen model: extensive simulation results
NASA Astrophysics Data System (ADS)
Tirnakli, Ugur; Lyra, Marcelo L.
2004-12-01
In this work, the short-time dynamics of the isotropic and anisotropic versions of the Bak-Sneppen (BS) model has been investigated using the standard damage spreading technique. Since the system sizes attained in our simulations are larger than the ones employed in previous studies, our results for the dynamic scaling exponents are expected to be more accurate than the results of the existing literature. The obtained scaling exponents of both versions of the BS model are found to be greater than the ones given in previous works. These findings are in agreement with the recent claim of Cafiero et al. (Eur. Phys. J. B7 (1999) 505). Moreover, it is found that the short-time dynamics of the anisotropic model is only slightly affected by finite-size effects and the reported estimate of α≃0.53 can be considered as a good estimate of the true exponent in the thermodynamic limit.
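For readers unfamiliar with the model, a minimal sketch of the isotropic Bak-Sneppen update rule (the dynamics underlying the damage-spreading runs above; the size and step count here are illustrative, far smaller than the study's):

```python
import numpy as np

def bak_sneppen(n=64, steps=2000, seed=0):
    """Minimal isotropic Bak-Sneppen model on a ring: at each step the site
    with the lowest fitness and its two neighbours receive fresh random
    fitness values drawn uniformly from [0, 1)."""
    rng = np.random.default_rng(seed)
    f = rng.random(n)
    for _ in range(steps):
        i = int(np.argmin(f))
        for j in (i - 1, i, i + 1):      # periodic boundaries
            f[j % n] = rng.random()
    return f

f = bak_sneppen()
# After many updates the system self-organises to a critical state in which
# almost all fitness values lie above a threshold (~0.66 for large systems).
print(float(f.min()), float(np.median(f)))
```

The anisotropic variant replaces the minimum site and, e.g., its two right neighbours instead; damage spreading then runs two such replicas with identical random numbers from slightly different configurations and tracks their Hamming distance over time.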
Airborne ICESat-2 simulator (MABEL) results from Greenland
NASA Astrophysics Data System (ADS)
Neumann, T.; Markus, T.; Brunt, K. M.; Walsh, K.; Hancock, D.; Cook, W. B.; Brenner, A. C.; Csatho, B. M.; De Marco, E.
2012-12-01
The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) is a next-generation laser altimeter designed to continue key observations of sea ice freeboard, ice sheet elevation change, vegetation canopy height, earth surface elevation and sea surface heights. Scheduled for launch in mid-2016, ICESat-2 will collect data between 88 degrees north and south using a high-repetition-rate (10 kHz) laser operating at 532 nm and a photon-counting detection strategy. Our airborne simulator, the Multiple Altimeter Beam Experimental Lidar (MABEL), uses a similar photon-counting measurement strategy and operates at 532 nm (16 beams) and 1064 nm (8 beams) to collect data similar to what we expect for ICESat-2. The comparison between frequencies allows for studies of possible penetration of green light into water or snow. MABEL collects more spatially dense data than ICESat-2 (2 cm along-track vs. 70 cm along-track for ICESat-2) and has a smaller footprint (2 m nominal diameter vs. 10 m nominal diameter for ICESat-2), requiring geometric and radiometric scaling to relate MABEL data to simulated ICESat-2 data. We based MABEL out of Keflavik, Iceland, during April 2012, and collected ~100 hours of data from 20 km altitude over a variety of targets. MABEL collected sea ice data over the Nares Strait and off the east coast of Greenland, the latter flight in coordination with NASA's Operation IceBridge, which collected ATM data along the same track within 90 minutes of MABEL data collection. MABEL flew a variety of lines over Greenland in the southwest, the Jakobshavn region, and over the ice sheet interior, including 4 hours of coincident data with Operation IceBridge in southwest Greenland. MABEL also flew a number of calibration sites, including corner cubes in Svalbard, Summit Station (where a GPS survey of the surface elevation was collected within an hour of our overflight), and well-surveyed targets in Iceland and western Greenland. In this presentation, we present an overview of
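In its simplest form, the geometric scaling mentioned above amounts to aggregating MABEL's finely sampled along-track returns into ICESat-2-scale bins; the sketch below illustrates that step only (function name and synthetic data are hypothetical, and the radiometric scaling is not modelled):

```python
import numpy as np

def degrade_along_track(x, elev, bin_size=0.70):
    """Average finely sampled (~2 cm) along-track elevations into coarser
    (~70 cm) bins to mimic ICESat-2 along-track sampling."""
    edges = np.arange(x.min(), x.max() + bin_size, bin_size)
    idx = np.digitize(x, edges) - 1
    out = np.full(len(edges) - 1, np.nan)
    for b in np.unique(idx):
        out[b] = elev[idx == b].mean()   # mean elevation per coarse bin
    centers = edges[:-1] + bin_size / 2.0
    return centers, out

x = np.arange(0.0, 7.0, 0.02)            # 2 cm sampling over 7 m of track
elev = 100.0 + 0.01 * x                  # gently sloping synthetic surface
centers, coarse = degrade_along_track(x, elev)
print(len(coarse))                       # 7 m / 0.70 m = 10 bins
```

A full simulation would additionally thin the photon counts to ICESat-2's expected per-shot signal and background rates (the radiometric part of the scaling).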
RFI in hybrid loops - Simulation and experimental results.
NASA Technical Reports Server (NTRS)
Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.
1972-01-01
A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte Carlo simulation is used to show that the HPLL can be superior to conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.
2015-01-01
discrepancies, future studies should seek to employ vessel-appropriate material models to simulate the response of diseased femoral tissue in order to obtain the most accurate numerical results. PMID:25602515
Results of a Flight Simulation Software Methods Survey
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce
1995-01-01
A ten-page questionnaire was mailed to members of the AIAA Flight Simulation Technical Committee in the spring of 1994. The survey inquired about various aspects of developing and maintaining flight simulation software, as well as a few questions dealing with characterization of each facility. As of this report, 19 completed surveys (out of 74 sent out) have been received. This paper summarizes those responses.
Techniques and results of tokamak-edge simulation
Smith, G.R.; Brown, P.N.; Rensink, M.E.; Rognlien, T.D.; Campbell, R.B.; Knoll, D.A.; McHugh, P.R.
1994-05-20
This paper describes recent development of the UEDGE code in three important areas. (1) Non-orthogonal grids allow accurate treatment of experimental geometries in which divertor plates intersect flux surfaces at oblique angles. (2) Radiating impurities are included by means of one or more continuity equations that describe transport and sources and sinks due to ionization and recombination processes. (3) Advanced iterative methods that reduce storage and execution time allow us to find fully converged solutions of larger problems (i.e., finer grids). Sample calculations are presented to illustrate these developments.
Comparison of theoretical and simulated performance results for sloppy-slotted Aloha signaling
NASA Astrophysics Data System (ADS)
Crozier, Stewart N.
Sloppy-slotted Aloha refers to a form of random access signaling which allows slotted packets, with random timing errors, to spill over into adjacent slots. For the North American mobile satellite (MSAT) system, the two-way propagation delay variation is on the order of 40 milliseconds. The higher the signaling rate, or the shorter the packet length, the wider the timing error distribution, measured in packet lengths. With 192 transmission bits per packet, a 40 millisecond timing error corresponds to 2 packet lengths at 9600 bits per second. Approximate theoretical and simulated performance results are presented and compared for a mixed Gaussian discrete timing error distribution model. This model allows a fraction of the users to have corrected timing. It is found that the theoretical approximations are generally quite accurate. Where differences are observed, the theoretical approximations are always found to be pessimistic. The conclusion is that the theoretical approximations can be used with confidence as a conservative measure of performance.
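The mechanism described above lends itself to a small Monte-Carlo sketch (a simplified stand-in for the paper's mixed Gaussian-discrete model, with illustrative parameters): packets whose timing errors spill into adjacent slots collide whenever their one-packet-length intervals overlap, and a fraction of users have corrected (zero) timing error.

```python
import numpy as np

rng = np.random.default_rng(1)

def sloppy_slotted_throughput(G, sigma, p_corrected, n_slots=20000):
    """Monte-Carlo sketch of sloppy-slotted Aloha: Poisson arrivals at
    offered load G packets/slot, Gaussian slot-timing errors of std sigma
    (in packet lengths), with a fraction p_corrected of users having
    perfect timing.  A packet succeeds only if no other packet overlaps
    its one-packet-length interval."""
    n = rng.poisson(G * n_slots)                 # total packets offered
    slots = rng.integers(0, n_slots, n).astype(float)
    err = rng.normal(0.0, sigma, n)
    err[rng.random(n) < p_corrected] = 0.0       # timing-corrected users
    start = np.sort(slots + err)
    ok = np.ones(n, bool)                        # collide if neighbouring
    gap = np.diff(start)                         # starts are < 1 apart
    ok[1:] &= gap >= 1.0
    ok[:-1] &= gap >= 1.0
    return ok.sum() / n_slots                    # successes per slot

for G in (0.25, 0.5, 1.0):
    print(G, round(sloppy_slotted_throughput(G, sigma=0.5, p_corrected=0.5), 3))
```

With sigma = 0 and p_corrected = 1 this reduces to ideal slotted Aloha (throughput G·e⁻ᴳ); increasing sigma degrades throughput toward the unslotted case, which is the trade-off the paper quantifies.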
Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L.
1995-10-01
This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all of the 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed and found a statistically significant factor-of-two bias on average.
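The factor-of-two comparison described above reduces to checking the mean predicted HEP against the observed failure fraction; a sketch follows, using the task counts from the study but a hypothetical ASEP mean (0.022, chosen only to mirror the reported factor-of-two bias; the report's actual HEP values are not given in this summary):

```python
from math import sqrt

# Observed simulator-examination data from the study
n_tasks, n_failed = 4071, 45
p_obs = n_failed / n_tasks                  # observed failure rate, ~0.011

# Hypothetical mean HEP predicted by the ASEP procedure (illustrative)
p_asep = 0.022

bias = p_asep / p_obs                       # factor-of-two conservatism
# Normal-approximation z-test of the observed count against the ASEP mean:
z = (n_failed - n_tasks * p_asep) / sqrt(n_tasks * p_asep * (1 - p_asep))
print(round(p_obs, 4), round(bias, 2), round(z, 2))
```

A strongly negative z (here around -4.8) is what "statistically significant factor of two bias" means operationally: far fewer failures occurred than the predicted HEPs imply.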
NASA Technical Reports Server (NTRS)
1978-01-01
A hybrid-computer simulation of the over-the-wing turbofan engine was constructed to develop the dynamic design of the control. This engine and control system includes a full-authority digital electronic control using compressor stator reset to achieve fast thrust response and a modified Kalman filter to correct for sensor failures. Fast thrust response for powered-lift operations and accurate, fast-responding, steady-state control of the engine are provided. Simulation results for throttle bursts from 62 to 100 percent takeoff thrust predict that the engine will accelerate from 62 to 95 percent takeoff thrust in one second.
Albedo in the ATIC Experiment: Results of Measurements and Simulation
NASA Technical Reports Server (NTRS)
Sokolskaya, N. V.; Adams, J. H., Jr.; Ahn, H. S.; Bashindzhagyan, G. L.; Batkov, K. E.; Chang, J.; Christl, M.; Fazely, A. R.; Ganel, O.; Gunasingha, R. M.
2004-01-01
Characteristics of albedo, or backscatter current, providing a 'background' for calorimeter experiments in high energy cosmic rays are analyzed. The comparison of experimental data obtained in the flights of the ATIC spectrometer is made with simulations performed using the GEANT 3.21 code. The influence of the backscatter on charge resolution in the ATIC experiment is discussed.
SOME RESULTS OF A SIMULATION OF AN URBAN SCHOOL DISTRICT.
ERIC Educational Resources Information Center
SISSON, ROGER L.
A COMPUTER PROGRAM WHICH SIMULATES THE GROSS OPERATIONAL FEATURES OF A LARGE URBAN SCHOOL DISTRICT IS DESIGNED TO PREDICT SCHOOL DISTRICT POLICY VARIABLES ON A YEAR-TO-YEAR BASIS. THE MODEL EXPLORES THE CONSEQUENCES OF VARYING SUCH DISTRICT PARAMETERS AS STUDENT POPULATION, STAFF, COMPUTER EQUIPMENT, NUMBERS AND SIZES OF SCHOOL BUILDINGS, SALARY,…
SIMULATION OF DNAPL DISTRIBUTION RESULTING FROM MULTIPLE SOURCES
A three-dimensional and three-phase (water, NAPL and gas) numerical simulator, called NAPL, was employed to study the interaction between DNAPL (PCE) plumes in variably saturated porous media. Several model verification tests have been performed, including a series of 2-D labo...
Fast Plasma Instrument for MMS: Data Compression Simulation Results
NASA Technical Reports Server (NTRS)
Barrie, A.; Adrian, Mark L.; Yeh, P.-S.; Winkert, G. E.; Lobell, J. V.; Vinas, A.F.; Simpson, D. J.; Moore, T. E.
2008-01-01
Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors and eight (8) identical ion sensors and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6 deg x 180 deg fields-of-view (FOV) are set 90 deg apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45 deg x 180 deg fan about its nominal viewing (0 deg deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft and the combination of the eight electron/ion sensors, employing aperture steering, image the full-sky every 30-ms (electrons) and 150-ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5-Mbs(exp -1) of electron data while the DIS generates 1.1-Mbs(exp -1) of ion data, yielding an FPI total data rate of 7.6-Mbs(exp -1). The FPI electron/ion data is collected by the IDPU then transmitted to the Central Data Instrument Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mbs(exp -1). Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be discussed include: review of compression algorithm; data quality
Fast Plasma Instrument for MMS: Data Compression Simulation Results
NASA Astrophysics Data System (ADS)
Barrie, A.; Adrian, M. L.; Yeh, P.; Winkert, G.; Lobell, J.; Vinas, A. F.; Simpson, D. G.
2009-12-01
Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors and eight (8) identical ion sensors and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° x 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° x 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft and the combination of the eight electron/ion sensors, employing aperture steering, image the full-sky every 30-ms (electrons) and 150-ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5-Mb s-1 of electron data while the DIS generates 1.1-Mb s-1 of ion data yielding an FPI total data rate of 7.6-Mb s-1. The FPI electron/ion data is collected by the IDPU then transmitted to the Central Data Instrument Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present updated simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data as well as the FPI-DIS ion data. Compression analysis is based upon a seed of re-processed Cluster
Fast Plasma Instrument for MMS: Data Compression Simulation Results
NASA Astrophysics Data System (ADS)
Barrie, A. C.; Adrian, M. L.; Yeh, P.; Winkert, G. E.; Lobell, J. V.; Viňas, A. F.; Simpson, D. G.; Moore, T. E.
2008-12-01
Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors and eight (8) identical ion sensors and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° × 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° × 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft and the combination of the eight electron/ion sensors, employing aperture steering, image the full-sky every 30-ms (electrons) and 150-ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5-Mb s-1 of electron data while the DIS generates 1.1-Mb s-1 of ion data yielding an FPI total data rate of 7.6-Mb s-1. The FPI electron/ion data is collected by the IDPU then transmitted to the Central Data Instrument Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be
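The arithmetic of the telemetry budget quoted in these three abstracts is worth making explicit: the two sensor rates sum to 7.6 Mb/s, which against the 1.5 Mb/s CIDP allocation fixes the minimum average compression ratio the IDPU must sustain (a back-of-envelope sketch, not flight software):

```python
# FPI telemetry budget sketch (values from the abstract, in Mb/s)
des_rate = 6.5            # electron data from the eight DES sensors
dis_rate = 1.1            # ion data from the eight DIS sensors
cidp_allocation = 1.5     # FPI data-rate allocation to the CIDP

total = des_rate + dis_rate            # combined FPI output, ~7.6 Mb/s
ratio = total / cidp_allocation        # minimum average compression ratio
print(round(total, 1), round(ratio, 2))
```

The required ratio of roughly 5:1 is why a transform-based image compressor such as CCSDS 122.0-B-1, rather than lossless packing alone, is simulated in these studies.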
FINAL SIMULATION RESULTS FOR DEMONSTRATION CASE 1 AND 2
David Sloan; Woodrow Fiveland
2003-10-15
The goal of this DOE Vision-21 project work scope was to develop an integrated suite of software tools that could be used to simulate and visualize advanced plant concepts. Existing process simulation software did not meet the DOE's objective of "virtual simulation," which was needed to evaluate complex cycles. The overall intent of the DOE was to improve predictive tools for cycle analysis, and to improve the component models that are used in turn to simulate equipment in the cycle. Advanced component models are available; however, a generic coupling capability that would link the advanced component models to the cycle simulation software remained to be developed. In the current project, the coupling of the cycle analysis and cycle component simulation software was based on an existing suite of programs. The challenge was to develop a general-purpose software and communications link between the cycle analysis software Aspen Plus® (marketed by Aspen Technology, Inc.), and specialized component modeling packages, as exemplified by industrial proprietary codes (utilized by ALSTOM Power Inc.) and the FLUENT® computational fluid dynamics (CFD) code (provided by Fluent Inc.). A software interface and controller, based on the open CAPE-OPEN standard, has been developed and extensively tested. Various test runs and demonstration cases have been utilized to confirm the viability and reliability of the software. ALSTOM Power was tasked with the responsibility to select and run two demonstration cases to test the software--(1) a conventional steam cycle (designated as Demonstration Case 1), and (2) a combined cycle test case (designated as Demonstration Case 2). Demonstration Case 1 is a 30 MWe coal-fired power plant for municipal electricity generation, while Demonstration Case 2 is a 270 MWe, natural gas-fired, combined cycle power plant. Sufficient data was available from the operation of both power plants to complete the cycle configurations. Three runs
Experiments with encapsulation of Monte Carlo simulation results in machine learning models
NASA Astrophysics Data System (ADS)
Lal Shrestha, Durga; Kayastha, Nagendra; Solomatine, Dimitri
2010-05-01
Uncertainty analysis techniques based on Monte Carlo (MC) simulation have been applied in the hydrological sciences successfully in recent decades. They allow for quantification of the model output uncertainty resulting from uncertain model parameters, input data or model structure. They are very flexible, conceptually simple and straightforward, but become impractical in real-time applications for complex models when there is little time to perform the uncertainty analysis because of the large number of model runs required. A number of new methods were developed to improve the efficiency of Monte Carlo methods, and still these methods require a considerable number of model runs in both offline and operational mode to produce reliable and meaningful uncertainty estimation. This paper presents experiments with machine learning techniques used to encapsulate the results of MC runs. A version of the MC simulation method, the generalised likelihood uncertainty estimation (GLUE) method, is first used to assess the parameter uncertainty of the conceptual rainfall-runoff model HBV. Then three machine learning methods, namely artificial neural networks, M5 model trees and locally weighted regression, are trained to encapsulate the uncertainty estimated by the GLUE method using the historical input data. The trained machine learning models are then employed to predict the uncertainty of the model output for new input data. This method has been applied to two contrasting catchments: the Brue catchment (United Kingdom) and the Bagmati catchment (Nepal). The experimental results demonstrate that the machine learning methods are reasonably accurate in approximating the uncertainty estimated by GLUE. The great advantage of the proposed method is its efficiency in reproducing the MC-based simulation results; it can thus be an effective tool to assess the uncertainty of flood forecasting in real time.
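The encapsulation idea can be sketched end-to-end with a toy stand-in for HBV and plain linear regression in place of the paper's ANN/M5/locally weighted regressors (all names and parameter ranges below are illustrative): run a GLUE-style ensemble once offline, extract uncertainty bounds, then fit a surrogate that maps inputs directly to those bounds.

```python
import numpy as np

rng = np.random.default_rng(7)

def toy_model(x, a, b):
    """Stand-in for a rainfall-runoff model with uncertain parameters a, b."""
    return a * x + b * np.sqrt(np.abs(x))

# --- GLUE-style Monte Carlo: sample parameter sets, run the model once ---
x_hist = np.linspace(1.0, 10.0, 50)              # historical forcing
a_s = rng.uniform(0.8, 1.2, 500)
b_s = rng.uniform(-0.5, 0.5, 500)
ens = np.array([toy_model(x_hist, a, b) for a, b in zip(a_s, b_s)])
lo, hi = np.percentile(ens, [5, 95], axis=0)     # MC uncertainty bounds

# --- encapsulate the bounds in a cheap regression surrogate ---
A = np.column_stack([x_hist, np.sqrt(x_hist), np.ones_like(x_hist)])
w_lo, *_ = np.linalg.lstsq(A, lo, rcond=None)
w_hi, *_ = np.linalg.lstsq(A, hi, rcond=None)

# The surrogate now predicts uncertainty bounds for new inputs
# without re-running the Monte Carlo ensemble.
x_new = np.array([2.5, 7.5])
A_new = np.column_stack([x_new, np.sqrt(x_new), np.ones_like(x_new)])
pred_lo, pred_hi = A_new @ w_lo, A_new @ w_hi
print(pred_lo, pred_hi)
```

The operational gain is exactly the one the abstract claims: the expensive MC step runs once offline, and the trained surrogate answers in real time.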
Simulations Build Efficacy: Empirical Results from a Four-Week Congressional Simulation
ERIC Educational Resources Information Center
Mariani, Mack; Glenn, Brian J.
2014-01-01
This article describes a four-week congressional committee simulation implemented in upper level courses on Congress and the Legislative process at two liberal arts colleges. We find that the students participating in the simulation possessed high levels of political knowledge and confidence in their political skills prior to the simulation. An…
Direct drive: Simulations and results from the National Ignition Facility
Radha, P. B.; Hohenberger, M.; Edgell, D. H.; Marozas, J. A.; Marshall, F. J.; Michel, D. T.; Rosenberg, M. J.; Seka, W.; Shvydky, A.; Boehly, T. R.; et al
2016-04-19
Here, the direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.
Direct drive: Simulations and results from the National Ignition Facility
NASA Astrophysics Data System (ADS)
Radha, P. B.; Hohenberger, M.; Edgell, D. H.; Marozas, J. A.; Marshall, F. J.; Michel, D. T.; Rosenberg, M. J.; Seka, W.; Shvydky, A.; Boehly, T. R.; Collins, T. J. B.; Campbell, E. M.; Craxton, R. S.; Delettrez, J. A.; Dixit, S. N.; Frenje, J. A.; Froula, D. H.; Goncharov, V. N.; Hu, S. X.; Knauer, J. P.; McCrory, R. L.; McKenty, P. W.; Meyerhofer, D. D.; Moody, J.; Myatt, J. F.; Petrasso, R. D.; Regan, S. P.; Sangster, T. C.; Sio, H.; Skupsky, S.; Zylstra, A.
2016-05-01
Direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.
Statistics of dark matter substructure - II. Comparison of model with simulation results
NASA Astrophysics Data System (ADS)
van den Bosch, Frank C.; Jiang, Fangzhou
2016-05-01
We compare subhalo mass and velocity functions obtained from different simulations with different subhalo finders among each other, and with predictions from the new semi-analytical model presented in Paper I. We find that subhalo mass functions (SHMFs) obtained using different subhalo finders agree with each other at the level of ~20 per cent, but only at the low-mass end. At the massive end, subhalo finders that identify subhaloes based purely on density in configuration space dramatically underpredict the subhalo abundances by more than an order of magnitude. These problems are much less severe for subhalo velocity functions (SHVFs), indicating that they arise from issues related to assigning masses to the subhaloes, rather than from detecting them. Overall, the predictions from the semi-analytical model are in excellent agreement with simulation results obtained using the more advanced subhalo finders that use information in six-dimensional phase-space. In particular, the model accurately reproduces the slope and host-mass-dependent normalization of both the subhalo mass and velocity functions. We find that the SHMFs and SHVFs have power-law slopes of 0.86 and 2.77, respectively, significantly shallower than what has been claimed in several studies in the literature.
Implementation and Simulation Results using Autonomous Aerobraking Development Software
NASA Technical Reports Server (NTRS)
Maddock, Robert W.; DwyerCianciolo, Alicia M.; Bowes, Angela; Prince, Jill L. H.; Powell, Richard W.
2011-01-01
An Autonomous Aerobraking software system is currently under development with support from the NASA Engineering and Safety Center (NESC) that would move typically ground-based operations functions to onboard an aerobraking spacecraft, reducing mission risk and mission cost. The suite of software that will enable autonomous aerobraking is the Autonomous Aerobraking Development Software (AADS) and consists of an ephemeris model, onboard atmosphere estimator, temperature and loads prediction, and a maneuver calculation. The software calculates the maneuver time, magnitude and direction commands to maintain the spacecraft periapsis parameters within design structural load and/or thermal constraints. The AADS is currently tested in simulations at Mars, with plans to also evaluate feasibility and performance at Venus and Titan.
Chromium coatings by HVOF thermal spraying: Simulation and practical results
Knotek, O.; Lugscheider, E.; Jokiel, P.; Schnaut, U.; Wiemers, A.
1994-12-31
Within recent years High Velocity Oxygen-Fuel (HVOF) thermal spraying has been considered an asset to the family of thermal spraying processes. Especially for spray materials with melting points below 3,000 K it has proven successful, since it shows advantages when compared to coating processes that produce similar qualities. In order to enlarge the fields of thermal spraying applications into regions of rather low thickness, e.g. about 50-100 µm, HVOF thermally sprayed coatings in particular seem advantageous. The usual evaluation of optimized spraying parameters, including spray distance, traverse speed, gas flow rates etc., is, however, based on numerous and extensive experiments laid out by trial-and-error or statistical experimental design, and is thus expensive: man-power and material are required, spray systems are occupied by experimental work, and the optimal solution is questioned, for instance, when a new powder fraction or nozzle is used. In this paper the possibility of reducing such experimental efforts by using modeling and simulation is exemplified for producing thin chromium coatings with a CDS™-HVOF system. The aim is the production of thermally sprayed chromium coatings competing with galvanic hard chromium platings, which are applied to reduce friction and corrosion but are environmentally disadvantageous during their production.