Fluid-Structure Interaction in Composite Structures
2014-03-01
polymer composite structures. Some previous experimental observations were confirmed using the results from the computer simulations, which also enhanced understanding of the effect of FSI on dynamic responses of composite structures. ...forces) are applied. A great amount of research has been conducted using the FEM to study and simulate cases in which the structures are surrounded by
Correction for spatial averaging in laser speckle contrast analysis
Thompson, Oliver; Andrews, Michael; Hirst, Evan
2011-01-01
Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
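The spatial-averaging effect and its linear correction described above can be illustrated with a minimal simulation (a sketch, not the authors' code: fully developed static speckle is modeled as exponentially distributed intensity, a camera pixel as the mean of n adjacent speckle samples, and the system factor is estimated from the same static data purely to show the correction):

```python
import numpy as np

rng = np.random.default_rng(0)

def contrast(I):
    """Speckle contrast K = std(I) / mean(I)."""
    return I.std() / I.mean()

# Fully developed static speckle: intensity is exponentially
# distributed, so the ideal contrast is K = 1.
I = rng.exponential(scale=1.0, size=2**20)
K_ideal = contrast(I)          # close to 1.0

# Spatial averaging: each camera pixel integrates n speckle samples,
# which lowers the measured contrast toward 1/sqrt(n).
n = 4
I_pixel = I.reshape(-1, n).mean(axis=1)
K_meas = contrast(I_pixel)     # close to 0.5

# Linear system-factor correction: K_true^2 ~ K_meas^2 / beta, with
# beta calibrated on a static, fully developed target.
beta = K_meas**2 / K_ideal**2
K_corr = np.sqrt(K_meas**2 / beta)
print(K_ideal, K_meas, K_corr)
```

In a real system the samples under one pixel are correlated rather than independent, so β must be measured, not derived, which is exactly why the linearity of the correction needed confirming.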
2012-11-01
performance. The simulations confirm that the PID algorithm can be applied to this cohort without the risk of hypoglycemia. Funding: The study was... Performance Computing Software Applications Institute, Telemedicine and Advanced Technology Research Center, U.S. Army Medical Research and Materiel Command ...safe operating region, type 1 diabetes mellitus simulator. Corresponding Author: Jaques Reifman, Ph.D., DoD Biotechnology High-Performance Computing
Computational design and experimental validation of new thermal barrier systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Shengmin
2015-03-31
The focus of this project is on the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on computational simulation method development. We have applied the ab initio density functional theory (DFT) method and molecular dynamics simulation to screening top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validation, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.
GPU-accelerated phase-field simulation of dendritic solidification in a binary alloy
NASA Astrophysics Data System (ADS)
Yamanaka, Akinori; Aoki, Takayuki; Ogawa, Satoi; Takaki, Tomohiro
2011-03-01
The phase-field simulation for dendritic solidification of a binary alloy has been accelerated by using a graphics processing unit (GPU). To perform the phase-field simulation of the alloy solidification on a GPU, a program code was developed with the compute unified device architecture (CUDA). In this paper, the implementation technique of the phase-field model on the GPU is presented. We also evaluated the acceleration performance of the three-dimensional solidification simulation by using a single NVIDIA TESLA C1060 GPU and the developed program code. The results showed that the GPU calculation for 576³ computational grids achieved a performance of 170 GFLOPS by utilizing the shared memory as a software-managed cache. Furthermore, it was demonstrated that the computation with the GPU is 100 times faster than that with a single CPU core. From the obtained results, we confirmed the feasibility of realizing a real-time full three-dimensional phase-field simulation of microstructure evolution on a personal desktop computer.
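The stencil structure that makes phase-field models GPU-friendly can be sketched with a toy 1D Allen-Cahn-type update (a stand-in for the paper's binary-alloy dendrite model; all constants here are illustrative). The explicit nearest-neighbor stencil is exactly the pattern that maps onto GPU threads with shared-memory tiling:

```python
import numpy as np

# Toy 1D Allen-Cahn-type phase-field step (illustrative constants).
N, dx, dt, eps2 = 256, 0.5, 0.05, 1.0   # dt < dx^2 / (2*eps2) for stability

def step(phi):
    # Periodic 3-point Laplacian stencil.
    lap = (np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi) / dx**2
    # Double-well driving force pushes phi toward the phases +1 / -1.
    return phi + dt * (eps2 * lap + phi - phi**3)

x = dx * (np.arange(N) - N // 2)
phi = np.tanh(x)              # diffuse interface at x = 0
for _ in range(200):
    phi = step(phi)

print(phi.min(), phi.max())   # stays bounded near the phases -1 / +1
```

On a GPU, each thread evaluates this update for one grid point, and a thread block stages its tile of `phi` (plus halo points) in shared memory so the three stencil reads hit fast on-chip storage rather than global memory.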
2016-07-10
Fast Computation of High Energy Elastic Collision Scattering Angle for Electric Propulsion Plume Simulation, Samuel J. Araki. ...atom needs to be sampled; however, it is confirmed that the initial target atom velocity does not play a significant role in typical electric propulsion...
Computer Simulation and Field Experiment for Downlink Multiuser MIMO in Mobile WiMAX System.
Yamaguchi, Kazuhiro; Nagahashi, Takaharu; Akiyama, Takuya; Matsue, Hideaki; Uekado, Kunio; Namera, Takakazu; Fukui, Hiroshi; Nanamatsu, Satoshi
2015-01-01
The transmission performance of a downlink mobile WiMAX system with multiuser multiple-input multiple-output (MU-MIMO) in a computer simulation and a field experiment is described. In the computer simulation, a MU-MIMO transmission system can be realized by using the block diagonalization (BD) algorithm, so that each user can receive signals without any interference from other users. The bit error rate (BER) performance and channel capacity were simulated in a spatially correlated multipath fading environment in accordance with the modulation schemes and the number of streams. Furthermore, we propose a method for evaluating the transmission performance of this downlink mobile WiMAX system in this environment by using the computer simulation. In the field experiment, the received power and downlink throughput in the UDP layer were measured on an experimental mobile WiMAX system developed in Azumino City, Japan. Comparing the simulated and experimental results, the maximum measured downlink throughput was almost the same as the simulated throughput. It was confirmed that the experimental mobile WiMAX system for MU-MIMO transmission successfully increased the total channel capacity of the system.
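The block diagonalization step can be sketched in a few lines (a minimal NumPy illustration, not the experimental system: each user's precoder is drawn from the null space of the other users' stacked channels, so their received interference is ideally zero):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy MU-MIMO downlink: 4 TX antennas, 2 users with 2 RX antennas each.
Nt, users, Nr = 4, 2, 2
H = [rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))
     for _ in range(users)]

precoders = []
for k in range(users):
    # Stack every channel except user k's.
    H_bar = np.vstack([H[j] for j in range(users) if j != k])
    # Right singular vectors with zero singular value span null(H_bar).
    _, _, Vh = np.linalg.svd(H_bar)
    null_dim = Nt - np.linalg.matrix_rank(H_bar)
    precoders.append(Vh.conj().T[:, Nt - null_dim:])

# Inter-user leakage through the BD precoders is numerically zero.
leak01 = np.linalg.norm(H[1] @ precoders[0])
leak10 = np.linalg.norm(H[0] @ precoders[1])
print(leak01, leak10)
```

BD requires the transmitter to know all users' channels and enough antennas (Nt greater than the total receive antennas of the other users) for a non-empty null space, which is why the number of streams matters in the simulated capacity results.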
Evaluation of a grid based molecular dynamics approach for polypeptide simulations.
Merelli, Ivan; Morra, Giulia; Milanesi, Luciano
2007-09-01
Molecular dynamics is very important for biomedical research because it makes possible the simulation of the behavior of a biological macromolecule in silico. However, molecular dynamics is computationally rather expensive: simulating even a few nanoseconds of dynamics for a large macromolecule such as a protein takes a very long time, owing to the high number of operations needed to solve Newton's equations for a system of thousands of atoms. In order to obtain biologically significant data, it is desirable to use high-performance computing resources to perform these simulations. Recently, a distributed computing approach based on replacing a single long simulation with many independent short trajectories has been introduced, which in many cases provides valuable results. This study concerns the development of an infrastructure to run molecular dynamics simulations on a grid platform in a distributed way. The implemented software allows the parallel submission of different simulations that are individually short but together provide important biological information. Moreover, each simulation is divided into a chain of jobs to avoid data loss in case of system failure and to limit the size of each data transfer from the grid. The results confirm that the distributed approach on grid computing is particularly suitable for molecular dynamics simulations, owing to its high scalability.
Experimental verification and simulation of negative index of refraction using Snell's law.
Parazzoli, C G; Greegor, R B; Li, K; Koltenbah, B E C; Tanielian, M
2003-03-14
We report the results of a Snell's law experiment on a negative index of refraction material in free space from 12.6 to 13.2 GHz. Numerical simulations using Maxwell's equations solvers show good agreement with the experimental results, confirming the existence of negative index of refraction materials. The index of refraction is a function of frequency. At 12.6 GHz we measure and compute the real part of the index of refraction to be -1.05. The measurements and simulations of the electromagnetic field profiles were performed at distances of 14λ and 28λ from the sample; the fields were also computed at 100λ.
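The reported index can be plugged directly into Snell's law to see the signature of negative refraction (a simple check; the 30° incidence angle is chosen arbitrarily for illustration):

```python
import math

# Snell's law, n1*sin(t1) = n2*sin(t2), with the measured n2 = -1.05
# at 12.6 GHz. The refraction angle comes out negative: the refracted
# ray bends to the SAME side of the normal as the incident ray.
n1, n2 = 1.0, -1.05
theta_i = math.radians(30.0)
theta_t = math.asin(n1 * math.sin(theta_i) / n2)
print(math.degrees(theta_t))   # about -28.4 degrees
```

This sign flip of the refraction angle is precisely what a free-space Snell's law experiment like the one above detects.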
Management of health care expenditure by soft computing methodology
NASA Astrophysics Data System (ADS)
Maksimović, Goran; Jović, Srđan; Jovanović, Radomir; Aničić, Obrad
2017-01-01
In this study, health care expenditure was managed using soft computing methodology. The main goal was to predict gross domestic product (GDP) from several factors of health care expenditure. Soft computing methodologies were applied because GDP prediction is a very complex task. The performance of the proposed predictors was confirmed by the simulation results. According to the results, support vector regression (SVR) has better prediction accuracy than the other soft computing methodologies. The soft computing methods benefit from global optimization capabilities, which help avoid local-minimum issues.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
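The three interval sampling methods can be sketched in a few lines (a toy version of the kind of simulation described above; the bout and gap distributions are invented for illustration). The known biases fall out directly: partial-interval recording can only overestimate cumulative duration, whole-interval recording can only underestimate it:

```python
import random

random.seed(42)

# A 600 s observation scored in 10 s intervals; the target behavior
# occurs in random bouts separated by random gaps.
obs_len, interval = 600, 10
on = [False] * obs_len
t = 0
while t < obs_len:
    bout = random.randint(3, 20)          # bout length, seconds
    for s in range(t, min(t + bout, obs_len)):
        on[s] = True
    t += bout + random.randint(5, 40)     # gap until the next bout

true_pct = 100.0 * sum(on) / obs_len      # true % of time "on"

mts = pir = wir = 0
n_int = obs_len // interval
for i in range(n_int):
    chunk = on[i * interval:(i + 1) * interval]
    if chunk[-1]:   mts += 1   # momentary: sample at interval end
    if any(chunk):  pir += 1   # partial: any occurrence scores
    if all(chunk):  wir += 1   # whole: behavior must fill interval

print(true_pct,
      100.0 * mts / n_int,     # roughly unbiased on average
      100.0 * pir / n_int,     # >= true_pct by construction
      100.0 * wir / n_int)     # <= true_pct by construction
```

Sweeping the interval, bout, and gap parameters over many random seeds is essentially what produces the error tables the study reports.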
Good coupling for the multiscale patch scheme on systems with microscale heterogeneity
NASA Astrophysics Data System (ADS)
Bunder, J. E.; Roberts, A. J.; Kevrekidis, I. G.
2017-05-01
Computational simulation of microscale detailed systems is frequently only feasible over spatial domains much smaller than the macroscale of interest. The 'equation-free' methodology couples many small patches of microscale computations across space to empower efficient computational simulation over macroscale domains of interest. Motivated by molecular or agent simulations, we analyse the performance of various coupling schemes for patches when the microscale is inherently 'rough'. As a canonical problem in this universality class, we systematically analyse the case of heterogeneous diffusion on a lattice. Computer algebra explores how the dynamics of coupled patches predict the large scale emergent macroscale dynamics of the computational scheme. We determine good design for the coupling of patches by comparing the macroscale predictions from patch dynamics with the emergent macroscale on the entire domain, thus minimising the computational error of the multiscale modelling. The minimal error on the macroscale is obtained when the coupling utilises averaging regions which are between a third and a half of the patch. Moreover, when the symmetry of the inter-patch coupling matches that of the underlying microscale structure, patch dynamics predicts the desired macroscale dynamics to any specified order of error. The results confirm that the patch scheme is useful for macroscale computational simulation of a range of systems with microscale heterogeneity.
Sun, Peng; Zhou, Haoyin; Ha, Seongmin; Hartaigh, Bríain ó; Truong, Quynh A.; Min, James K.
2016-01-01
In clinical cardiology, both anatomy and physiology are needed to diagnose cardiac pathologies. CT imaging and computer simulations provide valuable and complementary data for this purpose. However, it remains challenging to gain useful information from the large amount of high-dimensional, diverse data. The current tools are not adequately integrated to visualize anatomic and physiologic data from a complete yet focused perspective. We introduce a new computer-aided diagnosis framework, which allows for comprehensive modeling and visualization of cardiac anatomy and physiology from CT imaging data and computer simulations, with a primary focus on ischemic heart disease. The following visual information is presented: (1) Anatomy from CT imaging: geometric modeling and visualization of cardiac anatomy, including four heart chambers, left and right ventricular outflow tracts, and coronary arteries; (2) Function from CT imaging: motion modeling, strain calculation, and visualization of four heart chambers; (3) Physiology from CT imaging: quantification and visualization of myocardial perfusion and contextual integration with coronary artery anatomy; (4) Physiology from computer simulation: computation and visualization of hemodynamics (e.g., coronary blood velocity, pressure, shear stress, and fluid forces on the vessel wall). Feedback from cardiologists has confirmed the practical utility of integrating these features for the purpose of computer-aided diagnosis of ischemic heart disease. PMID:26863663
Funaki, Ayumu; Ohkubo, Masaki; Wada, Shinichi; Murao, Kohei; Matsumoto, Toru; Niizuma, Shinji
2012-07-01
With the wide dissemination of computed tomography (CT) screening for lung cancer, measuring nodule volume accurately with computer-aided volumetry software is increasingly important. Many studies of the accuracy of volumetry software have been performed using a phantom with artificial nodules. Such phantom studies are limited, however, in their ability to reproduce nodules accurately and in the variety of sizes and densities required. Therefore, we propose a new approach that uses computer-simulated nodules based on the point spread function measured in a CT system. The validity of the proposed method was confirmed by the excellent agreement obtained between computer-simulated nodules and phantom nodules in the volume measurements. A practical clinical evaluation of the accuracy of volumetry software was achieved by adding simulated nodules onto clinical lung images, including noise and artifacts. The tested volumetry software was revealed to be accurate to within an error of 20% for nodules >5 mm when the difference between the nodule density and the background (lung) CT value was 400-600 HU. Such a detailed analysis can provide clinically useful information on the use of volumetry software in CT screening for lung cancer. We concluded that the proposed method is effective for evaluating the performance of computer-aided volumetry software.
NASA Astrophysics Data System (ADS)
Vilagosh, Zoltan; Lajevardipour, Alireza; Wood, Andrew
2018-01-01
Finite-difference time-domain (FDTD) computational phantoms aid the analysis of THz radiation interaction with human skin. The presented computational phantoms have accurate anatomical layering and electromagnetic properties. A novel "large sheet" simulation technique is used, allowing for a realistic representation of the lateral absorption and reflection of in-vivo measurements. Simulations carried out to date indicate that hair follicles act as THz propagation channels and confirm the possible role of melanin, both in nevi and in skin pigmentation, as a significant absorber of THz radiation. A novel freezing technique shows promise in increasing the depth of skin penetration of THz radiation to aid diagnostic imaging.
Xia, Pu; Mou, Fei-Fei; Wang, Li-Wei
2012-01-01
Non-small-cell lung cancer (NSCLC) is a leading cause of cancer deaths worldwide. Crizotinib has been approved by the U.S. Food and Drug Administration for the treatment of patients with advanced NSCLC. However, understanding of its mechanisms of action is still limited. In our studies, we confirmed crizotinib-induced apoptosis in A549 lung cancer cells. In order to assess the mechanisms, molecular docking technology was used as a preliminary simulation of the signaling pathways. Interestingly, our experimental results were consistent with the results of the computer simulation. This indicates that molecular docking technology should find wide use for its reliability and convenience.
Analysis of Square Cup Deep-Drawing Test of Pure Titanium
NASA Astrophysics Data System (ADS)
Ogawa, Takaki; Ma, Ninshu; Ueyama, Minoru; Harada, Yasunori
2016-08-01
Predicting the formability of titanium is more difficult than that of steels, owing to its strong anisotropy. If computer simulation can estimate the formability of titanium, we can select the optimal forming conditions. The purpose of this study was to acquire knowledge for formability prediction through computer simulation of the square cup deep-drawing of pure titanium. In this paper, the results of FEM analysis of pure titanium were compared with the experimental results to examine the validity of the analysis. We analyzed the formability of deep-drawing a square cup of titanium by FEM using solid elements. Comparing the analysis results with the experimental results, such as the formed shape, the punch load, and the thickness, the validity was confirmed. Further, through analyzing the change of thickness around the forming corner, it was confirmed that the thickness increased to its maximum value during the forming process at a stroke of 35 mm, rather than at the maximum stroke.
Yamamoto, Takehiro; Ueda, Shuya
2013-01-01
Biofilm is a slime-like complex aggregate of microorganisms and their products, extracellular polymer substances, that grows on a solid surface. The growth phenomenon of biofilm is relevant to the corrosion and clogging of water pipes, the chemical processes in a bioreactor, and bioremediation. In these phenomena, the behavior of the biofilm under flow has an important role. Therefore, controlling the biofilm behavior in each process is important. To provide a computational tool for analyzing biofilm growth, the present study proposes a computational model for the simulation of biofilm growth in flows. This model accounts for the growth, decay, detachment and adhesion of biofilms. The proposed model couples the computation of the surrounding fluid flow, using the finite volume method, with the simulation of biofilm growth, using the cellular automaton approach, a relatively low-computational-cost method. Furthermore, a stochastic approach for considering the adhesion process is proposed. Numerical simulations for the biofilm growth on a planar wall and that in an L-shaped rectangular channel were carried out. A variety of biofilm structures were observed depending on the strength of the flow. Moreover, the importance of the detachment and adhesion processes was confirmed.
Gas-liquid coexistence in a system of dipolar soft spheres.
Jia, Ran; Braun, Heiko; Hentschke, Reinhard
2010-12-01
The existence of gas-liquid coexistence in dipolar fluids with no contribution to the attractive interaction other than the dipole-dipole interaction is a basic and open question in the theory of fluids. Here we compute the gas-liquid critical point in a system of dipolar soft spheres subject to an external electric field using molecular dynamics computer simulation. Tracking the critical point as the field strength approaches zero, we find the following limiting values: T_c = 0.063 and ρ_c = 0.0033 (dipole moment μ = 1). These values are confirmed by independent simulation at zero field strength.
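The bare dipole-dipole interaction that drives this system can be written down directly (reduced units; the head-to-tail configuration is the strongly attractive one, which favors chain formation and is why condensation in dipolar fluids is such a delicate question):

```python
import numpy as np

# Point dipole-dipole interaction in reduced units:
#   U = [m1.m2 - 3 (m1.r_hat)(m2.r_hat)] / r^3
def dipole_energy(m1, m2, r_vec):
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return (m1 @ m2 - 3.0 * (m1 @ r_hat) * (m2 @ r_hat)) / r**3

m = np.array([0.0, 0.0, 1.0])          # dipole moment mu = 1
# Head-to-tail alignment along the dipole axis: attractive.
u_chain = dipole_energy(m, m, np.array([0.0, 0.0, 1.5]))
# Side-by-side parallel alignment at the same distance: repulsive.
u_side = dipole_energy(m, m, np.array([1.5, 0.0, 0.0]))
print(u_chain, u_side)   # -2/1.5^3 and +1/1.5^3
```

The head-to-tail energy is twice as strong as the side-by-side repulsion, so dipoles prefer chains over compact clusters; an external field, as used above, helps tip the balance toward condensation.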
Kovačič, Aljaž; Borovinšek, Matej; Vesenjak, Matej; Ren, Zoran
2018-01-26
This paper addresses the problem of reconstructing realistic, irregular pore geometries of lotus-type porous iron for computer models that allow for simple porosity and pore size variation in computational characterization of their mechanical properties. The presented methodology uses image-recognition algorithms for the statistical analysis of pore morphology in real material specimens, from which a unique fingerprint of pore morphology at a certain porosity level is derived. The representative morphology parameter is introduced and used for the indirect reconstruction of realistic and statistically representative pore morphologies, which can be used for the generation of computational models with an arbitrary porosity. Such models were subjected to parametric computer simulations to characterize the dependence of engineering elastic modulus on the porosity of lotus-type porous iron. The computational results are in excellent agreement with experimental observations, which confirms the suitability of the presented methodology of indirect pore geometry reconstruction for computational simulations of similar porous materials.
New Tooling System for Forming Aluminum Beverage Can End Shell
NASA Astrophysics Data System (ADS)
Yamazaki, Koetsu; Otsuka, Takayasu; Han, Jing; Hasegawa, Takashi; Shirasawa, Taketo
2011-08-01
This paper proposes a new tooling system for forming the shells of aluminum beverage can ends. First, the forming process of a conventional tooling system was simulated using three-dimensional finite element models. The simulation results were confirmed to be consistent with those of axisymmetric models, so simulations for further study were performed using axisymmetric models to save computational time. A comparison shows that thinning of the shell formed by the proposed tooling system is improved by about 3.6%. The influences of the tool uppermost surface profiles and tool initial positions in the new tooling system were investigated, and a design optimization method based on the numerical simulations was then applied to search for optimum design points, in order to minimize thinning subject to constraints on the geometrical dimensions of the shell. Finally, the performance of the shell subjected to internal pressure was confirmed to meet the design requirements.
Electron and ion heating by whistler turbulence: Three-dimensional particle-in-cell simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, R. Scott; Gary, S. Peter; Wang, Joseph
2014-12-17
Three-dimensional particle-in-cell simulations of decaying whistler turbulence are carried out on a collisionless, homogeneous, magnetized, electron-ion plasma model. The simulations use an initial ensemble of relatively long-wavelength whistler modes with a broad range of initial propagation directions and an initial electron beta β_e = 0.05. The computations follow the temporal evolution of the fluctuations as they cascade into broadband turbulent spectra at shorter wavelengths. Three simulations correspond to successively larger simulation boxes and successively longer wavelengths of the initial fluctuations. The computations confirm previous results showing that electron heating is preferentially parallel to the background magnetic field B_o, and ion heating is preferentially perpendicular to B_o. The new results here are that larger simulation boxes and longer initial whistler wavelengths yield weaker overall dissipation, consistent with linear dispersion theory predictions of decreased damping; stronger ion heating, consistent with a stronger ion Landau resonance; and weaker electron heating.
Laser Simulations of the Destructive Impact of Nuclear Explosions on Hazardous Asteroids
NASA Astrophysics Data System (ADS)
Aristova, E. Yu.; Aushev, A. A.; Baranov, V. K.; Belov, I. A.; Bel'kov, S. A.; Voronin, A. Yu.; Voronich, I. N.; Garanin, R. V.; Garanin, S. G.; Gainullin, K. G.; Golubinskii, A. G.; Gorodnichev, A. V.; Denisova, V. A.; Derkach, V. N.; Drozhzhin, V. S.; Ericheva, I. A.; Zhidkov, N. V.; Il'kaev, R. I.; Krayukhin, A. A.; Leonov, A. G.; Litvin, D. N.; Makarov, K. N.; Martynenko, A. S.; Malinov, V. I.; Mis'ko, V. V.; Rogachev, V. G.; Rukavishnikov, A. N.; Salatov, E. A.; Skorochkin, Yu. V.; Smorchkov, G. Yu.; Stadnik, A. L.; Starodubtsev, V. A.; Starodubtsev, P. V.; Sungatullin, R. R.; Suslov, N. A.; Sysoeva, T. I.; Khatunkin, V. Yu.; Tsoi, E. S.; Shubin, O. N.; Yufa, V. N.
2018-01-01
We present the results of preliminary experiments at laser facilities in which the processes of the undeniable destruction of stony asteroids (chondrites) in space by nuclear explosions on the asteroid surface are simulated based on the principle of physical similarity. We present the results of comparative gasdynamic computations of a model nuclear explosion on the surface of a large asteroid and computations of the impact of a laser pulse on a miniature asteroid simulator, confirming the similarity of the key processes in the full-scale and model cases. The technology of fabricating miniature mockups with mechanical properties close to those of stony asteroids is described. For mini-mockups 4-10 mm in size, differing in shape and impact conditions, we have made an experimental estimate of the energy threshold for the undeniable destruction of a mockup and investigated the parameters of its fragmentation at a laser energy up to 500 J. The results obtained confirm the possibility of an experimental determination of the criteria for the destruction of asteroids of various types by a nuclear explosion in laser experiments. We show that the undeniable destruction of a large asteroid is possible at attainable nuclear explosion energies on its surface.
Computer Simulation Shows the Effect of Communication on Day of Surgery Patient Flow.
Taaffe, Kevin; Fredendall, Lawrence; Huynh, Nathan; Franklin, Jennifer
2015-07-01
To improve patient flow in a surgical environment, practitioners and academicians often use process mapping and simulation as tools to evaluate and recommend changes. We used simulations to help staff visualize the effect of communication and coordination delays that occur on the day of surgery. Perioperative services staff participated in tabletop exercises in which they chose the delays that were most important to eliminate. Using a day-of-surgery computer simulation model, the elimination of delays was tested and the results were shared with the group. This exercise, repeated for multiple groups of staff, provided an understanding of not only the dynamic events taking place, but also how small communication delays can contribute to a significant loss in efficiency and the ability to provide timely care. Survey results confirmed these understandings. Copyright © 2015 AORN, Inc. Published by Elsevier Inc. All rights reserved.
Application of artificial neural networks to gaming
NASA Astrophysics Data System (ADS)
Baba, Norio; Kita, Tomio; Oda, Kazuhiro
1995-04-01
Recently, neural network technology has been applied to various actual problems. It has succeeded in producing a large number of intelligent systems. In this article, we suggest that it could be applied to the field of gaming. In particular, we suggest that the neural network model could be used to mimic players' characters. Several computer simulation results using a computer gaming system which is a modified version of the COMMONS GAME confirm our idea.
On Laminar to Turbulent Transition of Arc-Jet Flow in the NASA Ames Panel Test Facility
NASA Technical Reports Server (NTRS)
Gokcen, Tahir; Alunni, Antonella I.
2012-01-01
This paper provides experimental evidence and supporting computational analysis to characterize the laminar to turbulent flow transition in a high enthalpy arc-jet facility at NASA Ames Research Center. The arc-jet test data obtained in the 20 MW Panel Test Facility include measurements of surface pressure and heat flux on a water-cooled calibration plate, and measurements of surface temperature on a reaction-cured-glass coated tile plate. Computational fluid dynamics simulations are performed to characterize the arc-jet test environment and estimate its parameters consistent with the facility and calibration measurements. The present analysis comprises simulations of the nonequilibrium flowfield in the facility nozzle, test box, and flowfield over test articles. Both laminar and turbulent simulations are performed, and the computed results are compared with the experimental measurements, including the Stanton number dependence on Reynolds number. Comparisons of computed and measured surface heat fluxes (and temperatures), along with the accompanying analysis, confirm that the boundary layer in the Panel Test Facility flow is transitional at certain arc-heater conditions.
Quantum simulation of dissipative processes without reservoir engineering
Di Candia, R.; Pedernales, J. S.; del Campo, A.; ...
2015-05-29
We present a quantum algorithm to simulate general finite dimensional Lindblad master equations without the requirement of engineering the system-environment interactions. The proposed method is able to simulate both Markovian and non-Markovian quantum dynamics. It consists of the quantum computation of the dissipative corrections to the unitary evolution of the system of interest, via the reconstruction of the response functions associated with the Lindblad operators. Our approach is equally applicable to dynamics generated by effectively non-Hermitian Hamiltonians. We confirm the quality of our method by providing specific error bounds that quantify its accuracy.
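For reference, the target dynamics of such an algorithm, a Lindblad master equation, can be integrated classically for a single qubit with amplitude damping (this is a conventional numerical check, not the quantum algorithm itself; the Hamiltonian and decay rate are illustrative):

```python
import numpy as np

# Single-qubit Lindblad master equation with a drive and amplitude
# damping: d rho/dt = -i[H, rho] + L rho L+ - (1/2){L+L, rho}.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * sx                                   # drive Hamiltonian
gamma = 0.4                                    # damping rate
L = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho
                                         + rho @ L.conj().T @ L)
    return comm + diss

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in excited |1>
dt, steps = 0.01, 1000
for _ in range(steps):
    k1 = lindblad_rhs(rho)                       # midpoint (RK2) step
    rho = rho + dt * lindblad_rhs(rho + 0.5 * dt * k1)

# Trace is preserved; the excited population decays toward a driven
# steady state below 1/2.
print(np.trace(rho).real, rho[1, 1].real)
```

A quantum simulation algorithm like the one above aims to reproduce exactly this kind of trace-preserving, dissipative evolution, but on quantum hardware and without a physical reservoir.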
Proctor, CJ; Macdonald, C; Milner, JM; Rowan, AD; Cawston, TE
2014-01-01
Objective: To use a novel computational approach to examine the molecular pathways involved in cartilage breakdown and to use computer simulation to test possible interventions for reducing collagen release. Methods: We constructed a computational model of the relevant molecular pathways using the Systems Biology Markup Language, a computer-readable format of a biochemical network. The model was constructed using our experimental data showing that interleukin-1 (IL-1) and oncostatin M (OSM) act synergistically to up-regulate collagenase protein levels and activity and initiate cartilage collagen breakdown. Simulations were performed using the COPASI software package. Results: The model predicted that simulated inhibition of JNK or p38 MAPK, and overexpression of tissue inhibitor of metalloproteinases 3 (TIMP-3), led to a reduction in collagen release. Overexpression of TIMP-1 was much less effective than that of TIMP-3 and led to a delay, rather than a reduction, in collagen release. Simulated interventions of receptor antagonists and inhibition of JAK-1, the first kinase in the OSM pathway, were ineffective. So, importantly, the model predicts that it is more effective to intervene at targets that are downstream, such as the JNK pathway, rather than those that are close to the cytokine signal. In vitro experiments confirmed the effectiveness of JNK inhibition. Conclusion: Our study shows the value of computer modeling as a tool for examining possible interventions by which to reduce cartilage collagen breakdown. The model predicts that interventions that either prevent transcription or inhibit the activity of collagenases are promising strategies and should be investigated further in an experimental setting. PMID:24757149
Digital hardware implementation of a stochastic two-dimensional neuron model.
Grassia, F; Kohno, T; Levi, T
2016-11-01
This study explores the feasibility of stochastic neuron simulation in digital systems (FPGAs) through an implementation of a two-dimensional neuron model. Stochasticity is added via a noise-current source in the silicon neuron, modeled as an Ornstein-Uhlenbeck process. The approach uses digital computation to emulate individual neuron behavior with fixed-point arithmetic operations. The neuron model's computations are performed in arithmetic pipelines. The design was written in VHDL and simulated prior to mapping onto the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the silicon neuron more biologically plausible for future hybrid experiments. Copyright © 2017 Elsevier Ltd. All rights reserved.
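The Ornstein-Uhlenbeck noise current mentioned above is commonly generated with an Euler-Maruyama update. A floating-point sketch follows; the FPGA implementation uses fixed-point arithmetic, and all parameter values here are illustrative:

```python
import numpy as np

def ou_noise(theta, mu, sigma, dt, n, seed=0):
    """Euler-Maruyama integration of the Ornstein-Uhlenbeck SDE
    dI = theta*(mu - I)*dt + sigma*dW, a standard way to obtain a
    colored noise current for a simulated neuron."""
    rng = np.random.default_rng(seed)
    current = np.empty(n)
    current[0] = mu
    sqdt = np.sqrt(dt)
    for k in range(1, n):
        current[k] = (current[k - 1]
                      + theta * (mu - current[k - 1]) * dt
                      + sigma * sqdt * rng.standard_normal())
    return current

# Stationary statistics: mean -> mu, variance -> sigma**2 / (2*theta)
noise = ou_noise(theta=5.0, mu=0.0, sigma=1.0, dt=1e-3, n=200_000)
```

The stationary variance sigma**2/(2*theta) gives a quick check that the discretization is behaving; here it predicts a standard deviation of about 0.316.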
NASA Technical Reports Server (NTRS)
Cook, D. W.
1977-01-01
Computer simulation is used to demonstrate that crewman comfort can be assured by automatic control of the coolant inlet temperature of the liquid-cooled garment, with controller input consisting of measurements of the garment inlet temperature and the garment inlet-to-outlet temperature difference. Subsequent tests using a facsimile of the control logic developed in the computer program confirmed the feasibility of such a design scheme.
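The report's control logic is not reproduced in the abstract; as a minimal sketch of one plausible scheme, a proportional loop can cool the coolant inlet until the garment outlet reaches a comfort setpoint. The heat-load, gain, and temperature values below are hypothetical, not taken from the report:

```python
def simulate_lcg_control(setpoint_out=30.0, q_met=420.0, m_dot_cp=60.0,
                         kp=0.2, steps=300, t_in=25.0):
    """Minimal proportional control loop for a liquid-cooled garment:
    adjust the coolant inlet temperature until the outlet temperature
    reaches a comfort setpoint.  The steady heat pickup q_met/m_dot_cp
    (7 K here) stands in for the crewman's metabolic load."""
    for _ in range(steps):
        t_out = t_in + q_met / m_dot_cp      # outlet = inlet + heat pickup
        t_in -= kp * (t_out - setpoint_out)  # cool the inlet if outlet too warm
    return t_in, t_in + q_met / m_dot_cp

t_in, t_out = simulate_lcg_control()
```

With these numbers the loop settles at an inlet temperature of 23 degrees C, giving the 30 degrees C outlet setpoint.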
Physical Processes and Applications of the Monte Carlo Radiative Energy Deposition (MRED) Code
NASA Astrophysics Data System (ADS)
Reed, Robert A.; Weller, Robert A.; Mendenhall, Marcus H.; Fleetwood, Daniel M.; Warren, Kevin M.; Sierawski, Brian D.; King, Michael P.; Schrimpf, Ronald D.; Auden, Elizabeth C.
2015-08-01
MRED is a Python-language scriptable computer application that simulates radiation transport. It is the computational engine for the on-line tool CRÈME-MC. MRED is based on C++ code from Geant4 with additional Fortran components to simulate electron transport and nuclear reactions with high precision. We provide a detailed description of the structure of MRED and the implementation of the simulation of physical processes used to simulate radiation effects in electronic devices and circuits. Extensive discussion and references are provided that illustrate the validation of models used to implement specific simulations of relevant physical processes. Several applications of MRED are summarized that demonstrate its ability to predict and describe basic physical phenomena associated with irradiation of electronic circuits and devices. These include effects from single particle radiation (including both direct ionization and indirect ionization effects), dose enhancement effects, and displacement damage effects. MRED simulations have also helped to identify new single event upset mechanisms not previously observed by experiment, but since confirmed, including upsets due to muons and energetic electrons.
NASA Astrophysics Data System (ADS)
Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi
2017-09-01
A method for large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in the gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM can analyze two-phase flows in complex structures; however, the simulation domain is limited by heavy computational loads. This study investigates a variety of means to reduce the computational load and increase the simulation area. One is applying an LBM that treats the two phases as having the same density, while keeping numerical stability with large time steps. The applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual densities. The second is establishing the maximum limit of the Capillary number that maintains flow patterns similar to the precise simulation; this is attempted because the computational load is inversely proportional to the Capillary number. The results show that the Capillary number can be increased to 3.0 × 10^-3, whereas actual operation corresponds to Ca = 10^-5 to 10^-8. The limit is also investigated experimentally using an enlarged-scale model satisfying similarity conditions for the flow. Finally, a demonstration is made of the effects of pore uniformity in the GDL as an example of a large-scale simulation covering a channel.
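The Capillary number trade-off discussed above is a one-line computation. The water properties below are standard textbook values, while the velocity is a hypothetical value chosen to land on the paper's reported 3.0 × 10^-3 limit:

```python
def capillary_number(mu, u, sigma):
    """Ca = mu*u/sigma: ratio of viscous to surface-tension forces."""
    return mu * u / sigma

# Illustrative water/air values; u is hypothetical
mu = 1.0e-3      # Pa*s, dynamic viscosity of water
sigma = 0.072    # N/m, water-air surface tension
u = 0.216        # m/s, characteristic liquid velocity
Ca = capillary_number(mu, u, sigma)
```

Raising Ca in the simulation (e.g. by raising the velocity) shortens the simulated time needed for a drainage event, which is why the computational load scales inversely with Ca.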
Face and construct validity of a computer-based virtual reality simulator for ERCP.
Bittner, James G; Mellinger, John D; Imam, Toufic; Schade, Robert R; Macfadyen, Bruce V
2010-02-01
Background: Currently, little evidence supports computer-based simulation for ERCP training. Objective: To determine the face and construct validity of a computer-based simulator for ERCP and to assess its perceived utility as a training tool. Design: Novice and expert endoscopists completed 2 simulated ERCP cases using the GI Mentor II. Setting: Virtual Education and Surgical Simulation Laboratory, Medical College of Georgia. Main Outcome Measurements: Times to complete the procedure, reach the papilla, and use fluoroscopy; attempts to cannulate the papilla, pancreatic duct, and common bile duct; and the number of contrast injections and complications. Subjects also assessed simulator graphics, procedural accuracy, difficulty, haptics, overall realism, and training potential. Results: Only when performance data from cases A and B were combined did the GI Mentor II differentiate novices and experts, based on times to complete the procedure, reach the papilla, and use fluoroscopy. Across skill levels, overall opinions were similar regarding graphics (moderately realistic), accuracy (similar to clinical ERCP), difficulty (similar to clinical ERCP), overall realism (moderately realistic), and haptics. Most participants (92%) claimed that the simulator has definite training potential or should be required for training. Limitations: Small sample size, single institution. Conclusions: The GI Mentor II demonstrated construct validity for ERCP based on select metrics. Most subjects thought that the simulated graphics, procedural accuracy, and overall realism exhibit face validity, and deemed it a useful training tool. Study repetition involving more participants and cases may help confirm these results and establish the simulator's ability to differentiate skill levels based on ERCP-specific metrics.
Design by Dragging: An Interface for Creative Forward and Inverse Design with Simulation Ensembles
Coffey, Dane; Lin, Chi-Lun; Erdman, Arthur G.; Keefe, Daniel F.
2014-01-01
We present an interface for exploring large design spaces as encountered in simulation-based engineering, design of visual effects, and other tasks that require tuning parameters of computationally-intensive simulations and visually evaluating results. The goal is to enable a style of design with simulations that feels as-direct-as-possible so users can concentrate on creative design tasks. The approach integrates forward design via direct manipulation of simulation inputs (e.g., geometric properties, applied forces) in the same visual space with inverse design via “tugging” and reshaping simulation outputs (e.g., scalar fields from finite element analysis (FEA) or computational fluid dynamics (CFD)). The interface includes algorithms for interpreting the intent of users’ drag operations relative to parameterized models, morphing arbitrary scalar fields output from FEA and CFD simulations, and in-place interactive ensemble visualization. The inverse design strategy can be extended to use multi-touch input in combination with an as-rigid-as-possible shape manipulation to support rich visual queries. The potential of this new design approach is confirmed via two applications: medical device engineering of a vacuum-assisted biopsy device and visual effects design using a physically based flame simulation. PMID:24051845
Approximate Algorithms for Computing Spatial Distance Histograms with Accuracy Guarantees
Grupcev, Vladimir; Yuan, Yongke; Tu, Yi-Cheng; Huang, Jin; Chen, Shaoping; Pandit, Sagar; Weng, Michael
2014-01-01
Particle simulation has become an important research tool in many scientific and engineering fields. Data generated by such simulations impose great challenges on database storage and query processing. One of the queries against particle simulation data, the spatial distance histogram (SDH) query, is the building block of many high-level analytics, and requires quadratic time to compute using a straightforward algorithm. Previous work has developed efficient algorithms that compute exact SDHs. While outperforming the naive solution, such algorithms are still impractical for processing SDH queries against large-scale simulation data. In this paper, we take a different path to tackle this problem, focusing on approximate algorithms with provable error bounds. We first present a solution derived from the aforementioned exact SDH algorithm whose running time is independent of the system size N. We also develop a mathematical model to analyze the mechanism that leads to errors in the basic approximate algorithm. Our model provides insights on how the algorithm can be improved to achieve higher accuracy and efficiency. These insights give rise to a new approximate algorithm with an improved time/accuracy tradeoff. Experimental results confirm our analysis. PMID:24693210
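The quadratic-time baseline that the paper improves on can be sketched directly. The following naive SDH, with hypothetical bucket settings, is the reference against which approximate algorithms trade accuracy for speed:

```python
import numpy as np

def sdh(points, bucket_width, n_buckets):
    """Naive O(N^2) spatial distance histogram: bucket all pairwise
    distances with resolution bucket_width."""
    n = len(points)
    hist = np.zeros(n_buckets, dtype=np.int64)
    for i in range(n - 1):
        d = np.linalg.norm(points[i + 1:] - points[i], axis=1)
        idx = np.minimum((d / bucket_width).astype(int), n_buckets - 1)
        np.add.at(hist, idx, 1)
    return hist

rng = np.random.default_rng(1)
pts = rng.random((500, 3))   # 500 particles in the unit cube
hist = sdh(pts, bucket_width=0.25, n_buckets=8)
```

A useful invariant: the histogram always sums to N*(N-1)/2, the number of unordered pairs, regardless of bucket settings.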
Nagaoka, Tomoaki; Watanabe, Soichi
2012-01-01
Electromagnetic simulation with an anatomically realistic computational human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the computational human model, we adapted three-dimensional FDTD code to a multi-GPU cluster environment with the Compute Unified Device Architecture (CUDA) and the Message Passing Interface (MPI). Our multi-GPU cluster system consists of three nodes, with seven GPU boards (NVIDIA Tesla C2070) mounted on each node. We examined the performance of the FDTD calculation in this environment and confirmed that the FDTD calculation on the multi-GPU cluster is faster than on a single multi-GPU workstation; we also found that the GPU cluster system calculates faster than a vector supercomputer. In addition, our GPU cluster system allowed us to perform large-scale FDTD calculations because we were able to use over 100 GB of GPU memory.
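The FDTD kernel being parallelized is, at its core, a pair of leapfrog curl updates on interleaved grids. A 1D sketch in normalized units (not the authors' 3D CUDA/MPI code) illustrates the stencil structure that maps well to GPUs:

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=400, src=100):
    """Bare-bones 1D Yee FDTD in normalized units (Courant number 1)
    with a soft Gaussian source; boundaries are simple hard walls."""
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells)
    for t in range(n_steps):
        hy[:-1] += ez[1:] - ez[:-1]                    # H update from curl E
        ez[1:] += hy[1:] - hy[:-1]                     # E update from curl H
        ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)   # soft Gaussian source
    return ez

ez = fdtd_1d()
```

Each update touches only nearest neighbors, which is why the scheme decomposes cleanly across GPU threads and, with halo exchange, across cluster nodes.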
Simulation of the photodetachment spectrum of HHfO- using coupled-cluster calculations
NASA Astrophysics Data System (ADS)
Mok, Daniel K. W.; Dyke, John M.; Lee, Edmond P. F.
2016-12-01
The photodetachment spectrum of HHfO- was simulated using restricted-spin coupled-cluster single-double plus perturbative triple [RCCSD(T)] calculations performed on the ground electronic states of HHfO and HHfO-, employing basis sets of up to quintuple-zeta quality. The computed RCCSD(T) electron affinity of 1.67 ± 0.02 eV at the complete basis set limit, including Hf 5s^2 5p^6 core correlation and zero-point energy corrections, agrees well with the experimental value of 1.70 ± 0.05 eV from a recent photodetachment study [X. Li et al., J. Chem. Phys. 136, 154306 (2012)]. For the simulation, Franck-Condon factors were computed which included allowances for anharmonicity and Duschinsky rotation. Comparisons between simulated and experimental spectra confirm the assignments of the molecular carrier and electronic states involved but suggest that the experimental vibrational structure has suffered from a poor signal-to-noise ratio. An alternative assignment of the vibrational structure to that suggested in the experimental work is presented.
Inai, Takuma; Takabayashi, Tomoya; Edama, Mutsuaki; Kubo, Masayoshi
2018-04-27
The association between repetitive hip moment impulse and the progression of hip osteoarthritis is a recently recognized area of study. A sit-to-stand movement is essential for daily life and requires hip extension moment. Although a change in the sit-to-stand movement time may influence the hip moment impulse in the sagittal plane, this effect has not been examined. The purpose of this study was to clarify the relationship between sit-to-stand movement time and hip moment impulse in the sagittal plane. Twenty subjects performed the sit-to-stand movement at a self-selected natural speed. The hip, knee, and ankle joint angles obtained from experimental trials were used to perform two computer simulations. In the first simulation, the actual sit-to-stand movement time obtained from the experiment was entered. In the second simulation, sit-to-stand movement times ranging from 0.5 to 4.0 s at intervals of 0.25 s were entered. Hip joint moments and hip moment impulses in the sagittal plane during sit-to-stand movements were calculated for both computer simulations. The reliability of the simulation model was confirmed, as indicated by the similarities in the hip joint moment waveforms (r = 0.99) and the hip moment impulses in the sagittal plane between the first computer simulation and the experiment. In the second computer simulation, the hip moment impulse in the sagittal plane decreased with a decrease in the sit-to-stand movement time, although the peak hip extension moment increased with a decrease in the movement time. These findings clarify the association between the sit-to-stand movement time and hip moment impulse in the sagittal plane and may contribute to the prevention of the progression of hip osteoarthritis.
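The quantity of interest above, the hip moment impulse, is the time integral of the hip moment. The sketch below integrates a hypothetical half-sine moment profile whose peak grows when the movement is performed faster; with the assumed exponent the impulse still falls for shorter movement times, mirroring the paper's qualitative finding. The profile and exponent are illustrative, not the study's simulation model:

```python
import numpy as np

def hip_moment_impulse(movement_time, peak_exponent=-0.5, n=2001):
    """Hypothetical half-sine hip extension moment profile whose peak
    scales as movement_time**peak_exponent; returns (impulse, peak),
    with the impulse computed by the trapezoidal rule."""
    t = np.linspace(0.0, movement_time, n)
    peak = movement_time ** peak_exponent              # arbitrary units
    moment = peak * np.sin(np.pi * t / movement_time)
    impulse = np.sum(0.5 * (moment[1:] + moment[:-1]) * np.diff(t))
    return impulse, peak

impulse_fast, peak_fast = hip_moment_impulse(0.5)   # fast sit-to-stand
impulse_slow, peak_slow = hip_moment_impulse(4.0)   # slow sit-to-stand
```

For this profile the impulse is peak * 2T/pi, so a higher peak over a much shorter time still yields a smaller impulse, the same direction of effect the second simulation reports.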
Modeling, Simulation and Analysis of Public Key Infrastructure
NASA Technical Reports Server (NTRS)
Liu, Yuan-Kwei; Tuey, Richard; Ma, Paul (Technical Monitor)
1998-01-01
Security is an essential part of network communication. Advances in cryptography have provided solutions to many network security requirements. Public Key Infrastructure (PKI) is the foundation of cryptography applications. The main objective of this research is to design a model to simulate a reliable, scalable, manageable, and high-performance public key infrastructure. We built a model to simulate the NASA public key infrastructure using SimProcess and MATLAB software. The simulation spans from the top level all the way down to the computation needed for encryption, decryption, digital signatures, and a secure web server. The secure web server application could also be utilized in wireless communications. The results of the simulation are analyzed and confirmed using queueing theory.
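The abstract does not state which queueing model was used for the confirmation; the standard starting point for such an analysis is the M/M/1 queue, sketched here with hypothetical request rates for a certificate-signing server:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 formulas (Poisson arrivals, exponential
    service): a first-cut capacity model for a PKI server."""
    rho = arrival_rate / service_rate                    # utilization
    if rho >= 1.0:
        raise ValueError("queue is unstable: arrivals exceed capacity")
    mean_in_system = rho / (1.0 - rho)                   # L, requests
    mean_response = 1.0 / (service_rate - arrival_rate)  # W, seconds
    return rho, mean_in_system, mean_response

# e.g. 40 signature requests/s against a server that handles 50/s
rho, L, W = mm1_metrics(40.0, 50.0)
```

At 80% utilization the mean response time (0.1 s here) is already 5x the bare service time, which is the kind of conclusion a queueing-theory check of the simulation would surface.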
Influence of Contact Angle Boundary Condition on CFD Simulation of T-Junction
NASA Astrophysics Data System (ADS)
Arias, S.; Montlaur, A.
2018-03-01
In this work, we study the influence of the contact angle boundary condition on 3D CFD simulations of the bubble generation process occurring in a capillary T-junction. Numerical simulations have been performed with the commercial Computational Fluid Dynamics solver ANSYS Fluent v15.0.7. Experimental results serve as a reference to validate the numerical results for four independent parameters: the bubble generation frequency, volume, velocity, and length. The CFD simulations accurately reproduce the experimental results from both qualitative and quantitative points of view. The numerical results are very sensitive to the gas-liquid-wall contact angle boundary condition, confirming that this is a fundamental parameter for obtaining accurate CFD results in simulations of this kind of problem.
NIMROD: A computational laboratory for studying nonlinear fusion magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Sovinec, C. R.; Gianakon, T. A.; Held, E. D.; Kruger, S. E.; Schnack, D. D.
2003-05-01
Nonlinear numerical studies of macroscopic modes in a variety of magnetic fusion experiments are made possible by the flexible high-order accurate spatial representation and semi-implicit time advance in the NIMROD simulation code [A. H. Glasser et al., Plasma Phys. Controlled Fusion 41, A747 (1999)]. Simulation of a resistive magnetohydrodynamics mode in a shaped toroidal tokamak equilibrium demonstrates computation with disparate time scales, simulations of discharge 87009 in the DIII-D tokamak [J. L. Luxon et al., Plasma Physics and Controlled Nuclear Fusion Research 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. I, p. 159] confirm an analytic scaling for the temporal evolution of an ideal mode subject to plasma-β increasing beyond marginality, and a spherical torus simulation demonstrates nonlinear free-boundary capabilities. A comparison of numerical results on magnetic relaxation finds the n=1 mode and flux amplification in spheromaks to be very closely related to the m=1 dynamo modes and magnetic reversal in reversed-field pinch configurations. Advances in local and nonlocal closure relations developed for modeling kinetic effects in fluid simulation are also described.
Mou, Yun; Huang, Po-Ssu; Thomas, Leonard M; Mayo, Stephen L
2015-08-14
In standard implementations of computational protein design, a positive-design approach is used to predict sequences that will be stable on a given backbone structure. Possible competing states are typically not considered, primarily because appropriate structural models are not available. One potential competing state, the domain-swapped dimer, is especially compelling because it is often nearly identical with its monomeric counterpart, differing by just a few mutations in a hinge region. Molecular dynamics (MD) simulations provide a computational method to sample different conformational states of a structure. Here, we tested whether MD simulations could be used as a post-design screening tool to identify sequence mutations leading to domain-swapped dimers. We hypothesized that a successful computationally designed sequence would have backbone structure and dynamics characteristics similar to those of the input structure and that, in contrast, domain-swapped dimers would exhibit increased backbone flexibility and/or altered structure in the hinge-loop region to accommodate the large conformational change required for domain swapping. While attempting to engineer a homodimer from a 51-amino-acid fragment of the monomeric protein engrailed homeodomain (ENH), we had instead generated a domain-swapped dimer (ENH_DsD). MD simulations on these proteins showed increased simulation-derived B-factors in the hinge loop of the ENH_DsD domain-swapped dimer relative to monomeric ENH. Two point mutants of ENH_DsD designed to recover the monomeric fold were then tested with an MD simulation protocol. The MD simulations suggested that one of these mutants would adopt the target monomeric structure, which was subsequently confirmed by X-ray crystallography. Copyright © 2015. Published by Elsevier Ltd.
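The B-factor comparison described above is straightforward to compute from a trajectory: B = (8*pi^2/3) times the mean-square positional fluctuation about the mean structure. A sketch with synthetic stand-in coordinates follows; real input would come from an MD trajectory of ENH/ENH_DsD:

```python
import numpy as np

def md_b_factors(traj):
    """Per-atom B-factors from a trajectory of shape (n_frames,
    n_atoms, 3), coordinates in Angstroms:
    B = (8*pi**2/3) * mean-square fluctuation about the mean."""
    mean_structure = traj.mean(axis=0)
    msf = ((traj - mean_structure) ** 2).sum(axis=2).mean(axis=0)
    return (8.0 * np.pi ** 2 / 3.0) * msf

# Synthetic stand-in data: a rigid core versus a floppy hinge loop
rng = np.random.default_rng(0)
core = rng.normal(0.0, 0.1, size=(500, 10, 3))   # sigma = 0.1 A
hinge = rng.normal(0.0, 0.5, size=(500, 10, 3))  # sigma = 0.5 A
b_core = md_b_factors(core).mean()
b_hinge = md_b_factors(hinge).mean()
```

Because B scales with the square of the fluctuation amplitude, even a modestly floppier hinge loop stands out sharply in this metric, which is what makes it a usable post-design screen.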
Computer simulation of fibrillation threshold measurements and electrophysiologic testing procedures
NASA Technical Reports Server (NTRS)
Grumbach, M. P.; Saxberg, B. E.; Cohen, R. J.
1987-01-01
A finite element model of cardiac conduction was used to simulate two experimental protocols: 1) fibrillation threshold measurements and 2) clinical electrophysiologic (EP) testing procedures. The model consisted of a cylindrical lattice whose properties were determined by four parameters: element length, conduction velocity, mean refractory period, and standard deviation of refractory periods. Different stimulation patterns were applied to the lattice under a given set of lattice parameter values and the response of the model was observed through a simulated electrocardiogram. The studies confirm that the model can account for observations made in experimental fibrillation threshold measurements and in clinical EP testing protocols.
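The lattice model above can be caricatured in a few lines: a cell fires when a neighbor fired on the previous step, unless it is refractory. The paper's model is a cylindrical lattice with refractory periods drawn from a distribution; the deterministic 1D sketch below, with a fixed refractory period, only illustrates wave propagation and refractory block:

```python
import numpy as np

def activation_times(n=50, steps=60, refractory=5):
    """Toy 1D excitable lattice.  Returns the step at which each cell
    first fired (-1 if never); the wave advances one cell per step and
    the refractory timer blocks retrograde re-excitation."""
    timer = np.zeros(n, dtype=int)    # remaining refractory steps
    fired = np.zeros(n, dtype=bool)
    fired[0] = True                   # stimulate the first cell
    timer[0] = refractory
    when = np.full(n, -1)
    when[0] = 0
    for t in range(1, steps):
        neighbor = np.zeros(n, dtype=bool)
        neighbor[1:] |= fired[:-1]
        neighbor[:-1] |= fired[1:]
        new_fired = neighbor & (timer == 0)
        timer[timer > 0] -= 1
        timer[new_fired] = refractory
        fired = new_fired
        when[new_fired & (when < 0)] = t
    return when

when = activation_times()
```

Making the refractory periods heterogeneous, as the paper does, is what allows wavebreak and fibrillation-like re-entry to emerge in such lattices.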
NASA Astrophysics Data System (ADS)
Reeta Felscia, U.; Rajkumar, Beulah J. M.; Sankar, Pranitha; Philip, Reji; Briget Mary, M.
2017-09-01
The interaction of pyrene with silver has been investigated using both experimental and computational methods. Hyperpolarizabilities computed theoretically, together with the experimental nonlinear absorption from open-aperture Z-scan measurements, point towards a possible use of pyrene adsorbed on silver in the rational design of NLO devices. The presence of a red shift in both the simulated and experimental UV-Vis spectra confirms the adsorption on silver, which is due to the electrostatic interaction between silver and pyrene and induces variations in the structural parameters of pyrene. Fukui calculations along with the MEP plot predict the electrophilic nature of the silver cluster in the presence of pyrene, with NBO analysis revealing that adsorption causes charge redistribution from the first three rings of pyrene towards the fourth ring, from where the 2p orbitals of carbon interact with the valence 5s orbitals of the cluster. This is further confirmed by the downshifting of the ring-breathing modes in both the experimental and theoretical Raman spectra.

Computer simulation of solutions of polyharmonic equations in plane domain
NASA Astrophysics Data System (ADS)
Kazakova, A. O.
2018-05-01
A systematic study of plane problems in the theory of polyharmonic functions is presented. A method is given for reducing boundary value problems for polyharmonic functions to a system of integral equations on the boundary of the domain, and a numerical algorithm for computing solutions of this system is suggested. Particular attention is paid to the numerical solution of the main boundary value problems, in which the values of the function and its derivatives are prescribed. Test examples are considered that confirm the effectiveness and accuracy of the suggested algorithm.
Chitale, Rohan; Ghobrial, George M; Lobel, Darlene; Harrop, James
2013-10-01
The learning and development of technical skills are paramount for neurosurgical trainees. External influences and a need to maximize efficiency and proficiency have encouraged advancements in simulator-based learning models. The objective was to confirm the importance of establishing an educational curriculum for teaching minimally invasive techniques of pedicle screw placement using a computer-enhanced physical model of percutaneous pedicle screw placement with simultaneous didactic and technical components. A 2-hour educational curriculum was created to educate neurosurgical residents on the anatomy, pathophysiology, and technical aspects associated with image-guided pedicle screw placement. Pre- and postdidactic practical and written scores were analyzed and compared. Scores were calculated for each participant on the basis of the optimal pedicle screw starting point and trajectory for both fluoroscopy and computed tomographic navigation. Eight trainees participated in this module. Mean scores on the written didactic test improved from 78% to 100%. The technical component scores for fluoroscopic guidance improved from 58.8 to 52.9, and the scores for computed tomography-navigated guidance improved from 28.3 to 26.6 (lower technical scores reflect smaller deviations from the optimal starting point and trajectory). Didactic and technical quantitative scores with a simulator-based educational curriculum improved objectively measured resident performance. A minimally invasive spine simulation model and curriculum may serve a valuable function in the education of neurosurgical residents and in outcomes for patients.
Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.
Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz
2015-01-01
This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real time by ray-casting-based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this end, different motion models that are applicable in real time are presented, and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for the haptic simulation and interactive frame rates for volume rendering, and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS
NASA Astrophysics Data System (ADS)
Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L.; Bolch, Wesley E.
2017-06-01
A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced that initially prepares decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with a varying number of meshes, and the required computational times were compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of a region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also improved computational efficiency for radiation transport. Due to the adaptability of tetrahedra in both size and shape, dosimetrically equivalent objects can be represented by far fewer tetrahedral elements than in a voxelized representation. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, about a factor of 4, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel geometry.
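Particle tracking on a tetrahedral mesh ultimately rests on a point-in-tetrahedron predicate; a barycentric-coordinate version is sketched below. This is a generic illustration, not the decomposition-map algorithm the paper describes:

```python
import numpy as np

def in_tetrahedron(p, verts, eps=1e-12):
    """Point-in-tetrahedron test via barycentric coordinates: solve
    p - v0 = T @ b with T = [v1-v0 | v2-v0 | v3-v0]; the point is
    inside iff b1, b2, b3 >= 0 and b1 + b2 + b3 <= 1."""
    v0, v1, v2, v3 = (np.asarray(v, dtype=float) for v in verts)
    T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))
    b = np.linalg.solve(T, np.asarray(p, dtype=float) - v0)
    return bool(np.all(b >= -eps) and b.sum() <= 1.0 + eps)

unit_tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
```

The paper's decomposition maps exist precisely to avoid running a test like this against every element: they narrow each lookup to the handful of tetrahedra overlapping one cell of a coarse grid.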
Ozaki, Y.; Kaida, A.; Miura, M.; Nakagawa, K.; Toda, K.; Yoshimura, R.; Sumi, Y.; Kurabayashi, T.
2017-01-01
Early-stage oral cancer can be cured with oral brachytherapy, but whole-body radiation exposure status has not been previously studied. Recently, the International Commission on Radiological Protection (ICRP) recommended the use of ICRP phantoms to estimate radiation exposure from external and internal radiation sources. In this study, we used a Monte Carlo simulation with ICRP phantoms to estimate whole-body exposure from oral brachytherapy. We used the Particle and Heavy Ion Transport code System (PHITS) to model oral brachytherapy with 192Ir hairpins and 198Au grains and to perform a Monte Carlo simulation on the ICRP adult reference computational phantoms. To confirm the simulations, we also computed local dose distributions from these small sources and compared them with results from the Oncentra manual Low Dose Rate Treatment Planning (mLDR) software used in day-to-day clinical practice. We successfully obtained data on the absorbed dose for each organ in males and females. Sex-averaged equivalent doses were 0.547 and 0.710 Sv with 192Ir hairpins and 198Au grains, respectively. The simulation with PHITS was reliable when compared with the alternative computational technique using the mLDR software. We conclude that the absorbed dose for each organ and the whole-body exposure from oral brachytherapy can be estimated with Monte Carlo simulation using PHITS on ICRP reference phantoms. Effective doses for patients with oral cancer were obtained. PMID:28339846
Laviola, Marianna; Das, Anup; Chikhani, Marc; Bates, Declan G; Hardman, Jonathan G
2017-07-01
Gas mixing in the anatomical deadspace, with stimulation of respiratory ventilation through cardiogenic oscillations, is an important physiological mechanism at the onset of apnea that has been credited with various beneficial effects, e.g. the reduction of hypercapnia during the use of low-flow ventilation techniques. In this paper, a novel method is proposed to investigate the effect of these mechanisms in silico. An existing computational model of cardio-pulmonary physiology is extended to include the apneic state, gas mixing within the anatomical deadspace, insufflation into the trachea, and cardiogenic oscillations. The new model is validated against data published in an experimental animal (dog) study that reported an increase in the arterial partial pressure of carbon dioxide (PaCO2) during apnea. Computational simulations confirm that the model outputs accurately reproduce the available experimental data. This new model can be used to investigate the physiological mechanisms underlying the clearance of carbon dioxide during apnea, and hence to develop more effective ventilation strategies for apneic patients.
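The validation target, a rise in PaCO2 during apnea, can be caricatured with a single-compartment balance in which metabolic CO2 production fills a lumped body store. All parameter values below are illustrative, not taken from the paper's cardio-pulmonary model:

```python
def paco2_during_apnea(paco2_0=5.3, vco2=0.003, store_capacity=0.45,
                       dt=1.0, duration=300.0):
    """Single-compartment sketch of CO2 accumulation during apnea:
    with no ventilation, metabolic production (vco2, illustrative
    units) spread over a lumped body store (store_capacity) raises
    PaCO2 roughly linearly at vco2/store_capacity kPa per second."""
    paco2 = paco2_0
    trace = [paco2]
    for _ in range(int(duration / dt)):
        paco2 += (vco2 / store_capacity) * dt
        trace.append(paco2)
    return trace

trace = paco2_during_apnea()   # 5 min of apnea, ~2 kPa rise
```

Mechanisms such as cardiogenic mixing and tracheal insufflation act by restoring some CO2 clearance, i.e. flattening this rising trace, which is the effect the full model quantifies.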
Experimental study and simulation of space charge stimulated discharge
NASA Astrophysics Data System (ADS)
Noskov, M. D.; Malinovski, A. S.; Cooke, C. M.; Wright, K. A.; Schwab, A. J.
2002-11-01
The electrical discharge of volume distributed space charge in poly(methylmethacrylate) (PMMA) has been investigated both experimentally and by computer simulation. The experimental space charge was implanted in dielectric samples by exposure to a monoenergetic electron beam of 3 MeV. Electrical breakdown through the implanted space charge region within the sample was initiated by a local electric field enhancement applied to the sample surface. A stochastic-deterministic dynamic model for electrical discharge was developed and used in a computer simulation of these breakdowns. The model employs stochastic rules to describe the physical growth of the discharge channels, and deterministic laws to describe the electric field, the charge, and energy dynamics within the discharge channels and the dielectric. Simulated spatial-temporal and current characteristics of the expanding discharge structure during physical growth are quantitatively compared with the experimental data to confirm the discharge model. It was found that a single fixed set of physically based dielectric parameter values was adequate to simulate the complete family of experimental space charge discharges in PMMA. It is proposed that such a set of parameters also provides a useful means to quantify the breakdown properties of other dielectrics.
Modeling Early Galaxies Using Radiation Hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This simulation uses a flux-limited diffusion solver to explore the radiation hydrodynamics of early galaxies, in particular the ionizing radiation created by Population III stars. At the time of this rendering, the simulation has evolved to a redshift of 3.5. The simulation volume is 11.2 comoving megaparsecs, with a uniform grid of 1024³ cells and over 1 billion dark matter and star particles. This animation shows a combined view of the baryon density, dark matter density, radiation energy and emissivity from this simulation. The multi-variate rendering is particularly useful because it shows both the baryonic ("normal") and dark matter, while the pressure and temperature variables are properties of only the baryonic matter. Visible in the gas density are "bubbles", or shells, created by the radiation feedback from young stars. Seeing these feedback bubbles provides confirmation of the implemented physics model. Features such as these are difficult to identify algorithmically, but easily found when viewing the visualization. The simulation was performed on Kraken at the National Institute for Computational Sciences. Visualization was produced using resources of the Argonne Leadership Computing Facility at Argonne National Laboratory.
DES Prediction of Cavitation Erosion and Its Validation for a Ship Scale Propeller
NASA Astrophysics Data System (ADS)
Ponkratov, Dmitriy, Dr
2015-12-01
Lloyd's Register Technical Investigation Department (LR TID) has developed numerical functions for the prediction of cavitation erosion aggressiveness within Computational Fluid Dynamics (CFD) simulations. These functions were previously validated for a model-scale hydrofoil and a ship-scale rudder [1]. For the current study the functions were applied to a cargo ship's full-scale propeller, on which severe cavitation erosion had been reported. The Detached Eddy Simulation (DES) performed required a fine computational mesh (approximately 22 million cells) together with a very small time step (2.0E-4 s). As the cavitation for this type of vessel is primarily caused by a highly non-uniform wake, the hull was also included in the simulation. The applied method under-predicted the cavitation extent and did not fully resolve the tip vortex; however, the areas of cavitation collapse were captured successfully. Consequently, the developed functions showed a very good prediction of erosion areas, as confirmed by comparison with underwater propeller inspection results.
NASA Technical Reports Server (NTRS)
Pujar, Vijay V.; Cawley, James D.; Levine, S. (Technical Monitor)
2000-01-01
Earlier results from computer simulation studies suggest a correlation between the spatial distribution of stacking errors in the Beta-SiC structure and features observed in X-ray diffraction (XRD) patterns of the material. Reported here are experimental results obtained from two types of nominally Beta-SiC specimens, which yield distinct XRD data. These samples were analyzed using high-resolution transmission electron microscopy (HRTEM) and the stacking error distribution was directly determined. The HRTEM results compare well with those deduced by matching the XRD data with simulated spectra, confirming the hypothesis that the XRD data are indicative not only of the presence and density of stacking errors, but can also yield information regarding their distribution. In addition, the stacking error population in both specimens is related to their synthesis conditions, and appears consistent with the relation developed by others to explain the formation of the corresponding polytypes.
Selectivity trend of gas separation through nanoporous graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Hongjun; Chen, Zhongfang; Dai, Sheng
2014-01-29
We demonstrate, using molecular dynamics (MD) simulations, that porous graphene can efficiently separate gases according to their molecular sizes. The flux sequence from the classical MD simulation is H2 > CO2 >> N2 > Ar > CH4, which generally follows the trend in the kinetic diameters. Moreover, this trend is also confirmed by the fluxes based on the computed free-energy barriers for gas permeation, using the umbrella sampling method and the kinetic theory of gases. Both brute-force MD simulations and free-energy calculations lead to a flux trend consistent with experiments. Case studies of two compositions of CO2/N2 mixtures further demonstrate the separation capability of nanoporous graphene.
NASA Astrophysics Data System (ADS)
Keserű, György M.; Vásárhelyi, Helga; Makara, Gergely
1994-09-01
The conformation of the new macrocyclic β-lactam (1) was investigated by NMR and molecular dynamics (MD) calculations. Restraints obtained from NOESY and ROESY experiments were introduced into MD simulations, which led to well-defined conformations. The preference for the calculated minimum-energy conformation was confirmed by the analysis of vicinal coupling constants. Experimental coupling constants agreed with computed values.
Plan View Pattern Control for Steel Plates through Constrained Locally Weighted Regression
NASA Astrophysics Data System (ADS)
Shigemori, Hiroyasu; Nambu, Koji; Nagao, Ryo; Araki, Tadashi; Mizushima, Narihito; Kano, Manabu; Hasebe, Shinji
A technique for performing parameter identification in a locally weighted regression model using foresight information on the physical properties of the object of interest as constraints was proposed. This method was applied to plan view pattern control of steel plates, and a reduction of shape nonconformity (crop) at the plate head end was confirmed by computer simulation based on real operation data.
McCormack, Patrick; Han, Fei; Yan, Zijie
2018-02-01
Light-driven self-organization of metal nanoparticles (NPs) can lead to unique optical matter systems, yet simulation of such self-organization (i.e., optical binding) is a complex computational problem that increases nonlinearly with system size. Here we show that a combined electrodynamics-molecular dynamics simulation technique can simulate the trajectories and predict stable configurations of silver NPs in optical fields. The simulated dynamic equilibrium of a two-NP system matches the probability density of oscillations for two optically bound NPs obtained experimentally. The predicted stable configurations for up to eight NPs are further compared to experimental observations of silver NP clusters formed by optical binding in a Bessel beam. All configurations are confirmed to form in real systems, including pentagonal clusters with five-fold symmetry. Our combined simulations and experiments have revealed a diverse optical matter system formed by anisotropic optical binding interactions, providing a new strategy to discover artificial materials.
Computational findings of metastable ferroelectric phases of squaric acid
NASA Astrophysics Data System (ADS)
Ishibashi, Shoji; Horiuchi, Sachio; Kumai, Reiji
2018-05-01
Antiferroelectric-to-ferroelectric transitions in squaric acid are simulated by computationally applying a static electric field. Depending on the direction of the electric field, two different metastable ferroelectric (and piezoelectric) phases have been found. One of them corresponds to the experimentally confirmed phase, whereas the other is an optimally polarized phase. The structural details of these phases have been determined as a function of the electric field. The spontaneous polarization values of the phases are 14.5 and 20.5 μC/cm², respectively, which are relatively high among the existing organic ferroelectrics.
Image formation of thick three-dimensional objects in differential-interference-contrast microscopy.
Trattner, Sigal; Kashdan, Eugene; Feigin, Micha; Sochen, Nir
2014-05-01
The differential-interference-contrast (DIC) microscope is of widespread use in life sciences as it enables noninvasive visualization of transparent objects. The goal of this work is to model the image formation process of thick three-dimensional objects in DIC microscopy. The model is based on the principles of electromagnetic wave propagation and scattering. It simulates light propagation through the components of the DIC microscope to the image plane using a combined geometrical and physical optics approach and replicates the DIC image of the illuminated object. The model is evaluated by comparing simulated images of three-dimensional spherical objects with the recorded images of polystyrene microspheres. Our computer simulations confirm that the model captures the major DIC image characteristics of the simulated object, and it is sensitive to the defocusing effects.
Computational Design of a Thermostable Mutant of Cocaine Esterase via Molecular Dynamics Simulations
Huang, Xiaoqin; Gao, Daquan; Zhan, Chang-Guo
2015-01-01
Cocaine esterase (CocE) is known as the most efficient native enzyme for metabolizing naturally occurring cocaine. A major obstacle to the clinical application of CocE is the thermoinstability of native CocE, with a half-life of only ~11 min at physiological temperature (37°C). It is highly desirable to develop a thermostable mutant of CocE for therapeutic treatment of cocaine overdose and addiction. To establish a structure-thermostability relationship, we carried out molecular dynamics (MD) simulations at 400 K on wild-type CocE and previously known thermostable mutants, demonstrating that the thermostability of the active form of the enzyme correlates with the fluctuation (characterized as the RMSD and RMSF of atomic positions) of the catalytic residues (Y44, S117, Y118, H287, and D259) in the simulated enzyme. In light of the structure-thermostability correlation, further computational modeling including MD simulations at 400 K predicted that the active-site structure of the L169K mutant should be more thermostable. The prediction has been confirmed by wet experimental tests showing that the active form of the L169K mutant had a half-life of 570 min at 37°C, significantly longer than those of the wild type and previously known thermostable mutants. The encouraging outcome suggests that high-temperature MD simulations and the structure-thermostability correlation may be considered a valuable tool for the computational design of thermostable mutants of an enzyme. PMID:21373712
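The fluctuation measure used for the structure-thermostability correlation, the RMSF of atomic positions about their trajectory average, can be sketched as follows (illustrative only; the synthetic two-atom trajectory stands in for aligned MD coordinates of the catalytic residues):

```python
import numpy as np

def rmsf(traj):
    """Root-mean-square fluctuation per atom.
    traj: array of shape (n_frames, n_atoms, 3), assumed already aligned
    to a common reference frame."""
    mean_pos = traj.mean(axis=0)                    # average structure
    diff = traj - mean_pos                          # displacement per frame
    return np.sqrt((diff ** 2).sum(axis=2).mean(axis=0))

# Synthetic example: atom 0 is rigid, atom 1 oscillates along x
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
traj = np.zeros((100, 2, 3))
traj[:, 1, 0] = np.sin(t)           # unit-amplitude oscillation
f = rmsf(traj)                      # f[0] = 0, f[1] = sqrt(mean(sin^2)) ~ 0.707
```

Larger RMSF values for the catalytic residues at elevated simulation temperature would then flag a less thermostable active site, which is the correlation the study exploits.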
Ozaki, Y; Watanabe, H; Kaida, A; Miura, M; Nakagawa, K; Toda, K; Yoshimura, R; Sumi, Y; Kurabayashi, T
2017-07-01
Early stage oral cancer can be cured with oral brachytherapy, but whole-body radiation exposure status has not been previously studied. Recently, the International Commission on Radiological Protection Committee (ICRP) recommended the use of ICRP phantoms to estimate radiation exposure from external and internal radiation sources. In this study, we used a Monte Carlo simulation with ICRP phantoms to estimate whole-body exposure from oral brachytherapy. We used a Particle and Heavy Ion Transport code System (PHITS) to model oral brachytherapy with 192Ir hairpins and 198Au grains and to perform a Monte Carlo simulation on the ICRP adult reference computational phantoms. To confirm the simulations, we also computed local dose distributions from these small sources, and compared them with the results from Oncentra manual Low Dose Rate Treatment Planning (mLDR) software which is used in day-to-day clinical practice. We successfully obtained data on absorbed dose for each organ in males and females. Sex-averaged equivalent doses were 0.547 and 0.710 Sv with 192Ir hairpins and 198Au grains, respectively. Simulation with PHITS was reliable when compared with an alternative computational technique using mLDR software. We concluded that the absorbed dose for each organ and whole-body exposure from oral brachytherapy can be estimated with Monte Carlo simulation using PHITS on ICRP reference phantoms. Effective doses for patients with oral cancer were obtained. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.
2014-02-01
To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM LPJ-GUESS to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by a factor of approximately 8, we were able to detect shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method, despite some limitations with respect to extreme climatic events. For the first time, it allowed us to obtain area-wide, detailed high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.
NASA Astrophysics Data System (ADS)
Araki, Samuel J.
2016-11-01
In the plumes of Hall thrusters and ion thrusters, high-energy ions experience elastic collisions with slow neutral atoms. These collisions involve a process of momentum exchange, altering the initial velocity vectors of the collision pair. In addition to the momentum exchange process, ions and atoms can exchange electrons, resulting in slow charge-exchange ions and fast atoms. In these simulations, it is particularly important to compute ion-atom elastic collisions accurately when determining the plume current profile and assessing the integration of spacecraft components. The existing models are currently capable of accurate calculation but are not fast enough, so the collision calculation can become a bottleneck in plume simulations. This study investigates methods to accelerate an ion-atom elastic collision calculation that includes both momentum- and charge-exchange processes. The scattering angles are pre-computed through a classical approach with an ab initio spin-orbit-free potential and are stored in a two-dimensional array as functions of impact parameter and energy. When performing a collision calculation for an ion-atom pair, the scattering angle is computed by a table lookup and multiple linear interpolations, given the relative energy and a randomly determined impact parameter. To further accelerate the calculations, the number of collision calculations is reduced by properly defining two cut-off cross-sections for the elastic scattering. In the MCC method, the target atom needs to be sampled; however, it is confirmed that the initial target atom velocity does not play a significant role in typical electric propulsion plume simulations, so the sampling process is unnecessary. With these implementations, the computational run-time of a collision calculation is reduced significantly compared to previous methods, while retaining the accuracy of the high-fidelity models.
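The table lookup with multiple linear interpolations described above amounts to a bilinear interpolation of a pre-computed scattering-angle table in the (impact parameter, energy) plane. An illustrative sketch (the uniform grid layout and the synthetic angle table are assumptions, not the study's data):

```python
import numpy as np

def bilinear_lookup(table, b_grid, e_grid, b, e):
    """Look up a pre-computed scattering angle chi(b, E) by bilinear interpolation.
    table[i, j] = chi at impact parameter b_grid[i] and energy e_grid[j].
    Grids are assumed uniform and ascending, with (b, e) inside the grid."""
    i = min(int((b - b_grid[0]) / (b_grid[1] - b_grid[0])), len(b_grid) - 2)
    j = min(int((e - e_grid[0]) / (e_grid[1] - e_grid[0])), len(e_grid) - 2)
    tb = (b - b_grid[i]) / (b_grid[i + 1] - b_grid[i])   # fractional positions
    te = (e - e_grid[j]) / (e_grid[j + 1] - e_grid[j])
    return ((1 - tb) * (1 - te) * table[i, j]     + tb * (1 - te) * table[i + 1, j]
          + (1 - tb) * te       * table[i, j + 1] + tb * te       * table[i + 1, j + 1])

# Example: a hypothetical table chi = b + 2E is bilinear, so the
# interpolation reproduces it exactly
b_grid = np.linspace(0.0, 1.0, 11)
e_grid = np.linspace(0.0, 10.0, 11)
table = b_grid[:, None] + 2.0 * e_grid[None, :]
chi = bilinear_lookup(table, b_grid, e_grid, 0.25, 3.3)   # -> 6.85
```

The per-collision cost is then a few index computations and four table reads, regardless of how expensive the underlying classical trajectory integration was, which is the source of the speedup the abstract reports.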
Validation of hydrogen gas stratification and mixing models
Wu, Hsingtzu; Zhao, Haihua
2015-05-26
Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling-based one-dimensional method to achieve a large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed: one for a single buoyant jet in an open space, and another for a large sealed enclosure with both a jet source and a vent near the floor. Both have been validated by comparisons with experimental data, and excellent agreement is observed. Entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results for the average helium concentration in an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, the computing time for each BMIX++ model on a normal desktop computer is less than 5 min.
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.
2015-12-01
Powerful numerical codes capable of modeling complex coupled physical and chemical processes have been developed for predicting the fate of CO2 in reservoirs, as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models in sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than those in homogeneous media, and uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large non-linear models at higher resolution within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively for general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolution to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale.
The performance measurement confirmed that the both simulators exhibit excellent scalabilities showing almost linear speedup against number of processors up to over ten thousand cores. Generally this allows us to perform coupled multi-physics (THC) simulations on high resolution geologic models with multi-million grid in a practical time (e.g., less than a second per time step).
Simulating pad-electrodes with high-definition arrays in transcranial electric stimulation
NASA Astrophysics Data System (ADS)
Kempe, René; Huang, Yu; Parra, Lucas C.
2014-04-01
Objective. Research studies on transcranial electric stimulation, including direct current, often use a computational model to provide guidance on the placing of sponge-electrode pads. However, the expertise and computational resources needed for finite element modeling (FEM) make modeling impractical in a clinical setting. Our objective is to make the exploration of different electrode configurations accessible to practitioners. We provide an efficient tool to estimate current distributions for arbitrary pad configurations while obviating the need for complex simulation software. Approach. To efficiently estimate current distributions for arbitrary pad configurations we propose to simulate pads with an array of high-definition (HD) electrodes and use an efficient linear superposition to then quickly evaluate different electrode configurations. Main results. Numerical results on ten different pad configurations on a normal individual show that electric field intensity simulated with the sampled array deviates from the solutions with pads by only 5% and the locations of peak magnitude fields have a 94% overlap when using a dense array of 336 electrodes. Significance. Computationally intensive FEM modeling of the HD array needs to be performed only once, perhaps on a set of standard heads that can be made available to multiple users. The present results confirm that by using these models one can now quickly and accurately explore and select pad-electrode montages to match a particular clinical need.
NASA Astrophysics Data System (ADS)
Abdi, Mohamad; Hajihasani, Mojtaba; Gharibzadeh, Shahriar; Tavakkoli, Jahan
2012-12-01
Ultrasound waves have been widely used in diagnostic and therapeutic medical applications. Accurate and effective simulation of ultrasound beam propagation and its interaction with tissue has proved to be important. The nonlinear nature of ultrasound beam propagation, especially in the therapeutic regime, plays an important role in the mechanisms of interaction with tissue. There are three main approaches in current computational fluid dynamics (CFD) methods to model and simulate nonlinear ultrasound beams: macroscopic, mesoscopic and microscopic approaches. In this work, a mesoscopic CFD method based on the Lattice-Boltzmann model (LBM) was investigated. In the developed method, the Boltzmann equation is evolved to simulate the flow of a Newtonian fluid with the collision model, instead of solving the Navier-Stokes, continuity and state equations used in conventional CFD methods. The LBM has some prominent advantages over conventional CFD methods, including: (1) its parallel computational nature; (2) taking microscopic boundaries into account; and (3) capability of simulating in porous and inhomogeneous media. In our proposed method, the propagating medium is discretized with a square grid in two dimensions with nine velocity vectors for each node. Using the developed model, the nonlinear distortion and shock front development of a finite-amplitude diffractive ultrasonic beam in a dissipative fluid medium was computed and validated against the published data. The results confirm that the LBM is an accurate and effective approach to model and simulate nonlinearity in finite-amplitude ultrasound beams with Mach numbers of up to 0.01, which, among others, falls within the range of the therapeutic ultrasound regime, such as high intensity focused ultrasound (HIFU) beams. A comparison between the HIFU nonlinear beam simulations using the proposed model and pseudospectral methods in a 2D geometry is presented.
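A minimal sketch of the two-dimensional, nine-velocity (D2Q9) lattice-Boltzmann scheme described above, using the single-relaxation-time BGK collision model on a periodic grid (illustrative only; not the authors' nonlinear-acoustics implementation, and boundary handling is omitted):

```python
import numpy as np

# D2Q9 lattice: discrete velocity vectors and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Maxwell-Boltzmann equilibrium truncated to second order in velocity."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux ** 2 + uy ** 2
    return rho * w[:, None, None] * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

def lbm_step(f, tau):
    """One BGK collision + streaming step; f has shape (9, nx, ny)."""
    rho = f.sum(axis=0)                                  # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho     # macroscopic velocity
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau         # relax toward equilibrium
    for k in range(9):                                   # stream along c_k
        f[k] = np.roll(np.roll(f[k], c[k, 0], axis=0), c[k, 1], axis=1)
    return f
```

Because both the collision and streaming steps conserve total mass exactly, a small density perturbation propagates as an acoustic wave while `f.sum()` stays constant, which is a convenient sanity check for any LBM implementation.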
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.
2014-07-01
To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM (dynamic global vegetation model) LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator) to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by a factor of approximately 8, we were able to detect the shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method, despite some limitations with respect to extreme climatic events. For the first time, it allowed us to obtain area-wide, detailed high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.
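The core GAPPARD idea, averaging the output of a single disturbance-free simulation over a patch age distribution set by the disturbance frequency, can be sketched as follows (illustrative only; the geometric age distribution and the hypothetical biomass growth curve are assumptions, not the published method's exact formulation):

```python
import numpy as np

def gappard_mean(undisturbed, p):
    """Approximate the landscape mean from one undisturbed simulation.
    undisturbed: output (e.g. biomass) as a function of stand age, index = age.
    p: annual stand-replacing disturbance probability.
    Patch ages are assumed to follow P(age = a) = p * (1 - p)**a."""
    ages = np.arange(len(undisturbed))
    wts = p * (1 - p) ** ages
    wts /= wts.sum()                 # renormalize over the truncated age range
    return float(np.dot(wts, undisturbed))

# Hypothetical biomass curve saturating with stand age
age = np.arange(500)
biomass = 300.0 * (1 - np.exp(-age / 80.0))
```

With this weighting, a higher disturbance frequency shifts the age distribution toward young patches and lowers the landscape-mean biomass, while only the single undisturbed run ever has to be simulated, which is where the reported speedup comes from.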
Ogata, Yuma; Ohnishi, Takashi; Moriya, Takahiro; Inadama, Naoko; Nishikido, Fumihiko; Yoshida, Eiji; Murayama, Hideo; Yamaya, Taiga; Haneishi, Hideaki
2014-01-01
The X'tal cube is a next-generation DOI detector for PET that we are developing to offer higher resolution and higher sensitivity than is available with present detectors. It is constructed from a cubic monolithic scintillation crystal and silicon photomultipliers which are coupled on various positions of the six surfaces of the cube. A laser-processing technique is applied to produce 3D optical boundaries composed of micro-cracks inside the monolithic scintillator crystal. The current configuration is based on an empirical trial of a laser-processed boundary. There is room to improve the spatial resolution by optimizing the setting of the laser-processed boundary. In fact, the laser-processing technique has high freedom in setting the parameters of the boundary such as size, pitch, and angle. Computer simulation can effectively optimize such parameters. In this study, to design optical characteristics properly for the laser-processed crystal, we developed a Monte Carlo simulator which can model arbitrary arrangements of laser-processed optical boundaries (LPBs). The optical characteristics of the LPBs were measured by use of a setup with a laser and a photo-diode, and then modeled in the simulator. The accuracy of the simulator was confirmed by comparison of position histograms obtained from the simulation and from experiments with a prototype detector composed of a cubic LYSO monolithic crystal with 6 × 6 × 6 segments and multi-pixel photon counters. Furthermore, the simulator was accelerated by parallel computing with general-purpose computing on a graphics processing unit. The calculation speed was about 400 times faster than that with a CPU.
Three-dimensional monochromatic x-ray computed tomography using synchrotron radiation
NASA Astrophysics Data System (ADS)
Saito, Tsuneo; Kudo, Hiroyuki; Takeda, Tohoru; Itai, Yuji; Tokumori, Kenji; Toyofuku, Fukai; Hyodo, Kazuyuki; Ando, Masami; Nishimura, Katsuyuki; Uyama, Chikao
1998-08-01
We describe a technique of 3D computed tomography (3D CT) using monochromatic x rays generated by synchrotron radiation, which performs a direct reconstruction of a 3D volume image of an object from its cone-beam projections. For the development, we propose a practical scanning orbit of the x-ray source to obtain complete 3D information on an object, and its corresponding 3D image reconstruction algorithm. The validity and usefulness of the proposed scanning orbit and reconstruction algorithm were confirmed by computer simulation studies. Based on these investigations, we have developed a prototype 3D monochromatic x-ray CT using synchrotron radiation, which provides exact 3D reconstruction and material-selective imaging by using the K-edge energy subtraction technique.
Fitting neuron models to spike trains.
Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
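The vectorization strategy mentioned above, simulating many candidate parameter sets in parallel inside a single time loop, can be sketched with a leaky integrate-and-fire model (an illustrative sketch; this is not the Brian toolbox API, and the parameter values are arbitrary):

```python
import numpy as np

def lif_spikes_vectorized(I, dt, tau, v_thresh, v_reset=0.0):
    """Simulate leaky integrate-and-fire neurons for many parameter sets at once.
    I: shared input current trace, shape (n_steps,).
    tau, v_thresh: candidate parameter arrays, shape (n_params,).
    Returns a boolean spike raster of shape (n_steps, n_params)."""
    n = len(I)
    v = np.full(tau.shape, v_reset)            # one membrane voltage per candidate
    raster = np.zeros((n, len(tau)), dtype=bool)
    for t in range(n):                         # time loop; parameters vectorized
        v = v + dt * (-v + I[t]) / tau         # leaky integration
        fired = v >= v_thresh
        raster[t] = fired
        v = np.where(fired, v_reset, v)        # reset the neurons that spiked
    return raster
```

In a fitting loop, each column of the raster would be scored against the recorded spike train (e.g. with a coincidence measure), so the cost of evaluating thousands of candidate models is one vectorized simulation rather than thousands of scalar ones.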
The ambivalent effect of lattice structure on a spatial game
NASA Astrophysics Data System (ADS)
Zhang, Hui; Gao, Meng; Li, Zizhen; Maa, Zhihui; Wang, Hailong
2011-06-01
The evolution of cooperation is studied in lattice-structured populations, in which each individual adopts one of the strategies 'always defect' (ALLD), 'tit-for-tat' (TFT), or 'always cooperate' (ALLC) and plays the repeated Prisoner's Dilemma game with its neighbors according to an asynchronous update rule. Computer simulations are applied to analyse the dynamics depending on the major parameters. Mathematical analyses based on invasion probability analysis, mean-field approximation, and pair approximation are also used. We find that the lattice structure promotes the evolution of cooperation compared with a non-spatial population, which is also confirmed by invasion probability analysis in one dimension. At the same time, it also inhibits the evolution of cooperation owing to the advantage of being spiteful, which indicates the key role of specific life-history assumptions. Mean-field approximation fails to predict the outcome of computer simulations. Pair approximation is accurate in two dimensions but fails in one dimension.
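An illustrative sketch of such a spatial repeated game with asynchronous updates (the payoff values, number of rounds, and imitate-the-best rule are common textbook choices, not necessarily the paper's exact setup):

```python
import numpy as np

ALLD, TFT, ALLC = 0, 1, 2
T_, R_, P_, S_ = 5.0, 3.0, 1.0, 0.0          # standard PD payoffs, T > R > P > S
M = 10                                        # rounds per repeated game

def pair_payoff(a, b):
    """Payoff to strategy a against b over M rounds of the repeated PD."""
    if a == ALLD and b == ALLD: return M * P_
    if a == ALLD and b == ALLC: return M * T_
    if a == ALLD and b == TFT:  return T_ + (M - 1) * P_   # exploits once
    if a == ALLC and b == ALLD: return M * S_
    if a == TFT  and b == ALLD: return S_ + (M - 1) * P_   # suckered once
    return M * R_                             # any TFT/ALLC pairing cooperates

PAY = np.array([[pair_payoff(a, b) for b in range(3)] for a in range(3)])

def step(grid, rng):
    """One asynchronous update: a random site imitates its highest-scoring
    neighbor (von Neumann neighborhood, periodic boundaries, self included)."""
    n = grid.shape[0]
    i, j = rng.integers(n, size=2)
    nbrs = [((i + di) % n, (j + dj) % n) for di, dj in
            ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))]
    def score(a, b):
        return sum(PAY[grid[a, b], grid[(a + di) % n, (b + dj) % n]]
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    best = max(nbrs, key=lambda ab: score(*ab))
    grid[i, j] = grid[best]
```

Starting from a random grid and iterating `step` many times shows the spatial effects the abstract discusses: clusters of cooperators can persist on the lattice even though ALLD dominates in the well-mixed (mean-field) version of the same game.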
Lipid Biomembrane in Ionic Liquids
NASA Astrophysics Data System (ADS)
Yoo, Brian; Jing, Benxin; Shah, Jindal; Maginn, Ed; Zhu, Y. Elaine; Department of Chemical and Biomolecular Engineering Team
2014-03-01
Ionic liquids (ILs) have recently been explored as new "green" chemicals in several chemical and biomedical processes. In our pursuit of understanding their toxicities towards aquatic and terrestrial organisms, we have examined the interaction of ILs with lipid bilayers as model cell membranes. Experimentally, by fluorescence microscopy, we have directly observed the disruption of lipid bilayers by added ILs. Depending on the concentration, alkyl chain length, and anion hydrophobicity of the ILs, their interaction with lipid bilayers leads to the formation of micelles, fibrils, and multi-lamellar vesicles of IL-lipid complexes. By MD computer simulations, we have confirmed the insertion of ILs into lipid bilayers, modifying the spatial organization of lipids in the membrane. The combined experimental and simulation results correlate well with bioassay results of IL-induced suppression of bacterial growth, thereby suggesting a possible mechanism behind IL toxicity. National Science Foundation, Center for Research Computing at Notre Dame.
Conformational Heterogeneity of Bax Helix 9 Dimer for Apoptotic Pore Formation
NASA Astrophysics Data System (ADS)
Liao, Chenyi; Zhang, Zhi; Kale, Justin; Andrews, David W.; Lin, Jialing; Li, Jianing
2016-07-01
Helix α9 of the Bax protein can dimerize in the mitochondrial outer membrane (MOM) and lead to apoptotic pores. However, it remains unclear how different conformations of the dimer contribute to pore formation at the molecular level. We have therefore investigated various conformational states of the α9 dimer in a MOM model, using computer simulations supplemented with site-specific mutagenesis and crosslinking of the α9 helices. Our data not only confirmed that the membrane environment is critical for α9 stability and dimerization, but also revealed the distinct lipid-binding preferences of the dimer in different conformational states. In our proposed pathway, a crucial iso-parallel dimer that mediates the conformational transition was discovered computationally and validated experimentally. The corroborating evidence from simulations and experiments suggests that helix α9 assists Bax activation via dimer heterogeneity and interactions with specific MOM lipids, which eventually facilitate proteolipidic pore formation in apoptosis regulation.
Finite element analyses of a linear-accelerator electron gun
NASA Astrophysics Data System (ADS)
Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.
2014-02-01
Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in the computer-aided three-dimensional interactive application (CATIA) for finite element analyses through the ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results for the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters at the defined operating temperature. The gun has been operating continuously since commissioning, without any thermally induced failures, in the BEPCII linear accelerator.
NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.
Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C
2011-09-14
An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics
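The paper derives its geodesic step from a discretized stationarity condition with Lagrange multipliers. The sketch below illustrates only the general idea of a constant-potential-energy integrator, not the paper's NVU algorithm: a Verlet-style extrapolation followed by a Newton correction along the force direction that projects the new point back onto the U = U0 level set. The toy 2-D potential and all names are invented for illustration (the paper simulates a Lennard-Jones liquid).

```python
import numpy as np

def U(r):                        # toy potential (illustrative only)
    x, y = r
    return 0.5 * x**2 + y**2

def grad_U(r):
    x, y = r
    return np.array([x, 2.0 * y])

def constant_U_step(r_prev, r, U0):
    """Verlet-style extrapolation, then a Newton correction along the force
    direction that returns the new point to the U = U0 hypersurface.
    A schematic stand-in for the paper's Lagrange-multiplier construction."""
    guess = 2.0 * r - r_prev
    f = -grad_U(guess)
    fhat = f / np.linalg.norm(f)
    lam = 0.0
    for _ in range(50):          # Newton iteration on the scalar multiplier
        p = guess + lam * fhat
        g = U(p) - U0
        if abs(g) < 1e-13:
            break
        lam -= g / (grad_U(p) @ fhat)
    return guess + lam * fhat

r_prev = np.array([1.0, 0.0])
r = np.array([0.999, 0.02])
U0 = U(r)
for _ in range(1000):
    r_prev, r = r, constant_U_step(r_prev, r, U0)
print(abs(U(r) - U0))            # potential energy conserved to near machine precision
```

As the abstract notes, a production algorithm additionally needs step-length conservation and drift control for very long runs; this sketch omits both.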
Pang, Yujia; Li, Wenliang; Zhang, Jingping
2017-09-15
A novel type of porous organic framework based on Mg-porphyrin, with diamond-like topology, named POF-Mgs, is computationally designed, and the uptakes of CO2, H2, N2, and H2O in POF-Mgs are investigated by grand canonical Monte Carlo simulations based on first-principles-derived force fields (FFs). The FFs, which describe the interactions between POF-Mgs and the gases, are fitted to dispersion-corrected double-hybrid density functional theory (B2PLYP-D3) calculations. The good agreement between the fitted FFs and the first-principles energies confirms the reliability of the FFs. Furthermore, our simulations show that the presence of a small amount of H2O (≤ 0.01 kPa) does not much affect the adsorption of CO2, but a higher partial pressure of H2O (≥ 0.1 kPa) causes the CO2 adsorption to decrease significantly. The good performance of POF-Mgs in the simulations inspires us to design novel porous materials experimentally for gas adsorption and purification. © 2017 Wiley Periodicals, Inc.
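A minimal sketch of the grand canonical Monte Carlo insertion/deletion moves underlying such uptake calculations, here with an ideal-gas (non-interacting) Hamiltonian so the result can be checked against the exact answer ⟨N⟩ = zV. The paper's simulations instead use the fitted first-principles force fields for the gas-framework interactions; the activity and volume below are arbitrary illustrative values.

```python
import random

random.seed(0)
z, V = 0.5, 100.0        # activity and volume (illustrative values)
N = 0                    # current number of gas molecules in the box
samples = []
for step in range(200000):
    if random.random() < 0.5:
        # Attempt insertion; ideal-gas acceptance probability min(1, zV/(N+1)).
        if random.random() < min(1.0, z * V / (N + 1)):
            N += 1
    elif N > 0:
        # Attempt deletion; acceptance probability min(1, N/(zV)).
        if random.random() < min(1.0, N / (z * V)):
            N -= 1
    if step > 20000:     # discard the equilibration period
        samples.append(N)
print(sum(samples) / len(samples))   # ≈ z*V = 50 for an ideal gas
```

With interacting systems the acceptance rules gain Boltzmann factors of the insertion/deletion energy computed from the force field, but the move structure is the same.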
Explicit and implicit calculations of turbulent cavity flows with and without yaw angle
NASA Astrophysics Data System (ADS)
Yen, Guan-Wei
1989-08-01
Computations were performed to simulate turbulent supersonic flows past three-dimensional deep cavities with and without yaw. Simulations of these self-sustained oscillatory flows were generated through time-accurate solutions of the Reynolds-averaged complete Navier-Stokes equations using two different schemes: (1) an explicit MacCormack finite-difference scheme and (2) an implicit, upwind, finite-volume scheme. The second scheme, which is approximately 30 percent faster, is found to produce better time-accurate results. The Reynolds stresses were modeled using the Baldwin-Lomax algebraic turbulence model with certain modifications. The computational results include instantaneous and time-averaged flow properties everywhere in the computational domain. Time series analyses were performed for the instantaneous pressure values on the cavity floor. The time-averaged computational results show good agreement with the experimental data along the cavity floor and walls. When the yaw angle is nonzero, there is no longer a single length scale (length-to-depth ratio) for the flow, as is the case for zero-yaw flow. The dominant directions and inclinations of the vortices are dramatically different for this nonsymmetric flow. The vortex shedding from the cavity into the mainstream flow is captured computationally. This phenomenon, which is due to the oscillation of the shear layer, is confirmed by the solutions of both schemes.
Lehmann, Eldon D
2003-01-01
The purpose of this review is to describe research applications of the AIDA diabetes software simulator. AIDA is a computer program that permits the interactive simulation of insulin and glucose profiles for teaching, demonstration, and self-learning purposes. Since March/April 1996 it has been made freely available on the Internet as a noncommercial contribution to continuing diabetes education. Up to May 2003 well over 320,000 visits have been logged at the main AIDA Website--www.2aida.org--and over 65,000 copies of the AIDA program have been downloaded free-of-charge. This review (the second of two parts) overviews research projects and ventures, undertaken for the most part by other research workers in the diabetes computing field, that have made use of the freeware AIDA program. As with Part 1 of the review (Diabetes Technol Ther 2003;5:425-438) relevant research work was identified in three main ways: (i) by personal (e-mail/written) communications from researchers, (ii) via the ISI Web of Science citation database to identify published articles which referred to AIDA-related papers, and (iii) via searches on the Internet. Also, in a number of cases research students who had sought advice about AIDA, and diabetes computing in general, provided copies of their research dissertations/theses upon the completion of their projects. Part 2 of this review highlights some more of the research projects that have made use of the AIDA diabetes simulation program to date. A wide variety of diabetes computing topics are addressed. These range from learning about parameter interactions using simulated blood glucose data, to considerations of dietary assessments, developing new diabetes models, and performance monitoring of closed-loop insulin delivery devices. 
Other topics include research use of such software for evaluation/validation, applying simulated blood glucose data for prototype training/validation, and other research uses of placing technical information on the Web. This review confirms an unexpected but useful benefit of distributing a medical program, like AIDA, for free via the Internet--demonstrating how it is possible to achieve a synergistic benefit with other researchers, facilitating their own research projects in related medical fields. A common theme that emerges from the research ventures reviewed is the use of simulated blood glucose data from the AIDA software for preliminary computer lab-based testing of other decision support prototypes. Issues surrounding such use of simulated data for separate computer prototype testing are considered further.
Time-Domain Simulation of Along-Track Interferometric SAR for Moving Ocean Surfaces.
Yoshida, Takero; Rheem, Chang-Kyu
2015-06-10
A time-domain simulation of along-track interferometric synthetic aperture radar (AT-InSAR) has been developed to support ocean observations. The simulation is in the time domain and based on Bragg scattering to be applicable for moving ocean surfaces. The time-domain simulation is suitable for examining velocities of moving objects. The simulation obtains the time series of microwave backscattering as raw signals for movements of ocean surfaces. In terms of realizing Bragg scattering, the computational grid elements for generating the numerical ocean surface are set to be smaller than the wavelength of the Bragg resonant wave. In this paper, the simulation was conducted for a Bragg resonant wave and irregular waves with currents. As a result, the phases of the received signals from two antennas differ due to the movement of the numerical ocean surfaces. The phase differences shifted by currents were in good agreement with the theoretical values. Therefore, the adaptability of the simulation to observe velocities of ocean surfaces with AT-InSAR was confirmed.
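The velocity retrieval described above can be illustrated with a toy two-antenna signal model: a surface moving at line-of-sight velocity v shifts the backscattered phase at rate 4πv/λ, so two receptions lagged by τ differ in phase by 4πvτ/λ, and inverting that relation recovers v. All parameter values below are illustrative, not the paper's.

```python
import numpy as np

lam = 0.03          # radar wavelength [m] (illustrative, X-band-like)
tau = 1e-3          # effective time lag between the two antennas [s] (assumed)
v_true = 1.2        # line-of-sight surface velocity [m/s]

# Backscatter from a scatterer moving at v accumulates phase (4*pi/lam)*v*t;
# antenna 2 sees the same signal delayed by tau.
t = np.arange(0.0, 0.1, 1e-4)
s1 = np.exp(1j * 4 * np.pi / lam * v_true * t)
s2 = np.exp(1j * 4 * np.pi / lam * v_true * (t - tau))

# Interferometric phase difference, then invert dphi = 4*pi*v*tau/lam for v.
dphi = np.angle(np.mean(s1 * np.conj(s2)))
v_est = dphi * lam / (4 * np.pi * tau)
print(v_est)        # recovers ~1.2 m/s
```

For larger velocities or lags the phase wraps modulo 2π, which is one reason the choice of along-track baseline (and hence τ) matters in a real AT-InSAR system.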
NASA Astrophysics Data System (ADS)
Seo, Jung-Hee; Bakhshaee, Hani; Zhu, Chi; Mittal, Rajat
2015-11-01
Patterns of blood flow associated with abnormal heart conditions generate characteristic sounds that can be measured on the chest surface using a stethoscope. This technique of 'cardiac auscultation' has been used effectively for over a hundred years to diagnose heart conditions, but the mechanisms that generate heart sounds, as well as the physics of sound transmission through the thorax, are not well understood. Here we present a new computational method for simulating the physics of heart murmur generation and transmission and use it to simulate the murmurs associated with a modeled aortic stenosis. The flow in the model aorta is simulated with the incompressible Navier-Stokes equations, and the three-dimensional elastic wave generation and propagation in the surrounding viscoelastic structure are solved with a high-order finite difference method in the time domain. The simulation results are compared with experimental measurements and show good agreement. The present study confirms that pressure fluctuations on the vessel wall are the source of these heart murmurs, and that both compression and shear waves likely play an important role in cardiac auscultation. Supported by the NSF Grants IOS-1124804 and IIS-1344772; computational resources by XSEDE NSF grant TG-CTS100002.
NASA Astrophysics Data System (ADS)
Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki
2010-12-01
We adopted the GPU (graphics processing unit) to accelerate large-scale finite-difference simulation of seismic wave propagation. The simulation can benefit from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In the single-GPU case we achieved a performance of about 56 GFlops, about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment of the ghost zones was found to consume considerable time in data transfer between the GPU and the host node. This problem was solved by using contiguous memory buffers for the ghost zones. We achieved a performance of about 2.2 TFlops using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 cores of host CPUs would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as a faster simulation is possible with reduced computational resources compared to CPUs.
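The ghost-zone fix described above has a simple NumPy analogue: a constant-z face of a C-ordered 3-D array is a strided, non-contiguous view, so transferring it element-by-element is slow; packing it into one contiguous buffer before the transfer is exactly the remedy the authors apply on the GPU. Array sizes and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nz = 64, 64, 64
# Subdomain field with one-cell ghost layers on every face (C-ordered array).
u = np.zeros((nx + 2, ny + 2, nz + 2), dtype=np.float32)
u[1:-1, 1:-1, 1:-1] = rng.random((nx, ny, nz), dtype=np.float32)

# A constant-z face is a strided view: successive rows are (nz+2) elements
# apart in memory, so it cannot be sent as one block.
face = u[1:-1, 1:-1, 1]              # interior face next to the low-z ghost layer
print(face.flags['C_CONTIGUOUS'])    # False

# Remedy: pack the face into a contiguous buffer once, transfer that buffer.
send_buf = np.ascontiguousarray(face)
recv_buf = send_buf.copy()           # stand-in for the neighbour's reply
u[1:-1, 1:-1, 0] = recv_buf          # unpack into the ghost layer
```

On the GPU the packing is done by a small kernel, but the principle is the same: one contiguous copy over the bus instead of many strided ones.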
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de
A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
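The resolution switching at the heart of AdResS can be sketched as follows: each particle carries a weight λ ∈ [0,1] given by a smooth switching function of its position (1 in the atomistic zone, 0 in the coarse-grained zone), and pair forces are blended as w·F_atomistic + (1-w)·F_cg with w = λᵢλⱼ. The cos² ramp shape and the region widths below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def switching(x, x_at, x_hy):
    """Resolution weight: 1 in the atomistic zone (|x| < x_at), 0 in the
    coarse-grained zone, smooth cos^2 ramp across the hybrid region."""
    d = abs(x)
    if d < x_at:
        return 1.0
    if d > x_at + x_hy:
        return 0.0
    return float(np.cos(np.pi * (d - x_at) / (2 * x_hy)) ** 2)

def blended_force(xi, xj, f_atomistic, f_cg, x_at=1.0, x_hy=0.5):
    """AdResS-style force interpolation for a pair at positions xi, xj."""
    w = switching(xi, x_at, x_hy) * switching(xj, x_at, x_hy)
    return w * f_atomistic + (1 - w) * f_cg

# Deep inside the atomistic region the atomistic force is recovered exactly,
# deep inside the reservoir the coarse-grained force is used:
print(blended_force(0.1, 0.2, f_atomistic=-3.5, f_cg=-1.0))   # -3.5
print(blended_force(5.0, 5.0, f_atomistic=-3.5, f_cg=-1.0))   # -1.0
```

Interpolating forces rather than potentials is a deliberate design choice in AdResS: it keeps Newton's third law satisfied across the hybrid region.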
Kim, Dae Wook; Kim, Sug-Whan
2005-02-07
We present a novel simulation technique that offers efficient mass-fabrication strategies for 2 m class hexagonal mirror segments of extremely large telescopes. As the first of two studies in a series, we establish the theoretical basis of the tool influence function (TIF) for precessing-tool polishing simulation of non-rotating workpieces. These theoretical TIFs were then used to confirm that they reproduce the material-removal footprints (measured TIFs) of the bulged precessing tooling reported elsewhere. This is followed by a reverse-computation technique that traces the real polishing pressure from the empirical TIF, employing the simplex search method. The technical details, together with the results and implications described here, provide the theoretical tool for material removal essential to the successful polishing simulation that will be reported in the second study.
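TIF computation generally rests on Preston's law: removal depth ∝ pressure × relative speed × dwell time. The toy axisymmetric sketch below uses an assumed Gaussian pressure profile under a simply spinning tool, far simpler than the bulged precessing tool treated in the paper, but it shows why a removal footprint can peak off-centre (the surface speed vanishes at the tool axis).

```python
import numpy as np

# Preston's law: removal rate = k * pressure * relative speed.
k = 1e-8                                  # Preston coefficient (illustrative units)
omega = 100.0                             # tool spin rate [rad/s] (assumed)
dwell = 10.0                              # dwell time [s] (assumed)

r = np.linspace(0.0, 5e-3, 200)           # radial distance from tool centre [m]
pressure = 1e4 * np.exp(-(r / 2e-3)**2)   # assumed Gaussian pressure profile [Pa]
speed = omega * r                         # surface speed under a spinning tool
tif = k * pressure * speed * dwell        # removal depth profile [m]

print(r[np.argmax(tif)])                  # peak removal sits off-centre
```

For this profile the analytic peak is at r = w/√2 with w = 2 mm, since r·exp(-(r/w)²) is maximised there; a precessing tool tilts the spin axis precisely to move material removal onto the tool centre and shape the TIF.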
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
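A minimal trust-region iteration with deliberately corrupted gradients illustrates the robustness claim: even with 50% relative gradient error, the accept/shrink logic driven by the agreement ratio ρ still pushes the objective down. The quadratic objective, the steepest-descent model step, and all constants are illustrative choices, not the methods or problems of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return 0.5 * (x[0]**2 + 10.0 * x[1]**2)   # simple ill-conditioned quadratic

def noisy_grad(x, rel_err=0.5):
    """Exact gradient corrupted by a relative error of rel_err (here 50%)."""
    g = np.array([x[0], 10.0 * x[1]])
    e = rng.standard_normal(2)
    e *= rel_err * np.linalg.norm(g) / np.linalg.norm(e)
    return g + e

x = np.array([10.0, 1.0])
delta = 1.0                                   # trust-region radius
for _ in range(3000):
    g = noisy_grad(x)
    step = -delta * g / np.linalg.norm(g)     # model minimiser on the TR boundary
    predicted = np.linalg.norm(g) * delta     # decrease predicted by the linear model
    rho = (f(x) - f(x + step)) / predicted    # agreement ratio
    if rho > 0.1:
        x = x + step                          # accept the step
    if rho > 0.75:
        delta *= 2.0                          # model trusted: expand the region
    elif rho < 0.25:
        delta *= 0.5                          # poor agreement: shrink the region
print(f(x))
```

The reason this works is geometric: a gradient with relative error below 1 still makes an acute angle with the true gradient, so sufficiently short steps always descend, and the radius update finds that length automatically.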
NASA Astrophysics Data System (ADS)
Yamanishi, Manabu
A combined experimental and computational investigation was performed to evaluate the effects of various design parameters of an in-line injection pump on the nozzle exit characteristics for DI diesel engines. Measurements of the pump chamber pressure and the delivery valve lift were included for validation, using specially designed transducers installed inside the pump. The results confirm that the simulation model is capable of predicting the pump operation for all the designs and pump operating conditions investigated. Following the successful validation of this model, parametric studies were performed that allow for improved fuel injection system design.
Coupled fluid-flow and magnetic-field simulation of the Riga dynamo experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenjeres, S.; Hanjalic, K.; Renaudier, S.
2006-12-15
Magnetic fields of planets, stars, and galaxies result from self-excitation in moving electroconducting fluids, also known as the dynamo effect. This phenomenon was recently experimentally confirmed in the Riga dynamo experiment [A. Gailitis et al., Phys. Rev. Lett. 84, 4365 (2000); A. Gailitis et al., Physics of Plasmas 11, 2838 (2004)], consisting of a helical motion of sodium in a long pipe followed by a straight backflow in a surrounding annular passage, which provided adequate conditions for magnetic-field self-excitation. In this paper, a first attempt to simulate computationally the Riga experiment is reported. The velocity and turbulence fields are modeled by a finite-volume Navier-Stokes solver using a Reynolds-averaged Navier-Stokes turbulence model. The magnetic field is computed by an Adams-Bashforth finite-difference solver. The coupling of the two computational codes, although performed sequentially, provides an improved understanding of the interaction between the fluid velocity and magnetic fields in the saturation regime of the Riga dynamo experiment under realistic working conditions.
Numerical simulation of compressor endwall and casing treatment flow phenomena
NASA Technical Reports Server (NTRS)
Crook, A. J.; Greitzer, E. M.; Tan, C. S.; Adamczyk, J. J.
1992-01-01
A numerical study is presented of the flow in the endwall region of a compressor blade row, in conditions of operation with both smooth and grooved endwalls. The computations are first compared to velocity field measurements in a cantilevered stator/rotating hub configuration to confirm that the salient features are captured. Computations are then interrogated to examine the tip leakage flow structure since this is a dominant feature of the endwall region. In particular, the high blockage that can exist near the endwalls at the rear of a compressor blade passage appears to be directly linked to low total pressure fluid associated with the leakage flow. The fluid dynamic action of the grooved endwall, representative of the casing treatments that have been most successful in suppressing stall, is then simulated computationally and two principal effects are identified. One is suction of the low total pressure, high blockage fluid at the rear of the passage. The second is energizing of the tip leakage flow, most notably in the core of the leakage vortex, thereby suppressing the blockage at its source.
Equilibration and analysis of first-principles molecular dynamics simulations of water
NASA Astrophysics Data System (ADS)
Dawson, William; Gygi, François
2018-03-01
First-principles molecular dynamics (FPMD) simulations based on density functional theory are becoming increasingly popular for the description of liquids. In view of the high computational cost of these simulations, the choice of an appropriate equilibration protocol is critical. We assess two methods of estimation of equilibration times using a large dataset of first-principles molecular dynamics simulations of water. The Gelman-Rubin potential scale reduction factor [A. Gelman and D. B. Rubin, Stat. Sci. 7, 457 (1992)] and the marginal standard error rule heuristic proposed by White [Simulation 69, 323 (1997)] are evaluated on a set of 32 independent 64-molecule simulations of 58 ps each, amounting to a combined cumulative time of 1.85 ns. The availability of multiple independent simulations also allows for an estimation of the variance of averaged quantities, both within MD runs and between runs. We analyze atomic trajectories, focusing on correlations of the Kohn-Sham energy, pair correlation functions, number of hydrogen bonds, and diffusion coefficient. The observed variability across samples provides a measure of the uncertainty associated with these quantities, thus facilitating meaningful comparisons of different approximations used in the simulations. We find that the computed diffusion coefficient and average number of hydrogen bonds are affected by a significant uncertainty in spite of the large size of the dataset used. A comparison with classical simulations using the TIP4P/2005 model confirms that the variability of the diffusivity is also observed after long equilibration times. Complete atomic trajectories and simulation output files are available online for further analysis.
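The Gelman-Rubin diagnostic used above compares within-chain and between-chain variance across independent runs: R-hat near 1 means the runs are statistically indistinguishable, while R-hat well above 1 signals unfinished equilibration. A compact sketch, with illustrative Gaussian data standing in for the MD observables:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (PSRF) for an (m, n) array of
    m independent runs with n samples each."""
    m, n = chains.shape
    means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    B = n * means.var(ddof=1)                  # between-chain variance
    var_plus = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(0)
equilibrated = rng.normal(0.0, 1.0, size=(8, 4000))
print(gelman_rubin(equilibrated))              # ~1.0: runs indistinguishable

# Runs started from different configurations that have not yet mixed:
offsets = rng.normal(0.0, 2.0, size=(8, 1))
print(gelman_rubin(equilibrated + offsets))    # >> 1: not equilibrated
```

Having 32 independent simulations, as in the study above, is what makes this between-run comparison (and the run-to-run variance estimates) possible at all.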
Lagrangian Particle Tracking Simulation for Warm-Rain Processes in Quasi-One-Dimensional Domain
NASA Astrophysics Data System (ADS)
Kunishima, Y.; Onishi, R.
2017-12-01
Conventional cloud simulations are based on the Euler method and compute each microphysical process in a stochastic way, assuming infinite numbers of particles within each numerical grid cell. They therefore cannot provide the Lagrangian statistics of individual particles in cloud microphysics (i.e., aerosol particles, cloud particles, and rain drops), nor address the statistical fluctuations due to finite numbers of particles. Here we simulate the entire precipitation process of warm rain while tracking individual particles. We use the Lagrangian Cloud Simulator (LCS), which is based on the Euler-Lagrangian framework: flow motion and scalar transport are computed with the Euler method, and particle motion with the Lagrangian one. The LCS tracks particle motions and collision events individually, accounting for the hydrodynamic interaction between approaching particles with a superposition method; that is, it can directly represent the collisional growth of cloud particles. Taking account of the hydrodynamic interaction is essential for trustworthy collision detection. In this study, we newly developed a stochastic model based on Twomey cloud condensation nuclei (CCN) activation for the Lagrangian tracking simulation and integrated it into the LCS. Coupled with the Euler computation of the water vapour and temperature fields, the initiation and condensational growth of water droplets were computed in the Lagrangian way. We applied the integrated LCS to a kinematic simulation of warm-rain processes in a vertically elongated domain of up to 0.03 × 0.03 × 3000 m³ with horizontal periodicity. Aerosol particles with a realistic number density, 5 × 10⁷ m⁻³, were evenly distributed over the domain in the initial state. A prescribed updraft at the early stage initiated the development of a precipitating cloud. We have confirmed that the obtained bulk statistics agree fairly well with those from a conventional spectral-bin scheme for a vertical column domain. The discussion will centre on the Lagrangian statistics collected from the individual behaviour of the tracked particles.
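A Twomey-based stochastic activation scheme for tracked particles can be sketched by assigning each Lagrangian aerosol a critical supersaturation drawn so that the cumulative activated number follows the Twomey law N(s) ∝ s^k; a particle activates once the ambient supersaturation exceeds its threshold. The exponent, the normalisation to s ∈ (0, 1], and the sampling scheme below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 0.7              # Twomey exponent (illustrative value)
n = 100000           # number of tracked aerosol particles

# Inverse-CDF sampling: P(s_crit <= s) = s**k for s in (0, 1], so the
# cumulative activated fraction reproduces the Twomey power law.
s_crit = rng.random(n) ** (1.0 / k)

s_ambient = 0.3      # ambient supersaturation reached in the updraft (assumed)
activated = s_crit <= s_ambient
print(activated.mean())      # ≈ 0.3**0.7 ≈ 0.43
```

Because each threshold belongs to one tracked particle, the same draw that sets bulk activation also tags exactly which individuals become droplets, which is the Lagrangian information a spectral-bin scheme cannot provide.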
Experimental and numerical research on forging with torsion
NASA Astrophysics Data System (ADS)
Petrov, Mikhail A.; Subich, Vadim N.; Petrov, Pavel A.
2017-10-01
Increasing the efficiency of the technological operations of blank production is closely tied to computer-aided technologies (CAx). On the one hand, the practical result represents reality exactly; on the other hand, developing a new process demands resources that are limited in SMEs. CAx tools were successfully applied to the development of a new process of forging with torsion, as well as to the analysis of its results. It was shown that the theoretical calculations are confirmed both in practice and by numerical simulation. The most widely used structural materials were studied, and the torsion angles were specified. The simulated results were evaluated by an experimental procedure.
Simulation of forming a flat forging
NASA Astrophysics Data System (ADS)
Solomonov, K.; Tishchuk, L.; Fedorinin, N.
2017-11-01
The metal flow in several metal shaping processes (rolling, pressing, die forging) is governed by regularities that determine the deformation pattern in the upsetting of metal samples. The object of the study was the metal flow picture, including the contour of the part, the demarcation lines of the metal flow, and the flow lines. We have created an algorithm for constructing the metal flow picture that is based on representing a metal flow demarcation line as an equidistant curve. Computer and physical simulation of the metal flow picture with the help of various software systems confirms the suggested hypothesis.
Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMs
NASA Astrophysics Data System (ADS)
Diehl, S. E.; Ochoa, A., Jr.; Dressendorfer, P. V.; Koga, P.; Kolasinski, W. A.
1982-12-01
Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors.
On the efficient and reliable numerical solution of rate-and-state friction problems
NASA Astrophysics Data System (ADS)
Pipping, Elias; Kornhuber, Ralf; Rosenau, Matthias; Oncken, Onno
2016-03-01
We present a mathematically consistent numerical algorithm for the simulation of earthquake rupture with rate-and-state friction. Its main features are adaptive time stepping, a novel algebraic solution algorithm involving nonlinear multigrid and a fixed point iteration for the rate-and-state decoupling. The algorithm is applied to a laboratory scale subduction zone which allows us to compare our simulations with experimental results. Using physical parameters from the experiment, we find a good fit of recurrence time of slip events as well as their rupture width and peak slip. Computations in 3-D confirm efficiency and robustness of our algorithm.
NASA Astrophysics Data System (ADS)
Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt
2015-05-01
A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations
NASA Astrophysics Data System (ADS)
Wyszkowska, Patrycja
2017-12-01
The determination of the accuracy of functions of measured or adjusted values can be a problem in geodetic computations. The general law of covariance propagation or, for uncorrelated observations, the propagation of variance (the Gaussian formula) is commonly used for that purpose. That approach is theoretically justified for linear functions. For non-linear functions, a first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of the study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors that result from neglecting the higher-order terms and determines the range of validity of such a simplification. The basis of the analysis is a comparison of the results obtained by the law of propagation of variance with a probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances derived from Cartesian coordinates, and height differences in trigonometric and geometric levelling. The simulations and the analysis of the results confirm that the general law of variance propagation can be applied in basic geodetic computations even when the functions are non-linear, provided that the accuracy of the observations is not too low. Generally, this is not a problem with present geodetic instruments.
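The comparison described above can be sketched for a simple non-linear geodetic function, the horizontal distance computed from Cartesian coordinates. The coordinates and standard deviations below are illustrative values, not taken from the paper; the point is that the first-order (Gaussian) propagation and a Monte Carlo propagation agree when the observation accuracy is reasonable.

```python
import math
import random

def distance(x, y):
    # Non-linear function of the measured coordinates: horizontal distance
    return math.hypot(x, y)

# Assumed measured coordinates and their standard deviations (illustrative)
x0, y0 = 300.0, 400.0        # metres
sx, sy = 0.05, 0.05          # metres

# First-order (Gaussian) propagation: sigma_d^2 = (dd/dx)^2 sx^2 + (dd/dy)^2 sy^2
d0 = distance(x0, y0)
ddx, ddy = x0 / d0, y0 / d0  # partial derivatives of hypot at (x0, y0)
sigma_linear = math.sqrt((ddx * sx) ** 2 + (ddy * sy) ** 2)

# Monte Carlo propagation: sample the inputs, evaluate the function, take the spread
random.seed(1)
N = 200_000
samples = [distance(random.gauss(x0, sx), random.gauss(y0, sy)) for _ in range(N)]
mean = sum(samples) / N
sigma_mc = math.sqrt(sum((s - mean) ** 2 for s in samples) / (N - 1))

print(f"linear: {sigma_linear:.5f} m, Monte Carlo: {sigma_mc:.5f} m")
```

For strongly non-linear functions or very noisy observations the two estimates start to diverge, which is exactly the regime the paper probes.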
Huang, Xiaoqin; Gao, Daquan; Zhan, Chang-Guo
2011-06-07
Cocaine esterase (CocE) has been known as the most efficient native enzyme for metabolizing naturally occurring cocaine. A major obstacle to the clinical application of CocE is the thermoinstability of native CocE with a half-life of only ∼11 min at physiological temperature (37 °C). It is highly desirable to develop a thermostable mutant of CocE for therapeutic treatment of cocaine overdose and addiction. To establish a structure-thermostability relationship, we carried out molecular dynamics (MD) simulations at 400 K on wild-type CocE and previously known thermostable mutants, demonstrating that the thermostability of the active form of the enzyme correlates with the fluctuation (characterized as the root-mean square deviation and root-mean square fluctuation of atomic positions) of the catalytic residues (Y44, S117, Y118, H287, and D259) in the simulated enzyme. In light of the structure-thermostability correlation, further computational modelling including MD simulations at 400 K predicted that the active site structure of the L169K mutant should be more thermostable. The prediction has been confirmed by wet experimental tests showing that the active form of the L169K mutant had a half-life of 570 min at 37 °C, which is significantly longer than those of the wild-type and previously known thermostable mutants. The encouraging outcome suggests that the high-temperature MD simulations and the structure-thermostability relationship may be considered as a valuable tool for the computational design of thermostable mutants of an enzyme.
NASA Astrophysics Data System (ADS)
Escobar Gómez, J. D.; Torres-Verdín, C.
2018-03-01
Single-well pressure-diffusion simulators enable improved quantitative understanding of hydraulic-testing measurements in the presence of arbitrary spatial variations of rock properties. Simulators of this type implement robust numerical algorithms which are often computationally expensive, thereby making the solution of the forward modeling problem onerous and inefficient. We introduce a time-domain perturbation theory for anisotropic permeable media to efficiently and accurately approximate the transient pressure response of spatially complex aquifers. Although theoretically valid for any spatially dependent rock/fluid property, our single-phase flow study emphasizes arbitrary spatial variations of permeability and anisotropy, which constitute key objectives of hydraulic-testing operations. Contrary to time-honored techniques, the perturbation method invokes pressure-flow deconvolution to compute the background medium's permeability sensitivity function (PSF) with a single numerical simulation run. Subsequently, the first-order term of the perturbed solution is obtained by solving an integral equation that weighs the spatial variations of permeability with the spatial-dependent and time-dependent PSF. Finally, discrete convolution transforms the constant-flow approximation to arbitrary multirate conditions. Multidimensional numerical simulation studies for a wide range of single-well field conditions indicate that perturbed solutions can be computed in less than a few CPU seconds with relative errors in pressure of <5%, corresponding to perturbations in background permeability of up to two orders of magnitude. Our work confirms that the proposed joint perturbation-convolution (JPC) method is an efficient alternative to analytical and numerical solutions for accurate modeling of pressure-diffusion phenomena induced by Neumann or Dirichlet boundary conditions.
CFD simulation and experimental validation of a GM type double inlet pulse tube refrigerator
NASA Astrophysics Data System (ADS)
Banjare, Y. P.; Sahoo, R. K.; Sarangi, S. K.
2010-04-01
The pulse tube refrigerator has the advantages of long life and low vibration over conventional cryocoolers, such as GM and Stirling coolers, because of the absence of moving parts at low temperature. This paper performs a three-dimensional computational fluid dynamics (CFD) simulation of a vertically aligned GM type double inlet pulse tube refrigerator (DIPTR) operating under a variety of thermal boundary conditions. A commercial CFD software package, Fluent 6.1, is used to model the oscillating flow inside the pulse tube refrigerator. The simulation represents a fully coupled system operating in steady-periodic mode. The externally imposed boundary conditions are a sinusoidal pressure inlet, applied through a user-defined function at one end of the tube, and constant-temperature or heat-flux boundaries at the external walls of the cold-end heat exchangers. Evaluating the optimum parameters of a DIPTR experimentally is difficult; on the other hand, developing a computer code for CFD analysis is equally complex. The objectives of the present investigation are to ascertain the suitability of the commercial CFD package Fluent for the study of energy and fluid flow in a DIPTR and to validate the CFD simulation results against available experimental data. General results, such as the cool-down behaviour of the system, the phase relation between mass flow rate and pressure at the cold end, the temperature profile along the wall of the cooler and the refrigeration load, are presented for different boundary conditions of the system. The results confirm that CFD-based Fluent simulations are capable of elucidating the complex periodic processes in a DIPTR. The results also show excellent agreement between the CFD simulation results and the experimental results.
Confirmation of a realistic reactor model for BNCT dosimetry at the TRIGA Mainz
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziegner, Markus, E-mail: Markus.Ziegner.fl@ait.ac.at; Schmitz, Tobias; Hampel, Gabriele
2014-11-01
Purpose: In order to build up a reliable dose monitoring system for boron neutron capture therapy (BNCT) applications at the TRIGA reactor in Mainz, a computer model for the entire reactor was established, simulating the radiation field by means of the Monte Carlo method. The impact of different source definition techniques was compared and the model was validated by experimental fluence and dose determinations. Methods: The depletion calculation code ORIGEN2 was used to compute the burn-up and relevant material composition of each burned fuel element from the day of first reactor operation to its current core. The material composition of the current core was used in a MCNP5 model of the initial core developed earlier. To perform calculations for the region outside the reactor core, the model was expanded to include the thermal column and compared with the previously established ATTILA model. Subsequently, the computational model was simplified in order to reduce the calculation time. Both simulation models were validated by experiments with different setups using alanine dosimetry and gold activation measurements with two different types of phantoms. Results: The MCNP5-simulated neutron spectrum and source strength are found to be in good agreement with the previous ATTILA model, whereas the photon production is much lower. Both MCNP5 simulation models predict all experimental dose values with an accuracy of about 5%. The simulations reveal that a Teflon environment favorably reduces the gamma dose component as compared to a polymethyl methacrylate phantom. Conclusions: A computer model for BNCT dosimetry was established, allowing the prediction of dosimetric quantities without further calibration and within a reasonable computation time for clinical applications. The good agreement between the MCNP5 simulations and experiments demonstrates that the ATTILA model overestimates the gamma dose contribution. The detailed model can be used for the planning of structural modifications in the thermal column irradiation channel or for the use of irradiation sites other than the thermal column, e.g., the beam tubes.
MD Simulations of P-Type ATPases in a Lipid Bilayer System.
Autzen, Henriette Elisabeth; Musgaard, Maria
2016-01-01
Molecular dynamics (MD) simulation is a computational method which provides insight on protein dynamics with high resolution in both space and time, in contrast to many experimental techniques. MD simulations can be used as a stand-alone method to study P-type ATPases as well as a complementary method aiding experimental studies. In particular, MD simulations have proved valuable in generating and confirming hypotheses relating to the structure and function of P-type ATPases. In the following, we describe a detailed practical procedure on how to set up and run a MD simulation of a P-type ATPase embedded in a lipid bilayer using software free of use for academics. We emphasize general considerations and problems typically encountered when setting up simulations. While full coverage of all possible procedures is beyond the scope of this chapter, we have chosen to illustrate the MD procedure with the Nanoscale Molecular Dynamics (NAMD) and the Visual Molecular Dynamics (VMD) software suites.
Harding, R. M.; Boyce, A. J.; Martinson, J. J.; Flint, J.; Clegg, J. B.
1993-01-01
Extensive allelic diversity in variable numbers of tandem repeats (VNTRs) has been discovered in the human genome. For population genetic studies of VNTRs, such as forensic applications, it is important to know whether a neutral mutation-drift balance of VNTR polymorphism can be represented by the infinite alleles model. The assumption of the infinite alleles model that each new mutant is unique is very likely to be violated by unequal sister chromatid exchange (USCE), the primary process believed to generate VNTR mutants. We show that increasing both mutation rates and misalignment constraint for intrachromosomal recombination in a computer simulation model reduces simulated VNTR diversity below the expectations of the infinite alleles model. Maximal constraint, represented as slippage of single repeats, reduces simulated VNTR diversity to levels expected from the stepwise mutation model. Although misalignment rule is the more important variable, mutation rate also has an effect. At moderate rates of USCE, simulated VNTR diversity fluctuates around infinite alleles expectation. However, if rates of USCE are high, as for hypervariable VNTRs, simulated VNTR diversity is consistently lower than predicted by the infinite alleles model. This has been observed for many VNTRs and accounted for by technical problems in distinguishing alleles of neighboring size classes. We use sampling theory to confirm the intrinsically poor fit to the infinite alleles model of both simulated VNTR diversity and observed VNTR polymorphisms sampled from two Papua New Guinean populations. PMID:8293988
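The abstract's central claim, that slippage-like (stepwise) mutation yields less diversity than the infinite alleles model at the same mutation rate, can be illustrated with the standard neutral-theory equilibrium formulas. These closed forms (Kimura-Crow for infinite alleles, Ohta-Kimura for stepwise mutation) are textbook results, not equations from the paper itself:

```python
import math

def het_infinite_alleles(theta):
    # Expected heterozygosity under the infinite alleles model,
    # H = theta / (1 + theta), with theta = 4*N*u (Kimura & Crow 1964)
    return theta / (1.0 + theta)

def het_stepwise(theta):
    # Expected heterozygosity under the stepwise mutation model,
    # H = 1 - 1 / sqrt(1 + 2*theta) (Ohta & Kimura 1973)
    return 1.0 - 1.0 / math.sqrt(1.0 + 2.0 * theta)

# Stepwise mutation always predicts lower equilibrium diversity,
# and the gap widens as the scaled mutation rate theta grows
for theta in (0.5, 2.0, 5.0, 20.0):
    h_iam = het_infinite_alleles(theta)
    h_smm = het_stepwise(theta)
    print(f"theta={theta:5.1f}  IAM H={h_iam:.3f}  SMM H={h_smm:.3f}")
```

This mirrors the simulation result that maximal misalignment constraint (single-repeat slippage) pushes VNTR diversity down from infinite-alleles toward stepwise-model expectations.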
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frolov, T.; Setyawan, W.; Kurtz, R. J.
We report the computational discovery of novel grain boundary structures and multiple grain boundary phases in elemental bcc tungsten. While grain boundary structures created by the γ-surface method, as a union of two perfect half crystals, have been studied extensively, it is known that the method has limitations and does not always predict the correct ground states. Here, we use a newly developed computational tool, based on evolutionary algorithms, to perform a grand-canonical search of a high-angle symmetric tilt boundary in tungsten, and we find new ground states and multiple phases that cannot be described using the conventional structural unit model. We use MD simulations to demonstrate that the new structures can coexist at finite temperature in a closed system, confirming that these are examples of different GB phases. The new ground state is confirmed by first-principles calculations. This evolutionary grand-canonical search predicts novel grain boundary structures and multiple grain boundary phases in elemental body-centered cubic (bcc) metals, represented by tungsten, tantalum and molybdenum.
Finite element analyses of a linear-accelerator electron gun
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iqbal, M., E-mail: muniqbal.chep@pu.edu.pk, E-mail: muniqbal@ihep.ac.cn; Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049; Wasy, A.
Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in a computer-aided three-dimensional interactive application for finite element analyses through the ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results from the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement, confirming the design parameters at the defined operating temperature. The gun has been operating continuously since commissioning, without any thermally induced failures, in the BEPCII linear accelerator.
Electron cloud simulations for the main ring of J-PARC
NASA Astrophysics Data System (ADS)
Yee-Rendon, Bruce; Muto, Ryotaro; Ohmi, Kazuhito; Satou, Kenichirou; Tomizawa, Masahito; Toyama, Takeshi
2017-07-01
The simulation of beam instabilities is a helpful tool for evaluating potential threats to the machine protection of high intensity beams. At the Main Ring (MR) of J-PARC, signals related to the electron cloud have been observed during the slow beam extraction mode. Hence, several studies were conducted to investigate the mechanism that produces it. The results confirmed a strong dependence of the formation of the electron cloud on the beam intensity and the bunch structure; however, the precise explanation of its trigger conditions remains incomplete. To shed light on the problem, electron cloud simulations were performed using an updated version of the computational model developed in previous works at KEK. The code employed the measured signals to reproduce the events seen during the surveys.
Numerical simulation of mechanical mixing in high solid anaerobic digester.
Yu, Liang; Ma, Jingwei; Chen, Shulin
2011-01-01
Computational fluid dynamics (CFD) was employed to study mixing performance in a high solid anaerobic digester (HSAD) with an A-310 impeller and a helical ribbon. A mathematical model was constructed to assess the flow fields. Good agreement of the model results with experimental data was obtained for the A-310 impeller. A systematic comparison of the interrelationship of power number, flow number and Reynolds number was simulated in a digester at less than 5% TS and at 10% TS (total solids). The simulation results suggested a great potential for using the helical ribbon mixer for mixing high solids digesters. The results also provided quantitative confirmation of the minimum power consumption in HSAD and of the effect of shear rate on bio-structure. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Becker, Matthew R.
2013-10-01
I present a new algorithm, Curved-sky grAvitational Lensing for Cosmological Light conE simulatioNS (CALCLENS), for efficiently computing weak gravitational lensing shear signals from large N-body light cone simulations over a curved sky. This new algorithm properly accounts for the sky curvature and boundary conditions, is able to produce redshift-dependent shear signals including corrections to the Born approximation by using multiple-plane ray tracing and properly computes the lensed images of source galaxies in the light cone. The key feature of this algorithm is a new, computationally efficient Poisson solver for the sphere that combines spherical harmonic transform and multigrid methods. As a result, large areas of sky (˜10 000 square degrees) can be ray traced efficiently at high resolution using only a few hundred cores. Using this new algorithm and curved-sky calculations that only use a slower but more accurate spherical harmonic transform Poisson solver, I study the convergence, shear E-mode, shear B-mode and rotation mode power spectra. Employing full-sky E/B-mode decompositions, I confirm that the numerically computed shear B-mode and rotation mode power spectra are equal at high accuracy (≲1 per cent) as expected from perturbation theory up to second order. Coupled with realistic galaxy populations placed in large N-body light cone simulations, this new algorithm is ideally suited for the construction of synthetic weak lensing shear catalogues to be used to test for systematic effects in data analysis procedures for upcoming large-area sky surveys. The implementation presented in this work, written in C and employing widely available software libraries to maintain portability, is publicly available at http://code.google.com/p/calclens.
Comparison of three large-eddy simulations of shock-induced turbulent separation bubbles
NASA Astrophysics Data System (ADS)
Touber, Emile; Sandham, Neil D.
2009-12-01
Three different large-eddy simulation investigations of the interaction between an impinging oblique shock and a supersonic turbulent boundary layer are presented. All simulations made use of the same inflow technique, specifically aimed at avoiding possible low-frequency interference with the shock/boundary-layer interaction system. All simulations were run on relatively wide computational domains and integrated over times greater than twenty-five times the period of the most commonly reported low-frequency shock oscillation, making comparisons at both the time-averaged and low-frequency-dynamic levels possible. The results confirm previous experimental findings which suggested a simple linear relation between the interaction length and the oblique-shock strength when scaled using the boundary-layer thickness and wall-shear stress. All the tested cases show evidence of significant low-frequency shock motions. At the wall, energetic low-frequency pressure fluctuations are observed, mainly in the initial part of the interaction.
NASA Astrophysics Data System (ADS)
Komura, Yukihiro; Okabe, Yutaka
2016-04-01
We study the Ising models on the Penrose lattice and the dual Penrose lattice by means of high-precision Monte Carlo simulation. Simulating systems up to a total system size of N = 20633239, we estimate the critical temperatures on those lattices with high accuracy. For high-speed calculation, we use a generalized single-GPU implementation of the Swendsen-Wang multi-cluster Monte Carlo algorithm. As a result, we estimate the critical temperature on the Penrose lattice as Tc/J = 2.39781 ± 0.00005 and that of the dual Penrose lattice as Tc*/J = 2.14987 ± 0.00005. Moreover, we definitely confirm the duality relation between the critical temperatures on the dual pair of quasilattices with a high degree of accuracy, sinh (2J/Tc)sinh (2J/Tc*) = 1.00000 ± 0.00004.
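The duality relation quoted in the abstract can be checked directly from the two reported critical temperatures:

```python
import math

# Critical temperatures reported in the abstract (in units of J / k_B)
Tc_penrose = 2.39781   # Penrose lattice
Tc_dual = 2.14987      # dual Penrose lattice

# Kramers-Wannier-type duality for a dual pair of lattices:
# sinh(2J/Tc) * sinh(2J/Tc*) = 1
product = math.sinh(2.0 / Tc_penrose) * math.sinh(2.0 / Tc_dual)
print(f"sinh(2J/Tc) * sinh(2J/Tc*) = {product:.5f}")
```

Evaluating the product with the two quoted values reproduces unity to well within the stated uncertainty of ±0.00004 plus the rounding of the temperatures themselves.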
Detailed Comparison of DNS to PSE for Oblique Breakdown at Mach 3
NASA Technical Reports Server (NTRS)
Mayer, Christian S. J.; Fasel, Hermann F.; Choudhari, Meelan; Chang, Chau-Lyan
2010-01-01
A pair of oblique waves at low amplitudes is introduced in a supersonic flat-plate boundary layer. Their downstream development and the concomitant process of laminar to turbulent transition is then investigated numerically using Direct Numerical Simulations (DNS) and Parabolized Stability Equations (PSE). This abstract is the last part of an extensive study of the complete transition process initiated by oblique breakdown at Mach 3. In contrast to the previous simulations, the symmetry condition in the spanwise direction is removed for the simulation presented in this abstract. By removing the symmetry condition, we are able to confirm that the flow is indeed symmetric over the entire computational domain. Asymmetric modes grow in the streamwise direction but reach only small amplitude values at the outflow. Furthermore, this abstract discusses new time-averaged data from our previous simulation CASE 3 and compares PSE data obtained from NASA's LASTRAC code to DNS results.
NASA Technical Reports Server (NTRS)
Broeren, Andy P.; Woodard, Brian S.; Diebold, Jeffrey M.; Moens, Frederic
2017-01-01
Aerodynamic assessment of icing effects on swept wings is an important component of a larger effort to improve three-dimensional icing simulation capabilities. An understanding of ice-shape geometric fidelity and Reynolds and Mach number effects on the iced-wing aerodynamics is needed to guide the development and validation of ice-accretion simulation tools. To this end, wind-tunnel testing and computational flow simulations were carried out for an 8.9%-scale semispan wing based upon the Common Research Model airplane configuration. The wind-tunnel testing was conducted at the Wichita State University 7 ft x 10 ft Beech wind tunnel from Reynolds numbers of 0.8×10(exp 6) to 2.4×10(exp 6) and corresponding Mach numbers of 0.09 to 0.27. This paper presents the results of initial studies investigating the model mounting configuration, clean-wing aerodynamics and effects of artificial ice roughness. Four different model mounting configurations were considered and a circular splitter plate combined with a streamlined shroud was selected as the baseline geometry for the remainder of the experiments and computational simulations. A detailed study of the clean-wing aerodynamics and stall characteristics was made. In all cases, the flow over the outboard sections of the wing separated as the wing stalled with the inboard sections near the root maintaining attached flow. Computational flow simulations were carried out with the ONERA elsA software that solves the compressible, three-dimensional RANS equations. The computations were carried out in either fully turbulent mode or with natural transition. Better agreement between the experimental and computational results was obtained when considering computations with free transition compared to turbulent solutions. 
These results indicate that the experimentally observed evolution of the clean-wing performance coefficients was due to the effect of three-dimensional transition location, and that this must be taken into account in future data analysis. This research also confirmed that artificial ice roughness created with rapid-prototype manufacturing methods can generate aerodynamic performance effects comparable to grit roughness of equivalent size when proper care is exercised in design and installation. The conclusions of this combined experimental and computational study contributed directly to the successful implementation of follow-on test campaigns with numerous artificial ice-shape configurations for this 8.9% scale model.
NASA Technical Reports Server (NTRS)
Broeren, Andy P.; Woodard, Brian S.; Diebold, Jeffrey M.; Moens, Frederic
2017-01-01
Aerodynamic assessment of icing effects on swept wings is an important component of a larger effort to improve three-dimensional icing simulation capabilities. An understanding of ice-shape geometric fidelity and Reynolds and Mach number effects on the iced-wing aerodynamics is needed to guide the development and validation of ice-accretion simulation tools. To this end, wind-tunnel testing and computational flow simulations were carried out for an 8.9 percent-scale semispan wing based upon the Common Research Model airplane configuration. The wind-tunnel testing was conducted at the Wichita State University 7 by 10 ft Beech wind tunnel from Reynolds numbers of 0.8×10(exp 6) to 2.4×10(exp 6) and corresponding Mach numbers of 0.09 to 0.27. This paper presents the results of initial studies investigating the model mounting configuration, clean-wing aerodynamics and effects of artificial ice roughness. Four different model mounting configurations were considered and a circular splitter plate combined with a streamlined shroud was selected as the baseline geometry for the remainder of the experiments and computational simulations. A detailed study of the clean-wing aerodynamics and stall characteristics was made. In all cases, the flow over the outboard sections of the wing separated as the wing stalled with the inboard sections near the root maintaining attached flow. Computational flow simulations were carried out with the ONERA elsA software that solves the compressible, threedimensional RANS equations. The computations were carried out in either fully turbulent mode or with natural transition. Better agreement between the experimental and computational results was obtained when considering computations with free transition compared to turbulent solutions. 
These results indicate that the experimental evolution of the clean-wing performance coefficients was due to the effect of three-dimensional transition location, and that this must be taken into account in future data analysis. This research also confirmed that artificial ice roughness created with rapid-prototype manufacturing methods can generate aerodynamic performance effects comparable to grit roughness of equivalent size when proper care is exercised in design and installation. The conclusions of this combined experimental and computational study contributed directly to the successful implementation of follow-on test campaigns with numerous artificial ice-shape configurations for this 8.9-percent-scale model.
Precession feature extraction of ballistic missile warhead with high velocity
NASA Astrophysics Data System (ADS)
Sun, Huixia
2018-04-01
This paper establishes a precession model of a ballistic missile warhead and derives formulas for the micro-Doppler frequency induced by a precessing target. To extract the micro-Doppler feature of a precessing warhead, a micro-Doppler bandwidth estimation algorithm, which avoids velocity compensation, is presented based on a high-resolution time-frequency transform. Computer simulations confirm the effectiveness of the proposed method even at low signal-to-noise ratio.
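The key property exploited above, that the micro-Doppler bandwidth is invariant to the bulk translational Doppler shift so no velocity compensation is needed, can be illustrated with a minimal spectrogram sketch. This is not the paper's algorithm; the signal parameters (`f_body`, `f_m`, `f_prec`, `fs`) and the simple peak-tracking estimator are illustrative assumptions.

```python
import numpy as np

def md_bandwidth(f_body, f_m=200.0, f_prec=4.0, fs=8000.0):
    """Estimate the micro-Doppler bandwidth of a precessing scatterer by
    tracking the spectrogram peak. All parameter values are illustrative
    assumptions, not values from the paper."""
    t = np.arange(0.0, 1.0, 1.0 / fs)
    # Instantaneous frequency f(t) = f_body + f_m*sin(2*pi*f_prec*t);
    # integrating it gives the phase below.
    phase = (2 * np.pi * f_body * t
             - (f_m / f_prec) * np.cos(2 * np.pi * f_prec * t))
    sig = np.exp(1j * phase)
    win, hop = 256, 128
    freqs = np.fft.fftfreq(win, 1.0 / fs)
    peaks = []
    for s in range(0, len(sig) - win, hop):
        spec = np.abs(np.fft.fft(sig[s:s + win] * np.hanning(win)))
        peaks.append(freqs[np.argmax(spec)])
    peaks = np.asarray(peaks)
    # Bandwidth (max - min peak frequency) ~ 2*f_m, regardless of f_body.
    return peaks.max() - peaks.min()

bw_slow = md_bandwidth(f_body=500.0)   # one bulk velocity
bw_fast = md_bandwidth(f_body=900.0)   # different bulk velocity, same precession
```

Both calls return a bandwidth near 2·f_m (up to the FFT bin width), even though the bulk Doppler shifts differ, which is why the bandwidth-based feature needs no velocity compensation.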
Recursive regularization step for high-order lattice Boltzmann methods
NASA Astrophysics Data System (ADS)
Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre
2017-09-01
A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step considerably enhances the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with Reynolds numbers ranging from 10^4 to 10^6, and where a thorough analysis of the case at Re = 3×10^4 is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase in the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.
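The regularization idea, here sketched in its standard non-recursive form rather than the authors' recursive variant, rebuilds the non-equilibrium part of the populations from its second-order Hermite moment alone, which filters out higher-order non-hydrodynamic modes while exactly preserving density, momentum, and the second-order moment. A minimal single-site D2Q9 sketch:

```python
import numpy as np

# D2Q9 lattice constants (standard values).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0  # lattice sound speed squared

def feq(rho, u):
    """Second-order Maxwellian equilibrium populations."""
    cu = c @ u
    return rho * w * (1 + cu/cs2 + cu**2/(2*cs2**2) - (u @ u)/(2*cs2))

def regularize(f):
    """Standard (non-recursive) regularization: rebuild the
    non-equilibrium part from its second-order Hermite moment only,
    discarding higher-order non-hydrodynamic ('ghost') contributions."""
    rho = f.sum()
    u = c.T @ f / rho
    fneq = f - feq(rho, u)
    Pi = np.einsum('i,ia,ib->ab', fneq, c, c)        # 2nd-order neq moment
    H2 = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)  # Hermite H2(c_i)
    f1 = w / (2 * cs2**2) * np.einsum('iab,ab->i', H2, Pi)
    return feq(rho, u) + f1

# Populations polluted with spurious high-order noise.
rng = np.random.default_rng(1)
f = feq(1.0, np.array([0.05, -0.02])) + 1e-3 * rng.standard_normal(9)
freg = regularize(f)
rho, u = f.sum(), c.T @ f / f.sum()
```

By Hermite orthogonality on D2Q9, the filtered populations carry the same density, momentum, and second-order moment as the raw ones, so only the non-hydrodynamic content is removed.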
IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
William M. Bond; Salih Ersayin
2007-03-30
This project involved industrial-scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing the mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and of the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of the makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of the existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate the findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming the results of the simulation. After the implementation of the modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, a concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based services to all the iron ore mining companies operating in northern Minnesota, and future proposals are pending with non-taconite mineral processing applications.
The importance of vertical resolution in the free troposphere for modeling intercontinental plumes
NASA Astrophysics Data System (ADS)
Zhuang, Jiawei; Jacob, Daniel J.; Eastham, Sebastian D.
2018-05-01
Chemical plumes in the free troposphere can preserve their identity for more than a week as they are transported on intercontinental scales. Current global models cannot reproduce this transport. The plumes dilute far too rapidly due to numerical diffusion in sheared flow. We show how model accuracy can be limited by either horizontal resolution (Δx) or vertical resolution (Δz). Balancing horizontal and vertical numerical diffusion, and weighing computational cost, implies an optimal grid resolution ratio (Δx/Δz)_opt ~ 1000 for simulating the plumes. This is considerably higher than in current global models (Δx/Δz ~ 20) and explains the rapid plume dilution in the models as caused by insufficient vertical resolution. Plume simulations with the Geophysical Fluid Dynamics Laboratory Finite-Volume Cubed-Sphere Dynamical Core (GFDL-FV3) over a range of horizontal and vertical grid resolutions confirm this limiting behavior. Our highest-resolution simulation (Δx ≈ 25 km, Δz ≈ 80 m) preserves the maximum mixing ratio in the plume to within 35% after 8 days in strongly sheared flow, a drastic improvement over current models. Adding free-tropospheric vertical levels in global models is computationally inexpensive and would also improve the simulation of water vapor.
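The aspect-ratio argument above is simple arithmetic; the following sketch just spells it out. The numerical values (optimal ratio ~1000, typical model ratio ~20, the 25 km / 80 m run) come from the abstract, everything else is division.

```python
# Back-of-envelope arithmetic for the grid-aspect-ratio argument.
# Values are taken from the abstract; the formulas are simple division.

def aspect_ratio(dx_km, dz_m):
    """Horizontal-to-vertical resolution ratio dx/dz (dimensionless)."""
    return dx_km * 1e3 / dz_m

TYPICAL = 20.0    # current global models, per the abstract
OPTIMAL = 1000.0  # cost-weighted optimum for plume transport, per the abstract

best_run = aspect_ratio(25.0, 80.0)  # finest simulation reported
print(f"finest run: dx/dz = {best_run}")  # between the typical and optimal values

def dz_needed(dx_km, ratio=OPTIMAL):
    """Vertical spacing (m) required to reach a target aspect ratio."""
    return dx_km * 1e3 / ratio

print(f"dx = 100 km needs dz ~ {dz_needed(100.0)} m")
```

Even the paper's finest run (ratio ≈ 312) sits well below the stated optimum, while typical global models are an order of magnitude coarser still in the vertical.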
Postcollapse Evolution of Globular Clusters
NASA Astrophysics Data System (ADS)
Makino, Junichiro
1996-11-01
A number of globular clusters appear to have undergone core collapse, in the sense that their predicted collapse times are much shorter than their current ages. Simulations with gas models and the Fokker-Planck approximation have shown that the central density of a globular cluster after the collapse undergoes nonlinear oscillation with a large amplitude (gravothermal oscillation). However, the question whether such an oscillation actually takes place in real N-body systems has remained unsolved because an N-body simulation with a sufficiently high resolution would have required computing resources of the order of several GFLOPS-yr. In the present paper, we report the results of such a simulation performed on a dedicated special-purpose computer, GRAPE-4. We have simulated the evolution of isolated point-mass systems with up to 32,768 particles. The largest number of particles reported previously is 10,000. We confirm that gravothermal oscillation takes place in an N-body system. The expansion phase shows all the signatures that are considered to be evidence of the gravothermal nature of the oscillation. At the maximum expansion, the core radius is ~1% of the half-mass radius for the run with 32,768 particles. The maximum core size, r_c, depends on N as
Distributed interactive communication in simulated space-dwelling groups.
Brady, Joseph V; Hienz, Robert D; Hursh, Steven R; Ragusa, Leonard C; Rouse, Charles O; Gasior, Eric D
2004-03-01
This report describes the development and preliminary application of an experimental test bed for modeling human behavior in the context of a computer-generated environment, to analyze the effects of variations in communication modalities, incentives and stressful conditions. In addition to detailing the methodological development of a simulated task environment that provides for electronic monitoring and recording of individual and group behavior, the initial substantive findings from an experimental analysis of distributed interactive communication in simulated space-dwelling groups are described. Crews of three members each (male and female) participated in simulated "planetary missions" based upon a synthetic scenario task that required identification, collection, and analysis of geologic specimens with a range of grade values. The results of these preliminary studies showed clearly that cooperative and productive interactions were maintained between individually isolated and distributed individuals communicating and problem-solving effectively in a computer-generated "planetary" environment over extended time intervals without the benefit of one another's physical presence. Studies on communication channel constraints confirmed the functional interchangeability of the available modalities, with the highest degree of interchangeability occurring between the audio and text modes of communication. The effects of task-related incentives were determined by the conditions under which they were available, with positive incentives effectively attenuating decrements in performance under stressful time pressure. Copyright 2003 Elsevier Ltd. All rights reserved.
Coupled neutronics and thermal-hydraulics numerical simulations of a Molten Salt Fast Reactor (MSFR)
NASA Astrophysics Data System (ADS)
Laureau, A.; Rubiolo, P. R.; Heuer, D.; Merle-Lucotte, E.; Brovchenko, M.
2014-06-01
Coupled neutronics and thermal-hydraulics numerical analyses of a molten salt fast reactor are presented. These preliminary numerical simulations are carried out using the Monte Carlo code MCNP and the computational fluid dynamics code OpenFOAM. The main objectives of this analysis, performed at steady-state reactor conditions, are to confirm the acceptability of the current neutronic and thermal-hydraulic designs of the reactor and to study the effects of the reactor operating conditions on some of the key MSFR design parameters, such as the temperature peaking factor. The effects of the precursors' motion on the reactor safety parameters, such as the effective fraction of delayed neutrons, have also been evaluated.
Sliding mode control of direct coupled interleaved boost converter for fuel cell
NASA Astrophysics Data System (ADS)
Wang, W. Y.; Ding, Y. H.; Ke, X.; Ma, X.
2017-12-01
A three-phase direct coupled interleaved boost converter (TP-DIBC) is proposed in this paper. This converter exhibits only a small current-sharing imbalance among its branches. An adaptive sliding mode control (SMC) law is designed for the TP-DIBC with two aims: 1) to reduce the output voltage and inductor current ripple while tightly regulating the output voltage, and 2) to share the total current carried by the converter equally among the parallel branches. The efficacy and robustness of the proposed TP-DIBC and adaptive SMC are confirmed via computer simulations using the Matlab SimPowerSystems tools. The simulation results are in line with expectations.
Glass polymorphism in amorphous germanium probed by first-principles computer simulations
NASA Astrophysics Data System (ADS)
Mancini, G.; Celino, M.; Iesari, F.; Di Cicco, A.
2016-01-01
The low-density (LDA) to high-density (HDA) transformation in amorphous Ge at high pressure is studied by first-principles molecular dynamics simulations in the framework of density functional theory. Previous experiments are accurately reproduced, including the presence of a well-defined LDA-HDA transition above 8 GPa. The LDA-HDA density increase is found to be about 14%. Pair and bond-angle distributions are obtained in the 0-16 GPa pressure range and allow a detailed analysis of the transition. The local fourfold coordination is transformed into an average HDA sixfold coordination associated with different local geometries, as confirmed by coordination-number analysis and by the shape of the bond-angle distributions.
Tuning the critical solution temperature of polymers by copolymerization
NASA Astrophysics Data System (ADS)
Schulz, Bernhard; Chudoba, Richard; Heyda, Jan; Dzubiella, Joachim
2015-12-01
We study statistical copolymerization effects on the upper critical solution temperature (CST) of generic homopolymers by means of coarse-grained Langevin dynamics computer simulations and mean-field theory. Our systematic investigation reveals that the CST can change monotonically or non-monotonically with copolymerization, as observed in experimental studies, depending on the degree of non-additivity of the monomer (A-B) cross-interactions. The simulation findings are confirmed and qualitatively explained by a combination of a two-component Flory-de Gennes model for polymer collapse and a simple thermodynamic expansion approach. Our findings provide some rationale behind the effects of copolymerization and may be helpful for tuning CST behavior of polymers in soft material design.
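In a mean-field picture of the kind invoked above, the effect of cross-interaction non-additivity can be seen with a few lines of arithmetic: for a random copolymer with A-fraction f, the average monomer attraction (a toy proxy for the CST) is quadratic in f, and whether it varies monotonically depends on how far the A-B cross term departs from the additive mean. The energy values and the non-additivity parameter `delta` below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def eps_eff(frac_a, eps_aa=1.0, eps_bb=0.6, delta=0.0):
    """Mean-field average attraction per contact for a random A/B
    copolymer with A-fraction frac_a. delta measures non-additivity of
    the A-B cross term: delta=0 is the additive (arithmetic) mean.
    Toy illustration, not the paper's Hamiltonian."""
    eps_ab = (1.0 - delta) * 0.5 * (eps_aa + eps_bb)
    return (frac_a**2 * eps_aa + (1 - frac_a)**2 * eps_bb
            + 2 * frac_a * (1 - frac_a) * eps_ab)

f = np.linspace(0.0, 1.0, 101)
additive = eps_eff(f, delta=0.0)  # linear in f -> monotonic CST change
nonadd = eps_eff(f, delta=0.5)    # quadratic -> non-monotonic, interior extremum
```

With additive cross-interactions the quadratic terms cancel and the attraction interpolates linearly between the homopolymer limits; a non-additive cross term leaves a quadratic contribution whose extremum falls inside 0 < f < 1, the mean-field signature of non-monotonic CST behavior.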
An experimental phylogeny to benchmark ancestral sequence reconstruction
Randall, Ryan N.; Radford, Caelan E.; Roof, Kelsey A.; Natarajan, Divya K.; Gaucher, Eric A.
2016-01-01
Ancestral sequence reconstruction (ASR) is a still-burgeoning method that has revealed many key mechanisms of molecular evolution. One criticism of the approach is the inability to validate its algorithms within a biological context, as opposed to a computer simulation. Here we build an experimental phylogeny using the gene of a single red fluorescent protein to address this criticism. The evolved phylogeny consists of 19 operational taxonomic units (leaves) and 17 ancestral bifurcations (nodes) that display a wide variety of fluorescent phenotypes. The 19 leaves then serve as 'modern' sequences that we subject to ASR analyses using various algorithms and benchmark against the known ancestral genotypes and ancestral phenotypes. We confirm computer simulations showing that all algorithms infer ancient sequences with high accuracy, yet we also reveal wide variation in the phenotypes encoded by incorrectly inferred sequences. Specifically, Bayesian methods incorporating rate variation significantly outperform the maximum parsimony criterion in phenotypic accuracy. Subsampling of extant sequences had only a minor effect on the inference of ancestral sequences. PMID:27628687
Plazinska, Anita; Plazinski, Wojciech; Jozwiak, Krzysztof
2014-04-30
A computational approach applicable to molecular dynamics (MD)-based techniques is proposed to predict ligand-protein binding affinities that depend on the ligand stereochemistry. All possible stereoconfigurations are expressed in terms of one set of force-field parameters [a stereoconfiguration-independent potential (SIP)], which allows all relative free energies to be calculated in only a single simulation. SIP can be used for studying diverse stereoconfiguration-dependent phenomena by means of various computational techniques of enhanced sampling. The method has been successfully tested on the β2-adrenergic receptor (β2-AR) binding the four fenoterol stereoisomers, by both metadynamics simulations and replica-exchange MD. Both methods gave very similar results, fully confirming the presence of stereoselective effects in the fenoterol-β2-AR interactions. However, the metadynamics-based approach offered much better sampling efficiency, which allows a significant reduction of the unphysical region in SIP. Copyright © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Dodani, Sheel C.; Kiss, Gert; Cahn, Jackson K. B.; Su, Ye; Pande, Vijay S.; Arnold, Frances H.
2016-05-01
The dynamic motions of protein structural elements, particularly flexible loops, are intimately linked with diverse aspects of enzyme catalysis. Engineering of these loop regions can alter protein stability, substrate binding and even dramatically impact enzyme function. When these flexible regions are unresolvable structurally, computational reconstruction in combination with large-scale molecular dynamics simulations can be used to guide the engineering strategy. Here we present a collaborative approach that consists of both experiment and computation and led to the discovery of a single mutation in the F/G loop of the nitrating cytochrome P450 TxtE that simultaneously controls loop dynamics and completely shifts the enzyme's regioselectivity from the C4 to the C5 position of L-tryptophan. Furthermore, we find that this loop mutation is naturally present in a subset of homologous nitrating P450s and confirm that these uncharacterized enzymes exclusively produce 5-nitro-L-tryptophan, a previously unknown biosynthetic intermediate.
Fitting Neuron Models to Spike Trains
Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925
NASA Astrophysics Data System (ADS)
Cheng, Shiwang; Carrillo, Jan-Michael Y.; Carroll, Bobby; Sumpter, Bobby G.; Sokolov, Alexei P.
There is growing experimental evidence for the existence of an interfacial layer of finite thickness, with slowed-down dynamics, in polymer nanocomposites (PNCs). Moreover, it is believed that the interfacial layer plays a significant role in various macroscopic properties of PNCs: a thicker interfacial layer has a more pronounced effect on macroscopic properties such as mechanical enhancement. However, it is not clear what molecular parameter controls the interfacial layer thickness. Inspired by our recent computer simulations, which showed that chain rigidity correlates well with the interfacial layer thickness, we performed systematic experimental studies on different polymer nanocomposites, varying the chain stiffness. Combining small-angle X-ray scattering, broadband dielectric spectroscopy and temperature-modulated differential scanning calorimetry, we find a good correlation between the polymer Kuhn length and the thickness of the interfacial layer, confirming the earlier computer simulation results. Our findings provide direct guidance for the design of new PNCs with desired properties.
Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity
NASA Astrophysics Data System (ADS)
Miah, Md Mamun
This research describes the importance of hydro-geomechanical coupling in the geologic subsurface environment for fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during well construction in the oil and gas industries. A sequential computational code is developed to capture the multiphysics interaction behavior by linking the flow simulation code TOUGH2 and the geomechanics modeling code PyLith. The numerical formulation of each code is discussed to demonstrate its modeling capabilities. The computational framework involves sequential coupling and the solution of two sub-problems: fluid flow through fractured and porous media, and reservoir geomechanics. For each time step of the flow calculation, the pressure field is passed to the geomechanics code to compute the effective stress field and fault slip. A simplified permeability model is implemented in the code that accounts for the permeability of porous and saturated rocks subject to confining stresses. The accuracy of the coupled TOUGH-PyLith simulator is tested by simulating Terzaghi's 1D consolidation problem. The coupled poroelasticity modeling capability is validated by benchmarking against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault under the combined effect of far-field tectonic loading and fluid injection, using an appropriate fault constitutive friction model. Results from the quasi-static induced-earthquake simulations show a delayed response in earthquake nucleation. This is attributed to the increased total stress in the domain and to not accounting for pressure on the fault. However, this issue is resolved in the final chapter by simulating a single-event earthquake dynamic rupture. Simulation results show that fluid pressure has a positive effect on slip nucleation and subsequent crack propagation. This is confirmed by a sensitivity analysis showing that increasing the injection-well distance delays slip nucleation and rupture propagation on the fault.
Virtual evaluation of stent graft deployment: a validated modeling and simulation study.
De Bock, S; Iannaccone, F; De Santis, G; De Beule, M; Van Loo, D; Devos, D; Vermassen, F; Segers, P; Verhegghe, B
2012-09-01
The presented study details the virtual deployment of a bifurcated stent graft (Medtronic Talent) in an Abdominal Aortic Aneurysm model, using the finite element method. The entire deployment procedure is modeled, with the stent graft being crimped and bent according to the vessel geometry, and subsequently released. The finite element results are validated in vitro with placement of the device in a silicone mock aneurysm, using high resolution CT scans to evaluate the result. The presented work confirms the capability of finite element computer simulations to predict the deformed configuration after endovascular aneurysm repair (EVAR). These simulations can be used to quantify mechanical parameters, such as neck dilations, radial forces and stresses in the device, that are difficult or impossible to obtain from medical imaging. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Montcel, Bruno; Chabrier, Renée; Poulet, Patrick
2006-12-01
Time-resolved diffuse optical methods have been applied to detect hemodynamic changes induced by cerebral activity. We describe a near-infrared spectroscopic (NIRS) reconstruction-free method that retrieves depth-related information on absorption variations. Variations in the absorption coefficient of tissues were computed over the duration of the whole experiment, but also over each temporal step of the time-resolved optical signal, using the microscopic Beer-Lambert law. Finite element simulations show that time-resolved computation of the absorption difference as a function of the propagation time of detected photons is sensitive to the depth profile of optical absorption variations. Differences in deoxyhemoglobin and oxyhemoglobin concentrations can also be calculated from multi-wavelength measurements. Experimental validations of the simulated results have been obtained for resin phantoms. They confirm that time-resolved computation of the absorption differences exhibits completely different behaviours depending on whether these variations occur deeply or superficially. The hemodynamic response to a short finger-tapping stimulus was measured over the motor cortex and compared to experiments involving Valsalva manoeuvres. Functional maps were also calculated for the hemodynamic response induced by finger-tapping movements.
Computational Models Reveal a Passive Mechanism for Cell Migration in the Crypt
Dunn, Sara-Jane; Näthke, Inke S.; Osborne, James M.
2013-01-01
Cell migration in the intestinal crypt is essential for the regular renewal of the epithelium, and the continued upward movement of cells is a key characteristic of healthy crypt dynamics. However, the driving force behind this migration is unknown. Possibilities include mitotic pressure, active movement driven by motility cues, or negative pressure arising from cell loss at the crypt collar. It is possible that a combination of factors together coordinate migration. Here, three different computational models are used to provide insight into the mechanisms that underpin cell movement in the crypt, by examining the consequence of eliminating cell division on cell movement. Computational simulations agree with existing experimental results, confirming that migration can continue in the absence of mitosis. Importantly, however, simulations allow us to infer mechanisms that are sufficient to generate cell movement, which is not possible through experimental observation alone. The results produced by the three models agree and suggest that cell loss due to apoptosis and extrusion at the crypt collar relieves cell compression below, allowing cells to expand and move upwards. This finding suggests that future experiments should focus on the role of apoptosis and cell extrusion in controlling cell migration in the crypt. PMID:24260407
Liquid-liquid transition in ST2 water
NASA Astrophysics Data System (ADS)
Liu, Yang; Palmer, Jeremy C.; Panagiotopoulos, Athanassios Z.; Debenedetti, Pablo G.
2012-12-01
We use the weighted histogram analysis method [S. Kumar, D. Bouzida, R. H. Swendsen, P. A. Kollman, and J. M. Rosenberg, J. Comput. Chem. 13, 1011 (1992), 10.1002/jcc.540130812] to calculate the free energy surface of the ST2 model of water as a function of density and bond-orientational order. We perform our calculations at deeply supercooled conditions (T = 228.6 K, P = 2.2 kbar; T = 235 K, P = 2.2 kbar) and focus our attention on the region of bond-orientational order that is relevant to disordered phases. We find a first-order transition between a low-density liquid (LDL, ρ ≈ 0.9 g/cc) and a high-density liquid (HDL, ρ ≈ 1.15 g/cc), confirming our earlier sampling of the free energy surface of this model as a function of density [Y. Liu, A. Z. Panagiotopoulos, and P. G. Debenedetti, J. Chem. Phys. 131, 104508 (2009), 10.1063/1.3229892]. We demonstrate the disappearance of the LDL basin at high pressure and of the HDL basin at low pressure, in agreement with independent simulations of the system's equation of state. Consistency between directly computed and reweighted free energies, as well as between free energy surfaces computed using different thermodynamic starting conditions, confirms proper equilibrium sampling. Diffusion and structural relaxation calculations demonstrate that equilibration of the LDL phase, which exhibits slow dynamics, is attained in the course of the simulations. Repeated flipping between the LDL and HDL phases in the course of long molecular dynamics runs provides further evidence of a phase transition. We use the Ewald summation with vacuum boundary conditions to calculate long-ranged Coulombic interactions and show that conducting boundary conditions lead to unphysical behavior at low temperatures.
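The consistency check described above (agreement between directly computed and reweighted free energies) rests on the same Boltzmann-reweighting identity that underlies WHAM. A toy single-histogram version of that identity, for a discrete three-level system rather than ST2 water, with all energies and temperatures chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples drawn at inverse temperature b1 are reweighted to predict an
# average at b2, then compared with the exact Boltzmann result for a
# discrete three-level system (energies in arbitrary illustrative units).
E_levels = np.array([0.0, 1.0, 2.0])
b1, b2 = 1.0, 1.5

p1 = np.exp(-b1 * E_levels)
p1 /= p1.sum()
samples = rng.choice(E_levels, size=100_000, p=p1)

weights = np.exp(-(b2 - b1) * samples)  # per-sample reweighting factors
E_reweighted = (samples * weights).sum() / weights.sum()

# Exact Boltzmann average at b2 for comparison.
p2 = np.exp(-b2 * E_levels)
p2 /= p2.sum()
E_exact = (E_levels * p2).sum()
```

The reweighted estimate reproduces the exact average at the target temperature to within sampling noise; WHAM generalizes this by optimally combining histograms from many state points.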
A prototype of behavior selection mechanism based on emotion
NASA Astrophysics Data System (ADS)
Zhang, Guofeng; Li, Zushu
2007-12-01
Following a bionic methodology, rather than the more familiar design methodology, and summarizing psychological research on emotion, we propose a biological mechanism of emotion, describe the role of emotion selection in the evolution of living creatures, and present an animat framework that includes emotion, analogous to the classical control structure. Consulting Prospect Theory, we build Emotion Characteristic Functions (ECFs) that compute emotion; two further emotion theories are added to them, namely that higher emotion is preferred and that middle emotion makes the brain run more efficiently. Together these yield an emotional behavior-selection mechanism. A simulation of the proposed mechanism was designed and carried out on the Swarm artificial-life software platform. In this simulation, a virtual grassland ecosystem is created in which there are two kinds of artificial animals: herbivores and predators. These artificial animals execute four types of behavior in their lives: wandering, escaping, finding food, and finding a sex partner. According to animal ethology, escaping from predators takes priority over other behaviors because it is essential to survival; finding food is the second most important behavior, mating the third, and wandering the last. Keeping this behavior order, and based on our emotion characteristic function theory, the specific emotion-computing functions of the artificial autonomous animals are built. The result of the simulation confirms the behavior-selection mechanism.
An exact and efficient first passage time algorithm for reaction-diffusion processes on a 2D-lattice
NASA Astrophysics Data System (ADS)
Bezzola, Andri; Bales, Benjamin B.; Alkire, Richard C.; Petzold, Linda R.
2014-01-01
We present an exact and efficient algorithm for reaction-diffusion-nucleation processes on a 2D-lattice. The algorithm makes use of first passage time (FPT) to replace the computationally intensive simulation of diffusion hops in KMC by larger jumps when particles are far away from step-edges or other particles. Our approach computes exact probability distributions of jump times and target locations in a closed-form formula, based on the eigenvectors and eigenvalues of the corresponding 1D transition matrix, maintaining atomic-scale resolution of resulting shapes of deposit islands. We have applied our method to three different test cases of electrodeposition: pure diffusional aggregation for large ranges of diffusivity rates and for simulation domain sizes of up to 4096×4096 sites, the effect of diffusivity on island shapes and sizes in combination with a KMC edge diffusion, and the calculation of an exclusion zone in front of a step-edge, confirming statistical equivalence to standard KMC simulations. The algorithm achieves significant speedup compared to standard KMC for cases where particles diffuse over long distances before nucleating with other particles or being captured by larger islands.
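The core trick, replacing many individual diffusion hops with one exactly sampled jump using the eigendecomposition of a 1D transition matrix, can be sketched on a small absorbing-boundary walk. This is a simplified illustration, not the authors' 2D algorithm; the lattice size and hop probability are assumptions.

```python
import numpy as np

n = 9      # interior lattice sites between two absorbing boundaries (assumed)
q = 0.5    # hop probability to each neighbour per time step (assumed)

# Sub-stochastic transition matrix over the interior sites; probability
# that hops off either end is absorbed (first passage to a "step-edge").
P = np.zeros((n, n))
for i in range(n):
    if i > 0:
        P[i, i - 1] = q
    if i < n - 1:
        P[i, i + 1] = q

# Eigendecomposition gives a closed form for the survival probability
# S(t) = 1^T P^t p0, hence the first-passage-time CDF F(t) = 1 - S(t),
# without stepping through individual diffusion hops.
w, V = np.linalg.eig(P)
Vinv = np.linalg.inv(V)

def fpt_cdf(t, start):
    """P(first passage time <= t) for a walker starting at site `start`."""
    p0 = np.zeros(n)
    p0[start] = 1.0
    survival = np.ones(n) @ V @ np.diag(w ** t) @ Vinv @ p0
    return 1.0 - survival.real

# Cross-check against brute-force matrix powers (what the closed form
# is meant to replace).
p = np.zeros(n)
p[n // 2] = 1.0
for _ in range(50):
    p = P @ p
direct = 1.0 - p.sum()
```

Once `w`, `V`, and `Vinv` are in hand, the passage-time distribution at any horizon costs a single matrix evaluation, which is what lets an FPT-KMC scheme take one large, exactly distributed jump instead of simulating every hop.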
NASA Astrophysics Data System (ADS)
Woods, Christopher J.; Malaisree, Maturos; Long, Ben; McIntosh-Smith, Simon; Mulholland, Adrian J.
2013-12-01
The emergence of a novel H7N9 avian influenza that infects humans is a serious cause for concern. Of the genome sequences of H7N9 neuraminidase available, one contains a substitution of arginine to lysine at position 292, suggesting a potential for reduced drug binding efficacy. We have performed molecular dynamics simulations of oseltamivir, zanamivir and peramivir bound to H7N9, H7N9-R292K, and a structurally related H11N9 neuraminidase. They show that H7N9 neuraminidase is structurally homologous to H11N9, binding the drugs in identical modes. The simulations reveal that the R292K mutation disrupts drug binding in H7N9 in a comparable manner to that observed experimentally for H11N9-R292K. Absolute binding free energy calculations with the WaterSwap method confirm a reduction in binding affinity. This indicates that the efficacy of antiviral drugs against H7N9-R292K will be reduced. Simulations can assist in predicting disruption of binding caused by mutations in neuraminidase, thereby providing a computational `assay.'
Fast Virtual Stenting with Active Contour Models in Intracranial Aneurysm
Zhong, Jingru; Long, Yunling; Yan, Huagang; Meng, Qianqian; Zhao, Jing; Zhang, Ying; Yang, Xinjian; Li, Haiyun
2016-01-01
Intracranial stents are becoming an increasingly useful option in the treatment of intracranial aneurysms (IAs). Simulating the released stent configuration, together with computational fluid dynamics (CFD) simulation prior to intervention, can help surgeons optimize the intervention scheme. This paper proposes a fast virtual stenting method for IAs based on an active contour model (ACM), which can virtually release stents within patient-specific vessel and aneurysm models of any shape built from real medical image data. In this method, an initial stent mesh is generated along the centerline of the parent artery without the need for registration between the stent contour and the vessel. Additionally, the diameter of the initial stent volumetric mesh is set to the maximum inscribed sphere diameter of the parent artery to improve stenting accuracy and save computational cost. Finally, a novel criterion for terminating virtual stent expansion, based on collision detection of axis-aligned bounding boxes, is applied, making the stent expansion free of edge effects. The experimental results of the virtual stenting and the corresponding CFD simulations demonstrate the efficacy and accuracy of the ACM-based method, which is valuable for intervention scheme selection and therapy plan confirmation. PMID:26876026
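The stent-expansion termination criterion relies on collision detection of axis-aligned bounding boxes (AABBs); the standard per-axis overlap test behind such a check can be sketched as follows (an illustrative sketch, not the authors' code):

```python
def aabb_overlap(a, b):
    """Axis-aligned bounding-box intersection test. Each box is a
    (min_corner, max_corner) pair of 3-tuples; boxes collide iff
    their extents overlap on every axis."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))
```

Because the test is separable per axis, a single failing axis suffices to rule out a collision, which keeps the check cheap inside an expansion loop.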
Mollet, Mike; Godoy-Silva, Ruben; Berdugo, Claudia; Chalmers, Jeffrey J
2008-06-01
Fluorescence activated cell sorting, FACS, is a widely used method to sort subpopulations of cells to high purities. To achieve relatively high sorting speeds, FACS instruments operate by forcing suspended cells to flow single file through one or more laser beams. Subsequently, this flow stream breaks up into individual drops which can be charged and deflected into multiple collection streams. Previous work by Ma et al. (2002) and Mollet et al. (2007; Biotechnol Bioeng 98:772-788) indicates that subjecting cells to hydrodynamic forces consisting of both high extensional and shear components in micro-channels results in significant cell damage. Using the fluid dynamics software FLUENT, computer simulations of typical fluid flow through the nozzle of a BD FACSVantage indicate that hydrodynamic forces, quantified using the scalar parameter energy dissipation rate, reach levels in the FACS nozzle similar to those reported to create significant cell damage in micro-channels. Experimental studies in the FACSVantage, operated under the same conditions as the simulations, confirmed significant cell damage in two cell lines: Chinese Hamster Ovary (CHO) cells and THP1, a human acute monocytic leukemia cell line.
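The scalar used above to quantify hydrodynamic stress, the energy dissipation rate, is conventionally computed from the local velocity gradient as epsilon = 2*mu*(S:S), with S the strain-rate tensor. A minimal sketch of that definition (the function name and interface are assumptions, not part of the cited work):

```python
import numpy as np

def energy_dissipation_rate(grad_u, mu):
    """Viscous energy dissipation rate (W/m^3 for SI inputs) from the
    local velocity-gradient tensor grad_u and dynamic viscosity mu."""
    S = 0.5 * (grad_u + grad_u.T)          # strain-rate tensor
    return 2.0 * mu * float(np.sum(S * S)) # epsilon = 2*mu*(S:S)
```

For a simple shear flow with du/dy = g, this reduces to epsilon = mu * g**2.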
The Role of Histone Tails in the Nucleosome: A Computational Study
Erler, Jochen; Zhang, Ruihan; Petridis, Loukas; Cheng, Xiaolin; Smith, Jeremy C.; Langowski, Jörg
2014-01-01
Histone tails play an important role in gene transcription and expression. We present here a systematic computational study of the role of histone tails in the nucleosome, using replica exchange molecular dynamics simulations with an implicit solvent model and different well-established force fields. We performed simulations for all four histone tails, H4, H3, H2A, and H2B, isolated and with inclusion of the nucleosome. The results confirm predictions of previous theoretical studies for the secondary structure of the isolated tails but show a strong dependence on the force field used. In the presence of the entire nucleosome for all force fields, the secondary structure of the histone tails is destabilized. Specific contacts are found between charged lysine and arginine residues and DNA phosphate groups and other binding sites in the minor and major DNA grooves. Using cluster analysis, we found a single dominant configuration of binding to DNA for the H4 and H2A histone tails, whereas H3 and H2B show multiple binding configurations with an equal probability. The leading stabilizing contribution for those binding configurations is the attractive interaction between the positively charged lysine and arginine residues and the negatively charged phosphate groups, and thus the resulting charge neutralization. Finally, we present results of molecular dynamics simulations in explicit solvent to confirm our conclusions. Results from both implicit and explicit solvent models show that large portions of the histone tails are not bound to DNA, supporting the complex role of these tails in gene transcription and expression and making them possible candidates for binding sites of transcription factors, enzymes, and other proteins. PMID:25517156
A hybrid framework for coupling arbitrary summation-by-parts schemes on general meshes
NASA Astrophysics Data System (ADS)
Lundquist, Tomas; Malan, Arnaud; Nordström, Jan
2018-06-01
We develop a general interface procedure to couple both structured and unstructured parts of a hybrid mesh in a non-collocated, multi-block fashion. The target is to gain optimal computational efficiency in fluid dynamics simulations involving complex geometries. While guaranteeing stability, the proposed procedure is optimized for accuracy and requires minimal algorithmic modifications to already existing schemes. Initial numerical investigations confirm considerable efficiency gains compared to non-hybrid calculations of up to an order of magnitude.
Detection of fuze defects by image-processing methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, M.J.
1988-03-01
This paper describes experimental studies of the detection of mechanical defects by the application of computer-processing methods to real-time radiographic images of fuze assemblies. The experimental results confirm that a new algorithm developed at Materials Research Laboratory has potential for the automatic inspection of these assemblies and of others that contain discrete components. The algorithm was applied to images that contain a range of grey levels and has been found to be tolerant to image variations encountered under simulated production conditions.
Bio-Inspired Controller on an FPGA Applied to Closed-Loop Diaphragmatic Stimulation
Zbrzeski, Adeline; Bornat, Yannick; Hillen, Brian; Siu, Ricardo; Abbas, James; Jung, Ranu; Renaud, Sylvie
2016-01-01
Cervical spinal cord injury can disrupt connections between the brain respiratory network and the respiratory muscles, which can lead to partial or complete loss of ventilatory control and require ventilatory assistance. Unlike current open-loop technology, a closed-loop diaphragmatic pacing system could overcome the drawbacks of manual titration as well as respond to changing ventilation requirements. We present an original bio-inspired assistive technology for real-time ventilation assistance, implemented in a digital configurable Field Programmable Gate Array (FPGA). The bio-inspired controller, which is a spiking neural network (SNN) inspired by the medullary respiratory network, is as robust as a classic controller while having a flexible, low-power and low-cost hardware design. The system was simulated in MATLAB with FPGA-specific constraints and tested with a computational model of rat breathing; the model reproduced experimentally collected respiratory data in eupneic animals. The open-loop version of the bio-inspired controller was implemented on the FPGA. Electrical test bench characterizations confirmed the system functionality. Open- and closed-loop paradigms were simulated to test the FPGA system's real-time behavior using the rat computational model. The closed-loop system monitors breathing and changes in respiratory demands to drive diaphragmatic stimulation. The simulation results inform future acute animal experiments and constitute the first step toward the development of a neuromorphic, adaptive, compact, low-power, implantable device. The bio-inspired hardware design optimizes the FPGA resource and time costs while harnessing the computational power of spike-based neuromorphic hardware. Its real-time feature makes it suitable for in vivo applications. PMID:27378844
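Spiking neural networks of the kind described are typically assembled from simple neuron models. The sketch below shows one Euler step of a generic leaky integrate-and-fire neuron; the model choice and all parameter values are illustrative assumptions, not those of the paper's respiratory SNN.

```python
def lif_step(v, i_in, dt=1e-3, tau=0.02, v_rest=0.0, v_th=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    Returns (new_membrane_potential, spiked). All defaults are
    illustrative, chosen only to make the dynamics visible."""
    v = v + dt * ((v_rest - v) / tau + i_in)
    if v >= v_th:
        return v_rest, True   # threshold crossed: emit spike, reset
    return v, False
```

With a constant suprathreshold drive, the neuron fires periodically; a network of such units wired with excitatory and inhibitory connections can generate rhythmic bursting of the kind used for pacing.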
Reis, H; Papadopoulos, M G; Grzybowski, A
2006-09-21
This is the second part of a study to elucidate the local field effects on the nonlinear optical properties of p-nitroaniline (pNA) in three solvents of different multipolar character, that is, cyclohexane (CH), 1,4-dioxane (DI), and tetrahydrofuran (THF), employing a discrete description of the solutions. By the use of liquid structure information from molecular dynamics simulations and molecular properties computed by high-level ab initio methods, the local field and local field gradients on p-nitroaniline and the solvent molecules are computed in quadrupolar approximation. To validate the simulations and the induction model, static and dynamic (non)linear properties of the pure solvents are also computed. With the exception of the static dielectric constant of pure THF, a good agreement between computed and experimental refractive indices, dielectric constants, and third harmonic generation signals is obtained for the solvents. For the solutions, it is found that multipole moments up to two orders higher than quadrupole have a negligible influence on the local fields on pNA, if a simple distribution model is employed for the electric properties of pNA. Quadrupole effects are found to be nonnegligible in all three solvents but are especially pronounced in the 1,4-dioxane solvent, in which the local fields are similar to those in THF, although the dielectric constant of DI is 2.2 and that of the simulated THF is 5.4. The electric-field-induced second harmonic generation (EFISH) signal and the hyper-Rayleigh scattering signal of pNA in the solutions computed with the local field are in good to fair agreement with available experimental results. This confirms the effect of the "dioxane anomaly" also on nonlinear optical properties. Predictions based on an ellipsoidal Onsager model as applied by experimentalists are in very good agreement with the discrete model predictions. 
This is in contrast to a recent discrete reaction field calculation of pNA in 1,4-dioxane, which found that the predicted first hyperpolarizability of pNA deviated strongly from the predictions obtained using Onsager-Lorentz local field factors.
Garcia, E; Laganà, A; Pirani, F; Bartolomei, M; Cacciatore, M; Kurnosov, A
2016-07-14
Prompted by a comparison of measured and computed rate coefficients of Vibration-to-Vibration and Vibration-to-Translation energy transfer in O2 + N2 non-reactive collisions, extended semiclassical calculations of the related cross sections were performed to rationalize the role played by the attractive and repulsive components of the interaction on two different potential energy surfaces. By exploiting the distributed concurrent scheme of the Grid Empowered Molecular Simulator, we extended the computational work to quasiclassical techniques, investigated the underlying microscopic mechanisms in more detail, singled out the interaction components facilitating the energy transfer, improved the formulation of the potential, and performed additional calculations that confirmed the effectiveness of the improvement introduced.
SU-E-I-91: Development of a Compact Radiographic Simulator Using Microsoft Kinect.
Ono, M; Kozono, K; Aoki, M; Mizoguchi, A; Kamikawa, Y; Umezu, Y; Arimura, H; Toyofuku, F
2012-06-01
A radiographic simulator system is useful for learning radiographic techniques and for confirming positioning before x-ray irradiation. Conventional x-ray simulators have drawbacks in cost and size, and are only applicable to situations in which the position of the object does not change. Therefore, we have developed a new radiographic simulator system using an infrared-based three-dimensional shape measurement device (Microsoft Kinect). We wrote a computer program using OpenCV and OpenNI to process the depth image data obtained from the Kinect, and calculated the exact distance from the Kinect to the object by calibration. The object was measured from various directions, and the positional relationship between the x-ray tube and the object was obtained. X-ray projection images were calculated by projecting x-rays onto mathematical three-dimensional CT data of a head phantom of almost the same size. The object was rotated from 0 degrees (standard position) through 90 degrees in increments of 10 degrees, and the accuracy of the measured rotation angle values was evaluated. To reduce the computational time, the projection image size was varied (512*512, 256*256, and 128*128). X-ray simulation images corresponding to the radiographic images produced with the x-ray tube were obtained. The three-dimensional position of the object was measured with good precision from 0 to 50 degrees, but above 50 degrees the measured position error increased with the rotation angle. The computational times were 30, 12, and 7 seconds for image sizes of 512*512, 256*256, and 128*128, respectively. We could measure the three-dimensional position of the object using a properly calibrated Kinect sensor, and obtained projection images at relatively high speed using the three-dimensional CT data. These results suggest that this system can be used to obtain simulated projection x-ray images before x-ray exposure by attaching the device to an x-ray tube.
© 2012 American Association of Physicists in Medicine.
Design and analysis considerations for deployment mechanisms in a space environment
NASA Technical Reports Server (NTRS)
Vorlicek, P. L.; Gore, J. V.; Plescia, C. T.
1982-01-01
On the second flight of the INTELSAT V spacecraft, the time required for successful deployment of the north solar array was longer than originally predicted; the south solar array deployed as predicted. As a result of the difference in deployment times, a series of experiments was conducted to locate the cause of the difference. Deployment rate sensitivity to hinge friction and temperature levels was investigated. A digital computer simulation of the deployment was created to evaluate the effects of parameter changes on deployment. Hinge design was optimized for nominal solar array deployment time for future INTELSAT V satellites. The nominal deployment times of both solar arrays on the third flight of INTELSAT V confirm the validity of the simulation and design optimization.
Coupled Kardar-Parisi-Zhang Equations in One Dimension
NASA Astrophysics Data System (ADS)
Ferrari, Patrik L.; Sasamoto, Tomohiro; Spohn, Herbert
2013-11-01
Over the past years our understanding of the scaling properties of the solutions to the one-dimensional KPZ equation has advanced considerably, both theoretically and experimentally. In our contribution we export these insights to the case of coupled KPZ equations in one dimension. We establish equivalence with nonlinear fluctuating hydrodynamics for multi-component driven stochastic lattice gases. To check the predictions of the theory, we perform Monte Carlo simulations of the two-component AHR model. Its steady state is computed using the matrix product ansatz. Thereby all coefficients appearing in the coupled KPZ equations are deduced from the microscopic model. Time correlations in the steady state are simulated and we confirm not only the scaling exponent, but also the scaling function and the non-universal coefficients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talley, Darren G.
2017-04-01
This report describes the work and results of the verification and validation (V&V) of the version 1.0 release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, the equation of motion for fuel element thermal expansion, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This V&V effort was intended to confirm that the code shows good agreement between simulation and actual ACRR operations.
NASA Technical Reports Server (NTRS)
Hansman, R. J., Jr.
1982-01-01
The feasibility of computerized simulation of the physics of advanced microwave anti-icing systems, which preheat impinging supercooled water droplets prior to impact, was investigated. Theoretical and experimental work performed to create a physically realistic simulation is described. The behavior of the absorption cross section for melting ice particles was measured by a resonant cavity technique and found to agree with theoretical predictions. Values of the dielectric parameters of supercooled water were measured by a similar technique at lambda = 2.82 cm down to -17 C. The hydrodynamic behavior of accelerated water droplets was studied photographically in a wind tunnel. Droplets were found to initially deform as oblate spheroids and to eventually become unstable and break up in Bessel function modes for large values of acceleration or droplet size. This confirms the theory as to the maximum stable droplet size in the atmosphere. A computer code which predicts droplet trajectories in an arbitrary flow field was written and confirmed experimentally. The results were consolidated into a simulation to study the heating by electromagnetic fields of droplets impinging onto an object such as an airfoil. It was determined that there is sufficient time to heat droplets prior to impact for typical parameter values. Design curves for such a system are presented.
Majority logic gate for 3D magnetic computing.
Eichwald, Irina; Breitkreutz, Stephan; Ziemys, Grazvydas; Csaba, György; Porod, Wolfgang; Becherer, Markus
2014-08-22
For decades now, microelectronic circuits have been exclusively built from transistors. An alternative way is to use nano-scaled magnets for the realization of digital circuits. This technology, known as nanomagnetic logic (NML), may offer significant improvements in terms of power consumption and integration densities. Further advantages of NML are: non-volatility, radiation hardness, and operation at room temperature. Recent research focuses on the three-dimensional (3D) integration of nanomagnets. Here we show, for the first time, a 3D programmable magnetic logic gate. Its computing operation is based on physically field-interacting nanometer-scaled magnets arranged in a 3D manner. The magnets possess a bistable magnetization state representing the Boolean logic states '0' and '1.' Magneto-optical and magnetic force microscopy measurements prove the correct operation of the gate over many computing cycles. Furthermore, micromagnetic simulations confirm the correct functionality of the gate even for a size in the nanometer-domain. The presented device demonstrates the potential of NML for three-dimensional digital computing, enabling the highest integration densities.
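The Boolean function such a programmable majority gate computes is compact: the output follows at least two of the three inputs, and fixing one input to 0 or 1 "programs" the gate to act as AND or OR on the remaining two. A minimal logic-level sketch (the physical field-coupled implementation is, of course, not modeled here):

```python
def majority(a, b, c):
    # Boolean majority of three inputs: 1 iff at least two inputs are 1.
    return 1 if a + b + c >= 2 else 0

def programmable_gate(x, y, program):
    # Fixing one majority input "programs" the gate:
    # program = 0 -> AND(x, y); program = 1 -> OR(x, y).
    return majority(x, y, program)
```

This programmability is what makes the majority gate a universal building block for NML circuits when combined with inversion.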
Selent, Marcin; Nyman, Jonas; Roukala, Juho; Ilczyszyn, Marek; Oilunkaniemi, Raija; Bygrave, Peter J.; Laitinen, Risto; Jokisaari, Jukka
2017-01-01
An approach is presented for the structure determination of clathrates using NMR spectroscopy of enclathrated xenon to select from a set of predicted crystal structures. Crystal structure prediction methods have been used to generate an ensemble of putative structures of o‐ and m‐fluorophenol, whose previously unknown clathrate structures have been studied by 129Xe NMR spectroscopy. The high sensitivity of the 129Xe chemical shift tensor to the chemical environment and shape of the crystalline cavity makes it ideal as a probe for porous materials. The experimental powder NMR spectra can be used to directly confirm or reject hypothetical crystal structures generated by computational prediction, whose chemical shift tensors have been simulated using density functional theory. For each fluorophenol isomer one predicted crystal structure was found, whose measured and computed chemical shift tensors agree within experimental and computational error margins and these are thus proposed as the true fluorophenol xenon clathrate structures. PMID:28111848
NASA Astrophysics Data System (ADS)
Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank
2014-01-01
In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
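The model's core idea, distributing contact angles over the nucleation sites of a particle population with a Gaussian probability density and weighting classical-nucleation-theory rates accordingly, can be sketched as follows. This is an illustrative sketch only: `j0` (rate prefactor), `g_crit` (barrier scale), and `area` are hypothetical parameters, not values from the study.

```python
import numpy as np

def frozen_fraction(mu_theta, sigma_theta, j0, g_crit, area, time):
    """Frozen fraction of a particle population whose nucleation-site
    contact angles (radians) are Gaussian-distributed, in the spirit
    of the Soccer ball model. All rate parameters are illustrative."""
    theta = np.linspace(1e-3, np.pi - 1e-3, 2000)
    g = np.exp(-0.5 * ((theta - mu_theta) / sigma_theta) ** 2)
    g /= g.sum()                         # normalized discrete weights
    cos_t = np.cos(theta)
    # Classical-nucleation-theory geometric factor for a spherical cap:
    f = (2.0 + cos_t) * (1.0 - cos_t) ** 2 / 4.0
    rate = j0 * np.exp(-g_crit * f)      # per-site nucleation rate
    p_site = 1.0 - np.exp(-rate * area * time)
    return float((g * p_site).sum())
```

Replacing a Monte Carlo loop over individual sites with this deterministic quadrature over the contact-angle distribution is exactly the kind of simplification that makes such a parameterization cheap enough for atmospheric models.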
A computational model for epidural electrical stimulation of spinal sensorimotor circuits.
Capogrosso, Marco; Wenger, Nikolaus; Raspopovic, Stanisa; Musienko, Pavel; Beauparlant, Janine; Bassi Luciani, Lorenzo; Courtine, Grégoire; Micera, Silvestro
2013-12-04
Epidural electrical stimulation (EES) of lumbosacral segments can restore a range of movements after spinal cord injury. However, the mechanisms and neural structures through which EES facilitates movement execution remain unclear. Here, we designed a computational model and performed in vivo experiments to investigate the type of fibers, neurons, and circuits recruited in response to EES. We first developed a realistic finite element computer model of rat lumbosacral segments to identify the currents generated by EES. To evaluate the impact of these currents on sensorimotor circuits, we coupled this model with an anatomically realistic axon-cable model of motoneurons, interneurons, and myelinated afferent fibers for antagonistic ankle muscles. Comparisons between computer simulations and experiments revealed the ability of the model to predict EES-evoked motor responses over multiple intensities and locations. Analysis of the recruited neural structures revealed the lack of direct influence of EES on motoneurons and interneurons. Simulations and pharmacological experiments demonstrated that EES engages spinal circuits trans-synaptically through the recruitment of myelinated afferent fibers. The model also predicted the capacity of spatially distinct EES to modulate side-specific limb movements and, to a lesser extent, extension versus flexion. These predictions were confirmed during standing and walking enabled by EES in spinal rats. These combined results provide a mechanistic framework for the design of spinal neuroprosthetic systems to improve standing and walking after neurological disorders.
A Priori Subgrid Analysis of Temporal Mixing Layers with Evaporating Droplets
NASA Technical Reports Server (NTRS)
Okongo, Nora; Bellan, Josette
1999-01-01
Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using three sets of results from a Direct Numerical Simulation (DNS) database, with Reynolds numbers (based on initial vorticity thickness) as large as 600 and with droplet mass loadings as large as 0.5. In the DNS, the gas phase is computed using a Eulerian formulation, with Lagrangian droplet tracking. The Large Eddy Simulation (LES) equations corresponding to the DNS are first derived, and key assumptions made in deriving them are confirmed by computing the terms using the DNS database. Since LES of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be the sum of the filtered variables and a correction based on the filtered standard deviation; this correction is then computed from the Subgrid Scale (SGS) standard deviation. This model predicts the unfiltered variables at droplet locations considerably better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: the Smagorinsky approach, the Gradient model and the Scale-Similarity formulation. When the proportionality constant inherent in the SGS models is properly calculated, the Gradient and Scale-Similarity methods give results in excellent agreement with the DNS.
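One way to obtain an SGS standard deviation of the kind used in the correction is from filtered moments, var_sgs = bar(f^2) - bar(f)^2, evaluated with an explicit test filter. The sketch below does this in 1D with a periodic top-hat filter; it is an illustration of the general idea, not the authors' exact formulation or any of the three closures they compare.

```python
import numpy as np

def box_filter(f, w):
    """Top-hat (moving-average) filter of width w on a periodic 1D field."""
    n = len(f)
    padded = np.concatenate([f[-w:], f, f[:w]])   # periodic padding
    out = np.convolve(padded, np.ones(w) / w, mode="same")
    return out[w:w + n]

def sgs_std(f, w):
    """Subgrid-scale standard deviation from filtered moments:
    var_sgs = bar(f^2) - bar(f)^2, clipped at zero for round-off."""
    var = np.maximum(box_filter(f * f, w) - box_filter(f, w) ** 2, 0.0)
    return np.sqrt(var)
```

The unfiltered value at a droplet location would then be modeled as the interpolated filtered value plus a correction scaled by this local standard deviation.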
Mora Cardozo, Juan F; Burankova, T; Embs, J P; Benedetto, A; Ballone, P
2017-12-21
Systematic molecular dynamics simulations based on an empirical force field have been carried out for samples of triethylammonium trifluoromethanesulfonate (triethylammonium triflate, [TEA][Tf]), covering a wide temperature range 200 K ≤ T ≤ 400 K and analyzing a broad set of properties, from self-diffusion and electrical conductivity to rotational relaxation and hydrogen-bond dynamics. The study is motivated by recent quasi-elastic neutron scattering and differential scanning calorimetry measurements on the same system, revealing two successive first order transitions at T ≈ 230 and 310 K (on heating), as well as an intriguing and partly unexplained variety of subdiffusive motions of the acidic proton. Simulations show a weakly discontinuous transition at T = 310 K and highlight an anomaly at T = 260 K in the rotational relaxation of ions that we identify with the simulation analogue of the experimental transition at T = 230 K. Thus, simulations help identifying the nature of the experimental transitions, confirming that the highest temperature one corresponds to melting, while the one taking place at lower T is a transition from the crystal, stable at T ≤ 260 K, to a plastic phase (260 ≤ T ≤ 310 K), in which molecules are able to rotate without diffusing. Rotations, in particular, account for the subdiffusive motion seen at intermediate T both in the experiments and in the simulation. The structure, distribution, and strength of hydrogen bonds are investigated by molecular dynamics and by density functional computations. Clustering of ions of the same sign and the effect of contamination by water at 1% wgt concentration are discussed as well.
Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations
Radak, Brian K.; Roux, Benoît
2016-10-07
Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant pH-MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Lastly, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.
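The Monte Carlo half of such a scheme accepts each nonequilibrium switching trajectory with a Metropolis criterion on the accumulated work, min(1, exp(-beta*W)) for a symmetric protocol. A minimal sketch of that acceptance step (the function name is an assumption for illustration):

```python
import math
import random

def accept_nemd_move(work, beta):
    """Metropolis acceptance for a symmetric neMD/MC switching move:
    accept with probability min(1, exp(-beta * work)), where work is
    the nonequilibrium work accumulated along the switching trajectory."""
    return random.random() < min(1.0, math.exp(-beta * work))
```

Protocols that keep the dissipated work small (such as the linear work protocol found optimal above) raise this acceptance probability and hence the overall sampling efficiency.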
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Frainier, Richard; Colombano, Silvano; Hazelton, Lyman; Szolovits, Peter
1993-01-01
This paper describes portions of a novel system called MARIKA (Model Analysis and Revision of Implicit Key Assumptions) to automatically revise a model of the normal human orientation system. The revision is based on analysis of discrepancies between experimental results and computer simulations. The discrepancies are calculated from qualitative analysis of quantitative simulations. The experimental and simulated time series are first discretized into time segments. Each segment is then approximated by linear combinations of simple shapes. The domain theory and knowledge are represented as a constraint network. Incompatibilities detected during constraint propagation within the network yield both parameter and structural model alterations. Interestingly, MARIKA diagnosed a data set from the Massachusetts Eye and Ear Infirmary Vestibular Laboratory as abnormal even though the data were tagged as normal. Published results from other laboratories confirmed the finding. These encouraging results could lead to a useful clinical vestibular tool and to a scientific discovery system for space vestibular adaptation.
NASA Astrophysics Data System (ADS)
Asath, R. Mohamed; Rekha, T. N.; Premkumar, S.; Mathavan, T.; Benial, A. Milton Franklin
2016-12-01
Conformational analysis was carried out for the N-(5-aminopyridin-2-yl)acetamide (APA) molecule. The most stable, optimized structure was predicted by density functional theory calculations using the B3LYP functional with the cc-pVQZ basis set. The optimized structural parameters and vibrational frequencies were calculated. The experimental and theoretical vibrational frequencies were assigned and compared. The ultraviolet-visible spectrum was simulated and validated experimentally. The molecular electrostatic potential surface was simulated. Frontier molecular orbitals and related molecular properties were computed, which reveal the high molecular reactivity and stability of the APA molecule; the density of states spectrum was also simulated. Natural bond orbital analysis was performed to confirm the bioactivity of the APA molecule. Antidiabetic activity was studied by molecular docking analysis, which identified the APA molecule as a good inhibitor against diabetic nephropathy.
Distributed communication and psychosocial performance in simulated space dwelling groups
NASA Astrophysics Data System (ADS)
Hienz, R. D.; Brady, J. V.; Hursh, S. R.; Ragusa, L. C.; Rouse, C. O.; Gasior, E. D.
2005-05-01
The present report describes the development and application of a distributed interactive multi-person simulation in a computer-generated planetary environment as an experimental test bed for modeling the human performance effects of variations in the types of communication modes available, and in the types of stress and incentive conditions underlying the completion of mission goals. The results demonstrated a high degree of interchangeability between communication modes (audio, text) when one mode was not available. Additionally, adding time-pressure stress for task completion reduced performance effectiveness, and these reductions were ameliorated by introducing positive incentives contingent upon improved performance. The results obtained confirmed that cooperative and productive psychosocial interactions can be maintained between individually isolated and dispersed members of simulated spaceflight crews communicating and problem-solving effectively over extended time intervals without the benefit of one another's physical presence.
Motamedi, Shervin; Roy, Chandrabhushan; Shamshirband, Shahaboddin; Hashim, Roslan; Petković, Dalibor; Song, Ki-Il
2015-08-01
Ultrasonic pulse velocity is affected by defects in material structure. This study applied soft computing techniques to predict the ultrasonic pulse velocity of various peat and cement content mixtures over several curing periods. First, the investigation constructed a process to simulate the ultrasonic pulse velocity with an adaptive neuro-fuzzy inference system (ANFIS). An ANFIS network was then developed whose input and output layers consisted of four and one neurons, respectively. The four inputs were cement content, peat content, sand content (%), and curing period (days). The simulation results showed efficient performance of the proposed system. The ANFIS and experimental results were compared through the coefficient of determination and root-mean-square error. In conclusion, use of the ANFIS network enhances strength prediction. The simulation results confirmed the effectiveness of the suggested strategy. Copyright © 2015 Elsevier B.V. All rights reserved.
Dimas, Leon S; Buehler, Markus J
2014-07-07
Flaws, imperfections and cracks are ubiquitous in material systems and are commonly the catalysts of catastrophic material failure. As stresses and strains tend to concentrate around cracks and imperfections, structures tend to fail far before large regions of material have ever been subjected to significant loading. Therefore, a major challenge in material design is to engineer systems that perform on par with pristine structures despite the presence of imperfections. In this work we integrate knowledge of biological systems with computational modeling and state-of-the-art additive manufacturing to synthesize advanced composites with tunable fracture mechanical properties. Supported by extensive mesoscale computer simulations, we demonstrate the design and manufacturing of composites that exhibit deformation mechanisms characteristic of pristine systems, featuring flaw-tolerant properties. We analyze the results by directly comparing strain fields for the synthesized composites, obtained through digital image correlation (DIC), with those of the computationally tested composites. Moreover, we plot Ashby diagrams for the range of simulated and experimental composites. Our findings show good agreement between simulation and experiment, confirming that the proposed mechanisms have significant potential for vastly improving the fracture response of composite materials. We elucidate the role of stiffness-ratio variations between composite constituents as an important feature in determining the composite properties. Moreover, our work validates the predictive ability of our models, presenting them as useful tools for guiding further material design. This work enables the tailored design and manufacturing of composites assembled from inferior building blocks that obtain optimal combinations of stiffness and toughness.
Zhu, Shankuan; Kim, Jong-Eun; Ma, Xiaoguang; Shih, Alan; Laud, Purushottam W.; Pintar, Frank; Shen, Wei; Heymsfield, Steven B.; Allison, David B.
2010-01-01
Background Men tend to have more upper body mass and fat than women, a physical characteristic that may predispose them to severe motor vehicle crash (MVC) injuries, particularly in certain body regions. This study examined MVC-related regional body injury and its association with the presence of driver obesity using both real-world data and computer crash simulation. Methods and Findings Real-world data were from the 2001 to 2005 National Automotive Sampling System Crashworthiness Data System. A total of 10,941 drivers who were aged 18 years or older involved in frontal collision crashes were eligible for the study. Sex-specific logistic regression models were developed to analyze the associations between MVC injury and the presence of driver obesity. In order to confirm the findings from real-world data, computer models of obese subjects were constructed and crash simulations were performed. According to real-world data, obese men had a substantially higher risk of injury, especially serious injury, to the upper body regions including head, face, thorax, and spine than normal weight men (all p<0.05). A U-shaped relation was found between body mass index (BMI) and serious injury in the abdominal region for both men and women (p<0.05 for both BMI and BMI²). In the high-BMI range, men were more likely to be seriously injured than were women for all body regions except the extremities and abdominal region (all p<0.05 for interaction between BMI and sex). The findings from the computer simulation were generally consistent with the real-world results in the present study. Conclusions Obese men endured a much higher risk of injury to upper body regions during MVCs. This higher risk may be attributed to differences in body shape, fat distribution, and center of gravity between obese and normal-weight subjects, and between men and women. Please see later in the article for the Editors' Summary. PMID:20361024
Study on the After Cavity Interaction in a 140 GHz Gyrotron Using 3D CFDTD PIC Simulations
NASA Astrophysics Data System (ADS)
Lin, M. C.; Illy, S.; Avramidis, K.; Thumm, M.; Jelonnek, J.
2016-10-01
A computational study of after cavity interaction (ACI) in a 140 GHz gyrotron for fusion research has been performed using a 3-D conformal finite-difference time-domain (CFDTD) particle-in-cell (PIC) method. ACI, i.e., beam-wave interaction in the non-linear uptaper after the cavity, has attracted much attention and been widely investigated in recent years. In dynamic ACI, a TE mode is excited by the electron beam at the same frequency as in the cavity, and the same mode also interacts with the spent electron beam at a different frequency in the non-linear uptaper after the cavity; in static ACI, a mode interacts with the beam both in the cavity and in the uptaper, but at the same frequency. A previous study of dynamic ACI in a 140 GHz gyrotron concluded that more advanced numerical simulations, such as particle-in-cell (PIC) modeling, should be employed to study or confirm the dynamic ACI in addition to using trajectory codes. In this work, we use a 3-D full-wave time-domain simulation based on the CFDTD PIC method to include the rippled-wall launcher of the quasi-optical output coupler in the simulations, which breaks the axial symmetry of the original, axially symmetric model. A preliminary simulation result has confirmed the dynamic ACI effect in this 140 GHz gyrotron, in good agreement with the former study. A realistic launcher will be included in the model for studying the dynamic ACI and compared with the homogeneous one.
A Computational and Experimental Study of Resonators in Three Dimensions
NASA Technical Reports Server (NTRS)
Tam, C. K. W.; Ju, H.; Jones, Michael G.; Watson, Willie R.; Parrott, Tony L.
2009-01-01
In a previous work by the present authors, a computational and experimental investigation of the acoustic properties of two-dimensional slit resonators was carried out. The present paper reports the results of a study extending the previous work to three dimensions. This investigation has two basic objectives. The first is to validate the computed results from direct numerical simulations of the flow and acoustic fields of slit resonators in three dimensions by comparing with experimental measurements in a normal incidence impedance tube. The second objective is to study the flow physics of resonant liners responsible for sound wave dissipation. Extensive comparisons are provided between computed and measured acoustic liner properties with both discrete frequency and broadband sound sources. Good agreement is found over a wide range of frequencies and sound pressure levels. Direct numerical simulation confirms the previous finding in two dimensions that vortex shedding is the dominant dissipation mechanism at high sound pressure intensity. However, it is observed that the behavior of the shed vortices in three dimensions is quite different from that in two dimensions. In three dimensions, the shed vortices tend to evolve into ring (circular in plan form) vortices, even though the slit resonator opening from which the vortices are shed has an aspect ratio of 2.5. Under the excitation of discrete frequency sound, the shed vortices align themselves into two regularly spaced vortex trains moving away from the resonator opening in opposite directions. This is different from the chaotic shedding of vortices found in two-dimensional simulations. The effect of slit aspect ratio at a fixed porosity is briefly studied. For the range of liners considered in this investigation, it is found that the absorption coefficient of a liner increases when the open area of the single slit is subdivided into multiple, smaller slits.
NASA Astrophysics Data System (ADS)
Becker, Matthew Rand
I present a new algorithm, CALCLENS, for efficiently computing weak gravitational lensing shear signals from large N-body light cone simulations over a curved sky. This new algorithm properly accounts for the sky curvature and boundary conditions, is able to produce redshift-dependent shear signals including corrections to the Born approximation by using multiple-plane ray tracing, and properly computes the lensed images of source galaxies in the light cone. The key feature of this algorithm is a new, computationally efficient Poisson solver for the sphere that combines spherical harmonic transform and multigrid methods. As a result, large areas of sky (~10,000 square degrees) can be ray traced efficiently at high resolution using only a few hundred cores. Using this new algorithm and curved-sky calculations that only use a slower but more accurate spherical harmonic transform Poisson solver, I study the convergence, shear E-mode, shear B-mode and rotation mode power spectra. Employing full-sky E/B-mode decompositions, I confirm that the numerically computed shear B-mode and rotation mode power spectra are equal at high accuracy (≲1%) as expected from perturbation theory up to second order. Coupled with realistic galaxy populations placed in large N-body light cone simulations, this new algorithm is ideally suited for the construction of synthetic weak lensing shear catalogs to be used to test for systematic effects in data analysis procedures for upcoming large-area sky surveys. The implementation presented in this work, written in C and employing widely available software libraries to maintain portability, is publicly available at http://code.google.com/p/calclens.
Computational modeling of magnetic particle margination within blood flow through LAMMPS
NASA Astrophysics Data System (ADS)
Ye, Huilin; Shen, Zhiqiang; Li, Ying
2017-11-01
We develop a multiscale and multiphysics computational method to investigate the transport of magnetic particles as drug carriers in blood flow under the influence of hydrodynamic interactions and an external magnetic field. A hybrid coupling method is proposed to handle the red blood cell (RBC)-fluid interface (CFI) and the magnetic particle-fluid interface (PFI), respectively. Immersed boundary method (IBM)-based velocity coupling is used to account for the CFI, which is validated by the tank-treading and tumbling behaviors of a single RBC in simple shear flow. The PFI is captured by IBM-based force coupling, which is verified through the movement of a single magnetic particle under a non-uniform external magnetic field and the breakup of a magnetic chain in a rotating magnetic field. These two components are seamlessly integrated within the LAMMPS framework, a highly parallelized molecular dynamics solver. In addition, we implement a parallelized lattice Boltzmann simulator within LAMMPS to handle the fluid flow simulation. Based on the proposed method, we explore the margination behaviors of magnetic particles and magnetic chains within blood flow. We find that the external magnetic field can be used to guide the motion of these magnetic materials and promote their margination to the vascular wall region. Moreover, scaling performance and speedup tests further confirm the high efficiency and robustness of the proposed computational method. Therefore, it provides an efficient way to simulate the transport of nanoparticle-based drug carriers within blood flow at large scale. The simulation results can be applied in the design of efficient drug delivery vehicles that optimally accumulate within diseased tissue, thus providing better imaging sensitivity, higher therapeutic efficacy, and lower toxicity.
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
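The integration-scheme contrast described above can be sketched on a toy nonlinear reservoir (a hypothetical model and parameters, not the study's rainfall-runoff model): a first-order, explicit, fixed-step Euler scheme versus an adaptive-step, second-order Heun scheme whose local error estimate is the Euler/Heun difference.

```python
def dSdt(S, P=2.0, k=0.8, a=1.5):
    # Toy nonlinear reservoir: constant inflow P, outflow k*S^a (hypothetical parameters)
    return P - k * max(S, 0.0) ** a

def euler_fixed(S, T, dt):
    # First-order, explicit, fixed-step integration: cheap, but error is O(dt)
    t = 0.0
    while t < T:
        S += dt * dSdt(S)
        t += dt
    return S

def heun_adaptive(S, T, dt=0.5, tol=1e-6):
    # Second-order Heun scheme with simple adaptive step-size control
    t = 0.0
    while t < T:
        dt = min(dt, T - t)
        k1 = dSdt(S)
        k2 = dSdt(S + dt * k1)
        err = abs(dt * (k2 - k1) / 2.0)   # Euler/Heun difference as error estimate
        if err > tol and dt > 1e-8:
            dt *= 0.5                     # reject the step and halve it
            continue
        S += dt * (k1 + k2) / 2.0
        t += dt
        if err < tol / 4.0:
            dt *= 2.0                     # grow the step when comfortably accurate
    return S

coarse = euler_fixed(1.0, T=2.0, dt=1.0)
accurate = heun_adaptive(1.0, T=2.0)
print(f"fixed-step Euler (dt=1): {coarse:.4f}   adaptive Heun: {accurate:.4f}")
```

With a daily-scale step the fixed-step Euler solution deviates visibly during the transient, while the adaptive second-order scheme tracks a fine-step reference closely, the kind of numerical artifact the abstract shows can distort posterior inference.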
Barabash, R. I.; Agarwal, V.; Koric, S.; ...
2016-01-01
The depth-dependent strain partitioning across the interfaces in the growth direction of the NiAl/Cr(Mo) nanocomposite between the Cr and NiAl lamellae was directly measured experimentally and simulated using a finite element method (FEM). Depth-resolved X-ray microdiffraction demonstrated that in the as-grown state both Cr and NiAl lamellae grow along the 111 direction with the formation of distinct as-grown residual strains: ~0.16% compressive for the Cr lamellae and ~0.05% tensile for the NiAl lamellae. Three-dimensional simulations were carried out using an implicit FEM. The first simulation was designed to study residual strains in the composite due to cooling during the formation of the crystals. Strains in the growth direction were computed and compared to those obtained from the microdiffraction experiments. The second simulation was conducted to understand the combined strains resulting from cooling and mechanical indentation of the composite. Numerical results in the growth direction of the crystal were compared to experimental results, confirming the experimentally observed trends.
Melittin Aggregation in Aqueous Solutions: Insight from Molecular Dynamics Simulations.
Liao, Chenyi; Esai Selvan, Myvizhi; Zhao, Jun; Slimovitch, Jonathan L; Schneebeli, Severin T; Shelley, Mee; Shelley, John C; Li, Jianing
2015-08-20
Melittin is a natural peptide that aggregates in aqueous solutions with paradigmatic monomer-to-tetramer and coil-to-helix transitions. Since little is known about the molecular mechanisms of melittin aggregation in solution, we simulated its self-aggregation process under various conditions. After confirming the stability of a melittin tetramer in solution, we observed, for the first time in atomistic detail, that four separated melittin monomers aggregate into a tetramer. Our simulated dependence of melittin aggregation on peptide concentration, temperature, and ionic strength is in good agreement with prior experiments. We propose that melittin mainly self-aggregates via a mechanism involving the sequential addition of monomers, which is supported by both qualitative and quantitative evidence obtained from unbiased and metadynamics simulations. Moreover, by combining computer simulations and a theory of the electrical double layer, we provide evidence to suggest why melittin aggregation in solution likely stops at the tetramer, rather than forming higher-order oligomers. Overall, our study not only explains prior experimental results at the molecular level but also provides quantitative mechanistic information that may guide the engineering of melittin for higher efficacy and safety.
Maximum wind energy extraction strategies using power electronic converters
NASA Astrophysics Data System (ADS)
Wang, Quincy Qing
2003-10-01
This thesis focuses on maximum wind energy extraction strategies for achieving the highest energy output of variable speed wind turbine power generation systems. Power electronic converters and controls provide the basic platform to accomplish the research of this thesis in both hardware and software aspects. In order to send wind energy to a utility grid, a variable speed wind turbine requires a power electronic converter to convert a variable voltage variable frequency source into a fixed voltage fixed frequency supply. Generic single-phase and three-phase converter topologies, converter control methods for wind power generation, as well as the developed direct drive generator, are introduced in the thesis for establishing variable-speed wind energy conversion systems. Variable speed wind power generation system modeling and simulation are essential methods both for understanding the system behavior and for developing advanced system control strategies. Wind generation system components, including wind turbine, 1-phase IGBT inverter, 3-phase IGBT inverter, synchronous generator, and rectifier, are modeled in this thesis using MATLAB/SIMULINK. The simulation results have been verified by a commercial simulation software package, PSIM, and confirmed by field test results. Since the dynamic time constants for these individual models are much different, a creative approach has also been developed in this thesis to combine these models for entire wind power generation system simulation. An advanced maximum wind energy extraction strategy relies not only on proper system hardware design, but also on sophisticated software control algorithms. Based on literature review and computer simulation on wind turbine control algorithms, an intelligent maximum wind energy extraction control algorithm is proposed in this thesis. 
This algorithm has a unique on-line adaptation and optimization capability, which is able to achieve maximum wind energy conversion efficiency through continuously improving the performance of wind power generation systems. This algorithm is independent of wind power generation system characteristics, and does not need wind speed and turbine speed measurements. Therefore, it can be easily implemented into various wind energy generation systems with different turbine inertia and diverse system hardware environments. In addition to the detailed description of the proposed algorithm, computer simulation results are presented in the thesis to demonstrate the advantage of this algorithm. As a final confirmation of the algorithm feasibility, the algorithm has been implemented inside a single-phase IGBT inverter, and tested with a wind simulator system in research laboratory. Test results were found consistent with the simulation results. (Abstract shortened by UMI.)
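The sensorless on-line optimization described above is in the spirit of perturb-and-observe hill climbing on measured electrical power alone. The sketch below is a hedged illustration, not the thesis's algorithm: the single-peaked power curve, step size, and operating-point variable are all hypothetical.

```python
def electrical_power(speed):
    # Hypothetical turbine power curve with a single maximum at speed = 12
    return max(0.0, 100.0 - (speed - 12.0) ** 2)

def perturb_and_observe(speed, step, iters):
    """Hill-climb using measured power only: perturb the operating point, keep
    the direction if power rose, reverse it if power fell. Needs neither a
    wind-speed nor a turbine-speed sensor."""
    direction = 1.0
    p_prev = electrical_power(speed)
    for _ in range(iters):
        speed += direction * step
        p = electrical_power(speed)
        if p < p_prev:
            direction = -direction   # power dropped: reverse the perturbation
        p_prev = p
    return speed

final = perturb_and_observe(speed=5.0, step=0.5, iters=60)
print(f"operating point after tracking: {final:.1f}")
```

The operating point climbs to the peak and then dithers around it with an amplitude set by the step size; an adaptive scheme such as the one proposed in the thesis would additionally tune that step on-line.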
Ichikawa, Kazuhisa; Suzuki, Takashi; Murata, Noboru
2010-11-30
Molecular events in biological cells occur in local subregions, where the molecules tend to be small in number. The cytoskeleton, which is important for both the structural changes of cells and their functions, is also a countable entity because of its long fibrous shape. To simulate the local environment using a computer, stochastic simulations should be run. We herein report a new method of stochastic simulation based on random walk and reaction by the collision of all molecules. The microscopic reaction rate P(r) is calculated from the macroscopic rate constant k. The formula involves only local parameters embedded for each molecule. The results of the stochastic simulations of simple second-order, polymerization, Michaelis-Menten-type and other reactions agreed quite well with those of deterministic simulations when the number of molecules was sufficiently large. An analysis of the theory indicated a relationship between variance and the number of molecules in the system, and results of multiple stochastic simulation runs confirmed this relationship. We simulated Ca²⁺ dynamics in a cell by inward flow from a point on the cell surface and the polymerization of G-actin forming F-actin. Our results showed that this theory and method can be used to simulate spatially inhomogeneous events.
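The agreement between collision-based stochastic and deterministic kinetics can be illustrated in a well-mixed limit (the paper's spatial random walk and its P(r) formula are not reproduced here; the per-pair probability, rate constant, and copy numbers below are hypothetical): a second-order reaction A + B → C simulated by Bernoulli collision trials matches the deterministic solution when molecule numbers are large.

```python
import numpy as np

rng = np.random.default_rng(42)
k_over_V = 0.001        # macroscopic rate constant over volume (hypothetical units)
dt, T, A0 = 0.01, 5.0, 200

def stochastic_run():
    # Well-mixed A + B -> C with equal copy numbers n: in each time step every
    # A-B pair reacts with microscopic probability p = (k/V)*dt (small-p limit)
    n = A0
    for _ in range(int(T / dt)):
        reacted = rng.binomial(n * n, k_over_V * dt)
        n = max(0, n - reacted)
    return n

runs = [stochastic_run() for _ in range(30)]
mean_n = float(np.mean(runs))
exact = A0 / (1 + k_over_V * A0 * T)    # deterministic second-order solution
print(f"stochastic mean: {mean_n:.1f}   deterministic: {exact:.1f}")
```

The run-to-run spread shrinks as the copy numbers grow, consistent with the variance-versus-molecule-number relationship the abstract describes.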
Curtin, Lindsay B; Finn, Laura A; Czosnowski, Quinn A; Whitman, Craig B; Cawley, Michael J
2011-08-10
To assess the impact of computer-based simulation on the achievement of student learning outcomes during mannequin-based simulation. Participants were randomly assigned to rapid response teams of 5-6 students, and teams were then randomly assigned to complete either the computer-based or the mannequin-based simulation cases first. In both simulations, students used their critical thinking skills and selected interventions independent of facilitator input. A predetermined rubric was used to record and assess students' performance in the mannequin-based simulations. Feedback and student performance scores were generated by the software in the computer-based simulations. More of the teams that completed the computer-based simulation before the mannequin-based simulation achieved the primary outcome for the exercise, survival of the simulated patient (41.2% vs. 5.6%). The majority of students (>90%) recommended the continuation of simulation exercises in the course. Students in both groups felt the computer-based simulation should be completed prior to the mannequin-based simulation. The use of computer-based simulation prior to mannequin-based simulation improved the achievement of learning goals and outcomes. In addition to improving participants' skills, completing the computer-based simulation first may improve participants' confidence during the more real-life setting achieved in the mannequin-based simulation.
NASA Astrophysics Data System (ADS)
Starovoitova, Valeriia; Foote, Davy; Harris, Jason; Makarashvili, Vakhtang; Segebade, Christian R.; Sinha, Vaibhav; Wells, Douglas P.
2011-06-01
Cu-67 is considered one of the most promising radioisotopes for cancer therapy with monoclonal antibodies. Current production schemes using high-flux reactors and cyclotrons do not meet the potential market need. In this paper we discuss Cu-67 photonuclear production through the reaction Zn-68(γ,p)Cu-67. Computer simulations were performed together with experiments to study and optimize the Cu-67 yield in a natural Zn target. The data confirm that the photonuclear method has the potential to produce large quantities of the isotope with sufficient purity for use in the medical field.
Majorana Braiding with Thermal Noise.
Pedrocchi, Fabio L; DiVincenzo, David P
2015-09-18
We investigate the self-correcting properties of a network of Majorana wires, in the form of a trijunction, in contact with a parity-preserving thermal environment. As opposed to the case where Majorana bound states are immobile, braiding Majorana bound states within a trijunction introduces dangerous error processes that we identify. Such errors prevent the lifetime of the memory from increasing with the size of the system. We confirm our predictions with Monte Carlo simulations. Our findings put a restriction on the degree of self-correction of this specific quantum computing architecture.
Tang, Zhongwen
2015-01-01
An analytical method for computing the predictive probability of success (PPOS), together with a credible interval, at interim analysis (IA) is developed for large clinical trials with time-to-event endpoints. The method accounts for the fixed data up to the IA, the amount of uncertainty in future data, and uncertainty about the parameters. Predictive power is a special type of PPOS. The result is confirmed by simulation. An optimal design is proposed by finding the optimal combination of analysis time and futility cutoff based on PPOS criteria.
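A common normal-approximation sketch of PPOS for a time-to-event endpoint follows this logic (not necessarily the paper's exact derivation: the 4/events variance rule, the vague prior, and all trial numbers below are assumptions): compute the predictive distribution of the final log hazard ratio estimate given the fixed interim data, then check the result by Monte Carlo.

```python
import math, random

random.seed(7)

def ppos_tte(theta_hat, d_interim, d_final, z_alpha=1.96):
    """Analytic PPOS under a normal approximation: Var(log-HR estimate) ~ 4/events
    (1:1 randomization), vague prior on the true log-HR. Success is a final
    z-statistic below -z_alpha (hazard ratio < 1 favored)."""
    d_new = d_final - d_interim
    w = 1.0 - d_interim / d_final
    sd_final = w * math.sqrt(4.0 / d_new + 4.0 / d_interim)
    threshold = -z_alpha * math.sqrt(4.0 / d_final)
    z = (threshold - theta_hat) / sd_final
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ppos_mc(theta_hat, d_interim, d_final, z_alpha=1.96, n=200_000):
    # Monte Carlo confirmation: draw the true effect from the posterior,
    # then draw the future-data estimate, and pool with the interim estimate
    d_new = d_final - d_interim
    hits = 0
    for _ in range(n):
        theta = random.gauss(theta_hat, math.sqrt(4.0 / d_interim))
        theta_new = random.gauss(theta, math.sqrt(4.0 / d_new))
        final = (d_interim * theta_hat + d_new * theta_new) / d_final
        hits += final < -z_alpha * math.sqrt(4.0 / d_final)
    return hits / n

a = ppos_tte(math.log(0.75), d_interim=150, d_final=300)
b = ppos_mc(math.log(0.75), d_interim=150, d_final=300)
print(f"analytic PPOS: {a:.3f}   Monte Carlo: {b:.3f}")
```

Scanning such a PPOS over candidate interim times and futility cutoffs is the kind of search the proposed optimal design performs.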
Elastic properties of rigid fiber-reinforced composites
NASA Astrophysics Data System (ADS)
Chen, J.; Thorpe, M. F.; Davis, L. C.
1995-05-01
We study the elastic properties of rigid fiber-reinforced composites with perfect bonding between fibers and matrix, and also with sliding boundary conditions. In the dilute region, there exists an exact analytical solution. Around the rigidity threshold we find the elastic moduli and Poisson's ratio by decomposing the deformation into a compression mode and a rotation mode. For perfect bonding, both modes are important, whereas only the compression mode is operative for sliding boundary conditions. We employ the digital-image-based method and a finite element analysis to perform computer simulations which confirm our analytical predictions.
Long ligands reinforce biological adhesion under shear flow
NASA Astrophysics Data System (ADS)
Belyaev, Aleksey V.
2018-04-01
In this work, computer modeling has been used to show that longer ligands allow biological cells (e.g., blood platelets) to withstand stronger flows after their adhesion to solid walls. A mechanistic model of polymer-mediated ligand-receptor adhesion between a microparticle (cell) and a flat wall has been developed. The theoretical threshold between adherent and non-adherent regimes has been derived analytically and confirmed by simulations. These results lead to a deeper understanding of numerous biophysical processes, e.g., arterial thrombosis, and to the design of new biomimetic colloid-polymer systems.
NASA Technical Reports Server (NTRS)
Korkegi, R. H.
1983-01-01
The results of a National Research Council study on the effect that advances in computational fluid dynamics (CFD) will have on conventional aeronautical ground testing are reported. Current CFD capabilities include the depiction of linearized inviscid flows and a boundary layer, initial use of Euler codes on supercomputers that automatically generate a grid, research and development on the Reynolds-averaged Navier-Stokes (N-S) equations, and preliminary research on solutions to the full N-S equations. Improvement in the range of CFD usage depends on the development of more powerful supercomputers, exceeding even the projected abilities of the NASA Numerical Aerodynamic Simulator (1 BFLOP/sec). Full representation of the Reynolds-averaged N-S equations will require over one million grid points, a computing level predicted to become available in 15 years. Present capabilities allow identification of data anomalies, confirmation of data accuracy, and assessment of the adequacy of model design in wind tunnel trials. Account can be taken of wall effects and the Reynolds number in any flight regime during simulation. CFD can actually be more accurate than instrumented tests, since all points in a flow can be modeled with CFD, while they cannot all be monitored with instrumentation in a wind tunnel.
The discounting model selector: Statistical software for delay discounting applications.
Gilroy, Shawn P; Franck, Christopher T; Hantula, Donald A
2017-05-01
Original, open-source computer software was developed and validated against established delay discounting methods in the literature. The software executed approximate Bayesian model selection methods on user-supplied temporal discounting data and computed the effective delay 50 (ED50) from the best-performing model. The software was custom-designed to enable behavior analysts to conveniently apply recent statistical methods to temporal discounting data with the aid of a graphical user interface (GUI). The results of independent validation of the approximate Bayesian model selection methods indicated that the program provided results identical to those of the original source paper and its methods. Monte Carlo simulation (n = 50,000) confirmed that the true model was selected most often in each setting. Simulation code and data for this study were posted to an online repository for use by other researchers. The model selection approach was applied to three existing delay discounting data sets from the literature in addition to the data from the source paper. Comparisons of model-selected ED50 were consistent with traditional indices of discounting. Conceptual issues related to the development and use of computer software by behavior analysts and the opportunities afforded by free and open-source software are discussed, and a review of possible expansions of this software is provided. © 2017 Society for the Experimental Analysis of Behavior.
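For readers unfamiliar with ED50: it is the delay at which a reward loses half of its subjective value under the fitted model, and for common candidate models it has a simple closed form. A minimal sketch of two such closed forms (an illustration, not the software's actual code; its candidate model set is larger):

```python
import math

def ed50_hyperbolic(k):
    # Mazur's hyperbolic model: V = A / (1 + k*D)
    # V falls to A/2 when D = 1/k.
    return 1.0 / k

def ed50_exponential(k):
    # Exponential model: V = A * exp(-k*D)
    # V falls to A/2 when D = ln(2)/k.
    return math.log(2) / k

print(ed50_hyperbolic(0.05))   # 20.0 (in the delay units of the data)
print(ed50_exponential(0.05))  # ~13.86
```

Reporting ED50 from whichever model wins the Bayesian model comparison gives a single discounting index that remains comparable across participants fitted by different models.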
Ye, Chunhong; Nikolov, Svetoslav V; Geryak, Ren D; Calabrese, Rossella; Ankner, John F; Alexeev, Alexander; Kaplan, David L; Tsukruk, Vladimir V
2016-07-13
Microscaled self-rolling sheets have been fabricated from silk protein material, forming a silk bimorph with an active layer of silk ionomers and a passive layer of cross-linked silk β-sheet. The programmable morphology was explored experimentally along with computational simulation to understand the mechanism of shape reconfiguration. Neutron reflectivity shows that the active silk-ionomer layer undergoes remarkable swelling (an eightfold increase in thickness) after deprotonation, while the passive silk β-sheet layer retains constant volume under the same conditions and supports the bimorph construct. This selective swelling within the silk-on-silk bimorph microsheets generates strong interfacial stress between layers and out-of-plane forces, which trigger autonomous self-rolling into various 3D constructs such as cylindrical and helical tubules. The experimental observations and computational modeling confirm the role of interfacial stresses and allow the morphology of the 3D constructs to be programmed by design. We demonstrated that the biaxial stress distribution over the 2D planar films depends upon the lateral dimensions, thickness, and aspect ratio of the microsheets. The results allow fine-tuning of autonomous shape transformations for the further design of complex micro-origami constructs, and the silk-based rolling/unrolling structures provide a promising platform for polymer-based biomimetic devices for implant applications.
Hierarchical neural network model of the visual system determining figure/ground relation
NASA Astrophysics Data System (ADS)
Kikuchi, Masayuki
2017-07-01
One of the most important functions of visual perception in the brain is figure/ground interpretation of input images. Figural regions in a 2D image, corresponding to objects in 3D space, are distinguished from the background region extending behind the objects. The author previously proposed a neural network model of figure/ground separation built on the principle that local geometric features, such as curvatures and outer angles at corners, are extracted and propagated along the input contour in a single-layer network (Kikuchi & Akashi, 2001). However, this processing principle has the defect that signal propagation requires many iterations, despite the fact that the actual visual system determines the figure/ground relation within a short period (Zhou et al., 2000). To speed up figure/ground determination, this study incorporates a hierarchical architecture into the previous model. Simulations confirmed the effect of hierarchization on computation time: as the number of layers increased, the required computation time decreased. However, this speed-up effect saturated once the number of layers grew beyond a certain point. This study explains the saturation using the notion of average distance between vertices from the field of complex networks, and succeeded in reproducing the saturation effect by computer simulation.
NASA Astrophysics Data System (ADS)
King, Jacob; Kruger, Scott
2017-10-01
Flow can impact the stability and nonlinear evolution of a range of instabilities (e.g. RWMs, NTMs, sawteeth, locked modes, PBMs, and high-k turbulence), and thus robust numerical algorithms for simulations with flow are essential. Recent simulations of DIII-D QH-mode [King et al., Phys. Plasmas and Nucl. Fus. 2017] with flow have been restricted to smaller time-step sizes than corresponding computations without flow. These computations use a mixed semi-implicit, implicit leapfrog time discretization as implemented in the NIMROD code [Sovinec et al., JCP 2004]. While prior analysis has shown that this algorithm is unconditionally stable with respect to the effect of large flows on the MHD waves in slab geometry [Sovinec et al., JCP 2010], our present von Neumann stability analysis shows that a flow-induced numerical instability may arise when ad hoc cylindrical curvature is included. Computations with the NIMROD code in cylindrical geometry, with rigid rotation and without free-energy drive from current or pressure gradients, qualitatively confirm this analysis. We explore potential methods to circumvent this flow-induced numerical instability, such as using a semi-Lagrangian formulation instead of time-centered implicit advection and/or modifying the semi-implicit operator. This work is supported by the DOE Office of Science (Office of Fusion Energy Sciences).
Drift trajectories of a floating human body simulated in a hydraulic model of Puget Sound.
Ebbesmeyer, C C; Haglund, W D
1994-01-01
After a young man jumped off a 221-foot (67 m) high bridge, the drift of the body, which beached 20 miles (32 km) away at Alki Point in Seattle, Washington, was simulated with a hydraulic model. Simulations for the appropriate time period were performed using a small floating bead to represent the body in the hydraulic model at the University of Washington. Bead movements were videotaped and transferred to Computer Aided Drafting (AutoCAD) charts on a personal computer. Because of strong tidal currents in the narrow passage under the bridge (The Narrows near Tacoma, WA), small changes in the time of the jump (+/- 30 minutes) made large differences in the distance the body traveled (30 miles; 48 km). Hydraulic and other types of oceanographic models may be located by contacting physical oceanographers at local universities, and can be used to demonstrate trajectories of floating objects and the time required to arrive at selected locations. Potential applications for forensic death investigators include setting geographic and time limits for searches, determining the potential origin of remains found floating or beached, and confirming and correlating information regarding entry into the water and sightings of remains.
Final report for the DOE Early Career Award #DE-SC0003912
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jayaraman, Arthi
This DOE-supported early career project aimed to develop computational models, theory, and simulation methods that would then be used to predict assembly and morphology in polymer nanocomposites. In particular, the focus was on composites in active layers of devices, containing conducting polymers that act as electron donors and nanoscale additives that act as electron acceptors. During the course of this work, we developed first-of-their-kind molecular models to represent conducting polymers, enabling simulations at experimentally relevant length and time scales. We validated these models by comparison with experimentally observed morphologies. Furthermore, using these models and molecular dynamics simulations on graphics processing units (GPUs), we predicted the molecular-level design features in polymers and additives that lead to morphologies with optimal features for charge-carrier behavior in solar cells. Additionally, we computationally predicted new design rules for better dispersion of additives in polymers, which have been confirmed through experiments. Achieving dispersion in polymer nanocomposites is valuable for controlling the macroscopic properties of the composite. The results obtained during this DOE-funded project enable optimal design of higher-efficiency organic electronic and photovoltaic devices, improving everyday life through the engineering of such devices.
Mirman, Daniel; Yee, Eiling; Blumstein, Sheila E.; Magnuson, James S.
2011-01-01
We used eye tracking to investigate lexical processing in aphasic participants by examining the fixation time course for rhyme (e.g., carrot – parrot) and cohort (e.g., beaker – beetle) competitors. Broca’s aphasic participants exhibited larger rhyme competition effects than age-matched controls. A reanalysis of previously reported data (Yee, Blumstein, & Sedivy, 2008) confirmed that Wernicke’s aphasic participants exhibited larger cohort competition effects. Individual-level analyses revealed a negative correlation between rhyme and cohort competition effect size across both groups of aphasic participants. Computational model simulations were performed to examine which of several accounts of lexical processing deficits in aphasia might account for the observed effects. Simulation results revealed that slower deactivation of lexical competitors could account for increased cohort competition in Wernicke’s aphasic participants; auditory perceptual impairment could account for increased rhyme competition in Broca's aphasic participants; and a perturbation of a parameter controlling selection among competing alternatives could account for both patterns, as well as the correlation between the effects. In light of these simulation results, we discuss theoretical accounts that have the potential to explain the dynamics of spoken word recognition in aphasia and the possible roles of anterior and posterior brain regions in lexical processing and cognitive control. PMID:21371743
NASA Astrophysics Data System (ADS)
Huang, Xiaomeng; Tang, Qiang; Tseng, Yuheng; Hu, Yong; Baker, Allison H.; Bryan, Frank O.; Dennis, John; Fu, Haohuan; Yang, Guangwen
2016-11-01
In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component for high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate the high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time with a competitive convergence rate using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16 875 cores.
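The key property of a Chebyshev-type solver is that, given bounds on the operator's spectrum, each iteration needs only matrix-vector products and vector updates with no inner products, so a distributed implementation avoids the global reductions that dominate cost at high core counts. A minimal serial sketch of the textbook Chebyshev iteration follows (an illustration, not the CESM/POP implementation; the block preconditioner is omitted):

```python
import numpy as np

def chebyshev_solve(A, b, lmin, lmax, tol=1e-10, max_iter=200):
    """Chebyshev iteration for SPD A with spectrum in [lmin, lmax].
    Unlike CG, no inner products are needed inside the loop, so a
    parallel version has no per-iteration global reductions."""
    d = (lmax + lmin) / 2.0
    c = (lmax - lmin) / 2.0
    x = np.zeros_like(b)
    r = b - A @ x
    p = np.zeros_like(b)
    alpha = 0.0
    for i in range(max_iter):
        if i == 0:
            p = r.copy()
            alpha = 1.0 / d
        else:
            beta = 0.5 * (c * alpha) ** 2 if i == 1 else (c * alpha / 2.0) ** 2
            alpha = 1.0 / (d - beta / alpha)
            p = r + beta * p
        x = x + alpha * p
        r = r - alpha * (A @ p)
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

# Small SPD test problem with known spectral bounds.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(np.linspace(1.0, 10.0, 50)) @ Q.T
b = rng.standard_normal(50)
x = chebyshev_solve(A, b, lmin=1.0, lmax=10.0)
print(np.linalg.norm(A @ x - b))
```

In practice the spectral bounds refer to the preconditioned operator; poor bounds slow convergence (which is where the effective block preconditioner matters), but the reduction-free communication pattern is unchanged.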
Ab Initio Simulations of Temperature Dependent Phase Stability and Martensitic Transitions in NiTi
NASA Technical Reports Server (NTRS)
Haskins, Justin B.; Thompson, Alexander E.; Lawson, John W.
2016-01-01
For NiTi based alloys, the shape memory effect is governed by a transition from a low-temperature martensite phase to a high-temperature austenite phase. Despite considerable experimental and computational work, basic questions regarding the stability of the phases and the martensitic phase transition remain unclear even for the simple case of binary, equiatomic NiTi. We perform ab initio molecular dynamics simulations to describe the temperature-dependent behavior of NiTi and resolve several of these outstanding issues. Structural correlation functions and finite temperature phonon spectra are evaluated to determine phase stability. In particular, we show that finite temperature, entropic effects stabilize the experimentally observed martensite (B19') and austenite (B2) phases while destabilizing the theoretically predicted (B33) phase. Free energy computations based on ab initio thermodynamic integration confirm these results and permit estimates of the transition temperature between the phases. In addition to the martensitic phase transition, we predict a new transition between the B33 and B19' phases. The role of defects in suppressing these phase transformations is discussed.
Study of CPM Device used for Rehabilitation and Effective Pain Management Following Knee Alloplasty
NASA Astrophysics Data System (ADS)
Trochimczuk, R.; Kuźmierowski, T.; Anchimiuk, P.
2017-02-01
This paper defines the design assumptions for the construction of an original demonstration CPM device, based on which a solid virtual model will be created in a CAD software environment. The overall dimensions and other input parameters for the design were determined for the entire patient population according to an anatomical atlas of human measures. The medical and physiotherapeutic community was also consulted with respect to the proposed engineering solutions. The virtual model of the CPM device will be used for computer simulations of changes in motion parameters as a function of time, accounting for loads and static states. The results obtained from computer simulation will be used to confirm the correctness of the adopted design assumptions and of the accepted structure of the CPM mechanism, and potentially to introduce necessary corrections. They will also provide a basis for the development of a control strategy for the laboratory prototype and for the future selection of the patient's rehabilitation strategy. This paper is supplemented with an identification of directions for further research.
Jones, Cameron C; McDonough, James M; Capasso, Patrizio; Wang, Dongfang; Rosenstein, Kyle S; Zwischenberger, Joseph B
2013-10-01
Computational fluid dynamics (CFD) is a useful tool in characterizing artificial lung designs by providing predictions of device performance through analyses of pressure distribution, perfusion dynamics, and gas transport properties. Validation of numerical results in membrane oxygenators has been predominantly based on experimental pressure measurements with little emphasis placed on confirmation of the velocity fields due to opacity of the fiber membrane and limitations of optical velocimetric methods. Biplane X-ray digital subtraction angiography was used to visualize flow of a blood analogue through a commercial membrane oxygenator at 1-4.5 L/min. Permeability and inertial coefficients of the Ergun equation were experimentally determined to be 180 and 2.4, respectively. Numerical simulations treating the fiber bundle as a single momentum sink according to the Ergun equation accurately predicted pressure losses across the fiber membrane, but significantly underestimated velocity magnitudes in the fiber bundle. A scaling constant was incorporated into the numerical porosity and reduced the average difference between experimental and numerical values in the porous media regions from 44 ± 4% to 6 ± 5%.
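The momentum-sink treatment above rests on the Ergun relation, which splits the pressure gradient through the fiber bundle into a viscous (permeability) term and an inertial term. Below is a sketch of that relation using the coefficients reported in the abstract (180 and 2.4, versus the classical packed-bed values of roughly 150 and 1.75); the fiber diameter, porosity, and fluid properties are illustrative assumptions, not the device's values.

```python
def ergun_pressure_gradient(u, eps, d_f, mu, rho, A=180.0, B=2.4):
    """Pressure gradient (Pa/m) through a porous fiber bundle via the
    Ergun equation: a viscous term linear in the superficial velocity u
    plus an inertial term quadratic in u. A and B are the coefficients
    the study determined experimentally."""
    viscous = A * mu * (1 - eps) ** 2 * u / (eps ** 3 * d_f ** 2)
    inertial = B * rho * (1 - eps) * u ** 2 / (eps ** 3 * d_f)
    return viscous + inertial

# Illustrative (assumed) numbers: blood-analogue viscosity, 300 um fibers,
# porosity 0.5, superficial velocity 2 cm/s.
dp_dx = ergun_pressure_gradient(u=0.02, eps=0.5, d_f=300e-6, mu=3.5e-3, rho=1050.0)
print(f"{dp_dx:.0f} Pa/m")
```

Because the relation fixes only the pressure-velocity coupling, it can match measured pressure losses while still misrepresenting local velocity magnitudes inside the bundle, which is exactly the discrepancy the angiography measurements exposed.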
NASA Astrophysics Data System (ADS)
Bednar, Earl; Drager, Steven L.
2007-04-01
The objective of quantum information processing is to harness the paradigm shift offered by quantum computing to solve classically hard, computationally challenging problems. Some of our computationally challenging problems of interest include rapid image processing, rapid optimization of logistics, protecting information, secure distributed simulation, and massively parallel computation. Currently, one important problem is that quantum computers are difficult to realize due to poor scalability and a high incidence of errors. Therefore, we have supported the development of Quantum eXpress and QuIDD Pro, two quantum computer simulators running on classical computers for the development and testing of new quantum algorithms and processes. This paper examines the different methods used by these two quantum computing simulators. It reviews both simulators, highlighting each simulator's background, interface, and special features. It also demonstrates the implementation of current quantum algorithms on each simulator. It concludes with summary comments on both simulators.
Social Noise: Generating Random Numbers from Twitter Streams
NASA Astrophysics Data System (ADS)
Fernández, Norberto; Quintas, Fernando; Sánchez, Luis; Arias, Jesús
2015-12-01
Due to the multiple applications of random numbers in computer systems (cryptography, online gambling, computer simulation, etc.), it is important to have mechanisms to generate these numbers. True Random Number Generators (TRNGs) are commonly used for this purpose. TRNGs rely on non-deterministic sources to generate randomness. Physical processes (like noise in semiconductors, quantum phenomena, etc.) play this role in state-of-the-art TRNGs. In this paper, we depart from previous work and explore the possibility of defining social TRNGs using the stream of public messages of the microblogging service Twitter as a randomness source. Thus, we define two TRNGs based on Twitter stream information and evaluate them using the National Institute of Standards and Technology (NIST) statistical test suite. The results of the evaluation confirm the feasibility of the proposed approach.
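A common way to build such a TRNG is to condition the raw entropy source with a cryptographic hash and then check the output with the NIST SP 800-22 suite, whose first test, the frequency (monobit) test, is simple to state. The sketch below illustrates that pattern only; it is an assumption, not the paper's actual extractor, and a fixed list of strings stands in for a live Twitter stream.

```python
import hashlib
import math

def bits_from_messages(messages):
    """Condense (assumed-unpredictable) message text into bits with a
    cryptographic hash -- a standard conditioning construction."""
    out = []
    for msg in messages:
        digest = hashlib.sha256(msg.encode("utf-8")).digest()
        for byte in digest:
            out.extend((byte >> i) & 1 for i in range(8))
    return out

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test; p >= 0.01 passes."""
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

# Stand-in for a live stream: any distinct strings suffice for the demo.
stream = [f"message {i} #topic{i % 7}" for i in range(500)]
bits = bits_from_messages(stream)
p = monobit_p_value(bits)
print(len(bits), round(p, 3))
```

Note that hashing makes almost any input look statistically random, so passing the test suite shows the construction's output quality; the unpredictability of the social source is a separate, harder claim.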
Generation and Computerized Simulation of Meshing and Contact of Modified Involute Helical Gears
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Chen, Ningxin; Lu, Jian
1995-01-01
The design and generation of modified involute helical gears that have a localized and stable bearing contact, and reduced noise and vibration characteristics are described. The localization of the bearing contact is achieved by the mismatch of the two generating surfaces that are used for generation of the pinion and the gear. The reduction of noise and vibration will be achieved by application of a parabolic function of transmission errors that is able to absorb the almost linear function of transmission errors caused by gear misalignment. The meshing and contact of misaligned gear drives can be analyzed by application of computer programs that have been developed. The computations confirmed the effectiveness of the proposed modification of the gear geometry. A numerical example that illustrates the developed theory is provided.
Toward an efficient Photometric Supernova Classifier
NASA Astrophysics Data System (ADS)
McClain, Bradley
2018-01-01
The Sloan Digital Sky Survey Supernova Survey (SDSS) discovered more than 1,000 Type Ia supernovae, yet fewer than half of these have spectroscopic measurements. As wide-field imaging telescopes such as the Dark Energy Survey (DES) and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) discover more supernovae, the need for accurate and computationally cheap photometric classifiers increases. My goal is to use a photometric classification algorithm based on Sncosmo, a Python library for supernova cosmology analysis, to reclassify previously identified Hubble supernovae and candidates from other surveys lacking spectroscopic confirmation. My results will be compared to other photometric classifiers such as PSNID and STARDUST. In the near future, I expect to have the algorithm validated with simulated data, optimized for efficiency, and applied with high-performance computing to real data.
Free Energy Simulations of Ligand Binding to the Aspartate Transporter GltPh
Heinzelmann, Germano; Baştuğ, Turgut; Kuyucak, Serdar
2011-01-01
Glutamate/aspartate transporters cotransport three Na+ ions and one H+ ion with the substrate and countertransport one K+ ion. The binding sites for the substrate and two Na+ ions have been observed in the crystal structure of the archaeal homolog GltPh, while the binding site for the third Na+ ion has been proposed from computational studies and confirmed by experiments. Here we perform detailed free energy simulations of GltPh, giving a comprehensive characterization of the substrate and ion binding sites and calculating their binding free energies in various configurations. Our results show unequivocally that the substrate binds after the binding of two Na+ ions. They also shed light on the Asp/Glu selectivity of GltPh, which is not observed in eukaryotic glutamate transporters. PMID:22098736
Design of neurophysiologically motivated structures of time-pulse coded neurons
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lobodzinska, Raisa F.
2009-04-01
This paper describes a general methodology for a biologically motivated approach to building sensor processing systems with parallel input, picture-operand processing, and time-pulse coding. The advantages of such coding for creating parallel programmable 2D-array structures for next-generation digital computers, which require untraditional numerical systems for processing analog, digital, hybrid and neuro-fuzzy operands, are shown. Simulation and implementation results of optoelectronic time-pulse coded intelligent neural elements (OETPCINE) for a wide set of neuro-fuzzy logic operations are considered. The simulation results confirm the engineering advantages, intelligence, and circuit flexibility of OETPCINE for creating advanced 2D structures. The developed equivalentor/nonequivalentor neural element has a power consumption of 10 mW and a processing time of about 10-100 µs.
Barber, Larissa K; Smit, Brandon W
2014-01-01
This study replicated ego-depletion predictions from the self-control literature in a computer simulation task that requires ongoing decision-making in relation to constantly changing environmental information: the Network Fire Chief (NFC). Ego-depletion led to decreased self-regulatory effort, but not performance, on the NFC task. These effects were also buffered by task enjoyment, so that individuals who enjoyed the dynamic decision-making task did not experience ego-depletion effects. These findings confirm that past ego-depletion effects on decision-making are not limited to static or isolated decision-making tasks and can be extended to the dynamic decision-making processes more common in naturalistic settings. Furthermore, the NFC simulation provides a methodological mechanism for independently measuring effort and performance when studying ego-depletion.
Ferrite HOM Absorber for the RHIC ERL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn,H.; Choi, E.M.; Hammons, L.
A superconducting Energy Recovery Linac (ERL) is under construction at Brookhaven National Laboratory to serve as a test bed for RHIC upgrades. The damping of higher-order modes in the superconducting five-cell cavity for the ERL at RHIC is performed exclusively by two ferrite absorbers. The ferrite properties have been measured in ferrite-loaded pillbox cavities, resulting in permeability values given by a first-order Debye model for the tiled absorber structure and an equivalent permeability value for computer simulations with solid ring dampers. Measured and simulated results for the higher-order modes in the prototype copper cavity are discussed. First room-temperature measurements of the finished niobium cavity are presented, which confirm the effective damping of higher-order modes in the ERL by the ferrite absorbers.
Initial verification and validation of RAZORBACK - A research reactor transient analysis code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talley, Darren G.
2015-09-01
This report describes the work and results of the initial verification and validation (V&V) of the beta release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This initial V&V effort was intended to confirm that the code work to date shows good agreement between simulation and actual ACRR operations, indicating that the subsequent V&V effort for the official release of the code will be successful.
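The first of the coupled systems named above, the point reactor kinetics equations, can be sketched with a single delayed-neutron group and forward-Euler time stepping. This is an illustration only: the kinetics constants below are generic textbook-scale values, not ACRR data, and the thermal-hydraulic feedback that Razorback couples in is omitted.

```python
def point_kinetics(n0, rho, beta=0.0076, lam=0.08, Lambda=2.0e-5,
                   dt=1.0e-6, t_end=0.01):
    """One delayed-neutron-group point kinetics, forward Euler:
        dn/dt = ((rho - beta)/Lambda) * n + lam * c
        dc/dt = (beta/Lambda) * n - lam * c
    with constant reactivity rho and illustrative constants."""
    n = n0
    c = beta * n0 / (Lambda * lam)   # precursor level at steady state
    t = 0.0
    while t < t_end:
        dn = ((rho - beta) / Lambda) * n + lam * c
        dc = (beta / Lambda) * n - lam * c
        n += dt * dn
        c += dt * dc
        t += dt
    return n

print(point_kinetics(1.0, rho=0.0))           # ~1.0: critical, power flat
print(point_kinetics(1.0, rho=0.002) > 1.0)   # positive insertion -> power rises
```

The tiny time step reflects the stiffness of the system (the prompt time constant is of order Lambda/beta); a production code would use an implicit or semi-implicit integrator instead of forward Euler.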
A breakthrough for experiencing and understanding simulated physics
NASA Technical Reports Server (NTRS)
Watson, Val
1988-01-01
The use of computer simulation in physics research is discussed, focusing on improvements to graphic workstations. Simulation capabilities and applications of enhanced visualization tools are outlined. The elements of an ideal computer simulation are presented and the potential for improving various simulation elements is examined. The interface between the human and the computer and simulation models are considered. Recommendations are made for changes in computer simulation practices and applications of simulation technology in education.
Secomb, Jacinta; McKenna, Lisa; Smith, Colleen
2012-12-01
To provide evidence on the effectiveness of simulation activities on the clinical decision-making abilities of undergraduate nursing students. Based on previous research, it was hypothesised that the higher the cognitive score, the greater the ability a nursing student would have to make informed valid decisions in their clinical practice. Globally, simulation is being espoused as an education method that increases the competence of health professionals. At present, there is very little evidence to support current investment in time and resources. Following ethical approval, fifty-eight third-year undergraduate nursing students were randomised in a pretest-post-test group-parallel controlled trial. The learning environment preferences (LEP) inventory was used to test cognitive abilities in order to refute the null hypothesis that activities in computer-based simulated learning environments have a negative effect on cognitive abilities when compared with activities in skills laboratory simulated learning environments. There was no significant difference in cognitive development following two cycles of simulation activities. Therefore, it is reasonable to assume that two simulation tasks, either computer-based or laboratory-based, have no effect on an undergraduate student's ability to make clinical decisions in practice. However, there was a significant finding for non-English first-language students, which requires further investigation. More longitudinal studies that quantify the education effects of simulation on the cognitive, affective and psychomotor attributes of health science students and professionals from both English-speaking and non-English-speaking backgrounds are urgently required. It is also recommended that to achieve increased participant numbers and prevent non-participation owing to absenteeism, further studies need to be imbedded directly into curricula. 
This investigation could not confirm an effect of simulation activities on real-life clinical practice, and the comparative learning benefits relative to traditional clinical practice and university education remain unknown. © 2012 Blackwell Publishing Ltd.
Population Synthesis of Radio & Gamma-Ray Millisecond Pulsars
NASA Astrophysics Data System (ADS)
Frederick, Sara; Gonthier, P. L.; Harding, A. K.
2014-01-01
In recent years, the number of known gamma-ray millisecond pulsars (MSPs) in the Galactic disk has risen substantially thanks to confirmed detections by the Fermi Gamma-ray Space Telescope (Fermi). We have developed a new population synthesis of gamma-ray and radio MSPs in the Galaxy which uses Markov chain Monte Carlo techniques to explore the large and small worlds of the model parameter space and allows for comparisons of the simulated and detected MSP distributions. The simulation employs empirical radio and gamma-ray luminosity models that depend on the pulsar period and period derivative with freely varying exponents. Parameters associated with the birth distributions are also free to vary. The computer code adjusts the magnitudes of the model luminosities to reproduce the number of MSPs detected by a group of ten radio surveys, thus normalizing the simulation and predicting the MSP birth rates in the Galaxy. Computing many Markov chains leads to preferred sets of model parameters that are further explored through two statistical methods. Marginalized plots define confidence regions in the model parameter space using maximum-likelihood methods, and a secondary set of confidence regions is determined in parallel using Kuiper statistics calculated from comparisons of cumulative distributions. These two techniques provide feedback to affirm the results and to check for consistency. Radio flux and dispersion measure constraints have been imposed on the simulated gamma-ray distributions in order to reproduce realistic detection conditions. The simulated and detected distributions agree well for both sets of radio and gamma-ray pulsar characteristics, as evidenced by our various comparisons.
SIDON: A simulator of radio-frequency networks. Application to WEST ICRF launchers
NASA Astrophysics Data System (ADS)
Helou, Walid; Dumortier, Pierre; Durodié, Frédéric; Goniche, Marc; Hillairet, Julien; Mollard, Patrick; Berger-By, Gilles; Bernard, Jean-Michel; Colas, Laurent; Lombard, Gilles; Maggiora, Riccardo; Magne, Roland; Milanesio, Daniele; Moreau, Didier
2015-12-01
SIDON (SImulator of raDiO-frequency Networks) is an in-house Radio-Frequency (RF) network solver that was implemented to cross-validate the design of the WEST ICRF launchers and to simulate their impedance matching algorithm while accounting for all mutual couplings and asymmetries. In this paper, the authors illustrate the theory behind SIDON as well as results of its calculations. The authors have built time-varying plasma scenarios (a sequence of launcher front-face L-mode and H-mode Z-matrices), where at each time step (1 millisecond here) SIDON solves the RF network. At the same time, when activated, the impedance matching algorithm controls the matching elements (vacuum capacitors) and thus their corresponding S-matrices. Typically a 1-second pulse requires around 10 seconds of computational time on a desktop computer. Such tasks can hardly be handled by commercial RF software. This innovative work allows identifying strategies for the future operation of the launchers while ensuring the limitations on currents, voltages and electric fields, matching and load resilience, as well as the required strap voltage amplitude/phase balance. In this paper, particular attention is paid to simulating the launchers' behavior when arcs appear at several locations in their circuits. This latter work shall confirm or identify strategies for arc detection using various RF electrical signals. One should note that the use of such solvers is not limited to ICRF launcher simulations; they can be applied, in principle, to any linear or linearized RF problem.
Simulation of dental collisions and occlusal dynamics in the virtual environment.
Stavness, I K; Hannam, A G; Tobias, D L; Zhang, X
2016-04-01
Semi-adjustable articulators have often been used to simulate occlusal dynamics, but advances in intra-oral scanning and computer software now enable dynamics to be modelled mathematically. Computer simulation of occlusal dynamics requires accurate virtual casts, records to register them and methods to handle mesh collisions during movement. Here, physical casts in a semi-adjustable articulator were scanned with a conventional clinical intra-oral scanner. A coordinate measuring machine was used to index their positions in intercuspation, protrusion, right and left laterotrusion, and to model features of the articulator. Penetrations between the indexed meshes were identified and resolved using restitution forces, and the final registrations were verified by distance measurements between dental landmarks at multiple sites. These sites were confirmed as closely approximating via measurements made from homologous transilluminated vinylpolysiloxane interocclusal impressions in the mounted casts. Movements between the indexed positions were simulated with two models in a custom biomechanical software platform. In model DENTAL, 6 degree-of-freedom movements were made to minimise deviation from a straight line path and also shaped by dynamic mesh collisions detected and resolved mathematically. In model ARTIC, the paths were further constrained by surfaces matching the control settings of the articulator. Despite these differences, the lower mid-incisor point paths were very similar in both models. The study suggests that mathematical simulation utilising interocclusal 'bite' registrations can closely replicate the primary movements of casts mounted in a semi-adjustable articulator. Additional indexing positions and appropriate software could, in some situations, replace the need for mechanical semi-adjustable articulation and/or its virtual representation. © 2015 John Wiley & Sons Ltd.
Hulme, Adam; Thompson, Jason; Nielsen, Rasmus Oestergaard; Read, Gemma J M; Salmon, Paul M
2018-06-18
There have been recent calls for the application of the complex systems approach in sports injury research. However, beyond theoretical description and static models of complexity, little progress has been made towards formalising this approach in a way that is practical for sports injury scientists and clinicians. Therefore, our objective was to use a computational modelling method to develop a dynamic simulation for sports injury research. Agent-based modelling (ABM) was used to model the occurrence of sports injury in a synthetic athlete population. The ABM was developed based on sports injury causal frameworks and was applied in the context of distance running-related injury (RRI). Using the acute:chronic workload ratio (ACWR), we simulated the dynamic relationship between changes in weekly running distance and RRI through the manipulation of various 'athlete management tools'. The findings confirmed that building weekly running distances over time, even within the reported ACWR 'sweet spot', will eventually result in RRI as athletes reach and surpass their individual physical workload limits. Introducing training-related error into the simulation and modelling a 'hard ceiling' dynamic resulted in a higher RRI incidence proportion across the population at higher absolute workloads. The presented simulation offers a practical starting point for applying more sophisticated computational models that can account for the complex nature of sports injury aetiology. Alongside traditional forms of scientific inquiry, the use of ABM and other simulation-based techniques could be considered as a complementary and alternative methodological approach in sports injury research. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
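The acute:chronic workload ratio driving the simulation has a standard published definition: the most recent week's load divided by the rolling four-week average. A minimal sketch (the 10% weekly progression and the commonly cited 0.8-1.3 'sweet spot' bounds are illustrative):

```python
def acwr(weekly_distances):
    """Acute:chronic workload ratio: last week's load over the
    rolling 4-week average ending that week."""
    if len(weekly_distances) < 4:
        raise ValueError("need at least 4 weeks of data")
    acute = weekly_distances[-1]
    chronic = sum(weekly_distances[-4:]) / 4.0
    return acute / chronic

# a runner building distance by 10% per week stays inside the
# 'sweet spot' indefinitely, yet the absolute weekly load keeps
# growing toward any fixed physical workload ceiling
weeks = [30.0]
for _ in range(11):
    weeks.append(weeks[-1] * 1.10)
ratio = acwr(weeks)
```

This is exactly the dynamic the abstract describes: the ratio itself never flags the risk, because a steady geometric progression holds it constant while the absolute workload diverges.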
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches reduces both the number of integration steps and the time per step, enabling fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
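The split-integration idea, analytic propagation of the stiff harmonic motion combined with numeric kicks from the slow forces, can be sketched for a single degree of freedom. This is a generic Strang-split sketch of the principle, not the SISM implementation itself:

```python
import math

def split_step(x, v, dt, omega, slow_force, m=1.0):
    """One split step: half-kick from the slow force, exact analytic
    rotation for the stiff harmonic part, then another half-kick."""
    v += 0.5 * dt * slow_force(x) / m          # numeric half-step (low frequency)
    c, s = math.cos(omega * dt), math.sin(omega * dt)
    x, v = x * c + (v / omega) * s, -x * omega * s + v * c   # exact harmonic flow
    v += 0.5 * dt * slow_force(x) / m
    return x, v

# pure harmonic motion (slow force = 0) is integrated exactly, so energy
# is conserved even with a time step far larger than the vibration period
omega, dt = 50.0, 0.1                          # dt >> 1/omega would break plain Verlet
x, v = 1.0, 0.0
e0 = 0.5 * v * v + 0.5 * omega ** 2 * x ** 2
for _ in range(1000):
    x, v = split_step(x, v, dt, omega, slow_force=lambda q: 0.0)
e1 = 0.5 * v * v + 0.5 * omega ** 2 * x ** 2
```

Because the high-frequency part is handled analytically, the step size is limited only by the slow forces, which is what permits the longer time steps mentioned in the abstract.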
Fang, Pan; Hou, Yongjun; Nan, Yanghai
2015-01-01
A new mechanism is proposed to implement synchronization of two unbalanced rotors in a vibration system, which consists of a double vibro-body, two induction motors and spring foundations. The coupling relationship between the vibro-bodies is ascertained by applying the Laplace transform to the dynamics equations of the system obtained from Lagrange's equations. An analytical approach, the average method of modified small parameters, is employed to study the synchronization characteristics of the two unbalanced rotors; the problem is converted into the existence and stability of zero solutions of the non-dimensional differential equations for the angular velocity disturbance parameters. By assuming that the disturbance parameters approach zero, the synchronization condition for the two rotors is obtained: the absolute value of the residual torque between the two motors must be equal to or less than the maximum of their coupling torques. Meanwhile, the stability criterion of synchronization is derived with the Routh-Hurwitz method, and the region of the stable phase difference is confirmed. Finally, computer simulations are performed to verify the correctness of the approximate solution of the theoretical computation for the stable phase difference between the two unbalanced rotors, and the theoretical results are in accordance with the computer simulations. To sum up, only when the parameters of the vibration system satisfy both the synchronization condition and the stability criterion can the two unbalanced rotors operate in synchrony. PMID:25993472
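The Routh-Hurwitz test used to derive the stability criterion can be sketched generically: build the Routh array from the characteristic polynomial's coefficients and require every entry in the first column to be positive. The polynomials below are illustrative examples, not the paper's actual characteristic equation:

```python
def routh_stable(coeffs):
    """Routh-Hurwitz test: True if every root of the characteristic
    polynomial (coefficients from highest power down) has negative
    real part, i.e. the equilibrium is asymptotically stable."""
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]   # first two Routh rows
    for _ in range(len(coeffs) - 2):
        prev, cur = rows[-2], rows[-1]
        if cur[0] == 0:
            return False                      # singular case: treat as not stable
        nxt = []
        for i in range(len(prev) - 1):
            p = prev[i + 1]
            c = cur[i + 1] if i + 1 < len(cur) else 0.0
            nxt.append((cur[0] * p - prev[0] * c) / cur[0])
        rows.append(nxt or [0.0])
    return all(r[0] > 0 for r in rows)

stable = routh_stable([1.0, 2.0, 3.0, 1.0])    # s^3 + 2s^2 + 3s + 1
unstable = routh_stable([1.0, 1.0, 1.0, 2.0])  # s^3 + s^2 + s + 2
```

For a cubic this reduces to the familiar condition that all coefficients are positive and a1*a2 > a0*a3, which the second example violates.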
Dimension reduction method for SPH equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, Alexandre M.; Scheibe, Timothy D.
2011-08-26
A Smoothed Particle Hydrodynamics (SPH) model of a complex multiscale process often results in a system of ODEs with an enormous number of unknowns. Furthermore, time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude. A direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for the evolution of average variables (e.g. average velocity, concentration and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of effective equations. The computational closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation; an SPH model is used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the dimension reduction model. The method significantly accelerates SPH simulations, while providing an accurate approximation of the solution and an accurate prediction of the average behavior of the system.
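The burst-based computational closure resembles projective integration: a short burst of the fine-scale model estimates the current rate of change of the average variable, which is then extrapolated over a much larger macro step. The toy decay system below illustrates that idea only; it is not the authors' SPH formulation:

```python
import math

def average(c):
    return sum(c) / len(c)

def fine_step(c, k, dt):
    # fine-scale model: each unknown decays at its own rate (explicit Euler)
    return [ci - dt * ki * ci for ci, ki in zip(c, k)]

def reduced_step(c, k, dt, burst_steps, jump):
    """Computational closure: a short burst of the fine model yields the
    rate of change of the domain average, which is then extrapolated
    over a much larger macro step 'jump'."""
    a0 = average(c)
    for _ in range(burst_steps):
        c = fine_step(c, k, dt)
    a1 = average(c)
    rate = (a1 - a0) / (burst_steps * dt)      # d<avg>/dt observed in the burst
    a_new = a1 + jump * rate                   # cheap macro-step extrapolation
    factor = a_new / a1                        # crude reconstruction of fine state
    return [ci * factor for ci in c], a_new

# 1000 unknowns with uniform decay: the average should follow exp(-t),
# while only 5 fine steps are paid for every macro interval of 0.055
k = [1.0] * 1000
c = [1.0] * 1000
t, dt, burst, jump = 0.0, 0.001, 5, 0.05
for _ in range(10):
    c, avg = reduced_step(c, k, dt, burst, jump)
    t += burst * dt + jump
```

Here 50 fine steps stand in for the 550 a direct solve would need, at the cost of a small extrapolation error in the average, mirroring the accuracy/cost trade described in the abstract.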
Measurements and Simulations of Nadir-Viewing Radar Returns from the Melting Layer at X- and W-Bands
NASA Technical Reports Server (NTRS)
Liao, Liang; Meneghini, Robert; Tian, Lin; Heymsfield, Gerald M.
2010-01-01
Simulated radar signatures within the melting layer in stratiform rain, namely the radar bright band, are checked by means of comparisons with simultaneous measurements of the bright band made by the EDOP (X-band) and CRS (W-band) airborne Doppler radars during the CRYSTAL-FACE campaign in 2002. A stratified-sphere model, allowing the fractional water content to vary along the radius of the particle, is used to compute the scattering properties of individual melting snowflakes. Using the effective dielectric constants computed by the conjugate gradient-fast Fourier transform (CGFFT) numerical method for X and W bands, and expressing the fractional water content of a melting particle as an exponential function of particle radius, it is found that at X band the simulated radar bright-band profiles are in excellent agreement with the measured profiles. It is also found that the simulated W-band profiles usually resemble the shapes of the measured bright-band profiles even though persistent offsets between them are present. These offsets, however, can be explained by the attenuation caused by cloud water and water vapor at W band. This is confirmed by comparisons of the radar profiles made in the rain regions, where the unattenuated W-band reflectivity profiles can be estimated through the X- and W-band Doppler velocity measurements. The bright-band model described in this paper has the potential to be used effectively for both radar and radiometer algorithms relevant to the TRMM and GPM satellite missions.
Altwaijry, Nojood A; Baron, Michael; Wright, David W; Coveney, Peter V; Townsend-Nicholson, Andrea
2017-05-09
The accurate identification of the specific points of interaction between G protein-coupled receptor (GPCR) oligomers is essential for the design of receptor ligands targeting oligomeric receptor targets. A coarse-grained molecular dynamics computer simulation approach would provide a compelling means of identifying these specific protein-protein interactions and could be applied both for known oligomers of interest and as a high-throughput screen to identify novel oligomeric targets. However, to be effective, this in silico modeling must provide accurate, precise, and reproducible information. This has been achieved recently in numerous biological systems using an ensemble-based all-atom molecular dynamics approach. In this study, we describe an equivalent methodology for ensemble-based coarse-grained simulations. We report the performance of this method when applied to four different GPCRs known to oligomerize, using error analysis to determine the ensemble size and individual replica simulation time required. Our measurements of distance between residues shown to be involved in oligomerization of the fifth transmembrane domain from the adenosine A2A receptor are in very good agreement with the existing biophysical data and provide information about the nature of the contact interface that cannot be determined experimentally. Calculations of distance between rhodopsin, CXCR4, and β1AR transmembrane domains reported to form contact points in homodimers correlate well with the corresponding measurements obtained from experimental structural data, providing an ability to predict contact interfaces computationally. Interestingly, error analysis enables identification of noninteracting regions. Our results confirm that GPCR interactions can be reliably predicted using this novel methodology.
The 1990 MB: The first Mars Trojan
NASA Technical Reports Server (NTRS)
Innanen, Kimmo A.; Mikkola, Seppo; Bowell, Edward; Muinonen, Karri; Shoemaker, Eugene M.
1991-01-01
Asteroid 1990 MB was discovered by D. H. Levy and H. E. Holt during the course of the Mars and Earth Crossing Asteroid and Comet Survey. An orbit based on a 9 day arc and the asteroid's location near Mars' L5 (trailing Lagrangian) longitude led E. Bowell to speculate that it might be in 1:1 resonance with Mars, analogous to the Trojan asteroids of Jupiter. Subsequent observations strengthened the possibility, and later calculations confirmed it. Thus 1990 MB is the first known asteroid in 1:1 resonance with a planet other than Jupiter. The existence of 1990 MB (a small body most likely between 2 and 4 km in diameter) provides remarkable confirmation of computer simulations. These self-consistent n-body simulations demonstrated this sort of stability for Trojans of all the terrestrial planets over at least a 2 million year time base. The discovery of 1990 MB suggests that others of similar or smaller diameter may be found. Using hypothetical populations of Mars Trojans, their possible sky-plane distributions were modeled as a first step in undertaking a systematic observational search of Mars' L4 and L5 libration regions.
a Study of the Reconstruction of Accidents and Crime Scenes Through Computational Experiments
NASA Astrophysics Data System (ADS)
Park, S. J.; Chae, S. W.; Kim, S. H.; Yang, K. M.; Chung, H. S.
Recently, with an increase in the number of studies of the safety of both pedestrians and passengers, computer software such as MADYMO, PAM-CRASH, and LS-DYNA has provided human models for computer simulation. Although such programs have been applied to make machines beneficial for humans, studies that analyze the reconstruction of accidents or crime scenes are rare. Therefore, through computational experiments, the present study reconstructs two questionable accidents. In the first case, a car fell off the road and the driver was separated from it. The accident investigator was very confused because some circumstantial evidence suggested the possibility that the driver had been murdered. In the second case, a woman died in her house and the police suspected foul play, with her boyfriend as a suspect. These two cases were reconstructed using the human model in the MADYMO software. The first case was eventually confirmed to be a traffic accident in which the driver was thrown from the car as it fell, and the second case was shown to be suicide rather than homicide.
Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng
2012-01-01
In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by incorporating non-adaptive, data-independent random projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the singular value decomposition (SVD). Finally, simulation results on benchmark MDP domains confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
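The random-projection step can be sketched directly: multiply each high-dimensional feature vector by a Gaussian random matrix. By the Johnson-Lindenstrauss property, pairwise distances are approximately preserved, which is what lets policies be computed faithfully in the low-dimensional subspace. Dimensions and seeds below are illustrative:

```python
import random
import math

def random_projection(features, d_low, seed=0):
    """Project high-dimensional feature vectors onto a random
    lower-dimensional subspace (Gaussian random matrix, scaled so
    squared distances are preserved in expectation)."""
    rng = random.Random(seed)
    d_high = len(features[0])
    R = [[rng.gauss(0.0, 1.0 / math.sqrt(d_low)) for _ in range(d_high)]
         for _ in range(d_low)]
    return [[sum(r[j] * x[j] for j in range(d_high)) for r in R]
            for x in features]

def sqdist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

# distances between two 1000-dimensional points survive projection to 200 dims
rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(1000)]
y = [rng.gauss(0, 1) for _ in range(1000)]
xp, yp = random_projection([x, y], d_low=200)
ratio = sqdist(xp, yp) / sqdist(x, y)
```

The distance ratio concentrates around 1 with relative spread roughly sqrt(2/d_low), so a 5x dimensionality reduction costs only about a 10% distortion here.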
Kendon, Vivien M; Nemoto, Kae; Munro, William J
2010-08-13
We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.
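The exponential-cost argument above is easy to make concrete: a direct Hilbert-space encoding of n qubits requires 2^n complex amplitudes on a classical machine, so each added qubit (or extra bit of precision, in the analogue reading) doubles the storage:

```python
def amplitudes_needed(n_qubits):
    """A direct Hilbert-space encoding stores one complex amplitude per
    basis state, so classical storage grows as 2**n with system size."""
    return 2 ** n_qubits

# one extra (qu)bit doubles the state vector; 20 qubits already need
# about a million amplitudes, which is the exponential cost the text
# attributes to analogue-style encodings
sizes = [amplitudes_needed(n) for n in (10, 11, 20)]
```

This is precisely why mapping the simulated system's Hilbert space directly onto the qubits' Hilbert space is efficient for a quantum simulator but exponentially costly to emulate classically.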
Student Ability, Confidence, and Attitudes Toward Incorporating a Computer into a Patient Interview.
Ray, Sarah; Valdovinos, Katie
2015-05-25
To improve pharmacy students' ability to effectively incorporate a computer into a simulated patient encounter, and to improve their awareness of barriers to, attitudes toward, and confidence in using a computer during simulated patient encounters. Students completed a survey that assessed their awareness of, confidence in, and attitudes toward computer use during simulated patient encounters. Students were evaluated with a rubric on their ability to incorporate a computer into a simulated patient encounter. Students were resurveyed and reevaluated after instruction. Students improved in their ability to effectively incorporate computer usage into a simulated patient encounter. They also became more aware of barriers regarding such usage, improved their attitudes toward it, and gained more confidence in their ability to use a computer during simulated patient encounters. Instruction can improve pharmacy students' ability to incorporate a computer into simulated patient encounters. This skill is critical to developing efficiency while maintaining rapport with patients.
Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang
2017-05-02
Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components. PMID:28468324
Dust Dynamics in Protoplanetary Disks: Parallel Computing with PVM
NASA Astrophysics Data System (ADS)
de La Fuente Marcos, Carlos; Barge, Pierre; de La Fuente Marcos, Raúl
2002-03-01
We describe a parallel version of our high-order-accuracy particle-mesh code for the simulation of collisionless protoplanetary disks. We use this code to carry out a massively parallel, two-dimensional, time-dependent numerical simulation, which includes dust particles, to study the potential role of large-scale gaseous vortices in protoplanetary disks. This noncollisional problem is easy to parallelize on message-passing multicomputer architectures. We performed the simulations on a cache-coherent nonuniform memory access Origin 2000 machine, using both the parallel virtual machine (PVM) and message-passing interface (MPI) message-passing libraries. Our performance analysis suggests that, for our problem, PVM is about 25% faster than MPI. Using PVM and MPI made it possible to reduce CPU time and increase code performance. This allows for simulations with a large number of particles (N ~ 10^5-10^6) in reasonable CPU times. The performances of our implementation of the parallel code on an Origin 2000 supercomputer are presented and discussed. They exhibit very good speedup behavior and low load imbalance. Our results confirm that giant gaseous vortices can play a dominant role in giant planet formation.
Support vector machine firefly algorithm based optimization of lens system.
Shamshirband, Shahaboddin; Petković, Dalibor; Pavlović, Nenad T; Ch, Sudheer; Altameem, Torki A; Gani, Abdullah
2015-01-01
Lens system design is an important factor in image quality. The main aspect of the lens system design methodology is the optimization procedure. Since optimization is a complex, nonlinear task, soft computing optimization algorithms can be used. There are many tools that can be employed to measure optical performance, but the spot diagram is the most useful. The spot diagram gives an indication of the image of a point object. In this paper, the spot size radius is considered as the optimization criterion. An intelligent soft computing scheme, support vector machines (SVMs) coupled with the firefly algorithm (FFA), is implemented. The performance of the proposed estimators is confirmed by the simulation results. The results of the proposed SVM-FFA model have been compared with support vector regression (SVR), artificial neural networks, and genetic programming methods. The results show that the SVM-FFA model performs more accurately than the other methodologies. Therefore, SVM-FFA can be used as an efficient soft computing technique in the optimization of lens system designs.
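A minimal firefly algorithm conveys the optimization scheme: dimmer fireflies move toward brighter ones with an attractiveness that decays with distance, plus a shrinking random walk. The quadratic 'spot radius' surrogate and all algorithm constants below are illustrative assumptions, not the paper's merit function:

```python
import random
import math

def firefly_minimize(f, bounds, n_fireflies=15, n_iter=60,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=3):
    """Minimal firefly algorithm: dimmer fireflies (larger f) move toward
    brighter ones, with attractiveness decaying with squared distance."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_fireflies)]
    cost = [f(p) for p in pop]
    for it in range(n_iter):
        a = alpha * (1 - it / n_iter)          # shrink the random walk over time
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:          # j is brighter: move i toward j
                    r2 = sum((pi - pj) ** 2 for pi, pj in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [pi + beta * (pj - pi) + a * rng.uniform(-1, 1)
                              for pi, pj in zip(pop[i], pop[j])]
                    pop[i] = [min(max(x, lo), hi)
                              for x, (lo, hi) in zip(pop[i], bounds)]
                    cost[i] = f(pop[i])
    best = min(range(n_fireflies), key=lambda i: cost[i])
    return pop[best], cost[best]

# toy 'spot radius' proxy: quadratic in two lens parameters, minimal at (0.3, -0.5)
spot = lambda p: (p[0] - 0.3) ** 2 + 2 * (p[1] + 0.5) ** 2 + 0.01
best_p, best_r = firefly_minimize(spot, bounds=[(-1, 1), (-1, 1)])
```

In the paper's scheme an SVM surrogate would replace the cheap quadratic here, so the expensive ray-traced spot diagram need only be evaluated sparingly.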
Gini, Giuseppina
2016-01-01
In this chapter, we introduce the basics of computational chemistry and discuss how computational methods have been extended to some biological properties and to toxicology in particular. For about 20 years, chemical experimentation has increasingly been replaced by modeling and virtual experimentation, drawing on a large core of mathematics, chemistry, physics, and algorithms. We then see how animal experiments, aimed at providing a standardized result about a biological property, can be mimicked by new in silico methods. Our emphasis here is on toxicology and on predicting properties from chemical structures. Two main streams of such models are available: models that consider the whole molecular structure to predict a value, namely QSAR (Quantitative Structure-Activity Relationships), and models that find relevant substructures to predict a class, namely SAR. The term in silico discovery is applied to chemical design, computational toxicology, and drug discovery. We discuss how experimental practice in the biological sciences is moving more and more toward modeling and simulation. Such virtual experiments confirm hypotheses, provide data for regulation, and help in designing new chemicals.
Computational Study on New Natural Compound Inhibitors of Pyruvate Dehydrogenase Kinases
Zhou, Xiaoli; Yu, Shanshan; Su, Jing; Sun, Liankun
2016-01-01
Pyruvate dehydrogenase kinases (PDKs) are key enzymes in glucose metabolism, negatively regulating pyruvate dehydrogenase complex (PDC) activity through phosphorylation. Inhibiting PDKs could upregulate PDC activity and drive cells into more aerobic metabolism. Therefore, PDKs are potential targets for metabolism-related diseases, such as cancers and diabetes. In this study, a series of computer-aided virtual screening techniques were utilized to discover potential inhibitors of PDKs. Structure-based screening using LibDock was carried out, followed by ADME (absorption, distribution, metabolism, excretion) and toxicity prediction. Molecular docking was used to analyze the binding mechanism between these compounds and PDKs. Molecular dynamics simulation was utilized to confirm the stability of potential compound binding. From the computational results, two novel natural coumarin compounds (ZINC12296427 and ZINC12389251) from the ZINC database were found to bind to PDKs with favorable interaction energy and were predicted to be non-toxic. Our study provides valuable information on PDK-coumarin binding mechanisms for PDK inhibitor-based drug discovery. PMID:26959013
Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models
NASA Astrophysics Data System (ADS)
Altuntas, Alper; Baugh, John
2017-07-01
Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.
NASA Astrophysics Data System (ADS)
Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.
1997-02-01
We describe a new attenuation correction method for fluorescent X-ray computed tomography (FXCT) applied to image nonradioactive contrast materials in vivo. The principle of the FXCT imaging is that of computed tomography of the first generation. Using monochromatized synchrotron radiation from the BLNE-5A bending-magnet beam line of Tristan Accumulation Ring in KEK, Japan, we studied phantoms with the FXCT method, and we succeeded in delineating a 4-mm-diameter channel filled with a 500 μg I/ml iodine solution in a 20-mm-diameter acrylic cylindrical phantom. However, to detect smaller iodine concentrations, attenuation correction is needed. We present a correction method based on the equation representing the measurement process. The discretized equation system is solved by the least-squares method using the singular value decomposition. The attenuation correction method is applied to the projections by the Monte Carlo simulation and the experiment to confirm its effectiveness.
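The correction step, solving the discretized measurement equations by least squares via the singular value decomposition, can be sketched on a toy system. The 5-ray, 3-pixel geometry and the attenuation weights below are invented for illustration:

```python
import numpy as np

# Discretized measurement model: each projection is a weighted sum of the
# fluorescent source in the pixels it crosses, one linear equation per ray.
A = np.array([[0.9, 0.8, 0.0],
              [0.0, 0.85, 0.75],
              [0.8, 0.0, 0.7],
              [0.9, 0.85, 0.8],
              [0.5, 0.4, 0.3]])
true_x = np.array([2.0, 0.0, 1.5])             # iodine content per pixel
p = A @ true_x                                 # noiseless projections

# Least-squares solution via the SVD (the pseudoinverse of A):
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_rec = Vt.T @ ((U.T @ p) / s)
```

With noisy data, small singular values can be truncated before inverting, which is the usual way an SVD-based least-squares solve is stabilized.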
Streamwise Vorticity Generation in Laminar and Turbulent Jets
NASA Technical Reports Server (NTRS)
Demuren, Ayodeji O.; Wilson, Robert V.
1999-01-01
Complex streamwise vorticity fields are observed in the evolution of non-circular jets. Generation mechanisms are investigated via Reynolds-averaged (RANS), large-eddy (LES) and direct numerical (DNS) simulations of laminar and turbulent rectangular jets. Complex vortex interactions are found in DNS of laminar jets, but axis-switching is observed only when a single instability mode is present in the incoming mixing layer. With several modes present, the structures are not coherent and no axis-switching occurs; RANS computations also produce no axis-switching. On the other hand, LES of high Reynolds number turbulent jets produces axis-switching even for cases with several instability modes in the mixing layer. Analysis of the source terms of the mean streamwise vorticity equation, through post-processing of the instantaneous results, shows that complex interactions of gradients of the normal and shear Reynolds stresses are responsible for the generation of streamwise vorticity which leads to axis-switching. RANS computations confirm these results: k-epsilon turbulence model computations fail to reproduce the phenomenon, whereas algebraic Reynolds stress model (ASM) computations, in which the secondary normal and shear stresses are computed explicitly, succeed in reproducing the phenomenon accurately.
NASA Astrophysics Data System (ADS)
Boyarshinov, Michael G.; Vaisman, Yakov I.
2016-10-01
The following methods were used to identify the fields of urban air pollution caused by motor transport exhaust gases: a mathematical model that accounts for the main factors governing the formation of pollution fields in a complex spatial domain; authoring software designed for computational modeling of the gas flow generated by numerous mobile point sources; and the results of computational experiments on pollutant spread and the evolution of concentration fields. The computational model of exhaust gas distribution and dispersion in a spatial domain that includes urban buildings, structures and main traffic arteries accounts for the stochastic appearance of cars at the borders of the examined territory using a Poisson process. The model also considers traffic light switching and yields the fields of velocity, pressure and temperature of the exhaust gases in urban air. Verification of the mathematical model and software confirmed their satisfactory fit to in-situ measurement data and the possibility of using the computed results for assessment and prediction of urban air pollution caused by motor transport exhaust gases.
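A Poisson-process boundary condition like the one described can be sampled with exponential inter-arrival gaps. A minimal sketch under invented parameters (the rate and horizon are illustrative, not from the paper):

```python
import numpy as np

# Hedged sketch: cars appear at a domain border as a homogeneous Poisson
# process with rate lam (vehicles per second); numbers are illustrative.
def poisson_arrival_times(lam, t_end, rng):
    """Sample arrival times on [0, t_end) via exponential inter-arrival gaps."""
    times = []
    t = rng.exponential(1.0 / lam)  # gap to first arrival
    while t < t_end:
        times.append(t)
        t += rng.exponential(1.0 / lam)
    return np.array(times)

rng = np.random.default_rng(42)
arrivals = poisson_arrival_times(lam=0.2, t_end=3600.0, rng=rng)
# Expected count is lam * t_end = 720; any one sample fluctuates around it.
print(len(arrivals))
```

Each sampled time would seed one mobile point source at the border, which the flow solver then advects through the street network.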
2011-01-01
Background Controlling airborne contamination is of major importance in burn units because of the high susceptibility of burned patients to infections and the unique environmental conditions that can accentuate the infection risk. In particular, the elevated temperatures required in the patient room can create thermal convection flows which can transport airborne contaminants throughout the unit. In order to estimate this risk and optimize the design of an intensive care room intended to host severely burned patients, we relied on a computational fluid dynamics (CFD) methodology. Methods The study was carried out in 4 steps: i) patient room design, ii) CFD simulations of the patient room design to model air flows throughout the patient room, adjacent anterooms and the corridor, iii) construction of a prototype room and subsequent experimental studies to characterize its performance, and iv) qualitative comparison of the tendencies between CFD predictions and experimental results. The Electricité De France (EDF) open-source software Code_Saturne® (http://www.code-saturne.org) was used, and CFD simulations were conducted with a hexahedral mesh containing about 300 000 computational cells. The computational domain included the treatment room and two anterooms, including equipment, staff and patient. Experiments with inert aerosol particles followed by time-resolved particle counting were conducted in the prototype room for comparison with the CFD observations. Results We found that thermal convection can create contaminated zones near the ceiling of the room, which can subsequently lead to contaminant transfer into adjacent rooms. Experimental confirmation of these phenomena agreed well with CFD predictions and showed that particles greater than one micron (i.e. bacterial or fungal spore sizes) can be influenced by these thermally induced flows.
When the temperature difference between rooms was 7°C, significant contamination transfer was observed into the positive-pressure room when the access door was opened, while 2°C had little effect. Based on these findings, the constructed burn unit was outfitted with supplemental air exhaust ducts over the doors to compensate for the thermal convective flows. Conclusions CFD simulations proved to be a particularly useful tool for the design and optimization of a burn unit treatment room. Our results, confirmed qualitatively by experimental investigation, stressed that airborne transfer of microbial-size particles via thermal convection flows is able to bypass the protective overpressure in the patient room, which can represent a potential risk of cross-contamination between rooms in protected environments. PMID:21371304
Beauchêne, Christian; Laudinet, Nicolas; Choukri, Firas; Rousset, Jean-Luc; Benhamadouche, Sofiane; Larbre, Juliette; Chaouat, Marc; Benbunan, Marc; Mimoun, Maurice; Lajonchère, Jean-Patrick; Bergeron, Vance; Derouin, Francis
2011-03-03
Controlling airborne contamination is of major importance in burn units because of the high susceptibility of burned patients to infections and the unique environmental conditions that can accentuate the infection risk. In particular, the elevated temperatures required in the patient room can create thermal convection flows which can transport airborne contaminants throughout the unit. In order to estimate this risk and optimize the design of an intensive care room intended to host severely burned patients, we relied on a computational fluid dynamics (CFD) methodology. The study was carried out in 4 steps: i) patient room design, ii) CFD simulations of the patient room design to model air flows throughout the patient room, adjacent anterooms and the corridor, iii) construction of a prototype room and subsequent experimental studies to characterize its performance, and iv) qualitative comparison of the tendencies between CFD predictions and experimental results. The Electricité De France (EDF) open-source software Code_Saturne® (http://www.code-saturne.org) was used, and CFD simulations were conducted with a hexahedral mesh containing about 300 000 computational cells. The computational domain included the treatment room and two anterooms, including equipment, staff and patient. Experiments with inert aerosol particles followed by time-resolved particle counting were conducted in the prototype room for comparison with the CFD observations. We found that thermal convection can create contaminated zones near the ceiling of the room, which can subsequently lead to contaminant transfer into adjacent rooms. Experimental confirmation of these phenomena agreed well with CFD predictions and showed that particles greater than one micron (i.e. bacterial or fungal spore sizes) can be influenced by these thermally induced flows.
When the temperature difference between rooms was 7°C, significant contamination transfer was observed into the positive-pressure room when the access door was opened, while 2°C had little effect. Based on these findings, the constructed burn unit was outfitted with supplemental air exhaust ducts over the doors to compensate for the thermal convective flows. CFD simulations proved to be a particularly useful tool for the design and optimization of a burn unit treatment room. Our results, confirmed qualitatively by experimental investigation, stressed that airborne transfer of microbial-size particles via thermal convection flows is able to bypass the protective overpressure in the patient room, which can represent a potential risk of cross-contamination between rooms in protected environments.
Development of simulation computer complex specification
NASA Technical Reports Server (NTRS)
1973-01-01
The Training Simulation Computer Complex Study was one of three studies contracted in support of preparations for procurement of a shuttle mission simulator for shuttle crew training. The subject study was concerned with definition of the software loads to be imposed on the computer complex to be associated with the shuttle mission simulator and the development of procurement specifications based on the resulting computer requirements. These procurement specifications cover the computer hardware and system software as well as the data conversion equipment required to interface the computer to the simulator hardware. The development of the necessary hardware and software specifications required the execution of a number of related tasks which included: (1) simulation software sizing, (2) computer requirements definition, (3) data conversion equipment requirements definition, (4) system software requirements definition, (5) a simulation management plan, (6) a background survey, and (7) preparation of the specifications.
Computational simulation of concurrent engineering for aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Singhal, S. N.
1992-01-01
Results are summarized of an investigation to assess the infrastructure available and the technology readiness needed to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure is available in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties, all fundamental to developing such methods. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.
Computational simulation for concurrent engineering of aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Singhal, S. N.
1993-01-01
Results are summarized for an investigation to assess the infrastructure available and the technology readiness needed to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure is available in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties, all fundamental to developing such methods. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.
Computational simulation for concurrent engineering of aerospace propulsion systems
NASA Astrophysics Data System (ADS)
Chamis, C. C.; Singhal, S. N.
1993-02-01
Results are summarized for an investigation to assess the infrastructure available and the technology readiness needed to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure is available in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties, all fundamental to developing such methods. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Pengfei; Zhang, Yuwen, E-mail: zhangyu@missouri.edu; Yang, Mo
The structural, dynamic, and vibrational properties during heat transfer process in Si/Ge superlattices are studied by analyzing the trajectories generated by the ab initio Car-Parrinello molecular dynamics simulation. The radial distribution functions and mean square displacements are calculated and further discussions are made to explain and probe the structural changes relating to the heat transfer phenomenon. Furthermore, the vibrational density of states of the two layers (Si/Ge) are computed and plotted to analyze the contributions of phonons with different frequencies to the heat conduction. Coherent heat conduction of the low frequency phonons is found and their contributions to facilitate heat transfer are confirmed. The Car-Parrinello molecular dynamics simulation outputs in the work show reasonable thermophysical results of the thermal energy transport process and shed light on the potential applications of treating the heat transfer in the superlattices of semiconductor materials from a quantum mechanical molecular dynamics simulation perspective.
Computer modeling describes gravity-related adaptation in cell cultures.
Alexandrov, Ludmil B; Alexandrova, Stoyana; Usheva, Anny
2009-12-16
Questions about the changes of biological systems in response to hostile environmental factors are important but not easy to answer. Often, the traditional description with differential equations is difficult due to the overwhelming complexity of the living systems. Another way to describe complex systems is by simulating them with phenomenological models such as the well-known evolutionary agent-based model (EABM). Here we developed an EABM to simulate cell colonies as a multi-agent system that adapts to hyper-gravity in starvation conditions. In the model, the cell's heritable characteristics are generated and transferred randomly to offspring cells. After a qualitative validation of the model at normal gravity, we simulate cellular growth in hyper-gravity conditions. The obtained data are consistent with previously confirmed theoretical and experimental findings for bacterial behavior in environmental changes, including the experimental data from the microgravity Atlantis and the Hypergravity 3000 experiments. Our results demonstrate that it is possible to utilize an EABM with realistic qualitative description to examine the effects of hypergravity and starvation on complex cellular entities.
Communication: Polymer entanglement dynamics: Role of attractive interactions
Grest, Gary S.
2016-10-10
The coupled dynamics of entangled polymers, which span broad time and length scales, govern their unique viscoelastic properties. To follow chain mobility by numerical simulations from the intermediate Rouse and reptation regimes to the late time diffusive regime, highly coarse grained models with purely repulsive interactions between monomers are widely used since they are computationally the most efficient. In this paper, using large scale molecular dynamics simulations, the effect of including the attractive interaction between monomers on the dynamics of entangled polymer melts is explored for the first time over a wide temperature range. Attractive interactions have little effect on the local packing for all temperatures T and on the chain mobility for T higher than about twice the glass transition temperature Tg. Finally, these results, across a broad range of molecular weight, show that to study the dynamics of entangled polymer melts, the interactions can be treated as purely repulsive, confirming a posteriori the validity of previous studies and opening the way to new large scale numerical simulations.
NASA Astrophysics Data System (ADS)
Ji, Pengfei; Zhang, Yuwen; Yang, Mo
2013-12-01
The structural, dynamic, and vibrational properties during heat transfer process in Si/Ge superlattices are studied by analyzing the trajectories generated by the ab initio Car-Parrinello molecular dynamics simulation. The radial distribution functions and mean square displacements are calculated and further discussions are made to explain and probe the structural changes relating to the heat transfer phenomenon. Furthermore, the vibrational density of states of the two layers (Si/Ge) are computed and plotted to analyze the contributions of phonons with different frequencies to the heat conduction. Coherent heat conduction of the low frequency phonons is found and their contributions to facilitate heat transfer are confirmed. The Car-Parrinello molecular dynamics simulation outputs in the work show reasonable thermophysical results of the thermal energy transport process and shed light on the potential applications of treating the heat transfer in the superlattices of semiconductor materials from a quantum mechanical molecular dynamics simulation perspective.
Simulation-Based Validation of the p53 Transcriptional Activity with Hybrid Functional Petri Net.
Doi, Atsushi; Nagasaki, Masao; Matsuno, Hiroshi; Miyano, Satoru
2011-01-01
MDM2 and p19ARF are essential proteins in cancer pathways forming a complex with protein p53 to control the transcriptional activity of protein p53. It is confirmed that protein p53 loses its transcriptional activity by forming the functional dimer with protein MDM2. However, it is still unclear that protein p53 keeps its transcriptional activity when it forms the trimer with proteins MDM2 and p19ARF. We have observed mutual behaviors among genes p53, MDM2, p19ARF and their products on a computational model with hybrid functional Petri net (HFPN) which is constructed based on information described in the literature. The simulation results suggested that protein p53 should have the transcriptional activity in the forms of the trimer of proteins p53, MDM2, and p19ARF. This paper also discusses the advantages of HFPN based modeling method in terms of pathway description for simulations.
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth Sarkis
1987-01-01
The correspondence between robotic manipulators and single gimbal Control Moment Gyro (CMG) systems was exploited to aid in the understanding and design of single gimbal CMG steering laws. A test for null motion near a singular CMG configuration was derived which is able to distinguish between escapable and inescapable singular states. Detailed analysis of the Jacobian matrix null-space was performed and the results were used to develop and test a variety of single gimbal CMG steering laws. Computer simulations showed that all existing singularity avoidance methods are unable to avoid elliptic internal singularities. A new null motion algorithm using the Moore-Penrose pseudoinverse, however, was shown by simulation to avoid elliptic-type singularities under certain conditions. The SR-inverse, with appropriate null motion, was proposed as a general approach to singularity avoidance because of its ability to avoid singularities through limited introduction of torque error. Simulation results confirmed the superior performance of this method compared to the other available and proposed pseudoinverse-based steering laws.
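The trade the abstract describes, bounded gimbal rates at the cost of a small torque error, is the defining property of the singularity-robust (SR) inverse. A minimal numerical sketch, with a hypothetical 3x4 Jacobian and an invented damping value, not the thesis' actual steering law:

```python
import numpy as np

# Illustrative sketch: near a singular CMG Jacobian the Moore-Penrose
# pseudoinverse produces unbounded gimbal rates, while the SR-inverse
# J^T (J J^T + lam*I)^-1 stays bounded at the cost of a small torque error.
def sr_inverse(J, lam=0.01):
    m = J.shape[0]
    return J.T @ np.linalg.inv(J @ J.T + lam * np.eye(m))

# A nearly singular 3x4 Jacobian (third row almost a sum of the first two)
J = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0, 1.0 + 1e-8]])
torque_cmd = np.array([0.1, 0.1, 0.0])   # lies outside the degenerate range

gimbal_rates_pinv = np.linalg.pinv(J) @ torque_cmd   # blows up near singularity
gimbal_rates_sr = sr_inverse(J) @ torque_cmd         # stays bounded

print(np.linalg.norm(gimbal_rates_sr) < np.linalg.norm(gimbal_rates_pinv))
```

The damping parameter lam sets the compromise: zero recovers the pseudoinverse, larger values trade more torque error for smoother rates near singular configurations.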
Analysis of a new phase and height algorithm in phase measurement profilometry
NASA Astrophysics Data System (ADS)
Bian, Xintian; Zuo, Fen; Cheng, Ju
2018-04-01
Traditional phase measurement profilometry adopts divergent illumination to obtain the height distribution of a measured object accurately. However, the mapping relation between reference plane coordinates and phase distribution must be calculated before measurement, and the data stored in a computer in the form of a data table for later use. This study improves the distribution of the projected fringes and derives a phase-height mapping algorithm for the case where the two pupils of the projection and imaging systems are at unequal heights and the projection and imaging axes lie on different planes. With this algorithm, calculating the mapping relation between reference plane coordinates and phase distribution prior to measurement is unnecessary. Thus, the measurement process is simplified, and the construction of an experimental system is made easy. Computer simulation and experimental results confirm the effectiveness of the method.
Theoretical and experimental study of polycyclic aromatic compounds as β-tubulin inhibitors.
Olazarán, Fabian E; García-Pérez, Carlos A; Bandyopadhyay, Debasish; Balderas-Rentería, Isaias; Reyes-Figueroa, Angel D; Henschke, Lars; Rivera, Gildardo
2017-03-01
In this work, through a docking analysis of compounds from the ZINC chemical library against human β-tubulin using a high-performance computer cluster, we report new polycyclic aromatic compounds that bind with high energy at the colchicine binding site of β-tubulin, suggesting three new key amino acids. However, molecular dynamics analysis showed low stability in the interaction between ligand and receptor. These results were confirmed experimentally in in vitro and in vivo models, suggesting that molecular dynamics simulation is the best option for finding new potential β-tubulin inhibitors. Graphical abstract: Bennett's acceptance ratio (BAR) method.
Experimental verification of the rainbow trapping effect in adiabatic plasmonic gratings
Gan, Qiaoqiang; Gao, Yongkang; Wagner, Kyle; Vezenov, Dmitri; Ding, Yujie J.; Bartoli, Filbert J.
2011-01-01
We report the experimental observation of a trapped rainbow in adiabatically graded metallic gratings, designed to validate theoretical predictions for this unique plasmonic structure. One-dimensional graded nanogratings were fabricated and their surface dispersion properties tailored by varying the grating groove depth, whose dimensions were confirmed by atomic force microscopy. Tunable plasmonic bandgaps were observed experimentally, and direct optical measurements on graded grating structures show that light of different wavelengths in the 500–700-nm region is “trapped” at different positions along the grating, consistent with computer simulations, thus verifying the “rainbow” trapping effect. PMID:21402936
Linear and quadratic static response functions and structure functions in Yukawa liquids.
Magyar, Péter; Donkó, Zoltán; Kalman, Gabor J; Golden, Kenneth I
2014-08-01
We compute linear and quadratic static density response functions of three-dimensional Yukawa liquids by applying an external perturbation potential in molecular dynamics simulations. The response functions are also obtained from the equilibrium fluctuations (static structure factors) in the system via the fluctuation-dissipation theorems. The good agreement of the quadratic response functions, obtained in the two different ways, confirms the quadratic fluctuation-dissipation theorem. We also find that the three-point structure function may be factorizable into two-point structure functions, leading to a cluster representation of the equilibrium triplet correlation function.
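The equilibrium-fluctuation route mentioned above starts from static structure factors estimated directly from particle configurations. A minimal sketch of the two-point estimator S(k) = <|rho_k|^2>/N on an invented ideal-gas configuration (box size, particle count and wave vectors are illustrative):

```python
import numpy as np

# Hedged sketch: estimate the two-point static structure factor from
# instantaneous particle positions; all parameters are illustrative.
def structure_factor(positions, k_vectors):
    """S(k) for each wave vector, from positions of shape (N, 3)."""
    n = positions.shape[0]
    phases = positions @ k_vectors.T            # (N, nk) array of k . r_j
    rho_k = np.exp(-1j * phases).sum(axis=0)    # collective density modes
    return (np.abs(rho_k) ** 2) / n

rng = np.random.default_rng(1)
L = 10.0
pos = rng.uniform(0.0, L, size=(1000, 3))       # uncorrelated (ideal-gas) config
ks = (2 * np.pi / L) * np.array([[1, 0, 0], [2, 0, 0], [3, 0, 0]], float)
S = structure_factor(pos, ks)
# For an uncorrelated gas, single-snapshot S(k) values scatter around 1;
# in practice one averages over many equilibrium configurations.
print(S.shape)
```

The wave vectors are restricted to multiples of 2π/L so that each density mode is compatible with the periodic box; a liquid's interactions would show up as structure in S(k) relative to the flat ideal-gas baseline.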
NASA Technical Reports Server (NTRS)
Mosher, R. A.; Palusinski, O. A.; Bier, M.
1982-01-01
A mathematical model has been developed which describes the steady state in an isoelectric focusing (IEF) system with ampholytes or monovalent buffers. The model is based on the fundamental equations describing the component dissociation equilibria, mass transport due to diffusion and electromigration, electroneutrality, and the conservation of charge. The validity and usefulness of the model have been confirmed by using it to formulate buffer systems in actual laboratory experiments. The model has recently been extended to include the evolution of transient states not only in IEF but also in other modes of electrophoresis.
A feedback control model for network flow with multiple pure time delays
NASA Technical Reports Server (NTRS)
Press, J.
1972-01-01
A control model describing a network flow hindered by multiple pure time (or transport) delays is formulated. Feedbacks connect each desired output with a single control sector situated at the origin. The dynamic formulation invokes the use of differential-difference equations. This causes the characteristic equation of the model to consist of transcendental functions instead of a common algebraic polynomial. A general graphical criterion is developed to evaluate the stability of such a problem. A digital computer simulation confirms the validity of this criterion. An optimal decision-making process with multiple delays is presented.
NASA Astrophysics Data System (ADS)
Jiang, Zhen-Yu; Li, Lin; Huang, Yi-Fan
2009-07-01
The segmented mirror telescope is widely used. The aberrations of segmented mirror systems are different from those of single mirror systems. This paper uses Fourier optics theory to analyse the Zernike aberrations of segmented mirror systems and concludes that the Zernike aberrations of segmented mirror systems obey the linearity theorem. The design of a segmented space telescope and segmentation schemes are discussed, and its optical model is constructed. A computer simulation experiment is performed with this optical model to verify the suppositions. The experimental results confirm the correctness of the model.
Towards a physical interpretation of the entropic lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Malaspinas, Orestis; Deville, Michel; Chopard, Bastien
2008-12-01
The entropic lattice Boltzmann method (ELBM) is one among several different versions of the lattice Boltzmann method for the simulation of hydrodynamics. The collision term of the ELBM is characterized by a nonincreasing H function, guaranteed by a variable relaxation time. We propose here an analysis of the ELBM using the Chapman-Enskog expansion. We show that it can be interpreted as some kind of subgrid model, where viscosity correction scales like the strain rate tensor. We confirm our analytical results by the numerical computations of the relaxation time modifications on the two-dimensional dipole-wall interaction benchmark.
Shape of isolated domains in lithium tantalate single crystals at elevated temperatures
NASA Astrophysics Data System (ADS)
Shur, V. Ya.; Akhmatkhanov, A. R.; Chezganov, D. S.; Lobov, A. I.; Baturin, I. S.; Smirnov, M. M.
2013-12-01
The shape of isolated domains has been investigated in congruent lithium tantalate (CLT) single crystals at elevated temperatures and analyzed in terms of a kinetic approach. The obtained temperature dependence of the growing domain shape in CLT, including a circular shape at temperatures above 190 °C, has been attributed to the increasing relative contribution of isotropic ionic conductivity. The observed nonstop wall motion and independent domain growth after merging in CLT, as opposed to stoichiometric lithium tantalate, have been attributed to the difference in wall orientation. Computer simulation has confirmed the applicability of the kinetic approach to the explanation of the domain shape.
Tooth shape optimization of brushless permanent magnet motors for reducing torque ripples
NASA Astrophysics Data System (ADS)
Hsu, Liang-Yi; Tsai, Mi-Ching
2004-11-01
This paper presents a tooth shape optimization method based on a genetic algorithm to reduce the torque ripple of brushless permanent magnet motors under two different magnetization directions. The analysis of this design method mainly focuses on magnetic saturation and cogging torque, and the computation in the optimization process is based on an equivalent magnetic network circuit. Simulation results obtained from finite element analysis are used to confirm the accuracy and performance. Finite element analysis results for different tooth shapes are compared to show the effectiveness of the proposed method.
Flow analysis of new type propulsion system for UV’s
NASA Astrophysics Data System (ADS)
Eimanis, M.; Auzins, J.
2017-10-01
This paper presents an original design of an autonomous underwater vehicle (AUV) in which thrust is created by the helicoidal shape of the hull rather than by screw propellers. Propulsion force is created by counter-rotating bow and stern parts. The middle part of the vehicle serves as a cargo compartment containing all control mechanisms and communications. It is made of elastic material and contains a Cardan-joint mechanism that allows changing the direction of the vehicle, actuated by bending drives. A bending drive velocity control algorithm for the automatic control of vehicle movement direction is proposed. The dynamics of the AUV are simulated using the multibody simulation software MSC Adams. For the simulation of water resistance forces and torques, surrogate polynomial metamodels are created on the basis of computer experiments with CFD software. For flow interaction with the model geometry, a simplified vehicle model is submerged in a fluid medium using dedicated CFD software, following the same idea used in wind tunnel experiments. The simulation results are compared with measurements of an AUV prototype created at the Institute of Mechanics of Riga Technical University. Experiments with the prototype showed good agreement with simulation results and confirmed the effectiveness and future potential of the proposed principle.
Simulated infrared spectra of triflic acid during proton dissociation.
Laflamme, Patrick; Beaudoin, Alexandre; Chapaton, Thomas; Spino, Claude; Soldera, Armand
2012-05-05
Vibrational analysis of triflic acid (TfOH) at different water uptakes was conducted. This molecule mimics the sulfonate end of the Nafion side-chain. As the proton leaves the sulfonic acid group, structural changes within the Nafion side-chain take place. These are revealed by signal shifts in the infrared spectrum. Molecular modeling is used to follow the structural modifications that occur during proton dissociation. To confirm the accuracy of the proposed structures, infrared spectra were computed via quantum chemical modeling based on density functional theory. The requirement to use additional diffuse functions in the basis set is discussed. Simulated infrared spectra of 1 and 2 acid molecules with different water contents were compared with experimental data. An accurate description of the infrared spectra for systems containing 2 TfOH was obtained.
NASA Astrophysics Data System (ADS)
Johnson, J.; Brackley, C. A.; Cook, P. R.; Marenduzzo, D.
2015-02-01
We present computer simulations of the phase behaviour of an ensemble of proteins interacting with a polymer, mimicking non-specific binding to a piece of bacterial DNA or eukaryotic chromatin. The proteins can simultaneously bind to the polymer in two or more places to create protein bridges. Despite the lack of any explicit interaction between the proteins or between DNA segments, our simulations confirm previous results showing that when the protein-polymer interaction is sufficiently strong, the proteins come together to form clusters. Furthermore, a sufficiently large concentration of bridging proteins leads to the compaction of the swollen polymer into a globular phase. Here we characterise both the formation of protein clusters and the polymer collapse as a function of protein concentration, protein-polymer affinity and fibre flexibility.
Development, Validation and Parametric study of a 3-Year-Old Child Head Finite Element Model
NASA Astrophysics Data System (ADS)
Cui, Shihai; Chen, Yue; Li, Haiyan; Ruan, ShiJie
2015-12-01
Traumatic brain injury caused by falls and traffic accidents is an important cause of death and disability in children. Recently, computer finite element (FE) head models have been developed to investigate brain injury mechanisms and biomechanical responses. Based on CT data of a healthy 3-year-old child head, an FE head model with detailed anatomical structure was developed. Deep brain structures such as white matter, gray matter, the cerebral ventricles and the hippocampus were created in this FE model for the first time. The FE model was validated by comparing simulation results with those of reconstructed child and adult cadaver experiments. In addition, the effects of skull stiffness on the dynamic responses of the child head were further investigated. All the simulation results confirmed the good biofidelity of the FE model.
Guo, Kai; Zhang, Yong-Liang; Qian, Cheng; Fung, Kin-Hung
2018-04-30
In this work, we demonstrate computationally that electric dipole-quadrupole hybridization (EDQH) can be utilized to enhance plasmonic second-harmonic generation (SHG) efficiency. To this end, we construct T-shaped plasmonic heterodimers consisting of a short and a long gold nanorod and study them with finite element method simulations. By controlling the strength of capacitive coupling between the two gold nanorods, we explore the effect of EDQH evolution on the SHG process, including the SHG efficiency enhancement, the corresponding near-field distribution, and the far-field radiation pattern. Simulation results demonstrate that EDQH can enhance the SHG efficiency by a factor of >100 in comparison with that achieved by an isolated gold nanorod. Additionally, the far-field pattern of the SHG can be adjusted beyond the well-known quadrupolar distribution, which confirms that EDQH plays an important role in the SHG process.
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Okahashi, Nobuyuki; Kohno, Susumu; Kitajima, Shunsuke; Matsuda, Fumio; Takahashi, Chiaki; Shimizu, Hiroshi
2015-12-01
Studying metabolic directions and flow rates in cultured mammalian cells can provide key information for understanding metabolic function in the fields of cancer research, drug discovery, stem cell biology, and antibody production. In this work, metabolic engineering methodologies including medium component analysis, ¹³C-labeling experiments, and computer-aided simulation analysis were applied to characterize the metabolic phenotype of soft tissue sarcoma cells derived from p53-null mice. Cells were cultured in medium containing [1-¹³C]glutamine to assess the level of reductive glutamine metabolism via the reverse reaction of isocitrate dehydrogenase (IDH). The specific uptake and production rates of glucose, organic acids, and the 20 amino acids were determined by time-course analysis of cultured media. Gas chromatography-mass spectrometry analysis of the ¹³C-labeling of citrate, succinate, fumarate, malate, and aspartate confirmed an isotopically steady state of the cultured cells. After removing the effect of naturally occurring isotopes, the direction of the IDH reaction was determined by computer-aided analysis. The results validated that metabolic engineering methodologies are applicable to soft tissue sarcoma cells derived from p53-null mice, and also demonstrated that reductive glutamine metabolism is active in p53-null soft tissue sarcoma cells under normoxia.
Spent nuclear fuel assembly inspection using neutron computed tomography
NASA Astrophysics Data System (ADS)
Pope, Chad Lee
The research presented here focuses on spent nuclear fuel assembly inspection using neutron computed tomography. Experimental measurements involving neutron beam transmission through a spent nuclear fuel assembly serve as benchmark measurements for an MCNP simulation model. Comparison of measured results to simulation results shows good agreement. Generation of tomography images from MCNP tally results was accomplished using adapted versions of built-in MATLAB algorithms. Multiple fuel assembly models were examined to provide a broad set of conclusions. Tomography images revealing assembly geometric information, including the fuel element lattice structure and missing elements, can be obtained using high-energy neutrons. A projection difference technique was developed which reveals the substitution of unirradiated fuel elements for irradiated fuel elements, using high-energy neutrons. More subtle material differences, such as altering the burnup of individual elements, can be identified with lower-energy neutrons provided the scattered neutron contribution to the image is limited. The research results show that neutron computed tomography can be used to inspect spent nuclear fuel assemblies for the purpose of identifying anomalies such as missing elements or substituted elements. The ability to identify anomalies in spent fuel assemblies can be used to deter diversion of material by increasing the risk of early detection, as well as to improve reprocessing facility operations by confirming that the spent fuel configuration is as expected or allowing segregation if anomalies are detected.
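The reconstruction step described above (tomography images generated from transmission tallies using adapted MATLAB routines) can be illustrated with a toy filtered back-projection in Python. The phantom, angles, and ramp filter below are hypothetical stand-ins for illustration only, not the article's MCNP/MATLAB pipeline:

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(img, angles):
    # Each projection = line integrals along one direction (a discrete Radon transform).
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

def back_project(sino, angles):
    n = sino.shape[1]
    # Ram-Lak (ramp) filter applied in the Fourier domain of each projection.
    freqs = np.fft.fftfreq(n)
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * np.abs(freqs), axis=1))
    recon = np.zeros((n, n))
    for a, p in zip(angles, filtered):
        # Smear the filtered projection back across the image, then undo the rotation.
        recon += rotate(np.tile(p, (n, 1)), -a, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles))

# Hypothetical phantom: a 4x4 "lattice" of dense pins with one missing element.
img = np.zeros((64, 64))
for i in range(4):
    for j in range(4):
        if (i, j) != (2, 1):                      # missing element
            img[10 + i*12: 14 + i*12, 10 + j*12: 14 + j*12] = 1.0

angles = np.linspace(0., 180., 60, endpoint=False)
recon = back_project(forward_project(img, angles), angles)
# The gap at the missing pin shows up as a low-intensity region in recon.
```

The design mirrors the abstract's use case: an anomaly (a missing element) appears as an intensity deficit in the reconstructed slice, detectable by comparing regions of the image.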
Understanding Emergency Care Delivery Through Computer Simulation Modeling.
Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L
2018-02-01
In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
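As an illustration of one of the four approaches named above, a minimal discrete-event simulation of emergency care can be sketched with a priority-queue event calendar. The single-treatment-bay layout and all rates are invented for illustration, not drawn from the conference article:

```python
import heapq
import random

# Minimal discrete-event simulation (DES) sketch: an emergency department
# modelled as one treatment bay with exponential interarrival and service
# times, served first-come first-served.

def simulate_ed(n_patients, arrival_rate, service_rate, seed=1):
    rng = random.Random(seed)
    t = 0.0
    events = []  # (time, kind, patient_id) min-heap = the event calendar
    for i in range(n_patients):
        t += rng.expovariate(arrival_rate)
        heapq.heappush(events, (t, "arrival", i))
    busy_until = 0.0
    waits = []
    while events:
        time, kind, pid = heapq.heappop(events)
        if kind == "arrival":
            start = max(time, busy_until)     # wait if the bay is occupied
            waits.append(start - time)
            busy_until = start + rng.expovariate(service_rate)
    return sum(waits) / len(waits)

mean_wait = simulate_ed(n_patients=10_000, arrival_rate=0.8, service_rate=1.0)
# For an M/M/1 queue at utilisation rho = 0.8, theory gives a mean wait of
# rho / (mu - lambda) = 4.0; the simulated estimate should be in that vicinity.
```

Comparing the simulated mean wait against the closed-form M/M/1 value is a standard sanity check before adding the realistic features (multiple bays, triage classes, boarding) that make DES useful for emergency care research.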
Rating of Dynamic Coefficient for Simple Beam Bridge Design on High-Speed Railways
NASA Astrophysics Data System (ADS)
Diachenko, Leonid; Benin, Andrey; Smirnov, Vladimir; Diachenko, Anastasia
2018-06-01
The aim of this work is to improve the methodology for the dynamic analysis of simple beam spans under the action of high-speed trains. The research uses mathematical simulation based on numerical and analytical methods of structural mechanics. The article analyses the parameters governing the effect of high-speed trains on simple beam bridge spans and proposes a technique for determining the dynamic coefficient applied to the live load. The reliability of the proposed methodology is confirmed by numerical simulations of high-speed trains passing over spans at different speeds. The proposed algorithm for dynamic computation is based on the connection between the maximum acceleration of the span in resonant vibration and the main components of the stress-strain state. The methodology yields both maximum and minimum values of the principal internal forces in the structure, which makes endurance (fatigue) verification possible. It is noted that the dynamic increments of the different components of the stress-strain state (bending moments, shear forces and vertical deflections) differ, so a differentiated approach is needed when evaluating dynamic coefficients for design verification against the first and second groups of limit states (ultimate and serviceability). The practical importance of the methodology is that it allows the dynamic coefficients and principal internal forces in simple beam spans to be determined without numerical simulation and direct dynamic analysis, which significantly reduces design effort.
A theoretical framework for strain-related trabecular bone maintenance and adaptation.
Ruimerman, R; Hilbers, P; van Rietbergen, B; Huiskes, R
2005-04-01
It is assumed that the density and morphology of trabecular bone are partially controlled by mechanical forces. How these effects are expressed in the local metabolic functions of osteoclast resorption and osteoblast formation is not known. To investigate possible mechano-biological pathways for these mechanisms we previously proposed a mathematical theory (Nature 405 (2000) 704). This theory is based on hypothetical osteocyte stimulation of osteoblast bone formation, as an effect of elevated strain in the bone matrix, and a role for microcracks and disuse in promoting osteoclast resorption. Applied in a 2-D finite element analysis (FEA) model, the theory explained the formation of trabecular patterns. In this article we present a 3-D FEA model based on the same theory and investigate its ability to predict the morphological effects of metabolic reactions to mechanical loads. The computations simulated the development of trabecular morphological details during growth reasonably realistically, relative to measurements in growing pigs. They confirmed that the proposed mechanisms also inherently lead to optimal stress transfer. Alternative loading directions produced new trabecular orientations. Reduction of load reduced trabecular thickness, connectivity and mass in the simulation, as is seen in disuse osteoporosis. Simulating the effects of estrogen deficiency through increased osteoclast resorption frequencies likewise produced osteoporotic morphologies, as seen in post-menopausal osteoporosis. We conclude that the theory provides a suitable computational framework for investigating hypothetical relationships between bone loading and metabolic expression.
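The hypothesized regulation loop (strain-driven osteoblast formation, disuse-driven osteoclast resorption) can be caricatured as a per-element density update rule. The functional form, thresholds, and rate constants below are illustrative assumptions, not the published theory's actual equations:

```python
import numpy as np

# Toy strain-regulated remodelling rule: osteocytes sense a mechanical
# stimulus; formation occurs where it exceeds a reference level, resorption
# where the tissue is in disuse, and nothing happens in a "lazy zone" between.

def remodel(density, stimulus, k_form=1.0, s_ref=1.0, s_disuse=0.2,
            k_res=0.5, dt=0.1, rho_min=0.01, rho_max=1.0):
    d_rho = np.where(stimulus > s_ref, k_form * (stimulus - s_ref), 0.0)
    d_rho = np.where(stimulus < s_disuse, -k_res, d_rho)   # disuse resorption
    return np.clip(density + dt * d_rho, rho_min, rho_max)

rho = np.full(5, 0.5)
stim = np.array([0.0, 0.1, 0.5, 1.5, 3.0])   # from disuse to overload
rho_new = remodel(rho, stim)
# Elements in disuse lose density, overloaded elements gain it,
# and elements in the lazy zone are unchanged.
```

In an FEA-coupled simulation this update would alternate with a stress analysis that recomputes the stimulus from the new density field, which is how trabecular patterns emerge in such models.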
Increased Mach Number Capability for the NASA Glenn 10x10 Supersonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Slater, John; Saunders, John
2014-01-01
Computational simulations and wind tunnel testing were conducted to explore the operation of the Abe Silverstein Supersonic Wind Tunnel at the NASA Glenn Research Center at test section Mach numbers above the current limit of Mach 3.5. An increased Mach number would enhance the capability for testing of supersonic and hypersonic propulsion systems. The focus of the explorations was on understanding the flow within the second throat of the tunnel, which is downstream of the test section and is where the supersonic flow decelerates to subsonic flow. Methods of computational fluid dynamics (CFD) were applied to provide details of the shock boundary layer structure and to estimate losses in total pressure. The CFD simulations indicated that the tunnel could be operated up to Mach 4.0 if the minimum width of the second throat was made smaller than that used for previous operation of the tunnel. Wind tunnel testing was able to confirm such operation of the tunnel at Mach 3.6 and 3.7 before a hydraulic failure caused a stop to the testing. CFD simulations performed after the wind tunnel testing showed good agreement with test data consisting of static pressures along the ceiling of the second throat. The CFD analyses showed increased shockwave boundary layer interactions, which was also observed as increased unsteadiness of dynamic pressures collected in the wind tunnel testing.
Million-body star cluster simulations: comparisons between Monte Carlo and direct N-body
NASA Astrophysics Data System (ADS)
Rodriguez, Carl L.; Morscher, Meagan; Wang, Long; Chatterjee, Sourav; Rasio, Frederic A.; Spurzem, Rainer
2016-12-01
We present the first detailed comparison between million-body globular cluster simulations computed with a Hénon-type Monte Carlo code, CMC, and a direct N-body code, NBODY6++GPU. Both simulations start from an identical cluster model with 10^6 particles, and include all of the relevant physics needed to treat the system in a highly realistic way. With the two codes 'frozen' (no fine-tuning of any free parameters or internal algorithms of the codes) we find good agreement in the overall evolution of the two models. Furthermore, we find that in both models, large numbers of stellar-mass black holes (>1000) are retained for 12 Gyr. Thus, the very accurate direct N-body approach confirms recent predictions that black holes can be retained in present-day, old globular clusters. We find only minor disagreements between the two models and attribute these to the small-N dynamics driving the evolution of the cluster core, for which the Monte Carlo assumptions are less ideal. Based on the overwhelming general agreement between the two models computed using these vastly different techniques, we conclude that our Monte Carlo approach, which is more approximate, but dramatically faster compared to the direct N-body, is capable of producing an accurate description of the long-term evolution of massive globular clusters even when the clusters contain large populations of stellar-mass black holes.
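For contrast with the Monte Carlo approach, the direct N-body method can be sketched in a few lines: a brute-force O(N²) force sum with a leapfrog (kick-drift-kick) integrator. Particle count, softening, and units here are toy choices for illustration, far from the million-body runs discussed:

```python
import numpy as np

def accelerations(pos, mass, eps=1e-2):
    d = pos[:, None, :] - pos[None, :, :]              # pairwise separations
    r2 = (d**2).sum(-1) + eps**2                       # softened distances
    np.fill_diagonal(r2, np.inf)                       # no self-force
    return -(d * (mass[None, :, None] / r2[..., None]**1.5)).sum(axis=1)

def leapfrog(pos, vel, mass, dt, steps):
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc                          # kick
        pos += dt * vel                                # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc                          # kick
    return pos, vel

def energy(pos, vel, mass, eps=1e-2):
    ke = 0.5 * (mass * (vel**2).sum(-1)).sum()
    d = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt((d**2).sum(-1) + eps**2)
    iu = np.triu_indices(len(mass), k=1)
    pe = -(mass[:, None] * mass[None, :] / r)[iu].sum()
    return ke + pe

rng = np.random.default_rng(0)
n = 64
pos = rng.normal(size=(n, 3))
vel = 0.1 * rng.normal(size=(n, 3))
mass = np.full(n, 1.0 / n)
e0 = energy(pos, vel, mass)
pos, vel = leapfrog(pos, vel, mass, dt=1e-3, steps=200)
e1 = energy(pos, vel, mass)
# Leapfrog is symplectic: |e1 - e0| stays small over the integration.
```

The O(N²) pairwise sum is what makes direct N-body accurate but expensive, and it is precisely the cost that Hénon-type Monte Carlo methods avoid by treating relaxation statistically.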
ERIC Educational Resources Information Center
Zillesen, Pieter G. van Schaick
This paper introduces a hardware- and software-independent model for producing educational computer simulation environments. The model, which is based on the results of 32 studies of educational computer simulation program production, implies that educational computer simulation environments are specified, constructed, tested, implemented, and…
The Learning Effects of Computer Simulations in Science Education
ERIC Educational Resources Information Center
Rutten, Nico; van Joolingen, Wouter R.; van der Veen, Jan T.
2012-01-01
This article reviews the (quasi)experimental research of the past decade on the learning effects of computer simulations in science education. The focus is on two questions: how use of computer simulations can enhance traditional education, and how computer simulations are best used in order to improve learning processes and outcomes. We report on…
NASA Astrophysics Data System (ADS)
Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko
2017-08-01
We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
NASA Astrophysics Data System (ADS)
Macías-Díaz, J. E.; Hendy, A. S.; De Staelen, R. H.
2018-03-01
In this work, we investigate a general nonlinear wave equation with Riesz space-fractional derivatives that generalizes various classical hyperbolic models, including the sine-Gordon and the Klein-Gordon equations from relativistic quantum mechanics. A finite-difference discretization of the model is provided using fractional centered differences. The scheme is capable of preserving an energy-like quantity at each iteration. Some computational comparisons against solutions available in the literature are performed in order to assess the capability of the method to preserve the invariant. Our experiments confirm that the technique yields good approximations to the solutions considered. As an application of our scheme, we provide simulations that confirm, for the first time in the literature, the presence of the phenomenon of nonlinear supratransmission in Riesz space-fractional Klein-Gordon equations driven by a harmonic perturbation at the boundary.
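The fractional centered differences mentioned above rest on a known coefficient recurrence. The sketch below assumes the standard coefficient definition (not necessarily the exact normalization used in this paper) and checks the classical α = 2 limit, where the weights must collapse to the ordinary second difference:

```python
from math import gamma

# Fractional centered-difference coefficients for the Riesz derivative of
# order alpha (1 < alpha <= 2):
#   g_0     = Gamma(alpha+1) / Gamma(alpha/2+1)^2
#   g_{k+1} = g_k * (k - alpha/2) / (k + 1 + alpha/2)
# The Riesz derivative is then approximated (up to sign convention) by
# a symmetric weighted sum  h^{-alpha} * sum_k g_k * u(x - k*h).

def centered_coeffs(alpha, kmax):
    g = [gamma(alpha + 1) / gamma(alpha / 2 + 1) ** 2]
    for k in range(kmax):
        g.append(g[-1] * (k - alpha / 2) / (k + 1 + alpha / 2))
    return g  # g[k] multiplies u at offset +-k; the stencil is symmetric

g2 = centered_coeffs(2.0, 4)      # classical limit: [2, -1, 0, 0, 0]
g15 = centered_coeffs(1.5, 4)     # genuinely nonlocal: every weight nonzero
```

The α = 2 check recovers the familiar [-1, 2, -1] Laplacian stencil, while for fractional α the stencil never truncates, which is the numerical signature of the operator's nonlocality.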
Chen, C N; Su, Y; Baybayan, P; Siruno, A; Nagaraja, R; Mazzarella, R; Schlessinger, D; Chen, E
1996-01-01
Ordered shotgun sequencing (OSS) has been successfully carried out with an Xq25 YAC substrate. yWXD703 DNA was subcloned into lambda phage and sequences of insert ends of the lambda subclones were used to generate a map to select a minimum tiling path of clones to be completely sequenced. The sequence of 135 038 nt contains the entire ANT2 cDNA as well as four other candidates suggested by computer-assisted analyses. One of the putative genes is homologous to a gene implicated in Graves' disease and it, ANT2 and two others are confirmed by EST matches. The results suggest that OSS can be applied to YACs in accord with earlier simulations and further indicate that the sequence of the YAC accurately reflects the sequence of uncloned human DNA. PMID:8918809
CFD and PTV steady flow investigation in an anatomically accurate abdominal aortic aneurysm.
Boutsianis, Evangelos; Guala, Michele; Olgac, Ufuk; Wildermuth, Simon; Hoyer, Klaus; Ventikos, Yiannis; Poulikakos, Dimos
2009-01-01
There is considerable interest in computational and experimental flow investigations within abdominal aortic aneurysms (AAAs). This task stipulates advanced grid generation techniques and cross-validation because of the anatomical complexity. The purpose of this study is to examine the feasibility of velocity measurements by particle tracking velocimetry (PTV) in realistic AAA models. Computed tomography and rapid prototyping were combined to digitize and construct a silicone replica of a patient-specific AAA. Three-dimensional velocity measurements were acquired using PTV under steady averaged resting boundary conditions. Computational fluid dynamics (CFD) simulations were subsequently carried out with identical boundary conditions. The computational grid was created by splitting the luminal volume into manifold and nonmanifold subsections. They were filled with tetrahedral and hexahedral elements, respectively. Grid independency was tested on three successively refined meshes. Velocity differences of about 1% in all three directions existed mainly within the AAA sack. Pressure revealed similar variations, with the sparser mesh predicting larger values. PTV velocity measurements were taken along the abdominal aorta and showed good agreement with the numerical data. The results within the aneurysm neck and sack showed average velocity variations of about 5% of the mean inlet velocity. The corresponding average differences increased for all velocity components downstream the iliac bifurcation to as much as 15%. The two domains differed slightly due to flow-induced forces acting on the silicone model. Velocity quantification through narrow branches was problematic due to decreased signal to noise ratio at the larger local velocities. Computational wall pressure and shear fields are also presented. 
The agreement between CFD simulations and the PTV experimental data was confirmed by three-dimensional velocity comparisons at several locations within the investigated AAA anatomy indicating the feasibility of this approach.
Benefits of computer screen-based simulation in learning cardiac arrest procedures.
Bonnetain, Elodie; Boucheix, Jean-Michel; Hamet, Maël; Freysz, Marc
2010-07-01
What is the best way to train medical students early so that they acquire basic skills in cardiopulmonary resuscitation as effectively as possible? Studies have shown the benefits of high-fidelity patient simulators, but have also demonstrated their limits. New computer screen-based multimedia simulators have fewer constraints than high-fidelity patient simulators. In this area, as yet, there has been no research on the effectiveness of transfer of learning from a computer screen-based simulator to more realistic situations such as those encountered with high-fidelity patient simulators. We tested the benefits of learning cardiac arrest procedures using a multimedia computer screen-based simulator in 28 Year 2 medical students. Just before the end of the traditional resuscitation course, we compared two groups. An experiment group (EG) was first asked to learn to perform the appropriate procedures in a cardiac arrest scenario (CA1) in the computer screen-based learning environment and was then tested on a high-fidelity patient simulator in another cardiac arrest simulation (CA2). While the EG was learning to perform CA1 procedures in the computer screen-based learning environment, a control group (CG) actively continued to learn cardiac arrest procedures using practical exercises in a traditional class environment. Both groups were given the same amount of practice, exercises and trials. The CG was then also tested on the high-fidelity patient simulator for CA2, after which it was asked to perform CA1 using the computer screen-based simulator. Performances with both simulators were scored on a precise 23-point scale. On the test on a high-fidelity patient simulator, the EG trained with a multimedia computer screen-based simulator performed significantly better than the CG trained with traditional exercises and practice (16.21 versus 11.13 of 23 possible points, respectively; p<0.001). 
Computer screen-based simulation appears to be effective in preparing learners to use high-fidelity patient simulators, which present simulations that are closer to real-life situations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michikoshi, Shugo; Kokubo, Eiichiro; Inutsuka, Shu-ichiro, E-mail: michikoshi@cfca.j, E-mail: kokubo@th.nao.ac.j, E-mail: inutsuka@tap.scphys.kyoto-u.ac.j
2009-10-01
The gravitational instability of a dust layer is one of the scenarios for planetesimal formation. If the density of a dust layer becomes sufficiently high as a result of the sedimentation of dust grains toward the midplane of a protoplanetary disk, the layer becomes gravitationally unstable and spontaneously fragments into planetesimals. Using a shearing box method, we performed local N-body simulations of gravitational instability of a dust layer and subsequent coagulation without gas and investigated the basic formation process of planetesimals. In this paper, we adopted the accretion model as a collision model. A gravitationally bound pair of particles is replaced by a single particle with the total mass of the pair. This accretion model enables us to perform long-term and large-scale calculations. We confirmed that the formation process of planetesimals is the same as that in the previous paper with the rubble pile models. The formation process is divided into three stages: the formation of nonaxisymmetric structures; the creation of planetesimal seeds; and their collisional growth. We investigated the dependence of the planetesimal mass on the simulation domain size. We found that the mean mass of planetesimals formed in simulations is proportional to L_y^{3/2}, where L_y is the size of the computational domain in the direction of rotation. However, the mean mass of planetesimals is independent of L_x, where L_x is the size of the computational domain in the radial direction, if L_x is sufficiently large. We present an estimation formula for the planetesimal mass taking into account the simulation domain size.
Biomechanical simulation of thorax deformation using finite element approach.
Zhang, Guangzhi; Chen, Xian; Ohgi, Junji; Miura, Toshiro; Nakamoto, Akira; Matsumura, Chikanori; Sugiura, Seiryo; Hisada, Toshiaki
2016-02-06
The biomechanical simulation of the human respiratory system is expected to be a useful tool for the diagnosis and treatment of respiratory diseases. Because the deformation of the thorax significantly influences airflow in the lungs, we focused on simulating the thorax deformation by introducing contraction of the intercostal muscles and diaphragm, which are the main muscles responsible for the thorax deformation during breathing. We constructed a finite element model of the thorax, including the rib cage, intercostal muscles, and diaphragm. To reproduce the muscle contractions, we introduced the Hill-type transversely isotropic hyperelastic continuum skeletal muscle model, which allows the intercostal muscles and diaphragm to contract along the direction of the fibres with clinically measurable muscle activation and active force-length relationship. The anatomical fibre orientations of the intercostal muscles and diaphragm were introduced. Thorax deformation consists of movements of the ribs and diaphragm. By activating muscles, we were able to reproduce the pump-handle and bucket-handle motions for the ribs and the clinically observed motion for the diaphragm. In order to confirm the effectiveness of this approach, we simulated the thorax deformation during normal quiet breathing and compared the results with four-dimensional computed tomography (4D-CT) images for verification. Thorax deformation can be simulated by modelling the respiratory muscles according to continuum mechanics and by introducing muscle contractions. The reproduction of representative motions of the ribs and diaphragm and the comparison of the thorax deformations during normal quiet breathing with 4D-CT images demonstrated the effectiveness of the proposed approach. This work may provide a platform for establishing a computational mechanics model of the human respiratory system.
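The Hill-type constitutive idea used above (total fibre stress = passive elastic stress plus an activation-scaled active force-length response) can be sketched with generic textbook forms. The curves and constants below are illustrative assumptions, not the calibrated parameters of the cited thorax model:

```python
import math

# Illustrative ingredients of a Hill-type skeletal muscle law along the
# fibre direction, as a function of the fibre stretch lam.

def active_force_length(lam, lam_opt=1.0, w=0.35):
    # Bell-shaped curve peaking at the optimal fibre stretch lam_opt.
    return math.exp(-((lam - lam_opt) / w) ** 2)

def passive_stress(lam, c=0.05, k=5.0):
    # Exponential "toe" region; fibres only resist tension (lam > 1).
    return c * (math.exp(k * (lam - 1.0)) - 1.0) if lam > 1.0 else 0.0

def fibre_stress(lam, activation, sigma_max=1.0):
    # Total = passive elastic part + activation-scaled active part.
    return passive_stress(lam) + activation * sigma_max * active_force_length(lam)

# At optimal length and full activation the active part dominates:
s = fibre_stress(1.0, activation=1.0)
```

In a continuum FE setting, this scalar law acts along the local fibre direction of a transversely isotropic hyperelastic material, so prescribing an activation history for the intercostal muscles and diaphragm is what drives the simulated thorax motion.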
Evaluation of a breast software model for 2D and 3D X-ray imaging studies of the breast.
Baneva, Yanka; Bliznakova, Kristina; Cockmartin, Lesley; Marinov, Stoyko; Buliev, Ivan; Mettivier, Giovanni; Bosmans, Hilde; Russo, Paolo; Marshall, Nicholas; Bliznakov, Zhivko
2017-09-01
In X-ray imaging, test objects reproducing breast anatomy characteristics are realized to optimize issues such as image processing or reconstruction, lesion detection performance, image quality and radiation-induced detriment. Recently, a physical phantom with a structured background has been introduced for both 2D mammography and breast tomosynthesis. A software version of this phantom and a few related versions are now available, and a comparison between these 3D software phantoms and the physical phantom is presented. The software breast phantom simulates a semi-cylindrical container filled with spherical beads of different diameters. Four computational breast phantoms were generated with a dedicated software application; for two of these, physical phantoms are also available and are used for the side-by-side comparison. Planar projections in mammography and tomosynthesis were simulated under identical incident air kerma conditions. Tomosynthesis slices were reconstructed with in-house developed reconstruction software. In addition to a visual comparison, parameters such as fractal dimension, power-law exponent β and second-order statistics (skewness, kurtosis) of planar projections and tomosynthesis reconstructed images were compared. Visually, an excellent agreement between simulated and real planar and tomosynthesis images is observed. The comparison also shows an overall very good agreement between parameters evaluated from simulated and experimental images. The computational breast phantoms showed a close match with their physical versions. The detailed mathematical analysis of the images confirms the agreement between real and simulated 2D mammography and tomosynthesis images. The software phantom is ready for optimization purposes and extrapolation to other breast imaging techniques.
Kenjereš, Saša; Tjin, Jimmy Leroy
2017-12-01
In the present study, we investigate the concept of targeted delivery of pharmaceutical drug aerosols in an anatomically realistic geometry of the human upper and central respiratory system. The geometry considered extends from the mouth inlet to the eighth generation of the bronchial bifurcations and is identical to the phantom model used in the experimental studies of Banko et al. (2015, Exp. Fluids 56, 1-12; doi:10.1007/s00348-015-1966-y). In our computer simulations, we combine the transitional Reynolds-averaged Navier-Stokes (RANS) and the wall-resolved large eddy simulation (LES) methods for the air phase with the Lagrangian approach for the particulate (aerosol) phase. We validated the simulations against the recently obtained magnetic resonance velocimetry measurements of Banko et al., which provide a full three-dimensional mean velocity field for steady inspiratory conditions. Both approaches produced good agreement with experiments, and the transitional RANS approach was selected for the multiphase simulations of aerosol transport because of its significantly lower computational cost. The local and total deposition efficiency are calculated for different classes of pharmaceutical particles (in the range 0.1 μm ≤ d_p ≤ 10 μm) without and with a paramagnetic core (shell-core particles). For the latter, an external magnetic field is imposed, with its source placed in the proximity of the first bronchial bifurcation. We demonstrated that both the total and local deposition of aerosols at targeted locations can be significantly increased by an applied magnetization force. This finding confirms the potential for further advancement of the magnetic drug targeting technique towards more efficient treatments for respiratory diseases.
NASA Astrophysics Data System (ADS)
Warsta, L.; Karvonen, T.
2017-12-01
There are currently 25 shooting and training areas in Finland managed by The Finnish Defence Forces (FDF), where military activities can cause contamination of open waters and groundwater reservoirs. In the YMPYRÄ project, a computer software framework is being developed that combines existing open environmental data and proprietary information collected by FDF with computational models to investigate current and prevent future environmental problems. A data centric philosophy is followed in the development of the system, i.e. the models are updated and extended to handle available data from different areas. The results generated by the models are summarized as easily understandable flow and risk maps that can be opened in GIS programs and used in environmental assessments by experts. Substances investigated with the system include explosives and metals such as lead, and both surface and groundwater dominated areas can be simulated. The YMPYRÄ framework is composed of a three dimensional soil and groundwater flow model, several solute transport models and an uncertainty assessment system. Solute transport models in the framework include particle based, stream tube and finite volume based approaches. The models can be used to simulate solute dissolution from source area, transport in the unsaturated layers to groundwater and finally migration in groundwater to water extraction wells and springs. The models can be used to simulate advection, dispersion, equilibrium adsorption on soil particles, solubility and dissolution from solute phase and dendritic solute decay chains. Correct numerical solutions were confirmed by comparing results to analytical 1D and 2D solutions and by comparing the numerical solutions to each other. 
The particle-based and stream-tube solute transport models proved useful because they complement the traditional finite-volume approach, which in certain circumstances produced numerical dispersion, owing to the piecewise solution of the governing equations on computational grids, and which required computationally intensive and, in some cases, unstable iterative solutions. The YMPYRÄ framework is being developed by the WaterHope, Gain Oy and SITO Oy consulting companies and is funded by the FDF.
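A verification of the kind described above can be illustrated by comparing the plume moments of a random-walk particle-tracking model against the analytical 1D advection-dispersion solution (mean v·t and variance 2·D·t for a pulse release). A minimal sketch with arbitrary parameters, not the YMPYRÄ code:

```python
import random

# 1D random-walk particle tracking for advection-dispersion (pulse release).
# Illustrative only -- not the YMPYRA models; parameters are arbitrary.
V, D, DT, T, N = 1.0, 0.1, 0.05, 10.0, 5000   # velocity, dispersion, step, time, particles

random.seed(1)
steps = int(T / DT)
positions = []
for _ in range(N):
    x = 0.0
    for _ in range(steps):
        # advective step plus Gaussian dispersive step of std sqrt(2*D*DT)
        x += V * DT + random.gauss(0.0, (2.0 * D * DT) ** 0.5)
    positions.append(x)

mean = sum(positions) / N
var = sum((p - mean) ** 2 for p in positions) / N
print(f"mean = {mean:.3f} (analytical {V*T:.3f}), "
      f"variance = {var:.3f} (analytical {2*D*T:.3f})")
```

Agreement of the first two plume moments with the analytical solution is a quick sanity check; full verification, as in the text above, compares concentration profiles between independent solvers.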
Importance of matrix inelastic deformations in the initial response of magnetic elastomers.
Sánchez, Pedro A; Gundermann, Thomas; Dobroserdova, Alla; Kantorovich, Sofia S; Odenbach, Stefan
2018-03-14
Being able to predict and understand the behaviour of soft magnetic materials paves the way to their technological applications. In this study we analyse the magnetic response of soft magnetic elastomers (SMEs) with magnetically hard particles. We present experimental evidence of a difference between the first and next magnetisation loops exhibited by these SMEs, which depends non-monotonically on the interplay between the rigidity of the polymer matrix, its mechanical coupling with the particles, and the magnetic interactions in the system. In order to explain the microstructural mechanism behind this behaviour, we used a minimal computer simulation model whose results evidence the importance of irreversible matrix deformations due to both translations and rotations of the particles. To confirm the simulation findings, computed tomography (CT) was used. We conclude that the initial exposure to the field triggers the inelastic matrix relaxation in the SMEs, as particles attempt to reorient. However, once the necessary degree of freedom is achieved, both the rotations and the magnetisation behaviour become stationary. We expect this scenario not only to be limited to the materials studied here, but also to apply to a broader class of hybrid SMEs.
A phantom axon setup for validating models of action potential recordings.
Rossel, Olivier; Soulier, Fabien; Bernard, Serge; Guiraud, David; Cathébras, Guy
2016-08-01
Electrode designs and strategies for electroneurogram recordings are often tested first by computer simulations and then in animal models, but they are rarely implanted for long-term evaluation in humans. The models show that the amplitude of the potential at the surface of an axon is higher in front of the nodes of Ranvier than at the internodes; however, this has not been investigated through in vivo measurements. An original experimental method is presented to emulate a single-fiber action potential in an infinite conductive volume, allowing the potential of an axon to be recorded at both the nodes of Ranvier and the internodes, for a wide range of electrode-to-fiber radial distances. The paper particularly investigates the differences in the action potential amplitude along the longitudinal axis of an axon. At a short radial distance, the action potential amplitude measured in front of a node of Ranvier is two times larger than in the middle of two nodes. Moreover, farther from the phantom axon, the measured action potential amplitude is almost constant along the longitudinal axis. The results of this new method confirm the computer simulations, with a correlation of 97.6%.
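The node/internode amplitude contrast and its decay with radial distance can be illustrated with a crude monopole model: point current sources at the nodes of Ranvier in an infinite homogeneous volume, V = I / (4πσR). This is an illustrative sketch only, not the phantom's actual distributed-current source:

```python
import math

# Extracellular potential of current sources placed at nodes of Ranvier,
# in an infinite homogeneous volume conductor: V = I / (4*pi*sigma*R).
# Illustrative monopole sketch only -- not the phantom-axon source itself;
# sigma, node spacing and current are arbitrary placeholder values.

SIGMA = 1.0           # conductivity, S/m (assumed)
L_INTERNODE = 1.0e-3  # node spacing, m (assumed)
NODES = [i * L_INTERNODE for i in range(-5, 6)]  # 11 nodes on the x axis
I_NODE = 1.0e-9       # current per node, A (assumed)

def potential(x, r):
    """Potential at longitudinal position x and radial distance r."""
    return sum(I_NODE / (4 * math.pi * SIGMA * math.hypot(x - xn, r))
               for xn in NODES)

for r in (0.05e-3, 2.0e-3):                  # near vs. far radial distance
    v_node = potential(0.0, r)               # in front of the central node
    v_mid = potential(0.5e-3, r)             # midway between two nodes
    print(f"r = {r*1e3:4.2f} mm: node/internode amplitude ratio = {v_node/v_mid:.2f}")
```

The near-field ratio in this crude sketch overshoots the measured factor of roughly two; reproducing the measured value requires the distributed transmembrane current profile that the phantom axon emulates. The far-field ratio, however, correctly tends to one.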
Wind Farm LES Simulations Using an Overset Methodology
NASA Astrophysics Data System (ADS)
Ananthan, Shreyas; Yellapantula, Shashank
2017-11-01
Accurate simulation of wind farm wakes under realistic atmospheric inflow conditions and complex terrain requires modeling a wide range of length and time scales. The computational domain can span several kilometers while requiring mesh resolutions of O(10^-6) m to adequately resolve the boundary layer on the blade surface. The overset mesh methodology offers an attractive option for addressing this disparate range of length scales; it allows embedding body-conforming meshes around turbine geometries within nested wake-capturing meshes of the varying resolutions necessary to accurately model the inflow turbulence and the resulting wake structures. Dynamic overset hole-cutting algorithms permit the relative mesh motion that allows this nested mesh structure to track unsteady inflow direction changes, turbine control changes (yaw and pitch), and wake propagation. An LES model with overset meshes for localized mesh refinement is used to analyze wind farm wakes and performance, and is compared with local mesh refinement using non-conformal (hanging-node) unstructured meshes. Turbine structures will be modeled using both actuator-line approaches and fully resolved geometries to test the efficacy of overset methods for wind farm applications. Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations - the Office of Science and the National Nuclear Security Administration.
NASA Astrophysics Data System (ADS)
Takeda, Kazuaki; Kojima, Yohei; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. However, residual inter-chip interference (ICI) remains after MMSE-FDE, and this degrades the BER performance. Recently, we showed that frequency-domain ICI cancellation can bring the BER performance close to the theoretical lower bound. To further improve the BER performance, transmit antenna diversity techniques are effective. Cyclic delay transmit diversity (CDTD) can increase the number of equivalent paths and hence achieve a large frequency diversity gain. Space-time transmit diversity (STTD) can obtain antenna diversity gain from space-time coding and achieve a better BER performance than CDTD. The objective of this paper is to show that the BER performance degradation of CDTD is mainly due to the residual ICI and that the introduction of ICI cancellation gives almost the same BER performance as STTD. This study provides an important result: CDTD has the advantage of providing a higher throughput than STTD. This is confirmed by computer simulation; the simulation results show that CDTD can achieve higher throughput than STTD when ICI cancellation is introduced.
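The MMSE-FDE step itself can be sketched in a few lines: per-frequency weights W_k = H_k* / (|H_k|² + σ²/E_s) applied to a cyclic-prefixed single-carrier block. This minimal example omits spreading, transmit diversity and ICI cancellation; the channel taps and noise variance are arbitrary assumptions:

```python
import cmath, random

# Single-carrier block transmission with MMSE frequency-domain equalization.
# Minimal sketch of the FDE step only (no transmit diversity, no ICI
# cancellation); channel taps and noise variance are arbitrary.

def dft(x, inverse=False):
    """Naive O(N^2) DFT, sufficient for this tiny demonstration block."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[m] * cmath.exp(s * 2j * cmath.pi * k * m / n) for m in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

N = 16
random.seed(7)
bits = [random.choice((-1.0, 1.0)) for _ in range(N)]        # BPSK block
h = [0.8, 0.5, 0.2]                                          # channel impulse response
H = dft(h + [0.0] * (N - len(h)))                            # channel frequency response

# A cyclic prefix makes the channel circular, so we can act in the frequency domain:
Y = [Hk * Xk for Hk, Xk in zip(H, dft(bits))]                # received block (noiseless)

sigma2 = 0.01                                                # assumed noise variance / Es
W = [Hk.conjugate() / (abs(Hk) ** 2 + sigma2) for Hk in H]   # MMSE-FDE weights
est = dft([Wk * Yk for Wk, Yk in zip(W, Y)], inverse=True)
decided = [1.0 if z.real >= 0 else -1.0 for z in est]
print("bit errors:", sum(b != d for b, d in zip(bits, decided)))
```

With zero-forcing weights (1/H_k) a spectral null would amplify noise; the σ² term in the MMSE weights regularizes exactly those frequencies, which is the source of the residual ICI the paper then cancels.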
Closed loop statistical performance analysis of N-K knock controllers
NASA Astrophysics Data System (ADS)
Peyton Jones, James C.; Shayestehmanesh, Saeed; Frey, Jesse
2017-09-01
The closed loop performance of engine knock controllers cannot be rigorously assessed from single experiments or simulations because knock behaves as a random process and therefore the response belongs to a random distribution also. In this work a new method is proposed for computing the distributions and expected values of the closed loop response, both in steady state and in response to disturbances. The method takes as its input the control law, and the knock propensity characteristic of the engine which is mapped from open loop steady state tests. The method is applicable to the 'n-k' class of knock controllers in which the control action is a function only of the number of cycles n since the last control move, and the number k of knock events that have occurred in this time. A Cumulative Summation (CumSum) based controller falls within this category, and the method is used to investigate the performance of the controller in a deeper and more rigorous way than has previously been possible. The results are validated using onerous Monte Carlo simulations, which confirm both the validity of the method and its high computational efficiency.
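The role of Monte Carlo validation can be illustrated with a much simpler event-based controller (advance slightly on quiet cycles, retard strongly on knock), whose steady-state knock rate has the closed form a/(a+r). This is not the paper's CumSum n-k controller, and the logistic knock-propensity curve below is an arbitrary stand-in for the mapped open-loop characteristic:

```python
import math, random

# Monte Carlo of a simple event-based knock controller (advance a little on
# quiet cycles, retard a lot on knock).  This is NOT the paper's CumSum n-k
# controller -- just an illustration of closed-loop statistical behaviour.
# The logistic knock-propensity curve is an assumed stand-in for the
# characteristic mapped from open-loop steady-state tests.

def knock_probability(theta, theta0=20.0, s=2.0):
    """Probability of a knock event at spark advance theta (deg)."""
    return 1.0 / (1.0 + math.exp(-(theta - theta0) / s))

ADV, RET = 0.05, 1.0          # advance / retard steps, deg
random.seed(3)
theta, knocks, cycles = 10.0, 0, 200_000
for _ in range(cycles):
    if random.random() < knock_probability(theta):
        theta -= RET          # knock event: large retard
        knocks += 1
    else:
        theta += ADV          # quiet cycle: small advance
rate = knocks / cycles
print(f"knock rate = {rate:.4f}  (drift-balance equilibrium a/(a+r) = {ADV/(ADV+RET):.4f})")
```

Because theta is bounded in steady state, the advance and retard drifts must balance, a(1-p) = r·p, giving p = a/(a+r) independent of the propensity curve; the simulation response around that mean is a distribution, which is exactly why single runs cannot characterize closed-loop performance.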
NASA Technical Reports Server (NTRS)
Suarez, Carlos J.; Smith, Brooke C.; Malcolm, Gerald N.
1993-01-01
Free-to-roll wind tunnel tests were conducted and a computer simulation exercise was performed in an effort to investigate in detail the mechanism of wing rock on a configuration that consisted of a highly slender forebody and a 78 deg swept delta wing. In the wind tunnel test, the roll angle and wing surface pressures were measured during the wing rock motion. A limit cycle oscillation was observed for angles of attack between 22 deg and 30 deg. In general, the wind tunnel test confirmed that the main flow phenomena responsible for the wing-body-tail wing rock are the interactions between the forebody and the wing vortices. The variation of roll acceleration (determined from the second derivative of the roll angle time history) with roll angle clearly showed the energy balance necessary to sustain the limit cycle oscillation. Pressure measurements on the wing revealed the hysteresis of the wing rock process. First-, second- and nth-order models for the aerodynamic damping were developed and examined with a one-degree-of-freedom computer simulation. Very good agreement with the observed behavior from the wind tunnel was obtained.
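The roll-acceleration extraction mentioned above is a finite-difference operation on the sampled roll-angle history. A sketch using central differences on a synthetic limit-cycle signal (a stand-in, not the authors' measured data or processing chain):

```python
import math

# Roll acceleration from a sampled roll-angle history via central differences,
# as one would post-process free-to-roll wing-rock data.  The sine history is
# a synthetic stand-in for a measured limit-cycle oscillation.

DT = 0.001
t = [i * DT for i in range(2001)]
phi = [30.0 * math.sin(2.0 * ti) for ti in t]   # roll angle, deg; omega = 2 rad/s

def second_derivative(y, dt):
    """Central-difference d2y/dt2 at the interior samples of y."""
    return [(y[i - 1] - 2.0 * y[i] + y[i + 1]) / dt**2 for i in range(1, len(y) - 1)]

acc = second_derivative(phi, DT)
# Analytically, d2(phi)/dt2 = -omega^2 * phi = -4 * phi for this signal.
print(f"acc at mid-sample = {acc[999]:.4f}, analytical = {-4.0 * phi[1000]:.4f}")
```

Plotting this acceleration against roll angle yields the loop whose enclosed area expresses the energy balance sustaining the limit cycle.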
The Compositional Dependence of the Microstructure and Properties of CMSX-4 Superalloys
NASA Astrophysics Data System (ADS)
Yu, Hao; Xu, Wei; Van Der Zwaag, Sybrand
2018-01-01
The degradation of creep resistance in Ni-based single-crystal superalloys is essentially ascribed to their microstructural evolution. Yet there is a lack of work that manages to predict (even qualitatively) the effect of alloying element concentrations on the rate of microstructural degradation. In this research, a computational model is presented that connects the rafting kinetics of Ni superalloys to their chemical composition by combining thermodynamic calculations and a modified microstructural model. To simulate the evolution of key microstructural parameters during creep, the isotropic coarsening rate and the γ/γ' misfit stress are defined as composition-related parameters, and the effects of service temperature, time, and applied stress are taken into consideration. Two commercial superalloys are selected as the reference alloys, and the kinetics of the rafting process and the corresponding microstructural parameters are simulated and compared with experimental observations reported in the literature. The results confirm that our physical model, which requires no fitting parameters, manages to predict (semi-quantitatively) the microstructural parameters for different service conditions, as well as the effects of alloying element concentrations. The model can contribute to the computational design of new Ni-based superalloys.
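Coarsening kinetics of the kind such models track are often written in LSW form, r³ − r0³ = K(T)·t, with an Arrhenius rate constant K(T) = K0·exp(−Q/(R·T)). A sketch with placeholder constants (assumed for illustration, not the calibrated, composition-dependent values of the model above):

```python
import math

# LSW-type precipitate coarsening, r^3 - r0^3 = K(T) * t, with an Arrhenius
# rate constant K(T) = K0 * exp(-Q / (R*T)).  K0 and Q below are assumed
# illustrative placeholders, not the calibrated values of the model above.

R_GAS = 8.314    # gas constant, J/(mol K)
K0 = 1.0e8       # pre-exponential, um^3/h (assumed)
Q = 2.7e5        # activation energy, J/mol (assumed)

def radius(r0, T, t_hours):
    """Mean gamma-prime size (um) after t_hours at temperature T (K)."""
    K = K0 * math.exp(-Q / (R_GAS * T))
    return (r0 ** 3 + K * t_hours) ** (1.0 / 3.0)

r0 = 0.45  # initial precipitate size, um
for T in (1223.0, 1323.0):   # e.g. 950 C vs 1050 C service temperatures
    print(f"T = {T:.0f} K: r(1000 h) = {radius(r0, T, 1000.0):.3f} um")
```

The exponential temperature sensitivity is why modest service-temperature differences dominate the degradation rate; in the model above, composition enters through the equivalent of K0 and Q.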
Exact and efficient simulation of concordant computation
NASA Astrophysics Data System (ADS)
Cable, Hugo; Browne, Daniel E.
2015-11-01
Concordant computation is a circuit-based model of quantum computation for mixed states which assumes that all correlations within the register are discord-free (i.e. the correlations are essentially classical) at every step of the computation. The question of whether concordant computation always admits efficient simulation by a classical computer was first considered by Eastin in arXiv:quant-ph/1006.4402v1, where an answer in the affirmative was given for circuits consisting only of one- and two-qubit gates. Building on this work, we develop the theory of classical simulation of concordant computation. We present a new framework for understanding such computations, argue that a larger class of concordant computations admits efficient simulation, and provide alternative proofs for the main results of arXiv:quant-ph/1006.4402v1, with an emphasis on the exactness of simulation, which is crucial for this model. We include a detailed analysis of the arithmetic complexity of solving equations in the simulation, as well as extensions to larger gates and qudits. We explore the limitations of our approach, and discuss the challenges faced in developing efficient classical simulation algorithms for all concordant computations.
Computer simulations of liquid crystals: Defects, deformations and dynamics
NASA Astrophysics Data System (ADS)
Billeter, Jeffrey Lee
1999-11-01
Computer simulations play an increasingly important role in investigating fundamental issues in the physics of liquid crystals. Presented here are the results of three projects which utilize the unique power of simulations to probe questions which neither theory nor experiment can adequately answer. Throughout, we use the (generalized) Gay-Berne model, a widely used phenomenological potential which captures the essential features of the anisotropic mesogen shapes and interactions. First, we used a Molecular Dynamics simulation with 65536 Gay-Berne particles to study the behavior of topological defects in a quench from the isotropic to the nematic phase. Twist disclination loops were the dominant defects, and we saw evidence for dynamical scaling. We observed the loops separating, combining and collapsing, and we also observed numerous non-singular type-1 lines which appeared to be intimately involved in many of the loop processes. Second, we used a Molecular Dynamics simulation of a sphere embedded in a system of 2048 Gay-Berne particles to study the effects of radial anchoring of the molecules at the sphere's surface. A Saturn-ring defect configuration was observed, and the ring caused a driven sphere (modelling the falling-ball experiment) to experience an increased resistance as it moved through the nematic. Deviations from a linear relationship between the driving force and the terminal speed are attributed to distortions of the Saturn ring which we observed. The existence of the Saturn ring confirms theoretical predictions for small spheres. Finally, we constructed a model for wedge-shaped molecules and used a linear-response approach in a Monte Carlo simulation to investigate the flexoelectric behavior of a system of 256 such wedges. Novel potential models as well as novel analytical and visualization techniques were developed for these projects. Once again, the emphasis throughout was to investigate questions which simulations alone can adequately answer.
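A standard diagnostic in such simulations is the nematic order parameter S, the largest eigenvalue of the tensor Q = (3/2)⟨u uᵀ⟩ − (1/2)I built from the molecular orientation vectors. A sketch on synthetic orientations (not data from these runs):

```python
import numpy as np

# Nematic order parameter from molecular orientations: S is the largest
# eigenvalue of the Q tensor, Q = (3/2)<u u^T> - (1/2) I.  A standard
# diagnostic in Gay-Berne simulations; the orientations below are synthetic.

def order_parameter(u):
    """u: (N, 3) array of unit orientation vectors.  Returns scalar S."""
    q = 1.5 * np.einsum('ni,nj->ij', u, u) / len(u) - 0.5 * np.eye(3)
    return np.linalg.eigvalsh(q)[-1]    # eigvalsh returns ascending eigenvalues

rng = np.random.default_rng(0)

aligned = np.tile([0.0, 0.0, 1.0], (1000, 1))            # perfect nematic
isotropic = rng.normal(size=(1000, 3))
isotropic /= np.linalg.norm(isotropic, axis=1, keepdims=True)

print(f"S (aligned)   = {order_parameter(aligned):.3f}")   # -> 1.000
print(f"S (isotropic) = {order_parameter(isotropic):.3f}") # near 0
```

Using the eigenvalue of Q rather than ⟨P2(cosθ)⟩ against a fixed axis avoids having to know the director in advance; the corresponding eigenvector is the director itself.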
Plazinska, Anita; Plazinski, Wojciech
2017-05-02
The β2-adrenergic receptor (β2-AR) is one of the most studied G-protein-coupled receptors. When interacting with ligand molecules, it exhibits a binding characteristic that is strongly dependent on ligand stereoconfiguration. In particular, many experimental and theoretical studies confirmed that stereoisomers of an important β2-AR agonist, fenoterol, are associated with diverse mechanisms of binding and activation of β2-AR. The objective of the present study was to explore the stereoselective binding of fenoterol to β2-AR through the application of an advanced computational methodology based on enhanced-sampling molecular dynamics simulations and potentials of interactions tailored to investigate the stereorecognition effects. The results remain in very good, quantitative agreement with the experimental data (measured in the context of ligand-receptor affinities and their dependence on the temperature), which provides an additional validation for the applied computational protocols. Additionally, our results contribute to the understanding of stereoselective agonist binding by β2-AR. Although the significant role of the N293(6.55) residue is confirmed, we additionally show that stereorecognition does not depend solely on the N293-ligand interactions; the stereoselective effects rely on the co-operation of several residues located on both the 6th and 7th transmembrane domains and on the extracellular loops. The magnitude and character of the contributions of these residues may be very diverse and result in either enhancing or reducing the stereoselective effects. The same is true when considering the enthalpic and entropic contributions to the binding free energies, which also are dependent on the ligand stereoconfiguration.
Borojeni, Azadeh A.T.; Frank-Ito, Dennis O.; Kimbell, Julia S.; Rhee, John S.; Garcia, Guilherme J. M.
2016-01-01
Virtual surgery planning based on computational fluid dynamics (CFD) simulations has the potential to improve surgical outcomes for nasal airway obstruction (NAO) patients, but the benefits of virtual surgery planning must outweigh the risks of radiation exposure. Cone beam computed tomography (CBCT) scans represent an attractive imaging modality for virtual surgery planning due to lower costs and lower radiation exposures compared with conventional CT scans. However, to minimize the radiation exposure, the CBCT sinusitis protocol sometimes images only the nasal cavity, excluding the nasopharynx. The goal of this study was to develop an idealized nasopharynx geometry for accurate representation of outlet boundary conditions when the nasopharynx geometry is unavailable. Anatomically-accurate models of the nasopharynx created from thirty CT scans were intersected with planes rotated at different angles to obtain an average geometry. Cross sections of the idealized nasopharynx were approximated as ellipses with cross-sectional areas and aspect ratios equal to the average in the actual patient-specific models. CFD simulations were performed to investigate whether nasal airflow patterns were affected when the CT-based nasopharynx was replaced by the idealized nasopharynx in 10 NAO patients. Despite the simple form of the idealized geometry, all biophysical variables (nasal resistance, airflow rate, and heat fluxes) were very similar in the idealized vs. patient-specific models. The results confirmed the expectation that the nasopharynx geometry has a minimal effect in the nasal airflow patterns during inspiration. The idealized nasopharynx geometry will be useful in future CFD studies of nasal airflow based on medical images that exclude the nasopharynx. PMID:27525807
Measuring excess free energies of self-assembled membrane structures.
Norizoe, Yuki; Daoulas, Kostas Ch; Müller, Marcus
2010-01-01
Using computer simulation of a solvent-free, coarse-grained model for amphiphilic membranes, we study the excess free energy of hourglass-shaped connections (i.e., stalks) between two apposed bilayer membranes. In order to calculate the free energy by simulation in the canonical ensemble, we reversibly transfer two apposed bilayers into a configuration with a stalk in three steps. First, we gradually replace the intermolecular interactions by an external, ordering field. The latter is chosen such that the structure of the non-interacting system in this field closely resembles the structure of the original, interacting system in the absence of the external field. The absence of structural changes along this path suggests that it is reversible; a fact which is confirmed by expanded-ensemble simulations. Second, the external, ordering field is changed so as to transform the non-interacting system from the apposed-bilayer structure to two bilayers connected by a stalk. The final external field is chosen such that the structure of the non-interacting system resembles the structure of the stalk in the interacting system without a field. On the third branch of the transformation path, we reversibly replace the external, ordering field by non-bonded interactions. Using expanded-ensemble techniques, the free energy change along this reversible path can be obtained with an accuracy of 10^-3 k_B T per molecule in the nVT ensemble. Calculating the chemical potential, we obtain the free energy of a stalk in the grand-canonical ensemble, and employing semi-grand-canonical techniques, we calculate the change of the excess free energy upon altering the molecular architecture. This computational strategy can be applied to compute the free energy of self-assembled phases in lipid and copolymer systems, and the excess free energy of defects or interfaces.
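The multi-branch reversible path above is a thermodynamic-integration construction, ΔF = ∫⟨∂U/∂λ⟩ dλ. On a toy system where the answer is known in closed form, a single 1D harmonic degree of freedom switched from stiffness k1 to k2 (ΔF = (k_B T/2)·ln(k2/k1)), the idea reduces to a few lines:

```python
import math

# Thermodynamic integration along a reversible path, illustrated on a 1D
# harmonic degree of freedom: U(x; lam) = 0.5*k(lam)*x^2 with
# k(lam) = k1 + lam*(k2 - k1).  Then <dU/dlam> = 0.5*(k2 - k1)*<x^2>
# = 0.5*(k2 - k1)*kT/k(lam), and the exact answer is dF = (kT/2)*ln(k2/k1).
# A toy analogue of the membrane path above, not the membrane model itself.

KT, K1, K2 = 1.0, 1.0, 4.0

def dU_dlam_avg(lam):
    """Canonical average <dU/dlam> at coupling lam (closed form here)."""
    k = K1 + lam * (K2 - K1)
    return 0.5 * (K2 - K1) * KT / k

# Composite trapezoidal integration over lambda in [0, 1]
n = 1000
dF = sum(0.5 * (dU_dlam_avg(i / n) + dU_dlam_avg((i + 1) / n)) / n for i in range(n))
exact = 0.5 * KT * math.log(K2 / K1)
print(f"TI estimate = {dF:.6f}, exact = {exact:.6f}")
```

In the membrane study the averages along the path are sampled by simulation rather than known analytically, and expanded-ensemble moves serve to traverse λ reversibly; the integral structure is the same.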
2017-01-01
The accurate identification of the specific points of interaction between G protein-coupled receptor (GPCR) oligomers is essential for the design of receptor ligands targeting oligomeric receptor targets. A coarse-grained molecular dynamics computer simulation approach would provide a compelling means of identifying these specific protein–protein interactions and could be applied both for known oligomers of interest and as a high-throughput screen to identify novel oligomeric targets. However, to be effective, this in silico modeling must provide accurate, precise, and reproducible information. This has been achieved recently in numerous biological systems using an ensemble-based all-atom molecular dynamics approach. In this study, we describe an equivalent methodology for ensemble-based coarse-grained simulations. We report the performance of this method when applied to four different GPCRs known to oligomerize using error analysis to determine the ensemble size and individual replica simulation time required. Our measurements of distance between residues shown to be involved in oligomerization of the fifth transmembrane domain from the adenosine A2A receptor are in very good agreement with the existing biophysical data and provide information about the nature of the contact interface that cannot be determined experimentally. Calculations of distance between rhodopsin, CXCR4, and β1AR transmembrane domains reported to form contact points in homodimers correlate well with the corresponding measurements obtained from experimental structural data, providing an ability to predict contact interfaces computationally. Interestingly, error analysis enables identification of noninteracting regions. Our results confirm that GPCR interactions can be reliably predicted using this novel methodology. PMID:28383913
Prediction of the properties of PVD/CVD coatings with the use of FEM analysis
NASA Astrophysics Data System (ADS)
Śliwa, Agata; Mikuła, Jarosław; Gołombek, Klaudiusz; Tański, Tomasz; Kwaśny, Waldemar; Bonek, Mirosław; Brytan, Zbigniew
2016-12-01
The aim of this paper is to present the results of the prediction of the properties of PVD/CVD coatings with the use of finite element method (FEM) analysis. The possibility of employing the FEM in the evaluation of stress distribution in multilayer Ti/Ti(C,N)/CrN, Ti/Ti(C,N)/(Ti,Al)N, Ti/(Ti,Si)N/(Ti,Si)N, and Ti/DLC/DLC coatings, taking into account their deposition conditions on magnesium alloys, is discussed in the paper. The difference in internal stresses in the zone between the coating and the substrate is caused, first of all, by the difference between the mechanical and thermal properties of the substrate and the coating, and also by the structural changes that occur in these materials during the fabrication process, especially during the cooling that follows PVD and CVD treatment. The experimental values of the stresses were determined from X-ray diffraction patterns and correspond to the modelled values, which in turn can be used to confirm the correctness of the accepted mathematical model. An FEM model was established for the purpose of building a computer simulation of the internal stresses in the coatings. The accuracy of the FEM model was verified by comparing the results of the computer simulation of the stresses with the experimental results. The computer simulation of the stresses was carried out in the ANSYS environment. Structure observations, chemical composition measurements, and mechanical property characterisations of the investigated materials have been carried out to give a background for the discussion of the results recorded during the modelling process.
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The debonding of a skin/stringer specimen subjected to tension was studied using three-dimensional volume element modeling and computational fracture mechanics. Mixed mode strain energy release rates were calculated from finite element results using the virtual crack closure technique. The simulations revealed an increase in total energy release rate in the immediate vicinity of the free edges of the specimen. Correlation of the computed mixed-mode strain energy release rates along the delamination front contour with a two-dimensional mixed-mode interlaminar fracture criterion suggested that in spite of peak total energy release rates at the free edge the delamination would not advance at the edges first. The qualitative prediction of the shape of the delamination front was confirmed by X-ray photographs of a specimen taken during testing. The good correlation between prediction based on analysis and experiment demonstrated the efficiency of a mixed-mode failure analysis for the investigation of skin/stiffener separation due to delamination in the adherents. The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to three-point bending is also demonstrated. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front was used to capture the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlations of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherents.
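The virtual crack closure technique used above combines the nodal force at the delamination tip with the relative displacement one element behind it; in a 2D model the mode-I rate is G_I = F_z·Δw / (2·Δa·b). A sketch of that arithmetic with arbitrary illustrative values (not the specimen's data):

```python
# Virtual crack closure technique (VCCT), 2D form: the mode-I energy release
# rate from the nodal force at the delamination tip and the relative opening
# displacement one element behind it:  G_I = F_z * dw / (2 * da * b).
# All numeric values below are arbitrary illustration, not the paper's model.

def vcct_mode_I(f_z, dw, da, b):
    """f_z: tip nodal force normal to the crack plane (N); dw: relative
    opening displacement one element behind the tip (m); da: element length
    at the front (m); b: width associated with the node (m)."""
    return f_z * dw / (2.0 * da * b)

G_I = vcct_mode_I(f_z=12.0, dw=4.0e-6, da=0.25e-3, b=1.0e-3)
print(f"G_I = {G_I:.1f} J/m^2")   # -> 96.0 J/m^2
```

Repeating this at each node pair along the delamination front, with the corresponding sliding and tearing components, yields the mixed-mode distribution that is then compared against the interlaminar fracture criterion.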
Real-time simulation of the TF30-P-3 turbofan engine using a hybrid computer
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Bruton, W. M.
1974-01-01
A real-time, hybrid-computer simulation of the TF30-P-3 turbofan engine was developed. The simulation was primarily analog in nature but used the digital portion of the hybrid computer to perform bivariate function generation associated with the performance of the engine's rotating components. FORTRAN listings and analog patching diagrams are provided. The hybrid simulation was controlled by a digital computer programmed to simulate the engine's standard hydromechanical control. Both steady-state and dynamic data obtained from the digitally controlled engine simulation are presented. Hybrid simulation data are compared with data obtained from a digital simulation provided by the engine manufacturer. The comparisons indicate that the real-time hybrid simulation adequately matches the baseline digital simulation.
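The bivariate function generation performed on the digital side corresponds to table lookup with bilinear interpolation over component performance maps. A sketch with a tiny synthetic map (a placeholder, not TF30-P-3 component data):

```python
import bisect

# Bivariate function generation as used for engine component maps: table
# lookup with bilinear interpolation.  The tiny map below is a synthetic
# placeholder (f(x, y) = x + 2*y), not TF30-P-3 data.

XS = [0.0, 1.0, 2.0]
YS = [0.0, 1.0]
TABLE = [[0.0, 2.0],      # TABLE[i][j] = f(XS[i], YS[j])
         [1.0, 3.0],
         [2.0, 4.0]]      # f is linear here, so interpolation is exact

def bilinear(x, y):
    """Bilinearly interpolated table value at (x, y), clamped to the grid."""
    i = min(max(bisect.bisect_right(XS, x) - 1, 0), len(XS) - 2)
    j = min(max(bisect.bisect_right(YS, y) - 1, 0), len(YS) - 2)
    tx = (x - XS[i]) / (XS[i + 1] - XS[i])
    ty = (y - YS[j]) / (YS[j + 1] - YS[j])
    z00, z10 = TABLE[i][j], TABLE[i + 1][j]
    z01, z11 = TABLE[i][j + 1], TABLE[i + 1][j + 1]
    return (z00 * (1 - tx) * (1 - ty) + z10 * tx * (1 - ty)
            + z01 * (1 - tx) * ty + z11 * tx * ty)

print(bilinear(0.5, 0.5))   # f is linear, so this returns exactly 1.5
```

Offloading exactly this kind of two-variable map evaluation to the digital machine, while the continuous dynamics stayed on the analog side, is what made the hybrid arrangement run in real time.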
Serino, Andrea; Canzoneri, Elisa; Marzolla, Marilena; di Pellegrino, Giuseppe; Magosso, Elisa
2015-01-01
Stimuli from different sensory modalities occurring on or close to the body are integrated in a multisensory representation of the space surrounding the body, i.e., peripersonal space (PPS). PPS dynamically modifies depending on experience, e.g., it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioral approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating audio-tactile representation in the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e., selective multisensory response for stimuli occurring close to the hand. We used the neural network model to simulate the effects of a tool-use training. In terms of sensory inputs, tool use was conceptualized as a concurrent tactile stimulation from the hand, due to holding the tool, and an auditory stimulation from the far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons responded also to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from the far space is sufficient to extend PPS, such as after tool-use. Such prediction was confirmed by a behavioral experiment, where we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand stimulation and auditory-far stimulation in a group of healthy volunteers. Control experiments both in simulation and behavioral settings showed that the same amount of tactile and auditory inputs administered out of synchrony did not change PPS representation. 
We conclude by proposing a simple, biologically plausible model to explain plasticity in PPS representation after tool use, which is supported by computational and behavioral data. PMID:25698947
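The pairing mechanism can be caricatured in a few lines: a multisensory unit with distance-tagged auditory weights, where synchronous tactile-plus-far-auditory trials strengthen the far weight via a Hebbian update and thereby extend the unit's responsive boundary. A toy sketch only, not the authors' network model:

```python
# Caricature of the PPS-extension mechanism described above: a multisensory
# unit sums distance-tagged auditory weights; synchronous pairing of a tactile
# input with auditory input from a far position strengthens that position's
# weight (Hebbian co-activation rule), extending the responsive boundary.
# A toy sketch with assumed parameters, not the authors' neural network model.

DISTS = list(range(10))                     # auditory source distance, a.u.
w = [1.0 if d < 4 else 0.05 for d in DISTS] # initial weights: PPS out to d = 3
THRESH = 0.5

def boundary(weights):
    """Farthest distance still evoking a supra-threshold response."""
    return max((d for d, wd in zip(DISTS, weights) if wd > THRESH), default=-1)

print("PPS boundary before training:", boundary(w))   # -> 3

# "Tool use": 200 trials of synchronous tactile + far-auditory input at d = 7
ETA, TACTILE, FAR_D = 0.01, 1.0, 7
for _ in range(200):
    w[FAR_D] += ETA * TACTILE * 1.0        # Hebbian co-activation update

print("PPS boundary after training:", boundary(w))    # -> 7
```

The same toy also reproduces the control result: without the tactile co-activation term the update is zero, so asynchronous stimulation leaves the boundary unchanged.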
Fiévet, Julie B; Nidelet, Thibault; Dillmann, Christine; de Vienne, Dominique
2018-01-01
Heterosis, the superiority of hybrids over their parents for quantitative traits, represents a crucial issue in plant and animal breeding as well as evolutionary biology. Heterosis has given rise to countless genetic, genomic and molecular studies, but has rarely been investigated from the point of view of systems biology. We hypothesized that heterosis is an emergent property of living systems resulting from frequent concave relationships between genotypic variables and phenotypes, or between different phenotypic levels. We chose the enzyme-flux relationship as a model of the concave genotype-phenotype (GP) relationship, and showed that heterosis can be easily created in the laboratory. First, we reconstituted in vitro the upper part of glycolysis. We simulated genetic variability of enzyme activity by varying enzyme concentrations in test tubes. Mixing the content of "parental" tubes resulted in "hybrids," whose fluxes were compared to the parental fluxes. Frequent heterotic fluxes were observed, under conditions that were determined analytically and confirmed by computer simulation. Second, to test this model in a more realistic situation, we modeled the glycolysis/fermentation network in yeast by considering one input flux, glucose, and two output fluxes, glycerol and acetaldehyde. We simulated genetic variability by randomly drawing parental enzyme concentrations under various conditions, and computed the parental and hybrid fluxes using a system of differential equations. Again we found that a majority of hybrids exhibited positive heterosis for metabolic fluxes. Cases of negative heterosis were due to local convexity between certain enzyme concentrations and fluxes. In both approaches, heterosis was maximized when the parents were phenotypically close and when the distributions of parental enzyme concentrations were contrasted and constrained. 
These conclusions are not restricted to metabolic systems: they only depend on the concavity of the GP relationship, which is commonly observed at various levels of the phenotypic hierarchy, and could account for the pervasiveness of heterosis.
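The concavity argument can be illustrated with a toy enzyme-flux curve. The Michaelis-Menten-style function and all parameter values below are hypothetical, chosen only to show how a concave genotype-phenotype relationship yields mid-parent heterosis via Jensen's inequality:

```python
# Toy illustration of heterosis from a concave enzyme-flux relationship.
# The curve and all parameter values are hypothetical, chosen only to
# demonstrate the concavity argument, not taken from the study above.

def flux(e, v_max=10.0, k_m=2.0):
    """Concave flux as a function of enzyme concentration e."""
    return v_max * e / (k_m + e)

def midparent_heterosis(e1, e2):
    """Hybrid flux minus the mid-parent flux (positive => heterosis)."""
    hybrid = flux((e1 + e2) / 2.0)          # hybrid pools parental enzyme
    midparent = (flux(e1) + flux(e2)) / 2.0
    return hybrid - midparent

print(midparent_heterosis(0.5, 8.0))
```

Because the curve is strictly concave, the pooled hybrid enzyme level always maps to a flux at least as high as the average of the parental fluxes, with equality only for identical parents.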
Fractional modeling of viscoelasticity in 3D cerebral arteries and aneurysms
NASA Astrophysics Data System (ADS)
Yu, Yue; Perdikaris, Paris; Karniadakis, George Em
2016-10-01
We develop efficient numerical methods for fractional order PDEs, and employ them to investigate viscoelastic constitutive laws for arterial wall mechanics. Recent simulations using one-dimensional models [1] have indicated that fractional order models may offer a more powerful alternative for modeling the arterial wall response, exhibiting reduced sensitivity to parametric uncertainties compared with the integer-calculus-based models. Here, we study three-dimensional (3D) fractional PDEs that naturally model the continuous relaxation properties of soft tissue, and for the first time employ them to simulate fluid-structure interactions for patient-specific brain aneurysms. To deal with the high memory requirements and in order to accelerate the numerical evaluation of hereditary integrals, we employ a fast convolution method [2] that reduces the memory cost to O(log(N)) and the computational complexity to O(N log(N)). Furthermore, we combine the fast convolution with high-order backward differentiation to achieve third-order time integration accuracy. We confirm that in 3D viscoelastic simulations, the integer order models depend strongly on the relaxation parameters, while the fractional order models are less sensitive. As an application to long-time simulations in complex geometries, we also apply the method to modeling fluid-structure interaction of a 3D patient-specific compliant cerebral artery with an aneurysm. Taken together, our findings demonstrate that fractional calculus can be employed effectively in modeling complex behavior of materials in realistic 3D time-dependent problems if properly designed efficient algorithms are employed to overcome the extra memory requirements and computational complexity associated with the non-local character of fractional derivatives.
Fractional modeling of viscoelasticity in 3D cerebral arteries and aneurysms
Perdikaris, Paris; Karniadakis, George Em
2017-01-01
We develop efficient numerical methods for fractional order PDEs, and employ them to investigate viscoelastic constitutive laws for arterial wall mechanics. Recent simulations using one-dimensional models [1] have indicated that fractional order models may offer a more powerful alternative for modeling the arterial wall response, exhibiting reduced sensitivity to parametric uncertainties compared with the integer-calculus-based models. Here, we study three-dimensional (3D) fractional PDEs that naturally model the continuous relaxation properties of soft tissue, and for the first time employ them to simulate fluid–structure interactions for patient-specific brain aneurysms. To deal with the high memory requirements and in order to accelerate the numerical evaluation of hereditary integrals, we employ a fast convolution method [2] that reduces the memory cost to O(log(N)) and the computational complexity to O(N log(N)). Furthermore, we combine the fast convolution with high-order backward differentiation to achieve third-order time integration accuracy. We confirm that in 3D viscoelastic simulations, the integer order models depend strongly on the relaxation parameters, while the fractional order models are less sensitive. As an application to long-time simulations in complex geometries, we also apply the method to modeling fluid–structure interaction of a 3D patient-specific compliant cerebral artery with an aneurysm. Taken together, our findings demonstrate that fractional calculus can be employed effectively in modeling complex behavior of materials in realistic 3D time-dependent problems if properly designed efficient algorithms are employed to overcome the extra memory requirements and computational complexity associated with the non-local character of fractional derivatives. PMID:29104310
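For readers unfamiliar with hereditary integrals, the sketch below shows a naive Grünwald-Letnikov discretization of a fractional derivative. It keeps the full history (O(N) memory, O(N^2) work), which is exactly the cost the fast convolution method cited above is designed to avoid. This is generic illustrative code, not the authors' scheme:

```python
import numpy as np

# Direct Grunwald-Letnikov discretization of a fractional derivative of
# order alpha. This naive version keeps the entire history (O(N) memory,
# O(N^2) work); fast convolution schemes exist precisely to avoid this.

def gl_weights(alpha, n):
    """Recurrence w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def gl_derivative(f, h, alpha):
    """Fractional derivative of samples f on a uniform grid of step h."""
    n = len(f)
    w = gl_weights(alpha, n)
    d = np.empty(n)
    for i in range(n):
        # hereditary integral: every past sample contributes
        d[i] = np.dot(w[: i + 1], f[i::-1]) / h**alpha
    return d

# Sanity check: for alpha = 1 the scheme reduces to a backward difference,
# so the fractional derivative of f(t) = t is 1 away from the left end.
t = np.linspace(0.0, 1.0, 101)
d = gl_derivative(t, t[1] - t[0], 1.0)
```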
Conti, Michele; Van Loo, Denis; Auricchio, Ferdinando; De Beule, Matthieu; De Santis, Gianluca; Verhegghe, Benedict; Pirrelli, Stefano; Odero, Attilio
2011-06-01
To quantitatively evaluate the impact of carotid stent cell design on vessel scaffolding by using patient-specific finite element analysis of carotid artery stenting (CAS). The study was organized in 2 parts: (1) validation of a patient-specific finite element analysis of CAS and (2) evaluation of vessel scaffolding. Micro-computed tomography (CT) images of an open-cell stent deployed in a patient-specific silicone mock artery were compared with the corresponding finite element analysis results. This simulation was repeated for the closed-cell counterpart. In the second part, the stent strut distribution, as reflected by the inter-strut angles, was evaluated for both cell types in different vessel cross sections as a measure of scaffolding. The results of the patient-specific finite element analysis of CAS matched well with experimental stent deployment both qualitatively and quantitatively, demonstrating the reliability of the numerical approach. The measured inter-strut angles suggested that the closed-cell design provided superior vessel scaffolding compared to the open-cell counterpart. However, the full strut interconnection of the closed-cell design reduced the stent's ability to accommodate to the irregular eccentric profile of the vessel cross section, leading to a gap between the stent surface and the vessel wall. Even though this study was limited to a single stent design and one vascular anatomy, the study confirmed the capability of dedicated computer simulations to predict differences in scaffolding by open- and closed-cell carotid artery stents. These simulations have the potential to be used in the design of novel carotid stents or for procedure planning.
Extreme Scale Computing to Secure the Nation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, D L; McGraw, J R; Johnson, J R
2009-11-10
Since the dawn of modern electronic computing in the mid-1940s, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the Cold War, national security requirements continued to drive the development of high performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design, numerical weather prediction, as well as data-intensive applications such as cryptography and cybersecurity. U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S., and in fact, events following the end of the Cold War have driven an increase in the growth rate of computer performance at the high end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance and safety of U.S. nuclear weapons. In response to this change, the U.S.
Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program in response to the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'. This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program (sidebar: 'History of ASC Computing Program Computing Capability'). The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While this is an impressive accomplishment, the continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today. In particular, continued observance and potential Senate ratification of the CTBT, together with the U.S. administration's promise of a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile, all demand increasing refinement of our computational simulation capabilities.
Assessment of the present and future stockpile with increased confidence in its safety and reliability, without reliance on calibration with past or future test data, is a long-term goal of the ASC program. This will be accomplished through significant increases in the scientific bases that underlie the computational tools. Computer codes must be developed that replace phenomenology with increased levels of scientific understanding, together with an accompanying quantification of uncertainty. These advanced codes will place significantly higher demands on the computing infrastructure than do the current 3D ASC codes. This article not only discusses the need for a future exascale computing capability for the SBSS program, but also considers high performance computing requirements for broader national security questions. For example, the increasing concern over potential nuclear terrorist threats demands a capability to assess threats and potential disablement technologies, as well as a rapid forensic capability for determining a nuclear weapons design from post-detonation evidence (nuclear counterterrorism).
Displaying Computer Simulations Of Physical Phenomena
NASA Technical Reports Server (NTRS)
Watson, Val
1991-01-01
Paper discusses computer simulation as means of experiencing and learning to understand physical phenomena. Covers both present simulation capabilities and major advances expected in near future. Visual, aural, tactile, and kinesthetic effects used to teach such physical sciences as dynamics of fluids. Recommends classrooms in universities, government, and industry be linked to advanced computing centers so computer simulations integrated into education process.
The DYNAMO Simulation Language--An Alternate Approach to Computer Science Education.
ERIC Educational Resources Information Center
Bronson, Richard
1986-01-01
Suggests the use of computer simulation of continuous systems as a problem solving approach to computer languages. Outlines the procedures that the system dynamics approach employs in computer simulations. Explains the advantages of the special purpose language, DYNAMO. (ML)
Launch Site Computer Simulation and its Application to Processes
NASA Technical Reports Server (NTRS)
Sham, Michael D.
1995-01-01
This paper provides an overview of computer simulation, the Lockheed developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon driven model that uses commercial off the shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation, changes can be made and processes perfected before they are implemented.
Protocols for Handling Messages Between Simulation Computers
NASA Technical Reports Server (NTRS)
Balcerowski, John P.; Dunnam, Milton
2006-01-01
Practical Simulator Network (PSimNet) is a set of data-communication protocols designed especially for use in handling messages between computers that are engaging cooperatively in real-time or nearly-real-time training simulations. In a typical application, computers that provide individualized training at widely dispersed locations would communicate, by use of PSimNet, with a central host computer that would provide a common computational- simulation environment and common data. Originally intended for use in supporting interfaces between training computers and computers that simulate the responses of spacecraft scientific payloads, PSimNet could be especially well suited for a variety of other applications -- for example, group automobile-driver training in a classroom. Another potential application might lie in networking of automobile-diagnostic computers at repair facilities to a central computer that would compile the expertise of numerous technicians and engineers and act as an expert consulting technician.
Reversible simulation of irreversible computation
NASA Astrophysics Data System (ADS)
Li, Ming; Tromp, John; Vitányi, Paul
1998-09-01
Computer computations are generally irreversible while the laws of physics are reversible. This mismatch is penalized by, among other things, the generation of excess thermal entropy in the computation. Computing performance has improved to the extent that efficiency degrades unless all algorithms are executed reversibly, for example by a universal reversible simulation of irreversible computations. All known reversible simulations are either space hungry or time hungry. The leanest method was proposed by Bennett and can be analyzed using a simple ‘reversible’ pebble game. The reachable reversible simulation instantaneous descriptions (pebble configurations) of such pebble games are characterized completely. As a corollary we obtain the reversible simulation by Bennett and, moreover, show that it is a space-optimal pebble game. We also introduce irreversible steps and give a theorem on the tradeoff between the number of allowed irreversible steps and the memory gain in the pebble game. In this resource-bounded setting the limited erasing needs to be performed at precise instants during the simulation. The reversible simulation can be modified so that it is applicable also when the simulated computation time is unknown.
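The pebble game itself is small enough to explore exhaustively. In the sketch below (an illustrative reconstruction, not the paper's code), a pebble may be placed on or removed from position p only when position p-1 is pebbled (position 0 counts as always pebbled), and a breadth-first search over configurations with at most k simultaneous pebbles recovers the 2^k - 1 reachable-distance bound associated with Bennett's strategy:

```python
# Exhaustive search over the 'reversible' pebble game: toggle a pebble
# at position p only when position p-1 is pebbled (0 is always pebbled).
# With at most k pebbles on the board, the farthest position ever
# touched turns out to be 2**k - 1.

def max_reach(k):
    """Farthest position ever pebbled using at most k pebbles at once."""
    limit = 2 ** k + 1                 # search a little past the bound
    seen = {frozenset()}
    frontier = [frozenset()]
    best = 0
    while frontier:
        nxt = []
        for state in frontier:
            for p in range(1, limit + 1):
                if p == 1 or p - 1 in state:        # move rule
                    new = state ^ {p}               # place or remove
                    if len(new) <= k and new not in seen:
                        seen.add(new)
                        nxt.append(new)
                        best = max(best, max(new, default=0))
        frontier = nxt
    return best

print([max_reach(k) for k in (1, 2, 3)])
```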
Metadynamics study of a β-hairpin stability in mixed solvents.
Saladino, Giorgio; Pieraccini, Stefano; Rendine, Stefano; Recca, Teresa; Francescato, Pierangelo; Speranza, Giovanna; Sironi, Maurizio
2011-03-09
Understanding the molecular mechanisms that allow some organisms to survive in extremely harsh conditions is an important achievement that could open up a wide range of applications and that is constantly drawing the attention of many research fields. The high adaptability of these living creatures is related to the presence in their tissues of a high concentration of osmoprotectants, small organic, highly soluble molecules. Although osmoprotectants have been known for a long time, a full account of the machinery behind their activity is still lacking. Here we describe a computational approach that, taking advantage of the recently developed metadynamics technique, allows one to fully describe the free energy surface of a small β-hairpin peptide and how it is affected by an osmoprotectant, glycine betaine (GB), and, for comparison, by urea, a common denaturant. Simulations led to relevant thermodynamic information, including how the free energy difference of denaturation is affected by the two cosolvents; unlike urea, GB caused a considerable increase of the folded basin stability, which translates into a higher melting temperature. NMR experiments confirmed the picture derived from the theoretical study. Further molecular dynamics simulations of selected conformations allowed the role of GB in folded-state protection to be investigated in deeper detail. Simulations of the protein in GB solutions clearly showed an excess of osmoprotectant in the solvent bulk, rather than in the protein domain, confirming the exclusion from the protein surface, but also highlighted interesting features of its interactions, opening up new scenarios besides the classic "indirect mechanism" hypothesis.
Massively parallel quantum computer simulator
NASA Astrophysics Data System (ADS)
De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.
2007-01-01
We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
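The memory figures quoted above follow directly from state-vector simulation: n qubits require 2^n complex amplitudes, so 36 qubits at double precision is 2^36 × 16 bytes = 1 TiB. A minimal, generic NumPy sketch of a single-qubit gate update (not the paper's software):

```python
import numpy as np

# Minimal state-vector update for a single-qubit gate, illustrating why
# simulating n qubits needs 2**n complex amplitudes. Generic NumPy code,
# not the massively parallel simulator described in the abstract.

def apply_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector."""
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)   # restore the original axis order
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                          # start in |000>
h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):                      # Hadamard on every qubit
    state = apply_gate(state, h, q, n)
# state is now the uniform superposition over all 2**n basis states
```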
Space-filling designs for computer experiments: A review
Joseph, V. Roshan
2016-01-29
Improving the quality of a product/process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time-consuming, and therefore directly optimizing on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
Space-filling designs for computer experiments: A review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joseph, V. Roshan
Improving the quality of a product/process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time-consuming, and therefore directly optimizing on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
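As a baseline for the design class reviewed above, the sketch below generates random Latin hypercube designs and keeps the one with the best maximin interpoint distance. This is a generic illustration of space-filling designs, not the maximum projection design the review emphasizes:

```python
import numpy as np

# Baseline space-filling design: random Latin hypercube samples with a
# simple best-of-many maximin-distance selection. Generic illustration,
# not the maximum projection design discussed in the review.

def latin_hypercube(n, d, rng):
    """n points in [0,1]^d with one point per axis-aligned slice."""
    perms = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (perms + rng.random((n, d))) / n

def maximin_lhd(n, d, tries=50, seed=0):
    """Pick the candidate LHD whose closest pair of points is farthest."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -1.0
    for _ in range(tries):
        x = latin_hypercube(n, d, rng)
        diff = x[:, None, :] - x[None, :, :]
        dist = np.sqrt((diff ** 2).sum(-1))
        np.fill_diagonal(dist, np.inf)      # ignore self-distances
        score = dist.min()                  # smallest pairwise distance
        if score > best_score:
            best, best_score = x, score
    return best

design = maximin_lhd(10, 2)
```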
ERIC Educational Resources Information Center
Zillesen, P. G. van Schaick; And Others
Instructional feedback given to the learners during computer simulation sessions may be greatly improved by integrating educational computer simulation programs with hypermedia-based computer-assisted learning (CAL) materials. A prototype of a learning environment of this type called BRINE PURIFICATION was developed for use in corporate training…
ERIC Educational Resources Information Center
O'Reilly, Daniel J.
2011-01-01
This study examined the capability of computer simulation as a tool for assessing the strategic competency of emergency department nurses as they responded to authentically computer simulated biohazard-exposed patient case studies. Thirty registered nurses from a large, urban hospital completed a series of computer-simulated case studies of…
Space Ultrareliable Modular Computer (SUMC) instruction simulator
NASA Technical Reports Server (NTRS)
Curran, R. T.
1972-01-01
The design principles, description, functional operation, and recommended expansion and enhancements are presented for the Space Ultrareliable Modular Computer interpretive simulator. Included as appendices are the user's manual, program module descriptions, target instruction descriptions, simulator source program listing, and a sample program printout. In discussing the design and operation of the simulator, the key problems involving host computer independence and target computer architectural scope are brought into focus.
Computational modeling of mediator oxidation by oxygen in an amperometric glucose biosensor.
Simelevičius, Dainius; Petrauskas, Karolis; Baronas, Romas; Razumienė, Julija
2014-02-07
In this paper, an amperometric glucose biosensor is modeled numerically. The model is based on non-stationary reaction-diffusion type equations. The model consists of four layers. An enzyme layer lies directly on a working electrode surface. The enzyme layer is attached to an electrode by a polyvinyl alcohol (PVA) coated terylene membrane. This membrane is modeled as a PVA layer and a terylene layer, which have different diffusivities. The fourth layer of the model is the diffusion layer, which is modeled using the Nernst approach. The system of partial differential equations is solved numerically using the finite difference technique. The operation of the biosensor was analyzed computationally with special emphasis on the biosensor response sensitivity to oxygen when the experiment was carried out in aerobic conditions. Particularly, numerical experiments show that the overall biosensor response sensitivity to oxygen is insignificant. The simulation results qualitatively explain and confirm the experimentally observed biosensor behavior.
Computational Modeling of Mediator Oxidation by Oxygen in an Amperometric Glucose Biosensor
Šimelevičius, Dainius; Petrauskas, Karolis; Baronas, Romas; Razumienė, Julija
2014-01-01
In this paper, an amperometric glucose biosensor is modeled numerically. The model is based on non-stationary reaction-diffusion type equations. The model consists of four layers. An enzyme layer lies directly on a working electrode surface. The enzyme layer is attached to an electrode by a polyvinyl alcohol (PVA) coated terylene membrane. This membrane is modeled as a PVA layer and a terylene layer, which have different diffusivities. The fourth layer of the model is the diffusion layer, which is modeled using the Nernst approach. The system of partial differential equations is solved numerically using the finite difference technique. The operation of the biosensor was analyzed computationally with special emphasis on the biosensor response sensitivity to oxygen when the experiment was carried out in aerobic conditions. Particularly, numerical experiments show that the overall biosensor response sensitivity to oxygen is insignificant. The simulation results qualitatively explain and confirm the experimentally observed biosensor behavior. PMID:24514882
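The finite difference technique used in the paper can be illustrated on a single layer: substrate diffusing through an enzyme layer while being consumed by Michaelis-Menten kinetics. The four-layer geometry and the actual model parameters are not reproduced; all numbers below are hypothetical:

```python
import numpy as np

# One-layer sketch of an explicit finite difference scheme for a
# reaction-diffusion biosensor model: substrate diffuses through an
# enzyme layer and is consumed by a Michaelis-Menten reaction. The
# four-layer geometry and parameters of the actual paper are not
# reproduced; all values here are hypothetical (CGS-like units).

D, v_max, k_m = 1e-6, 1e-4, 1e-2        # diffusivity and kinetics
L, nx, dt, steps = 1e-2, 51, 0.01, 20000
dx = L / (nx - 1)                        # D*dt/dx**2 = 0.25, stable
s = np.zeros(nx)                         # substrate concentration
for _ in range(steps):
    lap = (s[2:] - 2 * s[1:-1] + s[:-2]) / dx**2
    s[1:-1] += dt * (D * lap - v_max * s[1:-1] / (k_m + s[1:-1]))
    s[0] = 0.0                           # electrode consumes substrate
    s[-1] = 1.0                          # bulk: fixed concentration
# the substrate flux at the electrode is proportional to the current
current = D * (s[1] - s[0]) / dx
```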
Cognitive architectures and language acquisition: a case study in pronoun comprehension.
van Rij, Jacolien; van Rijn, Hedderik; Hendriks, Petra
2010-06-01
In this paper we discuss a computational cognitive model of children's poor performance on pronoun interpretation (the so-called Delay of Principle B Effect, or DPBE). This cognitive model is based on a theoretical account that attributes the DPBE to children's inability as hearers to also take into account the speaker's perspective. The cognitive model predicts that child hearers are unable to do so because their speed of linguistic processing is too limited to perform this second step in interpretation. We tested this hypothesis empirically in a psycholinguistic study, in which we slowed down the speech rate to give children more time for interpretation, and in a computational simulation study. The results of the two studies confirm the predictions of our model. Moreover, these studies show that embedding a theory of linguistic competence in a cognitive architecture allows for the generation of detailed and testable predictions with respect to linguistic performance.
Network-based stochastic semisupervised learning.
Silva, Thiago Christiano; Zhao, Liang
2012-03-01
Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting low computational complexity order in comparison to other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.
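For context, the sketch below implements plain iterative label propagation, the generic network-based semisupervised scheme against which particle competition models are usually compared; it is not the random-preferential walk model of the paper, and the toy graph is made up:

```python
import numpy as np

# Generic network-based semisupervised labeling: iterative propagation
# of class scores along graph edges, with labeled seeds re-injected at
# every step. Illustrative baseline, not the particle competition model.

def propagate(adj, labels, n_classes, alpha=0.9, iters=100):
    """labels[i] = class id for seed nodes, -1 for unlabeled nodes."""
    n = len(labels)
    deg = adj.sum(1, keepdims=True)
    w = adj / np.maximum(deg, 1)                  # row-normalized walk
    seed = np.zeros((n, n_classes))
    for i, c in enumerate(labels):
        if c >= 0:
            seed[i, c] = 1.0
    f = seed.copy()
    for _ in range(iters):
        f = alpha * (w @ f) + (1 - alpha) * seed  # diffuse, re-inject
    return f.argmax(1)

# Two loosely connected triangles with one labeled node on each side.
adj = np.array([[0, 1, 1, 0, 0, 0],
                [1, 0, 1, 0, 0, 0],
                [1, 1, 0, 1, 0, 0],
                [0, 0, 1, 0, 1, 1],
                [0, 0, 0, 1, 0, 1],
                [0, 0, 0, 1, 1, 0]], float)
labels = np.array([0, -1, -1, -1, -1, 1])
pred = propagate(adj, labels, 2)
```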
Njoya, Eric Tchouamou; Seetaram, Neelu
2018-04-01
The aim of this article is to investigate the claim that tourism development can be the engine for poverty reduction in Kenya, using a dynamic, microsimulation computable general equilibrium model. The article improves on common practice in the literature by using the more comprehensive Foster-Greer-Thorbecke (FGT) index to measure poverty instead of headcount ratios only. Simulation results from previous studies confirm that expansion of the tourism industry will benefit different sectors unevenly and will only marginally improve the poverty headcount. This is mainly due to the contraction of the agricultural sector caused by the appreciation of the real exchange rate. This article demonstrates that the effect on the poverty gap and poverty severity is, nevertheless, significant for both rural and urban areas, with a higher impact in the urban areas. Tourism expansion enables poorer households to move closer to the poverty line. It is concluded that the tourism industry is pro-poor.
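The FGT family referred to above has a one-line definition: P_alpha is the mean of ((z - y)/z)^alpha over individuals with income y below the poverty line z, so alpha = 0 gives the headcount ratio, alpha = 1 the poverty gap, and alpha = 2 poverty severity. A sketch with made-up incomes:

```python
# The Foster-Greer-Thorbecke (FGT) poverty index family: alpha = 0 is
# the headcount ratio, alpha = 1 the poverty gap, alpha = 2 poverty
# severity. The incomes and poverty line below are made-up numbers.

def fgt(incomes, z, alpha):
    """Mean of ((z - y) / z)**alpha over the poor (y < z)."""
    n = len(incomes)
    return sum(((z - y) / z) ** alpha for y in incomes if y < z) / n

incomes = [40, 60, 80, 120, 200]
z = 100                                  # hypothetical poverty line
headcount = fgt(incomes, z, 0)           # share of people below z
gap = fgt(incomes, z, 1)                 # average shortfall, share of z
severity = fgt(incomes, z, 2)            # squared gaps weight the poorest
```

Raising alpha gives more weight to the poorest households, which is why the gap and severity measures can move even when the headcount barely changes, as in the article's results.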
Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization
NASA Astrophysics Data System (ADS)
Tanaka, Ken; Tomeba, Hiromichi; Adachi, Fumiyuki
Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of orthogonal frequency division multiplexing (OFDM) and time-domain spreading, while multi-carrier code division multiple access (MC-CDMA) is a combination of OFDM and frequency-domain spreading. In MC-CDMA, a good bit error rate (BER) performance can be achieved by using frequency-domain equalization (FDE), since the frequency diversity gain is obtained. On the other hand, the conventional orthogonal MC DS-CDMA fails to achieve any frequency diversity gain. In this paper, we propose a new orthogonal MC DS-CDMA that can obtain the frequency diversity gain by applying FDE. The conditional BER analysis is presented. The theoretical average BER performance in a frequency-selective Rayleigh fading channel is evaluated by the Monte-Carlo numerical computation method using the derived conditional BER and is confirmed by computer simulation of the orthogonal MC DS-CDMA signal transmission.
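The frequency-domain equalization step that supplies the diversity gain can be sketched for a single block: model the multipath channel as circular (as a cyclic prefix would ensure), move to the frequency domain with an FFT, and apply per-subcarrier MMSE weights. This is a generic FDE illustration with made-up parameters, not the paper's MC DS-CDMA transceiver:

```python
import numpy as np

# One-shot sketch of frequency-domain equalization (FDE): a BPSK block
# passes through a frequency-selective channel (made circular, as a
# cyclic prefix would ensure) and is equalized per subcarrier with MMSE
# weights. Channel taps and noise level are made-up illustrative values.

rng = np.random.default_rng(1)
n = 64
bits = rng.integers(0, 2, n)
x = 1.0 - 2.0 * bits                           # BPSK block
h = np.array([0.8, 0.5, 0.3])                  # 3-tap multipath channel
h_pad = np.concatenate([h, np.zeros(n - len(h))])
noise_var = 0.01
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_pad)))  # circular conv
y += rng.normal(0.0, np.sqrt(noise_var), n)

H = np.fft.fft(h_pad)
w = np.conj(H) / (np.abs(H) ** 2 + noise_var)  # per-subcarrier MMSE weight
x_hat = np.real(np.fft.ifft(w * np.fft.fft(y)))
detected = (x_hat < 0).astype(int)             # BPSK decision
```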
Singh, Gurpreet; Mohanty, B P; Saini, G S S
2016-02-15
Structure, vibrational and nuclear magnetic resonance spectra, and the antioxidant action of ascorbic acid towards hydroxyl radicals have been studied computationally and in vitro by ultraviolet-visible, nuclear magnetic resonance and vibrational spectroscopic techniques. Time-dependent density functional theory calculations have been employed to assign the various electronic transitions in the ultraviolet-visible spectra. Observed chemical shifts and vibrational bands in the nuclear magnetic resonance and vibrational spectra, respectively, have been assigned with the help of the calculations. Changes in the structure of ascorbic acid in the aqueous phase have been examined computationally and experimentally by recording Raman spectra in aqueous medium. Theoretical calculations of the interaction between the ascorbic acid molecule and the hydroxyl radical predicted the formation of dehydroascorbic acid as the first product, which has been confirmed by comparing its simulated spectra with the corresponding spectra of ascorbic acid in the presence of hydrogen peroxide. Copyright © 2015 Elsevier B.V. All rights reserved.
Modulation of the error-related negativity by response conflict.
Danielmeier, Claudia; Wessel, Jan R; Steinhauser, Marco; Ullsperger, Markus
2009-11-01
An arrow version of the Eriksen flanker task was employed to investigate the influence of conflict on the error-related negativity (ERN). The degree of conflict was modulated by varying the distance between flankers and the target arrow (CLOSE and FAR conditions). Error rates and reaction time data from a behavioral experiment were used to adapt a connectionist model of this task. This model was based on the conflict monitoring theory and simulated behavioral and event-related potential data. The computational model predicted an increased ERN amplitude in FAR incompatible (the low-conflict condition) compared to CLOSE incompatible errors (the high-conflict condition). A subsequent ERP experiment confirmed the model predictions. The computational model explains this finding with larger post-response conflict in far trials. In addition, data and model predictions of the N2 and the LRP support the conflict interpretation of the ERN.
Optical function of the finite-thickness corrugated pellicle of euglenoids.
Inchaussandague, Marina E; Skigin, Diana C; Dolinko, Andrés E
2017-06-20
We explore the electromagnetic response of the pellicle of selected species of euglenoids. These microorganisms are bounded by a typical surface pellicle formed by S-shaped overlapping bands that resemble a corrugated film. We investigate the role played by this structure in the protection of the cell against UV radiation. By considering the pellicle as a periodically corrugated film of finite thickness, we applied the C-method to compute the reflectance spectra. The far-field results revealed reflectance peaks with a Q-factor larger than 10³ in the UV region for all the illumination conditions investigated. The resonant behavior responsible for this enhancement has also been illustrated by near-field computations performed by a photonic simulation method. These results confirm that the corrugated pellicle of euglenoids shields the cell from harmful UV radiation and open up new possibilities for the design of highly UV-reflective surfaces.
Njoya, Eric Tchouamou; Seetaram, Neelu
2017-01-01
The aim of this article is to investigate the claim that tourism development can be the engine for poverty reduction in Kenya using a dynamic, microsimulation computable general equilibrium model. The article improves on the common practice in the literature by using the more comprehensive Foster-Greer-Thorbecke (FGT) index to measure poverty instead of headcount ratios only. Simulation results from previous studies confirm that expansion of the tourism industry will benefit different sectors unevenly and will only marginally improve the poverty headcount. This is mainly due to the contraction of the agricultural sector caused by the appreciation of the real exchange rate. This article demonstrates that the effect on poverty gap and poverty severity is, nevertheless, significant for both rural and urban areas, with a higher impact in the urban areas. Tourism expansion enables poorer households to move closer to the poverty line. It is concluded that the tourism industry is pro-poor. PMID:29595836
Effects of scale and Froude number on the hydraulics of waste stabilization ponds.
Vieira, Isabela De Luna; Da Silva, Jhonatan Barbosa; Ide, Carlos Nobuyoshi; Janzen, Johannes Gérson
2018-01-01
This paper presents the findings from a series of computational fluid dynamics simulations to estimate the effect of scale and Froude number on the hydraulic performance and effluent pollutant fraction of scaled waste stabilization ponds designed using Froude similarity. Prior to its application, the model was verified by comparing the computational and experimental results of a scaled model pond, showing good agreement and confirming that the model accurately reproduces the hydrodynamics and tracer transport processes. Our results showed that the scale and the interaction between scale and Froude number have an effect on the hydraulics of ponds. At 1:5 scale, the increase of scale increased short-circuiting and decreased mixing. Furthermore, at 1:10 scale, the increase of scale decreased the effluent pollutant fraction. Since the Reynolds effect cannot be ignored, a ratio of Reynolds and Froude numbers was suggested to predict the effluent pollutant fraction for flows with different Reynolds numbers.
Computational biology approach to uncover hepatitis C virus helicase operation.
Flechsig, Holger
2014-04-07
Hepatitis C virus (HCV) helicase is a molecular motor that splits nucleic acid duplex structures during viral replication, therefore representing a promising target for antiviral treatment. Hence, a detailed understanding of the mechanism by which it operates would facilitate the development of efficient drug-assisted therapies aiming to inhibit helicase activity. Despite extensive investigations performed in the past, a thorough understanding of the activity of this important protein was lacking since the underlying internal conformational motions could not be resolved. Here we review investigations that have been previously performed by us for HCV helicase. Using methods of structure-based computational modelling it became possible to follow entire operation cycles of this motor protein in structurally resolved simulations and uncover the mechanism by which it moves along the nucleic acid and accomplishes strand separation. We also discuss observations from that study in the light of recent experimental studies that confirm our findings.
Computer-aided Instructional System for Transmission Line Simulation.
ERIC Educational Resources Information Center
Reinhard, Erwin A.; Roth, Charles H., Jr.
A computer-aided instructional system has been developed which utilizes dynamic computer-controlled graphic displays and which requires student interaction with a computer simulation in an instructional mode. A numerical scheme has been developed for digital simulation of a uniform, distortionless transmission line with resistive terminations and…
Verifying the Simulation Hypothesis via Infinite Nested Universe Simulacrum Loops
NASA Astrophysics Data System (ADS)
Sharma, Vikrant
2017-01-01
The simulation hypothesis proposes that local reality exists as a simulacrum within a hypothetical computer's dimension. More specifically, Bostrom's trilemma proposes that the number of simulations an advanced 'posthuman' civilization could produce makes the proposition very likely. In this paper a hypothetical method to verify the simulation hypothesis is discussed using infinite regression applied to a new type of infinite loop. Assign dimension n to any computer in our present reality, where dimension signifies the hierarchical level in nested simulations our reality exists in. A computer simulating known reality would be dimension (n-1), and likewise a computer simulating an artificial reality, such as a video game, would be dimension (n+1). In this method, among others, four key assumptions are made about the nature of the original computer dimension n. Summations show that regressing such a reality infinitely will create convergence, implying that the verification of whether local reality is a grand simulation is feasible to detect with adequate computing capability. The action of reaching said convergence point halts the simulation of local reality. Sensitivities to the four assumptions and implications are discussed.
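The convergence claim can be illustrated numerically. Under the purely illustrative assumption that each nested level receives a fixed fraction r < 1 of its parent's compute budget, the total cost over infinitely many levels is the geometric series Σ rᵏ = 1/(1 − r), so infinite regression yields a finite total:

```python
def nested_compute_budget(r, levels):
    """Partial sum of the compute-cost series sum_{k>=0} r**k,
    where each nested simulation level gets fraction r of its parent's budget."""
    total, cost = 0.0, 1.0
    for _ in range(levels):
        total += cost
        cost *= r
    return total

r = 0.5
# Partial sums approach the closed-form limit 1/(1 - r) = 2.0
print(nested_compute_budget(r, 50))
```

The rapid approach to the limit is the sense in which "regressing such a reality infinitely will create convergence"; the specific fraction r is an assumption of this sketch, not a value from the paper.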
Utilization of Short-Simulations for Tuning High-Resolution Climate Model
NASA Astrophysics Data System (ADS)
Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.
2016-12-01
Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, guided by a combination of short (<10 days) and longer (1 year) Perturbed Parameter Ensemble (PPE) simulations at low resolution to identify model feature sensitivity to parameter changes. The CAPT tests have been found effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that the approach is also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "hone in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices for the reduction of large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations.
An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context, along with assessment in greater detail once an educated set of parameter choices is selected. Limitations of using short-term simulations for tuning climate models are also discussed.
Experimental and simulation flow rate analysis of the 3/2 directional pneumatic valve
NASA Astrophysics Data System (ADS)
Blasiak, Slawomir; Takosoglu, Jakub E.; Laski, Pawel A.; Pietrala, Dawid S.; Zwierzchowski, Jaroslaw; Bracha, Gabriel; Nowakowski, Lukasz; Blasiak, Malgorzata
The work includes a comparative analysis of two test methods. The first, a numerical method, consists in determining the flow characteristics with the use of ANSYS CFX. For this purpose, a 3/2 poppet directional valve was modeled in the 3D CAD software SolidWorks. Based on the solid model that was developed, simulation studies of the air flow through the valve were conducted in the computational fluid dynamics software ANSYS CFX. The second, an experimental method, entailed conducting tests on a specially constructed test stand. The comparison of the test results obtained with both methods made it possible to determine the cross-correlation. The high compatibility of the results confirms the usefulness of the numerical procedures; thus, they might serve to determine the flow characteristics of directional valves as an alternative to costly and time-consuming test-stand measurements.
Product selectivity control induced by using liquid-liquid parallel laminar flow in a microreactor.
Amemiya, Fumihiro; Matsumoto, Hideyuki; Fuse, Keishi; Kashiwagi, Tsuneo; Kuroda, Chiaki; Fuchigami, Toshio; Atobe, Mahito
2011-06-07
Product selectivity control based on a liquid-liquid parallel laminar flow has been successfully demonstrated by using a microreactor. Our electrochemical microreactor system enables regioselective cross-coupling of aldehyde with allylic chloride via chemoselective cathodic reduction of the substrate through the combined use of a suitable flow mode and corresponding cathode material. The formation of liquid-liquid parallel laminar flow in the microreactor was supported by an estimation of the benzaldehyde diffusion coefficient and computational fluid dynamics simulation. The diffusion coefficient for benzaldehyde in Bu₄NClO₄-HMPA medium was determined to be 1.32 × 10⁻⁷ cm² s⁻¹ by electrochemical measurements, and the flow simulation using this value revealed the formation of a clear concentration gradient of benzaldehyde in the microreactor channel over a specific channel length. In addition, the necessity of the liquid-liquid parallel laminar flow was confirmed by flow mode experiments.
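The reported diffusion coefficient lets one estimate how far benzaldehyde spreads across the laminar interface during its residence in the channel, via the one-dimensional diffusion length δ ≈ √(2Dt). In the sketch below only the diffusion coefficient comes from the abstract; the flow velocity and channel length are hypothetical placeholders, not values from the study:

```python
import math

D = 1.32e-11   # benzaldehyde diffusion coefficient, m^2/s (= 1.32e-7 cm^2/s, from the abstract)
u = 1.0e-3     # assumed mean flow velocity, m/s (hypothetical)
L = 0.02       # assumed channel length, m (hypothetical)

t_res = L / u                      # residence time of a fluid element in the channel
delta = math.sqrt(2 * D * t_res)   # 1D diffusion broadening across the laminar interface
print(f"residence time: {t_res:.1f} s, diffusion width: {delta * 1e6:.1f} um")
```

Because δ stays in the tens of micrometers while microchannel widths are typically hundreds of micrometers, the two laminar streams remain largely segregated, which is the premise of the flow-mode selectivity control.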
NASA Astrophysics Data System (ADS)
Attari Moghaddam, Alireza; Prat, Marc; Tsotsas, Evangelos; Kharaghani, Abdolreza
2017-12-01
The classical continuum modeling of evaporation in capillary porous media is revisited via pore network simulations of the evaporation process. The computed moisture diffusivity is characterized by a minimum corresponding to the transition between liquid and vapor transport mechanisms, confirming previous interpretations. The study also suggests an explanation for the scattering generally observed in the moisture diffusivity obtained from experimental data. The pore network simulations indicate a noticeable nonlocal equilibrium effect, leading to a new interpretation of the vapor pressure-saturation relationship classically introduced to obtain the one-equation continuum model of evaporation. The latter should not be understood as a desorption isotherm, as classically considered, but rather as a signature of a nonlocal equilibrium effect. The main outcome of this study is therefore that a nonlocal equilibrium two-equation model must be considered to improve the continuum modeling of evaporation.
Hu, Guixiang; Huang, Meilan; Luo, Chengcai; Wang, Qi; Zou, Jian-Wei
2016-05-01
The separation of enantiomers and the confirmation of their absolute configurations are significant in the development of chiral drugs. The interactions between the enantiomers of a chiral pyrazole derivative and the polysaccharide-based chiral stationary phase cellulose tris(4-methylbenzoate) (Chiralcel OJ) in seven solvents and at different temperatures were studied using molecular dynamics simulations. The results show that the solvent effect has a remarkable influence on the interactions. Structure analysis discloses that the different interactions between the two isomers and the chiral stationary phase depend on the nature of the solvent, which may invert the elution order. The computational method in the present study can be used to predict the elution order and the absolute configurations of enantiomers in HPLC separations and would therefore be valuable in the development of chiral drugs. Copyright © 2016 Elsevier Inc. All rights reserved.
Configurational entropy measurements in extremely supercooled liquids that break the glass ceiling.
Berthier, Ludovic; Charbonneau, Patrick; Coslovich, Daniele; Ninarello, Andrea; Ozawa, Misaki; Yaida, Sho
2017-10-24
Liquids relax extremely slowly on approaching the glass state. One explanation is that an entropy crisis, because of the rarefaction of available states, makes it increasingly arduous to reach equilibrium in that regime. Validating this scenario is challenging, because experiments offer limited resolution, while numerical studies lag more than eight orders of magnitude behind experimentally relevant timescales. In this work, we not only close the colossal gap between experiments and simulations but manage to create in silico configurations that have no experimental analog yet. Deploying a range of computational tools, we obtain four estimates of their configurational entropy. These measurements consistently confirm that the steep entropy decrease observed in experiments is also found in simulations, even beyond the experimental glass transition. Our numerical results thus extend the observational window into the physics of glasses and reinforce the relevance of an entropy crisis for understanding their formation. Published under the PNAS license.
Mono- and Di-Alkylation Processes of DNA Bases by Nitrogen Mustard Mechlorethamine.
Larrañaga, Olatz; de Cózar, Abel; Cossío, Fernando P
2017-12-06
The reactivity of nitrogen mustard mechlorethamine (mec) with purine bases towards the formation of mono- (G-mec and A-mec) and dialkylated (AA-mec, GG-mec and AG-mec) adducts has been studied using density functional theory (DFT). To gain a complete overview of DNA-alkylation processes, direct chloride substitution and formation through activated aziridinium species were considered as possible reaction paths for adduct formation. Our results confirm that DNA alkylation by mec occurs via aziridine intermediates instead of direct substitution. Consideration of explicit water molecules in conjunction with the polarizable continuum model (PCM) was shown to be an adequate computational method for a proper representation of the system. Moreover, Runge-Kutta numerical kinetic simulations including the possible bisadducts have been performed. These simulations predicted a product ratio of 83:17 for the GG-mec and AG-mec diadducts, respectively. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
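As a sketch of the kind of Runge-Kutta kinetic simulation mentioned (not the authors' DFT-derived rate network), the pseudo-first-order cascade below integrates a monoadduct that branches into two diadducts. The rate constants are hypothetical, chosen so that the branching reproduces the reported 83:17 GG-mec:AG-mec ratio:

```python
import numpy as np

# Hypothetical pseudo-first-order cascade (illustrative only):
#   mec --k1--> mono-adduct --k2--> GG-mec
#                           --k3--> AG-mec
k1, k2, k3 = 1.0, 0.83, 0.17

def rhs(y):
    mec, mono, gg, ag = y
    return np.array([-k1 * mec,
                     k1 * mec - (k2 + k3) * mono,
                     k2 * mono,
                     k3 * mono])

def rk4(y0, dt, steps):
    # Classical fourth-order Runge-Kutta integrator
    y = np.array(y0, dtype=float)
    for _ in range(steps):
        f1 = rhs(y)
        f2 = rhs(y + 0.5 * dt * f1)
        f3 = rhs(y + 0.5 * dt * f2)
        f4 = rhs(y + dt * f3)
        y = y + dt / 6.0 * (f1 + 2 * f2 + 2 * f3 + f4)
    return y

mec, mono, gg, ag = rk4([1.0, 0.0, 0.0, 0.0], dt=0.01, steps=5000)
print(f"GG:AG product ratio = {100 * gg / (gg + ag):.0f}:{100 * ag / (gg + ag):.0f}")
```

For a linear branching step the final ratio equals k2/(k2 + k3) regardless of the other rates, so the sketch only illustrates the numerical machinery, not a prediction.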
Dominant factor analysis of B-flow twinkling sign with phantom and simulation data.
Lu, Weijia; Haider, Bruno
2017-01-01
The twinkling sign in B-flow imaging (BFI-TS) has been reported in the literature to increase both specificity and sensitivity compared to traditional gray-scale imaging. Unfortunately, there has been no conclusive study on the mechanism of this effect. In the study presented here, a comparative test on phantoms is introduced, where the variance of a phase estimator is used to quantify the motion amplitude. Statistical inference is then employed to find the dominant factor behind the twinkling sign, which is verified by computer simulation. Through this analysis, it is confirmed that tissue viscoelasticity is closely coupled with the twinkling sign. Moreover, the acoustic radiation force caused by tissue attenuation is found to be the trigger of the twinkling sign. Based on these findings, the BFI-TS is interpreted as tissue movement triggering the vibration of microcalcification particles.
Encounter times of chromatin loci influenced by polymer decondensation
NASA Astrophysics Data System (ADS)
Amitai, A.; Holcman, D.
2018-03-01
The time for a DNA sequence to find its homologous counterpart depends on a long random search inside the cell nucleus. Using polymer models, we compute here the mean first encounter time (MFET) between two sites located on two different polymer chains and confined locally by potential wells. We find that reducing the tethering forces acting on the polymers results in local decondensation, and numerical simulations of the polymer model show that these changes are associated with a reduction of the MFET by several orders of magnitude. We derive a new asymptotic formula for the MFET, confirmed by Brownian simulations. We conclude from the present modeling approach that the fast search for homology is mediated by local chromatin decondensation due to the release of multiple chromatin tethering forces. The present scenario could explain how the homologous recombination pathway for double-stranded DNA repair is controlled by its random search step.
Active cooling of microvascular composites for battery packaging
NASA Astrophysics Data System (ADS)
Pety, Stephen J.; Chia, Patrick X. L.; Carrington, Stephen M.; White, Scott R.
2017-10-01
Batteries in electric vehicles (EVs) require a packaging system that provides both thermal regulation and crash protection. A novel packaging scheme is presented that uses active cooling of microvascular carbon fiber reinforced composites to accomplish this multifunctional objective. Microvascular carbon fiber/epoxy composite panels were fabricated and their cooling performance assessed over a range of thermal loads and experimental conditions. Tests were performed for different values of coolant flow rate, channel spacing, panel thermal conductivity, and applied heat flux. More efficient cooling occurs when the coolant flow rate is increased, channel spacing is reduced, and thermal conductivity of the host composite is increased. Computational fluid dynamics (CFD) simulations were also performed and correlate well with the experimental data. CFD simulations of a typical EV battery pack confirm that microvascular composite panels can adequately cool battery cells generating 500 W m⁻² heat flux below 40 °C.
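A first-order energy balance indicates the coolant flow such a panel needs: in steady state, all absorbed heat raises the coolant temperature by ΔT = q″A/(ṁcₚ). Only the 500 W m⁻² heat flux comes from the abstract; the panel area and flow rate below are hypothetical placeholders:

```python
q_flux = 500.0   # applied heat flux, W/m^2 (from the abstract)
area = 0.1       # assumed panel area, m^2 (hypothetical)
m_dot = 1.0e-3   # assumed coolant mass flow rate, kg/s (hypothetical)
cp = 4186.0      # specific heat of liquid water, J/(kg K)

# Steady-state energy balance: all absorbed heat goes into the coolant stream
dT = q_flux * area / (m_dot * cp)
print(f"coolant temperature rise: {dT:.1f} K")
```

A roughly 12 K rise at a 1 g/s water flow illustrates why the experiments vary coolant flow rate and channel spacing to keep cell temperatures below the 40 °C target.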
Numerical simulation of heat transfer and fluid flow in laser drilling of metals
NASA Astrophysics Data System (ADS)
Zhang, Tingzhong; Ni, Chenyin; Zhou, Jie; Zhang, Hongchao; Shen, Zhonghua; Ni, Xiaowu; Lu, Jian
2015-05-01
Laser processing, such as laser drilling, laser welding and laser cutting, is important in modern manufacturing, and the interaction of laser and matter is a complex phenomenon which should be studied in detail in order to increase manufacturing efficiency and quality. In this paper, a two-dimensional transient numerical model was developed to study the temperature field and molten pool size during pulsed laser keyhole drilling. The volume-of-fluid method was employed to track free surfaces, and melting and evaporation enthalpy, recoil pressure, surface tension, and energy loss due to evaporating material were considered in this model. In addition, the enthalpy-porosity technique was applied to account for the latent heat during melting and solidification. Temperature fields and melt pool size were numerically simulated via the finite element method. Moreover, the effectiveness of the developed computational procedure was confirmed by experiments.
Saenz-Méndez, Patricia; Katz, Aline; Pérez-Kempner, María Lucía; Ventura, Oscar N; Vázquez, Marta
2017-04-01
A new homology model of human microsomal epoxide hydrolase was derived based on multiple templates. The model obtained was fully evaluated, including MD simulations and ensemble-based docking, showing that the quality of the structure is better than that of the only previously known model. Notably, a catalytic triad was clearly identified, in agreement with the experimental information available. Analysis of intermediates in the enzymatic mechanism led to the identification of key residues for substrate binding, stereoselectivity, and intermediate stabilization during the reaction. In particular, we have confirmed the role of the oxyanion hole and the conserved motif (HGXP) in epoxide hydrolases, in excellent agreement with known experimental and computational data on similar systems. The model obtained is the first one that fully agrees with all the experimental observations on the system. Proteins 2017; 85:720-730. © 2016 Wiley Periodicals, Inc.
Uses of Computer Simulation Models in Ag-Research and Everyday Life
USDA-ARS?s Scientific Manuscript database
When the news media talks about models, they could be talking about role models, fashion models, conceptual models like the auto industry uses, or computer simulation models. A computer simulation model is computer code that attempts to imitate the processes and functions of certain systems. There ...
A meta-analysis of outcomes from the use of computer-simulated experiments in science education
NASA Astrophysics Data System (ADS)
Lejeune, John Van
The purpose of this study was to synthesize the findings from existing research on the effects of computer-simulated experiments on students in science education. Results from 40 reports were integrated by the process of meta-analysis to examine the effect of computer-simulated experiments and interactive videodisc simulations on student achievement and attitudes. Findings indicated significant positive differences in both low-level and high-level achievement of students who used computer-simulated experiments and interactive videodisc simulations as compared to students who used more traditional learning activities. No significant differences in retention, student attitudes toward the subject, or toward the educational method were found. Based on the findings of this study, computer-simulated experiments and interactive videodisc simulations should be used to enhance students' learning in science, especially in cases where the use of traditional laboratory activities is expensive, dangerous, or impractical.
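Meta-analytic integration of this kind typically pools standardized mean differences weighted by inverse variance. The sketch below shows a fixed-effect pooling with entirely hypothetical study values, not data from the 40 reports analyzed here:

```python
import math

# Hypothetical per-study Cohen's d values and group sizes: (d, n_treatment, n_control)
studies = [(0.45, 30, 28), (0.62, 25, 25), (0.30, 40, 38)]

def d_variance(d, n1, n2):
    # Approximate sampling variance of Cohen's d for two independent groups
    return (n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2))

weights = [1.0 / d_variance(d, n1, n2) for d, n1, n2 in studies]
pooled = sum(w * d for w, (d, _, _) in zip(weights, studies)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
print(f"pooled effect size: {pooled:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")
```

Larger, more precise studies receive proportionally more weight, which is the core design choice distinguishing meta-analysis from a simple average of reported effects.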
Riaz, Faisal; Niazi, Muaz A
2017-01-01
This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) that interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing, and copying the actions of each other, i.e. mirroring. The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical model of Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: first, it has been simulated using NetLogo, a standard agent-based modeling tool, and second, at a practical level using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random-walk-based collision avoidance strategy in congested flock-like topologies, whereas the practical results have confirmed that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, as compared with the IEEE 802.11n-based existing state-of-the-art mirroring-neuron-based collision avoidance scheme.
Multi-vortex crystal lattices in Bose-Einstein condensates with a rotating trap.
Xie, Shuangquan; Kevrekidis, Panayotis G; Kolokolnikov, Theodore
2018-05-01
We consider vortex dynamics in the context of Bose-Einstein condensates (BECs) with a rotating trap, with or without anisotropy. Starting with the Gross-Pitaevskii (GP) partial differential equation (PDE), we derive a novel reduced system of ordinary differential equations (ODEs) that describes stable configurations of multiple co-rotating vortices (vortex crystals). This description is found to be quite accurate quantitatively especially in the case of multiple vortices. In the limit of many vortices, BECs are known to form vortex crystal structures, whereby vortices tend to arrange themselves in a hexagonal-like spatial configuration. Using our asymptotic reduction, we derive the effective vortex crystal density and its radius. We also obtain an asymptotic estimate for the maximum number of vortices as a function of rotation rate. We extend considerations to the anisotropic trap case, confirming that a pair of vortices lying on the long (short) axis is linearly stable (unstable), corroborating the ODE reduction results with full PDE simulations. We then further investigate the many-vortex limit in the case of strong anisotropic potential. In this limit, the vortices tend to align themselves along the long axis, and we compute the effective one-dimensional vortex density, as well as the maximum admissible number of vortices. Detailed numerical simulations of the GP equation are used to confirm our analytical predictions.
Improving energy efficiency of Embedded DRAM Caches for High-end Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Sparsh; Vetter, Jeffrey S; Li, Dong
2014-01-01
With increasing system core-count, the size of the last level cache (LLC) has increased, and since SRAM consumes high leakage power, the power consumption of LLCs is becoming a significant fraction of processor power consumption. To address this, researchers have used embedded DRAM (eDRAM) LLCs, which consume low leakage power. However, eDRAM caches consume a significant amount of energy in the form of refresh energy. In this paper, we propose ESTEEM, an energy saving technique for embedded DRAM caches. ESTEEM uses dynamic cache reconfiguration to turn off a portion of the cache to save both leakage and refresh energy. It logically divides the cache sets into multiple modules and turns off a possibly different number of ways in each module. Microarchitectural simulations confirm that ESTEEM is effective in improving performance and energy efficiency and provides better results compared to a recently-proposed eDRAM cache energy saving technique, namely Refrint. For single- and dual-core simulations, the average energy saving in the memory subsystem (LLC + main memory) on using ESTEEM is 25.8% and 32.6%, respectively, and the average weighted speedups are 1.09X and 1.22X, respectively. Additional experiments confirm that ESTEEM works well for a wide range of system parameters.
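The trade-off such reconfiguration exploits can be captured in a first-order model: switching off cache ways saves leakage and refresh power in proportion to the fraction disabled, at the cost of energy for the extra main-memory accesses caused by added misses. All coefficients below are hypothetical illustrations, not figures from the paper:

```python
# First-order eDRAM LLC energy model (all coefficients hypothetical)
P_leak = 1.0      # leakage power of the full cache, W
P_refresh = 0.6   # refresh power of the full cache, W
E_miss = 2.0e-9   # energy per extra main-memory access, J

def llc_power(frac_off, extra_misses_per_sec):
    # Active fraction pays leakage + refresh; extra misses pay DRAM access energy
    active = 1.0 - frac_off
    return active * (P_leak + P_refresh) + extra_misses_per_sec * E_miss

baseline = llc_power(0.0, 0.0)
reconfigured = llc_power(0.5, 1.0e6)   # half the ways off, 1M extra misses/s
saving = 100 * (1 - reconfigured / baseline)
print(f"power saving: {saving:.1f}%")
```

The model shows why the technique pays off only while the miss-energy penalty stays small relative to the leakage and refresh power reclaimed, which is what per-module way selection is tuning.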
Naim, Mona; Elewa, Mahmoud; El-Shafei, Ahmed; Moneer, Abeer
2015-01-01
An innovative polymeric membrane has been invented, which presents a breakthrough in the field of desalination membranes. It can desalinate simulated seawater of exceptionally high concentration to produce a high flux of potable water with over 99.7% salt rejection (%SR) in a once-through purge-air pervaporation (PV) process. A set-up was constructed for conducting the desalination experiments, and the effects of initial salt solution concentration (Ci) and pervaporation temperature (Tpv) on the water flux (J), %SR, separation factor, and pervaporation separation index were determined. The membrane was prepared by the phase-inversion technique from a specially formulated casting solution consisting of five ingredients, after which the membrane was subjected to a post-treatment by which certain properties were conferred. The results confirmed that the salinity of the pervaporate was independent of Ci (all %SR above 99.7). The best result was at Tpv=70 °C, where J varied from 5.97 to 3.45 l/m2 h for Ci=40-140 g NaCl/l, respectively. The membrane morphology was confirmed to be asymmetric. The contact angle was immeasurable, indicating the membrane to be super-hydrophilic. Activation energies computed using the Arrhenius law were, under all conditions investigated, less than 20 kJ/mol.
High resolution flow field prediction for tail rotor aeroacoustics
NASA Technical Reports Server (NTRS)
Quackenbush, Todd R.; Bliss, Donald B.
1989-01-01
The prediction of tail rotor noise due to the impingement of the main rotor wake poses a significant challenge to current analysis methods in rotorcraft aeroacoustics. This paper describes the development of a new treatment of the tail rotor aerodynamic environment that permits highly accurate resolution of the incident flow field with modest computational effort relative to alternative models. The new approach incorporates an advanced full-span free wake model of the main rotor in a scheme which reconstructs high-resolution flow solutions from preliminary, computationally inexpensive simulations with coarse resolution. The heart of the approach is a novel method for using local velocity correction terms to capture the steep velocity gradients characteristic of the vortex-dominated incident flow. Sample calculations have been undertaken to examine the principal types of interactions between the tail rotor and the main rotor wake and to examine the performance of the new method. The results of these sample problems confirm the success of this approach in capturing the high-resolution flows necessary for analysis of rotor-wake/rotor interactions with dramatically reduced computational cost. Computations of radiated sound are also carried out that explore the role of various portions of the main rotor wake in generating tail rotor noise.
Near real-time traffic routing
NASA Technical Reports Server (NTRS)
Yang, Chaowei (Inventor); Xie, Jibo (Inventor); Zhou, Bin (Inventor); Cao, Ying (Inventor)
2012-01-01
A near real-time physical transportation network routing system comprising a traffic simulation computing grid and a dynamic traffic routing service computing grid. The traffic simulator produces traffic network travel time predictions for a physical transportation network using a traffic simulation model and common input data. The physical transportation network is divided into multiple sections, each with a primary zone and a buffer zone. The traffic simulation computing grid includes multiple traffic simulation computing nodes. The common input data include static network characteristics, an origin-destination data table, dynamic traffic information data, and historical traffic data. The dynamic traffic routing service computing grid includes multiple dynamic traffic routing computing nodes and generates traffic route(s) using the traffic network travel time predictions.
Radiotherapy Monte Carlo simulation using cloud computing technology.
Poole, C M; Cornelius, I; Trapp, J V; Langton, C M
2012-12-01
Cloud computing allows vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate, as a proof of principle, the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware.
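The cost behavior described above can be sketched with a toy billing model (a hypothetical illustration, not the paper's code; the hourly rate and function names are assumptions): each of n machines runs total_cpu_hours/n of work, but each machine's runtime is billed rounded up to whole hours, so cost is minimal when n divides the total simulation time in hours.

```python
import math

HOURLY_RATE = 1.0  # assumed price per machine-hour (arbitrary units)

def completion_time(total_cpu_hours, n_machines):
    """Ideal parallel completion time: work divides evenly, so time ~ 1/n."""
    return total_cpu_hours / n_machines

def billed_cost(total_cpu_hours, n_machines, rate=HOURLY_RATE):
    """Cloud billing rounds each machine's runtime up to whole hours,
    so cost is lowest when n is a factor of the total time in hours."""
    hours_per_machine = math.ceil(total_cpu_hours / n_machines)
    return n_machines * hours_per_machine * rate

# A 12 CPU-hour simulation: 4 machines finish in 3 h and waste nothing,
# while 5 machines finish sooner but pay for 5 * 3 = 15 machine-hours.
print(billed_cost(12, 4))  # 12.0
print(billed_cost(12, 5))  # 15.0
```

This reproduces the paper's qualitative finding: completion time falls as 1/n, but partially used billing hours make some machine counts more cost-efficient than others.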
NASA Astrophysics Data System (ADS)
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms and of modeling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. Simulation results indicate that if the number of cores used is not a multiple of the total number of cores per cluster node, there are allocation strategies that provide more efficient calculations.
Simulating complex intracellular processes using object-oriented computational modelling.
Johnson, Colin G; Goldman, Jacki P; Gullick, William J
2004-11-01
The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation.
Sinking bubbles in stout beers
NASA Astrophysics Data System (ADS)
Lee, W. T.; Kaar, S.; O'Brien, S. B. G.
2018-04-01
A surprising phenomenon witnessed by many is the sinking bubbles seen in a settling pint of stout beer. Bubbles are less dense than the surrounding fluid so how does this happen? Previous work has shown that the explanation lies in a circulation of fluid promoted by the tilted sides of the glass. However, this work has relied heavily on computational fluid dynamics (CFD) simulations. Here, we show that the phenomenon of sinking bubbles can be predicted using a simple analytic model. To make the model analytically tractable, we work in the limit of small bubbles and consider a simplified geometry. The model confirms both the existence of sinking bubbles and the previously proposed mechanism.
NASA Astrophysics Data System (ADS)
Ikeda, Kazushi; Mima, Hiroki; Inoue, Yuta; Shibata, Tomohiro; Fukaya, Naoki; Hitomi, Kentaro; Bando, Takashi
The paper proposes a rear-end collision warning system for drivers, where the collision risk is adaptively set from driving signals. The system employs the inverse of the time-to-collision under a constant relative acceleration as the risk measure and a one-class support vector machine as the anomaly detector. The system also utilizes brake sequences for outlier detection: when a brake sequence has a low likelihood with respect to trained hidden Markov models, the driving data during that sequence are removed from the training dataset. This data selection is confirmed by computer simulations to increase the robustness of the system.
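The risk measure described above, the inverse of the time-to-collision (TTC) under constant relative acceleration, can be sketched as follows (a minimal illustration, not the authors' implementation; variable names and conventions are assumptions):

```python
import math

def time_to_collision(gap, v_rel, a_rel):
    """Time until the gap to the lead vehicle closes, assuming constant
    relative acceleration. gap: distance (m); v_rel: closing speed
    (m/s, positive when approaching); a_rel: closing acceleration (m/s^2).
    Returns math.inf when no collision is predicted."""
    if abs(a_rel) < 1e-9:  # constant-velocity special case
        return gap / v_rel if v_rel > 0 else math.inf
    # Solve 0.5*a_rel*t^2 + v_rel*t - gap = 0 for the smallest positive t.
    disc = v_rel ** 2 + 2.0 * a_rel * gap
    if disc < 0:
        return math.inf  # closing motion reverses before the gap closes
    roots = [(-v_rel + s * math.sqrt(disc)) / a_rel for s in (+1, -1)]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else math.inf

def collision_risk(gap, v_rel, a_rel):
    """Inverse TTC: larger values indicate more imminent collisions."""
    ttc = time_to_collision(gap, v_rel, a_rel)
    return 0.0 if math.isinf(ttc) else 1.0 / ttc

# 20 m gap, closing at 10 m/s with no relative acceleration:
print(collision_risk(20.0, 10.0, 0.0))  # 0.5 (TTC = 2 s)
```

The anomaly-detection and HMM-based data-selection stages of the paper would then operate on sequences of this risk signal.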
NASA Astrophysics Data System (ADS)
Lähivaara, Timo; Kärkkäinen, Leo; Huttunen, Janne M. J.; Hesthaven, Jan S.
2018-02-01
We study the feasibility of data-based machine learning applied to ultrasound tomography to estimate water-saturated porous material parameters. In this work, the data to train the neural networks is simulated by solving wave propagation in coupled poroviscoelastic-viscoelastic-acoustic media. As the forward model, we consider a high-order discontinuous Galerkin method, while deep convolutional neural networks are used to solve the parameter estimation problem. In the numerical experiment, we estimate the material porosity and tortuosity, while the remaining parameters, which are of less interest, are successfully marginalized in the neural-network-based inversion. Computational examples confirm the feasibility and accuracy of this approach.
Directed motion of a Brownian motor in a temperature gradient
NASA Astrophysics Data System (ADS)
Liu, Yibing; Nie, Wenjie; Lan, Yueheng
2017-05-01
Directed motion of mesoscopic systems in a non-equilibrium environment is of great interest to both scientists and engineers. Here, the translation and rotation of a Brownian motor are investigated under non-equilibrium conditions. An anomalous directed translation is found if the two heads of the Brownian motor are immersed in baths with different particle masses, which is hinted at by the analytic computation and confirmed by the numerical simulation. A similar consideration is also used to find directed movement in the single rotational and translational degrees of freedom of the Brownian motor when it resides in one thermal bath with a temperature gradient.
NASA Astrophysics Data System (ADS)
Mai, Wenjie; Zhang, Long; Gu, Yudong; Huang, Shiqing; Zhang, Zongfu; Lao, Changshi; Yang, Peihua; Qiang, Pengfei; Chen, Zhongwei
2012-08-01
With assistance from a nano-manipulator system inside a scanning electron microscope chamber, mechanical and electrical properties of ZnO nanorings were investigated. The change of a fractured nanoring to nearly straight nanobelts was strong evidence to support the previously proposed electrostatic-force-induced self-coiling model, and our computational simulation results indicated the fracture force was 25-30 μN. The contact between a tungsten tip of the manipulator and a ZnO nanoring was confirmed as the Schottky type; therefore, the change of I-V curves of the nanoring under compression was attributed to the Schottky barrier height changes.
The thiocyanate anion is a primary driver of carbon dioxide capture by ionic liquids
NASA Astrophysics Data System (ADS)
Chaban, Vitaly
2015-01-01
Carbon dioxide, CO2, capture by room-temperature ionic liquids (RTILs) is a vivid research area featuring both accomplishments and frustrations. This work employs the PM7-MD method to simulate adsorption of CO2 by 1,3-dimethylimidazolium thiocyanate at 300 K. The obtained results provide evidence that the thiocyanate anion plays a key role in gas capture, whereas the impact of the 1,3-dimethylimidazolium cation is minor. Decomposition of the computed wave function into individual molecular orbitals confirms that CO2-SCN binding extends beyond the expected electrostatic interactions in the ion-molecular system and involves partial sharing of valence orbitals.
Optical Forging of Graphene into Three-Dimensional Shapes.
Johansson, Andreas; Myllyperkiö, Pasi; Koskinen, Pekka; Aumanen, Jukka; Koivistoinen, Juha; Tsai, Hung-Chieh; Chen, Chia-Hao; Chang, Lo-Yueh; Hiltunen, Vesa-Matti; Manninen, Jyrki J; Woon, Wei Yen; Pettersson, Mika
2017-10-11
Atomically thin materials, such as graphene, are the ultimate building blocks for nanoscale devices. But although their synthesis and handling are routine today, all efforts thus far have been restricted to flat natural geometries, since the means to control their three-dimensional (3D) morphology has remained elusive. Here we show that, just as a blacksmith uses a hammer to forge a metal sheet into 3D shapes, a pulsed laser beam can forge a graphene sheet into controlled 3D shapes at the nanoscale. The forging mechanism is based on laser-induced local expansion of graphene, as confirmed by computer simulations using thin-sheet elasticity theory.
Comparison of experiments and computations for cold gas spraying through a mask. Part 2
NASA Astrophysics Data System (ADS)
Klinkov, S. V.; Kosarev, V. F.; Ryashin, N. S.
2017-03-01
This paper presents experimental and simulation results for cold spray coating deposition using a mask placed above a plane substrate at different distances. Velocities of aluminum (mean size 30 μm) and copper (mean size 60 μm) particles in the vicinity of the mask are determined. It was found that the particle velocities have an angular distribution in the flow with a representative standard deviation of 1.5-2 degrees. A model of coating formation behind the mask accounting for this distribution was developed. The results of the model agree with experimental data, confirming the importance of the particle angular distribution for the coating deposition process in the masked area.
Mixtures of GAMs for habitat suitability analysis with overdispersed presence / absence data
Pleydell, David R.J.; Chrétien, Stéphane
2009-01-01
A new approach to species distribution modelling based on unsupervised classification via a finite mixture of GAMs incorporating habitat suitability curves is proposed. A tailored EM algorithm is outlined for computing maximum likelihood estimates. Several submodels incorporating various parameter constraints are explored. Simulation studies confirm that, under certain constraints, the habitat suitability curves are recovered with good precision. The method is also applied to a set of real data concerning the presence/absence of observable small mammal indices collected on the Tibetan plateau. The resulting classification was found to correspond to species-level differences in habitat preference described in previous ecological work. PMID:20401331
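The E-step/M-step structure of such a mixture fit can be illustrated with a generic two-component example (here a 1-D Gaussian mixture for brevity; the paper's model replaces the component likelihoods with GAM-based presence/absence models, so this is only a structural sketch):

```python
import math
import random

def em_two_gaussians(xs, iters=200):
    """Generic EM for a two-component 1-D Gaussian mixture, showing the
    alternation of E-steps (responsibilities) and M-steps (re-estimation)."""
    mu = [min(xs), max(xs)]       # crude but well-separated initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        # The common 1/sqrt(2*pi) factor cancels in the normalization.
        resp = []
        for x in xs:
            w = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k]))
                 / math.sqrt(var[k]) for k in (0, 1)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M-step: re-estimate weights, means, variances from responsibilities.
        for k in (0, 1):
            nk = max(sum(r[k] for r in resp), 1e-12)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return pi, mu, var

random.seed(0)
xs = ([random.gauss(0.0, 1.0) for _ in range(200)]
      + [random.gauss(10.0, 1.0) for _ in range(200)])
weights, means, variances = em_two_gaussians(xs)
# For this well-separated sample, means[0] is near 0 and means[1] near 10.
```

The paper's tailored EM differs in the component densities and the parameter constraints, but follows the same responsibility/re-estimation loop.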
Shock waves generated by sudden expansions of a water jet
NASA Astrophysics Data System (ADS)
Salinas-Vázquez, M.; Echeverría, C.; Porta, D.; Stern, C. E.; Ascanio, G.; Vicente, W.; Aguayo, J. P.
2018-07-01
Direct shadowgraphy with parallel light combined with high-speed recording has been used to analyze the water jet of a cutting machine. Image processing allowed us to observe sudden expansions in the jet diameter as well as to estimate the jet velocity by means of the Mach angle, yielding velocities of about 500 m s^{-1}. The technique used here revealed the development of hydrodynamic instabilities in the jet. Additionally, this is the first report of the onset of shock waves generated by small fluctuations of a continuous flow of water at high velocity surrounded by air, a result confirmed by a transient computational fluid dynamics simulation.
Graphene-based room-temperature implementation of a modified Deutsch-Jozsa quantum algorithm.
Dragoman, Daniela; Dragoman, Mircea
2015-12-04
We present an implementation of a one-qubit and two-qubit modified Deutsch-Jozsa quantum algorithm based on graphene ballistic devices working at room temperature. The modified Deutsch-Jozsa algorithm decides whether a function, equivalent to the effect of an energy potential distribution on the wave function of ballistic charge carriers, is constant or not, without measuring the output wave function. The function need not be Boolean. Simulations confirm that the algorithm works properly, opening the way toward quantum computing at room temperature based on the same clean-room technologies as those used for fabrication of very-large-scale integrated circuits.
Minimizing Dispersion in FDTD Methods with CFL Limit Extension
NASA Astrophysics Data System (ADS)
Sun, Chen
The CFL extension in FDTD methods has received considerable attention as a way to reduce computational effort and save simulation time. One of the major issues in CFL extension methods is the increased dispersion. We formulate a decomposition of the FDTD equations to study the behaviour of the dispersion. A compensation scheme to reduce the dispersion under CFL extension is constructed and proposed. We further study CFL extension in an FDTD subgridding case, where we improve the accuracy by acting only on the FDTD equations of the fine grid. Numerical results confirm the efficiency of the proposed method for minimising dispersion.
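For reference, the increased dispersion mentioned above can be seen in the standard numerical dispersion relation of the 1-D Yee scheme (textbook material, not the thesis's own decomposition):

```latex
\frac{1}{(c\,\Delta t)^2}\,\sin^2\!\left(\frac{\omega\,\Delta t}{2}\right)
 \;=\; \frac{1}{(\Delta x)^2}\,\sin^2\!\left(\frac{k\,\Delta x}{2}\right)
```

This reduces to the exact relation ω = ck only in the limit Δt, Δx → 0. Enlarging Δt beyond the 1-D CFL limit c Δt ≤ Δx increases the mismatch between the numerical and physical phase velocities, which is precisely the dispersion error the proposed compensation scheme targets.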
A Novel Color Image Encryption Algorithm Based on Quantum Chaos Sequence
NASA Astrophysics Data System (ADS)
Liu, Hui; Jin, Cong
2017-03-01
In this paper, a novel algorithm for image encryption based on quantum chaos is proposed. The keystreams are generated by a two-dimensional logistic map, with the keys serving as initial conditions and parameters. A general Arnold scrambling algorithm with keys is then exploited to permute the pixels of the color components. In the diffusion process, a novel encryption algorithm, the folding algorithm, is proposed to modify the values of the diffused pixels. In order to obtain high randomness and complexity, the two-dimensional logistic map and the quantum chaotic map are coupled with nearest-neighboring coupled-map lattices. Theoretical analyses and computer simulations confirm that the proposed algorithm has a high level of security.
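The chaos-based keystream idea can be illustrated with a minimal sketch (using the classic one-dimensional logistic map and a plain XOR diffusion for brevity; the paper itself uses a two-dimensional logistic map, a quantum chaotic map, Arnold scrambling, and the proposed folding algorithm, none of which are reproduced here):

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Generate n keystream bytes from the logistic map x -> r*x*(1-x).
    x0 in (0, 1) and r near 4 act as the secret key; the first burn_in
    iterates are discarded so the orbit settles into the chaotic regime."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)  # quantize the orbit to a byte
    return bytes(stream)

def xor_diffuse(data, key_bytes):
    """Symmetric diffusion step: XOR each pixel byte with the keystream.
    Applying it twice with the same key recovers the original data."""
    return bytes(b ^ k for b, k in zip(data, key_bytes))

pixels = bytes(range(16))                           # toy "image"
ks = logistic_keystream(0.3671, 3.9999, len(pixels))
cipher = xor_diffuse(pixels, ks)
assert xor_diffuse(cipher, ks) == pixels            # decryption inverts encryption
```

Sensitivity to x0 and r gives the large key space that such schemes rely on; the paper's coupled-map-lattice construction strengthens this basic idea.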
Shape of isolated domains in lithium tantalate single crystals at elevated temperatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shur, V. Ya., E-mail: vladimir.shur@usu.ru; Akhmatkhanov, A. R.; Baturin, I. S.
2013-12-09
The shape of isolated domains has been investigated in congruent lithium tantalate (CLT) single crystals at elevated temperatures and analyzed in terms of a kinetic approach. The observed temperature dependence of the growing domain shape in CLT, including a circular shape at temperatures above 190 °C, has been attributed to an increase in the relative contribution of isotropic ionic conductivity. The observed nonstop wall motion and independent domain growth after merging in CLT, as opposed to stoichiometric lithium tantalate, have been attributed to differences in wall orientation. Computer simulation has confirmed the applicability of the kinetic approach to explaining the domain shape.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrés, Nahuel, E-mail: nandres@iafe.uba.ar; Gómez, Daniel; Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellón I, 1428, Buenos Aires
We present a study of collisionless magnetic reconnection within the framework of full two-fluid MHD for a completely ionized hydrogen plasma, retaining the effects of the Hall current, electron pressure, and electron inertia. We performed 2.5D simulations using a pseudo-spectral code with no dissipative effects. We check that the ideal invariants of the problem are conserved down to round-off errors. Our numerical results confirm that the change in the topology of the magnetic field lines is exclusively due to the presence of electron inertia. The computed reconnection rates remain a fair fraction of the Alfvén velocity, which therefore qualifies as fast reconnection.
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, their main limitation is the high computational cost of large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABMs and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal-code subregion in turn; the second processed the entire population simultaneously. The parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to perform a country-wide simulation. This parallel algorithm thus makes it possible to use ABMs for large-scale simulation with limited computational resources.
Quantum chemistry simulation on quantum computers: theories and experiments.
Lu, Dawei; Xu, Boruo; Xu, Nanyang; Li, Zhaokai; Chen, Hongwei; Peng, Xinhua; Xu, Ruixue; Du, Jiangfeng
2012-07-14
It has been claimed that quantum computers can mimic quantum systems efficiently, with polynomially scaling resources. Traditionally, such simulations are carried out numerically on classical computers, which are inevitably confronted with an exponential growth of required resources as the size of the quantum system increases. Quantum computers avoid this problem and thus provide a possible solution for large quantum systems. In this paper, we first discuss the ideas of quantum simulation, the background of quantum simulators, their categories, and the development in both theory and experiments. We then present a brief introduction to quantum chemistry evaluated via classical computers, followed by typical procedures of quantum simulation towards quantum chemistry. Reviewed are not only theoretical proposals but also proof-of-principle experimental implementations, via a small quantum computer, which include the evaluation of static molecular eigenenergies and the simulation of chemical reaction dynamics. Although experimental development still lags behind theory, we give prospects and suggestions for future experiments. We anticipate that in the near future quantum simulation will become a powerful tool for quantum chemistry, surpassing classical computations.
An analysis of the 70-meter antenna hydrostatic bearing by means of computer simulation
NASA Technical Reports Server (NTRS)
Bartos, R. D.
1993-01-01
Recently, the computer program 'A Computer Solution for Hydrostatic Bearings with Variable Film Thickness,' used to design the hydrostatic bearing of the 70-meter antennas, was modified to improve the accuracy with which it predicts the film height profile and oil pressure distribution between the hydrostatic bearing pad and the runner. This article presents a description of the modified computer program, the theory upon which its computations are based, and computer simulation results, together with a discussion of those results.
Comparative Implementation of High Performance Computing for Power System Dynamic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng
Dynamic simulation for transient stability assessment is one of the most important, but most computationally intensive, tasks in power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming on a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising for accelerating the computing process by parallelizing the kernel algorithms while maintaining the same level of computational accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.
Gupta, Jasmine; Nunes, Cletus; Vyas, Shyam; Jonnalagadda, Sriramakamal
2011-03-10
The objectives of this study were (i) to develop a computational model based on molecular dynamics techniques to predict the miscibility of indomethacin in carriers (polyethylene oxide, glucose, and sucrose) and (ii) to experimentally verify the in silico predictions by characterizing the drug-carrier mixtures using thermoanalytical techniques. Molecular dynamics (MD) simulations were performed using the COMPASS force field, and the cohesive energy density and the solubility parameters were determined for the model compounds. The magnitude of the difference in the solubility parameters of drug and carrier is indicative of their miscibility. The MD simulations predicted indomethacin to be miscible with polyethylene oxide, borderline miscible with sucrose, and immiscible with glucose. The solubility parameter values obtained from the MD simulations were in reasonable agreement with those calculated using group contribution methods. Differential scanning calorimetry showed melting point depression of polyethylene oxide with increasing levels of indomethacin, accompanied by peak broadening, confirming miscibility. In contrast, thermal analysis of blends of indomethacin with sucrose and glucose indicated general immiscibility. The findings demonstrate that molecular modeling is a powerful technique for determining solubility parameters and predicting the miscibility of pharmaceutical compounds. © 2011 American Chemical Society
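The link between cohesive energy density (CED) and miscibility follows the Hildebrand relation δ = √CED, and the difference Δδ is commonly interpreted with the rule of thumb of Greenhalgh et al. (Δδ < 7 MPa^0.5 suggests miscibility, > 10 suggests immiscibility). The sketch below uses illustrative numbers, not the study's actual values:

```python
import math

def solubility_parameter(ced_j_per_m3):
    """Hildebrand solubility parameter: delta = sqrt(CED).
    CED in J/m^3 (= Pa) gives delta in Pa^0.5; dividing by 1e3
    converts to the conventional MPa^0.5."""
    return math.sqrt(ced_j_per_m3) / 1e3

def predict_miscibility(delta_drug, delta_carrier):
    """Rule-of-thumb interpretation of the solubility parameter difference
    (Greenhalgh et al.): < 7 MPa^0.5 miscible, > 10 MPa^0.5 immiscible."""
    d = abs(delta_drug - delta_carrier)
    if d < 7.0:
        return "likely miscible"
    if d > 10.0:
        return "likely immiscible"
    return "borderline"

# Illustrative (hypothetical) values in MPa^0.5:
print(solubility_parameter(4.84e8))        # 22.0 MPa^0.5
print(predict_miscibility(22.0, 20.0))     # likely miscible
```

The study's MD workflow computes the CED from the simulated ensemble and applies essentially this comparison to rank drug-carrier pairs.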
Dănilă, R; Gerdes, B; Ulrike, H; Domínguez Fernández, E; Hassan, I
2009-01-01
The learning curve in laparoscopic surgery may be associated with higher patient risk, which is unacceptable in the setting of kidney donation. Virtual reality simulators may increase the safety and efficiency of training in laparoscopic surgery. The aim of this study was to investigate whether the results of a training session reflect the actual skill level of transplantation surgeons and whether the simulator could differentiate laparoscopically experienced transplantation surgeons from advanced trainees. Sixteen subjects were assigned to one of two groups: 5 experienced transplantation surgeons and 11 advanced residents who had only an assistant role during transplantation. The level of performance was measured by a relative scoring system that combines single parameters assessed by the computer. The higher a participant's level of transplantation experience, the higher the laparoscopic performance. Experienced transplantation surgeons showed statistically significantly better scores than the advanced group for time and precision parameters. Our results show that performance of the various tasks on the simulator corresponds to the respective level of experience in transplantation surgery in our research groups. This study confirms construct validity for the LapSim. It thus measures relevant skills and can be integrated into an endoscopic training and assessment curriculum for transplantation surgeons.
NASA Astrophysics Data System (ADS)
Chien, Cheng-Chih
In the past thirty years, the effectiveness of computer-assisted learning has varied across individual studies. Today, with drastic technical improvements, computers are widespread in schools and used in a variety of ways. In this study, a design model involving educational technology, pedagogy, and a content domain is proposed for the effective use of computers in learning. Computer simulation, constructivist and Vygotskian perspectives, and circular motion are the three elements of the specific Chain Model for instructional design. The goal of the physics course is to help students discard ideas that are inconsistent with those of the physics community and rebuild new knowledge. To achieve this learning goal, the strategies of using conceptual conflicts and using language to internalize specific tasks into mental functions were included. Computer simulations and accompanying worksheets were used to help students explore their own ideas and to generate questions for discussion. Using animated images to describe the dynamic processes involved in circular motion may reduce the complexity and possible miscommunication resulting from verbal explanations. The effectiveness of the instructional material on student learning was evaluated. The results of the problem-solving activities show that students using computer simulations had significantly higher scores than students not using computer simulations. For conceptual understanding, students in the non-simulation group had significantly higher pretest scores than students in the simulation group, while no significant difference was observed between the two groups on the posttest. The relations of gender, prior physics experience, and frequency of computer use outside the course to student achievement were also studied. There were fewer female students than male students and fewer students using computer simulations than students not using them.
These characteristics affect the statistical power for detecting differences. In future research, more simulation-based interventions could be introduced to explore the potential of computer simulation for helping students learn, and a test of conceptual understanding with more problems and an appropriate difficulty level may be needed.
van Kempen, Bob J H; Ferket, Bart S; Hofman, Albert; Steyerberg, Ewout W; Colkesen, Ersen B; Boekholdt, S Matthijs; Wareham, Nicholas J; Khaw, Kay-Tee; Hunink, M G Myriam
2012-12-06
We developed a Monte Carlo Markov model designed to investigate the effects of modifying cardiovascular disease (CVD) risk factors on the burden of CVD. Internal, predictive, and external validity of the model have not yet been established. The Rotterdam Ischemic Heart Disease and Stroke Computer Simulation (RISC) model was developed using data covering 5 years of follow-up from the Rotterdam Study. To prove 1) internal and 2) predictive validity, the incidences of coronary heart disease (CHD), stroke, CVD death, and non-CVD death simulated by the model over a 13-year period were compared with those recorded for 3,478 participants in the Rotterdam Study with at least 13 years of follow-up. 3) External validity was verified using 10 years of follow-up data from the European Prospective Investigation of Cancer (EPIC)-Norfolk study of 25,492 participants, for whom CVD and non-CVD mortality was compared. At year 5, the observed incidences (with simulated incidences in brackets) of CHD, stroke, and CVD and non-CVD mortality for the 3,478 Rotterdam Study participants were 5.30% (4.68%), 3.60% (3.23%), 4.70% (4.80%), and 7.50% (7.96%), respectively. At year 13, these percentages were 10.60% (10.91%), 9.90% (9.13%), 14.20% (15.12%), and 24.30% (23.42%). After recalibrating the model for the EPIC-Norfolk population, the 10-year observed (simulated) incidences of CVD and non-CVD mortality were 3.70% (4.95%) and 6.50% (6.29%). All observed incidences fell well within the 95% credibility intervals of the simulated incidences. We have confirmed the internal, predictive, and external validity of the RISC model. These findings provide a basis for analyzing the effects of modifying cardiovascular disease risk factors on the burden of CVD with the RISC model.
Paper simulation techniques in user requirements analysis for interactive computer systems
NASA Technical Reports Server (NTRS)
Ramsey, H. R.; Atwood, M. E.; Willoughby, J. K.
1979-01-01
This paper describes the use of a technique called 'paper simulation' in the analysis of user requirements for interactive computer systems. In a paper simulation, the user solves problems with the aid of a 'computer', as in normal man-in-the-loop simulation. In this procedure, though, the computer does not exist but is simulated by the experimenters. This allows simulated problem solving early in the design effort and allows the properties and degree of structure of the system and its dialogue to be varied. The technique, and a method of analyzing the results, are illustrated with examples from a recent paper simulation exercise involving a Space Shuttle flight design task.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo, Wangda; McNeil, Andrew; Wetter, Michael
2013-05-23
Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configuration, the simulation can take hours or even days on a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on the heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
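The dominant cost in a three-phase daylight simulation is a chain of large matrix multiplications, which is what the paper offloads to OpenCL. As a hedged illustration of the same partitioning idea (not the authors' implementation), the multiplication can be split across worker threads, since NumPy releases the GIL inside its BLAS calls:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_matmul(A, B, n_chunks=4):
    """Compute A @ B by splitting A's rows into chunks and dispatching
    each chunk to a worker thread."""
    chunks = np.array_split(np.arange(A.shape[0]), n_chunks)
    out = np.empty((A.shape[0], B.shape[1]), dtype=np.result_type(A, B))
    def work(rows):
        out[rows] = A[rows] @ B   # each worker fills its own row block
    with ThreadPoolExecutor(max_workers=n_chunks) as ex:
        list(ex.map(work, chunks))
    return out

rng = np.random.default_rng(0)
A, B = rng.random((256, 128)), rng.random((128, 64))
C = parallel_matmul(A, B)
```

The row-block decomposition is embarrassingly parallel, which is why GPUs and multicore CPUs both accelerate this step well.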
Virtual manufacturing work cell for engineering
NASA Astrophysics Data System (ADS)
Watanabe, Hideo; Ohashi, Kazushi; Takahashi, Nobuyuki; Kato, Kiyotaka; Fujita, Satoru
1997-12-01
The life cycles of products have been getting shorter. To meet this rapid turnover, manufacturing systems must be changed frequently as well. Engineering a manufacturing system involves several tasks, such as process planning, layout design, programming, and final testing using actual machines; this development takes a long time and is expensive. To aid this engineering process, we have developed the virtual manufacturing workcell (VMW). This paper describes the concept of the VMW and a design method based on computer-aided manufacturing engineering using the VMW (CAME-VMW) for the above engineering tasks. The VMW holds all design data and reproduces the behavior of equipment and devices using a simulator. The simulator has both logical and physical functionality: the former simulates sequence control, while the latter simulates motion control and shape movement in 3D space. The simulator can execute the same control software written for the actual machines, so the behavior can be verified precisely before the manufacturing workcell is constructed. The VMW creates an engineering work space for several engineers and offers debugging tools such as virtual equipment and virtual controllers. We applied the VMW to the development of a transfer workcell for a vaporization machine in an actual manufacturing system producing plasma display panels (PDPs), and confirmed its effectiveness.
Stadlbauer, Petr; Krepl, Miroslav; Cheatham, Thomas E.; Koča, Jaroslav; Šponer, Jiří
2013-01-01
Explicit solvent molecular dynamics simulations have been used to complement preceding experimental and computational studies of folding of guanine quadruplexes (G-DNA). We initiate early stages of unfolding of several G-DNAs by simulating them under no-salt conditions and then try to fold them back using standard excess salt simulations. There is a significant difference between G-DNAs with all-anti parallel stranded stems and those with stems containing mixtures of syn and anti guanosines. The most natural rearrangement for all-anti stems is a vertical mutual slippage of the strands. This leads to stems with reduced numbers of tetrads during unfolding and a reduction of strand slippage during refolding. The presence of syn nucleotides prevents mutual strand slippage; therefore, the antiparallel and hybrid quadruplexes initiate unfolding via separation of the individual strands. The simulations confirm the capability of G-DNA molecules to adopt numerous stable locally and globally misfolded structures. The key point for a proper individual folding attempt appears to be correct prior distribution of syn and anti nucleotides in all four G-strands. The results suggest that at the level of individual molecules, G-DNA folding is an extremely multi-pathway process that is slowed by numerous misfolding arrangements stabilized on highly variable timescales. PMID:23700306
Staged-Fault Testing of Distance Protection Relay Settings
NASA Astrophysics Data System (ADS)
Havelka, J.; Malarić, R.; Frlan, K.
2012-01-01
In order to analyze the operation of the protection system during induced fault testing in the Croatian power system, a simulation using the CAPE software has been performed. The CAPE software (Computer-Aided Protection Engineering) is expert software intended primarily for relay protection engineers, which calculates current and voltage values during faults in the power system, so that relay protection devices can be properly set up. Once the accuracy of the simulation model had been confirmed, a series of simulations were performed in order to obtain the optimal fault location to test the protection system. The simulation results were used to specify the test sequence definitions for the end-to-end relay testing using advanced testing equipment with GPS synchronization for secondary injection in protection schemes based on communication. The objective of the end-to-end testing was to perform field validation of the protection settings, including verification of the circuit breaker operation, telecommunication channel time and the effectiveness of the relay algorithms. Once the end-to-end secondary injection testing had been completed, the induced fault testing was performed with three-end lines loaded and in service. This paper describes and analyses the test procedure, consisting of CAPE simulations, end-to-end test with advanced secondary equipment and staged-fault test of a three-end power line in the Croatian transmission system.
Computational composite mechanics for aerospace propulsion structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1986-01-01
Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating (1) complex composite structural behavior in general and (2) specific aerospace propulsion structural components in particular.
Computational composite mechanics for aerospace propulsion structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1987-01-01
Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating: (1) complex composite structural behavior in general, and (2) specific aerospace propulsion structural components in particular.
NASA Technical Reports Server (NTRS)
Curran, R. T.; Hornfeck, W. A.
1972-01-01
The functional requirements for the design of an interpretive simulator for the space ultrareliable modular computer (SUMC) are presented. A review of applicable existing computer simulations is included along with constraints on the SUMC simulator functional design. Input requirements, output requirements, and language requirements for the simulator are discussed in terms of a SUMC configuration which may vary according to the application.
MAGIC Computer Simulation. Volume 2: Analyst Manual, Part 1
1971-05-01
A review of the MAGIC Computer Simulation User and Analyst Manuals has been conducted based upon a request received from the US Army. The MAGIC computer simulation generates target description data consisting of item-by-item listings of the target's components and air...
Computational simulation of progressive fracture in fiber composites
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1986-01-01
Computational methods for simulating and predicting progressive fracture in fiber composite structures are presented. These methods are integrated into a computer code of modular form. The modules include composite mechanics, finite element analysis, and fracture criteria. The code is used to computationally simulate progressive fracture in composite laminates with and without defects. The simulation tracks the fracture progression in terms of modes initiating fracture, damage growth, and imminent global (catastrophic) laminate fracture.
ERIC Educational Resources Information Center
Hart, Jeffrey A.
1985-01-01
Presents a discussion of how computer simulations are used in two undergraduate social science courses and a faculty computer literacy course on simulations and artificial intelligence. Includes a list of 60 simulations for use on mainframes and microcomputers. Entries include type of hardware required, publisher's address, and cost. Sample…
Learning Oceanography from a Computer Simulation Compared with Direct Experience at Sea
ERIC Educational Resources Information Center
Winn, William; Stahr, Frederick; Sarason, Christian; Fruland, Ruth; Oppenheimer, Peter; Lee, Yen-Ling
2006-01-01
Considerable research has compared how students learn science from computer simulations with how they learn from "traditional" classes. Little research has compared how students learn science from computer simulations with how they learn from direct experience in the real environment on which the simulations are based. This study compared two…
ERIC Educational Resources Information Center
Tang, Hui; Abraham, Michael R.
2016-01-01
Computer-based simulations can help students visualize chemical representations and understand chemistry concepts, but simulations at different levels of representation may vary in effectiveness on student learning. This study investigated the influence of computer activities that simulate chemical reactions at different levels of representation…
NASA Astrophysics Data System (ADS)
Long, M. S.; Yantosca, R.; Nielsen, J.; Linford, J. C.; Keller, C. A.; Payer Sulprizio, M.; Jacob, D. J.
2014-12-01
The GEOS-Chem global chemical transport model (CTM), used by a large atmospheric chemistry research community, has been reengineered to serve as a platform for a range of computational atmospheric chemistry science foci and applications. Development included modularization for coupling to general circulation and Earth system models (ESMs) and the adoption of co-processor capable atmospheric chemistry solvers. This was done using an Earth System Modeling Framework (ESMF) interface that operates independently of GEOS-Chem scientific code to permit seamless transition from the GEOS-Chem stand-alone serial CTM to deployment as a coupled ESM module. In this manner, the continual stream of updates contributed by the CTM user community is automatically available for broader applications, which remain state-of-science and directly referenceable to the latest version of the standard GEOS-Chem CTM. These developments are now available as part of the standard version of the GEOS-Chem CTM. The system has been implemented as an atmospheric chemistry module within the NASA GEOS-5 ESM. The coupled GEOS-5/GEOS-Chem system was tested for weak and strong scalability and performance with a tropospheric oxidant-aerosol simulation. Results confirm that the GEOS-Chem chemical operator scales efficiently for any number of processes. Although inclusion of atmospheric chemistry in ESMs is computationally expensive, the excellent scalability of the chemical operator means that the relative cost goes down with increasing number of processes, making fine-scale resolution simulations possible.
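The scaling behavior reported above, where the relative cost of a well-parallelized operator falls as the process count grows, is what Amdahl's law predicts. A sketch for intuition (the paper reports measured scaling; the fractions below are illustrative, not GEOS-Chem numbers):

```python
def amdahl_speedup(parallel_fraction, n_procs):
    """Amdahl's-law speedup: the serial remainder limits the overall gain
    no matter how well the parallel fraction scales."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_procs)

# A 90%-parallel code gains under 5x on 8 processes...
s8 = amdahl_speedup(0.90, 8)
# ...while a nearly perfectly parallel operator keeps scaling.
s8_chem = amdahl_speedup(0.999, 8)
```

An operator that scales efficiently at any process count behaves like the second case, so its share of total runtime shrinks as resolution and core counts grow.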
Computer-based simulation training in emergency medicine designed in the light of malpractice cases.
Karakuş, Akan; Duran, Latif; Yavuz, Yücel; Altintop, Levent; Calişkan, Fatih
2014-07-27
Using computer-based simulation systems in medical education is becoming more and more common. Although the benefits of practicing with these systems in medical education have been demonstrated, the advantages of using computer-based simulation in emergency medicine education are less validated. The aim of the present study was to assess the success rates of final-year medical students in performing emergency medical treatment and to evaluate the effectiveness of computer-based simulation training in improving final-year medical students' knowledge. Twenty-four students trained with computer-based simulation and completed at least 4 hours of simulation-based education between Feb 1, 2010 and May 1, 2010. A control group (traditionally trained, n = 24) was also chosen. After the end of training, students completed an examination on 5 randomized medical simulation cases. Across the 5 cases, students trained with computer-based simulation carried out an average of 3.9 correct medical approaches, compared with an average of 2.8 for the traditionally trained group (t = 3.90, p < 0.005). In cases requiring a complicated medical approach, the success of students trained with simulation was statistically higher than that of students without simulation training (p ≤ 0.05). Computer-based simulation training can be significantly effective for learning medical treatment algorithms. We believe such programs can improve students' success rates, especially in taking an adequate medical approach to complex emergency cases.
NASA Astrophysics Data System (ADS)
Sloan, Gregory James
The direct numerical simulation (DNS) offers the most accurate approach to modeling the behavior of a physical system, but carries an enormous computation cost. There exists a need for an accurate DNS to model the coupled solid-fluid system seen in targeted drug delivery (TDD), nanofluid thermal energy storage (TES), as well as other fields where experiments are necessary, but experiment design may be costly. A parallel DNS can greatly reduce the large computation times required, while providing the same results and functionality of the serial counterpart. A D2Q9 lattice Boltzmann method approach was implemented to solve the fluid phase. The use of domain decomposition with message passing interface (MPI) parallelism resulted in an algorithm that exhibits super-linear scaling in testing, which may be attributed to the caching effect. Decreased performance on a per-node basis for a fixed number of processes confirms this observation. A multiscale approach was implemented to model the behavior of nanoparticles submerged in a viscous fluid, and used to examine the mechanisms that promote or inhibit clustering. Parallelization of this model using a master-worker algorithm with MPI gives less-than-linear speedup for a fixed number of particles and a varying number of processes. This is due to the inherent inefficiency of the master-worker approach. Lastly, these separate simulations are combined, and two-way coupling is implemented between the solid and fluid.
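A D2Q9 lattice Boltzmann solver of the kind described revolves around a purely local collision step, which is exactly what makes domain decomposition scale so well. A minimal single-step sketch (NumPy standing in for the thesis's parallel implementation; the relaxation time and field shapes are illustrative):

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their quadrature weights.
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order equilibrium distribution (lattice units, c_s^2 = 1/3)."""
    cu = u @ C.T                                   # c_i . u per direction
    usq = np.sum(u**2, axis=-1, keepdims=True)
    return rho[..., None] * W * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def collide(f, tau=0.8):
    """One BGK relaxation step; conserves mass and momentum exactly."""
    rho = f.sum(axis=-1)
    u = (f @ C) / rho[..., None]
    return f + (equilibrium(rho, u) - f) / tau

# A uniform field already at equilibrium is a fixed point of collision.
f0 = equilibrium(np.ones((4, 4)), np.full((4, 4, 2), 0.05))
f1 = collide(f0)
```

Because collision touches only one lattice site at a time, each MPI rank can update its subdomain independently and exchange only boundary layers during streaming.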
Constructing Scientific Arguments Using Evidence from Dynamic Computational Climate Models
NASA Astrophysics Data System (ADS)
Pallant, Amy; Lee, Hee-Sun
2015-04-01
Modeling and argumentation are two important scientific practices students need to develop throughout school years. In this paper, we investigated how middle and high school students (N = 512) construct a scientific argument based on evidence from computational models with which they simulated climate change. We designed scientific argumentation tasks with three increasingly complex dynamic climate models. Each scientific argumentation task consisted of four parts: multiple-choice claim, open-ended explanation, five-point Likert scale uncertainty rating, and open-ended uncertainty rationale. We coded 1,294 scientific arguments in terms of a claim's consistency with current scientific consensus, whether explanations were model-based or knowledge-based, and categorized the sources of uncertainty (personal vs. scientific). We used chi-square and ANOVA tests to identify significant patterns. Results indicate that (1) a majority of students incorporated models as evidence to support their claims, (2) most students used model output results shown on graphs to confirm their claim rather than to explain simulated molecular processes, (3) students' dependence on model results and their uncertainty rating diminished as the dynamic climate models became more and more complex, (4) some students' misconceptions interfered with observing and interpreting model results or simulated processes, and (5) students' uncertainty sources reflected more frequently their assessment of personal knowledge or abilities related to the tasks than their critical examination of scientific evidence resulting from models. These findings have implications for teaching and research related to the integration of scientific argumentation and modeling practices to address complex Earth systems.
NASA Technical Reports Server (NTRS)
Reznick, Steve
1988-01-01
Transonic Euler/Navier-Stokes computations are accomplished for wing-body flow fields using a computer program called Transonic Navier-Stokes (TNS). The wing-body grids are generated using a program called ZONER, which subdivides a coarse grid about a fighter-like aircraft configuration into smaller zones, which are tailored to local grid requirements. These zones can be either finely clustered for capture of viscous effects, or coarsely clustered for inviscid portions of the flow field. Different equation sets may be solved in the different zone types. This modular approach also affords the opportunity to modify a local region of the grid without recomputing the global grid. This capability speeds up the design optimization process when quick modifications to the geometry definition are desired. The solution algorithm embodied in TNS is implicit, and is capable of capturing pressure gradients associated with shocks. The algebraic turbulence model employed has proven adequate for viscous interactions with moderate separation. Results confirm that the TNS program can successfully be used to simulate transonic viscous flows about complicated 3-D geometries.
NeuroManager: a workflow analysis based simulation management engine for computational neuroscience
Stockton, David B.; Santamaria, Fidel
2015-01-01
We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to supercomputer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This resulted in 22 stages of simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project. PMID:26528175
Methodology of modeling and measuring computer architectures for plasma simulations
NASA Technical Reports Server (NTRS)
Wang, L. P. T.
1977-01-01
A brief introduction is given to plasma simulation using computers and to the difficulties posed by currently available computers. Through the use of an analyzing and measuring methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization can be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.
A Computer-Based Simulation of an Acid-Base Titration
ERIC Educational Resources Information Center
Boblick, John M.
1971-01-01
Reviews the advantages of computer simulated environments for experiments, referring in particular to acid-base titrations. Includes pre-lab instructions and a sample computer printout of a student's use of an acid-base simulation. Ten references. (PR)
Software for Brain Network Simulations: A Comparative Study
Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.
2017-01-01
Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database, namely NEURON, GENESIS, and BRIAN, and perform an independent evaluation of these simulators. In addition, we study NEST, one of the lead simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability for high-performance computing reveals that NEST can almost transparently map an existing model on a cluster or multicore computer, while NEURON requires code modification if the model developed for a single computer has to be mapped on a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators.
Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators have a bias in the computational performance toward specific types of the brain network models. PMID:28775687
Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmalz, Mark S
2011-07-24
Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G' for a high-performance architecture. Key computational and data movement kernels of the application were analyzed and optimized for parallel execution using the mapping G -> G', which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed and profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy.
Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.
Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Chen, Yousu; Wu, Di
2015-12-09
Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
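The shared-memory strategy the paper describes amounts to partitioning the state vector of the differential-algebraic system among workers at each integration step. A toy Python analogue of that OpenMP-style split (not the paper's code; the linear dynamics below stand in for the machine equations):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def euler_step_partitioned(x, dt, deriv, n_workers=4):
    """One explicit integration step with the state vector split among
    workers, mimicking a shared-memory (OpenMP-style) partitioning."""
    dx = deriv(x)                      # evaluated once, shared by all workers
    parts = np.array_split(np.arange(x.size), n_workers)
    out = np.empty_like(x)
    def work(idx):
        out[idx] = x[idx] + dt * dx[idx]   # each worker updates its slice
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        list(ex.map(work, parts))
    return out

# Toy linear dynamics dx/dt = A x.
A = -0.5 * np.eye(8)
x1 = euler_step_partitioned(np.ones(8), 0.01, lambda v: A @ v)
```

In a distributed-memory (MPI) variant, each rank would own a slice of `x` permanently and exchange boundary values by message passing instead of sharing `out`.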
Particle-in-cell studies of fast-ion slowing-down rates in cool tenuous magnetized plasma
NASA Astrophysics Data System (ADS)
Evans, Eugene S.; Cohen, Samuel A.; Welch, Dale R.
2018-04-01
We report on 3D-3V particle-in-cell simulations of fast-ion energy-loss rates in a cold, weakly-magnetized, weakly-coupled plasma where the electron gyroradius, ρe, is comparable to or less than the Debye length, λDe, and the fast-ion velocity exceeds the electron thermal velocity, a regime in which the electron response may be impeded. These simulations use explicit algorithms, spatially resolve ρe and λDe, and temporally resolve the electron cyclotron and plasma frequencies. For mono-energetic dilute fast ions with isotropic velocity distributions, these scaling studies of the slowing-down time, τs, versus fast-ion charge are in agreement with unmagnetized slowing-down theory; with an applied magnetic field, no consistent anisotropy between τs in the cross-field and field-parallel directions could be resolved. Scaling the fast-ion charge is confirmed as a viable way to reduce the required computational time for each simulation. The implications of these slowing down processes are described for one magnetic-confinement fusion concept, the small, advanced-fuel, field-reversed configuration device.
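The charge-scaling shortcut mentioned above rests on classical slowing-down theory, in which the drag exerted by plasma electrons on a fast ion grows with the square of the ion's charge. A hedged one-line sketch (the reference time and all prefactors depend on plasma parameters not given in the abstract):

```python
def scaled_slowing_down_time(tau_z1, z):
    """Fast-ion slowing-down time versus charge number: electron drag
    scales as Z**2, so tau_s falls as 1/Z**2. tau_z1 is the slowing-down
    time of a reference Z = 1 ion in the same plasma (hypothetical value)."""
    return tau_z1 / z**2

# Doubling the charge quadruples the drag and quarters the slowing-down
# time, and with it the wall-clock time a simulation must cover.
tau_he = scaled_slowing_down_time(1.0e-3, 2)
```

This is why confirming the 1/Z^2 scaling in the simulations justifies using artificially high fast-ion charges to shorten runs.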
Analysis of Gravitational Signals from Core-Collapse Supernovae (CCSNe) using MatLab
NASA Astrophysics Data System (ADS)
Frere, Noah; Mezzacappa, Anthony; Yakunin, Konstantin
2017-01-01
When a massive star runs out of fuel, it collapses under its own weight and rebounds in a powerful supernova explosion, sending, among other things, ripples through space-time, known as gravitational waves (GWs). GWs can be detected by earth-based observatories, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO). Observers must compare the data from GW detectors with theoretical waveforms in order to confirm that the detection of a GW signal from a particular source has occurred. GW predictions for core collapse supernovae (CCSNe) rely on computer simulations. The UTK/ORNL astrophysics group has performed such simulations. Here, I analyze the resulting waveforms, using Matlab, to generate their Fourier transforms, short-time Fourier transforms, energy spectra, evolution of frequencies, and frequency maxima. One product will be a Matlab interface for analyzing and comparing GW predictions based on data from future simulations. This interface will make it easier to analyze waveforms and to share the results with the GW astrophysics community. Funding provided by Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996-1200, USA.
Local structure in anisotropic systems determined by molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Komolkin, Andrei V.; Maliniak, Arnold
In the present communication we describe the investigation of local structure using a new visualization technique. The approach is based on two-dimensional pair correlation functions derived from a molecular dynamics computer simulation. We have used this method to analyse a trajectory produced in a simulation of a nematic liquid crystal of 4-n-pentyl-4'-cyanobiphenyl (5CB) (Komolkin et al., 1994, J. chem. Phys., 101, 4103). The molecule is assumed to have cylindrical symmetry, and the liquid crystalline phase is treated as uniaxial. The pair correlation functions, or cylindrical distribution functions (CDFs), are calculated in the molecular (m) and laboratory (l) frames, gm(z12, d12) and gl(Z12, D12). Anisotropic molecular organization in the liquid crystal is reflected in the laboratory-frame CDFs. The molecular excluded volume is determined, and the effect of the fast motion in the alkyl chain is observed. The intramolecular distributions are included in the CDFs and indicate the size of the motional amplitude in the chain. The absence of long-range order, a feature typical of a nematic liquid crystal, was confirmed.
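The CDF construction rests on a two-dimensional pair histogram over axial and transverse separations. A minimal sketch follows, with binning chosen for illustration and the ideal-gas normalization that turns counts into g(z, d) omitted; none of this is the authors' code.

```python
from math import sqrt

# Toy sketch: accumulate a 2D pair histogram over axial separation z = |z_j - z_i|
# and transverse separation d = sqrt(dx^2 + dy^2), the raw ingredient of a
# cylindrical distribution function g(z, d). Normalization by the ideal-gas
# pair count (shell volume times density) is omitted for brevity.
def pair_histogram(coords, z_max, d_max, nbins):
    dz, dd = z_max / nbins, d_max / nbins
    hist = [[0] * nbins for _ in range(nbins)]
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            x1, y1, z1 = coords[i]
            x2, y2, z2 = coords[j]
            z = abs(z2 - z1)
            d = sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
            if z < z_max and d < d_max:
                hist[int(z / dz)][int(d / dd)] += 1
    return hist
```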
Näsi, Tiina; Mäki, Hanna; Hiltunen, Petri; Heiskala, Juha; Nissilä, Ilkka; Kotilahti, Kalle; Ilmoniemi, Risto J
2013-03-01
The effect of task-related extracerebral circulatory changes on diffuse optical tomography (DOT) of brain activation was evaluated using experimental data from 14 healthy human subjects and computer simulations. Total hemoglobin responses to weekday-recitation, verbal-fluency, and hand-motor tasks were measured with a high-density optode grid placed on the forehead. The tasks caused varying levels of mental and physical stress, eliciting extracerebral circulatory changes that the reconstruction algorithm was unable to fully distinguish from cerebral hemodynamic changes, resulting in artifacts in the brain activation images. Crosstalk between intra- and extracranial layers was confirmed by the simulations. The extracerebral effects were attenuated by superficial signal regression and depended to some extent on the heart rate, thus allowing identification of hemodynamic changes related to brain activation during the verbal-fluency task. During the hand-motor task, the extracerebral component was stronger, making the separation less clear. DOT provides a tool for distinguishing extracerebral components from signals of cerebral origin. Especially in the case of strong task-related extracerebral circulatory changes, however, sophisticated reconstruction methods are needed to eliminate crosstalk artifacts.
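The superficial signal regression used to attenuate extracerebral effects can be sketched as a one-regressor least-squares fit of the shallow-channel signal to each deep channel, followed by subtraction. The function below is a minimal sketch under that assumption (single regressor, no intercept), not the study's implementation.

```python
# Minimal sketch of superficial signal regression: estimate the scaling beta of
# the shallow (extracerebral) signal within a deep channel by least squares
# (normal equation for a single regressor, no intercept), then subtract it.
def superficial_regression(deep, shallow):
    beta = sum(d * s for d, s in zip(deep, shallow)) / sum(s * s for s in shallow)
    cleaned = [d - beta * s for d, s in zip(deep, shallow)]
    return cleaned, beta
```

The residual keeps only the part of the deep signal orthogonal to the shallow one, which is the intended estimate of the cerebral component.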
Perturbations of the Richardson number field by gravity waves
NASA Technical Reports Server (NTRS)
Wurtele, M. G.; Sharman, R. D.
1985-01-01
An analytic solution is presented for a stratified fluid of arbitrary constant Richardson number. By computer-aided analysis, the perturbation fields, including that of the Richardson number, can be calculated. The results of the linear analytic model were compared with nonlinear simulations, leading to the following conclusions: (1) the perturbations in the Richardson number field, when small, are produced primarily by the perturbations of the shear; (2) perturbations in the Richardson number field, even when small, are not symmetric, the increase being significantly larger than the decrease (the linear analytic solution and the nonlinear simulations both confirm this result); (3) as the perturbations grow, this asymmetry increases, but more so in the nonlinear simulations than in the linear analysis; (4) for large perturbations of the shear flow, the static stability, as represented by N², is the dominating mechanism, becoming zero or negative and producing convective overturning; and (5) the conventional measure of linearity in lee wave theory, NH/U, is no longer the critical parameter (it is suggested that (H/u₀)(du₀/dz) takes on this role in a shearing flow).
Combustion-Powered Actuation for Dynamic Stall Suppression - Simulations and Low-Mach Experiments
NASA Technical Reports Server (NTRS)
Matalanis, Claude G.; Min, Byung-Young; Bowles, Patrick O.; Jee, Solkeun; Wake, Brian E.; Crittenden, Tom; Woo, George; Glezer, Ari
2014-01-01
An investigation on dynamic-stall suppression capabilities of combustion-powered actuation (COMPACT) applied to a tabbed VR-12 airfoil is presented. In the first section, results from computational fluid dynamics (CFD) simulations carried out at Mach numbers from 0.3 to 0.5 are presented. Several geometric parameters are varied including the slot chordwise location and angle. Actuation pulse amplitude, frequency, and timing are also varied. The simulations suggest that cycle-averaged lift increases of approximately 4% and 8% with respect to the baseline airfoil are possible at Mach numbers of 0.4 and 0.3 for deep and near-deep dynamic-stall conditions. In the second section, static-stall results from low-speed wind-tunnel experiments are presented. Low-speed experiments and high-speed CFD suggest that slots oriented tangential to the airfoil surface produce stronger benefits than slots oriented normal to the chordline. Low-speed experiments confirm that chordwise slot locations suitable for Mach 0.3-0.4 stall suppression (based on CFD) will also be effective at lower Mach numbers.
NASA Astrophysics Data System (ADS)
Sperling, J.; Milota, F.; Tortschanoff, A.; Warmuth, Ch.; Mollay, B.; Bässler, H.; Kauffmann, H. F.
2002-12-01
We present a comprehensive experimental and computational study of the fs-relaxational dynamics of optical excitations in the conjugated polymer poly(p-phenylenevinylene) (PPV) under selective excitation tuning conditions into the long-wavelength, low-vibrational S1ν=0 density-of-states (DOS). The dependence of single-wavelength luminescence kinetics and time-windowed spectral transients on distinct initial excitation boundaries at 1.4 K and at room temperature was measured by applying the luminescence up-conversion technique. The typical energy-dispersive intra-DOS energy transfer was simulated by a combination of a static Monte Carlo method with a dynamical algorithm for solving the energy-space transport master equation in population space. For various selective excitations that give rise to specific S1-population distributions in distinct spatial and energetic subspaces inside the DOS, the simulations confirm the experimental results and show that the subsequent energy-dissipative, multilevel relaxation is hierarchically constrained and reveals a pronounced site-energy memory effect with a migration threshold, characteristic of the (dressed) excitation dynamics in the disordered PPV many-body system.
Particle-in-cell simulation of x-ray wakefield acceleration and betatron radiation in nanotubes
Zhang, Xiaomei; Tajima, Toshiki; Farinella, Deano; ...
2016-10-18
Though wakefield acceleration in crystal channels has been previously proposed, x-ray wakefield acceleration has only recently become a realistic possibility since the invention of the single-cycled optical laser compression technique. We investigate the acceleration due to a wakefield induced by a coherent, ultrashort x-ray pulse guided by a nanoscale channel inside a solid material. By two-dimensional particle-in-cell computer simulations, we show that an acceleration gradient of TeV/cm is attainable. This is about 3 orders of magnitude stronger than that of conventional plasma-based wakefield accelerators, which implies the possibility of an extremely compact scheme to attain ultrahigh energies. In addition to particle acceleration, this scheme can also induce the emission of high-energy photons at ~O(10–100) MeV. Our simulations confirm such high-energy photon emissions, in contrast with those induced by the optical-laser-driven wakefield scheme. In addition, the significantly improved emittance of the energetic electrons is discussed.
Energetics of codon-anticodon recognition on the small ribosomal subunit.
Almlöf, Martin; Andér, Martin; Aqvist, Johan
2007-01-09
Recent crystal structures of the small ribosomal subunit have made it possible to examine the detailed energetics of codon recognition on the ribosome by computational methods. The binding of cognate and near-cognate anticodon stem loops to the ribosome decoding center, with mRNA containing the Phe UUU and UUC codons, are analyzed here using explicit solvent molecular dynamics simulations together with the linear interaction energy (LIE) method. The calculated binding free energies are in excellent agreement with experimental binding constants and reproduce the relative effects of mismatches in the first and second codon position versus a mismatch at the wobble position. The simulations further predict that the Leu2 anticodon stem loop is about 10 times more stable than the Ser stem loop in complex with the Phe UUU codon. It is also found that the ribosome significantly enhances the intrinsic stability differences of codon-anticodon complexes in aqueous solution. Structural analysis of the simulations confirms the previously suggested importance of the universally conserved nucleotides A1492, A1493, and G530 in the decoding process.
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-07
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high-performance computing and is implemented here to perform ultra-fast MC calculations in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results are aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by the single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed-up, cloud computing builds a layer of abstraction for high-performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
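The reported timings admit a quick consistency check with a simple fixed-overhead scaling model, T_N ≈ T_1/N + c; the model is our assumption, not the authors' analysis.

```python
# Back-of-envelope scaling check of the reported cloud timings: with perfect
# scaling, 2.58 h spread over 100 nodes would take ~1.55 min; the measured
# 3.3 min implies roughly 1.75 min of fixed overhead (startup, I/O, aggregation).
t_serial_min = 2.58 * 60           # 154.8 min on one local machine
t_cloud_min = 3.3                  # measured on 100 nodes
n_nodes = 100

speedup = t_serial_min / t_cloud_min      # ~46.9x, matching the ~47x reported
ideal_min = t_serial_min / n_nodes        # ~1.55 min if scaling were perfect
overhead_min = t_cloud_min - ideal_min    # ~1.75 min attributable to overhead
```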
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-08
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a comprehensive data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration, and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration.
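The MapReduce formulation amounts to mapping each scatterer to (bin, contribution) pairs and reducing by summation per raw-data bin. The toy sketch below shows only that accumulation pattern; the bin assignment and amplitudes are placeholders, not a SAR echo model.

```python
from collections import defaultdict

# Toy MapReduce-style accumulation: each scatterer "maps" to a (bin, contribution)
# pair, and the "reduce" step sums contributions per raw-data bin. This is the
# irregular accumulation pattern mapped onto Hadoop; the echo model here is a
# placeholder, not a real SAR echo.
def map_phase(scatterers):
    for pos, amp in scatterers:
        yield (pos % 4, amp)       # placeholder bin assignment

def reduce_phase(pairs):
    bins = defaultdict(float)
    for b, amp in pairs:
        bins[b] += amp
    return dict(bins)
```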
Computational Fluid Dynamics Demonstration of Rigid Bodies in Motion
NASA Technical Reports Server (NTRS)
Camarena, Ernesto; Vu, Bruce T.
2011-01-01
The Design Analysis Branch (NE-Ml) at the Kennedy Space Center has not had the ability to accurately couple Rigid Body Dynamics (RBD) and Computational Fluid Dynamics (CFD). OVERFLOW-D is a flow solver that has been developed by NASA to have the capability to analyze and simulate dynamic motions with up to six Degrees of Freedom (6-DOF). Two simulations were prepared over the course of the internship to demonstrate 6-DOF motion of rigid bodies under aerodynamic loading. The geometries in the simulations were based on a conceptual Space Launch System (SLS). The first simulation that was prepared and computed was the motion of a Solid Rocket Booster (SRB) as it separates from its core stage. To reduce computational time during the development of the simulation, only half of the physical domain with respect to the symmetry plane was simulated. Then a full solution was prepared and computed. The second simulation was a model of the SLS as it departs from a launch pad under a 20 knot crosswind. This simulation was reduced to Two Dimensions (2D) to reduce both preparation and computation time. By allowing 2-DOF for translations and 1-DOF for rotation, the simulation predicted unrealistic rotation. The simulation was then constrained to only allow translations.
NASA Astrophysics Data System (ADS)
Matsuda, K.; Onishi, R.; Takahashi, K.
2017-12-01
Urban high temperatures due to the combined influence of global warming and urban heat islands increase the risk of heat stroke. Greenery is one possible countermeasure for mitigating the heat environment, since the transpiration and shading effects of trees can reduce the air temperature and the radiative heat flux. In order to formulate effective measures, it is important to estimate the influence of the greenery on the heat stroke risk. In this study, we have developed a tree-crown-resolving large-eddy simulation (LES) model that is coupled with a three-dimensional radiative transfer (3DRT) model. The Multi-Scale Simulator for the Geoenvironment (MSSG) is used for performing building- and tree-crown-resolving LES. The 3DRT model is implemented in the MSSG so that the 3DRT is calculated repeatedly during the time integration of the LES. We have confirmed that the computational time for the 3DRT model is negligibly small compared with that for the LES, and that the accuracy of the 3DRT model is sufficiently high to evaluate the radiative heat flux at the pedestrian level. The present model is applied to the analysis of the heat environment in an actual urban area around Tokyo Bay, covering 8 km × 8 km with a 5-m grid mesh, in order to confirm its feasibility. The results show that the wet-bulb globe temperature (WBGT), which is an indicator of the heat stroke risk, is predicted with sufficiently high accuracy to evaluate the influence of tree crowns on the heat environment. In addition, by comparing with a case without the greenery in the Tokyo Bay area, we have confirmed that the greenery increases the low-WBGT areas in major pedestrian spaces by a factor of 3.4. This indicates that the present model can predict the greenery effect on the urban heat environment quantitatively.
ERIC Educational Resources Information Center
Schwarz, Christina V.; Meyer, Jason; Sharma, Ajay
2007-01-01
This study infused computer modeling and simulation tools in a 1-semester undergraduate elementary science methods course to advance preservice teachers' understandings of computer software use in science teaching and to help them learn important aspects of pedagogy and epistemology. Preservice teachers used computer modeling and simulation tools…
NASA Technical Reports Server (NTRS)
1998-01-01
Under a NASA SBIR (Small Business Innovative Research) contract (NAS5-30905), EAI Simulation Associates, Inc., developed a new digital simulation computer, Starlight(tm). With an architecture based on the analog model of computation, Starlight(tm) outperforms all other computers on a wide range of continuous system simulations. This system is used in a variety of applications, including aerospace, automotive, electric power, and chemical reactors.
Using Computational Simulations to Confront Students' Mental Models
ERIC Educational Resources Information Center
Rodrigues, R.; Carvalho, P. Simeão
2014-01-01
In this paper we show an example of how to use a computational simulation to obtain visual feedback for students' mental models, and compare their predictions with the simulated system's behaviour. Additionally, we use the computational simulation to incrementally modify the students' mental models in order to accommodate new data,…
ERIC Educational Resources Information Center
Lin, Li-Fen; Hsu, Ying-Shao; Yeh, Yi-Fen
2012-01-01
Several researchers have investigated the effects of computer simulations on students' learning. However, few have focused on how simulations with authentic contexts influences students' inquiry skills. Therefore, for the purposes of this study, we developed a computer simulation (FossilSim) embedded in an authentic inquiry lesson. FossilSim…
Computer-generated forces in distributed interactive simulation
NASA Astrophysics Data System (ADS)
Petty, Mikel D.
1995-04-01
Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.
Toe, Kyaw Kyar; Huang, Weimin; Yang, Tao; Duan, Yuping; Zhou, Jiayin; Su, Yi; Teo, Soo-Kng; Kumar, Selvaraj Senthil; Lim, Calvin Chi-Wan; Chui, Chee Kong; Chang, Stephen
2015-08-01
This work presents a surgical training system that incorporates the cutting of soft tissue, simulated with a modified pre-computed linear elastic model in the Simulation Open Framework Architecture (SOFA) environment. A pre-computed linear elastic model for the simulation of soft tissue deformation involves computing the compliance matrix a priori based on the topological information of the mesh. While this process may require from a few minutes to several hours, depending on the number of vertices in the mesh, it needs to be computed only once and allows real-time computation of the subsequent soft tissue deformation. However, because the compliance matrix is based on the initial topology of the mesh, it does not allow any topological changes during simulation, such as cutting or tearing of the mesh. This work proposes a way to modify the pre-computed data by correcting the topological connectivity in the compliance matrix, without re-computing the compliance matrix, which is computationally expensive.
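The pre-computed compliance idea can be illustrated at toy scale: invert the stiffness matrix once, then obtain each deformation as a matrix-vector product. A 2×2 dense system stands in for the mesh; real systems use sparse FEM matrices, and this sketch is not the SOFA implementation.

```python
# Tiny dense sketch of the pre-computed-compliance idea: invert the stiffness
# matrix K once (the expensive step), then every subsequent deformation is a
# cheap matrix-vector product u = C f. A 2x2 system stands in for the mesh.
def invert_2x2(K):
    (a, b), (c, d) = K
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def deform(C, f):
    return [sum(C[i][j] * f[j] for j in range(len(f))) for i in range(len(C))]

K = [[4.0, 1.0], [1.0, 3.0]]
C = invert_2x2(K)                 # computed once, reused for every force vector
```

Each new force vector costs only the matrix-vector product; changing the mesh topology (cutting) would invalidate C, which is exactly the limitation the paper addresses.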
NASA Astrophysics Data System (ADS)
Smetana, Lara Kathleen; Bell, Randy L.
2012-06-01
Researchers have explored the effectiveness of computer simulations for supporting science teaching and learning during the past four decades. The purpose of this paper is to provide a comprehensive, critical review of the literature on the impact of computer simulations on science teaching and learning, with the goal of summarizing what is currently known and providing guidance for future research. We report on the outcomes of 61 empirical studies dealing with the efficacy of, and implications for, computer simulations in science instruction. The overall findings suggest that simulations can be as effective, and in many ways more effective, than traditional (i.e. lecture-based, textbook-based and/or physical hands-on) instructional practices in promoting science content knowledge, developing process skills, and facilitating conceptual change. As with any other educational tool, the effectiveness of computer simulations is dependent upon the ways in which they are used. Thus, we outline specific research-based guidelines for best practice. Computer simulations are most effective when they (a) are used as supplements; (b) incorporate high-quality support structures; (c) encourage student reflection; and (d) promote cognitive dissonance. Used appropriately, computer simulations involve students in inquiry-based, authentic science explorations. Additionally, as educational technologies continue to evolve, advantages such as flexibility, safety, and efficiency deserve attention.
Human agency beliefs influence behaviour during virtual social interactions.
Caruana, Nathan; Spirou, Dean; Brock, Jon
2017-01-01
In recent years, with the emergence of relatively inexpensive and accessible virtual reality technologies, it is now possible to deliver compelling and realistic simulations of human-to-human interaction. Neuroimaging studies have shown that, when participants believe they are interacting via a virtual interface with another human agent, they show different patterns of brain activity compared to when they know that their virtual partner is computer-controlled. The suggestion is that users adopt an "intentional stance" by attributing mental states to their virtual partner. However, it remains unclear how beliefs in the agency of a virtual partner influence participants' behaviour and subjective experience of the interaction. We investigated this issue in the context of a cooperative "joint attention" game in which participants interacted via an eye tracker with a virtual onscreen partner, directing each other's eye gaze to different screen locations. Half of the participants were correctly informed that their partner was controlled by a computer algorithm ("Computer" condition). The other half were misled into believing that the virtual character was controlled by a second participant in another room ("Human" condition). Those in the "Human" condition were slower to make eye contact with their partner and more likely to try and guide their partner before they had established mutual eye contact than participants in the "Computer" condition. They also responded more rapidly when their partner was guiding them, although the same effect was also found for a control condition in which they responded to an arrow cue. Results confirm the influence of human agency beliefs on behaviour in this virtual social interaction context. They further suggest that researchers and developers attempting to simulate social interactions should consider the impact of agency beliefs on user experience in other social contexts, and their effect on the achievement of the application's goals.
ERIC Educational Resources Information Center
Brady, Corey; Orton, Kai; Weintrop, David; Anton, Gabriella; Rodriguez, Sebastian; Wilensky, Uri
2017-01-01
Computer science (CS) is becoming an increasingly diverse domain. This paper reports on an initiative designed to introduce underrepresented populations to computing using an eclectic, multifaceted approach. As part of a yearlong computing course, students engage in Maker activities, participatory simulations, and computing projects that…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Randy R.; Bass, Robert B.; Kouzes, Richard T.
2003-01-20
This paper provides a brief overview of the implementation of the Advanced Encryption Standard (AES) as a hash function for confirming the identity of software resident on a computer system. The PNNL Software Authentication team chose to use a hash function to confirm software identity on a system for situations where: (1) there is limited time to perform the confirmation, and (2) access to the system is restricted to keyboard or thumbwheel input and output can only be displayed on a monitor. PNNL reviewed three popular algorithms: the Secure Hash Algorithm-1 (SHA-1), the Message Digest-5 (MD-5), and the Advanced Encryption Standard (AES), and selected the AES to incorporate into the software confirmation tool we developed. This paper gives a brief overview of the SHA-1, MD-5, and AES algorithms and cites references for further detail. It then explains the overall processing steps of the AES used to reduce a large amount of generic data (the plain text, such as is present in memory and other data storage media in a computer system) to a small amount of data (the hash digest), which is a mathematically unique representation or signature of the former and can be displayed on a computer's monitor. The paper starts with a simple definition and example to illustrate the use of a hash function. It concludes with a description of how the software confirmation tool uses the hash function to confirm the identity of software on a computer system.
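The confirm-by-digest workflow is illustrated below with SHA-256 from Python's hashlib; the paper's hash is built from AES, so SHA-256 here is only a stand-in for the concept of reducing a large blob to a short, comparable signature.

```python
import hashlib

# Concept sketch of software confirmation by hashing: reduce an arbitrary blob
# (here a stand-in for memory contents) to a short digest and compare it with a
# known-good value. The paper constructs its hash from AES; SHA-256 is used
# here only to illustrate the confirm-by-digest workflow.
def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def confirm(data: bytes, expected_digest: str) -> bool:
    return digest(data) == expected_digest

# Reference digest recorded when the software is known to be authentic.
reference = digest(b"trusted firmware image")
```

Any change to the input, however small, yields a different digest, which is what makes the short signature usable for identity confirmation.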
Liang, Fuyou; Takagi, Shu; Himeno, Ryutaro; Liu, Hao
2013-01-01
A variety of methods have been proposed to noninvasively assess arterial stiffness using single or multiple oscillometric cuffs. A common pitfall of most such methods is that the individual-specific accuracy of assessment is not clearly known, due to an insufficient understanding of the relationships between the characteristics of cuff oscillometry and cardiovascular properties. To provide a tool for quantitatively investigating such relationships, we developed a computational model of the cardiovascular system coupled with an oscillometric cuff wrapped around the left upper arm. The model was first examined by simulating the inflation-deflation process of the cuff. The simulated results reasonably reproduced the well-established characteristics of cuff oscillometry. The model was then applied to study the oscillation wave generated by a suprasystolic cuff, which is currently under considerable debate regarding its validity for assessing aortic stiffness. The simulated results confirmed the experimental observations that the suprasystolic cuff oscillation wave resembles the blood pressure wave in the proximal brachial artery and is characterised by the presence of two systolic peaks. A systematic analysis of the simulation results for various cardiovascular/physiological conditions revealed that neither the time lag nor the height difference between the two peaks is a direct indicator of aortic stiffness. These findings provided useful evidence for explaining the conflicts among previous studies. Finally, it was stressed that although the emphasis of this study has been placed on a suprasystolic upper-arm cuff, the model could be employed to address more issues related to oscillometric cuffs.
Computations of Complex Three-Dimensional Turbulent Free Jets
NASA Technical Reports Server (NTRS)
Wilson, Robert V.; Demuren, Ayodeji O.
1997-01-01
Three-dimensional, incompressible turbulent jets with rectangular and elliptical cross-sections are simulated with a finite-difference numerical method. The full Navier-Stokes equations are solved at low Reynolds numbers, whereas at high Reynolds numbers filtered forms of the equations are solved along with a sub-grid scale model to approximate the effects of the unresolved scales. A 2N-storage, third-order Runge-Kutta scheme is used for temporal discretization, and a fourth-order compact scheme is used for spatial discretization. Although such methods are widely used in the simulation of compressible flows, the lack of an evolution equation for pressure or density presents particular difficulty in incompressible flows: the pressure-velocity coupling must be established indirectly. It is achieved, in this study, through a Poisson equation which is solved by a compact scheme of the same order of accuracy. The numerical formulation is validated, and the dispersion and dissipation errors documented, by the solution of a wide range of benchmark problems. Three-dimensional computations are performed for different inlet conditions which model naturally developing and forced jets. The experimentally observed phenomenon of axis-switching is captured in the numerical simulations, and it is confirmed through flow visualization that this is based on self-induction of the vorticity field. Statistical quantities such as mean velocity, mean pressure, two-point velocity spatial correlations, and Reynolds stresses are presented. Detailed budgets of the mean momentum and Reynolds stress equations are presented to aid in the turbulence modeling of complex jets. Simulations of circular jets are used to quantify the effect of the non-uniform curvature of the non-circular jets.
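The indirect pressure-velocity coupling via a Poisson solve can be illustrated in one dimension with a second-order finite-difference discretization and a tridiagonal solve; this simpler scheme stands in for the fourth-order compact discretization used in the paper.

```python
# Minimal 1D Poisson solve, u'' = f with u(0) = u(1) = 0, by second-order
# finite differences and the Thomas (tridiagonal) algorithm. It stands in for
# the pressure Poisson equation; the paper itself uses a fourth-order compact
# scheme in three dimensions.
def poisson_1d(f, n):
    h = 1.0 / (n + 1)
    a, b, c = [1.0] * n, [-2.0] * n, [1.0] * n   # tridiagonal bands
    d = [f(h * (i + 1)) * h * h for i in range(n)]
    for i in range(1, n):                         # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):                # back substitution
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u
```

For the quadratic test case f = -2 the exact solution is u = x(1 - x), which the second-order scheme reproduces to machine precision.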
Evaluation of Computer Simulations for Teaching Apparel Merchandising Concepts.
ERIC Educational Resources Information Center
Jolly, Laura D.; Sisler, Grovalynn
1988-01-01
The study developed and evaluated computer simulations for teaching apparel merchandising concepts. Evaluation results indicated that teaching method (computer simulation versus case study) does not significantly affect cognitive learning. Student attitudes varied, however, according to topic (profitable merchandising analysis versus retailing…
O'Donnell, Michael
2015-01-01
State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need to model larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) indicated an approximate 96.6% decrease in computing time. With a single multicore compute node (bottom result), computing time decreased by 81.8% relative to serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Herrick, Gregory P.; Chen, Jen-Ping
2012-01-01
This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.
Longitudinal train dynamics: an overview
NASA Astrophysics Data System (ADS)
Wu, Qing; Spiryagin, Maksym; Cole, Colin
2016-12-01
This paper discusses the evolution of longitudinal train dynamics (LTD) simulations, which covers numerical solvers, vehicle connection systems, air brake systems, wagon dumper systems and locomotives, resistance forces and gravitational components, vehicle in-train instabilities, and computing schemes. A number of potential research topics are suggested, such as modelling of friction, polymer, and transition characteristics for vehicle connection simulations, studies of wagon dumping operations, proper modelling of vehicle in-train instabilities, and computing schemes for LTD simulations. Evidence shows that LTD simulations have evolved with computing capabilities. Currently, advanced component models that directly describe the working principles of the operation of air brake systems, vehicle connection systems, and traction systems are available. Parallel computing is a good solution to combine and simulate all these advanced models. Parallel computing can also be used to conduct three-dimensional long train dynamics simulations.
Unsteady CFD simulation for bucket design optimization of Pelton turbine runner
NASA Astrophysics Data System (ADS)
KUMASHIRO, Takashi; FUKUHARA, Haruki; TANI, Kiyohito
2016-11-01
Investigating flow patterns on the bucket of Pelton turbine runners is one of the important issues in improving turbine performance. By studying the mechanism of loss generation in the flow around the bucket, it becomes possible to optimize the design of the inner and outer bucket shape. For this purpose, computational fluid dynamics (CFD) is an effective method. It is commonly used to simulate the flow in turbines and to predict turbine performance during the development of many kinds of water turbine, including the Pelton type. Especially in bucket development, numerical investigations are more useful than the observations and measurements obtained in model tests for understanding transient flow patterns. In this paper, a numerical study of two buckets of different design is introduced. A simplified analysis domain, chosen to reduce the computational load, is also introduced. Furthermore, model tests of the two buckets are performed using the same test equipment. The model tests clearly confirm a difference in turbine efficiency, and the trend of the calculated efficiencies for both buckets agrees with the experiment. To investigate the causes, the difference in unsteady flow patterns between the two buckets is discussed based on the results of the numerical analysis.
Geometry Dynamics of α-Helices in Different Class I Major Histocompatibility Complexes
Karch, Rudolf; Schreiner, Wolfgang
2015-01-01
MHC α-helices form the antigen-binding cleft and are of particular interest for immunological reactions. To monitor these helices in molecular dynamics simulations, we applied a parsimonious fragment-fitting method to trace the axes of the α-helices. Each resulting axis was fitted by polynomials in a least-squares sense and the curvature integral was computed. To find the appropriate polynomial degree, the method was tested on two artificially modelled helices, one performing a bending movement and another a hinge movement. We found that second-order polynomials retrieve predefined parameters of helical motion with minimal relative error. From MD simulations we selected those parts of α-helices that were stable and also close to the TCR/MHC interface. We monitored the curvature integral, generated a ruled surface between the two MHC α-helices, and computed interhelical area and surface torsion, as they changed over time. We found that MHC α-helices undergo rapid but small changes in conformation. The curvature integral of helices proved to be a sensitive measure, which was closely related to changes in shape over time as confirmed by RMSD analysis. We speculate that small changes in the conformation of individual MHC α-helices are part of the intrinsic dynamics induced by engagement with the TCR. PMID:26649324
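The fitting step this abstract describes — a second-order least-squares polynomial through a traced helix axis, followed by a curvature integral — can be sketched in a simplified 2D form. This is an illustration of the numerical idea only, not the authors' fragment-fitting method; the axis points are synthetic.

```python
import numpy as np

# Hypothetical traced axis points: a gently bent "helix axis" in 2D,
# with small noise standing in for thermal fluctuation.
x = np.linspace(0.0, 10.0, 50)
y = 0.02 * x**2 + 0.05 * np.random.default_rng(0).normal(size=x.size)

# Second-order least-squares fit, as in the abstract.
coeffs = np.polyfit(x, y, deg=2)
p = np.poly1d(coeffs)
dp, ddp = p.deriv(1), p.deriv(2)

# Curvature of a plane curve y = p(x): kappa = |p''| / (1 + p'^2)^(3/2)
kappa = np.abs(ddp(x)) / (1.0 + dp(x) ** 2) ** 1.5

# Curvature integral along the axis (trapezoidal rule, done manually
# for portability across NumPy versions).
curvature_integral = float(np.sum((kappa[1:] + kappa[:-1]) / 2 * np.diff(x)))
print(round(curvature_integral, 3))
```

A straighter fitted axis drives the integral toward zero, which is why the quantity is sensitive to small bending movements of the helix over a trajectory.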
Defect chemistry and lithium transport in Li3OCl anti-perovskite superionic conductors.
Lu, Ziheng; Chen, Chi; Baiyee, Zarah Medina; Chen, Xin; Niu, Chunming; Ciucci, Francesco
2015-12-28
Lithium-rich anti-perovskites (LiRAPs) are a promising family of solid electrolytes, which exhibit ionic conductivities above 10^-3 S cm^-1 at room temperature, among the highest reported values to date. In this work, we investigate the defect chemistry and the associated lithium transport in Li3OCl, a prototypical LiRAP, using ab initio density functional theory (DFT) calculations and classical molecular dynamics (MD) simulations. We studied three types of charge neutral defect pairs, namely the LiCl Schottky pair, the Li2O Schottky pair, and the Li interstitial with a substitutional defect of O on the Cl site. Among them the LiCl Schottky pair has the lowest binding energy and is the most energetically favorable for diffusion as computed by DFT. This is confirmed by classical MD simulations, where the computed Li ion diffusion coefficients for LiCl Schottky systems are significantly higher than those for the other two defects considered and the activation energy in LiCl deficient Li3OCl is comparable to experimental values. The high conductivities and low activation energies of LiCl Schottky systems are explained by the low energy pathways of Li between the Cl vacancies. We propose that Li vacancy hopping is the main diffusion mechanism in highly conductive Li3OCl.
NASA Astrophysics Data System (ADS)
Cetinbas, Firat C.; Ahluwalia, Rajesh K.; Kariuki, Nancy; De Andrade, Vincent; Fongalland, Dash; Smith, Linda; Sharman, Jonathan; Ferreira, Paulo; Rasouli, Somaye; Myers, Deborah J.
2017-03-01
The cost and performance of proton exchange membrane fuel cells strongly depend on the cathode electrode due to usage of expensive platinum (Pt) group metal catalyst and sluggish reaction kinetics. Development of low Pt content high performance cathodes requires comprehensive understanding of the electrode microstructure. In this study, a new approach is presented to characterize the detailed cathode electrode microstructure from nm to μm length scales by combining information from different experimental techniques. In this context, nano-scale X-ray computed tomography (nano-CT) is performed to extract the secondary pore space of the electrode. Transmission electron microscopy (TEM) is employed to determine primary C particle and Pt particle size distributions. X-ray scattering, with its ability to provide size distributions of orders of magnitude more particles than TEM, is used to confirm the TEM-determined size distributions. The number of primary pores that cannot be resolved by nano-CT is approximated using mercury intrusion porosimetry. An algorithm is developed to incorporate all these experimental data in one geometric representation. Upon validation of pore size distribution against gas adsorption and mercury intrusion porosimetry data, reconstructed ionomer size distribution is reported. In addition, transport related characteristics and effective properties are computed by performing simulations on the hybrid microstructure.
Martin, Caitlin; Sun, Wei
2015-01-01
Transcatheter aortic valve (TAV) intervention is now the standard-of-care treatment for inoperable patients and a viable alternative treatment option for high-risk patients with symptomatic aortic stenosis. While the procedure is associated with lower operative risk and shorter recovery times than traditional surgical aortic valve (SAV) replacement, TAV intervention is still not considered for lower-risk patients due in part to concerns about device durability. It is well known that bioprosthetic SAVs have limited durability, and TAVs are generally assumed to have even worse durability, yet there is little long-term data to confirm this suspicion. In this study, TAV and SAV leaflet fatigue due to cyclic loading was investigated through finite element analysis by implementing a computational soft tissue fatigue damage model to describe the behavior of the pericardial leaflets. Under identical loading conditions and with identical leaflet tissue properties, the TAV leaflets sustained higher stresses, strains, and fatigue damage compared to the SAV leaflets. The simulation results suggest that the durability of TAVs may be significantly reduced compared with SAVs, to about 7.8 years. The developed computational framework may be useful in optimizing TAV design parameters to improve leaflet durability, and assessing the effects of underexpanded, elliptical, or non-uniformly expanded stent deployment on TAV durability. PMID:26294354
Wang, Zhiguo; Theeuwes, Jan
2014-01-01
Previous studies have shown that saccades may deviate towards or away from task irrelevant visual distractors. This observation has been attributed to active suppression (inhibition) of the distractor location unfolding over time: early in time inhibition at the distractor location is incomplete causing deviation towards the distractor, while later in time when inhibition is complete the eyes deviate away from the distractor. In a recent computational study, Wang, Kruijne and Theeuwes proposed an alternative theory that the lateral interactions in the superior colliculus (SC), which are characterized by short-distance excitation and long-distance inhibition, are sufficient for generating both deviations towards and away from distractors. In the present study, we performed a meta-analysis of the literature, ran model simulations and conducted two behavioral experiments to further explore this unconventional theory. Confirming predictions generated by the model simulations, the behavioral experiments show that a) saccades deviate towards close distractors and away from remote distractors, and b) the amount of deviation depends on the strength of fixation activity in the SC, which can be manipulated by turning off the fixation stimulus before or after target onset (Experiment 1), or by varying the eccentricity of the target and distractor (Experiment 2). PMID:25551552
Simulation Framework for Intelligent Transportation Systems
DOT National Transportation Integrated Search
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scaleable simulation of an Intelligent Transportation System. The simulator is designed for running on parallel computers and distributed (networked) computer systems, but ca...
Thermodynamic and transport properties of nitrogen fluid: Molecular theory and computer simulations
NASA Astrophysics Data System (ADS)
Eskandari Nasrabad, A.; Laghaei, R.
2018-04-01
Computer simulations and various theories are applied to compute the thermodynamic and transport properties of nitrogen fluid. To model the nitrogen interaction, an existing potential in the literature is modified to obtain a close agreement between the simulation results and experimental data for the orthobaric densities. We use the Generic van der Waals theory to calculate the mean free volume and apply the results within the modified Cohen-Turnbull relation to obtain the self-diffusion coefficient. Compared to experimental data, excellent results are obtained via computer simulations for the orthobaric densities, the vapor pressure, the equation of state, and the shear viscosity. We analyze the results of the theory and computer simulations for the various thermophysical properties.
NASA Technical Reports Server (NTRS)
Darmofal, David L.
2003-01-01
The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.
Schmidt, Maria A; Morgan, Robert
2008-10-01
To investigate bolus timing artifacts that impair depiction of renal arteries at contrast material-enhanced magnetic resonance (MR) angiography and to determine the effect of contrast agent infusion rates on artifact generation. Renal contrast-enhanced MR angiography was simulated for a variety of infusion schemes, assuming both correct and incorrect timing between data acquisition and contrast agent injection. In addition, the ethics committee approved the retrospective evaluation of clinical breath-hold renal contrast-enhanced MR angiographic studies obtained with automated detection of contrast agent arrival. Twenty-two studies were evaluated for their ability to depict the origin of renal arteries in patent vessels and for any signs of timing errors. Simulations showed that a completely artifactual stenosis or an artifactual overestimation of an existing stenosis at the renal artery origin can be caused by timing errors of the order of 5 seconds in examinations performed with contrast agent infusion rates compatible with or higher than those of hand injections. Lower infusion rates make the studies more likely to accurately depict the origin of the renal arteries. In approximately one-third of all clinical examinations, different contrast agent uptake rates were detected on the left and right sides of the body, and thus allowed us to confirm that it is often impossible to optimize depiction of both renal arteries. In three renal arteries, a signal void was found at the origin in a patent vessel, and delayed contrast agent arrival was confirmed. Computer simulations and clinical examinations showed that timing errors impair the accurate depiction of renal artery origins. (c) RSNA, 2008.
GATE Monte Carlo simulation in a cloud computing environment
NASA Astrophysics Data System (ADS)
Rowedder, Blake Austin
The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53 minute long simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-power computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
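The inverse power model mentioned in this abstract, t(n) = t1 · n^(-b), can be checked against the two runtimes the study reports (53 minutes on 1 node, 3.11 minutes on 20 nodes). The exponent below is derived for illustration, not a value reported by the study.

```python
import math

# Inverse power runtime model: t(n) = t1 * n**(-b), with n cloud nodes.
t1, t20, n = 53.0, 3.11, 20

# Exponent implied by the two reported runtimes.
b = math.log(t1 / t20) / math.log(n)
print(round(b, 2))  # 0.95: close to ideal linear scaling (b = 1)

def runtime(nodes, t1=t1, b=b):
    """Predicted runtime in minutes for a given cluster size."""
    return t1 * nodes ** (-b)

print(round(runtime(10), 1))
```

An exponent just under 1 is consistent with near-linear scaling, the small shortfall reflecting the fixed per-job overhead of uploading inputs and aggregating results.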
Simulating Laboratory Procedures.
ERIC Educational Resources Information Center
Baker, J. E.; And Others
1986-01-01
Describes the use of computer assisted instruction in a medical microbiology course. Presents examples of how computer assisted instruction can present case histories in which the laboratory procedures are simulated. Discusses an authoring system used to prepare computer simulations and provides one example of a case history dealing with fractured…
Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi
2011-11-01
Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.
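The data-parallel fashion described here — every neuron advancing in lockstep under the same update rule, the pattern a GPU applies across thousands of threads — can be sketched with a vectorized update of simple leaky integrate-and-fire neurons. This is a toy stand-in, not the conductance-based basal ganglia model; the population size is borrowed from the abstract, and all other parameters are assumed.

```python
import numpy as np

def step(v, i_ext, dt=0.1, tau=10.0, v_rest=-65.0, v_th=-50.0, v_reset=-65.0):
    """One data-parallel update of N leaky integrate-and-fire neurons.
    The whole state vector is advanced at once; no per-neuron loop."""
    v = v + dt / tau * (-(v - v_rest) + i_ext)
    spikes = v >= v_th          # boolean spike mask for this step
    v[spikes] = v_reset         # reset every neuron that fired
    return v, spikes

rng = np.random.default_rng(1)
v = np.full(370, -65.0)         # population size from the abstract
for _ in range(1000):
    v, spikes = step(v, i_ext=rng.uniform(0.0, 30.0, size=v.size))
print(v.shape)  # (370,)
```

On a GPU the same arithmetic runs one neuron per thread; the speedups reported above come from this lockstep structure plus task-parallel distribution of the heterogeneous populations.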
Inflight IFR procedures simulator
NASA Technical Reports Server (NTRS)
Parker, L. C. (Inventor)
1984-01-01
An inflight IFR procedures simulator for generating signals and commands to conventional instruments provided in an airplane is described. The simulator includes a signal synthesizer which generates predetermined simulated signals corresponding to signals normally received from remote sources upon being activated. A computer is connected to the signal synthesizer and causes the signal synthesizer to produce simulated signals responsive to programs fed into the computer. A switching network is connected to the signal synthesizer, the antenna of the aircraft, and navigational instruments and communication devices for selectively connecting instruments and devices to the synthesizer and disconnecting the antenna from the navigational instruments and communication devices. Pressure transducers are connected to the altimeter and speed indicator for supplying electrical signals to the computer indicating the altitude and speed of the aircraft. A compass is connected to supply electrical signals to the computer indicating the heading of the airplane. The computer, upon receiving signals from the pressure transducers and compass, computes the signals that are fed to the signal synthesizer which, in turn, generates simulated navigational signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schallreuter, Karin U.; Institute for Pigmentary Disorders in Association with EM Arndt University of Greifswald; University of Bradford
The human epidermis holds an autocrine acetylcholine production and degradation including functioning membrane integrated and cytosolic butyrylcholinesterase (BuchE). Here we show that BuchE activities increase 9-fold in the presence of calcium (0.5 × 10^-3 M) via a specific EF-hand calcium binding site, whereas acetylcholinesterase (AchE) is not affected. Calcium-45 labelling and computer simulation confirmed the presence of one EF-hand binding site per subunit, which is disrupted by H2O2-mediated oxidation. Moreover, we confirmed the faster hydrolysis by calcium-activated BuchE using the neurotoxic organophosphate O-ethyl-O-(4-nitrophenyl)-phenylphosphonothioate (EPN). Considering the large size of the human skin, with a 1.8 m^2 surface area and its calcium gradient in the 10^-3 M range, our results implicate calcium-activated BuchE as a major protective mechanism against suicide inhibition of AchE by organophosphates in this non-neuronal tissue.
Computational structural mechanics engine structures computational simulator
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1989-01-01
The Computational Structural Mechanics (CSM) program at Lewis encompasses: (1) fundamental aspects for formulating and solving structural mechanics problems, and (2) development of integrated software systems to computationally simulate the performance/durability/life of engine structures.
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-01
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration. PMID:28075343
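The map/shuffle/reduce decomposition this abstract applies to raw-data accumulation can be sketched in miniature. The echo model below is a toy stand-in (one scene patch per map task, partial contributions summed per range bin), not the authors' Hadoop pipeline; all values are hypothetical.

```python
from collections import defaultdict

def map_patch(patch):
    """Map task: one scene patch emits (range_bin, partial_echo) pairs.
    The per-bin contribution here is a hypothetical placeholder."""
    return [(bin_idx, patch * 0.1 * bin_idx) for bin_idx in range(3)]

def shuffle(pairs):
    """Group partial contributions by range bin, as the MapReduce
    shuffle phase does by key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_bin(values):
    """Reduce task: accumulate every contribution landing in one bin —
    the 'irregular parallel accumulation' named in the abstract."""
    return sum(values)

patches = [1.0, 2.0, 3.0]
mapped = [pair for patch in patches for pair in map_patch(patch)]
raw_data = {k: reduce_bin(v) for k, v in shuffle(mapped).items()}
print(raw_data)  # {0: 0.0, 1: ~0.6, 2: ~1.2}
```

Because the accumulation happens in the reduce phase keyed by bin, map tasks never contend for shared output memory, which is what makes the pattern tolerant of the irregularity that hurts GPU implementations.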
How Effective Is Instructional Support for Learning with Computer Simulations?
ERIC Educational Resources Information Center
Eckhardt, Marc; Urhahne, Detlef; Conrad, Olaf; Harms, Ute
2013-01-01
The study examined the effects of two different instructional interventions as support for scientific discovery learning using computer simulations. In two well-known categories of difficulty, data interpretation and self-regulation, instructional interventions for learning with computer simulations on the topic "ecosystem water" were developed…
SIGMA--A Graphical Approach to Teaching Simulation.
ERIC Educational Resources Information Center
Schruben, Lee W.
1992-01-01
SIGMA (Simulation Graphical Modeling and Analysis) is a computer graphics environment for building, testing, and experimenting with discrete event simulation models on personal computers. It uses symbolic representations (computer animation) to depict the logic of large, complex discrete event systems for easier understanding and has proven itself…
Application of technology developed for flight simulation at NASA. Langley Research Center
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1991-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations including mathematical model computation and data input/output to the simulators must be deterministic and be completed in as short a time as possible. Personnel at NASA's Langley Research Center are currently developing the use of supercomputers for simulation mathematical model computation for real-time simulation. This, coupled with the use of an open systems software architecture, will advance the state-of-the-art in real-time flight simulation.
Computers for real time flight simulation: A market survey
NASA Technical Reports Server (NTRS)
Bekey, G. A.; Karplus, W. J.
1977-01-01
An extensive computer market survey was made to determine those available systems suitable for current and future flight simulation studies at Ames Research Center. The primary requirement is for the computation of relatively high frequency content (5 Hz) math models representing powered lift flight vehicles. The Rotor Systems Research Aircraft (RSRA) was used as a benchmark vehicle for computation comparison studies. The general nature of helicopter simulations and a description of the benchmark model are presented, and some of the sources of simulation difficulties are examined. A description of various applicable computer architectures is presented, along with detailed discussions of leading candidate systems and comparisons between them.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael A.; Boldyrev, Stanislav; Fischer, Paul
This report details the impact exascale will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the DOE applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.
Computer Based Simulation of Laboratory Experiments.
ERIC Educational Resources Information Center
Edward, Norrie S.
1997-01-01
Examines computer based simulations of practical laboratory experiments in engineering. Discusses the aims and achievements of lab work (cognitive, process, psychomotor, and affective); types of simulations (model building and behavioral); and the strengths and weaknesses of simulations. Describes the development of a centrifugal pump simulation,…
CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.
ERIC Educational Resources Information Center
Skrein, Dale
1994-01-01
CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)
2010-12-01
Base (CFB) Kingston. The computer simulation developed in this project is intended to be used for future research and as a possible training platform...DRDC Toronto No. CR 2010-055 Development of an E-Prime based computer simulation of an interactive Human Rights Violation negotiation script...Abstract This report describes the method of developing an E-Prime computer simulation of an interactive Human Rights Violation (HRV) negotiation. An
Outcomes from the DOE Workshop on Turbulent Flow Simulation at the Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael; Boldyrev, Stanislav; Chang, Choong-Seock
This paper summarizes the outcomes from the Turbulent Flow Simulation at the Exascale: Opportunities and Challenges Workshop, which was held 4-5 August 2015, and was sponsored by the U.S. Department of Energy Office of Advanced Scientific Computing Research. The workshop objective was to define and describe the challenges and opportunities that computing at the exascale will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the U.S. Department of Energy applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.
DISCRETE EVENT SIMULATION OF OPTICAL SWITCH MATRIX PERFORMANCE IN COMPUTER NETWORKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imam, Neena; Poole, Stephen W
2013-01-01
In this paper, we present application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly. This situation may arise if the system under study does not exist yet or the cost of studying the system directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly larger and more complex, sophisticated DES tool chains have become available for both commercial and academic research. Some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST for the purpose of simulating multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight in device performance and aids in topology and system optimization.
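The discrete-event paradigm underlying simulators like OMNEST reduces to a time-ordered event queue and a loop that pops the earliest event and lets its handler schedule new ones. Below is a minimal kernel of that kind; the packet arrival/departure example is hypothetical, not the paper's optical-switch model.

```python
import heapq

class Simulator:
    """Minimal discrete-event kernel: events fire in timestamp order."""

    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker keeps equal-time events FIFO

    def schedule(self, delay, handler):
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler))
        self._seq += 1

    def run(self, until):
        # Pop the earliest event, advance the clock to it, dispatch.
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler = heapq.heappop(self._queue)
            handler(self)

# Hypothetical workload: a packet arrives every 2 time units and is
# "switched" (departs) after a 0.5 unit service delay.
log = []

def arrival(sim):
    log.append(("arrival", sim.now))
    sim.schedule(0.5, departure)
    sim.schedule(2.0, arrival)

def departure(sim):
    log.append(("departure", sim.now))

sim = Simulator()
sim.schedule(0.0, arrival)
sim.run(until=5.0)
print(log)
```

Because the clock jumps directly between scheduled events, idle time costs nothing, which is why DES scales to large switch matrices that would be expensive to step through at a fixed time resolution.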
NASA Technical Reports Server (NTRS)
Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.
1989-01-01
The findings of a preliminary investigation by Southwest Research Institute (SwRI) into simulation host computer concepts are presented. It is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe-based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time-critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.
NASA Astrophysics Data System (ADS)
Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki
2017-12-01
This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), though merely as a case-study approach. Thanks to the big leap in the computational performance of the K computer, we could greatly increase the number of MJO events covered by the numerical simulations, in addition to extending integration times and refining horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with that of the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.
A study of workstation computational performance for real-time flight simulation
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Cleveland, Jeff I., II
1995-01-01
With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong
2010-10-01
Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core (TM) 2 Quad Q6600 CPU and a Geforce 8800GT GPU, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus one core of the CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setup (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setup (a), 16.8 in setup (b), and 20.0 in setup (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
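The load-prediction dynamic scheduling idea can be sketched as follows. This is a simplified illustration, not the paper's algorithm: each chunk of work goes to whichever device is predicted to finish it first, and predictions are refined from measured execution times. The 4:1 device speed ratio and the averaging rule are invented for the sketch:

```python
# Simulated per-chunk costs: the "GPU" is ~4x faster than one "CPU core"
# (an assumed ratio, purely for illustration).
TRUE_COST = {"cpu": 4.0, "gpu": 1.0}

def schedule(n_chunks):
    predicted = {"cpu": 1.0, "gpu": 1.0}   # initial throughput guesses (equal)
    busy_until = {"cpu": 0.0, "gpu": 0.0}
    assigned = {"cpu": 0, "gpu": 0}
    for _ in range(n_chunks):
        # Predict each device's finish time for the next chunk; pick the earliest.
        device = min(busy_until, key=lambda d: busy_until[d] + predicted[d])
        cost = TRUE_COST[device]           # "measure" the chunk's runtime
        busy_until[device] += cost
        assigned[device] += 1
        # Update the prediction with the measurement (simple averaging).
        predicted[device] = 0.5 * (predicted[device] + cost)
    return assigned, max(busy_until.values())

assigned, makespan = schedule(100)
print(assigned, makespan)
```

After the predictions converge, the faster device absorbs proportionally more chunks, so neither device idles while the other is saturated; this is the property that lets setup (c) beat setup (b) in the abstract.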
Computer animation challenges for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine
2012-07-01
Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.
Computer Simulation in Undergraduate Instruction: A Symposium.
ERIC Educational Resources Information Center
Street, Warren R.; And Others
These symposium papers discuss the instructional use of computers in psychology, with emphasis on computer-produced simulations. The first, by Rich Edwards, briefly outlines LABSIM, a general purpose system of FORTRAN programs which simulate data collection in more than a dozen experimental models in psychology and are designed to train students…
Overview of Computer Simulation Modeling Approaches and Methods
Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett
2005-01-01
The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...
New Pedagogies on Teaching Science with Computer Simulations
ERIC Educational Resources Information Center
Khan, Samia
2011-01-01
Teaching science with computer simulations is a complex undertaking. This case study examines how an experienced science teacher taught chemistry using computer simulations and the impact of his teaching on his students. Classroom observations over 3 semesters, teacher interviews, and student surveys were collected. The data was analyzed for (1)…
Computer modeling and simulation of human movement. Applications in sport and rehabilitation.
Neptune, R R
2000-05-01
Computer modeling and simulation of human movement plays an increasingly important role in sport and rehabilitation, with applications ranging from sport equipment design to understanding pathologic gait. The complex dynamic interactions within the musculoskeletal and neuromuscular systems make analyzing human movement with existing experimental techniques difficult but computer modeling and simulation allows for the identification of these complex interactions and causal relationships between input and output variables. This article provides an overview of computer modeling and simulation and presents an example application in the field of rehabilitation.
Fiber Composite Sandwich Thermostructural Behavior: Computational Simulation
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Aiello, R. A.; Murthy, P. L. N.
1986-01-01
Several computational levels of progressive sophistication/simplification are described to computationally simulate composite sandwich hygral, thermal, and structural behavior. The computational levels of sophistication include: (1) three-dimensional detailed finite element modeling of the honeycomb, the adhesive and the composite faces; (2) three-dimensional finite element modeling of the honeycomb assumed to be an equivalent continuous, homogeneous medium, the adhesive and the composite faces; (3) laminate theory simulation where the honeycomb (metal or composite) is assumed to consist of plies with equivalent properties; and (4) derivations of approximate, simplified equations for thermal and mechanical properties by simulating the honeycomb as an equivalent homogeneous medium. The approximate equations are combined with composite hygrothermomechanical and laminate theories to provide a simple and effective computational procedure for simulating the thermomechanical/thermostructural behavior of fiber composite sandwich structures.
Generalized dynamic engine simulation techniques for the digital computer
NASA Technical Reports Server (NTRS)
Sellers, J.; Teren, F.
1974-01-01
Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design-point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar all-digital programs on future engine simulation philosophy is also discussed.
Generalized dynamic engine simulation techniques for the digital computers
NASA Technical Reports Server (NTRS)
Sellers, J.; Teren, F.
1975-01-01
Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar digital programs on future engine simulation philosophy is also discussed.
Fast Learning for Immersive Engagement in Energy Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M
The fast computation which is critical for immersive engagement with and learning from energy simulations would be furthered by developing a general method for creating rapidly computed simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost with response times - typically less than one minute of wall-clock time - suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations, but instead supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.
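The reduced-form idea — train a cheap statistical stand-in on outputs of the full simulation, then answer queries from the stand-in — can be sketched as follows. The one-dimensional test function and the polynomial fit are illustrative stand-ins; NREL's actual models are multivariate and the project's machine-learning techniques are not detailed above:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_simulation(x):
    # Stand-in for a computation-intensive energy simulation: a smooth
    # nonlinear response to one scenario parameter.
    return np.sin(2.0 * x) + 0.5 * x**2

# Offline: run the full simulation on a modest design of training points.
x_train = rng.uniform(0.0, 2.0, 200)
y_train = expensive_simulation(x_train)

# Fit a cheap reduced-form representation (here a degree-6 polynomial,
# chosen only for the sketch).
coeffs = np.polyfit(x_train, y_train, deg=6)
surrogate = np.poly1d(coeffs)

# Online: the surrogate answers held-out queries at negligible cost.
x_test = rng.uniform(0.0, 2.0, 50)
err = np.max(np.abs(surrogate(x_test) - expensive_simulation(x_test)))
print(f"max abs error on held-out points: {err:.2e}")
```

The held-out error plays the role of the uncertainty quantification mentioned in the abstract: it bounds how far the reduced form may stray from the full simulation within its domain of validity.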
NASA Astrophysics Data System (ADS)
Hasan, S.; Basmage, O.; Stokes, J. T.; Hashmi, M. S. J.
2018-05-01
A review of wire coating studies using plasto-hydrodynamic pressure shows that most of the work was carried out by conducting experiments simultaneously with simulation analysis based upon Bernoulli's principle and the Euler and Navier-Stokes (N-S) equations. These characteristics relate to the domain of Computational Fluid Dynamics (CFD), an interdisciplinary topic spanning fluid mechanics, numerical analysis of fluid flow and computer science. This research investigates two aspects: (i) simulation work and (ii) experimentation. A mathematical model was developed to investigate the flow pattern of the molten polymer and the pressure distribution within the wire-drawing dies, to assess the polymer coating thickness on the coated wires, and to determine the coating speed at the outlet of the drawing dies, without deploying any pressurizing pump. In addition, a physical model was developed within the ANSYS™ environment through the simulation design of ANSYS™ Workbench. The design was customized to simulate the process of wire coating on fine stainless-steel wires using drawing dies with different bore geometries: stepped parallel bore, tapered bore, and combined parallel and tapered bore. The convergence of the designed CFD model and the numerical and physical solution parameters for simulation were dynamically monitored for the viscous flow of the polypropylene (PP) polymer. Simulation results were validated against experimental results and used to predict the ideal bore shape to produce a thin coating on stainless-steel wires of different diameters. Simulation studies confirmed that a specific speed should be attained by the stainless-steel wires while passing through the drawing dies. It was observed that not all speed values within the specific speed range produced a coating thickness with the desired characteristic features. Therefore, some optimization of the experimental setup through design of experiments (Stat-Ease) was applied to validate the results. Further, rapid solidification of the viscous coating on the wires was targeted so that the coated wires do not stick to the winding spool after the coating process.
A Primer on Simulation and Gaming.
ERIC Educational Resources Information Center
Barton, Richard F.
In a primer intended for the administrative professions, for the behavioral sciences, and for education, simulation and its various aspects are defined, illustrated, and explained. Man-model simulation, man-computer simulation, all-computer simulation, and analysis are discussed as techniques for studying object systems (parts of the "real…
Head-target tracking control of well drilling
NASA Astrophysics Data System (ADS)
Agzamov, Z. V.
2018-05-01
The method of directional drilling trajectory control for oil and gas wells using predictive models is considered in the paper. The developed method does not apply optimization, and therefore there is no need for high-performance computing. Nevertheless, it allows following the well-plan with high precision while taking process input saturation into account. The controller output is calculated both from the present target reference point of the well-plan and from a prediction of the well trajectory using the analytical model. This method allows following a well-plan not only in angular but also in Cartesian coordinates. Simulation of the control system has confirmed high precision and operational performance under a wide range of random disturbances.
Cortical dipole imaging using truncated total least squares considering transfer matrix error.
Hori, Junichi; Takeuchi, Kosuke
2013-01-01
Cortical dipole imaging has been proposed as a method to visualize the electroencephalogram with high spatial resolution. We investigated an inverse technique for cortical dipole imaging using truncated total least squares (TTLS). The TTLS is a regularization technique that reduces the influence of both the measurement noise and the transfer matrix error caused by head model distortion. The estimation of the regularization parameter, based on the L-curve, was also investigated. Computer simulation suggested that estimation accuracy was improved by the TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed that the TTLS provided high spatial resolution in cortical dipole imaging.
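A minimal sketch of the TTLS inverse step, assuming the standard SVD-based formulation of truncated total least squares (the cortical-imaging transfer matrix is replaced by a small synthetic system for the demonstration):

```python
import numpy as np

def ttls_solve(A, b, k):
    """Truncated total least squares for A x ~= b.

    The SVD of the augmented matrix [A | b] is truncated to rank k, which
    suppresses noise in both b (measurements) and A (transfer matrix).
    """
    m, n = A.shape
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    V = Vt.T
    V12 = V[:n, k:]          # top-right block of the partitioned V
    V22 = V[n:, k:]          # bottom-right block (a 1x(n+1-k) row here)
    return -V12 @ np.linalg.pinv(V22)

# Consistent toy problem: with k = n, the exact solution is recovered.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
x_est = ttls_solve(A, b, k=3).ravel()
print(x_est)
```

In practice k (or equivalently the truncation level) is the regularization parameter, and the abstract's L-curve criterion chooses it by balancing the residual norm against the solution norm.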
NASA Astrophysics Data System (ADS)
Enayatifar, Rasul; Sadaei, Hossein Javedani; Abdullah, Abdul Hanan; Lee, Malrey; Isnin, Ismail Fauzi
2015-08-01
Currently, many studies have been conducted on improving the security of digital images in order to protect such data while they are sent over the internet. This work aims to propose a new approach based on a hybrid model of the Tinkerbell chaotic map, deoxyribonucleic acid (DNA) and cellular automata (CA). DNA rules, a DNA-sequence XOR operator and CA rules are used simultaneously to encrypt the plain-image pixels. To determine the rule number in the DNA sequence and also in the CA, a two-dimensional Tinkerbell chaotic map is employed. Experimental results and computer simulations both confirm that the proposed scheme not only demonstrates outstanding encryption, but also resists various typical attacks.
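The chaotic-keystream ingredient of such schemes can be illustrated on its own: a Tinkerbell map drives a byte stream that is XOR-ed with the pixels. The DNA-rule and cellular-automaton stages of the proposed scheme are omitted here, the parameter values are standard chaotic choices rather than the paper's, and the byte extraction rule is invented for the sketch:

```python
def tinkerbell_keystream(n, x, y, a=0.9, b=-0.6013, c=2.0, d=0.5):
    # Iterate the 2-D Tinkerbell map and quantize each state to one byte.
    # The initial state (x, y) acts as the secret key and must lie in the
    # attractor's basin so the orbit stays bounded.
    stream = []
    for _ in range(n):
        x, y = x * x - y * y + a * x + b * y, 2.0 * x * y + c * x + d * y
        stream.append(int(abs(x) * 1e6) % 256)
    return stream

def xor_cipher(pixels, key_x, key_y):
    ks = tinkerbell_keystream(len(pixels), key_x, key_y)
    return [p ^ k for p, k in zip(pixels, ks)]

plain = [12, 200, 7, 255, 0, 99]                  # toy "pixel" values
cipher = xor_cipher(plain, -0.72, -0.64)
recovered = xor_cipher(cipher, -0.72, -0.64)      # XOR is its own inverse
print(cipher, recovered)
```

Because the map is extremely sensitive to the initial state, a wrong key produces a completely different keystream; the paper strengthens this basic construction with the DNA and CA layers.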
An ABS control logic based on wheel force measurement
NASA Astrophysics Data System (ADS)
Capra, D.; Galvagno, E.; Ondrak, V.; van Leeuwen, B.; Vigliani, A.
2012-12-01
The paper presents an anti-lock braking system (ABS) control logic based on the measurement of the longitudinal forces at the hub bearings. The availability of force information allows the design of a logic that does not rely on the estimation of the tyre-road friction coefficient, since it continuously tries to exploit the maximum longitudinal tyre force. The logic is designed by means of computer simulation and then tested on a dedicated hardware-in-the-loop test bench: the experimental results confirm that measured wheel force can lead to a significant improvement of ABS performance in terms of stopping distance, even on roads with a variable friction coefficient.
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure
NASA Astrophysics Data System (ADS)
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-01
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.
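The master/worker batching pattern described above can be sketched as follows, with a thread pool standing in for the cloud worker nodes. The paper distributes EGS5 over MPI on virtual machines; the toy transport problem (sampling photon interaction depths in a homogeneous medium) and the pool are illustrative assumptions only:

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

MU = 2.0  # attenuation coefficient (1/cm) of a hypothetical homogeneous medium

def worker(args):
    seed, n_histories = args
    rng = random.Random(seed)            # independent random stream per batch
    # Each history samples a photon's depth of first interaction by
    # inverse-CDF sampling of the exponential attenuation law.
    return sum(-math.log(1.0 - rng.random()) / MU for _ in range(n_histories))

# Master: split 200,000 histories into 100 batches, farm them out,
# then aggregate the partial sums locally.
batches = [(seed, 2000) for seed in range(100)]
with ThreadPoolExecutor(max_workers=8) as pool:
    partial_sums = list(pool.map(worker, batches))

mean_depth = sum(partial_sums) / 200_000
print(f"mean interaction depth: {mean_depth:.4f} cm (analytic: {1/MU:.4f})")
```

Because MC histories are independent, the aggregated estimate is identical in distribution to a single-threaded run, and wall-clock time scales inversely with the number of workers — the same behaviour the abstract reports for the 100-node cloud cluster.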
A scalable parallel black oil simulator on distributed memory parallel computers
NASA Astrophysics Data System (ADS)
Wang, Kun; Liu, Hui; Chen, Zhangxin
2015-11-01
This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
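The inexact Newton idea mentioned above can be sketched in a few lines: the Newton correction comes from a loosely converged iterative linear solve whose tolerance (the "forcing term") tightens as the nonlinear residual shrinks. The toy two-equation system, the Jacobi inner solver, and the forcing rule are illustrative; the simulator's actual solvers (algebraic multigrid, multi-stage preconditioning) are far more elaborate:

```python
import numpy as np

def residual(x):
    # Small nonlinear test system F(x) = 0 with root (1, 2).
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])

def jacobian(x):
    return np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])

def inexact_newton(x, tol=1e-10, max_outer=50):
    for _ in range(max_outer):
        F = residual(x)
        norm_F = np.linalg.norm(F)
        if norm_F < tol:
            break
        eta = min(0.5, norm_F)           # forcing term: looser far from the root
        J = jacobian(x)
        # Inner solver: Jacobi sweeps on J dx = -F, stopped as soon as
        # ||J dx + F|| <= eta * ||F|| (an inexact solve, not an exact one).
        D = np.diag(np.diag(J))
        dx = np.zeros_like(x)
        while np.linalg.norm(J @ dx + F) > eta * norm_F:
            dx = dx + np.linalg.solve(D, -(F + J @ dx))
        x = x + dx
    return x

x = inexact_newton(np.array([1.0, 1.0]))
print(x, residual(x))
```

Spending less work on the linear solve when the Newton iterate is still far from the solution is what makes this approach attractive at the scale of tens of millions of grid blocks.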
Effect of computer game playing on baseline laparoscopic simulator skills.
Halvorsen, Fredrik H; Cvancarova, Milada; Fosse, Erik; Mjåland, Odd
2013-08-01
Studies examining the possible association between computer game playing and laparoscopic performance in general have yielded conflicting results, and a relationship between computer game playing and baseline performance on laparoscopic simulators has not been established. The aim of this study was to examine the possible association between previous and present computer game playing and baseline performance on a virtual reality laparoscopic simulator in a sample of potential future medical students. The participating students completed a questionnaire covering the weekly amount and type of computer game playing activity during the previous year and 3 years ago. They then performed 2 repetitions of 2 tasks ("gallbladder dissection" and "traverse tube") on a virtual reality laparoscopic simulator. Performance on the simulator was then analyzed for associations with computer game experience. Setting: a local high school in Norway. Forty-eight students from 2 high school classes volunteered to participate in the study. No association between prior and present computer game playing and baseline performance was found. The results were similar both for prior and present action game playing and for prior and present computer game playing in general. Our results indicate that prior and present computer game playing may not affect baseline performance in a virtual reality simulator.
A Computer Simulation of Bacterial Growth During Food-Processing
1974-11-01
Technical report by Edward W. Ross, Jr., Army Natick Laboratories, Natick, Massachusetts. Approved for public release.
Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing
2011-01-01
Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779
Filters for Improvement of Multiscale Data from Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, David J.; Reynolds, Daniel R.
2017-01-05
Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. By improving the accuracy of this filtered simulation data, it leads to a dramatic computational savings by allowing for shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
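Spectral filtering of noisy data can be illustrated minimally: a smooth "continuum" signal contaminated with additive white noise is low-pass filtered in Fourier space. The cutoff here is fixed by hand for the sketch; the paper's automatic optimum-cutoff selection is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 256
t = np.linspace(0.0, 1.0, n, endpoint=False)
# Smooth stand-in for continuum-scale data, plus additive white noise
# (playing the role of atomistic sampling error).
smooth = np.sin(2 * np.pi * t) + 0.3 * np.cos(2 * np.pi * 3 * t)
noisy = smooth + 0.3 * rng.standard_normal(n)

# Spectral filter: keep only the lowest Fourier modes, zero the rest.
cutoff = 8
spectrum = np.fft.rfft(noisy)
spectrum[cutoff:] = 0.0
filtered = np.fft.irfft(spectrum, n)

err_noisy = np.sqrt(np.mean((noisy - smooth) ** 2))
err_filtered = np.sqrt(np.mean((filtered - smooth) ** 2))
print(f"RMS error: {err_noisy:.3f} noisy -> {err_filtered:.3f} filtered")
```

Because white noise spreads its power evenly across all modes while the underlying data is concentrated in a few low modes, discarding the high modes removes most of the noise at little cost in signal — which is why filtering lets the atomistic simulations be shorter for the same multiscale precision.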
2017-01-05
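The spectral filtering idea described in the abstract above can be illustrated with a minimal sketch. This is not the authors' code: the test signal, noise level, and fixed cutoff are invented for illustration, whereas the paper's method selects the cutoff automatically.

```python
import numpy as np

def spectral_filter(signal, keep_modes):
    """Low-pass filter a noisy 1D signal by zeroing high-frequency FFT modes.

    keep_modes: number of lowest (non-negative) frequencies retained.
    """
    coeffs = np.fft.rfft(signal)
    coeffs[keep_modes:] = 0.0          # discard modes above the cutoff
    return np.fft.irfft(coeffs, n=len(signal))

# Smooth "continuum" data corrupted by additive white noise,
# mimicking sampling error from short atomistic runs.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
clean = np.sin(x) + 0.3 * np.cos(3 * x)
noisy = clean + rng.normal(scale=0.2, size=x.size)

filtered = spectral_filter(noisy, keep_modes=8)
err_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
err_filtered = np.sqrt(np.mean((filtered - clean) ** 2))
assert err_filtered < err_noisy   # filtering recovers accuracy
```

Because the clean signal here lives entirely in the retained modes, the filter removes most of the noise power without distorting the signal, which is the mechanism behind the accuracy gains the paper reports.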
NASA Astrophysics Data System (ADS)
Nakano, Masuo; Wada, Akiyoshi; Sawada, Masahiro; Yoshimura, Hiromasa; Onishi, Ryo; Kawahara, Shintaro; Sasaki, Wataru; Nasuno, Tomoe; Yamaguchi, Munehiko; Iriguchi, Takeshi; Sugi, Masato; Takeuchi, Yoshiaki
2017-03-01
Recent advances in high-performance computers facilitate operational numerical weather prediction by global hydrostatic atmospheric models with horizontal resolutions of ~10 km. Given further advances in such computers, and the fact that the hydrostatic balance approximation becomes invalid at spatial scales below 10 km, the development of highly accurate global nonhydrostatic models is urgently required. The Global 7 km mesh nonhydrostatic Model Intercomparison Project for improving TYphoon forecast (TYMIP-G7) is designed to understand and statistically quantify the advantages of high-resolution nonhydrostatic global atmospheric models for improving tropical cyclone (TC) prediction. A total of 137 sets of 5-day simulations using three next-generation nonhydrostatic global models with horizontal resolutions of 7 km and a conventional hydrostatic global model with a horizontal resolution of 20 km were run on the Earth Simulator. The three 7 km mesh nonhydrostatic models are the nonhydrostatic global spectral atmospheric Double Fourier Series Model (DFSM), the Multi-Scale Simulator for the Geoenvironment (MSSG), and the Nonhydrostatic ICosahedral Atmospheric Model (NICAM). The 20 km mesh hydrostatic model is the operational Global Spectral Model (GSM) of the Japan Meteorological Agency. Compared with the 20 km mesh GSM, the 7 km mesh models reduce systematic errors in the TC track, intensity, and wind radii predictions. The benefits of the multi-model ensemble method were confirmed for the 7 km mesh nonhydrostatic global models. While the three 7 km mesh models reproduce the typical axisymmetric mean inner-core structure, including the primary and secondary circulations, the simulated TC structures and their intensities differ substantially from model to model in each case. In addition, the simulated track is not consistently better than that of the 20 km mesh GSM. These results suggest that the development of more sophisticated initialization techniques and model physics is needed to further improve TC prediction.
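The benefit of the multi-model ensemble method noted above can be illustrated with a toy sketch. The tracks and the error model are synthetic assumptions, not TYMIP-G7 data: each "model" forecast is the true track plus independent noise, so the ensemble mean tends to have a smaller error than a typical single model.

```python
import numpy as np

# Toy illustration (synthetic data, not TYMIP-G7 output).
rng = np.random.default_rng(1)
true_track = np.cumsum(rng.normal(size=(20, 2)), axis=0)   # 20 lat/lon positions
n_models = 3
# Each model's forecast = truth + independent error (an assumption).
forecasts = true_track + rng.normal(scale=1.0, size=(n_models, 20, 2))

# Mean position error of each individual model, and of the ensemble mean.
single_errors = np.linalg.norm(forecasts - true_track, axis=2).mean(axis=1)
ensemble_mean = forecasts.mean(axis=0)
ensemble_error = np.linalg.norm(ensemble_mean - true_track, axis=1).mean()

assert ensemble_error < single_errors.mean()   # averaging cancels independent errors
```

The cancellation of independent errors is the standard statistical argument for multi-model ensembles; real model errors are correlated, so the gain in practice is smaller than this idealization suggests.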
NASA Astrophysics Data System (ADS)
Castiglioni, Giacomo
Flows over airfoils and blades in rotating machinery, unmanned and micro-aerial vehicles, wind turbines, and propellers feature a laminar boundary layer near the leading edge that is often followed by a laminar separation bubble and transition to turbulence further downstream. Typical Reynolds-averaged Navier-Stokes turbulence models are inadequate for such flows. Direct numerical simulation is the most reliable alternative, but also the most computationally expensive. This work assesses the capability of immersed boundary methods and large eddy simulations to reduce the computational requirements for such flows while still providing high-quality results. Two-dimensional and three-dimensional simulations of a laminar separation bubble on a NACA 0012 airfoil at Re_c = 5×10^4 and 5° incidence have been performed with an immersed boundary code and a commercial code using body-fitted grids. Several sub-grid scale models have been implemented in both codes and their performance evaluated. For the two-dimensional simulations with the immersed boundary method, the results show good agreement with the direct numerical simulation benchmark data for the pressure coefficient Cp and the friction coefficient Cf, but only when dissipative numerical schemes are used. There is evidence that this behavior can be attributed to the ability of dissipative schemes to damp numerical noise coming from the immersed boundary. For the three-dimensional simulations, the results show a good prediction of the separation point but an inaccurate prediction of the reattachment point unless full direct numerical simulation resolution is used. The commercial code shows good agreement with the direct numerical simulation benchmark data in both two- and three-dimensional simulations, but the presence of significant, unquantified numerical dissipation prevents a conclusive assessment of the actual predictive capabilities of very coarse large eddy simulations with low-order schemes in general cases. Additionally, a two-dimensional sweep of angles of attack from 0° to 5° is performed, showing a qualitative prediction of the jump in lift and drag coefficients due to the appearance of the laminar separation bubble. Numerical dissipation inhibits the predictive capabilities of large eddy simulations whenever it is of the same order of magnitude as, or larger than, the sub-grid scale dissipation. The need to estimate the numerical dissipation is most pressing for the low-order methods employed by commercial computational fluid dynamics codes. Following the recent work of Schranner et al., the equations and procedure for estimating the numerical dissipation rate and the numerical viscosity in a commercial code are presented. The method allows the numerical dissipation rate and numerical viscosity to be computed in physical space for arbitrary sub-domains in a self-consistent way, using only information provided by the code in question. The method is first tested on a three-dimensional Taylor-Green vortex flow in a simple cubic domain and compared with benchmark results obtained using an accurate incompressible spectral solver. The same procedure is then applied for the first time to a realistic flow configuration, specifically the laminar separation bubble flow over a NACA 0012 airfoil discussed above. The method proves robust, and its application reveals that, for the code and flow in question, the numerical dissipation can be significantly larger than the viscous dissipation or the dissipation of the classical Smagorinsky sub-grid scale model, confirming the earlier qualitative finding.
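The energy-budget style of numerical-dissipation estimate can be sketched in a drastically simplified setting. This is not the commercial-code procedure itself: it reduces the idea to 1D linear advection with a first-order upwind scheme, where the exact equation conserves energy, so any discrete energy decay is, by definition, numerical dissipation.

```python
import numpy as np

# 1D linear advection u_t + c u_x = 0 on a periodic grid; the exact
# solution conserves the energy E = 0.5 * sum(u**2) * dx.
n, c = 200, 1.0
dx = 1.0 / n
dt = 0.4 * dx / c                      # CFL number = 0.4
x = np.arange(n) * dx
u = np.sin(2.0 * np.pi * x)

E_old = 0.5 * np.sum(u**2) * dx
u = u - c * dt / dx * (u - np.roll(u, 1))   # first-order upwind step
E_new = 0.5 * np.sum(u**2) * dx

# Exact advection conserves E, so the decay is purely numerical dissipation.
eps_num = (E_old - E_new) / dt
grad2 = np.sum(((np.roll(u, -1) - np.roll(u, 1)) / (2 * dx))**2) * dx
nu_num = eps_num / grad2               # effective numerical viscosity

# Upwind's modified-equation prediction: nu ≈ c * dx * (1 - CFL) / 2.
assert abs(nu_num - c * dx * (1 - 0.4) / 2) / (c * dx) < 0.1
```

In a real LES the resolved viscous and sub-grid fluxes must also be subtracted from the energy budget before attributing the residual to the scheme, which is the harder bookkeeping the Schranner-type procedure formalizes.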
An Investigation of Computer-based Simulations for School Crises Management.
ERIC Educational Resources Information Center
Degnan, Edward; Bozeman, William
2001-01-01
Describes development of a computer-based simulation program for training school personnel in crisis management. Addresses the data collection and analysis involved in developing a simulated event, the systems requirements for simulation, and a case study of application and use of the completed simulation. (Contains 21 references.) (Authors/PKP)
Problem-Solving in the Pre-Clinical Curriculum: The Uses of Computer Simulations.
ERIC Educational Resources Information Center
Michael, Joel A.; Rovick, Allen A.
1986-01-01
Promotes the use of computer-based simulations in the pre-clinical medical curriculum as a means of providing students with opportunities for problem solving. Describes simple simulations of skeletal muscle loads, complex simulations of major organ systems and comprehensive simulation models of the entire human body. (TW)
Giachetti, Daniela; Biagi, Marco; Manetti, Fabrizio; De Vico, Luca
2016-01-01
Benign prostatic hyperplasia is a common disease in men aged over 50 years, with an incidence rising to more than 80% over the age of 70, and it is attracting increasing pharmaceutical interest. Alongside conventional therapies, such as α-adrenoreceptor antagonists and 5α-reductase inhibitors, there is a large need for treatments with fewer adverse effects on, e.g., blood pressure and sexual function: phytotherapy may fill this need. Serenoa repens standardized extract has been widely studied, and its ability to reduce lower urinary tract symptoms related to benign prostatic hyperplasia is comprehensively described in the literature. This work proposes an innovative computational investigation of the mechanism by which the active principles of Serenoa repens extract inhibit 5α-reductase, performing molecular docking simulations on the crystal structure of human liver 5β-reductase. The results confirm that both sterols and fatty acids can play a role in inhibiting the enzyme, suggesting a competitive mechanism of inhibition. This work offers further support for the rational use of herbal products in the management of benign prostatic hyperplasia and suggests computational methods as an innovative, low-cost, and non-invasive approach to studying phytocomplex activity toward protein targets. PMID:27904805
NASA Astrophysics Data System (ADS)
Ametova, Evelina; Ferrucci, Massimiliano; Chilingaryan, Suren; Dewulf, Wim
2018-06-01
The recent emergence of advanced manufacturing techniques such as additive manufacturing, together with increased demands on component integrity, has motivated research on the application of x-ray computed tomography (CT) for dimensional quality control. While CT has shown significant empirical potential for this purpose, metrological research is needed to accelerate the acceptance of CT as a measuring instrument. The accuracy of CT-based measurements is sensitive to the geometrical configuration of the instrument during data acquisition, namely the relative position and orientation of the x-ray source, rotation stage, and detector. Consistency between the actual instrument geometry and the corresponding parameters used in the reconstruction algorithm is critical. Currently available procedures provide users with only estimates of the geometrical parameters. Quantification and propagation of the uncertainty in the measured geometrical parameters must be considered to provide a complete uncertainty analysis and to establish confidence intervals for CT dimensional measurements. In this paper, we propose a computationally inexpensive model to approximate the influence of errors in CT geometrical parameters on dimensional measurement results. We use surface points extracted from a computer-aided design (CAD) model to model the discrepancies in the radiographic image coordinates assigned to the projected edges between an aligned system and a system with misalignments. The efficacy of the proposed method was confirmed on simulated and experimental data in the presence of various geometrical uncertainty contributors.
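The idea of propagating a geometrical misalignment into the projected point coordinates can be sketched with a toy pinhole projection model. This is an illustrative sketch, not the authors' model: the geometry values, the random "surface" points, and the 0.1 mm detector offset are invented.

```python
import numpy as np

def project(points, src_det_dist, src_obj_dist, det_offset=(0.0, 0.0)):
    """Pinhole (cone-beam) projection: source at the origin, detector plane
    perpendicular to the beam axis at distance src_det_dist."""
    z = src_obj_dist + points[:, 2]            # distance of each point from source
    mag = src_det_dist / z                     # per-point magnification
    u = points[:, 0] * mag + det_offset[0]
    v = points[:, 1] * mag + det_offset[1]
    return np.column_stack([u, v])

rng = np.random.default_rng(2)
surface = rng.uniform(-5.0, 5.0, size=(100, 3))   # mm; stand-in for CAD surface points

aligned = project(surface, src_det_dist=800.0, src_obj_dist=400.0)
misaligned = project(surface, src_det_dist=800.0, src_obj_dist=400.0,
                     det_offset=(0.1, 0.0))       # 0.1 mm in-plane detector shift

shift = np.linalg.norm(misaligned - aligned, axis=1)
assert np.allclose(shift, 0.1)   # a pure detector offset shifts all projections equally
```

A rigid in-plane detector shift displaces every projected point by the same amount; other misalignments (detector tilt, source or rotation-axis offsets) produce point-dependent discrepancies, which is why a point-wise model over the whole CAD surface is useful.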