Sample records for time simulation results

  1. Reducing EnergyPlus Run Time For Code Compliance Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.

    2014-09-12

    Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code-baseline building models, together with mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation period using 4 weeks of hourly weather data (one per quarter) against an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used to determine the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
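As a rough illustration of the shortened-run-period idea in record 1 (not the authors' tooling), one can sample one week per quarter from an 8760-hour year and scale the result to an annual estimate; the particular week choices and the simple 52/4 scaling below are assumptions:

```python
# Sketch: estimate an annual total from 4 sampled weeks (one per quarter).
HOURS_PER_WEEK = 24 * 7

def quarterly_week_slices(start_weeks=(2, 15, 28, 41)):
    """Hour-index slices for one representative week in each quarter
    (the start weeks are illustrative, not the paper's selection)."""
    return [slice(w * HOURS_PER_WEEK, (w + 1) * HOURS_PER_WEEK)
            for w in start_weeks]

def annual_estimate(hourly_loads):
    """Sum the 4 sampled weeks, then scale by 52/4 = 13 to a full year."""
    sampled = sum(sum(hourly_loads[s]) for s in quarterly_week_slices())
    return sampled * 13.0
```

For a constant load the estimate recovers 4 x 168 x 13 = 8736 of the 8760 true hours, i.e. the scaling itself introduces a small, known bias on top of any weather-sampling error.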

  2. Real time digital propulsion system simulation for manned flight simulators

    NASA Technical Reports Server (NTRS)

    Mihaloew, J. R.; Hart, C. E.

    1978-01-01

    A real time digital simulation of a STOL propulsion system was developed which generates significant dynamics and internal variables needed to evaluate system performance and aircraft interactions using manned flight simulators. The simulation ran at a real-to-execution time ratio of 8.8. The model was used in a piloted NASA flight simulator program to evaluate the simulation technique and the propulsion system digital control. The simulation is described and results shown. Limited results of the flight simulation program are also presented.

  3. Real-Time Visualization of an HPF-based CFD Simulation

    NASA Technical Reports Server (NTRS)

    Kremenetsky, Mark; Vaziri, Arsi; Haimes, Robert; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Current time-dependent CFD simulations produce very large multi-dimensional data sets at each time step. The visual analysis of computational results is traditionally performed by post-processing the static data on graphics workstations. We present results from an alternate approach in which we analyze the simulation data in situ on each processing node at the time of simulation. The locally analyzed results, usually more economical and in a reduced form, are then combined and sent back for visualization on a graphics workstation.

  4. Acceleration of discrete stochastic biochemical simulation using GPGPU.

    PubMed

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this entails a large computational cost. We have implemented a parallel method for using the SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU, using hybrid parallelization, is about 16 times faster than a sequential simulation on a CPU: each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are also parallelized. We also improved the memory access pattern and reduced the memory footprint in order to optimize the computations on the GPU, and implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on models of various sizes, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations on a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
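The sequential baseline that records 4 and 5 parallelize is the Gillespie direct method; a minimal generic sketch (not the authors' GPU code) for one realization:

```python
import random

def ssa(propensities, stoichiometry, x0, t_end, seed=0):
    """Gillespie direct-method SSA, one realization.
    propensities: list of functions state -> reaction rate
    stoichiometry: list of state-change vectors, one per reaction
    Returns (final time, final state)."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while t < t_end:
        rates = [a(x) for a in propensities]
        total = sum(rates)
        if total == 0.0:          # no reaction can fire
            break
        t += rng.expovariate(total)   # exponential waiting time
        r = rng.random() * total      # choose which reaction fires
        acc = 0.0
        for rate, change in zip(rates, stoichiometry):
            acc += rate
            if r < acc:
                x = [xi + ci for xi, ci in zip(x, change)]
                break
    return t, x
```

The GPU method in the abstract runs many independent realizations of exactly this loop at once, which is why multiple runs for statistics map so naturally onto GPU hardware.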

  5. Acceleration of discrete stochastic biochemical simulation using GPGPU

    PubMed Central

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this entails a large computational cost. We have implemented a parallel method for using the SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU, using hybrid parallelization, is about 16 times faster than a sequential simulation on a CPU: each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are also parallelized. We also improved the memory access pattern and reduced the memory footprint in order to optimize the computations on the GPU, and implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on models of various sizes, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations on a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936

  6. Design of teleoperation system with a force-reflecting real-time simulator

    NASA Technical Reports Server (NTRS)

    Hirata, Mitsunori; Sato, Yuichi; Nagashima, Fumio; Maruyama, Tsugito

    1994-01-01

    We developed a force-reflecting teleoperation system that uses a real-time graphic simulator. This system eliminates the effects of communication time delays in remote robot manipulation. The simulator provides the operator with a predictive display and feedback of computed contact forces through a six-degree-of-freedom (6-DOF) master arm on a real-time basis. With this system, peg-in-hole tasks involving round-trip communication time delays of up to a few seconds were performed at three support levels: a real image alone, a predictive display with a real image, and a real-time graphic simulator with computed-contact-force reflection and a predictive display. The experimental results indicate that the best teleoperation efficiency was achieved using the force-reflecting simulator with the two images: the shortest work time, the lowest maximum sensor reading, and a 100 percent success rate were obtained. These results demonstrate the effectiveness of simulated force reflection in improving teleoperation efficiency.

  7. Computer model to simulate testing at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.

    1995-01-01

    A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.

  8. 40 CFR 86.004-30 - Certification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... simulation of such, resulting in an increase of 1.5 times the NMHC+NOX standard or FEL above the NMHC+NOX... simulation of such, resulting in exhaust emissions exceeding 1.5 times the applicable standard or FEL for... catastrophically failed, or an electronic simulation of such. (2)(i) Otto-cycle. An engine misfire condition is...

  9. 40 CFR 86.004-30 - Certification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... simulation of such, resulting in an increase of 1.5 times the NMHC+NOX standard or FEL above the NMHC+NOX... simulation of such, resulting in exhaust emissions exceeding 1.5 times the applicable standard or FEL for... catastrophically failed, or an electronic simulation of such. (2)(i) Otto-cycle. An engine misfire condition is...

  10. Part weight verification between simulation and experiment of plastic part in injection moulding process

    NASA Astrophysics Data System (ADS)

    Amran, M. A. M.; Idayu, N.; Faizal, K. M.; Sanusi, M.; Izamshah, R.; Shahir, M.

    2016-11-01

    In this study, the main objective is to determine the percentage difference in part weight between experimental and simulation work. The effect of process parameters on the weight of the plastic part is also investigated. The process parameters involved were mould temperature, melt temperature, injection time and cooling time. Autodesk Simulation Moldflow software was used to run the simulation of the plastic part. The Taguchi method was selected as the Design of Experiment approach to conduct the experiment. Then, the simulation result was validated against the experimental result. It was found that the minimum and maximum percentage differences in part weight between simulation and experimental work are 0.35% and 1.43% respectively. In addition, the most significant parameter affecting part weight is the mould temperature, followed by melt temperature, injection time and cooling time.

  11. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jalan Ganesha 10, Bandung 40132

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the codes based on the MC algorithm that is widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study investigated the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run serially on a CPU, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU ran about 20-31 times faster than on a single core of the CPU. Another result shows that optimum image quality was obtained with 10^8 or more histories and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
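A toy illustration of the Monte Carlo photon idea in record 11 (attenuation through a homogeneous slab only; MC-GPU itself models full transport physics, and this sketch is not its code):

```python
import math
import random

def transmitted_fraction(mu, thickness, n_photons=100_000, seed=1):
    """Sample an exponential free path per photon and count those that
    cross the slab without interacting. Converges to exp(-mu * thickness),
    the Beer-Lambert law, as n_photons grows."""
    rng = random.Random(seed)
    passed = sum(1 for _ in range(n_photons)
                 if rng.expovariate(mu) > thickness)
    return passed / n_photons
```

Because every photon history is independent, exactly this kind of loop is what maps one-photon-per-core onto a GPU, as the abstract describes.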

  12. International benchmarking of longitudinal train dynamics simulators: results

    NASA Astrophysics Data System (ADS)

    Wu, Qing; Spiryagin, Maksym; Cole, Colin; Chang, Chongyi; Guo, Gang; Sakalo, Alexey; Wei, Wei; Zhao, Xubao; Burgelman, Nico; Wiersma, Pier; Chollet, Hugues; Sebes, Michel; Shamdani, Amir; Melzi, Stefano; Cheli, Federico; di Gialleonardo, Egidio; Bosso, Nicola; Zampieri, Nicolò; Luo, Shihui; Wu, Honghua; Kaza, Guy-Léon

    2018-03-01

    This paper presents the results of the International Benchmarking of Longitudinal Train Dynamics Simulators which involved participation of nine simulators (TABLDSS, UM, CRE-LTS, TDEAS, PoliTo, TsDyn, CARS, BODYSIM and VOCO) from six countries. Longitudinal train dynamics results and computing time of four simulation cases are presented and compared. The results show that all simulators had basic agreement in simulations of locomotive forces, resistance forces and track gradients. The major differences among different simulators lie in the draft gear models. TABLDSS, UM, CRE-LTS, TDEAS, TsDyn and CARS had general agreement in terms of the in-train forces; minor differences exist as reflections of draft gear model variations. In-train force oscillations were observed in VOCO due to the introduction of wheel-rail contact. In-train force instabilities were sometimes observed in PoliTo and BODYSIM due to the velocity controlled transitional characteristics which could have generated unreasonable transitional stiffness. Regarding computing time per train operational second, the following list is in order of increasing computing speed: VOCO, TsDyn, PoliTO, CARS, BODYSIM, UM, TDEAS, CRE-LTS and TABLDSS (fastest); all simulators except VOCO, TsDyn and PoliTo achieved faster speeds than real-time simulations. Similarly, regarding computing time per integration step, the computing speeds in order are: CRE-LTS, VOCO, CARS, TsDyn, UM, TABLDSS and TDEAS (fastest).
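The "faster than real time" criterion used in record 12 can be stated as a simple ratio (an illustrative helper, not benchmark code):

```python
def real_time_factor(simulated_seconds, computing_seconds):
    """Ratio of train-operational time simulated to wall-clock computing
    time; a value above 1.0 means the simulator runs faster than real time,
    the benchmark's criterion."""
    return simulated_seconds / computing_seconds
```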

  13. Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK

    PubMed Central

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting

    2016-01-01

    This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs) for performing rapid and accurate simulations. Inverter-based TDSTSs offer the benefits of low cost and simple structure for temperature-to-digital conversion and have been developed. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluations. However, such tools require extremely long simulation time and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) by which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in SIMULINK environment was developed to substantially accelerate the simulation and simplify the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator have favorable agreement with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507

  14. Geographically distributed real-time digital simulations using linear prediction

    DOE PAGES

    Liu, Ren; Mohanpurkar, Manish; Panwar, Mayank; ...

    2016-07-04

    Real-time simulation is a powerful tool for analyzing, planning, and operating modern power systems. For analyzing ever-evolving power systems and understanding complex dynamic and transient interactions, larger real-time computation capabilities are essential. These facilities are interspersed all over the globe, and to leverage unique facilities, geographically distributed real-time co-simulation for analyzing power systems is pursued and presented. However, the communication latency between different simulator locations may lead to inaccuracy in geographically distributed real-time co-simulations. In this paper, the effect of communication latency on geographically distributed real-time co-simulation is introduced and discussed. In order to reduce the effect of the communication latency, a real-time data predictor based on linear curve fitting is developed and integrated into the distributed real-time co-simulation. Two digital real-time simulators are used to perform dynamic and transient co-simulations with communication latency and the predictor. Results demonstrate the effect of the communication latency and the performance of the real-time data predictor in compensating for it.
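A linear curve-fitting predictor of the kind records 14 and 15 describe can be sketched as a least-squares line through recent samples, evaluated one latency ahead (the window size and fitting details here are assumptions, not the paper's implementation):

```python
def predict_linear(times, values, t_future):
    """Least-squares linear fit through recent (t, v) samples, evaluated
    at t_future. Used to estimate what a remote signal will be once the
    communication latency has elapsed."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    denom = sum((t - mt) ** 2 for t in times)
    slope = sum((t - mt) * (v - mv)
                for t, v in zip(times, values)) / denom
    return mv + slope * (t_future - mt)
```

For a signal that is locally close to linear, evaluating the fit at (current time + latency) compensates the delay; for fast transients the residual error grows, which matches the paper's motivation for studying the predictor under transient conditions.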

  15. Geographically distributed real-time digital simulations using linear prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ren; Mohanpurkar, Manish; Panwar, Mayank

    Real-time simulation is a powerful tool for analyzing, planning, and operating modern power systems. For analyzing ever-evolving power systems and understanding complex dynamic and transient interactions, larger real-time computation capabilities are essential. These facilities are interspersed all over the globe, and to leverage unique facilities, geographically distributed real-time co-simulation for analyzing power systems is pursued and presented. However, the communication latency between different simulator locations may lead to inaccuracy in geographically distributed real-time co-simulations. In this paper, the effect of communication latency on geographically distributed real-time co-simulation is introduced and discussed. In order to reduce the effect of the communication latency, a real-time data predictor based on linear curve fitting is developed and integrated into the distributed real-time co-simulation. Two digital real-time simulators are used to perform dynamic and transient co-simulations with communication latency and the predictor. Results demonstrate the effect of the communication latency and the performance of the real-time data predictor in compensating for it.

  16. Fast Whole-Engine Stirling Analysis

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2005-01-01

    An experimentally validated approach is described for fast axisymmetric Stirling engine simulations. These simulations include the entire displacer interior and demonstrate it is possible to model a complete engine cycle in less than an hour. The focus of this effort was to demonstrate it is possible to produce useful Stirling engine performance results in a time-frame short enough to impact design decisions. The combination of utilizing the latest 64-bit Opteron computer processors, fiber-optical Myrinet communications, dynamic meshing, and across zone partitioning has enabled solution times at least 240 times faster than previous attempts at simulating the axisymmetric Stirling engine. A comparison of the multidimensional results, calibrated one-dimensional results, and known experimental results is shown. This preliminary comparison demonstrates that axisymmetric simulations can be very accurate, but more work remains to improve the simulations through such means as modifying the thermal equilibrium regenerator models, adding fluid-structure interactions, including radiation effects, and incorporating mechanodynamics.

  17. Fast Whole-Engine Stirling Analysis

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2007-01-01

    An experimentally validated approach is described for fast axisymmetric Stirling engine simulations. These simulations include the entire displacer interior and demonstrate it is possible to model a complete engine cycle in less than an hour. The focus of this effort was to demonstrate it is possible to produce useful Stirling engine performance results in a time-frame short enough to impact design decisions. The combination of utilizing the latest 64-bit Opteron computer processors, fiber-optical Myrinet communications, dynamic meshing, and across zone partitioning has enabled solution times at least 240 times faster than previous attempts at simulating the axisymmetric Stirling engine. A comparison of the multidimensional results, calibrated one-dimensional results, and known experimental results is shown. This preliminary comparison demonstrates that axisymmetric simulations can be very accurate, but more work remains to improve the simulations through such means as modifying the thermal equilibrium regenerator models, adding fluid-structure interactions, including radiation effects, and incorporating mechanodynamics.

  18. Wind Energy System Time-domain (WEST) analyzers using hybrid simulation techniques

    NASA Technical Reports Server (NTRS)

    Hoffman, J. A.

    1979-01-01

    Two stand-alone analyzers constructed for real time simulation of the complex dynamic characteristics of horizontal-axis wind energy systems are described. Mathematical models for an aeroelastic rotor, including nonlinear aerodynamic and elastic loads, are implemented with high speed digital and analog circuitry. Models for elastic supports, a power train, a control system, and a rotor gimbal system are also included. Limited correlation efforts show good comparisons between results produced by the analyzers and results produced by a large digital simulation. The digital simulation results correlate well with test data.

  19. Real-time visual simulation of APT system based on RTW and Vega

    NASA Astrophysics Data System (ADS)

    Xiong, Shuai; Fu, Chengyu; Tang, Tao

    2012-10-01

    The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that the simulation result of running the C code is the same as that of running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, an APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on a programmable GPU. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation effect.

  20. Mass imbalances in EPANET water-quality simulations

    NASA Astrophysics Data System (ADS)

    Davis, Michael J.; Janke, Robert; Taxon, Thomas N.

    2018-04-01

    EPANET is widely employed to simulate water quality in water distribution systems. However, in general, the time-driven simulation approach used to determine concentrations of water-quality constituents provides accurate results only for short water-quality time steps. Overly long time steps can yield errors in concentration estimates and can result in situations in which constituent mass is not conserved. The use of a time step that is sufficiently short to avoid these problems may not always be feasible. The absence of EPANET errors or warnings does not ensure conservation of mass. This paper provides examples illustrating mass imbalances and explains how such imbalances can occur because of fundamental limitations in the water-quality routing algorithm used in EPANET. In general, these limitations cannot be overcome by the use of improved water-quality modeling practices. This paper also presents a preliminary event-driven approach that conserves mass with a water-quality time step that is as long as the hydraulic time step. Results obtained using the current approach converge, or tend to converge, toward those obtained using the preliminary event-driven approach as the water-quality time step decreases. Improving the water-quality routing algorithm used in EPANET could eliminate mass imbalances and related errors in estimated concentrations. The results presented in this paper should be of value to those who perform water-quality simulations using EPANET or use the results of such simulations, including utility managers and engineers.
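Since, as record 20 notes, the absence of EPANET errors or warnings does not ensure conservation of mass, a simulation's totals can be checked externally; a generic mass-balance check of the kind implied (not an EPANET API):

```python
def mass_balance_error(mass_in, mass_out,
                       stored_initial, stored_final, mass_reacted=0.0):
    """Relative mass-balance error for a water-quality simulation:
    inflow plus initial storage should equal outflow plus final storage
    plus reaction losses. A value well above numerical noise flags the
    kind of imbalance described in the abstract."""
    expected = mass_in + stored_initial
    accounted = mass_out + stored_final + mass_reacted
    return abs(expected - accounted) / expected if expected else 0.0
```

Running such a check at several water-quality time steps, and confirming the error shrinks as the step shrinks, is one practical way to apply the paper's convergence observation.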

  1. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

    A particle motion considering thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem with thermophoresis simulation is the computation time, which is proportional to the collision frequency; note that the time step interval becomes very small for simulations considering the motion of a large particle. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with the particle in one collision event. A much larger time step interval can then be adopted by virtue of the collision weight factor: for a particle size of 1 μm it is about a million times longer than the conventional DSMC time step interval, so the computation time becomes about one-millionth. We simulate graphite particle motion considering thermophoretic force with DSMC-Neutrals (the Particle-PLUS neutral module), commercial software adopting the DSMC method, using the above collision weight factor. The particle is a sphere of size 1 μm, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit is the same as Waldmann's result.
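The collision-weight-factor idea in record 1 can be caricatured as scaling one sampled collision's momentum transfer by the number of molecules it represents (a crude head-on sketch under strong assumptions; the real model samples molecular velocities and scattering angles):

```python
def particle_velocity_update(v_particle, m_particle,
                             m_molecule, v_molecule, weight):
    """One simulated collision event standing in for `weight` molecule-
    particle collisions: apply `weight` times the single-collision
    momentum transfer (1D, perfectly absorbing encounter as a toy model)."""
    dp_single = m_molecule * (v_molecule - v_particle)
    return v_particle + weight * dp_single / m_particle
```

Because one event now carries the momentum of many molecular impacts, the time between simulated events, and hence the usable time step, grows by the same factor, which is the source of the roughly millionfold speedup quoted above.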

  2. Minimum MD simulation length required to achieve reliable results in free energy perturbation calculations: case study of relative binding free energies of fructose-1,6-bisphosphatase inhibitors.

    PubMed

    Rathore, R S; Aparoy, P; Reddanna, P; Kondapi, A K; Reddy, M Rami

    2011-07-30

    In an attempt to establish criteria for the length of simulation needed to achieve the desired convergence of free energy calculations, two studies were carried out on chosen complexes of FBPase-AMP mimics. Calculations were performed for varied lengths of simulation and for different starting configurations, using both conventional and QM/MM FEP methods. The results demonstrate that for small perturbations, a 1248 ps simulation time could be regarded as a reasonable yardstick for achieving convergence of the results. As the simulation time is extended, the errors associated with the free energy calculations also gradually taper off. Moreover, when the simulation is started from different initial configurations of the systems, the results do not change significantly when it is run for 1248 ps. This study on FBPase-AMP mimics corroborates our previous demonstration of the simulation time required for solvation studies, by both conventional and ab initio FEP. The establishment of the aforementioned criteria for simulation length serves as a useful benchmark in drug design efforts using FEP methodologies, allowing meaningful and unequivocal conclusions to be drawn. Copyright © 2011 Wiley Periodicals, Inc.
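The quantity whose convergence record 2 studies is the free energy perturbation average; the standard Zwanzig estimator can be written generically (this is the textbook form, not the authors' code):

```python
import math

def fep_delta_g(delta_u_samples, kT=1.0):
    """Zwanzig FEP estimate: dG = -kT * ln <exp(-dU / kT)>, averaged
    over energy differences dU sampled in the reference state. The
    paper's point is that enough MD sampling (~1248 ps there) is
    needed for this exponential average to converge."""
    n = len(delta_u_samples)
    avg = sum(math.exp(-du / kT) for du in delta_u_samples) / n
    return -kT * math.log(avg)
```

Because the average is exponentially weighted, rare low-dU configurations dominate it, which is why the estimate keeps drifting until the trajectory is long enough to sample them, exactly the convergence-with-length behavior the abstract describes.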

  3. SIMSAT: An object oriented architecture for real-time satellite simulation

    NASA Technical Reports Server (NTRS)

    Williams, Adam P.

    1993-01-01

    Real-time satellite simulators are vital tools in the support of satellite missions. They are used in the testing of ground control systems, the training of operators, the validation of operational procedures, and the development of contingency plans. The simulators must provide high-fidelity modeling of the satellite, which requires detailed system information, much of which is not available until relatively near launch. The short time-scales and resulting high productivity required of such simulator developments culminates in the need for a reusable infrastructure which can be used as a basis for each simulator. This paper describes a major new simulation infrastructure package, the Software Infrastructure for Modelling Satellites (SIMSAT). It outlines the object oriented design methodology used, describes the resulting design, and discusses the advantages and disadvantages experienced in applying the methodology.

  4. Numerical Study of a High Head Francis Turbine with Measurements from the Francis-99 Project

    NASA Astrophysics Data System (ADS)

    Wallimann, H.; Neubauer, R.

    2015-01-01

    For the Francis-99 project, initiated by the Norwegian University of Science and Technology (NTNU, Norway) and the Luleå University of Technology (LTU, Sweden), numerical flow simulation has been performed and the results compared to experimentally obtained data. The full machine, including spiral casing, stay vanes, guide vanes, runner and draft tube, was simulated transiently for three operating points defined by the Francis-99 organisers. Two sets of results were created with differing time steps. Additionally, a reduced domain was simulated in a stationary manner to create a complete cut along constant prototype head and constant prototype discharge. The efficiency values and shapes of the curves have been investigated and compared to the experimental data. Special attention has been given to rotor-stator interaction (RSI). Signals from several probes and their counterparts in the simulation have been processed to evaluate the pressure fluctuations occurring due to the RSI. The direct comparison of the hydraulic efficiency from the full machine simulation with the experimental data showed no improvement when using a 1° time step compared to a coarser 2° time step. At the BEP the 2° time step even showed a slightly better result, with an absolute deviation of 1.08% compared with 1.24% for the 1° time step. At the other two operating points the simulation results were practically identical but fell short of predicting the measured values. The RSI evaluation was done using the results of the 2° time step simulation, which proved to be an adequate setting for reproducing pressure signals with peaks at the correct frequencies. The simulation results showed the highest amplitudes in the vaneless space at the BEP operating point, at a location different from the available probe measurements. This implies that not only the radial distance but also the shape of the vaneless space influences the RSI.

  5. Short- and long-time diffusion and dynamic scaling in suspensions of charged colloidal particles

    NASA Astrophysics Data System (ADS)

    Banchio, Adolfo J.; Heinen, Marco; Holmqvist, Peter; Nägele, Gerhard

    2018-04-01

    We report on a comprehensive theory-simulation-experimental study of collective and self-diffusion in concentrated suspensions of charge-stabilized colloidal spheres. In theory and simulation, the spheres are assumed to interact directly by a hard-core plus screened Coulomb effective pair potential. The intermediate scattering function, fc(q, t), is calculated by elaborate accelerated Stokesian dynamics (ASD) simulations for Brownian systems where many-particle hydrodynamic interactions (HIs) are fully accounted for, using a novel extrapolation scheme to a macroscopically large system size valid for all correlation times. The study spans the correlation time range from the colloidal short-time to the long-time regime. Additionally, Brownian Dynamics (BD) simulation and mode-coupling theory (MCT) results of fc(q, t) are generated where HIs are neglected. Using these results, the influence of HIs on collective and self-diffusion and the accuracy of the MCT method are quantified. It is shown that HIs enhance collective and self-diffusion at intermediate and long times. At short times self-diffusion, and for wavenumbers outside the structure factor peak region also collective diffusion, are slowed down by HIs. MCT significantly overestimates the slowing influence of dynamic particle caging. The dynamic scattering functions obtained in the ASD simulations are in overall good agreement with our dynamic light scattering (DLS) results for a concentration series of charged silica spheres in an organic solvent mixture, in the experimental time window and wavenumber range. From the simulation data for the time derivative of the width function associated with fc(q, t), there is indication of long-time exponential decay of fc(q, t), for wavenumbers around the location of the static structure factor principal peak. 
The experimental scattering functions in the probed time range are consistent with a time-wavenumber factorization scaling behavior of fc(q, t) that was first reported by Segrè and Pusey [Phys. Rev. Lett. 77, 771 (1996)] for suspensions of hard spheres. Our BD simulation and MCT results predict a significant violation of exact factorization scaling which, however, is approximately restored according to the ASD results when HIs are accounted for, consistent with the experimental findings for fc(q, t). Our study of collective diffusion is complemented by simulation and theoretical results for the self-intermediate scattering function, fs(q, t), and its non-Gaussian parameter α2(t) and for the particle mean squared displacement W(t) and its time derivative. Since self-diffusion properties are not assessed in standard DLS measurements, a method to deduce W(t) approximately from fc(q, t) is theoretically validated.
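The central quantity above, the collective intermediate scattering function, can be estimated directly from particle trajectories. A schematic NumPy sketch (not the paper's ASD machinery; the demo uses non-interacting Brownian particles, for which fc(q, t) decays as exp(-q²Dt)):

```python
import numpy as np

def collective_isf(traj, q):
    """f_c(q, t) from a trajectory array of shape (n_frames, n_particles, 3)
    and a wavevector q of shape (3,), normalized so f_c(q, 0) = 1."""
    rho = np.exp(1j * (traj @ q)).sum(axis=1)          # collective density mode
    n_frames, n_part = traj.shape[0], traj.shape[1]
    f = np.array([(rho[: n_frames - lag].conj() * rho[lag:]).mean().real
                  for lag in range(n_frames)]) / n_part
    return f / f[0]

# Demo: free Brownian particles with Gaussian displacements per frame,
# random dilute initial positions so that cross terms average out.
rng = np.random.default_rng(2)
steps = 0.5 * rng.standard_normal((1000, 100, 3))
traj = 100.0 * rng.random((1, 100, 3)) + steps.cumsum(axis=0)
# Average over three orthogonal q directions of unit magnitude.
f = np.mean([collective_isf(traj, q) for q in np.eye(3)], axis=0)
```

For this ideal system the decay rate per frame is q²·var/2 = 0.125, so f drops from 1 toward zero over a few tens of frames; interacting suspensions, as in the paper, decay more slowly due to caging.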

  6. Improvements of the two-dimensional FDTD method for the simulation of normal- and superconducting planar waveguides using time series analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofschen, S.; Wolff, I.

    1996-08-01

Time-domain simulation results of two-dimensional (2-D) planar waveguide finite-difference time-domain (FDTD) analysis are normally analyzed using the Fourier transform. The introduced method of time series analysis to extract propagation and attenuation constants reduces the required computation time drastically. Additionally, a nonequidistant discretization together with an adequate excitation technique is used to reduce the number of spatial grid points. It is therefore possible to simulate normal- and superconducting planar waveguide structures with very thin conductors and small dimensions, as used in MMIC technology. The simulation results are compared with measurements and show good agreement.
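One common way to obtain a complex propagation constant γ = α + jβ from time-domain field data is to extract single-frequency phasors at two observation planes and take the complex logarithm of their ratio. A hedged illustration (a generic two-probe phasor method on synthetic signals, not necessarily the authors' exact time-series technique):

```python
import numpy as np

def propagation_constant(v1, v2, dt, f0, dz):
    """Estimate gamma = alpha + j*beta from time signals sampled at two
    planes a distance dz apart, assuming a single mode at frequency f0."""
    t = np.arange(len(v1)) * dt
    phasor = lambda v: (np.asarray(v) * np.exp(-2j * np.pi * f0 * t)).sum()
    # Real part: attenuation constant (Np/m); imaginary part: phase constant (rad/m)
    return np.log(phasor(v1) / phasor(v2)) / dz

# Synthetic check: alpha = 2 Np/m, beta = 300 rad/m, probes 5 mm apart,
# 5000 samples at 1 ps covering an integer number of 10 GHz periods.
dt, f0, dz = 1e-12, 10e9, 0.005
alpha, beta = 2.0, 300.0
t = np.arange(5000) * dt
v1 = np.cos(2 * np.pi * f0 * t)
v2 = np.exp(-alpha * dz) * np.cos(2 * np.pi * f0 * t - beta * dz)
g = propagation_constant(v1, v2, dt, f0, dz)
```

Note that the phase unwrap limits dz to less than half a guided wavelength; the time-series methods of the paper avoid running the FDTD simulation long enough for a clean Fourier transform.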

  7. Implicit integration methods for dislocation dynamics

    DOE PAGES

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...

    2015-01-20

Strain hardening simulations in dislocation dynamics require integrating stiff systems of ordinary differential equations in time, with expensive force calculations, discontinuous topological events, and rapidly changing problem size. The solvers currently in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically achieved with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
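The baseline named above, the implicit trapezoidal rule with a Newton solve for the per-step nonlinear system, can be sketched generically (a toy stiff scalar ODE; nothing dislocation-specific):

```python
import numpy as np

def trapezoidal_step(f, jac, t, y, h, tol=1e-10, maxit=20):
    """One implicit trapezoidal step for y' = f(t, y), solved with Newton."""
    y_new = y + h * f(t, y)                 # explicit Euler predictor
    for _ in range(maxit):
        r = y_new - y - 0.5 * h * (f(t, y) + f(t + h, y_new))
        if np.linalg.norm(r) <= tol * np.linalg.norm(y_new):
            break                           # converged (relative residual test)
        J = np.eye(len(y)) - 0.5 * h * jac(t + h, y_new)
        y_new = y_new - np.linalg.solve(J, r)
    return y_new

# Stiff scalar test problem y' = -50*y, y(0) = 1, stepped to t = 1.
f = lambda t, y: -50.0 * y
jac = lambda t, y: np.array([[-50.0]])
y, h = np.array([1.0]), 0.01
for k in range(100):
    y = trapezoidal_step(f, jac, k * h, y, h)
# The trapezoidal amplification factor per step is (1 - 0.25)/(1 + 0.25) = 0.6,
# so y after 100 steps is 0.6**100: stable despite the stiffness.
```

Higher-order implicit Runge-Kutta methods replace the single stage here with several coupled stages, which is what lets the paper take larger steps at equal accuracy.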

  8. Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner

    NASA Astrophysics Data System (ADS)

    Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.

    2015-02-01

    Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.

  9. NOTE: Implementation of angular response function modeling in SPECT simulations with GATE

    NASA Astrophysics Data System (ADS)

    Descourt, P.; Carlier, T.; Du, Y.; Song, X.; Buvat, I.; Frey, E. C.; Bardies, M.; Tsui, B. M. W.; Visvikis, D.

    2010-05-01

Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations; considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.

  10. Applying Reduced Generator Models in the Coarse Solver of Parareal in Time Parallel Power System Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Nan; Dimitrovski, Aleksandar D; Simunovic, Srdjan

    2016-01-01

The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time implementation of power system simulations. One approach uses the Parareal in time framework. The Parareal algorithm has shown promising theoretical simulation speedups by temporally decomposing a simulation run into a coarse simulation over the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potential of the Parareal algorithm. This paper studies a Parareal implementation using reduced generator models for the coarse solver and reports testing results on the IEEE 39-bus system and a 327-generator 2383-bus Polish system model.
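The Parareal correction described above alternates a cheap sequential coarse sweep with fine solves that could run in parallel. A toy scalar-ODE sketch (forward-Euler propagators; no power-system dynamics):

```python
import numpy as np

def parareal(f, y0, t0, t1, n_sub, coarse, fine, n_iter):
    """Parareal iteration: the coarse propagator G corrects the fine
    propagator F across n_sub sub-intervals of [t0, t1]."""
    ts = np.linspace(t0, t1, n_sub + 1)
    y = [y0]
    for k in range(n_sub):                       # initial coarse sweep
        y.append(coarse(f, y[k], ts[k], ts[k + 1]))
    for _ in range(n_iter):
        # The fine solves below are independent and parallelizable.
        F = [fine(f, y[k], ts[k], ts[k + 1]) for k in range(n_sub)]
        y_new = [y0]
        for k in range(n_sub):                   # sequential correction sweep
            g_new = coarse(f, y_new[k], ts[k], ts[k + 1])
            g_old = coarse(f, y[k], ts[k], ts[k + 1])
            y_new.append(g_new + F[k] - g_old)
        y = y_new
    return np.array(y)

def euler(f, y, ta, tb, n):
    h = (tb - ta) / n
    for i in range(n):
        y = y + h * f(ta + i * h, y)
    return y

f = lambda t, y: -y                                   # decaying test dynamics
coarse = lambda f, y, a, b: euler(f, y, a, b, 1)      # cheap propagator
fine = lambda f, y, a, b: euler(f, y, a, b, 100)      # expensive propagator
sol = parareal(f, 1.0, 0.0, 2.0, n_sub=10, coarse=coarse, fine=fine, n_iter=10)
```

After n_sub iterations Parareal reproduces the sequential fine solution exactly; the speedup comes from stopping after far fewer iterations, which is why the paper's cheaper reduced-model coarse solver matters.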

  11. A study of workstation computational performance for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Cleveland, Jeff I., II

    1995-01-01

    With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.

  12. Design and implementation of laser target simulator in hardware-in-the-loop simulation system based on LabWindows/CVI and RTX

    NASA Astrophysics Data System (ADS)

    Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong

    2016-11-01

To satisfy requirements of real-time performance and generality, a laser target simulator for a hardware-in-the-loop simulation system based on an RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper-lower computer platform architecture used in most current real-time systems, this system offers better maintainability and portability. The system runs on the Windows platform, using the Windows RTX real-time extension subsystem to ensure real-time performance, combined with a reflective memory network to complete real-time tasks such as calculating the simulation model, transmitting the simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run under the RTSS process. Meanwhile, LabWindows/CVI is used to build a graphical interface and to handle the non-real-time tasks of the simulation, such as man-machine interaction and the display and storage of simulation data, which run under the Win32 process. Through the design of RTX shared memory and a task-scheduling algorithm, data interaction between the real-time RTSS process and the non-real-time Win32 process is achieved. The experimental results show that the system has strong real-time performance, high stability, and high simulation accuracy, as well as good human-computer interaction.

  13. Power estimation using simulations for air pollution time-series studies

    PubMed Central

    2012-01-01

    Background Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Methods Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. 
Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. Conclusions These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599
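The simulation-based power estimate described above can be sketched in a few lines: generate Poisson daily counts with a specified log-linear pollutant effect, fit a Poisson regression, and count rejections. A deliberately minimal sketch (a single standardized covariate, no meteorology or time-trend terms, Newton-fitted GLM; all parameter values are illustrative, not the Atlanta estimates):

```python
import numpy as np

def simulated_power(n_days, base_rate, beta, n_sims=200, seed=1):
    """Fraction of simulated data sets in which a Wald test rejects
    H0: beta = 0 at the two-sided 5% level."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_days)                  # standardized pollutant series
    X = np.column_stack([np.ones(n_days), x])
    hits = 0
    for _ in range(n_sims):
        y = rng.poisson(base_rate * np.exp(beta * x))
        b = np.array([np.log(y.mean() + 0.5), 0.0])  # stable starting point
        for _ in range(25):                          # Newton-Raphson GLM fit
            mu = np.exp(X @ b)
            b = b + np.linalg.solve((X.T * mu) @ X, X.T @ (y - mu))
        mu = np.exp(X @ b)
        se = np.sqrt(np.linalg.inv((X.T * mu) @ X)[1, 1])
        hits += abs(b[1] / se) > 1.96
    return hits / n_sims

power = simulated_power(n_days=365, base_rate=20.0, beta=0.05)
```

Rerunning with a longer series or a larger base rate shows the paper's main point: both dimensions of sample size raise power, and with beta = 0 the rejection rate falls back to roughly the nominal 5%.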

  14. Trauma Simulation Training Increases Confidence Levels in Prehospital Personnel Performing Life-Saving Interventions in Trauma Patients

    PubMed Central

    Patel, Archita D.; Meurer, David A.; Shuster, Jonathan J.

    2016-01-01

    Introduction. Limited evidence is available on simulation training of prehospital care providers, specifically the use of tourniquets and needle decompression. This study focused on whether the confidence level of prehospital personnel performing these skills improved through simulation training. Methods. Prehospital personnel from Alachua County Fire Rescue were enrolled in the study over a 2- to 3-week period based on their availability. Two scenarios were presented to them: a motorcycle crash resulting in a leg amputation requiring a tourniquet and an intoxicated patient with a stab wound, who experienced tension pneumothorax requiring needle decompression. Crews were asked to rate their confidence levels before and after exposure to the scenarios. Timing of the simulation interventions was compared with actual scene times to determine applicability of simulation in measuring the efficiency of prehospital personnel. Results. Results were collected from 129 participants. Pre- and postexposure scores increased by a mean of 1.15 (SD 1.32; 95% CI, 0.88–1.42; P < 0.001). Comparison of actual scene times with simulated scene times yielded a 1.39-fold difference (95% CI, 1.25–1.55) for Scenario 1 and 1.59 times longer for Scenario 2 (95% CI, 1.43–1.77). Conclusion. Simulation training improved prehospital care providers' confidence level in performing two life-saving procedures. PMID:27563467

  15. Real-Time Agent-Based Modeling Simulation with in-situ Visualization of Complex Biological Systems: A Case Study on Vocal Fold Inflammation and Healing.

    PubMed

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2016-05-01

We present an efficient and scalable scheme for implementing agent-based modeling (ABM) simulation with in situ visualization of large complex systems on heterogeneous computing platforms. The scheme is designed to make optimal use of the resources available on a heterogeneous platform consisting of a multicore CPU and a GPU, resulting in minimal to no resource idle time. Furthermore, the scheme was implemented under a client-server paradigm that enables remote users to visualize and analyze simulation data as it is being generated at each time step of the model. Performance of a simulation case study of vocal fold inflammation and wound healing with 3.8 million agents shows 35× and 7× speedups in execution time over single-core and multi-core CPU execution, respectively. Each iteration of the model took less than 200 ms to simulate, visualize and send the results to the client. This enables users to monitor the simulation in real time and modify its course as needed.

  16. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    PubMed

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.

  17. OpenACC performance for simulating 2D radial dambreak using FVM HLLE flux

    NASA Astrophysics Data System (ADS)

    Gunawan, P. H.; Pahlevi, M. R.

    2018-03-01

The aim of this paper is to investigate the performance of the OpenACC platform for computing a 2D radial dambreak. The shallow water equations are used to describe and simulate the 2D radial dambreak with a finite volume method (FVM) using the HLLE flux. OpenACC is a directive-based parallel computing platform targeting GPU cores; in this work it is used to reduce the computational time of the numerical scheme. The results show that using OpenACC reduces the computational time. For the dry and wet radial dambreak simulations on 2048 grids, the parallel computational times are 575.984 s and 584.830 s, respectively. These results demonstrate the effectiveness of OpenACC when compared with the serial times of the dry and wet radial dambreak simulations, which are 28047.500 s and 29269.40 s, respectively.
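The HLLE numerical flux named above can be written down compactly for the 1D shallow water equations (a 1D sketch with simple wave-speed bounds; the paper's 2D radial case adds a second momentum component and runs the per-interface evaluations on the GPU):

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def swe_flux(u):
    """Physical flux of the 1D shallow water equations, u = (h, hu)."""
    h, hu = u
    return np.array([hu, hu**2 / h + 0.5 * G * h**2])

def hlle_flux(ul, ur):
    """HLLE numerical flux between left/right states ul, ur = (h, hu)."""
    hl, hr = ul[0], ur[0]
    vl, vr = ul[1] / hl, ur[1] / hr
    cl, cr = np.sqrt(G * hl), np.sqrt(G * hr)
    sl = min(vl - cl, vr - cr, 0.0)          # leftmost wave-speed bound
    sr = max(vl + cl, vr + cr, 0.0)          # rightmost wave-speed bound
    fl, fr = swe_flux(ul), swe_flux(ur)
    if sr == sl:
        return fl
    return (sr * fl - sl * fr + sl * sr * (ur - ul)) / (sr - sl)

# Consistency: for equal states the HLLE flux reduces to the physical flux,
# and a dambreak interface (deep left, shallow right) produces rightward mass flux.
u = np.array([2.0, 1.0])
print(np.allclose(hlle_flux(u, u), swe_flux(u)))   # → True
```

Each cell interface calls `hlle_flux` independently, which is exactly the embarrassingly parallel structure OpenACC directives exploit.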

  18. Solar Potential Analysis and Integration of the Time-Dependent Simulation Results for Semantic 3d City Models Using Dynamizers

    NASA Astrophysics Data System (ADS)

    Chaturvedi, K.; Willenborg, B.; Sindram, M.; Kolbe, T. H.

    2017-10-01

Semantic 3D city models play an important role in solving complex real-world problems and are being adopted by many cities around the world. A wide range of application and simulation scenarios directly benefit from the adoption of international standards such as CityGML. However, most simulations involve properties whose values vary with respect to time, and current-generation semantic 3D city models do not support time-dependent properties explicitly. In this paper, the details of solar potential simulations operating on the CityGML standard are provided, assessing and estimating solar energy production for the roofs and facades of 3D building objects in different ways. Furthermore, the paper demonstrates how time-dependent simulation results can be better represented inline within 3D city models using the so-called Dynamizer concept. This concept not only allows the simulation results to be represented in standardized ways, but also provides a method to enhance static city models with such dynamic property values, making the city models truly dynamic. The Dynamizer concept has been implemented as an Application Domain Extension of the CityGML standard within the OGC Future City Pilot Phase 1. The results are given in this paper.

  19. Using a Radiofrequency Identification System for Improving the Patient Discharge Process: A Simulation Study.

    PubMed

    Shim, Sung J; Kumar, Arun; Jiao, Roger

    2016-01-01

    A hospital is considering deploying a radiofrequency identification (RFID) system and setting up a new "discharge lounge" to improve the patient discharge process. This study uses computer simulation to model and compare the current process and the new process, and it assesses the impact of the RFID system and the discharge lounge on the process in terms of resource utilization and time taken in the process. The simulation results regarding resource utilization suggest that the RFID system can slightly relieve the burden on all resources, whereas the RFID system and the discharge lounge together can significantly mitigate the nurses' tasks. The simulation results in terms of the time taken demonstrate that the RFID system can shorten patient wait times, staff busy times, and bed occupation times. The results of the study could prove helpful to others who are considering the use of an RFID system in the patient discharge process in hospitals or similar processes.

  20. Convection in a Very Compressible Fluid: Comparison of Simulations With Experiments

    NASA Technical Reports Server (NTRS)

    Meyer, H.; Furukawa, A.; Onuki, A.; Kogan, A. B.

    2003-01-01

The time profile ΔT(t) of the temperature difference, measured across a very compressible fluid layer of supercritical He-3 after the start of a heat flow, shows a damped oscillatory behavior before steady-state convection is reached. The results for ΔT(t) obtained from numerical simulations and from laboratory experiments are compared over a temperature range where the compressibility varies by a factor of approximately 40. First, the steady-state convective heat current j_conv as a function of the Rayleigh number Ra is presented, and the agreement is found to be good. Second, the shape of the time profile and two characteristic times in the transient part of ΔT(t) from simulations and experiments are compared, namely: 1) t_osc, the oscillatory period, and 2) t_p, the time of the first peak after starting the heat flow. These times, scaled by the diffusive time τ_D, are presented versus Ra. The agreement is good for t_osc/τ_D, where the results collapse onto a single curve showing a power-law behavior. The simulation hence confirms the universal scaling behavior found experimentally. However, for t_p/τ_D, where the experimental data also collapse onto a single curve, the simulation results show systematic departures from such a behavior. A possible reason for some of the disagreements, both in the time profile and in t_p, is discussed. In the Appendix a third characteristic time, t_m, between the first peak and the first oscillation minimum is plotted and a comparison between the results of experiments and simulations is made.

  1. 76 FR 52569 - Regulated Navigation Area; Arthur Kill, NY and NJ

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-23

    ... simulator assessments of the drilling and blasting areas. These simulations studied the possibility of... all times for vessel transits. The results of the simulation allowed the USACE to determine that... 2011 to discuss the results of the navigation simulations. In June the USACE and the contractor...

  2. Numerical simulation of a 100-ton ANFO detonation

    NASA Astrophysics Data System (ADS)

    Weber, P. W.; Millage, K. K.; Crepeau, J. E.; Happ, H. J.; Gitterman, Y.; Needham, C. E.

    2015-03-01

    This work describes the results from a US government-owned hydrocode (SHAMRC, Second-Order Hydrodynamic Automatic Mesh Refinement Code) that simulated an explosive detonation experiment with 100,000 kg of Ammonium Nitrate-Fuel Oil (ANFO) and 2,080 kg of Composition B (CompB). The explosive surface charge was nearly hemispherical and detonated in desert terrain. Two-dimensional axisymmetric (2D) and three-dimensional (3D) simulations were conducted, with the 3D model providing a more accurate representation of the experimental setup geometry. Both 2D and 3D simulations yielded overpressure and impulse waveforms that agreed qualitatively with experiment, including the capture of the secondary shock observed in the experiment. The 2D simulation predicted the primary shock arrival time correctly but secondary shock arrival time was early. The 2D-predicted impulse waveforms agreed very well with the experiment, especially at later calculation times, and prediction of the early part of the impulse waveform (associated with the initial peak) was better quantitatively for 2D compared to 3D. The 3D simulation also predicted the primary shock arrival time correctly, and secondary shock arrival times in 3D were closer to the experiment than in the 2D results. The 3D-predicted impulse waveform had better quantitative agreement than 2D for the later part of the impulse waveform. The results of this numerical study show that SHAMRC may be used reliably to predict phenomena associated with the 100-ton detonation. The ultimate fidelity of the simulations was limited by both computer time and memory. The results obtained provide good accuracy and indicate that the code is well suited to predicting the outcomes of explosive detonations.

  3. Exact-Differential Large-Scale Traffic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios

    2015-01-01

Analyzing large-scale traffic by simulation requires repeated execution with various scenarios or parameter settings. Such repeated execution is highly redundant because the change from one scenario to the next is usually minor, for example, blocking a single road or changing the speed limit on several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed portions of later scenarios while producing exactly the same results as a full simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of it. In experiments on a Tokyo traffic simulation, exact-differential simulation improves elapsed time over full simulation by a factor of 7.26 on average, and by a factor of 2.26 even in the worst case.

  4. Real-time global MHD simulation of the solar wind interaction with the earth’s magnetosphere

    NASA Astrophysics Data System (ADS)

    Shimazu, H.; Kitamura, K.; Tanaka, T.; Fujita, S.; Nakamura, M. S.; Obara, T.

    2008-11-01

We have developed a real-time global MHD (magnetohydrodynamics) simulation of the solar wind interaction with the earth's magnetosphere. By adopting the real-time solar wind parameters and interplanetary magnetic field (IMF) observed routinely by the ACE (Advanced Composition Explorer) spacecraft, responses of the magnetosphere are calculated with the MHD code. The simulation is carried out routinely on the supercomputer system at the National Institute of Information and Communications Technology (NICT), Japan. Visualized images of the magnetic field lines around the earth, the pressure distribution on the meridian plane, and the conductivity of the polar ionosphere are available on the web site (http://www2.nict.go.jp/y/y223/simulation/realtime/). The results show that various magnetospheric activities are reproduced almost qualitatively. They also show how geomagnetic disturbances develop in the magnetosphere in relation to the ionosphere. From the viewpoint of space weather, the real-time simulation helps us understand the overall current state of the magnetosphere. To evaluate the simulation results, we compare the AE indices derived from the simulation and from observations. The simulation and observations generally agree well for quiet days and isolated substorm cases.

  5. Numerical simulation of electromagnetic waves in Schwarzschild space-time by finite difference time domain method and Green function method

    NASA Astrophysics Data System (ADS)

    Jia, Shouqing; La, Dongsheng; Ma, Xuelian

    2018-04-01

    The finite difference time domain (FDTD) algorithm and Green function algorithm are implemented into the numerical simulation of electromagnetic waves in Schwarzschild space-time. FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and Green function code. The methods developed in this paper offer a tool to solve electromagnetic scattering problems.
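The equivalent-medium idea can be illustrated with the standard isotropic-coordinate result, in which the Schwarzschild metric acts on light like a medium of refractive index n(r) = (1 + rs/4r)³ / (1 − rs/4r), with rs the Schwarzschild radius. Whether the paper uses exactly this form is not stated, so treat the following as an illustrative sketch:

```python
def schwarzschild_index(r, rs):
    """Effective refractive index of the flat-space equivalent medium for
    the Schwarzschild metric in isotropic coordinates (valid for r > rs/4)."""
    a = rs / (4.0 * r)
    return (1.0 + a) ** 3 / (1.0 - a)

# Far from the mass the medium approaches vacuum (n -> 1); closer in, n > 1,
# so an FDTD grid filled with this index bends and delays the waves in the
# same way the curved space-time does.
```

Filling an ordinary FDTD grid with a radially varying index of this kind is what lets a flat-space solver reproduce curved-space-time propagation.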

  6. Prediction of Land use changes using CA in GIS Environment

    NASA Astrophysics Data System (ADS)

    Kiavarz Moghaddam, H.; Samadzadegan, F.

    2009-04-01

    Urban growth is a typical self-organized system that results from the interaction between three defined subsystems: the developed urban system, the natural non-urban system, and the planned urban system. Urban growth simulation for an artificial city is carried out first. It evaluates a number of urban sprawl parameters, including the size and shape of the neighborhood, and tests different types of constraints on urban growth simulation. The results indicate that a circular-type neighborhood shows smoother but faster urban growth compared to the nine-cell Moore neighborhood. Cellular automata (CA) prove very efficient in simulating urban growth over time. The strength of this technology comes from the urban modeler's ability to implement the growth simulation model, evaluate the results, and present the output in a visually interpretable environment. The artificial-city simulation model provides an excellent environment to test a number of simulation parameters, such as the neighborhood's influence on growth results and the role of constraints in driving urban growth. The definition of CA rules is also a critical stage in simulating the urban growth pattern in a manner close to reality. CA urban growth simulation and prediction for Tehran over the last four decades succeeds in simulating the specified test growth years at a high accuracy level. Some real data layers, such as 1995, were used in the CA simulation training phase, while others, such as 2002, were used for testing the prediction results. Tuning the CA growth rules by comparing the simulated images with the real data to obtain feedback is important. Note that the CA rules also need to be modified over time to adapt to the urban growth pattern. The region-based evaluation method used has the advantage of covering the spatial distribution component of the urban growth process.
    The next step includes running the developed CA simulation over classified raster data for three years in a developed ArcGIS extension. A set of crisp rules is defined and calibrated based on the real urban growth pattern. Uncertainty analysis is performed to evaluate the accuracy of the simulated results compared to the historical real data. The evaluation shows promising results, represented by the high average accuracies achieved: the average accuracy for the predicted growth images of 1964 and 2002 is over 80%. Modifying the CA growth rules over time to match changes in the growth pattern is important for obtaining accurate simulation; this modification is based on the urban growth relationship for Tehran over time, as seen in the historical raster data. The feedback obtained from comparing the simulated and real data is crucial in identifying the optimal set of CA rules for reliable simulation and in calibrating the growth steps.
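The Moore-neighborhood CA growth described in this record can be sketched with a minimal deterministic rule. The threshold rule below is a hypothetical stand-in for the calibrated Tehran rule set:

```python
def grow(grid, steps=5, threshold=3):
    """Illustrative CA growth rule: a non-urban cell (0) turns urban (1) when
    at least `threshold` of its eight Moore neighbors are already urban.
    Urban cells never revert, so growth is monotone."""
    n = len(grid)
    for _ in range(steps):
        new = [row[:] for row in grid]
        for r in range(n):
            for c in range(n):
                if grid[r][c] == 0:
                    urban = sum(grid[(r + dr) % n][(c + dc) % n]
                                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                                if (dr, dc) != (0, 0))
                    if urban >= threshold:
                        new[r][c] = 1
        grid = new
    return grid
```

Seeding a small urban block and iterating makes it spread outward; swapping in a circular kernel or a different threshold reproduces the kind of neighborhood-sensitivity experiments described above.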

  7. Use of high performance networks and supercomputers for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology, which resulted in a factor-of-ten increase in the effective bandwidth and reduced latency of the modules necessary for simulator communication. This technology extension is used by more than 80 leading technological developers in the United States, Canada, and Europe. Commercial applications include nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of supercomputer-based mathematical model computation to support real-time flight simulation, including a real-time operating system and specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  8. Reliable results from stochastic simulation models

    Treesearch

    Donald L., Jr. Gochenour; Leonard R. Johnson

    1973-01-01

    Development of a computer simulation model is usually done without fully considering how long the model should run (e.g., in computer time) before the results are reliable. However, constructing confidence intervals (CIs) about critical output parameters from the simulation model makes it possible to determine the point at which model results become reliable. If the results are...
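The run-length idea in this abstract can be sketched directly: keep extending the run and recomputing a confidence interval on the critical output until its half-width falls below a chosen tolerance. The batch size, z-value, and 5% relative tolerance below are arbitrary illustrative choices, not values from the paper:

```python
import math

def run_until_reliable(sample, rel_halfwidth=0.05, batch=1000, z=1.96, max_batches=200):
    """Extend a stochastic simulation (one replication per call to `sample`)
    until the ~95% CI half-width on the mean output drops below
    rel_halfwidth * |mean|. Returns (run length, mean, half-width)."""
    xs = []
    for _ in range(max_batches):
        xs.extend(sample() for _ in range(batch))
        n = len(xs)
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / (n - 1)   # sample variance
        half = z * math.sqrt(var / n)                      # CI half-width on the mean
        if half < rel_halfwidth * abs(mean):
            break
    return n, mean, half
```

For independent replications this is the textbook CI on a mean; for a single long trajectory with autocorrelated output, a batch-means variant of the same stopping rule applies.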

  9. Efficient scatter model for simulation of ultrasound images from computed tomography data

    NASA Astrophysics Data System (ADS)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Because of the high value of specialized low-cost training for healthcare professionals, there is growing interest in this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run on either notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. The simulator is based on ray casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSFs) tailored for this purpose. Results: The computational efficiency of scattering-map generation was revised, with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results; a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
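The multiplicative-noise-plus-PSF recipe can be sketched in a few lines: multiply the echo intensity map by random speckle, then blur with a small normalized kernel. The 3x3 Gaussian PSF and the uniform noise range are simplifying assumptions, not the tailored PSFs from the paper:

```python
import math
import random

def scatter_map(echo, seed=0, sigma=1.0):
    """Sketch of simplified scatter generation: multiplicative noise on the
    echo map, followed by convolution with a 3x3 normalized Gaussian PSF
    (wrap-around boundaries for brevity)."""
    rng = random.Random(seed)
    h, w = len(echo), len(echo[0])
    noisy = [[echo[r][c] * rng.uniform(0.5, 1.5) for c in range(w)] for r in range(h)]
    # Build and normalize the 3x3 Gaussian point spread function.
    k = [[math.exp(-(dr * dr + dc * dc) / (2 * sigma ** 2)) for dc in (-1, 0, 1)]
         for dr in (-1, 0, 1)]
    s = sum(map(sum, k))
    k = [[v / s for v in row] for row in k]
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            out[r][c] = sum(k[dr + 1][dc + 1] * noisy[(r + dr) % h][(c + dc) % w]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    return out
```

In a real-time setting this per-pixel loop would be a GPU shader pass, which is how frame rates like the reported 55 fps become reachable.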

  10. Short- and long-time diffusion and dynamic scaling in suspensions of charged colloidal particles.

    PubMed

    Banchio, Adolfo J; Heinen, Marco; Holmqvist, Peter; Nägele, Gerhard

    2018-04-07

    We report on a comprehensive theory-simulation-experimental study of collective and self-diffusion in concentrated suspensions of charge-stabilized colloidal spheres. In theory and simulation, the spheres are assumed to interact directly by a hard-core plus screened-Coulomb effective pair potential. The intermediate scattering function, f_c(q, t), is calculated by elaborate accelerated Stokesian dynamics (ASD) simulations for Brownian systems where many-particle hydrodynamic interactions (HIs) are fully accounted for, using a novel extrapolation scheme to a macroscopically large system size valid for all correlation times. The study spans the correlation time range from the colloidal short-time to the long-time regime. Additionally, Brownian dynamics (BD) simulation and mode-coupling theory (MCT) results for f_c(q, t) are generated where HIs are neglected. Using these results, the influence of HIs on collective and self-diffusion and the accuracy of the MCT method are quantified. It is shown that HIs enhance collective and self-diffusion at intermediate and long times. At short times, self-diffusion, and for wavenumbers outside the structure-factor peak region also collective diffusion, are slowed down by HIs. MCT significantly overestimates the slowing influence of dynamic particle caging. The dynamic scattering functions obtained in the ASD simulations are in overall good agreement with our dynamic light scattering (DLS) results for a concentration series of charged silica spheres in an organic solvent mixture, in the experimental time window and wavenumber range. From the simulation data for the time derivative of the width function associated with f_c(q, t), there is an indication of long-time exponential decay of f_c(q, t) for wavenumbers around the location of the principal peak of the static structure factor.
    The experimental scattering functions in the probed time range are consistent with a time-wavenumber factorization scaling behavior of f_c(q, t) that was first reported by Segrè and Pusey [Phys. Rev. Lett. 77, 771 (1996)] for suspensions of hard spheres. Our BD simulation and MCT results predict a significant violation of exact factorization scaling which, however, is approximately restored according to the ASD results when HIs are accounted for, consistent with the experimental findings for f_c(q, t). Our study of collective diffusion is complemented by simulation and theoretical results for the self-intermediate scattering function, f_s(q, t), its non-Gaussian parameter α_2(t), and the particle mean-squared displacement W(t) and its time derivative. Since self-diffusion properties are not assessed in standard DLS measurements, a method to deduce W(t) approximately from f_c(q, t) is theoretically validated.

  11. Hardware in-the-Loop Demonstration of Real-Time Orbit Determination in High Earth Orbits

    NASA Technical Reports Server (NTRS)

    Moreau, Michael; Naasz, Bo; Leitner, Jesse; Carpenter, J. Russell; Gaylor, Dave

    2005-01-01

    This paper presents results from a study conducted at Goddard Space Flight Center (GSFC) to assess the real-time orbit determination accuracy of GPS-based navigation in a number of different high Earth orbital regimes. Measurements collected from a GPS receiver (connected to a GPS radio frequency (RF) signal simulator) were processed in a navigation filter in real time, and the resulting errors in the estimated states were assessed. For the most challenging orbit simulated, a 12-hour Molniya orbit with an apogee of approximately 39,000 km, mean total position and velocity errors were approximately 7 meters and 3 mm/s, respectively. The study also makes direct comparisons between the results of the above hardware-in-the-loop tests and results obtained by processing GPS measurements generated from software simulations. Care was taken to use the same models and assumptions in generating both the real-time and software-simulated measurements, so that the real-time data could be used to help validate the assumptions and models used in the software simulations. The study makes use of the unique capabilities of the Formation Flying Test Bed at GSFC, which provides the capability to interface with different GPS receivers and to produce real-time, filtered orbit solutions even when fewer than four satellites are visible. The result is a powerful tool for assessing onboard navigation performance in a wide range of orbital regimes, and a test bed for developing software and procedures for use in real spacecraft applications.

  12. Simulation and Modeling of charge particles transport using SIMION for our Time of Flight Positron Annihilation Induce Auger Electron Spectroscopy systems

    NASA Astrophysics Data System (ADS)

    Joglekar, Prasad; Shastry, K.; Satyal, Suman; Weiss, Alexander

    2012-02-01

    Time-of-flight positron annihilation induced Auger electron spectroscopy (TOF-PAES) is a highly surface-selective analytical technique that measures the time of flight of Auger electrons resulting from the annihilation of core electrons by incident positrons trapped in the image-potential well. We simulated and modeled the trajectories of charged particles in TOF-PAES using SIMION, in support of both the current TOF-PAES system and the development of a new high-resolution system at UT Arlington. This poster presents the SIMION simulation results, time-of-flight calculations, and Larmor radius calculations for the current system as well as the new system.

  13. Multi-scale simulations of droplets in generic time-dependent flows

    NASA Astrophysics Data System (ADS)

    Milan, Felix; Biferale, Luca; Sbragaglia, Mauro; Toschi, Federico

    2017-11-01

    We study the deformation and dynamics of droplets in time-dependent flows using a diffuse interface model for two immiscible fluids. The numerical simulations are first benchmarked against analytical results for steady droplet deformation, and then extended to the more interesting case of time-dependent flows. The results of these time-dependent numerical simulations are compared against analytical models available in the literature, which assume the droplet shape to be an ellipsoid at all times, with time-dependent major and minor axes. In particular, we investigate the time-dependent deformation of a confined droplet in an oscillating Couette flow over the entire capillary range up to droplet break-up. In this way, these multicomponent simulations prove to be a useful tool for establishing from ``first principles'' the dynamics of droplets in complex flows involving multiple scales. Funding: European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie Grant Agreement No. 642069, and European Research Council under the European Community's Seventh Framework Programme, ERC Grant Agreement No. 339032.

  14. Numerical simulation of transonic compressor under circumferential inlet distortion and rotor/stator interference using harmonic balance method

    NASA Astrophysics Data System (ADS)

    Wang, Ziwei; Jiang, Xiong; Chen, Ti; Hao, Yan; Qiu, Min

    2018-05-01

    Simulating the unsteady flow in a compressor under circumferential inlet distortion and rotor/stator interference requires a full-annulus grid with a dual-time method. This process is time-consuming and needs a large amount of computational resources. The harmonic balance method simulates the unsteady flow in a compressor on a single-passage grid with a series of steady simulations, which greatly increases computational efficiency in comparison with the dual-time method. However, most simulations with the harmonic balance method are conducted on flows under either circumferential inlet distortion or rotor/stator interference alone. Based on an in-house CFD code, the harmonic balance method is applied to the simulation of flow in NASA Stage 35 under both circumferential inlet distortion and rotor/stator interference. Because the unsteady flow is influenced by two different unsteady disturbances, computational instability arises. The instability can be avoided by coupling the harmonic balance method with an optimizing algorithm. The computational result of the harmonic balance method is compared with that of a full-annulus simulation, showing that the harmonic balance method simulates the flow under circumferential inlet distortion and rotor/stator interference as precisely as the full-annulus simulation, with a speed-up of about 8 times.
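The harmonic balance idea, replacing time marching with a truncated Fourier representation of the periodic solution, can be illustrated on a scalar problem. For the forced Duffing oscillator x'' + x + eps*x^3 = F cos(omega*t), substituting a single harmonic x = A cos(omega*t) and matching the cos(omega*t) terms (using cos^3 = (3 cos + cos 3·)/4) reduces the ODE to one algebraic equation for A. This scalar sketch shows the idea only; the turbomachinery solver in the paper balances many harmonics of the flow equations:

```python
def duffing_amplitude(omega=1.2, eps=0.5, force=0.3, lo=0.0, hi=5.0, iters=80):
    """Single-harmonic balance for x'' + x + eps*x^3 = F cos(omega t):
    balancing the fundamental gives A(1 - omega^2) + (3/4) eps A^3 = F,
    solved here by bisection (residual changes sign on [lo, hi])."""
    def residual(a):
        return a * (1 - omega ** 2) + 0.75 * eps * a ** 3 - force

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The payoff mirrors the abstract: a steady algebraic solve replaces marching the oscillator through many forcing periods.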

  15. Exploring the optimal economic timing for crop tree release treatments in hardwoods: results from simulation

    Treesearch

    Chris B. LeDoux; Gary W. Miller

    2008-01-01

    In this study we used data from 16 Appalachian hardwood stands, a growth and yield computer simulation model, and stump-to-mill logging cost-estimating software to evaluate the optimal economic timing of crop tree release (CTR) treatments. The simulated CTR treatments consisted of one-time logging operations at stand age 11, 23, 31, or 36 years, with the residual...

  16. Differential die-away instrument: Report on comparison of fuel assembly experiments and simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsell, Alison Victoria; Henzl, Vladimir; Swinhoe, Martyn Thomas

    2015-01-14

    Experimental results of the assay of mock-up (fresh) fuel with the differential die-away (DDA) instrument were compared to Monte Carlo N-Particle eXtended (MCNPX) simulation results. Most principal experimental observables, the die-away time and the integral of the DDA signal in several time domains, were found to be in good agreement with the MCNPX simulation results. The remaining discrepancies between the simulation and experimental results are likely due to small differences between the actual experimental setup and the simulated geometry, including uncertainty in the DT neutron generator yield. Within this report we also present a sensitivity study of the DDA instrument, which is a complex and sensitive system, and demonstrate to what degree it can be impacted by geometry, material composition, and electronics performance.

  17. Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production

    NASA Astrophysics Data System (ADS)

    Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne

    2018-05-01

    A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses of high-energy physics experiments and in detector optimization studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes using the Bethe-Heitler formula, the Tsai formula, and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time of this fast simulation method is 10^4 times shorter than that of the full GEANT4 simulation.

  18. VERSE - Virtual Equivalent Real-time Simulation

    NASA Technical Reports Server (NTRS)

    Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel

    2005-01-01

    Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher fidelity modeling and more comprehensive debugging capabilities, combined with a limited amount of computational resources, calls for a non-real-time simulation environment that mimics the real-time environment. By creating a non-real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: the Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event-driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint, together with use of the same API, allows users to easily run the same application in both real-time and virtual-time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.

  19. RENEW v3.2 user's manual, maintenance estimation simulation for Space Station Freedom Program

    NASA Technical Reports Server (NTRS)

    Bream, Bruce L.

    1993-01-01

    RENEW is a maintenance event estimation simulation program developed in support of the Space Station Freedom Program (SSFP). The simulation uses reliability and maintainability (R&M) and logistics data to estimate both average and time-dependent maintenance demands, applying Monte Carlo techniques to generate failure and repair times as a function of the R&M and logistics parameters. Estimates are generated for a single type of orbital replacement unit (ORU). The simulation has been in use by the SSFP Work Package 4 prime contractor, Rocketdyne, since January 1991. RENEW gives closer estimates of performance than steady-state average calculations because it uses a time-dependent approach and depicts more of the factors affecting ORU failure and repair. RENEW gives both average and time-dependent demand values. Graphs of failures over the mission period and of yearly failure occurrences are generated, and the average demand rate for the ORU over the mission period is also calculated. While RENEW displays the results in graphs, the results are also available in a data file for further use by spreadsheets or other programs. Using RENEW starts with keyboard entry of the R&M and operational data. Once entered, the data may be saved in a data file for later retrieval, and the parameters may be viewed and changed after entry. The simulation program runs the number of Monte Carlo simulations requested by the operator. Plots and tables of the results can be viewed on the screen or sent to a printer. The results of the simulation are saved along with the input data. Help screens are provided with each menu and data entry screen.
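The Monte Carlo core of this kind of maintenance estimate can be sketched in a few lines: draw random times-to-failure for one ORU until the mission period is exhausted, and average the failure counts over many trials. The exponential failure model and instant replacement are simplifying assumptions, not RENEW's actual R&M model:

```python
import random

def simulate_failures(mtbf_hours, mission_hours, trials=2000, seed=1):
    """Monte Carlo sketch of ORU failure-demand estimation: per trial, draw
    exponential inter-failure times until the mission ends, then average
    failure counts over all trials."""
    rng = random.Random(seed)
    per_trial = []
    for _ in range(trials):
        t, failures = 0.0, 0
        while True:
            t += rng.expovariate(1.0 / mtbf_hours)  # time to next failure
            if t > mission_hours:
                break
            failures += 1                            # failed ORU replaced instantly
        per_trial.append(failures)
    return sum(per_trial) / trials, per_trial
```

The per-trial counts also give the time-dependent picture (e.g. a histogram of yearly failures), which is what distinguishes this approach from a single steady-state average.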

  20. Fixed-base simulator study of the effect of time delays in visual cues on pilot tracking performance

    NASA Technical Reports Server (NTRS)

    Queijo, M. J.; Riley, D. R.

    1975-01-01

    Factors were examined which determine the amount of time delay acceptable in the visual feedback loop of flight simulators. Acceptable time delays are defined as delays which significantly affect neither the results nor the manner in which the subject 'flies' the simulator. The subject tracked a target aircraft as it oscillated sinusoidally in a vertical plane only. The pursuing aircraft was permitted five degrees of freedom. Time delays from 0.047 to 0.297 second were inserted in the visual feedback loop. A side task was employed to keep the workload constant and to ensure that the pilot was fully occupied during the experiment. Tracking results were obtained for 17 aircraft configurations having different longitudinal short-period characteristics. Results show a positive correlation between improved handling qualities and a longer acceptable time delay.

  1. Design of object-oriented distributed simulation classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D. (Principal Investigator)

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer-aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is a need for communication among the parallel executing processors, which in turn implies a need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon the MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out, with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented.
Its application to realistic configurations has not been carried out.
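The reported behavior, communication overhead swamped by computation unless per-iteration module times are very short, can be sketched with a crude synchronized-iteration model. This formula is an illustrative approximation, not the queuing-network model developed in the research:

```python
def speedup(compute_ms, comm_ms, n_modules):
    """Crude model of one synchronized simulation iteration: n modules that
    would take n*compute_ms serially instead run in parallel, paying a fixed
    communication/synchronization cost comm_ms per iteration."""
    serial = n_modules * compute_ms
    parallel = compute_ms + comm_ms
    return serial / parallel
```

With long module times, speedup(100, 5, 8) stays close to the ideal 8; with very short module times, speedup(1, 5, 8) collapses toward 1, mirroring the measurement result quoted above.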

  3. Piloted simulation of a ground-based time-control concept for air traffic control

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Green, Steven M.

    1989-01-01

    A concept for aiding air traffic controllers in efficiently spacing traffic and meeting scheduled arrival times at a metering fix was developed and tested in a real-time simulation. The automation aid, referred to as the ground-based 4D descent advisor (DA), is based on accurate models of aircraft performance and weather conditions. The DA generates suggested clearances, including both top-of-descent-point and speed-profile data, for one or more aircraft in order to achieve specific time or distance separation objectives. The DA algorithm is used by the air traffic controller to resolve conflicts and issue advisories to arrival aircraft. A joint simulation was conducted using a piloted simulator and an advanced-concept air traffic control simulation to study the acceptability and accuracy of the DA automation aid from both the pilot's and the air traffic controller's perspectives. The results of the piloted simulation are examined. In the piloted simulation, airline crews executed controller-issued descent advisories along standard curved-path arrival routes, and were able to achieve an arrival time precision of ±20 sec at the metering fix. An analysis of errors generated in turns resulted in further enhancements of the algorithm to improve the predictive accuracy. Evaluations by pilots indicate general support for the concept and provide specific recommendations for improvement.

  4. Real-time global MHD simulation of the solar wind interaction with the earth's magnetosphere

    NASA Astrophysics Data System (ADS)

    Shimazu, H.; Tanaka, T.; Fujita, S.; Nakamura, M.; Obara, T.

    We have developed a real-time global MHD simulation of the solar wind interaction with the earth's magnetosphere. By adopting the real-time solar wind parameters, including the IMF observed routinely by the ACE spacecraft, responses of the magnetosphere are calculated with the MHD code. We adopted modified spherical coordinates; the mesh point numbers for this simulation are 56, 58, and 40 for the r, theta, and phi directions, respectively. The simulation is carried out routinely on the supercomputer system (NEC SX-6) at the National Institute of Information and Communications Technology, Japan. The visualized images of the magnetic field lines around the earth, pressure distribution on the meridian plane, and the conductivity of the polar ionosphere can be referred to on the Web site (http://www.nict.go.jp/dk/c232/realtime). The results show that various magnetospheric activities are almost reproduced qualitatively. They also give us information on how geomagnetic disturbances develop in the magnetosphere in relation with the ionosphere. From the viewpoint of space weather, the real-time simulation helps us to understand the whole image of the current condition of the magnetosphere. To evaluate the simulation results, we compare the AE index derived from the simulation and observations. In the case of isolated substorms, the indices agreed well in both timing and intensities. In other cases, the simulation can predict general activities, although the exact timing of the onset of substorms and intensities did not always agree. By analyzing...

  5. A heterogeneous system based on GPU and multi-core CPU for real-time fluid and rigid body simulation

    NASA Astrophysics Data System (ADS)

    da Silva Junior, José Ricardo; Gonzalez Clua, Esteban W.; Montenegro, Anselmo; Lage, Marcos; Dreux, Marcelo de Andrade; Joselli, Mark; Pagliosa, Paulo A.; Kuryla, Christine Lucille

    2012-03-01

    Computational fluid dynamics has become an important field not only for physics and engineering but also for simulation, computer graphics, virtual reality and even video game development. Many efficient models have been developed over the years, but when many contact interactions must be processed, most models present difficulties or cannot achieve real-time results. The advent of parallel computing has enabled the development of many strategies for accelerating the simulations. Our work proposes a new system which uses some successful algorithms already proposed, together with a data structure organisation based on a heterogeneous architecture using CPUs and GPUs, to process the simulation of the interaction of fluids and rigid bodies. This results in a two-way interaction between them and their surrounding objects. As far as we know, this is the first work that presents a computational collaborative environment which makes use of two different hardware architecture paradigms for this specific kind of problem. Since our method achieves real-time results, it is suitable for virtual reality, simulation and video game fluid simulation problems.

  6. Nonlinear relaxation algorithms for circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, R.A.

    Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (IC). However, the standard techniques used in programs such as SPICE result in very long computer run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.

  7. Equilibration of experimentally determined protein structures for molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Walton, Emily B.; Vanvliet, Krystyn J.

    2006-12-01

    Preceding molecular dynamics simulations of biomolecular interactions, the molecule of interest is often equilibrated with respect to an initial configuration. This so-called equilibration stage is required because the input structure is typically not within the equilibrium phase space of the simulation conditions, particularly in systems as complex as proteins, which can lead to artifactual trajectories of protein dynamics. The time at which nonequilibrium effects from the initial configuration are minimized—what we will call the equilibration time—marks the beginning of equilibrium phase-space exploration. Note that the identification of this time does not imply exploration of the entire equilibrium phase space. We have found that current equilibration methodologies contain ambiguities that lead to uncertainty in determining the end of the equilibration stage of the trajectory. This results in equilibration times that are either too long, resulting in wasted computational resources, or too short, resulting in the simulation of molecular trajectories that do not accurately represent the physical system. We outline and demonstrate a protocol for identifying the equilibration time that is based on the physical model of Normal Mode Analysis. We attain the computational efficiency required of large-protein simulations via a stretched exponential approximation that enables an analytically tractable and physically meaningful form of the root-mean-square deviation of atoms comprising the protein. We find that the fitting parameters (which correspond to physical properties of the protein) fluctuate initially but then stabilize for increased simulation time, independently of the simulation duration or sampling frequency. We define the end of the equilibration stage—and thus the equilibration time—as the point in the simulation when these parameters attain constant values. 
Compared to existing methods, our approach provides an objective identification of the time at which the simulated biomolecule has entered an energetic basin. For the representative protein considered, bovine pancreatic trypsin inhibitor, existing methods indicate a range of 0.2-10 ns of simulation until a local minimum is attained. Our approach identifies a substantially narrower range of 4.5-5.5 ns, which leads to a much more objective choice of equilibration time.
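The fitting procedure described above can be sketched numerically. The following is a minimal illustration, not the authors' code: it generates a synthetic RMSD trace of the assumed stretched-exponential form RMSD(t) = A(1 - exp(-(t/tau)^beta)) and recovers the three fitting parameters by a coarse grid search; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic RMSD trace: stretched-exponential rise to a plateau
# (hypothetical parameters, not taken from the paper).
A_true, tau_true, beta_true = 1.8, 1.2, 0.7   # nm, ns, dimensionless
t = np.linspace(0.01, 10.0, 400)              # ns
rmsd = A_true * (1 - np.exp(-(t / tau_true) ** beta_true))
rmsd += rng.normal(0.0, 0.02, t.size)         # measurement noise

def fit_stretched_exp(t, y):
    """Coarse grid search for (A, tau, beta) minimizing squared error."""
    best, best_err = None, np.inf
    for tau in np.linspace(0.5, 3.0, 60):
        for beta in np.linspace(0.3, 1.2, 60):
            basis = 1 - np.exp(-(t / tau) ** beta)
            A = np.dot(basis, y) / np.dot(basis, basis)  # linear LSQ in A
            err = np.sum((y - A * basis) ** 2)
            if err < best_err:
                best, best_err = (A, tau, beta), err
    return best

A_fit, tau_fit, beta_fit = fit_stretched_exp(t, rmsd)
```

In the spirit of the paper, one would repeat such a fit over windows of increasing length and declare equilibration once the fitted parameters stop drifting.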

  8. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

    2008-06-01

    An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near-continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run times over single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by a factor of 2.5-3.3, based on the limited number of cases in the present study.
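The reported 2.5-3.3x scatter reduction for 10 ensembled runs is consistent with the 1/sqrt(N) behavior of ensemble averaging. A small sketch with invented numbers (not DSMC data) makes the point:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "instantaneous" cell velocities from a single unsteady run:
# a true mean flow plus statistical scatter from the finite particle count.
true_mean = 300.0        # m/s, assumed flow speed
scatter = 25.0           # m/s, assumed per-run statistical noise
n_trials = 2000          # repeat the experiment to measure scatter reliably
n_ensemble = 10          # DREAM-style number of ensembled runs

single = rng.normal(true_mean, scatter, n_trials)
ensembled = rng.normal(true_mean, scatter, (n_trials, n_ensemble)).mean(axis=1)

# Expect roughly sqrt(10) ~ 3.2, inside the paper's reported 2.5-3.3 range.
reduction = single.std() / ensembled.std()
```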

  9. Reconstructing the ideal results of a perturbed analog quantum simulator

    NASA Astrophysics Data System (ADS)

    Schwenk, Iris; Reiner, Jan-Michael; Zanker, Sebastian; Tian, Lin; Leppäkangas, Juha; Marthaler, Michael

    2018-04-01

    Well-controlled quantum systems can potentially be used as quantum simulators. However, a quantum simulator is inevitably perturbed by coupling to additional degrees of freedom. This constitutes a major roadblock to useful quantum simulations. So far there are only limited means to understand the effect of perturbation on the results of quantum simulation. Here we present a method which, in certain circumstances, allows for the reconstruction of the ideal result from measurements on a perturbed quantum simulator. We consider extracting the value of the correlator ⟨Ôi(t)Ôj(0)⟩ from the simulated system, where Ôi are the operators which couple the system to its environment. The ideal correlator can be straightforwardly reconstructed by using statistical knowledge of the environment, if any n-time correlator of operators Ôi of the ideal system can be written as products of two-time correlators. We give an approach to verify the validity of this assumption experimentally by additional measurements on the perturbed quantum simulator. The proposed method can allow for reliable quantum simulations with systems subjected to environmental noise without adding an overhead to the quantum system.

  10. Kinetics of transient electroluminescence in organic light emitting diodes

    NASA Astrophysics Data System (ADS)

    Shukla, Manju; Kumar, Pankaj; Chand, Suresh; Brahme, Nameeta; Kher, R. S.; Khokhar, M. S. K.

    2008-08-01

    Mathematical simulation on the rise and decay kinetics of transient electroluminescence (EL) in organic light emitting diodes (OLEDs) is presented. The transient EL is studied with respect to a step voltage pulse. While rising, for lower values of time, the EL intensity shows a quadratic dependence on (t - tdel), where tdel is the time delay observed in the onset of EL, and finally attains saturation at a sufficiently large time. When the applied voltage is switched off, the initial EL decay shows an exponential dependence on (t - tdec), where tdec is the time when the voltage is switched off. The simulated results are compared with the transient EL performance of a bilayer OLED based on small molecular bis(2-methyl 8-hydroxyquinoline)(triphenyl siloxy) aluminium (SAlq). Transient EL studies have been carried out at different voltage pulse amplitudes. The simulated results show good agreement with experimental data. Using these simulated results the lifetime of the excitons in SAlq has also been calculated.
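The qualitative behavior described above can be captured by a simple piecewise model. The functional forms and all parameter values below are illustrative assumptions, not the paper's equations: the rise is quadratic in (t - t_del) at early times and saturates, and after switch-off the intensity decays exponentially in (t - t_dec) with an exciton lifetime tau_ex.

```python
import numpy as np

# Assumed illustrative model: I(t) = I_sat*(1 - exp(-(t - t_del)/tau))**2
# while the voltage is on (quadratic for small t - t_del, saturating for
# large t), then exponential decay after switch-off at t_dec.
I_sat, tau, tau_ex = 1.0, 2.0, 1.5     # arbitrary units / microseconds
t_del, t_dec = 0.5, 20.0               # EL onset delay, switch-off time

def el_intensity(t):
    t = np.asarray(t, dtype=float)
    rise = I_sat * (1 - np.exp(-np.clip(t - t_del, 0, None) / tau)) ** 2
    i_off = I_sat * (1 - np.exp(-(t_dec - t_del) / tau)) ** 2  # level at t_dec
    decay = i_off * np.exp(-(t - t_dec) / tau_ex)
    return np.where(t < t_dec, rise, decay)

t = np.linspace(0, 30, 3001)
I = el_intensity(t)
```

Doubling the early elapsed time (t - t_del) roughly quadruples the intensity, confirming the quadratic onset, while the post-switch-off decay is a pure exponential whose time constant would correspond to the exciton lifetime.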

  11. Broadband impedance boundary conditions for the simulation of sound propagation in the time domain.

    PubMed

    Bin, Jonghoon; Yousuff Hussaini, M; Lee, Soogab

    2009-02-01

    An accurate and practical surface impedance boundary condition in the time domain has been developed for application to broadband-frequency simulation in aeroacoustic problems. To show the capability of this method, two kinds of numerical simulations are performed and compared with the analytical/experimental results: one is acoustic wave reflection by a monopole source over an impedance surface and the other is acoustic wave propagation in a duct with a finite impedance wall. Both single-frequency and broadband-frequency simulations are performed within the framework of linearized Euler equations. A high-order dispersion-relation-preserving finite-difference method and a low-dissipation, low-dispersion Runge-Kutta method are used for spatial discretization and time integration, respectively. The results show excellent agreement with the analytical/experimental results at various frequencies. The method accurately predicts both the amplitude and the phase of acoustic pressure and ensures the well-posedness of the broadband time-domain impedance boundary condition.

  12. Power estimation using simulations for air pollution time-series studies.

    PubMed

    Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt

    2012-09-20

    Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. 
Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
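The simulation-based power estimation described above can be sketched as follows. This is a simplified illustration with assumed parameter values and no seasonal or meteorological covariates: daily counts are drawn from a Poisson log-linear model, each simulated data set is refit by Fisher scoring, and power is the fraction of data sets in which the pollutant coefficient is significant at the 5% level. It also reproduces the qualitative finding that lengthening the series and raising the daily counts increase power to a similar extent.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_power(n_days, mean_count, beta1, n_sim=300):
    """Fraction of simulated data sets with a significant pollutant effect."""
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(0.0, 1.0, n_days)            # standardized pollutant
        y = rng.poisson(mean_count * np.exp(beta1 * x))
        X = np.column_stack([np.ones(n_days), x])
        b = np.array([np.log(y.mean() + 0.5), 0.0])
        for _ in range(20):                          # Fisher scoring (IRLS)
            mu = np.exp(X @ b)
            b += np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
        se = np.sqrt(np.linalg.inv(X.T @ (X * np.exp(X @ b)[:, None]))[1, 1])
        hits += abs(b[1] / se) > 1.96                # Wald test at 5% level
    return hits / n_sim

power_base = poisson_power(100, 5.0, 0.02)     # short series, low counts
power_long = poisson_power(1000, 5.0, 0.02)    # 10x longer series
power_counts = poisson_power(100, 50.0, 0.02)  # 10x higher daily counts
```

Because the Fisher information for the pollutant coefficient scales with (days x mean count), a tenfold increase in either dimension yields nearly the same power gain.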

  13. Simulated BRDF based on measured surface topography of metal

    NASA Astrophysics Data System (ADS)

    Yang, Haiyue; Haist, Tobias; Gronle, Marc; Osten, Wolfgang

    2017-06-01

    The radiative reflective properties of a calibration-standard rough surface were simulated by ray tracing and the finite-difference time-domain (FDTD) method. The simulation results have been used to compute the bidirectional reflectance distribution functions (BRDF) of metal surfaces, which have been compared with experimental measurements. The experimental and simulated results are in good agreement.

  14. NMR diffusion simulation based on conditional random walk.

    PubMed

    Gudbjartsson, H; Patz, S

    1995-01-01

    The authors introduce here a new, very fast, simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference method (FD), the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although, in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step the computation time can therefore be reduced. Finally the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
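For context, the analytical benchmark such simulations target is the free-diffusion signal attenuation exp(-gamma^2 G^2 D T^3 / 3). The sketch below is a plain Monte Carlo random walk in normalized units, not the conditional-random-walk algorithm of the paper; it shows the quantity being computed and the kind of step-size comparison at issue (with trapezoidal phase accumulation the plain method is already nearly step-independent):

```python
import numpy as np

rng = np.random.default_rng(7)

def attenuation(n_spins, n_steps, D=1.0, T=1.0, gG=1.0):
    """Monte Carlo FID attenuation for free diffusion in a constant gradient.

    Each spin random-walks (<x^2> = 2*D*t, starting at x = 0) and
    accumulates the phase gG * integral of x dt, evaluated with the
    trapezoidal rule over each step.
    """
    dt = T / n_steps
    x = np.zeros(n_spins)
    phase = np.zeros(n_spins)
    for _ in range(n_steps):
        step = rng.normal(0.0, np.sqrt(2 * D * dt), n_spins)
        phase += gG * (x + 0.5 * step) * dt   # trapezoid over the step
        x += step
    return abs(np.exp(1j * phase).mean())

theory = np.exp(-1.0 / 3.0)          # exp(-gG^2 * D * T^3 / 3), units chosen so gG=D=T=1
coarse = attenuation(100_000, 20)    # 20 time steps
fine = attenuation(100_000, 400)     # 400 time steps
```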

  15. Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves

    NASA Astrophysics Data System (ADS)

    Liu, Shukui; Papanikolaou, Apostolos D.

    2011-03-01

    Typical results obtained by a newly developed, nonlinear time-domain hybrid method for simulating large-amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in that it combines a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double-integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of applying the method to the seakeeping performance of a standard containership, namely the ITTC S175, are presented. Comparisons have been made between the results of the present method, the frequency-domain 3D panel method (NEWDRIFT) of NTUA-SDL, and available experimental data, and good agreement has been observed in all studied cases.

  16. [Research on Time-frequency Characteristics of Magneto-acoustic Signal of Different Thickness Medium Based on Wave Summing Method].

    PubMed

    Zhang, Shunqi; Yin, Tao; Ma, Ren; Liu, Zhipeng

    2015-08-01

    Functional imaging of biological electrical characteristics based on the magneto-acoustic effect provides valuable tissue information for early tumor diagnosis; analysis of the time and frequency characteristics of the magneto-acoustic signal is therefore important for image reconstruction. This paper proposes a wave-summing method based on the Green function solution for the acoustic source of the magneto-acoustic effect. Simulations and analysis of the time-frequency characteristics of the magneto-acoustic signal are carried out for models of different thickness under a quasi-1D transmission condition. The simulated magneto-acoustic signals were verified through experiments. The simulation results for different thicknesses showed that the time-frequency characteristics of the magneto-acoustic signal reflect the thickness of the sample. Thin samples (less than one wavelength of the pulse) and thick samples (larger than one wavelength) showed different summed waveforms and frequency characteristics, owing to the difference in summing thickness. Experimental results verified the theoretical analysis and simulation results. This research lays a foundation for acoustic-source and conductivity reconstruction in media of different thickness in magneto-acoustic imaging.

  17. Wang-Landau sampling: Saving CPU time

    NASA Astrophysics Data System (ADS)

    Ferreira, L. S.; Jorge, L. N.; Leão, S. A.; Caparica, A. A.

    2018-04-01

    In this work we propose an improvement to the Wang-Landau (WL) method that saves about 60% of CPU time while leading to the same results with the same accuracy. We used the 2D Ising model to show that one can initiate all WL simulations using the output of an advanced WL level from a previous simulation. We showed that up to the seventh WL level (f6) the simulations are not yet biased and can proceed to any value that a simulation from the very beginning would reach. As a result, the initial WL levels need be simulated just once. It was also observed that the saving in CPU time is larger for larger lattice sizes, exactly where the computational cost is considerable. We carried out high-resolution simulations beginning from the first WL level (f0) and others beginning from the eighth WL level (f7), using all the data from the end of the previous level, and showed that the results for the critical temperature Tc and the static critical exponents β and γ coincide within the error bars. Finally, we applied the same procedure to the spin-1/2 Baxter-Wu model, where the saving in CPU time was about 64%.
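The level-reuse idea can be illustrated on a toy model whose density of states is known exactly: N non-interacting spins with "energy" E equal to the number of up spins, so g(E) = C(N, E). The sketch below is an assumption-laden illustration, not the authors' 2D Ising code: it runs the early WL levels f0..f6 once, stores the resulting ln g(E), and restarts the finer levels from that estimate.

```python
import numpy as np
from math import comb, exp

rng = np.random.default_rng(3)

N = 12  # non-interacting spins; "energy" E = number of up spins

def wl_run(lng, lnf_values):
    """Wang-Landau sampling continued from a previous ln g(E) estimate."""
    spins = rng.integers(0, 2, N)
    E = int(spins.sum())
    for lnf in lnf_values:
        hist = np.zeros(N + 1)
        for _block in range(1000):                 # safety cap per level
            for _ in range(2000):
                i = rng.integers(N)
                E_new = E + (1 - 2 * spins[i])     # a flip changes E by +/-1
                if rng.random() < exp(min(0.0, lng[E] - lng[E_new])):
                    spins[i] ^= 1
                    E = E_new
                lng[E] += lnf                      # update at current state
                hist[E] += 1
            if hist.min() > 0.8 * hist.mean():     # 80% flatness criterion
                break
    return lng

lnf = [1.0 / 2**k for k in range(16)]              # ln f halved per level
lng_early = wl_run(np.zeros(N + 1), lnf[:7])       # levels f0..f6, run once
lng = wl_run(lng_early.copy(), lnf[7:])            # restart the finer levels

# Compare with the exact density of states, up to normalization.
est = np.exp(lng - lng.max())
est *= comb(N, N // 2) / est.max()
exact = np.array([comb(N, k) for k in range(N + 1)], dtype=float)
max_rel_err = float(np.abs(est / exact - 1).max())
```

The stored `lng_early` plays the role of the paper's saved early-level output: any number of later high-resolution runs could start from it rather than from ln g = 0.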

  18. Simulation platform of LEO satellite communication system based on OPNET

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Zhang, Yong; Li, Xiaozhuo; Wang, Chuqiao; Li, Haihao

    2018-02-01

    To verify communication protocols for a low earth orbit (LEO) satellite communication system, a simulation platform based on the Optimized Network Engineering Tool (OPNET) is built. Using the three-layer modeling mechanism, the network model, the node model, and the process model of the satellite communication system are built from top to bottom, and the protocol is implemented with finite state machines and the Proto-C language. According to the satellite orbit parameters, orbit files are generated via the Satellite Tool Kit (STK) and imported into OPNET, and the satellite nodes move along their orbits. The simulation platform adopts a time-slot-driven mode: it divides the simulation time into consecutive time slots and allocates a slot number to each. A resource allocation strategy is simulated on this platform, and simulation results such as resource utilization rate, system throughput, and packet delay are analyzed, indicating that the platform is highly versatile.
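A time-slot-driven loop of the kind described can be sketched in a few lines. This is a generic illustration with invented parameters, unrelated to the paper's actual platform: packets arrive at user queues each slot, a shared per-slot capacity is granted round-robin, and utilization and delay are tallied against the slot number.

```python
import random
from collections import deque

random.seed(5)

N_SLOTS = 10_000          # simulation horizon in slots
N_USERS = 8
CAPACITY = 4              # packets served per slot, shared round-robin
ARRIVAL_P = 0.4           # per-user packet arrival probability per slot

queues = [deque() for _ in range(N_USERS)]
served = 0
delay_sum = 0
rr = 0                    # round-robin pointer for fairness

for slot in range(N_SLOTS):
    for q in queues:                       # arrivals this slot
        if random.random() < ARRIVAL_P:
            q.append(slot)                 # store arrival slot number
    budget = CAPACITY
    for k in range(N_USERS):               # round-robin allocation
        q = queues[(rr + k) % N_USERS]
        while budget and q:
            delay_sum += slot - q.popleft()
            served += 1
            budget -= 1
    rr = (rr + 1) % N_USERS

utilization = served / (N_SLOTS * CAPACITY)   # offered load is 3.2 of 4
mean_delay = delay_sum / served               # in slots
```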

  19. Mass imbalances in EPANET water-quality simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Janke, Robert; Taxon, Thomas N.

    EPANET is widely employed to simulate water quality in water distribution systems. However, the time-driven simulation approach used to determine concentrations of water-quality constituents provides accurate results, in general, only for small water-quality time steps, and use of an adequately short time step may not be feasible. Overly long time steps can yield errors in concentrations and result in situations in which constituent mass is not conserved. Mass may not be conserved even when EPANET gives no errors or warnings. This paper explains how such imbalances can occur and provides examples of such cases; it also presents a preliminary event-driven approach that conserves mass with a water-quality time step as long as the hydraulic time step. Results obtained using the current approach converge, or tend to converge, to those obtained using the new approach as the water-quality time step decreases. Improving the water-quality routing algorithm used in EPANET could eliminate mass imbalances and related errors in estimated concentrations.
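How a too-long water-quality step loses mass can be seen with a toy plug-flow pipe. This is an illustration of the step-boundary sampling issue only, not EPANET's actual routing algorithm: a 10 s concentration pulse whose arrival at the outlet falls between 30 s step boundaries is never observed there, so the tallied outlet mass is zero.

```python
import numpy as np

Q = 1.0          # flow rate (illustrative units, L/s)
TAU = 37.0       # pipe travel time, s (ideal plug flow)
T_END = 120.0

def c_in(t):
    """Inlet concentration: a 10 s unit pulse injected at t = 0."""
    return 1.0 if 0.0 <= t < 10.0 else 0.0

def c_out(t):
    return c_in(t - TAU)      # plug flow: pure advection delay

def outlet_mass(dt):
    """Outlet mass as a time-driven scheme tallies it: concentration is
    only observed at water-quality step boundaries."""
    return sum(c_out(t) * Q * dt for t in np.arange(0.0, T_END, dt))

mass_in = 10.0 * Q * 1.0          # 10 s pulse at unit concentration
mass_coarse = outlet_mass(30.0)   # step longer than the pulse: mass "lost"
mass_fine = outlet_mass(1.0)      # short step recovers the full mass
```

With dt = 30 s the pulse passes the outlet entirely between observations (`mass_coarse` is zero), while dt = 1 s recovers the injected mass; an event-driven scheme avoids the problem at any step length by tracking parcel boundaries instead of sampling on a clock.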

  20. Real-time simulation of thermal shadows with EMIT

    NASA Astrophysics Data System (ADS)

    Klein, Andreas; Oberhofer, Stefan; Schätz, Peter; Nischwitz, Alfred; Obermeier, Paul

    2016-05-01

    Modern missile systems use infrared imaging for tracking or target detection algorithms. The development and validation processes of these missile systems need high-fidelity simulations capable of stimulating the sensors in real time with infrared image sequences from a synthetic 3D environment. The Extensible Multispectral Image Generation Toolset (EMIT) is a modular software library developed at MBDA Germany for the generation of physics-based infrared images in real time. EMIT is able to render radiance images in full 32-bit floating-point precision using state-of-the-art computer graphics cards and advanced shader programs. An important functionality of an infrared image generation toolset is the simulation of thermal shadows, as these may cause matching errors in tracking algorithms. However, for real-time simulations, such as hardware-in-the-loop (HWIL) simulations of infrared seekers, thermal shadows are often neglected or precomputed, as they require a thermal balance calculation in four dimensions (3D geometry plus time, with a history extending up to several hours into the past). In this paper we show the novel real-time thermal simulation of EMIT. Our thermal simulation is capable of simulating thermal effects in real-time environments, such as thermal shadows resulting from the occlusion of direct and indirect irradiance. We conclude with the practical use of EMIT in a missile HWIL simulation.

  1. Operating system for a real-time multiprocessor propulsion system simulator

    NASA Technical Reports Server (NTRS)

    Cole, G. L.

    1984-01-01

    The Real Time Multiprocessor Operating System (RTMPOS) was evaluated for its success in the development and evaluation of experimental hardware and software systems for real-time interactive simulation of air-breathing propulsion systems. RTMPOS provides the user with a versatile, interactive means for loading, running, debugging and obtaining results from a multiprocessor-based simulator. A front-end processor (FEP) serves as the simulator controller and the interface between the user and the simulator. These functions are facilitated by RTMPOS, which resides on the FEP. RTMPOS acts in conjunction with the FEP's manufacturer-supplied disk operating system, which provides typical utilities such as an assembler, linkage editor, text editor, and file-handling services. Once a simulation is formulated, RTMPOS provides for engineering-level, run-time operations such as loading, modifying and specifying the computation flow of programs, simulator mode control, data handling, and run-time monitoring. Run-time monitoring is a powerful feature of RTMPOS that allows the user to record all actions taken during a simulation session and to receive advisories from the simulator via the FEP. RTMPOS is programmed mainly in PASCAL along with some assembly-language routines, and the software is easily modified to be applicable to hardware from different manufacturers.

  2. Adaptive control of theophylline therapy: importance of blood sampling times.

    PubMed

    D'Argenio, D Z; Khakmahd, K

    1983-10-01

    A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.
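The two-point protocol can be explored with a small Monte Carlo sketch along the lines described. It is deliberately simplified (constant-rate infusion, known volume of distribution, proportional assay noise only; all numbers are assumptions, not the paper's values). Clearance is estimated with Chiou's two-point equation, and the precision of the estimate is compared for closely spaced versus widely spaced samples.

```python
import numpy as np

rng = np.random.default_rng(11)

R, V = 40.0, 35.0              # assumed infusion rate (mg/h) and Vd (L)
n = 2000                       # simulated subjects per scenario

def conc(cl, t):
    """One-compartment concentration during a constant-rate IV infusion."""
    return (R / cl) * (1 - np.exp(-(cl / V) * t))

def chiou_cl(c1, c2, t1, t2):
    """Chiou's two-point clearance estimate for a constant infusion."""
    return 2 * R / (c1 + c2) + 2 * V * (c1 - c2) / ((c1 + c2) * (t2 - t1))

def rmse_for_times(t1, t2):
    cl_true = 2.8 * np.exp(rng.normal(0.0, 0.3, n))       # intersubject CV
    noise = lambda c: c * (1 + rng.normal(0.0, 0.05, n))  # 5% assay error
    c1, c2 = noise(conc(cl_true, t1)), noise(conc(cl_true, t2))
    return np.sqrt(np.mean((chiou_cl(c1, c2, t1, t2) - cl_true) ** 2))

rmse_close = rmse_for_times(1.0, 2.0)    # samples 1 h apart (<< half-life)
rmse_spread = rmse_for_times(1.0, 10.0)  # spacing near one half-life
```

Because closely spaced samples make the concentration difference small relative to the assay noise, the clearance estimate degrades sharply, mirroring the paper's conclusion about spacing the samples by at least one mean elimination half-life.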

  3. Time-Domain Simulation of Along-Track Interferometric SAR for Moving Ocean Surfaces.

    PubMed

    Yoshida, Takero; Rheem, Chang-Kyu

    2015-06-10

    A time-domain simulation of along-track interferometric synthetic aperture radar (AT-InSAR) has been developed to support ocean observations. The simulation is in the time domain and based on Bragg scattering to be applicable for moving ocean surfaces. The time-domain simulation is suitable for examining velocities of moving objects. The simulation obtains the time series of microwave backscattering as raw signals for movements of ocean surfaces. In terms of realizing Bragg scattering, the computational grid elements for generating the numerical ocean surface are set to be smaller than the wavelength of the Bragg resonant wave. In this paper, the simulation was conducted for a Bragg resonant wave and irregular waves with currents. As a result, the phases of the received signals from two antennas differ due to the movement of the numerical ocean surfaces. The phase differences shifted by currents were in good agreement with the theoretical values. Therefore, the adaptability of the simulation to observe velocities of ocean surfaces with AT-InSAR was confirmed.
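The grid-sizing constraint mentioned above follows from the Bragg resonance condition, lambda_Bragg = lambda_radar / (2 sin theta_i). A quick sketch, in which the radar band and incidence angle are illustrative assumptions rather than values from the paper:

```python
import math

radar_wavelength = 0.031            # m, X-band (assumed)
incidence_deg = 35.0                # incidence angle (assumed)

# Ocean wavelength responsible for Bragg backscatter at this geometry.
bragg_wavelength = radar_wavelength / (2 * math.sin(math.radians(incidence_deg)))

# Grid elements must be smaller than the Bragg wavelength; a fraction of
# it (the divisor here is an arbitrary illustrative choice) resolves it.
grid_spacing = bragg_wavelength / 8
```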

  5. Adaptive temporal refinement in injection molding

    NASA Astrophysics Data System (ADS)

    Karyofylli, Violeta; Schmitz, Mauritius; Hopmann, Christian; Behr, Marek

    2018-05-01

    Mold filling is an injection molding stage of great significance, because many defects of the plastic components (e.g. weld lines, burrs or insufficient filling) can occur during this process step. Therefore, it plays an important role in determining the quality of the produced parts. Our goal is the temporal refinement in the vicinity of the evolving melt front, in the context of 4D simplex-type space-time grids [1, 2]. This novel discretization method has an inherent flexibility to employ completely unstructured meshes with varying levels of resolution both in spatial dimensions and in the time dimension, thus allowing the use of local time-stepping during the simulations. This can lead to a higher simulation precision, while preserving calculation efficiency. A 3D benchmark case, which concerns the filling of a plate-shaped geometry, is used for verifying our numerical approach [3]. The simulation results obtained with the fully unstructured space-time discretization are compared to those obtained with the standard space-time method and to Moldflow simulation results. This example also serves for providing reliable timing measurements and the efficiency aspects of the filling simulation of complex 3D molds while applying adaptive temporal refinement.

  6. Research and implementation of simulation for TDICCD remote sensing in vibration of optical axis

    NASA Astrophysics Data System (ADS)

    Liu, Zhi-hong; Kang, Xiao-jun; Lin, Zhe; Song, Li

    2013-12-01

    During the exposure time of a space-borne TDICCD push-broom camera, the charge-transfer speed in the push-broom direction and the line-by-line scanning speed of the sensor are required to match each other strictly. However, as attitude disturbance of the satellite and vibration of the camera are inevitable, it is impossible to eliminate the speed mismatch, which makes the signals of different targets overlay each other and results in a decline of image resolution. The effects of velocity mismatch can be visually observed and analyzed by simulating the degradation of image quality caused by vibration of the optical axis, which is significant for the evaluation of image quality and the design of image-restoration algorithms. The first problem to be solved is how to model the imaging process in the time and space domains over the imaging time. As the vibration information for simulation is usually given as a continuous curve while the pixels of the original image matrix and the sensor matrix are discrete, the two cannot always match each other well; the simulation is also affected by discrete sampling over the integration time. An appropriate discrete modeling and simulation method is therefore essential for improving simulation accuracy and efficiency. This paper analyzes discretization schemes in the time and space domains and presents a method, based on the principle of the TDICCD sensor, to simulate the image quality of the optical system under vibration of the line of sight. The gray values of pixels in the sensor matrix are obtained by weighted averaging, which solves the problem of pixel mismatch. Results compared with hardware experiments indicate that this simulation system performs well in accuracy and reliability.

  7. Dynamic Average-Value Modeling of Doubly-Fed Induction Generator Wind Energy Conversion Systems

    NASA Astrophysics Data System (ADS)

    Shahab, Azin

    In a Doubly-fed Induction Generator (DFIG) wind energy conversion system, the rotor of a wound-rotor induction generator is connected to the grid via a partial-scale ac/ac power electronic converter which controls the rotor frequency and speed. In this research, detailed models of the DFIG wind energy conversion system with Sinusoidal Pulse-Width Modulation (SPWM) and Optimal Pulse-Width Modulation (OPWM) schemes for the power electronic converter are developed in PSCAD/EMTDC. As computer simulation using the detailed models tends to be computationally extensive, time consuming, and sometimes impractical in terms of speed, two modified approaches (switching-function modeling and average-value modeling) are proposed to reduce the simulation execution time. The results demonstrate that the two proposed approaches reduce the simulation execution time while the simulation results remain close to those obtained using the detailed model.
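The speed/fidelity trade-off behind average-value modeling can be illustrated on a much simpler circuit than a DFIG converter (an R-L branch fed by a PWM voltage source; all values are invented): the detailed model must resolve every switching edge with a tiny time step, while the average-value model replaces the switched voltage with its duty-cycle average and tolerates a 100x larger step with nearly the same low-frequency behavior.

```python
VDC, DUTY, F_SW = 400.0, 0.6, 5e3       # dc-link voltage, duty, switching Hz
R, L, T_END = 1.0, 10e-3, 0.1           # ohm, H, simulated seconds

def simulate(dt, switched):
    """Explicit-Euler integration of di/dt = (v - R*i)/L."""
    i, t = 0.0, 0.0
    while t < T_END:
        if switched:
            # Detailed model: the actual chopped voltage waveform.
            v = VDC if (t * F_SW) % 1.0 < DUTY else 0.0
        else:
            # Average-value model: duty-cycle average of the same waveform.
            v = DUTY * VDC
        i += dt * (v - R * i) / L
        t += dt
    return i

i_detailed = simulate(1e-6, switched=True)    # 100 000 steps
i_average = simulate(1e-4, switched=False)    # 1 000 steps, ~100x cheaper
```

Both models settle near the same steady-state current (DUTY * VDC / R = 240 A); the detailed model differs only by the switching ripple, which the average-value model deliberately discards.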

  8. Just-in-time Data Analytics and Visualization of Climate Simulations using the Bellerophon Framework

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.

    2015-12-01

    Climate model simulations are used to understand the evolution and variability of earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected, but most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near real-time data visualization and analytics capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server, located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible. The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software aims to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.
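
    The co-scheduled monitoring workflow described above amounts to a polling loop: scan the simulation's output directory and submit one visualization job per newly produced file. A minimal sketch, assuming NetCDF output files and an injected `submit` callable standing in for the batch scheduler (both illustrative choices, not Bellerophon's actual interface):

```python
import pathlib

def watch_once(outdir, submit, pattern="*.nc", seen=None):
    """One polling pass of a just-in-time analytics watcher.

    outdir : directory the climate model writes its output files into
    submit : callable that enqueues a co-scheduled visualization job for a file
             (in production this would wrap the batch scheduler)
    Returns the updated set of already-handled paths; call repeatedly, sleeping
    between passes, while the simulation keeps running.
    """
    seen = set() if seen is None else seen
    for path in sorted(pathlib.Path(outdir).glob(pattern)):
        if path not in seen:
            submit(path)   # render the latest results without pausing the model
            seen.add(path)
    return seen
```

    Because already-handled paths are remembered, repeated passes submit each output file exactly once, mirroring the one-job-per-new-dataset behaviour in the abstract.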

  9. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water situations, real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is applied to large-scale flood simulation on real topography. Results compared with those of MIKE21 show the strong performance of the proposed model.

  10. CFD simulation of a dry scroll vacuum pump with clearances, solid heating and thermal deformation

    NASA Astrophysics Data System (ADS)

    Spille-Kohoff, A.; Hesse, J.; Andres, R.; Hetze, F.

    2017-08-01

    Although dry scroll vacuum pumps (DSVP) are essential devices in many industrial processes, CFD simulation of such pumps is not widely used and is often restricted to simplified cases due to its complexity: the working principle, with a fixed and an orbiting scroll, leads to working chambers that change in time and are connected through small moving radial and axial clearances in the range of 10 to 100 μm. Because of the low densities and low mass flow rates in vacuum pumps, it is important to include heat transfer towards and inside the solid components. Solid heating is very slow compared to the scroll revolution speed and the gas behaviour, so a special workflow is necessary to reach working conditions in reasonable simulation times. The resulting solid temperature is then used to compute the thermal deformation, which usually changes the gap sizes and thereby influences the leakage flows. In this paper, setup steps and results for the simulation of a DSVP are shown and compared to theoretical and experimental results. The time-varying working chambers are meshed with TwinMesh, a hexahedral meshing programme for positive displacement machines. The CFD simulation with ANSYS CFX accounts for gas flow with compressibility and turbulence effects, conjugate heat transfer between gas and solids, and leakage flows through the clearances. Time-resolved results for torques, chamber pressure, mass flow, and heat flow between gas and solids are shown, as well as time- and space-resolved results for pressure, velocity, and temperature for different operating conditions of the DSVP.

  11. Photonic time crystals.

    PubMed

    Zeng, Lunwu; Xu, Jin; Wang, Chengen; Zhang, Jianhua; Zhao, Yuting; Zeng, Jing; Song, Runxia

    2017-12-07

    When space (time) translation symmetry is spontaneously broken, a space crystal (time crystal) forms; when permittivity and permeability vary periodically with space (time), a photonic crystal (photonic time crystal) forms. We propose the concept of the photonic time crystal and rewrite Maxwell's equations accordingly. Using the Finite-Difference Time-Domain (FDTD) method, we simulated electromagnetic wave propagation in photonic time crystals and photonic space-time crystals; the simulation results show that more intense scattered fields can be obtained in photonic time crystals and photonic space-time crystals.
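
    A photonic time crystal can be illustrated with a one-dimensional FDTD loop in which the permittivity of the whole medium is modulated periodically in time rather than in space. The sketch below uses normalised units (c = dx = 1) and an arbitrary sinusoidal modulation; it illustrates the idea only and is not the authors' simulation setup:

```python
import numpy as np

def fdtd_time_crystal(nx=200, nt=400, eps_bg=1.0, mod_depth=0.3, mod_period=40):
    """1-D FDTD sketch of a photonic time crystal.

    The permittivity of the entire medium is modulated in time,
    eps(t) = eps_bg * (1 + mod_depth * sin(2*pi*t / mod_period)),
    the temporal analogue of a spatial photonic-crystal lattice.  A soft
    Gaussian source launches a pulse from the centre of the grid; the grid
    ends act as simple reflecting boundaries.  All parameter values are
    illustrative choices.
    """
    dt = 0.5                                   # Courant number 0.5 (stable here)
    ez = np.zeros(nx)
    hy = np.zeros(nx)
    for n in range(nt):
        eps = eps_bg * (1.0 + mod_depth * np.sin(2 * np.pi * n * dt / mod_period))
        hy[:-1] += dt * (ez[1:] - ez[:-1])     # mu = 1
        ez[1:] += (dt / eps) * (hy[1:] - hy[:-1])
        ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
    return ez

fields = fdtd_time_crystal()
```

    Replacing the scalar `eps` with a spatially varying array would turn the same loop into a photonic space-time crystal simulation.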

  12. Kinetic theory-based numerical modeling and analysis of bi-disperse segregated mixture fluidized bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konan, N. A.; Huckaby, E. D.

    We discuss a series of continuum Euler-Euler simulations of an initially mixed bi-disperse fluidized bed which segregates under certain operating conditions. The simulations use the multi-phase kinetic theory-based description of the momentum and energy exchanges between the phases by Simonin’s Group [see e.g. Gourdel, Simonin and Brunier (1999). Proceedings of 6th International Conference on Circulating Fluidized Beds, Germany, pp. 205-210]. The discussion and analysis of the results focus on the fluid-particle momentum exchange (i.e. drag). Simulations using mono- and poly-disperse fluid-particle drag correlations are analyzed for the Geldart D-type bi-disperse gas-solid experiments performed by Goldschmidt et al. [Powder Tech., pp. 135-159 (2003)]. The poly-disperse gas-particle drag correlations account for the local particle size distribution by using an effective mixture diameter when calculating the Reynolds number and then correcting the resulting force coefficient. Simulation results show very good predictions of the segregation index for bi-disperse beds with the mono-disperse drag correlations, in contrast to the poly-disperse drag correlations, for which the segregation rate is systematically under-predicted. Statistical analysis of the results shows a clear separation in the distributions of the gas-particle mean relaxation times of the small and large particles in simulations using the mono-disperse drag. In contrast, the poly-disperse drag simulations show a significant overlap and a smaller difference in the mean particle relaxation times, which causes the small and large particles in the bed to respond to the gas similarly, without enough relative time lag. The results suggest that the difference in particle response times induces flow dynamics favorable to a force imbalance, which results in the segregation.

  13. Kinetic theory-based numerical modeling and analysis of bi-disperse segregated mixture fluidized bed

    DOE PAGES

    Konan, N. A.; Huckaby, E. D.

    2017-06-21

    We discuss a series of continuum Euler-Euler simulations of an initially mixed bi-disperse fluidized bed which segregates under certain operating conditions. The simulations use the multi-phase kinetic theory-based description of the momentum and energy exchanges between the phases by Simonin’s Group [see e.g. Gourdel, Simonin and Brunier (1999). Proceedings of 6th International Conference on Circulating Fluidized Beds, Germany, pp. 205-210]. The discussion and analysis of the results focus on the fluid-particle momentum exchange (i.e. drag). Simulations using mono- and poly-disperse fluid-particle drag correlations are analyzed for the Geldart D-type bi-disperse gas-solid experiments performed by Goldschmidt et al. [Powder Tech., pp. 135-159 (2003)]. The poly-disperse gas-particle drag correlations account for the local particle size distribution by using an effective mixture diameter when calculating the Reynolds number and then correcting the resulting force coefficient. Simulation results show very good predictions of the segregation index for bi-disperse beds with the mono-disperse drag correlations, in contrast to the poly-disperse drag correlations, for which the segregation rate is systematically under-predicted. Statistical analysis of the results shows a clear separation in the distributions of the gas-particle mean relaxation times of the small and large particles in simulations using the mono-disperse drag. In contrast, the poly-disperse drag simulations show a significant overlap and a smaller difference in the mean particle relaxation times, which causes the small and large particles in the bed to respond to the gas similarly, without enough relative time lag. The results suggest that the difference in particle response times induces flow dynamics favorable to a force imbalance, which results in the segregation.

  14. Implementation of 3D spatial indexing and compression in a large-scale molecular dynamics simulation database for rapid atomic contact detection

    PubMed Central

    2011-01-01

    Background Molecular dynamics (MD) simulations offer the ability to observe the dynamics and interactions of both whole macromolecules and individual atoms as a function of time. Taken in context with experimental data, atomic interactions from simulation provide insight into the mechanics of protein folding, dynamics, and function. The calculation of atomic interactions or contacts from an MD trajectory is computationally demanding, and the work required grows exponentially with the size of the simulation system. We describe the implementation of a spatial indexing algorithm in our multi-terabyte MD simulation database that significantly reduces the run-time required for discovery of contacts. The approach is applied to the Dynameomics project data. Spatial indexing, also known as spatial hashing, is a method that divides the simulation space into regularly sized bins and attributes an index to each bin. Since the calculation of contacts is widely employed in the simulation field, we also use it as the basis for testing compression of data tables. We investigate the effects of compression of the trajectory coordinate tables with different options for data and index compression within MS SQL Server 2008. Results Our implementation of spatial indexing speeds up the calculation of contacts over a 1 nanosecond (ns) simulation window by between 14% and 90% (i.e., 1.2 and 10.3 times faster). For a 'full' simulation trajectory (51 ns), spatial indexing reduces the calculation run-time by between 31% and 81% (between 1.4 and 5.3 times faster). Compression reduced table sizes but made no significant difference to the total execution time for neighbour discovery. The greatest compression (~36%) was achieved using page-level compression on both the data and indexes. Conclusions The spatial indexing scheme significantly decreases the time taken to calculate atomic contacts and could be applied to other multidimensional neighbour discovery problems. The speed-up enables on-the-fly calculation and visualization of contacts and rapid cross-simulation analysis for knowledge discovery. Using page compression for the atomic coordinate tables and indexes saves ~36% of disk space without any significant decrease in calculation time and should be considered for other non-transactional databases in MS SQL Server 2008. PMID:21831299
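
    The spatial-hashing idea described in the abstract, binning atoms into regular cells so that each atom is only compared against atoms in its own and neighbouring cells, can be sketched in a few lines. This is a generic cell-list implementation for illustration, not the Dynameomics database code:

```python
from collections import defaultdict
from itertools import product

def find_contacts(coords, cutoff):
    """Spatial-hash (cell list) contact detection.

    Bins atoms into cubic cells of edge `cutoff`, so each atom only needs to
    be compared against atoms in its own and the 26 neighbouring cells rather
    than against all other atoms.  Returns index pairs (i < j) whose distance
    is within `cutoff`.
    """
    bins = defaultdict(list)
    for i, (x, y, z) in enumerate(coords):
        bins[(int(x // cutoff), int(y // cutoff), int(z // cutoff))].append(i)
    contacts = set()
    cut2 = cutoff * cutoff
    for (cx, cy, cz), members in bins.items():
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in bins.get((cx + dx, cy + dy, cz + dz), ()):
                for i in members:
                    if i < j:
                        d2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
                        if d2 <= cut2:
                            contacts.add((i, j))
    return contacts
```

    For N roughly uniformly distributed atoms this replaces the all-pairs O(N^2) scan with work proportional to the number of occupied cells times their (bounded) occupancy, which is the source of the run-time reduction reported above.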

  15. V/STOL tilt rotor aircraft study. Volume 9: Piloted simulator evaluation of the Boeing Vertol model 222 tilt rotor aircraft

    NASA Technical Reports Server (NTRS)

    Rosenstein, H.; Mcveigh, M. A.; Mollenkof, P. A.

    1973-01-01

    The results of a real time piloted simulation to investigate the handling qualities and performance of a tilting rotor aircraft design are presented. The aerodynamic configuration of the aircraft is described. The procedures for conducting the simulator evaluation are reported. Pilot comments of the aircraft handling qualities under various simulated flight conditions are included. The time histories of selected pilot maneuvers are shown.

  16. Light extraction efficiency analysis of GaN-based light-emitting diodes with nanopatterned sapphire substrates.

    PubMed

    Pan, Jui-Wen; Tsai, Pei-Jung; Chang, Kao-Der; Chang, Yung-Yuan

    2013-03-01

    In this paper, we propose a method to analyze the light extraction efficiency (LEE) enhancement of a nanopatterned sapphire substrate (NPSS) light-emitting diode (LED) by comparing wave optics software with ray optics software. Finite-difference time-domain (FDTD) simulations represent the wave optics software and Light Tools (LTs) simulations represent the ray optics software. First, we find the trends of and an optimal solution for the LEE enhancement when the 2D-FDTD simulations are used to save on simulation time and computational memory. The rigorous coupled-wave analysis method is utilized to explain the trend obtained from the 2D-FDTD algorithm. The optimal solution is then applied in 3D-FDTD and LTs simulations. The results are similar, and the difference in LEE enhancement between the two simulations does not exceed 8.5% in the small LED chip area. More than 10^4 times the computational memory is saved in the LTs simulation in comparison to the 3D-FDTD simulation. Moreover, LEE enhancement from the side of the LED can be obtained in the LTs simulation. An actual-size NPSS LED is simulated using LTs. The results show a more than 307% improvement in the total LEE enhancement of the NPSS LED with the optimal solution compared to the conventional LED.

  17. Computational simulation of concurrent engineering for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1992-01-01

    Results are summarized of an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulations methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties - fundamental in developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.

  18. Computational simulation for concurrent engineering of aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  19. Computational simulation for concurrent engineering of aerospace propulsion systems

    NASA Astrophysics Data System (ADS)

    Chamis, C. C.; Singhal, S. N.

    1993-02-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  20. A real time Pegasus propulsion system model for VSTOL piloted simulation evaluation

    NASA Technical Reports Server (NTRS)

    Mihaloew, J. R.; Roth, S. P.; Creekmore, R.

    1981-01-01

    A real-time propulsion system modeling technique suitable for use in man-in-the-loop simulator studies was developed. This technique provides the system accuracy, stability, and transient response required for integrated aircraft and propulsion control system studies. A Pegasus-Harrier propulsion system was selected as a baseline for developing mathematical modeling and simulation techniques for VSTOL. Initially, static and dynamic propulsion system characteristics were modeled in detail to form a nonlinear aerothermodynamic digital computer simulation of a Pegasus engine. From this high-fidelity simulation, a real-time propulsion model was formulated by applying a piece-wise linear state variable methodology. A hydromechanical and water injection control system was also simulated. The real-time dynamic model includes the detail and flexibility required for the evaluation of critical control parameters and propulsion component limits over a limited flight envelope. The model was programmed for interfacing with a Harrier aircraft simulation. Typical propulsion system simulation results are presented.
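
    The piece-wise linear state-variable approach mentioned above can be illustrated as follows: the engine is represented by a set of linearised models (A, B), and at each real-time step the model whose operating region contains the scheduling variable is selected and integrated. The matrices, region layout, and forward-Euler integration below are illustrative assumptions, not the Pegasus model's actual data:

```python
import numpy as np

def pwl_step(x, u, regions, dt):
    """One integration step of a piecewise-linear state-variable model.

    regions : list of (threshold, A, B); the linear model whose threshold is
              the largest one not exceeding the scheduling variable x[0]
              (e.g. spool speed) is selected.
    Integrates x' = A x + B u with forward Euler -- cheap enough for the
    fixed-step, real-time execution a piloted simulator requires.
    """
    thr, A, B = max((r for r in regions if r[0] <= x[0]), key=lambda r: r[0])
    return x + dt * (A @ x + B @ u)

# One-region toy model: first-order lag x' = -x + u, stepped from rest.
regions = [(0.0, np.array([[-1.0]]), np.array([[1.0]]))]
x1 = pwl_step(np.array([0.0]), np.array([1.0]), regions, 0.1)
```

    Switching between precomputed (A, B) pairs is what lets the model retain nonlinear engine behaviour across the envelope while keeping each frame's computation to one matrix-vector update.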

  1. Resin Flow Behavior Simulation of Grooved Foam Sandwich Composites with the Vacuum Assisted Resin Infusion (VARI) Molding Process

    PubMed Central

    Zhao, Chenhui; Zhang, Guangcheng; Wu, Yibo

    2012-01-01

    The resin flow behavior in the vacuum assisted resin infusion (VARI) molding process for foam sandwich composites was studied by both visualization flow experiments and computer simulation. Both experimental and simulation results show that the distribution medium (DM) leads to a shorter mold filling time for grooved foam sandwich composites in the VARI process, and that the mold filling time decreases linearly as the DM/preform ratio increases. The pattern of the resin source has a significant influence on the filling time: a center source fills faster than an edge source, a point source takes longer than a linear source, and short edge/center patterns need longer to fill the mould than long edge/center sources.

  2. Time-dependent magnetohydrodynamic simulations of the inner heliosphere

    NASA Astrophysics Data System (ADS)

    Merkin, V. G.; Lyon, J. G.; Lario, D.; Arge, C. N.; Henney, C. J.

    2016-04-01

    This paper presents results from a simulation study exploring the heliospheric consequences of time-dependent changes at the Sun. We selected a 2 month period at the beginning of year 2008 that was characterized by very low solar activity. The heliosphere in the equatorial region was dominated by two coronal holes whose changing structure created temporal variations distorting the classical steady state picture of the heliosphere. We used the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model to obtain daily updated photospheric magnetograms and drive the Wang-Sheeley-Arge (WSA) model of the corona. This leads to a time-dependent boundary condition for our three-dimensional (3-D) magnetohydrodynamic (MHD) model, LFM-helio, the heliospheric adaptation of the Lyon-Fedder-Mobarry MHD simulation code. The time-dependent coronal conditions were propagated throughout the inner heliosphere, and the simulation results were compared with observations from spacecraft located near 1 astronomical unit (AU) heliocentric distance: the Advanced Composition Explorer (ACE), the Solar Terrestrial Relations Observatory (STEREO-A and STEREO-B), and the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft, which was in its cruise phase measuring the heliospheric magnetic field between 0.35 and 0.6 AU. In addition, during the selected interval MESSENGER and ACE aligned radially, allowing the effects of temporal variation at the Sun to be separated from the radial evolution of structures. Our simulations show that time-dependent simulations reproduce the gross-scale structure of the heliosphere with higher fidelity, while on smaller spatial and faster time scales (e.g., 1 day) they provide important insights for interpretation of the data.
The simulations suggest that moving boundaries of slow-fast wind transitions at 0.1 AU may result in the formation of inverted magnetic fields near pseudostreamers which is an intrinsically time-dependent process. Finally, we show that heliospheric current sheet corrugation, which may result in multiple sector boundary crossings when observed by spacecraft, is caused by solar wind velocity shears. Overall, our simulations demonstrate that time-dependent heliosphere modeling is a promising direction of research both for space weather applications and fundamental physics of the heliosphere.

  3. Initial Data Analysis Results for ATD-2 ISAS HITL Simulation

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2017-01-01

    To evaluate the operational procedures and information requirements for the core functional capabilities of the ATD-2 project, such as the tactical surface metering tool, the APREQ-CFR procedure, and data element exchanges between ramp and tower, human-in-the-loop (HITL) simulations were performed in March 2017. This presentation shows the initial data analysis results from the HITL simulations. With respect to the different runway configurations and metering values in the tactical surface scheduler, various airport performance metrics were analyzed and compared. These metrics include gate holding time, taxi-out time, runway throughput, queue size and wait time in queue, and TMI flight compliance. In addition to the metering value, other factors affecting airport performance in the HITL simulation, including run duration, runway changes, and TMI constraints, are also discussed.

  4. Temperature dependence of creep compliance of highly cross-linked epoxy: A molecular simulation study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khabaz, Fardin; Khare, Ketan S.; Khare, Rajesh, E-mail: rajesh.khare@ttu.edu

    2014-05-15

    We have used molecular dynamics (MD) simulations to study the effect of temperature on the creep compliance of neat cross-linked epoxy. Experimental studies of the mechanical behavior of cross-linked epoxy in the literature commonly report creep compliance values, whereas molecular simulations of these systems have primarily focused on the Young's modulus. In this work, in order to obtain a more direct comparison between experiments and simulations, atomistically detailed models of cross-linked epoxy are used to study creep compliance as a function of temperature using MD simulations. The creep tests are performed by applying a constant tensile stress and monitoring the resulting strain in the system. Our results show that simulated values of creep compliance increase with an increase in both time and temperature. We believe that such calculations of the creep compliance, along with the use of time-temperature superposition, hold great promise in connecting the molecular insight obtained from simulation at small length- and time-scales with the experimental behavior of such materials. To the best of our knowledge, this work is the first reported effort to investigate the creep compliance behavior of cross-linked epoxy using MD simulations.
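
    As a worked illustration of the quantity being reported: creep compliance is the ratio of the measured strain to the constant applied stress, J(t) = ε(t)/σ0, evaluated at each sampled time. The sketch below applies this definition to synthetic strain data; the Kelvin-Voigt-like response and the 10 MPa load are invented for the example:

```python
import numpy as np

def creep_compliance(strain, stress):
    """Creep compliance J(t) = strain(t) / applied constant stress.

    In a simulated creep test a constant tensile stress is applied and the
    strain response is recorded; compliance is their pointwise ratio.
    """
    return np.asarray(strain) / stress

# Illustrative data only: a Kelvin-Voigt-like delayed-elastic response.
t = np.linspace(0.0, 5.0, 6)                 # time samples (arbitrary units)
strain = 0.02 * (1.0 - np.exp(-t / 2.0))     # strain under a 10 MPa load
J = creep_compliance(strain, 10.0)           # units: 1/MPa
```

    Because strain grows under constant stress, J(t) increases with time, matching the temperature- and time-dependent increase reported in the abstract.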

  5. Comparison of long-term numerical simulations at the Ketzin pilot site using the Schlumberger ECLIPSE and LBNL TOUGH2 simulators

    NASA Astrophysics Data System (ADS)

    Kempka, T.; Norden, B.; Tillner, E.; Nakaten, B.; Kühn, M.

    2012-04-01

    Geological modelling and dynamic flow simulations were conducted at the Ketzin pilot site, showing good agreement of the history-matched geological models with the CO2 arrival times in both observation wells and with the temporal development of reservoir pressure determined in the injection well. Recently, a re-evaluation of the 3D seismic data enabled a refinement of the structural site model and the implementation of the fault system present at the top of the Ketzin anticline. The updated geological model (model size: 5 km x 5 km) has a horizontal discretization of 5 m x 5 m and consists of three vertical zones, with the finest discretization (0.5 m) at the top. According to the revised seismic analysis, the facies modelling used to simulate the channel and floodplain facies distribution at Ketzin was updated. The structural model was parameterized using a sequential Gaussian simulator for the distribution of total and effective porosities and an empirical porosity-permeability relationship based on available site and literature data. Based on this revised reservoir model of the Stuttgart formation, numerical simulations using the TOUGH2-MP/ECO2N and Schlumberger Information Services (SIS) ECLIPSE 100 black-oil simulators were undertaken in order to evaluate the long-term (up to 10,000 years) migration of the injected CO2 (about 57,000 t at the end of 2011) and the development of reservoir pressure over time. The simulation results enabled us to quantitatively compare both reservoir simulators based on current operational data, considering the long-term effects of CO2 storage including CO2 dissolution in the formation fluid. While the integration of the static geological model developed in the SIS Petrel modelling package into the ECLIPSE simulator is relatively straightforward, a workflow for exporting Petrel models into the TOUGH2-MP input file format had to be implemented within the scope of this study.
The main challenge in this task was the presence of a complex fault system in the revised reservoir model, demanding an integrated concept to deal with connections between the elements aligned to faults in the TOUGH2-MP simulator. Furthermore, we developed a methodology to visualize and compare the TOUGH2-MP simulation results with those of the ECLIPSE simulator using the Petrel software package. The long-term simulation results of both simulators are generally in good agreement. The spatial and temporal migration of the CO2 plume as well as the residual gas saturation are almost identical for both simulators, even though a time-dependent approach to CO2 dissolution in the formation fluid was chosen in the ECLIPSE simulator. Our results confirm that a scientific open-source simulator such as the TOUGH2-MP software package is capable of providing the same accuracy as the industry-standard simulator ECLIPSE 100. However, the computational time and the additional effort to implement a suitable workflow for the TOUGH2-MP simulator are significantly higher, while the open-source concept of TOUGH2 provides more flexibility regarding process adaptation.

  6. Building occupancy simulation and data assimilation using a graph-based agent-oriented model

    NASA Astrophysics Data System (ADS)

    Rai, Sanish; Hu, Xiaolin

    2018-07-01

    Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and a data assimilation algorithm that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. Agent-based models suffer from high computation costs when simulating large numbers of occupants, while graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimates of building occupancy from sensor data.
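
    The Sequential Monte Carlo data assimilation described above follows the usual predict-weight-resample cycle. A minimal sketch, in which a random walk on the building graph stands in for the paper's agent-oriented transition model and a Gaussian likelihood stands in for its sensor model (both are assumptions of this example):

```python
import math
import random

def smc_step(particles, graph, sensed_node, sensed_count, noise=1.0):
    """One assimilation step of a Sequential Monte Carlo occupancy filter.

    particles   : list of dicts node -> occupant count (hypotheses of the state)
    graph       : dict node -> list of neighbouring nodes (the building graph)
    sensed_node : node whose sensor reports sensed_count occupants
    """
    # Predict: every occupant stays put or moves to a random neighbouring node.
    predicted = []
    for p in particles:
        q = dict.fromkeys(graph, 0)
        for node, count in p.items():
            for _ in range(count):
                q[random.choice([node] + graph[node])] += 1
        predicted.append(q)
    # Update: weight each particle by the likelihood of the sensor reading.
    weights = [math.exp(-(p[sensed_node] - sensed_count) ** 2 / (2 * noise ** 2))
               for p in predicted]
    if sum(weights) == 0:            # degenerate case: all hypotheses far off
        weights = [1.0] * len(predicted)
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(predicted, weights=weights, k=len(predicted))
```

    Repeating this step as each sensor reading arrives concentrates the particle set on occupancy distributions consistent with both the motion model and the data, which is what "assimilating real-time sensor data" means operationally.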

  7. Operational modeling system with dynamic-wave routing

    USGS Publications Warehouse

    Ishii, A.L.; Charlton, T.J.; Ortel, T.W.; Vonnahme, C.C.; ,

    1998-01-01

    A near real-time streamflow-simulation system combining continuous-simulation rainfall-runoff generation with dynamic-wave routing is being developed by the U.S. Geological Survey, in cooperation with the Du Page County Department of Environmental Concerns, for a 24-kilometer reach of Salt Creek in Du Page County, Illinois. This system is needed to more effectively manage the Elmhurst Quarry Flood Control Facility, an off-line stormwater diversion reservoir located along Salt Creek. Near real-time simulation capabilities will enable the testing and evaluation of the effects of potential rainfall, diversion, and return-flow scenarios on water-surface elevations along Salt Creek before diversions or return-flows are implemented. The climatological inputs for the continuous-simulation rainfall-runoff model, Hydrologic Simulation Program - FORTRAN (HSPF), are obtained by Internet access and from a network of radio-telemetered precipitation gages reporting to a base-station computer. The unit-area runoff time series generated from HSPF are the input for the dynamic-wave routing model, Full Equations (FEQ). The Generation and Analysis of Model Simulation Scenarios (GENSCN) interface is used as a pre- and post-processor for managing input data and displaying and managing simulation results. The GENSCN interface includes a variety of graphical and analytical tools for evaluation and quick visualization of the results of operational scenario simulations, thereby making it possible to obtain the full benefit of the fully distributed dynamic routing results.

  8. Numerical simulations of tropical cyclones with assimilation of satellite, radar and in-situ observations: lessons learned from recent field programs and real-time experimental forecasts

    NASA Astrophysics Data System (ADS)

    Pu, Z.; Zhang, L.

    2010-12-01

    The impact of data assimilation on the predictability of tropical cyclones is examined with cases from recent field programs and real-time hurricane forecast experiments. Mesoscale numerical simulations are performed to simulate major typhoons during the T-PARC/TCS08 field campaign with the assimilation of satellite, radar and in-situ observations. Results confirm that data assimilation has indeed resulted in improved numerical simulations of tropical cyclones. However, positive impacts from the satellite and radar data strongly depend on the quality of these data. Specifically, it is found that the overall impacts of assimilating AIRS-retrieved atmospheric temperature and moisture profiles on numerical simulations of tropical cyclones are very sensitive to the bias corrections of the data. For instance, dry biases in the moisture profiles can cause the decay of tropical cyclones in the numerical simulations. In addition, the quality of airborne Doppler radar data has a strong influence on numerical simulations of tropical cyclones in terms of their track, intensity and precipitation structures. Outcomes from assimilating radar data with various quality thresholds suggest that a trade-off between the quality and area coverage of the radar data is necessary in practice. Some of the experience gained from the field case studies is applied to the near-real-time experimental hurricane forecasts during the 2010 hurricane season. Results and issues raised by the case studies and real-time experiments will be discussed.

  9. Medication Waste Reduction in Pediatric Pharmacy Batch Processes

    PubMed Central

    Veltri, Michael A.; Hamrock, Eric; Mollenkopf, Nicole L.; Holt, Kristen; Levin, Scott

    2014-01-01

    OBJECTIVES: To inform pediatric cart-fill batch scheduling for reductions in pharmaceutical waste using a case study and simulation analysis. METHODS: A pre- and post-intervention and simulation analysis was conducted over 3 months at a 205-bed children's center. An algorithm was developed to detect wasted medication based on time-stamped computerized provider order entry information. The algorithm was used to quantify pharmaceutical waste and associated costs for both the preintervention (1 batch per day) and postintervention (3 batches per day) schedules. Further, simulation was used to systematically test 108 batch schedules, outlining general characteristics that affect the likelihood of waste. RESULTS: Switching from a 1-batch-per-day to a 3-batch-per-day schedule resulted in a 31.3% relative decrease in pharmaceutical waste (28.7% to 19.7%) and annual cost savings of $183,380. Simulation results demonstrate how increasing batch frequency facilitates a more just-in-time process that reduces waste. The most substantial gains are realized by shifting from a schedule of 1 batch per day to at least 2 batches per day. The simulation also shows that waste reduction is achievable by avoiding batch preparation during daily time periods when medication administration or medication discontinuations are frequent. Last, the simulation was used to show how reducing preparation time per batch provides some, albeit minimal, opportunity to decrease waste. CONCLUSIONS: The case study and simulation analysis demonstrate characteristics of batch scheduling that may support pediatric pharmacy managers in redesign toward minimizing pharmaceutical waste. PMID:25024671
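    The batch-frequency effect reported above can be sketched with a toy cart-fill model. The hourly administration schedule, the 24-hour horizon, and the single discontinued order below are illustrative assumptions, not the study's data:

```python
def wasted_doses(discontinue_time, batch_times, horizon=24):
    """Toy cart-fill model: doses are administered hourly; each batch
    prepares every dose due before the next batch; any dose prepared for
    an administration time after the order is discontinued is wasted."""
    admin_times = range(horizon)                  # one dose per hour
    batches = sorted(batch_times) + [horizon]
    waste = 0
    for b, nxt in zip(batches, batches[1:]):
        if b >= discontinue_time:
            break                                 # order already stopped
        prepared = [t for t in admin_times if b <= t < nxt]
        waste += sum(1 for t in prepared if t >= discontinue_time)
    return waste

# one order discontinued at hour 10, under 1-batch and 3-batch schedules
one_batch = wasted_doses(discontinue_time=10, batch_times=[0])             # 14 doses
three_batches = wasted_doses(discontinue_time=10, batch_times=[0, 8, 16])  # 6 doses
```

More frequent batches shorten the prepared-ahead window, which is the just-in-time effect the simulation quantifies.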

  10. A methodology towards virtualisation-based high performance simulation platform supporting multidisciplinary design of complex products

    NASA Astrophysics Data System (ADS)

    Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin

    2012-08-01

    Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology for realising an HPS platform. This research is driven by issues concerning large-scale simulation resource deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources, and highly reliable simulation with fault tolerance. A framework of a virtualisation-based simulation platform (VSIM) is first proposed. The article then investigates and discusses key approaches in VSIM, including simulation resource modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism for handling faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) greatly reduce deployment time and increase flexibility in simulation environment construction, and (3) achieve fault-tolerant simulation.

  11. Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity

    DOE PAGES

    Gordiz, Kiarash; Singh, David J.; Henry, Asegun

    2015-01-29

    In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first-principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially, so phase space averaging is achieved through sequential operations. With ensemble averaging, on the other hand, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly on massively parallel architectures, ensemble sampling can result in much shorter simulation times while requiring similar overall computational effort.
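    The contrast between the two sampling strategies can be sketched with a toy stand-in for a heat-flux trajectory: an AR(1) noise series with a known stationary variance replaces the MD heat flux, and a mean-square estimate replaces the Green-Kubo integral. Both are illustrative simplifications:

```python
import random

def trajectory(n_steps, rng):
    """Toy stand-in for an MD heat-flux signal: AR(1) noise with
    stationary variance 1 / (1 - 0.9**2) ~= 5.26."""
    x, out = 0.0, []
    for _ in range(n_steps):
        x = 0.9 * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def mean_square(series):
    """Time average of the squared signal over one trajectory."""
    return sum(v * v for v in series) / len(series)

# time sampling: one long run, necessarily evaluated step by step
long_run = mean_square(trajectory(200_000, random.Random(1)))

# ensemble sampling: 100 short independent runs, each one parallelizable
short_runs = [mean_square(trajectory(2_000, random.Random(seed)))
              for seed in range(100)]
ensemble = sum(short_runs) / len(short_runs)
```

Both estimates converge to the same stationary value with comparable total work, but the 100 short runs can execute concurrently, which is the advantage the report attributes to ensemble sampling.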

  12. A rate-controlled teleoperator task with simulated transport delays

    NASA Technical Reports Server (NTRS)

    Pennington, J. E.

    1983-01-01

    A teleoperator-system simulation was used to examine the effects of two control modes (joint-by-joint and resolved-rate), a proximity-display method, and time delays (up to 2 sec) on the control of a five-degree-of-freedom manipulator performing a probe-in-hole alignment task. Four subjects used proportional rotational control and discrete (on-off) translation control with computer-generated visual displays. The proximity display enabled subjects to separate rotational errors from displacement (translation) errors; thus, when the proximity display was used with resolved-rate control, the simulated task was trivial. The time required to perform the simulated task increased linearly with time delay, but time delays had no effect on alignment accuracy. Based on the results of this simulation, several future studies are recommended.

  13. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, Mohammed Omair

    2012-01-01

    Our simulation was able to mimic the results of 30 tests on the actual hardware. This shows that simulations have the potential to enable early design validation - well before actual hardware exists. Although the simulations focused on data processing procedures at the subsystem and device level, they can also be applied to system-level analysis to simulate mission scenarios and consumable tracking (e.g., power, propellant, etc.). Simulation engine plug-in developments are continually improving the product, but handling time for time-sensitive operations (like those of the remote engineering unit and bus controller) can be cumbersome.

  14. Simulation for analysis and control of superplastic forming. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zacharia, T.; Aramayo, G.A.; Simunovic, S.

    1996-08-01

    A joint study was conducted by Oak Ridge National Laboratory (ORNL) and the Pacific Northwest Laboratory (PNL) for the U.S. Department of Energy-Lightweight Materials (DOE-LWM) Program. The purpose of the study was to assess and benchmark current modeling capabilities with respect to accuracy of predictions and simulation time. Two simulation platforms were considered in this study: the LS-DYNA3D code installed on ORNL's high-performance computers and the finite element code MARC used at PNL. Both ORNL and PNL performed superplastic forming (SPF) analysis on a standard butter-tray geometry, which was defined by PNL, to better understand the capabilities of the respective models. The specific geometry was selected and formed at PNL, and the experimental results, such as forming time and thickness at specific locations, were provided for comparison with numerical predictions. Furthermore, comparisons between the ORNL simulation results, using elasto-plastic analysis, and PNL's results, using rigid-plastic flow analysis, were performed.

  15. Parallel Decomposition of the Fictitious Lagrangian Algorithm and its Accuracy for Molecular Dynamics Simulations of Semiconductors.

    NASA Astrophysics Data System (ADS)

    Yeh, Mei-Ling

    We have performed a parallel decomposition of the fictitious Lagrangian method for molecular dynamics with a tight-binding total energy expression on the hypercube computer. This is the first time in the literature that the dynamical simulation of semiconducting systems containing more than 512 silicon atoms has become possible with the electrons treated as quantum particles. With the utilization of the Intel Paragon system, our timing analysis predicts that our code should perform realistic simulations of very large systems consisting of thousands of atoms with time requirements of the order of tens of hours. Timing results and performance analysis of our parallel code are presented in terms of calculation time, communication time, and setup time. The accuracy of the fictitious Lagrangian method in molecular dynamics simulation is also investigated, especially the energy conservation of the total energy of the ions. We find that the accuracy of the fictitious Lagrangian scheme in simulations of small silicon clusters and very large silicon systems remains good for as long as the simulations proceed, even though we quench the electronic coordinates to the Born-Oppenheimer surface only at the beginning of the run. The kinetic energy of the electrons does not increase as time goes on, and the energy conservation of the ionic subsystem remains very good. This means that, as far as the ionic subsystem is concerned, the electrons are on average in the true quantum ground states. We also tie up some odds and ends regarding a few remaining questions about the fictitious Lagrangian method, such as the difference between the results obtained from the Gram-Schmidt and SHAKE methods of orthonormalization, and the differences between simulations where the electrons are quenched to the Born-Oppenheimer surface only once and those with periodic quenching.

  16. Advanced Space Shuttle simulation model

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1982-01-01

    A non-recursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed. It provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes, entitled Shuttle Simulation Turbulence Tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 120,000 meters. A description of the turbulence generation procedure is provided. The results of validating the simulated turbulence are described. Conclusions and recommendations are presented. One-dimensional von Karman spectra are tabulated, and the minimum frequency simulated is discussed. The results of spectral and statistical analyses of the SSTT are presented.

  17. Operating system for a real-time multiprocessor propulsion system simulator. User's manual

    NASA Technical Reports Server (NTRS)

    Cole, G. L.

    1985-01-01

    The NASA Lewis Research Center is developing and evaluating experimental hardware and software systems to help meet future needs for real-time, high-fidelity simulations of air-breathing propulsion systems. Specifically, the real-time multiprocessor simulator project focuses on the use of multiple microprocessors to achieve the required computing speed and accuracy at relatively low cost. Operating systems for such hardware configurations are generally not available. A real-time multiprocessor operating system (RTMPOS) that supports a variety of multiprocessor configurations was developed at Lewis. With some modification, RTMPOS can also support various microprocessors. RTMPOS, by means of menus and prompts, provides the user with a versatile, user-friendly environment for interactively loading, running, and obtaining results from a multiprocessor-based simulator. The menu functions are described and an example simulation session is included to demonstrate the steps required to go from the simulation loading phase to the execution phase.

  18. Connections between residence time distributions and watershed characteristics across the continental US

    NASA Astrophysics Data System (ADS)

    Condon, L. E.; Maxwell, R. M.; Kollet, S. J.; Maher, K.; Haggerty, R.; Forrester, M. M.

    2016-12-01

    Although previous studies have demonstrated fractal residence time distributions in small watersheds, analyzing residence time scaling over large spatial areas is difficult with existing observational methods. For this study we use a fully integrated groundwater-surface water simulation combined with Lagrangian particle tracking to evaluate connections between residence time distributions and watershed characteristics such as geology, topography and climate. Our simulation spans more than six million square kilometers of the continental US, encompassing a broad range of watershed sizes and physiographic settings. Simulated results demonstrate power law residence time distributions with peak ages ranging from 1.5 to 10.5 years. These ranges agree well with previous observational work and demonstrate the feasibility of using integrated models to simulate residence times. Comparing behavior between eight major watersheds, we show spatial variability in both the peak and the variance of the residence time distributions that can be related to model inputs. Peak age is well correlated with basin-averaged hydraulic conductivity, and the semi-variance corresponds to aridity. While power law age distributions have previously been attributed to fractal topography, these results illustrate the importance of subsurface characteristics and macroclimate as additional controls on groundwater configuration and residence times.
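    Characterizing a power-law residence time distribution typically reduces to fitting an exponent in log-log space, which can be sketched in a few lines; the age bins and particle counts below are synthetic placeholders, not values from the continental-US simulation:

```python
import math

def power_law_exponent(ages, counts):
    """Least-squares slope in log-log space for counts ~ age**(-alpha);
    returns the fitted exponent alpha."""
    xs = [math.log(a) for a in ages]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# synthetic particle-age histogram decaying exactly as age**-2
ages = [1, 2, 4, 8, 16]                      # years
counts = [1000, 250, 62.5, 15.625, 3.90625]  # particles per age bin
alpha = power_law_exponent(ages, counts)     # fitted exponent, here 2.0
```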

  19. Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process

    NASA Astrophysics Data System (ADS)

    Turner, Douglas C.; Ladde, Gangaram S.

    2018-03-01

    Analytical solutions, discretization schemes and simulation results are presented for the time-delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended under stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by the stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models illustrate changes in the behavior of the recently developed stochastic model of Hazra et al.

  20. Landing-Time-Controlled Management Of Air Traffic

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Tobias, Leonard

    1988-01-01

    Conceptual system controls aircraft with old and new guidance equipment. Report begins with overview of concept, then reviews controller-interactive simulations. Describes fuel-conservative-trajectory algorithm, based on equations of motion for controlling landing time. Finally, presents results of piloted simulations.

  1. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.

  2. A Semi-implicit Method for Time Accurate Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2001-11-01

    A semi-implicit method for time accurate simulation of compressible flow is presented. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity. Centered discretization in both time and space allows the method to achieve zero artificial attenuation of acoustic waves. The method is an extension of the standard low Mach number pressure correction method to the compressible Navier-Stokes equations, and the main feature of the method is the solution of a Helmholtz type pressure correction equation similar to that of Demirdžić et al. (Int. J. Num. Meth. Fluids, Vol. 16, pp. 1029-1050, 1993). The method is attractive for simulation of acoustic combustion instabilities in practical combustors. In these flows, the Mach number is low; therefore the time step allowed by the convective CFL limitation is significantly larger than that allowed by the acoustic CFL limitation, resulting in significant efficiency gains. Also, the method's property of zero artificial attenuation of acoustic waves is important for accurate simulation of the interaction between acoustic waves and the combustion process. The method has been implemented in a large eddy simulation code, and results from several test cases will be presented.

  3. Stochastic simulation and analysis of biomolecular reaction networks

    PubMed Central

    Frazier, John M; Chushak, Yaroslav; Foy, Brent

    2009-01-01

    Background In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not received much attention. In order to investigate these issues, we employed a software package developed in our group, called Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two-gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Results Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of time interval on data presentation and time-weighted averaging of molecule numbers, (2) the effect of time averaging interval on reaction rate analysis, (3) the effect of the number of simulations on the precision of model predictions, and (4) the implications of stochastic simulations for optimization procedures. Conclusion The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior. PMID:19534796
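    The Gillespie exact stochastic algorithm named above can be sketched for a toy birth-death network; the two-gene BNS network itself is not reproduced here, and the rate constants below are arbitrary:

```python
import random

def gillespie(propensities, apply_reaction, state, t_end, rng):
    """Gillespie's direct method: draw an exponential waiting time from the
    total propensity, then pick which reaction fires in proportion to its
    individual propensity. Returns the (time, state) trajectory."""
    t, traj = 0.0, [(0.0, dict(state))]
    while t < t_end:
        a = propensities(state)
        total = sum(a)
        if total == 0.0:
            break                        # no reaction can fire
        t += rng.expovariate(total)      # time to the next reaction event
        r, acc = rng.uniform(0.0, total), 0.0
        for i, ai in enumerate(a):
            acc += ai
            if r <= acc:
                apply_reaction(state, i)
                break
        traj.append((t, dict(state)))
    return traj

# toy network: mRNA produced at rate k, degraded at rate d per molecule
k, d = 10.0, 1.0
rates = lambda s: [k, d * s["M"]]
def fire(s, i):
    s["M"] += 1 if i == 0 else -1

traj = gillespie(rates, fire, {"M": 0}, 50.0, random.Random(0))
```

At steady state the molecule count fluctuates around k/d = 10, so the choice of interval for averaging such a trajectory directly drives the data-analysis issues the paper examines.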

  4. Ultra-fast hadronic calorimetry

    DOE PAGES

    Denisov, Dmitri; Lukic, Strahinja; Mokhov, Nikolai; ...

    2018-05-08

    Calorimeters for particle physics experiments with integration times of a few ns will substantially improve the capability of an experiment to resolve event pileup and to reject backgrounds. In this paper the time development of hadronic showers induced by 30 and 60 GeV positive pions and 120 GeV protons is studied using Monte Carlo simulation and beam tests with a prototype of a sampling steel-scintillator hadronic calorimeter. In the beam tests, scintillator signals induced by hadronic showers in steel are sampled with a period of 0.2 ns and precisely time-aligned in order to study the average signal waveform at various locations with respect to the beam particle impact. Simulations of the same setup are performed using the MARS15 code. Both simulation and test beam results suggest that energy deposition in steel calorimeters develops over a time shorter than 2 ns, providing an opportunity for ultra-fast calorimetry. Finally, simulation results for an "ideal" calorimeter consisting exclusively of bulk tungsten or copper are presented to establish the lower limit of the signal integration window.

  5. Constant pressure and temperature discrete-time Langevin molecular dynamics

    NASA Astrophysics Data System (ADS)

    Grønbech-Jensen, Niels; Farago, Oded

    2014-11-01

    We present a new and improved method for simultaneous control of temperature and pressure in molecular dynamics simulations with periodic boundary conditions. The thermostat-barostat equations are built on our previously developed stochastic thermostat, which has been shown to provide correct statistical configurational sampling for any time step that yields stable trajectories. Here, we extend the method and develop a set of discrete-time equations of motion for both particle dynamics and system volume in order to seek pressure control that is insensitive to the choice of the numerical time step. The resulting method is simple, practical, and efficient. The method is demonstrated through direct numerical simulations of two characteristic model systems—a one-dimensional particle chain for which exact statistical results can be obtained and used as benchmarks, and a three-dimensional system of Lennard-Jones interacting particles simulated in both solid and liquid phases. The results, which are compared against the method of Kolb and Dünweg [J. Chem. Phys. 111, 4453 (1999)], show that the new method behaves according to the objective, namely that acquired statistical averages and fluctuations of configurational measures are accurate and robust against the chosen time step applied to the simulation.
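    The thermostat underlying the method above (the Grønbech-Jensen/Farago discrete-time Langevin integrator) can be sketched on a 1-D harmonic oscillator; the constant-pressure extension is omitted and all parameter values below are arbitrary:

```python
import math
import random

def gjf_mean_square(n_steps, dt, kT=1.0, m=1.0, gamma=1.0, k_spring=1.0, seed=0):
    """Grønbech-Jensen/Farago Langevin integrator for a 1-D harmonic
    oscillator (force f = -k_spring * x). Returns the time-averaged x**2,
    which should equal kT / k_spring for any stable time step."""
    rng = random.Random(seed)
    denom = 1.0 + gamma * dt / (2.0 * m)
    a = (1.0 - gamma * dt / (2.0 * m)) / denom   # velocity damping factor
    b = 1.0 / denom                              # position scaling factor
    x, v = 0.0, 0.0
    f = -k_spring * x
    acc = 0.0
    for _ in range(n_steps):
        # integrated thermal noise over one step, variance 2*gamma*kT*dt
        beta = rng.gauss(0.0, math.sqrt(2.0 * gamma * kT * dt))
        x_new = x + b * dt * v + 0.5 * b * dt * dt * f / m + 0.5 * b * dt * beta / m
        f_new = -k_spring * x_new
        v = a * v + 0.5 * dt * (a * f + f_new) / m + b * beta / m
        x, f = x_new, f_new
        acc += x * x
    return acc / n_steps

avg_x2 = gjf_mean_square(n_steps=200_000, dt=0.1)  # expect ~ kT/k_spring = 1.0
```

For a harmonic force this scheme samples the configurational average ⟨x²⟩ = kT/k_spring essentially independently of the time step, which is the robustness the abstract builds its barostat on.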

  6. Distributed Observer Network

    NASA Technical Reports Server (NTRS)

    2008-01-01

    NASA's advanced visual simulations are essential for analyses associated with life cycle planning, design, training, testing, operations, and evaluation. Kennedy Space Center, in particular, uses simulations for ground services and space exploration planning in an effort to reduce risk and costs while improving safety and performance. However, it has been difficult to circulate and share the results of simulation tools among the field centers, and distance and travel expenses have made timely collaboration even harder. In response, NASA joined with Valador Inc. to develop the Distributed Observer Network (DON), a collaborative environment that leverages game technology to bring 3-D simulations to conventional desktop and laptop computers. DON enables teams of engineers working on design and operations to view and collaborate on 3-D representations of data generated by authoritative tools. DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3-D visual environment. Multiple widely dispersed users, working individually or in groups, can view and analyze simulation results on desktop and laptop computers in real time.

  7. Cooling rate effects in sodium silicate glasses: Bridging the gap between molecular dynamics simulations and experiments

    NASA Astrophysics Data System (ADS)

    Li, Xin; Song, Weiying; Yang, Kai; Krishnan, N. M. Anoop; Wang, Bu; Smedskjaer, Morten M.; Mauro, John C.; Sant, Gaurav; Balonis, Magdalena; Bauchy, Mathieu

    2017-08-01

    Although molecular dynamics (MD) simulations are commonly used to predict the structure and properties of glasses, they are intrinsically limited to short time scales, necessitating the use of fast cooling rates. It is therefore challenging to compare results from MD simulations to experimental results for glasses cooled on typical laboratory time scales. Based on MD simulations of a sodium silicate glass with varying cooling rate (from 0.01 to 100 K/ps), here we show that thermal history primarily affects the medium-range order structure, while the short-range order is largely unaffected over the range of cooling rates simulated. This results in a decoupling between the enthalpy and volume relaxation functions, where the enthalpy quickly plateaus as the cooling rate decreases, whereas density exhibits a slower relaxation. Finally, we show that, using the proper extrapolation method, the outcomes of MD simulations can be meaningfully compared to experimental values when extrapolated to slower cooling rates.
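    The extrapolation idea can be sketched as a linear fit of a glass property against log10(cooling rate), evaluated at a laboratory-like rate; the density values below are synthetic placeholders, not the paper's sodium silicate results:

```python
import math

def extrapolate(rates_K_per_ps, values, target_rate_K_per_ps):
    """Linear least-squares fit of a property against log10(cooling rate),
    evaluated at a (much slower) target rate."""
    xs = [math.log10(r) for r in rates_K_per_ps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, values))
             / sum((x - mx) ** 2 for x in xs))
    return my + slope * (math.log10(target_rate_K_per_ps) - mx)

# synthetic glass densities at the MD cooling rates of 0.01..100 K/ps;
# slower cooling gives a denser, better-relaxed glass
rates = [0.01, 0.1, 1.0, 10.0, 100.0]        # K/ps
densities = [2.48, 2.46, 2.44, 2.42, 2.40]   # g/cm^3, linear in log10(rate)

# extrapolate to a laboratory rate of 1 K/min = 1/(60e12) K/ps
lab_density = extrapolate(rates, densities, 1.0 / 60e12)
```

The ~12 orders of magnitude between MD and laboratory rates is exactly why a property must vary smoothly with log(rate) for such an extrapolation to be meaningful.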

  8. Shot Peening Numerical Simulation of Aircraft Aluminum Alloy Structure

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Lv, Sheng-Li; Zhang, Wei

    2018-03-01

    After shot peening, 7050 aluminum alloy has good anti-fatigue and anti-stress-corrosion properties. In the shot peening process, pellets collide with the target material randomly and generate a residual stress distribution on the target material surface, which is of great significance for improving material properties. In this paper, a simplified numerical simulation model of shot peening was established. The influence of pellet collision velocity, pellet collision position and pellet collision time interval on the residual stress of shot peening was studied by simulation with the ANSYS/LS-DYNA software. The analysis results show that different velocities, different positions and different time intervals have a great influence on the residual stress after shot peening. Comparison with numerical simulation results based on a Kriging model verified the accuracy of the simulation results in this paper. This study provides a reference for the optimization of the shot peening process and an effective exploration toward precise shot peening numerical simulation.

  9. Recent developments in track reconstruction and hadron identification at MPD

    NASA Astrophysics Data System (ADS)

    Mudrokh, A.; Zinchenko, A.

    2017-03-01

    To study the detector performance, a Monte Carlo simulation of real detector effects with as many details as possible has been carried out instead of a simplified Geant point-smearing approach. Some results of realistic simulation of the MPD TPC (Time Projection Chamber), including digitization, in central Au+Au collisions have been obtained. Particle identification (PID) has been tuned to account for modifications in the track reconstruction. Some results on hadron identification in the TPC and TOF (Time Of Flight) detectors with realistically simulated response have also been obtained.

  10. Time-dependent Data System (TDDS); an interactive program to assemble, manage, and appraise input data and numerical output of flow/transport simulation models

    USGS Publications Warehouse

    Regan, R.S.; Schaffranek, R.W.; Baltzer, R.A.

    1996-01-01

    A system of functional utilities and computer routines, collectively identified as the Time-Dependent Data System (TDDS), has been developed and documented by the U.S. Geological Survey. The TDDS is designed for processing time sequences of discrete, fixed-interval, time-varying geophysical data--in particular, hydrologic data. Such data include various dependent variables and related parameters typically needed as input for execution of one-, two-, and three-dimensional hydrodynamic/transport and associated water-quality simulation models. Such data can also include time sequences of results generated by numerical simulation models. Specifically, TDDS provides the functional capabilities to process, store, retrieve, and compile data in a Time-Dependent Data Base (TDDB) in response to interactive user commands or pre-programmed directives. Thus, the TDDS, in conjunction with a companion TDDB, provides a ready means for processing, preparation, and assembly of time sequences of data for input to models; collection, categorization, and storage of simulation results from models; and intercomparison of field data and simulation results. The TDDS can be used to edit and verify prototype, time-dependent data to affirm that selected sequences of data are accurate, contiguous, and appropriate for numerical simulation modeling. It can be used to prepare time-varying data in a variety of formats, such as tabular lists, sequential files, arrays, and graphical displays, as well as line-printer plots of single or multiparameter data sets. The TDDB is organized and maintained as a direct-access data base by the TDDS, thus providing simple, yet efficient, data management and access. A single, easily used program interface that provides all access to and from a particular TDDB is available for use directly within models, other user-provided programs, and other data systems.
This interface, together with each major functional utility of the TDDS, is described and documented in this report.

  11. The timing of bud burst and its effect on tree growth.

    PubMed

    Rötzer, T; Grote, R; Pretzsch, H

    2004-02-01

    A phenology model for estimating the timings of bud burst--one of the most influential phenological phases for the simulation of tree growth--is presented in this study. The model calculates the timings of the leafing of beech (Fagus sylvatica L.) and oak (Quercus robur L.) and the May shoot of Norway spruce (Picea abies L.) and Scots pine (Pinus sylvestris L.) on the basis of the daily maximum temperature. The data for parameterisation and validation of the model have been taken from 40 climate and 120 phenological stations in southern Germany with time series for temperature and bud burst of up to 30 years. The validation of the phenology module by means of an independent data set showed correlation coefficients for comparisons between observed and simulated values of 54% (beech), 55% (oak), 59% (spruce) and 56% (pine) with mean absolute errors varying from 4.4 days (spruce) to 5.0 days (pine). These results correspond well with the results of other--often more complex--phenology models. After the phenology module had been implemented in the tree-growth model BALANCE, the growth of a mixed forest stand with the former static and the new dynamic timings for the bud burst was simulated. The results of the two simulation runs showed that phenology has to be taken into account when simulating forest growth, particularly in mixed stands.
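    The temperature-sum logic typical of such phenology models can be sketched as follows; the 5 °C base threshold, the 150 degree-day requirement, and the synthetic spring warming are illustrative assumptions, not the calibrated parameters of the model above:

```python
def bud_burst_day(daily_tmax, threshold=5.0, required_sum=150.0):
    """Thermal-time sketch: accumulate daily maximum temperature above a
    base threshold; bud burst is predicted on the first day the running
    sum reaches a species-specific degree-day requirement."""
    total = 0.0
    for day, tmax in enumerate(daily_tmax, start=1):
        total += max(0.0, tmax - threshold)
        if total >= required_sum:
            return day
    return None  # requirement not met within the series

# synthetic spring: daily maximum climbing linearly from 0.25 to 25 deg C
temps = [0.25 * d for d in range(1, 101)]
day = bud_burst_day(temps)  # predicted bud-burst day of the series
```

Feeding such a dynamic prediction into a growth model replaces the former static bud-burst date, which is the coupling the study implements in BALANCE.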

  12. Angular radiation temperature simulation for time-dependent capsule drive prediction in inertial confinement fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing, Longfei; Yang, Dong; Li, Hang

    2015-02-15

    The x-ray drive on a capsule in an inertial confinement fusion setup is crucial for ignition. Unfortunately, a direct measurement has not been possible so far. We propose an angular radiation temperature simulation to predict the time-dependent drive on the capsule. A simple model, based on the view-factor method for the simulation of the radiation temperature, is presented and compared with the experimental data obtained using the OMEGA laser facility and the simulation results acquired with the VISRAD code. We found a good agreement between the time-dependent measurements and the simulation results obtained using this model. The validated model was then used to analyze the experimental results from the Shenguang-III prototype laser facility. More specifically, the variations of the peak radiation temperatures at different view angles with the albedo of the hohlraum, the motion of the laser spots, the closure of the laser entrance holes, and the deviation of the laser power were investigated. Furthermore, the time-dependent radiation temperature at different orientations and the drive history on the capsule were calculated. The results indicate that the radiation temperature from “U20W112” (named according to the diagnostic hole ID on the target chamber) can be used to approximately predict the drive temperature on the capsule. In addition, the influence of the capsule on the peak radiation temperature is also presented.
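
    At its simplest, the view-factor approach reduces to a weighted average of emitter temperatures: the radiation temperature seen from a given angle satisfies T^4 = sum of F_i * T_i^4 over the visible wall patches. The sketch below states only that averaging step; the function name and the two-patch example are illustrative, not the authors' model.

```python
def angular_radiation_temperature(patches):
    """Radiation temperature seen from one viewing angle, computed as a
    view-factor-weighted average of T^4 over the visible wall patches.
    patches: list of (view_factor, temperature) pairs; view factors are
    normalised so the weights sum to one. Illustrative sketch only."""
    total_f = sum(f for f, _ in patches)
    t4 = sum(f * t ** 4 for f, t in patches) / total_f
    return t4 ** 0.25

# Two equally weighted patches: hot laser spots and a cooler re-emitting wall
print(angular_radiation_temperature([(0.5, 200.0), (0.5, 100.0)]))
```

    Because of the fourth-power weighting, the hotter patches dominate: the result lies well above the arithmetic mean of the two temperatures.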

  13. The impact of secondary-task type on the sensitivity of reaction-time based measurement of cognitive load for novices learning surgical skills using simulation.

    PubMed

    Rojas, David; Haji, Faizal; Shewaga, Rob; Kapralos, Bill; Dubrowski, Adam

    2014-01-01

    Interest in the measurement of cognitive load (CL) in simulation-based education has grown in recent years. In this paper, we present two pilot experiments comparing the sensitivity of two reaction-time based secondary-task measures of CL. The results suggest that simple reaction-time measures are sensitive enough to detect changes in CL experienced by novice learners in the initial stages of simulation-based surgical skills training.

  14. Practical solutions for reducing container ships' waiting times at ports using simulation model

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, Abdorreza; Ilati, Gholamreza; Yeganeh, Yones Eftekhari

    2013-12-01

    The main challenge for container ports is planning the berthing of container ships while they are in port. The growth of containerization is creating problems for ports and container terminals as they reach the capacity limits of various resources, which increasingly leads to traffic and port congestion. Good planning and management of container terminal operations reduces waiting times for liner ships. Reducing the waiting time improves the terminal's productivity and decreases the port's difficulties. Two important keys to reducing waiting time through berth allocation are determining suitable access channel depths and increasing the number of berths, which are studied and analyzed in this paper as practical solutions. Simulation-based analysis is the only way to understand how various resources interact with each other and how they affect the berthing time of ships. We used the Enterprise Dynamics software to build the simulation models due to the complexity and nature of the problems. We further present a case study of berth allocation simulation for the biggest container terminal in Iran, and the optimum access channel depth and number of berths are obtained from the simulation results. The results show a significant reduction in the waiting time for container ships and can be useful for major functions in the operation and development of container ship terminals.
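
    The berth-capacity effect the authors study can be illustrated with a far simpler model than a full Enterprise Dynamics build: ships queue first-come-first-served for a fixed number of identical berths. The arrival and service numbers below are invented; the point is only that adding a berth cuts the average wait.

```python
import heapq

def average_wait(arrivals, service_times, n_berths):
    """Average waiting time for ships served FIFO by n_berths identical
    berths (a toy stand-in for a full terminal simulation model)."""
    free_at = [0.0] * n_berths      # earliest time each berth is free
    heapq.heapify(free_at)
    total_wait = 0.0
    for arrive, service in zip(arrivals, service_times):
        berth_free = heapq.heappop(free_at)
        start = max(arrive, berth_free)   # wait if no berth is free yet
        total_wait += start - arrive
        heapq.heappush(free_at, start + service)
    return total_wait / len(arrivals)

arrivals = [0, 1, 2, 3, 4, 5]
services = [4, 4, 4, 4, 4, 4]
print(average_wait(arrivals, services, 1))  # 7.5
print(average_wait(arrivals, services, 2))  # 2.0
```

    A realistic terminal model layers tidal windows, access channel depth restrictions, and crane scheduling on top of this skeleton.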

  15. Simulation model of a gear synchronisation unit for application in a real-time HiL environment

    NASA Astrophysics Data System (ADS)

    Kirchner, Markus; Eberhard, Peter

    2017-05-01

    Gear shifting simulations using the multibody system approach and the finite-element method are standard in the development of transmissions. However, the corresponding models are typically large due to the complex geometries and numerous contacts, which causes long calculation times. The present work sets itself apart from these detailed shifting simulations by proposing a much simpler but powerful synchronisation model which can be computed in real-time while still being more realistic than a purely rigid multibody model. The model is therefore even used as part of a Hardware-in-the-Loop (HiL) test rig. The proposed real-time capable synchronisation model combines the rigid multibody system approach with a multiscale simulation approach: the multibody system approach is suitable for describing the large motions, while the multiscale approach, which also draws on the finite-element method, is suitable for analysing the contact processes. An efficient contact search for the claws of a car transmission synchronisation unit is described in detail, which shortens the required calculation time of the model considerably. To further shorten the calculation time, the use of a complex pre-synchronisation model with a nonlinear contour is presented. The model has to provide realistic results at the time-step size of the HiL test rig. To meet this specification, a particularly adapted multirate method for the synchronisation model is shown. Measured results from test rigs are used to verify the plausibility of the real-time capable synchronisation model. The simulation model is then also used in the HiL test rig for a transmission control unit.

  16. Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures

    NASA Technical Reports Server (NTRS)

    Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.

    1990-01-01

    A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis), and injects transient errors in run-time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.

  17. FDTD simulation tools for UWB antenna analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brocato, Robert Wesley

    2004-12-01

    This paper describes the development of a set of software tools useful for analyzing ultra-wideband (UWB) antennas and structures. These tools are used to perform finite difference time domain (FDTD) simulation of a conical antenna with continuous wave (CW) and UWB pulsed excitations. The antenna is analyzed using spherical coordinate-based FDTD equations that are derived from first principles. The simulation results for CW excitation are compared to simulation and measured results from published sources; the results for UWB excitation are new.
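
    As a flavour of the method, here is a minimal 1-D Cartesian FDTD loop in Python (Yee staggering, Courant number 0.5, soft Gaussian source). The paper's spherical-coordinate derivation for the conical antenna is more involved, so this is an illustrative sketch only, with made-up grid sizes.

```python
import math

def fdtd_1d(n_cells=200, n_steps=150, source_cell=100):
    """1-D free-space FDTD: staggered E and H updates plus a soft
    Gaussian source. The bare grid ends act as reflecting walls, so
    n_steps is kept short enough that the pulse never reaches them."""
    ez = [0.0] * n_cells
    hy = [0.0] * n_cells
    for n in range(n_steps):
        for k in range(n_cells - 1):            # H update from curl of E
            hy[k] += 0.5 * (ez[k + 1] - ez[k])
        for k in range(1, n_cells):             # E update from curl of H
            ez[k] += 0.5 * (hy[k] - hy[k - 1])
        ez[source_cell] += math.exp(-((n - 30.0) / 10.0) ** 2)
    return ez

ez = fdtd_1d()   # the injected pulse splits and travels both ways
```

    Extending this pattern to CW excitation only changes the source term; the update loops stay the same.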

  18. Computational study of noise in a large signal transduction network.

    PubMed

    Intosalmi, Jukka; Manninen, Tiina; Ruohonen, Keijo; Linne, Marja-Leena

    2011-06-21

    Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domain. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased on all frequencies when the system volume was increased. We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies. © 2011 Intosalmi et al; licensee BioMed Central Ltd.
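
    The exact Gillespie algorithm the authors rely on can be demonstrated on a toy birth-death process (their actual network, coupling PKC, MAPK, PLA2, and PLC, is far larger): each reaction waiting time is drawn from the total propensity, and the next reaction is chosen in proportion to its propensity. All rate constants below are invented for illustration.

```python
import random

def gillespie_birth_death(k_prod, k_deg, volume, t_end, seed=0):
    """Exact Gillespie SSA for a birth-death process:
    0 -> X at propensity k_prod * volume, X -> 0 at propensity k_deg * x.
    Returns the molecule-count trajectory as (time, count) pairs."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    traj = [(t, x)]
    while t < t_end:
        a_prod = k_prod * volume
        a_deg = k_deg * x
        a_tot = a_prod + a_deg
        t += rng.expovariate(a_tot)        # exponential waiting time
        if rng.random() * a_tot < a_prod:  # choose which reaction fires
            x += 1
        else:
            x -= 1
        traj.append((t, x))
    return traj

traj = gillespie_birth_death(k_prod=10.0, k_deg=1.0, volume=5.0, t_end=50.0)
late_counts = [x for t, x in traj if t > 25.0]
mean = sum(late_counts) / len(late_counts)
print(mean)  # fluctuates around k_prod * volume / k_deg = 50
```

    Increasing `volume` raises the mean count while shrinking the relative size of the fluctuations, which is the volume dependence the study quantifies in the frequency domain.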

  19. SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeff S.

    1992-01-01

    Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.
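
    The event-horizon idea behind Breathing Time Buckets can be sketched serially: within each cycle, events are processed in time order only up to the earliest timestamp of any event generated during that cycle, since everything before that horizon is safe to commit. This is a toy single-process sketch under an assumed non-negative-lookahead handler convention, not the SPEEDES implementation.

```python
import heapq

def breathing_time_buckets(initial_events, handler, t_end):
    """Serial sketch of the Breathing Time Buckets event horizon.
    Events are (time, data) pairs; handler(t, data) returns new events
    with timestamps >= t (non-negative lookahead assumed). Each cycle
    drains events up to the horizon -- the earliest timestamp generated
    so far in that cycle -- and commits them."""
    pending = list(initial_events)
    heapq.heapify(pending)
    committed = []
    while pending and pending[0][0] < t_end:
        horizon = t_end
        while pending and pending[0][0] < horizon:
            t, data = heapq.heappop(pending)
            committed.append((t, data))
            for new_event in handler(t, data):
                heapq.heappush(pending, new_event)
                horizon = min(horizon, new_event[0])
        # cycle boundary: in a parallel run, nodes would synchronise here
    return committed

# Two event streams, each rescheduling itself one time unit ahead
def handler(t, data):
    return [(t + 1.0, data)] if t + 1.0 < 10.0 else []

events = breathing_time_buckets([(0.0, "a"), (0.5, "b")], handler, 10.0)
print(len(events))
```

    In the parallel algorithm, the events before the horizon are processed optimistically on all nodes and committed without the rollback machinery that full Time Warp needs.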

  20. GPU-based real-time soft tissue deformation with cutting and haptic feedback.

    PubMed

    Courtecuisse, Hadrien; Jung, Hoeryong; Allard, Jérémie; Duriez, Christian; Lee, Doo Yong; Cotin, Stéphane

    2010-12-01

    This article describes a series of contributions in the field of real-time simulation of soft tissue biomechanics. These contributions address various requirements for interactive simulation of complex surgical procedures. In particular, this article presents results in the areas of soft tissue deformation, contact modelling, simulation of cutting, and haptic rendering, which are all relevant to a variety of medical interventions. The contributions described in this article share a common underlying model of deformation and rely on GPU implementations to significantly improve computation times. This consistency in the modelling technique and computational approach ensures coherent results as well as efficient, robust and flexible solutions. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Protocols for efficient simulations of long-time protein dynamics using coarse-grained CABS model.

    PubMed

    Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian

    2014-01-01

    Coarse-grained (CG) modeling is a well-acknowledged simulation approach for getting insight into long-time scale protein folding events at reasonable computational cost. Depending on the design of a CG model, the simulation protocols vary from highly case-specific--requiring user-defined assumptions about the folding scenario--to more sophisticated blind prediction methods for which only a protein sequence is required. Here we describe the framework protocol for the simulations of long-term dynamics of globular proteins, with the use of the CABS CG protein model and sequence data. The simulations can start from a random or a selected (e.g., native) structure. The described protocol has been validated using experimental data for protein folding model systems--the prediction results agreed well with the experimental results.

  2. Applied Time Domain Stability Margin Assessment for Nonlinear Time-Varying Systems

    NASA Technical Reports Server (NTRS)

    Kiefer, J. M.; Johnson, M. D.; Wall, J. H.; Dominguez, A.

    2016-01-01

    The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time-varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation. This technique was implemented by using the Stability Aerospace Vehicle Analysis Tool (SAVANT) computer simulation to evaluate the stability of the SLS system with the Adaptive Augmenting Control (AAC) active and inactive along its ascent trajectory. The gains for which the vehicle maintains apparent time-domain stability define the gain margins, and the time delay similarly defines the phase margin. This method of extracting the control stability margins from the time-domain simulation is relatively straightforward, and the resultant margins can be compared to the linearized system results. The sections herein describe the techniques employed to extract the time-domain margins, compare the results between these nonlinear and the linear methods, and provide explanations for observed discrepancies. The SLS ascent trajectory was simulated with SAVANT and the classical linear stability margins were evaluated at one-second intervals.
The linear analysis was performed with the AAC algorithm disabled to attain baseline stability margins. At each time point, the system was linearized about the current operating point using Simulink's built-in solver. Each linearized system in time was evaluated for its rigid-body gain margin (high frequency gain margin), rigid-body phase margin, and aero gain margin (low frequency gain margin) for each control axis. Using the stability margins derived from the baseline linearization approach, the time domain derived stability margins were determined by executing time domain simulations in which axis-specific incremental gain and phase adjustments were made to the nominal system about the expected neutral stability point at specific flight times. The baseline stability margin time histories were used to shift the system gain to various values around the zero margin point such that a precise amount of expected gain margin was maintained throughout flight. When assessing the gain margins, the gain was applied starting at the time point under consideration, thereafter following the variation in the margin found in the linear analysis. When assessing the rigid-body phase margin, a constant time delay was applied to the system starting at the time point under consideration. If the baseline stability margins were correctly determined via the linear analysis, the time domain simulation results should contain unstable behavior at certain gain and phase values. Examples will be shown from repeated simulations with variable added gain and phase lag. Faithfulness of margins calculated from the linear analysis to the nonlinear system will be demonstrated.
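
    The incremental-gain idea is easy to reproduce on a toy plant. The sketch below applies delayed PD feedback to a double integrator (standing in for rigid-body attitude dynamics; all numbers are illustrative, not SLS values), flags instability when the response grows well beyond the initial disturbance, and bisects for the neutral-stability gain in the same spirit as the time-domain margin extraction described above.

```python
def simulate_max_amplitude(gain, delay_steps=20, dt=0.01, t_end=20.0):
    """Euler simulation of a double-integrator plant under delayed PD
    feedback u = -gain*(2*theta + theta_dot). Returns the peak |theta|;
    growth well beyond the initial disturbance signals instability."""
    theta, omega = 1.0, 0.0          # initial attitude disturbance
    history = [0.0] * delay_steps    # queue of delayed control commands
    peak = abs(theta)
    for _ in range(int(t_end / dt)):
        u = history.pop(0)           # command computed delay_steps ago
        history.append(-gain * (2.0 * theta + omega))
        omega += u * dt
        theta += omega * dt
        peak = max(peak, abs(theta))
        if peak > 1e6:               # clearly diverged, stop early
            break
    return peak

def neutral_gain(lo=0.1, hi=100.0, iters=40):
    """Bisect for the gain at which the response first diverges."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if simulate_max_amplitude(mid) > 10.0:   # 'unstable' threshold
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

g_star = neutral_gain()
print(round(g_star, 2))
```

    The ratio of `g_star` to the nominal gain is the time-domain gain margin; repeating the search over the delay instead of the gain yields the phase margin.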

  3. Shoulder Arthroscopy Simulator Training Improves Shoulder Arthroscopy Performance in a Cadaver Model

    PubMed Central

    Henn, R. Frank; Shah, Neel; Warner, Jon J.P.; Gomoll, Andreas H.

    2013-01-01

    Purpose The purpose of this study was to quantify the benefits of shoulder arthroscopy simulator training with a cadaver model of shoulder arthroscopy. Methods Seventeen first-year medical students with no prior experience in shoulder arthroscopy were enrolled and completed this study. Each subject completed a baseline proctored arthroscopy on a cadaveric shoulder, which included controlling the camera and completing a standard series of tasks using the probe. The subjects were randomized, and nine of the subjects received training on a virtual reality simulator for shoulder arthroscopy. All subjects then repeated the same cadaveric arthroscopy. The arthroscopic videos were analyzed in a blinded fashion for time to task completion and subjective assessment of technical performance. The two groups were compared with Student's t-tests, and change over time within groups was analyzed with paired t-tests. Results There were no observed differences between the two groups on the baseline evaluation. The simulator group improved significantly from baseline with respect to time to completion and subjective performance (p<0.05). Time to completion was significantly faster in the simulator group compared to controls at final evaluation (p<0.05). No difference was observed between the groups on the subjective scores at final evaluation (p=0.98). Conclusions Shoulder arthroscopy simulator training resulted in significant benefits in clinical shoulder arthroscopy time to task completion in this cadaver model. This study provides important additional evidence of the benefit of simulators in orthopaedic surgical training. Clinical Relevance There may be a role for simulator training in shoulder arthroscopy education. PMID:23591380
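
    The within-group comparison described above is a paired t-test on each subject's baseline and final completion times. A self-contained sketch of that statistic (the completion times below are invented for illustration):

```python
import math

def paired_t(before, after):
    """Paired t statistic for per-subject before/after measurements:
    mean difference divided by its standard error."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical task-completion times (seconds) at baseline and after training
before = [300.0, 280.0, 320.0, 310.0]
after = [250.0, 240.0, 260.0, 255.0]
print(paired_t(before, after))
```

    The resulting t value is compared against the t distribution with n-1 degrees of freedom to obtain the p-value.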

  4. Implementation and evaluation of a simulation curriculum for paediatric residency programs including just-in-time in situ mock codes

    PubMed Central

    Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam

    2012-01-01

    OBJECTIVE: To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. DESIGN: Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. SETTING: Paediatric residency program at BC Children’s Hospital, Vancouver, British Columbia. INTERVENTIONS: The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. RESULTS: A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. CONCLUSIONS: A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes. PMID:23372405

  5. Analysis OpenMP performance of AMD and Intel architecture for breaking waves simulation using MPS

    NASA Astrophysics Data System (ADS)

    Alamsyah, M. N. A.; Utomo, A.; Gunawan, P. H.

    2018-03-01

    A simulation of breaking waves using the Navier-Stokes equations via the moving particle semi-implicit (MPS) method over a closed domain is given. The results show that parallel computing on a multicore architecture using the OpenMP platform can reduce the computational time to almost half of the serial time. Here, a comparison of two computer architectures (AMD and Intel) is performed. The Intel architecture shows better CPU times than AMD; in efficiency, however, the AMD architecture is slightly higher than the Intel. For a simulation with 1512 particles, the CPU times using Intel and AMD are 12662.47 and 28282.30, respectively. Moreover, at the same number of particles, AMD obtains an efficiency of 50.09% and Intel up to 49.42%.

  6. Time Accurate CFD Simulations of the Orion Launch Abort Vehicle in the Transonic Regime

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph; Rojahn, Josh

    2011-01-01

    Significant asymmetries in the fluid dynamics were calculated for some cases in the CFD simulations of the Orion Launch Abort Vehicle through its abort trajectories. The CFD simulations were performed steady state with symmetric boundary conditions and geometries. The trajectory points at issue were in the transonic regime, at 0 and 5 degrees angle of attack, with the Abort Motors firing, with and without the Attitude Control Motors (ACM) firing. In some of the cases the asymmetric fluid dynamics resulted in aerodynamic side forces large enough to overcome the control authority of the ACMs. MSFC's Fluid Dynamics Group supported the investigation into the cause of the flow asymmetries with time accurate CFD simulations, utilizing a hybrid RANS-LES turbulence model. The results show that the flow over the vehicle and the subsequent interaction with the AB and ACM motor plumes were unsteady. The resulting instantaneous aerodynamic forces were oscillatory with fairly large magnitudes. Time averaged aerodynamic forces were essentially symmetric.

  7. Time Accurate CFD Simulations of the Orion Launch Abort Vehicle in the Transonic Regime

    NASA Technical Reports Server (NTRS)

    Rojahn, Josh; Ruf, Joe

    2011-01-01

    Significant asymmetries in the fluid dynamics were calculated for some cases in the CFD simulations of the Orion Launch Abort Vehicle through its abort trajectories. The CFD simulations were performed steady state and in three dimensions with symmetric geometries, no freestream sideslip angle, and motors firing. The trajectory points at issue were in the transonic regime, at 0 and +/- 5 degrees angle of attack, with the Abort Motors firing, with and without the Attitude Control Motors (ACM) firing. In some of the cases the asymmetric fluid dynamics resulted in aerodynamic side forces large enough to overcome the control authority of the ACMs. MSFC's Fluid Dynamics Group supported the investigation into the cause of the flow asymmetries with time accurate CFD simulations, utilizing a hybrid RANS-LES turbulence model. The results show that the flow over the vehicle and the subsequent interaction with the AB and ACM motor plumes were unsteady. The resulting instantaneous aerodynamic forces were oscillatory with fairly large magnitudes. Time averaged aerodynamic forces were essentially symmetric.

  8. Observability of ionospheric space-time structure with ISR: A simulation study

    NASA Astrophysics Data System (ADS)

    Swoboda, John; Semeter, Joshua; Zettergren, Matthew; Erickson, Philip J.

    2017-02-01

    The sources of error from electronically steerable array (ESA) incoherent scatter radar (ISR) systems are investigated both theoretically and with use of an open-source ISR simulator, developed by the authors, called Simulator for ISR (SimISR). The main sources of error incorporated in the simulator include statistical uncertainty, which arises due to the nature of the measurement mechanism, and the inherent space-time ambiguity of the sensor. SimISR can take a field of plasma parameters, parameterized by time and space, and create simulated ISR data at the scattered electric field (i.e., complex receiver voltage) level, subsequently processing these data to show possible reconstructions of the original parameter field. To demonstrate general utility, we show a number of simulation examples, with two cases using data from a self-consistent multifluid transport model. Results highlight the significant influence of the forward model of the ISR process and the resulting statistical uncertainty on plasma parameter measurements and the core experiment design trade-offs that must be made when planning observations. These conclusions further underscore the utility of this class of measurement simulator as a design tool for more optimal experiment design efforts using flexible ESA class ISR systems.

  9. Computational simulation of the creep-rupture process in filamentary composite materials

    NASA Technical Reports Server (NTRS)

    Slattery, Kerry T.; Hackett, Robert M.

    1991-01-01

    A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
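
    The statistical flavour of such a simulation can be conveyed by a much simpler equal-load-sharing fiber bundle: surviving fibers share the applied load, each fails at a stress-dependent rate, and rerunning with different random seeds yields a time-to-failure distribution. All rates and exponents below are illustrative assumptions, not the paper's finite-element model.

```python
import random

def time_to_failure(n_fibers, load_per_fiber, seed):
    """Equal-load-sharing fiber bundle under constant total load.
    Each fiber fails stochastically at a rate growing as a power of its
    current stress (a toy breakdown rule standing in for the paper's
    flaw-based finite-element damage model)."""
    rng = random.Random(seed)
    total_load = load_per_fiber * n_fibers
    alive = n_fibers
    t = 0.0
    rho = 4.0   # stress-sensitivity exponent (illustrative)
    while alive > 0:
        stress = total_load / alive        # survivors share the load
        rate = alive * stress ** rho       # total failure rate
        t += rng.expovariate(rate)         # time to next fiber break
        alive -= 1
    return t

# Repeat with different flaw realisations to build a failure-time distribution
times = [time_to_failure(50, 1.0, seed) for seed in range(20)]
mean_t = sum(times) / len(times)
print(mean_t)
```

    As in the paper, the interesting output is the spread of `times` across realisations, not any single run.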

  10. A Real-time 3D Visualization of Global MHD Simulation for Space Weather Forecasting

    NASA Astrophysics Data System (ADS)

    Murata, K.; Matsuoka, D.; Kubo, T.; Shimazu, H.; Tanaka, T.; Fujita, S.; Watari, S.; Miyachi, H.; Yamamoto, K.; Kimura, E.; Ishikura, S.

    2006-12-01

    Recently, many satellites for communication networks and scientific observation have been launched in the vicinity of the Earth (geo-space). The electromagnetic (EM) environments around these spacecraft are always influenced by the solar wind blowing from the Sun and by induced electromagnetic fields. These occasionally cause various troubles or damage, such as electrification and interference, to the spacecraft. It is therefore important to forecast the geo-space EM environment, just as in ground weather forecasting. Owing to the recent remarkable progress of supercomputer technologies, numerical simulations have become powerful research methods in solar-terrestrial physics. For space weather forecasting, NICT (National Institute of Information and Communications Technology) has developed a real-time global MHD simulation system of solar wind-magnetosphere-ionosphere couplings, which runs on an SX-6 supercomputer. The real-time solar wind parameters from the ACE spacecraft, at one-minute intervals, are adopted as boundary conditions for the simulation. Simulation results (2-D plots) are updated every minute on a NICT website. However, 3-D visualization of the simulation results is indispensable for forecasting space weather more accurately. In the present study, we develop a real-time 3-D visualization website for the global MHD simulations. The 3-D visualizations of the simulation results are updated every 20 minutes in the following three formats: (1) streamlines of magnetic field lines, (2) isosurfaces of temperature in the magnetosphere, and (3) isolines of conductivity and an orthogonal plane of potential in the ionosphere. For this study, we implemented a 3-D viewer application, developed with AVS/Express, that runs in the Internet Explorer browser via ActiveX. Numerical data are saved in HDF5-format data files every minute.
Users can easily search, retrieve and plot past simulation results (3D visualization data and numerical data) by using the STARS (Solar-terrestrial data Analysis and Reference System). The STARS is a data analysis system for satellite and ground-based observation data for solar-terrestrial physics.

  11. Effect of suspension kinematic on 14 DOF vehicle model

    NASA Astrophysics Data System (ADS)

    Wongpattananukul, T.; Chantharasenawong, C.

    2017-12-01

    Computer simulations play a major role in shaping modern science and engineering. They reduce time and resource consumption in new studies and designs. Vehicle simulations have been studied extensively to achieve a vehicle model usable in minimum-lap-time solutions. Simulation accuracy depends on the ability of these models to represent the real phenomenon. Vehicle models with 7 degrees of freedom (DOF), 10 DOF and 14 DOF are normally used in optimal control to solve for minimum lap time. However, suspension kinematics is always neglected in these models. Suspension kinematics is defined as wheel movement with respect to the vehicle body. Tire forces are expressed as a function of wheel slip and wheel position. Therefore, the suspension kinematic relation is appended to the 14 DOF vehicle model to investigate its effect on the accuracy of the simulated trajectory. The classical 14 DOF vehicle model is chosen as the baseline model. Experimental data are collected from test runs of a Formula Student style car as baseline data for simulation and for comparison between the baseline model and the model with suspension kinematics. Results show that in a single long turn there is an accumulated trajectory error in the baseline model compared to the model with suspension kinematics, while in short alternating turns the trajectory error is much smaller. These results show that suspension kinematics has an effect on the trajectory simulation of the vehicle; an optimal control scheme based on the baseline model will therefore be inaccurate.

  12. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Simulating Mass Removal of Groundwater Contaminant Plumes with Complex and Simple Models

    NASA Astrophysics Data System (ADS)

    Lopez, J.; Guo, Z.; Fogg, G. E.

    2016-12-01

    Chlorinated solvents used in industrial, commercial, and other applications continue to pose significant threats to human health through contamination of groundwater resources. A recent National Research Council report concludes that it is unlikely that remediation of these complex sites will be achieved in a time frame of 50-100 years under current methods and standards (NRC, 2013). Pump and treat has been a common strategy at many sites to contain and treat groundwater contamination. At these sites, extensive retention of contaminant mass in low-permeability materials (tailing) has been observed after years or decades of pumping. Although transport models can be built that contain enough of the complex, 3D heterogeneity to simulate the tailing and long cleanup times, this is seldom done because of the large data and computational burdens. Hence, useful, reliable models to simulate various cleanup strategies are rare. The purpose of this study is to explore other potential ways to simulate the mass-removal processes in less time and at lower cost while still producing robust results by capturing the effects of heterogeneity and long-term retention of mass. A site containing a trichloroethylene groundwater plume was selected as the study area. The plume is located within alluvial sediments in the Tucson Basin. A fully heterogeneous domain is generated first and MODFLOW is used to simulate the flow field. Contaminant transport is simulated using both MT3D and RWHet for the fully heterogeneous model. Other approaches, including dual-domain mass transfer and heterogeneous chemical reactions, are applied to simulate the mass removal in a less heterogeneous, or homogeneous, domain, and the results are compared to those obtained from the complex models. The capability of these simpler models to simulate remediation processes, and especially to capture the late-time tailing, is examined.
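    The dual-domain idea can be shown in a zero-dimensional sketch: a mobile zone flushed by pumping exchanges mass with an immobile zone at a slow first-order rate, which is what produces the late-time tailing. The rates below are hypothetical illustration values, not parameters from the Tucson Basin site:

```python
import numpy as np

def dual_domain_removal(t_end=2000.0, dt=0.1, flush=0.05,
                        omega=0.002, beta=1.0):
    """Zero-dimensional sketch of dual-domain mass transfer during
    pump-and-treat (hypothetical rates, not the site model).

    Mobile water is flushed at rate `flush`; mass diffuses back out
    of the immobile domain at first-order rate `omega`, producing
    late-time tailing in the mobile concentration.
    """
    n = int(t_end / dt)
    c_m, c_im = 1.0, 1.0          # start equilibrated
    history = np.empty(n)
    for i in range(n):
        exchange = omega * (c_im - c_m)
        c_m += dt * (-flush * c_m + exchange)   # flushing + back-diffusion
        c_im += dt * (-exchange / beta)         # immobile zone drains slowly
        history[i] = c_m
    return history

c = dual_domain_removal()
# early decline is fast (flushing); late decline is limited by omega
```

    A single-domain model with the same flushing rate would predict essentially complete removal by late time; the slow exchange term is what sustains the tail.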

  14. OSCAR a Matlab based optical FFT code

    NASA Astrophysics Data System (ADS)

    Degallaix, Jérôme

    2010-05-01

    Optical simulation software packages are essential tools for designing and commissioning laser interferometers. This article introduces OSCAR, a Matlab-based FFT code, to the experimentalist community. OSCAR (Optical Simulation Containing Ansys Results) is used to simulate the steady-state electric fields in optical cavities with realistic mirrors. The main advantage of OSCAR over other similar packages is the simplicity of its code, which requires only a short time to master. As a result, even for a beginner, it is relatively easy to modify OSCAR to suit other specific purposes. OSCAR includes an extensive manual and numerous detailed examples, such as simulating thermal aberration, calculating cavity eigenmodes and diffraction loss, and simulating flat-beam cavities and three-mirror ring cavities. An example is also provided of how to run OSCAR on the GPU of modern graphics cards instead of the CPU, making the simulation up to 20 times faster.
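    The core building block of an FFT cavity code such as OSCAR is repeated free-space propagation of the field between mirror surfaces. A minimal Python illustration of that single step (the angular-spectrum FFT propagator, with hypothetical grid and beam parameters; OSCAR itself is Matlab and adds mirror maps, cavity round trips, and more) is:

```python
import numpy as np

def propagate(field, wavelength, dx, distance):
    """Free-space propagation by the angular-spectrum (FFT) method:
    transform to spatial frequencies, apply the paraxial transfer
    function, and transform back."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    fx, fy = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    # paraxial transfer function over `distance`
    h = np.exp(-1j * np.pi * wavelength * distance * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(field) * h) * np.exp(1j * k * distance)

# Gaussian beam on a 256x256 grid (hypothetical: 1064 nm, 1 mm waist)
n, dx, w0 = 256, 1e-4, 1e-3
x = (np.arange(n) - n // 2) * dx
xx, yy = np.meshgrid(x, x)
e0 = np.exp(-(xx**2 + yy**2) / w0**2)
e1 = propagate(e0, 1064e-9, dx, 100.0)           # propagate 100 m
```

    Because the transfer function has unit modulus, the propagator conserves optical power exactly, which is a useful sanity check on any FFT cavity code.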

  15. System analysis for the Huntsville Operational Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Mauldin, J.

    1984-01-01

    The Huntsville Operations Support Center (HOSC) is a distributed computer system used to provide real-time data acquisition, analysis, and display during NASA space missions and to perform simulation and study activities during non-mission times. The primary purpose is to provide a HOSC system simulation model that is used to investigate the effects of various HOSC system configurations. Such a model would be valuable in planning the future growth of HOSC and in ascertaining the effects of data rate variations, update table broadcasting, and smart display terminal data requirements on the HOSC HYPERchannel network system. A simulation model was developed in PASCAL, and results were obtained for various system configurations. A tutorial of the model is presented along with the results of simulation runs. Some very high data rate situations were simulated to observe the effects of the HYPERchannel switchover from contention to priority mode under high channel loading.

  16. Power in the loop real time simulation platform for renewable energy generation

    NASA Astrophysics Data System (ADS)

    Li, Yang; Shi, Wenhui; Zhang, Xing; He, Guoqing

    2018-02-01

    Nowadays, renewable energy sources are being connected to the power system on a large scale, and real-time simulation platforms are widely used to carry out research on integration control algorithms, power system stability, etc. Compared to traditional pure digital simulation and hardware-in-the-loop simulation, power-in-the-loop simulation has higher accuracy and reliability. In this paper, a power-in-the-loop analog-digital hybrid simulation platform has been built; it can be used not only for a single generation unit connecting to the grid, but also for multiple renewable generation units. A wind generator inertia control experiment was carried out on the platform. The structure of the inertia control platform was studied, and the results verify that the platform meets the needs of power-in-the-loop real-time simulation for renewable energy.

  17. Distributed simulation using a real-time shared memory network

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Mattern, Duane L.; Wong, Edmond; Musgrave, Jeffrey L.

    1993-01-01

    The Advanced Control Technology Branch of the NASA Lewis Research Center performs research in the area of advanced digital controls for aeronautic and space propulsion systems. This work requires the real-time implementation of both control software and complex dynamical models of the propulsion system. We are implementing these systems in a distributed, multi-vendor computer environment, so a need exists for real-time communication and synchronization between the distributed multi-vendor computers. A shared memory network is a potential solution which offers several advantages over other real-time communication approaches. A candidate shared memory network was tested for basic performance. The shared memory network was then used to implement a distributed simulation of a ramjet engine. The accuracy and execution time of the distributed simulation were measured and compared to the performance of the non-partitioned simulation. The ease of partitioning the simulation, the minimal time required to develop the interprocessor communication, and the resulting execution time all indicate that the shared memory network is a real-time communication technique worthy of serious consideration.

  18. Numerical Simulations of Thick Aluminum Wire Behavior Under Megampere Current Drive

    DTIC Science & Technology

    2009-06-01

    The simulated time dependences of the wire radii agree rather well with the experimental results obtained using laser diagnostics and light imaging. The experiments involved a wide range of diagnostics, including current probes, streaked imaging of optical emission, 4-frame laser shadowgraphy, fast …

  19. Applied Research on Laparoscopic Simulator in the Resident Surgical Laparoscopic Operation Technical Training.

    PubMed

    Fu, Shangxi; Liu, Xiao; Zhou, Li; Zhou, Meisheng; Wang, Liming

    2017-08-01

    The purpose of this study was to estimate the effects of a surgical laparoscopic operation course on laparoscopic operation skills after simulated training for medical students, with relatively objective results obtained via data gathered before and after the laparoscopic simulator practice course for resident standardized trainees. Experiment 1: 20 resident standardized trainees with no experience in laparoscopic surgery were included in the inexperienced group and finished simulated cholecystectomy according to simulator videos. Simulator data were collected (total operation time, path length, average speed of instrument movement, movement efficiency, number of perforations, the time cautery was applied without appropriate contact with adhesions, and number of serious complications). Ten attending doctors were included in the experienced group and performed the simulated cholecystectomy directly; their data were likewise collected with the simulator, and the data of the two groups were compared. Experiment 2: Participants in the inexperienced group were assigned to a basic group (receiving 8 items of basic operation training) and a special group (receiving 8 items of basic operation training and 4 items of specialized training), with 10 persons in each group. They received the training courses designed by us. After the training level had reached the expected target, simulated cholecystectomy was performed and data were collected. Data were compared between the basic group and the special group, and then between the special group and the experienced group. Results of experiment 1 showed a significant difference between the inexperienced group, in which participants operated the simulated cholecystectomy only according to the instructors' teaching and the operation video, and the experienced group. Results of experiment 2 suggested that total operation time, number of perforations, number of serious complications, number of non-cauterized bleeding events, and the time cautery was applied without appropriate contact with adhesions in the special group were all superior to those in the basic group; there was no statistical difference in the other data between these groups. Comparing the special group with the experienced group, total operation time and the time cautery was applied without appropriate contact with adhesions were superior in the experienced group; there was no statistical difference in the other data. Laparoscopic simulators are effective for surgical skills training. Basic courses mainly improve the operator's hand-eye coordination and perception of instrument insertion depth. Specialized training courses not only improve the operator's familiarity with surgeries, but also reduce operation time and risk, and improve safety.

  20. Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators

    NASA Technical Reports Server (NTRS)

    Fantini, Jay A.

    1998-01-01

    Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
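    The inversion described above is straightforward to sketch: a reverse linear interpolation between the EU limits seeds Newton-Raphson, which is iterated until the EU residual corresponds to less than one count. The version below is an illustrative Python transcription (the original is FORTRAN, and the quadratic calibration coefficients and count limits here are hypothetical):

```python
import numpy as np

def eu_to_counts(eu, coeffs, count_min=0, count_max=1023, tol=0.5):
    """Invert a counts-to-EU calibration polynomial by Newton-Raphson.

    `coeffs` are polynomial coefficients, highest power first, mapping
    counts to engineering units: eu = p(counts). Illustrative sketch of
    the method described in the record, not the original FORTRAN.
    """
    p = np.poly1d(coeffs)
    dp = p.deriv()
    eu_min, eu_max = p(count_min), p(count_max)
    # reverse linear interpolation between the EU limits -> initial guess
    c = count_min + (eu - eu_min) * (count_max - count_min) / (eu_max - eu_min)
    for _ in range(20):                      # Newton-Raphson refinement
        err = p(c) - eu
        if abs(err) < tol * abs(dp(c)):      # residual within half a count
            break
        c -= err / dp(c)
    return int(round(c))

# hypothetical quadratic calibration: eu = 0.001*counts^2 + 0.1*counts - 5
coeffs = [0.001, 0.1, -5.0]
counts = eu_to_counts(250.0, coeffs)
```

    For monotone calibrations the linear-interpolation seed lands close enough that Newton converges in a handful of iterations, which is what makes the approach viable at an 80 Hz frame rate without stored interpolation tables.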

  1. Use of a Novel Airway Kit and Simulation in Resident Training on Emergent Pediatric Airways.

    PubMed

    Melzer, Jonathan M; Hamersley, Erin R S; Gallagher, Thomas Q

    2017-06-01

    Objective Development of a novel pediatric airway kit and implementation with simulation to improve resident response to emergencies with the goal of improving patient safety. Methods Prospective study with 9 otolaryngology residents (postgraduate years 1-5) from our tertiary care institution. Nine simulated pediatric emergency airway drills were carried out with the existing system and a novel portable airway kit. Response times and time to successful airway control were noted with both the extant airway system and the new handheld kit. Results were analyzed to ensure parametric data and compared with t tests. A Bonferroni adjustment indicated that an alpha of 0.025 was needed for significance. Results Use of the airway kit significantly reduced the mean time of resident arrival by 47% (P = .013) and mean time of successful intubation by 50% (P = .007). Survey data indicated 100% improved resident comfort with emergent airway scenarios with use of the kit. Discussion Times to response and meaningful intervention were significantly reduced with implementation of the handheld airway kit. Use of simulation training to implement the new kit improved residents' comfort and airway skills. This study describes an affordable novel mobile airway kit and demonstrates its ability to improve response times. Implications for Practice The low cost of this airway kit makes it a tenable option even for smaller hospitals. Simulation provides a safe and effective way to familiarize oneself with novel equipment, and, when possible, realistic emergent airway simulations should be used to improve provider performance.

  2. [Skill retention using extraglottic airways in out-of-hospital emergencies: efficacy and long-term results of simulator-based medical education : A prospective follow-up study].

    PubMed

    Mann, V; Limberg, F; Mann, S T W; Little, S; Müller, M; Sander, M; Röhrig, R

    2018-04-11

    For emergency medicine personnel (EMP), there is little evidence concerning the adequate timing of refresher courses to maintain routine in the application of extraglottic airways. The aim of this study was to evaluate the efficacy and long-term results of a simulator-based education concept teaching basic airway management skills with extraglottic airways for EMP, and to draw conclusions concerning the adequate time interval for refresher courses. In an explorative, prospective simulator study with nonphysician EMP, airway management skills using the laryngeal mask airway Supreme® (LMA‑S) were examined after an introduction lecture. The application of an endotracheal tube (ETT) served as control. Time for preparation of the airway devices, insertion success, and resulting apnea time were assessed immediately after the first introduction lecture (t1) and unannounced 9-12 months thereafter (t2). Comparison of the times for preparation of the LMA‑S at t1 and t2 demonstrated similar results. After the introduction lecture, all paramedics were able to insert the LMA‑S successfully within a maximum of 2 attempts; 9-12 months later, success rates with the LMA‑S were unchanged. Apnea time during airway management was shorter with the LMA‑S than with the ETT (p < 0.01). Times needed for preparation of the airway devices were similar. The results of this simulator study indicate that a standardized introduction lecture is appropriate to ensure long-lasting procedural skills for up to 12 months, so that subsequent refresher courses in basic airway management with the LMA‑S once a year may be adequate. Simulator-based education in basic airway management skills with extraglottic airways is recommended to facilitate further clinical education according to the current guidelines.

  3. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

    With the currently available methods of computational fluid dynamics (CFD), simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the calculation cells must be small and the calculation must be transient with a small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, making simulation difficult in practice. This study investigates the cell size and time step size required for accurate simulations, and the filtering effects caused by a coarser mesh and a longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.

  4. Fatigue-test acceleration with flight-by-flight loading and heating to simulate supersonic-transport operation

    NASA Technical Reports Server (NTRS)

    Imig, L. A.; Garrett, L. E.

    1973-01-01

    Possibilities for reducing fatigue-test time for supersonic-transport materials and structures were studied in tests with simulated flight-by-flight loading. In order to determine whether short-time tests were feasible, the results of accelerated tests (2 sec per flight) were compared with the results of real-time tests (96 min per flight). The effects of design mean stress, the stress range for ground-air-ground cycles, simulated thermal stress, the number of stress cycles in each flight, and salt corrosion were studied. The flight-by-flight stress sequences were applied to notched sheet specimens of Ti-8Al-1Mo-1V and Ti-6Al-4V titanium alloys. A linear cumulative-damage analysis accounted for large changes in stress range of the simulated flights but did not account for the differences between real-time and accelerated tests. The fatigue lives from accelerated tests were generally within a factor of two of the lives from real-time tests; thus, within the scope of the investigation, accelerated testing seems feasible.
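    The linear cumulative-damage analysis mentioned above is Miner's rule: each block of cycles consumes life in proportion to its cycle count divided by the constant-amplitude life at that stress level, and failure is predicted when the sum reaches one. A sketch with hypothetical S-N lives (not the titanium-alloy data from these tests):

```python
def miner_damage(flight_blocks, sn_life):
    """Linear cumulative damage (Miner's rule) for a flight-by-flight
    load spectrum: sum of (applied cycles) / (cycles to failure) per
    stress level. Hypothetical S-N lives, illustrative only.
    """
    return sum(n / sn_life[s] for s, n in flight_blocks)

# cycles per flight at each stress level vs. cycles-to-failure at that level
sn_life = {"GAG": 40_000, "gust": 2_000_000, "thermal": 150_000}
one_flight = [("GAG", 1), ("gust", 30), ("thermal", 1)]

d = miner_damage(one_flight, sn_life)      # damage accumulated per flight
flights_to_failure = 1.0 / d               # Miner prediction: sum reaches 1
```

    The abstract's observation fits this framework: Miner's rule tracks the large changes in stress range between simulated flights, but it has no term that distinguishes a 2-second flight from a 96-minute one, so it cannot account for real-time versus accelerated differences.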

  5. Method for decreasing CT simulation time of complex phantoms and systems through separation of material specific projection data

    NASA Astrophysics Data System (ADS)

    Divel, Sarah E.; Christensen, Soren; Wintermark, Max; Lansberg, Maarten G.; Pelc, Norbert J.

    2017-03-01

    Computer simulation is a powerful tool in CT; however, long simulation times of complex phantoms and systems, especially when modeling many physical aspects (e.g., spectrum, finite detector and source size), hinder the ability to realistically and efficiently evaluate and optimize CT techniques. Long simulation times primarily result from the tracing of hundreds of line integrals through each of the hundreds of geometrical shapes defined within the phantom. However, when the goal is to perform dynamic simulations or test many scan protocols using a particular phantom, traditional simulation methods inefficiently and repeatedly calculate line integrals through the same set of structures although only a few parameters change in each new case. In this work, we have developed a new simulation framework that overcomes such inefficiencies by dividing the phantom into material-specific regions with the same time-attenuation profiles, acquiring and storing monoenergetic projections of the regions, and subsequently scaling and combining the projections to create equivalent polyenergetic sinograms. The simulation framework is especially efficient for the validation and optimization of CT perfusion, which requires analysis of many stroke cases and testing hundreds of scan protocols on a realistic and complex numerical brain phantom. Using this updated framework to conduct a 31 time-point simulation with 80 mm of z-coverage of a brain phantom on two 16-core Linux servers, we have reduced the simulation time from 62 hours to under 2.6 hours, a 95% reduction.
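    The scaling-and-combining step described above can be sketched directly: store one monoenergetic line-integral sinogram per material region, then for any tube spectrum form the polyenergetic measurement by Beer-Lambert attenuation summed over energy bins. The two-material, three-bin numbers below are hypothetical, not the brain-phantom values:

```python
import numpy as np

def polyenergetic_sinogram(material_projections, mu, spectrum):
    """Combine stored per-material monoenergetic line integrals into a
    polyenergetic sinogram (the scaling step of the framework above;
    hypothetical two-material, three-energy example).

    material_projections: dict material -> array of path lengths (cm)
    mu: dict material -> attenuation coefficient per energy bin (1/cm)
    spectrum: photon weights per energy bin, summing to 1
    """
    shape = next(iter(material_projections.values())).shape
    intensity = np.zeros(shape)
    for e, w in enumerate(spectrum):
        # total attenuation along each ray at this energy
        line_integral = sum(mu[m][e] * L for m, L in material_projections.items())
        intensity += w * np.exp(-line_integral)   # Beer-Lambert per bin
    return -np.log(intensity)

paths = {"tissue": np.array([0.0, 10.0, 15.0]),
         "bone": np.array([0.0, 1.0, 2.0])}
mu = {"tissue": [0.25, 0.20, 0.18], "bone": [0.60, 0.45, 0.38]}
p = polyenergetic_sinogram(paths, mu, [0.3, 0.5, 0.2])
```

    Because the stored path lengths never change, testing a new kVp or filtration only reruns this cheap weighted sum rather than retracing rays through the phantom geometry.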

  6. Skin hydration analysis by experiment and computer simulations and its implications for diapered skin.

    PubMed

    Saadatmand, M; Stone, K J; Vega, V N; Felter, S; Ventura, S; Kasting, G; Jaworska, J

    2017-11-01

    Experimental work on skin hydration is technologically challenging, and mostly limited to observations where environmental conditions are constant. In some cases, like diapered baby skin, such work is practically unfeasible, yet it is important to understand potential effects of diapering on skin condition. To overcome this challenge, in part, we developed a computer simulation model of reversible transient skin hydration effects. The skin hydration model by Li et al. (Chem Eng Sci, 138, 2015, 164) was further developed to simulate transient exposure conditions where relative humidity (RH), wind velocity, and air and skin temperature can be any function of time. Computer simulations of evaporative water loss (EWL) decay after different occlusion times were compared with experimental data to calibrate the model. Next, we used the model to investigate EWL and stratum corneum (SC) thickness in different diapering scenarios. Key results from the experimental work were: (1) for occlusions with RH=100% and free water longer than 30 minutes, the absorbed amount of water is almost the same; (2) longer occlusion times result in higher water absorption by the SC. The EWL decay and skin water content predictions were in agreement with experimental data. Simulations also revealed that skin under occlusion hydrates mainly because the outflux is blocked, not because it absorbs water from the environment. Further, simulations demonstrated that hydration level is sensitive to time, RH and/or free water on skin. In simulated diapering scenarios, skin maintained hydration content very close to the baseline conditions without a diaper for the entire duration of a 24-hour period. Different diapers/diaper technologies are known to have different profiles in terms of their ability to provide wetness protection, which can result in consumer-noticeable differences in wetness.
Simulation results based on published literature using data from a number of different diapers suggest that diapered skin hydrates within ranges considered reversible. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. The NASA Lewis integrated propulsion and flight control simulator

    NASA Technical Reports Server (NTRS)

    Bright, Michelle M.; Simon, Donald L.

    1991-01-01

    A new flight simulation facility was developed at NASA-Lewis. The purpose of this flight simulator is to allow integrated propulsion control and flight control algorithm development and evaluation in real time. As a preliminary check of the simulator facility's capabilities and the correct integration of its components, the control design and physics models for a short take-off and vertical landing fighter aircraft model are presented, with their associated system integration and architecture, pilot-vehicle interfaces, and display symbology. The initial testing and evaluation results show that this fixed-base flight simulator can provide real-time feedback and display of both airframe and propulsion variables for validation of integrated flight and propulsion control systems. Additionally, through the use of this flight simulator, various control design methodologies and cockpit mechanizations can be tested and evaluated in a real-time environment.

  8. The Simulation Realization of Pavement Roughness in the Time Domain

    NASA Astrophysics Data System (ADS)

    XU, H. L.; He, L.; An, D.

    2017-10-01

    Given the needs of dynamic studies of the vehicle-pavement system and of simulated vibration table tests, simulating pavement roughness realistically is an important guarantee that calculations and tests reflect the actual situation. Using the power spectral density function, the simulation of pavement roughness can be realized by the inverse Fourier transform. The main idea of this method is that the spectrum amplitude and a random phase are obtained separately according to the power spectrum, and the simulated pavement roughness is then obtained in the time domain through the inverse fast Fourier transform (IFFT). In this process, the sampling interval (Δl) was 0.1 m and the number of sampling points (N) was 4096, which satisfied the accuracy requirements. Using this method, simulated pavement roughness profiles (grades A-H) were obtained in the time domain.
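    The random-phase IFFT construction described above can be sketched in a few lines. This version assumes the ISO 8608 form of the road PSD, Gq(n) = Gq(n0)·(n/n0)^-2 with Gq(n0) = 64e-6 m³ at n0 = 0.1 cycles/m (grade B) as an example, and uses the same sampling as the record (Δl = 0.1 m, N = 4096):

```python
import numpy as np

def pavement_profile(n_points=4096, dl=0.1, gq_n0=64e-6, n0=0.1, seed=0):
    """Road roughness in the distance domain by the random-phase IFFT
    method: amplitudes from the one-sided PSD, uniformly random phases,
    then an inverse FFT (grade-B PSD assumed as an example).
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n_points, d=dl)       # spatial frequency, cycles/m
    psd = np.zeros_like(freqs)
    psd[1:] = gq_n0 * (freqs[1:] / n0) ** -2      # ISO 8608 PSD form
    # |X_k| chosen so the profile variance equals the PSD integral
    amp = np.sqrt(psd * n_points / (2 * dl))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    spectrum = amp * np.exp(1j * phase)
    spectrum[0] = 0.0                             # zero-mean profile
    return np.fft.irfft(spectrum, n=n_points)

profile = pavement_profile()                      # 409.6 m of grade-B road
```

    Because the amplitudes are fixed by the PSD and only the phases are random, every realization has the prescribed roughness spectrum; different seeds give different road surfaces of the same grade.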

  9. Comparison of three large-eddy simulations of shock-induced turbulent separation bubbles

    NASA Astrophysics Data System (ADS)

    Touber, Emile; Sandham, Neil D.

    2009-12-01

    Three different large-eddy simulation investigations of the interaction between an impinging oblique shock and a supersonic turbulent boundary layer are presented. All simulations made use of the same inflow technique, specifically aimed at avoiding possible low-frequency interference with the shock/boundary-layer interaction system. All simulations were run on relatively wide computational domains and integrated over times greater than twenty-five times the period of the most commonly reported low-frequency shock oscillation, making comparisons possible at both time-averaged and low-frequency-dynamic levels. The results confirm previous experimental findings of a simple linear relation between the interaction length and the oblique-shock strength when scaled using the boundary-layer thickness and wall shear stress. All the tested cases show evidence of significant low-frequency shock motions. At the wall, energetic low-frequency pressure fluctuations are observed, mainly in the initial part of the interaction.

  10. Studies on Radar Sensor Networks

    DTIC Science & Technology

    2007-08-08

    … scheme in which a 2-D image was created by adding voltages with the appropriate time offset. Simulation results show that our DCT-based scheme works … using RSNs in terms of the probability of miss detection (PMD) and the root mean square error (RMSE). Simulation results showed that multi-target detection … Simulation results are presented to evaluate the feasibility and effectiveness of the proposed JMIC algorithm in a query surveillance region. SVD-QR and …

  11. Design verification of large time constant thermal shields for optical reference cavities.

    PubMed

    Zhang, J; Wu, W; Shi, X H; Zeng, X Y; Deng, K; Lu, Z H

    2016-02-01

    In order to achieve high frequency stability in ultra-stable lasers, the Fabry-Pérot reference cavities are placed inside vacuum chambers with large thermal time constants to reduce the sensitivity to external temperature fluctuations. Currently, the determination of thermal time constants of vacuum chambers is based either on theoretical calculation or on time-consuming experiments. The first method applies only to simple systems, while the second takes a long time to try out different designs. To overcome these limitations, we present thermal time constant simulation using finite element analysis (FEA) based on complete vacuum chamber models and verify the results with measured time constants. We measure the thermal time constants using ultra-stable laser systems and a frequency comb. The thermal expansion coefficients of the optical reference cavities are precisely measured to reduce the measurement error of the time constants. The simulation results and the experimental results agree very well. With this knowledge, we simulate several simplified design models using FEA to obtain larger vacuum thermal time constants at room temperature, taking into account vacuum pressure, shielding layers, and support structure. We adopt the Taguchi method for shielding layer optimization and demonstrate that layer material and layer number dominate the contributions to the thermal time constant, compared with layer thickness and layer spacing.
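    As a complement to the FEA approach in this record, the order of magnitude of a single shield's time constant can be checked with a lumped-capacitance estimate: linearize the radiative coupling as h = 4εσT³ and divide the shield's heat capacity by its radiative conductance. The shield mass, area, and emissivity below are hypothetical, and the estimate ignores the support conduction and residual-gas effects that the paper's FEA models include:

```python
def shield_time_constant(mass, c_p, area, emissivity, t_kelvin=293.0):
    """Lumped-capacitance estimate of one thermal shield's time constant
    under linearized radiative coupling (back-of-envelope sketch only;
    conduction through supports and vacuum pressure are ignored).
    """
    sigma = 5.670374419e-8                        # Stefan-Boltzmann, W/m^2/K^4
    h_rad = 4 * emissivity * sigma * t_kelvin**3  # linearized coupling, W/m^2/K
    return mass * c_p / (h_rad * area)            # seconds

# hypothetical polished-aluminum shield: 2 kg, 900 J/(kg K), 0.1 m^2, eps = 0.05
tau = shield_time_constant(2.0, 900.0, 0.1, emissivity=0.05)
tau_hours = tau / 3600.0
```

    The strong dependence on emissivity is consistent with the paper's Taguchi finding that layer material dominates: a polished low-emissivity surface lengthens the time constant roughly in inverse proportion to ε.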

  12. Demonstration of an Aerocapture GN and C System Through Hardware-in-the-Loop Simulations

    NASA Technical Reports Server (NTRS)

    Masciarelli, James; Deppen, Jennifer; Bladt, Jeff; Fleck, Jeff; Lawson, Dave

    2010-01-01

    Aerocapture is an orbit insertion maneuver in which a spacecraft flies through a planetary atmosphere once, using drag force to decelerate and effect a hyperbolic-to-elliptical orbit change. Aerocapture employs a feedback Guidance, Navigation, and Control (GN&C) system to deliver the spacecraft into a precise post-atmospheric orbit despite the uncertainties inherent in planetary atmosphere knowledge, entry targeting, and aerodynamic predictions. Only small amounts of propellant are required for attitude control and orbit adjustments, thereby providing mass savings of hundreds to thousands of kilograms over conventional all-propulsive techniques. The Analytic Predictor Corrector (APC) guidance algorithm has been developed to steer the vehicle through the aerocapture maneuver using bank angle control. Through funding provided by NASA's In-Space Propulsion Technology Program, the operation of an aerocapture GN&C system has been demonstrated in high-fidelity simulations that include real-time hardware in the loop, thus increasing the Technology Readiness Level (TRL) of aerocapture GN&C. First, a non-real-time (NRT), 6-DOF trajectory simulation was developed for the aerocapture trajectory. The simulation included vehicle dynamics, gravity model, atmosphere model, aerodynamics model, inertial measurement unit (IMU) model, attitude control thruster torque models, and GN&C algorithms (including the APC aerocapture guidance). The simulation used the vehicle and mission parameters from the ST-9 mission. A 2000-case Monte Carlo simulation was performed, and the results show an aerocapture success rate greater than 99.7%, that more than 95% of the total delta-V required for orbit insertion is provided by aerodynamic drag, and that the post-aerocapture orbit plane wedge angle error is less than 0.5 deg (3-sigma). 
    Then a real-time (RT), 6-DOF simulation of the aerocapture trajectory was developed, which demonstrated the guidance software executing on a flight-like computer, interfacing with a simulated IMU and simulated thrusters, with vehicle dynamics provided by an external simulator. Five cases from the NRT simulations were run in the RT simulation environment. The results compare well to those of the NRT simulation, thus verifying the RT simulation configuration. The results of the simulations described above show that the aerocapture maneuver using the APC algorithm can be accomplished reliably, and the algorithm is now at TRL-6. Flight validation is the next step for aerocapture technology development.

  13. Parallel Multi-cycle LES of an Optical Pent-roof DISI Engine Under Motored Operating Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Dam, Noah; Sjöberg, Magnus; Zeng, Wei

    The use of Large-eddy Simulations (LES) has increased due to their ability to resolve the turbulent fluctuations of engine flows and capture the resulting cycle-to-cycle variability. One drawback of LES, however, is the requirement to run multiple engine cycles to obtain the necessary cycle statistics for full validation. The standard method of obtaining the cycles, running a single simulation through many engine cycles sequentially, can take a long time to complete. Recently, a new strategy has been proposed by our research group to reduce the amount of time necessary to simulate the many engine cycles by running individual engine cycle simulations in parallel. With modern large computing systems this has the potential to reduce the amount of time necessary for a full set of simulated engine cycles to finish by up to an order of magnitude. In this paper, the Parallel Perturbation Methodology (PPM) is used to simulate up to 35 engine cycles of an optically accessible, pent-roof Direct-injection Spark-ignition (DISI) engine at two different motored engine operating conditions, one throttled and one un-throttled. Comparisons are made against corresponding sequential-cycle simulations to verify the similarity of results using either methodology. Mean results from the PPM approach are very similar to sequential-cycle results, with less than 0.5% difference in pressure and a magnitude structure index (MSI) of 0.95. Differences in cycle-to-cycle variability (CCV) predictions are larger, but close to the statistical uncertainty in the measurement for the number of cycles simulated. PPM LES results were also compared against experimental data. Mean quantities such as pressure or mean velocities were typically matched to within 5-10%. Pressure CCVs were under-predicted, mostly due to the lack of any perturbations in the pressure boundary conditions between cycles. Velocity CCVs for the simulations had the same average magnitude as experiments, but the experimental data showed greater spatial variation in the root-mean-square (RMS). Conversely, circular standard deviation results showed greater repeatability of the flow directionality and swirl vortex positioning than the simulations.
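    The headline comparison above (a mean-pressure difference below 0.5% and cycle-to-cycle variability measured over the set of cycles) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the paper's actual post-processing: the array shapes, function names, and the use of peak-pressure coefficient of variation as the CCV metric are all assumptions.

    ```python
    import numpy as np

    def cycle_stats(pressure_cycles):
        """Mean pressure trace and cycle-to-cycle variability (CCV) of a set of
        engine cycles, given as an array of shape (n_cycles, n_samples)."""
        mean_trace = pressure_cycles.mean(axis=0)
        peaks = pressure_cycles.max(axis=1)
        # CCV here taken as the coefficient of variation of peak pressure.
        ccv = peaks.std(ddof=1) / peaks.mean()
        return mean_trace, ccv

    def mean_pressure_difference(parallel_cycles, sequential_cycles):
        """Percent difference between the mean pressure traces of two methods."""
        mp, _ = cycle_stats(parallel_cycles)
        ms, _ = cycle_stats(sequential_cycles)
        return 100.0 * np.abs(mp - ms).mean() / ms.mean()
    ```

    With 35 cycles per method, a difference below 1% computed this way would be consistent with the level of agreement reported above.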

  14. Fast Simulation of Electromagnetic Showers in the ATLAS Calorimeter: Frozen Showers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barberio, E.; /Melbourne U.; Boudreau, J.

    2011-11-29

    One of the most time-consuming processes in simulating pp interactions in the ATLAS detector at the LHC is the simulation of electromagnetic showers in the calorimeter. In order to speed up the event simulation, several parametrisation methods are available in ATLAS. In this paper we present a short description of the frozen shower technique, together with some recent benchmarks and a comparison with full simulation. The expected high rate of proton-proton collisions in the ATLAS detector at the LHC requires large samples of simulated events (Monte Carlo) to study various physics processes. A detailed simulation of particle reactions ('full simulation') in the ATLAS detector is based on GEANT4 and is very accurate. However, due to the complexity of the detector, the high particle multiplicity, and GEANT4 itself, the average CPU time spent simulating a typical QCD event in a pp collision is 20 or more minutes on modern computers. During detector simulation, most of the time (up to 70%) is spent in the calorimeters, and most of that is required for electromagnetic particles in the electromagnetic (EM) part of the calorimeters. This is the motivation for fast simulation approaches which reduce the simulation time without affecting the accuracy. Several fast simulation methods available within the ATLAS simulation framework (the standard Athena-based simulation program) are discussed here, with a focus on the novel frozen shower library (FS) technique. The results obtained with FS are presented as well.
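    The core of a frozen-shower approach is a lookup: below some energy cutoff, instead of tracking an electromagnetic shower step by step, a pre-simulated energy-deposit pattern is fetched from a library (binned in ATLAS by energy, detector region, and other variables) and scaled to the requested energy. The toy below sketches only that lookup-and-scale step; the class, binning, and linear scaling are hypothetical simplifications, not the ATLAS implementation.

    ```python
    import bisect
    import random

    class FrozenShowerLibrary:
        """Toy frozen-shower library: pre-generated energy-deposit patterns,
        binned by particle energy, substituted for full simulation."""

        def __init__(self, energy_bin_edges, showers_per_bin):
            # energy_bin_edges: sorted bin edges (e.g. in GeV).
            # showers_per_bin: {bin_index: [list_of_deposits, ...]}
            self.energy_bin_edges = energy_bin_edges
            self.showers_per_bin = showers_per_bin

        def fetch(self, energy):
            """Pick a stored shower from the matching energy bin and rescale it."""
            i = bisect.bisect_right(self.energy_bin_edges, energy) - 1
            i = max(0, min(i, len(self.energy_bin_edges) - 2))  # clamp to valid bins
            shower = random.choice(self.showers_per_bin[i])
            stored_energy = sum(shower)
            # Naive linear scaling of the stored deposits to the requested energy.
            return [d * energy / stored_energy for d in shower]
    ```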

  15. Numerical simulations of an advection fog event over Shanghai Pudong International Airport with the WRF model

    NASA Astrophysics Data System (ADS)

    Lin, Caiyan; Zhang, Zhongfeng; Pu, Zhaoxia; Wang, Fengyun

    2017-10-01

    A series of numerical simulations is conducted to understand the formation, evolution, and dissipation of an advection fog event over Shanghai Pudong International Airport (ZSPD) with the Weather Research and Forecasting (WRF) model. Using the current operational settings at the Meteorological Center of East China Air Traffic Management Bureau, the WRF model successfully predicts the fog event at ZSPD. Additional numerical experiments are performed to examine the physical processes associated with the fog event. The results indicate that prediction of this particular fog event is sensitive to microphysical schemes for the time of fog dissipation but not for the time of fog onset. The simulated timing of the arrival and dissipation of the fog, as well as the cloud distribution, is substantially sensitive to the planetary boundary layer and radiation (both longwave and shortwave) processes. Moreover, varying forecast lead times also produces different simulation results for the fog event regarding its onset and duration, suggesting a trade-off between more accurate initial conditions and a proper forecast lead time that allows model physical processes to spin up adequately during the fog simulation. The overall outcomes from this study imply that the complexity of physical processes and their interactions within the WRF model during fog evolution and dissipation is a key area of future research.

  16. 2-D Modeling of Nanoscale MOSFETs: Non-Equilibrium Green's Function Approach

    NASA Technical Reports Server (NTRS)

    Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan

    2001-01-01

    We have developed physical approximations and computer code capable of realistically simulating 2-D nanoscale transistors, using the non-equilibrium Green's function (NEGF) method. This is the most accurate full quantum model yet applied to 2-D device simulation. Open boundary conditions and oxide tunneling are treated on an equal footing. Electrons in the ellipsoids of the conduction band are treated within the anisotropic effective mass approximation. Electron-electron interaction is treated within the Hartree approximation by solving the NEGF and Poisson equations self-consistently. For the calculations presented here, parallelization is performed by distributing the solution of the NEGF equations across processors, energy-wise. We present simulations of the "benchmark" MIT 25nm and 90nm MOSFETs and compare our results to those from the drift-diffusion simulator and the available quantum-corrected results. In the 25nm MOSFET, the channel length is less than ten times the electron wavelength, and the electron scattering time is comparable to its transit time. Our main results are: (1) Simulated drain subthreshold current characteristics are shown, where the potential profiles are calculated self-consistently by the corresponding simulation methods. The current predicted by our quantum simulation has a smaller subthreshold slope of the Vg dependence, which results in a higher threshold voltage. (2) When the gate oxide thickness is less than 2 nm, gate oxide leakage is a primary factor determining the off-current of a MOSFET. (3) Using our 2-D NEGF simulator, we found several ways to drastically decrease the oxide leakage current without compromising drive current. (4) The quantum mechanically calculated electron density is much smaller than the background doping density in the polysilicon gate region near the oxide interface. This creates an additional effective gate voltage. Different ways to include this effect approximately will be discussed.
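    The Hartree self-consistency described above alternates between a transport solve that yields the charge density and a Poisson solve that yields the potential, iterating until the potential stops changing. The sketch below shows that outer loop in 1D with a stand-in density model in place of the NEGF solve; it is a schematic of the iteration structure only, not of the authors' simulator, and all names, units, and the damped-mixing choice are assumptions.

    ```python
    import numpy as np

    def solve_poisson_1d(rho, dx, v_left=0.0, v_right=0.0):
        """Finite-difference solve of d2V/dx2 = -rho (constants absorbed into
        rho) with Dirichlet boundary values v_left and v_right."""
        n = len(rho)
        A = (np.diag(-2.0 * np.ones(n))
             + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1)) / dx**2
        b = -rho.astype(float).copy()
        b[0] -= v_left / dx**2    # fold boundary values into the right-hand side
        b[-1] -= v_right / dx**2
        return np.linalg.solve(A, b)

    def self_consistent_loop(density_of, n=50, dx=1.0, mix=0.3,
                             tol=1e-8, max_iter=200):
        """Outer Hartree loop: V -> density(V) -> Poisson -> new V, with linear
        mixing for stability. density_of stands in for the NEGF charge solve."""
        V = np.zeros(n)
        for _ in range(max_iter):
            rho = density_of(V)
            V_new = solve_poisson_1d(rho, dx)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = (1.0 - mix) * V + mix * V_new  # damped update
        return V
    ```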

  17. The reliability of molecular dynamics simulations of the multidrug transporter P-glycoprotein in a membrane environment

    PubMed Central

    Condic-Jurkic, Karmen; Subramanian, Nandhitha; Mark, Alan E.

    2018-01-01

    Despite decades of research, the mechanism of action of the ABC multidrug transporter P-glycoprotein (P-gp) remains elusive. Due to experimental limitations, many researchers have turned to molecular dynamics simulation studies in order to investigate different aspects of P-gp function. However, such studies are challenging and caution is required when interpreting the results. P-gp is highly flexible and the time scale on which it can be simulated is limited. There is also uncertainty regarding the accuracy of the various crystal structures available, let alone the structure of the protein in a physiologically relevant environment. In this study, three alternative structural models of mouse P-gp (3G5U, 4KSB, 4M1M), all resolved to 3.8 Å, were used to initiate sets of simulations of P-gp in a membrane environment in order to determine: a) the sensitivity of the results to differences in the starting configuration; and b) the extent to which converged results could be expected on the time scales commonly simulated for this system. The simulations suggest that the arrangement of the nucleotide binding domains (NBDs) observed in the crystal structures is not stable in a membrane environment. In all simulations, the NBDs rapidly associated (within 10 ns) and changes within the transmembrane helices were observed. The secondary structure within the transmembrane domain was best preserved in the 4M1M model under the simulation conditions used. However, the extent to which replicate simulations diverged on a 100 to 200 ns timescale meant that it was not possible to draw definitive conclusions as to which structure overall was most stable, or to obtain converged and reliable results for any of the properties examined. The work brings into question the reliability of conclusions made in regard to the nature of specific interactions inferred from previous simulation studies on this system involving similar sampling times. 
It also highlights the need to demonstrate the statistical significance of any results obtained in simulations of large flexible proteins, especially where the initial structure is uncertain. PMID:29370310

  18. The reliability of molecular dynamics simulations of the multidrug transporter P-glycoprotein in a membrane environment.

    PubMed

    Condic-Jurkic, Karmen; Subramanian, Nandhitha; Mark, Alan E; O'Mara, Megan L

    2018-01-01

    Despite decades of research, the mechanism of action of the ABC multidrug transporter P-glycoprotein (P-gp) remains elusive. Due to experimental limitations, many researchers have turned to molecular dynamics simulation studies in order to investigate different aspects of P-gp function. However, such studies are challenging and caution is required when interpreting the results. P-gp is highly flexible and the time scale on which it can be simulated is limited. There is also uncertainty regarding the accuracy of the various crystal structures available, let alone the structure of the protein in a physiologically relevant environment. In this study, three alternative structural models of mouse P-gp (3G5U, 4KSB, 4M1M), all resolved to 3.8 Å, were used to initiate sets of simulations of P-gp in a membrane environment in order to determine: a) the sensitivity of the results to differences in the starting configuration; and b) the extent to which converged results could be expected on the time scales commonly simulated for this system. The simulations suggest that the arrangement of the nucleotide binding domains (NBDs) observed in the crystal structures is not stable in a membrane environment. In all simulations, the NBDs rapidly associated (within 10 ns) and changes within the transmembrane helices were observed. The secondary structure within the transmembrane domain was best preserved in the 4M1M model under the simulation conditions used. However, the extent to which replicate simulations diverged on a 100 to 200 ns timescale meant that it was not possible to draw definitive conclusions as to which structure overall was most stable, or to obtain converged and reliable results for any of the properties examined. The work brings into question the reliability of conclusions made in regard to the nature of specific interactions inferred from previous simulation studies on this system involving similar sampling times. 
It also highlights the need to demonstrate the statistical significance of any results obtained in simulations of large flexible proteins, especially where the initial structure is uncertain.
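    A minimal version of the kind of replicate-consistency check advocated above: block-average an observable from two replicate trajectories and ask whether the means differ by more than the combined statistical uncertainty. The function names, the block count, and the crude two-standard-error criterion are all assumptions; a full analysis would also account for autocorrelation times.

    ```python
    import numpy as np

    def replicate_agreement(series_a, series_b, n_blocks=5):
        """Compare two replicate time series of the same observable.

        Each series is split into n_blocks blocks; the replicates are deemed
        consistent if their overall means differ by no more than twice the
        combined standard error of the block means."""
        def block_means(x):
            return np.array([b.mean() for b in np.array_split(np.asarray(x), n_blocks)])

        ma, mb = block_means(series_a), block_means(series_b)
        se_a = ma.std(ddof=1) / np.sqrt(n_blocks)
        se_b = mb.std(ddof=1) / np.sqrt(n_blocks)
        diff = abs(ma.mean() - mb.mean())
        return diff <= 2.0 * np.hypot(se_a, se_b), diff
    ```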

  19. Motion control of 7-DOF arms - The configuration control approach

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Long, Mark K.; Lee, Thomas S.

    1993-01-01

    Graphics simulation and real-time implementation of configuration control schemes for a redundant 7-DOF Robotics Research arm are described. The arm kinematics and motion control schemes are described briefly. This is followed by a description of a graphics simulation environment for 7-DOF arm control on the Silicon Graphics IRIS Workstation. Computer simulation results are presented to demonstrate elbow control, collision avoidance, and optimal joint movement as redundancy resolution goals. The laboratory setup for experimental validation of motion control of the 7-DOF Robotics Research arm is then described. The configuration control approach is implemented on a Motorola-68020/VME-bus-based real-time controller, with elbow positioning for redundancy resolution. Experimental results demonstrate the efficacy of configuration control for real-time control.

  20. CFD simulation of local and global mixing time in an agitated tank

    NASA Astrophysics Data System (ADS)

    Li, Liangchao; Xu, Bin

    2017-01-01

    The issue of mixing efficiency in agitated tanks has drawn serious concern in many industrial processes. The turbulence model is critical to predicting the mixing process in agitated tanks. On the basis of the computational fluid dynamics (CFD) software package Fluent 6.2, the mixing characteristics in a tank agitated by dual six-blade Rushton turbines (6-DT) are predicted using the detached eddy simulation (DES) method. A sliding mesh (SM) approach is adopted to resolve the rotation of the impeller. The simulated flow patterns and liquid velocities in the agitated tank are verified against experimental data in the literature. The simulation results indicate that the DES method can capture more flow details than Reynolds-averaged Navier-Stokes (RANS) models. Local and global mixing times in the agitated tank are predicted by solving a tracer concentration scalar transport equation. The simulated results show that feeding points have great influence on the mixing process and mixing time. Mixing efficiency is highest for the feeding point located midway between the two impellers. Two methods are used to determine the global mixing time and give closely matching results. Dimensionless global mixing time remains unchanged with increasing impeller speed. Parallel, merging, and diverging flow patterns form in the agitated tank, respectively, as the impeller spacing and the clearance of the lower impeller from the tank bottom are changed. The global mixing time is shortest for the merging flow, followed by the diverging flow, and longest for the parallel flow. The research presents a helpful reference for the design, optimization, and scale-up of agitated tanks with multiple impellers.
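    A common operational definition of mixing time, plausibly close to what is used above although the abstract does not spell it out, is the time after which the tracer concentration at a monitoring point stays within ±5% of its final (fully mixed) value. A minimal sketch of that read-off from a concentration time series:

    ```python
    import numpy as np

    def mixing_time(t, c, band=0.05):
        """Time after which tracer concentration c(t) stays within +/- band
        (fractional) of its final value. t and c are equal-length 1-D arrays."""
        c = np.asarray(c, dtype=float)
        c_final = c[-1]
        outside = np.abs(c - c_final) > band * abs(c_final)
        if not outside.any():
            return t[0]  # already mixed from the start
        # The final sample is never "outside" (it equals c_final), so the
        # index after the last excursion always exists.
        last_outside = np.nonzero(outside)[0][-1]
        return t[last_outside + 1]
    ```

    For a first-order approach c(t) = 1 - exp(-t), the 5% mixing time is ln(20), i.e. about 3 time units, which the function recovers.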

  1. Real-Time Model and Simulation Architecture for Half- and Full-Bridge Modular Multilevel Converters

    NASA Astrophysics Data System (ADS)

    Ashourloo, Mojtaba

    This work presents an equivalent model and simulation architecture for real-time electromagnetic transient analysis of either half-bridge or full-bridge modular multilevel converters (MMCs) with 400 sub-modules (SMs) per arm. The proposed CPU/FPGA-based architecture is optimized for parallel implementation of the presented MMC model on the FPGA and benefits from a high-throughput floating-point computational engine. The developed real-time simulation architecture is capable of simulating MMCs with 400 SMs per arm at a time step of 825 nanoseconds. To address the difficulties of implementing the sorting process, a modified odd-even Bubble sort is presented in this work. Comparison of the results under various test scenarios reveals that the proposed real-time simulator reproduces the system responses of its corresponding off-line counterpart obtained from the PSCAD/EMTDC program.
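    Odd-even (Bubble) sorting is attractive in hardware because each pass touches disjoint index pairs, so all compare-exchange operations within a pass can run in parallel on an FPGA. The sketch below is the textbook odd-even transposition sort, written sequentially for clarity; it is not the modified variant developed in this work.

    ```python
    def odd_even_sort(values):
        """Odd-even transposition sort. Alternates passes over even-indexed
        and odd-indexed adjacent pairs; within a pass the compare-exchanges
        are independent and could execute simultaneously in hardware."""
        a = list(values)
        n = len(a)
        swapped = True
        while swapped:
            swapped = False
            for start in (0, 1):                 # even pass, then odd pass
                for i in range(start, n - 1, 2):
                    if a[i] > a[i + 1]:
                        a[i], a[i + 1] = a[i + 1], a[i]
                        swapped = True
        return a
    ```

    In an MMC capacitor-voltage balancing loop, such a network sorts the per-SM voltages each control period so that SMs can be inserted or bypassed in order.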

  2. Finite Element Methods for real-time Haptic Feedback of Soft-Tissue Models in Virtual Reality Simulators

    NASA Technical Reports Server (NTRS)

    Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)

    2001-01-01

    We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), haptic real time computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused in the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real time computations, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.

  3. Cellular Particle Dynamics simulation of biomechanical relaxation processes of multi-cellular systems

    NASA Astrophysics Data System (ADS)

    McCune, Matthew; Kosztin, Ioan

    2013-03-01

    Cellular Particle Dynamics (CPD) is a theoretical-computational-experimental framework for describing and predicting the time evolution of biomechanical relaxation processes of multi-cellular systems, such as fusion, sorting and compression. In CPD, cells are modeled as an ensemble of cellular particles (CPs) that interact via short range contact interactions, characterized by an attractive (adhesive interaction) and a repulsive (excluded volume interaction) component. The time evolution of the spatial conformation of the multicellular system is determined by following the trajectories of all CPs through numerical integration of their equations of motion. Here we present CPD simulation results for the fusion of both spherical and cylindrical multi-cellular aggregates. First, we calibrate the relevant CPD model parameters for a given cell type by comparing the CPD simulation results for the fusion of two spherical aggregates to the corresponding experimental results. Next, CPD simulations are used to predict the time evolution of the fusion of cylindrical aggregates. The latter is relevant for the formation of tubular multi-cellular structures (i.e., primitive blood vessels) created by the novel bioprinting technology. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.
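    The CP-CP interaction described above is short-ranged, with a repulsive (excluded-volume) core and an attractive (adhesive) tail. As an illustration only, the sketch below uses a truncated Lennard-Jones form, which has that qualitative shape; the actual CPD potential, parameters, and (overdamped) dynamics used by the authors may differ.

    ```python
    import numpy as np

    def cp_force(r_vec, sigma=1.0, eps=1.0, r_cut=2.5):
        """Force on a cellular particle from a neighbour at separation r_vec:
        repulsive below ~1.12*sigma (excluded volume), attractive out to the
        cutoff (adhesion). Lennard-Jones form used purely for illustration."""
        r = np.linalg.norm(r_vec)
        if r >= r_cut or r == 0.0:
            return np.zeros_like(r_vec)
        sr6 = (sigma / r) ** 6
        # -dU/dr for U = 4*eps*[(sigma/r)^12 - (sigma/r)^6]; positive = repulsive.
        f_mag = 24.0 * eps * (2.0 * sr6**2 - sr6) / r
        return f_mag * r_vec / r
    ```

    Summing such pair forces over all CPs and integrating the equations of motion numerically gives the trajectory-following step the abstract describes.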

  4. Distributed Observer Network (DON), Version 3.0, User's Guide

    NASA Technical Reports Server (NTRS)

    Mazzone, Rebecca A.; Conroy, Michael P.

    2015-01-01

    The Distributed Observer Network (DON) is a data presentation tool developed by the National Aeronautics and Space Administration (NASA) to distribute and publish simulation results. Leveraging the display capabilities inherent in modern gaming technology, DON places users in a fully navigable 3-D environment containing graphical models and allows the users to observe how those models evolve and interact over time in a given scenario. Each scenario is driven with data that has been generated by authoritative NASA simulation tools and exported in accordance with a published data interface specification. This decoupling of the data from the source tool enables DON to faithfully display a simulator's results and ensure that every simulation stakeholder will view the exact same information every time.

  5. Piloted simulator study of allowable time delay in pitch flight control system of a transport airplane with negative static stability

    NASA Technical Reports Server (NTRS)

    Grantham, William D.; Smith, Paul M.; Person, Lee H., Jr.; Meyer, Robert T.; Tingas, Stephen A.

    1987-01-01

    A piloted simulation study was conducted to determine the permissible time delay in the flight control system of a 10-percent statically unstable transport airplane during cruise flight conditions. The math model used for the simulation was a derivative Lockheed L-1011 wide-body jet transport. Data were collected and analyzed from a total of 137 cruising flights in both calm- and turbulent-air conditions. Results of this piloted simulation study verify previous findings that show present military specifications for allowable control-system time delay may be too stringent when applied to transport-size airplanes. Also, the degree of handling-qualities degradation due to time delay is shown to be strongly dependent on the source of the time delay in an advanced flight control system. Maximum allowable time delay for each source of time delay in the control system, in addition to a less stringent overall maximum level of time delay, should be considered for large aircraft. Preliminary results also suggest that adverse effects of control-system time delay may be at least partially offset by variations in control gearing. It is recommended that the data base include different airplane baselines, control systems, and piloting tasks with many pilots participating, so that a reasonable set of limits for control-system time delay can be established to replace the military specification limits currently being used.

  6. Image formation simulation for computer-aided inspection planning of machine vision systems

    NASA Astrophysics Data System (ADS)

    Irgenfried, Stephan; Bergmann, Stephan; Mohammadikaji, Mahsa; Beyerer, Jürgen; Dachsbacher, Carsten; Wörn, Heinz

    2017-06-01

    In this work, a simulation toolset for Computer Aided Inspection Planning (CAIP) of systems for automated optical inspection (AOI) is presented along with a versatile two-robot-setup for verification of simulation and system planning results. The toolset helps to narrow down the large design space of optical inspection systems in interaction with a system expert. The image formation taking place in optical inspection systems is simulated using GPU-based real time graphics and high quality off-line-rendering. The simulation pipeline allows a stepwise optimization of the system, from fast evaluation of surface patch visibility based on real time graphics up to evaluation of image processing results based on off-line global illumination calculation. A focus of this work is on the dependency of simulation quality on measuring, modeling and parameterizing the optical surface properties of the object to be inspected. The applicability to real world problems is demonstrated by taking the example of planning a 3D laser scanner application. Qualitative and quantitative comparison results of synthetic and real images are presented.

  7. CRITTERS! A Realistic Simulation for Teaching Evolutionary Biology

    ERIC Educational Resources Information Center

    Latham, Luke G., II; Scully, Erik P.

    2008-01-01

    Evolutionary processes can be studied in nature and in the laboratory, but time and financial constraints result in few opportunities for undergraduate and high school students to explore the agents of genetic change in populations. One alternative to time consuming and expensive teaching laboratories is the use of computer simulations. We…

  8. Comparative hybrid and digital simulation studies of the behaviour of a wind generator equipped with a static frequency converter

    NASA Astrophysics Data System (ADS)

    Dube, B.; Lefebvre, S.; Perocheau, A.; Nakra, H. L.

    1988-01-01

    This paper describes the comparative results obtained from digital and hybrid simulation studies on a variable speed wind generator interconnected to the utility grid. The wind generator is a vertical-axis Darrieus type coupled to a synchronous machine by a gear-box; the synchronous machine is connected to the AC utility grid through a static frequency converter. Digital simulation results have been obtained using CSMP software; these results are compared with those obtained from a real-time hybrid simulator that in turn uses a part of the IREQ HVDC simulator. The agreement between hybrid and digital simulation results is generally good. The results demonstrate that the digital simulation reproduces the dynamic behavior of the system in a satisfactory manner and thus constitutes a valid tool for the design of the control systems of the wind generator.

  9. Estimation of Initial and Response Times of Laser Dew-Point Hygrometer by Measurement Simulation

    NASA Astrophysics Data System (ADS)

    Matsumoto, Sigeaki; Toyooka, Satoru

    1995-10-01

    The initial and the response times of the laser dew-point hygrometer were evaluated by measurement simulation. The simulation was based on loop computations of the surface temperature of a plate with dew deposition, the quantity of dew deposited, and the intensity of scattered light from the surface at each short interval of measurement. The initial time was defined as the time necessary for the hygrometer to reach a temperature within ±0.5°C of the measured dew point from the start of measurement, and the response time was defined analogously for stepwise dew-point changes of +5°C and -5°C. The simulation results are in approximate agreement with the recorded temperature and intensity of scattered light of the hygrometer. The evaluated initial time ranged from 0.3 min to 5 min in the temperature range from 0°C to 60°C, and the response time was evaluated to be from 0.2 min to 3 min.
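    The initial-time definition above can be caricatured with a toy model in which the mirror temperature relaxes exponentially toward the dew point, and the initial time is read off as the first instant from which the temperature stays within ±0.5°C. Both functions below are illustrative only; the relaxation model, time constant, and names are assumptions, not the paper's loop computation.

    ```python
    import math

    def simulate_mirror(t_start, dew_point, tau=1.0, dt=0.05, t_end=10.0):
        """Toy mirror-temperature model: exponential relaxation toward the
        dew point with time constant tau. Returns (times, temps)."""
        times, temps = [], []
        t = 0.0
        while t <= t_end:
            times.append(t)
            temps.append(dew_point + (t_start - dew_point) * math.exp(-t / tau))
            t += dt
        return times, temps

    def initial_time(temps, times, dew_point, tol=0.5):
        """First time at which the temperature settles (and stays) within
        +/- tol degrees of the dew point; None if it never settles."""
        for i in range(len(temps)):
            if all(abs(T - dew_point) <= tol for T in temps[i:]):
                return times[i]
        return None
    ```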

  10. Prediction system of the 1-AU arrival times of CME-associated interplanetary shocks using three-dimensional simulations

    NASA Astrophysics Data System (ADS)

    den, Mitsue; Amo, Hiroyoshi; Sugihara, Kohta; Takei, Toshifumi; Ogawa, Tomoya; Tanaka, Takashi; Watari, Shinichi

    We describe a prediction system for the 1-AU arrival times of interplanetary shock waves associated with coronal mass ejections (CMEs). The system is based on modeling of the shock propagation using a three-dimensional adaptive mesh refinement (AMR) code. Once a CME is observed by LASCO/SOHO, the ambient solar wind is first obtained by a numerical simulation that reproduces the solar wind parameters observed at that time by the ACE spacecraft. We then input the expansion speed and occurrence position of that CME as initial conditions for a CME model, and a 3D simulation of the CME and shock propagation is performed until the shock wave passes 1 AU. Parameter input, simulation execution, and output of results are all available via the Web, so that a person who is unfamiliar with computers or simulations, or who is not a researcher, can use this system to predict the shock passage time. The simulated CME and shock evolution are visualized concurrently with the simulation, and snapshots appear on the Web automatically so that users can follow the propagation. This system is expected to be useful for space weather forecasters. We describe the system and simulation model in detail.

  11. Tidal Signals In GOCE Measurements And Time-GCM

    NASA Astrophysics Data System (ADS)

    Hausler, K.; Hagan, M. E.; Lu, G.; Doornbos, E.; Bruinsma, S.; Forbes, J. M.

    2013-12-01

    In this paper we investigate tidal signatures in GOCE measurements during 15-24 November 2009 and complementary simulations with the Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model (TIME-GCM). The TIME-GCM simulations are driven by inputs that represent the prevailing solar and geomagnetic conditions along with tidal and planetary waves applied at the lower boundary (ca. 30 km). For this pilot study, the resultant TIME-GCM densities are analyzed in two ways: 1) we use results along the GOCE orbital track, to calculate ascending/descending orbit longitude-latitude density difference and sum maps for direct comparison with the GOCE diagnostics, and 2) we conduct a complete analysis of TIME-GCM results to unambiguously characterize the simulated atmospheric tides and to attribute the observed longitude variations to specific tidal components. TIME-GCM captures some but not all of the observed longitudinal variability. The good data-model agreement for wave-2, wave-3, and wave-4 suggests that thermospheric impacts can be attributed to the DE1, DE2, DE3, S0, SE1, and SE2 tides. Discrepancies between TIME-GCM and GOCE results are most prominent in the wave-1 variations, and suggest that further refinement of the lower boundary forcing is necessary before we extend our analysis and interpretation to densities associated with the remainder of the GOCE mission.

  12. Verification of Reproduction Simulation of the 2011 Great East Japan Tsunami Using Time-Stamp Data

    NASA Astrophysics Data System (ADS)

    Honma, Motohiro; Ushiyama, Motoyuki

    2014-05-01

    In the 2011 off the Pacific coast of Tohoku earthquake, significant damage and loss of life were caused by the large tsunami along the Pacific coastal areas of northern Japan. It is important to understand the tsunami inundation in detail in order to establish effective disaster-prevention measures. In this study, we ran a detailed tsunami inundation simulation of Rikuzentakata city and verified the simulation results using not only static observations, such as the inundation area and tsunami heights estimated from traces, but also time-stamp data recorded by digital cameras and similar devices. We calculated the tsunami simulation with non-linear long-wave theory using a staggered grid and a leap-frog scheme, with Fujii and Satake (2011)'s model ver.4.2 as the tsunami source. The inundation model of Rikuzentakata city was constructed from fine, 10-m-mesh ground level data. In this simulation, the shore and river banks were set at calculation-mesh boundaries, and two conditions were calculated: one in which a bank does not collapse even if the tsunami overflows it, and another in which a bank collapses if the tsunami overflows it and the discharge exceeds a threshold. As "static" verification data we used the inundation area obtained by the Geospatial Information Authority of Japan (GSI) and the tsunami trace heights obtained by the 2011 Tohoku Earthquake Tsunami Joint Survey (TTJS) group. The simulated inundation area matches the GSI observation very well, and the correlation coefficient between simulated and observed (TTJS) tsunami heights is 0.756. 
To verify the tsunami arrival time, we used the time-stamp data recorded by citizens; Ushiyama and Yokomaku (2012) collected these data and estimated the arrival times in Rikuzentakata city. We compared the arrival times from the tsunami simulation with those estimated by Ushiyama and Yokomaku (2012) at several major points. The arrival time is 2-4 minutes earlier under the condition that a bank collapses when the tsunami overflows it and the discharge exceeds 0.05 m2/s at each mesh boundary than under the condition that a bank does not collapse. On the whole, the arrival times estimated from the time-stamp data accord with the results calculated under the bank-collapse condition. Using the time-stamp data, we could thus verify the reproducibility of not only the final tsunami inundation but also its temporal evolution. Acknowledgement: In this study, we used tsunami trace data obtained by the 2011 Tohoku Earthquake Tsunami Joint Survey (TTJS) Group. References: 1) Fujii and Satake: Tsunami Source of the Off Tohoku-Pacific Earthquake on March 11, 2011, http://iisee.kenken.go.jp/staff/fujii/OffTohokuPacific2011/tsunami_ja_ver4.2and4.6.html, 2011. 2) Ushiyama and Yokomaku: Estimation of the situation in Rikuzentakata city just before the tsunami attack based on time stamp data, J.JSNDS 31-1, pp.47-58, 2012.
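    The numerical core named in the record above, long-wave equations on a staggered grid with a leap-frog scheme, can be illustrated in linearized 1D form: free-surface height at cell centres, velocity at cell faces, updated alternately. This is a schematic of the method class, not the authors' code; the depth, grid spacing, time step, and reflective boundaries are assumptions, and the real simulation is 2D and non-linear with inundation handling.

    ```python
    import numpy as np

    def shallow_water_1d(eta0, h=100.0, g=9.81, dx=1000.0, dt=1.0, n_steps=200):
        """Linear long-wave solver on a staggered grid: surface elevation eta
        at cell centres, velocity u at cell faces, updated alternately
        (leap-frog in the sense of staggered, alternating updates)."""
        eta = np.asarray(eta0, dtype=float).copy()
        u = np.zeros(len(eta) + 1)      # u = 0 at both ends: reflective walls
        for _ in range(n_steps):
            # momentum: du/dt = -g * d(eta)/dx, interior faces only
            u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
            # continuity: d(eta)/dt = -h * du/dx
            eta -= h * dt / dx * (u[1:] - u[:-1])
        return eta, u
    ```

    With these parameters the Courant number is sqrt(g*h)*dt/dx ≈ 0.03, comfortably inside the stability limit, and the closed boundaries conserve the total surface elevation exactly.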

  13. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept under normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating that the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on the FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human-in-the-Loop (HITL) studies of SATS HVO and baseline operations.
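    FTE analyses of this kind reduce, per channel (lateral, vertical, airspeed), to deviation statistics between the flown track and the reference procedure. A minimal sketch with assumed function name and metrics (RMS and 95th-percentile deviation); the paper's exact statistics may differ.

    ```python
    import numpy as np

    def flight_technical_error(actual, reference):
        """RMS and 95th-percentile absolute deviation of the flown values from
        the reference procedure, for one channel (e.g. lateral offset in m)."""
        dev = np.abs(np.asarray(actual, float) - np.asarray(reference, float))
        rms = np.sqrt((dev ** 2).mean())
        p95 = np.percentile(dev, 95)
        return rms, p95
    ```

    Applied separately to each channel for each subject, such statistics support the baseline-versus-HVO comparison described above.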

  14. Multiscale optical simulation settings: challenging applications handled with an iterative ray-tracing FDTD interface method.

    PubMed

    Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian

    2016-03-20

    We show that with an appropriate combination of two optical simulation techniques, classical ray-tracing and the finite difference time domain (FDTD) method, an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.

  15. Phosphorus Adsorption and Desorption Properties of Minnesota Basalt Lunar Simulant and Lunar Glass Simulant

    NASA Technical Reports Server (NTRS)

    Sutter, Brad; Hossner, Lloyd R.; Ming, Douglas W.

    1996-01-01

    Phosphorus (P) adsorption and desorption characteristics of Minnesota Basalt Lunar Simulant (MBLS) and Lunar Glass Simulant (LGS) were evaluated. Results of P interactions with lunar simulants indicated that mineral and glass components adsorbed between 50 and 70% of the applied P and that between 85 and 100% of the applied P was desorbed. The Extended Freundlich equation best described the adsorption data (r^2 = 0.92), whereas the Raven/Hossner equation best described the desorption data (r^2 = 0.97). Kinetic desorption results indicated that MBLS and LGS released most of their P within 15 h. The expanded Elovich equation fit the data best at shorter times, while the t/Q_DT equation had a better fit at longer times. These results indicate that P does not strongly adsorb to the two simulants and that any P that was adsorbed was readily desorbed in the presence of anion exchange resin. This work suggests that multiple small applications of P (10-20 mg P/kg) should be added to the simulants to ensure adequate solution P for plant uptake and efficient use of P fertilizer.
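    The paper's Extended Freundlich and Raven/Hossner fits are not reproduced here, but the standard Freundlich isotherm in its linearized form illustrates how such adsorption parameters are typically estimated. A minimal sketch, with invented data values (the concentrations and sorbed amounts below are not the paper's measurements):

```python
import numpy as np

# Illustrative (synthetic) adsorption data: equilibrium P concentration C
# and adsorbed amount Q.  These values are invented for demonstration.
C = np.array([1.0, 5.0, 10.0, 20.0, 40.0])   # mg P / L
Q = np.array([2.1, 6.0, 9.2, 13.5, 19.8])    # mg P / kg simulant

# Linearized standard Freundlich isotherm: log Q = log K + (1/n) log C
slope, intercept = np.polyfit(np.log(C), np.log(Q), 1)
K, n_inv = np.exp(intercept), slope

# Goodness of fit (r^2) on the log-log scale
resid = np.log(Q) - (intercept + slope * np.log(C))
r2 = 1.0 - np.sum(resid**2) / np.sum((np.log(Q) - np.log(Q).mean())**2)
```

    A fitted slope 1/n below 1 indicates the favorable, saturating adsorption behaviour typical of such isotherms.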

  16. BEM-based simulation of lung respiratory deformation for CT-guided biopsy.

    PubMed

    Chen, Dong; Chen, Weisheng; Huang, Lipeng; Feng, Xuegang; Peters, Terry; Gu, Lixu

    2017-09-01

    Accurate and real-time prediction of lung and lung tumor deformation during respiration is an important consideration when performing a peripheral biopsy procedure. However, most existing work has focused on offline whole-lung simulation using 4D image data, which is not applicable to real-time image-guided biopsy with limited image resources. In this paper, we propose a patient-specific biomechanical model based on the boundary element method (BEM), computed from CT images, to estimate the respiratory motion of the local target lesion region, vessel tree and lung surface for real-time biopsy guidance. This approach pre-computes various BEM parameters to meet the requirement for real-time lung motion simulation. The boundary condition at the end-inspiratory phase is obtained using a nonparametric discrete registration with convex optimization, and the simulation of the internal tissue is achieved by applying a tetrahedron-based interpolation method depending on expert-determined feature points on the vessel tree model. A reference needle is tracked to update the simulated lung motion during biopsy guidance. We evaluate the model by applying it to respiratory motion estimation for ten patients. The average symmetric surface distance (ASSD) and the mean target registration error (TRE) are employed to evaluate the proposed model. Results reveal that it is possible to predict the lung motion with an ASSD of [Formula: see text] mm and a mean TRE of [Formula: see text] mm at most over the entire respiratory cycle. In the CT-/electromagnetic-guided biopsy experiment, the whole process was assisted by our BEM model, and the final puncture errors in two studies were 3.1 and 2.0 mm, respectively. The experimental results reveal that both the accuracy of the simulation and the real-time performance meet the demands of clinical biopsy guidance.
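    Tetrahedron-based interpolation of internal motion from surrounding feature points can be sketched with standard barycentric weights; the function names and data below are illustrative, not the authors' implementation:

```python
import numpy as np

def barycentric_weights(p, tet):
    """Barycentric coordinates of point p inside a tetrahedron (4x3 vertices)."""
    # Columns of T are the three edge vectors from vertex 0.
    T = np.column_stack((tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]))
    lam = np.linalg.solve(T, p - tet[0])
    return np.concatenate(([1.0 - lam.sum()], lam))

def interpolate_displacement(p, tet, vertex_disp):
    """Interpolate vertex displacements (4x3) to an interior point p."""
    return barycentric_weights(p, tet) @ vertex_disp

# Unit tetrahedron and illustrative vertex displacements
tet = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
disp = np.array([[0., 0., 1.], [0., 0., 2.], [0., 0., 3.], [0., 0., 4.]])
centroid = tet.mean(axis=0)
d = interpolate_displacement(centroid, tet, disp)
```

    At the centroid the four weights are equal, so the interpolated displacement is simply the average of the vertex displacements.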

  17. Simulation of Thermographic Responses of Delaminations in Composites with Quadrupole Method

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.; Cramer, K. Elliott

    2016-01-01

    The application of the quadrupole method for simulating the thermal responses of delaminations in carbon fiber reinforced epoxy composite materials is presented. The method solves for the flux at the interface containing the delamination. From the interface flux, the temperature at the surface is calculated. While the results presented are for single-sided measurements with flash heating, extension of the technique to arbitrary temporal flux heating or through-transmission measurements is simple. The quadrupole method is shown to have two distinct advantages relative to finite element or finite difference techniques. First, it is straightforward to incorporate arbitrarily shaped delaminations into the simulation. Second, the quadrupole method enables calculation of the thermal response at only the times of interest. This, combined with a significant reduction in the number of degrees of freedom for the same simulation quality, reduces the computation time by at least an order of magnitude. It is therefore a more viable technique for model-based inversion of thermographic data. Results of simulations of delaminations in composites are presented and compared to measurements and finite element method results.
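    For reference, the quadrupole formalism in its standard one-dimensional form (the paper's interface-flux formulation for delaminations is not reproduced here) relates the Laplace-domain temperature and flux on the two faces of a homogeneous slab of thickness e, conductivity lambda and diffusivity a:

```latex
\[
\begin{pmatrix} \theta_1 \\ \Phi_1 \end{pmatrix}
=
\begin{pmatrix}
\cosh(qe) & \dfrac{\sinh(qe)}{\lambda q} \\
\lambda q \,\sinh(qe) & \cosh(qe)
\end{pmatrix}
\begin{pmatrix} \theta_2 \\ \Phi_2 \end{pmatrix},
\qquad q = \sqrt{p/a}.
\]
```

    Multilayer stacks are handled by multiplying such matrices, which is one reason the method needs far fewer degrees of freedom than volume discretizations.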

  18. Implementation of 3D spatial indexing and compression in a large-scale molecular dynamics simulation database for rapid atomic contact detection.

    PubMed

    Toofanny, Rudesh D; Simms, Andrew M; Beck, David A C; Daggett, Valerie

    2011-08-10

    Molecular dynamics (MD) simulations offer the ability to observe the dynamics and interactions of both whole macromolecules and individual atoms as a function of time. Taken in context with experimental data, atomic interactions from simulation provide insight into the mechanics of protein folding, dynamics, and function. The calculation of atomic interactions or contacts from an MD trajectory is computationally demanding, and the work required grows rapidly with the size of the simulation system. We describe the implementation of a spatial indexing algorithm in our multi-terabyte MD simulation database that significantly reduces the run time required for discovery of contacts. The approach is applied to the Dynameomics project data. Spatial indexing, also known as spatial hashing, is a method that divides the simulation space into regularly sized bins and attributes an index to each bin. Since the calculation of contacts is widely employed in the simulation field, we also use this as the basis for testing compression of data tables. We investigate the effects of compression of the trajectory coordinate tables with different options of data and index compression within MS SQL SERVER 2008. Our implementation of spatial indexing speeds up the calculation of contacts over a 1 nanosecond (ns) simulation window by between 14% and 90% (i.e., 1.2 and 10.3 times faster). For a 'full' simulation trajectory (51 ns), spatial indexing reduces the calculation run time by between 31 and 81% (between 1.4 and 5.3 times faster). Compression reduced table sizes but made no significant difference in the total execution time for neighbour discovery. The greatest compression (~36%) was achieved using page-level compression on both the data and indexes. The spatial indexing scheme significantly decreases the time taken to calculate atomic contacts and could be applied to other multidimensional neighbour discovery problems.
    The speed-up enables on-the-fly calculation and visualization of contacts and rapid cross-simulation analysis for knowledge discovery. Using page compression for the atomic coordinate tables and indexes saves ~36% of disk space without any significant decrease in calculation time and should be considered for other non-transactional databases in MS SQL SERVER 2008.
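    The spatial hashing idea described above can be sketched generically in a few lines; this is a plain cell-list implementation in Python (not the authors' MS SQL SERVER code), with bin size equal to the contact cutoff so that only the 27 neighbouring bins need to be searched:

```python
import numpy as np
from collections import defaultdict
from itertools import product

def find_contacts(coords, cutoff):
    """Find all point pairs within `cutoff` using spatial hashing (cell lists)."""
    # Hash each point into an integer 3D bin of edge length `cutoff`.
    bins = defaultdict(list)
    keys = np.floor(coords / cutoff).astype(int)
    for i, k in enumerate(map(tuple, keys)):
        bins[k].append(i)
    contacts = set()
    for i, k in enumerate(map(tuple, keys)):
        # Only the 27 neighbouring bins can contain points within the cutoff.
        for off in product((-1, 0, 1), repeat=3):
            for j in bins.get((k[0] + off[0], k[1] + off[1], k[2] + off[2]), ()):
                if j > i and np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                    contacts.add((i, j))
    return contacts
```

    A brute-force pairwise scan over all points yields the same contact set and serves as a correctness check; the hashed version avoids the quadratic scan by visiting only nearby bins.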

  19. Upper Atmospheric Response to the April 2010 Storm as Observed by GOCE, CHAMP, and GRACE and Modeled by TIME-GCM

    NASA Astrophysics Data System (ADS)

    Hagan, Maura; Häusler, Kathrin; Lu, Gang; Forbes, Jeffrey; Zhang, Xiaoli; Doornbos, Eelco; Bruinsma, Sean

    2014-05-01

    We present the results of an investigation of the upper atmosphere during April 2010, when it was disturbed by a fast-moving coronal mass ejection. Our study is based on a comparative analysis of observations made by the Gravity field and steady-state Ocean Circulation Explorer (GOCE), Challenging Minisatellite Payload (CHAMP), and Gravity Recovery And Climate Experiment (GRACE) satellites and a set of simulations with the National Center for Atmospheric Research (NCAR) thermosphere-ionosphere-mesosphere-electrodynamics general circulation model (TIME-GCM). We compare and contrast the satellite observations with TIME-GCM results from a realistic simulation based on prevailing meteorological and solar geomagnetic conditions. We assess the comparative importance of upper atmospheric signatures attributable to meteorological forcing and those attributable to storm effects by diagnosing a series of complementary control TIME-GCM simulations. These results also quantify the extent to which lower and middle atmospheric sources of upper atmospheric variability precondition its response to the solar geomagnetic storm.

  20. Capabilities of stochastic rainfall models as data providers for urban hydrology

    NASA Astrophysics Data System (ADS)

    Haberlandt, Uwe

    2017-04-01

    For the planning of urban drainage systems using hydrological models, long, continuous precipitation series with high temporal resolution are needed. Since observed time series are often too short or not available everywhere, the use of synthetic precipitation is a common alternative. This contribution compares three precipitation models regarding their suitability to provide 5-minute continuous rainfall time series for a) sizing of drainage networks for urban flood protection and b) dimensioning of combined sewage systems for pollution reduction. The rainfall models are a parametric stochastic model (Haberlandt et al., 2008), a non-parametric probabilistic approach (Bárdossy, 1998) and a stochastic downscaling of dynamically simulated rainfall (Berg et al., 2013); all models are operated both as single-site and multi-site generators. The models are applied with regionalised parameters, assuming that there is no station at the target location. Rainfall and discharge characteristics are utilised to evaluate model performance. The simulation results are compared against results obtained from reference rainfall stations not used for parameter estimation. The rainfall simulations are carried out for the federal states of Baden-Württemberg and Lower Saxony in Germany, and the discharge simulations for the drainage networks of the cities of Hamburg, Brunswick and Freiburg. Altogether, the results show comparable simulation performance for the three models, with good capabilities for single-site simulations but low skill for multi-site simulations. Remarkably, there is no significant difference in simulation performance between the flood protection and pollution reduction tasks, so the models are able to simulate both the extremes and the long-term characteristics of rainfall equally well. Bárdossy, A., 1998. Generating precipitation time series using simulated annealing. Wat. Resour. Res., 34(7): 1737-1744.
Berg, P., Wagner, S., Kunstmann, H., Schädler, G., 2013. High resolution regional climate model simulations for Germany: part I — validation. Climate Dynamics, 40(1): 401-414. Haberlandt, U., Ebner von Eschenbach, A.-D., Buchwald, I., 2008. A space-time hybrid hourly rainfall model for derived flood frequency analysis. Hydrol. Earth Syst. Sci., 12: 1353-1367.

  1. CFD simulation of a screw compressor including leakage flows and rotor heating

    NASA Astrophysics Data System (ADS)

    Spille-Kohoff, Andreas, Dr.; Hesse, Jan; El Shorbagy, Ahmed

    2015-08-01

    Computational Fluid Dynamics (CFD) simulations have promising potential to become an important part of the development process of positive displacement (PD) machines. CFD delivers deep insights into the flow and thermodynamic behaviour of PD machines. However, the numerical simulation of such machines is more complex than for dynamic machines like turbines or fans. The fluid transport in size-changing chambers with very small clearances between the rotors, and between rotors and casing, demands complex meshes that change with each time step. Additionally, resolving the losses due to leakage flows and the heat transfer to the rotors requires high-quality meshes, so that automatic remeshing is almost impossible. In this paper, setup steps and results for the simulation of a dry screw compressor are shown. The rotating parts are meshed with TwinMesh, a special hexahedral meshing program for gear pumps, gerotors, lobe pumps and screw compressors. In particular, these meshes include axial and radial clearances between housing and rotors, and besides the fluid volume, the rotor solids are also meshed. The CFD simulation accounts for gas flow with compressibility and turbulence effects, heat transfer between gas and rotors, and leakage flows through the clearances. We show time-resolved results for torques, forces, interlobe pressure, mass flow, and heat flow between gas and rotors, as well as time- and space-resolved results for pressure, velocity, temperature etc. for different discharge ports and working points of the screw compressor. These results are also used as thermal loads for deformation simulations of the rotors.

  2. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. Finally, we illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical for heterogeneous chemical reactions.

  3. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE PAGES

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; ...

    2016-01-28

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. Finally, we illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical for heterogeneous chemical reactions.
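    The paper's wavelet-based encoding is not reproduced here, but the surrogate idea itself can be illustrated with a simpler Fourier phase-randomized surrogate, which likewise preserves second-order statistics (power spectrum and hence autocorrelation) while randomizing the detailed time course:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
t = np.arange(n)
# A toy "fine-scale" signal: a periodic surface-coverage oscillation plus noise
x = np.sin(2 * np.pi * t / 32) + 0.3 * rng.standard_normal(n)

# Phase-randomized surrogate: keep the amplitude spectrum, randomize the phases.
X = np.fft.rfft(x)
phases = rng.uniform(0, 2 * np.pi, X.size)
phases[0] = 0.0    # keep the DC component real
phases[-1] = 0.0   # keep the Nyquist bin real (n is even)
X_surr = np.abs(X) * np.exp(1j * phases)
surrogate = np.fft.irfft(X_surr, n)
```

    The surrogate is statistically equivalent to the original at second order but follows a different realization, which is the property that lets it stand in for the fine-scale dynamics in a coupled continuum simulation.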

  4. Spatial heterogeneity in the timing of birch budburst in response to future climate warming in Ireland

    NASA Astrophysics Data System (ADS)

    Caffarra, Amelia; Zottele, Fabio; Gleeson, Emily; Donnelly, Alison

    2014-05-01

    In order to predict the impact of future climate warming on trees it is important to quantify the effect climate has on their development. Our understanding of the phenological response to environmental drivers has given rise to various mathematical models of the annual growth cycle of plants. These models simulate the timing of phenophases by quantifying the relationship between development and its triggers, typically temperature. In addition, other environmental variables have an important role in determining the timing of budburst. For example, photoperiod has been shown to have a strong influence on phenological events of a number of tree species, including Betula pubescens (birch). A recently developed model for birch (DORMPHOT), which integrates the effects of temperature and photoperiod on budburst, was applied to future temperature projections from a 19-member ensemble of regional climate simulations (on a 25 km grid) generated as part of the ENSEMBLES project, to simulate the timing of birch budburst in Ireland each year up to the end of the present century. Gridded temperature time series data from the climate simulations were used as input to the DORMPHOT model to simulate future budburst timing. The results showed an advancing trend in the timing of birch budburst over most regions in Ireland up to 2100. Interestingly, this trend appeared greater in the northeast of the country than in the southwest, where budburst is currently relatively early. These results could have implications for future forest planning, species distribution modeling, and the birch allergy season.
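    The DORMPHOT model combines dormancy, photoperiod and forcing responses and is not reproduced here; a minimal thermal-time sketch nevertheless shows the mechanism by which warming advances simulated budburst (the parameter values and temperature series below are invented for illustration):

```python
import numpy as np

def budburst_day(daily_temp, t_base=5.0, f_crit=150.0, start_day=0):
    """Day when accumulated forcing (degree-days above t_base, summed from
    start_day) first reaches the critical threshold f_crit; None if never."""
    forcing = np.maximum(daily_temp[start_day:] - t_base, 0.0)
    cum = np.cumsum(forcing)
    idx = np.argmax(cum >= f_crit)
    if cum[idx] < f_crit:
        return None
    return start_day + idx

# Synthetic annual temperature cycle (deg C), coldest at day 0
days = np.arange(365)
temp = 10.0 - 8.0 * np.cos(2 * np.pi * days / 365)
d_now = budburst_day(temp)
d_warm = budburst_day(temp + 2.0)   # uniform +2 deg C warming
```

    Under this toy model a uniform warming advances the simulated budburst date, which is the qualitative behaviour the study maps spatially across Ireland.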

  5. The transition of a real-time single-rotor helicopter simulation program to a supercomputer

    NASA Technical Reports Server (NTRS)

    Martinez, Debbie

    1995-01-01

    This report presents the conversion effort and results of a real-time flight simulation application transition to a CONVEX supercomputer. Enclosed is a detailed description of the conversion process and a brief description of the Langley Research Center's (LaRC) flight simulation application program structure. Currently, this simulation program may be configured to represent Sikorsky S-61 helicopter (a five-blade, single-rotor, commercial passenger-type helicopter) or an Army Cobra helicopter (either the AH-1 G or AH-1 S model). This report refers to the Sikorsky S-61 simulation program since it is the most frequently used configuration.

  6. GPU-based Efficient Realistic Techniques for Bleeding and Smoke Generation in Surgical Simulators

    PubMed Central

    Halic, Tansel; Sankaranarayanan, Ganesh; De, Suvranu

    2010-01-01

    Background: In actual surgery, smoke and bleeding due to cautery processes provide important visual cues to the surgeon, which have been proposed as factors in surgical skill assessment. While several virtual reality (VR)-based surgical simulators have incorporated effects of bleeding and smoke generation, these effects are not realistic due to the requirement of real-time performance. To be interactive, the visual display must be updated at least at 30 Hz and haptic (touch) information must be refreshed at 1 kHz. Simulation of smoke and bleeding is, therefore, either ignored or performed using highly simplified techniques, since other computationally intensive processes compete for the available CPU resources. Methods: In this work, we develop a novel low-cost method to generate realistic bleeding and smoke in VR-based surgical simulators which outsources the computations to the graphics processing unit (GPU), thus freeing up the CPU for other time-critical tasks. This method is independent of the complexity of the organ models in the virtual environment. User studies were performed with 20 subjects to determine the visual quality of the simulations compared to real surgical videos. Results: The smoke and bleeding simulations were implemented as part of a Laparoscopic Adjustable Gastric Banding (LAGB) simulator. For the bleeding simulation, the original shader-based implementation did not incur noticeable overhead. However, for smoke generation, an I/O (input/output) bottleneck was observed, and two different methods were developed to overcome this limitation. Based on our benchmark results, a buffered approach performed better than a pipelined approach and could support up to 15 video streams in real time. Human subject studies showed that the visual realism of the simulations was as good as in real surgery (median rating of 4 on a 5-point Likert scale).
    Conclusions: Based on the performance results and the subject study, both the bleeding and smoke simulations were concluded to be efficient, highly realistic and well suited to VR-based surgical simulators. PMID:20878651

  7. High performance ultrasonic field simulation on complex geometries

    NASA Astrophysics Data System (ADS)

    Chouh, H.; Rougeron, G.; Chatillon, S.; Iehl, J. C.; Farrugia, J. P.; Ostromoukhov, V.

    2016-02-01

    Ultrasonic field simulation is a key ingredient for the design of new testing methods as well as a crucial step for NDT inspection simulation. As presented in a previous paper [1], CEA-LIST has worked on the acceleration of these simulations focusing on simple geometries (planar interfaces, isotropic materials). In this context, significant accelerations were achieved on multicore processors and GPUs (Graphics Processing Units), bringing the execution time of realistic computations into the 0.1 s range. In this paper, we present recent work that aims at similar performance on a wider range of configurations. We adapted the physical model used by the CIVA platform to design and implement a new algorithm providing a fast ultrasonic field simulation that yields nearly interactive results for complex cases. The improvements over the CIVA pencil-tracing method include adaptive strategies for pencil subdivision to achieve a good refinement of the sensor geometry while keeping a reasonable number of ray-tracing operations. Also, interpolation of the times of flight was used to avoid time-consuming computations in the impulse response reconstruction stage. To achieve the best performance, our algorithm runs on multi-core superscalar CPUs and uses high-performance specialized libraries such as Intel Embree for ray-tracing, Intel MKL for signal processing and Intel TBB for parallelization. We validated the simulation results by comparing them to those produced by CIVA on identical test configurations, including mono-element and multiple-element transducers, homogeneous meshed 3D CAD specimens, isotropic and anisotropic materials, and wave paths that can involve several interactions with interfaces. We show performance results on complete simulations that achieve computation times in the 1 s range.

  8. Introducing a laparoscopic simulation training and credentialing program in gynaecology: an observational study.

    PubMed

    Janssens, Sarah; Beckmann, Michael; Bonney, Donna

    2015-08-01

    Simulation training in laparoscopic surgery has been shown to improve surgical performance. This study describes the implementation of a laparoscopic simulation training and credentialing program for gynaecology registrars. A pilot program, consisting of protected, supervised laparoscopic simulation time, a tailored curriculum and a credentialing process, was developed and implemented. Quantitative measures assessing simulated surgical performance were recorded over the simulation training period. Laparoscopic procedures requiring credentialing were assessed for both the frequency of a registrar being the primary operator and the duration of surgery, and compared to a presimulation cohort. Qualitative measures regarding the quality of surgical training were assessed pre- and postsimulation. Improvements were seen in simulated surgical performance in efficiency domains. Operative time for procedures requiring credentialing was reduced by 12%. Primary operator status in the operating theatre for registrars was unchanged. Registrar assessment of training quality improved. The introduction of a laparoscopic simulation training and credentialing program resulted in improvements in simulated performance, reduced operative time and improved registrar assessment of the quality of training. © 2015 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.

  9. Analysis of the impact of simulation model simplifications on the quality of low-energy buildings simulation results

    NASA Astrophysics Data System (ADS)

    Klimczak, Marcin; Bojarski, Jacek; Ziembicki, Piotr; Kęskiewicz, Piotr

    2017-11-01

    The requirements concerning the energy performance of buildings and their internal installations, particularly HVAC systems, have been growing continuously in Poland and all over the world. The existing traditional calculation methods, which follow from a static heat exchange model, are frequently not sufficient for a sound heating design of a building. Both in Poland and elsewhere in the world, methods and software are employed which allow a detailed simulation of the heat and moisture conditions in a building, as well as an analysis of the performance of HVAC systems within it. However, these tools are usually complex and difficult to use. In addition, developing a simulation model that adequately represents the real building requires considerable designer time and is laborious. Simplifying the simulation model of a building makes it possible to reduce the cost of computer simulations. The paper analyses in detail the effect of a number of different simplification variants of a simulation model developed in DesignBuilder on the quality of the final results. The objective of this analysis is to find simplifications which yield simulation results with an acceptable level of deviation from the detailed model, thus facilitating a quick energy performance analysis of a given building.

  10. The effects of simulated fog and motion on simulator sickness in a driving simulator and the duration of after-effects.

    PubMed

    Dziuda, Lukasz; Biernacki, Marcin P; Baran, Paulina M; Truszczyński, Olaf E

    2014-05-01

    In the study, we examined: 1) how the simulator test conditions affect the severity of simulator sickness symptoms; 2) how the severity of simulator sickness symptoms changes over time; and 3) whether the conditions of the simulator test affect the severity of these symptoms in different ways, depending on the time that has elapsed since the performance of the task in the simulator. We studied 12 men aged 24-33 years (M = 28.8, SD = 3.26) using a truck simulator. The SSQ questionnaire was used to assess the severity of the symptoms of simulator sickness. Each of the subjects performed three 30-minute tasks driving along the same route in a driving simulator. Each of these tasks was carried out in a different simulator configuration: A) fixed-base platform with poor visibility; B) fixed-base platform with good visibility; and C) motion-base platform with good visibility. The severity of the simulator sickness symptoms was measured in five consecutive intervals. The results of the analysis showed that the simulator test conditions affect the severity of the simulator sickness symptoms in different ways, depending on the time that has elapsed since performing the task in the simulator. The simulator sickness symptoms persisted at the highest level for the test conditions involving the motion-base platform. Also, when performing the tasks on the motion-base platform, the severity of the simulator sickness symptoms varied depending on the time that had elapsed since performing the task. Specifically, the addition of motion to the simulation increased the oculomotor and disorientation symptoms reported, as well as the duration of the after-effects. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  11. A reduced basis method for molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Vincent-Finley, Rachel Elisabeth

    In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid body and large-scale motions, occur within a range of nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use principal component analysis (PCA) to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity with respect to our simulation method is analogous to that observed in the standard MD simulation with simulations on the order of picoseconds.
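    The PCA step on a trajectory matrix (frames x coordinates) can be sketched as follows, using synthetic data with one dominant collective mode in place of an actual MD trajectory:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_dof = 200, 30
# Synthetic trajectory: one dominant collective motion plus small noise
t = np.linspace(0, 4 * np.pi, n_frames)
mode = rng.standard_normal(n_dof)
traj = np.outer(np.sin(t), mode) + 0.05 * rng.standard_normal((n_frames, n_dof))

X = traj - traj.mean(axis=0)            # center each coordinate
cov = X.T @ X / (n_frames - 1)          # covariance of coordinates
evals, evecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
order = np.argsort(evals)[::-1]         # reorder: largest variance first
evals, evecs = evals[order], evecs[:, order]

frac = evals[0] / evals.sum()           # variance captured by the first PC
reduced = X @ evecs[:, :2]              # project trajectory onto top-2 PCs
```

    Keeping only the leading principal components gives the reduced representation in which the equations of motion are then considered.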

  12. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
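    The two update rules compared in the paper take the following form for a gating variable obeying dx/dt = (x_inf - x)/tau; for constant x_inf and tau the exponential Euler step is exact, and the error bounds in the paper describe how both schemes degrade when the step size grows relative to tau:

```python
import numpy as np

def euler_step(x, x_inf, tau, dt):
    # Forward Euler update for dx/dt = (x_inf - x) / tau
    return x + dt * (x_inf - x) / tau

def exp_euler_step(x, x_inf, tau, dt):
    # Exponential Euler: exact when x_inf and tau are constant over the step
    return x_inf + (x - x_inf) * np.exp(-dt / tau)

# Relax a gating variable toward a fixed steady state with both schemes
tau, x_inf, dt = 5.0, 0.8, 0.5
x_e = x_ee = 0.0
for _ in range(200):
    x_e = euler_step(x_e, x_inf, tau, dt)
    x_ee = exp_euler_step(x_ee, x_inf, tau, dt)
```

    With a fixed target, both schemes relax to x_inf; the differences analysed in the paper appear when the voltage, and therefore x_inf and tau, change within a step.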

  13. A parallelization method for time periodic steady state in simulation of radio frequency sheath dynamics

    NASA Astrophysics Data System (ADS)

    Kwon, Deuk-Chul; Shin, Sung-Sik; Yu, Dong-Hun

    2017-10-01

    In order to reduce the computing time in simulations of radio frequency (rf) plasma sources, various numerical schemes have been developed. It is well known that the upwind, exponential, and power-law schemes can efficiently overcome the limitation on the grid size for fluid transport simulations of high density plasma discharges. Likewise, the semi-implicit method is a well-known numerical scheme for overcoming the limitation on the simulation time step. However, despite remarkable advances in numerical techniques and computing power over the last few decades, efficient multi-dimensional modeling of low temperature plasma discharges has remained a considerable challenge. In particular, parallelization in time has been difficult for time-periodic steady-state problems, such as capacitively coupled plasma discharges and rf sheath dynamics, because the plasma parameters at each time step are calculated from their values at the previous time step. We therefore present a parallelization method for time-periodic steady-state problems based on period-slices. In order to evaluate the efficiency of the developed method, one-dimensional fluid simulations are conducted to describe rf sheath dynamics. The results show that speedup can be achieved with multithreading.
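    The period-slice idea can be illustrated on a toy periodically driven ODE rather than the paper's fluid model (the ODE, slice count, and sweep count here are assumptions): one period is split into N slices, every slice integrates independently from boundary values taken from the previous sweep (so the N integrations could run on separate threads), and periodicity couples the last slice back to the first. The iteration converges to the time-periodic steady state.

```python
import math

T = 2.0 * math.pi   # period of the driving term
N = 4               # number of period-slices (assumed)
steps = 200         # fine Euler steps per slice (assumed)
dt = T / (N * steps)

def integrate_slice(y, s):
    """Advance y across slice s of the toy ODE dy/dt = -y + sin(t) with forward Euler."""
    t = s * T / N
    for _ in range(steps):
        y += dt * (-y + math.sin(t))
        t += dt
    return y

b = [0.0] * N               # guessed boundary values y(t_s) at each slice start
for _ in range(40):         # Jacobi-style sweeps toward the periodic fixed point
    ends = [integrate_slice(b[s], s) for s in range(N)]  # independent -> parallelizable
    b = [ends[(s - 1) % N] for s in range(N)]            # slice N-1 wraps to slice 0

# The periodic steady state of the toy ODE is y(t) = (sin t - cos t) / 2.
exact = [(math.sin(s * T / N) - math.cos(s * T / N)) / 2 for s in range(N)]
err = max(abs(bi - ei) for bi, ei in zip(b, exact))
```

    Each sweep contracts the boundary-value error, so the converged slice boundaries match the analytic periodic solution up to the Euler discretization error.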

  14. Influence of Climate Variability on Brown Planthopper Population Dynamics and Development Time

    NASA Astrophysics Data System (ADS)

    Romadhon, S.; Koesmaryono, Y.; Hidayati, R.

    2017-03-01

    Brown planthopper, Nilaparvata lugens (BPH), is one of the major rice pests in Indonesia. BPH can cause extensive damage and appears in almost every planting season; frequent outbreaks result in very high economic losses. Outbreaks of BPH have often occurred in paddy fields in Indramayu regency and several endemic regencies on Java island, where rice is cultivated two to three times a year, in both the rainy and dry cropping seasons. The simulation output shows the BPH population increasing from December to February (rainy season) and from June to August (dry season). The result had a pattern similar to the light-trap observation data but overestimated the BPH population. The simulation output did, however, match the observed BPH-attacked area data fairly closely. The development time taken by different stages of BPH varied with temperature. The simulated development times for the egg and adult stages agree with the real BPH life stages, but for the nymph stage the result differs from the expected development-time behavior.

  15. Medication waste reduction in pediatric pharmacy batch processes.

    PubMed

    Toerper, Matthew F; Veltri, Michael A; Hamrock, Eric; Mollenkopf, Nicole L; Holt, Kristen; Levin, Scott

    2014-04-01

    To inform pediatric cart-fill batch scheduling for reductions in pharmaceutical waste using a case study and simulation analysis. A pre- and post-intervention analysis and a simulation analysis were conducted over 3 months at a 205-bed children's center. An algorithm was developed to detect wasted medication based on time-stamped computerized provider order entry information. The algorithm was used to quantify pharmaceutical waste and associated costs for both the preintervention (1 batch per day) and postintervention (3 batches per day) schedules. Further, simulation was used to systematically test 108 batch schedules, outlining general characteristics that have an impact on the likelihood for waste. Switching from a 1-batch-per-day to a 3-batch-per-day schedule resulted in a 31.3% decrease in pharmaceutical waste (28.7% to 19.7%) and annual cost savings of $183,380. Simulation results demonstrate how increasing batch frequency facilitates a more just-in-time process that reduces waste. The most substantial gains are realized by shifting from a schedule of 1 batch per day to at least 2 batches per day. The simulation exhibits how waste reduction is also achievable by avoiding batch preparation during daily time periods when medication administration or medication discontinuations are frequent. Lastly, the simulation was used to show how reducing the preparation time per batch provides some, albeit minimal, opportunity to decrease waste. The case study and simulation analysis demonstrate characteristics of batch scheduling that may support pediatric pharmacy managers in redesign toward minimizing pharmaceutical waste.
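    The batch-frequency effect described above can be captured with a toy model (assumed mechanics, not the study's algorithm): each dose is prepared at the most recent batch time at or before its administration hour, and is wasted if its order is discontinued after preparation but before administration. More frequent batches shorten the prepare-to-administer window, so fewer prepared doses are caught by a discontinuation.

```python
import random

def simulated_waste(batch_times, n_doses=20000, seed=1):
    """Count wasted doses for a given daily batch schedule (hours of day)."""
    rng = random.Random(seed)
    wasted = 0
    for _ in range(n_doses):
        admin = rng.uniform(0.0, 24.0)                  # administration hour
        prep = max(t for t in batch_times if t <= admin)  # most recent batch
        # Discontinuation time; many orders outlive the day entirely.
        disc = rng.uniform(0.0, 48.0)
        if prep < disc < admin:                         # prepared, then cancelled
            wasted += 1
    return wasted

one_per_day = simulated_waste([0.0])
three_per_day = simulated_waste([0.0, 8.0, 16.0])
```

    With the same order stream, the 3-batch schedule wastes substantially fewer doses than the 1-batch schedule, mirroring the just-in-time effect reported above.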

  16. A Method for Generating Reduced-Order Linear Models of Multidimensional Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Chicatelli, Amy; Hartley, Tom T.

    1998-01-01

    Simulation of high speed propulsion systems may be divided into two categories, nonlinear and linear. The nonlinear simulations are usually based on multidimensional computational fluid dynamics (CFD) methodologies and tend to provide high resolution results that show the fine detail of the flow. Consequently, these simulations are large, numerically intensive, and run much slower than real-time. The linear simulations are usually based on large lumping techniques that are linearized about a steady-state operating condition. These simplistic models often run at or near real-time but do not always capture the detailed dynamics of the plant. Under a grant sponsored by the NASA Lewis Research Center, Cleveland, Ohio, a new method has been developed that can be used to generate improved linear models for control design from multidimensional steady-state CFD results. This CFD-based linear modeling technique provides a small perturbation model that can be used for control applications and real-time simulations. It is important to note the utility of the modeling procedure; all that is needed to obtain a linear model of the propulsion system is the geometry and steady-state operating conditions from a multidimensional CFD simulation or experiment. This research represents a beginning step in establishing a bridge between the controls discipline and the CFD discipline so that the control engineer is able to effectively use multidimensional CFD results in control system design and analysis.

  17. Rise time of proton cut-off energy in 2D and 3D PIC simulations

    NASA Astrophysics Data System (ADS)

    Babaei, J.; Gizzi, L. A.; Londrillo, P.; Mirzanejad, S.; Rovelli, T.; Sinigardi, S.; Turchetti, G.

    2017-04-01

    The Target Normal Sheath Acceleration regime for proton acceleration by laser pulses is experimentally consolidated and fairly well understood. However, uncertainties remain in the analysis of particle-in-cell simulation results. The energy spectrum is exponential with a cut-off, but the maximum energy depends on the simulation time, following different laws in two and three dimensional (2D, 3D) PIC simulations so that the determination of an asymptotic value has some arbitrariness. We propose two empirical laws for the rise time of the cut-off energy in 2D and 3D PIC simulations, suggested by a model in which the proton acceleration is due to a surface charge distribution on the target rear side. The kinetic energy of the protons that we obtain follows two distinct laws, which appear to be nicely satisfied by PIC simulations, for a model target given by a uniform foil plus a contaminant layer that is hydrogen-rich. The laws depend on two parameters: the scaling time, at which the energy starts to rise, and the asymptotic cut-off energy. The values of the cut-off energy, obtained by fitting 2D and 3D simulations for the same target and laser pulse configuration, are comparable. This suggests that parametric scans can be performed with 2D simulations since 3D ones are computationally very expensive, delegating their role only to a correspondence check. In this paper, the simulations are carried out with the PIC code ALaDyn by changing the target thickness L and the incidence angle α, with a fixed a0 = 3. A monotonic dependence, on L for normal incidence and on α for fixed L, is found, as in the experimental results for high temporal contrast pulses.

  18. Computational steering of GEM based detector simulations

    NASA Astrophysics Data System (ADS)

    Sheharyar, Ali; Bouhali, Othmane

    2017-10-01

    Gas based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. These long-running simulations usually run on high-performance computers in batch mode. If the results lead to unexpected behavior, the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This may result in inefficient resource utilization and an increase in the turnaround time for the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable the exploration of the live data as it is produced by the simulation.

  19. Mixed reality ventriculostomy simulation: experience in neurosurgical residency.

    PubMed

    Hooten, Kristopher G; Lister, J Richard; Lombard, Gwen; Lizdas, David E; Lampotang, Samsun; Rajon, Didier A; Bova, Frank; Murad, Gregory J A

    2014-12-01

    Medicine and surgery are turning toward simulation to improve on limited patient interaction during residency training. Many simulators today use virtual reality with augmented haptic feedback with little to no physical elements. In a collaborative effort, the University of Florida Department of Neurosurgery and the Center for Safety, Simulation & Advanced Learning Technologies created a novel "mixed" physical and virtual simulator to mimic the ventriculostomy procedure. The simulator contains all the physical components encountered for the procedure with superimposed 3-D virtual elements for the neuroanatomical structures. To introduce the ventriculostomy simulator and its validation as a necessary training tool in neurosurgical residency. We tested the simulator in more than 260 residents. An algorithm combining time and accuracy was used to grade performance. Voluntary postperformance surveys were used to evaluate the experience. Results demonstrate that more experienced residents have statistically significant better scores and completed the procedure in less time than inexperienced residents. Survey results revealed that most residents agreed that practice on the simulator would help with future ventriculostomies. This mixed reality simulator provides a real-life experience, and will be an instrumental tool in training the next generation of neurosurgeons. We have now implemented a standard where incoming residents must prove efficiency and skill on the simulator before their first interaction with a patient.

  20. Simulations of 6-DOF Motion with a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographically derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.

  1. Time-dependent broken-symmetry density functional theory simulation of the optical response of entangled paramagnetic defects: Color centers in lithium fluoride

    NASA Astrophysics Data System (ADS)

    Janesko, Benjamin G.

    2018-02-01

    Parameter-free atomistic simulations of entangled solid-state paramagnetic defects may aid in the rational design of devices for quantum information science. This work applies time-dependent density functional theory (TDDFT) embedded-cluster simulations to a prototype entangled-defect system, namely two adjacent singlet-coupled F color centers in lithium fluoride. TDDFT calculations accurately reproduce the experimental visible absorption of both isolated and coupled F centers. The most accurate results are obtained by combining spin symmetry breaking to simulate strong correlation, a large fraction of exact (Hartree-Fock-like) exchange to minimize the defect electrons' self-interaction error, and a standard semilocal approximation for dynamical correlations between the defect electrons and the surrounding ionic lattice. These results motivate application of two-reference correlated ab initio approximations to the M-center, and application of TDDFT in parameter-free simulations of more complex entangled paramagnetic defect architectures.

  2. Monte Carlo simulation of Ray-Scan 64 PET system and performance evaluation using GATE toolkit

    NASA Astrophysics Data System (ADS)

    Li, Suying; Zhang, Qiushi; Vuletic, Ivan; Xie, Zhaoheng; Yang, Kun; Ren, Qiushi

    2017-02-01

    In this study, we aimed to develop a GATE model for the simulation of Ray-Scan 64 PET scanner and model its performance characteristics. A detailed implementation of system geometry and physical process were included in the simulation model. Then we modeled the performance characteristics of Ray-Scan 64 PET system for the first time, based on National Electrical Manufacturers Association (NEMA) NU-2 2007 protocols and validated the model against experimental measurement, including spatial resolution, sensitivity, counting rates and noise equivalent count rate (NECR). Moreover, an accurate dead time module was investigated to simulate the counting rate performance. Overall results showed reasonable agreement between simulation and experimental data. The validation results showed the reliability and feasibility of the GATE model to evaluate major performance of Ray-Scan 64 PET system. It provided a useful tool for a wide range of research applications.

  3. Effect of real-time boundary wind conditions on the air flow and pollutant dispersion in an urban street canyon—Large eddy simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Yun-Wei; Gu, Zhao-Lin; Cheng, Yan; Lee, Shun-Cheng

    2011-07-01

    Air flow and pollutant dispersion characteristics in an urban street canyon are studied under real-time boundary conditions. A new scheme for realizing real-time boundary conditions in simulations is proposed, to keep the upper boundary wind conditions consistent with the measured time series of wind data. The air flow structure and its evolution under real-time boundary wind conditions are simulated by using this new scheme. The induced effect of the time series of ambient wind conditions on the flow structures inside and above the street canyon is investigated. The flow shows an obvious intermittent feature in the street canyon, and flapping of the shear layer forms near the roof layer under real-time wind conditions, resulting in the expansion or compression of the air mass in the canyon. The simulations of pollutant dispersion show that the pollutants inside and above the street canyon are transported by different dispersion mechanisms, depending on the time series of air flow structures. Large scale air movements during the expansion or compression of the air mass in the canyon exhibit obvious effects on pollutant dispersion. The simulations also show that the transport of pollutants from the canyon to the upper air flow is dominated by the shear layer turbulence near the roof level and by the expansion or compression of the air mass in the street canyon under real-time boundary wind conditions. In particular, the expansion of the air mass, which involves large-scale air movement, contributes more to pollutant dispersion in this study. Comparisons of simulated results under different boundary wind conditions indicate that real-time boundary wind conditions produce better conditions for pollutant dispersion than artificially designed steady boundary wind conditions.

  4. The Simulation of a Jumbo Jet Transport Aircraft. Volume 2: Modeling Data

    NASA Technical Reports Server (NTRS)

    Hanke, C. R.; Nordwall, D. R.

    1970-01-01

    The manned simulation of a large transport aircraft is described. Aircraft and systems data necessary to implement the mathematical model described in Volume I are presented, together with a discussion of how these data are used in the model. The results of the real-time computations in the NASA Ames Research Center Flight Simulator for Advanced Aircraft are shown and compared to flight test data and to the results obtained in a training simulator known to be satisfactory.

  5. Improving surgeon utilization in an orthopedic department using simulation modeling

    PubMed Central

    Simwita, Yusta W; Helgheim, Berit I

    2016-01-01

    Purpose Worldwide, more than two billion people lack appropriate access to surgical services due to the mismatch between existing human resources and patient demand. Improving utilization of the existing workforce capacity can reduce the gap between surgical demand and available workforce capacity. In this paper, the authors use discrete event simulation to explore the care process at an orthopedic department. Our main focus is improving the utilization of surgeons while minimizing patient wait time. Methods The authors collaborated with orthopedic department personnel to map the current operations of the orthopedic care process in order to identify factors that contribute to poor surgeon utilization and high patient waiting time. The authors used an observational approach to collect data. The developed model was validated by comparing the simulation output with actual patient data collected from the studied orthopedic care process. The authors developed a proposal scenario to show how to improve surgeon utilization. Results The simulation results showed that if ancillary services could be performed before the start of clinic examination services, the orthopedic care process could be greatly improved, with higher surgeon utilization and reduced patient waiting time. Simulation results demonstrate that with improved surgeon utilization, up to a 55% increase in future demand can be accommodated without patients exceeding the current waiting time at this clinic, thus improving patient access to health care services. Conclusion This study shows how simulation modeling can be used to improve health care processes. This study was limited to a single care process; however, the findings can be applied to improve other orthopedic care processes with similar operational characteristics. PMID:29355193

  6. Continued Research into Characterizing the Preturbulence Environment for Sensor Development, New Hazard Algorithms and Experimental Flight Planning

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Lin, Yuh-Lang

    2005-01-01

    The purpose of the research was to develop and test improved hazard algorithms that could result in the development of sensors that are better able to anticipate potentially severe atmospheric turbulence, which affects aircraft safety. The research focused on employing numerical simulation models to develop improved algorithms for the prediction of aviation turbulence. This involved producing both research simulations and real-time simulations of environments predisposed to moderate and severe aviation turbulence. The research resulted in the following fundamental advancements toward the aforementioned goal: 1) very high resolution simulations of turbulent environments indicated how predictive hazard indices could be improved, resulting in a candidate hazard index that indicated the potential for improvement over existing operational indices, 2) a real-time turbulence hazard numerical modeling system was improved by correcting deficiencies in its simulation of moist convection, and 3) the same real-time predictive system was tested by running the code twice daily, and the hazard prediction indices were updated and improved. Additionally, a simple validation study was undertaken to determine how well a real-time hazard predictive index performed when compared to commercial pilot observations of aviation turbulence. Simple statistical analyses were performed in this validation study, indicating potential skill in employing the hazard prediction index to predict regions of varying intensities of aviation turbulence. Data sets from a research numerical model were provided to NASA for use in a large eddy simulation numerical model. A NASA contractor report and several refereed journal articles were prepared and submitted for publication during the course of this research.

  7. Magnetosheath Propagation Time of Solar Wind Directional Discontinuities

    NASA Astrophysics Data System (ADS)

    Samsonov, A. A.; Sibeck, D. G.; Dmitrieva, N. P.; Semenov, V. S.; Slivka, K. Yu.; Šafránková, J.; Němeček, Z.

    2018-05-01

    Observed delays in the ground response to solar wind directional discontinuities have been explained as the result of larger than expected magnetosheath propagation times. Recently, Samsonov et al. (2017, https://doi.org/10.1002/2017GL075020) showed that the typical time for a southward interplanetary magnetic field (IMF) turning to propagate across the magnetosheath is 14 min. Here by using a combination of magnetohydrodynamic simulations, spacecraft observations, and analytic calculations, we study the dependence of the propagation time on solar wind parameters and near-magnetopause cutoff speed. Increases in the solar wind speed result in greater magnetosheath plasma flow velocities, decreases in the magnetosheath thickness and, as a result, decreases in the propagation time. Increases in the IMF strength result in increases in the magnetosheath thickness and increases in the propagation time. Both magnetohydrodynamic simulations and observations suggest that propagation times are slightly smaller for northward IMF turnings. Magnetosheath flow deceleration must be taken into account when predicting the arrival times of solar wind structures at the dayside magnetopause.

  8. Investigation of Asymmetric Thrust Detection with Demonstration in a Real-Time Simulation Testbed

    NASA Technical Reports Server (NTRS)

    Chicatelli, Amy; Rinehart, Aidan W.; Sowers, T. Shane; Simon, Donald L.

    2015-01-01

    The purpose of this effort is to develop, demonstrate, and evaluate three asymmetric thrust detection approaches to aid in the reduction of asymmetric thrust-induced aviation accidents. This paper presents the results from that effort and their evaluation in simulation studies, including those from a real-time flight simulation testbed. Asymmetric thrust is recognized as a contributing factor in several Propulsion System Malfunction plus Inappropriate Crew Response (PSM+ICR) aviation accidents. As an improvement over the state-of-the-art, providing annunciation of asymmetric thrust to alert the crew may hold safety benefits. For this, the reliable detection and confirmation of asymmetric thrust conditions is required. For this work, three asymmetric thrust detection methods are presented along with their results obtained through simulation studies. Representative asymmetric thrust conditions are modeled in simulation based on failure scenarios similar to those reported in aviation incident and accident descriptions. These simulated asymmetric thrust scenarios, combined with actual aircraft operational flight data, are then used to conduct a sensitivity study regarding the detection capabilities of the three methods. Additional evaluation results are presented based on pilot-in-the-loop simulation studies conducted in the NASA Glenn Research Center (GRC) flight simulation testbed. Data obtained from this flight simulation facility are used to further evaluate the effectiveness and accuracy of the asymmetric thrust detection approaches. Generally, the asymmetric thrust conditions are correctly detected and confirmed.

  10. Hybrid neuro-heuristic methodology for simulation and control of dynamic systems over time interval.

    PubMed

    Woźniak, Marcin; Połap, Dawid

    2017-09-01

    Simulation and positioning are very important aspects of computer-aided engineering. To perform them, we can apply traditional methods or intelligent techniques, which differ in the way they process information. In the first case, to simulate an object in a particular state of action, we need to perform the entire process to read the parameter values. This is inconvenient for objects whose simulation takes a long time, i.e. when the mathematical calculations are complicated. In the second case, an intelligent solution can restrict the simulation to only those situations that are necessary for the development process. We present research results on an intelligent simulation and control model of an electric-drive vehicle. For a dedicated simulation method based on intelligent computation, in which an evolutionary strategy simulates the states of the dynamic model, an intelligent system based on a devoted neural network is introduced to control co-working modules during motion over a time interval. The presented experimental results show the implemented solution in a situation where a vehicle transports goods over an area with many obstacles, which provokes sudden changes in stability that may lead to destruction of the load. The applied neural network controller prevents destruction of the load by positioning characteristics such as pressure, acceleration, and stiffness voltage to absorb the adverse changes of the ground. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. FPGA in-the-loop simulations of cardiac excitation model under voltage clamp conditions

    NASA Astrophysics Data System (ADS)

    Othman, Norliza; Adon, Nur Atiqah; Mahmud, Farhanahani

    2017-01-01

    The voltage clamp technique allows the detection of single-channel currents in biological membranes, identifying a variety of electrophysiological problems at the cellular level. In this paper, a simulation study of the voltage clamp technique is presented to analyse the current-voltage (I-V) characteristics of ion currents based on the Luo-Rudy Phase-I (LR-I) cardiac model, using a Field Programmable Gate Array (FPGA). Cardiac models are becoming increasingly complex and can require a vast amount of time to simulate. A real-time hardware implementation using an FPGA can therefore be one of the best solutions for high-performance real-time systems, as it provides high configurability and performance and can execute operations in parallel. For shorter development time while retaining high confidence in the results, FPGA-based rapid prototyping through HDL Coder in MATLAB was used to construct the algorithm for the simulation system. HDL Coder converts the designed MATLAB Simulink blocks into a hardware description language (HDL) for FPGA implementation. As a result, the voltage-clamp fixed-point design of the LR-I model was successfully implemented in MATLAB Simulink, and the simulation of the I-V characteristics of the ionic currents was verified on a Xilinx FPGA Virtex-6 XC6VLX240T development board through an FPGA-in-the-loop (FIL) simulation.

  12. Wind energy system time-domain (WEST) analyzers

    NASA Technical Reports Server (NTRS)

    Dreier, M. E.; Hoffman, J. A.

    1981-01-01

    A portable analyzer which simulates in real time the complex nonlinear dynamics of horizontal axis wind energy systems was constructed. Math models for an aeroelastic rotor featuring nonlinear aerodynamic and inertial terms were implemented with high speed digital controllers and analog calculation. This model was combined with other math models of elastic supports, control systems, a power train and gimballed rotor kinematics. A stroboscopic display system graphically depicting distributed blade loads, motion, and other aerodynamic functions on a cathode ray tube is included. Limited correlation efforts showed good comparison between the results of this analyzer and other sophisticated digital simulations. The digital simulation results were successfully correlated with test data.

  13. Rapid prototyping and AI programming environments applied to payload modeling

    NASA Technical Reports Server (NTRS)

    Carnahan, Richard S., Jr.; Mendler, Andrew P.

    1987-01-01

    This effort focused on using artificial intelligence (AI) programming environments and rapid prototyping to aid in both manned and unmanned space flight payload simulation and training. The significant problems addressed are the large amount of development time required to design and implement just one of these payload simulations and the relative inflexibility of the resulting model to future modification. Results of this effort suggest that both rapid prototyping and AI programming environments can significantly reduce development time and cost when applied to the domain of payload modeling for crew training. The techniques employed are applicable to a variety of domains where models or simulations are required.

  14. Low-Earth-Orbit and Geosynchronous-Earth-Orbit Testing of 80 Ah Batteries under Real-time Profiles

    NASA Technical Reports Server (NTRS)

    Staniewicz, Robert J.; Willson, John; Briscoe, J. Douglas; Rao, Gopalakrishna M.

    2004-01-01

    This viewgraph presentation gives an update on test results from two 16-cell batteries, one in a simulated Low Earth Orbit (LEO) environment and the other in a simulated Geosynchronous Earth Orbit (GEO) environment. The tests measured how voltage and capacity are affected over time by thermal cycling.

  15. Near-real-time simulations of bioelectric activity in small mammalian hearts using graphical processing units

    PubMed Central

    Vigmond, Edward J.; Boyle, Patrick M.; Leon, L. Joshua; Plank, Gernot

    2014-01-01

    Simulations of cardiac bioelectric phenomena remain a significant challenge despite continual advancements in computational machinery. Spanning large temporal and spatial ranges demands millions of nodes to accurately depict geometry, and a comparable number of timesteps to capture dynamics. This study explores a new hardware computing paradigm, the graphics processing unit (GPU), to accelerate cardiac models, and analyzes results in the context of simulating a small mammalian heart in real time. The ODEs associated with membrane ionic flow were computed on a traditional CPU and compared with GPU performance, for one to four parallel processing units. The scalability of solving the PDE responsible for tissue coupling was examined on a cluster using up to 128 cores. Results indicate that the GPU implementation was between 9 and 17 times faster than the CPU implementation and scaled similarly. Solving the PDE was still 160 times slower than real time. PMID:19964295
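The ODE part of such models is embarrassingly parallel: between coupling steps, each node's membrane state advances independently. The sketch below uses the simple FitzHugh-Nagumo membrane (a stand-in, not a physiological ionic model) to show the vectorized, one-update-for-all-nodes structure that a one-GPU-thread-per-node kernel exploits.

```python
import numpy as np

def fhn_step(v, w, dt=0.01, i_stim=0.5, a=0.7, b=0.8, eps=0.08):
    """One explicit-Euler step of the FitzHugh-Nagumo membrane equations,
    applied to every node at once. There is no inter-node dependence in the
    ODE part, so the update maps directly onto parallel GPU threads."""
    dv = v - v ** 3 / 3.0 - w + i_stim
    dw = eps * (v + a - b * w)
    return v + dt * dv, w + dt * dw

n_nodes = 10_000                 # stand-in for the millions of mesh nodes
v = np.full(n_nodes, -1.0)       # identical initial membrane state
w = np.zeros(n_nodes)
for _ in range(1000):            # 1000 timesteps of the ODE solve
    v, w = fhn_step(v, w)
```

The tissue-coupling PDE, by contrast, couples neighboring nodes and is the part the abstract reports as still far from real time.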

  16. Framework for modeling urban restoration resilience time in the aftermath of an extreme event

    USGS Publications Warehouse

    Ramachandran, Varun; Long, Suzanna K.; Shoberg, Thomas G.; Corns, Steven; Carlo, Héctor

    2015-01-01

    The impacts of extreme events continue long after the emergency response has terminated. Effective reconstruction of supply-chain strategic infrastructure (SCSI) elements is essential for postevent recovery and the reconnectivity of a region with the outside. This study uses an interdisciplinary approach to develop a comprehensive framework to model resilience time. The framework is tested by comparing resilience time results for a simulated EF-5 tornado with ground truth data from the tornado that devastated Joplin, Missouri, on May 22, 2011. Data for the simulated tornado were derived for Overland Park, Johnson County, Kansas, in the greater Kansas City, Missouri, area. Given the simulated tornado, a combinatorial graph considering the damages in terms of interconnectivity between different SCSI elements is derived. Reconstruction in the aftermath of the simulated tornado is optimized using the proposed framework to promote a rapid recovery of the SCSI. This research shows promising results when compared with the independent quantifiable data obtained from Joplin, Missouri, returning a resilience time of 22 days compared with 25 days reported by city and state officials.

  17. Ultra-Fast Hadronic Calorimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denisov, Dmitri; Lukić, Strahinja; Mokhov, Nikolai

    2018-08-01

    Calorimeters for particle physics experiments with integration time of a few ns will substantially improve the capability of the experiment to resolve event pileup and to reject backgrounds. In this paper the time development of hadronic showers induced by 30 and 60 GeV positive pions and 120 GeV protons is studied using Monte Carlo simulation and beam tests with a prototype of a sampling steel-scintillator hadronic calorimeter. In the beam tests, scintillator signals induced by hadronic showers in steel are sampled with a period of 0.2 ns and precisely time-aligned in order to study the average signal waveform at various locations with respect to the beam particle impact. Simulations of the same setup are performed using the MARS15 code. Both simulation and test beam results suggest that energy deposition in steel calorimeters develops over a time shorter than 2 ns, providing an opportunity for ultra-fast calorimetry. Simulation results for an “ideal” calorimeter consisting exclusively of bulk tungsten or copper are presented to establish the lower limit of the signal integration window.

  18. Ultra-Fast Hadronic Calorimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denisov, Dmitri; Lukić, Strahinja; Mokhov, Nikolai

    2017-12-18

    Calorimeters for particle physics experiments with integration time of a few ns will substantially improve the capability of the experiment to resolve event pileup and to reject backgrounds. In this paper the time development of hadronic showers induced by 30 and 60 GeV positive pions and 120 GeV protons is studied using Monte Carlo simulation and beam tests with a prototype of a sampling steel-scintillator hadronic calorimeter. In the beam tests, scintillator signals induced by hadronic showers in steel are sampled with a period of 0.2 ns and precisely time-aligned in order to study the average signal waveform at various locations with respect to the beam particle impact. Simulations of the same setup are performed using the MARS15 code. Both simulation and test beam results suggest that energy deposition in steel calorimeters develops over a time shorter than 3 ns, providing an opportunity for ultra-fast calorimetry. Simulation results for an "ideal" calorimeter consisting exclusively of bulk tungsten or copper are presented to establish the lower limit of the signal integration window.

  19. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model

    PubMed Central

    van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.

    2018-01-01

    The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. 
The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620

  20. A new battery-charging method suggested by molecular dynamics simulations.

    PubMed

    Abou Hamad, Ibrahim; Novotny, M A; Wipf, D O; Rikvold, P A

    2010-03-20

    Based on large-scale molecular dynamics simulations, we propose a new charging method that should be capable of charging a lithium-ion battery in a fraction of the time needed when using traditional methods. This charging method uses an additional applied oscillatory electric field. Our simulation results show that this charging method offers a great reduction in the average intercalation time for Li(+) ions, which dominates the charging time. The oscillating field not only increases the diffusion rate of Li(+) ions in the electrolyte but, more importantly, also enhances intercalation by lowering the corresponding overall energy barrier.

  1. The Development of Dispatcher Training Simulator in a Thermal Energy Generation System

    NASA Astrophysics Data System (ADS)

    Hakim, D. L.; Abdullah, A. G.; Mulyadi, Y.; Hasan, B.

    2018-01-01

    A dispatcher training simulator (DTS) is a real-time Human Machine Interface (HMI)-based control tool that is able to visualize industrial control system processes. The present study was aimed at developing a simulator tool for boilers in a thermal power station. The DTS prototype was designed using technical data of thermal power station boilers in Indonesia and implemented in Wonderware InTouch 10. The resulting simulator came with component drawings, animation, control displays, an alarm system, real-time trends, and historical trends. The application used 26 tagnames and was equipped with a security system. Testing showed that the principles of real-time control worked well. It is expected that this research could significantly contribute to the development of thermal power stations, particularly as a training simulator for beginning dispatchers.

  2. Clinical results of computerized tomography-based simulation with laser patient marking.

    PubMed

    Ragan, D P; Forman, J D; He, T; Mesina, C F

    1996-02-01

    The accuracy of a patient treatment portal marking device and computerized tomography (CT) simulation has been clinically tested. A CT-based simulator was assembled from a commercial CT scanner, visualization software, and a computer-controlled laser drawing device. The laser drawing device is used to transfer the setup, central axis, and/or radiation portals from the CT simulator to the patient for appropriate skin marking. A protocol for clinical testing is reported, and twenty-five prospectively, sequentially accessioned patients have been analyzed. The simulation process can be completed in an average time of 62 min. In many cases, the treatment portals can be designed and the patient marked in one session. The mechanical accuracy of the system was found to be within +/- 1 mm, and the portal projection accuracy in clinical cases was better than +/- 1.2 mm. Operating costs are equivalent to those of the conventional simulation process it replaces. CT simulation is a clinically accurate substitute for conventional simulation when used with an appropriate patient marking system and digitally reconstructed radiographs. Personnel time spent in CT simulation is equivalent to time in conventional simulation.

  3. Modelling the line shape of very low energy peaks of positron beam induced secondary electrons measured using a time of flight spectrometer

    NASA Astrophysics Data System (ADS)

    Fairchild, A. J.; Chirayath, V. A.; Gladen, R. W.; Chrysler, M. D.; Koymen, A. R.; Weiss, A. H.

    2017-01-01

    In this paper, we present results of numerical modelling of the University of Texas at Arlington's time-of-flight positron annihilation induced Auger electron spectrometer (UTA TOF-PAES) using the SIMION® 8.1 Ion and Electron Optics Simulator. The time-of-flight (TOF) spectrometer measures the energy of electrons emitted from the surface of a sample as a result of the interaction of low energy positrons with the sample surface. We used SIMION® 8.1 to calculate the time-of-flight spectra of electrons leaving the sample surface with energies and angles dispersed according to distribution functions chosen to model the positron-induced electron emission process, and thus obtained an estimate of the true electron energy distribution. The simulated TOF distribution was convolved with a Gaussian timing resolution function and compared to the experimental distribution. The broadening observed in the simulated TOF spectra was found to be consistent with that observed in the experimental secondary electron spectra of Cu generated by positrons incident with energies from 1.5 eV to 901 eV, when a timing resolution of 2.3 ns was assumed.
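The kinematics behind such a simulation are compact: a nonrelativistic electron of energy E traverses a flight path L in t = L*sqrt(m/(2E)), and the instrument response can be modeled by adding per-event Gaussian timing jitter (equivalent to convolving the spectrum with the timing resolution function). The flight-path length and energy distribution below are illustrative placeholders, not the UTA spectrometer's values.

```python
import numpy as np

rng = np.random.default_rng(0)
M_E = 9.109e-31      # electron mass, kg
EV = 1.602e-19       # joules per electronvolt
L = 0.5              # assumed flight path in metres (placeholder value)

def tof_spectrum(energies_ev, sigma_t=2.3e-9):
    """Map electron energies to flight times t = L*sqrt(m/(2E)), then apply
    Gaussian timing jitter, mimicking the convolution with a timing
    resolution function described in the abstract."""
    e_joules = np.asarray(energies_ev, dtype=float) * EV
    t = L * np.sqrt(M_E / (2.0 * e_joules))
    return t + rng.normal(0.0, sigma_t, size=t.shape)

# toy secondary-electron energy distribution peaked at a few eV
energies = rng.gamma(shape=2.0, scale=1.0, size=10_000) + 0.1
times = tof_spectrum(energies)
```

Lower-energy electrons pile up at long flight times, which is why the shape of the very low energy peak is so sensitive to the assumed timing resolution.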

  4. Global exponential periodicity and stability of discrete-time complex-valued recurrent neural networks with time-delays.

    PubMed

    Hu, Jin; Wang, Jun

    2015-06-01

    In recent years, complex-valued recurrent neural networks have been developed and analysed in depth because they offer good modelling performance for applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is often necessary to use a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results for several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.
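A minimal numerical illustration of global exponential stability in this setting (a generic sketch, not the paper's model or its sufficient conditions): a discrete-time complex-valued recurrence with an amplitude-phase activation and a weight matrix explicitly scaled to be contractive, so trajectories from different initial states converge to the same equilibrium.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W *= 0.5 / np.linalg.norm(W, 2)   # enforce ||W||_2 = 0.5 < 1 (contraction)
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # external input

def act(z):
    """A common complex-valued activation: tanh on the modulus, phase kept."""
    return np.tanh(np.abs(z)) * np.exp(1j * np.angle(z))

def iterate(z0, steps=200):
    """Discrete-time complex-valued recurrence z[k+1] = W f(z[k]) + u."""
    z = z0
    for _ in range(steps):
        z = W @ act(z) + u
    return z

z_a = iterate(rng.standard_normal(n) + 1j * rng.standard_normal(n))
z_b = iterate(10 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
```

Because the activation is 1-Lipschitz and ||W|| < 1, the iteration is a contraction mapping and both trajectories collapse onto the unique fixed point; the paper's contribution is deriving checkable conditions of this kind for much more general networks with delays.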

  5. Spin-up simulation behaviors in a climate model to build a basement of long-time simulation

    NASA Astrophysics Data System (ADS)

    Lee, J.; Xue, Y.; De Sales, F.

    2015-12-01

    Start-up information is essential when conducting a long-term climate simulation. If an initial condition is already available from a previous simulation with the same type of model, a spin-up is unnecessary; if not, the model needs a spin-up simulation to obtain an initial condition that is adjusted to and balanced with the model climatology; otherwise, the adjustment may take several simulated years. Initial-field variables with long residual memories, such as deep soil temperature and deep-ocean temperature, can affect the model's subsequent long-term simulation. To investigate the important factors in producing an atmospheric initial condition by spin-up, we conducted two different spin-up simulations for the case in which no atmospheric condition is available from existing datasets. One simulation employed an atmospheric general circulation model (AGCM), namely the Global Forecast System (GFS) of the National Centers for Environmental Prediction (NCEP), while the other employed a coupled atmosphere-ocean general circulation model (CGCM), namely the Climate Forecast System (CFS) of NCEP. The two models share the same atmospheric component; the only difference is the ocean coupling, which in the CFS is provided by the Modular Ocean Model version 4 (MOM4) of the Geophysical Fluid Dynamics Laboratory (GFDL). During a decade of spin-up simulation, prescribed sea-surface temperature (SST) fields for the target year were applied to the GFS on a daily basis, while the CFS ingested only the ocean condition at the first time step and iterated freely for the rest of the period. Both models were forced with the CO2 concentration and solar constant of the target year. Our analyses of the spin-up results indicate that free interaction between the ocean and the atmosphere is more helpful in producing the initial condition for the target year than prescribed SST forcing. This result is unexpected, since the GFS used forcing prescribed exactly from the target year. A detailed analysis will be discussed in this presentation.

  6. Computer simulation results of attitude estimation of earth orbiting satellites

    NASA Technical Reports Server (NTRS)

    Kou, S. R.

    1976-01-01

    Computer simulation results of attitude estimation of Earth-orbiting satellites (including the Space Telescope) subjected to environmental disturbances and noise are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and run on HP 9830A and HP 9866A computers. Simulation results show that the decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher order systems, this filter has computational advantages (i.e., smaller integration and roundoff errors) over a Kalman filter.
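For context, recursive estimation in its simplest form looks like the scalar filter below, which refines an estimate of a constant state one noisy sample at a time. This is a generic textbook sketch, not the paper's decomposed filter or its satellite attitude dynamics.

```python
import random

def recursive_estimate(measurements, r=1.0, p0=1000.0):
    """Scalar Kalman-style recursion for a constant state: each update
    blends the prior estimate with the new sample, weighted by the gain
    k = p / (p + r), where p is the estimate's error variance and r the
    measurement noise variance."""
    x, p = 0.0, p0            # state estimate and its error variance
    for z in measurements:
        k = p / (p + r)       # gain: trust the data when p >> r
        x += k * (z - x)      # measurement update
        p *= 1.0 - k          # variance shrinks with every sample
    return x, p

random.seed(42)
true_value = 5.0
zs = [true_value + random.gauss(0.0, 1.0) for _ in range(500)]
estimate, variance = recursive_estimate(zs)
```

The appeal of such recursions for the flight computers of the era is visible even here: each step is a handful of scalar operations with no stored history.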

  7. Efficient evaluation of wireless real-time control networks.

    PubMed

    Horvath, Peter; Yampolskiy, Mark; Koutsoukos, Xenofon

    2015-02-11

    In this paper, we present a system simulation framework for the design and performance evaluation of complex wireless cyber-physical systems. We describe the simulator architecture and the specific developments that are required to simulate cyber-physical systems relying on multi-channel, multi-hop mesh networks. We introduce realistic and efficient physical layer models and a system simulation methodology which provides statistically significant performance evaluation results with low computational complexity. The capabilities of the proposed framework are illustrated using the example of WirelessHART, a centralized, real-time, multi-hop mesh network designed for industrial control and monitoring applications.

  8. Design analysis and computer-aided performance evaluation of shuttle orbiter electrical power system. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Studies were conducted to develop appropriate space shuttle electrical power distribution and control (EPDC) subsystem simulation models and to apply the computer simulations to systems analysis of the EPDC. A previously developed software program (SYSTID) was adapted for this purpose. The following objectives were attained: (1) significant enhancement of the SYSTID time domain simulation software, (2) generation of functionally useful shuttle EPDC element models, and (3) illustrative simulation results in the analysis of EPDC performance, under the conditions of fault, current pulse injection due to lightning, and circuit protection sizing and reaction times.

  9. Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

    The next generation of scalable network simulators employ virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations could be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear if, and to what extent, their untamed execution affects the results in simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual time scheduling of VMs in network simulators, based on actual prototyped implementations. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad-hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically report the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and also clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.
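The core idea of virtual-time VM scheduling can be sketched as a least-virtual-time-first dispatcher: keep each VM's virtual clock in a min-heap and always run the laggard next, so no VM's simulated time races ahead of its peers. This is a generic sketch of the idea with made-up per-VM execution rates, not the paper's prototype scheduler.

```python
import heapq

def run_lvt_first(vm_rates, quantum=1.0, horizon=100.0):
    """Dispatch the VM with the smallest virtual clock for one quantum at a
    time until every clock reaches the horizon. vm_rates holds each VM's
    (hypothetical) virtual-time advance per quantum of real execution."""
    clocks = [(0.0, vm_id) for vm_id in range(len(vm_rates))]
    heapq.heapify(clocks)
    while True:
        vt, vm_id = heapq.heappop(clocks)
        if vt >= horizon:                 # the laggard is done: all are done
            heapq.heappush(clocks, (vt, vm_id))
            return sorted(clocks)
        # "run" the laggard VM for one quantum, advancing its virtual clock
        heapq.heappush(clocks, (vt + quantum * vm_rates[vm_id], vm_id))

final = run_lvt_first([1.0, 0.5, 2.0])
skew = max(vt for vt, _ in final) - min(vt for vt, _ in final)
```

Because the fastest VM can overshoot by at most one step, the virtual clocks stay within one quantum-times-rate of each other (here the skew is bounded by 2.0), whereas a fairness-based OS scheduler lets co-hosted VMs' virtual clocks drift apart arbitrarily.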

  10. Reliable oligonucleotide conformational ensemble generation in explicit solvent for force field assessment using reservoir replica exchange molecular dynamics simulations

    PubMed Central

    Henriksen, Niel M.; Roe, Daniel R.; Cheatham, Thomas E.

    2013-01-01

    Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 microseconds of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations. PMID:23477537
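The exchange move at the heart of any replica exchange variant is the Metropolis swap criterion below; the reservoir variant changes where candidate configurations come from, not this acceptance rule. Illustrative sketch only, with arbitrary temperatures and energies (Boltzmann constant absorbed into the temperatures).

```python
import math
import random

def swap_accepted(e_i, e_j, t_i, t_j, rng):
    """Metropolis criterion for exchanging configurations between replicas
    at temperatures t_i < t_j: accept with probability
    min(1, exp[(1/t_i - 1/t_j) * (e_i - e_j)])."""
    delta = (1.0 / t_i - 1.0 / t_j) * (e_i - e_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)

# when the colder replica holds the higher energy, the swap is always taken
always = swap_accepted(e_i=10.0, e_j=2.0, t_i=1.0, t_j=2.0, rng=random.Random(0))

# the reverse swap is accepted only with probability exp(-4) ~ 0.018
rng = random.Random(1)
acc_rate = sum(swap_accepted(2.0, 10.0, 1.0, 2.0, rng)
               for _ in range(20_000)) / 20_000
```

Frequent accepted swaps are what let configurations trapped in one basin at low temperature escape via the high-temperature replicas, which is why slow conformational sampling of RNA motivates these methods.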

  11. Reliable oligonucleotide conformational ensemble generation in explicit solvent for force field assessment using reservoir replica exchange molecular dynamics simulations.

    PubMed

    Henriksen, Niel M; Roe, Daniel R; Cheatham, Thomas E

    2013-04-18

    Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example, by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 μs of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations.

  12. Experimental study and numerical simulation of evacuation from a dormitory

    NASA Astrophysics Data System (ADS)

    Lei, Wenjun; Li, Angui; Gao, Ran; Zhou, Ning; Mei, Sen; Tian, Zhenguo

    2012-11-01

    The evacuation process of students from a dormitory is investigated by both experiment and modeling. We examine video records of pedestrian movement in a dormitory and identify typical characteristics of evacuation, including continuous pedestrian flow and mass behavior. Based on these observations, we find that simulation results that account for pre-movement time are closer to the experimental results. Using a model that includes pre-movement time, we simulate the evacuation process, compare the simulation with the experimental results, and find close agreement. The crowd massing phenomenon is also examined: different degrees of crowd massing emerge for different desired velocities, and massing becomes more serious as the desired velocity increases. We also observe the faster-is-slower effect: when the positive effect of increasing the desired velocity no longer compensates for its negative effect, a greater desired velocity leads to a longer evacuation time. The video records show that mass behavior is evident during the evacuation process, and the same phenomenon appears in the simulation. The results of this study are applicable to buildings in which living and resting areas occupy the majority of the space, such as dormitories, residential buildings, and hotels.

  13. Fast Learning for Immersive Engagement in Energy Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M

    The fast computation that is critical for immersive engagement with, and learning from, energy simulations would be furthered by a general method for creating rapidly computed, simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced-form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost, with response times (typically less than one minute of wall-clock time) suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations; instead, they supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.

  14. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools for analyzing large-scale structure data, and many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement set by the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores, and we completed the 3600 simulations in a reasonable computation time of 200k core-hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.

  15. GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.

    PubMed

    Liu, Yangchuan; Tang, Yuguo; Gao, Xin

    2017-12-01

    The GATE Monte Carlo simulation platform has good application prospects for treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server, and the cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, time decreases of 41× and 32× were achieved compared to the single worker node case and the single-threaded case, respectively. The test of Hadoop's fault tolerance showed that the simulation correctness was not affected by the failure of some worker nodes. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
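The map/reduce structure described above reduces to: split one macro into self-contained sub-macros with independent primary counts and seeds, run each as a map task, and sum the partial dose maps in the reduce step. The toy below mimics that flow in-process, with a 4-voxel "phantom", a stand-in for the GATE run, and a hypothetical seeding scheme; Hadoop Streaming would distribute the map calls across workers.

```python
import random

def split_macro(total_primaries, n_chunks):
    """Split one large run into self-contained sub-jobs, each carrying its
    own primary count and an independent RNG seed."""
    base, rem = divmod(total_primaries, n_chunks)
    return [(base + (1 if i < rem else 0), 1000 + i) for i in range(n_chunks)]

def map_task(chunk):
    """Stand-in for one GATE sub-simulation: deposit unit 'dose' from each
    primary into a random voxel of a 4-voxel phantom."""
    n_primaries, seed = chunk
    rng = random.Random(seed)
    dose = [0.0] * 4
    for _ in range(n_primaries):
        dose[rng.randrange(4)] += 1.0
    return dose

def reduce_task(partial_doses):
    """Aggregate map outputs by voxel-wise summation."""
    return [sum(voxel) for voxel in zip(*partial_doses)]

chunks = split_macro(10_000, 8)
total_dose = reduce_task(map(map_task, chunks))
```

Because each sub-job is self-contained, a failed map task can simply be re-executed on another node, which is the fault-tolerance behavior the study verified.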

  16. Real time implementation and control validation of the wind energy conversion system

    NASA Astrophysics Data System (ADS)

    Sattar, Adnan

    The purpose of this thesis is to analyze the dynamic and transient characteristics of wind energy conversion systems, including stability issues, in a real-time environment using the Real Time Digital Simulator (RTDS). Among the various power system simulation tools available, the RTDS is one of the most powerful. The RTDS simulator has a graphical user interface called RSCAD, which contains a detailed component model library for both power system and control analyses, and its hardware is based on digital signal processors mounted in racks. The simulator can interface real-world signals from external devices and is therefore used to test protection and control system equipment. Dynamic and transient characteristics of fixed and variable speed wind turbine generating systems (WTGSs) are analyzed in this thesis. A Static Synchronous Compensator (STATCOM), a flexible AC transmission system (FACTS) device, is used to enhance the fault ride through (FRT) capability of a fixed speed wind farm. A two-level voltage source converter based STATCOM is modeled in both the VSC small time-step and VSC large time-step environments of the RTDS. The simulation results of the RTDS model system are compared with those of the off-line EMTP-type software PSCAD/EMTDC. A new operational scheme for a MW-class grid-connected variable speed wind turbine driven permanent magnet synchronous generator (VSWT-PMSG) is developed. The VSWT-PMSG uses fully controlled frequency converters for grid interfacing and thus has the ability to control real and reactive power simultaneously. The frequency converters are modeled in the VSC small time-step of the RTDS, and a three-phase realistic grid is adopted in the RSCAD simulation through the optical analogue digital converter (OADC) card of the RTDS. Steady-state and LVRT analyses are carried out to validate the proposed operational scheme. 
    The simulation results show good agreement with the real-time simulation software and thus can be used to validate the controllers for real-time operation. Integration of a Battery Energy Storage System (BESS) with a wind farm can smooth its intermittent power fluctuations. This work also covers the real-time implementation of a Sodium Sulfur (NaS) type BESS integrated with the STATCOM. The main advantage of this system is that it can provide reactive power support along with real power exchange from the BESS unit. The BESS integrated with the STATCOM is modeled in the VSC small time-step of the RTDS. A cascaded vector control scheme is used for the control of the STATCOM, and a suitable control is developed for the charging/discharging of the NaS type BESS. Results are compared with the laboratory-standard power system software PSCAD/EMTDC, and the advantages of using the RTDS in dynamic and transient characteristic analyses of wind farms are clearly demonstrated.

  17. Simulation of Rate-Related (Dead-Time) Losses In Passive Neutron Multiplicity Counting Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, L.G.; Norman, P.I.; Leadbeater, T.W.

Passive Neutron Multiplicity Counting (PNMC) based on Multiplicity Shift Register (MSR) electronics (a form of time correlation analysis) is a widely used non-destructive assay technique for quantifying spontaneously fissile materials such as Pu. At high event rates, dead-time losses perturb the count rates with the Singles, Doubles and Triples being increasingly affected. Without correction these perturbations are a major source of inaccuracy in the measured count rates and assay values derived from them. This paper presents the simulation of dead-time losses and investigates the effect of applying different dead-time models on the observed MSR data. Monte Carlo methods have been used to simulate neutron pulse trains for a variety of source intensities and with ideal detection geometry, providing an event by event record of the time distribution of neutron captures within the detection system. The action of the MSR electronics was modelled in software to analyse these pulse trains. Stored pulse trains were perturbed in software to apply the effects of dead-time according to the chosen physical process; for example, the ideal paralysable (extending) and non-paralysable models with an arbitrary dead-time parameter. Results of the simulations demonstrate the change in the observed MSR data when the system dead-time parameter is varied. In addition, the paralysable and non-paralysable models of dead-time are compared. These results form part of a larger study to evaluate existing dead-time corrections and to extend their application to correlated sources. (authors)
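The perturbation step this abstract describes, applying an ideal paralysable or non-paralysable dead-time to a stored pulse train, can be sketched as follows. This is a minimal illustration rather than the authors' code; the event rate, dead-time parameter, and function name are assumptions:

```python
import random

def apply_dead_time(event_times, tau, paralysable):
    """Filter a time-ordered pulse train through an ideal dead-time model.

    Non-paralysable: an event arriving within tau of the last *recorded*
    event is lost.  Paralysable (extending): every arrival, recorded or
    lost, restarts the dead period."""
    recorded = []
    blocked_until = -1.0
    for t in event_times:
        if t >= blocked_until:
            recorded.append(t)
            blocked_until = t + tau
        elif paralysable:
            blocked_until = t + tau  # a lost event still extends the dead time
    return recorded

random.seed(1)
# Poisson pulse train: exponential inter-arrival times at 1e5 events/s
rate, n = 1.0e5, 10_000
times, t = [], 0.0
for _ in range(n):
    t += random.expovariate(rate)
    times.append(t)

tau = 2.0e-6  # 2 us dead-time parameter (rate * tau = 0.2, substantial losses)
non_par = apply_dead_time(times, tau, paralysable=False)
par = apply_dead_time(times, tau, paralysable=True)
print(len(times), len(non_par), len(par))
```

As expected, the paralysable model loses at least as many counts as the non-paralysable model at the same dead-time parameter, since lost events keep extending the dead period.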

  18. Dealing with Time in Health Economic Evaluation: Methodological Issues and Recommendations for Practice.

    PubMed

    O'Mahony, James F; Newall, Anthony T; van Rosmalen, Joost

    2015-12-01

    Time is an important aspect of health economic evaluation, as the timing and duration of clinical events, healthcare interventions and their consequences all affect estimated costs and effects. These issues should be reflected in the design of health economic models. This article considers three important aspects of time in modelling: (1) which cohorts to simulate and how far into the future to extend the analysis; (2) the simulation of time, including the difference between discrete-time and continuous-time models, cycle lengths, and converting rates and probabilities; and (3) discounting future costs and effects to their present values. We provide a methodological overview of these issues and make recommendations to help inform both the conduct of cost-effectiveness analyses and the interpretation of their results. For choosing which cohorts to simulate and how many, we suggest analysts carefully assess potential reasons for variation in cost effectiveness between cohorts and the feasibility of subgroup-specific recommendations. For the simulation of time, we recommend using short cycles or continuous-time models to avoid biases and the need for half-cycle corrections, and provide advice on the correct conversion of transition probabilities in state transition models. Finally, for discounting, analysts should not only follow current guidance and report how discounting was conducted, especially in the case of differential discounting, but also seek to develop an understanding of its rationale. Our overall recommendations are that analysts explicitly state and justify their modelling choices regarding time and consider how alternative choices may impact on results.
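The rate/probability conversion flagged in point (2) is a frequent source of error, because transition probabilities do not scale linearly with cycle length. A minimal sketch assuming a constant hazard (the function names are illustrative, not from the paper):

```python
import math

def prob_from_rate(rate, t):
    """P(at least one event in time t) under a constant rate (hazard)."""
    return 1.0 - math.exp(-rate * t)

def rate_from_prob(p, t):
    """Constant rate implied by an event probability p over time t."""
    return -math.log(1.0 - p) / t

def present_value(amount, years, annual_rate=0.03):
    """Discount a future cost or effect to its present value."""
    return amount / (1.0 + annual_rate) ** years

# A 20% annual transition probability is NOT 20%/12 per month:
p_year = 0.20
r = rate_from_prob(p_year, 1.0)          # implied annual rate
p_month = prob_from_rate(r, 1.0 / 12.0)  # correct monthly probability
naive = p_year / 12.0                    # common shortcut, slightly wrong
print(round(p_month, 5), round(naive, 5))

# Discounting a cost incurred 10 years out at 3% per year:
print(round(present_value(1000.0, 10), 2))
```

The gap between the correct and naive monthly probabilities is small here but grows with the annual probability, which is why guidance on converting rates and probabilities matters for short-cycle models.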

  19. Simulation of a Real-Time Local Data Integration System over East-Central Florida

    NASA Technical Reports Server (NTRS)

    Case, Jonathan

    1999-01-01

    The Applied Meteorology Unit (AMU) simulated a real-time configuration of a Local Data Integration System (LDIS) using data from 15-28 February 1999. The objectives were to assess the utility of a simulated real-time LDIS, evaluate and extrapolate system performance to identify the hardware necessary to run a real-time LDIS, and determine the sensitivities of LDIS. The ultimate goal for running LDIS is to generate analysis products that enhance short-range (less than 6 h) weather forecasts issued in support of the 45th Weather Squadron, Spaceflight Meteorology Group, and Melbourne National Weather Service operational requirements. The simulation used the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) software on an IBM RS/6000 workstation with a 67-MHz processor. This configuration ran in real-time, but not sufficiently fast for operational requirements. Thus, the AMU recommends a workstation with a 200-MHz processor and 512 megabytes of memory to run the AMU's configuration of LDIS in real-time. This report presents results from two case studies and several data sensitivity experiments. ADAS demonstrates utility through its ability to depict high-resolution cloud and wind features in a variety of weather situations. The sensitivity experiments illustrate the influence of disparate data on the resulting ADAS analyses.

  20. Simulation center training as a means to improve resident performance in percutaneous noncontinuous CT-guided fluoroscopic procedures with dose reduction.

    PubMed

    Mendiratta-Lala, Mishal; Williams, Todd R; Mendiratta, Vivek; Ahmed, Hafeez; Bonnett, John W

    2015-04-01

The purpose of this study was to evaluate the effectiveness of a multifaceted simulation-based resident training for CT-guided fluoroscopic procedures by measuring procedural and technical skills, radiation dose, and procedure times before and after simulation training. A prospective analysis included 40 radiology residents and eight staff radiologists. Residents took an online pretest to assess baseline procedural knowledge. Second- through fourth-year residents' baseline technical skills with a procedural phantom were evaluated. First- through third-year residents then underwent formal didactic and simulation-based procedural and technical training with one of two interventional radiologists, followed by 1 month of supervised phantom-based practice. Thereafter, residents underwent final written and practical examinations. The practical examination included essential items from a 20-point checklist, including site and side marking, consent, time-out, and sterile technique, along with a technical skills portion assessing pedal steps, radiation dose, needle redirects, and procedure time. The results indicated statistically significant improvement in procedural and technical skills after simulation training. For residents, the median number of pedal steps decreased by three (p=0.001), median dose decreased by 15.4 mGy (p<0.001), median procedure time decreased by 4.0 minutes (p<0.001), median number of needle redirects decreased by 1.0 (p=0.005), and median number of 20-point checklist items successfully completed increased by three (p<0.001). The results suggest that procedural skills can be acquired and improved by simulation-based training of residents, regardless of experience. CT simulation training decreases procedural time, decreases radiation dose, and improves resident efficiency and confidence, which may transfer to clinical practice with improved patient care and safety.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, E.A.; Smed, P.F.; Bryndum, M.B.

The paper describes the numerical program, PIPESIN, that simulates the behavior of a pipeline placed on an erodible seabed. PIPEline Seabed INteraction, from installation until a stable pipeline-seabed configuration has developed, is simulated in the time domain including all important physical processes. The program is the result of the joint research project "Free Span Development and Self-lowering of Offshore Pipelines", sponsored by the EU and a group of companies and carried out by the Danish Hydraulic Institute and Delft Hydraulics. The basic modules of PIPESIN are described. The description of the scouring processes has been based on and verified through physical model tests carried out as part of the research project. The program simulates a section of the pipeline (typically 500 m) in the time domain, the main input being time series of the waves and current. The main results include predictions of the onset of free spans, their length distribution, their variation in time, and the lowering of the pipeline as a function of time.

  2. Development and testing of a fast conceptual river water quality model.

    PubMed

    Keupers, Ingrid; Willems, Patrick

    2017-04-15

Modern, model-based river quality management strongly relies on river water quality models to simulate the temporal and spatial evolution of pollutant concentrations in the water body. Such models are typically constructed by extending detailed hydrodynamic models with a component describing the advection-diffusion and water quality transformation processes in a detailed, physically based way. This approach is too computationally demanding, especially when simulating the long time periods needed for statistical analysis of the results, or when model sensitivity analysis, calibration and validation require a large number of model runs. To overcome this problem, a structure identification method to set up a conceptual river water quality model has been developed. Instead of calculating the water quality concentrations at each water level and discharge node, the river branch is divided into conceptual reservoirs based on user information such as locations of interest and boundary inputs. These reservoirs are modelled as Plug Flow Reactors (PFRs) and Continuously Stirred Tank Reactors (CSTRs) to describe advection and diffusion processes. The same water quality transformation processes as in the detailed models are considered, but with adjusted residence times based on the hydrodynamic simulation results and calibrated to the detailed water quality simulation results. The developed approach allows for a much faster calculation time (a factor of 10^5) without significant loss of accuracy, making it feasible to perform time-demanding scenario runs. Copyright © 2017 Elsevier Ltd. All rights reserved.
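The conceptual-reservoir idea can be illustrated with a toy chain of CSTRs carrying a pollutant that decays by first-order kinetics. This is a generic sketch, not the authors' model; the reach layout, residence time, and decay rate are hypothetical:

```python
def simulate_cstr_chain(n_tanks, residence_time, decay_rate, c_in, dt, t_end):
    """March a chain of continuously stirred tank reactors (CSTRs) in time.

    Each tank obeys dC/dt = (C_upstream - C)/residence_time - decay_rate*C,
    advanced with an explicit Euler update; the outflow of each tank feeds
    the next one downstream."""
    c = [0.0] * n_tanks
    t = 0.0
    while t < t_end:
        upstream = c_in
        for i in range(n_tanks):
            dcdt = (upstream - c[i]) / residence_time - decay_rate * c[i]
            upstream = c[i]          # old value of tank i feeds tank i+1
            c[i] += dt * dcdt
        t += dt
    return c

# Hypothetical reach: 3 reservoirs, 0.5 d residence time each, 0.3 /d decay
profile = simulate_cstr_chain(3, 0.5, 0.3, c_in=10.0, dt=0.01, t_end=30.0)
print([round(x, 3) for x in profile])
```

At steady state each tank attenuates its inflow by the factor 1/(1 + k*tau), so the concentration profile decreases monotonically down the chain; the speed-up over a node-by-node hydrodynamic grid comes from having only a handful of such reservoirs per reach.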

  3. Adaptive time steps in trajectory surface hopping simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spörkel, Lasse, E-mail: spoerkel@kofo.mpg.de; Thiel, Walter, E-mail: thiel@kofo.mpg.de

    2016-05-21

Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.

  4. Adaptive time steps in trajectory surface hopping simulations

    NASA Astrophysics Data System (ADS)

    Spörkel, Lasse; Thiel, Walter

    2016-05-01

    Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
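The general idea, shrinking the time step only where the integration becomes critical and relaxing it again afterwards, can be sketched with a generic step-doubling integrator. This is not the authors' protocol (which adapts steps around orbital mixing in MRCI regions); it is a minimal analogue for an ordinary differential equation, with all names and tolerances chosen for illustration:

```python
import math

def integrate_adaptive(f, y0, t0, t1, dt0, tol):
    """Integrate dy/dt = f(t, y) with step-doubling error control.

    Compare one full midpoint (RK2) step against two half steps; if they
    disagree by more than tol, halve dt and retry, so only the critical
    regions pay for the smaller step."""
    def rk2(t, y, h):
        k1 = f(t, y)
        return y + h * f(t + 0.5 * h, y + 0.5 * h * k1)

    t, y, dt, accepted = t0, y0, dt0, 0
    while t1 - t > 1e-9:
        h = min(dt, t1 - t)
        full = rk2(t, y, h)
        half = rk2(t + 0.5 * h, rk2(t, y, 0.5 * h), 0.5 * h)
        if abs(full - half) > tol and h > 1e-6:
            dt = h / 2.0            # critical region: shrink the step
            continue
        y, t = half, t + h
        dt = min(dt * 2.0, dt0)     # relax back toward the base step
        accepted += 1
    return y, accepted

# Moderately stiff test problem: dy/dt = -50*(y - cos t), y(0) = 0
y_end, steps = integrate_adaptive(lambda t, y: -50.0 * (y - math.cos(t)),
                                  0.0, 0.0, 2.0, 0.1, 1e-4)
print(round(y_end, 4), steps)
```

The fast initial transient forces small steps; once the solution settles onto the slowly varying branch, the step size recovers, mirroring how the TSH protocol confines the extra cost to the problematic regions.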

  5. Validation of Potential Models for Li2O in Classical Molecular Dynamics Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oda, Takuji; Oya, Yasuhisa; Tanaka, Satoru

    2007-08-01

Four Buckingham-type pairwise potential models for Li2O were assessed by molecular statics and dynamics simulations. In the static simulations, all models afforded acceptable agreement with experimental values and ab initio calculation results for the crystalline properties. Moreover, the superionic phase transition was realized in the dynamics simulations. However, no model adequately reproduced the Li diffusivity and the lattice expansion at the same time. When using these models in future radiation simulations, these features should be taken into account in order to reduce the model dependency of the results.

  6. Design and Optimization of a Stationary Electrode in a Vertically-Driven MEMS Inertial Switch for Extending Contact Duration

    PubMed Central

    Xu, Qiu; Yang, Zhuo-Qing; Fu, Bo; Bao, Yan-Ping; Wu, Hao; Sun, Yun-Na; Zhao, Meng-Yuan; Li, Jian; Ding, Gui-Fu; Zhao, Xiao-Lin

    2017-01-01

A novel micro-electro-mechanical systems (MEMS) inertial microswitch with a flexible contact-enhanced structure to extend the contact duration is proposed in the present work. To investigate the stiffness k of the stationary electrodes, stationary electrodes with different shapes, thicknesses h, widths b, and lengths l were designed, analyzed, and simulated using ANSYS software. Both the analytical and the simulated results indicate that the stiffness k increases with thickness h and width b, decreases with increasing length l, and depends on the electrode shape. Inertial micro-switches with different kinds of stationary electrodes were simulated using ANSYS software and fabricated using surface micromachining technology. The dynamic simulation indicates that the contact time decreases with increasing thickness h and width b, increases with length l, and likewise depends on the shape; as a result, the contact time decreases with the stiffness k of the stationary electrode. Furthermore, the simulated results reveal that the stiffness k changes more rapidly with h and l than with b. However, an overly large switch conflicts with the small footprint expected in the structural design, so it is unreasonable to extend the contact duration by increasing the length l excessively. Thus, the best and most convenient way to prolong the contact time is to reduce the thickness h of the stationary electrode while keeping the plane geometric structure of the inertial micro-switch unchanged. Finally, the fabricated micro-switches with different shapes of stationary electrodes were evaluated with a standard dropping hammer system. The maximum contact time measured under 288 g acceleration reaches 125 µs, and the test results are in accordance with the simulated results. The conclusions obtained in this work can provide guidance for the future design and fabrication of inertial microswitches. PMID:28272330
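The reported trends match the textbook end-load stiffness of a straight rectangular cantilever, k = E*b*h^3/(4*l^3): cubic in h and l, linear in b. A quick numerical check; note this formula is the straight-beam special case, not the paper's shaped electrodes, and the material and dimensions below are hypothetical:

```python
def cantilever_stiffness(E, b, h, l):
    """End-load stiffness of a straight rectangular cantilever beam."""
    return E * b * h**3 / (4.0 * l**3)

# Hypothetical electroplated-metal electrode, SI units throughout
E = 200e9                       # Young's modulus, Pa (assumed)
b, h, l = 40e-6, 10e-6, 800e-6  # width, thickness, length in metres
k0 = cantilever_stiffness(E, b, h, l)

# Trends reported in the abstract: k grows with h and b, shrinks with l,
# and responds more strongly to h and l (cubic) than to b (linear).
print(round(cantilever_stiffness(E, b, 2 * h, l) / k0, 3))  # doubling h -> 8x
print(round(cantilever_stiffness(E, 2 * b, h, l) / k0, 3))  # doubling b -> 2x
print(round(cantilever_stiffness(E, b, h, 2 * l) / k0, 3))  # doubling l -> 1/8
```

The cubic dependence on h is also why thinning the electrode is the most effective lever for softening the contact and prolonging contact time without enlarging the device footprint.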

  7. Design and Optimization of a Stationary Electrode in a Vertically-Driven MEMS Inertial Switch for Extending Contact Duration.

    PubMed

    Xu, Qiu; Yang, Zhuo-Qing; Fu, Bo; Bao, Yan-Ping; Wu, Hao; Sun, Yun-Na; Zhao, Meng-Yuan; Li, Jian; Ding, Gui-Fu; Zhao, Xiao-Lin

    2017-03-07

A novel micro-electro-mechanical systems (MEMS) inertial microswitch with a flexible contact-enhanced structure to extend the contact duration is proposed in the present work. To investigate the stiffness k of the stationary electrodes, stationary electrodes with different shapes, thicknesses h, widths b, and lengths l were designed, analyzed, and simulated using ANSYS software. Both the analytical and the simulated results indicate that the stiffness k increases with thickness h and width b, decreases with increasing length l, and depends on the electrode shape. Inertial micro-switches with different kinds of stationary electrodes were simulated using ANSYS software and fabricated using surface micromachining technology. The dynamic simulation indicates that the contact time decreases with increasing thickness h and width b, increases with length l, and likewise depends on the shape; as a result, the contact time decreases with the stiffness k of the stationary electrode. Furthermore, the simulated results reveal that the stiffness k changes more rapidly with h and l than with b. However, an overly large switch conflicts with the small footprint expected in the structural design, so it is unreasonable to extend the contact duration by increasing the length l excessively. Thus, the best and most convenient way to prolong the contact time is to reduce the thickness h of the stationary electrode while keeping the plane geometric structure of the inertial micro-switch unchanged. Finally, the fabricated micro-switches with different shapes of stationary electrodes were evaluated with a standard dropping hammer system. The maximum contact time measured under 288 g acceleration reaches 125 µs, and the test results are in accordance with the simulated results. The conclusions obtained in this work can provide guidance for the future design and fabrication of inertial microswitches.

  8. Determination of blood oxygenation in the brain by time-resolved reflectance spectroscopy: influence of the skin, skull, and meninges

    NASA Astrophysics Data System (ADS)

    Hielscher, Andreas H.; Liu, Hanli; Wang, Lihong V.; Tittel, Frank K.; Chance, Britton; Jacques, Steven L.

    1994-07-01

    Near infrared light has been used for the determination of blood oxygenation in the brain but little attention has been paid to the fact that the states of blood oxygenation in arteries, veins, and capillaries differ substantially. In this study, Monte Carlo simulations for a heterogeneous system were conducted, and near infrared time-resolved reflectance measurements were performed on a heterogeneous tissue phantom model. The model was made of a solid polyester resin, which simulates the tissue background. A network of tubes was distributed uniformly through the resin to simulate the blood vessels. The time-resolved reflectance spectra were taken with different absorbing solutions filled in the network. Based on the simulation and experimental results, we investigated the dependence of the absorption coefficient obtained from the heterogeneous system on the absorption of the actual absorbing solution filled in the tubes. We show that light absorption by the brain should result from the combination of blood and blood-free tissue background.

  9. Taxi Time Prediction at Charlotte Airport Using Fast-Time Simulation and Machine Learning Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2016-01-01

Accurate taxi time prediction is required for enabling efficient runway scheduling that can increase runway throughput and reduce taxi times and fuel consumption on the airport surface. Currently, NASA and American Airlines are jointly developing a decision-support tool called Spot and Runway Departure Advisor (SARDA) that assists airport ramp controllers in making gate pushback decisions and improving the overall efficiency of airport surface traffic. In this presentation, we propose to use Linear Optimized Sequencing (LINOS), a discrete-event fast-time simulation tool, to predict taxi times and provide the estimates to the runway scheduler in real-time airport operations. To assess its prediction accuracy, we also introduce a data-driven analytical method using machine learning techniques. These two taxi time prediction methods are evaluated with actual taxi time data obtained from the SARDA human-in-the-loop (HITL) simulation for Charlotte Douglas International Airport (CLT) using various performance measurement metrics. Based on the taxi time prediction results, we also discuss how prediction accuracy is affected by the operational complexity at this airport and how the fast-time simulation model can be improved before implementing it with an airport scheduling algorithm in a real-time environment.

  10. Thermal Analysis System

    NASA Technical Reports Server (NTRS)

    DiStefano, III, Frank James (Inventor); Wobick, Craig A. (Inventor); Chapman, Kirt Auldwin (Inventor); McCloud, Peter L. (Inventor)

    2014-01-01

    A thermal fluid system modeler including a plurality of individual components. A solution vector is configured and ordered as a function of one or more inlet dependencies of the plurality of individual components. A fluid flow simulator simulates thermal energy being communicated with the flowing fluid and between first and second components of the plurality of individual components. The simulation extends from an initial time to a later time step and bounds heat transfer to be substantially between the flowing fluid, walls of tubes formed in each of the individual components of the plurality, and between adjacent tubes. Component parameters of the solution vector are updated with simulation results for each of the plurality of individual components of the simulation.

  11. Transient simulations of nitrogen load for a coastal aquifer and embayment, Cape Cod, MA

    USGS Publications Warehouse

    Colman, J.A.; Masterson, J.P.

    2008-01-01

    A time-varying, multispecies, modular, three-dimensional transport model (MT3DMS) was developed to simulate groundwater transport of nitrogen from increasing sources on land to the shore of Nauset Marsh, a coastal embayment of the Cape Cod National Seashore. Simulated time-dependent nitrogen loads at the coast can be used to correlate with current observed coastal eutrophic effects, to predict current and ultimate effects of development, and to predict loads resulting from source remediation. A time-varying nitrogen load, corrected for subsurface loss, was applied to the land subsurface in the transport model based on five land-use coverages documenting increasing development from 1951 to 1999. Simulated nitrogen loads to Nauset Marsh increased from 230 kg/yr before 1930 to 4390 kg/yr in 2001 to 7130 kg/yr in 2100, assuming future nitrogen sources constant at the 1999 land-use rate. The simulated nitrogen load per area of embayment was 5 times greater for Salt Pond, a eutrophic landward extension of Nauset Marsh, than for other Nauset Marsh areas. Sensitivity analysis indicated that load results were little affected by changes in vertical discretization and annual recharge but much affected by the nitrogen loss rate assumed for a kettle lake downgradient from a landfill.

  12. The modeling, simulation, and control of transport phenomena in a thermally destabilizing Bridgman crystal growth system

    NASA Astrophysics Data System (ADS)

    Sonda, Paul Julio

This thesis presents a comprehensive examination of the modeling, simulation, and control of axisymmetric flows occurring in a vertical Bridgman crystal growth system with the melt underlying the crystal. The significant complexity and duration of the manufacturing process make experimental optimization a prohibitive task. Numerical simulation has emerged as a powerful tool in understanding the processing issues still prevalent in industry. A first-principles model is developed to better understand the transport phenomena within a representative vertical Bridgman system. The set of conservation equations for momentum, energy, and species concentration are discretized using the Galerkin finite element method and simulated using accurate time-marching schemes. Simulation results detail the occurrence of fascinating nonlinear dynamics, in the form of stable, time-varying behavior for sufficiently large melt regimes and multiple steady flow states. This discovery of time-periodic behavior at high flow intensities is qualitatively consistent with experimental observations. Transient simulations demonstrate that process operating conditions have a marked effect on the hydrodynamic behavior within the melt, which consequently affects the dopant concentration profile within the crystal. The existence of nonlinear dynamical behavior within this system motivates the need for feedback control algorithms which can provide superior crystal quality. This work studies the feasibility of using crucible rotation to control flows in the vertical Bridgman system. Simulations show that crucible rotation acts to suppress the axisymmetric flows. However, for the case when the melt lies below the crystal, crucible rotation also acts to accelerate the onset of time-periodic behavior. This result is attributed to coupling between the centrifugal force and the intense, buoyancy-driven flows.
Proportional, proportional-integral, and input-output linearizing controllers are applied to vertical Bridgman systems in stabilizing (crystal below the melt) and destabilizing (melt below the crystal) configurations. The spatially-averaged, axisymmetric kinetic energy is the controlled output. The flows are controlled via rotation of the crucible containing the molten material. Simulation results show that feedback controllers using crucible rotation effectively attenuate flow oscillations in a stabilizing configuration with time-varying disturbance. Crucible rotation is not an optimal choice for suppressing inherent flow oscillations in the destabilizing configuration.

  13. A numerical simulation of finite-length Taylor-Couette flow

    NASA Technical Reports Server (NTRS)

    Streett, C. L.; Hussaini, M. Y.

    1988-01-01

    Results from numerical simulations of finite-length Taylor-Couette flow are presented. Included are time-accurate and steady-state studies of the change in the nature of the symmetric two-cell/asymmetric one-cell bifurcation with varying aspect ratio and of the Reynolds number/aspect ratio locus of the two-cell/four-cell bifurcation. Preliminary results from wavy-vortex simulations at low aspect ratios are also presented.

  14. Delivery performance of conventional aircraft by terminal-area, time-based air traffic control: A real-time simulation evaluation

    NASA Technical Reports Server (NTRS)

    Credeur, Leonard; Houck, Jacob A.; Capron, William R.; Lohr, Gary W.

    1990-01-01

    A description and results are presented of a study to measure the performance and reaction of airline flight crews, in a full workload DC-9 cockpit, flying in a real-time simulation of an air traffic control (ATC) concept called Traffic Intelligence for the Management of Efficient Runway-scheduling (TIMER). Experimental objectives were to verify earlier fast-time TIMER time-delivery precision results and obtain data for the validation or refinement of existing computer models of pilot/airborne performance. Experimental data indicated a runway threshold, interarrival-time-error standard deviation in the range of 10.4 to 14.1 seconds. Other real-time system performance parameters measured include approach speeds, response time to controller turn instructions, bank angles employed, and ATC controller message delivery-time errors.

  15. Comparison of calculation and simulation of evacuation in real buildings

    NASA Astrophysics Data System (ADS)

    Szénay, Martin; Lopušniak, Martin

    2018-03-01

Each building must meet requirements for safe evacuation in order to prevent casualties; methods for evaluating evacuation are therefore used when designing buildings. In this paper, calculation methods were tested on three real buildings. Evacuation times were calculated according to Slovak standards and also computed with the buildingExodus simulation software. If the calculation methods are suitably selected with respect to the nature of the evacuation, and correct parameter values are entered, evacuation times almost identical to those obtained from simulation can be achieved; the difference ranges from 1% to 27%.

  16. Alternative modeling methods for plasma-based Rf ion sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H{sup −} source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. Inmore » particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H{sup −} ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. 
We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  17. Alternative modeling methods for plasma-based Rf ion sources.

    PubMed

    Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. 
We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  18. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu

    2011-07-01

Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort, and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
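The source-iteration procedure that underlies Sweep3D can be illustrated with a minimal one-group, 1D slab analogue (illustrative parameters, not those of the paper): diamond-difference sweeps over an S4 Gauss-Legendre quadrature, with the scattering source lagged one iteration.

```python
def source_iteration(nx=50, sigma_t=1.0, sigma_s=0.5, q=1.0,
                     width=10.0, tol=1e-8, max_iter=200):
    """One-group 1D slab transport: discrete ordinates (S4) with
    diamond-difference sweeps and source iteration, vacuum boundaries."""
    # S4 Gauss-Legendre quadrature on [-1, 1] (weights sum to 2)
    mus = [-0.8611363116, -0.3399810436, 0.3399810436, 0.8611363116]
    wts = [0.3478548451, 0.6521451549, 0.6521451549, 0.3478548451]
    dx = width / nx
    phi = [0.0] * nx                                   # scalar flux
    for it in range(max_iter):
        src = [0.5 * (sigma_s * p + q) for p in phi]   # isotropic source per unit angle
        phi_new = [0.0] * nx
        for mu, w in zip(mus, wts):
            psi_edge = 0.0                             # vacuum: no incoming flux
            cells = range(nx) if mu > 0 else range(nx - 1, -1, -1)
            for i in cells:
                a = 2.0 * abs(mu) / dx
                psi_c = (src[i] + a * psi_edge) / (sigma_t + a)  # cell-average angular flux
                psi_edge = 2.0 * psi_c - psi_edge                # outgoing edge flux
                phi_new[i] += w * psi_c                          # scalar flux tally
        if max(abs(u - v) for u, v in zip(phi_new, phi)) < tol:
            return phi_new, it + 1
        phi = phi_new
    return phi, max_iter

phi, iters = source_iteration()   # interior flux ~ q/(sigma_t - sigma_s) = 2.0
```

Each outer iteration sweeps every ordinate across the mesh; the scattering term couples successive sweeps, and the iteration converges geometrically with ratio roughly sigma_s/sigma_t.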

  19. Fluid Structural Analysis of Human Cerebral Aneurysm Using Their Own Wall Mechanical Properties

    PubMed Central

    Valencia, Alvaro; Burdiles, Patricio; Ignat, Miguel; Mura, Jorge; Rivera, Rodrigo; Sordo, Juan

    2013-01-01

Computational Structural Dynamics (CSD) simulations, Computational Fluid Dynamics (CFD) simulation, and Fluid Structure Interaction (FSI) simulations were carried out in an anatomically realistic model of a saccular cerebral aneurysm with the objective of quantifying the effects of the type of simulation on the principal fluid and solid mechanics results. Eight CSD simulations, one CFD simulation, and four FSI simulations were made. The results allowed the study of the influence of the type of material elements in the solid, the aneurysm's wall thickness, and the type of simulation on the modeling of a human cerebral aneurysm. The simulations used the aneurysm's own wall mechanical properties. The most complex simulation was the fully coupled FSI simulation with hyperelastic Mooney-Rivlin material, normal internal pressure, and normal variable thickness. The FSI simulation coupled in one direction using hyperelastic Mooney-Rivlin material, normal internal pressure, and normal variable thickness is the one that presents results most similar to the more complex FSI simulation, while requiring one-fourth of the calculation time. PMID:24151523

  20. Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.

    PubMed

    Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A

    2011-01-01

    Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
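A stopping rule of the kind this paper proposes can be sketched under a simplifying assumption: a single scalar model output is monitored, and sampling stops once the 95% confidence half-width of its running mean falls below a relative tolerance (the paper combines several criteria; this illustrates only one, and the toy "model" below is an assumption, not a wastewater treatment model).

```python
import random
import statistics

def mc_until_converged(model, rel_tol=0.01, batch=100, min_batches=5,
                       max_samples=100_000, z=1.96, seed=1):
    """Run Monte Carlo batches until the 95% confidence half-width of the
    running mean falls below rel_tol * |mean| (a simple stopping rule)."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < max_samples:
        samples.extend(model(rng) for _ in range(batch))
        if len(samples) >= min_batches * batch:
            mean = statistics.fmean(samples)
            sem = statistics.stdev(samples) / len(samples) ** 0.5
            if z * sem < rel_tol * abs(mean):   # confidence interval tight enough
                break
    return statistics.fmean(samples), len(samples)

# toy "model output": a noisy scalar around 10.0 (purely illustrative)
mean, n = mc_until_converged(lambda rng: rng.gauss(10.0, 2.0))
```

Note that, as the paper's results suggest, the number of samples at which this rule fires depends strongly on the variance of the chosen output and on the tolerance, not on any property known in advance.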

  1. Simulation Analysis of Helicopter Ground Resonance Nonlinear Dynamics

    NASA Astrophysics Data System (ADS)

    Zhu, Yan; Lu, Yu-hui; Ling, Ai-min

    2017-07-01

In order to accurately predict the dynamic instability of helicopter ground resonance, a modeling and simulation method for helicopter ground resonance considering the nonlinear dynamic characteristics of components (rotor lead-lag damper, landing gear wheel and absorber) is presented. The numerical integral method is used to calculate the transient responses of the body and rotor to a simulated disturbance. To obtain quantitative instabilities, Fast Fourier Transform (FFT) is conducted to estimate the modal frequencies, and the moving rectangular window method is employed to estimate the modal damping from the response time history. Simulation results show that the ground resonance simulation accurately captures the blade lead-lag regressing mode frequency, and the modal damping obtained from the attenuation curves is close to the test results. The simulation results are consistent with the actual accident situation and confirm the correctness of the simulation method. This analysis method for ground resonance simulation can give results consistent with real helicopter engineering tests.
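The damping-identification step described above can be sketched with the classical logarithmic-decrement estimate applied to successive response peaks (a simplified stand-in for the moving rectangular window method; the signal and modal parameters below are synthetic, not from the paper):

```python
import math

def damping_from_response(t, x):
    """Estimate frequency and damping ratio of a decaying oscillation from
    its time history via successive positive peaks (logarithmic decrement)."""
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]
    periods = [t[j] - t[i] for i, j in zip(peaks, peaks[1:])]
    freq = 1.0 / (sum(periods) / len(periods))                # Hz, from peak spacing
    decs = [math.log(x[i] / x[j]) for i, j in zip(peaks, peaks[1:])]
    delta = sum(decs) / len(decs)                             # logarithmic decrement
    zeta = delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)   # damping ratio
    return freq, zeta

# synthetic attenuation curve: a 2 Hz mode with 3% critical damping
f0, z0 = 2.0, 0.03
t = [i / 1000.0 for i in range(5000)]
x = [math.exp(-z0 * 2 * math.pi * f0 * s)
     * math.cos(2 * math.pi * f0 * math.sqrt(1 - z0 ** 2) * s) for s in t]
freq, zeta = damping_from_response(t, x)
```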

  2. Introducing Simulation via the Theory of Records

    ERIC Educational Resources Information Center

    Johnson, Arvid C.

    2011-01-01

    While spreadsheet simulation can be a useful method by which to help students to understand some of the more advanced concepts in an introductory statistics course, introducing the simulation methodology at the same time as these concepts can result in student cognitive overload. This article describes a spreadsheet model that has been…

  3. Real-time flutter analysis

    NASA Technical Reports Server (NTRS)

    Walker, R.; Gupta, N.

    1984-01-01

The important algorithmic issues necessary to achieve a real-time flutter monitoring system are addressed; namely, guidelines for choosing appropriate model forms, reduction of the parameter convergence transient, handling multiple modes, the effect of overparameterization, and estimate accuracy predictions, both online and for experiment design. An approach for efficiently computing continuous-time flutter parameter Cramer-Rao estimate error bounds was developed. This enables a convincing comparison of theoretical and simulation results, as well as offline studies in preparation for a flight test. Theoretical predictions, simulation and flight test results from the NASA Drones for Aerodynamic and Structural Test (DAST) Program are compared.

  4. Using the Integration of Discrete Event and Agent-Based Simulation to Enhance Outpatient Service Quality in an Orthopedic Department.

    PubMed

    Kittipittayakorn, Cholada; Ying, Kuo-Ching

    2016-01-01

    Many hospitals are currently paying more attention to patient satisfaction since it is an important service quality index. Many Asian countries' healthcare systems have a mixed-type registration, accepting both walk-in patients and scheduled patients. This complex registration system causes a long patient waiting time in outpatient clinics. Different approaches have been proposed to reduce the waiting time. This study uses the integration of discrete event simulation (DES) and agent-based simulation (ABS) to improve patient waiting time and is the first attempt to apply this approach to solve this key problem faced by orthopedic departments. From the data collected, patient behaviors are modeled and incorporated into a massive agent-based simulation. The proposed approach is an aid for analyzing and modifying orthopedic department processes, allows us to consider far more details, and provides more reliable results. After applying the proposed approach, the total waiting time of the orthopedic department fell from 1246.39 minutes to 847.21 minutes. Thus, using the correct simulation model significantly reduces patient waiting time in an orthopedic department.
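The waiting-time mechanics at the heart of such a model can be sketched as a minimal single-server discrete-event queue (purely illustrative: the paper's model has mixed registration, many resources, and agent behavior on top of this; the arrival and service parameters below are assumptions):

```python
import random

def simulate_clinic(n_patients, interarrival, service, seed=5):
    """Minimal discrete-event sketch of a single-server clinic queue:
    patients arrive at random intervals and are served in order;
    returns the average waiting time."""
    rng = random.Random(seed)
    t_arrive, server_free, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_patients):
        t_arrive += rng.expovariate(1.0 / interarrival)  # next arrival
        start = max(t_arrive, server_free)               # wait if server is busy
        total_wait += start - t_arrive
        server_free = start + rng.expovariate(1.0 / service)
    return total_wait / n_patients

# utilization 0.8: M/M/1 theory predicts a mean wait near 20 minutes
avg_wait = simulate_clinic(200_000, interarrival=6.25, service=5.0)
```

Modifying the arrival process (e.g. mixing scheduled and walk-in streams) and re-running is the basic experiment such simulation studies perform.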

  5. Using the Integration of Discrete Event and Agent-Based Simulation to Enhance Outpatient Service Quality in an Orthopedic Department

    PubMed Central

    Kittipittayakorn, Cholada

    2016-01-01

    Many hospitals are currently paying more attention to patient satisfaction since it is an important service quality index. Many Asian countries' healthcare systems have a mixed-type registration, accepting both walk-in patients and scheduled patients. This complex registration system causes a long patient waiting time in outpatient clinics. Different approaches have been proposed to reduce the waiting time. This study uses the integration of discrete event simulation (DES) and agent-based simulation (ABS) to improve patient waiting time and is the first attempt to apply this approach to solve this key problem faced by orthopedic departments. From the data collected, patient behaviors are modeled and incorporated into a massive agent-based simulation. The proposed approach is an aid for analyzing and modifying orthopedic department processes, allows us to consider far more details, and provides more reliable results. After applying the proposed approach, the total waiting time of the orthopedic department fell from 1246.39 minutes to 847.21 minutes. Thus, using the correct simulation model significantly reduces patient waiting time in an orthopedic department. PMID:27195606

  6. A derivation and scalable implementation of the synchronous parallel kinetic Monte Carlo method for simulating long-time dynamics

    NASA Astrophysics Data System (ADS)

    Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2017-10-01

    Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
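The core KMC step, selecting an event with probability proportional to its rate and advancing time by an exponentially distributed increment, can be sketched as follows (the SPKMC derivation in the paper adds synchronous parallel bookkeeping on top of this serial kernel; rates below are illustrative):

```python
import math
import random

def kmc_step(rates, rng):
    """One kinetic Monte Carlo step: pick an event with probability
    proportional to its rate and advance time by an exponential increment."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            event = i
            break
    dt = -math.log(1.0 - rng.random()) / total   # exponential waiting time
    return event, dt

rng = random.Random(42)
rates = [1.0, 3.0]            # event 1 occurs three times as often as event 0
counts = [0, 0]
elapsed = 0.0
for _ in range(20_000):
    e, dt = kmc_step(rates, rng)
    counts[e] += 1
    elapsed += dt
```

The serial bottleneck is visible here: one event advances the whole system clock, which is why the time scale shrinks as the system (and the rate list) grows.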

  7. Financial analysis of pruning coast Douglas-fir.

    Treesearch

Roger D. Fight; James M. Cahill; Thomas D. Fahey; Thomas A. Snellgrove

    1987-01-01

    Pruning of coast Douglas-fir was evaluated; recent product recovery information for pruned and unpruned logs for both sawn and peeled products was used. Dimensions of pruned and unpruned trees were simulated with the Douglas-fir stand simulator (DFSIM). Results are presented for a range of sites, ages at time of pruning, ages at time of harvest, product prices, and...

  8. Orthogonal recursive bisection as data decomposition strategy for massively parallel cardiac simulations.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Pitman, Michael C; Rice, John J

    2011-06-01

    We present the orthogonal recursive bisection algorithm that hierarchically segments the anatomical model structure into subvolumes that are distributed to cores. The anatomy is derived from the Visible Human Project, with electrophysiology based on the FitzHugh-Nagumo (FHN) and ten Tusscher (TT04) models with monodomain diffusion. Benchmark simulations with up to 16,384 and 32,768 cores on IBM Blue Gene/P and L supercomputers for both FHN and TT04 results show good load balancing with almost perfect speedup factors that are close to linear with the number of cores. Hence, strong scaling is demonstrated. With 32,768 cores, a 1000 ms simulation of full heart beat requires about 6.5 min of wall clock time for a simulation of the FHN model. For the largest machine partitions, the simulations execute at a rate of 0.548 s (BG/P) and 0.394 s (BG/L) of wall clock time per 1 ms of simulation time. To our knowledge, these simulations show strong scaling to substantially higher numbers of cores than reported previously for organ-level simulation of the heart, thus significantly reducing run times. The ability to reduce runtimes could play a critical role in enabling wider use of cardiac models in research and clinical applications.
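The orthogonal recursive bisection idea can be sketched on a point set: recursively split at the median of the longest axis until one balanced subdomain per core remains (a simplified stand-in for the paper's hierarchical segmentation of the anatomical model; the point data are illustrative):

```python
def orb_partition(points, n_parts):
    """Orthogonal recursive bisection: recursively split a point set at the
    median of its longest axis until n_parts (a power of two) balanced
    subdomains remain, one per core."""
    if n_parts == 1:
        return [points]
    # choose the coordinate axis with the largest extent
    axis = max(range(len(points[0])),
               key=lambda a: max(p[a] for p in points) - min(p[a] for p in points))
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2                         # median split -> load balance
    return (orb_partition(pts[:mid], n_parts // 2) +
            orb_partition(pts[mid:], n_parts // 2))

# 16 points on a 4x4 grid distributed among 4 "cores"
grid = [(x, y) for x in range(4) for y in range(4)]
parts = orb_partition(grid, 4)
```

Because every split is at the median, each core receives the same number of elements, which is the property behind the near-linear speedups reported above.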

  9. LSST: Cadence Design and Simulation

    NASA Astrophysics Data System (ADS)

    Cook, Kem H.; Pinto, P. A.; Delgado, F.; Miller, M.; Petry, C.; Saha, A.; Gee, P. A.; Tyson, J. A.; Ivezic, Z.; Jones, L.; LSST Collaboration

    2009-01-01

    The LSST Project has developed an operations simulator to investigate how best to observe the sky to achieve its multiple science goals. The simulator has a sophisticated model of the telescope and dome to properly constrain potential observing cadences. This model has also proven useful for investigating various engineering issues ranging from sizing of slew motors, to design of cryogen lines to the camera. The simulator is capable of balancing cadence goals from multiple science programs, and attempts to minimize time spent slewing as it carries out these goals. The operations simulator has been used to demonstrate a 'universal' cadence which delivers the science requirements for a deep cosmology survey, a Near Earth Object Survey and good sampling in the time domain. We will present the results of simulating 10 years of LSST operations using realistic seeing distributions, historical weather data, scheduled engineering downtime and current telescope and camera parameters. These simulations demonstrate the capability of the LSST to deliver a 25,000 square degree survey probing the time domain including 20,000 square degrees for a uniform deep, wide, fast survey, while effectively surveying for NEOs over the same area. We will also present our plans for future development of the simulator--better global minimization of slew time and eventual transition to a scheduler for the real LSST.

  10. A MODFLOW Infiltration Device Package for Simulating Storm Water Infiltration.

    PubMed

    Jeppesen, Jan; Christensen, Steen

    2015-01-01

    This article describes a MODFLOW Infiltration Device (INFD) Package that can simulate infiltration devices and their two-way interaction with groundwater. The INFD Package relies on a water balance including inflow of storm water, leakage-like seepage through the device faces, overflow, and change in storage. The water balance for the device can be simulated in multiple INFD time steps within a single MODFLOW time step, and infiltration from the device can be routed through the unsaturated zone to the groundwater table. A benchmark test shows that the INFD Package's analytical solution for stage computes exact results for transient behavior. To achieve similar accuracy by the numerical solution of the MODFLOW Surface-Water Routing (SWR1) Process requires many small time steps. Furthermore, the INFD Package includes an improved representation of flow through the INFD sides that results in lower infiltration rates than simulated by SWR1. The INFD Package is also demonstrated in a transient simulation of a hypothetical catchment where two devices interact differently with groundwater. This simulation demonstrates that device and groundwater interaction depends on the thickness of the unsaturated zone because a shallow groundwater table (a likely result from storm water infiltration itself) may occupy retention volume, whereas a thick unsaturated zone may cause a phase shift and a change of amplitude in groundwater table response to a change of infiltration. We thus find that the INFD Package accommodates the simulation of infiltration devices and groundwater in an integrated manner on small as well as large spatial and temporal scales. © 2014, National Ground Water Association.
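The device water balance described above can be sketched as an explicit sub-stepped mass balance (idealized geometry and coefficients, all assumed for illustration; the INFD Package itself uses an analytical stage solution rather than this explicit scheme):

```python
def simulate_infd(inflow, area=10.0, leak_coeff=0.5, max_stage=1.0, dt=0.1):
    """Explicit water balance for an idealized infiltration device:
    d(stage)/dt = (inflow - leakage) / area, with overflow above max_stage
    and leakage-like seepage proportional to stage."""
    stage, overflow_total, infiltrated_total = 0.0, 0.0, 0.0
    stages = []
    for q_in in inflow:                        # one inflow rate per time step
        leak = leak_coeff * stage * area       # seepage through the device base
        stage += dt * (q_in - leak) / area
        if stage > max_stage:                  # excess leaves as overflow
            overflow_total += (stage - max_stage) * area
            stage = max_stage
        stage = max(stage, 0.0)
        infiltrated_total += leak * dt
        stages.append(stage)
    return stages, infiltrated_total, overflow_total

# storm pulse: 20 steps of inflow, then 80 steps of recession
stages, infil, over = simulate_infd([8.0] * 20 + [0.0] * 80)
```

The scheme conserves mass exactly: inflow volume ends up as infiltration, overflow, or remaining storage, which is the balance the INFD Package closes each MODFLOW time step.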

  11. Interactive real time flow simulations

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1990-01-01

    An interactive real time flow simulation technique is developed for an unsteady channel flow. A finite-volume algorithm in conjunction with a Runge-Kutta time stepping scheme was developed for two-dimensional Euler equations. A global time step was used to accelerate convergence of steady-state calculations. A raster image generation routine was developed for high speed image transmission which allows the user to have direct interaction with the solution development. In addition to theory and results, the hardware and software requirements are discussed.

  12. Scientific Benefits of Space Science Models Archiving at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, Maria M.; Berrios, David; Chulaki, Anna; Hesse, Michael; MacNeice, Peter J.; Maddox, Marlo M.; Pulkkinen, Antti; Rastaetter, Lutz; Taktakishvili, Aleksandre

    2009-01-01

The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. CCMC provides a web-based Run-on-Request system, by which an interested scientist can request simulations for a broad range of space science problems. To allow the models to be driven by data relevant to particular events, CCMC developed a tool that automatically downloads data from data archives and transforms them to the required formats. CCMC also provides a tailored web-based visualization interface for the model output, as well as the capability to download the simulation output in a portable format. CCMC offers a variety of visualization and output analysis tools to aid scientists in the interpretation of simulation results. In the eight years since the Run-on-Request system became available, the CCMC has archived the results of almost 3000 runs covering significant space weather events and time intervals of interest identified by the community. The simulation results archived at CCMC also include a library of general-purpose runs with modeled conditions that are used for education and research. Archiving the results of simulations performed in support of several Modeling Challenges helps to evaluate the progress in space weather modeling over time. We will highlight the scientific benefits of the CCMC space science model archive and discuss plans for further development of advanced methods to interact with simulation results.

  13. ANSYS-MATLAB co-simulation of mucus flow distribution and clearance effectiveness of a new simulated cough device.

    PubMed

    Ren, Shuai; Shi, Yan; Cai, Maolin; Zhao, Hongmei; Zhang, Zhaozhi; Zhang, Xiaohua Douglas

    2018-06-01

    Coughing is an irritable reaction that protects the respiratory system from infection and improves mucus clearance. However, for the patients who cannot cough autonomously, an assisted cough device is essential for mucus clearance. Considering the low efficiency of current assisted cough devices, a new simulated cough device based on the pneumatic system is proposed in this paper. Given the uncertainty of airflow rates necessary to clear mucus from airways, the computational fluid dynamics Eulerian wall film model and cough efficiency (CE) were used in this study to simulate the cough process and evaluate cough effectiveness. The Ansys-Matlab co-simulation model was set up and verified through experimental studies using Newtonian fluids. Next, model simulations were performed using non-Newtonian fluids, and peak cough flow (PCF) and PCF duration time were analyzed to determine their influence on mucus clearance. CE growth rate (λ) was calculated to reflect the CE variation trend. From the numerical simulation results, we find that CE rises as PCF increases while the growth rate trends to slow as PCF increases; when PCF changes from 60 to 360 L/min, CE changes from 3.2% to 51.5% which is approximately 16 times the initial value. Meanwhile, keeping a long PCF duration time could greatly improve CE under the same cough expired volume and PCF. The results indicated that increasing the PCF and PCF duration time can improve the efficiency of mucus clearance. This paper provides a new approach and a research direction for control strategy in simulated cough devices for airway mucus clearance. Copyright © 2018 John Wiley & Sons, Ltd.

  14. Assessment of the effects of horizontal grid resolution on long ...

    EPA Pesticide Factsheets

The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United States are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trend analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time and storage requirements, which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain and coastal regions. The National Exposure Research Laboratory's Atmospheric Modeling Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment.

  15. Massively parallel simulator of optical coherence tomography of inhomogeneous turbid media.

    PubMed

    Malektaji, Siavash; Lima, Ivan T; Escobar I, Mauricio R; Sherif, Sherif S

    2017-10-01

    An accurate and practical simulator for Optical Coherence Tomography (OCT) could be an important tool to study the underlying physical phenomena in OCT such as multiple light scattering. Recently, many researchers have investigated simulation of OCT of turbid media, e.g., tissue, using Monte Carlo methods. The main drawback of these earlier simulators is the long computational time required to produce accurate results. We developed a massively parallel simulator of OCT of inhomogeneous turbid media that obtains both Class I diffusive reflectivity, due to ballistic and quasi-ballistic scattered photons, and Class II diffusive reflectivity due to multiply scattered photons. This Monte Carlo-based simulator is implemented on graphic processing units (GPUs), using the Compute Unified Device Architecture (CUDA) platform and programming model, to exploit the parallel nature of propagation of photons in tissue. It models an arbitrary shaped sample medium as a tetrahedron-based mesh and uses an advanced importance sampling scheme. This new simulator speeds up simulations of OCT of inhomogeneous turbid media by about two orders of magnitude. To demonstrate this result, we have compared the computation times of our new parallel simulator and its serial counterpart using two samples of inhomogeneous turbid media. We have shown that our parallel implementation reduced simulation time of OCT of the first sample medium from 407 min to 92 min by using a single GPU card, to 12 min by using 8 GPU cards and to 7 min by using 16 GPU cards. For the second sample medium, the OCT simulation time was reduced from 209 h to 35.6 h by using a single GPU card, and to 4.65 h by using 8 GPU cards, and to only 2 h by using 16 GPU cards. Therefore our new parallel simulator is considerably more practical to use than its central processing unit (CPU)-based counterpart. 
Our new parallel OCT simulator could be a practical tool to study the different physical phenomena underlying OCT, or to design OCT systems with improved performance. Copyright © 2017 Elsevier B.V. All rights reserved.
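The elementary step of any such photon-transport Monte Carlo, on CPU or GPU, is sampling the free path between scattering events from an exponential distribution, s = -ln(xi)/mu_t. A minimal serial sketch (the mu_t value is illustrative, not from the paper):

```python
import math
import random

def free_paths(mu_t, n, seed=7):
    """Sample photon free-path lengths s = -ln(xi)/mu_t, the elementary
    step of Monte Carlo photon transport in a turbid medium
    (mu_t = total interaction coefficient, here in 1/mm)."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / mu_t for _ in range(n)]

paths = free_paths(mu_t=10.0, n=50_000)       # mu_t = 10/mm, soft-tissue scale
mean_path = sum(paths) / len(paths)           # approaches 1/mu_t = 0.1 mm
frac_long = sum(1 for p in paths if p > 0.3) / len(paths)  # beyond 3 mean free paths
```

Because every photon history is independent, this kernel parallelizes trivially, which is what the GPU implementation above exploits.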

  16. Using a simulation assistant in modeling manufacturing systems

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, S. X.; Wolfsberger, John W.

    1988-01-01

Numerous simulation languages exist for modeling discrete event processes, and many have now been ported to microcomputers. Graphics and animation capabilities were added to many of these languages to help users build models and evaluate simulation results. Even with these languages and added features, the user is still burdened with learning the simulation language. Furthermore, the time to construct and then validate the simulation model is always greater than originally anticipated. One approach to minimize the time requirement is to use pre-defined macros that describe various common processes or operations in a system. The development of a simulation assistant for modeling discrete event manufacturing processes is presented. A simulation assistant is defined as an interactive intelligent software tool that assists the modeler in writing a simulation program by translating the modeler's symbolic description of the problem and then automatically generating the corresponding simulation code. The simulation assistant is discussed with emphasis on an overview of the simulation assistant, the elements of the assistant, and the five manufacturing simulation generators. A typical manufacturing system is modeled using the simulation assistant, and the advantages and disadvantages are discussed.

  17. Simulation of multi-photon emission isotopes using time-resolved SimSET multiple photon history generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Chih-Chieh; Lin, Hsin-Hon; Lin, Chang-Shiun

Multiple-photon emitters, such as In-111 or Se-75, have enormous potential in the field of nuclear medicine imaging. For example, Se-75 can be used to investigate bile acid malabsorption and measure bile acid pool loss. The simulation system for emission tomography (SimSET) is a well-known Monte Carlo simulation (MCS) code in nuclear medicine, valued for its high computational efficiency. However, current SimSET cannot simulate these isotopes because it lacks models of the complex decay scheme and the time-dependent decay process. To extend the versatility of SimSET for simulating such multi-photon emission isotopes, a time-resolved multiple photon history generator based on the SimSET code is developed in the present study. To build the time-resolved SimSET (trSimSET) with a radionuclide decay process, the new MCS model introduces new features, including decay time information and photon time-of-flight information. The half-lives of energy states were tabulated from the Evaluated Nuclear Structure Data File (ENSDF) database. The MCS results indicate that the overall percent difference is less than 8.5% for all simulation trials as compared to GATE. To sum up, we demonstrated that the time-resolved SimSET multiple photon history generator can achieve accuracy comparable to GATE while retaining better computational efficiency. The new MCS code is very useful for studying multi-photon imaging with novel isotopes that requires the simulation of state lifetimes and time-of-flight measurements. (authors)
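The time-resolved part of such a generator amounts to sampling the survival time of each intermediate nuclear state from an exponential distribution with decay constant ln(2)/T1/2. A minimal sketch, with an illustrative (not ENSDF-tabulated) half-life:

```python
import math
import random

def sample_decay_times(half_life, n, seed=3):
    """Sample the time an excited nuclear state survives before emitting
    its photon: exponential with decay constant lambda = ln(2)/half_life."""
    lam = math.log(2.0) / half_life
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

# e.g. an intermediate state with a 100 ns half-life (illustrative value)
times = sample_decay_times(100.0, 100_000)
mean_t = sum(times) / len(times)        # expect half_life / ln 2 ~ 144.3 ns
frac_survive = sum(1 for x in times if x > 100.0) / len(times)  # ~ 0.5
```

Adding this sampled delay between the emission times of cascade photons is what lets a history generator reproduce time-of-flight and lifetime effects.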

  18. The North American Regional Climate Change Assessment Program (NARCCAP): Status and results

    NASA Astrophysics Data System (ADS)

    Gutowski, W. J.

    2009-12-01

    NARCCAP is a multi-institutional program that is investigating systematically the uncertainties in regional scale simulations of contemporary climate and projections of future climate. NARCCAP is supported by multiple federal agencies. NARCCAP is producing an ensemble of high-resolution climate-change scenarios by nesting multiple RCMs in reanalyses and multiple atmosphere-ocean GCM simulations of contemporary and future-scenario climates. The RCM domains cover the contiguous U.S., northern Mexico, and most of Canada. The simulation suite also includes time-slice, high resolution GCMs that use sea-surface temperatures from parent atmosphere-ocean GCMs. The baseline resolution of the RCMs and time-slice GCMs is 50 km. Simulations use three sources of boundary conditions: National Centers for Environmental Prediction (NCEP)/Department of Energy (DOE) AMIP-II Reanalysis, GCMs simulating contemporary climate and GCMs using the A2 SRES emission scenario for the twenty-first century. Simulations cover 1979-2004 and 2038-2060, with the first 3 years discarded for spin-up. The resulting RCM and time-slice simulations offer opportunity for extensive analysis of RCM simulations as well as a basis for multiple high-resolution climate scenarios for climate change impacts assessments. Geophysical statisticians are developing measures of uncertainty from the ensemble. To enable very high-resolution simulations of specific regions, both RCM and high-resolution time-slice simulations are saving output needed for further downscaling. All output is publically available to the climate analysis and the climate impacts assessment community, through an archiving and data-distribution plan. Some initial results show that the models closely reproduce ENSO-related precipitation variations in coastal California, where the correlation between the simulated and observed monthly time series exceeds 0.94 for all models. 
The strong El Nino events of 1982-83 and 1997-98 are well reproduced for the Pacific coastal region of the U.S. in all models. ENSO signals are less well reproduced in other regions. The models also reproduce extreme monthly precipitation well in coastal California and the Upper Midwest. Model performance tends to deteriorate from west to east across the domain, or roughly from the inflow boundary toward the outflow boundary. This deterioration with distance from the inflow boundary is ameliorated to some extent in models formulated such that large-scale information is included in the model solution, whether implemented by spectral nudging or by use of a perturbation form of the governing equations.
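As a concrete illustration of the correlation metric quoted above (simulated vs. observed monthly series), here is a minimal Pearson-correlation sketch in plain Python; the series values are invented toy numbers, not NARCCAP output.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy monthly precipitation series (hypothetical values, not model output)
observed  = [1.2, 0.8, 2.5, 3.1, 0.4, 0.9, 1.7, 2.2]
simulated = [1.1, 0.9, 2.4, 3.3, 0.5, 0.8, 1.8, 2.1]
print(round(pearson_r(observed, simulated), 3))
```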

  19. Intercalation of Li Ions into a Graphite Anode Material: Molecular Dynamics Simulations

    NASA Astrophysics Data System (ADS)

    Abou Hamad, Ibrahim; Novotny, Mark

    2008-03-01

    Large-scale molecular dynamics simulations of the anode half-cell of a lithium-ion battery are presented. The model system is composed of an anode represented by a stack of graphite sheets, an electrolyte of ethylene carbonate and propylene carbonate molecules, and lithium and hexafluorophosphate ions. The simulations are done in the NVT ensemble and at room temperature. One charging scheme explored is normal charging in which intercalation is enhanced by electric charges on the graphitic sheets. The second charging mechanism has an external applied oscillatory electric field of amplitude A and frequency f. The simulations were performed on 2.6 GHz Opteron processors, using 160 processors at a time. Our simulation results show an improvement in the intercalation time of the lithium ions for the second charging mechanism. The dependence of the intercalation time on A and f will be discussed.
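The second charging mechanism applies an oscillatory field of amplitude A and frequency f. A minimal sketch of a sinusoidal form E(t) = A sin(2*pi*f*t) follows; the exact waveform used in the paper is not stated, so the sine form and all numbers here are assumptions for illustration.

```python
import math

def oscillatory_field(amplitude, frequency, t):
    """External field E(t) = A * sin(2*pi*f*t) along the intercalation
    direction (assumed functional form; the abstract only specifies an
    oscillatory field of amplitude A and frequency f)."""
    return amplitude * math.sin(2 * math.pi * frequency * t)

e = 1.602e-19  # elementary charge, C: force on a Li+ ion is E * e
for t in (0.0, 0.25, 0.5):  # fractions of one period, with f = 1
    E = oscillatory_field(1.0, 1.0, t)
    print(round(E, 6), E * e)
```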

  20. Helicopter time-domain electromagnetic numerical simulation based on Leapfrog ADI-FDTD

    NASA Astrophysics Data System (ADS)

    Guan, S.; Ji, Y.; Li, D.; Wu, Y.; Wang, A.

    2017-12-01

We present a three-dimensional (3D) leapfrog Alternating Direction Implicit Finite-Difference Time-Domain (Leapfrog ADI-FDTD) method for the simulation of helicopter time-domain electromagnetic (HTEM) detection. This method differs from both the traditional explicit FDTD and the conventional ADI-FDTD. Compared with explicit FDTD, the leapfrog ADI-FDTD algorithm is no longer limited by the Courant-Friedrichs-Lewy (CFL) condition, so a longer time step can be used. Compared with ADI-FDTD, we reduce the number of equations from 12 to 6, and the Leapfrog ADI-FDTD method is easier to apply in general simulation. First, we determine initial conditions, adopted from the existing method presented by Wang and Tripp (1993). Second, we derive a new finite-difference form of the Maxwell equations using the Leapfrog ADI-FDTD method; the purpose is to eliminate the sub-time step while retaining unconditional stability. Third, we add the convolutional perfectly matched layer (CPML) absorbing boundary condition to the leapfrog ADI-FDTD simulation and study the absorbing effect of different parameters; since different parameters affect the absorbing ability, we find suitable values after many numerical experiments. Fourth, we compare the response with a 1-D numerical result for a homogeneous half-space to verify the correctness of our algorithm. For a model containing 107*107*53 grid points with a conductivity of 0.05 S/m, the results show that Leapfrog ADI-FDTD needs less simulation time and computer storage space than ADI-FDTD: the calculation time decreases nearly fourfold, and memory occupation decreases by about 32.53%. Thus, this algorithm is more efficient than the conventional ADI-FDTD method for HTEM detection, and is more precise than explicit FDTD at late times.
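For context, the CFL limit that constrains explicit FDTD (and that an unconditionally stable ADI scheme escapes) can be computed directly. The sketch below uses a hypothetical 10 m uniform grid spacing and an illustrative 4x time-step enlargement; neither value is taken from the paper.

```python
import math

c = 2.998e8  # speed of light in vacuum, m/s

def cfl_dt_max(dx, dy, dz):
    """Largest stable time step for explicit 3-D FDTD (CFL limit)."""
    return 1.0 / (c * math.sqrt(1 / dx**2 + 1 / dy**2 + 1 / dz**2))

# Hypothetical 10 m grid spacing (illustrative, not from the paper)
dt_explicit = cfl_dt_max(10.0, 10.0, 10.0)
# An unconditionally stable scheme may exceed this limit, e.g. by a
# factor of 4 (illustrative, matching the ~4x time saving reported):
dt_adi = 4 * dt_explicit
print(dt_explicit, dt_adi)
```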

  1. Simplified energy-balance model for pragmatic multi-dimensional device simulation

    NASA Astrophysics Data System (ADS)

    Chang, Duckhyun; Fossum, Jerry G.

    1997-11-01

    To pragmatically account for non-local carrier heating and hot-carrier effects such as velocity overshoot and impact ionization in multi-dimensional numerical device simulation, a new simplified energy-balance (SEB) model is developed and implemented in FLOODS[16] as a pragmatic option. In the SEB model, the energy-relaxation length is estimated from a pre-process drift-diffusion simulation using the carrier-velocity distribution predicted throughout the device domain, and is used without change in a subsequent simpler hydrodynamic (SHD) simulation. The new SEB model was verified by comparison of two-dimensional SHD and full HD DC simulations of a submicron MOSFET. The SHD simulations yield detailed distributions of carrier temperature, carrier velocity, and impact-ionization rate, which agree well with the full HD simulation results obtained with FLOODS. The most noteworthy feature of the new SEB/SHD model is its computational efficiency, which results from reduced Newton iteration counts caused by the enhanced linearity. Relative to full HD, SHD simulation times can be shorter by as much as an order of magnitude since larger voltage steps for DC sweeps and larger time steps for transient simulations can be used. The improved computational efficiency can enable pragmatic three-dimensional SHD device simulation as well, for which the SEB implementation would be straightforward as it is in FLOODS or any robust HD simulator.

  2. DNS of Flows over Periodic Hills using a Discontinuous-Galerkin Spectral-Element Method

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo T.; Murman, Scott M.

    2014-01-01

Direct numerical simulation (DNS) of turbulent compressible flows is performed using a higher-order space-time discontinuous-Galerkin finite-element method. The numerical scheme is validated by performing DNS of the evolution of the Taylor-Green vortex and of turbulent flow in a channel. The higher-order method is shown to provide increased accuracy relative to low-order methods at a given number of degrees of freedom. The turbulent flow over a periodic array of hills in a channel is simulated at Reynolds number 10,595 using an 8th-order scheme in space and a 4th-order scheme in time. These results are validated against previous large eddy simulation (LES) results. A preliminary analysis provides insight into how these detailed simulations can be used to improve Reynolds-averaged Navier-Stokes (RANS) modeling.

  3. A hybrid method of estimating pulsating flow parameters in the space-time domain

    NASA Astrophysics Data System (ADS)

    Pałczyński, Tomasz

    2017-05-01

    This paper presents a method for estimating pulsating flow parameters in partially open pipes, such as pipelines, internal combustion engine inlets, exhaust pipes and piston compressors. The procedure is based on the method of characteristics, and employs a combination of measurements and simulations. An experimental test rig is described, which enables pressure, temperature and mass flow rate to be measured within a defined cross section. The second part of the paper discusses the main assumptions of a simulation algorithm elaborated in the Matlab/Simulink environment. The simulation results are shown as 3D plots in the space-time domain, and compared with proposed models of phenomena relating to wave propagation, boundary conditions, acoustics and fluid mechanics. The simulation results are finally compared with acoustic phenomena, with an emphasis on the identification of resonant frequencies.

  4. F100 multivariable control synthesis program: Evaluation of a multivariable control using a real-time engine simulation

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Soeder, J. F.; Seldner, K.; Cwynar, D. S.

    1977-01-01

    The design, evaluation, and testing of a practical, multivariable, linear quadratic regulator control for the F100 turbofan engine were accomplished. NASA evaluation of the multivariable control logic and implementation are covered. The evaluation utilized a real time, hybrid computer simulation of the engine. Results of the evaluation are presented, and recommendations concerning future engine testing of the control are made. Results indicated that the engine testing of the control should be conducted as planned.

  5. Point-Process Models of Social Network Interactions: Parameter Estimation and Missing Data Recovery

    DTIC Science & Technology

    2014-08-01

    treating them as zero will have a de minimis impact on the results, but avoiding computing them (and computing with them) saves tremendous time. Set a... test the methods on simulated time series on artificial social networks, including some toy networks and some meant to resemble IkeNet. We conclude...the section by discussing the results in detail. In each of our tests we begin with a complete data set, whether it is real (IkeNet) or simulated. Then

  6. An efficient and reliable predictive method for fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-13

In past decades, the continuum approach was the only practical technique for simulating large-scale fluidized bed reactors, because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, a time-driven hard-sphere (TDHS) algorithm with a larger time step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experimental data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
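The two speedup mechanisms compose multiplicatively under the idealized assumption that cost scales linearly with the number of tracked parcels. A back-of-the-envelope sketch, with hypothetical particle counts and a mid-range TDHS factor:

```python
def coarse_grain_count(n_particles, statistical_weight):
    """Number of tracked parcels when each parcel stands in for
    `statistical_weight` real particles (coarse-graining)."""
    return n_particles // statistical_weight

def combined_speedup(tdhs_speedup, statistical_weight):
    """Rough overall speedup: TDHS time-step gain times particle-count
    reduction (assumes cost scales linearly with tracked parcels)."""
    return tdhs_speedup * statistical_weight

# Hypothetical riser: 1e9 real particles, parcels of 1000 particles,
# TDHS speedup of 40 (within the reported 20-60x range)
parcels = coarse_grain_count(10**9, 1000)
print(parcels, combined_speedup(40, 1000))
```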

  7. An efficient and reliable predictive method for fluidized bed simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-29

In past decades, the continuum approach was the only practical technique for simulating large-scale fluidized bed reactors, because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, a time-driven hard-sphere (TDHS) algorithm with a larger time step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experimental data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  8. Time-lapse analysis of methane quantity in Mary Lee group of coal seams using filter-based multiple-point geostatistical simulation

    USGS Publications Warehouse

    Karacan, C. Özgen; Olea, Ricardo A.

    2013-01-01

The systematic approach presented in this paper is the first time in the literature that history matching, training images (TIs) of gas in place (GIP), and filter simulations have been used for degasification performance evaluation and for assessing GIP for mining safety. Results from this study showed that production history matching of coalbed methane wells to determine time-lapsed reservoir data could be used to compute spatial GIP, and that representative GIP TIs could be generated through Voronoi decomposition. Furthermore, filter simulations using point-wise data and TIs could be used to predict methane quantity in coal seams subjected to degasification. During the course of the study, it was shown that the material balance of gas produced by wellbores and the GIP reductions in coal seams predicted using filter simulations compared very well, demonstrating the success of filter simulations for continuous variables in this case study. Quantitative results from filter simulations of GIP within the studied area showed that GIP was reduced from an initial ∼73 Bcf (median) to ∼46 Bcf (2011), a 37% decrease, varying spatially through degasification. It is forecast that there will be an additional ∼2 Bcf reduction in methane quantity between 2011 and 2015. This study and the presented results showed that the applied methodology and techniques can be used to map GIP and its change within coal seams after degasification, which can further be used for ventilation design for methane control in coal mines.
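The quoted gas-in-place figures can be checked with simple material-balance arithmetic (the numbers below are the ones stated in the abstract):

```python
def percent_reduction(initial, final):
    """Percentage decrease from initial to final gas-in-place."""
    return 100.0 * (initial - final) / initial

# Figures quoted in the abstract: ~73 Bcf (initial) down to ~46 Bcf (2011)
gip_initial, gip_2011 = 73.0, 46.0
print(round(percent_reduction(gip_initial, gip_2011)))  # ~37 % decrease

# Additional ~2 Bcf reduction forecast between 2011 and 2015
forecast_2015 = gip_2011 - 2.0
print(forecast_2015)
```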

  9. Development of NSSS Thermal-Hydraulic Model for KNPEC-2 Simulator Using the Best-Estimate Code RETRAN-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyung-Doo; Jeong, Jae-Jun; Lee, Seung-Wook

The Nuclear Steam Supply System (NSSS) thermal-hydraulic model adopted in the Korea Nuclear Plant Education Center (KNPEC)-2 simulator was provided in the early 1980s. The reference plant for KNPEC-2 is Yong Gwang Nuclear Unit 1, a Westinghouse-type 3-loop, 950 MW(electric) pressurized water reactor. Because of the limited computational capability at that time, the model uses overly simplified physical models and assumptions for real-time simulation of NSSS thermal-hydraulic transients. This may entail inaccurate results and thus the possibility of so-called ''negative training,'' especially for complicated two-phase flows in the reactor coolant system. To resolve the problem, we developed a realistic NSSS thermal-hydraulic program (named the ARTS code) based on the best-estimate code RETRAN-3D. The systematic assessment of ARTS was conducted by both a stand-alone test and an integrated test in the simulator environment. The non-integrated stand-alone test (NIST) results were reasonable in terms of accuracy, real-time simulation capability, and robustness. After successful completion of the NIST, ARTS was integrated with a 3-D reactor kinetics model and other system models. The site acceptance test (SAT) was completed successfully and confirmed to comply with the ANSI/ANS-3.5-1998 simulator software performance criteria. This paper presents our efforts in the ARTS development and some test results of the NIST and SAT.

  10. Station to instrumented aircraft L-band telemetry system and RF signal controller for spacecraft simulations and station calibration

    NASA Technical Reports Server (NTRS)

    Scaffidi, C. A.; Stocklin, F. J.; Feldman, M. B.

    1971-01-01

An L-band telemetry system designed to provide the capability of near-real-time processing of calibration data is described. The system also provides the capability of performing computerized spacecraft simulations, with the aircraft as a data source, and of evaluating the network response. The salient characteristics of a telemetry analysis and simulation program (TASP) are discussed, together with the results of TASP testing. The results of the L-band system testing have successfully demonstrated the capability of near-real-time processing of telemetry test data, control of the ground-received signal to within ±0.5 dB, and computer generation of test signals.

  11. GAPPARD: a computationally efficient method of approximating gap-scale disturbance in vegetation models

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.

    2013-09-01

Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population-average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population-average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, we developed a new method for simulating stand-replacing disturbances that is both accurate and faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model, deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing (e.g., as a result of climate change), GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method into the vegetation model LPJ-GUESS and evaluated it in a series of simulations along an altitudinal transect of an inner-Alpine valley. We obtained results very similar to the output of the original LPJ-GUESS model that uses 100 replicate patches, but simulation time was reduced by approximately a factor of 10. 
Our new method is therefore highly suited for rapidly approximating LPJ-GUESS results. It provides the opportunity for future studies over large spatial domains and allows easier parameterization of tree species, faster identification of areas of interesting simulation results, and comparisons with large-scale datasets and the results of other forest models.
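A minimal sketch of the GAPPARD idea: weight the undisturbed run's output by a patch-age distribution implied by a constant annual disturbance probability. The geometric age distribution and the toy biomass curve below are assumptions for illustration, not the paper's exact formulation.

```python
def gappard_expectation(v_undisturbed, p_disturb):
    """Expected output over the patch-age distribution implied by a constant
    annual stand-replacing disturbance probability (assumed geometric here).
    `v_undisturbed[a]` is the deterministic model output at patch age `a`."""
    total, weight_sum = 0.0, 0.0
    for age, v in enumerate(v_undisturbed):
        w = p_disturb * (1.0 - p_disturb) ** age  # P(patch age == age)
        total += w * v
        weight_sum += w
    return total / weight_sum  # renormalize over the truncated age range

# Toy biomass trajectory of an undisturbed cohort run (hypothetical numbers)
biomass = [a * 2.0 for a in range(200)]  # grows linearly with patch age
print(round(gappard_expectation(biomass, 0.01), 2))
```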

  12. Data Files for Ground-Motion Simulations of the 1906 San Francisco Earthquake and Scenario Earthquakes on the Northern San Andreas Fault

    USGS Publications Warehouse

    Aagaard, Brad T.; Barall, Michael; Brocher, Thomas M.; Dolenc, David; Dreger, Douglas; Graves, Robert W.; Harmsen, Stephen; Hartzell, Stephen; Larsen, Shawn; McCandless, Kathleen; Nilsson, Stefan; Petersson, N. Anders; Rodgers, Arthur; Sjogreen, Bjorn; Zoback, Mary Lou

    2009-01-01

    This data set contains results from ground-motion simulations of the 1906 San Francisco earthquake, seven hypothetical earthquakes on the northern San Andreas Fault, and the 1989 Loma Prieta earthquake. The bulk of the data consists of synthetic velocity time-histories. Peak ground velocity on a 1/60th degree grid and geodetic displacements from the simulations are also included. Details of the ground-motion simulations and analysis of the results are discussed in Aagaard and others (2008a,b).

  13. Hybrid Architectural Framework for C4ISR and Discrete-Event Simulation (DES) to Support Sensor-Driven Model Synthesis in Real-World Scenarios

    DTIC Science & Technology

    2013-09-01

which utilizes FTA and then loads it into a DES engine to generate simulation results. ... Figure 21. This simulation architecture is... While Discrete Event Simulation (DES) can provide accurate time estimation and fast simulation speed, models utilizing it often suffer... C4ISR progress in MDW is developed in this research to demonstrate the feasibility of AEMF-DES and explore its potential. The simulation (MDSIM

  14. Development of measurement simulation of the laser dew-point hygrometer using an optical fiber cable

    NASA Astrophysics Data System (ADS)

    Matsumoto, Shigeaki

    2005-02-01

In order to improve the initial and response times of the Laser Dew-Point Hygrometer (LDH), a measurement simulation was developed on the basis of loop computation of the surface temperature of a gold plate for dew deposition, the quantity of deposited dew, and the intensity of light scattered from the surface of the plate, at a time interval of 5 s during measurement. A more detailed relationship between the surface temperature of the plate and the cooling current, and the time constant of the integrator in the control circuit of the LDH, were introduced into the simulation program as functions of atmospheric temperature. The simulation was thereby brought closer to actual measurement by the LDH. The simulation results indicated the possibility of improving both times of the LDH by increasing the sensitivity to dew and the mass-transfer coefficient of dew deposited on the plate surface. It was concluded that the initial and response times could be improved to below 100 s and 120 s, respectively, in the dew-point range at room temperature, which is almost half the corresponding times of the original LDH.

  15. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece

    2017-05-12

Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using the example of regular sampling. The reasons for this choice are twofold: a simple example reduces unnecessary complexity because we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results produced through sampling.
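A minimal sketch of the regular-sampling baseline described above; the stride and the toy scalar field are invented for illustration, and real pipelines would sample the unstructured grid's node values rather than a flat list.

```python
def regular_sample(values, stride):
    """Keep every `stride`-th value -- the simple regular-sampling baseline."""
    return values[::stride]

# Hypothetical scalar field flattened from a simulation grid
field = [float(i % 17) for i in range(10000)]
sampled = regular_sample(field, 8)
print(len(field), len(sampled))  # 8x reduction in stored values
```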

  16. Addressing Spatial Dependence Bias in Climate Model Simulations—An Independent Component Analysis Approach

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2018-02-01

    Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
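A minimal sketch of the second, grid-scale step described above: univariate correction of one series (one ICA signal or one grid cell) so its mean and variance match observations. The ICA transform of the first step is omitted here; it would come from a library implementation such as FastICA. The mean/variance-scaling form and the toy series are assumptions, not the paper's exact correction.

```python
import statistics

def scale_correct(model, obs):
    """Univariate mean/variance bias correction: rescale a model series so
    its mean and standard deviation match the observed series."""
    mu_m, sd_m = statistics.mean(model), statistics.pstdev(model)
    mu_o, sd_o = statistics.mean(obs), statistics.pstdev(obs)
    return [(x - mu_m) / sd_m * sd_o + mu_o for x in model]

model = [10.0, 12.0, 14.0, 16.0, 18.0]  # biased-warm toy series
obs   = [8.0, 9.0, 10.0, 11.0, 12.0]
corrected = scale_correct(model, obs)
print(round(statistics.mean(corrected), 6), round(statistics.pstdev(corrected), 6))
```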

  17. Differences between racing and non-racing drivers: A simulator study using eye-tracking

    PubMed Central

    de Groot, Stefan; Happee, Riender; de Winter, Joost C. F.

    2017-01-01

    Motorsport has developed into a professional international competition. However, limited research is available on the perceptual and cognitive skills of racing drivers. By means of a racing simulator, we compared the driving performance of seven racing drivers with ten non-racing drivers. Participants were tasked to drive the fastest possible lap time. Additionally, both groups completed a choice reaction time task and a tracking task. Results from the simulator showed faster lap times, higher steering activity, and a more optimal racing line for the racing drivers than for the non-racing drivers. The non-racing drivers’ gaze behavior corresponded to the tangent point model, whereas racing drivers showed a more variable gaze behavior combined with larger head rotations while cornering. Results from the choice reaction time task and tracking task showed no statistically significant difference between the two groups. Our results are consistent with the current consensus in sports sciences in that task-specific differences exist between experts and novices while there are no major differences in general cognitive and motor abilities. PMID:29121090

  18. Managing emergency department overcrowding via ambulance diversion: a discrete event simulation model.

    PubMed

    Lin, Chih-Hao; Kao, Chung-Yao; Huang, Chong-Ye

    2015-01-01

Ambulance diversion (AD) is considered one of the possible solutions to relieve emergency department (ED) overcrowding. Study of the effectiveness of various AD strategies is a prerequisite for policy-making. Our aim is to develop a tool that quantitatively evaluates the effectiveness of various AD strategies. A simulation model and a computer simulation program were developed. Three sets of simulations were executed to evaluate AD initiating criteria, patient-blocking rules, and AD intervals, respectively. The crowdedness index, the patient waiting time for service, and the percentage of adverse patients were assessed to determine the effect of various AD policies. Simulation results suggest that, in a certain setting, the best timing for implementing AD is when the crowdedness index reaches the critical value of 1.0, an indicator that the ED is operating at its maximal capacity. The strategy of diverting all patients transported by ambulance is more effective than diverting either high-acuity patients only or low-acuity patients only. Given a total allowable AD duration, implementing AD multiple times with short intervals generally has a better effect than having a single AD with the maximal allowable duration. An input-throughput-output simulation model is proposed for simulating ED operation. The effectiveness of several AD strategies in relieving ED overcrowding was assessed via computer simulations based on this model. With appropriate parameter settings, the model can represent medical resource providers of different scales. It is also feasible to expand the simulations to evaluate the effect of AD strategies on a community basis. The results may offer insights for making effective AD policies. Copyright © 2012. Published by Elsevier B.V.
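A minimal discrete-event sketch of the diversion rule described above, with the crowdedness index taken as occupancy/capacity and diversion triggered at the critical value 1.0. All parameters and the Poisson-arrival/fixed-service assumptions are illustrative, not taken from the paper's model.

```python
import heapq
import random

def simulate_ed(capacity, arrival_rate, service_time, horizon, divert_threshold=1.0):
    """Minimal discrete-event sketch: patients arrive as a Poisson stream,
    occupy a bed for a fixed `service_time`, and arrivals are diverted while
    the crowdedness index (occupancy / capacity) is at or above the threshold."""
    rng = random.Random(1)
    t, occupancy, treated, diverted = 0.0, 0, 0, 0
    departures = []  # min-heap of scheduled departure times
    while True:
        t += rng.expovariate(arrival_rate)  # time of next arrival
        if t >= horizon:
            break
        while departures and departures[0] <= t:  # release beds that freed up
            heapq.heappop(departures)
            occupancy -= 1
        if occupancy / capacity >= divert_threshold:
            diverted += 1  # ambulance diversion in effect
        else:
            occupancy += 1
            treated += 1
            heapq.heappush(departures, t + service_time)
    return treated, diverted

treated, diverted = simulate_ed(capacity=20, arrival_rate=1.0,
                                service_time=15.0, horizon=1000.0)
print(treated, diverted)
```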

  19. Taurus II Stage Test Simulations: Using Large-Scale CFD Simulations to Provide Critical Insight into Plume Induced Environments During Design

    NASA Technical Reports Server (NTRS)

    Struzenberg, L. L.; West, J. S.

    2011-01-01

This paper describes the use of targeted Loci/CHEM CFD simulations to evaluate the effects of a dual-engine first-stage hot-fire test on an evolving integrated launch pad/test article design. This effort was undertaken as part of the NESC Independent Assessment of the Taurus II Stage Test Series. The underlying conceptual model included development of a series of computational models and simulations to analyze the plume-induced environments on the pad, facility structures, and test article. A pathfinder simulation was first developed, capable of providing quick-turnaround evaluation of plume impingement pressures on the flame deflector. Results from this simulation were available in time to provide data for an ongoing structural assessment of the deflector. The resulting recommendation was available in a timely manner and was incorporated into the construction schedule for the new launch stand under construction at Wallops Flight Facility. A series of Reynolds-Averaged Navier-Stokes (RANS) quasi-steady simulations representative of key elements of the test profile was performed to identify potential concerns with the test configuration and test profile. As required, unsteady hybrid-RANS/LES simulations were performed to provide additional insight into critical aspects of the test sequence. Modifications to the test-specific hardware, the facility structures' thermal protection, and the planned hot-fire test profile were implemented based on these simulation results.

  20. Front panel engineering with CAD simulation tool

    NASA Astrophysics Data System (ADS)

    Delacour, Jacques; Ungar, Serge; Mathieu, Gilles; Hasna, Guenther; Martinez, Pascal; Roche, Jean-Christophe

    1999-04-01

The progress made recently in display technology covers many fields of application. The specification of radiance, colorimetry, and lighting efficiency creates new challenges for designers. Photometric design is limited by the ability to correctly predict the result of a lighting system, which would save the cost and time of building multiple prototypes or breadboard benches. The second step of the research carried out by the company OPTIS is to propose an optimization method for the lighting system, developed in the software SPEOS. The main features required of the tool include the CAD interface, to enable fast and efficient transfer between mechanical and light design software; the source modeling; the light transfer model; and an optimization tool. The CAD interface is mainly a matter of data transfer, which is not the subject here. Photometric simulation is efficiently achieved by using measured source encoding and simulation by the Monte Carlo method. Today, the advantages and limitations of the Monte Carlo method are well known: noise reduction requires a long calculation time, which increases with the complexity of the display panel. A successful optimization is difficult to achieve because of the long calculation time required for each optimization pass, each of which includes a Monte Carlo simulation. The problem was initially defined as an engineering method of study. Experience shows that good understanding and mastery of the phenomenon of light transfer is limited by the complexity of non-sequential propagation, so the engineer must call on a simulation and optimization tool. The main requirement for efficient optimization is a quick method of simulating light transfer. Much work has been done in this area and some interesting results can be observed. 
It must be said that the Monte Carlo method wastes time calculating results and information that are not required for the needs of the simulation, and low-efficiency transfer systems cost a lot of lost time. More generally, light transfer can be simulated efficiently when the integrated result is composed of elementary sub-results that involve quickly computed analytical intersections. Two axes of research thus appear: quick integration, and quick calculation of geometric intersections. The first brings some general solutions that are also valid for multi-reflection systems. The second requires deeper thinking about the intersection calculation. An interesting approach is the subdivision of space into voxels, a method of 3D division of space adapted to the objects and their locations. Experimental software has been developed to validate the method. The gain is particularly high in complex systems, and an important reduction in calculation time has been achieved.
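A minimal sketch of the voxel lookup underlying the space-subdivision idea: mapping a point to its cell in a uniform grid, so a ray tracer need only test objects registered in nearby cells. The uniform cell size and the clamping behavior are assumptions for illustration, not details of the experimental software.

```python
def voxel_index(point, origin, cell_size, dims):
    """Map a 3-D point to its voxel (i, j, k) in a uniform grid -- the spatial
    subdivision that lets intersection tests skip distant objects."""
    idx = []
    for p, o, n in zip(point, origin, dims):
        i = int((p - o) // cell_size)
        idx.append(min(max(i, 0), n - 1))  # clamp to the grid bounds
    return tuple(idx)

# Hypothetical 10x10x10 grid of unit voxels starting at the origin
print(voxel_index((2.5, 0.1, 9.99), (0.0, 0.0, 0.0), 1.0, (10, 10, 10)))
```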

  1. Design of neurophysiologically motivated structures of time-pulse coded neurons

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lobodzinska, Raisa F.

    2009-04-01

    This paper describes a common methodology for a biologically motivated concept of building sensor processing systems with parallel input, picture-operand processing, and time-pulse coding. The advantages of such coding for creating parallel programmable 2D-array structures for next-generation digital computers, which require untraditional numerical systems for processing analog, digital, hybrid and neuro-fuzzy operands, are shown. Simulation and implementation results for optoelectronic time-pulse coded intelligent neural elements (OETPCINE) over a wide set of neuro-fuzzy logic operations are considered. The simulation results confirm the engineering advantages, intelligence, and circuit flexibility of OETPCINE for creating advanced 2D structures. The developed equivalentor-nonequivalentor neural element has a power consumption of 10 mW and a processing time of about 10-100 µs.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan S; Bugbee, Bruce; Gotseff, Peter

    Capturing the technical and economic impacts of solar photovoltaics (PV) and other distributed energy resources (DERs) on electric distribution systems can require high-time-resolution (e.g., 1 minute), long-duration (e.g., 1 year) simulations. However, such simulations can be computationally prohibitive, particularly when including complex control schemes in quasi-steady-state time series (QSTS) simulation. Various approaches have been used in the literature to down-select representative time segments (e.g., days), but typically these are best suited for lower time resolutions or consider only a single data stream (e.g., PV production) for selection. We present a statistical approach that combines stratified sampling and bootstrapping to select representative days while also providing a simple method to reassemble annual results. We describe the approach in the context of a recent study with a utility partner. This approach enables much faster QSTS analysis by simulating only a subset of days, while maintaining accurate annual estimates.
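
A minimal sketch of this kind of stratified-sampling-plus-bootstrap day selection might look as follows. The stratification by calendar quarter, the per-stratum sample size, and the weighting scheme are all illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

# Illustrative strata: the four calendar quarters of a 365-day year.
QUARTERS = [range(0, 90), range(90, 181), range(181, 273), range(273, 365)]

def select_representative_days(days_per_stratum=5):
    """Draw a few days from each stratum. Returns day indices and
    weights chosen so a weighted sum over the sample estimates the
    annual total (each sampled day stands in for len(stratum)/k days)."""
    chosen, weights = [], []
    for stratum in QUARTERS:
        pick = rng.choice(list(stratum), size=days_per_stratum, replace=False)
        chosen.extend(int(i) for i in pick)
        weights.extend([len(stratum) / days_per_stratum] * days_per_stratum)
    return np.array(chosen), np.array(weights)

def bootstrap_annual_total(daily_metric, chosen, weights, n_boot=500):
    """Bootstrap the weighted annual estimate to attach a simple 95% CI."""
    sample = daily_metric[chosen]
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(sample), len(sample))
        estimates.append(float(np.sum(sample[idx] * weights[idx])))
    return np.percentile(estimates, [2.5, 97.5])
```

Only the selected days are then run through the QSTS simulation, and the weights reassemble an annual estimate with a bootstrap confidence interval.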

  3. Reaching multi-nanosecond timescales in combined QM/MM molecular dynamics simulations through parallel horsetail sampling.

    PubMed

    Martins-Costa, Marilia T C; Ruiz-López, Manuel F

    2017-04-15

    We report an enhanced sampling technique that allows one to reach the multi-nanosecond timescale in quantum mechanics/molecular mechanics molecular dynamics simulations. The proposed technique, called horsetail sampling, is a specific type of multiple molecular dynamics approach exhibiting high parallel efficiency. It couples a main simulation with a large number of shorter trajectories launched on independent processors at periodic time intervals. The technique is applied to study hydrogen peroxide at the water liquid-vapor interface, a system of considerable atmospheric relevance. A total simulation time of a little more than 6 ns has been attained for a total CPU time of 5.1 years, representing only about 20 days of wall-clock time. The discussion of the results highlights the strong influence of solvation effects at the interface on the structure and electronic properties of the solute. © 2017 Wiley Periodicals, Inc.

  4. Formation Algorithms and Simulation Testbed

    NASA Technical Reports Server (NTRS)

    Wette, Matthew; Sohl, Garett; Scharf, Daniel; Benowitz, Edward

    2004-01-01

    Formation flying for spacecraft is a rapidly developing field that will enable a new era of space science. For one of its missions, the Terrestrial Planet Finder (TPF) project has selected a formation flying interferometer design to detect earth-like planets orbiting distant stars. In order to advance the technology needed for the TPF formation flying interferometer, the TPF project has been developing a distributed real-time testbed to demonstrate end-to-end operation of formation flying with TPF-like functionality and precision: the Formation Algorithms and Simulation Testbed (FAST). The FAST was conceived to bring out issues in timing, data fusion, inter-spacecraft communication, inter-spacecraft sensing and system-wide formation robustness. In this paper we describe the FAST and show results from a two-spacecraft formation scenario. The two-spacecraft simulation is the first time that precision end-to-end formation flying operation has been demonstrated in a distributed real-time simulation environment.

  5. Real-time management of an urban groundwater well field threatened by pollution.

    PubMed

    Bauser, Gero; Franssen, Harrie-Jan Hendricks; Kaiser, Hans-Peter; Kuhlmann, Ulrich; Stauffer, Fritz; Kinzelbach, Wolfgang

    2010-09-01

    We present an optimal real-time control approach for the management of drinking water well fields. The methodology is applied to the Hardhof field in the city of Zurich, Switzerland, which is threatened by diffuse pollution. The risk of attracting pollutants is higher if the pumping rate is increased and can be reduced by increasing artificial recharge (AR) or by adaptive allocation of the AR. The method was first tested in offline simulations with a three-dimensional finite element variably saturated subsurface flow model for the period January 2004-August 2005. The simulations revealed that (1) optimal control results were more effective than the historical control results and (2) the spatial distribution of AR should be different from the historical one. Next, the methodology was extended to a real-time control method based on the Ensemble Kalman Filter method, using 87 online groundwater head measurements, and tested at the site. The real-time control of the well field resulted in a decrease of the electrical conductivity of the water at critical measurement points which indicates a reduced inflow of water originating from contaminated sites. It can be concluded that the simulation and the application confirm the feasibility of the real-time control concept.
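
The Ensemble Kalman Filter step behind this kind of real-time data assimilation can be sketched generically. The code below is the standard perturbed-observation EnKF analysis step (the Burgers et al. formulation), not the Hardhof implementation; the observation operator `H` and error level are placeholders.

```python
import numpy as np

def enkf_update(ensemble, H, obs, obs_err_std, rng):
    """One perturbed-observation Ensemble Kalman Filter analysis step.

    ensemble : (n_state, n_members) forecast ensemble (e.g. heads)
    H        : (n_obs, n_state) linear observation operator
    obs      : (n_obs,) measurements (e.g. groundwater heads)
    """
    n_state, n_mem = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)             # predicted-obs anomalies
    R = (obs_err_std ** 2) * np.eye(len(obs))             # obs error covariance
    Pxy = X @ HXp.T / (n_mem - 1)                         # state-obs covariance
    Pyy = HXp @ HXp.T / (n_mem - 1) + R                   # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain
    # Each member assimilates its own perturbed copy of the observations.
    obs_pert = obs[:, None] + rng.normal(0.0, obs_err_std, (len(obs), n_mem))
    return ensemble + K @ (obs_pert - HX)
```

In a real-time setting this update is applied every time a new batch of head measurements arrives, and the analyzed ensemble feeds the next control decision.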

  6. Simulation and Modeling of Positrons and Electrons in advanced Time-of-Flight Positron Annihilation Induced Auger Electron Spectroscopy Systems

    NASA Astrophysics Data System (ADS)

    Joglekar, Prasad; Shastry, Karthik; Satyal, Suman; Weiss, Alexander

    2011-10-01

    Time-of-Flight Positron Annihilation Induced Auger Electron Spectroscopy (T-O-F PAES) is a highly surface-selective analytical technique in which elemental identification is accomplished through a measurement of the flight-time distributions of Auger electrons resulting from the annihilation of core electrons by positrons. The SIMION charged-particle optics simulation software was used to model the trajectories of both the incident positrons and the outgoing electrons in our existing T-O-F PAES system, as well as in a new system currently under construction in our laboratory. The implications of these simulations for instrument design and performance are discussed.

  7. How can we deal with ANN in flood forecasting? As a simulation model or updating kernel!

    NASA Astrophysics Data System (ADS)

    Hassan Saddagh, Mohammad; Javad Abedini, Mohammad

    2010-05-01

    Flood forecasting and early warning, as a non-structural measure for flood control, is often considered the most effective and suitable alternative to mitigate the damage and human loss caused by floods. Forecast results, which are the output of hydrologic, hydraulic and/or black-box models, should secure accuracy of flood values and timing, especially for long lead times. The application of artificial neural networks (ANNs) in flood forecasting has received extensive attention in recent years due to their capability to capture the dynamics inherent in complex processes, including floods. However, results obtained from executing a plain ANN as a simulation model demonstrate a dramatic reduction in performance indices as lead time increases. This paper is intended to monitor the performance indices as they relate to flood forecasting and early warning using two different methodologies. While the first method employs a multilayer neural network trained with the back-propagation scheme to forecast the output hydrograph of a hypothetical river for forecast lead times up to 6.0 hr, the second method uses the 1D hydrodynamic MIKE11 model as the forecasting model and a multilayer neural network as an updating kernel, and the performance indices of the two are compared as lead time increases. Results presented in both graphical and tabular format indicate the superiority of MIKE11 coupled with an ANN updating kernel over the ANN as a simulation model alone. While the plain ANN produces more accurate results for short lead times, its errors grow rapidly for longer lead times; the second methodology provides more accurate and reliable results for longer forecast lead times.

  8. Thermal Simulations, Open Boundary Conditions and Switches

    NASA Astrophysics Data System (ADS)

    Burnier, Yannis; Florio, Adrien; Kaczmarek, Olaf; Mazur, Lukas

    2018-03-01

    SU(N) gauge theories on compact spaces have a non-trivial vacuum structure characterized by a countable set of topological sectors and their topological charge. In lattice simulations, every topological sector needs to be explored a number of times that reflects its weight in the path integral. Current lattice simulations are impeded by the so-called freezing of the topological charge: as the continuum is approached, energy barriers between topological sectors become well defined and the simulations get trapped in a given sector. A possible way out was introduced by Lüscher and Schaefer using open boundary conditions in the time extent. However, this solution cannot be used for thermal simulations, where the time direction is required to be periodic. In these proceedings, we present results obtained using open boundary conditions in space, at non-zero temperature. With these conditions, the topological charge is not quantized and the topological barriers are lifted. A downside of this method is the strong finite-size effects introduced by the boundary conditions. We also present some exploratory results which show how these conditions could be used at an algorithmic level to reshuffle the system and generate periodic configurations with non-zero topological charge.

  9. Simulation of Transcritical CO2 Refrigeration System with Booster Hot Gas Bypass in Tropical Climate

    NASA Astrophysics Data System (ADS)

    Santosa, I. D. M. C.; Sudirman; Waisnawa, IGNS; Sunu, PW; Temaja, IW

    2018-01-01

    Computer simulation becomes significantly important for performance analysis, since building an experimental rig entails high cost and time allocation, especially for a CO2 refrigeration system; modifying the rig also needs additional cost and time. One computer simulation program well suited to refrigeration systems is the Engineering Equation Solver (EES). For CO2 refrigeration systems, environmental issues are a priority in system development, since carbon dioxide (CO2) is a natural and clean refrigerant. This study aims to analyze the effectiveness of an EES simulation of a CO2 transcritical refrigeration system with booster hot gas bypass at high outdoor temperatures. The research was carried out by theoretical study and numerical analysis of the refrigeration system using the EES program. Data input and simulation validation were obtained from experimental and secondary data. The results showed that the coefficient of performance (COP) decreased gradually as the outdoor temperature increased. The results also show that the program can calculate the performance of the refrigeration system quickly and accurately, so it will be a significant preliminary reference for improving CO2 refrigeration system design for hot climates.

  10. Accelerating sino-atrium computer simulations with graphic processing units.

    PubMed

    Zhang, Hong; Xiao, Zheng; Lin, Shien-fong

    2015-01-01

    Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and their interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed for a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partitioning. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time of the non-optimized GPU program decreased by 62% with respect to a serial program running on a CPU, and by 80% after optimization. The larger the tissue, the more significant the acceleration became. The results demonstrate the effectiveness of the proposed GPU-accelerating methods and their promising application to more complicated biological simulations.
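
Operator splitting, the step that makes the task parallel, can be illustrated with a toy 1-D reaction-diffusion system: the per-cell reaction update has no inter-cell dependency (and so maps naturally onto one GPU thread per cell), while the diffusion update couples neighbours. The reaction term below is a stand-in, not the SANC model from the paper, and the sketch is plain NumPy rather than GPU code.

```python
import numpy as np

def step_operator_splitting(v, dt, dx, D, reaction):
    """One time step of a 1-D reaction-diffusion system split into
    (a) an independent per-cell reaction update (the embarrassingly
    parallel part) and (b) a diffusion update coupling neighbours,
    using an explicit finite difference with no-flux boundaries."""
    # (a) reaction: no inter-cell dependency, trivially parallel
    v = v + dt * reaction(v)
    # (b) diffusion: discrete Laplacian with reflecting ends
    lap = np.zeros_like(v)
    lap[1:-1] = v[2:] - 2 * v[1:-1] + v[:-2]
    lap[0] = v[1] - v[0]
    lap[-1] = v[-2] - v[-1]
    return v + dt * D * lap / dx**2
```

On a GPU, step (a) becomes one kernel launch over all cells and step (b) a stencil kernel; the no-flux Laplacian conserves the total of `v`, so only the reaction term changes the domain sum.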

  11. Time-Varying Loads of Co-Axial Rotor Blade Crossings

    NASA Technical Reports Server (NTRS)

    Schatzman, Natasha L.; Komerath, Narayanan; Romander, Ethan A.

    2017-01-01

    The blade crossing event of a coaxial counter-rotating rotor is a potential source of noise and impulsive blade loads. Blade crossings occur many times during each rotor revolution. In previous research by the authors, this phenomenon was analyzed by simulating two airfoils passing each other at specified speeds and vertical separation distances, using the compressible Navier-Stokes solver OVERFLOW. The simulations explored mutual aerodynamic interactions associated with thickness, circulation, and compressibility effects. Results revealed the complex nature of the aerodynamic impulses generated by upper-lower airfoil interactions. In this paper, the coaxial rotor system is simulated using two trains of airfoils, vertically offset and traveling in opposite directions. The simulation represents multiple blade crossings in a rotor revolution by specifying horizontal distances between each airfoil in the train based on the circumferential distance between blade tips. The shed vorticity from prior crossing events affects each pair of upper and lower airfoils. The aerodynamic loads on the airfoils and the flow field characteristics are computed before, at, and after each airfoil crossing. Results from the multiple-airfoil simulation show noticeable changes in the airfoil aerodynamics, with additional fluctuation introduced into the aerodynamic time history.

  12. Mentally simulated movements in virtual reality: does Fitts's law hold in motor imagery?

    PubMed

    Decety, J; Jeannerod, M

    1995-12-14

    This study was designed to investigate mentally simulated actions in a virtual reality environment. Naive human subjects (n = 15) were instructed to imagine themselves walking in a three-dimensional virtual environment toward gates of different apparent widths placed at three different apparent distances. Each subject performed nine blocks of six trials in a randomised order. The response time (reaction time and mental walking time) was measured as the duration between an acoustic go signal and a motor signal produced by the subject. There was a combined effect on response time of both gate width and distance. Response time increased for decreasing apparent gate widths when the gate was placed at different distances. These results support the notion that mentally simulated actions are governed by central motor rules.
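
The Fitts's law relation the study tests, MT = a + b·log2(2D/W), is easy to state in code. The regression constants `a` and `b` below are illustrative placeholders, not values fitted to the study's data.

```python
import math

def fitts_index_of_difficulty(distance, width):
    """Classic Fitts formulation: ID = log2(2D / W), in bits."""
    return math.log2(2.0 * distance / width)

def predicted_movement_time(distance, width, a=0.1, b=0.2):
    """MT = a + b * ID; a (intercept, s) and b (s/bit) are illustrative."""
    return a + b * fitts_index_of_difficulty(distance, width)
```

The law predicts exactly the pattern reported above: for a fixed distance, narrowing the gate raises the index of difficulty and therefore the (mental) movement time.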

  13. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method that uses Bayesian analysis to classify time series data in the international emissions trading market, based on agent-based simulation, and compares it with a Discrete Fourier Transform analytical method. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods express time series data as distances in the mapped space, which makes understanding and inference easier than working with the raw time series; (2) the methods can analyze uncertain time series data, including both stationary and non-stationary processes, using distances computed via agent-based simulation; and (3) the Bayesian analytical method can resolve a 1% difference in the agents' emission reduction targets.
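
One simple way to realize a Discrete Fourier Transform mapping of this kind is to represent each series by the magnitudes of its first few Fourier coefficients and compare series by distance in that feature space. This is a generic sketch, not the authors' exact mapping.

```python
import numpy as np

def dft_features(series, n_coeffs=4):
    """Map a time series (e.g. a simulated market-price path) to the
    magnitudes of its first few Fourier coefficients (DC excluded)."""
    spectrum = np.fft.rfft(np.asarray(series, float))
    return np.abs(spectrum[1:1 + n_coeffs])

def series_distance(a, b, n_coeffs=4):
    """Euclidean distance between two series in the mapped feature
    space; small distances group series with similar dynamics."""
    return float(np.linalg.norm(dft_features(a, n_coeffs) - dft_features(b, n_coeffs)))
```

Classification then reduces to clustering or nearest-neighbour search over these distances, which is the sense in which the mapped representation is "easier to understand and infer from" than the raw series.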

  14. New analytic results for speciation times in neutral models.

    PubMed

    Gernhard, Tanja

    2008-05-01

    In this paper, we investigate the standard Yule model and a recently studied model of speciation and extinction, the "critical branching process." We develop an analytic way (as opposed to the common simulation approach) of calculating the speciation times in a reconstructed phylogenetic tree. Simple expressions for the density and the moments of the speciation times are obtained. Methods for dating a speciation event become valuable if no time scale is available for the reconstructed phylogenetic trees. A missing time scale could be due to supertree methods, morphological data, or molecular data that violates the molecular clock. Our analytic approach is particularly useful for the model with extinction, since simulations of birth-death processes conditioned on obtaining n extant species today are quite delicate. Further, simulations are very time consuming for large n under both models.
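
For intuition, the forward Yule (pure-birth) process is easy to simulate: with k lineages, the waiting time to the next speciation is exponential with rate k·λ, so the expected time to reach n species is (1/λ)·Σ_{k=1}^{n-1} 1/k. The sketch below is this simple forward simulation, not the conditioned reconstruction treated analytically in the paper.

```python
import numpy as np

def yule_total_time(n, birth_rate, rng):
    """Forward-simulate a Yule pure-birth tree from 1 to n species and
    return the total elapsed time. With k lineages the waiting time to
    the next speciation is Exponential(rate = k * birth_rate)."""
    t = 0.0
    for k in range(1, n):
        t += rng.exponential(1.0 / (k * birth_rate))
    return t
```

Averaging many replicates recovers the harmonic-sum expectation, which is the kind of quantity the paper obtains in closed form instead of by simulation.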

  15. Recent applications of boxed molecular dynamics: a simple multiscale technique for atomistic simulations.

    PubMed

    Booth, Jonathan; Vazquez, Saulo; Martinez-Nunez, Emilio; Marks, Alison; Rodgers, Jeff; Glowacki, David R; Shalashilin, Dmitrii V

    2014-08-06

    In this paper, we briefly review the boxed molecular dynamics (BXD) method, which allows analysis of thermodynamics and kinetics in complicated molecular systems. BXD is a multiscale technique in which thermodynamics and long-time dynamics are recovered from a set of short-time simulations. We review previous applications of BXD to peptide cyclization, solution-phase organic reaction dynamics and desorption of ions from self-assembled monolayers (SAMs). We also report preliminary results of simulations of diamond etching mechanisms and protein unfolding in atomic force microscopy experiments. The latter demonstrate a correlation between the protein's structural motifs and its potential of mean force. Simulation of these processes by standard molecular dynamics (MD) is typically not possible, because the experimental time scales are very long. However, BXD yields well-converged and physically meaningful results. Compared with other methods of accelerated MD, our BXD approach is very simple; it is easy to implement, and it provides an integrated approach for simultaneously obtaining both thermodynamics and kinetics. It also provides a strategy for obtaining statistically meaningful dynamical results in regions of configuration space that standard MD approaches would visit only very rarely.

  16. Assessing the thermo-mechanical TaMeTirE model in offline vehicle simulation and driving simulator tests

    NASA Astrophysics Data System (ADS)

    Durand-Gasselin, Benoit; Dailliez, Thibault; Mössner-Beigel, Monika; Knorr, Stephanie; Rauh, Jochen

    2010-12-01

    This paper presents experiences using Michelin's thermo-mechanical TaMeTirE tyre model for real-time handling applications in the field of advanced passenger car simulation. Passenger car handling simulations were performed using the tyre model in a full-vehicle real-time environment in order to assess TaMeTirE's level of consistency with real on-track handling behaviour. To achieve this goal, a first offline comparison with a state-of-the-art handling tyre model was carried out on three handling manoeuvres. Then, online real-time simulations of steering-wheel steps and slaloms in a straight line were run on Daimler's driving simulator by skilled and unskilled drivers. Two analytical tyre temperature effects and two inflation pressure effects were examined in order to assess their impact on the handling behaviour of the vehicle. This paper underlines the realism of the handling simulation results obtained with TaMeTirE, and shows the significant impact of a pressure or temperature effect on the handling behaviour of a car.

  17. A Fast-Time Simulation Environment for Airborne Merging and Spacing Research

    NASA Technical Reports Server (NTRS)

    Bussink, Frank J. L.; Doble, Nathan A.; Barmore, Bryan E.; Singer, Sharon

    2005-01-01

    As part of NASA's Distributed Air/Ground Traffic Management (DAG-TM) effort, NASA Langley Research Center is developing concepts and algorithms for merging multiple aircraft arrival streams and precisely spacing aircraft over the runway threshold. An airborne tool has been created for this purpose, called Airborne Merging and Spacing for Terminal Arrivals (AMSTAR). To evaluate the performance of AMSTAR and complement human-in-the-loop experiments, a simulation environment has been developed that enables fast-time studies of AMSTAR operations. The environment is based on TMX, a multiple aircraft desktop simulation program created by the Netherlands National Aerospace Laboratory (NLR). This paper reviews the AMSTAR concept, discusses the integration of the AMSTAR algorithm into TMX and the enhancements added to TMX to support fast-time AMSTAR studies, and presents initial simulation results.

  18. Latin hypercube sampling and geostatistical modeling of spatial uncertainty in a spatially explicit forest landscape model simulation

    Treesearch

    Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu

    2005-01-01

    Geostatistical stochastic simulation is usually combined with the Monte Carlo method to quantify uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models, a result of their complexity, it is usually infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
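
A basic Latin hypercube sampler is short enough to sketch: each dimension's range is cut into n equal strata, each stratum is sampled exactly once, and strata are paired across dimensions by independent random permutations. This is the generic construction on the unit hypercube, not the geostatistical variant used in the study.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Latin hypercube sample on [0, 1)^d: every column hits each of
    the n equal strata exactly once, so far fewer runs than plain
    Monte Carlo still cover each input's full range."""
    samples = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        perm = rng.permutation(n_samples)          # stratum order for this dim
        samples[:, d] = (perm + rng.random(n_samples)) / n_samples
    return samples
```

This stratification is why a few dozen LHS runs can stand in for the hundreds or thousands of plain Monte Carlo runs that a slow forest landscape model cannot afford.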

  19. Simulations of Convection Zone Flows and Measurements from Multiple Viewing Angles

    NASA Technical Reports Server (NTRS)

    Duvall, Thomas L.; Hanasoge, Shravan

    2011-01-01

    A deep-focusing time-distance measurement technique has been applied to linear acoustic simulations of a solar interior perturbed by convective flows. The simulations cover the full sphere for r/R greater than 0.2. From these it is straightforward to simulate observations from different viewing angles and to test how multiple viewing angles enhance detectability. Some initial results will be presented.

  20. Apparent resistivity for transient electromagnetic induction logging and its correction in radial layer identification

    NASA Astrophysics Data System (ADS)

    Meng, Qingxin; Hu, Xiangyun; Pan, Heping; Xi, Yufei

    2018-04-01

    We propose an algorithm for calculating all-time apparent resistivity from transient electromagnetic induction logging. The algorithm is based on the whole-space transient electric field expression of the uniform model and Halley's optimisation. In trial calculations for uniform models, the all-time algorithm is shown to have high accuracy. We use the finite-difference time-domain method to simulate the transient electromagnetic field in radial two-layer models without wall rock and convert the simulation results to apparent resistivity using the all-time algorithm. The time-varying apparent resistivity reflects the radially layered geoelectrical structure of the models and the apparent resistivity of the earliest time channel follows the true resistivity of the inner layer; however, the apparent resistivity at larger times reflects the comprehensive electrical characteristics of the inner and outer layers. To accurately identify the outer layer resistivity based on the series relationship model of the layered resistance, the apparent resistivity and diffusion depth of the different time channels are approximately replaced by related model parameters; that is, we propose an apparent resistivity correction algorithm. By correcting the time-varying apparent resistivity of radial two-layer models, we show that the correction results reflect the radially layered electrical structure and the corrected resistivities of the larger time channels follow the outer layer resistivity. The transient electromagnetic fields of radially layered models with wall rock are simulated to obtain the 2D time-varying profiles of the apparent resistivity and corrections. The results suggest that the time-varying apparent resistivity and correction results reflect the vertical and radial geoelectrical structures. For models with small wall-rock effect, the correction removes the effect of the low-resistance inner layer on the apparent resistivity of the larger time channels.
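
The Halley's optimisation referred to above is the third-order root-finding iteration x_{k+1} = x_k − 2 f f′ / (2 f′² − f f″), which converges cubically near a simple root. A generic implementation is sketched below; the actual all-time algorithm applies it to the whole-space transient electric field expression, which is not reproduced here.

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method: find a root of f given its first and second
    derivatives, starting from x0."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        denom = 2.0 * dfx * dfx - fx * d2fx
        if denom == 0.0:
            break  # degenerate step; give up with current estimate
        step = 2.0 * fx * dfx / denom
        x -= step
        if abs(step) < tol:
            break
    return x
```

In the apparent-resistivity setting, f would be the mismatch between the measured transient voltage and the uniform-model response as a function of resistivity, solved independently for each time channel.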

  1. The NASA Lewis integrated propulsion and flight control simulator

    NASA Technical Reports Server (NTRS)

    Bright, Michelle M.; Simon, Donald L.

    1991-01-01

    A new flight simulation facility has been developed at NASA Lewis to allow integrated propulsion-control and flight-control algorithm development and evaluation in real time. As a preliminary check of the simulator facility and the correct integration of its components, the control design and physics models for an STOVL fighter aircraft model have been demonstrated, with their associated system integration and architecture, pilot vehicle interfaces, and display symbology. The results show that this fixed-based flight simulator can provide real-time feedback and display of both airframe and propulsion variables for validation of integrated systems and testing of control design methodologies and cockpit mechanizations.

  2. Temporal Evolution of the Plasma Sheath Surrounding Solar Cells in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Willis, Emily M.; Pour, Maria Z. A.

    2017-01-01

    Initial results from the PIC simulation and the LEM simulation have been presented. The PIC simulation results show that more detailed study is required to refine the ISS solar array current collection model and to understand the development of the current collection in time. The initial results from the LEM demonstrate that it is possible the transients are caused by solar array interaction with the environment, but there are presently too many assumptions in the model to be certain. Continued work on the PIC simulation will provide valuable information on the development of the barrier potential, which will allow refinement of the LEM simulation and a better understanding of the causes and effects of the transients.

  3. Physico-Chemical Evolution of Organic Aerosol from Wildfire Emissions

    NASA Astrophysics Data System (ADS)

    Croteau, P.; Jathar, S.; Akherati, A.; Galang, A.; Tarun, S.; Onasch, T. B.; Lewane, L.; Herndon, S. C.; Roscioli, J. R.; Yacovitch, T. I.; Fortner, E.; Xu, W.; Daube, C.; Knighton, W. B.; Werden, B.; Wood, E.

    2017-12-01

    Wildfires are the largest combustion-related source of carbonaceous emissions to the atmosphere; these include direct emissions of black carbon (BC), primary organic aerosol (POA) and semi-volatile, intermediate-volatility, and volatile organic compounds (SVOCs, IVOCs, and VOCs). However, there are large uncertainties surrounding the evolution of these carbonaceous emissions as they are physically and chemically transformed in the atmosphere. To understand these transformations, we performed sixteen experiments using an environmental chamber to simulate day- and night-time chemistry of gas- and aerosol-phase emissions from 6 different fuels at the Fire Laboratory in Missoula, MT. Across the test matrix, the experiments simulated 2 to 8 hours of equivalent day-time aging (with the hydroxyl radical and ozone) or several hours of night-time aging (with the nitrate radical). Aging resulted in an average organic aerosol (OA) mass enhancement of 28% although the full range of OA mass enhancements varied between -10% and 254%. These enhancement findings were consistent with chamber and flow reactor experiments performed at the Fire Laboratory in 2010 and 2012 but, similar to previous studies, offered no evidence to link the OA mass enhancement to fuel type or oxidant exposure. Experiments simulating night-time aging resulted in an average OA mass enhancement of 10% and subsequent day-time aging resulted in a decrease in OA mass of 8%. While small, for the first time, these experiments highlighted the continuous nature of the OA evolution as the wildfire smoke cycled through night- and day-time processes. 
Ongoing work is focussed on (i) quantifying bulk compositional changes in OA, (ii) comparing the near-field aging simulated in this work with far-field aging simulated during the same campaign (via a mini chamber and flow tube) and (iii) integrating wildfire smoke aging datasets over the past decade to examine the relationship between OA mass enhancement ratios, modified combustion efficiency, initial aerosol concentrations and composition, aerosol size, oxidant exposure, VOC:NOx ratios, and emissions and speciation of SOA precursors.

  4. Simulation-Based Bronchoscopy Training

    PubMed Central

    Kennedy, Cassie C.; Maldonado, Fabien

    2013-01-01

    Background: Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. Methods: We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. Results: From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n = 8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n = 7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, −1.47 to 2.69]) and process (0.33 [95% CI, −1.46 to 2.11]) outcomes (n = 2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Conclusions: Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few. PMID:23370487
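
The pooled effect sizes quoted above come from random-effects meta-analysis; the standard DerSimonian-Laird estimator can be sketched as follows. This is the textbook procedure, not the authors' exact analysis code, and the inputs are per-study effect sizes with their variances.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes under a random-effects model
    (DerSimonian-Laird); returns the pooled effect and its 95% CI."""
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q heterogeneity
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_star = 1.0 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

A pooled estimate like "1.21 (95% CI, 0.82-1.60)" is exactly the kind of output this computation produces from the individual studies' standardized effects.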

  5. Development of a Searchable Database of Cryoablation Simulations for Use in Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boas, F. Edward, E-mail: boasf@mskcc.org; Srimathveeravalli, Govindarajan, E-mail: srimaths@mskcc.org; Durack, Jeremy C., E-mail: durackj@mskcc.org

    Purpose: To create and validate a planning tool for multiple-probe cryoablation, using simulations of ice ball size and shape for various ablation probe configurations, ablation times, and types of tissue ablated. Materials and Methods: Ice ball size and shape were simulated using the Pennes bioheat equation. Five thousand six hundred and seventy different cryoablation procedures were simulated, using 1–6 cryoablation probes and 1–2 cm spacing between probes. The resulting ice ball was measured along three perpendicular axes and recorded in a database. Simulated ice ball sizes were compared to gel experiments (26 measurements) and clinical cryoablation cases (42 measurements). The clinical cryoablation measurements were obtained from a HIPAA-compliant retrospective review of kidney and liver cryoablation procedures between January 2015 and February 2016. Finally, we created a web-based cryoablation planning tool, which uses the cryoablation simulation database to look up the probe spacing and ablation time that produce the desired ice ball shape and dimensions. Results: Average absolute error between the simulated and experimentally measured ice balls was 1 mm in gel experiments and 4 mm in clinical cryoablation cases. The simulations accurately predicted the degree of synergy in multiple-probe ablations. The cryoablation simulation database covers a wide range of ice ball sizes and shapes up to 9.8 cm. Conclusion: Cryoablation simulations accurately predict the ice ball size in multiple-probe ablations. The cryoablation database can be used to plan ablation procedures: given the desired ice ball size and shape, it will find the number and type of probes, probe configuration and spacing, and ablation time required.
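
The Pennes bioheat equation referenced above can be sketched in one dimension with an explicit finite-difference update. This is only an illustration with generic textbook-scale constants; the paper's 3-D solver and the freezing physics (latent heat, temperature-dependent properties) are not reproduced here:

```python
import numpy as np

# 1-D explicit finite-difference sketch of the Pennes bioheat equation:
#   rho*c dT/dt = k d2T/dx2 + w_b*rho_b*c_b*(T_a - T)
# with a cryoprobe boundary held at -40 C. Constants are illustrative.
rho, c, k = 1050.0, 3600.0, 0.5           # tissue density, heat capacity, conductivity
w_b, rho_b, c_b = 0.0005, 1060.0, 3600.0  # perfusion rate and blood properties
T_a = 37.0                                # arterial temperature (deg C)

dx, dt = 1e-3, 0.05                       # 1 mm grid, 50 ms step (explicitly stable here)
x = np.arange(0.0, 0.05, dx)              # 5 cm of tissue
T = np.full(x.size, 37.0)

for _ in range(2000):                     # 100 s of freezing
    T[0] = -40.0                          # cryoprobe surface temperature
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    perf = w_b * rho_b * c_b * (T_a - T[1:-1])   # perfusion warms tissue toward T_a
    T[1:-1] += dt * (k * lap + perf) / (rho * c)

iceball_mm = 1e3 * dx * np.count_nonzero(T <= 0.0)  # extent of frozen (<= 0 C) tissue
```

A 3-D version of this update, with phase change, evaluated over many probe placements and durations, is the kind of computation the searchable database above precomputes.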

  6. Predictive validity of driving-simulator assessments following traumatic brain injury: a preliminary study.

    PubMed

    Lew, Henry L; Poole, John H; Lee, Eun Ha; Jaffe, David L; Huang, Hsiu-Chen; Brodd, Edward

    2005-03-01

    To evaluate whether driving simulator and road test evaluations can predict long-term driving performance, we conducted a prospective study on 11 patients with moderate to severe traumatic brain injury. Sixteen healthy subjects were also tested to provide normative values on the simulator at baseline. At their initial evaluation (time-1), subjects' driving skills were measured during a 30-minute simulator trial using an automated 12-measure Simulator Performance Index (SPI), while a trained observer also rated their performance using a Driving Performance Inventory (DPI). In addition, patients were evaluated on the road by a certified driving evaluator. Ten months later (time-2), family members observed patients driving for at least 3 hours over 4 weeks and rated their driving performance using the DPI. At time-1, patients were significantly impaired on automated SPI measures of driving skill, including: speed and steering control, accidents, and vigilance to a divided-attention task. These simulator indices significantly predicted the following aspects of observed driving performance at time-2: handling of automobile controls, regulation of vehicle speed and direction, higher-order judgment and self-control, as well as a trend-level association with car accidents. Automated measures of simulator skill (SPI) were more sensitive and accurate than observational measures of simulator skill (DPI) in predicting actual driving performance. To our surprise, the road test results at time-1 showed no significant relation to driving performance at time-2. Simulator-based assessment of patients with brain injuries can provide ecologically valid measures that, in some cases, may be more sensitive than a traditional road test as predictors of long-term driving performance in the community.

  7. Performance evaluation of GPU parallelization, space-time adaptive algorithms, and their combination for simulating cardiac electrophysiology.

    PubMed

    Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo

    2018-02-01

    The use of computer models as a tool for studying and understanding the complex phenomena of cardiac electrophysiology has attained increased importance. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, two different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU; multicore, GPU, and space adaptivity; multicore, GPU, space adaptivity, and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs, and 3D simulations on a ventricular mouse mesh, i.e., a complex geometry, under sinus-rhythm and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48×. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165× and 498×. The tested methods were able to reduce the execution time of a simulation by more than 498× for a complex cellular model in a slab geometry and by 165× in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Assessment of simulation fidelity using measurements of piloting technique in flight. II

    NASA Technical Reports Server (NTRS)

    Ferguson, S. W.; Clement, W. F.; Hoh, R. H.; Cleveland, W. B.

    1985-01-01

    Two piloting tasks used on the Vertical Motion Simulator (presently being used to assess the fidelity of UH-60A simulation) are evaluated: (1) the dash/quickstop nap-of-the-earth (NOE) piloting task, and (2) the bob-up task. Data from these two flight test experiments are presented which provide information on the effect of reduced visual field of view, variation in scene content and texture, and the effect of pure time delay in the closed-loop pilot response. In comparison with task performance results obtained in flight tests, the results from the simulation indicate that the pilot's NOE task performance in the simulator is significantly degraded.

  9. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
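
The subcycling idea, advancing fast degrees of freedom with small substeps inside one coarse step, can be illustrated on a toy two-component ODE. This is a generic sketch, not the paper's high-order integrator or its collision handling:

```python
import numpy as np

def rhs(y):
    # toy system: component 0 relaxes slowly, component 1 is stiff/fast
    return np.array([-y[0], -50.0 * y[1]])

def step_subcycled(y, dt, fast=(1,), n_sub=20):
    """Forward-Euler step in which flagged fast components take n_sub substeps,
    while slow components take one step with their rate frozen at the start."""
    y = y.copy()
    slow_rate = rhs(y)
    for _ in range(n_sub):
        r = rhs(y)
        for i in fast:                    # only fast components use the fine step
            y[i] += (dt / n_sub) * r[i]
    for i in range(len(y)):
        if i not in fast:                 # slow components use the coarse step
            y[i] += dt * slow_rate[i]
    return y

y = np.array([1.0, 1.0])
for _ in range(10):                       # integrate to t = 1 with coarse dt = 0.1
    y = step_subcycled(y, 0.1)
```

Without subcycling, plain forward Euler at dt = 0.1 is unstable for the fast component (|1 - 50*0.1| = 4 > 1), so the coarse step alone would diverge; the substeps keep it stable at the cost of extra evaluations only where needed.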

  10. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGES

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  11. Isolating Added Mass Load Components of CPAS Main Clusters

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.

    2017-01-01

    The current simulation for the Capsule Parachute Assembly System (CPAS) lacks fidelity in representing added mass for the 116 ft Do ringsail Main parachute. The availability of 3-D models of inflating Main canopies allowed for better estimation of the enclosed air volume as a function of time. This was combined with trajectory state information to estimate the components making up the measured axial loads. A proof of concept for an alternate simulation algorithm was developed, based on enclosed volume as the primary independent variable rather than drag-area growth. Databases of volume growth and parachute drag area vs. volume were developed for several flight tests. Other state information was read directly from test data rather than numerically propagated. The resulting simulated peak loads were close in timing and magnitude to the measured loads data. However, the results are very sensitive to data curve fitting and may not be suitable for Monte Carlo simulations. It was assumed that apparent mass was either negligible or a small fraction of the enclosed mass, with little difference in results.

  12. A time domain simulation of a beam control system

    NASA Astrophysics Data System (ADS)

    Mitchell, J. R.

    1981-02-01

    The Airborne Laser Laboratory (ALL) is being developed by the Air Force to investigate the integration and operation of high energy laser components in a dynamic airborne environment and to study the propagation of laser light from an airborne vehicle to an airborne target. The ALL is composed of several systems; among these are the Airborne Pointing and Tracking System (APT) and the Automatic Alignment System (AAS). This report presents the results of performing a time domain dynamic simulation for an integrated beam control system composed of the APT and AAS. The simulation is performed on a digital computer using the MIMIC language. It includes models of the dynamics of the system and of disturbances. Also presented in the report are the rationales and developments of these models. The data from the simulation code are summarized by several plots. In addition, results from processing the data with waveform analysis packages are presented. The results are discussed and conclusions are drawn.

  13. MARIKA - A model revision system using qualitative analysis of simulations. [of human orientation system

    NASA Technical Reports Server (NTRS)

    Groleau, Nicolas; Frainier, Richard; Colombano, Silvano; Hazelton, Lyman; Szolovits, Peter

    1993-01-01

    This paper describes portions of a novel system called MARIKA (Model Analysis and Revision of Implicit Key Assumptions) to automatically revise a model of the normal human orientation system. The revision is based on analysis of discrepancies between experimental results and computer simulations. The discrepancies are calculated from qualitative analysis of quantitative simulations. The experimental and simulated time series are first discretized into time segments. Each segment is then approximated by linear combinations of simple shapes. The domain theory and knowledge are represented as a constraint network. Incompatibilities detected during constraint propagation within the network yield both parameter and structural model alterations. Interestingly, MARIKA diagnosed a data set from the Massachusetts Eye and Ear Infirmary Vestibular Laboratory as abnormal even though the data was tagged as normal. Published results from other laboratories confirmed the finding. These encouraging results could lead to a useful clinical vestibular tool and to a scientific discovery system for space vestibular adaptation.

  14. Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances

    DOE PAGES

    Zhang, Yan; Inouye, Hideyo; Crowley, Michael; ...

    2016-10-14

    Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. As a result, this algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.
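
The quantization scheme described above can be sketched as follows: pair distances are binned per atom-type pair, so the scattering factors come out of the inner sum and far fewer sinc evaluations are needed. The three-atom "molecule" and the constant scattering factors below are illustrative only:

```python
import numpy as np
from itertools import combinations

def debye_quantized(coords, types, f, q, dr=0.05):
    """Debye-formula intensity I(q) from binned, atom-type-labelled pair distances.
    f maps atom type -> scattering-factor function of q (constants used here)."""
    intensity = sum(f[t](q) ** 2 for t in types)          # i == j self terms
    hists = {}                                            # (type_a, type_b) -> histogram
    for i, j in combinations(range(len(types)), 2):
        d = np.linalg.norm(coords[i] - coords[j])
        key = tuple(sorted((types[i], types[j])))
        b = round(d / dr)                                 # quantize distance to a bin
        hists.setdefault(key, {})
        hists[key][b] = hists[key].get(b, 0) + 1
    for (ta, tb), bins in hists.items():
        fa_fb = f[ta](q) * f[tb](q)                       # factored out of the inner sum
        for b, n in bins.items():
            qr = q * (b * dr)
            # np.sinc(x) = sin(pi x)/(pi x), so sin(qr)/(qr) = np.sinc(qr/pi)
            intensity = intensity + 2 * n * fa_fb * np.sinc(qr / np.pi)
    return intensity

coords = np.array([[0.0, 0.0, 0.0], [1.54, 0.0, 0.0], [0.0, 1.43, 0.0]])
types = ["C", "C", "O"]
f = {"C": lambda q: 6.0 + 0 * q, "O": lambda q: 8.0 + 0 * q}
q = np.linspace(0.1, 2.0, 50)
I = debye_quantized(coords, types, f, q)
```

For a fibril with millions of pairs, the histogram collapses the double sum over atoms into a short sum over occupied bins, which is the source of the speed-up the abstract reports.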

  15. Non-conforming finite-element formulation for cardiac electrophysiology: an effective approach to reduce the computation time of heart simulations without compromising accuracy

    NASA Astrophysics Data System (ADS)

    Hurtado, Daniel E.; Rojas, Guillermo

    2018-04-01

    Computer simulations constitute a powerful tool for studying the electrical activity of the human heart, but computational effort remains prohibitively high. In order to recover accurate conduction velocities and wavefront shapes, the mesh size in linear element (Q1) formulations cannot exceed 0.1 mm. Here we propose a novel non-conforming finite-element formulation for the non-linear cardiac electrophysiology problem that results in accurate wavefront shapes and lower mesh dependence of the conduction velocity, while retaining the same number of global degrees of freedom as Q1 formulations. As a result, coarser discretizations of cardiac domains can be employed in simulations without significant loss of accuracy, thus reducing the overall computational effort. We demonstrate the applicability of our formulation in biventricular simulations using a coarse mesh size of ~1 mm, and show that the activation wave pattern closely follows that obtained in fine-mesh simulations at a fraction of the computation time, thus improving the accuracy-efficiency trade-off of cardiac simulations.

  16. Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Inouye, Hideyo; Crowley, Michael

    Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. As a result, this algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.

  17. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Lalime, Aimee L.; Grosveld, Ferdinand W.; Rizzi, Stephen A.; Sullivan, Brenda M.

    2003-01-01

    Applying binaural simulation techniques to structural acoustic data can be computationally intensive because the number of discrete noise sources can be very large. Typically, Head Related Transfer Functions (HRTFs) are used to individually filter the signals from each of the sources in the acoustic field. Therefore, creating a binaural simulation implies the use of potentially hundreds of real-time filters. This paper details two methods of reducing the number of real-time computations required: (i) using the singular value decomposition (SVD) to reduce the complexity of the HRTFs by breaking them into dominant singular values and vectors, and (ii) using equivalent source reduction (ESR) to reduce the number of sources to be analyzed in real time by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. The ESR and SVD reduction methods can be combined to provide an estimated computation time reduction of 99.4% for the structural acoustic data tested. In addition, preliminary tests have shown a 97% correlation between the results of the combined reduction methods and the results found with the current binaural simulation techniques.
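
The SVD reduction idea, replacing per-source filters with a small set of shared basis filters, can be sketched with random low-rank data standing in for measured HRTFs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_taps = 200, 128
# HRTF-like filter bank with low effective rank (illustrative random data,
# not measured HRTFs): one row of filter taps per source direction.
H = rng.standard_normal((n_sources, 5)) @ rng.standard_normal((5, n_taps))

U, s, Vt = np.linalg.svd(H, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))            # number of dominant singular values
H_k = U[:, :k] * s[:k] @ Vt[:k]             # rank-k reconstruction of the bank

# Instead of running n_sources filters, mix the sources into k virtual
# channels (weights U*s) and run only the k shared basis filters (rows of Vt):
signals = rng.standard_normal((n_sources, 1000))
mixed = (U[:, :k] * s[:k]).T @ signals      # k channels to filter in real time
```

Each of the `k` mixed channels would then be convolved with its basis filter and the outputs summed, so the real-time filter count drops from `n_sources` to `k`; the summation is valid because convolution is linear in the filter coefficients.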

  18. Study of Interesting Solidification Phenomena on the Ground and in Space (MEPHISTO)

    NASA Technical Reports Server (NTRS)

    Favier, J.-J.; Iwan, J.; Alexander, D.; Garandet, J.-P.

    1998-01-01

    Real-time Seebeck voltage variations in a Sn-Bi melt during directional solidification in the MEPHISTO spaceflight experiment, flown on the USMP-3 mission, can be correlated with well-characterized thruster firings and an Orbital Maneuvering System (OMS) burn. The Seebeck voltage measurement reflects the response of the instantaneous average melt composition at the melt-crystal interface, which allowed us to make a direct comparison of numerical simulations with the experimentally obtained Seebeck signals. Based on the results of preflight and real-time computations, several well-defined thruster firing events were programmed to occur at specific times during the experiment. In particular, we simulated the effects of the thruster firings on melt and crystal composition in a directionally solidifying Sn-Bi alloy. The relative accelerations produced by the firings were simulated by impulsive accelerations of the same magnitude, duration and orientation as the requested firings. A comparison of the simulation results with the Seebeck signal indicates good agreement between the two. This unique opportunity allowed us, for the first time, to quantitatively characterize actual g-jitter effects on an actual crystal growth experiment and to properly calibrate our models of g-jitter effects on crystal growth.

  19. Implementation and evaluation of a simulation curriculum for paediatric residency programs including just-in-time in situ mock codes.

    PubMed

    Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam

    2012-02-01

    To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. Paediatric residency program at BC Children's Hospital, Vancouver, British Columbia. The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes.

  20. Real-time simulation of the nonlinear visco-elastic deformations of soft tissues.

    PubMed

    Basafa, Ehsan; Farahmand, Farzam

    2011-05-01

    Mass-spring-damper (MSD) models are often used for real-time surgery simulation due to their fast response and fairly realistic deformation replication. An improved real-time simulation model of soft tissue deformation due to a laparoscopic surgical indenter was developed and tested. The mechanical realism of conventional MSD models was improved using nonlinear springs and nodal dampers, while their high computational efficiency was maintained using an adapted implicit integration algorithm. New practical algorithms for model parameter tuning, collision detection, and simulation were incorporated. The model was able to replicate complex biological soft tissue mechanical properties under large deformations, i.e., the nonlinear and viscoelastic behaviors. The simulated response of the model, after tuning of its parameters to experimental data from a deer liver sample, closely tracked the reference data with high correlation and maximum relative differences of less than 5% and 10% for the tuning and testing data sets, respectively. Finally, implementation of the proposed model and algorithms in a graphical environment resulted in a real-time simulation with update rates of 150 Hz for interactive deformation and haptic manipulation, and 30 Hz for visual rendering. The proposed real-time simulation model of soft tissue deformation due to a laparoscopic surgical indenter was efficient, realistic, and accurate in ex vivo testing. This model is a suitable candidate for testing in vivo during laparoscopic surgery.
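
A minimal version of the nonlinear mass-spring-damper idea can be sketched as a 1-D chain with stiffening springs, stepped with semi-implicit (symplectic) Euler rather than the paper's adapted implicit integrator, and with made-up material constants:

```python
import numpy as np

# 1-D chain of masses joined by nonlinear (stiffening) springs with nodal
# dampers; a constant load on the free end stands in for an indenter.
n = 10
m, c = 0.01, 0.05                 # nodal mass (kg) and damper coefficient
k1, k3 = 50.0, 5e4                # linear and cubic spring coefficients
rest = 0.01                       # 1 cm rest spacing
x = np.arange(n) * rest           # node positions; node 0 is fixed
v = np.zeros(n)
dt = 1e-4

def spring_force(stretch):
    return k1 * stretch + k3 * stretch ** 3   # stiffens under large stretch

for _ in range(5000):             # 0.5 s of simulated time
    f = np.zeros(n)
    stretch = np.diff(x) - rest
    fs = spring_force(stretch)
    f[:-1] += fs                  # each spring pulls its end nodes together
    f[1:] -= fs
    f -= c * v                    # nodal damping
    f[-1] += 2.0                  # constant outward load on the free end
    v[1:] += dt * f[1:] / m       # semi-implicit Euler: velocity first,
    x[1:] += dt * v[1:]           # then position with the new velocity
```

The cubic term makes the chain progressively stiffer as it stretches, a crude stand-in for the strain-stiffening of liver tissue; an implicit scheme like the paper's would permit far larger time steps than the explicit one used here.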

  1. Real time evolution at finite temperatures with operator space matrix product states

    NASA Astrophysics Data System (ADS)

    Pižorn, Iztok; Eisler, Viktor; Andergassen, Sabine; Troyer, Matthias

    2014-07-01

    We propose a method to simulate the real time evolution of one-dimensional quantum many-body systems at finite temperature by expressing both the density matrices and the observables as matrix product states. This allows the calculation of expectation values and correlation functions as scalar products in operator space. The simulations of density matrices in inverse temperature and the local operators in the Heisenberg picture are independent and result in a grid of expectation values for all intermediate temperatures and times. Simulations can be performed using real arithmetics with only polynomial growth of computational resources in inverse temperature and time for integrable systems. The method is illustrated for the XXZ model and the single impurity Anderson model.

  2. Flood simulation and verification with IoT sensors

    NASA Astrophysics Data System (ADS)

    Chang, Che-Hao; Hsu, Chih-Tsung; Wu, Shiang-Jen; Huang, Sue-Wei

    2017-04-01

    2D dynamic flood simulation is a vivid tool for demonstrating the area likely to be exposed to impacts from rising water levels. With progress in high-resolution digital terrain models, the simulation results look convincing, yet they are rarely verified against what actually happened. Because of the dynamic and uncertain nature of flooding, the exposed area usually cannot be well delineated during a flood event. Recent developments in IoT sensors provide low-power, long-range communication that helps us collect real-time flood depths. With these time series of flood depths at different locations, we are able to verify the simulation results for the corresponding flood event. Sixteen flood gauges with IoT specifications, together with two flood events in Annan District, Tainan City, Taiwan, are examined in this study. During the event of 11 June 2016, 12 of the flood gauges worked well and 8 of them provided observations matching the simulation.
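
A gauge-versus-simulation check of the kind described reduces to a few summary metrics per gauge. A synthetic sketch (invented hydrograph series, not the Annan District data, with an arbitrary "match" rule):

```python
import numpy as np

# Synthetic observed (gauge) and simulated flood-depth series over one day.
t = np.arange(0, 24, 0.5)                                       # hours
observed = np.clip(0.8 * np.exp(-((t - 10.0) / 4) ** 2), 0, None)   # depth (m)
simulated = np.clip(0.7 * np.exp(-((t - 10.8) / 4) ** 2), 0, None)  # shifted, lower peak

rmse = float(np.sqrt(np.mean((simulated - observed) ** 2)))     # overall depth error
peak_err = float(abs(simulated.max() - observed.max()))         # peak-depth error (m)
timing_err_h = float(abs(t[simulated.argmax()] - t[observed.argmax()]))  # peak timing

# Crude per-gauge "observation matches simulation" rule (thresholds invented)
match = rmse < 0.15 and timing_err_h <= 1.0
```

Counting how many of the 16 gauges satisfy such a rule gives summary statements like "8 of 12 working gauges matched the simulation"; the thresholds would need to reflect gauge accuracy and model resolution.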

  3. GPU-accelerated phase-field simulation of dendritic solidification in a binary alloy

    NASA Astrophysics Data System (ADS)

    Yamanaka, Akinori; Aoki, Takayuki; Ogawa, Satoi; Takaki, Tomohiro

    2011-03-01

    The phase-field simulation of dendritic solidification of a binary alloy has been accelerated by using a graphics processing unit (GPU). To perform the phase-field simulation of alloy solidification on the GPU, a program code was developed with the compute unified device architecture (CUDA). In this paper, the implementation technique of the phase-field model on the GPU is presented. We also evaluated the acceleration performance of the three-dimensional solidification simulation using a single NVIDIA TESLA C1060 GPU and the developed program code. The results showed that the GPU calculation for 576³ computational grid points achieved a performance of 170 GFLOPS by utilizing the shared memory as a software-managed cache. Furthermore, it was demonstrated that the computation with the GPU is 100 times faster than that with a single CPU core. From the obtained results, we confirmed the feasibility of realizing a real-time, fully three-dimensional phase-field simulation of microstructure evolution on a personal desktop computer.
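
The per-grid-point stencil update that such a GPU kernel parallelizes can be sketched in NumPy for a simple Allen-Cahn-type model. The constants are illustrative, and the paper's coupled solute field and anisotropy are omitted; a CUDA kernel would assign one thread per grid point for the same arithmetic:

```python
import numpy as np

# 2-D explicit phase-field sketch: phi = 1 is solid, phi = 0 is liquid.
n = 64
phi = np.zeros((n, n))
phi[n//2 - 5:n//2 + 5, n//2 - 5:n//2 + 5] = 1.0   # solid seed
eps2, W, a, dt, dx = 2.0, 1.0, 0.1, 0.05, 1.0     # illustrative constants

def step(phi):
    # 5-point Laplacian with periodic boundaries (the stencil a GPU thread
    # would evaluate at its own grid point)
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi) / dx**2
    # tilted double-well reaction: a < 0.5 makes the solid phase grow
    react = W * phi * (1 - phi) * (phi - a)
    return phi + dt * (eps2 * lap + react)

for _ in range(200):
    phi = step(phi)
```

Because each point needs only its four neighbours, the update maps naturally onto GPU thread blocks with the neighbourhood staged in shared memory, which is the software-managed-cache technique the abstract credits for the 170 GFLOPS figure.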

  4. Space shuttle simulation model

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1980-01-01

    The effects of atmospheric turbulence in both horizontal and near horizontal flight, during the return of the space shuttle, are important for determining design, control, and 'pilot-in-the-loop' effects. A nonrecursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed which provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes which are entitled shuttle simulation turbulence tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 10,000 meters. The turbulence generation procedure is described as well as the results of validating the simulated turbulence. Conclusions and recommendations are presented and references cited. The tabulated one dimensional von Karman spectra and the results of spectral and statistical analyses of the SSTT are contained in the appendix.
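
A nonrecursive gust series of the kind described can be synthesized by shaping random spectral phases with the square root of a von Karman power spectral density and inverse-transforming. The spectral form below is the standard one-dimensional longitudinal shape, but the normalization and parameters are illustrative, not the report's:

```python
import numpy as np

rng = np.random.default_rng(42)
n, dt = 4096, 0.1                       # samples and sample period (s)
L, V, sigma = 300.0, 80.0, 1.5          # length scale (m), airspeed (m/s), gust RMS (m/s)

f = np.fft.rfftfreq(n, dt)              # one-sided frequency grid (Hz)
omega = 2 * np.pi * f
# One-dimensional von Karman longitudinal PSD shape (normalization approximate)
psd = sigma**2 * (2 * L / (np.pi * V)) / (1 + (1.339 * L * omega / V) ** 2) ** (5 / 6)
psd[0] = 0.0                            # zero-mean gusts: drop the DC bin

df = 1.0 / (n * dt)
amp = n * np.sqrt(psd * df / 2)         # spectral amplitudes for the target variance
phases = np.exp(2j * np.pi * rng.random(f.size))
gust = np.fft.irfft(amp * phases, n)    # nonrecursive gust time series (m/s)
```

Generating the whole record in one inverse FFT, rather than filtering white noise sample by sample, is what "nonrecursive" refers to; precomputed records of this kind could be stored on tape, as with the SSTT.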

  5. Simulation of a turbofan engine for evaluation of multivariable optimal control concepts. [(computerized simulation)]

    NASA Technical Reports Server (NTRS)

    Seldner, K.

    1976-01-01

    The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multivariable optimal controls research program based on linear quadratic regulator (LQR) theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model; a technique for doing so is discussed, and selected results from high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer, which will control the engine simulation over the desired flight envelope.

  6. An embedded controller for a 7-degree of freedom prosthetic arm.

    PubMed

    Tenore, Francesco; Armiger, Robert S; Vogelstein, R Jacob; Wenstrand, Douglas S; Harshbarger, Stuart D; Englehart, Kevin

    2008-01-01

    We present results from an embedded real-time hardware system capable of decoding surface myoelectric signals (sMES) to control a seven-degree-of-freedom upper limb prosthesis. This is one of the first hardware implementations of sMES decoding algorithms and the most advanced controller to date. We compare decoding results from the device to simulation results from a real-time PC-based operating system. Performance of both systems is shown to be similar, with decoding accuracy greater than 90% for the floating-point software simulation and 80% for the fixed-point hardware and software implementations.

  7. Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Alder, J.; van Griensven, A.; Meixner, T.

    2003-12-01

    Individuals applying hydrologic models have a need for a quick easy to use visualization tools to permit them to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web and the increasing power of modern computers to provide an online toolbox for quick and easy model result visualization. This visualization interface allows for the interpretation and analysis of Monte-Carlo and batch model simulation results. Often times a given project will generate several thousands or even hundreds of thousands simulations. This large number of simulations creates a challenge for post-simulation analysis. IHM's goal is to try to solve this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global samples statistics table (e.g. sum of squares error, sum of absolute differences etc.), top ten simulations table and graphs, graphs of an individual simulation using time step data, objective based dotty plots, threshold based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger) and 2D error surface graphs of the parameter space. IHM is ideal for the simplest bucket model to the largest set of Monte-Carlo model simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility in the sense that they can be anywhere in the world using any operating system. IHM can be a time saving and money saving alternative to spending time producing graphs or conducting analysis that may not be informative or being forced to purchase or use expensive and proprietary software. 
IHM is a simple, free, method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.
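
The threshold-based parameter CDF graphs mentioned above (the Spear-Hornberger regional sensitivity analysis) can be sketched in a few lines. This is an illustrative sketch of the technique, not IHM code; the `behavioural_cdfs` helper and its inputs are hypothetical.

```python
import numpy as np

def behavioural_cdfs(params, errors, threshold):
    """Split Monte-Carlo parameter samples into behavioural (error below
    threshold) and non-behavioural groups, and return the empirical CDF of
    each group, as in Spear-Hornberger regional sensitivity analysis."""
    params = np.asarray(params, dtype=float)
    errors = np.asarray(errors, dtype=float)
    behav = np.sort(params[errors <= threshold])
    non_behav = np.sort(params[errors > threshold])
    # Empirical CDF: step of height i/n at the i-th sorted sample.
    cdf = lambda xs: (xs, np.arange(1, len(xs) + 1) / len(xs))
    return cdf(behav), cdf(non_behav)
```

A large vertical separation between the two CDFs flags a parameter to which the model error is sensitive; near-identical CDFs suggest an insensitive parameter.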

  8. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

The use of numerical simulation of material processing allows a trial-and-error strategy for improving virtual processes without incurring material costs or interrupting production, and therefore saves a great deal of money, but it requires user time to analyze the results, adjust the operating conditions, and restart the simulation. Automatic optimization is the perfect complement to simulation. An Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners were selected to cover the different areas of the mechanical forging industry and to provide different examples for the forming simulation tools. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space while requiring exact function values at only a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization: the set of solutions corresponding to the best possible compromises between the different objectives is computed in the same way. The population-based approach exploits the parallel capabilities of the computer with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to run several simulations at the same time reduces the required time dramatically.
The presented examples demonstrate the method versatility. They include billet shape optimization of a common rail, the cogging of a bar and a wire drawing problem.
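
The metamodel idea above, knowing the exact objective only at a few "master points" and interpolating everywhere else, can be illustrated with a simple Gaussian radial-basis-function surrogate. This is a sketch of the general technique, not the Kriging or Meshless Finite Difference implementations used in the paper; the `rbf_metamodel` helper is hypothetical.

```python
import numpy as np

def rbf_metamodel(masters, values, width=1.0):
    """Fit a Gaussian radial-basis-function surrogate to exact objective
    values known only at a few 'master' points; the surrogate interpolates
    those values exactly and approximates the objective elsewhere."""
    X = np.atleast_2d(np.asarray(masters, dtype=float))
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    K = np.exp(-(d / width) ** 2)          # kernel matrix between masters
    w = np.linalg.solve(K, np.asarray(values, dtype=float))

    def predict(x):
        dx = np.linalg.norm(X - np.asarray(x, dtype=float), axis=-1)
        return float(np.exp(-(dx / width) ** 2) @ w)
    return predict
```

An optimizer can then evaluate `predict` thousands of times cheaply, reserving expensive exact simulations for refining the master-point set.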

  9. The effect of gas dynamics on semi-analytic modelling of cluster galaxies

    NASA Astrophysics Data System (ADS)

    Saro, A.; De Lucia, G.; Dolag, K.; Borgani, S.

    2008-12-01

    We study the degree to which non-radiative gas dynamics affect the merger histories of haloes along with subsequent predictions from a semi-analytic model (SAM) of galaxy formation. To this aim, we use a sample of dark matter only and non-radiative smooth particle hydrodynamics (SPH) simulations of four massive clusters. The presence of gas-dynamical processes (e.g. ram pressure from the hot intra-cluster atmosphere) makes haloes more fragile in the runs which include gas. This results in a 25 per cent decrease in the total number of subhaloes at z = 0. The impact on the galaxy population predicted by SAMs is complicated by the presence of `orphan' galaxies, i.e. galaxies whose parent substructures are reduced below the resolution limit of the simulation. In the model employed in our study, these galaxies survive (unaffected by the tidal stripping process) for a residual merging time that is computed using a variation of the Chandrasekhar formula. Due to ram-pressure stripping, haloes in gas simulations tend to be less massive than their counterparts in the dark matter simulations. The resulting merging times for satellite galaxies are then longer in these simulations. On the other hand, the presence of gas influences the orbits of haloes making them on average more circular and therefore reducing the estimated merging times with respect to the dark matter only simulation. This effect is particularly significant for the most massive satellites and is (at least in part) responsible for the fact that brightest cluster galaxies in runs with gas have stellar masses which are about 25 per cent larger than those obtained from dark matter only simulations. Our results show that gas dynamics has only a marginal impact on the statistical properties of the galaxy population, but that its impact on the orbits and merging times of haloes strongly influences the assembly of the most massive galaxies.

  10. Investigation of 2-stage meta-analysis methods for joint longitudinal and time-to-event data through simulation and real data application.

    PubMed

    Sudell, Maria; Tudur Smith, Catrin; Gueyffier, François; Kolamunnage-Dona, Ruwanthi

    2018-04-15

Joint modelling of longitudinal and time-to-event data is often preferred over separate longitudinal or time-to-event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The joint modelling literature focuses mainly on the analysis of single studies, with no methods currently available for the meta-analysis of joint model estimates from multiple studies. We propose 2-stage methods for the meta-analysis of joint model estimates. These methods are applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta-analyses of separate longitudinal or time-to-event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Using the real dataset, similar results were obtained from the separate and joint analyses. However, the simulation study indicated a benefit of joint rather than separate methods in a meta-analytic setting where association exists between the longitudinal and time-to-event outcomes. Where evidence of association between longitudinal and time-to-event outcomes exists, results from joint models rather than from standalone analyses should be pooled in 2-stage meta-analyses. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
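
In its simplest fixed-effect form, the second stage of a 2-stage meta-analysis reduces to inverse-variance pooling of the per-study estimates. A minimal sketch (the `pool_fixed_effect` helper is hypothetical; the paper's methods include more elaborate variants):

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Stage 2 of a 2-stage meta-analysis: inverse-variance (fixed-effect)
    pooling of per-study estimates, e.g. the longitudinal/time-to-event
    association parameter fitted separately in each study's joint model."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se
```

Studies with smaller standard errors receive larger weights, so a precise study dominates the pooled estimate.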

  11. An effective online data monitoring and saving strategy for large-scale climate simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xian, Xiaochen; Archibald, Rick; Mayer, Benjamin

Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for a couple of months to understand the changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with high temporal and spatial resolution; however, how to effectively monitor and record climate changes based on these large-scale simulation results, which are continuously produced in real time, remains unresolved. Due to the slow process of writing data to disk, the current practice is to save a snapshot of the simulation results at a constant, slow rate even though the data generation process runs at a very high speed. This study proposes an effective online data monitoring and saving strategy over the temporal and spatial domains that takes practical storage and memory capacity constraints into consideration. The proposed method is able to intelligently select and record the most informative extreme values in the raw data generated from real-time simulations, in the context of better monitoring climate changes.

  12. An effective online data monitoring and saving strategy for large-scale climate simulations

    DOE PAGES

    Xian, Xiaochen; Archibald, Rick; Mayer, Benjamin; ...

    2018-01-22

Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for a couple of months to understand the changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with high temporal and spatial resolution; however, how to effectively monitor and record climate changes based on these large-scale simulation results, which are continuously produced in real time, remains unresolved. Due to the slow process of writing data to disk, the current practice is to save a snapshot of the simulation results at a constant, slow rate even though the data generation process runs at a very high speed. This study proposes an effective online data monitoring and saving strategy over the temporal and spatial domains that takes practical storage and memory capacity constraints into consideration. The proposed method is able to intelligently select and record the most informative extreme values in the raw data generated from real-time simulations, in the context of better monitoring climate changes.
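
One simple way to select and record the most informative extreme values under a fixed storage budget is a streaming top-k filter. This is an illustrative sketch under that assumption, not the authors' algorithm; the `record_extremes` helper is hypothetical.

```python
import heapq

def record_extremes(stream, capacity):
    """Keep only the `capacity` largest values seen so far in a simulation
    output stream, so the recorded extremes fit a fixed storage budget.
    A min-heap keeps the smallest retained value on top, making each
    update O(log capacity) regardless of stream length."""
    heap = []  # (value, timestep) pairs
    for t, value in enumerate(stream):
        if len(heap) < capacity:
            heapq.heappush(heap, (value, t))
        elif value > heap[0][0]:
            heapq.heapreplace(heap, (value, t))
    return sorted(heap, reverse=True)
```

Because the filter never stores more than `capacity` entries, it can run online alongside the simulation instead of writing full snapshots to disk.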

  13. Real-time image-based B-mode ultrasound image simulation of needles using tensor-product interpolation.

    PubMed

    Zhu, Mengchen; Salcudean, Septimiu E

    2011-07-01

    In this paper, we propose an interpolation-based method for simulating rigid needles in B-mode ultrasound images in real time. We parameterize the needle B-mode image as a function of needle position and orientation. We collect needle images under various spatial configurations in a water-tank using a needle guidance robot. Then we use multidimensional tensor-product interpolation to simulate images of needles with arbitrary poses and positions using collected images. After further processing, the interpolated needle and seed images are superimposed on top of phantom or tissue image backgrounds. The similarity between the simulated and the real images is measured using a correlation metric. A comparison is also performed with in vivo images obtained during prostate brachytherapy. Our results, carried out for both the convex (transverse plane) and linear (sagittal/para-sagittal plane) arrays of a trans-rectal transducer indicate that our interpolation method produces good results while requiring modest computing resources. The needle simulation method we present can be extended to the simulation of ultrasound images of other wire-like objects. In particular, we have shown that the proposed approach can be used to simulate brachytherapy seeds.
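
Tensor-product interpolation over a grid of needle poses can be sketched in the two-parameter case, where it reduces to bilinear blending of the four nearest collected images. The `interpolate_image` helper and its depth/angle parameterization are hypothetical simplifications of the paper's higher-dimensional pose space.

```python
import numpy as np

def interpolate_image(images, depths, angles, depth, angle):
    """Tensor-product (here bilinear) interpolation between B-mode needle
    images collected on a regular grid of insertion depths and angles.
    `images` has shape (len(depths), len(angles), H, W)."""
    def bracket(grid, x):
        # Index of the grid cell containing x, and the fractional position.
        i = np.clip(np.searchsorted(grid, x) - 1, 0, len(grid) - 2)
        t = (x - grid[i]) / (grid[i + 1] - grid[i])
        return i, t

    i, u = bracket(depths, depth)
    j, v = bracket(angles, angle)
    return ((1 - u) * (1 - v) * images[i, j]
            + u * (1 - v) * images[i + 1, j]
            + (1 - u) * v * images[i, j + 1]
            + u * v * images[i + 1, j + 1])
```

With more pose parameters the same blend is applied along each grid axis in turn, which is what makes the scheme a tensor product.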

  14. Piloted simulator study of allowable time delays in large-airplane response

    NASA Technical Reports Server (NTRS)

Grantham, William D.; Tingas, Stephen A.

    1987-01-01

A piloted simulation was performed to determine the permissible time delay and phase shift in the flight control system of a specific large transport-type airplane. The study was conducted with a six-degree-of-freedom ground-based simulator and a math model similar to an advanced wide-body jet transport. Time delays in discrete and lagged form were incorporated into the longitudinal, lateral, and directional control systems of the airplane. Three experienced pilots flew simulated approaches and landings with random localizer and glide slope offsets during instrument tracking as their principal evaluation task. Results of the present study suggest a level 1 (satisfactory) handling-qualities limit for the effective time delay of 0.15 sec in both the pitch and roll axes, as opposed to the 0.10-sec limit of the present specification (MIL-F-8785C) for both axes. Also, the present results suggest a level 2 (acceptable but unsatisfactory) handling-qualities limit for an effective time delay of 0.82 sec and 0.57 sec for the pitch and roll axes, respectively, as opposed to the 0.20 sec of the present specifications for both axes. In the area of phase shift between cockpit input and control-surface deflection, the results of this study, flown in turbulent air, suggest less severe phase-shift limitations for the approach and landing task (approximately 50 deg in pitch and 40 deg in roll), as opposed to the 15 deg of the present specifications for both axes.
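
The link between an effective time delay and the resulting phase shift is direct: a pure transport delay of tau seconds contributes 360*f*tau degrees of phase lag at frequency f. A minimal sketch (hypothetical helpers, not from the study):

```python
def delay_phase_lag_deg(delay_s, freq_hz):
    """Phase lag in degrees contributed by a pure transport delay at a
    given frequency: e.g. a 0.15 s delay adds 360 * f * 0.15 degrees."""
    return 360.0 * freq_hz * delay_s

def max_delay_for_phase(limit_deg, freq_hz):
    """Largest pure delay that stays within a phase-lag limit at freq_hz."""
    return limit_deg / (360.0 * freq_hz)
```

At a representative 1 Hz piloted-control frequency, the study's 0.15 s level-1 delay limit corresponds to 54 deg of phase lag, which is why delay and phase-shift limits are two views of the same constraint.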

  15. Atmospheric Dispersion Modeling of the February 2014 Waste Isolation Pilot Plant Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nasstrom, John; Piggott, Tom; Simpson, Matthew

    2015-07-22

This report presents the results of a simulation of the atmospheric dispersion and deposition of radioactivity released from the Waste Isolation Pilot Plant (WIPP) site in New Mexico in February 2014. These simulations were made by the National Atmospheric Release Advisory Center (NARAC) at Lawrence Livermore National Laboratory (LLNL), and supersede NARAC simulation results published in a previous WIPP report (WIPP, 2014). The results presented in this report use additional, more detailed data from WIPP on the specific radionuclides released, radioactivity release amounts and release times. Compared to the previous NARAC simulations, the new simulation results in this report are based on more detailed modeling of the winds, turbulence, and particle dry deposition. In addition, the initial plume rise from the exhaust vent was considered in the new simulations, but not in the previous NARAC simulations. The new model results show some small differences compared to previous results, but do not change the conclusions in the WIPP (2014) report. Presented are the data and assumptions used in these model simulations, as well as the model-predicted dose and deposition on and near the WIPP site. A comparison of predicted and measured radionuclide-specific air concentrations is also presented.

  16. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arndt, S.A.

    1997-07-01

The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirement for real-time applications. The next generation of thermo-hydraulic codes will need to have included in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and PRA practitioners who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.

  17. Designing Realistic Human Behavior into Multi-Agent Systems

    DTIC Science & Technology

    2001-09-01

    different results based on some sort of randomness built into it, a trend can be looked at over time and a success or failure rate can be...simulation remains in that state, very different results can be achieved each simulation run. An analyst can look at success and failure over a long

  18. Enhancing the Simulation Speed of Sensor Network Applications by Asynchronization of Interrupt Service Routines

    PubMed Central

    Joe, Hyunwoo; Woo, Duk-Kyun; Kim, Hyungshin

    2013-01-01

    Sensor network simulations require high fidelity and timing accuracy to be used as an implementation and evaluation tool. The cycle-accurate and instruction-level simulator is the known solution for these purposes. However, this type of simulation incurs a high computation cost since it has to model not only the instruction level behavior but also the synchronization between multiple sensors for their causality. This paper presents a novel technique that exploits asynchronous simulations of interrupt service routines (ISR). We can avoid the synchronization overheads when the interrupt service routines are simulated without preemption. If the causality errors occur, we devise a rollback procedure to restore the original synchronized simulation. This concept can be extended to any instruction-level sensor network simulator. Evaluation results show our method can enhance the simulation speed up to 52% in the case of our experiments. For applications with longer interrupt service routines and smaller number of preemptions, the speedup becomes greater. In addition, our simulator is 2 to 11 times faster than the well-known sensor network simulator. PMID:23966200

  19. Multiple Solutions of Real-time Tsunami Forecasting Using Short-term Inundation Forecasting for Tsunamis Tool

    NASA Astrophysics Data System (ADS)

    Gica, E.

    2016-12-01

The Short-term Inundation Forecasting for Tsunamis (SIFT) tool, developed by NOAA Center for Tsunami Research (NCTR) at the Pacific Marine Environmental Laboratory (PMEL), is used in forecast operations at the Tsunami Warning Centers in Alaska and Hawaii. The SIFT tool relies on a pre-computed tsunami propagation database, real-time DART buoy data, and an inversion algorithm to define the tsunami source. The tsunami propagation database is composed of 50 × 100 km unit sources, simulated basin-wide for at least 24 hours. Different combinations of unit sources, DART buoys, and length of real-time DART buoy data can generate a wide range of results within the defined tsunami source. For an inexperienced SIFT user, the primary challenge is to determine which solution, among multiple solutions for a single tsunami event, would provide the best forecast in real time. This study investigates how the use of different tsunami sources affects simulated tsunamis at tide gauge locations. Using the tide gauge at Hilo, Hawaii, a total of 50 possible solutions for the 2011 Tohoku tsunami are considered. Maximum tsunami wave amplitude and root mean square error results are used to compare tide gauge data and the simulated tsunami time series. Results of this study will facilitate SIFT users' efforts to determine if the simulated tide gauge tsunami time series from a specific tsunami source solution would be within the range of possible solutions. This study will serve as the basis for investigating more historical tsunami events and tide gauge locations.
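
The comparison of source solutions against gauge data can be illustrated with a toy least-squares inversion: combine pre-computed unit-source waveforms to fit an observed series, then score the fit by root mean square error. This is a sketch of the general idea, not the SIFT inversion algorithm; the `invert_and_score` helper is hypothetical.

```python
import numpy as np

def invert_and_score(unit_waveforms, observed):
    """Least-squares combination of pre-computed unit-source waveforms to
    fit an observed (DART or tide-gauge) time series. Returns the source
    coefficients and the RMSE of the fit, the kind of metric used here to
    rank alternative source solutions."""
    G = np.column_stack(unit_waveforms)        # one column per unit source
    obs = np.asarray(observed, dtype=float)
    coeffs, *_ = np.linalg.lstsq(G, obs, rcond=None)
    rmse = float(np.sqrt(np.mean((G @ coeffs - obs) ** 2)))
    return coeffs, rmse
```

Running this over each candidate combination of unit sources and ranking by RMSE mimics, in miniature, the task of choosing among multiple SIFT solutions.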

  20. WE-C-217BCD-08: Rapid Monte Carlo Simulations of DQE(f) of Scintillator-Based Detectors.

    PubMed

    Star-Lack, J; Abel, E; Constantin, D; Fahrig, R; Sun, M

    2012-06-01

Monte Carlo simulations of DQE(f) can greatly aid in the design of scintillator-based detectors by helping optimize key parameters including scintillator material and thickness, pixel size, surface finish, and septa reflectivity. However, the additional optical transport significantly increases simulation times, necessitating a large number of parallel processors to adequately explore the parameter space. To address this limitation, we have optimized the DQE(f) algorithm, reducing simulation times per design iteration to 10 minutes on a single CPU. DQE(f) is proportional to the ratio MTF(f)^2/NPS(f). The LSF-MTF simulation uses a slanted line source and is rapidly performed with relatively few gammas launched. However, the conventional NPS simulation for standard radiation exposure levels requires the acquisition of multiple flood fields (nRun), each requiring billions of input gamma photons (nGamma), many of which will scintillate, thereby producing thousands of optical photons (nOpt) per deposited MeV. The resulting execution time is proportional to the product nRun x nGamma x nOpt. In this investigation, we revisit the theoretical derivation of DQE(f), and reveal significant computation time savings through the optimization of nRun, nGamma, and nOpt. Using GEANT4, we determine optimal values for these three variables for a GOS scintillator-amorphous silicon portal imager. Both isotropic and Mie optical scattering processes were modeled. Simulation results were validated against the literature. We found that, depending on the radiative and optical attenuation properties of the scintillator, the NPS can be accurately computed using values for nGamma below 1000, and values for nOpt below 500/MeV. nRun should remain above 200. Using these parameters, typical computation times for a complete NPS ranged from 2-10 minutes on a single CPU. 
The number of launched particles and corresponding execution times for a DQE simulation can be dramatically reduced, allowing for accurate computation with modest computer hardware. NIH R01 CA138426. Several authors work for Varian Medical Systems. © 2012 American Association of Physicists in Medicine.
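
The central relation, DQE(f) proportional to MTF(f)^2/NPS(f), is itself a one-liner once MTF and NPS estimates are in hand. A minimal sketch (hypothetical `dqe` helper; the exposure- and gain-dependent proportionality constant is absorbed into `gain`):

```python
import numpy as np

def dqe(mtf, nps, gain=1.0):
    """Detective quantum efficiency from sampled MTF(f) and NPS(f) arrays:
    DQE(f) = gain * MTF(f)^2 / NPS(f). The expensive part in practice is
    estimating MTF and NPS by Monte Carlo, not forming this ratio."""
    mtf = np.asarray(mtf, dtype=float)
    nps = np.asarray(nps, dtype=float)
    return gain * mtf ** 2 / nps
```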

  1. No Escaping the Rat Race: Simulated Night Shift Work Alters the Time-of-Day Variation in BMAL1 Translational Activity in the Prefrontal Cortex.

    PubMed

    Marti, Andrea R; Patil, Sudarshan; Mrdalj, Jelena; Meerlo, Peter; Skrede, Silje; Pallesen, Ståle; Pedersen, Torhild T; Bramham, Clive R; Grønli, Janne

    2017-01-01

    Millions of people worldwide work during the night, resulting in disturbed circadian rhythms and sleep loss. This may cause deficits in cognitive functions, impaired alertness and increased risk of errors and accidents. Disturbed circadian rhythmicity resulting from night shift work could impair brain function and cognition through disrupted synthesis of proteins involved in synaptic plasticity and neuronal function. Recently, the circadian transcription factor brain-and-muscle arnt-like protein 1 (BMAL1) has been identified as a promoter of mRNA translation initiation, the most highly regulated step in protein synthesis, through binding to the mRNA "cap". In this study we investigated the effects of simulated shift work on protein synthesis markers. Male rats ( n = 40) were exposed to forced activity, either in their rest phase (simulated night shift work) or in their active phase (simulated day shift work) for 3 days. Following the third work shift, experimental animals and time-matched undisturbed controls were euthanized (rest work at ZT12; active work at ZT0). Tissue lysates from two brain regions (prefrontal cortex, PFC and hippocampus) implicated in cognition and sleep loss, were analyzed with m 7 GTP (cap) pull-down to examine time-of-day variation and effects of simulated shift work on cap-bound protein translation. The results show time-of-day variation of protein synthesis markers in PFC, with increased protein synthesis at ZT12. In the hippocampus there was little difference between ZT0 and ZT12. Active phase work did not induce statistically significant changes in protein synthesis markers at ZT0 compared to time-matched undisturbed controls. Rest work, however, resulted in distinct brain-region specific changes of protein synthesis markers compared to time-matched controls at ZT12. 
While no changes were observed in the hippocampus, phosphorylation of cap-bound BMAL1 and its regulator S6 kinase beta-1 (S6K1) was significantly reduced in the PFC, together with significant reduction in the synaptic plasticity associated protein activity-regulated cytoskeleton-associated protein (Arc). Our results indicate considerable time-of-day and brain-region specific variation in cap-dependent translation initiation. We conclude that simulated night shift work in rats disrupts the pathways regulating the circadian component of the translation of mRNA in the PFC, and that this may partly explain impaired waking function during night shift work.

  2. Evaluation of approaches focused on modelling of organic carbon stocks using the RothC model

    NASA Astrophysics Data System (ADS)

    Koco, Štefan; Skalský, Rastislav; Makovníková, Jarmila; Tarasovičová, Zuzana; Barančíková, Gabriela

    2014-05-01

The aim of current efforts in Europe is the protection of soil organic matter, which is included in all relevant documents related to soil protection. Modelling of organic carbon stocks under anticipated climate change, or under changing land management, can significantly help in short- and long-term forecasting of the state of soil organic matter. The RothC model can be applied over time periods of several years to centuries and has been tested in long-term experiments across a large range of soil types and climatic conditions in Europe. For the initialization of the RothC model, knowledge of the carbon pool sizes is essential. Pool-size characterization can be obtained from equilibrium model runs, but this approach is time consuming and tedious, especially for larger-scale simulations. Because of this complexity, we searched for new ways to simplify and accelerate the process. The paper presents a comparison of two approaches to SOC stock modelling in the same area. The modelling was carried out on the basis of unique inputs of land use, management and soil data for each simulation unit separately. We modelled 1617 simulation units on a 1x1 km grid covering the agroclimatic region Žitný ostrov in southwest Slovakia. The first approach groups simulation units based on an evaluation of results for units with similar input values. The groups were created after testing and validating the modelling results for individual simulation units against results obtained by modelling the average input values for the whole group. Tests of the equilibrium model using intervals of 5 t ha-1 of initial SOC stock showed minimal differences between results for individual units and results for the average value of the whole interval. Management input data on plant residues and farmyard manure used for modelling carbon turnover were likewise shared by multiple simulation units. 
Combining these groups (intervals of initial SOC stock, groups of plant-residue inputs, groups of farmyard-manure inputs), we created 661 simulation groups. Within each group, we used average input values for all simulation units. Export of input data and modelling were carried out manually in the graphic environment of the RothC 26.3 v2.0 application for each group separately, and SOC stocks were modelled for the 661 groups of simulation units. For the second approach we used the RothC 26.3 version for DOS. The modelling inputs were exported using VBA scripts in the MS Access environment. Equilibrium modelling was performed for several variants of plant-residue inputs, and we then selected the total pool size closest to the real initial SOC stock value. All 1617 simulation units were modelled automatically by means of a predefined batch file. The comparison of the two modelling methods showed spatial differentiation of the results, mainly increasing with the length of the modelling period: from the initial period onward, the number of simulation units with differences in SOC stocks between the two approaches increases over time. The observed differences suggest that modelling results obtained by generalizing inputs should be treated with a degree of caution. For large-scale simulations it is more appropriate to use the DOS version of the RothC 26.3 model, which allows automated modelling; this reduces the time needed to operate the model without the need to minimize the number of simulation units. Key words: soil organic carbon stock, modelling, RothC 26.3, agricultural soils, Slovakia. Acknowledgements: This work was supported by the Slovak Research and Development Agency under contracts No. APVV-0580-10 and APVV-0131-11.
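
The grouping step of the first approach, binning simulation units by 5 t/ha intervals of initial SOC stock and by shared residue and manure inputs, can be sketched as follows. The `group_simulation_units` helper and its input fields are hypothetical.

```python
def group_simulation_units(units, soc_interval=5.0):
    """Group simulation units whose initial SOC stock (t/ha) falls in the
    same `soc_interval` bin and which share the same plant-residue and
    farmyard-manure inputs, so that one model run with averaged inputs can
    stand in for the whole group."""
    groups = {}
    for unit in units:
        key = (int(unit["soc"] // soc_interval),   # 5 t/ha SOC bin
               unit["residues"], unit["manure"])
        groups.setdefault(key, []).append(unit["id"])
    return groups
```

Applied to 1617 units, such a keying scheme is what collapses the problem to the 661 groups described above (the exact group count depends on the real input data).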

  3. Tidal Effect in Small-Scale Sound Propagation Experiment

    NASA Astrophysics Data System (ADS)

    Kamimura, Seiji; Ogasawara, Hanako; Mori, Kazuyoshi; Nakamura, Toshiaki

    2012-07-01

A sound propagation experiment in very shallow water was conducted at Hashirimizu port in 2009. We transmitted 5 kHz sinusoidal waves with M-sequence modulation and found that the travel times concentrated in two time frames; comparison with the tide level showed that the travel time depended on the tide level. Most of the received wave patterns have two peaks, and as the tide level changed, the dominant peak switched between the two. To investigate this, a numerical simulation using the finite-difference time-domain (FDTD) method was carried out, and its result agreed with the experimental result. Finally, we changed the material of the quay wall in the FDTD simulation and concluded that the first peak is a combination of multiply reflected waves, while reflection from the quay wall dominates the second peak.

  4. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling.

    PubMed

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-12-07

The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) can vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR), a sliding version of the Allan variance, is a practical tool for representing the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to process long time series, a fast DAVAR algorithm was developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. Worse, a FOG-based MWD instrument often keeps working underground for several days, so the gyro data collected are not only very long, and thus time-consuming to analyze, but also sometimes discontinuous in the timeline. In this article, building on the fast DAVAR algorithm, we advance it further (improved fast DAVAR) to extend it to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to characterize two sets of simulation data. The simulation results show that when the time series is short, the improved fast DAVAR saves 78.93% of the calculation time; when the time series is long (6 × 10^5 samples), it reduces the calculation time by 97.09%. Another set of simulation data with missing samples was characterized by the improved fast DAVAR, and the results prove that it successfully handles discontinuous data. Finally, a vibration experiment with a FOG-based MWD instrument was carried out to validate the good performance of the improved fast DAVAR. The experimental results verify that the improved fast DAVAR not only shortens the computation time but can also analyze discontinuous time series.
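
The idea behind the DAVAR, an Allan variance computed inside a window that slides along the series, can be sketched directly from the definitions. This illustrates the plain (not fast) computation; the helpers are hypothetical and omit the speed-ups and discontinuity handling that are the article's contribution.

```python
import numpy as np

def allan_variance(rates, tau_samples):
    """Overlapping Allan variance of a rate series at one cluster size:
    AVAR(tau) = 0.5 * mean((y_{k+1} - y_k)^2), where y are cluster means."""
    rates = np.asarray(rates, dtype=float)
    m = tau_samples
    y = np.convolve(rates, np.ones(m) / m, mode="valid")  # cluster means
    diffs = y[m:] - y[:-m]
    return 0.5 * np.mean(diffs ** 2)

def dynamic_allan_variance(rates, window, tau_samples):
    """DAVAR sketch: slide a window along the series and compute the Allan
    variance inside each window to expose non-stationary behaviour."""
    return [allan_variance(rates[i:i + window], tau_samples)
            for i in range(0, len(rates) - window + 1)]
```

Repeating `dynamic_allan_variance` over a range of `tau_samples` yields the usual DAVAR surface over (time, tau); the quadratic cost of this naive version is exactly what the fast algorithms attack.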

  5. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling

    PubMed Central

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-01-01

The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) can vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR), a sliding version of the Allan variance, is a practical tool for representing the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to process long time series, a fast DAVAR algorithm was developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. Worse, a FOG-based MWD instrument often keeps working underground for several days, so the gyro data collected are not only very long, and thus time-consuming to analyze, but also sometimes discontinuous in the timeline. In this article, building on the fast DAVAR algorithm, we advance it further (improved fast DAVAR) to extend it to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to characterize two sets of simulation data. The simulation results show that when the time series is short, the improved fast DAVAR saves 78.93% of the calculation time; when the time series is long (6×10^5 samples), it reduces the calculation time by 97.09%. Another set of simulation data with missing samples was characterized by the improved fast DAVAR, and the results prove that it successfully handles discontinuous data. Finally, a vibration experiment with a FOG-based MWD instrument was carried out to validate the good performance of the improved fast DAVAR. The experimental results verify that the improved fast DAVAR not only shortens the computation time but can also analyze discontinuous time series. PMID:27941600

  6. Using virtual instruments to develop an actuator-based hardware-in-the-loop simulation test-bed for autopilot of unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Sun, Yun-Ping; Ju, Jiun-Yan; Liang, Yen-Chu

    2008-12-01

    Because unmanned aerial vehicles (UAVs) enable many innovative applications in scientific, civilian, and military fields, their development is growing rapidly every year. The on-board autopilot that reliably performs attitude and guidance control is a vital component for out-of-sight flights. However, the control law in an autopilot is designed from a simplified plant model in which the dynamics of the real hardware are usually not taken into consideration. It is therefore necessary to develop a test-bed that includes real servos for real-time control experiments on prototype autopilots, the so-called hardware-in-the-loop (HIL) simulation. In this paper, the real-time HIL simulation system is realized efficiently through a virtual-instrumentation approach based on the graphical application software LabVIEW. The proportional-integral-derivative (PID) controller in the autopilot's pitch-angle control loop is tuned experimentally by the classical Ziegler-Nichols rule and exhibits good transient and steady-state response in real-time HIL simulation. The results also clearly show the differences between numerical simulation and real-time HIL simulation. The effectiveness of HIL simulation for UAV autopilot design is thereby confirmed.

  7. A finite-difference time-domain electromagnetic solver in a generalized coordinate system

    NASA Astrophysics Data System (ADS)

    Hochberg, Timothy Allen

    A new finite-difference, time-domain method for the simulation of full-wave electromagnetic wave propagation in complex structures is developed. This method is simple and flexible; it allows for the simulation of transient wave propagation in a large class of practical structures. Boundary conditions are implemented for perfect and imperfect electrically conducting boundaries, perfect magnetically conducting boundaries, and absorbing boundaries. The method is validated with the aid of several different types of test cases. Two types of coaxial cables with helical breaks are simulated and the results are discussed.
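For orientation, the core of any FDTD scheme is the leapfrog update of staggered electric and magnetic fields. The sketch below is a minimal one-dimensional Yee update in normalized units (Courant number 1) with perfectly conducting walls; it is a generic textbook illustration, not the dissertation's generalized-coordinate solver.

```python
import numpy as np

# Minimal 1-D FDTD (Yee leapfrog) in normalized units, Courant number 1.
# ez lives on grid nodes, hy on the staggered half-cells between them.
nx, nt = 200, 150
ez = np.zeros(nx)       # electric field
hy = np.zeros(nx - 1)   # magnetic field, offset half a cell

for t in range(nt):
    hy += ez[1:] - ez[:-1]        # H update from the curl of E
    ez[1:-1] += hy[1:] - hy[:-1]  # E update from the curl of H
    ez[nx // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
    # ez[0] = ez[-1] = 0 throughout: perfect-electric-conductor walls
```

The pulse launched at the center propagates one cell per step and reflects off the PEC walls; the absorbing boundaries treated in the dissertation would instead modify the endpoint updates so outgoing waves leave the grid.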

  8. Multi-Sensor Systems and Data Fusion for Telecommunications, Remote Sensing and Radar (les Systemes multi-senseurs et le fusionnement des donnees pour les telecommunications, la teledetection et les radars)

    DTIC Science & Technology

    1998-04-01

    The result of the project is a demonstration of the fusion process, the sensor management and the real-time capabilities using simulated sensors...demonstrator (TAD) is a system that demonstrates the core element of a battlefield ground surveillance system by simulation in near real-time. The core...Management and Sensor/Platform simulation. The surveillance system observes the real world through a non-collocated heterogeneous multisensory system

  9. Status of the LISA On Table experiment: an electro-optical simulator for LISA

    NASA Astrophysics Data System (ADS)

    Laporte, M.; Halloin, H.; Bréelle, E.; Buy, C.; Grüning, P.; Prat, P.

    2017-05-01

    The LISA project is a space mission that aims to detect gravitational waves in space. An electro-optical simulator called LISA On Table (LOT) is being developed at APC in order to test noise-reduction techniques (such as Time-Delay Interferometry) and instruments that will be used. This document presents its latest results: first-generation Time-Delay Interferometry works in the case of a simulated white noise with static, unequal arms. Future and ongoing developments of the experiment are also addressed.

  10. SU-E-T-222: Computational Optimization of Monte Carlo Simulation On 4D Treatment Planning Using the Cloud Computing Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, J

    Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used in dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created by the DOSCTP linked to the cloud, based on the Amazon Elastic Compute Cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of the computing time for the treatment plan on the number of compute nodes was examined, varying the number of CT image sets in the breathing cycle and the dose-reconstruction time of the FFD4D. Results: The dependence of computing time on the number of compute nodes was affected by the diminishing return of adding nodes to the Monte Carlo simulation. Moreover, the performance of the 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The number of image sets and the dose-reconstruction time had no significant effect on the dependence of computing time on the number of nodes when more than 15 compute nodes were used in the Monte Carlo simulations. Conclusion: The issue of long computing times in 4D treatment planning, which requires Monte Carlo dose calculations on all CT image sets in the breathing cycle, can be solved using cloud computing technology. It is concluded that the optimal number of compute nodes is between 5 and 15, where the dependence of computing time on the number of nodes is significant.
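The diminishing return described here is the familiar Amdahl behavior: any serial portion of the plan (dose reconstruction, data transfer) caps the benefit of extra nodes. A toy model, with entirely assumed constants rather than the study's timings:

```python
# Toy Amdahl-style model of per-plan time: a fixed serial part (dose
# reconstruction, I/O) plus Monte Carlo work split across n nodes.
# The 10 s / 600 s constants are assumptions for illustration only.
def plan_time(n_nodes, serial=10.0, parallel=600.0):
    return serial + parallel / n_nodes

# marginal saving from adding one more node shrinks rapidly
gain = [plan_time(n) - plan_time(n + 1) for n in (1, 5, 10, 15)]
```

In this toy curve the marginal gain falls from 300 s (1 to 2 nodes) to under 3 s (15 to 16 nodes), mirroring the abstract's observation that little is won beyond 10 to 15 nodes.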

  11. Using Monte Carlo Simulation to Prioritize Key Maritime Environmental Impacts of Port Infrastructure

    NASA Astrophysics Data System (ADS)

    Perez Lespier, L. M.; Long, S.; Shoberg, T.

    2016-12-01

    This study creates a Monte Carlo simulation model to prioritize key indicators of environmental impacts resulting from maritime port infrastructure. Data inputs are derived from LandSat imagery, government databases, and industry reports to create the simulation. Results are validated using subject matter experts and compared with those returned from time-series regression to determine goodness of fit. The Port of Prince Rupert, Canada is used as the location for the study.

  12. Cascade Defect Evolution Processes: Comparison of Atomistic Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Haixuan; Stoller, Roger E; Osetskiy, Yury N

    2013-11-01

    Determining defect evolution beyond the molecular dynamics (MD) time scale is critical for bridging the gap between atomistic simulations and experiments. The recently developed self-evolving atomistic kinetic Monte Carlo (SEAKMC) method provides new opportunities to simulate long-term defect evolution with MD-like fidelity. In this study, SEAKMC is applied to investigate cascade defect evolution in bcc iron. First, the evolution of a vacancy-rich region is simulated and compared with results obtained using autonomous basin climbing (ABC)+KMC and the kinetic activation-relaxation technique (kART). kART was previously found to be orders of magnitude faster than ABC+KMC. The results obtained from SEAKMC are similar to those of kART, but the predicted time scale is about one order of magnitude faster than kART. The fidelity of SEAKMC is confirmed by statistically relevant MD simulations at multiple higher temperatures, which show that the saddle-point sampling in SEAKMC is close to complete. The second case is the irradiation-induced formation of nanoscale C15 Laves-phase defect clusters. In contrast to previous studies, which claimed that these defects can grow by capturing self-interstitials, we find that these highly stable clusters can transform to a <111> glissile configuration on a much longer time scale. Finally, cascade-annealing simulations using SEAKMC are compared with the traditional object KMC (OKMC) method. SEAKMC predicts substantially fewer surviving defects than OKMC. The possible origin of this difference is discussed, and a possible way to improve the accuracy of OKMC based on SEAKMC results is outlined. These studies demonstrate the atomistic fidelity of SEAKMC in comparison with other on-the-fly KMC methods and provide new information on long-term defect evolution in iron.

  13. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
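The three methods compared in the study are easy to replicate in a toy simulation: momentary time sampling checks a single instant per interval, partial-interval recording scores any occurrence, and whole-interval recording scores only events that span the full interval. The event probability and duration below are assumptions for illustration, not the paper's parameter grid:

```python
import random

def simulate(method, n_intervals=60, interval=10.0, p_event=0.2,
             event_dur=3.0, seed=1):
    """Return (scored fraction of intervals, true fraction of time) for
    one interval-sampling method on a toy event stream: each interval
    independently holds one fixed-duration event with probability p_event."""
    rng = random.Random(seed)
    scored, busy = 0, 0.0
    for _ in range(n_intervals):
        has_event = rng.random() < p_event
        if has_event:
            start = rng.uniform(0.0, interval - event_dur)
            busy += event_dur
        if method == "momentary":   # observe only the interval midpoint
            hit = has_event and start <= interval / 2 <= start + event_dur
        elif method == "partial":   # score if the event touches the interval
            hit = has_event
        else:                       # "whole": event must span the interval
            hit = has_event and event_dur >= interval
        scored += hit
    return scored / n_intervals, busy / (n_intervals * interval)

est_partial, true_frac = simulate("partial")
est_whole, _ = simulate("whole")
est_momentary, _ = simulate("momentary")
# partial-interval recording overestimates occurrence; whole-interval
# recording underestimates it whenever events are shorter than intervals
```

Sweeping interval duration, event duration, and observation period in such a loop reproduces the kind of error tables the study reports.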

  14. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

    With the continuous development of the technology, the ranging accuracy of pulsed laser range finders (LRFs) keeps increasing, so the maintenance demand for LRFs is also rising. Following the dominant idea of "time as an analog of spatial distance" in simulated tests for pulsed range finders, the precision of the simulated distance hinges on an adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method is proposed to improve the accuracy of the circuit delay without increasing the counting frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the delay accuracy. The accuracy of the proposed circuit delay method was measured with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay improves from ±0.75 m to ±0.15 m. The accuracy of the simulated distance is thus greatly improved in simulated tests for high-precision pulsed range finders.
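The "time as an analog of distance" relation is just the two-way light travel time, R = c·τ/2, so delay accuracy maps directly to simulated-range accuracy. A quick check of the numbers in the abstract (the 1 km example range is an assumption for illustration):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def delay_for_range(r_m):
    """Round-trip delay that makes the LRF read a target at r_m metres."""
    return 2.0 * r_m / C

def range_error(delay_err_s):
    """Simulated-distance error caused by a delay error."""
    return C * delay_err_s / 2.0

err = range_error(1e-9)        # 1 ns of delay error -> ~0.15 m range error
tau = delay_for_range(1000.0)  # ~6.67 microseconds simulates a 1 km target
```

By this relation, the reported improvement from ±0.75 m to ±0.15 m corresponds to tightening the delay control from roughly ±5 ns to ±1 ns.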

  15. A Framework to Analyze the Performance of Load Balancing Schemes for Ensembles of Stochastic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Tae-Hyuk; Sandu, Adrian; Watson, Layne T.

    2015-08-01

    Ensembles of simulations are employed to estimate the statistics of possible future states of a system and are widely used in important applications such as climate change and biological modeling. Ensembles of runs can naturally be executed in parallel. However, when the CPU times of individual simulations vary considerably, the simple strategy of assigning an equal number of tasks per processor can lead to serious work imbalances and low parallel efficiency. This paper presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms for ensembles of simulations where many tasks are mapped onto each processor and where the individual compute times vary considerably among tasks. Four load balancing strategies are discussed: most-dividing, all-redistribution, random-polling, and neighbor-redistribution. Simulation results with a stochastic budding yeast cell cycle model are consistent with the theoretical analysis. Significantly, the local rebalancing algorithms, which are favored over the global rebalancing algorithms because of scalability concerns, still yield a provable global decrease in load imbalance. The overall simulation time is reduced by up to 25%, and the total processor idle time by 85%.
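The gain from dynamic balancing over a static equal-count assignment is easy to reproduce with a toy scheduler. The exponential task times and the greedy "next free processor takes the next task" rule below are stand-ins, under assumed parameters, for the paper's ensemble runs and its four redistribution strategies:

```python
import heapq
import random

rng = random.Random(42)
tasks = [rng.expovariate(1.0) for _ in range(400)]  # highly variable runtimes
P = 8                                               # processors

# Static: equal task counts per processor; makespan = slowest processor.
static = max(sum(tasks[i::P]) for i in range(P))

# Dynamic self-scheduling: whichever processor finishes first takes the
# next task, an idealized stand-in for dynamic redistribution.
finish = [0.0] * P
heapq.heapify(finish)
for t in tasks:
    heapq.heappush(finish, heapq.heappop(finish) + t)
dynamic = max(finish)
# the dynamic makespan is bounded by the average load plus one task,
# while the static makespan can drift well above the average load
```

The classical list-scheduling bound guarantees the dynamic makespan never exceeds the mean load by more than one task's runtime, which is the mechanism behind the reported reductions in idle time.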

  16. PFLOTRAN-E4D: A parallel open source PFLOTRAN module for simulating time-lapse electrical resistivity data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Timothy C.; Hammond, Glenn E.; Chen, Xingyuan

    Time-lapse electrical resistivity tomography (ERT) is finding increased application for remotely monitoring processes occurring in the near subsurface in three dimensions (i.e., 4D monitoring). However, there are few codes capable of simulating the evolution of subsurface resistivity and the corresponding tomographic measurements arising from a particular process, particularly in parallel and with an open source license. Herein we describe and demonstrate an electrical resistivity tomography module for the PFLOTRAN subsurface flow and reactive transport simulation code, named PFLOTRAN-E4D. The PFLOTRAN-E4D module operates in parallel using a dedicated set of compute cores in a master-slave configuration. At each time step, the master process receives subsurface states from PFLOTRAN, converts those states to bulk electrical conductivity, and instructs the slave processes to simulate a tomographic data set. The resulting multi-physics simulation capability enables accurate feasibility studies for ERT imaging and the identification of the ERT signatures that are unique to a given process, and facilitates the joint inversion of ERT data with hydrogeological data for subsurface characterization. PFLOTRAN-E4D is demonstrated herein using a field study of stage-driven groundwater/river water interaction ERT monitoring along the Columbia River, Washington, USA. Results demonstrate the complex nature of subsurface electrical conductivity changes, in both the saturated and unsaturated zones, arising from river stage fluctuations and associated river water intrusion into the aquifer. Furthermore, the results also demonstrate the sensitivity of surface-based ERT measurements to those changes over time.

  17. PFLOTRAN-E4D: A parallel open source PFLOTRAN module for simulating time-lapse electrical resistivity data

    DOE PAGES

    Johnson, Timothy C.; Hammond, Glenn E.; Chen, Xingyuan

    2016-09-22

    Time-lapse electrical resistivity tomography (ERT) is finding increased application for remotely monitoring processes occurring in the near subsurface in three dimensions (i.e., 4D monitoring). However, there are few codes capable of simulating the evolution of subsurface resistivity and the corresponding tomographic measurements arising from a particular process, particularly in parallel and with an open source license. Herein we describe and demonstrate an electrical resistivity tomography module for the PFLOTRAN subsurface flow and reactive transport simulation code, named PFLOTRAN-E4D. The PFLOTRAN-E4D module operates in parallel using a dedicated set of compute cores in a master-slave configuration. At each time step, the master process receives subsurface states from PFLOTRAN, converts those states to bulk electrical conductivity, and instructs the slave processes to simulate a tomographic data set. The resulting multi-physics simulation capability enables accurate feasibility studies for ERT imaging and the identification of the ERT signatures that are unique to a given process, and facilitates the joint inversion of ERT data with hydrogeological data for subsurface characterization. PFLOTRAN-E4D is demonstrated herein using a field study of stage-driven groundwater/river water interaction ERT monitoring along the Columbia River, Washington, USA. Results demonstrate the complex nature of subsurface electrical conductivity changes, in both the saturated and unsaturated zones, arising from river stage fluctuations and associated river water intrusion into the aquifer. Furthermore, the results also demonstrate the sensitivity of surface-based ERT measurements to those changes over time.

  18. Evaluation of several secondary tasks in the determination of permissible time delays in simulator visual and motion cues

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1978-01-01

    The effect of secondary tasks in determining permissible time delays in visual-motion simulation of a pursuit tracking task was examined. A single subject, a single set of aircraft handling qualities, and a single motion condition were used in tracking a target aircraft that oscillates sinusoidally in altitude. The results indicate that, in addition to the basic simulator delays, the permissible time delay is about 250 msec for a tapping task, an adding task, or an audio task, which is approximately 125 msec less than when no secondary task is involved. The magnitudes of the primary task performance measures, however, differ only for the tapping task. A power spectral-density analysis essentially confirms the result obtained by comparing the root-mean-square performance measures. For all three secondary tasks, the total pilot workload was quite high.

  19. Computer-aided trauma simulation system with haptic feedback is easy and fast for oral-maxillofacial surgeons to learn and use.

    PubMed

    Schvartzman, Sara C; Silva, Rebeka; Salisbury, Ken; Gaudilliere, Dyani; Girod, Sabine

    2014-10-01

    Computer-assisted surgical (CAS) planning tools have become widely available in craniomaxillofacial surgery, but they are time-consuming and often require professional technical assistance to simulate a case. An initial oral and maxillofacial (OM) surgical user experience was evaluated with a newly developed CAS system featuring a bimanual sense of touch (haptics). Three volunteer OM surgeons received a 5-minute verbal introduction to the use of the newly developed haptic-enabled planning system. The surgeons were instructed to simulate mandibular fracture reductions of 3 clinical cases, both within a 15-minute time limit and without a time limit, and to complete a questionnaire assessing their subjective experience with the system. Standard landmarks and linear and angular measurements were compared between the simulated results and the actual surgical outcome. After the 5-minute instruction, all 3 surgeons were able to use the system independently. The analysis of standardized anatomic measurements showed that the simulation results obtained within a 15-minute time limit were not significantly different from those without a time limit. Mean differences between measurements of surgical and simulated fracture reductions were within current resolution limitations in collision detection, segmentation of computed tomographic scans, and haptic devices. All 3 surgeons reported that the system was easy to learn and use and that they would be comfortable integrating it into their daily clinical practice for trauma cases. A CAS system with a haptic interface that capitalizes on a touch and force feedback experience similar to operative procedures is fast and easy for OM surgeons to learn and use. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. All rights reserved.

  20. Dynamics and Solubility of He and CO2 in Brine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Tuan Anh; Tenney, Craig M.

    2016-09-01

    Molecular dynamics simulation was carried out using the LAMMPS simulation package (1) to study the diffusivity of He-3 and CO2 in NaCl aqueous solution. To simulate infinite gas dilution, we placed one He-3 or CO2 molecule in an initial simulation box of 24×24×33 Å^3 containing 512 water molecules and a number of NaCl molecules set by the target concentration. The initial configuration was set up by placing the water, NaCl, and gas molecules in different regions of the simulation box. Calculating the diffusion coefficient from a single He or CO2 molecule consistently yields poor results. To overcome this, for each simulation at a given condition (i.e., temperature, pressure, and NaCl concentration), we conducted 50 simulations initiated from 50 different configurations. These configurations were obtained by running the simulation from the initial configuration in the NVE ensemble (i.e., constant number of particles, volume, and energy) for 100,000 time steps and collecting one configuration every 2,000 time steps. The output temperature of this simulation is about 500 K. The collected configurations were then equilibrated for 2 ns in the NPT ensemble (i.e., constant number of particles, pressure, and temperature), followed by 9 ns simulations in the NVT ensemble (i.e., constant number of particles, volume, and temperature). The time step is 1 fs for all simulations.
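Although the abstract does not spell it out, a diffusion coefficient from such trajectories is conventionally obtained from the Einstein relation, D = MSD(t)/(6t) for large t in three dimensions, averaged over the independent runs. A sketch on a synthetic random walk (all parameters assumed, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 1e-3                   # ns per step (assumed)
n_runs, n_steps = 200, 1000
D_true = 0.5                # nm^2/ns, diffusivity built into the toy walk

# random-walk "trajectories": each Cartesian step has variance 2*D*dt
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (n_runs, n_steps, 3))
pos = np.cumsum(steps, axis=1)

# mean-squared displacement averaged over the independent runs
msd = np.mean(np.sum(pos ** 2, axis=2), axis=0)
t = dt * np.arange(1, n_steps + 1)

# Einstein relation: the slope of MSD vs t equals 6 D in 3-D
D_est = np.polyfit(t, msd, 1)[0] / 6.0   # recovers ~0.5 nm^2/ns
```

Averaging over many runs is exactly why the paper launches 50 independent configurations: a single-molecule MSD is far too noisy for a reliable slope.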

  1. Comparison of predictive estimates of high-latitude electrodynamics with observations of global-scale Birkeland currents

    NASA Astrophysics Data System (ADS)

    Anderson, Brian J.; Korth, Haje; Welling, Daniel T.; Merkin, Viacheslav G.; Wiltberger, Michael J.; Raeder, Joachim; Barnes, Robin J.; Waters, Colin L.; Pulkkinen, Antti A.; Rastaetter, Lutz

    2017-02-01

    Two of the geomagnetic storms for the Space Weather Prediction Center Geospace Environment Modeling challenge occurred after data were first acquired by the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE). We compare Birkeland currents from AMPERE with predictions from four models for the 4-5 April 2010 and 5-6 August 2011 storms. The four models are the Weimer (2005b) field-aligned current statistical model, the Lyon-Fedder-Mobarry magnetohydrodynamic (MHD) simulation, the Open Geospace General Circulation Model (OpenGGCM) MHD simulation, and the Space Weather Modeling Framework MHD simulation. The MHD simulations were run as described in Pulkkinen et al. (2013), and the results were obtained from the Community Coordinated Modeling Center. The total radial Birkeland current, ITotal, and the distribution of radial current density, Jr, for all models are compared with AMPERE results. While the total currents are well correlated, the quantitative agreement varies considerably. The Jr distributions reveal discrepancies between the models and observations in the latitude distribution and morphologies, as well as a lack of nightside current systems in the models. The results motivate enhancing the simulations, first by increasing the simulation resolution and then by examining the relative merits of implementing more sophisticated ionospheric conductance models, including ionospheric outflows or other omitted physical processes. Some aspects of the system, including substorm timing and location, may remain challenging to simulate, implying a continuing need for real-time specification.

  2. Mueller matrix polarimetry for characterizing microstructural variation of nude mouse skin during tissue optical clearing.

    PubMed

    Chen, Dongsheng; Zeng, Nan; Xie, Qiaolin; He, Honghui; Tuchin, Valery V; Ma, Hui

    2017-08-01

    We investigate the polarization features corresponding to changes in the microstructure of nude mouse skin during immersion in a glycerol solution. By comparing Mueller matrix imaging experiments with Monte Carlo simulations, we examine in detail how the Mueller matrix elements vary with immersion time. The results indicate that the polarization features represented by the Mueller matrix elements m22, m33, and m44 and the absolute values of m34 and m43 are sensitive to immersion time. To gain deeper insight into how the microstructure of the skin varies during tissue optical clearing (TOC), we set up a sphere-cylinder birefringence model (SCBM) of the skin and carry out simulations corresponding to different TOC mechanisms. The good agreement between the experimental and simulated results confirms that Mueller matrix imaging combined with Monte Carlo simulation is potentially a powerful tool for revealing microscopic features of biological tissues.

  3. Internal structure of a vortex breakdown

    NASA Technical Reports Server (NTRS)

    Nakamura, Y.; Leonard, A.; Spalart, P. R.

    1986-01-01

    An axisymmetric vortex breakdown was well simulated by the vortex filament method, with qualitatively good agreement with experiment. In particular, the structure in the interior of the vortex breakdown was captured in considerable detail by the present simulation. The second, spiral-type breakdown, which occurs downstream of the first axisymmetric breakdown, was reproduced more faithfully than in earlier work; it exhibits a kink in the vortex filaments and strong three-dimensionality. Furthermore, a region of relatively low velocity was observed near the second breakdown. It was also found that this phenomenon takes some time to attain its final stage, and the agreement with experiment improves as the simulation progresses. In this paper, emphasis is placed on the comparison of the simulated results with experiment. The present results help clarify the mechanism of vortex breakdown.

  4. Flow Channel Influence of a Collision-Based Piezoelectric Jetting Dispenser on Jet Performance

    PubMed Central

    Deng, Guiling; Li, Junhui; Duan, Ji’an

    2018-01-01

    To improve the jet performance of a bi-piezoelectric jet dispenser, mathematical and simulation models were established according to the operating principle. To improve the accuracy and reliability of the simulation, the fluid's viscosity was fitted as a fifth-order function of shear rate based on rheological test data, and the needle displacement was fitted as a ninth-order function of time based on real-time displacement measurements. The results show that jet performance is related to the diameter of the nozzle outlet and the cone angle of the nozzle, and the impact of the flow channel structure was confirmed. The numerical simulation approach is validated against measured droplet volumes. This work provides a reliable simulation platform for mechanical collision-based jet dispensing and a theoretical basis for micro jet valve design and improvement. PMID:29677140
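Fitting a measured property to a fixed-order polynomial, as done here for viscosity versus shear rate, is a one-liner in most environments. The sketch below uses synthetic shear-thinning data (a Cross-type curve plus noise) in place of the paper's rheological measurements:

```python
import numpy as np

rng = np.random.default_rng(7)
shear = np.linspace(1.0, 100.0, 40)  # shear rate, 1/s (assumed range)
# synthetic shear-thinning viscosity in Pa*s, standing in for test data
visc = 50.0 / (1.0 + 0.05 * shear) + rng.normal(0.0, 0.2, shear.size)

coeffs = np.polyfit(shear, visc, 5)  # fifth-order fit: six coefficients
model = np.poly1d(coeffs)            # callable viscosity model
resid = visc - model(shear)          # fit residuals
```

A fitted closed-form viscosity like this can then be supplied to a CFD solver as the material's constitutive law; the order should be chosen with care, since high-order polynomials can oscillate outside the fitted shear-rate range.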

  5. Monte Carlo Simulation of THz Multipliers

    NASA Technical Reports Server (NTRS)

    East, J.; Blakey, P.

    1997-01-01

    Schottky barrier diode frequency multipliers are critical components in submillimeter and THz space-based Earth observation systems. As the operating frequency of these multipliers has increased, the agreement between design predictions and experimental results has become poorer. Multiplier design is usually based on a nonlinear model using a form of harmonic balance together with a model for the Schottky barrier diode. Conventional voltage-dependent lumped-element models do a poor job of predicting THz-frequency performance. This paper describes a large-signal Monte Carlo simulation of Schottky barrier multipliers. The simulation is a time-dependent particle-field Monte Carlo simulation with ohmic and Schottky barrier boundary conditions included, combined with a fixed-point solution for the nonlinear circuit interaction. The results point out some important time constants in varactor operation and describe the effects of current saturation and nonlinear resistances on multiplier operation.

  6. Numerical analysis of temperature field in the high speed rotary dry-milling process

    NASA Astrophysics Data System (ADS)

    Wu, N. X.; Deng, L. J.; Liao, D. H.

    2018-01-01

    To study the effect of the temperature field in ceramic dry granulation, the experimental dry-granulation equipment was simplified and a physical model was established; based on an Euler-Euler mathematical model, the temperature of the dry granulation process was simulated as a function of granulation time. The relationship between granulation temperature and granulation quality was analyzed, and the correctness of the numerical simulation was verified by measuring the fluidity index of the ceramic bodies. Numerical simulation and experimental results showed that for granulation times of 4 min, 5 min, and 6 min, the maximum temperature inside the granulation chamber was 70 °C, 85 °C, and 95 °C, respectively, the temperature uniformity in the chamber weakened, and the fluidity indices of the billet particles were 56.4, 89.7, and 81.6. The results show that a granulation time of 5 min gave the best granulation; when the chamber temperature exceeded 85 °C, both the fluidity index and the number of effective billet particles decreased.

  7. Simulation of the Tsunami Resulting from the M 9.2 2004 Sumatra-Andaman Earthquake - Dynamic Rupture vs. Seismic Inversion Source Model

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Behrens, Jörn

    2017-04-01

    Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Other data, e.g., from ocean buoys, are sometimes also included in the derivation of the source model. The associated tsunami event can often be simulated well in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the actual earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization that solves the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains while at the same time allowing high local resolution and geometric accuracy. The results are compared to measured data and to results obtained with inversion-based earthquake sources. By using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between an earthquake and the generated tsunami.

  8. Use NU-WRF and GCE Model to Simulate the Precipitation Processes During MC3E Campaign

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Wu, Di; Matsui, Toshi; Li, Xiaowen; Zeng, Xiping; Peter-Lidard, Christa; Hou, Arthur

    2012-01-01

    One of the major CRM approaches to studying precipitation processes is sometimes referred to as "cloud ensemble modeling". This approach allows many clouds of various sizes and stages of their life cycles to be present at any given simulation time. Large-scale effects derived from observations are imposed on the CRMs as forcing, and cyclic lateral boundaries are used. The advantage of this approach is that model results in terms of rainfall, Q1, and Q2 are usually in good agreement with observations. In addition, the model results provide cloud statistics that represent different types of clouds and cloud systems over their life cycles. The large-scale forcing derived from MC3E will be used to drive GCE model simulations, and the model-simulated results will be compared with observations from MC3E. These GCE model-simulated datasets are especially valuable for LH algorithm developers. In addition, a regional-scale model at very high resolution, the NASA Unified WRF, was also used for real-time forecasting during the MC3E campaign to ensure that precipitation and other meteorological forecasts were available to the flight planning team and to interpret the forecasts in terms of proposed flight scenarios. Post-mission simulations are conducted to examine the sensitivity of cloud and precipitation processes and rainfall to initial and lateral boundary conditions. We will compare model results in terms of precipitation and surface rainfall from the GCE model and NU-WRF.

  9. Comparing Real-time Versus Delayed Video Assessments for Evaluating ACGME Sub-competency Milestones in Simulated Patient Care Environments

    PubMed Central

    Stiegler, Marjorie; Hobbs, Gene; Martinelli, Susan M; Zvara, David; Arora, Harendra; Chen, Fei

    2018-01-01

    Background Simulation is an effective method for creating objective summative assessments of resident trainees. Real-time assessment (RTA) in simulated patient care environments is logistically challenging, especially when evaluating a large group of residents in multiple simulation scenarios. To date, there is very little data comparing RTA with delayed (hours, days, or weeks later) video-based assessment (DA) for simulation-based assessments of Accreditation Council for Graduate Medical Education (ACGME) sub-competency milestones. We hypothesized that sub-competency milestone evaluation scores obtained from DA, via audio-video recordings, are equivalent to the scores obtained from RTA. Methods Forty-one anesthesiology residents were evaluated in three separate simulated scenarios, representing different ACGME sub-competency milestones. All scenarios had one faculty member perform RTA and two additional faculty members perform DA. Subsequently, the scores generated by RTA were compared with the average scores generated by DA. Variance component analysis was conducted to assess the amount of variation in scores attributable to residents and raters. Results Paired t-tests showed no significant difference in scores between RTA and averaged DA for all cases. Cases 1, 2, and 3 showed an intraclass correlation coefficient (ICC) of 0.67, 0.85, and 0.50 for agreement between RTA scores and averaged DA scores, respectively. Analysis of variance of the scores assigned by the three raters showed a small proportion of variance attributable to raters (4% to 15%). Conclusions The results demonstrate that video-based delayed assessment is as reliable as real-time assessment, as both assessment methods yielded comparable scores. Based on a department’s needs or logistical constraints, our findings support the use of either real-time or delayed video evaluation for assessing milestones in a simulated patient care environment. PMID:29736352
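
    The agreement analysis above (paired t-tests on real-time vs. averaged delayed scores) can be sketched as follows. The resident scores below are hypothetical and the computation is illustrative only, not the study's code.

```python
import math
from statistics import mean, stdev

def paired_t(rta_scores, da_scores):
    """Paired t statistic for real-time vs. averaged delayed-assessment scores."""
    d = [a - b for a, b in zip(rta_scores, da_scores)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Hypothetical milestone scores for five residents on a 5-point scale.
rta = [3.0, 4.0, 3.5, 4.5, 4.0]   # real-time rater
da  = [3.2, 3.8, 3.6, 4.4, 4.1]   # average of two delayed video raters
t = paired_t(rta, da)             # compare |t| to the critical value (2.776, df = 4)
```

    A |t| below the critical value, as here, is consistent with the study's finding of no significant difference between the two assessment modes.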

  10. Protein simulation using coarse-grained two-bead multipole force field with polarizable water models.

    PubMed

    Li, Min; Zhang, John Z H

    2017-02-14

    A recently developed two-bead multipole force field (TMFF) is employed in coarse-grained (CG) molecular dynamics (MD) simulation of proteins in combination with polarizable CG water models: the Martini polarizable water model and the modified big multipole water model. Significant improvement in the simulated structures and dynamics of proteins is observed in terms of both the root-mean-square deviations (RMSDs) of the structures and the residue root-mean-square fluctuations (RMSFs) from the native ones, compared with simulation results using Martini's non-polarizable water model. Our results show that TMFF simulation using CG water models gives much more stable secondary structures of proteins without the need for adding extra interaction potentials to constrain the secondary structures. Our results also show that when the MD time step is increased from 2 fs to 6 fs, the RMSD and RMSF results remain in excellent agreement with those from all-atom simulations. The current study demonstrates clearly that the application of TMFF together with a polarizable CG water model significantly improves the accuracy and efficiency of CG simulation of proteins.
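
    The RMSD and RMSF metrics used above to compare CG and all-atom trajectories can be computed as follows. This is a minimal pure-Python sketch; real analyses would first superimpose each frame onto a reference structure.

```python
import math

def rmsd(conf_a, conf_b):
    """Root-mean-square deviation between two conformations, given as lists of
    (x, y, z) atom coordinates assumed to be already superimposed."""
    n = len(conf_a)
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                         for (ax, ay, az), (bx, by, bz) in zip(conf_a, conf_b)) / n)

def rmsf(traj):
    """Per-atom root-mean-square fluctuation about the mean position over a
    trajectory (a list of frames, each a list of (x, y, z) coordinates)."""
    n_frames, n_atoms = len(traj), len(traj[0])
    fluct = []
    for i in range(n_atoms):
        mx = sum(f[i][0] for f in traj) / n_frames
        my = sum(f[i][1] for f in traj) / n_frames
        mz = sum(f[i][2] for f in traj) / n_frames
        fluct.append(math.sqrt(sum((f[i][0] - mx) ** 2 + (f[i][1] - my) ** 2
                                   + (f[i][2] - mz) ** 2 for f in traj) / n_frames))
    return fluct
```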

  11. Protein simulation using coarse-grained two-bead multipole force field with polarizable water models

    NASA Astrophysics Data System (ADS)

    Li, Min; Zhang, John Z. H.

    2017-02-01

    A recently developed two-bead multipole force field (TMFF) is employed in coarse-grained (CG) molecular dynamics (MD) simulation of proteins in combination with polarizable CG water models: the Martini polarizable water model and the modified big multipole water model. Significant improvement in the simulated structures and dynamics of proteins is observed in terms of both the root-mean-square deviations (RMSDs) of the structures and the residue root-mean-square fluctuations (RMSFs) from the native ones, compared with simulation results using Martini's non-polarizable water model. Our results show that TMFF simulation using CG water models gives much more stable secondary structures of proteins without the need for adding extra interaction potentials to constrain the secondary structures. Our results also show that when the MD time step is increased from 2 fs to 6 fs, the RMSD and RMSF results remain in excellent agreement with those from all-atom simulations. The current study demonstrates clearly that the application of TMFF together with a polarizable CG water model significantly improves the accuracy and efficiency of CG simulation of proteins.

  12. Discontinuous Observers Design for Finite-Time Consensus of Multiagent Systems With External Disturbances.

    PubMed

    Liu, Xiaoyang; Ho, Daniel W C; Cao, Jinde; Xu, Wenying

    This brief investigates the problem of finite-time robust consensus (FTRC) for second-order nonlinear multiagent systems with external disturbances. Based on the global finite-time stability theory of discontinuous homogeneous systems, a novel finite-time convergent discontinuous disturbed observer (DDO) is proposed for the leader-following multiagent systems. The states of the designed DDO are then used to design the control inputs to achieve the FTRC of nonlinear multiagent systems in the presence of bounded disturbances. The simulation results are provided to validate the effectiveness of these theoretical results.

  13. Investigation of interactions between limb-manipulator dynamics and effective vehicle roll control characteristics

    NASA Technical Reports Server (NTRS)

    Johnston, D. E.; Mcruer, D. T.

    1986-01-01

    A fixed-base simulation was performed to identify and quantify interactions between the pilot's hand/arm neuromuscular subsystem and such features of typical modern fighter aircraft roll-rate-command control system mechanizations as: (1) a force-sensing side-stick-type manipulator; (2) the vehicle effective roll time constant; and (3) the flight control system effective time delay. The simulation results provide insight into high-frequency pilot-induced oscillations (PIO) (roll ratchet), low-frequency PIO, and roll-to-right control and handling problems previously observed in experimental and production fly-by-wire control systems. The simulation configurations encompass and/or duplicate actual flight situations, reproduce control problems observed in flight, and validate the concept that the high-frequency nuisance mode known as roll ratchet derives primarily from the pilot's neuromuscular subsystem. The simulations show that force-sensing side-stick manipulator force/displacement/command gradients, command prefilters, and flight control system time delays need to be carefully adjusted to minimize neuromuscular-mode amplitude peaking (roll ratchet tendency) without restricting roll control bandwidth (with resulting sluggish or PIO-prone control).

  14. Investigation of television transmission using adaptive delta modulation principles

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1976-01-01

    The results are presented of a study on the use of the delta modulator as a digital encoder of television signals. Computer simulations of different delta modulators were studied in order to find a satisfactory delta modulator. After a suitable delta modulator algorithm was found via computer simulation, the results were analyzed and the algorithm was implemented in hardware to study its ability to encode real-time motion pictures from an NTSC-format television camera. The effects of channel errors on the delta-modulated video signal were tested, along with several error-correction algorithms, via computer simulation. A very high speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real-time motion pictures. Delta modulators were also investigated which could achieve significant bandwidth reduction without regard to complexity or speed. The first scheme investigated was a real-time frame-to-frame encoding scheme which required the assembly of fourteen 131,000-bit-long shift registers as well as a high-speed delta modulator. The other schemes involved the computer simulation of two-dimensional delta modulator algorithms.
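
    The core idea of adaptive delta modulation, one bit per sample with a step size that grows on consecutive identical bits and shrinks on alternation, can be sketched as below. The adaptation rule and constants are illustrative, not the specific algorithm the study implemented.

```python
def adm_encode(signal, d_min=0.05, k=1.5):
    """Adaptive delta modulation: one output bit per sample. The step size is
    multiplied by k on consecutive identical bits (to chase steep slopes) and
    shrunk toward d_min on alternation (to reduce granular noise)."""
    bits, est, step, last = [], 0.0, d_min, None
    for x in signal:
        bit = 1 if x >= est else 0
        step = step * k if bit == last else max(d_min, step / k)
        est += step if bit else -step
        bits.append(bit)
        last = bit
    return bits

def adm_decode(bits, d_min=0.05, k=1.5):
    """Rebuild the staircase approximation by replaying the encoder's recursion."""
    out, est, step, last = [], 0.0, d_min, None
    for bit in bits:
        step = step * k if bit == last else max(d_min, step / k)
        est += step if bit else -step
        out.append(est)
        last = bit
    return out

ramp = [0.02 * i for i in range(100)]    # slowly rising test signal
recon = adm_decode(adm_encode(ramp))     # 1-bit-per-sample reconstruction
```

    Because the decoder replays the encoder's recursion exactly, the reconstruction tracks the input to within the adapted step size.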

  15. GPU-based efficient realistic techniques for bleeding and smoke generation in surgical simulators.

    PubMed

    Halic, Tansel; Sankaranarayanan, Ganesh; De, Suvranu

    2010-12-01

    In actual surgery, smoke and bleeding due to cauterization provide important visual cues to the surgeon, and they have been proposed as factors in surgical skill assessment. While several virtual reality (VR)-based surgical simulators have incorporated the effects of bleeding and smoke generation, these effects are not realistic because of the requirement of real-time performance: to be interactive, visual updates must be performed at 30 Hz or more, and haptic (touch) information must be refreshed at 1 kHz. Simulation of smoke and bleeding is therefore either ignored or performed using highly simplified techniques, since other computationally intensive processes compete for the available central processing unit (CPU) resources. In this study we developed a novel low-cost method to generate realistic bleeding and smoke in VR-based surgical simulators, which outsources the computations to the graphics processing unit (GPU), thus freeing up the CPU for other time-critical tasks. This method is independent of the complexity of the organ models in the virtual environment. User studies were performed with 20 subjects to determine the visual quality of the simulations compared to real surgical videos. The smoke and bleeding simulations were implemented as part of a laparoscopic adjustable gastric banding (LAGB) simulator. For the bleeding simulation, the original shader-based implementation did not incur noticeable overhead. However, for smoke generation, an input/output (I/O) bottleneck was observed, and two different methods were developed to overcome this limitation. Based on our benchmark results, a buffered approach performed better than a pipelined approach and could support up to 15 video streams in real time. Human subject studies showed that the visual realism of the simulations was as good as in real surgery (median rating of 4 on a 5-point Likert scale). Based on the performance results and the subject study, both the bleeding and smoke simulations were concluded to be efficient, highly realistic, and well suited to VR-based surgical simulators. Copyright © 2010 John Wiley & Sons, Ltd.

  16. Electrical circuit modeling and analysis of microwave acoustic interaction with biological tissues.

    PubMed

    Gao, Fei; Zheng, Qian; Zheng, Yuanjin

    2014-05-01

    Numerical study of microwave imaging and microwave-induced thermoacoustic imaging utilizes finite-difference time-domain (FDTD) analysis to simulate microwave and acoustic interaction with biological tissues. This approach is time consuming due to complex grid segmentation and numerous calculations, offers no analytical solution or direct physical explanation, and is incompatible with hardware development, which requires a circuit simulator such as SPICE. In this paper, instead of conventional FDTD numerical simulation, an equivalent electrical circuit model is proposed to model microwave acoustic interaction with biological tissues for fast simulation and quantitative analysis in both one and two dimensions (2D). The equivalent circuit of an ideal point-like tissue for microwave-acoustic interaction is proposed, including a transmission line, a voltage-controlled current source, an envelope detector, and a resistor-inductor-capacitor (RLC) network, to model the microwave scattering, thermal expansion, and acoustic generation. Based on this, a two-port network of the point-like tissue is built and characterized using pseudo S-parameters and transducer gain. A two-dimensional circuit network including an acoustic scatterer and an acoustic channel is also constructed to model 2D spatial information and the acoustic scattering effect in heterogeneous media. FDTD simulation, circuit simulation, and experimental measurement are all performed, and their results are compared in terms of time domain, frequency domain, and pseudo S-parameter characterization. 2D circuit network simulation is also performed under different scenarios, including different tumor sizes and the effect of the acoustic scatterer. The proposed circuit model of microwave acoustic interaction with biological tissue gives good agreement with FDTD-simulated and experimentally measured results. The pseudo S-parameters and characteristic gain can globally evaluate the performance of tumor detection. The 2D circuit network opens the potential to combine quasi-numerical simulation and circuit simulation in a uniform simulator for co-design and simulation of a microwave acoustic imaging system, bridging bioeffect study and hardware development seamlessly.

  17. Data for Figures and Tables in Journal Article Assessment of the Effects of Horizontal Grid Resolution on Long-Term Air Quality Trends using Coupled WRF-CMAQ Simulations, doi:10.1016/j.atmosenv.2016.02.036

    EPA Pesticide Factsheets

    The dataset represents the data depicted in the figures and tables of a journal manuscript with the following abstract: The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e., 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United States are performed over the 2001-2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trend analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction in computational cost, time, and storage requirements, which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial, since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex-terrain, and coastal regions. This dataset is associated with the following publication

  18. Accelerating 3D Hall MHD Magnetosphere Simulations with Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Bard, C.; Dorelli, J.

    2017-12-01

    The resolution required to simulate planetary magnetospheres with Hall magnetohydrodynamics results in problem sizes approaching several hundred million grid cells. These would take years to run on a single computational core and require hundreds or thousands of computational cores to complete in a reasonable time, which in turn requires access to the largest supercomputers. Graphics processing units (GPUs) provide a viable alternative: one GPU can do the work of roughly 100 cores, bringing Hall MHD simulations of Ganymede within reach of modest GPU clusters (~8 GPUs). We report our progress in developing a GPU-accelerated, three-dimensional Hall magnetohydrodynamic code and present Hall MHD simulation results for both Ganymede (run on 8 GPUs) and Mercury (56 GPUs). We benchmark our Ganymede simulation against previous results for the Galileo G8 flyby, namely that adding the Hall term to ideal MHD simulations changes the global convection pattern within the magnetosphere. Additionally, we present new results for the G1 flyby as well as initial results from Hall MHD simulations of Mercury, and compare them with the corresponding ideal MHD runs.

  19. Building energy simulation in real time through an open standard interface

    DOE PAGES

    Pang, Xiufeng; Nouidui, Thierry S.; Wetter, Michael; ...

    2015-10-20

    Building energy models (BEMs) are typically used for design and code compliance for new buildings, and in the renovation of existing buildings, to predict energy use. The increasing adoption of BEM as standard practice in the building industry presents an opportunity to extend the use of BEMs into construction, commissioning, and operation. In 2009, the authors developed a real-time simulation framework to execute an EnergyPlus model in real time to improve building operation. This paper reports an enhancement of that real-time energy simulation framework. The previous version only works with software tools that implement the custom co-simulation interface of the Building Controls Virtual Test Bed (BCVTB), such as EnergyPlus, Dymola, and TRNSYS. The new version uses an open standard interface, the Functional Mockup Interface (FMI), to provide a generic interface to any application that supports the FMI protocol. In addition, the new version utilizes the Simple Measurement and Actuation Profile (sMAP) tool as the data acquisition system to acquire, store, and present data. Lastly, this paper introduces the updated architecture of the real-time simulation framework using FMI and presents proof-of-concept demonstration results which validate the new framework.

  20. A Method for Measuring the Effective Throughput Time Delay in Simulated Displays Involving Manual Control

    NASA Technical Reports Server (NTRS)

    Jewell, W. F.; Clement, W. F.

    1984-01-01

    The advent and widespread use of the computer-generated image (CGI) device to simulate visual cues has had a mixed impact on the realism and fidelity of flight simulators. On the plus side, CGIs provide greater flexibility in scene content than terrain boards and closed-circuit-television-based visual systems, and they have the potential for a greater field of view. On the minus side, however, CGIs introduce relatively long time delays into the visual simulation. In many CGIs, this delay is as much as 200 ms, which is comparable to the inherent delay time of the pilot. Because most CGIs use multiloop processing and smoothing algorithms and are linked to a multiloop host computer, it is seldom possible to identify a unique throughput time delay, and it is therefore difficult to quantify the performance of the closed-loop pilot-simulator system relative to the real-world task. A method to address these issues using the critical task tester is described. Some empirical results from applying the method are presented, and a novel technique for improving the performance of CGIs is discussed.
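
    The general problem of identifying an effective throughput delay from recorded stimulus/response streams can be illustrated with a brute-force cross-correlation estimator. This is a generic identification sketch on synthetic data, not the critical-task-tester method itself.

```python
import math

def estimate_delay(stimulus, response, max_lag):
    """Return the lag (in samples) at which the response best matches the
    stimulus, found by maximizing the cross-correlation over candidate lags."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        score = sum(a * b for a, b in zip(stimulus, response[lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

x = [math.sin(0.3 * n) for n in range(200)]   # hypothetical stimulus record
y = [0.0] * 7 + x[:-7]                        # the same record delayed 7 samples
```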

  1. Computational system identification of continuous-time nonlinear systems using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan

    2016-11-01

    In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
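
    The ABC rejection idea described above can be sketched for a toy continuous-time model dx/dt = -theta * x observed with noise. The model, prior, tolerance, and distance metric here are illustrative choices, not the paper's benchmark system.

```python
import math
import random

def simulate(theta, x0=1.0, dt=0.1, n=30):
    """Euler simulation of the toy continuous-time model dx/dt = -theta * x."""
    xs, x = [], x0
    for _ in range(n):
        x += dt * (-theta * x)
        xs.append(x)
    return xs

def abc_reject(data, prior, n_draws, eps):
    """ABC rejection: keep prior draws whose simulated trajectory lies within
    an RMS distance eps of the data; the kept draws approximate the posterior,
    giving an intrinsic description of parameter uncertainty."""
    kept = []
    for _ in range(n_draws):
        theta = prior()
        sim = simulate(theta)
        dist = math.sqrt(sum((s - d) ** 2 for s, d in zip(sim, data)) / len(data))
        if dist < eps:
            kept.append(theta)
    return kept

random.seed(0)
data = [x + random.gauss(0, 0.01) for x in simulate(0.8)]        # noisy observations
posterior = abc_reject(data, lambda: random.uniform(0.0, 2.0), 2000, 0.02)
```

    Note that no signal derivatives are estimated anywhere: only forward simulations are compared with the data, which is the advantage highlighted in the abstract.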

  2. Challenges and solutions for realistic room simulation

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    2002-05-01

    Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.

  3. Simulation evaluation of TIMER, a time-based, terminal air traffic, flow-management concept

    NASA Technical Reports Server (NTRS)

    Credeur, Leonard; Capron, William R.

    1989-01-01

    A description of a time-based, extended terminal area ATC concept called Traffic Intelligence for the Management of Efficient Runway scheduling (TIMER) and the results of a fast-time evaluation are presented. The TIMER concept is intended to bridge the gap between today's ATC system and a future automated time-based ATC system. The TIMER concept integrates en route metering, fuel-efficient cruise and profile descents, terminal time-based sequencing and spacing together with computer-generated controller aids, to improve delivery precision for fuller use of runway capacity. Simulation results identify and show the effects and interactions of such key variables as horizon of control location, delivery time error at both the metering fix and runway threshold, aircraft separation requirements, delay discounting, wind, aircraft heading and speed errors, and knowledge of final approach speed.

  4. Issues regarding 'immortal time' in the analysis of the treatment effects in observational studies.

    PubMed

    Liu, Jiannong; Weinhandl, Eric D; Gilbertson, David T; Collins, Allan J; St Peter, Wendy L

    2012-02-01

    In observational studies, treatment is often time dependent. Mishandling the time from the beginning of follow-up to treatment initiation can result in bias known as immortal time bias. Nephrology researchers who conduct observational research must be aware of how immortal time bias can be introduced into analyses. We review immortal time bias issues in time-to-event analyses in the biomedical literature and give examples from the nephrology literature. We also use simulations to quantify the bias in different methods of mishandling immortal time; intuitively explain how bias is introduced when immortal time is mishandled; raise issues regarding unadjusted treatment comparison, patient characteristics comparison, and confounder adjustment; and, using data from DaVita Inc., linked with the Centers for Medicare & Medicaid Services end-stage renal disease database, show that the severity of bias and the issues described can occur in actual data analyses of patients with end-stage renal disease. In the simulation examples, mishandling immortal time led to an underestimated hazard ratio (treatment vs. control), thus an overestimated treatment effect, by as much as 96%, and an overestimated hazard ratio by as much as 138%, depending on the distribution of 'survival' time and the method used. Results from the DaVita data were consistent with the simulation. Careful consideration of methodology is needed in observational analyses with time-dependent treatment.
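
    The immortal time bias discussed above can be reproduced with a toy simulation: treatment has no true effect, yet naively classifying subjects as "ever treated" makes treatment look protective, because subjects must survive long enough to be treated. The cohort below is synthetic and illustrative only, not the DaVita analysis.

```python
import random
from statistics import mean

random.seed(1)

def make_cohort(n=20000):
    """Synthetic cohort: exponential death times; treatment is initiated at a
    random time and only reaches subjects still alive. Treatment has NO true
    effect on survival."""
    cohort = []
    for _ in range(n):
        death = random.expovariate(1.0)      # survival time from baseline
        t_treat = random.expovariate(1.0)    # intended treatment start
        cohort.append((death, t_treat, t_treat < death))
    return cohort

subjects = make_cohort()
# Biased: classify by "ever treated", crediting pre-treatment (immortal) time.
naive_treated = mean(d for d, t, treated in subjects if treated)
naive_control = mean(d for d, t, treated in subjects if not treated)
# Correct handling for the treated arm: count survival only after treatment starts.
post_treatment = mean(d - t for d, t, treated in subjects if treated)
```

    With both times exponential with mean 1, the naive comparison makes the ineffective treatment appear to add roughly a full unit of survival, while correctly measured post-treatment survival is, by memorylessness, back at the marginal mean of 1.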

  5. Acceleration techniques for dependability simulation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Barnette, James David

    1995-01-01

    As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
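
    Random variate generation of the kind mentioned above is commonly done by inverse-transform sampling. A minimal sketch for exponential inter-event times, a standard building block of discrete-event simulation (not code from the thesis):

```python
import math
import random

def exp_variate(rate, u=None):
    """Inverse-transform sampling for Exponential(rate): if U ~ Uniform(0, 1),
    then F^{-1}(U) = -ln(1 - U) / rate has the desired distribution."""
    if u is None:
        u = random.random()
    return -math.log(1.0 - u) / rate

random.seed(42)
interarrivals = [exp_variate(2.0) for _ in range(100000)]  # sample mean should be ~0.5
```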

  6. Three-Dimensional Simulation of Traveling-Wave Tube Cold-Test Characteristics Using CST MICROWAVE STUDIO

    NASA Technical Reports Server (NTRS)

    Chevalier, Christine T.; Herrmann, Kimberly A.; Kory, Carol L.; Wilson, Jeffrey D.; Cross, Andrew W.; Santana, Samuel

    2003-01-01

    The electromagnetic field simulation software package CST MICROWAVE STUDIO (MWS) was used to compute the cold-test parameters - frequency-phase dispersion, on-axis impedance, and attenuation - for a traveling-wave tube (TWT) slow-wave circuit. The results were compared to experimental data, as well as to results from MAFIA, another three-dimensional simulation code from CST currently used at the NASA Glenn Research Center (GRC). The strong agreement between cold-test parameters simulated with MWS and those measured experimentally demonstrates the potential of this code to reduce the time and cost of TWT development.

  7. Optimization design of urban expressway ramp control

    NASA Astrophysics Data System (ADS)

    Xu, Hongke; Li, Peiqi; Zheng, Jinnan; Sun, Xiuzhen; Lin, Shan

    2017-05-01

    In this paper, various types of expressway systems are analyzed, and a variety of signal combinations are proposed to mitigate traffic congestion; these combinations are used to verify the effectiveness of the multi-signal combinatorial control strategy. The simulation software VISSIM was used to simulate the system. Based on a network model covering 25 combinations of road lengths and the corresponding simulation results, an optimization scheme suitable for the practical road model is derived. The simulation results show that the controller can reduce travel time by 25% under heavy traffic flow and improve road capacity by about 20%.
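
    As one concrete example of a feedback ramp-control law, the classic ALINEA update is sketched below. The paper does not specify its controller, so this (and all constants) is purely illustrative.

```python
def alinea(r_prev, occ_meas, occ_target=0.2, gain=70.0,
           r_min=200.0, r_max=1800.0):
    """One step of the ALINEA feedback ramp-metering law (rates in veh/h):
    r(k) = r(k-1) + K * (target occupancy - measured occupancy),
    clipped to physically meaningful bounds."""
    r = r_prev + gain * (occ_target - occ_meas)
    return min(r_max, max(r_min, r))
```

    High measured occupancy drives the metering rate down (holding vehicles on the ramp); low occupancy releases them faster.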

  8. Monthly mean simulation experiments with a coarse-mesh global atmospheric model

    NASA Technical Reports Server (NTRS)

    Spar, J.; Klugman, R.; Lutz, R. J.; Notario, J. J.

    1978-01-01

    Substitution of observed monthly mean sea-surface temperatures (SSTs) as lower boundary conditions, in place of climatological SSTs, failed to improve the model simulations. While the impact of SST anomalies on the model output is greater at sea level than at upper levels, the impact on the monthly mean simulations is not beneficial at any level. Shifts of one and two days in initialization time produced small, but non-trivial, changes in the model-generated monthly mean synoptic fields. No improvements in the mean simulations resulted from the use of either time-averaged initial data or re-initialization with time-averaged early model output. The noise level of the model, as determined from a multiple initial-state perturbation experiment, was found to be generally low, but with a noisier response to initial-state errors in high latitudes than in the tropics.

  9. Turbine-99 unsteady simulations - Validation

    NASA Astrophysics Data System (ADS)

    Cervantes, M. J.; Andersson, U.; Lövgren, H. M.

    2010-08-01

    The Turbine-99 test case, a Kaplan draft tube model, aimed to determine the state of the art in draft tube simulation. Three workshops were organized on the matter, in 1999, 2001, and 2005, where the geometry and experimental data were provided as boundary conditions to the participants. Since the last workshop, computational power and flow modelling have developed, and the available data have been completed with unsteady pressure measurements and phase-resolved velocity measurements in the cone. This new set of data, together with the corresponding phase-resolved velocity boundary conditions, offers new possibilities to validate unsteady numerical simulations of a Kaplan draft tube. The present work presents simulations of the Turbine-99 test case with time-dependent, angularly resolved inlet velocity boundary conditions. Different grids and time steps are investigated. The results are compared to experimental time-dependent pressure and velocity measurements.

  10. Self-diffusion in periodic porous media: a comparison of numerical simulation and eigenvalue methods.

    PubMed

    Schwartz, L M; Bergman, D J; Dunn, K J; Mitra, P P

    1996-01-01

    Random walk computer simulations are an important tool in understanding magnetic resonance measurements in porous media. In this paper we focus on the description of pulsed field gradient spin echo (PGSE) experiments that measure the probability, P(R,t), that a diffusing water molecule will travel a distance R in a time t. Because PGSE simulations are often limited by statistical considerations, we will see that valuable insight can be gained by working with simple periodic geometries and comparing simulation data to the results of exact eigenvalue expansions. In this connection, our attention will be focused on (1) the wavevector, k, and time dependent magnetization, M(k, t); and (2) the normalized probability, Ps(delta R, t), that a diffusing particle will return to within delta R of the origin after time t.
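
    A bare-bones version of the random-walk estimate of the return probability Ps(delta R, t) is sketched below for free, unrestricted diffusion on a cubic lattice; there is no pore geometry, so this illustrates only the method, not the paper's periodic porous media.

```python
import random
from math import sqrt

random.seed(3)

def return_fraction(n_walkers, n_steps, delta_r):
    """Fraction of 3D lattice random walkers ending within delta_r of the
    origin after n_steps: a Monte Carlo estimate of the return probability
    for free diffusion."""
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    hits = 0
    for _ in range(n_walkers):
        x = y = z = 0
        for _ in range(n_steps):
            dx, dy, dz = random.choice(moves)
            x += dx; y += dy; z += dz
        if sqrt(x * x + y * y + z * z) <= delta_r:
            hits += 1
    return hits / n_walkers

p_short = return_fraction(2000, 10, 2.0)    # early time: many walkers still nearby
p_long = return_fraction(2000, 100, 2.0)    # late time: walkers have spread out
```

    The statistical limitation noted in the abstract is visible here: the late-time estimate rests on only a handful of returning walkers, which is why comparison against eigenvalue expansions in simple geometries is so valuable.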

  11. Evaluation of an F100 multivariable control using a real-time engine simulation

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Skira, C.; Soeder, J. F.

    1977-01-01

    A multivariable control design for the F100 turbofan engine was evaluated, as part of the F100 multivariable control synthesis (MVCS) program. The evaluation utilized a real-time, hybrid computer simulation of the engine and a digital computer implementation of the control. Significant results of the evaluation are presented and recommendations concerning future engine testing of the control are made.

  12. The predictive power of SIMION/SDS simulation software for modeling ion mobility spectrometry instruments

    NASA Astrophysics Data System (ADS)

    Lai, Hanh; McJunkin, Timothy R.; Miller, Carla J.; Scott, Jill R.; Almirall, José R.

    2008-09-01

    The combined use of SIMION 7.0 and the statistical diffusion simulation (SDS) user program, in conjunction with SolidWorks® and COSMOSFloWorks® fluid dynamics software, to model a complete, commercial ion mobility spectrometer (IMS) was demonstrated for the first time and compared to experimental results for tests using compounds of immediate interest in the security industry (e.g., 2,4,6-trinitrotoluene, 2,7-dinitrofluorene, and cocaine). The aim of this research was to evaluate the predictive power of SIMION/SDS for application to IMS instruments. The simulation was evaluated against experimental results in three studies: (1) a drift:carrier gas flow rates study assesses the ability of SIMION/SDS to correctly predict the ion drift times; (2) a drift gas composition study evaluates the accuracy in predicting the resolution; (3) a gate width study compares the simulated peak shape and peak intensity with the experimental values. SIMION/SDS successfully predicted the correct drift time, intensity, and resolution trends for the operating parameters studied. Despite the need for estimations and assumptions in the construction of the simulated instrument, SIMION/SDS was able to predict the resolution between two ion species in air within 3% accuracy. The preliminary success of IMS simulations using SIMION/SDS software holds great promise for the design of future instruments with enhanced performance.
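
    The drift-time trend that SIMION/SDS must reproduce follows from the low-field mobility relation t_d = L / (K E), with the reduced mobility K0 corrected to the operating temperature and pressure. A minimal sketch; the K0, tube length, and field values below are illustrative, not taken from the paper:

```python
def drift_time(L_cm, E_V_per_cm, K0_cm2_per_Vs, T_K=298.15, P_torr=760.0):
    """Low-field IMS drift time t_d = L / (K * E), where the reduced
    mobility K0 is corrected to operating temperature and pressure."""
    K = K0_cm2_per_Vs * (760.0 / P_torr) * (T_K / 273.15)
    return L_cm / (K * E_V_per_cm)   # seconds

# Illustrative values: a 7 cm drift region at 200 V/cm with an assumed
# reduced mobility of 1.45 cm^2/(V s) gives a drift time of about 22 ms.
t_d = drift_time(L_cm=7.0, E_V_per_cm=200.0, K0_cm2_per_Vs=1.45)
```

    Changing the drift gas composition changes K0, which shifts t_d and the attainable resolution, the trends probed in studies (1) and (2) above.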

  13. The Predictive Power of SIMION/SDS Simulation Software for Modeling Ion Mobility Spectrometry Instruments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanh Lai; Timothy R. McJunkin; Carla J. Miller

    2008-09-01

    The combined use of SIMION 7.0 and the statistical diffusion simulation (SDS) user program, in conjunction with SolidWorks® and COSMOSFloWorks® fluid dynamics software, to model a complete, commercial ion mobility spectrometer (IMS) was demonstrated for the first time and compared to experimental results for tests using compounds of immediate interest in the security industry (e.g., 2,4,6-trinitrotoluene and cocaine). The aim of this research was to evaluate the predictive power of SIMION/SDS for application to IMS instruments. The simulation was evaluated against experimental results in three studies: 1) a drift:carrier gas flow rates study assesses the ability of SIMION/SDS to correctly predict the ion drift times; 2) a drift gas composition study evaluates the accuracy in predicting the resolution; and 3) a gate width study compares the simulated peak shape and peak intensity with the experimental values. SIMION/SDS successfully predicted the correct drift time, intensity, and resolution trends for the operating parameters studied. Despite the need for estimations and assumptions in the construction of the simulated instrument, SIMION/SDS was able to predict the resolution between two ion species in air within 3% accuracy. The preliminary success of IMS simulations using SIMION/SDS software holds great promise for the design of future instruments with enhanced performance.

  14. Maximum height and minimum time vertical jumping.

    PubMed

    Domire, Zachary J; Challis, John H

    2015-08-20

    The performance criterion in maximum vertical jumping has typically been assumed to be simply raising the center of mass as high as possible. In many sporting activities, minimizing movement time during the jump is likely also critical to successful performance. The purpose of this study was to examine maximum height jumps performed while minimizing jump time. A direct dynamics model was used to examine squat jump performance, with dual performance criteria: maximize jump height and minimize jump time. The muscle model had activation dynamics, force-length and force-velocity properties, and a series elastic component representing the tendon. The simulations were run in two modes. In Mode 1 the model was placed in a fixed initial position. In Mode 2 the simulation model selected the initial squat configuration as well as the sequence of muscle activations. The inclusion of time as a factor in Mode 1 simulations resulted in a small decrease in jump height and moderate time savings. The improvement in time was mostly accomplished by taking off from a less extended position. In Mode 2 simulations, more substantial time savings could be achieved by beginning the jump in a more upright posture. However, when time was weighted more heavily in these simulations, there was a more substantial reduction in jump height. Future work is needed to examine the implications for countermovement jumping and to examine the possibility of minimizing movement time as part of the control scheme even when the task is to jump maximally.
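
    The height/time trade-off the study quantifies already appears in a purely ballistic sketch: jump height depends only on take-off velocity, while movement time also includes the push phase, which shortens from a less extended (shorter push distance) start. The constant-acceleration push and the numbers below are crude illustrative assumptions, not the paper's muscle-driven direct dynamics model.

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_metrics(takeoff_velocity, push_distance):
    """Ballistic jump height plus a crude movement-time estimate that
    assumes a constant-acceleration push over push_distance (meters)."""
    height = takeoff_velocity ** 2 / (2.0 * G)
    push_time = 2.0 * push_distance / takeoff_velocity  # from d = v*t/2
    flight_time_to_peak = takeoff_velocity / G
    return height, push_time + flight_time_to_peak

# A deeper squat (0.40 m push) vs. a more upright start (0.25 m push):
h_deep, t_deep = jump_metrics(3.0, 0.40)        # higher jump, more time
h_upright, t_upright = jump_metrics(2.8, 0.25)  # lower jump, less time
```

    Even this toy model reproduces the qualitative finding: starting more upright trades a little height for a shorter total movement time.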

  15. Relation of Parallel Discrete Event Simulation algorithms with physical models

    NASA Astrophysics Data System (ADS)

    Shchur, L. N.; Shchur, L. V.

    2015-09-01

    We extend the concept of local simulation times in parallel discrete event simulation (PDES) in order to take into account the architecture of current hardware and software in high-performance computing. We briefly review previous research on the mapping of PDES onto physical problems, and emphasise how physical results may help predict the behaviour of parallel algorithms.

  16. Application of wildfire simulation models for risk analysis

    Treesearch

    Alan A. Ager; Mark A. Finney

    2009-01-01

    Wildfire simulation models are being widely used by fire and fuels specialists in the U.S. to support tactical and strategic decisions related to the mitigation of wildfire risk. Much of this application has resulted from the development of a minimum travel time (MTT) fire spread algorithm (M. Finney) that makes it computationally feasible to simulate thousands of...
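
    The core idea of a minimum-travel-time spread calculation can be sketched as a shortest-path problem on the landscape grid, with edge costs equal to distance divided by spread rate. Finney's actual MTT algorithm is considerably more refined (it handles elliptical spread and arbitrary travel paths), so this Dijkstra version is only an illustration:

```python
import heapq

def min_travel_times(rate, ignition, cell=30.0):
    """Dijkstra sketch of a minimum-travel-time spread: fire arrival time
    at each cell given per-cell spread rates (m/min) and an ignition cell."""
    rows, cols = len(rate), len(rate[0])
    INF = float("inf")
    t = [[INF] * cols for _ in range(rows)]
    t[ignition[0]][ignition[1]] = 0.0
    pq = [(0.0, ignition)]
    while pq:
        tt, (r, c) = heapq.heappop(pq)
        if tt > t[r][c]:
            continue  # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # edge cost: cell width divided by the mean spread rate
                cost = cell / (0.5 * (rate[r][c] + rate[nr][nc]))
                if tt + cost < t[nr][nc]:
                    t[nr][nc] = tt + cost
                    heapq.heappush(pq, (tt + cost, (nr, nc)))
    return t
```

    On a uniform 10 m/min landscape with 30 m cells, each step costs 3 minutes, so the far corner of a 3x3 grid ignites after 12 minutes.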

  17. HYDRA: High Speed Simulation Architecture for Precision Spacecraft Formation Flying

    NASA Technical Reports Server (NTRS)

    Martin, Bryan J.; Sohl, Garett A.

    2003-01-01

    This viewgraph presentation describes HYDRA, an architecture that facilitates high-fidelity, real-time simulation of formation flying missions. The contents include: 1) Motivation; 2) Objective; 3) HYDRA-Description and Overview; 4) HYDRA-Hierarchy; 5) Communication in HYDRA; 6) Simulation Specific Concerns in HYDRA; 7) Example application (Formation Acquisition); and 8) Sample Problem Results.

  18. Ground testing and simulation. II - Aerodynamic testing and simulation: Saving lives, time, and money

    NASA Technical Reports Server (NTRS)

    Dayman, B., Jr.; Fiore, A. W.

    1974-01-01

    The present work discusses in general terms the various kinds of ground facilities, in particular, wind tunnels, which support aerodynamic testing. Since not all flight parameters can be simulated simultaneously, an important problem consists in matching parameters. It is pointed out that there is a lack of wind tunnels for a complete Reynolds-number simulation. Using a computer to simulate flow fields can result in considerable reduction of wind-tunnel hours required to develop a given flight vehicle.

  19. Comparison of Cellulose Iβ Simulations with Three Carbohydrate Force Fields.

    PubMed

    Matthews, James F; Beckham, Gregg T; Bergenstråhle-Wohlert, Malin; Brady, John W; Himmel, Michael E; Crowley, Michael F

    2012-02-14

    Molecular dynamics simulations of cellulose have recently become more prevalent due to increased interest in renewable energy applications, and many atomistic and coarse-grained force fields exist that can be applied to cellulose. However, to date no systematic comparison between carbohydrate force fields has been conducted for this important system. To that end, we present a molecular dynamics simulation study of hydrated, 36-chain cellulose Iβ microfibrils at room temperature with three carbohydrate force fields (CHARMM35, GLYCAM06, and Gromos 45a4) up to the near-microsecond time scale. Our results indicate that each of these simulated microfibrils diverge from the cellulose Iβ crystal structure to varying degrees under the conditions tested. The CHARMM35 and GLYCAM06 force fields eventually result in structures similar to those observed at 500 K with the same force fields, which are consistent with the experimentally observed high-temperature behavior of cellulose I. The third force field, Gromos 45a4, produces behavior significantly different from experiment, from the other two force fields, and from previously reported simulations with this force field using shorter simulation times and constrained periodic boundary conditions. For the GLYCAM06 force field, initial hydrogen-bond conformations and choice of electrostatic scaling factors significantly affect the rate of structural divergence. Our results suggest dramatically different time scales for convergence of properties of interest, which is important in the design of computational studies and comparisons to experimental data. This study highlights that further experimental and theoretical work is required to understand the structure of small diameter cellulose microfibrils typical of plant cellulose.

  20. High Fidelity, “Faster than Real-Time” Simulator for Predicting Power System Dynamic Behavior - Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flueck, Alex

    The “High Fidelity, Faster than Real-Time Simulator for Predicting Power System Dynamic Behavior” was designed and developed by Illinois Institute of Technology with critical contributions from Electrocon International, Argonne National Laboratory, Alstom Grid and McCoy Energy. Also essential to the project were our two utility partners: Commonwealth Edison and AltaLink. The project was a success due to several major breakthroughs in the area of large-scale power system dynamics simulation, including (1) a validated faster-than-real-time simulation of both stable and unstable transient dynamics in a large-scale positive-sequence transmission grid model, (2) a three-phase unbalanced simulation platform for modeling new grid devices, such as independently controlled single-phase static var compensators (SVCs), (3) the world’s first high-fidelity three-phase unbalanced dynamics and protection simulator based on Electrocon’s CAPE program, and (4) a first-of-its-kind implementation of a single-phase induction motor model with stall capability. The simulator results will aid power grid operators in their true time of need, when there is a significant risk of cascading outages. The simulator will accelerate performance and enhance accuracy of dynamics simulations, enabling operators to maintain reliability and steer clear of blackouts. In the long term, the simulator will form the backbone of the newly conceived hybrid real-time protection and control architecture that will coordinate local controls, wide-area measurements, wide-area controls and advanced real-time prediction capabilities.
    The nation’s citizens will benefit in several ways, including (1) less down time from power outages due to the faster-than-real-time simulator’s predictive capability, (2) higher levels of reliability due to the detailed dynamics plus protection simulation capability, and (3) more resiliency due to the three-phase unbalanced simulator’s ability to model three-phase and single-phase networks and devices.

  1. Monte Carlo simulation of PET and SPECT imaging of {sup 90}Y

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, Akihiko, E-mail: takahsr@hs.med.kyushu-u.ac.jp; Sasaki, Masayuki; Himuro, Kazuhiko

    2015-04-15

    Purpose: Yttrium-90 ({sup 90}Y) is traditionally thought of as a pure beta emitter and is used in targeted radionuclide therapy, with imaging performed using bremsstrahlung single-photon emission computed tomography (SPECT). However, because {sup 90}Y also emits positrons through internal pair production with a very small branching ratio, positron emission tomography (PET) imaging is also available. Because of the insufficient image quality of {sup 90}Y bremsstrahlung SPECT, PET imaging has been suggested as an alternative. In this paper, the authors present a Monte Carlo-based simulation-reconstruction framework for {sup 90}Y to comprehensively analyze the PET and SPECT imaging techniques and to quantitatively consider the disadvantages associated with them. Methods: Our PET and SPECT simulation modules were developed using Monte Carlo simulation of Electrons and Photons (MCEP), developed by Dr. S. Uehara. The PET code (MCEP-PET) generates a sinogram and reconstructs the tomography image using a time-of-flight ordered subset expectation maximization (TOF-OSEM) algorithm with attenuation compensation. To evaluate MCEP-PET, simulated results of {sup 18}F PET imaging were compared with experimental results, confirming that MCEP-PET can reproduce the experimental results very well. The SPECT code (MCEP-SPECT) models the collimator and NaI detector system and generates the projection images and projection data. To save computational time, the authors adopt prerecorded {sup 90}Y bremsstrahlung photon data calculated by MCEP. The projection data are also reconstructed using the OSEM algorithm. The authors simulated PET and SPECT images of a water phantom containing six hot spheres filled with different concentrations of {sup 90}Y without background activity. The amount of activity was 163 MBq, with an acquisition time of 40 min. Results: The simulated {sup 90}Y-PET image accurately reproduced the experimental results. The PET image is visually superior to the SPECT image because of the low background noise. The simulation reveals that the detected photon number in SPECT is comparable to that of PET, but a large fraction (approximately 75%) of scattered and penetration photons contaminates the SPECT image. The lower limit of {sup 90}Y detection in the SPECT image was approximately 200 kBq/ml, while that in the PET image was approximately 100 kBq/ml. Conclusions: By comparing the background noise level and the image concentration profile of both techniques, PET image quality was determined to be superior to that of bremsstrahlung SPECT. The developed simulation codes will be very useful in future investigations of PET and bremsstrahlung SPECT imaging of {sup 90}Y.
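
    The OSEM reconstruction used above is an accelerated, subset-based variant of the MLEM update. A toy MLEM iteration on a tiny linear emission model, illustrative only; the MCEP-PET implementation additionally models time-of-flight and attenuation:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Plain MLEM (the un-subset parent of OSEM) for a tiny linear
    emission model y ~ A @ x, starting from a uniform image."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                  # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / sens         # multiplicative update
    return x

# Noiseless two-pixel phantom seen through three projection bins
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_hat = mlem(A, y)
```

    Each iteration forward-projects the current image, compares it with the measured counts, and back-projects the ratio; OSEM applies the same update using only a subset of projections per sub-iteration, which is what makes it fast enough for routine reconstruction.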

  2. A discrete event simulation model for evaluating the performances of an m/g/c/c state dependent queuing system.

    PubMed

    Khalid, Ruzelan; Nawawi, Mohd Kamal M; Kawsar, Luthful A; Ghani, Noraida A; Kamil, Anton A; Mustafa, Adli

    2013-01-01

    M/G/C/C state dependent queuing networks consider service rates as a function of the number of residing entities (e.g., pedestrians, vehicles, and products). However, modeling such dynamic rates is not supported in modern discrete event simulation (DES) software. We designed an approach to address this limitation and used it to construct an M/G/C/C state-dependent queuing model in Arena software. Using the model, we evaluated and analyzed the impacts of various arrival rates on the throughput, the blocking probability, the expected service time and the expected number of entities in a complex network topology. Results indicated that for each network there is a range of arrival rates in which the simulation results fluctuate drastically across replications, causing the simulation results and the analytical results to exhibit discrepancies. Detailed results showing how the simulation results tally with the analytical results, in both abstract and graphical forms, together with scientific justifications for these discrepancies, have been documented and discussed.
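
    The modeling obstacle described above, service rates that depend on the current occupancy, can be sketched directly in a hand-rolled discrete event simulation. The linear slowdown rule and the parameters below are illustrative stand-ins, not the Arena model or the standard pedestrian speed-density relations:

```python
import heapq
import random

def mgcc_sim(lam, C, corridor_time, horizon=2000.0, seed=1):
    """Toy DES of an M/G/C/C state-dependent queue: Poisson arrivals,
    capacity C, and a traversal time that stretches with the occupancy n
    found on entry (a crude stand-in for a speed-density walking model)."""
    rng = random.Random(seed)
    n = 0
    arrivals = blocked = served = 0
    events = [(rng.expovariate(lam), "arr")]   # (time, kind) event heap
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arr":
            arrivals += 1
            if n < C:
                n += 1
                # traversal slows linearly as the corridor fills
                mean = corridor_time * (1.0 + n / C)
                heapq.heappush(events, (t + rng.expovariate(1.0 / mean), "dep"))
            else:
                blocked += 1                   # entity lost: system is full
            heapq.heappush(events, (t + rng.expovariate(lam), "arr"))
        else:
            n -= 1
            served += 1
    return {"arrivals": arrivals, "blocked": blocked, "served": served,
            "blocking_prob": blocked / arrivals}
```

    As the arrival rate grows, entities increasingly find the system at capacity C and are blocked, which is the blocking probability that the analytical M/G/C/C results predict.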

  3. Orthogonal recursive bisection data decomposition for high performance computing in cardiac model simulations: dependence on anatomical geometry.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J

    2009-01-01

    The orthogonal recursive bisection (ORB) algorithm can be used as a data decomposition strategy to distribute the large data set of a cardiac model across a distributed memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of the computational load of each element in the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set to achieve optimal load balancing. The anatomical data set was given by both ventricles of the Visible Female data set at 0.2 mm resolution. Fiber orientation was included. The data set was rotated by 90 degrees around the x, y and z axes, respectively. By either translating or simply taking the magnitude of the resulting negative coordinates, we were able to create 14 data sets of the same anatomy with different orientations and positions in the overall volume. Computational load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100, to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance using the different data sets and orientations. The simulations were carried out for load ratios 1:10, 1:25 and 1:38.85 on a 512-processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms. This yields a difference of about 28 h for a simulation of 10 s simulation time. However, given larger processor partitions, the difference in run time decreases and becomes less significant. Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
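
    The decomposition itself can be sketched in a few lines: ORB recursively splits the element set along alternating axes at the weighted median, so that each half carries roughly half the computational load (here, tissue elements would get larger weights than non-tissue ones). A minimal illustration, not the Blue Gene/L implementation:

```python
def orb(points, weights, depth=3, axis=0):
    """Orthogonal recursive bisection sketch: split along alternating axes
    at the weighted median so each half carries ~half the load."""
    if depth == 0 or len(points) <= 1:
        return [points]
    order = sorted(range(len(points)), key=lambda i: points[i][axis])
    total = sum(weights)
    acc, cut = 0.0, len(order)
    for k, i in enumerate(order):       # find the weighted-median cut
        acc += weights[i]
        if acc >= total / 2:
            cut = k + 1
            break
    left = [points[i] for i in order[:cut]]
    lw = [weights[i] for i in order[:cut]]
    right = [points[i] for i in order[cut:]]
    rw = [weights[i] for i in order[cut:]]
    nxt = (axis + 1) % len(points[0])   # alternate the split axis
    return orb(left, lw, depth - 1, nxt) + orb(right, rw, depth - 1, nxt)
```

    Because the splits are axis-aligned, the resulting partitions depend on how the anatomy is oriented in the volume, which is exactly the sensitivity investigated above.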

  4. A real-time simulation evaluation of an advanced detection, isolation, and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  5. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.
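
    The system-identification step, fitting a discrete dynamic model to a simulated time history, can be illustrated with the simplest case: a least-squares AR(2) fit, which exactly recovers the recurrence coefficients of a damped oscillation. This is only a sketch of the idea; the paper's estimated unsteady aerodynamic model is more elaborate.

```python
import numpy as np

def fit_ar2(y):
    """Least-squares fit of y[n] = a1*y[n-1] + a2*y[n-2], the simplest
    flavor of identifying a dynamic model from a time history."""
    targets = y[2:]
    regressors = np.column_stack([y[1:-1], y[:-2]])
    coeffs, *_ = np.linalg.lstsq(regressors, targets, rcond=None)
    return coeffs

# A sampled damped oscillation y[n] = r^n cos(n*theta) obeys the exact
# recurrence with a1 = 2 r cos(theta) and a2 = -r^2.
r, theta = 0.99, 0.3
y = np.array([r ** n * np.cos(n * theta) for n in range(50)])
a1, a2 = fit_ar2(y)
```

    The fitted coefficients encode the modal frequency and damping of the response; tracking how that damping crosses zero as dynamic pressure varies is the essence of a time-domain flutter search.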

  6. Translocation time of a polymer chain through an energy gradient nanopore

    NASA Astrophysics Data System (ADS)

    Luo, Meng-Bo; Zhang, Shuang; Wu, Fan; Sun, Li-Zhen

    2017-06-01

    The translocation time of a polymer chain through an interaction energy gradient nanopore was studied by Monte Carlo simulations and the Fokker-Planck equation with double-absorbing boundary conditions. Both the simulation and calculation revealed three different behaviors for polymer translocation. These behaviors can be explained qualitatively from free-energy landscapes obtained for polymer translocation at different parameters. Results show that the translocation time of a polymer chain through a nanopore can be tuned by suitably designing the interaction energy gradient.
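
    The double-absorbing-boundary setup can be mimicked with a biased 1-D random walk: absorbing boundaries at 0 (retraction) and N (successful translocation), with the energy gradient reduced to a constant forward-step bias. A deliberately minimal sketch, not the paper's chain model or its Fokker-Planck solution:

```python
import random

def translocation(N, p_forward, n_runs=2000, seed=7):
    """Biased 1-D random walk from site 1 with absorbing boundaries at
    0 and N; returns (success probability, mean time of successful runs)."""
    rng = random.Random(seed)
    times, successes = [], 0
    for _ in range(n_runs):
        x, t = 1, 0
        while 0 < x < N:
            x += 1 if rng.random() < p_forward else -1
            t += 1
        if x == N:                       # reached the far side: translocated
            successes += 1
            times.append(t)
    mean_time = sum(times) / len(times) if times else float("nan")
    return successes / n_runs, mean_time
```

    A forward bias raises the translocation (success) probability well above the unbiased value of 1/N for a walker starting one site in, the same qualitative effect an attractive energy gradient has on the chain.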

  7. Dynamics analysis of the fast-slow hydro-turbine governing system with different time-scale coupling

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Chen, Diyi; Wu, Changzhi; Wang, Xiangyu

    2018-01-01

    Multi-time-scale modeling of the hydro-turbine governing system is crucial for precise modeling of a hydropower plant and provides support for stability analysis of the system. Considering the inertia and response time of the hydraulic servo system, the hydro-turbine governing system is transformed into a fast-slow hydro-turbine governing system. The effects of the time-scale on the dynamical behavior of the system are analyzed, and the fast-slow dynamical behaviors of the system are investigated for different time-scales. Furthermore, a theoretical analysis of the stable regions is presented. The influences of the time-scale on the stable region are analyzed by simulation, and the simulation results confirm the correctness of the theoretical analysis. More importantly, the methods and results of this paper provide a perspective on multi-time-scale modeling of hydro-turbine governing systems and contribute to the optimization analysis and control of the system.

  8. Simulation of a small muon tomography station system based on RPCs

    NASA Astrophysics Data System (ADS)

    Chen, S.; Li, Q.; Ma, J.; Kong, H.; Ye, Y.; Gao, J.; Jiang, Y.

    2014-10-01

    In this work, Monte Carlo simulations were used to study the performance of a small muon tomography station based on four glass resistive plate chambers (RPCs) with a spatial resolution of approximately 1.0 mm (FWHM). We developed a simulation code to generate cosmic ray muons with the appropriate distribution of energies and angles. The PoCA and EM algorithms were used to reconstruct the objects for comparison. We compared the Z discrimination time with and without muon momentum measurement. The relation between Z discrimination time and spatial resolution was also studied. Simulation results suggest that the mean scattering angle is a better Z indicator and that upgrading to larger RPCs will improve reconstruction image quality.
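
    The PoCA reconstruction named above has a compact geometric core: treat the incoming and outgoing muon tracks as straight lines, find the shortest segment connecting them, and take its midpoint as the scattering vertex, with the angle between the track directions as the scattering angle. A sketch of that computation (not the authors' reconstruction code):

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Point of Closest Approach between lines p1 + s*d1 and p2 + t*d2:
    returns the midpoint of the shortest connecting segment (the PoCA
    scattering-vertex estimate) and the scattering angle."""
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if denom < 1e-12:                # near-parallel tracks: no unique PoCA
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    closest1, closest2 = p1 + s * d1, p2 + t * d2
    angle = np.arccos(np.clip(d1 @ d2, -1.0, 1.0))
    return 0.5 * (closest1 + closest2), angle
```

    Accumulating these vertices and scattering angles over many muons yields the density map; the mean scattering angle per voxel is the Z indicator favored by the simulation results above.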

  9. Efficiency analysis of numerical integrations for finite element substructure in real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jinting; Lu, Liqiao; Zhu, Fei

    2018-01-01

    Finite element (FE) is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, so that the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes more pronounced as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under a large time step and large time delay.
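
    Of the explicit schemes compared above, the central difference method is the most compact to write down. A single-DOF sketch; the multi-DOF FE case replaces the scalars with matrices, which is where diagonal vs. non-diagonal damping matters:

```python
def cdm_sdof(m, c, k, u0, v0, dt, n_steps, force=lambda t: 0.0):
    """Central difference method for m*u'' + c*u' + k*u = F(t):
    explicit and conditionally stable (dt must stay below 2/omega_n)."""
    a0 = (force(0.0) - c * v0 - k * u0) / m
    u_prev = u0 - dt * v0 + 0.5 * dt * dt * a0   # fictitious u_{-1}
    u = [u0]
    lhs = m / dt ** 2 + c / (2 * dt)             # constant "effective mass"
    for n in range(n_steps):
        t = n * dt
        rhs = (force(t) - (k - 2 * m / dt ** 2) * u[-1]
               - (m / dt ** 2 - c / (2 * dt)) * u_prev)
        u_prev, u_next = u[-1], rhs / lhs
        u.append(u_next)
    return u
```

    Because the effective left-hand side involves only mass and damping, CDM needs no factorization inside the loop when both matrices are diagonal, which is why it wins the diagonal-damping comparison above.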

  10. Calibration of a water-quality model for low-flow conditions on the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, 2003

    USGS Publications Warehouse

    Lundgren, Robert F.; Nustad, Rochelle A.

    2008-01-01

    A time-of-travel and reaeration-rate study was conducted by the U.S. Geological Survey, in cooperation with the North Dakota Department of Health, the Minnesota Pollution Control Agency, and the cities of Fargo, North Dakota, and Moorhead, Minnesota, to provide information to calibrate a water-quality model for streamflows of less than 150 cubic feet per second. Data collected from September 24 through 27, 2003, were used to develop and calibrate the U.S. Environmental Protection Agency Water Quality Analysis Simulation Program model (hereinafter referred to as the Fargo WASP water-quality model) for a 19.2-mile reach of the Red River of the North. The Fargo WASP water-quality model was calibrated for the transport of dye by fitting simulated time-concentration dye curves to measured time-concentration dye curves. Simulated peak concentrations were within 10 percent of measured concentrations. Simulated traveltimes of the dye cloud centroid were within 7 percent of measured traveltimes. The variances of the simulated dye concentrations were similar to the variances of the measured dye concentrations, indicating dispersion was reproduced reasonably well. Average simulated dissolved-oxygen concentrations were within 6 percent of average measured concentrations. Average simulated ammonia concentrations were within the range of measured concentrations. Simulated dissolved-oxygen and ammonia concentrations were affected by the specification of a single nitrification rate in the Fargo WASP water-quality model. Data sets from August 1989 and August 1990 were used to test traveltime and simulation of dissolved oxygen and ammonia. For streamflows that ranged from 60 to 407 cubic feet per second, simulated traveltimes were within 7 percent of measured traveltimes. Measured dissolved-oxygen concentrations were underpredicted by less than 15 percent for both data sets. 
Results for ammonia were poor; measured ammonia concentrations were underpredicted by as much as 70 percent for both data sets. Overall, application of the Fargo WASP water-quality model to the 1989 and 1990 data sets resulted in poor agreement between measured and simulated concentrations. This likely is a result of changes in the waste-load composition for the Fargo and Moorhead wastewater-treatment plants as a result of improvements to the wastewater-treatment plants since 1990. The change in waste-load composition probably resulted in a change in decay rates and in dissolved oxygen no longer being substantially depressed downstream from the Moorhead and Fargo wastewater-treatment plants. The Fargo WASP water-quality model is valid for the current (2008) treatment processes at the wastewater-treatment plants.
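
    The time-concentration dye curves at the heart of such a calibration follow, in the idealized 1-D case, the advection-dispersion solution for an instantaneous slug injection; the temporal peak at a downstream station arrives near x/U, which is the traveltime being matched. The parameter values below are illustrative, not the Red River calibration values:

```python
import math

def dye_concentration(x, t, mass, area, U, D):
    """Analytical 1-D advection-dispersion solution for an instantaneous
    dye slug of given mass in a channel of cross-sectional area `area`,
    mean velocity U and dispersion coefficient D."""
    if t <= 0:
        return 0.0
    coeff = mass / (area * math.sqrt(4.0 * math.pi * D * t))
    return coeff * math.exp(-(x - U * t) ** 2 / (4.0 * D * t))
```

    For a station 1000 m downstream and a 0.5 m/s mean velocity, the simulated concentration peaks near t = 2000 s; calibration adjusts U and D until such curves match the measured ones in peak, timing and spread.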

  11. Steady and Unsteady Nozzle Simulations Using the Conservation Element and Solution Element Method

    NASA Technical Reports Server (NTRS)

    Friedlander, David Joshua; Wang, Xiao-Yen J.

    2014-01-01

    This paper presents results from computational fluid dynamic (CFD) simulations of a three-stream plug nozzle. Time-accurate, Euler, quasi-1D and 2D-axisymmetric simulations were performed as part of an effort to provide a CFD-based approach to modeling nozzle dynamics. The CFD code used for the simulations is based on the space-time Conservation Element and Solution Element (CESE) method. Steady-state results were validated using the Wind-US code and a code utilizing the MacCormack method, while the unsteady results were partially validated via an aeroacoustic benchmark problem. The CESE steady-state flow field solutions showed excellent agreement with solutions derived from the other methods and codes, while preliminary unsteady results for the three-stream plug nozzle are also shown. Additionally, a study was performed to explore the sensitivity of gross thrust computations to the control surface definition. The results showed that most of the sensitivity in computing the gross thrust is attributed to the control surface stencil resolution and choice of stencil end points, and not to the control surface definition itself. Finally, comparisons between the quasi-1D and 2D-axisymmetric solutions were performed in order to gain insight on whether a quasi-1D solution can capture the steady and unsteady nozzle phenomena without the cost of a 2D-axisymmetric simulation. Initial results show that while the quasi-1D solutions are similar to the 2D-axisymmetric solutions, the inability of the quasi-1D simulations to predict two-dimensional phenomena limits their accuracy.

  12. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun

    Here, an entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace’s law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.

  13. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    NASA Astrophysics Data System (ADS)

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun; Derome, Dominique; Carmeliet, Jan

    2018-03-01

    An entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace's law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.
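
    The Laplace's-law validation mentioned above reduces to a linear fit: for a stationary droplet the pressure jump obeys delta_p = (dim - 1) * sigma / R (i.e. 2*sigma/R for a 3-D droplet), so simulated (R, delta_p) pairs fitted against (dim - 1)/R recover the surface tension. A sketch of that post-processing step with synthetic data, not the paper's lattice Boltzmann code:

```python
def fit_surface_tension(radii, dp, dim=3):
    """Recover sigma from Laplace's law delta_p = (dim - 1) * sigma / R
    via a least-squares slope through the origin."""
    xs = [(dim - 1) / r for r in radii]
    return sum(x * y for x, y in zip(xs, dp)) / sum(x * x for x in xs)

# Synthetic droplets obeying Laplace's law with sigma = 0.05 (lattice units)
radii = [10.0, 20.0, 40.0]
dp = [2.0 * 0.05 / r for r in radii]
sigma = fit_surface_tension(radii, dp)
```

    In practice the measured pressure jumps come from the simulated droplet interiors and exteriors at several radii; a straight-line fit through the origin confirms the model reproduces the law, and its slope gives the effective surface tension.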

  14. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    DOE PAGES

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun; ...

    2018-03-22

    Here, an entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace’s law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.
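The Laplace's law validation described above can be sketched numerically: in two dimensions the pressure jump across a droplet interface is Δp = σ/R, so the surface tension follows from the slope of Δp against 1/R measured over droplets of several radii. A minimal sketch with numpy; the surface tension and radii below are illustrative lattice-unit values, not taken from the paper:

```python
import numpy as np

sigma_true = 0.012                                  # hypothetical lattice surface tension
radii = np.array([15.0, 20.0, 25.0, 30.0, 40.0])    # droplet radii (lattice units)
dp = sigma_true / radii                             # 2D Laplace law: Δp = σ / R

# recover σ as the slope of Δp versus 1/R (intercept should vanish)
sigma_fit, intercept = np.polyfit(1.0 / radii, dp, 1)
```

In three dimensions the same fit applies with Δp = 2σ/R, so the slope would be 2σ.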

  15. Investigations in mechanisms and strategies to enhance hearing with cochlear implants

    NASA Astrophysics Data System (ADS)

    Churchill, Tyler H.

    Cochlear implants (CIs) produce hearing sensations by stimulating the auditory nerve (AN) with current pulses whose amplitudes are modulated by filtered acoustic temporal envelopes. While this technology has provided hearing for multitudinous CI recipients, even bilaterally-implanted listeners have more difficulty understanding speech in noise and localizing sounds than normal hearing (NH) listeners. Three studies reported here have explored ways to improve electric hearing abilities. Vocoders are often used to simulate CIs for NH listeners. Study 1 was a psychoacoustic vocoder study examining the effects of harmonic carrier phase dispersion and simulated CI current spread on speech intelligibility in noise. Results showed that simulated current spread was detrimental to speech understanding and that speech vocoded with carriers whose components' starting phases were equal was the least intelligible. Cross-correlogram analyses of AN model simulations confirmed that carrier component phase dispersion resulted in better neural envelope representation. Localization abilities rely on binaural processing mechanisms in the brainstem and mid-brain that are not fully understood. In Study 2, several potential mechanisms were evaluated based on the ability of metrics extracted from stereo AN simulations to predict azimuthal locations. Results suggest that unique across-frequency patterns of binaural cross-correlation may provide a strong cue set for lateralization and that interaural level differences alone cannot explain NH sensitivity to lateral position. While it is known that many bilateral CI users are sensitive to interaural time differences (ITDs) in low-rate pulsatile stimulation, most contemporary CI processing strategies use high-rate, constant-rate pulse trains. In Study 3, we examined the effects of pulse rate and pulse timing on ITD discrimination, ITD lateralization, and speech recognition by bilateral CI listeners. 
Results showed that listeners were able to use low-rate pulse timing cues presented redundantly on multiple electrodes for ITD discrimination and lateralization of speech stimuli even when mixed with high rates on other electrodes. These results have contributed to a better understanding of those aspects of the auditory system that support speech understanding and binaural hearing, suggested vocoder parameters that may simulate aspects of electric hearing, and shown that redundant, low-rate pulse timing supports improved spatial hearing for bilateral CI listeners.

  16. A simulation study of the flight dynamics of elastic aircraft. Volume 2: Data

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Davidson, John B.; Schmidt, David K.

    1987-01-01

    The simulation experiment described addresses the effects of structural flexibility on the dynamic characteristics of a generic family of aircraft. The simulation was performed using the NASA Langley VMS simulation facility. The vehicle models were obtained as part of this research project. The simulation results include complete response data and subjective pilot ratings and comments and so allow a variety of analyses. The subjective ratings and analysis of the time histories indicate that increased flexibility can lead to increased tracking errors, degraded handling qualities, and changes in the frequency content of the pilot inputs. These results, furthermore, are significantly affected by the visual cues available to the pilot.

  17. Role of Boundary Conditions in Monte Carlo Simulation of MEMS Devices

    NASA Technical Reports Server (NTRS)

    Nance, Robert P.; Hash, David B.; Hassan, H. A.

    1997-01-01

    A study is made of the issues surrounding prediction of microchannel flows using the direct simulation Monte Carlo method. This investigation includes the introduction and use of new inflow and outflow boundary conditions suitable for subsonic flows. A series of test simulations for a moderate-size microchannel indicates that a high degree of grid under-resolution in the streamwise direction may be tolerated without loss of accuracy. In addition, the results demonstrate the importance of physically correct boundary conditions, as well as possibilities for reducing the time associated with the transient phase of a simulation. These results imply that simulations of longer ducts may be more feasible than previously envisioned.

  18. Agent-Based Simulation to Support the Effectiveness, Procurement, and Employment of Non-Lethal Weapon Systems

    DTIC Science & Technology

    2017-06-01

    cases have the most significant impact on reducing the number of lethal shots fired in the simulation. Table 10 shows the reduction in the average...Figure ES-2 was developed to show the results of the focused study on maximum effective range. After analyzing the results of the 1,700 simulated...toward other agents based on whose side they are on at that time. This attribute is critical to this study as the sidedness of the local population is

  19. Flight investigation of a four-dimensional terminal area guidance system for STOL aircraft

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Hardy, G. H.

    1981-01-01

    A series of flight tests and fast-time simulations were conducted, using the augmentor wing jet STOL research aircraft and the STOLAND 4D-RNAV system to add to the growing data base of 4D-RNAV system performance capabilities. To obtain statistically meaningful data a limited amount of flight data were supplemented by a statistically significant amount of data obtained from fast-time simulation. The results of these tests are reported. Included are comparisons of the 4D-RNAV estimated winds with actual winds encountered in flight, as well as data on along-track navigation and guidance errors, and time-of-arrival errors at the final approach waypoint. In addition, a slight improvement of the STOLAND 4D-RNAV system is proposed and demonstrated, using the fast-time simulation.

  20. Tube thoracostomy training with a medical simulator is associated with faster, more successful performance of the procedure

    PubMed Central

    Chung, Tae Nyoung; Kim, Sun Wook; You, Je Sung; Chung, Hyun Soo

    2016-01-01

    Objective Tube thoracostomy (TT) is a commonly performed intensive care procedure. Simulator training may be a good alternative method for TT training, compared with conventional methods such as apprenticeship and the animal skills laboratory. However, there is insufficient evidence supporting the use of a simulator. The aim of this study is to determine whether training with a medical simulator is associated with a faster TT procedure, compared to conventional training without a simulator. Methods This is a simulation study. Eligible participants were emergency medicine residents with minimal TT experience (≤3 procedures). Participants were randomized to two groups: the conventional training group and the simulator training group. While the simulator training group used the simulator to practice TT, the conventional training group watched the instructor performing TT on a cadaver. After training, all participants performed a TT on a cadaver. Performance quality was measured as correct placement and time delay. Subjects were graded on any difficulty encountered during the procedure. Results The estimated median procedure time was 228 seconds in the conventional training group and 75 seconds in the simulator training group, a statistically significant difference (P=0.040). The difficulty grading did not show any significant difference between groups (overall performance scale, 2 vs. 3; P=0.094). Conclusion Tube thoracostomy training with a medical simulator, when compared to training without a simulator, is associated with a significantly faster procedure when performed on a human cadaver. PMID:27752610

  1. GPU-based Green's function simulations of shear waves generated by an applied acoustic radiation force in elastic and viscoelastic models.

    PubMed

    Yang, Yiqun; Urban, Matthew W; McGough, Robert J

    2018-05-15

    Shear wave calculations induced by an acoustic radiation force are very time-consuming on desktop computers, and high-performance graphics processing units (GPUs) achieve dramatic reductions in the computation time for these simulations. The acoustic radiation force is calculated using the fast near field method and the angular spectrum approach, and then the shear waves are calculated in parallel with Green's functions on a GPU. This combination enables rapid evaluation of shear waves for push beams with different spatial samplings and for apertures with different f/#. Relative to shear wave simulations that evaluate the same algorithm on an Intel i7 desktop computer, a high performance nVidia GPU reduces the time required for these calculations by a factor of 45 and 700 when applied to elastic and viscoelastic shear wave simulation models, respectively. These GPU-accelerated simulations were also compared to measurements in different viscoelastic phantoms, and the results are similar. For parametric evaluations and for comparisons with measured shear wave data, shear wave simulations with the Green's function approach are ideally suited for high-performance GPUs.

  2. Shoulder arthroscopy simulator training improves shoulder arthroscopy performance in a cadaveric model.

    PubMed

    Henn, R Frank; Shah, Neel; Warner, Jon J P; Gomoll, Andreas H

    2013-06-01

    The purpose of this study was to quantify the benefits of shoulder arthroscopy simulator training with a cadaveric model of shoulder arthroscopy. Seventeen first-year medical students with no prior experience in shoulder arthroscopy were enrolled and completed this study. Each subject completed a baseline proctored arthroscopy on a cadaveric shoulder, which included controlling the camera and completing a standard series of tasks using the probe. The subjects were randomized, and 9 of the subjects received training on a virtual reality simulator for shoulder arthroscopy. All subjects then repeated the same cadaveric arthroscopy. The arthroscopic videos were analyzed in a blinded fashion for time to task completion and subjective assessment of technical performance. The 2 groups were compared by use of Student t tests, and change over time within groups was analyzed with paired t tests. There were no observed differences between the 2 groups on the baseline evaluation. The simulator group improved significantly from baseline with respect to time to completion and subjective performance (P < .05). Time to completion was significantly faster in the simulator group compared with controls at the final evaluation (P < .05). No difference was observed between the groups on the subjective scores at the final evaluation (P = .98). Shoulder arthroscopy simulator training resulted in significant benefits in clinical shoulder arthroscopy time to task completion in this cadaveric model. This study provides important additional evidence of the benefit of simulators in orthopaedic surgical training. There may be a role for simulator training in shoulder arthroscopy education. Copyright © 2013 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
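The statistical comparison described above (Student t tests between groups, paired t tests for change within a group) can be sketched with numpy alone. The completion times below are hypothetical stand-ins, not the study's data; the paired t statistic measures within-subject improvement from baseline to final evaluation:

```python
import numpy as np

# hypothetical task-completion times (seconds) for one group,
# at baseline and at the final evaluation -- illustrative only
baseline = np.array([310., 295., 330., 360., 300., 315., 340., 325., 305.])
final    = np.array([210., 190., 250., 260., 205., 220., 240., 230., 215.])

d = baseline - final                                     # within-subject improvement
t_paired = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))  # paired t statistic
```

The p-value would then be read from the t distribution with len(d) - 1 degrees of freedom; the between-group comparison is analogous with a two-sample (pooled or Welch) statistic.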

  3. Modeling and control for a magnetic levitation system based on SIMLAB platform in real time

    NASA Astrophysics Data System (ADS)

    Yaseen, Mundher H. A.; Abd, Haider J.

    2018-03-01

    Magnetic levitation systems have become a hot topic of study due to their minimal friction and low energy consumption, which are regarded as very important issues. This paper proposes a new magnetic levitation system using the real-time control Simulink feature of the SIMLAB microcontroller. The control system of the maglev transportation system is verified by simulations and experimental results, and its superiority is indicated in comparison with previous literature and conventional control strategies. In addition, the proposed system was implemented under the effect of three controller types: the linear-quadratic regulator (LQR), the proportional-integral-derivative (PID) controller, and lead compensation. The controller performance was compared in terms of three parameters: peak overshoot, settling time, and rise time. The findings confirm the agreement between the simulation and the experimental results obtained. Moreover, the LQR controller produced greater stability and a more homogeneous response than the other controllers. In the experiments, the LQR yielded 14.6%, 0.199, and 0.064 for peak overshoot, settling time, and rise time, respectively.
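As a sketch of how an LQR gain of the kind compared above is computed: for a linearized plant ẋ = Ax + Bu, the gain follows from the stabilizing solution of the algebraic Riccati equation, obtainable from the stable eigenspace of the Hamiltonian matrix. The A, B, Q, R values below are a hypothetical linearized maglev model, not the paper's; numpy only:

```python
import numpy as np

def lqr(A, B, Q, R):
    """Continuous-time LQR gain via the stable eigenspace of the Hamiltonian."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]                 # eigenvectors of stable eigenvalues
    X1, X2 = stable[:n, :], stable[n:, :]
    P = (X2 @ np.linalg.inv(X1)).real         # Riccati solution
    return Rinv @ B.T @ P                     # state-feedback gain, u = -Kx

A = np.array([[0.0, 1.0], [900.0, 0.0]])      # hypothetical linearized maglev (unstable)
B = np.array([[0.0], [30.0]])
K = lqr(A, B, Q=np.diag([100.0, 1.0]), R=np.array([[0.1]]))
cl = np.linalg.eigvals(A - B @ K)             # closed-loop poles
```

The closed-loop eigenvalues all lie in the left half plane, which is the stability property the record attributes to the LQR design.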

  4. Research of laser echo signal simulator

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Shi, Rui; Wang, Xin; Li, Zhou

    2015-11-01

    The laser echo signal simulator is one of the most significant components of hardware-in-the-loop (HWIL) simulation systems for LADAR. System and time-series models of the laser echo signal simulator are established. Some influential factors that can induce fixed and random errors in the simulated return signals are analyzed, and these system insertion errors are then quantified. Using this theoretical model, the simulation system is investigated experimentally. The results, corrected by subtracting the fixed error, indicate that the range error of the simulated laser return signal is less than 0.25 m and that the system can simulate distances from 50 m to 20 km.

  5. Using fire dynamics simulator to reconstruct a hydroelectric power plant fire accident.

    PubMed

    Chi, Jen-Hao; Wu, Sheng-Hung; Shu, Chi-Min

    2011-11-01

    The location of the hydroelectric power plant poses a high risk to occupants seeking to escape in a fire accident. Calculating the heat release rate of transformer oil as 11.5 MW/m², the fire at the Taiwan Dajia-River hydroelectric power plant was reconstructed using the fire dynamics simulator (FDS). The variations at the escape route of the fire hazard factors temperature, radiant heat, carbon monoxide, and oxygen were collected during the simulation to verify the causes of the serious casualties resulting from the fire. The simulated safe escape time when taking temperature changes into account is about 236 sec, 155 sec for radiant heat changes, 260 sec for carbon monoxide changes, and 235-248 sec for oxygen changes. These escape times are far less than the actual escape time of 302 sec. The simulation thus demonstrated the urgent need to improve escape options for people escaping a hydroelectric power plant fire. © 2011 American Academy of Forensic Sciences.
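The tenability comparison in this record reduces to an available-safe-egress-time versus required-safe-egress-time check: the escape window ends when the first hazard reaches its limit. A minimal sketch using the times reported in the abstract (taking the lower bound of the 235-248 sec oxygen range):

```python
# simulated time (seconds) for each hazard to reach its tenability limit
aset = {"temperature": 236, "radiant_heat": 155,
        "carbon_monoxide": 260, "oxygen": 235}

available = min(aset.values())   # escape window set by the worst hazard (radiant heat)
required = 302                   # actual escape time in the reconstructed fire
margin = available - required    # negative margin: safe escape was not possible
```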

  6. Simulation studies of vestibular macular afferent-discharge patterns using a new, quasi-3-D finite volume method

    NASA Technical Reports Server (NTRS)

    Ross, M. D.; Linton, S. W.; Parnas, B. R.

    2000-01-01

    A quasi-three-dimensional finite-volume numerical simulator was developed to study passive voltage spread in vestibular macular afferents. The method, borrowed from computational fluid dynamics, discretizes events transpiring in small volumes over time. The afferent simulated had three calyces with processes. The number of processes and synapses, and the direction and timing of synapse activation, were varied. Simultaneous synapse activation resulted in the shortest latency, while directional activation (proximal to distal and distal to proximal) yielded the most regular discharges. Color-coded visualizations showed that the simulator discretized events and demonstrated that discharge produced a distal spread of voltage from the spike initiator into the ending. The simulations indicate that directional input, morphology, and timing of synapse activation can affect discharge properties, as must the distal spread of voltage from the spike initiator. The finite volume method has generality and can be applied to more complex neurons to explore discrete synaptic effects in four dimensions.

  7. Thin film growth by 3D multi-particle diffusion limited aggregation model: Anomalous roughening and fractal analysis

    NASA Astrophysics Data System (ADS)

    Nasehnejad, Maryam; Nabiyouni, G.; Gholipour Shahraki, Mehran

    2018-03-01

    In this study a 3D multi-particle diffusion limited aggregation method is employed to simulate the growth of rough surfaces with fractal behavior in the electrodeposition process. A deposition model is used in which the radial motion of the particles with probability P competes with random motions with probability 1 - P. Thin film growth is simulated for different values of the probability P (related to the electric field) and the thickness of the layer (related to the number of deposited particles). The influence of these parameters on the morphology, kinetics of roughening, and fractal dimension of the simulated surfaces has been investigated. The results show that the surface roughness increases with increasing deposition time and that the scaling exponents exhibit a complex behavior known as anomalous scaling. It seems that in the electrodeposition process, radial motion of the particles toward the growing seeds may be an important mechanism leading to anomalous scaling. The results also indicate that larger values of the probability P result in a smoother topography with a more densely packed structure. We have suggested a dynamic scaling ansatz for the interface width as a function of deposition time, scan length, and probability. Two different methods are employed to evaluate the fractal dimension of the simulated surfaces: the "cube counting" and "roughness" methods. The results of both methods show that by increasing the probability P or decreasing the deposition time, the fractal dimension of the simulated surfaces is increased. All obtained values of the fractal dimension are close to 2.5 in the diffusion limited aggregation model.
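A minimal 2D analogue of the growth rule described above (a radial step toward the aggregate with probability P competing with an unbiased lattice step) can be sketched as follows. This is an illustrative on-lattice DLA toy, not the paper's 3D multi-particle code; all parameters are hypothetical:

```python
import numpy as np

def dla_2d(n_particles=120, p_radial=0.3, size=81, seed=0):
    """On-lattice DLA: walkers drift radially with probability p_radial, else step randomly."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=bool)
    c = size // 2
    grid[c, c] = True                                # seed particle at the centre
    r_max = 1                                        # current cluster extent
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    placed = 1
    while placed < n_particles:
        ang = rng.uniform(0.0, 2.0 * np.pi)
        r_launch = min(r_max + 5, c - 2)             # launch just outside the cluster
        x = c + int(r_launch * np.cos(ang))
        y = c + int(r_launch * np.sin(ang))
        for _ in range(20000):                       # cap the walk length
            if grid[x-1:x+2, y-1:y+2].any():         # adjacent to cluster: stick here
                grid[x, y] = True
                placed += 1
                r_max = max(r_max, int(np.hypot(x - c, y - c)) + 1)
                break
            if rng.random() < p_radial:              # biased (radial) step
                x += int(np.sign(c - x)); y += int(np.sign(c - y))
            else:                                    # unbiased lattice step
                dx, dy = moves[rng.integers(4)]
                x += dx; y += dy
            if not (1 <= x < size - 1 and 1 <= y < size - 1):
                break                                # walker left the box; relaunch
    return grid

cluster = dla_2d()
```

Raising p_radial compacts the cluster, which mirrors the record's observation that larger P yields a smoother, more densely packed deposit.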

  8. A high-order multi-zone cut-stencil method for numerical simulations of high-speed flows over complex geometries

    NASA Astrophysics Data System (ADS)

    Greene, Patrick T.; Eldredge, Jeff D.; Zhong, Xiaolin; Kim, John

    2016-07-01

    In this paper, we present a method for performing uniformly high-order direct numerical simulations of high-speed flows over arbitrary geometries. The method was developed with the goal of simulating and studying the effects of complex isolated roughness elements on the stability of hypersonic boundary layers. The simulations are carried out on Cartesian grids with the geometries imposed by a third-order cut-stencil method. A fifth-order hybrid weighted essentially non-oscillatory scheme was implemented to capture any steep gradients in the flow created by the geometries and a third-order Runge-Kutta method is used for time advancement. A multi-zone refinement method was also utilized to provide extra resolution at locations with expected complex physics. The combination results in a globally fourth-order scheme in space and third order in time. Results confirming the method's high order of convergence are shown. Two-dimensional and three-dimensional test cases are presented and show good agreement with previous results. A simulation of Mach 3 flow over the logo of the Ubuntu Linux distribution is shown to demonstrate the method's capabilities for handling complex geometries. Results for Mach 6 wall-bounded flow over a three-dimensional cylindrical roughness element are also presented. The results demonstrate that the method is a promising tool for the study of hypersonic roughness-induced transition.
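Third-order Runge-Kutta time advancement paired with a WENO scheme, as in the record above, is commonly the strong-stability-preserving (Shu-Osher) variant; a minimal sketch under that assumption, verified on the linear test problem u' = -u:

```python
import numpy as np

def ssprk3_step(f, u, t, dt):
    """One step of the third-order strong-stability-preserving RK (Shu-Osher form)."""
    u1 = u + dt * f(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(t + 0.5 * dt, u2))

f = lambda t, u: -u                 # simple stiff-free test problem, exact: e^{-t}
u, t, dt = 1.0, 0.0, 0.01
for _ in range(100):                # integrate to t = 1
    u = ssprk3_step(f, u, t, dt)
    t += dt
```

In the paper's setting f would be the negated WENO spatial residual; the stepping formula is unchanged.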

  9. Multiscale analysis of structure development in expanded starch snacks

    NASA Astrophysics Data System (ADS)

    van der Sman, R. G. M.; Broeze, J.

    2014-11-01

    In this paper we perform a multiscale analysis of the food structuring process of the expansion of starchy snack foods like keropok, which obtains a solid foam structure. In particular, we want to investigate the validity of the hypothesis of Kokini and coworkers, that expansion is optimal at the moisture content, where the glass transition and the boiling line intersect. In our analysis we make use of several tools, (1) time scale analysis from the field of physical transport phenomena, (2) the scale separation map (SSM) developed within a multiscale simulation framework of complex automata, (3) the supplemented state diagram (SSD), depicting phase transition and glass transition lines, and (4) a multiscale simulation model for the bubble expansion. Results of the time scale analysis are plotted in the SSD, and give insight into the dominant physical processes involved in expansion. Furthermore, the results of the time scale analysis are used to construct the SSM, which has aided us in the construction of the multiscale simulation model. Simulation results are plotted in the SSD. This clearly shows that the hypothesis of Kokini is qualitatively true, but has to be refined. Our results show that bubble expansion is optimal for moisture content, where the boiling line for gas pressure of 4 bars intersects the isoviscosity line of the critical viscosity 10^6 Pa·s, which runs parallel to the glass transition line.

  10. Effect of Hydrodynamic Interactions on Self-Diffusion of Quasi-Two-Dimensional Colloidal Hard Spheres.

    PubMed

    Thorneywork, Alice L; Rozas, Roberto E; Dullens, Roel P A; Horbach, Jürgen

    2015-12-31

    We compare experimental results from a quasi-two-dimensional colloidal hard sphere fluid to a Monte Carlo simulation of hard disks with small particle displacements. The experimental short-time self-diffusion coefficient D(S) scaled by the diffusion coefficient at infinite dilution, D(0), strongly depends on the area fraction, pointing to significant hydrodynamic interactions at short times in the experiment, which are absent in the simulation. In contrast, the area fraction dependence of the experimental long-time self-diffusion coefficient D(L)/D(0) is in quantitative agreement with D(L)/D(0) obtained from the simulation. This indicates that the reduction in the particle mobility at short times due to hydrodynamic interactions does not lead to a proportional reduction in the long-time self-diffusion coefficient. Furthermore, the quantitative agreement between experiment and simulation at long times indicates that hydrodynamic interactions effectively do not affect the dependence of D(L)/D(0) on the area fraction. In light of this, we discuss the link between structure and long-time self-diffusion in terms of a configurational excess entropy and do not find a simple exponential relation between these quantities for all fluid area fractions.
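Self-diffusion coefficients of the kind compared above are extracted from the slope of the mean-squared displacement, MSD(t) = 4·D·t in two dimensions. A sketch on a plain lattice random walk (an illustrative free-diffusion case, not the paper's hard-disk Monte Carlo), where unit steps give D = 1/4:

```python
import numpy as np

rng = np.random.default_rng(1)
n_walkers, n_steps = 2000, 400

# 2D lattice random walk: each step moves one unit in ±x or ±y
choices = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)], dtype=float)
steps = choices[rng.integers(4, size=(n_walkers, n_steps))]
pos = steps.cumsum(axis=1)                      # trajectories, shape (walkers, steps, 2)

msd = (pos**2).sum(axis=2).mean(axis=0)         # <r^2>(t), averaged over walkers
t = np.arange(1, n_steps + 1)
D = np.polyfit(t, msd, 1)[0] / 4.0              # MSD = 4 D t  =>  D = slope / 4
```

For an interacting fluid the same fit applied at long times yields D_L, the long-time self-diffusion coefficient discussed in the record.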

  11. A novel approach for extracting viscoelastic parameters of living cells through combination of inverse finite element simulation and Atomic Force Microscopy.

    PubMed

    Wei, Fanan; Yang, Haitao; Liu, Lianqing; Li, Guangyong

    2017-03-01

    The dynamic mechanical behaviour of living cells has been described by viscoelasticity. However, quantitation of the viscoelastic parameters of living cells is far from sophisticated. In this paper, combining inverse finite element (FE) simulation with Atomic Force Microscopy characterization, we attempt to develop a new method to evaluate and acquire trustworthy viscoelastic indices of living cells. First, the influence of the experimental parameters on the stress relaxation process is assessed using FE simulation. As suggested by the simulations, cell height has negligible impact on the shape of the force-time curve, i.e. the characteristic relaxation time; and the effect originating from the substrate can be totally eliminated when a stiff substrate (Young's modulus larger than 3 GPa) is used. Then, to develop an effective optimization strategy for the inverse FE simulation, a parameter sensitivity evaluation is performed for Young's modulus, Poisson's ratio, and the characteristic relaxation time. With the experimental data obtained through a typical stress relaxation measurement, the viscoelastic parameters are extracted through inverse FE simulation by comparing the simulation results and experimental measurements. Finally, the reliability of the acquired mechanical parameters is verified with different loading experiments performed on the same cell.
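The characteristic relaxation time that the inverse FE fit extracts can be illustrated on an ideal single-exponential stress relaxation curve, F(t) = F∞ + ΔF·e^(−t/τ). This is a simplification of the paper's viscoelastic model, and every numeric value below is hypothetical; the plateau is estimated from the tail and τ recovered from a log-linear fit of the decaying part:

```python
import numpy as np

tau_true, f_inf, delta_f = 2.0, 1.0, 0.8        # hypothetical relaxation parameters
t = np.linspace(0.0, 20.0, 400)
F = f_inf + delta_f * np.exp(-t / tau_true)     # ideal stress relaxation curve

f_inf_hat = F[-1]                    # plateau estimated from the tail of the curve
m = t < 3.0 * tau_true               # fit only the clearly decaying portion
slope, _ = np.polyfit(t[m], np.log(F[m] - f_inf_hat), 1)
tau_hat = -1.0 / slope               # recovered characteristic relaxation time
```

The inverse FE approach in the record generalizes this idea: it adjusts model parameters until the simulated force-time curve matches the measured one.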

  12. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations

    PubMed Central

    Hallock, Michael J.; Stone, John E.; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-01-01

    Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems. PMID:24882911

  13. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations.

    PubMed

    Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-05-01

    Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.
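The spatial decomposition with load balancing described above amounts, in its simplest form, to assigning each GPU a slab of lattice planes sized proportionally to its measured throughput. A minimal sketch of that partitioning step (the throughput weights are hypothetical; the real implementation also exchanges halo planes between neighboring slabs each step):

```python
def slab_partition(nz, throughputs):
    """Split nz lattice planes across devices in proportion to their throughput."""
    total = sum(throughputs)
    sizes = [int(nz * w / total) for w in throughputs]
    sizes[-1] += nz - sum(sizes)          # hand any rounding remainder to the last device
    return sizes

# e.g. two fast GPUs and one half-speed GPU sharing a 256-plane lattice
sizes = slab_partition(256, [1.0, 1.0, 0.5])
```

Dynamic load balancing then re-runs this partition with updated per-device timings as the simulation proceeds.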

  14. Simulation-based validation and arrival-time correction for Patlak analyses of Perfusion-CT scans

    NASA Astrophysics Data System (ADS)

    Bredno, Jörg; Hom, Jason; Schneider, Thomas; Wintermark, Max

    2009-02-01

    Blood-brain-barrier (BBB) breakdown is a hypothesized mechanism for hemorrhagic transformation in acute stroke. The Patlak analysis of a Perfusion Computed Tomography (PCT) scan measures the BBB permeability, but the method yields higher estimates when applied to the first pass of the contrast bolus compared to a delayed phase. We present a numerical phantom that simulates vascular and parenchymal time-attenuation curves to determine the validity of permeability measurements obtained with different acquisition protocols. A network of tubes represents the major cerebral arteries ipsi- and contralateral to an ischemic event. These tubes branch off into smaller segments that represent capillary beds. Blood flow in the phantom is freely defined and simulated as non-Newtonian tubular flow. Diffusion of contrast in the vessels and permeation through vessel walls is part of the simulation. The phantom allows us to compare the results of a permeability measurement to the simulated vessel wall status. A Patlak analysis reliably detects areas with BBB breakdown for acquisitions of 240 s duration, whereas results obtained from the first pass are biased in areas of reduced blood flow. Compensating for differences in contrast arrival times reduces this bias and gives good estimates of BBB permeability for PCT acquisitions of 90-150 s duration.
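The Patlak analysis referenced above fits a line to y(t) = C_t(t)/C_p(t) against the "stretched time" x(t) = ∫₀ᵗ C_p dτ / C_p(t): the slope estimates the permeability K_i and the intercept the blood-volume term V_0. A synthetic sketch with an illustrative arterial input function and hypothetical parameter values (not the phantom's):

```python
import numpy as np

t = np.linspace(1.0, 240.0, 240)            # acquisition times (s)
cp = t * np.exp(-t / 8.0) + 0.5             # hypothetical arterial input C_p(t)
ki_true, v0_true = 0.01, 0.05               # permeability K_i and blood-volume term V_0

dt = t[1] - t[0]
integral = np.cumsum(cp) * dt               # ∫ C_p dτ (rectangle rule)
ct = ki_true * integral + v0_true * cp      # Patlak tissue model C_t(t)

x = integral / cp                           # Patlak abscissa ("stretched time")
y = ct / cp                                 # Patlak ordinate
ki_hat, v0_hat = np.polyfit(x, y, 1)        # slope -> K_i, intercept -> V_0
```

The arrival-time correction studied in the record amounts to shifting C_t relative to C_p before forming x and y, so that flow-delayed tissue curves do not masquerade as permeability.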

  15. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, A; Zbijewski, W; Bolch, W

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation rates on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. 
Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is often problematic because published reports of previous work rarely provide enough information to replicate the simulation in detail.
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments.
Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
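
    The "sparse sampling followed by interpolation" strategy described above reduces to a short post-processing step. The sketch below is our own illustration, not code from the symposium (the function name, the 1-D geometry, and the linear-interpolation choice are all assumptions): scatter is scored only at every k-th detector pixel, and the remaining pixels are filled in by interpolation rather than simulated.

```python
def interpolate_sparse(scored, stride, n):
    """Reconstruct a full detector row of n pixels from values scored
    only at pixels 0, stride, 2*stride, ... (scored[j] = value at pixel j*stride)."""
    full = []
    for i in range(n):
        j, r = divmod(i, stride)
        if j + 1 < len(scored):
            t = r / stride
            full.append((1.0 - t) * scored[j] + t * scored[j + 1])
        else:
            full.append(scored[j])  # past the last scored pixel: hold the value
    return full

# Score a synthetic, smooth scatter profile at every 4th pixel only,
# then interpolate the remaining pixels instead of tracking photons for them.
coarse = [0.0, 4.0, 8.0]            # values scored at pixels 0, 4, 8
row = interpolate_sparse(coarse, 4, 9)
```

    Because scatter distributions vary slowly across the detector, simulating one in k pixels cuts the number of tracked histories roughly k-fold at little accuracy cost; a real implementation would use a smoother 2-D interpolant over the detector plane.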

  16. SEMICONDUCTOR INTEGRATED CIRCUITS: A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    NASA Astrophysics Data System (ADS)

    Jizhi, Liu; Xingbi, Chen

    2009-12-01

    A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as reduced computing time, no need for high-end computing hardware, and ease of use.

  17. Development of high resolution simulations of the atmospheric environment using the MASS model

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Zack, John W.; Karyampudi, V. Mohan

    1989-01-01

    Numerical simulations were performed with a very high resolution (7.25 km) version of the MASS model (Version 4.0) in an effort to diagnose the vertical wind shear and static stability structure during the Shuttle Challenger disaster which occurred on 28 January 1986. These meso-beta scale simulations reveal that the strongest vertical wind shears were concentrated in the 200 to 150 mb layer at 1630 GMT, i.e., at about the time of the disaster. These simulated vertical shears were the result of two primary dynamical processes. The juxtaposition of both of these processes produced a shallow (30 mb deep) region of strong vertical wind shear, and hence, low Richardson number values during the launch time period. Comparisons with the Cape Canaveral (XMR) rawinsonde indicate that the high resolution MASS 4.0 simulation more closely emulated nature than did previous simulations of the same event with the GMASS model.

  18. Numerical Simulation of Screech Tones from Supersonic Jets: Physics and Prediction

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Zaman, Khairul Q. (Technical Monitor)

    2002-01-01

    The objectives of this project are to: (1) perform a numerical simulation of the jet screech phenomenon; and (2) use the data of the simulations to obtain a better understanding of the physics of jet screech. The original grant period was for three years. This was extended at no cost for an extra year to allow the principal investigator time to publish the results. We would like to report that our research work and results (supported by this grant) have fulfilled both objectives of the grant. The following is a summary of the important accomplishments: (1) We have now demonstrated that it is possible to perform accurate numerical simulations of the jet screech phenomenon. Both the axisymmetric case and the fully three-dimensional case were carried out successfully. It is worthwhile to note that this is the first time the screech tone phenomenon has been successfully simulated numerically; (2) All four screech modes were reproduced in the simulation. The computed screech frequencies and intensities were in good agreement with the NASA Langley Research Center data; (3) The staging phenomenon was reproduced in the simulation; (4) The effects of nozzle lip thickness and jet temperature were studied. Simulated tone frequencies at various nozzle lip thicknesses and jet temperatures were found to agree well with experiments; (5) The simulated data were used to explain, for the first time, why there are two axisymmetric screech modes and two helical/flapping screech modes; (6) The simulated data were used to show that when two tones are observed, they co-exist rather than switching from one mode to the other, back and forth, as some previous investigators have suggested; and (7) Some resources of the grant were used to support the development of new computational aeroacoustics (CAA) methodology. (Our screech tone simulations have benefited from the availability of these improved methods.)

  19. The Distant Tail at 200 R(sub E): Comparison Between Geotail Observations and the Results from a Global Magnetohydrodynamic Simulation

    NASA Technical Reports Server (NTRS)

    Berchem, J.; Raeder, J.; Ashour-Abdalla, M.; Frank, L. A.; Paterson, W. R.; Ackerson, K. L.; Kokubun, S.; Yamamoto, T.; Lepping, R. P.

    1998-01-01

    This paper reports a comparison between Geotail observations of plasmas and magnetic fields at 200 R(sub E) in the Earth's magnetotail with results from a time-dependent, global magnetohydrodynamic simulation of the interaction of the solar wind with the magnetosphere. The study focuses on observations from July 7, 1993, during which the Geotail spacecraft crossed the distant tail magnetospheric boundary several times while the interplanetary magnetic field (IMF) was predominantly northward and was marked by slow rotations of its clock angle. Simultaneous IMP 8 observations of solar wind ions and the IMF were used as driving input for the MHD simulation, and the resulting time series were compared directly with those from the Geotail spacecraft. The very good agreement found provided the basis for an investigation of the response of the distant tail associated with the clock angle of the IMF. Results from the simulation show that the stresses imposed by the draping of magnetosheath field lines and the asymmetric removal of magnetic flux tailward of the cusps altered considerably the shape of the distant tail as the solar wind discontinuities convected downstream of Earth. As a result, the cross section of the distant tail was considerably flattened along the direction perpendicular to the IMF clock angle, the direction of the neutral sheet following that of the IMF. The simulation also revealed that the combined action of magnetic reconnection and the slow rotation of the IMF clock angle led to a braiding of the distant tail's magnetic field lines along the axis of the tail, with the plane of the braid lying in the direction of the IMF.

  20. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    PubMed

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and a compiled programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output, can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
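
    The antithetic-variates technique the study credits with roughly halving the required replications can be shown with a minimal sketch (a toy integrand of our own, not the UKPDS 68 model): each uniform draw u is paired with its mirror 1 - u, so overestimates and underestimates partially cancel.

```python
import random

def mc_estimate(f, n, antithetic=False, seed=0):
    """Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1), using n draws."""
    rng = random.Random(seed)
    if antithetic:
        pairs = n // 2
        # Each draw u is paired with 1 - u; for monotone f the pair members
        # are negatively correlated, reducing the variance of the mean.
        return sum(0.5 * (f(u) + f(1.0 - u))
                   for u in (rng.random() for _ in range(pairs))) / pairs
    return sum(f(rng.random()) for _ in range(n)) / n

f = lambda u: u * u                 # E[f] = 1/3; monotone, so pairing helps
plain = mc_estimate(f, 2000)
anti = mc_estimate(f, 2000, antithetic=True)
```

    The gain depends on the outcome function being (near-)monotone in its random inputs; for strongly non-monotone responses the negative correlation weakens and the replication savings shrink.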

  1. Real time flight simulation methodology

    NASA Technical Reports Server (NTRS)

    Parrish, E. A.; Cook, G.; Mcvey, E. S.

    1976-01-01

    An example sensitivity study is presented to demonstrate how a digital autopilot designer could make a decision on minimum sampling rate for computer specification. It consists of comparing the simulated step response of an existing analog autopilot and its associated aircraft dynamics to the digital version operating at various sampling frequencies, and specifying a sampling frequency that results in an acceptable change in relative stability. In general, the zero order hold introduces phase lag which will increase overshoot and settling time. It should be noted that this solution is for substituting a digital autopilot for a continuous autopilot. A complete redesign could yield behavior that more closely resembles the continuous response or that better meets the original design goals.
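
    The phase lag mentioned above has a simple back-of-envelope form: a zero-order hold behaves approximately like a half-sample delay, adding about 180·f/fs degrees of lag at frequency f when sampling at fs. A small sketch of the sampling-rate trade (our own illustration, not the paper's autopilot model):

```python
def zoh_phase_lag_deg(f_hz, fs_hz):
    """Approximate phase lag added by a zero-order hold: the ZOH acts like
    a T/2 delay (T = 1/fs), i.e. lag = 2*pi*f*(T/2) rad = 180*f/fs degrees."""
    return 180.0 * f_hz / fs_hz

# Lag added at a 1 Hz loop-crossover frequency for candidate sample rates:
# slower sampling erodes more phase margin, hence more overshoot.
lags = {fs: zoh_phase_lag_deg(1.0, fs) for fs in (5.0, 10.0, 50.0)}
```

    Picking the slowest sample rate whose added lag still leaves an acceptable phase margin at crossover is one quantitative way to arrive at the "minimum sampling rate" such a sensitivity study converges on.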

  2. A theoretical and simulation study of the contact discontinuities based on a Vlasov simulation code

    NASA Astrophysics Data System (ADS)

    Tsai, T. C.; Lyu, L. H.; Chao, J. K.; Chen, M. Q.; Tsai, W. H.

    2009-12-01

    Contact discontinuity (CD) is the simplest solution that can be obtained from the magnetohydrodynamics (MHD) Rankine-Hugoniot jump conditions. Due to the limitations of previous kinetic simulation models, the stability of the CD has become a controversial issue in the past 10 years. The stability of the CD is reexamined analytically and numerically. Our theoretical analysis shows that the electron temperature profile and the ion temperature profile must be out of phase across the CD if the CD structure is to be stable on the electron time scale and with zero electron heat flux on either side of the CD. Both a newly developed fourth-order implicit electrostatic Vlasov simulation code and an electromagnetic finite-size particle code are used to examine the stability and the electrostatic nature of the CD structure. Our theoretical prediction is verified by both simulations. Our Vlasov simulation results also indicate that a simulation with initial electron and ion temperature profiles varying in phase across the CD will undergo very transient changes on the electron time scale but will relax into a quasi-steady CD structure within a few ion plasma oscillation periods, provided a real ion-electron mass ratio is used in the simulation and the boundary conditions allow nonzero heat flux to be present at the boundaries of the simulation box. The simulation results of this study indicate that the Vlasov simulation is a powerful tool for studying nonlinear phenomena with nonperiodic boundary conditions and nonzero heat flux at the boundaries of the simulation box.

  3. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53 minute long simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. As high-performance computing continues to fall in price and become more accessible, implementing Monte Carlo techniques with cloud computing for clinical applications will become increasingly attractive.
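
    The inverse power model mentioned above, T(n) = a * n**(-b), can be fit with an ordinary log-log least-squares line. The sketch below uses synthetic timings merely shaped like the reported 53-minute run dropping to about 3 minutes on 20 nodes; the exponent 0.95 is our own illustrative choice, not a value from the study.

```python
import math

def fit_inverse_power(nodes, times):
    """Fit T(n) = a * n**(-b) by least squares in log-log space."""
    xs = [math.log(n) for n in nodes]
    ys = [math.log(t) for t in times]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - slope * xbar)
    return a, -slope            # b is the negated log-log slope

# Synthetic runtimes (minutes) following an exact inverse power law
nodes = [1, 2, 5, 10, 20]
times = [53.0 * n ** -0.95 for n in nodes]
a, b = fit_inverse_power(nodes, times)
```

    A b close to 1 indicates near-ideal parallel scaling (doubling the cluster nearly halves the runtime); b well below 1 signals growing per-node overhead from job splitting, upload, and aggregation.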

  4. Homogeneous buoyancy-generated turbulence

    NASA Technical Reports Server (NTRS)

    Batchelor, G. K.; Canuto, V. M.; Chasnov, J. R.

    1992-01-01

    Using a theoretical analysis of fundamental equations and a numerical simulation of the flow field, the statistically homogeneous motion that is generated by buoyancy forces after the creation of homogeneous random fluctuations in the density of infinite fluid at an initial instant is examined. It is shown that analytical results together with numerical results provide a comprehensive description of the 'birth, life, and death' of buoyancy-generated turbulence. Results of numerical simulations yielded the mean-square density and mean-square velocity fluctuations and the associated spectra as functions of time for various initial conditions, and the time required for the mean-square density fluctuation to fall to a specified small value was estimated.

  5. An Integrated Approach to Modeling Solar Electric Propulsion Vehicles During Long Duration, Near-Earth Orbit Transfers

    NASA Technical Reports Server (NTRS)

    Smith, David A.; Hojnicki, Jeffrey S.; Sjauw, Waldy K.

    2014-01-01

    Recent NASA interest in utilizing solar electric propulsion (SEP) technology to transfer payloads, e.g. from low-Earth orbit (LEO) to higher energy geostationary-Earth orbit (GEO) or to Earth escape, has necessitated the development of high fidelity SEP vehicle models and simulations. These models and simulations need to be capable of capturing vehicle dynamics and sub-system interactions experienced during the transfer trajectories which are typically accomplished with continuous-burn (potentially interrupted by solar eclipse), long duration "spiral out" maneuvers taking several months or more to complete. This paper presents details of an integrated simulation approach achieved by combining a high fidelity vehicle simulation code with a detailed solar array model. The combined simulation tool gives researchers the functionality to study the integrated effects of various vehicle sub-systems (e.g. vehicle guidance, navigation and control (GN&C), electric propulsion system (EP)) with time varying power production. Results from a simulation model of a vehicle with a 50 kW class SEP system using the integrated tool are presented and compared to the results from another simulation model employing a 50 kW end-of-life (EOL) fixed power level assumption. These models simulate a vehicle under three degree of freedom dynamics (i.e. translational dynamics only) and include the effects of a targeting guidance algorithm (providing a "near optimal" transfer) during a LEO to near Earth escape (C3 = -2.0 km^2/s^2) spiral trajectory. The presented results include the impact of the fully integrated, time-varying solar array model (e.g. cumulative array degradation from traversing the Van Allen belts, impact of solar eclipses on the vehicle and the related temperature responses in the solar arrays due to operating in the Earth's thermal environment, high fidelity array power module, etc.); these are used to assess the impact on vehicle performance (i.e. propellant consumption) and transit times.

  6. Antimicrobial Peptide Simulations and the Influence of Force Field on the Free Energy for Pore Formation in Lipid Bilayers.

    PubMed

    Bennett, W F Drew; Hong, Chun Kit; Wang, Yi; Tieleman, D Peter

    2016-09-13

    Due to antimicrobial resistance, the development of new drugs to combat bacterial and fungal infections is an important area of research. Nature uses short, charged, and amphipathic peptides for antimicrobial defense, many of which disrupt the lipid membrane in addition to other possible targets inside the cell. Computer simulations have revealed atomistic details for the interactions of antimicrobial peptides and cell-penetrating peptides with lipid bilayers. Strong interactions between the polar interface and the charged peptides can induce bilayer deformations, including membrane rupture and peptide stabilization of a hydrophilic pore. Here, we performed microsecond-long simulations of the antimicrobial peptide CM15 in a POPC bilayer expecting to observe pore formation (based on previous molecular dynamics simulations). We show that caution is needed when interpreting results of equilibrium peptide-membrane simulations, given that single trajectories can dwell in local energy minima for hundreds of nanoseconds to microseconds. While we did record significant membrane perturbations from the CM15 peptide, pores were not observed. We explain this discrepancy by computing the free energy for pore formation with different force fields. Our results show a large difference in the free energy barrier (ca. 40 kJ/mol) against pore formation predicted by the different force fields that would result in orders of magnitude differences in the simulation time required to observe spontaneous pore formation. This explains why previous simulations using the Berger lipid parameters reported pores induced by charged peptides, while with CHARMM-based models pores were not observed in our long time-scale simulations. We reconcile some of the differences in the distance dependent free energies by shifting the free energy profiles to account for thickness differences between force fields. The shifted curves show that all the models describe small defects in lipid bilayers in a consistent manner, suggesting a common physical basis.

  7. Evaluating Real-Time Platforms for Aircraft Prognostic Health Management Using Hardware-In-The-Loop

    DTIC Science & Technology

    2008-08-01

    obtained when using HIL and a simulated load. Initially, noticeable differences are seen when comparing the results from each real-time operating system. However...same model in native Simulink. These results show that each real-time operating system can be configured to accurately run transient Simulink

  8. Using Coupled Groundwater-Surface Water Models to Simulate Eco-Regional Differences in Climate Change Impacts on Hydrological Drought Regimes in British Columbia

    NASA Astrophysics Data System (ADS)

    Dierauer, J. R.; Allen, D. M.

    2016-12-01

    Climate change is expected to lead to an increase in extremes, including daily maximum temperatures, heat waves, and meteorological droughts, which will likely result in shifts in the hydrological drought regime (i.e. the frequency, timing, duration, and severity of drought events). While many studies have used hydrologic models to simulate climate change impacts on water resources, only a small portion of these studies have analyzed impacts on low flows and/or hydrological drought. This study is the first to use a fully coupled groundwater-surface water (gw-sw) model to study climate change impacts on hydrological drought. Generic catchment-scale gw-sw models were created for each of the six major eco-regions in British Columbia using the MIKE-SHE/MIKE-11 modelling code. Daily precipitation and temperature time series downscaled using bias-correction spatial disaggregation for the simulated period of 1950-2100 were obtained from the Pacific Climate Impacts Consortium (PCIC). Streamflow and groundwater drought events were identified from the simulated time series for each catchment model using a moving-window quantile threshold. The frequency, timing, duration, and severity of drought events were compared between the reference period (1961-2000) and two future time periods (2031-2060, 2071-2100). Results show how hydrological drought regimes across the different British Columbia eco-regions will be impacted by climate change.
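
    The threshold-based event extraction described above reduces to a run-length scan over the simulated series. A simplified sketch with a single fixed threshold (our own toy; the study's moving-window quantile approach would supply a different threshold for each day of year): an event is a maximal run of flows below the threshold, with severity measured as the cumulative deficit.

```python
def drought_events(flows, threshold):
    """Identify drought events: maximal runs where flow < threshold.
    Returns (start_index, duration, severity) tuples, where severity is
    the cumulative deficit below the threshold over the event."""
    events, start, deficit = [], None, 0.0
    for i, q in enumerate(flows):
        if q < threshold:
            if start is None:
                start, deficit = i, 0.0
            deficit += threshold - q
        elif start is not None:
            events.append((start, i - start, deficit))
            start = None
    if start is not None:                       # event running at series end
        events.append((start, len(flows) - start, deficit))
    return events
```

    The frequency, timing, duration, and severity statistics then come directly from these event tuples, and the reference-period vs future-period comparison becomes a comparison of the resulting event lists.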

  9. Controlling protein molecular dynamics: How to accelerate folding while preserving the native state

    NASA Astrophysics Data System (ADS)

    Jensen, Christian H.; Nerukh, Dmitry; Glen, Robert C.

    2008-12-01

    The dynamics of peptides and proteins generated by classical molecular dynamics (MD) is described by using a Markov model. The model is built by clustering the trajectory into conformational states and estimating transition probabilities between the states. Assuming that it is possible to influence the dynamics of the system by varying simulation parameters, we show how to use the Markov model to determine the parameter values that preserve the folded state of the protein and, at the same time, reduce the folding time in the simulation. We investigate this by applying the method to two systems. The first system is an imaginary peptide described by given transition probabilities with a total folding time of 1 μs. We find that only small changes in the transition probabilities are needed to accelerate (or decelerate) the folding. This implies that folding times for slowly folding peptides and proteins calculated using MD cannot be meaningfully compared to experimental results. The second system is a four residue peptide valine-proline-alanine-leucine in water. We control the dynamics of the transitions by varying the temperature and the atom masses. The simulation results show that it is possible to find the combinations of parameter values that accelerate the dynamics and at the same time preserve the native state of the peptide. A method for accelerating larger systems without performing simulations for the whole folding process is outlined.
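
    A toy version of the pipeline described above, with an invented three-state transition matrix (ours, not the paper's): the folding time is estimated as the mean first-passage time (MFPT) to the folded state, and a small change to one transition probability shifts it substantially.

```python
def mfpt(P, target, lag=1.0, iters=20000):
    """Mean first-passage times to `target` for a discrete-time Markov
    chain with row-stochastic transition matrix P and `lag` time per step.
    Fixed-point iteration of t_i = lag + sum_{j != target} P[i][j] * t_j."""
    n = len(P)
    t = [0.0] * n
    for _ in range(iters):
        t = [0.0 if i == target else
             lag + sum(P[i][j] * t[j] for j in range(n) if j != target)
             for i in range(n)]
    return t

# Hypothetical 3-state model: 0 = unfolded, 1 = intermediate, 2 = folded
P = [[0.90, 0.09, 0.01],
     [0.10, 0.88, 0.02],
     [0.00, 0.05, 0.95]]
fold_time = mfpt(P, target=2)[0]   # mean folding time from the unfolded state
```

    In this toy, doubling the intermediate-to-folded probability (0.02 to 0.04, with the self-transition reduced to keep the row stochastic) cuts the MFPT from 70 to 46 lag units, echoing the abstract's point that small transition-probability changes strongly accelerate or decelerate folding.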

  10. Micro-scale finite element modeling of ultrasound propagation in aluminum trabecular bone-mimicking phantoms: A comparison between numerical simulation and experimental results.

    PubMed

    Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S

    2016-05-01

    The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated to be implemented in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and the average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound with maximum errors of 20 m/s and 11 m/s, respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures.

  11. On the application of accelerated molecular dynamics to liquid water simulations.

    PubMed

    de Oliveira, César Augusto F; Hamelberg, Donald; McCammon, J Andrew

    2006-11-16

    Our group recently proposed a robust bias potential function that can be used in an efficient all-atom accelerated molecular dynamics (MD) approach to simulate the transition of high energy barriers without any advance knowledge of the potential-energy landscape. The main idea is to modify the potential-energy surface by adding a bias, or boost, potential in regions close to the local minima, such that all transition rates are increased. By applying the accelerated MD simulation method to liquid water, we observed that this new simulation technique accelerates the molecular motion without losing its microscopic structure and equilibrium properties. Our results showed that the application of a small boost energy on the potential-energy surface significantly reduces the statistical inefficiency of the simulation while keeping all the other calculated properties unchanged. On the other hand, although aggressive acceleration of the dynamics simulation greatly increases the self-diffusion coefficient of water molecules and dramatically reduces the correlation time of the simulation, configurations representative of the true structure of liquid water are poorly sampled. Our results also showed the strength and robustness of this simulation technique, which confirm this approach as a very useful and promising tool to extend the time scale of all-atom simulations of biological systems with explicit solvent models. However, we should keep in mind that there is a compromise between the strength of the boost applied in the simulation and the reproduction of the ensemble average properties.
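
    The boost construction described above has a compact closed form in the accelerated-MD literature (the form introduced by Hamelberg, Mongan, and McCammon; we sketch it from memory, so treat the exact expression as an assumption): below a threshold energy E the surface is raised by dV = (E - V)^2 / (alpha + E - V), which lifts basins while leaving regions above E untouched.

```python
def boosted_potential(v, e, alpha):
    """Accelerated-MD modified potential V*: where V < E, raise the surface
    by dV = (E - V)^2 / (alpha + E - V); where V >= E, leave it unchanged."""
    if v >= e:
        return v
    return v + (e - v) ** 2 / (alpha + e - v)

# A basin at V = 1 with threshold E = 2 and alpha = 1 is lifted to 1.5,
# halving the apparent barrier to a transition state sitting at V = 2.
lifted = boosted_potential(1.0, 2.0, 1.0)
```

    Smaller alpha flattens the surface more aggressively (as alpha approaches 0 the basins are lifted all the way to E), which corresponds to the "aggressive acceleration" regime where the abstract reports poor sampling of the true liquid structure.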

  12. Estimating the economic opportunity cost of water use with river basin simulators in a computationally efficient way

    NASA Astrophysics Data System (ADS)

    Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.

    2017-04-01

    The marginal opportunity cost of water refers to benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management as it can be used for better water allocation or better system operation, and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet such models' reliance on optimization makes it difficult for them to accurately represent the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding in the model a small quantity of water at the place and time where the opportunity cost should be computed, then running a simulation and comparing the difference in system benefits. The added system benefits per unit of water added to the system then provide an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. However, this method requires one simulation run per node and per time step, which is computationally demanding for large-scale systems and short time steps (e.g., a day or a week). Moreover, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time.
Both backward inductions only require linear operations, and the resulting algorithm tracks the maximal benefit that can be obtained by having an additional unit of water at any node in the network and at any date in time. Results 1) can be obtained from the results of a rule-based simulation using a single post-processing run, and 2) are exactly the (gross) benefit forgone by not allocating an additional unit of water to its most productive use. The proposed method is applied to London's water resource system to track the value of storage in the city's water supply reservoirs on the Thames River throughout a weekly 85-year simulation. Results, obtained in 0.4 seconds on a single processor, reflect the environmental cost of water shortage. This fast computation allows visualizing the seasonal variations of the opportunity cost depending on reservoir levels, demonstrating the potential of this approach for exploring water values and its variations using simulation models with multiple runs (e.g. of stochastically generated plausible future river inflows).
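The double backward induction described above can be sketched for a hypothetical single-reach network, with nodes ordered headwater to outlet so an extra unit of water at node i can also be used at any downstream node. The benefit data, storability flags, and carryover factor are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the double backward induction on a hypothetical single-reach
# network: nodes 0..n-1 are ordered headwater -> outlet, so an extra unit
# of water at node i can also be used at any node j > i. Benefit data,
# storability flags, and the carryover factor are illustrative only.

def opportunity_cost(marginal_benefit, storable, carryover=1.0):
    """marginal_benefit[t][i]: benefit of using +1 unit at node i, time t.
    storable[i]: True if node i can carry water to the next time step.
    Returns value[t][i]: max benefit obtainable from +1 unit at (i, t)."""
    T, n = len(marginal_benefit), len(marginal_benefit[0])
    value = [[0.0] * n for _ in range(T)]
    for t in range(T - 1, -1, -1):          # second induction: back in time
        for i in range(n - 1, -1, -1):      # first induction: outlet -> headwaters
            v = marginal_benefit[t][i]
            if i < n - 1:                   # the unit can flow downstream...
                v = max(v, value[t][i + 1])
            if storable[i] and t < T - 1:   # ...or be stored for later use
                v = max(v, carryover * value[t + 1][i])
            value[t][i] = v
    return value
```

Each (node, time) pair is visited once with constant work, which is why a single post-processing pass over a completed rule-based simulation suffices.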

  13. Heating, Hydrodynamics, and Radiation From a Laser Heated Non-LTE High-Z Target

    NASA Astrophysics Data System (ADS)

    Gray, William; Foord, M. E.; Schneider, M. B.; Barrios, M. A.; Brown, G. V.; Heeter, R. F.; Jarrott, L. C.; Liedahl, D. A.; Marley, E. V.; Mauche, C. W.; Widmann, K.

    2016-10-01

We present 2D R-z simulations that model the hydrodynamics and x-ray output of a laser-heated, tamped foil, using the rad-hydro code LASNEX. The foil consists of a thin (2400 Å) cylindrical disk of iron/vanadium/gold embedded in a thicker Be tamper. The simulations utilize a non-LTE detailed configuration accounting (DCA) model, which generates the emission spectra. Simulated pinhole images are compared with data, finding qualitative agreement with the time history of the face-on emission profiles and exhibiting an interesting reduction in emission size over a few-ns period. Furthermore, we find that the simulations recover burn-through times in both the target and the Be tamper similar to those measured by a time-dependent filtered x-ray detector (DANTE). Additional results and characterization of the experimental plasma will be presented. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  14. Simulation of air quality and operational cost to ventilate swine farrowing facilities in Midwest U.S. during winter

    PubMed Central

    Park, Jae Hong; Peters, Thomas M.; Altmaier, Ralph; Jones, Samuel M.; Gassman, Richard; Anthony, T. Renée

    2017-01-01

We have developed a time-dependent simulation model to estimate in-room concentrations of multiple contaminants [ammonia (NH3), carbon dioxide (CO2), carbon monoxide (CO), and dust] as a function of increased ventilation with filtered recirculation for swine farrowing facilities. Energy and mass balance equations were used to simulate indoor air quality (IAQ) and operational cost for a variety of ventilation conditions over a 3-month winter period for a facility located in the Midwest U.S., using simplified and real-time production parameters, and results were compared to field data. A revised model was improved by minimizing the sum of squared errors (SSE) between modeled and measured NH3 and CO2. After optimizing NH3 and CO2, other IAQ results from the simulation were compared to field measurements using linear regression. For NH3, the coefficient of determination (R2) between simulation results and field measurements improved from 0.02 with the original model to 0.37 with the new model. For CO2, the R2 between simulation results and field measurements was 0.49 with the new model. When the makeup air was matched to hallway-air CO2 concentrations (1,500 ppm), simulation results showed the smallest SSE. With the new model, the R2 values for the other contaminants were 0.34 for inhalable dust, 0.36 for respirable dust, and 0.26 for CO. Operation of the air cleaner decreased inhalable dust concentrations by 35% and respirable dust concentrations by 33%, while having no effect on NH3 or CO2, in agreement with field data, and increased operational cost by $860 (58%) for the three-month period. PMID:28775911
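The mass-balance idea behind such a model can be illustrated with a minimal well-mixed, single-contaminant sketch integrated with forward Euler. All parameter values (room volume, flows, generation rate, filter efficiency) are hypothetical, not the paper's calibrated inputs:

```python
# Minimal well-mixed mass balance for one in-room contaminant with
# outdoor ventilation plus filtered recirculation. Parameter values in
# the docstring and tests are illustrative, not the study's inputs.

def simulate_concentration(V, Q, G, C_in, Qr=0.0, eta=0.0,
                           C0=0.0, dt=60.0, steps=180):
    """V: room volume (m^3); Q: outdoor ventilation rate (m^3/s);
    G: in-room generation rate (mg/s); C_in: supply-air concentration
    (mg/m^3); Qr: recirculation flow (m^3/s); eta: filter efficiency.
    Returns the concentration time series (mg/m^3), forward-Euler."""
    C, series = C0, []
    for _ in range(steps):
        # V dC/dt = G + Q*C_in - Q*C - Qr*eta*C
        dC = (G + Q * C_in - Q * C - Qr * eta * C) / V
        C += dC * dt
        series.append(C)
    return series
```

At steady state the concentration approaches (G + Q·C_in) / (Q + Qr·eta), which shows directly how filtered recirculation lowers particulate levels without removing gases a filter does not capture.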

  15. Monte Carlo simulation of chemistry following radiolysis with TOPAS-nBio

    NASA Astrophysics Data System (ADS)

    Ramos-Méndez, J.; Perl, J.; Schuemann, J.; McNamara, A.; Paganetti, H.; Faddegon, B.

    2018-05-01

Simulation of water radiolysis and the subsequent chemistry provides important information on the effect of ionizing radiation on biological material. The Geant4 Monte Carlo toolkit has added chemical processes via the Geant4-DNA project. The TOPAS tool simplifies the modeling of complex radiotherapy applications with Geant4 without requiring advanced computational skills, extending the pool of users. Thus a new extension to TOPAS, TOPAS-nBio, is under development to facilitate the configuration of track-structure simulations as well as water radiolysis simulations with Geant4-DNA for radiobiological studies. In this work, radiolysis simulations were implemented in TOPAS-nBio. Users may now easily add chemical species and their reactions, and set parameters including branching ratios, dissociation schemes, diffusion coefficients, and reaction rates. In addition, parameters for the chemical stage were re-evaluated and updated from the Geant4-DNA defaults to improve the accuracy of chemical yields. Simulation results for the time-dependent and LET-dependent primary yields Gx (chemical species per 100 eV deposited) produced at neutral pH and 25 °C by short track segments of charged particles were compared to published measurements over the LET range 0.05–230 keV µm−1. The calculated Gx values for electrons satisfied the material balance equation within 0.3%, and similarly for protons, albeit with a long calculation time. A smaller geometry was used to speed up the proton and alpha simulations, with an acceptable difference of 1.3% in the balance equation. Available experimental time-dependent G-values agreed with simulated results within 7% ± 8% over the entire time range for one species, within 3% ± 4% over the full time range for another, and for H2O2 from 49% ± 7% at the earliest stages to 3% ± 12% at saturation. For the LET-dependent Gx, the mean ratios of simulation to experiment were 1.11 ± 0.98, 1.21 ± 1.11, 1.05 ± 0.52, 1.23 ± 0.59, and 1.49 ± 0.63 (1 standard deviation) for the five primary species, including H2 and H2O2. In conclusion, radiolysis and the subsequent chemistry with Geant4-DNA have been successfully incorporated in TOPAS-nBio, and results are in reasonable agreement with published measured and simulated data.

  16. A Numerical Study of the Thermal Characteristics of an Air Cavity Formed by Window Sashes in a Double Window

    NASA Astrophysics Data System (ADS)

    Kang, Jae-sik; Oh, Eun-Joo; Bae, Min-Jung; Song, Doo-Sam

    2017-12-01

Given that the Korean government is implementing an energy standards and labelling program for windows, window companies will be required to assign window ratings based on experimental results for their products. Because this adds to the cost and time of laboratory testing, a simulation system for the thermal performance of windows has been prepared to ease these burdens. In Korea, the thermal performance of a window is usually calculated with the WINDOW/THERM simulator, complying with ISO 15099. For a single window, the simulation results are similar to experimental results. A double window can be calculated with the same method, but the results for this type of window are unreliable: ISO 15099 does not provide a recommended method for calculating the thermal properties of the air cavity formed between the window sashes of a double window, which causes a difference between simulation and experimental results for the thermal performance of a double window. In this paper, the thermal properties of air cavities between window sashes in a double window are analyzed through computational fluid dynamics (CFD) simulations, and the results are compared to calculation results certified by ISO 15099. The surface temperature of the air cavity analyzed by CFD is compared to the experimental temperatures. These results show that an appropriate calculation method for the air cavity between window sashes in a double window should be established to obtain reliable thermal performance results for a double window.

  17. Analysis of dispatching rules in a stochastic dynamic job shop manufacturing system with sequence-dependent setup times

    NASA Astrophysics Data System (ADS)

    Sharma, Pankaj; Jain, Ajai

    2014-12-01

Stochastic dynamic job shop scheduling problems with sequence-dependent setup times are among the most difficult classes of scheduling problems. This paper assesses the performance of nine dispatching rules in such a shop from the viewpoint of makespan, mean flow time, maximum flow time, mean tardiness, maximum tardiness, number of tardy jobs, total setups, and mean setup time. A discrete event simulation model of a stochastic dynamic job shop manufacturing system is developed for this investigation, and nine dispatching rules identified from the literature are incorporated in it. The simulation experiments are conducted with a due date tightness factor of 3, shop utilization of 90%, and setup times shorter than processing times. Results indicate that the shortest setup time (SIMSET) rule provides the best performance on the mean flow time and number of tardy jobs measures, while the job with similar setup and modified earliest due date (JMEDD) rule provides the best performance on the makespan, maximum flow time, mean tardiness, maximum tardiness, total setups, and mean setup time measures.
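The SIMSET rule above can be sketched in a few lines. The job representation as (job_id, setup_class) pairs and the sequence-dependent setup-time matrix are hypothetical, not taken from the paper's model:

```python
# Sketch of the SIMSET (shortest setup time) dispatching rule with a
# hypothetical sequence-dependent setup-time matrix. Jobs are
# (job_id, setup_class) pairs; setup_time[a][b] is the time to change
# the machine from setup state a to setup state b.

def simset_pick(queue, current_setup, setup_time):
    """Pick the queued job reachable with the fastest setup change."""
    return min(queue, key=lambda job: setup_time[current_setup][job[1]])

def run_sequence(queue, start_setup, setup_time):
    """Dispatch every queued job with SIMSET; return (order, total setup)."""
    pending, setup, order, total = list(queue), start_setup, [], 0
    while pending:
        job = simset_pick(pending, setup, setup_time)
        pending.remove(job)
        total += setup_time[setup][job[1]]   # pay the changeover cost
        setup = job[1]                       # machine is now in the new state
        order.append(job[0])
    return order, total
```

A full study would embed this selection step inside a discrete event simulation with stochastic arrivals and due dates; the sketch only isolates the rule itself.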

  18. SU-E-J-66: Evaluation of a Real-Time Positioning Assistance Simulator System for Skull Radiography Using the Microsoft Kinect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurata, T; Ono, M; Kozono, K

    2014-06-01

Purpose: The purpose of this study is to investigate the feasibility of a low-cost, small-size positioning assistance simulator system for skull radiography using the Microsoft Kinect sensor. A conventional radiographic simulator system can only measure the three-dimensional coordinates of an x-ray tube using angle sensors; it cannot measure the movement of the subject. Therefore, in this study, we developed a real-time simulator system using the Microsoft Kinect to measure both the x-ray tube and the subject, and evaluated its accuracy and feasibility by comparing simulated and measured x-ray images. Methods: The system tracks a head phantom using Face Tracking, one of the functions of the Kinect. The relative relationship between the Kinect and the head phantom was measured, and the projection image was calculated with the ray casting method using three-dimensional CT head data (220 slices at 512 × 512 pixels). X-ray images were obtained with a computed radiography (CR) system. The simulated projection images were then compared with the measured x-ray images from 0 to 45 degrees at increments of 15 degrees by calculating the cross-correlation coefficient C. Results: The simulated projection images were calculated in nearly real time (within 1 second) using the Graphics Processing Unit (GPU). The cross-correlation coefficients C were 0.916, 0.909, 0.891, and 0.886 at 0, 15, 30, and 45 degrees, respectively, showing strong correlations between the simulated and measured images. Conclusion: This system can be used to perform head positioning more easily and accurately. It is expected to be useful for students learning radiographic techniques. Moreover, it could also be used to predict the actual x-ray image prior to x-ray exposure in clinical environments.
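The image-comparison step can be sketched as a standard normalized (Pearson-style) cross-correlation. This is an assumed formulation of the coefficient C, with images flattened to lists of pixel values:

```python
# Sketch of a normalized cross-correlation between two equally sized
# images, assumed here as the form of the coefficient C; images are
# flattened to plain lists of pixel values.
from math import sqrt

def cross_correlation(a, b):
    """Return the correlation of two pixel lists: +1 means identical up
    to brightness/contrast, -1 means contrast-inverted."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den
```

Because the measure is invariant to global brightness and contrast, it tolerates exposure differences between the simulated projection and the CR image.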

  19. Gray: a ray tracing-based Monte Carlo simulator for PET

    NASA Astrophysics Data System (ADS)

    Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.

    2018-05-01

Monte Carlo simulation software plays a critical role in PET system design, yet performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods from the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
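As a minimal illustration of the kind of ray-geometry query that ray-tracing photon transport relies on, here is a standard ray-sphere intersection. This is a generic sketch of the technique, not code from Gray:

```python
# Standard ray-sphere intersection: the basic geometric query a
# ray-tracing photon transport performs to find the next surface a
# photon crosses. Generic sketch; geometry values are illustrative.
from math import sqrt

def ray_sphere(origin, direction, center, radius):
    """Return the smallest positive distance t along the unit-length ray
    origin + t*direction to the sphere surface, or None if it misses."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c          # quadratic discriminant (a = 1)
    if disc < 0:
        return None                  # ray misses the sphere
    root = sqrt(disc)
    for t in ((-b - root) / 2.0, (-b + root) / 2.0):
        if t > 1e-9:                 # nearest hit in front of the origin
            return t
    return None
```

Closed-form intersections like this, combined with spatial acceleration structures from graphics, are what let a ray tracer step photons through complex scanner geometries far faster than generic voxelized tracking.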

  20. A path-level exact parallelization strategy for sequential simulation

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

Sequential simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or for classes defined by K thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelizing the SIS and SGS methods is presented: a first stage re-arranges the simulation path, followed by a second stage of parallel simulation of non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains with large numbers of categories and maximum kriging neighbours, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.
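The non-conflicting-node idea can be sketched with a simple greedy staging scheme over a 2-D path, assuming a purely radius-based conflict test. This is an illustration of the concept, not the paper's exact re-arrangement algorithm:

```python
# Illustrative greedy staging of a 2-D simulation path: each node is
# placed in the earliest stage such that every earlier path node whose
# search neighbourhood overlaps its own sits in a strictly earlier
# stage, so nodes within one stage can be simulated in parallel.
# The radius-based conflict test is a simplifying assumption.

def conflict(a, b, radius):
    """Two nodes conflict if their search neighbourhoods can overlap."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 <= (2 * radius) ** 2

def parallel_groups(path, radius):
    """Group path nodes into stages simulatable in parallel."""
    stage = []
    for i, node in enumerate(path):
        s = 0
        for j in range(i):                  # respect the original path order
            if conflict(node, path[j], radius):
                s = max(s, stage[j] + 1)    # must wait for conflicting node
        stage.append(s)
    groups = {}
    for i, s in enumerate(stage):
        groups.setdefault(s, []).append(path[i])
    return [groups[s] for s in sorted(groups)]
```

Because ordering constraints between conflicting nodes follow the original path, simulating each stage in parallel reproduces the sequential result under this conflict model.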
