[Effects of high +Gx during simulated spaceship emergency return on learning and memory in rats].
Xu, Zhi-peng; Sun, Xi-qing; Liu, Ting-song; Wu, Bin; Zhang, Shu; Wu, Ping
2005-02-01
To observe the effects of high +Gx during simulated spaceship emergency return on learning and memory in rats. Thirty-two male SD rats were randomly divided into a control group, a 7 d simulated weightlessness group, a +15 Gx/180 s group and a +15 Gx/180 s exposure after 7 d simulated weightlessness group, with 8 rats in each group. Changes in learning and memory were measured after the stresses by means of the Y-maze test and the step-through test. In the Y-maze test, compared with the control group, the percentage of correct reactions decreased significantly (P<0.01) and the reaction time increased significantly (P<0.01) in the hypergravity after simulated weightlessness group at all time points after stress; compared with the +15 Gx group or the simulated weightlessness group, the percentage of correct reactions decreased significantly (P<0.05) and the reaction time increased significantly (P<0.05) immediately after stress. In the step-through test, compared with the control group, total time increased significantly (P<0.01) in the hypergravity after simulated weightlessness group at 1 d after stress; latent time decreased significantly (P<0.01) and the number of errors increased significantly (P<0.01) at all time points after stress. Compared with the +15 Gx group, total time increased significantly (P<0.05) immediately and at 1 d after stress. Compared with the simulated weightlessness group, total time and the number of errors increased significantly (P<0.05) immediately after stress. These results suggest that +15 Gx/180 s and simulated weightlessness may affect learning and memory ability in rats, and that 7 d of simulated weightlessness can aggravate the effect of +Gx on learning and memory ability in rats.
Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson
2008-01-01
We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...
Bonjour, Timothy J; Charny, Grigory; Thaxton, Robert E
2016-11-01
Rapid, effective trauma resuscitations (TRs) decrease patient morbidity and mortality. Few studies have evaluated TR care times. Effective time goals and superior human patient simulator (HPS) training can improve patient survivability. The purpose of this study was to compare live TR with HPS resuscitation times to determine mean incremental resuscitation times and ascertain whether simulation was educationally equivalent. The study was conducted at San Antonio Military Medical Center, a Department of Defense Level I trauma center. This was a prospective observational study measuring incremental step times by trauma teams during trauma and simulation patient resuscitations. The trauma and simulation patient arms included 60 patients, a number chosen for statistical significance. Participants included Emergency Medicine residents and Physician Assistant residents serving as the trauma team leader. The trauma patient arm revealed a mean evaluation time of 10:33 and the simulation arm 10:23. Comparable times were seen in the airway, intravenous access, blood sample collection, and blood pressure data subsets. TR mean times were similar to the HPS arm subsets, demonstrating that simulation is an effective educational tool. Effective stepwise approaches, incremental time goals, and superior HPS training can improve patient survivability and departmental productivity using TR teams. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
Numerical Study of a High Head Francis Turbine with Measurements from the Francis-99 Project
NASA Astrophysics Data System (ADS)
Wallimann, H.; Neubauer, R.
2015-01-01
For the Francis-99 project initiated by the Norwegian University of Science and Technology (NTNU, Norway) and the Luleå University of Technology (LTU, Sweden), numerical flow simulation has been performed and the results compared to experimentally obtained data. The full machine, including spiral casing, stay vanes, guide vanes, runner and draft tube, was simulated transiently for three operating points defined by the Francis-99 organisers. Two sets of results were created with differing time steps. Additionally, a reduced domain was simulated in a stationary manner to create a complete cut along constant prototype head and constant prototype discharge. The efficiency values and the shape of the curves have been investigated and compared to the experimental data. Special attention has been given to rotor-stator interaction (RSI). Signals from several probes and their counterparts in the simulation have been processed to evaluate the pressure fluctuations occurring due to the RSI. The direct comparison of the hydraulic efficiency obtained by the full machine simulation with the experimental data showed no improvement when using a 1° time step compared to a coarser 2° time step. At the BEP the 2° time step even showed a slightly better result, with an absolute deviation of 1.08% compared with 1.24% for the 1° time step. At the other two operating points the simulation results were practically identical but fell short of predicting the measured values. The RSI evaluation was done using the results of the 2° time step simulation, which proved to be an adequate setting to reproduce pressure signals with peaks at the correct frequencies. The simulation results showed the highest amplitudes in the vaneless space at the BEP operating point, at a location different from the available probe measurements. This implies that not only the radial distance but also the shape of the vaneless space influences the RSI.
Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity
Gordiz, Kiarash; Singh, David J.; Henry, Asegun
2015-01-29
In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.
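The difference between the two sampling strategies can be illustrated with a minimal sketch (not from the paper): a Green-Kubo-style correlation integral is estimated either from one long trajectory (time sampling) or by averaging many short, independent trajectories (ensemble sampling). The AR(1) stand-in for a heat-flux signal and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_series(n, phi=0.9):
    """Synthetic stand-in for a heat-flux component: an AR(1) process."""
    x = np.empty(n)
    x[0] = rng.normal()
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

def autocorr_integral(x, max_lag=200):
    """Crude Green-Kubo-style integral of the autocorrelation function (unit time step)."""
    x = x - x.mean()
    acf = []
    for lag in range(max_lag):
        head = x if lag == 0 else x[:-lag]
        acf.append(np.mean(head * x[lag:]))
    return float(np.sum(acf))

# Time sampling: one long trajectory, correlations averaged over time origins.
k_time = autocorr_integral(ar1_series(200_000))

# Ensemble sampling: many short, independent trajectories averaged together.
# Because the members are independent, they could be generated in parallel.
k_ens = np.mean([autocorr_integral(ar1_series(10_000)) for _ in range(20)])

print(f"time-sampled estimate:     {k_time:.2f}")
print(f"ensemble-sampled estimate: {k_ens:.2f}")
```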
Chung, Tae Nyoung; Kim, Sun Wook; You, Je Sung; Chung, Hyun Soo
2016-01-01
Objective Tube thoracostomy (TT) is a commonly performed intensive care procedure. Simulator training may be a good alternative method for TT training, compared with conventional methods such as apprenticeship and animal skills laboratories. However, there is insufficient evidence supporting use of a simulator. The aim of this study is to determine whether training with a medical simulator is associated with a faster TT process, compared to conventional training without a simulator. Methods This is a simulation study. Eligible participants were emergency medicine residents with minimal TT experience (≤3 procedures). Participants were randomized to two groups: the conventional training group and the simulator training group. While the simulator training group used the simulator to practice TT, the conventional training group watched the instructor performing TT on a cadaver. After training, all participants performed a TT on a cadaver. Performance quality was measured as correct placement and time delay, and subjects were graded on any difficulty they had with the process. Results Estimated median procedure time was 228 seconds in the conventional training group and 75 seconds in the simulator training group, a statistically significant difference (P=0.040). The difficulty grading did not show any significant difference between groups (overall performance scale, 2 vs. 3; P=0.094). Conclusion Tube thoracostomy training with a medical simulator, when compared to training without a simulator, is associated with a significantly faster procedure when performed on a human cadaver. PMID:27752610
The viability of ADVANTG deterministic method for synthetic radiography generation
NASA Astrophysics Data System (ADS)
Bingham, Andrew; Lee, Hyoung K.
2018-07-01
Fast simulation techniques to generate synthetic radiographic images of high resolution are helpful when new radiation imaging systems are designed. However, the standard stochastic approach requires lengthy run times, with poorer statistics at higher resolution. The viability of a deterministic approach to synthetic radiography image generation was therefore explored, with the aim of quantifying the computational time decrease over the stochastic method. ADVANTG was compared to MCNP in multiple scenarios, including a small radiography system prototype, to simulate high resolution radiography images. By using the ADVANTG deterministic code to simulate radiography images, the computational time was found to decrease by a factor of 10 to 13 compared to the MCNP stochastic approach while retaining image quality.
Reducing EnergyPlus Run Time For Code Compliance Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.
2014-09-12
Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
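As a rough illustration of the shortened-run-period idea (not the paper's exact compliance workflow), results from one simulated week per quarter can be scaled up to an annual figure; the weekly values and the simple week-count weighting below are assumptions.

```python
# Hypothetical EnergyPlus results: site energy for one representative week per quarter (kWh).
weekly_kwh = {"Q1": 1210.0, "Q2": 980.0, "Q3": 1485.0, "Q4": 1120.0}

# Each simulated week stands in for one quarter of the year (52 / 4 = 13 weeks).
weeks_per_quarter = 52.0 / 4.0
annual_estimate = sum(v * weeks_per_quarter for v in weekly_kwh.values())

print(f"estimated annual consumption: {annual_estimate:.0f} kWh")
```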
Real-time simulation of the TF30-P-3 turbofan engine using a hybrid computer
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Bruton, W. M.
1974-01-01
A real-time, hybrid-computer simulation of the TF30-P-3 turbofan engine was developed. The simulation was primarily analog in nature but used the digital portion of the hybrid computer to perform bivariate function generation associated with the performance of the engine's rotating components. FORTRAN listings and analog patching diagrams are provided. The hybrid simulation was controlled by a digital computer programmed to simulate the engine's standard hydromechanical control. Both steady-state and dynamic data obtained from the digitally controlled engine simulation are presented. Hybrid simulation data are compared with data obtained from a digital simulation provided by the engine manufacturer. The comparisons indicate that the real-time hybrid simulation adequately matches the baseline digital simulation.
Helicopter time-domain electromagnetic numerical simulation based on Leapfrog ADI-FDTD
NASA Astrophysics Data System (ADS)
Guan, S.; Ji, Y.; Li, D.; Wu, Y.; Wang, A.
2017-12-01
We present a three-dimensional (3D) leapfrog Alternating Direction Implicit Finite-Difference Time-Domain (leapfrog ADI-FDTD) method for the simulation of helicopter time-domain electromagnetic (HTEM) detection. This method differs from both the traditional explicit FDTD and the ADI-FDTD. Compared with explicit FDTD, the leapfrog ADI-FDTD algorithm is no longer limited by the Courant-Friedrichs-Lewy (CFL) condition, so a longer time step can be used. Compared with ADI-FDTD, the number of equations is reduced from 12 to 6, and the leapfrog ADI-FDTD method is easier to apply in general simulation. First, we determine the initial conditions, which are adopted from the existing method presented by Wang and Tripp (1993). Second, we derive the Maxwell equations in a new finite-difference form using the leapfrog ADI-FDTD method; the purpose is to eliminate the sub-time step while retaining unconditional stability. Third, we add the convolutional perfectly matched layer (CPML) absorbing boundary condition to the leapfrog ADI-FDTD simulation and study the absorbing effect of different parameters, since different absorbing parameters affect the absorbing ability; suitable parameters were found after many numerical experiments. Fourth, we compare the response with a 1-D numerical result for a homogeneous half-space to verify the correctness of our algorithm. For a model containing 107×107×53 grid points with a conductivity of 0.05 S/m, the results show that leapfrog ADI-FDTD needs less simulation time and computer storage space than ADI-FDTD: the computation time decreases by nearly a factor of four and memory occupation decreases by about 32.53%. Thus, this algorithm is more efficient than the conventional ADI-FDTD method for HTEM detection, and is more precise than explicit FDTD at late times.
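For reference, the explicit-FDTD time-step limit alluded to above is the standard Courant-Friedrichs-Lewy bound for a 3-D grid (textbook form, not taken from the paper):

```latex
\Delta t \le \frac{1}{c\,\sqrt{\dfrac{1}{\Delta x^{2}} + \dfrac{1}{\Delta y^{2}} + \dfrac{1}{\Delta z^{2}}}}
```

An unconditionally stable scheme such as leapfrog ADI-FDTD is not subject to this bound, which is what allows the longer time steps mentioned above.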
Comparison of Errors Using Two Length-Based Tape Systems for Prehospital Care in Children.
Rappaport, Lara D; Brou, Lina; Givens, Tim; Mandt, Maria; Balakas, Ashley; Roswell, Kelley; Kotas, Jason; Adelgais, Kathleen M
2016-01-01
The use of a length/weight-based tape (LBT) for equipment size and drug dosing for pediatric patients is recommended in a joint statement by multiple national organizations. A new system, known as Handtevy™, allows for rapid determination of critical drug doses without performing calculations. The objective was to compare two LBT systems for dosing errors and time to medication administration in simulated prehospital scenarios. This was a prospective randomized trial comparing the Broselow Pediatric Emergency Tape™ (Broselow) and the Handtevy LBT™ (Handtevy). Paramedics performed 2 pediatric simulations: cardiac arrest with epinephrine administration and hypoglycemia mandating dextrose. Each scenario was repeated using both systems with a 1-year-old and a 5-year-old size manikin. Facilitators recorded identified errors and time points of critical actions, including time to medication. We enrolled 80 paramedics, performing 320 simulations. For dextrose, there were significantly more errors with Broselow (63.8%) compared to Handtevy (13.8%), and time to administration was longer with the Broselow system (220 seconds vs. 173 seconds). For epinephrine, the LBTs were similar in overall error rate (Broselow 21.3% vs. Handtevy 16.3%) and time to administration (89 vs. 91 seconds). Cognitive errors were more frequent when using the Broselow compared to the Handtevy, particularly with dextrose administration. The frequency of procedural errors was similar between the two LBT systems. In simulated prehospital scenarios, use of the Handtevy LBT system resulted in fewer errors for dextrose administration compared to the Broselow LBT, with similar time to administration and accuracy of epinephrine administration.
Mesoscale Simulation Data for Initializing Fast-Time Wake Transport and Decay Models
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.; Vanvalkenburg, Randal L.; Pruis, Mathew J.; LimonDuparcmeur, Fanny M.
2012-01-01
The fast-time wake transport and decay models require vertical profiles of crosswinds, potential temperature, and the eddy dissipation rate as initial conditions. These inputs are normally obtained from various field sensors. In data-denied scenarios or operational use, these initial conditions can instead be provided by mesoscale model simulations. In this study, the vertical profiles of potential temperature from a mesoscale model were used as initial conditions for the fast-time wake models. The mesoscale model simulations were compared against available observations, and the wake model predictions were compared with lidar measurements from three wake vortex field experiments.
Real-time simulations for automated rendezvous and capture
NASA Technical Reports Server (NTRS)
Cuseo, John A.
1991-01-01
Although the individual technologies for automated rendezvous and capture (AR&C) exist, they have not yet been integrated to produce a working system in the United States. Thus, real-time integrated systems simulations are critical to the development and pre-flight demonstration of an AR&C capability. Real-time simulations require a level of development more typical of a flight system compared to purely analytical methods, thus providing confidence in derived design concepts. This presentation will describe Martin Marietta's Space Operations Simulation (SOS) Laboratory, a state-of-the-art real-time simulation facility for AR&C, along with an implementation for the Satellite Servicer System (SSS) Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jalan Ganesha 10 Bandung, 40132
Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of image quality resulting from simulations on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly faster than on the CPU. Simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with photon histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed with a statistical approach, the quality of the GPU and CPU images is essentially the same.
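The photon-per-core parallelism described above works because photon histories are statistically independent. The toy Monte Carlo below (a minimal sketch, not MC-GPU code) estimates photon transmission through a uniform slab and farms the independent histories out to CPU workers; the attenuation coefficient and slab thickness are assumed values.

```python
import numpy as np
from multiprocessing import Pool

MU = 0.2          # assumed linear attenuation coefficient, 1/cm
THICKNESS = 5.0   # assumed slab thickness, cm

def photon_transmitted(seed):
    """One independent photon history: exponential free path vs. slab thickness."""
    rng = np.random.default_rng(seed)
    free_path = -np.log(rng.random()) / MU
    return free_path > THICKNESS

if __name__ == "__main__":
    n = 50_000
    # Each history is independent, so the work is embarrassingly parallel;
    # the same property lets a GPU code assign one photon per core.
    with Pool() as pool:
        transmitted = sum(pool.map(photon_transmitted, range(n), chunksize=1000))
    print(f"Monte Carlo transmission: {transmitted / n:.4f}")
    print(f"analytic exp(-mu*t):      {np.exp(-MU * THICKNESS):.4f}")
```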
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
NASA Astrophysics Data System (ADS)
Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu
2011-07-01
The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution of the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with a vacuum boundary condition. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort, and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
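For context, the one-group, time-independent discrete-ordinates (Sn) equation that a sweep code of this kind solves can be written in the standard textbook form (isotropic scattering assumed here):

```latex
\hat{\Omega}_m \cdot \nabla \psi_m(\mathbf{r}) + \sigma_t(\mathbf{r})\,\psi_m(\mathbf{r})
  = \frac{\sigma_s(\mathbf{r})}{4\pi}\sum_{m'=1}^{M} w_{m'}\,\psi_{m'}(\mathbf{r}) + q(\mathbf{r}),
  \qquad m = 1,\dots,M
```

In source iteration the scattering sum on the right-hand side is evaluated from the previous iterate, and the spatial grid is then swept once per ordinate; those sweeps dominate the computational cost.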
Framework for modeling urban restoration resilience time in the aftermath of an extreme event
Ramachandran, Varun; Long, Suzanna K.; Shoberg, Thomas G.; Corns, Steven; Carlo, Héctor
2015-01-01
The impacts of extreme events continue long after the emergency response has terminated. Effective reconstruction of supply-chain strategic infrastructure (SCSI) elements is essential for postevent recovery and the reconnectivity of a region with the outside. This study uses an interdisciplinary approach to develop a comprehensive framework to model resilience time. The framework is tested by comparing resilience time results for a simulated EF-5 tornado with ground truth data from the tornado that devastated Joplin, Missouri, on May 22, 2011. Data for the simulated tornado were derived for Overland Park, Johnson County, Kansas, in the greater Kansas City, Missouri, area. Given the simulated tornado, a combinatorial graph considering the damages in terms of interconnectivity between different SCSI elements is derived. Reconstruction in the aftermath of the simulated tornado is optimized using the proposed framework to promote a rapid recovery of the SCSI. This research shows promising results when compared with the independent quantifiable data obtained from Joplin, Missouri, returning a resilience time of 22 days compared with 25 days reported by city and state officials.
NASA Astrophysics Data System (ADS)
Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan; Medeiros, Lia; Özel, Feryal; Psaltis, Dimitrios
2016-12-01
The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.
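One simple way to make the comparison variability-aware, in the spirit of the method described (a toy sketch, not the authors' pipeline), is to average a per-snapshot likelihood over simulation frames before forming the posterior; the mock data, noise level, and one-parameter "model" below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

sigma = 0.05                                   # assumed measurement noise
data = 1.0 + sigma * rng.normal(size=8)        # mock observed amplitudes

def model_snapshots(theta, n_frames=50):
    """Hypothetical time-variable model: frames fluctuate around a mean level theta."""
    return theta * (1.0 + 0.1 * rng.normal(size=(n_frames, data.size)))

def log_likelihood(theta):
    """Average the per-frame Gaussian likelihood over snapshots (variability-aware)."""
    frames = model_snapshots(theta)
    chi2 = np.sum((data - frames) ** 2, axis=1) / sigma**2
    return np.log(np.mean(np.exp(-0.5 * chi2)) + 1e-300)

thetas = np.linspace(0.8, 1.2, 81)
logL = np.array([log_likelihood(t) for t in thetas])
post = np.exp(logL - logL.max())               # flat prior assumed
post /= np.trapz(post, thetas)
print("posterior mean of theta:", np.trapz(thetas * post, thetas))
```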
Fu, Shangxi; Liu, Xiao; Zhou, Li; Zhou, Meisheng; Wang, Liming
2017-08-01
The purpose of this study was to estimate, with relatively objective measures, the effect of a simulated laparoscopic surgery training course on the laparoscopic operation skills of resident standardized trainees, using data gained before and after the practice course on a laparoscopic simulator. Experiment 1: 20 resident standardized trainees with no experience in laparoscopic surgery were included in the inexperienced group and completed a simulated cholecystectomy according to simulator videos. Simulator data were collected (total operation time, path length, average speed of instrument movement, movement efficiency, number of perforations, the time cautery was applied without appropriate contact with adhesions, and number of serious complications). Ten attending doctors were included in the experienced group and performed the simulated cholecystectomy directly; data were collected with the simulator, and the data of the two groups were compared. Experiment 2: participants in the inexperienced group were assigned to a basic group (receiving 8 items of basic operation training) and a special group (receiving 8 items of basic operation training and 4 items of specialized training), with 10 persons in each group. They received the training courses designed by us. After the training level had reached the expected target, simulated cholecystectomy was performed and data were collected. Data were compared between the basic and special groups, and then between the special and experienced groups. The results of experiment 1 showed a significant difference between the inexperienced group, in which participants performed the simulated cholecystectomy only according to the instructors' teaching and operation videos, and the experienced group. The results of experiment 2 suggested that total operation time, number of perforations, number of serious complications, number of non-cauterized bleeding events, and the time cautery was applied without appropriate contact with adhesions in the special group were all superior to those in the basic group; there was no statistical difference in the other data between the special and basic groups. Comparing the special group with the experienced group, total operation time and the time cautery was applied without appropriate contact with adhesions were superior in the experienced group; there was no statistical difference in the other data between the special and experienced groups. Laparoscopic simulators are effective for surgical skills training. Basic courses mainly improve the operator's hand-eye coordination and perception of instrument insertion depth. Specialized training courses not only improve the operator's familiarity with the surgery, but also reduce operation time and risk and improve safety.
The effect of thermal velocities on structure formation in N-body simulations of warm dark matter
NASA Astrophysics Data System (ADS)
Leo, Matteo; Baugh, Carlton M.; Li, Baojiu; Pascoli, Silvia
2017-11-01
We investigate the impact of thermal velocities in N-body simulations of structure formation in warm dark matter models. Adopting the commonly used approach of adding thermal velocities, randomly selected from a Fermi-Dirac distribution, to the gravitationally-induced velocities of the simulation particles, we compare the matter and velocity power spectra measured from CDM and WDM simulations, in the latter case with and without thermal velocities. This prescription for adding thermal velocities introduces numerical noise into the initial conditions, which influences structure formation. At early times, the noise dramatically affects the power spectra measured from simulations with thermal velocities, with deviations of the order of ~O(10) (in the matter power spectra) and of the order of ~O(10^2) (in the velocity power spectra) compared to those extracted from simulations without thermal velocities. At late times, these effects are less pronounced, with deviations of less than a few percent. Increasing the resolution of the N-body simulation shifts these discrepancies to higher wavenumbers. We also find that spurious haloes start to appear in simulations which include thermal velocities at a mass that is ~3 times larger than in simulations without thermal velocities.
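The prescription being examined, adding a thermal kick drawn from a Fermi-Dirac distribution to each particle's gravitationally induced velocity, can be sketched as follows (illustrative only: the velocity scale, momentum cutoff, and placeholder bulk velocities are assumptions, not values from the simulations).

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_fd_speeds(n, v0=20.0, xmax=20.0):
    """Rejection-sample x = p/T from f(x) ~ x^2 / (exp(x) + 1); speed = v0 * x."""
    out = np.empty(n)
    fmax = 0.7                      # safe upper bound of x^2/(exp(x)+1) on (0, xmax)
    filled = 0
    while filled < n:
        x = rng.uniform(0.0, xmax, size=n)
        y = rng.uniform(0.0, fmax, size=n)
        accepted = x[y < x**2 / (np.exp(x) + 1.0)]
        take = min(n - filled, accepted.size)
        out[filled:filled + take] = accepted[:take]
        filled += take
    return v0 * out

def add_thermal_velocities(vel, v0=20.0):
    """Add an isotropic Fermi-Dirac thermal kick to gravitationally induced velocities."""
    n = vel.shape[0]
    speed = sample_fd_speeds(n, v0=v0)
    mu = rng.uniform(-1.0, 1.0, n)              # random isotropic directions
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - mu**2)
    kick = speed[:, None] * np.column_stack((sin_t * np.cos(phi), sin_t * np.sin(phi), mu))
    return vel + kick

bulk = rng.normal(scale=50.0, size=(1000, 3))   # placeholder gravitational velocities, km/s
perturbed = add_thermal_velocities(bulk, v0=20.0)
print(perturbed.shape, perturbed.std())
```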
Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production
NASA Astrophysics Data System (ADS)
Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne
2018-05-01
A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses of high energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes, using the Bethe-Heitler formula, the Tsai formula and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time using this fast simulation method is 104 times shorter than that of the full GEANT4 simulation.
NASA Technical Reports Server (NTRS)
Lee, Sangsan; Lele, Sanjiva K.; Moin, Parviz
1992-01-01
For the numerical simulation of inhomogeneous turbulent flows, a method is developed for generating stochastic inflow boundary conditions with a prescribed power spectrum. Turbulence statistics from spatial simulations using this method with a low fluctuation Mach number are in excellent agreement with the experimental data, which validates the procedure. Turbulence statistics from spatial simulations are also compared to those from temporal simulations using Taylor's hypothesis. Statistics such as turbulence intensity, vorticity, and velocity derivative skewness compare favorably with the temporal simulation. However, the statistics of dilatation show a significant departure from those obtained in the temporal simulation. To directly check the applicability of Taylor's hypothesis, space-time correlations of fluctuations in velocity, vorticity, and dilatation are investigated. Convection velocities based on vorticity and velocity fluctuations are computed as functions of the spatial and temporal separations. The profile of the space-time correlation of dilatation fluctuations is explained via a wave propagation model.
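A common way to generate a signal with a prescribed power spectrum, in the spirit of the inflow-generation method described, is the random-phase inverse-FFT construction; the 1-D sketch below is illustrative only, and the von Karman-like target spectrum and its parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def synth_signal(n, dt, spectrum):
    """Real-valued signal whose power spectrum follows spectrum(f): random-phase inverse FFT."""
    freqs = np.fft.rfftfreq(n, dt)
    amplitude = np.sqrt(spectrum(freqs))
    amplitude[0] = 0.0                          # enforce zero mean
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    return np.fft.irfft(amplitude * np.exp(1j * phases), n)

# Hypothetical von Karman-like target spectrum (parameters are assumptions).
def target_spectrum(f):
    return 1.0 / (1.0 + (f / 5.0) ** 2) ** (5.0 / 6.0)

u_inflow = synth_signal(4096, 1.0e-3, target_spectrum)
print(f"mean = {u_inflow.mean():.3e}, rms = {u_inflow.std():.3e}")
```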
Stiegler, Marjorie; Hobbs, Gene; Martinelli, Susan M; Zvara, David; Arora, Harendra; Chen, Fei
2018-01-01
Background Simulation is an effective method for creating objective summative assessments of resident trainees. Real-time assessment (RTA) in simulated patient care environments is logistically challenging, especially when evaluating a large group of residents in multiple simulation scenarios. To date, there is very little data comparing RTA with delayed (hours, days, or weeks later) video-based assessment (DA) for simulation-based assessments of Accreditation Council for Graduate Medical Education (ACGME) sub-competency milestones. We hypothesized that sub-competency milestone evaluation scores obtained from DA, via audio-video recordings, are equivalent to the scores obtained from RTA. Methods Forty-one anesthesiology residents were evaluated in three separate simulated scenarios, representing different ACGME sub-competency milestones. All scenarios had one faculty member perform RTA and two additional faculty members perform DA. Subsequently, the scores generated by RTA were compared with the average scores generated by DA. Variance component analysis was conducted to assess the amount of variation in scores attributable to residents and raters. Results Paired t-tests showed no significant difference in scores between RTA and averaged DA for all cases. Cases 1, 2, and 3 showed an intraclass correlation coefficient (ICC) of 0.67, 0.85, and 0.50 for agreement between RTA scores and averaged DA scores, respectively. Analysis of variance of the scores assigned by the three raters showed a small proportion of variance attributable to raters (4% to 15%). Conclusions The results demonstrate that video-based delayed assessment is as reliable as real-time assessment, as both assessment methods yielded comparable scores. Based on a department’s needs or logistical constraints, our findings support the use of either real-time or delayed video evaluation for assessing milestones in a simulated patient care environment. PMID:29736352
Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner
NASA Astrophysics Data System (ADS)
Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.
2015-02-01
Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.
Capabilities of stochastic rainfall models as data providers for urban hydrology
NASA Astrophysics Data System (ADS)
Haberlandt, Uwe
2017-04-01
For the planning of urban drainage systems using hydrological models, long, continuous precipitation series with high temporal resolution are needed. Since observed time series are often too short or not available everywhere, the use of synthetic precipitation is a common alternative. This contribution compares three precipitation models regarding their suitability to provide 5-minute continuous rainfall time series for a) sizing of drainage networks for urban flood protection and b) dimensioning of combined sewage systems for pollution reduction. The rainfall models are a parametric stochastic model (Haberlandt et al., 2008), a non-parametric probabilistic approach (Bárdossy, 1998) and a stochastic downscaling of dynamically simulated rainfall (Berg et al., 2013); all models are operated both as single-site and multi-site generators. The models are applied with regionalised parameters, assuming that there is no station at the target location. Rainfall and discharge characteristics are utilised for evaluation of the model performance. The simulation results are compared against results obtained from reference rainfall stations not used for parameter estimation. The rainfall simulations are carried out for the federal states of Baden-Württemberg and Lower Saxony in Germany and the discharge simulations for the drainage networks of the cities of Hamburg, Brunswick and Freiburg. Altogether, the results show comparable simulation performance for the three models, with good capabilities for single-site simulations but low skill for multi-site simulations. Remarkably, there is no significant difference in simulation performance between the flood protection and pollution reduction tasks, so the models are able to simulate both the extremes and the long-term characteristics of rainfall equally well. Bárdossy, A., 1998. Generating precipitation time series using simulated annealing. Wat. Resour. Res., 34(7): 1737-1744. Berg, P., Wagner, S., Kunstmann, H., Schädler, G., 2013. High resolution regional climate model simulations for Germany: part I — validation. Climate Dynamics, 40(1): 401-414. Haberlandt, U., Ebner von Eschenbach, A.-D., Buchwald, I., 2008. A space-time hybrid hourly rainfall model for derived flood frequency analysis. Hydrol. Earth Syst. Sci., 12: 1353-1367.
Pan, Jui-Wen; Tsai, Pei-Jung; Chang, Kao-Der; Chang, Yung-Yuan
2013-03-01
In this paper, we propose a method to analyze the light extraction efficiency (LEE) enhancement of a nanopatterned sapphire substrate (NPSS) light-emitting diode (LED) by comparing wave optics software with ray optics software. Finite-difference time-domain (FDTD) simulations represent the wave optics software and Light Tools (LTs) simulations represent the ray optics software. First, we find the trends of, and an optimal solution for, the LEE enhancement using 2D-FDTD simulations to save on simulation time and computational memory. The rigorous coupled-wave analysis method is utilized to explain the trend obtained from the 2D-FDTD algorithm. The optimal solution is then applied in 3D-FDTD and LTs simulations. The results are similar, and the difference in LEE enhancement between the two simulations does not exceed 8.5% for the small LED chip area. More than 10^4 times computational memory is saved in the LTs simulation in comparison to the 3D-FDTD simulation. Moreover, LEE enhancement from the side of the LED can be obtained in the LTs simulation. An actual-size NPSS LED is simulated using LTs. The results show a more than 307% improvement in the total LEE enhancement of the NPSS LED with the optimal solution compared to the conventional LED.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Chih-Chieh; Lin, Hsin-Hon; Lin, Chang-Shiun
Multiple-photon emitters, such as In-111 or Se-75, have enormous potential in the field of nuclear medicine imaging. For example, Se-75 can be used to investigate bile acid malabsorption and measure bile acid pool loss. The simulation system for emission tomography (SimSET) is a well-known Monte Carlo simulation (MCS) code in nuclear medicine, valued for its high computational efficiency. However, the current SimSET cannot simulate these isotopes due to the lack of modeling of complex decay schemes and the time-dependent decay process. To extend the versatility of SimSET for simulation of those multi-photon emission isotopes, a time-resolved multiple photon history generator based on the SimSET code is developed in the present study. For developing the time-resolved SimSET (trSimSET) with a radionuclide decay process, the new MCS model introduces new features, including decay time information and photon time-of-flight information, into the code. The half-lives of energy states were tabulated from the Evaluated Nuclear Structure Data File (ENSDF) database. The MCS results indicate that the overall percent difference is less than 8.5% for all simulation trials as compared to GATE. To sum up, we demonstrated that the time-resolved SimSET multiple photon history generator can have accuracy comparable to GATE while retaining better computational efficiency. The new MCS code is very useful for studying multi-photon imaging of novel isotopes that requires the simulation of lifetimes and time-of-flight measurements. (authors)
NASA Astrophysics Data System (ADS)
Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong
2016-11-01
To satisfy real-time and generality requirements, a laser target simulator for a semi-physical simulation system based on the RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper/lower-computer simulation platform architecture used in most current real-time systems, this system has better maintainability and portability. The system runs on the Windows platform, using the Windows RTX real-time extension subsystem combined with a reflective memory network to ensure real-time performance for tasks such as calculating the simulation model, transmitting the simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run under the RTSS process, while LabWindows/CVI is used to build a graphical interface and handle non-real-time tasks such as man-machine interaction and the display and storage of simulation data, which run under the Win32 process. Data interaction between the RTSS real-time task process and the Win32 non-real-time task process is accomplished through the design of RTX shared memory and a task-scheduling algorithm. Experimental results show that the system has strong real-time performance, high stability, high simulation accuracy, and good human-computer interaction.
Information and Complexity Measures Applied to Observed and Simulated Soil Moisture Time Series
USDA-ARS?s Scientific Manuscript database
Time series of soil moisture-related parameters provide important insights into the functioning of soil water systems. Analysis of patterns within these time series has been used in several studies. The objective of this work was to compare patterns in observed and simulated soil moisture contents to u...
Modelling approaches: the case of schizophrenia.
Heeg, Bart M S; Damen, Joep; Buskens, Erik; Caleo, Sue; de Charro, Frank; van Hout, Ben A
2008-01-01
Schizophrenia is a chronic disease characterized by periods of relative stability interrupted by acute episodes (or relapses). The course of the disease may vary considerably between patients. Patient histories show considerable inter- and even intra-individual variability. We provide a critical assessment of the advantages and disadvantages of three modelling techniques that have been used in schizophrenia: decision trees, (cohort and micro-simulation) Markov models and discrete event simulation models. These modelling techniques are compared in terms of building time, data requirements, medico-scientific experience, simulation time, clinical representation, and their ability to deal with patient heterogeneity, the timing of events, prior events, patient interaction, interaction between co-variates and variability (first-order uncertainty). We note that, depending on the research question, the optimal modelling approach should be selected based on the expected differences between the comparators, the number of co-variates, the number of patient subgroups, the interactions between co-variates, and simulation time. Finally, it is argued that in case micro-simulation is required for the cost-effectiveness analysis of schizophrenia treatments, a discrete event simulation model is best suited to accurately capture all of the relevant interdependencies in this chronic, highly heterogeneous disease with limited long-term follow-up data.
van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.
2018-01-01
The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620
NASA Technical Reports Server (NTRS)
Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.
2005-01-01
This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human-in-the-Loop (HITL) studies of SATS HVO and baseline operations.
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of the real-time water situation, real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
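For reference, Godunov-type finite-volume flood models of this kind typically discretize the 2-D shallow water equations in conservative form (standard form, not quoted from the paper), with the source term S collecting bed slope and friction:

```latex
\frac{\partial}{\partial t}\!\begin{pmatrix} h \\ hu \\ hv \end{pmatrix}
+ \frac{\partial}{\partial x}\!\begin{pmatrix} hu \\ hu^{2}+\tfrac{1}{2}gh^{2} \\ huv \end{pmatrix}
+ \frac{\partial}{\partial y}\!\begin{pmatrix} hv \\ huv \\ hv^{2}+\tfrac{1}{2}gh^{2} \end{pmatrix}
= \mathbf{S}(h,u,v)
```

where h is the water depth and (u, v) are the depth-averaged velocity components.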
Auditory perceptual simulation: Simulating speech rates or accents?
Zhou, Peiyun; Christianson, Kiel
2016-07-01
When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects. Copyright © 2016 Elsevier B.V. All rights reserved.
Automated external defibrillators and simulated in-hospital cardiac arrests.
Rossano, Joseph W; Jefferson, Larry S; Smith, E O'Brian; Ward, Mark A; Mott, Antonio R
2009-05-01
To test the hypothesis that pediatric residents would have shorter time to attempted defibrillation using automated external defibrillators (AEDs) compared with manual defibrillators (MDs). A prospective, randomized, controlled trial of AEDs versus MDs was performed. Pediatric residents responded to a simulated in-hospital ventricular fibrillation cardiac arrest and were randomized to using either an AED or MD. The primary end point was time to attempted defibrillation. Sixty residents, 21 (35%) interns, were randomized to 2 groups (AED = 30, MD = 30). Residents randomized to the AED group had a significantly shorter time to attempted defibrillation [median, 60 seconds (interquartile range, 53 to 71 seconds)] compared with those randomized to the MD group [median, 103 seconds (interquartile range, 68 to 288 seconds)] (P < .001). All residents in the AED group attempted defibrillation at <5 minutes compared with 23 (77%) in the MD group (P = .01). AEDs improve the time to attempted defibrillation by pediatric residents in simulated cardiac arrests. Further studies are needed to help determine the role of AEDs in pediatric in-hospital cardiac arrests.
NASA Astrophysics Data System (ADS)
Hagan, Maura; Häusler, Kathrin; Lu, Gang; Forbes, Jeffrey; Zhang, Xiaoli; Doornbos, Eelco; Bruinsma, Sean
2014-05-01
We present the results of an investigation of the upper atmosphere during April 2010, when it was disturbed by a fast-moving coronal mass ejection. Our study is based on a comparative analysis of observations made by the Gravity field and steady-state Ocean Circulation Explorer (GOCE), Challenging Minisatellite Payload (CHAMP), and Gravity Recovery And Climate Experiment (GRACE) satellites and a set of simulations with the National Center for Atmospheric Research (NCAR) thermosphere-ionosphere-mesosphere-electrodynamics general circulation model (TIME-GCM). We compare and contrast the satellite observations with TIME-GCM results from a realistic simulation based on prevailing meteorological and solar geomagnetic conditions. We assess the comparative importance of the upper atmospheric signatures attributable to meteorological forcing and those attributable to storm effects by diagnosing a series of complementary control TIME-GCM simulations. These results also quantify the extent to which lower and middle atmospheric sources of upper atmospheric variability precondition its response to the solar geomagnetic storm.
Process fault detection and nonlinear time series analysis for anomaly detection in safeguards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, T.L.; Mullen, M.F.; Wangen, L.E.
In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material in two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.
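The two residual tests discussed, per-variable z-scores and a joint Mahalanobis distance, can be sketched as follows (a toy example with synthetic residuals and illustrative thresholds, not the three-tank data from the paper).

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic residuals (measurement minus model prediction) for three tank variables.
residuals = rng.normal(size=(500, 3)) * np.array([0.5, 1.0, 0.8])
residuals[300:, 1] += 1.5                       # injected "diversion" on variable 2

# Reference statistics estimated from an assumed fault-free period.
ref = residuals[:200]
mean, std = ref.mean(axis=0), ref.std(axis=0)
cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))

# Test 1: univariate z-score on each residual separately.
z_alarm = (np.abs((residuals - mean) / std) > 3.0).any(axis=1)

# Test 2: Mahalanobis distance testing all variables simultaneously.
d = residuals - mean
mahal_sq = np.einsum("ij,jk,ik->i", d, cov_inv, d)
m_alarm = mahal_sq > 11.34                      # ~99% quantile of chi-square with 3 dof

print("z-score alarms:    ", int(z_alarm.sum()))
print("Mahalanobis alarms:", int(m_alarm.sum()))
```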
Gray: a ray tracing-based Monte Carlo simulator for PET
NASA Astrophysics Data System (ADS)
Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.
2018-05-01
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
Rojas, David; Haji, Faizal; Shewaga, Rob; Kapralos, Bill; Dubrowski, Adam
2014-01-01
Interest in the measurement of cognitive load (CL) in simulation-based education has grown in recent years. In this paper we present two pilot experiments comparing the sensitivity of two reaction time based secondary task measures of CL. The results suggest that simple reaction time measures are sensitive enough to detect changes in CL experienced by novice learners in the initial stages of simulation-based surgical skills training.
Fast Simulation of Dynamic Ultrasound Images Using the GPU.
Storve, Sigurd; Torp, Hans
2017-10-01
Simulated ultrasound data is a valuable tool for development and validation of quantitative image analysis methods in echocardiography. Unfortunately, simulation time can become prohibitive for phantoms consisting of a large number of point scatterers. The COLE algorithm by Gao et al. is a fast convolution-based simulator that trades simulation accuracy for improved speed. We present highly efficient parallelized CPU and GPU implementations of the COLE algorithm with an emphasis on dynamic simulations involving moving point scatterers. We argue that it is crucial to minimize the amount of data transfers from the CPU to achieve good performance on the GPU. We achieve this by storing the complete trajectories of the dynamic point scatterers as spline curves in the GPU memory. This leads to good efficiency when simulating sequences consisting of a large number of frames, such as B-mode and tissue Doppler data for a full cardiac cycle. In addition, we propose a phase-based subsample delay technique that efficiently eliminates flickering artifacts seen in B-mode sequences when COLE is used without enough temporal oversampling. To assess the performance, we used a laptop computer and a desktop computer, each equipped with a multicore Intel CPU and an NVIDIA GPU. Running the simulator on a high-end TITAN X GPU, we observed two orders of magnitude speedup compared to the parallel CPU version, three orders of magnitude speedup compared to simulation times reported by Gao et al. in their paper on COLE, and a speedup of 27000 times compared to the multithreaded version of Field II, using numbers reported in a paper by Jensen. We hope that by releasing the simulator as an open-source project we will encourage its use and further development.
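The idea behind a phase-based subsample delay can be illustrated with a short sketch (not the COLE/GPU implementation): a fractional-sample delay is applied as a linear phase ramp in the frequency domain. The pulse parameters below are assumed values, and the FFT-based delay is circular, which is acceptable for illustration.

```python
import numpy as np

def fractional_delay(signal, delay_samples):
    """Delay a real signal by a non-integer number of samples via an FFT phase ramp."""
    n = signal.size
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n)                  # cycles per sample
    spectrum *= np.exp(-2j * np.pi * freqs * delay_samples)
    return np.fft.irfft(spectrum, n)

fs = 50e6                                       # assumed sampling rate, Hz
t = np.arange(256) / fs
pulse = np.sin(2.0 * np.pi * 5e6 * t) * np.hanning(t.size)

delayed = fractional_delay(pulse, 0.37)         # delay by 0.37 samples
print("peak sample before/after:", int(np.argmax(pulse)), int(np.argmax(delayed)))
```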
Leung, Sumie; Croft, Rodney J; Jackson, Melinda L; Howard, Mark E; McKenzie, Raymond J
2012-01-01
The present study compared the effects of a variety of mobile phone usage conditions to different levels of alcohol intoxication on simulated driving performance and psychomotor vigilance. Twelve healthy volunteers participated in a crossover design in which each participant completed a simulated driving task on 2 days, separated by a 1-week washout period. On the mobile phone day, participants performed the simulated driving task under each of 4 conditions: no phone usage, a hands-free naturalistic conversation, a hands-free cognitively demanding conversation, and texting. On the alcohol day, participants performed the simulated driving task at four different blood alcohol concentration (BAC) levels: 0.00, 0.04, 0.07, and 0.10. Driving performance was assessed by variables including time within target speed range, time spent speeding, braking reaction time, speed deviation, and lateral lane position deviation. In the alcohol condition, participants spent less time in the target speed range and more time speeding at BAC 0.07 and 0.10, and took longer to brake at BAC 0.04, 0.07, and 0.10, than at BAC 0.00. In the mobile phone condition, participants took longer to brake in the naturalistic hands-free conversation, cognitively demanding hands-free conversation, and texting conditions, and spent less time in the target speed range and more time speeding in the cognitively demanding hands-free conversation and texting conditions. When comparing the two days, the naturalistic conversation was comparable to the legally permissible BAC level (0.04), and the cognitively demanding conversation and texting were similar to the BAC 0.07 to 0.10 results. The findings of the current laboratory study suggest that very simple conversations on a mobile phone may not represent a significant driving risk (compared to legally permissible BAC levels), whereas cognitively demanding hands-free conversation, and particularly texting, represent significant risks to driving.
Prediction of Land use changes using CA in GIS Environment
NASA Astrophysics Data System (ADS)
Kiavarz Moghaddam, H.; Samadzadegan, F.
2009-04-01
Urban growth is a typical self-organized system resulting from the interaction between three defined systems: the developed urban system, the natural non-urban system, and the planned urban system. Urban growth simulation for an artificial city is carried out first. It evaluates a number of urban sprawl parameters, including the size and shape of the neighborhood, and tests different types of constraints on urban growth simulation. The results indicate that a circular-type neighborhood shows smoother but faster urban growth compared to the nine-cell Moore neighborhood. Cellular Automata (CA) proves very efficient in simulating urban growth over time. The strength of this technology comes from the urban modeler's ability to implement the growth simulation model, evaluate the results, and present the output in a visually interpretable environment. The artificial city simulation model provides an excellent environment to test a number of simulation parameters, such as the neighborhood's influence on growth results and the role of constraints in driving urban growth. Also, the definition of CA rules is a critical stage in simulating the urban growth pattern in a manner close to reality. CA urban growth simulation and prediction for Tehran over the last four decades succeeds in simulating the specified tested growth years at a high accuracy level. Some real data layers were used in the CA simulation training phase (such as 1995), while others were used for testing the prediction results (such as 2002). Tuning the CA growth rules is important, through comparing the simulated images with the real data to obtain feedback. An important note is that the CA rules also need to be modified over time to adapt to the urban growth pattern. The region-based evaluation method has the advantage of covering the spatial distribution component of the urban growth process. The next step includes running the developed CA simulation over classified raster data for three years in a developed ArcGIS extension. A set of crisp rules is defined and calibrated based on the real urban growth pattern. Uncertainty analysis is performed to evaluate the accuracy of the simulated results compared to the historical real data. The evaluation shows promising results, represented by the high average accuracies achieved. The average accuracy for the predicted growth images for 1964 and 2002 is over 80%. Modifying the CA growth rules over time to match changes in the growth pattern is important for obtaining accurate simulation. This modification is based on the urban growth relationship for Tehran over time, as seen in the historical raster data. The feedback obtained from comparing the simulated and real data is crucial in identifying the optimal set of CA rules for reliable simulation and in calibrating the growth steps.
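As a purely illustrative sketch (not the authors' Tehran model), one growth step of a binary cellular automaton with a Moore neighborhood and a simple crisp rule might look as follows; the threshold and the constraint layer are hypothetical placeholders.

    import numpy as np

    def grow_step(urban, constraint, threshold=3):
        """urban: 2D 0/1 array of developed cells; constraint: 2D 0/1 array of developable land."""
        padded = np.pad(urban, 1)
        # Count developed neighbors in the 3x3 Moore neighborhood of every cell
        neighbors = sum(
            padded[1 + di : padded.shape[0] - 1 + di, 1 + dj : padded.shape[1] - 1 + dj]
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)
        )
        # A cell develops if it is currently empty, developable, and has enough developed neighbors
        new_cells = (urban == 0) & (constraint == 1) & (neighbors >= threshold)
        return urban | new_cells.astype(urban.dtype)

Calibration against historical rasters then amounts to adjusting such rules and thresholds until the simulated and observed growth maps agree.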
Real-time video communication improves provider performance in a simulated neonatal resuscitation.
Fang, Jennifer L; Carey, William A; Lang, Tara R; Lohse, Christine M; Colby, Christopher E
2014-11-01
To determine if a real-time audiovisual link with a neonatologist, termed video-assisted resuscitation or VAR, improves provider performance during a simulated neonatal resuscitation scenario. Using high-fidelity simulation, 46 study participants were presented with a neonatal resuscitation scenario. The control group performed independently, while the intervention group utilized VAR. Time to effective ventilation was compared using Wilcoxon rank sum tests. Providers' use of the corrective steps for ineffective ventilation per the NRP algorithm was compared using Cochran-Armitage trend tests. The time needed to establish effective ventilation was significantly reduced in the intervention group when compared to the control group (mean time 2 min 42 s versus 4 min 11 s, p<0.001). In the setting of ineffective ventilation, only 35% of control subjects used three or more of the first five corrective steps and none of them used all five steps. Providers in the control group most frequently neglected to open the mouth and increase positive pressure. In contrast, all of those in the intervention group used all of the first five corrective steps, p<0.001. All participants in the control group decided to intubate the infant to establish effective ventilation, compared to none in the intervention group, p<0.001. Using VAR during a simulated neonatal resuscitation scenario significantly reduces the time to establish effective ventilation and improves provider adherence to NRP guidelines. This technology may be a means for regional centers to support local providers during a neonatal emergency to improve patient safety and improve neonatal outcomes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Time-Domain Filtering for Spatial Large-Eddy Simulation
NASA Technical Reports Server (NTRS)
Pruett, C. David
1997-01-01
An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
Henn, R Frank; Shah, Neel; Warner, Jon J P; Gomoll, Andreas H
2013-06-01
The purpose of this study was to quantify the benefits of shoulder arthroscopy simulator training with a cadaveric model of shoulder arthroscopy. Seventeen first-year medical students with no prior experience in shoulder arthroscopy were enrolled and completed this study. Each subject completed a baseline proctored arthroscopy on a cadaveric shoulder, which included controlling the camera and completing a standard series of tasks using the probe. The subjects were randomized, and 9 of the subjects received training on a virtual reality simulator for shoulder arthroscopy. All subjects then repeated the same cadaveric arthroscopy. The arthroscopic videos were analyzed in a blinded fashion for time to task completion and subjective assessment of technical performance. The 2 groups were compared by use of Student t tests, and change over time within groups was analyzed with paired t tests. There were no observed differences between the 2 groups on the baseline evaluation. The simulator group improved significantly from baseline with respect to time to completion and subjective performance (P < .05). Time to completion was significantly faster in the simulator group compared with controls at the final evaluation (P < .05). No difference was observed between the groups on the subjective scores at the final evaluation (P = .98). Shoulder arthroscopy simulator training resulted in significant benefits in clinical shoulder arthroscopy time to task completion in this cadaveric model. This study provides important additional evidence of the benefit of simulators in orthopaedic surgical training. There may be a role for simulator training in shoulder arthroscopy education. Copyright © 2013 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Shoulder Arthroscopy Simulator Training Improves Shoulder Arthroscopy Performance in a Cadaver Model
Henn, R. Frank; Shah, Neel; Warner, Jon J.P.; Gomoll, Andreas H.
2013-01-01
Purpose The purpose of this study was to quantify the benefits of shoulder arthroscopy simulator training with a cadaver model of shoulder arthroscopy. Methods Seventeen first-year medical students with no prior experience in shoulder arthroscopy were enrolled and completed this study. Each subject completed a baseline proctored arthroscopy on a cadaveric shoulder, which included controlling the camera and completing a standard series of tasks using the probe. The subjects were randomized, and nine of the subjects received training on a virtual reality simulator for shoulder arthroscopy. All subjects then repeated the same cadaveric arthroscopy. The arthroscopic videos were analyzed in a blinded fashion for time to task completion and subjective assessment of technical performance. The two groups were compared with Student t-tests, and change over time within groups was analyzed with paired t-tests. Results There were no observed differences between the two groups on the baseline evaluation. The simulator group improved significantly from baseline with respect to time to completion and subjective performance (p<0.05). Time to completion was significantly faster in the simulator group compared to controls at final evaluation (p<0.05). No difference was observed between the groups on the subjective scores at final evaluation (p=0.98). Conclusions Shoulder arthroscopy simulator training resulted in significant benefits in clinical shoulder arthroscopy time to task completion in this cadaver model. This study provides important additional evidence of the benefit of simulators in orthopaedic surgical training. Clinical Relevance There may be a role for simulator training in shoulder arthroscopy education. PMID:23591380
Software Comparison for Renewable Energy Deployment in a Distribution Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian
The main objective of this report is to evaluate different software options for performing robust distributed generation (DG) power system modeling. The features and capabilities of four simulation tools, OpenDSS, GridLAB-D, CYMDIST, and PowerWorld Simulator, are compared to analyze their effectiveness in analyzing distribution networks with DG. OpenDSS and GridLAB-D, two open-source software packages, have the capability to simulate networks with fluctuating data values. These packages allow a simulation to be run at each time instant by iterating only the main script file. CYMDIST, a commercial software package, allows for time-series simulation to study variations in network controls. PowerWorld Simulator, another commercial tool, has a batch-mode simulation function through the 'Time Step Simulation' tool, which obtains solutions for a list of specified time points. PowerWorld Simulator is intended for analysis of transmission-level systems, while the other three are designed for distribution systems. CYMDIST and PowerWorld Simulator feature easy-to-use graphical user interfaces (GUIs). OpenDSS and GridLAB-D, on the other hand, are based on command-line programs, which increases the time necessary to become familiar with the software packages.
Udani, Ankeet Deepak; Harrison, T Kyle; Mariano, Edward R; Derby, Ryan; Kan, Jack; Ganaway, Toni; Shum, Cynthia; Gaba, David M; Tanaka, Pedro; Kou, Alex; Howard, Steven K
2016-01-01
Simulation-based education strategies to teach regional anesthesia have been described, but their efficacy largely has been assumed. We designed this study to determine whether residents trained using the simulation-based strategy of deliberate practice show greater improvement of ultrasound-guided regional anesthesia (UGRA) skills than residents trained using self-guided practice in simulation. Anesthesiology residents new to UGRA were randomized to participate in either simulation-based deliberate practice (intervention) or self-guided practice (control). Participants were recorded and assessed while performing simulated peripheral nerve blocks at baseline, immediately after the experimental condition, and 3 months after enrollment. Subject performance was scored from video by 2 blinded reviewers using a composite tool. The amount of time each participant spent in deliberate or self-guided practice was recorded. Twenty-eight participants completed the study. Both groups showed within-group improvement from baseline scores immediately after the curriculum and 3 months following study enrollment. There was no difference between groups in the change in composite scores from baseline either immediately after the curriculum (P = 0.461) or 3 months following study enrollment (P = 0.927). The average time that subjects spent in simulation practice was 6.8 minutes for the control group compared with 48.5 minutes for the intervention group (P < 0.001). In this comparative effectiveness study, there was no difference in acquisition and retention of skills in UGRA for novice residents taught by either simulation-based deliberate practice or self-guided practice. Both methods increased skill from baseline; however, self-guided practice required less time and faculty resources.
The calculation of viscosity of liquid n-decane and n-hexadecane by the Green-Kubo method
NASA Astrophysics Data System (ADS)
Cui, S. T.; Cummings, P. T.; Cochran, H. D.
This short commentary presents the result of long molecular dynamics simulation calculations of the shear viscosity of liquid n-decane and n-hexadecane using the Green-Kubo integration method. The relaxation time of the stress-stress correlation function is compared with those of rotation and diffusion. The rotational and diffusional relaxation times, which are easy to calculate, provide useful guides for the required simulation time in viscosity calculations. Also, the computational time required for viscosity calculations of these systems by the Green-Kubo method is compared with the time required for previous non-equilibrium molecular dynamics calculations of the same systems. The method of choice for a particular calculation is determined largely by the properties of interest, since the efficiencies of the two methods are comparable for calculation of the zero strain rate viscosity.
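For reference, the Green-Kubo relation used for the shear viscosity has the standard form (symbols follow common usage rather than the paper's notation):

    \eta = \frac{V}{k_B T} \int_0^\infty \langle P_{xy}(0)\, P_{xy}(t) \rangle \, dt

where V is the system volume, T the temperature, k_B Boltzmann's constant, and P_{xy} an off-diagonal element of the pressure tensor; the integrand is the stress-stress correlation function whose relaxation time the authors compare with the rotational and diffusional relaxation times.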
NASA Technical Reports Server (NTRS)
Arneson, Heather; Evans, Antony D.; Li, Jinhua; Wei, Mei Yueh
2017-01-01
Integrated Demand Management (IDM) is a near- to mid-term NASA concept that proposes to address mismatches in air traffic system demand and capacity by using strategic flow management capabilities to pre-condition demand into the more tactical Time-Based Flow Management System (TBFM). This paper describes an automated simulation capability to support IDM concept development. The capability closely mimics existing human-in-the-loop (HITL) capabilities, automating both the human components and collaboration between operational systems, and speeding up the real-time aircraft simulations. Such a capability allows for parametric studies that will inform the HITL simulations, identifying breaking points and parameter values at which significant changes in system behavior occur. This paper also describes the initial validation of individual components of the automated simulation capability, and an example application comparing the performance of the IDM concept under two TBFM scheduling paradigms. The results and conclusions from this simulation compare closely to those from previous HITL simulations using similar scenarios, providing an initial validation of the automated simulation capability.
Assessment of virtual reality robotic simulation performance by urology resident trainees.
Ruparel, Raaj K; Taylor, Abby S; Patel, Janil; Patel, Vipul R; Heckman, Michael G; Rawal, Bhupendra; Leveillee, Raymond J; Thiel, David D
2014-01-01
To examine resident performance on the Mimic dV-Trainer (MdVT; Mimic Technologies, Inc., Seattle, WA) for correlation with resident trainee level (postgraduate year [PGY]), console experience (CE), and simulator exposure in their training program to assess for internal bias with the simulator. Residents from programs of the Southeastern Section of the American Urologic Association participated. Each resident was scored on 4 simulator tasks (peg board, camera targeting, energy dissection [ED], and needle targeting) with 3 different outcomes (final score, economy of motion score, and time to complete exercise) measured for each task. These scores were evaluated for association with PGY, CE, and simulator exposure. Robotic skills training laboratory. A total of 27 residents from 14 programs of the Southeastern Section of the American Urologic Association participated. Time to complete the ED exercise was significantly shorter for residents who had logged live robotic console compared with those who had not (p = 0.003). There were no other associations with live robotic console time that approached significance (all p ≥ 0.21). The only measure that was significantly associated with PGY was time to complete ED exercise (p = 0.009). No associations with previous utilization of a robotic simulator in the resident's home training program were statistically significant. The ED exercise on the MdVT is most associated with CE and PGY compared with other exercises. Exposure of trainees to the MdVT in training programs does not appear to alter performance scores compared with trainees who do not have the simulator. © 2013 Published by Association of Program Directors in Surgery on behalf of Association of Program Directors in Surgery.
Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.
2018-01-01
The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
Pasma, Jantsje H; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C
2018-01-01
The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control.
Time and Frequency-Domain Cross-Verification of SLS 6DOF Trajectory Simulations
NASA Technical Reports Server (NTRS)
Johnson, Matthew; McCullough, John
2017-01-01
The Space Launch System (SLS) Guidance, Navigation, and Control (GNC) team and its partners have developed several time- and frequency-based simulations for development and analysis of the proposed SLS launch vehicle. The simulations differ in fidelity and some have unique functionality that allows them to perform specific analyses. Some examples of the purposes of the various models are: trajectory simulation, multi-body separation, Monte Carlo, hardware in the loop, loads, and frequency domain stability analyses. While no two simulations are identical, many of the models are essentially six degree-of-freedom (6DOF) representations of the SLS plant dynamics, hardware implementation, and flight software. Thus at a high level all of those models should be in agreement. Comparison of outputs from several SLS trajectory and stability analysis tools are ongoing as part of the program's current verification effort. The purpose of these comparisons is to highlight modeling and analysis differences, verify simulation data sources, identify inconsistencies and minor errors, and ultimately to verify output data as being a good representation of the vehicle and subsystem dynamics. This paper will show selected verification work in both the time and frequency domain from the current design analysis cycle of the SLS for several of the design and analysis simulations. In the time domain, the tools that will be compared are MAVERIC, CLVTOPS, SAVANT, STARS, ARTEMIS, and POST 2. For the frequency domain analysis, the tools to be compared are FRACTAL, SAVANT, and STARS. The paper will include discussion of these tools including their capabilities, configurations, and the uses to which they are put in the SLS program. Determination of the criteria by which the simulations are compared (matching criteria) requires thoughtful consideration, and there are several pitfalls that may occur that can severely punish a simulation if not considered carefully. The paper will discuss these considerations and will present a framework for responding to these issues when they arise. For example, small event timing differences can lead to large differences in mass properties if the criteria are to measure those properties at the same time, or large differences in altitude if the criteria are to measure those properties when the simulation experiences a staging event. Similarly, a tiny difference in phase can lead to large gain margin differences for frequency-domain comparisons of gain margins.
Effective description of a 3D object for photon transportation in Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Suganuma, R.; Ogawa, K.
2000-06-01
Photon transport simulation by means of the Monte Carlo method is an indispensable technique for examining scatter and absorption correction methods in SPECT and PET. The authors have developed a method for object description with maximum size regions (maximum rectangular regions: MRRs) to speed up photon transport simulation, and compared the computation time with that for conventional object description methods, a voxel-based (VB) method and an octree method, in the simulations of two kinds of phantoms. The simulation results showed that the computation time with the proposed method became about 50% of that with the VB method and about 70% of that with the octree method for a high resolution MCAT phantom. Here, details of the expansion of the MRR method to three dimensions are given. Moreover, the effectiveness of the proposed method was compared with the VB and octree methods.
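As background, the transport step being accelerated can be sketched generically: a free path is sampled from an exponential distribution and compared with the distance to the current region boundary, so larger homogeneous regions (the MRR idea) mean fewer boundary crossings per photon. The snippet below is a generic illustration, not the authors' code.

    import math, random

    def sample_step(mu_total, distance_to_boundary):
        """mu_total: linear attenuation coefficient of the region [1/cm];
        distance_to_boundary: distance to the region boundary along the ray [cm]."""
        s = -math.log(1.0 - random.random()) / mu_total   # exponential free path
        if s < distance_to_boundary:
            return s, True     # photon interacts inside this region
        return distance_to_boundary, False                 # photon crosses into the next region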
Accelerating gravitational microlensing simulations using the Xeon Phi coprocessor
NASA Astrophysics Data System (ADS)
Chen, B.; Kantowski, R.; Dai, X.; Baron, E.; Van der Mark, P.
2017-04-01
Recently Graphics Processing Units (GPUs) have been used to speed up very CPU-intensive gravitational microlensing simulations. In this work, we use the Xeon Phi coprocessor to accelerate such simulations and compare its performance on a microlensing code with that of NVIDIA's GPUs. For the selected set of parameters evaluated in our experiment, we find that the speedup by Intel's Knights Corner coprocessor is comparable to that by NVIDIA's Fermi family of GPUs with compute capability 2.0, but less significant than GPUs with higher compute capabilities such as the Kepler. However, the very recently released second generation Xeon Phi, Knights Landing, is about 5.8 times faster than the Knights Corner, and about 2.9 times faster than the Kepler GPU used in our simulations. We conclude that the Xeon Phi is a very promising alternative to GPUs for modern high performance microlensing simulations.
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
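For reference, a minimal sequential Gillespie direct-method SSA, of the kind whose many independent realizations the GPU implementation runs in parallel, can be sketched as follows (an illustrative implementation; the birth-death example at the end is hypothetical).

    import math, random

    def ssa(x, propensities, stoich, t_end):
        """x: list of species counts; propensities: list of functions a_k(x);
        stoich: list of per-reaction state-change vectors."""
        t, trajectory = 0.0, [(0.0, list(x))]
        while t < t_end:
            a = [ak(x) for ak in propensities]
            a0 = sum(a)
            if a0 == 0.0:
                break                                      # no reaction can fire
            t += -math.log(1.0 - random.random()) / a0     # exponential waiting time
            r, cum, k = random.random() * a0, 0.0, 0
            while cum + a[k] < r:                          # select the firing reaction
                cum += a[k]
                k += 1
            x = [xi + dxi for xi, dxi in zip(x, stoich[k])]
            trajectory.append((t, list(x)))
        return trajectory

    # Example: birth-death process with birth rate 10 and per-molecule death rate 0.1
    traj = ssa([0], [lambda x: 10.0, lambda x: 0.1 * x[0]], [[+1], [-1]], t_end=100.0)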
Acceleration of discrete stochastic biochemical simulation using GPGPU
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936
Evaluation of Model Performance over the Maritime Continent
NASA Astrophysics Data System (ADS)
Reynolds, C. A.; Barton, N. P.; Chen, S.; Flatau, M. K.; Ridout, J. A.; Janiga, M.; Jensen, T.; Richman, J. G.; Metzger, E. J.; Baranowski, D.
2017-12-01
The introduction of high-resolution global coupled models holds promise for extended-range (subseasonal to seasonal) prediction of high-impact weather. While forecast models have shown considerable improvement in the prediction of tropical phenomena on these timescales, specifically in the simulation and prediction of the Madden-Julian Oscillation (MJO), obstacles remain. In particular, many models still have difficulty accurately simulating the propagation of the MJO over the maritime continent. This has been hypothesized, at least in part, to be related to deficiencies in simulating the diurnal cycle over this region, which in turn is dependent on accurate representation of fine-scale atmosphere-ocean-land interactions, orography, and atmospheric convection. These issues have motivated the international Year of Maritime Continent (YMC) effort and the Office of Naval Research Propagation of Intra-Seasonal Tropical Oscillations (PISTON) initiative. In preparation for YMC and PISTON, we closely evaluate the performance of the Navy Earth System Model (NESM), a coupled global forecast model, in representing the diurnal cycle and other prominent phenomena in the maritime continent region. NESM performance is compared with stand-alone atmospheric simulations with prescribed fixed and analyzed sea surface temperatures (SSTs). Initial results from the Dynamics of the Madden-Julian Oscillation field phase (Fall 2011) period indicate that NESM is able to capture the precipitation day-time maximum over land and night-time maximum over ocean, but day-time precipitation over Borneo, Sumatra and the Malay Peninsula is too strong as compared to TRMM observations. The simulation of low-level winds qualitatively captures sea and land breeze patterns as compared with ERA-Interim analysis, with quantitative biases varying by island. The fully-coupled system and the stand-alone atmospheric model simulations are more similar to each other than to the observations, indicating that active ocean coupling is not the most prominent issue contributing to biases in these simulations. The performance of NESM will be more thoroughly evaluated and compared to other forecast systems using the 45-day forecasts currently being produced four times per week for the 1999-2015 time period under the NOAA SubX project.
Vigmond, Edward J.; Boyle, Patrick M.; Leon, L. Joshua; Plank, Gernot
2014-01-01
Simulations of cardiac bioelectric phenomena remain a significant challenge despite continual advancements in computational machinery. Spanning large temporal and spatial ranges demands millions of nodes to accurately depict geometry, and a comparable number of timesteps to capture dynamics. This study explores a new hardware computing paradigm, the graphics processing unit (GPU), to accelerate cardiac models, and analyzes results in the context of simulating a small mammalian heart in real time. The ODEs associated with membrane ionic flow were computed on traditional CPU and compared to GPU performance, for one to four parallel processing units. The scalability of solving the PDE responsible for tissue coupling was examined on a cluster using up to 128 cores. Results indicate that the GPU implementation was between 9 and 17 times faster than the CPU implementation and scaled similarly. Solving the PDE was still 160 times slower than real time. PMID:19964295
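The per-node ODE work that maps well to a GPU can be illustrated with a simple two-variable membrane model; the FitzHugh-Nagumo equations below are only a stand-in for the far larger ionic models used in cardiac simulations, and the parameter values are standard textbook choices.

    import numpy as np

    def fhn_step(v, w, i_stim, dt=0.01, a=0.7, b=0.8, eps=0.08):
        # Forward-Euler update of the FitzHugh-Nagumo membrane dynamics
        dv = v - v ** 3 / 3.0 - w + i_stim
        dw = eps * (v + a - b * w)
        return v + dt * dv, w + dt * dw

    v = np.zeros(10_000)    # membrane variable at 10,000 mesh nodes
    w = np.zeros(10_000)
    for _ in range(1000):
        v, w = fhn_step(v, w, i_stim=0.5)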
McDonald, Catherine C.; Seacrist, Thomas S.; Lee, Yi-Ching; Loeb, Helen; Kandadai, Venk; Winston, Flaura K.
2014-01-01
Summary Driving simulators can be used to evaluate driving performance under controlled, safe conditions. Teen drivers are at particular risk for motor vehicle crashes and simulated driving can provide important information on performance. We developed a new simulator protocol, the Simulated Driving Assessment (SDA), with the goal of providing a new tool for driver assessment and a common outcome measure for evaluation of training programs. As an initial effort to examine the validity of the SDA to differentiate performance according to experience, this analysis compared driving behaviors and crashes between novice teens (n=20) and experienced adults (n=17) on a high-fidelity simulator for one common crash scenario, a rear-end crash. We examined headway time and crashes during an SDA event in which a lead truck braked suddenly. We found that 35% of the novice teens crashed and none of the experienced adults crashed in this lead truck braking event; 50% of the teens versus 25% of the adults had a headway time <3 seconds at the time of truck braking. Among the 10 teens with <3 seconds headway time, 70% crashed. Among all participants with a headway time of 2–3 seconds, further investigation revealed descriptive differences in throttle position and brake pedal force when comparing teens who crashed, teens who did not crash and adults (none of whom crashed). Even with a relatively small sample, we found statistically significant differences in headway time for adults and teens, providing preliminary construct validation for our new SDA. PMID:25197724
Helioseismology of a Realistic Magnetoconvective Sunspot Simulation
NASA Technical Reports Server (NTRS)
Braun, D. C.; Birch, A. C.; Rempel, M.; Duvall, T. L., Jr.
2012-01-01
We compare helioseismic travel-time shifts measured from a realistic magnetoconvective sunspot simulation using both helioseismic holography and time-distance helioseismology, and measured from real sunspots observed with the Helioseismic and Magnetic Imager instrument on board the Solar Dynamics Observatory and the Michelson Doppler Imager instrument on board the Solar and Heliospheric Observatory. We find remarkable similarities in the travel-time shifts measured between the methodologies applied and between the simulated and real sunspots. Forward modeling of the travel-time shifts using either Born or ray approximation kernels and the sound-speed perturbations present in the simulation indicates major disagreements with the measured travel-time shifts. These findings do not substantially change with the application of a correction for the reduction of wave amplitudes in the simulated and real sunspots. Overall, our findings demonstrate the need for new methods for inferring the subsurface structure of sunspots through helioseismic inversions.
Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian
2016-03-20
We show that with an appropriate combination of two optical simulation techniques (classical ray tracing and the finite-difference time-domain method), an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.
NASA Astrophysics Data System (ADS)
Li, Xin; Song, Weiying; Yang, Kai; Krishnan, N. M. Anoop; Wang, Bu; Smedskjaer, Morten M.; Mauro, John C.; Sant, Gaurav; Balonis, Magdalena; Bauchy, Mathieu
2017-08-01
Although molecular dynamics (MD) simulations are commonly used to predict the structure and properties of glasses, they are intrinsically limited to short time scales, necessitating the use of fast cooling rates. It is therefore challenging to compare results from MD simulations to experimental results for glasses cooled on typical laboratory time scales. Based on MD simulations of a sodium silicate glass with varying cooling rate (from 0.01 to 100 K/ps), here we show that thermal history primarily affects the medium-range order structure, while the short-range order is largely unaffected over the range of cooling rates simulated. This results in a decoupling between the enthalpy and volume relaxation functions, where the enthalpy quickly plateaus as the cooling rate decreases, whereas density exhibits a slower relaxation. Finally, we show that, using the proper extrapolation method, the outcomes of MD simulations can be meaningfully compared to experimental values when extrapolated to slower cooling rates.
NASA Astrophysics Data System (ADS)
Dube, B.; Lefebvre, S.; Perocheau, A.; Nakra, H. L.
1988-01-01
This paper describes the comparative results obtained from digital and hybrid simulation studies on a variable speed wind generator interconnected to the utility grid. The wind generator is a vertical-axis Darrieus type coupled to a synchronous machine by a gear-box; the synchronous machine is connected to the AC utility grid through a static frequency converter. Digital simulation results have been obtained using CSMP software; these results are compared with those obtained from a real-time hybrid simulator that in turn uses a part of the IREQ HVDC simulator. The agreement between hybrid and digital simulation results is generally good. The results demonstrate that the digital simulation reproduces the dynamic behavior of the system in a satisfactory manner and thus constitutes a valid tool for the design of the control systems of the wind generator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rian, D.T.; Hage, A.
1994-12-31
A numerical simulator is often used as a reservoir management tool. One of its main purposes is to aid in the evaluation of the number of wells, well locations, and start times for wells. Traditionally, the optimization of a field development is done by a manual trial-and-error process. In this paper, an example of an automated technique is given. The core of the automation process is the reservoir simulator Frontline. Frontline is based on front-tracking techniques, which makes it fast and accurate compared to traditional finite-difference simulators. Due to its CPU efficiency, the simulator has been coupled with an optimization module, which enables automatic optimization of the location of wells, the number of wells, and start-up times. The simulator was used as an alternative method in the evaluation of waterflooding in a North Sea fractured chalk reservoir. Since Frontline, in principle, is 2D, Buckley-Leverett pseudo functions were used to represent the third dimension. The areal full field simulation model was run with up to 25 wells for 20 years in less than one minute of Vax 9000 CPU time. The automatic Frontline evaluation indicated that a peripheral waterflood could double incremental recovery compared to a central pattern drive.
Jacobs, Kevin A; Kressler, Jochen; Stoutenberg, Mark; Roos, Bernard A; Friedlander, Anne L
2011-01-01
Sildenafil improves maximal exercise capacity at high altitudes (∼4350-5800 m) by reducing pulmonary arterial pressure and enhancing oxygen delivery, but the effects on exercise performance at less severe altitudes are less clear. To determine the effects of sildenafil on cardiovascular hemodynamics (heart rate, stroke volume, and cardiac output), arterial oxygen saturation (SaO2), and 6-km time-trial performance of endurance-trained men and women at a simulated altitude of ∼3900 m. Twenty men and 15 women, endurance-trained, completed one experimental exercise trial (30 min at 55% of altitude-specific capacity +6-km time trial) at sea level (SL) and two trials at simulated high altitude (HA) while breathing hypoxic gas (12.8% FIo2) after ingestion of either placebo or 50 mg sildenafil in double-blind, randomized, and counterbalanced fashion. Maximal exercise capacity and SaO2 were significantly reduced at HA compared to SL (18%-23%), but sildenafil did not significantly improve cardiovascular hemodynamics or time-trial performance in either men or women compared to placebo and only improved SaO2 in women (4%). One male subject (5% of male subjects, 2.8% of all subjects) exhibited a meaningful 36-s improvement in time-trial performance with sildenafil compared to placebo. In this group of endurance trained men and women, sildenafil had very little influence on cardiovascular hemodynamics, SaO2, and 6-km time-trial performance at a simulated altitude of ∼3900 m. It appears that a very small percentage of endurance-trained men and women derive meaningful improvements in aerobic performance from sildenafil at a simulated altitude of ∼3900 m.
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
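The antithetic-variates idea can be shown with a toy Monte Carlo estimate (not the UKPDS 68 diabetes model itself): each standard-normal draw z is paired with -z, so the paired runs are negatively correlated and their average has lower variance than the same number of independent runs. The outcome function below is a hypothetical placeholder.

    import numpy as np

    rng = np.random.default_rng(1)

    def f(z):
        return np.exp(0.5 * z)   # hypothetical monotone outcome function of the random input

    n = 10_000
    z = rng.standard_normal(n)
    plain = f(rng.standard_normal(2 * n))          # 2n independent model runs
    antithetic = 0.5 * (f(z) + f(-z))              # n antithetic pairs, also 2n runs

    print(plain.mean(), plain.std(ddof=1) / np.sqrt(2 * n))        # estimate and standard error
    print(antithetic.mean(), antithetic.std(ddof=1) / np.sqrt(n))  # smaller standard error expected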
A study of workstation computational performance for real-time flight simulation
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Cleveland, Jeff I., II
1995-01-01
With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.
Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems
NASA Astrophysics Data System (ADS)
Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo
With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of calculation cells should be small and the calculation should be transient with small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that the simulation in practice is difficult. This study investigates the requirements of cell size and the time step size for accurate simulations, and the filtering effects caused by coarser mesh and longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.
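The coupling between cell size and time step behind these cost estimates is commonly expressed through the Courant number (a standard constraint, not a formula quoted from the study):

    \mathrm{Co} = \frac{u\,\Delta t}{\Delta x} \lesssim 1

where u is a characteristic velocity, Δt the time step, and Δx the cell size; refining the mesh therefore forces a proportionally smaller time step, which is why full scale transient CFB simulations become so expensive.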
Gray: a ray tracing-based Monte Carlo simulator for PET.
Freese, David L; Olcott, Peter D; Buss, Samuel R; Levin, Craig S
2018-05-21
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a [Formula: see text] speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within [Formula: see text]% when accounting for differences in peak NECR. We also estimate the peak NECR to be [Formula: see text] kcps, or within [Formula: see text]% of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
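One of the graphics-community building blocks that such a ray tracing approach relies on is the ray versus axis-aligned bounding box (slab) test; the sketch below is a generic Python version for illustration, not code from Gray.

    def ray_aabb(origin, direction, box_min, box_max):
        """Returns (hit, t_near, t_far) for the ray origin + t * direction, with t >= 0."""
        t_near, t_far = 0.0, float("inf")
        for o, d, lo, hi in zip(origin, direction, box_min, box_max):
            if d == 0.0:
                if o < lo or o > hi:
                    return False, None, None    # ray is parallel to this slab and outside it
                continue
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far:
                return False, None, None        # slab intervals do not overlap
        return True, t_near, t_far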
Simulated BRDF based on measured surface topography of metal
NASA Astrophysics Data System (ADS)
Yang, Haiyue; Haist, Tobias; Gronle, Marc; Osten, Wolfgang
2017-06-01
The radiative reflective properties of a calibration standard rough surface were simulated by ray tracing and the finite-difference time-domain (FDTD) method. The simulation results have been used to compute the bidirectional reflectance distribution functions (BRDFs) of metal surfaces and have been compared with experimental measurements. The experimental and simulated results are in good agreement.
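For context, the BRDF is defined in standard radiometric notation (not the paper's) as the ratio of reflected radiance to incident irradiance:

    f_r(\omega_i, \omega_o) = \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i}

where ω_i and ω_o are the incident and outgoing directions and θ_i is the angle of incidence.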
Convection in a Very Compressible Fluid: Comparison of Simulations With Experiments
NASA Technical Reports Server (NTRS)
Meyer, H.; Furukawa, A.; Onuki, A.; Kogan, A. B.
2003-01-01
The time profile ΔT(t) of the temperature difference, measured across a very compressible fluid layer of supercritical He-3 after the start of a heat flow, shows a damped oscillatory behavior before steady-state convection is reached. The results for ΔT(t) obtained from numerical simulations and from laboratory experiments are compared over a temperature range where the compressibility varies by a factor of ≈40. First, the steady-state convective heat current j_conv as a function of the Rayleigh number Ra is presented, and the agreement is found to be good. Second, the shape of the time profile and two characteristic times in the transient part of ΔT(t) from simulations and experiments are compared, namely: 1) t_osc, the oscillatory period, and 2) t_p, the time of the first peak after starting the heat flow. These times, scaled by the diffusive time τ_D, are presented versus Ra. The agreement is good for t_osc/τ_D, where the results collapse on a single curve showing a power-law behavior. The simulation hence confirms the universal scaling behavior found experimentally. However, for t_p/τ_D, where the experimental data also collapse on a single curve, the simulation results show systematic departures from such a behavior. A possible reason for some of the disagreements, both in the time profile and in t_p, is discussed. In the Appendix, a third characteristic time, t_m, between the first peak and the first oscillation minimum is plotted, and a comparison between the results of experiments and simulations is made.
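For reference, the Rayleigh number controlling the convection is the standard dimensionless group (symbols as commonly defined, not taken from the paper):

    Ra = \frac{g\,\alpha_P\,\Delta T\,L^3}{\nu\,\kappa}

with g the gravitational acceleration, α_P the isobaric thermal expansion coefficient, ΔT the applied temperature difference, L the layer height, ν the kinematic viscosity, and κ the thermal diffusivity.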
Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars; Cayé-Thomasen, Per; Sørensen, Mads Sølvsten
2016-01-01
The cognitive load (CL) theoretical framework suggests that working memory is limited, which has implications for learning and skills acquisition. Complex learning situations such as surgical skills training can potentially induce a cognitive overload, inhibiting learning. This study aims to compare CL in traditional cadaveric dissection training and virtual reality (VR) simulation training of mastoidectomy. A prospective, crossover study. Participants performed cadaveric dissection before VR simulation of the procedure or vice versa. CL was estimated by secondary-task reaction time testing at baseline and during the procedure in both training modalities. The national Danish temporal bone course. A total of 40 novice otorhinolaryngology residents. Reaction time was increased by 20% in VR simulation training and 55% in cadaveric dissection training of mastoidectomy compared with baseline measurements. Traditional dissection training increased CL significantly more than VR simulation training (p < 0.001). VR simulation training imposed a lower CL than traditional cadaveric dissection training of mastoidectomy. Learning complex surgical skills can be a challenge for the novice and mastoidectomy skills training could potentially be optimized by employing VR simulation training first because of the lower CL. Traditional dissection training could then be used to supplement skills training after basic competencies have been acquired in the VR simulation. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Assembly Line Efficiency Improvement by Using WITNESS Simulation Software
NASA Astrophysics Data System (ADS)
Yasir, A. S. H. M.; Mohamed, N. M. Z. N.
2018-03-01
In today's competitive world, the efficiency and productivity of the assembly line are essential for a manufacturing company. This paper presents a study of the performance of an existing production line. The actual cycle time was observed and recorded during the working process. The current layout was designed and analysed using the Witness simulation software. The productivity and effectiveness of every single operator were measured to determine operator idle time and busy time. Two new alternative layouts were proposed and analysed using the Witness simulation software to improve the performance of production activities. This research provided a valuable and better understanding of production effectiveness by adjusting the line balancing. After analysing the data, the simulation results from the current layout and the proposed plans were tabulated to compare the improvement in efficiency and productivity. The proposed design plan showed an increase in yield and productivity compared to the current arrangement. This research was carried out in company XYZ, one of the automotive premises in Pahang, Malaysia.
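A standard measure behind such line-balancing comparisons (a textbook definition, not a formula quoted from the paper) is the line balance efficiency:

    \text{Line efficiency} = \frac{\sum_{i=1}^{m} t_i}{n \cdot c} \times 100\%

where t_i are the task times, n is the number of workstations (operators), and c is the cycle time; reassigning tasks to reduce operator idle time raises this ratio.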
Power in the loop real time simulation platform for renewable energy generation
NASA Astrophysics Data System (ADS)
Li, Yang; Shi, Wenhui; Zhang, Xing; He, Guoqing
2018-02-01
Nowadays, large-scale renewable energy sources are being connected to the power system, and real-time simulation platforms are widely used to carry out research on integration control algorithms, power system stability, etc. Compared to traditional pure digital simulation and hardware-in-the-loop simulation, power-in-the-loop simulation has higher accuracy and reliability. In this paper, a power-in-the-loop analog-digital hybrid simulation platform has been built; it can be used not only for a single generation unit connecting to the grid, but also for multiple new energy generation units connecting to the grid. A wind generator inertia control experiment was carried out on the platform. The structure of the inertia control platform was studied, and the results verify that the platform meets the needs of power-in-the-loop real-time simulation for renewable energy.
State of the evidence on simulation-based training for laparoscopic surgery: a systematic review.
Zendejas, Benjamin; Brydges, Ryan; Hamstra, Stanley J; Cook, David A
2013-04-01
Summarize the outcomes and best practices of simulation training for laparoscopic surgery. Simulation-based training for laparoscopic surgery has become a mainstay of surgical training. Much new evidence has accrued since previous reviews were published. We systematically searched the literature through May 2011 for studies evaluating simulation, in comparison with no intervention or an alternate training activity, for training health professionals in laparoscopic surgery. Outcomes were classified as satisfaction, knowledge, skills (in a test setting) of time (to perform the task), process (eg, performance rating), product (eg, knot strength), and behaviors when caring for patients. We used random effects to pool effect sizes. From 10,903 articles screened, we identified 219 eligible studies enrolling 7138 trainees, including 91 (42%) randomized trials. For comparisons with no intervention (n = 151 studies), pooled effect size (ES) favored simulation for outcomes of knowledge (1.18; N = 9 studies), skills time (1.13; N = 89), skills process (1.23; N = 114), skills product (1.09; N = 7), behavior time (1.15; N = 7), behavior process (1.22; N = 15), and patient effects (1.28; N = 1), all P < 0.05. When compared with nonsimulation instruction (n = 3 studies), results significantly favored simulation for outcomes of skills time (ES, 0.75) and skills process (ES, 0.54). Comparisons between different simulation interventions (n = 79 studies) clarified best practices. For example, in comparison with virtual reality, box trainers have similar effects for process skills outcomes and seem to be superior for outcomes of satisfaction and skills time. Simulation-based laparoscopic surgery training of health professionals has large benefits when compared with no intervention and is moderately more effective than nonsimulation instruction.
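The random-effects pooling step can be sketched with a generic DerSimonian-Laird calculation (an illustrative implementation, not the authors' analysis code; the effect sizes and within-study variances in the example are hypothetical).

    import numpy as np

    def pool_random_effects(effects, variances):
        """effects: per-study effect sizes; variances: per-study (within-study) variances."""
        effects, variances = np.asarray(effects, float), np.asarray(variances, float)
        w = 1.0 / variances                                 # fixed-effect (inverse-variance) weights
        fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed) ** 2)              # Cochran's Q heterogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)       # between-study variance estimate
        w_star = 1.0 / (variances + tau2)                   # random-effects weights
        pooled = np.sum(w_star * effects) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return pooled, se

    print(pool_random_effects([1.1, 0.9, 1.4], [0.04, 0.05, 0.06]))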
Faster protein folding using enhanced conformational sampling of molecular dynamics simulation.
Kamberaj, Hiqmet
2018-05-01
In this study, we applied a swarm particle-like molecular dynamics (SPMD) approach to enhance the conformational sampling of replica exchange simulations. In particular, the approach showed a significant improvement in the sampling efficiency of conformational phase space when combined with the replica exchange method (REM) in computer simulations of peptide/protein folding. First, we introduce the augmented dynamical system of equations and demonstrate the stability of the algorithm. Then we illustrate the approach using different fully atomistic and coarse-grained model systems, comparing them with the standard replica exchange method. In addition, we applied SPMD simulation to calculate the time correlation functions of the transitions on a two-dimensional surface to demonstrate the enhancement of transition path sampling. Our results showed that the folded structure can be obtained in a shorter simulation time with the new method than with the non-augmented dynamical system: typically in less than 0.5 ns of replica exchange runs when the native folded structure is known, and within a simulation time scale of 40 ns in the case of blind structure prediction. Furthermore, the root mean square deviations from the reference structures were less than 2 Å. To demonstrate the performance of the new method, we also implemented three simulation protocols using the CHARMM software. Comparisons are also performed with the standard targeted molecular dynamics simulation method. Copyright © 2018 Elsevier Inc. All rights reserved.
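The enhanced sampling builds on the replica exchange method. The sketch below shows only the standard Metropolis swap criterion between two replicas at different temperatures, which REM-based schemes such as the one described rely on; the SPMD augmentation itself is not reproduced, and all temperatures and energies are illustrative.

```python
import math
import random

KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def attempt_swap(E_i, E_j, T_i, T_j):
    """Metropolis criterion for exchanging configurations of two replicas."""
    beta_i, beta_j = 1.0 / (KB * T_i), 1.0 / (KB * T_j)
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0.0 or random.random() < math.exp(delta)

# Attempt nearest-neighbour swaps across an illustrative temperature ladder
temps = [300.0, 330.0, 363.0, 400.0]            # K
energies = [-1200.0, -1150.0, -1100.0, -1040.0]  # kcal/mol, made up
for i in range(len(temps) - 1):
    print(i, i + 1, attempt_swap(energies[i], energies[i + 1], temps[i], temps[i + 1]))
```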
Efficient Simulation of Explicitly Solvated Proteins in the Well-Tempered Ensemble.
Deighan, Michael; Bonomi, Massimiliano; Pfaendtner, Jim
2012-07-10
Herein, we report significant reduction in the cost of combined parallel tempering and metadynamics simulations (PTMetaD). The efficiency boost is achieved using the recently proposed well-tempered ensemble (WTE) algorithm. We studied the convergence of PTMetaD-WTE conformational sampling and free energy reconstruction of an explicitly solvated 20-residue tryptophan-cage protein (trp-cage). A set of PTMetaD-WTE simulations was compared to a corresponding standard PTMetaD simulation. The properties of PTMetaD-WTE and the convergence of the calculations were compared. The roles of the number of replicas, total simulation time, and adjustable WTE parameter γ were studied.
A hybrid method of estimating pulsating flow parameters in the space-time domain
NASA Astrophysics Data System (ADS)
Pałczyński, Tomasz
2017-05-01
This paper presents a method for estimating pulsating flow parameters in partially open pipes, such as pipelines, internal combustion engine inlets, exhaust pipes and piston compressors. The procedure is based on the method of characteristics, and employs a combination of measurements and simulations. An experimental test rig is described, which enables pressure, temperature and mass flow rate to be measured within a defined cross section. The second part of the paper discusses the main assumptions of a simulation algorithm elaborated in the Matlab/Simulink environment. The simulation results are shown as 3D plots in the space-time domain, and compared with proposed models of phenomena relating to wave propagation, boundary conditions, acoustics and fluid mechanics. The simulation results are finally compared with acoustic phenomena, with an emphasis on the identification of resonant frequencies.
NASA Astrophysics Data System (ADS)
Hochgraf, Kelsey
Auralization methods have been used for a long time to simulate the acoustics of a concert hall for different seat positions. The goal of this thesis was to apply the concept of auralization to a larger audience area that the listener could walk through to compare differences in acoustics for a wide range of seat positions. For this purpose, the acoustics of Rensselaer's Experimental Media and Performing Arts Center (EMPAC) Concert Hall were simulated to create signals for a 136 channel wave field synthesis (WFS) system located at Rensselaer's Collaborative Research Augmented Immersive Virtual Environment (CRAIVE) Laboratory. By allowing multiple people to dynamically experience the concert hall's acoustics at the same time, this research gained perspective on what is important for achieving objective accuracy and subjective plausibility in an auralization. A finite difference time domain (FDTD) simulation on a three-dimensional face-centered cubic grid, combined at a crossover frequency of 800 Hz with a CATT-Acoustic(TM) simulation, was found to have a reverberation time, direct to reverberant sound energy ratio, and early reflection pattern that more closely matched measured data from the hall compared to a CATT-Acoustic(TM) simulation and other hybrid simulations. In the CRAIVE lab, nine experienced listeners found all hybrid auralizations (with varying source location, grid resolution, crossover frequency, and number of loudspeakers) to be more perceptually plausible than the CATT-Acoustic(TM) auralization. The FDTD simulation required two days to compute, while the CATT-Acoustic(TM) simulation required three separate TUCT(TM) computations, each taking four hours, to accommodate the large number of receivers. Given the perceptual advantages realized with WFS for auralization of a large, inhomogeneous sound field, it is recommended that hybrid simulations be used in the future to achieve more accurate and plausible auralizations. Predictions are made for a parallelized version of the simulation code that could achieve such auralizations in less than one hour, making the tool practical for everyday application.
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
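As a reference point for the integrators discussed, the sketch below shows a single step of the standard second-order implicit trapezoidal rule with a Newton solve for the implicit stage. The stiff linear test problem is a stand-in, not a dislocation force model, and the tolerances are illustrative.

```python
import numpy as np

def trapezoidal_step(f, jac, t, y, dt, tol=1e-10, max_iter=20):
    """One step of the implicit trapezoidal rule; the stage equation is solved with Newton's method."""
    y_new = y + dt * f(t, y)  # explicit Euler predictor as the initial Newton guess
    for _ in range(max_iter):
        g = y_new - y - 0.5 * dt * (f(t, y) + f(t + dt, y_new))  # residual of the implicit equation
        if np.linalg.norm(g) < tol:
            break
        J = np.eye(len(y)) - 0.5 * dt * jac(t + dt, y_new)        # Jacobian of the residual
        y_new = y_new - np.linalg.solve(J, g)
    return y_new

# Illustrative stiff linear test problem y' = A y (not a dislocation force model)
A = np.array([[-1000.0, 0.0], [1.0, -1.0]])
f = lambda t, y: A @ y
jac = lambda t, y: A
y = np.array([1.0, 1.0])
for _ in range(10):
    y = trapezoidal_step(f, jac, 0.0, y, 0.01)
print(y)
```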
Distributed simulation using a real-time shared memory network
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Mattern, Duane L.; Wong, Edmond; Musgrave, Jeffrey L.
1993-01-01
The Advanced Control Technology Branch of the NASA Lewis Research Center performs research in the area of advanced digital controls for aeronautic and space propulsion systems. This work requires the real-time implementation of both control software and complex dynamical models of the propulsion system. We are implementing these systems in a distributed, multi-vendor computer environment. Therefore, a need exists for real-time communication and synchronization between the distributed multi-vendor computers. A shared memory network is a potential solution which offers several advantages over other real-time communication approaches. A candidate shared memory network was tested for basic performance. The shared memory network was then used to implement a distributed simulation of a ramjet engine. The accuracy and execution time of the distributed simulation were measured and compared to the performance of the non-partitioned simulation. The ease of partitioning the simulation, the minimal time required to develop the communication between the processors, and the resulting execution time all indicate that the shared memory network is a real-time communication technique worthy of serious consideration.
NASA Astrophysics Data System (ADS)
Martin, Ffion A.; Warrior, Nicholas A.; Simacek, Pavel; Advani, Suresh; Hughes, Adrian; Darlington, Roger; Senan, Eissa
2018-03-01
Very short manufacture cycle times are required if continuous carbon fibre and epoxy composite components are to be economically viable solutions for high volume composite production for the automotive industry. Here, a manufacturing process variant of resin transfer moulding (RTM) targets a reduction of in-mould manufacture time by reducing the time to inject and cure components. The process involves two stages: resin injection followed by compression. A flow simulation methodology using an RTM solver for the process has been developed. This paper compares the simulation predictions to experiments performed using industrial equipment. The issues encountered during manufacturing are included in the simulation, and their sensitivity to the process is explored.
Foley, J. M.; Gooding, A. L.; Thames, A. D.; Ettenhofer, M. L.; Kim, M. S.; Castellon, S. A.; Marcotte, T. D.; Sadek, J. R.; Heaton, R. K.; van Gorp, W. G.; Hinkin, C. H.
2013-01-01
Objectives To examine the effects of aging and neuropsychological (NP) impairment on driving simulator performance within a human immunodeficiency virus (HIV)-infected cohort. Methods Participants included 79 HIV-infected adults (n = 58 > age 50, n = 21 ≤ 40) who completed a NP battery and a personal computer-based driving simulator task. Outcome variables included total completion time (time) and number of city blocks to complete the task (blocks). Results Compared to the younger group, the older group was less efficient in their route finding (blocks over optimum: 25.9 [20.1] vs 14.4 [16.9]; P = .02) and took longer to complete the task (time: 1297.6 [577.6] vs 804.4 [458.5] seconds; P = .001). Regression models within the older adult group indicated that visuospatial abilities (blocks: b = –0.40, P < .001; time: b = –0.40, P = .001) and attention (blocks: b = –0.49, P = .001; time: b = –0.42, P = .006) independently predicted simulator performance. The NP-impaired group performed more poorly on both time and blocks, compared to the NP normal group. Conclusions Older HIV-infected adults may be at risk of driving-related functional compromise secondary to HIV-associated neurocognitive decline. PMID:23314403
Driving performance in a power wheelchair simulator.
Archambault, Philippe S; Tremblay, Stéphanie; Cachecho, Sarah; Routhier, François; Boissy, Patrick
2012-05-01
A power wheelchair simulator can allow users to safely experience various driving tasks. For such training to be efficient, it is important that driving performance be equivalent to that in a real wheelchair. This study aimed at comparing driving performance in a real and in a simulated environment. Two groups of healthy young adults performed different driving tasks, either in a real power wheelchair or in a simulator. Smoothness of joystick control as well as the time necessary to complete each task were recorded and compared between the two groups. Driving strategies were analysed from video recordings. The sense of presence, of really being in the virtual environment, was assessed through a questionnaire. Smoothness of joystick control was the same in the real and virtual groups. Task completion time was higher in the simulator for the more difficult tasks. Both groups showed similar strategies and difficulties. The simulator generated a good sense of presence, which is important for motivation. Performance was very similar for power wheelchair driving in the simulator or in real life. Thus, the simulator could potentially be used to complement training of individuals who require a power wheelchair and use a regular joystick.
Non-Linear Harmonic flow simulations of a High-Head Francis Turbine test case
NASA Astrophysics Data System (ADS)
Lestriez, R.; Amet, E.; Tartinville, B.; Hirsch, C.
2016-11-01
This work investigates the use of the non-linear harmonic (NLH) method for a high-head Francis turbine, the Francis99 workshop test case. The NLH method relies on a Fourier decomposition of the unsteady flow components in harmonics of the Blade Passing Frequencies (BPF), which are the fundamentals of the periodic disturbances generated by the adjacent blade rows. The unsteady flow solution is obtained by marching in pseudo-time to a steady-state solution of the transport equations associated with the time-mean, the BPFs and their harmonics. Thanks to this transposition into the frequency domain, meshing only one blade channel is sufficient, as for a steady flow simulation. Notable benefits in terms of computing costs and engineering time can therefore be obtained compared to the classical time-marching approach using sliding grid techniques. The method has been applied for three operating points of the Francis99 workshop high-head Francis turbine. Steady and NLH flow simulations have been carried out for these configurations. The impact of grid size and near-wall refinement is analysed for all operating points for the steady simulations and for the Best Efficiency Point (BEP) for the NLH simulations. Then, NLH results for a selected grid size are compared for the three different operating points, reproducing the tendencies observed in the experiment.
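The NLH ansatz represents the unsteady flow at a point as a time-mean value plus a truncated Fourier series in harmonics of the blade passing frequency. A minimal sketch of that reconstruction step is given below; the BPF, mean value, and harmonic amplitudes are illustrative placeholders, not results for the Francis99 case.

```python
import numpy as np

def reconstruct_unsteady(u_mean, harmonics, omega_bpf, t):
    """Rebuild a periodic unsteady signal from its time-mean value plus complex
    harmonic amplitudes at integer multiples of the blade passing frequency (BPF)."""
    u = np.full_like(t, u_mean, dtype=float)
    for k, amp in enumerate(harmonics, start=1):
        u += np.real(amp * np.exp(1j * k * omega_bpf * t))
    return u

t = np.linspace(0.0, 2e-2, 500)        # seconds
omega = 2.0 * np.pi * 600.0            # BPF of 600 Hz, illustrative
print(reconstruct_unsteady(100.0, [5.0 + 2.0j, 1.0 - 0.5j], omega, t)[:3])
```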
Dataflow computing approach in high-speed digital simulation
NASA Technical Reports Server (NTRS)
Ercegovac, M. D.; Karplus, W. J.
1984-01-01
New computational tools and methodologies for the digital simulation of continuous systems were explored. Programmability and cost-effective performance in multiprocessor organizations for real-time simulation were investigated. The approach is based on functional-style languages and data flow computing principles, which allow for the natural representation of parallelism in algorithms and provide a suitable basis for the design of cost-effective, high-performance distributed systems. The objectives of this research are to: (1) perform a comparative evaluation of several existing data flow languages and develop an experimental data flow language suitable for real-time simulation using multiprocessor systems; (2) investigate the main issues that arise in the architecture and organization of data flow multiprocessors for real-time simulation; and (3) develop and apply performance evaluation models in typical applications.
Technology-enhanced simulation in emergency medicine: a systematic review and meta-analysis.
Ilgen, Jonathan S; Sherbino, Jonathan; Cook, David A
2013-02-01
Technology-enhanced simulation is used frequently in emergency medicine (EM) training programs. Evidence for its effectiveness, however, remains unclear. The objective of this study was to evaluate the effectiveness of technology-enhanced simulation for training in EM and identify instructional design features associated with improved outcomes by conducting a systematic review. The authors systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Original research articles in any language were selected if they compared simulation to no intervention or another educational activity for the purposes of training EM health professionals (including student and practicing physicians, midlevel providers, nurses, and prehospital providers). Reviewers evaluated study quality and abstracted information on learners, instructional design (curricular integration, feedback, repetitive practice, mastery learning), and outcomes. From a collection of 10,903 articles, 85 eligible studies enrolling 6,099 EM learners were identified. Of these, 56 studies compared simulation to no intervention, 12 compared simulation with another form of instruction, and 19 compared two forms of simulation. Effect sizes were pooled using a random-effects model. Heterogeneity among these studies was large (I² ≥ 50%). Among studies comparing simulation to no intervention, pooled effect sizes were large (range = 1.13 to 1.48) for knowledge, time, and skills and small to moderate for behaviors with patients (0.62) and patient effects (0.43; all p < 0.02 except patient effects p = 0.12). Among comparisons between simulation and other forms of instruction, the pooled effect sizes were small (≤ 0.33) for knowledge, time, and process skills (all p > 0.1). Qualitative comparisons of different simulation curricula are limited, although feedback, mastery learning, and higher fidelity were associated with improved learning outcomes. Technology-enhanced simulation for EM learners is associated with moderate or large favorable effects in comparison with no intervention and generally small and nonsignificant benefits in comparison with other instruction. Future research should investigate the features that lead to effective simulation-based instructional design. © 2013 by the Society for Academic Emergency Medicine.
OpenACC performance for simulating 2D radial dambreak using FVM HLLE flux
NASA Astrophysics Data System (ADS)
Gunawan, P. H.; Pahlevi, M. R.
2018-03-01
The aim of this paper is to investigate the performance of the OpenACC platform for computing a 2D radial dambreak. Here, the shallow water equations are used to describe and simulate the 2D radial dambreak with a finite volume method (FVM) using the HLLE flux. OpenACC is a parallel computing platform based on GPU cores. In this research, the platform is used to minimize the computational time of the numerical scheme. The results show that by using OpenACC the computational time is reduced. For the dry and wet radial dambreak simulations using 2048 grids, the parallel computational times are 575.984 s and 584.830 s, respectively. These results demonstrate the benefit of OpenACC when compared with the serial times of the dry and wet radial dambreak simulations, which are 28047.500 s and 29269.40 s, respectively.
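For reference, the HLLE interface flux used with the finite volume method is illustrated below in a minimal one-dimensional shallow-water sketch. The paper's case is two-dimensional and OpenACC-accelerated; this serial NumPy version only shows the flux formula, and the left/right states are made up.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def hlle_flux(hL, huL, hR, huR):
    """HLLE numerical flux for the 1D shallow water equations at one cell interface."""
    uL, uR = huL / hL, huR / hR
    cL, cR = np.sqrt(G * hL), np.sqrt(G * hR)
    sL = min(uL - cL, uR - cR, 0.0)   # leftmost signal speed estimate (clamped)
    sR = max(uL + cL, uR + cR, 0.0)   # rightmost signal speed estimate (clamped)
    FL = np.array([huL, huL * uL + 0.5 * G * hL ** 2])
    FR = np.array([huR, huR * uR + 0.5 * G * hR ** 2])
    UL, UR = np.array([hL, huL]), np.array([hR, huR])
    if sR == sL:                      # degenerate quiescent interface
        return FL
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

# Example: dambreak-like interface, deep water on the left, shallow on the right
print(hlle_flux(2.0, 0.0, 0.5, 0.0))
```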
NASA Astrophysics Data System (ADS)
Yang, Q.; Wang, Y.; Zhang, J.; Delgado, J.
2017-05-01
Accurate and reliable groundwater level forecasting models can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. In this paper, three time series analysis methods, Holt-Winters (HW), integrated time series (ITS), and seasonal autoregressive integrated moving average (SARIMA), are explored to simulate the groundwater level in a coastal aquifer, China. The monthly groundwater table depth data collected in a long time series from 2000 to 2011 are simulated and compared with these three time series models. The error criteria are estimated using the coefficient of determination (R²), the Nash-Sutcliffe model efficiency coefficient (E), and the root-mean-squared error. The results indicate that all three models are accurate in reproducing the historical time series of groundwater levels. The comparison of the three models shows that the HW model is more accurate in predicting the groundwater levels than the SARIMA and ITS models. It is recommended that additional studies explore this proposed method, which can be used in turn to facilitate the development and implementation of more effective and sustainable groundwater management strategies.
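A minimal sketch of fitting and scoring two of the named models (Holt-Winters and SARIMA) with statsmodels is shown below. The series is synthetic stand-in data rather than the observed groundwater depths, the model orders are illustrative, and the ITS model is not covered.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def nse(obs, sim):
    """Nash-Sutcliffe model efficiency coefficient E."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

# Stand-in monthly series; the study used 2000-2011 groundwater table depths
depth = pd.Series(np.random.default_rng(0).normal(10.0, 1.0, 144),
                  index=pd.date_range("2000-01", periods=144, freq="MS"))
train, test = depth[:-12], depth[-12:]

hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
sarima = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)

for name, fc in [("HW", hw.forecast(12)), ("SARIMA", sarima.forecast(12))]:
    print(name, "E =", round(nse(test, fc), 3), "RMSE =", round(rmse(test, fc), 3))
```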
Hopper Flow: Experiments and Simulation
NASA Astrophysics Data System (ADS)
Li, Zhusong; Shattuck, Mark
2013-03-01
Jamming and intermittent granular flow are important problems in industry, and the vertical hopper is a canonical example. Clogging of granular hoppers accounts for significant losses across many industries. We use realistic DEM simulations of gravity driven flow in a hopper to examine flow and jamming of 2D disks and compare with identical companion experiments. We use experimental data to validate simulation parameters and the form of the inter-particle force law. We measure and compare flow rate, emptying times, jamming statistics, and flow fields as a function of opening angle and opening size in both experiment and simulations. Supported by: NSF-CBET-0968013
Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo
2015-07-01
Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
Simulation-based bronchoscopy training: systematic review and meta-analysis.
Kennedy, Cassie C; Maldonado, Fabien; Cook, David A
2013-07-01
Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n=8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n=7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, -1.47 to 2.69]) and process (0.33 [95% CI, -1.46 to 2.11]) outcomes (n=2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few.
Real-time simulation of an F110/STOVL turbofan engine
NASA Technical Reports Server (NTRS)
Drummond, Colin K.; Ouzts, Peter J.
1989-01-01
A traditional F110-type turbofan engine model was extended to include a ventral nozzle and two thrust-augmenting ejectors for Short Take-Off Vertical Landing (STOVL) aircraft applications. Development of the real-time F110/STOVL simulation required special attention to the modeling approach to component performance maps, the low pressure turbine exit mixing region, and the tailpipe dynamic approximation. Simulation validation was performed by comparing output from the ADSIM simulation with output from a validated F110/STOVL General Electric Aircraft Engines FORTRAN deck. General Electric substantiated basic engine component characteristics through factory testing and full scale ejector data.
Koch Hansen, Lars; Mohammed, Anna; Pedersen, Magnus; Folkestad, Lars; Brodersen, Jacob; Hey, Thomas; Lyhne Christensen, Nicolaj; Carter-Storch, Rasmus; Bendix, Kristoffer; Hansen, Morten R; Brabrand, Mikkel
2016-12-01
Reducing hands-off time during cardiopulmonary resuscitation (CPR) is believed to increase survival after cardiac arrests because of the sustaining of organ perfusion. The aim of our study was to investigate whether charging the defibrillator before rhythm analyses and shock delivery significantly reduced hands-off time compared with the European Resuscitation Council (ERC) 2010 CPR guideline algorithm in full-scale cardiac arrest scenarios. The study was designed as a full-scale cardiac arrest simulation study including administration of drugs. Participants were randomized into using the Stop-Only-While-Shocking (SOWS) algorithm or the ERC2010 algorithm. In SOWS, chest compressions were only interrupted for a post-charging rhythm analysis and immediate shock delivery. A Resusci Anne HLR-D manikin and a LIFEPACK 20 defibrillator were used. The manikin recorded time and chest compressions. A sample size calculation with an α of 0.05 and 80% power showed that we should test four scenarios with each algorithm. Twenty-nine physicians participated in 11 scenarios. Hands-off time was significantly reduced by 17% using the SOWS algorithm compared with ERC2010 [22.1% (SD 2.3) hands-off time vs. 26.6% (SD 4.8); P<0.05]. In full-scale cardiac arrest simulations, a minor change consisting of charging the defibrillator before rhythm check reduces hands-off time by 17% compared with ERC2010 guidelines.
Assessment of the effects of horizontal grid resolution on long ...
The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United States are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trends analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time and storage requirements, which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain and coastal regions. The National Exposure Research Laboratory's Atmospheric Modeling Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment.
Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units
NASA Astrophysics Data System (ADS)
Kemal, Jonathan Yashar
For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
Supercomputers ready for use as discovery machines for neuroscience.
Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus
2012-01-01
NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10⁸ neurons and 10¹² synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience.
Comparing TID simulations using 3-D ray tracing and mirror reflection
NASA Astrophysics Data System (ADS)
Huang, X.; Reinisch, B. W.; Sales, G. S.; Paznukhov, V. V.; Galkin, I. A.
2016-04-01
Measuring the time variations of Doppler frequencies and angles of arrival (AoA) of ionospherically reflected HF waves has been proposed as a means of detecting the occurrence of traveling ionospheric disturbances (TIDs). Simulations are made using ray tracing through the International Reference Ionosphere (IRI) electron density model in an effort to reproduce measured signatures. The TID is represented by a wavelike perturbation of the 3-D electron density traveling horizontally in the ionosphere with an amplitude that varies sinusoidally with time. By judiciously selecting the TID parameters the ray tracing simulation reproduces the observed Doppler frequencies and AoAs. Ray tracing in a 3-D realistic ionosphere is, however, excessively time consuming considering the involved homing procedures. It is shown that a carefully selected reflecting corrugated mirror can reproduce the time variations of the AoA and Doppler frequency. The results from the ray tracing through the IRI model ionosphere and the mirror model reflections are compared to assess the applicability of the mirror-reflection model.
NASA Astrophysics Data System (ADS)
Shafer, S. L.; Bartlein, P. J.
2017-12-01
The period from 15-10 ka was a time of rapid vegetation changes in North America. Continental ice sheets in northern North America were receding, exposing new habitat for vegetation, and regions distant from the ice sheets experienced equally large environmental changes. Northern hemisphere temperatures during this period were increasing, promoting transitions from cold-adapted to temperate plant taxa at mid-latitudes. Long, transient paleovegetation simulations can provide important information on vegetation responses to climate changes, including both the spatial dynamics and rates of species distribution changes over time. Paleovegetation simulations also can fill the spatial and temporal gaps in observed paleovegetation records (e.g., pollen data from lake sediments), allowing us to test hypotheses about past vegetation changes (e.g., the location of past refugia). We used the CCSM3 TraCE transient climate simulation as input for LPJ-GUESS, a general ecosystem model, to simulate vegetation changes from 15-10 ka for parts of western North America at mid-latitudes (35-55° N). For these simulations, LPJ-GUESS was parameterized to simulate key tree taxa for western North America (e.g., Pseudotsuga, Tsuga, Quercus, etc.). The CCSM3 TraCE transient climate simulation data were regridded onto a 10-minute grid of the study area. We analyzed the simulated spatial and temporal dynamics of these taxa and compared the simulated changes with observed paleovegetation changes recorded in pollen and plant macrofossil data (e.g., data from the Neotoma Paleoecology Database). In general, the LPJ-GUESS simulations reproduce the general patterns of paleovegetation responses to climate change, although the timing of some simulated vegetation changes does not match the observed paleovegetation record. We describe the areas and time periods with the greatest data-model agreement and disagreement, and discuss some of the strengths and weaknesses of the simulated climate and vegetation data. The magnitude and rate of the simulated past vegetation changes are compared with projected future vegetation changes for the region.
Stefanidis, Dimitrios; Scerbo, Mark W; Montero, Paul N; Acker, Christina E; Smith, Warren D
2012-01-01
We hypothesized that novices will perform better in the operating room after simulator training to automaticity compared with traditional proficiency-based training (the current standard training paradigm). Simulator-acquired skill translates to the operating room, but the skill transfer is incomplete. Secondary task metrics reflect the ability of trainees to multitask (automaticity) and may improve performance assessment on simulators and skill transfer by indicating when learning is complete. Novices (N = 30) were enrolled in an IRB-approved, blinded, randomized, controlled trial. Participants were randomized into an intervention (n = 20) and a control (n = 10) group. The intervention group practiced on the FLS suturing task until they achieved expert levels of time and errors (proficiency), were tested on a live porcine fundoplication model, continued simulator training until they achieved expert levels on a visual spatial secondary task (automaticity), and were retested on the operating room (OR) model. The control group participated only during testing sessions. Performance scores were compared within and between groups during testing sessions. Intervention group participants achieved proficiency after 54 ± 14 repetitions and automaticity after an additional 109 ± 57 repetitions. Participants achieved better scores in the OR after automaticity training [345 (range, 0-537)] compared with after proficiency-based training [220 (range, 0-452); P < 0.001]. Simulator training to automaticity takes more time but is superior to proficiency-based training, as it leads to improved skill acquisition and transfer. Secondary task metrics that reflect trainee automaticity should be implemented during simulator training to improve learning and skill transfer.
Differentiating levels of surgical experience on a virtual reality temporal bone simulator.
Zhao, Yi C; Kennedy, Gregor; Hall, Richard; O'Leary, Stephen
2010-11-01
Virtual reality simulation is increasingly being incorporated into surgical training and may have a role in temporal bone surgical education. Here we test whether metrics generated by a virtual reality surgical simulation can differentiate between three levels of experience, namely novices, otolaryngology residents, and experienced qualified surgeons. Cohort study. Royal Victorian Eye and Ear Hospital. Twenty-seven participants were recruited. There were 12 experts, six residents, and nine novices. After orientation, participants were asked to perform a modified radical mastoidectomy on the simulator. Comparisons of time taken, injury to structures, and forces exerted were made between the groups to determine which specific metrics would discriminate experience levels. Experts completed the simulated task in significantly shorter time than the other two groups (experts 22 minutes, residents 36 minutes, and novices 46 minutes; P = 0.001). Novices exerted significantly higher average forces when dissecting close to vital structures compared with experts (0.24 Newton [N] vs 0.13 N, P = 0.002). Novices were also more likely to injure structures such as dura compared to experts (23 injuries vs 3 injuries, P = 0.001). Compared with residents, the experts modulated their force between initial cortex dissection and dissection close to vital structures. Using the combination of these metrics, we were able to correctly classify the participants' level of experience 90 percent of the time. This preliminary study shows that measurements of performance obtained from within a virtual reality simulator can differentiate between levels of users' experience. These results suggest that simulator training may have a role in temporal bone training beyond foundational training. Copyright © 2010 American Academy of Otolaryngology–Head and Neck Surgery Foundation. Published by Mosby, Inc. All rights reserved.
Computer model to simulate testing at the National Transonic Facility
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.
1995-01-01
A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.
NOTE: Implementation of angular response function modeling in SPECT simulations with GATE
NASA Astrophysics Data System (ADS)
Descourt, P.; Carlier, T.; Du, Y.; Song, X.; Buvat, I.; Frey, E. C.; Bardies, M.; Tsui, B. M. W.; Visvikis, D.
2010-05-01
Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.
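The core of the ARF idea is to replace explicit photon tracking through the collimator with a lookup of a precomputed detection probability as a function of incidence direction (and, in full implementations, energy). The sketch below illustrates only that lookup step; the table values, binning, and function names are placeholders and do not reproduce GATE's actual ARF implementation.

```python
import numpy as np

# Hypothetical precomputed ARF table: detection probability versus incidence angle
# (a full implementation also bins in azimuthal angle and photon energy).
theta_edges = np.linspace(0.0, np.deg2rad(5.0), 51)   # 0-5 degrees off the collimator axis
arf_table = np.exp(-np.linspace(0.0, 6.0, 50))        # made-up, monotonically decreasing

def arf_weight(photon_dir, detector_normal):
    """Weight contributed to the projection by one photon, obtained from an ARF
    lookup instead of explicit tracking through the collimator septa."""
    d = np.asarray(photon_dir, dtype=float)
    d = d / np.linalg.norm(d)
    cos_t = float(np.dot(d, detector_normal))
    if cos_t <= 0.0:
        return 0.0                                     # photon travels away from this head
    theta = np.arccos(min(cos_t, 1.0))
    idx = np.searchsorted(theta_edges, theta, side="right") - 1
    if idx < 0 or idx >= arf_table.size:
        return 0.0                                     # outside the tabulated acceptance
    return float(arf_table[idx])

print(arf_weight([0.01, 0.0, 1.0], np.array([0.0, 0.0, 1.0])))
```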
Regan, R. Steve; Niswonger, Richard G.; Markstrom, Steven L.; Barlow, Paul M.
2015-10-02
The spin-up simulation should be run for a sufficient length of time necessary to establish antecedent conditions throughout a model domain. Each GSFLOW application can require different lengths of time to account for the hydrologic stresses to propagate through a coupled groundwater and surface-water system. Typically, groundwater hydrologic processes require many years to come into equilibrium with dynamic climate and other forcing (or stress) data, such as precipitation and well pumping, whereas runoff-dominated surface-water processes respond relatively quickly. Use of a spin-up simulation can substantially reduce execution-time requirements for applications where the time period of interest is small compared to the time for hydrologic memory; thus, use of the restart option can be an efficient strategy for forecast and calibration simulations that require multiple simulations starting from the same day.
NASA Astrophysics Data System (ADS)
van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András
2015-04-01
For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, intelligent design concepts often use uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The presented study compares four different approaches of precipitation models regarding their ability to reproduce rainfall and runoff characteristics. These include one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), one downscaling approach from a regional climate model, and one disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event based statistics like mean dry spell and wet spell duration, wet spell amount and intensity, long term means of precipitation sum and number of events, and extreme value distributions for different durations. Then they are compared regarding simulated discharge characteristics using an urban hydrological model on a fictitious sewage network. First results show a principal suitability of all rainfall models but with different strengths and weaknesses regarding the different rainfall and runoff characteristics considered.
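Several of the event-based comparison criteria named above (mean wet and dry spell duration, wet spell amount) can be computed directly from an equidistant precipitation series. A minimal sketch is given below, with an illustrative wet/dry threshold and made-up 5-minute values; the project's full criteria, such as extreme-value distributions for different durations, are not shown.

```python
import numpy as np

def spell_statistics(rain, wet_threshold=0.1):
    """Mean wet and dry spell durations (in time steps) and mean wet-spell amount
    from an equidistant precipitation series, e.g. a 5-minute synthetic series."""
    rain = np.asarray(rain, dtype=float)
    wet = rain > wet_threshold
    # indices where the wet/dry state changes split the series into spells
    change = np.flatnonzero(np.diff(wet.astype(int))) + 1
    spells = np.split(np.arange(wet.size), change)
    wet_len = [len(s) for s in spells if wet[s[0]]]
    dry_len = [len(s) for s in spells if not wet[s[0]]]
    wet_amt = [float(np.sum(rain[s])) for s in spells if wet[s[0]]]
    return float(np.mean(wet_len)), float(np.mean(dry_len)), float(np.mean(wet_amt))

series = np.array([0.0, 0.0, 0.3, 0.5, 0.0, 0.0, 0.0, 1.2, 0.8, 0.0])  # mm per 5 min, invented
print(spell_statistics(series))
```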
Simulator for Microlens Planet Surveys
NASA Astrophysics Data System (ADS)
Ipatov, Sergei I.; Horne, Keith; Alsubai, Khalid A.; Bramich, Daniel M.; Dominik, Martin; Hundertmark, Markus P. G.; Liebig, Christine; Snodgrass, Colin D. B.; Street, Rachel A.; Tsapras, Yiannis
2014-04-01
We summarize the status of a computer simulator for microlens planet surveys. The simulator generates synthetic light curves of microlensing events observed with specified networks of telescopes over specified periods of time. Particular attention is paid to models for sky brightness and seeing, calibrated by fitting to data from the OGLE survey and RoboNet observations in 2011. Time intervals during which events are observable are identified by accounting for positions of the Sun and the Moon, and other restrictions on telescope pointing. Simulated observations are then generated for an algorithm that adjusts target priorities in real time with the aim of maximizing planet detection zone area summed over all the available events. The exoplanet detection capability of observations was compared for several telescopes.
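A minimal sketch of generating one synthetic light curve for a single-lens (Paczynski) event is shown below. The event parameters are illustrative, a Gaussian scatter term stands in for the calibrated sky-brightness and seeing models described, and no planetary anomaly or telescope-network scheduling is included.

```python
import numpy as np

def paczynski_magnification(t, t0, tE, u0):
    """Point-source point-lens magnification A(t) for a microlensing event."""
    u = np.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)   # lens-source separation in Einstein radii
    return (u ** 2 + 2.0) / (u * np.sqrt(u ** 2 + 4.0))

def synthetic_light_curve(times, t0, tE, u0, m_base, scatter):
    """Baseline magnitude modulated by the event, plus Gaussian noise standing in
    for sky-brightness and seeing scatter."""
    mag = m_base - 2.5 * np.log10(paczynski_magnification(times, t0, tE, u0))
    return mag + np.random.default_rng(1).normal(0.0, scatter, times.size)

t = np.linspace(-30.0, 30.0, 200)                 # days relative to the peak
lc = synthetic_light_curve(t, t0=0.0, tE=15.0, u0=0.1, m_base=18.0, scatter=0.02)
print(lc.min(), lc.max())
```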
NASA Astrophysics Data System (ADS)
Hur, Min Young; Verboncoeur, John; Lee, Hae June
2014-10-01
Particle-in-cell (PIC) simulations have high fidelity for plasma devices requiring transient kinetic modelling compared with fluid simulations. They rely on fewer approximations of the plasma kinetics but require many particles and grid cells to obtain meaningful results, so the simulation time grows in proportion to the number of particles. Therefore, PIC simulation needs high-performance computing. In this research, a graphics processing unit (GPU) is adopted for high-performance computing of PIC simulations of low-temperature discharge plasmas. GPUs have many-core processors and high memory bandwidth compared with a central processing unit (CPU). NVIDIA GeForce GPUs with hundreds of cores, which offer cost-effective performance, were used for the tests. The PIC code algorithm is divided into two modules: a field solver and a particle mover. The particle mover module is divided into four routines named move, boundary, Monte Carlo collision (MCC), and deposit. Overall, the GPU code solves particle motions as well as the electrostatic potential in two-dimensional geometry almost 30 times faster than a single-CPU code. This work was supported by the Korea Institute of Science Technology Information.
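The abstract names the modules of the PIC cycle (a field solver plus the move, boundary, MCC, and deposit routines). The sketch below is a heavily simplified serial 1D electrostatic version of that cycle in NumPy, intended only to show where each routine sits; it is not the GPU implementation, the units are normalized, and all parameters are illustrative.

```python
import numpy as np

NX, L, DT, QM = 64, 1.0, 1e-3, -1.0            # grid cells, domain length, time step, q/m (normalized)
dx = L / NX
rng = np.random.default_rng(0)
x = rng.uniform(0.0, L, 10000)                  # particle positions
v = rng.normal(0.0, 0.1, x.size)                # particle velocities

def deposit(x):
    """Deposit: particles -> grid charge density, minus a neutralizing background."""
    counts, _ = np.histogram(x, bins=NX, range=(0.0, L))
    return counts / dx - x.size / L

def solve_field(rho):
    """Field solver: periodic Poisson solve via FFT, then E = -d(phi)/dx."""
    k = 2.0 * np.pi * np.fft.fftfreq(NX, dx)
    phi_k = np.zeros(NX, dtype=complex)
    phi_k[1:] = np.fft.fft(rho)[1:] / k[1:] ** 2
    return np.real(np.fft.ifft(-1j * k * phi_k))

def step(x, v):
    rho = deposit(x)
    E = solve_field(rho)
    Ep = E[(x / dx).astype(int) % NX]           # gather field to particles (nearest grid point)
    v = v + QM * Ep * DT                        # move: advance velocities ...
    x = (x + v * DT) % L                        # ... and positions; boundary: periodic wrap
    # MCC (Monte Carlo collisions) would randomize a subset of v here; omitted in this sketch
    return x, v

x, v = step(x, v)
print(x[:3], v[:3])
```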
NASA Astrophysics Data System (ADS)
Amalia, E.; Moelyadi, M. A.; Ihsan, M.
2018-04-01
The flow of air passing around a circular cylinder at a Reynolds number of 250,000 exhibits the von Karman vortex street phenomenon. This phenomenon is captured well only when a suitable turbulence model is used. In this study, several turbulence models available in the software ANSYS Fluent 16.0 were tested to simulate the von Karman vortex street phenomenon, namely k-epsilon, SST k-omega, Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, the effect of time step size on the accuracy of the CFD simulation was examined. The simulations were carried out using two-dimensional and three-dimensional models and then compared with experimental data. For the two-dimensional model, the von Karman vortex street phenomenon was captured successfully by using the SST k-omega turbulence model. For the three-dimensional model, the von Karman vortex street phenomenon was captured by using the Reynolds Stress turbulence model. The time step size affects the smoothness of the drag coefficient curves over time, as well as the running time of the simulation. The smaller the time step size, the better the resulting drag coefficient curves. Smaller time step sizes also give faster computation times.
Janssens, Sarah; Beckmann, Michael; Bonney, Donna
2015-08-01
Simulation training in laparoscopic surgery has been shown to improve surgical performance. To describe the implementation of a laparoscopic simulation training and credentialing program for gynaecology registrars. A pilot program consisting of protected, supervised laparoscopic simulation time, a tailored curriculum and a credentialing process, was developed and implemented. Quantitative measures assessing simulated surgical performance were measured over the simulation training period. Laparoscopic procedures requiring credentialing were assessed for both the frequency of a registrar being the primary operator and the duration of surgery and compared to a presimulation cohort. Qualitative measures regarding quality of surgical training were assessed pre- and postsimulation. Improvements were seen in simulated surgical performance in efficiency domains. Operative time for procedures requiring credentialing was reduced by 12%. Primary operator status in the operating theatre for registrars was unchanged. Registrar assessment of training quality improved. The introduction of a laparoscopic simulation training and credentialing program resulted in improvements in simulated performance, reduced operative time and improved registrar assessment of the quality of training. © 2015 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
Sun, Rui; Dama, James F; Tan, Jeffrey S; Rose, John P; Voth, Gregory A
2016-10-11
Metadynamics is an important enhanced sampling technique in molecular dynamics simulation to efficiently explore potential energy surfaces. The recently developed transition-tempered metadynamics (TTMetaD) has been proven to converge asymptotically without sacrificing exploration of the collective variable space in the early stages of simulations, unlike other convergent metadynamics (MetaD) methods. We have applied TTMetaD to study the permeation of drug-like molecules through a lipid bilayer to further investigate the usefulness of this method as applied to problems of relevance to medicinal chemistry. First, ethanol permeation through a lipid bilayer was studied to compare TTMetaD with nontempered metadynamics and well-tempered metadynamics. The bias energies computed from various metadynamics simulations were compared to the potential of mean force calculated from umbrella sampling. Though all of the MetaD simulations agree with one another asymptotically, TTMetaD is able to predict the most accurate and reliable estimate of the potential of mean force for permeation in the early stages of the simulations and is robust to the choice of required additional parameters. We also show that using multiple randomly initialized replicas allows convergence analysis and also provides an efficient means to converge the simulations in shorter wall times and, more unexpectedly, in shorter CPU times; splitting the CPU time between multiple replicas appears to lead to less overall error. After validating the method, we studied the permeation of a more complicated drug-like molecule, trimethoprim. Three sets of TTMetaD simulations with different choices of collective variables were carried out, and all converged within feasible simulation time. The minimum free energy paths showed that TTMetaD was able to predict almost identical permeation mechanisms in each case despite significantly different definitions of collective variables.
Multi-scale simulations of droplets in generic time-dependent flows
NASA Astrophysics Data System (ADS)
Milan, Felix; Biferale, Luca; Sbragaglia, Mauro; Toschi, Federico
2017-11-01
We study the deformation and dynamics of droplets in time-dependent flows using a diffuse interface model for two immiscible fluids. The numerical simulations are first benchmarked against analytical results of steady droplet deformation, and then extended to the more interesting case of time-dependent flows. The results of these time-dependent numerical simulations are compared against analytical models available in the literature, which assume the droplet shape to be an ellipsoid at all times, with time-dependent major and minor axes. In particular we investigate the time-dependent deformation of a confined droplet in an oscillating Couette flow for the entire capillary range until droplet break-up. In this way these multi-component simulations prove to be a useful tool to establish from "first principles" the dynamics of droplets in complex flows involving multiple scales. Supported by the European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie Grant Agreement No 642069, and by the European Research Council under the European Community's Seventh Framework Program, ERC Grant Agreement No 339032.
Bayesian analyses of time-interval data for environmental radiation monitoring.
Luo, Peng; Sharp, Julia L; DeVol, Timothy A
2013-01-01
Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and a conventional frequentist analysis of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a similar detection probability as Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for the cases with very short presence of the source (< count time), time-interval information is more sensitive to detect a change than count information since the source data is averaged by the background data over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
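The decision logic can be illustrated with a conjugate Gamma-Poisson update on the inter-pulse intervals: after a batch of pulses, the posterior for the event rate is compared with the known background rate. This is only a sketch of the underlying idea under that conjugate assumption, not the sequential Bayesian scheme evaluated in the paper; the rates, prior parameters, and decision threshold are illustrative.

```python
import numpy as np
from scipy import stats

def posterior_rate(intervals, alpha0=1.0, beta0=1.0):
    """Gamma posterior for the Poisson event rate given pulse time intervals.

    With a Gamma(alpha0, beta0) prior on the rate (beta0 is a rate parameter),
    n observed intervals summing to T give a Gamma(alpha0 + n, beta0 + T) posterior.
    """
    n, T = len(intervals), float(np.sum(intervals))
    return alpha0 + n, beta0 + T

def alarm(intervals, background_rate, threshold=0.95):
    """Flag a source when the posterior probability that the rate exceeds the
    background rate is above the decision threshold."""
    a, b = posterior_rate(intervals)
    p_exceed = stats.gamma.sf(background_rate, a, scale=1.0 / b)
    return p_exceed > threshold, p_exceed

rng = np.random.default_rng(2)
bkg = 5.0                                      # background counts per second, illustrative
dts = rng.exponential(1.0 / 12.0, size=30)     # intervals from an elevated 12 cps source
print(alarm(dts, bkg))
```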
The collaborative effect of ram pressure and merging on star formation and stripping fraction
NASA Astrophysics Data System (ADS)
Bischko, J. C.; Steinhauser, D.; Schindler, S.
2015-04-01
Aims: We investigate the effect of ram pressure stripping (RPS) on several simulations of merging pairs of gas-rich spiral galaxies. We are concerned with the changes in stripping efficiency and the time evolution of the star formation rate. Our goal is to provide an estimate of the combined effect of merging and RPS compared to the influence of the individual processes. Methods: We make use of the combined N-body/hydrodynamic code GADGET-2. The code features a threshold-based statistical recipe for star formation, as well as radiative cooling and modeling of galactic winds. In our simulations, we vary mass ratios between 1:4 and 1:8 in a binary merger. We sample different geometric configurations of the merging systems (edge-on and face-on mergers, different impact parameters). Furthermore, we vary the properties of the intracluster medium (ICM) in rough steps: the speed of the merging system relative to the ICM between 500 and 1000 km s^-1, the ICM density between 10^-29 and 10^-27 g cm^-3, and the ICM direction relative to the mergers' orbital plane. Ram pressure is kept constant within a simulation time period, as is the ICM temperature of 10^7 K. Each simulation in the ICM is compared to simulations of the merger in vacuum and the non-merging galaxies with acting ram pressure. Results: Averaged over the simulation time (1 Gyr), the merging pairs show a negligible 5% enhancement in SFR when compared to single galaxies under the same environmental conditions. The SFRs peak at the time of the galaxies' first fly-through. There, our simulations show SFRs of up to 20 M⊙ yr^-1 (compared to 3 M⊙ yr^-1 for the non-merging galaxies in vacuum). In the most extreme case, this constitutes a short-term (<50 Myr) SFR increase of 50% over the non-merging galaxies experiencing ram pressure. The wake of the merging galaxies in the ICM typically contains a third to half of the stellar mass seen in the non-merging galaxies and 5% to 10% less gas mass. The joint effect of RPS and merging, according to our simulations, is not significantly different from pure ram pressure effects.
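For orientation, the ram pressure implied by the ICM parameter ranges quoted above follows the usual estimate P_ram = rho * v^2; the short sketch below simply evaluates it at the corners of that parameter grid (illustrative arithmetic only, not a result of the study).

```python
# Ram pressure P = rho * v^2 for the ICM densities and relative speeds sampled above.
for rho in (1e-29, 1e-27):                # g cm^-3
    for v_kms in (500.0, 1000.0):
        v = v_kms * 1e5                   # convert km/s to cm/s
        p_ram = rho * v * v               # dyn cm^-2
        print(f"rho={rho:.0e} g/cm^3, v={v_kms:.0f} km/s -> P_ram={p_ram:.1e} dyn/cm^2")
```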
Real-time global MHD simulation of the solar wind interaction with the earth’s magnetosphere
NASA Astrophysics Data System (ADS)
Shimazu, H.; Kitamura, K.; Tanaka, T.; Fujita, S.; Nakamura, M. S.; Obara, T.
2008-11-01
We have developed a real-time global MHD (magnetohydrodynamics) simulation of the solar wind interaction with the earth’s magnetosphere. By adopting the real-time solar wind parameters and interplanetary magnetic field (IMF) observed routinely by the ACE (Advanced Composition Explorer) spacecraft, the responses of the magnetosphere are calculated with an MHD code. The simulation is carried out routinely on the supercomputer system at the National Institute of Information and Communications Technology (NICT), Japan. The visualized images of the magnetic field lines around the earth, the pressure distribution on the meridian plane, and the conductivity of the polar ionosphere can be viewed on the web site (http://www2.nict.go.jp/y/y223/simulation/realtime/). The results show that various magnetospheric activities are largely reproduced qualitatively. They also provide information on how geomagnetic disturbances develop in the magnetosphere in relation to the ionosphere. From the viewpoint of space weather, the real-time simulation helps us to understand the overall state of the magnetosphere at the current moment. To evaluate the simulation results, we compare the AE indices derived from the simulation with observations. In general, the simulation and observations agree well for quiet days and isolated substorms.
Metrics for comparing dynamic earthquake rupture simulations
Barall, Michael; Harris, Ruth A.
2014-01-01
Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near‐fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes’ results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code’s results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
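The paper defines its own metrics; purely as an illustration of what a quantitative code-to-code comparison can look like, the sketch below computes an RMS misfit between rupture-arrival-time fields from two hypothetical codes sampled on a common fault grid. The data and the specific misfit measure are assumptions for illustration, not the metrics proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical rupture-arrival-time fields (seconds) from two codes on a 50x50 fault grid.
t_code_a = np.cumsum(rng.random((50, 50)), axis=1) * 0.1
t_code_b = t_code_a + rng.normal(0.0, 0.05, size=t_code_a.shape)

rms_misfit = np.sqrt(np.mean((t_code_a - t_code_b) ** 2))
relative_misfit = rms_misfit / np.mean(t_code_a)
print(f"RMS misfit = {rms_misfit:.3f} s (relative {relative_misfit:.1%})")
```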
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirley, Rachel; Smidts, Carol; Boring, Ronald
Information-Decision-Action Crew (IDAC) operator model simulations of a Steam Generator Tube Rupture are compared to student operator performance in studies conducted in the Ohio State University’s Nuclear Power Plant Simulator Facility. This study is presented as a prototype for conducting simulator studies to validate key aspects of Human Reliability Analysis (HRA) methods. Seven student operator crews are compared to simulation results for crews designed to demonstrate three different decision-making strategies. The IDAC model used in the simulations is modified slightly to capture novice behavior rather than expert operator behavior. Operator actions and scenario pacing are compared. A preliminary review of available performance shaping factors (PSFs) is presented. After the scenario in the NPP Simulator Facility, student operators review a video of the scenario and evaluate six PSFs at pre-determined points in the scenario. This provides a dynamic record of the PSFs experienced by the OSU student operators. In this preliminary analysis, Time Constraint Load (TCL) calculated in the IDAC simulations is compared to TCL reported by student operators. We identify potential modifications to the IDAC model to develop an “IDAC Student Operator Model.” This analysis provides insights into how similar experiments could be conducted using expert operators to improve the fidelity of IDAC simulations.
Beckers, Niek; Schreiner, Sam; Bertrand, Pierre; Mehler, Bruce; Reimer, Bryan
2017-01-01
The relative impact of using a Google Glass based voice interface to enter a destination address compared to voice and touch-entry methods using a handheld Samsung Galaxy S4 smartphone was assessed in a driving simulator. Voice entry (Google Glass and Samsung) had lower subjective workload ratings, lower standard deviation of lateral lane position, shorter task durations, faster remote Detection Response Task (DRT) reaction times, lower DRT miss rates, and resulted in less time glancing off-road than the primary visual-manual interaction with the Samsung Touch interface. Comparing voice entry methods, using Google Glass took less time, while glance metrics and reaction time to DRT events responded to were similar. In contrast, DRT miss rate was higher for Google Glass, suggesting that drivers may be under increased distraction levels but for a shorter period of time; whether one or the other equates to an overall safer driving experience is an open question. Copyright © 2016 Elsevier Ltd. All rights reserved.
Analysis of real-time numerical integration methods applied to dynamic clamp experiments.
Butera, Robert J; McCarthy, Maeve L
2004-12-01
Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
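A minimal sketch of the two integrators discussed above for a single voltage-dependent gating variable, dx/dt = (x_inf(V) - x)/tau(V); the gating-curve parameters, time constant, and step size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def x_inf(v, v_half=-40.0, k=5.0):
    # Steady-state activation curve with slope factor k (illustrative values).
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

def step_euler(x, v, dt, tau=5.0):
    # Forward Euler update of dx/dt = (x_inf(V) - x) / tau.
    return x + dt * (x_inf(v) - x) / tau

def step_exp_euler(x, v, dt, tau=5.0):
    # Exponential Euler: exact for the gating equation when V (and hence
    # x_inf and tau) is held fixed over the step.
    return x_inf(v) + (x - x_inf(v)) * np.exp(-dt / tau)

x_e = x_ee = 0.1
v = -30.0                     # clamped membrane potential, mV (assumed)
for _ in range(100):
    x_e = step_euler(x_e, v, dt=1.0)
    x_ee = step_exp_euler(x_ee, v, dt=1.0)
print(x_e, x_ee, x_inf(v))    # both converge toward x_inf(v); errors differ with dt/tau
```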
A Modular Set of Mixed Reality Simulators for Blind and Guided Procedures
2017-08-01
Form Factor, Modular, DoD CVA Sim: Learning Outcome Study. This between-groups study will compare performance scores on the CVA simulator to determine...simulation.health.ufl.edu/research/ra_sim.wmv. Preliminary data from a new study of the CVA simulator indicates that an integrated tutor may be non-inferior to a human...instructor, opening the possibility of self-study and self-debriefing, which in turn facilitate competency-based, instead of time-based, simulation
Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Tumeo, Antonino; Secchi, Simone
Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% accuracy. Emulation is only 25 to 200 times slower than real time.
A SOFTWARE TOOL TO COMPARE MEASURED AND SIMULATED BUILDING ENERGY PERFORMANCE DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maile, Tobias; Bazjanac, Vladimir; O'Donnell, James
2011-11-01
Building energy performance is often inadequate when compared to design goals. To link design goals to actual operation one can compare measured with simulated energy performance data. Our previously developed comparison approach is the Energy Performance Comparison Methodology (EPCM), which enables the identification of performance problems based on a comparison of measured and simulated performance data. In the context of this method, we developed a software tool that provides graphing and data processing capabilities for the two performance data sets. The software tool called SEE IT (Stanford Energy Efficiency Information Tool) eliminates the need for manual generation of data plots and data reformatting. SEE IT makes the generation of time series, scatter and carpet plots independent of the source of data (measured or simulated) and provides a valuable tool for comparing measurements with simulation results. SEE IT also allows assigning data points to a predefined building object hierarchy and supports different versions of simulated performance data. This paper briefly introduces the EPCM, describes the SEE IT tool and illustrates its use in the context of a building case study.
Method of locating underground mine fires
Laage, Linneas; Pomroy, William
1992-01-01
An improved method of locating an underground mine fire by comparing the pattern of measured combustion product arrival times at detector locations with a real-time, computer-generated array of simulated patterns. A number of electronic fire detection devices are linked through telemetry to a control station on the surface. The mine's ventilation is modeled on a digital computer using network analysis software. The time required to locate a fire consists of the time required to model the mine's ventilation, generate the arrival time array, scan the array, and match the measured arrival time patterns to the simulated patterns.
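A minimal sketch of the pattern-matching step described above: measured combustion-product arrival times at the detectors are compared against a precomputed array of simulated arrival-time patterns, one per candidate fire location, and the best match is reported. The arrival times and candidate locations below are invented for illustration, not values from the patent.

```python
import numpy as np

# Simulated arrival times (minutes) at 4 detectors for 3 candidate fire locations,
# as the ventilation-network model would generate them (values are illustrative).
simulated_patterns = {
    "crosscut_A": np.array([3.0, 7.5, 12.0, 20.0]),
    "panel_B":    np.array([9.0, 4.0, 15.0, 11.0]),
    "shaft_C":    np.array([14.0, 10.0, 5.0, 8.0]),
}

measured = np.array([8.6, 4.3, 15.8, 10.5])    # measured arrival times (illustrative)

def best_match(measured, patterns):
    # Pick the candidate location whose simulated pattern minimizes the
    # sum of squared arrival-time differences.
    return min(patterns, key=lambda loc: np.sum((patterns[loc] - measured) ** 2))

print(best_match(measured, simulated_patterns))    # -> "panel_B"
```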
Analyzing JAVAD TR-G2 GPS Receiver's Sensitivities to SLS Trajectory
NASA Technical Reports Server (NTRS)
Schuler, Tristan
2017-01-01
Automated guidance and navigation systems are an integral part to successful space missions. Previous researchers created Python tools to receive and parse data from a JAVAD TR-G2 space-capable GPS receiver. I improved the tool by customizing the output for plotting and comparing several simulations. I analyzed position errors, data loss, and signal loss by comparing simulated receiver data from an IFEN GPS simulator to ‘truth data’ from a proposed trajectory. By adjusting the trajectory simulation’s gain, attitude, and start time, NASA can assess the best time to launch the SLS, where to position the antennas on the Block 1-B, and which filter to use. Some additional testing has begun with the Novatel SpaceQuestGPS receiver as well as a GNSS SDR receiver.
Simulation-Based Bronchoscopy Training
Kennedy, Cassie C.; Maldonado, Fabien
2013-01-01
Background: Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. Methods: We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. Results: From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n = 8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n = 7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, −1.47 to 2.69]) and process (0.33 [95% CI, −1.46 to 2.11]) outcomes (n = 2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Conclusions: Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few. PMID:23370487
A parallel algorithm for step- and chain-growth polymerization in molecular dynamics.
de Buyl, Pierre; Nies, Erik
2015-04-07
Classical Molecular Dynamics (MD) simulations provide insight into the properties of many soft-matter systems. In some situations, it is interesting to model the creation of chemical bonds, a process that is not part of the MD framework. In this context, we propose a parallel algorithm for step- and chain-growth polymerization that is based on a generic reaction scheme, works at a given intrinsic rate and produces continuous trajectories. We present an implementation in the ESPResSo++ simulation software and compare it with the corresponding feature in LAMMPS. For chain growth, our results are compared to the existing simulation literature. For step growth, a rate equation is proposed for the evolution of the crosslinker population that compares well to the simulations for low crosslinker functionality or for short times.
A parallel algorithm for step- and chain-growth polymerization in molecular dynamics
NASA Astrophysics Data System (ADS)
de Buyl, Pierre; Nies, Erik
2015-04-01
Classical Molecular Dynamics (MD) simulations provide insight into the properties of many soft-matter systems. In some situations, it is interesting to model the creation of chemical bonds, a process that is not part of the MD framework. In this context, we propose a parallel algorithm for step- and chain-growth polymerization that is based on a generic reaction scheme, works at a given intrinsic rate and produces continuous trajectories. We present an implementation in the ESPResSo++ simulation software and compare it with the corresponding feature in LAMMPS. For chain growth, our results are compared to the existing simulation literature. For step growth, a rate equation is proposed for the evolution of the crosslinker population that compares well to the simulations for low crosslinker functionality or for short times.
Computer simulation of multi-rigid body dynamics and control
NASA Technical Reports Server (NTRS)
Swaminadham, M.; Moon, Young I.; Venkayya, V. B.
1990-01-01
The objective is to set up and analyze benchmark problems on multibody dynamics and to verify the predictions of two multibody computer simulation codes. TREETOPS and DISCOS have been used to run three example problems - a one degree-of-freedom spring-mass-dashpot system, an inverted pendulum system, and a triple pendulum. To study the dynamics and control interaction, an inverted planar pendulum with an external body force and a torsional control spring was modeled as a hinge-connected two-rigid-body system. TREETOPS and DISCOS were used to perform the time-history simulation of this problem. The system state-space variables and their time derivatives from the two simulation codes were compared.
Team play in surgical education: a simulation-based study.
Marr, Mollie; Hemmert, Keith; Nguyen, Andrew H; Combs, Ronnie; Annamalai, Alagappan; Miller, George; Pachter, H Leon; Turner, James; Rifkind, Kenneth; Cohen, Steven M
2012-01-01
Simulation-based training provides a low-stress learning environment where real-life emergencies can be practiced. Simulation can improve surgical education and patient care in crisis situations through a team approach emphasizing interpersonal and communication skills. This study assessed the effects of simulation-based training in the context of trauma resuscitation in teams of trainees. In a New York State-certified level I trauma center, trauma alerts were assessed by a standardized video review process. Simulation training was provided in various trauma situations followed by a debriefing period. The outcomes measured included the number of healthcare workers involved in the resuscitation, the percentage of healthcare workers in role position, time to intubation, time to intubation from paralysis, time to obtain first imaging study, time to leave trauma bay for computed tomography scan or the operating room, presence of team leader, and presence of spinal stabilization. Thirty cases were video analyzed presimulation and postsimulation training. The two data sets were compared via a 1-sided t test for significance (p < 0.05). Nominal data were analyzed using the Fisher exact test. The data were compared presimulation and postsimulation. The number of healthcare workers involved in the resuscitation decreased from 8.5 to 5.7 postsimulation (p < 0.001). The percentage of people in role positions increased from 57.8% to 83.6% (p = 0.46). The time to intubation from paralysis decreased from 3.9 to 2.8 minutes (p < 0.05). The presence of a definitive team leader increased from 64% to 90% (p < 0.05). The rate of spine stabilization increased from 82% to 100% (p < 0.08). After simulation training, adherence to the advanced trauma life support algorithm improved from 56% to 83%. High-stress situations simulated in a low-stress environment can improve team interaction and educational competencies. Providing simulation training as a tool for surgical education may enhance patient care. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Chengguang; Drinkwater, Bruce W.
In this paper, the performance of the total focusing method is compared with the widely used time-reversal MUSIC super-resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time-domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low-noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.
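For reference, the total focusing method named above is a delay-and-sum algorithm applied to the full matrix of transmit-receive array signals; a minimal sketch, with an assumed array geometry, sound speed, and placeholder signals, is:

```python
import numpy as np

c = 6000.0                                  # assumed longitudinal sound speed, m/s
fs = 50e6                                   # assumed sampling rate, Hz
elements = np.linspace(-0.01, 0.01, 16)     # 16-element linear array, x positions (m)

rng = np.random.default_rng(3)
n_samples = 2000
# Full matrix capture: fmc[i, j, :] is the signal transmitted by element i and
# received by element j (here just noise as a placeholder for real array data).
fmc = rng.normal(0.0, 1.0, size=(16, 16, n_samples))

def tfm_intensity(x, z):
    # Delay-and-sum over all transmit-receive pairs for the image point (x, z).
    dist = np.sqrt((elements - x) ** 2 + z ** 2)    # element -> image point distance
    total = 0.0
    for i in range(16):
        for j in range(16):
            delay = (dist[i] + dist[j]) / c          # round-trip time of flight
            k = int(round(delay * fs))
            if k < n_samples:
                total += fmc[i, j, k]
    return abs(total)

print(tfm_intensity(0.0, 0.02))
```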
Limits to high-speed simulations of spiking neural networks using general-purpose computers.
Zenke, Friedemann; Gerstner, Wulfram
2014-01-01
To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
NASA Astrophysics Data System (ADS)
Wang, Ziwei; Jiang, Xiong; Chen, Ti; Hao, Yan; Qiu, Min
2018-05-01
Simulating the unsteady flow in a compressor under circumferential inlet distortion and rotor/stator interference would require a full-annulus grid with a dual-time-stepping method. This process is time consuming and needs a large amount of computational resources. The harmonic balance method simulates the unsteady flow in a compressor on a single-passage grid with a series of steady simulations, which greatly increases computational efficiency in comparison with the dual-time method. However, most simulations with the harmonic balance method treat flow under either circumferential inlet distortion or rotor/stator interference alone. Based on an in-house CFD code, the harmonic balance method is applied to the simulation of flow in NASA Stage 35 under both circumferential inlet distortion and rotor/stator interference. Because the unsteady flow is influenced by two different unsteady disturbances, the computation can become unstable; this instability can be avoided by coupling the harmonic balance method with an optimizing algorithm. The harmonic balance results are compared with those of a full-annulus simulation, and show that the harmonic balance method captures the flow under circumferential inlet distortion and rotor/stator interference as accurately as the full-annulus simulation with a speed-up of about 8 times.
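As background to the approach described above, a harmonic balance method represents the periodic unsteady flow with a truncated Fourier series and solves coupled steady-like problems at a small set of time levels; a minimal sketch of the generic representation (not the specific formulation or optimizing algorithm of this paper) is:

```latex
% Truncated Fourier representation of a flow variable U at a grid point, with
% fundamental frequency \omega and N retained harmonics:
U(t) \approx \hat{U}_0 + \sum_{k=1}^{N}\left[\hat{A}_k \cos(k\omega t) + \hat{B}_k \sin(k\omega t)\right].
% The 2N+1 unknown coefficients are determined by enforcing the steady residual
% at 2N+1 equally spaced time levels t_m = \frac{2\pi m}{(2N+1)\,\omega},
% so the unsteady problem reduces to a coupled set of steady-like solves on a
% single blade passage.
```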
Generating survival times to simulate Cox proportional hazards models with time-varying covariates.
Austin, Peter C
2012-12-20
Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data-generating process: one must be able to simulate data from a specified statistical model. We describe data-generating processes for the Cox proportional hazards model with time-varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time-varying covariates: first, a dichotomous time-varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time-varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time-varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed-form expressions that allow one to simulate survival times so that survival times are related to a vector of fixed or time-invariant covariates and to a single time-varying covariate. We illustrate the utility of our closed-form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time-varying covariates. This is compared with the statistical power to detect as statistically significant a binary time-invariant covariate. Copyright © 2012 John Wiley & Sons, Ltd.
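As an illustration of the inversion approach described above, the sketch below generates survival times for an exponential baseline hazard and a dichotomous covariate that switches from untreated to treated at a subject-specific time t0 (the first of the three covariate types listed). The baseline rate, effect size, and switch times are assumed values; the paper's full derivations also cover Weibull and Gompertz baselines and the other covariate types.

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 0.1        # exponential baseline hazard rate (assumed)
beta = 0.7       # log hazard ratio once treatment starts (assumed)
n = 10000

t0 = rng.uniform(0.0, 10.0, size=n)     # subject-specific treatment start times (assumed)
e = -np.log(rng.uniform(size=n))        # unit-exponential targets for the cumulative hazard

# Invert the cumulative hazard: H(t) = lam*t before t0, and
# H(t) = lam*t0 + lam*exp(beta)*(t - t0) once treatment begins.
pre = e <= lam * t0
T = np.where(pre, e / lam, t0 + (e - lam * t0) / (lam * np.exp(beta)))

print(T.mean(), np.mean(T > t0))   # crude checks on the simulated event times
```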
Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore
2014-04-01
Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions possibly with excess zeros. In addition the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype by environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear or quadratic pattern in time possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
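A minimal sketch, under assumed parameter values, of the kind of data-generating step described above: zero-inflated Poisson counts for a randomized block trial comparing a GM variety, its comparator, and a set of reference varieties. The effect sizes, block variance, and excess-zero probability are invented for illustration and are not taken from the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(5)
varieties = ["GM", "comparator"] + [f"ref_{i}" for i in range(1, 5)]
n_blocks = 4
log_mean = {v: np.log(20.0) for v in varieties}       # common baseline abundance (assumed)
log_mean["GM"] += 0.2                                  # assumed (small) effect of the GM variety
block_effect = rng.normal(0.0, 0.3, size=n_blocks)     # random block effects (assumed SD)
p_zero = 0.15                                          # assumed excess-zero probability

rows = []
for b in range(n_blocks):
    for v in varieties:
        mu = np.exp(log_mean[v] + block_effect[b])
        count = 0 if rng.uniform() < p_zero else rng.poisson(mu)
        rows.append((b, v, count))

for row in rows[:6]:
    print(row)
```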
Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore
2014-01-01
Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions possibly with excess zeros. In addition the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype by environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear or quadratic pattern in time possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided. PMID:24834325
Mehta, A; Patel, S; Robison, W; Senkowski, T; Allen, J; Shaw, E; Senkowski, C
2018-03-01
New techniques in minimally invasive and robotic surgical platforms require staged curricula to ensure proficiency. Scant literature exists as to how large a role simulation should play in training those who have skills in advanced surgical technology. The abilities of novice users may help discriminate whether surgically experienced users should start at a higher simulation level or whether the tasks are too rudimentary. The study's purpose is to explore the ability of General Surgery residents to gain proficiency on the dVSS as compared to novice users. The hypothesis is that Surgery residents will have increased proficiency in skills acquisition as compared to naive users. Six General Surgery residents at a single institution were compared with six teenagers using metrics measured by the dVSS. Participants were given two 1-h sessions to achieve an MScoreTM in the 90th percentile on each of the five simulations. MScoreTM software compiles a variety of metrics including total time, number of attempts, and high score. Statistical analysis was run using Student's t test. Significance was set at p value <0.05. Total time, attempts, and high score were compared between the two groups. The General Surgery residents took significantly less total time to complete Pegboard 1 (PB1) (p = 0.043). No significant difference was evident between the two groups in the other four simulations across the same MScoreTM metrics. A focused look at the energy dissection task revealed that overall score might not be discriminant enough. Our findings indicate that prior medical knowledge or surgical experience does not significantly impact one's ability to acquire new skills on the dVSS. It is recommended that residency-training programs begin to include exposure to robotic technology.
Cannon, W Dilworth; Nicandri, Gregg T; Reinig, Karl; Mevis, Howard; Wittstein, Jocelyn
2014-04-02
Several virtual reality simulators have been developed to assist orthopaedic surgeons in acquiring the skills necessary to perform arthroscopic surgery. The purpose of this study was to assess the construct validity of the ArthroSim virtual reality arthroscopy simulator by evaluating whether skills acquired through increased experience in the operating room lead to improved performance on the simulator. Using the simulator, six postgraduate year-1 orthopaedic residents were compared with six postgraduate year-5 residents and with six community-based orthopaedic surgeons when performing diagnostic arthroscopy. The time to perform the procedure was recorded. To ensure that subjects did not sacrifice the quality of the procedure to complete the task in a shorter time, the simulator was programmed to provide a completeness score that indicated whether the surgeon accurately performed all of the steps of diagnostic arthroscopy in the correct sequence. The mean time to perform the procedure by each group was 610 seconds for community-based orthopaedic surgeons, 745 seconds for postgraduate year-5 residents, and 1028 seconds for postgraduate year-1 residents. Both the postgraduate year-5 residents and the community-based orthopaedic surgeons performed the procedure in significantly less time (p = 0.006) than the postgraduate year-1 residents. There was a trend toward significance (p = 0.055) in time to complete the procedure when the postgraduate year-5 residents were compared with the community-based orthopaedic surgeons. The mean level of completeness as assigned by the simulator for each group was 85% for the community-based orthopaedic surgeons, 79% for the postgraduate year-5 residents, and 71% for the postgraduate year-1 residents. As expected, these differences were not significant, indicating that the three groups had achieved an acceptable level of consistency in their performance of the procedure. Higher levels of surgeon experience resulted in improved efficiency when performing diagnostic knee arthroscopy on the simulator. Further validation studies utilizing the simulator are currently under way and the additional simulated tasks of arthroscopic meniscectomy, meniscal repair, microfracture, and loose body removal are being developed.
Franc-Law, Jeffrey Michael; Ingrassia, Pier Luigi; Ragazzoni, Luca; Della Corte, Francesco
2010-01-01
Training in practical aspects of disaster medicine is often impossible, and simulation may offer an educational opportunity superior to traditional didactic methods. We sought to determine whether exposure to an electronic simulation tool would improve the ability of medical students to manage a simulated disaster. We stratified 22 students by year of education and randomly assigned 50% from each category to form the intervention group, with the remaining 50% forming the control group. Both groups received the same didactic training sessions. The intervention group received additional disaster medicine training on a patient simulator (disastermed.ca), and the control group spent equal time on the simulator in a nondisaster setting. We compared markers of patient flow during a simulated disaster, including mean differences in time and number of patients to reach triage, bed assignment, patient assessment and disposition. In addition, we compared triage accuracy and scores on a structured command-and-control instrument. We collected data on the students' evaluations of the course for secondary purposes. Participants in the intervention group triaged their patients more quickly than participants in the control group (mean difference 43 s, 99.5% confidence interval [CI] 12 to 75 s). The score of performance indicators on a standardized scale was also significantly higher in the intervention group (18/18) when compared with the control group (8/18) (p < 0.001). All students indicated that they preferred the simulation-based curriculum to a lecture-based curriculum. When asked to rate the exercise overall, both groups gave a median score of 8 on a 10-point modified Likert scale. Participation in an electronic disaster simulation using the disastermed.ca software package appears to increase the speed at which medical students triage simulated patients and increase their score on a structured command-and-control performance indicator instrument. Participants indicated that the simulation-based curriculum in disaster medicine is preferable to a lecture-based curriculum. Overall student satisfaction with the simulation-based curriculum was high.
NASA Astrophysics Data System (ADS)
Lai, Hanh; McJunkin, Timothy R.; Miller, Carla J.; Scott, Jill R.; Almirall, José R.
2008-09-01
The combined use of SIMION 7.0 and the statistical diffusion simulation (SDS) user program in conjunction with SolidWorks® with COSMOSFloWorks® fluid dynamics software to model a complete, commercial ion mobility spectrometer (IMS) was demonstrated for the first time and compared to experimental results for tests using compounds of immediate interest in the security industry (e.g., 2,4,6-trinitrotoluene, 2,7-dinitrofluorene, and cocaine). The aim of this research was to evaluate the predictive power of SIMION/SDS for application to IMS instruments. The simulation was evaluated against experimental results in three studies: (1) a drift:carrier gas flow rates study assesses the ability of SIMION/SDS to correctly predict the ion drift times; (2) a drift gas composition study evaluates the accuracy in predicting the resolution; (3) a gate width study compares the simulated peak shape and peak intensity with the experimental values. SIMION/SDS successfully predicted the correct drift time, intensity, and resolution trends for the operating parameters studied. Despite the need for estimations and assumptions in the construction of the simulated instrument, SIMION/SDS was able to predict the resolution between two ion species in air within 3% accuracy. The preliminary success of IMS simulations using SIMION/SDS software holds great promise for the design of future instruments with enhanced performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanh Lai; Timothy R. McJunkin; Carla J. Miller
2008-09-01
The combined use of SIMION 7.0 and the statistical diffusion simulation (SDS) user program in conjunction with SolidWorks® with COSMOSFloWorks® fluid dynamics software to model a complete, commercial ion mobility spectrometer (IMS) was demonstrated for the first time and compared to experimental results for tests using compounds of immediate interest in the security industry (e.g., 2,4,6-trinitrotoluene and cocaine). The aim of this research was to evaluate the predictive power of SIMION/SDS for application to IMS instruments. The simulation was evaluated against experimental results in three studies: 1) a drift:carrier gas flow rates study assesses the ability of SIMION/SDS to correctly predict the ion drift times; 2) a drift gas composition study evaluates the accuracy in predicting the resolution; and 3) a gate width study compares the simulated peak shape and peak intensity with the experimental values. SIMION/SDS successfully predicted the correct drift time, intensity, and resolution trends for the operating parameters studied. Despite the need for estimations and assumptions in the construction of the simulated instrument, SIMION/SDS was able to predict the resolution between two ion species in air within 3% accuracy. The preliminary success of IMS simulations using SIMION/SDS software holds great promise for the design of future instruments with enhanced performance.
Snyder, Christopher W; Vandromme, Marianne J; Tyra, Sharon L; Hawn, Mary T
2009-01-01
Virtual reality (VR) simulators for laparoscopy and endoscopy may be valuable tools for resident education. However, the cost of such training in terms of trainee and instructor time may vary depending upon whether an independent or proctored approach is employed. We performed a randomized controlled trial to compare independent and proctored methods of proficiency-based VR simulator training. Medical students were randomized to independent or proctored training groups. Groups were compared with respect to the number of training hours and task repetitions required to achieve expert level proficiency on laparoscopic and endoscopic simulators. Cox regression modeling was used to compare time to proficiency between groups, with adjustment for appropriate covariates. Thirty-six medical students (18 independent, 18 proctored) were enrolled. Achievement of overall simulator proficiency required a median of 11 hours of training (range, 6-21 hours). Laparoscopic and endoscopic proficiency were achieved after a median of 11 (range, 6-32) and 10 (range, 5-27) task repetitions, respectively. The number of repetitions required to achieve proficiency was similar between groups. After adjustment for covariates, trainees in the independent group achieved simulator proficiency with significantly fewer hours of training (hazard ratio, 2.62; 95% confidence interval, 1.01-6.85; p = 0.048). Our study quantifies the cost, in instructor and trainee hours, of proficiency-based laparoscopic and endoscopic VR simulator training, and suggests that proctored instruction does not offer any advantages to trainees. The independent approach may be preferable for surgical residency programs desiring to implement VR simulator training.
Algorithms of GPU-enabled reactive force field (ReaxFF) molecular dynamics.
Zheng, Mo; Li, Xiaoxia; Guo, Li
2013-04-01
Reactive force field (ReaxFF), a recent and novel bond order potential, allows for reactive molecular dynamics (ReaxFF MD) simulations for modeling larger and more complex molecular systems involving chemical reactions when compared with computation-intensive quantum mechanical methods. However, ReaxFF MD can be approximately 10-50 times slower than classical MD due to its explicit modeling of bond forming and breaking, the dynamic charge equilibration at each time-step, and its time-step, which is one order of magnitude smaller than that of classical MD, all of which pose significant computational challenges to reaching spatio-temporal scales of nanometers and nanoseconds. Recent advances in graphics processing units (GPUs) provide not only highly favorable performance for GPU-enabled MD programs compared with CPU implementations but also an opportunity to cope with the computing-power and memory demands that ReaxFF MD imposes on computer hardware. In this paper, we present the algorithms of GMD-Reax, the first GPU-enabled ReaxFF MD program with significantly improved performance surpassing CPU implementations on desktop workstations. The performance of GMD-Reax has been benchmarked on a PC equipped with a NVIDIA C2050 GPU for coal pyrolysis simulation systems with atoms ranging from 1378 to 27,283. GMD-Reax achieved speedups as high as 12 times faster than van Duin et al.'s FORTRAN codes in Lammps on 8 CPU cores and 6 times faster than the Lammps' C codes based on PuReMD in terms of the simulation time per time-step averaged over 100 steps. GMD-Reax could be used as a new and efficient computational tool for exploring very complex molecular reactions via ReaxFF MD simulation on desktop workstations. Copyright © 2013 Elsevier Inc. All rights reserved.
Initial Data Analysis Results for ATD-2 ISAS HITL Simulation
NASA Technical Reports Server (NTRS)
Lee, Hanbong
2017-01-01
To evaluate the operational procedures and information requirements for the core functional capabilities of the ATD-2 project, such as the tactical surface metering tool, the APREQ-CFR procedure, and data element exchanges between ramp and tower, human-in-the-loop (HITL) simulations were performed in March 2017. This presentation shows the initial data analysis results from the HITL simulations. With respect to the different runway configurations and metering values in the tactical surface scheduler, various airport performance metrics were analyzed and compared. These metrics include gate holding time, taxi-out time, runway throughput, queue size and wait time in queue, and TMI flight compliance. In addition to the metering value, other factors affecting the airport performance in the HITL simulation, including run duration, runway changes, and TMI constraints, are also discussed.
NASA Astrophysics Data System (ADS)
Pohl, E.; Maximini, M.; Bauschulte, A.; vom Schloß, J.; Hermanns, R. T. E.
2015-02-01
HT-PEM fuel cells suffer from performance losses due to degradation effects. Therefore, the durability of HT-PEM is currently an important focus of research and development. In this paper a novel approach is presented for an integrated short term and long term simulation of HT-PEM accelerated lifetime testing. The physical phenomena of short term and long term effects are commonly modeled separately due to the different time scales. However, in accelerated lifetime testing, long term degradation effects have a crucial impact on the short term dynamics. Our approach addresses this problem by applying a novel method for dual time scale simulation. A transient system simulation is performed for an open voltage cycle test on an HT-PEM fuel cell for a physical time of 35 days. The analysis describes the system dynamics by numerical electrochemical impedance spectroscopy. Furthermore, a performance assessment is performed in order to demonstrate the efficiency of the approach. The presented approach reduces the simulation time by approximately 73% compared to a conventional simulation approach, with only a small loss of accuracy. The approach thus provides a comprehensive perspective that captures both short term dynamic behavior and long term degradation effects.
A fast method for optical simulation of flood maps of light-sharing detector modules.
Shi, Han; Du, Dong; Xu, JianFeng; Moses, William W; Peng, Qiyu
2015-12-01
Optical simulation at the detector module level is highly desired for Positron Emission Tomography (PET) system design. Commonly used simulation toolkits such as GATE are not efficient in the optical simulation of detector modules with complicated light-sharing configurations, where a vast number of photons must be tracked. We present a fast approach based on a simplified specular reflectance model and a structured light-tracking algorithm to speed up the photon tracking in detector modules constructed with polished finish and specular reflector materials. We simulated conventional block detector designs with different slotted light guide patterns using the new approach and compared the outcomes with those from GATE simulations. While the two approaches generated comparable flood maps, the new approach was more than 200-600 times faster. The new approach has also been validated by constructing a prototype detector and comparing the simulated flood map with the experimental flood map. The experimental flood map has nearly uniformly distributed spots similar to those in the simulated flood map. In conclusion, the new approach provides a fast and reliable simulation tool for assisting in the development of light-sharing-based detector modules with a polished surface finish and using specular reflector materials.
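For the polished, specular surfaces assumed above, the core of a simplified photon-tracking step is the mirror reflection of the photon direction about the surface normal; a minimal sketch (geometry only, ignoring Fresnel probabilities and wavelength effects, and not the authors' full algorithm) is:

```python
import numpy as np

def specular_reflect(direction, normal):
    # Mirror the propagation direction about the surface normal:
    # r = d - 2 (d . n) n, with both vectors normalized.
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    return d - 2.0 * np.dot(d, n) * n

# Photon heading toward a crystal side face whose outward normal is +x (illustrative).
print(specular_reflect(np.array([0.6, 0.0, -0.8]), np.array([1.0, 0.0, 0.0])))
# -> [-0.6, 0.0, -0.8]
```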
The dataset represents the data depicted in the Figures and Tables of a Journal Manuscript with the following abstract: The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United States are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trends analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time and storage requirements, which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain and coastal regions. This dataset is associated with the following publication
Development of Simulated Directional Audio for Cockpit Applications
1986-01-01
... of the audio signal, in the time and frequency domains, which enhance localization performance with simulated cues. Previous research is reviewed ... dichotically. Localization accuracy and response time were compared for: (1) nine different filtered noise stimuli, designed to make available some
Jiang, Bailin; Ju, Hui; Zhao, Ying; Yao, Lan; Feng, Yi
2018-04-01
This study compared the efficacy and efficiency of virtual reality simulation (VRS) with high-fidelity mannequin in the simulation-based training of fiberoptic bronchoscope manipulation in novices. Forty-six anesthesia residents with no experience in fiberoptic intubation were divided into two groups: VRS (group VRS) and mannequin (group M). After a standard didactic teaching session, group VRS trained 25 times on VRS, whereas group M performed the same process on a mannequin. After training, participants' performance was assessed on a mannequin five consecutive times. Procedure times during training were recorded as pooled data to construct learning curves. Procedure time and global rating scale scores of manipulation ability were compared between groups, as well as changes in participants' confidence after training. Plateaus in the learning curves were achieved after 19 (95% confidence interval = 15-26) practice sessions in group VRS and 24 (95% confidence interval = 20-32) in group M. There was no significant difference in procedure time [13.7 (6.6) vs. 11.9 (4.1) seconds, t' = 1.101, P = 0.278] or global rating scale [3.9 (0.4) vs. 3.8 (0.4), t = 0.791, P = 0.433] between groups. Participants' confidence increased after training [group VRS: 1.8 (0.7) vs. 3.9 (0.8), t = 8.321, P < 0.001; group M = 2.0 (0.7) vs. 4.0 (0.6), t = 13.948, P < 0.001] but did not differ significantly between groups. Virtual reality simulation is more efficient than mannequin in simulation-based training of flexible fiberoptic manipulation in novices, but similar effects can be achieved in both modalities after adequate training.
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method based on Bayesian analysis for time series data from an agent-based simulation of the international emissions trading market, and compares it with a discrete Fourier transform analysis. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods reveal the following results: (1) the classification methods map the time series data to distances, which are easier to understand and to reason about than the raw time series; (2) the methods can use these distances to analyze uncertain time series data from the agent-based simulation, including both stationary and non-stationary processes; and (3) the Bayesian analytical method can distinguish a 1% difference in the agents' emission reduction targets.
Zhao, Chenhui; Zhang, Guangcheng; Wu, Yibo
2012-01-01
The resin flow behavior in the vacuum-assisted resin infusion (VARI) molding process for foam sandwich composites was studied by both flow visualization experiments and computer simulation. Both experimental and simulation results show that the distribution medium (DM) leads to a shorter mold filling time in grooved foam sandwich composites processed by VARI, and that the mold filling time decreases linearly as the DM/preform ratio increases. The pattern of the resin source has a significant influence on the filling time: a center source fills faster than an edge source, a point source results in a longer filling time than a linear source, and short edge/center sources need a longer time to fill the mold than long edge/center sources.
SIMULATION OF FLOOD HYDROGRAPHS FOR GEORGIA STREAMS.
Inman, E.J.; Armbruster, J.T.
1986-01-01
Flood hydrographs are needed for the design of many highway drainage structures and embankments. A method for simulating these flood hydrographs at urban and rural ungauged sites in Georgia is presented. The O'Donnell method was used to compute unit hydrographs from 355 flood events from 80 stations. An average unit hydrograph and an average lag time were computed for each station. These average unit hydrographs were transformed to unit hydrographs having durations of one-fourth, one-third, one-half, and three-fourths lag time and then reduced to dimensionless terms by dividing the time by lag time and the discharge by peak discharge. Hydrographs were simulated for these 355 flood events and their widths were compared with the widths of the observed hydrographs at 50 and 75 percent of peak flow. For simulating hydrographs at sites larger than 500 mi², the U.S. Geological Survey computer model CONROUT can be used.
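A minimal sketch of the dimensionless transformation described above: each station's average unit hydrograph is scaled by dividing time by the lag time and discharge by the peak discharge. The hydrograph ordinates and lag time below are invented for illustration, not Georgia station data.

```python
import numpy as np

# Illustrative unit hydrograph: time in hours, discharge in cfs.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0])
q = np.array([0.0, 120.0, 340.0, 410.0, 330.0, 180.0, 70.0, 0.0])
lag_time = 3.5   # hours, assumed station lag time

t_dimensionless = t / lag_time     # time divided by lag time
q_dimensionless = q / q.max()      # discharge divided by peak discharge

for td, qd in zip(t_dimensionless, q_dimensionless):
    print(f"t/lag = {td:.2f}, Q/Qpeak = {qd:.2f}")
```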
Driving with a short arm cast in a simulator.
Mansour, Damian; Mansour, Kristin Gotaas; Kenny, Benjamin William; Attia, John; Meads, Bryce
2015-12-01
To test the ability to steer in a driving simulator in subjects with a short arm cast. 17 men and 13 women aged 23 to 67 (mean, 37) years who had a valid driver's licence were randomised to the cast-first group (n=16; 7 had the cast on the dominant arm) or the cast-second group (n=14; 8 had the cast on the dominant arm) and drove in a simulator. A short arm plaster-of-Paris cast was applied in a neutral position, allowing free movement of the metacarpophalangeal joints, thumb, and elbow joint. Outcome measures included the number of driving off track instances, the number of crashes, the lap time, and the effect of hand dominance on these parameters. Subjects were asked whether the cast had impeded their steering ability. Subjects with or without a cast were comparable in terms of the number of driving off track instances, number of crashes, and lap time. Compared with no cast, the odds ratio (OR) of a subject in a cast driving off the track was 1.02 (p=0.921) and having a crash was 0.79 (p=0.047). All subjects were 1.23 times more likely to drive off the track in their first lap (OR=2.66, p=0.019). The mean lap time decreased for each consecutive lap from the 2nd to 5th laps. Subjects driving with a cast on the dominant or non-dominant arm were comparable. 26 out of the 30 participants considered that the plaster cast impeded their steering ability. Compared with no cast, driving with a short arm cast did not significantly decrease steering ability in a driving simulator.
Glick, Joshua; Lehman, Erik; Terndrup, Thomas
2014-03-01
Coordination of the tasks of performing chest compressions and defibrillation can lead to communication challenges that may prolong time spent off the chest. The purpose of this study was to determine whether defibrillation provided by the provider performing chest compressions led to a decrease in peri-shock pauses as compared to defibrillation administered by a second provider, in a simulated cardiac arrest scenario. This was a randomized, controlled study measuring pauses in chest compressions for defibrillation in a simulated cardiac arrest model. We approached hospital providers with current CPR certification for participation between July, 2011 and October, 2011. Volunteers were randomized to control (facilitator-administered defibrillation) or experimental (compressor-administered defibrillation) groups. All participants completed one minute of chest compressions on a mannequin in a shockable rhythm prior to administration of defibrillation. We measured and compared pauses for defibrillation in both groups. Out of 200 total participants, we analyzed data from 197 defibrillations. Compressor-initiated defibrillation resulted in a significantly lower pre-shock hands-off time (0.57 s; 95% CI: 0.47-0.67) compared to facilitator-initiated defibrillation (1.49 s; 95% CI: 1.35-1.64). Furthermore, compressor-initiated defibrillation resulted in a significantly lower peri-shock hands-off time (2.77 s; 95% CI: 2.58-2.95) compared to facilitator-initiated defibrillation (4.25 s; 95% CI: 4.08-4.43). Assigning the responsibility for shock delivery to the provider performing compressions encourages continuous compressions throughout the charging period and decreases total time spent off the chest. However, as this was a simulation-based study, clinical implementation is necessary to further evaluate these potential benefits.
Simulation-based validation and arrival-time correction for Patlak analyses of Perfusion-CT scans
NASA Astrophysics Data System (ADS)
Bredno, Jörg; Hom, Jason; Schneider, Thomas; Wintermark, Max
2009-02-01
Blood-brain-barrier (BBB) breakdown is a hypothesized mechanism for hemorrhagic transformation in acute stroke. The Patlak analysis of a Perfusion Computed Tomography (PCT) scan measures the BBB permeability, but the method yields higher estimates when applied to the first pass of the contrast bolus compared to a delayed phase. We present a numerical phantom that simulates vascular and parenchymal time-attenuation curves to determine the validity of permeability measurements obtained with different acquisition protocols. A network of tubes represents the major cerebral arteries ipsi- and contralateral to an ischemic event. These tubes branch off into smaller segments that represent capillary beds. Blood flow in the phantom is freely defined and simulated as non-Newtonian tubular flow. Diffusion of contrast in the vessels and permeation through vessel walls is part of the simulation. The phantom allows us to compare the results of a permeability measurement to the simulated vessel wall status. A Patlak analysis reliably detects areas with BBB breakdown for acquisitions of 240 s duration, whereas results obtained from the first pass are biased in areas of reduced blood flow. Compensating for differences in contrast arrival times reduces this bias and gives good estimates of BBB permeability for PCT acquisitions of 90-150 s duration.
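For readers unfamiliar with the Patlak analysis mentioned above, a minimal numerical sketch follows. It assumes an arterial input curve and a parenchymal time-attenuation curve have already been extracted from the PCT data; the synthetic curves, the sampling grid, and the index from which the plot is treated as linear are illustrative assumptions, not values from this study.

```python
import numpy as np

def patlak_permeability(t, c_art, c_tis, fit_from=0):
    """Estimate the Patlak slope (permeability) and intercept.

    t        : sample times (s)
    c_art    : arterial input time-attenuation curve
    c_tis    : parenchymal (tissue) time-attenuation curve
    fit_from : index from which the Patlak plot is assumed linear
    """
    # Running integral of the arterial curve (trapezoidal rule)
    int_ca = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * 0.5 * (c_art[1:] + c_art[:-1]))))
    x = int_ca / c_art            # Patlak abscissa ("stretched time")
    y = c_tis / c_art             # Patlak ordinate
    slope, intercept = np.polyfit(x[fit_from:], y[fit_from:], 1)
    return slope, intercept       # slope ~ permeability, intercept ~ blood volume fraction

# Synthetic example (hypothetical curves, 1 s sampling over 240 s)
t = np.arange(1.0, 241.0)
c_art = 100.0 * np.exp(-t / 60.0) + 5.0
c_tis = 0.05 * np.cumsum(c_art) + 0.04 * c_art
K_patlak, v_b = patlak_permeability(t, c_art, c_tis, fit_from=90)
```

Fitting only the delayed, more linear portion of the plot (here from 90 s on) mirrors the observation above that first-pass data bias the permeability estimate.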
Optimisation of 12 MeV electron beam simulation using variance reduction technique
NASA Astrophysics Data System (ADS)
Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul
2017-05-01
Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. An algorithm called the variance reduction technique (VRT) was implemented in MC to speed up this process. This work focused on optimisation of the VRT parameters, namely electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model without VRT. The validated MC model simulation was repeated by applying electron range rejection controlled by a global electron cut-off energy of 1, 2 and 5 MeV using 20 × 10⁷ particle histories. Range rejection at 5 MeV generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, the 5 MeV electron range rejection was utilised in the particle history analysis, which ranged from 7.5 × 10⁷ to 20 × 10⁷. In this study, with a 5 MeV electron cut-off and 10 × 10⁷ particle histories, the simulation was four times faster than the non-VRT calculation with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation time while preserving accuracy.
Regente, J; de Zeeuw, J; Bes, F; Nowozin, C; Appelhoff, S; Wahnschaffe, A; Münch, M; Kunz, D
2017-05-01
In single night shifts, extending habitual wake episodes leads to sleep deprivation induced decrements of performance during the shift and re-adaptation effects the next day. We investigated whether short-wavelength depleted (=filtered) bright light (FBL) during a simulated night shift would counteract such effects. Twenty-four participants underwent a simulated night shift in dim light (DL) and in FBL. Reaction times, subjective sleepiness and salivary melatonin concentrations were assessed during both nights. Daytime sleep was recorded after both simulated night shifts. During FBL, we found no melatonin suppression compared to DL, but slightly faster reaction times in the second half of the night. Daytime sleep was not statistically different between both lighting conditions (n = 24) and there was no significant phase shift after FBL (n = 11). To conclude, our results showed positive effects from FBL during simulated single night shifts which need to be further tested with larger groups, in more applied studies and compared to standard lighting. Copyright © 2016 Elsevier Ltd. All rights reserved.
Foust, Thomas D.; Ziegler, Jack L.; Pannala, Sreekanth; ...
2017-02-28
Here in this computational study, we model the mixing of biomass pyrolysis vapor with solid catalyst in circulating riser reactors with a focus on the determination of solid catalyst residence time distributions (RTDs). A comprehensive set of 2D and 3D simulations were conducted for a pilot-scale riser using the Eulerian-Eulerian two-fluid modeling framework with and without sub-grid-scale models for the gas-solids interaction. A validation test case was also simulated and compared to experiments, showing agreement in the pressure gradient and RTD mean and spread. For the simulation cases, it was found that for accurate RTD prediction the Johnson and Jackson partial-slip solids boundary condition was required for all models, and that a sub-grid model is useful so that ultra-high-resolution grids, which are very computationally expensive, are not required. Finally, we discovered a 2/3 scaling relation for the RTD mean and spread when comparing resolved 2D simulations to validated unresolved 3D sub-grid-scale model simulations.
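As a side note on how the RTD mean and spread referred to above can be extracted from a simulated tracer signal, here is a minimal sketch; the gamma-like outlet pulse and time grid are hypothetical stand-ins, not data from this study.

```python
import numpy as np

def rtd_moments(t, c_out):
    """Mean residence time and spread from an outlet tracer pulse response.

    t     : time samples (s)
    c_out : outlet tracer concentration after a pulse injection
    """
    e = c_out / np.trapz(c_out, t)                  # exit-age distribution E(t), unit area
    t_mean = np.trapz(t * e, t)                     # first moment: mean residence time
    spread = np.trapz((t - t_mean) ** 2 * e, t)     # second central moment: spread (variance)
    return t_mean, spread

# Hypothetical tracer response of a riser
t = np.linspace(0.0, 60.0, 601)
c_out = t ** 2 * np.exp(-t / 4.0)
mean_rt, var_rt = rtd_moments(t, c_out)
```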
Liaw, Sok Ying; Chan, Sally Wai-Chi; Chen, Fun-Gee; Hooi, Shing Chuan; Siau, Chiang
2014-09-17
Virtual patient simulation has grown substantially in health care education. A virtual patient simulation was developed as a refresher training course to reinforce nursing clinical performance in assessing and managing deteriorating patients. The objective of this study was to describe the development of the virtual patient simulation and evaluate its efficacy, by comparing with a conventional mannequin-based simulation, for improving the nursing students' performances in assessing and managing patients with clinical deterioration. A randomized controlled study was conducted with 57 third-year nursing students who were recruited through email. After a baseline evaluation of all participants' clinical performance in a simulated environment, the experimental group received a 2-hour fully automated virtual patient simulation while the control group received 2-hour facilitator-led mannequin-based simulation training. All participants were then re-tested one day (first posttest) and 2.5 months (second posttest) after the intervention. The participants from the experimental group completed a survey to evaluate their learning experiences with the newly developed virtual patient simulation. Compared to their baseline scores, both experimental and control groups demonstrated significant improvements (P<.001) in first and second post-test scores. While the experimental group had significantly lower (P<.05) second post-test scores compared with the first post-test scores, no significant difference (P=.94) was found between these two scores for the control group. The scores between groups did not differ significantly over time (P=.17). The virtual patient simulation was rated positively. A virtual patient simulation for a refreshing training course on assessing and managing clinical deterioration was developed. Although the randomized controlled study did not show that the virtual patient simulation was superior to mannequin-based simulation, both simulations have demonstrated to be effective refresher learning strategies for improving nursing students' clinical performance. Given the greater resource requirements of mannequin-based simulation, the virtual patient simulation provides a more promising alternative learning strategy to mitigate the decay of clinical performance over time.
Guo, Feng; Cheng, Xin-lu; Zhang, Hong
2012-04-12
Whether proton dissociation or C-N bond scission is the first step in the decomposition of nitromethane is a controversial issue. We applied reactive force field (ReaxFF) molecular dynamics to probe the initial decomposition mechanisms of nitromethane. By comparing nitromethane simulations with impact on the (010) surface to simulations without impact (heating only), we found that proton dissociation is the first step in the pyrolysis of nitromethane; the C-N bond dissociates on the same time scale as proton dissociation in the impact simulations, but at a later time in the non-impact simulation. At the end of these simulations, a large number of clusters are formed. By analyzing the trajectories, we discuss the role of the hydrogen bond in the initial stage of nitromethane decomposition, the intermediates observed early in the simulations, and the formation of clusters consisting of C-N-C-N chain/ring structures.
An efficient and reliable predictive method for fluidized bed simulation
Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen
2017-06-13
In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experimental data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
NASA Astrophysics Data System (ADS)
Rytka, C.; Lungershausen, J.; Kristiansen, P. M.; Neyer, A.
2016-06-01
Flow simulations can cut down both costs and time for the development of injection moulded polymer parts with functional surfaces used in life science and optical applications. We simulated the polymer melt flow into 3D micro- and nanostructures with Moldflow and Comsol and compared the results to real iso- and variothermal injection moulding trials below, at and above the transition temperature of the polymer. By adjusting the heat transfer coefficient and the transition temperature in the simulation it was possible to achieve good correlation with experimental findings at different processing conditions (mould temperature, injection velocity) for two polymers, namely polymethylmethacrylate and amorphous polyamide. The macroscopic model can be scaled down in volume and number of elements to save computational time for microstructure simulation and to enable first and foremost the nanostructure simulation, as long as local boundary conditions such as flow front speed are transferred correctly. The heat transfer boundary condition used in Moldflow was further evaluated in Comsol. Results showed that the heat transfer coefficient needs to be increased compared to macroscopic moulding in order to represent interfacial polymer/mould effects correctly. The transition temperature is most important in the packing phase for variothermal injection moulding.
Validating the simulation of large-scale parallel applications using statistical characteristics
Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...
2016-03-01
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
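As a rough illustration of comparing fine-grained statistical characteristics rather than total runtime, the sketch below contrasts per-event duration distributions from a real and a simulated trace with a Kolmogorov-Smirnov statistic; the choice of characteristic, the metric, and the synthetic gamma-distributed durations are assumptions for illustration, not the toolset or metrics of the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

def trace_fidelity(real_durations, simulated_durations):
    """Compare a fine-grained trace characteristic instead of total execution time.

    Here the characteristic is the distribution of per-event durations, and the
    Kolmogorov-Smirnov statistic (0 = identical, 1 = disjoint) serves as the
    fidelity metric; both choices are illustrative.
    """
    statistic, p_value = ks_2samp(real_durations, simulated_durations)
    return statistic, p_value

# Hypothetical per-event compute times (ms) from a real run and from a simulation
real = np.random.default_rng(1).gamma(shape=2.0, scale=1.0, size=5000)
sim = np.random.default_rng(2).gamma(shape=2.1, scale=1.0, size=5000)
fidelity, p = trace_fidelity(real, sim)
```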
Cascade Defect Evolution Processes: Comparison of Atomistic Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Haixuan; Stoller, Roger E; Osetskiy, Yury N
2013-11-01
Determining the defect evolution beyond the molecular dynamics (MD) time scale is critical in bridging the gap between atomistic simulations and experiments. The recently developed self-evolving atomistic kinetic Monte Carlo (SEAKMC) method provides new opportunities to simulate long-term defect evolution with MD-like fidelity. In this study, SEAKMC is applied to investigate cascade defect evolution in bcc iron. First, the evolution of a vacancy-rich region is simulated and compared with results obtained using autonomous basin climbing (ABC) + KMC and kinetic activation-relaxation technique (kART) simulations. Previously, kART was found to be orders of magnitude faster than ABC+KMC. The results obtained from SEAKMC are similar to kART, but the predicted time is about one order of magnitude faster than kART. The fidelity of SEAKMC is confirmed by statistically relevant MD simulations at multiple higher temperatures, which shows that the saddle point sampling in SEAKMC is close to complete. The second case is the irradiation-induced formation of C15 Laves phase nano-sized defect clusters. In contrast to previous studies, which claim these defects can grow by capturing self-interstitials, we found that these highly stable clusters can transform to a <111> glissile configuration on a much longer time scale. Finally, cascade-annealing simulations using SEAKMC are compared with the traditional object KMC (OKMC) method. SEAKMC predicts substantially fewer surviving defects than OKMC. The possible origin of this difference is discussed and a possible way to improve the accuracy of OKMC based on SEAKMC results is outlined. These studies demonstrate the atomistic fidelity of SEAKMC in comparison with other on-the-fly KMC methods and provide new information on long-term defect evolution in iron.
A New Approach to Modeling Jupiter's Magnetosphere
NASA Astrophysics Data System (ADS)
Fukazawa, K.; Katoh, Y.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.
2017-12-01
The scales in planetary magnetospheres range from tens of planetary radii to kilometers. For a number of years we have studied the magnetospheres of Jupiter and Saturn by using 3-dimensional magnetohydrodynamic (MHD) simulations. However, we have not been able to reach even the limits of the MHD approximation because of the large amount of computer resources required. Recently, thanks to progress in supercomputer systems, we have obtained the capability to simulate Jupiter's magnetosphere with 1000 times the number of grid points used in our previous simulations. This has allowed us to combine the high-resolution global simulation with a micro-scale simulation of the Jovian magnetosphere. In particular we can combine a hybrid (kinetic ions and fluid electrons) simulation with the MHD simulation. In addition, the new capability enables us to run multi-parameter survey simulations of the Jupiter-solar wind system. In this study we performed a high-resolution simulation of the Jovian magnetosphere to connect with the hybrid simulation, and lower-resolution simulations under various solar wind conditions to compare with Hisaki and Juno observations. In the high-resolution simulation we used a regular Cartesian grid with 0.15 RJ grid spacing and placed the inner boundary at 7 RJ. From these simulation settings, we provide the magnetic field out to around 20 RJ from Jupiter as a background field for the hybrid simulation. For the first time we have been able to resolve Kelvin-Helmholtz waves on the magnetopause. We have investigated solar wind dynamic pressures between 0.01 and 0.09 nPa for a number of IMF values. The raw data from these simulations are available for download by registered users. We have compared the results of these simulations with Hisaki auroral observations.
Evaluating Clouds in Long-Term Cloud-Resolving Model Simulations with Observational Data
NASA Technical Reports Server (NTRS)
Zeng, Xiping; Tao, Wei-Kuo; Zhang, Minghua; Peters-Lidard, Christa; Lang, Stephen; Simpson, Joanne; Kumar, Sujay; Xie, Shaocheng; Eastman, Joseph L.; Shie, Chung-Lin;
2006-01-01
Two 20-day, continental midlatitude cases are simulated with a three-dimensional (3D) cloud-resolving model (CRM) and compared to Atmospheric Radiation Measurement (ARM) data. This evaluation of long-term cloud-resolving model simulations focuses on clouds and surface fluxes. All numerical experiments, as compared to observations, simulate surface precipitation well but over-predict clouds, especially in the upper troposphere. The sensitivity of cloud properties to dimensionality and other factors is studied to isolate the origins of the overprediction of clouds. Due to the difference in buoyancy damping between 2D and 3D models, surface precipitation fluctuates rapidly with time, and spurious dehumidification occurs near the tropopause in the 2D CRM. Surface fluxes from a land data assimilation system are compared with ARM observations. They are used in place of the ARM surface fluxes to test the sensitivity of simulated clouds to surface fluxes. Summertime simulations show that surface fluxes from the assimilation system bring about a better simulation of diurnal cloud variation in the lower troposphere.
C(α) torsion angles as a flexible criterion to extract secrets from a molecular dynamics simulation.
Victor Paul Raj, Fredrick Robin Devadoss; Exner, Thomas E
2014-04-01
Given the increasing complexity of simulated molecular systems, and the fact that simulation times have now reached milliseconds to seconds, immense amounts of data (in the gigabyte to terabyte range) are produced in current molecular dynamics simulations. Manual analysis of these data is a very time-consuming task, and important events that lead from one intermediate structure to another can become occluded in the noise resulting from random thermal fluctuations. To overcome these problems and facilitate a semi-automated data analysis, we introduce in this work a measure based on C(α) torsion angles: torsion angles formed by four consecutive C(α) atoms. This measure describes changes in the backbones of large systems on a residual length scale (i.e., a small number of residues at a time). Cluster analysis of individual C(α) torsion angles and its fuzzification led to continuous time patches representing (meta)stable conformations and to the identification of events acting as transitions between these conformations. The importance of a change in torsion angle to structural integrity is assessed by comparing this change to the average fluctuations in the same torsion angle over the complete simulation. Using this novel measure in combination with other measures such as the root mean square deviation (RMSD) and time series of distance measures, we performed an in-depth analysis of a simulation of the open form of DNA polymerase I. The times at which major conformational changes occur and the most important parts of the molecule and their interrelations were pinpointed in this analysis. The simultaneous determination of the time points and localizations of major events is a significant advantage of the new bottom-up approach presented here, as compared to many other (top-down) approaches in which only the similarity of the complete structure is analyzed.
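To make the measure concrete, below is a small sketch of computing the torsion angle defined by four consecutive Cα atoms from their Cartesian coordinates; it uses the standard dihedral formula and assumes the Cα coordinates have already been extracted from the trajectory (frame handling, clustering, and fuzzification are not shown).

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle (degrees) defined by four points."""
    b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # Components of b0 and b2 perpendicular to the central bond b1
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    return np.degrees(np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w)))

def ca_torsions(ca_coords):
    """C-alpha torsion angles for an (N, 3) array of consecutive C-alpha positions."""
    return np.array([dihedral(*ca_coords[i:i + 4])
                     for i in range(len(ca_coords) - 3)])

# Four residues give one torsion angle; a chain of N residues gives N - 3 angles
example = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0],
                    [5.0, 3.6, 0.0], [8.8, 3.9, 1.2]])
angles = ca_torsions(example)
```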
NASA Astrophysics Data System (ADS)
Noble, David R.; Georgiadis, John G.; Buckius, Richard O.
1996-07-01
The lattice Boltzmann method (LBM) is used to simulate flow in an infinite periodic array of octagonal cylinders. Results are compared with those obtained by a finite difference (FD) simulation solved in terms of streamfunction and vorticity using an alternating direction implicit scheme. Computed velocity profiles are compared along lines common to both the lattice Boltzmann and finite difference grids. Along all such slices, both streamwise and transverse velocity predictions agree to within 0.5% of the average streamwise velocity. The local shear on the surface of the cylinders also compares well, with the only deviations occurring in the vicinity of the corners of the cylinders, where the slope of the shear is discontinuous. When a constant dimensionless relaxation time is maintained, LBM exhibits the same convergence behaviour as the FD algorithm, with the time step increasing as the square of the grid size. By adjusting the relaxation time such that a constant Mach number is achieved, the time step of LBM varies linearly with the grid size. The efficiency of LBM on the CM-5 parallel computer at the National Center for Supercomputing Applications (NCSA) is evaluated by examining each part of the algorithm. Overall, a speed of 13.9 GFLOPS is obtained using 512 processors for a domain size of 2176×2176.
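As a rough illustration of the BGK-type lattice Boltzmann update with a single relaxation time (the quantity varied in the study above), here is a minimal D2Q9 collide-and-stream sketch on a periodic domain; the grid size, relaxation time, and quiescent initial state are arbitrary choices for illustration, and no cylinder boundaries are included.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Equilibrium distributions for density rho (ny, nx) and velocity u (ny, nx, 2)."""
    cu = np.einsum('qd,yxd->yxq', c, u)                 # c_i . u
    usq = np.einsum('yxd,yxd->yx', u, u)
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

def bgk_step(f, tau):
    """One BGK collision + streaming step with relaxation time tau (periodic domain)."""
    rho = f.sum(axis=-1)
    u = np.einsum('yxq,qd->yxd', f, c) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau             # collision
    for q, (cx, cy) in enumerate(c):                    # streaming to neighbouring nodes
        f[..., q] = np.roll(np.roll(f[..., q], cy, axis=0), cx, axis=1)
    return f

# Quiescent initial state on a small periodic grid (illustrative values)
ny, nx = 64, 64
f = equilibrium(np.ones((ny, nx)), np.zeros((ny, nx, 2)))
for _ in range(100):
    f = bgk_step(f, tau=0.8)
```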
Boundary conditions for simulating large SAW devices using ANSYS.
Peng, Dasong; Yu, Fengqi; Hu, Jian; Li, Peng
2010-08-01
In this report, we propose improved substrate left and right boundary conditions for simulating SAW devices using ANSYS. Compared with previous methods, the proposed method can greatly reduce computation time. Furthermore, the longer the distance from the first reflector to the last one, the more computation time is saved. To verify the proposed method, a design example is presented with a device center frequency of 971.14 MHz.
Adaptive time steps in trajectory surface hopping simulations
NASA Astrophysics Data System (ADS)
Spörkel, Lasse; Thiel, Walter
2016-05-01
Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
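The general idea of subdividing the nominal step only where a problematic region is detected can be sketched as follows; the toy harmonic oscillator propagator and the |x| < 0.1 criterion merely stand in for the TSH integrator and the orbital-mixing diagnostic of an actual surface hopping code, and are not part of the published protocol.

```python
def step(state, dt):
    """Toy velocity-Verlet step for a harmonic oscillator, standing in for the TSH propagator."""
    x, v = state
    a = -x
    x_new = x + v * dt + 0.5 * a * dt**2
    v_new = v + 0.5 * (a + (-x_new)) * dt
    return (x_new, v_new)

def in_critical_region(state):
    """Toy criterion standing in for the diagnostic that flags strong orbital mixing."""
    return abs(state[0]) < 0.1

def propagate(state, t_end, dt_full, n_max_substeps=16):
    """Use the nominal step outside critical regions and a finer step inside them."""
    t = 0.0
    while t < t_end:
        n_sub = n_max_substeps if in_critical_region(state) else 1
        dt = dt_full / n_sub
        for _ in range(n_sub):
            state = step(state, dt)
        t += dt_full
    return state

final_state = propagate((1.0, 0.0), t_end=10.0, dt_full=0.1)
```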
Free Flight Simulation: An Initial Examination of Air-Ground Integration Issues
NASA Technical Reports Server (NTRS)
Lozito, Sandra; McGann, Alison; Cashion, Patricia; Dunbar, Melisa; Mackintosh, Margaret; Dulchinos, Victoria; Jordan, Kevin; Remington, Roger (Technical Monitor)
2000-01-01
The concept of "free flight" is intended to emphasize more flexibility for operators in the National Airspace System (RTCA, 1995). This may include the potential for aircraft self-separation. The purpose of this simulation was to begin examining some of the communication and procedural issues associated with self-separation in an integrated air-ground environment. Participants were 10 commercial U.S. flight crews who flew the B747-400 simulator and 10 Denver ARTCC controllers who monitored traffic in an ATC simulation. A prototypic airborne alerting logic and flight deck display features were designed to allow for increased traffic and maneuvering information. Eight different scenarios representing different conflict types were developed. The effects of traffic density (high and low) and different traffic convergence angles (obtuse, acute, and right) were assessed. Conflict detection times were found to be lower for the flight crews in low-density compared to high-density scenarios. For the controllers, an interaction between density and convergence angle was revealed. Analyses of the controller detection times found longer detection times in the obtuse-angle high-density condition than in the obtuse-angle low-density condition, as well as the shortest detection times in the high-density acute-angle condition. Maneuvering and communication events are summarized, and a discussion of future research issues is provided.
Autonomous control of production networks using a pheromone approach
NASA Astrophysics Data System (ADS)
Armbruster, D.; de Beer, C.; Freitag, M.; Jagalski, T.; Ringhofer, C.
2006-04-01
The flow of parts through a production network is usually pre-planned by a central control system. Such central control fails in the presence of highly fluctuating demand and/or unforeseen disturbances. To manage such dynamic networks with low work-in-progress and short throughput times, an autonomous control approach is proposed. Autonomous control means a decentralized routing of the autonomous parts themselves. The parts' decisions are based on backward-propagated information about the throughput times of finished parts on different routes, so routes with shorter throughput times attract more parts. This process can be compared to ants leaving pheromones on their way to communicate with following ants. The paper focuses on a mathematical description of such autonomously controlled production networks. A fluid model with limited service rates in a general network topology is derived and compared to a discrete-event simulation model. Whereas the discrete-event simulation of production networks is straightforward, the formulation of the addressed scenario in terms of a fluid model is challenging. Here it is shown how several problems in a fluid model formulation (e.g. discontinuities) can be handled mathematically. Finally, some simulation results for the pheromone-based control with both the discrete-event simulation model and the fluid model are presented for a time-dependent influx.
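A minimal sketch of the pheromone-style routing rule described above follows: each route keeps an exponentially smoothed throughput time fed back by finished parts, and parts pick routes with probability proportional to the inverse of that value. The route names, smoothing constant, and selection rule are illustrative assumptions, not the exact update used in the paper.

```python
import random

class PheromoneRouter:
    """Routes with shorter backward-reported throughput times attract more parts."""

    def __init__(self, routes, evaporation=0.2):
        self.throughput = {r: 1.0 for r in routes}   # smoothed throughput time per route
        self.evaporation = evaporation

    def choose_route(self):
        # "Pheromone level" = inverse of the smoothed throughput time
        weights = {r: 1.0 / t for r, t in self.throughput.items()}
        total = sum(weights.values())
        pick, acc = random.random() * total, 0.0
        for route, weight in weights.items():
            acc += weight
            if pick <= acc:
                return route
        return route                                  # numerical safety fallback

    def report_finished_part(self, route, throughput_time):
        # Backward-propagated information from a finished part updates the route value
        old = self.throughput[route]
        self.throughput[route] = (1 - self.evaporation) * old + self.evaporation * throughput_time

router = PheromoneRouter(["line_A", "line_B"])
chosen = router.choose_route()
router.report_finished_part(chosen, throughput_time=5.0)
```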
Evaluation of acoustic testing techniques for spacecraft systems
NASA Technical Reports Server (NTRS)
Cockburn, J. A.
1971-01-01
External acoustic environments, structural responses, noise reductions, and the internal acoustic environments have been predicted for a typical shroud/spacecraft system during lift-off and various critical stages of flight. Spacecraft responses caused by energy transmission from the shroud via mechanical and acoustic paths have been compared and the importance of the mechanical path has been evaluated. Theoretical predictions have been compared extensively with available laboratory and in-flight measurements. Equivalent laboratory acoustic fields for simulation of shroud response during the various phases of flight have been derived and compared in detail. Techniques for varying the time-space correlations of laboratory acoustic fields have been examined, together with methods for varying the time and spatial distribution of acoustic amplitudes. Possible acoustic testing configurations for shroud/spacecraft systems have been suggested and trade-off considerations have been reviewed. The problem of simulating the acoustic environments versus simulating the structural responses has been considered and techniques for testing without the shroud installed have been discussed.
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for the conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias in the MD simulation results caused by inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data can serve as the training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data can provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states more robustly than learning from ensemble-averaged data, although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements, including single-molecule time-series trajectories.
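For orientation, the first step of the scheme (building an MSM from discretised MD trajectories, before any experimental refinement) can be sketched as follows; the state labels, lag time, and pseudocount are illustrative, and the machine-learning refinement against measurements is not shown.

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Row-stochastic transition matrix and stationary populations from a discretised trajectory.

    dtraj    : sequence of integer state labels obtained by clustering MD frames
    n_states : number of conformational states
    lag      : lag time in frames
    """
    counts = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i, j] += 1.0
    counts += 1e-8                                    # pseudocount to avoid empty rows
    T = counts / counts.sum(axis=1, keepdims=True)
    evals, evecs = np.linalg.eig(T.T)                 # left eigenvector with eigenvalue ~1
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return T, pi / pi.sum()                           # transition matrix, equilibrium populations

dtraj = [0, 0, 1, 1, 2, 1, 0, 0, 2, 2, 1, 0]          # hypothetical state sequence
T, populations = estimate_msm(dtraj, n_states=3)
```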
NASA Technical Reports Server (NTRS)
Seldner, K.
1976-01-01
The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multi-variable optimal controls research program using linear quadratic regulator (LQR) theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model; a technique for doing so is discussed, and selected results from the high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer, which will control the engine simulation over the desired flight envelope.
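As a pointer to how a linear quadratic regulator gain is obtained from such a linearised engine model, here is a small sketch using the continuous-time algebraic Riccati equation; the two-state A and B matrices and the weights are purely illustrative, not the F100 model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """State-feedback gain K for u = -K x minimising the integral of x'Qx + u'Ru."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Hypothetical reduced-order (2-state) linear model at one operating point
A = np.array([[-2.0, 1.0],
              [0.0, -5.0]])
B = np.array([[0.0],
              [1.0]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
```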
Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine
NASA Astrophysics Data System (ADS)
Sharma, Gulshan B.; Robertson, Douglas D.
2013-07-01
Shoulder arthroplasty success has been attributed to many factors including, bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design should withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study stresses and strains produced in implants and bone. However, these static analyses only measure a moment in time and not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally, simulating bone remodeling using an intact human scapula, initially resetting the scapular bone material properties to be uniform, numerically simulating sequential loading, and comparing the bone remodeling simulation results to the actual scapula's material properties. Three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint load and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent density were plotted and compared. Location of high and low predicted bone density was comparable to the actual specimen. High predicted bone density was greater than actual specimen. Low predicted bone density was lower than actual specimen. Differences were probably due to applied muscle and joint reaction loads, boundary conditions, and values of constants used. Work is underway to study this. Nonetheless, the results demonstrate three dimensional bone remodeling simulation validity and potential. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses together with adaptive bone remodeling simulations to become effective tools for regenerative medicine research.
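The iterative density-update loop described above can be sketched in a few lines; the stand-in for the finite element solve, the remodelling rate, and the density bounds are illustrative assumptions, not the constants used in the study.

```python
import numpy as np

def remodel(density, strain_energy_density, ref_stimulus, rate=1.0,
            rho_min=0.01, rho_max=1.8, n_iter=10):
    """Strain-energy-driven internal remodelling: per-element density update.

    strain_energy_density : callable returning the element SED for a given density field
    ref_stimulus          : reference remodelling stimulus (SED per unit mass)
    """
    rho = density.copy()
    for _ in range(n_iter):
        stimulus = strain_energy_density(rho) / rho       # remodelling stimulus per element
        rho = rho + rate * (stimulus - ref_stimulus)      # densify where overloaded, resorb where underloaded
        rho = np.clip(rho, rho_min, rho_max)              # keep densities within physiological bounds
    return rho

# Toy stand-in for the FE solve: strain energy density falls as elements become denser
sed = lambda rho: 0.05 / rho
rho0 = np.full(100, 0.8)                                  # start from a homogeneous density
rho_final = remodel(rho0, sed, ref_stimulus=0.02)
```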
Real-time global MHD simulation of the solar wind interaction with the earth's magnetosphere
NASA Astrophysics Data System (ADS)
Shimazu, H.; Tanaka, T.; Fujita, S.; Nakamura, M.; Obara, T.
We have developed a real-time global MHD simulation of the solar wind interaction with the earth's magnetosphere. By adopting the real-time solar wind parameters, including the IMF, observed routinely by the ACE spacecraft, responses of the magnetosphere are calculated with the MHD code. We adopted the modified spherical coordinates, and the mesh point numbers for this simulation are 56, 58, and 40 for the r, theta, and phi directions, respectively. The simulation is carried out routinely on the supercomputer system NEC SX-6 at the National Institute of Information and Communications Technology, Japan. The visualized images of the magnetic field lines around the earth, the pressure distribution on the meridian plane, and the conductivity of the polar ionosphere can be referred to on the Web site http://www.nict.go.jp/dk/c232/realtime. The results show that various magnetospheric activities are almost reproduced qualitatively. They also give us information on how geomagnetic disturbances develop in the magnetosphere in relation with the ionosphere. From the viewpoint of space weather, the real-time simulation helps us to understand the whole image of the current condition of the magnetosphere. To evaluate the simulation results, we compare the AE index derived from the simulation and observations. In the case of isolated substorms, the indices agreed well in both timing and intensity. In other cases, the simulation can predict general activities, although the exact timing of the onset of substorms and the intensities did not always agree. By analyzing...
Compressive Spectral Method for the Simulation of the Nonlinear Gravity Waves
Bayındır, Cihan
2016-01-01
In this paper an approach for decreasing the computational effort required for spectral simulations of fully nonlinear ocean waves is introduced. The proposed approach utilizes the compressive sampling algorithm and depends on the idea of using a smaller number of spectral components compared to the classical spectral method. After performing the time integration with a smaller number of spectral components and using the compressive sampling technique, it is shown that the ocean wave field can be reconstructed with significantly better efficiency compared to the classical spectral method. For the sparse ocean wave model in the frequency domain, fully nonlinear ocean waves with a JONSWAP spectrum are considered. By implementation of a high-order spectral method it is shown that the proposed methodology can simulate the linear and the fully nonlinear ocean waves with negligible difference in accuracy and with great efficiency, reducing the computation time significantly, especially for large time evolutions. PMID:26911357
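To illustrate the compressive sampling idea of recovering a wave field from fewer spectral components, here is a small one-dimensional sketch using orthogonal matching pursuit; the grid size, number of measurements, number of active modes, and the use of scikit-learn's OMP solver are all assumptions for illustration, not the high-order spectral scheme of the paper.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                         # grid points, measurements, active modes

# One-dimensional "wave field" built from a few random Fourier modes
k_active = rng.choice(np.arange(1, N // 2), size=K, replace=False)
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
eta = sum(rng.normal() * np.cos(k * x) + rng.normal() * np.sin(k * x) for k in k_active)

# Compressive measurements: the field sampled at M random locations
idx = rng.choice(N, size=M, replace=False)
y = eta[idx]

# Dictionary of cosine/sine modes evaluated at the sampled locations
ks = np.arange(1, N // 2)
D = np.concatenate([np.cos(np.outer(x[idx], ks)), np.sin(np.outer(x[idx], ks))], axis=1)

# Recover the sparse spectral coefficients, then reconstruct the full field
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2 * K).fit(D, y)
D_full = np.concatenate([np.cos(np.outer(x, ks)), np.sin(np.outer(x, ks))], axis=1)
eta_rec = D_full @ omp.coef_ + omp.intercept_
```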
Innovative methods for calculation of freeway travel time using limited data : final report.
DOT National Transportation Integrated Search
2008-01-01
Description: Travel time estimates created by processing simulated freeway loop detector data with the proposed method have been compared with travel times reported by the VISSIM model. An improved methodology was proposed to estimate freeway corrido...
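For context, the simplest loop-detector travel time estimate treats each detector's spot speed as constant over its segment; the sketch below implements that baseline with hypothetical segment lengths and speeds, and it is this kind of estimate that the report's improved methodology refines.

```python
def corridor_travel_time(segment_lengths_km, spot_speeds_kmh):
    """Baseline corridor travel time (s): each spot speed is assumed constant over its segment."""
    return sum(length / speed * 3600.0
               for length, speed in zip(segment_lengths_km, spot_speeds_kmh))

# Hypothetical corridor with three detector segments
tt_seconds = corridor_travel_time([0.8, 1.2, 1.0], [95.0, 60.0, 88.0])
```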
Shim, Sung J; Kumar, Arun; Jiao, Roger
2016-01-01
A hospital is considering deploying a radiofrequency identification (RFID) system and setting up a new "discharge lounge" to improve the patient discharge process. This study uses computer simulation to model and compare the current process and the new process, and it assesses the impact of the RFID system and the discharge lounge on the process in terms of resource utilization and time taken in the process. The simulation results regarding resource utilization suggest that the RFID system can slightly relieve the burden on all resources, whereas the RFID system and the discharge lounge together can significantly mitigate the nurses' tasks. The simulation results in terms of the time taken demonstrate that the RFID system can shorten patient wait times, staff busy times, and bed occupation times. The results of the study could prove helpful to others who are considering the use of an RFID system in the patient discharge process in hospitals or similar processes.
Fujii, Keisuke; Shinya, Masahiro; Yamashita, Daichi; Kouzaki, Motoki; Oda, Shingo
2014-01-01
We previously estimated the timing when ball game defenders detect relevant information through visual input for reacting to an attacker's running direction after a cutting manoeuvre, called cue timing. The purpose of this study was to investigate what specific information is relevant for defenders, and how defenders process this information to decide on their opponents' running direction. In this study, we hypothesised that defenders extract information regarding the position and velocity of the attackers' centre of mass (CoM) and the contact foot. We used a model which simulates the future trajectory of the opponent's CoM based upon an inverted pendulum movement. The hypothesis was tested by comparing observed defender's cue timing, model-estimated cue timing using the inverted pendulum model (IPM cue timing) and cue timing using only the current CoM position (CoM cue timing). The IPM cue timing was defined as the time when the simulated pendulum falls leftward or rightward given the initial values for position and velocity of the CoM and the contact foot at the time. The model-estimated IPM cue timing and the empirically observed defender's cue timing were comparable in median value and were significantly correlated, whereas the CoM cue timing was significantly more delayed than the IPM and the defender's cue timings. Based on these results, we discuss the possibility that defenders may be able to anticipate the future direction of an attacker by forwardly simulating inverted pendulum movement.
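A minimal sketch of the fall-direction prediction for a linearised inverted pendulum pivoting about the contact foot is given below; the linearisation, the commitment threshold, and the example numbers are simplifying assumptions and not the exact model evaluation used in the study.

```python
import numpy as np

def predicted_fall_direction(com_x, com_vx, foot_x, leg_length, g=9.81):
    """Side to which a linearised inverted pendulum falls, given the lateral CoM state.

    com_x, com_vx : lateral position and velocity of the centre of mass
    foot_x        : lateral position of the contact foot (the pivot)
    """
    omega = np.sqrt(g / leg_length)
    # Coefficient of the growing exponential mode of x'' = omega^2 * x (x measured from the foot)
    growth = (com_x - foot_x) + com_vx / omega
    if abs(growth) < 1e-9:
        return "undetermined"
    return "right" if growth > 0 else "left"

# Example: CoM 5 cm to the right of the foot but moving left at 0.4 m/s with a 0.9 m leg
direction = predicted_fall_direction(com_x=0.05, com_vx=-0.4, foot_x=0.0, leg_length=0.9)
```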
Adaptive temporal refinement in injection molding
NASA Astrophysics Data System (ADS)
Karyofylli, Violeta; Schmitz, Mauritius; Hopmann, Christian; Behr, Marek
2018-05-01
Mold filling is an injection molding stage of great significance, because many defects of the plastic components (e.g. weld lines, burrs or insufficient filling) can occur during this process step. Therefore, it plays an important role in determining the quality of the produced parts. Our goal is the temporal refinement in the vicinity of the evolving melt front, in the context of 4D simplex-type space-time grids [1, 2]. This novel discretization method has an inherent flexibility to employ completely unstructured meshes with varying levels of resolution both in spatial dimensions and in the time dimension, thus allowing the use of local time-stepping during the simulations. This can lead to a higher simulation precision, while preserving calculation efficiency. A 3D benchmark case, which concerns the filling of a plate-shaped geometry, is used for verifying our numerical approach [3]. The simulation results obtained with the fully unstructured space-time discretization are compared to those obtained with the standard space-time method and to Moldflow simulation results. This example also serves for providing reliable timing measurements and the efficiency aspects of the filling simulation of complex 3D molds while applying adaptive temporal refinement.
NASA Astrophysics Data System (ADS)
Magaldi, Marcello G.; Haine, Thomas W. N.
2015-02-01
The cascade of dense waters of the Southeast Greenland shelf during summer 2003 is investigated with two very high-resolution (0.5-km) simulations. The first simulation is non-hydrostatic. The second simulation is hydrostatic and about 3.75 times less expensive. Both simulations are compared to a 2-km hydrostatic run, about 31 times less expensive than the 0.5 km non-hydrostatic case. Time-averaged volume transport values for deep waters are insensitive to the changes in horizontal resolution and vertical momentum dynamics. By this metric, both lateral stirring and vertical shear instabilities associated with the cascading process are accurately parameterized by the turbulent schemes used at 2-km horizontal resolution. All runs compare well with observations and confirm that the cascade is mainly driven by cyclones which are linked to dense overflow boluses at depth. The passage of the cyclones is also associated with the generation of internal gravity waves (IGWs) near the shelf. Surface fields and kinetic energy spectra do not differ significantly between the runs for horizontal scales L > 30 km. Complex structures emerge and the spectra flatten at scales L < 30 km in the 0.5-km runs. In the non-hydrostatic case, additional energy is found in the vertical kinetic energy spectra at depth in the 2 km < L < 10 km range and with frequencies around 7 times the inertial frequency. This enhancement is missing in both hydrostatic runs and is here argued to be due to the different IGW evolution and propagation offshore. The different IGW behavior in the non-hydrostatic case has strong implications for the energetics: compared to the 2-km case, the baroclinic conversion term and vertical kinetic energy are about 1.4 and at least 34 times larger, respectively. This indicates that the energy transfer from the geostrophic eddy field to IGWs and their propagation away from the continental slope is not properly represented in the hydrostatic runs.
NASA Astrophysics Data System (ADS)
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.
2017-10-01
Two single-column models are fully coupled via the weak-temperature gradient approach. The coupled-SCM is used to simulate the transition from suppressed to active convection under the influence of an interactive large-scale circulation. The sensitivity of this transition to the value of mixing entrainment within the convective parameterization is explored. The results from these simulations are compared with those from equivalent simulations using coupled cloud-resolving models. Coupled-column simulations over nonuniform surface forcing are used to initialize the simulations of the transition, in which the column with suppressed convection is forced to undergo a transition to active convection by changing the local and/or remote surface forcings. The direct contributions from the changes in surface forcing are to induce a weakening of the large-scale circulation which systematically modulates the transition. In the SCM, the contributions from the large-scale circulation are dominated by the heating effects, while in the CRM the heating and moistening effects are about equally divided. A transition time is defined as the time when the rain rate in the dry column is halfway to the value at equilibrium after the transition. For the control value of entrainment, the order of the transition times is identical to that obtained in the CRM, but the transition times are markedly faster. The locally forced transition is strongly delayed by a higher entrainment. A consequence is that for a 50% higher entrainment the transition times are reordered. The remotely forced transition remains fast while the locally forced transition becomes slow, compared to the CRM.
CFD simulation of a dry scroll vacuum pump with clearances, solid heating and thermal deformation
NASA Astrophysics Data System (ADS)
Spille-Kohoff, A.; Hesse, J.; Andres, R.; Hetze, F.
2017-08-01
Although dry scroll vacuum pumps (DSVP) are essential devices in many different industrial processes, the CFD simulation of such pumps is not widely used and often restricted to simplified cases due to its complexity: the working principle with a fixed and an orbiting scroll leads to working chambers that change in time and are connected through moving small radial and axial clearances in the range of 10 to 100 μm. Due to the low densities and low mass flow rates in vacuum pumps, it is important to include heat transfer towards and inside the solid components. Solid heating is very slow compared to the scroll revolution speed and the gas behaviour, thus a special workflow is necessary to reach the working conditions in reasonable simulation times. The resulting solid temperature is then used to compute the thermal deformation, which usually results in gap size changes that influence leakage flows. In this paper, setup steps and results for the simulation of a DSVP are shown and compared to theoretical and experimental results. The time-varying working chambers are meshed with TwinMesh, a hexahedral meshing programme for positive displacement machines. The CFD simulation with ANSYS CFX accounts for gas flow with compressibility and turbulence effects, conjugate heat transfer between gas and solids, and leakage flows through the clearances. Time-resolved results for torques, chamber pressure, mass flow, and heat flow between gas and solids are shown, as well as time- and space-resolved results for pressure, velocity, and temperature for different operating conditions of the DSVP.
Contextual interference effect on perceptual-cognitive skills training.
Broadbent, David P; Causer, Joe; Ford, Paul R; Williams, A Mark
2015-06-01
The contextual interference (CI) effect predicts that a random order of practice for multiple skills is superior for learning compared to a blocked order. We report a novel attempt to examine the CI effect during acquisition and transfer of anticipatory judgments from simulation training to an applied sport situation. Participants were required to anticipate tennis shots under either a random or a blocked practice schedule. Response accuracy was recorded for both groups in a pretest, during acquisition, and on a 7-d retention test. Transfer of learning was assessed through a field-based tennis protocol that attempted to assess performance in an applied sport setting. The random practice group had significantly higher response accuracy scores on the 7-d laboratory retention test compared to the blocked group. Moreover, during the transfer of anticipatory judgments to an applied sport situation, the decision times of the random practice group were significantly lower compared to the blocked group. The CI effect extends to the training of anticipatory judgments through simulation techniques. Furthermore, we demonstrate for the first time that the CI effect increases transfer of learning from simulation training to an applied sport task, highlighting the importance of using appropriate practice schedules during simulation training.
Simulation of a small muon tomography station system based on RPCs
NASA Astrophysics Data System (ADS)
Chen, S.; Li, Q.; Ma, J.; Kong, H.; Ye, Y.; Gao, J.; Jiang, Y.
2014-10-01
In this work, Monte Carlo simulations were used to study the performance of a small muon tomography station based on four glass resistive plate chambers (RPCs) with a spatial resolution of approximately 1.0 mm (FWHM). We developed a simulation code to generate cosmic ray muons with the appropriate distribution of energies and angles. The PoCA and EM algorithms were used to reconstruct the objects for comparison. We compared the Z discrimination time with and without muon momentum measurement. The relation between Z discrimination time and spatial resolution was also studied. Simulation results suggest that the mean scattering angle is a better Z indicator and that upgrading to larger RPCs will improve reconstructed image quality.
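For reference, the PoCA reconstruction assigns each muon a single scattering point at the point of closest approach between its incoming and outgoing tracks; below is a small sketch of that geometric step, with the track points and directions as hypothetical inputs (detector fitting, voxelisation, and the EM algorithm are not shown).

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Estimated scattering point and scattering angle from two muon track segments.

    p_in, d_in   : a point on and the direction of the incoming track
    p_out, d_out : a point on and the direction of the outgoing track
    """
    d_in = d_in / np.linalg.norm(d_in)
    d_out = d_out / np.linalg.norm(d_out)
    w0 = p_in - p_out
    b = np.dot(d_in, d_out)
    d, e = np.dot(d_in, w0), np.dot(d_out, w0)
    denom = 1.0 - b * b                       # directions are unit vectors
    if denom < 1e-12:                         # near-parallel tracks: negligible scattering
        s, t = 0.0, 0.0
    else:
        s = (b * e - d) / denom
        t = (e - b * d) / denom
    closest_in = p_in + s * d_in
    closest_out = p_out + t * d_out
    scatter_point = 0.5 * (closest_in + closest_out)
    angle = np.arccos(np.clip(b, -1.0, 1.0))  # scattering angle between the two directions
    return scatter_point, angle

point, theta = poca(np.array([0.0, 0.0, -100.0]), np.array([0.0, 0.0, 1.0]),
                    np.array([1.0, 0.0, 100.0]), np.array([0.01, 0.0, 1.0]))
```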
The new ATLAS Fast Calorimeter Simulation
NASA Astrophysics Data System (ADS)
Schaarschmidt, J.; ATLAS Collaboration
2017-10-01
Current and future needs for large-scale simulated samples motivate the development of reliable fast simulation techniques. The new Fast Calorimeter Simulation is an improved parameterized response of single particles in the ATLAS calorimeter that aims to accurately emulate the key features of the detailed calorimeter response as simulated with Geant4, yet approximately ten times faster. Principal component analysis and machine learning techniques are used to improve the performance and decrease the memory footprint compared to the current version of the ATLAS Fast Calorimeter Simulation. A prototype of this new Fast Calorimeter Simulation is in development, and its integration into the ATLAS simulation infrastructure is ongoing.
Judd, Belinda Karyn; Alison, Jennifer Ailsey; Waters, Donna; Gordon, Christopher James
2016-08-01
Simulation-based clinical education often aims to replicate varying aspects of real clinical practice. It is unknown whether learners' stress levels in simulation are comparable with those in clinical practice. The current study compared acute stress markers during simulation-based clinical education with those experienced in situ in a hospital-based environment. Undergraduate physiotherapy students' (n = 33) acute stress responses [visual analog scales of stress and anxiety, continuous heart rate (HR), and saliva cortisol] were assessed during matched patient encounters in simulation-based laboratories using standardized patients and during hospital clinical placements with real patients. Group differences in stress variables were compared using repeated measures analysis of variance for 3 time points (before, during, and after the patient encounter) at 2 settings (simulation and hospital). Visual analog scale stress and anxiety as well as HR increased significantly from baseline levels before the encounter in both settings (all P < 0.05). Stress and anxiety were significantly higher in simulation [mean (SD), 45 (22) and 44 (25) mm; P = 0.003] compared with hospital [mean (SD), 31 (21) and 26 (20) mm; P = 0.002]. The mean (SD) HR during the simulation patient encounter was 90 (16) beats per minute and was not different from that in hospital [mean (SD), 87 (15) beats per minute; P = 0.89]. Changes in salivary cortisol before and after patient encounters were not statistically different between settings [mean (SD) simulation, 1.5 (2.4) nmol/L; hospital, 2.5 (2.9) nmol/L; P = 0.70]. Participants experienced stress on clinical placements irrespective of the clinical education setting (simulation vs. hospital). This study revealed that psychological stress and anxiety were greater during simulation compared with hospital settings; however, physiological stress responses (HR and cortisol) were comparable. These results indicate that psychological stress may be heightened in simulation, and health professional educators need to consider the impact of this on learners in simulation-based clinical education. New learners in their clinical education program may benefit from a less stressful simulation environment, before a gradual increase in stress demands as they approach clinical practice.
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of large-scale simulations. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal code subregion in turn. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA achieved computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to perform a country-wide simulation. Thus, this parallel algorithm makes it possible to use ABMs for large-scale simulation with limited computational resources.
Bogle, Brittany M; Asimos, Andrew W; Rosamond, Wayne D
2017-10-01
The Severity-Based Stroke Triage Algorithm for Emergency Medical Services endorses routing patients with suspected large vessel occlusion acute ischemic strokes directly to endovascular stroke centers (ESCs). We sought to evaluate different specifications of this algorithm within a region. We developed a discrete event simulation environment to model patients with suspected stroke transported according to algorithm specifications, which varied by stroke severity screen and permissible additional transport time for routing patients to ESCs. We simulated King County, Washington, and Mecklenburg County, North Carolina, distributing patients geographically into census tracts. Transport time to the nearest hospital and ESC was estimated using traffic-based travel times. We assessed undertriage, overtriage, transport time, and the number-needed-to-route, defined as the number of patients enduring additional transport to route one large vessel occlusion patient to an ESC. Undertriage was higher and overtriage was lower in King County compared with Mecklenburg County for each specification. Overtriage variation was primarily driven by screen (eg, 13%-55% in Mecklenburg County and 10%-40% in King County). Transportation time specifications beyond 20 minutes increased overtriage and decreased undertriage in King County but not Mecklenburg County. A low- versus high-specificity screen routed 3.7× more patients to ESCs. Emergency medical services spent nearly twice the time routing patients to ESCs in King County compared with Mecklenburg County. Our results demonstrate how discrete event simulation can facilitate informed decision making to optimize emergency medical services stroke severity-based triage algorithms. This is the first step toward developing a mature simulation to predict patient outcomes. © 2017 American Heart Association, Inc.
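To make the routing metrics above concrete, the number-needed-to-route and the triage error rates can be computed directly from simulated transport records. A hedged sketch (the record field names are hypothetical, and the routing rule is simplified to a positive screen plus an additional-transport-time limit):

    def triage_metrics(patients):
        """Undertriage, overtriage and number-needed-to-route from simulated
        transports.  Each patient dict is assumed to carry: 'lvo' (true large
        vessel occlusion), 'screen_positive', 'extra_minutes' (added transport
        time to the nearest ESC) and 'max_extra' (the algorithm's limit)."""
        routed = [p for p in patients
                  if p["screen_positive"] and p["extra_minutes"] <= p["max_extra"]]
        lvo = [p for p in patients if p["lvo"]]
        undertriage = sum(1 for p in lvo if p not in routed) / max(1, len(lvo))
        overtriage = sum(1 for p in routed if not p["lvo"]) / max(1, len(routed))
        # patients enduring additional transport per LVO patient delivered to an ESC
        nnr = len(routed) / max(1, sum(1 for p in routed if p["lvo"]))
        return undertriage, overtriage, nnr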
Model Free iPID Control for Glycemia Regulation of Type-1 Diabetes.
MohammadRidha, Taghreed; Ait-Ahmed, Mourad; Chaillous, Lucy; Krempf, Michel; Guilhem, Isabelle; Poirier, Jean-Yves; Moog, Claude H
2018-01-01
The objective is to design a fully automated glycemia controller for Type-1 Diabetes (T1D), covering both fasting and postprandial phases, evaluated on a large number of virtual patients. A model-free intelligent proportional-integral-derivative (iPID) controller is used to infuse insulin. The feasibility of iPID is tested in silico on two simulators, with and without measurement noise. The first simulator is derived from a long-term linear time-invariant model. The controller is also validated on the UVa/Padova metabolic simulator on 10 adults, with 25 runs per subject, for the noise-robustness test. Without measurement noise, iPID mimicked normal pancreatic secretion with a relatively fast reaction to meals compared with a standard PID. With the UVa/Padova simulator, robustness against CGM noise was tested. A higher percentage of time in target was obtained with iPID than with a standard PID, with reduced time spent in hyperglycemia. Tests on two different T1D simulators showed that iPID detects meals and reacts faster to meal perturbations than a classic PID. The intelligent part makes the controller more aggressive immediately after meals without neglecting safety. Further research is suggested to improve the computation of the intelligent part of iPID for such systems under actuator constraints; any improvement can impact the overall performance of the model-free controller. The simple-structure iPID is a step forward for PID-like controllers since it combines the classic PID's desirable properties with new adaptive features.
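For readers unfamiliar with model-free control, the iPID idea can be sketched as follows: the glucose dynamics are lumped into an ultra-local model dy/dt = F + alpha*u, F is re-estimated at every step from recent measurements, and the control cancels it while imposing PID behaviour on the tracking error. A generic single-step update in that spirit (the gains, alpha, sampling scheme and insulin constraints used in the paper are not reproduced here):

    def ipid_step(y, y_prev, u_prev, y_ref, e_int, e_prev, dt,
                  alpha=0.1, kp=1.0, ki=0.05, kd=0.2):
        """One update of a model-free intelligent PID (iPID) built on the
        ultra-local model dy/dt = F + alpha*u.  All numerical values are
        placeholders for illustration only."""
        y_dot = (y - y_prev) / dt          # numerical derivative of measured glycemia
        f_hat = y_dot - alpha * u_prev     # estimate of the unmodelled dynamics F
        e = y - y_ref
        e_int = e_int + e * dt
        e_dot = (e - e_prev) / dt
        u = (-f_hat - kp * e - ki * e_int - kd * e_dot) / alpha   # insulin command
        return max(0.0, u), e_int, e       # insulin delivery cannot be negative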
Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.
1988-01-01
A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment, E[R^m], is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psds. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and superposing the two resulting extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values of E[R^m] for bimodal psds having the frequency of one mode at least 2.5 times that of the other mode.
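The core of the approach above is that simulating the extrema directly, rather than the whole Gaussian time history, is enough to estimate E[R^m]. A toy version of that idea, assuming an AR(1) surrogate for the extrema sequence (the dissertation fits the autoregressive coefficients to the target power spectral density, which is omitted here):

    import numpy as np

    rng = np.random.default_rng(0)

    def ar1_extrema(n, phi=0.6, sigma=1.0):
        """Schematic AR(1) surrogate for a sequence of process extrema."""
        x = np.zeros(n)
        for k in range(1, n):
            x[k] = phi * x[k - 1] + sigma * rng.standard_normal()
        return x

    def range_moment(extrema, m=3):
        """Stress-range moment E[R^m] from successive peak-to-valley ranges."""
        ranges = np.abs(np.diff(extrema))
        return np.mean(ranges ** m)

    print(range_moment(ar1_extrema(100_000), m=3))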
Simulating Microbial Community Patterning Using Biocellion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Seung-Hwa; Kahan, Simon H.; Momeni, Babak
2014-04-17
Mathematical modeling and computer simulation are important tools for understanding complex interactions between cells and their biotic and abiotic environment: similarities and differences between modeled and observed behavior provide the basis for hypothesis formation. Momeni et al. [5] investigated pattern formation in communities of yeast strains engaging in different types of ecological interactions, comparing the predictions of mathematical modeling and simulation to actual patterns observed in wet-lab experiments. However, simulations of millions of cells in a three-dimensional community are extremely time-consuming. One simulation run in MATLAB may take a week or longer, inhibiting exploration of the vast space of parameter combinations and assumptions. Improving the speed, scale, and accuracy of such simulations facilitates hypothesis formation and expedites discovery. Biocellion is a high performance software framework for accelerating discrete agent-based simulation of biological systems with millions to trillions of cells. Simulations of comparable scale and accuracy to those taking a week of computer time using MATLAB require just hours using Biocellion on a multicore workstation. Biocellion further accelerates large scale, high resolution simulations using cluster computers by partitioning the work to run on multiple compute nodes. Biocellion targets computational biologists who have mathematical modeling backgrounds and basic C++ programming skills. This chapter describes the necessary steps to adapt the original Momeni et al. model to the Biocellion framework as a case study.
Turbine-99 unsteady simulations - Validation
NASA Astrophysics Data System (ADS)
Cervantes, M. J.; Andersson, U.; Lövgren, H. M.
2010-08-01
The Turbine-99 test case, a Kaplan draft tube model, aimed to determine the state of the art within draft tube simulation. Three workshops were organized on the matter, in 1999, 2001 and 2005, where the geometry and experimental data were provided as boundary conditions to the participants. Since the last workshop, computational power and flow modelling have developed, and the available data have been completed with unsteady pressure measurements and phase-resolved velocity measurements in the cone. This new set of data, together with the corresponding phase-resolved velocity boundary conditions, offers new possibilities to validate unsteady numerical simulations in Kaplan draft tubes. The present work presents simulations of the Turbine-99 test case with time-dependent, angularly resolved inlet velocity boundary conditions. Different grids and time steps are investigated. The results are compared to experimental time-dependent pressure and velocity measurements.
Schwartz, L M; Bergman, D J; Dunn, K J; Mitra, P P
1996-01-01
Random walk computer simulations are an important tool in understanding magnetic resonance measurements in porous media. In this paper we focus on the description of pulsed field gradient spin echo (PGSE) experiments that measure the probability, P(R,t), that a diffusing water molecule will travel a distance R in a time t. Because PGSE simulations are often limited by statistical considerations, we will see that valuable insight can be gained by working with simple periodic geometries and comparing simulation data to the results of exact eigenvalue expansions. In this connection, our attention will be focused on (1) the wavevector, k, and time dependent magnetization, M(k, t); and (2) the normalized probability, Ps(delta R, t), that a diffusing particle will return to within delta R of the origin after time t.
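The quantity P(R, t) discussed above is straightforward to estimate by Monte Carlo once the geometry is specified; the statistical limitation the authors mention comes from how many walkers are needed to resolve the tails. A bare-bones free-diffusion sketch of the return probability Ps(delta R, t), with the pore walls and periodic grain geometry of the paper deliberately omitted:

    import numpy as np

    rng = np.random.default_rng(1)

    def return_probability(n_walkers=100_000, n_steps=500, step=0.1, delta_r=0.5):
        """Fraction of random walkers that end within delta_r of their origin
        after n_steps (free diffusion only; restricting boundaries omitted)."""
        pos = np.zeros((n_walkers, 3))
        for _ in range(n_steps):
            pos += step * rng.standard_normal((n_walkers, 3))
        return np.mean(np.linalg.norm(pos, axis=1) < delta_r)

    print(return_probability())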
International benchmarking of longitudinal train dynamics simulators: results
NASA Astrophysics Data System (ADS)
Wu, Qing; Spiryagin, Maksym; Cole, Colin; Chang, Chongyi; Guo, Gang; Sakalo, Alexey; Wei, Wei; Zhao, Xubao; Burgelman, Nico; Wiersma, Pier; Chollet, Hugues; Sebes, Michel; Shamdani, Amir; Melzi, Stefano; Cheli, Federico; di Gialleonardo, Egidio; Bosso, Nicola; Zampieri, Nicolò; Luo, Shihui; Wu, Honghua; Kaza, Guy-Léon
2018-03-01
This paper presents the results of the International Benchmarking of Longitudinal Train Dynamics Simulators which involved participation of nine simulators (TABLDSS, UM, CRE-LTS, TDEAS, PoliTo, TsDyn, CARS, BODYSIM and VOCO) from six countries. Longitudinal train dynamics results and computing time of four simulation cases are presented and compared. The results show that all simulators had basic agreement in simulations of locomotive forces, resistance forces and track gradients. The major differences among different simulators lie in the draft gear models. TABLDSS, UM, CRE-LTS, TDEAS, TsDyn and CARS had general agreement in terms of the in-train forces; minor differences exist as reflections of draft gear model variations. In-train force oscillations were observed in VOCO due to the introduction of wheel-rail contact. In-train force instabilities were sometimes observed in PoliTo and BODYSIM due to the velocity controlled transitional characteristics which could have generated unreasonable transitional stiffness. Regarding computing time per train operational second, the following list is in order of increasing computing speed: VOCO, TsDyn, PoliTO, CARS, BODYSIM, UM, TDEAS, CRE-LTS and TABLDSS (fastest); all simulators except VOCO, TsDyn and PoliTo achieved faster speeds than real-time simulations. Similarly, regarding computing time per integration step, the computing speeds in order are: CRE-LTS, VOCO, CARS, TsDyn, UM, TABLDSS and TDEAS (fastest).
NASA Technical Reports Server (NTRS)
Joncas, K. P.
1972-01-01
Concepts and techniques for identifying and simulating both the steady state and dynamic characteristics of electrical loads for use during integrated system test and evaluation are discussed. The investigations showed that it is feasible to design and develop interrogation and simulation equipment to perform the desired functions. During the evaluation, actual spacecraft loads were interrogated by stimulating the loads with their normal input voltage and measuring the resultant voltage and current time histories. Elements of the circuits were optimized by an iterative process of selecting element values and comparing the time-domain response of the model with those obtained from the real equipment during interrogation.
The evolution of the simulation environment in the ALMA Observatory
NASA Astrophysics Data System (ADS)
Shen, Tzu-Chiang; Soto, Ruben; Saez, Norman; Velez, Gaston; Staig, Tomas; Sepulveda, Jorge; Saez, Alejandro; Ovando, Nicolas; Ibsen, Jorge
2016-07-01
The Atacama Large Millimeter/submillimeter Array (ALMA) has been in its operations phase since 2013. This transition changed the priorities within the observatory: most of the available time is now dedicated to science observations at the expense of technical time. Therefore, it was planned to design and implement a new simulation environment, which must be comparable to, or at least representative of, the production environment. Concepts of model in the loop and hardware in the loop were explored. In this paper we review experiences gained and lessons learnt during the design and implementation of the new simulation environment.
Boza, Camilo; León, Felipe; Buckel, Erwin; Riquelme, Arnoldo; Crovari, Fernando; Martínez, Jorge; Aggarwal, Rajesh; Grantcharov, Teodor; Jarufe, Nicolás; Varas, Julián
2017-01-01
Multiple simulation training programs have demonstrated that effective transfer of skills can be attained and applied to a more complex scenario, but evidence regarding transfer to the operating room is limited. To assess junior residents trained with simulation performing an advanced laparoscopic procedure in the OR and compare their results to those of general surgeons without simulation training and expert laparoscopic surgeons. Experimental study: after a validated 16-session advanced laparoscopy simulation training program, junior trainees were compared to general surgeons (GS) with no simulation training and expert bariatric surgeons (BS) in performing a stapled jejuno-jejunostomy (JJO) in the OR. Global rating scale (GRS) and specific rating scale scores, operative time, and the distance traveled by both hands (measured with a tracking device) were assessed. In addition, all perioperative and immediate postoperative morbidities were registered. Ten junior trainees, 12 GS, and 5 BS experts were assessed performing a JJO in the OR. All trainees completed the entire JJO in the OR without any takeovers by the BS. Six (50%) BS takeovers took place in the GS group. Trainees had significantly better results in all measured outcomes when compared to GS, with a considerably higher GRS median [19.5 (18.8-23.5) vs. 12 (9-13.8), p < 0.001] and lower operative time. One morbidity was registered; a patient in the trainee group was readmitted on postoperative day 10 for mechanical ileus that resolved with medical treatment. This study demonstrated transfer of advanced laparoscopic skills, acquired through a simulated training program, by novice surgical residents to the OR.
NASA Astrophysics Data System (ADS)
Rodgers-Lee, D.; Ray, T. P.; Downes, T. P.
2016-11-01
The redistribution of angular momentum is a long-standing problem in our understanding of protoplanetary disc (PPD) evolution. The magnetorotational instability (MRI) is considered a likely mechanism. We present the results of a study involving multifluid global simulations including Ohmic dissipation, ambipolar diffusion and the Hall effect in a dynamic, self-consistent way. We focus on the turbulence resulting from the non-linear development of the MRI in radially stratified PPDs and compare with ideal magnetohydrodynamics simulations. In the multifluid simulations, the disc is initially set up to transition from a weak Hall-dominated regime, where the Hall effect is the dominant non-ideal effect but approximately the same as or weaker than the inductive term, to a strong Hall-dominated regime, where the Hall effect dominates the inductive term. As the simulations progress, a substantial portion of the disc develops into a weak Hall-dominated disc. We find a transition from turbulent to laminar flow in the inner regions of the disc, but without any corresponding overall density feature. We introduce a dimensionless parameter, αRM, to characterize accretion, with αRM ≳ 0.1 corresponding to turbulent transport. We calculate the eddy turnover time, teddy, and compare it with an effective recombination time-scale, trcb, to determine whether the presence of turbulence necessitates non-equilibrium ionization calculations. We find that trcb is typically around three orders of magnitude smaller than teddy. Also, the ionization fraction does not vary appreciably. These two results suggest that these multifluid simulations should be comparable to single-fluid non-ideal simulations.
NASA Astrophysics Data System (ADS)
Maute, A. I.; Hagan, M. E.; Richmond, A. D.; Liu, H.; Yudin, V. A.
2014-12-01
The ionosphere-thermosphere system is affected by solar and magnetospheric processes and by meteorological variability. Ionospheric observations of total electron content during the current solar cycle have shown that variability associated with meteorological forcing is important during solar minimum and can have significant ionospheric effects during solar medium to maximum conditions. Numerical models can be used to study the comparative importance of geomagnetic and meteorological forcing. This study focuses on the January 2013 Stratospheric Sudden Warming (SSW) period, which is associated with a very disturbed middle atmosphere as well as with moderately disturbed solar and geomagnetic conditions. We employ the NCAR Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model (TIME-GCM) with a nudging scheme using Whole-Atmosphere-Community-Climate-Model-Extended (WACCM-X)/Goddard Earth Observing System Model, Version 5 (GEOS5) results to simulate the effects of the meteorological and solar wind forcing on the upper atmosphere. The model results are evaluated by comparison with observations, e.g., TEC, NmF2, and ion drifts. We study the effect of the SSW on the wave spectrum and the associated changes in the low-latitude vertical drifts. These changes are compared to the impact of the moderate geomagnetic forcing on the TI system during the January 2013 time period by conducting numerical experiments. We will present select highlights from our study and allude to the comparative importance of the forcing from above and below as simulated by the TIME-GCM.
Effects of Eddy Viscosity on Time Correlations in Large Eddy Simulation
NASA Technical Reports Server (NTRS)
He, Guowei; Rubinstein, R.; Wang, Lian-Ping; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
Subgrid-scale (SGS) models for large eddy simulation (LES) have generally been evaluated by their ability to predict single-time statistics of turbulent flows such as kinetic energy and Reynolds stresses. Recent applications of large eddy simulation to the evaluation of sound sources in turbulent flows, a problem in which time correlations determine the frequency distribution of acoustic radiation, suggest that subgrid models should also be evaluated by their ability to predict time correlations in turbulent flows. This paper compares the two-point, two-time Eulerian velocity correlation evaluated from direct numerical simulation (DNS) with that evaluated from LES, using a spectral eddy viscosity, for isotropic homogeneous turbulence. It is found that the LES fields are too coherent, in the sense that their time correlations decay more slowly than the corresponding time correlations in the DNS fields. This observation is confirmed by theoretical estimates of time correlations using the Taylor expansion technique. The reason for the slower decay is that the eddy viscosity does not include the random backscatter, which decorrelates fluid motion at large scales. An effective eddy viscosity associated with time correlations is formulated, to which the eddy viscosity associated with energy transfer is a leading-order approximation.
Kinematic Evolution of Simulated Star-Forming Galaxies
NASA Technical Reports Server (NTRS)
Kassin, Susan A.; Brooks, Alyson; Governato, Fabio; Weiner, Benjamin J.; Gardner, Jonathan P.
2014-01-01
Recent observations have shown that star-forming galaxies like our own Milky Way evolve kinematically into ordered thin disks over the last approximately 8 billion years since z = 1.2, undergoing a process of "disk settling." For the first time, we study the kinematic evolution of a suite of four state of the art "zoom in" hydrodynamic simulations of galaxy formation and evolution in a fully cosmological context and compare with these observations. Until now, robust measurements of the internal kinematics of simulated galaxies were lacking as the simulations suffered from low resolution, overproduction of stars, and overly massive bulges. The current generation of simulations has made great progress in overcoming these difficulties and is ready for a kinematic analysis. We show that simulated galaxies follow the same kinematic trends as real galaxies: they progressively decrease in disordered motions (sigma(sub g)) and increase in ordered rotation (V(sub rot)) with time. The slopes of the relations between both sigma(sub g) and V(sub rot) with redshift are consistent between the simulations and the observations. In addition, the morphologies of the simulated galaxies become less disturbed with time, also consistent with observations. This match between the simulated and observed trends is a significant success for the current generation of simulations, and a first step in determining the physical processes behind disk settling.
CUDA-based real time surgery simulation.
Liu, Youquan; De, Suvranu
2008-01-01
In this paper we present a general software platform that enables real-time surgery simulation on the newly available compute unified device architecture (CUDA) from NVIDIA. CUDA-enabled GPUs harness the power of 128 processors which allow data-parallel computations. Compared to previous GPGPU approaches, CUDA is significantly more flexible, with a C language interface. We report the implementation of both collision detection and the consequent deformation computation algorithms. Our test results indicate that CUDA enables a twenty-fold speedup for collision detection and about a fifteen-fold speedup for deformation computation on an Intel Core 2 Quad 2.66 GHz machine with a GeForce 8800 GTX.
Elbert, Yevgeniy; Burkom, Howard S
2009-11-20
This paper discusses further advances in making robust predictions with the Holt-Winters forecasts for a variety of syndromic time series behaviors and introduces a control-chart detection approach based on these forecasts. Using three collections of time series data, we compare biosurveillance alerting methods with quantified measures of forecast agreement, signal sensitivity, and time-to-detect. The study presents practical rules for initialization and parameterization of biosurveillance time series. Several outbreak scenarios are used for detection comparison. We derive an alerting algorithm from forecasts using Holt-Winters-generalized smoothing for prospective application to daily syndromic time series. The derived algorithm is compared with simple control-chart adaptations and to more computationally intensive regression modeling methods. The comparisons are conducted on background data from both authentic and simulated data streams. Both types of background data include time series that vary widely by both mean value and cyclic or seasonal behavior. Plausible, simulated signals are added to the background data for detection performance testing at signal strengths calculated to be neither too easy nor too hard to separate the compared methods. Results show that both the sensitivity and the timeliness of the Holt-Winters-based algorithm proved to be comparable or superior to that of the more traditional prediction methods used for syndromic surveillance.
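The alerting scheme described above pairs one-step-ahead Holt-Winters forecasts with a control-chart threshold on the forecast errors. A compact additive Holt-Winters sketch of that idea for a daily series with weekly seasonality (the smoothing constants, initialization rules and threshold are placeholders, not the values derived in the paper):

    import numpy as np

    def holt_winters_alerts(y, period=7, alpha=0.4, beta=0.0, gamma=0.15, k=3.0):
        """One-step-ahead additive Holt-Winters forecasts for a daily count
        series plus a control-chart alert when the observation exceeds the
        forecast by more than k standard deviations of recent errors."""
        y = np.asarray(y, dtype=float)
        level, trend = y[:period].mean(), 0.0
        season = list(y[:period] - level)
        forecasts, errors, alerts = [], [], []
        for t in range(period, len(y)):
            f = level + trend + season[t % period]
            err = y[t] - f
            sigma = np.std(errors[-8 * period:]) if errors else 1.0
            forecasts.append(f)
            alerts.append(err > k * max(sigma, 1e-9))
            errors.append(err)
            # standard additive Holt-Winters updates
            new_level = alpha * (y[t] - season[t % period]) + (1 - alpha) * (level + trend)
            trend = beta * (new_level - level) + (1 - beta) * trend
            season[t % period] = gamma * (y[t] - new_level) + (1 - gamma) * season[t % period]
            level = new_level
        return forecasts, alerts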
A reliability analysis of cardiac repolarization time markers.
Scacchi, S; Franzone, P Colli; Pavarino, L F; Taccardi, B
2009-06-01
Only a limited number of studies have addressed the reliability of extracellular markers of cardiac repolarization time, such as the classical marker RT(eg) defined as the time of maximum upslope of the electrogram T wave. This work presents an extensive three-dimensional simulation study of cardiac repolarization time, extending the previous one-dimensional simulation study of a myocardial strand by Steinhaus [B.M. Steinhaus, Estimating cardiac transmembrane activation and recovery times from unipolar and bipolar extracellular electrograms: a simulation study, Circ. Res. 64 (3) (1989) 449]. The simulations are based on the bidomain - Luo-Rudy phase I system with rotational fiber anisotropy and homogeneous or heterogeneous transmural intrinsic membrane properties. The classical extracellular marker RT(eg) is compared with the gold standard of fastest repolarization time RT(tap), defined as the time of minimum derivative during the downstroke of the transmembrane action potential (TAP). Additionally, a new extracellular marker RT90(eg) is compared with the gold standard of late repolarization time RT90(tap), defined as the time when the TAP reaches 90% of its resting value. The results show a good global match between the extracellular and transmembrane repolarization markers, with small relative mean discrepancy (
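For orientation, the markers compared above can be extracted from sampled signals with simple derivative operations. An illustrative sketch, assuming an electrogram trace, a transmembrane action potential (TAP) trace and a boolean mask selecting the repolarization window (signal names and the windowing are assumptions, not the paper's pipeline):

    import numpy as np

    def repolarization_markers(t, egm, tap, window):
        """RT(eg): time of maximum T-wave upslope of the electrogram.
        RT(tap): time of steepest TAP downstroke.
        RT90(tap): time when the TAP has recovered to within 10% of rest."""
        d_egm = np.gradient(egm, t)
        d_tap = np.gradient(tap, t)
        rt_eg = t[window][np.argmax(d_egm[window])]
        rt_tap = t[window][np.argmin(d_tap[window])]
        rest = tap[-1]                                   # resting value at end of beat
        amplitude = tap.max() - rest
        recovered = window & (tap <= rest + 0.1 * amplitude)
        rt90_tap = t[recovered][0] if recovered.any() else np.nan
        return rt_eg, rt_tap, rt90_tap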
Magee, Maclain J; Farkouh-Karoleski, Christiana; Rosen, Tove S
2018-04-01
Simulation training is an effective method to teach neonatal resuscitation (NR), yet many pediatrics residents do not feel comfortable with NR. Rapid cycle deliberate practice (RCDP) allows the facilitator to provide debriefing throughout the session. In RCDP, participants work through the scenario multiple times, eventually reaching more complex tasks once basic elements have been mastered. We determined whether pediatrics residents have improved observed abilities, confidence level, and recall in NR after receiving RCDP training compared to the traditional simulation debriefing method. Thirty-eight pediatrics interns from a large academic training program were randomized to a teaching simulation session using RCDP or simulation debriefing methods. The primary outcome was the intern's cumulative score on the initial Megacode Assessment Form (MCAF). Secondary outcome measures included surveys of confidence level, recall MCAF scores at 4 months, and time to perform critical interventions. Thirty-four interns were included in the analysis. Interns in the RCDP group had higher initial MCAF scores (89% versus 84%, P < .026), initiated positive pressure ventilation within 1 minute (100% versus 71%, P < .05), and administered epinephrine earlier (152 s versus 180 s, P < .039). Recall MCAF scores were not different between the 2 groups. Immediately following RCDP, interns had improved observed abilities and decreased time to perform critical interventions in NR simulation as compared to those trained with the simulation debriefing. RCDP was not superior in improving confidence level or retention.
Automated Metrics in a Virtual-Reality Myringotomy Simulator: Development and Construct Validity.
Huang, Caiwen; Cheng, Horace; Bureau, Yves; Ladak, Hanif M; Agrawal, Sumit K
2018-06-15
The objectives of this study were: 1) to develop and implement a set of automated performance metrics into the Western myringotomy simulator, and 2) to establish construct validity. Prospective simulator-based assessment study. The Auditory Biophysics Laboratory at Western University, London, Ontario, Canada. Eleven participants were recruited from the Department of Otolaryngology-Head & Neck Surgery at Western University: four senior otolaryngology consultants and seven junior otolaryngology residents. Educational simulation. Discrimination between expert and novice participants on five primary automated performance metrics: 1) time to completion, 2) surgical errors, 3) incision angle, 4) incision length, and 5) the magnification of the microscope. Automated performance metrics were developed, programmed, and implemented into the simulator. Participants were given a standardized simulator orientation and instructions on myringotomy and tube placement. Each participant then performed 10 procedures and automated metrics were collected. The metrics were analyzed using the Mann-Whitney U test with Bonferroni correction. All metrics discriminated senior otolaryngologists from junior residents with a significance of p < 0.002. Junior residents had 2.8 times more errors compared with the senior otolaryngologists. Senior otolaryngologists took significantly less time to completion compared with junior residents. The senior group also had significantly longer incision lengths, more accurate incision angles, and lower magnification keeping both the umbo and annulus in view. Automated quantitative performance metrics were successfully developed and implemented, and construct validity was established by discriminating between expert and novice participants.
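The statistical comparison reported above (Mann-Whitney U per metric with a Bonferroni-corrected threshold) is simple to reproduce on any set of per-procedure metrics. A hedged sketch using SciPy, with hypothetical metric dictionaries keyed by metric name:

    from scipy.stats import mannwhitneyu

    def compare_groups(expert_metrics, novice_metrics, alpha=0.05):
        """Two-sided Mann-Whitney U test for each metric, flagged as
        significant against a Bonferroni-corrected threshold."""
        threshold = alpha / len(expert_metrics)
        results = {}
        for name in expert_metrics:
            _, p = mannwhitneyu(expert_metrics[name], novice_metrics[name],
                                alternative="two-sided")
            results[name] = (p, p < threshold)
        return results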
Vaccaro, Christine M; Crisp, Catrina C; Fellner, Angela N; Jackson, Christopher; Kleeman, Steven D; Pavelka, James
2013-01-01
The objective of this study was to compare the effect of virtual reality simulation training plus robotic orientation versus robotic orientation alone on performance of surgical tasks using an inanimate model. Surgical resident physicians were enrolled in this assessor-blinded randomized controlled trial. Residents were randomized to receive either (1) robotic virtual reality simulation training plus standard robotic orientation or (2) standard robotic orientation alone. Performance of surgical tasks was assessed at baseline and after the intervention. Nine of 33 modules from the da Vinci Skills Simulator were chosen. Experts in robotic surgery evaluated each resident's videotaped performance of the inanimate model using the Global Rating Scale (GRS) and Objective Structured Assessment of Technical Skills-modified for robotic-assisted surgery (rOSATS). Nine resident physicians were enrolled in the simulation group and 9 in the control group. As a whole, participants improved their total time, time to incision, and suture time from baseline to repeat testing on the inanimate model (P = 0.001, 0.003, <0.001, respectively). Both groups improved their GRS and rOSATS scores significantly (both P < 0.001); however, the GRS overall pass rate was higher in the simulation group compared with the control group (89% vs 44%, P = 0.066). Standard robotic orientation and/or robotic virtual reality simulation improve surgical skills on an inanimate model, although this may be a function of the initial "practice" on the inanimate model and repeat testing of a known task. However, robotic virtual reality simulation training increases GRS pass rates consistent with improved robotic technical skills learned in a virtual reality environment.
2-D Modeling of Nanoscale MOSFETs: Non-Equilibrium Green's Function Approach
NASA Technical Reports Server (NTRS)
Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan
2001-01-01
We have developed physical approximations and computer code capable of realistically simulating 2-D nanoscale transistors, using the non-equilibrium Green's function (NEGF) method. This is the most accurate full quantum model yet applied to 2-D device simulation. Open boundary conditions and oxide tunneling are treated on an equal footing. Electrons in the ellipsoids of the conduction band are treated within the anisotropic effective mass approximation. Electron-electron interaction is treated within the Hartree approximation by solving the NEGF and Poisson equations self-consistently. For the calculations presented here, parallelization is performed by distributing the solution of the NEGF equations to various processors, energy-wise. We present simulations of the "benchmark" MIT 25nm and 90nm MOSFETs and compare our results to those from the drift-diffusion simulator and the available quantum-corrected results. In the 25nm MOSFET, the channel length is less than ten times the electron wavelength, and the electron scattering time is comparable to its transit time. Our main results are: (1) Simulated drain subthreshold current characteristics are shown, where the potential profiles are calculated self-consistently by the corresponding simulation methods. The current predicted by our quantum simulation has a smaller subthreshold slope of the Vg dependence, which results in a higher threshold voltage. (2) When the gate oxide thickness is less than 2 nm, gate oxide leakage is a primary factor that determines the off-current of a MOSFET. (3) Using our 2-D NEGF simulator, we found several ways to drastically decrease oxide leakage current without compromising drive current. (4) The quantum mechanically calculated electron density is much smaller than the background doping density in the polysilicon gate region near the oxide interface. This creates an additional effective gate voltage. Different ways to include this effect approximately will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurata, T; Ono, M; Kozono, K
2014-06-01
Purpose: The purpose of this study is to investigate the feasibility of a low-cost, small-size positioning assistance simulator system for skull radiography using the Microsoft Kinect sensor. A conventional radiographic simulator system can only measure the three-dimensional coordinates of an x-ray tube using angle sensors, and cannot measure the movement of the subject. Therefore, in this study, we developed a real-time simulator system using the Microsoft Kinect to measure both the x-ray tube and the subject, and evaluated its accuracy and feasibility by comparing the simulated and the measured x-ray images. Methods: This system can track a head phantom by using Face Tracking, which is one of the functions of the Kinect. The relative relationship between the Kinect and the head phantom was measured and the projection image was calculated by using the ray casting method and three-dimensional CT head data with 220 slices at 512 × 512 pixels. X-ray images were obtained by using a computed radiography (CR) system. We could then compare the simulated projection images with the measured x-ray images from 0 degrees to 45 degrees at increments of 15 degrees by calculating the cross-correlation coefficient C. Results: The calculation time of the simulated projection images was almost real-time (within 1 second) using the Graphics Processing Unit (GPU). The cross-correlation coefficients C are 0.916, 0.909, 0.891, and 0.886 at 0, 15, 30, and 45 degrees, respectively. As a result, there were strong correlations between the simulated and measured images. Conclusion: This system can be used to perform head positioning more easily and accurately. It is expected that this system will be useful for students learning radiographic techniques. Moreover, it could also be used for predicting the actual x-ray image prior to x-ray exposure in clinical environments.
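The figure of merit used above, the cross-correlation coefficient C between simulated and measured images, is the zero-lag normalized correlation. A minimal sketch, assuming both images are already registered and of the same size:

    import numpy as np

    def cross_correlation(simulated, measured):
        """Normalized cross-correlation coefficient C of two same-sized images."""
        s = simulated.astype(float).ravel()
        m = measured.astype(float).ravel()
        s -= s.mean()
        m -= m.mean()
        return float(np.dot(s, m) / (np.linalg.norm(s) * np.linalg.norm(m)))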
Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M
2011-03-01
The study evaluated the power of the randomized placebo-phase design (RPPD), a new design of randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, at some point all subjects receive the experimental therapy, and the exposure to placebo is only for a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, in which the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed under different response-time distributions. The scenario in which the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution. The RPPD requires more subjects to achieve a similar power to the parallel groups design. Copyright © 2011 Elsevier Inc. All rights reserved.
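The power-by-simulation logic used above can be outlined for the parallel-groups arm with exponential response times, using the medians quoted in the abstract. The rank-sum test below stands in for the paper's actual analysis model, so this is a schematic of the approach rather than a reproduction of it:

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(2)

    def parallel_group_power(n_per_arm=20, median_placebo=355.0, median_drug=42.0,
                             n_trials=2000, alpha=0.05):
        """Estimated power of a two-arm parallel design with exponentially
        distributed response times (an exponential with median m has scale m/ln 2)."""
        rejections = 0
        for _ in range(n_trials):
            placebo = rng.exponential(median_placebo / np.log(2), n_per_arm)
            drug = rng.exponential(median_drug / np.log(2), n_per_arm)
            _, p = mannwhitneyu(placebo, drug, alternative="two-sided")
            rejections += p < alpha
        return rejections / n_trials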
Investigation of the flight mechanics simulation of a hovering helicopter
NASA Technical Reports Server (NTRS)
Chaimovich, M.; Rosen, A.; Rand, O.; Mansur, M. H.; Tischler, M. B.
1992-01-01
The flight mechanics simulation of a hovering helicopter is investigated by comparing the results of two different numerical models with flight test data for a hovering AH-64 Apache. The two models are the U.S. Army BEMAP and the Technion model. These nonlinear models are linearized by applying a numerical linearization procedure. The results of the linear models are compared with identification results in terms of eigenvalues, stability and control derivatives, and frequency responses. Detailed time histories of the responses of the complete nonlinear models, resulting from various pilot inputs, are compared with flight test results. In addition, the sensitivity of the models to various effects is also investigated. The results are discussed and problematic aspects of the simulation are identified.
Accurate time delay technology in simulated test for high precision laser range finder
NASA Astrophysics Data System (ADS)
Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi
2015-10-01
With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRF) is becoming higher and higher, so the maintenance demand for LRFs is also rising. Following the guiding principle of "time represents spatial distance" in simulated testing of pulsed range finders, the key to distance simulation precision lies in the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method was proposed to improve the accuracy of the circuit delay without increasing the count frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit and an external delay circuit that can compensate the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the novel circuit delay method proposed in this paper was measured with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay is improved from +/- 0.75 m to +/- 0.15 m. The accuracy of the simulated distance is thus greatly improved in simulated testing of high-precision pulsed range finders.
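The "time represents distance" relation that underpins the simulator above is just the two-way light travel time: R = c*t/2, so a 1 ns delay error corresponds to roughly 0.15 m of simulated range error, consistent with the accuracy quoted. A trivial sketch:

    C = 299_792_458.0  # speed of light in vacuum, m/s

    def simulated_distance(delay_s):
        """Range represented by an echo delay: out-and-back, so R = c*t/2."""
        return C * delay_s / 2.0

    def delay_for_distance(distance_m):
        return 2.0 * distance_m / C

    print(simulated_distance(1e-9))   # ~0.15 m per nanosecond of delay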
A fast method for optical simulation of flood maps of light-sharing detector modules
Shi, Han; Du, Dong; Xu, JianFeng; Moses, William W.; Peng, Qiyu
2016-01-01
Optical simulation at the detector module level is highly desired for Positron Emission Tomography (PET) system design. Commonly used simulation toolkits such as GATE are not efficient in the optical simulation of detector modules with complicated light-sharing configurations, where a vast number of photons need to be tracked. We present a fast approach based on a simplified specular reflectance model and a structured light-tracking algorithm to speed up the photon tracking in detector modules constructed with a polished finish and specular reflector materials. We simulated conventional block detector designs with different slotted light guide patterns using the new approach and compared the outcomes with those from GATE simulations. While the two approaches generated comparable flood maps, the new approach was more than 200–600 times faster. The new approach has also been validated by constructing a prototype detector and comparing the simulated flood map with the experimental flood map. The experimental flood map has nearly uniformly distributed spots similar to those in the simulated flood map. In conclusion, the new approach provides a fast and reliable simulation tool for assisting in the development of light-sharing-based detector modules with a polished surface finish and using specular reflector materials. PMID:27660376
A fast method for optical simulation of flood maps of light-sharing detector modules
Shi, Han; Du, Dong; Xu, JianFeng; ...
2015-09-03
Optical simulation at the detector module level is highly desired for Positron Emission Tomography (PET) system design. Commonly used simulation toolkits such as GATE are not efficient in the optical simulation of detector modules with complicated light-sharing configurations, where a vast number of photons need to be tracked. Here, we present a fast approach based on a simplified specular reflectance model and a structured light-tracking algorithm to speed up the photon tracking in detector modules constructed with a polished finish and specular reflector materials. We also simulated conventional block detector designs with different slotted light guide patterns using the new approach and compared the outcomes with those from GATE simulations. While the two approaches generated comparable flood maps, the new approach was more than 200–600 times faster. The new approach has also been validated by constructing a prototype detector and comparing the simulated flood map with the experimental flood map. The experimental flood map has nearly uniformly distributed spots similar to those in the simulated flood map. In conclusion, the new approach provides a fast and reliable simulation tool for assisting in the development of light-sharing-based detector modules with a polished surface finish and using specular reflector materials.
Hamilton, Matthew B; Tartakovsky, Maria; Battocletti, Amy
2018-05-01
The genetic effective population size, Ne, can be estimated from the average gametic disequilibrium (r²) between pairs of loci, but such estimates require evaluation of assumptions and currently have few methods to estimate confidence intervals. speed-ne is a suite of MATLAB functions to estimate Ne from r² with a graphical user interface and a rich set of outputs that aid in understanding data patterns and comparing multiple estimators. speed-ne includes functions to either generate or input simulated genotype data to facilitate comparative studies of Ne estimators under various population genetic scenarios. speed-ne was validated with data simulated under both time-forward and time-backward coalescent models of genetic drift. Three classes of estimators were compared with simulated data to examine several general questions: what are the impacts of microsatellite null alleles on estimated Ne, how should missing data be treated, and does disequilibrium contributed by reduced recombination among some loci in a sample impact estimated Ne. Estimators differed greatly in precision in the scenarios examined, and a widely employed Ne estimator exhibited the largest variances among replicate data sets. speed-ne implements several jackknife approaches to estimate confidence intervals, and simulated data showed that jackknifing over loci and jackknifing over individuals provided approximately 95% confidence interval coverage for some estimators and should be useful for empirical studies. speed-ne provides an open-source extensible tool for the estimation of Ne from empirical genotype data and for conducting simulations of both microsatellite and single nucleotide polymorphism (SNP) data types to develop expectations and to compare Ne estimators. © 2018 John Wiley & Sons Ltd.
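For context, the simplest member of the estimator family above is the linkage-disequilibrium estimator built on the approximation E[r²] ≈ 1/(3Ne) + 1/S for unlinked loci in a sample of S individuals. The sketch below implements only that generic approximation; the actual estimators, bias corrections and jackknife confidence intervals in speed-ne are more involved:

    import numpy as np

    def ne_from_r2(r2_values, sample_size):
        """Point estimate of Ne from the mean pairwise r2, after removing the
        sampling contribution 1/S (generic approximation, not speed-ne's code)."""
        r2_bar = float(np.mean(r2_values))
        adjusted = r2_bar - 1.0 / sample_size
        return float("inf") if adjusted <= 0 else 1.0 / (3.0 * adjusted)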
Using Discrete Event Simulation to predict KPI's at a Projected Emergency Room.
Concha, Pablo; Neriz, Liliana; Parada, Danilo; Ramis, Francisco
2015-01-01
Discrete Event Simulation (DES) is a powerful tool in the design of clinical facilities. DES enables facilities to be built or adapted to achieve the expected Key Performance Indicators (KPIs), such as average waiting times according to acuity, average stay times, and others. Our computational model was built and validated using expert judgment and supporting statistical data. One scenario studied resulted in a 50% decrease in the average cycle time of patients compared to the original model, mainly by modifying the patient attention model.
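As a flavour of how such a DES model can be expressed, the sketch below uses the SimPy library to generate arrivals, queue patients by acuity on a shared pool of clinicians, and record waiting times; every rate, capacity and the acuity scale are placeholders rather than the validated emergency-room model described above:

    import random
    import simpy

    WAITS = []   # (acuity, waiting time in minutes)

    def patient(env, acuity, doctors):
        arrival = env.now
        with doctors.request(priority=acuity) as req:   # lower value = more urgent
            yield req
            WAITS.append((acuity, env.now - arrival))
            yield env.timeout(random.expovariate(1 / 20))   # ~20 min consultation

    def arrivals(env, doctors):
        while True:
            yield env.timeout(random.expovariate(1 / 10))   # ~one arrival per 10 min
            env.process(patient(env, random.randint(1, 5), doctors))

    env = simpy.Environment()
    doctors = simpy.PriorityResource(env, capacity=3)
    env.process(arrivals(env, doctors))
    env.run(until=8 * 60)   # simulate one 8-hour shift (minutes)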
Patel, Archita D.; Meurer, David A.; Shuster, Jonathan J.
2016-01-01
Introduction. Limited evidence is available on simulation training of prehospital care providers, specifically the use of tourniquets and needle decompression. This study focused on whether the confidence level of prehospital personnel performing these skills improved through simulation training. Methods. Prehospital personnel from Alachua County Fire Rescue were enrolled in the study over a 2- to 3-week period based on their availability. Two scenarios were presented to them: a motorcycle crash resulting in a leg amputation requiring a tourniquet and an intoxicated patient with a stab wound, who experienced tension pneumothorax requiring needle decompression. Crews were asked to rate their confidence levels before and after exposure to the scenarios. Timing of the simulation interventions was compared with actual scene times to determine applicability of simulation in measuring the efficiency of prehospital personnel. Results. Results were collected from 129 participants. Pre- and postexposure scores increased by a mean of 1.15 (SD 1.32; 95% CI, 0.88–1.42; P < 0.001). Comparison of actual scene times with simulated scene times yielded a 1.39-fold difference (95% CI, 1.25–1.55) for Scenario 1 and 1.59 times longer for Scenario 2 (95% CI, 1.43–1.77). Conclusion. Simulation training improved prehospital care providers' confidence level in performing two life-saving procedures. PMID:27563467
NASA Technical Reports Server (NTRS)
Jewell, W. F.; Clement, W. F.
1984-01-01
The advent and widespread use of computer-generated image (CGI) devices to simulate visual cues has had a mixed impact on the realism and fidelity of flight simulators. On the plus side, CGIs provide greater flexibility in scene content than terrain boards and closed-circuit television based visual systems, and they have the potential for a greater field of view. However, on the minus side, CGIs introduce relatively long time delays into the visual simulation. In many CGIs, this delay is as much as 200 ms, which is comparable to the inherent delay time of the pilot. Because most CGIs use multiloop processing and smoothing algorithms and are linked to a multiloop host computer, it is seldom possible to identify a unique throughput time delay, and it is therefore difficult to quantify the performance of the closed-loop pilot-simulator system relative to the real-world task. A method to address these issues using the critical task tester is described. Some empirical results from applying the method are presented, and a novel technique for improving the performance of CGIs is discussed.
NASA Astrophysics Data System (ADS)
Kempka, T.; Norden, B.; Tillner, E.; Nakaten, B.; Kühn, M.
2012-04-01
Geological modelling and dynamic flow simulations were conducted at the Ketzin pilot site, showing good agreement of history-matched geological models with CO2 arrival times in both observation wells and with the temporal development of reservoir pressure determined in the injection well. Recently, a re-evaluation of the 3D seismic data enabled a refinement of the structural site model and the implementation of the fault system present at the top of the Ketzin anticline. The updated geological model (model size: 5 km x 5 km) has a horizontal discretization of 5 m x 5 m and consists of three vertical zones, with the finest discretization at the top (0.5 m). According to the revised seismic analysis, the facies modelling used to simulate the channel and floodplain facies distribution at Ketzin was updated. Using a sequential Gaussian simulator for the distribution of total and effective porosities and an empirical porosity-permeability relationship based on available site and literature data, the structural model was parameterized. Based on this revised reservoir model of the Stuttgart formation, numerical simulations using the TOUGH2-MP/ECO2N and Schlumberger Information Services (SIS) ECLIPSE 100 black-oil simulators were undertaken in order to evaluate the long-term (up to 10,000 years) migration of the injected CO2 (about 57,000 t at the end of 2011) and the development of reservoir pressure over time. The simulation results enabled us to quantitatively compare both reservoir simulators based on current operational data, considering the long-term effects of CO2 storage including CO2 dissolution in the formation fluid. While the integration of the static geological model developed in the SIS Petrel modelling package into the ECLIPSE simulator is relatively straightforward, a workflow allowing for the export of Petrel models into the TOUGH2-MP input file format had to be implemented within the scope of this study. The challenge in this task was mainly determined by the presence of a complex faulted system in the revised reservoir model, demanding an integrated concept to deal with connections between the elements aligned to faults in the TOUGH2-MP simulator. Furthermore, we developed a methodology to visualize and compare the TOUGH2-MP simulation results with those of the ECLIPSE simulator using the Petrel software package. The long-term simulation results of both simulators are generally in good agreement. Spatial and temporal migration of the CO2 plume as well as residual gas saturation are almost identical for both simulators, even though a time-dependent approach to CO2 dissolution in the formation fluid was chosen in the ECLIPSE simulator. Our results confirmed that a scientific open-source simulator such as the TOUGH2-MP software package is capable of providing the same accuracy as the industry-standard simulator ECLIPSE 100. However, the computational time and the additional effort required to implement a suitable workflow for using the TOUGH2-MP simulator are significantly higher, while the open-source concept of TOUGH2 provides more flexibility regarding process adaptation.
Experimental verification of dynamic simulation
NASA Technical Reports Server (NTRS)
Yae, K. Harold; Hwang, Howyoung; Chern, Su-Tai
1989-01-01
The dynamics model here is a backhoe, which, from the dynamics standpoint, is a four-degree-of-freedom manipulator. Two types of experiment are chosen that can also be simulated by a multibody dynamics simulation program. In the experiment, the configuration and force histories were recorded in the time domain; that is, velocity and position, as well as force output and differential pressure change from the hydraulic cylinder. When the experimental force history is used as the driving force in the simulation model, the forward dynamics simulation produces a corresponding configuration history. Then, the experimental configuration history is used in the inverse dynamics analysis to generate a corresponding force history. Therefore, two sets of configuration and force histories--one set from experiment, and the other from the simulation that is driven forward and backward with the experimental data--are compared in the time domain. Further comparisons are made regarding the effects of initial conditions, friction, and viscous damping.
The Simulation of a Jumbo Jet Transport Aircraft. Volume 2: Modeling Data
NASA Technical Reports Server (NTRS)
Hanke, C. R.; Nordwall, D. R.
1970-01-01
The manned simulation of a large transport aircraft is described. Aircraft and systems data necessary to implement the mathematical model described in Volume I are presented, along with a discussion of how these data are used in the model. The results of the real-time computations in the NASA Ames Research Center Flight Simulator for Advanced Aircraft are shown and compared to flight test data and to the results obtained in a training simulator known to be satisfactory.
FDTD simulation tools for UWB antenna analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brocato, Robert Wesley
2004-12-01
This paper describes the development of a set of software tools useful for analyzing ultra-wideband (UWB) antennas and structures. These tools are used to perform finite difference time domain (FDTD) simulation of a conical antenna with continuous wave (CW) and UWB pulsed excitations. The antenna is analyzed using spherical coordinate-based FDTD equations that are derived from first principles. The simulation results for CW excitation are compared to simulation and measured results from published sources; the results for UWB excitation are new.
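To indicate what an FDTD update loop looks like, here is a minimal one-dimensional Cartesian leapfrog with a Gaussian (UWB-like) soft source, in normalized units with a Courant factor of 0.5. This is illustrative only; the paper derives spherical-coordinate FDTD equations for the conical antenna, which are not reproduced here:

    import numpy as np

    def fdtd_1d(n_cells=400, n_steps=1000, src_cell=100):
        """1-D FDTD leapfrog: E and H fields staggered in space and time."""
        ez = np.zeros(n_cells)
        hy = np.zeros(n_cells)
        for n in range(n_steps):
            hy[:-1] += 0.5 * (ez[1:] - ez[:-1])                    # update H from E
            ez[1:] += 0.5 * (hy[1:] - hy[:-1])                     # update E from H
            ez[src_cell] += np.exp(-0.5 * ((n - 40) / 12.0) ** 2)  # Gaussian pulse source
        return ez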
NASA Astrophysics Data System (ADS)
Condon, L. E.; Maxwell, R. M.; Kollet, S. J.; Maher, K.; Haggerty, R.; Forrester, M. M.
2016-12-01
Although previous studies have demonstrated fractal residence time distributions in small watersheds, analyzing residence time scaling over large spatial areas is difficult with existing observational methods. For this study we use a fully integrated groundwater-surface water simulation combined with Lagrangian particle tracking to evaluate connections between residence time distributions and watershed characteristics such as geology, topography and climate. Our simulation spans more than six million square kilometers of the continental US, encompassing a broad range of watershed sizes and physiographic settings. Simulated results demonstrate power law residence time distributions with peak ages ranging from 1.5 to 10.5 years. These ranges agree well with previous observational work and demonstrate the feasibility of using integrated models to simulate residence times. Comparing behavior between eight major watersheds, we show spatial variability in both the peak and the variance of the residence time distributions that can be related to model inputs. Peak age is well correlated with basin-averaged hydraulic conductivity, and the semi-variance corresponds to aridity. While power law age distributions have previously been attributed to fractal topography, these results illustrate the importance of subsurface characteristics and macro climate as additional controls on groundwater configuration and residence times.
Reime, Marit Hegg; Johnsgaard, Tone; Kvam, Fred Ivan; Aarflot, Morten; Engeberg, Janecke Merethe; Breivik, Marit; Brattebø, Guttorm
2017-01-01
Larger student groups and pressure on limited faculty time have raised the question of the learning value of merely observing simulation training in emergency medicine, instead of active team participation. The purpose of this study was to examine observers' and hands-on participants' self-reported learning outcomes during simulation-based interprofessional team training regarding non-technical skills. In addition, we compared the learning outcomes for different professions and investigated team performance relative to the number of simulations in which they participated. A concurrent mixed-method design was chosen to evaluate the study, using questionnaires, observations, and focus group interviews. Participants included a total of 262 postgraduate and bachelor nursing students and medical students, organised into 44 interprofessional teams. The quantitative data showed that observers and participants had similar results in three of six predefined learning outcomes. The qualitative data emphasised the importance of participating in different roles, training several times, and training interprofessionally to enhance realism. Observing simulation training can be a valuable learning experience, but the students preferred hands-on participation and learning by doing. For this reason, one can legitimise the observer role, given the large student groups and limited faculty time, as long as the students are also given some opportunity for hands-on participation in order to become more confident in their professional roles.
NASA Astrophysics Data System (ADS)
Onken, Jeffrey
This dissertation introduces a multidisciplinary framework for enabling future research and analysis of alternatives for control centers for real-time operations of safety-critical systems. The multidisciplinary framework integrates functional and computational models that describe the dynamics of fundamental concepts from previously disparate engineering and psychology research disciplines, such as group performance and processes, supervisory control, situation awareness, events and delays, and expertise. The application in this dissertation is the real-time operations within the NASA Mission Control Center (MCC) in Houston, TX. This dissertation operationalizes the framework into a model and simulation, which simulates the functional and computational models in the framework according to user-configured scenarios for a NASA human-spaceflight mission. The model and simulation generate data according to the effectiveness of the mission-control team in supporting the completion of mission objectives and in detecting, isolating, and recovering from anomalies. Accompanying the multidisciplinary framework is a proof of concept, which demonstrates the feasibility of such a framework. The proof of concept demonstrates that variability occurs where expected based on the models. The proof of concept also demonstrates that the data generated from the model and simulation are useful for analyzing and comparing MCC configuration alternatives, because an investigator can give a diverse set of scenarios to the simulation and compare the outputs in detail to inform decisions about the effect of MCC configurations on mission operations performance.
Physiological responses and external validity of a new setting for taekwondo combat simulation.
Hausen, Matheus; Soares, Pedro Paulo; Araújo, Marcus Paulo; Porto, Flávia; Franchini, Emerson; Bridge, Craig Alan; Gurgel, Jonas
2017-01-01
Combat simulations have served as an alternative framework to study the cardiorespiratory demands of the activity in combat sports, but this setting imposes rule-restrictions that may compromise the competitiveness of the bouts. The aim of this study was to assess the cardiorespiratory responses to a full-contact taekwondo combat simulation using a safe and externally valid competitive setting. Twelve male national level taekwondo athletes visited the laboratory on two separate occasions. On the first visit, anthropometric and running cardiopulmonary exercise assessments were performed. In the following two to seven days, participants performed a full-contact combat simulation, using a specifically designed gas analyser protector. Oxygen uptake (V̇O2), heart rate (HR) and capillary blood lactate measurements ([La-]) were obtained. Time-motion analysis was performed to compare activity profile. The simulation yielded broadly comparable activity profiles to those performed in competition, a mean V̇O2 of 36.6 ± 3.9 ml.kg-1.min-1 (73 ± 6% V̇O2PEAK) and a mean HR of 177 ± 10 beats.min-1 (93 ± 5% HRPEAK). A peak V̇O2 of 44.8 ± 5.0 ml.kg-1.min-1 (89 ± 5% V̇O2PEAK), a peak heart rate of 190 ± 13 beats.min-1 (98 ± 3% HRmax) and a peak [La-] of 12.3 ± 2.9 mmol.L-1 were elicited by the bouts. Regarding time-motion analysis, the combat simulation presented a similar exchange time, a shorter preparation time and a longer exchange-preparation ratio. Taekwondo combats capturing the full-contact competitive elements of a bout elicit moderate to high cardiorespiratory demands on the competitors. These data are valuable to assist preparatory strategies within the sport.
The Effects of Dextromethorphan on Driving Performance and the Standardized Field Sobriety Test.
Perry, Paul J; Fredriksen, Kristian; Chew, Stephanie; Ip, Eric J; Lopes, Ingrid; Doroudgar, Shadi; Thomas, Kelan
2015-09-01
Dextromethorphan (DXM) is abused most commonly among adolescents as a recreational drug to generate a dissociative experience. The objective of the study was to assess driving with and without DXM ingestion. The effects of a one-time maximum daily dose of DXM 120 mg versus a guaifenesin 400 mg dose were compared among 40 healthy subjects using a crossover design. Subjects' ability to drive was assessed by their performance in a driving simulator (STISIM® Drive driving simulator software) and by a standardized field sobriety test (SFST) administered 1 h after drug administration. The one-time dose of DXM 120 mg did not demonstrate driving impairment on the STISIM® Drive driving simulator or increase SFST failures compared to guaifenesin 400 mg. Doses greater than the currently recommended maximum daily dose of 120 mg are necessary to perturb driving behavior. © 2015 American Academy of Forensic Sciences.
Computational simulation of the creep-rupture process in filamentary composite materials
NASA Technical Reports Server (NTRS)
Slattery, Kerry T.; Hackett, Robert M.
1991-01-01
A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
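The study's finite element model is not reproduced here; a much-simplified, equal-load-sharing fiber-bundle Monte Carlo conveys the same statistical idea of randomly distributed flaws, stress-history-dependent failure in each time step, and repeated runs to build a time-to-failure distribution (all distributions, exponents, and loads below are assumptions):

```python
import numpy as np

def time_to_failure(n_fibers=200, load=0.5, dt=1.0, rng=None):
    """Minimal equal-load-sharing fiber bundle with random strengths (illustrative only)."""
    rng = np.random.default_rng(rng)
    strength = rng.weibull(5.0, n_fibers)          # random flaw population
    alive = np.ones(n_fibers, dtype=bool)
    t = 0.0
    while alive.any():
        stress = load * n_fibers / alive.sum()     # surviving fibers share the load
        # stress-dependent failure: higher stress -> higher failure probability per step
        p_fail = 1.0 - np.exp(-dt * (stress / strength) ** 8)
        alive &= rng.random(n_fibers) > p_fail
        t += dt
    return t

# Several runs with different random flaw dispersions give a time-to-failure distribution
samples = [time_to_failure(rng=seed) for seed in range(50)]
print("mean time to failure:", np.mean(samples), "std:", np.std(samples))
```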
High-speed extended-term time-domain simulation for online cascading analysis of power system
NASA Astrophysics Data System (ADS)
Fu, Chuan
A high-speed extended-term (HSET) time domain simulator (TDS), intended to become part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) the ability to simulate both fast and slow dynamics for 1-3 hours in advance, (iii) inclusion of rigorous protection-system modeling, (iv) intelligence for corrective action identification, storage, and fast retrieval, and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the algebraic-differential equations describing the dynamics of a power system, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable step size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time domain simulation for online purposes, this thesis presents principles for designing numerical solvers of the differential algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU). We have implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the PJM 13029-bus system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time domain simulation software for supercomputers. The stiffness-decoupling method combines the advantages of implicit methods (A-stability) and explicit methods (less computation). With the new stiffness detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task by scale with the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task along the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events splits the whole simulation along the time axis through a simulated sequence of cascading events. Of the strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent, and therefore minimum communication time is needed.
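The HH4 scheme named above is commonly identified with the two-stage Gauss-Legendre implicit Runge-Kutta method introduced by Hammer and Hollingsworth; for orientation, its standard Butcher tableau and update formula are shown below (a textbook result, not taken from the thesis):

```latex
% Two-stage Gauss-Legendre (Hammer-Hollingsworth) implicit Runge-Kutta method, order 4
\[
\begin{array}{c|cc}
\tfrac{1}{2}-\tfrac{\sqrt{3}}{6} & \tfrac{1}{4} & \tfrac{1}{4}-\tfrac{\sqrt{3}}{6} \\[2pt]
\tfrac{1}{2}+\tfrac{\sqrt{3}}{6} & \tfrac{1}{4}+\tfrac{\sqrt{3}}{6} & \tfrac{1}{4} \\[2pt]
\hline
 & \tfrac{1}{2} & \tfrac{1}{2}
\end{array}
\qquad
y_{n+1} = y_n + h \sum_{i=1}^{2} b_i\, f\!\left(t_n + c_i h,\; Y_i\right)
\]
```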
Comparative Implementation of High Performance Computing for Power System Dynamic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng
Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computational accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and to maximize the utilization and benefits of HPC during the development process.
Modeling And Simulation Of Multimedia Communication Networks
NASA Astrophysics Data System (ADS)
Vallee, Richard; Orozco-Barbosa, Luis; Georganas, Nicolas D.
1989-05-01
In this paper, we present a simulation study of a browsing system involving radiological image servers. The proposed IEEE 802.6 DQDB MAN standard is designated as the computer network to transfer radiological images from file servers to medical workstations, and to simultaneously support real time voice communications. Storage and transmission of original raster scanned images and images compressed according to pyramid data structures are considered. Different types of browsing as well as various image sizes and bit rates in the DQDB MAN are also compared. The elapsed time, measured from the time an image request is issued until the image is displayed on the monitor, is the parameter considered to evaluate the system performance. Simulation results show that image browsing can be supported by the DQDB MAN.
An Obstacle Alerting System for Agricultural Application
NASA Technical Reports Server (NTRS)
DeMaio, Joe
2003-01-01
Wire strikes are a significant cause of helicopter accidents. The aircraft most at risk are aerial applicators. The present study examines the effectiveness of a wire alert delivered by way of the lightbar, a GPS-based guidance system for aerial application. The alert lead-time needed to avoid an invisible wire is compared with that to avoid a visible wire. A flight simulator was configured to simulate an agricultural application helicopter. Two pilots flew simulated spray runs in fields with visible wires, invisible wires, and no wires. The wire alert was effective in reducing wire strikes. A lead-time of 3.5 sec was required for the alert to be effective. The lead-time required was the same whether the pilot could see the wire or not.
[Simulation and Design of Infant Incubator Assembly Line].
Ke, Huqi; Hu, Xiaoyong; Ge, Xia; Hu, Yanhai; Chen, Zaihong
2015-11-01
According to the current assembly situation of the infant incubator in company A, basic industrial engineering methods such as time study were used to analyze the actual product assembly process, and an assembly line was designed. The assembly line was modeled and simulated with the software Flexsim. Problems in the assembly line were identified by comparing the simulation results with actual data, and the line was then optimized to obtain a high-efficiency assembly line.
Effects of Preoperative Simulation on Minimally Invasive Hybrid Lumbar Interbody Fusion.
Rieger, Bernhard; Jiang, Hongzhen; Reinshagen, Clemens; Molcanyi, Marek; Zivcak, Jozef; Grönemeyer, Dietrich; Bosche, Bert; Schackert, Gabriele; Ruess, Daniel
2017-10-01
The main focus of this study was to evaluate how preoperative simulation affects the surgical workflow, radiation exposure, and outcome of minimally invasive hybrid lumbar interbody fusion (MIS-HLIF). A total of 132 patients who underwent single-level MIS-HLIF were enrolled in a cohort study design. Dose area product was analyzed in addition to surgical data. Once preoperative simulation was established, 66 cases (SIM cohort) were compared with 66 patients who had previously undergone MIS-HLIF without preoperative simulation (NO-SIM cohort). Dose area product was reduced considerably in the SIM cohort (320 cGy·cm2 vs. 470 cGy·cm2 in the NO-SIM cohort; P < 0.01). Surgical time was shorter for the SIM cohort (155 minutes vs. 182 minutes in the NO-SIM cohort; P < 0.05). The SIM cohort had a better outcome on the Numeric Rating Scale for back pain at 6 months follow-up compared with the NO-SIM cohort (P < 0.05). Preoperative simulation reduced radiation exposure and resulted in less back pain at the 6 months follow-up time point. Preoperative simulation provided guidance in determining the correct cage height. Outcome controls enabled the surgeon to improve the procedure and the software algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.
Schumacher, Jan; Gray, Stuart A; Michel, Sophie; Alcock, Roger; Brinker, Andrea
2013-02-01
Emergency pediatric life support (EPLS) of children infected with transmissible respiratory diseases requires adequate respiratory protection for medical first responders. Conventional air-purifying respirators (APR) and modern loose-fitting powered air-purifying respirator-hoods (PAPR-hood) may have a different impact during pediatric resuscitation and therefore require evaluation. This study investigated the influence of APRs and PAPR-hoods during simulated pediatric cardiopulmonary resuscitation. The study design was a randomized, controlled, crossover study. Sixteen paramedics carried out a standardized EPLS scenario inside an ambulance, either unprotected (control) or wearing a conventional APR or a PAPR-hood. Treatment times and wearer comfort were determined and compared. All paramedics completed the treatment objectives of the study arms without adverse events. Study subjects reported that communication, dexterity and mobility were significantly better in the APR group, whereas the heat build-up was significantly less in the PAPR-hood group. Treatment times compared to the control group did not differ significantly for the APR group but did for the PAPR-hood group (261±12 seconds for the controls, 275±9 seconds for the conventional APR and 286±13 seconds for the PAPR-hood group, P < .05). APRs showed a trend toward better treatment times compared to PAPR-hoods during simulated pediatric cardiopulmonary resuscitation. Study participants rated mobility, ease of communication and dexterity with the tight-fitting APR system significantly better compared to the loose-fitting PAPR-hood.
Influences of chemical sympathectomy and simulated weightlessness on male and female rats
NASA Technical Reports Server (NTRS)
Woodman, Christopher R.; Stump, Craig S.; Stump, Jane A.; Sebastian, Lisa A.; Rahman, Z.; Tipton, Charles M.
1991-01-01
Consideration is given to a study aimed at determining whether the sympathetic nervous system is associated with the changes in maximum oxygen consumption (VO2max), run time, and mechanical efficiency observed during simulated weightlessness in male and female rats. Female and male rats were compared for food consumption, body mass, and body composition in conditions of simulated weightlessness to provide an insight into how these parameters may influence aerobic capacity and exercise performance. It is concluded that chemical sympathectomy and/or a weight-bearing stimulus will attenuate the loss in VO2max associated with simulated weightlessness in rats despite similar changes in body mass and composition. It is noted that the mechanisms remain unclear at this time.
NASA Astrophysics Data System (ADS)
Zhang, Jin-Zhao; Tuo, Xian-Guo
2014-07-01
We present the design and optimization of a prompt γ-ray neutron activation analysis (PGNAA) thermal neutron output setup based on Monte Carlo simulations using the MCNP5 computer code. In these simulations, the moderator materials, reflective materials, and structure of the 252Cf-based PGNAA thermal neutron output setup are optimized. The simulation results reveal that a thin layer of paraffin combined with a thick layer of heavy water provides the best moderating effect for the 252Cf neutron spectrum. Our new design shows a significantly improved performance of the thermal neutron flux and flux rate, which are increased by factors of 3.02 and 3.27, respectively, compared with the conventional neutron source design.
An algorithm for fast elastic wave simulation using a vectorized finite difference operator
NASA Astrophysics Data System (ADS)
Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna
2018-07-01
Modern geophysical imaging techniques exploit the full wavefield information which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, a big derivative stencil and a huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we have introduced a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme, thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (Marmousi model), by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and by nearly 100 for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step and that it depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
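The 'FDwave' package itself is not reproduced here; the toy comparison below merely illustrates the kind of vectorized derivative operator the paper describes, replacing explicit index loops with a whole-array slice operation for a first difference onto a staggered node (array sizes and spacing are arbitrary):

```python
import numpy as np
import time

def dx_loop(f, h):
    """Second-order first difference via explicit loops (slow in interpreted languages)."""
    d = np.zeros_like(f)
    for i in range(f.shape[0] - 1):
        for j in range(f.shape[1]):
            d[i, j] = (f[i + 1, j] - f[i, j]) / h   # forward difference onto the staggered node
    return d

def dx_vec(f, h):
    """Same operator expressed as one whole-array (vectorized) slice operation."""
    d = np.zeros_like(f)
    d[:-1, :] = (f[1:, :] - f[:-1, :]) / h
    return d

f = np.random.rand(500, 500)
t0 = time.perf_counter(); dl = dx_loop(f, 10.0); t1 = time.perf_counter()
dv = dx_vec(f, 10.0); t2 = time.perf_counter()
print("max difference between the two operators:", np.abs(dl - dv).max())
print("speed-up of vectorized over looped version:", (t1 - t0) / (t2 - t1))
```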
Yang, Yiqun; Urban, Matthew W; McGough, Robert J
2018-05-15
Shear wave calculations induced by an acoustic radiation force are very time-consuming on desktop computers, and high-performance graphics processing units (GPUs) achieve dramatic reductions in the computation time for these simulations. The acoustic radiation force is calculated using the fast near field method and the angular spectrum approach, and then the shear waves are calculated in parallel with Green's functions on a GPU. This combination enables rapid evaluation of shear waves for push beams with different spatial samplings and for apertures with different f/#. Relative to shear wave simulations that evaluate the same algorithm on an Intel i7 desktop computer, a high-performance nVidia GPU reduces the time required for these calculations by factors of 45 and 700 when applied to elastic and viscoelastic shear wave simulation models, respectively. These GPU-accelerated simulations were also compared to measurements in different viscoelastic phantoms, and the results are similar. For parametric evaluations and for comparisons with measured shear wave data, shear wave simulations with the Green's function approach are ideally suited for high-performance GPUs.
Ten Eyck, Raymond P; Tews, Matthew; Ballester, John M; Hamilton, Glenn C
2010-06-01
To determine the impact of simulation-based instruction on student performance in the role of emergency department resuscitation team leader. A randomized, single-blinded, controlled study using an intention to treat analysis. Eighty-three fourth-year medical students enrolled in an emergency medicine clerkship were randomly allocated to two groups differing only by instructional format. Each student individually completed an initial simulation case, followed by a standardized curriculum of eight cases in either group simulation or case-based group discussion format before a second individual simulation case. A remote coinvestigator measured eight objective performance end points using digital recordings of all individual simulation cases. McNemar chi2, Pearson correlation, repeated measures multivariate analysis of variance, and follow-up analysis of variance were used for statistical evaluation. Sixty-eight students (82%) completed both initial and follow-up individual simulations. Eight students were lost from the simulation group and seven from the discussion group. The mean postintervention case performance was significantly better for the students allocated to simulation instruction compared with the group discussion students for four outcomes including a decrease in mean time to (1) order an intravenous line; (2) initiate cardiac monitoring; (3) order initial laboratory tests; and (4) initiate blood pressure monitoring. Paired comparisons of each student's initial and follow-up simulations demonstrated significant improvement in the same four areas, in mean time to order an abdominal radiograph and in obtaining an allergy history. A single simulation-based teaching session significantly improved student performance as a team leader. Additional simulation sessions provided further improvement compared with instruction provided in case-based group discussion format.
A Global Three-Dimensional Radiation Hydrodynamic Simulation of a Self-Gravitating Accretion Disk
NASA Astrophysics Data System (ADS)
Phillipson, Rebecca; Vogeley, Michael S.; McMillan, Stephen; Boyd, Patricia
2018-01-01
We present three-dimensional, radiation hydrodynamic simulations of initially thin accretion disks with self-gravity using the grid-based code PLUTO. We produce simulated light curves and spectral energy distributions and compare to observational data of X-ray binary (XRB) and active galactic nuclei (AGN) variability. These simulations are of interest for modeling the role of radiation in accretion physics across decades of mass and frequency. In particular, the characteristics of the time variability in various bandwidths can probe the timescales over which different physical processes dominate the accretion flow. For example, in the case of some XRBs, superorbital periods much longer than the companion orbital period have been observed. Smoothed particle hydrodynamics (SPH) calculations have shown that irradiation-driven warping could be the mechanism underlying these long periods. In the case of AGN, irradiation-driven warping is also predicted to occur in addition to strong outflows originating from thermal and radiation pressure driving forces, which are important processes in understanding feedback and star formation in active galaxies. We compare our simulations to various toy models via traditional time series analysis of our synthetic and observed light curves.
Zero dimensional model of atmospheric SMD discharge and afterglow in humid air
NASA Astrophysics Data System (ADS)
Smith, Ryan; Kemaneci, Efe; Offerhaus, Bjoern; Stapelmann, Katharina; Peter Brinkmann, Ralph
2016-09-01
A novel mesh-like Surface Micro Discharge (SMD) device designed for surface wound treatment is simulated by multiple time-scaled zero-dimensional models. The chemical dynamics of the discharge are resolved in time at atmospheric pressure in humid conditions. The particle densities of electrons, 26 ionic species, and 26 reactive neutral species, including O3, NO, and HNO3, are simulated. The 53 species described are constrained by 624 reactions within the simulated plasma discharge volume. The neutral species are allowed to diffuse into a diffusive gas regime, which is of primary interest. Two interdependent zero-dimensional models separated by nine orders of magnitude in temporal resolution are used to accomplish this, thereby reducing the computational load. Through variation of control parameters such as ignition frequency, deposited power density, duty cycle, humidity level, and N2 content, the ideal operating conditions for the SMD device can be predicted. The described model has been verified by matching simulation parameters and comparing results to those of previous works. Current operating conditions of the experimental mesh-like SMD were matched and the results are compared to the simulations. Work supported by SFB TR 87.
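The full chemistry set (53 species, 624 reactions) is far beyond a sketch; the toy two-reaction rate-equation system below only illustrates the zero-dimensional structure such models integrate in time (species, rate coefficients, and densities are placeholders, not values from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy zero-dimensional rate-equation system (coefficients are placeholders, not from the paper):
#   e + A -> A+ + 2e   (ionization, k_ion)      A+ + e -> A   (recombination, k_rec)
k_ion, k_rec = 1e-20, 1e-13          # m^3/s, illustrative

def rhs(t, y):
    ne, nA, nAp = y
    ion = k_ion * ne * nA
    rec = k_rec * nAp * ne
    return [ion - rec, -ion + rec, ion - rec]

y0 = [1e15, 2.5e25, 1e15]            # initial densities in m^-3 (illustrative)
sol = solve_ivp(rhs, (0.0, 1e-6), y0, method="LSODA", rtol=1e-8)
print("final electron density:", sol.y[0, -1])
```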
The effect of fidelity: how expert behavior changes in a virtual reality environment.
Ioannou, Ioanna; Avery, Alex; Zhou, Yun; Szudek, Jacek; Kennedy, Gregor; O'Leary, Stephen
2014-09-01
We compare the behavior of expert surgeons operating on the "gold standard" of simulation-the cadaveric temporal bone-against a high-fidelity virtual reality (VR) simulation. We aim to determine whether expert behavior changes within the virtual environment and to understand how the fidelity of simulation affects users' behavior. Five expert otologists performed cortical mastoidectomy and cochleostomy on a human cadaveric temporal bone and a VR temporal bone simulator. Hand movement and video recordings were used to derive a range of measures, to facilitate an analysis of surgical technique, and to compare expert behavior between the cadaveric and simulator environments. Drilling time was similar across the two environments. Some measures such as total time and burr change count differed predictably due to the ease of switching burrs within the simulator. Surgical strokes were generally longer in distance and duration in VR, but these measures changed proportionally to cadaveric measures across the stages of the procedure. Stroke shape metrics differed, which was attributed to the modeling of burr behavior within the simulator. This will be corrected in future versions. Slight differences in drill interaction between a virtual environment and the real world can have measurable effects on surgical technique, particularly in terms of stroke length, duration, and curvature. It is important to understand these effects when designing and implementing surgical training programs based on VR simulation--and when improving the fidelity of VR simulators to facilitate use of a similar technique in both real and simulated situations. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
Kinetics of transient electroluminescence in organic light emitting diodes
NASA Astrophysics Data System (ADS)
Shukla, Manju; Kumar, Pankaj; Chand, Suresh; Brahme, Nameeta; Kher, R. S.; Khokhar, M. S. K.
2008-08-01
Mathematical simulation of the rise and decay kinetics of transient electroluminescence (EL) in organic light emitting diodes (OLEDs) is presented. The transient EL is studied with respect to a step voltage pulse. During the rise, at early times, the EL intensity shows a quadratic dependence on (t - tdel), where tdel is the time delay observed in the onset of EL, and it finally saturates at sufficiently long times. When the applied voltage is switched off, the initial EL decay shows an exponential dependence on (t - tdec), where tdec is the time at which the voltage is switched off. The simulated results are compared with the transient EL performance of a bilayer OLED based on the small-molecular bis(2-methyl 8-hydroxyquinoline)(triphenyl siloxy) aluminium (SAlq). Transient EL studies have been carried out at different voltage pulse amplitudes. The simulated results show good agreement with experimental data. Using these simulated results, the lifetime of the excitons in SAlq has also been calculated.
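The rise and decay behaviour described above can be summarized schematically as follows (functional forms taken from the abstract; the decay constant τ is a generic symbol, not a fitted value):

```latex
\[
I_{\mathrm{EL}}(t) \propto (t - t_{\mathrm{del}})^{2} \quad \text{for the early rise}\ (t \gtrsim t_{\mathrm{del}}),
\qquad
I_{\mathrm{EL}}(t) \propto \exp\!\left[-\frac{t - t_{\mathrm{dec}}}{\tau}\right] \quad \text{for the initial decay}\ (t > t_{\mathrm{dec}}),
\]
with saturation of $I_{\mathrm{EL}}$ at sufficiently long times after switch-on; $\tau$ is a characteristic decay time related to the exciton lifetime.
```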
Broadband impedance boundary conditions for the simulation of sound propagation in the time domain.
Bin, Jonghoon; Yousuff Hussaini, M; Lee, Soogab
2009-02-01
An accurate and practical surface impedance boundary condition in the time domain has been developed for application to broadband-frequency simulation in aeroacoustic problems. To show the capability of this method, two kinds of numerical simulations are performed and compared with the analytical/experimental results: one is acoustic wave reflection by a monopole source over an impedance surface and the other is acoustic wave propagation in a duct with a finite impedance wall. Both single-frequency and broadband-frequency simulations are performed within the framework of linearized Euler equations. A high-order dispersion-relation-preserving finite-difference method and a low-dissipation, low-dispersion Runge-Kutta method are used for spatial discretization and time integration, respectively. The results show excellent agreement with the analytical/experimental results at various frequencies. The method accurately predicts both the amplitude and the phase of acoustic pressure and ensures the well-posedness of the broadband time-domain impedance boundary condition.
Heating, Hydrodynamics, and Radiation From a Laser Heated Non-LTE High-Z Target
NASA Astrophysics Data System (ADS)
Gray, William; Foord, M. E.; Schneider, M. B.; Barrios, M. A.; Brown, G. V.; Heeter, R. F.; Jarrott, L. C.; Liedahl, D. A.; Marley, E. V.; Mauche, C. W.; Widmann, K.
2016-10-01
We present 2D R-z simulations that model the hydrodynamics and x-ray output of a laser heated, tamped foil, using the rad-hydro code LASNEX. The foil consists of a thin (2400 Å) cylindrical disk of iron/vanadium/gold that is embedded in a thicker Be tamper. The simulations utilize a non-LTE detailed configuration accounting (DCA) model, which generates the emission spectra. Simulated pinhole images are compared with data, finding qualitative agreement with the time history of the face-on emission profiles and exhibiting an interesting reduction in emission size over a few-ns time period. Furthermore, we find that the simulations recover burn-through times in both the target and the Be tamper similar to those measured by a time-dependent filtered x-ray detector (DANTE). Additional results and characterization of the experimental plasma will be presented. This work performed under the auspices of U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Anderson, Brian J.; Korth, Haje; Welling, Daniel T.; Merkin, Viacheslav G.; Wiltberger, Michael J.; Raeder, Joachim; Barnes, Robin J.; Waters, Colin L.; Pulkkinen, Antti A.; Rastaetter, Lutz
2017-02-01
Two of the geomagnetic storms for the Space Weather Prediction Center Geospace Environment Modeling challenge occurred after data were first acquired by the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE). We compare Birkeland currents from AMPERE with predictions from four models for the 4-5 April 2010 and 5-6 August 2011 storms. The four models are the Weimer (2005b) field-aligned current statistical model, the Lyon-Fedder-Mobarry magnetohydrodynamic (MHD) simulation, the Open Global Geospace Circulation Model MHD simulation, and the Space Weather Modeling Framework MHD simulation. The MHD simulations were run as described in Pulkkinen et al. (2013) and the results obtained from the Community Coordinated Modeling Center. The total radial Birkeland current, ITotal, and the distribution of radial current density, Jr, for all models are compared with AMPERE results. While the total currents are well correlated, the quantitative agreement varies considerably. The Jr distributions reveal discrepancies between the models and observations related to the latitude distribution, morphologies, and lack of nightside current systems in the models. The results motivate enhancing the simulations first by increasing the simulation resolution and then by examining the relative merits of implementing more sophisticated ionospheric conductance models, including ionospheric outflows or other omitted physical processes. Some aspects of the system, including substorm timing and location, may remain challenging to simulate, implying a continuing need for real-time specification.
Reduced order model based on principal component analysis for process simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, Y.; Malacina, A.; Biegler, L.
2009-01-01
It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and that the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
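As a generic illustration of a PCA-based ROM workflow of this kind (snapshot matrix from sampled runs, truncated SVD basis, and a cheap regression from inputs to modal coefficients), consider the sketch below; the data are synthetic and the coupling to Aspen Plus/FLUENT is not represented:

```python
import numpy as np

# Snapshot matrix: each column is a CFD output field sampled at one input setting (synthetic here)
rng = np.random.default_rng(0)
n_nodes, n_samples = 2000, 40
inputs = rng.uniform(0.0, 1.0, (n_samples, 2))                      # e.g. flow rate, inlet temperature
snapshots = np.outer(np.linspace(0, 1, n_nodes), inputs[:, 0]) \
          + 0.3 * np.outer(np.sin(np.linspace(0, 3, n_nodes)), inputs[:, 1])

# PCA via SVD of the mean-centred snapshots; keep the leading modes
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 2
basis = U[:, :r]                                                     # reduced basis (n_nodes x r)
coeffs = basis.T @ (snapshots - mean)                                # modal coefficients (r x n_samples)

# Cheap surrogate: linear map from inputs to modal coefficients (least squares)
X = np.hstack([inputs, np.ones((n_samples, 1))])
W, *_ = np.linalg.lstsq(X, coeffs.T, rcond=None)

def rom_predict(x):
    """Evaluate the ROM at a new input point in milliseconds instead of CPU hours."""
    a = np.append(x, 1.0) @ W                                        # predicted modal coefficients
    return mean[:, 0] + basis @ a

print(rom_predict([0.5, 0.2])[:5])
```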
Computer simulations and real-time control of ELT AO systems using graphical processing units
NASA Astrophysics Data System (ADS)
Wang, Lianqi; Ellerbroek, Brent
2012-07-01
The adaptive optics (AO) simulations at the Thirty Meter Telescope (TMT) have been carried out using the efficient, C-based multi-threaded adaptive optics simulator (MAOS, http://github.com/lianqiw/maos). By porting time-critical parts of MAOS to graphical processing units (GPU) using NVIDIA CUDA technology, we achieved a 10-fold speed-up for each GTX 580 GPU used compared to a modern quad-core CPU. Each time step of a full-scale end-to-end simulation for the TMT narrow field infrared AO system (NFIRAOS) takes only 0.11 second on a desktop with two GTX 580s. We also demonstrate that the TMT minimum variance reconstructor can be assembled in matrix vector multiply (MVM) format in 8 seconds with 8 GTX 580 GPUs, meeting the TMT requirement for updating the reconstructor. Analysis shows that it is also possible to apply the MVM using 8 GTX 580s within the required latency.
Modeling and Simulating Airport Surface Operations with Gate Conflicts
NASA Technical Reports Server (NTRS)
Zelinski, Shannon; Windhorst, Robert
2017-01-01
The Surface Operations Simulator and Scheduler (SOSS) is a fast-time simulation platform used to develop and test future surface scheduling concepts such as NASA's Air Traffic Demonstration 2 (ATD2) of time-based surface metering at Charlotte Douglas International Airport (CLT). Challenges associated with CLT surface operations have driven much of SOSS development. Recently, SOSS functionality for modeling hardstand operations was developed to address gate conflicts, which occur when an arrival and a departure wish to occupy the same gate at the same time. Because surface metering concepts such as ATD2 have the potential to increase gate conflicts as departures are held at their gates, it is important to study the interaction between surface metering and gate conflict management. Several approaches to managing gate conflicts with and without the use of hardstands were simulated and their effects on surface operations and scheduler performance compared.
Differential die-away instrument: Report on comparison of fuel assembly experiments and simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodsell, Alison Victoria; Henzl, Vladimir; Swinhoe, Martyn Thomas
2015-01-14
Experimental results of the assay of mock-up (fresh) fuel with the differential die-away (DDA) instrument were compared to Monte Carlo N-Particle eXtended (MCNPX) simulation results. The principal experimental observables, the die-away time and the integral of the DDA signal in several time domains, were found to be in good agreement with the MCNPX simulation results. The remaining discrepancies between the simulation and experimental results are likely due to small differences between the actual experimental setup and the simulated geometry, including uncertainty in the DT neutron generator yield. Within this report we also present a sensitivity study of the DDA instrument, which is a complex and sensitive system, and demonstrate to what degree it can be impacted by geometry, material composition, and electronics performance.
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
NASA Astrophysics Data System (ADS)
Wang, Jinting; Lu, Liqiao; Zhu, Fei
2018-01-01
Finite element (FE) analysis is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, allowing the scale of the numerical substructure model to increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes obvious as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even with a large time step and a large time delay.
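A minimal sketch of the central difference stepping discussed above, using scipy sparse storage for M, C, and K and a diagonal damping matrix so that the update remains fully explicit, is given below (the two-DOF system and its numbers are illustrative assumptions):

```python
import numpy as np
from scipy import sparse

# Toy 2-DOF substructure with sparse M, C, K (diagonal damping keeps CDM fully explicit)
M = sparse.diags([2.0, 1.0]).tocsc()
C = sparse.diags([0.4, 0.2]).tocsc()
K = sparse.csc_matrix([[400.0, -200.0], [-200.0, 200.0]])

dt, n = 1e-3, 2000
u = np.zeros((n, 2))
u_prev = np.zeros(2)
f = np.zeros((n, 2)); f[:, 1] = 10.0                  # step load on the second DOF

# Central difference: (M/dt^2 + C/(2dt)) u_{k+1} = f_k - K u_k + (2M/dt^2) u_k - (M/dt^2 - C/(2dt)) u_{k-1}
A_diag = (M / dt**2 + C / (2 * dt)).diagonal()        # diagonal, so its inverse is trivial and reusable
B = (M / dt**2 - C / (2 * dt)).tocsc()
for k in range(1, n - 1):
    rhs = f[k] - K @ u[k] + (2.0 / dt**2) * (M @ u[k]) - B @ u_prev
    u_prev = u[k].copy()
    u[k + 1] = rhs / A_diag
print("displacement at final step:", u[-1])
```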
Time series inversion of spectra from ground-based radiometers
NASA Astrophysics Data System (ADS)
Christensen, O. M.; Eriksson, P.
2013-02-01
Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument, which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is extended to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the OSO water vapour microwave radiometer, confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
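The estimator behind this approach can be written in the standard optimal-estimation (MAP) form; the notation below is the generic Rodgers-type formulation rather than the paper's exact symbols, with the time-extended state vector and the a priori covariance carrying the assumed temporal correlations:

```latex
\[
\hat{\mathbf{x}} \;=\; \mathbf{x}_a \;+\;
\left(\mathbf{K}^{\mathsf T}\mathbf{S}_\epsilon^{-1}\mathbf{K} + \mathbf{S}_a^{-1}\right)^{-1}
\mathbf{K}^{\mathsf T}\mathbf{S}_\epsilon^{-1}
\left(\mathbf{y} - \mathbf{K}\mathbf{x}_a\right),
\qquad
[\mathbf{S}_a]_{ij} \;=\; \sigma_i \sigma_j\, \rho\!\left(|t_i - t_j|\right),
\]
where the state vector $\mathbf{x}$ stacks the profiles for all measurement times in the window and $\rho(\cdot)$ is the assumed temporal correlation function.
```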
Thorneywork, Alice L; Rozas, Roberto E; Dullens, Roel P A; Horbach, Jürgen
2015-12-31
We compare experimental results from a quasi-two-dimensional colloidal hard sphere fluid to a Monte Carlo simulation of hard disks with small particle displacements. The experimental short-time self-diffusion coefficient D(S) scaled by the diffusion coefficient at infinite dilution, D(0), strongly depends on the area fraction, pointing to significant hydrodynamic interactions at short times in the experiment, which are absent in the simulation. In contrast, the area fraction dependence of the experimental long-time self-diffusion coefficient D(L)/D(0) is in quantitative agreement with D(L)/D(0) obtained from the simulation. This indicates that the reduction in the particle mobility at short times due to hydrodynamic interactions does not lead to a proportional reduction in the long-time self-diffusion coefficient. Furthermore, the quantitative agreement between experiment and simulation at long times indicates that hydrodynamic interactions effectively do not affect the dependence of D(L)/D(0) on the area fraction. In light of this, we discuss the link between structure and long-time self-diffusion in terms of a configurational excess entropy and do not find a simple exponential relation between these quantities for all fluid area fractions.
Wei, Fanan; Yang, Haitao; Liu, Lianqing; Li, Guangyong
2017-03-01
The dynamic mechanical behaviour of living cells has been described by viscoelasticity. However, quantitation of the viscoelastic parameters of living cells is far from mature. In this paper, combining inverse finite element (FE) simulation with Atomic Force Microscope characterization, we attempt to develop a new method to evaluate and acquire trustworthy viscoelastic indices of living cells. First, the influence of the experimental parameters on the stress relaxation process is assessed using FE simulation. As suggested by the simulations, cell height has a negligible impact on the shape of the force-time curve, i.e. the characteristic relaxation time, and the effect originating from the substrate can be totally eliminated when a stiff substrate (Young's modulus larger than 3 GPa) is used. Then, so as to develop an effective optimization strategy for the inverse FE simulation, a parameter sensitivity evaluation is performed for Young's modulus, Poisson's ratio, and the characteristic relaxation time. With the experimental data obtained through a typical stress relaxation measurement, the viscoelastic parameters are extracted through inverse FE simulation by comparing the simulation results and the experimental measurements. Finally, the reliability of the acquired mechanical parameters is verified with different load experiments performed on the same cell.
Dynamic Time Warping compared to established methods for validation of musculoskeletal models.
Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael
2017-04-11
By means of multi-body musculoskeletal simulation, important variables such as internal joint forces and moments can be estimated that cannot be measured directly. Validation can be performed by qualitative or quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, which can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE and normalized RMSE. This study is based on two data sets, where one data set represents direct validation and the other represents indirect validation. Direct validation was performed in the context of clinical gait analysis on trans-femoral amputees fitted with a 6-component force-moment sensor. Measured forces and moments from the amputees' socket-prosthesis are compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects, with measurements taken of seven muscles of the leg, which were compared to simulated muscle activations. Regarding direct validation, a positive linear relation can be seen between the RMSE and nRMSE results and DTW. For indirect validation, a negative linear relation exists between Pearson correlation and cross-correlation. We propose the DTW algorithm for use in both direct and indirect quantitative validation as it correlates well with methods that are most suitable for one of the tasks. However, in direct validation it should be used together with methods yielding a dimensional error value, in order to be able to interpret results more comprehensibly. Copyright © 2017 Elsevier Ltd. All rights reserved.
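For reference, a compact dynamic-programming implementation of the textbook DTW distance between two one-dimensional signals is sketched below (this is the classic formulation, not necessarily the exact variant used in the study):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between two 1-D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # allow match, insertion, or deletion -> captures both amplitude and phase error
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 200)
measured = np.sin(2 * np.pi * t)
simulated = 0.9 * np.sin(2 * np.pi * (t - 0.05))       # amplitude and phase error
print("DTW distance:", dtw_distance(measured, simulated))
print("RMSE:", np.sqrt(np.mean((measured - simulated) ** 2)))
```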
The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine
NASA Astrophysics Data System (ADS)
Liu, Yuan; Zhang, Xin; Zhang, Tianhong
2017-11-01
A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults by Kalman filter banks and reconstruct the signal by a real-time on-board adaptive model combining a simplified real-time model with an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment has been carried out. Besides the real I/O interfaces, controller hardware and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, the semi-physical simulation system has a higher degree of confidence. In order to meet the needs of semi-physical simulation, a rapid prototyping controller with fault-tolerant control ability based on the NI CompactRIO platform is designed and verified on the semi-physical simulation test platform. The results show that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
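The bank-of-Kalman-filters idea for sensor fault isolation can be sketched on a toy scalar plant with two redundant sensors: each filter trusts one sensor, and a persistently large normalized innovation flags the faulty one. This is only an illustration of the principle, not the engine model or the on-board adaptive model of the paper (all numbers are assumptions):

```python
import numpy as np

# Toy scalar plant x_{k+1} = a*x_k + b*u_k + w, with two redundant sensors y_i = x + v_i.
# Each Kalman filter in the bank trusts one sensor; a large normalized innovation flags a fault.
a, b, q, r = 0.98, 0.5, 1e-3, 1e-2
rng = np.random.default_rng(1)

def kf_step(xhat, P, u, y):
    xpred = a * xhat + b * u
    Ppred = a * P * a + q
    S = Ppred + r                        # innovation covariance
    K = Ppred / S
    innov = y - xpred
    return xpred + K * innov, (1 - K) * Ppred, innov**2 / S

x, u = 0.0, 1.0
filters = [(0.0, 1.0), (0.0, 1.0)]       # (xhat, P) per sensor
scores = [0.0, 0.0]                      # running normalized innovation per filter
for k in range(300):
    x = a * x + b * u + rng.normal(0, np.sqrt(q))
    y = [x + rng.normal(0, np.sqrt(r)), x + rng.normal(0, np.sqrt(r))]
    if k > 150:
        y[0] += 5.0                      # inject a bias fault on sensor 1
    for i in range(2):
        xhat, P, nis = kf_step(*filters[i], u, y[i])
        filters[i] = (xhat, P)
        scores[i] = 0.95 * scores[i] + 0.05 * nis   # exponentially weighted fault score

print("fault scores (sensor 1, sensor 2):", scores)  # sensor 1 score should be much larger
```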
Efficient scatter model for simulation of ultrasound images from computed tomography data
NASA Astrophysics Data System (ADS)
D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.
2015-12-01
Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is a growing interest in the use of this technology and the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering map generation was revised, with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe some quality and performance metrics to validate these results, where a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
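A minimal stand-in for the scatter model described above, multiplicative noise on an echogenicity map convolved with a point spread function, might look as follows (the tissue map and PSF are synthetic placeholders, not the CT-derived data or calibrated PSFs of the simulator):

```python
import numpy as np
from scipy.signal import fftconvolve, hilbert

rng = np.random.default_rng(0)

# Synthetic echogenicity map standing in for the CT-derived tissue map
tissue = np.zeros((256, 256))
tissue[64:192, 64:192] = 1.0

# Multiplicative scatterer field (simplified noise model), then convolution with a separable PSF
scatter = tissue * rng.rayleigh(scale=1.0, size=tissue.shape)
ax = np.linspace(-3, 3, 15)
psf_axial = np.exp(-ax**2) * np.cos(6 * ax)          # oscillating axial PSF (pulse-like), illustrative
psf_lateral = np.exp(-ax**2 / 2.0)                   # smooth lateral PSF
psf = np.outer(psf_axial, psf_lateral)

rf_image = fftconvolve(scatter, psf, mode="same")    # RF-like speckle image
envelope = np.abs(hilbert(rf_image, axis=0))         # envelope detection along the axial direction
print("simulated speckle frame:", envelope.shape, "max envelope value:", envelope.max())
```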
NASA Astrophysics Data System (ADS)
Kandel, D. D.; Western, A. W.; Grayson, R. B.
2004-12-01
Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
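The following sketch illustrates the core idea of propagating a within-day rainfall-intensity distribution through a nonlinear (infiltration-excess) process instead of using the daily mean intensity; it assumes an exponential intensity distribution and a constant infiltration capacity, which are illustrative choices rather than the paper's formulation.

```python
import numpy as np

def daily_runoff_from_cdf(p_day, rain_hours, f_c, n=10_000):
    """Infiltration-excess runoff for one day, propagating an assumed
    exponential within-day intensity distribution instead of using the
    mean intensity. Units: mm, hours, mm/h."""
    i_mean = p_day / rain_hours                       # mean intensity while raining
    # discretise the intensity cdf and integrate excess = max(i - f_c, 0)
    q = np.linspace(0.5 / n, 1 - 0.5 / n, n)          # cdf quantiles
    intensity = -i_mean * np.log(1.0 - q)             # inverse exponential cdf
    excess_rate = np.maximum(intensity - f_c, 0.0).mean()
    return excess_rate * rain_hours                   # mm of runoff

p_day, hours, f_c = 40.0, 4.0, 8.0                    # 40 mm in 4 h, f_c = 8 mm/h
print("cdf-based runoff:", round(daily_runoff_from_cdf(p_day, hours, f_c), 1))
print("mean-intensity runoff:", round(max(p_day / hours - f_c, 0.0) * hours, 1))
```

Even in this toy setting, using the full intensity distribution produces substantially more runoff than applying the daily mean intensity to the same threshold, which is the nonlinearity the cdf approach is designed to preserve.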
NASA Astrophysics Data System (ADS)
Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe
2017-08-01
Objective. Functional near infra-red spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of activation region, expansions of activation region, and combined contractions and shifts of activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation pattern change simulation: 99% versus <54% in two-choice classification accuracy. Significance. We believe GMMAC will be useful for clinical fNIRS-based brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.
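A much-simplified sketch of the adaptive idea: classify each sample with a two-class Gaussian model and nudge the class means using the posterior responsibilities, so the decision boundary tracks a drifting activation pattern. GMMAC itself uses variational Bayesian updates; the learning rate, synthetic data and update rule here are assumptions for illustration only.

```python
import numpy as np

def adapt_and_classify(X, means, cov, weights):
    """Toy unsupervised adaptation: classify each sample with the current
    two-class Gaussian model, then nudge the class means toward the data
    using the posterior responsibilities (a much-simplified stand-in for
    the variational-Bayes updates used by GMMAC)."""
    inv = np.linalg.inv(cov)
    labels = []
    for x in X:
        d = x - means                                     # (2 classes, n_features)
        log_lik = -0.5 * np.einsum("kf,fg,kg->k", d, inv, d) + np.log(weights)
        resp = np.exp(log_lik - log_lik.max())
        resp /= resp.sum()
        labels.append(int(resp.argmax()))
        means += 0.05 * resp[:, None] * d                 # online mean update
    return np.array(labels), means

rng = np.random.default_rng(1)
drift = np.linspace(0, 2.0, 200)[:, None]                 # activation pattern drifts
X = np.vstack([rng.normal([0, 0], 0.3, (200, 2)) + drift,
               rng.normal([3, 3], 0.3, (200, 2))])
labels, means = adapt_and_classify(X, np.array([[0.0, 0.0], [3.0, 3.0]]),
                                   np.eye(2) * 0.09, np.array([0.5, 0.5]))
print(labels[:5], labels[-5:], means.round(2))
```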
Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.
Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy
2018-01-23
Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time, which prevents the use of such models in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This is done through a new model of a motor unit (MU)-specific electrical source based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations of a single generated MU action potential (MUAP) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results display less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MUs) in less than an hour on a standard workstation. Graphical Abstract: Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90% while inducing only small deviations in the simulated HD-sEMG signals.
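A minimal sketch of the NRMSE comparison used above; normalization by the peak-to-peak amplitude of the reference signal is an assumption, and the synthetic MUAP shapes are illustrative.

```python
import numpy as np

def nrmse(reference, approximation):
    """Root-mean-square error normalized by the reference peak-to-peak
    amplitude, expressed in percent."""
    rmse = np.sqrt(np.mean((reference - approximation) ** 2))
    return 100.0 * rmse / (reference.max() - reference.min())

t = np.linspace(0, 1, 1000)
fiber_scale = np.exp(-((t - 0.3) / 0.05) ** 2) - 0.6 * np.exp(-((t - 0.45) / 0.08) ** 2)
mu_scale = fiber_scale + np.random.default_rng(2).normal(0, 0.005, t.size)
print(f"NRMSE = {nrmse(fiber_scale, mu_scale):.2f} %")
```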
NASA Astrophysics Data System (ADS)
Vachálek, Ján
2011-12-01
The paper compares the abilities of forgetting methods to track the time-varying parameters of two different simulated models under different types of excitation. The quantities observed in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values, and a prediction error count within a selected band. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper, a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) is used along with Directional Forgetting (DF) and three standard regularized methods.
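For context, the sketch below shows standard recursive least squares with plain exponential forgetting, the baseline that regularized and directional forgetting variants modify; the regression model, forgetting factor and noise levels are illustrative.

```python
import numpy as np

def rls_exponential_forgetting(phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with exponential forgetting.
    phi: (N, p) regressor matrix, y: (N,) outputs, lam: forgetting factor."""
    n, p = phi.shape
    theta = np.zeros(p)
    P = delta * np.eye(p)                 # covariance matrix
    history = np.empty((n, p))
    for k in range(n):
        x = phi[k]
        gain = P @ x / (lam + x @ P @ x)
        theta = theta + gain * (y[k] - x @ theta)
        P = (P - np.outer(gain, x @ P)) / lam
        history[k] = theta
    return history

rng = np.random.default_rng(3)
N = 400
phi = rng.normal(size=(N, 2))
true = np.column_stack([np.linspace(1.0, 2.0, N), np.full(N, -0.5)])  # drifting parameter
y = np.einsum("ij,ij->i", phi, true) + rng.normal(0, 0.05, N)
est = rls_exponential_forgetting(phi, y)
print(est[-1].round(2), "vs true", true[-1])
```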
3D Parallel Multigrid Methods for Real-Time Fluid Simulation
NASA Astrophysics Data System (ADS)
Wan, Feifei; Yin, Yong; Zhang, Suiyu
2018-03-01
The multigrid method is widely used in fluid simulation because of its strong convergence. Besides accuracy, computational efficiency is an important factor in enabling real-time fluid simulation in computer graphics. To address this, we compare the performance of the Algebraic Multigrid and the Geometric Multigrid in the V-Cycle and Full-Cycle schemes, and analyze the convergence and speed of the different methods. All calculations in this paper are performed with GPU parallel computing. Finally, we run experiments on 3D grids at several scales and report the experimental results.
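A minimal geometric multigrid V-cycle for a 1-D Poisson problem, showing the smooth-restrict-correct-prolong structure that GPU implementations parallelize; the grid size, smoother and cycle counts are illustrative and unrelated to the paper's 3D solver.

```python
import numpy as np

def smooth(u, f, h, iters=3):
    """Weighted-Jacobi smoothing for the 1-D Poisson problem -u'' = f."""
    for _ in range(iters):
        u[1:-1] += 0.8 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    """One geometric multigrid V-cycle with full-weighting restriction
    and linear-interpolation prolongation."""
    u = smooth(u, f, h)
    if u.size <= 3:
        return u
    r = residual(u, f, h)
    r_coarse = r[::2].copy()
    r_coarse[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3::2]
    e_coarse = v_cycle(np.zeros_like(r_coarse), r_coarse, 2 * h)
    e = np.zeros_like(u)
    e[::2] = e_coarse
    e[1::2] = 0.5 * (e_coarse[:-1] + e_coarse[1:])
    return smooth(u + e, f, h)

n = 257                                   # 2**8 + 1 points
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)        # exact solution: sin(pi x)
u = np.zeros(n)
for cycle in range(10):
    u = v_cycle(u, f, 1.0 / (n - 1))
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```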
NASA Astrophysics Data System (ADS)
Allen, D. M.; Henry, C.; Demon, H.; Kirste, D. M.; Huang, J.
2011-12-01
Sustainable management of groundwater resources, particularly in water stressed regions, requires estimates of groundwater recharge. This study in southern Mali, Africa compares approaches for estimating groundwater recharge and understanding recharge processes using a variety of methods encompassing groundwater level-climate data analysis, GRACE satellite data analysis, and recharge modelling for current and future climate conditions. Time series data for GRACE (2002-2006) and observed groundwater level data (1982-2001) do not overlap. To overcome this problem, GRACE time series data were appended to the observed historical time series data, and the records compared. Terrestrial water storage anomalies from GRACE were corrected for soil moisture (SM) using the Global Land Data Assimilation System (GLDAS) to obtain monthly groundwater storage anomalies (GRACE-SM), and monthly recharge estimates. Historical groundwater storage anomalies and recharge were determined using the water table fluctuation method using observation data from 15 wells. Historical annual recharge averaged 145.0 mm (or 15.9% of annual rainfall) and compared favourably with the GRACE-SM estimate of 149.7 mm (or 14.8% of annual rainfall). Both records show lows and peaks in May and September, respectively; however, the peak for the GRACE-SM data is shifted later in the year to November, suggesting that the GLDAS may poorly predict the timing of soil water storage in this region. Recharge simulation results show good agreement between the timing and magnitude of the mean monthly simulated recharge and the regional mean monthly storage anomaly hydrograph generated from all monitoring wells. Under future climate conditions, annual recharge is projected to decrease by 8% for areas with luvisols and by 11% for areas with nitosols. Given this potential reduction in groundwater recharge, there may be added stress placed on an already stressed resource.
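A small sketch of the water table fluctuation calculation mentioned above, R = Sy * Δh; the specific yield and monthly water-level rises are assumed values for illustration, not the study's observations.

```python
def wtf_recharge(water_level_rise_m, specific_yield=0.02):
    """Water table fluctuation method: recharge (mm) attributed to a
    water-level rise, R = Sy * delta_h."""
    return specific_yield * water_level_rise_m * 1000.0

# Illustrative monthly rises (m) over a wet season; Sy value is assumed.
rises = [0.0, 0.4, 1.8, 2.5, 1.6, 0.3]
print("seasonal recharge =", sum(wtf_recharge(dh) for dh in rises), "mm")
```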
NASA Astrophysics Data System (ADS)
Yeh, Mei-Ling
We have performed a parallel decomposition of the fictitious Lagrangian method for molecular dynamics with a tight-binding total energy expression onto a hypercube computer. This is the first time in the literature that dynamical simulation of semiconducting systems containing more than 512 silicon atoms has become possible with the electrons treated as quantum particles. With the utilization of the Intel Paragon system, our timing analysis predicts that our code should perform realistic simulations of very large systems consisting of thousands of atoms with time requirements of the order of tens of hours. Timing results and performance analysis of our parallel code are presented in terms of calculation time, communication time, and setup time. The accuracy of the fictitious Lagrangian method in molecular dynamics simulation is also investigated, especially the energy conservation of the total energy of the ions. We find that the accuracy of the fictitious Lagrangian scheme in simulations of small silicon clusters and very large silicon systems remains good for as long as the simulations proceed, even though we quench the electronic coordinates to the Born-Oppenheimer surface only at the beginning of the run. The kinetic energy of the electrons does not increase as time goes on, and the energy conservation of the ionic subsystem remains very good. This means that, as far as the ionic subsystem is concerned, the electrons are on average in the true quantum ground states. We also tie up some loose ends regarding a few remaining questions about the fictitious Lagrangian method, such as the difference between results obtained from the Gram-Schmidt and SHAKE methods of orthonormalization, and differences between simulations where the electrons are quenched to the Born-Oppenheimer surface only once compared with periodic quenching.
A multi-GPU real-time dose simulation software framework for lung radiotherapy.
Santhanam, A P; Min, Y; Neelakkantan, H; Papp, N; Meeks, S L; Kupelian, P A
2012-09-01
Medical simulation frameworks facilitate both preoperative and postoperative analysis of the patient's pathophysiological condition. Of particular importance is the simulation of radiation dose delivery for real-time radiotherapy monitoring and retrospective analyses of the patient's treatment. In this paper, a software framework tailored for the development of simulation-based real-time radiation dose monitoring medical applications is discussed. A multi-GPU-based computational framework coupled with inter-process communication methods is introduced for simulating radiation dose delivery on a deformable 3D volumetric lung model and for its real-time visualization. The model deformation and the corresponding dose calculation are allocated among the GPUs in a task-specific manner and executed as a pipeline. Radiation dose calculations are computed on two different GPU hardware architectures. The integration of this computational framework with a front-end software layer and a back-end patient database repository is also discussed. Real-time simulation of the delivered dose is achieved once every 120 ms using the proposed framework. With a linear increase in the number of GPU cores, the computational time of the simulation decreased linearly. The inter-process communication time also improved with an increase in hardware memory. Variations in the delivered dose and computational speedup for variations in the data dimensions are investigated using D70, D90 and gEUD as metrics for a set of 14 patients. Computational speed-up increased with the beam dimensions when compared with a CPU-based commercial software package, while the error in the dose calculation was <1%. Our analyses show that the framework applied to deformable lung model-based radiotherapy is an effective tool for performing both real-time and retrospective analyses.
NASA Technical Reports Server (NTRS)
Imig, L. A.; Garrett, L. E.
1973-01-01
Possibilities for reducing fatigue-test time for supersonic-transport materials and structures were studied in tests with simulated flight-by-flight loading. In order to determine whether short-time tests were feasible, the results of accelerated tests (2 sec per flight) were compared with the results of real-time tests (96 min per flight). The effects of design mean stress, the stress range for ground-air-ground cycles, simulated thermal stress, the number of stress cycles in each flight, and salt corrosion were studied. The flight-by-flight stress sequences were applied to notched sheet specimens of Ti-8Al-1Mo-1V and Ti-6Al-4V titanium alloys. A linear cumulative-damage analysis accounted for large changes in stress range of the simulated flights but did not account for the differences between real-time and accelerated tests. The fatigue lives from accelerated tests were generally within a factor of two of the lives from real-time tests; thus, within the scope of the investigation, accelerated testing seems feasible.
ERIC Educational Resources Information Center
Hldreth, Laura A.; Robison-Cox, Jim; Schmidt, Jade
2018-01-01
This study examines the transferability of results from previous studies of simulation-based curriculum in introductory statistics using data from 3,500 students enrolled in an introductory statistics course at Montana State University from fall 2013 through spring 2016. During this time, four different curricula, a traditional curriculum and…
Initial polishing time affects gloss retention in resin composites.
Waheeb, Nehal; Silikas, Nick; Watts, David
2012-10-01
To determine the effect of finishing and polishing time on the surface gloss of various resin composites before and after simulated toothbrushing. Eight representative resin composites (Ceram X mono, Ceram X duo, Tetric EvoCeram, Venus Diamond, Estelite Sigma Quick, Esthet.X HD, Filtek Supreme XT and Spectrum TPH) were used to prepare 80 disc-shaped (12 mm x 2 mm) specimens. The two-step system Venus Supra was used to polish the specimens for 3 minutes (Group A) or 10 minutes (Group B). All specimens were subjected to 16,000 cycles of simulated toothbrushing. Surface gloss was measured after polishing and after brushing using a gloss meter. Results were evaluated using one-way ANOVA, two-way ANOVA and Dunnett's post hoc test (P = 0.05). Group B (10-minute polishing) resulted in higher gloss values (GV) for all specimens compared to Group A (3 minutes). Group B also showed better gloss retention than Group A after simulated toothbrushing. In each group, there was a significant difference between the polished composite resins (P < 0.05). For all specimens there was a decrease in gloss after the simulated toothbrushing.
NASA Astrophysics Data System (ADS)
Keskinen, M. J.; Karasik, Max; Bates, J. W.; Schmitt, A. J.
2006-10-01
A limitation on the efficiency of high gain direct drive inertial confinement fusion is the extent of pellet disruption caused by the Rayleigh-Taylor (RT) instability. The RT instability can be seeded by pellet surface irregularities and/or laser imprint nonuniformities. It is important to characterize the evolution of the RT instability, e.g., the k-spectrum of areal mass. In this paper we study the time-dependent evolution of the spectrum of the Rayleigh-Taylor instability due to laser imprint in planar targets. This is achieved using the NRL FAST hydrodynamic simulation code together with analytical models. It is found that the optically smoothed laser imprint-driven RT spectrum develops into an inverse power law in k-space after several linear growth times. FAST simulation code results are compared with recent NRL Nike KrF laser experimental data. An analytical model, which is a function of Froude and Atwood numbers, is derived for the RT spectrum and favorably compared with both FAST simulation and Nike observations.
Quick, Jacob A; MacIntyre, Allan D; Barnes, Stephen L
2014-02-01
Surgical airway creation has a high potential for disaster. Conventional methods can be cumbersome and require special instruments. A simple method utilizing three steps and readily available equipment exists, but has yet to be adequately tested. Our objective was to compare conventional cricothyroidotomy with the three-step method utilizing high-fidelity simulation. Utilizing a high-fidelity simulator, 12 experienced flight nurses and paramedics performed both methods after a didactic lecture, simulator briefing, and demonstration of each technique. Six participants performed the three-step method first, and the remaining 6 performed the conventional method first. Each participant was filmed and timed. We analyzed videos with respect to the number of hand repositions, number of airway instrumentations, and technical complications. Times to successful completion were measured from incision to balloon inflation. The three-step method was completed faster (52.1 s vs. 87.3 s; p = 0.007) as compared with conventional surgical cricothyroidotomy. The two methods did not differ statistically regarding number of hand movements (3.75 vs. 5.25; p = 0.12) or instrumentations of the airway (1.08 vs. 1.33; p = 0.07). The three-step method resulted in 100% successful airway placement on the first attempt, compared with 75% of the conventional method (p = 0.11). Technical complications occurred more with the conventional method (33% vs. 0%; p = 0.05). The three-step method, using an elastic bougie with an endotracheal tube, was shown to require fewer total hand movements, took less time to complete, resulted in more successful airway placement, and had fewer complications compared with traditional cricothyroidotomy. Published by Elsevier Inc.
Lattice Boltzmann Model of 3D Multiphase Flow in Artery Bifurcation Aneurysm Problem
Abas, Aizat; Mokhtar, N. Hafizah; Ishak, M. H. H.; Abdullah, M. Z.; Ho Tian, Ang
2016-01-01
This paper simulates and predicts the laminar flow inside the 3D aneurysm geometry, since the hemodynamic situation in the blood vessels is difficult to determine and visualize using standard imaging techniques, for example, magnetic resonance imaging (MRI). Three different types of Lattice Boltzmann (LB) models are computed, namely, single relaxation time (SRT), multiple relaxation time (MRT), and regularized BGK models. The results obtained using these different versions of the LB-based code will then be validated with ANSYS FLUENT, a commercially available finite volume- (FV-) based CFD solver. The simulated flow profiles that include velocity, pressure, and wall shear stress (WSS) are then compared between the two solvers. The predicted outcomes show that all the LB models are comparable and in good agreement with the FVM solver for complex blood flow simulation. The findings also show minor differences in their WSS profiles. The performance of the parallel implementation for each solver is also included and discussed in this paper. In terms of parallelization, it was shown that LBM-based code performed better in terms of the computation time required. PMID:27239221
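A minimal single-relaxation-time (BGK) lattice Boltzmann sketch on a periodic D2Q9 lattice, showing the collide-and-stream structure shared by the SRT model above; it is not the study's 3D multiphase aneurysm solver, and the lattice size, initial condition and relaxation time are illustrative.

```python
import numpy as np

# D2Q9 lattice: velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order BGK equilibrium distribution."""
    cu = np.einsum("qd,xyd->xyq", c, u)
    usq = np.einsum("xyd,xyd->xy", u, u)[..., None]
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def srt_step(f, tau=0.6):
    """One single-relaxation-time (BGK) collision + streaming step."""
    rho = f.sum(axis=-1)
    u = np.einsum("xyq,qd->xyd", f, c) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau          # collide
    for q, (cx, cy) in enumerate(c):                 # stream (periodic boundaries)
        f[..., q] = np.roll(np.roll(f[..., q], cx, axis=0), cy, axis=1)
    return f

nx, ny = 64, 32
rho0 = np.ones((nx, ny))
u0 = np.zeros((nx, ny, 2))
u0[..., 0] = 0.05 * np.sin(2*np.pi*np.arange(ny)/ny)   # shear-wave initial condition
f = equilibrium(rho0, u0)
for _ in range(200):
    f = srt_step(f)
print("mean density:", f.sum(axis=-1).mean())
```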
Data Intensive Analysis of Biomolecular Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Straatsma, TP; Soares, Thereza A.
2007-12-01
The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources are extending molecular simulations to biologically more relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and for longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories is carried out using a novel, more integrative and systematic approach. We are developing a much needed rigorous computer science based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research is focusing on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool to address the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to sequentially process trajectories time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and has been designed in this way to be able to run on workstation computers and other architectures with an aggregate amount of memory that would not allow entire trajectories to be held in core. The consequence of this approach is an I/O-dominated solution that scales very poorly on parallel machines.
We are currently developing tools specifically intended for use on large-scale machines with sufficient main memory that entire trajectories can be held in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its own entries within the trajectory, which will typically be spread over multiple files, and reads the appropriate frames independently of all other processors.
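The sketch below illustrates the frame-partitioning idea described above, with each process handling only its own subset of frames so the trajectory data is read only once; the use of mpi4py and the radius-of-gyration analysis are assumptions for illustration, not the actual DIANA implementation.

```python
import numpy as np
from mpi4py import MPI   # assumes an MPI environment is available

def my_frame_indices(n_frames, comm=MPI.COMM_WORLD):
    """Round-robin assignment of trajectory frames to ranks, so each process
    reads only the frames it will analyse (no shared I/O bottleneck)."""
    return range(comm.rank, n_frames, comm.size)

def analyse(frames):
    """Placeholder per-frame analysis: radius of gyration of each frame."""
    com = frames.mean(axis=1, keepdims=True)
    return np.sqrt(((frames - com) ** 2).sum(axis=2).mean(axis=1))

# Illustrative: every rank generates (in reality, reads) only its own frames.
n_frames, n_atoms = 10_000, 500
idx = list(my_frame_indices(n_frames))
rng = np.random.default_rng(MPI.COMM_WORLD.rank)
local_frames = rng.normal(size=(len(idx), n_atoms, 3))
local_rg = analyse(local_frames)
all_rg = MPI.COMM_WORLD.gather((idx, local_rg), root=0)
if MPI.COMM_WORLD.rank == 0:
    print("frames analysed:", sum(len(i) for i, _ in all_rg))
```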
Harmony search optimization for HDR prostate brachytherapy
NASA Astrophysics Data System (ADS)
Panchal, Aditya
In high dose-rate (HDR) prostate brachytherapy, multiple catheters are inserted interstitially into the target volume. The process of treating the prostate involves calculating and determining the best dose distribution to the target and organs-at-risk by means of optimizing the time that the radioactive source dwells at specified positions within the catheters. It is the goal of this work to investigate the use of a new optimization algorithm, known as Harmony Search, in order to optimize dwell times for HDR prostate brachytherapy. The new algorithm was tested on 9 different patients and also compared with the genetic algorithm. Simulations were performed to determine the optimal value of the Harmony Search parameters. Finally, multithreading of the simulation was examined to determine potential benefits. First, a simulation environment was created using the Python programming language and the wxPython graphical interface toolkit, which was necessary to run repeated optimizations. DICOM RT data from Varian BrachyVision was parsed and used to obtain patient anatomy and HDR catheter information. Once the structures were indexed, the volume of each structure was determined and compared to the original volume calculated in BrachyVision for validation. Dose was calculated using the AAPM TG-43 point source model of the GammaMed 192Ir HDR source and was validated against Varian BrachyVision. A DVH-based objective function was created and used for the optimization simulation. Harmony Search and the genetic algorithm were implemented as optimization algorithms for the simulation and were compared against each other. The optimal values for Harmony Search parameters (Harmony Memory Size [HMS], Harmony Memory Considering Rate [HMCR], and Pitch Adjusting Rate [PAR]) were also determined. Lastly, the simulation was modified to use multiple threads of execution in order to achieve faster computational times. Experimental results show that the volume calculation that was implemented in this thesis was within 2% of the values computed by Varian BrachyVision for the prostate, within 3% for the rectum and bladder and 6% for the urethra. The calculation of dose compared to BrachyVision was determined to be different by only 0.38%. Isodose curves were also generated and were found to be similar to BrachyVision. The comparison between Harmony Search and genetic algorithm showed that Harmony Search was over 4 times faster when compared over multiple data sets. The optimal Harmony Memory Size was found to be 5 or lower; the Harmony Memory Considering Rate was determined to be 0.95, and the Pitch Adjusting Rate was found to be 0.9. Ultimately, the effect of multithreading showed that as intensive computations such as optimization and dose calculation are involved, the threads of execution scale with the number of processors, achieving a speed increase proportional to the number of processor cores. In conclusion, this work showed that Harmony Search is a viable alternative to existing algorithms for use in HDR prostate brachytherapy optimization. Coupled with the optimal parameters for the algorithm and a multithreaded simulation, this combination has the capability to significantly decrease the time spent on minimizing optimization problems in the clinic that are time intensive, such as brachytherapy, IMRT and beam angle optimization.
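A compact sketch of the Harmony Search loop with the three parameters tuned in the thesis (HMS, HMCR, PAR), applied to a toy quadratic objective standing in for the DVH-based dwell-time objective; parameter values, bounds and the target vector are illustrative.

```python
import numpy as np

def harmony_search(objective, bounds, hms=5, hmcr=0.95, par=0.9,
                   bandwidth=0.05, iterations=2000, seed=0):
    """Minimal Harmony Search: keeps a memory of `hms` candidate solutions and
    improvises new ones by memory consideration (hmcr), pitch adjustment (par)
    and random selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    memory = rng.uniform(lo, hi, size=(hms, len(bounds)))
    scores = np.array([objective(h) for h in memory])
    for _ in range(iterations):
        new = rng.uniform(lo, hi)                       # random selection by default
        use_memory = rng.random(len(bounds)) < hmcr
        picks = memory[rng.integers(0, hms, len(bounds)), np.arange(len(bounds))]
        new = np.where(use_memory, picks, new)
        adjust = use_memory & (rng.random(len(bounds)) < par)
        new = np.clip(new + adjust * rng.uniform(-bandwidth, bandwidth, len(bounds))
                      * (hi - lo), lo, hi)
        score = objective(new)
        worst = scores.argmax()
        if score < scores[worst]:                       # replace the worst harmony
            memory[worst], scores[worst] = new, score
    return memory[scores.argmin()], scores.min()

# Toy stand-in for a dwell-time objective (real DVH objectives are more complex).
target = np.array([3.0, 1.0, 2.0, 0.5])
best, value = harmony_search(lambda t: ((t - target) ** 2).sum(),
                             bounds=[(0.0, 5.0)] * 4)
print(best.round(2), value)
```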
Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
NASA Astrophysics Data System (ADS)
Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.
2008-02-01
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
Lopes-Silva, João Paulo; Da Silva Santos, Jonatas Ferreira; Artioli, Guilherme Giannini; Loturco, Irineu; Abbiss, Chris; Franchini, Emerson
2018-04-01
To investigate the effect of sodium bicarbonate (NaHCO3) on performance and estimated energy system contribution during simulated taekwondo combat. Nine taekwondo athletes completed two experimental sessions separated by at least 48 h. Athletes consumed 300 mg/kg body mass of NaHCO3 or placebo (CaCO3) 90 min before the combat simulation (three rounds of 2 min separated by 1 min passive recovery), in a double-blind, randomized, repeated-measures crossover design. All simulated combat was filmed to quantify the time spent fighting in each round. Lactate concentration [La-] and rating of perceived exertion (RPE) were measured before and after each round, whereas heart rate (HR) and the estimated contribution of the oxidative (W_OXI), ATP (adenosine triphosphate)-phosphocreatine (PCr) (W_PCr), and glycolytic (W_[La-]) systems were calculated during the combat simulation. [La-] increased significantly after NaHCO3 ingestion, when compared with the placebo condition (+14%, P = 0.04, d = 3.70). NaHCO3 ingestion resulted in greater estimated glycolytic energy contribution in the first round when compared with the placebo condition (+31%, P = 0.01, d = 3.48). Total attack time was significantly greater after NaHCO3 when compared with placebo (+13%, P = 0.05, d = 1.15). W_OXI, W_PCr, VO2, HR and RPE were not different between conditions (P > 0.05). NaHCO3 ingestion was able to increase the contribution of glycolytic metabolism and, therefore, improve performance during simulated taekwondo combat.
Evaluation of a 6% hydrogen peroxide tooth-whitening gel on enamel microhardness after extended use.
Toteda, Mariarosaria; Philpotts, Carole J; Cox, Trevor F; Joiner, Andrew
2008-11-01
To evaluate the effects of a 6% hydrogen peroxide tooth whitener, Xtra White, on sound human enamel microhardness in vitro after an extended and exaggerated simulated 8 weeks of product use. Polished human enamel specimens were prepared and baseline microhardness and color measurements were determined. The enamel specimens were exposed to a fluoride-containing toothpaste for 30 seconds and then exposed to water, Xtra White, a control carbopol gel containing no hydrogen peroxide, or a carbonated beverage (n = 8 per group) for 20 minutes. Specimens were exposed to whole saliva at all other times. In order to simulate 8 weeks of extended product use (quadruple the duration specified in the manufacturer's instructions), 112 treatments were conducted. Microhardness measurements were taken after 2, 4, 6, and 8 weeks of simulated treatments, and color was measured after 2 and 8 weeks. The Xtra White-treated specimens showed a statistically significant (P < .0001) increase in L* and decrease in b* compared to the water-treated specimens after 2 weeks of simulated use, indicating that bleaching had occurred. The carbonated beverage-treated specimens were significantly softened (P = .0009) compared to baseline after only one treatment. The carbopol gel-treated specimens were significantly softened (P = .0028) after 2 weeks of simulated treatments compared to baseline. There were no statistically significant differences in enamel microhardness between baseline and all treatment times for the Xtra White and water groups. Xtra White does not have any deleterious effects on sound human enamel microhardness after an extended and exaggerated simulated 8 weeks of product use.
Tang, Guoping; Shafer, Sarah L.; Barlein, Patrick J.; Holman, Justin O.
2009-01-01
Prognostic vegetation models have been widely used to study the interactions between environmental change and biological systems. This study examines the sensitivity of vegetation model simulations to: (i) the selection of input climatologies representing different time periods and their associated atmospheric CO2 concentrations, (ii) the choice of observed vegetation data for evaluating the model results, and (iii) the methods used to compare simulated and observed vegetation. We use vegetation simulated for Asia by the equilibrium vegetation model BIOME4 as a typical example of vegetation model output. BIOME4 was run using 19 different climatologies and their associated atmospheric CO2 concentrations. The Kappa statistic, Fuzzy Kappa statistic and a newly developed map-comparison method, the Nomad index, were used to quantify the agreement between the biomes simulated under each scenario and the observed vegetation from three different global land- and tree-cover data sets: the global Potential Natural Vegetation data set (PNV), the Global Land Cover Characteristics data set (GLCC), and the Global Land Cover Facility data set (GLCF). The results indicate that the 30-year mean climatology (and its associated atmospheric CO2 concentration) for the time period immediately preceding the collection date of the observed vegetation data produce the most accurate vegetation simulations when compared with all three observed vegetation data sets. The study also indicates that the BIOME4-simulated vegetation for Asia more closely matches the PNV data than the other two observed vegetation data sets. Given the same observed data, the accuracy assessments of the BIOME4 simulations made using the Kappa, Fuzzy Kappa and Nomad index map-comparison methods agree well when the compared vegetation types consist of a large number of spatially continuous grid cells. The results of this analysis can assist model users in designing experimental protocols for simulating vegetation.
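A small sketch of the Cohen's Kappa map-comparison statistic referred to above, computed from a confusion matrix of simulated versus observed class labels; the synthetic maps and the number of classes are illustrative.

```python
import numpy as np

def kappa(sim, obs, n_classes):
    """Cohen's Kappa agreement between simulated and observed class maps
    (flattened 1-D arrays of integer class labels)."""
    confusion = np.zeros((n_classes, n_classes))
    for s, o in zip(sim, obs):
        confusion[s, o] += 1
    total = confusion.sum()
    p_observed = np.trace(confusion) / total
    p_chance = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
    return (p_observed - p_chance) / (1.0 - p_chance)

rng = np.random.default_rng(4)
observed = rng.integers(0, 5, 10_000)          # 5 hypothetical biome classes
simulated = observed.copy()
flip = rng.random(observed.size) < 0.2         # 20% of cells disagree
simulated[flip] = rng.integers(0, 5, flip.sum())
print(f"Kappa = {kappa(simulated, observed, 5):.2f}")
```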
Health IT-assisted population-based preventive cancer screening: a cost analysis.
Levy, Douglas E; Munshi, Vidit N; Ashburner, Jeffrey M; Zai, Adrian H; Grant, Richard W; Atlas, Steven J
2015-12-01
Novel health information technology (IT)-based strategies harnessing patient registry data seek to improve care at a population level. We analyzed costs from a randomized trial of 2 health IT strategies to improve cancer screening compared with usual care from the perspective of a primary care network. Monte Carlo simulations were used to compare costs across management strategies. We assessed the cost of the software, materials, and personnel for baseline usual care (BUC) compared with augmented usual care (AUC [ie, automated patient outreach]) and augmented usual care with physician input (AUCPI [ie, outreach mediated by physicians' knowledge of their patient panels]) over 1 year. AUC and AUCPI each reduced the time physicians spent on cancer screening by 6.5 minutes per half-day clinical session compared with BUC without changing cancer screening rates. Assuming the value of this time accrues to the network, total costs of cancer screening efforts over the study year were $3.83 million for AUC, $3.88 million for AUCPI, and $4.10 million for BUC. AUC was cost-saving relative to BUC in 87.1% of simulations. AUCPI was cost-saving relative to BUC in 82.5% of simulations. Ongoing per patient costs were lower for both AUC ($35.63) and AUCPI ($35.58) relative to BUC ($39.51). Over the course of the study year, the value of reduced physician time devoted to preventive cancer screening outweighed the costs of the interventions. Primary care networks considering similar interventions will need to capture adequate physician time savings to offset the costs of expanding IT infrastructure.
Lindauer, Andreas; Laveille, Christian; Stockis, Armel
2017-11-01
To quantify the relationship between exposure to lacosamide monotherapy and seizure probability, and to simulate the effect of changing the dose regimen. Structural time-to-event models for dropouts (not because of a lack of efficacy) and seizures were developed using data from 883 adult patients newly diagnosed with epilepsy and experiencing focal or generalized tonic-clonic seizures, participating in a trial (SP0993; ClinicalTrials.gov identifier: NCT01243177) comparing the efficacy of lacosamide and carbamazepine controlled-release monotherapy. Lacosamide dropout and seizure models were used for simulating the effect of changing the initial target dose on seizure freedom. Repeated time-to-seizure data were described by a Weibull distribution with parameters estimated separately for the first and subsequent seizures. Daily area under the plasma concentration-time curve was related linearly to the log-hazard. Disease severity, expressed as the number of seizures during the 3 months before the trial (baseline), was a strong predictor of seizure probability: patients with 7-50 seizures at baseline had a 2.6-fold (90% confidence interval 2.01-3.31) higher risk of seizures compared with the reference two to six seizures. Simulations suggested that a 400-mg/day, rather than a 200-mg/day initial target dose for patients with seven or more seizures at baseline could potentially result in an additional 8% of seizure-free patients for 6 months at the last evaluated dose level. Patients receiving lacosamide had a slightly lower dropout risk compared with those receiving carbamazepine. Baseline disease severity was the most important predictor of seizure probability. Simulations suggest that an initial target dose >200 mg/day could potentially benefit patients with greater disease severity.
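A minimal sketch of a Weibull time-to-event model with exposure acting linearly on the log-hazard, as described above, used to simulate seizure-free fractions under different exposures; all parameter values and units are invented for illustration and are not the published estimates.

```python
import numpy as np

def sample_time_to_seizure(auc, lam=180.0, shape=0.8, beta=-0.004, n=10_000, seed=5):
    """Weibull proportional-hazards sampler: the exposure `auc` acts linearly
    on the log-hazard. Returns simulated times to first seizure (days)."""
    rng = np.random.default_rng(seed)
    hazard_ratio = np.exp(beta * auc)          # exposure lowers the hazard if beta < 0
    u = rng.uniform(size=n)
    # inverse-transform sampling of S(t) = exp(-(t/lam)**shape * hazard_ratio)
    return lam * (-np.log(u) / hazard_ratio) ** (1.0 / shape)

for auc in (0.0, 100.0, 200.0):                # hypothetical daily AUC levels
    t = sample_time_to_seizure(auc)
    print(f"AUC {auc:5.0f}: 6-month seizure-free fraction = {(t > 182).mean():.2f}")
```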
Digital Timing Recovery for High Speed Optical Drives
NASA Astrophysics Data System (ADS)
Ko, Seok Jun; Kim, Pan Soo; Choi, Hyung Jin; Lee, Jae-Wook
2002-03-01
A new digital timing recovery scheme for optical drive systems is presented. In comparative simulations using digital versatile disc (DVD) patterns with marginal input conditions, the proposed algorithm improves jitter variance by a factor of four and the signal-to-noise ratio (SNR) margin by 3 dB.
Real time implementation and control validation of the wind energy conversion system
NASA Astrophysics Data System (ADS)
Sattar, Adnan
The purpose of this thesis is to analyze the dynamic and transient characteristics of wind energy conversion systems, including stability issues, in a real-time environment using the Real Time Digital Simulator (RTDS). Among the various power system simulation tools available, the RTDS is one of the most powerful. The RTDS simulator has a graphical user interface called RSCAD, which contains a detailed component model library for both power system and control analyses. The hardware is based on digital signal processors mounted in racks. The RTDS simulator has the advantage of interfacing real-world signals from external devices, and is therefore used to test protection and control equipment. Dynamic and transient characteristics of fixed and variable speed wind turbine generating systems (WTGSs) are analyzed in this thesis. A Static Synchronous Compensator (STATCOM), as a flexible AC transmission system (FACTS) device, is used to enhance the fault ride-through (FRT) capability of the fixed speed wind farm. A two-level voltage source converter based STATCOM is modeled in both the VSC small time-step and VSC large time-step environments of RTDS. The simulation results of the RTDS model system are compared with the off-line EMTP software PSCAD/EMTDC. A new operational scheme for a MW-class grid-connected variable speed wind turbine driven permanent magnet synchronous generator (VSWT-PMSG) is developed. The VSWT-PMSG uses fully controlled frequency converters for grid interfacing and thus has the ability to control real and reactive power simultaneously. The frequency converters are modeled in the VSC small time-step of the RTDS, and a realistic three-phase grid is interfaced with the RSCAD simulation through the optical analogue-digital converter (OADC) card of the RTDS. Steady-state and LVRT characteristics are analyzed to validate the proposed operational scheme. The simulation results show good agreement with the real-time simulation software and can thus be used to validate the controllers for real-time operation. Integration of a Battery Energy Storage System (BESS) with a wind farm can smooth its intermittent power fluctuations. The work also focuses on the real-time implementation of a sodium sulfur (NaS) type BESS. The BESS is integrated with the STATCOM. The main advantage of this system is that it can also provide reactive power support to the system along with the real power exchange from the BESS unit. The BESS integrated with the STATCOM is modeled in the VSC small time-step of the RTDS. A cascaded vector control scheme is used for the control of the STATCOM, and suitable control is developed for the charging/discharging of the NaS type BESS. Results are compared with the laboratory-standard power system software PSCAD/EMTDC, and the advantages of using RTDS in dynamic and transient characteristic analyses of wind farms are also demonstrated clearly.
Bipedal vs. unipedal: a comparison between one-foot and two-foot driving in a driving simulator.
Wang, Dong-Yuan Debbie; Richard, F Dan; Cino, Cullen R; Blount, Trevin; Schmuller, Joseph
2017-04-01
Is it better to drive with one foot or with two feet? Although two-foot driving has fostered interminable debate in the media, no scientific and systematic research has assessed this issue and federal and local state governments have provided no answers. The current study compared traditional unipedal (one-foot driving, using the right foot to control the accelerator and the brake pedal) with bipedal (two-foot driving, using the right foot to control the accelerator and the left foot to control the brake pedal) responses to a visual stimulus in a driving simulator study. Each of 30 undergraduate participants drove in a simulated driving scenario. They responded to a STOP sign displayed on the centre of the screen by bringing their vehicle to a complete stop. Brake RT was shorter under the bipedal condition, while throttle RT showed advantage under the unipedal condition. Stopping time and distance showed a bipedal advantage, however. We discuss further limitations of the current study and implications in a driving task. Before drawing any conclusions from the simulator study, further on-road driving tests are necessary to confirm these obtained bipedal advantages. Practitioner Summary: Traditional unipedal (using the right foot to control the accelerator and the brake pedal) with bipedal (using the right foot to control the accelerator and the left foot to control the brake pedal) responses to a visual stimulus in a driving simulator were compared. Our results showed a bipedal advantage.
Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L
2011-12-01
To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted based on an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). 13,000 patients were followed up individually until death. For probability sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months during the simulation length of 70 years. Incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time of patients with AS. Results obtained from the simulation are plausible.
Computer-Simulated Arthroscopic Knee Surgery: Effects of Distraction on Resident Performance.
Cowan, James B; Seeley, Mark A; Irwin, Todd A; Caird, Michelle S
2016-01-01
Orthopedic surgeons cite "full focus" and "distraction control" as important factors for achieving excellent outcomes. Surgical simulation is a safe and cost-effective way for residents to practice surgical skills, and it is a suitable tool to study the effects of distraction on resident surgical performance. This study investigated the effects of distraction on arthroscopic knee simulator performance among residents at various levels of experience. The authors hypothesized that environmental distractions would negatively affect performance. Twenty-five orthopedic surgery residents performed a diagnostic knee arthroscopy computer simulation according to a checklist of structures to identify and tasks to complete. Participants were evaluated on arthroscopy time, number of chondral injuries, instances of looking down at their hands, and completion of checklist items. Residents repeated this task at least 2 weeks later while simultaneously answering distracting questions. During distracted simulation, the residents had significantly fewer completed checklist items (P<.02) compared with the initial simulation. Senior residents completed the initial simulation in less time (P<.001), with fewer chondral injuries (P<.005) and fewer instances of looking down at their hands (P<.012), compared with junior residents. Senior residents also completed 97% of the diagnostic checklist, whereas junior residents completed 89% (P<.019). During distracted simulation, senior residents continued to complete tasks more quickly (P<.006) and with fewer instances of looking down at their hands (P<.042). Residents at all levels appear to be susceptible to the detrimental effects of distraction when performing arthroscopic simulation. Addressing even straightforward questions intraoperatively may affect surgeon performance. Copyright 2016, SLACK Incorporated.
Particle identification using the time-over-threshold measurements in straw tube detectors
NASA Astrophysics Data System (ADS)
Jowzaee, S.; Fioravanti, E.; Gianotti, P.; Idzik, M.; Korcyl, G.; Palka, M.; Przyborowski, D.; Pysz, K.; Ritman, J.; Salabura, P.; Savrie, M.; Smyrski, J.; Strzempek, P.; Wintz, P.
2013-08-01
The identification of charged particles based on energy losses in straw tube detectors has been simulated. The response of a new front-end chip developed for the PANDA straw tube tracker was implemented in the simulations and corrections for track distance to sense wire were included. Separation power for p - K, p - π and K - π pairs obtained using the time-over-threshold technique was compared with the one based on the measurement of collected charge.
Saw-tooth instability in storage rings: simulations and dynamical model
NASA Astrophysics Data System (ADS)
Migliorati, M.; Palumbo, L.; Dattoli, G.; Mezi, L.
1999-11-01
The saw-tooth instability in storage rings is studied by means of a time-domain simulation code which takes into account the self-induced wake fields. The results are compared with those from a dynamical heuristic model exploiting two coupled non-linear differential equations, accounting for the time behavior of the instability growth rate and for the anomalous growth of the energy spread. This model is shown to reproduce the characteristic features of the instability in a fairly satisfactory way.
Parallel algorithms for simulating continuous time Markov chains
NASA Technical Reports Server (NTRS)
Nicol, David M.; Heidelberger, Philip
1992-01-01
We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
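For reference, the sketch below shows plain (serial) uniformization of a single continuous-time Markov chain: jumps are generated by a Poisson process whose rate bounds every exit rate, and self-loops act as pseudo-events; the synchronization machinery of the parallel methods is not represented, and the example generator is illustrative.

```python
import numpy as np

def simulate_ctmc_uniformized(Q, state0, t_end, rate=None, seed=6):
    """Simulate a continuous-time Markov chain by uniformization: events occur
    as a Poisson process of rate `rate` (>= every exit rate), and at each event
    the embedded DTMC either jumps or takes a self-loop ("pseudo-event")."""
    rng = np.random.default_rng(seed)
    Q = np.asarray(Q, dtype=float)
    exit_rates = -np.diag(Q)
    rate = rate or exit_rates.max()
    P = Q / rate + np.eye(Q.shape[0])          # uniformized transition matrix
    t, state, path = 0.0, state0, [(0.0, state0)]
    while True:
        t += rng.exponential(1.0 / rate)       # next Poisson event
        if t > t_end:
            return path
        state = rng.choice(Q.shape[0], p=P[state])
        path.append((t, state))

# Two-state on/off process: rates 0 -> 1 at 2.0/s, 1 -> 0 at 0.5/s
Q = [[-2.0, 2.0], [0.5, -0.5]]
path = simulate_ctmc_uniformized(Q, state0=0, t_end=10.0)
print(len(path), "events; final state", path[-1][1])
```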
NASA Astrophysics Data System (ADS)
Guo, Yue; Du, Lei; Jiang, Long; Li, Qing; Zhao, Zhenning
2017-01-01
In this paper, the combustion and NOx emission characteristics of a 300 MW tangentially fired boiler are simulated. The flue gas velocity field in the furnace, the temperature field, and the concentration distributions of the combustion products are obtained, and the simulated velocity, temperature, oxygen concentration, and NOx emissions are compared with test results for the waisting secondary-air distribution condition; the simulated values agree well with the measured values, confirming the validity of the model. The flow field in the furnace, the combustion, and the NOx emission characteristics are then simulated under different conditions, comparing the primary-zone waisting secondary-air distribution with uniform air distribution and a pagoda-style declining air distribution. The results show that the waisting air distribution is effective in reducing NOx emissions.
Malataras, G; Kappas, C; Lovelock, D M; Mohan, R
1997-01-01
This article presents a comparison between two implementations of an EGS4 Monte Carlo simulation of a radiation therapy machine. The first implementation was run on a high performance RISC workstation, and the second was run on an inexpensive PC. The simulation was performed using the MCRAD user code. The photon energy spectra, as measured at a plane transverse to the beam direction and containing the isocenter, were compared. The photons were also binned radially in order to compare the variation of the spectra with radius. With 500,000 photons recorded in each of the two simulations, the running times were 48 h and 116 h for the workstation and the PC, respectively. No significant statistical differences between the two implementations were found.
A FLUKA simulation of the KLOE electromagnetic calorimeter
NASA Astrophysics Data System (ADS)
Di Micco, B.; Branchini, P.; Ferrari, A.; Loffredo, S.; Passeri, A.; Patera, V.
2007-10-01
We present the simulation of the KLOE calorimeter with the FLUKA Monte Carlo program. The response of the detector to electromagnetic showers has been studied and compared with the publicly available KLOE data. The energy and the time resolution of the electromagnetic clusters are in good agreement with the data. The simulation has also been used to study a possible improvement of the KLOE calorimeter using multianode photomultipliers. A HAMAMATSU R7600-M16 photomultiplier has been assembled in order to determine the whole cross-talk matrix that has been included in the simulation. The cross-talk matrix takes into account the effects of a realistic photomultiplier's electronics and of its coupling to the active material. The performance of the modified readout has been compared to the usual KLOE configuration.
Timing performance of phase-locked loops in optical pulse position modulation communication systems
NASA Technical Reports Server (NTRS)
Lafaw, D. A.; Gardner, C. S.
1984-01-01
An optical digital communication system requires that an accurate clock signal be available at the receiver for proper synchronization with the transmitted signal. Phase synchronization is especially critical in M-ary pulse position modulation (PPM) systems where the optimum decision scheme is an energy detector which compares the energy in each of M time slots to decide which of M possible words was sent. Timing errors cause energy spillover into adjacent time slots (a form of intersymbol interference) so that only a portion of the signal energy may be attributed to the correct time slot. This effect decreases the effective signal, increases the effective noise, and increases the probability of error. A timing subsystem for a satellite-to-satellite optical PPM communication link is simulated. The receiver employs direct photodetection, preprocessing of the detected signal, and a phase-locked loop for timing synchronization. The variance of the relative phase error is examined under varying signal strength conditions as an indication of loop performance, and simulation results are compared to theoretical calculations.
Comparison of algorithms to generate event times conditional on time-dependent covariates.
Sylvestre, Marie-Pierre; Abrahamowicz, Michal
2008-06-30
The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.
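A minimal sketch of the binomial (discrete-time) class of algorithms, not the authors' permutational algorithm, is shown below: each short interval contributes a Bernoulli event whose hazard depends on the current value of the time-dependent covariate. The covariate, baseline hazard, and effect size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_event_time(covariate, beta, base_hazard, dt=0.1, t_max=100.0):
    """Draw one event time under a discrete-time (binomial) approximation.

    covariate  : function t -> value of the time-dependent covariate
    beta       : log hazard ratio per unit of covariate
    base_hazard: constant baseline hazard rate
    """
    t = 0.0
    while t < t_max:
        hazard = base_hazard * np.exp(beta * covariate(t))
        if rng.random() < 1.0 - np.exp(-hazard * dt):  # event in (t, t+dt]?
            return t + dt
        t += dt
    return np.inf  # administratively censored at t_max

# Covariate that switches on at t = 5 (e.g. start of treatment).
z = lambda t: 1.0 if t >= 5.0 else 0.0
times = [simulate_event_time(z, beta=0.7, base_hazard=0.05) for _ in range(5)]
print(times)
```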
2-dimensional simulations of electrically asymmetric capacitively coupled RF-discharges
NASA Astrophysics Data System (ADS)
Mohr, Sebastian; Schulze, Julian; Schuengel, Edmund; Czarnetzki, Uwe
2011-10-01
Capacitively coupled RF discharges are widely used for surface treatment, such as the deposition of thin films. For industrial applications, independent control of the ion flux to, and the mean energy of, the ions impinging on the surfaces is desired. Experiments and 1D3v PIC/MCC simulations have shown that this independent control is possible by applying a fundamental frequency and its second harmonic to the powered electrode. In this way, even in geometrically symmetric discharges, as they are often used in industrial reactors, a discharge asymmetry can be induced electrically, hence the name Electrical Asymmetry Effect (EAE). We performed 2D simulations of electrically asymmetric discharges using HPEM, developed by the group of Mark Kushner, a simulation tool suitable for modelling industrial reactors. First results are presented and compared to previously obtained experimental and simulation data. The comparison shows that, for the first time, we have succeeded in simulating electrically asymmetric discharges with a two-dimensional simulation. Funding: German Ministry for the Environment (0325210B).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storelli, A., E-mail: alexandre.storelli@lpp.polytechnique.fr; Vermare, L.; Hennequin, P.
2015-06-15
In a dedicated collisionality scan in Tore Supra, the geodesic acoustic mode (GAM) is detected and identified with the Doppler backscattering technique. Observations are compared to the results of a simulation with the gyrokinetic code GYSELA. We found that the GAM frequency in experiments is lower than predicted by simulation and theory. Moreover, the disagreement is higher in the low collisionality scenario. Bursts of non harmonic GAM oscillations have been characterized with filtering techniques, such as the Hilbert-Huang transform. When comparing this dynamical behaviour between experiments and simulation, the probability density function of GAM amplitude and the burst autocorrelation time are found to be remarkably similar. In the simulation, where the radial profile of GAM frequency is continuous, we observed a phenomenon of radial phase mixing of the GAM oscillations, which could influence the burst autocorrelation time.
NASA Technical Reports Server (NTRS)
Walker, R.; Gupta, N.
1984-01-01
The important algorithm issues necessary to achieve a real-time flutter monitoring system are addressed, namely: guidelines for choosing appropriate model forms, reduction of the parameter convergence transient, handling of multiple modes, the effect of over-parameterization, and estimate accuracy predictions, both online and for experiment design. An approach for efficiently computing continuous-time flutter parameter Cramer-Rao estimate error bounds was developed. This enables a convincing comparison of theoretical and simulation results, as well as offline studies in preparation for a flight test. Theoretical predictions, simulation and flight test results from the NASA Drones for Aerodynamic and Structural Test (DAST) Program are compared.
Simulation-Based Testing of Pager Interruptions During Laparoscopic Cholecystectomy.
Sujka, Joseph A; Safcsak, Karen; Bhullar, Indermeet S; Havron, William S
2018-01-30
To determine if pager interruptions affect operative time, safety, or complications and management of pager issues during a simulated laparoscopic cholecystectomy. Twelve surgery resident volunteers were tested on a Simbionix Lap Mentor II simulator. Each resident performed 6 randomized simulated laparoscopic cholecystectomies; 3 with pager interruptions (INT) and 3 without pager interruptions (NO-INT). The pager interruptions were sent in the form of standardized patient vignettes and timed to distract the resident during dissection of the critical view of safety and clipping of the cystic duct. The residents were graded on a pass/fail scale for eliciting appropriate patient history and management of the pager issue. Data was extracted from the simulator for the following endpoints: operative time, safety metrics, and incidence of operative complications. The Mann-Whitney U test and contingency table analysis were used to compare the 2 groups (INT vs. NO-INT). Level I trauma center; Simulation laboratory. Twelve general surgery residents. There was no significant difference between the 2 groups in any of the operative endpoints as measured by the simulator. However, in the INT group, only 25% of the time did the surgery residents both adequately address the issue and provide effective patient management in response to the pager interruption. Pager interruptions did not affect operative time, safety, or complications during the simulated procedure. However, there were significant failures in the appropriate evaluations and management of pager issues. Consideration for diversion of patient care issues to fellow residents not operating to improve quality and safety of patient care outside the operating room requires further study. Copyright © 2018. Published by Elsevier Inc.
Symmetry Breaking in a random passive scalar
NASA Astrophysics Data System (ADS)
Kilic, Zeliha; McLaughlin, Richard; Camassa, Roberto
2017-11-01
We consider the evolution of a decaying passive scalar in the presence of a Gaussian white noise fluctuating shear flow. We focus on deterministic initial data and establish the short, intermediate, and long time symmetry properties of the evolving pointwise probability measure for the random passive scalar. Analytical results are compared directly to Monte Carlo simulations. Time permitting, we will compare the predictions to experimental observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
Dynamic Communicability Predicts Infectiousness
NASA Astrophysics Data System (ADS)
Mantzaris, Alexander V.; Higham, Desmond J.
Using real, time-dependent social interaction data, we look at correlations between some recently proposed dynamic centrality measures and summaries from large-scale epidemic simulations. The evolving network arises from email exchanges. The centrality measures, which are relatively inexpensive to compute, assign rankings to individual nodes based on their ability to broadcast information over the dynamic topology. We compare these with node rankings based on infectiousness that arise when a full stochastic SI simulation is performed over the dynamic network. More precisely, we look at the proportion of the network that a node is able to infect over a fixed time period, and the length of time that it takes for a node to infect half the network. We find that the dynamic centrality measures are an excellent, and inexpensive, proxy for the full simulation-based measures.
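One common formulation of such a broadcast-type dynamic centrality (assumed here for illustration; the paper's exact definition may differ) multiplies Katz-like resolvents over the ordered sequence of adjacency matrices, so that only time-respecting walks contribute:

```python
import numpy as np

def broadcast_centrality(adjacency_sequence, a=0.1):
    """Dynamic communicability over an ordered sequence of adjacency matrices.

    Q = (I - a*A_1)^{-1} (I - a*A_2)^{-1} ...; the row sums of Q rank nodes
    by their ability to broadcast information along time-respecting walks.
    a must be smaller than 1 / (max spectral radius of the A_k) for convergence.
    """
    n = adjacency_sequence[0].shape[0]
    Q = np.eye(n)
    for A in adjacency_sequence:
        Q = Q @ np.linalg.inv(np.eye(n) - a * A)
    return Q.sum(axis=1)          # broadcast score, one per node

# Toy three-node temporal network: edge 0-1 exists first, edge 1-2 later,
# so node 0 can reach node 2 through a time-respecting path but not vice versa.
A1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
A2 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)
print(broadcast_centrality([A1, A2], a=0.3))
```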
NASA Astrophysics Data System (ADS)
Lamdjaya, T.; Jobiliong, E.
2017-01-01
PT Anugrah Citra Boga is a food processing company that produces meatballs as its main product. The distribution system for the products must be considered, because it needs to be more efficient in order to reduce shipping cost. The purpose of this research is to optimize the distribution time by simulating the distribution channels with the capacitated vehicle routing problem method. First, the distribution routes are observed in order to calculate the average speed, time capacity, and shipping costs. The model is then built using AIMMS software; the inputs required to simulate it are the customer locations, distances, and process times. Finally, the total distribution cost obtained from the simulation is compared with the historical data. It is concluded that the company can reduce its shipping cost by around 4.1%, or Rp 529,800 per month. With this model, vehicle utilization also becomes more balanced: the utilization rate of the first vehicle drops from the current 104.6% to 88.6%, while that of the second vehicle increases from 59.8% to 74.1%. The simulation model is able to produce the optimal shipping route subject to time restrictions, vehicle capacity, and the number of vehicles.
Application of Arrester Simulation Device in Training
NASA Astrophysics Data System (ADS)
Baoquan, Zhang; Ziqi, Chai; Genghua, Liu; Wei, Gao; Kaiyue, Wu
2017-12-01
Based on the arrester simulation device that has been successfully put into use, this paper introduces its application to arrester testing: insulation resistance measurement, the counter test, the leakage current test under DC 1 mA voltage, and the leakage current test under 0.75 U1mA. By comparison with the existing training, the paper summarizes the simulation device's outstanding advantages, including real-time monitoring, multi-type fault data analysis, and acousto-optic simulation. It effectively resolves the conflict between realism and safety in the existing test training, and provides a reference for further training.
Lee, Sanghun; Park, Sung Soo
2011-11-03
Dielectric constants of electrolytic organic solvents are calculated employing nonpolarizable Molecular Dynamics simulation with the Electronic Continuum (MDEC) model and Density Functional Theory. The molecular polarizabilities are obtained at the B3LYP/6-311++G(d,p) level of theory to estimate high-frequency refractive indices, while the densities and dipole moment fluctuations are computed using nonpolarizable MD simulations. The dielectric constants reproduced from these procedures are evaluated to provide a reliable approach for estimating the experimental data. In addition, two representative solvents which have similar molecular weights but different dielectric properties, i.e., ethyl methyl carbonate and propylene carbonate, are compared using MD simulations, and distinctly different dielectric behaviors are observed at short times as well as at long times.
DLTPulseGenerator: A library for the simulation of lifetime spectra based on detector-output pulses
NASA Astrophysics Data System (ADS)
Petschke, Danny; Staab, Torsten E. M.
2018-01-01
The quantitative analysis of lifetime spectra, relevant in both the life and materials sciences, presents one of the ill-posed inverse problems and, hence, leads to most stringent requirements on the hardware specifications and the analysis algorithms. Here we present DLTPulseGenerator, a library written in native C++ 11, which provides a simulation of lifetime spectra according to the measurement setup. The simulation is based on pairs of non-TTL detector output pulses. Those pulses require constant fraction discrimination (CFD) for the determination of the exact timing signal and, thus, the calculation of the time difference, i.e. the lifetime. To verify the functionality, simulation results were compared to experimentally obtained data using Positron Annihilation Lifetime Spectroscopy (PALS) on pure tin.
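As a rough illustration of constant fraction timing (not the DLTPulseGenerator implementation), the sketch below forms the usual bipolar signal from an attenuated copy minus a delayed copy of the pulse and interpolates its zero crossing; the pulse shape, fraction, and delay are assumptions. The timing result is essentially independent of the pulse amplitude, which is the point of the technique.

```python
import numpy as np

def cfd_time(t, pulse, fraction=0.3, delay_samples=5):
    """Constant-fraction timing: zero crossing of fraction*pulse(t) - pulse(t - delay).

    t             : sample times
    pulse         : sampled detector pulse (positive polarity assumed)
    fraction      : attenuation factor
    delay_samples : delay expressed in samples
    """
    delayed = np.concatenate([np.zeros(delay_samples), pulse[:-delay_samples]])
    bipolar = fraction * pulse - delayed
    # find the first positive-to-negative sign change after the pulse starts rising
    for i in range(1, len(bipolar)):
        if bipolar[i - 1] > 0.0 and bipolar[i] <= 0.0:
            # linear interpolation between the two samples bracketing the crossing
            w = bipolar[i - 1] / (bipolar[i - 1] - bipolar[i])
            return t[i - 1] + w * (t[i] - t[i - 1])
    return None

# Two pulses of different amplitude but identical shape give (nearly) the same CFD time.
t = np.linspace(0.0, 100e-9, 1001)
shape = (t / 10e-9) * np.exp(1.0 - t / 10e-9)
print(cfd_time(t, 1.0 * shape), cfd_time(t, 5.0 * shape))
```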
The Effect of Barotropic and Baroclinic Tides on Coastal Stratification and Mixing
NASA Astrophysics Data System (ADS)
Suanda, S. H.; Feddersen, F.; Kumar, N.
2017-12-01
The effects of barotropic and baroclinic tides on subtidal stratification and vertical mixing are examined with high-resolution, three-dimensional numerical simulations of the Central Californian coastal upwelling region. A base simulation with realistic atmospheric and regional-scale boundary forcing but no tides (NT) is compared to two simulations with the addition of predominantly barotropic local tides (LT) and with combined barotropic and remotely generated, baroclinic tides (WT) with ≈100 W m⁻¹ onshore baroclinic energy flux. During a 10 day period of coastal upwelling when the domain volume-averaged temperature is similar in all three simulations, LT has little difference in subtidal temperature and stratification compared to NT. In contrast, the addition of remote baroclinic tides (WT) reduces the subtidal continental shelf stratification up to 50% relative to NT. Idealized simulations to isolate barotropic and baroclinic effects demonstrate that within a parameter space of typical U.S. West Coast continental shelf slopes, barotropic tidal currents, incident energy flux, and subtidal stratification, the dissipating baroclinic tide destroys stratification an order of magnitude faster than barotropic tides. In WT, the modeled vertical temperature diffusivity at the top (base) of the bottom (surface) boundary layer is increased up to 20 times relative to NT. Therefore, the width of the inner-shelf (region of surface and bottom boundary layer overlap) is increased approximately 4 times relative to NT. The change in stratification due to dissipating baroclinic tides is comparable to the magnitude of the observed seasonal cycle of stratification.
NASA Technical Reports Server (NTRS)
Druyan, Leonard M.; Fulakeza, Matthew B.
2014-01-01
The Atlantic cold tongue (ACT) develops during spring and early summer near the Equator in the Eastern Atlantic Ocean and Gulf of Guinea. The hypothesis that the ACT accelerates the timing of West African monsoon (WAM) onset is tested by comparing two regional climate model (RM3) simulation ensembles. Observed sea surface temperatures (SST) that include the ACT are used to force a control ensemble. An idealized, warm SST perturbation is designed to represent lower boundary forcing without the ACT for the experiment ensemble. Summer simulations forced by observed SST and reanalysis boundary conditions for each of five consecutive years are compared to five parallel runs forced by SST with the warm perturbation. The article summarizes the sequence of events leading to the onset of the WAM in the Sahel region. The representation of WAM onset in RM3 simulations is examined and compared to Tropical Rainfall Measuring Mission (TRMM), Global Precipitation Climatology Project (GPCP) and reanalysis data. The study evaluates the sensitivity of WAM onset indicators to the presence of the ACT by analysing the differences between the two simulation ensembles. Results show that the timing of major rainfall events and therefore the WAM onset in the Sahel are not sensitive to the presence of the ACT. However, the warm SST perturbation does increase downstream rainfall rates over West Africa as a consequence of enhanced specific humidity and enhanced northward moisture flux in the lower troposphere.
Fully kinetic simulations of dense plasma focus Z-pinch devices.
Schmidt, A; Tang, V; Welch, D
2012-11-16
Dense plasma focus Z-pinch devices are sources of copious high energy electrons and ions, x rays, and neutrons. The mechanisms through which these physically simple devices generate such high-energy beams in a relatively short distance are not fully understood. We now have, for the first time, demonstrated a capability to model these plasmas fully kinetically, allowing us to simulate the pinch process at the particle scale. We present here the results of the initial kinetic simulations, which reproduce experimental neutron yields (~10⁷) and high-energy (MeV) beams for the first time. We compare our fluid, hybrid (kinetic ions and fluid electrons), and fully kinetic simulations. Fluid simulations predict no neutrons and do not allow for nonthermal ions, while hybrid simulations underpredict neutron yield by ~100x and exhibit an ion tail that does not exceed 200 keV. Only fully kinetic simulations predict MeV-energy ions and experimental neutron yields. A frequency analysis in a fully kinetic simulation shows plasma fluctuations near the lower hybrid frequency, possibly implicating lower hybrid drift instability as a contributor to anomalous resistivity in the plasma.
Use of a Novel Airway Kit and Simulation in Resident Training on Emergent Pediatric Airways.
Melzer, Jonathan M; Hamersley, Erin R S; Gallagher, Thomas Q
2017-06-01
Objective Development of a novel pediatric airway kit and implementation with simulation to improve resident response to emergencies with the goal of improving patient safety. Methods Prospective study with 9 otolaryngology residents (postgraduate years 1-5) from our tertiary care institution. Nine simulated pediatric emergency airway drills were carried out with the existing system and a novel portable airway kit. Response times and time to successful airway control were noted with both the extant airway system and the new handheld kit. Results were analyzed to ensure parametric data and compared with t tests. A Bonferroni adjustment indicated that an alpha of 0.025 was needed for significance. Results Use of the airway kit significantly reduced the mean time of resident arrival by 47% (P = .013) and mean time of successful intubation by 50% (P = .007). Survey data indicated 100% improved resident comfort with emergent airway scenarios with use of the kit. Discussion Times to response and meaningful intervention were significantly reduced with implementation of the handheld airway kit. Use of simulation training to implement the new kit improved residents' comfort and airway skills. This study describes an affordable novel mobile airway kit and demonstrates its ability to improve response times. Implications for Practice The low cost of this airway kit makes it a tenable option even for smaller hospitals. Simulation provides a safe and effective way to familiarize oneself with novel equipment, and, when possible, realistic emergent airway simulations should be used to improve provider performance.
NASA Astrophysics Data System (ADS)
Telban, Robert J.
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
Simulations of snow distribution and hydrology in a mountain basin
Hartman, Melannie D.; Baron, Jill S.; Lammers, Richard B.; Cline, Donald W.; Band, Larry E.; Liston, Glen E.; Tague, Christina L.
1999-01-01
We applied a version of the Regional Hydro-Ecologic Simulation System (RHESSys) that implements snow redistribution, elevation partitioning, and wind-driven sublimation to Loch Vale Watershed (LVWS), an alpine-subalpine Rocky Mountain catchment where snow accumulation and ablation dominate the hydrologic cycle. We compared simulated discharge to measured discharge and the simulated snow distribution to photogrammetrically rectified aerial (remotely sensed) images. Snow redistribution was governed by a topographic similarity index. We subdivided each hillslope into elevation bands that had homogeneous climate extrapolated from observed climate. We created a distributed wind speed field that was used in conjunction with daily measured wind speeds to estimate sublimation. Modeling snow redistribution was critical to estimating the timing and magnitude of discharge. Incorporating elevation partitioning improved estimated timing of discharge but did not improve patterns of snow cover since wind was the dominant controller of areal snow patterns. Simulating wind-driven sublimation was necessary to predict moisture losses.
Simulations of 6-DOF Motion with a Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)
2003-01-01
Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographic-derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.
A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O. S.; Callahan, D. A.; Cerjan, C. J.
A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.
Towards an Integrated Model of the NIC Layered Implosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O S; Callahan, D A; Cerjan, C J
A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-45% of the calculated yields.
Barrett, Jeffrey S; Jayaraman, Bhuvana; Patel, Dimple; Skolnik, Jeffrey M
2008-06-01
Previous exploration of oncology study design efficiency has focused on Markov processes alone (probability-based events) without consideration for time dependencies. Barriers to study completion include time delays associated with patient accrual, inevaluability (IE), time to dose limiting toxicities (DLT) and administrative and review time. Discrete event simulation (DES) can incorporate probability-based assignment of DLT and IE frequency, correlated with cohort in the case of DLT, with time-based events defined by stochastic relationships. A SAS-based solution to examine study efficiency metrics and evaluate design modifications that would improve study efficiency is presented. Virtual patients are simulated with attributes defined from prior distributions of relevant patient characteristics. Study population datasets are read into SAS macros which select patients and enroll them into a study based on the specific design criteria if the study is open to enrollment. Waiting times, arrival times and time to study events are also sampled from prior distributions; post-processing of study simulations is provided within the decision macros and compared across designs in a separate post-processing algorithm. This solution is examined via comparison of the standard 3+3 decision rule relative to the "rolling 6" design, a newly proposed enrollment strategy for the phase I pediatric oncology setting.
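The authors implemented the simulation as SAS macros; the following Python sketch conveys the same discrete-event idea for a 3+3-style cohort design, sampling accrual gaps, evaluability, and DLT outcomes to estimate total study duration. All rates and probabilities are illustrative assumptions, and the escalation logic is deliberately simplified.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_trial(accrual_rate=2.0, p_inevaluable=0.1, p_dlt=0.2,
                   cohort_size=3, max_cohorts=6, eval_window_days=28.0):
    """Crude discrete-event sketch of a 3+3-style dose-escalation study.

    Patients arrive with exponential inter-arrival gaps (accrual_rate per month),
    each is inevaluable with probability p_inevaluable, and evaluable patients
    experience a DLT with probability p_dlt. Returns total study time (months),
    assuming each cohort must be fully observed before the next one opens.
    """
    t = 0.0
    for _ in range(max_cohorts):
        enrolled = 0
        while enrolled < cohort_size:
            t += rng.exponential(1.0 / accrual_rate)    # wait for next patient
            if rng.random() > p_inevaluable:
                enrolled += 1                            # evaluable patient
        t += eval_window_days / 30.0                     # observe cohort for DLTs
        dlts = rng.binomial(cohort_size, p_dlt)
        if dlts >= 2:                                    # stop escalation
            break
    return t

durations = [simulate_trial() for _ in range(1000)]
print(np.mean(durations), np.percentile(durations, 90))
```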
Simulation of Assembly Line Balancing in Automotive Component Manufacturing
NASA Astrophysics Data System (ADS)
Jamil, Muthanna; Mohd Razali, Noraini
2016-02-01
This study focuses on the simulation of assembly line balancing for an automotive component at a vendor manufacturing company. A mixed-model assembly line for a charcoal canister product, used in an engine system as a fuel vapour filter, was observed, and it was found that the current production rate of the line does not meet customer demand even though the company keeps two days of buffer stock in advance. The study was carried out by performing detailed process flow and time studies along the line. To set up a simulation model of the line, real data were taken from the factory floor and tested for distribution fit. The data gathered were then transformed into a simulation model. After verification of the model by comparing it with the actual system, it was found that the current line efficiency is not at its optimum due to blockage and idle time. Various what-if analyses were applied to eliminate the causes. The proposed layout shows that the line is balanced by adding a buffer to avoid the blockage, while manpower is added to the stations to reduce process time and thereby reduce idle time. The simulation study was carried out using ProModel software.
Fully kinetic 3D simulations of the Hermean magnetosphere under realistic conditions: a new approach
NASA Astrophysics Data System (ADS)
Amaya, Jorge; Gonzalez-Herrero, Diego; Lembège, Bertrand; Lapenta, Giovanni
2017-04-01
Simulations of the magnetosphere of planets are usually performed using the MHD and the hybrid approaches. However, these two methods still rely on approximations for the computation of the pressure tensor, and require the neutrality of the plasma at every point of the domain by construction. These approximations undermine the role of electrons on the emergence of plasma features in the magnetosphere of planets. The high mobility of electrons, their characteristic time and space scales, and the lack of perfect neutrality, are the source of many observed phenomena in the magnetospheres, including the turbulence energy cascade, the magnetic reconnection, the particle acceleration in the shock front and the formation of current systems around the magnetosphere. Fully kinetic codes are extremely demanding of computing time, and have been unable to perform simulations of the full magnetosphere at the real scales of a planet with realistic plasma conditions. This is caused by two main reasons: 1) explicit codes must resolve the electron scales limiting the time and space discretisation, and 2) current versions of semi-implicit codes are unstable for cell sizes larger than a few Debye lengths. In this work we present new simulations performed with ECsim, an Energy Conserving semi-implicit method [1], that can overcome these two barriers. We compare the solutions obtained with ECsim with the solutions obtained by the classic semi-implicit code iPic3D [2]. The new simulations with ECsim demand a larger computational effort, but the time and space discretisations are larger than those in iPic3D allowing for a faster simulation time of the full planetary environment. The new code, ECsim, can reach a resolution allowing the capture of significant large scale physics without losing kinetic electron information, such as wave-electron interaction and non-Maxwellian electron velocity distributions [3]. The code is able to better capture the thickness of the different boundary layers of the magnetosphere of Mercury. Electron kinetics are consistent with the spatial and temporal scale resolutions. Simulations are compared with measurements from the MESSENGER spacecraft showing a better fit when compared against the classic fully kinetic code iPic3D. These results show that the new generation of Energy Conserving semi implicit codes can be used for an accurate analysis and interpretation of particle data from magnetospheric missions like BepiColombo and MMS, including electron velocity distributions and electron temperature anisotropies. [1] Lapenta, G. (2016). Exactly Energy Conserving Implicit Moment Particle in Cell Formulation. arXiv preprint arXiv:1602.06326. [2] Markidis, S., & Lapenta, G. (2010). Multi-scale simulations of plasma with iPIC3D. Mathematics and Computers in Simulation, 80(7), 1509-1519. [3] Lapenta, G., Gonzalez-Herrero, D., & Boella, E. (2016). Multiple scale kinetic simulations with the energy conserving semi implicit particle in cell (PIC) method. arXiv preprint arXiv:1612.08289.
Evaluation of Three Models for Simulating Pesticide Runoff from Irrigated Agricultural Fields.
Zhang, Xuyang; Goh, Kean S
2015-11-01
Three models were evaluated for their accuracy in simulating pesticide runoff at the edge of agricultural fields: Pesticide Root Zone Model (PRZM), Root Zone Water Quality Model (RZWQM), and OpusCZ. Modeling results on runoff volume, sediment erosion, and pesticide loss were compared with measurements taken from field studies. Models were also compared on their theoretical foundations and ease of use. For runoff events generated by sprinkler irrigation and rainfall, all models performed equally well with small errors in simulating water, sediment, and pesticide runoff. The mean absolute percentage errors (MAPEs) were between 3 and 161%. For flood irrigation, OpusCZ simulated runoff and pesticide mass with the highest accuracy, followed by RZWQM and PRZM, likely owing to its unique hydrological algorithm for runoff simulations during flood irrigation. Simulation results from cold model runs by OpusCZ and RZWQM using measured values for model inputs matched closely to the observed values. The MAPE ranged from 28 to 384 and 42 to 168% for OpusCZ and RZWQM, respectively. These satisfactory model outputs showed the models' abilities in mimicking reality. Theoretical evaluations indicated that OpusCZ and RZWQM use mechanistic approaches for hydrology simulation, output data on a subdaily time step, and were able to simulate management practices and subsurface flow via tile drainage. In contrast, PRZM operates at a daily time step and simulates surface runoff using the USDA Soil Conservation Service's curve number method. Among the three models, OpusCZ and RZWQM were suitable for simulating pesticide runoff in semiarid areas where agriculture is heavily dependent on irrigation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
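For reference, the Soil Conservation Service curve number relation used by PRZM for event runoff can be written in a few lines; the curve number and storm depth below are illustrative values, not taken from the study.

```python
def scs_runoff(precip_in, curve_number):
    """SCS curve number runoff depth (inches) from a storm depth (inches).

    S  = 1000/CN - 10 is the potential maximum retention;
    Ia = 0.2*S is the conventional initial abstraction.
    """
    S = 1000.0 / curve_number - 10.0
    Ia = 0.2 * S
    if precip_in <= Ia:
        return 0.0
    return (precip_in - Ia) ** 2 / (precip_in - Ia + S)

# A 2-inch storm on a field with curve number 85 (illustrative).
print(scs_runoff(2.0, 85))
```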
Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher
2017-09-01
Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method is proposed which is based on the dual Kalman filter. In this method, active regions are first extracted by applying a brain source localization method (standardized low resolution brain electromagnetic tomography) to the EEG signal, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate the activity and the time dependence between sources. Then, the dual Kalman filter is used to estimate the model parameters, i.e. the effective connectivity between active regions. The advantage of this method is that the activity of different brain regions is estimated simultaneously with the calculation of the effective connectivity between them. By combining the dual Kalman filter with brain source localization methods, the source activity is updated over time in addition to the connectivity estimation. The performance of the proposed method has been evaluated first by applying it to simulated EEG signals with simulated interacting connectivity between active regions. Noisy simulated signals with different signal-to-noise ratios are used to evaluate the method's sensitivity to noise and to compare its performance with other methods. Then the method is applied to real signals and the estimation error over a sweeping window is calculated. Comparing the results across the different settings (simulated and real signals), the proposed method gives acceptable results with the least mean square error under noisy and real conditions.
Influence of wheel-rail contact modelling on vehicle dynamic simulation
NASA Astrophysics Data System (ADS)
Burgelman, Nico; Sichani, Matin Sh.; Enblom, Roger; Berg, Mats; Li, Zili; Dollevoet, Rolf
2015-08-01
This paper presents a comparison of four models of rolling contact used for online contact force evaluation in rail vehicle dynamics. Until now only a few wheel-rail contact models have been used for online simulation in multibody software (MBS). Many more models exist and their behaviour has been studied offline, but a comparative study of the mutual influence between the calculation of the creep forces and the simulated vehicle dynamics seems to be missing. Such a comparison would help researchers with the assessment of accuracy and calculation time. The contact methods investigated in this paper are FASTSIM, Linder, Kik-Piotrowski and Stripes. They are compared through a coupling between an MBS for the vehicle simulation and Matlab for the contact models. This way the influence of the creep force calculation on the vehicle simulation is investigated. More specifically this study focuses on the influence of the contact model on the simulation of the hunting motion and on the curving behaviour.
Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.
2008-06-01
An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near equilibrium flows (DREAM-I) or output instantaneous particle data obtained by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single processor code and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
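The ensemble-averaging idea behind DREAM (shown here as a generic sketch, not the PDSC post-processor) is simply to repeat restarted short runs and average the sampled fields, reducing the scatter roughly as the square root of the number of runs; the "flow" below is a stand-in profile with synthetic noise.

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_flow_sample(n_cells=50, noise=0.2):
    """Stand-in for one unsteady DSMC sampling interval: true profile plus scatter."""
    x = np.linspace(0.0, 1.0, n_cells)
    truth = np.tanh(10.0 * (x - 0.5))            # e.g. a shock-like density jump
    return truth + noise * rng.standard_normal(n_cells)

def ensemble_average(n_runs=10):
    """Average repeated restarted runs, as DREAM-style post-processing does."""
    return np.mean([noisy_flow_sample() for _ in range(n_runs)], axis=0)

truth = np.tanh(10.0 * (np.linspace(0.0, 1.0, 50) - 0.5))
single = noisy_flow_sample()
averaged = ensemble_average(10)
# scatter of a single run versus a 10-run ensemble (expect roughly a factor ~3 reduction)
print(np.std(single - truth), np.std(averaged - truth))
```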
Accelerated Monte Carlo Simulation on the Chemical Stage in Water Radiolysis using GPU
Tian, Zhen; Jiang, Steve B.; Jia, Xun
2018-01-01
The accurate simulation of water radiolysis is an important step to understand the mechanisms of radiobiology and quantitatively test some hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time consuming, taking hours or even days to be completed by a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutual competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an Nvidia Titan black GPU card, successfully completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. As compared with Geant4-DNA that was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2. PMID:28323637
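The chemical stage is commonly modelled as Brownian diffusion of the radiolytic species punctuated by reactions whenever two partners approach within a reaction radius. The serial Python sketch below illustrates that idea only; it is not the gMicroMC GPU kernel, and the diffusion coefficient, time step, and reaction radius are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def chemical_stage_step(positions, D, dt, reaction_radius):
    """One diffusion + reaction step for a single self-reacting species (e.g. OH + OH).

    positions       : (N, 3) array of radical positions (nm)
    D               : diffusion coefficient (nm^2/ns)
    dt              : time step (ns)
    reaction_radius : distance below which a pair reacts and is removed (nm)
    """
    # Brownian displacement
    positions = positions + rng.normal(0.0, np.sqrt(2.0 * D * dt), positions.shape)
    # pairwise reaction check (O(N^2); real codes use neighbor lists / GPU threads)
    alive = np.ones(len(positions), dtype=bool)
    for i in range(len(positions)):
        if not alive[i]:
            continue
        for j in range(i + 1, len(positions)):
            if alive[j] and np.linalg.norm(positions[i] - positions[j]) < reaction_radius:
                alive[i] = alive[j] = False
                break
    return positions[alive]

# 100 radicals in a 20 nm box, followed for 10 steps of 0.1 ns (all values illustrative).
pos = rng.uniform(0.0, 20.0, (100, 3))
for _ in range(10):
    pos = chemical_stage_step(pos, D=2.8, dt=0.1, reaction_radius=0.5)
print(len(pos), "radicals remaining")
```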
Real-time aerodynamic heating and surface temperature calculations for hypersonic flight simulation
NASA Technical Reports Server (NTRS)
Quinn, Robert D.; Gong, Leslie
1990-01-01
A real-time heating algorithm was derived and installed on the Ames Research Center Dryden Flight Research Facility real-time flight simulator. This program can calculate two- and three-dimensional stagnation point surface heating rates and surface temperatures. The two-dimensional calculations can be made with or without leading-edge sweep. In addition, upper and lower surface heating rates and surface temperatures for flat plates, wedges, and cones can be calculated. Laminar or turbulent heating can be calculated, with boundary-layer transition made a function of free-stream Reynolds number and free-stream Mach number. Real-time heating rates and surface temperatures calculated for a generic hypersonic vehicle are presented and compared with more exact values computed by a batch aeroheating program. As these comparisons show, the heating algorithm used on the flight simulator calculates surface heating rates and temperatures well within the accuracy required to evaluate flight profiles for acceptable heating trajectories.
Toward transient finite element simulation of thermal deformation of machine tools in real-time
NASA Astrophysics Data System (ADS)
Naumann, Andreas; Ruprecht, Daniel; Wensch, Joerg
2018-01-01
Finite element models without simplifying assumptions can accurately describe the spatial and temporal distribution of heat in machine tools as well as the resulting deformation. In principle, this allows to correct for displacements of the Tool Centre Point and enables high precision manufacturing. However, the computational cost of FE models and restriction to generic algorithms in commercial tools like ANSYS prevents their operational use since simulations have to run faster than real-time. For the case where heat diffusion is slow compared to machine movement, we introduce a tailored implicit-explicit multi-rate time stepping method of higher order based on spectral deferred corrections. Using the open-source FEM library DUNE, we show that fully coupled simulations of the temperature field are possible in real-time for a machine consisting of a stock sliding up and down on rails attached to a stand.
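The implicit-explicit idea (without the multi-rate spectral deferred correction machinery described in the paper) can be sketched on a one-dimensional heat equation with a moving heat source: diffusion is advanced implicitly while the fast source term is treated explicitly. The discretization and parameters below are illustrative assumptions, not the authors' machine-tool model.

```python
import numpy as np

def imex_heat(n=50, dt=1e-3, steps=1000, alpha=1.0):
    """First-order IMEX Euler for u_t = alpha*u_xx + f(t, x): diffusion is treated
    implicitly (stiff), the rapidly moving heat source explicitly."""
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    # second-difference Laplacian and implicit operator (I - dt*alpha*L)
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    A = np.eye(n) - dt * alpha * L
    u = np.zeros(n)
    for k in range(steps):
        t = k * dt
        # moving Gaussian heat source, a crude stand-in for a travelling tool
        source = np.exp(-((x - (0.2 + 0.6 * (t % 1.0))) ** 2) / 0.005)
        u = np.linalg.solve(A, u + dt * source)   # implicit diffusion, explicit source
        u[0] = u[-1] = 0.0                        # clamp boundary temperatures
    return x, u

x, u = imex_heat()
print(float(u.max()))
```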
Jahanian, Hesamoddin; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gholam-Ali
2005-09-01
To present novel feature spaces, based on multiscale decompositions obtained by scalar wavelet and multiwavelet transforms, to remedy problems associated with high dimension of functional magnetic resonance imaging (fMRI) time series (when they are used directly in clustering algorithms) and their poor signal-to-noise ratio (SNR) that limits accurate classification of fMRI time series according to their activation contents. Using randomization, the proposed method finds wavelet/multiwavelet coefficients that represent the activation content of fMRI time series and combines them to define new feature spaces. Using simulated and experimental fMRI data sets, the proposed feature spaces are compared to the cross-correlation (CC) feature space and their performances are evaluated. In these studies, the false positive detection rate is controlled using randomization. To compare different methods, several points of the receiver operating characteristics (ROC) curves, using simulated data, are estimated and compared. The proposed features suppress the effects of confounding signals and improve activation detection sensitivity. Experimental results show improved sensitivity and robustness of the proposed method compared to the conventional CC analysis. More accurate and sensitive activation detection can be achieved using the proposed feature spaces compared to CC feature space. Multiwavelet features show superior detection sensitivity compared to the scalar wavelet features. (c) 2005 Wiley-Liss, Inc.
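A single-wavelet version of the feature construction can be sketched with a plain NumPy Haar transform (the authors' multiwavelet decomposition and randomization-based coefficient selection are not reproduced here); coarse-scale coefficients of the time series would then serve as candidate activation features.

```python
import numpy as np

def haar_dwt(signal, levels=3):
    """Multi-level Haar wavelet transform of a 1D time series.

    Returns a list of detail-coefficient arrays (fine to coarse) plus the final
    approximation; the length must be divisible by 2**levels.
    """
    x = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = x[0::2], x[1::2]
        details.append((even - odd) / np.sqrt(2.0))   # detail coefficients
        x = (even + odd) / np.sqrt(2.0)               # approximation for next level
    return details, x

# A toy fMRI-like series: slow block "activation" plus noise.
t = np.arange(64)
series = 0.5 * ((t // 16) % 2) + 0.2 * np.random.default_rng(5).standard_normal(64)
details, approx = haar_dwt(series, levels=3)
print([d.shape[0] for d in details], approx.shape[0])
```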
Camp, Christopher L; Krych, Aaron J; Stuart, Michael J; Regnier, Terry D; Mills, Karen M; Turner, Norman S
2016-02-03
Cadaveric skills laboratories and virtual reality simulators are two common methods used outside of the operating room to improve residents' performance of knee arthroscopy. We are not aware of any head-to-head comparisons of the educational values of these two methodologies. The purpose of this prospective randomized trial was to assess the efficacy of these training methods, compare their rates of improvement, and provide economic value data to programs seeking to implement such technologies. Orthopaedic surgery residents were randomized to one of three groups: control, training on cadavera (cadaver group), and training with use of a simulator (simulator group). Residents completed pretest and posttest diagnostic knee arthroscopies on cadavera that were timed and video-recorded. Between the pretest and posttest, the control group performed no arthroscopy, the cadaver group performed four hours of practice on cadavera, and the simulator group trained for four hours on a simulator. All tests were scored in a blinded, randomized fashion using the validated Arthroscopy Surgical Skill Evaluation Tool (ASSET). The mean improvement in the ASSET score and in the time to complete the procedure were compared between the pretest and posttest and among the groups. Forty-five residents (fifteen per group) completed the study. The mean difference in the ASSET score from the pretest to the posttest was -0.40 (p = 0.776) in the control group, +4.27 (p = 0.002) in the cadaver group, and +1.92 (p = 0.096) in the simulator group (p = 0.015 for the comparison among the groups). The mean difference in the test-completion time (minutes:seconds) from the pretest to the posttest was 0:07 (p = 0.902) in the control group, 3:01 (p = 0.002) in the cadaver group, and 0:28 (p = 0.708) in the simulator group (p = 0.044 for the comparison among groups). Residents in the cadaver group improved their performance at a mean of 1.1 ASSET points per hour spent training whereas those in the simulator group improved 0.5 ASSET point per hour of training. Cadaveric skills laboratories improved residents' performance of knee arthroscopy compared with that of matched controls. Residents practicing on cadaveric specimens improved twice as fast as those utilizing a high-fidelity simulator; however, based on cost estimation specific to our institution, the simulator may be more cost-effective if it is used at least 300 hours per year. Additional study of this possibility is warranted. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.
Efficiencies of joint non-local update moves in Monte Carlo simulations of coarse-grained polymers
NASA Astrophysics Data System (ADS)
Austin, Kieran S.; Marenz, Martin; Janke, Wolfhard
2018-03-01
In this study four update methods are compared in their performance in a Monte Carlo simulation of polymers in continuum space. The efficiencies of the update methods and combinations thereof are compared with the aid of the autocorrelation time with a fixed (optimal) acceptance ratio. Results are obtained for polymer lengths N = 14, 28 and 42 and temperatures below, at and above the collapse transition. In terms of autocorrelation, the optimal acceptance ratio is approximately 0.4. Furthermore, an overview of the step sizes of the update methods that correspond to this optimal acceptance ratio is given. This shall serve as a guide for future studies that rely on efficient computer simulations.
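The autocorrelation-time metric used to rank the update moves can be estimated with a short routine like the following; this is a generic sketch (the windowing constant, test series, and function names are choices of this illustration, not the authors' code).

```python
import numpy as np

def normalized_acf(x):
    """FFT-based normalized autocorrelation function of a 1-D series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, 2 * n)                # zero-pad to avoid circular wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / acf[0]

def integrated_autocorrelation_time(x, c=6.0):
    """Integrated autocorrelation time with a simple self-consistent window."""
    rho = normalized_acf(x)
    tau = 1.0
    for window in range(1, len(rho)):
        tau = 1.0 + 2.0 * rho[1:window + 1].sum()
        if window >= c * tau:                # stop once the window exceeds c*tau
            break
    return max(tau, 1.0)

# Toy check on an AR(1) series whose integrated time is (1+phi)/(1-phi) = 19.
rng = np.random.default_rng(0)
phi, x = 0.9, [0.0]
for _ in range(20_000):
    x.append(phi * x[-1] + rng.normal())
print(f"estimated tau_int ~ {integrated_autocorrelation_time(x):.1f} (exact: 19)")
```

Comparing the estimated tau for the same observable under different update moves, at a matched acceptance ratio, reproduces the kind of efficiency ranking reported above.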
SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output
Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.
2011-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output

Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.
2013-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136
A large-signal dynamic simulation for the series resonant converter
NASA Technical Reports Server (NTRS)
King, R. J.; Stuart, T. A.
1983-01-01
A simple nonlinear discrete-time dynamic model for the series resonant dc-dc converter is derived using approximations appropriate to most power converters. This model is useful for the dynamic simulation of a series resonant converter using only a desktop calculator. The model is compared with a laboratory converter for a large transient event.
Molecular dynamics in principal component space.
Michielssens, Servaas; van Erp, Titus S; Kutzner, Carsten; Ceulemans, Arnout; de Groot, Bert L
2012-07-26
A molecular dynamics algorithm in principal component space is presented. It is demonstrated that sampling can be improved without changing the ensemble by assigning masses to the principal components proportional to the inverse square root of the eigenvalues. The setup of the simulation requires no prior knowledge of the system; a short initial MD simulation to extract the eigenvectors and eigenvalues suffices. Independent measures indicated a 6-7 times faster sampling compared to a regular molecular dynamics simulation.
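A minimal numpy sketch of the mass-assignment step described above, assuming the covariance matrix of the short initial MD run is already available; the overall mass normalisation and the toy trajectory are assumptions of this illustration.

```python
import numpy as np

def pc_masses(covariance, reference_mass=1.0):
    """Assign a fictitious mass to each principal component.

    Masses are taken proportional to 1/sqrt(eigenvalue), so soft (large-
    eigenvalue) modes become light and are sampled faster while the ensemble
    is preserved. The scale is fixed by giving the softest mode `reference_mass`.
    """
    eigvals, eigvecs = np.linalg.eigh(covariance)   # ascending eigenvalues
    eigvals = np.clip(eigvals, 1e-12, None)         # guard tiny/negative values
    masses = 1.0 / np.sqrt(eigvals)
    masses *= reference_mass / masses[-1]           # softest mode -> reference mass
    return masses, eigvecs

# Toy covariance from a fake 3-coordinate "trajectory".
rng = np.random.default_rng(2)
traj = rng.normal(size=(1000, 3)) * np.array([3.0, 1.0, 0.2])
cov = np.cov(traj, rowvar=False)
masses, modes = pc_masses(cov)
print("principal-component masses:", np.round(masses, 3))
```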
Using Monte Carlo Simulation to Prioritize Key Maritime Environmental Impacts of Port Infrastructure
NASA Astrophysics Data System (ADS)
Perez Lespier, L. M.; Long, S.; Shoberg, T.
2016-12-01
This study creates a Monte Carlo simulation model to prioritize key indicators of environmental impacts resulting from maritime port infrastructure. Data inputs are derived from LandSat imagery, government databases, and industry reports to create the simulation. Results are validated using subject matter experts and compared with those returned from time-series regression to determine goodness of fit. The Port of Prince Rupert, Canada is used as the location for the study.
Energy consumption during simulated minimal access surgery with and without using an armrest.
Jafri, Mansoor; Brown, Stuart; Arnold, Graham; Abboud, Rami; Wang, Weijie
2013-03-01
Minimal access surgery (MAS) can be a lengthy procedure compared to open surgery, so surgeon fatigue becomes an important issue: surgeons may expose themselves to chronic injuries and may make errors. The few studies on this topic have used only questionnaires and electromyography rather than direct measurement of energy expenditure (EE). The aim of this study was to investigate whether the use of an armrest could reduce the EE of surgeons during MAS. Sixteen surgeons performed simulated MAS with and without using an armrest. They were required to perform the time-consuming task of using scissors to cut a rubber glove through its top layer in a triangular fashion with the help of a laparoscopic camera. Energy consumption was measured using the Oxycon Mobile system during all the procedures. The error rate and duration of the simulated surgery were recorded. After performing the simulated surgery, subjects scored how comfortable they felt using the armrest. Oxygen uptake (VO2) was 5% lower when surgeons used the armrest. The error rate when performing the procedure with the armrest was 35%, compared with 42.29% without the armrest. Additionally, comfort levels with the armrest were higher than without it, and 75% of surgeons indicated a preference for using the armrest during the simulated surgery. The armrest provides support for surgeons and cuts energy consumption during simulated MAS.
Sea-ice deformation in a coupled ocean-sea-ice model and in satellite remote sensing data
NASA Astrophysics Data System (ADS)
Spreen, Gunnar; Kwok, Ron; Menemenlis, Dimitris; Nguyen, An T.
2017-07-01
A realistic representation of sea-ice deformation in models is important for accurate simulation of the sea-ice mass balance. Simulated sea-ice deformation from numerical simulations with 4.5, 9, and 18 km horizontal grid spacing and a viscous-plastic (VP) sea-ice rheology are compared with synthetic aperture radar (SAR) satellite observations (RGPS, RADARSAT Geophysical Processor System) for the time period 1996-2008. All three simulations can reproduce the large-scale ice deformation patterns, but small-scale sea-ice deformations and linear kinematic features (LKFs) are not adequately reproduced. The mean sea-ice total deformation rate is about 40 % lower in all model solutions than in the satellite observations, especially in the seasonal sea-ice zone. A decrease in model grid spacing, however, produces a higher density and more localized ice deformation features. The 4.5 km simulation produces some linear kinematic features, but not with the right frequency. The dependence on length scale and probability density functions (PDFs) of absolute divergence and shear for all three model solutions show a power-law scaling behavior similar to RGPS observations, contrary to what was found in some previous studies. Overall, the 4.5 km simulation produces the most realistic divergence, vorticity, and shear when compared with RGPS data. This study provides an evaluation of high and coarse-resolution viscous-plastic sea-ice simulations based on spatial distribution, time series, and power-law scaling metrics.
Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters
NASA Astrophysics Data System (ADS)
Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana
2016-02-01
This paper studies discrete event simulation (DES) as a computer-based modelling approach that imitates the real operation of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar, in Perlis, Malaysia. The model inputs are based on statistical data collected over 20 working days in June 2014. Currently, the patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing system described by an M/M/2 model, with the pharmacists acting as the servers. A DES approach with the ProModel simulation software is used to simulate the queuing model and to propose improvements to the queuing system of this pharmacy. The waiting time for each server is analysed; Counters 3 and 4 have the highest waiting times, 16.98 and 16.73 minutes respectively. Three scenarios (M/M/3, M/M/4 and M/M/5) are simulated, and the waiting times for the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time appreciably: average patient waiting time falls by almost 50% when one pharmacist is added to the counters. However, it is not necessary to utilize all counters, because even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
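For a quick analytical cross-check of scenarios like these, the Erlang-C formula gives the steady-state mean queue waiting time of an M/M/c system; the sketch below uses placeholder arrival and service rates, not the hospital's measured values.

```python
import math

def erlang_c_wait(lam, mu, c):
    """Mean waiting time in queue (Wq) for an M/M/c system via the Erlang-C formula."""
    a = lam / mu                       # offered load in Erlangs
    if a >= c:
        return math.inf                # unstable: not enough servers
    p0_terms = sum(a**k / math.factorial(k) for k in range(c))
    tail = a**c / math.factorial(c) * c / (c - a)
    p_wait = tail / (p0_terms + tail)  # probability an arrival must wait
    return p_wait / (c * mu - lam)

lam, mu = 0.9, 0.5                     # placeholder rates (arrivals/min, served/min per counter)
for c in (2, 3, 4, 5):
    print(c, "servers -> Wq =", round(erlang_c_wait(lam, mu, c), 2), "min")
```

A DES study makes the same comparison with simulated waiting times, which also capture non-steady-state behaviour such as the counter schedule and shift changes.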
Shot Peening Numerical Simulation of Aircraft Aluminum Alloy Structure
NASA Astrophysics Data System (ADS)
Liu, Yong; Lv, Sheng-Li; Zhang, Wei
2018-03-01
After shot peening, 7050 aluminum alloy shows good fatigue and stress-corrosion resistance. In the shot peening process, pellets collide with the target material randomly and generate a residual stress distribution on its surface, which is of great significance for improving material properties. In this paper, a simplified numerical simulation model of shot peening was established. The influence of pellet collision velocity, collision position and collision time interval on the residual stress from shot peening was studied through simulations with the ANSYS/LS-DYNA software. The analysis results show that velocity, position and time interval all have a strong influence on the residual stress after shot peening. The accuracy of the simulation results in this paper was verified by comparison with numerical simulation results based on a Kriging model. This study provides a reference for optimizing the shot peening process and is an effective step toward precise numerical simulation of shot peening.
On the upscaling of process-based models in deltaic applications
NASA Astrophysics Data System (ADS)
Li, L.; Storms, J. E. A.; Walstra, D. J. R.
2018-03-01
Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing the morphological indicators such as distributary channel networking and delta volumes derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm added noise. The overall results show that the variability of the accelerated models fall within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.
Evaluating Real-Time Platforms for Aircraft Prognostic Health Management Using Hardware-In-The-Loop
2008-08-01
obtained when using HIL and a simulated load. Initially, noticeable differences are seen when comparing the results from each real-time operating system. However...same model in native Simulink. These results show that each real-time operating system can be configured to accurately run transient Simulink
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rehagen, Thomas J.; Greenough, Jeffrey A.; Olson, Britton J.
In this paper, the compressible Rayleigh–Taylor (RT) instability is studied by performing a suite of large eddy simulations (LES) using the Miranda and Ares codes. A grid convergence study is carried out for each of these computational methods, and the convergence properties of integral mixing diagnostics and late-time spectra are established. A comparison between the methods is made using the data from the highest resolution simulations in order to validate the Ares hydro scheme. We find that the integral mixing measures, which capture the global properties of the RT instability, show good agreement between the two codes at this resolution. The late-time turbulent kinetic energy and mass fraction spectra roughly follow a Kolmogorov spectrum, and drop off as k approaches the Nyquist wave number of each simulation. The spectra from the highest resolution Miranda simulation follow a Kolmogorov spectrum for longer than the corresponding spectra from the Ares simulation, and have a more abrupt drop off at high wave numbers. The growth rate is determined to be between about 0.03 and 0.05 at late times; however, it has not fully converged by the end of the simulation. We also study the transition from direct numerical simulation (DNS) to LES: the highest resolution simulations become LES at around t/τ ≃ 1.5. Finally, to have a fully resolved DNS through the end of our simulations, the grid spacing must be 3.6 (3.1) times finer than our highest resolution mesh when using Miranda (Ares).
Rehagen, Thomas J.; Greenough, Jeffrey A.; Olson, Britton J.
2017-04-20
In this paper, the compressible Rayleigh–Taylor (RT) instability is studied by performing a suite of large eddy simulations (LES) using the Miranda and Ares codes. A grid convergence study is carried out for each of these computational methods, and the convergence properties of integral mixing diagnostics and late-time spectra are established. A comparison between the methods is made using the data from the highest resolution simulations in order to validate the Ares hydro scheme. We find that the integral mixing measures, which capture the global properties of the RT instability, show good agreement between the two codes at this resolution. The late-time turbulent kinetic energy and mass fraction spectra roughly follow a Kolmogorov spectrum, and drop off as k approaches the Nyquist wave number of each simulation. The spectra from the highest resolution Miranda simulation follow a Kolmogorov spectrum for longer than the corresponding spectra from the Ares simulation, and have a more abrupt drop off at high wave numbers. The growth rate is determined to be between about 0.03 and 0.05 at late times; however, it has not fully converged by the end of the simulation. We also study the transition from direct numerical simulation (DNS) to LES: the highest resolution simulations become LES at around t/τ ≃ 1.5. Finally, to have a fully resolved DNS through the end of our simulations, the grid spacing must be 3.6 (3.1) times finer than our highest resolution mesh when using Miranda (Ares).
Unmanned aerial vehicles (drones) to prevent drowning.
Seguin, Celia; Blaquière, Gilles; Loundou, Anderson; Michelet, Pierre; Markarian, Thibaut
2018-06-01
The drowning literature has highlighted submersion time as the most powerful predictor of prognosis. Reducing the time taken to provide a flotation device and prevent submersion therefore appears of paramount importance. Unmanned aerial vehicles (UAVs) can provide the location of the swimmer and a flotation device. The objective of this simulation study was to evaluate the efficiency of a UAV in providing a flotation device in different sea conditions, and to compare the times taken by rescue operations with and without a UAV (standard vs UAV intervention). Several comparisons were made using professional lifeguards acting as simulated victims. A specifically-shaped UAV was used to allow us to drop an inflatable life buoy into the water. During the summer of 2017, 28 tests were performed. UAV use was associated with a reduction of the time taken to provide a flotation device to the simulated victim compared with standard rescue operations (p < 0.001 for all measurements), and the time was reduced even further in moderate (81 ± 39 vs 179 ± 78 s; p < 0.001) and rough sea conditions (99 ± 34 vs 198 ± 130 s; p < 0.001). The times taken for the UAV to locate the simulated victim, identify them and drop the life buoy were not altered by the weather conditions. A UAV can deliver a flotation device to a swimmer safely and quickly. The addition of a UAV to rescue operations could improve the quality and speed of first aid while keeping lifeguards away from dangerous sea conditions. Copyright © 2018 Elsevier B.V. All rights reserved.
Monte Carlo Simulation of Sudden Death Bearing Testing
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life, and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30 960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not of the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or to the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
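The bookkeeping behind the test-time comparison can be reproduced with a small Monte Carlo sketch (the Weibull parameters and group size below are illustrative assumptions, not the study's bearing data): each sudden-death subgroup stops at its first failure, so every bearing in the subgroup accumulates only that first-failure time, whereas testing to failure accumulates every bearing's full life.

```python
import numpy as np

rng = np.random.default_rng(3)
beta, eta = 1.5, 1000.0              # assumed Weibull slope and characteristic life (h)
n_bearings, group_size = 144, 8      # one of the virtual test configurations

lives = eta * rng.weibull(beta, size=n_bearings)
groups = lives.reshape(-1, group_size)

# Sudden death: each group runs until its first failure, so every bearing in
# the group accumulates that first-failure time.
first_failures = groups.min(axis=1)
sudden_death_time = (first_failures * group_size).sum()

# Sequential testing to failure: every bearing accumulates its own life.
sequential_time = lives.sum()

print("sudden-death test hours :", round(sudden_death_time))
print("test-to-failure hours   :", round(sequential_time))
print("time ratio              :", round(sudden_death_time / sequential_time, 2))
```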
NASA Astrophysics Data System (ADS)
Yang, Yiqun; Urban, Matthew W.; McGough, Robert J.
2018-05-01
Calculations of shear waves induced by an acoustic radiation force are very time-consuming on desktop computers, and high-performance graphics processing units (GPUs) achieve dramatic reductions in the computation time for these simulations. The acoustic radiation force is calculated using the fast near-field method and the angular spectrum approach, and then the shear waves are calculated in parallel with Green's functions on a GPU. This combination enables rapid evaluation of shear waves for push beams with different spatial samplings and for apertures with different f/#. Relative to shear wave simulations that evaluate the same algorithm on an Intel i7 desktop computer, a high-performance nVidia GPU reduces the time required for these calculations by factors of 45 and 700 when applied to elastic and viscoelastic shear wave simulation models, respectively. These GPU-accelerated simulations were also compared to measurements in different viscoelastic phantoms, and the results are similar. For parametric evaluations and for comparisons with measured shear wave data, shear wave simulations with the Green's function approach are ideally suited for high-performance GPUs.
NASA Astrophysics Data System (ADS)
Abustan, M. S.; Rahman, N. A.; Gotoh, H.; Harada, E.; Talib, S. H. A.
2016-07-01
In Malaysia, little research on crowd evacuation simulation has been reported. Hence, the development of numerical crowd evacuation models that take into account behavioral patterns and psychological characteristics is crucial in Malaysia. Tsunami disasters, which demand a rapid evacuation process, began to gain the attention of Malaysian citizens after the 2004 Indian Ocean Tsunami. In relation to the above circumstances, we have conducted simulations of the tsunami evacuation process at Miami Beach on Penang Island using a Distinct Element Method (DEM)-based crowd behavior simulator. The main objectives are to investigate and reproduce the current evacuation process at this location under different hypothetical scenarios and to study the efficiency of the evacuation. Sim-1 represents the initial evacuation plan, while sim-2 improves on it by adding a new evacuation area. The simulation results show that sim-2 has a shorter evacuation time than sim-1: the evacuation time is reduced by 53 seconds. The effect of the additional evacuation area is confirmed by the decrease in evacuation completion time. The numerical simulation may thus be promoted as an effective tool for studying crowd evacuation processes.
High Speed Civil Transport Aircraft Simulation: Reference-H Cycle 1, MATLAB Implementation
NASA Technical Reports Server (NTRS)
Sotack, Robert A.; Chowdhry, Rajiv S.; Buttrill, Carey S.
1999-01-01
The mathematical model and associated code to simulate a high speed civil transport aircraft - the Boeing Reference H configuration - are described. The simulation was constructed in support of advanced control law research. In addition to providing time histories of the dynamic response, the code includes the capabilities for calculating trim solutions and for generating linear models. The simulation relies on the nonlinear, six-degree-of-freedom equations which govern the motion of a rigid aircraft in atmospheric flight. The 1962 Standard Atmosphere Tables are used along with a turbulence model to simulate the Earth atmosphere. The aircraft model has three parts - an aerodynamic model, an engine model, and a mass model. These models use the data from the Boeing Reference H cycle 1 simulation data base. Models for the actuator dynamics, landing gear, and flight control system are not included in this aircraft model. Dynamic responses generated by the nonlinear simulation are presented and compared with results generated from alternate simulations at Boeing Commercial Aircraft Company and NASA Langley Research Center. Also, dynamic responses generated using linear models are presented and compared with dynamic responses generated using the nonlinear simulation.
Deconvolution of acoustic emissions for source localization using time reverse modeling
NASA Astrophysics Data System (ADS)
Kocur, Georg Karl
2017-01-01
Impact experiments on small-scale slabs made of concrete and aluminum were carried out. Wave motion radiated from the epicenter of the impact was recorded as voltage signals by resonant piezoelectric transducers. Numerical simulations of the elastic wave propagation are performed to simulate the physical experiments. The Hertz theory of contact is applied to estimate the force impulse, which is subsequently used for the numerical simulation. Displacements at the transducer positions are calculated numerically. A deconvolution function is obtained by comparing the physical (voltage signal) and the numerical (calculated displacement) experiments. Acoustic emission signals due to pencil-lead breaks are recorded, deconvolved and applied for localization using time reverse modeling.
Modeling and simulation of an enzymatic reactor for hydrolysis of palm oil.
Bhatia, S; Naidu, A D; Kamaruddin, A H
1999-01-01
Hydrolysis of palm oil has become an important process in the oleochemical industry. An investigation was therefore carried out on the hydrolysis of palm oil to fatty acids and glycerol using immobilized lipase in a packed bed reactor. The conversion vs. residence time data were used in the Michaelis-Menten rate equation to evaluate the kinetic parameters. A mathematical model for the rate of palm oil hydrolysis was proposed, incorporating the roles of external mass transfer and pore diffusion. The model was simulated for steady-state isothermal operation of the immobilized lipase packed bed reactor. The experimental data were compared with the simulated results. External mass transfer was found to affect the rate of palm oil hydrolysis at higher residence times.
NASA Astrophysics Data System (ADS)
Fairchild, A.; Chirayath, V.; Gladen, R.; McDonald, A.; Lim, Z.; Chrysler, M.; Koymen, A.; Weiss, A.
Simion 8.1® simulations were used to determine the energy resolution of a 1-meter-long Time of Flight Positron annihilation induced Auger Electron Spectrometer (TOF-PAES). The spectrometer consists of (1) a magnetic gradient section used to parallelize the electrons leaving the sample along the beam axis, (2) an electric-field-free time-of-flight tube, and (3) a detection section with a set of ExB plates that deflect electrons exiting the TOF tube into a Micro-Channel Plate (MCP). Simulations of the time-of-flight distribution of electrons emitted according to a known secondary electron emission distribution, for various sample biases, were compared to experimental energy calibration peaks and found to be in excellent agreement. The TOF spectra at the highest sample bias were used to determine the timing resolution function describing the timing spread due to the electronics. Simulations were then performed to calculate the energy resolution at various electron energies in order to deconvolute the combined influence of the magnetic field parallelizer, the timing resolution, and the voltage gradient at the ExB plates. The energy resolution of the 1 m TOF-PAES was compared to that of a newly constructed 3-meter-long system. The results were used to optimize the geometry and the potentials of the ExB plates for obtaining the best energy resolution. This work was supported by NSF Grants No. DMR 1508719 and DMR 1338130.
Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M
2013-02-01
The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals.
Daetwyler, Hans D.; Calus, Mario P. L.; Pong-Wong, Ricardo; de los Campos, Gustavo; Hickey, John M.
2013-01-01
The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals. PMID:23222650
Gupta, Charlotte C; Dorrian, Jill; Grant, Crystal L; Pajcin, Maja; Coates, Alison M; Kennaway, David J; Wittert, Gary A; Heilbronn, Leonie K; Della Vedova, Chris B; Banks, Siobhan
2017-01-01
Shiftworkers have impaired performance when driving at night and they also alter their eating patterns during nightshifts. However, it is unknown whether driving at night is influenced by the timing of eating. This study aims to explore the effects of timing of eating on simulated driving performance across four simulated nightshifts. Healthy, non-shiftworking males aged 18-35 years (n = 10) were allocated to either an eating at night (n = 5) or no eating at night (n = 5) condition. During the simulated nightshifts at 1730, 2030 and 0300 h, participants performed a 40-min driving simulation, 3-min Psychomotor Vigilance Task (PVT-B), and recorded their ratings of sleepiness on a subjective scale. Participants had a 6-h sleep opportunity during the day (1000-1600 h). Total 24-h food intake was consistent across groups; however, those in the eating at night condition ate a large meal (30% of 24-h intake) during the nightshift at 0130 h. It was found that participants in both conditions experienced increased sleepiness and PVT-B impairments at 0300 h compared to 1730 and 2030 h (p < 0.001). Further, at 0300 h, those in the eating condition displayed a significant decrease in time spent in the safe zone (p < 0.05; percentage of time within 10 km/h of the speed limit and 0.8 m of the centre of the lane) and significant increases in speed variability (p < 0.001), subjective sleepiness (p < 0.01) and number of crashes (p < 0.01) compared to those in the no eating condition. Results suggest that, for optimal performance, shiftworkers should consider restricting food intake during the night.
Role of a plausible nuisance contributor in the declining obesity-mortality risks over time.
Mehta, Tapan; Pajewski, Nicholas M; Keith, Scott W; Fontaine, Kevin; Allison, David B
2016-12-15
Recent analyses of epidemiological data including the National Health and Nutrition Examination Survey (NHANES) have suggested that the harmful effects of obesity may have decreased over calendar time. The shifting BMI distribution over time coupled with the application of fixed broad BMI categories in these analyses could be a plausible "nuisance contributor" to this observed change in the obesity-associated mortality over calendar time. To evaluate the extent to which observed temporal changes in the obesity-mortality association may be due to a shifting population distribution for body mass index (BMI), coupled with analyses based on static, broad BMI categories. Simulations were conducted using data from NHANES I and III linked with mortality data. Data from NHANES I were used to fit a "true" model treating BMI as a continuous variable. Coefficients estimated from this model were used to simulate mortality for participants in NHANES III. Hence, the population-level association between BMI and mortality in NHANES III was fixed to be identical to the association estimated in NHANES I. Hazard ratios (HRs) for obesity categories based on BMI for NHANES III with simulated mortality data were compared to the corresponding estimated HRs from NHANES I. Change in hazard ratios for simulated data in NHANES III compared to observed estimates from NHANES I. On average, hazard ratios for NHANES III based on simulated mortality data were 29.3% lower than the estimates from NHANES I using observed mortality follow-up. This reduction accounted for roughly three-fourths of the apparent decrease in the obesity-mortality association observed in a previous analysis of these data. Some of the apparent diminution of the association between obesity and mortality may be an artifact of treating BMI as a categorical variable. Copyright © 2016. Published by Elsevier Inc.
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
NASA Astrophysics Data System (ADS)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten
2017-11-01
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving the continuous development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its outset. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as functions of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and used to benchmark the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
NASA Astrophysics Data System (ADS)
Krawiecki, A.
A multi-agent spin model for changes of prices in the stock market based on the Ising-like cellular automaton with interactions between traders randomly varying in time is investigated by means of Monte Carlo simulations. The structure of interactions has topology of a small-world network obtained from regular two-dimensional square lattices with various coordination numbers by randomly cutting and rewiring edges. Simulations of the model on regular lattices do not yield time series of logarithmic price returns with statistical properties comparable with the empirical ones. In contrast, in the case of networks with a certain degree of randomness for a wide range of parameters the time series of the logarithmic price returns exhibit intermittent bursting typical of volatility clustering. Also the tails of distributions of returns obey a power scaling law with exponents comparable to those obtained from the empirical data.
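A deliberately small toy version of such a model is sketched below; a Watts-Strogatz ring stands in for the rewired two-dimensional lattice, the update rule is simplified, and all parameters are invented, so this only illustrates the ingredients (spins as buy/sell positions, couplings varying randomly in time, return proportional to magnetisation) rather than reproducing the reported statistics.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)

# Small-world interaction network: the paper rewires a 2-D square lattice;
# a Watts-Strogatz ring is used here purely for brevity (an assumption).
n_agents, k_nn, p_rewire = 200, 8, 0.1
graph = nx.watts_strogatz_graph(n_agents, k_nn, p_rewire, seed=5)
neighbors = [list(graph.neighbors(i)) for i in range(n_agents)]

spins = rng.choice([-1, 1], size=n_agents)   # +1 = buy, -1 = sell
returns = []
for _ in range(1000):
    coupling = rng.normal()                  # interaction strength varies randomly in time
    for i in rng.permutation(n_agents):
        field = coupling * sum(spins[j] for j in neighbors[i]) + rng.normal(0.0, 2.0)
        spins[i] = 1 if field > 0 else -1
    returns.append(spins.mean())             # log-return ~ magnetisation (toy choice)

r = np.asarray(returns)
excess_kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0
print("excess kurtosis of the toy return series:", round(excess_kurtosis, 2))
```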
Lacour, C; Joannis, C; Schuetze, M; Chebbo, G
2011-01-01
This paper compares several real-time control (RTC) strategies for a generic configuration consisting of a storage tank with two overflow facilities. Two of the strategies only make use of flow rate data, while the third also introduces turbidity data in order to exercise dynamic control between two overflow locations. The efficiency of each strategy is compared over a wide range of system setups, described by two parameters. This assessment is performed by simulating the application of control strategies to actual measurements time series recorded on two sites. Adding turbidity measurements into an RTC strategy leads to a significant reduction in the annual overflow pollutant load. The pollutant spills spared by such a control strategy strongly depend on the site and on the flow rate based strategy considered as a reference. With the datasets used in this study, values ranging from 5 to 50% were obtained.
Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves
NASA Astrophysics Data System (ADS)
Liu, Shukui; Papanikolaou, Apostolos D.
2011-03-01
Typical results obtained by a newly developed, nonlinear time domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in the way of combining a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of application of the method to the seakeeping performance of a standard containership, namely the ITTC S175, are herein presented. Comparisons have been made between the results from the present method, the frequency domain 3D panel method (NEWDRIFT) of NTUA-SDL and available experimental data and good agreement has been observed for all studied cases between the results of the present method and comparable other data.
Baird, Rachel; Maxwell, Scott E
2016-06-01
Time-varying predictors in multilevel models are a useful tool for longitudinal research, whether they are the research variable of interest or they are controlling for variance to allow greater power for other variables. However, standard recommendations to fix the effect of time-varying predictors may make an assumption that is unlikely to hold in reality and may influence results. A simulation study illustrates that treating the time-varying predictor as fixed may allow analyses to converge, but the analyses have poor coverage of the true fixed effect when the time-varying predictor has a random effect in reality. A second simulation study shows that treating the time-varying predictor as random may have poor convergence, except when allowing negative variance estimates. Although negative variance estimates are uninterpretable, results of the simulation show that estimates of the fixed effect of the time-varying predictor are as accurate for these cases as for cases with positive variance estimates, and that treating the time-varying predictor as random and allowing negative variance estimates performs well whether the time-varying predictor is fixed or random in reality. Because of the difficulty of interpreting negative variance estimates, 2 procedures are suggested for selection between fixed-effect and random-effect models: comparing between fixed-effect and constrained random-effect models with a likelihood ratio test or fitting a fixed-effect model when an unconstrained random-effect model produces negative variance estimates. The performance of these 2 procedures is compared. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Molecular Dynamics Simulations of Nucleic Acids. From Tetranucleotides to the Ribosome.
Šponer, Jiří; Banáš, Pavel; Jurečka, Petr; Zgarbová, Marie; Kührová, Petra; Havrila, Marek; Krepl, Miroslav; Stadlbauer, Petr; Otyepka, Michal
2014-05-15
We present a brief overview of explicit solvent molecular dynamics (MD) simulations of nucleic acids. We explain physical chemistry limitations of the simulations, namely, the molecular mechanics (MM) force field (FF) approximation and limited time scale. Further, we discuss relations and differences between simulations and experiments, compare standard and enhanced sampling simulations, discuss the role of starting structures, comment on different versions of nucleic acid FFs, and relate MM computations with contemporary quantum chemistry. Despite its limitations, we show that MD is a powerful technique for studying the structural dynamics of nucleic acids with a fast growing potential that substantially complements experimental results and aids their interpretation.
Initial Development of a Quadcopter Simulation Environment for Auralization
NASA Technical Reports Server (NTRS)
Christian, Andrew; Lawrence, Joseph
2016-01-01
This paper describes a recently created computer simulation of quadcopter flight dynamics for the NASA DELIVER project. The goal of this effort is to produce a simulation that includes a number of physical effects that are not usually found in other dynamics simulations (e.g., those used for flight controller development). These effects will be shown to have a significant impact on the fidelity of auralizations - entirely synthetic time-domain predictions of sound - based on this simulation when compared to a recording. High-fidelity auralizations are an important precursor to human subject tests that seek to understand the impact of vehicle configurations on noise and annoyance.
Effects of water-management alternatives on streamflow in the Ipswich River basin, Massachusetts
Zarriello, Philip J.
2001-01-01
Management alternatives that could help mitigate the effects of water withdrawals on streamflow in the Ipswich River Basin were evaluated by simulation with a calibrated Hydrologic Simulation Program--Fortran (HSPF) model. The effects of management alternatives on streamflow were simulated for a 35-year period (1961–95). Most alternatives examined increased low flows compared to the base simulation of average 1989–93 withdrawals. Only the simulation of no septic-effluent inflow, and the simulation of a 20-percent increase in withdrawals, further lowered flows or caused the river to stop flowing for longer periods of time than the simulation of average 1989–93 withdrawals. Simulations of reduced seasonal withdrawals by 20 percent, and by 50 percent, resulted in a modest increase in low flow in a critical habitat reach (model reach 8 near the Reading town well field); log-Pearson Type III analysis of simulated daily-mean flow indicated that under these reduced withdrawals, model reach 8 would stop flowing for a period of seven consecutive days about every other year, whereas under average 1989–93 withdrawals this reach would stop flowing for a seven consecutive day period almost every year. Simulations of no seasonal withdrawals, and simulations that stopped streamflow depletion when flow in model reach 19 was below 22 cubic feet per second, indicated flow would be maintained in model reach 8 at all times. Simulations indicated wastewater-return flows would augment low flow in proportion to the rate of return flow. Simulations of a 1.5 million gallons per day return flow rate indicated model reach 8 would stop flowing for a period of seven consecutive days about once every 5 years; simulated return flow rates of 1.1 million gallons per day indicated that model reach 8 would stop flowing for a period of seven consecutive days about every other year. Simulation of reduced seasonal withdrawals, combined with no septic effluent return flow, indicated only a slight increase in low flow compared to low flows simulated under average 1989–93 withdrawals. Simulation of reduced seasonal withdrawal, combined with 2.6 million gallons per day wastewater-return flows, provided more flow in model reach 8 than that simulated under no withdrawals.
Patti, Alessandro; Cuetos, Alejandro
2012-07-01
We report on the diffusion of purely repulsive and freely rotating colloidal rods in the isotropic, nematic, and smectic liquid crystal phases to probe the agreement between Brownian and Monte Carlo dynamics under the most general conditions. By properly rescaling the Monte Carlo time step, being related to any elementary move via the corresponding self-diffusion coefficient, with the acceptance rate of simultaneous trial displacements and rotations, we demonstrate the existence of a unique Monte Carlo time scale that allows for a direct comparison between Monte Carlo and Brownian dynamics simulations. To estimate the validity of our theoretical approach, we compare the mean square displacement of rods, their orientational autocorrelation function, and the self-intermediate scattering function, as obtained from Brownian dynamics and Monte Carlo simulations. The agreement between the results of these two approaches, even under the condition of heterogeneous dynamics generally observed in liquid crystalline phases, is excellent.
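The rescaling idea can be written schematically as follows (this is a paraphrase with the geometric prefactor left unspecified, not the paper's exact expression): one Monte Carlo cycle with acceptance rate A advances the system by an effective Brownian time

\[
\delta t_{\mathrm{MC}} \;\sim\; \mathcal{A}\,\frac{\langle \delta r^{2}\rangle_{\mathrm{trial}}}{2\,d\,D_{0}},
\]

where \(\langle \delta r^{2}\rangle_{\mathrm{trial}}\) is the mean-square size of an attempted displacement, d the dimensionality, and D_0 the short-time translational self-diffusion coefficient; an analogous expression with the rotational diffusion coefficient applies to trial rotations. Matching these two time units and rescaling by the (optimal) acceptance rate is what makes the direct comparison with Brownian dynamics possible.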
NASA Technical Reports Server (NTRS)
Kibler, J. F.; Suttles, J. T.
1977-01-01
One way to obtain estimates of the unknown parameters in a pollution dispersion model is to compare the model predictions with remotely sensed air quality data. A ground-based LIDAR sensor provides relative pollution concentration measurements as a function of space and time. The measured sensor data are compared with the dispersion model output through a numerical estimation procedure to yield parameter estimates which best fit the data. This overall process is tested in a computer simulation to study the effects of various measurement strategies. Such a simulation is useful prior to a field measurement exercise to maximize the information content in the collected data. Parametric studies of simulated data matched to a Gaussian plume dispersion model indicate the trade offs available between estimation accuracy and data acquisition strategy.
NASA Astrophysics Data System (ADS)
Karimzadeh, Shaghayegh; Askan, Aysegul; Yakut, Ahmet
2017-09-01
Simulated ground motions can be used in structural and earthquake engineering practice as an alternative to or to augment the real ground motion data sets. Common engineering applications of simulated motions are linear and nonlinear time history analyses of building structures, where full acceleration records are necessary. Before using simulated ground motions in such applications, it is important to assess those in terms of their frequency and amplitude content as well as their match with the corresponding real records. In this study, a framework is outlined for assessment of simulated ground motions in terms of their use in structural engineering. Misfit criteria are determined for both ground motion parameters and structural response by comparing the simulated values against the corresponding real values. For this purpose, as a case study, the 12 November 1999 Duzce earthquake is simulated using stochastic finite-fault methodology. Simulated records are employed for time history analyses of frame models of typical residential buildings. Next, the relationships between ground motion misfits and structural response misfits are studied. Results show that the seismological misfits around the fundamental period of selected buildings determine the accuracy of the simulated responses in terms of their agreement with the observed responses.
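As a concrete, simplified example of such a misfit, the sketch below computes natural-log ratio misfits for a few ground-motion and structural-response parameters; the parameter names and values are placeholders, not the Duzce records.

```python
import math

def log_misfit(observed, simulated):
    """Natural-log ratio misfit; 0 is a perfect match, +/- ln(2) is a factor of 2."""
    return math.log(observed / simulated)

# Placeholder observed/simulated pairs (PGA in g, Arias intensity in m/s,
# peak interstorey drift ratio) -- illustrative values only.
pairs = {
    "PGA": (0.35, 0.28),
    "Arias intensity": (1.9, 1.4),
    "peak drift ratio": (0.012, 0.009),
}
for name, (obs, sim) in pairs.items():
    print(f"{name:18s} misfit = {log_misfit(obs, sim):+.2f}")
```

Tabulating seismological misfits of this kind against the corresponding response misfits is the kind of pairing the study uses to judge whether simulated records are adequate for structural analysis.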
Applied Time Domain Stability Margin Assessment for Nonlinear Time-Varying Systems
NASA Technical Reports Server (NTRS)
Kiefer, J. M.; Johnson, M. D.; Wall, J. H.; Dominguez, A.
2016-01-01
The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation. This technique was implemented by using the Stability Aerospace Vehicle Analysis Tool (SAVANT) computer simulation to evaluate the stability of the SLS system with the Adaptive Augmenting Control (AAC) active and inactive along its ascent trajectory. The gains for which the vehicle maintains apparent time-domain stability defines the gain margins, and the time delay similarly defines the phase margin. This method of extracting the control stability margins from the time-domain simulation is relatively straightforward and the resultant margins can be compared to the linearized system results. The sections herein describe the techniques employed to extract the time-domain margins, compare the results between these nonlinear and the linear methods, and provide explanations for observed discrepancies. The SLS ascent trajectory was simulated with SAVANT and the classical linear stability margins were evaluated at one second intervals. The linear analysis was performed with the AAC algorithm disabled to attain baseline stability margins. At each time point, the system was linearized about the current operating point using Simulink's built-in solver. Each linearized system in time was evaluated for its rigid-body gain margin (high frequency gain margin), rigid-body phase margin, and aero gain margin (low frequency gain margin) for each control axis. Using the stability margins derived from the baseline linearization approach, the time domain derived stability margins were determined by executing time domain simulations in which axis-specific incremental gain and phase adjustments were made to the nominal system about the expected neutral stability point at specific flight times. The baseline stability margin time histories were used to shift the system gain to various values around the zero margin point such that a precise amount of expected gain margin was maintained throughout flight. When assessing the gain margins, the gain was applied starting at the time point under consideration, thereafter following the variation in the margin found in the linear analysis. When assessing the rigid-body phase margin, a constant time delay was applied to the system starting at the time point under consideration. If the baseline stability margins were correctly determined via the linear analysis, the time domain simulation results should contain unstable behavior at certain gain and phase values. Examples will be shown from repeated simulations with variable added gain and phase lag. 
Faithfulness of margins calculated from the linear analysis to the nonlinear system will be demonstrated.
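As a concrete illustration of the incremental-gain idea described above, the sketch below sweeps an added loop gain in a time-domain simulation of a toy attitude loop (double-integrator plant, PD feedback, and a loop transport delay) until the response diverges. The plant, controller gains, delay, and divergence threshold are illustrative assumptions, not the SLS or SAVANT models.

```python
import numpy as np
from collections import deque

def stays_stable(extra_gain_db, tau=0.1, t_end=30.0, dt=0.001):
    """Toy attitude loop: double-integrator plant, PD feedback, delay tau (s).
    Returns True if the response stays bounded with the added loop gain."""
    k = 10.0 ** (extra_gain_db / 20.0)           # multiplicative gain adjustment
    kp, kd = 4.0, 1.0                            # nominal controller gains (assumed)
    x = np.array([0.1, 0.0])                     # [attitude error, rate]
    buf = deque([x.copy() for _ in range(int(round(tau / dt)))])
    for _ in range(int(t_end / dt)):
        xd = buf.popleft()                       # delayed measurement
        buf.append(x.copy())
        u = -k * (kp * xd[0] + kd * xd[1])       # scaled control command
        x = x + dt * np.array([x[1], u])         # explicit Euler step
        if abs(x[0]) > 1e3:                      # crude divergence check
            return False
    return True

# Incrementally raise the added gain until the time-domain response diverges;
# the last stable value approximates the time-domain gain margin.
for gain_db in np.arange(0.0, 30.0, 0.5):
    if not stays_stable(gain_db):
        print(f"time-domain gain margin is roughly {gain_db - 0.5:.1f} dB")
        break
```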
Time series inversion of spectra from ground-based radiometers
NASA Astrophysics Data System (ADS)
Christensen, O. M.; Eriksson, P.
2013-07-01
Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument, which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is increased to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the Onsala Space Observatory (OSO) water vapour microwave radiometer confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
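A minimal sketch of the idea, assuming a purely linear forward model and made-up covariances rather than the OSO radiometer setup: the state vector stacks profiles from several measurement times, the a priori covariance is the Kronecker product of a temporal correlation and an altitude covariance, and a single MAP step retrieves the whole time block at once.

```python
import numpy as np

n_alt, n_time, m = 30, 6, 40           # altitude points, measurement times, channels
rng = np.random.default_rng(0)

# Altitude-domain a priori covariance (exponential correlation, assumed values).
z = np.linspace(50, 80, n_alt)                                  # km
S_alt = 0.5**2 * np.exp(-np.abs(z[:, None] - z[None, :]) / 5.0)

# Temporal correlation between the true states at the measurement times.
t = np.arange(n_time) * 1.0                                     # hours
S_time = np.exp(-np.abs(t[:, None] - t[None, :]) / 3.0)

Sa = np.kron(S_time, S_alt)                    # full a priori covariance
K_single = rng.normal(size=(m, n_alt))         # toy Jacobian for one spectrum
K = np.kron(np.eye(n_time), K_single)          # block-diagonal in time
Se = 0.1**2 * np.eye(m * n_time)               # measurement noise covariance

xa = np.zeros(n_alt * n_time)                  # a priori state
y = K @ rng.multivariate_normal(xa, Sa) + 0.1 * rng.normal(size=m * n_time)

# MAP (optimal estimation) solution for the whole time block at once.
G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + Se)   # gain matrix
x_hat = xa + G @ (y - K @ xa)
A = G @ K                                         # averaging kernel matrix
print("state length:", x_hat.size, "trace of averaging kernel:", round(np.trace(A), 1))
```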
Comparisons Between TIME-GCM/MERRA Simulations and LEO Satellite Observations
NASA Astrophysics Data System (ADS)
Hagan, M. E.; Haeusler, K.; Forbes, J. M.; Zhang, X.; Doornbos, E.; Bruinsma, S.; Lu, G.
2014-12-01
We report on yearlong National Center for Atmospheric Research (NCAR) thermosphere-ionosphere-mesosphere-electrodynamics general circulation model (TIME-GCM) simulations where we utilize the recently developed lower boundary condition based on 3-hourly MERRA (Modern-Era Retrospective Analysis for Research and Application) reanalysis data to account for tropospheric waves and tides propagating upward into the model domain. The solar and geomagnetic forcing is based on prevailing geophysical conditions. The simulations show a strong day-to-day variability in the upper thermospheric neutral temperature tidal fields, which is smoothed out quickly when averaging is applied over several days, e.g. up to 50% DE3 amplitude reduction for a 10-day average. This is an important result with respect to tidal diagnostics from satellite observations where averaging over multiple days is inevitable. In order to assess TIME-GCM performance we compare the simulations with measurements from the Gravity field and steady-state Ocean Circulation Explorer (GOCE), Challenging Minisatellite Payload (CHAMP) and Gravity Recovery and Climate Experiment (GRACE) satellites.
Nascimento, Daniel R; DePrince, A Eugene
2017-07-06
An explicitly time-dependent (TD) approach to equation-of-motion (EOM) coupled-cluster theory with single and double excitations (CCSD) is implemented for simulating near-edge X-ray absorption fine structure in molecular systems. The TD-EOM-CCSD absorption line shape function is given by the Fourier transform of the CCSD dipole autocorrelation function. We represent this transform by its Padé approximant, which provides converged spectra in much shorter simulation times than are required by the Fourier form. The result is a powerful framework for the blackbox simulation of broadband absorption spectra. K-edge X-ray absorption spectra for carbon, nitrogen, and oxygen in several small molecules are obtained from the real part of the absorption line shape function and are compared with experiment. The computed and experimentally obtained spectra are in good agreement; the mean unsigned error in the predicted peak positions is only 1.2 eV. We also explore the spectral signatures of protonation in these molecules.
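A hedged sketch of a Padé-accelerated Fourier transform applied to a made-up "dipole autocorrelation" signal (two damped cosines). The diagonal Padé construction and the convention z = exp(i*omega*dt) are assumptions for illustration; this is not the authors' TD-EOM-CCSD code.

```python
import numpy as np

def pade_fourier(c, omegas, dt):
    """Evaluate the Fourier series of a short time signal c[n] through its
    diagonal Pade approximant P(z)/Q(z), with z = exp(1j*omega*dt)."""
    c = np.asarray(c, dtype=complex)
    N = len(c) - 1
    M = N // 2
    # Denominator coefficients b_1..b_M from the high-order matching conditions.
    G = np.array([[c[M + k - m] for m in range(M)] for k in range(M)])
    d = -c[M + 1:2 * M + 1]
    b = np.concatenate(([1.0 + 0j], np.linalg.lstsq(G, d, rcond=None)[0]))
    # Numerator coefficients from the low-order matching conditions.
    a = np.array([np.sum(b[:k + 1] * c[k::-1]) for k in range(M + 1)])
    z = np.exp(1j * np.asarray(omegas) * dt)
    zp = z[:, None] ** np.arange(M + 1)[None, :]
    return (zp @ a) / (zp @ b)

# Toy autocorrelation: two damped oscillations at 2.0 and 3.5 (arbitrary units).
dt, n = 0.05, 400
t = dt * np.arange(n)
c = np.exp(-0.02 * t) * (np.cos(2.0 * t) + 0.5 * np.cos(3.5 * t))
w = np.linspace(0.0, 5.0, 2000)
spec = pade_fourier(c, w, dt).real
loc = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
print("recovered peak positions:", np.round(w[loc][spec[loc] > 0.3 * spec.max()], 2))
```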
FNCS: A Framework for Power System and Communication Networks Co-Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.
2014-04-13
This paper describes the Fenix framework that uses a federated approach for integrating power grid and communication network simulators. Compared to existing approaches, Fenix allows co-simulation of both transmission- and distribution-level power grid simulators with the communication network simulator. To reduce the performance overhead of time synchronization, Fenix utilizes optimistic synchronization strategies that make speculative decisions about when the simulators are going to exchange messages. GridLAB-D (a distribution simulator), PowerFlow (a transmission simulator), and ns-3 (a telecommunication simulator) are integrated with the framework and are used to illustrate the enhanced performance provided by speculative multi-threading on a smart grid application. Our speculative multi-threading approach achieved on average a 20% improvement over the existing synchronization methods.
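The sketch below is a toy illustration of the optimistic-synchronization idea: each simulator runs ahead speculatively, checkpoints its state, and rolls back and re-executes if a cross-simulator message arrives inside the window it speculated through. The Federate class, step sizes, and message probabilities are invented for the sketch and are not the framework's API.

```python
import random

class Federate:
    """Toy stand-in for a coupled simulator taking part in a co-simulation."""
    def __init__(self, name):
        self.name, self.time, self.state = name, 0.0, 0.0
        self.checkpoint = (0.0, 0.0)
        self.rollbacks = 0

    def save(self):
        self.checkpoint = (self.time, self.state)

    def advance(self, dt):
        self.time += dt
        self.state += random.uniform(0.0, 1.0) * dt   # stand-in for real dynamics

    def rollback(self):
        self.time, self.state = self.checkpoint
        self.rollbacks += 1

random.seed(1)
grid, comms = Federate("grid"), Federate("comms")
window = 1.0          # speculation: no cross-simulator message within this window
for step in range(20):
    for fed in (grid, comms):
        fed.save()
        fed.advance(window)                  # run ahead optimistically
    if random.random() < 0.3:                # a message arrives mid-window after all
        grid.rollback()                      # speculation failed: restore checkpoint
        grid.advance(window)                 # re-execute, this time seeing the message
print("grid rollbacks:", grid.rollbacks,
      "times still synchronized:", grid.time == comms.time)
```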
Walliczek-Dworschak, U; Schmitt, M; Dworschak, P; Diogo, I; Ecke, A; Mandapathil, M; Teymoortash, A; Güldner, C
2017-06-01
Increasing usage of robotic surgery presents surgeons with the question of how to acquire the special skills required. This study aimed to analyze the effect of different exercises on their performance outcomes. This prospective study was conducted on the da Vinci Skills Simulator from December 2014 till August 2015. Sixty robotic novices were included and randomized to three groups of 20 participants each. Each group performed three different exercises with comparable difficulty levels. The exercises were performed three times in a row within two training sessions, with an interval of 1 week in between. On the final training day, two new exercises were added and a questionnaire was completed. Technical metrics of performance (overall score, time to complete, economy of motion, instrument collisions, excessive instrument force, instruments out of view, master work space range, drops, missed targets, misapplied energy time, blood loss and broken vessels) were recorded by the simulator software for further analysis. Training with different exercises led to comparable results in performance metrics for the final exercises among the three groups. A significant skills gain was recorded between the first and last exercises, with improved performance in overall score, time to complete and economy of motion for all exercises in all three groups. As training with different exercises led to comparable results in robotic training, the type of exercise seems to play a minor role in the outcome. For a robotic training curriculum, it might be important to choose exercises with comparable difficulty levels. In addition, it seems to be advantageous to limit the duration of the training to maintain the concentration throughout the entire session.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shekar, Venkateswaran; Fiondella, Lance; Chatterjee, Samrat
Several transportation network vulnerability models have been proposed. However, most only consider disruptions as a static snapshot in time and the impact on total travel time. These approaches can consider neither the time-varying nature of travel demand nor other undesirable outcomes that follow from transportation network disruptions. This paper proposes an algorithmic approach to assess the vulnerability of a transportation network that considers the time-varying demand with an open source dynamic transportation simulation tool. The open source nature of the tool allows us to systematically consider many disruption scenarios and quantitatively compare their relative criticality. This is far more efficient than traditional approaches, which would require days or weeks of a transportation engineer's time to manually set up, run, and assess these simulations. In addition to travel time, we also collect statistics on additional fuel consumed and the corresponding carbon dioxide emissions. Our approach thus provides a more systematic assessment that is both time-varying and able to consider additional negative consequences of disruptions for decision makers to evaluate.
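A minimal sketch of the scenario-sweep idea on a toy four-node network, with static shortest paths standing in for the dynamic traffic simulation and an assumed emission factor: each candidate link disruption is scored by its added vehicle travel time and the corresponding extra CO2.

```python
import numpy as np

INF = 1e9
# Travel times (minutes) on a toy 4-node network; INF means no direct link.
base = np.array([[0, 10, INF, 25],
                 [10, 0, 12, INF],
                 [INF, 12, 0, 8],
                 [25, INF, 8, 0]], dtype=float)

def all_pairs(times):
    d = times.copy()
    for k in range(len(d)):                         # Floyd-Warshall
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

# Hourly origin-destination demand (trips/hour) over a 3-hour window (assumed).
demand = {(0, 3): [100, 400, 150], (1, 3): [80, 300, 120]}
co2_per_veh_min = 0.05                              # kg CO2 per vehicle-minute (assumed)

def total_cost(times):
    d = all_pairs(times)
    veh_min = sum(sum(v) * d[od] for od, v in demand.items())
    return veh_min, veh_min * co2_per_veh_min

base_min, base_co2 = total_cost(base)
for i, j in [(0, 1), (1, 2), (2, 3), (0, 3)]:       # disrupt one link at a time
    cut = base.copy()
    cut[i, j] = cut[j, i] = INF
    veh_min, co2 = total_cost(cut)
    print(f"cut link {i}-{j}: +{veh_min - base_min:.0f} veh-min, "
          f"+{co2 - base_co2:.0f} kg CO2")
```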
Terriff, Colleen M; McKeirnan, Kimberly
2017-07-01
This study compared traditional training (TT) and just-in-time training (JITT) of P3 student pharmacists regarding interest, confidence, and comfort pre- and post-training (primary objective); and assessment and administration competency (secondary objective) during a simulated influenza vaccination clinic. Student pharmacists were randomized 1:1 to receive either TT or JITT, completed pre- and post-training surveys assessing interest, confidence, and comfort, and were evaluated on performance during a simulated emergency infant vaccination. An infant manikin simulated a child <1 year of age, and an actor role-played the mother. All students received a briefing about the simulated mass vaccination prior to their performance assessment. Survey differences between groups were analyzed by ANOVA. The competency assessment was analyzed by a Chi-square or Fisher's exact test for individual steps and a Student t-test for mean scores. Pre-training interest was high and was maintained post-training. Pre-training confidence and comfort levels were low and improved in both groups. Mean competency scores were comparable between the TT and JITT groups. Comparing groups, TT students more commonly missed proper injection site selection and care, while JITT students more commonly missed distracting the infant and administration documentation. JITT for student pharmacists to learn the skills required to immunize infants elicits similar outcomes (interest, confidence, comfort, and administration competency) as TT for emergency pediatric influenza vaccination. Copyright © 2017 Elsevier Inc. All rights reserved.
Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics.
Martínez, Enrique; Cawkwell, Marc J; Voter, Arthur F; Niklasson, Anders M N
2015-04-21
Extended Lagrangian Born-Oppenheimer molecular dynamics is developed and analyzed for applications in canonical (NVT) simulations. Three different approaches are considered: the Nosé and Andersen thermostats and Langevin dynamics. We have tested the temperature distribution under different conditions of self-consistent field (SCF) convergence and time step and compared the results to analytical predictions. We find that the simulations based on the extended Lagrangian Born-Oppenheimer framework provide accurate canonical distributions even under approximate SCF convergence, often requiring only a single diagonalization per time step, whereas regular Born-Oppenheimer formulations exhibit unphysical fluctuations unless a sufficiently high degree of convergence is reached at each time step. The thermostated extended Lagrangian framework thus offers an accurate approach to sample processes in the canonical ensemble at a fraction of the computational cost of regular Born-Oppenheimer molecular dynamics simulations.
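A minimal sketch of one of the thermostats discussed above, applied to a toy system rather than an extended Lagrangian Born-Oppenheimer code: velocity-Verlet dynamics of independent harmonic oscillators with an Andersen thermostat, checking that the sampled kinetic temperature fluctuates around the target. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k_spring = 64, 1.0, 1.0          # particles, mass, spring constant (assumed)
kT, nu, dt, steps = 0.5, 0.1, 0.01, 20000

x = rng.normal(0.0, 1.0, n)
v = rng.normal(0.0, np.sqrt(kT / m), n)
force = lambda pos: -k_spring * pos    # independent 1D harmonic oscillators

temps = []
for step in range(steps):
    v += 0.5 * dt * force(x) / m       # velocity Verlet
    x += dt * v
    v += 0.5 * dt * force(x) / m
    # Andersen thermostat: re-draw velocities from the Maxwell-Boltzmann
    # distribution with collision probability nu*dt per particle per step.
    hit = rng.random(n) < nu * dt
    v[hit] = rng.normal(0.0, np.sqrt(kT / m), hit.sum())
    temps.append(m * np.mean(v**2))    # instantaneous kinetic temperature (1D)

print("sampled T:", round(float(np.mean(temps[steps // 2:])), 3), " target:", kT)
```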
Numerical simulation of a 100-ton ANFO detonation
NASA Astrophysics Data System (ADS)
Weber, P. W.; Millage, K. K.; Crepeau, J. E.; Happ, H. J.; Gitterman, Y.; Needham, C. E.
2015-03-01
This work describes the results from a US government-owned hydrocode (SHAMRC, Second-Order Hydrodynamic Automatic Mesh Refinement Code) that simulated an explosive detonation experiment with 100,000 kg of Ammonium Nitrate-Fuel Oil (ANFO) and 2,080 kg of Composition B (CompB). The explosive surface charge was nearly hemispherical and detonated in desert terrain. Two-dimensional axisymmetric (2D) and three-dimensional (3D) simulations were conducted, with the 3D model providing a more accurate representation of the experimental setup geometry. Both 2D and 3D simulations yielded overpressure and impulse waveforms that agreed qualitatively with experiment, including the capture of the secondary shock observed in the experiment. The 2D simulation predicted the primary shock arrival time correctly but secondary shock arrival time was early. The 2D-predicted impulse waveforms agreed very well with the experiment, especially at later calculation times, and prediction of the early part of the impulse waveform (associated with the initial peak) was better quantitatively for 2D compared to 3D. The 3D simulation also predicted the primary shock arrival time correctly, and secondary shock arrival times in 3D were closer to the experiment than in the 2D results. The 3D-predicted impulse waveform had better quantitative agreement than 2D for the later part of the impulse waveform. The results of this numerical study show that SHAMRC may be used reliably to predict phenomena associated with the 100-ton detonation. The ultimate fidelity of the simulations was limited by both computer time and memory. The results obtained provide good accuracy and indicate that the code is well suited to predicting the outcomes of explosive detonations.
Research and implementation of simulation for TDICCD remote sensing in vibration of optical axis
NASA Astrophysics Data System (ADS)
Liu, Zhi-hong; Kang, Xiao-jun; Lin, Zhe; Song, Li
2013-12-01
During the exposure time, the charge transfer speed in the push-broom direction and the line-by-line scanning speed of the sensor must match each other strictly for a space-borne TDICCD push-broom camera. However, as attitude disturbance of the satellite and vibration of the camera are inevitable, it is impossible to eliminate the speed mismatch, which makes the signals of different targets overlay each other and results in a decline of image resolution. The effects of velocity mismatch can be visually observed and analyzed by simulating the degradation of image quality caused by vibration of the optical axis, which is significant for the evaluation of image quality and the design of image restoration algorithms. The first problem to be solved is how to model the process in the time and space domains during the imaging time. Vibration information for simulation is usually given as a continuous curve, whereas the pixels of the original image matrix and the sensor matrix are discrete; as a result, they cannot always match each other well. The effect of the simulation is also influenced by the discrete sampling within the integration time. An appropriate discrete modeling and simulation method is therefore important for improving simulation accuracy and efficiency. This paper analyses discretization schemes in the time and space domains and presents a method, based on the principle of the TDICCD sensor, to simulate the image quality of the optical system under vibration of the line of sight. The gray value of each pixel in the sensor matrix is obtained by a weighted average, which solves the pixel-mismatch problem. The results, compared with hardware test experiments, indicate that this simulation system performs well in accuracy and reliability.
Equilibration of experimentally determined protein structures for molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Walton, Emily B.; Van Vliet, Krystyn J.
2006-12-01
Preceding molecular dynamics simulations of biomolecular interactions, the molecule of interest is often equilibrated with respect to an initial configuration. This so-called equilibration stage is required because the input structure is typically not within the equilibrium phase space of the simulation conditions, particularly in systems as complex as proteins, which can lead to artifactual trajectories of protein dynamics. The time at which nonequilibrium effects from the initial configuration are minimized—what we will call the equilibration time—marks the beginning of equilibrium phase-space exploration. Note that the identification of this time does not imply exploration of the entire equilibrium phase space. We have found that current equilibration methodologies contain ambiguities that lead to uncertainty in determining the end of the equilibration stage of the trajectory. This results in equilibration times that are either too long, resulting in wasted computational resources, or too short, resulting in the simulation of molecular trajectories that do not accurately represent the physical system. We outline and demonstrate a protocol for identifying the equilibration time that is based on the physical model of Normal Mode Analysis. We attain the computational efficiency required of large-protein simulations via a stretched exponential approximation that enables an analytically tractable and physically meaningful form of the root-mean-square deviation of atoms comprising the protein. We find that the fitting parameters (which correspond to physical properties of the protein) fluctuate initially but then stabilize for increased simulation time, independently of the simulation duration or sampling frequency. We define the end of the equilibration stage—and thus the equilibration time—as the point in the simulation when these parameters attain constant values. Compared to existing methods, our approach provides the objective identification of the time at which the simulated biomolecule has entered an energetic basin. For the representative protein considered, bovine pancreatic trypsin inhibitor, existing methods indicate a range of 0.2-10 ns of simulation until a local minimum is attained. Our approach identifies a substantially narrower range of 4.5-5.5 ns, which will lead to a much more objective choice of equilibration time.
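The sketch below illustrates the fitting-and-stabilization idea on synthetic RMSD data rather than an actual trajectory: a stretched exponential A(1 - exp(-(t/tau)^beta)) is fit over growing analysis windows, and the equilibration time is read off as the point beyond which the fitted parameters stop drifting. The data, noise level, and tolerance are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, a, tau, beta):
    return a * (1.0 - np.exp(-(t / tau) ** beta))

rng = np.random.default_rng(1)
t = np.linspace(0.01, 10.0, 1000)                       # ns
rmsd = stretched_exp(t, 1.5, 1.2, 0.6) + 0.03 * rng.normal(size=t.size)

prev = None
for t_end in np.arange(2.0, 10.5, 1.0):                 # growing analysis windows
    sel = t <= t_end
    p, _ = curve_fit(stretched_exp, t[sel], rmsd[sel],
                     p0=(1.0, 1.0, 0.7), maxfev=10000)
    drift = np.max(np.abs(p - prev) / np.abs(prev)) if prev is not None else np.inf
    print(f"window 0-{t_end:4.1f} ns  (A, tau, beta) = {np.round(p, 3)}"
          f"  max relative change = {drift:.3f}")
    prev = p
# The equilibration time can be taken as the first window beyond which the
# fitted parameters change by less than a few percent.
```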
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toltz, Allison; Hoesl, Michaela; Schuemann, Jan
Purpose: A method to refine the implementation of an in vivo, adaptive proton therapy range verification methodology was investigated. Simulation experiments and in-phantom measurements were compared to validate the calibration procedure of a time-resolved diode dosimetry technique. Methods: A silicon diode array system has been developed and experimentally tested in phantom for passively scattered proton beam range verification by correlating properties of the detector signal to the water equivalent path length (WEPL). The implementation of this system requires a set of calibration measurements to establish a beam-specific diode response to WEPL fit for the selected ‘scout’ beam in a solid water phantom. This process is both tedious, as it necessitates a separate set of measurements for every ‘scout’ beam that may be appropriate to the clinical case, and inconvenient due to limited access to the clinical beamline. The diode response to WEPL relationship for a given ‘scout’ beam may instead be determined within a simulation environment, facilitating the applicability of this dosimetry technique. Measurements for three ‘scout’ beams were compared against simulated detector response with Monte Carlo methods using the Tool for Particle Simulation (TOPAS). Results: Detector response in water equivalent plastic was successfully validated against simulation for spread out Bragg peaks of range 10 cm, 15 cm, and 21 cm (168 MeV, 177 MeV, and 210 MeV) with an adjusted R² of 0.998. Conclusion: Feasibility has been shown for performing calibration of detector response for a given ‘scout’ beam through simulation for the time-resolved diode dosimetry technique.
NASA Astrophysics Data System (ADS)
KIM, J.; Smith, M. B.; Koren, V.; Salas, F.; Cui, Z.; Johnson, D.
2017-12-01
The National Oceanic and Atmospheric Administration (NOAA)-National Weather Service (NWS) developed the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) framework as an initial step towards spatially distributed modeling at River Forecast Centers (RFCs). Recently, the NOAA/NWS worked with the National Center for Atmospheric Research (NCAR) to implement the National Water Model (NWM) for nationally consistent water resources prediction. The NWM is based on the WRF-Hydro framework and is run at a 1 km spatial resolution and a 1-hour time step over the contiguous United States (CONUS) and contributing areas in Canada and Mexico. In this study, we compare streamflow simulations from HL-RDHM and WRF-Hydro to observations from 279 USGS stations. For streamflow simulations, HL-RDHM is run on 4 km grids with a temporal resolution of 1 hour for a 5-year period (Water Years 2008-2012), using a priori parameters provided by NOAA-NWS. The WRF-Hydro streamflow simulations for the same time period are extracted from NCAR's 23-year retrospective run of the NWM (version 1.0) over CONUS on 1 km grids. We choose 279 USGS stations which are relatively less affected by dams or reservoirs, in the domains of six different RFCs. We use daily average values of simulations and observations for ease of comparison. The main purpose of this research is to evaluate how HL-RDHM and WRF-Hydro perform at USGS gauge stations. We compare daily time series of observations and both simulations, and calculate error values using a variety of error functions. Using these plots and error values, we evaluate the performance of the HL-RDHM and WRF-Hydro models. Our results show a mix of model performance across geographic regions.
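The abstract does not list the specific error functions used; the sketch below shows a few metrics commonly applied to daily simulated-versus-observed streamflow (Nash-Sutcliffe efficiency, percent bias, RMSE, correlation) on synthetic series, purely as an illustration of this kind of evaluation.

```python
import numpy as np

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(sim, obs):
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def rmse(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

rng = np.random.default_rng(0)
days = 5 * 365
obs = 10.0 + 5.0 * np.sin(2 * np.pi * np.arange(days) / 365.0) \
      + rng.gamma(2.0, 1.0, days)                      # toy daily discharge (m3/s)
sim = 0.9 * obs + rng.normal(0.0, 1.5, days)           # toy model output

print(f"NSE   = {nse(sim, obs):.2f}")
print(f"PBIAS = {pbias(sim, obs):+.1f} %")
print(f"RMSE  = {rmse(sim, obs):.2f} m3/s")
print(f"corr  = {np.corrcoef(sim, obs)[0, 1]:.2f}")
```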
Constant pressure and temperature discrete-time Langevin molecular dynamics
NASA Astrophysics Data System (ADS)
Grønbech-Jensen, Niels; Farago, Oded
2014-11-01
We present a new and improved method for simultaneous control of temperature and pressure in molecular dynamics simulations with periodic boundary conditions. The thermostat-barostat equations are built on our previously developed stochastic thermostat, which has been shown to provide correct statistical configurational sampling for any time step that yields stable trajectories. Here, we extend the method and develop a set of discrete-time equations of motion for both particle dynamics and system volume in order to seek pressure control that is insensitive to the choice of the numerical time step. The resulting method is simple, practical, and efficient. The method is demonstrated through direct numerical simulations of two characteristic model systems—a one-dimensional particle chain for which exact statistical results can be obtained and used as benchmarks, and a three-dimensional system of Lennard-Jones interacting particles simulated in both solid and liquid phases. The results, which are compared against the method of Kolb and Dünweg [J. Chem. Phys. 111, 4453 (1999)], show that the new method behaves according to the objective, namely that acquired statistical averages and fluctuations of configurational measures are accurate and robust against the chosen time step applied to the simulation.
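A minimal sketch of the constant-temperature building block only (the previously developed discrete-time Langevin thermostat that the combined thermostat-barostat extends), applied to a 1D harmonic oscillator with illustrative parameters. The check is that the configurational measure k<x^2> stays close to kT even at a fairly large time step.

```python
import numpy as np

rng = np.random.default_rng(2)
m, k, kT, alpha = 1.0, 1.0, 1.0, 0.5       # mass, spring, temperature, friction (assumed)
dt, steps = 0.5, 200000                    # deliberately large time step

b = 1.0 / (1.0 + alpha * dt / (2 * m))
a = (1.0 - alpha * dt / (2 * m)) / (1.0 + alpha * dt / (2 * m))

x, v = 0.0, 0.0
f = -k * x
xs = np.empty(steps)
for n in range(steps):
    beta = rng.normal(0.0, np.sqrt(2 * alpha * kT * dt))    # thermal noise impulse
    x_new = x + b * dt * v + b * dt**2 / (2 * m) * f + b * dt / (2 * m) * beta
    f_new = -k * x_new
    v = a * v + dt / (2 * m) * (a * f + f_new) + b / m * beta
    x, f = x_new, f_new
    xs[n] = x

print("k<x^2> =", round(k * float(np.mean(xs[steps // 2:] ** 2)), 3), " target kT =", kT)
```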
Soucek, Alexander; Ostkamp, Lutz; Paternesi, Roberta
2015-04-01
Space suit simulators are used for extravehicular activities (EVAs) during Mars analog missions. Flight planning and EVA productivity require accurate time estimates of activities to be performed with such simulators, such as experiment execution or traverse walking. We present a benchmarking methodology for the Aouda.X space suit simulator of the Austrian Space Forum. By measuring and comparing the times needed to perform a set of 10 test activities with and without Aouda.X, an average time delay was derived in the form of a multiplicative factor. This statistical value (a second-over-second time ratio) is 1.30 and shows that operations in Aouda.X take on average a third longer than the same operations without the suit. We also show that activities predominantly requiring fine motor skills are associated with larger time delays (between 1.17 and 1.59) than those requiring short-distance locomotion or short-term muscle strain (between 1.10 and 1.16). The results of the DELTA experiment performed during the MARS2013 field mission increase analog mission planning reliability and thus EVA efficiency and productivity when using Aouda.X.
Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...
2016-10-20
Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.
Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.
2016-01-01
Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics. PMID:27764187
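A minimal sketch of two Information Theory Quantifiers of the kind discussed above, applied to generic toy series (a chaotic logistic map versus white noise) rather than GPP data: the normalized Bandt-Pompe permutation entropy and a Jensen-Shannon-based complexity proxy. The normalization constant of the full statistical complexity measure is omitted for brevity.

```python
import numpy as np
from math import factorial
from itertools import permutations

def ordinal_distribution(x, d=4):
    """Relative frequency of each length-d ordinal (Bandt-Pompe) pattern."""
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        counts[tuple(np.argsort(x[i:i + d]).tolist())] += 1
    p = np.array(list(counts.values()), dtype=float)
    return p / p.sum()

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def itq(x, d=4):
    p = ordinal_distribution(x, d)
    h = shannon(p) / np.log(factorial(d))        # normalized permutation entropy
    u = np.full(len(p), 1.0 / len(p))            # uniform reference distribution
    jsd = shannon(0.5 * (p + u)) - 0.5 * shannon(p) - 0.5 * shannon(u)
    return h, h * jsd                            # entropy and a complexity proxy

rng = np.random.default_rng(0)
noise = rng.normal(size=5000)                    # structureless reference series
logistic = np.empty(5000)                        # deterministic chaotic series
logistic[0] = 0.4
for i in range(1, logistic.size):
    logistic[i] = 4.0 * logistic[i - 1] * (1.0 - logistic[i - 1])

for name, series in [("white noise", noise), ("logistic map", logistic)]:
    h, c = itq(series)
    print(f"{name:12s}  H = {h:.3f}   C = {c:.3f}")
```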
Evolving locomotion for a 12-DOF quadruped robot in simulated environments.
Klaus, Gordon; Glette, Kyrre; Høvin, Mats
2013-05-01
We demonstrate the power of evolutionary robotics (ER) by comparing to a more traditional approach its performance and cost on the task of simulated robot locomotion. A novel quadruped robot is introduced, the legs of which - each having three non-coplanar degrees of freedom - are very maneuverable. Using a simplistic control architecture and a physics simulation of the robot, gaits are designed both by hand and using a highly parallel evolutionary algorithm (EA). It is found that the EA produces, in a small fraction of the time it takes to design by hand, gaits that travel at two to four times the speed of the hand-designed one. The flexibility of this approach is demonstrated by applying it across a range of differently configured simulators. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
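A minimal sketch of such an evolutionary loop over the parameters of a sinusoidal gait controller (per-joint amplitude, frequency, phase). The fitness function here is a made-up surrogate so the loop is runnable; in the study it would be the forward speed returned by the physics simulation of the robot.

```python
import numpy as np

rng = np.random.default_rng(0)
n_joints = 12
dim = n_joints * 3                            # amplitude, frequency, phase per joint

def fitness(genome):
    """Stand-in for 'run the gait in the simulator and return forward speed'."""
    amp, freq, phase = genome[:12], genome[12:24], genome[24:]
    speed = np.sum(amp * np.sin(phase)) - 0.5 * np.sum((freq - 1.0) ** 2)
    return speed - 0.1 * np.sum(amp ** 2)     # penalize wasteful large motions

mu, lam, sigma, gens = 10, 40, 0.2, 100       # (mu + lambda) evolution strategy
pop = rng.normal(0.0, 1.0, (mu, dim))
for g in range(gens):
    parents = pop[rng.integers(0, mu, lam)]
    offspring = parents + sigma * rng.normal(0.0, 1.0, (lam, dim))   # mutation
    everyone = np.vstack([pop, offspring])
    scores = np.array([fitness(ind) for ind in everyone])
    pop = everyone[np.argsort(scores)[-mu:]]  # keep the mu best individuals

print("best surrogate speed:", round(fitness(pop[-1]), 3))
```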
Monte Carlo simulation of PET and SPECT imaging of ⁹⁰Y
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Akihiko, E-mail: takahsr@hs.med.kyushu-u.ac.jp; Sasaki, Masayuki; Himuro, Kazuhiko
2015-04-15
Purpose: Yttrium-90 (⁹⁰Y) is traditionally thought of as a pure beta emitter, and is used in targeted radionuclide therapy, with imaging performed using bremsstrahlung single-photon emission computed tomography (SPECT). However, because ⁹⁰Y also emits positrons through internal pair production with a very small branching ratio, positron emission tomography (PET) imaging is also available. Because of the insufficient image quality of ⁹⁰Y bremsstrahlung SPECT, PET imaging has been suggested as an alternative. In this paper, the authors present the Monte Carlo-based simulation–reconstruction framework for ⁹⁰Y to comprehensively analyze the PET and SPECT imaging techniques and to quantitatively consider the disadvantages associated with them. Methods: Our PET and SPECT simulation modules were developed using Monte Carlo simulation of Electrons and Photons (MCEP), developed by Dr. S. Uehara. The PET code (MCEP-PET) generates a sinogram, and reconstructs the tomography image using a time-of-flight ordered subset expectation maximization (TOF-OSEM) algorithm with attenuation compensation. To evaluate MCEP-PET, simulated results of ¹⁸F PET imaging were compared with the experimental results. The results confirmed that MCEP-PET can simulate the experimental results very well. The SPECT code (MCEP-SPECT) models the collimator and NaI detector system, and generates the projection images and projection data. To save computational time, the authors adopt the prerecorded ⁹⁰Y bremsstrahlung photon data calculated by MCEP. The projection data are also reconstructed using the OSEM algorithm. The authors simulated PET and SPECT images of a water phantom containing six hot spheres filled with different concentrations of ⁹⁰Y without background activity. The amount of activity was 163 MBq, with an acquisition time of 40 min. Results: The simulated ⁹⁰Y PET image accurately reproduced the experimental results. The PET image is visually superior to the SPECT image because of the low background noise. The simulation reveals that the detected photon number in SPECT is comparable to that of PET, but the large fraction (approximately 75%) of scattered and penetration photons contaminates the SPECT image. The lower limit of ⁹⁰Y detection in the SPECT image was approximately 200 kBq/ml, while that in the PET image was approximately 100 kBq/ml. Conclusions: By comparing the background noise level and the image concentration profile of both techniques, PET image quality was determined to be superior to that of bremsstrahlung SPECT. The developed simulation codes will be very useful in future investigations of PET and bremsstrahlung SPECT imaging of ⁹⁰Y.
Biodurability of chrysotile and tremolite asbestos
NASA Astrophysics Data System (ADS)
Oze, C.; Solt, K.
2008-12-01
Chrysotile and tremolite asbestos represent two mineralogical categories of regulated asbestos commonly evaluated in epidemiological, toxicological, and pathological studies. Lung and digestive fluids are undersaturated with respect to chrysotile and tremolite asbestos (i.e. dissolution is thermodynamically favorable), where the dissolution kinetics control the durability of these minerals in respiratory and gastric systems. Here we examined the biodurability of chrysotile and tremolite asbestos in simulated body fluids (SBFs) as a function of mineral surface area over time. Batch experiments in simulated gastric fluid (SGF; HCl and NaCl solution at pH 1.2) and simulated lung fluid (SLF; a modified Gamble's solution at pH 7.4) were performed at 37°C over 720 hours. The rate-limiting step of Si release for both minerals was used to determine and compare dissolution rates. Chrysotile and tremolite asbestos are less biodurable in SGF compared to SLF. Based on equal suspension densities (surface area per volume of solution, m² L⁻¹), chrysotile undergoes dissolution approximately 44 times faster than tremolite asbestos in SGF; however, amphibole asbestos dissolves approximately 6 times faster than chrysotile in SLF. Provided identical fiber dimensions, fiber dissolution models demonstrate that chrysotile is more biodurable in SLF and less biodurable in SGF compared to tremolite asbestos. Overall, the methodology employed here provides an alternative means to evaluate asbestos material fiber lifetimes based on mineral surface considerations.
Power estimation using simulations for air pollution time-series studies
2012-01-01
Background: Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Methods: Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results: In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. Conclusions: These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599
Power estimation using simulations for air pollution time-series studies.
Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt
2012-09-20
Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
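A minimal sketch of the simulation-based power estimation described above, with an assumed effect size, mean count, and covariates rather than the Atlanta data: daily counts are simulated from a Poisson model with a specified log-linear pollutant association, a Poisson GLM is refit to each simulated data set, and power is the fraction of fits with a significant pollutant term.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_days, n_sims, alpha = 2 * 365, 500, 0.05
beta_poll = 0.02                 # assumed log-rate increase per unit pollutant
mean_count = 30.0                # assumed mean daily ED visit count

t = np.arange(n_days)
season = np.column_stack([np.sin(2 * np.pi * k * t / 365.0) for k in (1, 2)] +
                         [np.cos(2 * np.pi * k * t / 365.0) for k in (1, 2)])
pollutant = 2.0 + 0.5 * season[:, 0] + rng.normal(0.0, 1.0, n_days)
X = sm.add_constant(np.column_stack([pollutant, season]))

hits = 0
for _ in range(n_sims):
    log_mu = np.log(mean_count) + beta_poll * pollutant + 0.1 * season[:, 0]
    y = rng.poisson(np.exp(log_mu))                      # simulated daily counts
    res = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    hits += res.pvalues[1] < alpha                       # pollutant term significant?

print(f"estimated power: {hits / n_sims:.2f}")
```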
Estimating short-period dynamics using an extended Kalman filter
NASA Technical Reports Server (NTRS)
Bauer, Jeffrey E.; Andrisani, Dominick
1990-01-01
An extended Kalman filter (EKF) is used to estimate the parameters of a low-order model from aircraft transient response data. The low-order model is a state space model derived from the short-period approximation of the longitudinal aircraft dynamics. The model corresponds to the pitch rate to stick force transfer function currently used in flying qualities analysis. Because of the model chosen, handling qualities information is also obtained. The parameters are estimated from flight data as well as from a six-degree-of-freedom, nonlinear simulation of the aircraft. These two estimates are then compared and the discrepancies noted. The low-order model is able to satisfactorily match both flight data and simulation data from a high-order computer simulation. The parameters obtained from the EKF analysis of flight data are compared to those obtained using frequency response analysis of the flight data. Time delays and damping ratios are compared and are in agreement. This technique demonstrates the potential to determine, in near real time, the extent of differences between computer models and the actual aircraft. Precise knowledge of these differences can help to determine the flying qualities of a test aircraft and lead to more efficient envelope expansion.
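A minimal sketch of the parameter-estimation idea, using a first-order pitch-rate model and illustrative values rather than the full short-period approximation or actual flight data: the state is augmented with the unknown model parameters, and the EKF estimates them from a simulated transient response to a doublet-like input.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.02, 1000
a_true, b_true = -2.0, 3.0                   # q_dot = a*q + b*delta (unknown to the filter)
delta = np.where((np.arange(n) * dt) % 4.0 < 2.0, 1.0, -1.0)   # doublet-like stick input

# Generate "flight data": true response plus measurement noise.
q = np.zeros(n)
for k in range(1, n):
    q[k] = q[k - 1] + dt * (a_true * q[k - 1] + b_true * delta[k - 1])
z = q + 0.05 * rng.normal(size=n)

x = np.array([0.0, -1.0, 1.0])               # augmented state [q, a, b], initial guesses
P = np.diag([1.0, 4.0, 4.0])
Q = np.diag([1e-4, 1e-6, 1e-6])              # process noise (parameters ~ constant)
R = 0.05 ** 2
H = np.array([[1.0, 0.0, 0.0]])              # only q is measured

for k in range(1, n):
    qk, a, b = x
    x_pred = np.array([qk + dt * (a * qk + b * delta[k - 1]), a, b])
    F = np.array([[1.0 + dt * a, dt * qk, dt * delta[k - 1]],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])           # Jacobian of the prediction step
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T / S                           # Kalman gain (scalar innovation)
    x = x_pred + (K * (z[k] - x_pred[0])).ravel()
    P = (np.eye(3) - K @ H) @ P

print("estimated a, b:", np.round(x[1:], 2), " true:", (a_true, b_true))
```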
Turbulent flame spreading mechanisms after spark ignition
NASA Astrophysics Data System (ADS)
Subramanian, V.; Domingo, Pascale; Vervisch, Luc
2009-12-01
Numerical simulation of forced ignition is performed in the framework of Large-Eddy Simulation (LES) combined with a tabulated detailed chemistry approach. The objective is to reproduce the flame properties observed in a recent experimental work reporting the probability of ignition in a laboratory-scale burner operating with a methane/air non-premixed mixture [1]. The smallest scales of chemical phenomena, which are unresolved by the LES grid, are approximated with a flamelet model combined with presumed probability density functions, to account for the unresolved part of turbulent fluctuations of species and temperature. One-dimensional flamelets are simulated using GRI-3.0 [2] and tabulated under a set of parameters describing the local mixing and progress of reaction. A non-reacting case was simulated first to study the unsteady velocity and mixture fields. The time-averaged velocity and mixture fraction, and their respective turbulent fluctuations, are compared against the experimental measurements in order to estimate the prediction capabilities of LES. The time history of the axial and radial components of velocity and of the mixture fraction is cumulated and analysed for different burner regimes. Based on this information, spark ignition is mimicked at selected ignition spots and the dynamics of kernel development is analyzed and compared against the experimental observations. The possible link between the success or failure of the ignition and the flow conditions (in terms of velocity and composition) at the sparking time is then explored.
Comparison of methods of alert acknowledgement by critical care clinicians in the ICU setting
Harrison, Andrew M.; Thongprayoon, Charat; Aakre, Christopher A.; Jeng, Jack Y.; Dziadzko, Mikhail A.; Gajic, Ognjen; Pickering, Brian W.
2017-01-01
Background: Electronic Health Record (EHR)-based sepsis alert systems have failed to demonstrate improvements in clinically meaningful endpoints. However, the effect of implementation barriers on the success of new sepsis alert systems is rarely explored. Objective: To test the hypothesis that time to severe sepsis alert acknowledgement by critical care clinicians in the ICU setting would be reduced using an EHR-based alert acknowledgement system compared to a text paging-based system. Study Design: In one arm of this simulation study, real alerts for patients in the medical ICU were delivered to critical care clinicians through the EHR. In the other arm, simulated alerts were delivered through text paging. The primary outcome was time to alert acknowledgement. The secondary outcomes were a structured, mixed quantitative/qualitative survey and informal group interview. Results: The alert acknowledgement rate from the severe sepsis alert system was 3% (N = 148) and 51% (N = 156) from simulated severe sepsis alerts through traditional text paging. Time to alert acknowledgement from the severe sepsis alert system was median 274 min (N = 5) and median 2 min (N = 80) from text paging. The response rate from the EHR-based alert system was insufficient to compare primary measures. However, secondary measures revealed important barriers. Conclusion: Alert fatigue, interruption, human error, and information overload are barriers to alert and simulation studies in the ICU setting. PMID:28316887
Comparison of methods of alert acknowledgement by critical care clinicians in the ICU setting.
Harrison, Andrew M; Thongprayoon, Charat; Aakre, Christopher A; Jeng, Jack Y; Dziadzko, Mikhail A; Gajic, Ognjen; Pickering, Brian W; Herasevich, Vitaly
2017-01-01
Electronic Health Record (EHR)-based sepsis alert systems have failed to demonstrate improvements in clinically meaningful endpoints. However, the effect of implementation barriers on the success of new sepsis alert systems is rarely explored. To test the hypothesis that time to severe sepsis alert acknowledgement by critical care clinicians in the ICU setting would be reduced using an EHR-based alert acknowledgement system compared to a text paging-based system. In one arm of this simulation study, real alerts for patients in the medical ICU were delivered to critical care clinicians through the EHR. In the other arm, simulated alerts were delivered through text paging. The primary outcome was time to alert acknowledgement. The secondary outcomes were a structured, mixed quantitative/qualitative survey and informal group interview. The alert acknowledgement rate from the severe sepsis alert system was 3% (N = 148) and 51% (N = 156) from simulated severe sepsis alerts through traditional text paging. Time to alert acknowledgement from the severe sepsis alert system was median 274 min (N = 5) and median 2 min (N = 80) from text paging. The response rate from the EHR-based alert system was insufficient to compare primary measures. However, secondary measures revealed important barriers. Alert fatigue, interruption, human error, and information overload are barriers to alert and simulation studies in the ICU setting.
Signatures Of Coronal Heating Driven By Footpoint Shuffling: Closed and Open Structures.
NASA Astrophysics Data System (ADS)
Velli, M. C. M.; Rappazzo, A. F.; Dahlburg, R. B.; Einaudi, G.; Ugarte-Urra, I.
2017-12-01
We have previously described the characteristic state of the confined coronal magnetic field as a special case of magnetically dominated magnetohydrodynamic (MHD) turbulence, where the free energy in the transverse magnetic field is continuously cascaded to small scales, even though the overall kinetic energy is small. This coronal turbulence problem is defined by the photospheric boundary conditions: here we discuss recent numerical simulations of the fully compressible 3D MHD equations using the HYPERION code. Loops are forced at their footpoints by random photospheric motions, energizing the field to a state with continuous formation and dissipation of field-aligned current sheets: energy is deposited at small scales where heating occurs. Only a fraction of the coronal mass and volume gets heated at any time. Temperature and density are highly structured at scales that, in the solar corona, remain observationally unresolved: the plasma of simulated loops is multithermal, where highly dynamical hotter and cooler plasma strands are scattered throughout the loop at sub-observational scales. We will also compare Reduced MHD simulations with fully compressible simulations, and photospheric forcings with different timescales compared to the Alfvén transit time. Finally, we will discuss the differences between the closed field and open field (solar wind) turbulence heating problem, leading to observational consequences that may be amenable to Parker Solar Probe and Solar Orbiter.
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure for integrating dynamic system equations when using a digital computer in real time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
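A minimal sketch of the contrast described above, assuming a constant (high) angular rate over each step and representing the classical second-order method by a midpoint step: the local-linearization step uses the closed-form exponential of the quaternion rate matrix and preserves the quaternion norm, while the second-order step accumulates norm drift at high rates.

```python
import numpy as np

def omega_matrix(w):
    p, q, r = w
    return np.array([[0.0, -p, -q, -r],
                     [p, 0.0,  r, -q],
                     [q, -r, 0.0,  p],
                     [r,  q, -p, 0.0]])

def step_local_linearization(quat, w, dt):
    """Closed-form exp of the (frozen) rate matrix over one step."""
    h = 0.5 * np.linalg.norm(w) * dt
    M = np.cos(h) * np.eye(4) + np.sin(h) / np.linalg.norm(w) * omega_matrix(w)
    return M @ quat

def step_second_order(quat, w, dt):
    """Classical second-order (midpoint) step on q_dot = 0.5*Omega(w)*q."""
    f = lambda q: 0.5 * omega_matrix(w) @ q
    return quat + dt * f(quat + 0.5 * dt * f(quat))

w = np.array([12.0, -16.0, 0.0])       # high angular rate, |w| = 20 rad/s (assumed)
dt, steps = 0.02, 5000
q_ll = np.array([1.0, 0.0, 0.0, 0.0])
q_2nd = q_ll.copy()
for _ in range(steps):
    q_ll = step_local_linearization(q_ll, w, dt)
    q_2nd = step_second_order(q_2nd, w, dt)

print("norm drift, local linearization:", abs(np.linalg.norm(q_ll) - 1.0))
print("norm drift, classical 2nd order:", abs(np.linalg.norm(q_2nd) - 1.0))
```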
CFD simulation of mechanical draft tube mixing in anaerobic digester tanks.
Meroney, Robert N; Colorado, P E
2009-03-01
Computational Fluid Dynamics (CFD) was used to simulate the mixing characteristics of four different circular anaerobic digester tanks (diameters of 13.7, 21.3, 30.5, and 33.5m) equipped with single and multiple draft impeller tube mixers. Rates of mixing of step and slug injection of tracers were calculated from which digester volume turnover time (DVTT), mixture diffusion time (MDT), and hydraulic retention time (HRT) could be calculated. Washout characteristics were compared to analytic formulae to estimate any presence of partial mixing, dead volume, short-circuiting, or piston flow. CFD satisfactorily predicted performance of both model and full-scale circular tank configurations.
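A minimal sketch of the washout-curve analysis mentioned above, on a synthetic slug-injection response with assumed tank volume and flow: the observed mean residence time is compared with the nominal HRT to infer a dead-volume fraction, and an early peak relative to the ideal completely mixed response flags short-circuiting.

```python
import numpy as np

V, Qf = 5000.0, 250.0                     # tank volume (m3) and flow (m3/h), assumed
hrt = V / Qf                              # nominal hydraulic retention time (h)
t = np.linspace(0.0, 100.0, 2001)         # hours after slug injection

# Synthetic residence time distribution: an early short-circuiting spike plus
# complete mixing over only ~85% of the volume (the behavior to be uncovered).
active = 0.85 * hrt
E = 0.15 * np.exp(-((t - 0.5) / 0.3) ** 2) + np.exp(-t / active) / active
E /= np.trapz(E, t)

t_mean = np.trapz(t * E, t)               # observed mean residence time
dead_fraction = 1.0 - t_mean / hrt        # volume effectively not participating
E_ideal = np.exp(-t / hrt) / hrt          # ideal completely mixed (CSTR) response
early = t < 0.05 * hrt
short_circuit = E[early].max() > 2.0 * E_ideal[early].max()

print(f"nominal HRT         : {hrt:.1f} h")
print(f"mean residence time : {t_mean:.1f} h")
print(f"apparent dead volume: {100 * dead_fraction:.0f} %")
print(f"short-circuiting?   : {short_circuit}")
```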
Switching synchronization in one-dimensional memristive networks
NASA Astrophysics Data System (ADS)
Slipko, Valeriy A.; Shumovskyi, Mykola; Pershin, Yuriy V.
2015-11-01
We report on a switching synchronization phenomenon in one-dimensional memristive networks, which occurs when several memristive systems with different switching constants are switched from the high- to low-resistance state. Our numerical simulations show that such a collective behavior is especially pronounced when the applied voltage slightly exceeds the combined threshold voltage of memristive systems. Moreover, a finite increase in the network switching time is found compared to the average switching time of individual systems. An analytical model is presented to explain our observations. Using this model, we have derived asymptotic expressions for memory resistances at short and long times, which are in excellent agreement with results of our numerical simulations.
Discrete time modelization of human pilot behavior
NASA Technical Reports Server (NTRS)
Cavalli, D.; Soulatges, D.
1975-01-01
This modelization starts from the following hypotheses: the pilot's behavior is a discrete-time process, he can perform only one task at a time, and his operating mode depends on the flight subphase under consideration. The pilot's behavior was observed using an electro-oculometer and a simulator cockpit. A FORTRAN program was developed using two strategies. The first is a Markovian process in which the successive instrument readings are governed by a matrix of conditional probabilities. In the second, the strategy is a heuristic process and the concepts of mental load and performance are described. The results of the two approaches have been compared with simulation data.
Shear wave arrival time estimates correlate with local speckle pattern.
McAleavey, Stephen A.; Osapoetra, Laurentius O.; Langdon, Jonathan
2015-12-01
We present simulation and phantom studies demonstrating a strong correlation between errors in shear wave arrival time estimates and the lateral position of the local speckle pattern in targets with fully developed speckle. We hypothesize that the observed arrival time variations are largely due to the underlying speckle pattern, and call the effect speckle bias. Arrival time estimation is a key step in quantitative shear wave elastography, performed by tracking tissue motion via cross-correlation of RF ultrasound echoes or similar methods. Variations in scatterer strength and interference of echoes from scatterers within the tracking beam result in an echo that does not necessarily describe the average motion within the beam, but one favoring areas of constructive interference and strong scattering. A swept-receive image, formed by fixing the transmit beam and sweeping the receive aperture over the region of interest, is used to estimate the local speckle pattern. Metrics for the lateral position of the speckle are found to correlate strongly (r > 0.7) with the estimated shear wave arrival times both in simulations and in phantoms. Lateral weighting of the swept-receive pattern improved the correlation between arrival time estimates and speckle position. The simulations indicate that high RF echo correlation does not equate to an accurate shear wave arrival time estimate: a high correlation coefficient indicates that motion is being tracked with high precision, but the location tracked is uncertain within the tracking beam width. The presence of a strong on-axis speckle is seen to imply high RF correlation and low bias. The converse does not appear to be true: highly correlated RF echoes can still produce biased arrival time estimates. The shear wave arrival time bias is relatively stable with variations in shear wave amplitude and sign (-20 μm to 20 μm simulated) compared with the variation with different speckle realizations obtained along a given tracking vector. We show that the arrival time bias is weakly dependent on shear wave amplitude compared with the variation with axial position/local speckle pattern. Apertures of f/3 to f/8 on transmit and f/2 and f/4 on receive were simulated. Arrival time error and correlation with speckle pattern are most strongly determined by the receive aperture.
Shear Wave Arrival Time Estimates Correlate with Local Speckle Pattern
McAleavey, Stephen A.; Osapoetra, Laurentius O.; Langdon, Jonathan
2016-01-01
We present simulation and phantom studies demonstrating a strong correlation between errors in shear wave arrival time estimates and the lateral position of the local speckle pattern in targets with fully developed speckle. We hypothesize that the observed arrival time variations are largely due to the underlying speckle pattern, and call the effect speckle bias. Arrival time estimation is a key step in quantitative shear wave elastography, performed by tracking tissue motion via cross correlation of RF ultrasound echoes or similar methods. Variations in scatterer strength and interference of echoes from scatterers within the tracking beam result in an echo that does not necessarily describe the average motion within the beam, but one favoring areas of constructive interference and strong scattering. A swept-receive image, formed by fixing the transmit beam and sweeping the receive aperture over the region of interest, is used to estimate the local speckle pattern. Metrics for the lateral position of the speckle are found to correlate strongly (r>0.7) with the estimated shear wave arrival times both in simulations and in phantoms. Lateral weighting of the swept-receive pattern improved the correlation between arrival time estimates and speckle position. The simulations indicate that high RF echo correlation does not equate to an accurate shear wave arrival time estimate – a high correlation coefficient indicates that motion is being tracked with high precision, but the location tracked is uncertain within the tracking beam width. The presence of a strong on-axis speckle is seen to imply high RF correlation and low bias. The converse does not appear to be true – highly correlated RF echoes can still produce biased arrival time estimates. The shear wave arrival time bias is relatively stable with variations in shear wave amplitude and sign (−20 μm to 20 μm simulated) compared to the variation with different speckle realizations obtained along a given tracking vector. We show that the arrival time bias is weakly dependent on shear wave amplitude compared to the variation with axial position/local speckle pattern. Apertures of f/3 to f/8 on transmit and f/2 and f/4 on receive were simulated. Arrival time error and correlation with speckle pattern are most strongly determined by the receive aperture. PMID:26670847
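The arrival-time step described above can be illustrated with a toy example: synthetic displacement-versus-time traces at two lateral positions, with the arrival time taken as the time-to-peak displacement and the shear speed recovered from the known spacing. The pulse shape, speed, spacing, and noise level are illustrative assumptions; the speckle-induced bias studied in the paper is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 20.0, 0.05)        # time axis, ms
c_true, dx = 2.0, 2.0                 # shear speed (mm/ms) and lateral spacing (mm), illustrative

def displacement(arrival):
    """Gaussian shear pulse plus a little tracking noise."""
    return np.exp(-0.5 * ((t - arrival) / 0.8) ** 2) + 0.05 * rng.standard_normal(t.size)

u_a = displacement(5.0)                     # trace at the first lateral position
u_b = displacement(5.0 + dx / c_true)       # trace one beam spacing further out

t_a, t_b = t[np.argmax(u_a)], t[np.argmax(u_b)]   # time-to-peak arrival estimates
print(f"estimated shear speed: {dx / (t_b - t_a):.2f} mm/ms (true {c_true})")
```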
Performance of ICTP's RegCM4 in Simulating the Rainfall Characteristics over the CORDEX-SEA Domain
NASA Astrophysics Data System (ADS)
Neng Liew, Ju; Tangang, Fredolin; Tieh Ngai, Sheau; Chung, Jing Xiang; Narisma, Gemma; Cruz, Faye Abigail; Phan Tan, Van; Thanh, Ngo-Duc; Santisirisomboon, Jerasron; Milindalekha, Jaruthat; Singhruck, Patama; Gunawan, Dodo; Satyaningsih, Ratna; Aldrian, Edvin
2015-04-01
The performance of RegCM4 in simulating rainfall variations over the Southeast Asia region was examined. Different combinations of six deep convective parameterization schemes, namely i) the Grell scheme with the Arakawa-Schubert closure assumption, ii) the Grell scheme with the Fritsch-Chappell closure assumption, iii) the Emanuel MIT scheme, iv) a mixed scheme with the Emanuel MIT scheme over the ocean and the Grell scheme over the land, v) a mixed scheme with the Grell scheme over the ocean and the Emanuel MIT scheme over the land, and vi) the Kuo scheme, and three ocean flux treatments were tested. In order to account for uncertainties among the observation products, four different gridded rainfall products were used for comparison. The simulated climate is generally drier over the equatorial regions and slightly wetter over mainland Indo-China compared to the observations. However, simulations with the MIT cumulus scheme over the land consistently produce large positive rainfall biases, although they simulate more realistic annual rainfall variations. The simulations are found to be less sensitive to the treatment of ocean fluxes. Although the simulations reproduce the rainfall climatology well, all of them simulate much stronger interannual variability than observed. Nevertheless, the time evolution of the interannual variations is well reproduced, particularly over the eastern part of the maritime continent. Over mainland Southeast Asia (SEA), unrealistic rainfall anomaly processes are simulated. The lack of summer-season air-sea interaction results in strong oceanic forcing over the region, leading to positive rainfall anomalies during years with warm ocean temperature anomalies. This imposes much stronger atmospheric forcing on the land surface processes than observed. A score ranking system was designed to rank the simulations according to their performance in reproducing different aspects of the rainfall characteristics. The result suggests that the simulation with the Emanuel MIT convective scheme and the BATS land surface scheme gives the best collective performance compared to the rest of the simulations.
Dunn, John C; Belmont, Philip J; Lanzi, Joseph; Martin, Kevin; Bader, Julia; Owens, Brett; Waterman, Brian R
2015-01-01
Surgical education is evolving as work hour constraints limit the exposure of residents to the operating room. Potential consequences may include erosion of resident education and decreased quality of patient care. Surgical simulation training has become a focus of study in an effort to counter these challenges. Previous studies have validated the use of arthroscopic surgical simulation programs both in vitro and in vivo. However, no study has examined if the gains made by residents after a simulation program are retained after a period away from training. In all, 17 orthopedic surgery residents were randomized into simulation or standard practice groups. All subjects were oriented to the arthroscopic simulator, a 14-point anatomic checklist, and Arthroscopic Surgery Skill Evaluation Tool (ASSET). The experimental group received 1 hour of simulation training whereas the control group had no additional training. All subjects performed a recorded, diagnostic arthroscopy intraoperatively. These videos were scored by 2 blinded, fellowship-trained orthopedic surgeons and outcome measures were compared within and between the groups. After 1 year in which neither group had exposure to surgical simulation training, all residents were retested intraoperatively and scored in the exact same fashion. Individual surgical case logs were reviewed and surgical case volume was documented. There was no difference between the 2 groups after initial simulation testing and there was no correlation between case volume and initial scores. After training, the simulation group improved as compared with baseline in mean ASSET (p = 0.023) and mean time to completion (p = 0.01). After 1 year, there was no difference between the groups in any outcome measurements. Although individual technical skills can be cultivated with surgical simulation training, these advancements can be lost without continued education. It is imperative that residency programs implement a simulation curriculum and continue to train throughout the academic year. Published by Elsevier Inc.
Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross
2016-06-01
To develop and evaluate a fast and simple tool called dpetstep (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in matlab using both new and previously reported modules of petstep (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). dpetstep was 8000 times faster than MC. Dynamic images from dpetstep had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dpetstep and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dpetstep images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dpetstep images and noise properties agreed better with MC. The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dpetstep to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. dpetstep can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
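A highly simplified sketch of the forward idea described above: a time-activity curve is generated per "voxel" from kinetic parameters and Poisson counting noise is added per frame. The one-tissue-compartment model, input function, and count scaling are illustrative assumptions, and the projection/reconstruction chain of dpetstep is omitted.

```python
import numpy as np

# One-tissue-compartment model C(t) = K1 * conv(Cp, exp(-k2 t)) per voxel,
# then Poisson counting noise per frame -- a toy stand-in for the
# full projection/reconstruction chain described in the abstract.
dt = 1.0                                   # frame length, min
t = np.arange(0.0, 60.0, dt)
Cp = 10.0 * t * np.exp(-t / 2.0)           # illustrative plasma input function

def tac(K1, k2):
    return K1 * np.convolve(Cp, np.exp(-k2 * t))[: t.size] * dt

K1_map = np.array([0.1, 0.3, 0.5])         # three "voxels" with different kinetics
k2_map = np.array([0.05, 0.10, 0.15])
rng = np.random.default_rng(2)

frames = []
for K1, k2 in zip(K1_map, k2_map):
    counts = np.maximum(tac(K1, k2), 0.0) * 50.0      # scale to mean counts per frame
    frames.append(rng.poisson(counts))                # counting noise per frame

print(np.array(frames)[:, :5])             # first five noisy frames per voxel
```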
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
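A toy version of the multi-level idea on a pure-decay system X -> 0: a cheap coarse tau-leap estimator plus one correction term estimated from coupled coarse/fine paths that share Poisson random numbers, in the spirit of the Anderson-Higham coupling. Rates, step sizes, and sample counts are illustrative, and the adaptive time-stepping introduced in the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
c, X0, T = 0.1, 100, 10.0                     # decay rate, initial copy number, final time

def tau_leap(x0, tau):
    """Plain tau-leap path for X -> 0 with propensity c*X."""
    x, t = x0, 0.0
    while t < T:
        x = max(x - rng.poisson(c * x * tau), 0)
        t += tau
    return x

def coupled_pair_difference(x0, tau_f):
    """Fine-minus-coarse sample from a coupled pair sharing Poisson variates."""
    xf = xc = x0
    for _ in range(int(round(T / (2 * tau_f)))):
        ac = c * xc                           # coarse propensity frozen over two fine steps
        for _ in range(2):
            af = c * xf
            m = min(af, ac)
            shared = rng.poisson(m * tau_f)   # firings common to both paths
            xf = max(xf - shared - rng.poisson((af - m) * tau_f), 0)
            xc = max(xc - shared - rng.poisson((ac - m) * tau_f), 0)
    return xf - xc

base = np.mean([tau_leap(X0, 1.0) for _ in range(2000)])                 # coarse level
corr = np.mean([coupled_pair_difference(X0, 0.5) for _ in range(500)])   # correction level
print(f"multilevel estimate of E[X(T)]: {base + corr:.2f}   (exact: {X0 * np.exp(-c * T):.2f})")
```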
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing analysis of complicated geometrical structures. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with an exact computation of the logarithm and trigonometric functions as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight, and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
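The flavour of this optimization can be illustrated by replacing the logarithm in the photon step-length sampling, s = -ln(ξ)/μt, with a low-order polynomial evaluated on a reduced mantissa range. The polynomial degree, fitting range, and attenuation coefficient below are illustrative and are not the approximations used in the paper.

```python
import numpy as np

# Fit a low-order polynomial to ln(m) on the mantissa range [0.5, 1),
# then evaluate ln(x) via x = m * 2**e  ->  ln(x) = ln(m) + e*ln(2).
m = np.linspace(0.5, 1.0, 201)
coef = np.polyfit(m, np.log(m), 3)

def approx_log(x):
    mant, expo = np.frexp(x)                 # x = mant * 2**expo, mant in [0.5, 1)
    return np.polyval(coef, mant) + expo * np.log(2.0)

mu_t = 10.0                                  # total attenuation coefficient, 1/cm (illustrative)
xi = np.random.default_rng(4).random(100_000)
steps = -approx_log(xi) / mu_t               # photon step lengths from the approximation
print("mean step length (cm):", steps.mean())
print("max |ln error|:", np.max(np.abs(approx_log(xi) - np.log(xi))))
```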
Slope stability effects of fuel management strategies – inferences from Monte Carlo simulations
R. M. Rice; R. R. Ziemer; S. C. Hankin
1982-01-01
A simple Monte Carlo simulation evaluated the effect of several fire management strategies on soil slip erosion and wildfires. The current condition was compared to (1) a very intensive fuelbreak system without prescribed fires, and (2) prescribed fire at four time intervals with (a) current fuelbreaks and (b) intensive fuel-breaks. The intensive fuelbreak system...
Dynamic finite element analysis and moving particle simulation of human enamel on a microscale.
Yamaguchi, Satoshi; Coelho, Paulo G; Thompson, Van P; Tovar, Nick; Yamauchi, Junpei; Imazato, Satoshi
2014-12-01
The study of the biomechanics of deformation and fracture of hard biological tissues involving an organic matrix remains a challenge, as variations in mechanical properties and fracture mode may be time-dependent. Finite element analysis (FEA) has been widely used, but its shortcomings, such as the long computation time owing to re-meshing when simulating fracture mechanics, have warranted the development of alternative computational methods with higher throughput. The aim of this study was to compare dynamic two-dimensional FEA and moving particle simulation (MPS), assuming a plane strain condition, in the modeling of human enamel on a reduced scale. Two-dimensional models with the same geometry were developed for MPS and FEA and tested in tension generated with a single step of displacement. The displacement, velocity, pressure, and stress levels were compared and Spearman's rank-correlation coefficients R were calculated (p<0.001). The MPS and FEA results were significantly correlated for displacement, velocity, pressure, and Y-stress. MPS may be further developed as an alternative approach without mesh generation to simulate deformation and fracture phenomena of dental and potentially other hard tissues with complex microstructure. Copyright © 2014 Elsevier Ltd. All rights reserved.
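For reference, the rank-correlation comparison used above amounts to a Pearson correlation computed on ranks; a tie-free sketch with synthetic FEA/MPS samples is shown below (the data are made up, not the study's).

```python
import numpy as np

def spearman(a, b):
    """Spearman's rank correlation: Pearson correlation of the ranks (no tie handling)."""
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(9)
fea = np.linspace(0.0, 1.0, 50) + 0.05 * rng.standard_normal(50)   # e.g. FEA Y-stress samples
mps = np.linspace(0.0, 1.0, 50) + 0.05 * rng.standard_normal(50)   # corresponding MPS samples
print(f"Spearman R = {spearman(fea, mps):.3f}")
```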
Gyrokinetic simulations of particle transport in pellet fuelled JET discharges
NASA Astrophysics Data System (ADS)
Tegnered, D.; Oberparleiter, M.; Nordman, H.; Strand, P.; Garzotti, L.; Lupelli, I.; Roach, C. M.; Romanelli, M.; Valovič, M.; Contributors, JET
2017-10-01
Pellet injection is a likely fuelling method for reactor-grade plasmas. When the pellet ablates, it transiently perturbs the density and temperature profiles of the plasma. This in turn changes dimensionless parameters such as a/L_n, a/L_T, and plasma β. The microstability properties of the plasma then change, which influences the transport of heat and particles. In this paper, gyrokinetic simulations of a JET L-mode pellet-fuelled discharge are performed. The ion temperature gradient/trapped electron mode turbulence is compared at the time point when the effect of the pellet is most pronounced, with a hollow density profile, and when the profiles have relaxed again. Linear and nonlinear simulations are performed using the gyrokinetic code GENE, including electromagnetic effects and collisions, in a realistic geometry in local mode. Furthermore, global nonlinear simulations are performed in order to assess any nonlocal effects. It is found that the positive density gradient has a stabilizing effect that is partly counteracted by the increased temperature gradient in this region. The effective diffusion coefficients are reduced in the positive density gradient region compared to the intra-pellet time point. No major effect on the turbulent transport due to nonlocal effects is observed.
NASA Astrophysics Data System (ADS)
Herrmann, M.; Velikovich, A. L.; Abarzhi, S. I.
2014-10-01
A study of incompressible two-dimensional Richtmyer-Meshkov instability by means of high-order Eulerian perturbation theory and numerical simulations is reported. Nonlinear corrections to Richtmyer's impulsive formula for the bubble and spike growth rates have been calculated analytically for arbitrary Atwood number, and an explicit formula has been obtained in the Boussinesq limit. Conditions for early-time acceleration and deceleration of the bubble and the spike have been derived. In our simulations we have solved the 2D unsteady Navier-Stokes equations for immiscible incompressible fluids using the finite volume fractional step flow solver NGA coupled to the level set based interface solver LIT. The impact of small amounts of viscosity and surface tension on the RMI flow dynamics is studied numerically. Simulation results are compared to the theory to demonstrate successful code verification and highlight the influence of the theory's ideal inviscid flow assumption. Theoretical time histories of the interface curvature at the bubble and spike tips and the profiles of vertical and horizontal velocities compare favorably to simulation results, which converge to the theoretical predictions as the Reynolds and Weber numbers are increased. Work supported by the US DOE/NNSA.
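For orientation, Richtmyer's impulsive formula gives the initial (linear) growth rate of a single-mode perturbation as v0 = k A Δu a0; the sketch below evaluates it for illustrative parameters. The higher-order nonlinear corrections derived in the work above are not reproduced here.

```python
import numpy as np

# Richtmyer's impulsive-model growth rate for a single-mode perturbation,
# v0 = k * A * du * a0, with Atwood number A and initial amplitude a0.
# Values below are illustrative, and the paper's corrections are omitted.
rho1, rho2 = 1.0, 3.0                 # light / heavy fluid densities
A = (rho2 - rho1) / (rho2 + rho1)     # Atwood number
lam = 1.0                             # perturbation wavelength
k = 2.0 * np.pi / lam                 # wavenumber
a0, du = 0.01, 1.0                    # initial amplitude and impulsive velocity jump

v0 = k * A * du * a0                  # impulsive (linear) growth rate
t = np.linspace(0.0, 0.5, 6)
print("linear amplitude a(t) = a0 + v0*t:", np.round(a0 + v0 * t, 4))
```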
NASA Technical Reports Server (NTRS)
Chevalier, Christine T.; Herrmann, Kimberly A.; Kory, Carol L.; Wilson, Jeffrey D.; Cross, Andrew W.; Santana, Samuel
2003-01-01
The electromagnetic field simulation software package CST MICROWAVE STUDIO (MWS) was used to compute the cold-test parameters - frequency-phase dispersion, on-axis impedance, and attenuation - for a traveling-wave tube (TWT) slow-wave circuit. The results were compared to experimental data, as well as to results from MAFIA, another three-dimensional simulation code from CST currently used at the NASA Glenn Research Center (GRC). The strong agreement between cold-test parameters simulated with MWS and those measured experimentally demonstrates the potential of this code to reduce the time and cost of TWT development.
NASA Astrophysics Data System (ADS)
Colombant, Denis; Manheimer, Wallace
2008-11-01
The Krook model described in the previous talk has been incorporated into a fluid simulation. These fluid simulations are then compared with Fokker Planck simulations and also with a recent NRL Nike experiment. We also examine several other models for electron energy transport that have been used in laser fusion research. As regards comparison with Fokker Planck simulation, the Krook model gives better agreement than the other models, especially in the time asymptotic limit. As regards the NRL experiment, all models except one give reasonable agreement.
How can we deal with ANN in flood forecasting? As a simulation model or updating kernel!
NASA Astrophysics Data System (ADS)
Hassan Saddagh, Mohammad; Javad Abedini, Mohammad
2010-05-01
Flood forecasting and early warning, as a non-structural measure for flood control, is often considered the most effective and suitable alternative to mitigate the damage and human loss caused by floods. Forecast results, which are the output of hydrologic, hydraulic, and/or black-box models, should be accurate in both flood magnitude and timing, especially for long lead times. The application of artificial neural networks (ANN) in flood forecasting has received extensive attention in recent years due to their capability to capture the dynamics inherent in complex processes, including floods. However, results obtained from running a plain ANN as a simulation model show a dramatic reduction in performance indices as the lead time increases. This paper monitors the performance indices for flood forecasting and early warning using two different methodologies. The first method employs a multilayer neural network trained with a back-propagation scheme to forecast the output hydrograph of a hypothetical river for forecast lead times up to 6.0 h. The second method uses the 1D hydrodynamic MIKE11 model as the forecasting model and a multilayer neural network as an updating kernel, and assesses the performance indices relative to the ANN alone as the lead time increases. Results, presented in both graphical and tabular form, indicate the superiority of MIKE11 coupled with an ANN updating kernel over the ANN used as a simulation model alone. While the plain ANN produces more accurate results for short lead times, its errors increase rapidly for longer lead times. The second methodology provides more accurate and reliable results for longer forecast lead times.
Soil Carbon Residence Time in the Arctic - Potential Drivers of Past and Future Change
NASA Astrophysics Data System (ADS)
Huntzinger, D. N.; Fisher, J.; Schwalm, C. R.; Hayes, D. J.; Stofferahn, E.; Hantson, W.; Schaefer, K. M.; Fang, Y.; Michalak, A. M.; Wei, Y.
2017-12-01
Carbon residence time is one of the most important factors controlling carbon cycling in ecosystems. Residence time depends on carbon allocation and conversion among various carbon pools and the rate of organic matter decomposition; all of which rely on environmental conditions, primarily temperature and soil moisture. As a result, residence time is an emergent property of models and a strong determinant of terrestrial carbon storage capacity. However, residence time is poorly constrained in process-based models due, in part, to the lack of data with which to benchmark global-scale models in order to guide model improvements and, ultimately, reduce uncertainty in model projections. Here we focus on improving the understanding of the drivers to observed and simulated carbon residence time in the Arctic-Boreal region (ABR). Carbon-cycling in the ABR represents one of the largest sources of uncertainty in historical and future projections of land-atmosphere carbon dynamics. This uncertainty is depicted in the large spread of terrestrial biospheric model (TBM) estimates of carbon flux and ecosystem carbon pool size in this region. Recent efforts, such as the Arctic-Boreal Vulnerability Experiment (ABoVE), have increased the availability of spatially explicit in-situ and remotely sensed carbon and ecosystem focused data products in the ABR. Together with simulations from Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP), we use these observations to evaluate the ability of models to capture soil carbon stocks and changes in the ABR. Specifically, we compare simulated versus observed soil carbon residence times in order to evaluate the functional response and sensitivity of modeled soil carbon stocks to changes in key environmental drivers. Understanding how simulated carbon residence time compares with observations and what drives these differences is critical for improving projections of changing carbon dynamics in the ABR and globally.
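In its simplest steady-state form, the residence time discussed above is just the ratio of a carbon stock to the flux leaving it; the numbers in the sketch below are purely illustrative and are not ABoVE or MsTMIP values.

```python
# Steady-state approximation: residence time = carbon stock / output flux.
# Numbers below are purely illustrative, not ABoVE or MsTMIP values.
soil_carbon_stock = 60.0      # kg C per m^2
heterotrophic_resp = 0.25     # kg C per m^2 per year (decomposition flux)

residence_time = soil_carbon_stock / heterotrophic_resp
print(f"apparent soil carbon residence time: {residence_time:.0f} years")
```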
Atmospheric Dispersion Modeling of the February 2014 Waste Isolation Pilot Plant Release
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nasstrom, John; Piggott, Tom; Simpson, Matthew
2015-07-22
This report presents the results of a simulation of the atmospheric dispersion and deposition of radioactivity released from the Waste Isolation Pilot Plant (WIPP) site in New Mexico in February 2014. These simulations were made by the National Atmospheric Release Advisory Center (NARAC) at Lawrence Livermore National Laboratory (LLNL), and supersede NARAC simulation results published in a previous WIPP report (WIPP, 2014). The results presented in this report use additional, more detailed data from WIPP on the specific radionuclides released, radioactivity release amounts and release times. Compared to the previous NARAC simulations, the new simulation results in this report are based on more detailed modeling of the winds, turbulence, and particle dry deposition. In addition, the initial plume rise from the exhaust vent was considered in the new simulations, but not in the previous NARAC simulations. The new model results show some small differences compared to previous results, but do not change the conclusions in the WIPP (2014) report. Presented are the data and assumptions used in these model simulations, as well as the model-predicted dose and deposition on and near the WIPP site. A comparison of predicted and measured radionuclide-specific air concentrations is also presented.
Bui, Huu Phuoc; Tomar, Satyendra; Courtecuisse, Hadrien; Audette, Michel; Cotin, Stéphane; Bordas, Stéphane P A
2018-05-01
An error-controlled mesh refinement procedure for needle insertion simulations is presented. As an example, the procedure is applied for simulations of electrode implantation for deep brain stimulation. We take into account the brain shift phenomena occurring when a craniotomy is performed. We observe that the error in the computation of the displacement and stress fields is localised around the needle tip and the needle shaft during needle insertion simulation. By suitably and adaptively refining the mesh in this region, our approach enables to control, and thus to reduce, the error whilst maintaining a coarser mesh in other parts of the domain. Through academic and practical examples we demonstrate that our adaptive approach, as compared with a uniform coarse mesh, increases the accuracy of the displacement and stress fields around the needle shaft and, while for a given accuracy, saves computational time with respect to a uniform finer mesh. This facilitates real-time simulations. The proposed methodology has direct implications in increasing the accuracy, and controlling the computational expense of the simulation of percutaneous procedures such as biopsy, brachytherapy, regional anaesthesia, or cryotherapy. Moreover, the proposed approach can be helpful in the development of robotic surgeries because the simulation taking place in the control loop of a robot needs to be accurate, and to occur in real time. Copyright © 2018 John Wiley & Sons, Ltd.
A reduced basis method for molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Vincent-Finley, Rachel Elisabeth
In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid body and large-scale motions, occur within a range of nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use principal component analysis (PCA) to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity with respect to our simulation method is analogous to that observed in the standard MD simulation with simulations on the order of picoseconds.
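The PCA step described above can be sketched as follows: mean-centre the frame-by-coordinate trajectory matrix, take an SVD, and keep the leading components as the reduced basis. The synthetic trajectory and the number of retained components are illustrative; the updating scheme and the reduced equations of motion of the dissertation are not shown.

```python
import numpy as np

# PCA of a (synthetic) trajectory: frames x coordinates matrix, mean-centred,
# decomposed by SVD; the leading components form the reduced basis.
rng = np.random.default_rng(5)
n_frames, n_coords = 500, 30                   # e.g. 10 atoms x 3 coordinates
slow = np.sin(np.linspace(0, 6 * np.pi, n_frames))[:, None]      # one slow collective motion
traj = slow * rng.standard_normal((1, n_coords)) + 0.1 * rng.standard_normal((n_frames, n_coords))

X = traj - traj.mean(axis=0)                   # remove the average structure
U, S, Vt = np.linalg.svd(X, full_matrices=False)
var = S**2 / np.sum(S**2)
print("variance captured by first 3 PCs:", np.round(var[:3], 3))

reduced = X @ Vt[:3].T                         # project each frame onto 3 principal components
print("reduced trajectory shape:", reduced.shape)
```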
Experimental validation of the TOPAS Monte Carlo system for passive scattering proton therapy
Testa, M.; Schümann, J.; Lu, H.-M.; Shin, J.; Faddegon, B.; Perl, J.; Paganetti, H.
2013-01-01
Purpose: TOPAS (TOol for PArticle Simulation) is a particle simulation code recently developed with the specific aim of making Monte Carlo simulations user-friendly for research and clinical physicists in the particle therapy community. The authors present a thorough and extensive experimental validation of Monte Carlo simulations performed with TOPAS in a variety of setups relevant for proton therapy applications. The set of validation measurements performed in this work represents an overall end-to-end testing strategy recommended for all clinical centers planning to rely on TOPAS for quality assurance or patient dose calculation and, more generally, for all the institutions using passive-scattering proton therapy systems. Methods: The authors systematically compared TOPAS simulations with measurements that are performed routinely within the quality assurance (QA) program in our institution as well as experiments specifically designed for this validation study. First, the authors compared TOPAS simulations with measurements of depth-dose curves for spread-out Bragg peak (SOBP) fields. Second, absolute dosimetry simulations were benchmarked against measured machine output factors (OFs). Third, the authors simulated and measured 2D dose profiles and analyzed the differences in terms of field flatness and symmetry and usable field size. Fourth, the authors designed a simple experiment using a half-beam shifter to assess the effects of multiple Coulomb scattering, beam divergence, and inverse square attenuation on lateral and longitudinal dose profiles measured and simulated in a water phantom. Fifth, TOPAS' capability to simulate time-dependent beam delivery was benchmarked against dose rate functions (i.e., dose per unit time vs time) measured at different depths inside an SOBP field. Sixth, simulations of the charge deposited by protons fully stopping in two different types of multilayer Faraday cups (MLFCs) were compared with measurements to benchmark the nuclear interaction models used in the simulations. Results: SOBP range and modulation width were reproduced, on average, with an accuracy of +1, −2 and ±3 mm, respectively. OF simulations reproduced measured data within ±3%. Simulated 2D dose profiles show field flatness and average field radius within ±3% of measured profiles. The field symmetry resulted, on average, in ±3% agreement with commissioned profiles. TOPAS accuracy in reproducing measured dose profiles downstream of the half-beam shifter is better than 2%. Dose rate function simulations reproduced the measurements within ∼2%, showing that the four-dimensional modeling of the passive modulation system was implemented correctly and that millimeter accuracy can be achieved in reproducing measured data. For MLFC simulations, 2% agreement was found between TOPAS and both sets of experimental measurements. The overall results show that TOPAS simulations are within the clinically accepted tolerances for all QA measurements performed at our institution. Conclusions: Our Monte Carlo simulations accurately reproduced the experimental data acquired through all the measurements performed in this study. Thus, TOPAS can reliably be applied to quality assurance for proton therapy and also as an input for commissioning of commercial treatment planning systems. This work also provides the basis for routine clinical dose calculations in patients for all passive scattering proton therapy centers using TOPAS. PMID:24320505
NASA Astrophysics Data System (ADS)
Garrigues, S.; Olioso, A.; Calvet, J.-C.; Lafont, S.; Martin, E.; Chanzy, A.; Marloie, O.; Bertrand, N.; Desfonds, V.; Renard, D.
2012-04-01
Vegetation productivity and the water balance of Mediterranean regions will be particularly affected by climate and land-use changes. In order to analyze and predict these changes through land surface models, a critical step is to quantify the uncertainties associated with these models (processes, parameters) and their implementation over a long period of time. Besides, uncertainties attached to the data used to force these models (atmospheric forcing, vegetation and soil characteristics, crop management practices...), which are generally available at coarse spatial resolution (>1-10 km) and for a limited number of plant functional types, need to be evaluated. This paper aims at assessing the uncertainties in water (evapotranspiration) and energy fluxes estimated from a Soil Vegetation Atmosphere Transfer (SVAT) model over a Mediterranean agricultural site. While similar past studies focused on particular crop types and limited periods of time, the originality of this paper lies in implementing the SVAT model and assessing its uncertainties over a long period of time (10 years), encompassing several cycles of distinct crops (wheat, sorghum, sunflower, peas). The impacts on the SVAT simulations of the following sources of uncertainty are characterized: - Uncertainties in atmospheric forcing are assessed by comparing simulations forced with local meteorological measurements and simulations forced with a re-analysis atmospheric dataset (SAFRAN database). - Uncertainties in key surface characteristics (soil, vegetation, crop management practices) are tested by comparing simulations fed with standard values from global databases (e.g. ECOCLIMAP) and simulations based on in situ or site-calibrated values. - Uncertainties due to the implementation of the SVAT model over a long period of time are analyzed with regard to crop rotation. The SVAT model analyzed in this paper is ISBA in its A-gs version, which simulates photosynthesis and its coupling with stomatal conductance, as well as the time course of the plant biomass and the Leaf Area Index (LAI). The experiment was conducted at the INRA-Avignon (France) crop site (ICOS associated site), for which 10 years of energy and water eddy fluxes, soil moisture profiles, vegetation measurements, and agricultural practices are available for distinct crop types. The uncertainties in evapotranspiration and energy flux estimates are quantified from both a 10-year trend analysis and selected daily cycles spanning a range of atmospheric conditions and phenological stages. While the net radiation flux is correctly simulated, the cumulated latent heat flux is under-estimated. Daily plots indicate i) an overestimation of evapotranspiration over bare soil, probably due to an overestimation of the soil water reservoir available for evaporation, and ii) an under-estimation of transpiration for developed canopies. Uncertainties attached to the re-analysis atmospheric data show little influence on the cumulated values of evapotranspiration. Better performance is reached using in situ soil depths and site-calibrated photosynthesis parameters compared to simulations based on the ECOCLIMAP standard values. Finally, this paper highlights the impact of the temporal succession of vegetation cover and bare soil on the simulation of soil moisture and evapotranspiration over a long period of time. Thus, solutions to account for crop rotation in the implementation of SVAT models are discussed.
Chalouhi, Gihad E; Bernardi, Valeria; Gueneuc, Alexandra; Houssin, Isabelle; Stirnemann, Julien J; Ville, Yves
2016-04-01
Evaluation of trainees' ability in obstetrical ultrasound is a time-consuming process, which requires involving patients as volunteers. With the use of obstetrical ultrasound simulators, virtual reality could help in assessing competency and evaluating trainees in this field. The objective of the study was to test the validity of an obstetrical ultrasound simulator as a tool for evaluating trainees following structured training by comparing scores obtained on the obstetrical ultrasound simulator with those obtained on volunteers and by assessing correlations between the scores for images and for dexterity given by 2 blinded examiners. Trainees taking the 2013 French national examination for the practice of obstetrical ultrasound were asked to obtain standardized ultrasound planes both on volunteer pregnant women and on an obstetrical ultrasound simulator. These planes included measurements of biparietal diameter, abdominal circumference, and femur length as well as reference planes for cardiac 4-chamber and outflow tracts, kidneys, stomach/diaphragm, spine, and face. Images were stored and evaluated subsequently by 2 national examiners who scored each picture according to previously established quality criteria. Dexterity was also evaluated and subjectively scored between 0 and 10. Raghunathan's modification of the Pearson-Filon z test, Spearman's rank correlation, and analysis of variance tests were used to assess correlations between the scores of the 2 examiners and the scores for dexterity, and also to compare the final scores between the 2 different methods. We evaluated 29 trainees. The mean dexterity scores in simulation (6.5 ± 2.0) and real examination (5.9 ± 2.3) were comparable (P = .31). Scores with the obstetrical ultrasound simulator were significantly higher than those obtained on volunteers (P = .027). Nevertheless, there was a good correlation between the scores of the 2 examiners judging on simulation (R = 0.888) and on volunteers (R = 0.873) (P = .81). An obstetrical ultrasound simulator is as good a method as volunteer-based examination for evaluating practical skills in trainees following structured training in obstetrical ultrasound. The threshold for success/failure should, however, be adapted as candidates obtain higher scores on the simulator. Advantages of the obstetrical ultrasound simulator include the absence of location and time constraints without the need to involve volunteers or to interfere with the running of ultrasound clinics. However, an obstetrical ultrasound simulator still lacks the ability to evaluate the trainees' ability to interact with patients. Copyright © 2016 Elsevier Inc. All rights reserved.
Lutgendorf, Monica A; Spalding, Carmen; Drake, Elizabeth; Spence, Dennis; Heaton, Jason O; Morocco, Kristina V
2017-03-01
Postpartum hemorrhage is a common obstetric emergency affecting 3 to 5% of deliveries, with significant maternal morbidity and mortality. Effective management of postpartum hemorrhage requires strong teamwork and collaboration. We completed a multidisciplinary in situ postpartum hemorrhage simulation training exercise with structured team debriefing to evaluate hospital protocols, team performance, operational readiness, and real-time identification of system improvements. Our objective was to assess participant comfort with managing obstetric hemorrhage following our multidisciplinary in situ simulation training exercise. This was a quality improvement project that utilized a comprehensive multidisciplinary in situ postpartum hemorrhage simulation exercise. Participants from the Departments of Obstetrics and Gynecology, Anesthesia, Nursing, Pediatrics, and Transfusion Services completed the training exercise in 16 scenarios run over 2 days. The intervention was a high fidelity, multidisciplinary in situ simulation training to evaluate hospital protocols, team performance, operational readiness, and system improvements. Structured debriefing was conducted with the participants to discuss communication and team functioning. Our main outcome measure was participant self-reported comfort levels for managing postpartum hemorrhage before and after simulation training. A 5-point Likert scale (1 being very uncomfortable and 5 being very comfortable) was used to measure participant comfort. A paired t test was used to assess differences in participant responses before and after the simulation exercise. We also measured the time to prepare simulated blood products and followed the number of postpartum hemorrhage cases before and after the simulation exercise. We trained 113 health care professionals including obstetricians, midwives, residents, anesthesiologists, nurse anesthetists, nurses, and medical assistants. Participants reported a higher comfort level in managing obstetric emergencies and postpartum hemorrhage after simulation training compared to before training. For managing hypertensive emergencies, the post-training mean score was 4.14 compared to a pretraining mean score of 3.88 (p = 0.01, 95% confidence interval [CI] = 0.06-0.47). For shoulder dystocia, the post-training mean score was 4.29 compared to a pretraining mean score of 3.66 (p = 0.001, 95% CI = 0.41-0.88). For postpartum hemorrhage, the post-training mean score was 4.35 compared to pretraining mean score of 3.86 (p = 0.001, 95% CI = 0.36-0.63). We also observed a decrease in the time to prepare simulated blood products over the course of the simulation, and a decreasing trend of postpartum hemorrhage cases, which continued after initiating the postpartum hemorrhage simulation exercise. Postpartum hemorrhage remains a leading cause of maternal morbidity and mortality in the United States. Comprehensive hemorrhage protocols have been shown to improve outcomes related to postpartum hemorrhage, and a critical component in these processes include communication, teamwork, and team-based practice/simulation. As medicine becomes increasingly complex, the ability to practice in a safe setting is ever more critical, especially for low-volume, high-stakes events such as postpartum hemorrhage. These events require well-functioning teams and systems coupled with rapid assessment and appropriate clinical action to ensure best patient outcomes. 
We have shown that a multidisciplinary in situ simulation exercise improves self-reported comfort with managing obstetric emergencies, and is a safe and effective way to practice skills and improve systems processes in the health care setting. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
Detailed Validation Assessment of Turbine Stage Disc Cavity Rotating Flows
NASA Astrophysics Data System (ADS)
Kanjiyani, Shezan
The subject of this thesis is the amount of cooling air assigned to seal high-pressure turbine rim cavities, which is critical for performance as well as component life. Insufficient air leads to excessive hot annulus gas ingestion and its penetration deep into the cavity, compromising disc life. Excessive purge air adversely affects performance. Experiments on a rotating turbine stage rig which included a rotor-stator forward disc cavity were performed at Arizona State University. The turbine rig has 22 vanes and 28 blades, while the rim cavity is composed of a single-tooth rim lab seal and a rim platform overlap seal. Time-averaged static pressures were measured in the gas path and the cavity, while mainstream gas ingestion into the cavity was determined by measuring the concentration distribution of tracer gas (carbon dioxide). Additionally, particle image velocimetry (PIV) was used to measure fluid velocity inside the rim cavity between the lab seal and the overlap. The data from the experiments were compared to 360-degree unsteady RANS (URANS) CFD simulations. Although not able to match the time-averaged test data satisfactorily, the CFD simulations brought to light the unsteadiness present in the flow during the experiment which the slower-response data did not fully capture. To interrogate the validity of URANS simulations in capturing complex rotating flow physics, the scope of this work also included validating the CFD tool by comparing its predictions against experimental LDV data in a closed rotor-stator cavity. The enclosed cavity has a stationary shroud and a rotating hub, and mass flow does not enter or exit the system. A full 360-degree numerical simulation was performed comparing Fluent LES with URANS turbulence models. Results from these investigations point to state-of-the-art URANS under-predicting the closed-cavity tangential velocity by 32% to 43%, and the open rim cavity effectiveness by 50%, compared to test data. The goal of this thesis is to assess the validity of URANS turbulence models in more complex rotating flows, compare their accuracy with LES simulations, suggest CFD settings to better simulate turbine stage mainstream/disc cavity interaction with ingestion, and recommend experimentation techniques.
Mahy, Caitlin E V; Voigt, Babett; Ballhausen, Nicola; Schnitzspahn, Katharina; Ellis, Judi; Kliegel, Matthias
2015-01-01
The present study investigated whether developmental changes in cognitive control may underlie improvements of time-based prospective memory. Five-, 7-, 9-, and 11-year-olds (N = 166) completed a driving simulation task (ongoing task) in which they had to refuel their vehicle at specific points in time (PM task). The availability of cognitive control resources was experimentally manipulated by imposing a secondary task that required divided attention. Children completed the driving simulation task both in a full-attention condition and a divided-attention condition where they had to carry out a secondary task. Results revealed that older children performed better than younger children on the ongoing task and PM task. Children performed worse on the ongoing and PM tasks in the divided-attention condition compared to the full-attention condition. With respect to time monitoring in the final interval prior to the PM target, divided attention interacted with age such that older children's time monitoring was more negatively affected by the secondary task compared to younger children. Results are discussed in terms of developmental shifts from reactive to proactive monitoring strategies.
Day, Lukejohn W; Belson, David; Dessouky, Maged; Hawkins, Caitlin; Hogan, Michael
2014-11-01
Improvements in endoscopy center efficiency are needed, but scant data are available. To identify opportunities to improve patient throughput while balancing resource use and patient wait times in a safety-net endoscopy center. Safety-net endoscopy center. Outpatients undergoing endoscopy. A time and motion study was performed and a discrete event simulation model constructed to evaluate multiple scenarios aimed at improving endoscopy center efficiency. Procedure volume and patient wait time. Data were collected on 278 patients. Time and motion study revealed that 53.8 procedures were performed per week, with patients spending 2.3 hours at the endoscopy center. By using discrete event simulation modeling, a number of proposed changes to the endoscopy center were assessed. Decreasing scheduled endoscopy appointment times from 60 to 45 minutes led to a 26.4% increase in the number of procedures performed per week, but also increased patient wait time. Increasing the number of endoscopists by 1 each half day resulted in increased procedure volume, but there was a concomitant increase in patient wait time and nurse utilization exceeding capacity. By combining several proposed scenarios together in the simulation model, the greatest improvement in performance metrics was created by moving patient endoscopy appointments from the afternoon to the morning. In this simulation at 45- and 40-minute appointment times, procedure volume increased by 30.5% and 52.0% and patient time spent in the endoscopy center decreased by 17.4% and 13.0%, respectively. The predictions of the simulation model were found to be accurate when compared with actual changes implemented in the endoscopy center. Findings may not be generalizable to non-safety-net endoscopy centers. The combination of minor, cost-effective changes such as reducing appointment times, minimizing and standardizing recovery time, and making small increases in preprocedure ancillary staff maximized endoscopy center efficiency across a number of performance metrics. Copyright © 2014 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
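A bare-bones discrete-event sketch of the appointment-length scenario: patients arrive at scheduled slots, a limited number of procedure rooms serve them, and mean wait and day length are tracked as the slot length shrinks. The arrival pattern, procedure-time distribution, and room count are illustrative assumptions, not the study's calibrated model.

```python
import heapq
import numpy as np

def simulate_day(appt_min, n_rooms=1, n_patients=24, seed=6):
    """Toy discrete-event model: scheduled arrivals, limited rooms, random procedure times."""
    rng = np.random.default_rng(seed)
    arrivals = np.arange(n_patients) * appt_min                    # scheduled arrival times, min
    durations = rng.gamma(shape=9.0, scale=5.0, size=n_patients)   # ~45 min mean, illustrative
    free_at = [0.0] * n_rooms                                      # when each room becomes free
    heapq.heapify(free_at)
    waits = []
    for arrive, dur in zip(arrivals, durations):
        room_free = heapq.heappop(free_at)                         # earliest available room
        start = max(arrive, room_free)
        waits.append(start - arrive)
        heapq.heappush(free_at, start + dur)
    return np.mean(waits), max(free_at)

for appt in (60, 45, 40):
    mean_wait, day_end = simulate_day(appt)
    print(f"{appt}-min slots: mean wait {mean_wait:5.1f} min, last patient done at {day_end:6.1f} min")
```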
Timing performance of phase-locked loops in optical pulse position modulation communication systems
NASA Astrophysics Data System (ADS)
Lafaw, D. A.
In an optical digital communication system, an accurate clock signal must be available at the receiver to provide proper synchronization with the transmitted signal. Phase synchronization is especially critical in M-ary pulse position modulation (PPM) systems where the optimum decision scheme is an energy detector which compares the energy in each of M time slots to decide which of M possible words was sent. A timing error causes energy spillover into adjacent time slots (a form of intersymbol interference) so that only a portion of the signal energy may be attributed to the correct time slot. This effect decreases the effective signal, increases the effective noise, and increases the probability of error. This report simulates a timing subsystem for a satellite-to-satellite optical PPM communication link. The receiver employs direct photodetection, preprocessing of the optical signal, and a phase-locked loop for timing synchronization. The photodetector output is modeled as a filtered, doubly stochastic Poisson shot noise process. The variance of the relative phase error is examined under varying signal strength conditions as an indication of loop performance, and simulation results are compared to theoretical relations.
Simulation trainer for practicing emergent open thoracotomy procedures.
Hamilton, Allan J; Prescher, Hannes; Biffar, David E; Poston, Robert S
2015-07-01
An emergent open thoracotomy (OT) is a high-risk, low-frequency procedure uniquely suited for simulation training. We developed a cost-effective Cardiothoracic (CT) Surgery trainer and assessed its potential for improving technical and interprofessional skills during an emergent simulated OT. We modified a commercially available mannequin torso with artificial tissue models to create a custom CT Surgery trainer. The trainer's feasibility for simulating emergent OT was tested using a multidisciplinary CT team in three consecutive in situ simulations. Five discretely observable milestones were identified as requisite steps in carrying out an emergent OT; namely (1) diagnosis and declaration of a code situation, (2) arrival of the code cart, (3) arrival of the thoracotomy tray, (4) initiation of the thoracotomy incision, and (5) defibrillation of a simulated heart. The time required for a team to achieve each discrete step was measured by an independent observer over the course of each OT simulation trial and compared. Over the course of the three OT simulation trials conducted in the coronary care unit, there was an average reduction of 29.5% (P < 0.05) in the times required to achieve the five critical milestones. The time required to complete the whole OT procedure improved by 7 min and 31 s from the initial to the final trial-an overall improvement of 40%. In our preliminary evaluation, the CT Surgery trainer appears to be useful for improving team performance during a simulated emergent bedside OT in the coronary care unit. Copyright © 2015 Elsevier Inc. All rights reserved.
Zhang, Fang; Wagner, Anita K; Ross-Degnan, Dennis
2011-11-01
Interrupted time series is a strong quasi-experimental research design to evaluate the impacts of health policy interventions. Using simulation methods, we estimated the power requirements for interrupted time series studies under various scenarios. Simulations were conducted to estimate the power of segmented autoregressive (AR) error models when autocorrelation ranged from -0.9 to 0.9 and effect size was 0.5, 1.0, and 2.0, investigating balanced and unbalanced numbers of time periods before and after an intervention. Simple scenarios of autoregressive conditional heteroskedasticity (ARCH) models were also explored. For AR models, power increased when sample size or effect size increased, and tended to decrease when autocorrelation increased. Compared with a balanced number of study periods before and after an intervention, designs with unbalanced numbers of periods had less power, although that was not the case for ARCH models. The power to detect effect size 1.0 appeared to be reasonable for many practical applications with a moderate or large number of time points in the study equally divided around the intervention. Investigators should be cautious when the expected effect size is small or the number of time points is small. We recommend conducting various simulations before investigation. Copyright © 2011 Elsevier Inc. All rights reserved.
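A stripped-down version of the power simulation described above: segmented-regression series with AR(1) errors are generated repeatedly and the level-change coefficient is tested. For brevity the fit here is ordinary least squares with a normal-approximation test rather than the segmented autoregressive error models of the study, and all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def its_power(n_pre=24, n_post=24, effect=1.0, rho=0.3, sigma=1.0, n_sim=2000):
    """Approximate power to detect a level change in a segmented-regression ITS design."""
    n = n_pre + n_post
    time = np.arange(n)
    level = (time >= n_pre).astype(float)                 # step at the intervention
    trend = np.maximum(time - n_pre + 1, 0).astype(float) # post-intervention slope change
    X = np.column_stack([np.ones(n), time, level, trend])
    XtX_inv = np.linalg.inv(X.T @ X)
    hits = 0
    for _ in range(n_sim):
        e = np.zeros(n)
        e[0] = rng.normal(0, sigma)
        for t in range(1, n):                             # AR(1) errors
            e[t] = rho * e[t - 1] + rng.normal(0, sigma)
        y = 10.0 + 0.05 * time + effect * level + e
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        s2 = resid @ resid / (n - X.shape[1])
        se_level = np.sqrt(s2 * XtX_inv[2, 2])
        hits += abs(beta[2] / se_level) > 1.96            # reject at the 5% level
    return hits / n_sim

print("approx. power (effect = 1 SD, 24 + 24 time points):", its_power())
```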
Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro
2003-04-15
Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using a specially designed hardware. Four custom arithmetic-processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 582-592, 2003
Cheng, Adam; Brown, Linda L; Duff, Jonathan P; Davidson, Jennifer; Overly, Frank; Tofil, Nancy M; Peterson, Dawn T; White, Marjorie L; Bhanji, Farhan; Bank, Ilana; Gottesman, Ronald; Adler, Mark; Zhong, John; Grant, Vincent; Grant, David J; Sudikoff, Stephanie N; Marohn, Kimberly; Charnovich, Alex; Hunt, Elizabeth A; Kessler, David O; Wong, Hubert; Robertson, Nicola; Lin, Yiqun; Doan, Quynh; Duval-Arnould, Jordan M; Nadkarni, Vinay M
2015-02-01
The quality of cardiopulmonary resuscitation (CPR) affects hemodynamics, survival, and neurological outcomes following pediatric cardiopulmonary arrest (CPA). Most health care professionals fail to perform CPR within established American Heart Association guidelines. To determine whether "just-in-time" (JIT) CPR training with visual feedback (VisF) before CPA or real-time VisF during CPA improves the quality of chest compressions (CCs) during simulated CPA. Prospective, randomized, 2 × 2 factorial-design trial with explicit methods (July 1, 2012, to April 15, 2014) at 10 International Network for Simulation-Based Pediatric Innovation, Research, & Education (INSPIRE) institutions running a standardized simulated CPA scenario, including 324 CPR-certified health care professionals assigned to 3-person resuscitation teams (108 teams). Each team was randomized to 1 of 4 permutations, including JIT training vs no JIT training before CPA and real-time VisF vs no real-time VisF during simulated CPA. The proportion of CCs with depth exceeding 50 mm, the proportion of CPR time with a CC rate of 100 to 120 per minute, and CC fraction (percentage CPR time) during simulated CPA. The quality of CPR was poor in the control group, with 12.7% (95% CI, 5.2%-20.1%) mean depth compliance and 27.1% (95% CI, 14.2%-40.1%) mean rate compliance. JIT training compared with no JIT training improved depth compliance by 19.9% (95% CI, 11.1%-28.7%; P < .001) and rate compliance by 12.0% (95% CI, 0.8%-23.2%; P = .037). Visual feedback compared with no VisF improved depth compliance by 15.4% (95% CI, 6.6%-24.2%; P = .001) and rate compliance by 40.1% (95% CI, 28.8%-51.3%; P < .001). Neither intervention had a statistically significant effect on CC fraction, which was excellent (>89.0%) in all groups. Combining both interventions showed the highest compliance with American Heart Association guidelines but was not significantly better than either intervention in isolation. The quality of CPR provided by health care professionals is poor. Using novel and practical technology, JIT training before CPA or real-time VisF during CPA, alone or in combination, improves compliance with American Heart Association guidelines for CPR that are associated with better outcomes. clinicaltrials.gov Identifier: NCT02075450.
Transient Nonequilibrium Molecular Dynamic Simulations of Thermal Conductivity: 1. Simple Fluids
NASA Astrophysics Data System (ADS)
Hulse, R. J.; Rowley, R. L.; Wilding, W. V.
2005-01-01
Thermal conductivity has been previously obtained from molecular dynamics (MD) simulations using either equilibrium (EMD) simulations (from Green-Kubo equations) or from steady-state nonequilibrium (NEMD) simulations. In the case of NEMD, either boundary-driven steady states are simulated or constrained equations of motion are used to obtain steady-state heat transfer rates. Like their experimental counterparts, these nonequilibrium steady-state methods are time consuming and may have convection problems. Here we report a new transient method developed to provide accurate thermal conductivity predictions from MD simulations. In the proposed MD method, molecules that lie within a specified volume are instantaneously heated. The temperature decay of the system of molecules inside the heated volume is compared to the solution of the transient energy equation, and the thermal diffusivity is regressed. Since the density of the fluid is set in the simulation, only the isochoric heat capacity is needed in order to obtain the thermal conductivity. In this study the isochoric heat capacity is determined from energy fluctuations within the simulated fluid. The method is valid in the liquid, vapor, and critical regions. Simulated values for the thermal conductivity of a Lennard-Jones (LJ) fluid were obtained using this new method over a temperature range of 90 to 900 K and a density range of 1-35 kmol·m⁻³. These values compare favorably with experimental values for argon. The new method has a precision of ±10%. Compared to other methods, the algorithm is quick, easy to code, and applicable to small systems, making the simulations very efficient.
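To make the last step of this workflow concrete, the following sketch estimates the isochoric heat capacity from total-energy fluctuations and combines it with a regressed thermal diffusivity to give k = (rho c_v) alpha. The sampled energies, temperature, volume, and diffusivity below are hypothetical placeholders, not values from the paper.

    import numpy as np

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def volumetric_heat_capacity(energies, temperature, volume):
        """Isochoric heat capacity per unit volume from total-energy fluctuations
        (NVT ensemble): C_v = (<E^2> - <E>^2) / (k_B T^2), then divide by V."""
        e = np.asarray(energies, dtype=float)
        c_v_total = e.var() / (K_B * temperature**2)
        return c_v_total / volume

    # Hypothetical sampled energies and a regressed diffusivity, for illustration only.
    rng = np.random.default_rng(1)
    energies = -3.2e-17 + 2.0e-20 * rng.standard_normal(10000)                    # J
    rho_cv = volumetric_heat_capacity(energies, temperature=150.0, volume=1.0e-25)  # J/(K m^3)
    alpha = 5.0e-8                                                                # regressed thermal diffusivity, m^2/s
    k = rho_cv * alpha                                                            # thermal conductivity, W/(m K)
    print(k)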
Extended Magnetohydrodynamics with Embedded Particle-in-Cell Simulation of Ganymede's Magnetosphere
NASA Technical Reports Server (NTRS)
Toth, Gabor; Jia, Xianzhe; Markidis, Stefano; Peng, Ivy Bo; Chen, Yuxi; Daldorff, Lars K. S.; Tenishev, Valeriy M.; Borovikov, Dmitry; Haiducek, John D.; Gombosi, Tamas I.;
2016-01-01
We have recently developed a new modeling capability to embed the implicit particle-in-cell (PIC) model iPIC3D into the Block-Adaptive-Tree-Solarwind-Roe-Upwind-Scheme magnetohydrodynamic (MHD) model. The MHD with embedded PIC domains (MHD-EPIC) algorithm is a two-way coupled kinetic-fluid model. As one of the very first applications of the MHD-EPIC algorithm, we simulate the interaction between Jupiter's magnetospheric plasma and Ganymede's magnetosphere. We compare the MHD-EPIC simulations with pure Hall MHD simulations and compare both model results with Galileo observations to assess the importance of kinetic effects in controlling the configuration and dynamics of Ganymede's magnetosphere. We find that the Hall MHD and MHD-EPIC solutions are qualitatively similar, but there are significant quantitative differences. In particular, the density and pressure inside the magnetosphere show different distributions. For our baseline grid resolution the PIC solution is more dynamic than the Hall MHD simulation and it compares significantly better with the Galileo magnetic measurements than the Hall MHD solution. The power spectra of the observed and simulated magnetic field fluctuations agree extremely well for the MHD-EPIC model. The MHD-EPIC simulation also produced a few flux transfer events (FTEs) that have magnetic signatures very similar to an observed event. The simulation shows that the FTEs often exhibit complex 3-D structures with their orientations changing substantially between the equatorial plane and the Galileo trajectory, which explains the magnetic signatures observed during the magnetopause crossings. The computational cost of the MHD-EPIC simulation was only about 4 times more than that of the Hall MHD simulation.
Sams, J. I.; Witt, E. C.
1995-01-01
The Hydrological Simulation Program - Fortran (HSPF) was used to simulate streamflow and sediment transport in two surface-mined basins of Fayette County, Pa. Hydrologic data from the Stony Fork Basin (0.93 square miles) was used to calibrate HSPF parameters. The calibrated parameters were applied to an HSPF model of the Poplar Run Basin (8.83 square miles) to evaluate the transfer value of model parameters. The results of this investigation provide information to the Pennsylvania Department of Environmental Resources, Bureau of Mining and Reclamation, regarding the value of the simulated hydrologic data for use in cumulative hydrologic-impact assessments of surface-mined basins. The calibration period was October 1, 1985, through September 30, 1988 (water years 1986-88). The simulated data were representative of the observed data from the Stony Fork Basin. Mean simulated streamflow was 1.64 cubic feet per second compared to measured streamflow of 1.58 cubic feet per second for the 3-year period. The difference between the observed and simulated peak stormflow ranged from 4.0 to 59.7 percent for 12 storms. The simulated sediment load for the 1987 water year was 127.14 tons (0.21 ton per acre), which compares to a measured sediment load of 147.09 tons (0.25 ton per acre). The total simulated suspended-sediment load for the 3-year period was 538.2 tons (0.30 ton per acre per year), which compares to a measured sediment load of 467.61 tons (0.26 ton per acre per year). The model was verified by comparing observed and simulated data from October 1, 1988, through September 30, 1989. The results obtained were comparable to those from the calibration period. The simulated mean daily discharge was representative of the range of data observed from the basin and of the frequency with which specific discharges were equalled or exceeded. The calibrated and verified parameters from the Stony Fork model were applied to an HSPF model of the Poplar Run Basin. The two basins are in a similar physical setting. Data from October 1, 1987, through September 30, 1989, were used to evaluate the Poplar Run model. In general, the results from the Poplar Run model were comparable to those obtained from the Stony Fork model. The difference between observed and simulated total streamflow was 1.1 percent for the 2-year period. The mean annual streamflow simulated by the Poplar Run model was 18.3 cubic feet per second. This compares to an observed streamflow of 18.15 cubic feet per second. For the 2-year period, the simulated sediment load was 2,754 tons (0.24 ton per acre per year), which compares to a measured sediment load of 3,051.2 tons (0.27 ton per acre per year) for the Poplar Run Basin. Cumulative frequency-distribution curves of the observed and simulated streamflow compared well. The comparison between observed and simulated data improved as the time span increased. Simulated annual means and totals were more representative of the observed data than hourly data used in comparing storm events. The structure and organization of the HSPF model facilitated the simulation of a wide range of hydrologic processes. The simulation results from this investigation indicate that model parameters may be transferred to ungaged basins to generate representative hydrologic data through modeling techniques.
Behavioural responses of sardines Sardina pilchardus to simulated purse-seine capture and slipping.
Marçalo, A; Araújo, J; Pousão-Ferreira, P; Pierce, G J; Stratoudakis, Y; Erzini, K
2013-09-01
The behavioural effects of confinement of sardine Sardina pilchardus in a purse seine were evaluated through three laboratory experiments simulating the final stages of purse seining; the process of slipping (deliberately allowing fishes to escape) and subsequent exposure to potential predators. Effects of holding time (the time S. pilchardus were held or entangled in the simulation apparatus) and S. pilchardus density were investigated. Experiment 1 compared the effect of a mild fishing stressor (20 min in the net and low S. pilchardus density) with a control (fishing not simulated) while the second and third experiments compared the mild stressor with a severe stressor (40 min in the net and high S. pilchardus density). In all cases, sea bass Dicentrarchus labrax were used as potential predators. Results indicated a significant effect of crowding time and density on the survival and behaviour of slipped S. pilchardus. After simulated fishing, S. pilchardus showed significant behavioural changes including lower swimming speed, closer approaches to predators and higher nearest-neighbour distances (wider school area) than controls, regardless of stressor severity. These results suggest that, in addition to the delayed and unobserved mortality caused by factors related to fishing operations, slipped pelagic fishes can suffer behavioural impairments that may increase vulnerability to predation. Possible sub-lethal effects of behavioural impairment on fitness are discussed, with suggestions on how stock assessment might be modified to account for both unobserved mortality and sub-lethal effects, and possible approaches to provide better estimates of unobserved mortality in the field are provided. © 2013 The Fisheries Society of the British Isles.
Confusing placebo effect with natural history in epilepsy: A big data approach.
Goldenholz, Daniel M; Moss, Robert; Scott, Jonathan; Auh, Sungyoung; Theodore, William H
2015-09-01
For unknown reasons, placebos reduce seizures in clinical trials in many patients. It is also unclear why some drugs showing statistical superiority to placebo in one trial may fail to do so in another. Using Seizuretracker.com, a patient-centered database of 684,825 seizures, we simulated "placebo" and "drug" trials. These simulations were employed to clarify the sources of placebo effects in epilepsy, and to identify methods of diminishing placebo effects. Simulation 1 included 9 trials with a 6-week baseline and 6-week test period, starting at time 0, 3, 6…24 months. Here, "placebo" reduced seizures regardless of study start time. Regression-to-the-mean persisted only for 3 to 6 months. Simulation 2 comprised a 6-week baseline and then 2 years of follow-up. Seizure frequencies continued to improve throughout follow-up. Although the group improved, individuals switched from improvement to worsening and back. Simulation 3 involved a placebo-controlled "drug" trial, to explore methods of placebo response reduction. An efficacious "drug" failed to demonstrate a significant effect compared with "placebo" (p = 0.12), although modifications either in study start time (p = 0.025) or baseline population reduction (p = 0.0028) allowed the drug to achieve a statistically significant effect compared with placebo. In epilepsy clinical trials, some seizure reduction traditionally attributed to placebo effect may reflect the natural course of the disease itself. Understanding these dynamics will allow future investigations into optimal clinical trial design and may lead to identification of more effective therapies. Ann Neurol 2015;78:329-336. © 2015 American Neurological Association.
A novel coupling of noise reduction algorithms for particle flow simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimoń, M.J., E-mail: malgorzata.zimon@stfc.ac.uk; James Weir Fluids Lab, Mechanical and Aerospace Engineering Department, The University of Strathclyde, Glasgow G1 1XJ; Reese, J.M.
2016-09-15
Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve a better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as a phase separation phenomenon. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of data in a wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. It is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
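A rough, illustrative reading of the hybrid procedure, not the authors' implementation: project the noisy snapshots onto a truncated POD basis and wavelet-threshold the temporal coefficients before reconstructing. The snapshot matrix, wavelet family, mode count, and threshold rule below are assumptions, with PyWavelets used as a stand-in signal-processing library.

    import numpy as np
    import pywt  # PyWavelets

    def wavinpod_like_denoise(snapshots, n_modes=5, wavelet="db4", level=3):
        """Sketch of a POD + wavelet-thresholding filter.

        snapshots: (n_points, n_times) noisy field samples.
        Returns a smoothed field of the same shape.
        """
        # POD via thin SVD of the snapshot matrix.
        U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
        U, s, Vt = U[:, :n_modes], s[:n_modes], Vt[:n_modes]

        # Wavelet-threshold each temporal coefficient series (rows of Vt).
        Vt_f = np.empty_like(Vt)
        for k, series in enumerate(Vt):
            coeffs = pywt.wavedec(series, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
            thr = sigma * np.sqrt(2.0 * np.log(series.size))        # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            Vt_f[k] = pywt.waverec(coeffs, wavelet)[: series.size]

        return (U * s) @ Vt_f

    # Toy example: smooth field + noise on 64 points over 256 time steps.
    x = np.linspace(0, 1, 64)[:, None]
    t = np.linspace(0, 1, 256)[None, :]
    clean = np.sin(2 * np.pi * x) * np.cos(4 * np.pi * t)
    noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(clean.shape)
    smoothed = wavinpod_like_denoise(noisy)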
Visualization Improves Supraclavicular Access to the Subclavian Vein in a Mixed Reality Simulator.
Sappenfield, Joshua Warren; Smith, William Brit; Cooper, Lou Ann; Lizdas, David; Gonsalves, Drew B; Gravenstein, Nikolaus; Lampotang, Samsun; Robinson, Albert R
2018-07-01
We investigated whether visual augmentation (3D, real-time, color visualization) of a procedural simulator improved performance during training in the supraclavicular approach to the subclavian vein, not as widely known or used as its infraclavicular counterpart. To train anesthesiology residents to access a central vein, a mixed reality simulator with emulated ultrasound imaging was created using an anatomically authentic, 3D-printed, physical mannequin based on a computed tomographic scan of an actual human. The simulator has a corresponding 3D virtual model of the neck and upper chest anatomy. Hand-held instruments such as a needle, an ultrasound probe, and a virtual camera controller are directly manipulated by the trainee and tracked and recorded with submillimeter resolution via miniature, 6 degrees of freedom magnetic sensors. After Institutional Review Board approval, 69 anesthesiology residents and faculty were enrolled and received scripted instructions on how to perform subclavian venous access using the supraclavicular approach based on anatomic landmarks. The volunteers were randomized into 2 cohorts. The first used real-time 3D visualization concurrently with trial 1, but not during trial 2. The second did not use real-time 3D visualization concurrently with trial 1 or 2. However, after trial 2, they observed a 3D visualization playback of trial 2 before performing trial 3 without visualization. An automated scoring system based on time, success, and errors/complications generated objective performance scores. Nonparametric statistical methods were used to compare the scores between subsequent trials, differences between groups (real-time visualization versus no visualization versus delayed visualization), and improvement in scores between trials within groups. Although the real-time visualization group demonstrated significantly better performance than the delayed visualization group on trial 1 (P = .01), there was no difference in gain scores, between performance on the first trial and performance on the final trial, that were dependent on group (P = .13). In the delayed visualization group, the difference in performance between trial 1 and trial 2 was not significant (P = .09); reviewing performance on trial 2 before trial 3 resulted in improved performance when compared to trial 1 (P < .0001). There was no significant difference in median scores (P = .13) between the real-time visualization and delayed visualization groups for the last trial after both groups had received visualization. Participants reported a significant improvement in confidence in performing supraclavicular access to the subclavian vein. Standard deviations of scores, a measure of performance variability, decreased in the delayed visualization group after viewing the visualization. Real-time visual augmentation (3D visualization) in the mixed reality simulator improved performance during supraclavicular access to the subclavian vein. No difference was seen in the final trial of the group that received real-time visualization compared to the group that had delayed visualization playback of their prior attempt. Training with the mixed reality simulator improved participant confidence in performing an unfamiliar technique.
Leckey, Cara A C; Rogge, Matthew D; Raymond Parker, F
2014-01-01
Three-dimensional (3D) elastic wave simulations can be used to investigate and optimize nondestructive evaluation (NDE) and structural health monitoring (SHM) ultrasonic damage detection techniques for aerospace materials. 3D anisotropic elastodynamic finite integration technique (EFIT) has been implemented for ultrasonic waves in carbon fiber reinforced polymer (CFRP) composite laminates. This paper describes 3D EFIT simulations of guided wave propagation in undamaged and damaged anisotropic and quasi-isotropic composite plates. Comparisons are made between simulations of guided waves in undamaged anisotropic composite plates and both experimental laser Doppler vibrometer (LDV) wavefield data and dispersion curves. Time domain and wavenumber domain comparisons are described. Wave interaction with complex geometry delamination damage is then simulated to investigate how simulation tools incorporating realistic damage geometries can aid in the understanding of wave interaction with CFRP damage. In order to move beyond simplistic assumptions of damage geometry, volumetric delamination data acquired via X-ray microfocus computed tomography is directly incorporated into the simulation. Simulated guided wave interaction with the complex geometry delamination is compared to experimental LDV time domain data and 3D wave interaction with the volumetric damage is discussed. Published by Elsevier B.V.
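The full 3D anisotropic EFIT scheme is beyond a short example, but the staggered-grid velocity-stress update underlying such simulations can be illustrated in one dimension. The sketch below is a much-simplified isotropic 1-D analogue with placeholder material properties and source, intended only to show the leapfrog update structure.

    import numpy as np

    def simulate_1d_wave(n=400, steps=800, dx=1.0e-3, rho=1600.0, c=2400.0):
        """1-D velocity-stress staggered-grid update (a much-simplified, isotropic
        analogue of a 3-D EFIT scheme; material values are placeholders)."""
        E = rho * c**2                      # effective stiffness
        dt = 0.5 * dx / c                   # CFL-limited time step
        v = np.zeros(n)                     # particle velocity nodes
        s = np.zeros(n + 1)                 # stress nodes on the staggered grid
        history = []
        for k in range(steps):
            s[1:-1] += dt * E * (v[1:] - v[:-1]) / dx
            s[n // 4] += np.exp(-((k - 40) / 10.0) ** 2)   # Gaussian source pulse
            v += dt / rho * (s[1:] - s[:-1]) / dx
            history.append(v.copy())
        return np.array(history)            # snapshots, e.g. for comparison with wavefield data

    wavefield = simulate_1d_wave()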
Role of meteorology in simulating methane seasonal cycle and growth rate
NASA Astrophysics Data System (ADS)
Ghosh, A.; Patra, P. K.; Ishijima, K.; Morimoto, S.; Aoki, S.; Nakazawa, T.
2012-12-01
Methane (CH4) is the second most important anthropogenically produced greenhouse gas, with a radiative effect since preindustrial times that is comparable to that of carbon dioxide. Methane also contributes to the formation of tropospheric ozone and water vapor in the stratosphere, further increasing its importance to the Earth's radiative balance. In the present study, model simulations of CH4 for three different emission scenarios have been conducted using the CCSR/NIES/FRCGC Atmospheric General Circulation Model (AGCM) based Chemistry Transport Model (ACTM), with and without nudging of meteorological parameters, for the period 1981-2011. The model simulations are compared with measurements at monthly timescale at surface monitoring stations. We show that the overall trends in CH4 growth rate and seasonal cycle at most measurement sites can be fairly successfully modeled by using existing knowledge of CH4 flux trends and seasonality. Detailed analysis reveals that the simulation without nudging has a larger seasonal cycle amplitude than both the observations and the simulation with nudging. The growth rate is slightly overestimated in the simulation without nudging. For better representation of regional/global flux distribution patterns and strengths in the future, we are exploring various dynamical and chemical aspects of the forward model with and without nudging.
Tablet-based cardiac arrest documentation: a pilot study.
Peace, Jack M; Yuen, Trevor C; Borak, Meredith H; Edelson, Dana P
2014-02-01
Conventional paper-based resuscitation transcripts are notoriously inaccurate, often lacking the precision that is necessary for recording a fast-paced resuscitation. The aim of this study was to evaluate whether a tablet computer-based application could improve upon conventional practices for resuscitation documentation. Nurses used either the conventional paper code sheet or a tablet application during simulated resuscitation events. Recorded events were compared to a gold standard record generated from video recordings of the simulations and a CPR-sensing defibrillator/monitor. Events compared included defibrillations, medication deliveries, and other interventions. During the study period, 199 unique interventions were observed in the gold standard record. Of these, 102 occurred during simulations recorded by the tablet application, 78 by the paper code sheet, and 19 during scenarios captured simultaneously by both documentation methods. These occurred over 18 simulated resuscitation scenarios, in which 9 nurses participated. The tablet application had a mean sensitivity of 88.0% for all interventions, compared to 67.9% for the paper code sheet (P=0.001). The median time discrepancy was 3 s for the tablet and 77 s for the paper code sheet when compared to the gold standard (P<0.001). Similar to prior studies, we found that conventional paper-based documentation practices are inaccurate, often misreporting intervention delivery times or missing their delivery entirely. However, our study also demonstrated that a tablet-based documentation method may represent a means to substantially improve resuscitation documentation quality, which could have implications for resuscitation quality improvement and research. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Asquith, W.H.; Mosier, J. G.; Bush, P.W.
1997-01-01
The watershed simulation model Hydrologic Simulation Program—Fortran (HSPF) was used to generate simulated flow (runoff) from the 13 watersheds to the six bay systems because adequate gaged streamflow data from which to estimate freshwater inflows are not available; only about 23 percent of the adjacent contributing watershed area is gaged. The model was calibrated for the gaged parts of three watersheds—that is, selected input parameters (meteorologic and hydrologic properties and conditions) that control runoff were adjusted in a series of simulations until an adequate match between model-generated flows and a set (time series) of gaged flows was achieved. The primary model input is rainfall and evaporation data and the model output is a time series of runoff volumes. After calibration, simulations driven by daily rainfall for a 26-year period (1968–93) were done for the 13 watersheds to obtain runoff under current (1983–93), predevelopment (pre-1940 streamflow and pre-urbanization), and future (2010) land-use conditions for estimating freshwater inflows and for comparing runoff under the three land-use conditions; and to obtain time series of runoff from which to estimate time series of freshwater inflows for trend analysis.
GPU-based prompt gamma ray imaging from boron neutron capture therapy.
Yoon, Do-Kun; Jung, Joo-Young; Jo Hong, Key; Sil Lee, Keum; Suk Suh, Tae
2015-01-01
The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.
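The GPU kernels are not reproduced here, but the ordered-subset expectation-maximization update that the authors modify has a compact form. The sketch below is a CPU/NumPy illustration with a random matrix standing in for the real system response; the matrix sizes, subset count, and data are invented.

    import numpy as np

    def osem(y, A, subsets, n_iter=10, eps=1e-12):
        """Ordered-subset EM for y ~= A @ x with nonnegative x.

        y: measured projections, A: system matrix (rows = projection bins),
        subsets: list of row-index arrays partitioning the projections.
        """
        x = np.ones(A.shape[1])
        for _ in range(n_iter):
            for idx in subsets:
                As, ys = A[idx], y[idx]
                ratio = ys / (As @ x + eps)                     # measured / estimated projections
                x *= (As.T @ ratio) / (As.sum(axis=0) + eps)    # multiplicative update
        return x

    # Toy problem with a hypothetical 200 x 100 projector and 4 subsets.
    rng = np.random.default_rng(0)
    A = rng.random((200, 100))
    x_true = rng.random(100)
    y = A @ x_true
    subs = np.array_split(rng.permutation(200), 4)
    x_rec = osem(y, A, subs)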
Rise time of proton cut-off energy in 2D and 3D PIC simulations
NASA Astrophysics Data System (ADS)
Babaei, J.; Gizzi, L. A.; Londrillo, P.; Mirzanejad, S.; Rovelli, T.; Sinigardi, S.; Turchetti, G.
2017-04-01
The Target Normal Sheath Acceleration regime for proton acceleration by laser pulses is experimentally consolidated and fairly well understood. However, uncertainties remain in the analysis of particle-in-cell simulation results. The energy spectrum is exponential with a cut-off, but the maximum energy depends on the simulation time, following different laws in two and three dimensional (2D, 3D) PIC simulations so that the determination of an asymptotic value has some arbitrariness. We propose two empirical laws for the rise time of the cut-off energy in 2D and 3D PIC simulations, suggested by a model in which the proton acceleration is due to a surface charge distribution on the target rear side. The kinetic energy of the protons that we obtain follows two distinct laws, which appear to be nicely satisfied by PIC simulations, for a model target given by a uniform foil plus a contaminant layer that is hydrogen-rich. The laws depend on two parameters: the scaling time, at which the energy starts to rise, and the asymptotic cut-off energy. The values of the cut-off energy, obtained by fitting 2D and 3D simulations for the same target and laser pulse configuration, are comparable. This suggests that parametric scans can be performed with 2D simulations since 3D ones are computationally very expensive, delegating their role only to a correspondence check. In this paper, the simulations are carried out with the PIC code ALaDyn by changing the target thickness L and the incidence angle α, with a fixed a0 = 3. A monotonic dependence, on L for normal incidence and on α for fixed L, is found, as in the experimental results for high temporal contrast pulses.
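The two empirical rise laws are not reproduced in the abstract, so the sketch below only illustrates the fitting workflow they imply: a saturating model with a scaling time and an asymptotic cut-off energy is regressed against a simulated cut-off energy history. Both the functional form and the data are placeholders, not the authors' laws or results.

    import numpy as np
    from scipy.optimize import curve_fit

    def saturating_law(t, e_inf, t_s):
        """Placeholder rise law: E(t) approaches e_inf with characteristic time t_s."""
        return e_inf * (1.0 - np.exp(-(t / t_s)))

    # Hypothetical cut-off energy history extracted from a PIC run (arbitrary units).
    t = np.linspace(5.0, 200.0, 40)                                     # fs
    e_cut = 12.0 * (1.0 - np.exp(-t / 60.0)) \
            + 0.3 * np.random.default_rng(2).standard_normal(t.size)

    p_opt, p_cov = curve_fit(saturating_law, t, e_cut, p0=(10.0, 50.0))
    e_inf, t_s = p_opt
    print(f"asymptotic cut-off ~ {e_inf:.1f} (arb. units), scaling time ~ {t_s:.0f} fs")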
TH-CD-207A-08: Simulated Real-Time Image Guidance for Lung SBRT Patients Using Scatter Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redler, G; Cifter, G; Templeton, A
2016-06-15
Purpose: To develop a comprehensive Monte Carlo-based model for the acquisition of scatter images of patient anatomy in real-time, during lung SBRT treatment. Methods: During SBRT treatment, images of patient anatomy can be acquired from scattered radiation. To rigorously examine the utility of scatter images for image guidance, a model is developed using MCNP code to simulate scatter images of phantoms and lung cancer patients. The model is validated by comparing experimental and simulated images of phantoms of different complexity. The differentiation between tissue types is investigated by imaging objects of known compositions (water, lung, and bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is used to investigate image noise properties for various quantities of delivered radiation (monitor units (MU)). Patient scatter images are simulated using the validated simulation model. 4DCT patient data is converted to an MCNP input geometry accounting for different tissue composition and densities. Lung tumor phantom images acquired with decreasing imaging time (decreasing MU) are used to model the expected noise amplitude in patient scatter images, producing realistic simulated patient scatter images with varying temporal resolution. Results: Image intensity in simulated and experimental scatter images of tissue equivalent objects (water, lung, bone) match within the uncertainty (∼3%). Lung tumor phantom images agree as well. Specifically, tumor-to-lung contrast matches within the uncertainty. The addition of random noise approximating quantum noise in experimental images to simulated patient images shows that scatter images of lung tumors can provide images in as fast as 0.5 seconds with CNR∼2.7. Conclusions: A scatter imaging simulation model is developed and validated using experimental phantom scatter images. Following validation, lung cancer patient scatter images are simulated. These simulated patient images demonstrate the clinical utility of scatter imaging for real-time tumor tracking during lung SBRT.
Development and validation of a piloted simulation of a helicopter and external sling load
NASA Technical Reports Server (NTRS)
Shaughnessy, J. D.; Deaux, T. N.; Yenni, K. R.
1979-01-01
A generalized, real time, piloted, visual simulation of a single rotor helicopter, suspension system, and external load is described and validated for the full flight envelope of the U.S. Army CH-54 helicopter and cargo container as an example. The mathematical model described uses modified nonlinear classical rotor theory for both the main rotor and tail rotor, nonlinear fuselage aerodynamics, an elastic suspension system, nonlinear load aerodynamics, and a load-ground contact model. The implementation of the mathematical model on a large digital computing system is described, and validation of the simulation is discussed. The mathematical model is validated by comparing measured flight data with simulated data, by comparing linearized system matrices, eigenvalues, and eigenvectors with manufacturers' data, and by the subjective comparison of handling characteristics by experienced pilots. A visual landing display system for use in simulation which generates the pilot's forward looking real world display was examined, and a special head-up, down-looking load/landing zone display is described.
Integrated modeling of temperature and rotation profiles in JET ITER-like wall discharges
NASA Astrophysics Data System (ADS)
Rafiq, T.; Kritz, A. H.; Kim, Hyun-Tae; Schuster, E.; Weiland, J.
2017-10-01
Simulations of 78 JET ITER-like wall D-D discharges and 2 D-T reference discharges are carried out using the TRANSP predictive integrated modeling code. The time-evolved temperature and rotation profiles are computed utilizing the Multi-Mode anomalous transport model. The discharges involve a broad range of conditions including scans over gyroradius, collisionality, and values of q95. The D-T reference discharges are selected in anticipation of the D-T experimental campaign planned at JET in 2019. The simulated temperature and rotation profiles are compared with the corresponding experimental profiles in the radial range from the magnetic axis to the ρ = 0.9 flux surface. The comparison is quantified by calculating the RMS deviations and offsets. Overall, good agreement is found between the profiles produced in the simulations and the experimental data. It is planned that the simulations obtained using the Multi-Mode model will be compared with the simulations using the TGLF model. Research supported in part by the US DoE, Office of Science.
Console, Rodolfo; Nardi, Anna; Carluccio, Roberto; Murru, Maura; Falcone, Giuseppe; Parsons, Thomas E.
2017-01-01
The use of a newly developed earthquake simulator has allowed the production of catalogs lasting 100 kyr and containing more than 100,000 events of magnitudes ≥4.5. The model of the fault system upon which we applied the simulator code was obtained from the DISS 3.2.0 database, selecting all the faults that are recognized in the Calabria region, for a total of 22 fault segments. The application of our simulation algorithm provides typical features in the time, space, and magnitude behavior of the seismicity, which can be compared with those of the real observations. The results of the physics-based simulator algorithm were compared with those obtained by an alternative method using a slip-rate balanced technique. Finally, as an example of a possible use of synthetic catalogs, an attenuation law has been applied to all the events reported in the synthetic catalog for the production of maps showing the exceedance probability of given values of PGA on the territory under investigation.
Duality quantum algorithm efficiently simulates open quantum systems
Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu
2016-01-01
Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d³), in contrast to O(d⁴) for existing unitary simulation algorithms, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with previous unitary simulation algorithms. PMID:27464855
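For readers unfamiliar with the Kraus form mentioned above, the short classical sketch below applies a textbook amplitude-damping channel to a single-qubit density matrix; it illustrates the non-unitary map rho -> sum_k K_k rho K_k† that the duality algorithm realizes, not the duality-computer construction itself.

    import numpy as np

    def apply_kraus(rho, kraus_ops):
        """Open-system map: rho -> sum_k K_k rho K_k^dagger."""
        return sum(K @ rho @ K.conj().T for K in kraus_ops)

    # Amplitude-damping channel with decay probability gamma (textbook example).
    gamma = 0.3
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

    rho = np.array([[0.5, 0.5], [0.5, 0.5]])       # |+><+| state
    rho_out = apply_kraus(rho, [K0, K1])
    print(np.trace(rho_out).real)                  # trace preserved: 1.0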
A log-Weibull spatial scan statistic for time to event data.
Usman, Iram; Rosychuk, Rhonda J
2018-06-13
Spatial scan statistics have been used for the identification of geographic clusters of elevated numbers of cases of a condition such as disease outbreaks. These statistics, accompanied by the appropriate distribution, can also identify geographic areas with either longer or shorter times to events. Other authors have proposed spatial scan statistics based on the exponential and Weibull distributions. We propose the log-Weibull as an alternative distribution for the spatial scan statistic for time-to-event data and compare and contrast the log-Weibull and Weibull distributions through simulation studies. The effect of type I differential censoring and the power of the test have been investigated through simulated data. Methods are also illustrated on time to specialist visit data for discharged patients presenting to emergency departments for atrial fibrillation and flutter in Alberta during 2010-2011. We found that northern regions of Alberta had longer times to specialist visit than other areas. We proposed the spatial scan statistic for the log-Weibull distribution as a new approach for detecting spatial clusters for time-to-event data. The simulation studies suggest that the test performs well for log-Weibull data.
Suppressing correlations in massively parallel simulations of lattice models
NASA Astrophysics Data System (ADS)
Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle
2017-11-01
For lattice Monte Carlo simulations parallelization is crucial to make studies of large systems and long simulation time feasible, while sequential simulations remain the gold-standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30 × over a parallel CPU implementation on a single socket and at least 180 × with respect to the sequential reference.
A fast image simulation algorithm for scanning transmission electron microscopy.
Ophus, Colin
2017-01-01
Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. We present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f⁴ compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.
Direct imaging of small scatterers using reduced time dependent data
NASA Astrophysics Data System (ADS)
Cakoni, Fioralba; Rezac, Jacob D.
2017-06-01
We introduce qualitative methods for locating small objects using time dependent acoustic near field waves. These methods have reduced data collection requirements compared to typical qualitative imaging techniques. In particular, we only collect scattered field data in a small region surrounding the location from which an incident field was transmitted. The new methods are partially theoretically justified and numerical simulations demonstrate their efficacy. We show that these reduced data techniques give comparable results to methods which require full multistatic data and that these time dependent methods require less scattered field data than their time harmonic analogs.
NASA Technical Reports Server (NTRS)
Detman, T. R.; Intriligator, D. S.; Dryer, M.; Sun, W.; Deehr, C. S.; Intriligator, J.
2012-01-01
We describe our 3-D, time-dependent, MHD solar wind model that we recently modified to include the physics of pickup protons from interstellar neutral hydrogen. The model has a time-dependent lower boundary condition, at 0.1 AU, that is driven by source surface map files through an empirical interface module. We describe the empirical interface and its parameter tuning to maximize model agreement with background (quiet) solar wind observations at ACE. We then give results of a simulation study of the famous Halloween 2003 series of solar events. We began with shock inputs from the Fearless Forecast real-time shock arrival prediction study, and then we iteratively adjusted input shock speeds to obtain agreement between observed and simulated shock arrival times at ACE. We then extended the model grid to 5.5 AU and compared those simulation results with Ulysses observations at 5.2 AU. Next we undertook the more difficult tuning of shock speeds and locations to get matching shock arrival times at both ACE and Ulysses. Then we ran this last case again with neutral hydrogen density set to zero, to identify the effect of pickup ions. We show that the speed of interplanetary shocks propagating from the Sun to Ulysses is reduced by the effects of pickup protons. We plan to make further improvements to the model as we continue our benchmarking process to 10 AU, comparing our results with Cassini observations, and eventually on to 100 AU, comparing our results with Voyager 1 and 2 observations.
Comparison of different types of medium scale field rainfall simulators
NASA Astrophysics Data System (ADS)
Dostál, Tomáš; Strauss, Peter; Schindewolf, Marcus; Kavka, Petr; Schmidt, Jürgen; Bauer, Miroslav; Neumann, Martin; Kaiser, Andreas; Iserloh, Thomas
2015-04-01
Rainfall simulators are used in numerous experiments to study runoff and soil erosion characteristics. However, they usually differ in their construction details, rainfall generation, plot size and other technical parameters. As field experiments using medium to large scale rainfall simulators (plot length 3-8 m) are very time- and labor-consuming, close cooperation of individual teams and comparability of results is highly desirable to enlarge the database of results. Two experimental campaigns were organized to compare three field rainfall simulators of similar scale (plot size), but with different technical parameters. The results were then compared to identify parameters that are crucial for soil loss and surface runoff formation and to test whether results from individual devices can be reliably compared. The rainfall simulators compared were: the field rainfall simulator of CTU Prague (the Czech Republic) (Kavka et al., 2012; EGU2015-11025), the field simulator of BAW (Austria) (Strauss et al., 2002) and the field simulator of TU Bergakademie Freiberg (Germany) (Schindewolf & Schmidt 2012). The device of CTU Prague is usually applied to a plot size of 9.5 x 2 m, employing 4 SS Full Jet 40WSQ nozzles mounted on a folding arm; the working pressure is 0.8 bar and the height of the nozzles is 2.65 m. The rainfall intensity is regulated electronically by opening the nozzles only for a certain fraction of the time. The rainfall simulator of BAW is constructed as a modular system, which is usually applied for a length of 5 m (area 2 x 5 m), using 6 SS Full Jet 40WSQ nozzles. The usual working pressure is 0.25 bar and the elevation of the nozzles is 2.6 m. The rainfall intensity is likewise regulated electronically by opening the nozzles only for a certain fraction of the time. The device of TU Bergakademie Freiberg is also a standard modular system, working usually with a plot size of 3 x 1 m, using 3 oscillating VeeJet 80/100 nozzles with a usual operating pressure of 0.5 bar. Intensity is regulated by the frequency of sweeps above the experimental plot. The comparison was done during two independent campaigns, where two devices were always present. Rainfall intensity for the experiments varied between 40 and 60 mm/h. Mutual comparison was carried out between the CTU Prague and TU Freiberg RSs at a plot size of 3 x 1 m and between the CTU Prague and BAW RSs at a plot size of 5 x 2 m. In general, the experiments revealed a significant effect of potential heterogeneities at the experimental plots and an effect of raindrop energy on both surface runoff formation and, mainly, soil loss. Therefore, coordination of the methodology of the experiments and careful control of initial conditions seem to be crucial for comparability of results from individual devices. Detailed results will be presented on the poster. The research has been supported by the research grants SGS14/180/OHK1/3T/11, QJ1230056 and 7AMB14AT020. References: Kavka, P., Davidová, T., Janotová, B., Bauer, M. and Dostál, T. (2012): Mobilní dešťový simulátor (in Czech), Stavební obzor 8, 2012. Schindewolf, M. & J. Schmidt (2012): Parameterization of the EROSION 2D/3D soil erosion model using a small-scale rainfall simulator and upstream runoff simulation, Catena 91, pp. 47-55, DOI: 10.1016/j.catena.2011.01.007. Strauss, P., J. Pitty, M. Pfeffer, A. Mentler (2000): Rainfall Simulation for Outdoor Experiments. In: P. Jamet, J. Cornejo (eds.): Current research methods to assess the environmental fate of pesticides, pp. 329-333, INRA Editions.
Cartesian-Grid Simulations of a Canard-Controlled Missile with a Free-Spinning Tail
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Kwak, Dochan (Technical Monitor)
2002-01-01
The proposed paper presents a series of simulations of a geometrically complex, canard-controlled, supersonic missile with free-spinning tail fins. Time-dependent simulations were performed using an inviscid Cartesian-grid-based method with results compared to both experimental data and high-resolution Navier-Stokes computations. At fixed free stream conditions and canard deflections, the tail spin rate was iteratively determined such that the net rolling moment on the empennage is zero. This rate corresponds to the time-asymptotic rate of the free-to-spin fin system. After obtaining spin-averaged aerodynamic coefficients for the missile, the investigation seeks a fixed-tail approximation to the spin-averaged aerodynamic coefficients, and examines the validity of this approximation over a variety of freestream conditions.
Simulations of Field-Emission Electron Beams from CNT Cathodes in RF Photoinjectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mihalcea, Daniel; Faillace, Luigi; Panuganti, Harsha
2015-06-01
Average field emission currents of up to 700 mA were produced by Carbon Nanotube (CNT) cathodes in a 1.3 GHz RF gun at the Fermilab High Brightness Electron Source Lab (HBESL). The CNT cathodes were manufactured at Xintek and tested under DC conditions at RadiaBeam. The electron beam intensity as well as the other beam properties are directly related to the time-dependent electric field at the cathode and the geometry of the RF gun. This report focuses on simulations of the electron beam generated through field emission, and the results are compared with experimental measurements. These simulations were performed with the time-dependent Particle-In-Cell (PIC) code WARP.
Comparison of optimization algorithms in intensity-modulated radiation therapy planning
NASA Astrophysics Data System (ADS)
Kendrick, Rachel
Intensity-modulated radiation therapy is used to better conform the radiation dose to the target, which includes avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly-used algorithms for one 5-beam plan. Algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
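As a toy illustration of one of the compared algorithms (not CERR's implementation), simulated annealing can be applied to nonnegative beamlet weights with a simple quadratic dose objective. The dose-influence matrix, prescription, cooling schedule, and step size below are invented for the example.

    import numpy as np

    def anneal_fluence(D, d_rx, n_steps=20000, t0=1.0, seed=0):
        """Simulated annealing over nonnegative beamlet weights w,
        minimizing the quadratic deviation ||D @ w - d_rx||^2."""
        rng = np.random.default_rng(seed)
        w = np.ones(D.shape[1])
        cost = np.sum((D @ w - d_rx) ** 2)
        for step in range(n_steps):
            temp = t0 * (1.0 - step / n_steps) + 1e-6        # linear cooling schedule
            w_new = np.clip(w + 0.05 * rng.standard_normal(w.size), 0.0, None)
            cost_new = np.sum((D @ w_new - d_rx) ** 2)
            if cost_new < cost or rng.random() < np.exp((cost - cost_new) / temp):
                w, cost = w_new, cost_new                    # Metropolis acceptance
        return w, cost

    # Invented toy problem: 50 beamlets, 300 dose voxels, achievable prescription.
    rng = np.random.default_rng(1)
    D = rng.random((300, 50))
    d_rx = D @ rng.random(50)
    w_opt, final_cost = anneal_fluence(D, d_rx)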
Teaching binocular indirect ophthalmoscopy to novice residents using an augmented reality simulator.
Rai, Amandeep S; Rai, Amrit S; Mavrikakis, Emmanouil; Lam, Wai Ching
2017-10-01
To compare the traditional teaching approach of binocular indirect ophthalmoscopy (BIO) to the EyeSI augmented reality (AR) BIO simulator. Prospective randomized control trial. 28 post-graduate year one (PGY1) ophthalmology residents. Residents were recruited at the 2012 Toronto Ophthalmology Residents Introductory Course (TORIC). 15 were randomized to conventional teaching (Group 1), and 13 to augmented reality simulator training (Group 2). 3 vitreoretinal fellows were enrolled to serve as experts. Evaluations were completed on the simulator, with 3 tasks, and outcome measures were total raw score, total time elapsed, and performance. Following conventional training, Group 1 residents were outperformed by vitreoretinal fellows with respect to all 3 outcome measures. Following AR training, Group 2 residents demonstrated superior total scores and performance compared to Group 1 residents. Once the Group 1 residents also completed the AR BIO training, there was a significant improvement compared to their baseline scores, and were now on par with Group 2 residents. This study provides construct validity for the EyeSI AR BIO simulator and demonstrates that it may be superior to conventional BIO teaching for novice ophthalmology residents. Copyright © 2017 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Oruc, Ilker
This thesis presents the development of computationally efficient coupling of Navier-Stokes CFD with a helicopter flight dynamics model, with the ultimate goal of real-time simulation of fully coupled aerodynamic interactions between rotor flow and the surrounding terrain. A particular focus of the research is on coupled airwake effects in the helicopter/ship dynamic interface. A computationally efficient coupling interface was developed between the helicopter flight dynamics model, GENHEL-PSU, and the Navier-Stokes solvers, CRUNCH/CRAFT-CFD, using both FORTRAN and C/C++ programming languages. In order to achieve real-time execution speeds, the main rotor was modeled with a simplified actuator disk using unsteady momentum sources, instead of resolving the full blade geometry in the CFD. All the airframe components, including the fuselage, are represented by single aerodynamic control points in the CFD calculations. The rotor downwash influence on the fuselage and empennage is calculated by using the CFD-predicted local flow velocities at these aerodynamic control points defined on the helicopter airframe. In the coupled simulations, the flight dynamics model is free to move within a computational domain, where the main rotor forces are translated into source terms in the momentum equations of the Navier-Stokes equations. Simultaneously, the CFD calculates induced velocities that are fed back to the simulation and affect the aerodynamic loads in the flight dynamics. The CFD solver models the inflow, ground effect, and interactional aerodynamics in the flight dynamics simulation, and these calculations can be coupled with the solution of the external flow (e.g. ship airwake effects). The developed framework was utilized for various investigations of hovering, forward flight, and helicopter/terrain interaction simulations including standard ground effect, partial ground effect, sloped terrain, and acceleration in ground effect; the results were compared with different flight and experimental data. In near-ground cases, the fully coupled flight dynamics and CFD simulations predicted roll oscillations due to interactions of the rotor downwash, ground plane, and the feedback controller, which are not predicted by conventional simulation models. Fully coupled simulations of a helicopter accelerating near the ground predicted flow formations similar to the recirculation and ground vortex flow regimes observed in experiments. The predictions of hover power reductions due to ground effect compared well with recent experimental data, and the results showed a 22% power reduction for hover at z/R = 0.55 above ground level. Fully coupled simulations were performed for a helicopter hovering over and approaching a ship flight deck, and the results were compared with standalone GENHEL-PSU simulations without a ship airwake and with one-way coupled simulations. The fully coupled simulations showed higher pilot workload compared to the other two cases. In order to increase the execution speed of the CFD calculations, several improvements were made to the CFD solver. First, the initial file-I/O coupling approach was replaced with a more efficient method, a Multiple Program Multiple Data (MPMD) MPI framework, in which the two executables communicate with each other by MPI calls. Next, the unstructured solver (CRUNCH CFD), which is 2nd-order accurate in space, was replaced with the faster-running structured solver (CRAFT CFD), which is 5th-order accurate in space.
Other improvements to the CFD solver included a more efficient k-d tree search algorithm and the bounding of the source-term search space within a small region of the grid surrounding the rotor. The final improvement was to parallelize the search task with the CFD solver tasks within the solver. To quantify the speedup of the improvements to the coupling interface described above, a study was performed to demonstrate the speedup achieved from each of the interface improvements. The improvements made to the CFD solver showed more than a 40-fold speedup over the baseline file-I/O coupling and the unstructured solver CRUNCH CFD. Using a structured CFD solver with 5th-order spatial accuracy provided the largest reductions in execution times. Disregarding the solver numerics, the total speedup of all of the interface improvements, including the MPMD rotor point exchange, k-d tree search algorithm, bounded search space, and parallelized search task, was approximately 231%, more than a factor of 2. All these improvements provided the necessary speedup for approaching real-time CFD. (Abstract shortened by ProQuest.)
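The bounded k-d tree lookup described above can be sketched as follows, with SciPy's cKDTree standing in for the solver-internal search; the cell coordinates, actuator-disk points, bounding box, and deposition radius are hypothetical.

    import numpy as np
    from scipy.spatial import cKDTree

    def find_source_cells(cell_centers, rotor_points, radius, bounds):
        """Map actuator-disk points to nearby CFD cells for momentum-source deposition.

        Only cells inside an axis-aligned bounding box around the rotor are indexed,
        mirroring the bounded-search-space idea; radius sets the deposition stencil.
        """
        lo, hi = bounds
        in_box = np.all((cell_centers >= lo) & (cell_centers <= hi), axis=1)
        box_idx = np.flatnonzero(in_box)
        tree = cKDTree(cell_centers[box_idx])                 # index only the bounded region
        hits = tree.query_ball_point(rotor_points, r=radius)  # list of cell lists per point
        return [box_idx[np.asarray(h, dtype=int)] for h in hits]

    # Hypothetical 100k-cell grid and a 40-point actuator disk of radius 8 m.
    rng = np.random.default_rng(0)
    cells = rng.uniform(-50.0, 50.0, size=(100_000, 3))
    theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
    disk = np.c_[8.0 * np.cos(theta), 8.0 * np.sin(theta), np.zeros_like(theta)]
    source_cells = find_source_cells(
        cells, disk, radius=1.5,
        bounds=(np.array([-12.0, -12.0, -3.0]), np.array([12.0, 12.0, 3.0])))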
Use NU-WRF and GCE Model to Simulate the Precipitation Processes During MC3E Campaign
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Wu, Di; Matsui, Toshi; Li, Xiaowen; Zeng, Xiping; Peter-Lidard, Christa; Hou, Arthur
2012-01-01
One of the major CRM approaches to studying precipitation processes is sometimes referred to as "cloud ensemble modeling". This approach allows many clouds of various sizes and stages of their lifecycles to be present at any given simulation time. Large-scale effects derived from observations are imposed on the CRMs as forcing, and cyclic lateral boundaries are used. The advantage of this approach is that model results in terms of rainfall and Q1 and Q2 usually are in good agreement with observations. In addition, the model results provide cloud statistics that represent different types of clouds/cloud systems during their lifetime (life cycle). The large-scale forcing derived from MC3E will be used to drive GCE model simulations. The model-simulated results will be compared with observations from MC3E. These GCE model-simulated datasets are especially valuable for LH algorithm developers. In addition, the regional-scale model with very high resolution, the NASA Unified WRF, is also used for real-time forecasting during the MC3E campaign to ensure that the precipitation and other meteorological forecasts are available to the flight planning team and to interpret the forecast results in terms of proposed flight scenarios. Post-mission simulations are conducted to examine the sensitivity of cloud and precipitation processes and rainfall to the initial and lateral boundary conditions. We will compare model results in terms of precipitation and surface rainfall using the GCE model and NU-WRF.
Parametric study of graphite foam fins and application in heat exchangers
NASA Astrophysics Data System (ADS)
Collins, Michael
This thesis focuses on simulation and experimental studies of finned graphite foam extended surfaces to test their heat transfer characteristics and potential applications in condensers. Different fin designs were developed to conduct a parametric study on the thermal effectiveness with respect to thickness, spacing and fin offset angle. Each fin design was computationally simulated to estimate the heat transfer under specific conditions. The simulations showed that the optimal fin configuration could conduct more than 297% of the thermal energy conducted by straight aluminum fins. Graphite foam fins were then implemented into a simulation of the condenser system. The condenser was simulated with six different orientations of baffles to examine the incoming vapor and resulting two-phase flow patterns. The simulations showed that using both horizontal and vertical baffling provided the configuration with the highest heat transfer and minimized the bypass regions where the vapor would circumvent the graphite foam. This baffle configuration increased the amount of vapor flow through the inner graphite fins and cold water pipes, which gave this configuration the highest heat transfer. The results from experimental tests using the condenser system confirmed that using three baffles will increase performance, consistent with the simulation results. The experimental data showed that the condenser using graphite foam had five times the heat transfer compared to the condenser using only aluminum fins. Incorporating baffles into the condenser using graphite foam enabled this system to conduct nearly ten times more heat transfer than the condenser system which only had aluminum fins without baffles. The results from this research indicate that graphite foam is a far superior heat transfer enhancement material compared to aluminum when used as an extended surface. The longitudinal and horizontal baffles incorporated into the condenser system greatly enhanced the heat transfer because of the increased interaction with the porous graphite foam fins.
Verifying Safeguards Declarations with INDEPTH: A Sensitivity Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grogan, Brandon R; Richards, Scott
2017-01-01
A series of ORIGEN calculations were used to simulate the irradiation and decay of a number of spent fuel assemblies. These simulations focused on variations in the irradiation history that achieved the same terminal burnup through a different set of cycle histories. Simulated NDA measurements were generated for each test case from the ORIGEN data. These simulated measurement types included relative gammas, absolute gammas, absolute gammas plus neutrons, and concentrations of a set of six isotopes commonly measured by NDA. The INDEPTH code was used to reconstruct the initial enrichment, cooling time, and burnup for each irradiation using each simulated measurement type. The results were then compared to the initial ORIGEN inputs to quantify the size of the errors induced by the variations in cycle histories. Errors were compared based on the underlying changes to the cycle history, as well as the data types used for the reconstructions.
NASA Technical Reports Server (NTRS)
Mlynczak, Pamela E.; Houghton, David D.; Diak, George R.
1986-01-01
Using a numerical mesoscale model, four simulations were performed to determine the effects of suppressing the initial mesoscale information in the moisture and wind fields on the precipitation forecasts. The simulations included a 12-h control forecast that began at 1200 GMT in March 1982 and three experiment simulations with modifications to the moisture and vertical motion fields incorporated at 1800 GMT. The forecasts from 1800 GMT were compared to the second half of the control forecast. It was found that, compared to the control forecast, suppression of the moisture and/or wind initial field(s) produces a drier forecast. However, the characteristics of the precipitation forecasts of the experiments were not different enough to conclude that either mesoscale moisture or mesoscale vertical velocity at the initial time is more important for producing a forecast closer to that of the control.
FASTPM: a new scheme for fast simulations of dark matter and haloes
NASA Astrophysics Data System (ADS)
Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick
2016-12-01
We introduce FASTPM, a highly scalable approximated particle mesh (PM) N-body solver, which implements the PM scheme with modified kick and drift factors that enforce correct linear-displacement (1LPT) evolution. Employing a two-dimensional domain decomposition scheme, FASTPM scales extremely well to a very large number of CPUs. In contrast to the COmoving Lagrangian Acceleration (COLA) approach, we do not need to split the force or separately track the 2LPT solution, reducing the code complexity and memory requirements. We compare FASTPM at different numbers of time steps (Ns) and force resolution factors (B) against three benchmarks: the halo mass function from a friends-of-friends halo finder; the halo and dark matter power spectra; and the cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time-stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, an Ns = 10, B = 2 simulation provides a substantial saving (a factor of 10) of computing time relative to an Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance-matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than a 2LPT initial condition generator for Ns = 10 (Ns = 5). Some applications where FASTPM can be useful are generating large numbers of mocks, producing non-linear statistics while varying large numbers of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
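To make the time-stepping structure concrete, the sketch below shows a generic kick-drift-kick particle-mesh step in comoving coordinates with the conventional drift and kick factors; it is not FASTPM itself (the modified factors that enforce 1LPT growth and the FFT-based force solver are omitted), and every name and value here is illustrative.

```python
import numpy as np

# Toy leapfrog (kick-drift-kick) step for a PM N-body integrator in comoving
# coordinates.  FASTPM replaces the standard drift/kick factors below with
# modified ones; only the conventional factors are sketched here.
OMEGA_M = 0.3

def E(a):
    """Dimensionless Hubble rate for flat LCDM (radiation neglected)."""
    return np.sqrt(OMEGA_M / a**3 + (1.0 - OMEGA_M))

def drift_factor(a0, a1, n=256):
    """Standard drift factor: integral of da / (a^3 E(a))."""
    a = np.linspace(a0, a1, n)
    return np.trapz(1.0 / (a**3 * E(a)), a)

def kick_factor(a0, a1, n=256):
    """Standard kick factor: integral of da / (a^2 E(a))."""
    a = np.linspace(a0, a1, n)
    return np.trapz(1.0 / (a**2 * E(a)), a)

def pm_force(pos):
    """Placeholder for the particle-mesh force solve (CIC deposit + FFT Poisson)."""
    return np.zeros_like(pos)

def kdk_step(pos, mom, a0, a1):
    """Advance positions/momenta from scale factor a0 to a1 with one KDK step."""
    ah = 0.5 * (a0 + a1)                      # half step in scale factor
    mom = mom + pm_force(pos) * kick_factor(a0, ah)
    pos = pos + mom * drift_factor(a0, a1)
    mom = mom + pm_force(pos) * kick_factor(ah, a1)
    return pos, mom

pos = np.random.rand(1000, 3)                 # toy particle load
mom = np.zeros_like(pos)
scales = np.linspace(0.1, 1.0, 11)
for a0, a1 in zip(scales[:-1], scales[1:]):
    pos, mom = kdk_step(pos, mom, a0, a1)
```

As the abstract describes, FASTPM's change is confined to these factors: they are chosen so that, in the absence of non-linear forces, a step of any size reproduces the Zel'dovich (1LPT) displacement exactly.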
Simulation of Rate-Related (Dead-Time) Losses In Passive Neutron Multiplicity Counting Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, L.G.; Norman, P.I.; Leadbeater, T.W.
Passive Neutron Multiplicity Counting (PNMC) based on Multiplicity Shift Register (MSR) electronics (a form of time correlation analysis) is a widely used non-destructive assay technique for quantifying spontaneously fissile materials such as Pu. At high event rates, dead-time losses perturb the count rates, with the Singles, Doubles and Triples being increasingly affected. Without correction these perturbations are a major source of inaccuracy in the measured count rates and the assay values derived from them. This paper presents the simulation of dead-time losses and investigates the effect of applying different dead-time models on the observed MSR data. Monte Carlo methods have been used to simulate neutron pulse trains for a variety of source intensities and with ideal detection geometry, providing an event-by-event record of the time distribution of neutron captures within the detection system. The action of the MSR electronics was modelled in software to analyse these pulse trains. Stored pulse trains were perturbed in software to apply the effects of dead-time according to the chosen physical process; for example, the ideal paralysable (extending) and non-paralysable models with an arbitrary dead-time parameter. Results of the simulations demonstrate the change in the observed MSR data when the system dead-time parameter is varied. In addition, the paralysable and non-paralysable models of dead-time are compared. These results form part of a larger study to evaluate existing dead-time corrections and to extend their application to correlated sources. (authors)
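As a minimal illustration of the two dead-time models named above, the sketch below applies paralysable (extending) and non-paralysable dead-time to a simulated Poisson pulse train; the rate, duration, and dead-time parameter are illustrative, and the correlated (fission-chain) structure of a real PNMC source is not modelled.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_train(rate_hz, duration_s):
    """Simulate neutron capture times as an uncorrelated Poisson process."""
    n = rng.poisson(rate_hz * duration_s)
    return np.sort(rng.uniform(0.0, duration_s, n))

def apply_nonparalysable(times, tau):
    """Keep an event only if it arrives >= tau after the last *accepted* event."""
    kept, last = [], -np.inf
    for t in times:
        if t - last >= tau:
            kept.append(t)
            last = t
    return np.array(kept)

def apply_paralysable(times, tau):
    """Keep an event only if it arrives >= tau after the *previous* event
    (each event extends the dead period, whether or not it was recorded)."""
    dt = np.diff(times, prepend=-np.inf)
    return times[dt >= tau]

train = poisson_train(rate_hz=5e5, duration_s=1.0)   # illustrative 500 kHz singles rate
tau = 100e-9                                         # illustrative 100 ns dead-time
print(len(train),
      len(apply_nonparalysable(train, tau)),
      len(apply_paralysable(train, tau)))
```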
Very low-dose adult whole-body tumor imaging with F-18 FDG PET/CT
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Naveed, Muhammad; McGrath, Mary; Lisi, Michele; Lavalley, Cathy; Feiglin, David
2015-03-01
The aim of this study was to evaluate whether the effective radiation dose due to the PET component in adult whole-body tumor imaging with time-of-flight F-18 FDG PET/CT could be significantly reduced. We retrospectively analyzed data for 10 patients with body mass indices ranging from 25 to 50. We simulated F-18 FDG dose reduction to 25% of the ACR-recommended dose via reconstruction of simulated shorter acquisition times per bed position from the acquired list-mode data. F-18 FDG whole-body scans were reconstructed using a time-of-flight OSEM algorithm with advanced system modeling. Two groups of images were obtained: group A with the standard dose of F-18 FDG and standard reconstruction parameters, and group B with the simulated 25% dose and modified reconstruction parameters. Three nuclear medicine physicians blinded to the simulated activity independently reviewed the images and compared their diagnostic quality. Based on the physicians' input, we selected optimal modified reconstruction parameters for group B. In the images so obtained, all lesions observed in group A were also visible in group B. The tumor SUV values differed between groups A and B; however, no significant differences were reported in the final interpretation of the images from the two groups. In conclusion, for a small number of patients, we have demonstrated that F-18 FDG dose reduction to 25% of the ACR-recommended dose, accompanied by appropriate modification of the reconstruction parameters, provided adequate diagnostic quality of PET images acquired on a time-of-flight PET/CT.
On the use of programmable hardware and reduced numerical precision in earth-system modeling.
Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N
2015-09-01
Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
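For readers unfamiliar with the test problem, the following sketch integrates the two-scale Lorenz '95 system and emulates reduced precision by truncating mantissa bits after each tendency evaluation; the parameter values are the commonly used ones (an assumption, not necessarily those of the study), and the bit truncation is a software stand-in for FPGA arithmetic, not the authors' hardware implementation.

```python
import numpy as np

# Two-scale Lorenz '95 model: K slow variables X, J fast variables Y per X,
# integrated with 4th-order Runge-Kutta.  Reduced precision is emulated by
# rounding tendencies to a reduced number of mantissa bits.
K, J = 36, 10
F, h, c, b = 10.0, 1.0, 10.0, 10.0      # commonly used parameter values (assumption)

def tendencies(X, Y):
    sumY = Y.reshape(K, J).sum(axis=1)
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2))
          - X + F - (h * c / b) * sumY)
    dY = (c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2))
          - c * Y + (h * c / b) * np.repeat(X, J))
    return dX, dY

def reduce_precision(x, bits=10):
    """Crude emulation of a shorter significand: round to 'bits' mantissa bits."""
    m, e = np.frexp(x)
    return np.ldexp(np.round(m * 2**bits) / 2**bits, e)

def rk4_step(X, Y, dt, bits=None):
    def f(state):
        dX, dY = tendencies(state[:K], state[K:])
        d = np.concatenate([dX, dY])
        return reduce_precision(d, bits) if bits else d
    s = np.concatenate([X, Y])
    k1 = f(s); k2 = f(s + 0.5*dt*k1); k3 = f(s + 0.5*dt*k2); k4 = f(s + dt*k3)
    s = s + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)
    return s[:K], s[K:]

X = F + 0.1 * np.random.randn(K)
Y = 0.01 * np.random.randn(K * J)
for _ in range(500):
    X, Y = rk4_step(X, Y, dt=0.001, bits=10)   # 10 mantissa bits as an example
```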
NASA Astrophysics Data System (ADS)
Furton, Kenneth G.; Almirall, Jose R.; Wang, Jing
1999-02-01
In this paper, we present data comparing a variety of conditions for extracting ignitable liquid residues from simulated fire debris samples in order to optimize the conditions for Solid Phase Microextraction (SPME). A simulated accelerant mixture containing 30 components, including those from light, medium and heavy petroleum distillates, was used to study the important variables controlling SPME recoveries. SPME is an inexpensive, rapid and sensitive method for the analysis of volatile residues from the headspace over solid debris samples in a container, or directly from aqueous samples, followed by GC. The relative effects of the controllable variables, including fiber chemistry, adsorption and desorption temperature, extraction time, and desorption time, have been optimized. The addition of water and ethanol to simulated debris samples in a can was shown to increase sensitivity when using headspace SPME extraction. The relative enhancement of sensitivity has been compared as a function of hydrocarbon chain length, sample temperature, time, and added ethanol concentration. The technique has also been optimized for the extraction of accelerants directly from water added to the fire debris samples. The optimum adsorption time for the low-molecular-weight components was found to be approximately 25 minutes. The high-molecular-weight components were found at higher concentrations the longer the fiber was exposed to the headspace (up to 1 hr). The higher-molecular-weight components were also found at higher concentrations in the headspace when water and/or ethanol was added to the debris.
Lin, Zibei; Cogan, Noel O I; Pembleton, Luke W; Spangenberg, German C; Forster, John W; Hayes, Ben J; Daetwyler, Hans D
2016-03-01
Genomic selection (GS) provides an attractive option for accelerating genetic gain in perennial ryegrass improvement given the long cycle times of most current breeding programs. The present study used simulation to investigate the level of genetic gain and inbreeding obtained from GS breeding strategies compared with traditional breeding strategies for key traits (persistency, yield, and flowering time). Base population genomes were simulated through random mating for 60,000 generations at an effective population size of 10,000. The degree of linkage disequilibrium (LD) in the resulting population was compared with that obtained from empirical studies. Initial parental varieties were simulated to match the diversity of current commercial cultivars. Genomic selection was designed to fit into a company breeding program at two selection points in the breeding cycle (spaced plants and miniplot). Genomic estimated breeding values (GEBVs) for productivity traits were trained with phenotypes and genotypes from plots. The accuracy of GEBVs was 0.24 for persistency and 0.36 for yield for single plants, while for plots it was lower (0.17 and 0.19, respectively). Higher accuracy of GEBVs was obtained for flowering time (up to 0.7), partially as a result of the larger reference population size available from the clonal row stage. The availability of GEBVs permits a 4-yr reduction in cycle time, which led to at least a doubling and trebling of genetic gain for persistency and yield, respectively, compared with the traditional program. However, a higher rate of inbreeding per cycle among varieties was also observed for the GS strategy. Copyright © 2016 Crop Science Society of America.
Lansberg, Maarten G; Bhat, Ninad S; Yeatts, Sharon D; Palesch, Yuko Y; Broderick, Joseph P; Albers, Gregory W; Lai, Tze L; Lavori, Philip W
2016-12-01
Adaptive trial designs that allow enrichment of the study population through subgroup selection can increase the chance of a positive trial when there is a differential treatment effect among patient subgroups. The goal of this study is to illustrate the potential benefit of adaptive subgroup selection in endovascular stroke studies. We simulated the performance of a trial design with adaptive subgroup selection and compared it with that of a traditional design. Outcome data were based on 90-day modified Rankin Scale scores, observed in IMS III (Interventional Management of Stroke III), among patients with a vessel occlusion on baseline computed tomographic angiography (n=382). Patients were categorized based on 2 methods: (1) according to location of the arterial occlusive lesion and onset-to-randomization time and (2) according to onset-to-randomization time alone. The power to demonstrate a treatment benefit was based on 10 000 trial simulations for each design. The treatment effect was relatively homogeneous across categories when patients were categorized based on arterial occlusive lesion and time. Consequently, the adaptive design had similar power (47%) compared with the fixed trial design (45%). There was a differential treatment effect when patients were categorized based on time alone, resulting in greater power with the adaptive design (82%) than with the fixed design (57%). These simulations, based on real-world patient data, indicate that adaptive subgroup selection has merit in endovascular stroke trials as it substantially increases power when the treatment effect differs among subgroups in a predicted pattern. © 2016 American Heart Association, Inc.
Time domain simulation of the response of geometrically nonlinear panels subjected to random loading
NASA Technical Reports Server (NTRS)
Moyer, E. Thomas, Jr.
1988-01-01
The response of composite panels subjected to random pressure loads large enough to cause geometrically nonlinear responses is studied. A time domain simulation is employed to solve the equations of motion. An adaptive time stepping algorithm is employed to minimize intermittent transients. A modified algorithm for the prediction of response spectral density is presented which predicts smooth spectral peaks for discrete time histories. Results are presented for a number of input pressure levels and damping coefficients. Response distributions are calculated and compared with the analytical solution of the Fokker-Planck equations. RMS response is reported as a function of input pressure level and damping coefficient. Spectral densities are calculated for a number of examples.
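The spectral-density step can be illustrated with a standard Welch-type estimate from a discrete response history, shown below; this is not the authors' modified smooth-peak algorithm, only the conventional baseline against which such a modification would be compared, and the signal is synthetic.

```python
import numpy as np

def welch_psd(x, dt, nseg=8):
    """Simple Welch estimate of a one-sided PSD from a discrete time history
    (non-overlapping Hann-windowed segments; DC/Nyquist doubling ignored)."""
    n = len(x) // nseg
    win = np.hanning(n)
    scale = dt / (win**2).sum()
    spectra = []
    for i in range(nseg):
        seg = x[i*n:(i+1)*n] * win
        spectra.append(np.abs(np.fft.rfft(seg - seg.mean()))**2 * scale)
    freqs = np.fft.rfftfreq(n, dt)
    return freqs, 2.0 * np.mean(spectra, axis=0)

dt = 1e-3
t = np.arange(0, 8.192, dt)
x = np.sin(2*np.pi*40*t) + 0.5*np.random.randn(len(t))   # toy response history
f, pxx = welch_psd(x, dt)
```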
NASA Astrophysics Data System (ADS)
Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik
2018-05-01
Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing the flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation through the graphic processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls with different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) with a speed compatible with semi real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.
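A minimal single-resolution sketch of the underlying pressure-velocity FDTD update is given below; the multi-resolution gridding, GPU parallelization, and skull heterogeneity of the actual method are omitted, and the grid, medium, and source values are illustrative.

```python
import numpy as np

# Minimal 2-D acoustic FDTD update (pressure and particle velocity on a
# staggered grid) in a homogeneous water-like medium.
c0, rho0 = 1500.0, 1000.0          # sound speed (m/s) and density (kg/m^3)
dx = 0.5e-3                        # 0.5 mm grid spacing
dt = 0.5 * dx / (c0 * np.sqrt(2))  # CFL-limited time step
nx = ny = 400
f0 = 250e3                         # source frequency matching a 250 kHz transducer

p = np.zeros((nx, ny))
vx = np.zeros((nx + 1, ny))
vy = np.zeros((nx, ny + 1))

for n in range(600):
    # update particle velocities from pressure gradients
    vx[1:-1, :] -= dt / (rho0 * dx) * (p[1:, :] - p[:-1, :])
    vy[:, 1:-1] -= dt / (rho0 * dx) * (p[:, 1:] - p[:, :-1])
    # update pressure from the velocity divergence
    p -= rho0 * c0**2 * dt / dx * (vx[1:, :] - vx[:-1, :] + vy[:, 1:] - vy[:, :-1])
    # continuous-wave point source at an assumed transducer location
    p[nx // 2, 20] += np.sin(2 * np.pi * f0 * n * dt)
```

A multi-resolution scheme of the kind described would nest a fine grid like this one around the focus inside a coarser grid elsewhere, exchanging field values at the interface.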
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2012-01-01
This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust and comparing with the theoretical result. The present simulations are also compared with other CFD gust simulations. This paper also serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced-order gust model, using a gust with a Gaussian profile in the FUN3D code, is presented. ARMA-simulated results for a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced-order model and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
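The reduced-order idea can be sketched with a plain least-squares ARX fit, a simple stand-in for the ARMA identification described above (not the FUN3D implementation): coefficients are identified from one input/output pair and then used to predict the response to a new gust profile. The training data below are synthetic.

```python
import numpy as np

def fit_arx(u, y, na=4, nb=4):
    """Least-squares fit of y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]."""
    n0 = max(na, nb)
    rows = [np.concatenate([y[k-na:k][::-1], u[k-nb:k][::-1]]) for k in range(n0, len(y))]
    rhs = y[n0:]
    theta, *_ = np.linalg.lstsq(np.array(rows), rhs, rcond=None)
    return theta[:na], theta[na:]

def simulate_arx(a, b, u):
    na, nb = len(a), len(b)
    y = np.zeros(len(u))
    for k in range(max(na, nb), len(u)):
        y[k] = a @ y[k-na:k][::-1] + b @ u[k-nb:k][::-1]
    return y

# Identify the model from a Gaussian gust and a synthetic stand-in response,
# then predict the response to a one-minus-cosine gust.
t = np.linspace(0.0, 2.0, 2001)
u_train = np.exp(-((t - 0.5) / 0.05) ** 2)
y_train = np.convolve(u_train, np.exp(-t / 0.1), mode="full")[:len(t)] * (t[1] - t[0])
a, b = fit_arx(u_train, y_train)
u_new = np.where((t > 0.5) & (t < 0.9),
                 0.5 * (1.0 - np.cos(2*np.pi*(t - 0.5)/0.4)), 0.0)
y_pred = simulate_arx(a, b, u_new)
```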
NASA Technical Reports Server (NTRS)
Kim, J.-H.; Sud, Y. C.
1993-01-01
A 10-year (1979-1988) integration of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) under the Atmospheric Model Intercomparison Project (AMIP) is analyzed and compared with observations. The first-moment fields of circulation variables and also hydrological variables including precipitation, evaporation, and soil moisture are presented. Our goals are (1) to produce a benchmark documentation of the GLA GCM for future model improvements; (2) to examine systematic errors between the simulated and the observed circulation, precipitation, and hydrologic cycle; (3) to examine the interannual variability of the simulated atmosphere and compare it with observations; and (4) to examine the ability of the model to capture the major climate anomalies in response to events such as El Nino and La Nina. The 10-year mean seasonal and annual simulated circulation is quite reasonable compared to the analyzed circulation, except in the polar regions and areas of high orography. Precipitation over the tropics is quite well simulated, and the signal of El Nino/La Nina episodes can be easily identified. The time series of evaporation and soil moisture in the 12 biomes of the biosphere also show reasonable patterns compared to the estimated evaporation and soil moisture.
NASA Astrophysics Data System (ADS)
Goldsmith, K. J. A.; Pittard, J. M.
2018-05-01
The similarities, or otherwise, of a shock or wind interacting with a cloud of density contrast χ = 10 were explored in a previous paper. Here, we investigate such interactions with clouds of higher density contrast. We compare the adiabatic hydrodynamic interaction of a Mach 10 shock with a spherical cloud of χ = 10³ with that of a cloud embedded in a wind with identical parameters to the post-shock flow. We find that initially there are only minor morphological differences between the shock-cloud and wind-cloud interactions, compared to when χ = 10. However, once the transmitted shock exits the cloud, the development of a turbulent wake and fragmentation of the cloud differs between the two simulations. On increasing the wind Mach number, we note the development of a thin, smooth tail of cloud material, which is then disrupted by the fragmentation of the cloud core and subsequent 'mass-loading' of the flow. We find that the normalized cloud mixing time (t_mix) is shorter at higher χ. However, a strong Mach number dependence on t_mix and the normalized cloud drag time, t'_drag, is not observed. Mach-number-dependent values of t_mix and t'_drag from comparable shock-cloud interactions converge towards the Mach-number-independent time-scales of the wind-cloud simulations. We find that high-χ clouds can be accelerated up to 80-90 per cent of the wind velocity and travel large distances before being significantly mixed. However, complete mixing is not achieved in our simulations and at late times the flow remains perturbed.
NASA Technical Reports Server (NTRS)
White, Warren B.; Tai, Chang-Kou; Holland, William R.
1990-01-01
The optimal interpolation method of Lorenc (1981) was used to conduct continuous assimilation of altimetric sea level differences from the simulated Geosat exact repeat mission (ERM) into a three-layer quasi-geostrophic eddy-resolving numerical ocean box model that simulates the statistics of mesoscale eddy activity in the western North Pacific. Assimilation was conducted continuously as the Geosat tracks appeared in simulated real time/space, with each track repeating every 17 days but occurring at different times and locations within the 17-day period, as would have occurred in a realistic nowcast situation. This interpolation method was also used to conduct the assimilation of referenced altimetric sea level differences into the same model, performing the referencing of the altimetric sea level differences by using the simulated sea level. The results of this dynamical interpolation procedure are compared with those of a statistical (i.e., optimum) interpolation procedure.
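The analysis step of optimal interpolation has the familiar form xa = xb + K(yo - H xb) with gain K = B Hᵀ(H B Hᵀ + R)⁻¹. The toy sketch below applies it to a three-point sea-level state with two altimetric observations; it illustrates only the update equation, not Lorenc (1981)'s full scheme or the track-by-track assimilation used in the study, and all numbers are illustrative.

```python
import numpy as np

def oi_update(xb, B, yo, H, R):
    """Optimal-interpolation analysis: xa = xb + K(yo - H xb),
    with gain K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (yo - H @ xb), K

# Toy example: 3 model sea-level grid points, 2 altimetric observations.
xb = np.array([0.10, 0.05, -0.02])                  # background (model) state, m
idx = np.arange(3)
B = 0.02**2 * np.exp(-np.abs(np.subtract.outer(idx, idx)) / 2.0)  # background covariance
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])                     # obs operator: samples grid points
yo = np.array([0.14, 0.01])                         # observed sea-level anomalies, m
R = 0.03**2 * np.eye(2)                             # observation-error covariance
xa, K = oi_update(xb, B, yo, H, R)
```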
Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu
2014-12-01
High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide sub-finite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared with previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.
Insights into the paleoclimate of the PETM from an ensemble of EMIC simulations
NASA Astrophysics Data System (ADS)
Keery, John; Holden, Philip; Edwards, Neil; Monteiro, Fanny; Ridgwell, Andy
2016-04-01
The Eocene epoch, and in particular the Paleocene-Eocene Thermal Maximum (PETM) of 55.8 Ma, exhibits several features of particular interest for probing our understanding of the Earth system and carbon cycle. CO2 levels have not yet been definitively established, but are known to have varied considerably, peaking at up to several times modern values. Temperatures were several degrees higher than in the modern era, and there were periods of relatively rapid warming, with substantial variability in carbon cycle processes. The Eocene is therefore highly relevant to our understanding of the climate of the 21st century. Earth system models of intermediate complexity (EMICs), with less detailed simulation of the dynamics of the atmosphere and oceans than general circulation models (GCMs), are sufficiently fast to allow climate modelling over long periods of geological time in comparatively short periods of computer run-time. This speed advantage of EMICs over GCMs permits an ensemble of model simulations to be run, allowing statistical analysis of the results and estimation of the uncertainties in model predictions. Here we apply the EMICs PLASIM-GENIE and GENIE-1 with an Eocene paleogeography that incorporates the major continental configurations and ocean connections, including a shallow strait linking the Arctic to the Tethys, but with neither the Tasman Gateway nor the Drake Passage yet open. Our two-model strategy benefits from the detailed simulation of ocean biogeochemistry in GENIE-1 and the 3D spectral atmospheric dynamics in PLASIM-GENIE, which also provides boundary conditions for the GENIE-1 simulations. Using a 50-member ensemble of 1000-year quasi-equilibrium simulations with PLASIM-GENIE, we investigate the relative contributions of orbital and CO2 variability to climate and equator-pole temperature gradients. Results from PLASIM-GENIE are used to configure a harmonised ensemble of GENIE-1 simulations, which will be compared with newly obtained geochemical data on ocean oxygenation through the Eocene from the UK NERC RESPIRE project.
Development of a Searchable Database of Cryoablation Simulations for Use in Treatment Planning.
Boas, F Edward; Srimathveeravalli, Govindarajan; Durack, Jeremy C; Kaye, Elena A; Erinjeri, Joseph P; Ziv, Etay; Maybody, Majid; Yarmohammadi, Hooman; Solomon, Stephen B
2017-05-01
To create and validate a planning tool for multiple-probe cryoablation, using simulations of ice ball size and shape for various ablation probe configurations, ablation times, and types of tissue ablated. Ice ball size and shape were simulated using the Pennes bioheat equation. Five thousand six hundred and seventy different cryoablation procedures were simulated, using 1-6 cryoablation probes and 1-2 cm spacing between probes. The resulting ice ball was measured along three perpendicular axes and recorded in a database. Simulated ice ball sizes were compared to gel experiments (26 measurements) and clinical cryoablation cases (42 measurements). The clinical cryoablation measurements were obtained from a HIPAA-compliant retrospective review of kidney and liver cryoablation procedures between January 2015 and February 2016. Finally, we created a web-based cryoablation planning tool, which uses the cryoablation simulation database to look up the probe spacing and ablation time that produce the desired ice ball shape and dimensions. The average absolute error between the simulated and experimentally measured ice balls was 1 mm in gel experiments and 4 mm in clinical cryoablation cases. The simulations accurately predicted the degree of synergy in multiple-probe ablations. The cryoablation simulation database covers a wide range of ice ball sizes and shapes up to 9.8 cm. Cryoablation simulations accurately predict the ice ball size in multiple-probe ablations. The cryoablation database can be used to plan ablation procedures: given the desired ice ball size and shape, it will find the number and type of probes, probe configuration and spacing, and ablation time required.
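For orientation, the sketch below solves a 1-D explicit finite-difference form of the Pennes bioheat equation with a fixed cryoprobe boundary temperature; it neglects the latent heat of freezing and the 3-D multi-probe geometry of the actual simulations, and all tissue parameters are illustrative.

```python
import numpy as np

# Explicit 1-D finite-difference solution of the Pennes bioheat equation
#   rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + q_met
# with a fixed cryoprobe temperature at x = 0.  All values are illustrative.
k, rho, c = 0.5, 1000.0, 3600.0        # tissue conductivity, density, heat capacity
w_b, rho_b, c_b = 5e-4, 1060.0, 3600.0 # perfusion rate (1/s), blood density, capacity
T_a, q_met = 37.0, 400.0               # arterial temperature (C), metabolic heat (W/m^3)
T_probe = -140.0                       # cryoprobe surface temperature (C)

dx, L = 0.5e-3, 0.05                   # 0.5 mm grid over a 5 cm domain
nx = int(L / dx) + 1
dt = 0.25 * rho * c * dx**2 / k        # stable explicit time step
T = np.full(nx, 37.0)

for step in range(int(600.0 / dt)):    # 10 minutes of freezing
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
    T += dt / (rho * c) * (k * lap + w_b * rho_b * c_b * (T_a - T) + q_met)
    T[0] = T_probe                     # probe boundary
    T[-1] = 37.0                       # far-field body temperature

iceball_extent = dx * np.argmax(T > 0.0)   # first node above 0 C (crude estimate)
```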
GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.
Liu, Yangchuan; Tang, Yuguo; Gao, Xin
2017-12-01
The GATE Monte Carlo simulation platform has good application prospects for treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server. The cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, time decreases of 41× and 32× were achieved compared to the single-worker-node case and the single-threaded case, respectively. A test of Hadoop's fault tolerance showed that the simulation correctness was not affected by the failure of some worker nodes. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
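A minimal Hadoop Streaming mapper/reducer pair in the spirit of this approach is sketched below: the mapper runs GATE on a sub-macro named on each input line and emits per-voxel dose contributions, and the reducer sums them. The "Gate <macro>" invocation, the dose-file naming, and the output format are assumptions for illustration, not the authors' actual scripts.

```python
#!/usr/bin/env python3
# mapper.py -- Hadoop Streaming mapper: each stdin line names one GATE sub-macro.
# The sub-macro is executed and a hypothetical per-voxel dose output file is
# emitted as tab-separated "voxel_index <TAB> dose" pairs.
import subprocess
import sys

for line in sys.stdin:
    macro = line.strip()
    if not macro:
        continue
    subprocess.run(["Gate", macro], check=True)       # assumed 'Gate <macro>' invocation
    dose_file = macro.replace(".mac", "-Dose.txt")    # hypothetical output naming
    with open(dose_file) as f:
        for row in f:
            voxel, dose = row.split()[:2]             # hypothetical two-column format
            print(f"{voxel}\t{dose}")

# --------------------------------------------------------------------------
#!/usr/bin/env python3
# reducer.py -- Hadoop Streaming reducer: sums the partial doses per voxel.
# import sys
# from collections import defaultdict
#
# totals = defaultdict(float)
# for line in sys.stdin:
#     voxel, dose = line.rstrip("\n").split("\t")
#     totals[voxel] += float(dose)
# for voxel in sorted(totals, key=int):
#     print(f"{voxel}\t{totals[voxel]:.6g}")
```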
Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah
Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplan, C.R.; Shaddix, C.R.; Smyth, K.C.
This paper presents time-dependent numerical simulations of both steady and time-varying CH4/air diffusion flames to examine the differences in combustion conditions which lead to the observed enhancement in soot production in the flickering flames. The numerical model solves the two-dimensional, time-dependent, reactive-flow Navier-Stokes equations coupled with submodels for soot formation and radiation transport. Qualitative comparisons between the experimental and computed steady flame show good agreement for the soot burnout height and overall flame shape except near the burner lip. Quantitative comparisons between experimental and computed radial profiles of temperature and soot volume fraction for the steady flame show good to excellent agreement at mid-flame heights, but some discrepancies near the burner lip and at high flame heights. For the time-varying CH4/air flame, the simulations successfully predict that the maximum soot concentration increases by over four times compared to the steady flame with the same mean fuel and air velocities. By numerically tracking fluid parcels in the flowfield, the temperature and stoichiometry history were followed along their convective pathlines. Results for the pathline which passes through the maximum sooting region show that flickering flames exhibit much longer residence times during which the local temperatures and stoichiometries are favorable for soot production. The simulations also suggest that soot inception occurs later in flickering flames, and at slightly higher temperatures and under somewhat leaner conditions compared to the steady flame. The integrated soot model of Syed et al., which was developed from a steady CH4/air flame, successfully predicts soot production in the time-varying CH4/air flames.
Mi, Xiaojuan; Hammill, Bradley G; Curtis, Lesley H; Lai, Edward Chia-Cheng; Setoguchi, Soko
2016-11-20
Observational comparative effectiveness and safety studies are often subject to immortal person-time, a period of follow-up during which outcomes cannot occur because of the treatment definition. Common approaches, like excluding immortal time from the analysis or naïvely including immortal time in the analysis, are known to result in biased estimates of treatment effect. Other approaches, such as the Mantel-Byar and landmark methods, have been proposed to handle immortal time. Little is known about the performance of the landmark method in different scenarios. We conducted extensive Monte Carlo simulations to assess the performance of the landmark method compared with other methods in settings that reflect realistic scenarios. We considered four landmark times for the landmark method. We found that the Mantel-Byar method provided unbiased estimates in all scenarios, whereas the exclusion and naïve methods resulted in substantial bias when the hazard of the event was constant or decreased over time. The landmark method performed well in correcting immortal person-time bias in all scenarios when the treatment effect was small, and provided unbiased estimates when there was no treatment effect. The bias associated with the landmark method tended to be small when the treatment rate was higher in the early follow-up period than it was later. These findings were confirmed in a case study of chronic obstructive pulmonary disease. Copyright © 2016 John Wiley & Sons, Ltd.
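The mechanics of immortal-time bias, and how landmarking avoids it, can be seen in a small Monte Carlo sketch: with no true treatment effect, attributing the pre-treatment (immortal) follow-up to the treated group yields a spuriously protective rate ratio, while classifying exposure at a landmark time does not. This is a simplified rate-based illustration, not the authors' Cox-model simulation design; all rates and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
base_rate = 0.10              # events per person-year, no true treatment effect
follow_up = 5.0
landmark = 1.0                # landmark time (years)

event_time = rng.exponential(1.0 / base_rate, n)
treat_time = rng.exponential(2.0, n)                       # time treatment would start
treated = treat_time < np.minimum(event_time, follow_up)   # treated only if still event-free

t_end = np.minimum(event_time, follow_up)
event = event_time <= follow_up

def rate(person_time, events):
    return events.sum() / person_time.sum()

# Naive analysis: the treated group's full follow-up, including the immortal
# waiting time before treatment started, is attributed to "treated".
rr_naive = (rate(t_end[treated], event[treated])
            / rate(t_end[~treated], event[~treated]))

# Landmark analysis: exposure is classified at the landmark among patients
# still event-free, and follow-up is counted from the landmark onwards.
alive = event_time > landmark
lm_treated = treated & (treat_time <= landmark)
pt = t_end - landmark
rr_lm = (rate(pt[alive & lm_treated], event[alive & lm_treated])
         / rate(pt[alive & ~lm_treated], event[alive & ~lm_treated]))

print(f"true RR = 1.0, naive RR = {rr_naive:.2f}, landmark RR = {rr_lm:.2f}")
```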
NASA Astrophysics Data System (ADS)
Powolny, F.; Auffray, E.; Brunner, S. E.; Garutti, E.; Goettlich, M.; Hillemanns, H.; Jarron, P.; Lecoq, P.; Meyer, T.; Schultz-Coulon, H. C.; Shen, W.; Williams, M. C. S.
2011-06-01
Time of flight (TOF) measurements in positron emission tomography (PET) are very challenging in terms of timing performance, and should ideally achieve less than 100 ps FWHM precision. We present a time-based differential technique to read out silicon photomultipliers (SiPMs) which has less than 20 ps FWHM electronic jitter. The novel readout is a fast front end circuit (NINO) based on a first stage differential current mode amplifier with 20 Ω input resistance. Therefore the amplifier inputs are connected differentially to the SiPM's anode and cathode ports. The leading edge of the output signal provides the time information, while the trailing edge provides the energy information. Based on a Monte Carlo photon-generation model, HSPICE simulations were run with a 3 × 3 mm² SiPM model, read out with a differential current amplifier. The results of these simulations are presented here and compared with experimental data obtained with a 3 × 3 × 15 mm³ LSO crystal coupled to a SiPM. The measured time coincidence precision and the limitations in the overall timing accuracy are interpreted using Monte Carlo/SPICE simulation, Poisson statistics, and geometric effects of the crystal.
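A toy version of such a photon-generation timing model is sketched below: scintillation photon detection times are drawn from a bi-exponential pulse shape with single-photon transit-time spread, a leading-edge trigger is taken as the arrival of the n-th detected photon, and the coincidence spread between two detectors is accumulated. The photon yield, decay constants, and threshold are illustrative, and the electronics (NINO/HSPICE) model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def detection_times(n_mean=3000, tau_rise=70e-12, tau_decay=40e-9, tts_sigma=100e-12):
    """Photon detection times for one 511 keV event: a bi-exponential scintillation
    pulse (sampled as the sum of two exponentials) plus Gaussian transit-time spread."""
    n = rng.poisson(n_mean)
    t = rng.exponential(tau_decay, n) + rng.exponential(tau_rise, n)
    return np.sort(t + rng.normal(0.0, tts_sigma, n))

def leading_edge_time(times, n_threshold=5):
    """Time at which the n-th detected photon arrives (simple leading-edge model)."""
    return times[n_threshold - 1]

deltas = np.array([
    leading_edge_time(detection_times()) - leading_edge_time(detection_times())
    for _ in range(5000)
])
fwhm_ps = 2.355 * deltas.std() * 1e12
print(f"coincidence timing FWHM ~ {fwhm_ps:.0f} ps")
```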
Verification of Reproduction Simulation of the 2011 Great East Japan Tsunami Using Time-Stamp Data
NASA Astrophysics Data System (ADS)
Honma, Motohiro; Ushiyama, Motoyuki
2014-05-01
In the 2011 off the Pacific coast of Tohoku earthquake, a large tsunami caused significant damage and loss of life in the Pacific coastal areas of northern Japan. It is important to understand the tsunami inundation in detail in order to establish effective disaster-prevention measures. In this study, we calculated a detailed tsunami inundation simulation for Rikuzentakata city and verified the results using not only static observations, such as the inundation area and the tsunami heights estimated from traces, but also time-stamp data recorded by digital cameras and similar devices. We calculated the tsunami simulation with non-linear long-wave theory using a staggered grid and a leapfrog scheme, with the model (ver. 4.2) of Fujii and Satake (2011) as the tsunami source. The inundation model of Rikuzentakata city was constructed from fine 10-m-mesh ground elevation data. In this simulation, the shore and river banks were set on the boundaries of the calculation mesh, and two cases were calculated: in one, a bank does not collapse even if the tsunami overflows it; in the other, a bank collapses if the tsunami overflows it and the discharge exceeds a threshold. As "static" verification data we used the inundation area obtained by the Geospatial Information Authority of Japan (GSI) and the tsunami trace heights obtained by the 2011 Tohoku Earthquake Tsunami Joint Survey (TTJS) group. The simulated inundation area matches the GSI observation very well, and the correlation coefficient between the simulated tsunami heights and those observed by TTJS is 0.756. To verify the tsunami arrival time, we used the time-stamp data recorded by citizens' digital cameras and other devices; Ushiyama and Yokomaku (2012) collected these time-stamp data and estimated the arrival times in Rikuzentakata city. We compared the arrival times from the tsunami simulation with those estimated by Ushiyama and Yokomaku (2012) at several major points. The arrival time is 2-4 minutes earlier in the case where a bank collapses when the tsunami overflows it and the discharge exceeds 0.05 m²/s at each mesh boundary than in the case where the banks do not collapse. On the whole, the arrival times estimated from the time-stamp data accord with the results calculated under the bank-collapse condition. Using the time-stamp data, we could thus verify the reproducibility not only of the final inundation extent but also of the temporal evolution of the inundation. Acknowledgement: In this study, we used tsunami trace data obtained by the 2011 Tohoku Earthquake Tsunami Joint Survey (TTJS) Group. References: 1) Fujii and Satake: Tsunami Source of the Off Tohoku-Pacific Earthquake on March 11, 2011, http://iisee.kenken.go.jp/staff/fujii/OffTohokuPacific2011/tsunami_ja_ver4.2and4.6.html, 2011. 2) Ushiyama and Yokomaku: Estimation of situation in Rikuzentakata city just before tsunami attack based on time stamp data, J. JSNDS 31-1, pp. 47-58, 2012.
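The numerical core of such a model can be illustrated with the 1-D staggered-grid long-wave solver sketched below; it retains only the linear terms (no advection, bottom friction, wetting/drying, or collapsible banks) and uses illustrative depths and grid sizes, so it is a schematic of the scheme rather than the inundation model used in the study.

```python
import numpy as np

# Minimal 1-D staggered-grid solver for the linear long-wave (shallow-water)
# equations, the linear core of non-linear tsunami inundation schemes:
#   d(eta)/dt + dM/dx = 0,     dM/dt + g*D*d(eta)/dx = 0.
g = 9.81
nx, dx = 2000, 100.0                  # 200 km domain, 100 m cells
D = np.full(nx, 1000.0)               # still-water depth (m), flat for simplicity
dt = 0.5 * dx / np.sqrt(g * D.max())  # CFL-limited time step

x = np.arange(nx) * dx
eta = np.exp(-((x - 50e3) / 5e3) ** 2)   # 1 m initial sea-surface hump at 50 km
M = np.zeros(nx + 1)                     # volume flux at cell faces (closed ends)

gauge, arrival = 1500, None              # virtual gauge at 150 km
for n in range(4000):
    Dface = 0.5 * (D[1:] + D[:-1])
    M[1:-1] -= g * Dface * dt / dx * (eta[1:] - eta[:-1])   # momentum on faces
    eta -= dt / dx * (M[1:] - M[:-1])                        # continuity on centres
    if arrival is None and abs(eta[gauge]) > 0.01:
        arrival = (n + 1) * dt           # crude arrival-time estimate at the gauge

print(f"arrival at gauge ~ {arrival:.0f} s")
```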
NASA Astrophysics Data System (ADS)
Hayashi, K.; Tokumaru, M.; Kojima, M.; Fujiki, K.
2008-12-01
We present our new boundary treatment for introducing the temporal variation of observation-based magnetic field and plasma parameters on the inner boundary sphere (at 30 to 50 Rs) into an MHD simulation of interplanetary space, together with the simulation results. The boundary treatment for inducing the time variation of the magnetic field, including the radial component, is essentially the same as shown at our previous AGU meetings, and is newly modified so that the model can also include the variation of the plasma variables detected by IPS (interplanetary scintillation) observations, a ground-based remote-sensing technique for the solar wind plasma. We used WSO (Wilcox Solar Observatory at Stanford University) data for the solar magnetic field input. By using the time-varying boundary condition, smooth variations of heliospheric MHD variables over several Carrington solar rotation periods are obtained. The simulation movie will show how the changes in the inner heliosphere observable by ground-based instruments propagate outward and affect the outer heliosphere. The simulated MHD variables are compared with Ulysses in-situ measurements, including those made during its travel from the Earth to Jupiter, for validation, and we obtain better agreement than with simulations using fixed boundary conditions.
Mental time travel and the shaping of the human mind
Suddendorf, Thomas; Addis, Donna Rose; Corballis, Michael C.
2009-01-01
Episodic memory, enabling conscious recollection of past episodes, can be distinguished from semantic memory, which stores enduring facts about the world. Episodic memory shares a core neural network with the simulation of future episodes, enabling mental time travel into both the past and the future. The notion that there might be something distinctly human about mental time travel has provoked ingenious attempts to demonstrate episodic memory or future simulation in non-human animals, but we argue that they have not yet established a capacity comparable to the human faculty. The evolution of the capacity to simulate possible future events, based on episodic memory, enhanced fitness by enabling action in preparation for different possible scenarios that increased present or future survival and reproduction chances. Human language may have evolved in the first instance for the sharing of past and planned future events, and, indeed, fictional ones, further enhancing fitness in social settings. PMID:19528013
NASA Technical Reports Server (NTRS)
Miller, G. K., Jr.; Riley, D. R.
1978-01-01
The effect of secondary tasks in determining permissible time delays in visual-motion simulation of a pursuit tracking task was examined. A single subject, a single set of aircraft handling qualities, and a single motion condition were used in tracking a target aircraft that oscillates sinusoidally in altitude. In addition to the basic simulator delays, the results indicate that the permissible time delay is about 250 msec for either a tapping task, an adding task, or an audio task, which is approximately 125 msec less than when no secondary task is involved. The magnitudes of the primary task performance measures, however, differ only for the tapping task. A power spectral-density analysis basically confirms the result obtained by comparing the root-mean-square performance measures. For all three secondary tasks, the total pilot workload was quite high.