Jabor, A; Vlk, T; Boril, P
1996-04-15
We designed a simulation model for the assessment of the financial risks involved when a new diagnostic test is introduced in the laboratory. The model is based on a neural network consisting of ten neurons and assumes that input entities can be assigned an appropriate uncertainty. Simulations are done on a 1-day interval basis. Risk analysis completes the model, and the financial effects are evaluated over a selected time period. The basic output of the simulation consists of total expenses and income during the simulation time, the net present value of the project at the end of the simulation, the total number of control samples during the simulation, the total number of patients evaluated and the total number of kits used.
Simulating recurrent event data with hazard functions defined on a total time scale.
Jahn-Eimermacher, Antje; Ingel, Katharina; Ozga, Ann-Kathrin; Preussler, Stella; Binder, Harald
2015-03-08
In medical studies with recurrent event data, a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of some disease, in contrast to a gap time scale where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on conditional distributions of the inter-event times conditional on the total time of the preceding event or study start. Closed form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that particularly allows for complex study designs where no analytical sample size formulas might exist. The derived simulation algorithm is seen to be useful for the simulation of recurrent event data that follow an Andersen-Gill model. Next to the use of a total time scale, it allows for intra-patient correlation and risk-free intervals as are often observed in clinical trial data.
Its application therefore allows the simulation of data that closely resemble real settings and thus can improve the use of simulation studies for designing and analysing studies.
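The conditional-distribution idea above can be sketched for its simplest special case: a constant baseline hazard with a multiplicative gamma frailty, where the inter-event gap conditional on the total time of the previous event is exponential. This is a hypothetical illustration under assumed parameter names, not the paper's R script:

```python
import random

def simulate_recurrent_events(rate, followup, frailty_var=0.5, seed=1):
    """Simulate event times on a total time scale under a constant
    baseline hazard with a gamma-distributed patient frailty (one
    special case of the Andersen-Gill setup; names are illustrative)."""
    rng = random.Random(seed)
    # gamma frailty with mean 1 and variance frailty_var
    z = rng.gammavariate(1.0 / frailty_var, frailty_var)
    times, t = [], 0.0
    while True:
        # inter-event gap conditional on the previous total time:
        # for a constant hazard the gap is exponential with rate z*rate
        t += rng.expovariate(z * rate)
        if t > followup:
            break
        times.append(t)
    return times
```

For a non-constant hazard, the gap would instead be drawn by inverting the cumulative hazard between the previous event time and the new one.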
[Effects of high +Gx during simulated spaceship emergency return on learning and memory in rats].
Xu, Zhi-peng; Sun, Xi-qing; Liu, Ting-song; Wu, Bin; Zhang, Shu; Wu, Ping
2005-02-01
To observe the effects of high +Gx during simulated spaceship emergency return on learning and memory in rats. Thirty-two male SD rats were randomly divided into a control group, a 7 d simulated weightlessness group, a +15 Gx/180 s group and a +15 Gx/180 s exposure after 7 d simulated weightlessness group, with 8 rats in each group. Changes in learning and memory were measured after the stresses by means of the Y-maze test and the step-through test. In the Y-maze test, compared with the control group, the percentage of correct reactions decreased significantly (P<0.01) and reaction time increased significantly (P<0.01) in the hypergravity after simulated weightlessness group at all times after stress; compared with the +15 Gx group or the simulated weightlessness group, the percentage of correct reactions decreased significantly (P<0.05) and reaction time increased significantly (P<0.05) immediately after stress. In the step-through test, compared with the control group, total time increased significantly (P<0.01) in the hypergravity after simulated weightlessness group at 1 d after stress; latent time decreased significantly (P<0.01) and the number of errors increased significantly (P<0.01) at all times after stress. Compared with the +15 Gx group, total time increased significantly (P<0.05) immediately and 1 d after stress. Compared with the simulated weightlessness group, total time and the number of errors increased significantly (P<0.05) immediately after stress. These results suggest that +15 Gx/180 s and simulated weightlessness may affect learning and memory ability in rats, and that 7 d of simulated weightlessness can aggravate the effect of +Gx on learning and memory ability in rats.
Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2011-01-01
Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) ability to take arbitrary leaps in virtual time by VMs to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% under a non-simulation scheduler to less than 1% under our scheduler, with almost the same run time efficiency as that of highly efficient non-simulation VM schedulers.
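The core ordering idea, always dispatching the virtual core with the smallest virtual clock and letting each core leap forward in virtual time after its slice, can be sketched with a priority queue. This is a toy illustration under assumed data structures, not the authors' scheduler:

```python
import heapq

def time_ordered_schedule(vcores, quantum, horizon):
    """Toy simulation-time-ordered scheduler: always run the virtual
    core with the lowest virtual clock for one quantum, then advance
    (leap) its clock. vcores maps core name -> starting virtual time;
    names and structure are illustrative only."""
    heap = [(t, name) for name, t in vcores.items()]
    heapq.heapify(heap)
    trace = []
    while heap:
        vt, name = heapq.heappop(heap)
        if vt >= horizon:
            continue                      # core has run past the horizon
        trace.append((vt, name))          # dispatch in virtual-time order
        heapq.heappush(heap, (vt + quantum, name))
    return trace
```

Because each core is only ever re-inserted with a later virtual time, the dispatch trace is guaranteed to be nondecreasing in virtual time, which is the time-ordering property the paper measures.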
Martins-Costa, Marilia T C; Ruiz-López, Manuel F
2017-04-15
We report an enhanced sampling technique that makes it possible to reach the multi-nanosecond timescale in quantum mechanics/molecular mechanics molecular dynamics simulations. The proposed technique, called horsetail sampling, is a specific type of multiple molecular dynamics approach exhibiting high parallel efficiency. It couples a main simulation with a large number of shorter trajectories launched on independent processors at periodic time intervals. The technique is applied to study hydrogen peroxide at the water liquid-vapor interface, a system of considerable atmospheric relevance. A total simulation time of a little more than 6 ns has been attained for a total CPU time of 5.1 years, representing only about 20 days of wall-clock time. The discussion of the results highlights the strong influence of solvation effects at the interface on the structure and the electronic properties of the solute. © 2017 Wiley Periodicals, Inc.
Three-dimensional particle-particle simulations: Dependence of relaxation time on plasma parameter
NASA Astrophysics Data System (ADS)
Zhao, Yinjian
2018-05-01
A particle-particle simulation model is applied to investigate the dependence of the relaxation time on the plasma parameter in a three-dimensional unmagnetized plasma. It is found that the relaxation time increases linearly as the plasma parameter increases within the range of the plasma parameter from 2 to 10; when the plasma parameter equals 2, the relaxation time is independent of the total number of particles, but when the plasma parameter equals 10, the relaxation time slightly increases as the total number of particles increases, which indicates the transition of a plasma from collisional to collisionless. In addition, ions with initial Maxwell-Boltzmann (MB) distribution are found to stay in the MB distribution during the whole simulation time, and the mass of ions does not significantly affect the relaxation time of electrons. This work also shows the feasibility of the particle-particle model when using GPU parallel computing techniques.
TURBULENCE AND PROTON–ELECTRON HEATING IN KINETIC PLASMA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthaeus, William H; Parashar, Tulasi N; Wu, P.
2016-08-10
Analysis of particle-in-cell simulations of kinetic plasma turbulence reveals a connection between the strength of cascade, the total heating rate, and the partitioning of dissipated energy into proton heating and electron heating. A von Karman scaling of the cascade rate explains the total heating across several families of simulations. The proton to electron heating ratio increases in proportion to total heating. We argue that the ratio of gyroperiod to nonlinear turnover time at the ion kinetic scales controls the ratio of proton and electron heating. The proposed scaling is consistent with simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Chengguang; Drinkwater, Bruce W.
In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.
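For reference, the total focusing method itself is a delay-and-sum over full-matrix-capture data: every pixel is focused by summing each transmit-receive trace at the round-trip delay to that pixel. A minimal sketch with nearest-sample interpolation follows; the array geometry and variable names are illustrative, not the paper's setup:

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """Total focusing method (delay-and-sum) on full-matrix-capture
    data. fmc[tx, rx, t] are the time traces, elem_x the element
    positions, c the wave speed, fs the sampling rate. Minimal sketch
    with nearest-sample lookup, not an optimized implementation."""
    n = len(elem_x)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.hypot(elem_x - x, z)       # element-to-pixel distances
            for tx in range(n):
                for rx in range(n):
                    # round-trip delay tx -> pixel -> rx, in samples
                    s = int(round((d[tx] + d[rx]) / c * fs))
                    if s < fmc.shape[2]:
                        img[iz, ix] += fmc[tx, rx, s]
    return np.abs(img)
```

A point scatterer produces a peak at its true location because only there do all n² delay hypotheses line up with the recorded arrivals.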
Dynamic load balance scheme for the DSMC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jin; Geng, Xiangren; Jiang, Dingwu
The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been used over a wide range of rarefied flow problems in the past 40 years. While the DSMC is suitable for parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the total number of simulator particles upon it. Since most flows are impulsively started with an initial distribution of particles that is quite different from the steady state, the total number of simulator particles changes dramatically. A load balance based upon an initial distribution of particles will break down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and taking the total number of simulator particles in each cell as the weight information, a repartitioning based upon the principle that each processor handles approximately the same total number of simulator particles has been achieved. The computation pauses several times to renew the particle counts in each processor and repartition the whole domain again. Thus the load balance across the processor array holds for the duration of the computation, and the parallel efficiency can be improved effectively. The benchmark solution of a cylinder submerged in hypersonic flow has been simulated numerically. In addition, hypersonic flow past a complex wing-body configuration has also been simulated. The results show that, for both cases, the computational time can be reduced by about 50%.
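The repartitioning principle, equalizing the total number of simulator particles per processor, can be illustrated with a simple greedy heuristic standing in for the METIS graph partitioner the authors use (this sketch ignores cell adjacency, which METIS would preserve to keep communication low):

```python
import heapq

def repartition(cell_particles, n_procs):
    """Greedy load-balancing sketch: assign cells (weighted by their
    simulator-particle counts) to processors so that each processor
    handles roughly the same total number of particles. An LPT-style
    heuristic, illustrative only; no cell adjacency is considered."""
    heap = [(0, p) for p in range(n_procs)]   # (current load, processor)
    heapq.heapify(heap)
    assignment = {}
    # place heaviest cells first, always on the least-loaded processor
    for cell, weight in sorted(cell_particles.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)
        assignment[cell] = p
        heapq.heappush(heap, (load + weight, p))
    return assignment
```

Greedy placement onto the least-loaded processor guarantees the final load spread is no larger than the heaviest single cell, which is usually small relative to a processor's total.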
Visuospatial ability correlates with performance in simulated gynecological laparoscopy.
Ahlborg, Liv; Hedman, Leif; Murkes, Daniel; Westman, Bo; Kjellin, Ann; Felländer-Tsai, Li; Enochsson, Lars
2011-07-01
To analyze the relationship between visuospatial ability and simulated laparoscopy performed by consultants in obstetrics and gynecology (OBGYN). This was a prospective cohort study carried out at two community hospitals in Sweden. Thirteen consultants in obstetrics and gynecology were included. They had previously independently performed 10-100 advanced laparoscopies. Participants were tested for visuospatial ability by the Mental Rotations Test version A (MRT-A). After a familiarization session and standardized instruction, all participants subsequently conducted three consecutive virtual tubal occlusions followed by three virtual salpingectomies. Performance in the simulator was measured by Total Time, Score and Ovarian Diathermy Damage. Linear regression was used to analyze the relationship between visuospatial ability and simulated laparoscopic performance. The learning curves in the simulator were assessed in order to interpret the relationship with the visuospatial ability. Visuospatial ability correlated with Total Time (r=-0.62; p=0.03) and Score (r=0.57; p=0.05) in the medium level of the virtual tubal occlusion. In the technically more advanced virtual salpingectomy the visuospatial ability correlated with Total Time (r=-0.64; p=0.02), Ovarian Diathermy Damage (r=-0.65; p=0.02) and with overall Score (r=0.64; p=0.02). Visuospatial ability appears to be related to the performance of gynecological laparoscopic procedures in a simulator. Testing visuospatial ability might be helpful when designing individual training programs. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sharma, Pankaj; Jain, Ajai
2014-12-01
Stochastic dynamic job shop scheduling problems with sequence-dependent setup times are among the most difficult classes of scheduling problems. This paper assesses the performance of nine dispatching rules in such a shop in terms of makespan, mean flow time, maximum flow time, mean tardiness, maximum tardiness, number of tardy jobs, total setups and mean setup time. A discrete event simulation model of a stochastic dynamic job shop manufacturing system is developed for the investigation. Nine dispatching rules identified from the literature are incorporated in the simulation model. The simulation experiments are conducted under a due date tightness factor of 3, a shop utilization percentage of 90% and setup times less than processing times. Results indicate that the shortest setup time (SIMSET) rule provides the best performance for the mean flow time and number of tardy jobs measures. The job with similar setup and modified earliest due date (JMEDD) rule provides the best performance for the makespan, maximum flow time, mean tardiness, maximum tardiness, total setups and mean setup time measures.
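The SIMSET idea can be illustrated on a single machine with a sequence-dependent setup matrix: when setups depend on the previous job, picking the job with the smallest setup from the current state can shorten flow times relative to arrival-order dispatch. A toy sketch, not the paper's full job-shop simulation:

```python
def run_rule(jobs, setup, rule):
    """Single-machine dispatching sketch with sequence-dependent
    setups. jobs: {id: processing_time}; setup[a][b]: setup time when
    job b follows job a (state 0 = initial setup state). Returns mean
    flow time. Data layout is illustrative only."""
    t, prev, done, flow = 0.0, 0, dict(jobs), []
    while done:
        if rule == "SIMSET":                 # smallest setup time next
            j = min(done, key=lambda j: setup[prev][j])
        else:                                # FIFO by job id
            j = min(done)
        t += setup[prev][j] + done.pop(j)    # setup, then process
        flow.append(t)
        prev = j
    return sum(flow) / len(flow)
```

With two equal-length jobs where job 1 needs a long initial setup, SIMSET defers it and achieves a lower mean flow time than FIFO.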
Simulation and Analysis of EXPRESS Run Frequency
2013-12-01
indicator, Customer Wait Time (CWT), is a measure of total wait time for a customer from the time they submit a need until it is fulfilled... (Department of Defense 2000). MICAP hours is a special subset of CWT reserved for requirements that represent a mission capability need (i.e. an aircraft is...performance is tracked by total CWT and MICAP days, which are convertible to hours by multiplying by 24. CWT is tracked by measuring the total time
An algorithm for fast elastic wave simulation using a vectorized finite difference operator
NASA Astrophysics Data System (ADS)
Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna
2018-07-01
Modern geophysical imaging techniques exploit the full wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, a big derivative stencil and a huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we have introduced a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme, thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (the Marmousi model), by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and nearly 100 for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step, which depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
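The vectorization idea, updating all staggered-grid nodes at once with array slicing rather than explicit per-node loops, can be sketched as follows. This is a second-order illustration under assumed conventions; the paper's operator and stencil orders may differ:

```python
import numpy as np

def dx_forward(f, dx):
    """Vectorized forward difference on a staggered grid: one slicing
    expression updates every node at once, instead of a double loop
    over grid indices. Second-order illustration; the derivative lives
    at the half-nodes between samples of f along axis 0."""
    d = np.zeros_like(f)
    d[:-1, :] = (f[1:, :] - f[:-1, :]) / dx   # all half-nodes in one shot
    return d
```

In interpreted languages such as Python or MATLAB the speedup over explicit loops is largest, which is consistent with the factor-of-100 gain the abstract reports for Python.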
The Creation of a CPU Timer for High Fidelity Programs
NASA Technical Reports Server (NTRS)
Dick, Aidan A.
2011-01-01
Using the C and C++ programming languages, a tool was developed that measures the efficiency of a program by recording the amount of CPU time that various functions consume. By inserting the tool between lines of code in the program, one can receive a detailed report of the absolute and relative time consumption associated with each section. After adapting the generic tool for MAVERIC, a high-fidelity launch vehicle simulation program, the components of a frequently used function called "derivatives()" were measured. Of the 34 sub-functions in "derivatives()", the top 8 were found to account for 83.1% of the total time spent. In order to decrease the overall run time of MAVERIC, a change was implemented in the sub-function "Event_Controller()". Reformatting "Event_Controller()" led to a 36.9% decrease in the total CPU time spent by that sub-function, and a 3.2% decrease in the total CPU time spent by the overarching function "derivatives()".
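The section-timer idea, bracketing code sections and accumulating absolute and relative CPU time per label, can be sketched in Python with `time.process_time` (the original tool was written in C/C++; the class and method names here are illustrative):

```python
import time
from collections import defaultdict

class CpuTimer:
    """Minimal sketch of a section CPU timer: call start/stop around
    code sections, then report (absolute_seconds, percent_of_total)
    per label. Illustrative only; not the MAVERIC tool."""
    def __init__(self):
        self.totals = defaultdict(float)
        self._starts = {}

    def start(self, label):
        self._starts[label] = time.process_time()

    def stop(self, label):
        self.totals[label] += time.process_time() - self._starts.pop(label)

    def report(self):
        grand = sum(self.totals.values()) or 1.0   # avoid divide-by-zero
        return {k: (v, 100.0 * v / grand) for k, v in self.totals.items()}
```

The relative column is what identifies hot spots like the 8 sub-functions that dominated "derivatives()" above.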
NASA Astrophysics Data System (ADS)
Yeh, Mei-Ling
We have performed a parallel decomposition of the fictitious Lagrangian method for molecular dynamics with a tight-binding total energy expression on the hypercube computer. This is the first time in the literature that the dynamical simulation of semiconducting systems containing more than 512 silicon atoms has become possible with the electrons treated as quantum particles. With the utilization of the Intel Paragon system, our timing analysis predicts that our code can perform realistic simulations of very large systems consisting of thousands of atoms with time requirements of the order of tens of hours. Timing results and performance analysis of our parallel code are presented in terms of calculation time, communication time, and setup time. The accuracy of the fictitious Lagrangian method in molecular dynamics simulation is also investigated, especially the energy conservation of the total energy of the ions. We find that the accuracy of the fictitious Lagrangian scheme in small silicon cluster and very large silicon system simulations remains good for as long as the simulations proceed, even though we quench the electronic coordinates to the Born-Oppenheimer surface only at the beginning of the run. The kinetic energy of the electrons does not increase as time goes on, and the energy conservation of the ionic subsystem remains very good. This means that, as far as the ionic subsystem is concerned, the electrons are on average in the true quantum ground states. We also tie up some loose ends regarding a few remaining questions about the fictitious Lagrangian method, such as the difference between results obtained from the Gram-Schmidt and SHAKE methods of orthonormalization, and differences between simulations where the electrons are quenched to the Born-Oppenheimer surface only once compared with periodic quenching.
Fu, Shangxi; Liu, Xiao; Zhou, Li; Zhou, Meisheng; Wang, Liming
2017-08-01
The purpose of this study was to estimate the effect of a surgical laparoscopic operation course on laparoscopic operation skills after simulated training, using relatively objective data gained before and after the laparoscopic simulator practice course from resident standardized trainees. Experiment 1: 20 resident standardized trainees with no experience in laparoscopic surgery were included in the inexperienced group and performed simulated cholecystectomy according to simulator videos. Simulator data were collected (total operation time, path length, average speed of instrument movement, movement efficiency, number of perforations, the time cautery is applied without appropriate contact with adhesions, and number of serious complications). Ten attending doctors were included in the experienced group and performed simulated cholecystectomy directly; their data were likewise collected with the simulator, and the data of the two groups were compared. Experiment 2: Participants in the inexperienced group were assigned to a basic group (receiving 8 items of basic operation training) and a special group (receiving 8 items of basic operation training and 4 items of specialized training), with 10 participants in each group. Each group received the training course we designed. After the training level reached the expected target, simulated cholecystectomy was performed and data were collected. Data were compared between the basic and special groups, and then between the special and experienced groups. Results of experiment 1 showed a significant difference between the inexperienced group, in which participants performed simulated cholecystectomy only according to the instructors' teaching and operation video, and the experienced group.
Results of experiment 2 suggested that total operation time, number of perforations, number of serious complications, number of non-cauterized bleeding events and the time cautery is applied without appropriate contact with adhesions in the special group were all superior to those in the basic group; there was no statistical difference in the other data between the special and basic groups. Comparing the special group with the experienced group, total operation time and the time cautery is applied without appropriate contact with adhesions in the experienced group were superior to those in the special group; there was no statistical difference in the other data between the special and experienced groups. Laparoscopic simulators are effective for surgical skills training. Basic courses mainly improve the operator's hand-eye coordination and perception of instrument insertion depth. Specialized training courses not only improve the operator's familiarity with the surgery, but also reduce operation time and risk, and improve safety.
Radiotherapy Monte Carlo simulation using cloud computing technology.
Poole, C M; Cornelius, I; Trapp, J V; Langton, C M
2012-12-01
Cloud computing allows vast computational resources to be leveraged quickly and easily, in bursts, as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate as a proof of principle the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware.
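The stated cost behavior follows from per-machine-hour billing rounded up: n machines finish in total/n hours and cost n·⌈total/n⌉ machine-hours, which is minimal when n divides the total simulation time in hours. A sketch under that assumed billing model (the pricing details are illustrative, not from the paper):

```python
import math

def cloud_cost(total_sim_hours, n, price_per_machine_hour=1.0):
    """Completion time and cost of splitting a total_sim_hours job
    across n identical machines, assuming billing per machine-hour
    rounded up. Sketch of the tradeoff described above."""
    wall = total_sim_hours / n                     # completion time ~ 1/n
    cost = n * math.ceil(wall) * price_per_machine_hour
    return wall, cost
```

For a 12-hour job, 4 machines (a factor of 12) cost 12 machine-hours, while 5 machines finish sooner but cost 15 machine-hours because each machine's 2.4 hours bills as 3.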
NASA Astrophysics Data System (ADS)
Walsh, T.; Layton, T.; Mellor, J. E.
2017-12-01
Storm damage to the electric grid impacts 23 million electric utility customers and costs US consumers $119 billion annually. Current restoration techniques rely on the past experiences of emergency managers. There are few analytical simulation and prediction tools available for utility managers to optimize storm recovery and decrease consumer cost, lost revenue and restoration time. We developed an agent based model (ABM) for storm recovery in Connecticut. An ABM is a computer modeling technique comprised of agents who are given certain behavioral rules and operate in a given environment. It allows the user to simulate complex systems by varying user-defined parameters to study emergent, unpredicted behavior. The ABM incorporates the road network and electric utility grid for the state, is validated using actual storm event recoveries and utilizes the Dijkstra routing algorithm to determine the best path for repair crews to travel between outages. The ABM has benefits for both researchers and utility managers. It can simulate complex system dynamics, rank variable importance, find tipping points that could significantly reduce restoration time or costs and test a broad range of scenarios. It is a modular, scalable and adaptable technique that can simulate scenarios in silico to inform emergency managers before and during storm events to optimize restoration strategies and better manage expectations of when power will be restored. Results indicate that total restoration time is strongly dependent on the number of crews. However, there is a threshold whereby more crews will not decrease the restoration time, which depends on the total number of outages. The addition of outside crews is more beneficial for storms with a higher number of outages. The time to restoration increases linearly with increasing repair time, while the travel speed has little overall effect on total restoration time. 
Crews traveling to the nearest outage reduces the total restoration time, while crews going to the outage with most customers affected increases the overall restoration time but more quickly decreases the customers remaining without power. This model can give utility company managers the ability to optimize their restoration strategies before or during a storm event to reduce restoration times and costs.
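The "nearest outage" crew strategy above relies on shortest paths over the road network, which the model computes with the Dijkstra algorithm. A minimal sketch of that routing step (the graph layout and names are illustrative, not the authors' model):

```python
import heapq

def nearest_outage(graph, crew_node, outages):
    """Dijkstra sketch: find the outage closest (by travel time) to a
    repair crew on the road network. graph maps each node to a list of
    (neighbor, travel_time) edges; illustrative data layout."""
    dist = {crew_node: 0.0}
    heap = [(0.0, crew_node)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        if u in outages:
            return u, d                   # first outage settled is nearest
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return None, float("inf")
```

Because Dijkstra settles nodes in order of increasing distance, the first outage popped from the queue is guaranteed to be the closest one, which is exactly the dispatch decision the nearest-outage strategy makes.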
Four Weeks of β-alanine Supplementation Improves High-Intensity Game Activities in Water Polo.
Brisola, Gabriel Motta Pinheiro; de Souza Malta, Elvis; Santiago, Paulo Roberto Pereira; Vieira, Luiz Henrique Palucci; Zagatto, Alessandro Moura
2018-04-13
The present study aimed to investigate whether four weeks of β-alanine supplementation improves total distance covered, distance covered and time spent in different speed zones, and sprint numbers during a simulated water polo game. The study design was double-blind, parallel and placebo controlled. Eleven male water polo players participated in the study, divided randomly into two homogeneous groups (placebo and β-alanine groups). The participants performed a simulated water polo game before and after the supplementation period (4 weeks). Participants received 4.8 g∙day⁻¹ of dextrose or β-alanine on the first ten days and 6.4 g∙day⁻¹ on the final 18 days. Only the β-alanine group presented a significant improvement in total sprint numbers compared to the pre-supplementation moment (PRE = 7.8 ± 5.2 a.u.; POST = 20.2 ± 7.8 a.u.; p = .002). Furthermore, β-alanine supplementation presented a likely beneficial effect on improving total distance covered (83%) and total time spent (81%) in speed zone 4 (i.e., speed ≥ 1.8 m∙s⁻¹). There was no significant interaction effect (group×time) for any variable. To conclude, four weeks of β-alanine supplementation can slightly improve sprint numbers and had a likely beneficial effect on improving distance covered and time spent in speed zone 4 in a simulated water polo game.
NASA Astrophysics Data System (ADS)
Kaplan, Alexis C.; Henzl, Vladimir; Menlove, Howard O.; Swinhoe, Martyn T.; Belian, Anthony P.; Flaska, Marek; Pozzi, Sara A.
2014-11-01
As a part of the Next Generation Safeguards Initiative Spent Fuel project, we simulate the response of the Differential Die-away Self-Interrogation (DDSI) instrument to determine total elemental plutonium content in an assayed spent nuclear fuel assembly (SFA). We apply recently developed concepts that relate total plutonium mass with SFA multiplication and passive neutron count rate. In this work, the multiplication of the SFA is determined from the die-away time in the early time domain of the Rossi-Alpha distributions measured directly by the DDSI instrument. We utilize MCNP to test the method against 44 pressurized water reactor SFAs from a simulated spent fuel library with a wide dynamic range of characteristic parameters such as initial enrichment, burnup, and cooling time. Under ideal conditions, discounting possible errors of a real world measurement, a root mean square agreement between true and determined total Pu mass of 2.1% is achieved.
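The die-away time used above is the decay constant of the early time domain of the Rossi-alpha distribution; a log-linear least-squares fit on synthetic data illustrates the extraction step (a sketch only; the paper's analysis works on full MCNP-simulated distributions):

```python
import math

def die_away_time(times, counts):
    """Fit counts ~ A * exp(-t / tau) by ordinary least squares on
    log(counts) and return tau, the die-away time. Illustrative
    extraction step, assuming strictly positive counts."""
    n = len(times)
    ys = [math.log(c) for c in counts]
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    return -1.0 / slope          # slope of log-counts is -1/tau
```

On noisy measured distributions a weighted fit or a restriction to the early time domain (as in the paper) would be needed, since late-time counts are dominated by background.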
ERIC Educational Resources Information Center
Romero-Hall, E.; Watson, G. S.; Adcock, A.; Bliss, J.; Adams Tufts, K.
2016-01-01
This research assessed how emotive animated agents in a simulation-based training affect the performance outcomes and perceptions of the individuals interacting in real time with the training application. A total of 56 participants consented to complete the study. The material for this investigation included a nursing simulation in which…
Hydrologic modeling of two glaciated watersheds in Northeast Pennsylvania
Srinivasan, M.S.; Hamlett, J.M.; Day, R.L.; Sams, J.I.; Petersen, G.W.
1998-01-01
A hydrologic modeling study, using the Hydrologic Simulation Program - FORTRAN (HSPF), was conducted in two glaciated watersheds, Purdy Creek and Ariel Creek in northeastern Pennsylvania. Both watersheds have wetlands and poorly drained soils due to low hydraulic conductivity and the presence of fragipans. The HSPF model was calibrated in the Purdy Creek watershed and verified in the Ariel Creek watershed for the June 1992 to December 1993 period. In Purdy Creek, the total volume of observed streamflow during the entire simulation period was 13.36 × 10⁶ m³ and the simulated streamflow volume was 13.82 × 10⁶ m³ (5 percent difference). For the verification simulation in Ariel Creek, the difference between the total observed and simulated flow volumes was 17 percent. Simulated peak flow discharges were within two hours of the observed for 30 of 46 peak flow events (discharge greater than 0.1 m³/sec) in Purdy Creek and 27 of 53 events in Ariel Creek. For 22 of the 46 events in Purdy Creek and 24 of 53 in Ariel Creek, the differences between the observed and simulated peak discharge rates were less than 30 percent. These 22 events accounted for 63 percent of the total volume of streamflow observed during the selected 46 peak flow events in Purdy Creek. In Ariel Creek, these 24 peak flow events accounted for 62 percent of the total flow observed during all peak flow events. Differences in observed and simulated peak flow rates and volumes (on a percent basis) were greater during snowmelt runoff events and summer periods than at other times.
Observation of 1-D time dependent non-propagating laser plasma structures using fluid and PIC codes
NASA Astrophysics Data System (ADS)
Verma, Deepa; Bera, Ratan Kumar; Kumar, Atul; Patel, Bhavesh; Das, Amita
2017-12-01
The manuscript reports the observation of time-dependent localized and non-propagating structures in the coupled laser plasma system through 1-D fluid and Particle-In-Cell (PIC) simulations. It is reported that such structures form spontaneously as a result of collision amongst certain exact solitonic solutions. They are seen to survive as coherent entities for a long time, up to several hundred plasma periods. Furthermore, it is shown that such time dependence can also be artificially recreated by significantly disturbing the delicate balance between the radiation and the density fields required for the exact non-propagating solution obtained by Esirkepov et al., JETP 68(1), 36-41 (1998). The ensuing time evolution is an interesting interplay between the kinetic and field energies of the system. The electrostatic plasma oscillations are coupled with oscillations in the electromagnetic field. The inhomogeneity of the background and the relativistic nature, however, invariably produce large-amplitude density perturbations leading to wave breaking. In the fluid simulations, the signature of wave breaking can be discerned by a drop in the total energy, which evidently gets lost to the grid. The PIC simulations are observed to closely follow the fluid simulations until the point of wave breaking. However, the total energy in the case of PIC simulations remains conserved throughout the simulations. At wave breaking, the particles are observed to acquire thermal kinetic energy in the case of PIC. Interestingly, even after wave breaking, compact coherent structures with trapped radiation inside high-density peaks continue to exist in both PIC and fluid simulations. Although the time evolution does not exactly match between the two simulations as it does prior to wave breaking, the time-dependent features exhibited by the remnant structures are characteristically similar.
A Simulation Model for Purchasing Duplicate Copies in a Library
ERIC Educational Resources Information Center
Arms, W. Y.; Walter, T. P.
1974-01-01
A method of estimating the number of duplicate copies of books needed, based on a computer simulation which takes into account the number of copies available, number of loans, total underlying demand, satisfaction level, and percentage of time on shelf. (LS)
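The kind of computation the abstract describes can be sketched as a small Monte Carlo loss model: requests arrive at random, a request is satisfied if any duplicate copy is on the shelf, and a satisfied request removes that copy for the loan period. All parameter values and names below are illustrative assumptions, not figures from the article.

```python
import numpy as np

# Toy Monte Carlo model of duplicate-copy purchasing: Poisson request
# arrivals; a request is satisfied if a copy is on the shelf, and a
# satisfied request removes that copy for `loan_days`.
def satisfaction(copies, mean_requests_per_day, loan_days, days=10_000, seed=3):
    """Estimated satisfaction level: share of requests finding a copy on shelf."""
    rng = np.random.default_rng(seed)
    due = []                                  # return days of copies on loan
    satisfied = total = 0
    for day in range(days):
        due = [d for d in due if d > day]     # loans come back
        for _ in range(rng.poisson(mean_requests_per_day)):
            total += 1
            if len(due) < copies:             # at least one copy on the shelf
                satisfied += 1
                due.append(day + loan_days)
    return satisfied / total

one_copy = satisfaction(copies=1, mean_requests_per_day=0.1, loan_days=14)
two_copies = satisfaction(copies=2, mean_requests_per_day=0.1, loan_days=14)
```

Raising the copy count from one to two lifts the estimated satisfaction level, which is the demand-versus-duplication trade-off such a simulation is built to quantify.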
Estimating total maximum daily loads with the Stochastic Empirical Loading and Dilution Model
Granato, Gregory; Jones, Susan Cheung
2017-01-01
The Massachusetts Department of Transportation (DOT) and the Rhode Island DOT are assessing and addressing roadway contributions to total maximum daily loads (TMDLs). Example analyses for total nitrogen, total phosphorus, suspended sediment, and total zinc in highway runoff were done by the U.S. Geological Survey in cooperation with FHWA to simulate long-term annual loads for TMDL analyses with the stochastic empirical loading and dilution model known as SELDM. Concentration statistics from 19 highway runoff monitoring sites in Massachusetts were used with precipitation statistics from 11 long-term monitoring sites to simulate long-term pavement yields (loads per unit area). Highway sites were stratified by traffic volume or surrounding land use to calculate concentration statistics for rural roads, low-volume highways, high-volume highways, and ultraurban highways. The median of the event mean concentration statistics in each traffic volume category was used to simulate annual yields from pavement for a 29- or 30-year period. Long-term average yields for total nitrogen, phosphorus, and zinc from rural roads are lower than yields from the other categories, but yields of sediment are higher than for the low-volume highways. The average yields of the selected water quality constituents from high-volume highways are 1.35 to 2.52 times the associated yields from low-volume highways. The average yields of the selected constituents from ultraurban highways are 1.52 to 3.46 times the associated yields from high-volume highways. Example simulations indicate that both concentration reduction and flow reduction by structural best management practices are crucial for reducing runoff yields.
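The stochastic-loading idea behind this kind of analysis can be sketched as a Monte Carlo sum of per-storm loads, each the product of a lognormal event mean concentration and an event runoff volume. This is only a schematic of the approach, not the SELDM model itself, and every parameter below is an illustrative assumption rather than a value from the study.

```python
import numpy as np

# Schematic of stochastic empirical loading (not the actual SELDM): an annual
# pavement yield is the sum over storm events of event mean concentration
# (EMC, lognormal) times event runoff volume, divided by pavement area.
def annual_yield(rng, n_storms, emc_median, emc_gsd, runoff_m3, area_m2):
    """One Monte Carlo realization of annual yield in g per m^2."""
    # lognormal EMC in mg/L from a median and geometric standard deviation
    emc = rng.lognormal(np.log(emc_median), np.log(emc_gsd), n_storms)
    load_g = np.sum(emc * runoff_m3)   # mg/L x m^3 = g
    return load_g / area_m2

rng = np.random.default_rng(42)
years = [annual_yield(rng, n_storms=60, emc_median=1.4, emc_gsd=2.0,
                      runoff_m3=50.0, area_m2=10_000.0) for _ in range(1000)]
mean_yield = float(np.mean(years))
```

Repeating such realizations over a 29- or 30-year horizon, then comparing medians across traffic-volume categories, is the flavor of analysis the abstract describes.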
Virtual Collaborative Simulation Environment for Integrated Product and Process Development
NASA Technical Reports Server (NTRS)
Gulli, Michael A.
1997-01-01
Deneb Robotics is a leader in the development of commercially available, leading-edge three-dimensional simulation software tools for virtual prototyping, simulation-based design, manufacturing process simulation, and factory floor simulation and training applications. Deneb has developed and commercially released a preliminary Virtual Collaborative Engineering (VCE) capability for Integrated Product and Process Development (IPPD). This capability allows distributed, real-time visualization and evaluation of design concepts, manufacturing processes, and entire factories and enterprises in one seamless simulation environment.
Shewokis, Patricia A; Shariff, Faiz U; Liu, Yichuan; Ayaz, Hasan; Castellanos, Andres; Lind, D Scott
2017-02-01
Using functional near-infrared spectroscopy, a noninvasive optical brain imaging tool that monitors changes in hemodynamics within the prefrontal cortex (PFC), we assessed performance and cognitive effort during the acquisition, retention and transfer of multiple simulated laparoscopic tasks by novice learners within a contextual interference paradigm. Third-year medical students (n = 10) were randomized to either a blocked or random practice schedule. Across 3 days, students performed 108 acquisition trials of 3 laparoscopic tasks on the LapSim® simulator followed by delayed retention and transfer tests. Performance metrics (Global score, Total time) and hemodynamic responses (total hemoglobin (μM)) were assessed during skill acquisition, retention and transfer. All acquisition tasks resulted in significant practice schedule × trial block interactions for the left medial anterior PFC. During retention and transfer, the random group performed the skills in less time and had a lower total hemoglobin change in the right dorsolateral PFC than the blocked group. Compared with blocked practice, random practice resulted in enhanced learning through better performance and less cognitive load for retention and transfer of simulated laparoscopic tasks. Copyright © 2016 Elsevier Inc. All rights reserved.
Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity
Gordiz, Kiarash; Singh, David J.; Henry, Asegun
2015-01-29
In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.
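The contrast between the two sampling strategies can be illustrated with a toy ergodic observable. The AR(1) series below is only a stand-in for an MD time series, not the paper's argon or silicon systems: one long sequential run and many short independently seeded runs converge to the same average, but the short runs are mutually independent and could execute in parallel.

```python
import numpy as np

# Toy comparison of time sampling vs ensemble sampling for an ergodic
# observable (x^2 of an AR(1) process, a stand-in for e.g. a heat-flux
# product in an MD run).
def run_trajectory(n_steps, seed):
    """One 'trajectory' of the observable x^2 for an AR(1) process."""
    rng = np.random.default_rng(seed)
    x, out = 0.0, np.empty(n_steps)
    for i in range(n_steps):
        x = 0.9 * x + rng.normal()
        out[i] = x * x
    return out

# Time sampling: one long run, inherently sequential
time_avg = float(run_trajectory(200_000, seed=1).mean())

# Ensemble sampling: 100 short, independently seeded runs; each could run
# on its own node, so wall-clock time shrinks with the number of nodes
ens_avg = float(np.mean([run_trajectory(2_000, seed=s).mean()
                         for s in range(100)]))
```

Both estimates target the same stationary average; only the distribution of work between sequential steps and parallel replicas differs, which is the paper's point about massively parallel architectures.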
Huang, Cynthia Y; Thomas, Jonathan B; Alismail, Abdullah; Cohen, Avi; Almutairi, Waleed; Daher, Noha S; Terry, Michael H; Tan, Laren D
2018-01-01
The aim of this study was to investigate the feasibility of using augmented reality (AR) glasses in central line simulation by novice operators and compare its efficacy to standard central line simulation/teaching. This was a prospective randomized controlled study enrolling 32 novice operators. Subjects were randomized on a 1:1 basis to either simulation using the augmented virtual reality glasses or simulation using conventional instruction. The study was conducted in a tertiary-care urban teaching hospital. A total of 32 adult novice central line operators with no visual or auditory impairments were enrolled. Medical doctors, respiratory therapists, and sleep technicians were recruited from the medical field. The mean time for AR placement in the AR group was 71±43 s, and the time to internal jugular (IJ) cannulation was 316±112 s. There was no significant difference in median (minimum, maximum) time (seconds) to IJ cannulation between the AR group and the control group (339 [130, 550] vs 287 [35, 475], p=0.09), respectively. There was also no significant difference between the two groups in median total procedure time (524 [329, 792] vs 469 [198, 781], p=0.29), respectively. There was a significant difference in the adherence level between the two groups favoring the AR group (p=0.003). AR simulation of central venous catheters in manikins is feasible and efficacious in novice operators as an educational tool. Future studies are recommended in this area as it is a promising area of medical education.
Real-time liquid-crystal atmosphere turbulence simulator with graphic processing unit.
Hu, Lifa; Xuan, Li; Li, Dayu; Cao, Zhaoliang; Mu, Quanquan; Liu, Yonggang; Peng, Zenghui; Lu, Xinghai
2009-04-27
To generate time-evolving atmosphere turbulence in real time, a phase-generating method for our liquid-crystal (LC) atmosphere turbulence simulator (ATS) is derived based on the Fourier series (FS) method. A real matrix expression for generating turbulence phases is given and calculated with a graphic processing unit (GPU), the GeForce 8800 Ultra. A liquid crystal on silicon (LCOS) with 256x256 pixels is used as the turbulence simulator. The total time to generate a turbulence phase is about 7.8 ms for calculation and readout with the GPU. A parallel processing method of calculating and sending a picture to the LCOS is used to improve the simulating speed of our LC ATS. Therefore, the real-time turbulence phase-generation frequency of our LC ATS is up to 128 Hz. To our knowledge, it is the highest speed used to generate a turbulence phase in real time.
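A minimal sketch of Fourier-domain turbulence phase generation, in the spirit of the method described above; this is the textbook FFT/Kolmogorov-spectrum version, not the paper's exact real-matrix Fourier-series formulation, and the grid size, pixel pitch and r0 are illustrative.

```python
import numpy as np

# Textbook FFT-based Kolmogorov phase screen (illustrative stand-in for the
# Fourier-series method in the abstract).
def phase_screen(n=256, pixel_m=0.01, r0_m=0.1, seed=0):
    """Random turbulence phase (radians) on an n x n grid."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=pixel_m)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.hypot(fxx, fyy)
    f[0, 0] = np.inf                          # drop the undefined piston term
    psd = 0.023 * r0_m ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)  # Kolmogorov PSD
    df = 1.0 / (n * pixel_m)
    c = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) * np.sqrt(psd) * df
    return (n * n) * np.fft.ifft2(c).real     # undo numpy's 1/n^2 in ifft2

screen = phase_screen()
```

The per-frame cost is dominated by the random draws and one inverse FFT, which is the kind of workload that maps well onto a GPU and makes frame rates in the 100 Hz range plausible.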
Controlling total spot power from holographic laser by superimposing a binary phase grating.
Liu, Xiang; Zhang, Jian; Gan, Yu; Wu, Liying
2011-04-25
By superimposing a tunable binary phase grating with a conventional computer-generated hologram, the total power of multiple holographic 3D spots can be easily controlled by changing the phase depth of grating with high accuracy to a random power value for real-time optical manipulation without extra power loss. Simulation and experiment results indicate that a resolution of 0.002 can be achieved at a lower time cost for normalized total spot power.
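The tuning mechanism can be checked numerically: for a 50%-duty-cycle binary phase grating of depth φ, the fraction of power remaining in the zeroth (undiffracted) order is cos²(φ/2), so varying the phase depth varies the power budget continuously. The grid and period below are arbitrary choices for the check, not values from the paper.

```python
import numpy as np

# Numerical check: a 50%-duty-cycle binary phase grating of depth phi leaves
# a fraction cos^2(phi/2) of the incident power in the zeroth order.
def zeroth_order_fraction(phi, n=1024, period=32):
    x = np.arange(n)
    grating = np.where((x % period) < period // 2, 0.0, phi)  # 0/phi phases
    spectrum = np.fft.fft(np.exp(1j * grating)) / n
    return float(np.abs(spectrum[0]) ** 2)    # power fraction in the DC order

phi = np.pi / 3
numeric = zeroth_order_fraction(phi)
analytic = float(np.cos(phi / 2) ** 2)        # = 0.75 for phi = pi/3
```

Sweeping φ from 0 to π sweeps the zeroth-order fraction from 1 down to 0; redistributing power between diffraction orders in this way is the kind of continuous, lossless control the abstract exploits.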
Studies on oil spillage near the shoreline
NASA Astrophysics Data System (ADS)
Voicu, I.; Dumitrescu, L. G.; Panaitescu, V. F.; Panaitescu, M.
2017-08-01
This paper presents a simulation of an oil spillage near the shoreline under real conditions. The purpose of the paper is to determine the evolution of an oil spill on the sea surface and, at the same time, the total cost of the depollution operations organized by the authorities. The simulation is made on the PISCES II Simulator (Potential Incident Simulator Control and Evaluation System), which is designed to handle real situations such as oil pollution of the sea. The mathematical model used by the simulator is a dispersion oil-water model, taking into account all external conditions such as air and sea water temperature, current and wind speed and direction, sea water density, and the physical properties of the petroleum. The conclusions present details of the oil spill together with a financial report on the total cost of the depollution operation.
Kim, Tae Han; Lee, Yu Jin; Lee, Eui Jung; Ro, Young Sun; Lee, KyungWon; Lee, Hyeona; Jang, Dayea Beatrice; Song, Kyoung Jun; Shin, Sang Do; Myklebust, Helge; Birkenes, Tonje Søraas
2018-02-01
For cardiac arrests witnessed at home, the witness is usually a middle-aged or older housewife. We compared the quality of cardiopulmonary resuscitation (CPR) performance of bystanders trained with the newly developed telephone-basic life support (T-BLS) program and those trained with standard BLS (S-BLS) training programs. Twenty-four middle-aged and older housewives without previous CPR education were enrolled and randomized into two groups of BLS training programs. The T-BLS training program included concepts and current instruction protocols for telephone-assisted CPR, whereas the S-BLS training program provided training for BLS. After each training course, the participants simulated CPR and were assisted by a dispatcher via telephone. Cardiopulmonary resuscitation quality was measured and recorded using a mannequin simulator. The primary outcome was total no-flow time (>1.5 seconds without chest compression) during simulation. Among 24 participants, two (8.3%) who experienced mechanical failure of simulation mannequin and one (4.2%) who violated simulation protocols were excluded at initial simulation, and two (8.3%) refused follow-up after 6 months. The median (interquartile range) total no-flow time during initial simulation was 79.6 (66.4-96.9) seconds for the T-BLS training group and 147.6 (122.5-184.0) seconds for the S-BLS training group (P < 0.01). Median cumulative interruption time and median number of interruption events during BLS at initial simulation and 6-month follow-up simulation were significantly shorter in the T-BLS than in the S-BLS group (1.0 vs. 9.5, P < 0.01, and 1.0 vs. 10.5, P = 0.02, respectively). Participants trained with the T-BLS training program showed shorter no-flow time and fewer interruptions during bystander CPR simulation assisted by a dispatcher.
Vehicle routing problem with time windows using natural inspired algorithms
NASA Astrophysics Data System (ADS)
Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.
2018-03-01
The process of distributing goods needs a strategy that minimizes the total cost of operational activities while satisfying several constraints, namely the capacity of the vehicles and the service time windows of the customers. This Vehicle Routing Problem with Time Windows (VRPTW) is a complex constrained problem. This paper proposes nature-inspired algorithms for dealing with the constraints of the VRPTW, involving the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm for simpler and faster convergence. The computational results show that these algorithms perform well in finding the minimized total distance, with larger populations giving better computational performance. The improved Cat Swarm Optimization with Crow Search gives better performance than the hybridization of Bat Algorithm and Simulated Annealing in dealing with big data.
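The Simulated Annealing ingredient of the hybrid can be sketched on a toy single-vehicle instance with soft time windows (lateness penalized). The instance data, penalty weight and cooling schedule below are made-up illustrations, not the paper's algorithm or benchmark data.

```python
import math
import random

# Simulated Annealing on a toy single-vehicle routing instance with soft
# time windows.
random.seed(7)
DEPOT = (0.0, 0.0)
CUSTOMERS = {  # id: (x, y, earliest, latest)
    1: (2, 4, 0, 10), 2: (5, 1, 3, 12), 3: (6, 6, 5, 15),
    4: (1, 7, 2, 14), 5: (8, 3, 6, 20),
}

def cost(route):
    """Travel distance plus a penalty for arriving after a window closes."""
    t, total, pos = 0.0, 0.0, DEPOT
    for c in route:
        x, y, lo, hi = CUSTOMERS[c]
        d = math.dist(pos, (x, y))
        total += d
        t = max(t + d, lo)                  # wait if the window is not yet open
        total += 10.0 * max(0.0, t - hi)    # soft time-window violation
        pos = (x, y)
    return total + math.dist(pos, DEPOT)

def anneal(route, t0=10.0, cooling=0.995, steps=5000):
    cur, best, temp = list(route), list(route), t0
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(cur)), 2))
        cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]   # 2-opt reversal
        delta = cost(cand) - cost(cur)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            cur = cand
            if cost(cur) < cost(best):
                best = list(cur)
        temp *= cooling                     # geometric cooling
    return best

start = [1, 2, 3, 4, 5]
best = anneal(start)
```

In the paper's hybrid, a solution produced this way replaces the worst member of the Bat Algorithm population.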
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-07-01
UTCHEM IMPLICIT is a three-dimensional chemical flooding simulator. The solution scheme is fully implicit. The pressure equation and the mass conservation equations are solved simultaneously for the aqueous phase pressure and the total concentrations of each component. A third-order-in-space, second-order-in-time finite-difference method and a new total-variation-diminishing (TVD) third-order flux limiter are used to reduce numerical dispersion effects. Saturations and phase concentrations are solved in a flash routine. The major physical phenomena modeled in the simulator are: dispersion, adsorption, aqueous-oleic-microemulsion phase behavior, interfacial tension, relative permeability, capillary trapping, compositional phase viscosity, capillary pressure, phase density, and polymer properties: shear thinning viscosity, inaccessible pore volume, permeability reduction, and adsorption. The following options are available in the simulator: constant or variable time-step sizes, uniform or nonuniform grid, pressure- or rate-constrained wells, and horizontal and vertical wells.
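The role of a TVD flux limiter can be illustrated on 1-D linear advection. The sketch below uses the classic second-order minmod limiter as a stand-in for UTCHEM's third-order limiter: a sharp front is transported without the spurious oscillations an unlimited high-order scheme would produce, and the total variation of the profile never grows.

```python
import numpy as np

# 1-D linear advection with a minmod-limited second-order upwind scheme
# (a classic TVD limiter standing in for UTCHEM's third-order limiter).
def minmod(a, b):
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def advect(u, c, steps):
    """Advance u_t + a u_x = 0 (a > 0) with CFL number c on a periodic grid."""
    for _ in range(steps):
        slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        face = u + 0.5 * (1.0 - c) * slope   # limited right-face value
        u = u - c * (face - np.roll(face, 1))
    return u

n = 200
u0 = np.where((np.arange(n) > 40) & (np.arange(n) < 80), 1.0, 0.0)
u1 = advect(u0.copy(), c=0.5, steps=100)
total_variation = lambda v: float(np.abs(v - np.roll(v, 1)).sum())
```

The conservative, periodic update preserves the total mass exactly while keeping the front monotone, which is precisely the "reduced numerical dispersion" behavior the abstract attributes to the limiter.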
Delay Tolerant Networking - Bundle Protocol Simulation
NASA Technical Reports Server (NTRS)
SeGui, John; Jenning, Esther
2006-01-01
In this paper, we report on the addition of the MACHETE models needed to support DTN, namely the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol and discuss statistics gathered concerning the total time needed to simulate numerous bundle transmissions.
Effect of virtual reality training on laparoscopic surgery: randomised controlled trial
Soerensen, Jette L; Grantcharov, Teodor P; Dalsgaard, Torur; Schouenborg, Lars; Ottosen, Christian; Schroeder, Torben V; Ottesen, Bent S
2009-01-01
Objective To assess the effect of virtual reality training on an actual laparoscopic operation. Design Prospective randomised controlled and blinded trial. Setting Seven gynaecological departments in the Zealand region of Denmark. Participants 24 first and second year registrars specialising in gynaecology and obstetrics. Interventions Proficiency based virtual reality simulator training in laparoscopic salpingectomy and standard clinical education (controls). Main outcome measure The main outcome measure was technical performance assessed by two independent observers blinded to trainee and training status using a previously validated general and task specific rating scale. The secondary outcome measure was operation time in minutes. Results The simulator trained group (n=11) reached a median total score of 33 points (interquartile range 32-36 points), equivalent to the experience gained after 20-50 laparoscopic procedures, whereas the control group (n=10) reached a median total score of 23 (22-27) points, equivalent to the experience gained from fewer than five procedures (P<0.001). The median total operation time in the simulator trained group was 12 minutes (interquartile range 10-14 minutes) and in the control group was 24 (20-29) minutes (P<0.001). The observers’ inter-rater agreement was 0.79. Conclusion Skills in laparoscopic surgery can be increased in a clinically relevant manner using proficiency based virtual reality simulator training. The performance level of novices was increased to that of intermediately experienced laparoscopists and operation time was halved. Simulator training should be considered before trainees carry out laparoscopic procedures. Trial registration ClinicalTrials.gov NCT00311792. PMID:19443914
Integrating Problem-Based Learning and Simulation: Effects on Student Motivation and Life Skills.
Roh, Young Sook; Kim, Sang Suk
2015-07-01
Previous research has suggested that a teaching strategy integrating problem-based learning and simulation may be superior to traditional lecture. The purpose of this study was to assess learner motivation and life skills before and after taking a course involving problem-based learning and simulation. The design used repeated measures with a convenience sample of 83 second-year nursing students who completed the integrated course. Data from a self-administered questionnaire measuring learner motivation and life skills were collected at pretest, post-problem-based learning, and post-simulation time points. Repeated-measures analysis of variance determined that the mean scores for total learner motivation (F=6.62, P=.003), communication (F=8.27, P<.001), problem solving (F=6.91, P=.001), and self-directed learning (F=4.45, P=.016) differed significantly between time points. Post hoc tests using the Bonferroni correction revealed that total learner motivation and total life skills significantly increased both from pretest to post-simulation test and from post-problem-based learning test to post-simulation test. Subscales of learner motivation and life skills, intrinsic goal orientation, self-efficacy for learning and performance, problem-solving skills, and self-directed learning skills significantly increased both from pretest to post-simulation test and from post-problem-based learning test to post-simulation test. The results demonstrate that an integrated problem-based learning and simulation course elicits significant improvement in learner motivation and life skills. Simulation plus problem-based learning is more effective than problem-based learning alone at increasing intrinsic goal orientation, task value, self-efficacy for learning and performance, problem solving, and self-directed learning.
Muralha, Nuno; Oliveira, Manuel; Ferreira, Maria Amélia; Costa-Maia, José
2017-05-31
Virtual reality simulation is a topic of discussion as a complementary tool to traditional laparoscopic surgical training in the operating room. However, it is unclear whether virtual reality training can have an impact on the surgical performance of advanced laparoscopic procedures. Our objective was to assess the ability of the virtual reality simulator LAP Mentor to identify and quantify changes in surgical performance indicators, after LAP Mentor training for digestive anastomosis. Twelve surgeons from Centro Hospitalar de São João in Porto (Portugal) performed two sessions of advanced task 5: anastomosis in LAP Mentor, before and after completing the tutorial, and were evaluated on 34 surgical performance indicators. The results show that six surgical performance indicators significantly changed after LAP Mentor training. The surgeons performed the task significantly faster as the median 'total time' significantly reduced (p < 0.05) from 759.5 to 523.5 seconds. Significant decreases (p < 0.05) were also found in median 'total needle loading time' (303.3 to 107.8 seconds), 'average needle loading time' (38.5 to 31.0 seconds), 'number of passages in which the needle passed precisely through the entrance dots' (2.5 to 1.0), 'time the needle was held outside the visible field' (20.9 to 2.4 seconds), and 'total time the needle-holders' ends are kept outside the predefined operative field' (88.2 to 49.6 seconds). This study raises the possibility of using virtual reality training simulation as a benchmark tool to assess the surgical performance of Portuguese surgeons. LAP Mentor is able to identify variations in surgical performance indicators of digestive anastomosis.
NASA Astrophysics Data System (ADS)
Boakye-Boateng, Nasir Abdulai
The growing demand for wind power integration into the generation mix prompts the need to subject these systems to stringent performance requirements. This study sought to identify the required tools and procedures needed to perform real-time simulation studies of Doubly-Fed Induction Generator (DFIG) based wind generation systems as a basis for performing more practical tests of reliability and performance for both grid-connected and islanded wind generation systems. The author focused on developing a platform for wind generation studies and, in addition, tested the performance of two DFIG models on the platform real-time simulation model: an average SimPowerSystems DFIG wind turbine, and a detailed DFIG based wind turbine using ARTEMiS components. The platform model implemented here consists of a high-voltage transmission system with four integrated wind farm models consisting in total of 65 DFIG based wind turbines, and it was developed and tested on OPAL-RT's eMEGASim Real-Time Digital Simulator.
Shao, Yu; Wang, Shumin
2016-12-01
The numerical simulation of acoustic scattering from elastic objects near a water-sand interface is critical to underwater target identification. Frequency-domain methods are computationally expensive, especially for large-scale broadband problems. A numerical technique is proposed to enable the efficient use of finite-difference time-domain method for broadband simulations. By incorporating a total-field/scattered-field boundary, the simulation domain is restricted inside a tightly bounded region. The incident field is further synthesized by the Fourier transform for both subcritical and supercritical incidences. Finally, the scattered far field is computed using a half-space Green's function. Numerical examples are further provided to demonstrate the accuracy and efficiency of the proposed technique.
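The total-field/scattered-field idea can be sketched in a minimal 1-D vacuum FDTD loop: consistent corrections at a single interface cell confine the incident plane wave to the total-field region, so the scattered-field region stays empty when no scatterer is present. Grid size and pulse parameters are illustrative; the paper's implementation additionally handles the water-sand half-space and the far-field transform.

```python
import numpy as np

# Minimal 1-D vacuum FDTD with a total-field/scattered-field (TF/SF)
# interface at cell `src` (normalized units, Courant number 1). The incident
# Gaussian pulse exists only for k >= src; with no scatterer, the
# scattered-field region to the left stays at zero.
n, steps, src = 400, 300, 50
ez = np.zeros(n)          # E at integer cells, time t
hy = np.zeros(n)          # H at half cells, time t - 1/2

def source(t):
    """Incident Ez at the TF/SF interface."""
    return np.exp(-0.5 * ((t - 60.0) / 8.0) ** 2)

for t in range(steps):
    hy[:-1] += ez[:-1] - ez[1:]   # advance H to t + 1/2
    hy[src - 1] += source(t)      # TF/SF: cancel incident Ez seen from SF side
    ez[1:] += hy[:-1] - hy[1:]    # advance E to t + 1
    ez[src] += source(t + 1)      # TF/SF: add incident Hy missing on TF side

peak_cell = int(np.argmax(ez))    # pulse has travelled to src + steps - 60
```

Restricting the simulation to a tightly bounded total-field region in this way is what keeps the domain small; in 2-D or 3-D the same bookkeeping applies on every face of the TF/SF box.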
Mobile Simulation Unit: taking simulation to the surgical trainee.
Pena, Guilherme; Altree, Meryl; Babidge, Wendy; Field, John; Hewett, Peter; Maddern, Guy
2015-05-01
Simulation-based training has become an increasingly accepted part of surgical training. However, simulators are still not widely available to surgical trainees. Some factors that hinder the widespread implementation of simulation-based training are the lack of standardized methods and equipment, costs and time constraints. We have developed a Mobile Simulation Unit (MSU) that enables trainees to access modern simulation equipment tailored to the needs of the learner at the trainee's workplace. From July 2012 to December 2012, the MSU visited six hospitals in South Australia, four in metropolitan and two in rural areas. Resident Medical Officers, surgical trainees, Fellows and International Medical Graduates were invited to voluntarily utilize a variety of surgical simulators on offer. Participants were asked to complete a survey about the accessibility of simulation equipment at their workplace, environment of the MSU, equipment available and instruction received. Utilization data were collected. The MSU was available for a total of 303 h over 52 days. Fifty-five participants were enrolled in the project and each spent on average 118 min utilizing the simulators. The utilization of the total available time was 36%. Participants reported having a poor access to simulation at their workplace and overwhelmingly gave positive feedback regarding their experience in the MSU. The use of the MSU to provide simulation-based education in surgery is feasible and practical. The MSU provides consistent simulation training at the surgical trainee's workplace, regardless of geographic location, and it has the potential to increase participation in simulation programmes. © 2014 Royal Australasian College of Surgeons.
1980-03-01
the total energy release of the explosive driver using expanded polystyrene and at the same time, controlling the rate of release. The part played by aqueous foam in minimising irregularities in waveform also is described. (Author)
Real-time, interactive, visually updated simulator system for telepresence
NASA Technical Reports Server (NTRS)
Schebor, Frederick S.; Turney, Jerry L.; Marzwell, Neville I.
1991-01-01
Time delays and limited sensory feedback of remote telerobotic systems tend to disorient teleoperators and dramatically decrease the operator's performance. To remove the effects of time delays, key components were designed and developed of a prototype forward simulation subsystem, the Global-Local Environment Telerobotic Simulator (GLETS) that buffers the operator from the remote task. GLETS totally immerses an operator in a real-time, interactive, simulated, visually updated artificial environment of the remote telerobotic site. Using GLETS, the operator will, in effect, enter into a telerobotic virtual reality and can easily form a gestalt of the virtual 'local site' that matches the operator's normal interactions with the remote site. In addition to use in space based telerobotics, GLETS, due to its extendable architecture, can also be used in other teleoperational environments such as toxic material handling, construction, and undersea exploration.
Bronchoscopy Simulation Training as a Tool in Medical School Education.
Gopal, Mallika; Skobodzinski, Alexus A; Sterbling, Helene M; Rao, Sowmya R; LaChapelle, Christopher; Suzuki, Kei; Litle, Virginia R
2018-07-01
Procedural simulation training is rare at the medical school level and little is known about its usefulness in improving anatomic understanding and procedural confidence in students. Our aim is to assess the impact of bronchoscopy simulation training on bronchial anatomy knowledge and technical skills in medical students. Medical students were recruited by email, consented, and asked to fill out a survey regarding their baseline experience. Two thoracic surgeons measured their knowledge of bronchoscopy on a virtual reality bronchoscopy simulator using the Bronchoscopy Skills and Tasks Assessment Tool (BSTAT), a validated 65-point checklist (46 for anatomy, 19 for simulation). Students performed four self-directed training sessions of 15 minutes per week. A posttraining survey and BSTAT were completed afterward. Differences between pretraining and posttraining scores were analyzed with paired Student's t tests and random intercept linear regression models accounting for baseline BSTAT score, total training time, and training year. The study was completed by 47 medical students with a mean training time of 81.5 ± 26.8 minutes. Mean total BSTAT score increased significantly from 12.3 ± 5.9 to 48.0 ± 12.9 (p < 0.0001); mean scores for bronchial anatomy increased from 0.1 ± 0.9 to 31.1 ± 12.3 (p < 0.0001); and bronchoscopy navigational skills increased from 12.1 ± 5.7 to 17.4 ± 2.5 (p < 0.0001). Total training time and frequency of training did not have a significant impact on level of improvement. Self-driven bronchoscopy simulation training in medical students led to improvements in bronchial anatomy knowledge and bronchoscopy skills. Further investigation is under way to determine the impact of bronchoscopy simulation training on future specialty interest and long-term skills retention. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
ETARA PC version 3.3 user's guide: Reliability, availability, maintainability simulation model
NASA Technical Reports Server (NTRS)
Hoffman, David J.; Viterna, Larry A.
1991-01-01
A user's manual describing an interactive, menu-driven, personal computer based Monte Carlo reliability, availability, and maintainability simulation program called event time availability reliability (ETARA) is discussed. Given a reliability block diagram representation of a system, ETARA simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair intervals as a function of exponential and/or Weibull distributions. Availability parameters such as equivalent availability, state availability (percentage of time as a particular output state capability), continuous state duration and number of state occurrences can be calculated. Initial spares allotment and spares replenishment on a resupply cycle can be simulated. The number of block failures are tabulated both individually and by block type, as well as total downtime, repair time, and time waiting for spares. Also, maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can be calculated over a cumulative period of time or at specific points in time.
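The core ETARA-style computation can be sketched for a single repairable block: draw exponential times to failure and Weibull repair times, and estimate availability as uptime over mission time across Monte Carlo runs. Parameter values and function names below are illustrative assumptions, not values from the manual.

```python
import math
import random

# Monte Carlo sketch of an ETARA-style availability estimate for one
# repairable block: exponential times to failure, Weibull repair times,
# availability = uptime / mission time.
random.seed(1)

def simulate_block(mission_h, mtbf_h, repair_shape, repair_scale_h):
    """Availability of a single block over one simulated mission."""
    t = up_time = 0.0
    while t < mission_h:
        ttf = random.expovariate(1.0 / mtbf_h)        # time to next failure
        up_time += min(ttf, mission_h - t)
        t += ttf
        if t >= mission_h:
            break
        t += random.weibullvariate(repair_scale_h, repair_shape)  # repair time
    return up_time / mission_h

runs = [simulate_block(mission_h=8760.0, mtbf_h=1000.0,
                       repair_shape=1.5, repair_scale_h=10.0)
        for _ in range(2000)]
availability = sum(runs) / len(runs)
# steady-state check: A ~ MTBF / (MTBF + MTTR), MTTR = scale * Gamma(1 + 1/shape)
mttr = 10.0 * math.gamma(1.0 + 1.0 / 1.5)
```

A full reliability-block-diagram tool layers onto this the system topology, spares accounting, and per-state tabulation described in the abstract, but the per-block sampling loop is the same.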
Huang, Cynthia Y; Thomas, Jonathan B; Alismail, Abdullah; Cohen, Avi; Almutairi, Waleed; Daher, Noha S; Terry, Michael H; Tan, Laren D
2018-01-01
Objective The aim of this study was to investigate the feasibility of using augmented reality (AR) glasses in central line simulation by novice operators and to compare its efficacy to standard central line simulation/teaching. Design This was a prospective randomized controlled study enrolling 32 novice operators. Subjects were randomized on a 1:1 basis to either simulation using the AR glasses or simulation using conventional instruction. Setting The study was conducted in a tertiary-care urban teaching hospital. Subjects A total of 32 adult novice central line operators with no visual or auditory impairments were enrolled. Medical doctors, respiratory therapists, and sleep technicians were recruited from the medical field. Measurements and main results The mean time for AR placement in the AR group was 71±43 s, and the time to internal jugular (IJ) cannulation was 316±112 s. There was no significant difference in median (minimum, maximum) time (seconds) to IJ cannulation between those in the AR group and those who were not (339 [130, 550] vs 287 [35, 475], p=0.09). There was also no significant difference between the two groups in median total procedure time (524 [329, 792] vs 469 [198, 781], p=0.29). There was a significant difference in the adherence level between the two groups favoring the AR group (p=0.003). Conclusion AR simulation of central venous catheter placement in manikins is feasible and efficacious as an educational tool for novice operators. Future studies are recommended as this is a promising area of medical education. PMID:29785148
Ruel, Allison V; Lee, Yuo-Yu; Boles, John; Boettner, Friedrich; Su, Edwin; Westrich, Geoffrey H
2015-07-01
After total hip replacement surgery, patients are eager to resume the activities of daily life, particularly driving. Most surgeons recommend waiting 6 weeks after surgery to resume driving; however, there is no evidence to indicate that patients cannot resume driving earlier. Our purpose was to evaluate at what point in the recovery period following total hip arthroplasty (THA) patients regain or improve upon their preoperative braking reaction time, allowing them to safely resume driving. We measured and compared pre- and postoperative braking reaction times of 90 patients from 3 different surgeons using a Fully Interactive Driving Simulator (Simulator Systems International, Tulsa, OK). We defined a return to safe braking reaction time as a return to a time value that is equal to or less than the preoperative braking reaction time. Patients tested at 2 and 3 weeks after surgery had slower braking reaction times than preoperative times, by an average of 0.069 and 0.009 s, respectively. At 4 weeks after surgery, however, patients improved their reaction times by 0.035 s (p = 0.0398). In addition, at 2, 3, and 4 weeks postoperatively, the results demonstrated that patients younger than 70 years of age recovered faster. Based upon the results of this study, most patients should be allowed to return to driving 4 weeks following minimally invasive primary total hip arthroplasty.
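The paired pre/post comparison described above can be illustrated with a small sketch; the reaction-time values below are hypothetical, not the study's data.

```python
from statistics import mean, stdev

# Hypothetical pre- and 4-week post-operative braking reaction times (s)
pre  = [0.62, 0.71, 0.58, 0.66, 0.74, 0.60, 0.69, 0.65]
post = [0.59, 0.66, 0.55, 0.64, 0.70, 0.57, 0.66, 0.61]

diffs = [a - b for a, b in zip(pre, post)]       # positive = improvement
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / n ** 0.5)

# 2.365 is the two-sided critical value of Student's t for df = 7, alpha = 0.05
print(f"mean improvement {mean(diffs):.3f} s, t = {t_stat:.2f}, "
      f"significant: {t_stat > 2.365}")
```

A real analysis would report an exact p-value (e.g. via a t-distribution CDF) rather than comparing against a tabulated critical value.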
NASA Astrophysics Data System (ADS)
Exbrayat, J.-F.; Pitman, A. J.; Abramowitz, G.
2014-03-01
Recent studies have identified the first-order parameterization of microbial decomposition as a major source of uncertainty in simulations and projections of the terrestrial carbon balance. Here, we use a reduced complexity model representative of the current state-of-the-art parameterization of soil organic carbon decomposition. We undertake a systematic sensitivity analysis to disentangle the effect of the time-invariant baseline residence time (k) and the sensitivity of microbial decomposition to temperature (Q10) on soil carbon dynamics at regional and global scales. Our simulations produce a range in total soil carbon at equilibrium of ~ 592 to 2745 Pg C which is similar to the ~ 561 to 2938 Pg C range in pre-industrial soil carbon in models used in the fifth phase of the Coupled Model Intercomparison Project. This range depends primarily on the value of k, although the impact of Q10 is not trivial at regional scales. As climate changes through the historical period, and into the future, k is primarily responsible for the magnitude of the response in soil carbon, whereas Q10 determines whether the soil remains a sink, or becomes a source in the future mostly by its effect on mid-latitude carbon balance. If we restrict our simulations to those simulating total soil carbon stocks consistent with observations of current stocks, the projected range in total soil carbon change is reduced by 42% for the historical simulations and 45% for the future projections. However, while this observation-based selection dismisses outliers it does not increase confidence in the future sign of the soil carbon feedback. We conclude that despite this result, future estimates of soil carbon, and how soil carbon responds to climate change should be constrained by available observational data sets.
NASA Astrophysics Data System (ADS)
Exbrayat, J.-F.; Pitman, A. J.; Abramowitz, G.
2014-12-01
Recent studies have identified the first-order representation of microbial decomposition as a major source of uncertainty in simulations and projections of the terrestrial carbon balance. Here, we use a reduced complexity model representative of current state-of-the-art models of soil organic carbon decomposition. We undertake a systematic sensitivity analysis to disentangle the effect of the time-invariant baseline residence time (k) and the sensitivity of microbial decomposition to temperature (Q10) on soil carbon dynamics at regional and global scales. Our simulations produce a range in total soil carbon at equilibrium of ~ 592 to 2745 Pg C, which is similar to the ~ 561 to 2938 Pg C range in pre-industrial soil carbon in models used in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). This range depends primarily on the value of k, although the impact of Q10 is not trivial at regional scales. As climate changes through the historical period, and into the future, k is primarily responsible for the magnitude of the response in soil carbon, whereas Q10 determines whether the soil remains a sink, or becomes a source in the future mostly by its effect on mid-latitude carbon balance. If we restrict our simulations to those simulating total soil carbon stocks consistent with observations of current stocks, the projected range in total soil carbon change is reduced by 42% for the historical simulations and 45% for the future projections. However, while this observation-based selection dismisses outliers, it does not increase confidence in the future sign of the soil carbon feedback. We conclude that despite this result, future estimates of soil carbon and how soil carbon responds to climate change should be more constrained by available data sets of carbon stocks.
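The interplay of the two parameters can be sketched with a toy one-pool, first-order model. This is a minimal illustration, not the paper's reduced-complexity model: here k is a first-order rate constant (the inverse of a residence time) modulated by a Q10 temperature factor, and all numbers are made up.

```python
def soil_carbon_equilibrium(npp, k, q10, temp_c, ref_temp_c=20.0):
    """Equilibrium stock of a one-pool first-order model:
    dC/dt = NPP - k * Q10**((T - Tref)/10) * C  =>  C* = NPP / k_eff."""
    k_eff = k * q10 ** ((temp_c - ref_temp_c) / 10.0)
    return npp / k_eff

# Halving k (doubling the baseline residence time) doubles the stock:
print(soil_carbon_equilibrium(npp=60.0, k=0.02, q10=2.0, temp_c=20.0))  # 3000.0
print(soil_carbon_equilibrium(npp=60.0, k=0.04, q10=2.0, temp_c=20.0))  # 1500.0
```

This mirrors the abstract's finding: the equilibrium stock is set mainly by k, while Q10 only matters once the temperature departs from the reference, i.e. under climate change.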
Mehta, A; Patel, S; Robison, W; Senkowski, T; Allen, J; Shaw, E; Senkowski, C
2018-03-01
New techniques in minimally invasive and robotic surgical platforms require staged curricula to ensure proficiency. Scant literature exists as to how large a role simulation should play in training those who already have skills in advanced surgical technology. The abilities of novel users may help determine whether surgically experienced users should start at a higher simulation level or whether the tasks are too rudimentary. The study's purpose is to explore the ability of General Surgery residents to gain proficiency on the da Vinci Skills Simulator (dVSS) as compared to novel users. The hypothesis is that Surgery residents will show increased proficiency in skills acquisition as compared to naive users. Six General Surgery residents at a single institution were compared with six teenagers using metrics measured by the dVSS. Participants were given two 1-h sessions to achieve an MScoreTM in the 90th percentile on each of the five simulations. MScoreTM software compiles a variety of metrics including total time, number of attempts, and high score. Statistical analysis was run using Student's t test. Significance was set at p < 0.05. Total time, attempts, and high score were compared between the two groups. The General Surgery residents took significantly less total time to complete Pegboard 1 (PB1) (p = 0.043). No significant difference was evident between the two groups in the other four simulations across the same MScoreTM metrics. A focused look at the energy dissection task revealed that overall score might not be discriminant enough. Our findings indicate that prior medical knowledge or surgical experience does not significantly impact one's ability to acquire new skills on the dVSS. It is recommended that residency training programs begin to include exposure to robotic technology.
A new infant hybrid respiratory simulator: preliminary evaluation based on clinical data.
Stankiewicz, Barbara; Pałko, Krzysztof J; Darowski, Marek; Zieliński, Krzysztof; Kozarski, Maciej
2017-11-01
A new hybrid (numerical-physical) simulator of the respiratory system, designed to simulate spontaneous and artificial/assisted ventilation of preterm and full-term infants, underwent preliminary evaluation. A numerical, seven-compartment model of respiratory system mechanics allows the operator to simulate global and peripheral obstruction and restriction of the lungs. The physical part of the simulator is a piston-based impedance transformer. LabVIEW real-time software coordinates the work of both parts of the simulator and its interaction with a ventilator. Using clinical data, five groups of "artificial infants" were examined: healthy full-term infants, very low-birth-weight preterm infants successfully (VLBW) and unsuccessfully extubated (VLBWun), and extremely low-birth-weight preterm infants without (ELBW) and with bronchopulmonary dysplasia (ELBW_BPD). Pressure-controlled ventilation was simulated to measure peak inspiratory pressure, mean airway pressure, total (patient + endotracheal tube) airway resistance (R), total dynamic compliance of the respiratory system (C), and total work of breathing by the ventilator (WOB). The differences between simulation and clinical parameters were not significant. High correlation coefficients between both types of data were obtained for R, C, and WOB (γR = 0.99, P < 0.0005; γC = 0.85, P < 0.005; γWOB = 0.96, P < 0.05, respectively). Thus, the simulator accurately reproduces infant respiratory system mechanics.
Cai, Jian-liang; Zhang, Yi; Sun, Guo-feng; Li, Ning-chen; Yuan, Xue-li; Na, Yan-qun
2013-10-01
Minimally invasive flexible ureteroscopy techniques have been widely adopted in the management of patients with renal stones. We performed this study to investigate the value of virtual reality simulator training in retrograde flexible ureteroscopy for renal stone treatment in trainees. Thirty trainees, including 17 attending physicians and 13 associate chief physicians, were selected for the study. The trainees first underwent 1-hour basic training to become familiar with the instrument and basic procedures, followed by 4 hours of practice on virtual reality simulators. Before and after the 4-hour training, all trainees undertook an assessment with the task 7 program (right lower pole calyceal stone management). We documented for each trainee the total procedure time, time to progress from the orifice to the stone, stone translocation and fragmentation times, laser operation proficiency scale, total laser energy, maximal size of residual stone fragments, number of traumas from the scopes and tools, damage to the scope, and global rating scale (GRS). The proficiency of this training program was analyzed by comparing the first and second assessment outcomes. Significant improvement was observed in retrograde flexible ureteroscopy management of renal stones on virtual reality simulators after the 4-hour special-purpose training. This was demonstrated by improvement in total procedure time ((18.37±2.59) minutes vs. (38.67±1.94) minutes), progressing time from the orifice to the stone ((4.00±1.08) minutes vs. (13.80±2.01) minutes), time of stone translocation ((1.80±0.71) minutes vs. (6.57±1.01) minutes), fragmentation time ((4.43±1.25) minutes vs. (13.53±1.46) minutes), laser operation proficiency scale (8.47±0.73 vs. 3.77±0.77), total laser energy ((3231.6±401.4) W vs. (5329.8±448.9) W), maximal size of residual stone fragments ((2.66±0.39) mm vs. (5.77±0.63) mm), number of traumas from the scopes and tools (3.27±1.01 vs. 10.37±3.02), damage to the scope (0 vs. 0.97±0.76), and GRS (29.27±2.95 vs. 9.87±2.21). The differences between the first and second assessments were all statistically significant (all P < 0.01). The virtual reality simulator training program can help trainees rapidly improve their retrograde flexible ureteroscopy skills in renal stone treatment.
Development of cost-effective surfactant flooding technology. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Sepehrnoori, K.
1996-11-01
Task 1 of this research was the development of a high-resolution, fully implicit, finite-difference, multiphase, multicomponent, compositional simulator for chemical flooding. The major physical phenomena modeled in this simulator are dispersion, heterogeneous permeability and porosity, adsorption, interfacial tension, relative permeability and capillary desaturation, compositional phase viscosity, compositional phase density and gravity effects, capillary pressure, and aqueous-oleic-microemulsion phase behavior. Polymer properties, including its non-Newtonian rheology, are modeled: shear-thinning viscosity, permeability reduction, inaccessible pore volume, and adsorption. Options of constant or variable space grids and time steps, constant-pressure or constant-rate well conditions, horizontal and vertical wells, and multiple slug injections are also available in the simulator. The solution scheme used in this simulator is fully implicit. The pressure equation and the mass-conservation equations are solved simultaneously for the aqueous-phase pressure and the total concentrations of each component. A third-order-in-space, second-order-in-time finite-difference method and a new total-variation-diminishing (TVD) third-order flux limiter are used that greatly reduce numerical dispersion effects. Task 2 was the optimization of surfactant flooding. The code UTCHEM was used to simulate surfactant polymer flooding.
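The idea of a TVD flux limiter can be sketched on the simplest possible case: 1D linear advection with a minmod limiter on a periodic grid. UTCHEM's scheme is third-order in space; this second-order sketch only illustrates the principle of limiting the anti-diffusive correction to suppress spurious oscillations.

```python
def minmod(a, b):
    """Return the smaller-magnitude argument if signs agree, else zero."""
    if a * b <= 0:
        return 0.0
    return min(abs(a), abs(b)) * (1 if a > 0 else -1)

def tvd_advect(u, c):
    """One step of 1D linear advection (velocity > 0, CFL number c) on a
    periodic grid: upwind flux plus a minmod-limited correction (TVD)."""
    n = len(u)
    slope = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    flux = [u[i] + 0.5 * (1 - c) * slope[i] for i in range(n)]
    return [u[i] - c * (flux[i] - flux[i - 1]) for i in range(n)]

# Advect a square pulse; a TVD scheme creates no new extrema
u = [1.0 if 10 <= i < 20 else 0.0 for i in range(100)]
for _ in range(50):
    u = tvd_advect(u, 0.5)
print(round(sum(u), 6))  # mass is conserved: 10.0
```

Because the update is written in flux-difference (conservative) form, total mass is preserved to rounding error, and the limiter keeps the solution within its initial bounds.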
Modeling and simulation of queuing system for customer service improvement: A case study
NASA Astrophysics Data System (ADS)
Xian, Tan Chai; Hong, Chai Weng; Hawari, Nurul Nazihah
2016-10-01
This study aims to develop a queuing model for UniMall by using a discrete event simulation approach to analyze the service performance that affects customer satisfaction. The performance measures considered in this model include the average time in system, the total number of students served, the number of students in the waiting queue, the waiting time in queue, and the maximum buffer length. The ARENA simulation software is used to develop the simulation model, and its output is analyzed. Based on the analysis of the output, it is recommended that the management of UniMall consider introducing shifts and adding another payment counter in the morning.
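A discrete event simulation of a single payment counter can be sketched without ARENA; the sketch below is an M/M/1-style FIFO queue, and the arrival and service rates are hypothetical, not UniMall's data.

```python
import random

def mm1_avg_time_in_system(arrival_rate, service_rate, n_customers=50000, seed=1):
    """Event-driven simulation of a single-server FIFO queue:
    returns the average time a customer spends in the system."""
    rng = random.Random(seed)
    t_arrival = 0.0
    server_free_at = 0.0
    total_time = 0.0
    for _ in range(n_customers):
        t_arrival += rng.expovariate(arrival_rate)      # Poisson arrivals
        start = max(t_arrival, server_free_at)          # wait if server busy
        server_free_at = start + rng.expovariate(service_rate)
        total_time += server_free_at - t_arrival        # wait + service
    return total_time / n_customers

# M/M/1 theory: W = 1 / (mu - lambda) = 1 / (1.0 - 0.5) = 2.0
print(round(mm1_avg_time_in_system(0.5, 1.0), 2))
```

Adding a second counter in the morning corresponds to switching to a two-server variant (M/M/2), which sharply reduces the average time in system when utilization is high.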
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges
2012-05-01
The vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. Its main objectives are to minimize the traveled distance, total traveling time, number of vehicles, and transportation cost function. Reducing these variables decreases the total cost and increases the driver satisfaction level. On the other hand, this satisfaction, which decreases as service time increases, is considered an important logistics problem for a company. Service times governed by a random variable vary stochastically, an effect ignored in classical routing problems. This paper investigates the problem of increasing service time by using a stochastic time for each tour such that the total traveling time of the vehicles is limited to a specific bound with a defined probability. Since exact solution of the vehicle routing problem, which belongs to the category of NP-hard problems, is not practical at large scale, a hybrid algorithm based on simulated annealing with genetic operators is proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the results of the proposed algorithm were compared with results obtained by the Lingo 8 software. The obtained results indicate the efficiency of the proposed hybrid simulated annealing algorithm.
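A minimal sketch of simulated annealing on a routing-style tour problem follows. It uses plain swap moves only; the paper's hybrid adds genetic operators on a population of tours, and the distance matrix below is hypothetical.

```python
import math, random

def tour_length(tour, dist):
    """Length of a closed tour (tour[-1] wraps back to tour[0])."""
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def simulated_annealing(dist, t0=10.0, cooling=0.999, steps=20000, seed=0):
    """Minimize tour length with random 2-city swaps; worse moves are
    accepted with probability exp(-delta / T), T decaying geometrically."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    best = cur = tour_length(tour, dist)
    temp = t0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        tour[i], tour[j] = tour[j], tour[i]
        new = tour_length(tour, dist)
        if new < cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
            best = min(best, cur)
        else:
            tour[i], tour[j] = tour[j], tour[i]   # reject: undo the swap
        temp *= cooling
    return best

# Hypothetical coordinates for 6 customers
pts = [(0, 0), (1, 5), (4, 1), (6, 6), (2, 2), (5, 3)]
dist = [[math.hypot(a[0] - b[0], a[1] - b[1]) for b in pts] for a in pts]
print(round(simulated_annealing(dist), 2))
```

The stochastic-time constraint from the paper would enter as a penalty or feasibility check on each candidate tour, rejecting tours whose probabilistic travel-time bound is violated.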
Predictors of laparoscopic simulation performance among practicing obstetrician gynecologists.
Mathews, Shyama; Brodman, Michael; D'Angelo, Debra; Chudnoff, Scott; McGovern, Peter; Kolev, Tamara; Bensinger, Giti; Mudiraj, Santosh; Nemes, Andreea; Feldman, David; Kischak, Patricia; Ascher-Walsh, Charles
2017-11-01
While simulation training has been established as an effective method for improving laparoscopic surgical performance in surgical residents, few studies have focused on its use for attending surgeons, particularly in obstetrics and gynecology. Surgical simulation may have a role in improving and maintaining proficiency in the operating room for practicing obstetrician gynecologists. We sought to determine if parameters of performance for validated laparoscopic virtual simulation tasks correlate with surgical volume and characteristics of practicing obstetricians and gynecologists. All gynecologists with laparoscopic privileges (n = 347) from 5 academic medical centers in New York City were required to complete a laparoscopic surgery simulation assessment. The physicians took a presimulation survey gathering physician self-reported characteristics and then performed 3 basic skills tasks (enforced peg transfer, lifting/grasping, and cutting) on the LapSim virtual reality laparoscopic simulator (Surgical Science Ltd, Gothenburg, Sweden). The associations between simulation outcome scores (time, efficiency, and errors) and self-rated clinical skills measures (self-rated laparoscopic skill score or surgical volume category) were examined with regression models. The average number of laparoscopic procedures per month was a significant predictor of total time on all 3 tasks (P = .001 for peg transfer; P = .041 for lifting and grasping; P < .001 for cutting). Average monthly laparoscopic surgical volume was a significant predictor of 2 efficiency scores in peg transfer, and all 4 efficiency scores in cutting (P = .001 to P = .015). Surgical volume was a significant predictor of errors in lifting/grasping and cutting (P < .001 for both).
Self-rated laparoscopic skill level was a significant predictor of total time in all 3 tasks (P < .0001 for peg transfer; P = .009 for lifting and grasping; P < .001 for cutting) and a significant predictor of nearly all efficiency scores and errors scores in all 3 tasks. In addition to total time, there was at least 1 other objective performance measure that significantly correlated with surgical volume for each of the 3 tasks. Higher-volume physicians and those with fellowship training were more confident in their laparoscopic skills. By determining simulation performance as it correlates to active physician practice, further studies may help assess skill and individualize training to maintain skill levels as case volumes fluctuate. Copyright © 2017 Elsevier Inc. All rights reserved.
Supercomputer simulations of structure formation in the Universe
NASA Astrophysics Data System (ADS)
Ishiyama, Tomoaki
2017-06-01
We describe the implementation and performance results of our massively parallel MPI/OpenMP hybrid TreePM code for large-scale cosmological N-body simulations. For domain decomposition, a recursive multi-section algorithm is used, and the sizes of the domains are automatically set so that the total calculation time is the same for all processes. We developed a highly tuned gravity kernel for short-range forces and a novel communication algorithm for long-range forces. For a two-trillion-particle benchmark simulation, the average performance on the full system of the K computer (82,944 nodes, 663,552 cores in total) is 5.8 Pflops, which corresponds to 55% of the peak speed.
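The load-balancing idea behind recursive multi-section can be sketched in one dimension: split the particle list so each domain receives roughly equal estimated work. The weights below are hypothetical per-particle cost estimates, not values from the paper.

```python
def multisection(weights, n_domains):
    """Recursively split a 1-D list of per-particle work estimates into
    n_domains contiguous domains with (approximately) equal total work."""
    if n_domains == 1:
        return [weights]
    half = n_domains // 2
    target = sum(weights) * half / n_domains   # work for the left half
    acc, cut = 0.0, len(weights)
    for i, w in enumerate(weights):
        if acc + w > target:
            cut = i
            break
        acc += w
    return (multisection(weights[:cut], half)
            + multisection(weights[cut:], n_domains - half))

work = [1, 3, 2, 2, 3, 1, 2, 2]          # hypothetical per-particle costs
domains = multisection(work, 4)
print([sum(d) for d in domains])         # [4, 4, 4, 4]
```

In the real code the split is applied recursively along alternating spatial axes and the weights come from measured per-particle calculation times of the previous step, so domain boundaries track the evolving clustering.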
NASA Technical Reports Server (NTRS)
Rosenstein, H.; Mcveigh, M. A.; Mollenkof, P. A.
1973-01-01
A mathematical model for a real time simulation of a tilt rotor aircraft was developed. The mathematical model is used for evaluating aircraft performance and handling qualities. The model is based on an eleven degree of freedom total force representation. The rotor is treated as a point source of forces and moments with appropriate response time lags and actuator dynamics. The aerodynamics of the wing, tail, rotors, landing gear, and fuselage are included.
Key algorithms used in GR02: A computer simulation model for predicting tree and stand growth
Garrett A. Hughes; Paul E. Sendak; Paul E. Sendak
1985-01-01
GR02 is an individual tree, distance-independent simulation model for predicting tree and stand growth over time. It performs five major functions during each run: (1) updates diameter at breast height, (2) updates total height, (3) estimates mortality, (4) determines regeneration, and (5) updates crown class.
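The five update functions can be sketched as one simulation step. The increments, mortality probability, and crown-class rule below are placeholders, since GR02's actual growth and mortality equations are not given here.

```python
import random

def gro2_step(stand, rng):
    """One GR02-style simulation period over a list of tree dicts:
    (1) dbh update, (2) height update, (3) mortality,
    (4) regeneration, (5) crown class update. All rates hypothetical."""
    survivors = []
    for tree in stand:
        tree["dbh"] += 0.3                        # (1) diameter increment
        tree["height"] += 0.5                     # (2) height increment
        if rng.random() < 0.02:                   # (3) mortality draw
            continue
        survivors.append(tree)
    if rng.random() < 0.5:                        # (4) regeneration draw
        survivors.append({"dbh": 1.0, "height": 2.0, "crown": "suppressed"})
    for tree in survivors:                        # (5) crown class update
        tree["crown"] = "dominant" if tree["height"] > 15 else "suppressed"
    return survivors

rng = random.Random(0)
stand = [{"dbh": 10.0, "height": 12.0, "crown": "dominant"}]
stand = gro2_step(stand, rng)
print(len(stand), stand[0]["dbh"])
```

A distance-independent model like this needs no tree coordinates: each tree is updated from its own attributes and stand-level summaries only.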
Column descriptions
ID : Unique integer for each control time simulation
LABEL : Description unique to each ID (see paper)
Z : Redshift
TIMEAREA : Observer-frame control time x area at 'Z' (year-arcmin^2)
Z2 : Second redshift
TIMEVOL : Total rest-frame control time x volume between 'Z' and 'Z2' (year
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2014-03-01
The sign problem in PIMC simulations of non-relativistic fermions increases in severity with the number of fermions and the number of beads (or time-slices) of the simulation. A large number of beads is usually needed, because the conventional primitive propagator is only second-order and the usual thermodynamic energy-estimator converges very slowly from below with the total imaginary time. The Hamiltonian energy-estimator, while more complicated to evaluate, is a variational upper bound and converges much faster with the total imaginary time, thereby requiring fewer beads. This work shows that when the Hamiltonian estimator is used in conjunction with fourth-order propagators with optimizable parameters, the ground state energies of 2D parabolic quantum dots with approximately 10 completely polarized electrons can be obtained with only 3-5 beads, before the onset of severe sign problems. This work was made possible by NPRP GRANT #5-674-1-114 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the author.
A mathematical simulation model of the CH-47B helicopter, volume 2
NASA Technical Reports Server (NTRS)
Weber, J. M.; Liu, T. Y.; Chung, W.
1984-01-01
A nonlinear simulation model of the CH-47B helicopter was adapted for use in a simulation facility. The model represents the specific configuration of the variable stability CH-47B helicopter. Modeling of the helicopter uses a total force approach in six rigid body degrees of freedom. Rotor dynamics are simulated using the Wheatley-Bailey equations with steady-state flapping dynamics, and the model includes an option for simulating an external suspension using slung-load equations of motion. Validation of the model was accomplished by comparison with static and dynamic data from the original Boeing Vertol mathematical model and with flight test data. The model is appropriate for use in real-time piloted simulation and is implemented on the ARC Sigma IX computer, where it may be operated with a digital cycle time of 0.03 sec.
Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi
2011-11-01
Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.
Fast Computation of Ground Motion Shaking Map base on the Modified Stochastic Finite Fault Modeling
NASA Astrophysics Data System (ADS)
Shen, W.; Zhong, Q.; Shi, B.
2012-12-01
Rapid regional MMI (Modified Mercalli Intensity) mapping soon after a moderate-to-large earthquake is crucial to loss estimation, emergency services, and the planning of emergency action by the government. Many countries devote varying degrees of attention to the technology of rapid MMI estimation, and it has made significant progress in earthquake-prone countries. In recent years, numerical modeling of strong ground motion has been well developed with advances in computation technology and earthquake science. The computational simulation of strong ground motion caused by earthquake faulting has become an efficient way to estimate the regional MMI distribution soon after an earthquake. In China, owing to the lack of strong motion observations in areas where the network is sparse or entirely absent, the development of strong ground motion simulation methods has become an important means of quantitative estimation of strong motion intensity. Among the many simulation models, the stochastic finite fault model is preferred for rapid MMI estimation because of its time-effectiveness and accuracy. In a finite fault model, a large fault is divided into N subfaults, and each subfault is considered a small point source. The ground motions contributed by each subfault are calculated by the stochastic point source method developed by Boore, and then summed at the observation point, with a proper time delay, to obtain the ground motion from the entire fault. Further, Motazedian and Atkinson proposed the concept of dynamic corner frequency; with this new approach, the total radiated energy from the fault and the total seismic moment are conserved independent of subfault size over a wide range of subfault sizes. In the current study, the program EXSIM developed by Motazedian and Atkinson has been modified for local or regional computations of strong motion parameters such as PGA, PGV, and PGD, which are essential for MMI estimation.
To make the results more reasonable, we consider the impact of V30 on ground shaking intensity, and the comparisons between the simulated and observed MMI for the 2004 Mw 6.0 Parkfield earthquake, the 2008 Mw 7.9 Wenchuan earthquake, and the 1976 Mw 7.6 Tangshan earthquake agree fairly well. Taking the Parkfield earthquake as an example, the simulated results reflect the directivity effect and the influence of the shallow velocity structure well. The simulated data are also in good agreement with the network data and NGA (Next Generation Attenuation) relations. The computation time depends on the number of subfaults and the number of grid points. For the 2004 Mw 6.0 Parkfield earthquake, the computed grid is 2.5° × 2.5° with a grid spacing of 0.025°, and the total time consumed is about 1.3 hours. For the 2008 Mw 7.9 Wenchuan earthquake, the computed grid is 10° × 10° with a grid spacing of 0.05°, the total number of grid points exceeds 40,000, and the total time consumed is about 7.5 hours. For the 1976 Mw 7.6 Tangshan earthquake, the computed grid is 4° × 6° with a grid spacing of 0.05°, and the total time consumed is about 2.1 hours. The CPU we used runs at 3.40 GHz, and such computation times could be further reduced by using GPU computing and other parallel computing techniques. This is also our next focus.
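The subfault-summation step at the heart of the finite fault method can be sketched as delayed superposition of subfault time series. The pulse shape and delays below are purely illustrative; EXSIM's stochastic subfault signals (windowed, filtered noise scaled by the dynamic corner frequency) are far more elaborate.

```python
def finite_fault_sum(subfault_signals, delays, dt, total_len):
    """Sum subfault ground-motion time series at an observation point,
    each shifted by its rupture-propagation + travel-time delay (s)."""
    out = [0.0] * total_len
    for signal, delay in zip(subfault_signals, delays):
        k0 = int(round(delay / dt))          # delay in samples
        for k, v in enumerate(signal):
            if k0 + k < total_len:
                out[k0 + k] += v
    return out

# Two hypothetical subfault pulses arriving 0.5 s apart (dt = 0.1 s)
pulse = [0.0, 1.0, 0.5, 0.0]
acc = finite_fault_sum([pulse, pulse], delays=[0.0, 0.5], dt=0.1, total_len=12)
pga = max(abs(a) for a in acc)
print(pga)  # 1.0 (the two peaks do not overlap in time)
```

Mapping such a synthesized trace to MMI then only requires evaluating PGA/PGV at each grid point and applying an intensity conversion relation, which is why the method scales with the number of subfaults times the number of grid points.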
NASA Astrophysics Data System (ADS)
Anderson, Brian J.; Korth, Haje; Welling, Daniel T.; Merkin, Viacheslav G.; Wiltberger, Michael J.; Raeder, Joachim; Barnes, Robin J.; Waters, Colin L.; Pulkkinen, Antti A.; Rastaetter, Lutz
2017-02-01
Two of the geomagnetic storms for the Space Weather Prediction Center Geospace Environment Modeling challenge occurred after data were first acquired by the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE). We compare Birkeland currents from AMPERE with predictions from four models for the 4-5 April 2010 and 5-6 August 2011 storms. The four models are the Weimer (2005b) field-aligned current statistical model, the Lyon-Fedder-Mobarry magnetohydrodynamic (MHD) simulation, the Open Global Geospace Circulation Model MHD simulation, and the Space Weather Modeling Framework MHD simulation. The MHD simulations were run as described in Pulkkinen et al. (2013) and the results obtained from the Community Coordinated Modeling Center. The total radial Birkeland current, ITotal, and the distribution of radial current density, Jr, for all models are compared with AMPERE results. While the total currents are well correlated, the quantitative agreement varies considerably. The Jr distributions reveal discrepancies between the models and observations related to the latitude distribution, morphologies, and lack of nightside current systems in the models. The results motivate enhancing the simulations first by increasing the simulation resolution and then by examining the relative merits of implementing more sophisticated ionospheric conductance models, including ionospheric outflows or other omitted physical processes. Some aspects of the system, including substorm timing and location, may remain challenging to simulate, implying a continuing need for real-time specification.
Kittipittayakorn, Cholada; Ying, Kuo-Ching
2016-01-01
Many hospitals are currently paying more attention to patient satisfaction since it is an important service quality index. Many Asian countries' healthcare systems have a mixed-type registration, accepting both walk-in patients and scheduled patients. This complex registration system causes a long patient waiting time in outpatient clinics. Different approaches have been proposed to reduce the waiting time. This study uses the integration of discrete event simulation (DES) and agent-based simulation (ABS) to improve patient waiting time and is the first attempt to apply this approach to solve this key problem faced by orthopedic departments. From the data collected, patient behaviors are modeled and incorporated into a massive agent-based simulation. The proposed approach is an aid for analyzing and modifying orthopedic department processes, allows us to consider far more details, and provides more reliable results. After applying the proposed approach, the total waiting time of the orthopedic department fell from 1246.39 minutes to 847.21 minutes. Thus, using the correct simulation model significantly reduces patient waiting time in an orthopedic department.
Mechanism and design of intermittent aeration activated sludge process for nitrogen removal.
Hanhan, Oytun; Insel, Güçlü; Yagci, Nevin Ozgur; Artan, Nazik; Orhon, Derin
2011-01-01
The paper provided a comprehensive evaluation of the mechanism and design of the intermittent aeration activated sludge process for nitrogen removal. Based on the specific character of the process, the total cycle time (TC), the aerated fraction (AF), and the cycle time ratio (CTR) were defined as major design parameters, aside from the sludge age of the system. Their impact on system performance was evaluated by means of process simulation. A rational design procedure was developed on the basis of basic stoichiometry and mass balances related to the oxidation and removal of nitrogen under aerobic and anoxic conditions, which enabled selection of operating parameters for optimum performance. The simulation results indicated that the total nitrogen level could be reduced to a minimum by appropriate manipulation of the aerated fraction and cycle time ratio. They also showed that the effluent total nitrogen could be lowered to around 4.0 mgN/L by adjusting the dissolved oxygen set-point to 0.5 mg/L, a level which promotes simultaneous nitrification and denitrification.
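The core aerobic/anoxic balance behind this design logic can be sketched in a few lines (a simplified illustration only, not the paper's design procedure; the rate symbols are assumptions):

```python
def balanced_aerated_fraction(r_nit, r_denit):
    """Aerated fraction AF at which nitrate formed while aerated
    (AF * r_nit) just matches nitrate removed while anoxic
    ((1 - AF) * r_denit).  Rates are volumetric nitrification and
    denitrification capacities in consistent units, e.g. gN/(m3*h)."""
    return r_denit / (r_nit + r_denit)
```

For equal nitrification and denitrification capacities this gives AF = 0.5; a faster denitrifier permits a larger aerated fraction before nitrate accumulates over a cycle.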
The optimization of total laboratory automation by simulation of a pull-strategy.
Yang, Taho; Wang, Teng-Kuan; Li, Vincent C; Su, Chia-Lo
2015-01-01
Laboratory results are essential for physicians to diagnose medical conditions. Because of the critical role of medical laboratories, an increasing number of hospitals use total laboratory automation (TLA) to improve laboratory performance. Although the benefits of TLA are well documented, systems occasionally become congested, particularly when hospitals face peak demand. This study optimizes TLA operations. First, value stream mapping (VSM) is used to identify non-value-added time. Subsequently, batch processing control and parallel scheduling rules are devised, and a pull mechanism based on constant work-in-process (CONWIP) is proposed. Simulation optimization is then used to tune the design parameters to ensure a smaller inventory and a shorter average cycle time (CT). For empirical illustration, this approach is applied to a real case. The proposed methodology significantly improves the efficiency of laboratory work and leads to reduced patient waiting times and an increased service level.
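The effect of a CONWIP cap on cycle time can be illustrated with a minimal departure-time recursion for a two-station line fed from an infinite backlog (a toy sketch with assumed deterministic service times, not the paper's TLA model):

```python
def conwip_tandem(num_jobs, wip_cap, s1, s2):
    """Two-station tandem line under CONWIP: job k is released into the
    line only when job k - wip_cap departs (the first wip_cap jobs are
    released at t = 0).  s1, s2 are service times at stations 1 and 2.
    Returns (release_times, departure_times)."""
    d1 = [0.0] * num_jobs          # departures from station 1
    d2 = [0.0] * num_jobs          # departures from station 2 (line exit)
    rel = [0.0] * num_jobs
    for k in range(num_jobs):
        rel[k] = d2[k - wip_cap] if k >= wip_cap else 0.0
        prev1 = d1[k - 1] if k else 0.0
        prev2 = d2[k - 1] if k else 0.0
        d1[k] = max(rel[k], prev1) + s1[k]   # wait for release and server 1
        d2[k] = max(d1[k], prev2) + s2[k]    # wait for arrival and server 2
    return rel, d2
```

With a slow second station, a tight cap keeps throughput at the bottleneck rate while sharply cutting the time each job spends inside the line, which is the intuition behind the pull strategy.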
Martin, Audrey N; Farquar, George R; Frank, Matthias; Gard, Eric E; Fergenson, David P
2007-08-15
Single-particle aerosol mass spectrometry (SPAMS) was used for the real-time detection of liquid nerve agent simulants. A total of 1000 dual-polarity time-of-flight mass spectra were obtained for micrometer-sized single particles each of dimethyl methyl phosphonate, diethyl ethyl phosphonate, diethyl phosphoramidate, and diethyl phthalate using laser fluences between 0.58 and 7.83 nJ/microm2, and mass spectral variation with laser fluence was studied. The mass spectra obtained allowed identification of single particles of the chemical warfare agent (CWA) simulants at each laser fluence used although lower laser fluences allowed more facile identification. SPAMS is presented as a promising real-time detection system for the presence of CWAs.
Kantar, Rami S; Plana, Natalie M; Cutting, Court B; Diaz-Siso, Jesus Rodrigo; Flores, Roberto L
2018-01-29
In October 2012, a freely available, internet-based cleft simulator was created in partnership between academic, nonprofit, and industry sectors. The purpose of this educational resource was to address global disparities in cleft surgery education. This report assesses the demographics, usage, and global effect of our simulator in its fifth year since inception. The objective was to evaluate the global effect, usage, and demographics of an internet-based educational cleft surgery simulation software. Simulator modules, available in five languages, demonstrate surgical anatomy, markings, detailed procedures, and intraoperative footage to supplement digital animation. Available data regarding number of users, sessions, countries reached, and content access were recorded. Surveys evaluating the demographic characteristics of registered users and simulator use were collected by direct e-mail. The total number of new and active simulator users reached 2865 and 4086 in June 2017, respectively. By June 2017, users from 136 countries had accessed the simulator. From 2015 to 2017, the number of sessions was 11,176 with a monthly average of 399.0 ± 190.0. Developing countries accounted for 35% of sessions and the average session duration was 9.0 ± 7.3 minutes. This yields a total simulator screen time of 100,584 minutes (1676 hours). Most survey respondents were surgeons or trainees (87%) specializing in plastic, maxillofacial, or general surgery (89%). Most users found the simulator to be useful (88%), at least as useful as or more useful than other resources (83%), and used it for teaching (58%). Our internet-based interactive cleft surgery platform reaches its intended target audience, is not restricted by socioeconomic barriers to access, and is judged to be useful by surgeons. More than 4000 active users have been reached since inception. The total screen time over approximately 2 years exceeded 1600 hours. 
This suggests that future surgical simulators of this kind may be sustainable by stakeholders interested in reaching this target audience. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise
NASA Astrophysics Data System (ADS)
Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej
2010-11-01
Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful applications demonstrate the usefulness of weak anticipative information. Job shop scheduling with a makespan criterion presents a real case of customized flexible furniture production optimization; a genetic algorithm for job shop scheduling optimization is presented. Simulation-based inventory control for products with stochastic lead time and demand describes inventory optimization for such products; dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is also discussed. All three cases are discussed from the optimization, modeling, and learning points of view.
Jones, Perry M.; Winterstein, Thomas A.
2000-01-01
The U.S. Geological Survey (USGS), in cooperation with the Minnesota Department of Natural Resources and the Heron Lake Watershed District, conducted a study to characterize the rainfall-runoff response and to examine the effects of wetland restoration on the rainfall-runoff response within the Heron Lake Basin in southwestern Minnesota. About 93 percent of the land cover in the Heron Lake Basin is agricultural land, almost entirely row crops, with less than one percent wetlands. The Hydrological Simulation Program – Fortran (HSPF), Version 10, was calibrated to continuous discharge data and used to characterize rainfall-runoff responses in the Heron Lake Basin between May 1991 and August 1997. Simulation of the Heron Lake Basin was done as a two-step process: (1) simulations of five small subbasins using data from August 1995 through August 1997, and (2) simulations of the two large basins, Jack and Okabena Creek Basins, using data from May 1991 through September 1996. Simulations of the five small subbasins were done to determine basin parameters for the land segments and assess rainfall-runoff response variability in the basin. Simulations of the two larger basins were done to verify the basin parameters and assess rainfall-runoff responses over a larger area and for a longer time period. Best-fit calibrations of the five subbasin simulations indicate that the rainfall-runoff response is uniform throughout the Heron Lake Basin, and 48 percent of the total rainfall for storms becomes direct (surface and interflow) runoff. Rainfall-runoff response variations result from variations in the distribution, intensity, timing, and duration of rainfall; soil moisture; evapotranspiration rates; and the presence of lakes in the basin. In the spring, the amount and distribution of rainfall tends to govern the runoff response. 
High evapotranspiration rates in the summer result in a depletion of moisture from the soils, substantially affecting the rainfall-runoff relation. Five wetland restoration simulations were run for each of five subbasins using data from August 1995 through August 1997, and for the two larger basins, Jack and Okabena Creek Basins, using data from May 1991 through September 1996. Results from linear regression analysis of total simulated direct runoff and total rainfall data for simulated storms in the wetland-restoration simulations indicate that the portion of total rainfall that becomes runoff will be reduced by 46 percent if 45 percent of current cropland is converted to wetland. The addition of wetlands reduced peak runoff in most of the simulations, but the reduction varied with antecedent soil moisture, the magnitude of the peak flow, and the presence of current wetlands and lakes. Reductions in the simulated total and peak runoff from the Jack Creek Basin for most of the simulated storms were greatest when additional wetlands were simulated in the North Branch Jack Creek or the Upper Jack Creek Subbasins. In the Okabena Creek Basin, reductions in simulated peak runoff for most of the storms were greatest when additional wetlands were simulated in the Lower Okabena Creek Subbasin.
Monte Carlo Simulation of Sudden Death Bearing Testing
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life, and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30,960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not of the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
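The test-time comparison being made can be sketched with a toy Monte Carlo of Weibull-distributed lives (illustrative only; the Weibull parameters and group layout here are assumptions, not those of the study):

```python
import math
import random

def weibull_life(eta, beta, rng):
    """Inverse-CDF sample of a two-parameter Weibull life
    (scale eta, shape beta)."""
    return eta * (-math.log(1.0 - rng.random())) ** (1.0 / beta)

def sudden_death_test_time(n_total, group_size, eta=100.0, beta=1.5, seed=1):
    """Toy sudden-death campaign: bearings run in groups, and every rig in
    a group is stopped as soon as the group's first failure occurs.
    Returns (accumulated_rig_time_sudden_death,
             accumulated_rig_time_running_all_to_failure)."""
    rng = random.Random(seed)
    lives = [weibull_life(eta, beta, rng) for _ in range(n_total)]
    sd_time = 0.0
    for i in range(0, n_total, group_size):
        group = lives[i:i + group_size]
        sd_time += min(group) * len(group)   # all rigs stop at first failure
    full_time = sum(lives)                   # every bearing run to failure
    return sd_time, full_time
```

Because each group is suspended at its first failure, the accumulated rig time is always below running every bearing to failure; the abstract's point is that censored sequential testing can do even better on test and calendar time.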
Mesgouez, C; Rilliard, F; Matossian, L; Nassiri, K; Mandel, E
2003-03-01
The aim of this study was to determine the influence of operator experience on the time needed for canal preparation when using a rotary nickel-titanium (Ni-Ti) system. A total of 100 simulated curved canals in resin blocks were used. Four operators prepared 25 canals each. The operators included practitioners with prior experience of the preparation technique and practitioners with no experience. The working length for each instrument was precisely predetermined. All canals were instrumented with rotary Ni-Ti ProFile Variable Taper Series 29 engine-driven instruments using a high-torque handpiece (Maillefer, Ballaigues, Switzerland). The time taken to prepare each canal was recorded. Significant differences between the operators were analysed using Student's t-test and the Kruskal-Wallis and Dunn nonparametric tests. Comparison of canal preparation times demonstrated a statistically significant difference between the four operators (P < 0.001). In the inexperienced group, a significant linear regression between canal number and preparation time occurred. The time required for canal preparation was inversely related to operator experience.
NASA Astrophysics Data System (ADS)
Jizhi, Liu; Xingbi, Chen
2009-12-01
A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performance of the 3D structure is analyzed by combining several 2D device structures; the 2D devices lie in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as saving computing time, requiring no high-end computing hardware, and being easy to operate.
Yousefi, M; Ferreira, R P M
2017-03-30
This study presents agent-based simulation modeling of an emergency department. In a traditional approach, a supervisor (or manager) allocates resources (receptionists, nurses, doctors, etc.) to different sections based on personal experience or by using decision-support tools. In this study, each staff agent took part in the process of allocating resources based on observations in their respective sections, which gave the system the advantage of utilizing all available human resources during the workday by re-allocating them across sections. In this simulation, unlike previous studies, all staff agents took part in the decision-making process to re-allocate resources in the emergency department. The simulation modeled the behavior of patients, receptionists, triage nurses, emergency room nurses and doctors. Patients were able to decide whether to stay in the system or leave the department at any stage of treatment. In order to evaluate the performance of this approach, 6 different scenarios were introduced. In each scenario, various key performance indicators were investigated before and after applying the group decision-making. The outputs of each simulation were the number of deaths, the number of patients who left the emergency department without being attended, length of stay, waiting time, and the total number of patients discharged from the emergency department. Applying the self-organizing approach in the simulation showed average decreases of 12.7 and 14.4% in total waiting time and in the number of patients who left without being seen, respectively. The results showed an average increase of 11.5% in the total number of patients discharged from the emergency department.
System Simulation by Recursive Feedback: Coupling A Set of Stand-Alone Subsystem Simulations
NASA Technical Reports Server (NTRS)
Nixon, Douglas D.; Hanson, John M. (Technical Monitor)
2002-01-01
Recursive feedback is defined and discussed as a framework for development of specific algorithms and procedures that propagate the time-domain solution for a dynamical system simulation consisting of multiple numerically coupled self-contained stand-alone subsystem simulations. A satellite motion example containing three subsystems (orbit dynamics, attitude dynamics, and aerodynamics) has been defined and constructed using this approach. Conventional solution methods are used in the subsystem simulations. Centralized and distributed versions of the coupling structure have been addressed. Numerical results are evaluated by direct comparison with a standard total-system simultaneous-solution approach.
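The flavor of the approach, each subsystem integrated stand-alone over the interval while exchanged trajectories are iterated to convergence, can be sketched for a two-subsystem toy problem, x' = -y and y' = x with x(0) = 1, y(0) = 0 (an illustration of the coupling idea under assumed Euler subsystem integrators, not the satellite example itself):

```python
def couple_by_recursive_feedback(t_end, dt, n_sweeps):
    """Each sweep integrates both subsystems stand-alone over the whole
    interval, using the other subsystem's trajectory from the previous
    sweep; repeated sweeps drive the coupled solution to a fixed point.
    Returns the final (x, y) trajectories on the time grid."""
    n = int(round(t_end / dt))
    xs = [1.0] * (n + 1)          # initial guess: hold initial values
    ys = [0.0] * (n + 1)
    for _ in range(n_sweeps):
        # Subsystem 1: integrate x' = -y with the current y trajectory.
        new_x = [1.0]
        for k in range(n):
            new_x.append(new_x[-1] - dt * ys[k])
        # Subsystem 2: integrate y' = x with the current x trajectory.
        new_y = [0.0]
        for k in range(n):
            new_y.append(new_y[-1] + dt * xs[k])
        xs, ys = new_x, new_y     # exchange trajectories for the next sweep
    return xs, ys
```

At the fixed point the pair satisfies the same difference equations a jointly integrated (simultaneous-solution) Euler scheme would, which is why the converged recursive-feedback answer can be checked against the standard total-system approach.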
Teaching binocular indirect ophthalmoscopy to novice residents using an augmented reality simulator.
Rai, Amandeep S; Rai, Amrit S; Mavrikakis, Emmanouil; Lam, Wai Ching
2017-10-01
To compare the traditional teaching approach of binocular indirect ophthalmoscopy (BIO) to the EyeSI augmented reality (AR) BIO simulator. Prospective randomized control trial. 28 post-graduate year one (PGY1) ophthalmology residents. Residents were recruited at the 2012 Toronto Ophthalmology Residents Introductory Course (TORIC). 15 were randomized to conventional teaching (Group 1), and 13 to augmented reality simulator training (Group 2). 3 vitreoretinal fellows were enrolled to serve as experts. Evaluations were completed on the simulator, with 3 tasks, and outcome measures were total raw score, total time elapsed, and performance. Following conventional training, Group 1 residents were outperformed by vitreoretinal fellows with respect to all 3 outcome measures. Following AR training, Group 2 residents demonstrated superior total scores and performance compared to Group 1 residents. Once the Group 1 residents also completed the AR BIO training, their scores improved significantly compared to baseline and were on par with those of Group 2 residents. This study provides construct validity for the EyeSI AR BIO simulator and demonstrates that it may be superior to conventional BIO teaching for novice ophthalmology residents. Copyright © 2017 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
Ensemble Bayesian forecasting system Part I: Theory and algorithms
NASA Astrophysics Data System (ADS)
Herr, Henry D.; Krzysztofowicz, Roman
2015-05-01
The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. 
Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of the predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
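The auxiliary-randomization idea, one deterministic hydrologic model run spawning several ensemble members, can be sketched as follows (a crude Gaussian stand-in for the meta-Gaussian HUP; `rho`, `sigma`, and the toy model are assumptions, not the published processor):

```python
import math
import random

def ebfsr_sketch(inputs, model, k_per_run, rho=0.8, sigma=1.0, seed=0):
    """For each input ensemble member, run the deterministic model once,
    then draw k_per_run predictand members from a conditional Gaussian
    centered on the model output.  This enlarges the ensemble without
    extra model runs, mimicking the EBFSR randomization step."""
    rng = random.Random(seed)
    ensemble = []
    for u in inputs:                          # one model run per member
        y = model(u)                          # deterministic hydrologic model
        cond_sd = sigma * math.sqrt(1.0 - rho ** 2)
        for _ in range(k_per_run):            # auxiliary randomization
            ensemble.append(rho * y + rng.gauss(0.0, cond_sd))
    return ensemble
```

With, say, 100 input members and k_per_run = 10, the costly model runs stay at 100 while the delivered ensemble has 1000 members, which is the operational point of the EBFSR.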
Surface Roughness of Composite Resins after Simulated Toothbrushing with Different Dentifrices.
Monteiro, Bruna; Spohr, Ana Maria
2015-07-01
The aim of the study was to evaluate, in vitro, the surface roughness of two composite resins submitted to simulated toothbrushing with three different dentifrices. In total, 36 samples of Z350 XT and 36 samples of Empress Direct were built and randomly divided into three groups (n = 12) according to the dentifrice used (Oral-B Pro-Health Whitening [OBW], Colgate Sensitive Pro-Relief [CS], Colgate Total Clean Mint 12 [CT12]). The samples were submitted to 5,000, 10,000 or 20,000 cycles of simulated toothbrushing. After each simulated period, the surface roughness of the samples was measured using a roughness tester. According to three-way analysis of variance, dentifrice (P = 0.044) and brushing time (P < 0.001) were significant. The composite resin was not significant (P = 0.381) and the interaction among the factors was not significant (P > 0.05). Mean surface roughness values (µm) followed by the same letter represent no statistical difference by Tukey's post-hoc test (P < 0.05). Dentifrice: CT12 = 0.269(a); CS = 0.300(ab); OBW = 0.390(b). Brushing time: baseline = 0.046(a); 5,000 cycles = 0.297(b); 10,000 cycles = 0.354(b); 20,000 cycles = 0.584(c). Z350 XT and Empress Direct presented similar surface roughness after all cycles of simulated toothbrushing. The longer the brushing time, the higher the surface roughness of the composite resins. The dentifrice OBW caused a higher surface roughness in both composite resins.
Vectorization of a particle code used in the simulation of rarefied hypersonic flow
NASA Technical Reports Server (NTRS)
Baganoff, D.
1990-01-01
A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of the vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized for the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10^7 particles and 4 × 10^5 mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.
Dust Storm Signatures in Global Ionosphere Map of GPS Total Electron Content
NASA Astrophysics Data System (ADS)
Lin, Fang-Tse; Shih, Ai-Ling; Liu, Jann-Yenq; Kuo, Cheng-Ling; Lin, Tang-Huang; Lien, Wei-Hung
2016-04-01
In this paper, MODIS data and GIM (global ionosphere map) TEC (total electron content), together with numerical simulations, are used to study ionospheric dust storm effects in May 2008. The aerosol optical depth (AOD) and the LTT (latitude-time-TEC) along the Sahara longitude simultaneously reach their maximum values on 28 May 2008. The LLT (latitude-longitude-TEC) map specifically and significantly increases over the Sahara region on 28 May 2008. The simulation suggests that the dust storm may change the atmospheric conductivity, which in turn modifies the GIM TEC over the Sahara area.
Turbulent Plane Wakes Subjected to Successive Strains
NASA Technical Reports Server (NTRS)
Rogers, Michael M.
2003-01-01
Six direct numerical simulations of turbulent time-evolving strained plane wakes have been examined to investigate the response of a wake to successive irrotational plane strains of opposite sign. The orientation of the applied strain field has been selected so that the flow is the time-developing analogue of a spatially developing wake evolving in the presence of either a favourable or an adverse streamwise pressure gradient. The magnitude of the applied strain rate a is constant in time t until the total strain e^(at) reaches about four. At this point, a new simulation is begun with the sign of the applied strain being reversed (the original simulation is continued as well). When the total strain is reduced back to its original value of one, yet another simulation is begun with the sign of the strain being reversed again back to its original sign. This process is done for both initially "favourable" and initially "adverse" strains, providing simulations for each of these strain types from three different initial conditions. The evolution of the wake mean velocity deficit and width is found to be very similar for all the adversely strained cases, with both measures rapidly achieving exponential growth at the rate associated with the cross-stream expansive strain e^(at). In the "favourably" strained cases, the wake widths approach a constant and the velocity deficits ultimately decay rapidly as e^(-2at). Although all three of these cases do exhibit the same asymptotic exponential behaviour, the time required to achieve this is longer for the cases that have been previously adversely strained (by at ≈ 1). These simulations confirm the generality of the conclusions drawn in Rogers (2002) regarding the response of plane wakes to strain. The evolution of strained wakes is not consistent with the predictions of classical self-similar analysis; a more general equilibrium similarity solution is required to describe the results. 
At least for the cases considered here, the wake Reynolds number and the ratio of the turbulent kinetic energy to the square of the wake mean velocity deficit are determined nearly entirely by the total strain. For these measures the order in which the strains are applied does not matter and the changes brought about by the strain are nearly reversible. The wake mean velocity deficit and width, on the other hand, differ by about a factor of three when the total strain returns to one, depending on whether the wake was first "favourably" or "adversely" strained. The strain history is important for predicting the evolution of these quantities.
Pan, Jui-Wen; Tsai, Pei-Jung; Chang, Kao-Der; Chang, Yung-Yuan
2013-03-01
In this paper, we propose a method to analyze the light extraction efficiency (LEE) enhancement of a nanopatterned sapphire substrates (NPSS) light-emitting diode (LED) by comparing wave optics software with ray optics software. Finite-difference time-domain (FDTD) simulations represent the wave optics software and Light Tools (LTs) simulations represent the ray optics software. First, we find the trends of and an optimal solution for the LEE enhancement when the 2D-FDTD simulations are used to save on simulation time and computational memory. The rigorous coupled-wave analysis method is utilized to explain the trend we get from the 2D-FDTD algorithm. The optimal solution is then applied in 3D-FDTD and LTs simulations. The results are similar and the difference in LEE enhancement between the two simulations does not exceed 8.5% in the small LED chip area. More than 10^4 times computational memory is saved during the LTs simulation in comparison to the 3D-FDTD simulation. Moreover, LEE enhancement from the side of the LED can be obtained in the LTs simulation. An actual-size NPSS LED is simulated using the LTs. The results show a more than 307% improvement in the total LEE enhancement of the NPSS LED with the optimal solution compared to the conventional LED.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Tae-Hyuk; Sandu, Adrian; Watson, Layne T.
2015-08-01
Ensembles of simulations are employed to estimate the statistics of possible future states of a system, and are widely used in important applications such as climate change and biological modeling. Ensembles of runs can naturally be executed in parallel. However, when the CPU times of individual simulations vary considerably, a simple strategy of assigning an equal number of tasks per processor can lead to serious work imbalances and low parallel efficiency. This paper presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms for ensembles of simulations where many tasks are mapped onto each processor, and where the individual compute times vary considerably among tasks. Four load balancing strategies are discussed: most-dividing, all-redistribution, random-polling, and neighbor-redistribution. Simulation results with a stochastic budding yeast cell cycle model are consistent with the theoretical analysis. Notably, a global decrease in load imbalance is provable for the local rebalancing algorithms, which matters because scalability concerns favor them over the global rebalancing algorithms. The overall simulation time is reduced by up to 25%, and the total processor idle time by 85%.
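The gap between static equal-count assignment and dynamic rebalancing under highly variable task times can be seen with a small scheduling sketch (illustrative only; the dynamic case below is simple greedy pulling, not one of the paper's four strategies):

```python
import heapq

def makespan_static(task_times, n_proc):
    """Equal-count static assignment: processor p gets every n_proc-th
    task, regardless of task cost."""
    return max(sum(task_times[p::n_proc]) for p in range(n_proc))

def makespan_dynamic(task_times, n_proc):
    """Dynamic pulling: the first idle processor takes the next task.
    Simulated with a min-heap of per-processor finish times."""
    finish = [0.0] * n_proc
    heapq.heapify(finish)
    for t in task_times:
        heapq.heappush(finish, heapq.heappop(finish) + t)
    return max(finish)
```

When expensive tasks happen to cluster onto one processor under the static split, the dynamic scheme's makespan approaches the ideal total-work/processors bound while the static one is dominated by the overloaded processor, the imbalance the paper's rebalancing algorithms target.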
Scale-Up of Lubricant Mixing Process by Using V-Type Blender Based on Discrete Element Method.
Horibe, Masashi; Sonoda, Ryoichi; Watano, Satoru
2018-01-01
A method for scale-up of a lubricant mixing process in a V-type blender was proposed. Magnesium stearate was used for the lubricant, and the lubricant mixing experiment was conducted using three scales of V-type blenders (1.45, 21 and 130 L) under the same fill level and Froude (Fr) number. However, the properties of lubricated mixtures and tablets could not correspond with the mixing time or the total revolution number. To find the optimum scale-up factor, discrete element method (DEM) simulations of three scales of V-type blender mixing were conducted, and the total travel distance of particles under the different scales was calculated. The properties of the lubricated mixture and tablets obtained from the scale-up experiment were well correlated with the mixing time determined by the total travel distance. It was found that a scale-up simulation based on the travel distance of particles is valid for the lubricant mixing scale-up processes.
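The constant-Froude-number rule used to match rotation rates across the three blender scales can be written out directly (a generic sketch; the Fr definition based on vessel diameter is an assumption):

```python
import math

def rpm_for_constant_froude(rpm_ref, d_ref_m, d_new_m):
    """With Fr proportional to N^2 * D (N in rpm, D the characteristic
    vessel diameter in m), holding Fr constant across scales gives
    N_new = N_ref * sqrt(D_ref / D_new)."""
    return rpm_ref * math.sqrt(d_ref_m / d_new_m)
```

A vessel four times larger in diameter therefore turns at half the reference speed; the abstract's point is that this classical rule alone did not reproduce the lubrication state, which instead tracked the DEM-computed total travel distance of particles.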
NASA Astrophysics Data System (ADS)
Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars
2018-02-01
The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. 
The selection of the integration scheme and the appropriate time step should take into account the typical altitude ranges as well as the total length of the simulations to achieve the best efficiency. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
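The midpoint and third-order Runge-Kutta schemes recommended above can be sketched for a generic trajectory equation dx/dt = v(x, t). This is a single-variable illustration, not the MPTRAC implementation:

```python
import math

def midpoint_step(x, t, dt, v):
    # second-order explicit midpoint scheme for dx/dt = v(x, t)
    k1 = v(x, t)
    return x + dt * v(x + 0.5 * dt * k1, t + 0.5 * dt)

def rk3_step(x, t, dt, v):
    # Kutta's classical third-order Runge-Kutta scheme
    k1 = v(x, t)
    k2 = v(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = v(x - dt * k1 + 2.0 * dt * k2, t + dt)
    return x + dt * (k1 + 4.0 * k2 + k3) / 6.0
```

For the test problem dx/dt = x the schemes reproduce the exponential to second and third order, respectively.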
An infiltration/cure model for manufacture of fabric composites by the resin infusion process
NASA Technical Reports Server (NTRS)
Weideman, Mark H.; Loos, Alfred C.; Dexter, H. Benson; Hasko, Gregory H.
1992-01-01
A 1-D infiltration/cure model was developed to simulate fabrication of advanced textile composites by the resin film infusion process. The simulation model relates the applied temperature and pressure processing cycles, along with the experimentally measured compaction and permeability characteristics of the fabric preforms, to the temperature distribution, the resin degree of cure and viscosity, and the infiltration flow front position as a function of time. The model also predicts the final panel thickness, fiber volume fraction, and resin mass for full saturation as a function of compaction pressure. Composite panels were fabricated using the RTM (Resin Transfer Molding) film infusion technique from knitted, knitted/stitched, and 2-D woven carbon preforms and Hercules 3501-6 resin. Fabric composites were fabricated at different compaction pressures and temperature cycles to determine the effects of the processing on the properties. The composites were C-scanned and micrographed to determine the quality of each panel. Advanced cure cycles, developed from the RTM simulation model, were used to reduce the total cure cycle times by a factor of 3 and the total infiltration times by a factor of 2.
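For a rough sense of the infiltration kinetics such a model resolves, the classic 1-D Darcy flow front under a constant pressure drop advances as L(t) = sqrt(2 K ΔP t / (μ φ)). The function below is a simplified textbook sketch, not the authors' infiltration/cure model, and all parameter values are illustrative:

```python
import math

def flow_front(t, k_perm, delta_p, mu, porosity):
    # 1-D Darcy infiltration under constant pressure drop:
    # L(t) = sqrt(2 * K * dP * t / (mu * phi))
    return math.sqrt(2.0 * k_perm * delta_p * t / (mu * porosity))

# illustrative values: K = 1e-12 m^2, dP = 100 kPa, mu = 0.1 Pa s, phi = 0.5
print(flow_front(1.0, 1e-12, 1e5, 0.1, 0.5))  # front position after 1 s, in m
```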
Validating the Modeling and Simulation of a Generic Tracking Radar
2009-07-28
order Gauss-Markov time series with σ_TGM = 250 units and τ_GM = 10 s is shown in the top panel of Figure 1. The time series, r̃_k, can represent any...are shared among the sensors. The total position and velocity estimation errors valid at time index k are given by r̃_k|k = r̂_k|k − r_k and
González, Natalia; Aguilar, Lorenzo; Sevillano, David; Giménez, Maria-Jose; Alou, Luis; Cafini, Fabio; Torrico, Martha; López, Ana-Maria; Coronel, Pilar; Prieto, Jose
2011-06-01
This study explores the effects of cefditoren (CDN) versus amoxicillin-clavulanic acid (AMC) on the evolution (within a single strain) of total and recombined populations derived from intrastrain ftsI gene diffusion in β-lactamase-positive (BL⁺) and β-lactamase-negative (BL⁻) Haemophilus influenzae. DNA from β-lactamase-negative, ampicillin-resistant (BLNAR) isolates (DNA(BLNAR)) and from β-lactamase-positive, amoxicillin-clavulanate-resistant (BLPACR) (DNA(BLPACR)) isolates was extracted and added to a 10⁷-CFU/ml suspension of one BL⁺ strain (CDN MIC, 0.007 μg/ml; AMC MIC, 1 μg/ml) or one BL⁻ strain (CDN MIC, 0.015 μg/ml; AMC MIC, 0.5 μg/ml) in Haemophilus Test Medium (HTM). The mixture was incubated for 3 h and was then inoculated into a two-compartment computerized device simulating free concentrations of CDN (400 mg twice a day [b.i.d.]) or AMC (875 and 125 mg three times a day [t.i.d.]) in serum over 24 h. Controls were antibiotic-free simulations. Colony counts were performed; the total population and the recombined population were differentiated; and postsimulation MICs were determined. At time zero, the recombined population was 0.00095% of the total population. In controls, the BL⁻ and BL⁺ total populations and the BL⁻ recombined population increased (from ≈3 log₁₀ to 4.5 to 5 log₁₀), while the BL⁺ recombined population was maintained in simulations with DNA(BLPACR) and was decreased by ≈2 log₁₀ with DNA(BLNAR). CDN was bactericidal (percentage of the dosing interval for which experimental antibiotic concentrations exceeded the MIC [ft>MIC], >88%), and no recombined populations were detected from 4 h on. AMC was bactericidal against BL⁻ strains (ft>MIC, 74.0%) in DNA(BLNAR) and DNA(BLPACR) simulations, with a small final recombined population (MIC, 4 μg/ml; ft>MIC, 30.7%) in DNA(BLPACR) simulations. 
When AMC was used against the BL⁺ strain (in DNA(BLNAR) or DNA(BLPACR) simulations), the bacterial load was reduced ≈2 log₁₀ (ft>MIC, 44.3%), but 6.3% and 32% of the total population corresponded to a recombined population (MIC, 16 μg/ml; ft>MIC, 0%) in DNA(BLNAR) and DNA(BLPACR) simulations, respectively. AMC, but not CDN, unmasked BL⁺ recombined populations obtained by transformation. ft>MIC values higher than those classically considered for bacteriological response are needed to counter intrastrain ftsI gene diffusion by covering recombined populations.
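The ft>MIC index used above is the fraction of the dosing interval during which the free drug concentration exceeds the MIC. A minimal sketch assuming simple one-compartment exponential decay; the PK parameters are illustrative, not those of cefditoren or AMC:

```python
import math

def ft_above_mic(interval_h, mic, c_peak, half_life_h, n=20000):
    # fraction of the dosing interval with C(t) = c_peak * exp(-k t) above the MIC
    k = math.log(2.0) / half_life_h
    dt = interval_h / n
    above = sum(1 for i in range(n)
                if c_peak * math.exp(-k * (i + 0.5) * dt) > mic)
    return above / n

# illustrative: peak 8 ug/ml, MIC 1 ug/ml, half-life 1 h, 12 h dosing interval
print(ft_above_mic(12.0, 1.0, 8.0, 1.0))
```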
NASA Astrophysics Data System (ADS)
Erkyihun, Solomon Tassew; Rajagopalan, Balaji; Zagona, Edith; Lall, Upmanu; Nowak, Kenneth
2016-05-01
A model to generate stochastic streamflow projections conditioned on quasi-oscillatory climate indices such as Pacific Decadal Oscillation (PDO) and Atlantic Multi-decadal Oscillation (AMO) is presented. Recognizing that each climate index has underlying band-limited components that contribute most of the energy of the signals, we first pursue a wavelet decomposition of the signals to identify and reconstruct these features from annually resolved historical data and proxy based paleoreconstructions of each climate index covering the period from 1650 to 2012. A K-Nearest Neighbor block bootstrap approach is then developed to simulate the total signal of each of these climate index series while preserving its time-frequency structure and marginal distributions. Finally, given the simulated climate signal time series, a K-Nearest Neighbor bootstrap is used to simulate annual streamflow series conditional on the joint state space defined by the simulated climate index for each year. We demonstrate this method by applying it to simulation of streamflow at Lees Ferry gauge on the Colorado River using indices of two large scale climate forcings: Pacific Decadal Oscillation (PDO) and Atlantic Multi-decadal Oscillation (AMO), which are known to modulate the Colorado River Basin (CRB) hydrology at multidecadal time scales. Skill in stochastic simulation of multidecadal projections of flow using this approach is demonstrated.
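The conditional K-Nearest Neighbor bootstrap step can be sketched as follows; this is a simplified single-index version with a 1/rank weight kernel, and the function names are my own:

```python
import random

def knn_flow_sample(index_value, hist_index, hist_flow, k=5, rng=random):
    # find the k historical years whose climate-index value is closest
    # to the simulated index value for the current year
    order = sorted(range(len(hist_index)),
                   key=lambda i: abs(hist_index[i] - index_value))[:k]
    # weight neighbours by 1/rank, a common kernel for KNN bootstrap resampling
    weights = [1.0 / (r + 1) for r in range(len(order))]
    return hist_flow[rng.choices(order, weights=weights)[0]]
```

Repeating this draw for each simulated year of the climate-index series yields a conditional streamflow trace.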
Modelling total solar irradiance since 1878 from simulated magnetograms
NASA Astrophysics Data System (ADS)
Dasi-Espuig, M.; Jiang, J.; Krivova, N. A.; Solanki, S. K.
2014-10-01
Aims: We present a new model of total solar irradiance (TSI) based on magnetograms simulated with a surface flux transport model (SFTM) and the Spectral And Total Irradiance REconstructions (SATIRE) model. Our model provides daily maps of the distribution of the photospheric field and the TSI starting from 1878. Methods: The modelling is done in two main steps. We first calculate the magnetic flux on the solar surface emerging in active and ephemeral regions. The evolution of the magnetic flux in active regions (sunspots and faculae) is computed using a surface flux transport model fed with the observed record of sunspot group areas and positions. The magnetic flux in ephemeral regions is treated separately using the concept of overlapping cycles. We then use a version of the SATIRE model to compute the TSI. The area coverage and the distribution of different magnetic features as a function of time, which are required by SATIRE, are extracted from the simulated magnetograms and the modelled ephemeral region magnetic flux. Previously computed intensity spectra of the various types of magnetic features are employed. Results: Our model reproduces the PMOD composite of TSI measurements starting from 1978 at daily and rotational timescales more accurately than the previous version of the SATIRE model computing TSI over this period of time. The simulated magnetograms provide a more realistic representation of the evolution of the magnetic field on the photosphere and also allow us to make use of information on the spatial distribution of the magnetic fields before the times when observed magnetograms were available. We find that the secular increase in TSI since 1878 is fairly stable to modifications of the treatment of the ephemeral region magnetic flux.
Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam
2012-02-01
To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. Paediatric residency program at BC Children's Hospital, Vancouver, British Columbia. The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes.
Mizuta, Sora; Saito, Itsuro; Isoyama, Takashi; Hara, Shintaro; Yurimoto, Terumi; Li, Xinyang; Murakami, Haruka; Ono, Toshiya; Mabuchi, Kunihiko; Abe, Yusuke
2017-09-01
1/R control is a physiological control method for the total artificial heart (TAH) with which long-term survival was obtained in animal experiments. However, 1/R control occasionally diverged in the undulation pump TAH (UPTAH) animal experiments. To improve the stability of 1/R control, the appropriate control time constant in relation to the characteristics of the baroreflex vascular system was investigated with frequency analysis and numerical simulation. In the frequency analysis, data from five goats implanted with the UPTAH were analyzed with the fast Fourier transform technique to examine the vasomotion frequency. The numerical simulation was carried out by repeatedly changing the baroreflex parameters and the control time constant using the elements-expanded Windkessel model. The frequency analysis showed that 1/R control tended to diverge when the very-low-frequency band, an indication of the vasomotion frequency, was relatively high. In the numerical simulation, divergence of 1/R control could be reproduced, and the boundary curves between divergence and convergence of 1/R control varied depending on the control time constant. These results suggested that 1/R control tends to be unstable when the TAH recipient has a high reflex speed in the baroreflex vascular system. Therefore, the control time constant should be adjusted appropriately to the individual vasomotion frequency.
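The vascular model used in the simulation extends a Windkessel model. A minimal two-element Windkessel sketch (not the authors' elements-expanded version), integrated with explicit Euler:

```python
def windkessel_pressure(q, r, c, dt=0.001, steps=20000, p0=0.0):
    # two-element Windkessel: C * dP/dt = Q - P/R
    # q: inflow, r: peripheral resistance, c: compliance (all in consistent units)
    p = p0
    for _ in range(steps):
        p += dt * (q - p / r) / c
    return p

# with constant inflow the pressure relaxes toward the steady state P = Q * R
print(windkessel_pressure(5.0, 20.0, 0.1))
```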
Helicopter pilot scan techniques during low-altitude high-speed flight.
Kirby, Christopher E; Kennedy, Quinn; Yang, Ji Hyun
2014-07-01
This study examined pilots' visual scan patterns during a simulated high-speed, low-level flight and how their scan rates related to flight performance. As helicopters become faster and more agile, pilots are expected to navigate at low altitudes while traveling at high speeds. A pilot's ability to interpret information from a combination of visual sources determines not only mission success, but also aircraft and crew survival. In a fixed-base helicopter simulator modeled after the U.S. Navy's MH-60S, 17 active-duty Navy helicopter pilots with varying total flight times flew and navigated through a simulated southern Californian desert course. Pilots' scan rate and fixation locations were monitored using an eye-tracking system while they flew through the course. Flight parameters, including altitude, were recorded using the simulator's recording system. Experienced pilots with more than 1000 total flight hours better maintained a constant altitude (mean altitude deviation = 48.52 ft, SD = 31.78) than less experienced pilots (mean altitude deviation = 73.03 ft, SD = 10.61) and differed in some aspects of their visual scans. They spent more time looking at the instrument display and less time looking out the window (OTW) than less experienced pilots. Looking OTW was associated with less consistency in maintaining altitude. Results may aid training effectiveness specific to helicopter aviation, particularly in high-speed low-level flight conditions.
Gray: a ray tracing-based Monte Carlo simulator for PET
NASA Astrophysics Data System (ADS)
Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.
2018-05-01
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
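Peak NECR, used above as a figure of merit, follows the standard noise-equivalent count rate definition NECR = T²/(T + S + R):

```python
def necr(trues, scatters, randoms):
    # noise-equivalent count rate: NECR = T^2 / (T + S + R)
    total = trues + scatters + randoms
    return trues ** 2 / total if total else 0.0

print(necr(100.0, 50.0, 50.0))  # illustrative count rates in kcps
```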
NASA Astrophysics Data System (ADS)
Hu, Rui; Liu, Quan
2017-04-01
During engineering projects using artificial ground freezing (AGF) techniques in coastal areas, the freezing effect is affected by groundwater salinity. Based on the theories of artificially frozen soil and heat transfer in porous media, and under the assumption that only variations in total dissolved solids (TDS) affect the freezing point and thermal conductivity, a numerical model of an AGF project in a saline aquifer was established and validated by comparing the simulated temperature field with the temperature calculated from Trupak's analytic solution for the single-pipe freezing temperature field. The formation and development of the freezing wall were simulated for various TDS levels. The results showed that variations in TDS caused larger temperature differences near the freezing front. With increasing TDS in the saline aquifer (1-35 g/L), the average thickness of the freezing wall decreased linearly and the total formation time of the freezing wall increased linearly. Compared with the fresh-water scenario (<1 g/L), the average thickness of the frozen wall decreased by 6% and the total formation time of the freezing wall increased by 8% for each 7 g/L increase in TDS. Key words: total dissolved solids, freezing point, thermal conductivity, freezing wall, numerical simulation
Galloway, Joel M.; Ortiz, Roderick F.; Bales, Jerad D.; Mau, David P.
2008-01-01
Pueblo Reservoir is west of Pueblo, Colorado, and is an important water resource for southeastern Colorado. The reservoir provides irrigation, municipal, and industrial water to various entities throughout the region. In anticipation of increased population growth, the cities of Colorado Springs, Fountain, Security, and Pueblo West have proposed building a pipeline that would be capable of conveying 78 million gallons of raw water per day (240 acre-feet) from Pueblo Reservoir. The U.S. Geological Survey, in cooperation with Colorado Springs Utilities and the Bureau of Reclamation, developed, calibrated, and verified a hydrodynamic and water-quality model of Pueblo Reservoir to describe the hydrologic, chemical, and biological processes in Pueblo Reservoir that can be used to assess environmental effects in the reservoir. Hydrodynamics and water-quality characteristics in Pueblo Reservoir were simulated using a laterally averaged, two-dimensional model that was calibrated using data collected from October 1985 through September 1987. The Pueblo Reservoir model was calibrated based on vertical profiles of water temperature and dissolved-oxygen concentration, and water-quality constituent concentrations collected in the epilimnion and hypolimnion at four sites in the reservoir. The calibrated model was verified with data from October 1999 through September 2002, which included a relatively wet year (water year 2000), an average year (water year 2001), and a dry year (water year 2002). Simulated water temperatures compared well to measured water temperatures in Pueblo Reservoir from October 1985 through September 1987. Spatially, simulated water temperatures compared better to measured water temperatures in the downstream part of the reservoir than in the upstream part of the reservoir. Differences between simulated and measured water temperatures also varied through time. 
Simulated water temperatures were slightly less than measured water temperatures from March to May 1986 and 1987, and slightly greater than measured data in August and September 1987. Relative to the calibration period, simulated water temperatures during the verification period did not compare as well to measured water temperatures. In general, simulated dissolved-oxygen concentrations for the calibration period compared well to measured concentrations in Pueblo Reservoir. Spatially, simulated concentrations deviated more from the measured values at the downstream part of the reservoir than at other locations in the reservoir. Overall, the absolute mean error ranged from 1.05 (site 1B) to 1.42 milligrams per liter (site 7B), and the root mean square error ranged from 1.12 (site 1B) to 1.67 milligrams per liter (site 7B). Simulated dissolved oxygen in the verification period compared better to the measured concentrations than in the calibration period. The absolute mean error ranged from 0.91 (site 5C) to 1.28 milligrams per liter (site 7B), and the root mean square error ranged from 1.03 (site 5C) to 1.46 milligrams per liter (site 7B). Simulated total dissolved solids generally were less than measured total dissolved-solids concentrations in Pueblo Reservoir from October 1985 through September 1987. The largest differences between simulated and measured total dissolved solids were observed at the most downstream sites in Pueblo Reservoir during the second year of the calibration period. Total dissolved-solids data were not available from reservoir sites during the verification period, so in-reservoir specific-conductance data were compared to simulated total dissolved solids. Simulated total dissolved solids followed the same patterns through time as the measured specific conductance data during the verification period. Simulated total nitrogen concentrations compared relatively well to measured concentrations in the Pueblo Reservoir model. 
The absolute mean error ranged from 0.21 (site 1B) to 0.27 milligram per liter as nitrogen (sites 3B and 7
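The absolute mean error and root mean square error statistics reported above can be computed as:

```python
import math

def absolute_mean_error(simulated, measured):
    # mean of |simulated - measured| over all paired observations
    return sum(abs(s - m) for s, m in zip(simulated, measured)) / len(simulated)

def root_mean_square_error(simulated, measured):
    # square root of the mean squared deviation
    return math.sqrt(sum((s - m) ** 2
                         for s, m in zip(simulated, measured)) / len(simulated))
```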
He, Qin; Mohaghegh, Shahab D.; Gholami, Vida
2013-01-01
CO2 sequestration into a coal seam was studied, and a numerical model was developed in this paper to simulate primary and secondary coal bed methane production (CBM/ECBM) and carbon dioxide (CO2) injection. The key geological and reservoir parameters germane to driving the enhanced coal bed methane (ECBM) and CO2 sequestration processes, including cleat permeability, cleat porosity, CH4 adsorption time, CO2 adsorption time, the CH4 Langmuir isotherm, the CO2 Langmuir isotherm, and the Palmer and Mansoori parameters, were analyzed within reasonable ranges. The model simulation results showed good matches for both CBM/ECBM production and CO2 injection compared with the field data. The history-matched model was used to estimate the total CO2 sequestration capacity in the field. The model forecast showed that the total CO2 injection capacity in the coal seam could be 22,817 tons, which is in agreement with the initial estimations based on the Langmuir isotherm experiment. Total CO2 injected in the first three years was 2,600 tons, which according to the model has increased methane recovery (due to ECBM) by 6,700 scf/d.
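The Langmuir isotherm underlying the capacity estimate gives the adsorbed gas content as V(P) = V_L * P / (P_L + P). A minimal sketch with illustrative values, not the study's measured isotherm constants:

```python
def langmuir_volume(pressure, v_langmuir, p_langmuir):
    # Langmuir isotherm: adsorbed gas content V(P) = V_L * P / (P_L + P)
    # v_langmuir: maximum adsorption capacity, p_langmuir: half-saturation pressure
    return v_langmuir * pressure / (p_langmuir + pressure)

# illustrative: at P equal to the Langmuir pressure, half of V_L is adsorbed
print(langmuir_volume(500.0, 400.0, 500.0))
```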
DEM simulation of the granular Maxwell's Demon under zero gravity
NASA Astrophysics Data System (ADS)
Wang, Wenguang; Zhou, Zhigang; Zong, Jin; Hou, Meiying
2017-06-01
In this work, granular segregation in a two-compartment cell (Maxwell's Demon) under zero gravity is studied numerically by DEM simulation for comparison with the experimental observation in satellite SJ-10. The effect of three parameters: the total number of particles N, the excitation strength Γ, and the position of the window coupling the two compartments, on the segregation ɛ and the waiting time τ are investigated. In the simulation, non-zero segregation under zero gravity is obtained, and the segregation ɛ is found independent of the excitation strength Γ. The waiting time τ, however, depends strongly on Γ. For higher acceleration Γ, |ɛ_i| reaches the steady-state value ɛ faster.
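The segregation parameter ɛ for a two-compartment cell is conventionally defined from the particle counts in the two compartments. A minimal sketch; the exact definition used in the study may differ in normalization or sign convention:

```python
def segregation(n_left, n_right):
    # epsilon = (N_left - N_right) / N: 0 for an even split, +/-1 for full segregation
    total = n_left + n_right
    return (n_left - n_right) / total if total else 0.0

print(segregation(75, 25))  # most particles gathered in one compartment
```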
Wei, Fanan; Yang, Haitao; Liu, Lianqing; Li, Guangyong
2017-03-01
Dynamic mechanical behaviour of living cells has been described by viscoelasticity. However, quantification of viscoelastic parameters for living cells is far from mature. In this paper, combining inverse finite element (FE) simulation with atomic force microscope (AFM) characterization, we attempt to develop a new method to evaluate and acquire trustworthy viscoelastic indices of living cells. First, the influence of the experimental parameters on the stress relaxation process is assessed using FE simulation. As suggested by the simulations, cell height has negligible impact on the shape of the force-time curve, i.e. the characteristic relaxation time, and the effect originating from the substrate can be eliminated entirely when a stiff substrate (Young's modulus larger than 3 GPa) is used. Then, to develop an effective optimization strategy for the inverse FE simulation, a parameter sensitivity evaluation is performed for Young's modulus, Poisson's ratio, and the characteristic relaxation time. With the experimental data obtained through a typical stress relaxation measurement, viscoelastic parameters are extracted through inverse FE simulation by comparing the simulation results and experimental measurements. Finally, the reliability of the acquired mechanical parameters is verified with different load experiments performed on the same cell.
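Stress relaxation curves of the kind characterized above are often modeled with a standard-linear-solid form. A minimal closed-form sketch; the authors' FE model is more detailed than this:

```python
import math

def relaxation_force(t, f_inf, f0, tau):
    # standard-linear-solid stress relaxation:
    # F(t) = F_inf + (F0 - F_inf) * exp(-t / tau)
    # f0: initial force, f_inf: equilibrium force, tau: characteristic relaxation time
    return f_inf + (f0 - f_inf) * math.exp(-t / tau)
```

Fitting tau to a measured force-time curve is the simplest analogue of the inverse extraction described in the abstract.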
Efficient Simulation of Explicitly Solvated Proteins in the Well-Tempered Ensemble.
Deighan, Michael; Bonomi, Massimiliano; Pfaendtner, Jim
2012-07-10
Herein, we report significant reduction in the cost of combined parallel tempering and metadynamics simulations (PTMetaD). The efficiency boost is achieved using the recently proposed well-tempered ensemble (WTE) algorithm. We studied the convergence of PTMetaD-WTE conformational sampling and free energy reconstruction of an explicitly solvated 20-residue tryptophan-cage protein (trp-cage). A set of PTMetaD-WTE simulations was compared to a corresponding standard PTMetaD simulation. The properties of PTMetaD-WTE and the convergence of the calculations were compared. The roles of the number of replicas, total simulation time, and adjustable WTE parameter γ were studied.
Virtual reality training for endoscopic surgery: voluntary or obligatory?
van Dongen, K W; van der Wal, W A; Rinkes, I H M Borel; Schijven, M P; Broeders, I A M J
2008-03-01
Virtual reality (VR) simulators have been developed to train basic endoscopic surgical skills outside of the operating room. An important issue is how to create optimal conditions for integration of these types of simulators into the surgical training curriculum. The willingness of surgical residents to train these skills on a voluntary basis was surveyed. Twenty-one surgical residents were given unrestricted access to a VR simulator for a period of four months. After this period, a competitive element was introduced to enhance individual training time spent on the simulator. The overall end-scores for individual residents were announced periodically to the full surgical department, and the winner was awarded a prize. In the first four months of study, only two of the 21 residents (10%) trained on the simulator, for a total time span of 163 minutes. After introducing the competitive element the number of trainees increased to seven residents (33%). The amount of training time spent on the simulator increased to 738 minutes. Free unlimited access to a VR simulator for training basic endoscopic skills, without any form of obligation or assessment, did not motivate surgical residents to use the simulator. Introducing a competitive element for enhancing training time had only a marginal effect. The acquisition of expensive devices to train basic psychomotor skills for endoscopic surgery is probably only effective when it is an integrated and mandatory part of the surgical curriculum.
NASA Astrophysics Data System (ADS)
Panyun, YAN; Guozhu, LIANG; Yongzhi, LU; Zhihui, QI; Xingdou, GAO
2017-12-01
The fast simulation of the vehicular cold launch system (VCLS) in the launch process is an essential requirement for practical engineering applications. In particular, a general and fast simulation model of the VCLS will help the designer obtain the optimum scheme in the initial design phase. For these purposes, a system-level fast simulation model was established for the VCLS based on the subsystem synthesis method. Moreover, a comparison of the load of a seven-axis VCLS on rigid ground between theoretical calculations and experiments was carried out. It was found that the error of the load of the rear left outrigger is less than 7.1%, and the error of the total load of all the outriggers is less than 2.8%. Furthermore, the simulation model takes only 9.5 min to complete, which is 5% of the time taken by conventional algorithms.
An interactive drilling simulator for teaching and research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, G.A.; Cooper, A.G.; Bihn, G.
1995-12-31
An interactive program has been constructed that allows a student or engineer to simulate the drilling of an oil well, and to optimize the drilling process by comparing different drilling plans. The program operates in a very user-friendly way, with emphasis on menu- and button-driven commands. The simulator may be run either as a training program, with exercises that illustrate various features of the drilling process; as a game, in which a student is set a challenge to drill a well with minimum cost or time under constraints set by an instructor; or as a simulator of a real situation to investigate the merit of different drilling strategies. It has three main parts: a Lithology Editor, a Settings Editor and the simulation program itself. The Lithology Editor allows the student, instructor or engineer to build a real or imaginary sequence of rock layers, each characterized by its mineralogy, drilling and log responses. The Settings Editor allows the definition of all the operational parameters, ranging from the drilling and wear rates of particular bits in specified rocks to the costs of different procedures. The simulator itself contains an algorithm that determines the rate of penetration and the rate of wear of the bit as drilling continues. It also determines whether the well kicks or fractures, and assigns various other "accident" conditions. During operation, a depth vs. time curve is displayed, together with a "mud log" showing the rock layers penetrated. If desired, the well may be "logged", casings may be set and pore and fracture pressure gradients may be displayed. During drilling, the total time and cost are shown, together with cost per foot in total and for the current bit run.
Numerical simulations of novel high-power high-brightness diode laser structures
NASA Astrophysics Data System (ADS)
Boucke, Konstantin; Rogg, Joseph; Kelemen, Marc T.; Poprawe, Reinhart; Weimann, Guenter
2001-07-01
One of the key topics in today's semiconductor laser development is increasing the brightness of high-power diode lasers. Although structures showing increased brightness have been developed, specific drawbacks of these structures leave a strong demand for the investigation of alternative concepts. Especially for the investigation of fundamentally novel structures, easy-to-use and fast simulation tools are essential to avoid unnecessary, cost- and time-consuming experiments. A diode laser simulation tool based on finite-difference representations of the Helmholtz equation in the 'wide-angle' approximation and of the carrier diffusion equation has been developed. An optimized numerical algorithm leads to short execution times of a few seconds per resonator round trip on a standard PC. After each round trip, characteristics such as optical output power, beam profile and beam parameters are calculated. A graphical user interface allows online monitoring of the simulation results. The simulation tool is used to investigate a novel high-power, high-brightness diode laser structure, the so-called 'Z-Structure'. In this structure, increased brightness is achieved by reducing the divergence angle of the beam through angular filtering: the round-trip path of the beam is folded twice using internal total reflection at surfaces defined by a small index step in the semiconductor material, forming a stretched 'Z'. The sharp decrease of the reflectivity for angles of incidence above the angle of total reflection leads to a narrowing of the angular spectrum of the beam. The simulations of the 'Z-Structure' indicate an increase in beam quality by a factor of five to ten compared to standard broad-area lasers.
Return to Driving After Hip Arthroscopy.
Momaya, Amit M; Stavrinos, Despina; McManus, Benjamin; Wittig, Shannon M; Emblom, Benton; Estes, Reed
2018-05-01
The objective of this study was to evaluate patients' braking performance on a modern driving simulator after right hip arthroscopy. This prospective study comprised 5 driving sessions in total at which measurements were taken. The study was conducted at an academic medical center. A total of 14 patients scheduled to undergo right hip arthroscopy were enrolled and compared with a control group of 17 participants to account for a potential learning phenomenon. Patients drove the simulator preoperatively to establish a baseline, and then again at 2, 4, 6, and 8 weeks postoperatively. The control group did not undergo any surgical procedure. The main independent variable was time from surgery. The simulator measured initial reaction time (IRT), throttle release time (TRT), foot movement time (FMT), and brake travel time (BTT). Braking reaction time (BRT) was calculated as the sum IRT + TRT + FMT, and total braking time (TBT) as the sum BRT + BTT. The experimental group showed no significant changes in BTT (P = 0.11, effect size = 0.04) or TBT (P = 0.20, effect size = 0.03) over the 8 weeks. Although the experimental group did exhibit significant improvements in IRT (P = 0.002), TRT (P < 0.0001), FMT (P < 0.0001), and BRT (P = 0.0002) between the preoperative and 2-week postoperative driving sessions, there were no significant changes thereafter. Mean TBT for the experimental group was 3.07 seconds (SD = 0.50) preoperatively and 2.97 seconds (SD = 0.57) at 2 weeks postoperatively. No learning phenomenon was observed in the control group. These findings suggest that patients may return to driving 2 weeks after a right-sided hip arthroscopy.
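The additive timing decomposition used in the study (BRT = IRT + TRT + FMT; TBT = BRT + BTT) can be sketched in a few lines. The component values below are hypothetical, not data from the study.

```python
def braking_times(irt, trt, fmt, btt):
    """Combine the four measured components (seconds) into BRT and TBT."""
    brt = irt + trt + fmt  # braking reaction time
    tbt = brt + btt        # total braking time
    return brt, tbt

# Hypothetical component times for a single braking event
brt, tbt = braking_times(irt=0.9, trt=0.7, fmt=0.5, btt=0.4)
```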
Stochastic simulation of enzyme-catalyzed reactions with disparate timescales.
Barik, Debashis; Paul, Mark R; Baumann, William T; Cao, Yang; Tyson, John J
2008-10-01
Many physiological characteristics of living cells are regulated by protein interaction networks. Because the total numbers of these protein species can be small, molecular noise can have significant effects on the dynamical properties of a regulatory network. Computing these stochastic effects is made difficult by the large timescale separations typical of protein interactions (e.g., complex formation may occur in fractions of a second, whereas catalytic conversions may take minutes). Exact stochastic simulation may be very inefficient under these circumstances, and methods for speeding up the simulation without sacrificing accuracy have been widely studied. We show that the "total quasi-steady-state approximation" for enzyme-catalyzed reactions provides a useful framework for efficient and accurate stochastic simulations. The method is applied to three examples: a simple enzyme-catalyzed reaction where enzyme and substrate have comparable abundances, a Goldbeter-Koshland switch, where a kinase and phosphatase regulate the phosphorylation state of a common substrate, and coupled Goldbeter-Koshland switches that exhibit bistability. Simulations based on the total quasi-steady-state approximation accurately capture the steady-state probability distributions of all components of these reaction networks. In many respects, the approximation also faithfully reproduces time-dependent aspects of the fluctuations. The method is accurate even under conditions of poor timescale separation.
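To illustrate the flavor of the approach (this is a generic quasi-steady-state sketch, not the authors' tQSSA implementation), a Gillespie-type simulation can use a Michaelis-Menten propensity for the reduced reaction S → P. The rate constants and molecule counts are hypothetical.

```python
import random

def gillespie_mm(s0, e_tot, kcat, km, t_end, seed=1):
    """Stochastic simulation of S -> P with a QSSA (Michaelis-Menten) propensity."""
    random.seed(seed)
    t, s, p = 0.0, s0, 0
    while s > 0 and t < t_end:
        a = kcat * e_tot * s / (km + s)  # propensity of the next conversion event
        t += random.expovariate(a)       # exponentially distributed waiting time
        s -= 1
        p += 1
    return t, s, p

t_final, s_final, p_final = gillespie_mm(s0=100, e_tot=10, kcat=1.0, km=50.0, t_end=1e6)
```

Each iteration draws the waiting time to the next single-molecule conversion from an exponential distribution whose rate is the reduced propensity, which is where the timescale separation is exploited.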
NASA Technical Reports Server (NTRS)
Fleming, Eric L.; Jackman, Charles H.; Considine, David B.
1999-01-01
We have adopted the transport scenarios used in Part 1 to examine the sensitivity of stratospheric aircraft perturbations to transport changes in our 2-D model. Changes to the strength of the residual circulation in the upper troposphere and stratosphere and changes to the lower stratospheric K_zz had similar effects, in that increasing the transport rates decreased the overall stratospheric residence time and reduced the magnitude of the negative perturbation response in total ozone. Increasing the stratospheric K_yy increased the residence time and enhanced the global-scale negative total ozone response. However, increasing K_yy along with self-consistent increases in the corresponding planetary wave drive, which leads to a stronger residual circulation, more than compensates for the K_yy effect, and results in a significantly weaker perturbation response, relative to the base case, throughout the stratosphere. We found a relatively minor model perturbation response sensitivity to the magnitude of K_yy in the tropical stratosphere, and only a very small sensitivity to the magnitude of the horizontal mixing across the tropopause and to the strength of the mesospheric gravity wave drag and diffusion. These transport simulations also revealed a generally strong correlation between passive NO_y accumulation and age of air throughout the stratosphere, such that faster transport rates resulted in a younger mean age and a smaller NO_y mass accumulation. However, specific variations in K_yy and mesospheric gravity wave strength exhibited very little NO_y-age correlation in the lower stratosphere, similar to 3-D model simulations performed in the recent NASA "Models and Measurements" II analysis.
The base model transport, which gives the most favorable overall comparison with inert tracer observations, simulated a global/annual mean total ozone response of -0.59%, with only a slightly larger response in the northern hemisphere than in the southern. For transport scenarios whose tracer simulations were in reasonable agreement with measurements, the annual/globally averaged total ozone response ranged from -0.45% to -0.70%. Our previous 1995 model exhibited overly fast transport rates, resulting in a global/annually averaged perturbation total ozone response of -0.25%, significantly weaker than that of the 1999 model. This illustrates how transport deficiencies can bias model simulations of stratospheric aircraft perturbations.
Urban storm-runoff modelling; Madison, Wisconsin
Grant, R. Stephen; Goddard, Gerald
1979-01-01
A brief, inconclusive evaluation of the water-quality subroutines of the model was made. Close agreement was noted between observed and simulated loads for nitrates, organic nitrogen, total phosphate, and total solids. Ammonia nitrogen and orthophosphate loads computed by the model ranged from 7 to 11 times greater than the observed loads. The observed loads are doubtful because of the sparsity of water-quality data.
Chan, Hao Yang; Walker, Peter S
2018-05-18
The design of a total knee replacement implant needs to take into account the complex surfaces of the knee it is replacing. Ensuring the design performance of the implant requires in vitro testing. A considerable amount of time is required to produce components and evaluate them in an experimental setting; making numerous adjustments to an implant design and testing each individual design can be time consuming and expensive. Our solution is to use the OpenSim simulation software to rapidly test multiple design configurations of implants. This study modeled a testing rig which characterized the motion and laxity of knee implants. Three different knee implant designs were used to test and validate the accuracy of the simulation: symmetric, asymmetric, and anatomic. Kinematics were described as distances measured from the center of each femoral condyle to a plane intersecting the most posterior points of the tibial condyles, between 0 and 135° of flexion in 15° increments. Excluding the initial flexion measurement (∼0°), the absolute differences between all experimental and simulation results (neutral path, anterior-posterior shear, internal-external torque) for the symmetric, asymmetric, and anatomic designs were 1.98 mm ± 1.15, 1.17 mm ± 0.89, and 1.24 mm ± 0.97, respectively. Considering all designs, the accuracy of the simulation across all tests was 1.46 mm ± 1.07. It was concluded that the results of the simulation were an acceptable representation of the testing rig and hence applicable as a design tool for new total knee replacements. Copyright © 2018 Elsevier Ltd. All rights reserved.
Surface Roughness of Composite Resins after Simulated Toothbrushing with Different Dentifrices
Monteiro, Bruna; Spohr, Ana Maria
2015-01-01
Background: The aim of the study was to evaluate, in vitro, the surface roughness of two composite resins submitted to simulated toothbrushing with three different dentifrices. Materials and Methods: In total, 36 samples of Z350 XT and 36 samples of Empress Direct were built and randomly divided into three groups (n = 12) according to the dentifrice used (Oral-B Pro-Health Whitening [OBW], Colgate Sensitive Pro-Relief [CS], Colgate Total Clean Mint 12 [CT12]). The samples were submitted to 5,000, 10,000 or 20,000 cycles of simulated toothbrushing. After each simulated period, the surface roughness of the samples was measured using a roughness tester. Results: According to three-way analysis of variance, dentifrice (P = 0.044) and brushing time (P < 0.001) were significant. The composite resin was not significant (P = 0.381), and the interaction among the factors was not significant (P > 0.05). Mean surface roughness values (µm) followed by the same letter indicate no statistical difference by Tukey's post-hoc test (P < 0.05). Dentifrice: CT12 = 0.269a; CS = 0.300ab; OBW = 0.390b. Brushing time: baseline = 0.046a; 5,000 cycles = 0.297b; 10,000 cycles = 0.354b; 20,000 cycles = 0.584c. Conclusion: Z350 XT and Empress Direct presented similar surface roughness after all cycles of simulated toothbrushing. The longer the brushing time, the higher the surface roughness of the composite resins. The dentifrice OBW caused the highest surface roughness in both composite resins. PMID:26229362
LaLanne, Christine L; Cannady, Michael S; Moon, Joseph F; Taylor, Danica L; Nessler, Jeff A; Crocker, George H; Newcomer, Sean C
2017-04-01
Participation in surfing has evolved to include all age groups. Therefore, the purpose of this study was to determine whether activity levels and cardiovascular responses to surfing change with age. Surfing time and heart rate (HR) were measured for the total surfing session and within each activity of surfing (paddling, sitting, wave riding, and miscellaneous). Peak oxygen consumption (VO2peak) was also measured during laboratory-based simulated surfboard paddling on a modified swim bench ergometer. VO2peak decreased with age during simulated paddling (r = -.455, p < .001, n = 68). Total time surfing (p = .837) and time spent within each activity of surfing did not differ with age (n = 160). Mean HR during surfing decreased significantly with age (r = -.231, p = .004). However, surfing HR expressed as a percentage of age-predicted maximum increased significantly with age. Therefore, recreational surfers across the age spectrum are achieving intensities and durations that are consistent with guidelines for cardiovascular health.
Springback Compensation Process for High Strength Steel Automotive Parts
NASA Astrophysics Data System (ADS)
Onhon, M. Fatih
2016-08-01
This paper describes an advanced stamping simulation methodology used in the automotive industry to shorten total die manufacturing time in a new vehicle project by exploiting leading-edge virtual try-out technology.
Larsen, Christian Rifbjerg; Oestergaard, Jeanett; Ottesen, Bent S; Soerensen, Jette Led
2012-09-01
Virtual reality (VR) simulators for surgical training might possess the properties needed for basic training in laparoscopy. Evidence for the training efficacy of VR has been investigated by research of varying quality over the past decade. Our objective was to review randomized controlled trials of VR training efficacy compared with traditional or no training, with outcome measured as surgical performance in humans or animals. In June 2011, Medline, Embase, the Cochrane Central Register of Controlled Trials, Web of Science and Google Scholar were searched using the medical subject heading (MeSH) terms Laparoscopy/standards, Computing methodologies, Programmed instruction and Surgical procedures, operative, and the free-text terms virtual real* OR simulat* AND laparoscop* OR train*. All randomized controlled trials investigating the effect of VR training in laparoscopy, with outcome measured as surgical performance, were included. A total of 98 studies were screened, 26 selected and 12 included, with a total of 241 participants. Operation time was reduced by 17-50% by VR training, depending on simulator type and training principles. Proficiency-based training appeared superior to training based on a fixed time or a fixed number of repetitions. Simulators offering training for complete operative procedures proved more efficient than simulators offering only basic skills training. Skills in laparoscopic surgery can be increased by proficiency-based procedural VR simulator training. There is substantial evidence (grade IA-IIB) to support the use of VR simulators in laparoscopic training. © 2012 The Authors. Acta Obstetricia et Gynecologica Scandinavica © 2012 Nordic Federation of Societies of Obstetrics and Gynecology.
van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.
2018-01-01
The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption.
The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620
NASA Astrophysics Data System (ADS)
Lamdjaya, T.; Jobiliong, E.
2017-01-01
PT Anugrah Citra Boga is a food processing company that produces meatballs as its main product. The distribution of the products must be considered, because it needs to be more efficient in order to reduce shipping cost. The purpose of this research is to optimize the distribution time by simulating the distribution channels with the capacitated vehicle routing problem method. First, the distribution route is observed in order to calculate the average speed, time capacity and shipping costs. The model is then built using the AIMMS software; simulating it requires customer locations, distances, and process times. Finally, the total distribution cost obtained by the simulation is compared with the historical data. It is concluded that the company can reduce the shipping cost by around 4.1%, or Rp 529,800 per month. With this model, vehicle utilization can also be made more optimal: the first vehicle's utilization rate falls from 104.6% to 88.6%, while the second vehicle's rises from 59.8% to 74.1%. The simulation model is able to produce the optimal shipping route under time restrictions, vehicle capacity, and the number of available vehicles.
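As a toy illustration of the capacitated vehicle routing problem underlying such a model (the study itself solves an exact formulation in AIMMS; the coordinates and demands below are hypothetical), a greedy nearest-neighbour heuristic builds capacity-feasible routes from the depot:

```python
import math

def nearest_neighbour_routes(depot, customers, demands, capacity):
    """Greedy CVRP heuristic: each vehicle repeatedly visits the nearest
    customer whose demand still fits. Assumes every single demand fits
    within one vehicle's capacity."""
    unvisited = set(customers)
    routes = []
    while unvisited:
        route, load, pos = [], 0, depot
        while True:
            feasible = [c for c in unvisited if load + demands[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: math.dist(pos, c))
            route.append(nxt)
            load += demands[nxt]
            unvisited.remove(nxt)
            pos = nxt
        routes.append(route)
    return routes

depot = (0, 0)
customers = [(1, 2), (2, 1), (5, 5), (6, 4)]
demands = {(1, 2): 3, (2, 1): 4, (5, 5): 5, (6, 4): 2}
routes = nearest_neighbour_routes(depot, customers, demands, capacity=7)
```

A heuristic like this gives a feasible starting solution; an exact solver such as the one in AIMMS then searches for the cost-optimal routing.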
NASA Astrophysics Data System (ADS)
Michael, Scott; Steiman-Cameron, Thomas Y.; Durisen, Richard H.; Boley, Aaron C.
2012-02-01
We conduct a convergence study of a protostellar disk, subject to a constant global cooling time and susceptible to gravitational instabilities (GIs), at a time when heating and cooling are roughly balanced. Our goal is to determine the gravitational torques produced by GIs, the level to which transport can be represented by a simple α-disk formulation, and to examine fragmentation criteria. Four simulations are conducted, identical except for the number of azimuthal computational grid points used. A Fourier decomposition of non-axisymmetric density structures in cos(mφ) and sin(mφ) is performed to evaluate the amplitudes A_m of these structures. The A_m, gravitational torques, and the effective Shakura & Sunyaev α arising from gravitational stresses are determined for each resolution. We find nonzero A_m for all m-values and that A_m summed over all m is essentially independent of resolution. Because the number of measurable m-values is limited to half the number of azimuthal grid points, higher-resolution simulations have a larger fraction of their total amplitude in higher-order structures. These structures act more locally than lower-order structures. Therefore, as the resolution increases the total gravitational stress decreases as well, leading higher-resolution simulations to experience weaker average gravitational torques than lower-resolution simulations. The effective α also depends upon the magnitude of the stresses, thus α_eff also decreases with increasing resolution. Our converged α_eff is consistent with predictions from an analytic local theory for thin disks by Gammie, but only over many dynamic times when averaged over a substantial volume of the disk.
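The azimuthal decomposition described above can be sketched with a discrete Fourier transform: sample the density on a uniform grid in φ and read off the amplitude of each m-mode. The test signal below is synthetic, not simulation output.

```python
import numpy as np

def fourier_amplitudes(density):
    """Amplitudes A_m (m >= 1) of the cos(m*phi), sin(m*phi) structure in a
    signal sampled uniformly over phi in [0, 2*pi)."""
    n = len(density)
    coeffs = np.fft.rfft(density) / n  # one-sided DFT, normalized by sample count
    return 2.0 * np.abs(coeffs[1:])    # factor 2 recovers each mode's amplitude

phi = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
density = 1.0 + 0.3 * np.cos(2 * phi) + 0.1 * np.sin(5 * phi)
A = fourier_amplitudes(density)  # A[1] is the m=2 amplitude, A[4] the m=5 one
```

With 256 azimuthal samples, only modes up to m = 128 are measurable, which is exactly the resolution limitation the convergence study discusses.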
Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam
2012-01-01
OBJECTIVE: To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. DESIGN: Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. SETTING: Paediatric residency program at BC Children’s Hospital, Vancouver, British Columbia. INTERVENTIONS: The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. RESULTS: A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. CONCLUSIONS: A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes. PMID:23372405
NASA Technical Reports Server (NTRS)
Waller, Jess M.; Williams, James H.; Fries, Joseph (Technical Monitor)
1999-01-01
The permeation resistance of chlorinated polyethylene (CPE) used in totally encapsulating chemical protective suits against the aerospace fuels hydrazine, monomethylhydrazine, and uns-dimethylhydrazine was determined by measuring the breakthrough time (BT) and time-averaged vapor transmission rate (VTR) using procedures consistent with ASTM F 739 and ASTM F 1383. Two exposure scenarios were simulated: a 2 hour (h) fuel vapor exposure, and a liquid fuel "splash" followed by a 2 h vapor exposure. To simulate internal suit pressure during operation, a positive differential pressure of 0.3 in. water (75 Pa) on the collection side of the permeation apparatus was used. Using the available data, a model was developed to estimate propellant concentrations inside an air-line fed, totally encapsulating chemical protective suit. Concentrations were calculated under simulated conditions of fixed vapor transmission rate, variable breathing air flow rate, and variable splash exposure area. Calculations showed that the maximum allowable permeation rates of hydrazine fuels through CPE were of the order of 0.05 to 0.08 ng/sq cm min for encapsulating suits with low breathing air flow rates (of the order of 5 scfm or 140 L/min). Above these permeation rates, the 10 parts-per-billion (ppb) threshold limit value time-weighted average could be exceeded. To evaluate suit performance at concentrations near the 10 ppb threshold limit value/time-weighted average, use of a sensitive analytical method such as cation exchange high performance liquid chromatography with amperometric detection was found to be essential. The analytical detection limit determines the lowest measurable VTR, which in turn governed the lowest permeant concentration that could be calculated inside the totally encapsulating chemical protective suit.
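The mass balance behind such estimates can be sketched as follows: at steady state, the in-suit vapor concentration is the permeated vapor flow divided by the breathing-air flow. The permeation rate and air flow below echo the figures quoted above, but the exposed suit area is a hypothetical value, so this is an order-of-magnitude sketch rather than the paper's model.

```python
def suit_concentration_ppb(vtr_ng_cm2_min, area_cm2, airflow_l_min,
                           molar_mass_g_mol=32.05, molar_volume_l=24.45):
    """Steady-state in-suit vapor concentration in ppb (by volume).

    molar_mass defaults to hydrazine; molar volume is for ~25 C, 1 atm."""
    mass_flow_g_min = vtr_ng_cm2_min * area_cm2 * 1e-9  # ng -> g
    mol_min = mass_flow_g_min / molar_mass_g_mol        # moles permeated per minute
    vapor_l_min = mol_min * molar_volume_l              # vapor volume per minute
    return vapor_l_min / airflow_l_min * 1e9            # volume fraction -> ppb

# ~0.05 ng/cm^2/min over a hypothetical 2 m^2 exposed area, 140 L/min air feed
c = suit_concentration_ppb(vtr_ng_cm2_min=0.05, area_cm2=20000, airflow_l_min=140)
```

Under these assumed inputs the concentration lands in the single-digit ppb range, consistent with the paper's conclusion that permeation rates above ~0.05-0.08 ng/sq cm min risk exceeding the 10 ppb limit.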
Neurosurgery simulation using non-linear finite element modeling and haptic interaction
NASA Astrophysics Data System (ADS)
Lee, Huai-Ping; Audette, Michel; Joldes, Grand R.; Enquobahrie, Andinet
2012-02-01
Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems, and coarse volumetric meshes. However, these systems are not clinically realistic. We present here an ongoing work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element operations. We employ a virtual coupling method for separating deformable body simulation and collision detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation. The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the material properties of the tissue and the speed of colliding objects. Hence, additional efforts, including dynamic relaxation, are required to improve the stability of the system.
Dynamically accumulated dose and 4D accumulated dose for moving tumors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Heng; Li Yupeng; Zhang Xiaodong
2012-12-15
Purpose: The purpose of this work was to investigate the relationship between dynamically accumulated dose (dynamic dose) and 4D accumulated dose (4D dose) for irradiation of moving tumors, and to quantify the dose uncertainty induced by tumor motion. Methods: The authors established that, regardless of treatment modality and delivery properties, the dynamic dose converges to the 4D dose, rather than the 3D static dose, after multiple deliveries. The bounds of the dynamic dose, i.e., the maximum estimation error when using the 4D or static dose, were established for the 4D and static doses, respectively. Numerical simulations were performed (1) to prove the principle that, for each phase, after multiple deliveries the average number of deliveries for any given time converges to the total number of fractions (K) over the number of phases (N); (2) to investigate the dose difference between the 4D and dynamic doses as a function of the number of deliveries for a "pulsed beam"; and (3) to investigate the dose difference between the 4D and dynamic doses as a function of delivery time for a "continuous beam." A Poisson model was developed to estimate the mean dose error as a function of the number of deliveries or the delivery time for both pulsed and continuous beams. Results: The numerical simulations confirmed that the number of deliveries for each phase converges to K/N, assuming a random starting phase. Simulations for the pulsed and continuous beams also suggested that the dose error is a strong function of the number of deliveries and/or total delivery time, and can depend on the breathing cycle, depending on the mode of delivery. The Poisson model agrees well with the simulations. Conclusions: The dynamically accumulated dose converges to the 4D accumulated dose after multiple deliveries, regardless of treatment modality.
Bounds of the dynamic dose can be determined using quantities derived from the 4D dose, and the mean dose difference between the dynamic dose and the 4D dose was also established as a function of the number of deliveries and/or total delivery time.
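The K/N convergence claim can be checked with a small Monte Carlo sketch: with a uniformly random starting phase for each fraction, delivery counts spread evenly over the breathing phases. The values of K and N below are illustrative, not from the paper.

```python
import random

def deliveries_per_phase(k_fractions, n_phases, seed=0):
    """Count how many of K deliveries start on each of N breathing phases,
    assuming a uniformly random starting phase per fraction."""
    random.seed(seed)
    counts = [0] * n_phases
    for _ in range(k_fractions):
        counts[random.randrange(n_phases)] += 1  # random starting phase
    return counts

counts = deliveries_per_phase(k_fractions=10000, n_phases=10)
# each phase should receive close to K/N = 1000 deliveries
```

For large K the per-phase counts concentrate around K/N with binomial fluctuations, which is the regime in which the dynamic dose approaches the 4D dose.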
Interpreting space-based trends in carbon monoxide with multiple models
Strode, Sarah A.; Worden, Helen M.; Damon, Megan; ...
2016-06-10
We use a series of chemical transport model and chemistry climate model simulations to investigate the observed negative trends in MOPITT CO over several regions of the world, and to examine the consistency of time-dependent emission inventories with observations. We find that simulations driven by the MACCity inventory, used for the Chemistry Climate Modeling Initiative (CCMI), reproduce the negative trends in the CO column observed by MOPITT for 2000–2010 over the eastern United States and Europe. However, the simulations have positive trends over eastern China, in contrast to the negative trends observed by MOPITT. The model bias in CO, after applying MOPITT averaging kernels, contributes to the model–observation discrepancy in the trend over eastern China. This demonstrates that biases in a model's average concentrations can influence the interpretation of the temporal trend compared to satellite observations. The total ozone column plays a role in determining the simulated tropospheric CO trends: a large positive anomaly in the simulated total ozone column in 2010 leads to a negative anomaly in OH and hence a positive anomaly in CO, contributing to the positive trend in simulated CO. Our results demonstrate that accurately simulating variability in the ozone column is important for simulating and interpreting trends in CO.
CYCLIC THERMAL SIGNATURE IN A GLOBAL MHD SIMULATION OF SOLAR CONVECTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cossette, Jean-Francois; Charbonneau, Paul; Smolarkiewicz, Piotr K.
Global magnetohydrodynamical simulations of the solar convection zone have recently achieved cyclic large-scale axisymmetric magnetic fields undergoing polarity reversals on a decadal time scale. In this Letter, we show that these simulations also display a thermal convective luminosity that varies in-phase with the magnetic cycle, and trace this modulation to deep-seated magnetically mediated changes in convective flow patterns. Within the context of the ongoing debate on the physical origin of the observed 11 yr variations in total solar irradiance, such a signature supports the thesis according to which all, or part, of the variations on decadal time scales and longer could be attributed to a global modulation of the Sun's internal thermal structure by magnetic activity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, S.A.; Robinson, G.E.; Conner, J.K.
Two species of mustard, Brassica nigra and B. rapa, were grown under simulated ambient and enhanced ultraviolet-B (UV-B) radiation and exposed to pollinators, Apis mellifera L. Observations were made to determine whether UV-B-induced changes in these plants affected pollinator behavior. Total duration of the foraging trip, number of flowers visited, foraging time per flower, search time per flower, total amount of pollen collected, and pollen collected per flower were measured. There were no significant differences between UV-B treatments in any of the behaviors measured or in any of the pollen measurements. These results suggest that increases in the amount of solar UV-B reaching the earth's surface may not have a negative effect on the relationship between these members of the genus Brassica and their honey bee pollinators.
Hackethal, A; Immenroth, M; Bürger, T
2006-04-01
The Minimally Invasive Surgical Trainer-Virtual Reality (MIST-VR) simulator is validated for laparoscopy training, but benchmarks and target scores for assessing single tasks are needed. Control data for the MIST-VR traversal task scenario were collected from 61 novices who performed the task 10 times over 3 days (1 h daily). Data were collected on the time taken, error score, economy of movement, and total score. Test differences were analyzed through percentage scores and t-tests for paired samples. Improvement was greatest over tests 1 to 5 (improvement: test(1.2), 38.07%, p < 0.001; test(4.5), 10.66%, p = 0.010); between tests 5 and 10, improvement slowed and scores stabilized. Variation in participants' performance fell steadily over the 10 tests. Trainees should perform at least 10 tests of the traversal task: five to get used to the equipment and task (automation phase; target total score, 95.16) and five to stabilize and consolidate performance (test 10 target total score, 74.11).
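The paired-samples comparison of repeated test scores described above can be sketched as follows. This is a minimal illustration of the analysis method, not the study's code, and the five trainee scores are hypothetical.

```python
import math

def paired_t(before, after):
    """Paired t statistic and mean percent improvement between two repetitions.

    Lower MIST-VR total scores are better, so improvement is the relative
    drop from 'before' to 'after'. Compare t against t(n-1) quantiles."""
    n = len(before)
    diffs = [b - a for b, a in zip(before, after)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean_d / math.sqrt(var_d / n)
    improvement = 100.0 * mean_d / (sum(before) / n)
    return t, improvement

# Hypothetical total scores for five trainees on test 1 and test 5
t, imp = paired_t([160, 150, 170, 155, 165], [100, 95, 110, 98, 105])
print(round(imp, 1), round(t, 1))  # -> 36.5 56.7
```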
Version 4.0 of code Java for 3D simulation of the CCA model
NASA Astrophysics Data System (ADS)
Fan, Linyu; Liao, Jianwei; Zuo, Junsen; Zhang, Kebo; Li, Chao; Xiong, Hailing
2018-07-01
This paper presents a new version of the Java code for three-dimensional simulation of the Cluster-Cluster Aggregation (CCA) model, replacing the previous version. Many redundant traversals of the cluster list in the program have been eliminated, significantly reducing simulation time. To show the aggregation process more intuitively, different clusters are now labeled with distinct colors. In addition, a new function outputs the particle coordinates of the aggregates to a file, making it easier to couple our model with other models.
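The CCA rules the code implements can be illustrated with a minimal sketch: single-particle clusters diffuse on a periodic 3D lattice, and clusters that touch after a move are merged. This is a toy illustration under assumed rules (random unit steps, move rejection on overlap), not the published Java code.

```python
import random

MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def touches(a, b, size):
    """True if any site of cluster a is a lattice neighbour of a site of b."""
    return any(((x + dx) % size, (y + dy) % size, (z + dz) % size) in b
               for x, y, z in a for dx, dy, dz in MOVES)

def cca(n_particles, size, rng, max_steps=200000):
    """Minimal 3D diffusion-limited cluster-cluster aggregation on a periodic
    size^3 lattice. Each step a random cluster takes a random unit step;
    overlapping moves are rejected, touching clusters are merged."""
    clusters, occupied = [], set()
    while len(clusters) < n_particles:          # seed single-particle clusters
        site = tuple(rng.randrange(size) for _ in range(3))
        if site not in occupied:
            occupied.add(site)
            clusters.append(frozenset([site]))
    for _ in range(max_steps):
        if len(clusters) == 1:                  # fully aggregated
            break
        i = rng.randrange(len(clusters))
        dx, dy, dz = rng.choice(MOVES)
        moved = frozenset(((x + dx) % size, (y + dy) % size, (z + dz) % size)
                          for x, y, z in clusters[i])
        rest = clusters[:i] + clusters[i + 1:]
        if any(moved & c for c in rest):        # reject moves onto occupied sites
            continue
        merged, kept = moved, []
        for c in rest:                          # merge every cluster we now touch
            if touches(merged, c, size):
                merged |= c
            else:
                kept.append(c)
        clusters = kept + [merged]
    return clusters

rng = random.Random(42)
clusters = cca(6, 4, rng)
```

Keeping clusters as frozensets of sites makes the overlap and adjacency tests single set operations, which is the kind of bookkeeping the redundant-traversal removal in the paper targets.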
VARTM Process Modeling of Aerospace Composite Structures
NASA Technical Reports Server (NTRS)
Song, Xiao-Lan; Grimsley, Brian W.; Hubert, Pascal; Cano, Roberto J.; Loos, Alfred C.
2003-01-01
A three-dimensional model was developed to simulate the VARTM composite manufacturing process. The model considers the two important mechanisms that occur during the process: resin flow, and compaction and relaxation of the preform. The model was used to simulate infiltration of a carbon preform with an epoxy resin by the VARTM process. The flow patterns and preform thickness changes predicted by the model agreed qualitatively with the measured values. However, the predicted total infiltration times were much longer than measured, most likely because of inaccurate preform permeability values used in the simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shekar, Venkateswaran; Fiondella, Lance; Chatterjee, Samrat
Several transportation network vulnerability models have been proposed. However, most consider disruptions only as a static snapshot in time and measure impact only as total travel time. These approaches can capture neither the time-varying nature of travel demand nor other undesirable outcomes that follow from transportation network disruptions. This paper proposes an algorithmic approach to assess the vulnerability of a transportation network that considers time-varying demand with an open source dynamic transportation simulation tool. The open source nature of the tool allows us to systematically consider many disruption scenarios and quantitatively compare their relative criticality. This is far more efficient than traditional approaches, which would require days or weeks of a transportation engineer's time to manually set up, run, and assess these simulations. In addition to travel time, we also collect statistics on additional fuel consumed and the corresponding carbon dioxide emissions. Our approach thus provides a more systematic assessment that is both time-varying and can account for additional negative consequences of disruptions for decision makers to evaluate.
Microsurgical Performance After Sleep Interruption: A NeuroTouch Simulator Study.
Micko, Alexander; Knopp, Karoline; Knosp, Engelbert; Wolfsberger, Stefan
2017-10-01
In times of the ubiquitous debate about doctors' working hour restrictions, it remains questionable whether physicians' performance is impaired by high workload and long shifts. In this study, we evaluated the impact of sleep interruption on neurosurgical performance. Ten medical students and 10 neurosurgical residents were tested on the virtual-reality simulator NeuroTouch by performing an identical microsurgical task when well rested (baseline test) and after sleep interruption at night (stress test). Deviation of total score, timing, and excessive force on tissue were evaluated. In addition, vital parameters and self-assessment were analyzed. After sleep interruption, total performance score increased significantly (45.1 vs. 48.7, baseline vs. stress test, P = 0.048) while timing remained stable (10.1 vs. 10.4 minutes for baseline vs. stress test, P > 0.05) for both students and residents. Excessive force decreased in both groups during the stress test for the nondominant hand (P = 0.05). For the dominant hand, an increase of excessive force was encountered in the group of residents (P = 0.05). In contrast to these objective results, participants of both groups assessed their own performance as worse during the stress test. In our study, we found an increase of neurosurgical simulator performance in neurosurgical residents and medical students under simulated night shift conditions. Furthermore, microsurgical dexterity remained unchanged. Based on our results and the data in the available literature, we cannot confirm that working hour restrictions will have a positive effect on neurosurgical performance. Copyright © 2017 Elsevier Inc. All rights reserved.
Strøm-Tejsen, P; Zukowska, D; Fang, L; Space, D R; Wyon, D P
2008-06-01
Experiments were carried out in a three-row, 21-seat section of a simulated aircraft cabin installed in a climate chamber to evaluate the extent to which passengers' perception of cabin air quality is affected by the operation of a gas-phase adsorption (GPA) purification unit. A total of 68 subjects, divided into four groups of 17 subjects, took part in simulated 11-h flights. Each group experienced four conditions in balanced order, defined by two outside air supply rates (2.4 and 3.3 l/s per person), with and without the GPA purification unit installed in the recirculated air system, for a total of 2992 subject-hours of exposure. During each flight the subjects completed questionnaires five times to provide subjective assessments of air quality, cabin environment, intensity of symptoms, and thermal comfort. Additionally, the subjects' visual acuity, finger temperature, skin dryness, and nasal peak flow were measured three times during each flight. Analysis of the subjective assessments showed that operating a GPA unit in the recirculated air provided consistent advantages with no apparent disadvantages. Operating a gas-phase adsorption (GPA) air purifier unit in the recirculated air in a simulated airplane cabin provided a clear and consistent advantage for passengers and crew that became increasingly apparent at longer flight times. This finding indicates that the expense of undertaking duly blinded field trials on revenue flights would be justified.
Gray: a ray tracing-based Monte Carlo simulator for PET.
Freese, David L; Olcott, Peter D; Buss, Samuel R; Levin, Craig S
2018-05-21
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a [Formula: see text] speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within [Formula: see text]% when accounting for differences in peak NECR. We also estimate the peak NECR to be [Formula: see text] kcps, or within [Formula: see text]% of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
Permeation Resistance of Personal Protective Equipment Materials to Monomethylhydrazine
NASA Technical Reports Server (NTRS)
Waller, J. M.; Williams, J. H.
1997-01-01
Permeation resistance was determined by measuring the breakthrough time and time-averaged vapor transmission rate of monomethylhydrazine (MMH) through two types of personal protective equipment (PPE). The two types of PPE evaluated were the totally encapsulating ILC Dover Chemturion Model 1212 chemical protective suit with accessories, and the FabOhio polyvinyl chloride (PVC) splash garment. Two exposure scenarios were simulated: (1) a saturated vapor exposure for 2 hours (h), and (2) a brief MMH 'splash' followed by a 2-h saturated vapor exposure. Time-averaged MMH concentrations inside the totally-encapsulating suit were calculated by summation of the area-weighted contributions made by each suit component. Results show that the totally encapsulating suit provides adequate protection at the new 10 ppb Threshold Limit Value Time-Weighted Average (TLV-TWA). The permeation resistance of the PVC splash garment to MMH was poorer than any of the totally encapsulating suit materials tested. Breakthrough occurred soon after initial vapor or 'splash' exposure.
NASA Astrophysics Data System (ADS)
Alhajjar, Bashar J.; Linn Gould, C.; Chesters, Gordon; Harkin, John M.
1990-12-01
The effects of phosphate (P)- and zeolite (Z)-built detergents on leaching of N and P through sand columns simulating septic system drainfields were examined in laboratory columns. To simulate mound septic system drainfields, paired sets of columns were dosed intermittently with septic tank effluent from households using P- or Z-built detergent. Two other paired sets of columns were flooded with P- or Z-effluent to simulate new conventional septic system drainfields; after clogging mats or "crusts" developed at the infiltration surface, the subsurfaces of the columns were aerated to simulate mature (crusted) conventional septic system drainfields. NO3 loading in leachate was 1.1 times higher and ortho-P loading was 4.3 times lower when columns were dosed with Z- than with P-effluent. Dosed columns removed P poorly; total phosphorus (TP) loading in leachate was 81 and 19 g m^-2 yr^-1 with P- and Z-effluent, respectively. In flooded columns 1.3, 2.0 and 1.8 times more NH4, organic nitrogen (ON) and total nitrogen (TN), respectively, were leached with Z- than with P-effluent; NO3 leaching was similar. Flooded columns removed P efficiently; TP leached through flooded systems was 2.5 and 1.4 g m^-2 yr^-1 with P- and Z-effluent, respectively. Crusted columns fed Z-effluent leached 1.2, 2.6, 1.4 and 2.1 times more NH4, NO3, ON and TN, respectively, than those with P-effluent, but 1.8 times less TP. Crusted columns removed P satisfactorily: 8.2 and 4.6 g m^-2 yr^-1 TP with P- and Z-effluent, respectively. The P-built detergent substantially improves the efficiency of N removal with satisfactory P removal in columns simulating conventional septic system drainfields. Simultaneous removal of N and P under flooded conditions might be explained by precipitation of struvite-type minerals. Dosed system drainfields were less efficient in removing N and P compared with flooded and crusted system drainfields.
NASA Technical Reports Server (NTRS)
Grantham, William D.; Smith, Paul M.; Person, Lee H., Jr.; Meyer, Robert T.; Tingas, Stephen A.
1987-01-01
A piloted simulation study was conducted to determine the permissible time delay in the flight control system of a 10-percent statically unstable transport airplane during cruise flight conditions. The math model used for the simulation was a derivative Lockheed L-1011 wide-body jet transport. Data were collected and analyzed from a total of 137 cruising flights in both calm- and turbulent-air conditions. Results of this piloted simulation study verify previous findings that show present military specifications for allowable control-system time delay may be too stringent when applied to transport-size airplanes. Also, the degree of handling-qualities degradation due to time delay is shown to be strongly dependent on the source of the time delay in an advanced flight control system. Maximum allowable time delay for each source of time delay in the control system, in addition to a less stringent overall maximum level of time delay, should be considered for large aircraft. Preliminary results also suggest that adverse effects of control-system time delay may be at least partially offset by variations in control gearing. It is recommended that the data base include different airplane baselines, control systems, and piloting tasks with many pilots participating, so that a reasonable set of limits for control-system time delay can be established to replace the military specification limits currently being used.
System and Method for Finite Element Simulation of Helicopter Turbulence
NASA Technical Reports Server (NTRS)
McFarland, R. E. (Inventor); Dulsenberg, Ken (Inventor)
1999-01-01
The present invention provides a turbulence model that has been developed for blade-element helicopter simulation. This model uses an innovative temporal and geometrical distribution algorithm that preserves the statistical characteristics of the turbulence spectra over the rotor disc, while providing velocity components in real time to each of five blade-element stations along each of four blades, for a total of twenty blade-element stations. The simulator system includes a software implementation of flight dynamics that adheres to the guidelines for turbulence set forth in military specifications. One of the features of the present simulator system is that it applies simulated turbulence to the rotor blades of the helicopter, rather than to its center of gravity. The simulator system accurately models the rotor penetration into a gust field. It includes time correlation between the front and rear of the main rotor, as well as between the side forces felt at the center of gravity and at the tail rotor. It also includes features for added realism, such as patchy turbulence and vertical gusts into which the rotor disc penetrates. These features are realized by a unique real-time implementation of the turbulence filters. The new simulator system uses two arrays, one on either side of the main rotor, to record the turbulence field and to produce time correlation from the front to the rear of the rotor disc. The use of Gaussian interpolation between the two arrays maintains the statistical properties of the turbulence across the rotor disc. The present simulator system and method may be used in future and existing real-time helicopter simulations with minimal increase in computational workload.
NASA Astrophysics Data System (ADS)
Ibrahim, Ireen Munira; Liong, Choong-Yeun; Bakar, Sakhinah Abu; Ahmad, Norazura; Najmuddin, Ahmad Farid
2017-04-01
The emergency department (ED) is the main unit of a hospital that provides emergency treatment. Operating 24 hours a day with a limited number of resources adds to the already chaotic situation in some hospitals in Malaysia. Delays in receiving treatment that cause patients to wait for long periods are among the most frequent complaints against government hospitals. The ED management therefore needs a model that can be used to examine and understand resource capacity, which can assist hospital managers in reducing patient waiting time. A simulation model was developed based on 24 hours of data collection. The model, developed using Arena simulation, replicates the actual ED operations of a public hospital in Selangor, Malaysia. The OptQuest optimization in Arena is used to find possible combinations of resource numbers that minimize patient waiting time while increasing the number of patients served. The simulation model was then modified for improvement based on the OptQuest results. The improved model significantly increases the ED's efficiency, with an average 32% reduction in patient waiting times and a 25% increase in the total number of patients served.
NASA Astrophysics Data System (ADS)
Karlsteen, M.; Willander, M.
1993-11-01
In this paper the total switch time of a transistor in a Direct Coupled Transistor Logic (DCTL) circuit is simulated by using Laplace transformations of the Ebers-Moll equations. The influence of doping gradients and germanium gradients in the base is investigated, and their relative importance and limitations are established. In a well-designed bipolar transistor, only a minor improvement of the total switch time is obtained with the use of a doping gradient in the base. However, for bipolar transistors with a base thickness over 500 Å, an improperly selected doping profile can be devastating for the total switch time. For a bipolar transistor, the improvement of the total switch time due to a linear germanium gradient in the base can be up to about 30% compared with an ordinary silicon bipolar transistor. Still, too high a germanium gradient forces the normal transistor current gain (α_N) to grow, and the total switch time is thereby increased. Further enhancement can be achieved by the use of a second-degree polynomial germanium profile in the base. Also in this case, care must be taken not to enlarge the germanium gradient too much, as the total switch time then starts to increase. In all cases, the reduction of the base transit time introduced by the electric field is not used directly to shorten the base transit time; instead, the improvement mostly serves to lower the emitter transition charging time. However, the most important parameter to control is the normal transistor current gain (α_N), which has to be kept within a narrow range to keep the total switch time low.
NASA Technical Reports Server (NTRS)
1979-01-01
The pilot's perception and performance in flight simulators is examined. The areas investigated include: vestibular stimulation, flight management and man cockpit information interfacing, and visual perception in flight simulation. The effects of higher levels of rotary acceleration on response time to constant acceleration, tracking performance, and thresholds for angular acceleration are examined. Areas of flight management examined are cockpit display of traffic information, work load, synthetic speech call outs during the landing phase of flight, perceptual factors in the use of a microwave landing system, automatic speech recognition, automation of aircraft operation, and total simulation of flight training.
Tolerance design of patient-specific range QA using the DMAIC framework in proton therapy.
Rah, Jeong-Eun; Shin, Dongho; Manger, Ryan P; Kim, Tae Hyun; Oh, Do Hoon; Kim, Dae Yong; Kim, Gwe-Ya
2018-02-01
The DMAIC (Define-Measure-Analyze-Improve-Control) framework can be used to customize patient-specific QA by designing site-specific range tolerances. DMAIC tools (process flow diagram, cause and effect, Pareto chart, control chart, and capability analysis) were utilized to determine the steps that need focus for improving the patient-specific QA. The patient-specific range QA plans were selected according to seven treatment site groups, a total of 1437 cases. The process capability index Cpm was used to guide the tolerance design of the patient site-specific range. For prostate fields, our results suggested that the patient range measurements were capable at the current tolerance level of ±1 mm in clinical proton plans. For the other site-specific ranges, our analysis showed that the tolerances tend to be overdesigned, resulting in insufficient process capability as calculated from the patient-specific QA data. Customized tolerances were therefore calculated for these treatment sites. Control charts were constructed to simulate the patient QA time before and after the new tolerances were implemented. The total simulated QA time decreased by approximately 20% on average after establishing the new site-specific range tolerances. We also simulated the financial impact of this project: QA failure across the whole process in proton therapy would lead to an approximately 30% increase in total cost. The DMAIC framework can be used to provide effective QA by setting customized tolerances. When tolerance design is customized, quality is reasonably balanced with time and cost demands. © 2017 American Association of Physicists in Medicine.
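The Taguchi capability index used to guide the tolerance design has a standard closed form, sketched below. The deviation data are hypothetical, not the paper's measurements; only the Cpm formula itself is standard.

```python
import math

def cpm(measurements, lsl, usl, target):
    """Taguchi process capability index:
    C_pm = (USL - LSL) / (6 * sqrt(s^2 + (xbar - T)^2)).
    A C_pm well above 1 indicates the tolerance band is wider than the
    process needs, i.e. the tolerance may be overdesigned."""
    n = len(measurements)
    xbar = sum(measurements) / n
    s2 = sum((x - xbar) ** 2 for x in measurements) / (n - 1)  # sample variance
    return (usl - lsl) / (6.0 * math.sqrt(s2 + (xbar - target) ** 2))

# Hypothetical range deviations in mm for one treatment site, checked
# against a clinical tolerance of +/-1 mm around a 0 mm target.
deviations = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2, -0.3, 0.1]
print(round(cpm(deviations, -1.0, 1.0, 0.0), 2))  # -> 1.64
```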
NASA Astrophysics Data System (ADS)
Baumketner, Andriy; Shea, Joan-Emma
2006-03-01
We report a replica-exchange molecular dynamics study of the 10-35 fragment of Alzheimer's disease amyloid β peptide, Aβ10-35, in aqueous solution. This fragment was previously seen [J. Str. Biol. 130 (2000) 130] to possess all the most important amyloidogenic properties characteristic of full-length Aβ peptides. Our simulations attempted to fold Aβ10-35 from first principles. The peptide was modeled using the all-atom OPLS/AA force field in conjunction with the TIP3P explicit solvent model. A total of 72 replicas were considered and simulated over 40 ns of total time, including 5 ns of initial equilibration. We find that Aβ10-35 does not possess any unique folded state, a 3D structure of predominant population, under normal temperature and pressure. Rather, this peptide exists as a mixture of collapsed globular states that remain in rapid dynamic equilibrium with each other. This conformational ensemble is seen to be dominated by random coil and bend structures with insignificant presence of α-helical or β-sheet structure. We find that, overall, the 3D structure of Aβ10-35 is shaped by salt bridges formed between oppositely charged residues. Of all possible salt bridges, K28-D23 was seen to have the highest formation probability, totaling more than 60% of the time.
NASA Technical Reports Server (NTRS)
Miller, G. K., Jr.; Riley, D. R.
1978-01-01
The effect of secondary tasks in determining permissible time delays in visual-motion simulation of a pursuit tracking task was examined. A single subject, a single set of aircraft handling qualities, and a single motion condition in tracking a target aircraft that oscillates sinusoidally in altitude were used. In addition to the basic simulator delays, the results indicate that the permissible time delay is about 250 msec for either a tapping task, an adding task, or an audio task, which is approximately 125 msec less than when no secondary task is involved. The magnitudes of the primary task performance measures, however, differ only for the tapping task. A power spectral-density analysis basically confirms the results obtained by comparing the root-mean-square performance measures. For all three secondary tasks, the total pilot workload was quite high.
NASA Technical Reports Server (NTRS)
Bodley, C. S.; Devers, A. D.; Park, A. C.; Frisch, H. P.
1978-01-01
A theoretical development and associated digital computer program system for the dynamic simulation and stability analysis of passive and actively controlled spacecraft are presented. The dynamic system (spacecraft) is modeled as an assembly of rigid and/or flexible bodies not necessarily in a topological tree configuration. The computer program system is used to investigate total system dynamic characteristics, including interaction effects between rigid and/or flexible bodies, control systems, and a wide range of environmental loadings. In addition, the program system is used for designing attitude control systems and for evaluating total dynamic system performance, including time domain response and frequency domain stability analyses.
Exchange pathways of plastoquinone and plastoquinol in the photosystem II complex
Van Eerden, Floris J.; Melo, Manuel N.; Frederix, Pim W. J. M.; Periole, Xavier; Marrink, Siewert J.
2017-01-01
Plastoquinone (PLQ) acts as an electron carrier between photosystem II (PSII) and the cytochrome b6f complex. To understand how PLQ enters and leaves PSII, here we show results of coarse grained molecular dynamics simulations of PSII embedded in the thylakoid membrane, covering a total simulation time of more than 0.5 ms. The long time scale allows the observation of many spontaneous entries of PLQ into PSII, and the unbinding of plastoquinol (PLQol) from the complex. In addition to the two known channels, we observe a third channel for PLQ/PLQol diffusion between the thylakoid membrane and the PLQ binding sites. Our simulations point to a promiscuous diffusion mechanism in which all three channels function as entry and exit channels. The exchange cavity serves as a PLQ reservoir. Our simulations provide a direct view on the exchange of electron carriers, a key step of the photosynthesis machinery. PMID:28489071
A highly coarse-grained model to simulate entangled polymer melts.
Zhu, You-Liang; Liu, Hong; Lu, Zhong-Yuan
2012-04-14
We introduce a highly coarse-grained model to simulate entangled polymer melts. In this model, a polymer chain is taken as a single coarse-grained particle, and the creation and annihilation of entanglements are regarded as stochastic events occurring in proper time intervals according to certain rules and probabilities. We build the relationship between the probability that an entanglement appears between any pair of neighboring chains in a given time interval and the rate of variation of entanglements, which describes the concurrent birth and death of entanglements. The probability of disappearance of entanglements is tuned to keep the total entanglement number around the target value. This model can reflect many characteristics of entanglements and macroscopic properties of polymer melts. As an illustration, we apply the model to simulate a polyethylene melt of C(1000)H(2002) at 450 K and further validate it by comparing to experimental data and other simulation results.
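The tuning idea, an annihilation probability adjusted so the total entanglement number stays near a target, can be sketched with a toy stochastic bookkeeping loop. The rules and rates below (fixed mean creation rate, proportional feedback on the death probability) are illustrative assumptions, not the paper's exact model.

```python
import random

def evolve_entanglements(n0, target, steps, rng):
    """Toy entanglement bookkeeping: creations occur at a fixed mean rate per
    time interval, while the per-entanglement annihilation probability is
    tuned up when the total exceeds the target, so the total entanglement
    number fluctuates around `target`."""
    n = n0
    mean_births = 10                        # assumed creations per interval
    for _ in range(steps):
        births = sum(1 for _ in range(50) if rng.random() < mean_births / 50)
        # E[deaths] = mean_births * (n/target)^2: balances births at n == target,
        # exceeds them above the target -> mean-reverting dynamics
        p_death = min(1.0, mean_births * n / target ** 2)
        deaths = sum(1 for _ in range(n) if rng.random() < p_death)
        n += births - deaths
    return n

rng = random.Random(1)
final = evolve_entanglements(300, 200, 500, rng)
```

Starting above the target (300 vs. 200), the feedback pulls the count down until it fluctuates around the target value.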
Modeling a maintenance simulation of the geosynchronous platform
NASA Technical Reports Server (NTRS)
Kleiner, A. F., Jr.
1980-01-01
A modeling technique used to conduct a simulation study comparing various maintenance routines for a space platform is discussed. A system model is described and illustrated, the basic concepts of a simulation pass are detailed, and sections on failures and maintenance are included. The operation of the system across time is best modeled by a discrete event approach with two basic events: failure and maintenance of the system. Each overall simulation run consists of introducing a particular model of the physical system, together with a maintenance policy, demand function, and mission lifetime. The system is then run through many passes, each pass corresponding to one mission, and the model is re-initialized before each pass. Statistics are compiled at the end of each pass, and after the last pass a report is printed. Items of interest typically include the time to first maintenance, total number of maintenance trips for each pass, average capability of the system, etc.
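The pass structure above, failure and maintenance as the two discrete events, many re-initialized passes, statistics compiled at the end, can be sketched as follows. The exponential failure law, fixed repair delay, and single-repair policy are illustrative assumptions, not the platform study's actual model.

```python
import random

def simulate_pass(mtbf, repair_time, lifetime, rng):
    """One simulation pass (one mission): exponentially distributed times to
    failure, a fixed repair delay per maintenance trip. Returns (time to
    first maintenance, number of trips, average availability)."""
    t, up, trips, first = 0.0, 0.0, 0, None
    while t < lifetime:
        run = min(rng.expovariate(1.0 / mtbf), lifetime - t)  # time to failure
        up += run
        t += run
        if t >= lifetime:
            break                         # mission ends before next failure
        if first is None:
            first = t                     # time to first maintenance
        trips += 1                        # maintenance event
        t += repair_time                  # system down while being maintained
    return first, trips, up / lifetime

# Many passes, re-initialized each time; statistics compiled at the end.
rng = random.Random(7)
passes = [simulate_pass(100.0, 5.0, 1000.0, rng) for _ in range(200)]
mean_avail = sum(p[2] for p in passes) / len(passes)
mean_trips = sum(p[1] for p in passes) / len(passes)
```

With a mean time between failures of 100 and a repair delay of 5, average availability comes out near mtbf / (mtbf + repair_time) ≈ 0.95, as expected for an alternating renewal process.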
Simulative design and process optimization of the two-stage stretch-blow molding process
NASA Astrophysics Data System (ADS)
Hopmann, Ch.; Rasche, S.; Windeck, C.
2015-05-01
The total production costs of PET bottles are significantly affected by the cost of raw material: approximately 70% of the total costs are spent on the raw material. Therefore, the stretch-blow molding industry intends to reduce total production costs through optimized material efficiency. However, there is often a trade-off between optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process for new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. The wall thickness distribution is then correlated with an objective function, and the preform geometry as well as the process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied to a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations show that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight was reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.
HiL simulation in biomechanics: a new approach for testing total joint replacements.
Herrmann, Sven; Kaehler, Michael; Souffrant, Robert; Rachholz, Roman; Zierath, János; Kluess, Daniel; Mittelmeier, Wolfram; Woernle, Christoph; Bader, Rainer
2012-02-01
Instability of artificial joints is still one of the most prevalent reasons for revision surgery, caused by various influencing factors. In order to investigate instability mechanisms such as dislocation under reproducible, physiologically realistic boundary conditions, a novel test approach is introduced by means of a hardware-in-the-loop (HiL) simulation involving a highly flexible mechatronic test system. In this work, the underlying concept and the implementation of all required units are presented, enabling comparable investigations of different total hip and knee replacements, respectively. The HiL joint simulator consists of two units: a physical setup composed of a six-axis industrial robot, and a numerical multibody model running in real time. Within the multibody model, the anatomical environment of the considered joint is represented such that the soft tissue response is accounted for during an instability event. Hence, the robot loads and moves the real implant components according to the information provided by the multibody model while transferring back the position and resisting moment recorded. Functionality of the simulator is proved by testing the underlying control principles, and verified by reproducing the dislocation process of a standard total hip replacement. HiL simulations provide a new biomechanical testing tool for analyzing different joint replacement systems with respect to their instability behavior under realistic movements and physiological load conditions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Harriss-Phillips, W M; Bezak, E; Yeoh, E K
2011-01-01
Objective: A temporal Monte Carlo tumour growth and radiotherapy effect model (HYP-RT) simulating hypoxia in head and neck cancer has been developed and used to analyse parameters influencing cell kill during conventionally fractionated radiotherapy. The model was designed to simulate individual cell division up to 10⁸ cells, while incorporating radiobiological effects, including accelerated repopulation and reoxygenation during treatment. Methods: Reoxygenation of hypoxic tumours was modelled using randomised increments of oxygen to tumour cells after each treatment fraction. Accelerated repopulation was modelled by increasing the symmetrical stem cell division probability. Each phenomenon had its onset either immediately or after a number of weeks of simulated treatment. Results: The extra dose required to control (total cell kill) hypoxic vs oxic tumours was 15–25% (8–20 Gy for 5×2 Gy per week), depending on the timing of accelerated repopulation onset. Reoxygenation of hypoxic tumours resulted in resensitisation and a reduction in the total dose required of approximately 10%, depending on the time of onset. When modelled simultaneously, accelerated repopulation and reoxygenation affected cell kill in hypoxic tumours in a similar manner to when the phenomena were modelled individually; however, the degree was altered, with non-additive results. Simulation results were in good agreement with standard linear-quadratic theory, but differed for more complex comparisons in which hypoxia, reoxygenation and accelerated repopulation effects were considered together. Conclusion: The simulations quantitatively confirm the need for patient individualisation in radiotherapy for hypoxic head and neck tumours, and show the benefits of modelling complex and dynamic processes using Monte Carlo methods. PMID:21933980
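The linear-quadratic survival arithmetic underlying the abstract above can be sketched numerically. The following minimal Python example counts the 2 Gy fractions needed to reduce 10⁸ clonogens below one expected survivor; the α and β values and the blanket oxygen-enhancement-ratio (OER) scaling of dose are illustrative assumptions, not the HYP-RT model's parameters, so the resulting oxic/hypoxic dose gap is far larger than the 8–20 Gy reported above (real tumours are only partly hypoxic and reoxygenate during treatment).

```python
import math

def fractions_to_control(n0, d, alpha, beta, oer=1.0):
    """Number of dose-d fractions needed to bring the expected number of
    surviving clonogens below 1, using linear-quadratic survival per
    fraction S = exp(-alpha*d_eff - beta*d_eff**2). Applying an OER to
    the whole dose is an illustrative simplification."""
    d_eff = d / oer                                 # hypoxic cells see a reduced effective dose
    log_sf = -(alpha * d_eff + beta * d_eff ** 2)   # ln(survival) per fraction
    return math.ceil(math.log(n0) / -log_sf)        # smallest n with n0 * exp(n*log_sf) < 1

oxic = fractions_to_control(1e8, 2.0, alpha=0.3, beta=0.03)              # 26 fractions (52 Gy)
hypoxic = fractions_to_control(1e8, 2.0, alpha=0.3, beta=0.03, oer=2.0)  # 56 fractions (112 Gy)
```

With these assumed parameters the fully hypoxic case needs more than twice the dose, which is why partial hypoxia and reoxygenation, as modelled in HYP-RT, matter so much for the realistic 15–25% figure.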
Wear simulation of total knee prostheses using load and kinematics waveforms from stair climbing.
Abdel-Jaber, Sami; Belvedere, Claudio; Leardini, Alberto; Affatato, Saverio
2015-11-05
Knee wear simulators are meant to perform load cycles on knee implants under physiological conditions, matching as closely as possible those experienced at the replaced joint during activities of daily living. Unfortunately, only the low-demand level-walking conditions specified in ISO 14243 are conventionally used during such tests. A recent study provided a consistent knee kinematic and load data-set measured during stair climbing in patients implanted with a specific modern total knee prosthesis design. In the present study, wear simulation tests were performed for the first time using this data-set on the same prosthesis design. It was hypothesised that more demanding tasks would result in wear rates that differ from those observed in retrievals. Four prostheses for total knee arthroplasty were tested using a displacement-controlled knee wear simulator for two million cycles at 1.1 Hz, under kinematics and load conditions typical of stair climbing. After simulation, the corresponding damage scars on the bearings were qualified and compared with equivalent explanted prostheses. An average mass loss of 20.2±1.5 mg was found. Scanning digital microscopy revealed similar features, though the explants had a greater variety of damage modes, including a high prevalence of adhesive wear damage and burnishing over the whole articulating surface. This study confirmed that the results from wear simulation machines are strongly affected by the kinematics and loads applied during simulation. The present results indicate that a more comprehensive set of loading conditions is necessary for in vitro simulations to fully capture the current clinical failure modes of knee implants. Copyright © 2015 Elsevier Ltd. All rights reserved.
Magnetic Flyer Facility Correlation and UGT Simulation
1978-05-01
AND UGT SIMULATION (U). Kaman Sciences Corporation, P.O. Box 7463, Colorado Springs, Colorado 80933. May 1978. Final Report, Contract No. DNA 001... The objectives were: (1) to ... a selected underground test (UGT) environment on 3DQP; and (2) to correlate the magnetically driven flyer plate facilities of VKSC with those of the ... tailored to match the pressure vs. time and total impulse measurements obtained on UGT events. This matching of experimental data required considerable ...
Bae, Donald S; Lynch, Hayley; Jamieson, Katherine; Yu-Moe, C Winnie; Roussin, Christopher
2017-09-06
The purpose of this investigation was to characterize the clinical efficacy and cost-effectiveness of simulation training aimed at reducing cast-saw injuries. Third-year orthopaedic residents underwent simulation-based instruction on distal radial fracture reduction, casting, and cast removal using an oscillating saw. The analysis compared the incidence of cast-saw injuries and associated costs before and after the implementation of the simulation curriculum. Actual and potential costs associated with cast-saw injuries included wound care, extra clinical visits, and potential total payments (indemnity and expense payments). Curriculum costs were calculated through time-driven, activity-based accounting methods. The researchers compared the costs of cast-saw injuries and the simulation curriculum to determine overall savings and return on investment. In the 2.5 years prior to simulation, cast-saw injuries occurred at a rate of approximately 4.3 per 100 casts cut by orthopaedic residents. In the 2.5-year period post-simulation, the injury rate decreased significantly to approximately 0.7 per 100 casts cut (p = 0.002). The total cost to implement the casting simulation was $2,465.31 per 6-month resident rotation. On the basis of historical data related to cast-saw burns (n = 6), total payments ranged from $2,995 to $25,000 per claim. The anticipated savings from averted cast-saw injuries and associated medicolegal payments in the 2.5 years post-simulation were $27,131, representing an 11-to-1 return on investment. Simulation-based training for orthopaedic surgical residents was effective in reducing cast-saw injuries and had a high theoretical return on investment. These results support further investment in simulation-based training as a cost-effective means of improving patient safety and clinical outcomes. Therapeutic Level III. See Instructions for Authors for a complete description of levels of evidence.
Glick, Joshua; Lehman, Erik; Terndrup, Thomas
2014-03-01
Coordination of the tasks of performing chest compressions and defibrillation can lead to communication challenges that may prolong time spent off the chest. The purpose of this study was to determine whether defibrillation provided by the provider performing chest compressions led to a decrease in peri-shock pauses as compared to defibrillation administered by a second provider, in a simulated cardiac arrest scenario. This was a randomized, controlled study measuring pauses in chest compressions for defibrillation in a simulated cardiac arrest model. We approached hospital providers with current CPR certification for participation between July, 2011 and October, 2011. Volunteers were randomized to control (facilitator-administered defibrillation) or experimental (compressor-administered defibrillation) groups. All participants completed one minute of chest compressions on a mannequin in a shockable rhythm prior to administration of defibrillation. We measured and compared pauses for defibrillation in both groups. Out of 200 total participants, we analyzed data from 197 defibrillations. Compressor-initiated defibrillation resulted in a significantly lower pre-shock hands-off time (0.57 s; 95% CI: 0.47-0.67) compared to facilitator-initiated defibrillation (1.49 s; 95% CI: 1.35-1.64). Furthermore, compressor-initiated defibrillation resulted in a significantly lower peri-shock hands-off time (2.77 s; 95% CI: 2.58-2.95) compared to facilitator-initiated defibrillation (4.25 s; 95% CI: 4.08-4.43). Assigning the responsibility for shock delivery to the provider performing compressions encourages continuous compressions throughout the charging period and decreases total time spent off the chest. However, as this was a simulation-based study, clinical implementation is necessary to further evaluate these potential benefits.
Foley, J. M.; Gooding, A. L.; Thames, A. D.; Ettenhofer, M. L.; Kim, M. S.; Castellon, S. A.; Marcotte, T. D.; Sadek, J. R.; Heaton, R. K.; van Gorp, W. G.; Hinkin, C. H.
2013-01-01
Objectives: To examine the effects of aging and neuropsychological (NP) impairment on driving simulator performance within a human immunodeficiency virus (HIV)-infected cohort. Methods: Participants included 79 HIV-infected adults (n = 58 aged > 50, n = 21 aged ≤ 40) who completed an NP battery and a personal-computer-based driving simulator task. Outcome variables included total completion time (time) and the number of city blocks needed to complete the task (blocks). Results: Compared to the younger group, the older group was less efficient in its route finding (blocks over optimum: 25.9 [20.1] vs 14.4 [16.9]; P = .02) and took longer to complete the task (time: 1297.6 [577.6] vs 804.4 [458.5] seconds; P = .001). Regression models within the older adult group indicated that visuospatial abilities (blocks: b = –0.40, P < .001; time: b = –0.40, P = .001) and attention (blocks: b = –0.49, P = .001; time: b = –0.42, P = .006) independently predicted simulator performance. The NP-impaired group performed more poorly on both time and blocks compared to the NP-normal group. Conclusions: Older HIV-infected adults may be at risk of driving-related functional compromise secondary to HIV-associated neurocognitive decline. PMID:23314403
Methodology of modeling and measuring computer architectures for plasma simulations
NASA Technical Reports Server (NTRS)
Wang, L. P. T.
1977-01-01
A brief introduction to plasma simulation using computers is given, together with the difficulties encountered on currently available computers. Using an analysis and measurement methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements, the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization can be configured to achieve the computation bound of an application problem. A sequential simulation model, an array/pipeline simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.
Sung, Yun-Hee; Kim, Chang-Ju; Yu, Byong-Kyu; Kim, Kyeong-Mi
2013-01-01
We investigated whether a hippotherapy simulator influences symmetric body weight bearing during gait in patients with stroke. Stroke patients were divided into a control group (n = 10) that received conventional rehabilitation for 60 min/day, 5 times/week for 4 weeks, and an experimental group (n = 10) that used a hippotherapy simulator for 15 min/day, 5 times/week for 4 weeks after conventional rehabilitation for 45 min/day. Temporospatial gait parameters were assessed using OptoGait, and the activity of trunk muscles (rectus abdominis and erector spinae on the affected side) was evaluated using surface electromyography during sit-to-stand and gait. Pre-testing was performed before the intervention and post-testing at the end of the 4-week intervention. During sit-to-stand, activation of the erector spinae in the experimental group was significantly increased compared with the control group (p < 0.01), whereas activation of the rectus abdominis decreased. Of the gait parameters, load response, single support, total double support, and pre-swing showed significant changes in the experimental group with the hippotherapy simulator compared with the control group (p < 0.05). Moreover, activation of the erector spinae and rectus abdominis during gait correlated with changes in these gait parameters (load response, single support, total double support, and pre-swing) in the experimental group. These findings suggest that use of a hippotherapy simulator in patients with stroke can improve asymmetric weight bearing by influencing trunk muscles.
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time-consuming because of the necessity to track the paths of individual photons. The computational cost is mainly associated with the calculation of logarithmic and trigonometric functions and with the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of Monte Carlo simulations using exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be run on parallel machines, allowing further acceleration.
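The kind of fast-logarithm replacement described above can be illustrated in Python. The sketch below range-reduces via `math.frexp` and uses truncated atanh-series coefficients; these coefficients are an illustrative stand-in for the paper's fitted polynomial/rational approximations, and the step-length formula s = -ln(ξ)/μt is the standard Monte Carlo photon-step sampling rule.

```python
import math
import random

def fast_log(x):
    """Range-reduced polynomial approximation of ln(x) for x > 0.
    Writes x = m * 2**e with m in [0.5, 1) via frexp, then approximates
    ln(m) = 2*atanh(u) with u = (m-1)/(m+1) truncated after the u**5 term
    (absolute error below about 1.5e-4 everywhere)."""
    m, e = math.frexp(x)
    u = (m - 1.0) / (m + 1.0)
    u2 = u * u
    return 2.0 * u * (1.0 + u2 / 3.0 + u2 * u2 / 5.0) + e * 0.6931471805599453

def sample_step(mu_t, rng):
    """Photon free-path sampling s = -ln(xi)/mu_t using the fast log."""
    return -fast_log(rng.random()) / mu_t
```

In a tight photon-tracking loop, replacing the library logarithm with such an approximation (together with similar trigonometric replacements) is what yields the speedup reported above; in pure Python the gain would be negligible, so this is a sketch of the idea rather than a benchmark.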
Chase, Katherine J.; Caldwell, Rodney R.; Stanley, Andrea K.
2014-01-01
This report documents the construction of a precipitation-runoff model for simulating natural streamflow in the Smith River watershed, Montana. This Precipitation-Runoff Modeling System model, constructed in cooperation with the Meagher County Conservation District, can be used to examine the general hydrologic framework of the Smith River watershed, including quantification of precipitation, evapotranspiration, and streamflow; partitioning of streamflow between surface runoff and subsurface flow; and quantifying contributions to streamflow from several parts of the watershed. The model was constructed by using spatial datasets describing watershed topography, the streams, and the hydrologic characteristics of the basin soils and vegetation. Time-series data (daily total precipitation, and daily minimum and maximum temperature) were input to the model to simulate daily streamflow. The model was calibrated for water years 2002–2007 and evaluated for water years 1996–2001. Though water year 2008 was included in the study period to evaluate water-budget components, calibration and evaluation data were unavailable for that year. During the calibration and evaluation periods, simulated-natural flow values were compared to reconstructed-natural streamflow data. These reconstructed-natural streamflow data were calculated by adding Bureau of Reclamation’s depletions data to the observed streamflows. Reconstructed-natural streamflows represent estimates of streamflows for water years 1996–2007 assuming there was no agricultural water-resources development in the watershed. Additional calibration targets were basin mean monthly solar radiation and potential evapotranspiration. The model estimated the hydrologic processes in the Smith River watershed during the calibration and evaluation periods. 
Simulated-natural mean annual and mean monthly flows generally were the same as or higher than the reconstructed-natural streamflow values during the calibration period, whereas they were lower during the evaluation period. The shape of the annual hydrographs of the simulated-natural daily streamflow values matched that of the reconstructed-natural values for most of the calibration period, but daily streamflow values were underestimated during the evaluation period for water years 1996–1998. The model enabled a detailed evaluation of the components of the water budget within the Smith River watershed during the water year 1996–2008 study period. During this period, simulated mean annual precipitation across the Smith River watershed was 16 inches, of which 14 inches evaporated or transpired and 2 inches left the basin as streamflow. According to the precipitation-runoff model simulations, surface runoff rarely (less than 2 percent of the time during water years 2002–2008) makes up more than 10 percent of the total streamflow. Subsurface flow (the combination of interflow and groundwater flow) makes up most of the total streamflow (99 percent or more of total streamflow 71 percent of the time during water years 2002–2008).
Surgery Website as a 24/7 Adjunct to a Surgical Curriculum.
Jyot, Apram; Baloul, Mohamed S; Finnesgard, Eric J; Allen, Samuel J; Naik, Nimesh D; Gomez Ibarra, Miguel A; Abbott, Eduardo F; Gas, Becca; Cardenas-Lara, Francisco J; Zeb, Muhammad H; Cadeliña, Rachel; Farley, David R
Successfully teaching duty-hour-restricted trainees demands engaging learning opportunities. Our surgical education website and its associated assets were assessed to understand how such a resource was being used. The website was accessible to all Mayo Clinic employees via the internal web network, and website access data from April 2015 through October 2016 were retrospectively collected using Piwik. The setting was an academic, tertiary care referral center with a large general surgery training program (Mayo Clinic, Rochester, MN). The website had 48,794 views from 6,313 visits by a total of 257 Mayo Clinic employees, who spent an average of 14 ± 11 minutes on the website. The website houses 295 videos, 51 interactive modules, 14 educational documents, and 7 flashcard tutorials. The most popular content type was videos, with a total of 30,864 views. The most popular visiting time was between 8 pm and 9 pm, with 6,358 views (13%), and Thursday was the most popular day, with 17,907 views (37%). A total of 78% of users accessed content beyond the homepage. Average visits peaked in relation to 2 components of our curriculum: a 240% increase one day before our biannual intern simulation assessments, and a 61% increase one day before our weekly Friday simulation sessions. Interns who rotated on the service of the staff surgeon who actively endorses the website had 93% more actions per visit than other users. The most-clicked items were the home banner for our weekly simulation session pre-emptive videos, followed by the "groin anatomy" and "TEP hernia repair" videos. Our website acted as a "just-in-time" accessible portal to reliable surgical information. It supplemented the time-sensitive educational needs of our learners by serving as a heavily used adjunct to 3 components of our surgical education curriculum: weekly simulation sessions, biannual assessments, and clinical rotations.
Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
New Insights in Tropospheric Ozone and its Variability
NASA Technical Reports Server (NTRS)
Oman, Luke D.; Douglass, Anne R.; Ziemke, Jerry R.; Rodriquez, Jose M.
2011-01-01
We have produced time-slice simulations using the Goddard Earth Observing System Version 5 (GEOS-5) coupled to a comprehensive stratospheric and tropospheric chemical mechanism. These simulations are forced with observed sea surface temperatures over the past 25 years and use constant specified surface emissions, thereby providing a measure of the dynamically controlled ozone response. We examine the model performance in simulating tropospheric ozone and its variability. Here we show targeted comparisons of results from our simulations with a multi-decadal tropical tropospheric column ozone dataset obtained from satellite observations of total column ozone. We use SHADOZ ozonesondes to gain insight into the observed vertical response and compare with the simulated vertical structure. This work includes, but is not limited to, ENSO-related variability.
Power system analysis of Hanlim superconducting HVDC system using real time digital simulator
NASA Astrophysics Data System (ADS)
Won, Y. J.; Kim, J. G.; Kim, A. R.; Kim, G. H.; Park, M.; Yu, I. K.; Sim, K. D.; Cho, J.; Lee, S.; Jeong, K. W.; Watanabe, K.
2011-11-01
Jeju island is located approximately 100 km south of the mainland of Korea and had a peak load of about 553 MW in 2008, with demand increasing by 7.2% a year over the last 5 years. Since the wind profiles of Jeju island are more favorable than those of the mainland, many companies have shown interest in the wind power business on the island. Moreover, KEPCO plans a renewable energy test whose power will be delivered by an HVDC system. One kilometer of the total 8 km route was designed as superconducting DC cable; the remaining 7 km will be conventional overhead line. In this paper, the authors develop a simulation model of the power network around the 8 km HVDC system using a real time digital simulator (RTDS).
Henricksen, Jared W; Altenburg, Catherine; Reeder, Ron W
2017-10-01
Despite efforts to prepare a psychologically safe environment, simulation participants are occasionally psychologically distressed. Instructing simulation educators about participant psychological risks and having a participant psychological distress action plan available to simulation educators may assist them as they seek to keep all participants psychologically safe. A Simulation Participant Psychological Safety Algorithm was designed to aid simulation educators as they debrief simulation participants perceived to have psychological distress and categorize these events as mild (level 1), moderate (level 2), or severe (level 3). A prebrief dedicated to creating a psychologically safe learning environment was held constant. The algorithm was used for 18 months in an active pediatric simulation program. Data collected included level of participant psychological distress as perceived and categorized by the simulation team using the algorithm, type of simulation that participants went through, who debriefed, and timing of when psychological distress was perceived to occur during the simulation session. The Kruskal-Wallis test was used to evaluate the relationship between events and simulation type, events and simulation educator team who debriefed, and timing of event during the simulation session. A total of 3900 participants went through 399 simulation sessions between August 1, 2014, and January 26, 2016. Thirty-four (<1%) simulation participants from 27 sessions (7%) were perceived to have an event. One participant was perceived to have a severe (level 3) psychological distress event. Events occurred more commonly in high-intensity simulations, with novice learners and with specific educator teams. Simulation type and simulation educator team were associated with occurrence of events (P < 0.001). There was no association between event timing and event level. 
Severe psychological distress as categorized by simulation personnel using the Simulation Participant Psychological Safety Algorithm is rare, with mild and moderate events being more common. The algorithm was used to teach simulation educators how to assist a participant who may be psychologically distressed and document perceived event severity.
On Digital Simulation of Multicorrelated Random Processes and Its Applications. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Sinha, A. K.
1973-01-01
Two methods are described to simulate, on a digital computer, a set of correlated, stationary, and Gaussian time series with zero mean from the given matrix of power spectral densities and cross spectral densities. The first method is based upon trigonometric series with random amplitudes and deterministic phase angles. The random amplitudes are generated by using a standard random number generator subroutine. An example is given which corresponds to three components of wind velocities at two different spatial locations for a total of six correlated time series. In the second method, the whole process is carried out using the Fast Fourier Transform approach. This method gives more accurate results and works about twenty times faster for a set of six correlated time series.
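The spectral-representation idea behind the thesis above can be sketched in Python. The snippet below is a closely related trigonometric-series scheme with random phases (rather than the thesis's random amplitudes with deterministic phases); the flat cross-spectral density, the Cholesky spectral factor, and the small per-channel frequency offset are simplifying assumptions for illustration, chosen so that the zero-lag covariance of the output equals a target matrix C.

```python
import math
import random

def simulate_correlated(C, n_steps, dt, bandwidth=1.0, n_freq=32, seed=0):
    """Simulate len(C) correlated, stationary, zero-mean series by summing
    cosines with random phases. Assumes a flat one-sided cross-spectral
    density S = C / bandwidth; H = chol(S) distributes independent source
    channels across outputs, and a small per-channel frequency offset
    avoids identical frequencies appearing in different channels."""
    rng = random.Random(seed)
    m = len(C)
    df = bandwidth / n_freq
    # Cholesky factor of S = C / bandwidth (m is small, pure Python is fine)
    H = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1):
            s = C[i][j] / bandwidth - sum(H[i][k] * H[j][k] for k in range(j))
            H[i][j] = math.sqrt(s) if i == j else s / H[j][j]
    amp = math.sqrt(2.0 * df)
    x = [[0.0] * n_steps for _ in range(m)]
    for j in range(m):                     # independent source channel
        for k in range(n_freq):
            w = 2.0 * math.pi * (k + (j + 0.5) / m) * df
            phi = rng.uniform(0.0, 2.0 * math.pi)
            for t in range(n_steps):
                c = amp * math.cos(w * t * dt + phi)
                for i in range(m):
                    x[i][t] += H[i][j] * c
    return x
```

Replacing the inner cosine summation with an inverse FFT over a randomized spectrum gives the thesis's second, faster method; the statistical target (the prescribed spectral density matrix) is the same.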
Open shop scheduling problem to minimize total weighted completion time
NASA Astrophysics Data System (ADS)
Bai, Danyu; Zhang, Zhihai; Zhang, Qiang; Tang, Mengqian
2017-01-01
A given number of jobs in an open shop scheduling environment must each be processed for given amounts of time on each of a given set of machines in an arbitrary sequence. This study aims to achieve a schedule that minimizes total weighted completion time. Owing to the strong NP-hardness of the problem, the weighted shortest processing time block (WSPTB) heuristic is presented to obtain approximate solutions for large-scale problems. Performance analysis proves the asymptotic optimality of the WSPTB heuristic in the sense of probability limits. The largest weight block rule is provided to seek optimal schedules in polynomial time for a special case. A hybrid discrete differential evolution algorithm is designed to obtain high-quality solutions for moderate-scale problems. Simulation experiments demonstrate the effectiveness of the proposed algorithms.
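The WSPT dispatching idea named above can be sketched as a greedy scheduler; note this is a simplified illustration, not the authors' WSPTB block heuristic or their hybrid differential evolution algorithm. The rule: repeatedly start, at the earliest feasible time, the operation whose job has the largest weight-to-processing-time ratio.

```python
def open_shop_wspt(p, w):
    """Greedy WSPT-style dispatcher for the open shop. p[j][k] is the
    processing time of job j on machine k (every job visits every
    machine, in any order), w[j] is the weight of job j. Returns the
    total weighted completion time and per-job completion times."""
    n, m = len(p), len(p[0])
    remaining = [set(range(m)) for _ in range(n)]  # machines each job still needs
    job_free = [0.0] * n    # earliest time each job can start its next operation
    mach_free = [0.0] * m   # earliest time each machine is idle
    completion = [0.0] * n
    while any(remaining):
        # earliest time any remaining (job, machine) pair could start
        cands = [(max(job_free[j], mach_free[k]), j, k)
                 for j in range(n) for k in remaining[j]]
        t = min(c[0] for c in cands)
        # among operations startable at t, pick the best weight/time ratio
        _, j, k = max((w[jj] / p[jj][kk], jj, kk)
                      for s, jj, kk in cands if s == t)
        job_free[j] = mach_free[k] = t + p[j][k]
        completion[j] = job_free[j]
        remaining[j].discard(k)
    return sum(w[j] * completion[j] for j in range(n)), completion
```

On a tiny 2-job, 2-machine instance with unit weights and processing times [[1, 2], [2, 1]], this dispatcher finds a schedule with both jobs finishing at time 3, which is optimal here since each job and each machine needs 3 time units of work in total.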
3D registration of surfaces for change detection in medical images
NASA Astrophysics Data System (ADS)
Fisher, Elizabeth; van der Stelt, Paul F.; Dunn, Stanley M.
1997-04-01
Spatial registration of data sets is essential for quantifying changes that take place over time in cases where the position of a patient with respect to the sensor has been altered. Changes within the region of interest can be problematic for automatic methods of registration. This research addresses the problem of automatic 3D registration of surfaces derived from serial, single-modality images for the purpose of quantifying changes over time. The registration algorithm utilizes motion-invariant, curvature-based geometric properties to derive an approximation to an initial rigid transformation to align two image sets. Following the initial registration, changed portions of the surface are detected and excluded before refining the transformation parameters. The performance of the algorithm was tested using simulation experiments. To quantitatively assess the registration, random noise at various levels, known rigid motion transformations, and analytically-defined volume changes were applied to the initial surface data acquired from models of teeth. These simulation experiments demonstrated that the calculated transformation parameters were accurate to within 1.2 percent of the total applied rotation and 2.9 percent of the total applied translation, even at the highest applied noise levels and simulated wear values.
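The rigid-alignment step at the core of such registration can be illustrated with a closed-form least-squares (Procrustes) solution; the paper's algorithm is 3-D, curvature-based, and robust to changed surface regions, so this 2-D stdlib-only sketch shows only the underlying idea of recovering a rotation and translation between corresponding point sets.

```python
import math

def rigid_register_2d(P, Q):
    """Least-squares rigid registration (rotation theta, translation t)
    mapping 2-D point set P onto Q, assuming known correspondences.
    Closed form: center both sets, take theta from the sums of dot and
    cross products, then t = centroid(Q) - R * centroid(P)."""
    n = len(P)
    cxp = sum(x for x, _ in P) / n
    cyp = sum(y for _, y in P) / n
    cxq = sum(x for x, _ in Q) / n
    cyq = sum(y for _, y in Q) / n
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay = px - cxp, py - cyp        # centered source point
        bx, by = qx - cxq, qy - cyq        # centered target point
        s_cos += ax * bx + ay * by         # sum of dot products
        s_sin += ax * by - ay * bx         # sum of 2-D cross products
    theta = math.atan2(s_sin, s_cos)
    tx = cxq - (cxp * math.cos(theta) - cyp * math.sin(theta))
    ty = cyq - (cxp * math.sin(theta) + cyp * math.cos(theta))
    return theta, (tx, ty)
```

With noiseless correspondences the transformation is recovered exactly; the paper's contribution is precisely what this sketch omits: finding correspondences automatically from curvature and excluding changed regions so the fit is not biased by the change being measured.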
Effects of dispersal on total biomass in a patchy, heterogeneous system: analysis and experiment.
Zhang, Bo; Liu, Xin; DeAngelis, Donald L.; Ni, Wei-Ming; Wang, G Geoff
2015-01-01
An intriguing recent result from mathematics is that a population diffusing at an intermediate rate in an environment in which resources vary spatially will reach a higher total equilibrium biomass than the same population in an environment in which the same total resources are distributed homogeneously. We extended the current mathematical theory to apply to logistic growth and also showed that the result applies to patchy systems with dispersal among patches, in both continuous and discrete time. This allowed us to make specific predictions, through simulations, concerning the biomass dynamics, which were verified by a laboratory experiment. The experiment studied biomass growth of duckweed (Lemna minor Linn.), with the resources (nutrients added to water) distributed homogeneously among a discrete series of water-filled containers in one treatment and heterogeneously in another. The experimental results showed that total biomass peaked at an intermediate, relatively low diffusion rate, exceeding the total carrying capacity of the system, in agreement with the simulation model. The implications of the experiment for source, sink, and pseudo-sink dynamics are discussed.
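The headline effect (total biomass peaking above total carrying capacity at an intermediate dispersal rate) can be reproduced in a minimal two-patch sketch. The parameterization dx_i/dt = x_i(r_i - x_i) + D(x_j - x_i), in which r_i acts as both local growth rate and carrying capacity, is an illustrative choice, not the paper's model; with it, zero dispersal gives total biomass r_1 + r_2, very fast dispersal homogenizes the patches back toward that total, and intermediate D exceeds it.

```python
def equilibrium_total(r, D, dt=0.002, steps=200000):
    """Euler-integrate a two-patch logistic model with passive dispersal,
    dx_i/dt = x_i*(r_i - x_i) + D*(x_j - x_i), long enough to reach
    (near) equilibrium, and return the total biomass x_0 + x_1."""
    x = [0.5, 0.5]
    for _ in range(steps):
        dx0 = x[0] * (r[0] - x[0]) + D * (x[1] - x[0])
        dx1 = x[1] * (r[1] - x[1]) + D * (x[0] - x[1])
        x[0] += dt * dx0
        x[1] += dt * dx1
    return x[0] + x[1]
```

With heterogeneous patches r = (1, 3), the no-dispersal total is 4; a moderate dispersal rate such as D = 0.5 pushes the equilibrium total above 4 (roughly 4.2 for these numbers), while very fast dispersal brings it back toward 4, mirroring the intermediate-rate peak reported above.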
Banta, Edward R.; Paschke, Suzanne S.
2012-01-01
Declining water levels caused by withdrawals of water from wells in the west-central part of the Denver Basin bedrock-aquifer system have raised concerns with respect to the ability of the aquifer system to sustain production. The Arapahoe aquifer in particular is heavily used in this area. Two optimization analyses were conducted to demonstrate approaches that could be used to evaluate possible future pumping scenarios intended to prolong the productivity of the aquifer and to delay excessive loss of saturated thickness. These analyses were designed as demonstrations only, and were not intended as a comprehensive optimization study. Optimization analyses were based on a groundwater-flow model of the Denver Basin developed as part of a recently published U.S. Geological Survey groundwater-availability study. For each analysis an optimization problem was set up to maximize total withdrawal rate, subject to withdrawal-rate and hydraulic-head constraints, for 119 selected municipal water-supply wells located in 96 model cells. The optimization analyses were based on 50- and 100-year simulations of groundwater withdrawals. The optimized total withdrawal rate for all selected wells for a 50-year simulation time was about 58.8 cubic feet per second. For an analysis in which the simulation time and head-constraint time were extended to 100 years, the optimized total withdrawal rate for all selected wells was about 53.0 cubic feet per second, demonstrating that a reduction in withdrawal rate of about 10 percent may extend the time before the hydraulic-head constraints are violated by 50 years, provided that pumping rates are optimally distributed. Analysis of simulation results showed that initially, the pumping produces water primarily by release of water from storage in the Arapahoe aquifer. 
However, because confining layers between the Denver and Arapahoe aquifers are thin, in less than 5 years, most of the water removed by managed-flows pumping likely would be supplied by depleting overlying hydrogeologic units, substantially increasing the rate of decline of hydraulic heads in parts of the overlying Denver aquifer.
Convergence of Free Energy Profile of Coumarin in Lipid Bilayer.
Paloncýová, Markéta; Berka, Karel; Otyepka, Michal
2012-04-10
Atomistic molecular dynamics (MD) simulations of druglike molecules embedded in lipid bilayers are of considerable interest as models for drug penetration and positioning in biological membranes. Here we analyze partitioning of coumarin in a dioleoylphosphatidylcholine (DOPC) bilayer, based on both multiple unbiased MD simulations (3 μs total length) and free energy profiles along the bilayer normal calculated by biased MD simulations (∼7 μs in total). The convergence in time of the free energy profiles calculated by both umbrella sampling and z-constraint techniques is thoroughly analyzed. Two sets of starting structures are also considered, one from unbiased MD simulation and the other from "pulling" coumarin along the bilayer normal. The structures obtained by the pulling simulation contain water defects on the lipid bilayer surface, while those acquired from unbiased simulation have no membrane defects. The free energy profiles converge more rapidly when starting frames from unbiased simulations are used. In addition, z-constraint simulation leads to more rapid convergence than umbrella sampling, due to quicker relaxation of membrane defects. Furthermore, we show that the choice of RESP, PRODRG, or Mulliken charges considerably affects the resulting free energy profile of our model drug along the bilayer normal. We recommend using z-constraint biased MD simulations based on starting geometries acquired from unbiased MD simulations for efficient calculation of convergent free energy profiles of druglike molecules along bilayer normals. The calculation of a free energy profile should start from an unbiased simulation, though polar molecules might need a slow pulling afterward. Results obtained with the recommended simulation protocol agree well with available experimental data for two coumarin derivatives.
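In the z-constraint method described above, the free energy profile is recovered by integrating the mean constraint force along the bilayer normal, ΔG(z) = -∫⟨F(z′)⟩dz′. A minimal sketch of that post-processing step, using a synthetic (linear) mean-force curve in place of real MD output:

```python
import numpy as np

# Sketch: recovering a free energy profile from z-constraint MD output.
# The solute is held at successive depths z_i, the constraint force is
# averaged at each depth, and the profile is the negative integral of the
# mean force.  The force data below are synthetic stand-ins for MD output.
z = np.linspace(0.0, 3.0, 31)        # depth along the bilayer normal (nm)
mean_force = -2.0 * (z - 2.0)        # hypothetical mean constraint force (kJ/mol/nm)

# Cumulative trapezoidal integration gives G(z) up to an additive constant;
# anchoring G = 0 in bulk water (z = 0) makes profiles comparable.
dG = -np.concatenate(([0.0],
                      np.cumsum(0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(z))))
dG -= dG[0]

z_min = z[np.argmin(dG)]             # preferred depth of the solute
print(f"free energy minimum at z = {z_min:.2f} nm, depth {dG.min():.2f} kJ/mol")
```

With the linear toy force, the profile has its minimum at z = 2.0 nm, illustrating how the integrated profile locates the solute's preferred depth in the membrane.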
The APOSTLE project: Local Group kinematic mass constraints and simulation candidate selection
NASA Astrophysics Data System (ADS)
Fattahi, Azadeh; Navarro, Julio F.; Sawala, Till; Frenk, Carlos S.; Oman, Kyle A.; Crain, Robert A.; Furlong, Michelle; Schaller, Matthieu; Schaye, Joop; Theuns, Tom; Jenkins, Adrian
2016-03-01
We use a large sample of isolated dark matter halo pairs drawn from cosmological N-body simulations to identify candidate systems whose kinematics match that of the Local Group (LG) of galaxies. We find, in agreement with the `timing argument' and earlier work, that the separation and approach velocity of the Milky Way (MW) and Andromeda (M31) galaxies favour a total mass for the pair of ~5 × 10^12 M⊙. A mass this large, however, is difficult to reconcile with the small relative tangential velocity of the pair, as well as with the small deceleration from the Hubble flow observed for the most distant LG members. Halo pairs that match these three criteria have average masses a factor of ~2 times smaller than suggested by the timing argument, but with large dispersion. Guided by these results, we have selected 12 halo pairs with total mass in the range 1.6-3.6 × 10^12 M⊙ for the APOSTLE project (A Project Of Simulating The Local Environment), a suite of hydrodynamical resimulations at various numerical resolution levels (reaching up to ~10^4 M⊙ per gas particle) that use the subgrid physics developed for the EAGLE project. These simulations reproduce, by construction, the main kinematics of the MW-M31 pair, and produce satellite populations whose overall number, luminosities, and kinematics are in good agreement with observations of the MW and M31 companions. The APOSTLE candidate systems thus provide an excellent testbed to confront directly many of the predictions of the Λ cold dark matter cosmology with observations of our local Universe.
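The classical timing argument invoked above treats the MW and M31 as a two-body radial Keplerian orbit that started together at the Big Bang; the observed separation, approach velocity, and age of the Universe then fix the total mass. A sketch of that calculation, using typical literature input values (not numbers from this paper):

```python
import math

# Timing-argument sketch: radial Kepler orbit with
#   r = a(1 - cos x),  t = sqrt(a^3/GM)(x - sin x),
#   v = sqrt(GM/a) sin x / (1 - cos x),
# so v*t/r = sin x (x - sin x) / (1 - cos x)^2 fixes the orbit phase x.
G = 4.498e-6   # G in kpc^3 / (Gyr^2 Msun), i.e. 4.301e-6 kpc (km/s)^2/Msun converted

def timing_argument_mass(r_kpc, v_kms, t_gyr):
    v = v_kms * 1.0227                    # km/s -> kpc/Gyr
    target = v * t_gyr / r_kpc            # dimensionless orbit-phase combination
    f = lambda x: math.sin(x) * (x - math.sin(x)) / (1.0 - math.cos(x))**2 - target
    lo, hi = math.pi + 1e-6, 2.0 * math.pi - 1e-6   # approaching branch of the orbit
    for _ in range(100):                  # bisection for the phase x
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    x = 0.5 * (lo + hi)
    a = r_kpc / (1.0 - math.cos(x))       # semi-major axis
    return a**3 * (x - math.sin(x))**2 / (G * t_gyr**2)

# r ~ 770 kpc, v ~ -109 km/s (approaching), t ~ 13.8 Gyr: typical inputs
M = timing_argument_mass(770.0, -109.0, 13.8)
print(f"timing-argument total mass ~ {M:.2e} Msun")
```

The result comes out at a few times 10^12 M⊙, of the order of the ~5 × 10^12 M⊙ quoted in the abstract (the exact value depends on the adopted separation, velocity, and age).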
Real waiting times for surgery. Proposal for an improved system for their management.
Abásolo, Ignacio; Barber, Patricia; González López-Valcárcel, Beatriz; Jiménez, Octavio
2014-01-01
In Spain, official information on waiting times for surgery is based on the interval between the indication for surgery and its performance. We aimed to estimate total waiting times for surgical procedures, including outpatient visits and diagnostic tests prior to surgery. In addition, we propose an alternative system to manage total waiting times that reduces variability and maximum waiting times without increasing the use of health care resources. This system is illustrated by three surgical procedures: cholecystectomy, carpal tunnel release and inguinal/femoral hernia repair. Using data from two Autonomous Communities, we fitted, through simulation, a theoretical distribution of the total waiting time, assuming independence of the waiting times of each stage of the clinical procedure. We show an alternative system in which the waiting time for the second consultation is established according to the time previously waited for the first consultation. Average total waiting times for cholecystectomy, carpal tunnel release and inguinal/femoral hernia repair were 331, 355 and 137 days, respectively (official figures are 83, 68 and 73 days, respectively). Using different negative correlations between waiting times for subsequent consultations would reduce maximum waiting times by between 2% and 15% and substantially reduce heterogeneity among patients, without generating higher resource use. Total waiting times are between two and five times higher than those officially published. The relationship between the waiting times at each stage of the medical procedure may be used to decrease variability and maximum waiting times. Copyright © 2013 SESPAS. Published by Elsevier Espana. All rights reserved.
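The mechanism described above can be sketched with a small Monte Carlo: total waiting time is the sum of stage waits, and scheduling the second stage against the time already waited induces a negative correlation that shrinks the upper tail without changing mean waits. Stage means, marginal distributions (exponential), and the correlation value below are all assumptions, not the paper's fitted distributions:

```python
import numpy as np
from math import erf

# Gaussian-copula sketch of correlated stage waiting times.
rng = np.random.default_rng(42)
n = 100_000
means = np.array([90.0, 60.0, 100.0])    # hypothetical mean waits per stage (days)
phi = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / np.sqrt(2.0))))  # normal CDF

def simulate_total(rho):
    # Correlate stages 1 and 2; keep stage 3 (wait for surgery) independent.
    cov = np.array([[1.0, rho, 0.0], [rho, 1.0, 0.0], [0.0, 0.0, 1.0]])
    z = rng.multivariate_normal(np.zeros(3), cov, size=n)
    u = np.clip(phi(z), 1e-12, 1.0 - 1e-12)
    waits = -means * np.log(1.0 - u)     # exponential marginals per stage
    return waits.sum(axis=1)

independent = simulate_total(0.0)
managed = simulate_total(-0.6)           # negatively correlated scheduling
print(f"means: {independent.mean():.0f} vs {managed.mean():.0f} days")
print(f"95th percentiles: {np.percentile(independent, 95):.0f} vs "
      f"{np.percentile(managed, 95):.0f} days")
```

Both schemes have the same expected total wait, but the negatively correlated one has a visibly lower 95th percentile, which is the paper's point: maximum waits and heterogeneity fall without extra resources.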
Assessing manure management strategies through small-plot research and whole-farm modeling
Garcia, A.M.; Veith, T.L.; Kleinman, P.J.A.; Rotz, C.A.; Saporito, L.S.
2008-01-01
Plot-scale experimentation can provide valuable insight into the effects of manure management practices on phosphorus (P) runoff, but whole-farm evaluation is needed for a complete assessment of potential trade-offs. Artificially-applied rainfall experimentation on small field plots and event-based and long-term simulation modeling were used to compare P loss in runoff related to two dairy manure application methods (surface application with and without incorporation by tillage) on contrasting Pennsylvania soils previously under no-till management. Results of single-event rainfall experiments indicated that average dissolved reactive P losses in runoff from manured plots decreased by up to 90% with manure incorporation while total P losses did not change significantly. Longer-term whole farm simulation modeling indicated that average dissolved reactive P losses would decrease by 8% with manure incorporation while total P losses would increase by 77% due to greater erosion from fields previously under no-till. Differences in the two methods of inference point to the need for caution in extrapolating research findings. Single-event rainfall experiments conducted shortly after manure application simulate incidental transfers of dissolved P in manure to runoff, resulting in greater losses of dissolved reactive P. However, the transfer of dissolved P in applied manure diminishes with time. Over the annual time frame simulated by whole farm modeling, erosion processes become more important to runoff P losses. Results of this study highlight the need to consider the potential for increased erosion and total P losses caused by soil disturbance during incorporation. This study emphasizes the ability of modeling to estimate management practice effectiveness at larger scales when experimental data are not available.
A simulation study to quantify the impacts of exposure ...
Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. Methods: ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e., ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Results: Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copoll
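The core attenuation mechanism this study quantifies can be reproduced in a few lines: classical measurement error in the exposure biases a Poisson time-series log-RR toward the null. The error size, baseline rate, and effect size below are illustrative only, not the study's empirical values:

```python
import numpy as np

# Simulate attenuation of a Poisson-regression log-RR by classical error.
rng = np.random.default_rng(0)
n = 50_000
beta_true = 0.10                                   # true log-RR per unit exposure
x_true = rng.normal(0.0, 1.0, n)                   # 'true' daily exposure
x_obs = x_true + rng.normal(0.0, 1.0, n)           # observed = true + classical error
y = rng.poisson(np.exp(2.0 + beta_true * x_true))  # daily counts, e.g. ED visits

def poisson_fit(x, y, iters=25):
    """Newton-Raphson fit of a two-parameter log-linear Poisson model."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.array([np.log(y.mean()), 0.0])       # start at the intercept-only fit
    for _ in range(iters):
        mu = np.exp(X @ beta)
        beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return beta

b_true_exposure = poisson_fit(x_true, y)[1]
b_with_error = poisson_fit(x_obs, y)[1]
print(f"log-RR on true exposure:     {b_true_exposure:.3f}")
print(f"log-RR on error-prone proxy: {b_with_error:.3f} (attenuated toward the null)")
```

With equal true-exposure and error variances, the fitted coefficient on the proxy is roughly halved, mirroring the attenuation ranges reported in the Results.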
Hardware in-the-Loop Demonstration of Real-Time Orbit Determination in High Earth Orbits
NASA Technical Reports Server (NTRS)
Moreau, Michael; Naasz, Bo; Leitner, Jesse; Carpenter, J. Russell; Gaylor, Dave
2005-01-01
This paper presents results from a study conducted at Goddard Space Flight Center (GSFC) to assess the real-time orbit determination accuracy of GPS-based navigation in a number of different high Earth orbital regimes. Measurements collected from a GPS receiver (connected to a GPS radio frequency (RF) signal simulator) were processed in a navigation filter in real time, and the resulting errors in the estimated states were assessed. For the most challenging orbit simulated, a 12-hour Molniya orbit with an apogee of approximately 39,000 km, mean total position and velocity errors were approximately 7 meters and 3 mm/s, respectively. The study also makes direct comparisons between the results from the above hardware-in-the-loop tests and results obtained by processing GPS measurements generated from software simulations. Care was taken to use the same models and assumptions in the generation of both the real-time and software-simulated measurements, so that the real-time data could be used to help validate the assumptions and models used in the software simulations. The study makes use of the unique capabilities of the Formation Flying Test Bed at GSFC, which provides a capability to interface with different GPS receivers and to produce real-time, filtered orbit solutions even when fewer than four satellites are visible. The result is a powerful tool for assessing onboard navigation performance in a wide range of orbital regimes, and a test-bed for developing software and procedures for use in real spacecraft applications.
Integration of an Earth-Based Science Team During Human Exploration of Mars
NASA Technical Reports Server (NTRS)
Chappell, Steven P.; Beaton, Kara H.; Newton, Carolyn; Graff, Trevor G.; Young, Kelsey E.; Coan, David; Abercromby, Andrew F. J.; Gernhardt, Michael L.
2017-01-01
NASA Extreme Environment Mission Operations (NEEMO) is an underwater spaceflight analog that allows a true mission-like operational environment and uses buoyancy effects and added weight to simulate different gravity levels. A mission was undertaken in 2016, NEEMO 21, at the Aquarius undersea research habitat. During the mission, the effects of varied operations concepts with representative communication latencies associated with Mars missions were studied. Six subjects were weighed out to simulate partial gravity and evaluated different operations concepts for integration and management of a simulated Earth-based science team (ST) who provided input and direction during exploration activities. Exploration traverses were planned in advance based on precursor data collected. Subjects completed science-related tasks including presampling surveys and marine-science-based sampling during saturation dives up to 4 hours in duration that simulated extravehicular activity (EVA) on Mars. A communication latency of 15 minutes in each direction between space and ground was simulated throughout the EVAs. Objective data included task completion times, total EVA time, crew idle time, translation time, and ST assimilation time (defined as the time available for the science team to discuss, review and act upon data/imagery after they have been collected and transmitted to the ground). Subjective data included acceptability, simulation quality, capability assessment ratings, and comments. In addition, comments from both the crew and the ST were captured during the post-mission debrief. Here, we focus on the acceptability of the operations concepts studied and the capabilities most enhancing or enabling in the operations concept. The importance and challenges of designing EVA timelines to account for the length of the task, the level of interaction with the ground that is required/desired, and communication latency are discussed.
Software Estimates Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Smith, C. L.
2003-01-01
Simulation-Based Cost Model (SiCM), a discrete event simulation developed in Extend, simulates pertinent aspects of the testing of rocket propulsion test articles for the purpose of estimating the costs of such testing during time intervals specified by its users. A user enters input data for control of simulations; information on the nature of, and activity in, a given testing project; and information on resources. Simulation objects are created on the basis of this input. Costs of the engineering-design, construction, and testing phases of a given project are estimated from the numbers and labor rates of engineers and technicians employed in each phase; the duration of each phase; the costs of materials used in each phase; and, for the testing phase, the rate of maintenance of the testing facility. The three main outputs of SiCM are (1) a curve, updated at each iteration of the simulation, that shows overall expenditures vs. time during the interval specified by the user; (2) a histogram of the total costs from all iterations of the simulation; and (3) a table displaying means and variances of cumulative costs for each phase from all iterations. Other outputs include spending curves for each phase.
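The phase-cost logic described above (labor rates × headcount × duration, plus materials, iterated to build a histogram of total cost) can be sketched as a small Monte Carlo. All rates, headcounts, and durations below are invented; SiCM itself is a discrete event simulation built in Extend, not this script:

```python
import numpy as np

# Monte Carlo analogue of a per-phase cost model:
# phase cost = (engineers*rate + technicians*rate) * duration + materials,
# with uncertain durations drawn from a triangular distribution.
rng = np.random.default_rng(7)
iterations = 10_000
phases = {
    #  name          eng  tech  eng_rate  tech_rate  mean_weeks  materials ($)
    "design":       (4,   1,    3000.0,   1800.0,    12.0,        50_000.0),
    "construction": (2,   6,    3000.0,   1800.0,    20.0,       400_000.0),
    "testing":      (3,   4,    3000.0,   1800.0,    16.0,       150_000.0),
}

totals = np.zeros(iterations)
for name, (n_eng, n_tech, r_eng, r_tech, weeks, materials) in phases.items():
    # Durations vary around the planned value (triangular: -20% to +50%).
    dur = rng.triangular(0.8 * weeks, weeks, 1.5 * weeks, iterations)
    cost = (n_eng * r_eng + n_tech * r_tech) * dur + materials
    totals += cost
    print(f"{name:12s} mean ${cost.mean():,.0f}  variance {cost.var():.3g}")

print(f"total        mean ${totals.mean():,.0f}  (histogram over {iterations} runs)")
```

The array `totals` plays the role of SiCM's total-cost histogram, and the per-phase means and variances correspond to its third output.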
The interaction of Io's plumes and sublimation atmosphere
NASA Astrophysics Data System (ADS)
McDoniel, William J.; Goldstein, David B.; Varghese, Philip L.; Trafton, Laurence M.
2017-09-01
Io's volcanic plumes are the ultimate source of its SO2 atmosphere, but past eruptions have covered the moon in surface frost which sublimates in sunlight. Today, Io's atmosphere is a result of some combination of volcanism and sublimation, but it is unknown exactly how these processes work together to create the observed atmosphere. We use the direct simulation Monte Carlo (DSMC) method to model the interaction of giant plumes with a sublimation atmosphere. Axisymmetric plume/atmosphere simulations demonstrate that the total mass of SO2 above Io's surface is only poorly approximated as the sum of independent volcanic and sublimated components. A simple analytic model is developed to show how variation in the mass of erupting gas above Io's surface can counteract variation in the mass of its hydrostatic atmosphere as surface temperature changes over a Jupiter year. Three-dimensional, unsteady simulations of giant plumes over an Io day are also presented, showing how plume material becomes suspended in the sublimation atmosphere. We find that a plume which produces some total mass above Io's surface at night will cause a net increase in the noon-time atmosphere of only a fraction of the night-time value. However, as much as seven times the night-side mass of the plume will become suspended in the sublimation atmosphere, altering its composition and displacing sublimated material.
Surgical simulation tasks challenge visual working memory and visual-spatial ability differently.
Schlickum, Marcus; Hedman, Leif; Enochsson, Lars; Henningsohn, Lars; Kjellin, Ann; Felländer-Tsai, Li
2011-04-01
New strategies for the selection and training of physicians are emerging. Previous studies have demonstrated a correlation of visual-spatial ability and visual working memory with surgical simulator performance. The aim of this study was to perform a detailed analysis of how these abilities are associated with simulator performance metrics for different task content. The hypothesis is that the importance of visual-spatial ability and visual working memory varies with task content. Twenty-five medical students participated in the study, which involved testing visual-spatial ability using the MRT-A test and visual working memory using the RoboMemo computer program. Subjects were also trained and tested for performance in three different surgical simulators. The scores from the psychometric tests and the performance metrics were then correlated using multivariate analysis. MRT-A score correlated significantly with the performance metrics Efficiency of screening (p = 0.006) and Total time (p = 0.01) in the GI Mentor II task and Total score (p = 0.02) in the MIST-VR simulator task. In the Uro Mentor task, both the MRT-A score and the visual working memory 3-D cube test score as presented in the RoboMemo program (p = 0.02) correlated with Total score (p = 0.004). In this study we have shown that differences exist regarding the impact of visual abilities and task content on simulator performance. When designing future cognitive training programs and testing regimes, the design may therefore need to be adjusted to the specific surgical task to be trained.
Real-time simulator for designing electron dual scattering foil systems.
Carver, Robert L; Hogstrom, Kenneth R; Price, Michael J; LeBlanc, Justin D; Pitcher, Garrett M
2014-11-08
The purpose of this work was to develop a user-friendly, accurate, real-time computer simulator to facilitate the design of dual foil scattering systems for electron beams on radiotherapy accelerators. The simulator allows for a relatively quick, initial design that can be refined and verified with subsequent Monte Carlo (MC) calculations and measurements. The simulator also is a powerful educational tool. The simulator consists of an analytical algorithm for calculating electron fluence and X-ray dose and a graphical user interface (GUI) C++ program. The algorithm predicts electron fluence using Fermi-Eyges multiple Coulomb scattering theory with the reduced Gaussian formalism for scattering powers. The simulator also estimates central-axis and off-axis X-ray dose arising from the dual foil system. Once the geometry of the accelerator is specified, the simulator allows the user to continuously vary primary scattering foil material and thickness, secondary scattering foil material and Gaussian shape (thickness and sigma), and beam energy. The off-axis electron relative fluence or total dose profile and central-axis X-ray dose contamination are computed and displayed in real time. The simulator was validated by comparison of off-axis electron relative fluence and X-ray percent dose profiles with those calculated using EGSnrc MC. Over the energy range 7-20 MeV, using present foils on an Elekta radiotherapy accelerator, the simulator was able to reproduce MC profiles to within 2% out to 20 cm from the central axis. The central-axis X-ray percent dose predictions matched measured data to within 0.5%. The calculation time was approximately 100 ms using a single Intel 2.93 GHz processor, which allows for real-time variation of foil geometrical parameters using slider bars.
This work demonstrates how the user-friendly GUI and real-time nature of the simulator make it an effective educational tool for gaining a better understanding of the effects that various system parameters have on a relative dose profile. This work also demonstrates a method for using the simulator as a design tool for creating custom dual scattering foil systems in the clinical range of beam energies (6-20 MeV).
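The physics at the core of such a simulator can be illustrated with a stripped-down model: the primary foil gives the beam a Gaussian angular spread (here via Highland's approximation to multiple Coulomb scattering), which projects to a Gaussian off-axis fluence at the treatment plane. The foil and geometry values below are hypothetical, and the real simulator uses the full Fermi-Eyges formalism rather than this single-foil approximation:

```python
import numpy as np

def highland_sigma_theta(E_mev, t_over_X0):
    """Characteristic multiple-scattering angle (radians) of a thin foil,
    via the Highland formula; pv is roughly E + m_e c^2 for a relativistic
    electron (an approximation)."""
    pv = E_mev + 0.511
    return (14.1 / pv) * np.sqrt(t_over_X0) * (1.0 + np.log10(t_over_X0) / 9.0)

E = 13.0            # beam energy (MeV)
t_over_X0 = 0.02    # foil thickness in radiation lengths (assumed)
L = 100.0           # foil-to-plane distance (cm)

sigma_theta = highland_sigma_theta(E, t_over_X0)
sigma_r = sigma_theta * L            # projected Gaussian width at the plane (cm)

r = np.linspace(0.0, 20.0, 201)      # off-axis distance (cm), 0.1 cm steps
rel_fluence = np.exp(-r**2 / (2.0 * sigma_r**2))
print(f"sigma_theta = {sigma_theta*1e3:.1f} mrad, sigma_r = {sigma_r:.1f} cm")
print(f"relative fluence at 10 cm off-axis: {rel_fluence[100]:.3f}")   # r = 10 cm
```

Varying `t_over_X0` or `E` and replotting `rel_fluence` mimics the slider-bar interaction the abstract describes: thicker foils or lower energies broaden the profile, flattening it across the field at the cost of output.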
Zhou, Y; Murata, T; Defanti, T A
2000-01-01
Despite their attractive properties, networked virtual environments (net-VEs) are notoriously difficult to design, implement, and test due to the concurrency, real-time and networking features in these systems. Net-VEs place high quality-of-service (QoS) requirements on the network to maintain natural and real-time interactions among users. The current practice for net-VE design is largely empirical trial and error and lacks formal methods. This paper proposes to apply a Petri net formal modeling technique to a net-VE, NICE (narrative immersive constructionist/collaborative environment), predict the net-VE performance based on simulation, and improve the net-VE performance. NICE is essentially a network of collaborative virtual reality systems called CAVEs (CAVE Automatic Virtual Environment). First, we introduce extended fuzzy-timing Petri net (EFTN) modeling and analysis techniques. Then, we present EFTN models of the CAVE, NICE, and the transport layer protocol used in NICE: the transmission control protocol (TCP). We show a possibility analysis based on the EFTN model of the CAVE. Then, using these models and Design/CPN as the simulation tool, we conducted various simulations to study the real-time behavior, network effects and performance (latencies and jitters) of NICE. Our simulation results are consistent with experimental data.
Lee, Ju-Young; Lee, Soon Hee; Kim, Jung-Hee
2018-05-01
Despite the increase in simulators at nursing schools and the high expectations regarding simulation for nursing education, the unique features of integrating simulation-based education into the curriculum are unclear. The purpose of this study was to assess the curriculum development process of simulation-based educational interventions in nursing in Korea. An integrative review of the literature was used. Korean Studies Information Services System (KISS), Korean Medical Database (KMbase), KoreaMed, Research Information Sharing Service (RISS), and National Digital Library (NDL). Comprehensive databases were searched for records without a time limit (until December 2016), using terms such as "nursing," "simulation," and "education." A total of 1006 studies were screened. According to the model for simulation-based curriculum development (Khamis et al., 2016), the quality of reporting on curriculum development was reviewed. A total of 125 papers were included in this review. In three studies, simulation scenarios progressed from easy to difficult levels, and none of the studies reported the level of learners' proficiency. Only 17.6% of the studies reported faculty development or preparation. Inter-rater reliability for performance tests was reported in 24 studies, and only two studies evaluated the long-term effects of simulation education; no statistically significant change was found across publication years. These findings suggest that educators and researchers should pay more attention to the educational strategies needed to integrate simulation into nursing education. This review could help guide educators and researchers in developing a simulation-based curriculum and improve the quality of nursing education research. Copyright © 2018 Elsevier Ltd. All rights reserved.
Economic Feasibility of Staffing the Intensive Care Unit with a Communication Facilitator.
Khandelwal, Nita; Benkeser, David; Coe, Norma B; Engelberg, Ruth A; Curtis, J Randall
2016-12-01
In the intensive care unit (ICU), complex decision making by clinicians and families requires good communication to ensure that care is consistent with the patients' values and goals. To assess the economic feasibility of staffing ICUs with a communication facilitator. Data were from a randomized trial of an "ICU communication facilitator" linked to hospital financial records; eligible patients (n = 135) were admitted to the ICU at a single hospital with predicted mortality ≥30% and a surrogate decision maker. Adjusted regression analyses assessed differences in ICU total and direct variable costs between intervention and control patients. A bootstrap-based simulation assessed the cost efficiency of a facilitator while varying the full-time equivalent of the facilitator and the ICU mortality risk. Total ICU costs (mean, -22.8k; 95% CI, -42.0k to -3.6k; P = 0.02) and average daily ICU costs (mean, -0.38k; 95% CI, -0.65k to -0.11k; P = 0.006) were reduced significantly with the intervention. Despite more contacts, families of survivors spent less time per encounter with facilitators than did families of decedents (mean, 25 [SD, 11] min vs. 36 [SD, 14] min). Simulation demonstrated maximal weekly savings with a 1.0 full-time equivalent facilitator and a predicted ICU mortality of 15% (total weekly ICU cost savings, $58.4k [95% CI, $57.7k-59.2k]; weekly direct variable savings, $5.7k [95% CI, $5.5k-5.8k]) after incorporating facilitator costs. Adding a full-time trained communication facilitator in the ICU may improve the quality of care while simultaneously reducing short-term (direct variable) and long-term (total) health care costs. This intervention is likely to be more cost effective in a lower-mortality population.
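The bootstrap-based cost-efficiency simulation mentioned above can be sketched as follows. All patient-level cost differences, throughput, and facilitator salary here are invented placeholders, not the study's data; the point is only the resampling mechanics:

```python
import numpy as np

# Bootstrap a confidence interval on weekly savings net of facilitator cost.
rng = np.random.default_rng(1)
n_patients = 135
# Hypothetical per-patient ICU cost change (in $1000s), centered near -22.8k
cost_diff = rng.normal(-22.8, 40.0, n_patients)

facilitator_weekly_cost = 2.0   # $k per week for a 1.0 FTE (assumed)
patients_per_week = 2.5         # assumed ICU throughput

# Resample patients with replacement; each replicate gives a mean cost change.
boot = np.array([
    rng.choice(cost_diff, n_patients, replace=True).mean()
    for _ in range(5000)
])
weekly_savings = -boot * patients_per_week - facilitator_weekly_cost
lo, hi = np.percentile(weekly_savings, [2.5, 97.5])
print(f"bootstrap 95% CI for net weekly savings: ${lo:.1f}k to ${hi:.1f}k")
```

Repeating this over a grid of facilitator FTE values and assumed mortality strata is the shape of the sensitivity analysis the abstract describes.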
Computational aspects of real-time simulation of rotary-wing aircraft. M.S. Thesis
NASA Technical Reports Server (NTRS)
Houck, J. A.
1976-01-01
A study was conducted to determine the effects of degrading a rotating blade element rotor mathematical model suitable for real-time simulation of rotorcraft. Three methods of degradation were studied: reduction of number of blades, reduction of number of blade segments, and increasing the integration interval, which has the corresponding effect of increasing blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations are made concerning model degradation which should serve as a guide for future users of this mathematical model, and in general, they are in order of minimum impact on model validity: (1) reduction of number of blade segments; (2) reduction of number of blades; and (3) increase of integration interval and azimuthal advance angle. Extreme limits are specified beyond which a different rotor mathematical model should be used.
Effects of rotor model degradation on the accuracy of rotorcraft real time simulation
NASA Technical Reports Server (NTRS)
Houck, J. A.; Bowles, R. L.
1976-01-01
The effects are studied of degrading a rotating blade element rotor mathematical model to meet various real-time simulation requirements of rotorcraft. Three methods of degradation were studied: reduction of number of blades, reduction of number of blade segments, and increasing the integration interval, which has the corresponding effect of increasing blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations are made concerning model degradation which should serve as a guide for future users of this mathematical model, and in general, they are in order of minimum impact on model validity: (1) reduction of number of blade segments, (2) reduction of number of blades, and (3) increase of integration interval and azimuthal advance angle. Extreme limits are specified beyond which the rotating blade element rotor mathematical model should not be used.
He, Yue; Zhu, Han Guang; Zhang, Zhi Yuan; He, Jie; Sader, Robert
2009-12-01
A total maxillectomy always causes composite defects of the maxilla, zygomatic bone, orbital floor or rim, and palatal and nasal mucosal lining. This leads to significant functional and cosmetic consequences after ablative surgery. The purpose of this clinical study was to perform a preliminary three-dimensional (3D) reconstruction of a total maxillectomy defect with sufficient bone support and soft tissue lining. A 3D model simulation technique and a free fibula osteomyocutaneous flap, flow-through from a radial forearm flap, were used to reconstruct a total maxillectomy defect for a 21-year-old female patient. Preoperatively, 3D simulated resin models of the skeleton and fibula were used to design the osteotomies and bone segment replacement. At surgery, a 22-cm free fibula was divided into 4 segments to construct a maxillary skeletal framework following the preoperative model surgical planning, with a radial forearm flap flow-through for the free fibula flap with skin paddle to repair the palatal and nasal region. Both the free fibula and radial forearm flaps survived, and the patient was satisfied with the results both esthetically and functionally after dental rehabilitation, which was carried out 6 months after surgery. This preliminary clinical study and case demonstrated that: the fibula osteomyocutaneous flap is an ideal donor site in 3D total maxillectomy defect reconstruction, because its thickness, length, and bone uniformity make it an ideal support for dental rehabilitation; the flow-through radial forearm flap not only serves as the vascular bridge for midface reconstruction, but also provides sufficient soft tissue cover for the intraoral defect; and 3D model simulation and preoperative surgical planning are effective methods to refine reconstruction surgery, shorten the surgical time, and predict the outcome after operation.
Loccisano, Anne E; Acevedo, Orlando; DeChancie, Jason; Schulze, Brita G; Evanseck, Jeffrey D
2004-05-01
The utility of multiple trajectories to extend the time scale of molecular dynamics simulations is reported for the spectroscopic A-states of carbonmonoxy myoglobin (MbCO). Experimentally, the A0 → A(1-3) transition has been observed to occur on a 10 μs time scale at 300 K, which is beyond the time scale of standard molecular dynamics simulations. To simulate this transition, 10 short (400 ps) and two longer time (1.2 ns) molecular dynamics trajectories, starting from five different crystallographic and solution phase structures with random initial velocities, centered in a 37 Å-radius sphere of water, have been used to sample the native fold of MbCO. Analysis of the ensemble of structures gathered over the cumulative 5.6 ns reveals two biomolecular motions involving the side chains of His64 and Arg45 that explain the spectroscopic states of MbCO. The 10 μs A0 → A(1-3) transition involves the motion of His64, where the distance between His64 and CO is found to vary by up to 8.8 ± 1.0 Å during the transition of His64 from the ligand (A(1-3)) to bulk solvent (A0). The His64 motion occurs within a single trajectory only once; however, the multiple trajectories populate the spectroscopic A-states fully. Consequently, multiple independent molecular dynamics simulations have been found to extend biomolecular motion from 5 ns of total simulation to experimental phenomena on the microsecond time scale.
NASA Astrophysics Data System (ADS)
Sharma, A.; Woldemeskel, F. M.; Sivakumar, B.; Mehrotra, R.
2014-12-01
We outline a new framework for assessing uncertainties in model simulations, be they hydro-ecological simulations for known scenarios, or climate simulations for assumed scenarios representing the future. This framework is illustrated here using GCM projections for future climates for hydrologically relevant variables (precipitation and temperature), with the uncertainty segregated into three dominant components - model uncertainty, scenario uncertainty (representing greenhouse gas emission scenarios), and ensemble uncertainty (representing uncertain initial conditions and states). A novel uncertainty metric, the Square Root Error Variance (SREV), is used to quantify the uncertainties involved. The SREV requires: (1) Interpolating raw and corrected GCM outputs to a common grid; (2) Converting these to percentiles; (3) Estimating SREV for model, scenario, initial condition and total uncertainty at each percentile; and (4) Transforming SREV to a time series. The outcome is a spatially varying series of SREVs associated with each model that can be used to assess how uncertain the system is at each simulated point or time. This framework, while illustrated in a climate change context, is applicable to the assessment of uncertainties in any modelling framework. The proposed method is applied to monthly precipitation and temperature from 6 CMIP3 and 13 CMIP5 GCMs across the world. For CMIP3, the B1, A1B and A2 scenarios are considered, whereas for CMIP5, RCP2.6, RCP4.5 and RCP8.5, representing low, medium and high emissions, are used. For both CMIP3 and CMIP5, model structure is the largest source of uncertainty, which reduces significantly after correcting for biases. Scenario uncertainty increases in the future, especially for temperature, due to the divergence of the three emission scenarios analysed. While CMIP5 precipitation simulations exhibit a small reduction in total uncertainty over CMIP3, almost no reduction is observed for temperature projections.
Estimation of uncertainty in both space and time sheds light on the spatial and temporal patterns of uncertainties in GCM outputs, providing an effective platform for risk-based assessments of any alternate plans or decisions that may be formulated using GCM simulations.
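The per-component estimation in steps (2)-(3) above can be sketched as follows. This is a minimal illustration that takes SREV to be the root-mean-square deviation of ensemble members from their mean along one axis of variation; the paper's exact estimator and its percentile transformation may differ.

```python
import numpy as np

# Minimal sketch: SREV taken as the RMS deviation of ensemble members
# from the ensemble mean, applied along the "model" axis. One plausible
# reading of the metric; the paper's exact estimator may differ.
def srev(values, axis=0):
    values = np.asarray(values, dtype=float)
    mean = values.mean(axis=axis, keepdims=True)
    return np.sqrt(((values - mean) ** 2).mean(axis=axis))

rng = np.random.default_rng(0)
proj = rng.normal(loc=2.0, scale=0.5, size=(4, 12))  # 4 "models" x 12 months
model_unc = srev(proj)    # model-uncertainty series, one value per month
assert model_unc.shape == (12,)
assert np.all(model_unc >= 0.0)
```

Scenario and initial-condition components would be obtained the same way by varying the axis over scenarios or ensemble members instead of models.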
Operating Room of the Future: Advanced Technologies in Safe and Efficient Operating Rooms
2010-10-01
research, and treatment purposes. A laser optical mouse and a graphics tablet were used by radiologists to segment 12 simulated reference lesions per...radiologists segmented a total of 132 simulated lesions. Overall error in contour segmentation was less with the graphics tablet than with the mouse...(P<0.0001). Error in area of segmentation was not significantly different between the tablet and the mouse (P=0.62). Time for segmentation was less with
Generation of an incident focused light pulse in FDTD.
Capoğlu, Ilker R; Taflove, Allen; Backman, Vadim
2008-11-10
A straightforward procedure is described for accurately creating an incident focused light pulse in the 3-D finite-difference time-domain (FDTD) electromagnetic simulation of the image space of an aplanatic converging lens. In this procedure, the focused light pulse is approximated by a finite sum of plane waves, and each plane wave is introduced into the FDTD simulation grid using the total-field/scattered-field (TF/SF) approach. The accuracy of our results is demonstrated by comparison with exact theoretical formulas.
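A toy scalar, monochromatic analogue of the plane-wave-sum idea can be sketched as follows. The wavelength, aperture half-angle, and sqrt(cos θ) aplanatic apodization are illustrative assumptions; the paper's actual construction is three-dimensional, vectorial, and pulsed, with each plane wave injected via the TF/SF method rather than summed analytically.

```python
import numpy as np

# Scalar, monochromatic toy version of the plane-wave-sum idea: a
# focused beam is approximated by a finite number of plane waves
# filling a cone of half-angle alpha, all in phase at the focus.
# Wavelength, aperture, and sqrt(cos(theta)) apodization are
# illustrative assumptions, not the paper's parameters.
def focused_field(z, wavelength=0.5, alpha=np.pi / 6, n_waves=200):
    k = 2.0 * np.pi / wavelength
    thetas = np.linspace(0.0, alpha, n_waves)
    weights = np.sqrt(np.cos(thetas)) * np.sin(thetas)  # aplanatic weighting
    field = np.zeros(len(z), dtype=complex)
    for theta, w in zip(thetas, weights):
        field += w * np.exp(1j * k * np.cos(theta) * np.asarray(z))
    return field

z = np.linspace(-5.0, 5.0, 201)
intensity = np.abs(focused_field(z)) ** 2
assert int(np.argmax(intensity)) == 100  # peak at the focus, z = 0
```

Because every component wave is in phase only at the focus, the summed intensity peaks there, which is the qualitative behaviour the finite plane-wave decomposition is meant to reproduce.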
Generation of an incident focused light pulse in FDTD
Çapoğlu, İlker R.; Taflove, Allen; Backman, Vadim
2009-01-01
A straightforward procedure is described for accurately creating an incident focused light pulse in the 3-D finite-difference time-domain (FDTD) electromagnetic simulation of the image space of an aplanatic converging lens. In this procedure, the focused light pulse is approximated by a finite sum of plane waves, and each plane wave is introduced into the FDTD simulation grid using the total-field/scattered-field (TF/SF) approach. The accuracy of our results is demonstrated by comparison with exact theoretical formulas. PMID:19582013
NASA Astrophysics Data System (ADS)
Bhardwaj, Manish; McCaughan, Leon; Olkhovets, Anatoli; Korotky, Steven K.
2006-12-01
We formulate an analytic framework for the restoration performance of path-based restoration schemes in planar mesh networks. We analyze various switch architectures and signaling schemes and model their total restoration interval. We also evaluate the network global expectation value of the time to restore a demand as a function of network parameters. We analyze a wide range of nominally capacity-optimal planar mesh networks and find our analytic model to be in good agreement with numerical simulation data.
Multimodel comparison of the ionosphere variability during the 2009 sudden stratosphere warming
NASA Astrophysics Data System (ADS)
Pedatella, N. M.; Fang, T.-W.; Jin, H.; Sassi, F.; Schmidt, H.; Chau, J. L.; Siddiqui, T. A.; Goncharenko, L.
2016-07-01
A comparison of different model simulations of the ionosphere variability during the 2009 sudden stratosphere warming (SSW) is presented. The focus is on the equatorial and low-latitude ionosphere simulated by the Ground-to-topside model of the Atmosphere and Ionosphere for Aeronomy (GAIA), Whole Atmosphere Model plus Global Ionosphere Plasmasphere (WAM+GIP), and Whole Atmosphere Community Climate Model eXtended version plus Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model (WACCMX+TIMEGCM). The simulations are compared with observations of the equatorial vertical plasma drift in the American and Indian longitude sectors, zonal mean F region peak density (NmF2) from the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) satellites, and ground-based Global Positioning System (GPS) total electron content (TEC) at 75°W. The model simulations all reproduce the observed morning enhancement and afternoon decrease in the vertical plasma drift, as well as the progression of the anomalies toward later local times over the course of several days. However, notable discrepancies among the simulations are seen in terms of the magnitude of the drift perturbations, and rate of the local time shift. Comparison of the electron densities further reveals that although many of the broad features of the ionosphere variability are captured by the simulations, there are significant differences among the different model simulations, as well as between the simulations and observations. Additional simulations are performed where the neutral atmospheres from four different whole atmosphere models (GAIA, HAMMONIA (Hamburg Model of the Neutral and Ionized Atmosphere), WAM, and WACCMX) provide the lower atmospheric forcing in the TIME-GCM. 
These simulations demonstrate that different neutral atmospheres, in particular, differences in the solar migrating semidiurnal tide, are partly responsible for the differences in the simulated ionosphere variability in GAIA, WAM+GIP, and WACCMX+TIMEGCM.
Lin, Chih-Hao; Kao, Chung-Yao; Huang, Chong-Ye
2015-01-01
Ambulance diversion (AD) is considered one of the possible solutions to relieve emergency department (ED) overcrowding. Study of the effectiveness of various AD strategies is a prerequisite for policy-making. Our aim is to develop a tool that quantitatively evaluates the effectiveness of various AD strategies. A simulation model and a computer simulation program were developed. Three sets of simulations were executed to evaluate AD initiating criteria, patient-blocking rules, and AD intervals, respectively. The crowdedness index, the patient waiting time for service, and the percentage of adverse patients were assessed to determine the effect of various AD policies. Simulation results suggest that, in a certain setting, the best timing for implementing AD is when the crowdedness index reaches the critical value of 1.0, an indicator that the ED is operating at its maximal capacity. The strategy of diverting all patients transported by ambulance is more effective than diverting either high-acuity patients only or low-acuity patients only. Given a total allowable AD duration, implementing AD multiple times with short intervals generally has a better effect than a single AD with the maximal allowable duration. An input-throughput-output simulation model is proposed for simulating ED operation. The effectiveness of several AD strategies in relieving ED overcrowding was assessed via computer simulations based on this model. With appropriate parameter settings, the model can represent medical resource providers of different scales. It is also feasible to expand the simulations to evaluate the effect of AD strategies on a community basis. The results may offer insights for making effective AD policies. Copyright © 2012. Published by Elsevier B.V.
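The crowdedness-index rule described above can be illustrated with a deliberately simple input-throughput-output sketch. The arrival and service rates, the capacity, and the assumption that half of all arrivals come by ambulance (and are the ones diverted) are invented for illustration, not the paper's parameters.

```python
import random

# Toy input-throughput-output ED model. Rates, capacity, and the
# half-ambulance arrival share are hypothetical placeholders.
def simulate_ed(hours=500, capacity=20, arrival_rate=4.5,
                service_rate=4.0, divert_at=None, seed=1):
    rng = random.Random(seed)
    occupancy, crowding = 0, []
    for _ in range(hours):
        arrivals = sum(rng.random() < arrival_rate / 10 for _ in range(10))
        if divert_at is not None and occupancy / capacity >= divert_at:
            arrivals -= arrivals // 2          # divert the ambulance share
        occupancy += arrivals
        served = sum(rng.random() < service_rate / 10 for _ in range(10))
        occupancy = max(0, occupancy - served)
        crowding.append(occupancy / capacity)
    return sum(crowding) / len(crowding)       # mean crowdedness index

baseline = simulate_ed()
with_ad = simulate_ed(divert_at=1.0)
assert with_ad < baseline  # diverting at index 1.0 lowers mean crowding
```

In this toy setting arrivals slightly exceed service capacity, so without diversion the index drifts upward, while triggering diversion at 1.0 holds occupancy near capacity.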
White, Ian; Buchberg, Brian; Tsikitis, V Liana; Herzig, Daniel O; Vetto, John T; Lu, Kim C
2014-06-01
Colorectal cancer is the second most common cause of death in the USA. The need for screening colonoscopies, and thus adequately trained endoscopists, particularly in rural areas, is on the rise. Recent increases in the endoscopic cases required for surgical resident graduation by the Surgery Residency Review Committee (RRC) further emphasize the need for more effective endoscopic training during residency. We sought to determine whether a virtual reality colonoscopy simulator enhances surgical resident endoscopic education by detecting improvement in colonoscopy skills before and after 6 weeks of formal clinical endoscopic training. We conducted a retrospective review of prospectively collected surgery resident data on an endoscopy simulator. Residents performed four different clinical scenarios on the endoscopic simulator before and after a 6-week endoscopic training course. Data were collected over a 5-year period from 94 different residents performing a total of 795 colonoscopic simulation scenarios. Main outcome measures included time to cecal intubation, "red out" time, and severity of simulated patient discomfort (mild, moderate, severe, extreme) during colonoscopy scenarios. Average time to intubation of the cecum was 6.8 min for residents who had not undergone endoscopic training versus 4.4 min for those who had (p < 0.001). For residents who could be compared against themselves (pre- vs. post-training), cecal intubation times decreased from 7.1 to 4.3 min (p < 0.001). Post-endoscopy rotation residents caused less severe discomfort during simulated colonoscopy than pre-endoscopy rotation residents (4 vs. 10%; p = 0.004). Virtual reality endoscopic simulation is an effective tool both for augmenting surgical resident endoscopy cancer education and for measuring improvement in resident performance after formal clinical endoscopic training.
Toofanny, Rudesh D; Simms, Andrew M; Beck, David A C; Daggett, Valerie
2011-08-10
Molecular dynamics (MD) simulations offer the ability to observe the dynamics and interactions of both whole macromolecules and individual atoms as a function of time. Taken in context with experimental data, atomic interactions from simulation provide insight into the mechanics of protein folding, dynamics, and function. The calculation of atomic interactions or contacts from an MD trajectory is computationally demanding and the work required grows exponentially with the size of the simulation system. We describe the implementation of a spatial indexing algorithm in our multi-terabyte MD simulation database that significantly reduces the run-time required for discovery of contacts. The approach is applied to the Dynameomics project data. Spatial indexing, also known as spatial hashing, is a method that divides the simulation space into regular sized bins and attributes an index to each bin. Since, the calculation of contacts is widely employed in the simulation field, we also use this as the basis for testing compression of data tables. We investigate the effects of compression of the trajectory coordinate tables with different options of data and index compression within MS SQL SERVER 2008. Our implementation of spatial indexing speeds up the calculation of contacts over a 1 nanosecond (ns) simulation window by between 14% and 90% (i.e., 1.2 and 10.3 times faster). For a 'full' simulation trajectory (51 ns) spatial indexing reduces the calculation run-time between 31 and 81% (between 1.4 and 5.3 times faster). Compression resulted in reduced table sizes but resulted in no significant difference in the total execution time for neighbour discovery. The greatest compression (~36%) was achieved using page level compression on both the data and indexes. The spatial indexing scheme significantly decreases the time taken to calculate atomic contacts and could be applied to other multidimensional neighbor discovery problems. 
The speed-up enables on-the-fly calculation and visualization of contacts and rapid cross-simulation analysis for knowledge discovery. Using page compression for the atomic coordinate tables and indexes saves ~36% of disk space without any significant decrease in calculation time and should be considered for other non-transactional databases in MS SQL SERVER 2008.
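The spatial-hashing step described above can be illustrated generically: points are binned into cubes whose edge equals the contact cutoff, so each point only needs to be compared against the 27 neighbouring bins. This is a language-agnostic sketch of the technique, not the paper's SQL Server implementation.

```python
import itertools
import math
from collections import defaultdict

# Spatial hashing: hash points into cubic bins of edge `cutoff` and
# compare only neighbouring bins, instead of all O(n^2) pairs.
def contacts_spatial_hash(coords, cutoff):
    bins = defaultdict(list)
    for i, (x, y, z) in enumerate(coords):
        bins[(int(x // cutoff), int(y // cutoff), int(z // cutoff))].append(i)
    pairs = set()
    for (bx, by, bz), members in bins.items():
        for dx, dy, dz in itertools.product((-1, 0, 1), repeat=3):
            for i in members:
                for j in bins.get((bx + dx, by + dy, bz + dz), ()):
                    if i < j and math.dist(coords[i], coords[j]) <= cutoff:
                        pairs.add((i, j))
    return pairs

pts = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (3.0, 3.0, 3.0), (3.2, 3.0, 3.0)]
assert contacts_spatial_hash(pts, 1.0) == {(0, 1), (2, 3)}
```

Choosing the bin edge equal to the cutoff guarantees that every qualifying pair lies in the same or an adjacent bin, which is what makes the pruning exact.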
2011-01-01
Background Molecular dynamics (MD) simulations offer the ability to observe the dynamics and interactions of both whole macromolecules and individual atoms as a function of time. Taken in context with experimental data, atomic interactions from simulation provide insight into the mechanics of protein folding, dynamics, and function. The calculation of atomic interactions or contacts from an MD trajectory is computationally demanding and the work required grows exponentially with the size of the simulation system. We describe the implementation of a spatial indexing algorithm in our multi-terabyte MD simulation database that significantly reduces the run-time required for discovery of contacts. The approach is applied to the Dynameomics project data. Spatial indexing, also known as spatial hashing, is a method that divides the simulation space into regular-sized bins and attributes an index to each bin. Since the calculation of contacts is widely employed in the simulation field, we also use this as the basis for testing compression of data tables. We investigate the effects of compression of the trajectory coordinate tables with different options of data and index compression within MS SQL SERVER 2008. Results Our implementation of spatial indexing speeds up the calculation of contacts over a 1 nanosecond (ns) simulation window by between 14% and 90% (i.e., 1.2 and 10.3 times faster). For a 'full' simulation trajectory (51 ns) spatial indexing reduces the calculation run-time between 31 and 81% (between 1.4 and 5.3 times faster). Compression resulted in reduced table sizes but no significant difference in the total execution time for neighbour discovery. The greatest compression (~36%) was achieved using page level compression on both the data and indexes. Conclusions The spatial indexing scheme significantly decreases the time taken to calculate atomic contacts and could be applied to other multidimensional neighbor discovery problems.
The speed-up enables on-the-fly calculation and visualization of contacts and rapid cross-simulation analysis for knowledge discovery. Using page compression for the atomic coordinate tables and indexes saves ~36% of disk space without any significant decrease in calculation time and should be considered for other non-transactional databases in MS SQL SERVER 2008. PMID:21831299
NASA Astrophysics Data System (ADS)
Tanaka, Kiyoshi; Takano, Shuichi; Sugimura, Tatsuo
2000-10-01
In this work we focus on indexed triangle strips, an extended representation of triangle strips that improves the efficiency of the geometrical transformation of vertices, and present a method to construct optimal indexed triangle strips using a Genetic Algorithm (GA) for real-time visualization. The main objective of this work is to optimally construct indexed triangle strips by improving the ratio at which data stored in the cache memory are reused while simultaneously reducing the total number of indices with the GA. Simulation results verify that the average number of indices and the cache miss ratio per polygon could be kept small, and consequently the total visualization time required for the optimum solution obtained by this scheme could be remarkably reduced.
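The cache-reuse objective above can be illustrated with a simple FIFO vertex-cache model. The cache size and the miss-ratio fitness are illustrative assumptions; the paper's GA encoding of strips is not reproduced here.

```python
from collections import deque

# FIFO vertex-cache model: the fraction of index references that miss
# a cache holding the most recently transformed vertices. Cache size
# is an illustrative assumption.
def cache_miss_ratio(indices, cache_size=16):
    cache, misses = deque(maxlen=cache_size), 0
    for idx in indices:
        if idx not in cache:
            misses += 1
            cache.append(idx)
    return misses / len(indices)

# two triangles sharing an edge: vertex reuse keeps the miss count at 4
shared_edge = [0, 1, 2, 1, 2, 3]
assert cache_miss_ratio(shared_edge) == 4 / 6
```

A GA fitness function could then combine this miss ratio with the total index count, rewarding index orderings that reuse recently cached vertices.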
McRae, Marion E; Chan, Alice; Hulett, Renee; Lee, Ai Jin; Coleman, Bernice
2017-06-01
There are few reports of the effectiveness of, or satisfaction with, simulation for learning cardiac surgical resuscitation skills. We aimed to test the effect of simulation on nurses' self-confidence in performing cardiac surgical resuscitation and on their satisfaction with the simulation experience. A convenience sample of sixty nurses rated their self-confidence to perform cardiac surgical resuscitation skills before and after two simulations. Simulation performance was assessed. Subjects completed the Satisfaction with Simulation Experience scale and demographics. Self-confidence scores to perform all cardiac surgical skills, as measured by paired t-tests, were significantly increased after the simulation (d=-0.50 to 1.78). Self-confidence and cardiac surgical work experience were not correlated with time to performance. Total satisfaction scores were high (mean 80.2, SD 1.06), indicating satisfaction with the simulation. There was no correlation of the satisfaction scores with cardiac surgical work experience (τ=-0.05, ns). Self-confidence scores to perform cardiac surgical resuscitation procedures were higher after the simulation. Nurses were highly satisfied with the simulation experience. Copyright © 2016 Elsevier Ltd. All rights reserved.
Effects of cow diet on the microbial community and organic matter and nitrogen content of feces.
van Vliet, P C J; Reijs, J W; Bloem, J; Dijkstra, J; de Goede, R G M
2007-11-01
Knowledge of the effects of cow diet on manure composition is required to improve nutrient use efficiency and to decrease emissions of N to the environment. Therefore, we performed an experiment with nonlactating cows to determine the consequences of changes in cow rations for the chemical characteristics and the traits of the microbial community in the feces. In this experiment, 16 cows were fed 8 diets, differing in crude protein, neutral detergent fiber, starch, and net energy content. These differences were achieved by changing dietary ingredients or roughage to concentrate ratio. After an adaptation period of 3 wk, fecal material was collected and analyzed. Observed results were compared with simulated values using a mechanistic model that provides insight into the mechanisms involved in the effect of dietary variation on fecal composition. Feces produced on a high-fiber, low-protein diet had a high C:N ratio (>16) and had lower concentrations of both organic and inorganic N than feces on a low-fiber, high-protein diet. Fecal bacterial biomass concentration was highest in high-protein, high-energy diets. The fraction of inorganic N in the feces was not significantly different between the different feces. Microbial biomass in the feces ranged from 1,200 to 8,000 microg of C/g of dry matter (average: 3,700 microg of C/g of dry matter). Bacterial diversity was similar for all fecal materials, but the different protein levels in the feeding regimens induced changes in the community structure present in the different feces. The simulated total N content (N(total)) in the feces ranged from 1.0 to 1.5 times the observed concentrations, whereas the simulated C:N(total) of the feces ranged from 0.7 to 0.9 times the observed C:N(total). However, bacterial biomass C was not predicted satisfactorily (simulated values being on average 3 times higher than observed), giving rise to further discussion on the definition of microbial C in feces. 
Based on these observations, it was concluded that diet composition affected fecal chemical composition and microbial biomass. These changes may affect the nutrient use and efficiency of the manure. Because the present experiment used a limited number of dry cows and extreme diet regimens, extrapolation of results to other dairy cow situations should be done with care.
Flowfield analysis of helicopter rotor in hover and forward flight based on CFD
NASA Astrophysics Data System (ADS)
Zhao, Qinghe; Li, Xiaodong
2018-05-01
The helicopter rotor flowfield is simulated in hover and forward flight based on Computational Fluid Dynamics (CFD). In the hover case, only one rotor blade is simulated, with a periodic boundary condition in the rotational coordinate system and a fixed grid. In the non-lifting forward flight case, the full rotor is simulated in the inertial coordinate system and the whole grid moves rigidly. The dual-time implicit scheme is applied to simulate the unsteady flowfield on the moving grids. The k-ω turbulence model is employed in order to capture the effects of turbulence. To verify the solver, the flowfield around the Caradonna-Tung rotor is computed. The comparison shows good agreement between the numerical results and the experimental data.
Scaling Analysis of Alloy Solidification and Fluid Flow in a Rectangular Cavity
NASA Astrophysics Data System (ADS)
Plotkowski, A.; Fezi, K.; Krane, M. J. M.
A scaling analysis was performed to predict trends in alloy solidification in a side-cooled rectangular cavity. The governing equations for energy and momentum were scaled in order to determine the dependence of various aspects of solidification on the process parameters for a uniform initial temperature and an isothermal boundary condition. This work improved on previous analyses by adding considerations for the cooling bulk fluid flow. The analysis predicted the time required to extinguish the superheat, the maximum local solidification time, and the total solidification time. The results were compared to a numerical simulation for an Al-4.5 wt.% Cu alloy with various initial and boundary conditions. Good agreement was found between the simulation results and the trends predicted by the scaling analysis.
Validating the simulation of large-scale parallel applications using statistical characteristics
Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...
2016-03-01
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
Modeling riverine nutrient transport to the Baltic Sea: a large-scale approach.
Mörth, Carl-Magnus; Humborg, Christoph; Eriksson, Hanna; Danielsson, Asa; Medina, Miguel Rodriguez; Löfgren, Stefan; Swaney, Dennis P; Rahm, Lars
2007-04-01
We developed for the first time a catchment model simulating simultaneously the nutrient land-sea fluxes from all 105 major watersheds within the Baltic Sea drainage area. A consistent modeling approach to all these major watersheds, i.e., a consistent handling of water fluxes (hydrological simulations) and loading functions (emission data), will facilitate a comparison of riverine nutrient transport between Baltic Sea subbasins that differ substantially. Hot spots of riverine emissions, such as from the rivers Vistula, Oder, and Daugava or from the Danish coast, can be easily demonstrated, and the comparison between these hot spots and the relatively unperturbed rivers in the northern catchments shows decision-makers where remedial actions are most effective to improve the environmental state of the Baltic Sea and, secondly, what percentage reduction of riverine nutrient loads is possible. The relative difference between measured and simulated fluxes during the validation period was generally small. The cumulative deviation (i.e., relative bias) [Σ(Simulated - Measured)/Σ(Measured) x 100 (%)] from monitored water and nutrient fluxes amounted to +8.2% for runoff, -2.4% for dissolved inorganic nitrogen, +5.1% for total nitrogen, +13% for dissolved inorganic phosphorus and +19% for total phosphorus. Moreover, the model suggests that point sources for total phosphorus compiled by existing pollution load compilations are underestimated because of inconsistencies in calculating effluent loads from municipalities.
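The cumulative deviation (relative bias) used above is a direct translation of the bracketed formula; a sketch:

```python
# Cumulative deviation (relative bias), as in the abstract:
# sum(simulated - measured) / sum(measured) * 100, in percent.
def relative_bias(simulated, measured):
    return 100.0 * (sum(simulated) - sum(measured)) / sum(measured)

assert relative_bias([11, 10], [10, 10]) == 5.0    # +5% overestimation
```

A positive value indicates the model over-predicts the monitored fluxes in aggregate, a negative value that it under-predicts them.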
NASA Astrophysics Data System (ADS)
Shim, J. S.; Rastätter, L.; Kuznetsova, M.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B. A.; Fedrizzi, M.; Förster, M.; Fuller-Rowell, T. J.; Gardner, L. C.; Goncharenko, L.; Huba, J.; McDonald, S. E.; Mannucci, A. J.; Namgaladze, A. A.; Pi, X.; Prokhorov, B. E.; Ridley, A. J.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.
2017-10-01
In order to assess current modeling capability of reproducing storm impacts on total electron content (TEC), we considered quantities such as TEC, TEC changes compared to quiet time values, and the maximum value of the TEC and TEC changes during a storm. We compared the quantities obtained from ionospheric models against ground-based GPS TEC measurements during the 2006 AGU storm event (14-15 December 2006) in the selected eight longitude sectors. We used 15 simulations obtained from eight ionospheric models, including empirical, physics-based, coupled ionosphere-thermosphere, and data assimilation models. To quantitatively evaluate performance of the models in TEC prediction during the storm, we calculated skill scores such as RMS error, Normalized RMS error (NRMSE), ratio of the modeled to observed maximum increase (Yield), and the difference between the modeled peak time and observed peak time. Furthermore, to investigate latitudinal dependence of the performance of the models, the skill scores were calculated for five latitude regions. Our study shows that the RMSE of TEC and TEC changes of the model simulations ranges from about 3 TECU (total electron content unit, 1 TECU = 10^16 el m^-2) (in high latitudes) to about 13 TECU (in low latitudes), which is larger than the latitudinal average GPS TEC error of about 2 TECU. Most model simulations predict TEC better than TEC changes in terms of NRMSE and the difference in peak time, while the opposite holds true in terms of Yield. Model performance strongly depends on the quantities considered, the type of metrics used, and the latitude considered.
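The four skill scores named above can be sketched as follows. The normalization of NRMSE (here, RMSE over the observed mean) is an assumption; the paper may normalize differently.

```python
import numpy as np

# Sketch of the four skill scores named in the text. NRMSE
# normalization (RMSE / observed mean) is an assumption; Yield is the
# modeled-to-observed ratio of maxima, dpeak the peak-position offset.
def skill_scores(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    rmse = float(np.sqrt(np.mean((model - obs) ** 2)))
    nrmse = rmse / float(np.mean(obs))
    yield_ratio = float(model.max() / obs.max())
    dpeak = int(model.argmax()) - int(obs.argmax())
    return rmse, nrmse, yield_ratio, dpeak

obs = [10.0, 20.0, 40.0, 30.0]   # hypothetical TEC observations
mod = [12.0, 18.0, 36.0, 40.0]   # hypothetical model TEC
rmse, nrmse, yld, dpk = skill_scores(mod, obs)
assert abs(rmse - 31 ** 0.5) < 1e-9  # mean squared error is 31 here
assert yld == 1.0                    # both series peak at 40
assert dpk == 1                      # model peaks one step late
```

Applying such scores separately to TEC and to TEC changes, and per latitude band, reproduces the kind of comparison table the study describes.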
Kim, Heejin; Kwon, Sunghyuk; Heo, Jiyoon; Lee, Hojin; Chung, Min K
2014-05-01
Investigating the effect of touch-key size on the usability of In-Vehicle Information Systems (IVISs) is an important research issue, since touch-key size is closely related to safety as well as usability. This study investigated the effects of the touch-key size of IVISs with respect to safety measures (the standard deviation of lane position, the speed variation, the total glance time, the mean glance time, the mean time between glances, and the mean number of glances) and the usability of IVISs (the task completion time, error rate, subjective preference, and NASA-TLX) through a driving simulation. A total of 30 drivers participated in the task of entering 5-digit numbers with various touch-key sizes while performing simulated driving. The size of the touch-key was 7.5 mm, 12.5 mm, 17.5 mm, 22.5 mm or 27.5 mm, and the speed of driving was set to 0 km/h (stationary state), 50 km/h or 100 km/h. As a result, both the driving safety and the usability of the IVISs increased as the touch-key size increased up to a certain size (17.5 mm in this study), at which they reached asymptotes. We performed a Fitts' law analysis of our data, which revealed that the data from the dual-task experiment did not follow Fitts' law. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
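For reference, a Fitts' law fit regresses movement time against the index of difficulty, here taken as ID = log2(2D/W) (the Shannon form log2(D/W + 1) is a common alternative). The toy data below are hypothetical, constructed to follow the law exactly; the study's point is that its dual-task data did not.

```python
import math

# Least-squares fit of MT = a + b * log2(2D/W). A poor fit is what
# the study reports for its dual-task data; this sketch only shows
# the fitting step on hypothetical, perfectly Fitts-like data.
def fitts_fit(distances, widths, times):
    ids = [math.log2(2 * d / w) for d, w in zip(distances, widths)]
    n = len(ids)
    mx, my = sum(ids) / n, sum(times) / n
    b = sum((x - mx) * (y - my) for x, y in zip(ids, times)) / \
        sum((x - mx) ** 2 for x in ids)
    a = my - b * mx
    return a, b

D, W = [100, 100, 200], [50, 25, 25]     # target distance, touch-key width
ID = [math.log2(2 * d / w) for d, w in zip(D, W)]
MT = [0.2 + 0.1 * i for i in ID]         # exactly linear in ID
a, b = fitts_fit(D, W, MT)
assert abs(a - 0.2) < 1e-9 and abs(b - 0.1) < 1e-9
```

Checking the residuals (or r²) of such a fit against real movement times is how one would conclude, as the authors did, that dual-task data depart from the law.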
Simulated annealing with restart strategy for the blood pickup routing problem
NASA Astrophysics Data System (ADS)
Yu, V. F.; Iswari, T.; Normasari, N. M. E.; Asih, A. M. S.; Ting, H.
2018-04-01
This study develops a simulated annealing heuristic with restart strategy (SA_RS) for solving the blood pickup routing problem (BPRP). BPRP minimizes the total length of the routes for blood bag collection between a blood bank and a set of donation sites, each associated with a time window constraint that must be observed. The proposed SA_RS is implemented in C++ and tested on benchmark instances of the vehicle routing problem with time windows to verify its performance. The algorithm is then tested on some newly generated BPRP instances and the results are compared with those obtained by CPLEX. Experimental results show that the proposed SA_RS heuristic effectively solves BPRP.
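A generic simulated-annealing skeleton with a restart strategy might look like the following. The restart-to-best rule, the cooling schedule, and the toy line-ordering objective are assumptions chosen for illustration; the paper's BPRP neighbourhood operators and time-window handling are not reproduced.

```python
import math
import random

# Generic SA skeleton with restart: after a run of non-improving
# iterations, restart from the best solution found so far and reheat.
# Restart rule, cooling schedule, and the toy objective are assumptions.
def sa_restart(cost, neighbour, init, t0=1.0, cooling=0.95,
               iters=2000, restart_after=200, seed=7):
    rng = random.Random(seed)
    cur, cur_c = init, cost(init)
    best, best_c = cur, cur_c
    t, stale = t0, 0
    for _ in range(iters):
        cand = neighbour(cur, rng)
        cand_c = cost(cand)
        if cand_c < cur_c or rng.random() < math.exp(-(cand_c - cur_c) / t):
            cur, cur_c = cand, cand_c
        if cur_c < best_c:
            best, best_c, stale = cur, cur_c, 0
        else:
            stale += 1
            if stale >= restart_after:           # restart strategy
                cur, cur_c, stale, t = best, best_c, 0, t0
        t = max(t * cooling, 1e-9)
    return best, best_c

# toy objective: visit sites on a line in an order minimizing travel
sites = [3.0, 1.0, 4.0, 1.5, 9.0, 2.6]
tour_len = lambda p: sum(abs(sites[a] - sites[b]) for a, b in zip(p, p[1:]))
def swap(p, rng):
    i, j = rng.sample(range(len(p)), 2)
    q = list(p); q[i], q[j] = q[j], q[i]
    return q

best, best_c = sa_restart(tour_len, swap, list(range(len(sites))))
assert best_c <= tour_len(list(range(len(sites))))
assert sorted(best) == list(range(len(sites)))
```

Restarting from the incumbent best with a reheated temperature is one common SA_RS design; it trades some exploration for a guarantee that the search repeatedly revisits the most promising region.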
Simulation analysis of the effect of initial delay on flight delay diffusion
NASA Astrophysics Data System (ADS)
Que, Zufu; Yao, Hongguang; Yue, Wei
2018-01-01
The initial delay of a flight is an important factor affecting the spread of flight delays, so clarifying their relationship helps to control flight delays in the aeronautical network. By establishing a model of a chain aviation network and simulating the effects of initial delay on longitudinal delay diffusion, it is found that the number of delayed airports in the air network, the total delay time and the average delay time of the delayed airports are generally positively correlated with the initial delay. This indicates that the occurrence of initial delays should be avoided or reduced as much as possible to improve the punctuality of flights.
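A minimal chain-network version of this longitudinal diffusion can be sketched as follows, with each downstream airport absorbing part of the inbound delay through a schedule buffer (the buffer values are illustrative assumptions):

```python
# Delay diffusion along a chain of airports: at each leg the inbound
# delay is reduced by that airport's schedule buffer. A minimal model
# of longitudinal propagation; buffer sizes are illustrative.
def propagate_delay(initial_delay, buffers):
    delays, d = [], initial_delay
    for buf in buffers:
        d = max(0, d - buf)
        delays.append(d)
    delayed = sum(1 for d in delays if d > 0)
    return delayed, sum(delays)           # airports delayed, total delay

# a larger initial delay reaches more airports and adds more total delay
n1, t1 = propagate_delay(30, [10, 10, 10, 10])
n2, t2 = propagate_delay(60, [10, 10, 10, 10])
assert n2 > n1 and t2 > t1
```

Even this toy model exhibits the positive correlation the study reports: doubling the initial delay here raises both the count of delayed airports and the total delay time.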
Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K
2016-11-25
Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors for different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
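The attenuation mechanism can be demonstrated with a deliberately simplified linear stand-in for the Poisson time-series model: classical measurement error in the exposure shrinks the estimated slope toward the null by roughly var(X)/(var(X)+var(error)). The effect size, error variance, and sample size below are arbitrary, and the single-pollutant linear regression is a simplification of the paper's copollutant Poisson setting.

```python
import random

# Classical measurement error attenuates a regression slope toward
# the null. A linear stand-in for the paper's Poisson time-series;
# effect size, error SD, and n are arbitrary illustration values.
def attenuation_demo(n=20000, beta=0.05, err_sd=1.0, seed=3):
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]              # true exposure
    y = [beta * xi + rng.gauss(0, 0.1) for xi in x]      # outcome
    z = [xi + rng.gauss(0, err_sd) for xi in x]          # error-prone exposure
    def slope(u, v):
        mu, mv = sum(u) / n, sum(v) / n
        return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / \
               sum((a - mu) ** 2 for a in u)
    return slope(x, y), slope(z, y)

b_true, b_obs = attenuation_demo()
# expected attenuation factor: var(X) / (var(X) + var(err)) = 0.5 here
assert b_obs < b_true
assert abs(b_obs / b_true - 0.5) < 0.1
```

With correlated errors across two pollutants, the same mechanism can also shift effect estimates between the main pollutant and the copollutant, which is the false-positive pathway the study quantifies.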
Geothermal reservoir simulation of hot sedimentary aquifer system using FEFLOW®
NASA Astrophysics Data System (ADS)
Nur Hidayat, Hardi; Gala Permana, Maximillian
2017-12-01
The study presents the simulation of a hot sedimentary aquifer (HSA) for geothermal utilization. An HSA is a conduction-dominated hydrothermal play type utilizing a deep aquifer heated by near-normal heat flow; one example is the Bavarian Molasse Basin in South Germany. This system typically uses doublet wells: an injection well and a production well. The simulation was run for 3650 days of simulation time. The technical feasibility and performance are analysed with regard to the energy extracted by this concept. Several parameters are compared to determine the model performance. Parameters such as reservoir characteristics, temperature information and well information are defined, and several assumptions are made to simplify the simulation process. The main results of the simulation are the heat period budget (total extracted heat energy) and the heat rate budget (heat production rate). A qualitative sensitivity analysis is conducted using five parameters, to each of which lower- and higher-value scenarios are assigned.
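The two headline outputs, the heat rate budget and the heat period budget, amount to simple energy bookkeeping for a doublet. In this sketch the flow rate and temperatures are invented placeholders, not values from the study, and FEFLOW's internal budget accounting is of course more detailed.

```python
# Doublet heat extraction: rate = m_dot * cp * (T_prod - T_inj),
# integrated over the simulated period. Flow rate and temperatures
# below are hypothetical placeholders, not values from the study.
def heat_budget(flow_kg_s, t_prod_c, t_inj_c, days, cp=4200.0):
    rate_w = flow_kg_s * cp * (t_prod_c - t_inj_c)   # heat rate budget [W]
    energy_j = rate_w * days * 86400.0               # heat period budget [J]
    return rate_w, energy_j

rate, energy = heat_budget(flow_kg_s=70.0, t_prod_c=100.0,
                           t_inj_c=60.0, days=3650)
assert rate == 70.0 * 4200.0 * 40.0          # 11.76 MW thermal
assert energy == rate * 3650 * 86400.0       # over the 3650-day run
```

Sensitivity scenarios then reduce to re-evaluating this budget with the lower and higher values assigned to each parameter.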
NASA Technical Reports Server (NTRS)
Halloran, B. P.; Bikle, D. D.; Globus, R. K.; Levens, M. J.; Wronski, T. J.; Morey-Holton, E.
1985-01-01
Weightlessness, as experienced during space flight, and simulated weightlessness induce osteopenia. Using the suspended rat model to simulate weightlessness, a reduction in total tibia Ca and bone formation rate at the tibiofibular junction as well as an inhibition of Ca-45 and H-3-proline uptake by bone within 5-7 days of skeletal unloading was observed. Between days 7 and 15 of unloading, uptake of Ca-45 and H-3-proline, and bone formation rate return to normal, although total bone Ca remains abnormally low. To examine the relationship between these characteristic changes in bone metabolism induced by skeletal unloading and vitamin D metabolism, the serum concentrations of 25-hydroxyvitamin D (25-OH-D), 24, 25-dihydroxyvitamin D (24,25(OH)2D) and 1,25-dihydroxyvitamin D (1,25(OH)2D) at various times after skeletal unloading were measured. The effect of chronic infusion of 1,25(OH)2D3 on the bone changes associated with unloading was also determined.
Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming
2017-02-01
The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical, and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead for parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate communication redundancy. Then, we utilize shared memory to reduce the memory copy overhead of intra-node communication. Finally, we optimize the communication scheduling using neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
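The first optimization, packing many small per-event messages into one buffer per neighbor rank, can be illustrated without MPI. This is only a schematic sketch; the data layout and names are invented here and are not the SPPARKS or authors' implementation:

```python
from collections import defaultdict

def aggregate_messages(events):
    """Group per-event boundary updates by destination rank so that each
    neighbour receives one packed buffer instead of many small messages."""
    buffers = defaultdict(list)
    for dest_rank, payload in events:
        buffers[dest_rank].append(payload)
    return dict(buffers)

# Six per-event messages collapse into two sends, one per neighbour rank:
events = [(1, "a"), (2, "b"), (1, "c"), (1, "d"), (2, "e"), (2, "f")]
print({dest: len(buf) for dest, buf in aggregate_messages(events).items()})
```

The point is that the number of network transactions now scales with the number of neighbouring subdomains rather than with the number of boundary events.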
Efficiency of super-Eddington magnetically-arrested accretion
NASA Astrophysics Data System (ADS)
McKinney, Jonathan C.; Dai, Lixin; Avara, Mark J.
2015-11-01
The radiative efficiency of super-Eddington accreting black holes (BHs) is explored for magnetically-arrested discs, where magnetic flux builds up to saturation near the BH. Our three-dimensional general relativistic radiation magnetohydrodynamic (GRRMHD) simulation of a spinning BH (spin a/M = 0.8) accreting at ˜50 times Eddington shows a total efficiency ˜50 per cent when time-averaged and total efficiency ≳ 100 per cent in moments. Magnetic compression by the magnetic flux near the rotating BH leads to a thin disc, whose radiation escapes via advection by a magnetized wind and via transport through a low-density channel created by a Blandford-Znajek (BZ) jet. The BZ efficiency is sub-optimal due to inertial loading of field lines by optically thick radiation, leading to BZ efficiency ˜40 per cent on the horizon and BZ efficiency ˜5 per cent by r ˜ 400rg (gravitational radii) via absorption by the wind. Importantly, radiation escapes at r ˜ 400rg with efficiency η ≈ 15 per cent (luminosity L ˜ 50LEdd), similar to η ≈ 12 per cent for a Novikov-Thorne thin disc and beyond η ≲ 1 per cent seen in prior GRRMHD simulations or slim disc theory. Our simulations show how BH spin, magnetic field, and jet mass-loading affect these radiative and jet efficiencies.
Effects of dispersal on total biomass in a patchy, heterogeneous system: Analysis and experiment.
Zhang, Bo; Liu, Xin; DeAngelis, D L; Ni, Wei-Ming; Wang, G Geoff
2015-06-01
An intriguing recent result from mathematics is that a population diffusing at an intermediate rate in an environment in which resources vary spatially will reach a higher total equilibrium biomass than the population in an environment in which the same total resources are distributed homogeneously. We extended the current mathematical theory to apply to logistic growth and also showed that the result applies to patchy systems with dispersal among patches, both for continuous and discrete time. This allowed us to make specific predictions, through simulations, concerning the biomass dynamics, which were verified by a laboratory experiment. The experiment was a study of biomass growth of duckweed (Lemna minor Linn.), where the resources (nutrients added to water) were distributed homogeneously among a discrete series of water-filled containers in one treatment, and distributed heterogeneously in another treatment. The experimental results showed that total biomass peaked at an intermediate, relatively low diffusion rate, exceeding the total carrying capacity of the system and agreeing with the simulation model. The implications of the experiment for source, sink, and pseudo-sink dynamics are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
USDA-ARS's Scientific Manuscript database
Fluid milk processing (FMP) has significant environmental impact because of its high energy use. High temperature short time (HTST) pasteurization is the third most energy intensive operation comprising about 16% of total energy use, after clean-in-place operations and packaging. Nonthermal processe...
USDA-ARS's Scientific Manuscript database
Fluid milk processing (FMP) has significant environmental impact because of its high energy use and greenhouse gas (GHG) emissions. High temperature short time (HTST) pasteurization is the third most energy intense operation in FMP comprising about 16% of total energy use, after clean-...
Nillius, Peter; Klamra, Wlodek; Sibczynski, Pawel; Sharma, Diksha; Danielsson, Mats; Badano, Aldo
2015-02-01
The authors report on measurements of light output and spatial resolution of microcolumnar CsI:Tl scintillator detectors for x-ray imaging. In addition, the authors discuss the results of simulations aimed at analyzing the results of synchrotron and sealed-source exposures with respect to the contributions of light transport to the total light output. The authors measured light output from a 490-μm CsI:Tl scintillator screen using two setups. First, the authors used a photomultiplier tube (PMT) to measure the response of the scintillator to sealed-source exposures. Second, the authors performed imaging experiments with a 27-keV monoenergetic synchrotron beam and a slit to calculate the total signal generated in terms of optical photons per keV. The results of both methods are compared to simulations obtained with hybridmantis, a coupled x-ray, electron, and optical photon Monte Carlo transport package. The authors report line response (LR) and light output for a range of linear absorption coefficients and describe a model that fits the light output and the blur measurements simultaneously. Comparing the experimental results with the simulations, the authors obtained an estimate of the absorption coefficient for the model that provides good agreement with the experimentally measured LR. Finally, the authors report light output simulation results and their dependence on scintillator thickness and reflectivity of the backing surface. The slit images from the synchrotron were analyzed to obtain a total light output of 48 keV−1 while measurements using the fast PMT instrument setup and sealed-sources reported a light output of 28 keV−1. The authors attribute the difference in light output estimates between the two methods to the difference in time constants between the camera and PMT measurements.
Simulation structures were designed to match the light output measured with the camera while providing good agreement with the measured LR, resulting in a bulk absorption coefficient of 5 × 10−5 μm−1. The combination of experimental measurements for microcolumnar CsI:Tl scintillators using sealed-sources and synchrotron exposures with results obtained via simulation suggests that the time course of the emission might play a role in experimental estimates. The procedure yielded an experimentally derived linear absorption coefficient for microcolumnar CsI:Tl of 5 × 10−5 μm−1. To the authors' knowledge, this is the first time this parameter has been validated against experimental observations. The measurements also offer insight into the relative role of optical transport on the effective optical yield of the scintillator with microcolumnar structure. © 2015 American Association of Physicists in Medicine.
Dark-ages reionization and galaxy formation simulation - IX. Economics of reionizing galaxies
NASA Astrophysics Data System (ADS)
Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Geil, Paul M.; Kim, Han-Seek; Mesinger, Andrei; Wyithe, J. Stuart B.
2017-09-01
Using a series of high-resolution hydrodynamical simulations we show that during the rapid growth of high-redshift (z > 5) galaxies, reserves of molecular gas are consumed over a time-scale of 300 Myr, almost independent of feedback scheme. We find that there exists no such simple relation for the total gas fractions of these galaxies, with little correlation between gas fractions and specific star formation rates. The bottleneck or limiting factor in the growth of early galaxies is in converting infalling gas to cold star-forming gas. Thus, we find that the majority of high-redshift dwarf galaxies are effectively in recession, with demand (of star formation) never rising to meet supply (of gas), irrespective of the baryonic feedback physics modelled. We conclude that the basic assumption of self-regulation in galaxies - that they can adjust total gas consumption within a Hubble time - does not apply for the dwarf galaxies thought to be responsible for providing most UV photons to reionize the high-redshift Universe. We demonstrate how this rapid molecular time-scale improves agreement between semi-analytic model predictions of the early Universe and observed stellar mass functions.
Thompson, S A; Dummer, P M
1997-01-01
The aim of this study was to determine the shaping ability of ProFile.04 Taper Series 29 nickel-titanium instruments in simulated canals. A total of 40 simulated root canals, comprising four different shapes in terms of angle and position of curvature, were prepared by ProFile instruments using a step-down approach. Part 1 of this two-part report describes the efficacy of the instruments in terms of preparation time, instrument failure, canal blockages, loss of canal length and three-dimensional canal form. The time necessary for canal preparation was not significantly influenced by canal shape. No instrument fractures occurred, but a total of 52 instruments deformed. Size 6 instruments deformed the most, followed by sizes 5, 3 and 4. Canal shape did not significantly influence instrument deformation. None of the canals became blocked with debris, and loss of working distance was on average 0.5 mm or less. Intracanal impressions of canal form demonstrated that most canals had definite apical stops, smooth canal walls and good flow and taper. Under the conditions of this study, ProFile.04 Taper Series 29 rotary nickel-titanium instruments prepared simulated canals rapidly and created good three-dimensional form. A substantial number of instruments deformed, but it was not possible to determine whether this phenomenon occurred because of the nature of the experimental model or through an inherent design weakness in the instruments.
Simulation of the Universal-Time Diurnal Variation of the Global Electric Circuit Charging Rate
NASA Technical Reports Server (NTRS)
Mackerras, D.; Darvenzia, M.; Orville, R. E.; Williams, E. R.; Goodman, S. J.
1999-01-01
A global lightning model that includes diurnal and annual lightning variation, and total flash density versus latitude for each major land and ocean, has been used as the basis for simulating the global electric circuit charging rate. A particular objective has been to reconcile the difference in amplitude ratios [AR=(max-min)/mean] between global lightning diurnal variation (AR approx. = 0.8) and the diurnal variation of typical atmospheric potential gradient curves (AR approx. = 0.35). A constraint on the simulation is that the annual mean charging current should be about 1000 A. The global lightning model shows that negative ground flashes can contribute, at most, about 10-15% of the required current. For the purpose of the charging rate simulation, it was assumed that each ground flash contributes 5 C to the charging process. It was necessary to assume that all electrified clouds contribute to charging by means other than lightning, that the total flash rate can serve as an indirect indicator of the rate of charge transfer, and that oceanic electrified clouds contribute to charging even though they are relatively inefficient in producing lightning. It was also found necessary to add a diurnally invariant charging current component. By trial and error it was found that charging rate diurnal variation curves in Universal time (UT) could be produced with amplitude ratios and general shapes similar to those of the potential gradient diurnal variation curves measured over ocean and arctic regions during voyages of the Carnegie Institute research vessels.
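The amplitude ratio AR = (max − min)/mean used above to compare diurnal curves is straightforward to compute for any sampled curve; a minimal sketch in Python, where the eight-point sample curve is hypothetical and not Carnegie data:

```python
def amplitude_ratio(values):
    """Amplitude ratio of a diurnal curve: AR = (max - min) / mean."""
    mean = sum(values) / len(values)
    return (max(values) - min(values)) / mean

# Hypothetical diurnal charging-rate curve sampled every 3 hours (arbitrary units).
# This curve has mean 1.0, max 1.4, min 0.6, so AR is about 0.8 -- the value
# quoted in the abstract for global lightning diurnal variation.
curve = [0.8, 0.9, 1.0, 1.2, 1.4, 1.2, 0.9, 0.6]
print(amplitude_ratio(curve))
```

Reconciling the lightning AR (≈0.8) with the potential-gradient AR (≈0.35) amounts to adding enough diurnally invariant charging current that the same numerator sits over a larger mean.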
NASA Astrophysics Data System (ADS)
Zhang, S.; Wang, B.; Cao, X. S.; Yang, Z.
Objective: The mRNA expression of the alpha 1 chain of type I collagen (COL-I alpha 1) in rat osteosarcoma ROS 17/2.8 cells induced by bone morphogenetic protein-2 (BMP-2) was reduced under simulated microgravity. The protein kinase MEK1 of the MAPK signal pathway plays an important role in the expression of COL-I alpha 1 mRNA. The purpose of this study was to investigate the effects of simulated weightlessness on the activity of MEK1 induced by BMP-2 in ROS 17/2.8 cells. Methods: ROS 17/2.8 cells were cultured under 1G (control) and rotating-clinostat simulated weightlessness for 24 h, 48 h and 72 h. BMP-2 (500 ng/ml) was added into the medium 1 h before the culture ended. There was a control group in which ROS 17/2.8 cells were cultured under the 1G condition without BMP-2. Then the total protein of the cells was extracted and the expression of phosphorylated ERK1/2 (p-ERK1/2) protein was detected by Western blotting to show the kinase activity of MEK1. Results: There were no significant differences in the expression of total ERK1/2 among all groups. The expression of p-ERK1/2 was inconspicuous in the control group without BMP-2 but increased significantly when BMP-2 was added (P < 0.01). The level of p-ERK1/2 in the simulated weightlessness group was much lower than that in the 1G group at every time point (P < 0.01). The expression of p-ERK1/2 gradually decreased along with the time of weightlessness simulation (P < 0.01). Conclusions: The kinase activity of MEK1 induced by BMP-2 in rat osteosarcoma cells was reduced under simulated weightlessness.
1978-12-31
Dielectric Discharge ... 3.2.1 Total Emitted Charge ... 3.2.2 Emission Time History ... taken to be a rise time of 10 ns and a fall time of 10 to 100 ns. In addition, a physical model of the discharge mechanism has been developed in which ... a scale model of the P78-2, dubbed the SCATSAT, was constructed whose design was chosen to simulate the basic structure of the real satellite, including the
Modeling Population Exposure to Ultrafine Particles in a Major Italian Urban Area
Spinazzè, Andrea; Cattaneo, Andrea; Peruzzo, Carlo; Cavallo, Domenico M.
2014-01-01
Average daily ultrafine particle (UFP) exposures of adult Milan subpopulations (defined on the basis of gender, and then by age, employment or educational status), in different exposure scenarios (a typical working day in summer and winter), were simulated using a microenvironmental stochastic simulation model. The basic concept of this kind of model is that time-weighted average exposure is defined as the sum of partial microenvironmental exposures, which are determined by the product of UFP concentration and time spent in each microenvironment. In this work, environmental concentrations were derived from previous experimental studies that were based on microenvironmental measurements in the city of Milan by means of personal or individual monitoring, while time-activity patterns were derived from the EXPOLIS study. A significant difference was observed between the exposures experienced in winter (W: 28,415 pt/cm3) and summer (S: 19,558 pt/cm3). Furthermore, simulations showed a moderate difference between the total exposures experienced by women (S: 19,363 pt/cm3; W: 27,623 pt/cm3) and men (S: 18,806 pt/cm3; W: 27,897 pt/cm3). In addition, differences were found as a function of (I) age, (II) employment status and (III) educational level; accordingly, the highest total exposures were found for (I) people aged 55–59 years, (II) housewives and students and (III) people with a higher educational level (more than 10 years of schooling). Finally, significant differences were found between microenvironment-specific exposures. PMID:25321878
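The time-weighted microenvironmental model described above reduces to a short calculation. The function below is generic, but the concentrations and time budget are invented for illustration and are not the study's measured values:

```python
def daily_exposure(microenvironments):
    """Time-weighted average exposure: sum of C_j * t_j over total time,
    where C_j is the UFP concentration (pt/cm3) in microenvironment j
    and t_j is the time (hours) spent there."""
    total_time = sum(t for _, t in microenvironments)
    return sum(c * t for c, t in microenvironments) / total_time

# Hypothetical winter day (concentration in pt/cm3, hours spent):
day = [(15000, 14),   # home
       (60000, 1),    # commuting in traffic
       (30000, 8),    # workplace
       (25000, 1)]    # other
print(round(daily_exposure(day)))
```

Summing concentration times time over microenvironments and dividing by the total time gives the daily average that the stochastic model then perturbs with the observed variability of each microenvironment.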
The Effect on the Lunar Exosphere of a Coronal Mass Ejection Passage
NASA Technical Reports Server (NTRS)
Killen, R. M.; Hurley, D. M.; Farrell, W. M.
2011-01-01
Solar wind bombardment onto exposed surfaces in the solar system produces an energetic component to the exospheres about those bodies. The solar wind energy and composition are highly dependent on the origin of the plasma. Using the measured composition of the slow wind, fast wind, solar energetic particle (SEP) population, and coronal mass ejection (CME), broken down into their various components, we have estimated the total sputter yield for each type of solar wind. We show that the heavy ion component, especially He++ and O+7, can greatly enhance the total sputter yield during times when the heavy ion population is enhanced. Folding in the flux, we compute the source rate for several species during different types of solar wind. Finally, we use a Monte Carlo model developed to simulate the time-dependent evolution of the lunar exosphere to study the sputtering component of the exosphere under the influence of a CME passage. We simulate the background exosphere of Na, K, Ca, and Mg. Simulations indicate that sputtering increases the mass of those constituents in the exosphere to a few to a few tens of times the background values. The escalation of atmospheric density occurs within an hour of onset. The decrease in atmospheric density after the CME passage is also rapid, although it takes longer than the increase. Sputtered neutral particles have a high probability of escaping the Moon, by both Jeans escape and photoionization. The density and spatial distribution of the exosphere can be tested with the LADEE mission.
Inference of scale-free networks from gene expression time series.
Daisuke, Tominaga; Horton, Paul
2006-04-01
Quantitative time-series observation of gene expression is becoming possible, for example by cell array technology. However, there are no practical methods with which to infer network structures using only observed time-series data. As most computational models of biological networks for continuous time-series data have a large number of degrees of freedom, it is almost impossible to infer the correct structures. On the other hand, it has been reported that some kinds of biological networks, such as gene networks and metabolic pathways, may have scale-free properties. We hypothesize that the architecture of inferred biological network models can be restricted to scale-free networks. We developed an inference algorithm for biological networks using only time-series data by introducing such a restriction. We adopt the S-system as the network model, and a distributed genetic algorithm to optimize models to fit their simulated results to the observed time-series data. We have tested our algorithm on a case study (simulated data). We compared optimization under no restriction, which allows for a fully connected network, and under the restriction that the total number of links must equal that expected from a scale-free network. The restriction reduced both false positive and false negative estimation of the links and also the differences between model simulation and the given time-series data.
A monte carlo study of restricted diffusion: Implications for diffusion MRI of prostate cancer.
Gilani, Nima; Malcolm, Paul; Johnson, Glyn
2017-04-01
Diffusion MRI is used frequently to assess prostate cancer. The prostate consists of cellular tissue surrounding fluid-filled ducts. Here, the diffusion properties of the ductal fluid alone were studied. Monte Carlo simulations were used to investigate ductal residence times to determine whether ducts can be regarded as forming a separate compartment and whether ductal radius could determine the apparent diffusion coefficient (ADC) of the ductal fluid. Random walks were simulated in cavities. Average residence times were estimated for permeable cavities. Signal reductions resulting from application of a Stejskal-Tanner pulse sequence were calculated in impermeable cavities. Simulations were repeated for cavities of different radii and different diffusion times. Residence times are at least comparable with diffusion times even in relatively high grade tumors. ADCs asymptotically approach theoretical limiting values. At large radii and short diffusion times, ADCs are similar to free diffusion. At small radii and long diffusion times, ADCs are reduced toward zero, and kurtosis approaches a value of -1.2. Restricted diffusion in cavities of similar sizes to prostate ducts may reduce ductal ADCs. This may contribute to reductions in total ADC seen in prostate cancer. Magn Reson Med 77:1671-1677, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
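A stripped-down version of this kind of simulation is easy to reproduce. The sketch below uses step rejection at the wall instead of proper specular reflection, and all parameter values are invented, so it illustrates the trend the abstract reports (restriction lowers the apparent diffusion coefficient) rather than the paper's exact method:

```python
import math
import random

def simulate_adc(radius_um, d_free=2.0, n_walkers=1000, dt=0.1,
                 t_total=50.0, seed=1):
    """Crude Monte Carlo of restricted diffusion inside an impermeable sphere.
    Steps that would leave the sphere are rejected (the particle stays put),
    a simple stand-in for a reflecting wall. d_free is in um^2/ms, times in ms.
    Returns the apparent diffusion coefficient <z^2> / (2 * t)."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * d_free * dt)  # 1-D Gaussian step size per axis
    n_steps = int(t_total / dt)
    msd_z = 0.0
    for _ in range(n_walkers):
        x = y = z = 0.0  # start at the centre of the cavity
        for _ in range(n_steps):
            nx = x + rng.gauss(0.0, sigma)
            ny = y + rng.gauss(0.0, sigma)
            nz = z + rng.gauss(0.0, sigma)
            if nx * nx + ny * ny + nz * nz <= radius_um ** 2:
                x, y, z = nx, ny, nz
        msd_z += z * z
    return msd_z / n_walkers / (2.0 * t_total)
```

With a duct-sized radius the long-time ADC falls well below the free value, while a cavity much larger than the diffusion length returns essentially free diffusion, matching the two limits described in the abstract.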
Smith, D G; Baranski, J V; Thompson, M M; Abel, S M
2003-01-01
A total of twenty-five subjects were cloistered for a period of 70 hours, five at a time, in a hyperbaric chamber modified to simulate the conditions aboard the International Space Station (ISS). A recording of 72 dBA background noise from the ISS service module was used to simulate noise conditions on the ISS. Two groups experienced the background noise throughout the experiment, two other groups experienced the noise only during the day, and one control group was cloistered in a quiet environment. All subjects completed a battery of cognitive tests nine times throughout the experiment. The data showed little or no effect of noise on reasoning, perceptual decision-making, memory, vigilance, mood, or subjective indices of fatigue. Our results suggest that the level of noise on the space station should not affect cognitive performance, at least over a period of several days.
Simulation of high-energy radiation belt electron fluxes using NARMAX-VERB coupled codes
Pakhotin, I P; Drozdov, A Y; Shprits, Y Y; Boynton, R J; Subbotin, D A; Balikhin, M A
2014-01-01
This study presents a fusion of data-driven and physics-driven methodologies of energetic electron flux forecasting in the outer radiation belt. Data-driven NARMAX (Nonlinear AutoRegressive Moving Averages with eXogenous inputs) model predictions for geosynchronous orbit fluxes have been used as an outer boundary condition to drive the physics-based Versatile Electron Radiation Belt (VERB) code, to simulate energetic electron fluxes in the outer radiation belt environment. The coupled system has been tested for three extended time periods totalling several weeks of observations. The time periods involved periods of quiet, moderate, and strong geomagnetic activity and captured a range of dynamics typical of the radiation belts. The model has successfully simulated energetic electron fluxes for various magnetospheric conditions. Physical mechanisms that may be responsible for the discrepancies between the model results and observations are discussed. PMID:26167432
Controlling protein molecular dynamics: How to accelerate folding while preserving the native state
NASA Astrophysics Data System (ADS)
Jensen, Christian H.; Nerukh, Dmitry; Glen, Robert C.
2008-12-01
The dynamics of peptides and proteins generated by classical molecular dynamics (MD) is described by using a Markov model. The model is built by clustering the trajectory into conformational states and estimating transition probabilities between the states. Assuming that it is possible to influence the dynamics of the system by varying simulation parameters, we show how to use the Markov model to determine the parameter values that preserve the folded state of the protein and, at the same time, reduce the folding time in the simulation. We investigate this by applying the method to two systems. The first system is an imaginary peptide described by given transition probabilities with a total folding time of 1 μs. We find that only small changes in the transition probabilities are needed to accelerate (or decelerate) the folding. This implies that folding times for slowly folding peptides and proteins calculated using MD cannot be meaningfully compared to experimental results. The second system is a four-residue peptide, valine-proline-alanine-leucine, in water. We control the dynamics of the transitions by varying the temperature and the atom masses. The simulation results show that it is possible to find combinations of parameter values that accelerate the dynamics and at the same time preserve the native state of the peptide. A method for accelerating larger systems without performing simulations for the whole folding process is outlined.
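The folding time of such a Markov model is its mean first-passage time to the folded state, which follows from the transition probabilities alone. The three-state matrix and lag time below are invented for illustration and are not the paper's model:

```python
# Hypothetical 3-state Markov model at lag time tau (state 0: unfolded,
# state 1: intermediate, state 2: folded); each row sums to 1.
P = [[0.90, 0.09, 0.01],
     [0.10, 0.85, 0.05],
     [0.00, 0.02, 0.98]]
TAU_NS = 10.0  # lag time of the Markov model (assumed), in ns

def mean_folding_times(P, folded, tau, iters=20000):
    """Mean first-passage time to the folded state from each other state,
    computed as the fixed point of t_i = tau + sum_j Q_ij * t_j, where Q
    is P restricted to the non-folded states."""
    keep = [i for i in range(len(P)) if i != folded]
    t = {i: 0.0 for i in keep}
    for _ in range(iters):
        t = {i: tau + sum(P[i][j] * t[j] for j in keep) for i in keep}
    return t

print(mean_folding_times(P, folded=2, tau=TAU_NS))
```

Scaling up the transition probabilities that leave the unfolded states (the effect of, for example, raising the simulation temperature) shrinks these first-passage times; the paper's parameter search minimizes exactly this quantity while keeping the folded state the most stable one.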
Theory of an optomechanical quantum heat engine
2014-08-12
... modes, with a cutoff number state |N⟩ with N ≫ n̄a(b), so that the total dimension of the density matrix ρsys is (N + 1)^4. As a result the simulations become very time consuming even for relatively modest values of N. However, due to the diagonality of thermal states in an energy basis the total
Comparison between hybrid laser-MIG welding and MIG welding for the invar36 alloy
NASA Astrophysics Data System (ADS)
Zhan, Xiaohong; Li, Yubo; Ou, Wenmin; Yu, Fengyi; Chen, Jie; Wei, Yanhong
2016-11-01
The invar36 alloy is suitable for producing molds for composite material structures because it has a thermal expansion coefficient similar to that of composite materials. In the present paper, the MIG welding and laser-MIG hybrid welding methods are compared to determine the more appropriate method for overcoming the poor weldability of invar36 alloy. Analysis of the experimental and simulated results shows that the combined Gaussian and cone heat source model can characterize the laser-MIG hybrid welding heat source well. The total welding time of MIG welding is 8 times that of hybrid laser-MIG welding, and the welding material consumption of MIG welding is about 4 times that of hybrid laser-MIG welding. The stress and deformation simulations indicate that the peak deformation during MIG welding is 3 times larger than that of hybrid laser-MIG welding.
Response time effects of alerting tone and semantic context for synthesized voice cockpit warnings
NASA Technical Reports Server (NTRS)
Simpson, C. A.; Williams, D. H.
1980-01-01
Some handbooks and human factors design guides have recommended that a voice warning should be preceded by a tone to attract attention to the warning. As far as can be determined from a search of the literature, no experimental evidence supporting this exists. A fixed-base simulator flown by airline pilots was used to test the hypothesis that the total 'system time' to respond to a synthesized voice cockpit warning would be longer when the message was preceded by a tone, because the voice itself was expected to perform both the alerting and the information transfer functions. The simulation included realistic ATC radio voice communications, synthesized engine noise, cockpit conversation, and realistic flight routes. The effect of a tone before a voice warning was to lengthen response time; that is, responses were slower with an alerting tone. Lengthening the voice warning with another word, however, did not increase response time.
Case Studies of Forecasting Ionospheric Total Electron Content
NASA Astrophysics Data System (ADS)
Mannucci, A. J.; Meng, X.; Verkhoglyadova, O. P.; Tsurutani, B.; McGranaghan, R. M.
2017-12-01
We report on medium-range forecast-mode runs of ionosphere-thermosphere coupled models that calculate ionospheric total electron content (TEC), focusing on low-latitude daytime conditions. A medium-range forecast-mode run refers to simulations that are driven by inputs that can be predicted 2-3 days in advance, for example based on simulations of the solar wind. We will present results from a weak geomagnetic storm caused by a high-speed solar wind stream on June 29, 2012. Simulations based on the Global Ionosphere Thermosphere Model (GITM) and the Thermosphere Ionosphere Electrodynamic General Circulation Model (TIEGCM) significantly overestimate TEC in certain low-latitude daytime regions, compared to TEC maps based on observations. We will present results from a more intense coronal mass ejection (CME) driven storm where the simulations are closer to observations. We compare high-latitude data sets to model inputs, such as auroral boundary and convection patterns, to assess the degree to which poorly estimated high-latitude drivers may be the largest cause of discrepancy between simulations and observations. Our results reveal many factors that can affect the accuracy of forecasts, including the fidelity of empirical models used to estimate high-latitude precipitation patterns, or observation proxies for solar EUV spectra, such as the F10.7 index. Implications for forecasts with few-day lead times are discussed.
Impact of dynamically changing land cover on runoff process: the case of Iligan river basin
NASA Astrophysics Data System (ADS)
Salcedo, Stephanie Mae B.; Suson, Peter D.; Milano, Alan E.; Ignacio, Ma. Teresa T.
2016-10-01
Iligan river basin, located in Northern Mindanao, Philippines, covers 165.7 km² of basin area. In December 2011, tropical storm Sendong (Washi) hit Iligan City, leaving a trail of wrecked infrastructure and about 490 persons reported dead. What transpired was a wake-up call to mitigate future flood disasters. Fundamental to mitigation is understanding runoff behavior inside a basin, considering that runoff is the main source of flooding. For this reason, the present study evaluated total runoff volume, peak discharge and lag time given land cover scenarios in four different years: 1973, 1989, 1998 and 2008. IFSAR and LIDAR DEMs were integrated to generate the basin model in ArcGIS. HEC-HMS was used to simulate each scenario, with the Soil Conservation Service Curve Number (SCS CN) method used for losses. Four runoff simulation models with varying CN values were established using RIDF rainfall input at 5-year, 10-year, 25-year, 50-year and 100-year Rainfall Return Periods (RRP). Total runoff volume, peak discharge and lag time were progressively higher from 1973 to 2008, with the 1989 land cover as the exception, where the runoff parameters were at their lowest. These parameters are governed by vegetation type: when vegetation is characterized predominantly by woody perennials, runoff volume and peak discharge are lower; conversely, when the presence of woody perennials is minimal, these parameters are higher. This study shows that an important way to mitigate flooding is to reduce surface runoff by maintaining vegetation predominantly composed of woody perennials.
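The SCS Curve Number loss method named in the abstract has a standard closed form; a minimal sketch (a generic illustration of the textbook equations with the initial abstraction Ia = 0.2S, not the study's HEC-HMS configuration; the function name is ours):

```python
def scs_runoff_depth(p_mm: float, cn: float) -> float:
    """Direct runoff depth (mm) for a storm rainfall depth p_mm,
    using the SCS Curve Number method with Ia = 0.2 * S."""
    s = 25.4 * (1000.0 / cn - 10.0)   # potential maximum retention, mm
    ia = 0.2 * s                      # initial abstraction, mm
    if p_mm <= ia:
        return 0.0                    # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)
```

Denser vegetation corresponds to a lower CN, so the same storm yields less direct runoff, which is the mechanism behind the land-cover trend the study reports.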
Havemann, Maria Cecilie; Dalsgaard, Torur; Sørensen, Jette Led; Røssaak, Kristin; Brisling, Steffen; Mosgaard, Berit Jul; Høgdall, Claus; Bjerrum, Flemming
2018-05-14
Increasing focus on patient safety makes it important to ensure surgical competency among surgeons before they operate on patients. The objective was to gather validity evidence for a virtual-reality simulator test of robotic surgical skills and to evaluate its potential as a training tool. Surgeons with varying experience in robotic surgery were recruited: novices (zero procedures), intermediates (1-50), experienced (> 50). Five experienced surgeons rated five exercises on the da Vinci Skills Simulator. Participants were tested using the five exercises. Participants were invited back 3 times and completed a total of 10 attempts per exercise. The outcome was the average simulator performance score for the 5 exercises. A total of 38 participants from 5 surgical specialties were included; 32 completed all 4 sessions. A moderate correlation between the average total score and robotic experience was identified for the first attempt (Spearman r = 0.58; p = 0.0004). A difference in average total score was observed between novices and intermediates [median score 61% (IQR 52-66) vs. 83% (IQR 75-91), adjusted p < 0.0001], as well as between novices and experienced [median score 61% (IQR 52-66) vs. 80% (IQR 69-85), adjusted p = 0.002]. All three groups improved their performance between the 1st and 10th attempts (p < 0.00). This study describes validity evidence for a virtual-reality simulator for basic robotic surgical skills, which can be used for assessment of basic competency and as a training tool. However, more validity evidence is needed before it can be used for certification or high-stakes assessment.
Do night naps impact driving performance and daytime recovery sleep?
Centofanti, Stephanie A; Dorrian, Jillian; Hilditch, Cassie J; Banks, Siobhan
2017-02-01
Short, nighttime naps are used as a fatigue countermeasure in night shift work, and may offer protective benefits on the morning commute. However, there is a concern that nighttime napping may impact upon the quality of daytime sleep. The aim of the current project was to investigate the influence of short nighttime naps (<30 min) on simulated driving performance and subsequent daytime recovery sleep. Thirty-one healthy subjects (aged 21-35 y; 18 females) participated in a 3-day laboratory study. After a 9-h baseline sleep opportunity (22:00h-07:00h), subjects were kept awake the following night with random assignment to: a 10-min nap ending at 04:00h plus a 10-min nap at 07:00h; a 30-min nap ending at 04:00h; or a no-nap control. A 40-min driving simulator task was administered at 07:00h and 18:30h post-recovery sleep. All conditions had a 6-h daytime recovery sleep opportunity (10:00h-16:00h) the next day. All sleep periods were recorded polysomnographically. Compared to control, the napping conditions did not significantly impact upon simulated driving lane variability, percentage of time in a safe zone, or time to first crash on morning or evening drives (p>0.05). Short nighttime naps did not significantly affect daytime recovery total sleep time (p>0.05). Slow wave sleep (SWS) obtained during the 30-min nighttime nap resulted in a significant reduction in SWS during subsequent daytime recovery sleep (p<0.05), such that the total amount of SWS in 24 h was preserved. Therefore, short naps did not protect against performance decrements during a simulated morning commute, but they also did not adversely affect daytime recovery sleep following a night shift. Further investigation is needed to examine the optimal timing, length or combination of naps for reducing performance decrements on the morning commute, whilst still preserving daytime sleep quality. Copyright © 2015 Elsevier Ltd. All rights reserved.
Assessment of virtual reality robotic simulation performance by urology resident trainees.
Ruparel, Raaj K; Taylor, Abby S; Patel, Janil; Patel, Vipul R; Heckman, Michael G; Rawal, Bhupendra; Leveillee, Raymond J; Thiel, David D
2014-01-01
To examine resident performance on the Mimic dV-Trainer (MdVT; Mimic Technologies, Inc., Seattle, WA) for correlation with resident trainee level (postgraduate year [PGY]), console experience (CE), and simulator exposure in their training program to assess for internal bias with the simulator. Residents from programs of the Southeastern Section of the American Urologic Association participated. Each resident was scored on 4 simulator tasks (peg board, camera targeting, energy dissection [ED], and needle targeting) with 3 different outcomes (final score, economy of motion score, and time to complete exercise) measured for each task. These scores were evaluated for association with PGY, CE, and simulator exposure. Robotic skills training laboratory. A total of 27 residents from 14 programs of the Southeastern Section of the American Urologic Association participated. Time to complete the ED exercise was significantly shorter for residents who had logged live robotic console time compared with those who had not (p = 0.003). There were no other associations with live robotic console time that approached significance (all p ≥ 0.21). The only measure that was significantly associated with PGY was time to complete the ED exercise (p = 0.009). No associations with previous utilization of a robotic simulator in the resident's home training program were statistically significant. The ED exercise on the MdVT is most associated with CE and PGY compared with other exercises. Exposure of trainees to the MdVT in training programs does not appear to alter performance scores compared with trainees who do not have the simulator. © 2013 Published by Association of Program Directors in Surgery on behalf of Association of Program Directors in Surgery.
McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.
2012-01-01
Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
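The two catch-rate estimators compared above differ only in where the averaging happens; a minimal sketch (illustrative only, with hypothetical interview data, function names ours):

```python
def rom_catch_rate(catches, hours):
    """Ratio-of-means (ROM): total catch divided by total angling effort."""
    return sum(catches) / sum(hours)

def mor_catch_rate(catches, hours, min_hours=0.0):
    """Mean-of-ratios (MOR): average of per-trip catch rates, optionally
    excluding short-duration trips (e.g. min_hours=0.5)."""
    rates = [c / h for c, h in zip(catches, hours) if h > min_hours]
    return sum(rates) / len(rates)

# Hypothetical interviews: per-trip catch and hours fished
catches, hours = [0, 1, 4], [0.5, 2.0, 4.0]
```

Because MOR weights every trip equally, short trips with extreme rates can dominate it (one face of length-of-stay bias), which is consistent with the ROM estimator performing best in the simulations.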
Global Changes of the Water Cycle Intensity
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Schubert, Siegfried D.; Walker, Gregory K.
2003-01-01
In this study, we evaluate numerical simulations of the twentieth century climate, focusing on the changes in the intensity of the global water cycle. A new diagnostic of atmospheric water vapor cycling rate is developed and employed that relies on constituent tracers predicted at the model time step. This diagnostic is compared to a simplified traditional calculation of cycling rate, based on monthly averages of precipitation and total water content. The mean sensitivity of both diagnostics to variations in climate forcing is comparable. However, the new diagnostic produces systematically larger values and more variability than the traditional average approach. Climate simulations were performed using SSTs of the early (1902-1921) and late (1979-1998) twentieth century along with the appropriate CO2 forcing. In general, the increase of global precipitation with the increases in SST that occurred between the early and late twentieth century is small. However, an increase of atmospheric temperature leads to a systematic increase in total precipitable water. As a result, the residence time of water in the atmosphere increased, indicating a reduction of the global cycling rate. This result was explored further using a number of 50-year climate simulations from different models forced with observed SST. The anomalies and trends in the cycling rate and hydrologic variables of different GCMs are remarkably similar. The global annual anomalies of precipitation show a significant upward trend related to the upward trend of surface temperature during the latter half of the twentieth century. While this implies an increase in the hydrologic cycle intensity, a concomitant increase of total precipitable water again leads to a decrease in the calculated global cycling rate.
An analysis of the land/sea differences shows that the simulated precipitation over land has a decreasing trend while the oceanic precipitation has an upward trend, consistent with previous studies and the available observations. The decreasing continental trend in precipitation is located primarily over tropical land regions, while some other regions, such as North America, experience an increasing trend. Precipitation trends are diagnosed further using the water tracers to delineate the precipitation that occurs because of continental evaporation, as opposed to oceanic evaporation. These diagnostics show that over global land areas, the recycling of continental moisture is decreasing in time. However, the recycling changes are not spatially uniform, so that some regions, most notably the United States, experience continental recycling of water that increases in time.
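The simplified "traditional" cycling-rate calculation described above amounts to a bulk ratio of precipitation to precipitable water; a sketch with illustrative global-mean numbers (the values are hypothetical, not the paper's):

```python
def cycling_rate_per_day(precip_mm_day: float, tpw_mm: float) -> float:
    """Bulk water-vapor cycling rate: mean precipitation rate divided by
    total precipitable water (both as liquid-equivalent depths)."""
    return precip_mm_day / tpw_mm

def residence_time_days(precip_mm_day: float, tpw_mm: float) -> float:
    """Mean atmospheric residence time of water vapor, the reciprocal
    of the cycling rate."""
    return tpw_mm / precip_mm_day
```

If warming raises precipitable water proportionally more than precipitation, residence time lengthens and the cycling rate falls, which is the paper's qualitative result.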
Wesolowski, Edwin A.
1996-01-01
Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (upstream to downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both the ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen. The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty.
Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14. During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, the instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient. The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are reaeration rate, sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
Initial validation of a virtual-reality robotic simulator.
Lendvay, Thomas S; Casale, Pasquale; Sweet, Robert; Peters, Craig
2008-09-01
Robotic surgery is an accepted adjunct to minimally invasive surgery, but training is restricted to console time. Virtual-reality (VR) simulation has been shown to be effective for laparoscopic training, so we sought to validate a novel VR robotic simulator. The American Urological Association (AUA) Office of Education approved this study. Subjects enrolled in a robotics training course at the 2007 AUA annual meeting underwent skills training in a da Vinci dry-lab module and a virtual-reality robotics module which included a three-dimensional (3D) VR robotic simulator. Demographic and acceptability data were obtained, and performance metrics from the simulator were compared between experienced and nonexperienced roboticists for a ring transfer task. Fifteen subjects, four with previous robotic surgery experience and 11 without, participated. Nine subjects were still in urology training, and nearly half of the group reported playing video games. Overall performance of the da Vinci system and the simulator was deemed acceptable by Likert-scale (0-6) ratings of 5.23 and 4.69, respectively. Experienced subjects outperformed nonexperienced subjects on the simulator on three metrics: total task time (96 s versus 159 s, P < 0.02), economy of motion (1,301 mm versus 2,095 mm, P < 0.04), and time the telemanipulators spent outside of the center of the platform's workspace (4 s versus 35 s, P < 0.02). This is the first demonstration of face and construct validity of a virtual-reality robotic simulator. Further studies assessing predictive validity are ultimately required to support incorporation of VR robotic simulation into training curricula.
Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars; Cayé-Thomasen, Per; Sørensen, Mads Sølvsten
2016-01-01
The cognitive load (CL) theoretical framework suggests that working memory is limited, which has implications for learning and skills acquisition. Complex learning situations such as surgical skills training can potentially induce a cognitive overload, inhibiting learning. This study aims to compare CL in traditional cadaveric dissection training and virtual reality (VR) simulation training of mastoidectomy. A prospective, crossover study. Participants performed cadaveric dissection before VR simulation of the procedure or vice versa. CL was estimated by secondary-task reaction time testing at baseline and during the procedure in both training modalities. The national Danish temporal bone course. A total of 40 novice otorhinolaryngology residents. Reaction time was increased by 20% in VR simulation training and 55% in cadaveric dissection training of mastoidectomy compared with baseline measurements. Traditional dissection training increased CL significantly more than VR simulation training (p < 0.001). VR simulation training imposed a lower CL than traditional cadaveric dissection training of mastoidectomy. Learning complex surgical skills can be a challenge for the novice and mastoidectomy skills training could potentially be optimized by employing VR simulation training first because of the lower CL. Traditional dissection training could then be used to supplement skills training after basic competencies have been acquired in the VR simulation. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Paschek, Dietmar; Nymeyer, Hugh; García, Angel E
2007-03-01
We simulate the folding/unfolding equilibrium of the 20-residue miniprotein Trp-cage. We use replica exchange molecular dynamics simulations of the AMBER94 atomic detail model of the protein explicitly solvated by water, starting from a completely unfolded configuration. We employ a total of 40 replicas, covering the temperature range between 280 and 538 K. Individual simulation lengths of 100 ns sum up to a total simulation time of about 4 μs. Without any bias, we observe the folding of the protein into the native state with an unfolding-transition temperature of about 440 K. The native state is characterized by a distribution of root mean square distances (RMSD) from the NMR data that peaks at 1.8 Å, and is as low as 0.4 Å. We show that equilibration times of about 40 ns are required to yield convergence. A folded configuration in the entire extended ensemble is found to have a lifetime of about 31 ns. In a clamp-like motion, the Trp-cage opens up during thermal denaturation. In line with fluorescence quenching experiments, the Trp-residue side chain gets hydrated when the protein opens up, roughly doubling the number of water molecules in the first solvation shell. We find the helical propensity of the helical domain of Trp-cage rather well preserved even at very high temperatures. In the folded state, we can identify states with one and two buried internal water molecules interconnecting parts of the Trp-cage molecule by hydrogen bonds. The loss of hydrogen bonds of these buried water molecules in the folded state with increasing temperature is likely to destabilize the folded state at elevated temperatures.
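In replica exchange MD of the kind used above, neighboring temperature replicas periodically attempt to swap configurations under a Metropolis criterion; a minimal sketch of that acceptance rule (the standard textbook form, not the authors' code; units and function name are ours):

```python
import math

def rex_swap_probability(e_i, e_j, t_i, t_j, k_b=0.0019872):
    """Metropolis probability of swapping configurations between replicas
    at temperatures t_i, t_j (K) with potential energies e_i, e_j
    (kcal/mol; k_b in kcal/mol/K).
    P = min(1, exp[(beta_i - beta_j) * (e_i - e_j)])."""
    beta_i = 1.0 / (k_b * t_i)
    beta_j = 1.0 / (k_b * t_j)
    return min(1.0, math.exp((beta_i - beta_j) * (e_i - e_j)))
```

Spacing the 40 replicas so that neighboring energy distributions overlap (roughly geometric spacing over 280-538 K) keeps this acceptance probability usefully high across the ladder.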
Hogg, Melissa E; Tam, Vernissia; Zenati, Mazen; Novak, Stephanie; Miller, Jennifer; Zureikat, Amer H; Zeh, Herbert J
Hepatobiliary surgery is a highly complex, low-volume specialty with long learning curves necessary to achieve optimal outcomes. This creates significant challenges in both training and measuring surgical proficiency. We hypothesize that a virtual reality curriculum with mastery-based simulation is a valid tool to train fellows toward operative proficiency. This study evaluates the content and predictive validity of a robotic simulation curriculum as a first step toward developing a comprehensive, proficiency-based pathway. A mastery-based simulation curriculum was performed in a virtual reality environment. A pretest/posttest experimental design used both virtual reality and inanimate environments to evaluate improvement. Participants self-reported previous robotic experience and assessed the curriculum by rating modules based on difficulty and utility. This study was conducted at the University of Pittsburgh Medical Center (Pittsburgh, PA), a tertiary care academic teaching hospital. A total of 17 surgical oncology fellows enrolled in the curriculum; 16 (94%) completed it. Of the 16 fellows who completed the curriculum, 4 (25%) achieved mastery on all 24 modules; on average, fellows mastered 86% of the modules. Following curriculum completion, individual test scores improved (p < 0.0001). An average of 2.4 attempts was necessary to master each module (range: 1-17). Median time spent completing the curriculum was 4.2 hours (range: 1.1-6.6). Eight fellows (50%) continued practicing modules beyond mastery. Survey results show that "needle driving" and "endowrist 2" modules were perceived as most difficult, although "needle driving" modules were most useful. Overall, 15 (94%) fellows perceived improvement in robotic skills after completing the curriculum. In a cohort of board-certified general surgeons who are novices in robotic surgery, a mastery-based simulation curriculum demonstrated internal validity with overall score improvement.
Time to complete the curriculum was manageable. Published by Elsevier Inc.
Impacts of licensed premises trading hour policies on alcohol-related harms.
Atkinson, Jo-An; Prodan, Ante; Livingston, Michael; Knowles, Dylan; O'Donnell, Eloise; Room, Robin; Indig, Devon; Page, Andrew; McDonnell, Geoff; Wiggers, John
2018-07-01
Evaluations of alcohol policy changes demonstrate that restriction of trading hours of both 'on'- and 'off'-licence venues can be an effective means of reducing rates of alcohol-related harm. Despite this, the effects of different trading hour policy options over time, accounting for different contexts and demographic characteristics, and the common co-occurrence of other harm reduction strategies in trading hour policy initiatives, are difficult to estimate. The aim of this study was to use dynamic simulation modelling to compare estimated impacts over time of a range of trading hour policy options on various indicators of acute alcohol-related harm. An agent-based model of alcohol consumption in New South Wales, Australia was developed using existing research evidence, analysis of available data and a structured approach to incorporating expert opinion. Five policy scenarios were simulated, including restrictions to trading hours of on-licence venues and extensions to trading hours of bottle shops. The impact of the scenarios on four measures of alcohol-related harm were considered: total acute harms, alcohol-related violence, emergency department (ED) presentations and hospitalizations. Simulation of a 3 a.m. (rather than 5 a.m.) closing time resulted in an estimated 12.3 ± 2.4% reduction in total acute alcohol-related harms, a 7.9 ± 0.8% reduction in violence, an 11.9 ± 2.1% reduction in ED presentations and a 9.5 ± 1.8% reduction in hospitalizations. Further reductions were achieved simulating a 1 a.m. closing time, including a 17.5 ± 1.1% reduction in alcohol-related violence. Simulated extensions to bottle shop trading hours resulted in increases in rates of all four measures of harm, although most of the effects came from increasing operating hours from 10 p.m. to 11 p.m. 
An agent-based simulation model suggests that restricting trading hours of licensed venues reduces rates of alcohol-related harm and extending trading hours of bottle shops increases rates of alcohol-related harm. The model can estimate the effects of a range of policy options. © 2018 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
Learning styles and the prospective ophthalmologist.
Modi, Neil; Williams, Olayinka; Swampillai, Andrew J; Waqar, Salman; Park, Jonathan; Kersey, Thomas L; Sleep, Tamsin
2015-04-01
Understanding the learning styles of individual trainees may enable trainers to tailor an educational program and optimise learning. Surgical trainees have previously been shown to demonstrate a tendency towards particular learning styles. We seek to clarify the relationship between learning style and learned surgical performance using a simulator, prior to surgical training. The Kolb Learning Style Inventory was administered to a group of thirty junior doctors. Participants were then asked to perform a series of tasks using the EyeSi virtual reality cataract surgery simulator (VR Magic, Mannheim, Germany). All completed a standard introductory programme to eliminate the learning curve, then undertook four attempts of the level 4 forceps module binocularly. Total score, odometer movement (mm), corneal area injured (mm²), lens area injured (mm²) and total time taken (seconds) were recorded. Mean age was 31.6 years. No significant correlation was found between any learning style and any variable on the EyeSi cataract surgery simulator. There is a predominant learning style amongst surgical residents. There is, however, no demonstrable learning style that results in a better (or worse) performance on the EyeSi surgery simulator, and hence in learning and performing cataract surgery.
Numerical aerodynamic simulation facility preliminary study, volume 1
NASA Technical Reports Server (NTRS)
1977-01-01
A technology forecast was established for the 1980-1985 time frame and the appropriateness of various logic and memory technologies for the design of the numerical aerodynamic simulation facility was assessed. Flow models and their characteristics were analyzed and matched against candidate processor architecture. Metrics were established for the total facility, and housing and support requirements of the facility were identified. An overview of the system is presented, with emphasis on the hardware of the Navier-Stokes solver, which is the key element of the system. Software elements of the system are also discussed.
Transient thermal modeling of the nonscanning ERBE detector
NASA Technical Reports Server (NTRS)
Mahan, J. R.
1983-01-01
A numerical model to predict the transient thermal response of the ERBE nonscanning wide field of view total radiometer channel was developed. The model, which uses Monte Carlo techniques to characterize the radiative component of heat transfer, is described and a listing of the computer program is provided. Application of the model to simulate the actual blackbody calibration procedure is discussed. The use of the model to establish a real time flight data interpretation strategy is recommended. Modification of the model to include a simulated Earth radiation source field and a filter dome is indicated.
Performance simulation for the design of solar heating and cooling systems
NASA Technical Reports Server (NTRS)
Mccormick, P. O.
1975-01-01
Suitable approaches for evaluating the performance and cost of a solar heating and cooling system are considered, taking into account the value of a computer simulation of the entire system, given the large number of parameters involved. Operational relations concerning collector efficiency for a new improved collector and a reference collector are presented in a graph. Total costs for solar and conventional heating, ventilation, and air conditioning systems as a function of time are shown in another graph.
A generalized framework for nucleosynthesis calculations
NASA Astrophysics Data System (ADS)
Sprouse, Trevor; Mumpower, Matthew; Aprahamian, Ani
2014-09-01
Simulating astrophysical events is a difficult process, requiring a detailed pairing of knowledge from both astrophysics and nuclear physics. Astrophysics guides the thermodynamic evolution of an astrophysical event. We present a nucleosynthesis framework written in Fortran that combines as inputs a thermodynamic evolution and nuclear data to time-evolve the abundances of nuclear species. Through our coding practices, we have emphasized the applicability of our framework to any astrophysical event, including those involving nuclear fission. Because these calculations are often very complicated, our framework dynamically optimizes itself based on the conditions at each time step in order to greatly minimize total computation time. To highlight the power of this new approach, we demonstrate the use of our framework to simulate both Big Bang nucleosynthesis and r-process nucleosynthesis with speeds competitive with current solutions dedicated to either process alone.
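At its core, such a framework integrates coupled rate equations for species abundances along the supplied thermodynamic trajectory. A toy explicit integration of a linear decay chain gives the flavor (a deliberately simplified stand-in; real network codes use implicit, adaptively stepped solvers and far larger reaction sets):

```python
def evolve_decay_chain(y0, lam, dt, n_steps):
    """Time-evolve abundances of a chain y[0] -> y[1] -> ... -> y[-1]
    with decay constants lam[i] (per unit time), using explicit Euler
    steps. len(lam) must be len(y0) - 1."""
    y = list(y0)
    for _ in range(n_steps):
        flows = [lam[i] * y[i] * dt for i in range(len(lam))]
        for i, f in enumerate(flows):
            y[i] -= f          # species i decays away...
            y[i + 1] += f      # ...feeding its daughter
    return y
```

Because each flow is subtracted from a parent and added to its daughter, total abundance is conserved by construction, a sanity check any network solver must satisfy.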
NASA Astrophysics Data System (ADS)
Winter, Jan; Rapp, Stephan; Schmidt, Michael; Huber, Heinz P.
2017-09-01
In this paper, we present ultrafast measurements of the complex refractive index for copper up to a time delay of 20 ps with an accuracy <1% at laser fluences in the vicinity of the ablation threshold. The measured refractive index n and extinction coefficient k are supported by a simulation including the two-temperature model with an accurate description of thermal and optical properties and a thermomechanical model. Comparison of the measured time resolved optical properties with results of the simulation reveals underlying physical mechanisms in three distinct time delay regimes. It is found that in the early stage (-5 ps to 0 ps) the thermally excited d-band electrons make a major contribution to the laser pulse absorption and create a steep increase in transient optical properties n and k. In the second time regime (0-10 ps) the material expansion influences the plasma frequency, which is also reflected in the transient extinction coefficient. In contrast, the refractive index n follows the total collision frequency. Additionally, the electron-ion thermalization time can be attributed to a minimum of the extinction coefficient at ∼10 ps. In the third time regime (10-20 ps) the transient extinction coefficient k indicates the surface cooling-down process.
Cyr, Andrée-Ann; Stinchcombe, Arne; Gagnon, Sylvain; Marshall, Shawn; Hing, Malcolm Man-Son; Finestone, Hillel
2009-05-01
This study examined the role of impaired divided attention and speed of processing in traumatic brain injury (TBI) drivers in high-crash-risk simulated road events. A total of 17 TBI drivers and 16 healthy participants were exposed to four challenging simulated roadway events to which behavioral reactions were recorded. Participants were also asked to perform a dual task during portions of the driving task, and TBI individuals were administered standard measures of divided attention and reaction time. Results indicated that the TBI group crashed significantly more than controls (p < .05) and that dual-task performance correlated significantly with crash rate (r = .58, p = .05).
Development of mpi_EPIC model for global agroecosystem modeling
Kang, Shujiang; Wang, Dali; Nichols, Jeff A.; ...
2014-12-31
Models that address policy-maker concerns about multi-scale effects of food and bioenergy production systems are computationally demanding. We integrated the message passing interface algorithm into the process-based EPIC model to accelerate computation of ecosystem effects. Simulation performance was further enhanced by applying the Vampir framework. When this enhanced mpi_EPIC model was tested, total execution time for a global 30-year simulation of a switchgrass cropping system was shortened to less than 0.5 hours on a supercomputer. The results illustrate that mpi_EPIC using parallel design can balance simulation workloads and facilitate large-scale, high-resolution analysis of agricultural production systems, management alternatives and environmental effects.
Carbonaceous aerosols and Impacts on regional climate over South Asia
NASA Astrophysics Data System (ADS)
Pathak, B.; Parottil, A.
2017-12-01
A comprehensive assessment of the effects of carbonaceous aerosols on the regional climate of the South Asia CORDEX domain is carried out using the ICTP-developed Regional Climate Model version 4 (RegCM 4.4). Five simulations are performed: (a) carbonaceous aerosols with feedback to the meteorological field (EXP1), (b) carbonaceous aerosols without feedback to the meteorological field (EXP2), (c) only black carbon with feedback to the meteorological field (EXP3), (d) only black carbon without feedback to the meteorological field (EXP4), and (e) meteorology only (CNTL). All five experiments are integrated continuously from 01 January 2008 to 01 January 2012 at a horizontal resolution of 50 km, with the first year treated as spin-up time. The simulated meteorology for all the simulations is validated by comparison with observations. The influence of carbonaceous aerosols on direct radiative forcing (DRF) at the top of the atmosphere (TOA) and within the atmosphere (ATM) over the South Asian region, with a focus on the Indian subcontinent, is assessed. The contribution of black carbon to the total DRF and its significance is analyzed. Modulation of precipitation and temperature through the aerosol-climate feedback is studied by comparing the meteorological parameters in CNTL with the CARB/BC simulations with and without feedback. In general, black carbon is found to reduce precipitation and wind over the region more strongly than total carbonaceous aerosols. The role of black carbon in warming the surface is investigated by comparing the RegCM simulation considering both biomass burning and anthropogenic emissions with simulations considering only anthropogenic emissions.
NASA Astrophysics Data System (ADS)
Wang, R.; Zhao, M.; Hu, Y.; Guo, S.
2016-12-01
Responses of soil CO2 emission to natural precipitation play an essential role in regulating regional C cycling. With more erratic precipitation regimes, most likely with more frequent heavy rainstorms, projected into the future, extreme precipitation would potentially affect local soil moisture, plant growth, microbial communities, and, in turn, soil CO2 emissions. However, responses of soil CO2 emissions to extreme precipitation have not yet been systematically investigated. Such responses could be of particular importance for rainfed arable soil in semi-arid regions, where soil microbial respiration stress is highly sensitive to the temporal distribution of natural precipitation. In this study, a simulated experiment was conducted on bare loess soil from the semi-arid Chinese Loess Plateau. Three precipitation regimes with total precipitation amounts of 150 mm, 300 mm and 600 mm were carried out to simulate an extremely dry, business-as-usual, and extremely wet summer. The three regimes were individually implemented by wetting soils in a series of sub-events (10 mm or 150 mm). CO2 emissions from the surface soil were continuously measured in situ for one month. The results show that: 1) Evident CO2 emission pulses were observed immediately after applying sub-events, and cumulative CO2 emissions from events with a total amount of 600 mm were greater than those from 150 mm. 2) In particular, for the same total amount of 600 mm, wetting regimes applying four 150 mm sub-events resulted in 20% less CO2 emissions than applying sixty 10 mm sub-events, mostly because the harsh 150 mm storms introduced more days of over-wet soil microbial respiration stress (moisture > 28%). 3) In contrast, for the same total amount of 150 mm, CO2 emissions from wetting regimes applying fifteen 10 mm sub-events were 22% lower than from wetting at once with 150 mm of water, probably because the resulting deficiency of soil moisture produced more days of over-dry soil microbial respiration stress (moisture < 15%). Overall, soil CO2 emissions not only responded to the total precipitation amount but were also sensitive to the precipitation regime. Such differentiated responses of CO2 emissions highlight the necessity of properly accounting for the precipitation regime, not just the total amount, when projecting global carbon cycling under future climate scenarios.
Debes, Anders J; Aggarwal, Rajesh; Balasundaram, Indran; Jacobsen, Morten B
2010-06-01
This study aimed to assess the transferability of basic laparoscopic skills between a virtual reality simulator (MIST-VR) and a video trainer box (D-Box). Forty-six medical students were randomized into 2 groups, training on MIST-VR or D-Box. After training with one modality, a crossover assessment on the other was performed. When tested on MIST-VR, the MIST-VR group showed significantly shorter time (90.3 seconds vs 188.6 seconds, P <.001), better economy of movements (4.40 vs 7.50, P <.001), and lower score (224.7 vs 527.0, P <.001). However, when assessed on the D-Box, there was no difference between the groups for time (402.0 seconds vs 325.6 seconds, P = .152), total hand movements (THC) (289 vs 262, P = .792), or total path length (TPL) (34.9 m vs 34.6 m, P = .388). Both simulators provide significant improvement in performance. Our results indicate that skills learned on the MIST-VR are transferable to the D-Box, but the opposite cannot be demonstrated. Copyright 2010 Elsevier Inc. All rights reserved.
Thain, Peter K; Bleakley, Christopher M; Mitchell, Andrew C S
2015-07-01
Cryotherapy is used widely in sport and exercise medicine to manage acute injuries and facilitate rehabilitation. The analgesic effects of cryotherapy are well established; however, a potential caveat is that cooling tissue negatively affects neuromuscular control through delayed muscle reaction time. This topic is important to investigate because athletes often return to exercise, rehabilitation, or competitive activity immediately or shortly after cryotherapy. To compare the effects of wet-ice application, cold-water immersion, and an untreated control condition on peroneus longus and tibialis anterior muscle reaction time during a simulated lateral ankle sprain. Randomized controlled clinical trial. University of Hertfordshire human performance laboratory. A total of 54 physically active individuals (age = 20.1 ± 1.5 years, height = 1.7 ± 0.07 m, mass = 66.7 ± 5.4 kg) who had no injury or history of ankle sprain. Wet-ice application, cold-water immersion, or an untreated control condition applied to the ankle for 10 minutes. Muscle reaction time and muscle amplitude of the peroneus longus and tibialis anterior in response to a simulated lateral ankle sprain were calculated. The ankle-sprain simulation incorporated a combined inversion and plantar-flexion movement. We observed no change in muscle reaction time or muscle amplitude after cryotherapy for either the peroneus longus or tibialis anterior (P > .05). Ten minutes of joint cooling did not adversely affect muscle reaction time or muscle amplitude in response to a simulated lateral ankle sprain. These findings suggested that athletes can safely return to sporting activity immediately after icing. Further evidence showed that ice can be applied before ankle rehabilitation without adversely affecting dynamic neuromuscular control. Investigation in patients with acute ankle sprains is warranted to assess the clinical applicability of these interventions.
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
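The antithetic-variates technique described above pairs each uniform draw u with its mirror 1 - u, inducing negative correlation between the paired estimates. The following is a minimal Python sketch of the idea on a toy integrand; it is not the published UKPDS 68 diabetes model, and the function names are assumptions.

```python
import random

# Antithetic variates in miniature: pair each uniform draw u with 1 - u.
# For a monotone integrand the pair members are negatively correlated,
# so the paired average has lower variance than two independent draws.
# The integrand below is a toy stand-in, not the diabetes model above.

def mc_estimate(f, n, antithetic=False, seed=1):
    """Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    if antithetic:
        pairs = n // 2
        total = sum(0.5 * (f(u) + f(1.0 - u))
                    for u in (rng.random() for _ in range(pairs)))
        return total / pairs
    return sum(f(rng.random()) for _ in range(n)) / n
```

With f(u) = u**2 (true mean 1/3), the antithetic estimator typically reaches a given precision with roughly half the replications, in line with the ~53% reduction reported above.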
A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall
NASA Astrophysics Data System (ADS)
Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.
2017-06-01
Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to real-world rainfall time series is shown as a proof of concept.
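The consistency requirement above (fine-scale values summing exactly to the coarse-scale total) is the core of any adjusting procedure. The following is a minimal sketch of the simplest proportional rule; the paper's own procedure is more elaborate, also preserving the marginal distribution, and the function name is an assumption.

```python
# Proportional adjustment sketch: rescale the generated fine-scale
# sequence so its values sum exactly to the given coarse-scale total.
# Dry (zero) cells stay dry, preserving intermittency.
# Illustrative only; not the distribution-preserving procedure above.

def proportional_adjust(fine, coarse_total):
    """Rescale `fine` so that sum(fine) == coarse_total exactly."""
    s = sum(fine)
    if s == 0.0:            # fully dry interval: nothing to rescale
        return list(fine)
    return [x * coarse_total / s for x in fine]

adjusted = proportional_adjust([0.0, 2.0, 1.0, 0.0, 3.0], coarse_total=12.0)
# Wet cells are scaled by 12/6 = 2; zeros are untouched.
```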
Feedback control of vibrations in a moving flexible robot arm with rotary and prismatic joints
NASA Technical Reports Server (NTRS)
Wang, P. K. C.; Wei, Jin-Duo
1987-01-01
A robot with a long extendible flexible arm which can also undergo both vertical translation and rotary motion is considered. First, a distributed-parameter model for the robot arm dynamics is developed. It is found that the extending motion can enhance the arm vibrations. Then, a Galerkin-type approximation based on an appropriate time-dependent basis for the solution space is used to obtain an approximate finite-dimensional model for simulation studies. A feedback control law for damping the motion-induced vibrations is derived by considering the time rate of change of the total vibrational energy of the flexible arm. The authors conclude with simulation results for a special case with the proposed control law.
Zhou, Yuan; Ancker, Jessica S; Upadhye, Mandar; McGeorge, Nicolette M; Guarrera, Theresa K; Hegde, Sudeep; Crane, Peter W; Fairbanks, Rollin J; Bisantz, Ann M; Kaushal, Rainu; Lin, Li
2013-01-01
The effect of health information technology (HIT) on efficiency and workload among clinical and nonclinical staff has been debated, with conflicting evidence about whether electronic health records (EHRs) increase or decrease effort. No study to date, however, has examined the effect of interoperability quantitatively using discrete event simulation techniques. To estimate the impact of EHR systems with various levels of interoperability on day-to-day tasks and operations of ambulatory physician offices. Interviews and observations were used to collect workflow data from 12 adult primary and specialty practices. A discrete event simulation model was constructed to represent patient flows and clinical and administrative tasks of physicians and staff members. High levels of EHR interoperability were associated with reduced time spent by providers on four tasks: preparing lab reports, requesting lab orders, prescribing medications, and writing referrals. The implementation of an EHR was associated with less time spent by administrators but more time spent by physicians, compared with time spent at paper-based practices. In addition, the presence of EHRs and of interoperability did not significantly affect the time usage of registered nurses or the total visit time and waiting time of patients. This study suggests that the impact of using HIT on clinical and nonclinical staff work efficiency varies; overall, however, it appears to improve time efficiency more for administrators than for physicians and nurses.
Simulation of Clinical Diagnosis: A Comparative Study
de Dombal, F. T.; Horrocks, Jane C.; Staniland, J. R.; Gill, P. W.
1971-01-01
This paper presents a comparison between three different modes of simulation of the diagnostic process—a computer-based system, a verbal mode, and a further mode in which cards were selected from a large board. A total of 34 subjects worked through a series of 444 diagnostic simulations. The verbal mode was found to be most enjoyable and realistic. At the board, considerable amounts of extra irrelevant data were selected. At the computer, the users asked the same questions every time, whether or not they were relevant to the particular diagnosis. They also found the teletype distracting, noisy, and slow. The need for an acceptable simulation system remains, and at present our Minisim and verbal modes are proving useful in training junior clinical students. Future simulators should be flexible, economical, and acceptably realistic—and to us this latter criterion implies the two-way use of speech. We are currently developing and testing such a system. PMID:5579197
Man-rated flight software for the F-8 DFBW program
NASA Technical Reports Server (NTRS)
Bairnsfather, R. R.
1975-01-01
The design, implementation, and verification of the flight control software used in the F-8 DFBW program are discussed. Since the DFBW utilizes an Apollo computer and hardware, the procedures, controls, and basic management techniques employed are based on those developed for the Apollo software system. Program Assembly Control, simulator configuration control, erasable-memory load generation, change procedures and anomaly reporting are discussed. The primary verification tools--the all-digital simulator, the hybrid simulator, and the Iron Bird simulator--are described, as well as the program test plans and their implementation on the various simulators. Failure-effects analysis and the creation of special failure-generating software for testing purposes are described. The quality of the end product is evidenced by the F-8 DFBW flight test program in which 42 flights, totaling 58 hours of flight time, were successfully made without any DFCS inflight software, or hardware, failures.
A microprocessor-controlled tracheal insufflation-assisted total liquid ventilation system.
Parker, James Courtney; Sakla, Adel; Donovan, Francis M; Beam, David; Chekuri, Annu; Al-Khatib, Mohammad; Hamm, Charles R; Eyal, Fabien G
2009-09-01
A prototype time cycled, constant volume, closed circuit perfluorocarbon (PFC) total liquid ventilator system is described. The system utilizes microcontroller-driven display and master control boards, gear motor pumps, and three-way solenoid valves to direct flow. A constant tidal volume and functional residual capacity (FRC) are maintained with feedback control using end-expiratory and end-inspiratory stop-flow pressures. The system can also provide a unique continuous perfusion (bias flow, tracheal insufflation) through one lumen of a double-lumen endotracheal catheter to increase washout of dead space liquid. FRC and arterial blood gases were maintained during ventilation with Rimar 101 PFC over 2-3 h in normal piglets and piglets with simulated pulmonary edema induced by instillation of albumin solution. Addition of tracheal insufflation flow significantly improved the blood gases and enhanced clearance of instilled albumin solution during simulated edema.
NASA Astrophysics Data System (ADS)
Rabin, Sam S.; Ward, Daniel S.; Malyshev, Sergey L.; Magi, Brian I.; Shevliakova, Elena; Pacala, Stephen W.
2018-03-01
This study describes and evaluates the Fire Including Natural & Agricultural Lands model (FINAL) which, for the first time, explicitly simulates cropland and pasture management fires separately from non-agricultural fires. The non-agricultural fire module uses empirical relationships to simulate burned area in a quasi-mechanistic framework, similar to past fire modeling efforts, but with a novel optimization method that improves the fidelity of simulated fire patterns to new observational estimates of non-agricultural burning. The agricultural fire components are forced with estimates of cropland and pasture fire seasonality and frequency derived from observational land cover and satellite fire datasets. FINAL accurately simulates the amount, distribution, and seasonal timing of burned cropland and pasture over 2001-2009 (global totals: 0.434×10⁶ and 2.02×10⁶ km² yr⁻¹ modeled, 0.454×10⁶ and 2.04×10⁶ km² yr⁻¹ observed), but carbon emissions for cropland and pasture fire are overestimated (global totals: 0.295 and 0.706 PgC yr⁻¹ modeled, 0.194 and 0.538 PgC yr⁻¹ observed). The non-agricultural fire module underestimates global burned area (1.91×10⁶ km² yr⁻¹ modeled, 2.44×10⁶ km² yr⁻¹ observed) and carbon emissions (1.14 PgC yr⁻¹ modeled, 1.84 PgC yr⁻¹ observed). The spatial pattern of total burned area and carbon emissions is generally well reproduced across much of sub-Saharan Africa, Brazil, Central Asia, and Australia, whereas the boreal zone sees underestimates. FINAL represents an important step in the development of global fire models, and offers a strategy for fire models to consider human-driven fire regimes on cultivated lands. At the regional scale, simulations would benefit from refinements in the parameterizations and improved optimization datasets. We include an in-depth discussion of the lessons learned from using the Levenberg-Marquardt algorithm in an interactive optimization for a dynamic global vegetation model.
Zero dimensional model of atmospheric SMD discharge and afterglow in humid air
NASA Astrophysics Data System (ADS)
Smith, Ryan; Kemaneci, Efe; Offerhaus, Bjoern; Stapelmann, Katharina; Peter Brinkmann, Ralph
2016-09-01
A novel mesh-like Surface Micro Discharge (SMD) device designed for surface wound treatment is simulated by multiple time-scaled zero-dimensional models. The chemical dynamics of the discharge are resolved in time at atmospheric pressure under humid conditions. The model tracks the particle densities of electrons, 26 ionic species, and 26 reactive neutral species, including O3, NO, and HNO3. The 53 described species are coupled by 624 reactions within the simulated plasma discharge volume. The neutral species are allowed to diffuse into a diffusive gas regime, which is of primary interest. Two interdependent zero-dimensional models, separated by nine orders of magnitude in temporal resolution, are used to accomplish this, thereby reducing the computational load. Through variation of control parameters such as ignition frequency, deposited power density, duty cycle, humidity level, and N2 content, the ideal operating conditions for the SMD device can be predicted. The described model has been verified by matching simulation parameters and comparing results to those of previous works. Current operating conditions of the experimental mesh-like SMD were matched and the results are compared to the simulations. Work supported by SFB TR 87.
Construct validity of the LapVR virtual-reality surgical simulator.
Iwata, Naoki; Fujiwara, Michitaka; Kodera, Yasuhiro; Tanaka, Chie; Ohashi, Norifumi; Nakayama, Goro; Koike, Masahiko; Nakao, Akimasa
2011-02-01
Laparoscopic surgery requires fundamental skills peculiar to endoscopic procedures such as eye-hand coordination. Acquisition of such skills prior to performing actual surgery is highly desirable for favorable outcome. Virtual-reality simulators have been developed for both surgical training and assessment of performance. The aim of the current study is to show construct validity of a novel simulator, LapVR (Immersion Medical, San Jose, CA, USA), for Japanese surgeons and surgical residents. Forty-four subjects were divided into the following three groups according to their experience in laparoscopic surgery: 14 residents (RE) with no experience in laparoscopic surgery, 14 junior surgeons (JR) with little experience, and 16 experienced surgeons (EX). All subjects executed "essential task 1" programmed in the LapVR, which consists of six tasks, resulting in automatic measurement of 100 parameters indicating various aspects of laparoscopic skills. Time required for each task tended to be inversely correlated with experience in laparoscopic surgery. For the peg transfer skill, statistically significant differences were observed between EX and RE in three parameters, including total time and average time taken to complete the procedure and path length for the nondominant hand. For the cutting skill, similar differences were observed between EX and RE in total time, number of unsuccessful cutting attempts, and path length for the nondominant hand. According to the programmed comprehensive evaluation, performance in terms of successful completion of the task and actual experience of the participants in laparoscopic surgery correlated significantly for the peg transfer (P=0.007) and cutting skills (P=0.026). The peg transfer and cutting skills could best distinguish between EX and RE. This study is the first to provide evidence that LapVR has construct validity to discriminate between novice and experienced laparoscopic surgeons.
Puślecki, Mateusz; Ligowski, Marcin; Dąbrowski, Marek; Stefaniak, Sebastian; Ładzińska, Małgorzata; Pawlak, Aleksander; Zieliński, Marcin; Szarpak, Łukasz; Perek, Bartłomiej; Jemielity, Marek
2018-04-18
Despite advances in mechanical ventilation, severe acute respiratory distress syndrome (ARDS) is associated with high morbidity and mortality rates ranging from 30% to 60%. Extracorporeal Membrane Oxygenation (ECMO) can be used as a "bridge to recovery". ECMO is a complex network that provides oxygenation and ventilation and allows the lungs to rest and recover from respiratory failure, while minimizing iatrogenic ventilator-induced lung injury. In critical care settings, ECMO has been shown to improve survival rates and outcomes in patients with severe ARDS. The primary objective was to present an innovative approach of using high-fidelity medical simulation before establishing an ECMO program for reversible respiratory failure (RRF) in Poland's first unique regional program "ECMO for Greater Poland", covering a total population of 3.5 million inhabitants in the Greater Poland region (Wielkopolska). Because this organizational model is complex and expensive, we used advanced high-fidelity medical simulation to prepare for the real-life implementation. An algorithm was proposed for respiratory treatment with veno-venous (VV) ECMO. The scenario included all critical stages: hospital identification (Regional Department of Intensive Care); matching of inclusion and exclusion criteria using an author-developed protocol; ECMO team transport; therapy confirmation; veno-venous cannulation of the mannequin's artificial vessels; and implementation of perfusion therapy and transport with ECMO to another hospital in a provincial city (Clinical Department of Intensive Care), where the VV ECMO therapy was continued for the next 48 h, serving as a training platform. The total time, defined as the time from first contact with the mannequin to cannulation of the artificial vessels and the start of VV perfusion on ECMO, did not exceed 3 h, including 75 min of transport (the total simulation time, from the first call from the provincial hospital to admission to the Clinical Intensive Care department, was 5 h). The subsequent 48 h of "in situ" perfusion simulation provided a dedicated learning platform for intensive care personnel. Shortly after this simulation, we performed the region's first ECMO treatment for RRF. The transport was successful and exceeded 120 km. During the first year of the program, we performed 6 successful ECMO transports (5 adult and 1 paediatric), with 60% adult survival of ECMO therapies. Three patients were discharged home in good condition. A two-year-old patient was successfully weaned from ECMO and is being treated in stable condition in the Paediatric Department. We discovered the important role of medical simulation, not only as an examination of medical professionals' skills, but also as a mechanism for creating procedures that do not yet exist. During debriefing, it was found that the preceding simulation-based training made it possible to build a successful procedural chain and to eliminate errors at the stages of identification, notification, transportation, and provision of ECMO perfusion therapy. Copyright © 2018. Published by Elsevier Inc.
Amplified total internal reflection: theory, analysis, and demonstration of existence via FDTD.
Willis, Keely J; Schneider, John B; Hagness, Susan C
2008-02-04
The explanation of wave behavior upon total internal reflection from a gainy medium has defied consensus for 40 years. We examine this question using both the finite-difference time-domain (FDTD) method and theoretical analyses. FDTD simulations of a localized wave impinging on a gainy half space are based directly on Maxwell's equations and make no underlying assumptions. They reveal that amplification occurs upon total internal reflection from a gainy medium; conversely, amplification does not occur for incidence below the critical angle. Excellent agreement is obtained between the FDTD results and an analytical formulation that employs a new branch cut in the complex "propagation-constant" plane.
Assessing summertime urban air conditioning consumption in a semiarid environment
NASA Astrophysics Data System (ADS)
Salamanca, F.; Georgescu, M.; Mahalov, A.; Moustaoui, M.; Wang, M.; Svoma, B. M.
2013-09-01
Evaluation of built environment energy demand is necessary in light of global projections of urban expansion. Of particular concern are rapidly expanding urban areas in environments where consumption requirements for cooling are excessive. Here, we simulate urban air conditioning (AC) electric consumption for several extreme heat events during summertime over a semiarid metropolitan area with the Weather Research and Forecasting (WRF) model coupled to a multilayer building energy scheme. Observed total load values obtained from an electric utility company were split into two parts, one linked to meteorology (i.e., AC consumption) which was compared to WRF simulations, and another to human behavior. WRF-simulated non-dimensional AC consumption profiles compared favorably to diurnal observations in terms of both amplitude and timing. The hourly ratio of AC to total electricity consumption accounted for ~53% of diurnally averaged total electric demand, ranging from ~35% during early morning to ~65% during evening hours. Our work highlights the importance of modeling AC electricity consumption and its role for the sustainable planning of future urban energy needs. Finally, the methodology presented in this article establishes a new energy consumption-modeling framework that can be applied to any urban environment where the use of AC systems is prevalent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, Bing-Ang; Li, Bin; Lin, Jie
2017-10-27
This paper aims to understand the effect of nanoarchitecture on the performance of pseudocapacitive electrodes consisting of a conducting scaffold coated with pseudocapacitive material. To do so, two-dimensional numerical simulations of ordered conducting nanorods coated with a thin film of pseudocapacitive material were performed. The simulations reproduced three-electrode cyclic voltammetry measurements based on a continuum model derived from first principles. Two empirical approaches commonly used experimentally to characterize the contributions of surface-controlled and diffusion-controlled charge storage mechanisms to the total current density with respect to scan rate were theoretically validated for the first time. Moreover, the areal capacitive capacitance, attributed to electric double layer (EDL) formation, remained constant and independent of electrode dimensions at low scan rates. However, at high scan rates, it decreased with decreasing conducting nanorod radius and increasing pseudocapacitive layer thickness due to resistive losses. By contrast, the gravimetric faradaic capacitance, due to reversible faradaic reactions, decreased continuously with increasing scan rate and pseudocapacitive layer thickness but was independent of conducting nanorod radius. Note that the total gravimetric capacitance predicted numerically featured values comparable to experimental measurements. Finally, an optimum pseudocapacitive layer thickness that maximizes total areal capacitance was identified as a function of scan rate and confirmed by scaling analysis.
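The "two empirical approaches" for separating surface-controlled from diffusion-controlled current are commonly the power-law b-value analysis (i = a·v^b) and the decomposition i = k1·v + k2·v^0.5. A minimal sketch on synthetic data (the k1, k2 values and scan rates are illustrative; the abstract does not state that these are the exact approaches validated):

```python
import numpy as np

# Synthetic total current at a fixed potential for several scan rates v,
# built from a surface-controlled term (proportional to v) and a
# diffusion-controlled term (proportional to v^0.5).
v = np.array([0.5, 1.0, 2.0, 5.0, 10.0])     # scan rates (mV/s)
k1_true, k2_true = 2.0, 3.0                   # illustrative coefficients
i_total = k1_true * v + k2_true * np.sqrt(v)

# Approach 1: power law i = a * v**b; b near 1 indicates surface control,
# b near 0.5 indicates diffusion control.
b, log_a = np.polyfit(np.log(v), np.log(i_total), 1)

# Approach 2: regress i / sqrt(v) on sqrt(v) to recover k1 and k2, since
#   i / v^0.5 = k1 * v^0.5 + k2
k1_fit, k2_fit = np.polyfit(np.sqrt(v), i_total / np.sqrt(v), 1)
```

Because the synthetic data follow the two-term model exactly, the fitted k1 and k2 recover the true values, while the mixed mechanism yields a b-value between 0.5 and 1.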
NASA Astrophysics Data System (ADS)
Wild, Oliver; Sundet, Jostein K.; Prather, Michael J.; Isaksen, Ivar S. A.; Akimoto, Hajime; Browell, Edward V.; Oltmans, Samuel J.
2003-11-01
Two closely related chemical transport models (CTMs) employing the same high-resolution meteorological data (˜180 km × ˜180 km × ˜600 m) from the European Centre for Medium-Range Weather Forecasts are used to simulate the ozone total column and tropospheric distribution over the western Pacific region that was explored by the NASA Transport and Chemical Evolution over the Pacific (TRACE-P) measurement campaign in February-April 2001. We make extensive comparisons with ozone measurements from the lidar instrument on the NASA DC-8, with ozonesondes taken during the period around the Pacific Rim, and with TOMS total column ozone. These demonstrate that within the uncertainties of the meteorological data and the constraints of model resolution, the two CTMs (FRSGC/UCI and Oslo CTM2) can simulate the observed tropospheric ozone and do particularly well when realistic stratospheric ozone photochemistry is included. The greatest differences between the models and observations occur in the polluted boundary layer, where problems related to the simplified chemical mechanism and inadequate horizontal resolution are likely to have caused the net overestimation of about 10 ppb mole fraction. In the upper troposphere, the large variability driven by stratospheric intrusions makes agreement very sensitive to the timing of meteorological features.
Bader, Whitney; Bovy, Benoît; Conway, Stephanie; ...
2017-02-14
Changes of atmospheric methane total columns (CH₄) since 2005 have been evaluated using Fourier transform infrared (FTIR) solar observations carried out at 10 ground-based sites affiliated with the Network for Detection of Atmospheric Composition Change (NDACC). From this, we find an increase of atmospheric methane total columns of 0.31 ± 0.03 % year⁻¹ (2σ level of uncertainty) for the 2005–2014 period. Comparisons with in situ methane measurements at both local and global scales show good agreement. We used the GEOS-Chem chemical transport model tagged simulation, which accounts for the contribution of each emission source and one sink in the total methane, simulated over 2005–2012. After regridding according to NDACC vertical layering using a conservative regridding scheme and smoothing by convolving with the respective FTIR seasonal averaging kernels, the GEOS-Chem simulation shows an increase of atmospheric methane total columns of 0.35 ± 0.03 % year⁻¹ between 2005 and 2012, which is in agreement with NDACC measurements over the same time period (0.30 ± 0.04 % year⁻¹, averaged over 10 stations). Analysis of the GEOS-Chem-tagged simulation allows us to quantify the contribution of each tracer to the global methane change since 2005. We find that natural sources such as wetlands and biomass burning contribute to the interannual variability of methane. However, anthropogenic emissions, such as coal mining and gas and oil transport and exploration, which are mainly emitted in the Northern Hemisphere and act as secondary contributors to the global budget of methane, have played a major role in the increase of atmospheric methane observed since 2005. Furthermore, based on the GEOS-Chem-tagged simulation, we discuss possible causes for the increase of methane since 2005, which is still unexplained.
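A trend quoted in percent per year, as above, is conventionally a least-squares linear slope normalized by a reference column value. A minimal sketch (the choice of the series mean as the reference value and the synthetic series are assumptions, not the NDACC convention):

```python
import numpy as np

def trend_percent_per_year(t_years, columns):
    """Least-squares linear trend of a column time series, expressed in
    percent per year relative to the series mean (one common convention;
    the reference value actually used by a given network may differ)."""
    slope, intercept = np.polyfit(t_years, columns, 1)
    return 100.0 * slope / np.mean(columns)

# Synthetic monthly series growing ~0.3 % per year from 2005 to 2014.
t = 2005 + np.arange(120) / 12.0
series = 1.0 + 0.003 * (t - 2005)
rate = trend_percent_per_year(t, series)
```

On real FTIR data one would fit the deseasonalized monthly means and attach a 2σ uncertainty from the fit residuals.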
NASA Astrophysics Data System (ADS)
Guan, Kaiyu; Good, Stephen P.; Caylor, Kelly K.; Medvigy, David; Pan, Ming; Wood, Eric F.; Sato, Hisashi; Biasutti, Michela; Chen, Min; Ahlström, Anders; Xu, Xiangtao
2018-02-01
There is growing evidence of ongoing changes in the statistics of intra-seasonal rainfall variability over large parts of the world. Changes in annual total rainfall may arise from shifts, either singly or in combination, of distinctive intra-seasonal characteristics, i.e., rainfall frequency, rainfall intensity, and rainfall seasonality. Understanding how various ecosystems respond to changes in intra-seasonal rainfall characteristics is critical for predictions of future biome shifts and ecosystem services under climate change, especially for arid and semi-arid ecosystems. Here, we use an advanced dynamic vegetation model (SEIB-DGVM) coupled with a stochastic rainfall/weather simulator to answer the following question: how does the productivity of ecosystems respond to a given percentage change in total seasonal rainfall that is realized by varying only one of the three rainfall characteristics (rainfall frequency, intensity, and rainy season length)? We conducted ensemble simulations for continental Africa for a realistic range of changes (−20% to +20%) in total rainfall amount. We find that the simulated ecosystem productivity (measured by gross primary production, GPP) shows distinctive responses to the intra-seasonal rainfall characteristics. Specifically, an increase in rainfall frequency can lead to a 28% larger GPP increase than the same percentage increase in rainfall intensity; in tropical woodlands, GPP sensitivity to changes in rainy season length is ~4 times larger than to the same percentage changes in rainfall frequency or intensity. In contrast, shifts in the simulated biome distribution are much less sensitive to intra-seasonal rainfall characteristics than they are to total rainfall amount.
Our results reveal three major distinctive productivity responses to seasonal rainfall variability ('chronic water stress', 'acute water stress', and 'minimum water stress'), which are respectively associated with three broad spatial patterns of African ecosystem physiognomy: savannas, woodlands, and tropical forests.
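The core experimental device, changing seasonal totals through frequency versus intensity alone, can be sketched with a minimal stochastic weather generator. This is an illustration under simple assumptions (Bernoulli wet-day occurrence, exponential depths), not the SEIB-DGVM simulator:

```python
import numpy as np

def simulate_season(n_days, wet_prob, mean_depth_mm, rng):
    """Daily rainfall for one season: wet days from a Bernoulli occurrence
    process, wet-day depths from an exponential distribution. A minimal
    weather-generator sketch; the expected seasonal total is
    n_days * wet_prob * mean_depth_mm."""
    wet = rng.random(n_days) < wet_prob
    depths = rng.exponential(mean_depth_mm, size=n_days)
    return np.where(wet, depths, 0.0)

rng = np.random.default_rng(42)
base = simulate_season(180, 0.3, 10.0, rng)

# The same +20% change in expected seasonal total, realized two ways:
more_frequent = simulate_season(180, 0.3 * 1.2, 10.0, rng)  # frequency up
more_intense  = simulate_season(180, 0.3, 10.0 * 1.2, rng)  # intensity up
```

Feeding such paired ensembles to a vegetation model isolates the productivity response to each rainfall characteristic at fixed total rainfall change.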
Demonstration of an Aerocapture GN and C System Through Hardware-in-the-Loop Simulations
NASA Technical Reports Server (NTRS)
Masciarelli, James; Deppen, Jennifer; Bladt, Jeff; Fleck, Jeff; Lawson, Dave
2010-01-01
Aerocapture is an orbit insertion maneuver in which a spacecraft flies through a planetary atmosphere one time, using drag force to decelerate and effect a hyperbolic-to-elliptical orbit change. Aerocapture employs a feedback Guidance, Navigation, and Control (GN&C) system to deliver the spacecraft into a precise postatmospheric orbit despite the uncertainties inherent in planetary atmosphere knowledge, entry targeting, and aerodynamic predictions. Only small amounts of propellant are required for attitude control and orbit adjustments, thereby providing mass savings of hundreds to thousands of kilograms over conventional all-propulsive techniques. The Analytic Predictor Corrector (APC) guidance algorithm has been developed to steer the vehicle through the aerocapture maneuver using bank angle control. Through funding provided by NASA's In-Space Propulsion Technology Program, the operation of an aerocapture GN&C system has been demonstrated in high-fidelity simulations that include real-time hardware in the loop, thus increasing the Technology Readiness Level (TRL) of aerocapture GN&C. First, a non-real-time (NRT), 6-DOF trajectory simulation was developed for the aerocapture trajectory. The simulation included vehicle dynamics, a gravity model, an atmosphere model, an aerodynamics model, an inertial measurement unit (IMU) model, attitude control thruster torque models, and GN&C algorithms (including the APC aerocapture guidance). The simulation used the vehicle and mission parameters from the ST-9 mission. A 2000-case Monte Carlo simulation was performed; the results show an aerocapture success rate greater than 99.7%, more than 95% of the total delta-V required for orbit insertion provided by aerodynamic drag, and a post-aerocapture orbit plane wedge angle error of less than 0.5 deg (3-sigma).
Then a real-time (RT), 6-DOF simulation for the aerocapture trajectory was developed, which demonstrated the guidance software executing on a flight-like computer, interfacing with a simulated IMU and simulated thrusters, with vehicle dynamics provided by an external simulator. Five cases from the NRT simulations were run in the RT simulation environment. The results compare well to those of the NRT simulation, thus verifying the RT simulation configuration. The results of the simulations described above show that the aerocapture maneuver using the APC algorithm can be accomplished reliably, and the algorithm is now at TRL-6. Flight validation is the next step for aerocapture technology development.
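The success-rate figure comes from tallying dispersed Monte Carlo trials against a capture criterion. A toy stand-in for that tally (the dispersions, the linear error model, and the success band are all invented for illustration and bear no relation to the ST-9 values or the 6-DOF simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 2000

# Dispersed inputs: entry flight-path-angle error and an atmospheric
# density scale factor, each with an assumed 3-sigma spread.
entry_angle_err = rng.normal(0.0, 0.1 / 3, n_trials)   # deg, 3-sigma = 0.1
density_scale   = rng.normal(1.0, 0.15 / 3, n_trials)  # 3-sigma = 15%

# Toy linear sensitivity model mapping dispersions to post-aerocapture
# apoapsis error; a capture counts as successful when the error stays
# within a band the propulsion system can correct.
apoapsis_err_km = 500.0 * entry_angle_err + 300.0 * (density_scale - 1.0)
success_rate = np.mean(np.abs(apoapsis_err_km) < 100.0)
```

The real study replaces the linear model with the full 6-DOF trajectory simulation per trial; the bookkeeping is the same.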
Effects of Preoperative Simulation on Minimally Invasive Hybrid Lumbar Interbody Fusion.
Rieger, Bernhard; Jiang, Hongzhen; Reinshagen, Clemens; Molcanyi, Marek; Zivcak, Jozef; Grönemeyer, Dietrich; Bosche, Bert; Schackert, Gabriele; Ruess, Daniel
2017-10-01
The main focus of this study was to evaluate how preoperative simulation affects the surgical work flow, radiation exposure, and outcome of minimally invasive hybrid lumbar interbody fusion (MIS-HLIF). A total of 132 patients who underwent single-level MIS-HLIF were enrolled in a cohort study design. Dose area product was analyzed in addition to surgical data. Once preoperative simulation was established, 66 cases (SIM cohort) were compared with 66 patients who had previously undergone MIS-HLIF without preoperative simulation (NO-SIM cohort). Dose area product was considerably lower in the SIM cohort (320 cGy·cm² vs. 470 cGy·cm² in the NO-SIM cohort; P < 0.01). Surgical time was shorter for the SIM cohort (155 minutes vs. 182 minutes in the NO-SIM cohort; P < 0.05). The SIM cohort also had a better outcome on the Numeric Rating Scale for back pain at 6 months of follow-up (P < 0.05). Preoperative simulation reduced radiation exposure and resulted in less back pain at the 6-month follow-up time point, and it provided guidance in determining the correct cage height. Outcome controls enabled the surgeon to improve the procedure and the software algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.
Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.
Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys
2018-04-01
Simulation-based training has become an accepted clinical training andragogy in high-resource settings with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and Mentors were consented and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills, and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.
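The double-coding check above rests on inter-rater reliability computed over paired codes. A minimal sketch of percent agreement (the statistic the abstract reports as a percentage) alongside Cohen's kappa, a chance-corrected alternative; whether kappa was used in the study is not stated, and the codes below are invented:

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Inter-rater reliability as the percentage of items on which two
    coders assigned the same code."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement for categorical codes:
    kappa = (p_obs - p_exp) / (1 - p_exp)."""
    n = len(coder_a)
    p_obs = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    ca, cb = Counter(coder_a), Counter(coder_b)
    p_exp = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Illustrative codes for one skill item across ten double-coded videos.
a = ["yes", "yes", "no", "yes", "no", "yes", "no", "no", "yes", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "no", "yes", "yes", "yes"]
```

Percent agreement is easy to interpret but inflates with skewed code distributions, which is why a chance-corrected statistic is often reported alongside it.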
Simulator-induced spatial disorientation: effects of age, sleep deprivation, and type of conflict.
Previc, Fred H; Ercoline, William R; Evans, Richard H; Dillon, Nathan; Lopez, Nadia; Daluz, Christina M; Workman, Andrew
2007-05-01
Spatial disorientation (SD) mishaps are more frequent at night and with greater time on task, and sleep deprivation is known to decrease cognitive and overall flight performance. However, the ability to perceive, and to be influenced by, physiologically appropriate simulated SD conflicts has not previously been studied in an automated simulator flight profile. A set of 10 flight profiles was flown by 10 U.S. Air Force (USAF) pilots over a period of 28 h in a flight simulator specially designed for spatial disorientation research and training. Of the 10 flights, 4 had a total of 7 SD conflicts inserted into each of them, 5 simulating motion illusions and 2 involving visual illusions. The percentage of conflict reports was measured along with the effects of four conflicts on flight performance. The results showed that, with one exception, all motion conflicts were reported over 60% of the time, whereas the two visual illusions were reported on average only 25% of the time, although they both significantly affected flight performance. Pilots older than 35 yr of age were more likely to report conflicts than were those under 30 yr of age (63% vs. 38%), whereas fatigue had little effect overall on either recognized or unrecognized SD. The overall effects of these conflicts on perception and performance were generally not altered by sleep deprivation, despite clear indications of fatigue in our pilots.
Computing the total atmospheric refraction for real-time optical imaging sensor simulation
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2015-05-01
Fast and accurate computation of light path deviation due to atmospheric refraction is an important requirement for real-time simulation of optical imaging sensor systems. A large body of existing literature covers various methods for application of Snell's Law to the light path ray tracing problem. This paper provides a discussion of the adaptation to real-time simulation of atmospheric refraction ray tracing techniques used in mid-1980s LOWTRAN releases. The refraction ray trace algorithm published in a LOWTRAN-6 technical report by Kneizys et al. has been coded in MATLAB for development, and in C for simulation use. To this published algorithm we have added tuning parameters for variable path segment lengths, and extensions for Earth-grazing and exoatmospheric "near Earth" ray paths. Model atmosphere properties used to exercise the refraction algorithm were obtained from tables published in another LOWTRAN-6 related report. The LOWTRAN-6 based refraction model is applicable to atmospheric propagation at wavelengths in the IR and visible bands of the electromagnetic spectrum. It has been used during the past two years by engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) in support of several advanced imaging sensor simulations. Recently, a faster (but sufficiently accurate) method using Gauss-Chebyshev Quadrature integration for evaluating the refraction integral was adopted.
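For orientation, the refraction integral such methods evaluate can be sketched numerically for a spherically symmetric layered atmosphere using Bouguer's invariant n·r·sin(θ) = const. This is a coarse textbook sketch, not the Kneizys/LOWTRAN-6 path-segment algorithm, and the exponential model atmosphere below is an assumption:

```python
import math

def total_refraction_deg(apparent_zenith_deg, layers):
    """Accumulate the classical refraction integral R = -∫ tan(theta) dn/n
    over discrete shells, where sin(theta) = n0*r0*sin(z0)/(n*r) follows
    from Bouguer's invariant in a spherically symmetric atmosphere.
    `layers` lists (radius_km, refractive_index) from the observer up."""
    z0 = math.radians(apparent_zenith_deg)
    r0, n0 = layers[0]
    k = n0 * r0 * math.sin(z0)                    # Bouguer's invariant
    bending = 0.0
    for (r_lo, n_lo), (r_hi, n_hi) in zip(layers, layers[1:]):
        n_mid = 0.5 * (n_lo + n_hi)
        r_mid = 0.5 * (r_lo + r_hi)
        sin_t = min(k / (n_mid * r_mid), 1.0)
        tan_t = sin_t / math.sqrt(max(1.0 - sin_t * sin_t, 1e-12))
        bending -= tan_t * (n_hi - n_lo) / n_mid  # dn < 0 going upward
    return math.degrees(bending)

# Crude model atmosphere: surface refractivity 277e-6 decaying with an
# 8 km scale height, in 0.5 km shells from 0 to 60 km altitude.
layers = [(6371.0 + 0.5 * i, 1.0 + 277e-6 * math.exp(-0.5 * i / 8.0))
          for i in range(121)]
```

At moderate zenith angles this reproduces the familiar order of magnitude, roughly (n0 − 1)·tan(z), i.e. about one arcminute at 45°; near the horizon and for Earth-grazing paths the step sizes and geometry need the more careful treatment the paper describes.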
Reime, Marit Hegg; Johnsgaard, Tone; Kvam, Fred Ivan; Aarflot, Morten; Engeberg, Janecke Merethe; Breivik, Marit; Brattebø, Guttorm
2017-01-01
Larger student groups and pressure on limited faculty time have raised the question of the learning value of merely observing simulation training in emergency medicine, instead of active team participation. The purpose of this study was to examine observers' and hands-on participants' self-reported learning outcomes during simulation-based interprofessional team training regarding non-technical skills. In addition, we compared the learning outcomes for different professions and investigated team performance relative to the number of simulations in which they participated. A concurrent mixed-method design was chosen to evaluate the study, using questionnaires, observations, and focus group interviews. Participants included a total of 262 postgraduate and bachelor nursing students and medical students, organised into 44 interprofessional teams. The quantitative data showed that observers and participants had similar results in three of six predefined learning outcomes. The qualitative data emphasised the importance of participating in different roles, training several times, and training interprofessionally to enhance realism. Observing simulation training can be a valuable learning experience, but the students preferred hands-on participation and learning by doing. For this reason, one can legitimise the observer role, given the large student groups and limited faculty time, as long as the students are also given some opportunity for hands-on participation in order to become more confident in their professional roles.
Vaccaro, Christine M; Crisp, Catrina C; Fellner, Angela N; Jackson, Christopher; Kleeman, Steven D; Pavelka, James
2013-01-01
The objective of this study was to compare the effect of virtual reality simulation training plus robotic orientation versus robotic orientation alone on performance of surgical tasks using an inanimate model. Surgical resident physicians were enrolled in this assessor-blinded randomized controlled trial. Residents were randomized to receive either (1) robotic virtual reality simulation training plus standard robotic orientation or (2) standard robotic orientation alone. Performance of surgical tasks was assessed at baseline and after the intervention. Nine of 33 modules from the da Vinci Skills Simulator were chosen. Experts in robotic surgery evaluated each resident's videotaped performance of the inanimate model using the Global Rating Scale (GRS) and Objective Structured Assessment of Technical Skills-modified for robotic-assisted surgery (rOSATS). Nine resident physicians were enrolled in the simulation group and 9 in the control group. As a whole, participants improved their total time, time to incision, and suture time from baseline to repeat testing on the inanimate model (P = 0.001, 0.003, <0.001, respectively). Both groups improved their GRS and rOSATS scores significantly (both P < 0.001); however, the GRS overall pass rate was higher in the simulation group compared with the control group (89% vs 44%, P = 0.066). Standard robotic orientation and/or robotic virtual reality simulation improve surgical skills on an inanimate model, although this may be a function of the initial "practice" on the inanimate model and repeat testing of a known task. However, robotic virtual reality simulation training increases GRS pass rates consistent with improved robotic technical skills learned in a virtual reality environment.
Martin, Kevin D; Patterson, David P; Cameron, Kenneth L
2016-11-01
To evaluate the correlation between timed task performance on an arthroscopy shoulder simulator and participation in a standardized expert shoulder arthroscopy educational course. Orthopaedic trainees were voluntarily recruited from over 25 residency programs throughout the United States and Canada. Each trainee was tested on arrival at the Arthroscopy Association of North America orthopaedic learning center on a virtual reality arthroscopy shoulder simulator, and his or her performance was objectively scored. Each trainee's postgraduate year level was recorded, as was his or her experience in residency with shoulder arthroscopy as measured by Accreditation Council for Graduate Medical Education case-log totals. After the focused 4-day training curriculum consisting of didactics and cadaveric experience, each trainee was re-evaluated on the same simulator. Statistical analysis was performed to determine if participation in the course was associated with changes in simulation performance from before to after assessment. Forty-eight trainees completed the testing. On completion of the course, trainees showed significant improvements in all objective measures recorded by the simulator. Total probe distance needed to complete the task decreased by 42% (from 420.4 mm to 245.3 mm, P < .001), arthroscope tip distance traveled decreased by 59% (from 194.1 mm to 80.2 mm, P < .001), and time to completion decreased by 38% (from 66.8 seconds to 41.6 seconds, P < .001). Highly significant improvements in all 3 measures suggest improved instrument handling, anatomic recognition, and arthroscopy-related visual-spatial ability. This study shows objective improvement in orthopaedic trainee basic arthroscopy skill and proficiency after a standardized 4-day arthroscopy training curriculum. The results validate the Arthroscopy Association of North America resident training course and its curriculum with objective evidence of benefit. 
Level III, prospective study of nonconsecutive participants. Published by Elsevier Inc.
Lizarraga, Joy S.; Ockerman, Darwin J.
2011-01-01
The U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, Fort Worth District; the City of Corpus Christi; the Guadalupe-Blanco River Authority; the San Antonio River Authority; and the San Antonio Water System, configured, calibrated, and tested a watershed model for a study area consisting of about 5,490 mi² of the Frio River watershed in south Texas. The purpose of the model is to contribute to the understanding of watershed processes and hydrologic conditions in the lower Frio River watershed. The model simulates streamflow, evapotranspiration (ET), and groundwater recharge by using a numerical representation of physical characteristics of the landscape, and meteorological and streamflow data. Additional time-series inputs to the model include wastewater-treatment-plant discharges, surface-water withdrawals, and estimated groundwater inflow from Leona Springs. Model simulations of streamflow, ET, and groundwater recharge were done for various periods of record depending upon available measured data for input and comparison, starting as early as 1961. Because of the large size of the study area, the lower Frio River watershed was divided into 12 subwatersheds; separate Hydrological Simulation Program-FORTRAN models were developed for each subwatershed. Simulation of the overall study area involved running simulations in downstream order. Output from the model was summarized by subwatershed, point locations, reservoir reaches, and the Carrizo-Wilcox aquifer outcrop. Four long-term U.S. Geological Survey streamflow-gaging stations and two short-term streamflow-gaging stations were used for streamflow model calibration and testing with data from 1991-2008. Calibration was based on data from 2000-08, and testing was based on data from 1991-99. Choke Canyon Reservoir stage data from 1992-2008 and monthly evaporation estimates from 1999-2008 also were used for model calibration.
Additionally, 2006-08 ET data from a U.S. Geological Survey meteorological station in Medina County were used for calibration. Streamflow and ET calibration were considered good or very good. For the 2000-08 calibration period, total simulated flow volume and the flow volume of the highest 10 percent of simulated daily flows were calibrated to within about 10 percent of measured volumes at six U.S. Geological Survey streamflow-gaging stations. The flow volume of the lowest 50 percent of daily flows was not simulated as accurately but represented a small percent of the total flow volume. The model-fit efficiency for the weekly mean streamflow during the calibration periods ranged from 0.60 to 0.91, and the root mean square error ranged from 16 to 271 percent of the mean flow rate. The simulated total flow volumes during the testing periods at the long-term gaging stations exceeded the measured total flow volumes by approximately 22 to 50 percent at three stations and were within 7 percent of the measured total flow volumes at one station. For the longer 1961-2008 simulation period at the long-term stations, simulated total flow volumes were within about 3 to 18 percent of measured total flow volumes. The calibrations made by using Choke Canyon reservoir volume for 1992-2008, reservoir evaporation for 1999-2008, and ET in Medina County for 2006-08 are considered very good. Model limitations include possible errors related to model conceptualization and parameter variability, lack of data to better quantify certain model inputs, and measurement errors. Uncertainty regarding the degree to which available rainfall data represent actual rainfall is potentially the most serious source of measurement error. A sensitivity analysis was performed for the Upper San Miguel subwatershed model to show the effect of changes to model parameters on the estimated mean recharge, ET, and surface runoff from that part of the Carrizo-Wilcox aquifer outcrop.
Simulated recharge was most sensitive to the changes in the lower-zone ET (LZ
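The "model-fit efficiency" quoted for weekly mean streamflow is conventionally the Nash-Sutcliffe efficiency, and the error is reported as RMSE in percent of mean flow. A minimal sketch of both statistics with invented flows (the identification of the efficiency statistic is an assumption, since the abstract does not name it):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model-fit efficiency: 1 minus the ratio of residual
    variance to the variance of the observations. 1.0 is a perfect fit;
    0.0 means the model is no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    sse = np.sum((observed - simulated) ** 2)
    var = np.sum((observed - np.mean(observed)) ** 2)
    return 1.0 - sse / var

def rmse_percent_of_mean(observed, simulated):
    """Root mean square error expressed as a percent of mean flow."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rmse = np.sqrt(np.mean((observed - simulated) ** 2))
    return 100.0 * rmse / np.mean(observed)

# Illustrative weekly mean flows (not data from the Frio River study).
obs = np.array([10.0, 14.0, 30.0, 22.0, 12.0, 8.0])
sim = np.array([11.0, 13.0, 27.0, 24.0, 12.0, 9.0])
```

Because the efficiency normalizes by observed variance while the RMSE normalizes by mean flow, a flashy stream can score a high efficiency and still show a large RMSE percentage, consistent with the wide 16-271 percent range reported.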
Zhao, Longshan; Wu, Faqi
2015-01-01
In this study, a simple travel time-based runoff model was proposed to simulate a runoff hydrograph on soil surfaces with different microtopographies. Three main parameters, i.e., rainfall intensity (I), mean flow velocity (vm) and ponding time of depression (tp), were input into this model. The soil surface was divided into numerous grid cells, and the flow length of each grid cell (li) was then calculated from a digital elevation model (DEM). The flow velocity in each grid cell (vi) was derived from the upstream flow accumulation area using vm. The total flow travel time through each grid cell to the surface outlet was the flow travel time summed along the flow path (i.e., the sum of li/vi) plus tp. The runoff rate at the slope outlet for each respective travel time was estimated by finding the sum of the rain rate from all contributing cells for all time intervals. The results show positive agreement between the measured and predicted runoff hydrographs. PMID:26103635
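The travel-time idea can be sketched directly: a cell contributes its rain rate at the outlet once its cumulative travel time (the summed li/vi along its flow path plus tp) has elapsed. A minimal sketch under simplifying assumptions (uniform rain, precomputed per-cell travel times, a single flow path), not the paper's DEM-based implementation:

```python
import numpy as np

def runoff_hydrograph(travel_times_min, cell_area_m2, rain_mm_h, dt_min, n_steps):
    """Travel time-based runoff: at each time step, sum the rain rate of
    every cell whose total travel time to the outlet has elapsed."""
    rain_m_s = rain_mm_h / 1000.0 / 3600.0            # rain rate, m/s
    q = np.zeros(n_steps)                             # outlet discharge, m^3/s
    for step in range(n_steps):
        t_min = (step + 1) * dt_min
        contributing = np.sum(travel_times_min <= t_min)
        q[step] = contributing * cell_area_m2 * rain_m_s
    return q

# Four cells on one flow path: cumulative l_i/v_i (the farthest cell flows
# through all downstream cells) plus a 1-minute ponding time t_p.
lengths_m = np.array([2.0, 2.0, 2.0, 2.0])
velocities_m_min = np.array([1.0, 1.0, 2.0, 2.0])
travel = np.cumsum(lengths_m / velocities_m_min) + 1.0   # minutes
q = runoff_hydrograph(travel, cell_area_m2=1.0, rain_mm_h=36.0, dt_min=1.0, n_steps=8)
```

The hydrograph rises stepwise as successively farther cells begin contributing and plateaus at equilibrium once every cell's travel time has elapsed.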
NASA Technical Reports Server (NTRS)
Khan, M. Javed; Rossi, Marcia; Heath, Bruce E.; Ali, Syed firasat; Crane, Peter; Knighten, Tremaine; Culpepper, Christi
2003-01-01
The use of Post-Flight Feedback (PFFB) and Above Real-Time Training (ARTT) while training novice pilots to perform a coordinated level turn on a PC-based flight simulator was investigated. One group trained at 1.5 ARTT followed by an equal number of flights at 2.0 ARTT; the second group experienced Real Time Training (RTT). The total number of flights for both groups was equal. Each group was further subdivided into two groups, one of which was provided PFFB while the other was not. Then, all participants flew two challenging evaluation missions in real time. Performance was assessed by comparing root-mean-square error in bank angle and altitude. Participants in the 1.5/2.0 ARTT No-PFFB sequence did not show improvement in performance across training sessions. An ANOVA on performance in evaluation flights found that the PFFB groups performed significantly better than those with No-PFFB. Also, the RTT groups performed significantly better than the ARTT groups. Data from two additional groups trained under 2.0/1.5 ARTT PFFB and No-PFFB regimes were collected, combined with data from the previously studied groups, and reanalyzed to study the influence of sequence. An ANOVA on test trials found no significant effects between groups. Under training situations involving ARTT, we recommend that appropriate PFFB be provided.
Xue, Ying; Rusli, Jannov; Chang, Hou-Min; Phillips, Richard; Jameel, Hasan
2012-02-01
Process simulation and lab trials were carried out to demonstrate and confirm the efficiency of the concept that recycling hydrolysate from low-total-solid enzymatic hydrolysis is one option to increase the sugar concentration without mixing problems. Higher sugar concentration can reduce the capital cost for fermentation and distillation because of smaller retention volume. Meanwhile, operation cost will also decrease because of less operating volume and less energy required for distillation. With computer simulation, time and effort can be saved in reaching the steady state of the recycling process, which is the scenario for industrial production. This paper, to the best of our knowledge, is the first to discuss steady-state saccharification with recycling of the filtrate from enzymatic hydrolysis to increase sugar concentration. Recycled enzymes in the filtrate (15-30% of the original enzyme loading) resulted in 5-10% higher carbohydrate conversion compared to the case in which recycled enzymes were denatured. The recycled hydrolysate yielded 10% higher carbohydrate conversion compared to pure-sugar-simulated hydrolysate at the same enzyme loading, which indicated that hydrolysis by-products could boost enzymatic hydrolysis. The high sugar concentration (pure sugar simulated) showed an inhibition effect, since about a 15% decrease in carbohydrate conversion was observed compared with the case with no sugar added. The overall effect of hydrolysate recycling at WinGEMS-simulated steady-state conditions with 5% total solids was to increase the sugar concentration from 35 to 141 g/l, while the carbohydrate conversion was 2% higher for recycling at steady state (87%) compared with the no-recycling strategy (85%). Ten percent and 15% total-solid processes were also evaluated in this study.
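Why recycling raises the steady-state sugar concentration can be seen with a toy mass balance. This is an illustrative sketch, not the WinGEMS process model: `c_fresh` and the recycle fraction `r` are invented numbers, and dilution, inhibition, and enzyme effects are ignored.

```python
# Illustrative filtrate-recycle mass balance: each batch adds c_fresh g/l of
# new sugar, and a fraction r of the sugar-bearing filtrate is carried into
# the next batch, so sugar accumulates until a steady state is reached.

def recycle_to_steady_state(c_fresh, r, tol=1e-6, max_iter=1000):
    """Iterate batches until the sugar concentration stops changing."""
    c, n = 0.0, 0
    while n < max_iter:
        c_next = c_fresh + r * c      # new sugar + recycled sugar
        if abs(c_next - c) < tol:
            return c_next, n + 1
        c, n = c_next, n + 1
    return c, n

c_ss, n_batches = recycle_to_steady_state(c_fresh=35.0, r=0.75)
# analytic steady state: c_fresh / (1 - r) = 140 g/l
```

The closed form c_fresh / (1 - r) shows how a modest recycle fraction multiplies the achievable concentration, which is the effect the simulated recycling strategy exploits.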
NASA Astrophysics Data System (ADS)
You, Youngjun; Rhee, Key-Pyo; Ahn, Kyoungsoo
2013-06-01
In constructing a collision avoidance system, it is important to determine the time at which to start a collision avoidance maneuver. Many researchers have attempted to formulate various indices by applying a range of techniques. Among these indices, the collision risk obtained by combining Distance to the Closest Point of Approach (DCPA) and Time to the Closest Point of Approach (TCPA) information with fuzzy theory is most widely used. However, the collision risk has a limitation, in that the membership functions of DCPA and TCPA are empirically determined. In addition, the collision risk is not able to consider several critical collision conditions in which the target ship fails to take appropriate actions. It is therefore necessary to design a new concept based on logical approaches. In this paper, a collision ratio is proposed, which is the expected ratio of unavoidable paths to total paths under suitably characterized operation conditions. Total paths are determined by considering categories such as action space and methodology of avoidance. The International Regulations for Preventing Collisions at Sea (1972) and collision avoidance rules (2001) are considered to solve the slower ship's dilemma. Different methods based on a constant speed model and a simulated speed model are used to calculate the relative positions between own ship and target ship. In the simulated speed model, fuzzy control is applied to the determination of the command rudder angle. For various encounter situations, the time histories of the collision ratio based on the simulated speed model are compared with those based on the constant speed model.
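The DCPA and TCPA quantities that the fuzzy collision-risk index combines follow from standard closest-point-of-approach geometry. A minimal sketch under a constant relative-velocity assumption (the example positions and speeds are invented; the paper's fuzzy weighting is not reproduced):

```python
import math

# Closest point of approach for two ships under constant relative velocity:
# p is the target's position relative to own ship, v its relative velocity.

def cpa(p, v):
    """Return (TCPA, DCPA). TCPA = -(p . v)/|v|^2; DCPA = |p + v * TCPA|."""
    v2 = v[0] ** 2 + v[1] ** 2
    if v2 == 0.0:
        return 0.0, math.hypot(*p)          # no relative motion
    tcpa = -(p[0] * v[0] + p[1] * v[1]) / v2
    dcpa = math.hypot(p[0] + v[0] * tcpa, p[1] + v[1] * tcpa)
    return tcpa, dcpa

# Target 2 nm due north, closing at 10 kn southward with 1 kn easterly drift:
tcpa, dcpa = cpa((0.0, 2.0), (1.0, -10.0))
```

A negative TCPA means the closest approach is already past; small positive TCPA with small DCPA is the classic trigger for starting an avoidance maneuver.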
Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F
2012-05-01
The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=-2.487 (-2.040 to -0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=-2.272 (-0.028 to -0.002). ANOVA reported significant differences across years of experience (0-1, 1-2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required.
Dubin, Ariel K; Smith, Roger; Julian, Danielle; Tanaka, Alyssa; Mattingly, Patricia
To answer the question of whether there is a difference between robotic virtual reality simulator performance assessment and validated human reviewers. Current surgical education relies heavily on simulation. Several assessment tools are available to the trainee, including the actual robotic simulator assessment metrics and the Global Evaluative Assessment of Robotic Skills (GEARS) metrics, both of which have been independently validated. GEARS is a rating scale through which human evaluators can score trainees' performances on 6 domains: depth perception, bimanual dexterity, efficiency, force sensitivity, autonomy, and robotic control. Each domain is scored on a 5-point Likert scale with anchors. We used 2 common robotic simulators, the dV-Trainer (dVT; Mimic Technologies Inc., Seattle, WA) and the da Vinci Skills Simulator (dVSS; Intuitive Surgical, Sunnyvale, CA), to compare the performance metrics of robotic surgical simulators with the GEARS for a basic robotic task on each simulator. A prospective single-blinded randomized study. A surgical education and training center. Surgeons and surgeons in training. Demographic information was collected including sex, age, level of training, specialty, and previous surgical and simulator experience. Subjects performed 2 trials of ring and rail 1 (RR1) on each of the 2 simulators (dVSS and dVT) after undergoing randomization and warm-up exercises. The second RR1 trial simulator performance was recorded, and the deidentified videos were sent to human reviewers using GEARS. Eight different simulator assessment metrics were identified and paired with a similar performance metric in the GEARS tool. The GEARS evaluation scores and simulator assessment scores were paired and a Spearman rho calculated for their level of correlation. Seventy-four subjects were enrolled in this randomized study with 9 subjects excluded for missing or incomplete data. 
There was a strong correlation between the GEARS score and the simulator metric score for time to complete versus efficiency, time to complete versus total score, economy of motion versus depth perception, and overall score versus total score, with rho coefficients greater than or equal to 0.70; these were significant (p < .0001). Those with weak correlation (rho ≥0.30 but <0.70) were bimanual dexterity versus economy of motion, efficiency versus master workspace range, bimanual dexterity versus master workspace range, and robotic control versus instrument collisions. On basic VR tasks, several simulator metrics are well matched with GEARS scores assigned by human reviewers, but others are not. Identifying these matches/mismatches can improve the training and assessment process when using robotic surgical simulators. Copyright © 2017 American Association of Gynecologic Laparoscopists. Published by Elsevier Inc. All rights reserved.
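The Spearman rho used to pair simulator metrics with GEARS scores is just the Pearson correlation of average ranks. A self-contained sketch with invented scores (a faster completion time pairing with a higher reviewer score gives a negative rho, as expected):

```python
# Minimal Spearman rank correlation: rank both variables (average ranks for
# ties), then compute the Pearson correlation of the ranks. Data are invented.

def ranks(x):
    """1-based ranks with ties assigned their average rank."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical: task completion times (s) vs. reviewer scores (1-5).
rho = spearman([120, 95, 150, 110], [3, 4, 2, 5])
```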
1984-09-01
Figure 3.2, TAT Elements: total time (TAT) comprises scheduling time (SKD), repair time (RPR), and awaiting parts time (AWP). The ASO model is tasked to... where j identifies parts within i at the next lower indenture, and j' = r* or contains r* as a lower... subroutines of TIGER, ACIM was run as a separate program using batch processing. The ACIM program is made up of three...
NASA Astrophysics Data System (ADS)
Mattei, S.; Nishida, K.; Onai, M.; Lettry, J.; Tran, M. Q.; Hatayama, A.
2017-12-01
We present a fully-implicit electromagnetic Particle-In-Cell Monte Carlo collision code, called NINJA, written for the simulation of inductively coupled plasmas. NINJA employs a kinetic enslaved Jacobian-Free Newton-Krylov method to solve self-consistently the interaction between the electromagnetic field generated by the radio-frequency coil and the plasma response. The simulated plasma includes a kinetic description of charged and neutral species as well as the collision processes between them. The algorithm allows simulations with cell sizes much larger than the Debye length and time steps in excess of the Courant-Friedrichs-Lewy condition whilst preserving the conservation of the total energy. The code is applied to the simulation of the plasma discharge of the Linac4 H- ion source at CERN. Simulation results of plasma density, temperature, and electron energy distribution function (EEDF) are discussed and compared with optical emission spectroscopy measurements. A systematic study of the energy conservation as a function of the numerical parameters is presented.
Automating NEURON Simulation Deployment in Cloud Resources.
Stockton, David B; Santamaria, Fidel
2017-01-01
Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.
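The speedup, efficiency, and cost comparisons mentioned above reduce to simple bookkeeping. A sketch with invented timings and an invented hourly price (not NeuroManager's accounting or any provider's actual rates):

```python
# Standard strong-scaling metrics for a batch of simulations spread over
# n_workers cloud instances; all numbers below are illustrative.

def scaling_metrics(t_serial, t_parallel, n_workers, price_per_hour):
    """Return (speedup, parallel efficiency, total cost in currency units)."""
    speedup = t_serial / t_parallel
    efficiency = speedup / n_workers
    cost = n_workers * (t_parallel / 3600.0) * price_per_hour
    return speedup, efficiency, cost

s, e, cost = scaling_metrics(t_serial=7200.0, t_parallel=1200.0,
                             n_workers=8, price_per_hour=0.50)
# 8 workers cut a 2 h batch to 20 min: speedup 6.0, efficiency 0.75
```

Efficiency below 1.0 reflects communication and provisioning overhead; on pay-per-instance clouds, that overhead shows up directly in the cost term.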
Precipitation From a Multiyear Database of Convection-Allowing WRF Simulations
NASA Astrophysics Data System (ADS)
Goines, D. C.; Kennedy, A. D.
2018-03-01
Convection-allowing models (CAMs) have become frequently used for operational forecasting and, more recently, have been utilized for general circulation model downscaling. CAM forecasts have typically been analyzed for a few case studies or over short time periods, but this limits the ability to judge the overall skill of deterministic simulations. Analysis over long time periods can yield a better understanding of systematic model error. Four years of warm-season (April-August, 2010-2013) simulated precipitation have been accumulated from two Weather Research and Forecasting (WRF) models with 4 km grid spacing. The simulations were provided by the National Centers for Environmental Prediction (NCEP) and the National Severe Storms Laboratory (NSSL), each with different dynamic cores and parameterization schemes. These simulations are evaluated against the NCEP Stage-IV precipitation data set with similar 4 km grid spacing. The spatial distribution and diurnal cycle of precipitation in the central United States are analyzed using Hovmöller diagrams, grid point correlations, and traditional verification skill scoring (e.g., the Equitable Threat Score, ETS). Although NCEP-WRF had a high positive error in total precipitation, spatial characteristics were similar to observations. For example, the spatial distribution of NCEP-WRF precipitation correlated better than NSSL-WRF for the Northern Plains. Hovmöller results exposed a delay in initiation and decay of diurnal precipitation by NCEP-WRF, while both models had difficulty in reproducing the timing and location of propagating precipitation. ETS was highest for NSSL-WRF in all domains at all times. ETS was also higher in areas of propagating precipitation compared to areas of unorganized diurnal scattered precipitation. Monthly analysis identified unique differences between the two models in their abilities to correctly simulate the spatial distribution and zonal motion of precipitation through the warm season.
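The Equitable Threat Score used for the verification above is a standard function of a 2x2 forecast/observation contingency table. A sketch with invented counts (the formula is standard; the numbers are not from this study):

```python
# Equitable Threat Score (Gilbert skill score) from a contingency table:
# a = hits, b = false alarms, c = misses, d = correct negatives.

def equitable_threat_score(a, b, c, d):
    """ETS = (a - a_random) / (a + b + c - a_random), where a_random is the
    number of hits expected by chance given the marginal totals."""
    n = a + b + c + d
    a_random = (a + b) * (a + c) / n
    return (a - a_random) / (a + b + c - a_random)

ets = equitable_threat_score(a=50, b=30, c=20, d=900)
```

Subtracting the chance-hit term is what makes the score "equitable": a forecast that rains everywhere, inflating hits along with false alarms, gains little.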
NASA Astrophysics Data System (ADS)
Merenda, K. D.
2016-12-01
Since 2013, the Pierre Auger Cosmic Ray Observatory in Mendoza, Argentina, has extended its trigger algorithm to detect Emissions of Light and Very low frequency perturbations due to Electromagnetic pulse Sources (ELVES). Correlations with the World Wide Lightning Location Network (WWLLN), the Lightning Imaging Sensor (LIS), and simulated events were used to assess the quality of the reconstructed data. The fluorescence detector (FD) is a pixel-array telescope sensitive to the deep-UV emissions of ELVES. The detector provides the finest time resolution, 100 nanoseconds, ever applied to the study of ELVES. Four eyes, separated by approximately 40 kilometers, consist of six telescopes each and span a total of 360 degrees of azimuth angle. The detector operates at night when storms are not in the field of view. An existing 3D EMP model solves Maxwell's equations using a three-dimensional finite-difference time-domain method to describe the propagation of electromagnetic pulses from lightning sources to the ionosphere. The simulation also provides a projection of the resulting ELVES onto the pixel array of the FD. A full reconstruction of simulated events is under development. We introduce the analog signal time evolution comparison between Auger reconstructed data and simulated events on individual FD pixels. In conjunction, we will present a study of the angular distribution of light emission around the vertical and above the causative lightning source. We will also contrast, with Monte Carlo simulations, Auger double ELVES events separated by at most 5 microseconds. These events are too short to be explained by multiple return strokes, ground reflections, or compact intra-cloud lightning sources. Reconstructed ELVES data are 40% correlated with WWLLN data, and an analysis with the LIS database is underway.
Debatin, Maurice; Hesser, Jürgen
2015-01-01
Reducing the amount of time for data acquisition and reconstruction in industrial CT decreases the operation time of the X-ray machine and therefore increases sales. This can be achieved by reducing both the dose (and hence the pulse length) of the CT system and the number of projections used for reconstruction. In this paper, a novel generalized Anisotropic Total Variation (GATV) regularization for under-sampled, low-dose iterative CT reconstruction is discussed and compared to the standard methods: Total Variation, Adaptive-weighted Total Variation (AwTV), and Filtered Backprojection. The novel regularization function uses a priori information about the gradient magnitude distribution of the scanned object for the reconstruction. We provide a general parameterization scheme and evaluate the efficiency of our new algorithm for different noise levels and different numbers of projection views. When noise is not present, error-free reconstructions are achievable for AwTV and GATV from 40 projections. In cases where noise is simulated, our strategy achieves a Relative Root Mean Square Error that is up to 11 times lower than Total Variation-based and up to 4 times lower than AwTV-based iterative statistical reconstruction (e.g., for an SNR of 223 and 40 projections). To obtain the same reconstruction quality as achieved by Total Variation, the number of projections and the pulse length (and hence the acquisition time and the dose, respectively) can be reduced by a factor of approximately 3.5 when AwTV is used and approximately 6.7 when our proposed algorithm is used.
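The core of all these regularizers is penalizing the magnitude of intensity differences between neighbors. A toy 1D sketch of plain (smoothed) Total Variation minimized by gradient descent; the parameter values, 1D setting, and denoising (rather than tomographic) formulation are simplifications, not the paper's GATV algorithm:

```python
import numpy as np

# Toy 1D TV denoising: minimize 0.5*||x - y||^2 + lam * sum sqrt(d_i^2 + eps^2)
# over x, where d_i = x_{i+1} - x_i, by plain gradient descent.

def tv_denoise_1d(y, lam=0.5, eps=0.1, step=0.05, iters=500):
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)                          # forward differences
        w = d / np.sqrt(d * d + eps * eps)      # derivative of smoothed |d|
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        x -= step * ((x - y) + lam * grad_tv)   # fidelity + TV gradients
    return x

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(20), np.ones(20)])   # piecewise-constant signal
y = clean + 0.1 * rng.standard_normal(40)
x = tv_denoise_1d(y)
```

TV suppresses small oscillations while tolerating one large jump, which is why it suits piecewise-constant objects; adaptive and generalized variants reweight the penalty using prior information about the gradient magnitudes.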
Effect of helicity on the correlation time of large scales in turbulent flows
NASA Astrophysics Data System (ADS)
Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne
2017-11-01
Solutions of the forced Navier-Stokes equation have been conjectured to thermalize at scales larger than the forcing scale, similar to the absolute equilibrium obtained for the spectrally truncated Euler equation. Using direct numerical simulations of Taylor-Green flows and general-periodic helical flows, we present results on the probability density function, energy spectrum, autocorrelation function, and correlation time that compare the two systems. In the case of highly helical flows, we derive an analytic expression describing the correlation time for the absolute equilibrium of helical flows that is different from the E^{-1/2}k^{-1} scaling law of weakly helical flows. This model predicts a new helicity-based scaling law for the correlation time, τ(k) ~ H^{-1/2}k^{-1/2}. This scaling law is verified in simulations of the truncated Euler equation. In simulations of the Navier-Stokes equations, the large-scale modes of forced Taylor-Green symmetric flows (with zero total helicity and large separation of scales) follow the same properties as absolute equilibrium, including a τ(k) ~ E^{-1/2}k^{-1} scaling for the correlation time. General-periodic helical flows also show similarities between the two systems; however, the largest scales of the forced flows deviate from the absolute equilibrium solutions.
Matthews, M E; Waldvogel, C F; Mahaffey, M J; Zemel, P C
1978-06-01
Preparation procedures of standardized quantity formulas were analyzed for similarities and differences in production activities, and three entrée classifications were developed, based on these activities. Two formulas from each classification were selected, preparation procedures were divided into elements of production, and the MSD Quantity Food Production Code was applied. Macro elements not included in the existing Code were simulated, coded, assigned associated Time Measurement Units, and added to the MSD Quantity Food Production Code. Repeated occurrence of similar elements within production methods indicated that macro elements could be synthesized for use within one or more entrée classifications. Basic elements were grouped, simulated, and macro elements were derived. Macro elements were applied in the simulated production of 100 portions of each entrée formula. Total production time for each formula and average production time for each entrée classification were calculated. Application of macro elements indicated that this method of predetermining production time was feasible and could be adapted by quantity foodservice managers as a decision technique used to evaluate menu mix, production personnel schedules, and allocation of equipment usage. These macro elements could serve as a basis for further development and refinement of other macro elements which could be applied to a variety of menu item formulas.
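The predetermined-time calculation described above amounts to summing element occurrences weighted by their Time Measurement Units. A sketch under the standard MTM-family convention that 1 TMU = 0.00001 hour = 0.036 s; the element names and TMU values below are invented for illustration, not taken from the MSD Quantity Food Production Code:

```python
# Hypothetical macro elements with TMU values (1 TMU = 0.036 s in
# MTM-family predetermined-time systems).

TMU_SECONDS = 0.036

macro_elements = {
    "open_oven_door": 80,
    "load_pan": 250,
    "season_batch": 1400,
}

def production_seconds(element_counts):
    """Total production time in seconds for {element: number of occurrences}."""
    tmu = sum(macro_elements[name] * n for name, n in element_counts.items())
    return tmu * TMU_SECONDS

t = production_seconds({"open_oven_door": 2, "load_pan": 4, "season_batch": 1})
# (2*80 + 4*250 + 1400) TMU = 2560 TMU
```

Summing per-formula totals and averaging within an entrée classification then gives the predicted production times used for scheduling and equipment allocation.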
Feaster, Toby D.; Westcott, Nancy E.; Hudson, Robert J.M.; Conrads, Paul; Bradley, Paul M.
2012-01-01
Rainfall is an important forcing function in most watershed models. As part of a previous investigation to assess interactions among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations in the Edisto River Basin, the topography-based hydrological model (TOPMODEL) was applied in the McTier Creek watershed in Aiken County, South Carolina. Measured rainfall data from six National Weather Service (NWS) Cooperative (COOP) stations surrounding the McTier Creek watershed were used to calibrate the McTier Creek TOPMODEL. Since the 1990s, the next generation weather radar (NEXRAD) has provided rainfall estimates at a finer spatial and temporal resolution than the NWS COOP network. For this investigation, NEXRAD-based rainfall data were generated at the NWS COOP stations and compared with measured rainfall data for the period June 13, 2007, to September 30, 2009. Likewise, these NEXRAD-based rainfall data were used with TOPMODEL to simulate streamflow in the McTier Creek watershed and then compared with the simulations made using measured rainfall data. NEXRAD-based rainfall data for non-zero rainfall days were lower than measured rainfall data at all six NWS COOP locations. The total number of concurrent days for which both measured and NEXRAD-based data were available at the COOP stations ranged from 501 to 833, the number of non-zero days ranged from 139 to 209, and the total difference in rainfall ranged from -1.3 to -21.6 inches. With the calibrated TOPMODEL, simulations using NEXRAD-based rainfall data and those using measured rainfall data produce similar results with respect to matching the timing and shape of the hydrographs. Comparison of the bias, which is the mean of the residuals between observed and simulated streamflow, however, reveals that simulations using NEXRAD-based rainfall tended to underpredict streamflow overall. 
Given that the total NEXRAD-based rainfall data for the simulation period is lower than the total measured rainfall at the NWS COOP locations, this bias would be expected. Therefore, to better assess the use of NEXRAD-based rainfall estimates as compared to NWS COOP rainfall data in the hydrologic simulations, TOPMODEL was recalibrated and updated simulations were made using the NEXRAD-based rainfall data. Comparisons of observed and simulated streamflow show that the TOPMODEL results using measured rainfall data and NEXRAD-based rainfall are comparable. Nonetheless, TOPMODEL simulations using NEXRAD-based rainfall still tended to underpredict total streamflow volume, although the magnitude of the differences was similar to that of the simulations using measured rainfall. The McTier Creek watershed was subdivided into 12 subwatersheds and NEXRAD-based rainfall data were generated for each subwatershed. Simulations of streamflow were generated for each subwatershed using NEXRAD-based rainfall and compared with subwatershed simulations using measured rainfall data, which, unlike the NEXRAD-based rainfall, were the same for all subwatersheds (derived from a weighted average of the six NWS COOP stations surrounding the basin). For the two simulations, subwatershed streamflows were summed and compared to streamflow simulations at two U.S. Geological Survey streamgages. The percentage differences at the gage near Monetta, South Carolina, were the same for simulations using measured rainfall data and NEXRAD-based rainfall. At the gage near New Holland, South Carolina, the percentage differences using the NEXRAD-based rainfall were twice as large as those using the measured rainfall. Single-mass curve comparisons showed an increase in the total volume of rainfall from north to south.
Similar comparisons of the measured rainfall at the NWS COOP stations showed similar percentage differences, but the NEXRAD-based rainfall variations occurred over a much smaller distance than the measured rainfall. Nonetheless, it was concluded that in some cases, using NEXRAD-based rainfall data in TOPMODEL streamflow simulations may provide an effective alternative to using measured rainfall data. For this investigation, however, TOPMODEL streamflow simulations using NEXRAD-based rainfall data for both calibration and simulations did not show significant improvements with respect to matching observed streamflow over simulations generated using measured rainfall data.
Lighting Condition Analysis for Mars Moon Phobos
NASA Technical Reports Server (NTRS)
Li, Zu Qun; Crues, Edwin Z.; Bielski, Paul; De Carufel, Guy
2016-01-01
A manned mission to Phobos may be an important precursor and catalyst for the human exploration of Mars, as it will fully demonstrate the technologies for a successful Mars mission. A comprehensive understanding of Phobos' environment, such as its lighting conditions and gravitational acceleration, is essential to mission success. The lighting condition is one of many critical factors for landing zone selection, vehicle power subsystem design, and surface mobility vehicle path planning. Due to the orbital characteristics of Phobos, the lighting condition will change dramatically from one Martian season to another. This study uses high-fidelity computer simulation to investigate the lighting conditions, specifically the solar radiation flux over the surface, on Phobos. Ephemeris data from the Jet Propulsion Laboratory (JPL) DE405 model were used to model the states of the Sun, the Earth, and Mars. An occultation model was developed to simulate Phobos' self-shadowing and its solar eclipses by Mars. The propagated Phobos state was compared with data from JPL's Horizons system to ensure the accuracy of the result. Results for the Phobos lighting condition over one Martian year are presented in this paper, including length of solar eclipse, average solar radiation intensity, surface exposure time, total maximum solar energy, and total surface solar energy (constrained by incident angle). The results show that Phobos' solar eclipse time changes throughout the Martian year, with the maximum eclipse time occurring during the Martian spring and fall equinoxes and no solar eclipse during the Martian summer and winter solstices. Solar radiation intensity is close to minimum at the summer solstice and close to maximum at the winter solstice. Total surface exposure time is longer near the north pole and around the anti-Mars point. Total maximum solar energy is larger around the anti-Mars point. Total surface solar energy is higher around the anti-Mars point near the equator.
The results from this study and others like it will be important in determining landing site selection, vehicle system design and mission operations for the human exploration of Phobos and subsequently Mars.
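The incident-angle constraint mentioned above can be illustrated with a toy flux model. This is a simplification of the study's ephemeris-driven simulation: a single mean solar flux near Mars orbit (roughly 590 W/m^2, used here as an assumed constant) scaled by solar elevation, with eclipse and night handled as a hard cutoff.

```python
import math

# Toy surface-flux model for an airless body: flux falls off with the sine
# of the Sun's elevation above the local horizon and drops to zero during
# night or a solar eclipse by Mars. S_MARS is an approximate mean value.

S_MARS = 590.0                        # approx. mean solar flux near Mars, W/m^2

def surface_flux(solar_flux_wm2, sun_elevation_deg, eclipsed):
    if eclipsed or sun_elevation_deg <= 0.0:
        return 0.0
    return solar_flux_wm2 * math.sin(math.radians(sun_elevation_deg))

flux_noon = surface_flux(S_MARS, 90.0, eclipsed=False)   # Sun at zenith
flux_low = surface_flux(S_MARS, 30.0, eclipsed=False)    # oblique incidence
flux_ecl = surface_flux(S_MARS, 90.0, eclipsed=True)     # eclipse by Mars
```

Integrating such a flux over a Martian year, cell by cell, is what yields the exposure-time and total-surface-solar-energy maps reported in the study.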
NASA Astrophysics Data System (ADS)
Larochelle, Kevin J.
This study focused on moisture and intermediate-temperature effects on the embrittlement phenomenon and stress rupture life of the ceramic matrix composite (CMC) made of Sylramic(TM) fibers with an in-situ layer of boron nitride (Syl-iBN), boron nitride interphase (BN), and SiC matrix (Syl-iBN/BN/SiC). Stress rupture tests were performed at 550°C or 750°C with moisture contents of 0.0, 0.2, or 0.6 atm partial pressure of water vapor, pH2O. The CMC stress rupture strengths at 100 hrs at 550°C with 0.0, 0.2, or 0.6 atm pH2O were 75%, 65%, and 51% of the monotonic room-temperature tensile strength, respectively. At 750°C, the corresponding strengths were 67%, 51%, and 49%, respectively. Field Emission Scanning Electron Microscopy (FESEM) analysis showed that the amount of pesting by glass formations increased with time, temperature, and pH2O, leading to embrittlement. Total embrittlement times at 550°C were estimated to be greater than 63 hrs for 0.0 atm pH2O, greater than 38 hrs for 0.2 atm pH2O, and between 8 and 71 hrs for 0.6 atm pH2O. Corresponding estimated embrittlement times at 750°C were greater than 83 hrs, between 13 and 71 hrs, and between 1 and 6 hrs. A time-dependent, phenomenological, Monte Carlo-type simulation of composite failure was developed. The simulated total embrittlement times for the 550°C cases were 300 hrs, 100 hrs, and 25 hrs for 0.0, 0.2, and 0.6 atm pH2O, respectively. The corresponding embrittlement times for the 750°C cases were 300 hrs, 20 hrs, and 3 hrs. A detailed sensitivity analysis on the variables used in the model was conducted. The model was most sensitive to variation in the ultimate strength of the CMC at room temperature, the ultimate strength of the CMC at elevated temperature, and the reference strength of a fiber, and it was least sensitive to variation in the modulus of elasticity of the matrix and fiber.
The sensitivity analysis showed that the stress rupture curves generated by variation in the total embrittlement time simulate the trends in the experimental data. This research showed that the degree of stress rupture strength degradation increases with temperature, moisture content level, and exposure time.
The effect of fidelity: how expert behavior changes in a virtual reality environment.
Ioannou, Ioanna; Avery, Alex; Zhou, Yun; Szudek, Jacek; Kennedy, Gregor; O'Leary, Stephen
2014-09-01
We compare the behavior of expert surgeons operating on the "gold standard" of simulation, the cadaveric temporal bone, against a high-fidelity virtual reality (VR) simulation. We aim to determine whether expert behavior changes within the virtual environment and to understand how the fidelity of simulation affects users' behavior. Five expert otologists performed cortical mastoidectomy and cochleostomy on a human cadaveric temporal bone and a VR temporal bone simulator. Hand movement and video recordings were used to derive a range of measures, to facilitate an analysis of surgical technique, and to compare expert behavior between the cadaveric and simulator environments. Drilling time was similar across the two environments. Some measures, such as total time and burr change count, differed predictably due to the ease of switching burrs within the simulator. Surgical strokes were generally longer in distance and duration in VR, but these measures changed proportionally to cadaveric measures across the stages of the procedure. Stroke shape metrics differed, which was attributed to the modeling of burr behavior within the simulator. This will be corrected in future versions. Slight differences in drill interaction between a virtual environment and the real world can have measurable effects on surgical technique, particularly in terms of stroke length, duration, and curvature. It is important to understand these effects when designing and implementing surgical training programs based on VR simulation, and when improving the fidelity of VR simulators to facilitate use of a similar technique in both real and simulated situations. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
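Stroke-level measures such as length and duration can be derived directly from sampled tool-tip positions. A minimal sketch with an invented stroke (timestamps in seconds, positions in mm); the actual study first segments strokes from continuous hand-motion recordings, which is not shown here:

```python
import math

# Derive per-stroke metrics from time-stamped 3D samples of the drill tip:
# duration is last minus first timestamp; path length is the sum of
# distances between consecutive samples.

def stroke_metrics(samples):
    """samples: list of (t, x, y, z) tuples. Returns (duration_s, length_mm)."""
    duration = samples[-1][0] - samples[0][0]
    length = sum(math.dist(a[1:], b[1:]) for a, b in zip(samples, samples[1:]))
    return duration, length

stroke = [(0.00, 0.0, 0.0, 0.0),
          (0.05, 1.0, 0.0, 0.0),
          (0.10, 1.0, 2.0, 0.0)]
dur, dist_mm = stroke_metrics(stroke)
```

Curvature-style shape metrics can be built the same way, e.g. by comparing the summed path length with the straight-line distance between a stroke's endpoints.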
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vencels, Juris; Delzanno, Gian Luca; Johnson, Alec
2015-06-01
A spectral method for kinetic plasma simulations based on the expansion of the velocity distribution function in a variable number of Hermite polynomials is presented. The method is based on a set of non-linear equations that is solved to determine the coefficients of the Hermite expansion satisfying the Vlasov and Poisson equations. In this paper, we first show that this technique combines the fluid and kinetic approaches into one framework. Second, we present an adaptive strategy to increase and decrease the number of Hermite functions dynamically during the simulation. The technique is applied to the Landau damping and two-stream instability test problems. Performance results show 21% and 47% savings of total simulation time in the Landau and two-stream instability test cases, respectively.
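The adaptive Hermite strategy described above can be sketched in a few lines: project the distribution onto successive Hermite basis functions and stop growing the basis once the newest coefficient is negligible. The drifting-Maxwellian test function, grid, and tolerance below are illustrative choices, not the paper's setup.

```python
import numpy as np
from math import factorial, sqrt, pi

def hermite_coeffs(f, v, nmax=30, tol=1e-8):
    """Project f(v) onto physicists' Hermite functions H_n(v) exp(-v^2),
    growing the basis until the newest coefficient drops below tol
    (a toy version of the adaptive strategy described above)."""
    dv = v[1] - v[0]
    coeffs = []
    for n in range(nmax):
        Hn = np.polynomial.hermite.Hermite.basis(n)(v)
        norm = sqrt(pi) * 2**n * factorial(n)        # orthogonality constant
        coeffs.append(float(np.sum(f * Hn) * dv) / norm)
        if n > 0 and abs(coeffs[-1]) < tol:
            break
    return coeffs

v = np.linspace(-8.0, 8.0, 4001)
f = np.exp(-(v - 0.5) ** 2)                          # drifting Maxwellian
c = hermite_coeffs(f, v)
recon = sum(cn * np.polynomial.hermite.Hermite.basis(n)(v)
            for n, cn in enumerate(c)) * np.exp(-v ** 2)
err = float(np.max(np.abs(recon - f)))
```

For this drift the exact coefficients are 0.5^n/n!, so the basis stops at roughly ten functions, illustrating how slowly drifting distributions stay "cheap" in this representation.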
Byvank, T.; Banasek, J. T.; Potter, W. M.; ...
2017-12-07
We experimentally measure the effects of an applied axial magnetic field (Bz) on laboratory plasma jets and compare experimental results with numerical simulations using an extended magnetohydrodynamics code. A 1 MA peak current, 100 ns rise time pulsed power machine is used to generate the plasma jet. On application of the axial field, we observe on-axis density hollowing and a conical formation of the jet using interferometry, compression of the applied Bz using magnetic B-dot probes, and azimuthal rotation of the jet using Thomson scattering. Experimentally, we find densities ≤ 5×10¹⁷ cm⁻³ on-axis relative to jet densities of ≥ 3×10¹⁸ cm⁻³. For aluminum jets, 6.5 ± 0.5 mm above the foil, we find on-axis compression of the applied 1.0 ± 0.1 T Bz to a total 2.4 ± 0.3 T, while simulations predict a peak compression to a total 3.4 T at the same location. On the aluminum jet boundary, we find ion azimuthal rotation velocities of 15-20 km/s, while simulations predict 14 km/s at the density peak. We discuss possible sources of discrepancy between the experiments and simulations, including: surface plasma on B-dot probes, optical fiber spatial resolution, simulation density floors, and 2D vs. 3D simulation effects. Lastly, this quantitative comparison between experiments and numerical simulations helps elucidate the underlying physics that determine the plasma dynamics of magnetized plasma jets.
Hardware fault insertion and instrumentation system: Mechanization and validation
NASA Technical Reports Server (NTRS)
Benson, J. W.
1987-01-01
Automated test capability for extensive low-level hardware fault insertion testing is developed. The test capability is used to calibrate fault detection coverage and associated latency times as relevant to projecting overall system reliability. Described are modifications made to the NASA Ames Reconfigurable Flight Control System (RDFCS) Facility to fully automate the total test loop involving the Draper Laboratories' Fault Injector Unit. The automated capability provided included the application of sequences of simulated low-level hardware faults, the precise measurement of fault latency times, the identification of fault symptoms, and bulk storage of test case results. A PDP-11/60 served as a test coordinator, and a PDP-11/04 as an instrumentation device. The fault injector was controlled by applications test software in the PDP-11/60, rather than by manual commands from a terminal keyboard. The time base was especially developed for this application to use a variety of signal sources in the system simulator.
A new physics-based modeling approach for tsunami-ionosphere coupling
NASA Astrophysics Data System (ADS)
Meng, X.; Komjathy, A.; Verkhoglyadova, O. P.; Yang, Y.-M.; Deng, Y.; Mannucci, A. J.
2015-06-01
Tsunamis can generate gravity waves propagating upward through the atmosphere, inducing total electron content (TEC) disturbances in the ionosphere. To capture this process, we have implemented tsunami-generated gravity waves into the Global Ionosphere-Thermosphere Model (GITM) to construct a three-dimensional physics-based model, WP (Wave Perturbation)-GITM. WP-GITM takes tsunami wave properties, including the wave height, wave period, wavelength, and propagation direction, as inputs and time-dependently characterizes the responses of the upper atmosphere between 100 km and 600 km altitudes. We apply WP-GITM to simulate the ionosphere above the West Coast of the United States around the time when the tsunami associated with the March 2011 Tohoku-Oki earthquake arrived. The simulated TEC perturbations agree with Global Positioning System observations reasonably well. For the first time, a fully self-consistent and physics-based model has reproduced the GPS-observed traveling ionospheric signatures of an actual tsunami event.
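For reference, the TEC that such a model perturbs is simply the altitude integral of electron density over the modeled 100-600 km column. The Chapman-layer profile and parameter values below are illustrative stand-ins, not WP-GITM output.

```python
import numpy as np

def vertical_tec(n_e, h):
    """Altitude-integrate an electron density profile n_e [el/m^3] over
    altitudes h [m]; returns TEC in TEC units (1 TECU = 1e16 el/m^2)."""
    integral = np.sum(0.5 * (n_e[1:] + n_e[:-1]) * np.diff(h))  # trapezoid rule
    return integral / 1e16

h = np.linspace(100e3, 600e3, 1001)                  # 100-600 km column
z = (h - 300e3) / 50e3                               # illustrative Chapman layer
n_e = 1e12 * np.exp(0.5 * (1.0 - z - np.exp(-z)))    # peak 1e12 el/m^3 at 300 km
tec = vertical_tec(n_e, h)                           # ~20 TECU for these values
```

A tsunami-driven gravity wave shows up as a small traveling modulation of n_e, and hence as a TEC perturbation of typically a fraction of a TECU.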
3D-printed tracheoesophageal puncture and prosthesis placement simulator.
Barber, Samuel R; Kozin, Elliott D; Naunheim, Matthew R; Sethi, Rosh; Remenschneider, Aaron K; Deschler, Daniel G
A tracheoesophageal prosthesis (TEP) allows for speech after total laryngectomy. However, TEP placement is technically challenging, requiring a coordinated series of steps. Surgical simulators improve technical skills and reduce operative time. We hypothesize that a reusable 3-dimensional (3D)-printed TEP simulator will facilitate comprehension and rehearsal prior to actual procedures. The simulator was designed using Fusion360 (Autodesk, San Rafael, CA). Components were 3D-printed in-house using an Ultimaker 2+ (Ultimaker, Netherlands). Squid simulated the common tracheoesophageal wall. A Blom-Singer TEP (InHealth Technologies, Carpinteria, CA) replicated placement. Subjects watched an instructional video and completed pre- and post-simulation surveys. The simulator comprised 3D-printed parts: the esophageal lumen and superficial stoma. Squid was placed between components. Ten trainees participated. Significant differences existed between junior and senior residents on survey items regarding anatomy knowledge (p<0.05), technical details (p<0.01), and equipment setup (p<0.01). Subjects agreed that simulation felt accurate and that rehearsal raised confidence in future procedures. A 3D-printed TEP simulator is feasible for surgical training. Simulation involving multiple steps may accelerate technical skills and improve education. Copyright © 2017 Elsevier Inc. All rights reserved.
Effects of Light Regimes on the Growth of Cherrybark Oak Seedlings
Yanfei Guo; Michael G. Shelton; Brian R. Lockhart
2001-01-01
Light regimes vary significantly within small forest openings, ranging from full sunlight to total shade, and they may affect the establishment and early growth of oak seedlings. We designed modified shadehouses to simulate the complex light conditions within forest openings and tested the effects of daily photosynthetically active radiation (PAR), time of direct light...
Effects of Light Regimes on 1-Year-Old Sweetgum and Water Oak Seedlings
Yanfei Guo; Michael G. Shelton; Hui Zhang
2002-01-01
Light regimes vary significantly within small forest openings, ranging from full sunlight to total shade. This may affect establishment, early growth, and competitive status of hardwood seedlings. We used modified shadehouses to simulate light conditions within forest openings and to test the effects of daily photosynthetically active radiation and time of direct light...
Charles T. Stiff; William F. Stansfield
2004-01-01
Separate thinning guidelines were developed for maximizing land expectation value (LEV), present net worth (PNW), and total sawlog yield (TSY) of existing and future loblolly pine (Pinus taeda L.) plantations in eastern Texas. The guidelines were created using data from simulated stands which were thinned one time during their rotation using a...
NASA Astrophysics Data System (ADS)
Gica, E.
2016-12-01
The Short-term Inundation Forecasting for Tsunamis (SIFT) tool, developed by the NOAA Center for Tsunami Research (NCTR) at the Pacific Marine Environmental Laboratory (PMEL), is used in forecast operations at the Tsunami Warning Centers in Alaska and Hawaii. The SIFT tool relies on a pre-computed tsunami propagation database, real-time DART buoy data, and an inversion algorithm to define the tsunami source. The tsunami propagation database is composed of 50×100 km unit sources, simulated basin-wide for at least 24 hours. Different combinations of unit sources, DART buoys, and length of real-time DART buoy data can generate a wide range of results within the defined tsunami source. For an inexperienced SIFT user, the primary challenge is to determine which solution, among multiple solutions for a single tsunami event, would provide the best forecast in real time. This study investigates how the use of different tsunami sources affects simulated tsunamis at tide gauge locations. Using the tide gauge at Hilo, Hawaii, a total of 50 possible solutions for the 2011 Tohoku tsunami are considered. Maximum tsunami wave amplitude and root mean square error results are used to compare tide gauge data and the simulated tsunami time series. Results of this study will facilitate SIFT users' efforts to determine if the simulated tide gauge tsunami time series from a specific tsunami source solution would be within the range of possible solutions. This study will serve as the basis for investigating more historical tsunami events and tide gauge locations.
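The two comparison metrics named above, maximum wave amplitude and root mean square error, are straightforward to compute for a pair of time series. The eight-point series below are made-up amplitudes (in meters), not Hilo tide-gauge data.

```python
import math

def compare_series(observed, simulated):
    """RMSE between the two series, and the difference in peak absolute
    amplitude (toy stand-ins for the study's comparison metrics)."""
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated))
                     / len(observed))
    amp_diff = max(abs(o) for o in observed) - max(abs(s) for s in simulated)
    return rmse, amp_diff

obs = [0.00, 0.12, 0.45, 0.80, 0.35, -0.30, -0.55, -0.10]  # hypothetical gauge
sim = [0.00, 0.10, 0.40, 0.70, 0.30, -0.25, -0.50, -0.05]  # hypothetical model
rmse, amp_diff = compare_series(obs, sim)
```

Ranking 50 candidate source solutions then amounts to evaluating these metrics once per solution and inspecting the spread.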
Wang, Jianling; Xiao, Xiaofeng; Chen, Tong; Liu, Tingfei; Tao, Huaming; He, Jun
2016-06-17
The glyceride in oil food simulant usually causes serious interferences to target analytes and leads to failure of the normal function of the RP-HPLC column. In this work, a convenient HPLC-UV method for the determination of the total specific migration of nine ultraviolet (UV) absorbers in food simulants was developed based on 1,1,3,3-tetramethylguanidine (TMG) and organic phase anion exchange (OPAE) SPE to efficiently remove glyceride in olive oil simulant. In contrast to normal ion exchange carried out in an aqueous solution or aqueous phase environment, the OPAE SPE was performed in organic phase environments, and the time-consuming and challenging extraction of the nine UV absorbers from vegetable oil with aqueous solution could be readily omitted. The method was proved to have good linearity (r≥0.99992), precision (intra-day RSD≤3.3%), and accuracy (91.0% ≤ recoveries ≤ 107%); furthermore, low limits of quantification (0.05-0.2 mg/kg) were obtained in five types of food simulants (10% ethanol, 3% acetic acid, 20% ethanol, 50% ethanol and olive oil). The method was found to be well suited for quantitative determination of the total specific migration of the nine UV absorbers in both aqueous and vegetable oil simulants according to Commission Regulation (EU) No. 10/2011. Migration levels of the nine UV absorbers were determined in 31 plastic samples; UV-24, UV-531, HHBP and UV-326 were frequently detected, especially UV-326 in olive oil simulant from PE samples. In addition, the OPAE SPE procedure was also applied to efficiently enrich or purify seven antioxidants in olive oil simulant. Results indicate that this procedure will have more extensive applications in the enrichment or purification of extremely weak acidic compounds with phenol hydroxyl groups that are relatively stable in TMG n-hexane solution and that can barely be extracted from vegetable oil. Copyright © 2016 Elsevier B.V. All rights reserved.
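The precision and accuracy figures quoted above come from standard recovery and relative-standard-deviation calculations. The triplicate migration values below are hypothetical, chosen only to show the arithmetic.

```python
from statistics import mean, stdev

def recovery_pct(measured, spiked):
    """Spike recovery in percent (accuracy)."""
    return 100.0 * measured / spiked

def rsd_pct(replicates):
    """Relative standard deviation in percent (precision)."""
    return 100.0 * stdev(replicates) / mean(replicates)

# hypothetical triplicate results for one UV absorber spiked at 1.00 mg/kg
reps = [0.98, 1.02, 1.00]
rec = recovery_pct(mean(reps), 1.00)   # ~100%
rsd = rsd_pct(reps)                    # ~2%
```

Values such as "intra-day RSD ≤ 3.3%" and "recoveries 91.0-107%" summarize exactly these quantities across analytes and concentration levels.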
Blessing and curse of chaos in numerical turbulence simulations
NASA Astrophysics Data System (ADS)
Lee, Jon
1994-03-01
Because of trajectory instability, time reversal is not possible beyond a certain evolution time, and hence time irreversibility prevails under finite-accuracy trajectory computation. This provides a practical reconciliation of dynamic reversibility with macroscopic irreversibility (the blessing of chaos). On the other hand, the trajectory instability also limits the evolution time, so that finite-accuracy computation yields a pseudo-orbit which is totally unrelated to the true trajectory (the curse of chaos). For inviscid 2D flow, however, we can accurately compute the long-time average of flow quantities with a pseudo-orbit by invoking the ergodic theorem.
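Both halves of this argument can be seen in a one-line chaotic system. Below, the fully chaotic logistic map is iterated from two initial conditions differing by 1e-12, a stand-in for round-off: the pointwise orbits separate within a few dozen steps (curse), yet the long-time averages of both pseudo-orbits agree with the ergodic mean of 1/2 (blessing). The map and parameters are illustrative, not the paper's 2D flow.

```python
def logistic_orbit(x, n):
    """Iterate the fully chaotic logistic map x -> 4x(1-x)."""
    orbit = []
    for _ in range(n):
        orbit.append(x)
        x = 4.0 * x * (1.0 - x)
    return orbit

a = logistic_orbit(0.2, 100_000)
b = logistic_orbit(0.2 + 1e-12, 100_000)   # tiny "finite-accuracy" error

# curse: pointwise separation becomes O(1) within ~40-50 iterations
sep = max(abs(x - y) for x, y in zip(a[:60], b[:60]))

# blessing: long-time averages of either pseudo-orbit match the ergodic mean 1/2
avg_a = sum(a) / len(a)
avg_b = sum(b) / len(b)
```

The exponential error growth (here roughly a factor of 2 per step) is what bounds the usable evolution time of any trajectory computation, while the ergodic theorem is what rescues the averages.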
Coaching From the Sidelines: Examining the Impact of Teledebriefing in Simulation-Based Training.
Ahmed, Rami A; Atkinson, Steven Scott; Gable, Brad; Yee, Jennifer; Gardner, Aimee K
2016-10-01
Although simulation facilities are available at most teaching institutions, the number of qualified instructors and/or content experts who facilitate postsimulation debriefing is inadequate at many institutions. There remains a paucity of evidence-based data regarding several aspects of debriefing, including debriefing with a facilitator present versus teledebriefing, in which participants undergo debriefing with a facilitator providing instruction and direction from an off-site location while observing the simulation in real time. We conducted this study to identify the effectiveness and feasibility of teledebriefing as an alternative form of instruction. This study was conducted with emergency medicine residents randomized into either a teledebriefing or an on-site debriefing group during 11 simulation training sessions implemented over a 9-month period. The primary outcome of interest was resident perception of debriefing effectiveness, as measured by the Debriefing Assessment for Simulation in Healthcare-Student Version (See Appendix, Supplemental Digital Content 1, http://links.lww.com/SIH/A282) completed at the end of every simulation session. A total of 44 debriefings occurred during the study period, with 246 Debriefing Assessment for Simulation in Healthcare-Student Version surveys completed. The data revealed a statistically significant difference between the effectiveness of on-site debriefing [6.64 (0.45)] and teledebriefing [6.08 (0.57), P < 0.001]. Residents regularly evaluated both traditional debriefing and teledebriefing as "consistently effective/very good." Teledebriefing was rated lower than in-person debriefing but was still consistently effective. Further research is necessary to evaluate the effectiveness of teledebriefing in comparison with other alternatives. Teledebriefing potentially provides an alternative form of instruction within simulation environments for programs lacking access to expert faculty.
Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units
NASA Astrophysics Data System (ADS)
Kemal, Jonathan Yashar
For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
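The grid and speedup figures quoted above are internally consistent, as a quick check shows. The wall-clock time used here is a made-up illustration; only the grid counts and speedup ratios come from the abstract.

```python
# Grid bookkeeping for the densest case quoted above.
blocks = 13
pts_per_block = 1033 * 1033            # ~1.07 million points per domain block
total_pts = blocks * pts_per_block     # ~13.87 million points in total

# Hypothetical iteration-loop wall time, to illustrate the speedup arithmetic:
t_8cpu = 3600.0                        # 8-CPU time, seconds (illustrative)
t_8gpu = t_8cpu / 6.0                  # overall 8-GPU speedup of 6.0

# Consistency of the two quoted speedups: 8 GPUs are 39.5x a single CPU and
# 6.0x faster than 8 CPUs, implying 8 CPU cores scale about 6.6x over 1 core.
implied_cpu_scaling = 39.5 / 6.0
```

The implied ~6.6x scaling on 8 cores (about 82% parallel efficiency) is a plausible figure for a memory-bound finite-volume solver.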
NASA Astrophysics Data System (ADS)
Kumar, R.; Samaniego, L. E.; Livneh, B.
2013-12-01
Knowledge of soil hydraulic properties such as porosity and saturated hydraulic conductivity is required to accurately model the dynamics of near-surface hydrological processes (e.g. evapotranspiration and root-zone soil moisture dynamics) and provide reliable estimates of regional water and energy budgets. Soil hydraulic properties are commonly derived from pedo-transfer functions using soil textural information recorded during surveys, such as the fractions of sand and clay, bulk density, and organic matter content. Typically large scale land-surface models are parameterized using a relatively coarse soil map with little or no information on parametric sub-grid variability. In this study we analyze the impact of sub-grid soil variability on simulated hydrological fluxes over the Mississippi River Basin (≈3,240,000 km2) at multiple spatio-temporal resolutions. A set of numerical experiments were conducted with the distributed mesoscale hydrologic model (mHM) using two soil datasets: (a) the Digital General Soil Map of the United States or STATSGO2 (1:250 000) and (b) the recently collated Harmonized World Soil Database based on the FAO-UNESCO Soil Map of the World (1:5 000 000). mHM was parameterized with the multi-scale regionalization technique that derives distributed soil hydraulic properties via pedo-transfer functions and regional coefficients. Within the experimental framework, the 3-hourly model simulations were conducted at four spatial resolutions ranging from 0.125° to 1°, using meteorological datasets from the NLDAS-2 project for the time period 1980-2012. Preliminary results indicate that the model was able to capture observed streamflow behavior reasonably well with both soil datasets, in the major sub-basins (i.e. the Missouri, the Upper Mississippi, the Ohio, the Red, and the Arkansas). However, the spatio-temporal patterns of simulated water fluxes and states (e.g. 
soil moisture, evapotranspiration) from the two simulations showed marked differences, particularly at shorter time scales (hours to days) in regions with coarse-textured sandy soils. Furthermore, the partitioning of total runoff into near-surface interflow and baseflow components was also significantly different between the two simulations. Simulations with the coarser soil map produced comparatively higher baseflows. At longer time scales (months to seasons), where climatic factors play a major role, the integrated fluxes and states from both sets of model simulations match fairly closely, despite the apparent discrepancy in the partitioning of total runoff.
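Pedo-transfer functions of the kind used above are typically simple regressions on texture. The Cosby-style form and coefficients below are illustrative only, not mHM's calibrated regionalization; they merely show why coarse sandy soils produce very different hydraulic conductivities than fine-textured soils.

```python
def ptf_ksat(sand_pct, clay_pct, a=-0.6, b=0.0126, c=-0.0064):
    """Cosby-style pedo-transfer function: log10 of saturated hydraulic
    conductivity as a linear function of sand and clay percentages.
    Coefficients are illustrative, not mHM's calibrated values."""
    return 10.0 ** (a + b * sand_pct + c * clay_pct)   # arbitrary units

k_sandy = ptf_ksat(80, 5)    # coarse sandy soil (e.g. 80% sand, 5% clay)
k_clayey = ptf_ksat(20, 40)  # fine-textured soil (e.g. 20% sand, 40% clay)
```

Because the texture fractions enter through an exponent, modest differences between two soil maps translate into order-of-magnitude differences in conductivity, which is consistent with the altered runoff partitioning reported above.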
Efficient numerical simulation of heat storage in subsurface georeservoirs
NASA Astrophysics Data System (ADS)
Boockmeyer, A.; Bauer, S.
2015-12-01
The transition of the German energy market towards renewable energy sources, e.g. wind or solar power, requires energy storage technologies to compensate for their fluctuating production. Large amounts of energy could be stored in georeservoirs such as porous formations in the subsurface. One possibility here is to store heat with high temperatures of up to 90°C through borehole heat exchangers (BHEs), since more than 80% of the total energy consumption in German households is used for heating and hot water supply. Within the ANGUS+ project, potential environmental impacts of such heat storages are assessed and quantified. Numerical simulations are performed to predict storage capacities, storage cycle times, and induced effects. For simulation of these highly dynamic storage sites, detailed high-resolution models are required. We set up a model that accounts for all components of the BHE and verified it using experimental data. The model ensures accurate simulation results but also leads to large numerical meshes and thus high simulation times. In this work, we therefore present a numerical model for each type of BHE (single U, double U and coaxial) that reduces the number of elements and the simulation time significantly for use in larger scale simulations. The numerical model includes all BHE components and represents the temporal and spatial temperature distribution with an accuracy of less than 2% deviation from the fully discretized model. By changing the BHE geometry and using equivalent parameters, the simulation time is reduced by a factor of ~10 for single U-tube BHEs, ~20 for double U-tube BHEs and ~150 for coaxial BHEs. Results of a sensitivity study that quantify the effects of different design and storage formation parameters on temperature distribution and storage efficiency for heat storage using multiple BHEs are then shown.
It is found that storage efficiency strongly depends on the number of BHEs composing the storage site, their distance and the cycle time. The temperature distribution is most sensitive to thermal conductivity of both borehole grouting and storage formation while storage efficiency is mainly controlled by the thermal conductivity of the storage formation.
Improving Project Management with Simulation and Completion Distribution Functions
NASA Technical Reports Server (NTRS)
Cates, Grant R.
2004-01-01
Despite the critical importance of project completion timeliness, management practices in place today remain inadequate for addressing the persistent problem of project completion tardiness. A major culprit in late projects is uncertainty, which most, if not all, projects are inherently subject to. This uncertainty resides in the estimates for activity durations, the occurrence of unplanned and unforeseen events, and the availability of critical resources. In response to this problem, this research developed a comprehensive simulation based methodology for conducting quantitative project completion time risk analysis. It is called the Project Assessment by Simulation Technique (PAST). This new tool enables project stakeholders to visualize uncertainty or risk, i.e. the likelihood of their project completing late and the magnitude of the lateness, by providing them with a completion time distribution function of their projects. Discrete event simulation is used within PAST to determine the completion distribution function for the project of interest. The simulation is populated with both deterministic and stochastic elements. The deterministic inputs include planned project activities, precedence requirements, and resource requirements. The stochastic inputs include activity duration growth distributions, probabilities for events that can impact the project, and other dynamic constraints that may be placed upon project activities and milestones. These stochastic inputs are based upon past data from similar projects. The time for an entity to complete the simulation network, subject to both the deterministic and stochastic factors, represents the time to complete the project. Repeating the simulation hundreds or thousands of times allows one to create the project completion distribution function. The Project Assessment by Simulation Technique was demonstrated to be effective for the on-going NASA project to assemble the International Space Station. 
Approximately $500 million per month is being spent on this project, which is scheduled to complete by 2010. NASA project stakeholders participated in determining and managing completion distribution functions produced from PAST. The first result was that project stakeholders improved project completion risk awareness. Secondly, using PAST, mitigation options were analyzed to improve project completion performance and reduce total project cost.
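The core of the PAST idea, sampling stochastic activity durations and accumulating an empirical completion-time distribution, fits in a short Monte Carlo sketch. The three-activity serial network, triangular duration bounds, and 12-month deadline below are hypothetical, not the ISS assembly schedule.

```python
import random

def simulate_completion(activities, n_runs=20_000, seed=1):
    """Monte Carlo sketch of the PAST idea: sample stochastic activity
    durations along a serial precedence chain and collect the empirical
    completion-time distribution. Each activity is (min, mode, max) months."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        totals.append(sum(rng.triangular(lo, hi, mode)
                          for lo, mode, hi in activities))
    return sorted(totals)

plan = [(2, 3, 6), (1, 2, 4), (4, 5, 9)]    # three serial activities (months)
totals = simulate_completion(plan)
p_late = sum(t > 12 for t in totals) / len(totals)  # P(later than 12 months)
p90 = totals[int(0.9 * len(totals))]                # 90th-percentile finish
```

Sorting the sampled totals gives exactly the completion distribution function described above; stakeholders read off lateness probabilities and percentile completion dates from it. A real network would add parallel branches, resource constraints, and discrete risk events.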
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, W., E-mail: wei.lu@xfel.eu; European X-Ray Free-Electron Laser Facility, 22607 Hamburg; Noll, T.
A hard X-ray Split and Delay Line (SDL) under development for the Materials Imaging and Dynamics (MID) station at the European X-Ray Free-Electron Laser (XFEL.EU) is presented. This device will provide pairs of X-ray pulses with a variable time delay ranging from −10 ps to 800 ps in a photon energy range from 5 to 10 keV. Throughput simulations in the SASE case indicate a total transmission of 1.1% or 3.5% depending on the operation mode. In the self-seeded case of XFEL.EU operation simulations indicate that the transmission can be improved to more than 11%.
Williams, Richard M.; Aalseth, C. E.; Brandenberger, J. M.; ...
2017-02-17
Here, this paper describes the generation of 39Ar via reactor irradiation of potassium carbonate, followed by quantitative analysis (length-compensated proportional counting) to yield two calibration standards that are respectively 50 and 3 times atmospheric background levels. Measurements were performed in Pacific Northwest National Laboratory's shallow underground counting laboratory to study the effect of gas density on beta transport; these results are compared with simulation. The total expanded uncertainty of the specific activity for the ~50× atmospheric 39Ar in P10 standard is 3.6% (k=2).
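An expanded uncertainty like the 3.6% (k=2) quoted above is just the quadrature sum of the independent standard-uncertainty components multiplied by the coverage factor. The component values below are hypothetical, chosen only so the total lands near the quoted figure.

```python
import math

def expanded_uncertainty(components_pct, k=2.0):
    """Combine independent relative standard uncertainties in quadrature
    and apply coverage factor k (k=2 gives roughly 95% coverage)."""
    u_c = math.sqrt(sum(u * u for u in components_pct))
    return k * u_c

# hypothetical components (%): counting statistics, gas handling, efficiency
U = expanded_uncertainty([1.2, 1.0, 0.9])   # ~3.6% at k=2
```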
All-optical analog-to-digital converter based on Kerr effect in photonic crystal
NASA Astrophysics Data System (ADS)
Jafari, Dariush; Nurmohammadi, Tofiq; Asadi, Mohammad Javad; Abbasian, Karim
2018-05-01
In this paper, a novel all-optical analog-to-digital converter (AOADC) is proposed and simulated for proof of principle. This AOADC is designed to operate in the range of telecom wavelength (1550 nm). A cavity made of nonlinear Kerr material in photonic crystal (PhC), is designed to achieve an optical analog-to-digital conversion with 1 Tera sample per second (TS/s) and the total footprint of 42 μm2 . The simulation is done using finite-difference time domain (FDTD) method.
Microparticle accelerator of unique design. [for micrometeoroid impact and cratering simulation
NASA Technical Reports Server (NTRS)
Vedder, J. F.
1978-01-01
A microparticle accelerator has been devised for micrometeoroid impact and cratering simulation; the device produces high-velocity (0.5-15 km/sec), micrometer-sized projectiles of any cohesive material. In the source, an electrodynamic levitator, single particles are charged by ion bombardment in high vacuum. The vertical accelerator has four drift tubes, each initially at a high negative voltage. After injection of the projectile, each tube is grounded in turn at a time determined by the voltage and charge/mass ratio to give four acceleration stages with a total voltage equivalent to about 1.7 MV.
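The quoted projectile speeds follow from energy conservation: a particle of charge-to-mass ratio q/m falling through a total equivalent potential V reaches v = sqrt(2 (q/m) V). The charge-to-mass ratio below is a hypothetical value for a micron-sized charged grain, picked to land at the top of the stated velocity range.

```python
import math

def final_speed(q_over_m, v_total):
    """Speed gained by a charged particle accelerated through a total
    potential v_total [V]: v = sqrt(2 * (q/m) * V)."""
    return math.sqrt(2.0 * q_over_m * v_total)

qm = 66.0                      # hypothetical charge/mass ratio, C/kg
v = final_speed(qm, 1.7e6)     # total equivalent ~1.7 MV over the four stages
```

The same v and q/m also set the grounding times of the four drift tubes, since each tube must switch while the projectile is inside it.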
NASA Astrophysics Data System (ADS)
Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard
2015-01-01
In the present study, an optimization approach based on the firefly algorithm (FA) is combined with finite element method (FEM) simulation to determine the optimum design of pump-and-treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered, and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA), and the FA is found to have a better convergence rate than the GA.
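For readers unfamiliar with the firefly algorithm, its core update is small: each firefly moves toward every brighter (lower-cost) one with an attractiveness that decays with distance, plus a shrinking random walk. The sketch below minimizes a simple quadratic; it is a generic FA, not the paper's FA-FEM coupling, and all parameter values are illustrative.

```python
import math
import random

def firefly_minimize(f, dim, n=20, iters=200, beta0=1.0, gamma=0.5,
                     alpha=0.1, seed=3):
    """Bare-bones firefly algorithm: dimmer fireflies move toward brighter
    ones with attractiveness beta0*exp(-gamma*r^2) plus a decaying random
    walk. A generic sketch, not the paper's implementation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n)]
    cost = [f(x) for x in pop]
    for t in range(iters):
        step = alpha * (1.0 - t / iters)          # shrink the random walk
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:             # j is brighter: attract i
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + step * rng.uniform(-1, 1)
                              for a, b in zip(pop[i], pop[j])]
                    cost[i] = f(pop[i])
    best = min(range(n), key=cost.__getitem__)
    return pop[best], cost[best]

x_best, c_best = firefly_minimize(lambda x: sum(v * v for v in x), dim=2)
```

In the FA-FEM setting, f(x) would run a finite element groundwater simulation for the candidate pumping rates and clean-up time and return the penalized cost, which is why each objective evaluation is expensive and convergence rate matters.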
Movahed, Reza; Teschke, Marcus; Wolford, Larry M
2013-12-01
Clinicians who address temporomandibular joint (TMJ) pathology and dentofacial deformities surgically can perform the surgery in 1 stage or 2 separate stages. The 2-stage approach requires the patient to undergo 2 separate operations and anesthesia, significantly prolonging the overall treatment. However, performing concomitant TMJ and orthognathic surgery (CTOS) in these cases requires careful treatment planning and surgical proficiency in the 2 surgical areas. This article presents a new treatment protocol for the application of computer-assisted surgical simulation in CTOS cases requiring reconstruction with patient-fitted total joint prostheses. The traditional and new CTOS protocols are described and compared. The new CTOS protocol helps decrease the preoperative workup time and increase the accuracy of model surgery. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Alternative modeling methods for plasma-based Rf ion sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com
Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H⁻ source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H⁻ ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim.
We compare demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.« less
Alternative modeling methods for plasma-based Rf ion sources.
Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C
2016-02-01
Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. 
We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results showing plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.
Huang, Qiuhua; Vittal, Vijay
2018-05-09
Conventional electromagnetic transient (EMT) and phasor-domain hybrid simulation approaches presently exist for transmission system level studies. Their simulation efficiency is generally constrained by the EMT simulation. With an increasing number of distributed energy resources and non-conventional loads being installed in distribution systems, it is imperative to extend the hybrid simulation application to include distribution systems and integrated transmission and distribution systems. Meanwhile, it is equally important to improve the simulation efficiency as the modeling scope and complexity of the detailed system in the EMT simulation increases. To meet both requirements, this paper introduces an advanced EMT and phasor-domain hybrid simulation approach. This approach has two main features: 1) a comprehensive phasor-domain modeling framework which supports positive-sequence, three-sequence, three-phase and mixed three-sequence/three-phase representations and 2) a robust and flexible simulation mode switching scheme. The developed scheme enables simulation switching from hybrid simulation mode back to pure phasor-domain dynamic simulation mode to achieve significantly improved simulation efficiency. The proposed method has been tested on integrated transmission and distribution systems. In conclusion, the results show that with the developed simulation switching feature, the total computational time is significantly reduced compared to running the hybrid simulation for the whole simulation period, while maintaining good simulation accuracy.
NASA Astrophysics Data System (ADS)
Valente, Pedro C.; da Silva, Carlos B.; Pinho, Fernando T.
2013-11-01
We report a numerical study of statistically steady and decaying turbulence of FENE-P fluids for varying polymer relaxation times ranging from the Kolmogorov dissipation time-scale to the eddy turnover time. The total turbulent kinetic energy dissipation is shown to increase with the polymer relaxation time in both steady and decaying turbulence, implying a "drag increase". If the total power input in the statistically steady case is kept equal in the Newtonian and the viscoelastic simulations, the increase in the turbulence-polymer energy transfer naturally leads to the previously reported depletion of the Newtonian, but not the overall, kinetic energy dissipation. The modifications to the nonlinear energy cascade with varying Deborah/Weissenberg numbers are quantified and their origins investigated. The authors acknowledge the financial support from Fundação para a Ciência e a Tecnologia under grant PTDC/EME-MFE/113589/2009.
Structure analysis of simulated molecular clouds with the Δ-variance
Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.
2015-05-27
Here, we employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n0 = 30, 100 and 300 cm⁻³ that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H₂ and ¹²CO number density and the integrated intensity of both the ¹²CO and ¹³CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H₂ number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth-size relation ranging from 0.4 to 0.7 for the total and H₂ density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H₂ density by a factor of 1.5-3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of 100 cm⁻³ at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.
Dettinger, M.D.; Cayan, D.R.; Meyer, M.K.; Jeton, A.
2004-01-01
Hydrologic responses of river basins in the Sierra Nevada of California to historical and future climate variations and changes are assessed by simulating daily streamflow and water-balance responses to simulated climate variations over a continuous 200-yr period. The coupled atmosphere-ocean-ice-land Parallel Climate Model provides the simulated climate histories, and existing hydrologic models of the Merced, Carson, and American Rivers are used to simulate the basin responses. The historical simulations yield stationary climate and hydrologic variations through the first part of the 20th century until about 1975, when temperatures begin to warm noticeably and when snowmelt and streamflow peaks begin to occur progressively earlier within the seasonal cycle. A future climate simulated with business-as-usual increases in greenhouse-gas and aerosol radiative forcings continues those recent trends through the 21st century with an attendant +2.5 °C warming and a hastening of snowmelt and streamflow within the seasonal cycle by almost a month. The various projected trends in the business-as-usual simulations become readily visible despite realistic simulated natural climatic and hydrologic variability by about 2025. In contrast to these changes that are mostly associated with streamflow timing, long-term average totals of streamflow and other hydrologic fluxes remain similar to the historical mean in all three simulations. A control simulation in which radiative forcings are held constant at 1995 levels for the 50 years following 1995 yields climate and streamflow timing conditions much like the 1980s and 1990s throughout its duration. The availability of continuous climate-change projection outputs and careful design of initial conditions and control experiments, like those utilized here, promise to improve the quality and usability of future climate-change impact assessments.
Collaborative testing of turbulence models
NASA Technical Reports Server (NTRS)
Bradshaw, Peter; Launder, Brian E.; Lumley, John L.
1991-01-01
A review is given of an ongoing international project, in which data from experiments on, and simulations of, turbulent flows are distributed to developers of (time-averaged) engineering turbulence models. The predictions of each model are sent to the organizers and redistributed to all the modelers, plus some experimentalists and other experts (total approx. 120), for comment. The 'reaction time' of modelers has proved to be much longer than anticipated, partly because the comparisons with data have prompted many modelers to improve their models or numerics.
Subramanian, Swetha; Mast, T Douglas
2015-10-07
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.
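The inverse-solver idea above can be illustrated in miniature. The sketch below is not the note's finite-element/UKF implementation: it substitutes a toy 1-D finite-difference heat model for the finite-element model and a generic least-squares fit for the unscented Kalman filter, and all geometry, source strength, and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(params, n=21, dx=1e-3, dt=0.25, steps=240):
    """Toy 1-D explicit finite-difference heat model of an RF ablation:
    a volumetric source heats the central nodes while conduction spreads
    the temperature rise.  Returns probe temperatures over time."""
    k, rho_c = params                 # W/m/K, J/m^3/K
    alpha = k / rho_c
    T = np.full(n, 37.0)              # baseline tissue temperature, deg C
    q = np.zeros(n)
    q[n // 2 - 1:n // 2 + 2] = 5e5    # assumed RF deposition, W/m^3
    probes = []
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        T = T + dt * (alpha * lap + q / rho_c)
        T[0] = T[-1] = 37.0           # fixed-temperature boundaries
        probes.append(T[[n // 4, n // 2, 3 * n // 4]])
    return np.concatenate(probes)

true = np.array([0.51, 3.6e6])        # "unknown" k and rho*c
measured = simulate(true)             # stands in for measured data

# Recover the parameters from the synthetic measurements.
fit = least_squares(lambda p: simulate(p) - measured,
                    x0=[0.40, 3.0e6],
                    bounds=([0.1, 1e6], [1.0, 6e6]),
                    x_scale=[0.5, 3e6])
```

With exact synthetic data the fit recovers the generating parameters; with noisy real measurements a stochastic filter such as the UKF additionally propagates parameter uncertainty, which is why the note prefers it.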
Consequences of Base Time for Redundant Signals Experiments
Townsend, James T.; Honey, Christopher
2007-01-01
We report analytical and computational investigations into the effects of base time on the diagnosticity of two popular theoretical tools in the redundant signals literature: (1) the race model inequality and (2) the capacity coefficient. We show analytically and without distributional assumptions that the presence of base time decreases the sensitivity of both of these measures to model violations. We further use simulations to investigate the statistical power of model-selection tools based on the race model inequality, both with and without base time. Base time decreases statistical power and biases the race model test toward conservatism. The magnitude of this biasing effect increases as we increase the proportion of total reaction time variance contributed by base time. We marshal empirical evidence to suggest that the proportion of reaction time variance contributed by base time is relatively small, and that the effects of base time on the diagnosticity of our model-selection tools are therefore likely to be minor. However, uncertainty remains concerning the magnitude and even the definition of base time. Experimentalists should continue to be alert to situations in which base time may contribute a large proportion of the total reaction time variance. PMID:18670591
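The race model inequality discussed above (Miller's bound F_AB(t) ≤ F_A(t) + F_B(t)) is easy to check numerically. The sketch below simulates an independent-channel race with a shared base-time component; all distributions and parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

def ecdf(samples, t):
    """Empirical CDF of `samples` evaluated at each time in `t`."""
    return np.searchsorted(np.sort(samples), t, side="right") / len(samples)

# One shared base-time component plus two decision channels
# (all parameters in ms, chosen only for illustration).
base = rng.normal(150.0, 20.0, n)
chan_a = 100.0 + rng.exponential(120.0, n)
chan_b = 100.0 + rng.exponential(150.0, n)

rt_a = base + chan_a                       # single-target condition A
rt_b = base + chan_b                       # single-target condition B
rt_ab = base + np.minimum(chan_a, chan_b)  # redundant condition: race

# Race model inequality: F_AB(t) <= F_A(t) + F_B(t) for all t.
t = np.linspace(150.0, 600.0, 50)
violation = ecdf(rt_ab, t) - (ecdf(rt_a, t) + ecdf(rt_b, t))
```

Because min(a, b) ≤ t exactly when a ≤ t or b ≤ t, race-generated data can never violate the bound; empirical violations in real data are what argue against a race architecture, and the shared base time only shrinks the margin by which real violations can show.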
Factors influencing nurses' attitudes toward simulation-based education.
Decarlo, Deborah; Collingridge, Dave S; Grant, Carrie; Ventre, Kathleen M
2008-01-01
To identify barriers to nurses' participation in simulation, and to determine whether prior simulation exposure, professional experience, and practice location influence their tendency to perceive specific issues as barriers. We also sought to identify nurses' educational priorities, and to determine whether these were influenced by years of experience or practice location. We surveyed full-time and part-time nurses in a university-affiliated children's hospital to gather data on professional demographics, simulation exposure, perceived barriers to participation in simulation, and training priorities. A total of 523 of 936 (56%) eligible nurses completed the survey. Binary logistic regression analysis revealed that "simulation is 'not the real thing'" was selected as a barrier more often by nurses with prior simulation experience (P = 0.02), fewer years in practice (P = 0.02), and employment in non-acute care areas of the hospital (P = 0.03). "Unfamiliarity with equipment" was reported more often by nurses with less experience (P = 0.01). "Stressful or intimidating environment" was selected more often by those who work in non-acute care areas (P < 0.01). "Providing opportunities to manage rare events" was suggested as a training priority by nurses with less experience (P = 0.08) and by those practicing in acute care areas (P = 0.03). We identified several barriers to nurses' participation in simulation training. Nurses' tendency to name specific issues as barriers is related to prior simulation exposure, years of experience, and area of hospital practice. Rehearsing rare event management is a priority for less-experienced nurses and those in acute care areas.
Akhtar, Kashif; Sugand, Kapil; Sperrin, Matthew; Cobb, Justin; Standfield, Nigel; Gupte, Chinmay
2015-01-01
Virtual-reality (VR) simulation in orthopedic training is still in its infancy, and much of the work has been focused on arthroscopy. We evaluated the construct validity of a new VR trauma simulator for performing dynamic hip screw (DHS) fixation of a trochanteric femoral fracture. 30 volunteers were divided into 3 groups according to the number of postgraduate (PG) years and the amount of clinical experience: novice (1-4 PG years; less than 10 DHS procedures); intermediate (5-12 PG years; 10-100 procedures); expert (> 12 PG years; > 100 procedures). Each participant performed a DHS procedure and objective performance metrics were recorded. These data were analyzed with each performance metric taken as the dependent variable in 3 regression models. There were statistically significant differences in performance between groups for (1) number of attempts at guide-wire insertion, (2) total fluoroscopy time, (3) tip-apex distance, (4) probability of screw cutout, and (5) overall simulator score. The intermediate group performed the procedure most quickly, with the lowest fluoroscopy time, the lowest tip-apex distance, the lowest probability of cutout, and the highest simulator score, which correlated with their frequency of exposure to running the trauma lists for hip fracture surgery. This study demonstrates the construct validity of a haptic VR trauma simulator with surgeons undertaking the procedure most frequently performing best on the simulator. VR simulation may be a means of addressing restrictions on working hours and allows trainees to practice technical tasks without putting patients at risk. The VR DHS simulator evaluated in this study may provide valid assessment of technical skill.
NASA Astrophysics Data System (ADS)
Kempka, T.; Norden, B.; Tillner, E.; Nakaten, B.; Kühn, M.
2012-04-01
Geological modelling and dynamic flow simulations were conducted at the Ketzin pilot site, showing good agreement of history-matched geological models with CO2 arrival times in both observation wells and with the temporal development of reservoir pressure determined in the injection well. Recently, a re-evaluation of the 3D seismic data enabled a refinement of the structural site model and the implementation of the fault system present at the top of the Ketzin anticline. The updated geological model (model size: 5 km x 5 km) has a horizontal discretization of 5 m x 5 m and consists of three vertical zones, with the finest discretization (0.5 m) at the top. According to the revised seismic analysis, the facies modelling used to simulate the channel and floodplain facies distribution at Ketzin was updated. The structural model was parameterized using a sequential Gaussian simulator for the distribution of total and effective porosities and an empiric porosity-permeability relationship based on available site and literature data. Based on this revised reservoir model of the Stuttgart formation, numerical simulations using the TOUGH2-MP/ECO2N and Schlumberger Information Services (SIS) ECLIPSE 100 black-oil simulators were undertaken in order to evaluate the long-term (up to 10,000 years) migration of the injected CO2 (about 57,000 t at the end of 2011) and the development of reservoir pressure over time. The simulation results enabled us to quantitatively compare both reservoir simulators based on current operational data, considering the long-term effects of CO2 storage including CO2 dissolution in the formation fluid. While the integration of the static geological model developed in the SIS Petrel modelling package into the ECLIPSE simulator is relatively seamless, a workflow allowing for the export of Petrel models into the TOUGH2-MP input file format had to be implemented within the scope of this study.
The main challenge in this task was the presence of a complex fault system in the revised reservoir model, demanding an integrated concept for handling connections between the fault-aligned elements in the TOUGH2-MP simulator. Furthermore, we developed a methodology to visualize and compare the TOUGH2-MP simulation results with those of the ECLIPSE simulator using the Petrel software package. The long-term simulation results of both simulators are generally in good agreement. Spatial and temporal migration of the CO2 plume as well as residual gas saturation are almost identical for both simulators, even though a time-dependent approach to CO2 dissolution in the formation fluid was chosen in the ECLIPSE simulator. Our results confirmed that a scientific open-source simulator such as the TOUGH2-MP software package is capable of providing the same accuracy as the industry-standard simulator ECLIPSE 100. However, the computational time and the additional effort to implement a suitable workflow for the TOUGH2-MP simulator are significantly higher, while the open-source concept of TOUGH2 provides more flexibility regarding process adaptation.
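The porosity-to-permeability parameterization step described above can be sketched in a few lines. This is a hypothetical stand-in: the smoothed random field merely mimics the spatial correlation a sequential Gaussian simulation would produce, the effective/total porosity ratio is assumed, and the log-linear poro-perm coefficients are placeholders, not the calibrated Ketzin relation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a sequential Gaussian simulation: smooth white noise
# row-wise to impose spatial correlation, then map to porosity.
nz, nx = 20, 100
noise = rng.standard_normal((nz, nx))
kernel = np.ones(5) / 5.0
field = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), 1, noise)

phi_total = np.clip(0.22 + 0.05 * field, 0.01, 0.35)  # total porosity
phi_eff = 0.8 * phi_total   # assumed effective/total ratio (illustrative)

# Hypothetical empiric log-linear porosity-permeability relationship
# (placeholder coefficients, not the site calibration):
perm_mD = 10.0 ** (12.0 * phi_eff - 1.0)
```

The key design point is that permeability is a deterministic transform of the stochastic porosity field, so each geostatistical realization yields a consistent pair of property grids for the flow simulator.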
Belger, Mark; Haro, Josep Maria; Reed, Catherine; Happich, Michael; Kahle-Wrobleski, Kristin; Argimon, Josep Maria; Bruno, Giuseppe; Dodel, Richard; Jones, Roy W; Vellas, Bruno; Wimo, Anders
2016-07-18
Missing data are a common problem in prospective studies with a long follow-up, and the volume, pattern and reasons for missing data may be relevant when estimating the cost of illness. We aimed to evaluate the effects of different methods for dealing with missing longitudinal cost data and for costing caregiver time on total societal costs in Alzheimer's disease (AD). GERAS is an 18-month observational study of costs associated with AD. Total societal costs included patient health and social care costs, and caregiver health and informal care costs. Missing data were classified as missing completely at random (MCAR), missing at random (MAR) or missing not at random (MNAR). Simulation datasets were generated from baseline data with 10-40 % missing total cost data for each missing data mechanism. Datasets were also simulated to reflect the missing cost data pattern at 18 months using MAR and MNAR assumptions. Naïve and multiple imputation (MI) methods were applied to each dataset and results compared with complete GERAS 18-month cost data. Opportunity and replacement cost approaches were used for caregiver time, which was costed with and without supervision included and with time for working caregivers only being costed. Total costs were available for 99.4 % of 1497 patients at baseline. For MCAR datasets, naïve methods performed as well as MI methods. For MAR, MI methods performed better than naïve methods. All imputation approaches were poor for MNAR data. For all approaches, percentage bias increased with missing data volume. For datasets reflecting 18-month patterns, a combination of imputation methods provided more accurate cost estimates (e.g. bias: -1 % vs -6 % for single MI method), although different approaches to costing caregiver time had a greater impact on estimated costs (29-43 % increase over base case estimate). 
The methods used to impute missing cost data in AD will impact the accuracy of cost estimates, although varying approaches to costing informal caregiver time have the greatest impact on total costs. Tailoring imputation methods to the reason for missing data will further our understanding of the best analytical approach for studies involving cost outcomes.
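The core MAR issue in the study above can be demonstrated with a toy example. The sketch uses single regression imputation as a simplified stand-in for the study's multiple-imputation methods; the cost model and missingness mechanism are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

severity = rng.standard_normal(n)                      # observed covariate
cost = 100.0 + 50.0 * severity + rng.normal(0, 10, n)  # true total cost

# MAR mechanism: costs go missing far more often for severe patients.
missing = rng.random(n) < np.where(severity > 0, 0.9, 0.1)

true_mean = cost.mean()
complete_case = cost[~missing].mean()   # naive complete-case estimate

# Simple regression imputation (stand-in for multiple imputation):
# fit cost ~ severity on observed rows, predict the missing rows.
slope, intercept = np.polyfit(severity[~missing], cost[~missing], 1)
imputed = cost.copy()
imputed[missing] = intercept + slope * severity[missing]
imputed_mean = imputed.mean()
```

Under MAR the complete-case mean is badly biased (the expensive, severe patients are the ones missing), while conditioning the imputation on the covariate that drives missingness recovers an approximately unbiased total. Proper multiple imputation additionally draws several imputed datasets to reflect imputation uncertainty.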
Tofte, Josef N; Westerlind, Brian O; Martin, Kevin D; Guetschow, Brian L; Uribe-Echevarria, Bastián; Rungprai, Chamnanni; Phisitkul, Phinit
2017-03-01
To validate the knee, shoulder, and virtual Fundamentals of Arthroscopic Training (FAST) modules on a virtual arthroscopy simulator via correlations with arthroscopy case experience and postgraduate year. Orthopaedic residents and faculty from one institution performed a standardized sequence of knee, shoulder, and FAST modules to evaluate baseline arthroscopy skills. Total operation time, camera path length, and composite total score (metric derived from multiple simulator measurements) were compared with case experience and postgraduate level. Values reported are Pearson r; alpha = 0.05. 35 orthopaedic residents (6 per postgraduate year), 2 fellows, and 3 faculty members (2 sports, 1 foot and ankle), including 30 male and 5 female residents, were voluntarily enrolled March to June 2015. Knee: training year correlated significantly with year-averaged knee composite score, r = 0.92, P = .004, 95% confidence interval (CI) = 0.84, 0.96; operation time, r = -0.92, P = .004, 95% CI = -0.96, -0.84; and camera path length, r = -0.97, P = .0004, 95% CI = -0.98, -0.93. Knee arthroscopy case experience correlated significantly with composite score, r = 0.58, P = .0008, 95% CI = 0.27, 0.77; operation time, r = -0.54, P = .002, 95% CI = -0.75, -0.22; and camera path length, r = -0.62, P = .0003, 95% CI = -0.8, -0.33. Shoulder: training year correlated strongly with average shoulder composite score, r = 0.90, P = .006, 95% CI = 0.81, 0.95; operation time, r = -0.94, P = .001, 95% CI = -0.97, -0.89; and camera path length, r = -0.89, P = .007, 95% CI = -0.95, -0.80. Shoulder arthroscopy case experience correlated significantly with average composite score, r = 0.52, P = .003, 95% CI = 0.2, 0.74; strongly with operation time, r = -0.62, P = .0002, 95% CI = -0.8, -0.33; and camera path length, r = -0.37, P = .044, 95% CI = -0.64, -0.01, by training year. 
FAST: training year correlated significantly with 3 combined FAST activity average composite scores, r = 0.81, P = .0279, 95% CI = 0.65, 0.90; operation times, r = -0.86, P = .012, 95% CI = -0.93, -0.74; and camera path lengths, r = -0.85, P = .015, 95% CI = -0.92, -0.72. Total arthroscopy cases performed did not correlate significantly with overall FAST performance. We found significant correlations between both training year and knee and shoulder arthroscopy experience when compared with performance as measured by composite score, camera path length, and operation time during a simulated diagnostic knee and shoulder arthroscopy, respectively. Three FAST activities demonstrated significant correlations with training year but not arthroscopy case experience as measured by composite score, camera path length, and operation time. We attempt to validate an arthroscopy simulator that could be used to supplement arthroscopy skills training for orthopaedic residents. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
High-speed extended-term time-domain simulation for online cascading analysis of power system
NASA Astrophysics Data System (ADS)
Fu, Chuan
A high-speed extended-term (HSET) time-domain simulator (TDS), intended to become a part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) the ability to simulate both fast and slow dynamics 1-3 hours in advance, (iii) rigorous protection-system modeling, (iv) intelligence for corrective-action identification, storage, and fast retrieval, and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the differential-algebraic equations describing power system dynamics, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order accuracy (order h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable-step-size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time-domain simulation for online purposes, this thesis presents principles for designing numerical solvers of the differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU).
We have implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the 13029-bus PJM system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time-domain simulation software for supercomputers. The stiffness-decoupling method combines the advantages of implicit methods (A-stability) and explicit methods (less computation per step). With the new stiffness detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task by scale using the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task along the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events divides the whole simulation along the time axis through the simulated sequence of cascading events. Of all the strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent and therefore minimal communication time is needed.
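HH4 is commonly identified with the two-stage Gauss-Legendre implicit Runge-Kutta method of order 4. Under that assumption, the sketch below applies it to the linear test equation y' = λy, for which the implicit stage equations reduce to a single 2x2 linear solve per step (no Newton iteration needed); the stiff case illustrates the A-stability that the thesis relies on for large steps.

```python
import numpy as np

# Butcher tableau of the two-stage Gauss-Legendre method (order 4),
# the scheme usually credited to Hammer and Hollingsworth.
s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6.0],
              [0.25 + s3 / 6.0, 0.25]])
b = np.array([0.5, 0.5])

def gauss2_linear(lam, y0, h, steps):
    """Integrate y' = lam*y with the 2-stage Gauss IRK scheme.  For a
    linear right-hand side the implicit stage equations reduce to one
    2x2 linear solve per step, so no Newton iteration is required."""
    y = y0
    eye = np.eye(2)
    for _ in range(steps):
        # Stage slopes k satisfy (I - h*lam*A) k = lam*y*[1, 1].
        k = np.linalg.solve(eye - h * lam * A, lam * y * np.ones(2))
        y = y + h * (b @ k)
    return y

y_num = gauss2_linear(lam=-1.0, y0=1.0, h=0.1, steps=10)       # smooth case
y_stiff = gauss2_linear(lam=-1000.0, y0=1.0, h=0.1, steps=50)  # stiff case
```

The order-4 accuracy shows up as a sub-1e-6 error against exp(-1) over ten steps of size 0.1, and A-stability keeps the stiff solution bounded and decaying even though h·|λ| = 100, which is exactly why such methods permit the large steps the thesis targets.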
The effect of gas dynamics on semi-analytic modelling of cluster galaxies
NASA Astrophysics Data System (ADS)
Saro, A.; De Lucia, G.; Dolag, K.; Borgani, S.
2008-12-01
We study the degree to which non-radiative gas dynamics affect the merger histories of haloes along with subsequent predictions from a semi-analytic model (SAM) of galaxy formation. To this aim, we use a sample of dark matter only and non-radiative smooth particle hydrodynamics (SPH) simulations of four massive clusters. The presence of gas-dynamical processes (e.g. ram pressure from the hot intra-cluster atmosphere) makes haloes more fragile in the runs which include gas. This results in a 25 per cent decrease in the total number of subhaloes at z = 0. The impact on the galaxy population predicted by SAMs is complicated by the presence of `orphan' galaxies, i.e. galaxies whose parent substructures are reduced below the resolution limit of the simulation. In the model employed in our study, these galaxies survive (unaffected by the tidal stripping process) for a residual merging time that is computed using a variation of the Chandrasekhar formula. Due to ram-pressure stripping, haloes in gas simulations tend to be less massive than their counterparts in the dark matter simulations. The resulting merging times for satellite galaxies are then longer in these simulations. On the other hand, the presence of gas influences the orbits of haloes making them on average more circular and therefore reducing the estimated merging times with respect to the dark matter only simulation. This effect is particularly significant for the most massive satellites and is (at least in part) responsible for the fact that brightest cluster galaxies in runs with gas have stellar masses which are about 25 per cent larger than those obtained from dark matter only simulations. Our results show that gas dynamics has only a marginal impact on the statistical properties of the galaxy population, but that its impact on the orbits and merging times of haloes strongly influences the assembly of the most massive galaxies.
De Paris, Renata; Frantz, Fábio A.; Norberto de Souza, Osmar; Ruiz, Duncan D. A.
2013-01-01
Molecular docking simulations of fully flexible protein receptor (FFR) models are coming of age. In our studies, an FFR model is represented by a series of different conformations derived from a molecular dynamics simulation trajectory of the receptor. For each conformation in the FFR model, a docking simulation is executed and analyzed. An important challenge is to perform virtual screening of millions of ligands using an FFR model in a sequential mode, since this can become computationally very demanding. In this paper, we propose a cloud-based web environment, called web Flexible Receptor Docking Workflow (wFReDoW), which reduces the CPU time of molecular docking simulations of FFR models to small molecules. It is based on a new workflow data pattern called self-adaptive multiple instances (P-SaMI) and on a middleware built on Amazon EC2 instances. P-SaMI reduces the number of molecular docking simulations while the middleware speeds up the docking experiments using a High Performance Computing (HPC) environment on the cloud. The experimental results show a reduction in the total elapsed time of the docking experiments and good quality of the reduced receptor models produced by discarding the non-promising conformations from an FFR model under the P-SaMI data pattern. PMID:23691504
NASA Astrophysics Data System (ADS)
Mechem, David B.; Giangrande, Scott E.
2018-03-01
Controls on precipitation onset and the transition from shallow cumulus to congestus are explored using a suite of 16 large-eddy simulations based on the 25 May 2011 event from the Midlatitude Continental Convective Clouds Experiment (MC3E). The thermodynamic variables in the model are relaxed at various timescales to observationally constrained temperature and moisture profiles in order to better reproduce the observed behavior of precipitation onset and total precipitation. Three of the simulations stand out as best matching the precipitation observations and also perform well for independent comparisons of cloud fraction, precipitation area fraction, and evolution of cloud top occurrence. All three simulations exhibit a destabilization over time, which leads to a transition to deeper clouds, but the evolution of traditional stability metrics by themselves is not able to explain differences in the simulations. Conditionally sampled cloud properties (in particular, mean cloud buoyancy), however, do elicit differences among the simulations. The inability of environmental profiles alone to discern subtle differences among the simulations and the usefulness of conditionally sampled model quantities argue for hybrid observational/modeling approaches. These combined approaches enable a more complete physical understanding of cloud systems by combining observational sampling of time-varying three-dimensional meteorological quantities and cloud properties, along with detailed representation of cloud microphysical and dynamical processes from numerical models.
NASA Astrophysics Data System (ADS)
Carlson, Curtis Ray
New models and simulations of wave growth experienced by electromagnetic waves propagating through the magnetosphere in the whistler mode are presented. The main emphasis is to simulate single-frequency wave pulses, in the 2 to 6 kHz range, that have been injected into the magnetosphere near L ≈ 4. Simulations using a new transient model reproduce exponential wave growth and saturation coincident with a linearly increasing frequency versus time (up to 60 Hz/s). Unique methods for calculating the phase-bunched currents, stimulated radiation, and radiation propagation are based upon test particle trajectories calculated by integrating nonlinear equations of motion generalized to allow the evolution of the frequency and wave number at each point in space. Results show the importance of transient aspects in the wave growth process. The wave growth established as the wave propagates toward the equator is given a spatially advancing wave phase structure by the geomagnetic inhomogeneity. Through the feedback of this radiation upon other electrons, conditions are set up which result in the linearly increasing output frequency with time. The transient simulations also show that features like growth rate and total growth are simply related to the various parameters, such as applied wave intensity, energetic electron flux, and energetic electron distribution.
BIGNASim: a NoSQL database structure and analysis portal for nucleic acids simulation data
Hospital, Adam; Andrio, Pau; Cugnasco, Cesare; Codo, Laia; Becerra, Yolanda; Dans, Pablo D.; Battistini, Federica; Torres, Jordi; Goñi, Ramón; Orozco, Modesto; Gelpí, Josep Ll.
2016-01-01
Molecular dynamics simulation (MD) is, just behind genomics, the bioinformatics tool that generates the largest amount of data and uses the largest amount of CPU time in supercomputing centres. MD trajectories are obtained after months of calculations, analysed in situ, and in practice forgotten. Several projects to create stable trajectory databases have been developed for proteins, but no equivalent exists in the nucleic acids world. We present here a novel database system to store MD trajectories and analyses of nucleic acids. The initial data set available consists mainly of the benchmark of the new molecular dynamics force field, parmBSC1. It contains 156 simulations, with over 120 μs of total simulation time. A deposition protocol is available to accept the submission of new trajectory data. The database is based on the combination of two NoSQL engines: Cassandra for storing trajectories and MongoDB for storing analysis results and simulation metadata. The analyses available include backbone geometries, helical analysis, NMR observables, and a variety of mechanical analyses. Individual trajectories and combined meta-trajectories can be downloaded from the portal. The system is accessible through http://mmb.irbbarcelona.org/BIGNASim/. Supplementary Material is also available on-line at http://mmb.irbbarcelona.org/BIGNASim/SuppMaterial/. PMID:26612862
Quasi-coarse-grained dynamics: modelling of metallic materials at mesoscales
NASA Astrophysics Data System (ADS)
Dongare, Avinash M.
2014-12-01
A computationally efficient modelling method called quasi-coarse-grained dynamics (QCGD) is developed to expand the capabilities of molecular dynamics (MD) simulations to model the behaviour of metallic materials at the mesoscales. This mesoscale method is based on solving the equations of motion for a chosen set of representative atoms from an atomistic microstructure and using scaling relationships for the atomic-scale interatomic potentials in MD simulations to define the interactions between representative atoms. The scaling relationships retain the atomic-scale degrees of freedom, and therefore energetics, of the representative atoms as would be predicted in MD simulations. The total energetics of the system is retained by scaling the energetics and the atomic-scale degrees of freedom of these representative atoms to account for the missing atoms in the microstructure. This scaling of the energetics permits larger time steps in the QCGD simulations. The success of the QCGD method is demonstrated by the prediction of the structural energetics, high-temperature thermodynamics, deformation behaviour of interfaces, phase transformation behaviour, plastic deformation behaviour, heat generation during plastic deformation, as well as the wave propagation behaviour, as would be predicted using MD simulations, for a reduced number of representative atoms. The reduced number of atoms and the larger time steps enable the modelling of metallic materials at the mesoscale in extreme environments.
Bokhari, Ravia; Bollman-McGregor, Jyoti; Kahoi, Kanav; Smith, Marshall; Feinstein, Ara; Ferrara, John
2010-06-01
Assuring quality surgical trainees within the confines of reduced work hours mandates reassessment of educational paradigms. Surgical simulators have been shown to be effective in teaching surgical residents, but their use is limited by cost and time constraints. The Nintendo Wii gaming console is inexpensive and allows natural hand movements similar to those performed in laparoscopy to guide game play. We hypothesize that surgical skills can be improved through take-home simulators adapted from affordable off-the-shelf gaming consoles. A total of 21 surgical residents participated in a prospective, controlled study. An experimental group of 14 surgical residents was assigned to play Marble Mania on the Nintendo Wii using a unique physical controller that interfaces with the WiiMote controller followed by a simulated electrocautery task. Seven residents assigned to the control group performed the electrocautery task without playing the game first. When compared with the control group, the experimental group performed the task with fewer errors and superior movement proficiency (P < 0.05). The experimental group demonstrated increased ambidexterity with improvement in proficiency of the nondominant hand over time. In conclusion, the Nintendo Wii gaming device along with Marble Mania serves as an effective take-home surgical simulator.
Interacting vegetative and thermal contributions to water movement in desert soil
Garcia, C.A.; Andraski, Brian J.; Stonestrom, David A.; Cooper, C.A.; Šimůnek, J.; Wheatcraft, S.W.
2011-01-01
Thermally driven water-vapor flow can be an important component of total water movement in bare soil and in deep unsaturated zones, but this process is often neglected when considering the effects of soil–plant–atmosphere interactions on shallow water movement. The objectives of this study were to evaluate the coupled and separate effects of vegetative and thermal-gradient contributions to soil water movement in desert environments. The evaluation was done by comparing a series of simulations with and without vegetation and thermal forcing during a 4.7-yr period (May 2001–December 2005). For vegetated soil, evapotranspiration alone reduced root-zone (upper 1 m) moisture to a minimum value (25 mm) each year under both isothermal and nonisothermal conditions. Variations in the leaf area index altered the minimum storage values by up to 10 mm. For unvegetated isothermal and nonisothermal simulations, root-zone water storage nearly doubled during the simulation period and created a persistent driving force for downward liquid fluxes below the root zone (total net flux ~1 mm). Total soil water movement during the study period was dominated by thermally driven vapor fluxes. Thermally driven vapor flow and condensation supplemented moisture supplies to plant roots during the driest times of each year. The results show how nonisothermal flow is coupled with plant water uptake, potentially influencing ecohydrologic relations in desert environments.
Supercritical methanol for polyethylene terephthalate depolymerization: Observation using simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genta, Minoru; Iwaya, Tomoko; Sasaki, Mitsuru
2007-07-01
To apply PET depolymerization in supercritical methanol to commercial recycling, the benefits of supercritical methanol in PET depolymerization were investigated from the viewpoint of reaction rate and energy demand. PET was depolymerized in a batch reactor at 573 K in supercritical methanol under 14.7 MPa and in vapor methanol under 0.98 MPa in our previous work. The main products of both reactions were the PET monomers dimethyl terephthalate (DMT) and ethylene glycol (EG). The rate of PET depolymerization in supercritical methanol was faster than that in vapor methanol. This indicates supercritical fluid is beneficial in reducing reaction time without the use of a catalyst. We depicted the simple process flow of PET depolymerization in supercritical methanol and in vapor methanol, and by simulation evaluated the total heat demand of each process. In this simulation, bis-hydroxyethyl terephthalate (BHET) was used as a model component of PET. The total heat demand of PET depolymerization in supercritical methanol was 2.35 × 10^6 kJ/kmol produced DMT. That of PET depolymerization in vapor methanol was 2.84 × 10^6 kJ/kmol produced DMT. The smaller total heat demand of PET depolymerization in supercritical methanol clearly reveals the advantage of using supercritical fluid in terms of energy savings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, J; Coss, D; McMurry, J
Purpose: To evaluate the efficiency of multithreaded Geant4 (Geant4-MT, version 10.0) for proton Monte Carlo dose calculations using a high-performance computing facility. Methods: Geant4-MT was used to calculate 3D dose distributions in 1×1×1 mm3 voxels in a water phantom and a patient's head with a 150 MeV proton beam covering approximately 5×5 cm2 in the water phantom. Three timestamps were measured on the fly to separately analyze the time required for initialization (which cannot be parallelized), the processing time of individual threads, and the completion time. Scalability of the averaged processing time per thread was calculated as a function of thread number (1, 100, 150, and 200) for both 1 M and 50 M histories. The total memory usage was recorded. Results: Simulations with 50 M histories were fastest with 100 threads, taking approximately 1.3 hours and 6 hours for the water phantom and the CT data, respectively, with better than 1.0% statistical uncertainty. The calculations show 1/N scalability in the event loops for both cases. The gains from parallel calculation started to decrease at 150 threads. Memory usage increases linearly with the number of threads. No critical failures were observed during the simulations. Conclusion: Multithreading in Geant4-MT decreased simulation time for proton dose distribution calculations by factors of 64 and 54 at a near-optimal 100 threads for the water phantom and the patient data, respectively. Further simulations will be done to determine the efficiency at the optimal thread number. Considering the trend of computer architecture development, utilizing Geant4-MT for radiotherapy simulations is an excellent cost-effective alternative to a distributed batch queuing system. However, because the scalability depends strongly on simulation details, i.e., the ratio of the processing time of one event to the waiting time to access the shared event queue, a performance evaluation as described here is recommended.
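The pattern of diminishing gains at high thread counts follows from a non-parallelizable initialization plus a 1/N event loop, i.e. Amdahl's law. A minimal sketch, using assumed illustrative times rather than the study's measurements:

```python
def wall_time(n_threads, t_init, t_loop):
    """Total wall time: serial initialization plus an event loop that scales as 1/N."""
    return t_init + t_loop / n_threads

# assumed split of single-thread time (minutes); not the paper's numbers
t_init, t_loop = 2.0, 98.0
speedups = {n: wall_time(1, t_init, t_loop) / wall_time(n, t_init, t_loop)
            for n in (1, 50, 100, 150, 200)}
print(speedups)  # speedup keeps rising, but each doubling of threads buys less
```

With these assumed values the serial fraction caps the achievable speedup at t_total/t_init = 50 regardless of thread count, which is the same qualitative ceiling the abstract reports near 100-150 threads.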
SU-G-IeP4-12: Performance of In-111 Coincident Gamma-Ray Counting: A Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pahlka, R; Kappadath, S; Mawlawi, O
2016-06-15
Purpose: The decay of In-111 results in a non-isotropic gamma-ray cascade, which is normally imaged using a gamma camera. Creating images with a gamma camera using coincident gamma-rays from In-111 has not been previously studied. Our objective was to explore the feasibility of imaging this cascade as coincidence events and to determine the optimal timing resolution and source activity using Monte Carlo simulations. Methods: GEANT4 was used to simulate the decay of the In-111 nucleus and to model the gamma camera. Each photon emission was assigned a timestamp, and the time delay and angular separation of the second gamma-ray in the cascade were consistent with the known intermediate-state half-life of 85 ns. The gamma-rays were transported through a model of a Siemens dual-head Symbia “S” gamma camera with a 5/8-inch thick crystal and medium-energy collimators. A true coincident event was defined as a single 171 keV gamma-ray followed by a single 245 keV gamma-ray within a specified time window (or vice versa). Several source activities (ranging from 10 μCi to 5 mCi) with and without incorporation of background counts were then simulated. Each simulation was analyzed using varying time windows to assess random events. The noise equivalent count rate (NECR) was computed from the number of true and random counts for each combination of activity and time window. No scatter events were assumed since sources were simulated in air. Results: As expected, increasing the timing window increased the total number of observed coincidences, albeit at the expense of true coincidences. A timing window in the range of 200-500 ns maximizes the NECR at clinically used source activities. The background rate did not significantly alter the maximum NECR. Conclusion: This work suggests coincident measurements of In-111 gamma-ray decay can be performed with commercial gamma cameras at clinically relevant activities. Work is ongoing to assess useful clinical applications.
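The timing-window trade-off can be sketched with the standard NECR formula, NECR = T^2/(T + R) (scatter neglected, since sources were in air): the fraction of cascades captured as trues grows with the window according to the 85 ns intermediate-state half-life, while randoms grow linearly with the window. All rates below are assumed illustrative values, not the simulated ones.

```python
import math

T12 = 85.0                      # intermediate-state half-life, ns
tau_mean = T12 / math.log(2)    # mean life of the intermediate state, ~122.6 ns

def trues_rate(tau_ns, t_max):
    # fraction of cascades whose second gamma-ray falls inside the window
    return t_max * (1.0 - math.exp(-tau_ns / tau_mean))

def randoms_rate(tau_ns, singles1, singles2):
    # standard randoms estimate for a coincidence window of width tau
    return 2.0 * tau_ns * 1e-9 * singles1 * singles2

def necr(trues, randoms):
    return trues**2 / (trues + randoms)

t_max, s1, s2 = 1e3, 5e4, 5e4   # assumed trues ceiling and singles rates (counts/s)
curve = {tau: necr(trues_rate(tau, t_max), randoms_rate(tau, s1, s2))
         for tau in (50, 100, 200, 500, 1000, 2000)}
print(curve)
```

With these assumptions the NECR peaks for windows of a few hundred nanoseconds, consistent with the 200-500 ns optimum reported above: shorter windows miss delayed cascade photons, longer ones mostly add randoms.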
Disks around merging binary black holes: From GW150914 to supermassive black holes
NASA Astrophysics Data System (ADS)
Khan, Abid; Paschalidis, Vasileios; Ruiz, Milton; Shapiro, Stuart L.
2018-02-01
We perform magnetohydrodynamic simulations in full general relativity of disk accretion onto nonspinning black hole binaries with mass ratio q = 29/36. We survey different disk models which differ in their scale height, total size, and magnetic field to quantify the robustness of previous simulations on the initial disk model. Scaling our simulations to LIGO GW150914, we find that such systems could explain possible gravitational wave and electromagnetic counterparts such as the Fermi GBM hard X-ray signal reported 0.4 s after GW150914 ended. Scaling our simulations to supermassive binary black holes, we find that observable flow properties such as accretion rate periodicities, the emergence of jets throughout inspiral, merger, and postmerger, disk temperatures, thermal frequencies, and the time delay between merger and the boost in jet outflows that we reported in earlier studies display only modest dependence on the initial disk model we consider here.
Validation of X1 motorcycle model in industrial plant layout by using WITNESS™ simulation software
NASA Astrophysics Data System (ADS)
Hamzas, M. F. M. A.; Bareduan, S. A.; Zakaria, M. Z.; Tan, W. J.; Zairi, S.
2017-09-01
This paper demonstrates a case study on simulation, modelling, and analysis of the X1 motorcycle model. In this research, a motorcycle assembly plant was selected as the main place of the research study. Simulation techniques using the WITNESS software were applied to evaluate the performance of the existing manufacturing system. The main objective is to validate the data and find out their significant impact on the overall performance of the system for future improvement. The validation process started with identification of the assembly line layout. All components were evaluated to validate whether the data are significant for future improvement. Machine and labor statistics are among the parameters that were evaluated for process improvement. Average total cycle time for the given workstations is used as the criterion for comparison of possible variants. From the simulation process, the data used are appropriate and meet the criteria for two-sided assembly line problems.
The making of the mechanical universe
NASA Technical Reports Server (NTRS)
Blinn, James
1989-01-01
The Mechanical Universe project required the production of over 550 different animated scenes, totaling about 7.5 hours of screen time. The project required a wide range of techniques and motivated the development of several different software packages. Many aspects of the project are documented, encompassing artistic design issues, scientific simulations, software engineering, and video engineering.
Phosphorus and nitrate nitrogen in runoff following fertilizer application to turfgrass.
Shuman, L M
2002-01-01
Intensively managed golf courses are perceived by the public as possibly adding nutrients to surface waters via surface transport. An experiment was designed to determine the transport of nitrate N and phosphate P from simulated golf course fairways of 'Tifway' bermudagrass [Cynodon dactylon (L.) Pers.]. Fertilizer treatments were 10-10-10 granular at three rates and rainfall events were simulated at four intervals after treatment (hours after treatment, HAT). Runoff volume was directly related to simulated rainfall amounts and soil moisture at the time of the event and varied from 24.3 to 43.5% of that added for the 50-mm events and 3.1 to 27.4% for the 25-mm events. The highest concentration and mass of phosphorus in runoff was during the first simulated rainfall event at 4 HAT with a dramatic decrease at 24 HAT and subsequent events. Nitrate N concentrations were low in the runoff water (approximately 0.5 mg L-1) for the first three runoff events and highest (approximately 1-1.5 mg L-1) at 168 HAT due to the time elapsed for conversion of ammonia to nitrate. Nitrate N mass was highest at the 4 and 24 HAT events and stepwise increases with rate were evident at 24 HAT. Total P transported for all events was 15.6 and 13.8% of that added for the two non-zero rates, respectively. Total nitrate N transported was 1.5 and 0.9% of that added for the two rates, respectively. Results indicate that turfgrass management should include applying minimum amounts of irrigation after fertilizer application and avoiding application before intense rain or when soil is very moist.
Sea-ice deformation in a coupled ocean-sea-ice model and in satellite remote sensing data
NASA Astrophysics Data System (ADS)
Spreen, Gunnar; Kwok, Ron; Menemenlis, Dimitris; Nguyen, An T.
2017-07-01
A realistic representation of sea-ice deformation in models is important for accurate simulation of the sea-ice mass balance. Simulated sea-ice deformation from numerical simulations with 4.5, 9, and 18 km horizontal grid spacing and a viscous-plastic (VP) sea-ice rheology are compared with synthetic aperture radar (SAR) satellite observations (RGPS, RADARSAT Geophysical Processor System) for the time period 1996-2008. All three simulations can reproduce the large-scale ice deformation patterns, but small-scale sea-ice deformations and linear kinematic features (LKFs) are not adequately reproduced. The mean sea-ice total deformation rate is about 40 % lower in all model solutions than in the satellite observations, especially in the seasonal sea-ice zone. A decrease in model grid spacing, however, produces a higher density and more localized ice deformation features. The 4.5 km simulation produces some linear kinematic features, but not with the right frequency. The dependence on length scale and probability density functions (PDFs) of absolute divergence and shear for all three model solutions show a power-law scaling behavior similar to RGPS observations, contrary to what was found in some previous studies. Overall, the 4.5 km simulation produces the most realistic divergence, vorticity, and shear when compared with RGPS data. This study provides an evaluation of high- and coarse-resolution viscous-plastic sea-ice simulations based on spatial distribution, time series, and power-law scaling metrics.
Daidone, Isabella; Amadei, Andrea; Di Nola, Alfredo
2005-05-15
The folding of the amyloidogenic H1 peptide MKHMAGAAAAGAVV, taken from the Syrian hamster prion protein, is explored in explicit aqueous solution at 300 K using long time scale all-atom molecular dynamics simulations for a total simulation time of 1.1 μs. The system, initially modeled as an alpha-helix, preferentially adopts a beta-hairpin structure, and several unfolding/refolding events are observed, yielding a very short average beta-hairpin folding time of approximately 200 ns. The long time scale accessed by our simulations and the reversibility of the folding allow us to properly explore the configurational space of the peptide in solution. The free energy profile, as a function of the principal components (essential eigenvectors) of motion describing the main conformational transitions, shows the characteristic features of a funneled landscape, with a downhill surface toward the beta-hairpin folded basin. However, analysis of the peptide's thermodynamic stability reveals that the beta-hairpin in solution is rather unstable. These results are in good agreement with several experimental observations, according to which the isolated H1 peptide very rapidly adopts beta-sheet structure in water, leading to amyloid fibril precipitates [Nguyen et al., Biochemistry 1995;34:4186-4192; Inouye et al., J Struct Biol 1998;122:247-255]. Moreover, in this article we also characterize the diffusion behavior in conformational space, investigating its relations with folding/unfolding conditions. Copyright 2005 Wiley-Liss, Inc.
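A free energy profile along an essential eigenvector of the kind described above is obtained from the sampled probability density via F(q) = -kT ln P(q). A minimal sketch with synthetic two-state data standing in for the projected MD trajectory (the basin positions and populations are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 2.494   # kJ/mol at 300 K (R*T)

# assumed two-state sampling along the first essential eigenvector:
# a dominant folded basin near q = -1 and a minor basin near q = +1
q = np.concatenate([rng.normal(-1.0, 0.3, 80000),
                    rng.normal(1.0, 0.4, 20000)])

hist, edges = np.histogram(q, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
F = -kT * np.log(hist[mask])   # Boltzmann inversion of the sampled density
F -= F.min()                   # set the global minimum to zero

q_star = centers[mask][np.argmin(F)]
print(q_star)  # the global free energy minimum sits in the more populated basin
```

The reversibility of the folding is what makes this estimate meaningful: only when the trajectory crosses between basins many times does the histogram approximate the equilibrium density.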
Fast CT-PRESS-based spiral chemical shift imaging at 3 Tesla.
Mayer, Dirk; Kim, Dong-Hyun; Adalsteinsson, Elfar; Spielman, Daniel M
2006-05-01
A new sequence is presented that combines constant-time point-resolved spectroscopy (CT-PRESS) with fast spiral chemical shift imaging. It allows the acquisition of multivoxel spectra without line splitting with a minimum total measurement time of less than 5 min for a field of view of 24 cm and a nominal 1.5 × 1.5 cm² in-plane resolution. Measurements were performed with 17 CS encoding steps in t1 (Δt1 = 12.8 ms) and an average echo time of 151 ms, which was determined by simulating the CT-PRESS experiment for the spin systems of glutamate (Glu) and myo-inositol (mI). Signals from N-acetyl-aspartate, total creatine, choline-containing compounds (Cho), Glu, and mI were detected in a healthy volunteer with no or only minor baseline distortions within 14 min on a 3 T MR scanner. Copyright (c) 2006 Wiley-Liss, Inc.
Implicit Learning of a Finger Motor Sequence by Patients with Cerebral Palsy After Neurofeedback.
Alves-Pinto, Ana; Turova, Varvara; Blumenstein, Tobias; Hantuschke, Conny; Lampe, Renée
2017-03-01
Facilitation of implicit learning of a hand motor sequence after a single session of neurofeedback training of alpha power recorded from the motor cortex has been shown in healthy individuals (Ros et al., Biological Psychology 95:54-58, 2014). This facilitation effect could potentially be applied to improve the outcome of rehabilitation in patients with impaired hand motor function. In the current study a group of ten patients diagnosed with cerebral palsy trained to reduce alpha power derived from brain activity recorded from right and left motor areas. Training was distributed over three periods of 8 min each. In between, participants performed a serial reaction time task with their non-dominant hand, for a total of five runs. A similar procedure was repeated a week or more later, but this time training was based on simulated brain activity. Reaction times pooled across participants decreased faster on each successive run after neurofeedback training than after the simulation training. Also recorded were two 3-min baseline conditions, one with the eyes open and another with the eyes closed, at the beginning and end of the experimental session. No significant changes in alpha power with neurofeedback or with simulation training were obtained, and no correlation with the reductions in reaction time could be established. Factors that may contribute to this are discussed.
NASA Technical Reports Server (NTRS)
Lafuse, Sharon A.
1991-01-01
The paper describes the Shuttle Leak Management Expert System (SLMES), a preprototype expert system developed to enable the ECLSS subsystem manager to analyze subsystem anomalies and to formulate flight procedures based on flight data. SLMES combines rule-based expert system technology with traditional FORTRAN-based software in an integrated system. SLMES analyzes the data using rules and, when it detects a problem that requires simulation, sets up the input for the FORTRAN-based simulation program ARPCS2AT2, which predicts the cabin total pressure and composition as a function of time. The program simulates the pressure control system, the crew oxygen masks, the airlock repress/depress valves, and the leakage. When the simulation has completed, other SLMES rules are triggered to compare the simulation results with the flight data and to suggest methods for correcting the problem. Results are then presented in the form of graphs and tables.
Console, Rodolfo; Nardi, Anna; Carluccio, Roberto; Murru, Maura; Falcone, Giuseppe; Parsons, Thomas E.
2017-01-01
The use of a newly developed earthquake simulator has allowed the production of catalogs lasting 100 kyr and containing more than 100,000 events of magnitude ≥4.5. The model of the fault system to which we applied the simulator code was obtained from the DISS 3.2.0 database, selecting all the faults recognized in the Calabria region, for a total of 22 fault segments. The application of our simulation algorithm provides typical features of the time, space, and magnitude behavior of the seismicity, which can be compared with those of real observations. The results of the physics-based simulator algorithm were compared with those obtained by an alternative method using a slip-rate-balanced technique. Finally, as an example of a possible use of synthetic catalogs, an attenuation law was applied to all the events in the synthetic catalog to produce maps showing the exceedance probability of given values of PGA over the territory under investigation.
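The last step, turning a long synthetic catalog into a PGA exceedance probability at a site, can be sketched as follows. The attenuation coefficients, catalog entries, and threshold below are illustrative assumptions, not the study's values; the exceedance probability follows from the annual rate of threshold-exceeding events under a Poisson assumption.

```python
import math

def pga_g(magnitude, dist_km):
    # toy attenuation law (assumed coefficients):
    # log10(PGA) = 0.5*M - log10(R + 10) - 2.5
    return 10 ** (0.5 * magnitude - math.log10(dist_km + 10.0) - 2.5)

# assumed synthetic catalog entries: (magnitude, distance to site in km)
catalog = [(4.6, 30), (5.2, 12), (6.1, 45), (6.8, 20), (5.7, 8)]
years = 100_000.0      # catalog duration, matching the 100 kyr scale above
threshold = 0.1        # PGA threshold in g

# annual rate of events whose site PGA exceeds the threshold
rate = sum(pga_g(m, r) > threshold for m, r in catalog) / years

# probability of at least one exceedance in a 50-year window (Poisson)
p_50yr = 1.0 - math.exp(-rate * 50.0)
print(rate, p_50yr)
```

Repeating this for every grid cell of the territory yields exactly the kind of exceedance probability map the abstract describes; the long catalog is what makes the rate estimate stable even for rare large events.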
Traffic Congestion Detection System through Connected Vehicles and Big Data
Cárdenas-Benítez, Néstor; Aquino-Santos, Raúl; Magaña-Espinoza, Pedro; Aguilar-Velazco, José; Edwards-Block, Arthur; Medina Cass, Aldo
2016-01-01
This article discusses the simulation and evaluation of a traffic congestion detection system which combines inter-vehicular communications, fixed roadside infrastructure, infrastructure-to-infrastructure connectivity, and big data. The system discussed in this article permits drivers to identify traffic congestion and change their routes accordingly, thus reducing total CO2 emissions and decreasing travel time. This system monitors, processes, and stores large amounts of data, which can detect traffic congestion in a precise way by means of a series of algorithms that reduce localized vehicular emissions by rerouting vehicles. To simulate and evaluate the proposed system, a big data cluster was developed based on Cassandra, which was used in tandem with the OMNeT++ discrete-event network simulator, coupled with the SUMO (Simulation of Urban MObility) traffic simulator and the Veins vehicular network framework. The results validate the efficiency of the traffic detection system and its positive impact in detecting, reporting, and rerouting traffic when traffic events occur. PMID:27136548
An Occupational Performance Test Validation Program for Fire Fighters at the Kennedy Space Center
NASA Technical Reports Server (NTRS)
Schonfeld, Brian R.; Doerr, Donald F.; Convertino, Victor A.
1990-01-01
We evaluated performance of a modified Combat Task Test (CTT) and of standard fitness tests in 20 male subjects to assess the prediction of occupational performance standards for Kennedy Space Center fire fighters. The CTT consisted of stair-climbing, a chopping simulation, and a victim rescue simulation. Average CTT performance time was 3.61 +/- 0.25 min (SEM) and all CTT tasks required 93% to 97% of maximal heart rate. Using scores from the standard fitness tests, a multiple linear regression model was fitted to each parameter: the stair-climb (r² = .905, P < .05), the chopping performance time (r² = .582, P < .05), the victim rescue time (r² = .218, P = not significant), and the total performance time (r² = .769, P < .05). Treadmill time was the predominant variable, being the major predictor in two of the four models. These results indicated that standardized fitness tests can predict performance on some CTT tasks and that test predictors were amenable to exercise training.
Cation binding to 15-TBA quadruplex DNA is a multiple-pathway cation-dependent process.
Reshetnikov, Roman V; Sponer, Jiri; Rassokhina, Olga I; Kopylov, Alexei M; Tsvetkov, Philipp O; Makarov, Alexander A; Golovin, Andrey V
2011-12-01
A combination of explicit solvent molecular dynamics simulation (30 simulations reaching 4 µs in total), a hybrid quantum mechanics/molecular mechanics approach and isothermal titration calorimetry was used to investigate the atomistic picture of ion binding to the 15-mer thrombin-binding quadruplex DNA (G-DNA) aptamer. Binding of ions to G-DNA is a complex, multiple-pathway process that is strongly affected by the type of cation. The individual ion-binding events are substantially modulated by the connecting loops of the aptamer, which play several roles. They stabilize the molecule during time periods when the bound ions are not present, they modulate the route of the ion into the stem and they also stabilize the internal ions by closing the gates through which the ions enter the quadruplex. Using our extensive simulations, we observed for the first time the full spontaneous exchange of an internal cation between the quadruplex molecule and bulk solvent at atomistic resolution. The simulation suggests that expulsion of the internally bound ion is correlated with initial binding of the incoming ion. The incoming ion then readily replaces the bound ion while minimizing any destabilization of the solute molecule during the exchange. © The Author(s) 2011. Published by Oxford University Press.
Global Fluxon Modeling of the Solar Corona and Inner Heliosphere
NASA Astrophysics Data System (ADS)
Lamb, D. A.; DeForest, C. E.
2017-12-01
The fluxon approach to MHD modeling enables simulations of low-beta plasmas in the absence of undesirable numerical effects such as diffusion and magnetic reconnection. The magnetic field can be modeled as a collection of discrete field lines ("fluxons") containing a set amount of magnetic flux in a prescribed field topology. Due to the fluxon model's pseudo-Lagrangian grid, simulations can be completed in a fraction of the time of traditional grid-based simulations, enabling near-real-time simulations of the global magnetic field structure and its influence on solar wind properties. Using SDO/HMI synoptic magnetograms as lower magnetic boundary conditions, and a separate one-dimensional fluid flow model attached to each fluxon, we compare the resulting fluxon relaxations with other commonly-used global models (such as PFSS), and with white-light images of the corona (including the August 2017 total solar eclipse). Finally, we show the computed magnetic field expansion ratio, and the modeled solar wind speed near the coronal-heliospheric transition. Development of the fluxon MHD model FLUX (the Field Line Universal relaXer) has been funded by NASA's Living with a Star program and by Southwest Research Institute.
Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F
2012-01-01
Objectives The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Methods Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Results Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=−2.487 (−2.040 to −0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=−2.272 (−0.028 to −0.002). ANOVA reported significant differences across years of experience (0–1, 1–2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. Conclusion It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required. PMID:21304005
Tang, Youhua; Chai, Tianfeng; Pan, Li; Lee, Pius; Tong, Daniel; Kim, Hyun-Cheol; Chen, Weiwei
2015-10-01
We employed an optimal interpolation (OI) method to assimilate AIRNow ozone/PM2.5 and MODIS (Moderate Resolution Imaging Spectroradiometer) aerosol optical depth (AOD) data into the Community Multi-scale Air Quality (CMAQ) model to improve the ozone and total aerosol concentrations of the CMAQ simulation over the contiguous United States (CONUS). AIRNow data assimilation was applied to the boundary layer, and MODIS AOD data were used to adjust total column aerosol. Four OI cases were designed to examine the effects of uncertainty settings and assimilation time; two of these cases used uncertainties that varied in time and location, or "dynamic uncertainties." More frequent assimilation and higher model uncertainties pushed the modeled results closer to the observations. Our comparison over a 24-hr period showed that ozone and PM2.5 mean biases could be reduced from 2.54 ppbV to 1.06 ppbV and from -7.14 µg/m³ to -0.11 µg/m³, respectively, over CONUS, while their correlations were also improved. Comparison to DISCOVER-AQ 2011 aircraft measurements showed that surface ozone assimilation applied to the CMAQ simulation improves regional low-altitude (below 2 km) ozone simulation. This paper describes the application of an optimal interpolation method to improve the model's ozone and PM2.5 estimates using surface measurements and satellite AOD. It highlights the usage of the operational AIRNow data set, which is available in near real time, and the MODIS AOD. With a similar method, we can also use other satellite products, such as the latest VIIRS products, to improve PM2.5 prediction.
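The OI update itself has a compact form: analysis = background + gain × innovation. A minimal 1-D numerical sketch is below; the grid, error variances, and Gaussian background correlation are invented, and the operational CMAQ/AIRNow implementation is far more elaborate.

```python
import numpy as np

def oi_update(xb, obs, obs_idx, sigma_b, sigma_o, length_scale, grid):
    """Optimal interpolation: xa = xb + K (obs - H xb), Gaussian background correlations."""
    H = np.zeros((len(obs), len(xb)))
    H[np.arange(len(obs)), obs_idx] = 1.0                 # point observations
    dists = np.abs(grid[:, None] - grid[None, :])
    B = sigma_b**2 * np.exp(-(dists / length_scale)**2)   # background error covariance
    R = sigma_o**2 * np.eye(len(obs))                     # observation error covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)          # Kalman-style gain
    return xb + K @ (obs - H @ xb)

grid = np.linspace(0.0, 100.0, 51)
xb = np.full(51, 40.0)                                    # background ozone, ppbV
xa = oi_update(xb, obs=np.array([55.0]), obs_idx=np.array([25]),
               sigma_b=4.0, sigma_o=2.0, length_scale=20.0, grid=grid)
```

Note that a larger background error variance `sigma_b` (i.e. higher model uncertainty) enlarges the gain and pulls the analysis toward the observation, consistent with the abstract's finding that higher model uncertainties pushed the modeled results closer to the observations.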
A Global Assessment of Rain-Dissolved Organic Carbon
NASA Astrophysics Data System (ADS)
Safieddine, S.; Heald, C. L.
2017-12-01
Precipitation is the largest physical removal pathway of organic carbon from the atmosphere. The removed carbon is transferred to the land and ocean in the form of dissolved organic carbon (DOC). Limited measurements have hindered efforts to characterize global DOC. In this poster presentation, we show the first simulated global DOC distribution based on a GEOS-Chem model simulation of the atmospheric reactive carbon budget. Over the ocean, simulated DOC concentrations are between 0.1 and 1 mg C L^-1, with a total of 85 Tg C yr^-1 deposited. DOC concentrations are higher inland, ranging between 1 and 10 mg C L^-1, producing a total of 188 Tg C yr^-1 of terrestrial organic wet deposition. We compare the 2010 simulated DOC to a 30-year synthesis of available DOC measurements over different environments. Despite imperfect matching of observational and simulated time intervals, the model is able to reproduce much of the spatial variability of DOC (r = 0.63), with a low bias of 35%. We compare the global average carbon oxidation state (OSc) of both atmospheric and dissolved organic carbon, as a simple metric for describing the chemical composition of organics. In the global atmosphere, reactive organic carbon (ROC) is dominated by hydrocarbons and ketones, and OSc ranges from -1.8 to -0.6. In the dissolved form, formaldehyde, formic acid, and primary and secondary semi-volatile organic aerosol dominate the DOC concentrations. The increase in solubility upon oxidation leads to a global increase in OSc in rainwater, with -0.6 ≤ OSc ≤ 0. This simulation provides new insight into the current model representation of the flow of atmospheric and rain-dissolved organic carbon, and new opportunities to use observations and simulations to understand the DOC reaching land and ocean.
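The carbon oxidation state used above is commonly approximated from elemental ratios as OSc ≈ 2·O/C − H/C; a one-line sketch (the example species are chosen for illustration, not taken from the study's speciation):

```python
def osc(n_c, n_h, n_o):
    """Average carbon oxidation state via the common approximation 2*O/C - H/C."""
    return 2.0 * n_o / n_c - n_h / n_c

# formaldehyde CH2O -> 0; formic acid CH2O2 -> +2; methane CH4 -> -4
```

More-oxidized (higher-OSc) species tend to be more soluble, which is why rainwater in the simulation sits at higher OSc than the bulk atmospheric reactive organic carbon.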
Validation and learning in the Procedicus KSA virtual reality surgical simulator.
Ström, P; Kjellin, A; Hedman, L; Johnson, E; Wredmark, T; Felländer-Tsai, L
2003-02-01
Advanced simulator training within medicine is a rapidly growing field. Virtual reality simulators are being introduced as cost-saving educational tools, which also lead to increased patient safety. Fifteen medical students were included in the study. For 10 medical students, performance was monitored before and after 1 h of training in two endoscopic simulators (the Procedicus KSA with haptic feedback and anatomical graphics and the established MIST simulator without this haptic feedback and graphics). Five medical students performed 50 tests in the Procedicus KSA in order to analyze learning curves. One of these five medical students performed multiple training sessions during 2 weeks and performed more than 300 tests. There was a significant improvement after 1 h of training regarding time, movement economy, and total score. The results in the two simulators were highly correlated. Our results on the use of surgical simulators as a pedagogical tool in medical student training are encouraging: learning curves were rapid, and we suggest introducing endoscopic simulator training in undergraduate medical education during the course in surgery, when motivation is high and before the development of "negative stereotypes" and incorrect practices.
Bambini, Deborah; Emery, Matthew; de Voest, Margaret; Meny, Lisa; Shoemaker, Michael J.
2016-01-01
There are significant limitations among the few prior studies that have examined the development and implementation of interprofessional education (IPE) experiences to accommodate a high volume of students from several disciplines and from different institutions. The present study addressed these gaps by seeking to determine the extent to which a single, large, inter-institutional, and IPE simulation event improves student perceptions of the importance and relevance of IPE and simulation as a learning modality, whether there is a difference in students’ perceptions among disciplines, and whether the results are reproducible. A total of 290 medical, nursing, pharmacy, and physical therapy students participated in one of two large, inter-institutional, IPE simulation events. Measurements included student perceptions about their simulation experience using the Attitude Towards Teamwork in Training Undergoing Designed Educational Simulation (ATTITUDES) Questionnaire and open-ended questions related to teamwork and communication. Results demonstrated a statistically significant improvement across all ATTITUDES subscales, while time management, role confusion, collaboration, and mutual support emerged as significant themes. Results of the present study indicate that a single IPE simulation event can reproducibly result in significant and educationally meaningful improvements in student perceptions towards teamwork, IPE, and simulation as a learning modality. PMID:28970407
Rheological Characterization of Unusual DWPF Slurry Samples (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koopman, D. C.
2005-09-01
A study was undertaken to identify and clarify examples of unusual rheological behavior in Defense Waste Processing Facility (DWPF) simulant slurry samples. Identification was accomplished by reviewing sludge, Sludge Receipt and Adjustment Tank (SRAT) product, and Slurry Mix Evaporator (SME) product simulant rheological results from the prior year. Clarification of unusual rheological behavior was achieved by developing and implementing new measurement techniques. Development of these new methods is covered in a separate report, WSRC-TR-2004-00334. This report includes a review of recent literature on unusual rheological behavior, followed by a summary of the rheological measurement results obtained on a set of unusual simulant samples. Shifts in rheological behavior of slurries as the wt. % total solids changed have been observed in numerous systems. The main finding of the experimental work was that the various unusual DWPF simulant slurry samples exhibit some degree of time-dependent behavior. When a given shear rate is applied to a sample, the apparent viscosity of the slurry changes with time rather than remaining constant. These unusual simulant samples are more rheologically complex than Newtonian liquids or simpler slurries, neither of which shows significant time dependence. The study concludes that the unusual rheological behavior that has been observed is being caused by time-dependent rheological properties in the slurries being measured. Most of the changes are due to the effect of time under shear, but SB3 SME products were also changing properties while stored in sample bottles. The most likely source of this shear-related time dependence for sludge is in the simulant preparation. More than a single source of time dependence was inferred for the simulant SME product slurries based on the range of phenomena observed.
Rheological property changes were observed on the time scale of a single measurement (minutes) as well as on a time scale of hours to weeks. The unusual shape of the slurry flow curves was not an artifact of the rheometric measurement. Adjusting the user-specified parameters in the rheometer measurement jobs can alter the shape of the flow curve of these time-dependent samples, but this was not causing the unusual behavior. Variations in the measurement parameters caused the time dependence of a given slurry to manifest at different rates. The premise of the controlled shear rate flow curve measurement is that the dynamic response of the sample to a change in shear rate is nearly instantaneous. When this is the case, the data can be fitted to a time-independent rheological equation, such as the Bingham plastic model. In those cases where this does not happen, interpretation of the data is difficult, and fitting time-dependent data to time-independent rheological equations, such as the Bingham plastic model, is not appropriate.
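The Bingham plastic fit referred to above is a two-parameter linear model, τ = τ0 + μp·γ̇; a least-squares sketch on invented flow-curve data follows. The fit's premise is exactly the one the report shows fails for these slurries: that the stress at each shear rate is time independent.

```python
import numpy as np

# Invented flow-curve data: shear rate (1/s) vs. shear stress (Pa).
gamma_dot = np.array([10.0, 50.0, 100.0, 200.0, 400.0, 600.0])
tau = np.array([6.1, 7.0, 8.2, 10.5, 15.1, 19.8])

# Bingham plastic model: tau = tau0 + mu_p * gamma_dot (linear in its parameters),
# so an ordinary least-squares fit suffices.
A = np.vstack([np.ones_like(gamma_dot), gamma_dot]).T
(tau0, mu_p), *_ = np.linalg.lstsq(A, tau, rcond=None)
# tau0: Bingham yield stress (Pa); mu_p: plastic viscosity (Pa·s)
```

For a time-dependent sample, the fitted `tau0` and `mu_p` would drift with hold time and shear history, which is why the report treats such fits as inappropriate there.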
Helicity conservation under quantum reconnection of vortex rings.
Zuccher, Simone; Ricca, Renzo L
2015-12-01
Here we show that under quantum reconnection, simulated by using the three-dimensional Gross-Pitaevskii equation, self-helicity of a system of two interacting vortex rings remains conserved. By resolving the fine structure of the vortex cores, we demonstrate that the total length of the vortex system reaches a maximum at the reconnection time, while both writhe helicity and twist helicity remain separately unchanged throughout the process. Self-helicity is computed by two independent methods, and topological information is based on the extraction and analysis of geometric quantities such as writhe, total torsion, and intrinsic twist of the reconnecting vortex rings.
A method for modelling peak signal statistics on a mobile satellite transponder
NASA Technical Reports Server (NTRS)
Bilodeau, Andre; Lecours, Michel; Pelletier, Marcel; Delisle, Gilles Y.
1990-01-01
A simulation method is proposed to model the duration and energy content of signal peaks in a mobile communication satellite operating in a Frequency Division Multiple Access (FDMA) mode. The method estimates those power peaks for a system whose channels are modeled as band-limited Gaussian noise, which is taken as a reasonable representation of Amplitude Companded Single Sideband (ACSSB), Minimum Shift Keying (MSK), or Phase Shift Keying (PSK) modulated signals. The simulation results show that, under this hypothesis, the signal power levels exceeded for 10 percent, 1 percent, and 0.1 percent of the time are well described by a Rayleigh law and that the peak durations are extremely short and inversely proportional to the total FDM system bandwidth.
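The Rayleigh behavior of the peak levels can be reproduced with a toy numpy experiment: the envelope of a band-limited Gaussian process is Rayleigh distributed, so the level exceeded a fraction p of the time is σ·sqrt(−2 ln p). The band edges and sample counts below are illustrative, not the paper's system parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2**18

# Band-limited Gaussian noise: keep only a band of FFT coefficients
# (a stand-in for the aggregate FDM signal).
spec = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n)
spec[(freqs < 0.05) | (freqs > 0.15)] = 0.0
x = np.fft.irfft(spec, n)

# Analytic signal via FFT (a numpy-only Hilbert transform); its magnitude
# is the signal envelope.
X = np.fft.fft(x)
X[n // 2 + 1:] = 0.0
X[1:n // 2] *= 2.0
env = np.abs(np.fft.ifft(X))

# Rayleigh prediction: the level exceeded a fraction p of the time.
sigma = x.std()
for p in (0.10, 0.01, 0.001):
    empirical = np.quantile(env, 1 - p)
    predicted = sigma * np.sqrt(-2.0 * np.log(p))
```

For the 10%, 1%, and 0.1% levels quoted in the abstract, `empirical` and `predicted` agree to within a few percent in this experiment, illustrating the Rayleigh law.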
A Coulomb collision algorithm for weighted particle simulations
NASA Technical Reports Server (NTRS)
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing an equal fraction of the total particle density). If that algorithm is applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect value compared with theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales compared with theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
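A sketch of the weighted-pairing idea, in the spirit of the Takizuka-Abe scheme (equal masses assumed; the rejection rule shown is one common choice, not necessarily the authors' exact algorithm): the relative velocity of a pair is rotated by the scattering angles, the lighter-weighted particle always takes its post-collision velocity, and the heavier-weighted one does so with probability w_min/w_max, so momentum and energy are conserved exactly for equal weights and on average otherwise.

```python
import numpy as np

rng = np.random.default_rng(2)

def binary_collide(v1, v2, w1, w2, theta, phi):
    """One weighted binary collision: rotate the relative velocity by (theta, phi)."""
    u = v1 - v2
    umag = np.linalg.norm(u)
    # orthonormal frame perpendicular to u for the scattering rotation
    uperp = np.cross(u, [0.0, 0.0, 1.0])
    if np.linalg.norm(uperp) < 1e-12 * umag:
        uperp = np.cross(u, [0.0, 1.0, 0.0])
    uperp /= np.linalg.norm(uperp)
    u_new = (np.cos(theta) * u
             + np.sin(theta) * umag * (np.cos(phi) * uperp
                                       + np.sin(phi) * np.cross(u / umag, uperp)))
    dv = (u_new - u) / 2.0            # equal masses assumed for simplicity
    # rejection step: heavier-weighted particle updates with probability w_min/w_max
    p = min(w1, w2)
    v1n = v1 + dv if rng.random() < p / w1 else v1
    v2n = v2 - dv if rng.random() < p / w2 else v2
    return v1n, v2n
```

Because the rotation preserves |u|, the equal-weight case conserves both momentum and kinetic energy exactly; for unequal weights the conservation holds only in expectation, which is the trade-off such schemes accept.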
Formation of stellar clusters in magnetized, filamentary infrared dark clouds
NASA Astrophysics Data System (ADS)
Li, Pak Shing; Klein, Richard I.; McKee, Christopher F.
2018-01-01
Star formation in a filamentary infrared dark cloud (IRDC) is simulated over the dynamic range of 4.2 pc to 28 au for a period of 3.5 × 10^5 yr, including magnetic fields and both radiative and outflow feedback from the protostars. At the end of the simulation, the star formation efficiency is 4.3 per cent and the star formation rate per free-fall time is εff ≃ 0.04, within the range of observed values. The total stellar mass increases as ∼t^2, whereas the number of protostars increases as ∼t^1.5. We find that the density profile around most of the simulated protostars is ρ ∝ r^-1.5. At the end of the simulation, the protostellar mass function approaches the Chabrier stellar initial mass function. We infer that the time to form a star of median mass 0.2 M⊙ is about 1.4 × 10^5 yr from the median mass accretion rate. We find good agreement among the protostellar luminosities observed in the large sample of Dunham et al., our simulation and a theoretical estimate, and we conclude that the classical protostellar luminosity problem is resolved. The multiplicity of the stellar systems in the simulation agrees, to within a factor of 2, with observations of Class I young stellar objects; most of the simulated multiple systems are unbound. Bipolar protostellar outflows are launched using a subgrid model, and extend up to 1 pc from their host star. The mass-velocity relation of the simulated outflows is consistent with both observation and theory.
Romkema, Sietske; Bongers, Raoul M; van der Sluis, Corry K
2013-01-01
Intermanual transfer may improve prosthetic handling and acceptance if used in training soon after an amputation. The purpose of this study was to determine whether intermanual transfer effects can be detected after training with a myoelectric upper-limb prosthesis simulator. A mechanistic, randomized, pretest-posttest design was used. A total of 48 right-handed participants (25 women, 23 men) who were able-bodied were randomly assigned to an experimental group or a control group. The experimental group performed a training program of 5 days' duration using the prosthesis simulator. To determine the improvement in skill, a test was administered before, immediately after, and 6 days after training. The control group only performed the tests. Training was performed with the unaffected arm, and tests were performed with the affected arm (the affected arm simulating an amputated limb). Half of the participants were tested with the dominant arm and half with the nondominant arm. Initiation time was defined as the time from starting signal until start of the movement, movement time was defined as the time from the beginning of the movement until completion of the task, and force control was defined as the maximal applied force on a deformable object. The movement time decreased significantly more in the experimental group (F₂,₉₂=7.42, P=.001, η²(G)=.028) when compared with the control group. This finding is indicative of faster handling of the prosthesis. No statistically significant differences were found between groups with regard to initiation time and force control. We did not find a difference in intermanual transfer between the dominant and nondominant arms. The training utilized participants who were able-bodied in a laboratory setting and focused only on transradial amputations. Intermanual transfer was present in the affected arm after training the unaffected arm with a myoelectric prosthesis simulator, and this effect did not depend on laterality. 
This effect may improve rehabilitation of patients with an upper-limb amputation.
Thain, Peter K.; Bleakley, Christopher M.; Mitchell, Andrew C. S.
2015-01-01
Context Cryotherapy is used widely in sport and exercise medicine to manage acute injuries and facilitate rehabilitation. The analgesic effects of cryotherapy are well established; however, a potential caveat is that cooling tissue negatively affects neuromuscular control through delayed muscle reaction time. This topic is important to investigate because athletes often return to exercise, rehabilitation, or competitive activity immediately or shortly after cryotherapy. Objective To compare the effects of wet-ice application, cold-water immersion, and an untreated control condition on peroneus longus and tibialis anterior muscle reaction time during a simulated lateral ankle sprain. Design Randomized controlled clinical trial. Setting University of Hertfordshire human performance laboratory. Patients or Other Participants A total of 54 physically active individuals (age = 20.1 ± 1.5 years, height = 1.7 ± 0.07 m, mass = 66.7 ± 5.4 kg) who had no injury or history of ankle sprain. Intervention(s) Wet-ice application, cold-water immersion, or an untreated control condition applied to the ankle for 10 minutes. Main Outcome Measure(s) Muscle reaction time and muscle amplitude of the peroneus longus and tibialis anterior in response to a simulated lateral ankle sprain were calculated. The ankle-sprain simulation incorporated a combined inversion and plantar-flexion movement. Results We observed no change in muscle reaction time or muscle amplitude after cryotherapy for either the peroneus longus or tibialis anterior (P > .05). Conclusions Ten minutes of joint cooling did not adversely affect muscle reaction time or muscle amplitude in response to a simulated lateral ankle sprain. These findings suggested that athletes can safely return to sporting activity immediately after icing. Further evidence showed that ice can be applied before ankle rehabilitation without adversely affecting dynamic neuromuscular control. 
Investigation in patients with acute ankle sprains is warranted to assess the clinical applicability of these interventions. PMID:26067429
Axial to transverse energy mixing dynamics in octupole-based magnetostatic antihydrogen traps
NASA Astrophysics Data System (ADS)
Zhong, M.; Fajans, J.; Zukor, A. F.
2018-05-01
The nature of the trajectories of antihydrogen atoms confined in an octupole minimum-B trap is of great importance for upcoming spectroscopy, cooling, and gravity experiments. Of particular interest is the mixing time between the axial and transverse energies for the antiatoms. Here, using computer simulations, we establish that almost all trajectories are chaotic, and then quantify the characteristic mixing time between the axial and transverse energies. We find that there are two classes of trajectories: for trajectories whose axial energy is higher than about 20% of the total energy, the axial energy substantially mixes within about 10 s, whereas for trajectories whose axial energy is lower than about 10% of the total energy, the axial energy remains nearly constant for 1000 s or longer.
Biochemical, physical and tactical analysis of a simulated game in young soccer players.
Aquino, Rodrigo L; Gonçalves, Luiz G; Vieira, Luiz H; Oliveira, Lucas P; Alves, Guilherme F; Santiago, Paulo R; Puggina, Enrico F
2016-12-01
The objectives of this study were to describe and compare the displacement patterns and the tactical performance of the players from the first to the second game period and to verify possible associations between indirect markers of muscle damage and displacement patterns in a simulated game played by young soccer players. Eighteen young soccer players were submitted to a simulated game and two blood collections, one before and another 30 minutes post-game, to analyze the behavior of the creatine kinase and lactate dehydrogenase enzymes. The displacement patterns and tactical variables were obtained through functions developed in the MATLAB environment (MathWorks, Inc., Natick, MA, USA). A significant increase was observed in average speed (P=0.05), number of sprints (P<0.001), the percentage of the total distance covered at high intensity (P<0.001) and the tactical variables (team surface area: P=0.002; spreading: P=0.001) in the second period of the simulated game. In addition, there was a significant reduction in the percentage of the total distance covered at low intensity (P≤0.05) in the second period, and there was a strong association between the percentage delta change of creatine kinase and lactate dehydrogenase and the displacement patterns in the simulated game. The results show that indirect markers of muscle damage are strongly associated with displacement patterns in a game played under training conditions by young soccer players, highlighting a need to reconsider post-training recovery strategies and contributing to better planning of sessions throughout the macrocycle.
Variability of Thermosphere and Ionosphere Responses to Solar Flares
NASA Technical Reports Server (NTRS)
Qian, Liying; Burns, Alan G.; Chamberlin, Philip C.; Solomon, Stanley C.
2011-01-01
We investigated how the rise rate and decay rate of solar flares affect the thermosphere and ionosphere responses to them. Model simulations and data analysis were conducted for two flares of similar magnitude (X6.2 and X5.4) that had the same location on the solar limb, but the X6.2 flare had longer rise and decay times. Simulated total electron content (TEC) enhancements from the X6.2 and X5.4 flares were 6 total electron content units (TECU) and approximately 2 TECU, and the simulated neutral density enhancements were approximately 15%-20% and approximately 5%, respectively, in reasonable agreement with observations. Additional model simulations showed that for idealized flares with the same magnitude and location, the thermosphere and ionosphere responses changed significantly as a function of rise and decay rates. The Neupert Effect, which predicts that a faster flare rise rate leads to a larger EUV enhancement during the impulsive phase, caused a larger maximum ion production enhancement. In addition, model simulations showed that increased E × B plasma transport due to conductivity increases during the flares caused a significant equatorial anomaly feature in the electron density enhancement in the F region but a relatively weaker equatorial anomaly feature in TEC enhancement, owing to dominant contributions by photochemical production and loss processes. The latitude dependence of the thermosphere response correlated well with the solar zenith angle effect, whereas the latitude dependence of the ionosphere response was more complex, owing to plasma transport and the winter anomaly.
NASA Astrophysics Data System (ADS)
Thingbijam, Kiran Kumar; Galis, Martin; Vyas, Jagdish; Mai, P. Martin
2017-04-01
We examine the spatial interdependence between kinematic parameters of earthquake rupture, which include slip, rise-time (total duration of slip), acceleration time (time-to-peak slip velocity), peak slip velocity, and rupture velocity. These parameters were inferred from dynamic rupture models obtained by simulating spontaneous rupture on faults with varying degrees of surface roughness. We observe that the relationships between these parameters are better described by non-linear correlations (that is, on a logarithm-logarithm scale) than by linear correlations. Slip and rise-time are positively correlated, while these two parameters do not correlate with acceleration time, peak slip velocity, or rupture velocity. On the other hand, peak slip velocity correlates positively with rupture velocity but negatively with acceleration time. Acceleration time correlates negatively with rupture velocity. However, the observed correlations could be due to the weak heterogeneity of the slip distributions given by the dynamic models. Therefore, if earthquake ruptures involve highly heterogeneous slip distributions, the observed correlations may apply only to those parts of the rupture plane with weak slip heterogeneity. Our findings will help to improve pseudo-dynamic rupture generators for efficient broadband ground-motion simulations for seismic hazard studies.
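The distinction drawn here between linear and logarithm-logarithm correlation can be illustrated with a short sketch. The data below are synthetic (a power-law relation with multiplicative noise standing in for a slip/rise-time pair); nothing here comes from the dynamic rupture models themselves.

```python
# Compare linear vs log-log (non-linear) correlation between two synthetic
# rupture parameters related by a power law with multiplicative noise.
import math
import random

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(1)
# Synthetic "slip" and "rise-time" following rise ~ slip^0.5 with noise:
slip = [random.lognormvariate(0.0, 0.8) for _ in range(500)]
rise = [s ** 0.5 * random.lognormvariate(0.0, 0.2) for s in slip]

r_linear = pearson(slip, rise)
r_loglog = pearson([math.log(s) for s in slip], [math.log(t) for t in rise])
print(f"linear r = {r_linear:.2f}, log-log r = {r_loglog:.2f}")
```

Because the underlying relation is a power law, the correlation on the logarithmic scale captures it more faithfully than the raw linear correlation.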
Effects of time delay and pitch control sensitivity in the flared landing
NASA Technical Reports Server (NTRS)
Berthe, C. J.; Chalk, C. R.; Wingarten, N. C.; Grantham, W.
1986-01-01
Between December 1985 and January 1986, a flared landing program was conducted, using the USAF Total In-Flight Simulator airplane, to examine time delay effects in a formal manner. Results show that as pitch sensitivity is increased, tolerance to time delay decreases. With the proper selection of pitch sensitivity, Level I performance was maintained with time delays ranging from 150 milliseconds to greater than 300 milliseconds. With higher sensitivity, configurations with Level I performance at 150 milliseconds degraded to Level II at 200 milliseconds. When metrics of time delay and pitch sensitivity effects are applied to enhance previously developed predictive criteria, the result is an improved prediction technique which accounts for significant closed-loop items.
Russell, Mark; West, Daniel J; Briggs, Marc A; Bracken, Richard M; Cook, Christian J; Giroud, Thibault; Gill, Nicholas; Kilduff, Liam P
2015-01-01
Reduced physical performance has been observed following the half-time period in team sports players, likely due to a decrease in muscle temperature during this period. We examined the effects of a passive heat maintenance strategy employed between successive exercise bouts on core temperature (Tcore) and subsequent exercise performance. Eighteen professional Rugby Union players completed this randomised and counter-balanced study. After a standardised warm-up (WU) and 15 min of rest, players completed a repeated sprint test (RSSA 1) and countermovement jumps (CMJ). Thereafter, in normal training attire (Control) or a survival jacket (Passive), players rested for a further 15 min (simulating a typical half-time) before performing a second RSSA (RSSA 2) and CMJs. Measurements of Tcore were taken at baseline, post-WU, pre-RSSA 1, post-RSSA 1 and pre-RSSA 2. Peak power output (PPO) and repeated sprint ability were assessed before and after the simulated half-time. Similar Tcore responses were observed between conditions at baseline (Control: 37.06±0.05°C; Passive: 37.03±0.05°C) and for all other Tcore measurements taken before half-time. After the simulated half-time, the decline in Tcore was lower (-0.74±0.08% vs. -1.54±0.06%, p<0.001) and PPO was higher (5610±105 W vs. 5440±105 W, p<0.001) in the Passive versus Control condition. The decline in PPO over half-time was related to the decline in Tcore (r = 0.632, p = 0.005). In RSSA 2, best, mean and total sprint times were 1.39±0.17% (p<0.001), 0.55±0.06% (p<0.001) and 0.55±0.06% (p<0.001) faster for Passive versus Control. Passive heat maintenance reduced the decline in Tcore observed during a simulated half-time period and improved subsequent PPO and repeated sprint ability in professional Rugby Union players.
Gupta, Charlotte C; Dorrian, Jill; Grant, Crystal L; Pajcin, Maja; Coates, Alison M; Kennaway, David J; Wittert, Gary A; Heilbronn, Leonie K; Della Vedova, Chris B; Banks, Siobhan
2017-01-01
Shiftworkers have impaired performance when driving at night and they also alter their eating patterns during nightshifts. However, it is unknown whether driving at night is influenced by the timing of eating. This study aims to explore the effects of timing of eating on simulated driving performance across four simulated nightshifts. Healthy, non-shiftworking males aged 18-35 years (n = 10) were allocated to either an eating at night (n = 5) or no eating at night (n = 5) condition. During the simulated nightshifts at 1730, 2030 and 0300 h, participants performed a 40-min driving simulation, 3-min Psychomotor Vigilance Task (PVT-B), and recorded their ratings of sleepiness on a subjective scale. Participants had a 6-h sleep opportunity during the day (1000-1600 h). Total 24-h food intake was consistent across groups; however, those in the eating at night condition ate a large meal (30% of 24-h intake) during the nightshift at 0130 h. It was found that participants in both conditions experienced increased sleepiness and PVT-B impairments at 0300 h compared to 1730 and 2030 h (p < 0.001). Further, at 0300 h, those in the eating condition displayed a significant decrease in time spent in the safe zone (p < 0.05; percentage of time within 10 km/h of the speed limit and 0.8 m of the centre of the lane) and significant increases in speed variability (p < 0.001), subjective sleepiness (p < 0.01) and number of crashes (p < 0.01) compared to those in the no eating condition. Results suggest that, for optimal performance, shiftworkers should consider restricting food intake during the night.
Yoshida, Kenji; Yokomizo, Akira; Matsuda, Tadashi; Hamasaki, Tsutomu; Kondo, Yukihiro; Yamaguchi, Kunihisa; Kanayama, Hiro-Omi; Wakumoto, Yoshiaki; Horie, Shigeo; Naito, Seiji
2015-09-01
To assess whether our ureteroscopic real-time navigation system can reduce radiation exposure and improve the performance of ureteroscopic maneuvers in surgeons of various ages and experience levels. Our novel ureteroscopic navigation system used a magnetic tracking device to detect the position of the ureteroscope and display it on a three-dimensional image. We recruited 31 urologists from five institutions to perform two tasks. Task 1 consisted of finding three internal markings on the phantom calices. Task 2 consisted of identifying all calices by ureteroscopy. In both tasks, participants performed with simulated fluoroscopy first, followed by our navigation system. Accuracy rates (AR) for identification, required time (T) for completing the task, migration length (ML), and time exposed to simulated fluoroscopy were recorded. The AR, T, and ML for both tasks were significantly better with the navigation system than without it (Task 1 with simulated fluoroscopy vs with navigation: AR 87.1% vs 98.9%, P=0.003; T 355 s vs 191 s, P<0.0001; ML 4627 mm vs 2701 mm, P<0.0001. Task 2: AR 88.2% vs 96.7%, P=0.011; T 394 s vs 333 s, P=0.027; ML 5966 mm vs 5299 mm, P=0.0006). In both tasks, the participants used the simulated fluoroscopy for about 20% of the total task time. Our navigation system, while still under development, could help surgeons of all levels to achieve better performance of ureteroscopic maneuvers compared with fluoroscopic guidance. It also has the potential to reduce radiation exposure during fluoroscopy.
Determination of Protein Surface Hydration by Systematic Charge Mutations
NASA Astrophysics Data System (ADS)
Yang, Jin; Jia, Menghui; Qin, Yangzhong; Wang, Dihao; Pan, Haifeng; Wang, Lijuan; Xu, Jianhua; Zhong, Dongping; Dongping Zhong Collaboration; Jianhua Xu Collaboration
Protein surface hydration is critical to protein structural stability, flexibility, dynamics, and function. Recent observations of surface solvation on picosecond time scales have evoked debate on the origin of such relatively slow motions, from hydration water or from protein charged sidechains, especially with molecular dynamics simulations. Here, we used a unique nuclease with a single tryptophan as a local probe and systematically mutated three neighboring charged residues to differentiate the contributions of hydration water and charged sidechains. By mutating one, two, or all three charged residues in turn, we observed slight increases in the total tryptophan Stokes shifts with fewer neighboring charged residues and found the relaxation patterns to be insensitive to the charged sidechains. The dynamics is correlated with hydration water relaxation, with the slowest time in a dense charged environment and the fastest time at a hydrophobic site. On such picosecond time scales, the protein surface motion is restricted. The total Stokes shifts are dominantly from hydration water relaxation and the slow dynamics is from water-driven relaxation, coupled with local protein fluctuations.
Electric and hybrid electric vehicle study utilizing a time-stepping simulation
NASA Technical Reports Server (NTRS)
Schreiber, Jeffrey G.; Shaltens, Richard K.; Beremand, Donald G.
1992-01-01
The applicability of NASA's advanced power technologies to electric and hybrid vehicles was assessed using a time-stepping computer simulation to model electric and hybrid vehicles operating over the Federal Urban Driving Schedule (FUDS). Both the energy and power demands of the FUDS were taken into account and vehicle economy, range, and performance were addressed simultaneously. Results indicate that a hybrid electric vehicle (HEV) configured with a flywheel buffer energy storage device and a free-piston Stirling convertor fulfills the emissions, fuel economy, range, and performance requirements that would make it acceptable to the consumer. It is noted that an assessment to determine which of the candidate technologies are suited for the HEV application has yet to be made. A proper assessment should take into account the fuel economy and range, along with the driveability and total emissions produced.
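A time-stepping vehicle simulation of the kind described marches through a drive-cycle speed trace and accounts for both the power and energy demands at each step. The sketch below is a minimal stand-in, not NASA's model: the parameter values and the short speed trace are invented, and the FUDS trace itself is not reproduced.

```python
# Minimal time-stepping vehicle energy sketch: march through a drive-cycle
# speed trace one step at a time, compute tractive power each second, and
# integrate total propulsion energy. All values below are illustrative.
MASS = 1500.0   # vehicle mass, kg (assumed)
CD_A = 0.6      # drag coefficient * frontal area, m^2 (assumed)
C_RR = 0.01     # rolling-resistance coefficient (assumed)
RHO = 1.2       # air density, kg/m^3
G = 9.81        # gravitational acceleration, m/s^2
DT = 1.0        # time step, s

def step_power(v0, v1):
    """Tractive power (W) needed to go from speed v0 to v1 in one step."""
    v = 0.5 * (v0 + v1)                 # mean speed over the step
    accel = (v1 - v0) / DT
    f_drag = 0.5 * RHO * CD_A * v * v   # aerodynamic drag
    f_roll = C_RR * MASS * G            # rolling resistance
    f_inertia = MASS * accel            # acceleration force
    return (f_drag + f_roll + f_inertia) * v

# Illustrative trace (m/s): accelerate to 15 m/s, cruise, brake to a stop.
trace = [0, 3, 6, 9, 12, 15, 15, 15, 15, 10, 5, 0]
energy_j = 0.0
for v0, v1 in zip(trace, trace[1:]):
    p = step_power(v0, v1)
    if p > 0:                           # count only propulsion energy
        energy_j += p * DT
print(f"propulsion energy over cycle: {energy_j / 1000:.1f} kJ")
```

A real study of this type would substitute the second-by-second FUDS speed profile and add powertrain efficiency and energy-storage (e.g. flywheel buffer) models on top of this tractive-demand core.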
El-Beheiry, Mostafa; McCreery, Greig; Schlachta, Christopher M
2017-04-01
The objective of this study was to assess the effect of a serious game skills competition on voluntary usage of a laparoscopic simulator within first-year surgical residents' standard simulation curriculum. With research ethics board approval, informed consent was obtained from first-year surgical residents enrolled in an introductory surgical simulation curriculum. The class of 2013 served as a control cohort following the standard curriculum, which mandates completion of six laparoscopic simulator skill tasks. For the 2014 competition cohort, the only change introduced was the biweekly and monthly posting of a leader board of the top three and ten fastest peg transfer times. Entry surveys were administered assessing attitudes towards simulation-based training and competition. Cohorts were observed for 5 months. There were 24 and 25 residents in the control and competition cohorts, respectively. The competition cohort overwhelmingly (76 %) stated that they were not motivated toward deliberate practice by competition. Median total simulator usage time was 132 min (IQR = 214) in the competition cohort compared to 89 min (IQR = 170) in the control cohort. The competition cohort completed their course requirements significantly earlier than the control cohort (χ2 = 6.5, p = 0.01). There was a significantly greater proportion of residents continuing to use the simulator voluntarily after completing their course requirements in the competition cohort (44 vs. 4 %; p = 0.002). Residents in the competition cohort were significantly faster at peg transfer (194 ± 66 vs. 233 ± 53 s, 95 % CI of difference = 4-74 s; p = 0.03) and significantly decreased their completion time by 33 ± 54 s (95 % CI 10-56 s; paired t test, p = 0.007). A simple serious games skills competition increased voluntary usage and performance on a laparoscopic simulator, despite a majority of participants reporting they were not motivated by competition.
Future directions should endeavour to examine other serious gaming modalities to further engage trainees in simulated skills development.
Numerical Propulsion System Simulation: A Common Tool for Aerospace Propulsion Being Developed
NASA Technical Reports Server (NTRS)
Follen, Gregory J.; Naiman, Cynthia G.
2001-01-01
The NASA Glenn Research Center is developing an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). This simulation is initially being used to support aeropropulsion in the analysis and design of aircraft engines. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the Aviation Safety Program and Advanced Space Transportation. NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes using the Common Object Request Broker Architecture (CORBA) in the NPSS Developer's Kit to facilitate collaborative engineering. The NPSS Developer's Kit will provide the tools to develop custom components and to use the CORBA capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities will extend NPSS from a zero-dimensional simulation tool to a multifidelity, multidiscipline system-level simulation tool for the full life cycle of an engine.
NASA Astrophysics Data System (ADS)
Ooi, Seng-Keat
2005-11-01
Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved 3-D large eddy simulations (LES) at Grashof numbers up to 8*10^9. It is found that the 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front speed decrease proportional to t^(-1/3) (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is found to be similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2-D simulations are discussed; in particular, their failure to correctly predict the spatio-temporal distribution of the bed shear stresses, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.
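The two-phase front law reported here (constant speed in the slumping phase, then a decrease proportional to t^(-1/3) in the inviscid phase) can be checked with a simple numerical march: since the front speed scales as t^(-1/3), the front position should grow as t^(2/3) at late times. The constants below are arbitrary, chosen only to keep the speed continuous at the phase transition.

```python
# Integrate the front position under the two-phase front-speed law and
# verify the late-time t^(2/3) growth of front position on a log-log scale.
import math

U_SLUMP = 1.0   # constant slumping-phase front speed (arbitrary units)
T_STAR = 10.0   # time at which the inviscid phase begins (arbitrary)

def front_speed(t):
    if t <= T_STAR:
        return U_SLUMP
    # matched so the speed is continuous at t = T_STAR
    return U_SLUMP * (t / T_STAR) ** (-1.0 / 3.0)

# Simple explicit time march of the front position.
dt = 0.01
x, t = 0.0, 0.0
history = []
while t < 1000.0:
    x += front_speed(t) * dt
    t += dt
    history.append((t, x))

# Deep in the inviscid phase, d(log x)/d(log t) should approach 2/3.
(t1, x1), (t2, x2) = history[-5000], history[-1]
slope = (math.log(x2) - math.log(x1)) / (math.log(t2) - math.log(t1))
print(f"late-time log-log slope: {slope:.3f}")
```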
Radiative Forcing of the Direct Aerosol Effect from AeroCom Phase II Simulations
NASA Technical Reports Server (NTRS)
Myhre, G.; Samset, B. H.; Schulz, M.; Balkanski, Y.; Bauer, S.; Berntsen, T. K.; Bian, H.; Bellouin, N.; Chin, M.; Diehl, T.;
2013-01-01
We report on the AeroCom Phase II direct aerosol effect (DAE) experiment, in which 16 detailed global aerosol models have been used to simulate the changes in the aerosol distribution over the industrial era. All 16 models have estimated the radiative forcing (RF) of the anthropogenic DAE, and have taken into account anthropogenic sulphate, black carbon (BC) and organic aerosols (OA) from fossil fuel, biofuel, and biomass burning emissions. In addition, several models have simulated the DAE of anthropogenic nitrate and anthropogenically influenced secondary organic aerosols (SOA). The model-simulated all-sky RF of the DAE from total anthropogenic aerosols ranges from -0.58 to -0.02 W m^-2, with a mean of -0.27 W m^-2 for the 16 models. Several models did not include nitrate or SOA, and adjusting the estimate to account for these missing components slightly strengthens the mean. Modifying the model estimates for missing aerosol components and for the time period 1750 to 2010 results in a mean RF for the DAE of -0.35 W m^-2. Compared to AeroCom Phase I (Schulz et al., 2006) we find very similar spreads in both total DAE and aerosol component RF. However, the RF of the total DAE is more strongly negative and the RF from BC from fossil fuel and biofuel emissions is more strongly positive in the present study than in the previous AeroCom study. We find a tendency for models having a strong (positive) BC RF to also have a strong (negative) sulphate or OA RF. This relationship leads to smaller uncertainty in the total RF of the DAE compared to the RF of the sum of the individual aerosol components. The spread in results for the individual aerosol components is substantial, and can be divided into diversities in burden, mass extinction coefficient (MEC), and normalized RF with respect to AOD. We find that these three factors give similar contributions to the spread in results.
Theta EEG dynamics of the error-related negativity.
Trujillo, Logan T; Allen, John J B
2007-03-01
The error-related negativity (ERN) is a response-locked event-related brain potential (ERP) occurring 80-100 ms following response errors. This report contrasts three views of the genesis of the ERN, testing the classic view that time-locked phasic bursts give rise to the ERN against the view that the ERN arises from a pure phase-resetting of ongoing theta (4-7 Hz) EEG activity and the view that the ERN is generated - at least in part - by a phase-resetting and amplitude enhancement of ongoing theta EEG activity. Time-domain ERP analyses were augmented with time-frequency investigations of phase-locked and non-phase-locked spectral power, and inter-trial phase coherence (ITPC) computed from individual EEG trials, examining time courses and scalp topographies. Simulations based on the assumptions of the classic, pure phase-resetting, and phase-resetting plus enhancement views, using parameters from each subject's empirical data, were used to contrast the time-frequency findings that could be expected if one or more of these hypotheses adequately modeled the data. Error responses produced larger amplitude activity than correct responses in time-domain ERPs immediately following responses, as expected. Time-frequency analyses revealed that significant error-related post-response increases in total spectral power (phase- and non-phase-locked), phase-locked power, and ITPC were primarily restricted to the theta range, with this effect located over midfrontocentral sites and a temporal distribution from approximately 150-200 ms prior to the button press persisting up to 400 ms post-button press. The increase in non-phase-locked power (total power minus phase-locked power) was larger than the increase in phase-locked power, indicating that the bulk of the theta event-related dynamics were not phase-locked to the response.
Results of the simulations revealed a good fit for data simulated according to the phase-locking with amplitude enhancement perspective, and a poor fit for data simulated according to the classic view and the pure phase-resetting view. Error responses produce not only phase-locked increases in theta EEG activity, but also increases in non-phase-locked theta, both of which share a similar topography. The findings are thus consistent with the notion advanced by Luu et al. [Luu P, Tucker DM, Makeig S. Frontal midline theta and the error-related negativity; neurophysiological mechanisms of action regulation. Clin Neurophysiol 2004;115:1821-35] that the ERN emerges, at least in part, from a phase-resetting and phase-locking of ongoing theta-band activity, in the context of a general increase in theta power following errors.
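The power decomposition used in this kind of analysis (total power across trials, phase-locked power from the trial average, their difference as non-phase-locked power, and ITPC) can be sketched at a single theta frequency with synthetic trials. The generation parameters below are invented, and numpy is assumed to be available; this is not the authors' analysis pipeline.

```python
# Decompose synthetic single-trial EEG at one theta frequency into total,
# phase-locked, and non-phase-locked power, and compute ITPC.
import numpy as np

rng = np.random.default_rng(0)
FS = 250.0                        # sampling rate, Hz (assumed)
T = np.arange(0, 1.0, 1.0 / FS)   # 1 s epoch
F_THETA = 6.0                     # theta-band probe frequency, Hz
N_TRIALS = 200

# Each trial: a 6 Hz oscillation whose phase is only partially consistent
# across trials (partial phase locking), plus broadband noise.
phases = rng.normal(0.0, 0.7, N_TRIALS)
trials = np.array([np.cos(2 * np.pi * F_THETA * T + p) for p in phases])
trials += 0.5 * rng.standard_normal(trials.shape)

# Complex amplitude at F_THETA for each trial (single-frequency projection).
kernel = np.exp(-2j * np.pi * F_THETA * T)
coeffs = trials @ kernel / len(T)

total_power = np.mean(np.abs(coeffs) ** 2)       # phase- + non-phase-locked
erp = trials.mean(axis=0)                        # time-domain average (ERP)
locked_power = np.abs(erp @ kernel / len(T)) ** 2
nonlocked_power = total_power - locked_power
itpc = np.abs(np.mean(coeffs / np.abs(coeffs)))  # 0 = random, 1 = locked
print(f"total={total_power:.3f} locked={locked_power:.3f} ITPC={itpc:.2f}")
```

By Jensen's inequality the phase-locked power can never exceed the total power, so the non-phase-locked remainder is always non-negative, mirroring the subtraction described in the abstract.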
Molecular dynamics simulations of shock waves in oriented nitromethane single crystals.
He, Lan; Sewell, Thomas D; Thompson, Donald L
2011-03-28
The structural relaxation of crystalline nitromethane initially at T = 200 K subjected to moderate (~15 GPa) supported shocks on the (100), (010), and (001) crystal planes has been studied using microcanonical molecular dynamics with the nonreactive Sorescu-Rice-Thompson force field [D. C. Sorescu, B. M. Rice, and D. L. Thompson, J. Phys. Chem. B 104, 8406 (2000)]. The responses to the shocks were determined by monitoring the mass density; the intermolecular, intramolecular, and total temperatures (average kinetic energies); the partitioning of total kinetic energy among Cartesian directions; the radial distribution functions and mean-square displacements in directions perpendicular to that of shock propagation; and the time dependence of molecular rotational relaxation. The results show that the mechanical response of crystalline nitromethane strongly depends on the orientation of the shock wave. Shocks propagating along [100] and [001] result in translational disordering in some crystal planes but not in others, a phenomenon that we refer to as plane-specific disordering; whereas for [010] the shock-induced stresses are relieved by a complicated structural rearrangement that leads to a paracrystalline structure. The plane-specific translational disordering is more complete by the end of the simulations (~6 ps) for shock propagation along [001] than along [100]. Transient excitation of the intermolecular degrees of freedom occurs in the immediate vicinity of the shock front for all three orientations; the effect is most pronounced for the [010] shock. In all three cases excitation of molecular vibrations occurs more slowly than the intermolecular excitation. The intermolecular and intramolecular temperatures are nearly equal by the end of the simulations, with 400-500 K of net shock heating.
Results for two-dimensional mean-square molecular center-of-mass displacements, calculated as a function of time since shock wave passage in planes perpendicular to the direction of shock propagation, show that the molecular translational mobility in the picoseconds following shock wave passage is greatest for [001] and least for the [010] case. In all cases the root-mean-square center-of-mass displacement is small compared to the molecular diameter of nitromethane on the time scale of the simulations. The calculated time scales for the approach to thermal equilibrium are generally consistent with the predictions of a recent theoretical analysis due to Hooper [J. Chem. Phys. 132, 014507 (2010)].
Wave propagation modeling in composites reinforced by randomly oriented fibers
NASA Astrophysics Data System (ADS)
Kudela, Pawel; Radzienski, Maciej; Ostachowicz, Wieslaw
2018-02-01
A new method for the prediction of elastic constants in randomly oriented fiber composites is proposed. It is based on the mechanics of composites, the rule of mixtures and a total mass balance tailored to the spectral element mesh composed of 3D brick elements. Selected elastic properties predicted by the proposed method are compared with values obtained by another theoretical method. The proposed method is applied to the simulation of Lamb waves in a glass-epoxy composite plate reinforced by randomly oriented fibers. Full wavefield measurements conducted by the scanning laser Doppler vibrometer are in good agreement with simulations performed using the time domain spectral element method.
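As a rough illustration of the rule-of-mixtures ingredients such a method builds on, the sketch below combines the Voigt (longitudinal) and Reuss (transverse) estimates with the classical 3/8-5/8 weighting often quoted for in-plane randomly oriented fibers. This is only a textbook approximation with illustrative glass/epoxy values, not the paper's spectral-element-tailored procedure.

```python
# Rule-of-mixtures estimates for a glass/epoxy composite, plus the classical
# 3/8-5/8 approximation for an in-plane random fiber orientation.
E_FIBER = 72.0e9   # glass fiber Young's modulus, Pa (typical value)
E_MATRIX = 3.5e9   # epoxy Young's modulus, Pa (typical value)
V_FIBER = 0.3      # fiber volume fraction (assumed)

# Longitudinal (Voigt, equal strain) and transverse (Reuss, equal stress):
e_long = V_FIBER * E_FIBER + (1.0 - V_FIBER) * E_MATRIX
e_trans = 1.0 / (V_FIBER / E_FIBER + (1.0 - V_FIBER) / E_MATRIX)

# Classical estimate for a planar random fiber orientation distribution:
e_random = 0.375 * e_long + 0.625 * e_trans

print(f"E_long   = {e_long / 1e9:.2f} GPa")
print(f"E_trans  = {e_trans / 1e9:.2f} GPa")
print(f"E_random ~= {e_random / 1e9:.2f} GPa")
```

The randomly-oriented estimate necessarily falls between the transverse and longitudinal bounds, which is the qualitative behavior any such homogenization scheme must reproduce.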
Design and implementation of a simple nuclear power plant simulator
NASA Astrophysics Data System (ADS)
Miller, William H.
1983-02-01
A simple PWR nuclear power plant simulator has been designed and implemented on a minicomputer system. The system is intended for students' use in understanding the power operation of a nuclear power plant. A PDP-11 minicomputer calculates reactor parameters in real time and uses a graphics terminal to display the results, with a keyboard and joystick for control functions. Plant parameters calculated by the model include the core reactivity (based upon control rod positions, soluble boron concentration and reactivity feedback effects), the total core power, the axial core power distribution, the temperature and pressure in the primary and secondary coolant loops, etc.
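The reactivity bookkeeping such a simulator performs each update cycle can be sketched as a sum of rod worth, boron worth, and temperature-feedback terms. All coefficients below are invented teaching values, not parameters of the actual PDP-11 model.

```python
# Toy core-reactivity calculation: net reactivity as the sum of control-rod
# worth, soluble-boron worth, and Doppler (fuel temperature) feedback.
ROD_WORTH_PCM = 2000.0           # total rod worth, pcm (assumed)
BORON_WORTH_PCM_PER_PPM = -10.0  # boron coefficient, pcm/ppm (assumed)
ALPHA_FUEL_PCM_PER_K = -3.0      # Doppler coefficient, pcm/K (assumed)
T_FUEL_REF = 900.0               # reference fuel temperature, K (assumed)

def core_reactivity(rod_fraction_out, boron_ppm, t_fuel):
    """Net reactivity in pcm for a given plant state (toy model)."""
    rho_rods = ROD_WORTH_PCM * (rod_fraction_out - 1.0)  # 0 pcm fully out
    rho_boron = BORON_WORTH_PCM_PER_PPM * boron_ppm
    rho_doppler = ALPHA_FUEL_PCM_PER_K * (t_fuel - T_FUEL_REF)
    return rho_rods + rho_boron + rho_doppler

# Example state: rods 90% withdrawn, 50 ppm boron, fuel 20 K above reference.
rho = core_reactivity(0.9, 50.0, 920.0)
print(f"net reactivity: {rho:.0f} pcm")
```

A full simulator would feed this reactivity into point-kinetics equations each time step to update total core power; the sketch shows only the state-to-reactivity mapping the abstract enumerates.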
Spatial perception predicts laparoscopic skills on virtual reality laparoscopy simulator.
Hassan, I; Gerdes, B; Koller, M; Dick, B; Hellwig, D; Rothmund, M; Zielke, A
2007-06-01
This study evaluates the influence of visual-spatial perception on the laparoscopic performance of novices with a virtual reality simulator (LapSim(R)). Twenty-four novices completed standardized tests of visual-spatial perception (Lameris Toegepaste Natuurwetenschappelijk Onderzoek [TNO] Test(R) and Stumpf-Fay Cube Perspectives Test(R)), and laparoscopic skills were assessed objectively while performing 1-h practice sessions on the LapSim(R) comprising coordination, cutting, and clip application tasks. Outcome variables included time to complete the tasks, economy of motion, and total error scores. The degree of visual-spatial perception correlated significantly with laparoscopic performance scores on the LapSim(R). Participants with a high degree of spatial perception (Group A) performed the tasks faster than those with a low degree of spatial perception (Group B) (p = 0.001). Individuals with a high degree of spatial perception also scored better for economy of motion (p = 0.021), tissue damage (p = 0.009), and total error (p = 0.007). Among novices, visual-spatial perception is associated with manual skills performed on a virtual reality simulator. This result may be important for educators developing adequate training programs that can be individually adapted.
Optimization Routine for Generating Medical Kits for Spaceflight Using the Integrated Medical Model
NASA Technical Reports Server (NTRS)
Graham, Kimberli; Myers, Jerry; Goodenow, Deb
2017-01-01
The Integrated Medical Model (IMM) is a MATLAB model that provides probabilistic assessment of the medical risk associated with human spaceflight missions. Different simulations or profiles can be run in which input conditions regarding both mission characteristics and crew characteristics may vary. For each simulation, the IMM records the total medical events that occur and “treats” each event with resources drawn from import scripts. IMM outputs include Total Medical Events (TME), Crew Health Index (CHI), probability of Evacuation (pEVAC), and probability of Loss of Crew Life (pLOCL). The Crew Health Index is determined by the amount of quality time lost (QTL). Previously, an optimization code was implemented in order to efficiently generate medical kits. The kits were optimized to have the greatest benefit possible, given a mass and/or volume constraint. A 6-crew, 14-day lunar mission was chosen for the simulation and run through the IMM for 100,000 trials. A built-in MATLAB solver, mixed-integer linear programming, was used for the optimization routine. Kits were generated in 10% increments ranging from 10%-100% of the benefit constraints. Conditions where mass alone was minimized, where volume alone was minimized, and where mass and volume were minimized jointly were tested.
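The kit optimization described (maximize medical benefit under a mass and/or volume limit) is a knapsack-type integer program. The work above used MATLAB's mixed-integer linear programming solver; the sketch below substitutes a tiny dynamic-programming 0/1 knapsack with invented resource data to show the structure of the problem.

```python
# 0/1 knapsack by dynamic programming: pick medical resources to maximize
# total benefit without exceeding a mass budget. Item data are hypothetical.
def best_kit(items, mass_limit):
    """items: list of (name, mass, benefit); returns (benefit, chosen names)."""
    # best[m] = (max benefit, chosen items) achievable within mass budget m
    best = [(0.0, [])] * (mass_limit + 1)
    for name, mass, benefit in items:
        new = best[:]
        for m in range(mass, mass_limit + 1):
            cand = best[m - mass][0] + benefit
            if cand > new[m][0]:
                new[m] = (cand, best[m - mass][1] + [name])
        best = new
    return best[mass_limit]

# Hypothetical resources: (name, mass in 100 g units, benefit score).
resources = [
    ("analgesic", 2, 10.0),
    ("antibiotic", 3, 14.0),
    ("splint", 5, 8.0),
    ("IV fluids", 8, 18.0),
]
benefit, chosen = best_kit(resources, mass_limit=10)
print(benefit, chosen)
```

A joint mass-and-volume constraint, as in the study, would turn this into a two-dimensional knapsack, which is one reason a general MILP solver is the natural tool there.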
Willaert, Willem I M; Aggarwal, Rajesh; Daruwalla, Farhad; Van Herzeele, Isabelle; Darzi, Ara W; Vermassen, Frank E; Cheshire, Nicholas J
2012-06-01
Patient-specific simulated rehearsal (PsR) of a carotid artery stenting procedure (CAS) enables the interventionalist to rehearse the case before performing the procedure on the actual patient by incorporating patient-specific computed tomographic data into the simulation software. This study aimed to evaluate whether PsR of a CAS procedure can enhance the operative performance versus a virtual reality (VR) generic CAS warm-up procedure or no preparation at all. During a 10-session cognitive/technical VR course, medical residents were trained in CAS. Thereafter, in a randomized crossover study, each participant performed a patient-specific CAS case 3 times on the simulator, preceded by 3 different tasks: a PsR, a generic case, or no preparation. Technical performances were assessed using simulator-based metrics and expert-based ratings. Twenty medical residents (surgery, cardiology, radiology) were recruited. Training plateaus were observed after 10 sessions for all participants. Performances were significantly better after PsR than after a generic warm-up or no warm-up for total procedure time (16.3 ± 0.6 vs 19.7 ± 1.0 vs 20.9 ± 1.1 minutes, P = 0.001) and fluoroscopy time (9.3 ± 0.1 vs 11.2 ± 0.6 vs 11.2 ± 0.5 minutes, P = 0.022) but did not influence contrast volume or number of roadmaps used during the "real" case. PsR significantly improved the quality of performance as measured by the expert-based ratings (scores 28 vs 25 vs 25, P = 0.020). Patient-specific simulated rehearsal of a CAS procedure significantly improves operative performance, compared to a generic VR warm-up or no warm-up. This technology requires further investigation with respect to improved outcomes on patients in the clinical setting.
Nousiainen, Markku T; McQueen, Sydney A; Ferguson, Peter; Alman, Benjamin; Kraemer, William; Safir, Oleg; Reznick, Richard; Sonnadara, Ranil
2016-04-01
Although simulation-based training is becoming widespread in surgical education and research supports its use, one major limitation is cost. Until now, little has been published on the costs of simulation in residency training. At the University of Toronto, a novel competency-based curriculum in orthopaedic surgery has been implemented for training selected residents, which makes extensive use of simulation. Despite the benefits of this intensive approach to simulation, there is a need to consider its financial implications and demands on faculty time. This study presents a cost and faculty work-hours analysis of implementing simulation as a teaching and evaluation tool in the University of Toronto's novel competency-based curriculum program compared with the historic costs of using simulation in the residency training program. All invoices for simulation training were reviewed to determine the financial costs before and after implementation of the competency-based curriculum. Invoice items included costs for cadavers, artificial models, skills laboratory labor, associated materials, and standardized patients. Costs related to the surgical skills laboratory rental fees and orthopaedic implants were waived as a result of special arrangements with the skills laboratory and implant vendors. Although faculty time was not reimbursed, faculty hours dedicated to simulation were also evaluated. The academic year of 2008 to 2009 was chosen to represent an academic year that preceded the introduction of the competency-based curriculum. During this year, 12 residents used simulation for teaching. The academic year of 2010 to 2011 was chosen to represent an academic year when the competency-based curriculum training program was functioning parallel but separate from the regular stream of training. In this year, six residents used simulation for teaching and assessment. 
The academic year of 2012 to 2013 was chosen to represent an academic year when simulation was used equally among the competency-based curriculum and regular stream residents for teaching (60 residents) and among 14 competency-based curriculum residents and 21 regular stream residents for assessment. The total costs of using simulation to teach and assess all residents in the competency-based curriculum and regular stream programs (academic year 2012-2013) (CDN 155,750, USD 158,050) were approximately 15 times higher than the cost of using simulation to teach residents before the implementation of the competency-based curriculum (academic year 2008-2009) (CDN 10,090, USD 11,140). The number of hours spent teaching and assessing trainees increased from 96 to 317 hours during this period, representing a threefold increase. Although the financial costs and time demands on faculty in running the simulation program in the new competency-based curriculum at the University of Toronto have been substantial, augmented learner and trainer satisfaction has been accompanied by direct evidence of improved and more efficient learning outcomes. The higher costs and demands on faculty time associated with implementing simulation for teaching and assessment must be considered when it is used to enhance surgical training.
Diel flux of dissolved carbohydrate in a salt marsh and a simulated estuarine ecosystem
The concentrations of total dissolved carbohydrate (TCHO), monosaccharide (MCHO) and polysaccharide (PCHO) were followed over a total of ten diel cycles in a salt marsh and a 13 m³ seawater tank simulating an estuarine ecosystem. Their patterns are compared to those for total d...
Deviney, Frank A.; Rice, Karen; Brown, Donald E.
2012-01-01
Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given a constant threshold-crossing rate, accuracy and precision of parameter estimates increased with longer total monitoring period and more-frequent observations. Given a fixed monitoring period and observation frequency, accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased regardless of sampling frequency; however, the expected duration and return period were over-estimated using the sub-sampled time series with respect to the full quasi-weekly time series.
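A minimal sketch of the kind of hidden process being estimated: an alternating violation/compliance process with exponentially distributed sojourn times, observed only at regular, relatively infrequent times. The parameter values and the naive estimator below are illustrative only, not the Bayesian method of the study:

```python
import random

def simulate_hidden_process(mean_violation, mean_compliance, total_years,
                            obs_per_year, seed=1):
    """Alternating two-state process (violation / compliance) with
    exponential sojourn times, observed at regular discrete times --
    a simplified stand-in for the hidden WQI process described above."""
    rng = random.Random(seed)
    t, in_violation = 0.0, False
    switches = [0.0]                      # times at which the state flips
    while t < total_years:
        mean = mean_violation if in_violation else mean_compliance
        t += rng.expovariate(1.0 / mean)  # exponential sojourn in current state
        switches.append(t)
        in_violation = not in_violation
    obs = []
    for k in range(int(total_years * obs_per_year)):
        t_obs = k / obs_per_year
        flips = sum(1 for s in switches[1:] if s <= t_obs)
        obs.append(flips % 2 == 1)        # odd number of flips -> in violation
    return obs

obs = simulate_hidden_process(mean_violation=0.1, mean_compliance=0.9,
                              total_years=30, obs_per_year=50)
p_hat = sum(obs) / len(obs)  # naive estimate of long-term violation probability
```

The long-run probability of violation for this process is mean_violation / (mean_violation + mean_compliance) = 0.10, which the naive observation-fraction estimator approaches as the monitoring period and observation frequency grow.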
Numerical simulation of ice accretion on a wind turbine blade (Simulation numerique de l'accretion de glace sur une pale d'eolienne)
NASA Astrophysics Data System (ADS)
Fernando, Villalpando
The wind energy industry is growing steadily, and an excellent place for the construction of wind farms is northern Quebec. This region has huge wind energy production potential, as the cold temperatures increase air density and with it the available wind energy. However, some issues associated with arctic climates cause production losses on wind farms. Icing conditions occur frequently, as high air humidity and freezing temperatures cause ice to build up on the blades, resulting in wind turbines operating suboptimally. One of the negative consequences of ice accretion is degradation of the blade's aerodynamics, in the form of a decrease in lift and an increase in drag. Also, the ice grows unevenly, which unbalances the blades and induces vibration. This reduces the expected life of some of the turbine components. If the ice accretion continues, the ice can reach a mass that endangers the wind turbine structure, and operation must be suspended in order to prevent mechanical failure. To evaluate the impact of ice on the profits of wind farms, it is important to understand how ice builds up and how much it can affect blade aerodynamics. In response, researchers in the wind energy field have attempted to simulate ice accretion on airfoils in refrigerated wind tunnels. Unfortunately, this is an expensive endeavor, and researchers' budgets are limited. However, ice accretion can be simulated more cost-effectively and with fewer limitations on airfoil size and air speed using numerical methods. Numerical simulation is an approach that can help researchers acquire knowledge in the field of wind energy more quickly. For years, the aviation industry has invested time and money developing computer codes to simulate ice accretion on aircraft wings. Nearly all these codes are restricted to use by aircraft developers, and so they are not accessible to researchers in the wind engineering field. 
Moreover, these codes have been developed to meet aeronautical industry specifications, which are different from those that must be met in the wind energy industry. Among these differences are the following: wind turbines operate at subsonic speeds; the chords and angles of attack of wind turbine blades are smaller than those of aircraft wings; and a wind turbine can operate with a larger ice mass on its blades than an aircraft can. So, it is important to provide wind energy researchers with tools specifically validated with the operating parameters of a wind turbine. The main goal of this work is to develop a methodology to simulate ice accretion in 2D using Fluent and Matlab, commercial software programs that are available at nearly all research institutions. In this study, we used Gambit, previously the companion meshing tool of Fluent, which has since been replaced by ICEM. We decided to stay with Gambit because we were already deeply involved with the meshing procedure for our simulation of ice accretion at the time Gambit was removed from the market. We validate the methodology with experimental data consisting of iced airfoil contours obtained in a refrigerated wind tunnel using the parameters of actual icing conditions recorded in northern Quebec. This methodology consists of four steps: airfoil meshing, droplet trajectory calculation, thermodynamic model application, and airfoil contour updating. The total simulation time is divided into several time steps, for each of which the four steps are performed until the total time has elapsed. The time step length depends on the icing conditions. (Abstract shortened by UMI.)
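The four-step, time-stepped procedure described above can be sketched as a driver loop. The step functions below are trivial stand-ins, not the thesis's actual Fluent/Matlab computations; all names and numbers are illustrative:

```python
# Sketch of the per-time-step loop: (1) mesh the airfoil, (2) compute
# droplet trajectories/impingement, (3) apply the thermodynamic model,
# (4) update the airfoil contour -- repeated until total time elapses.

def generate_mesh(contour):                 # step 1: airfoil meshing (stub)
    return list(contour)

def droplet_trajectories(mesh, cond):       # step 2: impingement per panel (stub)
    return [cond["lwc"] for _ in mesh]      # collected liquid water content

def thermodynamic_model(imp, cond, dt):     # step 3: ice growth this step (stub)
    return [w * cond["freezing_fraction"] * dt for w in imp]

def update_contour(contour, growth):        # step 4: thicken contour locally
    return [y + g for y, g in zip(contour, growth)]

def simulate_ice_accretion(airfoil, cond, total_time, dt):
    t, contour = 0.0, airfoil
    while t < total_time:                   # time-step length depends on icing
        mesh = generate_mesh(contour)       # conditions in the real method
        imp = droplet_trajectories(mesh, cond)
        growth = thermodynamic_model(imp, cond, dt)
        contour = update_contour(contour, growth)
        t += dt
    return contour

iced = simulate_ice_accretion([0.0] * 5,
                              {"lwc": 0.4, "freezing_fraction": 0.5},
                              total_time=60.0, dt=10.0)
```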
Voronin, Lois M.; Cauller, Stephen J.
2017-07-31
Elevated concentrations of nitrogen in groundwater that discharges to surface-water bodies can degrade surface-water quality and habitats in the New Jersey Coastal Plain. An analysis of groundwater flow in the Kirkwood-Cohansey aquifer system and deeper confined aquifers that underlie the Barnegat Bay–Little Egg Harbor (BB-LEH) watershed and estuary was conducted by using groundwater-flow simulation, in conjunction with a particle-tracking routine, to provide estimates of groundwater flow paths and travel times to streams and the BB-LEH estuary. Water-quality data from the Ambient Groundwater Quality Monitoring Network, a long-term monitoring network of wells distributed throughout New Jersey, were used to estimate the initial nitrogen concentration in recharge for five different land-use classes—agricultural cropland or pasture, agricultural orchard or vineyard, urban non-residential, urban residential, and undeveloped. Land use at the point of recharge within the watershed was determined using a geographic information system (GIS). Flow path starting locations were plotted on land-use maps for 1930, 1973, 1986, 1997, and 2002. Information on the land use at the time and location of recharge, time of travel to the discharge location, and the point of discharge were determined for each simulated flow path. Particle-tracking analysis provided the link from the point of recharge, along the particle flow path, to the point of discharge, and the particle travel time. The travel time of each simulated particle established the recharge year. Land use during the year of recharge was used to define the nitrogen concentration associated with each flow path.
The recharge-weighted average nitrogen concentration for all flow paths that discharge to the Toms River upstream from streamflow-gaging station 01408500 or to the BB-LEH estuary was calculated. Groundwater input into the Barnegat Bay–Little Egg Harbor estuary from two main sources—indirect discharge from base flow to streams that eventually flow into the bay and groundwater discharge directly into the estuary and adjoining coastal wetlands—is summarized by quantity, travel time, and estimated nitrogen concentration. Simulated average groundwater discharge to streams in the watershed that flow into the BB-LEH estuary is approximately 400 million gallons per day. Particle-tracking results indicate that the travel time of 56 percent of this discharge is less than 7 years. Fourteen percent of the groundwater discharge to the streams in the BB-LEH watershed has a travel time of less than 7 years and originates in urban land. Analysis of flow-path simulations indicates that approximately 13 percent of the total groundwater flow through the study area discharges directly to the estuary and adjoining coastal wetlands (approximately 64 million gallons per day). The travel time of 19 percent of this discharge is less than 7 years. Ten percent of this discharge (1 percent of the total groundwater flow through the study area) originates in urban areas and has a travel time of less than 7 years. Groundwater that discharges to the streams that flow into the BB-LEH, in general, has shorter travel times, and a higher percentage of it originates in urban areas than does direct groundwater discharge to the Barnegat Bay–Little Egg Harbor estuary. The simulated average nitrogen concentration in groundwater that discharges to the Toms River, upstream from streamflow-gaging station 01408500, was computed and compared to summary concentrations determined from analysis of multiple surface-water samples.
The nitrogen concentration in groundwater that discharges directly to the estuary and adjoining coastal wetlands is a current data gap. The particle-tracking methodology used in this study provides an estimate of this concentration.
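The recharge-weighted averaging described above can be sketched as follows. The land-use classes come from the abstract, but the recharge rates and nitrogen concentrations are invented for illustration:

```python
# Each simulated flow path carries the recharge rate at its starting cell
# and the nitrogen concentration assigned from the land use present in its
# recharge year. All numeric values below are illustrative placeholders.

flow_paths = [
    {"recharge": 2.0, "land_use": "urban residential",      "n_mg_per_l": 3.1},
    {"recharge": 1.0, "land_use": "undeveloped",            "n_mg_per_l": 0.4},
    {"recharge": 1.5, "land_use": "agricultural cropland",  "n_mg_per_l": 5.0},
]

total_recharge = sum(p["recharge"] for p in flow_paths)
# Recharge-weighted average nitrogen concentration over all flow paths
# discharging to a given receptor (stream reach or estuary).
weighted_n = (sum(p["recharge"] * p["n_mg_per_l"] for p in flow_paths)
              / total_recharge)
```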
Impact Delivery of Reduced Greenhouse Gases on Early Mars
NASA Astrophysics Data System (ADS)
Haberle, R. M.; Zahnle, K. J.; Barlow, N. G.
2017-12-01
Reducing greenhouse gases are the latest trend in finding solutions to the early Mars climate dilemma. In thick CO2 atmospheres with modest concentrations of H2 and/or CH4, collision-induced absorptions can reduce the outgoing longwave radiation enough to provide a significant greenhouse effect. To raise surface temperatures significantly by this process, surface pressures must be at least 500 mb and H2 and/or CH4 concentrations must be at or above the several-percent level. Volcanism, serpentinization, and impacts are possible sources for reduced gases. Here we investigate the delivery of such gases by impact degassing from comets and asteroids. We use a time-marching stochastic impactor model that reproduces the observed crater size-frequency distribution of Noachian surfaces. Following each impact, reduced gases are added to the atmosphere from a production function based on gas equilibrium calculations for several classes of meteorites and comets at typical post-impact temperatures. Escape and photochemistry then remove the reduced greenhouse gases continuously in time throughout each simulation. We then conduct an ensemble of simulations with this simple model, varying the surface pressure, impact history, reduced gas production and escape functions, and mix of impactor types, to determine if this could be a potentially important part of the early Mars story. Our goal is to determine the duration of impact events that elevate reduced gas concentrations to significant levels and the total time of such events throughout the Noachian. Our initial simulations indicate that large impactors can raise H2 concentrations above the 10% level - a level high enough for a very strong greenhouse effect in a 1 bar CO2 atmosphere - for millions of years, and that the total time spent at or above that level can be in the tens-of-millions-of-years range. These are interesting results that we plan to explore more thoroughly for the meeting.
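The time-marching stochastic model can be caricatured as a Poisson impact process that injects H2 plus a continuous loss term standing in for escape and photochemistry. All rates, amounts, and the exponential removal law below are invented placeholders, not the study's production and escape functions:

```python
import random

def h2_history(duration_myr, impact_rate_per_myr, mean_injection,
               loss_timescale_myr, dt=0.01, seed=7):
    """Toy time-marching model: impacts arrive as a Poisson process and
    inject H2 (as an atmospheric mixing fraction); escape/photochemistry
    remove it exponentially between impacts. Illustrative values only."""
    rng = random.Random(seed)
    h2, t, history = 0.0, 0.0, []
    while t < duration_myr:
        if rng.random() < impact_rate_per_myr * dt:      # an impact this step
            h2 += rng.expovariate(1.0 / mean_injection)  # degassed H2 fraction
        h2 *= (1.0 - dt / loss_timescale_myr)            # continuous removal
        history.append(h2)
        t += dt
    return history

hist = h2_history(duration_myr=100, impact_rate_per_myr=0.5,
                  mean_injection=0.05, loss_timescale_myr=5.0)
# Total time (Myr) spent at or above the 10% H2 level in this realization.
time_above_10pct = sum(1 for h in hist if h >= 0.10) * 0.01
```

Running an ensemble of such histories over varied rates and loss timescales mirrors, in miniature, how the duration and total time of elevated-H2 episodes are tallied in the study.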
Koga, Shunsaku; Barstow, Thomas J; Okushima, Dai; Rossiter, Harry B; Kondo, Narihiko; Ohmae, Etsuko; Poole, David C
2015-06-01
Near-infrared assessment of skeletal muscle is restricted to superficial tissues due to power limitations of spectroscopic systems. We reasoned that understanding of muscle deoxygenation may be improved by simultaneously interrogating deeper tissues. To achieve this, we modified a high-power (∼8 mW), time-resolved, near-infrared spectroscopy system to increase depth penetration. Precision was first validated using a homogenous optical phantom over a range of inter-optode spacings (OS). Coefficients of variation from 10 measurements were minimal (0.5-1.9%) for absorption (μa), reduced scattering, simulated total hemoglobin, and simulated O2 saturation. Second, a dual-layer phantom was constructed to assess depth sensitivity, and the thickness of the superficial layer was varied. With a superficial layer thickness of 1, 2, 3, and 4 cm (μa = 0.149 cm(-1)), the proportional contribution of the deep layer (μa = 0.250 cm(-1)) to total μa was 80.1, 26.9, 3.7, and 0.0%, respectively (at 6-cm OS), validating penetration to ∼3 cm. Implementation of an additional superficial phantom to simulate adipose tissue further reduced depth sensitivity. Finally, superficial and deep muscle spectroscopy was performed in six participants during heavy-intensity cycle exercise. Compared with the superficial rectus femoris, peak deoxygenation of the deep rectus femoris (including the superficial intermedius in some) was not significantly different (deoxyhemoglobin and deoxymyoglobin concentration: 81.3 ± 20.8 vs. 78.3 ± 13.6 μM, P > 0.05), but deoxygenation kinetics were significantly slower (mean response time: 37 ± 10 vs. 65 ± 9 s, P ≤ 0.05). These data validate a high-power, time-resolved, near-infrared spectroscopy system with large OS for measuring the deoxygenation of deep tissues and reveal temporal and spatial disparities in muscle deoxygenation responses to exercise. Copyright © 2015 the American Physiological Society.
Bucking logs to cable yarder capacity can decrease yarding costs and minimize wood wastage
Chris B. LeDoux
1986-01-01
Data from select time and motion studies and a forest model plot, used in a simulation model, show that logging managers planning felling, bucking, and limbing for a cable yarding operation must consider the effect of alternate bucking rules on wood wastage, yarding production rates and costs, the number of chokers to fly, and total logging costs. Results emphasize then...
NASA Astrophysics Data System (ADS)
Dib, Alain; Kavvas, M. Levent
2018-03-01
The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
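For contrast with the FPE approach, the Monte Carlo side of the comparison can be illustrated on a much simpler surrogate: steady uniform flow under Manning's equation, V = (1/n) R^(2/3) S^(1/2), with an uncertain roughness coefficient n. This is only a sketch of the MC workflow (sample the uncertain parameter, run the deterministic model, collect ensemble statistics), not the stochastic Saint-Venant solver of the study; the roughness range and channel values are invented:

```python
import random
import statistics

def manning_velocity(n, hydraulic_radius, slope):
    """Manning's equation (SI units): V = (1/n) * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Monte Carlo over an uncertain roughness coefficient: each realization
# draws one n and evaluates the flow model. The FPE methodology replaces
# this many-simulation ensemble with a single solve for the evolving
# probability density of the flow variables.
rng = random.Random(0)
samples = [manning_velocity(rng.uniform(0.025, 0.035),
                            hydraulic_radius=1.2, slope=0.001)
           for _ in range(5000)]
ensemble_mean = statistics.mean(samples)
ensemble_var = statistics.variance(samples)
```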
Hilditch, Cassie J; Dorrian, Jillian; Centofanti, Stephanie A; Van Dongen, Hans P; Banks, Siobhan
2017-02-01
Night shift workers are at risk of road accidents due to sleepiness on the commute home. A brief nap at the end of the night shift, before the commute, may serve as a sleepiness countermeasure. However, there is potential for sleep inertia, i.e. transient impairment immediately after awakening from the nap. We investigated whether sleep inertia diminishes the effectiveness of napping as a sleepiness countermeasure before a simulated commute after a simulated night shift. N=21 healthy subjects (aged 21-35 y; 12 females) participated in a 3-day laboratory study. After a baseline night, subjects were kept awake for 27h for a simulated night shift. They were randomised to either receive a 10-min nap ending at 04:00 plus a 10-min pre-drive nap ending at 07:10 (10-NAP) or total sleep deprivation (NO-NAP). A 40-min York highway driving task was performed at 07:15 to simulate the commute. A 3-min psychomotor vigilance test (PVT-B) and the Samn-Perelli Fatigue Scale (SP-Fatigue) were administered at 06:30 (pre-nap), 07:12 (post-nap), and 07:55 (post-drive). In the 10-NAP condition, total pre-drive nap sleep time was 9.1±1.2min (mean±SD), with 1.3±1.9min spent in slow wave sleep, as determined polysomnographically. There was no difference between conditions in PVT-B performance at 06:30 (before the nap). In the 10-NAP condition, PVT-B performance was worse after the nap (07:12) compared to before the nap (06:30); no change across time was found in the NO-NAP condition. There was no significant difference between conditions in PVT-B performance after the drive. SP-Fatigue and driving performance did not differ significantly between conditions. In conclusion, the pre-drive nap showed objective, but not subjective, evidence of sleep inertia immediately after awakening. The 10-min nap did not affect driving performance during the simulated commute home, and was not effective as a sleepiness countermeasure. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-01-01
The simulation was performed on 64K cores of Intrepid, running at 0.25 simulated-years-per-day and taking 25 million core-hours. This is the first simulation using both the CAM5 physics and the highly scalable spectral element dynamical core. The animation of Total Precipitable Water clearly shows hurricanes developing in the Atlantic and Pacific.
Measurements of the total cross section of natBe with thermal neutrons from a photo-neutron source
NASA Astrophysics Data System (ADS)
Liu, L. X.; Wang, H. W.; Ma, Y. G.; Cao, X. G.; Cai, X. Z.; Chen, J. G.; Zhang, G. L.; Han, J. L.; Zhang, G. Q.; Hu, J. F.; Wang, X. H.; Li, W. J.; Yan, Z.; Fu, H. J.
2017-11-01
The total neutron cross sections of natural beryllium in the neutron energy region of 0.007 to 0.1 eV were measured using a time-of-flight (TOF) technique at the Shanghai Institute of Applied Physics (SINAP). The low-energy neutrons were obtained by moderating the high-energy neutrons from a pulsed photo-neutron source driven by a 16 MeV electron linac. The time-dependent neutron background component was determined by employing a 12.8 cm boron-loaded polyethylene (PEB, 5 wt%) block to close the neutron TOF path and by using Monte Carlo simulation methods. The present data were compared with the Harvey data folded with the response function of the photo-neutron source (PNS, phase-1). The present measurement of the total cross section of natBe for thermal neutrons based on PNS was developed for the acquisition of nuclear data needed for the Thorium Molten Salt Reactor (TMSR).
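Two standard relations underlie a transmission measurement of this kind: the neutron energy from time of flight, E = (1/2) m (L/t)^2, and the total cross section from the measured transmission, sigma_tot = -ln(I/I0) / (N d). The flight path, transmission, and sample thickness below are illustrative numbers, not the SINAP values:

```python
import math

M_NEUTRON = 1.674927e-27   # neutron mass, kg
EV = 1.602177e-19          # 1 eV in joules

def tof_energy_ev(flight_path_m, tof_s):
    """Neutron kinetic energy from time of flight: E = (1/2) m (L/t)^2."""
    v = flight_path_m / tof_s
    return 0.5 * M_NEUTRON * v * v / EV

def total_cross_section_barn(transmission, n_per_cm3, thickness_cm):
    """Total cross section from a transmission measurement:
    sigma_tot = -ln(T) / (N * d), in barns (1 b = 1e-24 cm^2)."""
    sigma_cm2 = -math.log(transmission) / (n_per_cm3 * thickness_cm)
    return sigma_cm2 / 1e-24

# Illustrative numbers: a thermal (0.0253 eV) neutron travels 2200 m/s,
# so over an assumed 6 m flight path the TOF is about 2.7 ms.
e = tof_energy_ev(6.0, 6.0 / 2200.0)
# Beryllium number density ~1.236e23 atoms/cm^3; an assumed 80% transmission
# through a 0.5 cm sample would imply sigma_tot of roughly 3.6 b.
sigma = total_cross_section_barn(0.80, 1.236e23, 0.5)
```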
NASA Astrophysics Data System (ADS)
Wang, Chuantao (C. T.)
2005-08-01
In the past decade, sheet metal forming and die development has been transformed from a tryout-based craft into a science-based, technology-driven engineering and manufacturing enterprise. Stamping CAE, especially sheet metal forming simulation, as one of the core components in digital die making and digital stamping, has played a key role in this historic transition. The stamping simulation technology and its industrial applications have greatly impacted automotive sheet metal product design, die development, die construction and tryout, and production stamping. The stamping CAE community has successfully resolved the traditional formability problems such as splits and wrinkles. The evolution of the stamping CAE technology and business demands opens even greater opportunities and challenges to the stamping CAE community in the areas of (1) continuously improving simulation accuracy, drastically reducing simulation time-in-system, and improving operability (user-friendliness), (2) resolving historically difficult problems such as dimensional quality problems (springback and twist) and surface quality problems (distortion and skid/impact lines), (3) resolving total manufacturability problems in line die operations including blanking, draw/redraw, trim/piercing, and flanging, and (4) overcoming new problems in forming new sheet materials with new forming techniques. In this article, the author first provides an overview of stamping CAE technology adventures, achievements, and industrial applications in the past decade. Then the author presents a summary of increasing manufacturability needs, from formability to the total quality and total manufacturability of sheet metal stampings. Finally, the paper outlines the new needs and trends for continuous improvements and innovations to meet increasing challenges in line die formability and quality requirements in automotive stamping.
Carnahan, Heather; Herold, Jodi
2015-01-01
Purpose: To review the literature on simulation-based learning experiences and to examine their potential to have a positive impact on physiotherapy (PT) learners' knowledge, skills, and attitudes in entry-to-practice curricula. Method: A systematic literature search was conducted in the MEDLINE, CINAHL, Embase Classic+Embase, Scopus, and Web of Science databases, using keywords such as physical therapy, simulation, education, and students. Results: A total of 820 abstracts were screened, and 23 articles were included in the systematic review. While there were few randomized controlled trials with validated outcome measures, several findings about simulation can positively inform the design of PT entry-to-practice curricula. Using simulators to provide specific output feedback can help students learn specific skills. Computer simulations can also augment students' learning experience. Human simulation experiences in managing the acute patient in the ICU are well received by students, positively influence their confidence, and decrease their anxiety. There is evidence that simulated learning environments can replace a portion of a full-time 4-week clinical rotation without impairing learning. Conclusions: Simulation-based learning activities are being effectively incorporated into PT curricula. More rigorously designed experimental studies that include a cost–benefit analysis are necessary to help curriculum developers make informed choices in curriculum design. PMID:25931672
1987-06-01
[Garbled two-column extract; recoverable fragments only:] ...the total time. The reverse of this circulation (surface inflow, outflow at depth) and storage... with respect to their applicability. Attempts have been made to determine the flow characteristics in the estuary... hourly sampled 70-hour time series... Edinger, J. E., and Buchak, E. M., "Estu..." (see complete entry in Section VIII)... "Integration Using Pumped Storage" (see complete entry in Section V)... it is obvious that the flow will not be properly simulated with...
Lunar and terrestrial planet formation in the Grand Tack scenario
Jacobson, S. A.; Morbidelli, A.
2014-01-01
We present conclusions from a large number of N-body simulations of the giant impact phase of terrestrial planet formation. We focus on new results obtained from the recently proposed Grand Tack model, which couples the gas-driven migration of giant planets to the accretion of the terrestrial planets. The giant impact phase follows the oligarchic growth phase, which builds a bi-modal mass distribution within the disc of embryos and planetesimals. By varying the ratio of the total mass in the embryo population to the total mass in the planetesimal population and the mass of the individual embryos, we explore how different disc conditions control the final planets. The total mass ratio of embryos to planetesimals controls the timing of the last giant (Moon-forming) impact and its violence. The initial embryo mass sets the size of the lunar impactor and the growth rate of Mars. After comparing our simulated outcomes with the actual orbits of the terrestrial planets (angular momentum deficit, mass concentration) and taking into account independent geochemical constraints on the mass accreted by the Earth after the Moon-forming event and on the time scale for the growth of Mars, we conclude that the protoplanetary disc at the beginning of the giant impact phase must have had most of its mass in Mars-sized embryos and only a small fraction of the total disc mass in the planetesimal population. From this, we infer that the Moon-forming event occurred between approximately 60 and approximately 130 Myr after the formation of the first solids and was caused most likely by an object with a mass similar to that of Mars. PMID:25114304
NASA Astrophysics Data System (ADS)
Lafontaine, J.; Hay, L.; Archfield, S. A.; Farmer, W. H.; Kiang, J. E.
2014-12-01
The U.S. Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the continental US. The portion of the NHM located within the Gulf Coastal Plains and Ozarks Landscape Conservation Cooperative (GCPO LCC) is being used to test the feasibility of improving streamflow simulations in gaged and ungaged watersheds by linking statistically- and physically-based hydrologic models. The GCPO LCC covers part or all of 12 states and 5 sub-geographies, totaling approximately 726,000 km2, and is centered on the lower Mississippi Alluvial Valley. A total of 346 USGS streamgages in the GCPO LCC region were selected to evaluate the performance of this new calibration methodology for the period 1980 to 2013. Initially, the physically-based models are calibrated to measured streamflow data to provide a baseline for comparison. An enhanced calibration procedure then is used to calibrate the physically-based models in the gaged and ungaged areas of the GCPO LCC using statistically-based estimates of streamflow. For this application, the calibration procedure is adjusted to address the limitations of the statistically generated time series to reproduce measured streamflow in gaged basins, primarily by incorporating error and bias estimates. As part of this effort, estimates of uncertainty in the model simulations are also computed for the gaged and ungaged watersheds.
PIC simulation of the vacuum power flow for a 5 terawatt, 5 MV, 1 MA pulsed power system
NASA Astrophysics Data System (ADS)
Liu, Laqun; Zou, Wenkang; Liu, Dagang; Guo, Fan; Wang, Huihui; Chen, Lin
2018-03-01
In this paper, a 5-TW, 5 MV, 1 MA pulsed power system based on vacuum magnetic insulation is simulated with the particle-in-cell (PIC) method. The system consists of 50 100-kV linear transformer driver (LTD) cavities in series, using magnetically insulated induction voltage adder (MIVA) technology for pulsed power addition and transmission. The pulsed power formation and the vacuum power flow are simulated for both self-limited and load-limited operation. When the pulsed power system is not connected to a load, the downstream magnetically insulated transmission line (MITL) works in the self-limited flow regime: the maximum output current is 1.14 MA, the voltage amplitude is 4.63 MV, and the ratio of the electron current to the total current is 67.5% when the output current reaches its peak value. When the load impedance is 3.0 Ω, the downstream MITL works in the load-limited flow regime: the maximum output current and voltage amplitude are 1.28 MA and 3.96 MV, and the ratio of the electron current to the total current is 11.7% when the output current reaches its peak value. In addition, triggering the switches in synchronism with the passage of the pulsed power flow effectively reduces the rise time of the pulse current.
Aircraft-type dependency of contrail evolution
NASA Astrophysics Data System (ADS)
Unterstrasser, S.; Görsch, N.
2014-12-01
The impact of aircraft type on contrail evolution is assessed using a large eddy simulation model with Lagrangian ice microphysics. Six different aircraft ranging from the small regional airliner Bombardier CRJ to the largest aircraft Airbus A380 are taken into account. Differences in wake vortex properties and fuel flow lead to considerable variations in the early contrail geometric depth and ice crystal number. Larger aircraft produce contrails with more ice crystals (assuming that the number of initially generated ice crystals per kilogram fuel is constant). These initial differences are reduced in the first minutes, as the ice crystal loss during the vortex phase is stronger for larger aircraft. In supersaturated air, contrails of large aircraft are much deeper after 5 min than those of small aircraft. A parameterization for the final vertical displacement of the wake vortex system is provided, depending only on the initial vortex circulation and stratification. Cloud resolving simulations are used to examine whether the aircraft-induced initial differences leave a long-lasting mark. These simulations suggest that the synoptic scenario controls the contrail cirrus evolution qualitatively. However, quantitative differences between the contrail cirrus properties of the various aircraft remain over the total simulation period of 6 h. The total extinctions of A380-produced contrails are about 1.5 to 2.5 times higher than those of Bombardier CRJ contrails.
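The abstract states only that the parameterization of the final vertical displacement depends on the initial vortex circulation Gamma_0 and the stratification N. One dimensionally consistent scaling with those two inputs is dz ~ C * sqrt(Gamma_0 / N); the functional form and the prefactor below are assumptions for illustration, not the published fit:

```python
import math

def final_vertical_displacement(gamma0_m2_s, brunt_vaisala_hz, prefactor=1.5):
    """Dimensionally consistent scaling for the final descent of a wake
    vortex pair from initial circulation Gamma_0 [m^2/s] and stratification
    N [1/s]: dz ~ C * sqrt(Gamma_0 / N). The form and prefactor C are
    assumptions, not the parameterization published in the study."""
    return prefactor * math.sqrt(gamma0_m2_s / brunt_vaisala_hz)

# Illustrative inputs: a heavy-aircraft-class circulation of ~600 m^2/s
# versus a regional-jet-class ~200 m^2/s, with typical upper-tropospheric
# stratification N ~ 0.01 s^-1.
dz_large = final_vertical_displacement(600.0, 0.01)
dz_small = final_vertical_displacement(200.0, 0.01)
```

Under this scaling the larger aircraft's wake descends farther, consistent with the deeper early contrails of large aircraft noted above.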
NEEMO 18-20: Analog Testing for Mitigation of Communication Latency During Human Space Exploration
NASA Technical Reports Server (NTRS)
Chappell, Steven P.; Beaton, Kara H.; Miller, Matthew J.; Graff, Trevor G.; Abercromby, Andrew F. J.; Gernhardt, Michael L.; Halcon, Christopher
2016-01-01
NASA Extreme Environment Mission Operations (NEEMO) is an underwater spaceflight analog that provides a true mission-like operational environment and uses buoyancy effects and added weight to simulate different gravity levels. Three missions, NEEMO 18-20, were undertaken from 2014 to 2015, all performed at the Aquarius undersea research habitat. During each mission, the effects of communication latencies on operations concepts, timelines, and tasks were studied. METHODS: Twelve subjects (4 per mission) were weighed out to simulate near-zero or partial gravity extravehicular activity (EVA) and evaluated different operations concepts for integration and management of a simulated Earth-based science team (ST) to provide input and direction during exploration activities. Exploration traverses were preplanned based on precursor data. Subjects completed science-related tasks, including pre-sampling surveys, geologic-based sampling, and marine-based sampling, during saturation dives up to 4 hours in duration that were designed to simulate EVA on Mars or the moons of Mars. One-way communication latencies of 5 and 10 minutes between space and mission control were simulated throughout the missions. Objective data included task completion times, total EVA times, crew idle time, translation time, and ST assimilation time (defined as the time available for the ST to discuss data/imagery after data acquisition). Subjective data included acceptability, simulation quality, capability assessment ratings, and comments. RESULTS: Precursor data can be used effectively to plan and execute exploration traverse EVAs (plans included detailed locations of science sites, high-fidelity imagery of the sites, and directions to landmarks of interest within a site).
Operations concepts that allow for pre-sampling surveys enable efficient traverse execution and meaningful Mission Control Center (MCC) interaction across communication latencies and can be done with minimal crew idle time. Imagery and contextual information from the EVA crew that is transmitted real-time to the intravehicular (IV) crewmember(s) can be used to verify that exploration traverse plans are being executed correctly. That same data can be effectively used by MCC (across comm latency) to provide meaningful feedback and instruction to the crew regarding sampling priorities, additional tasks, and changes to the EVA timeline. Text / data capabilities are preferred over voice capabilities between MCC and IV when executing exploration traverse plans over communication latency.
Yu, Isseki; Tasaki, Tomohiro; Nakada, Kyoko; Nagaoka, Masataka
2010-09-30
The influence of hydrostatic pressure on the partial molar volume (PMV) of the protein apomyoglobin (AMb) was investigated by all-atom molecular dynamics (MD) simulations. Using the time-resolved Kirkwood-Buff (KB) approach, the dynamic behavior of the PMV was identified. The simulated time average value of the PMV and its reduction by 3000 bar pressurization correlated with experimental data. In addition, with the aid of the surficial KB integral method, we obtained the spatial distributions of the components of PMV to elucidate the detailed mechanism of the PMV reduction. New R-dependent PMV profiles identified the regions that increase or decrease the PMV under the high pressure condition. The results indicate that besides the hydration in the vicinity of the protein surface, the outer space of the first hydration layer also significantly influences the total PMV change. These results provide a direct and detailed picture of pressure induced PMV reduction.
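The Kirkwood-Buff route from solution structure to partial molar volume can be sketched numerically: the KB integral G accumulates deviations of the radial distribution function g(r) from its bulk value, and the solute PMV follows as V ≈ kB·T·κT − G. The script below is a minimal illustration with an invented toy g(r); it is not the time-resolved or surficial KB machinery of the paper.

```python
import numpy as np

# Kirkwood-Buff integral G = ∫ (g(r) - 1) 4*pi*r^2 dr from a radial
# distribution function; the PMV then follows as V ≈ kB*T*kappa_T - G.
# The g(r) below is an invented toy profile (excluded core plus one
# hydration peak), not simulation output.

def kb_integral(r, g):
    """Trapezoidal quadrature of the Kirkwood-Buff integrand."""
    integrand = (g - 1.0) * 4.0 * np.pi * r**2
    return 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(r))

r = np.linspace(0.0, 2.0, 2001)                        # nm
core = -np.exp(-((r / 0.25) ** 4))                     # excluded volume
peak = 0.5 * np.exp(-(((r - 0.3) / 0.05) ** 2))        # first hydration shell
g = 1.0 + core + peak
G = kb_integral(r, g)                                  # nm^3 per molecule
print(f"G = {G:.4f} nm^3")
```

Integrating region by region (as with the surficial, R-dependent profiles in the paper) amounts to restricting the quadrature to shells of r.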
Tang, Yunqing; Dai, Luru; Zhang, Xiaoming; Li, Junbai; Hendriks, Johnny; Fan, Xiaoming; Gruteser, Nadine; Meisenberg, Annika; Baumann, Arnd; Katranidis, Alexandros; Gensch, Thomas
2015-01-01
Single-molecule-localization-based super-resolution fluorescence microscopy offers significantly higher spatial resolution than predicted by Abbe's resolution limit for far-field optical microscopy. Such super-resolution images are reconstructed from wide-field or total internal reflection single-molecule fluorescence recordings. Discrimination between the emission of single fluorescent molecules and background noise fluctuations remains a great challenge in current data analysis. Here we present a real-time and robust single-molecule identification and localization algorithm, SNSMIL (Shot Noise based Single Molecule Identification and Localization). The algorithm is based on the intrinsic nature of noise, i.e., its Poisson or shot-noise characteristics, and a new identification criterion, QSNSMIL, is defined. SNSMIL improves the identification accuracy of single fluorescent molecules in experimental or simulated datasets with high and inhomogeneous background. The implementation of SNSMIL relies on a graphics processing unit (GPU), making real-time analysis feasible, as shown for real experimental and simulated datasets.
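The idea behind a shot-noise-based criterion can be shown with a toy detector: under Poisson statistics the background standard deviation equals the square root of its mean, so a pixel is flagged only when it exceeds the local background by a fixed number of shot-noise sigmas. The exact QSNSMIL criterion is defined in the paper; this sketch is a generic stand-in with simulated data.

```python
import numpy as np

# Generic shot-noise detection criterion in the spirit of SNSMIL: under
# Poisson statistics sigma = sqrt(mean), so the threshold tracks the local
# background instead of being a single global value. The actual Q_SNSMIL
# criterion is defined in the paper; this is an illustrative stand-in.

rng = np.random.default_rng(0)

def detect(frame, bg_mean, k=5.0):
    """Flag pixels exceeding the background by k shot-noise sigmas."""
    return frame > bg_mean + k * np.sqrt(bg_mean)

# Inhomogeneous background (20 -> 80 counts/pixel) plus one bright emitter.
bg = np.linspace(20.0, 80.0, 64)[None, :] * np.ones((64, 1))
frame = rng.poisson(bg).astype(float)
frame[32, 32] += 200.0                      # single-molecule signal
mask = detect(frame, bg)
print(mask[32, 32], int(mask.sum()))
```

Because the threshold scales with sqrt(background), the brighter half of the field does not flood the detector with false positives the way a single global threshold would.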
NASA Astrophysics Data System (ADS)
Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.
2006-09-01
Characterization of the multiphase systems occurring in fermentation processes is a time-consuming and tedious task when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of the diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm, which was tested in two-, three- and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no statistically significant differences. The method reduced the total processing time for the measurements of bubbles and drops in the different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
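The standard circular Hough transform underlying such contour reconstruction votes each edge point onto all candidate circle centres at a given radius; the accumulator peak marks the centre. A minimal sketch with synthetic edge points and a single fixed radius (the paper's improved variant and its pre-processing are not reproduced here):

```python
import numpy as np

# Minimal circular Hough transform: every edge point votes for all centres
# that would place it on a circle of the given radius; the accumulator
# maximum recovers the true centre. Synthetic data, single fixed radius.

def hough_circle(edge_points, radius, shape):
    """Accumulate centre votes for circles of a fixed radius."""
    acc = np.zeros(shape)
    thetas = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic "bubble": edge points on a circle of radius 10 centred at (30, 40).
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
edges = [(30 + 10 * np.sin(a), 40 + 10 * np.cos(a)) for a in angles]
acc = hough_circle(edges, radius=10, shape=(64, 64))
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(cy, cx)
```

In practice the radius is also unknown, so the accumulator gains a third dimension (one centre plane per candidate radius), which is where robustness improvements such as the one described above pay off.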
Middle atmosphere dynamical sources of the semiannual oscillation in the thermosphere and ionosphere
NASA Astrophysics Data System (ADS)
Jones, M.; Emmert, J. T.; Drob, D. P.; Siskind, D. E.
2017-01-01
The strong global semiannual oscillation (SAO) in thermospheric density has been observed for five decades, but definitive knowledge of its source has been elusive. We use the National Center for Atmospheric Research thermosphere-ionosphere-mesosphere electrodynamics general circulation model (TIME-GCM) to study how middle atmospheric dynamics generate the SAO in the thermosphere-ionosphere (T-I). The "standard" TIME-GCM simulates, from first principles, SAOs in thermospheric mass density and ionospheric total electron content that agree well with observed climatological variations. Diagnosis of the globally averaged continuity equation for atomic oxygen ([O]) shows that the T-I SAO originates in the upper mesosphere, where an SAO in [O] is forced by nonlinear, resolved-scale variations in the advective, net tidal, and diffusive transport of O. Contrary to earlier hypotheses, TIME-GCM simulations demonstrate that intra-annually varying eddy diffusion by breaking gravity waves may not be the primary driver of the T-I SAO: A pronounced SAO is produced without parameterized gravity waves.
Hudson, Thomas J; Looi, Thomas; Pichardo, Samuel; Amaral, Joao; Temple, Michael; Drake, James M; Waspe, Adam C
2018-02-01
Magnetic resonance-guided focused ultrasound (MRgFUS) is emerging as a treatment alternative for osteoid osteoma and painful bone metastases. This study describes a new simulation platform that predicts the distribution of heat generated by MRgFUS when applied to bone tissue. Calculation of the temperature distribution was performed using two mathematical models. The first determined the propagation and absorption of acoustic energy through each medium, using a multilayered approximation of the Rayleigh integral method. The ultrasound energy distribution derived from these equations was then converted to heat energy, and the second model used the heat generated to determine the final temperature distribution via a finite-difference time-domain implementation of Pennes' bio-heat transfer equation. Anatomical surface geometry was generated using a modified version of a mesh-based semiautomatic segmentation algorithm, and both the acoustic and thermodynamic models were calculated using a parallelized algorithm running on a graphics processing unit (GPU) to greatly accelerate computation time. A series of seven porcine experiments were performed to validate the model, comparing simulated temperatures to MR thermometry and assessing spatial, temporal, and maximum temperature accuracy in the soft tissue. The parallelized algorithm performed acoustic and thermodynamic calculations on grids of over 10⁸ voxels in under 30 s for a simulated 20 s of heating and 40 s of cooling, with a maximum time per calculated voxel of less than 0.3 μs. Accuracy was assessed by comparing the soft tissue thermometry to the simulation in the soft tissue adjacent to bone using four metrics. The maximum temperature difference between the simulation and thermometry in a region of interest around the bone was 5.43 ± 3.51°C (average absolute difference), a percentage difference of 16.7%.
The difference in heating location resulted in a total root-mean-square error of 4.21 ± 1.43 mm. The total size of the ablated tissue calculated from the thermal dose approximation in the simulation was, on average, 67.6% smaller than measured from the thermometry. The cooldown was much faster in the simulation, where it decreased by 14.22 ± 4.10°C more than the thermometry in 40 s after sonication ended. The use of a Rayleigh-based acoustic model combined with a discretized bio-heat transfer model provided a rapid three-dimensional calculation of the temperature distribution through bone and soft tissue during MRgFUS application, and the parallelized GPU algorithm provided the computational speed that would be necessary for an intraoperative treatment planning software platform. © 2017 American Association of Physicists in Medicine.
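The thermal half of such a platform, an explicit finite-difference time-domain update of Pennes' bio-heat equation, can be sketched in 1-D (the platform itself runs in 3-D on a GPU). Tissue properties and the focal heating term below are generic illustrative values, not the study's porcine parameters.

```python
import numpy as np

# 1-D explicit finite-difference time-domain update of Pennes' bio-heat
# equation: rho*c*dT/dt = k*d2T/dx2 + wb*cb*(Ta - T) + Q. Generic
# soft-tissue properties and an assumed focal heating term.

rho, c, k = 1050.0, 3600.0, 0.5     # density, heat capacity, conductivity (SI)
wb, cb, Ta = 0.5, 3800.0, 37.0      # perfusion, blood heat capacity, arterial T
nx, dx, dt = 101, 1.0e-3, 0.05      # 1 mm grid; dt well below the explicit
                                    # stability limit dx^2*rho*c/(2k) ~ 3.8 s
T = np.full(nx, 37.0)               # initial tissue temperature (deg C)
Q = np.zeros(nx)
Q[45:56] = 2.0e6                    # focal acoustic power deposition (W/m^3)

for _ in range(400):                # 400 steps of 0.05 s = 20 s of heating
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt / (rho * c) * (k * lap + wb * cb * (Ta - T[1:-1]) + Q[1:-1])

print(f"peak temperature after 20 s of heating: {T.max():.1f} deg C")
```

The per-voxel arithmetic is identical in 3-D, which is why the scheme maps so well onto a GPU: every voxel's update depends only on its immediate neighbours from the previous time step.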
Smith, Richard L.; Repert, Deborah A.; Barber, Larry B.; LeBlanc, Denis R.
2013-01-01
The consequences of groundwater contamination can remain long after a contaminant source has been removed. Documentation of natural aquifer recoveries and empirical tools to predict recovery time frames and associated geochemical changes are generally lacking. This study characterized the long-term natural attenuation of a groundwater contaminant plume in a sand and gravel aquifer on Cape Cod, Massachusetts, after the removal of the treated-wastewater source. Although concentrations of dissolved organic carbon (DOC) and other soluble constituents have decreased substantially in the 15 years since the source was removed, the core of the plume remains anoxic and has sharp redox gradients and elevated concentrations of nitrate and ammonium. Aquifer sediment was collected from near the former disposal site at several points in time and space along a 0.5-km-long transect extending downgradient from the disposal site, and analyses of the sediment were correlated with changes in plume composition. Total sediment carbon content was generally low (<8 to 55.8 μmol (g dry wt)⁻¹) but was positively correlated with oxygen consumption rates in laboratory incubations, which ranged from 11.6 to 44.7 nmol (g dry wt)⁻¹ day⁻¹. Total water-extractable organic carbon was <10–50% of the total carbon content but was the most biodegradable portion of the carbon pool. Carbon/nitrogen (C/N) ratios in the extracts increased more than 10-fold with time, suggesting that organic carbon degradation and oxygen consumption could become N-limited as the sorbed C and dissolved inorganic nitrogen (DIN) pools produced by the degradation separate with time by differential transport. A 1-D model using total degradable organic carbon values was constructed to simulate oxygen consumption and transport and calibrated by using observed temporal changes in oxygen concentrations at selected wells. The simulated travel velocity of the oxygen gradient was 5–13% of the groundwater velocity.
This suggests that the total sorbed carbon pool is large relative to the rate of oxygen entrainment and will be impacting groundwater geochemistry for many decades. This has implications for long-term oxidation of reduced constituents, such as ammonium, that are being transported downgradient away from the infiltration beds toward surface and coastal discharge zones.
Koukos, Panagiotis I; Glykos, Nicholas M
2014-08-28
Folding molecular dynamics simulations amounting to a grand total of 4 μs of simulation time were performed on two peptides (with native and mutated sequences) derived from loop 3 of the vammin protein and the results compared with the experimentally known peptide stabilities and structures. The simulations faithfully and accurately reproduce the major experimental findings and show that (a) the native peptide is mostly disordered in solution, (b) the mutant peptide has a well-defined and stable structure, and (c) the structure of the mutant is an irregular β-hairpin with a non-glycine β-bulge, in excellent agreement with the peptide's known NMR structure. Additionally, the simulations also predict the presence of a very small β-hairpin-like population for the native peptide but surprisingly indicate that this population is structurally more similar to the structure of the native peptide as observed in the vammin protein than to the NMR structure of the isolated mutant peptide. We conclude that, at least for the given system, force field, and simulation protocol, folding molecular dynamics simulations appear to be successful in reproducing the experimentally accessible physical reality to a satisfactory level of detail and accuracy.
Design of the biosonar simulator for dolphin's clicks waveform reproduction
NASA Astrophysics Data System (ADS)
Ishii, Ken; Akamatsu, Tomonari; Hatakeyama, Yoshimi
1992-03-01
The emitted clicks of Dall's porpoises consist of a pulse train of burst signals with an ultrasonic carrier frequency. The authors have designed a biosonar simulator to reproduce the waveforms associated with a dolphin's clicks underwater. The total reproduction system consists of a click signal acquisition block, a waveform analysis block, a memory unit, a click simulator, and an underwater ultrasonic transmitter. In operation, data stored in an EPROM (Erasable Programmable Read Only Memory) are read out sequentially by a fast clock and converted to analog output signals. An ultrasonic power amplifier then reproduces these signals through a transmitter. The click-replaying block, referred to as the BSS (Biosonar Simulator), is what actually simulates the clicks; its details are described in this report. A unit waveform is defined and divided into a burst period and a waiting period. Clicks are a sequence based on this unit waveform, with digital waveform data read out sequentially from an EPROM. The basic parameters of the BSS are as follows: (1) reading clock, 100 ns to 25.4 μs; (2) number of reading clocks, 34 to 1024; (3) counter clock in the waiting period, 100 ns to 25.4 μs; (4) number of counter clocks, zero to 16,777,215; (5) number of burst/waiting repetition cycles, one to 128; and (6) transmission level adjustment by a programmable attenuator, zero to 86.5 dB. These basic functions enable the BSS to replay Dall's porpoise clicks precisely.
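The replay scheme, a unit waveform made of a burst period followed by a waiting period, repeated a programmed number of cycles, can be sketched in a few lines. The carrier frequency, envelope shape, and counts below are illustrative choices within the listed parameter ranges, not measured porpoise click parameters.

```python
import numpy as np

# Unit waveform = burst period + waiting period, repeated n times, as in
# the BSS replay scheme. Carrier frequency, envelope shape, and counts are
# illustrative values chosen inside the listed parameter ranges.

fs = 1.0e6                       # 1 MHz reading clock (1 us per sample)
f_carrier = 135.0e3              # ultrasonic carrier (Hz), assumed
n_burst, n_wait = 100, 9900      # samples in burst and waiting periods
n_repeat = 8                     # burst/waiting repetition cycles

t = np.arange(n_burst) / fs
burst = np.hanning(n_burst) * np.sin(2.0 * np.pi * f_carrier * t)
unit = np.concatenate([burst, np.zeros(n_wait)])   # one unit waveform
clicks = np.tile(unit, n_repeat)                   # the replayed click train
print(f"{len(clicks)} samples, peak amplitude {np.abs(clicks).max():.2f}")
```

In the hardware the equivalent of `clicks` lives in the EPROM and is clocked through a DAC, attenuator, and power amplifier to the underwater transmitter.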
NASA Technical Reports Server (NTRS)
Cruz, Juan R.; Way, David W.; Shidner, Jeremy D.; Davis, Jody L.; Adams, Douglas S.; Kipp, Devin M.
2013-01-01
The Mars Science Laboratory used a single mortar-deployed disk-gap-band parachute of 21.35 m nominal diameter to assist in the landing of the Curiosity rover on the surface of Mars. The parachute system's performance on Mars has been reconstructed using data from the on-board inertial measurement unit, atmospheric models, and terrestrial measurements of the parachute system. In addition, the parachute performance results were compared against the end-to-end entry, descent, and landing (EDL) simulation created to design, develop, and operate the EDL system. Mortar performance was nominal. The time from mortar fire to suspension lines stretch (deployment) was 1.135 s, and the time from suspension lines stretch to first peak force (inflation) was 0.635 s. These times were slightly shorter than those used in the simulation. The reconstructed aerodynamic portion of the first peak force was 153.8 kN; the median value for this parameter from an 8,000-trial Monte Carlo simulation was 175.4 kN - 14% higher than the reconstructed value. Aeroshell dynamics during the parachute phase of EDL were evaluated by examining the aeroshell rotation rate and rotational acceleration. The peak values of these parameters were 69.4 deg/s and 625 deg/sq s, respectively, which were well within the acceptable range. The EDL simulation was successful in predicting the aeroshell dynamics within reasonable bounds. The average total parachute force coefficient for Mach numbers below 0.6 was 0.624, which is close to the pre-flight model nominal drag coefficient of 0.615.
Observations and simulations of the ionospheric lunar tide: Seasonal variability
NASA Astrophysics Data System (ADS)
Pedatella, N. M.
2014-07-01
The seasonal variability of the ionospheric lunar tide is investigated using a combination of Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) observations and thermosphere-ionosphere-mesosphere electrodynamics general circulation model (TIME-GCM) simulations. The present study focuses on the seasonal variability of the lunar tide in the ionosphere and its potential connection to the occurrence of stratosphere sudden warmings (SSWs). COSMIC maximum F region electron density (NmF2) and total electron content observations reveal a primarily annual variation of the ionospheric lunar tide, with maximum amplitudes occurring at low latitudes during December-February. Simulations of the lunar tide climatology in TIME-GCM display a similar annual variability as the COSMIC observations. This leads to the conclusion that the annual variability of the lunar tide in the ionosphere is not solely due to the occurrence of SSWs. Rather, the annual variability of the lunar tide in the ionosphere is generated by the seasonal variability of the lunar tide at E region altitudes. However, compared to the observations, the ionospheric lunar tide annual variability is weaker in the climatological simulations, which is attributed to the occurrence of SSWs during the majority of the years included in the observations. Introducing an SSW into the TIME-GCM simulation leads to an additional enhancement of the lunar tide during Northern Hemisphere winter, increasing the lunar tide annual variability and resulting in an annual variability that is more consistent with the observations. The occurrence of SSWs can therefore potentially bias lunar tide climatologies, and it is important to consider these effects in studies of the lunar tide in the atmosphere and ionosphere.
A carbon balance model for the great dismal swamp ecosystem
Sleeter, Rachel; Sleeter, Benjamin M.; Williams, Brianna; Hogan, Dianna; Hawbaker, Todd J.; Zhu, Zhiliang
2017-01-01
Background: Carbon storage potential has become an important consideration for land management and planning in the United States. The ability to assess ecosystem carbon balance can help land managers understand the benefits and tradeoffs between different management strategies. This paper demonstrates an application of the Land Use and Carbon Scenario Simulator (LUCAS) model developed for local-scale land management at the Great Dismal Swamp National Wildlife Refuge. We estimate the net ecosystem carbon balance by considering past ecosystem disturbances resulting from storm damage, fire, and land management actions including hydrologic inundation, vegetation clearing, and replanting. Results: We modeled the annual ecosystem carbon stock and flow rates for the 30-year historic time period of 1985–2015, using age-structured forest growth curves and known data for disturbance events and management activities. The 30-year total net ecosystem production was estimated to be a net sink of 0.97 Tg C. When a hurricane and six historic fire events were considered in the simulation, the Great Dismal Swamp became a net source of 0.89 Tg C. The cumulative above and below-ground carbon loss estimated from the South One and Lateral West fire events totaled 1.70 Tg C, while management activities removed an additional 0.01 Tg C. The carbon loss in below-ground biomass alone totaled 1.38 Tg C, with the balance (0.31 Tg C) coming from above-ground biomass and detritus. Conclusions: Natural disturbances substantially impact net ecosystem carbon balance in the Great Dismal Swamp. Through alternative management actions such as re-wetting, below-ground biomass loss may have been avoided, resulting in an added carbon storage capacity of 1.38 Tg.
Based on two model assumptions used to simulate the peat system (a burn scar totaling 70 cm in depth, and a soil carbon accumulation rate of 0.36 t C ha⁻¹ year⁻¹ for Atlantic white cedar), the total soil carbon loss from the South One and Lateral West fires would take approximately 1740 years to re-amass. Due to the impractical time horizon this presents for land managers, this particular loss is considered permanent. Going forward, the baseline carbon stock and flow parameters presented here will be used as reference conditions to model future scenarios of land management and disturbance.
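The re-amassment estimate is a simple ratio of lost carbon to annual accumulation over the affected area. The abstract does not state the burned area, so the value below is hypothetical, chosen only to illustrate the arithmetic.

```python
# Back-of-the-envelope check of the ~1740-year re-amassment estimate.
# The burned area is not given in this abstract; 2,200 ha is a
# hypothetical value chosen only to illustrate the arithmetic.

soil_carbon_loss_t = 1.38e6        # below-ground loss, 1.38 Tg C in tonnes
accumulation_t_ha_yr = 0.36        # soil C accumulation (t C per ha per year)
burned_area_ha = 2200.0            # ASSUMED, not stated in the abstract

years = soil_carbon_loss_t / (accumulation_t_ha_yr * burned_area_ha)
print(f"recovery time ~ {years:.0f} years")
```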
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J
2009-01-01
The orthogonal recursive bisection (ORB) algorithm can be used as a data decomposition strategy to distribute the large data set of a cardiac model across a distributed-memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of the computational load over the elements in the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set, in order to optimize load balancing. The anatomical data set was given by both ventricles of the Visible Female data set at 0.2 mm resolution, with fiber orientation included. The data set was rotated by 90 degrees around the x, y and z axes, respectively. By either translating the resulting negative coordinates or simply taking their magnitude, we created 14 data sets of the same anatomy with different orientations and positions in the overall volume. Computational load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100, to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance across the different data sets and orientations. The simulations were carried out for load ratios 1:10, 1:25 and 1:38.85 on a 512-processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms, which extrapolates to a difference of about 28 h for a simulation covering 10 s of simulated time. However, given larger processor partitions, the difference in run time decreases and becomes less significant.
Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
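The orientation dependence follows from how ORB works: each cut is placed at the median of the cumulative computational load along alternating axes, so where the heavy (tissue) elements sit in the bounding volume determines the cut positions. A minimal 2-D sketch with an assumed 1:10 non-tissue:tissue load ratio:

```python
import numpy as np

# Orthogonal recursive bisection on weighted grid points: each cut is placed
# at the load median along alternating axes, so the cut positions depend on
# where the heavy (tissue) elements sit in the volume. Toy 2-D example.

def orb(points, weights, n_parts, axis=0):
    """Split points into n_parts (a power of two) load-balanced boxes."""
    if n_parts == 1:
        return [points]
    order = np.argsort(points[:, axis])
    pts, w = points[order], weights[order]
    half = np.searchsorted(np.cumsum(w), w.sum() / 2.0)   # load median
    nxt = (axis + 1) % points.shape[1]
    return (orb(pts[:half], w[:half], n_parts // 2, nxt)
            + orb(pts[half:], w[half:], n_parts // 2, nxt))

ys, xs = np.mgrid[0:16, 0:16]
pts = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
tissue = (pts[:, 0] < 8) & (pts[:, 1] < 8)     # "anatomy" in one corner
w = np.where(tissue, 10.0, 1.0)                # computational load per element
parts = orb(pts, w, n_parts=4)
print([len(p) for p in parts])                 # unequal counts, balanced load
```

Rotating or translating the anatomy changes which coordinates carry the heavy weights, moving the median cuts, which is exactly the effect studied above.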
NASA Astrophysics Data System (ADS)
Bellos, V.; Mahmoodian, M.; Leopold, U.; Torres-Matallana, J. A.; Schutz, G.; Clemens, F.
2017-12-01
Surrogate models help to decrease the run-time of computationally expensive, detailed models. Recent studies show that Gaussian process emulators (GPEs) are promising techniques in the field of urban drainage modelling. The present study focuses on developing a GPE-based surrogate model for later application in Real-Time Control (RTC), using input and output time series of a complex simulator. The case study is an urban drainage catchment in Luxembourg. A detailed simulator, implemented in InfoWorks ICM, is used to generate 120 input-output ensembles, of which 100 are used for training the emulator and 20 for validating the results. An ensemble of historical rainfall events of 2-hour duration with 10-minute time steps is used as the input data. Two example outputs are selected: the wastewater volume and the total COD concentration in a storage tank in the network. The emulator is tested with unseen random rainfall events from the ensemble dataset. The emulator is approximately 1000 times faster than the original simulator for this small case study. Whereas the overall patterns of the simulator are matched by the emulator, in some cases the emulator deviates from the simulator. To quantify the accuracy of the emulator relative to the original simulator, the Nash-Sutcliffe efficiency (NSE) between emulator and simulator is calculated for unseen rainfall scenarios. The NSE for tank volume ranges from 0.88 to 0.99 with a mean value of 0.95, whereas for COD it ranges from 0.71 to 0.99 with a mean value of 0.92. The emulator predicts the tank volume with higher accuracy because the relationship between rainfall intensity and tank volume is linear. For COD, which has non-linear behaviour, the predictions are less accurate and more uncertain, in particular when rainfall intensity increases. These predictions were improved by including a larger amount of training data for the higher rainfall intensities.
The accuracy of the emulator predictions was observed to depend on the design of the training ensemble and on the amount of data supplied. Finally, further investigation is required to test the applicability of this type of fast emulator to model-based RTC applications, in which a limited number of inputs and outputs are considered over a short prediction horizon.
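The Nash-Sutcliffe efficiency used to score the emulator is straightforward to compute; a minimal sketch with invented simulator and emulator series (NSE = 1 is a perfect match, NSE ≤ 0 means no better than predicting the mean):

```python
import numpy as np

# Nash-Sutcliffe efficiency between a reference series (here the detailed
# simulator) and a model (here the emulator):
#   NSE = 1 - sum((obs - mod)^2) / sum((obs - mean(obs))^2)

def nse(observed, modelled):
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    return 1.0 - np.sum((observed - modelled) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Toy time series: 2 h at 10-min steps, plus a small structured error.
t = np.linspace(0.0, 2.0, 13)
sim = 100.0 * np.exp(-((t - 1.0) ** 2))    # simulator tank-volume hydrograph
emu = sim + 2.0 * np.sin(5.0 * t)          # emulator with invented error
print(f"NSE = {nse(sim, emu):.3f}")
```

Because the denominator is the variance of the reference series, the same absolute error yields a lower NSE on flatter (less variable) outputs, which is one reason non-linear quantities like COD tend to score worse.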
A Secure and High-Fidelity Live Animal Model for Off-Pump Coronary Bypass Surgery Training.
Liu, Xiaopeng; Yang, Yan; Meng, Qiang; Sun, Jiakang; Luo, Fuliang; Cui, Yongchun; Zhang, Hong; Zhang, Dong; Tang, Yue
2016-01-01
Existing simulators for off-pump coronary artery (CA) bypass grafting training are unable to provide cardiac surgery residents with all the skills they need before entering the operating room. In this study, we introduce a secure and high-fidelity live animal model to supplement in vitro simulators for off-pump CA bypass grafting training. The left internal thoracic artery (ITA) of 3 Chinese miniature pigs was grafted to the left anterior descending CA using an end-to-side anastomosis. The free segment of the ITA was fixed on the ventricle surface, making it a simulative CA beating in synchrony with the heart. A total of 6 to 8 training anastomoses were made on each ITA. The study was performed at the Animal Experiment Center of Fuwai Hospital. In total, 19 resident surgeons with at least 3 years of cardiac surgery experience were trained using the new model; their performances were recorded and reviewed. Simulative coronary arteries were successfully constructed in all 3 animals with no adverse events observed. A total of 19 anastomoses were then completed: 7 on 1 pig and 6 on each of the other 2 animals. Time required for an anastomosis was 782 ± 107 seconds. Anastomotic leakage was observed in 10/19 procedures; the most frequent site (7/10) was the toe of the anastomosis, and the most common cause was uneven spacing, small stitch margins, or both. Emergencies that occurred during training included hypotension (7 procedures), tachyarrhythmia (4 procedures), and low blood oxygen saturation (1 procedure). This study demonstrated the safety and feasibility of our new live pig model for training resident surgeons. The simulative arteries can be easily constructed and were long enough to accommodate at least 6 anastomoses. In both lumen diameter and motion status, they proved a good substitute for the CA. Copyright © 2016. Published by Elsevier Inc.
Torrico, M; Aguilar, L; González, N; Giménez, M J; Echeverría, O; Cafini, F; Sevillano, D; Alou, L; Coronel, P; Prieto, J
2007-10-01
The aim of this study was to explore the bactericidal activity of total and free serum simulated concentrations after oral administration of cefditoren (400 mg, twice daily [bid]) versus oral administration of an amoxicillin-clavulanic acid extended-release formulation (2,000/125 mg bid) against Haemophilus influenzae. A computerized pharmacodynamic simulation was performed, and colony counts and beta-lactamase activity were determined over 48 h. Three strains were used: an ampicillin-susceptible strain, a beta-lactamase-negative ampicillin-resistant (BLNAR) strain (also resistant to amoxicillin-clavulanic acid), and a beta-lactamase-positive amoxicillin-clavulanic acid-resistant (BLPACR) strain, with cefditoren MICs of ≤0.12 μg/ml and amoxicillin-clavulanic acid MICs of 2, 8, and 8 μg/ml, respectively. Against the ampicillin-susceptible and BLNAR strains, bactericidal activity (≥3-log10 reduction) was obtained from 6 h on with either total or free cefditoren or amoxicillin-clavulanic acid. Against the BLPACR strain, free cefditoren showed bactericidal activity from 8 h on. In amoxicillin-clavulanic acid simulations the increase in colony counts from 4 h on occurred in parallel with the increase in beta-lactamase activity for the BLPACR strain. Since both the BLNAR and BLPACR strains exhibited the same MIC, this was due to the significantly lower (P ≤ 0.012) amoxicillin concentrations from 4 h on in simulations with beta-lactamase-positive versus -negative strains, which decreased the time above the MIC (T>MIC). From a pharmacodynamic point of view, the theoretical amoxicillin T>MIC against strains with elevated ampicillin/amoxicillin-clavulanic acid MICs should be considered with caution, since the presence of beta-lactamase inactivates the antibiotic, rendering theoretical calculations inaccurate.
The experimental bactericidal activity of cefditoren is maintained over the dosing interval regardless of the presence of a mutation in the ftsI gene or beta-lactamase production.
Simulation-Optimization Model for Seawater Intrusion Management at Pingtung Coastal Area, Taiwan
NASA Astrophysics Data System (ADS)
Huang, P. S.; Chiu, Y.
2015-12-01
In the 1970s, agriculture and aquaculture developed rapidly in the Pingtung coastal area of southern Taiwan. The groundwater aquifers were over-pumped, causing seawater intrusion. To remediate the contaminated groundwater and identify the best strategies of groundwater usage, a management model that searches for optimal groundwater operational strategies is developed in this study. The objective function minimizes the total amount of injected water, and a set of constraints ensures that the groundwater levels and concentrations are satisfied. A three-dimensional density-dependent flow and transport simulation model, SEAWAT, developed by the U.S. Geological Survey, is selected to simulate the phenomenon of seawater intrusion. The simulation model is well calibrated against field measurements and is replaced by a surrogate model of trained artificial neural networks (ANNs) to reduce the computational time. The ANNs are embedded in the management model to link the simulation and optimization models, and the global optimizer of differential evolution (DE) is applied to solve the management model. The optimal results show that the fully trained ANNs can substitute for the original simulation model and greatly reduce computational time. Under an appropriate setting of the objective function and constraints, DE finds the optimal injection rates at predefined barriers. Concentrations at the target locations decrease by more than 50 percent within the 20-year planning horizon. Keywords: seawater intrusion, groundwater management, numerical model, artificial neural networks, differential evolution
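The surrogate-plus-optimizer coupling described above can be sketched in a few lines. The "surrogate" below is a hypothetical closed-form stand-in for the trained ANN, and the penalty weight, bounds, and DE settings are illustrative only:

```python
import random

def surrogate_concentration(rates):
    """Stand-in for the trained ANN surrogate (hypothetical): maps injection
    rates at two barriers to a predicted salinity at a target location."""
    q1, q2 = rates
    return 100.0 / (1.0 + 0.08 * q1 + 0.05 * q2)

def objective(rates, target=50.0):
    """Minimize total injection; the concentration constraint is handled
    with a simple penalty term."""
    excess = max(0.0, surrogate_concentration(rates) - target)
    return sum(rates) + 1e3 * excess

def differential_evolution(bounds, pop=20, gens=100, f=0.8, cr=0.9, seed=1):
    """Textbook DE/rand/1/bin loop (not the paper's implementation)."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([x for j, x in enumerate(xs) if j != i], 3)
            trial = [min(max(a[d] + f * (b[d] - c[d]), bounds[d][0]), bounds[d][1])
                     if rng.random() < cr else xs[i][d]
                     for d in range(dim)]
            if objective(trial) <= objective(xs[i]):
                xs[i] = trial
    return min(xs, key=objective)

best = differential_evolution([(0.0, 50.0), (0.0, 50.0)])
```

Replacing the closed-form stand-in with a trained network call is the only change needed to reproduce the study's workflow in outline.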
Introduction of steered molecular dynamics into UNRES coarse-grained simulations package.
Sieradzan, Adam K; Jakubowski, Rafał
2017-03-30
In this article, an implementation of steered molecular dynamics (SMD) in the coarse-grained UNited RESidue (UNRES) simulation package is presented. Two variants of SMD have been implemented: constant force and constant velocity. A major advantage of implementing SMD in the UNRES force field is that it allows pulling at speeds significantly lower than those accessible in simulations with an all-atom representation of the system, within reasonable computational time. Pulling speeds closer to those used in atomic force spectroscopy therefore become attainable. The newly implemented method was tested in a microcanonical run to verify whether the artificial constraints affect conservation of the total energy of the system. Moreover, as a time-dependent artificial force was introduced, the thermostat behavior was tested. The new method was also tested by unfolding the Fn3 domain of human contactin 1 and the I27 titin domain. The results were compared with a Gō-like force field, an all-atom force field, and experimental results. © 2017 Wiley Periodicals, Inc.
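The constant-velocity variant can be illustrated with a toy overdamped coordinate pulled by a moving harmonic anchor; the spring constant, pulling speed, and friction below are arbitrary illustrative numbers, not UNRES parameters:

```python
def smd_force(t, x, k=10.0, v=0.001, x0=0.0):
    """Constant-velocity SMD: harmonic spring whose anchor moves at speed v,
    F = k * (x0 + v*t - x). (Constant-force SMD would simply return a fixed F0.)"""
    return k * (x0 + v * t - x)

# Toy overdamped dynamics: the pulled coordinate lags the anchor so that,
# at steady state, the spring force balances the friction drag gamma * v.
x, dt, gamma = 0.0, 0.1, 1.0
forces = []
for step in range(10000):
    f = smd_force(step * dt, x)
    x += f / gamma * dt
    forces.append(f)
```

In a real SMD run this force is added to the force-field gradient at every integration step; the recorded force-extension trace is what is compared with force spectroscopy.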
The effect of dynamic scheduling and routing in a solid waste management system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johansson, Ola M.
2006-07-01
Solid waste collection and hauling account for the greater part of the total cost in modern solid waste management systems. In a recent initiative, 3300 Swedish recycling containers have been fitted with level sensors and wireless communication equipment, giving waste collection operators access to real-time information on the status of each container. In this study, analytical modeling and discrete-event simulation have been used to evaluate different scheduling and routing policies utilizing the real-time data. In addition to the general models developed, an empirical simulation study has been performed on the downtown recycling station system in Malmö, Sweden. From the study, it can be concluded that dynamic scheduling and routing policies exist that have lower operating costs, shorter collection and hauling distances, and reduced labor hours compared to the static policy with fixed routes and pre-determined pick-up frequencies employed by many waste collection operators today. The results of the analytical model and the simulation models are coherent, and consistent with the experiences of the waste collection operators.
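The static-versus-dynamic comparison can be caricatured with a simple daily simulation; the fill rates, the 80% trigger threshold, and the weekly static schedule below are invented for illustration, not taken from the Malmö system:

```python
import random

def simulate(policy, days=365, n_bins=50, seed=7):
    """Count pick-ups and overflow-days for a pick-up policy applied to
    containers that fill at random (hypothetical) daily rates."""
    rng = random.Random(seed)
    rates = [rng.uniform(0.02, 0.15) for _ in range(n_bins)]  # fill fraction/day
    fill = [0.0] * n_bins
    pickups = overflows = 0
    for day in range(days):
        for i in range(n_bins):
            fill[i] = min(1.0, fill[i] + rates[i])
            if fill[i] >= 1.0:
                overflows += 1          # container full for a day
            if policy(day, fill[i]):
                fill[i] = 0.0
                pickups += 1
    return pickups, overflows

def static_policy(day, level):
    return day % 7 == 0                 # fixed weekly route, ignores sensors

def dynamic_policy(day, level):
    return level >= 0.8                 # sensor-triggered pick-up
```

Even this toy version shows the qualitative result of the study: the sensor-triggered policy visits containers less often while avoiding overflows.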
Choi, Young; Eom, Youngsub; Song, Jong Suk; Kim, Hyo Myung
2018-05-15
To compare the effect of posterior corneal astigmatism on the estimation of total corneal astigmatism using anterior corneal measurements (simulated keratometry [K]) between eyes with keratoconus and healthy eyes. Thirty-three eyes of 33 patients with keratoconus of grade I or II and 33 eyes of 33 age- and sex-matched healthy control subjects were enrolled. Anterior, posterior, and total corneal cylinder powers and flat meridians measured by a single Scheimpflug camera were analyzed. The difference in corneal astigmatism between the simulated K and total cornea was evaluated. The mean anterior, posterior, and total corneal cylinder powers of the keratoconus group (4.37 ± 1.73, 0.95 ± 0.39, and 4.36 ± 1.74 cylinder diopters [CD], respectively) were significantly greater than those of the control group (1.10 ± 0.68, 0.39 ± 0.18, and 0.97 ± 0.63 CD, respectively). The cylinder power difference between the simulated K and total cornea was positively correlated with the posterior corneal cylinder power and negatively correlated with the absolute flat meridian difference between the simulated K and total cornea in both groups. The mean magnitude of the vector difference between the astigmatism of the simulated K and total cornea of the keratoconus group (0.67 ± 0.67 CD) was significantly larger than that of the control group (0.28 ± 0.12 CD). Eyes with keratoconus had greater estimation errors of total corneal astigmatism based on anterior corneal measurement than did healthy eyes. Posterior corneal surface measurement should be more emphasized to determine the total corneal astigmatism in eyes with keratoconus. © 2018 The Korean Ophthalmological Society.
Choi, Young; Song, Jong Suk; Kim, Hyo Myung
2018-01-01
Purpose To compare the effect of posterior corneal astigmatism on the estimation of total corneal astigmatism using anterior corneal measurements (simulated keratometry [K]) between eyes with keratoconus and healthy eyes. Methods Thirty-three eyes of 33 patients with keratoconus of grade I or II and 33 eyes of 33 age- and sex-matched healthy control subjects were enrolled. Anterior, posterior, and total corneal cylinder powers and flat meridians measured by a single Scheimpflug camera were analyzed. The difference in corneal astigmatism between the simulated K and total cornea was evaluated. Results The mean anterior, posterior, and total corneal cylinder powers of the keratoconus group (4.37 ± 1.73, 0.95 ± 0.39, and 4.36 ± 1.74 cylinder diopters [CD], respectively) were significantly greater than those of the control group (1.10 ± 0.68, 0.39 ± 0.18, and 0.97 ± 0.63 CD, respectively). The cylinder power difference between the simulated K and total cornea was positively correlated with the posterior corneal cylinder power and negatively correlated with the absolute flat meridian difference between the simulated K and total cornea in both groups. The mean magnitude of the vector difference between the astigmatism of the simulated K and total cornea of the keratoconus group (0.67 ± 0.67 CD) was significantly larger than that of the control group (0.28 ± 0.12 CD). Conclusions Eyes with keratoconus had greater estimation errors of total corneal astigmatism based on anterior corneal measurement than did healthy eyes. Posterior corneal surface measurement should be more emphasized to determine the total corneal astigmatism in eyes with keratoconus. PMID:29770640
Leffondré, Karen; Abrahamowicz, Michal; Siemiatycki, Jack
2003-12-30
Case-control studies are typically analysed using the conventional logistic model, which does not directly account for changes in the covariate values over time. Yet, many exposures may vary over time. The most natural alternative to handle such exposures would be to use the Cox model with time-dependent covariates. However, its application to case-control data opens the question of how to manipulate the risk sets. Through a simulation study, we investigate how the accuracy of the estimates of Cox's model depends on the operational definition of risk sets and/or on some aspects of the time-varying exposure. We also assess the estimates obtained from conventional logistic regression. The lifetime experience of a hypothetical population is first generated, and a matched case-control study is then simulated from this population. We control the frequency, the age at initiation, and the total duration of exposure, as well as the strengths of their effects. All models considered include a fixed-in-time covariate and one or two time-dependent covariate(s): the indicator of current exposure and/or the exposure duration. Simulation results show that none of the models always performs well. The discrepancies between the odds ratios yielded by logistic regression and the 'true' hazard ratio depend on both the type of the covariate and the strength of its effect. In addition, it seems that logistic regression has difficulty separating the effects of inter-correlated time-dependent covariates. By contrast, each of the two versions of Cox's model systematically induces either a serious under-estimation or a moderate over-estimation bias. The magnitude of the latter bias is proportional to the true effect, suggesting that an improved manipulation of the risk sets may eliminate, or at least reduce, the bias. Copyright 2003 John Wiley & Sons, Ltd.
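The core of such a simulation study is generating event times under a hazard that depends on a time-varying exposure. A minimal discrete-time sketch, with a hypothetical baseline hazard and effect size rather than the paper's design:

```python
import math
import random

def simulate_event_time(exposure_start, exposure_end, beta=math.log(2.0),
                        base_hazard=0.01, horizon=100.0, dt=0.1, rng=random):
    """Discrete-time sketch: the hazard doubles (beta = log 2) while the
    time-dependent exposure indicator is 'on'. All rates are hypothetical."""
    t = 0.0
    while t < horizon:
        exposed = exposure_start <= t < exposure_end
        hazard = base_hazard * math.exp(beta * exposed)
        # Probability of an event in the next dt under this hazard.
        if rng.random() < 1.0 - math.exp(-hazard * dt):
            return t
        t += dt
    return horizon  # administratively censored

rng = random.Random(42)
unexposed = [simulate_event_time(0.0, 0.0, rng=rng) for _ in range(2000)]
exposed = [simulate_event_time(0.0, 100.0, rng=rng) for _ in range(2000)]
```

Sampling cases and matched controls from such a generated population, and then fitting the competing models, reproduces the study design in outline.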
Bauer-Nilsen, Kristine; Hill, Colin; Trifiletti, Daniel M; Libby, Bruce; Lash, Donna H; Lain, Melody; Christodoulou, Deborah; Hodge, Constance; Showalter, Timothy N
2018-01-01
To evaluate the delivery costs, using time-driven activity-based costing, and reimbursement for definitive radiation therapy for locally advanced cervical cancer. Process maps were created to represent each step of the radiation treatment process and included personnel, equipment, and consumable supplies used to deliver care. Personnel were interviewed to estimate time involved to deliver care. Salary data, equipment purchasing information, and facilities costs were also obtained. We defined the capacity cost rate (CCR) for each resource and then calculated the total cost of patient care according to CCR and time for each resource. Costs were compared with 2016 Medicare reimbursement and relative value units (RVUs). The total cost of radiation therapy for cervical cancer was $12,861.68, with personnel costs constituting 49.8%. Brachytherapy cost $8610.68 (66.9% of total) and consumed 423 minutes of attending radiation oncologist time (80.0% of total). External beam radiation therapy cost $4055.01 (31.5% of total). Personnel costs were higher for brachytherapy than for the sum of simulation and external beam radiation therapy delivery ($4798.73 vs $1404.72). A full radiation therapy course provides radiation oncologists 149.77 RVUs with intensity modulated radiation therapy or 135.90 RVUs with 3-dimensional conformal radiation therapy, with total reimbursement of $23,321.71 and $16,071.90, respectively. Attending time per RVU is approximately 4-fold higher for brachytherapy (5.68 minutes) than 3-dimensional conformal radiation therapy (1.63 minutes) or intensity modulated radiation therapy (1.32 minutes). Time-driven activity-based costing was used to calculate the total cost of definitive radiation therapy for cervical cancer, revealing that brachytherapy delivery and personnel resources constituted the majority of costs. However, current reimbursement policy does not reflect the increased attending physician effort and delivery costs of brachytherapy. 
We hypothesize that the significant discrepancy between treatment costs and physician effort versus reimbursement may be a potential driver of reported national trends toward poor compliance with brachytherapy, and we suggest re-evaluation of payment policies to incentivize quality care. Copyright © 2017 Elsevier Inc. All rights reserved.
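The costing arithmetic above reduces to a capacity cost rate (resource cost per available minute) multiplied by the minutes each resource is consumed at every process-map step. A sketch with invented figures, not the paper's cost data:

```python
def capacity_cost_rate(annual_cost, available_minutes):
    """CCR: cost of making a resource available, per minute."""
    return annual_cost / available_minutes

def activity_cost(steps):
    """Total cost of a care episode: sum over process-map steps of
    CCR(resource) * minutes used by this episode."""
    return sum(capacity_cost_rate(cost, cap) * minutes
               for cost, cap, minutes in steps)

# Hypothetical process map entries:
# (annual resource cost, annual capacity in minutes, minutes consumed).
steps = [
    (300_000, 100_000, 423),   # attending physician time, e.g. brachytherapy
    (150_000, 120_000, 90),    # therapist time
    (500_000, 200_000, 60),    # treatment machine time
]
total = activity_cost(steps)
```

Comparing such a bottom-up total against fee-schedule reimbursement is exactly the study's cost-versus-payment analysis.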
Effect of training frequency on the learning curve on the da Vinci Skills Simulator.
Walliczek, Ute; Förtsch, Arne; Dworschak, Philipp; Teymoortash, Afshin; Mandapathil, Magis; Werner, Jochen; Güldner, Christian
2016-04-01
The purpose of this study was to evaluate the effect of training on the performance outcome with the da Vinci Skills Simulator. Forty novices were enrolled in a prospective training curriculum. Participants were separated into 2 groups. Group 1 performed 4 training sessions and group 2 had 2 training sessions over a 4-week period. Five exercises were performed 3 times consecutively. On the last training day, a new exercise was added. A significant skills gain from the first to the final practice day in overall performance, time to complete, and economy of motion was seen for both groups. Group 1 had a significantly better outcome in overall performance, time to complete, and economy of motion in all exercises. There was no significant difference found regarding the new exercise in group 1 versus group 2 in nearly all parameters. Longer intervals between training sessions appear to play a secondary role, whereas total repetition frequency is crucial for improvement of technical performance. © 2015 Wiley Periodicals, Inc. Head Neck 38: E1762-E1769, 2016.
BIGNASim: a NoSQL database structure and analysis portal for nucleic acids simulation data.
Hospital, Adam; Andrio, Pau; Cugnasco, Cesare; Codo, Laia; Becerra, Yolanda; Dans, Pablo D; Battistini, Federica; Torres, Jordi; Goñi, Ramón; Orozco, Modesto; Gelpí, Josep Ll
2016-01-04
Molecular dynamics simulation (MD) is, just behind genomics, the bioinformatics tool that generates the largest amounts of data, and that is using the largest amount of CPU time in supercomputing centres. MD trajectories are obtained after months of calculations, analysed in situ, and in practice forgotten. Several projects to generate stable trajectory databases have been developed for proteins, but no equivalence exists in the nucleic acids world. We present here a novel database system to store MD trajectories and analyses of nucleic acids. The initial data set available consists mainly of the benchmark of the new molecular dynamics force-field, parmBSC1. It contains 156 simulations, with over 120 μs of total simulation time. A deposition protocol is available to accept the submission of new trajectory data. The database is based on the combination of two NoSQL engines, Cassandra for storing trajectories and MongoDB to store analysis results and simulation metadata. The analyses available include backbone geometries, helical analysis, NMR observables and a variety of mechanical analyses. Individual trajectories and combined meta-trajectories can be downloaded from the portal. The system is accessible through http://mmb.irbbarcelona.org/BIGNASim/. Supplementary Material is also available on-line at http://mmb.irbbarcelona.org/BIGNASim/SuppMaterial/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Virtual reality neurosurgery: a simulator blueprint.
Spicer, Mark A; van Velsen, Martin; Caffrey, John P; Apuzzo, Michael L J
2004-04-01
This article details preliminary studies undertaken to integrate the most relevant advancements across multiple disciplines in an effort to construct a highly realistic neurosurgical simulator based on a distributed computer architecture. Techniques based on modified computational modeling paradigms incorporating finite element analysis are presented, as are current and projected efforts directed toward the implementation of a novel bidirectional haptic device. Patient-specific data derived from noninvasive magnetic resonance imaging sequences are used to construct a computational model of the surgical region of interest. Magnetic resonance images of the brain may be coregistered with those obtained from magnetic resonance angiography, magnetic resonance venography, and diffusion tensor imaging to formulate models of varying anatomic complexity. The majority of the computational burden is encountered in the presimulation reduction of the computational model and allows realization of the required threshold rates for the accurate and realistic representation of real-time visual animations. Intracranial neurosurgical procedures offer an ideal testing site for the development of a totally immersive virtual reality surgical simulator when compared with the simulations required in other surgical subspecialties. The material properties of the brain as well as the typically small volumes of tissue exposed in the surgical field, coupled with techniques and strategies to minimize computational demands, provide unique opportunities for the development of such a simulator. Incorporation of real-time haptic and visual feedback is approached here and likely will be accomplished soon.
Demonstration of coal reburning for cyclone boiler NO{sub x} control. Appendix, Book 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Based on the industry need for a pilot-scale cyclone boiler simulator, Babcock & Wilcox (B&W) designed, fabricated, and installed such a facility at its Alliance Research Center (ARC) in 1985. The project involved conversion of an existing pulverized coal-fired facility to be cyclone-firing capable. Additionally, convective section tube banks were installed in the upper furnace in order to simulate a typical boiler convection pass. The small boiler simulator (SBS) is designed to simulate most fireside aspects of full-size utility boilers, such as combustion and flue gas emissions characteristics, fireside deposition, etc. Prior to the design of the pilot-scale cyclone boiler simulator, the various cyclone boiler types were reviewed in order to identify the inherent cyclone boiler design characteristics which are applicable to the majority of these boilers. The cyclone boiler characteristics that were reviewed include NO{sub x} emissions, furnace exit gas temperature (FEGT), carbon loss, and total furnace residence time. Previous pilot-scale cyclone-fired furnace experience identified the following concerns: (1) operability of a small cyclone furnace (e.g., continuous slag tapping capability); (2) the optimum cyclone(s) configuration for the pilot-scale unit; (3) compatibility of NO{sub x} levels, carbon burnout, cyclone ash carryover to the convection pass, cyclone temperature, furnace residence time, and FEGT.
NASA Astrophysics Data System (ADS)
Karlsen, R. H.; Smits, F. J. C.; Stuyfzand, P. J.; Olsthoorn, T. N.; van Breukelen, B. M.
2012-08-01
This article describes the post audit and inverse modeling of a 1-D forward reactive transport model. The model simulates the changes in water quality following artificial recharge of pre-treated water from the river Rhine in the Amsterdam Water Supply Dunes using the PHREEQC-2 numerical code. One observation dataset is used for model calibration, and another dataset for validation of model predictions. The total simulation time of the model is 50 years, from 1957 to 2007, with recharge composition varying on a monthly basis, and the post audit is performed 26 years after the former model simulation period. The post audit revealed that the original model could reasonably predict conservative transport and kinetic redox reactions (oxygen and nitrate reduction coupled to the oxidation of soil organic carbon), but showed discrepancies in the simulation of cation exchange. Conceptualizations of the former model were inadequate to accurately simulate water quality changes controlled by cation exchange, especially concerning the breakthrough of potassium and magnesium fronts. Changes in conceptualization and model design, including the addition of five flow paths, to a total of six, and the use of parameter estimation software (PEST), resulted in a better model to measurement fit and system representation. No unique parameter set could be found for the model, primarily due to high parameter correlations, and an assessment of the predictive error was made using a calibration constrained Monte-Carlo method, and evaluated against field observations. The predictive error was found to be low for Na+ and Ca2+, except for greater travel times, while the K+ and Mg2+ error was restricted to the exchange fronts at some of the flow paths. Optimized cation exchange coefficients were relatively high, especially for potassium, but still within the observed range in literature.
The exchange coefficient for potassium agrees with strong fixation on illite, a main clay mineral in the area. Optimized CEC values were systematically lower than clay and organic matter contents indicated, possibly reflecting preferential flow of groundwater through the more permeable but less reactive aquifer parts. Whereas the artificial recharge initially acted as an intrusion of relatively saline water triggering Na+ for Ca2+ exchange, further increasing the total hardness of the recharged water, the gradual long-term reduction in salinity of the river Rhine since the mid 1970s has shifted the system to an intrusion of fresher water, causing Ca2+ for Na+ exchange. As a result, seasonal and longer-term reversal of the initial cation exchange processes was observed, adding to the general long-term reduction in total hardness of the recharged Rhine water.
The Inhomogeneous Reionization Times of Present-day Galaxies
NASA Astrophysics Data System (ADS)
Aubert, Dominique; Deparis, Nicolas; Ocvirk, Pierre; Shapiro, Paul R.; Iliev, Ilian T.; Yepes, Gustavo; Gottlöber, Stefan; Hoffman, Yehuda; Teyssier, Romain
2018-04-01
Today’s galaxies experienced cosmic reionization at different times in different locations. For the first time, reionization (50% ionized) redshifts, z_R, at the locations of their progenitors are derived from a new, fully coupled radiation-hydrodynamics simulation of galaxy formation and reionization at z > 6, matched to an N-body simulation to z = 0. Constrained initial conditions were chosen to form the well-known structures of the local universe, including the Local Group and Virgo, in a (91 Mpc)³ volume large enough to model both global and local reionization. The reionization simulation CoDa I-AMR, by the CPU-GPU code EMMA, used 2048³ particles and 2048³ initial cells, adaptively refined, while the N-body simulation CoDa I-DM2048, by Gadget2, used 2048³ particles, to find reionization times for all galaxies at z = 0 with masses M(z = 0) ≥ 10⁸ M⊙. Galaxies with M(z = 0) ≳ 10¹¹ M⊙ reionized earlier than the universe as a whole, by up to ∼500 Myr, with significant scatter. For Milky Way–like galaxies, z_R ranged from 8 to 15. Galaxies with M(z = 0) ≲ 10¹¹ M⊙ typically reionized as late as or later than globally averaged 50% reionization at ⟨z_R⟩ = 7.8, in neighborhoods where reionization was completed by external radiation. The spread of reionization times within galaxies was sometimes as large as the galaxy-to-galaxy scatter. The Milky Way and M31 reionized earlier than global reionization but later than typical for their mass, with neither dominated by external radiation. Their most massive progenitors at z > 6 had z_R = 9.8 (MW) and 11 (M31), while their total masses had z_R = 8.2 (both).
Sosin, Michael; Ceradini, Daniel J; Hazen, Alexes; Sweeney, Nicole G; Brecht, Lawrence E; Levine, Jamie P; Staffenberg, David A; Saadeh, Pierre B; Bernstein, G Leslie; Rodriguez, Eduardo D
2016-05-01
Cadaveric face transplant models are routinely used for technical allograft design, perfusion assessment, and transplant simulation but are associated with substantial limitations. The purpose of this study was to describe the experience of implementing a translational donor research facial procurement and solid organ allograft recovery model. Institutional review board approval was obtained, and a 49-year-old, brain-dead donor was identified for facial vascularized composite allograft research procurement. The family generously consented to donation of solid organs and the total face, eyelids, ears, scalp, and skeletal subunit allograft. The successful sequence of computed tomographic scanning, fabrication and postprocessing of patient-specific cutting guides, tracheostomy placement, preoperative fluorescent angiography, silicone mask facial impression, donor facial allograft recovery, postprocurement fluorescent angiography, and successful recovery of kidneys and liver occurred without any donor instability. Preservation of the bilateral external carotid arteries, facial arteries, occipital arteries, and bilateral thyrolinguofacial and internal jugular veins provided reliable and robust perfusion to the entirety of the allograft. Total time of facial procurement was 10 hours 57 minutes. Essential to clinical face transplant outcomes is the preparedness of the institution, multidisciplinary face transplant team, organ procurement organization, and solid organ transplant colleagues. A translational facial research procurement and solid organ recovery model serves as an educational experience to modify processes and address procedural, anatomical, and logistical concerns for institutions developing a clinical face transplantation program. This methodical approach best simulates the stressors and challenges that can be expected during clinical face transplantation. Therapeutic, V.
Simulation of the effects of time and size at stocking on PCB accumulation in lake trout
Madenjian, Charles P.; Carpenter, Stephen R.
1993-01-01
Manipulations of size at stocking and timing of stocking have already been used to improve survival of stocked salmonines in the Great Lakes. It should be possible to stock salmonines into the Great Lakes in a way that reduces the rate of polychlorinated biphenyl (PCB) accumulation in these fishes. An individual-based model (IBM) was used to investigate the effects of size at stocking and timing of stocking on PCB accumulation by lake trout Salvelinus namaycush in Lake Michigan. The individual-based feature of the model allowed lake trout individuals to encounter prey fish individuals and then consume sufficiently small prey fish. The IBM accurately accounted for the variation in PCB concentrations observed within the Lake Michigan lake trout population. Results of the IBM simulations revealed that increasing the average size at stocking from 110 to 160 mm total length led to an increase in the average PCB concentration in the stocked cohort at age 5, after the fish had spent 4 years in the lake, from 2.33 to 2.65 mg/kg; the percentage of lake trout in the cohort at the end of the simulated time period with PCB concentration of 2 mg/kg or more increased from 62% to 79%. Thus, PCB contamination was reduced when the simulated size at stocking was smallest. An overall stocking strategy for lake trout into Lake Michigan should weigh this advantage regarding PCB contamination against the poor survival of lake trout that may occur if the trout are stocked at too small a size.
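The size-at-stocking result can be illustrated with a stripped-down individual-based model in which each fish's annual PCB intake scales with its body length; the growth and intake coefficients are invented for illustration, not the Lake Michigan parameterization:

```python
import random

def simulate_cohort(stock_length_mm, years=4, n=1000, seed=3):
    """Individual-based sketch: each trout's annual consumption, and hence
    PCB intake, scales with body size. Coefficients are hypothetical."""
    rng = random.Random(seed)
    fish = [{"length": stock_length_mm * rng.uniform(0.9, 1.1), "pcb": 0.0}
            for _ in range(n)]
    for _ in range(years):
        for f in fish:
            # Annual PCB intake (mg/kg) proportional to length, with
            # individual variation in feeding success.
            f["pcb"] += 0.004 * f["length"] * rng.uniform(0.8, 1.2)
            f["length"] *= 1.35  # assumed annual growth factor
    return fish

small = simulate_cohort(110)   # stocked at 110 mm
large = simulate_cohort(160)   # stocked at 160 mm
```

Larger stocked fish eat more from the start and so carry a higher simulated PCB burden after 4 years, mirroring the direction of the IBM result in the abstract.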
NASA Astrophysics Data System (ADS)
Hartmann, Andreas; Jasechko, Scott; Gleeson, Tom; Wada, Yoshihide; Andreo, Bartolomé; Barberá, Juan Antonio; Brielmann, Heike; Charlier, Jean-Baptiste; Darling, George; Filippini, Maria; Garvelmann, Jakob; Goldscheider, Nico; Kralik, Martin; Kunstmann, Harald; Ladouche, Bernard; Lange, Jens; Mudarra, Matías; Francisco Martín, José; Rimmer, Alon; Sanchez, Damián; Stumpp, Christine; Wagener, Thorsten
2017-04-01
Karst develops through the dissolution of carbonate rock and results in pronounced spatiotemporal heterogeneity of hydrological processes. Karst groundwater in Europe is a major source of fresh water, contributing up to half of the total drinking water supply in some countries such as Austria or Slovenia. Previous work showed that karstic recharge processes enhance and alter the sensitivity of recharge to climate variability. The enhanced preferential flow from the surface to the aquifer may entail an enhanced risk of groundwater contamination. In this study we assess the contamination risk of karst aquifers over Europe and the Mediterranean using simulated transit time distributions. Using a new type of semi-distributed model that considers the spatial heterogeneity of karst hydraulic properties, we were able to simulate karstic groundwater recharge including its heterogeneous spatiotemporal dynamics. The model is driven by gridded daily climate data from the Global Land Data Assimilation System (GLDAS). Transit time distributions are calculated using virtual tracer experiments. We evaluated our simulations against independent information on transit times derived from observed time series of water isotopes at >70 karst springs over Europe. The simulations indicate that, compared to humid, mountain, and desert regions, the Mediterranean region shows a stronger risk of contamination because preferential flow processes are most pronounced there, given thin soil layers and the seasonal abundance of high-intensity rainfall events in autumn and winter. Our modelling approach involves strong simplifications and its results cannot easily be generalized, but it still highlights that the combined effects of variable climate and heterogeneous catchment properties pose a substantial risk to water quality.
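A transit time distribution can be obtained from a virtual tracer experiment by normalizing the simulated breakthrough curve; a minimal sketch assuming a uniform time step:

```python
def transit_time_distribution(breakthrough, dt=1.0):
    """Normalize a tracer breakthrough curve c(t) into a transit-time PDF
    and return it together with the mean transit time sum(t * p(t) * dt)."""
    total = sum(breakthrough) * dt
    pdf = [c / total for c in breakthrough]
    mean_tt = sum(i * dt * p * dt for i, p in enumerate(pdf))
    return pdf, mean_tt

# Symmetric toy breakthrough peaking at t = 2: mean transit time is 2.
pdf, mean_tt = transit_time_distribution([0.0, 1.0, 2.0, 1.0, 0.0])
```

Short mean transit times and heavy early mass in the PDF are the quantities that translate into higher contamination risk in the study.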
Lunar Outpost Life Support Architecture Study Based on a High-Mobility Exploration Scenario
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Anderson, Molly S.
2010-01-01
This paper presents results of a life support architecture study based on a 2009 NASA lunar surface exploration scenario known as Scenario 12. The study focuses on the assembly complete outpost configuration and includes pressurized rovers as part of a distributed outpost architecture in both stand-alone and integrated configurations. A range of life support architectures are examined reflecting different levels of closure and distributed functionality. Monte Carlo simulations are used to assess the sensitivity of results to volatile high-impact mission variables, including the quantity of residual Lander oxygen and hydrogen propellants available for scavenging, the fraction of crew time away from the outpost on excursions, total extravehicular activity hours, and habitat leakage. Surpluses or deficits of water and oxygen are reported for each architecture, along with fixed and 10-year total equivalent system mass estimates relative to a reference case. System robustness is discussed in terms of the probability of no water or oxygen resupply as determined from the Monte Carlo simulations.
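The Monte Carlo sensitivity sweep can be sketched by drawing the high-impact variables from assumed ranges and propagating them through a toy water balance; every distribution and coefficient here is an illustrative placeholder, not a Scenario 12 value:

```python
import random

def water_balance(rng):
    """One Monte Carlo draw of annual outpost water surplus (kg/yr).
    All ranges and coefficients are hypothetical."""
    scavenged = rng.uniform(0.0, 500.0)     # residual lander propellant water
    eva_hours = rng.uniform(500.0, 1500.0)  # total EVA hours
    away_frac = rng.uniform(0.1, 0.5)       # crew time away on excursions
    leakage = rng.uniform(50.0, 200.0)      # habitat leakage losses
    recovered = 3000.0 * (1.0 - away_frac) * 0.9  # recycled at the outpost
    consumed = 2200.0                       # fixed crew consumption
    return scavenged + recovered - consumed - 0.24 * eva_hours - leakage

rng = random.Random(0)
draws = [water_balance(rng) for _ in range(10_000)]
p_deficit = sum(d < 0.0 for d in draws) / len(draws)
```

The fraction of draws ending in deficit plays the role of the abstract's "probability of no water or oxygen resupply" robustness metric.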
Effective precipitation duration for runoff peaks based on catchment modelling
NASA Astrophysics Data System (ADS)
Sikorska, A. E.; Viviroli, D.; Seibert, J.
2018-01-01
Although precipitation intensities may vary greatly during a single flood event, detailed information about these intensities may not be required to accurately simulate floods with a hydrological model, which reacts rather to cumulative precipitation sums. This raises two questions: to what extent is it important to preserve sub-daily precipitation intensities, and how long does it effectively rain from a hydrological point of view? Both questions might seem straightforward to answer with a direct analysis of past precipitation events, but that requires some arbitrary choices regarding the length of a precipitation event. To avoid these arbitrary decisions, here we present an alternative approach to characterizing the effective length of a precipitation event, based on runoff simulations of large floods. More precisely, we quantify the fraction of a day over which the daily precipitation has to be distributed to faithfully reproduce the large annual and seasonal floods that were generated by the hourly precipitation time series. New precipitation time series were generated by first aggregating the hourly observed data into daily totals and then evenly distributing them over sub-daily periods (n hours). These time series were used as input to a hydrological bucket-type model, and the resulting flood peaks were compared to those obtained with the original precipitation time series. We then define the effective daily precipitation duration as the number of hours n for which the largest peaks are simulated best. For nine mesoscale Swiss catchments this effective daily precipitation duration was about half a day, indicating that detailed information on precipitation intensities is not necessarily required to accurately estimate the peaks of the largest annual and seasonal floods.
These findings support the use of simple disaggregation approaches to make use of past daily precipitation observations or daily precipitation simulations (e.g., from climate models) for hydrological modeling at an hourly time step.
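The core disaggregation step can be sketched in a few lines (hypothetical function names; placing the uniform block at the start of each day is a simplification, as only the even spreading over n hours matters here):

```python
import numpy as np

def disaggregate_daily(hourly, n):
    """Aggregate an hourly series (length divisible by 24) to daily
    totals, then spread each total evenly over the first n hours of
    the day."""
    days = hourly.reshape(-1, 24)
    daily_totals = days.sum(axis=1)
    out = np.zeros_like(days)
    out[:, :n] = (daily_totals / n)[:, None]   # even spread over n hours
    return out.ravel()

rng = np.random.default_rng(0)
hourly = rng.exponential(0.5, size=48)         # two days of synthetic rain
new_series = disaggregate_daily(hourly, n=12)  # n = half a day

# daily totals are preserved by construction
assert np.allclose(new_series.reshape(-1, 24).sum(axis=1),
                   hourly.reshape(-1, 24).sum(axis=1))
```

In the study, series like `new_series` would be fed to the hydrological model for a range of n and the simulated flood peaks compared against those from the original hourly input.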
Scott Painter; Ethan Coon; Cathy Wilson; Dylan Harp; Adam Atchley
2016-04-21
This Modeling Archive is in support of an NGEE Arctic publication currently in review [4/2016]. The Advanced Terrestrial Simulator (ATS) was used to simulate thermal hydrological conditions across varied environmental conditions for an ensemble of 1D models of Arctic permafrost. The thickness of the organic soil layer is varied from 2 to 40 cm, snow depth from approximately 0 to 1.2 m, and water table depth from 51 cm below the soil surface to 31 cm above the soil surface. A total of 15,960 ensemble members are included. The data produced cover the third and fourth simulation years: active layer thickness, time of deepest thaw depth, temperature of the unfrozen soil, and unfrozen liquid saturation for each ensemble member. Input files used to run the ensemble are also included.
Makhov, Dmitry V.; Saita, Kenichiro; Martinez, Todd J.; ...
2014-12-11
In this study, we report a detailed computational simulation of the photodissociation of pyrrole using the ab initio Multiple Cloning (AIMC) method implemented within MOLPRO. The efficiency of the AIMC implementation, employing train basis sets, a linear approximation for matrix elements, and Ehrenfest configuration cloning, allows us to accumulate significant statistics. We calculate and analyze the total kinetic energy release (TKER) spectrum and velocity map imaging (VMI) of pyrrole and compare the results directly with experimental measurements. Both the TKER spectrum and the structure of the velocity map image are well reproduced. Previously, it has been assumed that the isotropic component of the VMI arises from long-time statistical dissociation. Instead, our simulations suggest that ultrafast dynamics contributes significantly to both the low and high energy portions of the TKER spectrum.
Solar Eclipse Effect on Shelter Air Temperature
NASA Technical Reports Server (NTRS)
Segal, M.; Turner, R. W.; Prusa, J.; Bitzer, R. J.; Finley, S. V.
1996-01-01
Decreases in shelter temperature during eclipse events were quantified on the basis of observations, numerical model simulations, and complementary conceptual evaluations. Observations for the annular eclipse on 10 May 1994 over the United States are presented, and these provide insights into the temporal and spatial changes in the shelter temperature. The observations indicated near-surface temperature drops of as much as 6 °C. Numerical model simulations for this eclipse event, which provide a complementary evaluation of the spatial and temporal patterns of the temperature drops, predict similar decreases. Interrelationships between the temperature drop, degree of solar irradiance reduction, and timing of the peak eclipse are also evaluated for late spring, summer, and winter sun conditions. These simulations suggest that for total eclipses the drops in shelter temperature in midlatitudes can be as high as 7 °C for a spring morning eclipse.
A strategy for quantum algorithm design assisted by machine learning
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung
2014-07-01
We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is being taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable to designing quantum oracle-based algorithms. We chose, as a case study, an oracle decision problem known as the Deutsch-Jozsa problem. Using Monte Carlo simulations, we showed that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine learning-based method.
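For readers unfamiliar with the case study, the target behavior of the Deutsch-Jozsa circuit itself can be verified with a small statevector simulation (a generic sketch of the standard algorithm, not the authors' learning simulator):

```python
import numpy as np

def deutsch_jozsa_zero_prob(f, n):
    """Statevector simulation of the Deutsch-Jozsa circuit with a phase
    oracle: returns the probability of measuring |0...0>, which is 1 for
    a constant f and 0 for a balanced f after a single oracle query."""
    N = 2 ** n
    state = np.full(N, 1.0 / np.sqrt(N))          # H^n applied to |0...0>
    state = state * np.array([(-1.0) ** f(x) for x in range(N)])  # oracle
    amp0 = state.sum() / np.sqrt(N)               # |0...0> amplitude after final H^n
    return abs(amp0) ** 2

constant = lambda x: 0
balanced = lambda x: bin(x).count("1") % 2        # parity is balanced
assert abs(deutsch_jozsa_zero_prob(constant, 3) - 1.0) < 1e-12
assert deutsch_jozsa_zero_prob(balanced, 3) < 1e-12
```

A learning system such as the one described above would have to discover unitaries reproducing this one-query separation between constant and balanced oracles.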
NASA Technical Reports Server (NTRS)
Carr, Mary-Elena
1998-01-01
A size-based ecosystem model was modified to include periodic upwelling events and used to evaluate the effect of episodic nutrient supply on the standing stock, carbon uptake, and carbon flow into mesozooplankton grazing and sinking flux in a coastal upwelling regime. Two ecosystem configurations were compared: a single food chain made up of net phytoplankton and mesozooplankton (one autotroph and one heterotroph, A1H1), and three interconnected food chains plus bacteria (three autotrophs and four heterotrophs, A3H4). The carbon pathways in the A1H1 simulations were under stronger physical control than those of the A3H4 runs, where the small size classes are not affected by frequent upwelling events. In the more complex food web simulations, the microbial pathway determines the total carbon uptake and grazing rates, and regenerated nitrogen accounts for more than half of the total primary production for periods of 20 days or longer between events. By contrast, new production, export of carbon through sinking, and mesozooplankton grazing are more important in the A1H1 simulations. In the A3H4 simulations, the turnover time scale of the autotroph biomass increases as the period between upwelling events increases, because of the larger contribution of slow-growing net phytoplankton. The upwelling period was characterized for three upwelling sites from the alongshore wind speed measured by the NASA Scatterometer (NSCAT), and the corresponding model output was compared with literature data. This validation exercise for three upwelling sites and a downstream embayment suggests that standing stock, carbon uptake, and size fractionation were best supported by the A3H4 simulations, while the simulated sinking fluxes are not distinguishable in the two configurations.
Lee, Kyung Eun; Lee, Seo Ho; Shin, Eun-Seok; Shim, Eun Bo
2017-06-26
Hemodynamic simulation for quantifying fractional flow reserve (FFR) is often performed in a patient-specific geometry of coronary arteries reconstructed from images from various imaging modalities. Because optical coherence tomography (OCT) images provide more precise vascular lumen geometry, regardless of stenotic severity, hemodynamic simulation based on OCT images may be effective. The aim of this study is to perform OCT-based FFR simulations by coupling a three-dimensional (3D) computational fluid dynamics (CFD) model from geometrically accurate OCT images with a lumped parameter model (LPM) based on vessel lengths extracted from coronary X-ray angiography (CAG) data, with clinical validation of the present method. To simulate coronary hemodynamics, we developed a fast and accurate method that combines a CFD model of an OCT-based region of interest (ROI) with an LPM of the coronary microvasculature and veins. Here, the LPM was based on vessel lengths extracted from CAG images. Based on this vessel-length approach, we describe a theoretical formulation for the total resistance of the LPM given a 3D CFD model of the ROI. To show the utility of this method, we present calculated examples of FFR from OCT images. To validate the OCT-based FFR calculation (OCT-FFR) clinically, we compared the computed OCT-FFR values for 17 vessels of 13 patients with clinically measured FFR (M-FFR) values. A novel formulation for the total resistance of the LPM is introduced to accurately simulate a 3D CFD model of the ROI. The simulated FFR values compared well with the clinically measured ones, demonstrating the accuracy of the method. Moreover, the present method is computationally fast, enabling solutions to be obtained within the hospital workflow.
A system for EPID-based real-time treatment delivery verification during dynamic IMRT treatment.
Fuangrod, Todsaporn; Woodruff, Henry C; van Uytven, Eric; McCurdy, Boyd M C; Kuncic, Zdenka; O'Connor, Daryl J; Greer, Peter B
2013-09-01
To design and develop a real-time electronic portal imaging device (EPID)-based delivery verification system for dynamic intensity modulated radiation therapy (IMRT) which enables detection of gross treatment delivery errors before delivery of substantial radiation to the patient. The system utilizes a comprehensive physics-based model to generate a series of predicted transit EPID image frames as a reference dataset and compares these to measured EPID frames acquired during treatment. The two datasets are compared using MLC aperture comparison and cumulative signal checking techniques. The system operation in real-time was simulated offline using previously acquired images for 19 IMRT patient deliveries with both frame-by-frame comparison and cumulative frame comparison. Simulated error case studies were used to demonstrate the system sensitivity and performance. The accuracy of the synchronization method was shown to agree within two control points, which corresponds to approximately 1% of the total MU to be delivered for dynamic IMRT. The system achieved mean real-time gamma results for frame-by-frame analysis of 86.6% and 89.0% for 3%, 3 mm and 4%, 4 mm criteria, respectively, and 97.9% and 98.6% for cumulative gamma analysis. The system can detect a 10% MU error using 3%, 3 mm criteria within approximately 10 s. The EPID-based real-time delivery verification system successfully detected simulated gross errors introduced into patient plan deliveries in near real-time (within 0.1 s). A real-time radiation delivery verification system for dynamic IMRT has been demonstrated that is designed to prevent major mistreatments in modern radiation therapy.
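The cumulative-signal idea can be illustrated with a toy sketch (hypothetical 10% tolerance and synthetic frames; the actual system compares physics-based predicted EPID images using gamma criteria, not raw frame sums):

```python
import numpy as np

def cumulative_check(predicted, measured, tol=0.10):
    """Toy cumulative-signal check: compare the running total of the
    measured frame signal against prediction and return the index of
    the first frame exceeding the (made-up) tolerance, else None."""
    cum_pred = np.cumsum([f.sum() for f in predicted])
    cum_meas = np.cumsum([f.sum() for f in measured])
    rel_err = np.abs(cum_meas - cum_pred) / cum_pred
    flagged = np.nonzero(rel_err > tol)[0]
    return int(flagged[0]) if flagged.size else None

rng = np.random.default_rng(5)
pred = [rng.random((8, 8)) + 1.0 for _ in range(20)]  # synthetic frames
ok = [f * 1.02 for f in pred]                         # small offset only
bad = [f * (1.0 if i < 10 else 0.5)                   # gross MU error
       for i, f in enumerate(pred)]                   # from frame 10 on

assert cumulative_check(pred, ok) is None             # within tolerance
assert cumulative_check(pred, bad) is not None        # error is caught
```

Accumulating the signal smooths out per-frame noise, which is why the cumulative comparison above achieves higher gamma pass rates than the frame-by-frame analysis.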
A queueing network model to analyze the impact of parallelization of care on patient cycle time.
Jiang, Lixiang; Giachetti, Ronald E
2008-09-01
The total time a patient spends in an outpatient facility, called the patient cycle time, is a major contributor to overall patient satisfaction. A frequently recommended strategy to reduce the total time is to perform some activities in parallel, thereby shortening patient cycle time. To analyze patient cycle time this paper extends and improves upon an existing multi-class open queueing network (MOQN) model so that patient flow in an urgent care center can be modeled. Results of the model are analyzed using data from an urgent care center contemplating greater parallelization of patient care activities. The results indicate that parallelization can reduce the cycle time for those patient classes which require more than one diagnostic and/or treatment intervention. However, for many patient classes there would be little if any improvement, indicating the importance of tools to analyze business process reengineering rules. The paper makes contributions by implementing an approximation for fork/join queues in the network and by improving the approximation for multiple server queues in both low traffic and high traffic conditions. We demonstrate the accuracy of the MOQN results through comparisons to simulation results.
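The fork/join intuition behind parallelization can be shown with a toy Monte Carlo model (made-up service times and stage names; this is neither the paper's MOQN approximation nor its validation simulation):

```python
import random

def mean_cycle_time(parallel, n_patients=10_000, seed=1):
    """Toy Monte Carlo: mean patient cycle time when two care activities
    run in series vs. in parallel (fork/join), with exponential service
    times and fixed registration/checkout stages."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_patients):
        a = rng.expovariate(1 / 20)     # e.g. diagnostic test, mean 20 min
        b = rng.expovariate(1 / 15)     # e.g. treatment prep, mean 15 min
        care = max(a, b) if parallel else a + b   # fork/join vs. series
        total += 5 + care + 5           # registration + care + checkout
    return total / n_patients

seq = mean_cycle_time(parallel=False)
par = mean_cycle_time(parallel=True)
assert par < seq    # fork/join shortens the mean cycle time
```

The saving equals the mean of the shorter activity, min(a, b), which is why classes with a single intervention (nothing to overlap) see little benefit.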
Multak, Nina; Newell, Karen; Spear, Sherrie; Scalese, Ross J; Issenberg, S Barry
2015-06-01
Research demonstrates limitations in the ability of health care trainees/practitioners, including physician assistants (PAs), to identify important cardiopulmonary examination findings and diagnose corresponding conditions. Studies also show that simulation-based training leads to improved performance and that these skills can transfer to real patients. This study evaluated the effectiveness of a newly developed curriculum incorporating simulation with deliberate practice for teaching cardiopulmonary physical examination/bedside diagnosis skills in the PA population. This multi-institutional study used a pretest/posttest design. Participants, PA students from 4 different programs, received a standardized curriculum including instructor-led activities interspersed among small-group/independent self-study time. Didactic sessions and independent study featured practice with the "Harvey" simulator and use of specially developed computer-based multimedia tutorials. Preintervention: participants completed demographic questionnaires, rated self-confidence, and underwent baseline evaluation of knowledge and cardiopulmonary physical examination skills. Students logged self-study time using various learning resources. Postintervention: students again rated self-confidence and underwent repeat cognitive/performance testing using equivalent written/simulator-based assessments. Physician assistant students (N = 56) demonstrated significant gains in knowledge, cardiac examination technique, recognition of total cardiac findings, identification of key auscultatory findings (extra heart sounds, systolic/diastolic murmurs), and the ability to make correct diagnoses. Learner self-confidence also improved significantly. This study demonstrated the effectiveness of a simulation-based curriculum for teaching essential physical examination/bedside diagnosis skills to PA students. 
Its results reinforce those of similar/previous research, which suggest that simulation-based training is most effective under certain educational conditions. Future research will include subgroup analyses/correlation of other variables to explore best features/uses of simulation technology for training PAs.
Hart, Rheannon M.; Green, W. Reed; Westerman, Drew A.; Petersen, James C.; DeLanois, Jeanne L.
2012-01-01
Lake Maumelle, located in central Arkansas northwest of the cities of Little Rock and North Little Rock, is one of two principal drinking-water supplies for the Little Rock and North Little Rock, Arkansas, metropolitan areas. Lake Maumelle and the Maumelle River (its primary tributary) are more pristine than most other reservoirs and streams in the region, with 80 percent of the land area in the entire watershed being forested. However, as the Lake Maumelle watershed becomes increasingly more urbanized and timber harvesting becomes more extensive, concerns about the sustainability of the quality of the water supply also have increased. Two hydrodynamic and water-quality models were developed to examine the hydrology and water quality in the Lake Maumelle watershed and changes that might occur as the watershed becomes more urbanized and timber harvesting becomes more extensive. A Hydrologic Simulation Program–FORTRAN watershed model was developed using continuous streamflow and discrete suspended-sediment and water-quality data collected from January 2004 through 2010. A CE–QUAL–W2 model was developed to simulate reservoir hydrodynamics and selected water-quality characteristics using the simulated output from the Hydrologic Simulation Program–FORTRAN model from January 2004 through 2010. The calibrated Hydrologic Simulation Program–FORTRAN model and the calibrated CE–QUAL–W2 model were developed to simulate three land-use scenarios and to examine the potential effects of these land-use changes, as defined in the model, on the water quality of Lake Maumelle during the 2004 through 2010 simulation period. These scenarios included a scenario that simulated conversion of most land in the watershed to forest (scenario 1), a scenario that simulated conversion of potentially developable land to low-intensity urban land use in part of the watershed (scenario 2), and a scenario that simulated timber harvest in part of the watershed (scenario 3).
Simulated land-use changes for scenarios 1 and 3 resulted in little (generally less than 10 percent) overall effect on the simulated water quality in the Hydrologic Simulation Program–FORTRAN model. The land-use change of scenario 2 affected subwatersheds that include Bringle, Reece, and Yount Creek tributaries and most other subwatersheds that drain into the northern side of Lake Maumelle; large percent increases in loading rates (generally between 10 and 25 percent) included dissolved nitrite plus nitrate nitrogen, dissolved orthophosphate, total phosphorus, suspended sediment, dissolved ammonia nitrogen, total organic carbon, and fecal coliform bacteria. For scenario 1, the simulated changes in nutrient, suspended sediment, and total organic carbon loads from the Hydrologic Simulation Program–FORTRAN model resulted in very slight (generally less than 10 percent) changes in simulated water quality for Lake Maumelle, relative to the baseline condition. Following lake mixing in the falls of 2006 and 2007, phosphorus and nitrogen concentrations were higher than the baseline condition and chlorophyll a responded accordingly. The increased nutrient and chlorophyll a concentrations in late October and into 2007 were enough to increase concentrations, on average, for the entire simulation period (2004–10). For scenario 2, the simulated changes in nutrient, suspended sediment, total organic carbon, and fecal coliform bacteria loads from the Lake Maumelle watershed resulted in slight changes in simulated water quality for Lake Maumelle, relative to the baseline condition (total nitrogen decreased by 0.01 milligram per liter; dissolved orthophosphate increased by 0.001 milligram per liter; chlorophyll a decreased by 0.1 microgram per liter). The differences in these concentrations are approximately an order of magnitude less than the error between measured and simulated concentrations in the baseline model. 
During the driest summer in the simulation period (2006), phosphorus and nitrogen concentrations were lower than the baseline condition and chlorophyll a concentrations decreased during the same summer season. The decrease in nitrogen and chlorophyll a concentrations during the dry summer in 2006 was enough to decrease concentrations of these constituents very slightly, on average, for the entire simulation period (2004–10). For scenario 3, the changes in simulated nutrient, suspended sediment, total organic carbon, and fecal coliform bacteria loads from the Lake Maumelle watershed resulted in very slight changes in simulated water quality within Lake Maumelle, relative to the baseline condition, for most of the reservoir. Among the implications of the results of the modeling described in this report are those related to scale in both space and time. Spatial scales include limited size and location of land-use changes, their effects on loading rates, and resultant effects on water quality of Lake Maumelle. Temporally, the magnitude of the water-quality changes simulated by the land-use change scenarios over the 7-year period (2004–10) is not necessarily indicative of the changes that could be expected to occur with similar land-use changes persisting over a 20-, 30-, or 40-year period, for example. These implications should be tempered by realization of the described model limitations. The Hydrologic Simulation Program–FORTRAN watershed model was calibrated to streamflow and water-quality data from five streamflow-gaging stations, and in general, these stations characterize a range of subwatershed areas with varying land-use types. The CE–QUAL–W2 reservoir model was calibrated to water-quality data collected during January 2004 through December 2010 at three reservoir stations, representing the upper, middle, and lower sections of the reservoir.
In general, the baseline simulation for the Hydrologic Simulation Program–FORTRAN and the CE–QUAL–W2 models matched reasonably well to the measured data. Simulated and measured suspended-sediment concentrations during periods of base flow (streamflows not substantially influenced by runoff) agree reasonably well for Maumelle River at Williams Junction, the station representing the upper end of the watershed (with differences—simulated minus measured value—generally ranging from -15 to 41 milligrams per liter, and percent difference—relative to the measured value—ranging from -99 to 182 percent) and Maumelle River near Wye, the station just above the reservoir at the lower end (differences generally ranging from -20 to 22 milligrams per liter, and percent difference ranging from -100 to 194 percent). In general, water temperature and dissolved-oxygen concentration simulations followed measured seasonal trends for all stations with the largest differences occurring during periods of lowest temperatures or during the periods of lowest measured dissolved-oxygen concentrations. For the CE–QUAL–W2 model, simulated vertical distributions of water temperatures and dissolved-oxygen concentrations agreed with measured vertical distributions over time, even for the most complex water-temperature profiles. Considering the oligotrophic-mesotrophic (low to intermediate primary productivity and associated low nutrient concentrations) condition of Lake Maumelle, simulated algae, phosphorus, and nitrogen concentrations compared well with generally low measured concentrations.
Stochastic Optimization for an Analytical Model of Saltwater Intrusion in Coastal Aquifers
Stratis, Paris N.; Karatzas, George P.; Papadopoulou, Elena P.; Zakynthinaki, Maria S.; Saridakis, Yiannis G.
2016-01-01
The present study implements a stochastic optimization technique to optimally manage freshwater pumping from coastal aquifers. Our simulations utilize the well-known sharp interface model for saltwater intrusion in coastal aquifers together with its known analytical solution. The objective is to maximize the total volume of freshwater pumped by the wells from the aquifer while, at the same time, protecting the aquifer from saltwater intrusion. To address this problem in real time, the ALOPEX stochastic optimization method is used to optimize the pumping rates of the wells, coupled with a penalty-based strategy that keeps the saltwater front at a safe distance from the wells. Several numerical optimization results, simulating a known real aquifer case, are presented. The results explore the computational performance of the chosen stochastic optimization method as well as its ability to manage freshwater pumping in real aquifer environments. PMID:27689362
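The core ALOPEX idea — nudge each parameter in the direction whose recent change correlated with an improvement of the objective — can be sketched on a toy problem (a simplified deterministic-acceptance variant with made-up gains and a made-up quadratic penalty; full ALOPEX uses a temperature-controlled acceptance probability, and the study's objective is the sharp-interface model, not this toy):

```python
import random

def alopex(f, x0, steps=20_000, gamma=2.0, sigma=0.05, seed=2):
    """Simplified ALOPEX for maximizing f: each parameter is nudged by
    gamma * dx * dF (the correlation of its last move with the change
    in the objective) plus uniform exploratory noise."""
    rng = random.Random(seed)
    prev_x, prev_F = list(x0), f(x0)
    x = [xi + rng.uniform(-sigma, sigma) for xi in prev_x]
    F = f(x)
    best_x, best_F = (list(x), F) if F > prev_F else (list(prev_x), prev_F)
    for _ in range(steps):
        x_new = [xi + gamma * (xi - pxi) * (F - prev_F)
                    + rng.uniform(-sigma, sigma)
                 for xi, pxi in zip(x, prev_x)]
        prev_x, prev_F = x, F
        x, F = x_new, f(x_new)
        if F > best_F:
            best_x, best_F = list(x), F
    return best_x, best_F

# toy management problem: maximize total pumping minus a quadratic
# "intrusion" penalty; the optimum is q_i = 1 with objective value 1.0
objective = lambda q: sum(q) - 0.5 * sum(qi * qi for qi in q)
best_q, best_F = alopex(objective, [0.0, 0.0])
assert best_F > 0.5    # well above the starting value of 0.0
```

Because each parameter needs only the scalar change in the global objective, the method suits real-time settings where no gradient of the aquifer model is available.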
Formation of the Giant Planets by Concurrent Accretion of Solids and Gas
NASA Technical Reports Server (NTRS)
Hubickyj, Olenka
1997-01-01
Models were developed to simulate planet formation. Three major phases are characterized in the simulations: (1) the planetesimal accretion rate, which dominates that of gas, rapidly increases owing to runaway accretion, then decreases as the planet's feeding zone is depleted; (2) both solid and gas accretion rates are small and nearly independent of time; and (3) runaway gas accretion starts when the solid and gas masses are about equal. The models' applicability to planets in our Solar System is judged using two basic "yardsticks". The results suggest that the solar nebula dissipated while Uranus and Neptune were in the second phase, during which, for a relatively long time, the masses of their gaseous envelopes were small but not negligible compared to the total masses. Background information, results, and a published article are included in the report.
Broadband noise limit in the photodetection of ultralow jitter optical pulses.
Sun, Wenlu; Quinlan, Franklyn; Fortier, Tara M; Deschenes, Jean-Daniel; Fu, Yang; Diddams, Scott A; Campbell, Joe C
2014-11-14
Applications with optical atomic clocks and precision timing often require the transfer of optical frequency references to the electrical domain with extremely high fidelity. Here we examine the impact of photocarrier scattering and distributed absorption on the photocurrent noise of high-speed photodiodes when detecting ultralow jitter optical pulses. Despite its small contribution to the total photocurrent, this excess noise can determine the phase noise and timing jitter of microwave signals generated by detecting ultrashort optical pulses. A Monte Carlo simulation of the photodetection process is used to quantitatively estimate the excess noise. Simulated phase noise on the 10 GHz harmonic of a photodetected pulse train shows good agreement with previous experimental data, leading to the conclusion that the lowest phase noise photonically generated microwave signals are limited by photocarrier scattering well above the quantum limit of the optical pulse train.
Numerical Investigation of a Model Scramjet Combustor Using DDES
NASA Astrophysics Data System (ADS)
Shin, Junsu; Sung, Hong-Gye
2017-04-01
Non-reactive flows moving through a model scramjet were investigated using a delayed detached eddy simulation (DDES), which is a hybrid scheme combining a Reynolds-averaged Navier-Stokes scheme and a large eddy simulation. The three-dimensional Navier-Stokes equations were solved numerically on a structured grid using finite volume methods. An in-house code was developed. This code used a monotonic upstream-centered scheme for conservation laws (MUSCL) with an advection upstream splitting method by pressure weight function (AUSMPW+) for spatial discretization. In addition, a fourth-order Runge-Kutta scheme with preconditioning was used for time integration. The geometries and boundary conditions of a scramjet combustor operated by DLR, the German aerospace center, were considered. The profiles of the lower wall pressure and axial velocity obtained from a time-averaged solution were compared with experimental results. Also, the mixing efficiency and total pressure recovery factor were provided in order to inspect the performance of the combustor.
NASA Technical Reports Server (NTRS)
Rustan, Pedro L., Jr.
1987-01-01
Lightning data obtained by measuring the surface electromagnetic fields on a CV-580 research aircraft during 48 lightning strikes between 1500 and 18,000 feet in central Florida during the summers of 1984 and 1985, and nuclear electromagnetic pulse (NEMP) data obtained by surface electromagnetic field measurements using a 1:74 CV-580 scale model, are presented. From one lightning event, maximum values of 3750 T/s for the time rate of change of the surface magnetic flux density, and 4.7 kA for the peak current, were obtained. From the simulated NEMP test, maximum values of 40,000 T/s for the time rate of change of the surface magnetic flux density, and 90 A/sq m for the total normal current density, were found. The data have application to the development of a military aircraft lightning/NEMP standard.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langer, Steven H.; Karlin, Ian; Marinak, Marty M.
HYDRA is used to simulate a variety of experiments carried out at the National Ignition Facility (NIF) [4] and other high energy density physics facilities. HYDRA has packages to simulate radiation transfer, atomic physics, hydrodynamics, laser propagation, and a number of other physics effects. HYDRA has over one million lines of code and includes both MPI and thread-level (OpenMP and pthreads) parallelism. This paper measures the performance characteristics of HYDRA using hardware counters on an IBM Blue Gene/Q system. We report key ratios such as bytes per instruction and memory bandwidth for several different physics packages. The total number of bytes read and written per time step is also reported. We show that none of the packages which use significant time are memory bandwidth limited on a Blue Gene/Q. HYDRA currently issues very few SIMD instructions. The pressure on memory bandwidth will increase if high levels of SIMD instructions can be achieved.
NASA Astrophysics Data System (ADS)
Mazurkiewicz, Karolina; Skotnicki, Marcin
2018-02-01
The paper presents the results of an analysis of the influence of the maximum intensity (peak) location in the synthetic hyetograph and rainfall duration on the maximum outflow from an urban catchment. For the calculations, Chicago hyetographs with durations from 15 to 180 minutes and peak locations between 20% and 50% of the total rainfall duration were designed. Runoff simulation was performed using the SWMM5 program for three models of urban catchments with areas from 0.9 km2 to 6.7 km2. It was found that shifting the rainfall peak later increases the maximum outflow by up to 17%. For a given catchment the greatest maximum outflow is generated by the rainfall whose time to peak corresponds to the flow time through the catchment. The presented results may be useful for choosing the rainfall parameters for storm sewer system modeling.
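A minimal surrogate for the experiment — hypothetical parameters, a triangular design storm in place of the Chicago hyetograph, and a linear reservoir in place of SWMM5 — illustrates why a later peak tends to raise the maximum outflow:

```python
def triangular_hyetograph(depth, duration, r, dt=1.0):
    """Single-peak design storm (triangular in time): total rainfall
    `depth`, `duration` in minutes, peak at fraction r of the duration.
    A simple stand-in for the Chicago hyetograph."""
    peak = 2.0 * depth / duration            # triangle area equals depth
    t_peak = r * duration
    series, t = [], dt / 2
    while t < duration:
        if t <= t_peak:
            i = peak * t / t_peak
        else:
            i = peak * (duration - t) / (duration - t_peak)
        series.append(i * dt)                # rainfall depth in this step
        t += dt
    return series

def peak_outflow(rain, k=30.0, dt=1.0):
    """Route rainfall through a linear reservoir (storage S = k * Q),
    a minimal surrogate for the runoff model; return the peak outflow."""
    q, q_max = 0.0, 0.0
    for p in rain:
        q += dt * (p / dt - q) / k           # dQ/dt = (P - Q) / k
        q_max = max(q_max, q)
    return q_max

peaks = {r: peak_outflow(triangular_hyetograph(20.0, 120.0, r))
         for r in (0.2, 0.35, 0.5)}
# the later the peak, the more rain has already filled storage when
# the peak intensity arrives, so the maximum outflow increases
assert peaks[0.2] < peaks[0.35] < peaks[0.5]
```

The storage constant k plays the role of the catchment's response time; matching the time to peak against it reproduces, qualitatively, the sensitivity reported above.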
Transport and discrete particle noise in gyrokinetic simulations
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Lee, W. W.
2006-10-01
We present results from our recent investigations regarding the effects of discrete particle noise on the long-time behavior and transport properties of gyrokinetic particle-in-cell simulations. It is found that the amplitude of nonlinearly saturated drift waves is unaffected by discreteness-induced noise in plasmas whose behavior is dominated by a single mode in the saturated state. We further show that the scaling of this noise amplitude with particle count is correctly predicted by the fluctuation-dissipation theorem, even though the drift waves have driven the plasma from thermal equilibrium. We also find that the long-term behavior of the saturated system is unaffected by discreteness-induced noise even when multiple modes are included. Additional work utilizing a code with both total-f and δf capabilities is also presented, as part of our efforts to better understand the long-time balance between entropy production, collisional dissipation, and particle/heat flux in gyrokinetic plasmas.
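The basic scaling of discreteness noise with particle count (amplitude ∝ 1/√N for uncorrelated markers, the thermal level referenced by the fluctuation-dissipation argument) can be illustrated with a simple density-deposition sketch (not the gyrokinetic code itself; grid size and counts are made up):

```python
import numpy as np

def density_noise(n_particles, n_cells=64, seed=3):
    """Relative fluctuation of the deposited density for a uniform
    plasma represented by a finite number of random markers
    (nearest-grid-point deposition on a 1D grid)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n_particles)
    counts, _ = np.histogram(x, bins=n_cells, range=(0.0, 1.0))
    density = counts * n_cells / n_particles   # normalized, mean = 1
    return density.std()

# a 100x increase in particle count lowers the noise amplitude ~10x
ratio = density_noise(10_000) / density_noise(1_000_000)
assert 5.0 < ratio < 20.0
```

Measuring how a simulation's saturated fluctuation level scales with N, as in this sketch, is one way to separate physical drift-wave amplitudes from discreteness-induced noise.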
NASA Astrophysics Data System (ADS)
Avelino, P. P.; Bazeia, D.; Losano, L.; Menezes, J.; de Oliveira, B. F.
2018-02-01
Stochastic simulations of cyclic three-species spatial predator-prey models are usually performed in square lattices with nearest-neighbour interactions starting from random initial conditions. In this letter we describe the results of off-lattice Lotka-Volterra stochastic simulations, showing that the emergence of spiral patterns does occur for sufficiently high values of the (conserved) total density of individuals. We also investigate the dynamics in our simulations, finding an empirical relation characterizing the dependence of the characteristic peak frequency and amplitude on the total density. Finally, we study the impact of the total density on the extinction probability, showing how a low population density may jeopardize biodiversity.
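A well-mixed (mean-field) analogue of the cyclic three-species model can be sketched in a few lines; like the zero-sum Lotka-Volterra dynamics above it conserves the total number of individuals, though it omits the off-lattice spatial structure responsible for the spiral patterns (population sizes and rates here are made up):

```python
import random

def cyclic_lv(n_each=1000, steps=200_000, seed=4):
    """Well-mixed stochastic rock-paper-scissors: species 0 preys on 1,
    1 on 2, and 2 on 0; a successful predation converts the prey into
    the predator's species, so the total population is conserved."""
    rng = random.Random(seed)
    counts = [n_each, n_each, n_each]
    total = 3 * n_each
    for _ in range(steps):
        i = rng.choices((0, 1, 2), weights=counts)[0]   # pick a predator
        prey = (i + 1) % 3
        if rng.random() < counts[prey] / total:          # meet the prey
            counts[prey] -= 1
            counts[i] += 1
    return counts

counts = cyclic_lv()
assert sum(counts) == 3000         # total density is conserved exactly
assert all(c > 0 for c in counts)  # coexistence persists in this run
```

In finite well-mixed populations the oscillations eventually drive one species extinct; the spatial off-lattice interactions studied above are what stabilize long-lived coexistence through spiral patterns.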
Giant Impacts on Earth-Like Worlds
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-05-01
Earth has experienced a large number of impacts, from the cratering events that may have caused mass extinctions to the enormous impact believed to have formed the Moon. A new study examines whether our planet's impact history is typical for Earth-like worlds. N-Body Challenges: Timeline placing the authors' simulations in context of the history of our solar system (click for a closer look). [Quintana et al. 2016] The final stages of terrestrial planet formation are thought to be dominated by giant impacts of bodies in the protoplanetary disk. During this stage, protoplanets smash into one another and accrete, greatly influencing the growth, composition, and habitability of the final planets. There are two major challenges when simulating this N-body planet formation. The first is fragmentation: since computational time scales as N^2, simulating lots of bodies that split into many more bodies is very computationally intensive. For this reason, fragmentation is usually ignored; simulations instead assume perfect accretion during collisions. Total number of bodies remaining within the authors' simulations over time, with fragmentation included (grey) and ignored (red). Both simulations result in the same final number of bodies, but the ones that include fragmentation take more time to reach that final number. [Quintana et al. 2016] The second challenge is that many-body systems are chaotic, which means it's necessary to do a large number of simulations to make statistical statements about outcomes. Adding Fragmentation: A team of scientists led by Elisa Quintana (NASA NPP Senior Fellow at the Ames Research Center) has recently addressed these challenges by modeling inner-planet formation using a code that does include fragmentation.
The team ran 140 simulations with and 140 without the effects of fragmentation using similar initial conditions to understand how including fragmentation affects the outcome.Quintana and collaborators then used the fragmentation-inclusive simulations to examine the collisional histories of Earth-like planets that form. Their goal is to understand if our solar systems formation and evolution is typical or unique.How Common Are Giant Impacts?Histogram of the total number of giant impacts received by the 164 Earth-like worlds produced in the authors fragmentation-inclusive simulations. [Quintana et al. 2016]The authors find that including fragmentation does not affect the final number of planets that are formed in the simulation (an average of 34 in each system, consistent with our solar systems terrestrial planet count). But when fragmentation is included, fewer collisions end in merger which results in typical accretion timescales roughly doubling. So the effects of fragmentation influence the collisional history of the system and the length of time needed for the final system to form.Examining the 164 Earth-analogs produced in the fragmentation-inclusive simulations, Quintana and collaborators find that impacts large enough to completely strip a planets atmosphere are rare; fewer than 1% of the Earth-like worlds experienced this.But giant impacts that are able to strip ~50% of an Earth-analogs atmosphere roughly the energy of the giant impact thought to have formed our Moon are more common. Almost all of the authors Earth-analogs experienced at least 1 giant impact of this size in the 2-Gyr simulation, and the average Earth-like world experienced ~3 such impacts.These results suggest that our planets impact history with the Moon-forming impact likely being the last giant impact Earth experienced is fairly typical for Earth-like worlds. 
The outcomes also indicate that smaller impacts that are still potentially life-threatening are much more common than bulk atmospheric removal. Higher-resolution simulations could be used to examine such smaller impacts.CitationElisa V. Quintana et al 2016 ApJ 821 126. doi:10.3847/0004-637X/821/2/126
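The N^2 cost mentioned above comes from the direct-summation force loop at the heart of such N-body codes: every body feels every other body, so adding fragments multiplies the work. A minimal sketch (SI units; the softening constant is an illustrative choice):

```python
import math

G = 6.674e-11  # gravitational constant, SI units

def accelerations(positions, masses):
    """Direct-summation gravitational accelerations: the O(N^2) pairwise
    loop whose cost is why adding fragments (more bodies) is so expensive
    in N-body planet-formation codes."""
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r = math.sqrt(sum(d * d for d in dx)) + 1e-9  # tiny softening
            for k in range(3):
                acc[i][k] += G * masses[j] * dx[k] / r ** 3
    return acc
```

Doubling the number of bodies quadruples the work of this loop, which is why fragmentation-inclusive runs like Quintana et al.'s are computationally demanding.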
NASA Astrophysics Data System (ADS)
Angulo-Martinez, Marta; Alastrué, Juan; Moret-Fernández, David; Beguería, Santiago; López, Mariví; Navas, Ana
2017-04-01
Rainfall simulation experiments were carried out to study soil crust formation and its relation to the soil infiltration parameters sorptivity (S) and hydraulic conductivity (K) on four common agricultural soils with contrasting properties: Cambisol, Gypsisol, Solonchak, and Solonetz. Three different rainfall simulations, each replicated three times, were performed over the soils. Prior to the rainfall simulations, all soils were mechanically tilled with a rototiller to create similar soil surface conditions and homogeneous soils. Rainfall parameters were monitored in real time by a Thies Laser Precipitation Monitor, allowing a complete characterization of simulated rainfall microphysics (drop size and velocity distributions) and integrated variables (accumulated rainfall, intensity and kinetic energy). Once the soils had dried after the simulations, soil penetration resistance was measured and the soil hydraulic parameters S and K were estimated using the disc infiltrometry technique. There was little variation in rainfall parameters among simulations. Mean intensity and mean median drop diameter (D50) were 26.5 mm h-1 and 0.43 mm in simulation 1 (0.5 bar), 40.5 mm h-1 and 0.54 mm in simulation 2 (0.8 bar), and 41.1 mm h-1 and 0.56 mm in simulation 3 (1.2 bar). Crust formation by soil was explained by D50 and subsequently by the total precipitation amount and the percentage of silt and clay in the soil, with Cambisol and Gypsisol being the soils that showed the largest increase in penetration resistance under simulation. All soils showed similar S values across simulations, which were explained by rainfall intensity. The four soils showed different patterns of K, explained by the combined effect of D50 and intensity together with soil physico-chemical properties. This study highlights the importance of monitoring all precipitation parameters to determine their effect on different soil processes.
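Disc-infiltrometer estimates of S and K are commonly interpreted through Philip's two-term infiltration equation, I(t) = S*sqrt(t) + K*t. A minimal sketch with hypothetical parameter values (not taken from the study):

```python
import math

def philip_infiltration(sorptivity, conductivity, t):
    """Cumulative infiltration I(t) = S*sqrt(t) + K*t (Philip's two-term
    equation), a common basis for estimating sorptivity S and hydraulic
    conductivity K from disc-infiltrometer data."""
    return sorptivity * math.sqrt(t) + conductivity * t

# hypothetical parameters: S in mm h^-0.5, K in mm h^-1; t in hours
curve = [philip_infiltration(12.0, 8.0, t / 10) for t in range(11)]
```

The sorptivity term dominates early infiltration and the conductivity term dominates at long times, which is how the two parameters can be separated from a single measured curve.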
Photonic-Doppler-Velocimetry, Paraxial-Scalar Diffraction Theory and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ambrose, W. P.
2015-07-20
In this report I describe current progress on a paraxial, scalar-field theory suitable for simulating what is measured in Photonic Doppler Velocimetry (PDV) experiments in three dimensions. I have introduced a number of approximations in this work in order to bring the total computation time for one experiment down to around 20 hours. My goals were: to develop an approximate method of calculating the peak frequency in a spectral sideband at an instant of time based on an optical diffraction theory for a moving target, to compare the ‘measured’ velocity to the ‘input’ velocity to gain insights into how and to what precision PDV measures the component of the mass velocity along the optical axis, and to investigate the effects of small amounts of roughness on the measured velocity. This report illustrates the progress I have made in describing how to perform such calculations with a full three dimensional picture including tilted target, tilted mass velocity (not necessarily in the same direction), and small amounts of surface roughness. With the method established for a calculation at one instant of time, measured velocities can be simulated for a sequence of times, similar to the process of sampling velocities in experiments. Improvements in these methods are certainly possible at hugely increased computational cost. I am hopeful that readers appreciate the insights possible at the current level of approximation.
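The full paraxial-scalar treatment is beyond a short sketch, but the zeroth-order relation it refines, the homodyne Doppler mapping v = f*lambda/2 from a sideband peak frequency to line-of-sight velocity, is simple. The 1550 nm wavelength below is the usual telecom choice for PDV and is an assumption here:

```python
def pdv_velocity(beat_frequency_hz, wavelength_m=1550e-9):
    """Line-of-sight velocity from a PDV spectral-sideband peak frequency,
    via the homodyne Doppler relation f_beat = 2 * v / lambda."""
    return beat_frequency_hz * wavelength_m / 2.0

# a 1 GHz beat at 1550 nm corresponds to 775 m/s along the optical axis
v = pdv_velocity(1.0e9)
```

The report's diffraction theory asks how tilt and roughness perturb the measured peak frequency away from this idealized mapping.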
Measurement-noise maximum as a signature of a phase transition.
Chen, Zhi; Yu, Clare C
2007-02-02
We propose that a maximum in measurement noise can be used as a signature of a phase transition. As an example, we study the energy and magnetization noise spectra associated with first- and second-order phase transitions by using Monte Carlo simulations of the Ising model and 5-state Potts model in two dimensions. For a finite size system, the total noise power and the low frequency white noise S(f
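A minimal sketch of this kind of calculation, Metropolis sampling of the 2D Ising model followed by a direct power-spectrum estimate of the energy noise, is shown below; the lattice size, temperature, and plain-DFT estimator are illustrative choices, not the authors' setup:

```python
import random
import math
import cmath

def ising_energy_series(L=8, T=2.5, sweeps=256, seed=7):
    """Metropolis simulation of the 2D Ising model (J = 1, periodic
    boundaries); returns the total-energy time series, one sample per sweep."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    def energy():
        e = 0
        for i in range(L):
            for j in range(L):
                e -= s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
        return e
    series = []
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nb
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
        series.append(energy())
    return series

def noise_spectrum(x):
    """One-sided power spectrum S(f) of a mean-subtracted time series via a
    direct DFT; the zero-frequency bin is dropped."""
    n = len(x)
    mean = sum(x) / n
    y = [v - mean for v in x]
    return [abs(sum(y[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n))) ** 2 / n
            for f in range(1, n // 2)]

spectrum = noise_spectrum(ising_energy_series())
```

Repeating such runs while scanning temperature through the critical point is how a noise-power maximum can be located as the transition signature.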
Weng, H Y; Yadav, S; Olynk Widmar, N J; Croney, C; Ash, M; Cooper, M
2017-03-01
A stochastic risk model was developed to estimate the time elapsed before overcrowding (TOC) or feed interruption (TFI) emerged on the swine premises under movement restrictions during a classical swine fever (CSF) outbreak in Indiana, USA. Nursery (19 to 65 days of age) and grow-to-finish (40 to 165 days of age) pork production operations were modelled separately. Overcrowding was defined as the total weight of pigs on premises exceeding 100% to 115% of the maximum capacity of the premises, which was computed as the total weight of the pigs at harvest/transition age. Algorithms were developed to estimate age-specific weight of the pigs on premises and to compare the daily total weight of the pigs with the threshold weight defining overcrowding to flag the time when the total weight exceeded the threshold (i.e. when overcrowding occurred). To estimate TFI, an algorithm was constructed to model a swine producer's decision to discontinue feed supply by incorporating the assumptions that a longer estimated epidemic duration, a longer time interval between the age of pigs at the onset of the outbreak and the harvest/transition age, or a longer progression of an ongoing outbreak would increase the probability of a producer's decision to discontinue the feed supply. Adverse animal welfare conditions were modelled to emerge shortly after an interruption of feed supply. Simulations were run with 100 000 iterations each for a 365-day period. Overcrowding occurred in all simulated iterations, and feed interruption occurred in 30% of the iterations. The median (5th and 95th percentiles) TOC was 24 days (10, 43) in nursery operations and 78 days (26, 134) in grow-to-finish operations. Most feed interruptions, if they emerged, occurred within 15 days of an outbreak. The median (5th and 95th percentiles) time at which either overcrowding or feed interruption emerged was 19 days (4, 42) in nursery and 57 days (4, 130) in grow-to-finish operations. 
The study findings suggest that overcrowding and feed interruption could emerge early during a CSF outbreak among swine premises under movement restrictions. The outputs derived from the risk model could be used to estimate and evaluate associated mitigation strategies for alleviating adverse animal welfare conditions resulting from movement restrictions.
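A stripped-down sketch can illustrate the structure of the TOC algorithm; the linear growth rate, age ranges, and 110% threshold below are hypothetical stand-ins for the model's detailed age-specific weight curves:

```python
import random
import statistics

def time_to_overcrowding(n_pigs=100, start_age=40, harvest_age=165,
                         growth=0.85, threshold=1.10, iters=100, seed=3):
    """Stochastic sketch of the time-to-overcrowding (TOC) calculation:
    pigs on a premises under movement restriction keep growing (here a
    noisy linear gain of `growth` kg/day, an assumed simplification), and
    TOC is the first day on which total weight exceeds `threshold` times
    the premises capacity (total weight if all pigs were at harvest age)."""
    rng = random.Random(seed)
    capacity = n_pigs * growth * harvest_age
    tocs = []
    for _ in range(iters):
        ages = [rng.uniform(start_age, harvest_age) for _ in range(n_pigs)]
        weights = [growth * a for a in ages]
        day = 0
        while sum(weights) <= threshold * capacity:
            day += 1
            weights = [w + growth * rng.uniform(0.8, 1.2) for w in weights]
        tocs.append(day)
    return statistics.median(tocs)

median_toc = time_to_overcrowding()
```

Running many iterations and reporting percentiles of the resulting TOC distribution mirrors how the study summarizes its 100 000-iteration output.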
Relationship between Organic Carbon and Opportunistic Pathogens in Simulated Glass Water Heaters.
Williams, Krista; Pruden, Amy; Falkinham, Joseph O; Edwards, Marc
2015-06-09
Controlling organic carbon levels in municipal water has been hypothesized to limit downstream growth of bacteria and opportunistic pathogens in premise plumbing (OPPPs). Here, the relationships between influent organic carbon (0-15,000 µg ozonated fulvic acid/L) and the number of total bacteria [16S rRNA genes and heterotrophic plate counts (HPCs)] and a wide range of OPPPs (gene copy numbers of Acanthamoeba polyphaga, Vermamoeba vermiformis, Legionella pneumophila, and Mycobacterium avium) were examined in the bulk water of 120-mL simulated glass water heaters (SGWHs). The SGWHs were operated at 32-37 °C, which is representative of conditions encountered at the bottom of electric water heaters, with water changes of 80% three times per week to simulate low use. This design presented advantages of controlled and replicated (triplicate) conditions and avoided other potential limitations to OPPP growth in order to isolate the variable of organic carbon. Over seventeen months, strong correlations were observed between total organic carbon (TOC) and both 16S rRNA gene copy numbers and HPC counts (avg. R2 > 0.89). Although M. avium gene copies were occasionally correlated with TOC (avg. R2 = 0.82 to 0.97, for 2 out of 4 time points) and over a limited TOC range (0-1000 µg/L), no other correlations were identified between other OPPPs and added TOC. These results suggest that reducing organic carbon in distributed water is not adequate as a sole strategy for controlling OPPPs, although it may have promise in conjunction with other approaches.
Human Factors in Aviation Maintenance. Phase 3. Volume 1. Progress Report
1993-08-01
As a subcontractor for Galaxy Scientific, Dr. Colin Drury at the State University of New York at Buffalo is conducting a substantial research program...taxonomy. In addition, however, Dr. Drury has developed a simulated NDI task, using a SUN workstation, that incorporates the physical aspects and...to both total task time and to the decision criterion used. Drury clearly feels, with some justification, that intensive investigation of individual
Agent Based Modeling and Simulation Framework for Supply Chain Risk Management
2012-03-01
Christopher and Peck 2004) macroeconomic, policy, competition, and resource (Ghoshal 1987) value chain, operational, event, and recurring (Shi 2004)...clustering algorithms in agent logic to protect company privacy (da Silva et al. 2006), aggregation of domain context in agent data analysis logic (Xiang...Operational Availability (OA) for FMC and PMC. 75 Mission Capable (MICAP) Hours is the measure of total time (in a month) consumable or reparable
NASA Astrophysics Data System (ADS)
Lembège, Bertrand; Yang, Zhongwei
2018-06-01
The impact of the nonstationarity of the heliospheric termination shock in the presence of pickup ions (PUIs) on the energy partition between different plasma components is analyzed self-consistently by using a one-dimensional particle-in-cell simulation code. Solar wind ions (SWIs) and PUIs are introduced as Maxwellian and shell distributions, respectively. For a fixed time, (a) with a percentage of 25% PUIs, a large part of the downstream thermal pressure is carried by reflected PUIs, in agreement with previous hybrid simulations; (b) the total downstream distribution includes three main components: (i) a low-energy component dominated by directly transmitted (DT) SWIs, (ii) a high-energy component dominated by reflected PUIs, and (iii) an intermediate-energy component dominated by reflected SWIs and DT PUIs. Moreover, results show that the front nonstationarity (self-reformation) persists even in the presence of 25% PUIs and has some impact on both SWIs and PUIs: (a) the rate of reflected ions suffers some time fluctuations for both SWIs and PUIs; (b) the relative percentage of downstream thermal pressure transferred to PUIs and SWIs also suffers some time fluctuations, but depends on the relative distance from the front; (c) the three components within the total downstream heliosheath distribution persist in time, but the contributions of the ion subpopulations to the low- and intermediate-energy components are redistributed by the front nonstationarity. Our results help clarify the respective roles of SWIs and PUIs as a viable production source of energetic neutral atoms and are compared with previous results.
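The shell distribution used to initialize the PUIs can be sampled by drawing uniformly random directions at a fixed speed. A minimal sketch (the particle count and speed in the usage line are arbitrary, and real PIC codes work in normalized units):

```python
import random
import math

def shell_distribution(n, speed, seed=0):
    """Sample an isotropic shell velocity distribution: every particle has
    the same speed and a uniformly random direction, a standard way to
    initialize pickup ions in kinetic simulations."""
    rng = random.Random(seed)
    velocities = []
    for _ in range(n):
        u = rng.uniform(-1.0, 1.0)          # cos(theta) uniform gives isotropy
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - u * u)
        velocities.append((speed * s * math.cos(phi),
                           speed * s * math.sin(phi),
                           speed * u))
    return velocities

pui_velocities = shell_distribution(500, 3.0)
```

Drawing cos(theta) uniformly (rather than theta itself) is what makes the directions uniform over the sphere.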
The Flipped Classroom in Emergency Medicine Using Online Videos with Interpolated Questions.
Rose, Emily; Claudius, Ilene; Tabatabai, Ramin; Kearl, Liza; Behar, Solomon; Jhun, Paul
2016-09-01
Utilizing the flipped classroom is an opportunity for a more engaged classroom session. This educational approach is theorized to improve learner engagement and retention and allows for more complex learning during class. No studies to date have been conducted in the postgraduate medical education setting investigating the effects of interactive, interpolated questions in preclassroom online video material. We created a flipped classroom for core pediatric emergency medicine (PEM) topics using recorded online video lectures for preclassroom material and interactive simulations for the in-classroom session. Lectures were filmed and edited to include integrated questions on an online platform called Zaption. One-half of the residents viewed the lectures uninterrupted (Group A) and the remainder (Group B) viewed them with integrated questions (2-6 per 5-15-min segment). Residents were expected to view the lectures prior to in-class time (total viewing time of approximately 2½ h). The 2½-h in-class session included four simulation and three procedure stations, with six PEM faculty available for higher-level management discussion throughout the stations. Total educational time of home preparation and in-class time was approximately 5 h. Residents performed better on the posttest than on the pretest, and their satisfaction with this educational innovation was high. In 2014, performance on the posttest was similar between the two groups. However, in 2015, the group with integrated questions performed better on the posttest. An online format combined with face-to-face interaction is an effective educational model for teaching core PEM topics. Copyright © 2016 Elsevier Inc. All rights reserved.
Quantitative comparison between crowd models for evacuation planning and evaluation
NASA Astrophysics Data System (ADS)
Viswanathan, Vaisagh; Lee, Chong Eu; Lees, Michael Harold; Cheong, Siew Ann; Sloot, Peter M. A.
2014-02-01
Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we describe a procedure to quantitatively compare different crowd models, or models against real-world data. We simulated three models: (1) the lattice gas model, (2) the social force model, and (3) the RVO2 model, and obtained the distributions of six observables: (1) evacuation time, (2) zoned evacuation time, (3) passage density, (4) total distance traveled, (5) inconvenience, and (6) flow rate. We then used the DISTATIS procedure to compute the compromise matrix of statistical distances between the three models. Projecting the three models onto the first two principal components of the compromise matrix, we find the lattice gas and RVO2 models are similar in terms of evacuation time, passage density, and flow rate, whereas the social force and RVO2 models are similar in terms of total distance traveled. Most importantly, we find the zoned evacuation times of the three models to be very different from each other. Thus we propose to use this variable, if it can be measured, as the key test between different models, and also between models and the real world. Finally, we compared the model flow rates against the flow rate of an emergency evacuation during the May 2008 Sichuan earthquake, and found that the social force model agrees best with this real data.
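The DISTATIS compromise analysis itself is involved, but its first ingredient, a statistical distance between observable distributions, can be sketched with a two-sample Kolmogorov-Smirnov statistic; the model names and Gaussian samples below are purely illustrative stand-ins, not the paper's data:

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    def cdf(s, x):
        # fraction of the sorted sample s that is <= x (binary search)
        lo, hi = 0, len(s)
        while lo < hi:
            mid = (lo + hi) // 2
            if s[mid] <= x:
                lo = mid + 1
            else:
                hi = mid
        return lo / len(s)
    return max(abs(cdf(a, v) - cdf(b, v)) for v in set(a) | set(b))

rng = random.Random(0)
# hypothetical evacuation-time samples (seconds) from three crowd models
samples = {
    "lattice_gas": [rng.gauss(95, 10) for _ in range(300)],
    "social_force": [rng.gauss(120, 15) for _ in range(300)],
    "rvo2": [rng.gauss(97, 12) for _ in range(300)],
}
names = list(samples)
dist = {(m, n): ks_statistic(samples[m], samples[n])
        for m in names for n in names}
```

Such a distance matrix, computed per observable, is the kind of input a compromise analysis like DISTATIS then combines and projects.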
Lee Chang, Alfredo; Dym, Andrew A; Venegas-Borsellino, Carla; Bangar, Maneesha; Kazzi, Massoud; Lisenenkov, Dmitry; Qadir, Nida; Keene, Adam; Eisen, Lewis Ari
2017-04-01
Situation awareness has been defined as the perception of the elements in the environment within volumes of time and space, the comprehension of their meaning, and the projection of their status in the near future. Intensivists often make time-sensitive critical decisions, and loss of situation awareness can lead to errors. It has been shown that simulation-based training is superior to lecture-based training for some critical scenarios. Because the methods of training to improve situation awareness have not been well studied in the medical field, we compared the impact of simulation vs. lecture training using the Situation Awareness Global Assessment Technique (SAGAT) score. To identify an effective method for teaching situation awareness. We randomly assigned 17 critical care fellows to simulation vs. lecture training. Training consisted of eight cases on airway management, including topics such as elevated intracranial pressure, difficult airway, arrhythmia, and shock. During the testing scenario, at random times between 4 and 6 minutes into the simulation, the scenario was frozen, and the screens were blanked. Respondents then completed the 28 questions on the SAGAT scale. Sample items were categorized as Perception, Projection, and Comprehension of the situation. Results were analyzed using SPSS Version 21. Eight fellows from the simulation group and nine from the lecture group underwent simulation testing. Sixty-four SAGAT scores were recorded for the simulation group and 48 scores were recorded for the lecture group. The mean simulation vs. lecture group SAGAT score was 64.3 ± 10.1 (SD) vs. 59.7 ± 10.8 (SD) (P = 0.02). There was also a difference in the median Perception ability between the simulation vs. lecture groups (61.1 vs. 55.5, P = 0.01). There was no difference in the median Projection and Comprehension scores between the two groups (50.0 vs. 50.0, P = 0.92, and 83.3 vs. 83.3, P = 0.27). 
We found a significant, albeit modest, difference between simulation training and lecture training on the total SAGAT score of situation awareness mainly because of the improvement in perception ability. Simulation may be a superior method of teaching situation awareness.
Mastoidectomy performance assessment of virtual simulation training using final-product analysis.
Andersen, Steven A W; Cayé-Thomasen, Per; Sørensen, Mads S
2015-02-01
The future development of integrated automatic assessment in temporal bone virtual surgical simulators calls for validation against currently established assessment tools. This study aimed to explore the relationship between mastoidectomy final-product performance assessment in virtual simulation and traditional dissection training. Prospective trial with blinding. A total of 34 novice residents performed a mastoidectomy on the Visible Ear Simulator and on a cadaveric temporal bone. Two blinded senior otologists assessed the final-product performance using a modified Welling scale. The simulator gathered basic metrics on time, steps, and volumes in relation to the on-screen tutorial and collisions with vital structures. Substantial inter-rater reliability (kappa = 0.77) for virtual simulation and moderate inter-rater reliability (kappa = 0.59) for dissection final-product assessment was found. The simulation and dissection performance scores had significant correlation (P = .014). None of the basic simulator metrics correlated significantly with the final-product score except for number of steps completed in the simulator. A modified version of a validated final-product performance assessment tool can be used to assess mastoidectomy on virtual temporal bones. Performance assessment of virtual mastoidectomy could potentially save the use of cadaveric temporal bones for more advanced training when a basic level of competency in simulation has been achieved. NA. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
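Inter-rater reliability figures like the kappa values above are typically computed as Cohen's kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch on hypothetical binary item scores from two assessors (the 10-item pass/fail data below is invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c]
                   for c in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# hypothetical pass/fail item scores from two blinded assessors
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
kappa = round(cohens_kappa(a, b), 2)  # -> 0.52
```

By the common rule of thumb, values around 0.6 are "moderate" and around 0.8 "substantial", matching the labels used in the abstract.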
Temporal evolution modeling of hydraulic and water quality performance of permeable pavements
NASA Astrophysics Data System (ADS)
Huang, Jian; He, Jianxun; Valeo, Caterina; Chu, Angus
2016-02-01
A mathematical model for predicting hydraulic and water quality performance in both the short and long term is proposed, based on field measurements, for three types of permeable pavements: porous asphalt (PA), porous concrete (PC), and permeable interlocking concrete pavers (PICP). The model was applied to three field-scale test sites in Calgary, Alberta, Canada. Model performance was assessed in terms of hydraulic parameters, including time to peak, peak flow and water balance, and a water quality variable (the removal rate of total suspended solids, TSS). A total of 20 simulated storm events were used for model calibration and verification. The proposed model can simulate the outflow hydrographs with a coefficient of determination (R2) ranging from 0.762 to 0.907 and a normalized root-mean-square deviation (NRMSD) ranging from 13.78% to 17.83%. Comparison of the time to peak flow, peak flow, runoff volume and TSS removal rates between measured and modeled values in the model verification phase showed a maximum difference of 11%. The results demonstrate that the proposed model is capable of capturing the temporal dynamics of pavement performance, and the model therefore has great potential as a practical tool for permeable pavement design and performance assessment.
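The two goodness-of-fit measures quoted above can be computed directly; the hydrograph values below are hypothetical, and the NRMSD here follows one common convention of normalizing by the observed range:

```python
import math

def r_squared(observed, modeled):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - m) ** 2 for o, m in zip(observed, modeled))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def nrmsd(observed, modeled):
    """Root-mean-square deviation normalized by the observed range, in %."""
    rmsd = math.sqrt(sum((o - m) ** 2 for o, m in zip(observed, modeled))
                     / len(observed))
    return 100.0 * rmsd / (max(observed) - min(observed))

# hypothetical measured vs. modeled outflow hydrograph (L/s) for one storm
obs = [0.0, 0.4, 1.6, 2.9, 2.2, 1.1, 0.5, 0.2]
mod = [0.0, 0.5, 1.4, 2.7, 2.4, 1.0, 0.4, 0.1]
```

For this invented event the fit metrics fall in the same general range as those reported for the calibrated model.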